RSS Security

Today — 17 May 2021 (Tools)

Eyeballer - Convolutional Neural Network For Analyzing Pentest Screenshots


Eyeballer is meant for large-scope network penetration tests where you need to find "interesting" targets from a huge set of web-based hosts. Go ahead and use your favorite screenshotting tool like normal (EyeWitness or GoWitness) and then run them through Eyeballer to tell you what's likely to contain vulnerabilities, and what isn't.


Example Labels

  • Old-Looking Sites
  • Login Pages
  • Webapp
  • Custom 404's
  • Parked Domains

What the Labels Mean

Old-Looking Sites: Blocky frames, broken CSS, that certain "je ne sais quoi" of a website that looks like it was designed in the early 2000s. You know it when you see it. Old websites aren't just ugly, they're also typically super vulnerable. When you're looking to hack into something, these websites are a gold mine.

Login Pages: Login pages are valuable in a pen test because they indicate that there's additional functionality you don't currently have access to. They also open up a simple follow-up: credential enumeration attacks. You might think you could write a simple heuristic to find login pages, but in practice it's really hard. Modern sites don't just use a simple input tag we can grep for.

Webapp: This tells you that there is a larger set of pages and functionality here that can serve as attack surface. This is in contrast to a simple login page with no other functionality, or a default IIS landing page. This label indicates that there is a web application here to attack.

Custom 404: Modern sites love to have cutesy custom 404 pages with pictures of broken robots or sad-looking dogs. Unfortunately, they also love to return HTTP 200 response codes while they do it. Often the "404" page doesn't even contain the text "404". These pages are typically uninteresting, despite having a lot going on visually, and Eyeballer can help you sift them out.

Parked Domains: Parked domains are websites that look real but aren't valid attack surface. They're stand-in pages, usually devoid of any real functionality, consisting almost entirely of ads, and usually not run by your actual target. They're what you get when the specified domain is wrong or has lapsed. Finding these pages and removing them from scope is really valuable over time.


Setup

Install the required packages with pip:

sudo pip3 install -r requirements.txt

Or if you want GPU support:

sudo pip3 install -r requirements-gpu.txt

NOTE: Setting up a GPU for use with TensorFlow is way beyond the scope of this README. There's hardware compatibility to consider, drivers to install... There's a lot. So you're just going to have to figure this part out on your own if you want a GPU. But at least from a Python package perspective, the above requirements file has you covered.

Pretrained Weights

For the latest pretrained weights, check out the releases here on GitHub.

Training Data

You can find our training data here:

https://www.dropbox.com/s/rpylhiv2g0kokts/eyeballer-3.0.zip?dl=1

There are three things you need from the training data:

  1. images/ folder, containing all the screenshots (resized down to 224x224)
  2. labels.csv that has all the labels
  3. bishop-fox-pretrained-v3.h5 A pretrained weights file you can use right out of the box without training.

Copy all three into the root of the Eyeballer code tree.


Predicting Labels

NOTE: For best results, make sure you screenshot your websites in a native 1.6x aspect ratio (e.g., 1440x900). Eyeballer will automatically scale the image down to the right size for you, but if the aspect ratio is wrong, the image will be squished in a way that hurts prediction performance.
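
A quick way to sanity-check a screenshot folder before prediction is to flag images that stray from that 1.6 aspect ratio. Here is a minimal sketch (not part of Eyeballer) using Pillow; the "screenshots" folder name is hypothetical:

from pathlib import Path
from PIL import Image  # pip install Pillow

# Flag screenshots whose aspect ratio is not roughly 1.6 (e.g. 1440x900) before running Eyeballer.
for png in Path("screenshots").glob("*.png"):  # hypothetical input folder
    with Image.open(png) as img:
        ratio = img.width / img.height
    if abs(ratio - 1.6) > 0.05:
        print(f"{png.name}: aspect ratio {ratio:.2f}, prediction quality may suffer")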

To eyeball some screenshots, just run the "predict" mode:

eyeballer.py --weights YOUR_WEIGHTS.h5 predict YOUR_FILE.png

Or for a whole directory of files:

eyeballer.py --weights YOUR_WEIGHTS.h5 predict PATH_TO/YOUR_FILES/

Eyeballer will spit the results back to you in human readable format (a results.html file so you can browse it easily) and machine readable format (a results.csv file).
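
The column layout of results.csv isn't documented here, so treat the following as a hedged post-processing sketch: it assumes a filename column plus one numeric score per label, and simply keeps rows where any score crosses an arbitrary 0.5 threshold.

import csv

# Hypothetical post-processing of Eyeballer's results.csv; column names are assumptions.
with open("results.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        interesting = {}
        for column, value in row.items():
            try:
                score = float(value)
            except (TypeError, ValueError):
                continue  # skip non-numeric columns such as the filename
            if score >= 0.5:  # arbitrary threshold
                interesting[column] = score
        if interesting:
            print(row.get("filename", "?"), interesting)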


Performance

Eyeballer's performance is measured against an evaluation dataset, which is 20% of the overall screenshots chosen at random. Since these screenshots are never used in training, they can be an effective way to see how well the model is performing. Here are the latest results:

Overall Binary Accuracy 93.52%
All-or-Nothing Accuracy 76.09%

Overall Binary Accuracy is probably what you think of as the model's "accuracy". It's the chance, given any single label, that it is correct.

All-or-Nothing Accuracy is more strict. For this, we consider all of an image's labels and consider it a failure if ANY label is wrong. This accuracy rating is the chance that the model correctly predicts all labels for any given image.
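
To make the two definitions concrete, here is a small sketch (toy arrays, not Eyeballer code) that computes both numbers from binary ground-truth and prediction matrices of shape (images, labels):

import numpy as np

# Toy data: 4 images x 3 labels; real matrices would come from labels.csv and the model output.
y_true = np.array([[1, 0, 0], [0, 1, 0], [1, 1, 0], [0, 0, 1]])
y_pred = np.array([[1, 0, 0], [0, 1, 1], [1, 0, 0], [0, 0, 1]])

correct = (y_true == y_pred)
overall_binary_accuracy = correct.mean()               # chance that any single label is right
all_or_nothing_accuracy = correct.all(axis=1).mean()   # chance that every label of an image is right
print(overall_binary_accuracy, all_or_nothing_accuracy)  # 0.8333..., 0.5 for this toy data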

Label Precision Recall
Custom 404 80.20% 91.01%
Login Page 86.41% 88.47%
Webapp 95.32% 96.83%
Old Looking 91.70% 62.20%
Parked Domain 70.99% 66.43%

For a detailed explanation on Precision vs Recall, check out Wikipedia.


Training

To train a new model, run:

eyeballer.py train

You'll want a machine with a good GPU for this to run in a reasonable amount of time. Setting that up is outside the scope of this readme, however.

This will output a new model file (weights.h5 by default).


Evaluation

You just trained a new model, cool! Let's see how well it performs against some images it's never seen before, across a variety of metrics:

eyeballer.py --weights YOUR_WEIGHTS.h5 evaluate

The output will describe the model's accuracy in both recall and precision for each of the program's labels. (Including "none of the above" as a pseudo-label)



Yesterday — 16 May 2021 (Tools)

DFIR-O365RC - PowerShell Module For Office 365 And Azure AD Log Collection


PowerShell module for Office 365 and Azure AD log collection


Module description

The DFIR-O365RC PowerShell module is a set of functions that allow the DFIR analyst to collect logs relevant for Office 365 Business Email Compromise investigations.

The logs are generated in JSON format and retrieved from two main data sources:

  • Unified Audit Logs
  • Azure AD Logs

The two data sources can be queried from different endpoints:

Data source / Endpoint History Performance Scope Pre-requisites (OS or Azure)
Unified Audit Logs / Exchange Online PowerShell 90 days Poor All Office 365 logs (Azure AD included) None
Unified Audit Logs / Office 365 Management API 7 days Good All Office 365 logs (Azure AD included) Azure App registration
Azure AD Logs / Azure AD PowerShell Preview 30 days Good Azure AD sign-ins and audit events only Windows OS only
Azure AD Logs / MS Graph API 30 days Good Azure AD sign-ins and audit events only None

DFIR-O365RC is a forensic tool; its aim is not to monitor your Office 365 infrastructure in real time. Please use the Office 365 Management API if you want to analyze data in real time with a SIEM.

DFIR-O365RC will fetch data from:

  • Azure AD Logs using the MS Graph API because performance is good, history is 30 days and it works on PowerShell Core.
  • Unified Audit Logs using Exchange online PowerShell despite poor performance, history is 90 days and it works on PowerShell Core.

In case you are also investigating other Azure resources (IaaS, PaaS...), DFIR-O365RC can also fetch data from the Azure Activity logs using the Azure Monitor REST API. History is 90 days and it works on PowerShell Core.

As a result, DFIR-O365RC also works on Linux or macOS, as long as you have PowerShell Core and a browser in order to use device login.


Installation and pre-requisites

Clone the DFIR-O365RC repository. The tool works on PowerShell Desktop and PowerShell Core.

DFIR-O365RC uses Jason Thompson's MSAL.PS and Boe Prox's PoshRSJob modules. To install them, run the following commands:

Install-Module -Name MSAL.PS -RequiredVersion '4.21.0.1'
Install-Module -Name PoshRSJob -RequiredVersion '1.7.4.4'

If MSAL.PS module installation fails with the following message:

WARNING: The specified module ‘MSAL.PS’ with PowerShellGetFormatVersion ‘2.0’ is not supported by the current version of PowerShellGet. Get the latest version of the PowerShellGet module to install this module, ‘MSAL.PS’.

Update PowerShellGet with the following commands:

Install-PackageProvider Nuget -Force
Install-Module -Name PowerShellGet -Force

Once both modules are installed, launch a PowerShell prompt and locate your PowerShell modules path with the following command:

PS> $env:PSModulePath

Copy the DFIR-O365RC directory into one of your module paths, for example on Windows:

  • %USERPROFILE%\Documents\WindowsPowerShell\Modules
  • %ProgramFiles%\WindowsPowerShell\Modules
  • %SYSTEMROOT%\system32\WindowsPowerShell\v1.0\Modules

Modules path examples on Linux:

  • /home/%USERNAME%/.local/share/powershell/Modules
  • /usr/local/share/powershell/Modules
  • /opt/microsoft/powershell/7/Modules

The DFIR-O365RC module is now installed; restart the PowerShell prompt and load the module:

PS> Import-module DFIR-O365RC

Roles and license requirements

The user launching the tool should have the following roles:

  • Microsoft 365 role (portal.microsoft.com): Global reader
  • Exchange Online role (outlook.office365.com/ecp): View-Only Audit Logs

In order to retrieve Azure AD sign-ins logs with the MS Graph API, you need at least one user with an Azure AD Premium P1 license. This license can be purchased at additional cost for a single user and is included in some license plans, such as Microsoft 365 Business Premium for small and medium-sized businesses.

If you also need to retrieve the Azure Activity logs, you need the Log Analytics Reader role for the Azure subscription you are dumping the logs from.


Functions included in the module

The module has 8 functions:

Function name Data Source/History Performance Completeness Details
Get-O365Full Unified audit logs/90 days Poor All unified audit logs A subset of logs per record type can be retrieved. Use only on a small tenant or a short period of time
Get-O365Light Unified audit logs/90 days Good A subset of unified audit logs only Only a subset of operations considered of interest is retrieved.
Get-DefenderforO365 Unified audit logs/90 days Good A subset of unified audit logs only Retrieves Defender for Office 365 related logs. Requires at least an E5 license or a license plan such as Microsoft Defender for Office 365 Plan or cloud app security
Get-AADLogs Azure AD Logs/30 days Good All Azure AD logs Get tenant general information, all Azure sign-ins and audit logs. Azure AD sign-ins logs have more information than Azure AD logs retrieved via Unified audit logs.
Get-AADApps Azure AD Logs/30 days Good A subset of Azure AD logs only Get Azure audit logs related to Azure applications and service principals only. The logs are enriched with application or service principal object information.
Get-AADDevices Azure AD Logs/30 days Good A subset of Azure AD logs only Get Azure audit logs related to Azure AD joined or registered devices only. The logs are enriched with device object information.
Search-O365 Unified audit logs/90 days Depends on the query A subset of unified audit logs only Search for activity related to a particular user, IP address or use the freetext query.
Get-AzRMActivityLogs Azure Activity logs/90 days Good All Azure Activity logs Get all Azure activity logs for a given subscription or on every subscription the account running the function has access to

When querying Unified Audit Logs you are limited to 3 concurrent Exchange Online PowerShell sessions. DFIR-O365RC will try to use all available sessions, so please close any existing session before launching the log collection.

Each function has comment-based help, which you can invoke with the Get-Help cmdlet.

#Display comment based help
PS> Get-help Get-O365Full
#Display comment based help with examples
PS> Get-help Get-O365Full -examples

Each function takes as a parameter a start date and an end date.

In order to retrieve Azure AD audit logs, sign-ins logs from the past 30 days and tenant information launch the following command:

$enddate = get-date
$startdate = $enddate.adddays(-30)
Get-AADLogs -startdate $startdate -enddate $enddate

In order to retrieve enriched Azure AD audit logs related to Azure applications and service principals from the past 30 days launch the following command:

$enddate = get-date
$startdate = $enddate.adddays(-30)
Get-AADApps -startdate $startdate -enddate $enddate

In order to retrieve enriched Azure AD audit logs related to Azure AD joined or registered devices from the past 30 days launch the following command:

$enddate = get-date
$startdate = $enddate.adddays(-30)
Get-AADDevices -startdate $startdate -enddate $enddate

In order to retrieve all unified audit logs considered of interest from the past 30 days, except those related to Azure AD, which were already retrieved by the first command, launch:

$enddate = get-date
$startdate = $enddate.adddays(-30)
Get-O365Light -startdate $startdate -enddate $enddate -Operationsset "AllbutAzureAD"

In order to retrieve all unified audit logs considered of interest in a time window between -90 days and -30 days from now launch the following command:

$enddate = (get-date).adddays(-30)
$startdate = (get-date).adddays(-90)
Get-O365Light -StartDate $startdate -Enddate $enddate -Operationsset All

If mailbox audit is enabled and you also want to retrieve MailboxLogin operations, you can use the dedicated switch; on large tenants, beware of the 50,000 events per day retrieval limit.

Get-O365Light -StartDate $startdate -Enddate $enddate -Operationsset All -MailboxLogin $true

If there are users with Enterprise 5 licenses or if there is a Microsoft Defender for Office 365 Plan you can retrieve Microsoft Defender related logs with the following command:

$enddate = get-date
$startdate = $enddate.adddays(-90)
Get-DefenderforO365 -StartDate $startdate -Enddate $enddate

To retrieve all Exchange Online related records from the unified audit logs between Christmas eve and Boxing day, beware that performance might be poor on a large tenant:

$startdate = get-date "12/24/2020"
$enddate = get-date "12/26/2020"
Get-O365Full -StartDate $startdate -Enddate $enddate -RecordSet ExchangeOnly

You can use the search function to look for IP addresses, activity related to specific users, or to perform a freetext search in the unified audit logs:

$enddate = get-date
$startdate = $enddate.adddays(-90)
#Retrieve events using the Exchange online Powershell AppId
Search-O365 -StartDate $startdate -Enddate $enddate -FreeText "a0c73c16-a7e3-4564-9a95-2bdf47383716"

#Search for events related to the X.X.X.X and Y.Y.Y.Y IP addresses, argument is a string separated by commas.
Search-O365 -StartDate $startdate -Enddate $enddate -IPAddresses "X.X.X.X,Y.Y.Y.Y"

#Retrieve events related to users [email protected] and [email protected] , argument is a system.array object
Search-O365 -StartDate $startdate -Enddate $enddate -UserIds "[email protected]", "[email protected]"

To retrieve all Azure Activity logs the account has access to launch the following command, available subscriptions will be displayed:

$enddate = get-date
$startdate = $enddate.adddays(-90)
Get-AzRMActivityLogs -StartDate $startdate -Enddate $enddate

When using PowerShell Core, the authentication process requires a device code: use the DeviceCode parameter, launch your browser, open the https://microsoft.com/devicelogin URL and enter the code provided by the following message:

PS> Get-O365Light -StartDate $startdate -Enddate $enddate -DeviceCode:$true
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code XXXXXXXX to authenticate.

Files generated

All files generated are in JSON format.

  • Get-AADApps creates a file named AADApps_%FQDN%.json in the azure_ad_apps folder where FQDN is the domain name part of the account used to collect the logs.
  • Get-AADDevices creates a file named AADDevices_%FQDN%.json in the azure_ad_devices folder.
  • Get-AADLogs creates folders named after the current date using the YYYY-MM-DD format in the azure_ad_signin folder, in each directory a file called AADSigninLog_%FQDN%_YYYY-MM-DD_HH-00-00.json is created for Azure AD sign-ins logs. A folder azure_ad_audit is also created and results are dumped in files named AADAuditLog_%FQDN%_YYYY-MM-DD.json for Azure AD audit logs. Finally a folder called azure_ad_tenant is created and the general tenant information written in a file named AADTenant_%FQDN%.json.
  • Get-AzRMActivityLogs creates folders named after the current date using the YYYY-MM-DD format in the azure_rm_activity folder, in each directory a file called AzRM_%FQDN%_%SubscriptionID%_YYYY-MM-DD_HH-00-00.json is created where %SubscriptionID% is the Azure subscription ID. A folder called azure_rm_subscriptions is created and each subscription information written in a file named AzRMsubscriptions_%FQDN%.json.
  • Get-O365Full creates folders named after the current date using the YYYY-MM-DD format in the O365_unified_audit_logs, in each directory a file called UnifiedAuditLog_%FQDN%_YYYY-MM-DD_HH-00-00.json is created.
  • Get-O365Light creates folders named after the current date using the YYYY-MM-DD format in the O365_unified_audit_logs, in each directory a file called UnifiedAuditLog_%FQDN%_YYYY-MM-DD.json is created.
  • Get-DefenderforO365 creates folders named after the current date using the YYYY-MM-DD format in the O365_unified_audit_logs, in each directory a file called UnifiedAuditLog_%FQDN%_YYYY-MM-DD_DefenderforO365.json is created.
  • Search-O365 creates folders named after the current date using the YYYY-MM-DD format in the O365_unified_audit_logs, in each directory a file called UnifiedAuditLog_%FQDN%YYYY-MM-DD%searchtype%.json is created, where searchtype can have the values "Freetext", "IPAddresses" or "UserIds".

Launching the various functions will generate a similar directory structure:

DFIR-O365_Logs
│ Get-AADApps.log
│ Get-AADDevices.log
│ Get-AADLogs.log
│ Get-AzRMActivityLogs.log
│ Get-DefenderforO365.log
│ Get-O365Light.log
│ Search-O365.log
└───azure_ad_apps
│ │ AADApps_%FQDN%.json
└───azure_ad_audit
│ │ AADAuditLog_%FQDN%_YYYY-MM-DD.json
│ │ ...
└───azure_ad_devices
│ │ AADDevices_%FQDN%.json
└───azure_ad_signin
│ │
│ └───YYYY-MM-DD
│ │ AADSigninLog_%FQDN%_YYYY-MM-DD_HH-00-00.json
│ │ ...
└───azure_ad_tenant
│ │ AADTenant_%FQDN%.json
└───azure_rm_activity
│ │
│ └───YYYY-MM-DD
│ │ AzRM_%FQDN%_%SubscriptionID%_YYYY-MM-DD_HH-00-00.json
│ │ ...
└───azure_rm_subscriptions
│ │ AzRMsubscriptions_%FQDN%.json
└───O365_unified_audit_logs
│ │
│ └───YYYY-MM-DD
│ │ UnifiedAuditLog_%FQDN%_YYYY-MM-DD.json
│ │ UnifiedAuditLog_%FQDN%_YYYY-MM-DD_freetext.json
│ │ UnifiedAuditLog_%FQDN%_YYYY-MM-DD_DefenderforO365.json
│ │ UnifiedAuditLog_%FQDN%_YYYY-MM-DD_HH-00-00.json
│ │ ...
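
If you want to post-process the collected JSON outside PowerShell, a minimal sketch along these lines (not part of DFIR-O365RC) walks the DFIR-O365_Logs tree shown above and loads every file, falling back to one-object-per-line parsing where needed (an assumption about the file layout):

import json
from pathlib import Path

# Walk the DFIR-O365_Logs output tree and load every JSON file for further filtering.
root = Path("DFIR-O365_Logs")
records = []
for path in sorted(root.rglob("*.json")):
    with path.open(encoding="utf-8") as fh:
        try:
            data = json.load(fh)
        except json.JSONDecodeError:
            fh.seek(0)
            # Fallback: treat the file as one JSON object per line.
            data = [json.loads(line) for line in fh if line.strip()]
    records.append((path.name, data))
print(f"Loaded {len(records)} files")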



Red-Kube - Red Team K8S Adversary Emulation Based On Kubectl


Red Kube is a collection of kubectl commands written to evaluate the security posture of Kubernetes clusters from the attacker's perspective.

The commands are either passive for data collection and information disclosure or active for performing real actions that affect the cluster.

The commands are mapped to MITRE ATT&CK Tactics to help get a sense of where we have most of our gaps and prioritize our findings.

The current version is wrapped with a python orchestration module to run several commands in one run based on different scenarios or tactics.
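
As a rough illustration of what a passive, collection-style step looks like when driven from Python (this is a sketch, not red-kube's actual code), you can shell out to kubectl and parse its JSON output:

import json
import subprocess

# Passive discovery example: list pods in all namespaces via kubectl (read-only).
result = subprocess.run(
    ["kubectl", "get", "pods", "--all-namespaces", "-o", "json"],
    capture_output=True, text=True, check=True,
)
for pod in json.loads(result.stdout)["items"]:
    meta = pod["metadata"]
    print(meta["namespace"], meta["name"])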

Please use with care, as some commands are active: they deploy new containers or change the role-based access control configuration.

Warning: You should NOT use red-kube commands on a Kubernetes cluster that you don't own!


Prerequisites:

python3 requirements

pip3 install -r requirements.txt

kubectl (Ubuntu / Debian)

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl

kubectl (Red Hat based)

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubectl

jq

sudo apt-get update -y
sudo apt-get install -y jq

Usage
usage: python3 main.py [-h] [--mode active/passive/all] [--tactic TACTIC_NAME] [--show_tactics] [--cleanup]

required arguments:
--mode run kubectl commands which are active / passive / all modes
--tactic choose tactic

other arguments:
-h --help show this help message and exit
--show_tactics show all tactics

Commands by MITRE ATT&CK Tactics
Tactic Count
Reconnaissance 2
Initial Access 0
Execution 0
Persistence 2
Privilege Escalation 4
Defense Evasion 1
Credential Access 8
Discovery 15
Lateral Movement 0
Collection 1
Command and Control 2
Exfiltration 1
Impact 0

Webinars

1 First Workshop with Lab01 and Lab02 Webinar Link

2 Second Workshop with Lab03 and Lab04 Webinar Link


Presentations

BlackHat Asia 2021


Q&A

Why choose kubectl and not the Kubernetes API in Python?

When performing red team assessments and adversary emulations, the quick manipulations and tweaks for the tools used in the arsenal are critical.

The ability to run such assessments and combine the k8s attack techniques based on kubectl and powerful Linux commands reduces the time and effort significantly.


Contact Us

This research was held by Lightspin's Security Research Team. For more information, contact us at [email protected].



Before yesterday (Tools)

CIMplant - C# Port Of WMImplant Which Uses Either CIM Or WMI To Query Remote Systems


C# port of WMImplant which uses either CIM or WMI to query remote systems. It can use provided credentials or the current user's session.

Note: Some commands will use PowerShell in combination with WMI, denoted with ** in the --show-commands command.


Introduction

CIMplant is a C# rewrite and expansion of @christruncer's WMImplant. It allows you to gather data about a remote system, execute commands, exfil data, and more. The tool allows connections using Windows Management Instrumentation (WMI) or the Common Information Model (CIM); more accurately, Windows Management Infrastructure (MI). CIMplant requires local administrator permissions on the target system.


Setup:

It's probably easiest to use the built version under Releases; just note that it is compiled in Debug mode. If you want to build the solution yourself, follow the steps below.

  1. Load CIMplant.sln into Visual Studio
  2. Go to Build at the top and then Build Solution if no modifications are wanted

Usage
CIMplant.exe --help
CIMplant.exe --show-commands
CIMplant.exe --show-examples
CIMplant.exe -s [remote IP address] -c cat -f c:\users\user\desktop\file.txt
CIMplant.exe -s [remote IP address] -u [username] -d [domain] -p [password] -c cat -f c:\users\test\desktop\file.txt
CIMplant.exe -s [remote IP address] -u [username] -d [domain] -p [password] -c command_exec --execute "dir c:\\"

Some Helpful Commands



Important Files

  1. Program.cs

This is the brains of the operation, the driver for the program.

  2. Connector.cs

This is where the initial CIM/WMI connections are made and passed to the rest of the application

  3. ExecuteWMI.cs

All function code for the WMI commands

  4. ExecuteCIM.cs

All function code for the CIM (MI) commands


Detection

Of course, the first thing we'll want to be aware of is the initial WMI or CIM connection. In general, WMI uses DCOM as a communication protocol whereas CIM uses WSMan (or, WinRM). This can be modified for CIM, and is in CIMplant, but let's just go over the default values for now. For DCOM, the first thing we can do is look for initial TCP connections over port 135. The connecting and receiving systems will then decide on a new, very high port to use so that will vary drastically. For WSMan, the initial TCP connection is over port 5985.

Next, you'll want to look at the Microsoft-Windows-WMI-Activity/Trace event log in the Event Viewer. Search for Event ID 11 and filter on the IsLocal property if possible. You can also look for Event ID 1295 within the Microsoft-Windows-WinRM/Analytic log.

Finally, you'll want to look for any modifications to the DebugFilePath property with the Win32_OSRecoveryConfiguration class. More detailed information about detection can be found at Part 1 of our blog series here: CIMplant Part 1: Detection of a C# Implementation of WMImplant
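
As a quick, hedged illustration of that last check (not taken from the CIMplant project), the DebugFilePath value can be read locally with the third-party wmi Python package and compared against your own baseline:

import wmi  # pip install wmi (Windows only)

EXPECTED = r"%SystemRoot%\MEMORY.DMP"  # common default value; confirm against your baseline

conn = wmi.WMI()
for cfg in conn.Win32_OSRecoveryConfiguration():
    if cfg.DebugFilePath != EXPECTED:
        print("Possible tampering, DebugFilePath =", cfg.DebugFilePath)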



Httpx - A Fast And Multi-Purpose HTTP Toolkit Allows To Run Multiple Probers Using Retryablehttp Library, It Is Designed To Maintain The Result Reliability With Increased Threads


httpx is a fast and multi-purpose HTTP toolkit that allows running multiple probers using the retryablehttp library. It is designed to maintain result reliability with an increased number of threads.


Features
  • Simple and modular code base making it easy to contribute.
  • Fast and fully configurable flags to probe multiple elements.
  • Supports multiple HTTP-based probes.
  • Smart automatic fallback from https to http as default.
  • Supports hosts, URLs and CIDR as input.
  • Handles edge cases doing retries, backoffs etc. for handling WAFs.

Supported probes:
Probe Default check
URL true
Title true
Status Code true
Content Length true
TLS Certificate true
CSP Header true
Location Header true
Web Server true
Web Socket true
Response Time true
IP true
CNAME true
Raw HTTP false
HTTP2 false
HTTP 1.1 Pipeline false
Virtual host false
CDN false
Path false
Ports false
Request method false

Installation Instructions

From Binary

The installation is easy. You can download the pre-built binaries for your platform from the Releases page. Extract them using tar, move the binary to your $PATH and you're ready to go.

Download latest binary from https://github.com/projectdiscovery/httpx/releases

▶ tar -xvf httpx-linux-amd64.tar
▶ mv httpx-linux-amd64 /usr/local/bin/httpx
▶ httpx -h

From Source

httpx requires go1.14+ to install successfully. Run the following command to get the repo -

▶ GO111MODULE=on go get -v github.com/projectdiscovery/httpx/cmd/httpx

From Github
▶ git clone https://github.com/projectdiscovery/httpx.git; cd httpx/cmd/httpx; go build; mv httpx /usr/local/bin/; httpx -version

Usage
httpx -h

This will display help for the tool. Here are all the switches it supports.

Flag Description Example
H Custom Header input httpx -H 'x-bug-bounty: hacker'
follow-redirects Follow URL redirects (default false) httpx -follow-redirects
follow-host-redirects Follow URL redirects only on same host(default false) httpx -follow-host-redirects
http-proxy URL of the proxy server httpx -http-proxy hxxp://proxy-host:80
l File containing HOST/URLs/CIDR to process httpx -l hosts.txt
no-color Disable colors in the output. httpx -no-color
o File to save output result (optional) httpx -o output.txt
json Prints all the probes in JSON format (default false) httpx -json
vhost Probes to detect vhost from list of subdomains httpx -vhost
threads Number of threads (default 50) httpx -threads 100
http2 HTTP2 probing httpx -http2
pipeline HTTP1.1 Pipeline probing httpx -pipeline
ports Ports ranges to probe (nmap syntax: eg 1,2-10,11) httpx -ports 80,443,100-200
title Prints title of page if available httpx -title
path Request path/file httpx -path /api
paths Request list of paths from file httpx -paths paths.txt
content-length Prints content length in the output httpx -content-length
ml Match content length in the output httpx -content-length -ml 125
fl Filter content length in the output httpx -content-length -fl 0,43
status-code Prints status code in the output httpx -status-code
mc Match status code in the output httpx -status-code -mc 200,302
fc Filter status code in the output httpx -status-code -fc 404,500
tech-detect Perform wappalyzer based technology detection httpx -tech-detect
tls-probe Send HTTP probes on the extracted TLS domains httpx -tls-probe
tls-grab Perform TLS data grabbing httpx -tls-grab
content-type Prints content-type httpx -content-type
location Prints location header httpx -location
csp-probe Send HTTP probes on the extracted CSP domains httpx -csp-probe
web-server Prints running web server if available httpx -web-server
sr Store responses to file (default false) httpx -sr
srd Directory to store response (optional) httpx -srd httpx-output
unsafe Send raw requests skipping golang normalization httpx -unsafe
request File containing raw request to process httpx -request
retries Number of retries httpx -retries
random-agent Use randomly selected HTTP User-Agent header value httpx -random-agent
silent Prints only results in the output httpx -silent
stats Prints statistic every 5 seconds httpx -stats
timeout Timeout in seconds (default 5) httpx -timeout 10
verbose Verbose Mode httpx -verbose
version Prints current version of the httpx httpx -version
x Request Method (default 'GET') httpx -x HEAD
method Output requested method httpx -method
response-time Output the response time httpx -response-time
response-in-json Include response in stdout (only works with -json) httpx -response-in-json
websocket Prints if a websocket is exposed httpx -websocket
ip Prints the host IP httpx -ip
cname Prints the cname record if available httpx -cname
cdn Check if domain's ip belongs to known CDN httpx -cdn
filter-string Filter results based on filtered string httpx -filter-string XXX
match-string Filter results based on matched string httpx -match-string XXX
filter-regex Filter results based on filtered regex httpx -filter-regex XXX
match-regex Filter results based on matched regex httpx -match-regex XXX

Running httpx with stdin

This will run the tool against all the hosts and subdomains in hosts.txt and return the URLs running an HTTP web server.

▶ cat hosts.txt | httpx 

__ __ __ _ __
/ /_ / /_/ /_____ | |/ /
/ __ \/ __/ __/ __ \| /
/ / / / /_/ /_/ /_/ / |
/_/ /_/\__/\__/ .___/_/|_| v1.0
/_/

projectdiscovery.io

[WRN] Use with caution. You are responsible for your actions
[WRN] Developers assume no liability and are not responsible for any misuse or damage.

https://mta-sts.managed.hackerone.com
https://mta-sts.hackerone.com
https://mta-sts.forwarding.hackerone.com
https://docs.hackerone.com
https://www.hackerone.com
https://resources.hackerone.com
https://api.hackerone.com
https://support.hackerone.com

Running httpx with file input

This will run the tool against all the hosts and subdomains in hosts.txt and return the URLs running an HTTP web server.

▶ httpx -l hosts.txt -silent

https://docs.hackerone.com
https://mta-sts.hackerone.com
https://mta-sts.managed.hackerone.com
https://mta-sts.forwarding.hackerone.com
https://www.hackerone.com
https://resources.hackerone.com
https://api.hackerone.com
https://support.hackerone.com

Running httpx with CIDR input
▶ echo 173.0.84.0/24 | httpx -silent

https://173.0.84.29
https://173.0.84.43
https://173.0.84.31
https://173.0.84.44
https://173.0.84.12
https://173.0.84.4
https://173.0.84.36
https://173.0.84.45
https://173.0.84.14
https://173.0.84.25
https://173.0.84.46
https://173.0.84.24
https://173.0.84.32
https://173.0.84.9
https://173.0.84.13
https://173.0.84.6
https://173.0.84.16
https://173.0.84.34

Running httpx with subfinder
subfinder -d hackerone.com -silent | httpx -title -content-length -status-code -silent

https://mta-sts.forwarding.hackerone.com [404] [9339] [Page not found · GitHub Pages]
https://mta-sts.hackerone.com [404] [9339] [Page not found · GitHub Pages]
https://mta-sts.managed.hackerone.com [404] [9339] [Page not found · GitHub Pages]
https://docs.hackerone.com [200] [65444] [HackerOne Platform Documentation]
https://www.hackerone.com [200] [54166] [Bug Bounty - Hacker Powered Security Testing | HackerOne]
https://support.hackerone.com [301] [489] []
https://api.hackerone.com [200] [7791] [HackerOne API]
https://hackerone.com [301] [92] []
https://resources.hackerone.com [301] [0] []

Notes
  • By default, httpx probes HTTPS and falls back to HTTP only if HTTPS is not reachable.
  • To print both HTTP and HTTPS results, the no-fallback flag can be used.
  • A custom scheme for ports can be defined, for example -ports http:443,http:80,https:8443
  • vhost, http2, pipeline, ports, csp-probe, tls-probe and path are unique flags with different probes.
  • Unique flags should be used for specific use cases instead of running them as default with other flags.
  • When using the json flag, all the information (default probes) is included in the JSON output (see the sketch after these notes).
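
A hedged sketch of consuming that JSON output from Python: it assumes httpx emits one JSON object per line and that field names such as "url" and "status-code" exist in your httpx version, so adjust the keys as needed.

import json
import subprocess

# Pipe hosts into httpx and parse its JSON output line by line.
hosts = "hackerone.com\nprojectdiscovery.io\n"
proc = subprocess.run(
    ["httpx", "-silent", "-json", "-status-code"],
    input=hosts, capture_output=True, text=True,
)
for line in proc.stdout.splitlines():
    try:
        probe = json.loads(line)
    except json.JSONDecodeError:
        continue
    print(probe.get("url"), probe.get("status-code"))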

Thanks

httpx is made by the projectdiscovery team. Community contributions have made the project what it is. See the Thanks.md file for more details. Do also check out these similar awesome projects that may fit in your workflow:

Probing feature is inspired by @tomnomnom/httprobe work



Mubeng - An Incredibly Fast Proxy Checker And IP Rotator With Ease


An incredibly fast proxy checker & IP rotator with ease.

Features
  • Proxy IP rotator: Rotates your IP address for every specific request.
  • Proxy checker: Check which of your proxy IPs are still alive.
  • All HTTP/S methods are supported.
  • HTTP & SOCKSv5 proxy protocols apply.
  • All parameters & URIs are passed.
  • Easy to use: You can just run it against your proxy file, and choose the action you want!
  • Cross-platform: whether you are on Windows, Linux, macOS, or even a Raspberry Pi, you can run it very well.

Why mubeng?

It's fairly simple: there is no need for additional configuration.

mubeng has 2 core functionalities:


1. Run proxy server as proxy IP rotation

This is useful to avoid different kinds of IP bans, i.e. bruteforce protection, API rate-limiting or WAF blocking based on IP. We also leave it entirely up to the user to source proxy pool resources from anywhere.


2. Perform proxy checks

So, you don't need any extra proxy checking tools out there if you want to check your proxy pool.


Installation

Binary

Simply download a pre-built binary from the releases page and run it!


Docker

Pull the Docker image by running:

▶ docker pull kitabisa/mubeng

Source

Using Go (v1.15+) compiler:

▶ GO111MODULE=on go get -u ktbs.dev/mubeng/cmd/mubeng
NOTE: The same command above also works for updating.

— or

Manual building executable from source code:

▶ git clone https://github.com/kitabisa/mubeng
▶ cd mubeng
▶ make build
▶ (sudo) mv ./bin/mubeng /usr/local/bin
▶ make clean

Usage

For usage, it's always required to provide your proxy list, whether it is used for checking or as a proxy pool for your proxy IP rotation.


Basic
▶ mubeng [-c|-a :8080] -f file.txt [options...]

Options

Here are all the options it supports.

▶ mubeng -h
Flag Description
-f, --file <FILE> Proxy file.
-a, --address <ADDR>:<PORT> Run proxy server.
-d, --daemon Daemonize proxy server.
-c, --check To perform proxy live check.
-t, --timeout Max. time allowed for proxy server/check (default: 30s).
-r, --rotate <AFTER> Rotate proxy IP for every AFTER request (default: 1).
-v, --verbose Dump HTTP request/responses or show dead proxies on check.
-o, --output Log output from proxy server or live check.
-u, --update Update mubeng to the latest stable version.
-V, --version Show current mubeng version.

NOTES:
  • Rotations are counted for all requests, even if the request fails.
    • Rotation means a random pick, NOT choosing the next proxy in sequence from the proxy pool. We do not track whether a proxy has already been used, so there is no guarantee that when your requests reach the N value (-r/--rotate) your proxy IP will actually change.
  • Daemon mode (-d/--daemon) will install mubeng as a service on the system (Linux/macOS) or set up a callback (Windows).
    • Hence you can control the service with the journalctl, service, or net (Windows) commands to start/stop the proxy server.
    • Whenever you activate daemon mode, it forcibly stops and uninstalls any existing mubeng service, then re-installs it and starts it up as a daemon.
  • Verbose mode (-v/--verbose) and timeout (-t/--timeout) apply to both proxy check and proxy IP rotation actions.
  • HTTP traffic requests and responses are displayed when verbose mode (-v/--verbose) is enabled, but
    • We DO NOT explicitly display the request/response body, and
    • All cookie values in headers will be redacted automatically.
  • If you use the output option (-o/--output) to run the proxy IP rotator, request/response headers are NOT written to the log file.
  • A timeout option (-t/--timeout) value is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "5s", "300ms", "-1.5h" or "2h45m".
    • Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", and "h".

Examples

For example, you have a proxy pool (proxies.txt) such as:

http://127.0.0.1:8080
https://127.0.0.1:3128
socks5://127.0.0.1:2121
...
...

Because we use auto-switch transport, mubeng can accept multiple proxy protocol schemes at once.
Please refer to documentation for this package.


Proxy checker

Pass --check flag in command to perform proxy checks:

▶ mubeng -f proxies.txt --check --output live.txt

The above command also uses the --output flag to save the live proxies found during the check into a file (live.txt).


(Figure: Checking proxies mubeng with max. 5s timeout)


Proxy IP rotator

Furthermore, if you wish to run the proxy IP rotator using the proxies that were still alive in the earlier check results (live.txt), or your own list, use the -a (--address) flag instead to run the proxy server:

▶ mubeng -a localhost:8089 -f live.txt -r 10

The -r (--rotate) flag rotates your IP after every N requests, where N is the value you provide (here, 10).
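
To route a script's traffic through the rotator, point your HTTP client at the address passed to -a; a minimal sketch with the Python requests library (localhost:8089 comes from the example above):

import requests

# Traffic goes through the rotator started with: mubeng -a localhost:8089 -f live.txt -r 10
proxies = {
    "http": "http://localhost:8089",
    "https": "http://localhost:8089",
}
for _ in range(3):
    r = requests.get("https://ifconfig.me/ip", proxies=proxies, timeout=30)
    print(r.text.strip())  # the exit IP should change according to the rotation settings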


(Figure: Running mubeng as proxy IP rotator with verbose mode)


Burp Suite Upstream Proxy

In case you want to use mubeng (the proxy IP rotator) as an upstream proxy in Burp Suite, sitting in between Burp Suite and the internet, you don't need any additional Burp Suite extensions. To demonstrate this:


(Figure: Settings Burp Suite Upstream Proxy to mubeng)

In your Burp Suite instance, select Project options menu, and click Connections tab. In the Upstream Proxy Servers section, check Override user options then press Add button to add your upstream proxy rule. After that, fill required columns (Destination host, Proxy host & Proxy port) with correct details. Click OK to save settings.


OWASP ZAP Proxy Chain

It acts the same way as using an upstream proxy. OWASP ZAP allows you to connect to another proxy for outgoing connections in an OWASP ZAP session. To chain it with a mubeng proxy server:


(Figure: Settings proxy chain connection in OWASP ZAP to mubeng)

Select Tools in the menu bar in your ZAP session window, then select the Options (shortcut: Ctrl+Alt+O) submenu, and go to Connection section. In that window, scroll to Use proxy chain part then check Use an outgoing proxy server. After that, fill required columns (Address/Domain Name & Port) with correct details. Click OK to save settings.


Limitations

Currently, IP rotation runs the proxy server over the HTTP protocol only, not SOCKSv5, even if the proxies you supply are SOCKSv5. In other words, a SOCKSv5 proxy you provide is still used properly, because auto-switch transport is used on the client side, but the proxy server itself DOES NOT switch to anything other than the HTTP protocol.


Contributors

This project exists thanks to all the people who contribute. To learn how to setup a development environment and for contribution guidelines, see CONTRIBUTING.md.


Pronunciation

jv_ID/mo͞oˌbēNG/ — mubeng-mubeng nganti mumet. (ꦩꦸꦧꦺꦁ​ꦔꦤ꧀ꦠꦶ​ꦩꦸꦩꦺꦠ꧀)


Changes

For changes, see CHANGELOG.md.



R77-Rootkit - Fileless Ring 3 Rootkit With Installer And Persistence That Hides Processes, Files, Network Connections, Etc...


Ring 3 rootkit

r77 is a ring 3 rootkit that hides the following entities from all processes:

  • Files, directories, junctions, named pipes, scheduled tasks
  • Processes
  • CPU usage
  • Registry keys & values
  • Services
  • TCP & UDP connections

It is compatible with Windows 7 and Windows 10 in both x64 and x86 editions.


Hiding by prefix

All entities where the name starts with "$77" are hidden.



Configuration System

The dynamic configuration system allows hiding processes by PID and by name, file system items by full path, TCP & UDP connections on specific ports, etc.



The configuration is stored in HKEY_LOCAL_MACHINE\SOFTWARE\$77config and is writable by any process without elevated privileges. The DACL of this key is set to grant full access to any user.

The $77config key is hidden when RegEdit is injected with the rootkit.


Installer

r77 is deployable using a single file "Install.exe". It installs the r77 service that starts before the first user is logged on. This background process injects all currently running processes, as well as processes that spawn later. Two processes are needed to inject both 32-bit and 64-bit processes. Both processes are hidden by ID using the configuration system.

Uninstall.exe removes r77 from the system and gracefully detaches the rootkit from all processes.


Child process hooking

When a process creates a child process, the new process is injected before it can run any of its own instructions. The function NtResumeThread is always called when a new process is created. Therefore, it's a suitable target to hook. Because a 32-bit process can spawn a 64-bit child process and vice versa, the r77 service provides a named pipe to handle child process injection requests.

In addition, there is a periodic check every 100ms for new processes that might have been missed by child process hooking. This is necessary because some processes are protected and cannot be injected, such as services.exe.


In-memory injection

The rootkit DLL (r77-x86.dll and r77-x64.dll) can be injected into a process from memory and doesn't need to be stored on the disk. Reflective DLL injection is used to achieve this. The DLL provides an exported function that when called, loads all sections of the DLL, handles dependency loading and relocations, and finally calls DllMain.


Fileless persistence

The rootkit resides in the system memory and does not write any files to the disk. This is achieved in multiple stages.

Stage 1: The installer creates two scheduled tasks, one for the 32-bit and one for the 64-bit r77 service. A scheduled task does require a file to be stored, named $77svc32.job and $77svc64.job respectively, which is the only exception to the fileless concept. However, scheduled tasks are also hidden by prefix once the rootkit is running.

The scheduled tasks start powershell.exe with following command line:

[Reflection.Assembly]::Load([Microsoft.Win32.Registry]::LocalMachine.OpenSubkey('SOFTWARE').GetValue('$77stager')).EntryPoint.Invoke($Null,$Null)

The command is inline and does not require a .ps1 script. Here, the .NET Framework capabilities of PowerShell are utilized in order to load a C# executable from the registry and execute it in memory. Because the command line has a maximum length of 260 (MAX_PATH), there is only enough room to perform a simple Assembly.Load().EntryPoint.Invoke().




Stage 2: The executed C# binary is the stager. It will create the r77 service processes using process hollowing. The r77 service is a native executable compiled in both 32-bit and 64-bit separately. The parent process is spoofed and set to winlogon.exe for additional obscurity. In addition, the two processes are hidden by ID and are not visible in the task manager.



No executables or DLL's are ever stored on the disk. The stager is stored in the registry and loads the r77 service executable from its resources.

The PowerShell and .NET dependencies are present in a fresh installation of Windows 7 and Windows 10. Please review the documentation for a complete description of the fileless initialization.


Hooking

Detours is used to hook several functions from ntdll.dll. These low-level syscall wrappers are called by any WinAPI or framework implementation.

  • NtQuerySystemInformation
  • NtResumeThread
  • NtQueryDirectoryFile
  • NtQueryDirectoryFileEx
  • NtEnumerateKey
  • NtEnumerateValueKey
  • EnumServiceGroupW
  • EnumServicesStatusExW
  • NtDeviceIoControlFile

The only exception is advapi32.dll. Two functions are hooked to hide services. This is because the actual service enumeration happens in services.exe, which cannot be injected.


Test environment

The Test Console can be used to inject r77 to or detach r77 from individual processes.



Technical Documentation

Please read the technical documentation to get a comprehensive and full overview of r77 and its internals, and how to deploy and integrate it.


Project Page

bytecode77.com/r77-rootkit



3klCon - Automation Recon Tool Which Works With Large And Medium Scope


Full Automation Recon tool which works with Small and Medium scopes.

It's recommended to use it on a VPS; it'll discover secrets and search for vulnerabilities.

So, Welcome and let's deep into it <3


Updates

Version 1.1, what's new? (Very Recommended)
  1. Fixing multiple issues with the used tools.
  2. Upgrading to python3
  3. Editing the tool's methodology, you can check it there :)
  4. Editing the selected tools, changing some and using more tools
  5. Making some processes optional for the user, like directory bruteforcing and port scanning

Installation instructions

1. Before ANY installation instructions: You MUST be the ROOT user

$ su -

Because some of the tools and dependencies need root permissions


2. Install required tools (You MUST run it even if you install the used tools)

chmod +x install_tools.sh

./install_tools.sh


3. Running the tool (preferably with python3)

python 3klcon.py -t target.com


4. Check that you have already installed the latest version of Go, because some tools need to be up to date!

Notes

[+] If you face any problem at the installation process, check that:

1. You logged in as ROOT user not normal user 
2. Check that you installed the Go language and that this path exists: /root/go/bin

[+] It will take almost 5~6 hours to run if your target is medium-sized. So be patient, or use a VPS and sleep while it runs :)

[+] It will collect all the result into one directory with your target name

[+] Some tools may need your interaction, like entering your GitHub 2FA code, username, password, etc.


Tools used
  1. Subfinder
  2. Assetfinder
  3. Altdns
  4. Dirsearch
  5. Httpx
  6. Waybackurls
  7. Gau
  8. Git-hound
  9. Gitdorks.sh
  10. Naabu
  11. Gf
  12. Gf-templates
  13. Nuclei
  14. Nuclei-templates
  15. Subjack
  16. Port_scan.sh

Stay in touch <3

LinkedIn | Blog | Twitter



Snuffleupagus - Security Module For Php7 And Php8 - Killing Bugclasses And Virtual-Patching The Rest!


Security module for php7 and php8 - Killing bugclasses and virtual-patching the rest!

Snuffleupagus is a PHP 7+ and 8+ module designed to drastically raise the cost of attacks against websites by killing entire bug classes. It also provides a powerful virtual-patching system, allowing administrators to fix specific vulnerabilities and audit suspicious behaviours without having to touch the PHP code.


Key Features
  • No noticeable performance impact
  • Powerful yet simple to write virtual-patching rules
  • Killing several classes of vulnerabilities
  • Several hardening features
    • Automatic secure and samesite flag for cookies
    • Bundled set of rules to detect post-compromise behaviours
    • Global strict mode and type-juggling prevention
    • Whitelisting of stream wrappers
    • Preventing writeable files execution
    • Whitelist/blacklist for eval
    • Enforcing TLS certificate validation when using curl
    • Request dumping capability
  • A relatively sane code base

Download

We've got a download page, where you can find packages for your distribution, but you can of course just git clone this repo, or check the releases on github.


Examples

We provide various example rules, which look like this:

# Harden the `chmod` function
sp.disable_function.function("chmod").param("mode").value_r("^[0-9]{2}[67]$").drop();

# Mitigate command injection in `system`
sp.disable_function.function("system").param("command").value_r("[$|;&`\\n]").drop();

Upon violation of a rule, you should see lines like this in your logs:

[snuffleupagus][0.0.0.0][disabled_function][drop] The execution has been aborted in /var/www/index.php:2, because the return value (0) of the function 'strpos' matched a rule.

Documentation

We've got a comprehensive website with all the documentation that you could possibly wish for. You can of course build it yourself.


Thanks

Many thanks to the Suhosin project for being a huge source of inspiration, and to all our contributors.



ByeIntegrity-UAC - Bypass UAC By Hijacking A DLL Located In The Native Image Cache


Bypass User Account Control (UAC) to gain elevated (Administrator) privileges to run any program at a high integrity level. 


Requirements
  • Administrator account
  • UAC notification level set to default or lower

How it works

ByeIntegrity hijacks a DLL located in the Native Image Cache (NIC). The NIC is used by the .NET Framework to store optimized .NET Assemblies that have been generated from programs like Ngen, the .NET Framework Native Image Generator. Because Ngen is usually run under the current user with Administrative privileges through the Task Scheduler, the NIC grants modify access for members of the Administrators group.

The Microsoft Management Console (MMC) Windows Firewall Snap-in uses the .NET Framework, and upon initializing it, modules from the NIC are loaded into the MMC process. The MMC executable uses AutoElevate, a mechanism Windows uses that automatically elevates a process’s token without UAC prompting.

ByeIntegrity hijacks a specific DLL located in the NIC named Accessibility.ni.dll. It writes some shellcode into an appropriately-sized area of padding located in the .text section of the DLL. The entry point of the DLL is then updated to point to the shellcode. Upon DLL load, the entry point (which is actually the shellcode) is executed. The shellcode calculates the address of kernel32!CreateProcessW, creates a new instance of cmd.exe running as an Administrator, and then simply returns TRUE. This is only for the DLL_PROCESS_ATTACH reason; all other reasons will immediately return TRUE.


UACMe

This attack is implemented in UACMe as method #63. If you want to try out this attack, please use UACMe first. The attack is the same, however, UACMe uses a different method to modify the NIC. ByeIntegrity uses IFileOperation while UACMe uses ISecurityEditor. In addition, UACMe chooses the correct Accessibility.ni.dll for your system and performs the system maintenance tasks if necessary (to generate the NIC components). ByeIntegrity simply chooses the first NIC entry that exists (which may/may not be the correct entry that MMC is using) and does not run the system maintenance tasks. ByeIntegrity contains significantly more code than UACMe, so reading the UACMe implementation will be much easier to understand than reading the ByeIntegrity code. Lastly, ByeIntegrity launches a child process during the attack whereas UACMe does not.

tl;dr: UACMe is simpler and more effective than ByeIntegrity, so use UACMe first.


Using the code

If you’re reading this then you probably know how to compile the source. Just note that this hasn’t been tested or designed with x86 in mind at all, and it probably won’t work on x86 anyways.

Just like UACMe, I will never upload compiled binaries to this repo. There are always people who want the world to crash and burn, and I'm not going to provide an easy route for them to run this on somebody else's computer and cause intentional damage. I also don't want script-kiddies to use this attack without understanding what it does and the damage it can cause.


Supported Versions

This attack works from Windows 7 (7600) up until the latest version of Windows 10.



APSoft-Web-Scanner-v2 - Powerful Dork Searcher And Vulnerability Scanner For Windows Platform


APSoft Webscanner Version 2

new version of APSoft Webscanner Version 1


Software pictures





What can I do with this?

With this software, you will be able to search your dorks in the supported search engines and scan the grabbed URLs to find their vulnerabilities. In addition, you will be able to generate dorks, scan URLs, and search dorks separately whenever you want.


Supported search engines
  • Google
  • Yahoo
  • Bing

Supported vulnerabilities
  • SQL Injection
  • XSS
  • LFI

What's new in version 2 (most important updates)?

adding custom payloads

You can edit the payloads.json file, which will be created when you open and close the software once, and add as many payloads as you want, easier than drinking water.


adding custom error checks

Once a payload is injected into a URL, the software looks for errors in the new website source; you can customize those errors too. All you have to do is edit the payloadserror.json file, which will be created when you open and close the software once. You can also use regexes as errors, with the REIT|your regex here format.


multi vulnerability check

In the old version, you were not able to choose more than one vulnerability to check, but in v2 you can do this easily.


multi search engine grabber

In the old version, you were not able to choose more than one search engine to search in, but in v2 you can do this easily.


memory management

We've added memory management to avoid exhausting the memory on your system.


dork generator

You can generate dorks and save them very fast with your custom configurations and keywords. A valid configuration format should contain {DORK}, which will be replaced with each keyword during the dork generation process.


updates list (all)
  • new threading system based on Microsoft Tasks
  • using LINQ technology
  • dork generator part
  • ability to add regexes as payload errors
  • low usage
  • moving from WPF to Windows Forms (just because my designs are bad, contact me if you can do better)
  • ability to use the scanner and grabber separately and simultaneously
  • and ....

support / suggestion = [email protected] - t.me/ph09nix

Leave a STAR if you found this useful :)


Short story about Clubhouse user scraping and social graphs


TL;DR

During this RedTeam testing, the Hexway team used Clubhouse as a social engineering tool to find out more about their client's employees.


UPDATE:

While Hexway were preparing this article for publication, cybernews.com reported: 1.3 million scraped user records leaked online for free

In this research, Hexway didn’t attack Clubhouse users and didn’t exploit any Clubhouse vulnerabilities



Intro

Hi!

RedTeam projects became routine for many pentest companies quite a long time ago. At Hexway, we don't do them a lot, only because our main focus is our collaborative pentesting platform, Hive. But in this case, we couldn't resist: the project seemed to be very interesting.

We won’t go into detail on the project itself but rather focus on one of its parts. So, in this ReadTeam testing, our goal was to compromise the computer of the CTO of a large financial organization, X corp. To achieve that, we needed the CTO to open a docx file with our payload. Naturally, the question was: what’s the best way to deliver that file?

Here are some obvious options:

  • Corporate email
  • LinkedIn
  • Facebook

Instead, we wanted to try something new. And that’s where Clubhouse comes in.


Clubhouse? What?!

Clubhouse is a voice-based social network. It was popular for a couple of weeks in February 2021.

At that time, Clubhouse offered us a few advantages:

  • Huge popularity
  • Users mostly sign up with their real names, photos, and links to other social media
  • It's quite easy to get into a room with interesting people, who are often hard to reach through traditional channels like email, LinkedIn, etc.
  • Our experience tells us that people are suspicious of cold emails with attachments and don't open them. But in the context of an informal social platform, they seem to be less alert, which is good for RedTeam.

  • Here’s the plan:
  • Sign up in Clubhouse
  • Find our target in Clubhouse
  • Wait until they participate in a room as a speaker
  • Join the room
  • Try to engage them in a conversation. Get them interested and move the conversation over to email
  • Send them an email with the attachment and payload
  • The target opens our docx
  • Profit!


First problems

First, we registered in Clubhouse. That was easy! Then we looked for our target … and found nothing. We couldn't find them by their name or by the nicknames they use on other platforms. Unfortunately, you can't search users by profile description or by Twitter/Instagram accounts. So, are they not on Clubhouse? Maybe they use Android? (At the time of writing, 06.04.21, Clubhouse is officially available only for iOS.)


This is the way!

Okay, chin up. Our target could be using a fake name to avoid revealing themselves while participating in rooms dedicated to non-work-related topics. It's time to find out. Let's try to use the power of social graphs.

Here's the new plan:
  • Find any X corp employee
  • Get their list of followers and their accounts
  • Get the list of users they follow and their accounts
  • Get the lists of users of the clubs these accounts are in
  • Filter all these users by "X corp" in the About profile section
  • Make social graphs to find our target in someone's connections:
    • invitation chains (down to the first Clubhouse users)
    • "following" connections
    • "follower" connections

To do all that, we have to parse Clubhouse. There's no official API, so we used an unofficial one (thanks to stypr!).

The clubhouse-py library is pretty easy to use, and we could set up a parser script in no time. Clubhouse returns the following JSON in response to the get_profile API request:

Warning! To demonstrate how graphs work, we’re not going to use real X corp employees’ data.

{
"user_profile":{
"user_id":4,
"name":"Rohan Seth",
"displayname":"",
"photo_url":"https://clubhouseprod.s3.amazonaws.com:443/4_b471abef-7c14-43af-999a-6ecd1dd1709c",
"username":"rohan",
"bio":"Cofounder at Clubhouse 👋🏽 (this app!) and Lydian Accelerator 🧬 (non profit for fixing genetic diseases)",
"twitter":"rohanseth",
"instagram":"None",
"num_followers":5502888,
"num_following":636,
"time_created":"2020-03-17T07:51:28.085566+00:00",
"follows_me":false,
"is_blocked_by_network":false,
"mutual_follows_count":0,
"mutual_follows":[],
"notification_type":3,
"invited_by_user_profile":"None",
"invited_by_club":"None",
"clubs":[],
"url":"https://www.joinclubhouse.com/@rohan",
"can_receive_direct_payment":true,
"direct_payment_fee_rate":0.029,
"direct_payment_fee_fixed":0.3
},
"success":true
}
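For reference, here is a minimal sketch of pulling such a profile with clubhouse-py. The credential values are placeholders you obtain during the library's login flow, and the import path and constructor/method signatures are assumptions that may differ between library versions:

from clubhouse import Clubhouse

# Credentials obtained after completing the clubhouse-py login flow (placeholders).
client = Clubhouse(
    user_id="YOUR_USER_ID",
    user_token="YOUR_USER_TOKEN",
    user_device="YOUR_DEVICE_ID",
)

# get_profile is the API call quoted above; it returns the JSON shown.
profile = client.get_profile(user_id=4)["user_profile"]
print(profile["username"], profile["num_followers"])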

Example 1. Get the information about the user chipik and all of their followers and followed the accounts.

 ~python3 clubhouse-graphs.py -u chipik --followers --following
|------------|-----------|-------------|--------------------------------------------------------------------------------------------|----------|------------------------|---------|-----------|-----------|-----------|------------|-----------------|
| user_id | name | displayname | photo_url | username | bio | twitter | instagram | followers | following | invited by | invited by name |
|------------|-----------|-------------|--------------------------------------------------------------------------------------------|----------|------------------------|---------|-----------|-----------|-----------|------------|-----------------|
| 1964245387 | Dmitry Ch | | https://clubhouseprod.s3.amazonaws.com:443/1964245387_428c3161-1d0e-456e-b2a7-66f82b143094 | chipik | - hacker | _chipik | | 110 | 96 | 854045411 | Al Fova |
| | | | | | - researcher | | | | | | |
| | | | | | - speaker | | | | | | |
| | | | | | | | | | | | |
| | | | | | Do things at hexway.io | | | | | | |
| | | | | | tg: @chpkk | | | | | | |
|------------|-----------|-------------|--------------------------------------------------------------------------------------------|----------|------------------------|---------|-----------|-----------|-----------|------------|-----------------|

Example 2. Get the list of the participants of “Cybersecurity Club”

~python3 clubhouse-graphs.py --group 444701692
[INFO ] Getting info about group Cybersecurity Club
[INFO ] Adding member: 1/750
[INFO ] Adding member: 2/750
...
[INFO ] Adding member: 749/750
Done!
Check file ch-group-444701692.html with group's users graph
That’s a graph for all the group members. When hovering over a node, we see the information about the user.



We’ve experimented with server request frequency to see if there are any request limits. A few times, we were temporarily blocked for “too frequent use of API”, but the block expired quickly. For all the time we spent testing, our account wasn’t permanently blocked.
A few days later, we had a base of 300,000 Clubhouse users somehow connected to X corp.
Now, we can search users by different patterns in their bio:

Example 3. Find all users who allegedly work/worked at the WIRED magazine and their followers and followed the accounts.

~python3 clubhouse-graphs.py --find_by_bio wired
[INFO ] Searching users with wired in bio
[INFO ] Adding 1/100
[INFO ] Adding 2/100
...
[INFO ] Adding 100/100
Done!
Find graph in ch-search-wired.html file
Here’s the interactive graph with user profiles.




Example 4. Clubhouse invitation chain

To sign up in Clubhouse, you need an invitation from a Clubhouse user. We can use that fact as additional evidence of connections between accounts.

~ python3 clubhouse-graphs.py -I kevinmitnick
[INFO ] Getting invite graph for user kevinmitnick
Kevin Mitnick<--Maite Robles
Maite Robles<--Roni Broyde
Roni Broyde<--Alex Eick
Alex Eick<--Summer Elsayed
Summer Elsayed<--Dena Mekawi
Dena Mekawi<--Eric Parker
Eric Parker<--Global Mogul Chale
Kojo Terry Oppong<--Shaka Senghor
Shaka Senghor<--Andrew Chen
Done! Find graph in ch-invitechain-kevinmitnick.html file

Here’s the interactive graph of invitations leading us to Kevin Mitnick.


Results

We collected the users, filtered them by job, and built a graph showing connections between them (followers, followed, invitations). This way, we found a user with a dog on a scooter as their profile pic and no bio. This user is followed by almost all the X corp employees we found but follows just one of them. Finally, the user's name contained the target's initials, so we felt safe to assume it was them.
The hardest part was done. We followed the target from one account and used another one to engage them in a conversation in a small room.
Some social engineering magic, and we got their email address. After a short email exchange, we sent them the docx with the payload. A few hours later, we got a shell on their laptop. Done!
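As an illustration of that filter-and-graph step, here is a minimal sketch. It assumes the scraped profiles are stored as dicts shaped like the get_profile response above and that follow relationships were collected as (follower_id, followed_id) pairs; networkx is used only as an example graph library:

import networkx as nx

def build_target_graph(profiles, follows, keyword="x corp"):
    # profiles: user_id -> profile dict (as returned by get_profile)
    # follows: iterable of (follower_id, followed_id) pairs
    candidates = {
        uid for uid, p in profiles.items()
        if keyword in (p.get("bio") or "").lower()
    }
    graph = nx.DiGraph()
    for follower, followed in follows:
        if follower in candidates or followed in candidates:
            graph.add_edge(follower, followed)
    return graph

The resulting directed graph can then be rendered (for example, to an interactive HTML page) to spot accounts that are followed by many employees but follow almost nobody back, as described above.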

Takeaways

  • Do not limit yourself to “standard” social engineering channels.
  • Be careful with the information you put out on social media, especially if it concerns your current or previous employment.
  • Most likely, the popularity of Clubhouse has passed. But there are a lot of users with real data, which can be parsed easily. All that makes us think that someone could already have collected a database of Clubhouse users, and some time later it may end up leaked.
P.S. The scripts developed during this project are available in our repository Clubhouse dummy parser and graph generator (CDPaGG)











VAST - Visibility Across Space And Time


The network telemetry engine for data-driven security investigations.



Chat with us on Gitter, or join us on Matrix at #tenzir_vast:gitter.im.


Key Features
  • High-Throughput Ingestion: ingest numerous log formats at over 100k events/second, including Zeek, Suricata, JSON, and CSV.

  • Low-Latency Queries: sub-second response times over the entire data lake, thanks to multi-level bitmap indexing and actor model concurrency. Particularly helpful for instant indicator checking over the entire dataset.

  • Flexible Export: access data in common text formats (ASCII, JSON, CSV), in binary form (MRT, PCAP), or via zero-copy relay through Apache Arrow for arbitrary downstream analysis.

  • Powerful Data Model and Query Language: the generic semi-structured data model allows for expressing complex data in a typed fashion. An intuitive query language that feels like grep and awk at scale enables powerful subsetting of data with domain-specific operations, such as top-k prefix search for IP addresses and subset relationships.

  • Schema Pivoting: the missing link to navigate between related events, e.g., extracting a PCAP for a given IDS alert, or locating all related logs for a given query.


Get VAST

Linux users can download our latest static binary release via browser or cURL.

curl -L -O https://storage.googleapis.com/tenzir-public-data/vast-static-builds/vast-static-latest.tar.gz

Unpack the archive. It contains three folders: bin, etc, and share. To get started, invoke the binary in the bin directory directly.

tar xfz vast-static-latest.tar.gz
bin/vast --help

To install VAST properly for your local user, simply place the unpacked folders in /usr/local/.

FreeBSD and macOS users have to build from source. Clone the master branch to get the most recent version of VAST.

git clone --recursive https://github.com/tenzir/vast

Once you have all dependencies in place, build VAST with the following commands:

./configure
cmake --build build
cmake --build build --target test
cmake --build build --target integration
cmake --build build --target install

The installation guide contains more detailed and platform-specific instructions on how to build and install VAST.


Getting Started

Here are some commands to get a first glimpse of what VAST can do for you.

Start a VAST node:

vast start

Ingest Zeek logs of various kinds:

zcat *.log.gz | vast import zeek

Run a query over the last hour, rendered as JSON:

vast export json ':timestamp > 1 hour ago && (6.6.6.6 || 5353/udp)'

Ingest a PCAP trace with a 1024-byte flow cutoff:

vast import pcap -c 1024 < trace.pcap

Run a query over PCAP data, sort the packets by time, and feed them into tcpdump:

vast export pcap "sport > 60000/tcp && src !in 10.0.0.0/8" \
| ipsumdump --collate -w - \
| tcpdump -r - -nl

License and Scientific Use

VAST comes with a 3-clause BSD license. When referring to VAST in a scientific context, please use the following citation:

@InProceedings{nsdi16:vast,
  author    = {Matthias Vallentin and Vern Paxson and Robin Sommer},
  title     = {{VAST: A Unified Platform for Interactive Network Forensics}},
  booktitle = {Proceedings of the USENIX Symposium on Networked Systems Design and Implementation (NSDI)},
  month     = {March},
  year      = {2016}
}

You can download the paper from the NSDI '16 proceedings.

Developed with ❤️ by Tenzir



Baserunner - A Tool For Exploring Firebase Datastores


A tool for exploring and exploiting Firebase datastores.


Set up
  1. git clone https://github.com/iosiro/baserunner.git
  2. cd baserunner
  3. npm install
  4. npm run build
  5. npm start
  6. Go to http://localhost:3000 in your browser.

Usage

The Baserunner interface looks like this:



First, use the configuration textbox to load a Firebase configuration JSON structure from the app you'd like to test. It looks like this:

{
  "apiKey": "API_KEY",
  "authDomain": "PROJECT_ID.firebaseapp.com",
  "databaseURL": "https://PROJECT_ID.firebaseio.com",
  "projectId": "PROJECT_ID",
  "storageBucket": "PROJECT_ID.appspot.com",
  "messagingSenderId": "SENDER_ID",
  "appId": "APP_ID",
  "measurementId": "G-MEASUREMENT_ID"
}

Then log in as a regular user, either with an email and password or with a mobile phone number. When logging in with a mobile phone number, complete the CAPTCHA before submitting your number. You will then be prompted for an OTP from your SMS; enter it without completing the CAPTCHA to finish logging in. Note that you can skip this step entirely to test queries without authentication.

Finally, you can use the query interface to submit queries to the application's Cloud Firestore. Baserunner provides a number of template queries for common actions. Click on one of them to load it in the textbox, and replace the values that look ==LIKE THIS== with valid names of collections, IDs, fields, etc.

As there is no way of getting a list of available collections using the Firebase JavaScript SDK, you will need to guess these, or source their names from the application's front-end JavaScript.


FAQ

How do I tell if an app is using Cloud Firestore or Realtime Database?

Applications using Realtime Database will have a databaseURL key in their configuration objects. Applications without this key can be assumed to use Cloud Firestore. Note that it is possible for Firebase applications to use both datastores, so when in doubt, run both types of queries.
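As a quick illustration of that rule, here is a small sketch in plain Python (independent of Baserunner) that inspects a pasted configuration and suggests which datastore(s) to query; the config values are placeholders:

import json

config = json.loads("""{
  "apiKey": "API_KEY",
  "projectId": "PROJECT_ID",
  "databaseURL": "https://PROJECT_ID.firebaseio.com"
}""")

# Per the rule above: a databaseURL key implies Realtime Database; otherwise assume Cloud Firestore.
if "databaseURL" in config:
    print("Try Realtime Database queries (and Firestore too, to be safe).")
else:
    print("Assume Cloud Firestore.")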

I'm getting blocked by CORS!

To function as intended, Baserunner expects the application under test to accept requests from localhost, which is allowed by default. Because of this, Baserunner cannot be used as a hosted application.

Should requests from localhost be disallowed by the application you're testing, you can still run a version of Baserunner with a reduced feature set by opening dist/index.html in your browser. Note that this way of running Baserunner only supports email + password login, not phone login.

How do I know what collections to query for?

Cloud Firestore: For security reasons, Firebase's client-side JavaScript API does not provide a mechanism for listing collections. You will need to deduce these from looking at the target application's JavaScript code and making educated guesses.

Realtime Database: As this datastore is represented as a JSON object, you can use the provided "[Realtime Database] Read datastore" query to attempt to view the entire thing. Note that this may fail depending on the rules configured.

Can I see the results of previous queries?

While only the latest query result is displayed on the Baserunner page, all results are logged to the browser console.

When running Realtime Database queries, I get an error that says the client is offline.

Try rerunning the query.



DNSObserver - A Handy DNS Service Written In Go To Aid In The Detection Of Several Types Of Blind Vulnerabilities


A handy DNS service written in Go to aid in the detection of several types of blind vulnerabilities. It monitors a pentester's server for out-of-band DNS interactions and sends notifications with the received request's details via Slack. DNSObserver can help you find bugs such as blind OS command injection, blind SQLi, blind XXE, and many more!


For a more detailed overview and setup instructions, see:

https://www.allysonomalley.com/2020/05/22/dnsobserver/


Setup

What you'll need:

  • Your own registered domain name
  • A Virtual Private Server (VPS) to run the script on (I'm using Ubuntu - I have not tested this tool on other systems)
  • [Optional] Your own Slack workspace and a webhook

Domain and DNS Configuration

If you don't already have a VPS ready to use, create a new Linux VPS with your preferred provider. Note down its public IP address.

Register a new domain name with your preferred registrar - any registrar should be fine as long as they allow setting custom name servers and glue records.

Go into your new domain's DNS settings and find the 'glue record' section. Add two entries here, one for each new name server, and supply both with the public IP address of your VPS.

Next, change the default name servers to:

ns1.<YOUR-DOMAIN>
ns2.<YOUR-DOMAIN>

Server Setup

SSH into your VPS, and perform these steps:

  • Install Go if you don't have it already. Installation instructions can be found here

  • Make sure that the default DNS ports are open - 53/UDP and 53/TCP. Run:

     sudo ufw allow 53/udp
    sudo ufw allow 53/tcp
  • Get DNSObserver and its dependencies:

     go get github.com/allyomalley/dnsobserver/...

DNSObserver Configuration

There are two required arguments, and two optional arguments:

domain [REQUIRED]
Your new domain name.

ip [REQUIRED]
Your VPS' public IP address.

webhook [Optional]
If you want to receive notifications, supply your Slack webhook URL. You'll be notified of any lookups of your domain name, or for any subdomains of your domain (I've excluded notifications for queries for any other apex domains and for your custom name servers to avoid excessive or random notifications). If you do not supply a webhook, interactions will be logged to standard output instead. Webhook setup instructions can be found here.

recordsFile [Optional]
By default, DNSObserver will only respond with an answer to queries for your domain name, or either of its name servers. For any other host, it will still notify you of the interaction (as long as it's your domain or a subdomain), but will send back an empty response. If you want DNSObserver to answer to A lookups for certain hosts with an address, you can either edit the config.yml file included in this project, or create your own based on this template:

a_records:
  - hostname: ""
    ip: ""
  - hostname: ""
    ip: ""

Currently, the tool only uses A records (in the future I may add CNAME, AAAA, etc.). Here is an example of a complete custom records file:

a_records:
  - hostname: "google.com"
    ip: "1.2.3.4"
  - hostname: "github.com"
    ip: "5.6.7.8"

These settings mean that I want to respond to queries for 'google.com' with '1.2.3.4', and queries for 'github.com' with '5.6.7.8'.


Usage

Now, we are ready to start listening! If you want to be able to do other work on your VPS while DNSObserver runs, start up a new tmux session first.

For the standard setup, pass in the required arguments and your webhook:

dnsobserver --domain example.com --ip 11.22.33.44 --webhook https://hooks.slack.com/services/XXX/XXX/XXX

To achieve the above, but also include some custom A lookup responses, add the argument for your records file:

dnsobserver --domain example.com --ip 11.22.33.44 --webhook https://hooks.slack.com/services/XXX/XXX/XXX --recordsFile my_records.yml

Assuming you've set everything up correctly, DNSObserver should now be running. To confirm it's working, open up a terminal on your desktop and perform a lookup of your new domain ('example.com' in this demo):

dig example.com

You should now receive a Slack notification with the details of the request!
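Once lookups are arriving, a common workflow is to embed a unique, per-test canary subdomain in each suspected injection point so every notification can be tied back to a specific request. Below is a minimal, hedged sketch; the target URL, parameter name, and payload shape are purely illustrative:

import uuid
import requests

DOMAIN = "example.com"  # your DNSObserver domain

def canary(tag):
    # Unique, attributable subdomain, e.g. cmdi-1a2b3c4d.example.com
    return f"{tag}-{uuid.uuid4().hex[:8]}.{DOMAIN}"

# Illustrative blind command-injection probe; adapt to the parameter under test.
host = canary("cmdi")
requests.get(
    "https://target.example/ping",
    params={"ip": f"127.0.0.1; nslookup {host}"},
    timeout=10,
)
print("Watch Slack/stdout for a lookup of:", host)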



CyberBattleSim - An Experimentation And Research Platform To Investigate The Interaction Of Automated Agents In An Abstract Simulated Network Environment


CyberBattleSim is an experimentation and research platform to investigate the interaction of automated agents operating in a simulated abstract enterprise network environment. The simulation provides a high-level abstraction of computer networks and cyber security concepts. Its Python-based OpenAI Gym interface allows for training automated agents using reinforcement learning algorithms.

The simulation environment is parameterized by a fixed network topology and a set of vulnerabilities that agents can utilize to move laterally in the network. The goal of the attacker is to take ownership of a portion of the network by exploiting vulnerabilities that are planted in the computer nodes. While the attacker attempts to spread throughout the network, a defender agent watches the network activity and tries to detect any attack taking place and mitigate the impact on the system by evicting the attacker. We provide a basic stochastic defender that detects and mitigates ongoing attacks based on pre-defined probabilities of success. We implement mitigation by re-imaging the infected nodes, a process abstractly modeled as an operation spanning over multiple simulation steps.

To compare the performance of the agents we look at two metrics: the number of simulation steps taken to attain their goal and the cumulative rewards over simulation steps across training epochs.


Project goals

We view this project as an experimentation platform to conduct research on the interaction of automated agents in abstract simulated network environments. By open sourcing it we hope to encourage the research community to investigate how cyber-agents interact and evolve in such network environments.

The simulation we provide is admittedly simplistic, but this has advantages. Its highly abstract nature prohibits direct application to real-world systems thus providing a safeguard against potential nefarious use of automated agents trained with it. At the same time, its simplicity allows us to focus on specific security aspects we aim to study and quickly experiment with recent machine learning and AI algorithms.

For instance, the current implementation focuses on the lateral movement cyber-attacks techniques, with the hope of understanding how network topology and configuration affects them. With this goal in mind, we felt that modeling actual network traffic was not necessary. This is just one example of a significant limitation in our system that future contributions might want to address.

On the algorithmic side, we provide some basic agents as starting points, but we would be curious to find out how state-of-the art reinforcement learning algorithms compare to them. We found that the large action space intrinsic to any computer system is a particular challenge for Reinforcement Learning, in contrast to other applications such as video games or robot control. Training agents that can store and retrieve credentials is another challenge faced when applying RL techniques where agents typically do not feature internal memory. These are other areas of research where the simulation could be used for benchmarking purposes.

Other areas of interest include the responsible and ethical use of autonomous cyber-security systems: How to design an enterprise network that gives an intrinsic advantage to defender agents? How to conduct safe research aimed at defending enterprises against autonomous cyber-attacks while preventing nefarious use of such technology?


Documentation

Read the Quick introduction to the project.


Benchmark

See Benchmark.


Setting up a dev environment

It is strongly recommended to work under a Linux environment, either directly or via WSL on Windows. Running Python on Windows directly should work but is not supported anymore.

Start by checking out the repository:

git clone https://github.com/microsoft/CyberBattleSim.git

On Linux or WSL

The instructions were tested on a Linux Ubuntu distribution (both native and via WSL). Run the following command to set-up your dev environment and install all the required dependencies (apt and pip packages):

./init.sh

The script installs python3.8 if not present. If you are running a version of Ubuntu older than 20, it will automatically add an additional apt repository to install python3.8.

The script will create a virtual Python environment under a venv subdirectory; you can then run Python with venv/bin/python.

Note: If you prefer Python from a global installation instead of a virtual environment, you can skip the creation of the virtual environment by running the script with ./init.sh -n. This will instead install all the Python packages on a system-wide installation of Python 3.8.


Windows Subsystem for Linux

The supported dev environment on Windows is via WSL. You first need to install an Ubuntu WSL distribution on your Windows machine, and then proceed with the Linux instructions (next section).


Git authentication from WSL

To authenticate with Git you can either use SSH-based authentication, or alternatively use the credential-helper trick to automatically generate a PAT token. The latter can be done by running the following commmand under WSL (more info here):

git config --global credential.helper "/mnt/c/Program\ Files/Git/mingw64/libexec/git-core/git-credential-manager.exe"

Docker on WSL

To run your environment within a Docker container, we recommend running Docker via the Windows Subsystem for Linux (WSL) using the following instructions: Installing Docker on Windows under WSL.


Windows (unsupported)

This method is not maintained anymore; please prefer running under a WSL Linux environment instead. If you insist, start by installing Python 3.8, then run the ./init.ps1 script from a PowerShell prompt.


Getting started quickly using Docker

The quickest method to get up and running is via the Docker container.

NOTE: For licensing reasons, we do not publicly redistribute any build artifact. In particular the docker registry spinshot.azurecr.io referred to in the commands below is kept private to the project maintainers only.

As a workaround, you can recreate the docker image yourself using the provided Dockerfile, publish the resulting image to your own docker registry and replace the registry name in the commands below.

commit=7c1f8c80bc53353937e3c69b0f5f799ebb2b03ee
docker login spinshot.azurecr.io
docker pull spinshot.azurecr.io/cyberbattle:$commit
docker run -it spinshot.azurecr.io/cyberbattle:$commit cyberbattle/agents/baseline/run.py

Check your environment

Run the following command to run a simulation with a baseline RL agent:

python cyberbattle/agents/baseline/run.py --training_episode_count 1 --eval_episode_count 1 --iteration_count 10 --rewardplot_with 80  --chain_size=20 --ownership_goal 1.0

If everything is setup correctly you should get an output that looks like this:

torch cuda available=True
###### DQL
Learning with: episode_count=1,iteration_count=10,ϵ=0.9,ϵ_min=0.1, ϵ_expdecay=5000,γ=0.015, lr=0.01, replaymemory=10000,
batch=512, target_update=10
## Episode: 1/1 'DQL' ϵ=0.9000, γ=0.015, lr=0.01, replaymemory=10000,
batch=512, target_update=10
Episode 1|Iteration 10|reward: 139.0|Elapsed Time: 0:00:00|###################################################################|
###### Random search
Learning with: episode_count=1,iteration_count=10,ϵ=1.0,ϵ_min=0.0,
## Episode: 1/1 'Random search' ϵ=1.0000,
Episode 1|Iteration 10|reward: 194.0|Elapsed Time: 0:00:00|###################################################################|
simulation ended
Episode duration -- DQN=Red, Random=Green
10.00 ┼
Cumulative rewards -- DQN=Red, Random=Green
194.00 ┼ ╭──╴
174.60 ┤ │
155.20 ┤╭─────╯
135.80 ┤│ ╭──╴
116.40 ┤│ │
97.00 ┤│ ╭╯
77.60 ┤│ │
58.20 ┤╯ ╭──╯
38.80 ┤ │
19.40 ┤ │
0.00 ┼──╯

Jupyter notebooks

To quickly get familiar with the project, you can open one of the provided Jupyter notebooks to play interactively with the gym environments. Just start Jupyter with jupyter notebook, or venv/bin/jupyter notebook if you are using the virtual environment setup.

The following .py notebooks are best viewed in VSCode or in Jupyter with the Jupytext extension and can easily be converted to .ipynb format if needed:


How to instantiate the Gym environments?

The following code shows how to create an instance of the OpenAI Gym environment CyberBattleChain-v0, an environment based on a chain-like network structure with 10 nodes (size=10), where the agent's goal is to either gain full ownership of the network (own_atleast_percent=1.0) or break the 80% network availability SLA (maintain_sla=0.80), while the network is being monitored and protected by a basic probabilistically-modelled defender (defender_agent=ScanAndReimageCompromisedMachines):

import gym
import cyberbattle._env.cyberbattle_env
# AttackerGoal, DefenderConstraint and ScanAndReimageCompromisedMachines are provided
# by the cyberbattle package; the exact import paths below may vary between versions.
from cyberbattle._env.cyberbattle_env import AttackerGoal, DefenderConstraint
from cyberbattle._env.defender import ScanAndReimageCompromisedMachines

cyberbattlechain_defender = gym.make(
    'CyberBattleChain-v0',
    size=10,
    attacker_goal=AttackerGoal(
        own_atleast=0,
        own_atleast_percent=1.0
    ),
    defender_constraint=DefenderConstraint(
        maintain_sla=0.80
    ),
    defender_agent=ScanAndReimageCompromisedMachines(
        probability=0.6,
        scan_capacity=2,
        scan_frequency=5))
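Once instantiated, the environment follows the standard OpenAI Gym interface. A minimal interaction sketch is shown below; the randomly sampled actions are for illustration only and may be rejected as invalid by the simulation:

# Standard Gym loop against the environment created above.
observation = cyberbattlechain_defender.reset()
total_reward = 0.0
for _ in range(10):
    action = cyberbattlechain_defender.action_space.sample()  # random action, illustration only
    observation, reward, done, info = cyberbattlechain_defender.step(action)
    total_reward += reward
    if done:
        break
print("cumulative reward:", total_reward)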

To try other network topologies, take example on chainpattern.py to define your own set of machines and vulnerabilities, then add an entry in the module initializer to declare and register the Gym environment.


Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.


Ideas for contributions

Here are some ideas on how to contribute: enhance the simulation (event-based simulation, refined modeling, …), train an RL algorithm on the existing simulation, implement benchmarks to evaluate and compare the novelty of agents, add more network generative modes to train RL agents on, contribute to the docs, fix bugs.

See also the wiki for more ideas.


Citing this project
@misc{msft:cyberbattlesim,
  Author = {Microsoft Defender Research Team.},
  Note = {Created by Christian Seifert, Michael Betser, William Blum, James Bono, Kate Farris, Emily Goren, Justin Grana, Kristian Holsheimer, Brandon Marken, Joshua Neil, Nicole Nichols, Jugal Parikh, Haoran Wei.},
  Publisher = {GitHub},
  Howpublished = {\url{https://github.com/microsoft/cyberbattlesim}},
  Title = {CyberBattleSim},
  Year = {2021}
}

Note on privacy

This project does not include any customer data. The provided models and network topologies are purely fictitious. Users of the provided code provide all the input to the simulation and must have the necessary permissions to use any provided data.


Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.



Lucifer - A Powerful Penetration Tool For Automating Penetration Tasks Such As Local Privilege Escalation, Enumeration, Exfiltration And More...


A Powerful Penetration Tool For Automating Penetration Tasks Such As Local Privilege Escalation, Enumeration, Exfiltration and More... Use Or Build Automation Modules To Speed Up Your Cyber Security Life

Setup
git clone https://github.com/Skiller9090/Lucifer.git
cd Lucifer
pip install -r requirements.txt
python main.py --help

If you want the cutting-edge changes, add -b dev to the end of git clone https://github.com/Skiller9090/Lucifer.git


Commands
Command - Description
help - Displays this menu
name - Shows the name of the current shell
id - Displays the current shell's id
show - Shows options or modules based on input, EX: show <options/modules>
options - Shows a list of variables/options already set
set - Sets a variable or option, EX: set <var_name> <value>
set_vars - Auto-sets needed variables for the loaded module
description - Displays the description of the loaded module
auto_vars - Displays whether auto_vars is True or False for the current shell
change_auto_vars - Changes the auto_vars option for one shell, all shells or future shells
reindex - Re-indexes all modules, allowing dynamic addition of modules
use - Moves into a module, EX: use <module>
run - Runs the current module (exploit does the same)
spawn_shell - Spawns an alternative shell
open_shell - Opens a shell by id, EX: open_shell <id>
show_shells - Shows all shell ids and attached names
set_name - Sets the current shell's name, EX: set_name <name>
set_name_id - Sets a shell's name by id, EX: set_name_id <id> <name>
clear - Clears the screen
close - Kills current input into the opened shell
reset - Resets everything
exit - Exits the program (quit does the same)

Command Use
  • No-Arg Commands
    • help - to display help menu

    • name - shows name of current shell

    • id - shows current shell id

    • options - shows a table of set options/vars

    • set_vars - automatically sets the variables needed for the loaded module (defaults are defined in the module)

    • description - show description of current loaded module

    • auto_vars - displays the current setting of auto_vars (if True, set_vars is automatically run on module load)

    • run - runs the module with the current options, exploit works the same

    • spawn_shell - spawns a new Shell instance

    • show_shells - shows all open shells ids and names

    • clear - clears the terminal/console screen

    • close - kills the input to current shell

    • reset - resets everything (not implemented)

    • exit - quits the program

  • Arg Commands
    • show <options/modules> - displays a list of set options or modules depending on argument.

    • set <var_name> <value> - sets a variable/option

    • change_auto_vars <to_set> <args>:

      • <to_set> - can be true or false (t or f) (-t or -f)

      • <args>:

        • -g = global - sets for all shells spawned

        • -n = new - sets this option for future shell spawns

        • -i = inclusive - no matter what, set current shell to <to_set>

    • use <module> <args>:

      • <module> - path to module

      • <args>:

        • -R - Override cache (reload dynamically)
    • open_shell <id> - opens a shell by its id

    • set_name <name> - set the name of the current shell

    • set_name_id <id> <name> - set the name of the shell specified by <id>


Using Java

Lucifer allows Python and Java code to work side by side through the LMI.Java extension. For this to work, you will need to install jpype1; to do so, run the following command in your Python environment:
pip install jpype1
From here you are free to interact with LMI.Java.compiler and LMI.Java.luciferJVM, which allow you to call Java functions and instantiate Java classes from Python. More documentation on this will be added to the Lucifer wiki later on.
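For context, the snippet below is a generic jpype example (not Lucifer's LMI.Java API) showing the underlying mechanism: starting a JVM from Python and instantiating a standard Java class. Classpath and class names are illustrative:

import jpype

# Start the JVM (jpype locates a default JVM; pass classpath=[...] for your own jars).
jpype.startJVM()

# Instantiate and call a standard Java class from Python.
ArrayList = jpype.JClass("java.util.ArrayList")
items = ArrayList()
items.add("lucifer")
print(items.size())  # -> 1

jpype.shutdownJVM()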


Examples

Settings Variables



Running Module



Settings



Versioning

The standard of versioning on this project is:


MAJOR.MINOR.PATCH.STAGE.BUILD

Major:
  • incremented when either there has been a significant amount of new features since the start of the Major, or when there is a change so big that it can cause compatibility issues (a Major of 0 means very unstable)
  • Could cause incompatibility issues

Minor:
  • incremented when a new feature or feature-set is added to the project
  • should not cause incompatibility errors due to only additions made

Patch:
  • incremented on bugfixes, or if a feature is so small that it is not worth incrementing Minor
  • very low risk of incompatibility error

Stage:
  • The stage of the current MAJOR.MINOR.PATCH build: either alpha, beta, release candidate or release
  • Indicates how far through development the new MAJOR.MINOR.PATCH is
  • Stage number to name translation:
    • 0 => beta (b)
    • 1 => alpha (a)
    • 2 => release candidate (rc)
    • 3 => release (r)

Build:
  • this should be incremented on every change made to the code, even on a one character change

This version structure can be stored and displayed in a few ways:

  • The best way to store the data within code is via a tuple such as:
    • (Major, Minor, Patch, Stage, Build)
      • Example is: (1, 4, 1, 1, 331)
  • The long display would be:
    • {stage} {major}.{minor}.{patch} Build {build}
      • Example is: Alpha 1.4.1 Build 331
  • The short display would be:
    • {major}.{minor}.{patch}{stage}{build}
      • Example is: 1.4.1a331
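To tie the pieces together, here is a small sketch (not part of Lucifer) that renders the version tuple in the long and short displays using the stage mapping above:

# Sketch: render the version tuple in the long and short formats described above.
STAGES = {0: ("Beta", "b"), 1: ("Alpha", "a"), 2: ("Release Candidate", "rc"), 3: ("Release", "r")}

def long_display(version):
    major, minor, patch, stage, build = version
    return f"{STAGES[stage][0]} {major}.{minor}.{patch} Build {build}"

def short_display(version):
    major, minor, patch, stage, build = version
    return f"{major}.{minor}.{patch}{STAGES[stage][1]}{build}"

print(long_display((1, 4, 1, 1, 331)))   # Alpha 1.4.1 Build 331
print(short_display((1, 4, 1, 1, 331)))  # 1.4.1a331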


Waybackurls - Fetch All The URLs That The Wayback Machine Knows About For A Domain


Accept line-delimited domains on stdin, fetch known URLs from the Wayback Machine for *.domain and output them on stdout.

Usage example:

▶ cat domains.txt | waybackurls > urls

Install:

▶ go get github.com/tomnomnom/waybackurls


Credit

This tool was inspired by @mhmdiaa's waybackurls.py script. Thanks to them for the great idea!
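Under the hood, this kind of tool queries the Wayback Machine's public CDX API. For illustration, here is a rough Python equivalent; the endpoint and parameters are the publicly documented CDX ones, but treat the exact query shape as an approximation of what waybackurls does:

import requests

def wayback_urls(domain):
    # Query the Wayback Machine CDX API for every captured URL under *.domain
    resp = requests.get(
        "http://web.archive.org/cdx/search/cdx",
        params={
            "url": f"*.{domain}/*",
            "output": "json",
            "collapse": "urlkey",
            "fl": "original",
        },
        timeout=30,
    )
    rows = resp.json()
    return [row[0] for row in rows[1:]]  # first row is the header

for url in wayback_urls("example.com"):
    print(url)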



Kiterunner - Contextual Content Discovery Tool


For the longest of times, content discovery has been focused on finding files and folders. While this approach is effective for legacy web servers that host static files or respond with 3xx’s upon a partial path, it is no longer effective for modern web applications, specifically APIs.

Over time, we have seen a lot of time invested in making content discovery tools faster so that larger wordlists can be used, however the art of content discovery has not been innovated upon.

Kiterunner is a tool that is capable of not only performing traditional content discovery at lightning fast speeds, but also bruteforcing routes/endpoints in modern applications.

Modern application frameworks such as Flask, Rails, Express, Django and others follow the paradigm of explicitly defining routes which expect certain HTTP methods, headers, parameters and values.

When using traditional content discovery tooling, such routes are often missed and cannot easily be discovered.

By collating a dataset of Swagger specifications and condensing it into our own schema, Kiterunner can use this dataset to bruteforce API endpoints by sending the correct HTTP method, headers, path, parameters and values for each request it sends.

Swagger files were collected from a number of datasources, including an internet wide scan for the 40+ most common swagger paths. Other datasources included GitHub via BigQuery, and APIs.guru.


Installation

Downloading a release

You can download a pre-built copy from https://github.com/assetnote/kiterunner/releases.


Building from source
# build the binary
make build

# symlink your binary
ln -s $(pwd)/dist/kr /usr/local/bin/kr

# compile the wordlist
# kr kb compile <input.json> <output.kite>
kr kb compile routes.json routes.kite

# scan away
kr scan hosts.txt -w routes.kite -x 20 -j 100 --ignore-length=1053

The JSON datasets can be found below:

Alternatively, it is possible to download the compiled .kite files from the links below:


Usage

Quick Start
kr [scan|brute] <input> [flags]
  • <input> can be a file, a domain, or a URI. We'll figure it out for you. See Input/Host Formatting for more details
# Just have a list of hosts and no wordlist
kr scan hosts.txt -A=apiroutes-210328:20000 -x 5 -j 100 --fail-status-codes 400,401,404,403,501,502,426,411

# You have your own wordlist but you want assetnote wordlists too
kr scan target.com -w routes.kite -A=apiroutes-210328:20000 -x 20 -j 1 --fail-status-codes 400,401,404,403,501,502,426,411

# Bruteforce like normal but with the first 20000 words
kr brute https://target.com/subapp/ -A=aspx-210328:20000 -x 20 -j 1

# Use a dirsearch style wordlist with %EXT%
kr brute https://target.com/subapp/ -w dirsearch.txt -x 20 -j 1 -exml,asp,aspx,ashx -D

CLI Help
Usage:
kite scan [flags]

Flags:
-A, --assetnote-wordlist strings use the wordlists from wordlist.assetnote.io. specify the type/name to use, e.g. apiroutes-210228. You can specify an additional maxlength to use only the first N values in the wordlist, e.g. apiroutes-210228;20000 will only use the first 20000 lines in that wordlist
--blacklist-domain strings domains that are blacklisted for redirects. We will not follow redirects to these domains
--delay duration delay to place inbetween requests to a single host
--disable-precheck whether to skip host discovery
--fail-status-codes ints which status codes blacklist as fail. if this is set, this will override success-status-codes
--filter-api strings only scan apis matching this ksuid
--force-method string whether to ignore the methods specified in the ogl file and force this method
-H, --header strings headers to add to requests (default [x-forwarded-for: 127.0.0.1])
-h, --help help for scan
--ignore-length strings a range of content length bytes to ignore. you can have multiple. e.g. 100-105 or 1234 or 123,34-53. This is inclusive on both ends
--kitebuilder-full-scan perform a full scan without first performing a phase scan.
-w, --kitebuilder-list strings ogl wordlist to use for scanning
-x, --max-connection-per-host int max connections to a single host (default 3)
-j, --max-parallel-hosts int max number of concurrent hosts to scan at once (default 50)
--max-redirects int maximum number of redirects to follow (default 3)
-d, --preflight-depth int when performing preflight checks, what directory depth do we attempt to check. 0 means that only the docroot is checked (default 1)
--profile-name string name for profile output file
--progress a progress bar while scanning. by default enabled only on Stderr (default true)
--quarantine-threshold int if the host return N consecutive hits, we quarantine the host as wildcard. Set to 0 to disable (default 10)
--success-status-codes ints which status codes whitelist as success. this is the default mode
-t, --timeout duration timeout to use on all requests (default 3s)
--user-agent string user agent to use for requests (default "Chrome. Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.96 Safari/537.36")
--wildcard-detection can be set to false to disable wildcard redirect detection (default true)

Global Flags:
--config string config file (default is $HOME/.kiterunner.yaml)
-o, --output string output format. can be json,text,pretty (default "pretty")
-q, --quiet quiet mode. will mute unecessarry pretty text
-v, --verbose string level of logging verbosity. can be error,info,debug,trace (default "info")

bruteforce flags (all the flags above +)

  -D, --dirsearch-compat              this will replace %EXT% with the extensions provided. backwards compat with dirsearch because shubs loves him some dirsearch
-e, --extensions strings extensions to append while scanning
-w, --wordlist strings normal wordlist to use for scanning

Input/Host Formatting

When supplied with an input, kiterunner will attempt to resolve the input in the following order:

  1. Is the input a file? If so, read all the lines in the file as separate domains
  2. The input is treated as a "domain"

If you supply a "domain", but it exists as a file, e.g. google.com but google.com is also a txt file in the current directory, we'll load google.com the text file, because we found it first.

Domain Parsing

It's preferred that you provide a full URI as the input; however, you can provide incomplete URIs and we'll try to guess what you mean. An example list of domains you can supply is:

one.com
two.com:80
three.com:443
four.com:9447
https://five.com:9090
http://six.com:80/api

The above list of domains will expand into the subsequent list of targets

(two targets are created for one.com, since neither port nor protocol was specified)
http://one.com (port 80 implied)
https://one.com (port 443 implied)

http://two.com (port 80 implied)
https://three.com (port 443 implied)
http://four.com:9447 (non-tls port guessed)
https://five.com:9090
http://six.com/api (port 80 implied; basepath API appended)

the rules we apply are:

  • if you supply a scheme, we use the scheme.
    • We only support http & https
    • if you don't supply a scheme, we'll guess based on the port
  • if you supply a port, we'll use the port
    • If your port is 443, or 8443, we'll assume its tls
    • if you don't supply a port, we'll guess both port 80, 443
  • if you supply a path, we'll prepend that path to all requests against that host

API Scanning

When you have a single target

# single target
kr scan https://target.com:8443/ -w routes.kite -A=apiroutes-210228:20000 -x 10 --ignore-length=34

# single target, but you want to try http and https
kr scan target.com -w routes.kite -A=apiroutes-210228:20000 -x 10 --ignore-length=34

# a list of targets
kr scan targets.txt -w routes.kite -A=apiroutes-210228:20000 -x 10 --ignore-length=34

Vanilla Bruteforcing
kr brute https://target.com -A=raft-large-words -A=apiroutes-210228:20000 -x 10 -d=0 --ignore-length=34 -ejson,txt

Dirsearch Bruteforcing

For when you have an old-school wordlist that still has %EXT% in the wordlist, you can use -D. this will only substitute the extension where %EXT% is present in the path

kr brute https://target.com -w dirsearch.txt -x 10 -d=0 --ignore-length=34 -ejson,txt -D

Technical Features

Depth Scanning

A key feature of kiterunner is depth-based scanning. This attempts to handle wildcard detection for virtual, path-based application routing. The depth defines how many directories deep the baseline checks are performed. E.g.

~/kiterunner $ cat wordlist.txt

/api/v1/user/create
/api/v1/user/delete
/api/v2/user/
/api/v2/admin/
/secrets/v1/
/secrets/v2/
  • At depth 0, only / would have the baseline checks performed for wildcard detection
  • At depth 1, /api and /secrets would have baseline checks performed; and these checks would be used against /api and /secrets correspondingly
  • At depth 2, /api/v1, /api/v2, /secrets/v1 and /secrets/v2 would all have baseline checks performed.

By default, kr scan has a depth of 1, since from internal usage we've often seen this as the most common depth at which virtual routing has occurred. kr brute has a default depth of 0, as you typically don't want this check to be performed with a static wordlist.

Naturally, increasing the depth will increase the accuracy of your scans; however, it also increases the number of requests to the target (# of baseline checks * # of depth baseline directories). Hence, we recommend not going above depth 1, and only in rare cases going to depth 2.
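To make the depth rule concrete, here is a small sketch in plain Python (not kiterunner's internals) that derives which directory prefixes would receive baseline checks at a given depth for the wordlist above:

def baseline_prefixes(paths, depth):
    # Directory prefixes that would receive wildcard baseline checks at a given depth.
    prefixes = set()
    for p in paths:
        parts = [seg for seg in p.split("/") if seg]
        prefixes.add("/" + "/".join(parts[:depth]) if depth else "/")
    return sorted(prefixes)

wordlist = [
    "/api/v1/user/create", "/api/v1/user/delete",
    "/api/v2/user/", "/api/v2/admin/",
    "/secrets/v1/", "/secrets/v2/",
]
print(baseline_prefixes(wordlist, 1))  # ['/api', '/secrets']
print(baseline_prefixes(wordlist, 2))  # ['/api/v1', '/api/v2', '/secrets/v1', '/secrets/v2']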


Using Assetnote Wordlists

We provide inbuilt downloading and caching of wordlists from assetnote.io. You can use these with the -A flag which receives a comma delimited list of aliases, or fullnames.

You can get a full list of all the Assetnote wordlists with kr wordlist list.

The wordlists when used, are cached in ~/.cache/kiterunner/wordlists. When used, these are compiled from .txt -> .kite

+-----------------------------------+-------------------------------------------------------+----------------+---------+----------+--------+
| ALIAS | FILENAME | SOURCE | COUNT | FILESIZE | CACHED |
+-----------------------------------+-------------------------------------------------------+----------------+---------+----------+--------+
| 2m-subdomains | 2m-subdomains.txt | manual.json | 2167059 | 28.0mb | false |
| asp_lowercase | asp_lowercase.txt | manual.json | 24074 | 1.1mb | false |
| aspx_lowercase | aspx_lowercase.txt | manual.json | 80293 | 4.4mb | false |
| bak | bak.txt | manual.json | 31725 | 634.8kb | false |
| best-dns-wordlist | best-dns-wordlist.txt | manual.json | 9996122 | 139.0mb | false |
| cfm | cfm.txt | manual.json | 12100 | 260.3kb | true |
| do | do.txt | manual.json | 173152 | 4.8mb | false |
| dot_filenames | dot_filenames.txt | manual.json | 3191712 | 71.3mb | false |
| html | html.txt | manual.json | 4227526 | 107.7mb | false |
| apiroutes-201120 | httparchive_apiroutes_2020_11_20.txt | automated.json | 953011 | 45.3mb | false |
| apiroutes-210128 | httparchive_apiroutes_2021_01_28.txt | automated.json | 225456 | 6.6mb | false |
| apiroutes-210228 | httparchive_apiroutes_2021_02_28.txt | automated.json | 223544 | 6.5mb | true |
| apiroutes-210328 | httparchive_apiroutes_2021_03_28.txt | automated.json | 215114 | 6.3mb | false |
| aspx-201118 | httparchive_aspx_asp_cfm_svc_ashx_asmx_2020_11_18.txt | automated.json | 63200 | 1.7mb | false |
| aspx-210128 | httparchive_aspx_asp_cfm_svc_ashx_asmx_2021_01_28.txt | automated.json | 46286 | 928.7kb | false |
| aspx-210228 | httparchive_aspx_asp_cfm_svc_ashx_asmx_2021_02_28.txt | automated.json | 43958 | 883.3kb | false |
| aspx-210328 | httparchive_aspx_asp_cfm_svc_ashx_asmx_2021_03_28.txt | automated.json | 45928 | 926.8kb | false |
| cgi-201118 | httparchive_cgi_pl_2020_11_18.txt | automated.json | 2637 | 44.0kb | false |

<SNIP>

Usage

kr scan targets.txt -A=apiroutes-210228 -x 10 --ignore-length=34
kr brute targets.txt -A=aspx-210228 -x 10 --ignore-length=34 -easp,aspx

Head Syntax

When using assetnote provided wordlists, you may not want to use the entire wordlist, so you can opt to use the first N lines in a given wordlist using the head syntax. The format is <wordlist_name>:<N lines> when specifying a wordlist.

Usage

# this will use the first 20000 lines in the api routes wordlist
kr scan targets.txt -A=apiroutes-210228:20000 -x 10 --ignore-length=34

# this will use the first 10 lines in the aspx wordlist
kr brute targets.txt -A=aspx-210228:10 -x 10 --ignore-length=34 -easp,aspx

Concurrency Settings/Going Fast

Kiterunner is made to go fast on a lot of hosts. But just because you can run kiterunner at 20000 goroutines doesn't mean it's a good idea. Bottlenecks and performance degradation will occur at high thread counts due to more time spent scheduling goroutines that are waiting on network IO and kernel context switching.

There are two main concurrency settings for kiterunner:

  • -x, --max-connection-per-host - maximum number of open connections we can have on a host. Governed by 1 goroutine each. To avoid DOS'ing a host, we recommend keeping this in a low realm of 5-10. Depending on latency to the target, this will yield on average between 1-5 requests per second per connection (200ms - 1000ms/req) to a host.
  • -j, --max-parallel-hosts - maximum number of hosts to scan at any given time. Governed by 1 goroutine supervisor for each

Depending on the hardware you are scanning from, the "maximum" number of goroutines you can run optimally will vary. On an AWS t3.medium, we saw performance degradation going over 2500 goroutines. Meaning, 500 hosts x 5 conn per host (2500) would yield peak performance.

We recommend against running kiterunner from your macbook. Due to poor kernel optimisations for high IO counts and Epoll syscalls on macOS, we noticed substantially poorer (0.3-0.5x) performance when compared to running kiterunner on a similarly configured linux instance.

To maximise performance when scanning an individual target, or a large attack surface we recommend the following tips:

  • Spin up an EC2 instance in a similar geographic region/datacenter to the target(s) you are scanning
  • Perform some initial benchmarks against your target set with varying -x and -j options. We recommend having a typical starting point of around -x 5 -j 100 and moving -j upwards as your CPU usage/network performance permits

Converting between file formats

Kiterunner will also let you convert between the schema JSON, a kite file and a standard txt wordlist.

Usage

The format is decided by the filetype extension supplied by the <input> and <output> fields. We support txt, json and kite

kr kb convert wordlist.txt wordlist.kite
kr kb convert wordlist.kite wordlist.json
kr kb convert wordlist.kite wordlist.txt
❯ go run ./cmd/kiterunner kb convert -qh
convert an input file format into the specified output file format

this will determine the conversion based on the extensions of the input and the output
we support the following filetypes: txt, json, kite
You can convert any of the following into the corresponding types

-d Debug mode will attempt to convert the schema with error handling
-v=debug Debug verbosity will print out the errors for the schema

Usage:
kite kb convert <input> <output> [flags]

Flags:
-d, --debug debug the parsing
-h, --help help for convert

Global Flags:
--config string config file (default is $HOME/.kiterunner.yaml)
-o, --output string output format. can be json,text,pretty (default "pretty")
-q, --quiet quiet mode. will mute unecessarry pretty text
-v, --verbose string level of logging verbosity. can be error,info,debug,trace (default "info")

Replaying requests

When you receive a bunch of output from kiterunner, it may be difficult to immediately understand why a request is causing a specific response code/length. Kiterunner offers a method of rebuilding the request from the wordlists used including all the header and body parameters.

  • You can replay a request by copy pasting the full response output into the kb replay command.
  • You can specify a --proxy to forward your requests through, so you can modify/repeat/intercept the request using 3rd party tools if you wish
  • The golang net/http client will make a few additional changes to your request, due to how the default golang spec implementation works (unfortunately).
❯ go run ./cmd/kiterunner kb replay -q --proxy=http://localhost:8080 -w routes.kite "POST    403 [    287,   10,   1] https://target.com/dedalo/lib/dedalo/publication/server_api/v1/json/thesaurus_parents 0cc39f76702ea287ec3e93f4b4710db9c8a86251"
11:25AM INF Raw reconstructed request
POST /dedalo/lib/dedalo/publication/server_api/v1/json/thesaurus_parents?ar_fields=48637466&code=66132381&db_name=08791392&lang=lg-eng&recursive=false&term_id=72336471 HTTP/1.1
Content-Type: any


11:25AM INF Outbound request
POST /dedalo/lib/dedalo/publication/server_api/v1/json/thesaurus_parents?ar_fields=48637466&code=66132381&db_name=08791392&lang=lg-eng&recursive=false&term_id=72336471 HTTP/1.1
Host: target.com
User-Agent: Go-http-client/1.1
Content-Length: 0
Content-Type: any
Accept-Encoding: gzip


11:25AM INF Response After Redirects
HTTP/1.1 403 Forbidden
Connection: close
Content-Length: 45
Content-Type: application/json
Date: Wed, 07 Apr 2021 01:25:28 GMT
X-Amzn-Requestid: 7e6b2ea1-c662-4671-9eaa-e8cd31b463f2

User is not authorized to perform this action

Technical Implementation

Intermediate Data Type (PRoutes)

We use an intermediate representation of wordlists and kitebuilder json schemas in kiterunner. This is to allow us to dynamically generate the fields in the wordlist and reconstruct request bodies/headers and query parameters from a given spec.

The PRoute type is composed of Headers, Body, Query and Cookie parameters that are encoded in pkg/proute.Crumb. The Crumb type is an interface that is implemented on types such as UUIDs, Floats, Ints, Random Strings, etc.

When performing conversions to and from txt, json and kite files, all the conversions are first done to the proute.API intermediate type. Then the corresponding encoding is written out.


Kite File Format

We use a super secret kite file format for storing the json schemas from kitebuilder. These are simply protobuf encoded pkg/proute.APIS written to a file. The compilation is used to allow us to quickly deserialize the already parsed wordlist. This file format is not stable, and should only be interacted with using the inbuilt conversion tools for kiterunner.

When a new version of the kite file format is released, you may need to recompile your kite files


