
DLLHSC - DLL Hijack SCanner A Tool To Assist With The Discovery Of Suitable Candidates For DLL Hijacking

15 March 2021 at 11:30
By: Zion3R


DLL Hijack SCanner - A tool to generate leads and automate the discovery of candidates for DLL Search Order Hijacking


Contents of this repository

This repository hosts the Visual Studio project file for the tool (DLLHSC), the project file for the API hooking functionality (detour), the project file for the payload and last but not least the compiled executables for x86 and x64 architecture (in the release section of this repo). The code was written and compiled with Visual Studio Community 2019.

If you choose to compile the tool from source, you will need to compile the projects DLLHSC, detour and payload. The DLLHSC project implements the core functionality of this tool. The detour project generates a DLL that is used to hook APIs. And the payload project generates the DLL that is used as a proof of concept to check if the tested executable can load it via search order hijacking. The generated payload has to be placed in the same directory as DLLHSC and detour, named payload32.dll for x86 and payload64.dll for x64 architecture.


Modes of operation

The tool implements 3 modes of operation which are explained below.


Lightweight Mode

Loads the executable image in memory, parses the Import Table and then replaces any DLL referenced in the Import Table with a payload DLL.

The tool places in the application directory only a module (DLL) that is not already present in the application directory, does not belong to WinSxS and does not belong to the KnownDLLs.

The payload DLL, upon execution, creates a file in the following path: C:\Users\%USERNAME%\AppData\Local\Temp\DLLHSC.tmp as a proof of execution. The tool launches the application and reports whether the payload DLL was executed by checking if the temporary file exists. As some executables import functions from the DLLs they load, error message boxes may appear when the provided DLL fails to export these functions and thus meet the dependencies of the provided image. However, these message boxes indicate the DLL may be a good candidate for payload execution if the dependencies are met. In this case, additional analysis is required. The title of these message boxes may contain the strings: Ordinal Not Found or Entry Point Not Found. DLLHSC looks for windows that contain these strings, closes them as soon as they show up and reports the results.
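For illustration only, a minimal proof-of-execution payload DLL along the lines described above could look like the following C++ sketch (the temporary file name is taken from the description; the structure and everything else are assumptions, not the project's actual source):

#include <windows.h>

// Minimal sketch of a proof-of-execution payload DLL: when the scanned
// executable loads this DLL, DllMain drops a marker file that the scanner
// can later check for.
BOOL APIENTRY DllMain(HMODULE hModule, DWORD reason, LPVOID lpReserved)
{
    if (reason == DLL_PROCESS_ATTACH)
    {
        char path[MAX_PATH];
        // GetTempPathA resolves to C:\Users\%USERNAME%\AppData\Local\Temp\ at run time.
        if (GetTempPathA(MAX_PATH, path))
        {
            lstrcatA(path, "DLLHSC.tmp");
            HANDLE h = CreateFileA(path, GENERIC_WRITE, 0, NULL,
                                   CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
            if (h != INVALID_HANDLE_VALUE)
                CloseHandle(h);
        }
    }
    return TRUE;
}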


List Modules Mode

Creates a process with the provided executable image, enumerates the modules that are loaded in the address space of this process and reports the results after applying filters.

The tool only reports modules that are loaded from the System directory and do not belong to the KnownDLLs. The results are leads that require additional analysis. The analyst can then place the reported modules in the application directory and check if the application loads the provided module instead.


Run-Time Mode

Hooks the LoadLibrary and LoadLibraryEx APIs via Microsoft Detours and reports the modules that are loaded in run-time.

Each time the scanned application calls LoadLibrary or LoadLibraryEx, the tool intercepts the call and writes the requested module to the file C:\Users\%USERNAME%\AppData\Local\Temp\DLLHSCRTLOG.tmp. If LoadLibraryEx is called with the flag LOAD_LIBRARY_SEARCH_SYSTEM32, no output is written to the file. After all interceptions have finished, the tool reads the file and prints the results. Of interest for further analysis are modules that do not exist in the KnownDLLs registry key, modules that do not exist in the System directory, and modules with no full path (for these modules the loader applies the normal search order).
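The hooking approach described above can be sketched roughly as follows with Microsoft Detours (a simplified C++ illustration for LoadLibraryW only; the log file name comes from the description, while the structure and names are assumptions, not the tool's actual detour code):

#include <windows.h>
#include <detours.h>
#include <cstdio>

// Pointer to the original LoadLibraryW, updated by Detours when the hook is attached.
static HMODULE (WINAPI *TrueLoadLibraryW)(LPCWSTR lpLibFileName) = LoadLibraryW;

// Replacement that logs the requested module name, then forwards the call.
static HMODULE WINAPI HookedLoadLibraryW(LPCWSTR lpLibFileName)
{
    wchar_t logPath[MAX_PATH];
    if (GetTempPathW(MAX_PATH, logPath))
    {
        lstrcatW(logPath, L"DLLHSCRTLOG.tmp");
        FILE *log = _wfopen(logPath, L"a");
        if (log)
        {
            fwprintf(log, L"%s\n", lpLibFileName ? lpLibFileName : L"(null)");
            fclose(log);
        }
    }
    return TrueLoadLibraryW(lpLibFileName);
}

// Attach/detach the hook when the detour DLL is loaded into the target process.
BOOL APIENTRY DllMain(HMODULE hModule, DWORD reason, LPVOID reserved)
{
    if (DetourIsHelperProcess())
        return TRUE;

    if (reason == DLL_PROCESS_ATTACH)
    {
        DetourTransactionBegin();
        DetourUpdateThread(GetCurrentThread());
        DetourAttach(&(PVOID&)TrueLoadLibraryW, HookedLoadLibraryW);
        DetourTransactionCommit();
    }
    else if (reason == DLL_PROCESS_DETACH)
    {
        DetourTransactionBegin();
        DetourUpdateThread(GetCurrentThread());
        DetourDetach(&(PVOID&)TrueLoadLibraryW, HookedLoadLibraryW);
        DetourTransactionCommit();
    }
    return TRUE;
}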


Compile and Run Guidance

Should you choose to compile the tool from source, it is recommended to do so with Visual Studio 2019. In order for the tool to function properly, the projects DLLHSC, detour and payload have to be compiled for the same architecture and then placed in the same directory. Please note that the DLL generated from the project payload has to be renamed to payload32.dll for 32-bit architecture or payload64.dll for 64-bit architecture.


Help menu

The help menu for this application:

NAME
dllhsc - DLL Hijack SCanner

SYNOPSIS
dllhsc.exe -h

dllhsc.exe -e <executable image path> (-l|-lm|-rt) [-t seconds]

DESCRIPTION
DLLHSC scans a given executable image for DLL Hijacking and reports the results

It requires elevated privileges

OPTIONS
-h, --help
display this help menu and exit

-e, --executable-image
executable image to scan

-l, --lightweight
parse the import table, attempt to launch a payload and report the results

-lm, --list-modules
list loaded modules that do not exist in the application's directory

-rt, --runtime-load
display modules loaded in run-time by hooking LoadLibrary and LoadLibraryEx APIs

-t, --timeout
number of seconds to wait for checking any popup error windows - defaults to 10 seconds


Example Runs

This section provides examples of how you can run DLLHSC and the results it reports. For this purpose, the legitimate Microsoft utility OleView.exe (MD5: D1E6767900C85535F300E08D76AAC9AB) was used. For better results, it is recommended that the provided executable image is scanned within its installation directory.

The flag -l parses the import table of the provided executable, applies filters and attempts to weaponize the imported modules by placing a payload DLL in the application's current directory. The scanned executable may pop an error message box when dependencies for the payload DLL (exported functions) are not met. DLLHSC checks for 10 seconds by default whether a message box was opened, or for as many seconds as specified by the user with the flag -t. An error message box indicates that if dependencies are met, the module can be weaponized.
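Based on the synopsis above, a typical invocation of this mode would look something like the following (the OleView.exe path is assumed for illustration):

dllhsc.exe -e C:\Tools\OleView.exe -l -t 10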

The following screenshot shows the error message box generated when OleView.exe loads the payload DLL:



The tool waits for a maximum timeframe of 10 seconds or -t seconds to make sure the process initialization has finished and any message box has been generated. It then detects the message box, closes it and reports the result:



The flag -lm launches the provided executable and prints the modules it loads that neither belong to the KnownDLLs list nor are WinSxS dependencies. This mode aims to give an idea of DLLs that may be used as payload and it only exists to generate leads for the analyst.



The flag -rt prints the modules the provided executable image loads in its address space when launched as a process. This is achieved by hooking the LoadLibrary and LoadLibraryEx APIs via Microsoft Detours.



Feedback

For any feedback on this tool, please use the GitHub Issues section.



Retoolkit - Reverse Engineer's Toolkit

26 March 2021 at 11:30
By: Zion3R


This is a collection of tools you may like if you are interested in reverse engineering and/or malware analysis on x86 and x64 Windows systems. After installing this toolkit you'll have a folder on your desktop with shortcuts to RE tools like these:


Why do I need it?

You don't. Obviously, you can download such tools from their own website and install them by yourself in a new VM. But if you download retoolkit, it can probably save you some time. Additionally, the tools come pre-configured so you'll find things like x64dbg with a few plugins, command-line tools working from any directory, etc. You may like it if you're setting up a new analysis VM.


Download

The *.iss files you see here are the source code for our setup program built with Inno Setup. To download the real thing, you have to go to the Releases section and download the setup program.


Included tools

Check the wiki.



Is it safe to install it in my environment?

I don't know. Some included tools are not open source and come from shady places. You should use it exclusively in virtual machines and under your own responsibility.


Can you add tool X?

It depends. The idea is to keep it simple. We won't add a tool just because it's not here yet. But if you think there's a good reason to do so, and the license allows us to redistribute the software, please file a request here.



Php_Code_Analysis - Scan Your PHP Code For Vulnerabilities


This script will scan your code.

The script can find:

  1. check_file_upload issues
  2. host_header_injection
  3. SQL injection
  4. insecure deserialization
  5. open_redirect
  6. SSRF
  7. XSS
  8. LFI
  9. command_injection

Features:
  1. Fast
  2. Simple report

Usage:
python code.py <file name> >>> this will scan one file
python code.py >>> this will scan full folder (.)
python code.py <path> >>> scan full folder

Kaiju - A Binary Analysis Framework Extension For The Ghidra Software Reverse Engineering Suite


CERT Kaiju is a collection of binary analysis tools for Ghidra.

This is a Ghidra/Java implementation of some features of the CERT Pharos Binary Analysis Framework, particularly the function hashing and malware analysis tools, but is expected to grow new tools and capabilities over time.

As this is a new effort, this implementation does not yet have full feature parity with the original C++ implementation based on ROSE; however, the move to Java and Ghidra has actually enabled some new features not available in the original framework -- notably, improved handling of non-x86 architectures. Since some significant re-architecting of the framework and tools is taking place, and the move to Java and Ghidra enables different capabilities than the C++ implementation, the decision was made to utilize new branding such that there would be less confusion between implementations when discussing the different tools and capabilities.

Our intention for the near future is to maintain both the original Pharos framework as well as Kaiju, side-by-side, since both can provide unique features and capabilities.

CAVEAT: As a prototype, there are many issues that may come up when evaluating the function hashes created by this plugin. For example, unlike the Pharos implementation, Kaiju's function hashing module will create hashes for very small functions (e.g., ones with a single instruction like RET), causing many more unintended collisions. As such, analytical results may vary between this plugin and Pharos fn2hash.


Quick Installation

Pre-built Kaiju packages are available. Simply download the ZIP file corresponding with your version of Ghidra and install according to the instructions below. It is recommended to install via Ghidra's graphical interface, but it is also possible to manually unzip into the appropriate directory to install.

CERT Kaiju requires the following runtime dependencies:

NOTE: It is also possible to build the extension package on your own and install it. Please see the instructions under the "Build Kaiju Yourself" section below.


Graphical Installation

Start Ghidra, and from the opening window, select from the menu: File > Install Extension. Click the plus sign at the top of the extensions window, navigate and select the .zip file in the file browser and hit OK. The extension will be installed and a checkbox will be marked next to the name of the extension in the window to let you know it is installed and ready.

The interface will ask you to restart Ghidra to start using the extension. Simply restart, and then Kaiju's extra features will be available for use interactively or in scripts.

Some functionality may require enabling Kaiju plugins. To do this, open the Code Browser then navigate to the menu File > Configure. In the window that pops up, click the Configure link below the "CERT Kaiju" category icon. A pop-up will display all available publicly released Kaiju plugins. Check any plugins you wish to activate, then hit OK. You will now have access to interactive plugin features.

If a plugin is not immediately visible once enabled, you can find the plugin underneath the Window menu in the Code Browser.

Experimental "alpha" versions of future tools may be available from the "Experimental" category if you wish to test them. However these plugins are definitely experimental and unsupported and not recommended for production use. We do welcome early feedback though!


Manual Installation

Ghidra extensions like Kaiju may also be installed manually by unzipping the extension contents into the appropriate directory of your Ghidra installation. For more information, please see The Ghidra Installation Guide.


Usage

Kaiju's tools may be used either in an interactive graphical way, or via a "headless" mode more suited for batch jobs. Some tools may only be available for graphical or headless use, by the nature of the tool.


Interactive Graphical Interface

Kaiju creates an interactive graphical interface (GUI) within Ghidra utilizing Java Swing and Ghidra's plugin architecture.

Most of Kaiju's tools are actually Analysis plugins that run automatically when the "Auto Analysis" option is chosen, either upon import of a new executable to disassemble, or by directly choosing Analysis > Auto Analyze... from the code browser window. You will see several CERT Analysis plugins selected by default in the Auto Analyze tool, but you can enable/disable any as desired.

The Analysis tools must be run before the various GUI tools will work, however. In some corner cases, it may even be helpful to run the Auto Analysis twice to ensure all of the metadata is produced to create correct partitioning and disassembly information, which in turn can influence the hashing results.

Analyzers are automatically run during Ghidra's analysis phase and include:

  • DisasmImprovements = improves the function partitioning of the disassembly compared to the standard Ghidra partitioning.
  • Fn2Hash = calculates function hashes for all functions in a program and is used to generate YARA signatures for programs.

The GUI tools include:

  • Function Hash Viewer = a plugin that displays an interactive list of functions in a program and several types of hashes. Analysts can use this to export one or more functions from a program into YARA signatures.
    • Select Window > CERT Function Hash Viewer from the menu to get started with this tool if it is not already visible. A new window will appear displaying a table of hashes and other data. Buttons along the top of the window can refresh the table or export data to file or a YARA signature. This window may also be docked into the main Ghidra CodeBrowser for easier use alongside other plugins. More extensive usage documentation can be found in Ghidra's Help > Contents menu when using the tool.
  • OOAnalyzer JSON Importer = a plugin that can load, parse, and apply Pharos-generated OOAnalyzer results to object oriented C++ executables in a Ghidra project. When launched, the plugin will prompt the user for the JSON output file produced by OOAnalyzer that contains information about recovered C++ classes. After loading the JSON file, recovered C++ data types and symbols found by OOAnalyzer are updated in the Ghidra Code Browser. The plugin's design and implementation details are described in our SEI blog post titled Using OOAnalyzer to Reverse Engineer Object Oriented Code with Ghidra.
    • Select CERT > OOAnalyzer Importer from the menu to get started with this tool. A simple dialog popup will ask you to locate the JSON file you wish to import. More extensive usage documentation can be found in Ghidra's Help > Contents menu when using the tool.

Command-line "Headless" Mode

Ghidra also supports a "headless" mode allowing tools to be run in some circumstances without use of the interactive GUI. These commands can therefore be utilized for scripting and "batch mode" jobs of large numbers of files.

The headless tools largely rely on Ghidra's GhidraScript functionality.

Headless tools include:

  • fn2hash = automatically run Fn2Hash on a given program and export all the hashes to a CSV file specified
  • fn2yara = automatically run Fn2Hash on a given program and export all hash data as YARA signatures to the file specified
  • fnxrefs = analyze a Program and export a list of Functions based on entry point address that have cross-references in data or other parts of the Program

A simple shell launch script named kaijuRun has been included to run these headless commands for simple scenarios, such as outputting the function hashes for every function in a single executable. Assuming the GHIDRA_INSTALL_DIR variable is set, one might for example run the launch script on a single executable as follows:

$GHIDRA_INSTALL_DIR/Ghidra/Extensions/kaiju/kaijuRun fn2hash example.exe

This command would output the results to an automatically named file, example.exe.Hashes.csv.

Basic help for the kaijuRun script is available by running:

$GHIDRA_INSTALL_DIR/Ghidra/Extensions/kaiju/kaijuRun --help

Please see docs/HeadlessKaiju.md file in the repository for more information on using this mode and the kaijuRun launcher script.


Further Documentation and Help

More comprehensive documentation and help is available, in one of two formats.

See the docs/ directory for Markdown-formatted documentation and help for all Kaiju tools and components. These documents are easy to maintain and edit and read even from a command line.

Alternatively, you may find the same documentation in Ghidra's built-in help system. To access these help docs, from the Ghidra menu, go to Help > Contents and then select CERT Kaiju from the tree navigation on the left-hand side of the help window.

Please note that the Ghidra Help documentation is the exact same content as the Markdown files in the docs/ directory; thanks to an in-tree gradle plugin, gradle will automatically parse the Markdown and export into Ghidra HTML during the build process. This allows even simpler maintenance (update docs in just one place, not two) and keeps the two in sync.

All new documentation should be added to the docs/ directory.


Building Kaiju Yourself Using Gradle

As an alternative to the pre-built packages, you may compile and build Kaiju yourself.


Build Dependencies

CERT Kaiju requires the following build dependencies:

  • Ghidra 9.1+ (9.2+ recommended)
  • gradle 6.4+ (latest gradle 6.x recommended, 7.x not supported)
  • GSON 2.8.6
  • Java 11+ (we recommend OpenJDK 11)

NOTE ABOUT GRADLE: Please ensure that gradle is building against the same JDK version in use by Ghidra on your system, or you may experience installation problems.

NOTE ABOUT GSON: In most cases, Gradle will automatically obtain this for you. If you find that you need to obtain it manually, you can download gson-2.8.6.jar and place it in the kaiju/lib directory.


Build Instructions

Once dependencies are installed, Kaiju may be built as a Ghidra extension by using the gradle build tool. It is recommended to first set a Ghidra environment variable, as Ghidra installation instructions specify.

In short: set GHIDRA_INSTALL_DIR as an environment variable first, then run gradle without any options:

export GHIDRA_INSTALL_DIR=<Absolute path to Ghidra install dir>
gradle

NOTE: Your Ghidra install directory is the directory containing the ghidraRun script (the top level directory after unzipping the Ghidra release distribution into the location of your choice.)

If for some reason your environment variable is not or cannot be set, you can also specify it on the command line with:

gradle -PGHIDRA_INSTALL_DIR=<Absolute path to Ghidra install dir>

In either case, the newly-built Kaiju extension will appear as a .zip file within the dist/ directory. The filename will include "Kaiju", the version of Ghidra it was built against, and the date it was built. If all goes well, you should see a message like the following that tells you the name of your built plugin.

Created ghidra_X.Y.Z_PUBLIC_YYYYMMDD_kaiju.zip in <path/to>/kaiju/dist

where X.Y.Z is the version of Ghidra you are using, and YYYYMMDD is the date you built this Kaiju extension.


Optional: Running Tests With AUTOCATS

While not required, you may want to use the Kaiju testing suite to verify proper compilation and ensure there are no regressions while testing new code or before you install Kaiju in a production environment.

In order to run the Kaiju testing suite, you will need to first obtain the AUTOCATS (AUTOmated Code Analysis Testing Suite). AUTOCATS contains a number of executables and related data to perform tests and check for regressions in Kaiju. These test cases are shared with the Pharos binary analysis framework, therefore AUTOCATS is located in a separate git repository.

Clone the AUTOCATS repository with:

git clone https://github.com/cmu-sei/autocats

We recommend cloning the AUTOCATS repository into the same parent directory that holds Kaiju, but you may clone it anywhere you wish.

The tests can then be run with:

gradle -PKAIJU_AUTOCATS_DIR=path/to/autocats/dir test

where of course the correct path is provided to your cloned AUTOCATS repository directory. If cloned to the same parent directory as Kaiju as recommended, the command would look like:

gradle -PKAIJU_AUTOCATS_DIR=../autocats test

The tests cannot be run without providing this path; if you do forget it, gradle will abort and give an error message about providing this path.

Kaiju has a dependency on JUnit 5 only for running tests. Gradle should automatically retrieve and use JUnit, but you may also download JUnit and manually place into lib/ directory of Kaiju if needed.

You will want to update (git pull) the AUTOCATS repository whenever you pull the latest Kaiju source code, to ensure the two stay in sync.


First-Time "Headless" Gradle-based Installation

If you compiled and built your own Kaiju extension, you may alternately install the extension directly on the command line via use of gradle. Be sure to set GHIDRA_INSTALL_DIR as an environment variable first (if you built Kaiju too, then you should already have this defined), then run gradle as follows:

export GHIDRA_INSTALL_DIR=<Absolute path to Ghidra install dir>
gradle install

or if you are unsure if the environment variable is set,

gradle -PGHIDRA_INSTALL_DIR=<Absolute path to Ghidra install dir> install

Extension files should be copied automatically. Kaiju will be available for use after Ghidra is restarted.

NOTE: Be sure that Ghidra is NOT running before using gradle to install. We are aware of instances when the caching does not appear to update properly if installed while Ghidra is running, leading to some odd bugs. If this happens to you, simply exit Ghidra and try reinstalling again.


Consider Removing Your Old Installation First

It might be helpful to first completely remove any older install of Kaiju before updating to a newer release. We've seen some cases where older versions of Kaiju files get stuck in the cache and cause interesting bugs due to the conflicts. By removing the old install first, you'll ensure a clean re-install and easy use.

The gradle build process can now auto-remove previous installs of Kaiju if you enable this feature. To enable the auto-remove, add the "KAIJU_AUTO_REMOVE" property to your install command, for example (assuming the environment variable is set as in the previous section):

gradle -PKAIJU_AUTO_REMOVE install

If you'd prefer to remove your old installation manually, perform a command like:

rm -rf $GHIDRA_INSTALL_DIR/Extensions/Ghidra/*kaiju*.zip $GHIDRA_INSTALL_DIR/Ghidra/Extensions/kaiju


DcRat - A Simple Remote Tool Written In C#

12 July 2021 at 21:30
By: Zion3R


DcRat is a simple remote tool written in C#


Introduction

Features
  • TCP connection with certificate verification, stable and secure
  • Server IP and port can be retrieved through a link
  • Multi-server, multi-port support
  • Plugin system through DLLs, which is highly extensible
  • Super tiny client size (about 40~50K)
  • Data serialization with msgpack (better than JSON and other formats)
  • Logging system recording all events

Functions
  • Remote shell
  • Remote desktop
  • Remote camera
  • Registry Editor
  • File management
  • Process management
  • Netstat
  • Remote recording
  • Process notification
  • Send file
  • Inject file
  • Download and Execute
  • Send notification
  • Chat
  • Open website
  • Modify wallpaper
  • Keylogger
  • File lookup
  • DDOS
  • Ransomware
  • Disable Windows Defender
  • Disable UAC
  • Password recovery
  • Open CD
  • Lock screen
  • Client shutdown/restart/upgrade/uninstall
  • System shutdown/restart/logout
  • Bypass Uac
  • Get computer information
  • Thumbnails
  • Auto task
  • Mutex
  • Process protection
  • Block client
  • Install with schtasks
  • etc

Deployment
  • Build: VS2019
  • Runtime:
    • Server: .NET Framework 4.6.1
    • Client and others: .NET Framework 4.0

Support
  • The following systems (32 and 64 bit) are supported
    • Windows XP SP3
    • Windows Server 2003
    • Windows Vista
    • Windows Server 2008
    • Windows 7
    • Windows Server 2012
    • Windows 8/8.1
    • Windows 10

TODO
  • Password recovery and other stealers (only Chrome and Edge are supported now)
  • Reverse Proxy
  • Hidden VNC
  • Hidden RDP
  • Hidden Browser
  • Client Map
  • Real time Microphone
  • Some fun functions
  • Information Collection (maybe with UI)
  • Support unicode in Remote Shell
  • Support Folder Download
  • Support more ways to install Clients
  • โ€ฆโ€ฆ

Compile

Open the project in Visual Studio 2019 and press CTRL+SHIFT+B.


Download

Press here to download the latest release.


Attention

I (qwqdanchun) am not responsible for any actions and/or damages caused by your use or dissemination of the software. You are fully responsible for any use of the software and acknowledge that the software is only used for educational and research purposes. If you download the software or the source code of the software, you will automatically agree with the above content.


Thanks


LazySign - Create Fake Certs For Binaries Using Windows Binaries And The Power Of Bat Files

23 August 2021 at 21:30
By: Zion3R


Create fake certs for binaries using windows binaries and the power of bat files

Over the years, several cool tools have been released that are capable of stealing or forging fake signatures for binary files. All of these tools, however, have additional dependencies which require Go, Python, etc.


This repo gives you the opportunity of fake signing with 0 additional dependencies, all of the binaries used are part of Microsoft's own devkits. I took the liberty of writing a bat file to make things easy.

So if you are lazy like me, just clone the git, run the bat, follow the instructions and enjoy your new fake signed binary. With some adjustments it could even be used to sign using valid certs as well ¯\_(ツ)_/¯



Karta - Source Code Assisted Fast Binary Matching Plugin For IDA

11 September 2021 at 11:30
By: Zion3R


"Karta" (Russian for "Map") is an IDA Python plugin that identifies and matches open-sourced libraries in a given binary. The plugin uses a unique technique that enables it to support huge binaries (>200,000 functions), with almost no impact on the overall performance.

The matching algorithm is location-driven. This means that its main focus is to locate the different compiled files, and match each of the file's functions based on their original order within the file. This way, the matching depends on K (the number of functions in the open source) instead of N (the size of the binary), gaining a significant performance boost as usually N >> K.

We believe that there are 3 main use cases for this IDA plugin:

  1. Identifying a list of used open sources (and their versions) when searching for a useful 1-Day
  2. Matching the symbols of supported open sources to help reverse engineer malware
  3. Matching the symbols of supported open sources to help reverse engineer a binary / firmware when searching for 0-Days in proprietary code

Read The Docs

https://karta.readthedocs.io/


Installation (Python 3 & IDA >= 7.4)

For the latest versions, using Python 3, simply git clone the repository and run the setup.py install script. Python 3 is supported since versions v2.0.0 and above.


Installation (Python 2 & IDA < 7.4)

As of the release of IDA 7.4, Karta is only actively developed for IDA 7.4 or newer, and Python 3. Python 2 and older IDA versions are still supported using release v1.2.0, which is most probably going to be the last such version due to Python 2.x end of life.


Identifier

Karta's identifier is a smaller plugin that identifies the existence, and fingerprints the versions, of the existing (supported) open source libraries within the binary. No more need to reverse engineer the same open-source library again and again; simply run the identifier plugin and get a detailed list of the used open sources. Karta currently supports more than 10 open source libraries, including:

  • OpenSSL
  • Libpng
  • Libjpeg
  • NetSNMP
  • zlib
  • Etc.

Matcher

After identifying the used open sources, one can compile a .JSON configuration file for a specific library (libpng version 1.2.29, for instance). Once compiled, Karta will automatically attempt to match the functions (symbols) of the open source in the loaded binary. In addition, in case your open source used external functions (memcpy, fread, or zlib_inflate), Karta will also attempt to match those external functions.


Folder Structure
  • src: source directory for the plugin
  • configs: pre-supplied *.JSON configuration files (hoping the community will contribute more)
  • compilations: compilation tips for generating the configuration files, and lessons from past open sources
  • docs: sphinx documentation directory

Additional Reading

Credits

This project was developed by me (see contact details below) with help and support from my research group at Check Point (Check Point Research).


Contact (Updated)

This repository was developed and maintained by me, Eyal Itkin, during my years at Check Point Research. Sadly, with my departure from the research group, I will no longer be able to maintain this repository. This is mainly because of the long list of requirements for running all of the regression tests, and the IDA Pro versions that are involved in the process.

Please accept my sincere apology.

@EyalItkin



Autoharness - A Tool That Automatically Creates Fuzzing Harnesses Based On A Library

12 September 2021 at 20:30
By: Zion3R


AutoHarness is a tool that automatically generates fuzzing harnesses for you. This idea stems from a common problem in fuzzing codebases today: large codebases have thousands of functions and pieces of code that can be embedded fairly deep into the library. It is very hard or sometimes even impossible for smart fuzzers to reach those code paths. Even for large fuzzing projects such as oss-fuzz, there are still parts of the codebase that are not covered in fuzzing. Hence, this program tries to alleviate this problem in some capacity as well as provide a tool that security researchers can use to initially test a code base. This program only supports code bases written in C and C++.


Setup/Demonstration

This program utilizes llvm and clang for libfuzzer, Codeql for finding functions, and python for the general program. This program was tested on Ubuntu 20.04 with llvm 12 and python 3. Here is the initial setup.

sudo apt-get update;
sudo apt-get install python3 python3-pip llvm-12* clang-12 git;
pip3 install pandas lief; # subprocess, os, argparse and ast are part of the Python 3 standard library

Follow the installation procedure for Codeql on https://github.com/github/codeql. Make sure to install the CLI tools and the libraries. For my testing, I have stored both the tools and libraries under one folder. Finally, clone this repository or download a release. Here is the program's output after running on nginx with the multiple argument mode set. This is the command I used.

python3 harness.py -L /home/akshat/nginx-1.21.0/objs/ -C /home/akshat/codeql-h/ -M 1 -O /home/akshat/autoharness/ -D nginx -G 1 -Y 1 -F "-I /home/akshat/nginx-1.21.0/objs -I /home/akshat/nginx-1.21.0/src/core -I /home/akshat/nginx-1.21.0/src/event -I /home/akshat/nginx-1.21.0/src/http -I /home/akshat/nginx-1.21.0/src/mail -I /home/akshat/nginx-1.21.0/src/misc -I /home/akshat/nginx-1.21.0/src/os -I /home/akshat/nginx-1.21.0/src/stream -I /home/akshat/nginx-1.21.0/src/os/unix" -X ngx_config.h,ngx_core.h

Results:

It is definitely possible to raise the success rate by further debugging the compilation and adding more header files. Note that the nginx project does not have any shared objects after compiling. However, this program does have a feature that can convert PIE executables into shared libraries.


Planned Features (in order of progress)

  1. Struct Fuzzing

The current way implemented in the program to fuzz functions with multiple arguments is by using a fuzzed data provider (see the sketch after this list). There are some improvements to make in this integration; however, I believe I can incorporate this feature with data structures. A problem I came across when coding this is CodeQL and nested structs. It becomes especially hard without writing multiple queries that vary for every function. In short, this feature needs more work. I was also thinking about a simple solution using protobufs.


  2. Implementation Based Harness Creation

Using CodeQL, it is possible to generate a control-flow graph that maps how the parameters of a function are initialized. Using that information, we can create a better harness. Another way is to look for implementations of the function that exist in the library and use that information to make an educated guess at an implementation of the function as a harness. The problem I currently have with this is generating the control-flow graphs with CodeQL.


  3. Parallelized fuzzing/False Positive Detection

I can create a simple program that runs all the harnesses and picks up on any of the common false positives using ASAN. Also, I can create a new interface that runs all the harnesses at once and displays their statistics.
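For context on the multiple-argument point in item 1 above, a harness that carves several arguments out of a single fuzz input typically looks something like the following C++ sketch using LLVM's FuzzedDataProvider (target_parse is a toy stand-in for a library function, not something AutoHarness generates):

#include <cstddef>
#include <cstdint>
#include <string>
#include <fuzzer/FuzzedDataProvider.h>

// Toy stand-in for a library function that takes multiple arguments.
static int target_parse(const char *buf, size_t len, int flags)
{
    return (len > 0 && flags > 0 && buf[0] == 'A') ? 1 : 0;
}

// libFuzzer entry point: split one input buffer into several typed arguments.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    FuzzedDataProvider fdp(data, size);

    // Derive an integer argument and a byte buffer from the same input.
    int flags = fdp.ConsumeIntegralInRange<int>(0, 255);
    std::string buf = fdp.ConsumeRemainingBytesAsString();

    target_parse(buf.data(), buf.size(), flags);
    return 0;
}

Built with, for example, clang++ -g -fsanitize=fuzzer harness.cc, this gives libFuzzer a single entry point while still exercising a multi-argument API.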


Contribution/Bugs

If you find any bugs with this program, please create an issue. I will try to come up with a fix. Also, if you have any ideas on any new features or how to implement performance upgrades or the current planned features, please create a pull request or an issue with the tag (contribution).


PSA

This tool generates some false positives. Please first analyze the crashes and see whether it is a valid bug or just an implementation bug. Also, you can enable the debug mode if some functions are not compiling. This will help you understand if there are some header files that you are missing or any linkage issues. If the project you are working on does not have shared libraries but an executable, make sure to compile the executable in PIE form so that this program can convert it into a shared library.


References
  1. https://lief.quarkslab.com/doc/latest/tutorials/08_elf_bin2lib.html


AFLTriage - Tool To Triage Crashing Input Files Using A Debugger

9 December 2021 at 20:30
By: Zion3R


AFLTriage is a tool to triage crashing input files using a debugger. It is designed to be portable and not require any run-time dependencies, besides libc and an external debugger. It supports triaging crashes generated by any program, not just AFL, but recognizes AFL directories specially, hence the name.

Some notable features include:

  • Multiple report formats: text, JSON, and raw debugger JSON
  • Parallel crash triage
  • Crash deduplication
  • Sanitizer report parsing
  • Supports binary targets with or without symbols/debugging information
  • Source code and variables will be annotated in reports for context

Currently AFLTriage only supports GDB and has only been tested on Linux C/C++ targets. Note that AFLTriage does not classify crashes by potential exploitability. Accurate exploitability classification is very target- and scenario-specific and is best left to specialized tools and expert analysts.


Usage

Usage of AFLTriage is quite straightforward. You need your inputs to triage, an output directory for reports, and the binary and its arguments to triage.

Example:

$ afltriage -i fuzzing_directory -o reports ./target_binary --option-one @@
AFLTriage v1.0.0

[+] GDB is working (GNU gdb (Ubuntu 8.1.1-0ubuntu1) 8.1.1 - Python 3.6.9 (default, Jan 26 2021, 15:33:00))
[+] Image triage cmdline: "./target_binary --option-one @@"
[+] Reports will be output to directory "reports"
[+] Triaging AFL directory fuzzing_directory/ (41 files)
[+] Triaging 41 testcases
[+] Using 24 threads to triage
[+] Triaging [41/41 00:00:02] [####################] CRASH: ASAN detected heap-buffer-overflow in buggy_function after a READ leading to SIGABRT (si_signo=6) / SI_TKILL (si_code=-6)
[+] Triage stats [Crashes: 25 (unique 12), No crash: 16, Errored: 0]

Similar to AFL, the @@ is replaced with the path of the file to be triaged. AFLTriage will take care of the rest.

Building and Running

You will need a working Rust build environment. Once you have cargo and rust installed, building and running is simple:

cd afltriage-rs/
cargo run --help

<compilation>

Finished dev [unoptimized + debuginfo] target(s) in 0.33s
Running `target/debug/afltriage --help`

<AFLTriage usage>
...

Extended Usage

afltriage 1.0.0
Quickly triage and summarize crashing testcases

USAGE:
afltriage -i <input>... -o <output> <command>...

OPTIONS:
-i <input>...
A list of paths to a testcase, directory of testcases, AFL directory, and/or directory of AFL directories to
be triaged. Note that this arg takes multiple inputs in a row (e.g. -i input1 input2...) so it cannot be the
last argument passed to AFLTriage -- this is reserved for the command.
-o <output>
The output directory for triage report files. Use '-' to print entire reports to console.

-t, --timeout <timeout>
The timeout in milliseconds for each testcase to triage. [default: 60000]

-j, --jobs <jobs>
How many threads to use during triage.

--report-formats <report_formats>...
The triage report output formats. Multiple values allowed: e.g. text,json. [default: text] [possible
values: text, json, rawjson]
--bucket-strategy <bucket_strategy>
The crash deduplication strategy to use. [default: afltriage] [possible values: none, afltriage,
first_frame, first_frame_raw, first_5_frames, function_names, first_function_name]
--child-output
Include child output in triage reports.

--child-output-lines <child_output_lines>
How many lines of program output from the target to include in reports. Use 0 to mean unlimited lines (not
recommended). [default: 25]
--stdin
Provide testcase input to the target via stdin instead of a file.

--profile-only
Perform environment checks, describe the inputs to be triaged, and profile the target binary.

--skip-profile
Skip target profiling before input processing.

--debug
Enable low-level debugging output of triage operations.

-h, --help
Prints help information

-V, --version
Prints version information


ARGS:
<command>...
The binary executable and args to execute. Use '@@' as a placeholder for the path to the input file or
--stdin. Optionally use -- to delimit the start of the command.

Related Projects



EDRHunt - Scan Installed EDRs And AVs On Windows

8 February 2022 at 20:30
By: Zion3R


EDRHunt scans Windows services, drivers, processes, registry for installed EDRs (Endpoint Detection And Response). Read more about EDRHunt here.


Install

  • Binary

    • Download the latest release from the release section. Releases are built for windows/amd64.
  • Go

    • Requires Go to be installed on the system. Tested on Go 1.17+.
    • go install github.com/FourCoreLabs/EDRHunt/cmd/EDRHunt@latest

Usage

  • Find installed EDRs
$ .\EDRHunt.exe scan
[EDR]
Detected EDR: Windows Defender
Detected EDR: Kaspersky Security
  • Scan Everything
$ .\EDRHunt.exe all
Running in user mode, escalate to admin for more details.
Scanning processes, services, drivers, and registry...
[PROCESSES]

Suspicious Process Name: MsMpEng.exe
Description: MsMpEng.exe
Caption: MsMpEng.exe
Binary:
ProcessID: 6764
Parent Process: 1148
Process CmdLine :
File Metadata:
Matched Keyword: [msmpeng]


Suspicious Process Name: NisSrv.exe
Description: NisSrv.exe
Caption: NisSrv.exe
Binary:
ProcessID: 9840
Parent Process: 1148
Process CmdLine :
File Metadata:
Matched Keyword: [nissrv]
...
  • Find drivers matching EDR keywords
    __________  ____     __  ____  ___   ________
/ ____/ __ \/ __ \ / / / / / / / | / /_ __/
/ __/ / / / / /_/ / / /_/ / / / / |/ / / /
/ /___/ /_/ / _, _/ / __ / /_/ / /| / / /
/_____/_____/_/ |_| /_/ /_/\____/_/ |_/ /_/

FourCore Labs (https://fourcore.vision) | Version: 1.1

Running in user mode, escalate to admin for more details.
[DRIVERS]
Suspicious Driver Module: WdFilter.sys
Driver FilePath: c:\windows\system32\drivers\wd\wdfilter.sys
Driver File Metadata:
ProductName: Microsoft® Windows® Operating System
OriginalFileName: WdFilter.sys
InternalFileName: WdFilter
Company Name: Microsoft Corporation
FileDescription: Microsoft antimalware file system filter driver
ProductVersion: 4.18.2109.6
Comments:
LegalCopyright: © Microsoft Corporation. All rights reserved.
LegalTrademarks:
Matched Keyword: [antimalware malware]

Suspicious Driver Module: hvsifltr.sys
Driver FilePath: c:\windows\system32\drivers\hvsifltr.sys
Driver File Metadata:
ProductName: Microsoft® Windows® Operating System
OriginalFileName: hvsifltr.sys.mui
InternalFileName: hvsifltr.sys
Company Name: Microsoft Corporation
FileDescription: Microsoft Defender Application Guard Filter Driver
ProductVersion: 10.0.19041.1
Comments:
LegalCopyright: © Microsoft Corporation. All rights reserved.
LegalTrademarks:
Matched Keyword: [defender]

Suspicious Driver Module: WdNisDrv.sys
Driver FilePath: c:\windows\system32\drivers\wd\wdnisdrv.sys
Driver File Metadata:
ProductName: Microsoft® Windows® Operating System
OriginalFileName: wdnisdrv.sys
InternalFileName: wdnisdrv.sys
Company Name: Microsoft Corporation
FileDescription: Windows Defender Network Stream Filter
ProductVersion: 4.18.2109.6
Comments:
LegalCopyright: © Microsoft Corporation. All rights reserved.
LegalTrademarks:
Matched Keyword: [defender]
...
  • Find services matching EDR keywords
$ .\EDRHunt.exe -s
  • Find drivers matching EDR keywords
$ .\EDRHunt.exe -d
  • Find registry keys matching EDR keywords
$ .\EDRHunt.exe -r

Detections

EDR Detections Currently Available

  • Windows Defender
  • Kaspersky Security
  • Symantec Security
  • Crowdstrike Security
  • Mcafee Security
  • Cylance Security
  • Carbon Black
  • SentinelOne
  • FireEye
  • Elastic EDR

More to be added soon.

Community

We would appreciate it if you ran EDRHunt on your own deployments and tested the detections. Thanks!



Wslu - A Collection Of Utilities For Windows 10 Linux Subsystems

9 February 2022 at 11:30
By: Zion3R


This is a collection of utilities for the Windows 10 Linux Subsystem, such as retrieving Windows 10 environment variables or creating shortcuts to your favorite Linux GUI applications on the Windows 10 Desktop.

Requires Windows 10 Creators Update; some features require a higher version of Windows 10. Supports WSL2.


Feature

wslusc

A WSL shortcut creator to create a shortcut on your Windows 10 Desktop.

wslsys

A WSL system information printer to print out system information from Windows 10 or WSL.

wslfetch

A WSL screenshot information tool to print information in an elegant way.

wslvar

A WSL tool to help you get Windows system environment variables.

wslview

With aliases wview/wslstart/wstart

A fake WSL browser that can help you open links in the default Windows browser or open files on Windows.

wslupath

⚠
Deprecated

A WSL tool to convert path styles.

wslact

A set of quick actions for WSL, such as quickly mounting all drives or manually syncing time between Windows and WSL.
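Typical invocations of a few of the tools above, for illustration (exact behavior may vary between wslu versions):

wslview https://github.com        # open a link in the default Windows browser
wslvar USERPROFILE                # print the value of the Windows %USERPROFILE% variable
wslsys                            # print system information from Windows and WSL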

Installation

Alpine Linux

You can install wslu on Alpine Linux 3.12+ with the following command:

sudo apk add wslu

Arch Linux

The AUR version of wslu was pulled because it violated AUR policy.

Download the latest package from release and install using the command: sudo pacman -U *.zst

CentOS/RHEL/Oracle Linux

Add the repository for the corresponding Linux distribution:

  • CentOS 7:
sudo yum-config-manager --add-repo https://download.opensuse.org/repositories/home:/wslutilities/CentOS_7/home:wslutilities.repo
  • CentOS 8:
sudo dnf install -y epel-release 
sudo dnf config-manager --set-enabled PowerTools
sudo yum-config-manager --add-repo https://download.opensuse.org/repositories/home:/wslutilities/CentOS_8/home:wslutilities.repo
  • Oracle Linux 7:
sudo yum-config-manager --add-repo https://download.opensuse.org/repositories/home:/wslutilities/RHEL_7/home:wslutilities.repo
  • Oracle Linux 8:
sudo dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
sudo subscription-manager repos --enable codeready-builder-for-rhel-8-$(/bin/arch)-rpms
sudo yum-config-manager --add-repo https://download.opensuse.org/repositories/home:/wslutilities/CentOS_8/home:wslutilities.repo
  • Red Hat Enterprise Linux 7:
sudo yum-config-manager --add-repo https://download.opensuse.org/repositories/home:/wslutilities/RHEL_7/home:wslutilities.repo
  • Red Hat Enterprise Linux 8:
sudo dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
sudo subscription-manager repos --enable codeready-builder-for-rhel-8-$(/bin/arch)-rpms
sudo yum-config-manager --add-repo https://download.opensuse.org/repositories/home:/wslutilities/CentOS_8/home:wslutilities.repo

Then install with the command sudo yum install wslu.

Debian

You can install wslu with the following command:

sudo apt install gnupg2 apt-transport-https
wget -O - https://pkg.wslutiliti.es/public.key | sudo tee -a /etc/apt/trusted.gpg.d/wslu.asc
echo "deb https://pkg.wslutiliti.es/debian buster main" | sudo tee -a /etc/apt/sources.list
sudo apt update
sudo apt install wslu

Fedora

sudo dnf copr enable wslutilities/wslu
sudo dnf install wslu

Fedora Remix for WSL

Preinstalled.

Kali Linux

You can install wslu with the following command:

sudo apt install gnupg2 apt-transport-https
wget -O - https://pkg.wslutiliti.es/public.key | sudo tee -a /etc/apt/trusted.gpg.d/wslu.asc
echo "deb https://pkg.wslutiliti.es/kali kali-rolling main" | sudo tee -a /etc/apt/sources.list
sudo apt update
sudo apt install wslu

Pengwin

Preinstalled.

Pengwin Enterprise 7

You can install wslu with the following command:

sudo yum install wslu

Pengwin Enterprise 8

Add the EPEL repository:

sudo dnf install -y epel-release

You can install wslu with the following command:

sudo dnf install -y wslu

Ubuntu

Attention!

For the Ubuntu version, you should not only report bugs here but also report them at Launchpad.

Preinstalled in the latest apps. On older installations of Ubuntu, please install ubuntu-wsl, which depends on wslu:

sudo apt update
sudo apt install ubuntu-wsl

To install the latest version before wslu reaches the main repository, you can install via our PPA: https://launchpad.net/~wslutilities/+archive/ubuntu/wslu

OpenSUSE

You can install wslu with the following command:

sudo zypper addrepo https://download.opensuse.org/repositories/home:/wslutilities/openSUSE_Leap_15.1/home:wslutilities.repo
sudo zypper up
sudo zypper in wslu

SUSE Linux Enterprise Server

You can install wslu with the following command:

SLESCUR_VERSION="$(grep VERSION= /etc/os-release | sed -e s/VERSION=//g -e s/\"//g -e s/-/_/g)"
sudo zypper addrepo https://download.opensuse.org/repositories/home:/wslutilities/SLE_$SLESCUR_VERSION/home:wslutilities.repo
sudo zypper addrepo https://download.opensuse.org/repositories/graphics/SLE_12_SP3_Backports/graphics.repo
sudo zypper up
sudo zypper in wslu

Other distributions

Not recommended

The curl | bash method is not secure. Related article

You can install wslu with the following command on your preferred distribution: curl -sL https://raw.githubusercontent.com/wslutilities/wslu/master/extras/scripts/wslu-install | bash


License & Credits

This project uses GPLv3 License.

Logo of WSL Utilities and icons for wslusc desktop shortcuts are licensed under CC BY 4.0 International License.

For other third party files and assets used, please refer to THIRD_PARTY_LICENSE.



AWS-Loot - Pull Secrets From An AWS Environment

9 February 2022 at 20:30
By: Zion3R


Searches an AWS environment looking for secrets, by enumerating environment variables and source code. This tool allows quick enumeration over large sets of AWS instances and services.


Install

pip install -r requirements.txt

An AWS credential file (.aws/credentials) is required for authentication to the target environment

  • Access Key
  • Access Key Secret

How it works

AWS-Loot works by going through EC2, Lambda, and CodeBuild instances and searching for high-entropy strings. The EC2 looter works by querying all available instance IDs in all regions and requesting each instance's USERDATA, where developers often leave secrets. The Lambda looter operates across regions as well and can search all available versions of a found function. It starts by searching the function's environment variables, then downloads the source code and scans the source for secrets. The CodeBuild looter works by searching for build instances and searching those builds for environment variables that might contain secrets.
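The kind of data the looters inspect can also be pulled manually with the AWS CLI, which may help when verifying a finding (the instance ID and function name below are placeholders):

# User data of a specific EC2 instance (returned base64-encoded)
aws ec2 describe-instance-attribute --instance-id i-0123456789abcdef0 --attribute userData --region us-east-1

# Environment variables of a Lambda function
aws lambda get-function-configuration --function-name my-function --region us-east-1

# Pre-signed URL for downloading the function's source code package
aws lambda get-function --function-name my-function --region us-east-1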

Usage

python3 awsloot.py

Next Features

  • Allow users to specify an ARN to scan
  • Looter for additional services


LDAP-Password-Hunter - Password Hunter In The LDAP Infamous Database

10 February 2022 at 11:30
By: Zion3R


It happens that, due to legacy service requirements or just bad security practices, passwords are world-readable in the LDAP database by any user who is able to authenticate.
LDAP Password Hunter is a tool which wraps features of getTGT.py (Impacket) and ldapsearch in order to look for passwords stored in the LDAP database. The Impacket getTGT.py script is used to authenticate the domain account used for enumeration and save its TGT Kerberos ticket. The TGT is then exported in the KRB5CCNAME variable, which is used by ldapsearch to authenticate and obtain TGS Kerberos tickets for each domain/DC LDAP-Password-Hunter is run against. Based on the CN=Schema,CN=Configuration export results, a custom list of attributes is built and filtered in order to identify a big query which might contain interesting results. Results are shown and saved in a sqlite3 database. The DB is made of one table containing the following columns:

  • DistinguishedName
  • AttributeName
  • Value
  • Domain

Results are much cleaner than in the previous version and are organized in the SQL DB. The output shows entries only if they are not already in the DB, so new entries pop up, but the overall outcome of the analysis is still saved in a file with a timestamp.
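For reference, the flow the tool wraps is roughly the following (an illustrative sketch with assumed domain, host and account names, not the tool's actual commands):

# Obtain a TGT for the enumeration account and save it as a ccache (Impacket)
getTGT.py CONTOSO.LOCAL/enumuser:Password123

# Point Kerberos-aware tools at the saved ticket
export KRB5CCNAME=enumuser.ccache

# Query the DC over LDAP with Kerberos (GSSAPI) and dump candidate attributes
ldapsearch -Y GSSAPI -H ldap://dc01.contoso.local -b "DC=contoso,DC=local" "(objectClass=*)" userPassword unixUserPassword description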


Usage

Be sure your krb5.conf file is clean and the domains.txt and conf.txt are filled properly.

From the project folder:

./run.sh

Easy-peasy :)

Credits

  • SecureAuthCorp: For the work on Impacket project
  • Alberto Solino (@agsolino): For the work on kerberoast modules based on the Impacket framework
  • Retrospected: For helping out every Friday with debugging the code and brainstorming on new features


Php-Malware-Finder - Detect Potentially Malicious PHP Files

10 February 2022 at 20:30
By: Zion3R


PHP-malware-finder does its very best to detect obfuscated/dodgy code as well as files using PHP functions often used in malware/webshells.

The following encoders/obfuscators/webshells are also detected:

Of course it's trivial to bypass PMF, but its goal is to catch kiddies and idiots, not people with a working brain. If you report a stupid tailored bypass for PMF, you likely belong to one (or both) of these categories, and should re-read the previous statement.


How does it work?

Detection is performed by crawling the filesystem and testing files against a set of YARA rules. Yes, it's that simple!

Instead of using a hash-based approach, PMF tries as much as possible to use semantic patterns, to detect things like "a $_GET variable is decoded two times, unzipped, and then passed to some dangerous function like system".

Installation

  • Install Yara.
    This is also possible via some Linux package managers:
    • Debian: sudo apt-get install yara
    • Red Hat: yum install yara (requires the EPEL repository)

You can also compile it from source:

git clone git@github.com:VirusTotal/yara.git
cd yara/
YACC=bison ./configure
make
  • Download php-malware-finder: git clone https://github.com/jvoisin/php-malware-finder.git

How to use it?

$ ./phpmalwarefinder -h
Usage phpmalwarefinder [-cfhtvl] <file|folder> ...
-c Optional path to a rule file
-f Fast mode
-h Show this help message
-t Specify the number of threads to use (8 by default)
-v Verbose mode

Or if you prefer to use yara:

$ yara -r ./php.yar /var/www

Please keep in mind that you should use at least YARA 3.4 because we're using hashes for the whitelist system, and greedy regexps. Please note that if you plan to build yara from sources, libssl-dev must be installed on your system in order to have support for hashes.

Oh, and by the way, you can run the comprehensive testsuite with make tests.

Whitelisting

Check the whitelist.yar file. If you're lazy, you can generate whitelists for entire folders with the generate_whitelist.py script.

Why should I use it instead of something else?

Because:

  • It doesn't use a single rule per sample, since it only cares about finding malicious patterns, not specific webshells
  • It has a complete testsuite, to avoid regressions
  • Its whitelist system doesn't rely on filenames
  • It doesn't rely on (slow) entropy computation
  • It uses a ghetto-style static analysis, instead of relying on file hashes
  • Thanks to the aforementioned pseudo-static analysis, it works (especially) well on obfuscated files

Licensing

PHP-malware-finder is licensed under the GNU Lesser General Public License v3.

The amazing YARA project is licensed under the Apache v2.0 license.

Patches, whitelists or samples are of course more than welcome.



TerraGoat - Vulnerable Terraform Infrastructure

11 February 2022 at 11:30
By: Zion3R


TerraGoat is Bridgecrew's "Vulnerable by Design" Terraform repository. TerraGoat is a learning and training project that demonstrates how common configuration errors can find their way into production cloud environments.


Introduction

TerraGoat was built to enable DevSecOps teams to design and implement a sustainable misconfiguration prevention strategy. It can be used to test a policy-as-code framework like Bridgecrew & Checkov, inline-linters, pre-commit hooks or other code scanning methods.

TerraGoat follows the tradition of existing *Goat projects that provide a baseline training ground to practice implementing secure development best practices for cloud infrastructure.

Important notes

Before you proceed, please take note of this warning:

TerraGoat creates intentionally vulnerable AWS resources in your account. DO NOT deploy TerraGoat in a production environment or alongside any sensitive AWS resources.

Requirements

  • Terraform 0.12
  • aws cli
  • azure cli

To prevent vulnerable infrastructure from reaching production, see Bridgecrew & Checkov, the open-source static analysis tool for infrastructure as code.

Getting started

AWS Setup

Installation (AWS)

You can deploy multiple TerraGoat stacks in a single AWS account using the parameter TF_VAR_environment.

Create an S3 Bucket backend to keep Terraform state

export TERRAGOAT_STATE_BUCKET="mydevsecops-bucket"
export TF_VAR_company_name=acme
export TF_VAR_environment=mydevsecops
export TF_VAR_region="us-west-2"

aws s3api create-bucket --bucket $TERRAGOAT_STATE_BUCKET \
--region $TF_VAR_region --create-bucket-configuration LocationConstraint=$TF_VAR_region

# Enable versioning
aws s3api put-bucket-versioning --bucket $TERRAGOAT_STATE_BUCKET --versioning-configuration Status=Enabled

# Enable encryption
aws s3api put-bucket-encryption --bucket $TERRAGOAT_STATE_BUCKET --server-side-encryption-configuration '{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms"
      }
    }
  ]
}'

Apply TerraGoat (AWS)

cd terraform/aws/
terraform init \
-backend-config="bucket=$TERRAGOAT_STATE_BUCKET" \
-backend-config="key=$TF_VAR_company_name-$TF_VAR_environment.tfstate" \
-backend-config="region=$TF_VAR_region"

terraform apply

Remove TerraGoat (AWS)

terraform destroy

Creating multiple TerraGoat AWS stacks

cd terraform/aws/
export TERRAGOAT_ENV=$TF_VAR_environment
export TERRAGOAT_STACKS_NUM=5
for i in $(seq 1 $TERRAGOAT_STACKS_NUM)
do
export TF_VAR_environment=$TERRAGOAT_ENV$i
terraform init \
-backend-config="bucket=$TERRAGOAT_STATE_BUCKET" \
-backend-config="key=$TF_VAR_company_name-$TF_VAR_environment.tfstate" \
-backend-config="region=$TF_VAR_region"

terraform apply -auto-approve
done

Deleting multiple TerraGoat stacks (AWS)

cd terraform/aws/
export TF_VAR_environment=$TERRAGOAT_ENV
for i in $(seq 1 $TERRAGOAT_STACKS_NUM)
do
export TF_VAR_environment=$TERRAGOAT_ENV$i
terraform init \
-backend-config="bucket=$TERRAGOAT_STATE_BUCKET" \
-backend-config="key=$TF_VAR_company_name-$TF_VAR_environment.tfstate" \
-backend-config="region=$TF_VAR_region"

terraform destroy -auto-approve
done

Azure Setup

Installation (Azure)

You can deploy multiple TerraGoat stacks in a single Azure subscription using the parameter TF_VAR_environment.

Create an Azure Storage Account backend to keep Terraform state

export TERRAGOAT_RESOURCE_GROUP="TerraGoatRG"
export TERRAGOAT_STATE_STORAGE_ACCOUNT="mydevsecopssa"
export TERRAGOAT_STATE_CONTAINER="mydevsecops"
export TF_VAR_environment="dev"
export TF_VAR_region="westus"

# Create resource group
az group create --location $TF_VAR_region --name $TERRAGOAT_RESOURCE_GROUP

# Create storage account
az storage account create --name $TERRAGOAT_STATE_STORAGE_ACCOUNT --resource-group $TERRAGOAT_RESOURCE_GROUP --location $TF_VAR_region --sku Standard_LRS --kind StorageV2 --https-only true --encryption-services blob

# Get storage account key
ACCOUNT_KEY=$(az storage account keys list --resource-group $TERRAGOAT_RESOURCE_GROUP --account-name $TERRAGOAT_STATE_STORAGE_ACCOUNT --query [0].value -o tsv)

# Create blob container
az storage container create --name $TERRAGOAT_STATE_CONTAINER --account-name $TERRAGOAT_STATE_STORAGE_ACCOUNT --account-key $ACCOUNT_KEY

Apply TerraGoat (Azure)

cd terraform/azure/
terraform init -reconfigure -backend-config="resource_group_name=$TERRAGOAT_RESOURCE_GROUP" \
-backend-config "storage_account_name=$TERRAGOAT_STATE_STORAGE_ACCOUNT" \
-backend-config="container_name=$TERRAGOAT_STATE_CONTAINER" \
-backend-config "key=$TF_VAR_environment.terraform.tfstate"

terraform apply

Remove TerraGoat (Azure)

terraform destroy

GCP Setup

Installation (GCP)

You can deploy multiple TerraGoat stacks in a single GCP project using the parameter TF_VAR_environment.

Create a GCS backend to keep Terraform state

To use terraform, a Service Account and matching set of credentials are required. If they do not exist, they must be manually created for the relevant project. To create the Service Account:

  1. Sign into your GCP project, go to IAM > Service Accounts.
  2. Click the CREATE SERVICE ACCOUNT.
  3. Give a name to your service account (for example - terragoat) and click CREATE.
  4. Grant the Service Account the Project > Editor role and click CONTINUE.
  5. Click DONE.

To create the credentials:

  1. Sign into your GCP project, go to IAM > Service Accounts and click on the relevant Service Account.
  2. Click ADD KEY > Create new key > JSON and click CREATE. This will create a .json file and download it to your computer.

We recommend saving the key with a nicer name than the auto-generated one (e.g. terragoat_credentials.json), and storing the resulting JSON file inside the terraform/gcp directory of terragoat. Once the credentials are set up, create the backend configuration as follows:

export TF_VAR_environment="dev"
export TF_TERRAGOAT_STATE_BUCKET=remote-state-bucket-terragoat
export TF_VAR_credentials_path=<PATH_TO_CREDENTIALS_FILE> # example: export TF_VAR_credentials_path=terragoat_credentials.json
export TF_VAR_project=<YOUR_PROJECT_NAME_HERE>

# Create storage bucket
gsutil mb gs://${TF_TERRAGOAT_STATE_BUCKET}

Apply TerraGoat (GCP)

cd terraform/gcp/
terraform init -reconfigure -backend-config="bucket=$TF_TERRAGOAT_STATE_BUCKET" \
-backend-config "credentials=$TF_VAR_credentials_path" \
-backend-config "prefix=terragoat/${TF_VAR_environment}"

terraform apply

Remove TerraGoat (GCP)

terraform destroy

Bridgecrew's IaC herd of goats

  • CfnGoat - Vulnerable by design Cloudformation template
  • TerraGoat - Vulnerable by design Terraform stack
  • CDKGoat - Vulnerable by design CDK application

Contributing

Contribution is welcomed!

We would love to hear about more ideas on how to find vulnerable infrastructure-as-code design patterns.

Support

Bridgecrew builds and maintains TerraGoat to encourage the adoption of policy-as-code.

If you need direct support you can contact us at [email protected].



Dive - A Tool For Exploring Each Layer In A Docker Image

11 February 2022 at 20:30
By: Zion3R


A tool for exploring a docker image, layer contents, and discovering ways to shrink the size of your Docker/OCI image.


To analyze a Docker image simply run dive with an image tag/id/digest:

dive <your-image-tag>

or if you want to build your image then jump straight into analyzing it:

dive build -t <some-tag> .

Building on a Macbook (only the Docker container engine is supported):

docker run --rm -it \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "$(pwd)":"$(pwd)" \
-w "$(pwd)" \
-v "$HOME/.dive.yaml":"$HOME/.dive.yaml" \
wagoodman/dive:latest build -t <some-tag> .

Additionally you can run this in your CI pipeline to ensure you're keeping wasted space to a minimum (this skips the UI):

CI=true dive <your-image>

This is beta quality! Feel free to submit an issue if you want a new feature or find a bug :)

Basic Features

Show Docker image contents broken down by layer

As you select a layer on the left, you are shown the contents of that layer combined with all previous layers on the right. Also, you can fully explore the file tree with the arrow keys.

Indicate what's changed in each layer

Files that have changed, been modified, added, or removed are indicated in the file tree. This can be adjusted to show changes for a specific layer, or aggregated changes up to this layer.

Estimate "image efficiency"

The lower left pane shows basic layer info and an experimental metric that estimates how much wasted space your image contains. This might be from duplicating files across layers, moving files across layers, or not fully removing files. Both a percentage "score" and the total wasted file space are provided.

Quick build/analysis cycles

You can build a Docker image and do an immediate analysis with one command: dive build -t some-tag .

You only need to replace your docker build command with the same dive build command.

CI Integration

Analyze an image and get a pass/fail result based on the image efficiency and wasted space. Simply set CI=true in the environment when invoking any valid dive command.
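As a sketch, a CI job step might look like the following (the image tag and the --ci-config path are placeholders):

# Bypass the UI and return a pass/fail exit code based on the .dive-ci rules
CI=true dive <your-image-tag>

# Point at a custom rules file instead of the default .dive-ci at the repo root
CI=true dive <your-image-tag> --ci-config ./ci/dive-rules.yaml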

Multiple Image Sources and Container Engines Supported

With the --source option, you can select where to fetch the container image from:

dive <your-image> --source <source>

or

dive <source>://<your-image>

The valid source options are as follows (see the examples after the list):

  • docker: Docker engine (the default option)
  • docker-archive: A Docker Tar Archive from disk
  • podman: Podman engine (linux only)
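For example (image names and paths below are placeholders):

# Analyze a tarball produced by `docker save`
dive docker-archive://./my-image.tar

# Analyze an image managed by Podman (Linux only)
dive podman://<your-image-tag>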

Installation

Ubuntu/Debian

wget https://github.com/wagoodman/dive/releases/download/v0.9.2/dive_0.9.2_linux_amd64.deb
sudo apt install ./dive_0.9.2_linux_amd64.deb

RHEL/Centos

curl -OL https://github.com/wagoodman/dive/releases/download/v0.9.2/dive_0.9.2_linux_amd64.rpm
rpm -i dive_0.9.2_linux_amd64.rpm

Arch Linux

Available as dive in the Arch User Repository (AUR).

yay -S dive

The above example assumes yay as the tool for installing AUR packages.

Mac

If you use Homebrew:

brew install dive

If you use MacPorts:

sudo port install dive

Or download the latest Darwin build from the releases page.

Windows

Download the latest release.

Go tools: requires Go version 1.10 or higher.

go get github.com/wagoodman/dive

Note: installing in this way you will not see a proper version when running dive -v.

Docker

docker pull wagoodman/dive

or

docker pull quay.io/wagoodman/dive

When running you'll need to include the docker socket file:

docker run --rm -it \
-v /var/run/docker.sock:/var/run/docker.sock \
wagoodman/dive:latest <dive arguments...>

Docker for Windows (showing PowerShell compatible line breaks; collapse to a single line for Command Prompt compatibility)

docker run --rm -it `
-v /var/run/docker.sock:/var/run/docker.sock `
wagoodman/dive:latest <dive arguments...>

Note: depending on the version of docker you are running locally you may need to specify the docker API version as an environment variable:

   DOCKER_API_VERSION=1.37 dive ...

or if you are running with a docker image:

docker run --rm -it \
-v /var/run/docker.sock:/var/run/docker.sock \
-e DOCKER_API_VERSION=1.37 \
wagoodman/dive:latest <dive arguments...>

CI Integration

When dive is run with the environment variable CI=true, the UI is bypassed and dive instead analyzes your Docker image, giving a pass/fail indication via the return code. Currently three metrics are supported via a .dive-ci file that you can put at the root of your repo:

rules:
  # If the efficiency is measured below X%, mark as failed.
  # Expressed as a ratio between 0-1.
  lowestEfficiency: 0.95

  # If the amount of wasted space is at least X or larger than X, mark as failed.
  # Expressed in B, KB, MB, and GB.
  highestWastedBytes: 20MB

  # If the amount of wasted space makes up for X% or more of the image, mark as failed.
  # Note: the base image layer is NOT included in the total image size.
  # Expressed as a ratio between 0-1; fails if the threshold is met or crossed.
  highestUserWastedPercent: 0.20

You can override the CI config path with the --ci-config option.

KeyBindings

Key Binding      Description
Ctrl + C         Exit
Tab              Switch between the layer and filetree views
Ctrl + F         Filter files
PageUp           Scroll up a page
PageDown         Scroll down a page
Ctrl + A         Layer view: see aggregated image modifications
Ctrl + L         Layer view: see current layer modifications
Space            Filetree view: collapse/uncollapse a directory
Ctrl + Space     Filetree view: collapse/uncollapse all directories
Ctrl + A         Filetree view: show/hide added files
Ctrl + R         Filetree view: show/hide removed files
Ctrl + M         Filetree view: show/hide modified files
Ctrl + U         Filetree view: show/hide unmodified files
Ctrl + B         Filetree view: show/hide file attributes
PageUp           Filetree view: scroll up a page
PageDown         Filetree view: scroll down a page

UI Configuration

No configuration is necessary; however, you can create a config file to override the default values:

# supported options are "docker" and "podman"
container-engine: docker
# continue with analysis even if there are errors parsing the image archive
ignore-errors: false
log:
  enabled: true
  path: ./dive.log
  level: info

# Note: you can specify multiple bindings by separating values with a comma.
# Note: UI hinting is derived from the first binding
keybinding:
  # Global bindings
  quit: ctrl+c
  toggle-view: tab
  filter-files: ctrl+f, ctrl+slash

  # Layer view specific bindings
  compare-all: ctrl+a
  compare-layer: ctrl+l

  # File view specific bindings
  toggle-collapse-dir: space
  toggle-collapse-all-dir: ctrl+space
  toggle-added-files: ctrl+a
  toggle-removed-files: ctrl+r
  toggle-modified-files: ctrl+m
  toggle-unmodified-files: ctrl+u
  toggle-filetree-attributes: ctrl+b
  page-up: pgup
  page-down: pgdn

diff:
  # You can change the default files shown in the filetree (right pane). All diff types are shown by default.
  hide:
    - added
    - removed
    - modified
    - unmodified

filetree:
  # The default directory-collapse state
  collapse-dir: false

  # The percentage of screen width the filetree should take on the screen (must be >0 and <1)
  pane-width: 0.5

  # Show the file attributes next to the filetree
  show-attributes: true

layer:
  # Enable showing all changes from this layer and every previous layer
  show-aggregated-changes: false

dive will search for configs in the following locations:

  • $XDG_CONFIG_HOME/dive/*.yaml
  • $XDG_CONFIG_DIRS/dive/*.yaml
  • ~/.config/dive/*.yaml
  • ~/.dive.yaml


Cloudsploit - Cloud Security Posture Management (CSPM)

12 February 2022 at 11:30
By: Zion3R


Quick Start

Generic

$ git clone https://github.com/aquasecurity/cloudsploit.git
$ cd cloudsploit
$ npm install
$ ./index.js -h

Docker

$ git clone https://github.com/aquasecurity/cloudsploit.git
$ cd cloudsploit
$ docker build . -t cloudsploit:0.0.1
$ docker run cloudsploit:0.0.1 -h
$ docker run -e AWS_ACCESS_KEY_ID=XX -e AWS_SECRET_ACCESS_KEY=YY cloudsploit:0.0.1 --compliance=pci

Documentation


Background

CloudSploit by Aqua is an open-source project designed to allow detection of security risks in cloud infrastructure accounts, including: Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Oracle Cloud Infrastructure (OCI), and GitHub. These scripts are designed to return a series of potential misconfigurations and security risks.

Deployment Options

CloudSploit is available in two deployment options:

Self-Hosted

Follow the instructions below to deploy the open-source version of CloudSploit on your machine in just a few simple steps.

Hosted at Aqua Wave

A commercial version of CloudSploit hosted at Aqua Wave. Try Aqua Wave today!

Installation

Ensure that NodeJS is installed. If not, install it from here.

$ git clone git@github.com:cloudsploit/scans.git
$ npm install

Configuration

CloudSploit requires read-only permission to your cloud account. Follow the guides below to provision this access:

For AWS, you can run CloudSploit directly and it will detect credentials using the default AWS credential chain.
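For example, if your shell already has working AWS credentials, a scan can be started without any config file; this is a sketch, not the only way to run it:

# Credentials are resolved from the default AWS credential chain
# (environment variables, ~/.aws/credentials, or an instance/role profile)
./index.js

# The same scan, also saving results to a CSV file
./index.js --csv=aws-results.csv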

CloudSploit Config File

The CloudSploit config file allows you to pass cloud provider credentials by:

  1. A JSON file on your file system
  2. Environment variables
  3. Hard-coding (not recommended)

Start by copying the example config file:

$ cp config_example.js config.js

Edit the config file by uncommenting the relevant sections for the cloud provider you are testing. Each cloud has both a credential_file option and inline options. For example:

azure: {
    // OPTION 1: If using a credential JSON file, enter the path below
    // credential_file: '/path/to/file.json',
    // OPTION 2: If using hard-coded credentials, enter them below
    // application_id: process.env.AZURE_APPLICATION_ID || '',
    // key_value: process.env.AZURE_KEY_VALUE || '',
    // directory_id: process.env.AZURE_DIRECTORY_ID || '',
    // subscription_id: process.env.AZURE_SUBSCRIPTION_ID || ''
}

Credential Files

If you use the credential_file option, point to a file in your file system that follows the correct format for the cloud you are using.

AWS

{
    "accessKeyId": "YOURACCESSKEY",
    "secretAccessKey": "YOURSECRETKEY"
}

Azure

{
    "ApplicationID": "YOURAZUREAPPLICATIONID",
    "KeyValue": "YOURAZUREKEYVALUE",
    "DirectoryID": "YOURAZUREDIRECTORYID",
    "SubscriptionID": "YOURAZURESUBSCRIPTIONID"
}

GCP

Note: For GCP, you generate a JSON file directly from the GCP console, which you should not edit.

{
    "type": "service_account",
    "project": "GCPPROJECTNAME",
    "client_email": "GCPCLIENTEMAIL",
    "private_key": "GCPPRIVATEKEY"
}

Oracle OCI

{
    "tenancyId": "YOURORACLETENANCYID",
    "compartmentId": "YOURORACLECOMPARTMENTID",
    "userId": "YOURORACLEUSERID",
    "keyFingerprint": "YOURORACLEKEYFINGERPRINT",
    "keyValue": "YOURORACLEKEYVALUE"
}

Environment Variables

CloudSploit supports passing environment variables, but you must first uncomment the section of your config.js file relevant to the cloud provider being scanned.

You can then pass the variables listed in each section. For example, for AWS:

{
    access_key: process.env.AWS_ACCESS_KEY_ID || '',
    secret_access_key: process.env.AWS_SECRET_ACCESS_KEY || '',
    session_token: process.env.AWS_SESSION_TOKEN || '',
}
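With that section uncommented, exporting the variables before a run is enough (values below are placeholders):

export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
# Optional, only needed for temporary credentials
export AWS_SESSION_TOKEN="..."

./index.js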

Running

To run a standard scan, showing all outputs and results, simply run:

$ ./index.js

CLI Options

CloudSploit supports many options to customize the run time. Some popular options include:

  • AWS GovCloud support: --govcloud
  • AWS China support: --china
  • Save the raw cloud provider response data: --collection=file.json
  • Ignore passing (OK) results: --ignore-ok
  • Exit with a non-zero code if non-passing results are found: --exit-code
    • This is a good option for CI/CD systems
  • Change the output from a table to raw text: --console=text

See Output Formats below for more output options.
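For instance, a CI-oriented run might combine several of the options above (a sketch using only the flags listed in this README):

./index.js --console=text --ignore-ok --exit-code --collection=collection.json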

Click for a full list of options
$ ./index.js -h


CloudSploit by Aqua Security, Ltd.
Cloud security auditing for AWS, Azure, GCP, Oracle, and GitHub

usage: index.js [-h] --config CONFIG [--compliance {hipaa,cis,cis1,cis2,pci}] [--plugin PLUGIN] [--govcloud] [--china] [--csv CSV] [--json JSON] [--junit JUNIT]
[--table] [--console {none,text,table}] [--collection COLLECTION] [--ignore-ok] [--exit-code] [--skip-paginate] [--suppress SUPPRESS]

optional arguments:
-h, --help show this help message and exit
--config CONFIG
The path to a cloud provider credentials file.
--compliance {hipaa,cis,cis1,cis2,pci}
Compliance mode. Only return results applicable to the selected program.
--plugin PLUGIN A specific plugin to run. If none provided, all plugins will be run. Obtain from the exports.js file. E.g. acmValidation
--govcloud AWS only. Enables GovCloud mode.
--china AWS only. Enables AWS China mode.
--csv CSV Output: CSV file
--json JSON Output: JSON file
--junit JUNIT Output: Junit file
--table Output: table
--console {none,text,table}
Console output format. Default: table
--collection COLLECTION
Output: full collection JSON as file
--ignore-ok Ignore passing (OK) results
--exit-code Exits with a non-zero status code if non-passing results are found
--skip-paginate AWS only. Skips pagination (for debugging).
--suppress SUPPRESS Suppress results matching the provided Regex. Format: pluginId:region:resourceId

Compliance

CloudSploit supports mapping of its plugins to particular compliance policies. To run the compliance scan, use the --compliance flag. For example:

$ ./index.js --compliance=hipaa
$ ./index.js --compliance=pci

Multiple compliance modes can be run at the same time:

$ ./index.js --compliance=cis1 --compliance=cis2

CloudSploit currently supports the following compliance mappings:

HIPAA

$ ./index.js --compliance=hipaa

HIPAA scans map CloudSploit plugins to the Health Insurance Portability and Accountability Act of 1996.

PCI

$ ./index.js --compliance=pci

PCI scans map CloudSploit plugins to the Payment Card Industry Data Security Standard.

CIS Benchmarks

$ ./index.js --compliance=cis
$ ./index.js --compliance=cis1
$ ./index.js --compliance=cis2

CIS Benchmarks are supported, both for Level 1 and Level 2 controls. Passing --compliance=cis will run both level 1 and level 2 controls.

Output Formats

CloudSploit supports output in several formats for consumption by other tools. If you do not specify otherwise, CloudSploit writes output to standard output (the console) as a table.

Note: You can pass multiple output formats and combine options for further customization. For example:

# Print a table to the console and save a CSV file
$ ./index.js --csv=file.csv --console=table

# Print text to the console and save a JSON and JUnit file while ignoring passing results
$ ./index.js --json=file.json --junit=file.xml --console=text --ignore-ok

Console Output

By default, CloudSploit results are printed to the console in a table format (with colors). You can override this and use plain text instead, by running:

$ ./index.js --console=text

Alternatively, you can suppress the console output entirely by running:

$ ./index.js --console=none

Ignoring Passing Results

You can ignore results that return an OK status by passing the --ignore-ok command-line argument.

CSV

$ ./index.js --csv=file.csv

JSON

$ ./index.js --json=file.json

JUnit XML

$ ./index.js --junit=file.xml

Collection Output

CloudSploit saves the data queried from the cloud provider APIs in JSON format, which can be saved alongside other files for debugging or historical purposes.

$ ./index.js --collection=file.json

Suppressions

Results can be suppressed by passing the --suppress flag (multiple options are supported) with the following format:

--suppress pluginId:region:resourceId

For example:

# Suppress all results for the acmValidation plugin
$ ./index.js --suppress acmValidation:*:*

# Suppress all us-east-1 region results
$ ./index.js --suppress *:us-east-1:*

# Suppress all results matching the regex "certificate/*" in all regions for all plugins
$ ./index.js --suppress *:*:certificate/*

Running a Single Plugin

The --plugin flag can be used if you only wish to run one plugin.

$ ./index.js --plugin acmValidation

Architecture

CloudSploit works in two phases. First, in the "collection" phase, it queries the cloud infrastructure APIs for various metadata about your account. Once all the necessary data is collected, the result is passed to the "scanning" phase. The scan uses the collected data to search for potential misconfigurations, risks, and other security issues, which form the resulting output.

Writing a Plugin

Please see our contribution guidelines and complete guide to writing CloudSploit plugins.

Writing a remediation

The --remediate flag can be used to run remediation for the plugins passed as its argument. It takes a list of plugin names. Please see our developing remediation guide for more details.
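A hypothetical invocation might look like the following; acmValidation is used only as an example plugin name, and the exact argument format is described in the remediation guide:

./index.js --remediate acmValidation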

Other Notes

For other details about the Aqua Wave SaaS product, AWS security policies, and more, click here.



truffleHog - Searches Through Git Repositories For High Entropy Strings And Secrets, Digging Deep Into Commit History

12 February 2022 at 20:30
By: Zion3R


Searches through git repositories for secrets, digging deep into commit history and branches. This is effective at finding secrets accidentally committed.


Join The Slack

Have questions? Feedback? Jump in slack and hang out with me

https://join.slack.com/t/trufflehog-community/shared_invite/zt-pw2qbi43-Aa86hkiimstfdKH9UCpPzQ

NEW

truffleHog previously functioned by running entropy checks on git diffs. This functionality still exists, but high signal regex checks have been added, and the ability to suppress entropy checking has also been added.

truffleHog --regex --entropy=False https://github.com/dxa4481/truffleHog.git

or

truffleHog file:///user/dxa4481/codeprojects/truffleHog/

With the --include_paths and --exclude_paths options, it is also possible to limit scanning to a subset of objects in the Git history by defining regular expressions (one per line) in a file to match the targeted object paths. To illustrate, see the example include and exclude files below:

include-patterns.txt:

src/
# lines beginning with "#" are treated as comments and are ignored
gradle/
# regexes must match the entire path, but can use python's regex syntax for
# case-insensitive matching and other advanced options
(?i).*\.(properties|conf|ini|txt|y(a)?ml)$
(.*/)?id_[rd]sa$

exclude-patterns.txt:

(.*/)?\.classpath$
.*\.jmx$
(.*/)?test/(.*/)?resources/

These filter files could then be applied by:

trufflehog --include_paths include-patterns.txt --exclude_paths exclude-patterns.txt file://path/to/my/repo.git

With these filters, issues found in files in the root-level src directory would be reported, unless they had the .classpath or .jmx extension, or if they were found in the src/test/dev/resources/ directory, for example. Additional usage information is provided when calling trufflehog with the -h or --help options.

These features help cut down on noise and make the tool easier to shove into a DevOps pipeline.

Install

pip install truffleHog

Customizing

Custom regexes can be added with the following flag --rules /path/to/rules. This should be a json file of the following format:

{
    "RSA private key": "-----BEGIN EC PRIVATE KEY-----"
}
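A run using such a custom rules file might look like this (the file path is an example):

# --rules replaces the default regex set with the ones in my-rules.json
trufflehog --regex --rules ./my-rules.json https://github.com/dxa4481/truffleHog.git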

Things like subdomain enumeration, s3 bucket detection, and other useful regexes highly custom to the situation can be added.

Feel free to also contribute high signal regexes upstream that you think will benefit the community. Things like Azure keys, Twilio keys, Google Compute keys, are welcome, provided a high signal regex can be constructed.

trufflehog's base rule set sources from https://github.com/dxa4481/truffleHogRegexes/blob/master/truffleHogRegexes/regexes.json

To explicitly allow particular secrets (e.g. self-signed keys used only for local testing) you can provide an allow list --allow /path/to/allow in the following format:

{
    "local self signed test key": "-----BEGIN EC PRIVATE KEY-----\nfoobar123\n-----END EC PRIVATE KEY-----",
    "git cherry pick SHAs": "regex:Cherry picked from .*"
}

Note that values beginning with regex: will be used as regular expressions. Values without this will be literal, with some automatic conversions (e.g. flexible newlines).

How it works

This module will go through the entire commit history of each branch and check each diff from each commit for secrets, both by regex and by entropy. For entropy checks, truffleHog evaluates the Shannon entropy for both the base64 character set and the hexadecimal character set for every blob of text greater than 20 characters composed of those character sets in each diff. If at any point a high-entropy string longer than 20 characters is detected, it is printed to the screen.

Help

usage: trufflehog [-h] [--json] [--regex] [--rules RULES] [--allow ALLOW]
[--entropy DO_ENTROPY] [--since_commit SINCE_COMMIT]
[--max_depth MAX_DEPTH]
git_url

Find secrets hidden in the depths of git.

positional arguments:
git_url URL for secret searching

optional arguments:
-h, --help show this help message and exit
--json Output in JSON
--regex Enable high signal regex checks
--rules RULES Ignore default regexes and source from json list file
--allow ALLOW Explicitly allow regexes from json list file
--entropy DO_ENTROPY Enable entropy checks
--since_commit SINCE_COMMIT
Only scan from a given commit hash
--branch BRANCH Scans only the selected branch
--max_depth MAX_DEPTH
The max commit depth to go back when searching for
secrets
-i INCLUDE_PATHS_FILE, --include_paths INCLUDE_PATHS_FILE
File with regular expressions (one per line), at least
one of which must match a Git object path in order for
it to be scanned; lines starting with "#" are treated
as comments and are ignored. If empty or not provided
(default), all Git object paths are included unless
otherwise excluded via the --exclude_paths option.
-x EXCLUDE_PATHS_FILE, --exclude_paths EXCLUDE_PATHS_FILE
File with regular expressions (one per line), none of
which may match a Git object path in order for it to
be scanned; lines starting with "#" are treated as
comments and are ignored. If empty or not provided
(default), no Git object paths are excluded unless
effectively excluded via the --include_paths option.

Running with Docker

First, enter the directory containing the git repository

cd /path/to/git

To launch trufflehog with the Docker image, run the following:

docker run --rm -v "$(pwd):/proj" dxa4481/trufflehog file:///proj

-v mounts the current working dir (pwd) to the /proj dir in the Docker container

file:///proj references that very same /proj dir in the container (which is also set as the default working dir in the Dockerfile)

Wishlist

  • A way to detect and not scan binary diffs
  • Don't rescan diffs if already looked at in another branch
  • A since commit X feature
  • Print the file affected


Get-RBCD-Threaded - Tool To Discover Resource-Based Constrained Delegation Attack Paths In Active Directory Environments

13 February 2022 at 11:30
By: Zion3R


Tool to discover Resource-Based Constrained Delegation attack paths in Active Directory Environments

Based almost entirely on wonderful blog posts "Wagging the Dog: Abusing Resource-Based Constrained Delegation to Attack Active Directory" by Elad Shamir and "A Case Study in Wagging the Dog: Computer Takeover" by harmj0y. Read these two blog posts if you actually want to understand what is going on here. I honestly only half understand it all myself (and that's being generous).

I don't know C# well, so I figured out how to communicate with a domain in C# by reading through the source code of SharpSploit and SharpView.


How it works

Get-RBCD-Threaded will query all Active Directory users, groups (minus privileged groups like "Domain Admins" and "BUILTIN\Administrators"), and computer objects in your current domain and compile a list of their SIDs. Get-RBCD-Threaded will then query AD for all DACLs on the computer objects in the domain. Each ACE in the DACLs will be checked to see if one of the user/group/computer SIDs has either "GenericAll", "GenericWrite", "WriteOwner", or "WriteDacl" privileges on the computer object, or if the SIDs have "WriteProp" permissions on the ms-DS-Allowed-To-Act-On-Behalf-Of-Other-Identity attribute (GUID: 3f78c3e5-f79a-46bd-a0b8-9d18116ddc79). If one does, then, well, you my friend are on your way to a Resource-Based Constrained Delegation attack!

Usage

Compile in Visual Studio. This uses Parallel.ForEach to speed up searching through the DACL objects, so .NET v4 is the minimum required.

Options

-u|-username=, Username to authenticate as

-p|-password=, Password for the user

-d|-domain=, Fully qualified domain name to authenticate to

-s|-searchforest, Discover domains and forests through trust relationships. Enumerate all domains and forests

-pwdlastset=, Filter computers based on pwdLastSet to remove stale computer objects. If you set this to 90, it will filter out computer objects whose pwdLastSet date is more than 90 days ago

-i|-insecure, Force insecure LDAP connect if LDAPS is causing connection issues.

-o|-outputfile=, Output to a CSV file. Provide the full path to the file and the file name.

-h|-?|-help, Show the help options

You can now specify the username, password, and domain to authenticate to. If the u/p/d options are blank, Get-RBCD-Threaded will attempt to authenticate to the domain in your current user context.

-o will output to a CSV file. Provide the full file path and file name to save the output to.

The default search specifies that port 636 be used to force LDAPS. This may cause issues. If you get an error saying something about the server not being available or similar, try the "-i" flag to remove the 636 port from the connect string.

"pwdLastSet" has been added as a filtering option. In larger environments you can get a lot of stale computer objects that no longer exist as the "destination" object int he ACL, and can't really be used for the RBCD attack (at least not that I am aware of). Set pwdLastSet to a number of days. Example: "-pwdlastset=90" will filter out any computer objects from your results where the pwdLastSet date is greater or equal to 90 days ago from the current date and time.

Tested in an environment with 20k+ users, groups, and computers (over 60k total objects). Get-RBCD-Threaded took ~60 seconds to complete. By comparison, my hacked-together PowerView commands in this gist to perform a similar search ran for several hours and never completed.

This tool will not perform the delegation attack for you. You'll need to read Elad Shamir's and harmj0y's blogs to figure out how to do that. This will only help you find possible targets for the RBCD attack.

Example usage from my AD lab:


Detections

This tool does nothing more than query Active Directory using LDAP queries, which may not be easy to detect. Netflow could possibly be used to detect large numbers of LDAP queries / traffic to one system.

The other possible way to detect this is through honeypot accounts. The idea would be to create a computer object that some user / group has write privileges to. The RBCD attack relies on modifying a computer object and then delegating kerberos tickets to it. The possible points of detection for the honeypot computer object could be:

  1. Monitor modifications to the honeypot computer object, specifically to the "msds-allowedtoactonbehalfofotheridentity" property
  2. Monitor for kerberos tickets requested for services on the honeypot computer object, specifically any kerberos tickets for administrator users

I made this tool to help me on penetration tests. However, defenders / blue teams / sysadmins can easily use this to help find weaknesses in their environments and (hopefully) move to remediate them.



โŒ