MasterParser is a robust Digital Forensics and Incident Response (DFIR) tool built for analyzing Linux logs in the /var/log directory. Designed to speed up the investigation of security incidents on Linux systems, MasterParser scans supported logs, such as auth.log, and extracts critical details including SSH logins, user creations, event names, IP addresses and much more. The generated summary presents this information in a clear, concise format, making it quickly accessible to Incident Responders. Beyond its immediate utility for DFIR teams, MasterParser is also valuable to the broader InfoSec and IT community for fast, comprehensive assessment of security events on Linux platforms.
MasterParser Wallpapers
Love MasterParser as much as we do? Dive into the fun and jazz up your screen with our exclusive MasterParser wallpaper! Click the link below and get ready to add a splash of excitement to your device! Download Wallpaper
Supported Logs Format
This is the list of supported log formats within the /var/log directory that MasterParser can analyze. Future updates will add support for additional log formats.

| Supported Log Formats List |
| --- |
| auth.log |
Feature & Log Format Requests:
If you wish to propose a new feature or log format, submit your request by creating an issue: Click here to create a request
How To Use?
How To Use - Text Guide
From this GitHub repository, press "<> Code" and then "Download ZIP".
From "MasterParser-main.zip", extract the folder "MasterParser-main" to your Desktop.
Open a PowerShell terminal and navigate to the "MasterParser-main" folder.
```powershell
# How to navigate to the "MasterParser-main" folder from the PS terminal
PS C:\> cd "C:\Users\user\Desktop\MasterParser-main\"
```
Now you can execute the tool. For example, to see the tool's command menu:
```powershell
# How to show the MasterParser menu
PS C:\Users\user\Desktop\MasterParser-main> .\MasterParser.ps1 -O Menu
```
To run the tool, put all your /var/log/* logs into the 01-Logs folder, and execute the tool like this:
```powershell
# How to run MasterParser
PS C:\Users\user\Desktop\MasterParser-main> .\MasterParser.ps1 -O Start
```
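MasterParser itself is a PowerShell script; purely as an illustration of the kind of auth.log extraction it performs, here is a small Python sketch. The regex and field names are assumptions for demonstration, not the tool's actual logic:

```python
import re

# Hypothetical pattern for successful SSH password/key logins in auth.log.
SSH_LOGIN = re.compile(
    r"Accepted (?P<method>\w+) for (?P<user>\S+) "
    r"from (?P<ip>\d{1,3}(?:\.\d{1,3}){3}) port (?P<port>\d+)"
)

def extract_ssh_logins(lines):
    """Return one dict per accepted SSH login found in the log lines."""
    return [m.groupdict() for line in lines if (m := SSH_LOGIN.search(line))]

sample = [
    "Jan  1 12:00:00 host sshd[123]: Accepted password for alice from 10.0.0.5 port 52311 ssh2",
    "Jan  1 12:00:01 host sshd[124]: Failed password for root from 10.0.0.9 port 52312 ssh2",
]
print(extract_ssh_logins(sample))
```

A real parser also has to handle key-based logins, user creations, and many other event shapes; this only shows the general pattern-matching approach.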
The C2 Cloud is a robust web-based C2 framework, designed to simplify the life of penetration testers. It allows easy access to compromised backdoors, just like accessing an EC2 instance in the AWS cloud. It can manage several simultaneous backdoor sessions with a user-friendly interface.
C2 Cloud is open source. Security analysts can confidently perform simulations, gaining valuable experience and contributing to the proactive defense posture of their organizations.
- Anywhere Access: Reach the C2 Cloud from any location.
- Multiple Backdoor Sessions: Manage and support multiple sessions effortlessly.
- One-Click Backdoor Access: Seamlessly navigate to backdoors with a simple click.
- Session History Maintenance: Track and retain complete command and response history for comprehensive analysis.
Tech Stack
- Flask: Serving web and API traffic, facilitating reverse HTTP(S) requests.
- TCP Socket: Serving reverse TCP requests for enhanced functionality.
- Nginx: Effortlessly routing traffic between web and backend systems.
- Redis PubSub: Serving as a robust message broker for seamless communication.
- WebSockets: Delivering real-time updates to browser clients for enhanced user experience.
- Postgres DB: Ensuring persistent storage for seamless continuity.
Architecture
Application setup
Management port: 9000
Reverse HTTP port: 8000
Reverse TCP port: 8888
Clone the repo
Optional: Update chat_id and bot_token in c2-telegram/config.yml
Execute docker-compose up -d to start the containers. Note: the c2-api service will not start until the database is initialized; if you receive 500 errors, retry after a short wait.
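Rather than retrying by hand, the startup wait can be automated with a small readiness poll. This is a generic sketch; the health URL is an assumption, not a documented C2 Cloud endpoint:

```python
import time
import urllib.request
import urllib.error

def wait_for(url, attempts=30, delay=2.0, probe=None):
    """Poll `url` until it answers without an error; return True on success.

    `probe` is injectable for testing; by default it performs a real request.
    """
    def default_probe(u):
        try:
            with urllib.request.urlopen(u, timeout=5) as resp:
                return resp.status < 500
        except (urllib.error.URLError, OSError):
            return False

    probe = probe or default_probe
    for _ in range(attempts):
        if probe(url):
            return True
        time.sleep(delay)
    return False

# Hypothetical usage once the stack is up:
# wait_for("http://localhost:9000/")
```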
Automate the process of analyzing web server logs with the Python Web Log Analyzer. This powerful tool is designed to enhance security by identifying and detecting various types of cyber attacks within your server logs. Stay ahead of potential threats with features that include:
Features
Attack Detection: Identify and flag potential Cross-Site Scripting (XSS), Local File Inclusion (LFI), Remote File Inclusion (RFI), and other common web application attacks.
Rate Limit Monitoring: Detect suspicious patterns in multiple requests made in a short time frame, helping to identify brute-force attacks or automated scanning tools.
Automated Scanner Detection: Keep your web applications secure by identifying requests associated with known automated scanning tools or vulnerability scanners.
User-Agent Analysis: Analyze and identify potentially malicious User-Agent strings, allowing you to spot unusual or suspicious behavior.
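The detection features above all reduce to matching request data against known attack signatures. A minimal sketch of that idea follows; the patterns are illustrative assumptions, not the tool's actual rule set:

```python
import re

# Illustrative signatures only; real rule sets are far more extensive.
SIGNATURES = {
    "XSS": re.compile(r"<script|%3Cscript", re.IGNORECASE),
    "LFI": re.compile(r"\.\./|%2e%2e%2f", re.IGNORECASE),
    "RFI": re.compile(r"=https?://", re.IGNORECASE),
    "Scanner": re.compile(r"nikto|sqlmap|nmap", re.IGNORECASE),
}

def classify(log_line):
    """Return the names of all signatures that match a single access-log line."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(log_line)]

line = ('10.0.0.7 - - [01/Jan/2024] "GET /index.php?page=../../etc/passwd '
        'HTTP/1.1" 200 512 "-" "sqlmap/1.7"')
print(classify(line))  # → ['LFI', 'Scanner']
```

Rate-limit monitoring would additionally group lines by source IP over a sliding time window, which is omitted here for brevity.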
Future Features
This project is actively developed, and future features may include:
IP Geolocation: Identify the geographic location of IP addresses in the logs.
Real-time Monitoring: Implement real-time monitoring capabilities for immediate threat detection.
After cloning the repository to your local machine, you can start the application by executing python3 WLA-cli.py. Simple usage example:

```bash
python3 WLA-cli.py -l LogSampls/access.log -t
```

Use -h or --help for more detailed usage examples:

```bash
python3 WLA-cli.py -h
```
ThievingFox is a collection of post-exploitation tools to gather credentials from various password managers and Windows utilities. Each module leverages a specific method of injecting into the target process, and then hooks internal functions to gather credentials.
A .NET development environment must also be installed. From Visual Studio, navigate to Tools > Get Tools And Features > Install ".NET desktop development".
Finally, Python dependencies must be installed:

```bash
pip install -r client/requirements.txt
```
ThievingFox works with Python >= 3.11.
NOTE: On a Windows host, in order to use the KeePass module, msbuild must be available in the PATH. This can be achieved by running the client from within a Visual Studio Developer PowerShell (Tools > Command Line > Developer PowerShell).
Targets
All modules have been tested on the following Windows versions:

| Windows Version |
| --- |
| Windows Server 2022 |
| Windows Server 2019 |
| Windows Server 2016 |
| Windows Server 2012R2 |
| Windows 10 |
| Windows 11 |
[!CAUTION] Modules have not been tested on other versions, and are expected not to work.
| Application | Injection Method |
| --- | --- |
| KeePass.exe | AppDomainManager Injection |
| KeePassXC.exe | DLL Proxying |
| LogonUI.exe (Windows Login Screen) | COM Hijacking |
| consent.exe (Windows UAC Popup) | COM Hijacking |
| mstsc.exe (Windows default RDP client) | COM Hijacking |
| RDCMan.exe (Sysinternals' RDP client) | COM Hijacking |
| MobaXTerm.exe (3rd party RDP client) | COM Hijacking |
Usage
[!CAUTION] Although I tried to ensure that these tools do not impact the stability of the targeted applications, inline hooking and library injection are unsafe and this might result in a crash, or the application being unstable. If that were the case, using the cleanup module on the target should be enough to ensure that the next time the application is launched, no injection/hooking is performed.
ThievingFox contains 3 main modules : poison, cleanup and collect.
Poison
For each application specified in the command-line parameters, the poison module retrieves the original library that is going to be hijacked (for COM hijacking and DLL proxying), compiles a library that matches the properties of the original DLL, uploads it to the server, and modifies the registry if needed to perform COM hijacking.
To speed up the process of compilation of all libraries, a cache is maintained in client/cache/.
--mstsc, --rdcman, and --mobaxterm have a specific option, respectively --mstsc-poison-hkcr, --rdcman-poison-hkcr, and --mobaxterm-poison-hkcr. If one of these options is specified, the COM hijacking will replace the registry key in the HKCR hive, meaning all users will be impacted. By default, only currently logged-in users are impacted (all users that have an HKCU hive).
--keepass and --keepassxc have specific options, --keepass-path, --keepass-share, and --keepassxc-path, --keepassxc-share, to specify where these applications are installed, if it's not the default installation path. This is not required for other applications, since COM hijacking is used.
The KeePass modules requires the Visual C++ Redistributable to be installed on the target.
Multiple applications can be specified at once, or, the --all flag can be used to target all applications.
[!IMPORTANT] Remember to clean the cache if you ever change the --tempdir parameter, since the directory name is embedded inside native DLLs.
```
positional arguments:
  target                Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]

options:
  -h, --help            show this help message and exit
  -hashes HASHES, --hashes HASHES
                        LM:NT hash
  -aesKey AESKEY, --aesKey AESKEY
                        AES key to use for Kerberos Authentication
  -k                    Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM
                        authentication is performed, to retrieve the OS version.
  -dc-ip DC_IP, --dc-ip DC_IP
                        IP Address of the domain controller
  -no-pass, --no-pass   Do not prompt for password
  --tempdir TEMPDIR     The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
  --keepass             Try to poison KeePass.exe
  --keepass-path KEEPASS_PATH
                        The path where KeePass is installed, without the share name (Default: /Program Files/KeePass Password Safe 2/)
  --keepass-share KEEPASS_SHARE
                        The share on which KeePass is installed (Default: c$)
  --keepassxc           Try to poison KeePassXC.exe
  --keepassxc-path KEEPASSXC_PATH
                        The path where KeePassXC is installed, without the share name (Default: /Program Files/KeePassXC/)
  --keepassxc-share KEEPASSXC_SHARE
                        The share on which KeePassXC is installed (Default: c$)
  --mstsc               Try to poison mstsc.exe
  --mstsc-poison-hkcr   Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive
                        for mstsc, which will also work for user that are currently not logged in (Default: False)
  --consent             Try to poison Consent.exe
  --logonui             Try to poison LogonUI.exe
  --rdcman              Try to poison RDCMan.exe
  --rdcman-poison-hkcr  Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive
                        for RDCMan, which will also work for user that are currently not logged in (Default: False)
  --mobaxterm           Try to poison MobaXTerm.exe
  --mobaxterm-poison-hkcr
                        Instead of poisonning all currently logged in users' HKCU hives, poison the HKCR hive
                        for MobaXTerm, which will also work for user that are currently not logged in (Default: False)
  --all                 Try to poison all applications
```
Cleanup
For each application specified in the command-line parameters, the cleanup module first removes the poisoning artifacts that force the target application to load the hooking library. Then, it tries to delete the libraries that were uploaded to the remote host.
For applications that support poisoning of both the HKCU and HKCR hives, both are cleaned up regardless.
Multiple applications can be specified at once, or, the --all flag can be used to cleanup all applications.
It does not clean extracted credentials on the remote host.
[!IMPORTANT] If the targeted application is in use while the cleanup module is run, the DLLs that were dropped on the target cannot be deleted. Nonetheless, the cleanup module will revert the configuration that enables the injection, which should ensure that the next time the application is launched, no injection is performed. Files that cannot be deleted by ThievingFox are logged.
```
positional arguments:
  target                Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]

options:
  -h, --help            show this help message and exit
  -hashes HASHES, --hashes HASHES
                        LM:NT hash
  -aesKey AESKEY, --aesKey AESKEY
                        AES key to use for Kerberos Authentication
  -k                    Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM
                        authentication is performed, to retrieve the OS version.
  -dc-ip DC_IP, --dc-ip DC_IP
                        IP Address of the domain controller
  -no-pass, --no-pass   Do not prompt for password
  --tempdir TEMPDIR     The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
  --keepass             Try to cleanup all poisonning artifacts related to KeePass.exe
  --keepass-share KEEPASS_SHARE
                        The share on which KeePass is installed (Default: c$)
  --keepass-path KEEPASS_PATH
                        The path where KeePass is installed, without the share name (Default: /Program Files/KeePass Password Safe 2/)
  --keepassxc           Try to cleanup all poisonning artifacts related to KeePassXC.exe
  --keepassxc-path KEEPASSXC_PATH
                        The path where KeePassXC is installed, without the share name (Default: /Program Files/KeePassXC/)
  --keepassxc-share KEEPASSXC_SHARE
                        The share on which KeePassXC is installed (Default: c$)
  --mstsc               Try to cleanup all poisonning artifacts related to mstsc.exe
  --consent             Try to cleanup all poisonning artifacts related to Consent.exe
  --logonui             Try to cleanup all poisonning artifacts related to LogonUI.exe
  --rdcman              Try to cleanup all poisonning artifacts related to RDCMan.exe
  --mobaxterm           Try to cleanup all poisonning artifacts related to MobaXTerm.exe
  --all                 Try to cleanup all poisonning artifacts related to all applications
```
Collect
For each application specified in the command-line parameters, the collect module retrieves the output files corresponding to the application that are stored on the remote host inside C:\Windows\Temp\<tempdir>, and decrypts them. The files are then deleted from the remote host, and the retrieved data is stored in client/output/.
Multiple applications can be specified at once, or, the --all flag can be used to collect logs from all applications.
```
positional arguments:
  target                Target machine or range [domain/]username[:password]@<IP or FQDN>[/CIDR]

options:
  -h, --help            show this help message and exit
  -hashes HASHES, --hashes HASHES
                        LM:NT hash
  -aesKey AESKEY, --aesKey AESKEY
                        AES key to use for Kerberos Authentication
  -k                    Use kerberos authentication. For LogonUI, mstsc and consent modules, an anonymous NTLM
                        authentication is performed, to retrieve the OS version.
  -dc-ip DC_IP, --dc-ip DC_IP
                        IP Address of the domain controller
  -no-pass, --no-pass   Do not prompt for password
  --tempdir TEMPDIR     The name of the temporary directory to use for DLLs and output (Default: ThievingFox)
  --keepass             Collect KeePass.exe logs
  --keepassxc           Collect KeePassXC.exe logs
  --mstsc               Collect mstsc.exe logs
  --consent             Collect Consent.exe logs
  --logonui             Collect LogonUI.exe logs
  --rdcman              Collect RDCMan.exe logs
  --mobaxterm           Collect MobaXTerm.exe logs
  --all                 Collect logs from all applications
```
TL;DR: Galah (pronounced 'guh-laa') is an LLM (Large Language Model) powered web honeypot, currently compatible with the OpenAI API, that can mimic various applications and dynamically respond to arbitrary HTTP requests.
Description
Named after the clever Australian parrot known for its mimicry, Galah mirrors this trait in its functionality. Unlike traditional web honeypots that rely on a manual and limiting method of emulating numerous web applications or vulnerabilities, Galah adopts a novel approach. This LLM-powered honeypot mimics various web applications by dynamically crafting relevant (and occasionally foolish) responses, including HTTP headers and body content, to arbitrary HTTP requests. Fun fact: in Aussie English, Galah also means fool!
I've deployed a cache for the LLM-generated responses (the cache duration can be customized in the config file) to avoid generating multiple responses for the same request and to reduce the cost of the OpenAI API. The cache stores responses per port, meaning if you probe a specific port of the honeypot, the generated response won't be returned for the same request on a different port.
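The per-port caching described above can be illustrated with a small Python sketch: responses are keyed by (port, request-hash), so an identical request on another port misses the cache. The key derivation and TTL handling here are assumptions for illustration, not Galah's actual implementation:

```python
import hashlib
import time

class ResponseCache:
    """Cache generated responses keyed by (port, request-hash) with a TTL."""

    def __init__(self, ttl_seconds=3600, clock=time.time):
        self.ttl = ttl_seconds
        self.clock = clock  # injectable for testing
        self._store = {}

    @staticmethod
    def _key(port, method, path):
        digest = hashlib.sha256(f"{method} {path}".encode()).hexdigest()
        return (port, digest)

    def get(self, port, method, path):
        entry = self._store.get(self._key(port, method, path))
        if entry is None:
            return None
        stored_at, response = entry
        if self.clock() - stored_at > self.ttl:
            return None  # expired: regenerate via the LLM
        return response

    def put(self, port, method, path, response):
        self._store[self._key(port, method, path)] = (self.clock(), response)

cache = ResponseCache(ttl_seconds=60)
cache.put(8080, "GET", "/.git/config", "403 Forbidden")
print(cache.get(8080, "GET", "/.git/config"))  # hit
print(cache.get(8443, "GET", "/.git/config"))  # miss: same request, different port
```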
The prompt is the most crucial part of this honeypot! You can update the prompt in the config file, but be sure not to change the part that instructs the LLM to generate the response in the specified JSON format.
Note: Galah was a fun weekend project I created to evaluate the capabilities of LLMs in generating HTTP messages, and it is not intended for production use. The honeypot may be fingerprinted based on its response time, non-standard, or sometimes weird responses, and other network-based techniques. Use this tool at your own risk, and be sure to set usage limits for your OpenAI API.
Future Enhancements
Rule-Based Response: The new version of Galah will employ a dynamic, rule-based approach, adding more control over response generation. This will further reduce OpenAI API costs and increase the accuracy of the generated responses.
Response Database: It will enable you to generate and import a response database. This ensures the honeypot only turns to the OpenAI API for unknown or new requests. I'm also working on cleaning up and sharing my own database.
```
2024/01/01 04:29:10 Starting HTTP server on port 8080
2024/01/01 04:29:10 Starting HTTP server on port 8888
2024/01/01 04:29:10 Starting HTTPS server on port 8443 with TLS profile: profile1_selfsigned
2024/01/01 04:29:10 Starting HTTPS server on port 443 with TLS profile: profile1_selfsigned
```
```
2024/01/01 04:35:57 Received a request for "/.git/config" from [::1]:65434
2024/01/01 04:35:57 Request cache miss for "/.git/config": Not found in cache
2024/01/01 04:35:59 Generated HTTP response: {"Headers": {"Content-Type": "text/plain", "Server": "Apache/2.4.41 (Ubuntu)", "Status": "403 Forbidden"}, "Body": "Forbidden\nYou don't have permission to access this resource."}
2024/01/01 04:35:59 Sending the crafted response to [::1]:65434
```
```
^C2024/01/01 04:39:27 Received shutdown signal. Shutting down servers...
2024/01/01 04:39:27 All servers shut down gracefully.
```
```
% curl http://localhost:8888/are-you-a-honeypot
No, I am a server.
```
JSON log record:
```json
{"timestamp":"2024-01-01T05:50:43.792479","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"61982","sensorName":"home-sensor","port":"8888","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/are-you-a-honeypot","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Length":"20","Content-Type":"text/plain","Server":"Apache/2.4.41 (Ubuntu)"},"body":"No, I am a server."}}
```
```
% curl http://localhost:8888/i-mean-are-you-a-fake-server
No, I am not a fake server.
```
JSON log record:
```json
{"timestamp":"2024-01-01T05:51:40.812831","srcIP":"::1","srcHost":"localhost","tags":null,"srcPort":"62205","sensorName":"home-sensor","port":"8888","httpRequest":{"method":"GET","protocolVersion":"HTTP/1.1","request":"/i-mean-are-you-a-fake-server","userAgent":"curl/7.71.1","headers":"User-Agent: [curl/7.71.1], Accept: [*/*]","headersSorted":"Accept,User-Agent","headersSortedSha256":"cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9","body":"","bodySha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},"httpResponse":{"headers":{"Connection":"close","Content-Type":"text/plain","Server":"LocalHost/1.0"},"body":"No, I am not a fake server."}}
```
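Two hash fields in these log records are easy to sanity-check. The bodySha256 of an empty request body is the well-known SHA-256 of the empty string, and headersSorted looks like the sorted header names joined by commas; that headersSortedSha256 hashes exactly that string is an assumption based on the field names:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Empty body -> the well-known empty-string digest seen in the records above.
print(sha256_hex(b""))
# e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855

# headersSorted appears to be sorted header names joined by commas; hashing
# that string is an assumption about how headersSortedSha256 is derived.
headers = ["User-Agent", "Accept"]
headers_sorted = ",".join(sorted(headers))
print(headers_sorted)  # Accept,User-Agent
print(sha256_hex(headers_sorted.encode()))
```

Sorting header names before hashing makes the fingerprint independent of header order, which is useful for clustering scanners by request shape.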
CrimsonEDR is an open-source project engineered to identify specific malware patterns, offering a tool for honing skills in circumventing Endpoint Detection and Response (EDR). By leveraging diverse detection methods, it empowers users to deepen their understanding of security evasion tactics.
Features
| Detection | Description |
| --- | --- |
| Direct Syscall | Detects the usage of direct system calls, often employed by malware to bypass traditional API hooks. |
| NTDLL Unhooking | Identifies attempts to unhook functions within the NTDLL library, a common evasion technique. |
| AMSI Patch | Detects modifications to the Anti-Malware Scan Interface (AMSI) through byte-level analysis. |
| ETW Patch | Detects byte-level alterations to Event Tracing for Windows (ETW), commonly manipulated by malware to evade detection. |
| PE Stomping | Identifies instances of PE (Portable Executable) stomping. |
| Reflective PE Loading | Detects the reflective loading of PE files, a technique employed by malware to avoid static analysis. |
| Unbacked Thread Origin | Identifies threads originating from unbacked memory regions, often indicative of malicious activity. |
| Unbacked Thread Start Address | Detects threads with start addresses pointing to unbacked memory, a potential sign of code injection. |
| API Hooking | Places a hook on the NtWriteVirtualMemory function to monitor memory modifications. |
| Custom Pattern Search | Allows users to search for specific patterns provided in a JSON file, facilitating the identification of known malware signatures. |
Installation
To get started with CrimsonEDR, follow these steps:
Clone the repository:

```bash
git clone https://github.com/Helixo32/CrimsonEDR
```

Compile the project:

```bash
cd CrimsonEDR; chmod +x compile.sh; ./compile.sh
```
โ ๏ธ Warning
Windows Defender and other antivirus programs may flag the DLL as malicious, because it contains the byte patterns used to verify whether AMSI has been patched. Please whitelist the DLL or temporarily disable your antivirus when using CrimsonEDR to avoid interruptions.
Usage
To use CrimsonEDR, follow these steps:
Make sure the ioc.json file is placed in the current directory from which the executable being monitored is launched. For example, if you launch your executable to monitor from C:\Users\admin\, the DLL will look for ioc.json in C:\Users\admin\ioc.json. Currently, ioc.json contains patterns related to msfvenom. You can easily add your own in the following format:
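The exact ioc.json schema is not reproduced here, so the file layout below is a hypothetical illustration. The scanning idea itself is simple: load byte patterns from JSON and search for them in a memory buffer. The example signature is the widely documented AMSI patch stub (mov eax, 0x80070057; ret), used here only as sample data:

```python
import json

# Hypothetical ioc.json shape: {"patterns": [{"name": ..., "hex": ...}]}
IOC_JSON = '''
{
  "patterns": [
    {"name": "amsi_patch_stub", "hex": "b857000780c3"}
  ]
}
'''

def load_patterns(text):
    return [(p["name"], bytes.fromhex(p["hex"])) for p in json.loads(text)["patterns"]]

def scan(memory: bytes, patterns):
    """Return the names of every pattern found anywhere in the memory buffer."""
    return [name for name, sig in patterns if sig in memory]

patterns = load_patterns(IOC_JSON)
memory = b"\x90\x90" + bytes.fromhex("b857000780c3") + b"\x90"
print(scan(memory, patterns))  # → ['amsi_patch_stub']
```

CrimsonEDR's real detection works inside the monitored process; this sketch only shows the JSON-driven pattern-matching step.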
Status Checker is a Python script that checks the status of one or multiple URLs/domains and categorizes them based on their HTTP status codes. Version 1.0.0. Created by BLACK-SCORP10 (t.me/BLACK-SCORP10).
Features
Check the status of single or multiple URLs/domains.
Asynchronous HTTP requests for improved performance.
Color-coded output for better visualization of status codes.
Progress bar when checking multiple URLs.
Save results to an output file.
Error handling for inaccessible URLs and invalid responses.
Command-line interface for easy usage.
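The categorization step behind the color-coded output can be sketched independently of the HTTP layer. The category labels below are assumptions about how the script groups codes, not its exact strings:

```python
def categorize(status_code: int) -> str:
    """Map an HTTP status code to a coarse category bucket."""
    if 100 <= status_code < 200:
        return "1xx Informational"
    if 200 <= status_code < 300:
        return "2xx Success"
    if 300 <= status_code < 400:
        return "3xx Redirection"
    if 400 <= status_code < 500:
        return "4xx Client Error"
    if 500 <= status_code < 600:
        return "5xx Server Error"
    return "Unknown"

for code in (200, 301, 404, 503):
    print(code, categorize(code))
```

In the real tool, each URL is fetched asynchronously and its final status code is passed through a mapping like this before colorized printing.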
Installation
Clone the repository:
```bash
git clone https://github.com/your_username/status-checker.git
cd status-checker
```
The Cyber Security Awareness Framework (CSAF) is a structured approach aimed at enhancing cybersecurity awareness and understanding among individuals, organizations, and communities. It provides guidance for the development of effective cybersecurity awareness programs, covering key areas such as assessing awareness needs, creating educational materials, conducting training and simulations, implementing communication campaigns, and measuring awareness levels. By adopting this framework, organizations can foster a robust security culture, enhance their ability to detect and respond to cyber threats, and mitigate the risks associated with attacks and security breaches.
Espionage is a network packet sniffer that intercepts large amounts of data being passed through an interface. The tool allows users to run normal and verbose traffic analysis that shows a live feed of traffic, revealing packet direction, protocols, flags, etc. Espionage can also spoof ARP so that all data sent by the target gets redirected through the attacker (MiTM). Espionage supports IPv4, TCP/UDP, ICMP, and HTTP. Espionage was written in Python 3.8 but also supports version 3.6. This is the first version of the tool, so please contact the developer if you want to help contribute and add more to Espionage. Note: this is not a Scapy wrapper; scapylib only assists with HTTP requests and ARP.
```bash
sudo python3 espionage.py --normal --iface wlan0 -f capture_output.pcap
```
Command 1 executes a clean packet sniff and saves the output to the pcap file provided. Replace wlan0 with whatever your network interface is.

```bash
sudo python3 espionage.py --verbose --iface wlan0 -f capture_output.pcap
```
Command 2 executes a more detailed (verbose) packet sniff and saves the output to the pcap file provided.

```bash
sudo python3 espionage.py --normal --iface wlan0
```
Command 3 still executes a clean packet sniff; however, it does not save the data to a pcap file. Saving the sniff is recommended.

```bash
sudo python3 espionage.py --verbose --httpraw --iface wlan0
```
Command 4 executes a verbose packet sniff and also shows raw HTTP/TCP packet data in bytes.

```bash
sudo python3 espionage.py --target <target-ip-address> --iface wlan0
```
Command 5 ARP spoofs the target IP address, and all data being sent is routed back to the attacker's machine (you/localhost).

```bash
sudo python3 espionage.py --iface wlan0 --onlyhttp
```
Command 6 only displays sniffed packets on port 80 utilizing the HTTP protocol.

```bash
sudo python3 espionage.py --iface wlan0 --onlyhttpsecure
```
Command 7 only displays sniffed packets on port 443 utilizing the HTTPS (secured) protocol.

```bash
sudo python3 espionage.py --iface wlan0 --urlonly
```
Command 8 only sniffs and returns URLs visited by the victim (works best with sslstrip).
Press Ctrl+C in-order to stop the packet interception and write the output to file.
```
optional arguments:
  -h, --help            show this help message and exit
  --version             returns the packet sniffers version.
  -n, --normal          executes a cleaner interception, less sophisticated.
  -v, --verbose         (recommended) executes a more in-depth packet interception/sniff.
  -url, --urlonly       only sniffs visited urls using http/https.
  -o, --onlyhttp        sniffs only tcp/http data, returns urls visited.
  -ohs, --onlyhttpsecure
                        sniffs only https data, (port 443).
  -hr, --httpraw        displays raw packet data (byte order) recieved or sent on port 80.

(Recommended) arguments for data output (.pcap):
  -f FILENAME, --filename FILENAME
                        name of file to store the output (make extension '.pcap').
```
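Under the hood, a sniffer like this decodes raw packet headers field by field. As a minimal, illustrative sketch (not Espionage's actual code), here is standard-library parsing of the fixed 20-byte IPv4 header:

```python
import struct
import socket

def parse_ipv4_header(raw: bytes):
    """Decode the fixed 20-byte portion of an IPv4 header."""
    version_ihl, tos, total_len, ident, flags_frag, ttl, proto, checksum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": version_ihl >> 4,
        "header_len": (version_ihl & 0x0F) * 4,  # IHL is in 32-bit words
        "ttl": ttl,
        "protocol": proto,                       # 6 = TCP, 17 = UDP, 1 = ICMP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# A hand-crafted header: IPv4, IHL 5, TTL 64, TCP, 10.0.0.5 -> 10.0.0.1
raw = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                  socket.inet_aton("10.0.0.5"), socket.inet_aton("10.0.0.1"))
print(parse_ipv4_header(raw))
```

A live sniffer would read such bytes from a raw socket and then dispatch on the protocol field to parse the TCP/UDP/ICMP payload.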
The developer of this program, Josh Schiavone, wrote the following code for educational and ethical purposes only. The data sniffed/intercepted is not to be used for malicious intent. Josh Schiavone is not responsible or liable for misuse of this penetration testing tool. May God bless you all.
Free-to-use IOC feed for various tools/malware. It started out for just C2 tools but has morphed into tracking infostealers and botnets as well. It uses Shodan searches to collect the IPs. The most recent collection is always stored in data; the IPs are broken down by tool, and there is an all.txt.
The feed should update daily. I am actively working on making the backend more reliable.
Honorable Mentions
Many of the Shodan queries have been sourced from other CTI researchers:
I encourage opening an issue/PR if you know of any additional Shodan searches for identifying adversary infrastructure. I will not set any hard guidelines around what can be submitted, just know, fidelity is paramount (high true/false positive ratio is the focus).
PoCs for kernel-mode rootkit techniques, for research or education. Currently focused on Windows. All modules support 64-bit OSes only.
NOTE
Some modules use the ExAllocatePool2 API to allocate kernel pool memory. ExAllocatePool2 is not supported on OSes older than Windows 10 Version 2004. If you want to test the modules on older OSes, replace ExAllocatePool2 with the ExAllocatePoolWithTag API.
Environment
All modules are tested on Windows 11 x64. To test the drivers, the following options can be used for the testing machine:
Bill Blunden, The Rootkit Arsenal: Escape and Evasion in the Dark Corners of the System, 2nd Edition (Jones & Bartlett Learning, 2012)
Steal browser cookies for Edge, Chrome and Firefox through a BOF or exe! Cookie-Monster will extract the WebKit master key, locate a browser process with a handle to the Cookies and Login Data files, copy the handle(s), and then filelessly download the target files. Once the Cookies/Login Data file(s) are downloaded, the Python decryption script can help extract those secrets! The Firefox module will parse profiles.ini, locate the logins.json and key4.db files, and download them. A separate GitHub repo is referenced for offline decryption.
BOF Usage
```
Usage: cookie-monster [ --chrome || --edge || --firefox || --chromeCookiePID <pid> ||
                        --chromeLoginDataPID <pid> || --edgeCookiePID <pid> || --edgeLoginDataPID <pid> ]

Examples:
  cookie-monster --chrome
  cookie-monster --edge
  cookie-monster --firefox
  cookie-monster --chromeCookiePID 1337
  cookie-monster --chromeLoginDataPID 1337
  cookie-monster --edgeCookiePID 4444
  cookie-monster --edgeLoginDataPID 4444

Options:
  --chrome              looks at all running processes and handles; if one matches chrome.exe, it copies the
                        handle to Cookies/Login Data and then copies the file to the CWD
  --edge                looks at all running processes and handles; if one matches msedge.exe, it copies the
                        handle to Cookies/Login Data and then copies the file to the CWD
  --firefox             looks for profiles.ini and locates the key4.db and logins.json files
  --chromeCookiePID     if a chrome process with a handle to Cookies is known, specify its PID to duplicate
                        the handle and copy the file
  --chromeLoginDataPID  if a chrome process with a handle to Login Data is known, specify its PID to duplicate
                        the handle and copy the file
  --edgeCookiePID       if an edge process with a handle to Cookies is known, specify its PID to duplicate
                        the handle and copy the file
  --edgeLoginDataPID    if an edge process with a handle to Login Data is known, specify its PID to duplicate
                        the handle and copy the file
```
EXE usage
```
Cookie Monster Example:
  cookie-monster.exe --all

Cookie Monster Options:
  -h, --help  Show this help message and exit
  --all       Run chrome, edge, and firefox methods
  --edge      Extract edge keys and download Cookies/Login Data file to PWD
  --chrome    Extract chrome keys and download Cookies/Login Data file to PWD
  --firefox   Locate firefox key and Cookies, does not make a copy of either file
```
Planned: update decrypt.py to support Firefox based on firepwd, and add a bruteforce module based on DonPAPI.
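For context on the "WebKit master key" mentioned above: Chromium-based browsers store the AES key base64-encoded in the "Local State" JSON under os_crypt.encrypted_key, prefixed with the literal bytes "DPAPI" and wrapped with the user's DPAPI secrets. A sketch of the extraction step (decrypting the DPAPI blob itself requires Windows APIs or offline DPAPI tooling, which is out of scope here):

```python
import base64
import json

def extract_wrapped_key(local_state_text: str) -> bytes:
    """Return the DPAPI-wrapped AES key from a Chromium 'Local State' file."""
    state = json.loads(local_state_text)
    blob = base64.b64decode(state["os_crypt"]["encrypted_key"])
    assert blob.startswith(b"DPAPI"), "unexpected key format"
    return blob[len(b"DPAPI"):]  # the remaining bytes are the DPAPI blob

# Fabricated example file; a real DPAPI blob is much longer.
fake_blob = b"DPAPI" + b"\x01\x00\x00\x00fake-dpapi-bytes"
local_state = json.dumps(
    {"os_crypt": {"encrypted_key": base64.b64encode(fake_blob).decode()}}
)
print(extract_wrapped_key(local_state))
```

Once unwrapped (on Windows, via CryptUnprotectData), the resulting AES-256 key decrypts the v10/v11 cookie and password values.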
References
This project could not have been done without the help of Mr-Un1k0d3r and his amazing seasonal videos! Highly recommend checking out his lessons!!! Cookie Webkit Master Key Extractor: https://github.com/Mr-Un1k0d3r/Cookie-Graber-BOF Fileless download: https://github.com/fortra/nanodump Decrypt Cookies and Login Data: https://github.com/login-securite/DonPAPI
NoArgs is a tool designed to dynamically spoof and conceal process arguments while staying undetected. It achieves this by hooking into Windows APIs to dynamically manipulate the Windows internals on the go. This allows NoArgs to alter process arguments discreetly.
Default Cmd:
Windows Event Logs:
Using NoArgs:
Windows Event Logs:
Functionality Overview
The tool primarily operates by intercepting process creation calls made by the Windows API function CreateProcessW. When a process is initiated, this function is responsible for spawning the new process, along with any specified command-line arguments. The tool intervenes in this process creation flow, ensuring that the arguments are either hidden or manipulated before the new process is launched.
Hooking Mechanism
Hooking into CreateProcessW is achieved through Detours, a popular library for intercepting and redirecting Win32 API functions. Detours allows for the redirection of function calls to custom implementations while preserving the original functionality. By hooking into CreateProcessW, the tool is able to intercept the process creation requests and execute its custom logic before allowing the process to be spawned.
Process Environment Block (PEB) Manipulation
The Process Environment Block (PEB) is a data structure utilized by Windows to store information about a process's environment and execution state. The tool leverages the PEB to manipulate the command-line arguments of the newly created processes. By modifying the command-line information stored within the PEB, the tool can alter or conceal the arguments passed to the process.
Demo: Running Mimikatz and passing it the arguments:
Process Hacker View:
All the arguments are hidden dynamically.
Process Monitor View:
Technical Implementation
Injection into Command Prompt (cmd): The tool injects its code into the Command Prompt process, embedding it as Position Independent Code (PIC). This enables seamless integration into cmd's memory space, ensuring covert operation without reliance on specific memory addresses. (Only for The Obfuscated Executable in the releases page)
Windows API Hooking: Detours are utilized to intercept calls to the CreateProcessW function. By redirecting the execution flow to a custom implementation, the tool can execute its logic before the original Windows API function.
Custom Process Creation Function: Upon intercepting a CreateProcessW call, the custom function is executed, creating the new process and manipulating its arguments as necessary.
PEB Modification: Within the custom process creation function, the Process Environment Block (PEB) of the newly created process is accessed and modified to achieve the goal of manipulating or hiding the process arguments.
Execution Redirection: Upon completion of the manipulations, execution seamlessly returns to Command Prompt (cmd) without any interruption. This dynamic redirection ensures that subsequent commands entered undergo manipulation discreetly, evading detection and logging mechanisms that rely on getting the process details from the PEB.
Installation and Usage:
Option 1: Compile NoArgs DLL:
You will need Microsoft Detours (https://github.com/microsoft/Detours) installed.
Compile the DLL.
Inject the compiled DLL into any cmd instance to manipulate newly created process arguments dynamically.
Option 2: Download the compiled executable (ready-to-go) from the releases page.
A new approach to Browser In The Browser (BITB) without the use of iframes, allowing the bypass of traditional framebusters implemented by login pages like Microsoft.
This POC code is built for using this new BITB with Evilginx, and a Microsoft Enterprise phishlet.
Before diving deep into this, I recommend that you first check my talk at BSides 2023, where I first introduced this concept along with important details on how to craft the "perfect" phishing attack. ▶ Watch Video
This tool is for educational and research purposes only. It demonstrates a non-iframe based Browser In The Browser (BITB) method. The author is not responsible for any misuse. Use this tool only legally and ethically, in controlled environments for cybersecurity defense testing. By using this tool, you agree to do so responsibly and at your own risk.
Backstory - The Why
Over the past year, I've been experimenting with different tricks to craft the "perfect" phishing attack. The typical "red flags" people are trained to look for are things like urgency, threats, authority, poor grammar, etc. The next best thing people nowadays check is the link/URL of the website they are interacting with, and they tend to get very conscious the moment they are asked to enter sensitive credentials like emails and passwords.
That's where Browser In The Browser (BITB) came into play. Originally introduced by @mrd0x, BITB is a concept of creating the appearance of a believable browser window inside of which the attacker controls the content (by serving the malicious website inside an iframe). However, the fake URL bar of the fake browser window is set to the legitimate site the user would expect. This combined with a tool like Evilginx becomes the perfect recipe for a believable phishing attack.
The problem is that over the past months/years, major websites like Microsoft implemented various little tricks called "framebusters/framekillers" which mainly attempt to break iframes that might be used to serve the proxied website like in the case of Evilginx.
In short, Evilginx + BITB for websites like Microsoft no longer works. At least not with a BITB that relies on iframes.
The What
A Browser In The Browser (BITB) without any iframes! As simple as that.
Meaning that we can now use BITB with Evilginx on websites like Microsoft.
Evilginx here is just a strong example, but the same concept can be used for other use-cases as well.
The How
Framebusters target iframes specifically, so the idea is to create the BITB effect without the use of iframes, and without disrupting the original structure/content of the proxied page. This can be achieved by injecting scripts and HTML besides the original content using search and replace (aka substitutions), then relying completely on HTML/CSS/JS tricks to make the visual effect. We also use an additional trick called "Shadow DOM" in HTML to place the content of the landing page (background) in such a way that it does not interfere with the proxied content, allowing us to flexibly use any landing page with minor additional JS scripts.
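The injection idea described above can be pictured with a tiny search-and-replace sketch. This is a conceptual illustration only, not the actual Apache mod_substitute configuration used by the repo; the snippet names (`/bitb/window.js`, `bitb-frame`) are hypothetical:

```python
# Conceptual illustration of the substitution idea: inject BITB markup next
# to the original content by plain search-and-replace, leaving the proxied
# page's own structure intact (no iframes involved).
proxied_html = "<html><head><title>Sign in</title></head><body>...</body></html>"

# Hypothetical injected snippet: a script plus a container for the fake window.
injection = '<script src="/bitb/window.js"></script><div id="bitb-frame"></div>'

# Substitute right after <body> so the original DOM is untouched.
patched = proxied_html.replace("<body>", "<body>" + injection, 1)
```

In the real setup, Apache performs this substitution on the proxied response body before it reaches the victim's browser.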
Instructions
Video Tutorial
Local VM:
Create a local Linux VM. (I personally use Ubuntu 22 on VMWare Player or Parallels Desktop)
Update and Upgrade system packages:
sudo apt update && sudo apt upgrade -y
Evilginx Setup:
Optional:
Create a new evilginx user, and add user to sudo group:
sudo su
adduser evilginx
usermod -aG sudo evilginx
Test that evilginx user is in sudo group:
su - evilginx
sudo ls -la /root
Navigate to users home dir:
cd /home/evilginx
(You can do everything as sudo user as well since we're running everything locally)
Optional: To set the Calendly widget to use your account instead of the default I have inside, go to pages/primary/script.js and change the CALENDLY_PAGE_NAME and CALENDLY_EVENT_TYPE.
Note on Demo Obfuscation: As I explain in the walkthrough video, I included a minimal obfuscation for text content like URLs and titles of the BITB. You can open the demo obfuscator by opening demo-obfuscator.html in your browser. In a real-world scenario, I would highly recommend that you obfuscate larger chunks of the HTML code injected or use JS tricks to avoid being detected and flagged. The advanced version I am working on will use a combination of advanced tricks to make it nearly impossible for scanners to fingerprint/detect the BITB code, so stay tuned.
Self-signed SSL certificates:
Since we are running everything locally, we need to generate self-signed SSL certificates that will be used by Apache. Evilginx will not need the certs as we will be running it in developer mode.
We will use the domain fake.com which will point to our local VM. If you want to use a different domain, make sure to change the domain in all files (Apache conf files, JS files, etc.)
Create dir and parents if they do not exist:
sudo mkdir -p /etc/ssl/localcerts/fake.com/
Generate the SSL certs using the OpenSSL config file:
Copy custom substitution files (the core of our approach):
sudo cp -r ./custom-subs /etc/apache2/custom-subs
Important Note: In this repo I have included 2 substitution configs for Chrome on Mac and Chrome on Windows BITB. Both have auto-detection and styling for light/dark mode, and they should act as base templates to achieve the same for other browser/OS combos. Since I did not include automatic detection of the browser/OS combo used to visit our phishing page, you will have to use one of the two, or implement your own logic for automatic switching.
Both config files under /apache-configs/ are the same, only with a different Include directive used for the substitution file that will be included. (there are 2 references for each file)
# Uncomment the one you want and remember to restart Apache after any changes:
#Include /etc/apache2/custom-subs/win-chrome.conf
Include /etc/apache2/custom-subs/mac-chrome.conf
Simply to make it easier, I included both versions as separate files for this next step.
Test Apache configs to ensure there are no errors:
sudo apache2ctl configtest
Restart Apache to apply changes:
sudo systemctl restart apache2
Modifying Hosts:
Get the IP of the VM using ifconfig and note it somewhere for the next step.
We now need to add new entries to our hosts file, to point the domain used in this demo fake.com and all used subdomains to our VM on which Apache and Evilginx are running.
On Windows:
Open Notepad as Administrator (Search > Notepad > Right-Click > Run as Administrator)
Click on the File option (top-left) and in the File Explorer address bar, copy and paste the following:
C:\Windows\System32\drivers\etc\
Change the file types (bottom-right) to "All files".
Double-click the file named hosts
On Mac:
Open a terminal and run the following:
sudo nano /private/etc/hosts
Now modify the following records (replace [IP] with the IP of your VM) then paste the records at the end of the hosts file:
# Local Apache and Evilginx Setup
[IP] login.fake.com
[IP] account.fake.com
[IP] sso.fake.com
[IP] www.fake.com
[IP] portal.fake.com
[IP] fake.com
# End of section
Save and exit.
Now restart your browser before moving to the next step.
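If you would rather script the hosts entries than type them, a small stdlib-only helper (hypothetical, not part of the repo) can render the block for your VM IP:

```python
# Hypothetical helper: render the hosts-file block for your VM IP so it can
# be pasted into /etc/hosts (Windows: the hosts file under drivers\etc).
SUBDOMAINS = ["login", "account", "sso", "www", "portal", ""]

def hosts_block(ip, domain="fake.com"):
    lines = ["# Local Apache and Evilginx Setup"]
    for sub in SUBDOMAINS:
        host = f"{sub}.{domain}" if sub else domain
        lines.append(f"{ip} {host}")
    lines.append("# End of section")
    return "\n".join(lines)

print(hosts_block("192.168.64.5"))
```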
Note: On Mac, use the following command to flush the DNS cache:
sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder
This demo is made with the provided Office 365 Enterprise phishlet. To get the host entries you need to add for a different phishlet, use phishlet get-hosts [PHISHLET_NAME] but remember to replace the 127.0.0.1 with the actual local IP of your VM.
Trusting the Self-Signed SSL Certs:
Since we are using self-signed SSL certificates, our browser will warn us every time we try to visit fake.com so we need to make our host machine trust the certificate authority that signed the SSL certs.
For this step, it's easier to follow the video instructions, but here is the gist anyway.
Ignore the Unsafe Site warning and proceed to the page.
Click the SSL icon > Details > Export Certificate IMPORTANT: When saving, the name MUST end with .crt for Windows to open it correctly.
Double-click it > install for current user. Do NOT select automatic; instead, place the certificate in a specific store: select "Trusted Root Certification Authorities".
On Mac: to install for the current user only, select "Keychain: login" AND click "View Certificates" > Details > Trust > Always Trust.
Now RESTART your Browser
You should be able to visit https://fake.com now and see the homepage without any SSL warnings.
Running Evilginx:
At this point, everything should be ready so we can go ahead and start Evilginx, set up the phishlet, create our lure, and test it.
Optional: Install tmux (to keep evilginx running even if the terminal session is closed. Mainly useful when running on remote VM.)
sudo apt install tmux -y
Start Evilginx in developer mode (using tmux to avoid losing the session):
tmux new-session -s evilginx
cd ~/evilginx/
./evilginx -developer
(To re-attach to the tmux session use tmux attach-session -t evilginx)
Evilginx Config:
config domain fake.com
config ipv4 127.0.0.1
IMPORTANT: Set Evilginx Blacklist mode to NoAdd to avoid blacklisting Apache since all requests will be coming from Apache and not the actual visitor IP.
blacklist noadd
Setup Phishlet and Lure:
phishlets hostname O365 fake.com
phishlets enable O365
lures create O365
lures get-url 0
Copy the lure URL and visit it from your browser (use Guest user on Chrome to avoid having to delete all saved/cached data between tests).
This tool compilation is carefully crafted to be useful both to beginners and veterans of the malware analysis world. It has also proven useful for people trying their luck in the cracking underworld.
It's the ideal complement to be used with the manuals from the site, and to play with the numbered theories mirror.
Advantages
To be clear, this pack aims to be the most complete and robust in existence. Some of the pros are:
It contains all the basic (and not so basic) tools that you might need in a real life scenario, be it a simple or a complex one.
The pack is integrated with a Universal Updater made by us from scratch. Thanks to that, we get to maintain all the tools in an automated fashion.
It's really easy to expand and modify: you just have to update the file bin\updater\tools.ini to integrate the tools you use into the updater, and then add the links for your tools to bin\sendto\sendto, so they appear in the context menus.
The installer sets up everything we might need automatically - everything, from the dependencies to the environment variables, and it can even add a scheduled task to update the whole pack of tools weekly.
Installation
You can simply download the stable versions from the release section, where you can also find the installer.
Once downloaded, you can update the tools with the Universal Updater that we specifically developed for that sole purpose. You will find the binary in the folder bin\updater\updater.exe.
Tool set
This toolkit is composed of 98 apps that cover everything we might need to perform reverse engineering and binary/malware analysis. Every tool has been downloaded from its original/official website, but we still recommend using them with caution, especially those tools whose official pages are forum threads. Always exercise common sense. You can check the complete list of tools here.
About contributions
Pull Requests are welcome. If you want to propose big changes, you should first create an Issue about it, so we can all analyze and discuss it. The tools are compressed with 7-Zip, and the format used for nomenclature is {name} - {version}.7z
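Before contributing a tool archive, the naming convention above can be checked mechanically. A minimal sketch (the checker itself is hypothetical, only the `{name} - {version}.7z` format comes from the contribution guidelines):

```python
import re

# The pack's naming convention for compressed tools is "{name} - {version}.7z".
# This hypothetical checker validates a filename before a contribution.
PATTERN = re.compile(r"^(?P<name>.+) - (?P<version>[^ ]+)\.7z$")

def parse_tool_filename(filename):
    """Return (name, version) if the filename follows the convention, else None."""
    m = PATTERN.match(filename)
    return (m.group("name"), m.group("version")) if m else None

print(parse_tool_filename("x64dbg - 2024.01.7z"))
```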
Porch Pirate started as a tool to quickly uncover Postman secrets, and has slowly begun to evolve into a multi-purpose reconnaissance/OSINT framework for Postman. While existing tools are great proofs of concept, they only attempt to identify very specific keywords as "secrets", in very limited locations, with no consideration of recon beyond secrets. We realized we required capabilities that were "secret-agnostic" and flexible enough to capture false positives that still provided offensive value.
Porch Pirate enumerates and presents sensitive results (global secrets, unique headers, endpoints, query parameters, authorization, etc), from publicly accessible Postman entities, such as:
Workspaces
Collections
Requests
Users
Teams
Installation
python3 -m pip install porch-pirate
Using the client
The Porch Pirate client can be used to conduct nearly complete reviews of public Postman entities in a quick and simple fashion. There are intended workflows and particular keywords that can typically maximize results. These methodologies can be found on our blog: Plundering Postman with Porch Pirate.
Porch Pirate supports the following arguments to be performed on collections, workspaces, or users.
--globals
--collections
--requests
--urls
--dump
--raw
--curl
Simple Search
porch-pirate -s "coca-cola.com"
Get Workspace Globals
By default, Porch Pirate will display globals from all active and inactive environments if they are defined in the workspace. Provide a -w argument with the workspace ID (found by performing a simple search, or automatic search dump) to extract the workspace's globals, along with other information.
When an interesting result has been found with a simple search, we can provide the workspace ID to the -w argument with the --dump command to begin extracting information from the workspace and its collections.
Porch Pirate can be supplied a simple search term, following the --globals argument. Porch Pirate will dump all relevant workspaces tied to the results discovered in the simple search, but only if there are globals defined. This is particularly useful for quickly identifying potentially interesting workspaces to dig into further.
porch-pirate -s "shopify" --globals
Automatic Search Dump
Porch Pirate can be supplied a simple search term, following the --dump argument. Porch Pirate will dump all relevant workspaces and collections tied to the results discovered in the simple search. This is particularly useful for quickly sifting through potentially interesting results.
porch-pirate -s "coca-cola.com" --dump
Extract URLs from Workspace
A particularly useful way to use Porch Pirate is to extract all URLs from a workspace and export them to another tool for fuzzing.
p = porchpirate()
print(p.search('coca-cola.com'))
Get Workspace Collections
p = porchpirate()
print(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
Dumping a Workspace
import json

p = porchpirate()
collections = json.loads(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
for collection in collections['data']:
    requests = collection['requests']
    for r in requests:
        request_data = p.request(r['id'])
        print(request_data)
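The dump loop above walks collections into individual requests. A common follow-up is collecting request URLs for fuzzing; the self-contained sketch below works over a sample response shape (the field names such as `url` are assumptions for illustration, not Postman's documented schema):

```python
# Sample of what per-request data might look like after a workspace dump.
# The 'url' field name is an assumption, not the documented Postman schema.
sample_requests = [
    {"id": "r1", "url": "https://api.coca-cola.com/v1/users"},
    {"id": "r2", "url": "https://api.coca-cola.com/v1/orders"},
    {"id": "r3", "url": None},
]

def extract_urls(requests):
    # Keep only non-empty URLs, deduplicated and sorted for stable output.
    return sorted({r["url"] for r in requests if r.get("url")})

for url in extract_urls(sample_requests):
    print(url)
```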
Grabbing a Workspace's Globals
p = porchpirate()
print(p.workspace_globals('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
Other Examples
Other library usage examples can be located in the examples directory, which contains the following examples:
APKDeepLens is a Python-based tool designed to scan Android applications (APK files) for security vulnerabilities. It specifically targets the OWASP Top 10 mobile vulnerabilities, providing an easy and efficient way for developers, penetration testers, and security researchers to assess the security posture of Android apps.
Features
APKDeepLens is a Python-based tool that performs various operations on APK files. Its main features include:
OWASP Coverage -> Covers OWASP Top 10 vulnerabilities to ensure a comprehensive security assessment.
Advanced Detection -> Utilizes custom Python code for APK file analysis and vulnerability detection.
Sensitive Information Extraction -> Identifies potential security risks by extracting sensitive information from APK files, such as insecure authentication/authorization keys and insecure request protocols.
In-depth Analysis -> Detects insecure data storage practices, including data related to the SD card, and highlights the use of insecure request protocols in the code.
Intent Filter Exploits -> Pinpoints vulnerabilities by analyzing intent filters extracted from AndroidManifest.xml.
Local File Vulnerability Detection -> Safeguards your app by identifying potential mishandlings related to local file operations.
Report Generation -> Generates detailed and easy-to-understand reports for each scanned APK, providing actionable insights for developers.
CI/CD Integration -> Designed for easy integration into CI/CD pipelines, enabling automated security testing in development workflows.
User-Friendly Interface -> Color-coded terminal outputs make it easy to distinguish between different types of findings.
Installation
To use APKDeepLens, you'll need to have Python 3.8 or higher installed on your system. You can then install APKDeepLens using the following command:
To simply scan an APK, use the below command. Mention the apk file with -apk argument. Once the scan is complete, a detailed report will be displayed in the console.
python3 APKDeepLens.py -apk file.apk
If you've already extracted the source code and want to provide its path for a faster scan you can use the below command. Mention the source code of the android application with -source parameter.
This method utilizes TLS callbacks to execute a payload without spawning any threads in a remote process. This method is inspired by Threadless Injection, as RemoteTLSCallbackInjection does not invoke any API calls to trigger the injected payload.
Create a suspended process using the CreateProcessViaWinAPIsW function (i.e. RuntimeBroker.exe).
Fetch the remote process image base address followed by reading the process's PE headers.
Fetch an address to a TLS callback function.
Patch a fixed shellcode (i.e. g_FixedShellcode) with runtime-retrieved values. This shellcode is responsible for restoring both original bytes and memory permission of the TLS callback function's address.
Inject both shellcodes: g_FixedShellcode and the main payload.
Patch the TLS callback function's address and replace it with the address of our injected payload.
Resume process.
The g_FixedShellcode shellcode will then make sure that the main payload executes only once, by restoring the original TLS callback address before calling the main payload. A TLS callback can execute multiple times across the lifespan of a process, so it is important to control the number of times the payload is triggered by restoring the original code-path execution to the original TLS callback function.
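The run-once guarantee can be pictured with a plain-Python simulation. This is a conceptual model of the control flow only, NOT the Windows shellcode or any of the tool's actual code:

```python
# Conceptual simulation: the injected stub restores the original TLS callback
# pointer before running the payload, so the payload fires exactly once even
# though the loader may invoke the callback several times.
class FakeTLSDirectory:
    def __init__(self, original_callback):
        self.callback = original_callback

payload_runs = []

def original_callback():
    pass  # stands in for the process's legitimate TLS callback

def injected_stub(tls_dir):
    # Restore the original pointer first, mirroring g_FixedShellcode's job...
    tls_dir.callback = original_callback
    # ...then trigger the main payload once.
    payload_runs.append(1)

tls = FakeTLSDirectory(original_callback)
tls.callback = injected_stub  # the 'patch the TLS callback' step

# The loader may call the TLS callback multiple times across process lifetime.
for _ in range(3):
    cb = tls.callback
    if cb is injected_stub:
        cb(tls)
    else:
        cb()

print(len(payload_runs))  # payload executed exactly once
```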
Demo
The following image shows our implementation, RemoteTLSCallbackInjection.exe, spawning a cmd.exe as its main payload.
SiCat is an advanced exploit search tool designed to identify and gather information about exploits from both open sources and local repositories effectively. With a focus on cybersecurity, SiCat allows users to quickly search online, finding potential vulnerabilities and relevant exploits for ongoing projects or systems.
SiCat's main strength lies in its ability to traverse both online and local resources to collect information about relevant exploits. This tool aids cybersecurity professionals and researchers in understanding potential security risks, providing valuable insights to enhance system security.
I'm aware that perfection is elusive in coding. If you come across any bugs, feel free to contribute by fixing the code or suggesting new features. Your input is always welcomed and valued.
CloudGrappler is a purpose-built tool designed for effortless querying of high-fidelity and single-event detections related to well-known threat actors in popular cloud environments such as AWS and Azure.
Notes
To optimize your utilization of CloudGrappler, we recommend using shorter time ranges when querying for results. This approach enhances efficiency and accelerates the retrieval of information, ensuring a more seamless experience with the tool.
Required Packages
pip3 install -r requirements.txt
Cloning cloudgrep locally
To clone the cloudgrep repository locally, run the clone.sh file. Alternatively, you can manually clone the repository into the same directory where CloudGrappler was cloned.
chmod +x clone.sh
./clone.sh
Input
This tool offers a CLI (Command Line Interface). As such, here we review its use:
Example 1 - Running the tool with default queries file
Define the scanning scope inside data_sources.json file based on your cloud infrastructure configuration. The following example showcases a structured data_sources.json file for both AWS and Azure environments:
Note
Modifying the source inside the queries.json file to a wildcard character (*) will scan the corresponding query across both AWS and Azure environments.
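The wildcard behaviour described above can be illustrated with a short sketch. The field names here are assumptions for illustration, not the exact queries.json schema:

```python
# Hypothetical illustration of the wildcard rule: a query whose "source"
# is "*" is fanned out to every configured cloud environment.
CLOUDS = ["aws", "azure"]

def expand_sources(query):
    source = query.get("source", "*")
    return CLOUDS if source == "*" else [source]

query = {"query": "GetFileDownloadUrls.*secrets_", "source": "*"}
print(expand_sources(query))
```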
[+] Running GetFileDownloadUrls.*secrets_ for AWS
[+] Threat Actor: LUCR3
[+] Severity: MEDIUM
[+] Description: Review use of CloudShell. Permiso seldom witnesses use of CloudShell outside of known attackers. This however may be a part of your normal business use case.
Example 6 - Running the tool with your own queries file
python3 main.py -f new_file.json
Running in your Cloud and Authentication cloudgrep
AWS
Your system will need access to the S3 bucket. For example, if you are running on your laptop, you will need to configure the AWS CLI. If you are running on an EC2, an Instance Profile is likely the best choice.
This is the companion code for the paper: 'Fuzzing Embedded Systems using Debugger Interfaces'. A preprint of the paper can be found here https://publications.cispa.saarland/3950/. The code allows the users to reproduce and extend the results reported in the paper. Please cite the above paper when reporting, reproducing or extending the results.
Folder structure
.
├── benchmark         # Scripts to build Google's fuzzer test suite and run experiments
├── dependencies      # Contains a Makefile to install dependencies for GDBFuzz
├── evaluation        # Raw experiment data, presented in the paper
├── example_firmware  # Embedded example applications, used for the evaluation
├── example_programs  # Contains a compiled example program and configs to test GDBFuzz
├── src               # Contains the implementation of GDBFuzz
├── Dockerfile        # For creating a Docker image with all GDBFuzz dependencies installed
├── LICENSE           # License
├── Makefile          # Makefile for creating the docker image or installing GDBFuzz locally
└── README.md         # This README file
Purpose of the project
The idea of GDBFuzz is to leverage hardware breakpoints from microcontrollers as feedback for coverage-guided fuzzing. Therefore, GDB is used as a generic interface to enable broad applicability. For binary analysis of the firmware, Ghidra is used. The code contains a benchmark setup for evaluating the method. Additionally, example firmware files are included.
Getting Started
GDBFuzz enables coverage-guided fuzzing for embedded systems, but - for evaluation purposes - can also fuzz arbitrary user applications. For fuzzing on microcontrollers we recommend a local installation of GDBFuzz to be able to send fuzz data to the device under test flawlessly.
Install local
GDBFuzz has been tested on Ubuntu 20.04 LTS and Raspberry Pi OS 32-bit. Prerequisites are Java and Python 3. First, create a new virtual environment and install all dependencies.
virtualenv .venv
source .venv/bin/activate
make
chmod a+x ./src/GDBFuzz/main.py
Run locally on an example program
GDBFuzz reads settings from a config file with the following keys.
[SUT]
# Path to the binary file of the SUT.
# This can, for example, be an .elf file or a .bin file.
binary_file_path = <path>

# Address of the root node of the CFG.
# Breakpoints are placed at nodes of this CFG.
# e.g. 'LLVMFuzzerTestOneInput' or 'main'
entrypoint = <entrypoint>

# Number of inputs that must be executed without a breakpoint hit until
# breakpoints are rotated.
until_rotate_breakpoints = <number>

# Maximum number of breakpoints that can be placed at any given time.
max_breakpoints = <number>

# Blacklist functions that shall be ignored.
# ignore_functions is a space-separated list of function names, e.g. 'malloc free'.
ignore_functions = <space separated list>

# One of {Hardware, QEMU, SUTRunsOnHost}
# Hardware: An external component starts a gdb server and GDBFuzz can connect to this gdb server.
# QEMU: GDBFuzz starts QEMU. QEMU emulates binary_file_path and starts gdbserver.
# SUTRunsOnHost: GDBFuzz starts the target program within GDB.
target_mode = <mode>

# Set this to False if you want to start Ghidra, analyze the SUT,
# and start the ghidra bridge server manually.
start_ghidra = True

# Space-separated list of addresses where software breakpoints (for error
# handling code) are set. Execution of those is considered a crash.
# Example: software_breakpoint_addresses = 0x123 0x432
software_breakpoint_addresses =

# Whether all triggered software breakpoints are considered as a crash
consider_sw_breakpoint_as_error = False

[SUTConnection]
# The class 'SUT_connection_class' in file 'SUT_connection_path' implements
# how inputs are sent to the SUT.
# Inputs can, for example, be sent over Wi-Fi, Serial, Bluetooth, ...
# This class must inherit from ./connections/SUTConnection.py.
# See ./connections/SUTConnection.py for more information.
SUT_connection_file = FIFOConnection.py

[GDB]
path_to_gdb = gdb-multiarch
# Written in address:port
gdb_server_address = localhost:4242

[Fuzzer]
# In bytes
maximum_input_length = 100000
# In seconds
single_run_timeout = 20
# In seconds
total_runtime = 3600

# Optional
# Path to a directory where each file contains one seed. If you don't want to
# use seeds, leave the value empty.
seeds_directory =

[BreakpointStrategy]
# Strategies to choose basic blocks are located in
# 'src/GDBFuzz/breakpoint_strategies/'.
# For the paper we use the following strategies:
# 'RandomBasicBlockStrategy.py' - Randomly chooses unreached basic blocks.
# 'RandomBasicBlockNoDomStrategy.py' - Like the previous, but doesn't use dominance relations to derive transitively reached nodes.
# 'RandomBasicBlockNoCorpusStrategy.py' - Like the first, but prevents growing the input corpus and therefore behaves like blackbox fuzzing with coverage measurement.
# 'BlackboxStrategy.py' - Doesn't set any breakpoints.
breakpoint_strategy_file = RandomBasicBlockStrategy.py

[LogsAndVisualizations]
# One of {DEBUG, INFO, WARNING, ERROR, CRITICAL}
loglevel = INFO

# Path to a directory where output files (e.g. graphs, logfiles) are stored.
output_directory = ./output

# If set to True, an MQTT client sends UI elements (e.g. graphs)
enable_UI = False
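The keys above follow standard INI syntax, so they can be read with Python's stdlib configparser. A minimal sketch with assumed example values (GDBFuzz's actual config loader may differ):

```python
import configparser

# Minimal sketch of reading a GDBFuzz-style config with configparser.
# The concrete values below are illustrative assumptions.
CONFIG_TEXT = """
[SUT]
binary_file_path = ./example_programs/json
entrypoint = LLVMFuzzerTestOneInput
max_breakpoints = 25
target_mode = QEMU

[Fuzzer]
maximum_input_length = 100000
total_runtime = 3600
"""

config = configparser.ConfigParser()
config.read_string(CONFIG_TEXT)

print(config["SUT"]["entrypoint"])
print(config.getint("Fuzzer", "total_runtime"))
```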
An example config file is located in ./example_programs/ together with an example program that was compiled using our fuzzing harness in benchmark/benchSUTs/GDBFuzz_wrapper/common/. Start fuzzing for one hour with the following command.
We first see output from Ghidra analyzing the binary executable, and subsequently messages when breakpoints are relocated or hit.
Fuzzing Output
Depending on the specified output_directory in the config file, there should now be a folder trial-0 with the following structure
.
├── corpus       # A folder that contains the input corpus.
├── crashes      # A folder that contains crashing inputs - if any.
├── cfg          # The control flow graph as adjacency list.
├── fuzzer_stats # Statistics of the fuzzing campaign.
├── plot_data    # Table showing at which relative time in the fuzzing campaign which basic block was reached.
└── reverse_cfg  # The reverse control flow graph.
Using Ghidra in GUI mode
By setting start_ghidra = False in the config file, GDBFuzz connects to a Ghidra instance running in GUI mode. In this case, the ghidra_bridge plugin needs to be started manually from the Script Manager. During fuzzing, reached program blocks are highlighted in green.
GDBFuzz on Linux user programs
For fuzzing Linux user applications, GDBFuzz leverages the standard LLVMFuzzerTestOneInput entrypoint used by almost all fuzzers such as AFL, AFL++, and libFuzzer. In benchmark/benchSUTs/GDBFuzz_wrapper/common there is a wrapper that can be used to compile any compliant fuzz harness into a standalone program that fetches input via a named pipe at /tmp/fromGDBFuzz. This allows simulating an embedded device that consumes data via a well-defined input interface, and therefore running GDBFuzz on any application. For convenience we created a script in benchmark/benchSUTs that compiles all programs from our evaluation with our wrapper, as explained later.
NOTE: GDBFuzz is not intended to fuzz Linux user applications; use AFL++ or another fuzzer for that purpose. The wrapper exists only for evaluation purposes, to enable running benchmarks and comparisons at scale.
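Feeding the wrapper means writing one input to its named pipe. The POSIX-only sketch below demonstrates the mechanics with a temporary FIFO path so it is self-contained; the real wrapper reads from /tmp/fromGDBFuzz, and the consumer thread here merely stands in for the wrapped SUT:

```python
import os
import tempfile
import threading

# Create a FIFO at a temporary path (the real wrapper uses /tmp/fromGDBFuzz).
fifo_path = os.path.join(tempfile.mkdtemp(), "fromGDBFuzz")
os.mkfifo(fifo_path)  # POSIX only

received = []

def consumer():
    # Stands in for the wrapped SUT draining the pipe.
    with open(fifo_path, "rb") as f:
        received.append(f.read())

t = threading.Thread(target=consumer)
t.start()

# Opening a FIFO for writing blocks until a reader opens its end.
with open(fifo_path, "wb") as f:
    f.write(b"crashing-input?")

t.join()
print(received[0])
```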
Install and run in a Docker container
The general effectiveness of our approach is shown in a large scale benchmark deployed as docker containers.
make dockerimage
To run the above experiment in the Docker container (for one hour, as specified in the config file), map the example_programs and output folders as volumes and start GDBFuzz as follows.
An output folder should appear in the current working directory with the structure explained above.
Detailed Instructions
Our evaluation is split into two parts: 1. GDBFuzz on its intended setup, directly on the hardware. 2. GDBFuzz in an emulated environment to allow independent analysis and comparison of the results.
GDBFuzz can work with any GDB server and therefore most debug probes for microcontrollers.
GDBFuzz vs. Blackbox (RQ1)
Regarding RQ1 from the paper, we execute GDBFuzz on different microcontrollers with different firmware located in example_firmware. For each experiment we run GDBFuzz with the RandomBasicBlock and with the RandomBasicBlockNoCorpus strategy. The latter behaves like fuzzing without feedback, but we can still measure the achieved coverage. To answer RQ1, we compare the achieved coverage of the RandomBasicBlock and the RandomBasicBlockNoCorpus strategies. The respective config files are in the corresponding subfolders, and we now explain how to set up fuzzing on the four development boards.
GDBFuzz on STM32 B-L4S5I-IOT01A board
GDBFuzz requires access to a GDB Server. In this case the B-L4S5I-IOT01A and its on-board debugger are used. This on-board debugger sets up a GDB server via the 'st-util' program, and enables access to this GDB server via localhost:4242.
cd ./example_firmware/stm32_disco_arduinojson/ pio run --target upload
For your info: PlatformIO stored an .elf file of the SUT here: ./example_firmware/stm32_disco_arduinojson/.pio/build/disco_l4s5i_iot01a/firmware.elf. This .elf file is also used later in the user configuration for Ghidra.
Start a new terminal and run the following to start a GDB server:
st-util
Run GDBFuzz with a user configuration for arduinojson. We can send data over the USB port to the microcontroller, which forwards this data via serial to the SUT. In our case, /dev/ttyACM0 is the USB device of the microcontroller board. If your system assigned another device to the board, change /dev/ttyACM0 in the config file to your device.
Fuzzer statistics and logs are in the ./output/... directory.
GDBFuzz on MSP430F5529LP
Install TI MSP430 GCC from https://www.ti.com/tool/MSP430-GCC-OPENSOURCE
Start GDB Server
./gdb_agent_console libmsp430.so
or (more stable): build mspdebug from https://github.com/dlbeer/mspdebug/ and use:
until mspdebug --fet-skip-close --force-reset tilib "opt gdb_loop True" gdb ; do sleep 1 ; done
Ghidra fails to analyze binaries for the TI MSP430 controller out of the box. To fix this, import the file in the Ghidra GUI, choose MSP430X as the architecture, and skip the auto-analysis. Next, open the 'Symbol Table', sort the symbols by name, and delete all symbols with names like $C$L*. Now the auto-analysis can be executed. After the analysis, start the Ghidra bridge from the Ghidra GUI manually and then start GDBFuzz.
sudo udevadm control --reload
sudo udevadm trigger
Compare against Fuzzware (RQ2)
In RQ2 from the paper, we compare GDBFuzz against the emulation-based approach Fuzzware. First, we execute GDBFuzz and Fuzzware as described previously on the shipped firmware files. For each GDBFuzz experiment, we create a file with the valid basic blocks from the control flow graph files as follows:
cut -d " " -f1 ./cfg > valid_bbs.txt
Now we can replay the coverage against the Fuzzware results:

fuzzware genstats --valid-bb-file valid_bbs.txt
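To illustrate what the `cut` step extracts, here is a self-contained sketch; the two-column cfg layout below is an assumption for demonstration, real cfg files contain GDBFuzz's actual output:

```shell
# Create a toy cfg file whose first whitespace-separated column is the
# basic-block address, then extract that column exactly as above.
printf '0x08000100 0x08000120\n0x08000150 0x08000170\n' > cfg
cut -d " " -f1 ./cfg > valid_bbs.txt
cat valid_bbs.txt
```

The resulting valid_bbs.txt holds one basic-block address per line, ready to be passed to `fuzzware genstats --valid-bb-file valid_bbs.txt`.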
Finding Bugs (RQ3)
When crashing or hanging inputs are found, they are stored in the crashes folder. During evaluation, we found the following three bugs:
GDBFuzz can also run on a Raspberry Pi host with slight modifications:
Ghidra must be modified so that it runs on a 32-bit OS
In file ./dependencies/ghidra/support/launch.sh:125, the JAVA_HOME variable must therefore be hardcoded, e.g. to JAVA_HOME="/usr/lib/jvm/default-java"
STLink must be at version >= 1.7 to work properly -> build it from source
GDBFuzz on other boards
To fuzz software on other boards, GDBFuzz requires
A microcontroller with hardware breakpoints and a GDB compliant debug probe
The firmware file.
A running GDBServer and suitable GDB application.
An entry point, where fuzzing should start e.g. a parser function or an address
An input interface (see src/GDBFuzz/connections) that triggers execution of the code at the entry point e.g. serial connection
All these properties need to be specified in the config file.
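These properties come together in a user configuration. The sketch below is illustrative only: the section and key names are assumptions, not GDBFuzz's actual schema — the example configs shipped alongside the firmware samples are the authoritative reference.

```ini
; Illustrative sketch; all section and key names here are hypothetical.
[SUT]
binary_file_path = ./firmware.elf   ; the firmware file
entry_point = parse_input           ; function name or raw address

[Connection]
input_interface = serial            ; see src/GDBFuzz/connections
port = /dev/ttyACM0

[GDBServer]
address = localhost:4242
```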
Run the full Benchmark (RQ4 - 8)
For RQs 4 - 8, we run a large-scale benchmark. First, build the Docker image as described previously and compile the applications from Google's Fuzzer Test Suite with our fuzzing harness in benchmark/benchSUTs/GDBFuzz_wrapper/common.
cd ./benchmark/benchSUTs
chmod a+x setup_benchmark_SUTs.py
make dockerbenchmarkimage
Next, adapt the benchmark settings in benchmark/scripts/benchmark.py and benchmark/scripts/benchmark_aflpp.py to your needs (especially number_of_cores, trials, and seconds_per_trial) and start the benchmark with:
cd ./benchmark/scripts
./benchmark.py $(pwd)/../benchSUTs/SUTs/ SUTs.json
./benchmark_aflpp.py $(pwd)/../benchSUTs/SUTs/ SUTs.json
A folder appears in ./benchmark/scripts that contains plot files (coverage over time), fuzzer statistic files, and control flow graph files for each experiment as in evaluation/fuzzer_test_suite_qemu_runs.
[Optional] Install Visualization and Visualization Example
GDBFuzz has an optional feature where it plots the control flow graph of covered nodes. This is disabled by default. You can enable it by following the instructions of this section and setting 'enable_UI' to 'True' in the user configuration.
On the host:
Install
sudo apt-get install graphviz
Install a recent version of node, for example via Option 2 from here (use Option 2, not Option 1). This should install both node and npm. For reference, our version numbers are listed below (newer versions should work too):
Update the mosquitto broker config: Replace the file /etc/mosquitto/conf.d/mosquitto.conf with the following content:
listener 1883
allow_anonymous true

listener 9001
protocol websockets
Restart the mosquitto broker:
sudo service mosquitto restart
Check that the mosquitto broker is running:
sudo service mosquitto status
The output should include the text 'Active: active (running)'
Start the web UI:
cd ./src/webui
npm start
Your web browser should open automatically on 'http://localhost:3000/'.
Start GDBFuzz with a user config file in which enable_UI is set to True. You can use the Docker container and the arduinojson SUT from above, but make sure to set 'enable_UI' to 'True'.
Nodes shown in blue are covered; white nodes are not. We only show uncovered nodes whose parent is covered (drawing the complete control flow graph takes too much time when it is large).
Azure DevOps Services Attack Toolkit - ADOKit is a toolkit that can be used to attack Azure DevOps Services by taking advantage of the available REST API. The tool allows the user to specify an attack module, along with specifying valid credentials (API key or stolen authentication cookie) for the respective Azure DevOps Services instance. The attack modules supported include reconnaissance, privilege escalation and persistence. ADOKit was built in a modular approach, so that new modules can be added in the future by the information security community.
Full details on the techniques used by ADOKit are in the X-Force Red whitepaper.
Installation/Building
Libraries Used
The following third-party libraries are used in this project.
Take the steps below to set up Visual Studio in order to compile the project yourself. This requires two .NET libraries that can be installed from the NuGet package manager.
Load the Visual Studio project up and go to "Tools" --> "NuGet Package Manager" --> "Package Manager Settings"
Go to "NuGet Package Manager" --> "Package Sources"
Add a package source with the URL https://api.nuget.org/v3/index.json
Install the Costura.Fody NuGet package.
Install-Package Costura.Fody -Version 3.3.3
Install the Newtonsoft.Json package
Install-Package Newtonsoft.Json
You can now build the project yourself!
Command Modules
Recon
check - Check whether organization uses Azure DevOps and if credentials are valid
whoami - List the current user and its group memberships
listrepo - List all repositories
searchrepo - Search for given repository
listproject - List all projects
searchproject - Search for given project
searchcode - Search for code containing a search term
searchfile - Search for file based on a search term
listuser - List users
searchuser - Search for a given user
listgroup - List groups
searchgroup - Search for a given group
getgroupmembers - List all group members for a given group
getpermissions - Get the permissions for who has access to a given project
Persistence
createpat - Create personal access token for user
listpat - List personal access tokens for user
removepat - Remove personal access token for user
createsshkey - Create public SSH key for user
listsshkey - List public SSH keys for user
removesshkey - Remove public SSH key for user
Privilege Escalation
addprojectadmin - Add a user to the "Project Administrators" for a given project
removeprojectadmin - Remove a user from the "Project Administrators" group for a given project
addbuildadmin - Add a user to the "Build Administrators" group for a given project
removebuildadmin - Remove a user from the "Build Administrators" group for a given project
addcollectionadmin - Add a user to the "Project Collection Administrators" group
removecollectionadmin - Remove a user from the "Project Collection Administrators" group
addcollectionbuildadmin - Add a user to the "Project Collection Build Administrators" group
removecollectionbuildadmin - Remove a user from the "Project Collection Build Administrators" group
addcollectionbuildsvc - Add a user to the "Project Collection Build Service Accounts" group
removecollectionbuildsvc - Remove a user from the "Project Collection Build Service Accounts" group
addcollectionsvc - Add a user to the "Project Collection Service Accounts" group
removecollectionsvc - Remove a user from the "Project Collection Service Accounts" group
getpipelinevars - Retrieve any pipeline variables used for a given project.
getpipelinesecrets - Retrieve the names of any pipeline secrets used for a given project.
getserviceconnections - Retrieve the service connections used for a given project.
Arguments/Options
/credential: - credential for authentication (PAT or Cookie). Applicable to all modules.
/url: - Azure DevOps URL. Applicable to all modules.
/search: - Keyword to search for. Not applicable to all modules.
/project: - Project to perform an action for. Not applicable to all modules.
/user: - Perform an action against a specific user. Not applicable to all modules.
/id: - Used with persistence modules to perform an action against a specific token ID. Not applicable to all modules.
/group: - Perform an action against a specific group. Not applicable to all modules.
Authentication Options
Below are the authentication options you have with ADOKit when authenticating to an Azure DevOps instance.
Stolen Cookie - This will be the UserAuthentication cookie on a user's machine for the .dev.azure.com domain.
/credential:UserAuthentication=ABC123
Personal Access Token (PAT) - This will be an access token/API key that will be a single string.
/credential:apiToken
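Putting the two options together, an invocation might look like the sketch below (the assembly name ADOKit.exe, the organization, and the credential values are placeholders):

```shell
# Placeholder values throughout; substitute your own organization and credentials.
ADOKit.exe check /credential:apiToken /url:https://dev.azure.com/YourOrganization
ADOKit.exe check /credential:"UserAuthentication=ABC123" /url:https://dev.azure.com/YourOrganization
```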
Module Details Table
The table below shows the permissions required for each module.
| Attack Scenario | Module | Special Permissions? | Notes |
| --- | --- | --- | --- |
| Recon | check | No | |
| Recon | whoami | No | |
| Recon | listrepo | No | |
| Recon | searchrepo | No | |
| Recon | listproject | No | |
| Recon | searchproject | No | |
| Recon | searchcode | No | |
| Recon | searchfile | No | |
| Recon | listuser | No | |
| Recon | searchuser | No | |
| Recon | listgroup | No | |
| Recon | searchgroup | No | |
| Recon | getgroupmembers | No | |
| Recon | getpermissions | No | |
| Persistence | createpat | No | |
| Persistence | listpat | No | |
| Persistence | removepat | No | |
| Persistence | createsshkey | No | |
| Persistence | listsshkey | No | |
| Persistence | removesshkey | No | |
| Privilege Escalation | addprojectadmin | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts | |
| Privilege Escalation | removeprojectadmin | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts | |
| Privilege Escalation | addbuildadmin | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts | |
| Privilege Escalation | removebuildadmin | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts | |
| Privilege Escalation | addcollectionadmin | Yes - Project Collection Administrator or Project Collection Service Accounts | |
| Privilege Escalation | removecollectionadmin | Yes - Project Collection Administrator or Project Collection Service Accounts | |
| Privilege Escalation | addcollectionbuildadmin | Yes - Project Collection Administrator or Project Collection Service Accounts | |
| Privilege Escalation | removecollectionbuildadmin | Yes - Project Collection Administrator or Project Collection Service Accounts | |
| Privilege Escalation | addcollectionbuildsvc | Yes - Project Collection Administrator, Project Collection Build Administrators or Project Collection Service Accounts | |
| Privilege Escalation | removecollectionbuildsvc | Yes - Project Collection Administrator, Project Collection Build Administrators or Project Collection Service Accounts | |
| Privilege Escalation | addcollectionsvc | Yes - Project Collection Administrator or Project Collection Service Accounts | |
| Privilege Escalation | removecollectionsvc | Yes - Project Collection Administrator or Project Collection Service Accounts | |
| Privilege Escalation | getpipelinevars | Yes - Contributors, Readers, Build Administrators, Project Administrators, Project Team Member, Project Collection Test Service Accounts, Project Collection Build Service Accounts, Project Collection Build Administrators, Project Collection Service Accounts or Project Collection Administrators | |
| Privilege Escalation | getpipelinesecrets | Yes - Contributors, Readers, Build Administrators, Project Administrators, Project Team Member, Project Collection Test Service Accounts, Project Collection Build Service Accounts, Project Collection Build Administrators, Project Collection Service Accounts or Project Collection Administrators | |
| Privilege Escalation | getserviceconnections | Yes - Project Administrator, Project Collection Administrator or Project Collection Service Accounts | |
Examples
Validate Azure DevOps Access
Use Case
Perform authentication check to ensure that organization is using Azure DevOps and that provided credentials are valid.
Syntax
Provide the check module, along with any relevant authentication information and URL. This will output whether the organization provided is using Azure DevOps, and if so, will attempt to validate the credentials provided.
[*] INFO: Checking if organization provided uses Azure DevOps
[+] SUCCESS: Organization provided exists in Azure DevOps
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
3/28/23 19:33:02 Finished execution of check
Whoami
Use Case
Get the current user and the user's group memberships
Syntax
Provide the whoami module, along with any relevant authentication information and URL. This will output the current user and all of their group memberships.
Timestamp: 4/4/2023 11:33:12 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Username | Display Name | UPN
------------------------------------------------------------------------------------------------------------------------------------------------------------
jsmith | John Smith | [email protected]
[*] INFO: Listing group memberships for the current user
Group UPN | Display Name | Description
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[YourOrganization]\Project Collection Test Service Accounts | Project Collection Test Service Accounts | Members of this group should include the service accounts used by the test controllers set up for this project collection.
[TestProject2]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
[MaraudersMap]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
[YourOrganization]\Project Collection Administrators | Project Collection Administrators | Members of this application group can perform all privileged operations on the Team Project Collection.
4/4/23 15:33:19 Finished execution of whoami
List Repos
Use Case
Discover repositories being used in Azure DevOps instance
Syntax
Provide the listrepo module, along with any relevant authentication information and URL. This will output the repository name and URL.
Search for repositories by repository name in Azure DevOps instance
Syntax
Provide the searchrepo module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the matching repository name and URL.
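As a concrete sketch (the assembly name ADOKit.exe and all values are placeholders; the module and argument names come from the sections above):

```shell
ADOKit.exe searchrepo /credential:apiToken /url:https://dev.azure.com/YourOrganization /search:test
```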
==================================================
Module:       searchrepo
Auth Type:    API Key
Search Term:  test
Target URL:   https://dev.azure.com/YourOrganization

Timestamp: 3/29/2023 9:26:57 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Name | URL
-----------------------------------------------------------------------------------
TestProject2 | https://dev.azure.com/YourOrganization/TestProject2/_git/TestProject2
TestProject | https://dev.azure.com/YourOrganization/TestProject/_git/TestProject
3/29/23 13:26:59 Finished execution of searchrepo
List Projects
Use Case
Discover projects being used in Azure DevOps instance
Syntax
Provide the listproject module, along with any relevant authentication information and URL. This will output the project name, visibility (public or private) and URL.
Search for projects by project name in Azure DevOps instance
Syntax
Provide the searchproject module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the matching project name, visibility (public or private) and URL.
4/4/23 11:45:31 Finished execution of searchproject
Search Code
Use Case
Search for code containing a given keyword in Azure DevOps instance
Syntax
Provide the searchcode module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the URL to the matching code file, along with the line in the code that matched.
[>] URL: https://dev.azure.com/YourOrganization/MaraudersMap/_git/MaraudersMap?path=/Test.cs
    |_ Console.WriteLine("PassWord");
    |_ this is some text that has a password in it
3/29/23 19:22:22 Finished execution of searchcode
Search Files
Use Case
Search for files in repositories containing a given keyword in the file name in Azure DevOps
Syntax
Provide the searchfile module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the URL to the matching file in its respective repository.
Timestamp: 3/29/2023 11:28:34 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
File URL
----------------------------------------------------------------------------------------------------
https://dev.azure.com/YourOrganization/MaraudersMap/_git/4f159a8e-5425-4cb5-8d98-31e8ac86c4fa?path=/Test.cs
https://dev.azure.com/YourOrganization/ProjectWithMultipleRepos/_git/c1ba578c-1ce1-46ab-8827-f245f54934e9?path=/Test.cs
https://dev.azure.com/YourOrganization/TestProject/_git/fbcf0d6d-3973-4565-b641-3b1b897cfa86?path=/test.cs
3/29/23 15:28:37 Finished execution of searchfile
Create PAT
Use Case
Create a personal access token (PAT) for a user that can be used for persistence to an Azure DevOps instance.
Syntax
Provide the createpat module, along with any relevant authentication information and URL. This will output the PAT ID, name, scope, valid-until date, and token content for the PAT created. The name of the PAT created will be ADOKit- followed by a random string of 8 characters. The date the PAT is valid until will be 1 year from the date of creation, as that is the maximum that Azure DevOps allows.
PAT ID | Name | Scope | Valid Until | Token Value
------------------------------------------------------------------------------------------------------------
8776252f-9e03-48ea-a85c-f880cc830898 | ADOKit-rJxzpZwZ | app_token | 3/31/2024 12:00:00 AM | tokenValueWouldBeHere
3/31/23 18:33:10 Finished execution of createpat
List PATs
Use Case
List all personal access tokens (PATs) for a given user in an Azure DevOps instance.
Syntax
Provide the listpat module, along with any relevant authentication information and URL. This will output the PAT ID, name, scope, and valid-until date for all active PATs for the user.
PAT ID | Name | Scope | Valid Until
-------------------------------------------------------------------------------------------------------------------------------------------
9b354668-4424-4505-a35f-d0989034da18 | test-token | app_token | 4/29/2023 1:20:45 PM
8776252f-9e03-48ea-a85c-f880cc830898 | ADOKit-rJxzpZwZ | app_token | 3/31/2024 12:00:00 AM
3/31/23 18:33:18 Finished execution of listpat
Remove PAT
Use Case
Remove a PAT for a given user in an Azure DevOps instance.
Syntax
Provide the removepat module, along with any relevant authentication information and URL. Additionally, provide the ID for the PAT in the /id: argument. This will output whether the PAT was removed, and will then list the user's currently active PATs after performing the removal.
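As a concrete sketch (the assembly name ADOKit.exe and all values, including the token ID, are placeholders):

```shell
ADOKit.exe removepat /credential:apiToken /url:https://dev.azure.com/YourOrganization /id:0b20ac58-fc65-4b66-91fe-4ff909df7298
```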
Timestamp: 4/3/2023 11:04:59 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[+] SUCCESS: PAT with ID 0b20ac58-fc65-4b66-91fe-4ff909df7298 was removed successfully.
PAT ID | Name | Scope | Valid Until
-------------------------------------------------------------------------------------------------------------------------------------------
9b354668-4424-4505-a35f-d0989034da18 | test-token | app_token | 4/29/2023 1:20:45 PM
4/3/23 15:05:00 Finished execution of removepat
Create SSH Key
Use Case
Create an SSH key for a user that can be used for persistence to an Azure DevOps instance.
Syntax
Provide the createsshkey module, along with any relevant authentication information and URL. Additionally, provide your public SSH key in the /sshkey: argument. This will output the SSH key ID, name, scope, valid-until date, and the last 20 characters of the public SSH key for the SSH key created. The name of the SSH key created will be ADOKit- followed by a random string of 8 characters. The date the SSH key is valid until will be 1 year from the date of creation, as that is the maximum that Azure DevOps allows.
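As a concrete sketch (the assembly name ADOKit.exe, the organization, and the public-key value are placeholders):

```shell
ADOKit.exe createsshkey /credential:apiToken /url:https://dev.azure.com/YourOrganization /sshkey:"ssh-rsa AAAAB3... user@host"
```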
SSH Key ID | Name | Scope | Valid Until | Public SSH Key
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
fbde9f3e-bbe3-4442-befb-c2ddeab75c58 | ADOKit-iCBfYfFR | app_token | 4/3/2024 12:00:00 AM | ...hOLNYMk5LkbLRMG36RE=
4/3/23 18:51:24 Finished execution of createsshkey
List SSH Keys
Use Case
List all public SSH keys for a given user in an Azure DevOps instance.
Syntax
Provide the listsshkey module, along with any relevant authentication information and URL. This will output the SSH key ID, name, scope, and valid-until date for all active SSH keys for the user. Additionally, it will print the last 20 characters of each public SSH key.
Timestamp: 4/3/2023 11:37:10 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
SSH Key ID | Name | Scope | Valid Until | Public SSH Key
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
ec056907-9370-4aab-b78c-d642d551eb98 | test-ssh-key | app_token | 4/3/2024 3:13:58 PM | ...nDoYAPisc/pEFArVVV0=
4/3/23 15:37:11 Finished execution of listsshkey
Remove SSH Key
Use Case
Remove an SSH key for a given user in an Azure DevOps instance.
Syntax
Provide the removesshkey module, along with any relevant authentication information and URL. Additionally, provide the ID for the SSH key in the /id: argument. This will output whether the SSH key was removed, and will then list the user's currently active SSH keys after performing the removal.
[+] SUCCESS: SSH key with ID a199c036-d7ed-4848-aae8-2397470aff97 was removed successfully.
SSH Key ID | Name | Scope | Valid Until | Public SSH Key
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
ec056907-9370-4aab-b78c-d642d551eb98 | test-ssh-key | app_token | 4/3/2024 3:13:58 PM | ...nDoYAPisc/pEFArVVV0=
4/3/23 17:50:09 Finished execution of removesshkey
List Users
Use Case
List users within an Azure DevOps instance
Syntax
Provide the listuser module, along with any relevant authentication information and URL. This will output the username, display name and user principal name.
Username | Display Name | UPN
------------------------------------------------------------------------------------------------------------------------------------------------------------
user1 | User 1 | [email protected]
jsmith | John Smith | [email protected]
rsmith | Ron Smith | [email protected]
user2 | User 2 | [email protected]
4/3/23 20:12:08 Finished execution of listuser
Search User
Use Case
Search for given user(s) in Azure DevOps instance
Syntax
Provide the searchuser module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the matching username, display name and user principal name.
Username | Display Name | UPN
------------------------------------------------------------------------------------------------------------------------------------------------------------
user1 | User 1 | [email protected]
user2 | User 2 | [email protected]
4/3/23 20:12:24 Finished execution of searchuser
List Groups
Use Case
List groups within an Azure DevOps instance
Syntax
Provide the listgroup module, along with any relevant authentication information and URL. This will output the user principal name, display name and description of group.
UPN | Display Name | Description
------------------------------------------------------------------------------------------------------------------------------------------------------------
[TestProject]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
[TestProject2]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[YourOrganization]\Project-Scoped Users | Project-Scoped Users | Members of this group will have limited visibility to organization-level data
[ProjectWithMultipleRepos]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[MaraudersMap]\Readers | Readers | Members of this group have access to the team project.
[YourOrganization]\Project Collection Test Service Accounts | Project Collection Test Service Accounts | Members of this group should include the service accounts used by the test controllers set up for this project collection.
[MaraudersMap]\MaraudersMap Team | MaraudersMap Team | The default project team.
[TEAM FOUNDATION]\Enterprise Service Accounts | Enterprise Service Accounts | Members of this group have service-level permissions in this enterprise. For service accounts only.
[YourOrganization]\Security Service Group | Security Service Group | Identities which are granted explicit permission to a resource will be automatically added to this group if they were not previously a member of any other group.
[TestProject]\Release Administrators | Release Administrators | Members of this group can perform all operations on Release Management
---SNIP---
4/3/23 20:48:46 Finished execution of listgroup
Search Groups
Use Case
Search for given group(s) in Azure DevOps instance
Syntax
Provide the searchgroup module and your search criteria in the /search: command-line argument, along with any relevant authentication information and URL. This will output the user principal name, display name and description for the matching group.
UPN | Display Name | Description
------------------------------------------------------------------------------------------------------------------------------------------------------------
[TestProject2]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[ProjectWithMultipleRepos]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[TestProject]\Release Administrators | Release Administrators | Members of this group can perform all operations on Release Management
[TestProject]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[MaraudersMap]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
[TestProject2]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
[YourOrganization]\Project Collection Administrators | Project Collection Administrators | Members of this application group can perform all privileged operations on the Team Project Collection.
[ProjectWithMultipleRepos]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
[MaraudersMap]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[YourOrganization]\Project Collection Build Administrators | Project Collection Build Administrators | Members of this group should include accounts for people who should be able to administer the build resources.
[TestProject]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
4/3/23 20:48:42 Finished execution of searchgroup
Get Group Members
Use Case
List all group members for a given group
Syntax
Provide the getgroupmembers module and the group(s) you would like to search for in the /group: command-line argument, along with any relevant authentication information and URL. This will output the user principal name of the group matching, along with each group member of that group including the user's mail address and display name.
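As a concrete sketch (the assembly name ADOKit.exe and all values are placeholders):

```shell
ADOKit.exe getgroupmembers /credential:apiToken /url:https://dev.azure.com/YourOrganization /group:"Build Administrators"
```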
Timestamp: 4/4/2023 9:11:03 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Group | Mail Address | Display Name
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[TestProject2]\Build Administrators | [email protected] | User 1
[TestProject2]\Build Administrators | [email protected] | User 2
[MaraudersMap]\Project Administrators | [email protected] | Brett Hawkins
[MaraudersMap]\Project Administrators | [email protected] | Ron Smith
[TestProject2]\Project Administrators | [email protected] | User 1
[TestProject2]\Project Administrators | [email protected] | User 2
[YourOrganization]\Project Collection Administrators | [email protected] | John Smith
[ProjectWithMultipleRepos]\Project Administrators | [email protected] | Brett Hawkins
[MaraudersMap]\Build Administrators | [email protected] | Brett Hawkins
4/4/23 13:11:09 Finished execution of getgroupmembers
Get Project Permissions
Use Case
Get a listing of who has permissions to a given project.
Syntax
Provide the getpermissions module and the project you would like to search for in the /project: command-line argument, along with any relevant authentication information and URL. This will output the user principal name, display name and description for the matching group. Additionally, this will output the group members for each of those groups.
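As a concrete sketch (the assembly name ADOKit.exe and all values are placeholders):

```shell
ADOKit.exe getpermissions /credential:apiToken /url:https://dev.azure.com/YourOrganization /project:MaraudersMap
```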
Timestamp: 4/4/2023 9:11:16 AM
==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
UPN | Display Name | Description
------------------------------------------------------------------------------------------------------------------------------------------------------------
[MaraudersMap]\Build Administrators | Build Administrators | Members of this group can create, modify and delete build definitions and manage queued and completed builds.
[MaraudersMap]\Contributors | Contributors | Members of this group can add, modify, and delete items within the team project.
[MaraudersMap]\MaraudersMap Team | MaraudersMap Team | The default project team.
[MaraudersMap]\Project Administrators | Project Administrators | Members of this group can perform all operations in the team project.
[MaraudersMap]\Project Valid Users | Project Valid Users | Members of this group have access to the team project.
[MaraudersMap]\Readers | Readers | Members of this group have access to the team project.
[*] INFO: Listing group members for each group that has permissions to this project
GROUP NAME: [MaraudersMap]\Build Administrators
Group | Mail Address | Display Name
-----------------------------------
GROUP NAME: [MaraudersMap]\Contributors
Group | Mail Address | Display Name
-----------------------------------
[MaraudersMap]\Contributors | [email protected] | User 1
[MaraudersMap]\Contributors | [email protected] | User 2
GROUP NAME: [MaraudersMap]\MaraudersMap Team
Group | Mail Address | Display Name
-----------------------------------
[MaraudersMap]\MaraudersMap Team | [email protected] | Brett Hawkins
GROUP NAME: [MaraudersMap]\Project Administrators
Group | Mail Address | Display Name
-----------------------------------
[MaraudersMap]\Project Administrators | [email protected] | Brett Hawkins
GROUP NAME: [MaraudersMap]\Project Valid Users
Group | Mail Address | Display Name
-----------------------------------
GROUP NAME: [MaraudersMap]\Readers
Group | Mail Address | Display Name
-----------------------------------
[MaraudersMap]\Readers | [email protected] | John Smith
4/4/23 13:11:18 Finished execution of getpermissions
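The per-group member listing above is essentially a group-by over (group, mail address, display name) rows. A minimal Python illustration of that shaping step (dummy data and names; this is not ADOKit's actual code):

```python
from collections import defaultdict

def group_members(rows):
    """Group (group, mail, display-name) rows by group name."""
    grouped = defaultdict(list)
    for group, mail, display in rows:
        grouped[group].append((mail, display))
    return dict(grouped)

# Dummy rows shaped like the getpermissions output
rows = [
    ("[MaraudersMap]\\Contributors", "user1@example.test", "User 1"),
    ("[MaraudersMap]\\Contributors", "user2@example.test", "User 2"),
    ("[MaraudersMap]\\Readers", "jsmith@example.test", "John Smith"),
]

for group, members in group_members(rows).items():
    print(f"GROUP NAME: {group}")
    for mail, display in members:
        print(f"  {mail} | {display}")
```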
Add Project Admin
Use Case
Add a user to the Project Administrators group for a given project.
Syntax
Provide the addprojectadmin module along with a /project: and /user: for a given user to be added to the Project Administrators group for the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
[*] INFO: Attempting to add user1 to the Project Administrators group for the maraudersmap project.
[+] SUCCESS: User successfully added
Group | Mail Address | Display Name
-----------------------------------
[MaraudersMap]\Project Administrators | [email protected] | Brett Hawkins
[MaraudersMap]\Project Administrators | [email protected] | User 1
4/4/23 18:52:47 Finished execution of addprojectadmin
Remove Project Admin
Use Case
Remove a user from the Project Administrators group for a given project.
Syntax
Provide the removeprojectadmin module along with a /project: and /user: for a given user to be removed from the Project Administrators group for the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
[*] INFO: Attempting to remove user1 from the Project Administrators group for the maraudersmap project.
[+] SUCCESS: User successfully removed
Group | Mail Address | Display Name
-----------------------------------
[MaraudersMap]\Project Administrators | [email protected] | Brett Hawkins
4/4/23 19:19:44 Finished execution of removeprojectadmin
Add Build Admin
Use Case
Add a user to the Build Administrators group for a given project.
Syntax
Provide the addbuildadmin module along with a /project: and /user: for a given user to be added to the Build Administrators group for the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
[*] INFO: Attempting to add user1 to the Build Administrators group for the maraudersmap project.
[+] SUCCESS: User successfully added
Group | Mail Address | Display Name
-----------------------------------
[MaraudersMap]\Build Administrators | [email protected] | User 1
4/4/23 19:41:55 Finished execution of addbuildadmin
Remove Build Admin
Use Case
Remove a user from the Build Administrators group for a given project.
Syntax
Provide the removebuildadmin module along with a /project: and /user: for a given user to be removed from the Build Administrators group for the given project. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
[*] INFO: Attempting to remove user1 from the Build Administrators group for the maraudersmap project.
[+] SUCCESS: User successfully removed
Group | Mail Address | Display Name
-----------------------------------
4/4/23 19:42:11 Finished execution of removebuildadmin
Add Collection Admin
Use Case
Add a user to the Project Collection Administrators group.
Syntax
Provide the addcollectionadmin module along with a /user: for a given user to be added to the Project Collection Administrators group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
[*] INFO: Attempting to add user1 to the Project Collection Administrators group.
[+] SUCCESS: User successfully added
Group | Mail Address | Display Name
-----------------------------------
[YourOrganization]\Project Collection Administrators | [email protected] | John Smith
[YourOrganization]\Project Collection Administrators | [email protected] | User 1
4/4/23 20:04:43 Finished execution of addcollectionadmin
Remove Collection Admin
Use Case
Remove a user from the Project Collection Administrators group.
Syntax
Provide the removecollectionadmin module along with a /user: for a given user to be removed from the Project Collection Administrators group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
[*] INFO: Attempting to remove user1 from the Project Collection Administrators group.
[+] SUCCESS: User successfully removed
Group | Mail Address | Display Name
-----------------------------------
[YourOrganization]\Project Collection Administrators | [email protected] | John Smith
4/4/23 20:10:38 Finished execution of removecollectionadmin
Add Collection Build Admin
Use Case
Add a user to the Project Collection Build Administrators group.
Syntax
Provide the addcollectionbuildadmin module along with a /user: for a given user to be added to the Project Collection Build Administrators group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
Timestamp: 4/5/2023 8:21:39 AM ==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to add user1 to the Project Collection Build Administrators group.
[+] SUCCESS: User successfully added
Group | Mail Address | Display Name
-----------------------------------
[YourOrganization]\Project Collection Build Administrators | [email protected] | User 1
4/5/23 12:21:42 Finished execution of addcollectionbuildadmin
Remove Collection Build Admin
Use Case
Remove a user from the Project Collection Build Administrators group.
Syntax
Provide the removecollectionbuildadmin module along with a /user: for a given user to be removed from the Project Collection Build Administrators group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
Timestamp: 4/5/2023 8:21:59 AM ==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to remove user1 from the Project Collection Build Administrators group.
[+] SUCCESS: User successfully removed
Group | Mail Address | Display Name
-----------------------------------
4/5/23 12:22:02 Finished execution of removecollectionbuildadmin
Add Collection Build Service Account
Use Case
Add a user to the Project Collection Build Service Accounts group.
Syntax
Provide the addcollectionbuildsvc module along with a /user: for a given user to be added to the Project Collection Build Service Accounts group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
Timestamp: 4/5/2023 8:22:13 AM ==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to add user1 to the Project Collection Build Service Accounts group.
[+] SUCCESS: User successfully added
Group | Mail Address | Display Name
-----------------------------------
[YourOrganization]\Project Collection Build Service Accounts | [email protected] | User 1
4/5/23 12:22:15 Finished execution of addcollectionbuildsvc
Remove Collection Build Service Account
Use Case
Remove a user from the Project Collection Build Service Accounts group.
Syntax
Provide the removecollectionbuildsvc module along with a /user: for a given user to be removed from the Project Collection Build Service Accounts group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
Timestamp: 4/5/2023 8:22:27 AM ==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to remove user1 from the Project Collection Build Service Accounts group.
[+] SUCCESS: User successfully removed
Group | Mail Address | Display Name
-----------------------------------
4/5/23 12:22:28 Finished execution of removecollectionbuildsvc
Add Collection Service Account
Use Case
Add a user to the Project Collection Service Accounts group.
Syntax
Provide the addcollectionsvc module along with a /user: for a given user to be added to the Project Collection Service Accounts group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
Timestamp: 4/5/2023 11:21:01 AM ==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to add user1 to the Project Collection Service Accounts group.
[+] SUCCESS: User successfully added
Group | Mail Address | Display Name
-----------------------------------
[YourOrganization]\Project Collection Service Accounts | [email protected] | John Smith
[YourOrganization]\Project Collection Service Accounts | [email protected] | User 1
4/5/23 15:21:04 Finished execution of addcollectionsvc
Remove Collection Service Account
Use Case
Remove a user from the Project Collection Service Accounts group.
Syntax
Provide the removecollectionsvc module along with a /user: for a given user to be removed from the Project Collection Service Accounts group. Additionally, provide any relevant authentication information and URL. See the Module Details Table for the permissions needed to perform this action.
Timestamp: 4/5/2023 11:21:43 AM ==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
[*] INFO: Attempting to remove user1 from the Project Collection Service Accounts group.
[+] SUCCESS: User successfully removed
Group | Mail Address | Display Name
-----------------------------------
[YourOrganization]\Project Collection Service Accounts | [email protected] | John Smith
4/5/23 15:21:44 Finished execution of removecollectionsvc
Get Pipeline Variables
Use Case
Extract any pipeline variables being used in project(s), which could contain credentials or other useful information.
Syntax
Provide the getpipelinevars module along with a /project: for a given project to extract any pipeline variables being used. If you would like to extract pipeline variables from all projects, specify all in the /project: argument.
Pipeline Var Name | Pipeline Var Value
--------------------------------------
credential | P@ssw0rd123!
url | http://blah/
4/6/23 16:08:36 Finished execution of getpipelinevars
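Conceptually, pipeline variables live on each build definition object returned by the Azure DevOps REST API, and secret-typed variables carry no retrievable value. The sketch below is a simplified illustration of that extraction, not ADOKit's actual implementation; the field names follow the Azure DevOps build definitions response shape:

```python
def extract_pipeline_vars(definition: dict) -> dict:
    """Pull variable name/value pairs out of a build definition response.

    Secret-typed variables have no retrievable value, so they are
    reported as [HIDDEN] -- mirroring the getpipelinesecrets output.
    """
    results = {}
    for name, props in definition.get("variables", {}).items():
        if props.get("isSecret"):
            results[name] = "[HIDDEN]"
        else:
            results[name] = props.get("value")
    return results

# Example shaped like a trimmed build definition response
definition = {
    "name": "maraudersmap-pipeline",
    "variables": {
        "credential": {"value": "P@ssw0rd123!"},
        "url": {"value": "http://blah/"},
        "secretpass": {"isSecret": True},
    },
}

print(extract_pipeline_vars(definition))
```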
Get Pipeline Secrets
Use Case
Extract the names of any pipeline secrets being used in project(s), which will direct the operator where to attempt to perform secret extraction.
Syntax
Provide the getpipelinesecrets module along with a /project: for a given project to extract the names of any pipeline secrets being used. If you would like to extract the names of pipeline secrets from all projects, specify all in the /project: argument.
Timestamp: 4/10/2023 10:28:37 AM ==================================================
[*] INFO: Checking credentials provided
[+] SUCCESS: Credentials provided are VALID.
Build Secret Name | Build Secret Value
--------------------------------------
anotherSecretPass | [HIDDEN]
secretpass | [HIDDEN]
4/10/23 14:28:38 Finished execution of getpipelinesecrets
Get Service Connections
Use Case
List any service connections being used in project(s), which will direct the operator where to attempt credential extraction for those service connections.
Syntax
Provide the getserviceconnections module along with a /project: for a given project to list any service connections being used. If you would like to list service connections from all projects, specify all in the /project: argument.
AttackGen is a cybersecurity incident response testing tool that leverages the power of large language models and the comprehensive MITRE ATT&CK framework. The tool generates tailored incident response scenarios based on user-selected threat actor groups and your organisation's details.
Star the Repo
If you find AttackGen useful, please consider starring the repository on GitHub. This helps more people discover the tool. Your support is greatly appreciated! ⭐
Features
Generates unique incident response scenarios based on chosen threat actor groups.
Allows you to specify your organisation's size and industry for a tailored scenario.
Displays a detailed list of techniques used by the selected threat actor group as per the MITRE ATT&CK framework.
Create custom scenarios based on a selection of ATT&CK techniques.
Capture user feedback on the quality of the generated scenarios.
Downloadable scenarios in Markdown format.
Use the OpenAI API, Azure OpenAI Service, Mistral API, or locally hosted Ollama models to generate incident response scenarios.
Available as a Docker container image for easy deployment.
Optional integration with LangSmith for powerful debugging, testing, and monitoring of model performance.
Releases
v0.4 (current)
What's new?
Why is it useful?
Mistral API Integration
- Alternative Model Provider: Users can now leverage the Mistral AI models to generate incident response scenarios. This integration provides an alternative to the OpenAI and Azure OpenAI Service models, allowing users to explore and compare the performance of different language models for their specific use case.
Local Model Support using Ollama
- Local Model Hosting: AttackGen now supports the use of locally hosted LLMs via an integration with Ollama. This feature is particularly useful for organisations with strict data privacy requirements or those who prefer to keep their data on-premises. Please note that this feature is not available for users of the AttackGen version hosted on Streamlit Community Cloud at https://attackgen.streamlit.app
Optional LangSmith Integration
- Improved Flexibility: The integration with LangSmith is now optional. If no LangChain API key is provided, users will see an informative message indicating that the run won't be logged by LangSmith, rather than an error being thrown. This change improves the overall user experience and allows users to continue using AttackGen without the need for LangSmith.
Various Bug Fixes and Improvements
- Enhanced User Experience: This release includes several bug fixes and improvements to the user interface, making AttackGen more user-friendly and robust.
v0.3
What's new?
Why is it useful?
Azure OpenAI Service Integration
- Enhanced Integration: Users can now choose to utilise OpenAI models deployed on the Azure OpenAI Service, in addition to the standard OpenAI API. This integration offers a seamless and secure solution for incorporating AttackGen into existing Azure ecosystems, leveraging established commercial and confidentiality agreements.
- Improved Data Security: Running AttackGen from Azure ensures that application descriptions and other data remain within the Azure environment, making it ideal for organizations that handle sensitive data in their threat models.
LangSmith for Azure OpenAI Service
- Enhanced Debugging: LangSmith tracing is now available for scenarios generated using the Azure OpenAI Service. This feature provides a powerful tool for debugging, testing, and monitoring of model performance, allowing users to gain insights into the model's decision-making process and identify potential issues with the generated scenarios.
- User Feedback: LangSmith also captures user feedback on the quality of scenarios generated using the Azure OpenAI Service, providing valuable insights into model performance and user satisfaction.
Model Selection for OpenAI API
- Flexible Model Options: Users can now select from several models available from the OpenAI API endpoint, such as gpt-4-turbo-preview. This allows for greater customization and experimentation with different language models, enabling users to find the most suitable model for their specific use case.
Docker Container Image
- Easy Deployment: AttackGen is now available as a Docker container image, making it easier to deploy and run the application in a consistent and reproducible environment. This feature is particularly useful for users who want to run AttackGen in a containerised environment, or for those who want to deploy the application on a cloud platform.
v0.2
What's new?
Why is it useful?
Custom Scenarios based on ATT&CK Techniques
- For Mature Organisations: This feature is particularly beneficial if your organisation has advanced threat intelligence capabilities. For instance, if you're monitoring a newly identified or lesser-known threat actor group, you can tailor incident response testing scenarios specific to the techniques used by that group.
- Focused Testing: Alternatively, use this feature to focus your incident response testing on specific parts of the cyber kill chain or certain MITRE ATT&CK Tactics like 'Lateral Movement' or 'Exfiltration'. This is useful for organisations looking to evaluate and improve specific areas of their defence posture.
User feedback on generated scenarios
- Collecting feedback is essential to track model performance over time and helps to highlight strengths and weaknesses in scenario generation tasks.
Improved error handling for missing API keys
- Improved user experience.
Replaced Streamlit st.spinner widgets with new st.status widget
- Provides better visibility into long running processes (i.e. scenario generation).
v0.1
Initial release.
Requirements
Recent version of Python.
Python packages: pandas, streamlit, and any other packages necessary for the custom libraries (langchain and mitreattack).
OpenAI API key.
LangChain API key (optional) - see LangSmith Setup section below for further details.
Data files: enterprise-attack.json (MITRE ATT&CK dataset in STIX format) and groups.json.
If you would like to use LangSmith for debugging, testing, and monitoring of model performance, you will need to set up a LangSmith account and create a .streamlit/secrets.toml file that contains your LangChain API key. Please follow the instructions here to set up your account and obtain your API key. You'll find a secrets.toml-example file in the .streamlit/ directory that you can use as a template for your own secrets.toml file.
If you do not wish to use LangSmith, you must still have a .streamlit/secrets.toml file in place, but you can leave the LANGCHAIN_API_KEY field empty.
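A minimal `.streamlit/secrets.toml` might therefore look like the following (the key name is taken from the text above; consult the repository's `secrets.toml-example` template for the authoritative layout):

```toml
# .streamlit/secrets.toml -- placeholder value
# Leave empty if you do not wish to use LangSmith
LANGCHAIN_API_KEY = ""
```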
Data Setup
Download the latest version of the MITRE ATT&CK dataset in STIX format from here. Be sure to place this file in the ./data/ directory within the repository.
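The ATT&CK dataset is a STIX 2 bundle in which intrusion sets (threat groups) are linked to attack patterns (techniques) via `uses` relationships. A minimal sketch of resolving a group's techniques from such a bundle (simplified inline data; AttackGen itself relies on the mitreattack library for this):

```python
def techniques_used_by_group(bundle: dict, group_name: str) -> list:
    """Resolve 'uses' relationships from an intrusion set to attack patterns."""
    objects = {o["id"]: o for o in bundle["objects"]}
    group_ids = {o["id"] for o in bundle["objects"]
                 if o["type"] == "intrusion-set" and o["name"] == group_name}
    names = []
    for o in bundle["objects"]:
        if (o["type"] == "relationship" and o["relationship_type"] == "uses"
                and o["source_ref"] in group_ids):
            target = objects.get(o["target_ref"])
            if target and target["type"] == "attack-pattern":
                names.append(target["name"])
    return names

# Tiny bundle shaped like enterprise-attack.json
bundle = {"objects": [
    {"id": "intrusion-set--1", "type": "intrusion-set", "name": "APT-Example"},
    {"id": "attack-pattern--1", "type": "attack-pattern", "name": "Phishing"},
    {"id": "relationship--1", "type": "relationship", "relationship_type": "uses",
     "source_ref": "intrusion-set--1", "target_ref": "attack-pattern--1"},
]}

print(techniques_used_by_group(bundle, "APT-Example"))  # → ['Phishing']
```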
Running AttackGen
Option 1: Running the Streamlit App Locally
After the data setup, start the app with Streamlit from the repository root.
Open your web browser and navigate to the URL provided by Streamlit.
Use the app to generate standard or custom incident response scenarios (see below for details).
Option 2: Using the Docker Container Image
Run the Docker container:
docker run -p 8501:8501 mrwadams/attackgen
This command will start the container and map port 8501 (the default for Streamlit apps) from the container to your host machine.
2. Open your web browser and navigate to http://localhost:8501.
3. Use the app to generate standard or custom incident response scenarios (see below for details).
Generating Scenarios
Standard Scenario Generation
Choose whether to use the OpenAI API or the Azure OpenAI Service.
Enter your OpenAI API key, or the API key and deployment details for your model on the Azure OpenAI Service.
Select your organisation's industry and size from the dropdown menus.
Navigate to the Threat Group Scenarios page.
Select the Threat Actor Group that you want to simulate.
Click on 'Generate Scenario' to create the incident response scenario.
Use the 👍 or 👎 buttons to provide feedback on the quality of the generated scenario. N.B. The feedback buttons only appear if a value for LANGCHAIN_API_KEY has been set in the .streamlit/secrets.toml file.
Custom Scenario Generation
Choose whether to use the OpenAI API or the Azure OpenAI Service.
Enter your OpenAI API Key, or the API key and deployment details for your model on the Azure OpenAI Service.
Select your organisation's industry and size from the dropdown menus.
Navigate to the Custom Scenario page.
Use the multi-select box to search for and select the ATT&CK techniques relevant to your scenario.
Click 'Generate Scenario' to create your custom incident response testing scenario based on the selected techniques.
Use the 👍 or 👎 buttons to provide feedback on the quality of the generated scenario. N.B. The feedback buttons only appear if a value for LANGCHAIN_API_KEY has been set in the .streamlit/secrets.toml file.
Please note that generating scenarios may take a minute or so. Once the scenario is generated, you can view it on the app and also download it as a Markdown file.
Contributing
I'm very happy to accept contributions to this project. Please feel free to submit an issue or pull request.
Chiasmodon is an OSINT (Open Source Intelligence) tool designed to assist in the process of gathering information about a target domain. Its primary functionality revolves around searching for domain-related data, including domain emails, domain credentials (usernames and passwords), CIDRs (Classless Inter-Domain Routing), ASNs (Autonomous System Numbers), and subdomains. The tool allows users to search by domain, CIDR, ASN, email, username, password, or Google Play application ID.
Features
[x] Domain: Conduct targeted searches by specifying a domain name to gather relevant information related to the domain.
[x] Google Play Application: Search for information related to a specific application on the Google Play Store by providing the application ID.
[x] CIDR and ASN: Explore CIDR blocks and Autonomous System Numbers (ASNs) associated with the target domain to gain insights into network infrastructure and potential vulnerabilities.
[x] Email, Username, Password: Conduct searches based on email, username, or password to identify potential security risks or compromised credentials.
[x] Output Customization: Choose the desired output format (text, JSON, or CSV) and specify the filename to save the search results.
[x] Additional Options: The tool offers various additional options, such as viewing different result types (credentials, URLs, subdomains, emails, passwords, usernames, or applications), setting API tokens, specifying timeouts, limiting results, and more.
Coming soon
Phone: Get ready to uncover even more valuable data by searching for information associated with phone numbers. Whether you're investigating a particular individual or looking for connections between phone numbers and other entities, this new feature will provide you with valuable insights.
Company Name: We understand the importance of comprehensive company research. In our upcoming release, you'll be able to search by company name and access a wide range of documents associated with that company. This feature will provide you with a convenient and efficient way to gather crucial information, such as legal documents, financial reports, and other relevant records.
Face (Photo): Visual data is a powerful tool, and we are excited to introduce our advanced facial recognition feature. With "Search by Face (Photo)," you can upload an image containing a face and leverage cutting-edge technology to identify and match individuals across various data sources. This will allow you to gather valuable information, such as social media profiles, online presence, and potential connections, all through the power of facial recognition.
Why the name Chiasmodon?
Chiasmodon niger is a species of deep-sea fish in the family Chiasmodontidae. It is known for its ability to swallow fish larger than itself. And so do we.
Subscription
Join us today and unlock the potential of our cutting-edge OSINT tool. Contact https://t.me/Chiasmod0n on Telegram to subscribe and start harnessing the power of Chiasmodon for your domain investigations.
Install
pip install chiasmodon
Usage
Chiasmodon provides a flexible and user-friendly command-line interface and python library. Here are some examples to demonstrate its usage:
How to use the pychiasmodon library:
from pychiasmodon import Chiasmodon as ch
token = "PUT_HERE_YOUR_API_KEY"
obj = ch(token)
Please note that these examples represent only a fraction of the available options and use cases. Refer to the documentation for more detailed instructions and explore the full range of features provided by Chiasmodon.
Contributions and Feedback
Contributions and feedback are welcome! If you encounter any issues or have suggestions for improvements, please submit them to the Chiasmodon GitHub repository. Your input will help us enhance the tool and make it more effective for the OSINT community.
License
Chiasmodon is released under the MIT License. See the LICENSE file for more details.
Disclaimer
Chiasmodon is intended for legal and authorized use only. Users are responsible for ensuring compliance with applicable laws and regulations when using the tool. The developers of Chiasmodon disclaim any responsibility for the misuse or illegal use of the tool.
Acknowledgments
Chiasmodon is the result of collaborative efforts from a dedicated team of contributors who believe in the power of OSINT. We would like to express our gratitude to the open-source community for their valuable contributions and support.
ST Smart Things Sentinel is an advanced security tool engineered specifically to scrutinize and detect threats within the intricate protocols utilized by IoT (Internet of Things) devices. In the ever-expanding landscape of connected devices, ST Smart Things Sentinel emerges as a vigilant guardian, specializing in protocol-level threat detection. This tool empowers users to proactively identify and neutralize potential security risks, ensuring the integrity and security of IoT ecosystems.
VolWeb is a digital forensic memory analysis platform that leverages the power of the Volatility 3 framework. It is dedicated to aiding in investigations and incident responses.
Objective
The goal of VolWeb is to enhance the efficiency of memory collection and forensic analysis by providing a centralized, visual, and enhanced web application for incident responders and digital forensics investigators. Once an investigator obtains a memory image from a Linux or Windows system, the evidence can be uploaded to VolWeb, which triggers automatic processing and extraction of artifacts using the power of the Volatility 3 framework.
By utilizing cloud-native storage technologies, VolWeb also enables incident responders to directly upload memory images into the VolWeb platform from various locations using dedicated scripts interfaced with the platform and maintained by the community. Another goal is to allow users to compile technical information, such as Indicators, which can later be imported into modern CTI platforms like OpenCTI, thereby connecting your incident response and CTI teams after your investigation.
Project Documentation and Getting Started Guide
The project documentation is available on the Wiki. There, you will be able to deploy the tool in your investigation environment or lab.
[!IMPORTANT] Take time to read the documentation in order to avoid common misconfiguration issues.
Interacting with the REST API
VolWeb exposes a REST API to allow analysts to interact with the platform. A dedicated repository offers scripts maintained by the community: https://github.com/forensicxlab/VolWeb-Scripts. Check the project wiki to learn more about the available API calls.
Issues
If you have encountered a bug, or wish to propose a feature, please feel free to open an issue. To enable us to quickly address them, follow the guide in the "Contributing" section of the Wiki associated with the project.
Contact
Contact me at [email protected] for any questions regarding this tool.
Next Release Goals
Check out the roadmap: https://github.com/k1nd0ne/VolWeb/projects/1
drozer (formerly Mercury) is the leading security testing framework for Android.
drozer allows you to search for security vulnerabilities in apps and devices by assuming the role of an app and interacting with the Dalvik VM, other apps' IPC endpoints and the underlying OS.
drozer provides tools to help you use, share and understand public Android exploits. It helps you to deploy a drozer Agent to a device through exploitation or social engineering. Using weasel (WithSecure's advanced exploitation payload) drozer is able to maximise the permissions available to it by installing a full agent, injecting a limited agent into a running process, or connecting a reverse shell to act as a Remote Access Tool (RAT).
drozer is a good tool for simulating a rogue application. A penetration tester does not have to develop an app with custom code to interface with a specific content provider. Instead, drozer can be used with little to no programming experience required to show the impact of letting certain components be exported on a device.
To ensure drozer can run on modern systems, a Docker container with a working drozer build has been created. This is currently the recommended way to run drozer on modern systems.
The Docker container and basic setup instructions can be found here.
Instructions on building your own Docker container can be found here.
Note: On Windows please ensure that the path to the Python installation and the Scripts folder under the Python installation are added to the PATH environment variable.
git clone https://github.com/WithSecureLabs/drozer.git
cd drozer
make deb
Installing .deb (Debian/Ubuntu/Mint)
sudo dpkg -i drozer-2.x.x.deb
Building for Redhat/Fedora/CentOS
git clone https://github.com/WithSecureLabs/drozer.git
cd drozer
make rpm
Installing .rpm (Redhat/Fedora/CentOS)
sudo rpm -i drozer-2.x.x-1.noarch.rpm
Building for Windows
NOTE: Windows Defender and other Antivirus software will flag drozer as malware (an exploitation tool without exploit code wouldn't be much fun!). In order to run drozer you would have to add an exception to Windows Defender and any antivirus software. Alternatively, we recommend running drozer in a Windows/Linux VM.
git clone https://github.com/WithSecureLabs/drozer.git
cd drozer
python.exe setup.py bdist_msi
Installing .msi (Windows)
Run dist/drozer-2.x.x.win-x.msi
Usage
Installing the Agent
The drozer Agent can be installed on the device using Android Debug Bridge (adb).
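For example, with a test device connected over adb, the Agent APK can be installed as follows; the filename is illustrative — use the Agent APK you downloaded from the drozer releases page.

```shell
# Install the drozer Agent APK onto the connected device
# (the filename is a placeholder for the actual release APK)
adb install drozer-agent.apk
```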
You should now have the drozer Console installed on your PC, and the Agent running on your test device. Now, you need to connect the two and you're ready to start exploring.
We will use the server embedded in the drozer Agent to do this.
If using the Android emulator, you need to set up a suitable port forward so that your PC can connect to a TCP socket opened by the Agent inside the emulator, or on the device. By default, drozer uses port 31415:
$ adb forward tcp:31415 tcp:31415
Now, launch the Agent, select the "Embedded Server" option and tap "Enable" to start the server. You should see a notification that the server has started.
Then, on your PC, connect using the drozer Console:
On Linux:
$ drozer console connect
On Windows:
> drozer.bat console connect
If using a real device, the IP address of the device on the network must be specified:
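For example (the IP address below is a placeholder for your device's address on the network):

```shell
# Connect the console to the Agent's embedded server on a real device
drozer console connect --server 192.168.0.10
```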
DroidLysis is a pre-analysis tool for Android apps: it performs repetitive and boring tasks we'd typically do at the beginning of any reverse engineering. It disassembles the Android sample, organizes output in directories, and searches for suspicious spots in the code to look at. The output helps the reverse engineer speed up the first few steps of analysis.
DroidLysis can be used over Android packages (apk), Dalvik executables (dex), Zip files (zip), Rar files (rar) or directories of files.
The configuration file is ./conf/general.conf (you can switch to another file with the --config option). This is where you configure the location of various external tools (e.g. Apktool), the name of pattern files (by default ./conf/smali.conf, ./conf/wide.conf, ./conf/arm.conf, ./conf/kit.conf) and the name of the database file (only used if you specify --enable-sql).
Be sure to specify the correct paths for disassembly tools, or DroidLysis won't find them.
Usage
DroidLysis uses Python 3. To launch it and get options:
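For example, once installed (e.g. via pip), the options can be listed with:

```shell
droidlysis --help
```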
The unzipped, pre-processed sample in a subdirectory of your output dir. The subdirectory is named using the sample's filename and sha256 sum. For example, if we analyze the Signal application and set --output /tmp, the analysis will be written to /tmp/Signalwebsiteuniversalrelease4.52.4.apk-f3c7d5e38df23925dd0b2fe1f44bfa12bac935a6bc8fe3a485a4436d4487a290.
A database (by default, SQLite droidlysis.db) containing properties it noticed.
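The output subdirectory naming described above can be sketched as follows; note that DroidLysis also sanitizes the sample's filename (the example name has its hyphens stripped), which this sketch skips.

```python
# Sketch of the output subdirectory naming: "<sample filename>-<sha256 of contents>".
# Filename sanitization performed by DroidLysis is intentionally omitted here.
import hashlib
import os

def output_subdir(output_dir: str, sample_name: str, sample_bytes: bytes) -> str:
    """Build the per-sample output path from the filename and SHA256 digest."""
    digest = hashlib.sha256(sample_bytes).hexdigest()
    return os.path.join(output_dir, f"{sample_name}-{digest}")

print(output_subdir("/tmp", "app.apk", b"example apk bytes"))
```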
Options
Get usage with droidlysis --help
The input can be a file or a directory of files to recursively look into. DroidLysis knows how to process Android packages, DEX, ODEX and ARM executables, ZIP and RAR archives. DroidLysis won't fail on other types of files (unless there is a bug...) but won't be able to understand the content.
When processing directories of files, it is typically quite helpful to move processed samples to another location to know what has been processed. This is handled by option --movein. Also, if you are only interested in statistics, you should probably clear the output directory which contains detailed information for each sample: this is option --clearoutput. If you want to store all statistics in a SQL database, use --enable-sql (see here)
DEX decompilation is quite long with Procyon, so this option is disabled by default. If you want to decompile to Java, use --enable-procyon.
DroidLysis's analysis does not inspect known third-party SDKs by default, i.e. it won't report any suspicious activity from them. If you want them to be inspected, use the option --no-kit-exception. This usually creates many more detected properties for the sample, as SDKs (e.g. advertisement kits) use lots of flagged APIs (get GPS location, get IMEI, get IMSI, HTTP POST...).
Sample output directory (--output DIR)
This directory contains (when applicable):
A readable AndroidManifest.xml
Readable resources in res
Libraries lib, assets assets
Disassembled Smali code: smali (and others)
Package meta information: META-INF
Package contents when simply unzipped in ./unzipped
DEX executable classes.dex (and others), and converted to jar: classes-dex2jar.jar, and unjarred in ./unjarred
The following files are generated by DroidLysis:
autoanalysis.md: lists each pattern DroidLysis detected and where.
report.md: same as what was printed on the console
If you do not need the sample output directory to be generated, use the option --clearoutput.
Trackers from Exodus which are not present in your initial kit.conf are appended to ~/.cache/droidlysis/kit.conf. Diff the 2 files and check what trackers you wish to add.
SQLite database
If you want to process a directory of samples, you'll probably like to store the properties DroidLysis found in a database, to easily parse and query the findings. In that case, use the option --enable-sql. This will automatically dump all results in a database named droidlysis.db, in a table named samples. Each entry in the table corresponds to a given sample, and each column is a property DroidLysis tracks.
For example, to retrieve all filename, SHA256 sum and smali properties of the database:
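A minimal sketch of such a query using Python's sqlite3 module; the column names (sanitized_basename, sha256, send_sms) are assumptions about the schema, demonstrated here against an in-memory table rather than a real droidlysis.db.

```python
# Query a "samples" table the way one would query droidlysis.db.
# NOTE: column names are illustrative assumptions about the DroidLysis schema.
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for sqlite3.connect("droidlysis.db")
conn.execute("CREATE TABLE samples (sanitized_basename TEXT, sha256 TEXT, send_sms INTEGER)")
conn.execute("INSERT INTO samples VALUES ('app.apk', 'f3c7d5e3', 1)")

# Retrieve filename, SHA256 sum and a smali property for every sample
rows = conn.execute("SELECT sanitized_basename, sha256, send_sms FROM samples").fetchall()
print(rows)  # [('app.apk', 'f3c7d5e3', 1)]
```

The same SELECT can of course be run from the sqlite3 command-line shell against droidlysis.db.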
What DroidLysis detects can be configured and extended in the files of the ./conf directory.
A pattern consists of:
a tag name: example send_sms. This is to name the property. Must be unique across the .conf file.
a pattern: this is a regexp to be matched. Ex: ;->sendTextMessage|;->sendMultipartTextMessage|SmsManager;->sendDataMessage. In the smali.conf file, this regexp is matched against Smali code. In this particular case, there are 3 different ways to send SMS messages from the code: sendTextMessage, sendMultipartTextMessage and sendDataMessage.
a description (optional): explains the importance of the property and what it means.
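The matching described above can be illustrated with Python's re module; the Smali line below is a fabricated example of what disassembled code calling sendTextMessage looks like.

```python
# Illustrate how a smali.conf-style pattern (a regexp) is matched against Smali code.
import re

# The example send_sms pattern from smali.conf
pattern = r";->sendTextMessage|;->sendMultipartTextMessage|SmsManager;->sendDataMessage"

# A fabricated line of disassembled Smali invoking SmsManager.sendTextMessage
smali_line = (
    "invoke-virtual/range {v0 .. v5}, Landroid/telephony/SmsManager;"
    "->sendTextMessage(Ljava/lang/String;Ljava/lang/String;Ljava/lang/String;"
    "Landroid/app/PendingIntent;Landroid/app/PendingIntent;)V"
)

print(bool(re.search(pattern, smali_line)))  # True: the send_sms property is detected
```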
Exodus Privacy maintains a list of various SDKs which are interesting to rule out in our analysis via conf/kit.conf. Add option --import_exodus to the droidlysis command line: this will parse existing trackers Exodus Privacy knows and which aren't yet in your kit.conf. Finally, it will append all new trackers to ~/.cache/droidlysis/kit.conf.
Afterwards, you may want to sort your kit.conf file:
import configparser
import collections
import os

config = configparser.ConfigParser({}, collections.OrderedDict)
config.read(os.path.expanduser('~/.cache/droidlysis/kit.conf'))
# Order all sections alphabetically
config._sections = collections.OrderedDict(
    sorted(config._sections.items(), key=lambda t: t[0]))
with open('sorted.conf', 'w') as f:
    config.write(f)
Updates
v3.4.6 - Detecting manifest feature that automatically loads APK at install