Today β€” 24 January 2022 β€” Tools

VulnLab - A Web Vulnerability Lab Project

24 January 2022 at 11:30
By: Zion3R

VulnLab

A web vulnerability lab project developed by Yavuzlar.



Vulnerabilities

  • SQL Injection
  • Cross Site Scripting (XSS)
  • Command Injection
  • Insecure Direct Object References (IDOR)
  • Cross Site Request Forgery (CSRF)
  • XML External Entity (XXE)
  • Insecure Deserialization
  • File Upload
  • File Inclusion
  • Broken Authentication

Installation

Install with DockerHub

  1. If you want to install from Docker Hub, just run this command:
     docker run --name vulnlab -d -p 1337:80 yavuzlar/vulnlab:latest
  2. Go to http://localhost:1337

Manual Installation

  1. Clone the repo
     git clone https://github.com/Yavuzlar/VulnLab
  2. Build docker image
     docker build -t yavuzlar/vulnlab .
  3. Run container
     docker run -d -p 1337:80 yavuzlar/vulnlab
  4. Go to http://localhost:1337
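
If you prefer Docker Compose, the docker run command above maps to a minimal compose file like this (the service name is an arbitrary choice; the image and port mapping come from the steps above):

version: "3"
services:
  vulnlab:
    image: yavuzlar/vulnlab:latest
    ports:
      - "1337:80"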



Contact

Website
LinkedIn
Twitter
Instagram



Yesterday β€” 23 January 2022 β€” Tools

Whatfiles - Log What Files Are Accessed By Any Linux Process

23 January 2022 at 20:30
By: Zion3R


Whatfiles is a Linux utility that logs what files another program reads/writes/creates/deletes on your system. It traces any new processes and threads that are created by the targeted process as well.


Rationale:

I've long been frustrated at the lack of a simple utility to see which files a process touches from main() to exit. Whether you don't trust a software vendor or are concerned about malware, it's important to be able to know what a program or installer does to your system. lsof only observes a moment in time and strace is large and somewhat complicated.

Sample output:

mode:  read, file: /home/theron/.gimp-2.8/tool-options/gimp-clone-tool, syscall: openat(), PID: 8566, process: gimp
mode: read, file: /home/theron/.gimp-2.8/tool-options/gimp-heal-tool, syscall: openat(), PID: 8566, process: gimp
mode: read, file: /home/theron/.gimp-2.8/tool-options/gimp-perspective-clone-tool, syscall: openat(), PID: 8566, process: gimp
mode: read, file: /home/theron/.gimp-2.8/tool-options/gimp-convolve-tool, syscall: openat(), PID: 8566, process: gimp
mode: read, file: /home/theron/.gimp-2.8/tool-options/gimp-smudge-tool, syscall: openat(), PID: 8566, process: gimp
mode: read, file: /home/theron/.gimp-2.8/tool-options/gimp-dodge-burn-tool, syscall: openat(), PID: 8566, process: gimp
mode: read, file: /home/theron/.gimp-2.8/tool-options/gimp-desaturate-tool, syscall: openat(), PID: 8566, process: gimp
mode: read, file: /home/theron/.gimp-2.8/plug-ins, syscall: openat(), PID: 8566, process: gimp
mode: read, file: /usr/lib/gimp/2.0/plug-ins, syscall: openat(), PID: 8566, process: gimp
mode: read, file: /home/theron/.gimp-2.8/pluginrc, syscall: openat(), PID: 8566, process: gimp
mode: read, file: /usr/share/locale/en_US/LC_MESSAGES/gimp20-std-plug-ins.mo, syscall: openat(), PID: 8566, process: gimp
mode: read, file: /usr/lib/gimp/2.0/plug-ins/script-fu, syscall: openat(), PID: 8566, process: gimp
mode: read, file: /etc/ld.so.cache, syscall: openat(), PID: 8574, process: /usr/lib/gimp/2.0/plug-ins/script-fu
mode: read, file: /etc/ld.so.cache, syscall: openat(), PID: 8574, process: /usr/lib/gimp/2.0/plug-ins/script-fu
mode: read, file: /usr/lib/libgimpui-2.0.so.0, syscall: openat(), PID: 8574, process: /usr/lib/gimp/2.0/plug-ins/script-fu
mode: read, file: /usr/lib/libgimpwidgets-2.0.so.0, syscall: openat(), PID: 8574, process: /usr/lib/gimp/2.0/plug-ins/script-fu
mode: read, file: /usr/lib/libgimpwidgets-2.0.so.0, syscall: openat(), PID: 8574, process: /usr/lib/gimp/2.0/plug-ins/script-fu
mode: read, file: /usr/lib/libgimp-2.0.so.0, syscall: openat(), PID: 8574, process: /usr/lib/gimp/2.0/plug-ins/script-fu
mode: read, file: /usr/lib/libgimpcolor-2.0.so.0, syscall: openat(), PID: 8574, process: /usr/lib/gimp/2.0/plug-ins/script-fu

Use:

  • basic use, launches ls and writes output to a log file in the current directory:

    $ whatfiles ls -lah ~/Documents

  • specify output file location with -o:

    $ whatfiles -o MyLogFile cd ..

  • include debug output, print to stdout rather than log file:

    $ whatfiles -d -s apt install zoom

  • attach to currently running process (requires root privileges):

    $ sudo whatfiles -p 1234

Distribution

Ready-to-use binaries are on the releases page! Someone also kindly added it to the Arch repository, and letompouce set up a GitLab pipeline as well.

Compilation (requires gcc and make):

$ cd whatfiles
$ make
$ sudo make install

Supports x86, x86_64, ARM32, and ARM64 architectures.

Questions that could be asked at some point:

  • Isn't this just a reimplementation of strace -fe trace=creat,open,openat,unlink,unlinkat ./program?

    Yes. Though it aims to be simpler and more user friendly.

  • Are there Mac and Windows versions?

    No. Tracing syscalls on Mac requires task_for_pid(), which requires code signing, which I can't get to work, and anyway I have no interest in paying Apple $100/year to write free software. dtruss on Mac can be used to follow a single process and its children, though the -t flag seems to only accept a single syscall to filter on. fs_usage does something similar though I'm not sure if it follows child processes/threads. Process Monitor for Windows is pretty great.

Known issues:

  • Tabs crash when whatfiles is used to launch Firefox. (Attaching with -p [PID] once it's running works fine, as does using whatfiles to launch a second Firefox window if one's already open.)

Planned features:

  • None currently, open to requests and PRs.

Thank you for your interest, and please also check out Cloaker, Nestur, and Flying Carpet!



CFRipper – CloudFormation Security Scanning & Audit Tool

23 January 2022 at 17:15
By: Darknet

CFRipper is a Python-based library and CLI security analyzer that functions as an AWS CloudFormation security scanning and audit tool. It aims to prevent vulnerabilities from reaching production infrastructure through vulnerable CloudFormation scripts.

You can use CFRipper to prevent deploying insecure AWS resources into your Cloud environment. You can write your own compliance checks by adding new custom plugins.

CFRipper should be part of your CI/CD pipeline. It runs just before a CloudFormation stack is deployed or updated; if the CloudFormation script fails to pass the security check, it fails the deployment and notifies the team that owns the stack.
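
As a sketch of what that pipeline step might look like (the exact flags are assumptions; consult the CFRipper documentation for the real interface):

pip install cfripper
cfripper templates/stack.yaml --format json

A non-zero exit status from the scan can then be used to break the build.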


Second-Order - Subdomain Takeover Scanner

23 January 2022 at 11:30
By: Zion3R


Scans web applications for second-order subdomain takeover by crawling the app and collecting URLs (and other data) that match certain rules or respond in a certain way.


Installation

From binary

Download a prebuilt binary from the releases page and unzip it.

From source

Go version 1.17 is recommended.

go install -v github.com/mhmdiaa/second-order@latest

Docker

docker pull mhmdiaa/second-order

Command line options

  -target string
        Target URL
  -config string
        Configuration file (default "config.json")
  -depth int
        Depth to crawl (default 1)
  -header value
        Header name and value separated by a colon 'Name: Value' (can be used more than once)
  -insecure
        Accept untrusted SSL/TLS certificates
  -output string
        Directory to save results in (default "output")
  -threads int
        Number of threads (default 10)

Configuration File

Example configuration files are in the config directory.

  • LogQueries: A map of tag-attribute queries that will be searched for in crawled pages. For example, "a": "href" means log every href attribute of every a tag.
  • LogNon200Queries: A map of tag-attribute queries that will be searched for in crawled pages, and logged only if they contain a valid URL that doesn't return a 200 status code.
  • LogInline: A list of tags whose inline content (between the opening and closing tags) will be logged, like title and script

Output

All results are saved in JSON files that specify what and where data was found

  • The results of LogQueries are saved in attributes.json
{
    "https://example.com/": {
        "input[name]": [
            "user",
            "id",
            "debug"
        ]
    }
}
  • The results of LogNon200Queries are saved in non-200-url-attributes.json
{
    "https://example.com/": {
        "script[src]": [
            "https://cdn.old_abandoned_domain.com/app.js"
        ]
    }
}
  • The results of LogInline are saved in inline.json
{
    "https://example.com/": {
        "title": [
            "Example - Home"
        ]
    },
    "https://example.com/login": {
        "title": [
            "Example - login"
        ]
    }
}

Usage Ideas

This is a list of tips and ideas (not necessarily related to second-order subdomain takeover) on what to use Second Order for.

  • Check for second-order subdomain takeover: takeover.json. (Duh!)
  • Collect inline and imported JS code: javascript.json.
  • Find where a target hosts static files cdn.json. (S3 buckets, anyone?)
  • Collect <input> names to build a tailored parameter bruteforcing wordlist: parameters.json.
  • Feel free to contribute more ideas!

References

https://shubs.io/high-frequency-security-bug-hunting-120-days-120-bugs/#secondorder

https://edoverflow.com/2017/broken-link-hijacking/



Before yesterday β€” Tools

Mandiant-Azure-AD-Investigator - PowerShell module for detecting artifacts that may be indicators of UNC2452 and other threat actor activity

22 January 2022 at 20:30
By: Zion3R


This repository contains a PowerShell module for detecting artifacts that may be indicators of UNC2452 and other threat actor activity. Some indicators are "high-fidelity" indicators of compromise, while other artifacts are so called "dual-use" artifacts. Dual-use artifacts may be related to threat actor activity, but also may be related to legitimate functionality. Analysis and verification will be required for these. For a detailed description of the techniques used by UNC2452 see our blog.


This tool is read-only. It does not make any changes to the Microsoft 365 environment.

In summary, this module will run the read-only checks described in the Features section below.

It will not:

  • Identify a compromise 100% of the time, or
  • Tell you if an artifact is legitimate admin activity or threat actor activity.

With community feedback, the tool may become more thorough in its detection of IOCs. Please open an issue, submit a PR, or contact the authors if you have problems, ideas, or feedback.

Features

Federated Domains (Invoke-MandiantAuditAzureADDomains)

This module uses MS Online PowerShell to look for and audit federated domains in Azure AD. All federated domains will be output to the file federated domains.csv.

  • Signing Certificate Unusual Validity Period - Alerts on a federated domain where the signing certificates have a validity period of > 1 year. AD FS managed certificates are valid for only one year. Validity periods that are longer than one year could be an indication that a threat actor has tampered with the domain federation settings. They may also be indicative of the use of a legitimate custom token-signing certificate. Have your administrators verify if this is the case.
  • Signing Certificate Mismatch - Alerts on federated domains where the issuer or subject of the signing certificates do not match. In most cases the token-signing certificates will always be from the same issuer and have the same subject. If there is a mismatch, then it could be an indication that a threat actor has tampered with the domain federation settings. Have your administrators verify if the subject and issuer names are expected, and if not consider performing a forensic investigation to determine how the changes were made and to identify any other evidence of compromise.
  • Azure AD Backdoor (any.sts) - Alerts on federated domains configured with any.sts as the Issuer URI. This is indicative of usage of the Azure AD Backdoor tool. Consider performing a forensic investigation to determine how the changes were made and to identify any other evidence of compromise.
  • Federated Domains - Lists all federated domains and the token issuer URI. Verify that the domain should be federated and that the issuer URI is expected.
  • Unverified Domains - Lists all unverified domains in Azure AD. Unverified domains should not be kept in Azure AD for long in an unverified state. Consider removing them.

Examples

!! Evidence of AAD backdoor found.
Consider performing a detailed forensic investigation
Domain name: foobar.com
Domain federation name:
Federation issuer URI: http://any.sts/16B45E3B

The script has identified a domain that has been federated with an issuer URI that is an indicator of an Azure AD Backdoor. The backdoor sets the issuer URI to hxxp://any.sts by default. Consider performing a forensic investigation to determine how the changes were made and identify any other evidence of compromise.
!! A token signing certificate has a validity period of more than 365 days. 
This may be evidence of a signing certificate not generated by AD FS.
Domain name: foobar.com
Federation issuer uri: http://sts.foobar.com
Signing cert not valid before: 1/1/2020 00:00:00
Signing cert not valid after: 12/31/2025 23:59:59

The script has identified a federated domain with a token-signing certificate that is valid for longer than the standard 365 days. Consult with your administrators to see if the token-signing certificate is manually managed and if it is expected to have the stated validity period. Consider performing a forensic investigation if this is not expected.

Service Principals (Invoke-MandiantAuditAzureADServicePrincipals)

This module uses Azure AD PowerShell to look for and audit Service Principals in Azure AD.

  • First-party Service Principals with added credentials - First-party (Microsoft published) Service Principals should not have added credentials except in rare circumstances. Environments that are or were previously in a hybrid-mode may have credentials added to Exchange Online, Skype for Business, and AAD Password Protection Proxy Service Principals. Verify that the Service Principal credential is part of a legitimate use case. Consider performing a forensic investigation if the credential is not legitimate.
  • Service Principals with high level privileges and added credentials - Identifies Service Principals that have high-risk API permissions assigned and added credentials. While the Service Principal and added permissions are likely legitimate, the added credentials may not be. Verify that the Service Principal credentials are part of a legitimate use case. Verify that the Service Principal needs the listed permissions.

Examples

!! Identified first-party (Microsoft published) Service Principals with added credentials.
Only in rare cases should a first-party Service Principal have an added credential.
Verify that the added credential has a legitimate use case and consider further investigation if not
*******************************************************************
Object ID : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
App ID : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Display Name : Office 365 Exchange Online
Key Credentials :

CustomKeyIdentifier :
EndDate : 12/9/2017 2:10:29 AM
KeyId : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
StartDate : 12/9/2015 1:40:30 AM
Type : AsymmetricX509Cert
Usage : Verify
Value :

The script has identified a first-party (Microsoft) Service Principal with added credentials. First-party Service Principals should not have added credentials except in rare cases. Environments that are or were previously in a hybrid-mode may have credentials added to Exchange Online, Skype for Business, and AAD Password Protection Proxy Service Principals. This may also be an artifact of UNC2452 activity in your environment. Consult with your administrators and search the audit logs to verify the credential is legitimate. You can also use the "Service Principal Sign-Ins" tab in the Azure AD Sign-Ins blade to search for authentications to your tenant using this Service Principal.
!! Identified Service Principals with high-risk API permissions and added credentials.
Verify that the added credential has a legitimate use case and consider further investigation if not
Object ID : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
App ID : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Display Name : TestingApp
Key Credentials :
CustomKeyIdentifier :
EndDate : 1/7/2025 12:00:00 AM
KeyId : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
StartDate : 1/7/2021 12:00:00 AM
Type : Symmetric
Usage : Verify
Value :
Password Credentials :
Risky Permissions : Domain.ReadWrite.All

The script has identified a Service Principal with high-risk API permissions and added credentials. This may be expected, as some third-party or custom-built applications require added credentials in order to function. This may also be an artifact of UNC2452 activity in your environment. Consult with your administrators and search the audit logs to verify the credential is legitimate. You can also use the "Service Principal Sign-Ins" tab in the Azure AD Sign-Ins blade to search for authentications to your tenant using this Service Principal.

Applications (Invoke-MandiantAuditAzureADApplications)

This module uses Azure AD PowerShell to look for and audit Applications in Azure AD.

  • Applications with high level privileges and added credentials - Alerts on Applications that have high-risk API permissions and added credentials. While the Applications and added permissions are likely legitimate, the added credentials may not be. Verify that the Application credentials are part of a legitimate use case. Verify that the Applications needs the listed permissions.

Example

!! High-privileged Application with credentials found.
Validate that the application needs these permissions.
Validate that the credentials added to the application are associated with a legitimate use case.

ObjectID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
AppID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
DisplayName: Acme Test App
KeyCredentials:
PasswordCredentials:

CustomKeyIdentifier :
EndDate : 12/22/2021 4:01:52 PM
KeyId : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
StartDate : 12/22/2020 4:01:52 PM
Value :

CustomKeyIdentifier :
EndDate : 12/21/2021 6:32:54 PM
KeyId : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
StartDate : 12/21/2020 6:33:16 PM
Value :

Risky Permissions:
Mail.Read (Read mail in all mailboxes)
Directory.Read.All (Read all data in the organization directory)

The script has identified an Application with high-risk API permissions and added credentials. This may be expected, as some third-party or custom-built applications require added credentials in order to function. This may also be an artifact of UNC2452 activity in your environment. Consult with your administrators and search the audit logs to verify the credential is legitimate.

Cloud Solution Provider Program (Invoke-MandiantGetCSPInformation)

This module checks to see if the tenant is managed by a CSP, or partner, and if delegated administration is enabled. Delegated administration allows the CSP to access a customer tenant with the same privileges as a Global Administrator. Although the CSP program enforces strong security controls on the partner's tenant, a threat actor that compromises the CSP may be able to access customer environments. Organizations should verify if their partner needs delegated admin privileges and remove it if not. If the partner must maintain delegated admin access, consider implementing Conditional Access Policies to restrict their access.

Organizations can check and manage partner relationships by navigating to the Admin Center and navigating to Settings -> Partner Relationships on the left-hand menu bar.

Mailbox Folder Permissions (Get-MandiantMailboxFolderPermissions)

This module audits all the mailboxes in the tenant for the existence of suspicious folder permissions. Specifically, this module will examine the "Top of Information Store" and "Inbox" folders in each mailbox and check the permissions assigned to the "Default" and "Anonymous" users. Any value other than "None" will result in the mailbox being flagged for analysis. In general, the Default and Anonymous users should not have permissions on user inboxes, as this would allow any user to read their contents. Some organizations may find shared mailboxes with this permission, but it is not recommended practice.
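
For reference, a single mailbox can be spot-checked with the standard Exchange Online cmdlet this check builds on (the mailbox identity below is a placeholder):

Get-EXOMailboxFolderPermission -Identity "user@example.com:\Inbox" -User Default

Any AccessRights value other than None would flag the mailbox under the logic described above.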

Application Impersonation (Get-MandiantApplicationImpersonationHolders)

This module outputs the list of users and groups that hold the ApplicationImpersonation role. Any user or member of a group in the output of this command can use impersonation to "act as" and access the mailbox of any other user in the tenant. Organizations should audit the output of this command to ensure that only expected users and groups are included, and where possible further restrict the scope.
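
A rough manual equivalent using the standard Exchange Online role-assignment cmdlet (the parameter choice is an assumption; the module applies its own logic on top):

Get-ManagementRoleAssignment -Role "ApplicationImpersonation" -GetEffectiveUsers | Select-Object EffectiveUserName, RoleAssigneeName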

Unified Audit Log (Get-MandiantUnc2452AuditLogs)

This module is a helper script to search the Unified Audit Log. Searching the Unified Audit Log has many technical caveats that can be easy to overlook. This module can help simplify the search process by implementing best practices for navigating these caveats and handling some common errors.

By default, the module will search for log entries that can record UNC2452 techniques. The log records may also capture legitimate administrator activity and will need to be verified. A sample raw query follows the list below.

  • Update Application - Records actions taken to update App Registrations.
  • Set Domain Auth - Records when authentication settings for a domain are changed, including the creation of federation realm objects. These events should occur rarely in an environment and may indicate a threat actor configuring an AAD backdoor.
  • Set Federation Settings - Records when the federation realm object for a domain is modified. These events should occur rarely in an environment and may indicate a threat actor preparing to execute a Golden SAML attack.
  • Update Application Certificates and Secrets - Records when a secret or certificate is added to an App Registration.
  • PowerShell Mailbox Logins - Records Mailbox Login operations where the client application was PowerShell.
  • Update Service Principal - Records when updates are made to an existing Service Principal.
  • Add Service Principal Credentials - Records when a secret or certificate is added to a Service Principal.
  • Add App Role Assignment - Records when an App Role (Application Permission) is added.
  • App Role Assignment for User - Records when an App Role is assigned to a user.
  • PowerShell Authentication - Records when a user authenticates to Azure AD using a PowerShell client.
  • New Management Role Assignments - Records when new management role assignments are created. This can be useful to identify new ApplicationImpersonation grants.

Usage

Required Modules

The PowerShell module requires the installation of three Microsoft 365 PowerShell modules.

  • AzureAD
  • MSOnline
  • ExchangeOnlineManagement

To install the modules:

  1. Open a PowerShell window as a local administrator (right-click then select Run As Administrator)
  2. Run the command Install-Module <MODULE NAME HERE> and follow the prompts (all three commands are shown below)
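
For the three modules listed above, that amounts to:

Install-Module AzureAD
Install-Module MSOnline
Install-Module ExchangeOnlineManagement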

Required User Permissions

The PowerShell module must be run with a Microsoft 365 account assigned specific privileges.

  • Global Administrator or Global Reader role in the Azure AD portal
  • View-Only Audit Logs in the Exchange Control Panel

To grant an account View-Only Audit Logs in the Exchange Control Panel:

  1. Navigate to https://outlook.office365.com/ecp and log in as a global admin or exchange admin (note: the exact URL may differ if you are in an alternate cloud)
  2. Click admin roles in the dashboard, or expand the roles tab on the left and click admin roles if you are in the new UI
  3. Create a new admin role by clicking the + sign or clicking add new role group
  4. Give your role a name and default write-scope
  5. Add the View-Only Audit Logs permission to the role
  6. Add the user to the role

Note: it can take up to an hour for this role to apply.

Running the tool

  1. Download this tool as a ZIP and unzip it, or clone the repository to your system
  2. Open a PowerShell window
  3. Change directories to the location of this module cd C:\path\to\the\module
  4. Import the module with Import-Module .\MandiantAzureADInvestigator.psd1. You should receive this output:

Mandiant Azure AD Investigator
Focusing on UNC2452 Investigations

PS C:\Users\admin\Desktop\mandiant>
  5. Connect to Azure AD by running Connect-MandiantAzureEnvironment -UserPrincipalName <your username here>. You should receive a login prompt and output to the PowerShell window indicating the connections have been established. Note: If you run into issues you may need to change your execution policy by running Set-ExecutionPolicy -ExecutionPolicy RemoteSigned. This may require administrator privileges.
----------------------------------------------------------------------------
The module allows access to all existing remote PowerShell (V1) cmdlets in addition to the 9 new, faster, and more reliable cmdlets.

|--------------------------------------------------------------------------|
| Old Cmdlets | New/Reliable/Faster Cmdlets |
|--------------------------------------------------------------------------|
| Get-CASMailbox | Get-EXOCASMailbox |
| Get-Mailbox | Get-EXOMailbox |
| Get-MailboxFolderPermission | Get-EXOMailboxFolderPermission |
| Get-MailboxFolderStatistics | Get-EXOMailboxFolderStatistics |
| Get-MailboxPermission | Get-EXOMailboxPermission |
| Get-MailboxStatistics | Get-EXOMailboxStatistics |
| Get-MobileDeviceStatistics | Get-EXOMobileDeviceStatistics |
| Get-Recipient | Get-EXORecipient |
| Get-RecipientPermission | Get-EXORecipientPermission |
|--------------------------------------------------------------------------|

To get additional information, run: Get-Help Connect-ExchangeOnline or check https://aka.ms/exops-docs

Send your product improvement suggestions and feedback to [email protected] For issues related to the module, contact Microsoft support. Don't use the feedback alias for problems or support issues.
----------------------------------------------------------------------------

Account Environment TenantId TenantDomain
------- ----------- -------- ------------
[email protected] AzureCloud xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx test.onm...
  6. Run all checks: Invoke-MandiantAllChecks -OutputPath <path\to\output\files>. You can also run individual checks using the specific cmdlet.
  7. Review the output on the screen and the written CSV files.

Further Reading

For additional information from Mandiant regarding UNC2452, please see our published blog posts.

The response to UNC2452 has been a significant effort across the security industry and these blogs heavily cite additional contributions that will be of value to users of this tool. We recommend reading the linked material from these posts to best understand activity in your environment. As always, the Mandiant team is available to answer follow-up questions or further assist on an investigation by contacting us here.



Pwndora - Massive IPv4 Scanner, Find And Analyze Internet-Connected Devices In Minutes, Create Your Own IoT Search Engine At Home

22 January 2022 at 11:30
By: Zion3R


Pwndora is a massive and fast IPv4 address range scanner with multi-threading support.

Using sockets, it analyzes which ports are open and collects more information about targets; each result is stored in Elasticsearch. You can integrate it with Kibana to visualize and manipulate the data; basically, it's like having your own IoT search engine at home.


Features

  • Port scanning with different options and retrieve software banner information.
  • Detect some web technologies running on servers, using Webtech integration.
  • Retrieves IP geolocation from Maxmind free database, updated periodically.
  • Possibility to take screenshots from hosts with HTTP using Rendertron.
  • Anonymous login detection on FTP servers

Usage

usage: CLI.py [-h] [-s START] [-e END] [-t THREADS] [--massive FILE] [--timeout TIMEOUT]
[--screenshot] [--top-ports] [--all-ports] [--update]
options:
-h, --help show this help message and exit
-s START Start IPv4 address
-e END End IPv4 address
-t THREADS Number of threads [Default: 50]
--massive FILE File path with IPv4 ranges
--timeout TIMEOUT Socket timeout [Default: 0.5]
--screenshot Take screenshots from hosts with HTTP
--top-ports Scan only 20 most used ports [Default]
--all-ports Scan 1000 most used ports
--update Update database from Wappalyzer

Examples

If this is your first time running Pwndora, you should use the --update argument first.

Scan only a single IPv4 address range:

python3 CLI.py -s 192.168.0.0 -e 192.168.0.255 -t 150 --top-ports

Scan from a text file with multiple IPv4 address ranges:

python3 CLI.py --massive Argentina.csv -t 200 --all-ports --screenshot

If you use an excessive number of threads, some ISPs may detect suspicious traffic and disconnect you from the network.

Requirements

pip install -r requirements.txt

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change. Please make sure to update tests as appropriate.

Contact

[email protected]



T-Reqs-HTTP-Fuzzer - A Grammar-Based HTTP Fuzzer

21 January 2022 at 20:30
By: Zion3R


T-Reqs (Two Requests) is a grammar-based HTTP Fuzzer written as a part of the paper titled "T-Reqs: HTTP Request Smuggling with Differential Fuzzing" which was presented at ACM CCS 2021.


BibTeX of the paper:

@inproceedings{ccs2021treqs,
title={T-Reqs: HTTP Request Smuggling with Differential Fuzzing},
author={Jabiyev, Bahruz and Sprecher, Steven and Onarlioglu, Kaan and Kirda, Engin},
booktitle={Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security},
pages={1805--1820},
year={2021}
}

About

T-Reqs is for fuzzing HTTP servers by sending mutated HTTP requests with versions 1.1 and earlier. It has three main components: 1) generating inputs, 2) mutating generated inputs and 3) delivering them to the target server(s).

Generating Inputs

A CFG grammar fed into the fuzzer is used to generate HTTP requests. As the example grammar shown below is tailored for request line fuzzing, every request line component and the possible values for each of them are explicitly specified. This allows us to generate valid requests with various forms of the request line and also to treat each request line component as a separate unit from the mutation perspective.

 '<start>':
['<request>'],
'<request>':
['<request-line><base><the-rest>'],
'<request-line>':
['<method-name><space><uri><space><protocol><separator><version><newline>'],
'<method-name>':
['GET', 'HEAD', 'POST', 'PUT', 'DELETE', 'CONNECT', 'OPTIONS', 'TRACE', 'PATCH'],
'<space>':
[' '],
'<uri>':
['/_URI_'],
'<protocol>':
['HTTP'],
'<separator>':
['/'],
'<version>':
['0.9', '1.0', '1.1'],
'<newline>':
['\r\n'],
'<base>':
['Host: _HOST_\r\nConnection:close\r\nX-Request-ID: _REQUEST_ID_\r\n'],
'<the-rest>':
['Content-Length: 5\r\n\r\nBBBBBBBBBB'],

Mutating Inputs

Each component can be marked in two ways: string mutable and tree mutable (see the example configuration). If a component is string mutable, then a random character can be deleted, replaced, or inserted at a random position. For example, the last character in the protocol version (1) might be deleted, the third letter in the method name (S) replaced with R, and a forward slash inserted at the beginning of the URI. If a component is tree mutable, then a random component can be deleted, replaced, or inserted at a random position under that component. Example tree mutations applied on the request line component: 1) the method is replaced by the protocol, 2) an extra URI is inserted after the current URI, and 3) the existing protocol is deleted.
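
Rendered concretely, the three string mutations described above would transform an illustrative request line as follows:

Original: POST /uri HTTP/1.1
Mutated:  PORT //uri HTTP/1.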

Usage

Configuration

The fuzzer must be informed of the user's preferences for the generation and mutation of inputs. More specifically, the input grammar, the mutable components, and mutation preferences, among other things, should be specified in the configuration file (see an example configuration).

Running modes

To be able to reproduce the inputs generated and mutated in each iteration, a seed number is used. This seed number serves as a seed for random number generation during the formation and mutation of an input. Depending on how these seeds are fed into the fuzzer, it runs in one of two modes: individual and collective. In the individual mode, inputs are generated and mutated based on the seeds specified by the user. In the command below, a single seed (i.e., 505) is specified. Alternatively, a list of seeds can be specified with the -f option (see the help page for more).

python3 main.py -i -c config -s 505

In the collective mode (the default), the fuzzer starts from zero as the seed value and increments it in each iteration until the end number is reached. The beginning and end numbers can be customized.

python3 main.py -c config

Using for Finding HRS discrepancies

HTTP Request Smuggling relies on differences in body parsing behavior between servers, where one server uses the Transfer-Encoding header while the other prefers the Content-Length header to decide the boundaries of a request body, or one server ignores a request body whereas the other one processes it.

To analyze the body parsing of servers in response to various mutations in various forms of an HTTP request, we need a feedback mechanism installed on those servers to tell us about the body parsing behavior. One way of installing a feedback mechanism on a server is to run the server in reverse-proxy mode and have it forward requests to a "feedback provider" script running as a service. This service measures the length of the body in received requests and saves it for later comparison with other servers.

An example "feedback provider" script is available in this repository. However, this script sends the body length information back in a response assuming that this information is stored on the client side.

License

T-Reqs is licensed under MIT license.



Wireshark-Forensics-Plugin - A cross-platform Wireshark plugin that correlates network traffic data with threat intelligence, asset categorization & vulnerability data

21 January 2022 at 11:30
By: Zion3R


Wireshark is the most widely used network traffic analyzer. It is an important tool for both live traffic analysis and forensic analysis for forensic/malware analysts. Even though Wireshark provides incredibly powerful protocol parsing & filtering functionality, it does not provide any contextual information about network endpoints. For a typical analyst, who has to comb through GBs of PCAP files to identify malicious activity, it's like finding a needle in a haystack.

Wireshark Forensics Toolkit is a cross-platform Wireshark plugin that correlates network traffic data with threat intelligence, asset categorization & vulnerability data to speed up network forensic analysis. It does it by extending Wireshark native search filter functionality to allow filtering based on these additional contextual attributes. It works with both PCAP files and real-time traffic captures.


This toolkit provides the following functionality

  • Loads malicious Indicators CSV exported from Threat Intelligence Platforms like MISP and associates it with each source/destination IP from network traffic
  • Loads asset classification information based on IP-Range to Asset Type mapping which enables filtering incoming/outgoing traffic from a specific type of assets (e.g. filter for β€˜Database Server’, β€˜Employee Laptop’ etc)
  • Loads exported vulnerability scan information exported from Qualys/Nessus map IP to CVEs.
  • Extends native Wireshark filter functionality to allow filtering based severity, source, asset type & CVE information for each source or destination IP address in network logs

How To Use

  1. Download the source Zip file or check out the code
  2. The folder data/formatted_reports has 3 files
  β€’ asset_tags.csv : Information about asset ip/domain/cidr and associated tags. The default file has a few examples for intranet IPs & DNS servers
  β€’ asset_vulnerabilities.csv : Details about CVE IDs and the top CVSS score value for each asset
  β€’ indicators.csv : IOC data with attributes type, value, severity & threat type
  3. All 3 files mentioned in step 2 can either be edited manually, or the vulnerabilities & indicators files can be generated from exported MISP & Tenable Nessus scan reports. The exported files need to be placed under the following folders with the exact names specified
  β€’ data/raw_reports/misp.csv : this file can be exported from MISP from the following location: Export->CSV_Sig->Generate, then Download

  β€’ data/raw_reports/nessus.csv : this file can be exported from the Tenable Nessus interface. Go to Scans->Scan Results->select the latest full scan entry. Select Vulnerability Detail List from the dropdown.

Then go to Options->Export as CSV->Select All->Submit. Rename the downloaded file to nessus.csv and copy it to data/raw_reports/nessus.csv

  4. If you are planning to download data from ThreatStream instead of using MISP, provide username, api_key and filter in the config.json file. Each time you run the Python script, it will try to grab the latest IOCs from ThreatStream & store them in the data/formatted_reports/indicators.csv file.

  5. Run wft.exe if you are on Windows, or run 'python wft.py' on Mac or Ubuntu, to install and/or replace updated report files. The script will automatically pick up the Wireshark install location. If you have installed Wireshark on a custom path or are using the Wireshark Portable App, you can provide the location as a command line argument. E.g. while using the Portable App, the location would look something like 'C:\Downloads\WiresharkPortable\Data'

  6. Post installation, open Wireshark, go to Edit->Configuration Profiles, and select the wireshark forensic toolkit profile. This will enable all the additional columns

  7. Now relaunch Wireshark and either open a PCAP file or start a live capture. In the search filter you can use additional filtering parameters, each starting with 'wft'. Wireshark will show a dropdown of all available filtering parameters. Note that all these additional filtering parameters are available for both source & destination IP/Domain values.

List of filters available

Note: all these options are also available for destination; just replace 'wft.src' with 'wft.dst'. An example filter expression follows the list.

  • wft.src.domain (Source Domain Resolution using previous DNS traffic)
  • wft.src.detection (Source IP/Domain detection using IOC data)
  • wft.src.severity (Source IP/Domain detection severity using IOC data)
  • wft.src.threat_type (Source IP/Domain threat type severity using IOC data)
  • wft.src.tags (Source IP/Domain asset tags)
  • wft.src.os (Source IP/Domain operating system specified in vulnerability report)
  • wft.src.cve_ids (Comma separated list of CVE IDS for source IP/Domain)
  • wft.src.top_cvss_score (Top CVSS score among all CVE IDs for a given host)

LICENSE:

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.



Dep-Scan - Fully Open-Source Security Audit For Project Dependencies Based On Known Vulnerabilities And Advisories. Supports Both Local Repos And Container Images. Integrates With Various CI Environments Such As Azure Pipelines, CircleCI, Google CloudBuild

20 January 2022 at 11:30
By: Zion3R


dep-scan is a fully open-source security audit tool for project dependencies based on known vulnerabilities, advisories and license limitations. Both local repositories and container images are supported as input. The tool is ideal for CI environments with built-in build breaker logic.

If you have just come across this repo, probably the best place to start is to check out the parent project slscan, which includes depscan along with a number of other tools.


Features

  • Local repos and container image based scanning with CVE insights [1]
  • Package vulnerability scanning is performed locally and is quite fast. No server is used!
  • Suggest optimal fix version by package group (See suggest mode)
  • Perform deep packages risk audit for dependency confusion attacks and maintenance risks (See risk audit)

NOTE:

  • [1] Only application related packages in container images are included in scanning. OS packages are not included yet.

Vulnerability Data sources

  • OSV
  • NVD
  • GitHub
  • NPM

Usage

dep-scan is ideal for use during continuous integration (CI) and also as a tool for local development.

Use with ShiftLeft Scan

dep-scan is integrated with scan, a free and open-source SAST tool. To enable this feature simply pass depscan to the --type argument. Refer to the scan documentation for more information.

---
--type python,depscan,credscan

This approach should work for all CI environments supported by scan.

Scanning projects locally (Python version)

sudo npm install -g @appthreat/cdxgen
pip install appthreat-depscan

This would install two commands called cdxgen and depscan.

You can invoke the depscan command directly with the various options.

cd <project to scan>
depscan --src $PWD --report_file $PWD/reports/depscan.json

Full list of options are below:

usage: depscan [-h] [--no-banner] [--cache] [--sync] [--suggest] [--risk-audit] [--private-ns PRIVATE_NS] [-t PROJECT_TYPE] [--bom BOM] -i SRC_DIR [-o REPORT_FILE]
[--no-error]
-h, --help show this help message and exit
--no-banner Do not display banner
--cache Cache vulnerability information in platform specific user_data_dir
--sync Sync to receive the latest vulnerability data. Should have invoked cache first.
--suggest Suggest appropriate fix version for each identified vulnerability.
--risk-audit Perform package risk audit (slow operation). Npm only.
--private-ns PRIVATE_NS
Private namespace to use while performing oss risk audit. Private packages should not be available in public registries by default. Comma
separated values accepted.
-t PROJECT_TYPE, --type PROJECT_TYPE
Override project type if auto-detection is incorrect
--bom BOM Examine using the given Software Bill-of-Materials (SBoM) file in CycloneDX format. Use cdxgen command to produce one.
-i SRC_DIR, --src SRC_DIR
Source directory
-o REPORT_FILE, --report_file REPORT_FILE
Report filename with directory
--no-error Continue on error to prevent build from breaking

Scanning containers locally (Python version)

Scan latest tag of the container shiftleft/scan-slim

depscan --no-error --cache --src shiftleft/scan-slim -o containertests/depscan-scan.json -t docker

Include license in the type to perform a license audit.

depscan --no-error --cache --src shiftleft/scan-slim -o containertests/depscan-scan.json -t docker,license

You can also specify the image using the sha256 digest:

depscan --no-error --src redmine@sha256:a5c5f8a64a0d9a436a0a6941bc3fb156be0c89996add834fe33b66ebeed2439e -o containertests/depscan-redmine.json -t docker

You can also save container images using the docker or podman save command and pass the archive to depscan for scanning.

docker save -o /tmp/scanslim.tar shiftleft/scan-slim:latest
# podman save --format oci-archive -o /tmp/scanslim.tar shiftleft/scan-slim:latest
depscan --no-error --src /tmp/scanslim.tar -o reports/depscan-scan.json -t docker

Refer to the docker tests under GitHub action workflow for this repo for more examples.

Scanning projects locally (Docker container)

appthreat/dep-scan or quay.io/appthreat/dep-scan container image can be used to perform the scan.

To scan with default settings

docker run --rm -v $PWD:/app appthreat/dep-scan scan --src /app --report_file /app/reports/depscan.json

To scan with custom environment variables based configuration

docker run --rm \
-e VDB_HOME=/db \
-e NVD_START_YEAR=2010 \
-e GITHUB_PAGE_COUNT=5 \
-e GITHUB_TOKEN=<token> \
-v /tmp:/db \
-v $PWD:/app appthreat/dep-scan scan --src /app --report_file /app/reports/depscan.json

In the above example, /tmp is mounted as /db into the container. This directory is then specified as VDB_HOME for caching the vulnerability information. This way the database can be cached and reused to improve performance.

Supported languages and package formats

dep-scan uses cdxgen command internally to create Software Bill-of-Materials (SBoM) file for the project. This is then used for performing the scans.

The following project types and package-dependency formats are supported by cdxgen.

Language             Package format
node.js              package-lock.json, pnpm-lock.yaml, yarn.lock, rush.js
java                 maven (pom.xml [1]), gradle (build.gradle, .kts), scala (sbt)
php                  composer.lock
python               setup.py, requirements.txt [2], Pipfile.lock, poetry.lock, bdist_wheel, .whl
go                   binary, go.mod, go.sum, Gopkg.lock
ruby                 Gemfile.lock, gemspec
rust                 Cargo.toml, Cargo.lock
.Net                 .csproj, packages.config, project.assets.json, packages.lock.json
docker / oci image   All supported languages excluding OS packages

NOTE

The docker image for dep-scan currently doesn't bundle suitable java and maven commands required for bom generation. To work around this limitation, you can:

  1. Use python-based execution from a VM containing the correct versions for java, maven and gradle.
  2. Generate the bom file by invoking cdxgen command locally and subsequently passing this to dep-scan via the --bom argument.

Integration with CI environments

Integration with Azure DevOps

Refer to this example yaml configuration for integrating dep-scan with Azure Pipelines. The build step performs the scan and displays the report inline.

Integration with GitHub Actions

This tool can be used with GitHub Actions using this action.

This repo self-tests itself with both sast-scan and dep-scan! Check the GitHub workflow file of this repo.

- name: Self dep-scan
  uses: AppThreat/dep-scan@master
  env:
    VDB_HOME: ${{ github.workspace }}/db
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

Customisation through environment variables

The following environment variables can be used to customise the behaviour.

  • VDB_HOME - Directory to use for caching database. For docker based execution, this directory should get mounted as a volume from the host
  • NVD_START_YEAR - Default: 2018. Supports upto 2002
  • GITHUB_PAGE_COUNT - Default: 2. Supports upto 20

GitHub Security Advisory

To download security advisories from GitHub, a personal access token with the following scope is necessary.

  • read:packages
export GITHUB_TOKEN="<PAT token>"

Suggest mode

Fix version for each vulnerability is retrieved from the sources. Sometimes, there might be known vulnerabilities in the reported fix version. E.g., the fix versions suggested for jackson-databind might themselves contain known vulnerabilities.

By passing an argument --suggest it is possible to force depscan to recheck the fix suggestions. This way the suggestion becomes more optimal for a given package group.

In that case the new suggested version, 2.9.10.5, is an optimal fix version. Please note that the optimal fix version may not be the appropriate version for your application based on compatibility.

Package Risk audit

The --risk-audit argument enables package risk audit. Currently, only npm and pypi packages are supported in this mode. A number of risk factors are identified and assigned weights to compute a final risk score. Packages that exceed a maximum risk score (config.pkg_max_risk_score) are presented in a table.

Use --private-ns to specify the private package namespace that should be checked for dependency confusion type issues where a private package is available on public npm/pypi registry.

For example, to check that private packages with the namespaces @appthreat and @shiftleft have not accidentally been made public, use the argument below.

--private-ns appthreat,shiftleft
Risk category (default weight): Reason

pkg_private_on_public_registry (4): Private package is available on a public registry
pkg_min_versions (2): Packages with less than 3 versions represent an extreme where they could be either super stable or quite recent. Special heuristics are applied to ignore older stable packages
mod_create_min_seconds (1): Less than 12 hours difference between modified and creation time. This indicates that the upload had a defect that had to be rectified immediately. Sometimes, such a rapid update could also be malicious
latest_now_min_seconds (0.5): Less than 12 hours difference between the latest version and the current time. Depending on the package, such a latest version may or may not be desirable
latest_now_max_seconds (0.5): Package versions that are over 6 years old are in use. Such packages might have vulnerable dependencies that are known or yet to be found
pkg_min_maintainers (2): Package has less than 2 maintainers. Many opensource projects have only 1 or 2 maintainers, so special heuristics are used to ignore older stable packages
pkg_min_users (0.25): Package has less than 2 npm users
pkg_install_scripts (2): Package runs a custom pre or post installation script. This is often malicious and a downside of npm
pkg_node_version (0.5): Package supports an outdated version of node such as 0.8, 0.10, 4 or 6.x. Such projects might have prototype pollution or closure related vulnerabilities
pkg_scope (4 or 0.5): Packages that are used directly in the application (required scope) get a score with a weight of 4. Optional packages get a score of 0.25
deprecated (1): Latest version is deprecated

Refer to pkg_query.py::get_category_score method for the risk formula.

Automatic adjustment

A parameter called created_now_quarantine_seconds is used to identify packages that are safely past the quarantine period (1 year). Certain risks such as pkg_min_versions and pkg_min_maintainers are suppressed for packages past the quarantine period. This adjustment helps reduce noise since it is unlikely that a malicious package can exist in a registry unnoticed for over a year.

Configuring weights

All parameters can be customized by using environment variables. For example:

export PKG_MIN_VERSIONS=4 sets the minimum versions threshold to 4.

License scan

dep-scan can scan the dependencies for any license limitations and report them directly on the console log. To enable license scanning set the environment variable FETCH_LICENSE to true.

export FETCH_LICENSE=true

The license data is sourced from choosealicense.com and is quite limited. If the license of a given package cannot be reliably matched against this list, it will be silently ignored to reduce noise. This behaviour could change in the future once the detection logic improves.

Alternatives

Dependency Check is considered to be the industry standard for open-source dependency scanning. After personally using this great product for a number of years, I decided to write my own from scratch, partly as a dedication to this project. By using a streaming database based on msgpack and using json schema, dep-scan is more performant than Dependency Check in CI environments. Plus, with support for the GitHub advisory source and grafeas report export and submission, dep-scan is on track to become a next-generation dependency audit tool.

There are a number of other tools that piggyback on the Sonatype ossindex API server. For some reason, I always felt uncomfortable letting a commercial company track the usage of various projects across the world. dep-scan is therefore 100% private and guarantees never to perform any tracking!



Http-Desync-Guardian - Analyze HTTP Requests To Minimize Risks Of HTTP Desync Attacks (Precursor For HTTP Request Smuggling/Splitting)

19 January 2022 at 20:30
By: Zion3R


Overview

HTTP/1.1 went through a long evolution from 1991 to 2014.

This means there is a variety of servers and clients, which might have different views on request boundaries, creating opportunities for desynchronization attacks (a.k.a. HTTP Desync).

It might seem simple to follow the latest RFC recommendations. However, for large-scale systems that have been around for a while, doing so may come with an unacceptable availability impact.

http_desync_guardian library is designed to analyze HTTP requests to prevent HTTP Desync attacks, balancing security and availability. It classifies requests into different categories and provides recommendations on how each tier should be handled.

It can be used either on raw HTTP request headers or on requests already parsed by an HTTP engine. Consumers may configure logging and metrics collection. Logging is rate limited and all user data is obfuscated.

If you think you might have found a security impacting issue, please follow our Security Notification Process.


Priorities

  • Uniformity across services is key. This means request classification, logging, and metrics must happen under the hood and with minimally available settings (e.g., such as log file destination).
  • Focus on reviewability. The test suite must require no knowledge about the library/programming languages but only about HTTP protocol. So it's easy to review, contribute, and re-use.
  • Security is efficient when it's easy for users. Our goal is to make integration of the library as simple as possible.
  • Ultralight. The overhead must be minimal and impose no tangible tax on request handling (see benchmarks).

Supported HTTP versions

The main focus of this library is HTTP/1.1. See tests for all covered cases. Predecessors of HTTP/1.1 don't support connection re-use, which limits opportunities for HTTP Desync; however, some proxies may upgrade such requests to HTTP/1.1 and re-use backend connections, which may make it possible to craft malicious HTTP/1.0 requests. That's why they are analyzed using the same criteria as HTTP/1.1. Other protocol versions have the following exceptions:

  • HTTP/0.9 requests are never considered Compliant, but are classified as Acceptable. If any of Content-Length/Transfer-Encoding is present then it's Ambiguous.
  • HTTP/1.0 - the presence of Transfer-Encoding makes a request Ambiguous.
  • HTTP/2+ is out of scope. But if your proxy downgrades HTTP/2 to HTTP/1.1, make sure the outgoing request is analyzed.

See documentation to learn more.
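
For example, under the HTTP/1.0 rule above, a request like the following would be classified as Ambiguous because it carries a Transfer-Encoding header:

GET / HTTP/1.0
Host: example.com
Transfer-Encoding: chunked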

Usage from C

This library is designed to be primarily used from HTTP engines written in C/C++.

  1. Install cbindgen: cargo install --force cbindgen
  2. Generate the header file:
    • Run cbindgen --output http_desync_guardian.h --lang c for C.
    • Run cbindgen --output http_desync_guardian.h --lang c++ for C++.
  3. Run cargo build --release. The binaries are in ./target/release/libhttp_desync_guardian.* files.

Learn more: generic and Nginx examples.

#include "http_desync_guardian.h"

/*
* http_engine_request_t - already parsed by the HTTP engine
*/
static int check_request(http_engine_request_t *req) {
http_desync_guardian_request_t guardian_request = construct_http_desync_guardian_from(req);
http_desync_guardian_verdict_t verdict = {0};

http_desync_guardian_analyze_request(&guardian_request, &verdict);

switch (verdict.tier) {
case REQUEST_SAFETY_TIER_COMPLIANT:
// The request is good. green light
break;
case REQUEST_SAFETY_TIER_ACCEPTABLE:
// Reject, if mode == STRICTEST
// Otherwise, OK
break;
case REQUEST_SAFETY_TIER_AMBIGUOUS:
// The request is ambiguous.
// Reject, if mode == STRICTEST
// Otherwise send it, but don't reuse both FE/BE connections.
break;
case REQUEST_SAFETY_TIER_SEVERE:
// Send 400 and close the FE connection.
break;
default:
// unreachable code
abort();
}
}

Usage from Rust

See benchmarks as an example of usage from Rust.

Security issue notifications

If you discover a potential security issue in http_desync_guardian, we ask that you notify AWS Security via our vulnerability reporting page. Please do not create a public GitHub issue.

Security

See CONTRIBUTING for more information.



Pip-Audit - Audits Python Environments And Dependency Trees For Known Vulnerabilities

19 January 2022 at 11:30
By: Zion3R


pip-audit is a tool for scanning Python environments for packages with known vulnerabilities. It uses the Python Packaging Advisory Database (https://github.com/pypa/advisory-db) via the PyPI JSON API as a source of vulnerability reports.

This project is developed by Trail of Bits with support from Google. This is not an official Google product.


Features

  • Support for auditing local environments and requirements-style files
  • Support for multiple vulnerability services (PyPI, OSV)
  • Support for emitting SBOMs in CycloneDX XML or JSON
  • Human and machine-readable output formats (columnar, JSON)
  • Seamlessly reuses your existing local pip caches

Installation

pip-audit requires Python 3.6 or newer, and can be installed directly via pip:

python -m pip install pip-audit

Third-party packages

pip-audit can also be installed via conda:

conda install -c conda-forge pip-audit

Third-party packages are not directly supported by this project. Please consult your package manager's documentation for more detailed installation guidance.

Usage

You can run pip-audit as a standalone program, or via python -m:

pip-audit --help
python -m pip_audit --help
usage: pip-audit [-h] [-V] [-l] [-r REQUIREMENTS] [-f FORMAT] [-s SERVICE]
[-d] [-S] [--desc [{on,off,auto}]] [--cache-dir CACHE_DIR]
[--progress-spinner {on,off}] [--timeout TIMEOUT]
[--path PATHS] [-v]

audit the Python environment for dependencies with known vulnerabilities

optional arguments:
-h, --help show this help message and exit
-V, --version show program's version number and exit
-l, --local show only results for dependencies in the local
environment (default: False)
-r REQUIREMENTS, --requirement REQUIREMENTS
audit the given requirements file; this option can be
used multiple times (default: None)
-f FORMAT, --format FORMAT
the format to emit audit results in (choices: columns,
json, cyclonedx-json, cyclonedx-xml) (default:
columns)
-s SERVICE, --vulnerability-service SERVICE
the vulnerability service to audit dependencies
against (choices: osv, pypi) (default: pypi)
-d, --dry-run collect all dependencies but do not perform the
auditing step (default: False)
-S, --strict fail the entire audit if dependency collection fails
on any dependency (default: False)
--desc [{on,off,auto}]
include a description for each vulnerability; `auto`
defaults to `on` for the `json` format. This flag has
no effect on the `cyclonedx-json` or `cyclonedx-xml`
formats. (default: auto)
--cache-dir CACHE_DIR
the directory to use as an HTTP cache for PyPI; uses
the `pip` HTTP cache by default (default: None)
--progress-spinner {on,off}
display a progress spinner (default: on)
--timeout TIMEOUT set the socket timeout (default: 15)
--path PATHS restrict to the specified installation path for
auditing packages; this option can be used multiple
times (default: [])
-v, --verbose give more output; this setting overrides the
`PIP_AUDIT_LOGLEVEL` variable and is equivalent to
setting it to `debug` (default: False)

Exit codes

On completion, pip-audit will exit with a code indicating its status.

The current codes are:

Examples

Audit dependencies for the current Python environment:

$ pip-audit
No known vulnerabilities found

Audit dependencies for a given requirements file:

$ pip-audit -r ./requirements.txt
No known vulnerabilities found

Audit dependencies for the current Python environment excluding system packages:

$ pip-audit -r ./requirements.txt -l
No known vulnerabilities found

Audit dependencies when there are vulnerabilities present:

$ pip-audit
Found 2 known vulnerabilities in 1 packages
Name Version ID Fix Versions
---- ------- -------------- ------------
Flask 0.5 PYSEC-2019-179 1.0
Flask 0.5 PYSEC-2018-66 0.12.3

Audit dependencies including descriptions:

$ pip-audit --desc
Found 2 known vulnerabilities in 1 packages
Name Version ID Fix Versions Description
---- ------- -------------- ------------ -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------
Flask 0.5 PYSEC-2019-179 1.0 The Pallets Project Flask before 1.0 is affected by: unexpected memory usage. The impact is: denial of service. The attack vector is: crafted encoded JSON data. The fixed version is: 1. NOTE: this may overlap CVE-2018-1000656.
Flask 0.5 PYSEC-2018-66 0.12.3 The Pallets Project flask version Before 0.12.3 contains a CWE-20: Improper Input Validation vulnerability in flask that can result in Large amount of memory usage possibly leading to denial of service. This attack appear to be exploitable via Attacker provides JSON data in incorrect encoding. This vulnerability appears to have been fixed in 0.12.3. NOTE: this may overlap CVE-2019-1010083.

Audit dependencies in JSON format:

$ pip-audit -f json | jq
Found 2 known vulnerabilities in 1 packages
[
{
"name": "flask",
"version": "0.5",
"vulns": [
{
"id": "PYSEC-2019-179",
"fix_versions": [
"1.0"
],
"description": "The Pallets Project Flask before 1.0 is affected by: unexpected memory usage. The impact is: denial of service. The attack vector is: crafted encoded JSON data. The fixed version is: 1. NOTE: this may overlap CVE-2018-1000656."
},
{
"id": "PYSEC-2018-66",
"fix_versions": [
"0.12.3"
],
"description": "The Pallets Project flask version Before 0.12.3 contains a CWE-20: Improper Input Validation vulnerability in flask that can result in Large amount of memory usage possibly leading to denial of service. This attack appear to be exploitable via Attacker provides JSON data in incorrect encoding. This vu lnerability appears to have been fixed in 0.12.3. NOTE: this may overlap CVE-2019-1010083."
}
]
},
{
"name": "jinja2",
"version": "3.0.2",
"vulns": []
},
{
"name": "pip",
"version": "21.3.1",
"vulns": []
},
{
"name": "setuptools",
"version": "57.4.0",
"vulns": []
},
{
"name": "werkzeug",
"version": "2.0.2",
"vulns": []
},
{
"name": "markupsafe",
"version": "2.0.1",
"vulns": []
}
]
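Because the JSON format is machine-readable, it slots naturally into CI. The following is a minimal Python sketch; the gating logic and exit convention are ours, not pip-audit's. Note that, as the jq example above suggests, the human-readable summary line goes to stderr while the JSON document goes to stdout.

import json
import subprocess

def audit() -> int:
    # Run pip-audit with JSON output; the JSON document is read from stdout.
    proc = subprocess.run(
        ["pip-audit", "-f", "json"],
        capture_output=True,
        text=True,
    )
    results = json.loads(proc.stdout)
    vulnerable = [pkg for pkg in results if pkg["vulns"]]
    for pkg in vulnerable:
        ids = ", ".join(v["id"] for v in pkg["vulns"])
        print(f"{pkg['name']} {pkg['version']}: {ids}")
    # Exit non-zero when anything was found, so a CI job fails loudly.
    return 1 if vulnerable else 0

if __name__ == "__main__":
    raise SystemExit(audit())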

Security Model

This section exists to describe the security assumptions you can and cannot make when using pip-audit.

TL;DR: If you wouldn't pip install it, you should not pip audit it.

pip-audit is a tool for auditing Python environments for packages with known vulnerabilities. A "known vulnerability" is a publicly reported flaw in a package that, if uncorrected, might allow a malicious actor to perform unintended actions.

pip-audit can protect you against known vulnerabilities by telling you when you have them, and how you should upgrade them. For example, if you have somepackage==1.2.3 in your environment, pip-audit can tell you that it needs to be upgraded to 1.2.4.

You can assume that pip-audit will make a best effort to fully resolve all of your Python dependencies and either fully audit each or explicitly state which ones it has skipped, as well as why it has skipped them.

pip-audit is not a static code analyzer. It analyzes dependency trees, not code, and it cannot guarantee that arbitrary dependency resolutions occur statically. To understand why this is, refer to Dustin Ingram's excellent post on dependency resolution in Python.

As such: you must not assume that pip-audit will defend you against malicious packages. In particular, it is incorrect to treat pip-audit -r INPUT as a "more secure" variant of pip-audit. For all intents and purposes, pip-audit -r INPUT is functionally equivalent to pip install -r INPUT, with a small amount of non-security isolation to avoid conflicts with any of your local environments.

Licensing

pip-audit is licensed under the Apache 2.0 License.

pip-audit reuses and modifies examples from resolvelib, which is licensed under the ISC license.

Contributing

See the contributing docs for details.

Code of Conduct

Everyone interacting with this project is expected to follow the PSF Code of Conduct.



goCabrito - Super Organized And Flexible Script For Sending Phishing Campaigns

18 January 2022 at 20:30
By: Zion3R


Super organized and flexible script for sending phishing campaigns.

Features

  • Sends to a single email
  • Sends to lists of emails (text)
  • Sends to lists of emails with first and last names (CSV)
  • Supports attachments
  • Splits emails into groups
  • Delays sending between each group
  • Supports tags to be placed and replaced in the message's body
    • Add {{name}} tag into the HTML message to be replaced with the name (used with --to CSV).
    • Add {{track-click}} tag to a URL in the HTML message.
    • Add {{track-open}} tag into the HTML message.
    • Add {{num}} tag to be replaced with a random phone number.
  • Supports individual profiles for different campaigns to avoid mistakes and confusion.
  • Supports creating a database of sent emails, each with its own unique hash (useful with getCabrito).
  • Supports a dry run, so you can exercise your campaign against your profile without actually sending email before the launch.

Qs & As

Why not use goPhish?

goPhish is a great choice too, but I prefer flexibility and simplicity at the same time. I have used goPhish various times, but at some point I found it either overwhelming or inflexible.

Most of the time I don't need all those statistics; I just need a flexible way to prepare my phishing campaigns and send them. Each time I use goPhish, I have to go and check the documentation on how to add a website, forward specific requests, etc. So I created goCabrito and getCabrito.

getCabrito optionally generates a unique URL for each email for tracking:

  • Email Opening tracking: Tracking Pixel
  • Email Clicking tracking

It does this by generating a hash for each email, appending it to the end of the URL or image URL, and storing this information along with other details that are useful for getCabrito to import and serve. This feature is the only thing that connects the goCabrito and getCabrito scripts, so no panic!
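The tag mechanism itself is plain string substitution. As a rough illustration (a hypothetical Python rendering of the idea, not the tool's actual Ruby code), per-recipient rendering could look like this:

import hashlib
import random

def render_message(template: str, name: str, email: str, base_url: str) -> str:
    # Hypothetical per-recipient tag expansion in the spirit of goCabrito's
    # {{...}} tags; names and logic here are illustrative, not the tool's code.
    tracking_hash = hashlib.sha256(email.encode()).hexdigest()[:16]
    pixel = '<img src="%s/p/%s" width="1" height="1">' % (base_url, tracking_hash)
    phone = "".join(random.choice("0123456789") for _ in range(10))
    return (
        template.replace("{{name}}", name)
        .replace("{{track-click}}", tracking_hash)  # appended to a URL in the body
        .replace("{{track-open}}", pixel)           # invisible tracking pixel
        .replace("{{num}}", phone)                  # random phone number
    )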

What's with the "Cabrito" thing?

It's just the name of one of my favorite restaurants, and the name was chosen by one of my team.

Prerequisites

Install gems' dependencies

sudo apt-get install build-essential libsqlite3-dev

Install gems

gem install mail sqlite3

Usage

goCabrito.rb - A simple yet flexible email sender.

Help menu:
-s, --server HOST:PORT SMTP server and its port.
e.g. smtp.office365.com:587
-u, --user USER Username to authenticate.
e.g. [email protected]
-p, --pass PASS Password to authenticate
-f, --from EMAIL Sender's email (mostly the same as the user email)
e.g. [email protected]
-t, --to EMAIL|LIST|CSV The receiver's email or a file list of receivers.
e.g. [email protected] or targets.lst or targets.csv
The CSV is expected to be in fname,lname,email format without a header.
-c, --copy EMAIL|LIST|CSV The CC'ed receiver's email or a file list of receivers.
-b, --bcopy EMAIL|LIST|CSV The BCC'ed receiver's email or a file list of receivers.
-B, --body MSG|FILE The mail's body string or a file that contains the body (not attachments).
For click and message opening and other trackings:
Add {{track-click}} tag to URL in the HTML message.
eg: http://phisher.com/file.exe/{{track-click}}
Add {{track-open}} tag into the HTML message.
eg: <html><body><p>Hi</p>{{track-open}}</body></html>
Add {{name}} tag into the HTML message to be replaced with name (used with --to CSV).
eg: <html><body><p>Dear {{name}},</p></body></html>
Add {{num}} tag to be replaced with a random phone number.
-a, --attachments FILE1,FILE2 One or more files to be attached, separated by commas.
-S, --subject TITLE The mail subject/title.
--no-ssl Do NOT use SSL connect when connect to the server (default: false).
-g, --groups NUM Number of receivers to send mail to at once. (default all in one group)
-d, --delay NUM The delay, in seconds, to wait after sending each group.
-P, --profile FILE A JSON file that contains all of the above settings
-D, --db FILE Create a sqlite database file (contains emails & its tracking hashes) to be imported by 'getCabrito' server.
--dry Dry test, no actual email sending.
-h, --help Show this message.

Usage:
goCabrito.rb <OPTIONS>
Examples:
$goCabrito.rb -s smtp.office365.com:587 -u [email protected] -p [email protected] \
-f [email protected] -t targets1.csv -c targets2.lst -b targets3.lst \
-B msg.html -S "This's title" -a file1.docx,file2.xlsx -g 3 -d 10

$goCabrito.rb --profile prf.json

How you really use it?

  1. I create a directory for each customer.
  2. Under the customer's directory, I create a directory for each campaign. This subdirectory contains:
  • The profile
  • The To, CC & BCC lists in CSV format
  • The message body in HTML format
  3. I configure the profile and prepare my HTML.
  4. I execute the campaign profile in dry mode first (check the dry value in the profile file):
ruby goCabrito.rb -P CUSTOMER/3/camp3.json --dry
  5. I remove the --dry switch and make sure the dry value is false in the config file.
  6. I send to a test email.
  7. I send to the real lists.

Troubleshooting

SMTP authentication issues

Nowadays, many cloud-based email vendors block SMTP authentication by default (e.g. Office365, GSuite). This, of course, will cause an error. To solve it, here are some steps to help you enable SMTP authentication on different vendors.

Enable SMTP Auth Office 365

To enable SMTP Auth globally, use PowerShell.

  • SSL support for Linux/*nix (running pwsh as sudo is required)
$ sudo pwsh
  • Install PSWSMan
Install-Module -Name PSWSMan -Scope AllUsers
Install-WSMan
  • Install ExchangeOnline Module
Install-Module -Name ExchangeOnlineManagement
  • Load ExchangeOnline Module
Import-Module ExchangeOnlineManagement
  • Connect to Office365 exchange using the main admin user, it will prompt you to enter credentials.
Connect-ExchangeOnline -InlineCredential

The above command will prompt you to enter Office365 admin's credentials

  PowerShell credential request
Enter your credentials.
User: [email protected]
Password for user [email protected]: **********
  • Or use this to open a web browser to enter your credentials in case of 2FA.
Connect-ExchangeOnline -UserPrincipalName [email protected]ifsaudi.onmicrosoft.com 
  • Enable SMTP AUTH globally
Set-TransportConfig -SmtpClientAuthenticationDisabled $false
  • To enable SMTP Auth for a specific email
Set-CASMailbox -Identity [email protected] -SmtpClientAuthenticationDisabled $false
Get-CASMailbox -Identity [email protected] | Format-List SmtpClientAuthenticationDisabled
  • Confirm
Get-TransportConfig | Format-List SmtpClientAuthenticationDisabled

Then follow the following steps

  1. Go to the Azure portal (https://aad.portal.azure.com/) from the admin panel (https://admin.microsoft.com/)
  2. Select All Services
  3. Select Tenant Properties
  4. Click Manage Security defaults
  5. Select No under Enable Security defaults

Google GSuite

Contribution

  • By fixing bugs
  • By enhancing the code
  • By reporting issues
  • By requesting features
  • By spreading the script
  • By clicking the star :)


Driftwood - Private Key Usage Verification

18 January 2022 at 11:30
By: Zion3R


Driftwood is a tool that can enable you to lookup whether a private key is used for things like TLS or as a GitHub SSH key for a user.

Driftwood performs lookups with the computed public key, so the private key never leaves the machine where you run the tool. Additionally, it supports some basic password cracking for encrypted keys.
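The privacy property is easy to picture: the public half is derived locally, and only that derived key is used for lookups. Here is a minimal Python sketch of such a derivation, using the third-party cryptography package purely as an illustration (not Driftwood's Go internals):

from typing import Optional

from cryptography.hazmat.primitives import serialization

def openssh_public_key(private_pem: bytes, password: Optional[bytes] = None) -> str:
    # Derive the OpenSSH-format public key from a PEM private key. Only this
    # derived public half would ever need to be sent anywhere for a lookup.
    key = serialization.load_pem_private_key(private_pem, password=password)
    return key.public_key().public_bytes(
        encoding=serialization.Encoding.OpenSSH,
        format=serialization.PublicFormat.OpenSSH,
    ).decode()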


Installation

Three easy ways to get started.

Run with Docker

cat private.key | docker run --rm -i trufflesecurity/driftwood --pretty-json -

Run pre-built binary

Download the binary from the releases page and run it.

Build yourself

go install github.com/trufflesecurity/driftwood@latest

Usage

Minimal usage is

$ driftwood path/to/privatekey.pem

Run with --help to see more options.

Library Usage

Packages under pkg/ are libraries that can be used for external consumption. Packages under pkg/exp/ are considered to be experimental status and may have breaking changes.



reFlutter - Flutter Reverse Engineering Framework

17 January 2022 at 20:30
By: Zion3R


This framework helps with reverse engineering of Flutter apps, using a patched version of the Flutter library that is already compiled and ready for app repacking. The library's snapshot deserialization process has been modified to let you perform dynamic analysis in a convenient way.


Key features:

  • socket.cc is patched for traffic monitoring and interception;
  • dart.cc is modified to print classes, functions and some fields;
  • contains minor changes for successful compilation;
  • if you would like to implement your own patches, manual Flutter code changes are supported using a specially crafted Dockerfile.

Supported engines

  • Android: arm64, arm32;
  • iOS: arm64;
  • Release: Stable, Beta

Install

# Linux, Windows, MacOS
pip3 install reflutter

Usage

[email protected]:~$ reflutter main.apk

Please enter your Burp Suite IP: <input_ip>

SnapshotHash: 8ee4ef7a67df9845fba331734198a953
The resulting apk file: ./release.RE.apk
Please sign the apk file

Configure Burp Suite proxy server to listen on *:8083
Proxy Tab -> Options -> Proxy Listeners -> Edit -> Binding Tab

Then enable invisible proxying in Request Handling Tab
Support Invisible Proxying -> true

[email protected]:~$ reflutter main.ipa

Traffic interception

You need to specify the IP of your Burp Suite proxy server, located on the same network as the device with the Flutter application. Next, configure the proxy in Burp Suite -> Listener Proxy -> Options tab:

  • Add port: 8083
  • Bind to address: All interfaces
  • Request handling: Support invisible proxying = True

You don't need to install any certificates. On an Android device, you don't need root access either. reFlutter also allows you to bypass some Flutter certificate pinning implementations.

Usage on Android

The resulting apk must be aligned and signed. I use uber-apk-signer: java -jar uber-apk-signer.jar --allowResign -a release.RE.apk. To see which code is loaded through DartVM, you need to run the application on the device. reFlutter prints its output to logcat with the reflutter tag.

[email protected]:~$ adb logcat -e reflutter | sed 's/.*DartVM//' >> reflutter.txt
code output
Library:'package:anyapp/navigation/DeepLinkImpl.dart' Class: Navigation extends Object {  

String* DeepUrl = anyapp://evil.com/ ;

Function 'Navigation.': constructor. (dynamic, dynamic, dynamic, dynamic) => NavigationInteractor {

}

Function 'initDeepLinkHandle':. (dynamic) => Future<void>* {

}

Function '[email protected]':. (dynamic, dynamic, {dynamic navigator}) => void {

}

}

Library:'package:anyapp/auth/navigation/AuthAccount.dart' Class: AuthAccount extends Account {

PlainNotificationToken* _instance = sentinel;

Function 'getAuthToken':. (dynamic, dynamic, dynamic, dynamic) => Future<AccessToken*>* {

}

Function 'checkEmail':. (dynamic, dynamic) => Future<bool*>* {

}
Function 'validateRestoreCode':. (dynamic, dynamic, dynamic) => Future<bool*>* {

}

Function 'sendSmsRestorePassword':. (dynamic, dynamic) => Future<bool*>* {

}
}

Usage on iOS

Use the IPA file created after the execution of reflutter main.ipa command. To see which code is loaded through DartVM, you need to run the application on the device. reFlutter prints its output in console logs in XCode with the reflutter tag.

To Do

  • Display absolute code offset for functions;
  • Extract more strings and fields;
  • Add socket patch;
  • Extend engine support to Debug using Fork and Github Actions;
  • Improve detection of App.framework and libapp.so inside zip archive

Build Engine

The engines are built using reFlutter in GitHub Actions. To build the desired version, commits and snapshot hashes are taken from this table. The snapshot hash is extracted from storage.googleapis.com/flutter_infra_release/flutter/<hash>/android-arm64-release/linux-x64.zip


Custom Build

If you would like to implement your own patches, manual Flutter code change is supported using specially crafted Docker

sudo docker pull ptswarm/reflutter

# Linux, Windows
EXAMPLE BUILD ANDROID ARM64:
sudo docker run -e WAIT=300 -e x64=0 -e arm=0 -e HASH_PATCH=<Snapshot_Hash> -e COMMIT=<Engine_commit> --rm -iv${PWD}:/t ptswarm/reflutter

FLAGS:
-e x64=0 <disables building for the x64 architecture, use to reduce build time>
-e arm=0 <disables building for the arm architecture, use to reduce build time>
-e WAIT=300 <the amount of time in seconds you need to edit the source code>
-e HASH_PATCH=[Snapshot_Hash] <here you need to specify the snapshot hash that best matches the engine_commit line of the enginehash.csv table. It is used for proper patch search in reFlutter and for successful compilation>
-e COMMIT=[Engine_commit] <here you specify the commit for your engine version; take it from the enginehash.csv table or from the flutter/engine repo>


Inject-Assembly - Inject .NET Assemblies Into An Existing Process

17 January 2022 at 11:30
By: Zion3R

This tool is an alternative to traditional fork and run execution for Cobalt Strike. The loader can be injected into any process, including the current Beacon. Long-running assemblies will continue to run and send output back to the Beacon, similar to the behavior of execute-assembly.


There are two components of inject-assembly:

  1. BOF initializer: A small program responsible for injecting the assembly loader into a remote process with any arguments passed. It uses BeaconInjectProcess to perform the injection, meaning this behavior can be customized in a Malleable C2 profile or with process injection BOFs (as of version 4.5).

  2. PIC assembly loader: The bulk of the project. The loader will initialize the .NET runtime, load the provided assembly, and execute the assembly. The loader will create a new AppDomain in the target process so that the loaded assembly can be totally unloaded when execution is complete.

Communication between the remote process and Beacon occurs through a named pipe. The Aggressor script generates a pipe name and then passes it to the BOF initializer.

Notable Features

  • Patches Environment.Exit() to prevent the remote process from exiting.
  • .NET assembly header stomping (MZ bytes, e_lfanew, DOS Header, Rich Text, PE Header).
  • Random pipe name generation based on SourcePoint.
  • No blocking of the Beacon, even if the assembly is loaded into the current process.

Usage

Download and load the inject-assembly.cna Aggressor script into Cobalt Strike. You can then execute assemblies using the following command:

inject-assembly pid assembly [args...]

Specify 0 as the PID to execute in the current Beacon process.

It is recommended to use another tool, like FindObjects-BOF, to locate a process that already loads the .NET runtime, but this is not a requirement for inject-assembly to function.

Warnings

  • Currently only supports x64 remote processes.
  • There are several checks throughout the program to reduce the likelihood of crashing the remote process, but it could still happen.
  • The default Cobalt Strike process injection may get you caught. Consider a custom injection BOF or UDRL IAT hook.
  • Some assemblies rely on Environment.Exit() to finish executing. This will prevent the loader's cleanup phase from occurring, but you can still disconnect the named pipe using jobkill.
  • Uncomment lines 3 or 4 of scmain.c to enable error or verbose modes, respectively. These are disabled by default to reduce the shellcode size.

References

This project would not have been possible without the following projects:

Other features and inspiration were taken from the following resources:



Registry-Spy - Cross-platform Registry Browser For Raw Windows Registry Files

16 January 2022 at 20:30
By: Zion3R


Registry Spy is a free, open-source cross-platform Windows Registry viewer. It is a fast, modern, and versatile explorer for raw registry files.

Features include:

  • Fast, on-the-fly parsing means no upfront overhead
  • Open multiple hives at a time
  • Searching
  • Hex viewer
  • Modification timestamps
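Registry Spy itself is a GUI, but the underlying idea, parsing a raw hive file directly with no live Windows Registry involved, can be sketched in a few lines of Python. This example uses the third-party python-registry library purely as an illustration; Registry Spy's own parser may work differently, and the hive path is hypothetical:

from Registry import Registry  # third-party: pip install python-registry

def list_run_values(hive_path: str) -> None:
    # Open a raw SOFTWARE hive copied off a Windows system and walk one key.
    reg = Registry.Registry(hive_path)
    key = reg.open("Microsoft\\Windows\\CurrentVersion\\Run")
    print("Last modified:", key.timestamp())  # hives store modification times
    for value in key.values():
        print(value.name(), "=>", value.value())

list_run_values("SOFTWARE")  # hypothetical path to an exported hive file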

Requirements

  • Python 3.8+

Installation

Download the latest version from the releases page. Alternatively, use one of the following methods.

pip (recommended)

  1. pip install registryspy
  2. registryspy

Manual

  1. pip install -r requirements.txt
  2. python setup.py install
  3. registryspy

Standalone

  1. pip install -r requirements.txt
  2. python registryspy.py

Screenshots

Main Window

Find Dialog

Building

Dependencies:

  • PyInstaller 4.5+

Regular building: pyinstaller registryspy_install.spec

Creating a single file: pyinstaller registryspy_onefile.spec

License

Registry Spy

Copyright (C) 2021 Andy Smith

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see https://www.gnu.org/licenses/.



TokenUniverse - An Advanced Tool For Working With Access Tokens And Windows Security Policy

16 January 2022 at 11:30
By: Zion3R


Token Universe is an advanced tool that provides a wide range of possibilities to research Windows security mechanisms. It has a convenient interface for creating, viewing, and modifying access tokens, managing Local Security Authority and Security Account Manager's databases. It allows you to obtain and impersonate different security contexts, manage privileges, auditing settings, and so on.


My goal is to create a useful tool that implements almost everything I know about access tokens and Windows security model in general. And, also, to learn even more in the process. I believe that such a program can become a valuable instrument for researchers and those who want to learn more about the security subsystem. You are welcome to suggest any ideas and report bugs.

Screenshots


Feature list

Token-related functionality

Obtaining tokens

  • Open process/thread token
  • Open effective thread token (via direct impersonation)
  • Query session token
  • Log in user using explicit credentials
  • Log in user without credentials (S4U logon)
  • Duplicate tokens
  • Duplicate handles
  • Open linked token
  • Filter tokens
  • Create LowBox tokens
  • Create restricted tokens using the Safer API
  • Search for opened handles
  • Create anonymous token
  • Impersonate logon session token via pipes
  • Open clipboard token

Highly privileged operations

  • Add custom group membership while logging in users (requires Tcb Privilege)
  • Create custom token from scratch (requires Create Token Privilege)

Viewing

  • User
  • Statistics, source, flags
  • Extended flags (TOKEN_*)
  • Restricting SIDs
  • App container SID and number
  • Capabilities
  • Claims
  • Trust level
  • Logon session type (filtered/elevated/default)
  • Logon session information
  • Verbose terminal session information
  • Object and handle information (access, attributes, references)
  • Object creator (PID)
  • List of processes that have handles to this object
  • Creation and last modification times

Viewing & editing

  • Groups (enable/disable)
  • Privileges (enable/disable/remove)
  • Session
  • Integrity level (lower/raise)
  • UIAccess, mandatory policy
  • Virtualization (enable/disable & allow/disallow)
  • Owner and primary group
  • Originating logon session
  • Default DACL
  • Security descriptor
  • Audit overrides
  • Handle flags (inherit, protect)

Using

  • Impersonation
  • Safe impersonation
  • Direct impersonation
  • Assign primary token
  • Send handle to process
  • Create process with token
  • Share with another instance of TokenUniverse

Other actions

  • Compare tokens
  • Linking logon sessions to create UAC-friendly tokens
  • Logon session relation map

AppContainer profiles

  • Viewing AppContainer information
  • Listing AppContainer profiles per user
  • Listing child AppContainers
  • Creating/deleting AppContainers

Local Security Authority

  • Global audit settings
  • Per-user audit settings
  • Privilege assignment
  • Logon rights assignment
  • Quotas
  • Security
  • Enumerate accounts with privilege
  • Enumerate accounts with right

Security Account Manager

  • Domain information
  • Group information
  • Alias information
  • User information
  • Enumerate domain groups/aliases/users
  • Enumerate group members
  • Enumerate alias members
  • Manage group members
  • Manage alias members
  • Create groups
  • Create aliases
  • Create users
  • Sam object tree
  • Security

Process creation

Methods

  • CreateProcessAsUser
  • CreateProcessWithToken
  • WMI
  • RtlCreateUserProcess
  • RtlCreateUserProcessEx
  • NtCreateUserProcess
  • NtCreateProcessEx
  • CreateProcessWithLogon (credentials)
  • ShellExecuteEx (no token)
  • ShellExecute via IShellDispatch2 (no token)
  • CreateProcess via code injection (no token)
  • WdcRunTaskAsInteractiveUser (no token)

Parameters

  • Current directory
  • Desktop
  • Window show mode
  • Flags (inherit handles, create suspended, breakaway from job, ...)
  • Environmental variables
  • Parent process override
  • Mitigation policies
  • Child process policy
  • Job assignment
  • Run as invoker compatibility
  • AppContainer SID
  • Capabilities

Interface features

  • Immediate crash notification
  • Window station and desktop access checks
  • Debug messages reports

Process list

  • Hierarchy
  • Icons
  • Listing processes from Low integrity & AppContainer
  • Basic actions (resume/suspend, ...)
  • Customizable columns
  • Highlighting
  • Security
  • Handle table manipulation

Interface features

  • Restart as SYSTEM
  • Restart as SYSTEM+ (with Create Token Privilege)
  • Customizable columns
  • Graphical hash icons
  • Auto-detect inherited handles
  • Our own security editor with arbitrary SIDs and mandatory label modification
  • Customizable list of suggested SIDs
  • Detailed error status information
  • Detailed suggestions on errors

Misc. ideas

  • [?] Logon session creation (requires an authentication package?)
  • [?] Job-based token filtration (unsupported on Vista+)
  • [?] Privilege and audit category description from wsecedit.dll


Iptable_Evil - An Evil Bit Backdoor For Iptables

15 January 2022 at 20:30
By: Zion3R


iptable_evil is a very specific backdoor for iptables that allows all packets with the evil bit set, no matter the firewall rules.

The initial implementation is in iptable_evil.c, which adds a table to iptables and requires modifying a kernel header to insert a spot for it. The second implementation is a modified version of the ip_tables core module and its dependents to allow all Evil packets.

I have tested it on Linux kernel version 5.8.0-48, but this should be applicable to pretty much any kernel version with a full implementation of iptables.


Explanation of the Evil Bit

RFC 3514, published April 1st, 2003, defines the previously unused high-order bit of the IP fragment offset field as a security flag. To RFC-compliant systems, a 1 in that bit position indicates evil intent and will cause the packet to be blocked.

By default, this bit is turned off, but can be turned on in your software if you're assembling the entirety of your IP packet (as some hacking tools do), or in the Linux kernel using this patch (mirrored in this repository here).
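If you only need a handful of evil packets rather than a patched kernel, a tool like Scapy (also mentioned in the testing section below) can set the bit directly, since it exposes the reserved fragment flag under the name evil. A small sketch with a placeholder destination address:

from scapy.all import IP, TCP, send  # sending raw packets requires root

# Craft a TCP SYN with the RFC 3514 evil bit set; Scapy exposes the reserved
# high-order fragment-offset bit as the "evil" flag.
pkt = IP(dst="192.0.2.10", flags="evil") / TCP(dport=22, flags="S")
send(pkt)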

How does the backdoor work?

When a packet is received by the Linux kernel, it is processed by iptables and either sent to userspace, rejected, or modified based on the rules configured.

In particular, each iptables table uses the function ipt_do_table in ip_tables.c to decide whether to accept a given packet. I have modified that to automatically accept any packet with the evil bit set and skip all further processing.

I also attempted to add another table (iptable_evil.c) that would accept all evil packets and hand others off to the standard tables for processing, but I never figured out how to pass the packets to the next table and decided that the ipt_do_table backdoor was enough as a proof of concept.

Why did you do this?

I needed to do and write up a decently large project in computing security for one of my classes, and this seemed like a cool idea. This is probably more work than he was expecting for this, but ¯\_(ツ)_/¯.

Build

In-Tree Build

The evil table requires modification of kernel headers, so installing it requires running with a kernel produced through the full tree build.

  • Copy the contents of replace-existing to your kernel source tree, overwriting existing files.
  • Copy iptable_evil.c to linux-X.Y.Z/net/ipv4/netfilter
  • (optional) copy ip_tables.c to linux-X.Y.Z/net/ipv4/netfilter
  • Compile the kernel according to your distro's process (should produce a package)
  • Install the package file
  • Reboot into your new kernel
  • iptables -t filter -L
  • iptables -t evil -L (this will have confused output, but it will load the module)

Out-of-Tree Build

This is significantly easier and faster, but does not support the evil table and marks the kernel as "tainted". It should be possible to copy the ko files produced by this to another computer with the exact same kernel version, but I haven't tested it.

  • Run make
  • rmmod iptable_*
  • rmmod ip_tables
  • insmod ip_tables.ko
  • insmod iptable_filter.ko

Testing/Demo

To test this, you either need to rebuild your entire kernel with this patch or create your own packets using a tool like Scapy. I went with the first option because I was already building the kernel for the evil table.

In the first screenshot, I have blocked all traffic to this VM in iptables, but I am still able to connect over SSH because my packets have the evil bit set, as the second screenshot shows.





When connecting to the backdoored VM from a VM that does not set the evil bit, the SSH connection will eventually time out.



Packet captures of backdoor and non-backdoor SSH connections are in the docs/ folder in this repo for your perusal.

Kernel Version

  • 5.8.0-48-generic (Ubuntu 20.04)

Further Information and Resources



Narthex - Modular Personalized Dictionary Generator

15 January 2022 at 11:30
By: Zion3R


Narthex (Greek: Νάρθηξ, νάρθηκας) is a modular & minimal dictionary generator for Unix and Unix-like operating systems, written in C and Shell. It contains autonomous Unix-style programs for the creation of personalized dictionaries that can be used for password recovery & security assessment. The programs use Unix text streams to collaborate with each other, according to the Unix philosophy. It is licensed under the GPL v3.0. Currently under development!


I made a video to explain the usage of Narthex to non-Unix people: https://www.youtube.com/watch?v=U0UmCeLJSkk&t=938s (the timestamp is intentional)

The tools

  • nhance - A capitalization tool that appends the results to the bottom of the dictionary.
  • ninc - An incrementation tool that multiplies alphabetical lines and appends an n++ at the end of each line.
  • ncom - A combination tool that creates different combinations between the existing lines of the dictionary.
  • nrev - A reversing tool that appends the reversed versions of the lines at the end of the dictionary.
  • nleet - A leetifier. Replaces characters with leet equivalents, such as @ instead of a, or 3 instead of e.
  • nclean - A tool for removing passwords that don't meet your criteria (too short, no special characters, etc.)
  • napp - A tool that appends characters or words before or after the lines of the dictionary.
  • nwiz - A wizard that asks for the information and combines the tools together to create a final dictionary.

Screenshots

Install

In order to install, execute the following commands:

$ git clone https://github.com/MichaelDim02/Narthex.git && cd Narthex
$ sudo make install

Usage

For easy use, there is a wizard program, nwiz, that you can use. Just run

$ nwiz

And it will ask you for the target's information & generate the dictionary for you.

Advanced usage

If you want to make full use of Narthex, you can read the manpages of each tool. What they all do, really, is enhance small dictionaries. They are minimal and use Unix text streams to read and output data. For example, save a couple of keywords to a text file, words.txt, one per line, and run the following:

$ cat words.txt | nhance -f | ncom | nrev | nleet | ninc 1 30 > dictionary.txt

and you'll see the results for yourself.


