HardeningMeter is an open-source Python tool designed to comprehensively assess the security hardening of binaries and systems. It performs thorough checks of various binary exploitation protection mechanisms, including Stack Canary, RELRO, randomizations (ASLR, PIC, PIE), non-executable stack, Fortify, ASAN, and the NX bit. The tool is suitable for all types of binaries and provides accurate information about the hardening status of each binary, identifying those that deserve attention and those with robust security measures. HardeningMeter supports all Linux distributions and machine-readable output: the results can be printed to the screen in a table format or exported to a CSV file. (For more information, see the Documentation.md file.)
Execute Scanning Example
Scan the '/usr/bin' directory, the '/usr/sbin/newusers' file, the system and export the results to a csv file.
python3 HardeningMeter.py -d /usr/bin -f /usr/sbin/newusers -s -c
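HardeningMeter relies on readelf output for checks like these. As an illustrative sketch (not the tool's actual implementation), this is how a non-executable-stack verdict can be derived from readelf's program headers; the sample header text below is hypothetical:

```python
# Illustrative sketch only: deciding whether a binary has a non-executable
# stack from readelf -l program-header output.

def stack_is_non_executable(program_headers: str) -> bool:
    """Return True if the GNU_STACK segment lacks the execute ('E') flag."""
    lines = program_headers.splitlines()
    for i, line in enumerate(lines):
        if "GNU_STACK" in line:
            # readelf prints the segment flags (e.g. 'RW' or 'RWE') on the
            # continuation line that follows the GNU_STACK entry
            flags_line = lines[i + 1] if i + 1 < len(lines) else ""
            return "E" not in flags_line
    return False  # no GNU_STACK segment at all: treat as not hardened

# Hypothetical readelf -l excerpt for a hardened binary (flags are RW, no E)
sample = """\
  GNU_STACK      0x0000000000000000 0x0000000000000000 0x0000000000000000
                 0x0000000000000000 0x0000000000000000  RW     0x10
"""
print(stack_is_non_executable(sample))  # True
```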
Installation Requirements
Before installing HardeningMeter, make sure your machine has the following:
1. readelf and file commands
2. Python version 3
3. pip
4. tabulate
pip install tabulate
Install HardeningMeter
The very latest developments can be obtained via git.
Clone or download the project files (no compilation or installation is required).
Specify the files you want to scan; the argument can receive more than one file, separated by spaces.
-d --directory
Specify the directory you want to scan; the argument receives one directory and scans all ELF files in it recursively.
-e --external
Specify whether you want to add external checks (False by default).
-m --show_missing
Prints, in order, only those files that are missing security hardening mechanisms and need extra attention.
-s --system
Specify if you want to scan the system hardening methods.
-c --csv_format
Specify if you want to save the results to csv file (results are printed as a table to stdout by default).
Results
HardeningMeter's results are printed as a table and consist of 3 different states:
- (X) - This state indicates that the binary hardening mechanism is disabled.
- (V) - This state indicates that the binary hardening mechanism is enabled.
- (-) - This state indicates that the binary hardening mechanism is not relevant in this particular case.
Notes
When the default language on Linux is not English make sure to add "LC_ALL=C" before calling the script.
Porch Pirate started as a tool to quickly uncover Postman secrets, and has slowly begun to evolve into a multi-purpose reconnaissance / OSINT framework for Postman. While existing tools are great proofs of concept, they only attempt to identify very specific keywords as "secrets", and in very limited locations, with no consideration of recon beyond secrets. We realized we required capabilities that were "secret-agnostic", and had enough flexibility to capture false positives that still provided offensive value.
Porch Pirate enumerates and presents sensitive results (global secrets, unique headers, endpoints, query parameters, authorization, etc), from publicly accessible Postman entities, such as:
Workspaces
Collections
Requests
Users
Teams
Installation
python3 -m pip install porch-pirate
Using the client
The Porch Pirate client can be used to conduct nearly complete reviews of public Postman entities quickly and simply. There are intended workflows and particular keywords that can typically maximize results. These methodologies can be found on our blog: Plundering Postman with Porch Pirate.
Porch Pirate supports the following arguments, which can be applied to collections, workspaces, or users.
--globals
--collections
--requests
--urls
--dump
--raw
--curl
Simple Search
porch-pirate -s "coca-cola.com"
Get Workspace Globals
By default, Porch Pirate will display globals from all active and inactive environments if they are defined in the workspace. Provide a -w argument with the workspace ID (found by performing a simple search, or automatic search dump) to extract the workspace's globals, along with other information.
When an interesting result has been found with a simple search, we can provide the workspace ID to the -w argument with the --dump command to begin extracting information from the workspace and its collections.
Porch Pirate can be supplied a simple search term followed by the --globals argument. Porch Pirate will dump all relevant workspaces tied to the results discovered in the simple search, but only if there are globals defined. This is particularly useful for quickly identifying potentially interesting workspaces to dig into further.
porch-pirate -s "shopify" --globals
Automatic Search Dump
Porch Pirate can be supplied a simple search term followed by the --dump argument. Porch Pirate will dump all relevant workspaces and collections tied to the results discovered in the simple search. This is particularly useful for quickly sifting through potentially interesting results.
porch-pirate -s "coca-cola.com" --dump
Extract URLs from Workspace
A particularly useful way to use Porch Pirate is to extract all URLs from a workspace and export them to another tool for fuzzing.
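The CLI does this with the --urls argument. A minimal sketch of the underlying idea, using a hypothetical dump structure (the real Postman request schema is richer), might look like:

```python
import re

# Hypothetical request dumps; the real Postman request schema differs.
requests_dump = [
    {"name": "Get user", "url": "https://api.example.com/v1/users?id=1"},
    {"name": "Notes", "description": "See https://internal.example.com/docs"},
]

URL_RE = re.compile(r"https?://[^\s\"'<>]+")

def extract_urls(items):
    """Collect every URL appearing anywhere in the dumped request objects."""
    found = set()
    for item in items:
        for value in item.values():
            found.update(URL_RE.findall(str(value)))
    return sorted(found)

for u in extract_urls(requests_dump):
    print(u)
```

The resulting list can be written one URL per line and piped straight into a fuzzer or probe tool.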
p = porchpirate()
print(p.search('coca-cola.com'))
Get Workspace Collections
p = porchpirate()
print(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
Dumping a Workspace
import json

p = porchpirate()
collections = json.loads(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
for collection in collections['data']:
    requests = collection['requests']
    for r in requests:
        request_data = p.request(r['id'])
        print(request_data)
Grabbing a Workspace's Globals
p = porchpirate()
print(p.workspace_globals('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
Other Examples
Other library usage examples can be found in the examples directory.
CloudGrappler is a purpose-built tool designed for effortless querying of high-fidelity and single-event detections related to well-known threat actors in popular cloud environments such as AWS and Azure.
Notes
To optimize your utilization of CloudGrappler, we recommend using shorter time ranges when querying for results. This approach enhances efficiency and accelerates the retrieval of information, ensuring a more seamless experience with the tool.
Required Packages
pip3 install -r requirements.txt
Cloning cloudgrep locally
To clone the cloudgrep repository locally, run the clone.sh file. Alternatively, you can manually clone the repository into the same directory where CloudGrappler was cloned.
chmod +x clone.sh
./clone.sh
Input
This tool offers a CLI (command-line interface). Its usage is reviewed below:
Example 1 - Running the tool with default queries file
Define the scanning scope inside data_sources.json file based on your cloud infrastructure configuration. The following example showcases a structured data_sources.json file for both AWS and Azure environments:
Note
Modifying the source inside the queries.json file to a wildcard character (*) will scan the corresponding query across both AWS and Azure environments.
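A minimal sketch of how such a wildcard could be expanded (the field names here are assumptions for illustration, not necessarily CloudGrappler's actual schema):

```python
import json

# Hypothetical queries.json fragment; field names are illustrative.
raw = '''
{
  "queries": [
    {"query": "GetFileDownloadUrls.*secrets_", "source": "*", "severity": "MEDIUM"}
  ]
}
'''

PLATFORMS = ["aws", "azure"]

def expand_sources(doc):
    """Expand a '*' source into one query per supported platform."""
    expanded = []
    for q in doc["queries"]:
        sources = PLATFORMS if q["source"] == "*" else [q["source"]]
        for s in sources:
            expanded.append({**q, "source": s})
    return expanded

for q in expand_sources(json.loads(raw)):
    print(q["source"], q["query"])
```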
[+] Running GetFileDownloadUrls.*secrets_ for AWS
[+] Threat Actor: LUCR3
[+] Severity: MEDIUM
[+] Description: Review use of CloudShell. Permiso seldom witnesses use of CloudShell outside of known attackers. This however may be a part of your normal business use case.
Example 6 - Running the tool with your own queries file
python3 main.py -f new_file.json
Running in your Cloud and Authentication cloudgrep
AWS
Your system will need access to the S3 bucket. For example, if you are running on your laptop, you will need to configure the AWS CLI. If you are running on an EC2, an Instance Profile is likely the best choice.
This tool takes a scanning tool's output file and converts it to a tabular format (CSV, XLSX, or text table). It can process output from the following tools:
Nmap (XML);
Nessus (XML);
Nikto (XML);
Dirble (XML);
Testssl (JSON);
Fortify (FPR).
Rationale
This tool offers a human-readable, tabular format which you can tie to any observations you have drafted in your report. Why? Because then your reviewers can tell that you, the pentester, investigated all found open ports and looked at all scanning reports.
Dependencies
argparse (dev-python/argparse);
prettytable (dev-python/prettytable);
python (dev-lang/python);
xlsxwriter (dev-python/xlsxwriter).
Install
Using Pip:
pip install --user sr2t
Usage
You can use sr2t in two ways:
When installed as package, call the installed script: sr2t --help.
When Git cloned, call the package directly from the root of the Git repository: python -m src.sr2t --help
optional arguments:
  -h, --help            show this help message and exit
  --nmap-state NMAP_STATE
                        Specify the desired state to filter (e.g. open|filtered).
  --nmap-services       Specify to output a supplemental list of detected services.
  --no-nessus-autoclassify
                        Specify to not autoclassify Nessus results.
  --nessus-autoclassify-file NESSUS_AUTOCLASSIFY_FILE
                        Specify to override a custom Nessus autoclassify YAML file.
  --nessus-tls-file NESSUS_TLS_FILE
                        Specify to override a custom Nessus TLS findings YAML file.
  --nessus-x509-file NESSUS_X509_FILE
                        Specify to override a custom Nessus X.509 findings YAML file.
  --nessus-http-file NESSUS_HTTP_FILE
                        Specify to override a custom Nessus HTTP findings YAML file.
  --nessus-smb-file NESSUS_SMB_FILE
                        Specify to override a custom Nessus SMB findings YAML file.
  --nessus-rdp-file NESSUS_RDP_FILE
                        Specify to override a custom Nessus RDP findings YAML file.
  --nessus-ssh-file NESSUS_SSH_FILE
                        Specify to override a custom Nessus SSH findings YAML file.
  --nessus-min-severity NESSUS_MIN_SEVERITY
                        Specify the minimum severity to output (e.g. 1).
  --nessus-plugin-name-width NESSUS_PLUGIN_NAME_WIDTH
                        Specify the width of the plugin name column (e.g. 30).
  --nessus-sort-by NESSUS_SORT_BY
                        Specify to sort output by ip-address, port, plugin-id, plugin-name or severity.
  --nikto-description-width NIKTO_DESCRIPTION_WIDTH
                        Specify the width of the description column (e.g. 30).
  --fortify-details     Specify to include the Fortify abstracts, explanations and recommendations for each vulnerability.
  --annotation-width ANNOTATION_WIDTH
                        Specify the width of the annotation column (e.g. 30).
  -oC OUTPUT_CSV, --output-csv OUTPUT_CSV
                        Specify the output CSV basename (e.g. output).
  -oT OUTPUT_TXT, --output-txt OUTPUT_TXT
                        Specify the output TXT file (e.g. output.txt).
  -oX OUTPUT_XLSX, --output-xlsx OUTPUT_XLSX
                        Specify the output XLSX file (e.g. output.xlsx). Only for Nessus at the moment.
  -oA OUTPUT_ALL, --output-all OUTPUT_ALL
                        Specify the output basename to output to all formats (e.g. output).
specify at least one:
  --nessus NESSUS [NESSUS ...]
                        Specify (multiple) Nessus XML files.
  --nmap NMAP [NMAP ...]
                        Specify (multiple) Nmap XML files.
  --nikto NIKTO [NIKTO ...]
                        Specify (multiple) Nikto XML files.
  --dirble DIRBLE [DIRBLE ...]
                        Specify (multiple) Dirble XML files.
  --testssl TESTSSL [TESTSSL ...]
                        Specify (multiple) Testssl JSON files.
  --fortify FORTIFY [FORTIFY ...]
                        Specify (multiple) HP Fortify FPR files.
$ sr2t --nessus example/nessus.nessus
+---------------+-------+-----------+-----------------------------------------------------------------------------+----------+-------------+
| host          | port  | plugin id | plugin name                                                                 | severity | annotations |
+---------------+-------+-----------+-----------------------------------------------------------------------------+----------+-------------+
| 192.168.142.4 | 3389  | 42873     | SSL Medium Strength Cipher Suites Supported (SWEET32)                       | 2        | X           |
| 192.168.142.4 | 443   | 42873     | SSL Medium Strength Cipher Suites Supported (SWEET32)                       | 2        | X           |
| 192.168.142.4 | 3389  | 18405     | Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness | 2        | X           |
| 192.168.142.4 | 3389  | 30218     | Terminal Services Encryption Level is not FIPS-140 Compliant                | 1        | X           |
| 192.168.142.4 | 3389  | 57690     | Terminal Services Encryption Level is Medium or Low                         | 2        | X           |
| 192.168.142.4 | 3389  | 58453     | Terminal Services Doesn't Use Network Level Authentication (NLA) Only       | 2        | X           |
| 192.168.142.4 | 3389  | 45411     | SSL Certificate with Wrong Hostname                                         | 2        | X           |
| 192.168.142.4 | 443   | 45411     | SSL Certificate with Wrong Hostname                                         | 2        | X           |
| 192.168.142.4 | 3389  | 35291     | SSL Certificate Signed Using Weak Hashing Algorithm                         | 2        | X           |
| 192.168.142.4 | 3389  | 57582     | SSL Self-Signed Certificate                                                 | 2        | X           |
| 192.168.142.4 | 3389  | 51192     | SSL Certificate Cannot Be Trusted                                           | 2        | X           |
| 192.168.142.2 | 3389  | 42873     | SSL Medium Strength Cipher Suites Supported (SWEET32)                       | 2        | X           |
| 192.168.142.2 | 443   | 42873     | SSL Medium Strength Cipher Suites Supported (SWEET32)                       | 2        | X           |
| 192.168.142.2 | 3389  | 18405     | Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness | 2        | X           |
| 192.168.142.2 | 3389  | 30218     | Terminal Services Encryption Level is not FIPS-140 Compliant                | 1        | X           |
| 192.168.142.2 | 3389  | 57690     | Terminal Services Encryption Level is Medium or Low                         | 2        | X           |
| 192.168.142.2 | 3389  | 58453     | Terminal Services Doesn't Use Network Level Authentication (NLA) Only       | 2        | X           |
| 192.168.142.2 | 3389  | 45411     | SSL Certificate with Wrong Hostname                                         | 2        | X           |
| 192.168.142.2 | 443   | 45411     | SSL Certificate with Wrong Hostname                                         | 2        | X           |
| 192.168.142.2 | 3389  | 35291     | SSL Certificate Signed Using Weak Hashing Algorithm                         | 2        | X           |
| 192.168.142.2 | 3389  | 57582     | SSL Self-Signed Certificate                                                 | 2        | X           |
| 192.168.142.2 | 3389  | 51192     | SSL Certificate Cannot Be Trusted                                           | 2        | X           |
| 192.168.142.2 | 445   | 57608     | SMB Signing not required                                                    | 2        | X           |
+---------------+-------+-----------+-----------------------------------------------------------------------------+----------+-------------+
Or to output a CSV file:
$ sr2t --nessus example/nessus.nessus -oC example
$ cat example_nessus.csv
host,port,plugin id,plugin name,severity,annotations
192.168.142.4,3389,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
192.168.142.4,443,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
192.168.142.4,3389,18405,Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness,2,X
192.168.142.4,3389,30218,Terminal Services Encryption Level is not FIPS-140 Compliant,1,X
192.168.142.4,3389,57690,Terminal Services Encryption Level is Medium or Low,2,X
192.168.142.4,3389,58453,Terminal Services Doesn't Use Network Level Authentication (NLA) Only,2,X
192.168.142.4,3389,45411,SSL Certificate with Wrong Hostname,2,X
192.168.142.4,443,45411,SSL Certificate with Wrong Hostname,2,X
192.168.142.4,3389,35291,SSL Certificate Signed Using Weak Hashing Algorithm,2,X
192.168.142.4,3389,57582,SSL Self-Signed Certificate,2,X
192.168.142.4,3389,51192,SSL Certificate Cannot Be Trusted,2,X
192.168.142.2,3389,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
192.168.142.2,443,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
192.168.142.2,3389,18405,Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness,2,X
192.168.142.2,3389,30218,Terminal Services Encryption Level is not FIPS-140 Compliant,1,X
192.168.142.2,3389,57690,Terminal Services Encryption Level is Medium or Low,2,X
192.168.142.2,3389,58453,Terminal Services Doesn't Use Network Level Authentication (NLA) Only,2,X
192.168.142.2,3389,45411,SSL Certificate with Wrong Hostname,2,X
192.168.142.2,443,45411,SSL Certificate with Wrong Hostname,2,X
192.168.142.2,3389,35291,SSL Certificate Signed Using Weak Hashing Algorithm,2,X
192.168.142.2,3389,57582,SSL Self-Signed Certificate,2,X
192.168.142.2,3389,51192,SSL Certificate Cannot Be Trusted,2,X
192.168.142.2,445,57608,SMB Signing not required,2,X
Nmap
To produce an XLSX format:
$ sr2t --nmap example/nmap.xml -oX example.xlsx
To produce a text tabular format to stdout:
$ sr2t --nmap example/nmap.xml --nmap-services
Nmap TCP:
+-----------------+----+----+----+-----+-----+-----+-----+------+------+------+
|                 | 53 | 80 | 88 | 135 | 139 | 389 | 445 | 3389 | 5800 | 5900 |
+-----------------+----+----+----+-----+-----+-----+-----+------+------+------+
| 192.168.23.78   | X  |    | X  | X   | X   | X   | X   | X    |      |      |
| 192.168.27.243  |    |    |    | X   | X   |     | X   | X    | X    | X    |
| 192.168.99.164  |    |    |    | X   | X   |     | X   | X    | X    | X    |
| 192.168.228.211 |    | X  |    |     |     |     |     |      |      |      |
| 192.168.171.74  |    |    |    | X   | X   |     | X   | X    | X    | X    |
+-----------------+----+----+----+-----+-----+-----+-----+------+------+------+
Nmap Services:
+-----------------+------+-------+---------------+-------+
| ip address      | port | proto | service       | state |
+-----------------+------+-------+---------------+-------+
| 192.168.23.78   | 53   | tcp   | domain        | open  |
| 192.168.23.78   | 88   | tcp   | kerberos-sec  | open  |
| 192.168.23.78   | 135  | tcp   | msrpc         | open  |
| 192.168.23.78   | 139  | tcp   | netbios-ssn   | open  |
| 192.168.23.78   | 389  | tcp   | ldap          | open  |
| 192.168.23.78   | 445  | tcp   | microsoft-ds  | open  |
| 192.168.23.78   | 3389 | tcp   | ms-wbt-server | open  |
| 192.168.27.243  | 135  | tcp   | msrpc         | open  |
| 192.168.27.243  | 139  | tcp   | netbios-ssn   | open  |
| 192.168.27.243  | 445  | tcp   | microsoft-ds  | open  |
| 192.168.27.243  | 3389 | tcp   | ms-wbt-server | open  |
| 192.168.27.243  | 5800 | tcp   | vnc-http      | open  |
| 192.168.27.243  | 5900 | tcp   | vnc           | open  |
| 192.168.99.164  | 135  | tcp   | msrpc         | open  |
| 192.168.99.164  | 139  | tcp   | netbios-ssn   | open  |
| 192.168.99.164  | 445  | tcp   | microsoft-ds  | open  |
| 192.168.99.164  | 3389 | tcp   | ms-wbt-server | open  |
| 192.168.99.164  | 5800 | tcp   | vnc-http      | open  |
| 192.168.99.164  | 5900 | tcp   | vnc           | open  |
| 192.168.228.211 | 80   | tcp   | http          | open  |
| 192.168.171.74  | 135  | tcp   | msrpc         | open  |
| 192.168.171.74  | 139  | tcp   | netbios-ssn   | open  |
| 192.168.171.74  | 445  | tcp   | microsoft-ds  | open  |
| 192.168.171.74  | 3389 | tcp   | ms-wbt-server | open  |
| 192.168.171.74  | 5800 | tcp   | vnc-http      | open  |
| 192.168.171.74  | 5900 | tcp   | vnc           | open  |
+-----------------+------+-------+---------------+-------+
Or to output a CSV file:
$ sr2t --nmap example/nmap.xml -oC example
$ cat example_nmap_tcp.csv
ip address,53,80,88,135,139,389,445,3389,5800,5900
192.168.23.78,X,,X,X,X,X,X,X,,
192.168.27.243,,,,X,X,,X,X,X,X
192.168.99.164,,,,X,X,,X,X,X,X
192.168.228.211,,X,,,,,,,,
192.168.171.74,,,,X,X,,X,X,X,X
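The host-by-port grid above is built by collecting open ports per host from the Nmap XML. A minimal sketch of that idea (not sr2t's actual code), run against a toy nmap XML document:

```python
# Illustrative sketch: collect open TCP ports per host from nmap XML output.
import xml.etree.ElementTree as ET

SAMPLE = """
<nmaprun>
  <host><address addr="192.168.23.78"/>
    <ports>
      <port protocol="tcp" portid="53"><state state="open"/></port>
      <port protocol="tcp" portid="445"><state state="open"/></port>
    </ports>
  </host>
  <host><address addr="192.168.228.211"/>
    <ports>
      <port protocol="tcp" portid="80"><state state="open"/></port>
    </ports>
  </host>
</nmaprun>
"""

def open_ports_by_host(xml_text):
    """Map each host address to its list of open port numbers."""
    grid = {}
    for host in ET.fromstring(xml_text).iter("host"):
        addr = host.find("address").get("addr")
        grid[addr] = [
            int(p.get("portid"))
            for p in host.iter("port")
            if p.find("state").get("state") == "open"
        ]
    return grid

print(open_ports_by_host(SAMPLE))
# {'192.168.23.78': [53, 445], '192.168.228.211': [80]}
```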
$ sr2t --nikto example/nikto.xml
+----------------+-----------------+-------------+----------------------------------------------------------------------------------+-------------+
| target ip      | target hostname | target port | description                                                                      | annotations |
+----------------+-----------------+-------------+----------------------------------------------------------------------------------+-------------+
| 192.168.178.10 | 192.168.178.10  | 80          | The anti-clickjacking X-Frame-Options header is not present.                     | X           |
| 192.168.178.10 | 192.168.178.10  | 80          | The X-XSS-Protection header is not defined. This header can hint to the user     | X           |
|                |                 |             | agent to protect against some forms of XSS                                       |             |
| 192.168.178.10 | 192.168.178.10  | 80          | The X-Content-Type-Options header is not set. This could allow the user agent to | X           |
|                |                 |             | render the content of the site in a different fashion to the MIME type           |             |
+----------------+-----------------+-------------+----------------------------------------------------------------------------------+-------------+
Or to output a CSV file:
$ sr2t --nikto example/nikto.xml -oC example
$ cat example_nikto.csv
target ip,target hostname,target port,description,annotations
192.168.178.10,192.168.178.10,80,The anti-clickjacking X-Frame-Options header is not present.,X
192.168.178.10,192.168.178.10,80,"The X-XSS-Protection header is not defined. This header can hint to the user agent to protect against some forms of XSS",X
192.168.178.10,192.168.178.10,80,"The X-Content-Type-Options header is not set. This could allow the user agent to render the content of the site in a different fashion to the MIME type",X
$ sr2t --testssl example/testssl.json
+-----------------------------------+------+--------+---------+--------+------------+-----+---------+---------+----------+
| ip address                        | port | BREACH | No HSTS | No PFS | No TLSv1.3 | RC4 | TLSv1.0 | TLSv1.1 | Wildcard |
+-----------------------------------+------+--------+---------+--------+------------+-----+---------+---------+----------+
| rc4-md5.badssl.com/104.154.89.105 | 443  | X      | X       | X      | X          | X   | X       | X       | X        |
+-----------------------------------+------+--------+---------+--------+------------+-----+---------+---------+----------+
Or to output a CSV file:
$ sr2t --testssl example/testssl.json -oC example
$ cat example_testssl.csv
ip address,port,BREACH,No HSTS,No PFS,No TLSv1.3,RC4,TLSv1.0,TLSv1.1,Wildcard
rc4-md5.badssl.com/104.154.89.105,443,X,X,X,X,X,X,X,X
Exploitation and scanning tool specifically designed for Jenkins versions <= 2.441 and <= LTS 2.426.2. It leverages CVE-2024-23897 to assess and exploit vulnerabilities in Jenkins instances.
Usage
Ensure you have the necessary permissions to scan and exploit the target systems. Use this tool responsibly and ethically.
Parameters:
- -t or --target: Specify the target IP(s). Supports single IP, IP range, comma-separated list, or CIDR block.
- -i or --input-file: Path to an input file containing hosts in the format http://1.2.3.4:8080/ (one per line).
- -o or --output-file: Export results to a file (optional).
- -p or --port: Specify the port number. Default is 8080 (optional).
- -f or --file: Specify the file to read on the target system.
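The -t formats above expand into a flat host list before scanning. A sketch of one way to do that expansion (the tool's actual parsing may differ):

```python
# Illustrative sketch of expanding single IPs, ranges, comma lists and CIDR.
import ipaddress

def expand_targets(spec: str):
    """Expand a -t style target spec into a flat list of IP strings."""
    targets = []
    for part in spec.split(","):
        part = part.strip()
        if "/" in part:                       # CIDR block
            net = ipaddress.ip_network(part, strict=False)
            targets.extend(str(h) for h in net.hosts())
        elif "-" in part:                     # range like 10.0.0.1-10.0.0.3
            start, end = (ipaddress.ip_address(x) for x in part.split("-"))
            targets.extend(str(ipaddress.ip_address(i))
                           for i in range(int(start), int(end) + 1))
        else:                                 # single IP
            targets.append(part)
    return targets

print(expand_targets("10.0.0.5,10.0.0.1-10.0.0.3,192.168.0.0/30"))
# ['10.0.0.5', '10.0.0.1', '10.0.0.2', '10.0.0.3', '192.168.0.1', '192.168.0.2']
```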
Changelog
[27th January 2024] - Feature Request
Added scanning/exploiting via input file with hosts (-i INPUT_FILE).
Added export to file (-o OUTPUT_FILE).
[26th January 2024] - Initial Release
Initial release.
Contributing
Contributions are welcome. Please feel free to fork, modify, and make pull requests or report issues.
This tool is meant for educational and professional purposes only. Unauthorized scanning and exploiting of systems is illegal and unethical. Always ensure you have explicit permission to test and exploit any systems you target.
RepoReaper is a precision tool designed to automate the identification of exposed .git repositories across a list of domains and subdomains. By processing a user-provided text file with domain names, RepoReaper systematically checks each for publicly accessible .git files. This enables rapid assessment and protection against information leaks, making RepoReaper an essential resource for security teams and web developers.
Features
Automated scanning of domains and subdomains for exposed .git repositories.
Streamlines the detection of sensitive data exposures.
User-friendly command-line interface.
Ideal for security audits and bug bounty hunting.
Installation
Clone the repository and install the required dependencies:
RepoReaper will then proceed to scan the provided domains or subdomains for exposed .git repositories and report its findings.
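The core check RepoReaper automates can be sketched as follows: an exposed repository typically serves /.git/HEAD, whose body is either a branch ref or a bare commit hash. The classifier below operates on a response body supplied directly; the real tool fetches it over HTTP:

```python
# Illustrative sketch only: classify a /.git/HEAD response body.

def looks_like_exposed_git(head_body: str) -> bool:
    """A real HEAD file is 'ref: refs/heads/...' or a bare 40-char SHA-1."""
    body = head_body.strip()
    return body.startswith("ref:") or len(body) == 40

print(looks_like_exposed_git("ref: refs/heads/main"))        # True
print(looks_like_exposed_git("<html>404 Not Found</html>"))  # False
```

A classifier like this avoids false positives from servers that answer every path with an HTML error page.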
Disclaimer
This tool is intended for educational purposes and security research only. The user assumes all responsibility for any damages or misuse resulting from its use.
SwaggerSpy is a tool designed for automated Open Source Intelligence (OSINT) on SwaggerHub. This project aims to streamline the process of gathering intelligence from APIs documented on SwaggerHub, providing valuable insights for security researchers, developers, and IT professionals.
What is Swagger?
Swagger is an open-source framework that allows developers to design, build, document, and consume RESTful web services. It simplifies API development by providing a standard way to describe REST APIs using a JSON or YAML format. Swagger enables developers to create interactive documentation for their APIs, making it easier for both developers and non-developers to understand and use the API.
About SwaggerHub
SwaggerHub is a collaborative platform for designing, building, and managing APIs using the Swagger framework. It offers a centralized repository for API documentation, version control, and collaboration among team members. SwaggerHub simplifies the API development lifecycle by providing a unified platform for API design and testing.
Why OSINT on SwaggerHub?
Performing OSINT on SwaggerHub is crucial because developers, in their pursuit of efficient API documentation and sharing, may inadvertently expose sensitive information. Here are key reasons why OSINT on SwaggerHub is valuable:
Developer Oversights: Developers might unintentionally include secrets, credentials, or sensitive information in API documentation on SwaggerHub. These oversights can lead to security vulnerabilities and unauthorized access if not identified and addressed promptly.
Security Best Practices: OSINT on SwaggerHub helps enforce security best practices. Identifying and rectifying potential security issues early in the development lifecycle is essential to ensure the confidentiality and integrity of APIs.
Preventing Data Leaks: By systematically scanning SwaggerHub for sensitive information, organizations can proactively prevent data leaks. This is especially crucial in today's interconnected digital landscape where APIs play a vital role in data exchange between services.
Risk Mitigation: Understanding that developers might forget to remove or obfuscate sensitive details in API documentation underscores the importance of continuous OSINT on SwaggerHub. This proactive approach mitigates the risk of unintentional exposure of critical information.
Compliance and Privacy: Many industries have stringent compliance requirements regarding the protection of sensitive data. OSINT on SwaggerHub ensures that APIs adhere to these regulations, promoting a culture of compliance and safeguarding user privacy.
Educational Opportunities: Identifying oversights in SwaggerHub documentation provides educational opportunities for developers. It encourages a security-conscious mindset, fostering a culture of awareness and responsible information handling.
By recognizing that developers can inadvertently expose secrets, OSINT on SwaggerHub becomes an integral part of the overall security strategy, safeguarding against potential threats and promoting a secure API ecosystem.
How SwaggerSpy Works
SwaggerSpy obtains information from SwaggerHub and utilizes regular expressions to inspect API documentation for sensitive information, such as secrets and credentials.
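As an illustration of this approach (the patterns and spec text below are examples, not SwaggerSpy's actual signature set):

```python
# Illustrative regex-based secret scan over API documentation text.
import re

SIGNATURES = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "bearer_token":   re.compile(r"Bearer\s+[A-Za-z0-9\-_\.]{20,}"),
}

# Hypothetical Swagger/OpenAPI fragment containing a leaked example key.
spec_text = '''
paths:
  /login:
    description: test with key AKIAABCDEFGHIJKLMNOP
'''

def scan(text):
    """Return (signature name, matched string) pairs found in the text."""
    hits = []
    for name, pattern in SIGNATURES.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits

print(scan(spec_text))
```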
Getting Started
To use SwaggerSpy, follow these steps:
Installation: Clone the SwaggerSpy repository and install the required dependencies.
git clone https://github.com/UndeadSec/SwaggerSpy.git
cd SwaggerSpy
pip install -r requirements.txt
Usage: Run SwaggerSpy with the target search terms (more accurate with domains).
python swaggerspy.py searchterm
Results: SwaggerSpy will generate a report containing OSINT findings, including information about the API, endpoints, and secrets.
Disclaimer
SwaggerSpy is intended for educational and research purposes only. Users are responsible for ensuring that their use of this tool complies with applicable laws and regulations.
Contribution
Contributions to SwaggerSpy are welcome! Feel free to submit issues, feature requests, or pull requests to help improve this tool.
About the Author
SwaggerSpy is developed and maintained by Alisson Moretto (UndeadSec)
I'm a passionate cyber threat intelligence pro who loves sharing insights and crafting cybersecurity tools.
TODO
Regular Expressions Enhancement
[ ] Review and improve existing regular expressions.
[ ] Ensure that regular expressions adhere to best practices.
[ ] Check for any potential optimizations in the regex patterns.
[ ] Test regular expressions with various input scenarios for accuracy.
[ ] Document any complex or non-trivial regex patterns for better understanding.
[ ] Explore opportunities to modularize or break down complex patterns.
[ ] Verify the regular expressions against the latest specifications or requirements.
[ ] Update documentation to reflect any changes made to the regular expressions.
License
SwaggerSpy is licensed under the MIT License. See the LICENSE file for details.
Thanks
Special thanks to @Liodeus for providing project inspiration through swaggerHole.
AzSubEnum is a specialized subdomain enumeration tool tailored for Azure services. This tool is designed to meticulously search and identify subdomains associated with various Azure services. Through a combination of techniques and queries, AzSubEnum delves into the Azure domain structure, systematically probing and collecting subdomains related to a diverse range of Azure services.
How it works?
AzSubEnum operates by leveraging DNS resolution techniques and systematic permutation methods to unveil subdomains associated with Azure services such as Azure App Services, Storage Accounts, Azure Databases (including MSSQL, Cosmos DB, and Redis), Key Vaults, CDN, Email, SharePoint, Azure Container Registry, and more. Its functionality extends to comprehensively scanning different Azure service domains to identify associated subdomains.
With this tool, users can conduct thorough subdomain enumeration within Azure environments, aiding security professionals, researchers, and administrators in gaining insights into the expansive landscape of Azure services and their corresponding subdomains.
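The permutation step described above can be sketched as follows: combine a base name with permutation words and well-known Azure service suffixes, then (in the real tool) resolve each candidate over DNS. Resolution is omitted here, and the suffix list is only a small sample:

```python
# Illustrative sketch of candidate generation; DNS resolution is omitted.
SUFFIXES = [
    "azurewebsites.net",        # App Services
    "blob.core.windows.net",    # Storage Accounts
    "database.windows.net",     # MSSQL
    "vault.azure.net",          # Key Vaults
]

def candidates(base, permutations=("", "dev", "prod")):
    """Combine base name + permutation words with each service suffix."""
    names = []
    for word in permutations:
        label = f"{base}{word}" if word else base
        names.extend(f"{label}.{suffix}" for suffix in SUFFIXES)
    return names

for name in candidates("contoso")[:4]:
    print(name)
```

In practice each generated name is then resolved; a successful lookup means the service name exists on that Azure platform.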
Why did I create this?
During my learning journey on Azure AD exploitation, I discovered that the Azure subdomain tool, Invoke-EnumerateAzureSubDomains from NetSPI, was unable to run on my Debian PowerShell. Consequently, I created a crude implementation of that tool in Python.
options:
  -h, --help            show this help message and exit
  -b BASE, --base BASE  Base name to use
  -v, --verbose         Show verbose output
  -t THREADS, --threads THREADS
                        Number of threads for concurrent execution
  -p PERMUTATIONS, --permutations PERMUTATIONS
                        File containing permutations
SqliSniper is a robust Python tool designed to detect time-based blind SQL injection in HTTP request headers. It enhances the security assessment process by rapidly scanning and identifying potential vulnerabilities using multi-threading, ensuring speed and efficiency. Unlike other scanners, SqliSniper is designed to eliminate false positives and to send alerts upon detection through its built-in Discord notification functionality.
options:
  -h, --help            show this help message and exit
  -u URL, --url URL     Single URL for the target
  -r URLS_FILE, --urls_file URLS_FILE
                        File containing a list of URLs
  -p, --pipeline        Read from pipeline
  --proxy PROXY         Proxy for intercepting requests (e.g., http://127.0.0.1:8080)
  --payload PAYLOAD     File containing malicious payloads (default is payloads.txt)
  --single-payload SINGLE_PAYLOAD
                        Single payload for testing
  --discord DISCORD     Discord Webhook URL
  --headers HEADERS     File containing headers (default is headers.txt)
  --threads THREADS     Number of threads
Running SqliSniper
Single Url Scan
The URL can be provided with the -u flag for a single-site scan.
./sqlisniper.py -u http://example.com
File Input
The -r flag allows SqliSniper to read a file containing multiple URLs for simultaneous scanning.
./sqlisniper.py -r url.txt
Piping URLs
SqliSniper can also work with pipeline input using the -p flag.
cat url.txt | ./sqlisniper.py -p
The pipeline feature facilitates seamless integration with other tools. For instance, you can utilize tools like subfinder and httpx, and then pipe their output to SqliSniper for mass scanning.
When using a custom payloads file, ensure that you substitute the sleep time with %__TIME_OUT__%. SqliSniper dynamically adjusts the sleep time iteratively to mitigate potential false positives. The payloads file should look like this.
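The placeholder substitution and the timing comparison behind time-based detection can be sketched like this (simplified; SqliSniper's real logic adjusts the sleep time iteratively across requests):

```python
# Illustrative sketch of %__TIME_OUT__% substitution and timing comparison.
PLACEHOLDER = "%__TIME_OUT__%"

def render_payload(template: str, sleep_seconds: int) -> str:
    """Substitute the placeholder with the concrete sleep duration."""
    return template.replace(PLACEHOLDER, str(sleep_seconds))

def looks_time_based(baseline: float, delayed: float, sleep_seconds: int) -> bool:
    """Flag as vulnerable if the injected sleep explains the extra delay."""
    return delayed - baseline >= sleep_seconds

payload = render_payload("' AND SLEEP(%__TIME_OUT__%)-- -", 5)
print(payload)                        # ' AND SLEEP(5)-- -
print(looks_time_based(0.4, 5.6, 5))  # True
```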
SqliSniper also offers Discord alert notifications, enhancing its functionality by providing real-time alerts through Discord webhooks. This feature proves invaluable during large-scale scans, allowing prompt notifications upon detection.
Note: It is crucial to consider that employing a higher number of threads might lead to potential false positives or overlooking valid issues. Due to the nature of time-based SQL injection, it is recommended to use fewer threads for more accurate detection.
SqliSniper is made in Python with lots of <3 by @Muhammad Danial.
BucketLoot is an automated S3-compatible bucket inspector that can help users extract assets, flag secret exposures, and even search for custom keywords as well as regular expressions in publicly exposed storage buckets by scanning files that store data in plain text.
The tool can scan for buckets deployed on Amazon Web Services (AWS), Google Cloud Storage (GCS), DigitalOcean Spaces and even custom domains/URLs which could be connected to these platforms. It returns the output in a JSON format, thus enabling users to parse it according to their liking or forward it to any other tool for further processing.
BucketLoot comes with a guest mode by default, which means a user doesn't need to specify any API tokens / access keys initially in order to run the scan. The tool will scrape a maximum of 1000 files that are returned in the XML response; if the storage bucket contains more than 1000 entries the user would like to scan, they can provide platform credentials to run a complete scan. If you'd like to know more about the tool, make sure to check out our blog.
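The XML-listing scrape can be sketched as follows, using a toy ListBucketResult document (real S3 responses carry an XML namespace and more fields, omitted here for brevity):

```python
# Illustrative sketch: read object keys and the truncation flag from a
# simplified S3-style bucket listing.
import xml.etree.ElementTree as ET

LISTING = """
<ListBucketResult>
  <Contents><Key>backups/db.sql</Key></Contents>
  <Contents><Key>assets/logo.png</Key></Contents>
  <IsTruncated>true</IsTruncated>
</ListBucketResult>
"""

root = ET.fromstring(LISTING)
keys = [c.findtext("Key") for c in root.iter("Contents")]
truncated = root.findtext("IsTruncated") == "true"

print(keys)       # ['backups/db.sql', 'assets/logo.png']
print(truncated)  # True: more entries exist beyond this page
```

When IsTruncated is true and credentials are available, an authenticated client can page through the remaining entries beyond the first batch.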
Features
Secret Scanning
Scans for 80+ unique regex signatures that can help uncover secret exposures, tagged with their severity, in the misconfigured storage bucket. Users can modify or add their own signatures in the regexes.json file. If you believe you have any cool signatures that might be helpful to others and could be flagged at scale, go ahead and make a PR!
Sensitive File Checks
Accidental sensitive file leakages are a big problem that affects the security posture of individuals and organisations. BucketLoot comes with a list of 80+ unique regex signatures in vulnFiles.json, which allows users to flag these sensitive files based on file names or extensions.
Dig Mode
Want to quickly check if any target website is using a misconfigured bucket that is leaking secrets or any other sensitive data? Dig Mode allows you to pass non-S3 targets and let the tool scrape URLs from response body for scanning.
Asset Extraction
Interested in stepping up your asset discovery game? BucketLoot extracts all the URLs/Subdomains and Domains that could be present in an exposed storage bucket, enabling you to have a chance of discovering hidden endpoints, thus giving you an edge over the other traditional recon tools.
Searching
The tool goes beyond just asset discovery and secret exposure scanning by letting users search for custom keywords and even Regular Expression queries which may help them find exactly what they are looking for.