Yesterday (20 October 2021) - KitPloit - PenTest & Hacking Tools

Metabadger - Prevent SSRF Attacks On AWS EC2 Via Automated Upgrades To The More Secure Instance Metadata Service V2 (IMDSv2)

20 October 2021 at 20:30
By: Zion3R


Prevent SSRF attacks on AWS EC2 via automated upgrades to the more secure Instance Metadata Service v2 (IMDSv2).


Metabadger

Purpose and functionality

  • Diagnose and evaluate your current usage of the AWS Instance Metadata Service along with understanding how the service works
  • Prepare you to upgrade to v2 of the Instance Metadata service to safeguard against v1 attack vectors
  • Give you the ability to specifically update your instances to only use IMDSv2
  • Give you the ability to disable the Instance Metadata service where you do not need it as a way to reduce attack surface

What is the AWS Instance Metadata Service?
  • The AWS metadata service essentially gives you access to all the things within an instance, including the instance role credential & session token
  • Known SSRF vulnerabilities that exploit and use this attack as a pivot into your environment
  • The famous attacks you have heard about, some of which involved this method of gaining access via a vulnerable web app with access to the instance metadata service
  • Attacker could take said credentials from metadata service and use them outside of that particular instance

IMDSv2 and why it should be used
  • Ensuring that instances are using V2 of the metadata service at all times by making it a requirement within it’s configuration
  • Enabling session tokens with a PUT request with a mandatory request header to the AWS metadata API, IMDSv1 does not check for this making it easier for attackers to exploit the service
  • X-Forwarded-For header is not allowed in IMDSv2 ensuring that no proxy based traffic is allowed to communicate with the metadata service

Problem Statement

Engineering teams may have a wide variety of compute infrastructure in AWS that they need to protect from vulnerabilities that leverage the metadata service. The metadata service is required to run on instances if any IAM role is used or if there is any user data the instance might need when it boots. Limiting the attack surface of your instances is crucial to preventing a pivot through your environment using information stolen from the service itself. Numerous famous attacks in the past have leveraged this particular service to exploit a role attached to the instance or to dump sensitive data accessible via the metadata service. Metabadger can help identify where and how you are using the instance metadata service while also giving you the ability to reduce unwanted attack surface and lower your overall risk posture while operating in EC2.


Disclaimer and Rollback

Using this tool may impact your AWS compute infrastructure, as not all services and applications may work either without the metadata service or on version 2. Take caution when deploying this in your production environment, and have a rollback plan in place in case something seems out of the ordinary. Metabadger comes built in with the ability to roll back to the default version 1 of the service using the -v1 flag; you can use this to quickly roll back your instances to the default. Ideally, you should run this tool and update your metadata version in non-production environments as a proving ground before applying it.


Guided Steps for Hardening

Step 1

Initially, we want to discover our overall usage of the metadata service in a particular AWS region. Metabadger will evaluate the current status of your usage in the region that your ~/.aws/credentials file points to, or that of the currently assumed role. You may also specify the --region flag when running the discover-metadata command if you would like to use a region other than the one currently configured. Once you have a good idea of which version your instances are running and whether the service is enabled or disabled, you will be able to make a much more defined action plan for hardening the service. Note that you can find the specific meaning of every metadata option that is set here.
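For intuition, the kind of summary discover-metadata reports can be computed from the MetadataOptions block that EC2's DescribeInstances returns for each instance. This is an illustrative sketch over that response shape, not metabadger's actual code:

```python
def imds_summary(instances):
    """Summarise IMDS posture from a list of instance dicts shaped like
    the EC2 DescribeInstances response (only MetadataOptions is read)."""
    total = len(instances)
    # HttpTokens == "required" means IMDSv2 is enforced on that instance
    v2 = sum(1 for i in instances
             if i["MetadataOptions"]["HttpTokens"] == "required")
    # HttpEndpoint == "disabled" means the metadata service is off entirely
    disabled = sum(1 for i in instances
                   if i["MetadataOptions"]["HttpEndpoint"] == "disabled")
    return {
        "instances": total,
        "v2_enforced": v2,
        "imds_disabled": disabled,
        "enforcement_pct": round(100 * v2 / total, 1) if total else 0.0,
    }
```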

Step 2

One of the areas that should be evaluated when making the switch to v2 of the service is the use of IAM roles. Metabadger lets you identify instances in a region that may already be using an IAM role. The discover-role-usage command will output a list of instances that have roles attached to them. If you have many instances using roles, you should take care when updating the service to v2 to ensure the overall functionality of your workloads is not impacted.

Step 3

Upon completing your initial discovery and evaluation, you can now create a staged approach to hardening your compute infrastructure to use either v2 of the metadata service or to disable it where it is not used. The harden-metadata command updates all instances in a particular region by default. You can also pass instance tags using the --tags flag, or an input file containing a csv of instances that you would like to apply a configuration to. Once you have made the appropriate updates to v2 and disabled the service where it is not used, you can re-evaluate using the items in Step 1 to confirm your environment is locked down. If you have certain instances that you don't want to update, you can exclude them via the --exclusion flag by tag or instance id.
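Under the hood, hardening an instance boils down to one EC2 ModifyInstanceMetadataOptions call per instance. Below is a sketch of the arguments involved (the instance id is a placeholder and this is not metabadger's code):

```python
def metadata_options(instance_id, require_v2, endpoint_enabled=True):
    """Build kwargs for EC2's ModifyInstanceMetadataOptions API call.

    require_v2=True maps to HttpTokens "required" (IMDSv2 enforced);
    False maps back to "optional", i.e. the -v1 rollback behaviour.
    """
    return {
        "InstanceId": instance_id,
        "HttpTokens": "required" if require_v2 else "optional",
        "HttpEndpoint": "enabled" if endpoint_enabled else "disabled",
    }

# e.g. boto3.client("ec2").modify_instance_metadata_options(
#          **metadata_options("i-0123456789abcdef0", require_v2=True))
```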


Requirements

Metabadger requires an IAM role or credentials with the following permissions:

ec2:ModifyInstanceAttribute
ec2:DescribeInstances
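As a minimal sketch, an IAM policy granting just these two actions might look like the following (Resource "*" is for brevity; scope it down in practice):

```python
import json

# Minimal identity policy covering the two permissions listed above.
POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:ModifyInstanceAttribute",
                "ec2:DescribeInstances",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(POLICY, indent=2))
```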

When making changes to the Instance Metadata service, you should be cautious and follow additional guidance from AWS on how to safely upgrade to version 2. Metabadger was designed to assist you with this process to further secure your compute infrastructure in AWS.

AWS Best Practice Guide on Updating to IMDSv2


Usage & Installation

Install via pip

pip3 install --user metabadger

Install via Github

$ git clone https://github.com/salesforce/metabadger
$ cd metabadger
$ pip install -e .

$ metabadger
Usage: metabadger [OPTIONS] COMMAND [ARGS]...

  Metabadger is an AWS Security Tool used for discovering and hardening the
  Instance Metadata service.

Options:
  --version  Show the version and exit.
  --help     Show this message and exit.

Commands:
  disable-metadata     Disable the IMDS service on EC2 instances
  discover-metadata    Discover summary of IMDS service usage within EC2
  discover-role-usage  Discover summary of IAM role usage for EC2
  harden-metadata      Harden the AWS instance metadata service from v1 to v2

Commands

discover-metadata

A summary of your overall instance metadata service usage including which version and an overall enforcement percentage. Using these numbers will help you understand the overall posture of how hardened your metadata usage is and where you're enforcing v2 vs v1.

Options:
  -a, --all-region    Provide a metadata summary for all available regions in the AWS account
  -j, --json          Get metadata summary in JSON format
  -r, --region TEXT   Specify which AWS region you will perform this command in
  -p, --profile TEXT  Specify the AWS IAM profile.

discover-role-usage

A summary of instances and the roles that they are using; this will give you a good idea of the caution you must take when making updates to the metadata service itself.

Options:
  -p, --profile TEXT  Specify the AWS IAM profile.
  -r, --region TEXT   Specify which AWS region you will perform this command in

harden-metadata

The ability to modify instances to use either metadata v1 or v2, and to get an understanding of how many instances would be modified by running in dry-run mode.

Options:
  -e, --exclusion        The exclusion flag will apply to everything besides what is specified, tags or instances
  -d, --dry-run          Dry run of hardening metadata changes
  -v1, --v1              Enforces v1 of the metadata service
  -i, --input-file PATH  Path of csv file of instances to harden IMDS for
  -t, --tags TEXT        A comma separated list of tags to apply the hardening setting to
  -r, --region TEXT      Specify which AWS region you will perform this command in
  -p, --profile TEXT     Specify the AWS IAM profile.

disable-metadata

Use this command to completely disable the metadata service on instances.

Options:
  -e, --exclusion        The exclusion flag will apply to everything besides what is specified, tags or instances
  -d, --dry-run          Dry run of disabling the metadata service
  -i, --input-file PATH  Path of csv file of instances to disable IMDS for
  -t, --tags TEXT        A comma separated list of tags to apply the hardening setting to
  -r, --region TEXT      Specify which AWS region you will perform this command in
  -p, --profile TEXT     Specify the AWS IAM profile.

Logging

All changes made by Metabadger will be logged to a file saved in the working directory called metabadger.log. The file will include the following for every action that the tool takes when it changes the metadata service:

  • The time and date stamp for when a change was made
  • Change that occured (disabled, hardened, or updated)
  • The instance ID where the change was made
  • Dry run information
  • A status on if the change was successful or not


Limelighter - A Tool For Generating Fake Code Signing Certificates Or Signing Real Ones

20 October 2021 at 11:30
By: Zion3R


A tool which creates spoofed code signing certificates and signs binaries and DLL files to help evade EDR products and avoid MSSP and SOC scrutiny. LimeLighter can also use valid code signing certificates to sign files. Limelighter can use a fully qualified domain name such as acme.com.


Contributing

LimeLighter is developed in Go.

Make sure that the following are installed on your OS

openssl
osslsigncode

The first step, as always, is to clone the repo. Before you compile LimeLighter you'll need to install the dependencies. To install them, run the following command:

go get github.com/fatih/color

Then build it

go build Limelighter.go

Usage
./LimeLighter -h       

[LimeLighter ASCII art banner]
@Tyl0us


[*] A Tool for Code Signing... Real and fake
Usage of ./LimeLighter:
  -Domain string
        Domain you want to create a fake code sign for
  -I string
        Unsigned file name to be signed
  -O string
        Signed file name
  -Password string
        Password for real certificate
  -Real string
        Path to a valid .pfx certificate file
  -Verify string
        Verifies a file's code sign certificate
  -debug
        Print debug statements

To sign a file with a fake certificate, use the Domain option to generate a spoofed code signing certificate.



To sign a file with a valid code signing certificate, use the Real and Password options.

To verify a signed file, use the Verify option.






LazyCSRF - A More Useful CSRF PoC Generator

19 October 2021 at 20:30
By: Zion3R


LazyCSRF is a more useful CSRF PoC generator that runs on Burp Suite.


Motivation

Burp Suite is an intercepting HTTP proxy, and it is the de facto tool for performing web application security testing. The feature of Burp Suite that I like the most is Generate CSRF PoC. However, it does not support JSON parameters. It also uses a <form>, so it cannot send PUT/DELETE requests. In addition, multibyte characters that display correctly in Burp itself are often garbled in the generated CSRF PoC. Those were the motivations for creating this extension.
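For contrast with a <form>-based PoC, a JSON or PUT PoC needs script. The hypothetical helper below builds the kind of fetch-based page this extension aims at (LazyCSRF itself emits XHR-based markup; the function and names here are illustrative only):

```python
def json_csrf_poc(url, json_body, method="PUT"):
    """Build an auto-firing JSON CSRF PoC page (illustrative sketch).

    Note: a cross-origin request with a JSON Content-Type triggers a CORS
    preflight, so PUT/DELETE PoCs only work against permissive policies,
    exactly as the Features list says.
    """
    return f"""<html><body>
<script>
fetch({url!r}, {{
  method: {method!r},
  credentials: "include",  // ride the victim's session cookies
  headers: {{"Content-Type": "application/json"}},
  body: {json_body!r}
}});
</script>
</body></html>"""
```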


Features
  • Generating CSRF PoC with Burp Suite Community Edition (of course, it also works in Professional Edition)
  • Support JSON parameter (like GraphQL Request)
  • Support PUT/DELETE (only work with CORS enabled with an unrestrictive policy)
  • Support displaying multibyte characters (like Japanese)

Difference in display of multibyte characters

The following image shows the difference in the display of multibyte characters between Burp's CSRF PoC generator and LazyCSRF. LazyCSRF generates the CSRF PoC without garbling multibyte characters that display fine in Burp itself.



Installation

Download the jar from GitHub Releases. In Burp Suite, go to the Extensions tab in the Extender tab, and add a new extension. Select the extension type Java, and specify the location of the jar.


How to Build

intellij

If you use IntelliJ IDEA, you can build it by following Build -> Build Artifacts -> LazyCSRF:jar -> Build.


Command line

You can build it with maven.

$ mvn install

Usage

You can generate a CSRF PoC by selecting Extensions->Generate JSON CSRF PoC with Ajax or Generate POST PoC with Form from the menu that opens by right-clicking on Burp Suite.



LICENSE

MIT License

Copyright (C) 2021 tkmru



Karma_V2 - A Passive Open Source Intelligence (OSINT) Automated Reconnaissance (Framework)

19 October 2021 at 11:30
By: Zion3R

πš”πšŠπš›πš–πšŠ 𝚟𝟸 is a Passive Open Source Intelligence (OSINT) Automated Reconnaissance (framework)


πš”πšŠπš›πš–πšŠ 𝚟𝟸 can be used by Infosec Researchers, Penetration Testers, Bug Hunters to find deep information, more assets, WAF/CDN bypassed IPs, Internal/External Infra, Publicly exposed leaks and many more about their target. Shodan Premium API key is required to use this automation. Output from the πš”πšŠπš›πš–πšŠ 𝚟𝟸 is displayed to the screen and saved to files/directories.

Regarding the premium Shodan API, please see the Shodan site for more information.

Shodan website: Shodan Website - API: Developer API


Features
  • Powerful and flexible results via Shodan Dorks
  • SSL SHA1 checksum/fingerprint Search
  • Only hit In-Scope IPs
  • Verify each IP with SSL/TLS certificate issuer match RegEx
  • Provide Out-Of-Scope IPs
  • Find out all ports including well known/uncommon/dynamic
  • Grab all targets vulnerabilities related to CVEs
  • Banner grab for each IP, Product, OS, Services & Org etc.
  • Grab favicon Icons
  • Generate Favicon Hash using python3 mmh3 Module
  • Favicon Technology Detection using nuclei custom template
  • ASN Scan
  • BGP Neighbour
  • IPv4 & IPv6 Profixes for ASN
  • Interesting Leaks like Indexing, NDMP, SMB, Login, SignUp, OAuth, SSO, Status 401/403/500, VPN, Citrix, Jfrog, Dashboards, OpenFire, Control Panels, Wordpress, Laravel, Jetty, S3 Buckets, Cloudfront, Jenkins, Kubernetes, Node Exports, Grafana, RabbitMQ, Containers, GitLab, MongoDB, Elastic, FTP anonymous, Memcached, DNS Recursion, Kibana, Prometheus, Default Passwords, Protected Objects, Moodle, Spring Boot, Django, Jira, Ruby, Secret Key and many more...

Installation

1. Clone the repo
# git clone https://github.com/Dheerajmadhukar/karma_v2.git

2. Install shodan & mmh3 python module
# python3 -m pip install shodan mmh3

3. Install JSON Parser [JQ]
# apt install jq -y

4. Install httprobe @tomnomnom to probe the requests
# GO111MODULE=on go get -v github.com/tomnomnom/httprobe

5. Install Interlace @codingo to multithread [Follow the codingo interlace repo instructions]
# git clone https://github.com/codingo/Interlace.git & install accordingly. 

6. Install nuclei @projectdiscovery
# GO111MODULE=on go get -v github.com/projectdiscovery/nuclei/v2/cmd/nuclei

7. Install lolcat
# apt install lolcat -y

8. Install anew
# GO111MODULE=on go get -u github.com/tomnomnom/anew

Ok, how do I use it?
# cat > .token
SHODAN_PREMIUM_API_HERE

Usage

You can use this command to check help:

$ bash karma_v2 -h



MODEs

  MODE      Example
  -ip       bash karma_v2 -d <DOMAIN.TLD> -l <INTEGER> -ip
  -asn      bash karma_v2 -d <DOMAIN.TLD> -l <INTEGER> -asn
  -cve      bash karma_v2 -d <DOMAIN.TLD> -l <INTEGER> -cve
  -favicon  bash karma_v2 -d <DOMAIN.TLD> -l <INTEGER> -favicon
  -leaks    bash karma_v2 -d <DOMAIN.TLD> -l <INTEGER> -leaks
  -deep     bash karma_v2 -d <DOMAIN.TLD> -l <INTEGER> -deep
  -count    bash karma_v2 -d <DOMAIN.TLD> -l <INTEGER> -count

Demo
  • karma_v2 [mode -ip] &#10359;&#10242;&#120468;&#120458;&#120475;&#120470;&#120458; &#120479;&#120824;&#10256;&#10430; is a Passive Open Source Intelligence (OSINT) Automated Reconnaissance (framework) (8)
  • karma_v2 [mode -asn] &#10359;&#10242;&#120468;&#120458;&#120475;&#120470;&#120458; &#120479;&#120824;&#10256;&#10430; is a Passive Open Source Intelligence (OSINT) Automated Reconnaissance (framework) (9)
  • karma_v2 [mode -cve] &#10359;&#10242;&#120468;&#120458;&#120475;&#120470;&#120458; &#120479;&#120824;&#10256;&#10430; is a Passive Open Source Intelligence (OSINT) Automated Reconnaissance (framework) (10)
  • karma_v2 [mode -favicon] &#10359;&#10242;&#120468;&#120458;&#120475;&#120470;&#120458; &#120479;&#120824;&#10256;&#10430; is a Passive Open Source Intelligence (OSINT) Automated Reconnaissance (framework) (11)
  • karma_v2 [mode -leaks]

&#10359;&#10242;&#120468;&#120458;&#120475;&#120470;&#120458; &#120479;&#120824;&#10256;&#10430; is a Passive Open Source Intelligence (OSINT) Automated Reconnaissance (framework) (12)


  • karma_v2 [mode -deep]

-deep supports all of the above modes, e.g. -count, -ip, -asn, -favicon, -cve, -leaks!


Output
output/bugcrowd.com-YYYY-MM-DD/ 

.
β”œβ”€β”€ ASNs_Detailed_bugcrowd.com.txt
β”œβ”€β”€ Collect
β”‚ β”œβ”€β”€ host_domain_domain.tld.json.gz
β”‚ β”œβ”€β”€ ssl_SHA1_12289a814...83029f8944b6088d60204a92e_domain.tld.json.gz
β”‚ β”œβ”€β”€ ssl_SHA1_17537bf84...73cb1d684a495db7ea5aa611b_domain.tld.json.gz
β”‚ β”œβ”€β”€ ssl_SHA1_198d6d4ec...681b77585190078b07b37c5e1_domain.tld.json.gz
β”‚ β”œβ”€β”€ ssl_SHA1_26a9c5618...d60eae2947b42263e154d203f_domain.tld.json.gz
β”‚ β”œβ”€β”€ ssl_SHA1_3da3825a2...3b852a42470410183adc3b9ee_domain.tld.json.gz
β”‚ β”œβ”€β”€ ssl_SHA1_4d0eab730...68cf11d2db94cc2454c906532_domain.tld.json.gz
β”‚ β”œβ”€β”€ ssl_SHA1_8907dab4c...12fdbdd6c445a4a8152f6b7b7_domain.tld.json.gz
β”‚ β”œβ”€β”€ ssl_SHA1_9a9b99eba...5dc5106cea745a591bf96b044_domain.tld.json.gz
β”‚ β”œβ”€β”€ ssl_SHA1_a7c14d201...b6fd4bc4e95ab2897e6a0bsfd_domain.tld.json.gz
β”‚ β”œβ”€β”€ ssl_SHA1_a90f4ddb0...85780bdb06de83fefdc8a612d_domain.tld.json.gz
β”‚ β”œβ”€β”€ ssl_domain_domain.tld.json.gz
β”‚ β”œβ”€β”€ ssl_subjectCN_domain.tld.json.gz
β”‚ └── ssl_subject_domain.tld.json.gz
| └── . . .
β”œβ”€β”€ IP_VULNS
β”‚ β”œβ”€β”€ 104.x.x.x.json.gz
β”‚ β”œβ”€β”€ 107.x.x.x.json.gz
β”‚ β”œβ”€β”€ 107.x.x.x.json.gz
β”‚ └── 99.x.x.x.json.gz
| └── . . .
β”œβ”€β”€ favicons_domain.tld.txt
β”œβ”€β”€ host_enum_domain.tld.txt
β”œβ”€β”€ ips_inscope_domain.tld.txt
β”œβ”€β”€ main_domain.tld.data
β”œβ”€β”€ . . .

karma_v2 Newly Added Shodan Dorks
  • SonarQube
  • Apache hadoop node
  • Directory Listing
  • Oracle Business intelligence
  • Oracle Web Login
  • Docker Exec
  • Apache Status
  • Apache-Coyote/1.1 Tomcat-5.5
  • Swagger UI
  • H-SPHERE
  • Splunk
  • JBoss
  • phpinfo
  • ID_VC
  • Confluence
  • TIBCO_Jaspersoft
  • Shipyard_Docker_management
  • Symfony PHP info AWS creds
  • Ignored-by_CDNs
  • Django_Exposed
  • Cluster_Node_etcd
  • SAP_NetWeaver_Application

πš”πšŠπš›πš–πšŠ 𝚟𝟸 Supported Shodan Dorks
DORKs DORKs DORKs
ssl.cert.fingerprint http.status:"302" oauth "Server: Jetty"
ssl http.status:"302" sso X-Amz-Bucket-Region
org title:"401 Authorization Required" "development" org:"Amazon.com"
hostname http.html:"403 Forbidden" "X-Jenkins" "Set-Cookie: JSESSIONID" http.title:"Jenkins [Jenkins]"
ssl.cert.issuer.cn http.html:"500 Internal Server Error" http.favicon.hash:81586312 200
ssl.cert.subject.cn ssl.cert.subject.cn:*vpn* product:"Kubernetes" port:"10250, 2379"
ssl.cert.expired:true title:"citrix gateway" port:"9100" http.title:"Node Exporter"
ssl.cert.subject.commonName http.html:"JFrog" http.title:"Grafana"
http.title:"Index of /" "X-Jfrog" http.title:"RabbitMQ"
ftp port:"10000" http.title:"dashboard" HTTP/1.1 307 Temporary Redirect "Location: /containers"
"Authentication: disabled" port:445 product:"Samba" http.title:"Openfire Admin Console" http.favicon.hash:1278323681
title:"Login - Adminer" http.title:"control panel" "MongoDB Server Information" port:27017 -authentication
http.title:"sign up" http.html:"* The wp-config.php creation script uses this file" port:"9200" all:"elastic indices"
http.title:"LogIn" clockwork "220" "230 Login successful." port:21
port:"11211" product:"Memcached" "port: 53" Recursion: Enabled title:"kibana"
port:9090 http.title:"Prometheus Time Series Collection and Processing Server" "default password" title:protected
http.component:Moodle http.favicon.hash:116323821 html:"/login/?next=" title:"Django"
html:"/admin/login/?next=" title:"Django" title:"system dashboard" html:jira http.component:ruby port:3000
html:"secret_key_base" I will add more soon . . .

πš”πšŠπš›πš–πšŠ 𝚟𝟸 Newly Added Shodan Dorks
DORKs DORKs DORKs
"netweaver" port:"2379" product:"etcd" http.title:"DisallowedHost"
ssl:"${target}" "-AkamaiGHost" "-GHost" ssl:"${target}" "-Cloudflare" ssl:"${target}" "-Cloudfront"
"X-Debug-Token-Link" port:443 http.title:"shipyard" HTTP/1.1 200 OK Accept-Ranges: bytes Content-Length: 5664 http.title:"TIBCO Jaspersoft:" port:"443" "1970"
"Confluence" http.title:"SonarQube" html:"jmx?qry=Hadoop:*"
http.title:"Directory Listing" http.title:"H-SPHERE" http.title:"Swagger UI - "
Server: Apache-Coyote/1.1 Tomcat-5.5" port:2375 product:"Docker" http.title:"phpinfo()"
http.title:"ID_VC_Welcome" "x-powered-by" "jboss" jboss http.favicon.hash:-656811182
http.title:"Welcome to JBoss" port:"8089, 8000" "splunkd" http.favicon.hash:-316785925
title:"splunkd" org:"Amazon.com" http.title:"oracle business intelligence sign in" http.title:"Oracle WebLogic Server Administration Console"
http.title:"Apache Status" I will add more soon . . .



Inceptor - Template-Driven AV/EDR Evasion Framework

18 October 2021 at 20:30
By: Zion3R


Modern penetration testing and red teaming often require bypassing common AV/EDR appliances in order to execute code on a target. With time, defenses are becoming more complex and inherently more difficult to bypass consistently.

Inceptor is a tool which can help to automate a great part of this process, hopefully requiring no further effort.


Features

Inceptor is a template-based PE packer for Windows, designed to help penetration testers and red teamers to bypass common AV and EDR solutions. Inceptor has been designed with a focus on usability, and to allow extensive user customisation.

To get a good overview of what was implemented and why, it might be useful to take a look at the following resources:


Shellcode Transformation/Loading

Inceptor is able to convert existing EXE/DLL into shellcode using various open-source converters:

  • Donut: Donut is "The Converter". This tool is more like a piece of art by TheWover, and can be used to transform Native binaries, DLL, and .Net binaries into position independent code shellcode.
  • sRDI: By Monoxgas, this tool can convert existing naticcve DLL into PIC, which can then be injected as regular shellcode.
  • Pe2Sh: By Hasherazade, this tool can convert an existing native EXE into PIC shellcode, which can also be run as a normal EXE.

LI Encoders vs LD Encoders

Inceptor can encode, compress, or encrypt shellcode using different means. While developing the tool, I started differentiating between what I call loader-independent (LI) encoding, and loader-dependent (LD) encoding.

Loader-independent encoding is a type of encoding not managed by the template chosen by the user (loader). This usually means that the decoding stub is not part of the template, but embedded in the shellcode itself. Inceptor offers this kind of feature using the open-source tool sgn, which is used to make the payload polymorphic and undetectable using common signature detection.

Even as strong as it is, Shikata-Ga-Nai is not really suitable for certain templates. For this reason, Inceptor also implements loader-dependent encoders, which are designed to let the loader take care of the decoding. As such, LD encoders install the decoding stub directly in the template. These encoders, as implemented within Inceptor, are also "chainable", meaning they can be chained together to encode a payload.

While using a chain of encoders can sometimes improve the obfuscation of a given payload, this technique can also expose multiple decoding routines, which can help Defenders to design signatures against them. For this reason, Inceptor offers multiple ways to obfuscate the final artifacts, hardening the RE process.

At the time of writing, the public version of Inceptor has been provided with the following encoders/compressors/encryptors:

  • Native
    • Xor
    • Nop (Insertion)
  • .NET
    • Hex
    • Base64
    • Xor
    • Nop (Insertion)
    • AES
    • Zlib
    • RLE
  • PowerShell
    • Hex
    • Base64
    • Xor
    • Nop (Insertion)
    • AES

Inceptor can validate an encoding chain both statically and dynamically, statically checking the decoders' input/output types, and also dynamically verifying the implementation with an independent implementation.

At any time, a user can easily validate a chain using the chain-validate.py utility.


AV Evasion Mechanisms

Inceptor also natively implements AV evasion mechanisms, and as such, it offers the possibility to include AV evasion features in the payload in the form of "modules" (plugins).

The plugins which can be embedded are:

  • AMSI bypass
  • WLDP bypass
  • ETW bypass
  • Sandbox (Behavioural) Deception

EDR Evasion Mechanisms

Inceptor also implements EDR Evasion mechanisms, such as full unhooking, direct syscall invocation and manual DLL mapping. Direct Syscalls are implemented in C# using the outstanding "DInvoke" project, again by TheWover. In C/C++, Syscalls are implemented using SysWhispers and SysWhispers2 projects, by Jackson_T. In addition, Inceptor has built-in support for x86 Syscalls as well.

Like the AV bypass features, these can be enabled as modules; the only difference is that they require a template which supports them. The techniques implemented so far are:

  • Full Unhooking
  • Manual DLL Mapping
  • Direct Syscalls

Obfuscation

Inceptor supports payload obfuscation by using external utils, such as ConfuserEx and Chameleon, and provides support for C/C++ obfuscation using LLVM-Obfuscator, which is an IR-based obfuscator using the LLVM compilation platform.

  • PowerShell
  • C#
  • C/C++

Code Signing

Another feature of Inceptor is that it can code-sign the resulting binary/DLL by using the tool CarbonCopy. Usually, files signed with code signing certificates are less strictly analysed. Many anti-malware products don't validate/verify these certificates.


Workflow

The full workflow can be summarized in the following high-level, and simplified scheme:



Installation

Inceptor has been designed to work on Windows. The update-config.py utility can locate the required Microsoft binaries and update the configuration accordingly. It might be required to install Microsoft Build Tools, the Windows SDK, and Visual Studio; update-config.py will guide the user on how to install the required dependencies.

git clone --recursive https://github.com/klezVirus/inceptor.git
cd inceptor
virtualenv venv
venv\Scripts\activate.bat
pip install -r requirements.txt
cd inceptor
python update-config.py

Useful Notes

Default Loaders

The current version of Inceptor locates a specific template using a simple naming convention (don't change template names) and the set of arguments given by the user. Among the arguments there is also the loader (-t). If not specified, the loader will be picked as a function of the file to pack, following this simple schema:

$ python inceptor.py -hh

[*] Default Loaders
  Input File Extension  Special Condition  Guessed Filetype   Default Loader  Default Template
  .raw                  NaN                Shellcode          Simple Loader   Classic
  .exe                  .NET               Dotnet Executable  Donut           Classic
  .exe                  NaN                Native Executable  Pe2Shellcode    PE Load
  .dll                  NaN                Native Library     sRDI            Classic

Template name convention

It's also very important to understand the template naming convention, to avoid misinterpreting an artifact's behaviour.

  • Classic: a classic template usually means it uses the VirtualAlloc/VirtualAllocEx and CreateThread/CreateRemoteThread API to allocate and execute arbitrary code
  • Dinvoke: if a template contains only dinvoke (e.g classic-dinvoke.cs), it means it uses dynamic function resolution feature of dinvoke
  • dinvoke-subtechnique: a template containing dinvoke followed by another keyword is using a particular feature of dinvoke, like manual_mapping, overload_mapping, or syscalls
  • Syscalls: as the name suggest, this template is using syscalls
  • PE Load: this template tries to map a full PE into memory, without transforming it
  • Assembly Load: this template tries to execute a .NET assembly using reflection

Usage
usage: inceptor.py [-h] [-hh] [-Z] {native,dotnet,powershell} ...

inceptor: A Windows-based PE Packing framework designed to help
Red Team Operators to bypass common AV and EDR solutions

positional arguments:
  {native,dotnet,powershell}
    native              Native Binaries Generator
    dotnet              .NET Binaries Generator
    powershell          PowerShell Wrapper Scripts Generator

optional arguments:
  -h, --help            show this help message and exit
  -hh                   Show functional table
  -Z, --check           Check file against ThreatCheck

Next Developments
  • New Template Engine
  • New Templates
  • New Encoders
  • C# Code-Based obfuscation

Resources


ImpulsiveDLLHijack - C# Based Tool Which Automates The Process Of Discovering And Exploiting DLL Hijacks In Target Binaries

18 October 2021 at 11:30
By: Zion3R


C# based tool which automates the process of discovering and exploiting DLL hijacks in target binaries. The hijacked paths discovered can later be weaponized during red team operations to evade EDRs.


1. Methodological Approach :

The tool automates the following stages performed for DLL hijacking:

  • Discovery - Finding Potentially Vulnerable DLL Hijack paths
  • Exploitation - Confirming whether the Confirmatory DLL was been loaded from the Hijacked path leading to a confirmation of 100% exploitable DLL Hijack!

Discovery Methodology :

  • Provide Target binary path to ImpulsiveDLLHijack.exe
  • Automation of ProcMon along with the execution of Target binary to find Potentially Vulnerable DLL Hijackable paths.

Exploitation Methodology :

  • Parse Potentially Vulnerable DLL Hijack paths from CSV generated automatically via ProcMon.

  • Copy the Confirmatory DLL (as per the PE architecture) to the hijack paths one by one and execute the Target Binary for predefined time period simultaneously.

  • As the DLL hijacking process is in progress following are the outputs which can be gathered from the Hijack Scenario:

    • The Confirmatory DLL present on the potentially vulnerable Hijackable Path is loaded by the Target Binary we get following output on the console stating that the DLL Hijack was successful - DLL Hijack Successful -> DLLName: | <Target_binary_name>
    • The Confirmatory DLL present on the potentially vulnerable Hijackable Path is not loaded by the Target Binary we get following output on the console stating that the DLL Hijack was unsuccessful - DLL Hijack Unsuccessful -> <DLL_Path>

    Entry Point Not Found Scenarios:

    • The Confirmatory DLL present on the potentially vulnerable Hijackable Path is not loaded by the Target Binary as the Entry Point of the DLL is different from our default entry point "DllMain" throwing an error - "Entry Point Not Found", we get following output on the console stating that the DLL Hijack was hijackable if the entry point was correct -> DLL Hijack Successful -> [Entry Point Not Found - Manual Analysis Required!]: <Hijack_path>
    • The Confirmatory DLL present on the potentially vulnerable Hijackable Path is executed by the Target Binary even after the Entry Point of the DLL is different from our default entry point "DllMain" throwing an error "Entry Point Not Found", we get following output on the console stating that the DLL Hijack was success even after the entry point was not correct -> DLL Hijack Successful -> [Entry Point Not Found]: <Hijack_path>

Note: The "Entry Point Not Found" error is handled programmatically by the code; there is no need to close the MsgBox manually, which would otherwise crash the tool. :)

  • Once the DLL hijacking process is completed for every potentially vulnerable DLL hijack path, the final output is printed to the console and written to a text file (C:\DLLLogs\output_logs.txt) in the following format:

    • <DLLHijack_path> --> DLL Hijack Successful (if the Hijack was successful)
    • <DLLHijack_path> --> DLL Hijack Unsuccessful (if the Hijack was unsuccessful)
    • <DLLHijack_path> --> DLL Hijack Successful [Entry Point Not Found - Manual Analysis Required] (if the Entry point was not found but can be successful after manual analysis)
    • <DLLHijack_path> --> DLL Hijack Successful [Entry Point Not Found] (if the hijack was successful even after the entry point was not found)
    • <DLLHijack_path> --> Copy: Access to Path is Denied (Access denied)
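As a sketch of triaging that log afterwards, the entries marked successful can be filtered out with standard tools. The sample log lines below are hypothetical, merely mimicking the documented formats:

```shell
#!/bin/sh
# Hypothetical sample of C:\DLLLogs\output_logs.txt, mimicking the documented formats.
cat > output_logs.txt <<'EOF'
C:\Users\demo\AppData\Local\version.dll --> DLL Hijack Successful
C:\Windows\System32\cryptbase.dll --> DLL Hijack Unsuccessful
C:\Users\demo\dbghelp.dll --> DLL Hijack Successful [Entry Point Not Found - Manual Analysis Required]
EOF

# Keep only the paths marked successful (including entry-point edge cases) for review.
grep 'DLL Hijack Successful' output_logs.txt > candidates.txt
wc -l < candidates.txt
```

The resulting candidates.txt is the shortlist of paths worth weaponizing or analyzing manually.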

These confirmed DLL hijackable paths can later be weaponized during a Red Team engagement to load a malicious DLL implant via a legitimate executable (such as OneDrive, Firefox, MSEdge, "Bring your own LOLBINs", etc.) and bypass state-of-the-art EDRs, most of which fail to detect DLL hijacking, as assessed by George Karantzas and Constantinos Patsakis in their research paper: https://arxiv.org/abs/2108.10422


2. Prerequisites:
  • Procmon.exe -> https://docs.microsoft.com/en-us/sysinternals/downloads/procmon
  • Custom Confirmatory DLLs:
    • These are DLL files that help the tool confirm whether the DLLs are successfully loaded from the identified hijack paths
    • Compiled from the MalDLL project provided above (or use the precompiled binaries if you trust me!)
    • 32Bit dll name should be: maldll32.dll
    • 64Bit dll name should be: maldll64.dll
    • Install NuGet package: PeNet -> https://www.nuget.org/packages/PeNet/ (prerequisite when compiling the ImpulsiveDLLHijack project)

Note: prerequisites i & ii should be placed in ImpulsiveDLLHijack.exe's directory itself.

  • Build and Setup Information:

    • ImpulsiveDLLHijack

      • Clone the repository in Visual Studio
      • Once project is loaded in Visual Studio go to "Project" --> "Manage NuGet packages" --> Browse for packages and install "PeNet" -> https://www.nuget.org/packages/PeNet/
      • Build the project!
      • The ImpulsiveDLLHijack.exe will be inside the bin directory.
    • And for Confirmatory DLL's:

      • Clone the repository in Visual Studio
      • Build the project with x86 and x64
      • Rename x86 release as maldll32.dll and x64 release as maldll64.dll
    • Setup: Copy the Confirmatory DLLs (maldll32 & maldll64) into the ImpulsiveDLLHijack.exe directory & then execute ImpulsiveDLLHijack.exe :))
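As a quick sanity check of that layout, a small script can verify the prerequisite files sit next to the executable. This is only a sketch: the temporary directory and the `touch` calls simulate the copy step rather than performing a real setup:

```shell
#!/bin/sh
# Simulate the tool's directory and verify the prerequisite files are in place.
DIR=$(mktemp -d)
touch "$DIR/maldll32.dll" "$DIR/maldll64.dll"   # pretend the confirmatory DLLs were copied

# Procmon.exe is deliberately left out here to show a failing check.
for f in Procmon.exe maldll32.dll maldll64.dll; do
  if [ -f "$DIR/$f" ]; then echo "found   $f"; else echo "missing $f"; fi
done
```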


3. Usage:



4. Examples:
  • Target Executable: OneDrive.exe

  • Stage: Discovery

  • Stage: Exploitation

    • Successful DLL Hijacks:



    • Unsuccessful DLL Hijacks:



    • DLL is not loaded as the entry point is not identical! Manual Analysis might make it a successful DLL Hijack :)



    • DLL Hijack successful even after unidentical entry point!



  • Stage: Final Results and Logs

    • C:\DLLLogs\output_logs.txt:



Thank you, feedback would be greatly appreciated! - knight!



Fapro - Free, Cross-platform, Single-file mass network protocol server simulator

17 October 2021 at 20:30
By: Zion3R


FaPro is a Fake Protocol Server tool that can easily start or stop multiple network services.

The goal is to support as many protocols as possible, and support as many deep interactions as possible for each protocol.


Features
  • Supported Running Modes:
    • Local Machine
    • Virtual Network
  • Supported Protocols:
    • DNS
    • DCE/RPC
    • EIP
    • Elasticsearch
    • FTP
    • HTTP
    • IEC 104
    • Memcached
    • Modbus
    • MQTT
    • MySQL
    • RDP
    • Redis
    • S7
    • SMB
    • SMTP
    • SNMP
    • SSH
    • Telnet
    • VNC
    • IMAP
    • POP3
  • Use TcpForward to forward network traffic
  • Support tcp syn logging

Protocol simulation demos

Rdp

Support CredSSP NTLMv2 NLA authentication.

Support configuring the image displayed at user login.


SSH

Support user login.

Support fake terminal commands, such as id, uid, whoami, etc.

Account format: username:password:home:uid
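For example, an entry in that format splits cleanly on ':' (the account values here are made up for illustration):

```shell
#!/bin/sh
# Split a FaPro SSH account entry of the form username:password:home:uid.
entry='root:toor:/root:0'   # hypothetical account entry
IFS=: read -r user pass home uid <<EOF
$entry
EOF
echo "user=$user home=$home uid=$uid"
```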


IMAP & SMTP

Support user login and interaction.



Mysql

Support SQL statement query interaction.



HTTP

Support website cloning. You need to install the Chrome browser and ChromeDriver for it to work.


Quick Start

Generate Config

The configuration of all protocols and parameters is generated by genConfig subcommand.

Use 172.16.0.0/16 subnet to generate the configuration file:

fapro genConfig -n 172.16.0.0/16 > fapro.json

Or use local address instead of the virtual network:

fapro genConfig > fapro.json

Run the protocol simulator

Run FaPro in verbose mode and start the web service on port 8080:

fapro run -v -l :8080
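Alternatively, a minimal local-mode config can be written by hand instead of using genConfig. The sketch below is modeled on the sample configuration in the Configuration section; whether FaPro accepts a config reduced to only these fields is an assumption, and the fapro invocation itself is left commented out:

```shell
#!/bin/sh
# Write a minimal local-mode config with a single fake SSH service.
cat > fapro.json <<'EOF'
{
  "version": "0.38",
  "network": "127.0.0.1/32",
  "network_build": "localhost",
  "hosts": [
    {
      "ip": "127.0.0.1",
      "handlers": [
        { "handler": "ssh", "port": 2222, "params": { "accounts": ["root:toor:/root:0"] } }
      ]
    }
  ]
}
EOF

# fapro run -v -l :8080   # requires the fapro binary; shown for context only
grep -c '"handler"' fapro.json   # number of configured services
```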

Tcp syn logging

For Windows users, please install WinPcap or Npcap.


Log analysis

Use ELK to analyze protocol logs:



Configuration

This section contains the sample configuration used by FaPro.

{
  "version": "0.38",
  "network": "127.0.0.1/32",
  "network_build": "localhost",
  "storage": null,
  "geo_db": "/tmp/geoip_city.mmdb",
  "hostname": "fapro1",
  "use_logq": true,
  "cert_name": "unknown",
  "syn_dev": "any",
  "exclusions": [],
  "hosts": [
    {
      "ip": "127.0.0.1",
      "handlers": [
        {
          "handler": "dcerpc",
          "port": 135,
          "params": {
            "accounts": [
              "administrator:123456"
            ],
            "domain_name": "DESKTOP-Q1Test"
          }
        }
      ]
    }
  ]
}
  • version: Configuration version.
  • network: The subnet used by the virtual network, or the address bound to the local machine (local mode).
  • network_build: Network mode (supported values: localhost, all, userdef)
    • localhost: Local mode, all services listen on the local machine
    • all: Create all hosts in the subnet (i.e., every host in the subnet can be pinged)
    • userdef: Create only the hosts specified in the hosts configuration.
  • storage: The storage used for log collection; sqlite, mysql, and elasticsearch are supported.
  • geo_db: MaxMind GeoIP2 database file path, used to generate IP geolocation information. If you use Elasticsearch storage, this field is not needed; the information is generated automatically by Elasticsearch's geoip processor.
  • hostname: Sets the host field in the log.
  • use_logq: Use a local disk message queue to buffer logs before sending them to remote MySQL or Elasticsearch, preventing remote log loss.
  • cert_name: Common name of the generated certificate.
  • syn_dev: The network interface used to capture TCP SYN packets. If empty, TCP SYN packets are not recorded. On Windows, the device name looks like "\Device\NPF_{xxxx-xxxx}".
  • exclusions: Remote IPs to exclude from logs.
  • hosts: Each item is a host configuration.
  • handlers: The services configured on the host; each item is a service configuration.
  • handler: Service name (i.e., protocol name)
  • params: The parameters supported by the service.

Example

Create a virtual network with subnet 172.16.0.0/24 containing two hosts: 172.16.0.3 runs DNS and SSH services, and 172.16.0.5 runs RPC and RDP services. Protocol access logs are saved to Elasticsearch, excluding the access log of 127.0.0.1.

{
  "version": "0.38",
  "network": "172.16.0.0/24",
  "network_build": "userdef",
  "storage": "es://http://127.0.0.1:9200",
  "use_logq": true,
  "cert_name": "unknown",
  "syn_dev": "any",
  "geo_db": "",
  "exclusions": ["127.0.0.1"],
  "hosts": [
    {
      "ip": "172.16.0.3",
      "handlers": [
        {
          "handler": "dns",
          "port": 53,
          "params": {
            "accounts": [
              "admin:123456"
            ],
            "appname": "domain"
          }
        },
        {
          "handler": "ssh",
          "port": 22,
          "params": {
            "accounts": [
              "root:5555555:/root:0"
            ],
            "prompt": "$ ",
            "server_version": "SSH-2.0-OpenSSH_7.4"
          }
        }
      ]
    },
    {
      "ip": "172.16.0.5",
      "handlers": [
        {
          "handler": "dcerpc",
          "port": 135,
          "params": {
            "accounts": [
              "administrator:123456"
            ],
            "domain_name": "DESKTOP-Q1Test"
          }
        },
        {
          "handler": "rdp",
          "port": 3389,
          "params": {
            "accounts": [
              "administrator:123456"
            ],
            "auth": false,
            "domain_name": "DESKTOP-Q1Test",
            "image": "rdp.jpg",
            "sec_layer": "auto"
          }
        }
      ]
    }
  ]
}

FAQ

We have collected some frequently asked questions. Before reporting an issue, please search if the FAQ has the answer to your problem.


Contributing
  • Issues are welcome.


DorkScout - Golang Tool To Automate Google Dork Scan Against The Entiere Internet Or Specific Targets

17 October 2021 at 11:30
By: Zion3R


dorkscout is a tool to automate the discovery of vulnerable applications or secret files around the internet through Google searches. dorkscout first fetches the dork lists from https://www.exploit-db.com/google-hacking-database and then scans a given target, or everything it finds


Installation

dorkscout can be installed in different ways:


Go Packages

through Go packages (the Golang package manager)

go get github.com/R4yGM/dorkscout

this will work for every platform


Docker

if you don't have docker installed you can follow their guide

First of all, you have to pull the docker image (only 17.21 MB) from the docker registry; you can see it here.

docker pull r4yan/dorkscout:latest

If you don't want to pull the image, you can download or copy the dorkscout Dockerfile, found here, and build the image from it yourself.

Then, to launch the container, you first have to create a volume to share your files with the container:

docker volume create --name dorkscout_data

When you launch the container with docker, it will automatically install the dork lists inside a directory called "dorkscout":

-rw-r--r-- 1 r4yan r4yan   110 Jul 31 14:56  .dorkscout
-rw-r--r-- 1 r4yan r4yan 79312 Aug 10 20:30 'Advisories and Vulnerabilities.dorkscout'
-rw-r--r-- 1 r4yan r4yan 6352 Jul 31 14:56 'Error Messages.dorkscout'
-rw-r--r-- 1 r4yan r4yan 38448 Jul 31 14:56 'Files Containing Juicy Info.dorkscout'
-rw-r--r-- 1 r4yan r4yan 17110 Jul 31 14:56 'Files Containing Passwords.dorkscout'
-rw-r--r-- 1 r4yan r4yan 1879 Jul 31 14:56 'Files Containing Usernames.dorkscout'
-rw-r--r-- 1 r4yan r4yan 5398 Jul 31 14:56 Footholds.dorkscout
-rw-r--r-- 1 r4yan r4yan 5568 Jul 31 14:56 'Network or Vulnerability Data.dorkscout'
-rw-r--r-- 1 r4yan r4yan 49048 Jul 31 14:56 'Pages Containing Login Portals.dorkscout'
-rw-r--r-- 1 r4yan r4yan 16112 Jul 31 14:56 'Sensitive Directories.dorkscout'
-rw-r--r-- 1 r4yan r4yan 451 Jul 31 14:56 'Sensitive Online Shopping Info.dorkscout'
-rw-r--r-- 1 r4yan r4yan 29938 Jul 31 14:56 'Various Online Devices.dorkscout'
-rw-r--r-- 1 r4yan r4yan 2802 Jul 31 14:56 'Vulnerable Files.dorkscout'
-rw-r--r-- 1 r4yan r4yan 4925 Jul 31 14:56 'Vulnerable Servers.dorkscout'
-rw-r--r-- 1 r4yan r4yan 8145 Jul 31 14:56 'Web Server Detection.dorkscout'

so that you don't have to install them yourself. You can then start scanning with:

docker run -v dorkscout_data:/dorkscout r4yan/dorkscout scan <options>

Replace <options> with the options/arguments you want to give to dorkscout. Example:

docker run -v dorkscout_data:/dorkscout r4yan/dorkscout scan -d="/dorkscout/Sensitive Online Shopping Info.dorkscout" -H="/dorkscout/a.html"

If you want to scan through a proxy using a docker container, you have to add the --net host option. Example:

docker run --net host -v dorkscout_data:/dorkscout r4yan/dorkscout scan -d="/dorkscout/Sensitive Online Shopping Info.dorkscout" -H="/dorkscout/a.html" -x socks5://127.0.0.1:9050

Always save your results inside the volume and not in the container, otherwise they will be deleted! You can do this by writing them to a path under the mounted volume.

If you did everything correctly, at the end of every scan you will find the results inside the folder /var/lib/docker/volumes/dorkscout_data/_data
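For reference, the whole Docker workflow above condenses to a volume-create step, a scan step, and a results path. The sketch below only prints the commands rather than executing them, since they require a host with Docker installed:

```shell
#!/bin/sh
# Print the consolidated dorkscout Docker workflow for reference.
VOL=dorkscout_data
RESULTS=/var/lib/docker/volumes/$VOL/_data

cat <<EOF
docker volume create --name $VOL
docker run -v $VOL:/dorkscout r4yan/dorkscout scan -d="/dorkscout/Sensitive Online Shopping Info.dorkscout" -H="/dorkscout/a.html"
EOF
echo "# results end up on the host under: $RESULTS"
```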

this will work for every platform


Executable

you can also download the already compiled binaries here and then execute them


Usage
dorkscout -h
Usage:
  dorkscout [command]

Available Commands:
  completion  generate the autocompletion script for the specified shell
  delete      deletes all the .dorkscout files inside a given directory
  help        Help about any command
  install     installs a list of dorks from exploit-db.com
  scan        scans a specific website or all the websites it founds for a list of dorks

Flags:
  -h, --help   help for dorkscout

Use "dorkscout [command] --help" for more information about a command.

To start scanning with a wordlist and a proxy, returning the results in HTML format:

dorkscout scan -d="/dorkscout/Sensitive Online Shopping Info.dorkscout" -H="/dorkscout/a.html" -x socks5://127.0.0.1:9050

results :



Install wordlists

To start scanning you'll need some dork lists; you can install them through the install command

dorkscout install --output-dir /dorks

and this will fetch all the available dorks from exploit-db.com

[+] ./Advisories and Vulnerabilities.dorkscout
[+] ./Vulnerable Files.dorkscout
[+] ./Files Containing Juicy Info.dorkscout
[+] ./Sensitive Online Shopping Info.dorkscout
[+] ./Files Containing Passwords.dorkscout
[+] ./Vulnerable Servers.dorkscout
[+] ./Various Online Devices.dorkscout
[+] ./Pages Containing Login Portals.dorkscout
[+] ./Footholds.dorkscout
[+] ./Error Messages.dorkscout
[+] ./Files Containing Usernames.dorkscout
[+] ./Network or Vulnerability Data.dorkscout
[+] ./.dorkscout
[+] ./Sensitive Directories.dorkscout
[+] ./Web Server Detection.dorkscout
2021/08/11 19:02:45 Installation finished in 2.007928 seconds on /dorks

