
Dep-Scan - Fully Open-Source Security Audit For Project Dependencies Based On Known Vulnerabilities And Advisories. Supports Both Local Repos And Container Images. Integrates With Various CI Environments Such As Azure Pipelines, CircleCI, Google CloudBuild

20 January 2022 at 11:30
By: Zion3R


dep-scan is a fully open-source security audit tool for project dependencies based on known vulnerabilities, advisories and license limitations. Both local repositories and container images are supported as input. The tool is ideal for CI environments with built-in build breaker logic.

If you have just come across this repo, probably the best place to start is to check out the parent project slscan, which includes depscan along with a number of other tools.


Features

  • Local repos and container image based scanning with CVE insights [1]
  • Package vulnerability scanning is performed locally and is quite fast. No server is used!
  • Suggest optimal fix version by package group (See suggest mode)
  • Perform deep packages risk audit for dependency confusion attacks and maintenance risks (See risk audit)

NOTE:

  • [1] Only application related packages in container images are included in scanning. OS packages are not included yet.

Vulnerability Data sources

  • OSV
  • NVD
  • GitHub
  • NPM

Usage

dep-scan is ideal for use during continuous integration (CI) and also as a tool for local development.

Use with ShiftLeft Scan

dep-scan is integrated with scan, a free and open-source SAST tool. To enable this feature simply pass depscan to the --type argument. Refer to the scan documentation for more information.

--type python,depscan,credscan

This approach should work for all CI environments supported by scan.

Scanning projects locally (Python version)

sudo npm install -g @appthreat/cdxgen
pip install appthreat-depscan

This installs two commands: cdxgen and depscan.

You can invoke depscan directly with the various options.

cd <project to scan>
depscan --src $PWD --report_file $PWD/reports/depscan.json

The full list of options is below:

usage: depscan [-h] [--no-banner] [--cache] [--sync] [--suggest] [--risk-audit]
               [--private-ns PRIVATE_NS] [-t PROJECT_TYPE] [--bom BOM]
               -i SRC_DIR [-o REPORT_FILE] [--no-error]

  -h, --help            show this help message and exit
  --no-banner           Do not display banner
  --cache               Cache vulnerability information in platform specific user_data_dir
  --sync                Sync to receive the latest vulnerability data. Should have invoked cache first.
  --suggest             Suggest appropriate fix version for each identified vulnerability.
  --risk-audit          Perform package risk audit (slow operation). Npm only.
  --private-ns PRIVATE_NS
                        Private namespace to use while performing oss risk audit. Private packages should not be available in public registries by default. Comma separated values accepted.
  -t PROJECT_TYPE, --type PROJECT_TYPE
                        Override project type if auto-detection is incorrect
  --bom BOM             Examine using the given Software Bill-of-Materials (SBoM) file in CycloneDX format. Use cdxgen command to produce one.
  -i SRC_DIR, --src SRC_DIR
                        Source directory
  -o REPORT_FILE, --report_file REPORT_FILE
                        Report filename with directory
  --no-error            Continue on error to prevent build from breaking

Scanning containers locally (Python version)

Scan the latest tag of the container shiftleft/scan-slim:

depscan --no-error --cache --src shiftleft/scan-slim -o containertests/depscan-scan.json -t docker

Add license to the type to perform a license audit.

depscan --no-error --cache --src shiftleft/scan-slim -o containertests/depscan-scan.json -t docker,license

You can also specify the image using its sha256 digest.

depscan --no-error --src [email protected]:a5c5f8a64a0d9a436a0a6941bc3fb156be0c89996add834fe33b66ebeed2439e -o containertests/depscan-redmine.json -t docker

You can also save container images using the docker or podman save command and pass the archive to depscan for scanning.

docker save -o /tmp/scanslim.tar shiftleft/scan-slim:latest
# podman save --format oci-archive -o /tmp/scanslim.tar shiftleft/scan-slim:latest
depscan --no-error --src /tmp/scanslim.tar -o reports/depscan-scan.json -t docker

Refer to the docker tests under the GitHub Actions workflow for this repo for more examples.
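For custom build-breaker logic in CI, the report file can be post-processed. Below is a minimal sketch, assuming the report is a JSON-lines file whose finding objects carry a "severity" field; the field name and severity values are assumptions, so check your report's actual schema before relying on this:

```python
import json

# Hypothetical severity levels that should fail the build.
FAIL_LEVELS = {"CRITICAL", "HIGH"}

def count_blocking_findings(report_path):
    """Count findings at or above the blocking severity.

    Assumes the depscan report is a JSON-lines file where each
    finding object has a "severity" field (an assumption; verify
    against your report's schema).
    """
    blocking = 0
    with open(report_path) as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            finding = json.loads(line)
            if finding.get("severity", "").upper() in FAIL_LEVELS:
                blocking += 1
    return blocking
```

A CI step could call this and exit non-zero when the count is positive, mirroring depscan's own build-breaker behaviour.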

Scanning projects locally (Docker container)

The appthreat/dep-scan or quay.io/appthreat/dep-scan container images can be used to perform the scan.

To scan with default settings:

docker run --rm -v $PWD:/app appthreat/dep-scan scan --src /app --report_file /app/reports/depscan.json

To scan with a configuration based on custom environment variables:

docker run --rm \
-e VDB_HOME=/db \
-e NVD_START_YEAR=2010 \
-e GITHUB_PAGE_COUNT=5 \
-e GITHUB_TOKEN=<token> \
-v /tmp:/db \
-v $PWD:/app appthreat/dep-scan scan --src /app --report_file /app/reports/depscan.json

In the above example, /tmp is mounted as /db into the container. This directory is then specified as VDB_HOME for caching the vulnerability information. This way the database can be cached and reused to improve performance.

Supported languages and package formats

dep-scan uses cdxgen command internally to create Software Bill-of-Materials (SBoM) file for the project. This is then used for performing the scans.

The following project types and package-dependency formats are supported by cdxgen.

Language           | Package format
node.js            | package-lock.json, pnpm-lock.yaml, yarn.lock, rush.js
java               | maven (pom.xml [1]), gradle (build.gradle, .kts), scala (sbt)
php                | composer.lock
python             | setup.py, requirements.txt [2], Pipfile.lock, poetry.lock, bdist_wheel, .whl
go                 | binary, go.mod, go.sum, Gopkg.lock
ruby               | Gemfile.lock, gemspec
rust               | Cargo.toml, Cargo.lock
.Net               | .csproj, packages.config, project.assets.json, packages.lock.json
docker / oci image | All supported languages excluding OS packages

NOTE

The docker image for dep-scan currently doesn't bundle the java and maven commands required for BOM generation. To work around this limitation, you can:

  1. Use python-based execution from a VM containing the correct versions of java, maven and gradle.
  2. Generate the BOM file by invoking the cdxgen command locally and subsequently passing it to dep-scan via the --bom argument.
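The second workaround amounts to two commands; here is a small Python sketch of the same flow (the bom.json path and reports/ location are illustrative):

```python
import subprocess

def scan_with_prebuilt_bom(src_dir, bom_path="bom.json"):
    """Generate a CycloneDX BOM with cdxgen, then scan it with depscan.

    Paths are illustrative; both tools must already be installed.
    """
    # Step 1: cdxgen writes a CycloneDX BOM for the project.
    subprocess.run(["cdxgen", "-o", bom_path], cwd=src_dir, check=True)
    # Step 2: depscan consumes the prebuilt BOM instead of generating one.
    subprocess.run(
        ["depscan", "--bom", bom_path, "--src", src_dir,
         "--report_file", f"{src_dir}/reports/depscan.json"],
        check=True,
    )
```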

Integration with CI environments

Integration with Azure DevOps

Refer to this example yaml configuration for integrating dep-scan with Azure Pipelines. The build step performs the scan and displays the report inline.

Integration with GitHub Actions

This tool can be used with GitHub Actions using this action.

This repo self-tests itself with both sast-scan and dep-scan! Check the GitHub workflow file of this repo.

- name: Self dep-scan
  uses: AppThreat/[email protected]
  env:
    VDB_HOME: ${{ github.workspace }}/db
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

Customisation through environment variables

The following environment variables can be used to customise the behaviour.

  • VDB_HOME - Directory to use for caching the vulnerability database. For docker-based execution, this directory should be mounted as a volume from the host
  • NVD_START_YEAR - Default: 2018. Values as far back as 2002 are supported
  • GITHUB_PAGE_COUNT - Default: 2. Values up to 20 are supported

GitHub Security Advisory

To download security advisories from GitHub, a personal access token with the following scope is necessary.

  • read:packages
export GITHUB_TOKEN="<PAT token>"

Suggest mode

The fix version for each vulnerability is retrieved from the sources. Sometimes, the reported fix version might itself have known vulnerabilities. For example, the fix versions suggested for jackson-databind might contain known vulnerabilities.

By passing the --suggest argument, it is possible to force depscan to recheck the fix suggestions. This makes the suggestion more optimal for a given package group.

Notice how the new suggested version, 2.9.10.5, is an optimal fix version. Please note that the optimal fix version may not be the appropriate version for your application based on compatibility.

Package Risk audit

The --risk-audit argument enables package risk audit. Currently, only npm and pypi packages are supported in this mode. A number of risk factors are identified and assigned weights to compute a final risk score. Packages that exceed the maximum risk score (config.pkg_max_risk_score) are presented in a table.

Use --private-ns to specify the private package namespace that should be checked for dependency confusion type issues where a private package is available on public npm/pypi registry.

For example, to check that private packages under the @appthreat and @shiftleft namespaces have not accidentally been made public, use:

--private-ns appthreat,shiftleft
Risk category                  | Default weight | Reason
pkg_private_on_public_registry | 4              | Private package is available on a public registry
pkg_min_versions               | 2              | Packages with fewer than 3 versions represent an extreme: they could be either super stable or quite recent. Special heuristics are applied to ignore older stable packages
mod_create_min_seconds         | 1              | Less than 12 hours between creation and modification time. This indicates that the upload had a defect that had to be rectified immediately. Sometimes, such a rapid update can also be malicious
latest_now_min_seconds         | 0.5            | Less than 12 hours between the latest version and the current time. Depending on the package, such a fresh version may or may not be desirable
latest_now_max_seconds         | 0.5            | Package versions over 6 years old are in use. Such packages might have vulnerable dependencies that are known or yet to be found
pkg_min_maintainers            | 2              | Package has fewer than 2 maintainers. Many open-source projects have only 1 or 2 maintainers, so special heuristics are used to ignore older stable packages
pkg_min_users                  | 0.25           | Package has fewer than 2 npm users
pkg_install_scripts            | 2              | Package runs custom pre- or post-installation scripts. This is often malicious and a downside of npm
pkg_node_version               | 0.5            | Package supports an outdated version of node such as 0.8, 0.10, 4 or 6.x. Such projects might have prototype pollution or closure-related vulnerabilities
pkg_scope                      | 4 or 0.5       | Packages used directly in the application (required scope) get a score with a weight of 4. Optional packages get a score of 0.25
deprecated                     | 1              | Latest version is deprecated

Refer to pkg_query.py::get_category_score method for the risk formula.
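As an illustration of how such a weighted score might be computed, here is a sketch that mirrors the weight table; it is not the actual formula in pkg_query.py, and pkg_scope is omitted because its weight is conditional:

```python
# Illustrative weighted risk score, not the exact formula used by
# pkg_query.py::get_category_score. A triggered risk contributes its
# weight; the total is normalized by the sum of all weights.
RISK_WEIGHTS = {
    "pkg_private_on_public_registry": 4,
    "pkg_min_versions": 2,
    "mod_create_min_seconds": 1,
    "latest_now_min_seconds": 0.5,
    "latest_now_max_seconds": 0.5,
    "pkg_min_maintainers": 2,
    "pkg_min_users": 0.25,
    "pkg_install_scripts": 2,
    "pkg_node_version": 0.5,
    "deprecated": 1,
}

def risk_score(triggered):
    """Score a package from the set of risk-category names it triggered."""
    total = sum(RISK_WEIGHTS.values())
    hit = sum(w for name, w in RISK_WEIGHTS.items() if name in triggered)
    return hit / total

def is_risky(triggered, max_score=0.5):
    # max_score stands in for config.pkg_max_risk_score (value assumed).
    return risk_score(triggered) > max_score
```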

Automatic adjustment

A parameter called created_now_quarantine_seconds is used to identify packages that are safely past the quarantine period (1 year). Certain risks such as pkg_min_versions and pkg_min_maintainers are suppressed for packages past the quarantine period. This adjustment helps reduce noise since it is unlikely that a malicious package can exist in a registry unnoticed for over a year.

Configuring weights

All parameters can be customized using environment variables. For example:

export PKG_MIN_VERSIONS=4 sets the minimum-versions threshold to 4.

License scan

dep-scan can scan the dependencies for any license limitations and report them directly on the console log. To enable license scanning set the environment variable FETCH_LICENSE to true.

export FETCH_LICENSE=true

The license data is sourced from choosealicense.com and is quite limited. If the license of a given package cannot be reliably matched against this list, it is silently ignored to reduce noise. This behaviour could change in the future once the detection logic improves.

Alternatives

Dependency Check is considered to be the industry standard for open-source dependency scanning. After personally using this great product for a number of years, I decided to write my own from scratch, partly as a dedication to this project. By using a streaming database based on msgpack and JSON schema, dep-scan is more performant than Dependency Check in CI environments. Plus, with support for the GitHub advisory source and grafeas report export and submission, dep-scan is on track to become a next-generation dependency audit tool.

There are a number of other tools that piggyback on the Sonatype ossindex API server. For some reason, I always felt uncomfortable letting a commercial company track the usage of various projects across the world. dep-scan is therefore 100% private and guarantees never to perform any tracking!




Http-Desync-Guardian - Analyze HTTP Requests To Minimize Risks Of HTTP Desync Attacks (Precursor For HTTP Request Smuggling/Splitting)

19 January 2022 at 20:30
By: Zion3R


Overview

HTTP/1.1 went through a long evolution from 1991 to 2014.

This means there is a variety of servers and clients, which might have different views on request boundaries, creating opportunities for desynchronization attacks (a.k.a. HTTP Desync).

It might seem simple to follow the latest RFC recommendations. However, for large-scale systems that have been around for a while, doing so may come with an unacceptable availability impact.

http_desync_guardian library is designed to analyze HTTP requests to prevent HTTP Desync attacks, balancing security and availability. It classifies requests into different categories and provides recommendations on how each tier should be handled.

It can be used either on raw HTTP request headers or on requests already parsed by an HTTP engine. Consumers may configure logging and metrics collection. Logging is rate limited and all user data is obfuscated.

If you think you might have found a security impacting issue, please follow our Security Notification Process.


Priorities

  • Uniformity across services is key. This means request classification, logging, and metrics must happen under the hood and with minimal available settings (e.g., log file destination).
  • Focus on reviewability. The test suite must require no knowledge about the library/programming languages but only about HTTP protocol. So it's easy to review, contribute, and re-use.
  • Security is efficient when it's easy for users. Our goal is to make integration of the library as simple as possible.
  • Ultralight. The overhead must be minimal and impose no tangible tax on request handling (see benchmarks).

Supported HTTP versions

The main focus of this library is HTTP/1.1. See tests for all covered cases. Predecessors of HTTP/1.1 don't support connection re-use, which limits opportunities for HTTP Desync; however, some proxies may upgrade such requests to HTTP/1.1 and re-use backend connections, which may allow crafting malicious HTTP/1.0 requests. That's why they are analyzed using the same criteria as HTTP/1.1. Other protocol versions have the following exceptions:

  • HTTP/0.9 requests are never considered Compliant, but are classified as Acceptable. If either Content-Length or Transfer-Encoding is present, the request is Ambiguous.
  • HTTP/1.0 - the presence of Transfer-Encoding makes a request Ambiguous.
  • HTTP/2+ is out of scope. But if your proxy downgrades HTTP/2 to HTTP/1.1, make sure the outgoing request is analyzed.

See documentation to learn more.
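The per-version exceptions above can be sketched as a small classifier. Python is used here purely for illustration; the names (Tier, classify_by_version) are not the library's actual API, and full HTTP/1.1 analysis is deliberately not modeled:

```python
from enum import Enum

class Tier(Enum):
    COMPLIANT = "Compliant"
    ACCEPTABLE = "Acceptable"
    AMBIGUOUS = "Ambiguous"

def classify_by_version(version, headers):
    """Apply only the version-specific exceptions described above.

    `headers` is a set of lower-cased header names. The full HTTP/1.1
    request analysis is out of scope for this sketch.
    """
    framing = {"content-length", "transfer-encoding"}
    if version == "HTTP/0.9":
        # Never Compliant; Ambiguous if any framing header is present.
        return Tier.AMBIGUOUS if headers & framing else Tier.ACCEPTABLE
    if version == "HTTP/1.0" and "transfer-encoding" in headers:
        # Transfer-Encoding makes an HTTP/1.0 request Ambiguous.
        return Tier.AMBIGUOUS
    # HTTP/1.0 and HTTP/1.1 otherwise go through the full analysis,
    # which this sketch does not implement.
    return Tier.COMPLIANT
```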

Usage from C

This library is designed to be primarily used from HTTP engines written in C/C++.

  1. Install cbindgen: cargo install --force cbindgen
  2. Generate the header file:
    • Run cbindgen --output http_desync_guardian.h --lang c for C.
    • Run cbindgen --output http_desync_guardian.h --lang c++ for C++.
  3. Run cargo build --release. The binaries are in ./target/release/libhttp_desync_guardian.* files.

Learn more: generic and Nginx examples.

#include "http_desync_guardian.h"

/*
 * http_engine_request_t - already parsed by the HTTP engine
 */
static int check_request(http_engine_request_t *req) {
    http_desync_guardian_request_t guardian_request = construct_http_desync_guardian_from(req);
    http_desync_guardian_verdict_t verdict = {0};

    http_desync_guardian_analyze_request(&guardian_request, &verdict);

    switch (verdict.tier) {
        case REQUEST_SAFETY_TIER_COMPLIANT:
            // The request is good. Green light.
            break;
        case REQUEST_SAFETY_TIER_ACCEPTABLE:
            // Reject, if mode == STRICTEST
            // Otherwise, OK
            break;
        case REQUEST_SAFETY_TIER_AMBIGUOUS:
            // The request is ambiguous.
            // Reject, if mode == STRICTEST
            // Otherwise send it, but don't reuse the FE/BE connections.
            break;
        case REQUEST_SAFETY_TIER_SEVERE:
            // Send 400 and close the FE connection.
            break;
        default:
            // unreachable code
            abort();
    }
}

Usage from Rust

See benchmarks as an example of usage from Rust.

Security issue notifications

If you discover a potential security issue in http_desync_guardian we ask that you notify AWS Security via our vulnerability reporting page. Please do not create a public github issue.

Security

See CONTRIBUTING for more information.



Pip-Audit - Audits Python Environments And Dependency Trees For Known Vulnerabilities

19 January 2022 at 11:30
By: Zion3R


pip-audit is a tool for scanning Python environments for packages with known vulnerabilities. It uses the Python Packaging Advisory Database (https://github.com/pypa/advisory-db) via the PyPI JSON API as a source of vulnerability reports.

This project is developed by Trail of Bits with support from Google. This is not an official Google product.


Features

  • Support for auditing local environments and requirements-style files
  • Support for multiple vulnerability services (PyPI, OSV)
  • Support for emitting SBOMs in CycloneDX XML or JSON
  • Human and machine-readable output formats (columnar, JSON)
  • Seamlessly reuses your existing local pip caches

Installation

pip-audit requires Python 3.6 or newer, and can be installed directly via pip:

python -m pip install pip-audit

Third-party packages

pip-audit can also be installed via conda:

conda install -c conda-forge pip-audit

Third-party packages are not directly supported by this project. Please consult your package manager's documentation for more detailed installation guidance.

Usage

You can run pip-audit as a standalone program, or via python -m:

pip-audit --help
python -m pip_audit --help
usage: pip-audit [-h] [-V] [-l] [-r REQUIREMENTS] [-f FORMAT] [-s SERVICE]
                 [-d] [-S] [--desc [{on,off,auto}]] [--cache-dir CACHE_DIR]
                 [--progress-spinner {on,off}] [--timeout TIMEOUT]
                 [--path PATHS] [-v]

audit the Python environment for dependencies with known vulnerabilities

optional arguments:
  -h, --help            show this help message and exit
  -V, --version         show program's version number and exit
  -l, --local           show only results for dependencies in the local
                        environment (default: False)
  -r REQUIREMENTS, --requirement REQUIREMENTS
                        audit the given requirements file; this option can be
                        used multiple times (default: None)
  -f FORMAT, --format FORMAT
                        the format to emit audit results in (choices: columns,
                        json, cyclonedx-json, cyclonedx-xml) (default: columns)
  -s SERVICE, --vulnerability-service SERVICE
                        the vulnerability service to audit dependencies
                        against (choices: osv, pypi) (default: pypi)
  -d, --dry-run         collect all dependencies but do not perform the
                        auditing step (default: False)
  -S, --strict          fail the entire audit if dependency collection fails
                        on any dependency (default: False)
  --desc [{on,off,auto}]
                        include a description for each vulnerability; `auto`
                        defaults to `on` for the `json` format. This flag has
                        no effect on the `cyclonedx-json` or `cyclonedx-xml`
                        formats. (default: auto)
  --cache-dir CACHE_DIR
                        the directory to use as an HTTP cache for PyPI; uses
                        the `pip` HTTP cache by default (default: None)
  --progress-spinner {on,off}
                        display a progress spinner (default: on)
  --timeout TIMEOUT     set the socket timeout (default: 15)
  --path PATHS          restrict to the specified installation path for
                        auditing packages; this option can be used multiple
                        times (default: [])
  -v, --verbose         give more output; this setting overrides the
                        `PIP_AUDIT_LOGLEVEL` variable and is equivalent to
                        setting it to `debug` (default: False)

Exit codes

On completion, pip-audit will exit with a code indicating its status.

The current codes are:
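In CI, the zero/non-zero distinction is usually what matters. A minimal sketch, assuming a non-zero exit code signals findings or a failure (consult the exit-code table for the precise meaning of each code):

```python
import subprocess

def audit_passes(*args):
    """Run pip-audit and report whether it exited cleanly.

    Only the zero/non-zero distinction is used here; the exact
    meaning of each non-zero code is documented by pip-audit.
    """
    proc = subprocess.run(["pip-audit", *args])
    return proc.returncode == 0
```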

Examples

Audit dependencies for the current Python environment:

$ pip-audit
No known vulnerabilities found

Audit dependencies for a given requirements file:

$ pip-audit -r ./requirements.txt
No known vulnerabilities found

Audit dependencies for the current Python environment excluding system packages:

$ pip-audit -r ./requirements.txt -l
No known vulnerabilities found

Audit dependencies when there are vulnerabilities present:

$ pip-audit
Found 2 known vulnerabilities in 1 packages
Name  Version ID             Fix Versions
----  ------- --             ------------
Flask 0.5     PYSEC-2019-179 1.0
Flask 0.5     PYSEC-2018-66  0.12.3

Audit dependencies including descriptions:

$ pip-audit --desc
Found 2 known vulnerabilities in 1 packages
Name  Version ID             Fix Versions Description
----  ------- --             ------------ -----------
Flask 0.5     PYSEC-2019-179 1.0          The Pallets Project Flask before 1.0 is affected by: unexpected memory usage. The impact is: denial of service. The attack vector is: crafted encoded JSON data. The fixed version is: 1. NOTE: this may overlap CVE-2018-1000656.
Flask 0.5     PYSEC-2018-66  0.12.3       The Pallets Project flask version Before 0.12.3 contains a CWE-20: Improper Input Validation vulnerability in flask that can result in Large amount of memory usage possibly leading to denial of service. This attack appear to be exploitable via Attacker provides JSON data in incorrect encoding. This vulnerability appears to have been fixed in 0.12.3. NOTE: this may overlap CVE-2019-1010083.

Audit dependencies in JSON format:

$ pip-audit -f json | jq
Found 2 known vulnerabilities in 1 packages
[
{
"name": "flask",
"version": "0.5",
"vulns": [
{
"id": "PYSEC-2019-179",
"fix_versions": [
"1.0"
],
"description": "The Pallets Project Flask before 1.0 is affected by: unexpected memory usage. The impact is: denial of service. The attack vector is: crafted encoded JSON data. The fixed version is: 1. NOTE: this may overlap CVE-2018-1000656."
},
{
"id": "PYSEC-2018-66",
"fix_versions": [
"0.12.3"
],
"description": "The Pallets Project flask version Before 0.12.3 contains a CWE-20: Improper Input Validation vulnerability in flask that can result in Large amount of memory usage possibly leading to denial of service. This attack appear to be exploitable via Attacker provides JSON data in incorrect encoding. This vulnerability appears to have been fixed in 0.12.3. NOTE: this may overlap CVE-2019-1010083."
}
]
},
{
"name": "jinja2",
"version": "3.0.2",
"vulns": []
},
{
"name": "pip",
"version": "21.3.1",
"vulns": []
},
{
"name": "setuptools",
"version": "57.4.0",
"vulns": []
},
{
"name": "werkzeug",
"version": "2.0.2",
"vulns": []
},
{
"name": "markupsafe",
"version": "2.0.1",
"vulns": []
}
]
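The JSON output is easy to post-process. Below is a minimal sketch that extracts only the packages with findings; the field names are taken from the sample output shown above:

```python
import json

def vulnerable_packages(audit_json):
    """Extract (name, version, vuln ids) for packages with findings.

    `audit_json` is the document emitted by `pip-audit -f json`:
    a list of {"name", "version", "vulns": [...]} objects.
    """
    report = json.loads(audit_json)
    return [
        (pkg["name"], pkg["version"], [v["id"] for v in pkg["vulns"]])
        for pkg in report
        if pkg["vulns"]
    ]
```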

Security Model

This section exists to describe the security assumptions you can and must not make when using pip-audit.

TL;DR: If you wouldn't pip install it, you should not pip audit it.

pip-audit is a tool for auditing Python environments for packages with known vulnerabilities. A "known vulnerability" is a publicly reported flaw in a package that, if uncorrected, might allow a malicious actor to perform unintended actions.

pip-audit can protect you against known vulnerabilities by telling you when you have them, and how you should upgrade them. For example, if you have somepackage==1.2.3 in your environment, pip-audit can tell you that it needs to be upgraded to 1.2.4.

You can assume that pip-audit will make a best effort to fully resolve all of your Python dependencies and either fully audit each or explicitly state which ones it has skipped, as well as why it has skipped them.

pip-audit is not a static code analyzer. It analyzes dependency trees, not code, and it cannot guarantee that arbitrary dependency resolutions occur statically. To understand why this is, refer to Dustin Ingram's excellent post on dependency resolution in Python.

As such: you must not assume that pip-audit will defend you against malicious packages. In particular, it is incorrect to treat pip-audit -r INPUT as a "more secure" variant of pip-audit. For all intents and purposes, pip-audit -r INPUT is functionally equivalent to pip install -r INPUT, with a small amount of non-security isolation to avoid conflicts with any of your local environments.

Licensing

pip-audit is licensed under the Apache 2.0 License.

pip-audit reuses and modifies examples from resolvelib, which is licensed under the ISC license.

Contributing

See the contributing docs for details.

Code of Conduct

Everyone interacting with this project is expected to follow the PSF Code of Conduct.



goCabrito - Super Organized And Flexible Script For Sending Phishing Campaigns

18 January 2022 at 20:30
By: Zion3R


Super organized and flexible script for sending phishing campaigns.

Features

  • Sends to a single email
  • Sends to lists of emails (text)
  • Sends to lists of emails with first and last names (CSV)
  • Supports attachments
  • Splits emails into groups
  • Delays sending between groups
  • Supports tags to be placed and replaced in the message's body
    • Add the {{name}} tag to the HTML message to be replaced with the recipient's name (used with --to CSV).
    • Add the {{track-click}} tag to a URL in the HTML message.
    • Add the {{track-open}} tag to the HTML message.
    • Add the {{num}} tag to be replaced with a random phone number.
  • Supports individual profiles for different campaigns to avoid mistakes and confusion.
  • Supports creating a database of sent emails, each with its unique hash (useful with getCabrito)
  • Supports a dry run, to test your campaign against your profile without sending any email before the launch.
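The group-splitting and delay behaviour listed above can be sketched as follows (Python for illustration only; goCabrito itself is Ruby, and the defaults mirror the -g 3 -d 10 example later in this post):

```python
import time

def send_in_groups(recipients, send, group_size=3, delay=10):
    """Send to recipients in groups of `group_size`, sleeping `delay`
    seconds between groups (mirrors the -g/-d options)."""
    for start in range(0, len(recipients), group_size):
        for rcpt in recipients[start:start + group_size]:
            send(rcpt)
        # Only sleep if another group remains.
        if start + group_size < len(recipients):
            time.sleep(delay)
```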

Qs & As

Why not use goPhish?

goPhish is a great choice too. But I prefer flexibility and simplicity at the same time. I have used goPhish various times, but at some point I either find it overwhelming or inflexible.

Most of the time, I don't need all these statistics; I just need a flexible way to prepare my phishing campaigns and send them. Each time I use goPhish I have to go and check the documentation on how to add a website, forward specific requests, etc. So I created goCabrito and getCabrito.

goCabrito optionally generates a unique URL for each email for tracking:

  • Email Opening tracking: Tracking Pixel
  • Email Clicking tracking

by generating a hash for each email, appending it to the end of the URL or image URL, and storing this information along with other details useful for getCabrito to import and serve. This feature is the only thing that connects goCabrito with getCabrito, so no panic!
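The tag replacement and per-email hashing described above can be sketched like this. Python is used for illustration only; the hash construction and the pixel URL (example.com) are assumptions, not goCabrito's actual implementation:

```python
import hashlib
import random

def render_message(template, email, name):
    """Fill the {{...}} tags for one recipient.

    The SHA-256-based hash and the example.com tracking-pixel URL are
    illustrative assumptions; goCabrito's actual scheme may differ.
    """
    email_hash = hashlib.sha256(email.lower().encode()).hexdigest()[:16]
    phone = "555{:07d}".format(random.randrange(10**7))  # random phone number
    body = (
        template.replace("{{name}}", name)
        .replace("{{track-click}}", email_hash)
        .replace("{{track-open}}",
                 '<img src="https://example.com/p/{}">'.format(email_hash))
        .replace("{{num}}", phone)
    )
    return body, email_hash
```

The returned hash is what would be stored in the sqlite database (-D) so the tracking server can map a request back to a recipient.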

What's with the "Cabrito" thing?

It's just the name of one of my favorite restaurants, chosen by one of my team.

Prerequisites

Install the gems' system dependencies

sudo apt-get install build-essential libsqlite3-dev

Install gems

gem install mail sqlite3

Usage

goCabrito.rb — A simple yet flexible email sender.

Help menu:
-s, --server HOST:PORT SMTP server and its port.
e.g. smtp.office365.com:587
-u, --user USER Username to authenticate.
e.g. [email protected]
-p, --pass PASS Password to authenticate
-f, --from EMAIL Sender's email (mostly the same as sender email)
e.g. [email protected]
-t, --to EMAIL|LIST|CSV The receiver's email or a file list of receivers.
e.g. [email protected] or targets.lst or targets.csv
The csv expected to be in fname,lname,email format without header.
-c, --copy EMAIL|LIST|CSV The CC'ed receiver's email or a file list of receivers.
-b, --bcopy EMAIL|LIST|CSV The BCC'ed receiver's email or a file list of receivers.
-B, --body MSG|FILE The mail's body string or a file contains the body (not attachements.)
For click and message opening and other trackings:
Add {{track-click}} tag to URL in the HTML message.
eg: http://phisher.com/file.exe/{{track-click}}
Add {{track-open}} tag into the HTML message.
eg: <html><body><p>Hi</p>{{track-open}}</body></html>
Add {{name}} tag into the HTML message to be replaced with name (used with --to CSV).
eg: <html><body><p>Dear {{name}},</p></body></html>
Add {{num}} tag to be replaced with a random phone number.
-a, --attachments FILE1,FILE2 One or more files to be attached seperated by comma.
-S, --subject TITLE The mail subject/title.
--no-ssl Do NOT use SSL connect when connect to the server (default: false).
-g, --groups NUM Number of receivers to send mail to at once. (default all in one group)
-d, --delay NUM The delay, in seconds, to wait after sending each group.
-P, --profile FILE A json file contains all the the above settings in a file
-D, --db FILE Create a sqlite database file (contains emails & its tracking hashes) to be imported by 'getCabrito' server.
--dry Dry test, no actual email sending.
-h, --help Show this message.

Usage:
goCabrito.rb <OPTIONS>
Examples:
$goCabrito.rb -s smtp.office365.com:587 -u [email protected] -p [email protected] \
-f [email protected] -t targets1.csv -c targets2.lst -b targets3.lst \
-B msg.html -S "This's title" -a file1.docx,file2.xlsx -g 3 -d 10

$goCabrito.rb --profile prf.json

How do you really use it?

  1. I create a directory for each customer.
  2. Under the customer's directory, I create a directory for each campaign. This subdirectory contains:
    • The profile
    • The To, CC & BCC lists in CSV format
    • The message body in HTML format
  3. I configure the profile and prepare my HTML.
  4. I execute the campaign profile in dry mode first (check the profile file's dry value):
     ruby goCabrito.rb -P CUSTOMER/3/camp3.json --dry
  5. I remove the --dry switch and make sure the dry value is false in the config file.
  6. I send to a test email.
  7. I send to the real lists.

Troubleshooting

SMTP authentication issues

Nowadays, many cloud-based email vendors block SMTP authentication by default (e.g. Office365, GSuite), which will of course cause an error. To solve this, here are some steps to help you enable SMTP authentication on different vendors.

Enable SMTP Auth Office 365

To enable SMTP Auth globally, use PowerShell.

  • SSL support for Linux/*nix (running pwsh with sudo is required)
$ sudo pwsh
  • Install PSWSMan
Install-Module -Name PSWSMan -Scope AllUsers
Install-WSMan
  • Install ExchangeOnline Module
Install-Module -Name ExchangeOnlineManagement
  • Load ExchangeOnline Module
Import-Module ExchangeOnlineManagement
  • Connect to Office 365 Exchange using the main admin user; it will prompt you to enter credentials.
Connect-ExchangeOnline -InlineCredential

The above command will prompt you to enter Office365 admin's credentials

  PowerShell credential request
Enter your credentials.
User: [email protected]
Password for user [email protected]: **********
  • Or use this to open a web browser to enter your credentials in case of 2FA.
Connect-ExchangeOnline -UserPrincipalName [email protected] 
  • Enable SMTP Auth globally
Set-TransportConfig -SmtpClientAuthenticationDisabled $false
  • To enable SMTP Auth for a specific mailbox
Set-CASMailbox -Identity [email protected] -SmtpClientAuthenticationDisabled $false
Get-CASMailbox -Identity [email protected] | Format-List SmtpClientAuthenticationDisabled
  • Confirm
Get-TransportConfig | Format-List SmtpClientAuthenticationDisabled

Then follow these steps:

  1. Go to the Azure portal (https://aad.portal.azure.com/) from the admin panel (https://admin.microsoft.com/)
  2. Select All Services
  3. Select Tenant Properties
  4. Click Manage Security defaults
  5. Select No under Enable Security defaults

Google GSuite

Contribution

  • By fixing bugs
  • By enhancing the code
  • By reporting issues
  • By requesting features
  • By spreading the script
  • By clicking the star :)


Driftwood - Private Key Usage Verification

18 January 2022 at 11:30
By: Zion3R


Driftwood is a tool that can enable you to look up whether a private key is used for things like TLS or as a GitHub SSH key for a user.

Driftwood performs lookups with the computed public key, so the private key never leaves the machine where you run the tool. Additionally, it supports some basic password cracking for encrypted keys.
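The core idea, deriving public material from the private key and looking only that up, can be sketched in Python with toy RSA numbers. This is purely illustrative; the real tool computes proper key fingerprints, and these values are made up:

```python
import hashlib

# Toy RSA private key components (tiny primes, illustration only)
p, q, e = 61, 53, 17

# The public modulus n is derivable from the private key...
n = p * q

# ...and only a digest of the *public* material is used for the lookup,
# so the private key itself never has to leave the machine.
fingerprint = hashlib.sha256(f"{n}:{e}".encode()).hexdigest()
print(fingerprint)
```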


Installation

Three easy ways to get started.

Run with Docker

cat private.key | docker run --rm -i trufflesecurity/driftwood --pretty-json -

Run pre-built binary

Download the binary from the releases page and run it.

Build yourself

go install github.com/trufflesecurity/driftwood@latest

Usage

Minimal usage is

$ driftwood path/to/privatekey.pem

Run with --help to see more options.

Library Usage

Packages under pkg/ are libraries that can be used for external consumption. Packages under pkg/exp/ are considered to be experimental status and may have breaking changes.



reFlutter - Flutter Reverse Engineering Framework

17 January 2022 at 20:30
By: Zion3R


This framework helps with reverse engineering of Flutter apps using a patched version of the Flutter library, which is already compiled and ready for app repacking. The library's snapshot deserialization process is modified to allow you to perform dynamic analysis in a convenient way.


Key features:

  • socket.cc is patched for traffic monitoring and interception;
  • dart.cc is modified to print classes, functions and some fields;
  • contains minor changes for successful compilation;
  • if you would like to implement your own patches, manual Flutter code changes are supported using a specially crafted Dockerfile

Supported engines

  • Android: arm64, arm32;
  • iOS: arm64;
  • Release: Stable, Beta

Install

# Linux, Windows, MacOS
pip3 install reflutter

Usage

$ reflutter main.apk

Please enter your Burp Suite IP: <input_ip>

SnapshotHash: 8ee4ef7a67df9845fba331734198a953
The resulting apk file: ./release.RE.apk
Please sign the apk file

Configure Burp Suite proxy server to listen on *:8083
Proxy Tab -> Options -> Proxy Listeners -> Edit -> Binding Tab

Then enable invisible proxying in Request Handling Tab
Support Invisible Proxying -> true

$ reflutter main.ipa

Traffic interception

You need to specify the IP of your Burp Suite Proxy Server located in the same network where the device with the flutter application is. Next, you should configure the Proxy in BurpSuite -> Listener Proxy -> Options tab

  • Add port: 8083
  • Bind to address: All interfaces
  • Request handling: Support invisible proxying = True

You don't need to install any certificates, and on an Android device you don't need root access either. reFlutter also allows you to bypass some Flutter certificate pinning implementations.

Usage on Android

The resulting apk must be aligned and signed. I use uber-apk-signer java -jar uber-apk-signer.jar --allowResign -a release.RE.apk. To see which code is loaded through DartVM, you need to run the application on the device. reFlutter prints its output in logcat with the reflutter tag

$ adb logcat -e reflutter | sed 's/.*DartVM//' >> reflutter.txt
code output
Library:'package:anyapp/navigation/DeepLinkImpl.dart' Class: Navigation extends Object {  

String* DeepUrl = anyapp://evil.com/ ;

Function 'Navigation.': constructor. (dynamic, dynamic, dynamic, dynamic) => NavigationInteractor {

}

Function 'initDeepLinkHandle':. (dynamic) => Future<void>* {

}

Function '[email protected]':. (dynamic, dynamic, {dynamic navigator}) => void {

}

}

Library:'package:anyapp/auth/navigation/AuthAccount.dart' Class: AuthAccount extends Account {

PlainNotificationToken* _instance = sentinel;

Function 'getAuthToken':. (dynamic, dynamic, dynamic, dynamic) => Future<AccessToken*>* {

}

Function 'checkEmail':. (dynamic, dynamic) => Future<bool*>* {

}

Function 'validateRestoreCode':. (dynamic, dynamic, dynamic) => Future<bool*>* {

}

Function 'sendSmsRestorePassword':. (dynamic, dynamic) => Future<bool*>* {

}
}

Usage on iOS

Use the IPA file created after the execution of reflutter main.ipa command. To see which code is loaded through DartVM, you need to run the application on the device. reFlutter prints its output in console logs in XCode with the reflutter tag.

To Do

  • Display absolute code offset for functions;
  • Extract more strings and fields;
  • Add socket patch;
  • Extend engine support to Debug using Fork and Github Actions;
  • Improve detection of App.framework and libapp.so inside zip archive

Build Engine

The engines are built using reFlutter in GitHub Actions. To build the desired version, commits and snapshot hashes are taken from this table. The snapshot hash is extracted from storage.googleapis.com/flutter_infra_release/flutter/<hash>/android-arm64-release/linux-x64.zip
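The download URL described above can be assembled mechanically. A small Python helper, sketched under the assumption that the target platform string (android-arm64-release) and archive name follow the layout quoted above:

```python
def engine_zip_url(snapshot_hash: str,
                   target: str = "android-arm64-release") -> str:
    # Layout from the text: flutter_infra_release/flutter/<hash>/<target>/linux-x64.zip
    return ("https://storage.googleapis.com/flutter_infra_release/flutter/"
            f"{snapshot_hash}/{target}/linux-x64.zip")

print(engine_zip_url("8ee4ef7a67df9845fba331734198a953"))
```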

release

Custom Build

If you would like to implement your own patches, manual Flutter code changes are supported using a specially crafted Docker image

sudo docker pull ptswarm/reflutter

# Linux, Windows
EXAMPLE BUILD ANDROID ARM64:
sudo docker run -e WAIT=300 -e x64=0 -e arm=0 -e HASH_PATCH=<Snapshot_Hash> -e COMMIT=<Engine_commit> --rm -iv${PWD}:/t ptswarm/reflutter

FLAGS:
-e x64=0 <disables building for the x64 architecture, use to reduce build time>
-e arm=0 <disables building for the arm architecture, use to reduce build time>
-e WAIT=300 <the amount of time in seconds you need to edit the source code>
-e HASH_PATCH=[Snapshot_Hash] <here you need to specify the snapshot hash which best matches the engine_commit line of the enginehash.csv table. It is used for proper patch search in reFlutter and for successful compilation>
-e COMMIT=[Engine_commit] <here you specify the commit for your engine version; take it from the enginehash.csv table or from the flutter/engine repo>


Inject-Assembly - Inject .NET Assemblies Into An Existing Process

17 January 2022 at 11:30
By: Zion3R

This tool is an alternative to traditional fork and run execution for Cobalt Strike. The loader can be injected into any process, including the current Beacon. Long-running assemblies will continue to run and send output back to the Beacon, similar to the behavior of execute-assembly.


There are two components of inject-assembly:

  1. BOF initializer: A small program responsible for injecting the assembly loader into a remote process with any arguments passed. It uses BeaconInjectProcess to perform the injection, meaning this behavior can be customized in a Malleable C2 profile or with process injection BOFs (as of version 4.5).

  2. PIC assembly loader: The bulk of the project. The loader will initialize the .NET runtime, load the provided assembly, and execute the assembly. The loader will create a new AppDomain in the target process so that the loaded assembly can be totally unloaded when execution is complete.

Communication between the remote process and Beacon occurs through a named pipe. The Aggressor script generates a pipe name and then passes it to the BOF initializer.
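The pipe-name generation step can be pictured with a short Python sketch. The name format below is a hypothetical stand-in; the actual format produced by the Aggressor script (and its SourcePoint-based naming) is not shown in the source:

```python
import random
import string

def random_pipe_name(seed=None) -> str:
    """Generate a random named-pipe path for loader <-> Beacon I/O.

    Hypothetical sketch: a real implementation would pick names that
    blend in with pipes commonly seen on the target host.
    """
    rng = random.Random(seed)
    suffix = "".join(rng.choices(string.ascii_lowercase + string.digits, k=10))
    return r"\\.\pipe" + "\\" + suffix

print(random_pipe_name())
```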

Notable Features

  • Patches Environment.Exit() to prevent the remote process from exiting.
  • .NET assembly header stomping (MZ bytes, e_lfanew, DOS Header, Rich Text, PE Header).
  • Random pipe name generation based on SourcePoint.
  • No blocking of the Beacon, even if the assembly is loaded into the current process.

Usage

Download and load the inject-assembly.cna Aggressor script into Cobalt Strike. You can then execute assemblies using the following command:

inject-assembly pid assembly [args...]

Specify 0 as the PID to execute in the current Beacon process.

It is recommended to use another tool, like FindObjects-BOF, to locate a process that already loads the .NET runtime, but this is not a requirement for inject-assembly to function.

Warnings

  • Currently only supports x64 remote processes.
  • There are several checks throughout the program to reduce the likelihood of crashing the remote process, but it could still happen.
  • The default Cobalt Strike process injection may get you caught. Consider a custom injection BOF or UDRL IAT hook.
  • Some assemblies rely on Environment.Exit() to finish executing. This will prevent the loader's cleanup phase from occurring, but you can still disconnect the named pipe using jobkill.
  • Uncomment lines 3 or 4 of scmain.c to enable error or verbose modes, respectively. These are disabled by default to reduce the shellcode size.

References

This project would not have been possible without the following projects:

Other features and inspiration were taken from the following resources:



Registry-Spy - Cross-platform Registry Browser For Raw Windows Registry Files

16 January 2022 at 20:30
By: Zion3R


Registry Spy is a free, open-source cross-platform Windows Registry viewer. It is a fast, modern, and versatile explorer for raw registry files.

Features include:

  • Fast, on-the-fly parsing means no upfront overhead
  • Open multiple hives at a time
  • Searching
  • Hex viewer
  • Modification timestamps

Requirements

  • Python 3.8+

Installation

Download the latest version from the releases page. Alternatively, use one of the following methods.

pip (recommended)

  1. pip install registryspy
  2. registryspy

Manual

  1. pip install -r requirements.txt
  2. python setup.py install
  3. registryspy

Standalone

  1. pip install -r requirements.txt
  2. python registryspy.py

Screenshots

Main Window

Find Dialog

Building

Dependencies:

  • PyInstaller 4.5+

Regular building: pyinstaller registryspy_install.spec

Creating a single file: pyinstaller registryspy_onefile.spec

License

Registry Spy

Copyright (C) 2021 Andy Smith

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see https://www.gnu.org/licenses/.



TokenUniverse - An Advanced Tool For Working With Access Tokens And Windows Security Policy

16 January 2022 at 11:30
By: Zion3R


Token Universe is an advanced tool that provides a wide range of possibilities to research Windows security mechanisms. It has a convenient interface for creating, viewing, and modifying access tokens, and for managing the Local Security Authority and Security Account Manager databases. It allows you to obtain and impersonate different security contexts, manage privileges, auditing settings, and so on.


My goal is to create a useful tool that implements almost everything I know about access tokens and Windows security model in general. And, also, to learn even more in the process. I believe that such a program can become a valuable instrument for researchers and those who want to learn more about the security subsystem. You are welcome to suggest any ideas and report bugs.

Screenshots


Feature list

Token-related functionality

Obtaining tokens

  • Open process/thread token
  • Open effective thread token (via direct impersonation)
  • Query session token
  • Log in user using explicit credentials
  • Log in user without credentials (S4U logon)
  • Duplicate tokens
  • Duplicate handles
  • Open linked token
  • Filter tokens
  • Create LowBox tokens
  • Create restricted tokens using the Safer API
  • Search for opened handles
  • Create anonymous token
  • Impersonate logon session token via pipes
  • Open clipboard token

Highly privileged operations

  • Add custom group membership while logging in users (requires Tcb Privilege)
  • Create custom token from scratch (requires Create Token Privilege)

Viewing

  • User
  • Statistics, source, flags
  • Extended flags (TOKEN_*)
  • Restricting SIDs
  • App container SID and number
  • Capabilities
  • Claims
  • Trust level
  • Logon session type (filtered/elevated/default)
  • Logon session information
  • Verbose terminal session information
  • Object and handle information (access, attributes, references)
  • Object creator (PID)
  • List of processes that have handles to this object
  • Creation and last modification times

Viewing & editing

  • Groups (enable/disable)
  • Privileges (enable/disable/remove)
  • Session
  • Integrity level (lower/raise)
  • UIAccess, mandatory policy
  • Virtualization (enable/disable & allow/disallow)
  • Owner and primary group
  • Originating logon session
  • Default DACL
  • Security descriptor
  • Audit overrides
  • Handle flags (inherit, protect)

Using

  • Impersonation
  • Safe impersonation
  • Direct impersonation
  • Assign primary token
  • Send handle to process
  • Create process with token
  • Share with another instance of TokenUniverse

Other actions

  • Compare tokens
  • Linking logon sessions to create UAC-friendly tokens
  • Logon session relation map

AppContainer profiles

  • Viewing AppContainer information
  • Listing AppContainer profiles per user
  • Listing child AppContainers
  • Creating/deleting AppContainers

Local Security Authority

  • Global audit settings
  • Per-user audit settings
  • Privilege assignment
  • Logon rights assignment
  • Quotas
  • Security
  • Enumerate accounts with privilege
  • Enumerate accounts with right

Security Account Manager

  • Domain information
  • Group information
  • Alias information
  • User information
  • Enumerate domain groups/aliases/users
  • Enumerate group members
  • Enumerate alias members
  • Manage group members
  • Manage alias members
  • Create groups
  • Create aliases
  • Create users
  • Sam object tree
  • Security

Process creation

Methods

  • CreateProcessAsUser
  • CreateProcessWithToken
  • WMI
  • RtlCreateUserProcess
  • RtlCreateUserProcessEx
  • NtCreateUserProcess
  • NtCreateProcessEx
  • CreateProcessWithLogon (credentials)
  • ShellExecuteEx (no token)
  • ShellExecute via IShellDispatch2 (no token)
  • CreateProcess via code injection (no token)
  • WdcRunTaskAsInteractiveUser (no token)

Parameters

  • Current directory
  • Desktop
  • Window show mode
  • Flags (inherit handles, create suspended, breakaway from job, ...)
  • Environmental variables
  • Parent process override
  • Mitigation policies
  • Child process policy
  • Job assignment
  • Run as invoker compatibility
  • AppContainer SID
  • Capabilities

Interface features

  • Immediate crash notification
  • Window station and desktop access checks
  • Debug messages reports

Process list

  • Hierarchy
  • Icons
  • Listing processes from Low integrity & AppContainer
  • Basic actions (resume/suspend, ...)
  • Customizable columns
  • Highlighting
  • Security
  • Handle table manipulation

Interface features

  • Restart as SYSTEM
  • Restart as SYSTEM+ (with Create Token Privilege)
  • Customizable columns
  • Graphical hash icons
  • Auto-detect inherited handles
  • Our own security editor with arbitrary SIDs and mandatory label modification
  • Customizable list of suggested SIDs
  • Detailed error status information
  • Detailed suggestions on errors

Misc. ideas

  • [?] Logon session creation (requires an authentication package?)
  • [?] Job-based token filtration (unsupported on Vista+)
  • [?] Privilege and audit category description from wsecedit.dll


Iptable_Evil - An Evil Bit Backdoor For Iptables

15 January 2022 at 20:30
By: Zion3R


iptable_evil is a very specific backdoor for iptables that allows all packets with the evil bit set, no matter the firewall rules.

The initial implementation is in iptable_evil.c, which adds a table to iptables and requires modifying a kernel header to insert a spot for it. The second implementation is a modified version of the ip_tables core module and its dependents to allow all Evil packets.

I have tested it on Linux kernel version 5.8.0-48, but this should be applicable to pretty much any kernel version with a full implementation of iptables.


Explanation of the Evil Bit

RFC 3514, published April 1st, 2003, defines the previously-unused high-order bit of the IP fragment offset field as a security flag. To RFC-compliant systems, a 1 in that bit position indicates evil intent and will cause the packet to be blocked.

By default, this bit is turned off, but can be turned on in your software if you're assembling the entirety of your IP packet (as some hacking tools do), or in the Linux kernel using this patch (mirrored in this repository here).
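Setting the bit by hand is a one-line operation on the flags/fragment-offset word of a raw IPv4 header, as this minimal Python sketch shows (actually emitting such a packet would additionally need a raw socket or a tool like Scapy):

```python
import struct

def set_evil_bit(ip_header: bytes) -> bytes:
    """Set the RFC 3514 'evil' bit: the high-order bit of the
    flags/fragment-offset word at bytes 6-7 of the IPv4 header."""
    flags_frag, = struct.unpack_from("!H", ip_header, 6)
    return ip_header[:6] + struct.pack("!H", flags_frag | 0x8000) + ip_header[8:]

header = bytes(20)              # zeroed 20-byte IPv4 header, illustration only
evil = set_evil_bit(header)
print(evil[6])                  # 128, i.e. the 0x80 bit is now set
```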

How does the backdoor work?

When a packet is received by the Linux kernel, it is processed by iptables and either sent to userspace, rejected, or modified based on the rules configured.

In particular, each iptables table uses the function ipt_do_table in ip_tables.c to decide whether to accept a given packet. I have modified that to automatically accept any packet with the evil bit set and skip all further processing.

I also attempted to add another table (iptable_evil.c) that would accept all evil packets and hand others off to the standard tables for processing, but I never figured out how to pass the packets to the next table and decided that the ipt_do_table backdoor was enough as a proof of concept.

Why did you do this?

I needed to do and write up a decently large project in computing security for one of my classes, and this seemed like a cool idea. This is probably more work than he was expecting for this but Β―\_(ツ)_/Β―.

Build

In-Tree Build

The evil table requires modification of kernel headers, so installing it requires running with a kernel produced through the full tree build.

  • Copy the contents of replace-existing to your kernel source tree, overwriting existing files.
  • Copy iptable_evil.c to linux-X.Y.Z/net/ipv4/netfilter
  • (optional) copy ip_tables.c to linux-X.Y.Z/net/ipv4/netfilter
  • Compile the kernel according to your distro's process (should produce a package)
  • Install the package file
  • Reboot into your new kernel
  • iptables -t filter -L
  • iptables -t evil -L (this will have confused output, but it will load the module)

Out-of-Tree Build

This is significantly easier and faster, but does not support the evil table and marks the kernel as "tainted". It should be possible to copy the ko files produced by this to another computer with the exact same kernel version, but I haven't tested it.

  • Run make
  • rmmod iptable_*
  • rmmod ip_tables
  • insmod ip_tables.ko
  • insmod iptable_filter.ko

Testing/Demo

To test this, you either need to rebuild your entire kernel with this patch or create your own packets using a tool like Scapy. I went with the first option because I was already building the kernel for the evil table.

In the first screenshot, I have blocked all traffic to this VM in iptables, but I am still able to connect over SSH because my packets have the evil bit set, as the second screenshot shows.





When connecting to the backdoored VM from a VM that does not set the evil bit, the SSH connection will eventually time out.



Packet captures of backdoor and non-backdoor SSH connections are in the docs/ folder in this repo for your perusal.

Kernel Version

  • 5.8.0-48-generic (Ubuntu 20.04)

Further Information and Resources



Narthex - Modular Personalized Dictionary Generator

15 January 2022 at 11:30
By: Zion3R


Narthex (Greek: Νάρθηξ, νάρθηκας) is a modular & minimal dictionary generator for Unix and Unix-like operating systems, written in C and Shell. It contains autonomous Unix-style programs for the creation of personalised dictionaries that can be used for password recovery & security assessment. The programs make use of Unix text streams to collaborate with each other, according to the Unix philosophy. It is licensed under the GPL v3.0. Currently under development!


I made a video to explain the usage of Narthex to non-Unix people: https://www.youtube.com/watch?v=U0UmCeLJSkk&t=938s (the timestamp is intentional)

The tools

  • nhance - A capitalization tool that appends the results to the bottom of the dictionary.
  • ninc - An incrementation tool that multiplies alphabetical lines and appends an n++ at the end of each line.
  • ncom - A combination tool that creates different combinations between the existing lines of the dictionary.
  • nrev - A reversing tool that appends the reversed versions of the lines at the end of the dictionary.
  • nleet - A leetifier. Replaces characters with Leet equivalents, such as @ instead of a, or 3 instead of e.
  • nclean - A tool for removing passwords that don't meet your criteria (too short, no special characters etc.)
  • napp - A tool that appends characters or words before or after the lines of the dictionary.
  • nwiz - A wizard that asks for the information and combines the tools together to create a final dictionary.
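The kind of transformation nleet performs can be sketched in a few lines of Python; the exact substitution table nleet uses is an assumption here, based on the examples given above (@ for a, 3 for e):

```python
# Assumed leet substitution table, not necessarily nleet's actual one
LEET = str.maketrans({"a": "@", "e": "3", "i": "1", "o": "0", "s": "$"})

def leetify(word: str) -> str:
    """Replace characters with common leet equivalents."""
    return word.translate(LEET)

print(leetify("password"))  # p@$$w0rd
```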

Screenshots

Install

In order to install, execute the following commands:

$ git clone https://github.com/MichaelDim02/Narthex.git && cd Narthex
$ sudo make install

Usage

For easy use, there is a wizard program, nwiz, that you can use. Just run

$ nwiz

And it will ask you for the target's information & generate the dictionary for you.

Advanced usage

If you want to make full use of Narthex, you can read the manpages of each tool. What they all do, really, is enhance small dictionaries. They are really minimal, and use Unix text streams to read and output data. For example, save a couple of keywords into a text file words.txt, each on its own line, and run the following

$ cat words.txt | nhance -f | ncom | nrev | nleet | ninc 1 30 > dictionary.txt

and you'll see the results for yourself.



Espoofer - An Email Spoofing Testing Tool That Aims To Bypass SPF/DKIM/DMARC And Forge DKIM Signatures

14 January 2022 at 20:30
By: Zion3R

espoofer is an open-source testing tool to bypass SPF, DKIM, and DMARC authentication in email systems. It helps mail server administrators and penetration testers to check whether the target email server and client are vulnerable to email spoofing attacks or can be abused to send spoofing emails.



Figure 1. A case of our spoofing attacks on Gmail (Fixed, Demo video)

Why build this tool?

Email spoofing is a big threat to both individuals and organizations (Yahoo breach, John podesta). To address this problem, modern email services and websites employ authentication protocols -- SPF, DKIM, and DMARC -- to prevent email forgery.

Our latest research shows that the implementation of those protocols suffers a number of security issues, which can be exploited to bypass SPF/DKIM/DMARC protections. Figure 1 demonstrates one of our spoofing attacks to bypass DKIM and DMARC in Gmail. For more technical details, please see our Black Hat USA 2020 talk (with presentation video) or USENIX security 2020 paper.

In this repo, we summarize all test cases we found and integrate them into this tool to help administrators and security-practitioners quickly identify and locate such security issues.

Please use the following citation if you do scientific research (Click me).

Latex version:

@inproceedings{chen-email,
author = {Jianjun Chen and Vern Paxson and Jian Jiang},
title = {Composition Kills: A Case Study of Email Sender Authentication},
booktitle = {29th {USENIX} Security Symposium ({USENIX} Security 20)},
year = {2020},
isbn = {978-1-939133-17-5},
pages = {2183--2199},
url = {https://www.usenix.org/conference/usenixsecurity20/presentation/chen-jianjun},
publisher = {{USENIX} Association},
month = aug,
}

Word version:

Jianjun Chen, Vern Paxson, and Jian Jiang. "Composition kills: A case study of email sender authentication." In 29th USENIX Security Symposium (USENIX Security 20), pp. 2183-2199. 2020.

Installation

  • Download this tool
git clone https://github.com/chenjj/espoofer
  • Install dependencies
sudo pip3 install -r requirements.txt

Python version: Python 3 (>=3.7).

Usage

espoofer has three work modes: server ('s', default mode), client ('c') and manual ('m'). In server mode, espoofer works like a mail server to test validation in receiving services. In client mode, espoofer works as an email client to test validation in sending services. Manual mode is used for debug purposes.


Figure 2. Three types of attackers and their work modes

Server mode

To run espoofer in server mode, you need: 1) an IP address (1.2.3.4) whose outgoing port 25 is not blocked by the ISP, and 2) a domain (attack.com).

  1. Domain configuration
  • Set DKIM public key for attack.com
selector._domainkey.attack.com TXT "v=DKIM1; k=rsa; t=y; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDNjwdrmp/gcbKLaGQfRZk+LJ6XOWuQXkAOa/lI1En4t4sLuWiKiL6hACqMrsKQ8XfgqN76mmx4CHWn2VqVewFh7QTvshGLywWwrAJZdQ4KTlfR/2EwAlrItndijOfr2tpZRgP0nTY6saktkhQdwrk3U0SZmG7U8L9IPj7ZwPKGvQIDAQAB"
  • Set SPF record for attack.com
attack.com TXT "v=spf1 ip4:1.2.3.4 +all"
  1. Configure the tool in config.py
config ={
"attacker_site": b"attack.com", # attack.com
"legitimate_site_address": b"[email protected]", # legitimate.com
"victim_address": b"[email protected]", # [email protected]
"case_id": b"server_a1", # server_a1
}

You can find the case_id of all test cases using the -l option:

python3 espoofer.py -l
  1. Run the tool to send a spoofing email
python3 espoofer.py

You can change case_id in the config.py or use -id option in the command line to test different cases:

python3 espoofer.py -id server_a1

Client mode

To run espoofer in client mode, you need to have an account on the target email service. This attack exploits the failure of some email services to perform sufficient validation of emails received from local MUAs. For example, [email protected] tries to impersonate [email protected].

  1. Configure the tool in config.py
config ={
"legitimate_site_address": b"[email protected]", Β 
"victim_address": b"[email protected]",
"case_id": b"client_a1",

"client_mode": {
"sending_server": ("smtp.gmail.com", 587), Β # SMTP sending serve ip and port
"username": b"[email protected]", # Your account username and password
"password": b"your_passward_here",
},
}

You can find the case_id of all test cases using the -l option:

python3 espoofer.py -l

Note: sending_server should be the SMTP sending server address, not the receiving server address.

  1. Run the tool to send a spoofing email
python3 espoofer.py -m c

You can change case_id in the config.py and run it again, or you can use -id option in the command line:

python3 espoofer.py -m c -id client_a1

Manual mode

Here is an example of manual mode:

python3 espoofer.py -m m -helo attack.com -mfrom <[email protected]> -rcptto <[email protected]> -data raw_msg_here -ip 127.0.0.1 -port 25

Screenshots

  1. A brief overview of test cases.

Bugs found with this tool

Welcome to send a pull request to file your bug report here.

Q&A

  1. How do I know if the email has bypassed DMARC authentication successfully?

You can check the Authentication-Results header in the raw message headers. If the header shows dmarc=pass, it means the email has passed DMARC authentication. You can check some demo videos here.
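Checking for dmarc=pass programmatically is straightforward with Python's standard email module; the raw message below is a fabricated example:

```python
from email import message_from_string

# Fabricated raw message with an Authentication-Results header
raw = (
    "Authentication-Results: mx.example.com;\n"
    " spf=pass smtp.mailfrom=attack.com;\n"
    " dkim=pass header.d=attack.com;\n"
    " dmarc=pass header.from=legitimate.com\n"
    "From: admin@legitimate.com\n"
    "Subject: test\n"
    "\n"
    "body\n"
)

msg = message_from_string(raw)
auth = msg.get("Authentication-Results", "")
print("dmarc=pass" in auth)  # True
```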

  1. Why do emails fail to send?

There are several possible reasons if you fail to send an email: 1) your ISP blocks outgoing connections to port 25 to prevent spam; in this case, you need to ask the ISP for permission; 2) the IP address is on the spam list of the target email service; in many cases, you can resolve the problem at https://www.spamhaus.org/lookup/ ; 3) some email services check whether there is a PTR record for the sending IP, so you may also need to set a PTR record to bypass this check; 4) the email cannot pass the format validation of the target email service, so you may want to try a different test case.

  1. Why does the email go to the spam folder? Any way to avoid this?

Currently, espoofer focuses on bypassing SPF/DKIM/DMARC authentication and doesn't aim for spam filter bypass. But you could try to use a reputable sending IP address, domain, and benign message content to bypass the spam filter.

  1. Why did the email send successfully but not show up in either the inbox or the spam folder?

In our prior experiences, some email services filter suspicious emails silently.

  1. When testing server_a5/a6, why can't I set special characters like "(" in the domain?

You will need to set up your own authoritative DNS server, rather than use third-party DNS hosting services, as some DNS hosting services have restrictions on setting special characters. See issue.

Credits

Welcome to add more test cases.



Raven - Advanced Cyber Threat Map (Simplified, Customizable, Responsive)

14 January 2022 at 11:30
By: Zion3R


Raven is an advanced cyber threat map: simplified, customizable, and responsive. It uses D3.js with TopoJSON, has 247 countries, ~100,000 cities, and can be used in an isolated environment without external lookups!


Live - Demo

https://qeeqbox.github.io/raven/

Offline - Demo


Features

  • Uses D3.js (Not Anime.js)
  • Active threat map (Live and replay)
  • IP, country, city, and port info for each attack
  • Attacks stats for countries (Only known attacks)
  • Responsive interface (Move, drag, zoom in and out)
  • Customization options for countries and cities
  • 247 countries are listed on the interface (Not 174)
  • Optimized worldmap for faster rendering
  • Includes IP lookup, port information
  • Random simulation (IP, country, city)
  • Can be used online or offline (Static)
  • Theme picker module

Functions

Init the worldmap

qb_raven_map()                      //raven object constructor takes the following:

svg_id //SVG ID
world_type //round or 2d
selected_countries = [] //List of ISO_3166 alpha 2 countries that will be selected
remove_countries = [] //List of ISO_3166 alpha 2 countries that will be removed from the map
height //height of the worldmap
width //width of the worldmap
orginal_country_color //Hex color for all countries
clicked_country_color //Hex color will be applied to any clickable countries
selected_country_color //Hex color will be applied to any selected countries
countries_json_location //Countries JSON file (qcountries.json)
cities_json_location //Cities JSON file (qcities.json)
global_timeout //Global timeout for animation
db_length //Size of the db that stores attack events
global_stats_limit //Limit attack stats of a country
verbose //Verbose output (keep off; use only for debugging)

raven = new qb_raven_map("#qb-worldmap-svg", null, [], ["aq"], window.innerHeight, window.innerWidth, "#4f4f4f", "#6c4242", "#ff726f", "qcountries.json", "qcities.json", 2000, 100, 10, true)

raven.init_world() //Init the worldmap (The worldmap should be ready for you to use at this point)

Plotting data

raven.add_marker_by_name()          //Plot info by country or city name
raven.add_marker_by_ip() //Plot data by IP address
raven.add_marker_by_coordinates() //Plot data by coordinates

marker_object //An object {'from':'','to':""} see examples
colors_object //An object {'line': {'from': '#FF0000', 'to': '#FF0000'}}; this is the color of the line between the 2 points (if null, a random color will be picked)
timeout //Animation time out
marker = [] //A list of animation markers, use ['line'] for now

raven.add_marker_by_name({'from':'seattle,wa,us','to':'delhi,in'},{'line':{'from':null,'to':null}},2000,['line'])
raven.add_marker_by_ip({'from':'0.0.0.0','to':'0.0.0.0:53'},{'line': {'from':'#FF0000','to':'#FF0000'}},1000,['line'])
raven.add_marker_by_coordinates({'from':['-11.074920','-51.648929'],'to':['51.464957','-107.583864']},{'line':{'from':null,'to':'#FFFF00'}},1000,['line'])

Plotting data + adding it to the output table

raven.add_to_data_to_table()        //Plot info and add them to the output table

method //Name, IP or coordinates
marker_object //An object {'from':'','to':""} see examples
colors_object //An object {'line': {'from': '#FF0000', 'to': '#FF0000'}}; this is the color of the line between the 2 points (if null, a random color will be picked)
timeout //Animation time out
marker = [] //A list of animation marker, use ['line'] for now

raven.add_to_data_to_table('name',{'from':'seattle,wa,us','to':'delhi,in'},{'line':{'from':null,'to':null}},2000,['line'])
raven.add_to_data_to_table('ip',{'from':'0.0.0.0','to':'0.0.0.0:3389'},{'line':{'from':'#FF0000','to':'#FF0000'}},1000,['line'])
raven.add_to_data_to_table('coordinates',{'from':['-11.074920','-51.648929'],'to':['51.464957','-107.583864']},{'line':{'from':null,'to':'#FFFF00'}},1000,['line'])

Timeline

  • Optimize the IP filters <- queued for testing (If you run this in an isolated environment, it should not be an issue)
  • Add Theme Picker

Resources

  • Wikipedia, naturalearthdata, d3.js, topojson, jquery, font-awesome, OSINT package, iana, geonames, AFRINIC, APNIC, ARIN, LACNIC and RIPE
  • Let me know if I missed a reference or resource!

Disclaimer / Notes

  • The dark grey style is typical in my projects (You can change that if you want)
  • If you need help improving your world map or cyber threat map, reach out, and I might be able to help you!
  • Please spend some time in understanding how this project works before opening any issues or leaving any inquiries or comments
  • If you want to see other examples of world maps that do not have all the features listed in this project, try a Google image search for "world map dark grey"


AlphaGolang - IDApython Scripts For Analyzing Golang Binaries

13 January 2022 at 20:30
By: Zion3R



AlphaGolang is a collection of IDAPython scripts to help malware reverse engineers master Go binaries. The idea is to break the scripts into concrete steps, thus avoiding brittle monolithic scripts, and mimicking the methodology an analyst might follow when tackling a Go binary.

Scripts are released under GPL license (honoring Tim Strazzere's original GolangLoaderAssist which we refactored and updated for python3, props to Tim :) ). Contributions are welcome and encouraged!

Requirements: IDA Pro (ideally v7.6+) and Python3 (ew). The first two steps (recreate_pclntab and function_discovery_and_renaming) will work on IDA v7.5 and earlier, but the scripts beyond that require IDA v7.6+. Newer versions are the ideal target for newer scripts going forward.

Original Reference: Mandiant Cyber Defense Summit 2021 talk (Video Pending)


AlphaGolang Analysis Methodology

  • Step 0: YARA rule to identify Go binaries (PE/ELF/MachO)

    • identify_go_binaries.yara
      • Simple header check + regex for Go build ID string.
      • Could probably improve the build ID length range.
  • Step 1: Recreate pcln table

    • recreate_pclntab.py (IDA v7.5- compatible)
      • Recreates the gopclntab section from heuristics
      • Mostly useful for IDA v7.5-
  • Step 2: Discover functions by walking pcln table and add names to all

    • function_renaming.py (IDA v7.5- compatible)
      • Split from golang loader assist
      • Bruteforces discovery of missing functions based on the pcln table
      • Fixed some function name cleaning issues from the py3 transition
  • Step 3: Surface user-generated functions

    • categorize_go_folders.py (Requires IDA v7.6+)
      • Automagically categorizes functions into folders
      • Requires IDAv7.6 + 'show folders' to be enabled in functions view
  • Step 4: Fix string references

    • fix_string_cast.py
      • Split from golang loader assist
      • Added logic to undefine previously existing string blobs before defining new string
      • New sanity checks make it far more effective
  • Step 5: Extract type information (by Ivan Kwiatkowski)

    • extract_types.py
      • Comments the arguments of all calls to newobject, makechan, etc.
      • Applies the correct C type to these objects and renames them
      • Obtains the human-readable name and adds it as a comment
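Step 0's YARA rule boils down to a header check plus a regex for the Go build ID string. A rough Python equivalent might look like the following; this is a simplified illustration of the idea, not the actual identify_go_binaries.yara logic, and the build ID length range is a guess (the rule's author notes it could be improved).

```python
# Simplified sketch of the Step 0 idea: magic-byte check for PE/ELF/Mach-O,
# then a regex search for the Go build ID string embedded in Go binaries.
import re

GO_BUILD_ID = re.compile(rb'Go build ID: "[^"]{20,120}"')  # length range is a guess
MAGIC = {
    b"MZ": "PE",
    b"\x7fELF": "ELF",
    b"\xfe\xed\xfa": "Mach-O",   # covers feedface / feedfacf prefixes
    b"\xcf\xfa\xed\xfe": "Mach-O",
}

def looks_like_go_binary(data: bytes) -> bool:
    """Header check plus a search for the Go build ID string."""
    fmt = next((name for magic, name in MAGIC.items()
                if data.startswith(magic)), None)
    return fmt is not None and GO_BUILD_ID.search(data) is not None
```

In practice YARA is the better tool for bulk triage; this sketch only mirrors the rule's two conditions.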

Pending fixes and room for contributions:

  • fix_string_cast.py - Still needs refactoring + better string load heuristics
  • extract_types.py - Only works on PE files currently and looks for the hardcoded .rdata section - A proper check / implementation for varint-encoded sizes is needed

Next steps:

  • Track strings references by user-generated functions
  • Auto generate YARA signatures based on user-generated functions
  • Generate hex-rays pseudocode output for user-generated functions
  • Automatically set breakpoints for dynamic analysis of arguments
  • ???

Credit to:

  • Tim Strazzere for releasing the original golang_loader_assist
  • Milan Bohacek (Avast Software s.r.o.) for his invaluable help figuring out the idatree API.
  • Joakim Kennedy (Intezer)
  • Ivan Kwiatkowski (Kaspersky GReAT) for step 5.
  • Igor Kuznetsov (Kaspersky GReAT)


Scemu - X86 32bits Emulator, For Securely Emulating Shellcodes

13 January 2022 at 11:30
By: Zion3R


x86 32bits emulator, for securely emulating shellcodes.


Features

  • 
    rust safety, good for malware.
    • All dependencies are in rust.
    • zero unsafe{} blocks.
  • very fast emulation (much faster than unicorn)
    • 3,000,000 instructions/second
    • 100,000 instructions/second printing every instruction -vv.
  • powered by the awesome iced-x86 Rust disassembler library.
  • iteration detector.
  • memory and register tracking.
  • colorized.
  • stop at specific moment and explore the state or modify it.
  • 105 instructions implemented.
  • 112 WinAPI functions implemented across 5 DLLs.
  • all linux syscalls.
  • SEH chains.
  • vectored exception handler.
  • PEB, TEB structures.
  • memory allocator.
  • react with int3.
  • non debugged cpuid.
  • tests with known payloads:
    • metasploit shellcodes.
    • metasploit encoders.
    • cobalt strike.
    • shellgen.
    • GuLoader (not fully supported yet, but the emulation gets further than a debugger)

TODO

- more fpu
- mmx
- 64 bits
- scripting?

Usage

SCEMU 32bits emulator for Shellcodes 0.2.5
@sha0coder

USAGE:
scemu [FLAGS] [OPTIONS]

FLAGS:
-e, --endpoint perform communications with the endpoint, use tor or vpn!
-h, --help Prints help information
-l, --loops show loop iterations, it is slow.
-m, --memory trace all the memory accesses read and write.
-n, --nocolors print without colors for redirecting to a file >out
-r, --regs print the register values in every step.
-V, --version Prints version information
-v, --verbose -vv to view the assembly, -v for messages only; without verbose you only see the API calls and it goes faster

OPTIONS:
-b, --base <ADDRESS> set base address for code
-c, --console <NUMBER> select in which moment will spawn the console to inspect.
-C, --console_addr <ADDRESS> spawn console on first eip = address
-a, --entry <ADDRESS> entry point of the shellcode, by default starts from the beginning.
-f, --filename <FILE> set the shellcode binary file.
-i, --inspect <DIRECTION> monitor memory like: -i 'dword ptr [ebp + 0x24]'
-M, --maps <PATH> select the memory maps folder
-R, --reg <REGISTER> trace a specific register in every step, value and content
-s, --string <ADDRESS> monitor string on a specific address

Some use cases

scemu emulates a simple shellcode detecting the execve() interrupt.

We select the line to stop and inspect the memory.

After emulating nearly 2 million instructions of the GuLoader win32 payload on Linux, faking cpuid results and other tricks along the way, the emulator arrives at a SIGTRAP intended to confuse debuggers.

Example of memory dump on the api loader.

Several memory maps exist by default, and more can be created with APIs like LoadLibraryA or manually from the console.

Emulating a basic Windows shellcode based on LdrLoadDll() that prints a message:

The console allows viewing and editing the current state of the CPU:

--- console ---
=>h
--- help ---
q ...................... quit
cls .................... clear screen
h ...................... help
s ...................... stack
v ...................... vars
r ...................... register show all
r reg .................. show reg
rc ..................... register change
f ...................... show all flags
fc ..................... clear all flags
fz ..................... toggle flag zero
fs ..................... toggle flag sign
c ...................... continue
ba ..................... breakpoint on address
bi ..................... breakpoint on instruction number
bmr .................... breakpoint on read memory
bmw .................... breakpoint on write memory
bc ..................... clear breakpoint
n ...................... next instruction
eip .................... change eip
push ................... push dword to the stack
pop .................... pop dword from stack
fpu .................... fpu view
md5 .................... check the md5 of a memory map
seh .................... view SEH
veh .................... view vectored exception pointer
m ...................... memory maps
ma ..................... memory allocs
mc ..................... memory create map
mn ..................... memory name of an address
ml ..................... memory load file content to map
mr ..................... memory read, specify e.g.: dword ptr [esi]
mw ..................... memory write, specify e.g.: dword ptr [esi] and then: 1af
md ..................... memory dump
mrd .................... memory read dwords
mds .................... memory dump string
mdw .................... memory dump wide string
mdd .................... memory dump to disk
mt ..................... memory test
ss ..................... search string
sb ..................... search bytes
sba .................... search bytes in all the maps
ssa .................... search string in all the maps
ll ..................... linked list walk
d ...................... dissasemble
dt ..................... dump structure
enter .................. step into

The Cobalt Strike API loader is the same as Metasploit's; emulating it:

Cobalt Strike API called:

Metasploit rshell API called:

Metasploit SGN encoder using a few FPU instructions to hide the polymorphism:

Metasploit shikata-ga-nai encoder that also starts with fpu:

Displaying PEB structure:

=>dt
structure=>peb
address=>0x7ffdf000
PEB {
reserved1: [
0x0,
0x0,
],
being_debugged: 0x0,
reserved2: 0x0,
reserved3: [
0xffffffff,
0x400000,
],
ldr: 0x77647880,
process_parameters: 0x2c1118,
reserved4: [
0x0,
0x2c0000,
0x77647380,
],
alt_thunk_list_ptr: 0x0,
reserved5: 0x0,
reserved6: 0x6,
reserved7: 0x773cd568,
reserved8: 0x0,
alt_thunk_list_ptr_32: 0x0,
reserved9: [
0x0,
...

Displaying PEB_LDR_DATA structure:

=>dt
structure=>PEB_LDR_DATA
address=>0x77647880
PebLdrData {
length: 0x30,
initializated: 0x1,
sshandle: 0x0,
in_load_order_module_list: ListEntry {
flink: 0x2c18b8,
blink: 0x2cff48,
},
in_memory_order_module_list: ListEntry {
flink: 0x2c18c0,
blink: 0x2cff50,
},
in_initialization_order_module_list: ListEntry {
flink: 0x2c1958,
blink: 0x2d00d0,
},
entry_in_progress: ListEntry {
flink: 0x0,
blink: 0x0,
},
}
=>

Displaying LDR_DATA_TABLE_ENTRY and first module name

=>dt
structure=>LDR_DATA_TABLE_ENTRY
address=>0x2c18c0
LdrDataTableEntry {
reserved1: [
0x2c1950,
0x77647894,
],
in_memory_order_module_links: ListEntry {
flink: 0x0,
blink: 0x0,
},
reserved2: [
0x0,
0x400000,
],
dll_base: 0x4014e0,
entry_point: 0x1d000,
reserved3: 0x40003e,
full_dll_name: 0x2c1716,
reserved4: [
0x0,
0x0,
0x0,
0x0,
0x0,
0x0,
0x0,
0x0,
],
reserved5: [
0x17440012,
0x4000002c,
0xffff0000,
],
checksum: 0x1d6cffff,
reserved6: 0xa640002c,
time_date_stamp: 0xcdf27764,
}
=>

A malware sample hiding something in an exception:

3307726 0x4f9673: push  ebp
3307727 0x4f9674: push edx
3307728 0x4f9675: push eax
3307729 0x4f9676: push ecx
3307730 0x4f9677: push ecx
3307731 0x4f9678: push 4F96F4h
3307732 0x4f967d: push dword ptr fs:[0]
Reading SEH 0x0
-------
3307733 0x4f9684: mov eax,[51068Ch]
--- console ---
=>

Let's inspect exception structures:

--- console ---
=>r esp
esp: 0x22de98
=>dt
structure=>cppeh_record
address=>0x22de98
CppEhRecord {
old_esp: 0x0,
exc_ptr: 0x4f96f4,
next: 0xfffffffe,
exception_handler: 0xfffffffe,
scope_table: PScopeTableEntry {
enclosing_level: 0x278,
filter_func: 0x51068c,
handler_func: 0x288,
},
try_level: 0x288,
}
=>

And here we have the error routine at 0x4f96f4 and the filter at 0x51068c.



Wifi-Framework - Wi-Fi Framework For Creating Proof-Of-Concepts, Automated Experiments, Test Suites, Fuzzers, And More...

12 January 2022 at 20:30
By: Zion3R


We present a framework to more easily perform Wi-Fi experiments. It can be used to create fuzzers, implement new attacks, create proof-of-concepts to test for vulnerabilities, automate experiments, implement test suites, and so on.


The main advantage of the framework is that it allows you to reuse Wi-Fi functionality of Linux to more easily implement attacks and/or tests. For instance, the framework can connect to (protected) Wi-Fi networks for you and can broadcast beacons for you when testing clients. In general, any Wi-Fi functionality of Linux can be reused to more quickly implement attacks/tests. The framework accomplishes this by executing test cases on top of the hostap user space daemon.


Overview of the Wi-Fi Daemon and Framework components.

If you are new to performing Wi-Fi experiments on Linux it is highly recommended to first read the libwifi Linux Tutorial. When you are implementing basic Wi-Fi attacks without the need to reuse Linux functionality, then the framework provides limited advantages and you can instead consider directly implementing attacks in Scapy and optionally use the libwifi library.

Usage

To use the framework:

  1. Install it.

  2. Read the usage tutorial.

Example

Say you want to test whether a client ever encrypts frames using an all-zero key. This can happen during a key reinstallation attack. By using the framework you do not need to reimplement all functionality of an access point, but only need to write the following test case:

class ExampleKrackZerokey(Test):
    name = "example-krack-zero-key"
    kind = Test.Authenticator

    def __init__(self):
        super().__init__([
            # Replay 4-Way Handshake Message 3/4.
            Action( trigger=Trigger.Connected, action=Action.Function ),
            # Receive all frames and search for one encrypted with an all-zero key.
            Action( trigger=Trigger.NoTrigger, action=Action.Receive ),
            # When we receive such a frame, we can terminate the test.
            Action( trigger=Trigger.Received, action=Action.Terminate )
        ])

    def resend(self, station):
        # Resend 4-Way Handshake Message 3/4.
        station.wpaspy_command("RESEND_M3 " + station.clientmac)

    def receive(self, station, frame):
        if frame[Dot11].addr2 != station.clientmac or not frame.haslayer(Dot11CCMP):
            return False

        # Check if the CCMP-encrypted frame can be decrypted using an all-zero key
        plaintext = decrypt_ccmp(frame.getlayer(Dot11), tk=b"\x00"*16)
        if plaintext is None: return False

        # We received a valid plaintext frame!
        log(STATUS, 'Client encrypted a frame with an all-zero key!', color="green")
        return True

The above test case will create an access point that clients can connect to. After the client connects, a new 3rd message in the 4-way handshake will be sent to the client. A vulnerable client will then start using an all-zero encryption key, which the test case automatically detects.
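The test case is expressed as a sequence of (trigger, action) pairs that the framework walks through. The sketch below is a heavily simplified, hypothetical model of that sequencing (the framework's real Trigger and Action classes take more parameters); it only illustrates the control flow of the example above.

```python
# Hypothetical, simplified model of the framework's trigger/action
# sequencing; not the real Wi-Fi Framework API.
from enum import Enum, auto

class Trigger(Enum):
    Connected = auto()
    NoTrigger = auto()
    Received = auto()

class Action(Enum):
    Function = auto()
    Receive = auto()
    Terminate = auto()

# Same shape as the example test case: call a function once the client
# connects, receive frames until one matches, then terminate.
SEQUENCE = [
    (Trigger.Connected, Action.Function),
    (Trigger.NoTrigger, Action.Receive),
    (Trigger.Received,  Action.Terminate),
]

def run(events):
    """Walk the action list, performing each action whose trigger fired."""
    performed = []
    for trigger, action in SEQUENCE:
        if trigger is Trigger.NoTrigger or trigger in events:
            performed.append(action)
            if action is Action.Terminate:
                break
    return performed
```

With both `Connected` and `Received` fired, all three actions run; if the matching frame never arrives, the test keeps receiving and never terminates, mirroring the real test's behavior.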

You can run the above test case using simulated Wi-Fi radios as follows:

./setup/setup-hwsim.sh 4
source setup/venv/bin/activate
./run.py wlan1 example-krack-zero-key

You can connect to the created access point to test it:

./hostap.py wlan2

By changing the network configuration this AP can easily be configured to use WPA2 or WPA3 and/or enterprise authentication, without making any changes to the test case that we wrote! Additional benefits of using the framework in this example are:

  • No need to manually broadcast beacons
  • The authentication and association stage is handled by the framework
  • The WPA2 and/or WPA3 handshake is handled by the framework
  • Injected packets will be automatically retransmitted by the Linux kernel
  • Packets sent towards the AP will be acknowledged
  • Sleep mode of the client is automatically handled by the kernel
  • ...

See a detailed description of all our examples for more examples.

Publications

This work was published at ACM Conference on Security and Privacy in Wireless and Mobile Networks (WiSec '21):

Works that have used this framework or a similar one:



RAUDI - A Repo To Automatically Generate And Keep Updated A Series Of Docker Images Through GitHub Actions

12 January 2022 at 11:30
By: Zion3R

RAUDI (Regularly and Automatically Updated Docker Images) automatically generates and keeps updated a series of Docker Images through GitHub Actions for tools whose developers do not provide them.


What is RAUDI

RAUDI is what will save you from creating and managing a lot of Docker Images manually. Every time a piece of software is updated, you need to update its Docker Image if you want to use the latest features or if its dependencies no longer work.

This is messy and time-consuming.

Don't worry anymore, we got you covered.

Setup

This repo can also be executed locally. The requirements to be met are the following:

  • Python 3.x
  • Docker

The setup phase is pretty straightforward, you just need the following commands:

git clone https://github.com/cybersecsi/RAUDI
cd RAUDI
pip install -r requirements.txt

You're ready to go!

Usage

RAUDI can build and push all the tools that are put into the tools directory. There are different options that can be used when running it.

Execution Modes

Normal Execution

In this mode RAUDI tries to build all the tools if needed. The command to run it is simply:

./raudi.py --all

Single Build

In this mode RAUDI tries to build only the specified tool. The command in this case is:

./raudi.py --single <tool_name>

tool_name MUST be the name of the directory inside the tools folder.

Show tools

If you want to know the available tools you can run this command:

./raudi.py --list

Options

Option Description Default Value
--push Whether to automatically push to Docker Hub False
--remote Whether to check against Docker Hub instead of the local Docker daemon before building False

Available Tools

This is the current list of tools that have been added. Those are all tools that do not have an official Docker Image provided by the developer:

Name Docker Image Source
Apktool secsi/apktool https://github.com/iBotPeaches/Apktool
bfac secsi/bfac https://github.com/mazen160/bfac
dirb secsi/dirb http://dirb.sourceforge.net/
dirhunt secsi/dirhunt https://github.com/Nekmo/dirhunt
dirsearch secsi/dirsearch https://github.com/maurosoria/dirsearch
ffuf secsi/ffuf https://github.com/ffuf/ffuf
fierce secsi/fierce https://github.com/mschwager/fierce
Findsploit secsi/findsploit https://github.com/1N3/Findsploit
Gitrob secsi/gitrob https://github.com/michenriksen/gitrob
gobuster secsi/gobuster https://github.com/OJ/gobuster
hydra secsi/hydra https://github.com/vanhauser-thc/thc-hydra
The JSON Web Token Toolkit secsi/jwt_tool https://github.com/ticarpi/jwt_tool
knock secsi/knockpy https://github.com/guelfoweb/knock
LFI Suite secsi/lfisuite https://github.com/D35m0nd142/LFISuite
MASSCAN secsi/masscan https://github.com/robertdavidgraham/masscan
MassDNS secsi/massdns https://github.com/blechschmidt/massdns
Race The Web secsi/race-the-web https://github.com/TheHackerDev/race-the-web
Retire.js secsi/retire https://github.com/RetireJS/retire.js
Sandcastle secsi/sandcastle https://github.com/0xSearches/sandcastle
sqlmap secsi/sqlmap https://github.com/sqlmapproject/sqlmap
Sublist3r secsi/sublist3r https://github.com/aboul3la/Sublist3r
theHarvester secsi/theharvester https://github.com/laramies/theHarvester
RestfulHarvest secsi/restfulharvest https://github.com/laramies/theHarvester
waybackpy secsi/waybackpy https://github.com/akamhy/waybackpy
WhatWeb secsi/whatweb https://github.com/urbanadventurer/WhatWeb

Tool Structure

Every tool in the tools directory contains at least two files:

  • config.py
  • Dockerfile
  • README.md (optional README for Docker Hub)

If you want to add a new tool, you just have to create a folder for that specific tool inside the tools directory. In this folder, insert the Dockerfile with build args defined so the build can be customized and automated. Once you have created the Dockerfile, create a config.py in the same directory with a function called get_config(organization, common_args). Be careful: the function MUST be called this way and MUST have those two parameters (even if you do not use them). The return value is the config for that specific tool and has the following structure:

config =  {
'name': organization+'/<name_of_the_image>',
'version': '', # Should be a helper function
'buildargs': {
},
'tests': []
}

The four keys are:

  • name: the name of the Docker Image (e.g. secsi/<tool_name>);
  • version: the version number of the Docker Image. For this you may use a helper function that is able to retrieve the latest available version number (look at tools/ffuf for an example);
  • buildargs: a dict to specify the parts of the Docker Images that are subject to updates (again: look at tools/ffuf for an example);
  • tests: an array of tests (usually just a simple one like '--help').

After doing so you are good to go! Just be careful that the name of the tool MUST BE THE SAME as the directory in which you placed its Dockerfile.
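To make the structure concrete, here is a minimal, hypothetical config.py for an imaginary tool. The tool name `exampletool`, the build arg name, and the `get_latest_version` helper are all made up for illustration; a real version helper would query the upstream project (see tools/ffuf for an actual example).

```python
# tools/exampletool/config.py -- hypothetical example; 'exampletool' is
# not a real RAUDI tool, and get_latest_version() stands in for a real
# helper that would fetch the latest release from upstream.

def get_latest_version():
    return "1.0.0"  # placeholder: a real helper retrieves this remotely

def get_config(organization, common_args):
    version = get_latest_version()
    return {
        'name': organization + '/exampletool',
        'version': version,
        'buildargs': {
            # Consumed by a matching ARG in the tool's Dockerfile.
            'EXAMPLETOOL_VERSION': version,
        },
        'tests': ['--help'],
    }
```

Pinning the version both in `version` and in `buildargs` lets RAUDI detect when a rebuild is needed and pass the new version into the Dockerfile in one step.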

Examples

This section provides examples for the currently added Network Security Tools. The images provide only the tool itself, so if you need to use a wordlist you have to mount it.

Generic Example

docker run -it --rm secsi/<tool> <command>

Specific example

docker run -it --rm -v <wordlist_src_dir>:<wordlist_container_dir> secsi/dirb <url> <wordlist_container_dir>/<wordlist_file>

Roadmap

  • Add GitHub Actions
  • Add '--local' option
  • Add '--remote' option (by default it is local)
  • Add README for every tool
  • Add general README for all RAUDI Docker Images
  • Add custom logger
  • Config file for customization (like the organization name)
  • Customizable organization name in tools/main.py
  • Add GitHub page (different repo)
  • Switch to Alpine-based images
  • Automate Docker Hub README updates (doesn't seem to work with the Docker Free Plan)
  • Add tests for each tool (that allows it)
  • Add auto-commit
  • Better error handling

Contributions

Everyone is invited to contribute! If you are a user of the tool and have a suggestion for a new feature or a bug to report, please do so through the issue tracker.

Credits

RAUDI is proudly developed @SecSI by:

License

RAUDI is an open-source and free software released under the GNU GPL v3.



SpoofThatMail - Bash Script To Check If A Domain Or List Of Domains Can Be Spoofed Based In DMARC Records

11 January 2022 at 20:30
By: Zion3R


Bash script to check if a domain or a list of domains can be spoofed based on DMARC records


File with domains:

sh SpoofThatMail.sh -f domains.txt

One single domain:

sh SpoofThatMail.sh -d domain
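The spoofability check comes down to inspecting the domain's `_dmarc` TXT record: a missing record or a `p=none` policy means receivers are not told to reject spoofed mail. The following is a hypothetical Python rendering of that core logic, not the script's actual bash code; the record strings are examples, since fetching a real record requires a DNS lookup.

```python
# Hypothetical illustration of the DMARC-based spoofability check
# (the actual SpoofThatMail script is written in bash).

def dmarc_verdict(record):
    """Classify a _dmarc TXT record: a missing record or p=none
    leaves the domain spoofable."""
    if not record or not record.lower().startswith("v=dmarc1"):
        return "spoofable (no DMARC record)"
    # DMARC records are semicolon-separated tag=value pairs.
    tags = dict(tag.strip().split("=", 1)
                for tag in record.split(";") if "=" in tag)
    policy = tags.get("p", "none").strip().lower()
    if policy == "none":
        return "spoofable (p=none)"
    return "protected (p=" + policy + ")"
```

A `p=quarantine` or `p=reject` policy instructs receivers to act on authentication failures, which is what makes straightforward spoofing fail.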


WannaRace - WebApp Intentionally Made Vulnerable To Race Condition For Practicing Race Condition

11 January 2022 at 11:30
By: Zion3R


WebApp intentionally made vulnerable to Race Condition


Description

A Race Condition vulnerability can be practiced in this WebApp. The task is to buy a Mega Box that costs more than the available vouchers by exploiting a race condition. Two challenges are provided for practice. Challenge B should be solved while a PHPSESSID cookie is present; the cookie is created automatically when the user logs in. Happy learning!
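The underlying bug class is a check-then-act race: the server checks a condition (voucher unused, balance sufficient) and acts on it in two separate steps, so two parallel requests can both pass the check before either acts. The following self-contained simulation (hypothetical; not WannaRace's actual PHP code, and the 100-unit voucher value is made up) makes the window deterministic with a barrier so both threads redeem the same voucher:

```python
# Deterministic demonstration of a check-then-act race: two threads
# redeem the same voucher because both pass the 'unused' check before
# either marks it used. Hypothetical simulation, not WannaRace code.
import threading

balance = 0
voucher_used = False
barrier = threading.Barrier(2)  # hold both threads inside the race window

def redeem(voucher_value=100):
    global balance, voucher_used
    if not voucher_used:      # check...
        barrier.wait()        # both threads are now past the check
        voucher_used = True   # ...then act: too late, both redeem
        balance += voucher_value

threads = [threading.Thread(target=redeem) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)  # 200: a single 100-unit voucher credited twice
```

Against the real challenge, the analogous attack is firing two near-simultaneous HTTP requests at the redemption endpoint; winning the race credits the voucher twice, which is how the 401-unit Mega Box becomes affordable with 400 units of vouchers.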


Building and running the docker image

Build the docker image with:

git clone https://github.com/Xib3rR4dAr/WannaRace && cd WannaRace
docker build -t xib3rr4dar/wanna_race:1.0 .

Run docker image:

docker run -it --rm xib3rr4dar/wanna_race:1.0

OR

docker run -it --rm -p 9050:80 xib3rr4dar/wanna_race:1.0

Then open in browser relevant IP:PORT


Screenshots

Challenge #1

Main Page


Four vouchers worth 400 units available for recharge


The task is to buy the Mega Box (worth 401 units) by exploiting a race condition


Challenge #2

Same as Challenge #1 but requires login so that PHPSESSID and appropriate cookies are set



❌