
Sealighter - Easy ETW Tracing for Security Research

26 June 2022 at 21:30
By: Zion3R

I created this project to help non-developers dive into researching Event Tracing for Windows (ETW) and WPP Software Tracing.


  • Subscribe to multiple ETW and WPP Providers at once
  • Automatically parse events into JSON without needing to know their format
  • Robust Event filtering including filter chaining and filter negation
  • Output to Standard out, File, or Windows Event Log (to be ingested by other tools)
  • Get event stack traces
  • Configurable buffering that coalesces many similar events within a time period into a single event with a count, to reduce the number of events generated


Sealighter leverages the feature-rich Krabs ETW Library to enable detailed filtering and triage of ETW and WPP Providers and Events.

You can subscribe to and filter multiple providers, including User mode Providers, Kernel Tracing, and WPP Tracing, and output events as JSON to either stdout, a file, or the Windows Event Log (useful for high-volume traces like FileIO). No knowledge of the events a provider may produce, or their format, is necessary: Sealighter automatically captures and parses any events it is asked to.

Events can then be parsed from JSON in Python or PowerShell, or forwarded to Splunk or ELK for further searching.
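As a minimal sketch of that parsing step, assuming one JSON event per line (the "provider_name" and "event_id" field names here are illustrative; inspect your own Sealighter output for the exact schema):

```python
import json

def load_events(path, provider=None):
    """Read JSON events (one per line), optionally keeping only one provider.

    NOTE: the "provider_name" key is an assumption about the output schema;
    check a real capture before relying on it.
    """
    events = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            ev = json.loads(line)
            if provider is None or ev.get("provider_name") == provider:
                events.append(ev)
    return events
```

From there the events are plain dicts, ready for filtering or forwarding.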

Filtering can be done on various aspects of an Event, from its ID or Opcode, to matching a property value, to doing an arbitrary string search across the entire event (useful in WPP traces, or when you don't know the event structure but have an idea of its contents). You can chain multiple filters together or negate them, and you can also cap the number of events reported per ID, which is useful when investigating a new provider without being flooded by similar events.
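Conceptually, chaining and negation behave like composable predicates over each event. A hedged Python sketch of the idea (this mirrors the concepts, not Sealighter's actual filter syntax; the "event_id" field is illustrative):

```python
import json

# Composable predicates: each helper returns a function of one event (a dict).
def any_of(*preds):
    return lambda ev: any(p(ev) for p in preds)

def negate(pred):
    return lambda ev: not pred(ev)

def id_is(event_id):
    return lambda ev: ev.get("event_id") == event_id

def contains(text):
    # Arbitrary string search across the whole serialized event.
    return lambda ev: text in json.dumps(ev)

# Keep events with ID 7, OR events that do NOT mention "System32" anywhere.
keep = any_of(id_is(7), negate(contains("System32")))

events = [
    {"event_id": 7},
    {"event_id": 2, "path": "C:\\System32\\evil.dll"},
]
print([ev for ev in events if keep(ev)])  # → [{'event_id': 7}]
```

The same composition pattern extends naturally to "all of" chains and per-property matches.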

Why this exists

ETW is an incredibly useful system for both Red and Blue teams. Red teams may glean insight into the inner workings of Windows components, and Blue teams might get valuable insight into suspicious activity.

A common research loop would be:

  1. Identify interesting ETW Providers using logman query providers or Looking for WPP Traces in Binaries
  2. Start a Session with the interesting providers enabled, and capture events whilst doing something 'interesting'
  3. Look over the results, using one or more of:
    • Eyeballing each event, or grepping for words you expect to see
    • Running a script in Python or PowerShell to filter or find interesting captured events
    • Ingesting the data into Splunk or an ELK stack for some advanced UI-driven searching
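For step 3, a short script often beats eyeballing. For example, counting events per ID to spot the noisiest ones (the "event_id" field name is an assumption about the parsed output):

```python
from collections import Counter

def event_histogram(events):
    # events: parsed JSON events as dicts; "event_id" is an assumed field name.
    return Counter(ev.get("event_id") for ev in events)

sample = [{"event_id": 5}, {"event_id": 5}, {"event_id": 12}]
print(event_histogram(sample).most_common())  # → [(5, 2), (12, 1)]
```

IDs that dominate the histogram are good candidates for per-ID caps or buffering.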

Doing this with ETW events can be difficult without writing code against the obtuse ETW API to interact with and parse events. If you're not a strong programmer (or don't want to deal with the API), your only other option is to use a combination of older inbuilt Windows tools to write events to disk as binary .etl files, then deal with those. WPP traces compound the issue, providing almost no easy-to-find data about providers and their events.

Projects like JDU2600's Event List and ETWExplorer give some static insight, but Providers often contain obfuscated event names like Event(1001), meaning the most interesting data only becomes visible by dynamically running a trace and observing the output.

So like SilkETW?

In a way, this plays in a similar space to FuzzySec's SilkETW. But while Silk is more production-ready for defenders, Sealighter is designed for researchers like myself, and as such contains a number of features I couldn't get with Silk, mostly due to the different library each tool is built on. Please see Here for more information.

Intended Audience

Probably someone who understands the basics of ETW and really wants to dive into discovering what data they can glean from it, without having to write code or manually figure out how to get and parse events.

Getting Started

Please read the following pages:

Installation - How to start running Sealighter, including a simple config, and how to set up Windows Event logging if required.

Configuration - How to configure Sealighter, including how to specify what Providers to Log, and where to log to.

Filtering - Deep dive into all the types of filtering Sealighter provides.

Buffering - How to use buffering to report many similar events as one.

Parsing Data - How to get and parse data from Sealighter.

Scenarios - Walkthrough example scenarios of how I've used Sealighter in my research.

Limitations - Things Sealighter doesn't do well or at all.

Why it's called Sealighter

The name is a contraction of Seafood Highlighter, which is what we call fake crab meat in Oz. As it's built on Krabs ETW, I thought the name was funny.

Found problems?

Feel free to raise an issue, although as I state in the comparison docs, I'm only a single person, and this is a research-ready tool, not a production-ready one.

Props and further reading

Scout - Lightweight URL Fuzzer And Spider: Discover A Web Server'S Undisclosed Files, Directories And VHOSTs

26 June 2022 at 12:30
By: Zion3R

Scout is a URL fuzzer and spider for discovering undisclosed VHOSTS, files and directories on a web server.

A full word list is included in the binary, meaning maximum portability and minimal configuration. Aim and fire!


Usage:
scout [command]

Available Commands:
help              Help about any command
url               Discover URLs on a given web server.
version           Display scout version.
vhost             Discover VHOSTs on a given web server.

Flags:
-d, --debug             Enable debug logging.
-h, --help              help for scout
-n, --no-colours        Disable coloured output.
-p, --parallelism int   Parallel routines to use for sending requests. (default 10)
-k, --skip-ssl-verify   Skip SSL certificate verification.
-w, --wordlist string   Path to wordlist file. If this is not specified an internal wordlist will be used.

Discover URLs


-x, --extensions     File extensions to detect. (default php,htm,html,txt)
-f, --filename       Filename to seek in the directory being searched. Useful when all directories report 404 status.
-H, --header         Extra header to send with requests e.g. -H "Cookie: PHPSESSID=blah"
-c, --status-codes   HTTP status codes which indicate a positive find. (default 200,400,403,500,405,204,401,301,302)
-m, --method         HTTP method to use.
-s, --spider         Scan page content for links and confirm their existence.
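These flags can also be assembled from a wrapper script. A hedged Python sketch (flag names are taken from the help text above; it assumes scout accepts the target URL as a positional argument, and the target here is hypothetical):

```python
import shutil
import subprocess

def build_scout_url_cmd(target, parallelism=10, extensions=None, spider=False):
    # Assemble a 'scout url' invocation from the flags documented above.
    cmd = ["scout", "url", target, "--parallelism", str(parallelism)]
    if extensions:
        cmd += ["--extensions", ",".join(extensions)]
    if spider:
        cmd.append("--spider")
    return cmd

cmd = build_scout_url_cmd("http://127.0.0.1:8080", parallelism=20,
                          extensions=["php", "txt"], spider=True)
print(" ".join(cmd))
if shutil.which("scout"):  # only run if the scout binary is installed
    subprocess.run(cmd, check=False)
```

Guarding on `shutil.which` keeps the script harmless on machines without scout installed.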

Full example

$ scout url

[+] Target URL
[+] Routines 10
[+] Extensions php,htm,html
[+] Positive Codes 200,302,301,400,403,500,405,204,401,301,302


Scan complete. 28 results found.

Discover VHOSTs

$ scout vhost https://google.com

[+] Base Domain google.com
[+] Routines 10
[+] IP -
[+] Port -
[+] Using SSL true


Scan complete. 12 results found.


Install scout with the provided script:

curl -s "https://raw.githubusercontent.com/liamg/scout/master/scripts/install.sh" | bash


Nim-Loader - WIP Shellcode Loader In Nim With EDR Evasion Techniques

25 June 2022 at 12:30
By: Zion3R

A very rough work-in-progress adventure into learning Nim by cobbling resources together to create a shellcode loader that implements common EDR/AV evasion techniques.

This is a mess and is for research purposes only! Please don't expect it to compile and run without your own modifications.


  • Replace the byte array in loader.nim with your own x64 shellcode
  • Compile the EXE and run it: nim c -d:danger -d:strip --opt:size "loader.nim"
  • Adjust which process you want to inject into by looking in the .nim files of the injection-method folder you're using

Completed Features

  • Direct syscalls dynamically resolved from NTDLL (Thanks @ShitSecure)
  • AMSI and ETW patching (Thanks @byt3bl33d3r)
  • NTDLL unhooking (Thanks @MrUn1k0d3r)
  • CreateRemoteThread injection (Thanks @byt3bl33d3r, @ShitSecure)

WIP Features


  • Consider using denim by @LittleJoeTables for obfuscator-llvm nim compilation support!

References & Inspiration

  • OffensiveNim by Marcello Salvati (@byt3bl33d3r)
  • NimlineWhispers2 by Alfie Champion (@ajpc500)
  • SysWhispers3 by klezVirus (@KlezVirus)
  • NimPackt-v1 by Cas van Cooten (@chvancooten)
  • unhook_bof.c by Mr. Un1k0d3r (@MrUn1k0d3r)
  • NimGetSyscallStub by S3cur3Th1sSh1t (@ShitSecure)
  • NimHollow by snovvcrash (@snovvcrash)


Authcov - Web App Authorisation Coverage Scanning

24 June 2022 at 21:30
By: Zion3R

Web app authorisation coverage scanning.


AuthCov crawls your web application using a headless Chrome browser while logged in as a pre-defined user. It intercepts and logs API requests as well as pages loaded during the crawling phase. In the next phase it logs in under a different user account, the "intruder", and attempts to access each one of the API requests or pages discovered previously. It repeats this step for each intruder user defined. Finally, it generates a detailed report listing the resources discovered and whether or not they are accessible to the intruder users.
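The crawl-then-intrude idea can be sketched in a few lines. This is a conceptual model in Python, not AuthCov's Node.js implementation; the `fetch` callback and the unauthorised-status-code convention are assumptions mirroring the config options described below:

```python
def is_authorised(status_code, unauthorised_codes=(401, 403, 404)):
    # Mirrors the unAuthorizedStatusCodes idea: any status outside the
    # "unauthorised" set counts as accessible.
    return status_code not in unauthorised_codes

def intrusion_report(discovered_urls, intruders, fetch):
    # fetch(url, user) -> HTTP status code when the request is replayed as
    # that user; a stand-in for re-issuing the intercepted requests.
    return {
        user: {url: is_authorised(fetch(url, user)) for url in discovered_urls}
        for user in intruders
    }

# Toy example: "/admin" returns 403 for the intruder "john".
statuses = {("/", "john"): 200, ("/admin", "john"): 403}
report = intrusion_report(["/", "/admin"], ["john"],
                          lambda url, user: statuses[(url, user)])
print(report)  # → {'john': {'/': True, '/admin': False}}
```

The real tool layers browser automation, request interception, and report rendering on top of this core comparison.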

An example report generated from scanning a local WordPress instance:



  • Works with single-page-applications and traditional multi-page-applications
  • Handles token-based and cookie-based authentication mechanisms
  • Generates an in-depth report in HTML format
  • Screenshots of each page crawled can be viewed in the report


Install the latest node version. Then run:

$ npm install -g authcov


  1. Generate a config for the site you want to scan [NOTE: It has to end in .mjs extension]:
$ authcov new myconfig.mjs
  2. Update the values in myconfig.mjs
  3. Test your configuration values by running this command to ensure the browser is logging in successfully:
$ authcov test-login myconfig.mjs --headless=false
  4. Crawl your site:
$ authcov crawl myconfig.mjs
  5. Attempt intrusion against the resources discovered during the crawling phase:
$ authcov intrude myconfig.mjs
  6. View the generated report at: ./tmp/report/index.html


The following options can be set in your config file:

option type description
baseUrl string The base URL of the site. This is where the crawler will start from.
crawlUser object The user to crawl the site under. Example: {"username": "admin", "password": "1234"}
intruders array The users who will intrude on the api endpoints and pages discovered during the crawling phase. Generally these will be users with the same or less privilege than the crawlUser. To intrude as a not-logged-in user, add a user with the username "Public" and password null. Example: [{"username": "john", "password": "4321"}, {"username": "Public", "password": null}]
type string Is this a single-page-application (i.e. a javascript frontend which queries an API backend) or a more "traditional" multi-page-application? (Choose "mpa" or "spa").
authenticationType string Does the site authenticate users by using the cookies sent by the browser, or by a token sent in a request header? For an MPA this will almost always be set to "cookie". In an SPA this could be either "cookie" or "token".
authorisationHeaders array Which request headers are needed to be sent in order to authenticate a user? If authenticationType=cookie, then this should be set to ["cookie"]. If authenticationType=token, then this will be something like: ["X-Auth-Token"].
maxDepth integer The maximum depth with which to crawl the site. Recommend starting at 1 and then try crawling at higher depths to make sure the crawler is able to finish fast enough.
verboseOutput boolean Log at a verbose level, useful for debugging.
saveResponses boolean Save the response bodies from API endpoints so you can view them in the report.
saveScreenshots boolean Save browser screenshots for the pages crawled so you can view them in the report.
clickButtons boolean (Experimental feature) on each page crawled, click all the buttons on that page and record any API requests made. Can be useful on sites which have lots of user interactions through modals, popups etc.
xhrTimeout integer How long to wait for XHR requests to complete while crawling each page. (seconds)
pageTimeout integer How long to wait for page to load while crawling. (seconds)
headless boolean Set this to false for the crawler to open a chrome browser so you can see the crawling happening live.
unAuthorizedStatusCodes array The HTTP response status codes that decide whether or not an API endpoint or page are authorized for the user requesting it. Optionally define a function responseIsAuthorised to determine if a request was authorized. Example: [401, 403, 404]
ignoreLinksIncluding array Do not crawl URLs containing any strings in this array. For example, if set to ["/logout"] then the url: http://localhost:3000/logout will not be crawled. Optionally define a function ignoreLink(url) below to determine if a URL should be crawled or not.
ignoreAPIrequestsIncluding array Do not record API requests made to URLs which contain any of the strings in this array. Optionally define a function ignoreApiRequest(url) to determine if a request should be recorded or not.
ignoreButtonsIncluding array If clickButtons is set to true, then do not click buttons whose outer HTML contains any of the strings in this array. Optionally define a function ignoreButton(url) below.
loginConfig object Configure how the browser will login to your web app. Optionally define an async function loginFunction(page, username, password). (More about this below).
cookiesTriggeringPage string (optional) when authenticationType=cookie, this will set a page so that the intruder will browse to this page and then capture the cookies from the browser. This can be useful if the site sets the path field on cookies. Defaults to options.baseUrl.
tokenTriggeringPage string (optional) when authenticationType=token, this will set a page so that the intruder will browse to this page and then capture the authorisationHeaders from the intercepted API requests. This can be useful if the site's baseUrl does not make any API requests and so cannot capture the auth headers from that page. Defaults to options.baseUrl.

Configuring the Login

There are two ways to configure the login in your config file:

  1. Using the default login mechanism, which uses puppeteer to enter the username and password into the specified inputs and then click the specified submit button. This can be configured by setting the loginConfig option in your config file like this. See this example too.
"loginConfig": {
  "url": "http://localhost/login",
  "usernameXpath": "input[name=email]",
  "passwordXpath": "input[name=password]",
  "submitXpath": "#login-button"
}
  2. If your login form is more complex and involves more user interaction, you can define your own puppeteer function in your config file like this. See this example too.
  "loginFunction": async function(page, username, password){
await page.goto('http://localhost:3001/users/sign_in');
await page.waitForSelector('input[type=email]');
await page.waitForSelector('input[type=password]');

await page.type('input[type=email]', username);
await page.type('input[type=password]', password);

await page.tap('input[type=submit]');
await page.waitFor(500);


Don't forget to run the authcov test-login command in headful mode in order to verify the browser logs in successfully.


Clone the repo and run npm install. Best to use node version 17.1.0.

Unit Tests

Unit tests:

$ npm test test/unit

End2End tests:

First download and run the example app. Then run the tests:

$ npm test test/e2e
