
Listing remote named pipes

19 October 2023 at 15:33

On Windows, named pipes are a form of interprocess communication (IPC) that allows processes to communicate with one another, both locally and across the network. Named pipes serve as a mechanism to transfer data between Windows components as well as third-party applications and services, both on a local system and across a domain. From an offensive perspective, named pipes may leak information that is useful for reconnaissance purposes. And since named pipes can (depending on their configuration) also be used to access services remotely, they have enabled remote exploits in the past (e.g. MS08-067).

In this post we will explore how named pipes can be listed remotely in offensive operations, for example via an implant running on a compromised Windows system.


Several tools already exist to list named pipes.

  • To display locally bound named pipes you could use Sysinternals’ PipeList.
  • Bobby Cooke (@boku7) made the xPipe BOF to list local pipes and their DACLs.
  • To list named pipes on a remote system you could use smbclient.py in impacket or nmap.
Example remote listing of named pipes on a Windows system

From the above example listing of named pipes on a remote system, we can learn multiple things: the Windows Search service is active (MsFteWds), terminal services/RDP sessions are active (TSVCPIPE), a Chromium-based browser is in use (mojo.*), some Adobe Creative Cloud services are available, the user makes use of an SSH agent, a PowerShell process is active, and Wireshark is in use.
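This kind of reconnaissance can be automated with a simple fingerprint list. The sketch below is purely illustrative: the pipe-name patterns are assumptions based on the examples above (e.g. `openssh-ssh-agent` is the default pipe name of the Windows OpenSSH agent), and a real tool would carry a much larger pattern set:

```python
import fnmatch

# Illustrative fingerprint list; the patterns are assumptions drawn
# from the example pipe names discussed above.
PIPE_HINTS = {
    "MsFteWds*": "Windows Search service",
    "TSVCPIPE-*": "Terminal services / RDP session",
    "mojo.*": "Chromium-based browser",
    "openssh-ssh-agent": "SSH agent",
}

def fingerprint(pipe_names):
    """Return the set of software hints matching the given pipe names."""
    hits = set()
    for name in pipe_names:
        for pattern, hint in PIPE_HINTS.items():
            if fnmatch.fnmatch(name, pattern):
                hits.add(hint)
    return hits
```

Feeding the remote pipe listing into such a function immediately surfaces the interesting software running on the target.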

That’s a lot of information – in the usual configuration we can typically list these remote named pipes (e.g. with smbclient.py) using regular domain credentials against domain-joined systems.

However, when you try to use the existing Win32 APIs to enumerate named pipes on a remote system for reconnaissance, things get a bit interesting. IPC$ is the magical share name used for interprocess communication. While you could use the built-in Win32 APIs to determine the existence of a remote named pipe with \\server\IPC$\pipename, listing the \\server\IPC$ “folder” results in an error.

Checking the existence of a named pipe works. Listing all named pipes doesn’t.

Part of the reason is explained on the PipeList web page:

Did you know that the device driver that implements named pipes is actually a file system driver? In fact, the driver’s name is NPFS.SYS, for “Named Pipe File System”. What you might also find surprising is that it’s possible to obtain a directory listing of the named pipes defined on a system. This fact is not documented, nor is it possible to do this using the Win32 API. Directly using NtQueryDirectoryFile, the native function that the Win32 FindFile APIs rely on, makes it possible to list the pipes. The directory listing NPFS returns also indicates the maximum number of pipe instances set for each pipe and the number of active instances.

We can see what is going on in Wireshark: when listing the IPC$ share of a remote system, we get a “Tree Connect Response” indicating that the Share Type is 0x02 (named pipe).

Listing of named pipes via IPC$ fails
Share type of IPC$ is not Disk (0x01) but Named pipe (0x02)

At this point the directory listing already fails, because this Share Type is not supported for file listing; a regular file share would have Share Type 0x01 (disk).
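For reference, the ShareType values from the SMB2 TREE_CONNECT response (defined in MS-SMB2) can be captured in a few constants; a quick sketch:

```python
# ShareType values in the SMB2 TREE_CONNECT response (MS-SMB2 2.2.10).
SMB2_SHARE_TYPE_DISK = 0x01   # regular file share
SMB2_SHARE_TYPE_PIPE = 0x02   # named pipe share (IPC$)
SMB2_SHARE_TYPE_PRINT = 0x03  # printer share

def describe_share_type(value):
    """Map a ShareType byte to a human-readable label."""
    return {
        SMB2_SHARE_TYPE_DISK: "Disk",
        SMB2_SHARE_TYPE_PIPE: "Named pipe",
        SMB2_SHARE_TYPE_PRINT: "Printer",
    }.get(value, "Unknown")
```

The 0x02 seen in the Wireshark capture is exactly the named pipe share type, which is why the server refuses a regular file listing.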

While we can use the Win32 APIs to list locally available named pipes, we cannot invoke the NtQueryDirectoryFile API on a remote system. Tools like smbclient.py can still obtain a remote named pipe listing because they implement the entire SMB stack themselves and can issue SMB requests such as a directory query with the SMB2_FIND_FULL_DIRECTORY_INFO information class, regardless of the Tree Connect Response. On the server side, this request is apparently served by the same NtQueryDirectoryFile API.

Using smbclient.py to list named pipes
Wireshark capture of smbclient.py listing named pipes

Unfortunately, this means that a Beacon Object File (BOF) in an implant that relies on Win32 functions cannot list remote named pipes. However, we could still reimplement the SMB stack in a BOF. While that sounds like a lot of work, open-source implementations are already available. We therefore built a small POC on top of SMBLibrary, an SMB library written in C#.

RemotePipeList

We’ve added a small tool to our C2 Tool Collection that uses SMBLibrary to list remotely available named pipes. Through inline .NET assembly execution in Cobalt Strike / Stage1, we can then easily use this tool via an implant from the operator’s C2 framework.

RemotePipeList tool as part of Outflank’s C2 Tool Collection

The current code requires you to specify a username/password and always authenticates using NTLM. This could potentially be extended to support integrated authentication (and Kerberos).

We have added an aggressor script for Cobalt Strike and a new task for the Stage1 C2 server. Both can be invoked via the remotepipelist command.

View the RemotePipeList tool on GitHub in the C2 Tool Collection.

The post Listing remote named pipes appeared first on Outflank.

Mapping Virtual to Physical Addresses Using Superfetch

14 December 2023 at 15:12

With the Bring Your Own Vulnerable Driver (BYOVD) technique popping up in Red Teaming arsenals, we have seen additional capabilities being added, such as the ability to kill (EDR) processes or read protected memory (LSASS), all performed by leveraging drivers operating in kernel land.

Sooner or later during BYOVD tooling development, you will run into the issue of needing to resolve virtual to physical memory addresses. Some drivers may expose routines that allow control over physical address ranges. While this is a powerful capability, how do we make the mapping between virtual and physical addresses? Mistakes can be costly and result in BSODs. That’s what we’re exploring in this blog post. We will document a technique that relies on a Windows feature referred to as “Superfetch”.

Within our Outflank Security Tooling (OST) toolkit, we work hard on BYOVD tooling that can be leveraged for process and token manipulation as well as credential dumping (supported by KernelTool and KernelKatz, implemented by our colleague and genius @bart1k).

  • KernelTool includes commands for tampering with tokens, integrity and protection levels of processes, modifying kernel callbacks, and modifying DSE (Driver Signature Enforcement) and ETW (Event Tracing for Windows) settings.
  • KernelKatz can directly access LSASS memory to dump stored credentials or re-enable plaintext password logging even while Credential Guard is enabled, bypassing userland protections such as PPL.
KernelTool downgrading the MsMpEng.exe (Defender) process to untrusted integrity level.

Both tools make use of a vulnerable driver. Depending on the driver that you leverage, different abuse primitives may be available, for instance a primitive to kill a process, or a primitive to read/write (R/W) physical memory. Of course, your driver might also support fancier features such as toggling the RGB LEDs of your RAM. This would make us all jealous.

If the conditions are right, you might be able to access one of the following kernel routines:

  • Process management
    • ZwOpenProcess
  • Read/write arbitrary memory
    • MmMapIoSpace
    • ZwMapViewOfSection
  • Execute code
    • KeInsertQueueApc

The research article, “POPKORN: Popping Windows Kernel Drivers At Scale” has a high-level description of these primitives and how they could be abused. They are usually exposed to user land via IOCTLs so that user land processes can interface with these kernel routines. “Finding and exploiting process killer drivers with LOL for 3000$” is a great (offensive) primer by Alice Climent-Pommeret on how communication between kernel land drivers and user land is accomplished.

In the case of KernelTool and KernelKatz, both tools use a read-write (R/W) physical memory primitive in vulnerable kernel drivers. In addition to manipulating user land and kernel objects (DKOM), OST’s KernelTool also has the capability of injecting shellcode in arbitrary processes in user land.

We try to build our kernel capabilities around this single R/W primitive at the moment so we don’t have to rely on additional primitives being available. Through just this one primitive, we are able to perform the broad range of actions that are covered by KernelTool and KernelKatz. Furthermore, if the vulnerable driver is blocked in the future, we can more easily shift to the use of a new driver that supports the same or a similar primitive.

There are now Microsoft-recommended driver block rules that can block known vulnerable drivers. These rules have been enabled by default since the Windows 11 2022 Update. The blocklist is updated with each new major release of Windows (typically 1-2 times per year).

Read-Write Physical Memory via MmMapIoSpace

For our purposes, we have chosen to rely on the MmMapIoSpace function as it is commonly available in a number of vulnerable drivers. The MmMapIoSpace routine maps a given physical address range into virtual memory and returns a pointer to the newly mapped address space. When accessible via a vulnerable kernel driver (via IOCTL), this routine allows us to manipulate (read and write) physical memory.

The routine takes a physical address as an argument, the number of bytes to map, and the memory caching type. As the documentation also mentions, MmMapIoSpace should only be used with memory pages that are locked down, otherwise the memory could be freed, could be paged out, etc. This is a fairly big limitation that will create some issues for us further down the road, but is not the focus of this blog post.

For now, there’s a bigger issue we need to overcome. Without too much trouble we can usually obtain virtual addresses of objects that we want to control. However, as MmMapIoSpace takes a physical address as argument, we need to know the physical address that belongs to whatever virtual address we are attempting to manipulate.

Virtual and Physical Memory Basics

If you think you already know how virtual address mapping works, you may change your mind after reading the post “Physical and Virtual Memory in Windows 10”. Here’s a short recap: physical addresses directly correspond to a physical location in the computer’s RAM. Virtual addresses, on the other hand, are used by the OS and applications and are mapped to physical memory addresses. This allows each process to have its own virtual address space that is isolated from the virtual address space of other processes.

Whereas we have private virtual address space in user mode (called “user space”), there is a single virtual address space in kernel mode (called “system space”). This has some implications: in user space our executable code can be loaded at the same virtual address in multiple processes, although it refers to different physical memory. We only have a single virtual address space in kernel mode, and address space used by one driver isn’t isolated from other drivers. See Microsoft Learn for more details.

This also means that a single virtual address (in different processes) can map to different physical memory addresses. Conversely, using the example of DLLs, Windows doesn’t necessarily load a DLL into physical memory a second time (for optimization reasons), so multiple virtual addresses can point to a single physical address, too.

All memory in user space may be paged out as needed. In system space, some memory may be paged out to disk (paged pool), while some memory cannot (nonpaged pool).

You can imagine the headache we’re getting into when we are attempting to make a mapping between virtual and physical addresses! The physical memory might not even be resident (paged out), preventing us from accessing it. However, that’s a problem for another day.

Mapping Virtual to Physical Memory

Say we want to change arbitrary process memory. We can usually obtain the virtual address within that process that we’d need to manipulate fairly easily, but how do we then get to the physical address?

If we had access to additional routines, such as MmGetPhysicalAddress/MmGetVirtualForPhysical, we could let those do the heavy lifting for us. But let’s assume we don’t.

The mapping of physical pages to virtual pages is done via page tables. On Windows 64-bit, the kernel keeps this mapping in multi-level tables called PT/PDT/PDPT/PML4. Since the page tables contain the information (the mapping) that we need, we could attempt to read them via our read-write primitive.

Address translation via the page tables, from the “de engineering” blog.
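To make the four-level walk concrete, here is a minimal sketch (Python, purely for illustration) of how a canonical 48-bit x64 virtual address breaks down into the PML4/PDPT/PDT/PT indices and the page offset:

```python
# Minimal sketch: split a 48-bit x64 virtual address into the indices
# used at each level of the page-table walk (9 bits per level, 4 KiB pages).
def decompose_va(va):
    return {
        "pml4_index": (va >> 39) & 0x1FF,  # bits 47..39 -> PML4 entry
        "pdpt_index": (va >> 30) & 0x1FF,  # bits 38..30 -> PDPT entry
        "pdt_index":  (va >> 21) & 0x1FF,  # bits 29..21 -> PDT entry
        "pt_index":   (va >> 12) & 0x1FF,  # bits 20..12 -> PT entry
        "offset":     va & 0xFFF,          # bits 11..0  -> byte within page
    }
```

The PT entry finally yields the page frame number of the physical page; the low 12 bits of the virtual address carry over unchanged as the offset within that page.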

However, since Windows 10 version 1803, Microsoft has patched the kernel so that page tables can no longer be accessed with MmMapIoSpace, meaning we can no longer read the page tables to determine the VA-PA mapping.

While there may be a myriad of other ways to achieve the same thing, we are currently relying on a technique that works completely from user-land. Introducing: Superfetch.

RAMMap

There’s a Sysinternals tool called “RAMMap” for physical memory usage analysis that can tell you how much RAM is used for which purpose, and can even drill down to a per-process or per-file level to see which virtual addresses map to which physical addresses. It requires administrator permissions to execute.

RAMMap showing the physical pages in use by a mysterious process that is definitely not me playing Counter-Strike 2 during work time.

This sounds exactly like the information we need to make a VA-PA mapping! So how does RAMMap get this information? After a mighty reverse engineering session with strings and grep, we see some references to Superfetch and FileInfo. It turns out that the combination of these two mechanisms is how RAMMap is able to present its output.

Superfetch

Superfetch is a built-in Windows service also known as “SysMain” that can speed up data access by prefetching it, preloading the information in memory. To this end, it keeps track of which memory pages are accessed and when page faults occur (e.g. when memory is paged out to disk and needs to become resident). The architecture of Superfetch is documented by Mathilde Venault & Baptiste David in their talk at BlackHat USA 2020: Fooling Windows through SuperFetch.

RAMMap retrieves Superfetch related information through a call to NtQuerySystemInformation. This NTAPI function can retrieve various information about the system and takes a SystemInformation class as a parameter: a class that indicates what type of information to request. An overview of classes is documented on Geoff Chappell’s website.

To retrieve Superfetch data, the SuperfetchInformation class is used. Some other classes include the ability to retrieve information about current running processes (SystemProcessInformation) or enumerating current open handles (SystemExtendedHandleInformation). Interestingly, some of these information classes also appear to leak system space addresses, a capability that is also very useful during BYOVD development. There is some example code available on the windows_kernel_address_leaks GitHub project to show how to leak kernel pointers using these information classes.

We can query Superfetch to obtain detailed memory page information. This call will return something called the Page Frame Number (PFN) database. The PFN database is a large table that stores information about physical memory pages in data structures such as _MMPFN_IDENTITY that allow us to find out for each memory page what it’s used for, its current state, and most usefully: the associated virtual address. Bingo 🙂

Structure of the PFN database. From BSODTutorials.

Pages may be in different states (Valid/Standby/Modified/Transition/Free/Zeroed). We should err on the side of caution and filter for active pages — modifying a page that’s already been freed wouldn’t be very useful anyway for our purposes.

Pages can have different uses: they could for instance be dedicated to process private memory (MMPFNUSE_PROCESSPRIVATE), or relate to a file being loaded into memory (MMPFNUSE_FILE).

After building the PFN database, we could filter for process private memory pages in the active state until we come across the virtual address that we were attempting to resolve. Based on the index of the page in the PFN database, we can then determine the physical address by a bitwise left-shift (PageFrameIndex << PAGE_SHIFT).

When you are resolving a VA within a userland process, you will also need to match against the UniqueProcessKey. Depending on the Windows OS version this is either the PID of the process or a system space address, and can be resolved using the SystemExtendedProcessInformation class.
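Putting these steps together, the sketch below models the lookup over a toy PFN database: filter for active, process-private pages whose virtual page and process key match, then shift the page frame index left to obtain the physical address. The constants mirror the values commonly documented for the kernel’s page-use and page-state enums, but the record layout here is a simplified stand-in for _MMPFN_IDENTITY, not the real structure:

```python
PAGE_SHIFT = 12                   # standard 4 KiB pages
PAGE_MASK = (1 << PAGE_SHIFT) - 1
MMPFNUSE_PROCESSPRIVATE = 0       # page use: process private memory
ACTIVE_AND_VALID = 6              # page state: active

def resolve_physical(pfn_db, target_va, process_key):
    """Return the physical address for target_va, or None if not found.

    pfn_db is a list of simplified PFN records:
    (page_frame_index, use, state, virtual_address, unique_process_key)
    """
    page_base = target_va & ~PAGE_MASK
    for pfn, use, state, va, key in pfn_db:
        if (use == MMPFNUSE_PROCESSPRIVATE and state == ACTIVE_AND_VALID
                and va == page_base and key == process_key):
            # Physical page base = PFN << PAGE_SHIFT; the low 12 bits
            # of the virtual address carry over as the page offset.
            return (pfn << PAGE_SHIFT) | (target_va & PAGE_MASK)
    return None
```

In the real tooling, the records come from the Superfetch query and the process key from SystemExtendedProcessInformation; the filtering and shifting logic is the same.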

Success, we can map virtual to physical addresses!

I hope it goes without saying, but the output we obtain here is a snapshot of whatever the current state is at that time. That means memory may have been freed or paged out in the meantime, which isn’t without risk.

While Superfetch can give us detailed information about VA-PA mappings, FileInfo comes into play when you want to find out which physical pages belong to a specific file on disk. FileInfo is a driver that is present by default on Windows systems and registers the \Device\FileInfo device. Via a number of IOCTLs, it allows retrieving a list of file names, the volumes they’re on, and a UniqueFileObjectKey. This key makes it possible to correlate a file object with information retrieved through Superfetch (filtering for MMPFNUSE_FILE), so that we can determine which physical pages are mapped for a specific file name.

Further Reading

All of this information was researched and documented by Pavel Yosifovich, Mark Russinovich, Alex Ionescu and David Solomon in “Windows Internals: System architecture, processes, threads, memory management, and more”. Alex Ionescu has also given a presentation at Recon 2013, “I got 99 problems but a kernel pointer ain’t one”, in which he explores different ways of obtaining kernel pointers and querying Superfetch. He has released a tool called MemInfo that combines the Superfetch and FileInfo mechanisms to output detailed memory information. Note that MemInfo won’t work out of the box on newer Windows versions, as a new Superfetch structure is in use.

Given all of the references above, you will notice that using Superfetch for exploit development is not new. We just wanted to document some of the background as we learned about the topic. For example, this SpeedFan driver exploit also makes use of Superfetch for collecting physical memory information.


In order to help other red teams easily implement these techniques and more, we’ve developed Outflank Security Tooling (OST), a broad set of evasive tools that allow users to safely and easily perform complex tasks. If you’re interested in seeing the diverse offerings in OST, we recommend scheduling an expert led demo.

The post Mapping Virtual to Physical Addresses Using Superfetch appeared first on Outflank.
