As I wrote about last week, there are holiday shopping-related scams already popping up all over the place.
But another aspect of security that many shoppers don’t consider this time of year is the security of the products they’re buying, even through a legitimate online marketplace.
This is a glaring issue with home security cameras and Wi-Fi-connected doorbells, but I can’t imagine these are particularly popular holiday gifts. With virtually everything being connected to the internet somehow these days, everything is a potential security risk if you’re buying a new piece of technology.
Take smartwatches, for example. Apple Watches and Samsung Galaxy watches are always popular on wishlists this time of year because they’re high-priced items you normally wouldn’t buy for yourself. Many shoppers might be hunting for a deal and not looking to spend hundreds on a gift, so any sort of cheaper alternative could be appealing.
I searched for “smart watches” on Amazon, and the results page displayed four different watches from four different vendors as its “Top Results,” none of which were from Samsung or Apple. Well-known vendors are certainly not immune to security issues or vulnerabilities, but at least users can be confident that any known vulnerabilities will be disclosed and patched by those companies as they pop up.
The top result is for a $29.99 smartwatch that offers sleep tracking, blood pressure monitoring, dozens of different workout modes, step tracking, and more. However, a few security red flags jump out right away with this deal (after all, if it seems too good to be true, it probably is). Amazon states the seller is a company called “Nerunsa,” but a quick search did not turn up any legitimate information on who this company is, where they’re based, or the sort of security bona fides you’d be hoping for. The only search results are for the company’s Amazon store page and a few eBay listings for people reselling the watch in question.
The app listed as supporting the watch is called “GloryFit” on the Google Play and Apple app stores, and its privacy policy is equally vague. It states that the app will collect all the information you’d expect for someone using a smartwatch — phone calls, text messages, GPS location, personal information, health information, etc. But the policy also states that, when the user accepts it, “You hereby consent to our process and disclose personal information to our affiliated companies (which are in the communications, social media, technology and cloud businesses) and to Third Party Service Providers for the purposes of this Privacy Policy.” And it’s not particularly clear what those other companies do, exactly — Google was no help here, either.
Apple AirTags are another popular tech gift every year and are usually featured in major retailers’ Black Friday sales. I personally have my own concerns about any type of tracking tag coming into my house, but that’s for another column.
On Walmart, which is increasingly trying to compete with Amazon by offering more products online, I searched for “smart tag” and found three results that appeared ahead of Apple’s legitimate Air Tags. The second-most-popular result is for a “Bluetooth Tracker and Item Locator” that’s only $15.98, compared to $86.88 for a four-pack of Apple’s. This tracker is listed as being made by “AILIUTOP,” which also remains elusive on the internet and does not seem to have any sort of legitimate contact information available to the public. Their store page on Walmart indicates the seller offers many types of products, from clothing to home goods and more.
This seems like a good bargain as a gift for someone who is always losing their keys or wallet or wants to make sure their bicycle is secure when they lock it up somewhere. But purchasing these types of “smart” devices with so much uncertainty poses a few issues.
If you do experience some sort of security failure or issue, there is no easy way to contact any of these vendors through the traditional means that the average user would go searching for. These vendors have no clear history of responsibly disclosing vulnerabilities, releasing security updates, or testing their products’ security before release.
When these types of gifts handle sensitive information like personal details, health data, or physical location, users should be confident that their information is stored securely, or at least that there’s a way to contact the vendor should they have any questions.
When searching for holiday gifts online, make sure you’re buying from a trusted vendor. If you haven’t heard of the vendor before, take a few extra minutes to look them up, read their app’s privacy policy, and scan the reviews for clear signs of bot activity, such as repetitive words or phrases or the same photo appearing across multiple reviews.
The one big thing
The 2023 Cisco Talos Year in Review is now available to download. Once again, the Talos team has meticulously combed through a massive amount of data to analyze the major trends that have shaped the threat landscape in 2023. Global conflict influenced a lot of these trends, altering the tactics and approaches of many threat actors. In operations ranging from espionage to cybercrime, we’ve seen geopolitical events have a significant impact on the way these are carried out.
Why do I care?
The Year in Review report includes new data and telemetry from Talos about attacker trends, popular malware seen in the wild, and much more. Despite the accelerated pace of many threat actor campaigns and the geopolitical events that shaped them, our report shows that the defensive community’s diligence, inventiveness and collaborative efforts are helping to push adversaries back.
More than six million people are reportedly victims of a large data breach at DNA and genealogy testing firm 23andMe. The breach is larger than initially expected: more than 5.5 million of the affected users had opted into the company’s “DNA Relatives” feature, which allows customers to automatically share some of their data with other users, and another 1 million-plus users had their family tree information accessed. The attackers accessed the accounts through password reuse: victims likely used easy-to-guess login information or passwords they had reused across multiple other accounts. 23andMe was not the target of the initial breach, nor was a company account the source of the compromised credentials. Security experts are urging users to move away from traditional username-and-password login methods as these types of attacks happen more often, instead moving toward multi-factor authentication or passwordless logins. (TechCrunch, Wall Street Journal)
Apple released emergency fixes for two zero-day vulnerabilities in its WebKit browser engine that have already been exploited in the wild. The company reported that the flaws are being exploited on devices running iOS versions released before iOS 16.7.1 (released on Oct. 10, 2023). New patches, which users should install immediately, are available for iOS, iPadOS, macOS Sonoma and the Safari web browser. The two vulnerabilities, tracked as CVE-2023-42916 and CVE-2023-42917, leave affected devices vulnerable to adversaries accessing sensitive information on targeted devices. CVE-2023-42917 could also allow an attacker to execute arbitrary code on the targeted machine. (SC Magazine, Decipher)
Security researchers say a new threat actor known as “AeroBlade” compromised a U.S. aerospace company for more than a year. The actor reportedly started testing their malware and infection chain on the targeted network in September 2022 and executed malware on the network in July 2023. The activity sat undetected for months due to anti-analysis techniques. It is currently unknown what actions, if any, the actor carried out during that time or if they compromised any user or customer data. The initial infection began with a Microsoft Word lure document titled “SOMETHING WENT WRONG Enable Content to load the document.” The ensuing malicious Microsoft Word template (DOTM) file then loaded a DLL that served as a reverse shell. Researchers say the attacker’s intent was likely to steal data from the target to sell it, potentially supply it to international competitors, or use it to extort the target into paying a ransom. (Dark Reading, Bleeping Computer)
Can’t get enough Talos?
Security journalists from Decipher bring you the headlines, including new U.S. government sanctions on threat actor groups in our latest Threat Spotlight video.
Then, Hazel chats to Talos security researcher Joe Marshall to discuss the Talos 2023 Year in Review, and Project PowerUp, the story of how Cisco Talos worked with a multi-national, multi-company coalition of volunteers and experts to help “keep the lights on” in Ukraine, by injecting a measure of stability in Ukraine’s power transmission grid.
Virtual (Please note: This presentation will only be given in German)
The annual end-of-year IT event where Cisco experts, including Gergana Karadzhova-Dangela from Cisco Talos Incident Response, discuss forward-looking topics in digitalization together with attendees.
Each year brings new threats that take advantage of increasingly complex security environments. Whether it’s Volt Typhoon targeting critical infrastructure organizations across the United States or ALPHV launching an attack against casino giant MGM, threat actors are becoming bolder and more evasive. That’s why it’s never been more important to leverage broad telemetry sources, deep network insights and threat intelligence to respond effectively and recover faster from sophisticated attacks. Join Amy Henderson, Director of Strategic Planning and Communications at Cisco Talos and Briana Farro, Director of XDR Product Management at Cisco, as they discuss some of the top threat trends and threats we have seen this past year and how to leverage security technology like XDR and network insights to fight against them.
The NIS2 Directive is a crucial step toward securing Europe’s critical infrastructure and essential services in an increasingly interconnected world. Organizations must act now to prepare for the new requirements, safeguard their operations, and maintain a robust cybersecurity posture. Gergana Karadzhova-Dangela from Cisco Talos Incident Response and other Cisco experts will talk about how organizations can best prepare for the coming regulations.
Most prevalent malware files from Talos telemetry over the past week
C2 solution that communicates directly over Bluetooth Low Energy (BLE) with your Bash Bunny Mark II. Send your Bash Bunny all the instructions it needs, just over the air.
Overview
Structure
Installation & Start
Install required dependencies
pip install pygatt "pygatt[GATTTOOL]"
Make sure BlueZ is installed and gatttool is usable
sudo apt install bluez
Download BlueBunny's repository (and switch into the correct folder)
git clone https://github.com/90N45-d3v/BlueBunny
cd BlueBunny/C2
Start the C2 server
sudo python c2-server.py
Plug your Bash Bunny with the BlueBunny payload into the target machine (payload at: BlueBunny/payload.txt).
Visit your C2 server from your browser on localhost:1472 and connect your Bash Bunny (Your Bash Bunny will light up green when it's ready to pair).
Manual communication with the Bash Bunny through Python
You can use BlueBunny's BLE backend and communicate with your Bash Bunny manually.
Example Code
# Import the backend (BlueBunny/C2/BunnyLE.py)
import BunnyLE

# Define the data to send
data = "QUACK STRING I love my Bash Bunny"

# Define the type of the data to send ("cmd" or "payload")
# (payload data will be temporarily written to a file, to execute multiple commands like in a payload script file)
d_type = "cmd"

# Initialize BunnyLE
BunnyLE.init()

# Connect to your Bash Bunny
bb = BunnyLE.connect()

# Send the data and let it execute
BunnyLE.send(bb, data, d_type)
Troubleshooting
Connecting your Bash Bunny doesn't work? Try the following instructions:
Try connecting a few more times
Check if your Bluetooth adapter is available
Restart the system your C2 server is running on
Check if your Bash Bunny is running the BlueBunny payload properly
How far away from your Bash Bunny are you? Is the environment (distance, interference, etc.) still suitable for a typical BLE connection?
Bugs within BlueZ
The Bluetooth stack used is well known but also quite buggy. If starting the connection with your Bash Bunny does not work, it is probably a temporary problem in BlueZ. Errors caused by these temporary bugs usually disappear, at the latest, after rebooting the C2's operating system, so don't be surprised if they show up.
As I said, BlueZ, the basis for the Bluetooth functionality used in BlueBunny, is somewhat bug-prone. If you encounter any non-temporary bugs when connecting to the Bash Bunny, or any other bugs or difficulties in the BlueBunny project, you are always welcome to contact me, whether it's a problem, an idea or solution, or just some nice feedback.
In his keynote at LABScon23, SentinelLabs’ Principal Threat Researcher Tom Hegel addressed a crucial but often overlooked aspect of global cybersecurity: cyber threat activity in less-monitored regions, particularly Africa.
Focusing on China’s strategic use of soft power across the African continent, Hegel provides a compelling analysis of how technology and investments are wielded as tools of influence and control.
Highlighting its significant investments in key sectors, Hegel explores how China has established strategic influence in African telecommunications, finance, and surveillance sectors and the implications this has for cybersecurity.
While noting that such investments are attractive to African countries for their undoubted benefits, the talk raises concerns about the trade-offs. In telecommunications, Chinese firms like Huawei and ZTE have been linked to potential cases of surveillance and control, evidenced by actions like internet clampdowns in Zimbabwe during politically sensitive times. In finance, an intricate web of financial engagements provides worrying opportunities for cyber espionage. Initiatives like the Safe City projects bring technological advancements, but at the potential price of civil and political surveillance.
Hegel concludes with a call to action for the cybersecurity community: collaborative efforts to monitor and understand cyber activity in these regions are essential, not only for the direct protection of entities in undermonitored areas but also for a broader understanding of the global cyber threat landscape.
Connecting the dots between regional cybersecurity issues in Africa and their global repercussions, this talk advocates for a more inclusive view of global cyber threats, highlighting the need for a unified and informed response from the cybersecurity community.
Tom Hegel is a Principal Threat Researcher with SentinelOne. He comes from a background of detection and analysis of malicious actors, malware, and global events with an application to the cyber domain. His past research has focused on threats impacting individuals and organizations across the world, primarily targeted attackers.
About LABScon
This presentation was featured live at LABScon 2023, an immersive 3-day conference bringing together the world’s top cybersecurity minds, hosted by SentinelOne’s research arm, SentinelLabs.
We were also finalists in these categories but did not win:
XCellence in Midmarket Solution: Software = Finalist
XCellence in Midmarket Solution: Services = Finalist
XCellence in Solutions Track Presentation = Finalist
XCellence in Solutions Pavilion Strategy = Finalist
Best in Show = Finalist
We are publishing a set of custom CodeQL queries for Go and C. We have used them to find critical issues that the standard CodeQL queries would have missed. This new release of a continuously updated repository of CodeQL queries joins our public Semgrep rules and Automated Testing Handbook in an effort to share our technical expertise with the community.
For the initial release of our internal CodeQL queries, we focused on issues like misused cryptography, insecure file permissions, and bugs in string methods:
Go / Message not hashed before signature verification: This query detects calls to (EC)DSA APIs with a message that was not hashed. If the message is longer than the expected hash digest size, it is silently truncated.
Go / File permission flaws: This query finds non-octal (e.g., 755 vs 0o755) and unsupported (e.g., 04666) literals used as a filesystem permission parameter (FileMode).
Go / Trim functions misuse: This query finds calls to string.{Trim,TrimLeft,TrimRight} with the second argument not being a cutset but a continuous substring to be trimmed.
Go / Missing MinVersion in tls.Config: This query finds cases when you do not set the tls.Config.MinVersion explicitly for servers. By default, version 1.0 is used, which is considered insecure. This query does not mark explicitly set insecure versions.
C / CStrNFinder: This query finds calls to functions that take a string and its size as separate arguments (e.g., strncmp, strncat) but the size argument is wrong.
C / Missing null terminator: This query finds incorrectly initialized strings that are passed to functions expecting null-byte-terminated strings.
CodeQL 101
CodeQL is the static analysis tool powering GitHub Advanced Security and is widely used throughout the community to discover vulnerabilities. CodeQL operates by transforming the code being tested into a database that is queryable using a Datalog-like language. While the core engine of CodeQL remains proprietary and closed source, the tool offers open-source libraries implementing various analyses and sets of security queries.
To test our queries, install the CodeQL CLI by following the official documentation. Once the CodeQL CLI is ready, download Trail of Bits’ query packs and check whether the new queries are detected:
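The download-and-check commands did not survive extraction; a plausible sequence, assuming the packs are published as trailofbits/go-queries and trailofbits/cpp-queries (the pack names are our assumption), would be:

codeql pack download trailofbits/go-queries trailofbits/cpp-queries
codeql resolve qlpacks | grep trailofbits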
Now go to your project’s root directory and generate a CodeQL database, specifying either go or cpp as the programming language:
codeql database create codeql.db --language go
If the generation hasn’t succeeded or the project has a complex build system, use the --command flag to specify the build command. Finally, execute Trail of Bits’ queries against the database:
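The command itself was lost in extraction; it presumably resembled the following, with the pack name again assumed:

codeql database analyze codeql.db --format=sarif-latest --output=results.sarif trailofbits/go-queries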
Figure 1: An example signature generation and verification function
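The figure’s code did not survive extraction. Based on the discussion below, it likely resembled this sketch (the function and variable names are our guesses):

package main

import (
    "crypto/ecdsa"
    "crypto/elliptic"
    "crypto/rand"
)

// sign passes raw, potentially long data straight to the ECDSA API
func sign(data []byte, priv *ecdsa.PrivateKey) ([]byte, error) {
    return ecdsa.SignASN1(rand.Reader, priv, data) // BUG: data is not hashed first
}

// verify has the same problem: for P-256, only the first 32 bytes of data matter
func verify(data, sig []byte, pub *ecdsa.PublicKey) bool {
    return ecdsa.VerifyASN1(pub, data, sig)
}

func main() {
    priv, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    sig, _ := sign([]byte("a long message whose tail is silently ignored..."), priv)
    _ = verify([]byte("a long message whose tail is silently ignored..."), sig, &priv.PublicKey)
}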
Is the code above secure? Of course it isn’t. The issue lies in passing raw, unhashed, and potentially long data to the ecdsa.SignASN1 and ecdsa.VerifyASN1 methods, while the Go crypto/ecdsa package (and a few other packages) expects data for signing and verification to be a hash of the actual data.
This behavior means that the code signs and verifies only the first 32 bytes of the file, as the size of the P-256 curve used in the example is 32 bytes.
The silent truncation of input data occurs in the hashToNat method, which is used internally by the ecdsa.{SignASN1,VerifyASN1} methods:
// hashToNat sets e to the left-most bits of hash, according to
// SEC 1, Section 4.1.3, point 5 and Section 4.1.4, point 3.
func hashToNat[Point nistPoint[Point]](c *nistCurve[Point], e *bigmod.Nat, hash []byte) {
    // ECDSA asks us to take the left-most log2(N) bits of hash, and use them as
    // an integer modulo N. This is the absolute worst of all worlds: we still
    // have to reduce, because the result might still overflow N, but to take
    // the left-most bits for P-521 we have to do a right shift.
    if size := c.N.Size(); len(hash) > size {
        hash = hash[:size]
    }
    // ...
}
We have seen this vulnerability in real-world codebases and the impact was critical. To address the issue, there are a couple of approaches:
Length validation. A simple approach to prevent the lack-of-hashing issues is to validate the length of the provided data, as done in the go-ethereum library.
Static detection. Another approach is to statically detect the lack of hashing. For this purpose, we developed the tob/go/msg-not-hashed-sig-verify query, which detects all data flows to potentially problematic methods, ignoring flows that initiate from or go through a hashing function or slicing operation.
An interesting problem we had to solve was how to set the starting points (sources) for the data flow analysis. We could have used the UntrustedFlowSource class for that purpose; then the analysis would find flows from any input potentially controlled by an attacker. However, UntrustedFlowSource often needs to be extended per project to be useful, so using it for our analysis would miss a lot of flows in a lot of projects. Therefore, our query focuses on finding the longest data flows, which are more likely to indicate potential vulnerabilities.
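File permission flaws in Go

The example figure for this query was also lost in extraction. Here is a minimal sketch of the pattern the next paragraph discusses (the file name is ours, and the literal 400 is chosen to match the 0o620 result described below):

package main

import "os"

func main() {
    // The developer wants owner-read-only (0o400, r--------), but the
    // decimal literal 400 is actually 0o620 (rw--w----).
    if err := os.Chmod("secret.key", 400); err != nil { // flagged by tob/go/file-perms-flaws
        panic(err)
    }
    // What was almost certainly intended:
    if err := os.Chmod("secret.key", 0o400); err != nil {
        panic(err)
    }
}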
Okay, so file permissions are usually represented as octal integers. In our case, the secret key file would end up with the permission set to 0o620 (or rw--w----), allowing non-owners to modify the file. The integer literal used in the call to the os.Chmod method is—most probably—not the one that a developer wanted to use.
To find unexpected integer values used as FileModes, we implemented a WYSIWYG (“what you see is what you get”) heuristic in the tob/go/file-perms-flaws CodeQL query. The “what you see” is a cleaned-up integer literal (a hard-coded number of the FileMode type)—with removed underscores, a removed base prefix, and left-padded zeros. The “what you get” is the same integer converted to an octal representation. If these two parts are not equal, there may be a bug present.
// what you see
fileModeAsSeen = ("000" + fileModeLitStr.replaceAll("_", "").regexpCapture("(0o|0x|0b)?(.+)", 2)).regexpCapture("0*(.{3,})", 1)
// what you get
and fileModeAsOctal = octalFileMode(fileModeInt)
// what you see != what you get
and fileModeAsSeen != fileModeAsOctal
Figure 5: The WYSIWYG heuristic in CodeQL
To minimize false positives, we filter out numbers that are commonly used constants (like 0755 or 0644) but in decimal or hexadecimal form. These known, valid constants are explicitly defined in the isKnownValidConstant predicate (Figure 6).
Figure 6: The CodeQL predicate that filters out common file permission constants
Using a non-octal representation of numbers isn’t the only possible pitfall when dealing with file permissions. Another issue to be aware of is the use of more than nine bits in calls to permission-changing methods. File permissions are encoded only in the first nine bits, and the other bits encode file modes such as the sticky bit or setuid. Some permission-changing methods—like os.Chmod or os.Mkdir—ignore a subset of the mode bits, depending on the operating system. The tob/go/file-perms-flaws query warns about this issue as well.
String trimming misuses in Go
API ambiguities are a common source of errors, especially when there are multiple methods with similar names and purposes accepting the same set of arguments. This is the case for Go’s strings.Trim family of methods. Consider the following calls:
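The calls shown in the original post were lost in extraction; here is a sketch in the same spirit (the example strings are ours):

package main

import (
    "fmt"
    "strings"
)

func main() {
    // TrimPrefix removes exactly the leading "file://" substring
    fmt.Println(strings.TrimPrefix("file://file.txt", "file://")) // "file.txt"

    // TrimLeft treats "file://" as a SET of characters {f, i, l, e, :, /}
    // and keeps stripping from the left until it hits a non-member
    fmt.Println(strings.TrimLeft("file://file.txt", "file://")) // ".txt"
}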
Can you tell the difference between these calls and determine which one works “as expected”?
According to the documentation, the strings.TrimLeft method accepts a cutset (i.e., a set of characters) for removal, rather than a prefix. Consequently, it deletes more characters than one would expect. While the above example may seem innocent, a bug in a cross-site scripting (XSS) sanitization function, for example, could have devastating consequences.
When looking for misused strings.Trim{Left,Right} calls, the tricky part is defining what qualifies as “expected” behavior. To address this challenge, we developed the tob/go/trim-misuse CodeQL query with simple heuristics to differentiate between valid and possibly mistaken calls, based on the cutset argument. We consider a Trim operation invalid if the argument contains repeated characters or meets all of the following conditions:
Is longer than two characters
Contains at least two consecutive alphanumeric characters
Is not a common list of continuous characters
While the heuristics look oversimplified, they worked well enough in our audits. In CodeQL, the above rules are implemented as shown below. The cutset is a variable corresponding to the cutset argument of a strings.Trim{Left,Right} method call.
// repeated characters imply the bug
cutset.length() != unique(string c | c = cutset.charAt(_) | c).length()
or
(
  // long strings are considered suspicious
  cutset.length() > 2
  // at least one alphanumeric
  and exists(cutset.regexpFind("[a-zA-Z0-9]{2}", _, _))
  // exclude probable false-positives
  and not cutset.matches("%1234567%")
  and not cutset.matches("%abcdefghijklmnopqrstuvwxyz%")
)
Figure 8: CodeQL implementation of heuristics for a Trim operation
Interestingly, misuses of the strings.Trim methods are so common that Go developers are considering deprecating and replacing the problematic functions.
Identifying missing minimum TLS version configurations in Go
When using static analysis tools, it’s important to know their limitations. The official go/insecure-tls CodeQL query finds TLS configurations that accept insecure (outdated) TLS versions (e.g., SSLv3, TLSv1.1). It accomplishes that task by comparing values provided to the configuration’s MinVersion and MaxVersion settings against a list of deprecated versions. However, the query does not warn about configurations that do not explicitly set the MinVersion.
Why should this be a concern? The reason is that the default MinVersion for servers is TLSv1.0. Therefore, in the example below, the official query would mark only server_explicit as insecurely configured, despite both servers using the same MinVersion.
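The code for Figure 9 was lost in extraction; here is a sketch consistent with the description (server_explicit is named in the text; the rest is our reconstruction):

package main

import "crypto/tls"

func main() {
    // flagged by the official go/insecure-tls query
    server_explicit := &tls.Config{MinVersion: tls.VersionTLS10}

    // not flagged, even though servers default to the same TLSv1.0
    server_default := &tls.Config{}

    _ = server_explicit
    _ = server_default
}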
Figure 9: Explicit and default configuration of the MinVersion setting
The severity of this issue is rather low since the default MinVersion for clients is a secure TLSv1.2. Nevertheless, we filled the gap and developed the tob/go/missing-min-version-tls CodeQL query, which detects tls.Config structures without the MinVersion field explicitly set. The query skips reporting configurations used for clients and limits false positives by filtering out findings where the MinVersion is set after the structure initialization.
String bugs in C and C++
Building on top of the insightful cstrnfinder research conducted by one of my Trail of Bits colleagues, we developed the tob/cpp/cstrnfinder query. This query aims to identify invalid numeric constants provided to calls to functions that expect a string and its corresponding size as input—such as strncmp, strncpy, and memmove. We focused on detecting three erroneous cases:
Buffer underread. This occurs when the size argument (number 20 in the example below) is slightly smaller than the source string’s length:
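The code for this example was lost in extraction; a minimal sketch consistent with the description (the function and variable names are ours):

#include <string.h>

int is_safe_path(const char *path) {
    // "org/tob/test/SafeData" is 21 bytes long, but only 20 are compared
    return strncmp(path, "org/tob/test/SafeData", 20) == 0;
}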
Here, the length of the "org/tob/test/SafeData" string is 21 bytes (22 if we count the terminating null byte). However, we are comparing only the first 20 bytes. Therefore, a string like "org/tob/test/SafeDatX" is incorrectly matched.
Buffer overread. This arises when the size argument (14 in the example below) is greater than the length of the input string, causing the function to read out of bounds.
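Again, the original example is missing; here is a sketch matching the description (names are ours):

#include <string.h>

int check_password(const char *pass) {
    // "Silmarillion" is 12 bytes (13 with the null byte), but 14 bytes are compared
    return memcmp(pass, "Silmarillion", 14) == 0;
}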
In the example, the length of the "Silmarillion" string is 12 bytes (13 with the null byte). If the password is longer than 13 bytes and starts with the "Silmarillion" substring, then the memcmp function reads data outside of the pass buffer. While functions operating on strings stop reading input buffers on a null byte and will not overread the input, the memcmp function operates on bytes and does not stop on null bytes.
Incorrect use of string concatenation function. If the size argument (BUFSIZE-1 in the example below) is greater than the source string’s length (the length of “, Beowulf\x00”, so 10 bytes), the size argument may be incorrectly interpreted as the destination buffer’s size (BUFSIZE bytes in the example), instead of the input string’s size. This may indicate a buffer overflow vulnerability.
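The example code is missing here as well; a sketch based on the description (the function name and the file-reading details are ours):

#include <string.h>

#define BUFSIZE 256

char all_books[BUFSIZE];

void append_beowulf(void) {
    // all_books may already hold ~250 bytes read from books.txt.
    // BUFSIZE-1 is the destination buffer's size, not the remaining space,
    // so strncat can write past the end of all_books.
    strncat(all_books, ", Beowulf", BUFSIZE - 1);
}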
In the code above, the all_books buffer can hold a maximum of 256 bytes of data. If the books.txt file contains 250 characters, then the remaining space in the buffer before the call to the strncat function is 6 bytes. However, we instruct the function to add up to 255 (BUFSIZE-1) bytes to the end of the all_books buffer, so a few bytes of the “, Beowulf” string will end up outside the allocated space. What we should do instead is instruct strncat to add at most 5 bytes (leaving 1 byte for the terminating \x00).
Both C and C++ allow developers to construct fixed-size strings with an initialization literal. If the length of the literal is greater than or equal to the allocated buffer size, then the literal is truncated and the terminating null byte is not appended to the string.
char b1[18] = "The Road Goes Ever On"; // missing null byte, warning
char b2[13] = "Ancrene Wisse"; // missing null byte, NO WARNING
char b3[] = "Farmer Giles of Ham"; // correct initialization
char b4[3] = {'t', 'o', 'b'}; // not a string, lack of null byte is expected
Figure 13: Example initializations of C strings
Interestingly, C compilers warn about initializers longer than the buffer size, but don’t raise alarms for initializers whose length equals the buffer size—even though neither of the resulting strings is null-terminated. C++ compilers return errors in both cases.
The tob/cpp/no-null-terminator query uses data flow analysis to find incorrectly initialized strings passed to functions expecting a null-terminated string. Such function calls result in out-of-bounds read or write vulnerabilities.
CodeQL: past, present, and future
This will be a continuing project from Trail of Bits, so be on the lookout for more soon! One of our most valuable assets is our expertise in automated bug finding, and this new CodeQL repository, the Semgrep rules, and the Automated Testing Handbook are key ways of helping others benefit from that work. Please use these resources and report any issues or improvements to them!
Cisco Talos has disclosed 10 vulnerabilities over the past two weeks, including nine that exist in a popular online PDF reader that offers a browser plugin.
Attackers could exploit these vulnerabilities in the Foxit PDF Reader to carry out a variety of malicious actions, most notably gaining the ability to execute arbitrary code on the targeted machine. Foxit aims to have feature parity with Adobe Acrobat Reader, the most popular PDF-reading software currently on the market. The company offers paid versions of its software for a variety of users, including individuals and enterprises. Foxit also offers browser plugins that run in a variety of web browsers, including Google Chrome and Mozilla Firefox.
Talos’ Vulnerability Research team also found an integer overflow vulnerability in the GPSd daemon, which is triggered if an attacker sends a specially crafted packet, causing the daemon to crash.
For Snort coverage that can detect the exploitation of these vulnerabilities, download the latest rule sets from Snort.org, and our latest Vulnerability Advisories are always posted on Talos Intelligence’s website.
Multiple vulnerabilities in Foxit PDF Reader
Discovered by Kamlapati Choubey.
Foxit PDF Reader contains multiple vulnerabilities that could lead to remote code execution if exploited correctly.
TALOS-2023-1837 (CVE-2023-32616) and TALOS-2023-1839 (CVE-2023-38573) can be exploited if an attacker embeds malicious JavaScript into a PDF, and the targeted user opens that PDF in Foxit. These vulnerabilities can trigger the use of a previously freed object, which can lead to memory corruption and arbitrary code execution.
TALOS-2023-1838 (CVE-2023-41257) works in the same way, but in this case, it is caused by a type confusion vulnerability.
Three other vulnerabilities could allow an attacker to create arbitrary HTA files in the context of the application and eventually gain the ability to execute arbitrary code on the targeted machine. TALOS-2023-1832 (CVE-2023-39542), TALOS-2023-1833 (CVE-2023-40194) and TALOS-2023-1834 (CVE-2023-35985) are all triggered if the targeted user opens a specially crafted file in the Foxit software or browser plugin.
An integer overflow vulnerability exists in the NTRIP Stream Parsing functionality of the GPSd daemon, which is used to collect and display GPS information in other software. A specially crafted network packet can lead to memory corruption. An attacker can send a malicious packet to trigger TALOS-2023-1860 (CVE-2023-43628).
According to GPSd’s website, this service daemon powers the map service on Android mobile devices and is “ubiquitous in drones, robot submarines, and driverless cars.”
Buildroot - embedded Linux systems builder tool
Discovered by Claudio Bozzato and Francesco Benvenuto.
Talos researchers recently found multiple data integrity vulnerabilities in Buildroot, a tool that automates builds of Linux environments for embedded systems.
An adversary could carry out a man-in-the-middle attack to exploit TALOS-2023-1845 (CVE-2023-43608) and TALOS-2023-1844 (CVE-2023-45842, CVE-2023-45839, CVE-2023-45838, CVE-2023-45840 and CVE-2023-45841) to execute arbitrary code in the builder.
As a direct consequence, an attacker could then also tamper with any file generated for Buildroot’s targets and hosts.
Malformed Excel file could lead to arbitrary code execution in WPS Office
Discovered by Marcin “Icewall” Noga.
An uninitialized pointer use vulnerability (TALOS-2023-1748/CVE-2023-31275) exists in the functionality of WPS Office, a suite of software for word and data processing, that handles Data elements in an Excel file.
A specially crafted malformed Excel file can lead to remote code execution.
WPS Office, previously known as Kingsoft Office, is a software suite for Microsoft Windows, macOS, Linux, iOS, Android, and HarmonyOS developed by Chinese software developer Kingsoft. It is installed by default on Amazon Fire tablet devices.
Talos disclosed this vulnerability in November despite no official fix or patch from Kingsoft, after the company did not respond to our notification attempts and the 90-day deadline outlined in Cisco’s third-party vendor vulnerability disclosure policy passed.
In this episode, the Beers with Talos team, led by special guest Dave Liebenberg, sets out to save Thanksgiving. The TurkeyLurkey man is the hero that everybody needs, but perhaps doesn't deserve.
For fans and detractors of Dave's Ranksgiving list alike, you'll be pleased to know he's back with a whole new order and some new snackable entrants.
Oh, and if it's security content you're after, we have some! Our 2023 Year in Review is out now, and the team recaps the top malware and attacker trends from the year. We also discussed the recent CNN article and Talos blog on our work to protect Ukraine's power grid.
PassBreaker is a command-line password cracking tool developed in Python. It allows you to perform various password cracking techniques such as wordlist-based attacks and brute force attacks.
Features
Wordlist-based password cracking
Brute force password cracking
Support for multiple hash algorithms
Optional salt value
Parallel processing option for faster cracking
Password complexity evaluation
Customizable minimum and maximum password length
Customizable character set for brute force attacks
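The example invocations the following paragraphs describe were lost in extraction. The commands below are a hypothetical reconstruction (the script name passbreaker.py and all flag spellings are assumptions; check the project's README for the real syntax):

# Wordlist attack with MD5 (all flags are assumed, not verified)
python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 --wordlist passwords.txt --algorithm md5

# Brute force over 6-8 character passwords built from the character set "abc123"
python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 --brute-force --min-length 6 --max-length 8 --character-set abc123

# Wordlist attack with SHA-256, trying only passwords that meet complexity requirements
python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 --wordlist passwords.txt --algorithm sha256 --complexity

# Parallel processing for faster cracking
python passbreaker.py 5f4dcc3b5aa765d61d8327deb882cf99 --wordlist passwords.txt --parallel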
This command attempts to crack the password with the hash value "5f4dcc3b5aa765d61d8327deb882cf99" using the MD5 algorithm and a wordlist from the "passwords.txt" file.
This command performs a brute force attack to crack the password with the hash value "5f4dcc3b5aa765d61d8327deb882cf99" by trying all possible combinations of passwords with a length between 6 and 8 characters, using the character set "abc123".
This command evaluates the complexity of passwords in the "passwords.txt" file and attempts to crack the password with the hash value "5f4dcc3b5aa765d61d8327deb882cf99" using the SHA-256 algorithm. It only tries passwords that meet the complexity requirements.
This command performs password cracking with parallel processing for faster cracking. It utilizes multiple processing cores, but it may consume more system resources.
These examples demonstrate different features and use cases of the "PassBreaker" password cracking tool. Users can customize the parameters based on their needs and goals.
Disclaimer
This tool is intended for educational and ethical purposes only. Misuse of this tool for any malicious activities is strictly prohibited. The developers assume no liability and are not responsible for any misuse or damage caused by this tool.
Contributing
Contributions are welcome! To contribute to PassBreaker, follow these steps:
Fork the repository.
Create a new branch for your feature or bug fix.
Make your changes and commit them.
Push your changes to your forked repository.
Open a pull request in the main repository.
Contact
If you have any questions, comments, or suggestions about PassBreaker, please feel free to contact me:
Fuzzing is great! Throwing randomized inputs at a target really fast can be unreasonably effective with the right setup. When starting on a new target, a fuzzing harness can iterate along with your reversing and auditing efforts, and you can sleep well at night knowing your cores are taking the night watch. When looking for bugs, our time is often limited, so any effort spent on tooling needs to be time well spent. LibAFL is a great library that lets us quickly adapt a fuzzer to our specific target. Not every target fits nicely into the "command-line program that parses a file" category, so LibAFL lets us craft fuzzers for our specific situations. This adaptability opens up the power of fuzzing for a wider range of targets.
Why a workshop
The following material comes from an internal workshop used as an introduction to LibAFL. This post is a summary of the workshop and includes a repository of exercises and examples for following along at home. It expects some existing understanding of Rust and fuzzing concepts. (If you need a refresher on Rust: Google's Comprehensive Rust course is great.)
There are already a few good resources for learning about LibAFL.
This workshop seeks to add to the existing corpus of example fuzzers built with LibAFL, with a focus on customizing fuzzers to our targets. You will also find a few starter problems for getting hands-on experience with LibAFL. Throughout the workshop we try to highlight the versatility and power of the library, letting you see where you can fit a fuzzer into your flow.
Course Teaser
As an aside, if you are interested in this kind of thing (security tooling, bugs, fuzzing), you may be interested in our Symbolic Execution course. We have a virtual session planned for February 2024 with ringzer0. There is more information at the end of this post.
The Fuzzers
The target
Throughout the workshop we beat up on a simple target that runs on Linux. This target is not very interesting, but acts as a good example target for our fuzzers. It takes in some text, line by line, and replaces certain identifiers (like {{XXd3sMRBIGGGz5b2}}) with names. To do so, it contains a function with a very large lookup tree. In this function many lookup cases can result in a segmentation fault.
//...
const char* uid_to_name(const char* uid) {
/*...*/ // big nested mess of switch statements
switch (nbuf[14]) {
case 'b':
// regular case, no segfault
addr = &names[0x4b9];
LOG("UID matches known name at %p", addr);
return *addr;
/*...*/
case '7':
// a bad case
addr = ((const char**)0x68c2);
// SEGFAULT here
LOG("UID matches known name at %p", addr);
return *addr;
/*...*/
This gives us a target with many diverging code paths and many reachable "bugs" to find. As we progress, we will adapt our fuzzers to this target, showing off some common ways we can mold a fuzzer to a target with LibAFL.
You can find our target here, and the repository includes a couple variations that will be useful for later examples. ./fuzz_target/target.c
Pieces of a Fuzzer
Before we dive into the examples, let's establish a quick understanding of modern fuzzer internals. LibAFL breaks a fuzzer down into pieces that can be swapped out or changed, making great use of Rust's trait system to do this. Below we have a diagram of a very simple fuzzer.
A block diagram of a minimal fuzzer
The script for this fuzzer could be as simple as the following.
while ! [ -f ./core.* ]
do
head -c 900 /dev/urandom > ./testfile
cat ./testfile | ./target
done
The simple fuzzer above follows three core steps.
1) Makes a randomized input
2) Runs the target with the new input
3) Keeps the created input if it causes a "win" (in this case, a win is a crash that produces a core file)
If you miss any of the above pieces, you won't have a very good fuzzer. We all have heard the sad tale of researchers who piped random inputs into their target, got an exciting crash, but were unable to ever reproduce the bug because they didn't save off the test case.
Even with the above pieces, that simple fuzzer will struggle to make any real progress toward finding bugs. It does not even have a notion of what progress means! Below we have a diagram of what a more modern fuzzer might look like.
A block diagram of a fuzzer with feedback
This fuzzer works off a set of existing inputs, which are randomly mutated to create the new test cases. The "mutations" are just a simple set of modifications to the input that can be quickly applied to generate new exciting inputs. Importantly, this fuzzer also uses observations from the executing target to know if an input was "interesting". Instead of only caring about crashes, a fuzzer with feedback can route mutated test cases back into the set of inputs to be mutated. This allows a fuzzer to make progress by iterating on an input, tracking down interesting features in the target.
LibAFL provides tools for each of these "pieces" of a fuzzer.
Implementors of the Executor trait will run a target with a given test case.
There are other important traits we will see as well. Be sure to look at the "Implementors" section of the trait documentation to see useful implementations provided by the library.
Exec fuzzer
Which brings us to our first example! Let's walk through a bare-bones fuzzer using LibAFL.
The source is well-commented, and you should read through it. Here we just highlight a few key sections of this simple fuzzer.
//...
let mut executor = CommandExecutor::builder()
.program("../fuzz_target/target")
.build(tuple_list!())
.unwrap();
let mut state = StdState::new(
StdRand::with_seed(current_nanos()),
InMemoryCorpus::<BytesInput>::new(),
OnDiskCorpus::new(PathBuf::from("./solutions")).unwrap(),
&mut feedback,
&mut objective,
).unwrap();
Our fuzzer uses a "state" object which tracks the set of input test cases, any solution test cases, and other metadata. Notice we are choosing to keep our inputs in memory, but save out the solution test cases to disk.
We use a CommandExecutor for executing our target program, which will run the target process and pass in the test case.
//...
let mutator = StdScheduledMutator::with_max_stack_pow(
havoc_mutations(),
9, // maximum mutation iterations
);
let mut stages = tuple_list!(StdMutationalStage::new(mutator));
We build a very simple pipeline for our inputs. This pipeline only has one stage, which will randomly select from a set of mutations for each test case.
//...
let scheduler = RandScheduler::new();
let mut fuzzer = StdFuzzer::new(scheduler, feedback, objective);
// load the initial corpus in our state
// since we lack feedback in this fuzzer, we have to force this,
state.load_initial_inputs_forced(&mut fuzzer, &mut executor, &mut mgr, &[PathBuf::from("../fuzz_target/corpus/")]).unwrap();
// fuzz
fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr).expect("Error in fuzz loop");
With a fuzzer built from a scheduler and some feedbacks (here we use a ConstFeedback::False so we have no feedback except for the objective feedback, which is a CrashFeedback), we can load our initial entries and start to fuzz. We use the created stages, the chosen executor, the state, and an event manager to start fuzzing. Our event manager will let us know when we start to get "wins".
Our fragile target quickly starts giving us crashes, even with no feedback. Working from a small set of useful inputs helps our mutations find crashing inputs.
This simple execution fuzzer gives us a good base to work from as we add features to our fuzzer.
Exec fuzzer with custom feedback
We can't effectively iterate on interesting inputs without feedback. Currently our random mutations must generate a crashing case in one go. If we can add feedback to our fuzzer, then we can identify test cases that did something interesting. We will loop those interesting test cases back into our set of cases for further mutation.
There are many different sources we could turn to for this information. For this example, let's use the fuzz_target/target_dbg binary, which is a build of our target with some debug output on stderr. By looking at this debug output, we can start to identify interesting cases. If a test case produces debug output we haven't seen before, then we can say it is interesting and worth iterating on.
There isn't an existing implementation of this kind of feedback in the LibAFL library, so we will have to make our own! If you want to try this yourself, we have provided a template file in the repository.
The LibAFL repo provides a StdErrObserver structure we can use with our CommandExecutor. This observer will allow our custom feedback structure to receive the stderr output from our run. All we need to do is create a structure that implements the is_interesting method of the Feedback trait, and we should be good to go. In that method we are provided with the state, the mutated input, and the observers; we just have to get the debug output from the StdErrObserver and determine if we reached somewhere new.
I encourage you to try implementing this feedback yourself. You may want to find some heuristics to ignore unhelpful debug messages. We want to avoid reporting too many inputs as useful, so we don't overfill our input corpus. The input corpus is the set of inputs we use for generating new test cases. We will waste lots of time when there are inputs in that set that are not actually helping us dig towards a win. Ideally we want each of these inputs to be as small and quick to run as possible, while exercising a unique path in our target.
In our solution, we simply keep a set of seen hashes. We report an input to be interesting if we see it caused a unique hash.
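The exact Feedback trait signature varies between LibAFL versions, so rather than reproduce the full plumbing, here is the core of that idea as a self-contained sketch (the struct and method names are ours; our real implementation wraps this logic inside is_interesting):

use std::collections::HashSet;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Tracks hashes of stderr output we have already seen.
struct StderrNovelty {
    seen: HashSet<u64>,
}

impl StderrNovelty {
    fn new() -> Self {
        Self { seen: HashSet::new() }
    }

    /// Returns true ("interesting") if this stderr output is new.
    fn is_novel(&mut self, stderr: &str) -> bool {
        let mut hasher = DefaultHasher::new();
        stderr.hash(&mut hasher);
        // HashSet::insert returns true only if the value was not already present
        self.seen.insert(hasher.finish())
    }
}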
Relying on the normal side effects of a program (like debug output, system interactions, etc.) is not a very reliable way to deeply explore a target. There may be many interesting features that we miss using this kind of feedback. The feedback of choice for many modern fuzzers is "code coverage". By observing what blocks of code are being executed, we can gain insight into what inputs are exposing interesting logic.
Being able to collect that information, however, is not always straightforward. If you have access to the source code, you may be able to use a compiler to instrument the code with this information. If not, you may have to find ways to dynamically instrument your target through binary modification, emulation, or other means.
AFL++ provides a version of clang with compiler-level instrumentation for providing code coverage feedback. LibAFL can observe the information produced by this instrumentation, and we can use it for feedback. We have a build of our target using afl-clang-fast. With this build (target_instrumented), we can use the LibAFL ForkserverExecutor to communicate with our instrumented target. The HitcountsMapObserver can use shared memory for receiving our coverage information each run.
//...
let mut shmem_provider = UnixShMemProvider::new().unwrap();
let mut shmem = shmem_provider.new_shmem(MAP_SIZE).unwrap();
// write the id to the env var for the forkserver
shmem.write_to_env("__AFL_SHM_ID").unwrap();
let shmembuf = shmem.as_mut_slice();
// build an observer based on that buffer shared with the target
let edges_observer = unsafe {HitcountsMapObserver::new(StdMapObserver::new("shared_mem", shmembuf))};
// use that observed coverage to feedback based on obtaining maximum coverage
let mut feedback = MaxMapFeedback::tracking(&edges_observer, true, false);
// This time we can use a fork server executor, which talks to the fork server compiled into our instrumented target
// it gets a greater number of execs per sec by not having to init the process for each run
let mut executor = ForkserverExecutor::builder()
.program("../fuzz_target/target_instrumented")
.shmem_provider(&mut shmem_provider)
.coverage_map_size(MAP_SIZE)
.build(tuple_list!(edges_observer))
.unwrap();
The compiled-in fork server should also reduce our time needed to instantiate a run, by forking off partially instantiated processes instead of starting from scratch each time. This should offset some of the cost of our instrumentation.
When executed, our fuzzer quickly finds new paths through the process, building up our corpus of interesting cases and guiding our fuzzer.
Many of these mutations are wasteful for our target. In order to get to the vulnerable uid_to_name function, the input must first pass a valid_uid check. In this check, characters outside of the range A-Za-z0-9\-_ are rejected. Many of the havoc_mutations, such as the BytesRandInsertMutator, will introduce characters that are not in this range. This results in many test cases that are wasted.
With this knowledge about our target, we can use a custom mutator that will insert new bytes only in the desired range. Implementing the Mutator trait is simple: we just have to provide a mutate function.
//...
impl<I, S> Mutator<I, S> for AlphaByteSwapMutator
where
I: HasBytesVec,
S: HasRand,
{
fn mutate(
&mut self,
state: &mut S,
input: &mut I,
_stage_idx: i32,
) -> Result<MutationResult, Error> {
/*
return Ok(MutationResult::Mutated) when you mutate the input
or Ok(MutationResult::Skipped) when you don't
*/
Ok(MutationResult::Skipped)
}
}
If you want to try this for yourself, feel free to use the aflcc_custom_mut_template as a template to get started.
In our solution we use a set of mutators, including our new AlphaByteSwapMutator and a few existing mutators. This set should hopefully result in a greater number of valid test cases that make it to the uid_to_name function.
//...
// we will specify our custom mutator, as well as two other helpful mutators for growing or shrinking
let mutator = StdScheduledMutator::with_max_stack_pow(
tuple_list!(
AlphaByteSwapMutator::new(),
BytesDeleteMutator::new(),
BytesInsertMutator::new(),
),
9,
);
Then in our mutator we use the state's source of random to choose a location, and a new byte from a set of valid characters.
//...
fn mutate(
&mut self,
state: &mut S,
input: &mut I,
_stage_idx: i32,
) -> Result<MutationResult, Error> {
// here we apply our random mutation
// for our target, simply swapping a byte should be effective
// so long as our new byte is 0-9A-Za-z or '-' or '_'
// skip empty inputs
if input.bytes().is_empty() {
return Ok(MutationResult::Skipped)
}
// choose a random byte
let byte: &mut u8 = state.rand_mut().choose(input.bytes_mut());
// don't replace tag chars '{{}}'
if *byte == b'{' || *byte == b'}' {
return Ok(MutationResult::Skipped)
}
// now we can replace that byte with a known good byte
*byte = *state.rand_mut().choose(&self.good_bytes);
// technically we should say "skipped" if we replaced a byte with itself, but this is fine for now
Ok(MutationResult::Mutated)
}
And that is it! The custom mutator works seamlessly with the rest of the system. Being able to quickly tweak fuzzers like this is great for adapting to your target. Experiments like this can help us quickly iterate when combined with performance measurements.
At this point, we have a separate target you may want to experiment with! It is a program that contains a small maze, and gives you a chance to create a fuzzer with some custom feedback or mutations to better traverse the maze and discover a crash. Play around with some of the concepts we have introduced here, and see how fast your fuzzer can solve the maze.
In previous examples, we have made use of the ForkserverExecutor which works with the forkserver that afl-clang-fast inserted into our target. While the fork server does give us a great speed boost by reducing the start-up time for each target process, we still require a new process for each test case. If we can instead run multiple test cases in one process, we can speed up our fuzzing greatly. Running multiple testcases per target process is often called "persistent mode" fuzzing.
Basically, if you do not fuzz a target in persistent mode, then you are just doing it for a hobby and not professionally :-).
Some targets do not play well with persistent mode. Anything that changes lots of global state each run can have trouble, as we want each test case to run in isolation as much as possible. Even for targets well suited for persistent mode, we usually will have to create a harness around the target code. This harness is just a bit of code we write to call in to the target for fuzzing. The AFL++ documentation on persistent mode with LLVM is a great reference for writing these kinds of harnesses.
When we have created such a harness, the inserted fork server will detect the ability to persist, and can even use shared memory to provide the test cases. LibAFL's ForkserverExecutor can let us make use of these persistent harnesses.
Our fuzzer using a persistent harness is not much changed from our previous fuzzers.
The ForkserverExecutor takes care of the magic to make this all happen. Most of our work goes into actually creating an effective harness! If you want to try and craft your own, we have a bit of a template ready for you to get started.
In our harness we want to be careful to reset state each round, so we remain as true to our original as possible. Any modified global variables, heap allocations, or side-effects from a run that could change the behavior of future runs needs to be undone. Failure to clean the program state can result in false positives or instability. If we want our winning test cases from this fuzzer to also be able to crash the original target, then we need to emulate the original target's behavior as close as possible.
Sometimes it is not worth it to emulate the original, and we can instead use our harness to target deeper attack surface. For example, in our target we could directly target the uid_to_name function, and then convert the solutions into solutions for our original target later. We would want to also call valid_uid in our harness, to ensure we don't report false positives that would never work against our original target.
You can inspect our persistent harness here; we choose to repeatedly call process_line for each line and take care to clean up after ourselves.
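For reference, an AFL++ persistent-mode harness for a target like ours might look roughly like the sketch below (process_line comes from the target; the line splitting and buffer sizes are simplified compared to our actual harness, and the __AFL_* macros require building with afl-clang-fast):

#include <string.h>

__AFL_FUZZ_INIT();

extern void process_line(const char *line); // provided by the target

int main(void) {
    unsigned char *buf = __AFL_FUZZ_TESTCASE_BUF; // shared-memory test case
    while (__AFL_LOOP(10000)) {                   // many test cases per process
        int len = __AFL_FUZZ_TESTCASE_LEN;
        char line[4096];
        int n = len < (int)sizeof(line) - 1 ? len : (int)sizeof(line) - 1;
        memcpy(line, buf, n);
        line[n] = '\0';
        // a real harness would split the input on newlines and
        // reset any global state the target mutates between runs
        process_line(line);
    }
    return 0;
}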
Where previously we saw around 2k executions per second for our fuzzers with code coverage feedback, we are now seeing around 5k or 6k, still with just one client.
Using AFL++'s compiler and fork server is not the only way to achieve multiple test cases in one process. LibAFL is an extremely flexible library, and supports all sorts of scenarios. The InProcessExecutor allows us to run test cases directly in the same process as our fuzzing logic. This means if we can link with our target somehow, we can fuzz in the same process.
The versatility of LibAFL means we can build our entire fuzzer as a library, which we can link into our target, or even preload into our target dynamically. LibAFL even supports nostd (compilation without dependency on an OS or standard library), so we can treat our entire fuzzer as a blob to inject into our target's environment. As long as execution reaches our fuzzing code, we can fuzz.
In our example we build our fuzzer and link with our target built as a static library, calling into the C code directly using rust's FFI.
Building our fuzzer and causing it to link with our target is done by providing a build.rs file, which the rust compilation will use.
//...
fn main() {
let target_dir = "../fuzz_target".to_string();
let target_lib = "target_libfuzzer".to_string();
// force us to link with the file 'libtarget_libfuzzer.a'
println!("cargo:rustc-link-search=native={}", &target_dir);
println!("cargo:rustc-link-lib=static:+whole-archive={}", &target_lib);
println!("cargo:rerun-if-changed=build.rs");
}
LibAFL also provides tools to wrap the clang compiler, if you wish to create a compiler that will automatically inject your fuzzer into the target. You can see examples of this in the LibAFL examples.
We will want a harness for this target as well, so we can pass our test cases in as a buffer instead of having the target read lines from stdin. We will use the common interface used by libfuzzer, which has us create a function called LLVMFuzzerTestOneInput. LibAFL even has some helpers that will do the FFI calls for us.
Our harness can be very similar to the one we created for persistent mode fuzzing. We also have to watch out for the same kinds of global state or memory leaks that could make our fuzzing unstable. Again, we have a template for you if you want to craft the harness yourself.
With LLVMFuzzerTestOneInput defined in our target, and a static library made, our fuzzer can directly call into the harness for each test case. We define a harness function which our executor will call with the test case data.
//...
// our executor will be just a wrapper around a harness
// that calls out to the libfuzzer-style harness
let mut harness = |input: &BytesInput| {
    let target = input.target_bytes();
    let buf = target.as_slice();
    // this is just some niceness to call the libfuzzer C function
    // but we don't need to use a libfuzzer harness to do in-process fuzzing
    // we can call whatever function we want in a harness, as long as it is linked
    libfuzzer_test_one_input(buf);
    ExitKind::Ok
};
let mut executor = InProcessExecutor::new(
    &mut harness,
    tuple_list!(edges_observer),
    &mut fuzzer,
    &mut state,
    &mut restarting_mgr,
)
.unwrap();
This easy interoperability with libfuzzer harnesses is nice, and again we see a huge speed improvement over our previous fuzzers.
In this fuzzer we are also making use of a very important tool offered by LibAFL: the Low Level Message Passing (LLMP). This provides quick communication between multiple clients and lets us effectively scale our fuzzing to multiple cores or even multiple machines. The setup_restarting_mgr_std helper function creates an event manager that will manage the clients and restart them when they encounter crashes.
//...
let monitor = MultiMonitor::new(|s| println!("{s}"));
println!("Starting up");
// we use a restarting manager which will restart
// our process each time it crashes
// this will set up a host manager, and we will have to start the other processes
let (state, mut restarting_mgr) = setup_restarting_mgr_std(monitor, 1337, EventConfig::from_name("default"))
    .expect("Failed to setup the restarter!");
// only clients will return from the above call
println!("We are a client!");
This speed gain is important, and can make the difference between finding the juicy bug or not. Plus, it feels good to use all your cores and heat up your room a bit in the winter.
Emulation
Of course, not all targets are so easy to link with or instrument with a compiler. In those cases, LibAFL provides a number of interesting tools like libafl_frida or libafl_nyx. In this next example we are going to use LibAFL's modified version of QEMU, which exposes code coverage information to our fuzzer, giving us coverage feedback on a binary with no built-in instrumentation.
The setup will be similar to our in-process fuzzer, except now our harness will be in charge of running the emulator at the desired location in the target. By default the emulator state is not reset for you, and you will want to reset any global state changed between runs.
If you want to try it out for yourself, consult the Emulator documentation, and feel free to start with our template.
In our solution we first execute some initialization until a breakpoint, then save off the stack and return address. We will have to reset the stack each run, and put a breakpoint on the return address so that we can stop after our call. We also map an area in our target where we can place our input.
//...
emu.set_breakpoint(mainptr);
unsafe { emu.run() };
let pc: GuestReg = emu.read_reg(Regs::Pc).unwrap();
emu.remove_breakpoint(mainptr);
// save the ret addr, so we can use it and stop
let retaddr: GuestAddr = emu.read_return_address().unwrap();
emu.set_breakpoint(retaddr);
let savedsp: GuestAddr = emu.read_reg(Regs::Sp).unwrap();
// now let's map an area in the target we will use for the input.
let inputaddr = emu.map_private(0, 0x1000, MmapPerms::ReadWrite).unwrap();
println!("Input page @ {inputaddr:#x}");
Now in the harness itself we will take the input and write it into the target, then start execution at the target function. This time we are executing the uid_to_name function directly, and using a mutator that will not add any invalid characters that valid_uid would have stopped.
//...
let mut harness = |input: &BytesInput| {
    let target = input.target_bytes();
    let mut buf = target.as_slice();
    let mut len = buf.len();
    // limit our input size
    if len > 1024 {
        buf = &buf[0..1024];
        len = 1024;
    }
    // write our testcase into memory, null terminated
    unsafe {
        emu.write_mem(inputaddr, buf);
        emu.write_mem(inputaddr + (len as u64), b"\0\0\0\0");
    };
    // reset the registers as needed
    emu.write_reg(Regs::Pc, parseptr).unwrap();
    emu.write_reg(Regs::Sp, savedsp).unwrap();
    emu.write_return_address(retaddr).unwrap();
    emu.write_reg(Regs::Rdi, inputaddr).unwrap();
    // run until our breakpoint at the return address
    // or a crash
    unsafe { emu.run() };
    // if we didn't crash, we are okay
    ExitKind::Ok
};
This emulation can be very quick, especially if we can get away without having to reset a lot of state each run. By targeting a deeper function here we are likely to reach crashes quickly.
LibAFL also provides some useful helpers such as QemuAsanHelper and QemuSnapshotHelper. There is even support for full system emulation, as opposed to usermode emulation. Being able to use emulators effectively when fuzzing opens up a whole new world of targets.
Generation
Our method of starting with some initial inputs and simply mutating them can be very effective for certain targets, but less so for more complicated inputs. If we start with an input of some javascript like:
if (a < b) {
somefunc(a);
}
Our existing mutations might result in the following:
if\x00 (a << b) {
somefu(a;;;;
}
Which might find some bugs in parsers, but is unlikely to find deeper bugs in any javascript engine. If we want to exercise the engine itself, we will want to mostly produce valid javascript. This is a good use case for generation! By defining a grammar of what valid javascript looks like, we can generate lots of test cases to throw against the engine.
A block diagram of a basic generative fuzzer
As you can see in the diagram above, with just generation alone we are no longer using a mutation+feedback loop. There are lots of successful fuzzers that have gotten wins off generation alone (domato, boofuzz, a bunch of weird midi files), but we would like to have some form of feedback and progress in our fuzzing.
In order to make use of feedback in our generation, we can create an intermediate representation (IR) of our generated data. Then we can feed back the interesting cases into our inputs to be further mutated.
So our earlier javascript could be expressed as tokens like:
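One illustrative tokenization, sketched as a tree (a sketch only, not the exact IR LibAFL uses):

STMT_IF
├── EXPR_LESSTHAN
│   ├── VAR(a)
│   └── VAR(b)
└── BLOCK
    └── CALL(somefunc)
        └── VAR(a)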
Our mutations on this tokenized version can do things like replace tokens with other valid tokens or add more nodes to the tree, creating a slightly different input. We can then use these IR inputs and mutations as we did earlier with code coverage feedback.
A block diagram of a generative fuzzer with mutation feedback
Now mutations on the IR could produce something like so:
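For example (again just an illustrative sketch), a mutation might swap a variable for a constant and splice in a nested call:

STMT_IF
├── EXPR_LESSTHAN
│   ├── CONST(0)
│   └── VAR(b)
└── BLOCK
    └── CALL(somefunc)
        └── CALL(somefunc)
            ├── VAR(a)
            └── VAR(a)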
This renders back to valid javascript, which can be further mutated if it produces interesting feedback:
if (0 < b) {
somefunc(somefunc(a,a));
}
LibAFL provides some great tools for getting your own generational fuzzer with feedback going. A version of the Nautilus fuzzer is included in LibAFL. To use it with our example, we first define a grammar describing what a valid input to our target looks like.
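Nautilus grammars are JSON files listing production rules. A toy sketch follows; the rule names are hypothetical and this is not the workshop's actual grammar, so consult the Nautilus documentation for the exact file format and escaping rules:

[
    ["START", "{LINE}"],
    ["LINE", "{CMD} {ARG}"],
    ["CMD", "name"],
    ["CMD", "uid"],
    ["ARG", "{ARG}{ARG}"],
    ["ARG", "a"],
    ["ARG", "b"]
]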
With LibAFL we can load this grammar into a NautilusContext that we can use for generation. We use an InProcessExecutor, and in our harness we take in a NautilusInput, which we render to bytes and pass to our LLVMFuzzerTestOneInput.
//...
// our executor will be just a wrapper around a harness closure
let mut bytes = vec![];
let mut harness = |input: &NautilusInput| {
    // we need to convert our input from a nautilus tree
    // into actual bytes
    input.unparse(&genctx, &mut bytes);
    let s = std::str::from_utf8(&bytes).unwrap();
    println!("Trying:\n{:?}", s);
    libfuzzer_test_one_input(bytes.as_slice());
    ExitKind::Ok
};
We also need to generate a few initial IR inputs and specify what mutations to use.
//...
if state.must_load_initial_inputs() {
    // instead of loading from an initial corpus, we will generate our initial corpus of 9 NautilusInputs
    let mut generator = NautilusGenerator::new(&genctx);
    state
        .generate_initial_inputs_forced(&mut fuzzer, &mut executor, &mut generator, &mut restarting_mgr, 9)
        .unwrap();
    println!("Created initial inputs");
}
// we can't use normal byte mutations, so we use mutations that work on our generator trees
let mutator = StdScheduledMutator::with_max_stack_pow(
    tuple_list!(
        NautilusRandomMutator::new(&genctx),
        NautilusRandomMutator::new(&genctx),
        NautilusRandomMutator::new(&genctx),
        NautilusRecursionMutator::new(&genctx),
        NautilusSpliceMutator::new(&genctx),
        NautilusSpliceMutator::new(&genctx),
    ),
    3,
);
With this all in place, we can run and get the combined benefits of generation, code coverage, and in-process execution. To iterate on this, we can further improve our grammar as we better understand our target.
Note that our saved solutions are just serialized NautilusInputs and will not work when used against our original target. We have created a separate project that will render these solutions out to bytes with our grammar.
//...
let input: NautilusInput = NautilusInput::from_file(path).unwrap();
let mut b = vec![];
let tree_depth = 0x45;
let genctx = NautilusContext::from_file(tree_depth, grammarpath);
input.unparse(&genctx, &mut b);
let s = std::str::from_utf8(&b).unwrap();
println!("{s}");
This brings us to our second take-home problem! We have a chat client that is vulnerable to a number of issues. Fuzzing this binary could be made easier through good use of generation and/or emulation. As you find some noisy bugs, you may wish to either avoid those paths in your fuzzer or patch the bugs in your target; bugs can often mask other bugs. You can find the target here.
The goal of this workshop is to show the versatility of LibAFL and encourage its use. Hopefully these examples have sparked some ideas of how you can incorporate custom fuzzers against some of your targets. Let us know if you have any questions or spot any issues with our examples. Alternatively, if you have an interesting target and want us to find bugs in it for you, please contact us.
Course Plug
Thanks again for reading! If you like this kind of stuff, you may be interested in our course "Practical Symbolic Execution for VR and RE" where you will learn to create your own symbolic execution harnesses for: reverse engineering, deobfuscation, vulnerability detection, exploit development, and more. The next public offering is in February 2024 as part of ringzer0's BOOTSTRAP24. We are also available for private offerings on request.
Security information and event management (SIEM) tooling allows security teams to collect and analyse logs from a wide variety of sources. In turn this is used to detect and handle incidents. Evidently it is important to ensure that the log ingestion is complete and uninterrupted. Luckily SIEMs offer out-of-the-box solutions and/or capabilities to create custom health monitoring. In this blog post we will take a look at the health monitoring capabilities for log ingestion in Microsoft Sentinel.
Microsoft Sentinel
Microsoft Sentinel is the cloud-native Security information and event management (SIEM) and Security orchestration, automation, and response (SOAR) solution provided by Microsoft. It provides intelligent security analytics and threat intelligence across the enterprise, offering a single solution for alert detection, threat visibility, proactive hunting, and threat response. As a cloud-native solution, it can easily scale to accommodate the growing security needs of an organization and alleviate the cost of maintaining your own infrastructure.
Microsoft Sentinel utilizes Data Connectors to handle log ingestion. It comes with out-of-the-box connectors for Microsoft services; these are the service-to-service connectors. Additionally, there are many built-in connectors for third-party services, which utilize Syslog, Common Event Format (CEF) or REST APIs to connect the data sources to Microsoft Sentinel.
Besides logs from Microsoft services and third-party services, Sentinel can also collect logs from Azure VMs and non-Azure VMs. The log collection is done via the Azure Monitor Agent (AMA) or the Log Analytics Agent (MMA). As a brief aside, it’s important to note that the Log Analytics Agent is on a deprecation path and won’t be supported after August 31, 2024.
The state of the Data Connectors can be monitored with the out-of-the-box solutions or by creating a custom solution.
Microsoft provides two out-of-the-box features to perform health monitoring on the data connectors: the Data collection health monitoring workbook and the SentinelHealth data table.
Using the Data collection health monitoring workbook
The Data collection health monitoring workbook is an out-of-the-box solution that provides insight regarding the log ingestion status, detection of anomalies and the health status of the Log Analytics agents.
The workbook consists of three tabs: Overview, Data collection anomalies & Agents info.
The Overview tab shows the general status of log ingestion in the selected workspace. It contains data such as the Events per Second (EPS), data volume and time of the last log received. For the tab to function, the required Subscription and Workspace have to be selected at the top.
The Data collection anomalies tab provides info for detecting anomalies in the log ingestion process. Each tab in the view presents a specific table, while the General tab collects multiple tables.
We’re given a few configuration options for the view:
AnomaliesTimeRange: Define the total time range for the anomaly detection.
SampleInterval: Define the time interval in which data is sampled in the defined time range. Each time sample gets an anomaly score, which is used for the detection.
PositiveAlertThreshold: Define the positive anomaly score threshold.
NegativeAlertThreshold: Define the negative anomaly score threshold.
The view itself contains the expected number of events, the actual number of events, and the anomaly score per table. When a significant drop or rise in events is detected, further investigation is advised. The logic behind the view can also be reused to set up alerting when a certain threshold is exceeded.
The Agents info tab contains information about the health of the AMA and MMA agents installed on your Azure and non-Azure machines. The view allows you to monitor System location, Heartbeat status and latency, Available memory and disk space, and Agent operations. There are two tabs in the view to choose between Azure machines only and all machines.
You can find the workbook under Microsoft Sentinel > Workbooks > Templates, then type Data collection health monitoring in the search field. Click View Template to open the workbook. If you plan on using the workbook frequently, hit the Save button so it shows up under My Workbooks.
The SentinelHealth data table
The SentinelHealth data table provides information on the health of your Sentinel resources. The content of the table is not limited to only the data connectors, but also the health of your automation rules, playbooks and analytic rules. Given the scope of this blog post, we will focus solely on the data connector events.
Currently the table supports the following data connectors:
Amazon Web Services (CloudTrail and S3)
Dynamics 365
Office 365
Microsoft Defender for Endpoint
Threat Intelligence – TAXII
Threat Intelligence Platforms
For the data connectors, there are two types of events: Data fetch status change & Data fetch failure summary.
The Data fetch status change events contain the status of the data fetching and additional information. The status is represented by Success or Failure and depending on the status, different additional information is given in the ExtendedProperties field:
For a Success, the field will contain the destination of the logs.
For a Failure, the field will contain an error message describing the failure. The content of this message depends on the failure type.
These events are logged once an hour as long as the status is stable (i.e., the status doesn’t flip from Success to Failure or vice versa). Once a status change is detected, it is logged immediately.
The Data fetch failure summary events are logged once an hour, per connector, per workspace, with an aggregated failure summary. They are only logged when the connector has experienced polling errors during the given hour. The event itself contains additional information in the ExtendedProperties field, such as all the encountered failures and the time period for which the connector’s source platform was queried.
Using the SentinelHealth data table
Before we can start using the SentinelHealth table, we first have to enable it. Go to Microsoft Sentinel > Settings > Settings tab > Auditing and health monitoring, press Enable to enable the health monitoring.
Once the SentinelHealth table contains data, we can start querying on it. Below you’ll find some example queries to run.
List the latest failure per connector
SentinelHealth
| where TimeGenerated > ago(7d)
| where OperationName == "Data fetch status change"
| where Status == "Failure"
| summarize TimeGenerated = arg_max(TimeGenerated,*) by SentinelResourceName, SentinelResourceId
Connector status change from Failure to Success
let success_status = SentinelHealth
| where TimeGenerated > ago(1d)
| where OperationName == "Data fetch status change"
| where Status == "Success"
| summarize TimeGenerated = arg_max(TimeGenerated,*) by SentinelResourceName, SentinelResourceId;
let failure_status = SentinelHealth
| where TimeGenerated > ago(1d)
| where OperationName == "Data fetch status change"
| where Status == "Failure"
| summarize TimeGenerated = arg_max(TimeGenerated,*) by SentinelResourceName, SentinelResourceId;
success_status
| join kind=inner (failure_status) on SentinelResourceName, SentinelResourceId
| where TimeGenerated > TimeGenerated1
Connector status change from Success to Failure
let success_status = SentinelHealth
| where TimeGenerated > ago(1d)
| where OperationName == "Data fetch status change"
| where Status == "Success"
| summarize TimeGenerated = arg_max(TimeGenerated,*) by SentinelResourceName, SentinelResourceId;
let failure_status = SentinelHealth
| where TimeGenerated > ago(1d)
| where OperationName == "Data fetch status change"
| where Status == "Failure"
| summarize TimeGenerated = arg_max(TimeGenerated,*) by SentinelResourceName, SentinelResourceId;
success_status
| join kind=inner (failure_status) on SentinelResourceName, SentinelResourceId
| where TimeGenerated < TimeGenerated1
Custom Solutions
With the help of built-in Azure features and KQL queries, it is possible to create custom solutions. The idea is to create a KQL query and have it executed by an Azure feature such as Azure Monitor or Azure Logic Apps, or run it as a Sentinel analytics rule. Below you’ll find two examples of custom solutions.
Log Analytics Alert
For the first example, we’ll set up an alert in the Log Analytics workspace that Sentinel runs on. The alert logic will run on a recurring basis and alert the necessary people when it is triggered. For starters, we’ll go to the Log Analytics Workspace and start the creation of a new alert.
Select Custom log search for the signal and we’ll use the Connector status change from Success to Failure query example as logic.
Set both the aggregation and evaluation period to 1hr, so it doesn’t incur a high monthly cost. Next, attach an email Action Group to the alert, so the necessary people are informed of the failure.
Lastly, give the alert a severity level, name and description to finish off.
Logic App Teams Notification
For the second example, we’ll create a Logic App that will send an overview via Teams of all the tables with an anomalous score.
For starters, we’ll create a logic app and create a Workflow inside the logic app.
Inside the Workflow, we’ll design the logic for the Teams Notification. We’ll start off with a Recurrence trigger. Define an interval on which you’d like to receive notifications. In the example, an interval of two days was chosen.
Next, we’ll add the Run query and visualize results action. In this action, we have to define the Subscription, Resource Group, Resource Type, Resource Name, Query, Time Range and Chart Type. Define the first parameters to select your Log Analytics Workspace and then use the following query, which is based on the logic from the Data Connector Workbook. The query looks back on the data of the past two weeks with an interval of one day per data sample. If needed, the time period and interval can be increased or decreased. The UpperThreshold and LowerThreshold parameters can be adapted to make the detection more or less sensitive.
let UpperThreshold = 5.0; // Upper Anomaly threshold score
let LowerThreshold = -5.0; // Lower anomaly threshold score
let TableIgnoreList = dynamic(['SecurityAlert', 'BehaviorAnalytics', 'SecurityBaseline', 'ProtectionStatus']); // select tables you want to EXCLUDE from the results
union withsource=TableName1 *
| make-series count() on TimeGenerated from ago(14d) to now() step 1d by TableName1
| extend (anomalies, score, baseline) = series_decompose_anomalies(count_, 1.5, 7, 'linefit', 1, 'ctukey', 0.01)
| where anomalies[-1] == 1 or anomalies[-1] == -1
| extend Score = score[-1]
| where Score >= UpperThreshold or Score <= LowerThreshold
| where TableName1 !in (TableIgnoreList)
| project TableName=TableName1, ExpectedCount=round(todouble(baseline[-1]),1), ActualCount=round(todouble(count_[-1]),1), AnomalyScore = round(todouble(score[-1]),1)
Lastly, define the Time Range and Chart Type parameters. For Time Range pick Set in query and for Chart Type pick Html Table.
Now that the execution of the query is defined, we can define the sending of a Teams message. Select the Post message in a chat or channel action and configure the action to send the body of the query to a channel/person as Flowbot.
Once the Teams action is defined, the logic app is completed. When the logic app runs, you should expect an output similar to the image below. The parameters in the table can be analysed to detect Data Connector issues.
Conclusion
In conclusion, monitoring the health of data connectors is a critical part of ensuring an uninterrupted log ingestion process into the SIEM. Microsoft Sentinel offers great capabilities for monitoring the health of data connectors, enabling security teams to ensure the smooth functioning of log ingestion processes and promptly address any issues that arise. The combination of the two out-of-the-box solutions and the flexibility to create custom monitoring solutions makes Microsoft Sentinel a comprehensive and adaptable choice for managing and monitoring security events.
Frederik Meutermans
Frederik is a Senior Security Consultant in the Cloud Security Team. He specializes in the Microsoft Azure cloud stack, with a special focus on cloud security monitoring. He mainly has experience as security analyst and security monitoring engineer.
Once again, the Talos team has meticulously combed through a massive amount of data to analyze the major trends that have shaped the threat landscape in 2023. Global conflict influenced a lot of these trends, altering the tactics and approaches of many threat actors. In operations ranging from espionage to cybercrime, we’ve seen geopolitical events have a significant impact on the way these are carried out.
At the beginning of the Year in Review is a “Top Trends” section covering regional trends over time and the influence of geopolitical events, the CVEs attackers exploited most often, spam tactics, and the top MITRE ATT&CK techniques used within attacks. The report then deep dives on four topics:
The evolution of ransomware and extortion.
The concerning rate of attacks against network infrastructure devices.
The activities of advanced persistent threat (APT) actors in China, Russia, and the Middle East, including the major threats our Ukraine Task Unit dealt with this year.
The shifting activities and impact of commodity loaders.
Cisco’s global presence and Talos’ world-class expertise provided a massive amount of data to research — endpoint detections, incident response engagements, network traffic, email corpus, sandboxes, honeypots and much more. Thankfully, our teammates include subject matter experts from all ends of the cybersecurity space to help us turn this intelligence into actionable information for defenders and users.
So, what is the main story of the 2023 Year in Review? Despite the accelerated pace of many threat actor campaigns and the geopolitical events that shaped them, the defensive community’s diligence, inventiveness and collaborative efforts are helping to push adversaries back.
Download the Cisco Talos Year in Review today, and please share it with your colleagues and communities. This report was written by defenders, for defenders, and we hope it proves a useful and insightful resource for you.
Porch Pirate started as a tool to quickly uncover Postman secrets, and has slowly begun to evolve into a multi-purpose reconnaissance / OSINT framework for Postman. While existing tools are great proofs of concept, they only attempt to identify very specific keywords as "secrets", in very limited locations, with no consideration of recon beyond secrets. We realized we required capabilities that were "secret-agnostic" and flexible enough to capture false positives that still provide offensive value.
Porch Pirate enumerates and presents sensitive results (global secrets, unique headers, endpoints, query parameters, authorization, etc), from publicly accessible Postman entities, such as:
Workspaces
Collections
Requests
Users
Teams
Installation
python3 -m pip install porch-pirate
Using the client
The Porch Pirate client can be used to conduct nearly complete reviews of public Postman entities in a quick and simple fashion. There are intended workflows and particular keywords that can typically maximize results. These methodologies can be found on our blog: Plundering Postman with Porch Pirate.
Porch Pirate supports the following arguments, which can be run against collections, workspaces, or users.
--globals
--collections
--requests
--urls
--dump
--raw
--curl
Simple Search
porch-pirate -s "coca-cola.com"
Get Workspace Globals
By default, Porch Pirate will display globals from all active and inactive environments if they are defined in the workspace. Provide a -w argument with the workspace ID (found by performing a simple search, or automatic search dump) to extract the workspace's globals, along with other information.
When an interesting result has been found with a simple search, we can provide the workspace ID to the -w argument with the --dump command to begin extracting information from the workspace and its collections.
Porch Pirate can be supplied a simple search term followed by the --globals argument. Porch Pirate will dump all relevant workspaces tied to the results discovered in the simple search, but only if there are globals defined. This is particularly useful for quickly identifying potentially interesting workspaces to dig into further.
porch-pirate -s "shopify" --globals
Automatic Search Dump
Porch Pirate can be supplied a simple search term followed by the --dump argument. Porch Pirate will dump all relevant workspaces and collections tied to the results discovered in the simple search. This is particularly useful for quickly sifting through potentially interesting results.
porch-pirate -s "coca-cola.com" --dump
Extract URLs from Workspace
A particularly useful way to use Porch Pirate is to extract all URLs from a workspace and export them to another tool for fuzzing.
from porchpirate import porchpirate

p = porchpirate()
print(p.search('coca-cola.com'))
Get Workspace Collections
p = porchpirate()
print(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
Dumping a Workspace
import json

p = porchpirate()
collections = json.loads(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
for collection in collections['data']:
    requests = collection['requests']
    for r in requests:
        request_data = p.request(r['id'])
        print(request_data)
Grabbing a Workspace's Globals
p = porchpirate()
print(p.workspace_globals('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
Other Examples
Other library usage examples can be found in the examples directory of the repository.
Previously, we looked at the attack surface of the ChargePoint Home Flex EV charger – one of the targets in the upcoming Pwn2Own Automotive contest. In this post, we look at the attack surface of another EV Charger. The Ubiquiti Connect EV Station is a weatherproof Level 2 electric vehicle charging station designed for organizations. We cover the most obvious areas a threat actor would explore when attempting to compromise the device.
The EV Station is meant to be managed by a Ubiquiti management platform running the UniFi OS Console, such as the Ubiquiti Dream Machine or Cloud Gateway. Users can also use the iOS or Android UniFi Connect mobile apps to configure the EV Station.
Attack Surface Summary
The Ubiquiti EV Station is an Android device. In this respect, it is unique amongst the electric vehicle chargers included as target devices in Pwn2Own Automotive 2024.
Trend Micro researchers observed the UART port of the device during power-up. The Ubiquiti EV Station employs a Qualcomm APQ8053 SoC as the primary CPU. The Android operating system boots and emits boot messages on the UART serial port located inside the device housing. The following areas are confirmed and represent a potential attack surface on the device:
- Android OS
- USB
  - Android USB debugging might be possible
- Ubiquiti Connect mobile applications
- Network attack surface
  - Wi-Fi, including the Wi-Fi driver
  - Ethernet / local IP networking (Realtek controller)
  - Multicast IP networking (UDP port 10001)
- Bluetooth Low Energy (BLE) 4.2
- Near Field Communication (NFC)
Ubiquiti EV Station Documentation
Documentation for the Ubiquiti EV Station provides only high-level information about the installation and operation of the device.
Ubiquiti provides high-level technical specifications for the EV Station on their website. Trend Micro researchers have performed an analysis of the discrete hardware devices found in the EV Station. The following list summarizes the components Trend Micro researchers have identified as notable components and/or potential attack surface in the Ubiquiti EV Station.
Figure 1 below is an overview of the main CPU board of the Ubiquiti EV Station. The board has several collections of highly integrated components, each one isolated inside its own dedicated footprint on the board. Each of these areas of the PCB appears to be dedicated to discrete functionality, such as CPU with RAM and flash, Wi-Fi, NFC, Ethernet, USB, and display.
In the center of the board sits the Qualcomm APQ8053 and Samsung KMQX60013A-B419 combination DRAM and NAND controller. These represent the primary application processor for the device, along with the RAM and flash storage for the device. They are marked U5 on the PCB silkscreen.
Three connectors reside just beneath this section of the PCB. A connector marked JDB2 and UART DEBUG emits boot messages from the Ubiquiti EV Station upon startup. In the center is a USB-C connector marked J20. To the right is a two-pin connector marked J28. The functionality of this connector is not yet understood.
In the top center of the following image is an unpopulated component marked U20. It is possible this is an unpopulated footprint for a cellular communication module.
Figure 1 - Overview image of the main PCB of the Ubiquiti EV Station
The following image shows the Qualcomm CPU and associated RAM and NAND flash chip inside the Ubiquiti EV Station:
Figure 2 - Detail image of the EV Station Qualcomm APQ8053 SoC, Samsung KMQX60013A-B419 DRAM / NAND and UART Debug Port
In the following image, the PCB shows a stencil marked ‘J23.’ Trend Micro researchers endeavored to discover where this header is connected, surmising that the vias in J23 might be connected to a debug interface on the board. Upon further inspection, they determined the vias on J23 are connected to the unpopulated device marked U20.
Figure 3 - Detail image of the EV Station Realtek RTL8153-BI Ethernet controller
Network Analysis
The device can connect to local networks over both Wi-Fi and Ethernet. Trend Micro researchers connected the EV Station to a test Ethernet network to investigate the network attack surface prior to associating the EV Station to a Ubiquiti UniFi Console.
In an unconfigured state, the EV Station does not listen on any TCP ports. The EV Station sends out regular probes looking for HTTP proxies on TCP port 8080.
Additionally, the Ubiquiti EV Station attempts to join an IGMP group using IP address 233.89.188.1. The EV Station sends packets to this address on UDP port 10001. The EV Station communicates on this port using the protocol that has been called the ‘UBNT Discovery Protocol.’ This protocol identifies the device model, firmware, and other information.
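If you want to poke at the discovery service yourself, a small probe is enough to solicit a response. The sketch below is an illustration rather than code from this analysis, and it assumes the commonly documented 4-byte version-1 discovery request:

use std::net::UdpSocket;
use std::time::Duration;

fn main() -> std::io::Result<()> {
    // bind an ephemeral UDP port and give replies two seconds to arrive
    let sock = UdpSocket::bind("0.0.0.0:0")?;
    sock.set_read_timeout(Some(Duration::from_secs(2)))?;
    // 0x01 0x00 0x00 0x00 is the widely documented v1 discovery request
    sock.send_to(&[0x01, 0x00, 0x00, 0x00], "233.89.188.1:10001")?;
    let mut buf = [0u8; 1500];
    match sock.recv_from(&mut buf) {
        Ok((n, from)) => println!("{n} bytes from {from}: {:02x?}", &buf[..n]),
        Err(e) => println!("no discovery reply: {e}"),
    }
    Ok(())
}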
The following hex data shows an Ethernet frame, IP packet, and UDP datagram that encapsulate the UBNT discovery packet. The UBNT discovery data begins at offset 0x2A.
Bluetooth LE Analysis
In the unconfigured state, the Ubiquiti EV Station Bluetooth LE interface acts as a BLE peripheral device. Using a BLE scanning tool, the Trend Micro researchers observed the following Bluetooth LE endpoints on the EV Station.
The device set its BLE name to QCOM-BTD, which appears to be a default Qualcomm configuration. There is a single BLE service defined. This service exports three characteristics: one characteristic is read-only, one is notify-only, and one allows read, write, and notify operations.
Further analysis of the EV Station file system shows native code libraries responsible for the observed behavior. Additional investigation into these libraries may prove fruitful for contestants.
Additional information about expected BLE functionality can also be understood via analysis of the mobile applications. Trend Micro researchers performed reverse engineering of the UniFi Connect Android app and found code meant to communicate with the device over BLE. However, the discovered BLE characteristics present in the Android application do not match those broadcast by the EV Station. It is possible that after fully setting up the EV Station, the BLE stack may be reconfigured to match the expected BLE endpoints.
Future potential analysis
To mount a successful attempt against the Ubiquiti EV Station at Pwn2Own Automotive in Tokyo, contestants will need to perform additional analysis of the device to determine potential weaknesses. Trend Micro researchers have analyzed the Samsung KMQX60013A-B419 DRAM / NAND device by extracting it from the EV Station. This combination DRAM and NAND flash device contains the storage that supports the functionality of the EV Station.
As previously mentioned, the Ubiquiti EV Station runs the Android operating system. The EV Station flash contains numerous partitions. Using standard Linux tools, Trend Micro researchers identified several potential partitions. Some of these are real partitions and some appear to be false-positive detections by various tools. Several partitions have been verified and investigated. The following list shows the output produced on a Linux system using the `parted` command listing the partitions on the NAND flash device.
Trend Micro researchers used several methods for identifying partition data and mounting the partitions on the NAND flash device. The following command shows one method for mounting the system_a partition. Once the partition is mounted, a typical Android OS system partition is discovered.
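One plausible form, with a hypothetical image name and offset (the real offset comes from the partition table printed by parted):

sudo mount -o ro,loop,offset=<PARTITION_OFFSET> nand_dump.img /mnt/system_a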
Extracting the data from flash storage is the first step to performing the analysis necessary to discover vulnerabilities that might be present in the Ubiquiti EV Station.
Summary
While these may not be the only attack surfaces available on the Ubiquiti EV Station, they represent the most likely avenues a threat actor may use to exploit the device. We’ve already heard from several researchers who intend to register in the EV Charger category, so we’re excited to see their findings displayed in Tokyo during the event. Stay tuned to the blog for attack surface reviews for other devices, and if you’re curious, you can see all the devices included in the contest. Until then, follow the team on Twitter, Mastodon, LinkedIn, or Instagram for the latest in exploit techniques and security patches.
C2 Search Netlas is a Java utility designed to detect Command and Control (C2) servers using the Netlas API. It provides a straightforward and user-friendly CLI interface for searching C2 servers, leveraging the Netlas API to gather data and process it locally.
Usage
To utilize this terminal utility, you'll need a Netlas API key. Obtain your key from the Netlas website.
After acquiring your API key, execute the following command to search servers:
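c2detect -t <TARGET_DOMAIN> -p <TARGET_PORT> -s <API_KEY> [-v]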
Replace <TARGET_DOMAIN> with the desired IP address or domain, <TARGET_PORT> with the port you wish to scan, and <API_KEY> with your Netlas API key. Use the optional -v flag for verbose output. For example, to search at the google.com IP address on port 443 using the Netlas API key 1234567890abcdef, enter:
c2detect -t google.com -p 443 -s 1234567890abcdef
Release
To download a release of the utility, follow these steps:
Visit the repository's releases page on GitHub.
Download the latest release file (typically a JAR file) to your local machine.
In a terminal, navigate to the directory containing the JAR file.
Execute the following command to initiate the utility:
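java -jar <RELEASE_JAR> -t <TARGET_DOMAIN> -p <TARGET_PORT> -s <API_KEY>

Here <RELEASE_JAR> stands in for the name of the downloaded JAR file; the remaining flags are the same ones described above.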
Alternatively, you can build the project from source and run it using the same arguments described above.
Last Week in Security is a summary of the interesting cybersecurity news, techniques, tools and exploits from the past week. This post covers 2023-11-27 to 2023-12-04.
News
About the security content of iOS 17.1.2 and iPadOS 17.1.2. Two WebKit vulnerabilities may have been exploited in the wild. Not to be outdone, Chrome patched its sixth 0-day this year. Browsers are where the data is and the most frequent way users execute untrusted code, so it's where the high-value exploitation is as well.
O365 Phishing infrastructure. "Last year, mails sent by Dev Tenants got immediately flagged, but something changed." Oh boy. If there isn't a fix for this soon it will be abused.
We Hacked Ourselves With DNS Rebinding. A very neat use case for DNS rebinding, which is often a theoretical attack. I also like that the author didn't stop investigating when the change to IMDSv2 was made, which prevented the outcome but didn't solve the original "vulnerability."
This section is for news, techniques, write-ups, tools, and off-topic items that weren't released last week but are new to me. Perhaps you missed them too!
windiff - Web-based tool that allows comparing symbol, type and syscall information of Microsoft Windows binaries across different versions of the OS.
PySQLRecon - Offensive MSSQL toolkit written in Python, based off SQLRecon.
Kerberos.NET - A Kerberos implementation built entirely in managed code.
Scudo is a C++ class that encrypts and dynamically executes functions. This open-source repository offers a concise solution for securing and executing encrypted functions in your codebase.
Techniques, tools, and exploits linked in this post are not reviewed for quality or safety. Do your own research and testing.
This post is cross-posted on SIXGEN's blog.
Phishing attackers aim to deceive individuals into revealing sensitive information for financial gain, credential theft, corporate network access, and spreading malware. This method often involves social engineering tactics, exploiting psychological factors to manipulate victims into compromising actions that can have profound consequences for personal and organizational security.
Over the last four months, McAfee Labs has observed a rising trend in the utilization of PDF documents for conducting a succession of phishing campaigns. These PDFs were delivered as email attachments.
Attackers favor using PDFs for phishing due to the file format’s widespread trustworthiness. PDFs, commonly seen as legitimate documents, provide a versatile platform for embedding malicious links, content, or exploits. By leveraging social engineering and exploiting the familiarity users have with PDF attachments, attackers increase the likelihood of successful phishing campaigns. Additionally, PDFs offer a means to bypass email filters that may focus on detecting threats in other file formats.
The observed phishing campaigns using PDFs were diverse, abusing various brands such as Amazon and Apple. Attackers often impersonate well-known and trusted entities, increasing the chances of luring users into interacting with the malicious content. Additionally, we will delve into distinct types of URLs utilized by attackers. By understanding the themes and URL patterns, readers can enhance their awareness and better recognize potential phishing attempts.
Figure 1 – PDF Phishing Geo Heatmap showing McAfee customers targeted in the last month
Different Themes of Phishing
Attackers employ a range of corporate themes in their social engineering tactics to entice victims into clicking on phishing links. Notable brands such as Amazon, Apple, Netflix, and PayPal, among others, are often mimicked. The PDFs are carefully crafted to induce a sense of urgency in the victim’s mind, utilizing phrases like “your account needs to be updated” or “your ID has expired.” These tactics aim to manipulate individuals into taking prompt action, contributing to the success of the phishing campaigns.
Below are some of the examples:
Figure 2 – Fake Amazon PDF Phish
Figure 3 – Fake Apple PDF Phish
Figure 4 – Fake Internal Revenue Service PDF Phish
Figure 5 – Fake Adobe PDF Phish
Below are the stats on the volume of various themes we have seen in these phishing campaigns.
Figure 6 – Different themed campaign stats based on McAfee customer hits in the last month
Abuse of LinkedIn and Google links
Cyber attackers are exploiting the popular professional networking platform LinkedIn and leveraging Google Apps Script to redirect users to phishing websites. Let us examine each method of abuse individually.
In the case of LinkedIn, attackers are utilizing smart links to circumvent Anti-Virus and other security measures. Smart links are integral to the LinkedIn Sales Navigator service, designed for tracking and marketing business accounts.
Figure 7 – LinkedIn Smart link redirecting to an external website
By employing these smart links, attackers redirect their victims to phishing pages. This strategic approach allows them to bypass traditional protection measures, as the use of LinkedIn as a referrer adds an element of legitimacy, making it more challenging for security systems to detect and block malicious activity.
In addition to exploiting LinkedIn, attackers are leveraging the functionality of Google Apps Script to redirect users to phishing pages. Google Apps Script serves as a JavaScript-based development platform used for creating web applications and various other functionalities. Attackers embed malicious or phishing code within this platform, and when victims access the associated URLs, it triggers the display of phishing or malicious pages.
Figure 8 – Amazon fake page displayed on accessing Google script URL
As shown in Figure 8, when victims click on the “Continue” button, they are subsequently redirected to a phishing website.
Summary
Crafting highly convincing PDFs mimicking legitimate companies has become effortlessly achievable for attackers. These meticulously engineered PDFs create a sense of urgency through skillful social engineering, prompting unsuspecting customers to click on embedded phishing links. Upon taking the bait, individuals are redirected to deceptive phishing websites, where attackers request sensitive information. This sophisticated tactic is deployed on a global scale, with these convincing PDFs distributed to thousands of customers worldwide. Specifically, we highlighted the increasing use of PDFs in phishing campaigns over the past four months, with attackers adopting diverse themes such as Amazon and Apple to exploit user trust. Notably, phishing tactics extend to popular platforms like LinkedIn, where attackers leverage smart links to redirect victims to phishing pages, evading traditional security measures. Additionally, Google Apps Script is exploited for its JavaScript-based functionality, allowing attackers to embed malicious code and direct users to deceptive websites.
Remediation
Protecting oneself from phishing requires a combination of awareness, caution, and security practices. Here are some key steps to help safeguard against phishing:
Be Skeptical: Exercise caution when receiving unsolicited emails, messages, or social media requests, especially those with urgent or alarming content.
Verify Sender Identity: Before clicking on any links or providing information, verify the legitimacy of the sender. Check email addresses, domain names, and contact details for any inconsistencies.
Avoid Clicking on Suspicious Links: Hover over links to preview the actual URL before clicking. Be wary of shortened URLs, and if in doubt, verify the link’s authenticity directly with the sender or through official channels.
Use Two-Factor Authentication (2FA): Enable 2FA whenever possible. This adds an extra layer of security by requiring a second form of verification, such as a code sent to your mobile device.
McAfee provides coverage against a broad spectrum of active phishing campaigns, offering protection through features such as real-time scanning and URL filtering. While it enhances security against various phishing attempts, users must remain vigilant and adopt responsible online practices along with using McAfee.
Confidence Staveley of the CyberSafe Foundation and the CyberGirls program is today's guest. CyberGirls is a year-long cohort program in which women in Africa ages 18 to 28 can learn cybersecurity basics and create career tracks to fast-track these students into cybersecurity careers! Staveley tells us about the workings of the program, how she uses her YouTube channel to teach API security with food analogies and explains the origins of what is likely the first-ever Afrobeat song about security awareness! This episode is as fun and inspiring as any I’ve recorded, so I hope you’ll tune in for today’s Cyber Work.
0:00 - Cybersecurity training for women in Africa 4:47 - How Confidence Staveley got into cybersecurity 10:35 - What is the CyberSafe Foundation? 16:57 - What is the CyberGirls fellowship? 21:30 - How to get involved in CyberGirls 30:10 - Inspiring success CyberGirls stories 43:11 - Keeping CyberGirls engaged 46:31 - API Kitchen YouTube show 52:00 - Cybersecurity initiatives in Africa 59:27 - Advice for working in cybersecurity 1:03:13 - CyberGirls' future 1:05:20 - Learn more about CyberSafe 1:07:22 - Outro
About Infosec Infosec’s mission is to put people at the center of cybersecurity. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and phishing training to stay cyber-safe at work and home. More than 70% of the Fortune 500 have relied on Infosec Skills to develop their security talent, and more than 5 million learners worldwide are more cyber-resilient from Infosec IQ’s security awareness training. Learn more at infosecinstitute.com.
As Russia’s invasion of Ukraine entered its first winter in late 2022, nearly half of Ukraine’s energy infrastructure had been destroyed, leaving millions without power. The resulting energy deficit has exacerbated something that hasn’t had much media attention: the effects of electronic GPS jamming on vital electrical equipment.
Ukraine’s high-voltage electricity substations rely on GPS for time synchronization. So, when the GPS is jammed, the stations can’t accurately report to power dispatchers on the state of the grid.
This complicates efforts to balance loads between different parts of the system, which is necessary to avoid outages and failure — especially during peak demand and surge times. Until recently, there was no solution to this problem.
Cisco Talos worked alongside several other teams at Cisco, along with government partners in the U.S. and Ukraine, to find a technological solution.
Since the start of the Russian invasion of Ukraine, Talos has been unwavering in our commitment to protect Ukrainian critical infrastructure from cyberattacks.
In this blog post, you won’t find any mention of malware, DDoS, or espionage campaigns. In fact, it’s not about cybersecurity at all. This is a story about electronic warfare and GPS. It’s about how one chance conversation over dinner led me on a path through Cisco to find a solution to some very tough questions, and difficult answers.
So, who am I? My name is Joe Marshall. I work at Cisco Talos as a cyber threat researcher and security strategist. My expertise is in industrial control systems and electric grids. My colleagues and friends at Talos are on the front lines of keeping the internet safe — and from more than just cyber threats, as you’ll read.
Project PowerUp is the story of how Cisco Talos worked with a multi-national, multi-company coalition of volunteers and experts to inject a measure of stability in Ukraine’s power transmission grid.
Our ultimate goal was to “keep the lights on” in Ukraine, and help make the lives of Ukrainians who are living in an active war zone, just that little bit easier.
Chapter 1: The energy deficit
As Russia’s invasion of Ukraine entered its first winter in late 2022, Russia stepped up attacks on Ukraine’s energy sector to deprive citizens of electricity and heat during the coldest part of the year. Nearly half of Ukraine’s energy infrastructure had been destroyed, leaving millions without power. The resultant energy deficit was exacerbated by another wartime challenge that, for some reason, hasn’t had much media attention: the effects of deliberate GPS disruptions affecting vital electrical equipment.
For the past year, there have been numerous reports of Russia interfering with GPS signals, especially near and within its own borders. Use of electronic jamming devices has been linked to attempts to disrupt GPS guided munitions, protect troops, and advance the tactical and strategic goals of armed conflict.
While electronic interference can affect the battlefield, it is also having a secondary, unintended effect on Ukraine’s energy sector. Many of Ukraine’s high-voltage electrical substations — which play a vital role in the country’s domestic transmission of power — make extensive use of the availability of precise GPS timing information to help operators anticipate, react and diagnose a complex high-voltage electric grid. This is a complicated task during normal times, much less during a war.
When GPS signals are widely disrupted, substations cannot synchronize their time reporting accurately because they cannot assign accurate timestamps. Without well-synchronized data, it becomes harder to balance loads between different parts of the system, the very management that avoids outages and failures, especially during peak demand and surge times. This disruption can be widespread, causing wide areas to lose GPS service for long periods of time.
Until now, Ukraine has not had a viable solution to this issue for electric power systems.
Chapter 2: A chance meeting
I first learned about this situation when I was delivering a cybersecurity presentation in February of 2023. The audience just so happened to include a delegation from Ukrenergo, the electricity transmission system operator in Ukraine, which is solely responsible for operating the country’s high-voltage electrical lines. Talos has been working with Ukrenergo for many years.
The night before the presentation, colleagues from Ukrenergo invited me to dinner. When we sat down, I couldn’t help but persist with a barrage of questions: “How are you? Are you safe? What’s going on?”
They started to tell me the true extent of what had been happening. This was one year since the start of the invasion. It was still deeply cold in Ukraine, and Russia had continually bombarded critical infrastructure for the entire winter. By March, there would be word that Russia’s campaign was beginning to tail off, but we didn’t know that at the time.
Ukrenergo started to list problem after problem, specifically with regards to the power grids. The obvious problems we all knew, of course: kinetic strikes against substations were knocking out the power. Energy transformers were being destroyed, and replacements were scarce. One problem was mentioned rather casually, sandwiched in between others: “We can’t get reliable timing due to electronic GPS jamming.”
As I mentioned earlier, Ukraine’s high-voltage electricity substations rely on GPS for time synchronization. So, when the GPS is deliberately disrupted, the stations can’t accurately report to power dispatchers on the state of the grid.
My ill-informed, rather bombastic response to this was, “Just buy some atomic clocks! You know…the type used by NASA.” Only after the words came out of my mouth did I remember that atomic clocks might not be a financially feasible option for this war-torn country. In fact, one member of the Ukrenergo delegation wryly retorted (I'm paraphrasing here), “Sure. Show me the aisle of the grocery store where atomic clocks can be found cheaply.”
For the rest of the night, we talked about the GPS issues, the war, and Ukraine’s response to being attacked. Despite the sober undertones, the dinner company was superb, and the fellowship top-notch. The GPS timing issue, however, wouldn’t leave my head. I tried to look at it from all different angles.
When we said goodbye that night, I silently vowed I was going to do everything in my power to help. But at the time, I had no answers.
💡
High-voltage substations are critical components in the power system where power can be pooled from generating resources, transformed to different voltage levels, and delivered to the load points. Substations are interconnected with each other, creating a network that increases the reliability of the power supply system by providing alternate paths for power flow. This ensures that power delivery is maintained at all times and there are no outages.
Substation in Ukraine damaged by Russian airstrikes
Chapter 3: The time paradox
While thinking about viable solutions, I was guided by an important principle: Whatever we do, speed is key. As I was wracking my brain, Ukraine was at war and suffering. However, I soon began to learn that it wasn’t as simple as that, due to the sheer complexities of what the country was up against.
To truly understand the layers of solving this issue, I need to talk about why GPS clock timing is so important to electric grids. Most people are familiar with GPS because we rely on it for navigation, but it has also become the dominant system for the distribution of time and frequency signals globally. The U.S. controls and operates the GPS satellites, which orbit the Earth twice a day and broadcast signals anyone in the world can use.
These satellites send very precise time data to GPS receivers on the ground that receive and decode the signals, effectively synchronizing each receiver to the same clock. This enables users to determine the time within 100 nanoseconds without the cost of owning and operating expensive and complex equipment, such as atomic clocks.
Because GPS time is so accurate, GPS-disciplined clocks are commonly used in industrial systems, like Ukraine’s power grid, that require extremely precise time across a vast geographic area.
Most devices that rely on time to calculate measurements have frequency references. The frequency reference is provided by an internal crystal oscillator within the device, and that crystal tells the device how fast time is going. However, these references are never perfectly accurate due to manufacturing variations and other variables in the crystal oscillators, causing time to advance at slightly different rates across devices. This is why the clock on your laptop might be a few seconds or minutes ahead of or behind the clock in your car.
GPS solves this challenge. A device can use the GPS satellites’ time signal to determine how far off its local time reference is and then adjust accordingly, thereby enabling all devices running GPS-disciplined clocks to be aligned to the exact same time.
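To illustrate that disciplining loop, here is a minimal Python sketch. It is our own simplification for this article, not any vendor's code: the class name, the stand-in local clock, and the steering factor are all assumptions.

import time

class DisciplinedClock:
    def __init__(self):
        self.offset_s = 0.0  # correction currently applied to the local clock

    def local_raw(self):
        # Stand-in for a free-running crystal-oscillator counter
        return time.monotonic()

    def now(self):
        return self.local_raw() + self.offset_s

    def discipline(self, gps_time_s):
        # Measure how far the corrected clock is from GPS time and steer
        # gently toward it; absorbing only part of the error per update
        # avoids abrupt jumps in the reported time.
        error = gps_time_s - self.now()
        self.offset_s += 0.5 * error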
These GPS time signals are crucial for making a key piece of power equipment called a phasor measurement unit (PMU) run effectively. PMUs are used in power systems around the world to augment operators’ visibility into what is happening throughout a vast power grid. A PMU measures a quantity called a phasor, which is the magnitude and phase angle of a voltage or current at a specific location on a power line.
PMUs are essential to providing a detailed and accurate view of power quality across a wide geographic grid. Data from PMUs allows operators to predict and detect stress and stability on the grid, identify inefficiencies, and provide information for event analysis after a disturbance occurs.
PMU data is time-stamped — to the precision of a microsecond — using the timing signal from GPS satellites. Therefore, measurements taken by PMUs in different locations are accurately synchronized with each other and time-aligned using the same global time reference marker. This allows all PMU data to be combined to provide precise and comprehensive information about an entire power grid.
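To make the phasor idea concrete, the following Python sketch shows the essence of the computation: correlate one cycle of voltage samples against the nominal 50 Hz frequency (a single-bin DFT) and tag the result with the GPS timestamp of the first sample. The sample rate and function shape are our assumptions, not a vendor's PMU firmware.

import cmath
import math

F_NOMINAL = 50.0     # nominal grid frequency in Hz
SAMPLE_RATE = 4000   # samples per second (assumed)

def estimate_phasor(samples, t0_gps_us):
    # Single-bin DFT at the nominal frequency over one cycle of samples.
    n = len(samples)
    acc = 0 + 0j
    for k, v in enumerate(samples):
        acc += v * cmath.exp(-2j * math.pi * F_NOMINAL * k / SAMPLE_RATE)
    acc *= 2 / n
    return {
        "t_us": t0_gps_us,                           # GPS timestamp of sample 0
        "magnitude": abs(acc) / math.sqrt(2),        # RMS magnitude
        "angle_deg": math.degrees(cmath.phase(acc))  # phase angle
    }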
When GPS clocks are unavailable, the corresponding time signal develops an error, and that error can cause false calculations of phase angle and misalignment of grid conditions relative to other PMUs. Without the ability to analyze the precise timing of an electrical anomaly as it propagates through a grid, grid operators have difficulty diagnosing the exact issue that requires correction. Relatedly, if GPS timing is down, grid operators will have increased difficulty balancing power during the adverse events that occur during wartime.
Chapter 4: "You don't need atomic clocks"
After that fateful dinner with Ukrenergo, I spent the next few nights in deep thought. My brain wouldn’t let go of this timing issue. I consulted with colleagues and experts from other organizations who specialize in electric grid security, and ironically, they all suggested the same thing – atomic clocks.
I knocked on Talos Vice President Matt Watchinski’s door. I explained the situation to him and ended by asking, “So can Cisco make an atomic clock?” I’d got it into my head that the only possible solution was to create a version of an atomic clock, since their holdover accuracy is measured in nanoseconds, which is more than enough for a power grid.
Matt responded by saying he had no idea, but he would make some phone calls. That led me to a meeting with our Cisco Internet of Things (IoT) division. I asked them the same question I asked Matt: “Can Cisco make an atomic clock to counteract the GPS jamming, like what is being reported in Ukraine?”
After some research and identifying all manner of issues with sourcing an atomic clock, the team said, “Actually, we don’t think you need one. We think we have an existing solution within our IoT networking equipment. We can use that to build something unique for this specific situation.”
As is the case with most things in life, you should put your faith in the experts. And I’m so glad I listened to the IoT team. Because that was how we turned the ship, and Project PowerUp was a go.
Together with Cisco’s IoT networking team, we were going to design, create, and deliver custom devices to Ukraine to keep substations running and delivering power to the entire country.
“Throughout this war, I’ve seen and heard how resilient Ukrainians are. It’s very true. Citizens are dealing with one awful situation after the other, to the extent that this mentality of everyday trauma has become normalized. However, ‘getting used’ to power outages and not being able to keep warm in the winter shouldn’t be normal. That’s what this whole project is about.” - Eric Wenger, senior director of technology policy for Cisco Government Affairs
Chapter 5: Is it good enough?
I mentioned earlier that this initiative was guided by the principle that speed was key. Delays meant potentially disastrous consequences. But I soon came to add another principle: Perfection is the enemy of good enough.
The IoT team’s suggestion was that a Cisco Industrial Ethernet switch would be the best starting point in identifying a potential solution to the issues caused by Ukraine’s GPS outages. Industrial Ethernet switches do not have atomic clocks for holdover accuracy, but they have a good enough clock, able to measure time accurately in microseconds, to sustain an accurate time sync. This is important: Ukraine’s electric grid operates at 50 Hz, where one cycle lasts just 20 milliseconds, so its timing needs are measured in microseconds (a single degree of phase at 50 Hz corresponds to roughly 56 microseconds).
An Industrial Ethernet switch is part of Cisco’s hardened suite of switches, routers, and other products designed specifically for rugged deployment, and Ukraine’s warzone undoubtedly fits into that category. These devices are built to withstand harsh industrial environments and extreme temperature ranges (-40°C to 75°C).
Hardened switches also have various internal resiliency features, including the source of their internal clock. Most network hardware devices use an internal crystal oscillator to generate their clock, but these crystals’ frequencies can drift noticeably with local conditions such as temperature. An Industrial Ethernet switch avoids this problem: its crystal is a superior, more resilient design, providing better frequency stability for precise synchronization of features such as GPS reception.
Despite an Industrial Ethernet switch’s advantages, we needed to make some software modifications that would enable the device to address the specific set of challenges facing Ukraine’s power grid.
There were two core issues we had to address with the Industrial Ethernet switch that required us to make enhancements to the device. First, we had to ensure interoperability between an Industrial Ethernet switch and the PMUs, and second, an Industrial Ethernet switch needed to provide the necessary holdover during GPS outages for the PMUs to continue working. Holdover is the period during which the device must keep the clocks in sync on its own until the timing signal can be restored.
During Ukraine’s GPS outages, which can last several hours, the PMUs effectively declare that something is wrong and stop sending data to the broader power management infrastructure, which causes significant upstream effects. Our first goal was to find a way to keep the PMUs transmitting data. By modifying the metadata that an Industrial Ethernet switch sends to the PMUs, we could keep the PMUs operating and sending data even without that signal.
Next, we had to enable the Industrial Ethernet switch to provide an accurate time to the PMUs when GPS time was unavailable (i.e., the “holdover” period). We modified the Industrial Ethernet switch’s code to provide trusted time.
With an Industrial Ethernet switch deployed to a Ukrainian substation, the switch measures the difference between the PMU’s local time reference and GPS time while GPS is still active. Then, when the GPS signal is lost, the PMU can revert to using the local time reference, which is now highly accurate thanks to the earlier error measurements, thereby allowing the PMU to continue operating.
To ensure that an Industrial Ethernet switch fully understands what the GPS signal is telling it before the signal shuts down, Cisco created new, enhanced clock recovery algorithms. We also applied some additional filtering to the device’s software to allow it to recognize that the signal is down and to provide a “best guess” of what the time was when GPS was lost.
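In Python-like terms, the drift-measure-then-extrapolate idea reads roughly as follows. This is our own illustrative sketch of the approach described above, assuming a simple linear drift model; it is not Cisco's actual algorithm.

class HoldoverClock:
    def __init__(self):
        self.drift_ppm = 0.0  # estimated drift of the local clock, ppm
        self.last_fix = None  # (local_time_s, gps_time_s) at last good fix

    def on_gps_fix(self, local_s, gps_s):
        # While GPS is healthy, measure how fast local time runs vs. GPS.
        if self.last_fix is not None:
            dl = local_s - self.last_fix[0]
            dg = gps_s - self.last_fix[1]
            if dg > 0:
                self.drift_ppm = (dl - dg) / dg * 1e6
        self.last_fix = (local_s, gps_s)

    def holdover_time(self, local_s):
        # During an outage: last good GPS time plus elapsed local time,
        # corrected by the drift rate measured while GPS was still up.
        assert self.last_fix is not None, "needs at least one GPS fix"
        l0, g0 = self.last_fix
        return g0 + (local_s - l0) * (1 - self.drift_ppm / 1e6)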
We now had a device that was ready for production, but the job wasn’t done until testing was completed. After successful testing, Cisco immediately prioritized production of these devices. Hardware and software engineers from across the company pooled their collective expertise and created a production line capable of supporting the unique needs of Ukraine.
Our switches in Ukraine!
From the very start of Project PowerUp, all I kept thinking about was the big picture of what we were trying to achieve. I’m proud to say that Cisco did this on an incredibly fast timeline. It is no easy feat to re-prioritize production efforts, especially in a technology company as vast as Cisco. But we had that guiding principle of speed and urgency: the longer it took us to get these devices into Ukraine, the more days Ukrainians would be threatened with grid instability.
A special shoutout to our Cisco Critical Accounts team. This team has been relentless in helping get key deliveries to Ukraine since the start of the invasion, and they were able to help drive the urgency for Project PowerUp too.
Chapter 6: Closing thoughts
As I write this, our Industrial Ethernet switches are in Ukraine, helping keep the lights on. This reminds me of what we do at Talos: we fight the good fight every day to protect others.
It is a lamentable fact that in cybersecurity and critical infrastructure protection, the full impact of our work, while valuable, may never be realized in our lifetimes as professionals. It is a legacy we leave with those we help protect, built upon a large community who believe in fighting that good fight for generations to come.
Project PowerUp is a little different. We know, beyond a doubt, that our work there will help save lives and will help keep the lights on. The effects are incredibly difficult to calculate, but we know it’s going to make life better. It’s helping others stay out of harm’s way. It’s helping a hospital that may not have reliable backup power. It’s giving a child just five more minutes of their childhood watching cartoons.
If anything can be taken away from this, it’s that acting and leading with empathy is core to our mission at Talos. This year we took a chance to make a tangible difference in the lives of others. Fighting the good fight isn’t just about cybersecurity; it’s about doing the right thing and helping others in the face of adversity.
What started as a chance presentation this year turned into a multi-national, multi-company global team of power grid security practitioners who had never worked together before. As a team, we overcame numerous challenges to make Project PowerUp work. We could not have been successful without the support of numerous experts at Cisco who helped innovate this novel solution. And, of course, we must thank our partners in Ukraine, the U.S. government, and the ICS vendors and experts who lent us their time, empathy, and expertise. We are humbled and grateful for their help.
Vendor: Sonos
Vendor URL: https://www.sonos.com/
Versions affected:
* Confirmed 73.0-42060
Systems Affected: Sonos Era 100
Author: Ilya Zhuravlev
Advisory URL: Not provided by Sonos. Sonos state an update was released on 2023-11-15 which remediated the issue.
CVE Identifier: N/A
Risk: High
Summary
Sonos Era 100 is a smart speaker released in 2023. A vulnerability exists in the U-Boot component of the firmware which would allow for persistent arbitrary code execution with Linux kernel privileges. This vulnerability could be exploited either by an attacker with physical access to the device, or by obtaining write access to the flash memory through a separate runtime vulnerability.
Impact
An unsigned attacker-controlled rootfs may be loaded by the Linux kernel. This achieves a persistent bypass of the secure boot mechanism, providing early code execution within the Linux userspace under the /init process as the “root” user. It can be further escalated into kernel-mode arbitrary code execution by loading a custom kernel module.
Details
The implementation of the custom “sonosboot” command loads the kernel image, performs the signature check, and then passes execution to the built-in U-Boot “bootm” command. Since “bootm” uses the “bootargs” environment variable as Linux kernel arguments, the “sonosboot” command initializes it with a call to `setenv`:
setenv("bootargs", (char *)kernel_cmdline);
However, the return result of `setenv` is not checked. If this call fails, “bootargs” will keep its previous value and “bootm” will pass it to the Linux kernel.
On the Sonos Era 100, the U-Boot environment is loaded from the eMMC at address 0x500000. While the factory image does not contain a valid U-Boot environment there (confirmed by the “*** Warning - bad CRC, using default environment” message displayed on UART), it is possible to place a valid environment by directly writing to the eMMC with a hardware programmer.
There is a feature in U-Boot that allows setting environment variables as read-only. For example, setting “bootargs=something” and then “.flags=bootargs:sr” would make any future writes to “bootargs” fail. The “sonosboot” setenv call then fails silently, “bootargs” keeps the attacker’s stored value, and the Linux kernel boots with an attacker-controlled “bootargs”.
As a result, it is possible to fully control the Linux kernel command line. From there, an adversary could append the “initrd=0xADDR,0xSIZE” option to load their own initramfs, overwriting the one embedded in the image.
By replacing the “/init” process it is then possible to obtain early persistent code execution on the device.
Recommendation
Consider setting CONFIG_ENV_IS_NOWHERE to disable loading of a U-Boot environment from the flash memory.
Validate the return value of setenv and abort the boot process if the call fails.
Vendor Communication
2023-09-04: Issue reported to vendor.
2023-09-07: Sonos has triaged report and is investigating.
2023-11-29: NCC queries Sonos for expected patch date.
2023-11-29: Sonos informs NCC that they already shipped a patch on the 15th Nov.
2023-11-30: NCC queries why there are no release notes, CVE, or credit for the issues.
2023-12-01: NCC informs Sonos that technical details will be published the w/c 4th Dec.
NCC Group is a global expert in cybersecurity and risk mitigation, working with businesses to protect their brand, value and reputation against the ever-evolving threat landscape. With our knowledge, experience and global footprint, we are best placed to help businesses identify, assess, mitigate and respond to the risks they face. We are passionate about making the Internet safer and revolutionizing the way in which organizations think about cybersecurity.
Research performed by Ilya Zhuravlev supporting the Exploit Development Group (EDG).
The Era 100 is Sonos’s flagship device, released on March 28th, 2023, and a notable step up from the Sonos One. It was also one of the target devices for Pwn2Own Toronto 2023. NCC found multiple security weaknesses within the bootloader of the device which could be exploited, leading to root/kernel code execution and full compromise of the device.
According to Sonos, the issues reported were patched in an update released on the 15th of November, with no CVE issued or public details of the security weakness. NCC is not aware of the full scope of devices impacted by this issue. Users of Sonos devices should ensure they apply any recent updates.
To develop an exploit eligible for the Pwn2Own contest, the first step is to dump the firmware, gain initial access to the device, and perhaps even set up debugging facilities to assist in debugging any potential exploits.
In this article we will document the process of analyzing the hardware, discovering several issues, and developing a persistent secure boot bypass for the Sonos Era 100.
Exploitation was also chained with a previously disclosed exploit by bl4sty to obtain EL3 code execution and extract cryptographic key material.
Initial recon
After opening the device, we quickly identified UART pins broken out on the motherboard:
The pinout is TX, RX, GND, Vcc
We can now attach a UART adapter and monitor the boot process:
SM1:BL:511f6b:81ca2f;FEAT:B0F02990:20283000;POC:F;RCY:0;EMMC:0;READ:0;0.0;0.0;CHK:0;
bl2_stage_init 0x01
bl2_stage_init 0xc1
bl2_stage_init 0x02
/* Skipped most of the log here */
U-Boot 2016.11-S767-Strict-Rev0.10 (Oct 13 2022 - 09:14:35 +0000)
SoC: Amlogic S767
Board: Sonos Optimo1 Revision 0x06
Reset: POR
cpu family id not support!!!
thermal ver flag error!
flagbuf is 0xfa!
read calibrated data failed
SOC Temperature -1 C
I2C: ready
DRAM: 1 GiB
initializing iomux_cfg_i2c
register usb cfg[0][1] = 000000007ffabde0
MMC: SDIO Port C: 0
*** Warning - bad CRC, using default environment
In: serial
Out: serial
Err: serial
Init Video as 1920 x 1080 pixel matrix
Net: dwmac.ff3f0000
checking cpuid allowlist (my cpuid is 2b:0b:17:00:01:17:12:00:00:11:33:38:36:55:4d:50)...
allowlist check completed
Hit any key to stop autoboot: 0
pending_unlock: no pending DevUnlock
Image header on sect 0
Magic: 536f7821
Version 1
Bootgen 0
Kernel Offset 40
Kernel Checksum 78c13f6f
Kernel Length a2ba18
Rootfs Offset 0
Rootfs Checksum 0
Rootfs Length 0
Rootfs Format 2
Image header on sect 1
Magic: 536f7821
Version 1
Bootgen 2
Kernel Offset 40
Kernel Checksum 78c13f6f
Kernel Length a2ba18
Rootfs Offset 0
Rootfs Checksum 0
Rootfs Length 0
Rootfs Format 2
Both headers OK, bootgens 0 2
uboot: section-1 selected
boot_state 0
364 byte kernel signature verified successfully
JTAG disabled
disable_usb: DISABLE_USB_BOOT fuse already set
disable_usb: DISABLE_JTAG fuse already set
disable_usb: DISABLE_M3_JTAG fuse already set
disable_usb: DISABLE_M4_JTAG fuse already set
srk_fuses: not revoking any more SRK keys (0x1)
srk_fuses: locking SRK revocation fuses
Start the watchdog timer before starting the kernel...
get_kernel_config [id = 1, rev = 6] returning 22
## Loading kernel from FIT Image at 00100040 ...
Using 'conf@23' configuration
Trying 'kernel@1' kernel subimage
Description: Sonos Linux kernel for S767
Type: Kernel Image
Compression: lz4 compressed
Data Start: 0x00100128
Data Size: 9076344 Bytes = 8.7 MiB
Architecture: AArch64
OS: Linux
Load Address: 0x01080000
Entry Point: 0x01080000
Hash algo: crc32
Hash value: 2e036fce
Verifying Hash Integrity ... crc32+ OK
## Loading fdt from FIT Image at 00100040 ...
Using 'conf@23' configuration
Trying 'fdt@23' fdt subimage
Description: Flattened Device Tree Sonos Optimo1 V6
Type: Flat Device Tree
Compression: uncompressed
Data Start: 0x00a27fe8
Data Size: 75487 Bytes = 73.7 KiB
Architecture: AArch64
Hash algo: crc32
Hash value: adbd3c21
Verifying Hash Integrity ... crc32+ OK
Booting using the fdt blob at 0xa27fe8
Uncompressing Kernel Image ... OK
Loading Device Tree to 00000000417ea000, end 00000000417ff6de ... OK
Starting kernel ...
vmin:32 b5 0 0!
From this log, we can see that the boot process is very similar to other Sonos devices. Moreover, despite the marking on the SoC and the boot log indicating an undocumented Amlogic S767a chip, the first line of the BootROM log containing “SM1” points us to S905X3, which has a datasheet available.
Whilst it’s possible to interrupt the U-Boot boot process, Sonos has gone through several rounds of boot hardening, and by now the U-Boot console is only accessible with a password that is stored hashed inside the U-Boot binary. Additionally, the set of accessible U-Boot commands is heavily restricted.
Dumping the eMMC
Continuing to probe the PCB, it was possible to locate the eMMC data pins in order to attempt an in-circuit eMMC dump. From previous generations of Sonos devices, we knew that the data on the flash is mostly encrypted. Nevertheless, an in-circuit eMMC connection would also allow us to rapidly modify the flash memory contents, without having to take the chip off and put it back on every time.
By probing termination resistors and test points located in the general area between the SoC and the eMMC chip, first with an oscilloscope and then with a logic analyzer, it was possible to identify several candidates for eMMC lines.
To perform an in-circuit dump, we have to connect CLK, CMD, DAT0, and ground at a minimum. While CLK and CMD are pretty obvious from the above capture, there are multiple candidates for the DAT0 pin. Moreover, we could only identify 3 out of 4 data pins at this point. Fortunately, after trying all 3 of these, it was possible to identify the following connections:
Note that the extra pin marked as “INT” here is used to interrupt the BootROM boot process. By connecting it to ground during boot, the BootROM gets stuck trying to boot from SPINOR, which allows us to communicate on the eMMC lines without interference.
From there, it was possible to dump the contents of the eMMC and confirm that the bulk of the firmware, including the Linux rootfs, was encrypted.
Investigating U-Boot
While we were unable to get access to the Sonos Era 100 U-Boot binary just yet, previous work on Sonos devices enabled us to obtain a plaintext binary for the Sonos One U-Boot. At this point we were hoping that the images would be mostly the same, and that a vulnerability existed in U-Boot that could be exploited in a black-box manner utilizing the eMMC read-write capability. Several such issues were identified and are documented below.
Issue 1: Stored environment
Despite the device not utilizing the stored environment feature of U-Boot, there’s still an attempt to load the environment from flash at startup. This appears to stem from a misconfiguration where the CONFIG_ENV_IS_NOWHERE flag is not set in U-Boot. As a result, during startup it will try to load the environment from flash offset 0x500000. Since there’s no valid environment there, it displays the following warning message over UART:
*** Warning - bad CRC, using default environment
The message goes away when a valid environment is written to that location. This enables us to set variables such as bootcmd, essentially bypassing the password-protected Sonos U-Boot console. However, as mentioned above, the available commands are heavily restricted.
Issue 2: Unchecked setenv() call
By default on the Sonos Era 100, U-Boot’s “bootcmd” is set to “sonosboot”. To understand the overall boot process, it was possible to reverse engineer the custom “sonosboot” handler. On a high level, this command is responsible for loading and validating the kernel image, after which it passes control to the U-Boot “bootm” built-in. Because “bootm” uses U-Boot environment variables to control the arguments passed to the Linux kernel, “sonosboot” makes sure to set them up first before passing control:
setenv("bootargs",(char*)kernel_cmdline);
There is however no check on the return value of this setenv call. If it fails, the variable will keep its previous value, which in our case is the value loaded from the stored environment.
As it turns out, it is possible to make this setenv call fail. A somewhat obscure feature of U-Boot allows marking variables as read-only. For example, by setting “.flags=bootargs:sr”, the “bootargs” variable becomes read-only and all future writes without the H_FORCE flag fail.
All we have to do at this point to exploit this issue is to construct a stored environment that first defines the “bootargs” value, and then sets it as read-only by defining “.flags=bootargs:sr”. The execution of “sonosboot” will then proceed into “bootm”, and it will start the Linux kernel with fully controlled command-line arguments.
One way to obtain code execution from there is to insert an “initrd=0xADDR,0xSIZE” argument, which will cause the Linux kernel to load an initramfs from memory at the specified address, overriding the built-in image.
Issue 3: Malleable firmware image
The exploitation process described above, however, requires that controlled data is placed at a known static address. One way to do that is to abuse the custom Sonos image header. According to U-Boot logs, this is always loaded at address 0x100000:
## Loading kernel from FIT Image at 00100040 ...
Using 'conf@23' configuration
Trying 'kernel@1' kernel subimage
Description: Sonos Linux kernel for S767
Type: Kernel Image
Compression: lz4 compressed
Data Start: 0x00100128
Data Size: 9076344 Bytes = 8.7 MiB
Architecture: AArch64
OS: Linux
Load Address: 0x01080000
Entry Point: 0x01080000
Hash algo: crc32
Hash value: 2e036fce
Verifying Hash Integrity ... crc32+ OK
The image header can be represented in pseudocode as follows:
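The original pseudocode did not survive extraction, so the following is a best-guess reconstruction in Python, using the field names printed in the header dump from the boot log above; exact field widths and ordering are assumptions.

from dataclasses import dataclass

@dataclass
class SonosImageHeader:   # reconstructed; widths and order assumed
    magic: int             # 0x536f7821 ("Sox!")
    version: int           # 1
    bootgen: int           # boot generation selector
    kernel_offset: int     # normally 0x40, but not enforced by U-Boot
    kernel_checksum: int
    kernel_length: int
    rootfs_offset: int
    rootfs_checksum: int
    rootfs_length: int
    rootfs_format: int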
The issue is that while the value of kernel_offset is normally 0x40, it is not enforced by U-Boot. By setting the offset to a higher value and then filling the empty space with arbitrary data, we can place the data at a known fixed location in U-Boot memory while ensuring that the signature check on the image still passes.
Combining all three issues outlined above, it is possible to achieve persistent code execution within Linux under the /init process as the “root” user.
Moreover, by inserting a kernel module, this access can be escalated to kernel-mode arbitrary code execution.
Epilogue
There’s just one missing piece, and that is to dump the one-time programmable (OTP) data so that we can decrypt any future firmware. Fortunately, the factory firmware that the device came pre-flashed with does not contain a fix for the vulnerability disclosed in https://haxx.in/posts/dumping-the-amlogic-a113x-bootrom/
From there, slight modifications are required to adjust the exploit for the different EL3 binary of this device. The arbitrary read primitive provided by the a113x-el3-pwn tool works as-is and allows for the EL3 image to be dumped. With the adjusted exploit we were then able to dump full OTP contents and decrypt any future firmware update for this device.
Disclosure Timeline
2023-09-04: NCC reports issues to Sonos.
2023-09-07: Sonos has triaged report and is investigating.
2023-11-29: NCC queries Sonos for expected patch date.
2023-11-29: Sonos informs NCC that they already shipped a patch on the 15th Nov.
2023-11-30: NCC queries why there are no release notes, CVE, or credit for the issues.
2023-12-01: NCC informs Sonos that technical details will be published the w/c 4th Dec.
Remote Play Together, developed by Valve, allows sharing local multi-player games with friends over the network through streaming. The associated protocol is elaborate enough to shelter a valuable attack surface that has scarcely been ventured into in the past.
This post covers the reverse engineering of the protocol and client/server implementations inside Steam, before presenting a dedicated fuzzer that unveiled a few critical vulnerabilities.
Basically, NimExec is a fileless remote command execution tool that uses the Service Control Manager Remote Protocol (MS-SCMR). It changes the binary path of a random or given service run by LocalSystem to execute the given command on the target and restores it later, using hand-crafted RPC packets instead of WinAPI calls. It sends these packets over SMB2 and the svcctl named pipe.
NimExec needs an NTLM hash to authenticate to the target machine and completes this authentication process using the NTLM authentication method over hand-crafted packets.
Since all required network packets are manually crafted and no operating system-specific functions are used, NimExec can be used on different operating systems thanks to Nim's cross-compilation support.
This project was inspired by Julio's SharpNoPSExec tool. You can think of NimExec as a cross-compilable version of SharpNoPSExec with built-in pass-the-hash support. I also learned the required network packet structures from Kevin Robertson's Invoke-SMBExec script.
Compilation
nim c -d:release --gc:markAndSweep -o:NimExec.exe Main.nim
The above command uses a different garbage collector because Nim's default garbage collector throws SIGSEGV errors during the service enumeration process.
Also, you can install the required Nim modules via Nimble.
Example output from a successful run:
[+] Connected to 10.200.2.2:445
[+] NTLM Authentication with Hash is succesfull!
[+] Connected to IPC Share of target!
[+] Opened a handle for svcctl pipe!
[+] Bound to the RPC Interface!
[+] RPC Binding is acknowledged!
[+] SCManager handle is obtained!
[+] Number of obtained services: 265
[+] Selected service is LxpSvc
[+] Service: LxpSvc is opened!
[+] Previous Service Path is: C:\Windows\system32\svchost.exe -k netsvcs
[+] Service config is changed!
[!] StartServiceW Return Value: 1053 (ERROR_SERVICE_REQUEST_TIMEOUT)
[+] Service start request is sent!
[+] Service config is restored!
[+] Service handle is closed!
[+] Service Manager handle is closed!
[+] SMB is closed!
[+] Tree is disconnected!
[+] Session logoff!
It has been tested against Windows 10 & 11 and Windows Server 2016, 2019 & 2022, from Ubuntu 20.04 and Windows 10 machines.
Command Line Parameters
-v | --verbose                 Enable more verbose output.
-u | --username <Username>     Username for NTLM Authentication.*
-h | --hash <NTLM Hash>        NTLM password hash for NTLM Authentication.*
-t | --target <Target>         Lateral movement target.*
-c | --command <Command>       Command to execute.*
-d | --domain <Domain>         Domain name for NTLM Authentication.
-s | --service <Service Name>  Name of the service instead of a random one.
--help                         Show the help message.
‘Blade Runner’ – the cult classic movie – teaches us that the (non-)human traits/behaviors can be detected with a so-called Voight-Kampff test. This post is about discussing (not designing yet) a similar test for our threat hunting purposes…
T3SF is a framework that offers a modular structure for the orchestration of events based on a master scenario events list (MSEL), together with a set of rules defined for each exercise (optional) and a configuration that allows defining the parameters of the corresponding platform. The main module communicates with the platform-specific module (Discord, Slack, Telegram, etc.) to present the events in the input channels as injects for each platform. In addition, the framework supports different use cases: “single organization, multiple areas”, “multiple organizations, single area”, and “multiple organizations, multiple areas”.
Getting Things Ready
To use the framework with your desired platform, whether it's Slack or Discord, you will need to install the required modules for that platform. But don't worry, installing these modules is easy and straightforward.
To do this, you can follow this simple step-by-step guide, or if you're already comfortable installing packages with pip, you can skip to the last step!
# Python 3.6+ required
python -m venv .venv          # We will create a python virtual environment
source .venv/bin/activate     # Let's get inside it
pip install -U pip            # Upgrade pip
Once you have created a Python virtual environment and activated it, you can install the T3SF framework for your desired platform by running the following command:
pip install "T3SF[Discord]" # Install the framework to work with Discord
or
pip install "T3SF[Slack]" # Install the framework to work with Slack
This will install the T3SF framework along with the required dependencies for your chosen platform. Once the installation is complete, you can start using the framework with your platform of choice.
We strongly recommend following the platform-specific guidance within our Read The Docs! Here are the links:
We created this framework to simplify all your work!
Using Docker
Supported Tags
slack → This image has all the requirements to perform an exercise in Slack.
discord → This image has all the requirements to perform an exercise in Discord.
Using it with Slack
$ docker run --rm -t --env-file .env -v $(pwd)/MSEL.json:/app/MSEL.json base4sec/t3sf:slack
Inside your .env file you have to provide the SLACK_BOT_TOKEN and SLACK_APP_TOKEN tokens. Read more about it here.
There is another environment variable to set, MSEL_PATH. This variable tells the framework in which path the MSEL is located. By default, the container path is /app/MSEL.json. If you change the mount location of the volume then also change the variable.
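For reference, a minimal .env for the Slack image could look like the following. The token values are placeholders; only the variable names come from the documentation above.

SLACK_BOT_TOKEN=xoxb-<your-bot-token>
SLACK_APP_TOKEN=xapp-<your-app-token>
MSEL_PATH=/app/MSEL.json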
Using it with Discord
$ docker run --rm -t --env-file .env -v $(pwd)/MSEL.json:/app/MSEL.json base4sec/t3sf:discord
Inside your .env file you have to provide the DISCORD_TOKEN token. Read more about it here.
There is another environment variable to set, MSEL_PATH. This variable tells the framework in which path the MSEL is located. By default, the container path is /app/MSEL.json. If you change the mount location of the volume then also change the variable.
Once you have everything ready, use our template for main.py, or modify the following code. Here is an example of running the framework with the Discord bot and a GUI:
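The template itself is not reproduced here, so the sketch below is a hypothetical reconstruction: the T3SF class name matches the framework, but the constructor parameters and the start() signature are assumptions; consult the Read The Docs template for the authoritative version.

import asyncio
from T3SF import T3SF  # import path assumed

async def main():
    instance = T3SF(platform="Discord", gui=True)  # assumed parameters
    await instance.start(MSEL="MSEL.json")         # assumed signature

if __name__ == "__main__":
    asyncio.run(main())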