
Enforcing a Sysmon Archive Quota

30 June 2022 at 12:19

Sysmon (System Monitor) is a well-known and widely used Windows logging utility providing valuable visibility into core OS (operating system) events. From a defender’s perspective, the presence of Sysmon in an environment greatly enhances detection and forensic capabilities by logging events involving processes, files, registry, network connections and more.

Since Sysmon 11 (released April 2020), the FileDelete event provides the capability to retain (archive) deleted files, a feature we especially adore during active compromises when actors drop-use-delete tools. However, as duly noted in Sysmon’s documentation, the usage of the archiving feature might grow the archive directory to unreasonable sizes (hundreds of GB); something most environments cannot afford.

This blog post will cover how, through a Windows-native feature (WMI event consumption), the Sysmon archive can be kept at a reasonable size. In a hurry? Go straight to the proof of concept!

Figure 1: A Sysmon archive quota removing old files.

The Challenge of Sysmon File Archiving

Typical Sysmon deployments require repeated fine-tuning to ensure optimized performance. When responding to hands-on-keyboard attackers, this time-consuming process is commonly replaced by relying on robust baselined configurations (some of which are open-source, such as SwiftOnSecurity/sysmon-config or olafhartong/sysmon-modular). While most misconfigured events have at worst an impact on CPU and log storage, Sysmon file archiving can grind a system to a halt by exhausting all available storage. So how could one still perform file archiving without risking an outage?

While searching for a solution, we defined some acceptance requirements. Ideally, the solution should…

  • Be Windows-native. We weren’t looking for yet another agent or driver consuming resources, potentially causing compatibility issues and increasing the attack surface.
  • Be FIFO-like (First In, First Out) to ensure the oldest archived files are deleted first. This ensures attacker tools are kept in the archive just long enough for our incident responders to grab them.
  • Have a minimal system performance impact if we want file archiving to be usable in production.

A commonly proposed solution would be to rely on a scheduled task to perform some clean-up activities. While Windows-native, this execution method is “dumb” (schedule-based) and would run even when no files have been archived.
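
For comparison, such a schedule-based clean-up might look as follows; a minimal sketch assuming a hypothetical Clean-SysmonArchive.ps1 script holding the clean-up logic.

# Run the (hypothetical) clean-up script daily as SYSTEM, whether or not anything was archived.
$Action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -File C:\Scripts\Clean-SysmonArchive.ps1'
$Trigger = New-ScheduledTaskTrigger -Daily -At 3am
Register-ScheduledTask -TaskName 'SysmonArchiveCleanup' -Action $Action -Trigger $Trigger -User 'SYSTEM' -RunLevel Highest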

So how about WMI event consumption?

WMI Event Consumption

WMI (Windows Management Instrumentation) is a Windows-native component providing capabilities surrounding the OS’ management data and operations. You can for example use it to read and write configuration settings related to Windows, or monitor operations such as process and file creations.
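
As a quick illustration of reading such management data, the following query retrieves basic operating system details through WMI’s CIM cmdlets:

# Read a few properties from the Win32_OperatingSystem class.
Get-CimInstance -ClassName Win32_OperatingSystem | Select-Object Caption, Version, LastBootUpTime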

Within the WMI architecture lies the permanent event consumer.

You may want to write an application that can react to events at any time. For example, an administrator may want to receive an email message when specific performance measures decline on network servers. In this case, your application should run at all times. However, running an application continuously is not an efficient use of system resources. Instead, WMI allows you to create a permanent event consumer. […]

A permanent event consumer receives events until its registration is explicitly canceled.

docs.microsoft.com

Leveraging a permanent event consumer to monitor for file events within the Sysmon archive folder would provide optimized event-based execution as opposed to the scheduled task approach.

In the following sections we will start by creating a WMI event filter intended to select events of interest; after which we will cover the WMI logical consumer whose role will be to clean up the Sysmon archive.

WMI Event Filter

A WMI event filter is an __EventFilter instance containing a WQL (WMI Query Language, SQL for WMI) statement whose role is to filter event tables for the desired events. In our case, we want to be notified when files are being created in the Sysmon archive folder.

Whenever a file is created, an __InstanceCreationEvent intrinsic event fires with a TargetInstance of the CIM_DataFile class. The following WQL statement filters for such events within the default C:\Sysmon\ archive folder:

SELECT * FROM __InstanceCreationEvent
WHERE TargetInstance ISA 'CIM_DataFile'
	AND TargetInstance.Drive='C:'
	AND TargetInstance.Path='\\Sysmon\\'

Intrinsic events are polled at specific intervals. As we wish to ensure the polling period is not too long, a WITHIN clause can be used to define the maximum number of seconds that can pass before the notification of the event must be delivered.

The query below requires matching event notifications to be delivered within 10 seconds.

SELECT * FROM __InstanceCreationEvent
WITHIN 10
WHERE TargetInstance ISA 'CIM_DataFile'
	AND TargetInstance.Drive='C:'
	AND TargetInstance.Path='\\Sysmon\\' 

While the above WQL statement is functional, it is not yet optimized. As an example, if Sysmon were to archive 1000 files, the event notification would fire 1000 times, later resulting in our clean-up logic being executed 1000 times as well.

To cope with this behavior, a GROUP clause can be used to combine events into a single notification. Furthermore, to ensure the grouping occurs in a timely manner, another WITHIN clause can be leveraged. The following WQL statement waits for up to 10 seconds to deliver a single notification should any files have been created in Sysmon’s archive folder.

SELECT * FROM __InstanceCreationEvent
WITHIN 10
WHERE TargetInstance ISA 'CIM_DataFile'
	AND TargetInstance.Drive='C:'
	AND TargetInstance.Path='\\Sysmon\\' 
GROUP WITHIN 10

To create a WMI event filter we can rely on PowerShell’s New-CimInstance cmdlet as shown in the following snippet.

$Archive = "C:\\Sysmon\\"
$Delay = 10
$Filter = New-CimInstance -Namespace root/subscription -ClassName __EventFilter -Property @{
    Name = 'SysmonArchiveWatcher';
    EventNameSpace = 'root\cimv2';
    QueryLanguage = "WQL";
    Query = "SELECT * FROM __InstanceCreationEvent WITHIN $Delay WHERE TargetInstance ISA 'CIM_DataFile' AND TargetInstance.Drive='$(Split-Path -Path $Archive -Qualifier)' AND TargetInstance.Path='$(Split-Path -Path $Archive -NoQualifier)' GROUP WITHIN $Delay"
}
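
If desired, the newly registered filter can be verified by querying it back from the root/subscription namespace:

# List the event filter we just created.
Get-CimInstance -Namespace root/subscription -ClassName __EventFilter | Where-Object Name -eq 'SysmonArchiveWatcher'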

WMI Logical Consumer

The WMI logical consumer will consume WMI events and undertake actions for each occurrence. Multiple logical consumer classes exist, providing different behaviors whenever events are received, such as:

  • ActiveScriptEventConsumer, which executes a predefined script.
  • LogFileEventConsumer, which writes customized strings to a text log file.
  • NTEventLogEventConsumer, which logs a message to the Windows event log.
  • SMTPEventConsumer, which sends an email message.
  • CommandLineEventConsumer, which launches an arbitrary process.

The last CommandLineEventConsumer class is particularly interesting as it would allow us to run a PowerShell script whenever files are archived by Sysmon (a feature attackers do enjoy as well).

The first step in our PowerShell code will be to obtain a full list of archived files ordered from oldest to most recent. This list will play two roles:

  1. It will be used to compute the current directory size.
  2. It will be used as a list of files to remove (in FIFO order) until the directory size is back under control.

While getting a list of files is easy through the Get-ChildItem cmdlet, sorting these files from oldest to most recently archived requires some thinking. Where common folders could rely on the file’s CreationTimeUtc property, Sysmon archiving copies this property over from the original file. As a consequence, the CreationTimeUtc field is not representative of when a file was archived, and relying on it could result in files being incorrectly seen as the oldest archives, causing their premature removal.

Instead of relying on CreationTimeUtc, the alternate LastAccessTimeUtc property provides a more accurate representation of when a file was archived. The following snippet will get all files within the Sysmon archive and order them in a FIFO-like fashion.

$Archived = Get-ChildItem -Path 'C:\\Sysmon\\' -File | Sort-Object -Property LastAccessTimeUtc

Once the archived files are listed, the folder size can be computed through the Measure-Object cmdlet.

$Size = ($Archived | Measure-Object -Sum -Property Length).Sum

All that remains is to loop over the archived files and remove them while the folder size exceeds our desired quota.

for($Index = 0; ($Index -lt $Archived.Count) -and ($Size -gt 5GB); $Index++)
{
	$Archived[$Index] | Remove-Item -Force
	$Size -= $Archived[$Index].Length
}

Sysmon & Hard Links

In some situations, Sysmon archives a file by referencing the file’s content from a new path, a process known as hard-linking.

A hard link is the file system representation of a file by which more than one path references a single file in the same volume.

docs.microsoft.com

As an example, the following snippet creates an additional path (hard link) for an executable. Both paths will now point to the same on-disk file content. If one path gets deleted, Sysmon will reference the deleted file by adding a new path, resulting in the file’s content having two paths, one of which is within the Sysmon archive.

:: Create a hard link for an executable.
C:\>mklink /H C:\Users\Public\NVISO.exe C:\Users\NVISO\Downloads\NVISO.exe
Hardlink created for C:\Users\Public\NVISO.exe <<===>> C:\Users\NVISO\Downloads\NVISO.exe

:: Delete one of the hard links causing Sysmon to archive the file.
C:\>del C:\Users\NVISO\Downloads\NVISO.exe

:: The archived file now has two paths, one of which within the Sysmon archive.
C:\>fsutil hardlink list Sysmon\B99D61D874728EDC0918CA0EB10EAB93D381E7367E377406E65963366C874450.exe
\Sysmon\B99D61D874728EDC0918CA0EB10EAB93D381E7367E377406E65963366C874450.exe
\Users\Public\NVISO.exe

The presence of hard links within the Sysmon archive can cause an edge-case should the non-archive path be locked by another process while we attempt to clean the archive. Should for example a process be created from the non-archive path, removing the archived file will become slightly harder.

:: If the other path is locked by a process, deleting it will result in a denied access.
C:\>del Sysmon\B99D61D874728EDC0918CA0EB10EAB93D381E7367E377406E65963366C874450.exe
C:\Sysmon\B99D61D874728EDC0918CA0EB10EAB93D381E7367E377406E65963366C874450.exe
Access is denied.

Removing such locked hard links from the archive is not straightforward, as the file’s content remains in use through its other path. However, as the archive’s hard link does technically not consume additional storage (the same content is referenced from another path), such files can safely be ignored given they do not partake in the storage exhaustion. Once the non-archive hard links referencing a Sysmon-archived file are removed, the archived file is not considered a hard link anymore and will be removable again.

To cope with the above edge-case, hard links can be filtered out and removal operations can be encapsulated in try/catch expressions should other edge-cases exist. Overall, the WMI logical consumer’s logic could look as follows:

$Archived = Get-ChildItem -Path 'C:\\Sysmon\\' -File | Where-Object {$_.LinkType -ne 'HardLink'} | Sort-Object -Property LastAccessTimeUtc
$Size = ($Archived | Measure-Object -Sum -Property Length).Sum
for($Index = 0; ($Index -lt $Archived.Count) -and ($Size -gt 5GB); $Index++)
{
	try
	{
		$Archived[$Index] | Remove-Item -Force -ErrorAction Stop
		$Size -= $Archived[$Index].Length
	} catch {}
}

As we did for the event filter, a WMI consumer can be created through the New-CimInstance cmdlet. The following snippet specifically creates a new CommandLineEventConsumer invoking our above clean-up logic to enforce a 10GB quota.

$Archive = "C:\\Sysmon\\"
$Limit = 10GB
$Consumer = New-CimInstance -Namespace root/subscription -ClassName CommandLineEventConsumer -Property @{
    Name = 'SysmonArchiveCleaner';
    ExecutablePath = $((Get-Command PowerShell).Source);
    CommandLineTemplate = "-NoLogo -NoProfile -NonInteractive -WindowStyle Hidden -Command `"`$Archived = Get-ChildItem -Path '$Archive' -File | Where-Object {`$_.LinkType -ne 'HardLink'} | Sort-Object -Property LastAccessTimeUtc; `$Size = (`$Archived | Measure-Object -Sum -Property Length).Sum; for(`$Index = 0; (`$Index -lt `$Archived.Count) -and (`$Size -gt $Limit); `$Index++){ try {`$Archived[`$Index] | Remove-Item -Force -ErrorAction Stop; `$Size -= `$Archived[`$Index].Length} catch {}}`""
}

WMI Binding

In the above two sections we defined the event filter and logical consumer. One last point worth noting is that an event filter needs to be bound to an event consumer in order to become operational. This is done through a __FilterToConsumerBinding instance as shown below.

New-CimInstance -Namespace root/subscription -ClassName __FilterToConsumerBinding -Property @{
    Filter = [Ref]$Filter;
    Consumer = [Ref]$Consumer;
}
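
Should the quota ever need to be decommissioned, the binding, consumer and filter can be removed again; a minimal sketch assuming the names used above:

# Remove the binding first, then the consumer and filter it referenced.
Get-CimInstance -Namespace root/subscription -ClassName __FilterToConsumerBinding |
    Where-Object {$_.Filter.Name -eq 'SysmonArchiveWatcher'} | Remove-CimInstance
Get-CimInstance -Namespace root/subscription -ClassName CommandLineEventConsumer |
    Where-Object Name -eq 'SysmonArchiveCleaner' | Remove-CimInstance
Get-CimInstance -Namespace root/subscription -ClassName __EventFilter |
    Where-Object Name -eq 'SysmonArchiveWatcher' | Remove-CimInstance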

Proof of Concept

The following proof-of-concept deployment technique has been tested in limited environments. As should be the case with anything you introduce into your environment, make sure rigorous testing is done and don’t just deploy straight to production.

The following PowerShell script creates a WMI event filter and logical consumer with the logic we defined previously before binding them. The script can be configured using the following variables:

  • $Archive as the Sysmon archive path. To be WQL-compliant, special characters have to be back-slash (\) escaped, resulting in double back-slashed directory separators (\\).
  • $Limit as the Sysmon archive’s desired maximum folder size (see PowerShell’s numeric literals such as 10GB).
  • $Delay as the event filter’s maximum WQL delay value in seconds (WITHIN clause).

Do note that Windows security boundaries apply to WMI as well and, given the Sysmon archive directory is restricted to the SYSTEM user, the following script should be run with SYSTEM privileges.

$ErrorActionPreference = "Stop"

# Define the Sysmon archive path, desired quota and query delay.
$Archive = "C:\\Sysmon\\"
$Limit = 10GB
$Delay = 10

# Create a WMI filter for files being created within the Sysmon archive.
$Filter = New-CimInstance -Namespace root/subscription -ClassName __EventFilter -Property @{
    Name = 'SysmonArchiveWatcher';
    EventNameSpace = 'root\cimv2';
    QueryLanguage = "WQL";
    Query = "SELECT * FROM __InstanceCreationEvent WITHIN $Delay WHERE TargetInstance ISA 'CIM_DataFile' AND TargetInstance.Drive='$(Split-Path -Path $Archive -Qualifier)' AND TargetInstance.Path='$(Split-Path -Path $Archive -NoQualifier)' GROUP WITHIN $Delay"
}

# Create a WMI consumer which will clean up the Sysmon archive folder until the quota is reached.
$Consumer = New-CimInstance -Namespace root/subscription -ClassName CommandLineEventConsumer -Property @{
    Name = 'SysmonArchiveCleaner';
    ExecutablePath = (Get-Command PowerShell).Source;
    CommandLineTemplate = "-NoLogo -NoProfile -NonInteractive -WindowStyle Hidden -Command `"`$Archived = Get-ChildItem -Path '$Archive' -File | Where-Object {`$_.LinkType -ne 'HardLink'} | Sort-Object -Property LastAccessTimeUtc; `$Size = (`$Archived | Measure-Object -Sum -Property Length).Sum; for(`$Index = 0; (`$Index -lt `$Archived.Count) -and (`$Size -gt $Limit); `$Index++){ try {`$Archived[`$Index] | Remove-Item -Force -ErrorAction Stop; `$Size -= `$Archived[`$Index].Length} catch {}}`""
}

# Create a WMI binding from the filter to the consumer.
New-CimInstance -Namespace root/subscription -ClassName __FilterToConsumerBinding -Property @{
    Filter = [Ref]$Filter;
    Consumer = [Ref]$Consumer;
}

Once the WMI event consumption is configured, the Sysmon archive folder will be kept at a reasonable size, as shown in the following capture where a 90KB quota has been defined.

Figure 2: A Sysmon archive quota of 90KB removing old files.

With Sysmon archiving under control, we can now happily wait for new attacker tool-kits to be dropped…

Detecting & Preventing Rogue Azure Subscriptions

18 May 2022 at 15:41

A few weeks ago, NVISO observed how a phishing campaign resulted in a compromised user creating additional attacker infrastructure in their Azure tenant. While most of the malicious operations were flagged, we were surprised by the lack of logging and alerting on Azure subscription creation.

Creating a rogue subscription has a couple of advantages:

  • By default, all Azure Active Directory members can create new subscriptions.
  • New subscriptions can also benefit from a trial license granting attackers $200 worth of credits.
  • By default, even global administrators have no visibility over such new subscriptions.

In this blog post we will cover why rogue subscriptions are problematic and revisit a solution published a couple of years ago on Microsoft’s Tech Community. Finally, we will conclude with some hardening recommendations to restrict the creation and importation of Azure subscriptions.

Don’t become ‘that’ admin…

The deployments and recommendations discussed throughout this blog post require administrative privileges in Azure. As with any administrative actions, we recommend you exercise caution and consider any undesired side-effects privileged changes could cause.

With the above warning in mind, global administrators in a hurry can directly deploy the logging of available subscriptions (and read the hardening recommendations)…

Deploy to Azure

Azure’s Hierarchy

To understand the challenges behind logging and monitoring subscription creations, one must first understand what Azure’s hierarchy looks like.

In Azure, resources such as virtual machines or databases are logically grouped within resource groups. These resource groups act as logical containers for resources with a similar purpose. To invoice the usage of these resources, resource groups are part of a subscription, which also defines quotas and limits. Finally, subscriptions are part of management groups, which provide centralized management for access, policies or compliance.

Figure 1: Management levels and hierarchy in “Organize your Azure resources effectively” on docs.microsoft.com.
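
For readers who prefer the command line, this hierarchy can also be walked through PowerShell; a quick sketch assuming the Az modules are installed and an authenticated session (Connect-AzAccount) is active:

# Walk the hierarchy top-down within the current context:
# management groups, subscriptions, resource groups and resources.
Get-AzManagementGroup
Get-AzSubscription
Get-AzResourceGroup | Select-Object ResourceGroupName, Location
Get-AzResource | Select-Object Name, ResourceType, ResourceGroupName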

Most Azure components are resources, as is the case with monitoring solutions. As an example, creating an Azure Sentinel instance requires the prior creation of a subscription. This core hierarchy of Azure implies that monitoring and logging are commonly scoped to a specific set of subscriptions, as can be seen when creating alert rules.

Figure 2: Alert rules and their scope selection limited to predefined subscriptions in the Azure portal.

This Azure hierarchy creates a chicken-and-egg problem: monitoring for subscription creations requires prior knowledge of the subscriptions.

Another small yet non-negligible Azure detail is that, by default, even global administrators cannot view all subscriptions. As detailed in “Elevate access to manage all Azure subscriptions and management groups“, viewing all subscriptions first requires an additional elevation through the Azure Active Directory properties, followed by unchecking the global subscription filter.

Figure 3: The Azure Active Directory access management properties.
Figure 4: The global subscriptions filter enabled by default in the Azure portal.
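
For administrators who prefer scripting the elevation step, the same toggle is exposed through the elevateAccess REST operation; a hedged sketch assuming the Az.Accounts module and a global administrator session:

# Request "User Access Administrator" at root scope, equivalent to flipping the portal toggle.
Invoke-AzRestMethod -Method POST -Path '/providers/Microsoft.Authorization/elevateAccess?api-version=2016-07-01'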

The following image slider shows the view before (left) and after (right) the above elevation and filtering steps have been taken.

Figure 5: Subscriptions before (left) and after (right) access elevation and filter removal in the Azure portal.

In the compromise NVISO observed, the rogue subscriptions were all named “Azure subscription 1”, matching the default name enforced by Azure when leveraging free trials (as seen in the above figure).

Detecting New Subscriptions

A few years ago, a Microsoft Tech Community blog post covered this exact challenge and solved it through a logic app. The following section revisits their solution with a slight variation, using Azure Sentinel and system-assigned identities. Through a simple logic app, one can store the list of subscriptions in a Log Analytics workspace, on which an alert rule can then be set up to alert on new subscriptions.

Deploy to Azure

Collecting the Subscription Logs

The first step in collecting the subscription logs is to create a new empty logic app (see the “Create a Consumption logic app resource” documentation section for more help). Once created, ensure the logic app has a system-assigned identity enabled from its identity settings.

Figure 6: A logic app’s identity settings in the Azure portal.

To grant the logic app reader access to the Azure Management API, go to the management groups and open the “Tenant Root Group”.

Figure 7: The management groups in the Azure portal.

Within the “Tenant Root Group”, open the access control (IAM) settings and click “Add” to add a new access.

Figure 8: The tenant root group’s access control (IAM) in the Azure portal.

From the available roles, select the “Reader” role which will grant your logic app permissions to read the list of subscriptions.

Figure 9: A role assignment’s role selection in the Azure portal.

Once the role is selected, assign it to the logic app’s managed identity.

Figure 10: A role assignment’s member selection in the Azure portal.

When the logic app’s managed identity is selected, feel free to document the role assignment’s purpose and press “Review + assign”.

Figure 11: A role assignment’s member selection overview in the Azure portal.

With the role assignment performed, we can move back to the logic app and start building the logic to collect the subscriptions. From the logic app’s designer, select a “Recurrence” trigger which will trigger the collection at a set interval.

Figure 12: An empty logic app’s designer tool in the Azure portal.

While the original Microsoft Tech Community blog post had an hourly recurrence, we recommend lowering that value (e.g. 5 minutes or less, the fastest interval for alerting) given we observed such subscriptions being rapidly abused.

Figure 13: A recurrence trigger in a logic app’s designer tool.

With the trigger defined, click the “New step” button to add an operation. To recover the list of subscriptions search for, and select, the “Azure Resource Manager List Subscriptions” action.

Figure 14: Searching for the Azure Resource Manager in a logic app’s designer tool.

Select your tenant and proceed to click “Connect with managed identity” to have the authentication leverage the previously assigned role.

Figure 15: The Azure Resource Manager’s tenant selection in a logic app’s designer tool.

Proceed by naming your connection (e.g.: “List subscriptions”) and validate the managed identity is the system-assigned one. Once done, press the “Create” button.

Figure 16: The Azure Resource Manager’s configuration in a logic app’s designer tool.

With the subscriptions recovered, we can add another operation to send them into a log analytics workspace. To do so, search for, and select, the “Azure Log Analytics Data Collector Send Data” operation.

Figure 17: Searching for the Log Analytics Data Collector in a logic app’s designer tool.

Setting up the “Send Data” action requires the target Log Analytics’ workspace ID and primary key. These can be found in the Log Analytics workspace’s agents management settings.

Figure 18: A log analytics workspace’s agent management in the Azure portal.
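
These values can also be retrieved programmatically; a sketch assuming the Az.OperationalInsights module and hypothetical resource names (rg-sentinel, law-sentinel):

# Hypothetical resource group and workspace names; replace with your own.
$Workspace = Get-AzOperationalInsightsWorkspace -ResourceGroupName 'rg-sentinel' -Name 'law-sentinel'
$Workspace.CustomerId    # The workspace ID.
(Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName 'rg-sentinel' -Name 'law-sentinel').PrimarySharedKey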

In the logic app designer, name the Azure Log Analytics Data Collector connection (e.g.: “Send data”) and provide the target Log Analytics’ workspace ID and primary key. Once done, press the “Create” button.

Figure 19: The Log Analytics Data Collector’s configuration in a logic app’s designer tool.

We can then select the JSON body to send. As we intend to store the individual subscriptions, look for the “Item” dynamic content which will contain each subscription’s information.

Figure 20: The Log Analytics Data Collector’s JSON body selection in a logic app’s designer tool.

Upon selecting the “Item” content, a loop will automatically encapsulate the “Send Data” operation to cover each subscription. All that remains to be done is to name the custom log, which we’ll name “SubscriptionInventory”.

Figure 21: The encapsulation of the Log Analytics Data Connector in a for-each loop as seen in a logic app’s designer tool.

Once this last step is configured, the logic app is ready and can be saved. After a few minutes, the new custom SubscriptionInventory_CL table will get populated.
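
To confirm data is flowing in, the custom table can be queried from PowerShell as well; a minimal check assuming the Az.OperationalInsights module and the $Workspace variable from the earlier sketch:

# Query the last ingested subscription records from the custom table.
$Results = Invoke-AzOperationalInsightsQuery -WorkspaceId $Workspace.CustomerId -Query 'SubscriptionInventory_CL | take 10'
$Results.Results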

Alerting on New Subscriptions

While collecting the logs was the hard part, the last remaining step is to create an analytics rule to flag new subscriptions. As an example, the following KQL query identifies new subscriptions and is intended to run every 5 minutes.

let schedule = 5m;
SubscriptionInventory_CL
| summarize arg_min(TimeGenerated, *) by SubscriptionId
| where TimeGenerated > ago(schedule)

A slightly more elaborate query variant can take baselining and delays into account; it is available either packaged within the complete ARM (Azure Resource Manager) template or as a standalone rule template.

Once the rule is deployed, new subscriptions will result in incidents being created as shown below. These incidents provide much-needed signals to identify potentially rogue subscriptions prior to their abuse.

Figure 22: A custom “Unfamiliar Azure subscription creation” incident in Azure Sentinel.

To empower your security team to investigate such events, we do recommend granting them Reader rights on the “Tenant Root Group” management group to ensure these rights are inherited by new subscriptions.
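
Such a role assignment can be scripted as well; a sketch assuming the Az.Resources module, with the security team’s group object ID and the tenant ID left as placeholders:

# The Tenant Root Group's ID equals the Azure AD tenant ID.
New-AzRoleAssignment -ObjectId '<security-team-object-id>' -RoleDefinitionName 'Reader' `
    -Scope '/providers/Microsoft.Management/managementGroups/<tenant-id>'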

Hardening an Azure Tenant

While logging and alerting are great, preventing an issue from taking place is always preferable. This section provides some hardening options that Azure administrators might want to consider.

Restricting Subscription Creation

Azure users are by default authorized to sign up for a cloud service and have an identity automatically created for them, a process called self-service sign-up. As we saw throughout this blog post, this opens an avenue for free trials to be abused. This setting can however be controlled by an administrator through the Set-MsolCompanySettings cmdlet’s AllowAdHocSubscriptions parameter.

AllowAdHocSubscriptions controls the ability for users to perform self-service sign-up. If you set that parameter to $false, no user can perform self-service sign-up.

docs.microsoft.com

As such, Azure administrators can, after careful consideration, prevent users from signing up for services (incl. free trials) through the following MSOnline PowerShell command:

Set-MsolCompanySettings -AllowAdHocSubscriptions $false
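
To review the current value beforehand, the company information can be queried; a quick check assuming the MSOnline module and an authenticated session (Connect-MsolService):

# Returns $true when self-service sign-up is still allowed.
(Get-MsolCompanyInformation).AllowAdHocSubscriptions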

Restricting Management Group Creation

Another Azure component regular users should not usually interact with is management groups. As stated previously, management groups provide centralized management for access, policies or compliance and act as a layer above subscriptions.

By default any Azure AD security principal has the ability to create new management groups. This setting can however be hardened in the management groups’ settings to require the Microsoft.Management/managementGroups/write permissions on the root management group.

Figure 23: The management groups settings in the Azure portal.
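
The same hierarchy setting can also be configured through the management groups REST API; a hedged sketch where the default settings path, API version and requireAuthorizationForGroupCreation property are assumptions best validated against the current API reference:

# Require Microsoft.Management/managementGroups/write on the root group before new groups can be created.
Invoke-AzRestMethod -Method PUT `
    -Path '/providers/Microsoft.Management/managementGroups/<tenant-id>/settings/default?api-version=2020-05-01' `
    -Payload '{"properties": {"requireAuthorizationForGroupCreation": true}}'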

Restricting Subscriptions from Switching Azure AD Directories

One final avenue of exploitation, which we haven’t seen being abused so far, is the transfer of subscriptions into or out of your Azure Active Directory environment. As transferring subscriptions poses a governance challenge, the subscriptions’ policy management portal offers two policies capable of prohibiting such transfers.

We highly encourage Azure administrators to consider enforcing these policies.

Figure 24: The subscriptions’ policies in the Azure portal.

Conclusions

In this blog post we saw how Azure’s default of allowing anyone to create subscriptions poses a governance risk. This weak configuration is actively being leveraged by attackers gaining access to compromised accounts.

We revisited a solution initially published on Microsoft’s Tech Community and proposed slight improvements to it alongside a ready-to-deploy ARM template.

Finally, we listed some recommendations to harden these weak defaults to ensure administrative-like actions are restricted from regular users.

