
Finding hooks with windbg

5 August 2022 at 15:06

In this blogpost we are going to look into hooks, how to find them, and how to restore the original functions.

I’ve developed the methods discussed here myself and they have proven useful to me. I was assigned to evaluate the security and the inner workings of a specific application control solution. I needed a practical and easy approach, without too much coding, preferably using windbg. For that I wanted to be able to:

  1. Detect the DLL which performs hooking
  2. Detect all the hooks that it sets up
  3. Restore all the previous instructions (before the hook)

What are hooks?

As hooks are what we are looking for, let’s briefly talk about what hooks actually are and what they look like.

Specifically we will cover MS Detours.

Basically, hooking allows you to execute your own code when the target function is called. It was originally developed to extend the functionality of functions in closed-source software. When your code is called by the hooked function it’s up to you what you want to do next. You can for example inspect the arguments and, based on that, resume the execution of the original target function if you wish.

To better illustrate what a hook looks like, I’m going to use the picture from the “Detours: Binary Interception of Win32 Functions” document.

MS Detours hook

The picture above shows trampoline and target functions, before and after insertion of the detour (left and right).

Of course, in order for this to be useful the trampoline function would normally end up calling your custom code before resuming the target function. For us, one important thing to notice is the jump instruction at the beginning of the target function. If it’s there, this is a good indicator that the function is hooked.

As we can see, a jump instruction is used to hook a target function and replace the first 4 instructions of the original target function. This results in the target function jumping to a trampoline function and the trampoline function executing the original 4 instructions that were replaced. Then, a jump instruction is used again in the trampoline function to resume the execution of the target function after the jump instruction (TargetFunction+5).
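A quick way to check for such a jump yourself, besides using windbg, is to read the first bytes of an exported API in your own process. The following is a minimal Python sketch (Windows only, and only checking for the relative jmp opcode 0xE9; other jump encodings exist):

import ctypes

# Resolve the address of CreateProcessW in the current process and read its
# first bytes. A leading 0xE9 (relative jmp) is a common sign of an inline,
# Detours-style hook.
kernel32 = ctypes.WinDLL("kernel32")
address = ctypes.cast(kernel32.CreateProcessW, ctypes.c_void_p).value
prologue = ctypes.string_at(address, 5)

if prologue[0] == 0xE9:
    print("CreateProcessW starts with a jmp - likely hooked")
else:
    print("CreateProcessW prologue:", prologue.hex())

On an unhooked system you would see the normal prologue bytes instead; later in this post, for example, the unhooked CreateProcessWStub starts with 4c 8b dc 48 83 ec 58.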

If you’re interested in the official documentation you can find it here and here.

The setup

To better demonstrate the concept, I’ve created a few simple programs.

  • begin.exe – Calls CreateProcess API to start myapp.exe.
  • myapp.exe – Simple program that shows a message box.
  • AChook.dll – Application Control hooking DLL. Simple DLL that forbids any execution of CreateProcessA and CreateProcessW APIs.

First, let’s show these programs in action. Let’s run begin.exe:

begin.exe starts and shows a dialogue that halts execution.

It shows a message box asking to inject a DLL. This halts execution until the “OK” button is clicked and allows us to take our time injecting a DLL if we want to.

myapp.exe is started by begin.exe.

Then it launches myapp.exe, which just shows another message box asking if you want to launch a virus. Of course myapp.exe is not a virus and just exits after showing the message box (no matter if the user clicks on “Yes” or “No”).

Now let’s run begin.exe again but this time let’s inject the AChook.dll into it while the message box is shown.

begin.exe waiting for user interaction.

We use “Process Hacker” to inject AChook.dll.

Using Process Hacker to inject our DLL into begin.exe.

AChook.dll also prints some additional messages to the console output:

AChook.dll is injected into begin.exe.

When we now click on the OK button, myapp.exe does not run anymore and thus the virus message box is no longer shown. Instead, additional messages are printed to the console by AChook.dll.

AChook.dll’s hook prevented execution of myapp.exe.

IDENTIFYING THE HOOKING DLL

First we need to identify which DLL is the one that sets the hooks.

To list the loaded DLLs of a running process we use “Process Explorer”.

We select the process begin.exe and press [Ctrl]+[D]:

DLLs loaded by begin.exe in Process Explorer.

Now we can look for any DLL that looks suspicious. In this case it’s easy because the DLL has the phrase “hook” in its name, which is something that certainly stands out!

A different way to identify the hooking DLL is to compare the list of loaded DLLs with and without the security solution active. To simulate this we run begin.exe twice – once with and once without AChook.dll injected. To list the DLLs as text we can use “listdlls”:

Output of listdlls against the begin.exe process.

First we need to identify which DLL was injected into a process. We start by running listdlls against the just started begin.exe process and saving the output:

listdlls begin.exe > before

Then we inject AChook.dll using Process Hacker and save listdlls’s output again:

listdlls begin.exe > after

Next, we compare those two output files using “gvim” (of course any other text editor can be used).

Using gvim to compare both outputs.

As we can see below, a new DLL AChook.dll was added:

Diff of both lists of loaded DLLs in the begin.exe process.

Alright. So far we determined that a DLL was injected into the process. At this point we could search for the DLL on disk to see if it belongs to our target security solution. In our case we created it ourselves though, so we’re not going to do that.

The DLL is suspicious because its name contains the phrase “hook”. However we want to gain more confidence that it really hooks anything.

When you are examining a security solution it’s always a good idea to read its documentation. The product that I was analysing had specifically mentioned that it uses MS Detours hooks to function. However, it did not mention anything regarding the application control implemented in kernel space and also did not mention which DLL it used for hooking.

Unfortunately there is no single (special) Windows API that would do the hooking. Instead, MS Detours uses multiple APIs to do its job. I wanted to find a rare API or a sequence of APIs that I could use as some sort of signature. I found one API that is quite special and rarely used (unless you want to do hooking): “FlushInstructionCache”.

https://docs.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-flushinstructioncache

As the documentation says:

“Applications should call FlushInstructionCache if they generate or modify code in memory. The CPU cannot detect the change, and may execute the old code it cached.”

So if the MS Detours code wants its new jump instruction to be executed, it needs to call the FlushInstructionCache API. In summary, what MS Detours needs to do when installing a hook (a rough code sketch of this sequence follows the list below) is to:

  • Allocate memory for the trampoline function;
  • Change the access of the code of the target function to make it writable;
  • Copy the instructions from the beginning of the target function (the ones that it’s going to replace) to previously allocated space; and make changes there so that the trampoline function ends up executing your code;
  • Replace the beginning of the target function with a jump instruction to trampoline function;
  • Change the access of the code of the target function back to the original access;
  • Flush the instruction cache.
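As a rough illustration of this sequence (not the actual Detours implementation, which is written in C/C++, and with the prologue overwrite deliberately left out), the Windows API calls involved can be sketched in Python with ctypes:

import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

PAGE_EXECUTE_READWRITE = 0x40
target = ctypes.cast(kernel32.CreateProcessW, ctypes.c_void_p).value
old_protect = wintypes.DWORD(0)

# 1. Make the first bytes of the target function writable.
kernel32.VirtualProtect(ctypes.c_void_p(target), ctypes.c_size_t(5),
                        PAGE_EXECUTE_READWRITE, ctypes.byref(old_protect))

# 2. A real hook would now copy the original prologue into the trampoline and
#    overwrite it with "E9 <rel32>" pointing there (omitted in this sketch).

# 3. Restore the original protection and flush the instruction cache so the
#    CPU does not keep executing stale, cached instructions.
kernel32.VirtualProtect(ctypes.c_void_p(target), ctypes.c_size_t(5),
                        old_protect.value, ctypes.byref(old_protect))
kernel32.FlushInstructionCache(ctypes.c_void_p(kernel32.GetCurrentProcess()), None, 0)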

You can find the FlushInstructionCache function in the imports of AChook.dll as can be seen in IDA:

IDA displaying the PE imports of AChook.dll.

Or you can use “dumpbin” to do the same:

Finding the FlushInstructionCache PE import in AChook.dll using dumpbin.

At this point we have a very suspicious DLL and we want to determine which APIs it hooks and also restore them.

IDENTIFYING HOOKS AND RESTORING THEM

Since I had been experimenting with dynamic binary instrumentation tools before, I knew that it is also possible to detect hooks by using Intel’s Pintools. It requires you to write a small program, however. I won’t go into detail here; maybe this is a topic for another blogpost.

But in short, Pintools enables you to split the code into blocks, something very similar to what IDA does. It also enables you to determine which binary or DLL a code block belongs to.

Remember that MS Detours installs a jmp instruction at the beginning of the target API which jumps to a newly allocated memory region. So if, at the beginning of the API, a code block is executed that does not belong to the API’s DLL, then this API is hooked. The drawback of this approach is that the hooked API needs to run in order to be detected. It also does not allow you to retrieve the original bytes of the hooked API for restoration.

More information about Pintools can be found here.

Let’s discuss something much simpler and more effective instead. Remember that MS Detours first changes the memory to be writable and then changes it back; let’s use that to our advantage.

We will use windbg for this. What we need to do is to:

  1. Start begin.exe
  2. Attach windbg to the begin.exe process.
  3. Set a breakpoint on loading of AChook.dll (sxe ld AChook.dll)
  4. Continue execution of begin.exe process (g)
  5. Inject AChook.dll into begin.exe process (Process Hacker)
  6. The breakpoint will hit.
  7. Set new breakpoint on VirtualProtect with a custom command to print first 5 instructions and continue execution. (bp KERNELBASE!VirtualProtect “u rcx L5;g” )
  8. Set output log file and continue execution (.logopen C:\BLOGPOST\OUTPUT.log ; g)
  9. The debugger will start hitting and continuing the breakpoints. After the output stops moving, click the pause button in the debugger.
  10. Don’t click on the OK button of the message box. Close the log file (.logclose). Collect and inspect the data in the log file and remove the few – if any – false positives.

The whole process might look like this:

Debugging the begin.exe process in windbg.

The output above shows that when the breakpoints on CreateProcessWStub and CreateProcessAStub are hit for the first time, the functions are not hooked yet: they don’t contain the jmp instruction at the beginning. However, the second time they are hit we can see a jmp instruction at the beginning, thus we can conclude that they are hooked.

From this output we know that CreateProcessW and CreateProcessA were hooked. It also gives us the original bytes so we could restore the original functions if we wanted to.

RESTORING ORIGINAL FUNCTIONS

Using the above output of windbg, we can restore the original functions with the following windbg commands:

eb KERNEL32!CreateProcessWStub 4c 8b dc 48 83 ec 58
eb KERNEL32!CreateProcessAStub 4c 8b dc 48 83 ec 58

The steps are easier this time:

  1. Run begin.exe
  2. Inject AChook.dll into it (using Process Hacker)
  3. Attach windbg to the begin.exe process
  4. Run the commands mentioned above and continue execution (eb … ; g)
  5. Click on the “OK” button of the message box to launch myapp.exe

And – voilà! – here is the result:

myapp.exe executed by begin.exe after restoring hooked functions.

CONCLUSION

In this blogpost we have discussed what hooks are, how to identify a DLL that does the hooking, how to identify the hooks that it sets and also how to restore the original functions once the hooking DLL was loaded. Of course, a proper security solution uses controls in kernel space to do application control, so it’s not possible for the application to just restore the original functions. Although there could be implementation mistakes there as well, that is a story for another time.

I hope you enjoyed.

About the author

Oliver is a cyber security expert at NVISO. He has almost a decade and a half of IT experience, half of which is in cyber security. Throughout his career he has obtained many useful skills and also certificates. He’s constantly exploring and looking for more knowledge. You can find Oliver on LinkedIn.

Analysis of a trojanized jQuery script: GootLoader unleashed

20 July 2022 at 08:00

In this blog post, we will perform a deep analysis of GootLoader, malware which is known to deliver several types of payloads, such as the Kronos trojan, REvil, IcedID and GootKit, and in this case Cobalt Strike.

In our analysis we’ll be using the initial malware sample itself together with some malware artifacts from the system it was executed on. The malicious JavaScript code is hiding within a jQuery JavaScript Library, contains about 287 KB of data and consists of almost 11,000 lines of code. We’ll do a step-by-step analysis of the malicious JavaScript file.

TLDR techniques we used to analyze this GootLoader script:

  1. Stage 1: A legitimate jQuery JavaScript script is used to hide a trojan downloader:
    Several new functions were added to the original jQuery script. Analyzing these functions would show a blob of obfuscated data and functions to deobfuscate this blob.
  2. The algorithm used for deobfuscating this blob (trojan downloader):
    1. For each character in the obfuscated data, assess whether it is at an even or uneven position (index starting at 0)
    2. If uneven, put it in front of an accumulator string
    3. If even, put it at the back of the accumulator string
    4. The result is more JavaScript code
  3. Attempt to download the (obfuscated) payload from one of three URLs listed in the resulting JavaScript code.
    1. This failed because the payload was no longer being served; we resorted to an educated guess and searched VirusTotal with the “content” filter for the “CreateObject” string in its obfuscated form, which resulted in a few hits.
  4. Stage 2: Decode the obfuscated payload
    1. Take 2 digits
    2. Convert these 2 decimal digits to an integer
    3. Add 30
    4. Convert to ASCII
    5. Repeat till the end
    6. The result is a combination of JavaScript and PowerShell
  5. Extract the JavaScript, PowerShell loader and PowerShell persistence scripts, and analyze them to extract the obfuscated .NET loader embedded in the payload
  6. Stage 3: Analyze the .NET loader to deobfuscate the Cobalt Strike DLL
  7. Stage 4: Extract the config from the Cobalt Strike DLL

Stage 1 – sample_supplier_quality_agreement 33187.js

Filename: sample_supplier_quality_agreement 33187.js
MD5: dbe5d97fcc40e4117a73ae11d7f783bf
SHA256: 6a772bd3b54198973ad79bb364d90159c6f361852febe95e7cd45b53a51c00cb
File Size: 287 KB

To find the trojan downloader inside this JavaScript file, the following grep command was executed:

grep -P "^[a-zA-Z0-9]+\("
Fig 1. The function “hundred71(3565)” looks out of place here

This grep command will find entry points that are calling a JavaScript function outside any function definition, thus without indentation (leading whitespace). This is a convention that many developers follow, but it is not a guarantee to quickly find the entry point. In this case, the function call hundred71(3565) looks out of place in a mature JavaScript library like jQuery.

When tracing the different calls through a lot of obfuscated code, the function “color1” is observed. Another way to figure out what was changed in the script could be to compare it to the legitimate version[1] of the script and “diff” them to see the differences. The legitimate script was pulled from the jQuery website itself, based on the version displayed at the beginning of the malicious script.

Fig 2. The version of the jQuery JavaScript Library displayed here was used to fetch the original

Before starting a full diff on the entire jQuery file, we first extracted the function names with the following grep command:

grep 'function [0-9a-zA-Z]'

This was done for both the legitimate jQuery file and the malicious one and allows us to quickly see which additional functions were added by the malware creator. Comparing these two files immediately shows some interesting function names and parameters:

Fig 3. Many functions were added by the malware author as seen in this screenshot

A diff on both files, not only focusing on the function names, gave us all the code added by the malware author.
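The function-name comparison itself is easy to script. The following is a minimal Python sketch, assuming the trojanized sample and a clean copy of jquery-3.6.0.js (pulled from the jQuery website, see [1]) sit in the working directory:

import re

def function_names(path: str) -> set:
    # Return the set of named function declarations in a JavaScript file.
    with open(path, encoding="utf-8", errors="ignore") as handle:
        return set(re.findall(r"function ([0-9a-zA-Z]+)", handle.read()))

# Functions present in the malicious copy but not in the legitimate one.
added = function_names("sample_supplier_quality_agreement 33187.js") \
        - function_names("jquery-3.6.0.js")
print(sorted(added))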

Color1 is one of the added functions containing most of the data, seemingly obfuscated, which could indicate this is the most relevant function.

Fig 4. Out of all the added functions, “color1()” contains the most amount of data

The has6 variable is of interest in this function, as it combines all the previously defined variables into one.

Further tracing of the functions eventually leads to the main functions that are responsible for deobfuscating this data: “modern00” and “gun6”.

Fig 5. Function modern00, responsible for part of the deobfuscation algorithm
Fig 6. Function gun6, responsible for the modulo part of the deobfuscation algorithm

The deobfuscation algorithm is straightforward:

For each character in the obfuscated string (starting with the first character), add this character to an accumulator string (initially empty). If the character is at an uneven position (index starting from 0), put it in front of the accumulator, otherwise put it at the back. When all characters have been processed, the accumulator will contain the deobfuscated string.

The script used to implement the algorithm would look similar to the following written in Python:

Fig 7. Proof of concept Python script to display how the algorithm functions
Fig 8. Running the deobfuscation script displays readable code
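As the original proof-of-concept script is only available as a screenshot, a re-implementation of the deobfuscation routine could look like the following minimal Python sketch:

def deobfuscate(data: str) -> str:
    # Characters at uneven positions (0-based) go to the front of the
    # accumulator, characters at even positions go to the back.
    accumulator = ""
    for index, character in enumerate(data):
        if index % 2 == 1:
            accumulator = character + accumulator
        else:
            accumulator = accumulator + character
    return accumulator

# Contrived example (not taken from the sample): "lelHo" deobfuscates to "Hello".
print(deobfuscate("lelHo"))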

CreateObject, observed in the deobfuscated script, is used to create a script execution object (WScript.Shell) that is then passed the script to execute. This script (highlighted in white) is also obfuscated with JavaScript obfuscation and the same script obfuscation that was observed in the first script.

Deobfuscating that script yields a second JavaScript script. The following is the second script, with deobfuscated strings and code, and pretty-printed:

Fig 9. Pretty printed deobfuscated code

This script is a downloader script, attempting to initiate a download from 3 domains.

  • www[.]labbunnies[.]eu
  • www[.]lenovob2bportal[.]com
  • www[.]lakelandartassociation[.]org

The HTTPS requests have a random component and can convey a small piece of information: if the request ends with “4173581”, then the request originates from a Windows machine that is a domain member (the script determines this by checking for the presence of environment variable %USERDNSDOMAIN%).

The following is an example of a URL:
hxxps://www[.]labbunnies[.]eu/test[.]php?cmqqvfpugxfsfhz=71941221366466524173581

If the download fails (i.e., HTTP status code different from 200), the script sleeps for 12 seconds (12345 milliseconds to be precise) before trying the next domain. When the download succeeds, the next stage is decoded and executed as (another) JavaScript script. Different methods were attempted to download the payload (with varying URLs), but all methods were unsuccessful. Most of the time a TCP/TLS connection couldn’t be established to the server. The times an HTTP reply was received, the body was empty (content-length 0). Although we couldn’t download the payload from the malicious servers, we were able to retrieve it from VirusTotal.

Stage 2 – Payload

We were able to find a payload that we believe, with high confidence, to be the original stage 2 that was served to the infected machine; more information on how this was determined can be found in the following sections. The payload, originally uploaded from Germany, can be found here: https://www.virustotal.com/gui/file/f8857afd249818613161b3642f22c77712cc29f30a6993ab68351af05ae14c0f

MD5: ae8e4c816e004263d4b1211297f8ba67
SHA-256: f8857afd249818613161b3642f22c77712cc29f30a6993ab68351af05ae14c0f
File Size: 1012.97 KB

The payload consists of digits. To decode it, take 2 digits, add 30, convert to an ASCII character, and repeat this till the end of the payload. This deobfuscation algorithm was deduced from the previous script, in the last step:

Fig 10. Stage 2 acquired from VirusTotal
Fig 11. Deobfuscation algorithm for stage 2

As an example, we’ll decode the first characters of the string in detail: 88678402

  1. 88 –> 88 + 30 = 118
Fig 12. ASCII value 118 equals the letter v
  2. 67 –> 67 + 30 = 97
Fig 13. ASCII value 97 equals the letter a
  3. 84 –> 84 + 30 = 114
Fig 14. ASCII value 114 equals the letter r
  4. 02 –> 02 + 30 = 32
Fig 15. ASCII value 32 equals the symbol “space”

This results in: “var “, which indicates the declaration of a variable in JavaScript. This means we have yet another JavaScript script to analyze.
To decode the entire string a bit faster we can use a small Python script, which will automate the process for us:

Fig 16. Proof of concept Python script to display how the algorithm functions
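Since that proof-of-concept script is also only shown as a screenshot, a minimal Python sketch of the stage 2 decoder could look as follows:

def decode_stage2(digits: str) -> str:
    # Every pair of digits, plus 30, is the ASCII code of one character.
    return "".join(
        chr(int(digits[index:index + 2]) + 30)
        for index in range(0, len(digits) - 1, 2)
    )

# The worked example from above: "88678402" decodes to "var ".
print(decode_stage2("88678402"))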

First half of the decoded string:

Fig 17. Output of the deobfuscation script, showing the first part

Second half of the decoded string:

Fig 18. Output of the deobfuscation script, showing the second part

The same can be done with the following CyberChef recipe. It will take some time due to the amount of data, but we saw it as a small challenge to achieve the same result with CyberChef.

#recipe=Regular_expression('User%20defined','..',true,true,false,false,false,false,'List%20matches')Find_/_Replace(%7B'option':'Regex','string':'%5C%5Cn'%7D,'%2030,',true,false,true,false)Fork(',','',false)Sum('Space')From_Charcode('Comma',10)
Fig 19. The CyberChef recipe in action

The decoded payload results in another JavaScript script.
MD5: a8b63471215d375081ea37053b52dfc4
SHA256: 12c0067a15a0e73950f68666dafddf8a555480c5a51fd50c6c3947f924ec2fb4
File size: 507 KB

The JavaScript script contains code that inserts an encoded PE file (unmanaged code) and an encoded assembly as values under the registry key “HKEY_CURRENT_USER\SOFTWARE\Microsoft\Phone”, and then launches 2 PowerShell scripts. These 2 PowerShell scripts are fileless, and thus have no filename. For referencing in this document, the PowerShell scripts are named as follows:

  1. powershell_loader: this PowerShell script is a loader to execute the PE file injected into the registry
  2. powershell_persistence: this PowerShell script creates a scheduled task to execute the loader PowerShell script (powershell_loader) at boot time.

Fig 20. Deobfuscated & pretty-printed JavaScript script found in the decoded payload

A custom script was utilized to decode this payload as a whole and extract all separate elements from it (based on the reverse engineering of the script itself). The following is the output of the custom script:

Fig 21. Output of the custom script parsing all the components from the deobfuscated payload

All the artifacts extracted with this script match exactly the artifacts recovered from the infected machine. They can be verified against the fileless artifacts extracted from the Defender logs, with matching cryptographic hashes:

  • Stage 2 SHA256 Script: 12c0067a15a0e73950f68666dafddf8a555480c5a51fd50c6c3947f924ec2fb4
  • Stage 2 SHA256 Persistence PowerShell script (powershell_persistence): 48e94b62cce8a8ce631c831c279dc57ecc53c8436b00e70495d8cc69b6d9d097
  • Stage 2 SHA256 PowerShell script (powershell_loader) contained in Persistence PowerShell script: c8a3ce2362e93c7c7dc13597eb44402a5d9f5757ce36ddabac8a2f38af9b3f4c
  • Stage 3 SHA256 Assembly: f1b33735dfd1007ce9174fdb0ba17bd4a36eee45fadcda49c71d7e86e3d4a434
  • Stage 4 SHA256 DLL: 63bf85c27e048cf7f243177531b9f4b1a3cb679a41a6cc8964d6d195d869093e

Based on this information, it can be concluded, with high confidence, that the payload found on VirusTotal is identical to the one downloaded by the infected machine: all hashes match with the artifacts from the infected machine.

In addition to the evidence these matching hashes bring, the stage 2 payload file also ends with the following string (this is not part of the encoded script): @[email protected] This is the random part of the URL used to request this payload. Notice that it ends with 4173581, the unique number for domain joined machines found in the trojanized jQuery script.

Payload retrieval from VirusTotal

Although VirusTotal has reports for several URLs used by this malicious script, none of the reports contained a link to the actual downloaded content. However, using the following query: content:”378471678671496876716986”, the downloaded content (payload) was found on VirusTotal; this string of digits corresponds to the encoding of the string “CreateObject” (see Fig. 20).

In order to attempt the retrieval of the downloaded content, an educated guess was made that the downloaded payload would contain calls to the function CreateObject, because such function calls are also present in the trojanized jQuery script. There are countless files on VirusTotal that contain the string “CreateObject”, but in this particular case, it is encoded with an encoding specific to GootLoader. Each letter of the string “CreateObject” is encoded as its numerical representation (ASCII code) minus 30. This returns the string “378471678671496876716986”.
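Deriving such a VirusTotal content search string takes only a few lines; a small Python sketch of the encoding described above:

def encode_for_search(text: str) -> str:
    # Reverse of the stage 2 decoding: ASCII code minus 30, two digits per character.
    return "".join(f"{ord(character) - 30:02d}" for character in text)

print(encode_for_search("CreateObject"))   # -> 378471678671496876716986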

Stage 3 – .NET Loader

MD5 Assembly: d401dc350aff1e3fd4cc483238208b43
SHA256 Assembly: f1b33735dfd1007ce9174fdb0ba17bd4a36eee45fadcda49c71d7e86e3d4a434
File Size: 13.50 KB

This .NET loader is fileless and thus has no filename.

The PowerShell loader script (powershell_loader)

  1. extracts the .NET Loader from the registry
  2. decodes it
  3. dynamically loads & executes it (i.e., it is not written to disk).

The .NET Loader is encoded in hexadecimal and stored inside the registry. It is slightly obfuscated: character # has to be replaced with 1000.
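As a small sketch of that decoding step (the registry value below is hypothetical, not taken from the sample):

# Hypothetical hex blob read from HKCU\SOFTWARE\Microsoft\Phone; the real value
# is much longer. Expand the '#' placeholder to '1000', then hex-decode.
registry_value = "4d5a#"
loader_bytes = bytes.fromhex(registry_value.replace("#", "1000"))
print(loader_bytes)   # -> b'MZ\x10\x00'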

The .NET loader:

  1. extracts the DLL (stage 4) from the registry
  2. decodes it
  3. dynamically loads & executes it (i.e., it is not written to disk).

The DLL is encoded in hexadecimal, but with an alternative character set. This is translated to regular hexadecimal via the following table:

Fig 22. “Test” function that decodes the DLL by using replace operations

This Test function decodes the DLL and executes it in memory. Note that without the .NET loader, statistical analysis could reveal the DLL as well. A blog post[2] written by our colleague Didier Stevens on how to decode a payload by performing statistical analysis offers some insight into how this could be done.

Stage 4 – Cobalt Strike DLL

MD5 DLL: 92a271eb76a0db06c94688940bc4442b
SHA256 DLL: 63bf85c27e048cf7f243177531b9f4b1a3cb679a41a6cc8964d6d195d869093e

This is a typical Cobalt Strike beacon and has the following configuration (extracted with 1768.py)

Fig 23. 1768.py by Didier Stevens used to detect and parse the Cobalt Strike beacon

Now that Cobalt Strike is loaded as final part of the infection chain, the attacker has control over the infected machine and can start his reconnaissance from this machine or make use of the post-exploitation functionality in Cobalt Strike, e.g. download/upload files, log keystrokes, take screenshots, …

Conclusion

The analysis of the trojanized jQuery JavaScript confirms the initial analysis of the artifacts collected from the infected machine and confirms that the trojanized jQuery contains malicious obfuscated code to download a payload from the Internet. This payload is designed to filelessly, and with boot-persistence, instantiate a Cobalt Strike beacon.

About the authors

Didier Stevens is a malware expert working for NVISO. Didier is a SANS Internet Storm Center senior handler and Microsoft MVP, and has developed numerous popular tools to assist with malware analysis. You can find Didier on Twitter and LinkedIn.
Sasja Reynaert is a forensic analyst working for NVISO. Sasja is a GIAC Certified Incident Handler, Forensics Examiner & Analyst (GCIH, GCFE, GCFA). You can find Sasja on LinkedIn.

You can follow NVISO Labs on Twitter to stay up to date on all our future research and publications.


[1]:https://code.jquery.com/jquery-3.6.0.js
[2]:https://blog.didierstevens.com/2022/06/20/another-exercise-in-encoding-reversing/

Investigating an engineering workstation – Part 4

6 July 2022 at 08:00

Finally, as the last part of the blog series we will have a look at the network traffic observed. We will do this in two sections: the first one will cover a few things that are useful to know when Wireshark can dissect the traffic for us; the second section will look into the situation where the dissection is not nicely done by Wireshark.

Nicely dissected traffic

We start by looking into a normal connection setup on Port 102 TCP. Frames 2 to 4 shown in figure 1 represent the standard three-way handshake to establish the TCP connection. If this is done successfully, the COTP (Connection-Oriented Transport Protocol) session is established by sending a connection request (frame 5) and a confirmation of the connection (frame 6). Based on this, the S7 communication is set up, as shown in frames 7 and 8.

Figure 1: Connection setup shown in Wireshark

Zooming into frame number 5, we can see how Wireshark dissects the traffic and provides us with the information that we are dealing with a connection request.

Figure 2: Details of frame number 5

In order to use Wireshark or tshark to filter for these frames we can apply the following display filters:

  • cotp.type == 0x0e # filter for connection requests
  • cotp.type == 0x0d # filter for connection confirmation

Looking into the S7 frames, in this case frame number 7, we can see the communication setup function code sent to the PLC.

Figure 3: Communication setup function code

Apart from the function code for the communication setup (“0xf0”), we can also learn something very important here. The field “Protocol Id” contains the value “0x32”; this is the identifier of the S7 protocol and, in our experience, this protocol id has a significant impact on whether Wireshark can dissect the traffic, as shown above, or not.

With the example of requesting the download of a block (containing logic or variables etc.) we will have a look at how jobs are sent to the PLC. By the way, the communication setup is already a job sent to the PLC. To keep the screenshot as clean as possible, the traffic is filtered to only show traffic identified as S7 protocol.

Figure 4: Download a block to a PLC

The IP-address ending with .40 is the PLC and the IP-address ending with .10 is the source of the commands and data. Indicated by the blue background, frames 43 to 57 represent the download of a block to the PLC. Frames 43 and 44 initialize the download of the block, in this case a block called “DB1”. We can see that the .10 host is sending a “Job” to the PLC (.40) in frame 43, the PLC acknowledges this job in the next frame (number 44) and starts to download the block in frame 46. So, in essence, the PLC is instructed to actively request (download) the block. The block is not pushed to the PLC. This also explains why the term “download to the PLC” is used when a project is transferred from an engineering workstation to a PLC. The download of the block ends with frames 55 and 56, where the corresponding function code is transmitted and acknowledged.

A few handy display filters for Wireshark or tshark:

  • s7comm.header.rosctr == 1 # filter for jobs being requested to be performed
  • s7comm.header.rosctr == 3 # acknowledge of requested jobs
  • s7comm.param.func == 0x1a # downloads being requested/acknowledged
  • s7comm.param.func == 0x1b # download of blocks
  • s7comm.param.func == 0x05 # write a variable
  • s7comm.param.func == 0x04 # read a variable
  • s7comm.param.func == 0xf0 # communication setup, also shown above

Regarding the download of blocks (s7comm.param.func == 0x1b), the actual data is contained in the acknowledge frames sent (s7comm.header.rosctr == 3).
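The same display filters can be reused when working on a capture programmatically, for example with pyshark (a minimal sketch; it assumes tshark and pyshark are installed and uses a hypothetical capture file name):

import pyshark  # thin wrapper around tshark

# Pull all S7 "download block" frames (function code 0x1b) from a capture.
capture = pyshark.FileCapture(
    "engineering_ws.pcapng",                      # hypothetical file name
    display_filter="s7comm.param.func == 0x1b",
)
for packet in capture:
    print(packet.ip.src, "->", packet.ip.dst, "TCP stream", packet.tcp.stream)
capture.close()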

Less nicely dissected traffic

Working with nicely dissected traffic in Wireshark or tshark is always a blessing, but sometimes we do not have this luxury. The communication between a workstation running TIA Portal version 15.1 and a Siemens Simatic S7-1200 PLC is shown in figure 5.

Figure 5: Traffic between workstation running TIA 15.1 and S7-1200 PLC in Wireshark

A filter was applied to only show the traffic between the workstation and the PLC; you will have to believe us here that we did not hide the S7 protocol. We can see similarities between this traffic and the traffic discussed earlier: it involves Port 102/TCP and COTP. We might not have the luxury of nicely dissected traffic, but we are not out of luck.

We can use Wireshark’s “Follow TCP Stream” function and the search functionality to look for some very specific strings. If you are searching for a specific string in a traffic dump, it would be pretty cumbersome to manually follow every TCP stream and use the search field in the resulting window. Thankfully Wireshark offers something better. While you are in the main window of Wireshark, hit “CTRL+F”, which will add the search functionality below the display filter area.

Figure 6: Search field in the main window of Wireshark

Above you can also see the choices we have to make in order to search for strings. Key is that we are searching the “Packet bytes” and that we are looking for a “String”. An example where we searched for the string “ReleaseMngmtRoot” is shown below:

Figure 7: Example of searching “ReleaseMngmtRoot” in frames
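This string hunt can also be automated with a “contains” display filter, for example again via pyshark (a minimal sketch with a hypothetical capture file name):

import pyshark  # thin wrapper around tshark

# Find every TCP segment whose payload contains the marker string.
capture = pyshark.FileCapture(
    "tia_s7plus.pcapng",                                   # hypothetical file name
    display_filter='tcp contains "ReleaseMngmtRoot"',
)
for packet in capture:
    print(packet.number, packet.ip.src, "->", packet.ip.dst)
capture.close()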

You may ask yourself why all this is important. An excellent question we are going to answer now.

Based on our observations we can identify the following actions by analysing the occurrences of specific strings:

  • Download of changes of a block
  • Download of changes to a text list (Text library)
  • Download of the complete software, not only the changes
  • Download of the hardware configuration

Download changes of a block

We will start with the download of changes of a block to the PLC. Below you can see which string occurrences are to be expected and in which direction they are sent.

Figure 8: String occurrences for download of changes of a block

The TCP stream view in figure 9 shows the second, third and fourth step. Please be aware that the string “PLCProgramChange” in the schema above refers to the last occurrence, which is followed by the next string “DLTransaction”. Traffic in blue is traffic from the PLC to the workstation and traffic marked with a red background is the other direction.

Figure 9: Excerpt of TCP stream view showing steps 2,3 and 4

The strings in the “ReleaseMngmtRoot” sections contain some very valuable information, as demonstrated in the following screenshot.

Figure 10: TCP Stream View on “ReleaseMngmtRoot” sections

In the blue section the PLC is transmitting its state to the Workstation and the Workstation does the same in the red section. We can actually see the name of the project deployed on the PLC, in this case: “BridgeControl” followed by information on the PLC. For example, “6ES7212-1AE40-0XB0” is the article number of the PLC, which can be used to find more information on it. If you follow along, you can see that the workstation wants to deploy changes taken from a project file called “BridgeControl_Malicious”.

Finding the name of the changed block is possible, but it really helps if you know the names of the possible blocks, as it will be hidden in a lot of characters forming (nonsense) strings. The block changed in our case was “MotorControl”.

Figure 11: Presence of block name in TCP Stream view

Downloading changes for text lists

Figure 12 shows the schema for changes to text lists/libraries, following the same convention as above.

Figure 12: String occurrences for download of changes of a text lists/libraries

Be aware though that “TextLibrary…” is followed by its content, so expect a lot of strings to follow.

Figure 13: TCP stream view showing parts of a downloaded text library

Downloading complete software

Downloading the complete software means that everything is downloaded to the PLC, instead of just the changes.

Figure 14: String occurrences for a complete software download

Please note that the string “PLCProgram” also appears in what we assume is the banner or functions list. But it has been observed at the position shown above only in the case of a full software download to the PLC. Of course, “TextLibrary…” is followed by the content of the library, as mentioned previously.

Downloading hardware configuration

The hardware configuration can be downloaded to the PLC together with the software changes or as a single task. In both cases the following schema was observed.

Figure 15: String occurrences for a hardware configuration download

Please note that the string “HWConfiguration” also has been observed as part of a TextLibrary.

Figure 16: TCP Steam view of a hardware configuration download

The excerpt above shows the two “ReleaseMngmtRoot” occurrences as well as the occurrence of the “HWConfiguration” string. Again, blue indicates traffic from the PLC to the workstation, red the other direction.

Now, if you have followed the post until this section, it is time to mention that there is at least one dissector available for this version of the protocol. The protocol discussed in the second section is usually referred to as S7commPlus. You can identify it by looking at the location where you would expect the value “0x32” (dissected as field “Protocol Id”); in the case of S7commPlus it contains “0x72”.

Figure 17: S7commPlus Protocol ID

The screenshot above was taken from Wireshark with a freely available S7CommPlus dissector installed. Although we are not going to cover the dissector in this blog post, we mentioned it for completeness.

If you would like to play around with it, you can find it online at least on https://sourceforge.net/projects/s7commwireshark/. One word of caution: use at your own risk. We have not spent much time using this dissector yet. The dissector downloaded from SourceForge comes as a precompiled dll file that needs to be placed in the corresponding folder (in our testing: “C:\Program Files\Wireshark\plugins\3.6\epan”, as we used Wireshark version 3.6). Do your own risk assessment when dealing with files like dlls downloaded from the internet.

Conclusion & Outlook

Even if we cannot start our analysis on well dissected traffic, we can still identify specific patterns in the traffic. Of course, this all applies to traffic that is not encrypted, enabling us to have a look into the bits and bytes transferred.

This post marks the end of this series of blog posts. We would again like to stress the point that the discussed content is based on testing and observations. There is no guarantee that our testing has covered all possibilities, or, for example, that different versions of the TIA Portal behave the same way. More research and testing is needed to learn more about the behaviour of software used in OT. If we could have reached one goal with this series of posts, it would be to have inspired at least one person to perform research in this area and share it with the community.

About the Author

Olaf Schwarz is a Senior Incident Response Consultant at NVISO. You can find Olaf on Twitter and LinkedIn.

You can follow NVISO Labs on Twitter to stay up to date on all our future research and publications.

Enforcing a Sysmon Archive Quota

30 June 2022 at 12:19

Sysmon (System Monitor) is a well-known and widely used Windows logging utility providing valuable visibility into core OS (operating system) events. From a defender’s perspective, the presence of Sysmon in an environment greatly enhances detection and forensic capabilities by logging events involving processes, files, registry, network connections and more.

Since Sysmon 11 (released April 2020), the FileDelete event provides the capability to retain (archive) deleted files, a feature we especially adore during active compromises when actors drop-use-delete tools. However, as duly noted in Sysmon’s documentation, the usage of the archiving feature might grow the archive directory to unreasonable sizes (hundreds of GB); something most environments cannot afford.

This blog post will cover how, through a Windows-native feature (WMI event consumption), the Sysmon archive can be kept at a reasonable size. In a hurry? Go straight to the proof of concept!

Figure 1: A Sysmon archive quota removing old files.

The Challenge of Sysmon File Archiving

Typical Sysmon deployments require repeated fine-tuning to ensure optimized performance. When responding to hands-on-keyboard attackers, this time-consuming process is commonly replaced by relying on robust base-lined configurations (some of which are open-source, such as SwiftOnSecurity/sysmon-config or olafhartong/sysmon-modular). While most misconfigured events have at worst an impact on CPU and log storage, Sysmon file archiving can grind a system to a halt by exhausting all available storage. So how could one still perform file archiving without risking an outage?

While searching for a solution, we defined some acceptance requirements. Ideally, the solution should…

  • Be Windows-native. We weren’t looking for yet another agent/driver which consumes resources, may cause compatibility issues and increase the attack surface.
  • Be FIFO-like (First In, First Out) to ensure the oldest archived files are deleted first. This ensures attacker tools are kept in the archive just long enough for our incident responders to grab them.
  • Have a minimal system performance impact if we want file archiving to be usable in production.

A commonly proposed solution would be to rely on a scheduled task to perform some clean-up activities. While being Windows-native, this execution method is “dumb” (schedule-based) and would execute even when no files are being archived.

So how about WMI event consumption?

WMI Event Consumption

WMI (Windows Management Instrumentation) is a Windows-native component providing capabilities surrounding the OS’ management data and operations. You can for example use it to read and write configuration settings related to Windows, or monitor operations such as process and file creations.

Within the WMI architecture lays the permanent event consumer.

You may want to write an application that can react to events at any time. For example, an administrator may want to receive an email message when specific performance measures decline on network servers. In this case, your application should run at all times. However, running an application continuously is not an efficient use of system resources. Instead, WMI allows you to create a permanent event consumer. […]

A permanent event consumer receives events until its registration is explicitly canceled.

docs.microsoft.com

Leveraging a permanent event consumer to monitor for file events within the Sysmon archive folder would provide optimized event-based execution as opposed to the scheduled task approach.

In the following sections we will start by creating a WMI event filter intended to select events of interest; after which we will cover the WMI logical consumer whose role will be to clean up the Sysmon archive.

WMI Event Filter

A WMI event filter is an __EventFilter instance containing a WQL (WMI Query Language, SQL for WMI) statement whose role is to filter event tables for the desired events. In our case, we want to be notified when files are being created in the Sysmon archive folder.

Whenever files are created, a CIM_DataFile intrinsic event is fired within the __InstanceCreationEvent class. The following WQL statement would filter for such events within the default C:\Sysmon\ archive folder:

SELECT * FROM __InstanceCreationEvent
WHERE TargetInstance ISA 'CIM_DataFile'
	AND TargetInstance.Drive='C:'
	AND TargetInstance.Path='\\Sysmon\\'

Intrinsic events are polled at specific intervals. As we wish to ensure the polling period is not too long, a WITHIN clause can be used to define the maximum number of seconds that can pass before the notification of the event must be delivered.

The query below requires matching event notifications to be delivered within 10 seconds.

SELECT * FROM __InstanceCreationEvent
WITHIN 10
WHERE TargetInstance ISA 'CIM_DataFile'
	AND TargetInstance.Drive='C:'
	AND TargetInstance.Path='\\Sysmon\\' 

While the above WQL statement is functional, it is not yet optimized. As an example, if Sysmon came to archive 1000 files, the event notification would fire 1000 times, later resulting in our clean-up logic to be executed 1000 times as well.

To cope with this property, a GROUP clause can be used to combine events into a single notification. Furthermore, to ensure the grouping occurs in a timely manner, another WITHIN clause can be leveraged. The following WQL statement waits for up to 10 seconds to deliver a single notification should any files have been created in Sysmon’s archive folder.

SELECT * FROM __InstanceCreationEvent
WITHIN 10
WHERE TargetInstance ISA 'CIM_DataFile'
	AND TargetInstance.Drive='C:'
	AND TargetInstance.Path='\\Sysmon\\' 
GROUP WITHIN 10

To create a WMI event filter we can rely on PowerShell’s New-CimInstance cmdlet as shown in the following snippet.

$Archive = "C:\\Sysmon\\"
$Delay = 10
$Filter = New-CimInstance -Namespace root/subscription -ClassName __EventFilter -Property @{
    Name = 'SysmonArchiveWatcher';
    EventNameSpace = 'root\cimv2';
    QueryLanguage = "WQL";
    Query = "SELECT * FROM __InstanceCreationEvent WITHIN $Delay WHERE TargetInstance ISA 'CIM_DataFile' AND TargetInstance.Drive='$(Split-Path -Path $Archive -Qualifier)' AND TargetInstance.Path='$(Split-Path -Path $Archive -NoQualifier)' GROUP WITHIN $Delay"
}

WMI Logical Consumer

The WMI logical consumer will consume WMI events and undertake actions for each occurrence. Multiple logical consumer classes exist providing different behaviors whenever events are received, such as:

  • ActiveScriptEventConsumer, which runs a script when an event is delivered;
  • LogFileEventConsumer, which writes a line to a text log file;
  • NTEventLogEventConsumer, which logs an event to the Windows event log;
  • SMTPEventConsumer, which sends an email message;
  • CommandLineEventConsumer, which launches an arbitrary process.

The last CommandLineEventConsumer class is particularly interesting as it would allow us to run a PowerShell script whenever files are archived by Sysmon (a feature attackers do enjoy as well).

The first step in our PowerShell code would be to obtain a full list of archived files ordered from oldest to most recent. This list will play two roles:

  1. It will be used to compute the current directory size.
  2. It will be used as a list of files to remove (in FIFO order) until the directory size is back under control.

While getting a list of files is easy through the Get-ChildItem cmdlet, sorting these files from oldest to most recently archived requires some thinking. Where common folders could rely on the file’s CreationTimeUtc property, Sysmon archiving copies this file property over. As a consequence the CreationTimeUtc field is not representative of when a file was archived and relying on it could result in files being incorrectly seen as the oldest archives, causing their premature removal.

Instead of relying on CreationTimeUtc, the alternate LastAccessTimeUtc property provides a more accurate representation of when a file was archived. The following snippet will get all files within the Sysmon archive and order them in a FIFO-like fashion.

$Archived = Get-ChildItem -Path 'C:\\Sysmon\\' -File | Sort-Object -Property LastAccessTimeUtc

Once the archived files are listed, the folder size can be computed through the Measure-Object cmdlet.

$Size = ($Archived | Measure-Object -Sum -Property Length).Sum

All that remains to do is then loop the archived files and remove them while the folder exceeds our desired quota.

for($Index = 0; ($Index -lt $Archived.Count) -and ($Size -gt 5GB); $Index++)
{
	$Archived[$Index] | Remove-Item -Force
	$Size -= $Archived[$Index].Length
}

Sysmon & Hard Links

In some situations, Sysmon archives a file by referencing the file’s content from a new path, a process known as hard-linking.

A hard link is the file system representation of a file by which more than one path references a single file in the same volume.

docs.microsoft.com

As an example, the following snippet creates an additional path (hard link) for an executable. Both paths will now point to the same on-disk file content. If one path gets deleted, Sysmon will reference the deleted file by adding a path, resulting in the file’s content having two paths, one of which within the Sysmon archive.

:: Create a hard link for an executable.
C:\>mklink /H C:\Users\Public\NVISO.exe C:\Users\NVISO\Downloads\NVISO.exe
Hardlink created for C:\Users\Public\NVISO.exe <<===>> C:\Users\NVISO\Downloads\NVISO.exe

:: Delete one of the hard links causing Sysmon to archive the file.
C:\>del C:\Users\NVISO\Downloads\NVISO.exe

:: The archived file now has two paths, one of which within the Sysmon archive.
C:\>fsutil hardlink list Sysmon\B99D61D874728EDC0918CA0EB10EAB93D381E7367E377406E65963366C874450.exe
\Sysmon\B99D61D874728EDC0918CA0EB10EAB93D381E7367E377406E65963366C874450.exe
\Users\Public\NVISO.exe

The presence of hard links within the Sysmon archive can cause an edge-case should the non-archive path be locked by another process while we attempt to clean the archive. Should for example a process be created from the non-archive path, removing the archived file will become slightly harder.

:: If the other path is locked by a process, deleting it will result in a denied access.
C:\>del Sysmon\B99D61D874728EDC0918CA0EB10EAB93D381E7367E377406E65963366C874450.exe
C:\Sysmon\B99D61D874728EDC0918CA0EB10EAB93D381E7367E377406E65963366C874450.exe
Access is denied.

Removing such hard links from the archive is not straightforward while the other path is in use. However, as the archive’s hard link does technically not consume additional storage (the same content is referenced from another path), such files could be ignored given they do not partake in the storage exhaustion. Once the non-archive hard links referencing a Sysmon-archived file are removed, the archived file is not considered a hard link anymore and will be removable again.

To cope with the above edge-case, hard links can be filtered out and removal operations can be encapsulated in try/catch expressions should other edge-cases exist. Overall, the WMI logical consumer’s logic could look as follows:

$Archived = Get-ChildItem -Path 'C:\\Sysmon\\' -File | Where-Object {$_.LinkType -ne 'HardLink'} | Sort-Object -Property LastAccessTimeUtc
$Size = ($Archived | Measure-Object -Sum -Property Length).Sum
for($Index = 0; ($Index -lt $Archived.Count) -and ($Size -gt 5GB); $Index++)
{
	try
	{
		$Archived[$Index] | Remove-Item -Force -ErrorAction Stop
		$Size -= $Archived[$Index].Length
	} catch {}
}

As we did for the event filter, a WMI consumer can be created through the New-CimInstance cmdlet. The following snippet specifically creates a new CommandLineEventConsumer invoking our above clean-up logic to create a 10GB quota.

$Archive = "C:\\Sysmon\\"
$Limit = 10GB
$Consumer = New-CimInstance -Namespace root/subscription -ClassName CommandLineEventConsumer -Property @{
    Name = 'SysmonArchiveCleaner';
    ExecutablePath = $((Get-Command PowerShell).Source);
    CommandLineTemplate = "-NoLogo -NoProfile -NonInteractive -WindowStyle Hidden -Command `"`$Archived = Get-ChildItem -Path '$Archive' -File | Where-Object {`$_.LinkType -ne 'HardLink'} | Sort-Object -Property LastAccessTimeUtc; `$Size = (`$Archived | Measure-Object -Sum -Property Length).Sum; for(`$Index = 0; (`$Index -lt `$Archived.Count) -and (`$Size -gt $Limit); `$Index++){ try {`$Archived[`$Index] | Remove-Item -Force -ErrorAction Stop; `$Size -= `$Archived[`$Index].Length} catch {}}`""
}

WMI Binding

In the above two sections we defined the event filter and logical consumer. One last point worth noting is that event filters need to be bound to an event consumer in order to become operational. This is done through a __FilterToConsumerBinding instance as shown below.

New-CimInstance -Namespace root/subscription -ClassName __FilterToConsumerBinding -Property @{
    Filter = [Ref]$Filter;
    Consumer = [Ref]$Consumer;
}

Proof of Concept

The following proof-of-concept deployment technique has been tested in limited environments. As should be the case with anything you introduce into your environment, make sure rigorous testing is done and don’t just deploy straight to production.

The following PowerShell script creates a WMI event filter and logical consumer with the logic we defined previously before binding them. The script can be configured using the following variables:

  • $Archive as the Sysmon archive path. To be WQL-compliant, special characters have to be back-slash (\) escaped, resulting in double back-slashed directory separators (\\).
  • $Limit as the Sysmon archive’s desired maximum folder size (see real literals).
  • $Delay as the event filter’s maximum WQL delay value in seconds (WITHIN clause).

Do note that Windows security boundaries apply to WMI as well and, given the Sysmon archive directory is restricted to the SYSTEM user, the following script should be run with SYSTEM privileges.

$ErrorActionPreference = "Stop"

# Define the Sysmon archive path, desired quota and query delay.
$Archive = "C:\\Sysmon\\"
$Limit = 10GB
$Delay = 10

# Create a WMI filter for files being created within the Sysmon archive.
$Filter = New-CimInstance -Namespace root/subscription -ClassName __EventFilter -Property @{
    Name = 'SysmonArchiveWatcher';
    EventNameSpace = 'root\cimv2';
    QueryLanguage = "WQL";
    Query = "SELECT * FROM __InstanceCreationEvent WITHIN $Delay WHERE TargetInstance ISA 'CIM_DataFile' AND TargetInstance.Drive='$(Split-Path -Path $Archive -Qualifier)' AND TargetInstance.Path='$(Split-Path -Path $Archive -NoQualifier)' GROUP WITHIN $Delay"
}

# Create a WMI consumer which will clean up the Sysmon archive folder until the quota is reached.
$Consumer = New-CimInstance -Namespace root/subscription -ClassName CommandLineEventConsumer -Property @{
    Name = 'SysmonArchiveCleaner';
    ExecutablePath = (Get-Command PowerShell).Source;
    CommandLineTemplate = "-NoLogo -NoProfile -NonInteractive -WindowStyle Hidden -Command `"`$Archived = Get-ChildItem -Path '$Archive' -File | Where-Object {`$_.LinkType -ne 'HardLink'} | Sort-Object -Property LastAccessTimeUtc; `$Size = (`$Archived | Measure-Object -Sum -Property Length).Sum; for(`$Index = 0; (`$Index -lt `$Archived.Count) -and (`$Size -gt $Limit); `$Index++){ try {`$Archived[`$Index] | Remove-Item -Force -ErrorAction Stop; `$Size -= `$Archived[`$Index].Length} catch {}}`""
}

# Create a WMI binding from the filter to the consumer.
New-CimInstance -Namespace root/subscription -ClassName __FilterToConsumerBinding -Property @{
    Filter = [Ref]$Filter;
    Consumer = [Ref]$Consumer;
}

Once the WMI event consumption is configured, the Sysmon archive folder will be kept at a reasonable size, as shown in the following capture where a 90KB quota has been defined.

Figure 2: A Sysmon archive quota of 90KB removing old files.

With Sysmon archiving under control, we can now happily wait for new attacker tool-kits to be dropped…

Cortex XSOAR Tips & Tricks – Creating indicator relationships in automations

23 June 2022 at 08:00

Introduction

In Cortex XSOAR, indicators are a key part of the platform as they visualize the Indicators Of Compromise (IOC) of a security alert in the incident to the SOC analyst and can be used in automated analysis workflows to determine the incident outcome. If you have a Cortex XSOAR Threat Intelligence Management (TIM) license, it is possible to create predefined relationships between indicators to describe how they relate to each other. This enables the SOC analyst to do a more efficient incident analysis based on the indicators associated to the incident.

In this blog post, we will provide some insights into the features of Cortex XSOAR Threat Intelligence Management and how to create indicator relationships in an automation.

Threat Intelligence Management

Threat Intelligence Management (TIM) is a new feature in Cortex XSOAR which requires an additional license on top of your Cortex XSOAR user licenses. It is created to improve the use of threat intel in your SOC. Using TIM, you can automate threat intel management by ingesting and processing indicator sources and exporting the enriched intelligence data to SIEMs, firewalls, and other security platforms.

Cortex XSOAR TIM is a Threat Intelligence Platform with highly actionable threat data from Unit 42, which not only identifies and discovers new malware families or campaigns but also provides the ability to create and disseminate strategic intelligence reports.

https://www.paloaltonetworks.com/cortex/threat-intel-management

When the TIM license is imported into your Cortex XSOAR environment, all built-in indicator types will have a new Unit 42 Intel tab available:

Unit 42 Intel

This tab contains the threat intelligence data for the specific indicator gathered by Palo Alto and makes it directly available to your SOC analysts.

For Cortex XSOAR File indicators, the Wildfire analysis (the cloud-based threat analysis service of Palo Alto) is available in the indicator layout, providing your SOC analysts with a detailed analysis of malicious binaries if the file hash is known:

Wildfire Analysis

The TIM license also adds the capability to Cortex XSOAR to create relationships between indicators.

If you for example have the following indicators in Cortex XSOAR:

  • Host: ict135456.domain.local
  • MAC: 38-DA-09-8D-57-B1
  • Account: u4872
  • IP: 10.15.62.78
  • IP: 78.132.17.56

Without a TIM license, these indicators would be visible in the indicators section in the incident layout without any context about how they relate to each other:

By creating relationships between these indicators, a SOC analyst can quickly see how these indicators have interacted with each other during the detected incident:

Indicator Relationships

EntityRelationship Class

To create indicator relationships, the EntityRelationship class is available in the CommonServerPython automation.

CommonServerPython is an automation created by Palo Alto which contains Python code that can be used by other automations. Similar to CommonServerUserPython, CommonServerPython is added to all automations making the code available for you to use in your own custom automation.

In the Relationships subclass of EntityRelationship, you can find all the possible relationships that can be created and how they relate to each other.

RELATIONSHIPS_NAMES = {'applied': 'applied-on',
                       'attachment-of': 'attaches',
                       'attaches': 'attachment-of',
                       'attribute-of': 'owns',
                       'attributed-by': 'attributed-to',
                       'attributed-to': 'attributed-by',
                       'authored-by': 'author-of',
                       'beacons-to': 'communicated-by',
                       'bundled-in': 'bundles',
                       'bundles': 'bundled-in',
                       'communicated-with': 'communicated-by',
                       'communicated-by': 'communicates-with',
                       'communicates-with': 'communicated-by',
                       'compromises': 'compromised-by',
                       'contains': 'part-of',
                       .......

You can define a relationship between indicators by creating an instance of the EntityRelationship class:

indicator_relationship = EntityRelationship(
    name=EntityRelationship.Relationships.USES,
    entity_a="u4872",
    entity_a_type="Account",
    entity_b="ict135456.domain.local",
    entity_b_type="Host"
)

In the name attribute, you add which relationship you want to create. It is best to use the Relationships Enum subclass in case the string values of the relationship names change in a future release.

In the entity_a attribute, add the value of the source indicator.

In the entity_a_type attribute, add the type of the source indicator.

In the entity_b attribute, add the value of the destination indicator.

In the entity_b_type attribute, add the type of the destination indicator.

When initializing the EntityRelationship class, it will validate all the required attributes to see if all information is present to create the relationship. If not, a ValueError exception will be raised.
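If needed, this validation can be handled explicitly in your automation. The snippet below is a minimal sketch: the empty entity_b value is only there to illustrate a missing required attribute, and return_error is the standard CommonServerPython helper for failing an automation with an error message.

try:
    indicator_relationship = EntityRelationship(
        name=EntityRelationship.Relationships.USES,
        entity_a="u4872",
        entity_a_type="Account",
        entity_b="",  # missing value, will fail validation
        entity_b_type="Host"
    )
except ValueError as error:
    return_error(f"Could not create indicator relationship: {error}")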

Create Indicator Relationships

Now we know which class to use, let’s create the indicator relationships in Cortex XSOAR.

For each relationship we want to create, an instance of the EntityRelationship class which describes the relationship between the indicators should be added to a list:

indicator_relationships = []

indicator_relationships.append(
    EntityRelationship(
        name=EntityRelationship.Relationships.USES,
        entity_a="u4872",
        entity_a_type="Account",
        entity_b="ict135456.domain.local",
        entity_b_type="Host"
    )
)

indicator_relationships.append(
    EntityRelationship(
        name=EntityRelationship.Relationships.BEACONS_TO,
        entity_a="78.132.17.56",
        entity_a_type="IP",
        entity_b="ict135456.domain.local",
        entity_b_type="Host"
    )
)

indicator_relationships.append(
    EntityRelationship(
        name=EntityRelationship.Relationships.ATTRIBUTED_BY,
        entity_a="38-DA-09-8D-57-B1",
        entity_a_type="MAC",
        entity_b="ict135456.domain.local",
        entity_b_type="Host"
    )
)

indicator_relationships.append(
    EntityRelationship(
        name=EntityRelationship.Relationships.USES,
        entity_a="10.15.62.78",
        entity_a_type="IP",
        entity_b="ict135456.domain.local",
        entity_b_type="Host"
    )
)

To create the relationships in Cortex XSOAR, the list of EntityRelationship instances needs to be returned in an instance of the CommandResults class using the return_results function:

return_results(
    CommandResults(
        relationships=indicator_relationships
    )
)

If you now open the relationship view of the Host indicator in Cortex XSOAR, you will see that the relationships have been created:

Indicator Relationships

References

https://docs.paloaltonetworks.com/cortex/cortex-xsoar/6-8/cortex-xsoar-threat-intel-management-guide

https://xsoar.pan.dev/docs/reference/api/common-server-python#entityrelationship

https://xsoar.pan.dev/docs/reference/api/common-server-python#relationshipstypes

https://xsoar.pan.dev/docs/reference/api/common-server-python#relationships

https://xsoar.pan.dev/docs/reference/api/common-server-python#commandresults

About the author

Wouter is an expert in the SOAR engineering team in the NVISO SOC. As the SOAR engineering team lead, he is responsible for the development and deployment of automated workflows in Palo Alto Cortex XSOAR which enable the NVISO SOC analysts to detect attackers faster in customers' environments. With his experience in cloud and devops, he has enabled the SOAR engineering team to automate the development lifecycle and increase operational stability of the SOAR platform.

You can contact Wouter via his LinkedIn page.


Want to learn more about SOAR? Sign up here and we will inform you about new content and invite you to our SOAR For Fun and Profit webcast.
https://forms.office.com/r/dpuep3PL5W

Why a successful Cyber Security Awareness month starts … now!

17 June 2022 at 08:00

Have you noticed that it’s June already?! Crazy how fast time flies by when busy. But Q2 of 2022 is almost ready to be closed, so why not have a peek at what the second half of the year has in store for us? Summer holidays, you say? Sandy beaches and happy hour cocktails? Or cool mountain air and challenging MTB tracks? Messily written out-of-office messages, kindly asking to park that question till early September?

Yes. It’s all that. But there is more. For us, it’s October that is highlighted.
October is Cyber Security Awareness Month, the peak season for Security Awareness professionals (and enthusiasts 😉). In Europe, the European Cybersecurity Month (ECSM) has taken place every year since 2013 and has been incorporated in the implementing actions of the Cybersecurity Act (CSA). The focus on making it a yearly recurring event is strong and in today’s world we might need this initiative to spotlight security awareness more than ever.

But October is still 3 months away…

3 reasons why a successful Cyber Security Awareness Month starts NOW!

1. Take your time to get inspired

Cyber Security is a specific yet very broad domain. There are endless topics to cover, levels of complexity, target audiences… Needless to say, pinpointing your focus for Cyber Month won’t be easy. Depending on available time and budget, it may be difficult to pick the best approach. That is why it is important to take your time to get inspired. How? Here are some ideas.

  • Take a step back and review what you have already covered in previous quarters. Which topics went down extremely well with your target audience? Which ones didn’t, and why?
  • Which topics are people already familiar with? It might be a good idea to remind your team of what they already know instead of overwhelming them with only new content;
  • Do not let a good incident go to waste!
    • Make sure to involve the technical teams managing security and dig for input on “real” incidents. Showing people what happened or might happen makes security more tangible;
    • Use security awareness topics that are trending in the local media as a coat rack for the message you want to bring across;
  • Check what other organizations are talking about this year. Is it relevant for you? No need to reinvent the wheel 😉
  • Keep it simple. Focus on 1 topic and make it really stick.

2. Organizing impactful activities takes time!

Once you have a clear view on the topic to cover and the message you want to bring across, it’s important to consider how you want to do that. And let’s be honest, if you really want to make an impact during Cyber Month, sending a boring email that is all work and no play isn’t going to cut it. There is no magic formula but there are a few things that you could consider:

  • Triggers and motivation: typically, Cybersecurity Awareness Month allows us to be more playful than the rest of the year. Why not use different triggers too?
  • Get emotions running: testimonials are among the most relatable tools you can use. Be careful with the balance between “scary” and “empowering” stories!
  • Talk to the informed ones: propose an in-depth approach with extra-professional resources, panel discussions, external speakers…
  • Roll up sleeves: most of us learn by doing. That is why games, experience workshops and 1-to-1 demos work well.

Make sure you have a good motivation to attract your people.

  • Contests with a final gift are a classic, but you only need to attend a professional fair to see those still work. 
  • Goodies? They require budget. Physical items may draw negative attention by being perceived as wasteful. Are we against them? No, but choose carefully.
  • Make sure you have a good “how this will improve your life” story. Remember that protecting your family and friends is a better motivator than protecting your company (ok, it is not so much of a secret)

You can read in our blog how we applied all this last year or reach out for a demo.


3. Get your stakeholders on board early

Cyber Month is not a one man/girl/team show. No matter how inspiring your activities, if you are running it alone it will be very difficult to bring your message across. That’s why it is crucial to start promoting Cyber Month early towards all stakeholders, often even before you have anything planned. Getting all of your ducks in a row before summer will give you peace of mind when organizing and planning later on.

Here’s a few stakeholders to consider and why*:

  • Top Management: money and support!
  • Communications: to make sure you reserve a spot for cyber month on all communication channels (weekly newsletters, intranet, emails, cctv, social media, …);
  • Technical teams: back to the “inspiration” argument. And of course to validate content.
  • HR: to help you define and identify target audiences and DOs / DON’Ts in the organization.

*Depending on the size of organisation there might be more or less stakeholders to consider.

“Oops! I wish I had read this 2 months ago”

Are you reading this by the 20th of September? €0 left in your budget?
Don’t panic. Even with time and money constraints, there is good, generic content freely available on the internet covering at least the top 10 of most current threats. It’s usually even tweakable to make it look and feel branded for your own organisation.

ENISA, the European Union Agency for Cybersecurity, coordinates the organisation of the European Cybersecurity Month (ECSM) and acts as a “hub” for all participating Member States and EU Institutions. The Agency also publishes new materials on a yearly basis.

A great resource in Belgium is Safeonweb, an initiative of the Centre for Cyber Security Belgium that also launches a new campaign every year.

If nothing else, these will provide a good starting point. And next year, make sure you start early on!

About the authors

Hannelore Goffin is an experienced consultant within the Cyber Strategy team at NVISO where she is passionate about raising awareness on all cyber related topics, both for the professional and personal context. 

Mercedes M Diaz leads NVISO Cyberculture practice. She supports businesses trying to reduce their risks by helping teams understanding their role in protecting the company.

Cortex XSOAR Tips & Tricks – Discovering undocumented API endpoints

7 June 2022 at 08:00

Introduction

When you use the Cortex XSOAR API in your automations, playbooks or custom scripts, the first place you will start is the API documentation to see which API endpoints are available. But what if you cannot find an API Endpoint for the task you want to automate in the documentation?

In this blog post we will show you how to discover undocumented Cortex XSOAR API endpoints using the Firefox Developer Tools and how to craft HTTP requests with Curl.

Discover API Endpoints

The Cortex XSOAR API documentation can be found in Settings > Integrations > API Keys as a web page on the server, a PDF document or a Swagger file. It contains a list of API Endpoints with their description, HTTP method, return codes, parameters, request body schema and example responses.

When you cannot find an API endpoint with the required functionality in the documentation, the Cortex XSOAR API still allows you to use the undocumented API endpoints which are used by the Cortex XSOAR web interface. You can use the developer tools of your browser to discover which API endpoint is used when performing a certain task and see what request body is required.

As an example, we will discover which undocumented API endpoints are used when starting/stopping accounts on a multi-tenant Cortex XSOAR server using Firefox.

To start/stop a multi-tenant account, go to Settings > Accounts Management:

Here you can start/stop an account by selecting it and using the Start/Stop buttons.

To see which API endpoint is used by the Cortex XSOAR web interface, open the Firefox Developer Tools by pressing Ctrl + Shift + i:

When you now stop an account using the web interface, you will see all HTTP requests that are executed in the Network tab:

If you click the first entry, you will see the details of the HTTP request for stopping the account. In the Headers tab, you will see which API endpoint is used:

The API endpoint used for stopping accounts is /accounts/stop.

In the Request tab, you will see the HTTP request body required for the HTTP POST request to the /accounts/stop API endpoint:

As a request body for this API endpoint, you will need to pass the following JSON:

{
  "names": [
    "acc_Profit"
  ]
}

The account name should be in the format acc_<account_name> as an element of the names array.

To get the account name, we could also look at the second entry in the Network tab which is the response of the HTTP GET request to the /account API endpoint.

If you open the response tab in the request details, you will see the details of each account:

Next, we’ll see which API endpoint is used to start an account. In the Network tab of the Developer Tools, first click the trashcan button to clear all entries. Now let’s start the account from the Cortex XSOAR web interface by selecting the account and clicking the Start button.

You will now see the following HTTP Requests:

Click on the first HTTP POST request to see the request details:

The API endpoint used for starting accounts is /accounts/start.

In the Request tab, you will see the HTTP request body required for the HTTP POST request to the /accounts/start API endpoint:

As a request body for this API endpoint, you will need to pass the following JSON:

{
  "accounts": [
    {
      "name": "acc_Profit"
    }
  ]
}

Now that we know the API endpoints and required request bodies for starting and stopping multi-tenant accounts, we can create the Curl commands.

With the following Curl command, you can stop an account:

curl -X 'POST' \
'https://demo-xsoar.westeurope.cloudapp.azure.com/accounts/stop' \
-H 'accept: application/json' \
-H 'Authorization: ********************************' \
-H 'Content-Type: application/json' -d '{"names": ["acc_Profit"]}'

In the Authorization header you will need to add an API key you created in Settings > Integrations > API Keys.

In the Accounts Management tab in Cortex XSOAR, you will now see that the account is stopped:

With the following Curl command, you can start an account:

curl -X 'POST' \
'https://demo-xsoar.westeurope.cloudapp.azure.com/accounts/start' \
-H 'accept: application/json' \
-H 'Authorization: ********************************' \
-H 'Content-Type: application/json' -d '{"accounts":[{"name":"acc_Profit"}]}'

In the Accounts Management tab in Cortex XSOAR, you will now see that the account is running:

You can now implement these HTTP requests in your own automation or playbook making use of the Demisto REST API integration or in your custom script.
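As a rough illustration, an automation could call the /accounts/stop endpoint through the Demisto REST API integration. This is a minimal sketch, which assumes that integration is installed and that its demisto-api-post command is enabled in your environment:

# Stop a tenant account via the undocumented /accounts/stop endpoint.
account_name = "acc_Profit"

response = demisto.executeCommand("demisto-api-post", {
    "uri": "/accounts/stop",
    "body": {"names": [account_name]}
})

return_results(response)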

By using the developer tools of your browser, you can discover any API endpoint used by the Cortex XSOAR web interface. This allows you to automate anything you could do manually in the web interface which greatly increases the possible use cases for automation.

About the author

Wouter is an expert in the SOAR engineering team in the NVISO SOC. As the SOAR engineering team lead, he is responsible for the development and deployment of automated workflows in Palo Alto Cortex XSOAR which enable the NVISO SOC analysts to detect attackers faster in customers' environments. With his experience in cloud and devops, he has enabled the SOAR engineering team to automate the development lifecycle and increase operational stability of the SOAR platform.

You can contact Wouter via his LinkedIn page.


Want to learn more about SOAR? Sign up here and we will inform you about new content and invite you to our SOAR For Fun and Profit webcast.
https://forms.office.com/r/dpuep3PL5W

Cortex XSOAR Tips & Tricks – Exploring the API using Swagger Editor

1 June 2022 at 08:00

Introduction

When using the Cortex XSOAR API in your automations, playbooks or custom scripts, knowing which API endpoints are available and how to use them is key. In a previous blog post in this series, we showed you where you could find the API documentation in Cortex XSOAR. The documentation was available on the server itself, as a PDF, or as a Swagger file.

Swagger is a set of developer tools for developing and interacting with APIs. It is also a former specification for documenting APIs on which the OpenAPI specification is based.

In this blog post we will show you how to set up a Swagger Editor instance together with the Cortex XSOAR API Swagger file to visualize and interact with the Cortex XSOAR API. This will allow you to easily explore its capabilities, craft HTTP requests and view the returned data without the need to write a single line of code.

Swagger Editor

The Swagger Editor is an open source editor to design and document RESTful APIs in the OpenAPI (formerly Swagger) specification.

To install Swagger Editor we will be using the official docker image available on Docker Hub. If Docker is not yet installed, please follow the Docker installation documentation.

Start the docker image with the following commands:

docker pull swaggerapi/swagger-editor
docker run -d -p 80:8080 swaggerapi/swagger-editor

The Swagger Editor will now be available in your browser on address http://localhost

Swagger Editor

Before we can start interacting with the Cortex XSOAR API, we will need to bypass CORS restrictions in your browser.

Cross-Origin Resource Sharing (CORS) is a mechanism that allows restricted resources on a web page to be requested from another domain outside the domain from which the first resource was served. In Firefox, CORS is only allowed when the server returns the Access-Control-Allow-Origin: * header which we are going to set using a Firefox extension.

In Firefox, the extension CORS Everywhere is available for installation in the Firefox Add-ons. Once installed, a new icon will be available in the Firefox toolbar. To bypass CORS restrictions, click the CorsE icon and it will turn green.

Explore Cortex XSOAR API

To start exploring the Cortex XSOAR API from the Swagger Editor, we will need to create an API key and download the REST Swagger file. Open Cortex XSOAR > Settings > Integrations > API Keys:

Cortex XSOAR API Keys

Click Get Your Key to create an API key and copy the key.

Download the REST Swagger file, copy the content of the downloaded JSON file and paste it into the Swagger Editor.

Click OK to convert the JSON to YAML:

After importing the JSON, an error will be shown which can be ignored by clicking Hide:

On line 36 of the imported YAML file, replace hostname with the URL of your Cortex XSOAR server:

host: dev.xsoar.eu:443

Click Authorize to add authentication credentials:

Paste your API key and click Authorize:

Now you are ready to start exploring the Cortex XSOAR API. For each available API endpoint you will see an entry in the Swagger Editor together with its supported HTTP method.

We are going to use the /incidents/search API Endpoint as an example.

When you expand the /incidents/search entry, you will see its description:

Next you will see the required and optional parameters, together with their required data models, either in JSON or XML:

Finally you will see the possible response codes, content types and example data returned by the API endpoint:

All this information will allow you to craft the HTTP request to the Cortex XSOAR API for your automation or custom script. But the Swagger Editor also allows you to interact with an API directly from its web interface.

In the entry of the /incidents/search API endpoint, click on Try it out:

You will see that you can now edit the value of the filter parameter. We will be searching for an incident in Cortex XSOAR based on its ID:

{
  "filter": {
    "id": [
      "12001"
    ]
  }
}

After pasting the JSON in the filter value, click Execute:

The API request will now be executed against the Cortex XSOAR API.

In the Responses section, you will see the Curl request of the executed API request. You can use this command in a terminal to execute the API request again.

The response body of the API request can be seen in the Server response section.
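The same search can also be reproduced outside the Swagger Editor, for example in a short Python script. The sketch below assumes the host from the example above; the API key is a placeholder you create in Settings > Integrations > API Keys.

import requests

XSOAR_URL = "https://dev.xsoar.eu:443"  # replace with your own Cortex XSOAR server
API_KEY = "<your API key>"              # placeholder

response = requests.post(
    f"{XSOAR_URL}/incidents/search",
    headers={"Authorization": API_KEY, "Content-Type": "application/json"},
    json={"filter": {"id": ["12001"]}},
)
print(response.status_code)
print(response.json())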

By using the Swagger Editor to interact with the Cortex XSOAR API, you can explore the available API requests and their responses without implementing any code. This allows you to see if the Cortex XSOAR API supports the functionality for your automated workflow case before you start development.

References

https://editor.swagger.io/

https://hub.docker.com/r/swaggerapi/swagger-editor/

About the author

Wouter is an expert in the SOAR engineering team in the NVISO SOC. As the SOAR engineering team lead, he is responsible for the development and deployment of automated workflows in Palo Alto Cortex XSOAR which enable the NVISO SOC analysts to detect attackers faster in customers' environments. With his experience in cloud and devops, he has enabled the SOAR engineering team to automate the development lifecycle and increase operational stability of the SOAR platform.

You can contact Wouter via his LinkedIn page.


Want to learn more about SOAR? Sign up here and we will inform you about new content and invite you to our SOAR For Fun and Profit webcast.
https://forms.office.com/r/dpuep3PL5W

CVE Farming through Software Center – A group effort to flush out zero-day privilege escalations

31 May 2022 at 08:19

Intro

In this blog post we discuss a zero-day attack surface for finding privilege escalation vulnerabilities, discovered by Ahmad Mahfouz. It abuses applications like Software Center, which are typically used in large-scale environments for automated software deployment performed on demand by regular (i.e. unprivileged) users.

Since the topic resulted in a possible attack surface across many different applications, we organized a team event titled “CVE farming” shortly before Christmas 2021.

Attack Surface, 0-day, … What are we talking about exactly?

NVISO contributors from different teams (both red and blue!) and Ahmad gathered together on a cold winter evening to find new CVEs.

Targets? More than one hundred installation files that you could normally find in the software center of enterprises.
Goal? Find out whether they could be used for privilege escalation.

The original vulnerability (patient zero) resulting in the attack surface discovery was identified by Ahmad and goes as follows:

Companies correctly don’t give administrative privileges to all users (according to the least privilege principle). However, they also want the users to be able to install applications based on their business needs. How is this solved? Software Center portals using SCCM (System Center Configuration Manager, now part of Microsoft Endpoint Manager) come to the rescue. Using these portals enables users to install applications without giving them administrative privileges.

However, there is an issue. More often than not these portals run the installation program with SYSTEM privileges, which in turn uses a temporary folder for reading or writing resources used during installation. The TMP environment variable of SYSTEM has a special characteristic: the directory it points to is writable by a regular user.

Consider the following example:

By running the previous command, we just successfully wrote to a file located in the TEMP directory of SYSTEM.
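As an illustration of the same idea, an unprivileged user can verify this with a few lines of Python (the file name is hypothetical):

from pathlib import Path

# Attempt to create a file in SYSTEM's TMP directory (C:\Windows\Temp) as a regular user.
probe = Path(r"C:\Windows\Temp\nviso_probe.txt")
probe.write_text("written by an unprivileged user")
print("Write succeeded:", probe)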

Even if we can’t read the file anymore on some systems, be assured that the file was successfully written:

To check that SYSTEM really has TMP pointing to C:\Windows\TEMP, you could run the following commands (as administrator):

PsExec64.exe /s /i cmd.exe

echo %TMP%

The /s option of PsExec tells the program to run the process in the SYSTEM context. Now if you would try to write to a file of an Administrator account’s TMP directory, it would not work since your access is denied. So if the installation runs under Administrator and not SYSTEM, it is not vulnerable to this attack.

How can this be abused?

Consider a situation where the installation program, executed under a SYSTEM context:

  • Loads a dll from TMP
  • Executes an exe file from TMP
  • Executes an msi file from TMP
  • Creates a service from a sys file in TMP

This provides some interesting opportunities! For example, the installation program can search in TMP for a dll file. If the file is present, it will load it. In that case the exploitation is simple; we just need to craft our custom dll, rename it, and place it where it is being looked for. Once the installation runs we get code execution as SYSTEM.

Let’s take another example. This time the installation creates an exe file in TMP and executes it. In this case it can still be exploitable, but we have to abuse a race condition. What we need to do is craft our own exe file and continuously overwrite the target exe file in TMP with our own exe. Then we start the installation and hope that our own exe file will be executed instead of the one from the installation. We can introduce a small delay, for example 50 milliseconds, between the writes, hoping the installation will drop its exe file, which gets replaced by ours and executed by the installation within that small delay. Note that this kind of exploitation might take more patience and might require restarting the installation process multiple times to succeed. The video below shows an example of such a race condition:
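Sketched in code, such an overwrite loop could look roughly like this (the payload and target paths are hypothetical, and the timing is purely illustrative):

import shutil
import time

payload = r"C:\Users\lowpriv\payload.exe"         # attacker-controlled executable
target = r"C:\Windows\Temp\installer_helper.exe"  # exe dropped and executed by the installer

# Keep overwriting the target with our payload, hoping to win the race between
# the installer writing the file and executing it.
while True:
    try:
        shutil.copyfile(payload, target)
    except OSError:
        pass  # the file may be locked while the installer is writing or executing it
    time.sleep(0.05)  # ~50 milliseconds between writes, as described above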

However, even in case of execution under a SYSTEM context, applications can take precautions against abuse. Many of them read/write their sources to/from a randomized subdirectory in TMP, making it nearly impossible to exploit. We did notice that in some cases the directory appears random, but in fact remains constant in between installations, also allowing for abuse. 

So, what was the end result?

Out of 95 tested installers, 13 were vulnerable, 7 need to be investigated further and 75 were not found to be vulnerable. Not a bad result, considering that those are 13 easy-to-use zero-day privilege escalation vulnerabilities 😉. We reported them to the respective developers but were met with limited enthusiasm. Ahmad and NVISO also reported the attack surface vulnerability to Microsoft, but there is no fix for the file system permission design. The recommendation is for installers to follow the defense in depth principle, which puts the responsibility with the developers packaging their software.

If you’re interested in identifying this issue on systems you have permission on, you can use the helper programs we will soon release in an accompanying Github repository.

Stay tuned!

Defense & Mitigation

Since the Software Center is working as designed, what are some ways to defend against this?

  • Set AppEnforce user context if possible
  • Developers should consider absolute paths while using custom actions or make use of randomized folder paths
  • As a possible IoC for hunting: Identify DLL writes to c:\windows\temp

References

https://docs.microsoft.com/en-us/windows/win32/msi/windows-installer-portal
https://docs.microsoft.com/en-us/windows/win32/msi/installation-context
https://docs.microsoft.com/en-us/windows/win32/services/localsystem-account
https://docs.microsoft.com/en-us/mem/configmgr/comanage/overview
https://docs.microsoft.com/en-us/mem/configmgr/apps/deploy-use/packages-and-programs
https://docs.microsoft.com/en-us/mem/configmgr/apps/deploy-use/create-deploy-scripts
https://docs.microsoft.com/en-us/windows/win32/msi/custom-actions
https://docs.microsoft.com/en-us/mem/configmgr/core/understand/software-center
https://docs.microsoft.com/en-us/mem/configmgr/core/clients/deploy/deploy-clients-cmg-azure
https://docs.microsoft.com/en-us/windows/win32/dlls/dynamic-link-library-security

About the authors

Ahmad, who discovered this attack surface, is a cyber security researcher mainly focused on attack surface reduction and detection engineering. Prior to that he did software development and system administration, and he holds multiple certificates in advanced penetration testing and system engineering. You can find Ahmad on LinkedIn.

Oliver, the main author of this post, is a cyber security expert at NVISO. He has almost a decade and a half of IT experience, half of which is in cyber security. Throughout his career he has obtained many useful skills and certificates. He’s constantly exploring and looking for more knowledge. You can find Oliver on LinkedIn.

Jonas Bauters is a manager within NVISO, mainly providing cyber resiliency services with a focus on target-driven testing. As the Belgian ARES (Adversarial Risk Emulation & Simulation) solution lead, his responsibilities include both technical and non-technical tasks. While occasionally still performing pass the hash (T1550.002) and pass the ticket (T1550.003), he also greatly enjoys passing the knowledge. You can find Jonas on LinkedIn.


Detecting BCD Changes To Inhibit System Recovery

30 May 2022 at 08:00

Introduction

Earlier this year, we observed a rise in malware that inhibits system recovery. This tactic is mostly used by ransomware and wiper malware. One notable example of such malware is “HermeticWiper”. To inhibit recovery, an attacker has many possibilities, one of which is changing the Boot Configuration Database (BCD). This post will dive into the effects of BCD changes applied by such malware, and cover:

  • What is BCD?
  • Effect of changing boot entries.
  • Gather related Telemetry.
  • Derive possible detection opportunities.

What is BCD?

To understand BCD, we need to address the Windows 10 boot process. A detailed description of the entire boot process is provided by Microsoft.

Figure 1: BCD Location In Boot Sequence.

When the host receives power, the firmware loads the Unified Extensible Firmware Interface (UEFI) environment which in turn launches the Windows Boot Manager (WBM) and later hands over execution to the operating system (OS). At some point in the execution chain, the WBM reads the BCD to determine which boot applications need to run and in which order to run them.

The BCD contains “boot entries” that are adjustable. This allows the user to indirectly control the actions of the WBM and as such the boot procedure of the host itself.

Targeted boot entries

As described by MITRE ATT&CK, the following boot entries are most often changed to inhibit system recovery:

  • Bootstatuspolicy
  • Recovery Enabled

BootStatusPolicy and Recovery Enabled

During startup, when a computer with the Windows OS fails to boot twice consecutively, it automatically fails over to the Windows Recovery Environment (WinRE). The OS is aware of the failure because a status flag is set during the boot to indicate the OS is booting. The flag gets cleared on successful startup, meaning that if the OS fails to boot, the flag isn’t reset, and when booted once more, the WBM will start WinRE instead of the main OS. The BootStatusPolicy and “Recovery enabled” boot entries determine when/if the boot manager is allowed to transfer control to WinRE.

One of the effects of disabling WinRE is the removal of the “Automatic Repair” function.

Figure 2: Automatic Repair

“Automatic Repair” attempts to diagnose and repair the source of the boot failure. A detailed view of the performed checks is available at “C:\Windows\System32\LogFiles\Srt\SrtTrail.txt”; a summary of the tests is visible below:

  • Check for updates
  • Event log diagnosis
  • Bugcheck analysis
  • System disk test
  • Internal state check
  • Setup state check
  • Disk failure diagnosis
  • Check for installed LCU
  • Registry hives test
  • Disk metadata test
  • Check for installed driver updates
  • Volume content check
  • Target OS test
  • Check for pending package install
  • Boot manager diagnosis
  • Windows boot log diagnosis
  • Boot status test
  • System boot log diagnosis

When the “Automatic Repair” feature repairs the OS, it can remove objects or revert changes related to the issue. Figure 3 shows a Windows user notification stating the recovery feature reverted the recent updates to resolve the booting failure.

Figure 3: Windows Prompts The User Of Recovery Actions Performed.

Applying boot entry changes.

Boot entry changes are commonly applied via BCDedit.exe. The boot entries that are relevant to this post are changed via the commands below:

bcdedit.exe /set bootstatuspolicy ignoreallfailures
bcdedit /set recoveryenabled no

To modify the BCD via BCDedit, administrative privileges are required as seen below:

Figure 4: Error Prompt UnderPrivileged User.

Telemetry Gathering

To gather relevant telemetry, scenarios are executed in a controlled space. This telemetry will be used to validate claims and leveraged to provide detection logic later in this post.

Setup

A single clean Windows 10 Enterprise, non-domain-joined host was created in a virtualized environment. To monitor system interactions, the Sysinternals tools Sysmon and Procmon were installed. Sysmon is running with the standard Olaf Hartong configuration unless indicated otherwise. Further, two agents forward the Sysmon and MDE logs to a Sentinel environment. Kusto Query Language (KQL) is used to query the forwarded data. When Procmon data is inspected, it is exported into a CSV file and loaded into an Excel pivot table.

The sysmon events get parsed by a stored function in MDE, called “sysmon_parsed”. All exported data and used configurations are available on the following GitHub link.

Scenario

Scenario 1: Apply BCD changes via BCDedit.exe.

As malware changes specific BCD entries via BCDedit, the same behavior is performed via an elevated PowerShell prompt.

bcdedit.exe /set bootstatuspolicy ignoreallfailures
bcdedit /set recoveryenabled no

First, we investigate the detection capabilities of the standard Sysmon configuration.

Figure 5: Sysmon output when adjusting BCD with the standard configuration file.

In both figures, the BCDedit process creation (Event ID 1) is highlighted. After process creation, we only observe irrelevant file creation events (Event ID 11) of the BCDedit prefetch file. This means that either no detectable events are generated, or the standard configuration doesn’t monitor the effects of the BCD changes made by BCDedit.

Next, Procmon is utilized to provide a more detailed overview of BCDedit’s system interactions.

Figure 6: Pivot Table Overview Boot Entry Changes

The pivot table shows the details of successful system operations performed by BCDedit. The right column “Type” is divided into three subcolumns. Every subcolumn represents all operations related to a single boot entry change. The “none” subcolumn is the baseline: here BCDedit is run without any arguments. This is used to see if the telemetry is inherent to the BCDedit executable or to the boot entry changes performed by BCDedit.

There are two “Registry set value” operations highlighted in the pivot table above.

  • recoveryenabled: HKLM\BCD00000000\Objects\{60e96da1-5243-11ec-a250-810052a36a7f}\Elements\16000009\Element (Type: REG_BINARY, Length: 1, Data: 00)
  • bootstatuspolicy: HKLM\BCD00000000\Objects\{60e96da1-5243-11ec-a250-810052a36a7f}\Elements\250000e0\Element (Type: REG_BINARY, Length: 8, Data: 01 00 00 00 00 00 00 00)

These operations don’t occur in the baseline and each is unique to its respective boot entry change. Remarkably, the registry keys contain something resembling a unique identifier for a BCD object and something called an Element.

Microsoft states:

Each BCD element represents a specific boot option.

BCD object, which is a collection of elements that describes the settings for the object that are used during the boot process.

This information looks promising. To justify the use of these registry keys, for detection purposes, the following questions need to be answered:

  • As the BCD object has a unique ID, does it change across host systems?
  • Are the same registry elements changed across host systems?
  • When the value supplied to BCDedit and the related registry key holds the same value, is a “registry set value” event still created?
  • Are the registry entries one-to-one related to the BCD or are the registry keys only a visualization of the BCD?

To answer the first and second questions, we run the same scenario on another copy of our environment. On the new copy, we perform all previous steps and apply the same analysis.

Figure 7: Machine 2 Pivot Table Overview Boot Entry Changes

For convenience, the relevant telemetry from both machines is listed below:

Machine 1:
  • HKLM\BCD00000000\Objects\{60e96da1-5243-11ec-a250-810052a36a7f}\Elements\16000009\Element (Type: REG_BINARY, Length: 1, Data: 00)
  • HKLM\BCD00000000\Objects\{60e96da1-5243-11ec-a250-810052a36a7f}\Elements\250000e0\Element (Type: REG_BINARY, Length: 8, Data: 01 00 00 00 00 00 00 00)

Machine 2:
  • HKLM\BCD00000000\Objects\{fcda0303-b063-11ec-a39e-d00df54d50c1}\Elements\16000009\Element (Type: REG_BINARY, Length: 1, Data: 00)
  • HKLM\BCD00000000\Objects\{fcda0303-b063-11ec-a39e-d00df54d50c1}\Elements\250000e0\Element (Type: REG_BINARY, Length: 8, Data: 01 00 00 00 00 00 00 00)

We can conclude that the same registry values are changed identically on the two different systems under a different object ID. In other words, these keys can be used for detection purposes, but we must exclude the object ID in the detection logic.

To determine the consistency of monitoring registry key value changes, the 3rd question needs to be answered: “When the value supplied to BCDedit and the related registry key holds the same value, is a registry set value event created?”

Via BCDedit, the same argument is run multiple times to see if a registry change occurs for every iteration.

Figure 8: Multiple Same Value Registry Changes

As seen above, a registry set event occurs even if the attribute is already configured with the same value.

Scenario 2: Apply BCD changes via direct registry manipulation.

In this scenario, the earlier determined keys are manually adjusted without the use of BCDedit. To confirm if the changes took effect, the recovery sequence is triggered.

To see if this scenario is even possible, the last question needs to be addressed: “Are the registry entries one-to-one related to the BCD, or are the registry keys only a visualization of the BCD?”

To apply changes to the registry directly, an elevated Powershell prompt is used:

Figure 9: Direct Registry Manipulation Failure With Elevated Prompt

It seems that even with an elevated PowerShell instance, direct registry manipulation isn’t possible. Elevating the prompt to system-level fixes this issue:

Figure 10: Successful Direct Registry Manipulation With System Prompt.

For convenience, the following PowerShell code was written. This code looks for registry objects that contain all the subkeys in our $pack variable. This is needed as the element names can be reused within the BCD hive. We simply enumerate the keys, filter on the objects that hold both keys, and write the new values to the respective keys.

$pack = @{ '16000009'=([byte[]](0x01)); '250000e0'=([byte[]](0x01,0x00,0x00,0x00,0x00,0x00,0x00,0x00)) }
cd 'HKLM:\BCD00000000\Objects\'
$items = $pack.Keys | %{ (ls -Path $_ -Recurse) }
$summary = $items | %{ $_.Name.Trim($_.PSChildName) } | group
# Keep only the BCD object that contains all element subkeys defined in $pack
$CorrectHive = ($summary | Where-Object { $_.Count -eq $pack.Keys.Count }).Name
$correctitems = $items | Where-Object { $_.Name.Contains($CorrectHive) }
$correctitems | %{ $_ | Set-ItemProperty -Name Element -Value $pack[$_.PSChildName] }
Figure 11: Registry Changes Via Powershell Script

To confirm the changes took effect, attempts were made to trigger the recovery sequence. This test was performed multiple times on different machines. We observed that these changes do disable the recovery feature. As a good measure, the settings were reversed, and the “Auto Repair” feature functioned as expected. In other words, it’s possible to change boot entries by changing the registry directly.

Detection.

As this is a known attack that primarily uses BCDedit, detection rules are available that look for suspicious BCDedit arguments. Below, KQL examples are provided for both MDE and Sysmon:

Sysmon_Parsed
| where EventID == 1
| where OriginalFileName =~ "bcdedit.exe"
| where CommandLine has "set"
| where CommandLine has_all ("recoveryenabled", "no") or CommandLine has_all ("bootstatuspolicy", "ignoreallfailures")
DeviceProcessEvents 
| where FileName =~ "bcdedit.exe" 
| where ProcessCommandLine has "set" 
| where ProcessCommandLine has_all ("recoveryenabled", "no") or ProcessCommandLine has_all ("bootstatuspolicy", "ignoreallfailures") 
| project DeviceName,ActionType,TimeGenerated,ProcessCommandLine
Figure 12: KQL Output BCDedit Arguments Checks.

In the above output, only BCD changes via BCDedit are logged and direct registry manipulations aren’t detected. Next, we monitor for registry manipulation of the elements “16000009” and “250000e0”. Again, KQL examples are provided for both MDE and Sysmon:

DeviceRegistryEvents 
| where TimeGenerated > ago (60d) 
| where not(InitiatingProcessFolderPath in~ (@"c:\$windows.~bt\sources\setuphost.exe", @"c:\windows\system32\bitlockerwizardelev.exe")) 
| where ActionType == "RegistryValueSet" 
| where RegistryKey has "elements" and RegistryKey has_any("16000009", "250000e0") 
| summarize by DeviceId, InitiatingProcessFolderPath, RegistryKey, RegistryValueData, ActionType, InitiatingProcessCommandLine, RegistryValueType
Sysmon_Parsed 
| where EventID == 13 
| where TargetObject has "elements" 
| where TargetObject has_any( "16000009","250000e0") 
| project TargetObject,Details,Image

Note that for systems running Sysmon, the following lines need to be added to the <RegistryEvent onmatch="include"> group in the Sysmon XML configuration file.

<TargetObject name="technique_id=T1490,technique_name=Disable Automatic Windows recovery" condition="contains all">HKLM\BCD;\Elements\16000009\Element</TargetObject> 
<TargetObject name="technique_id=T1490,technique_name=Disable Automatic Windows recovery" condition="contains all">HKLM\BCD;\Elements\250000e0\Element</TargetObject>
Figure 13: KQL Output Registry Value Monitoring

As expected, both BCDedit manipulating the BCD and direct registry manipulation are detected. However, the registry values only indicate “binary data” and not the actual binary value. This has a negative effect on the detection rule, as it’s impossible to determine what information was written into the registry key.

The rules were tested in several environments with 5000+ endpoints, the largest of which had 10000 endpoints. Overall they function well with a low false positive (FP) rating.

One possible FP is WMI initiating BCD changes; the command line below was seen on multiple occasions manipulating the 250000e0 registry key:

wmiprvse.exe -secured -Embedding

As WMI breaks the process ancestry chain, there is no easy way to deduce which process initiated the registry change. The only investigation you can perform is a timeline analysis on the machine, looking for WMI-related activity.

Final Conclusions

In this post, we analyzed the effect of changing commonly abused BCD attributes via:

  • BCDedit.exe
  • Direct Registry Manipulation.

The Telemetry showed:

  1. Direct registry manipulation has a one-to-one effect on the BCD.
  2. Registry set event occurs even if the attribute is already configured with the same value.
  3. Binary Data values aren’t visible in MDE and Sysmon.

We provided:

  • A more resilient detection rule for both MDE and Sysmon systems.

Although the detection rule is more resilient, due to the limitations of the logs and tools we aren’t able to distinguish between enabling and disabling boot entries. However, these queries were run against a vast number of endpoints with a low FP outcome.

Red Team Nuggets

MDE and Sysmon can only detect that a binary registry value was changed; the registry content itself is not visible to the analyst. This provides an opportunity for attackers to drop a payload in a binary registry key, with a low chance of being detected.
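As a quick illustration of that idea, arbitrary bytes can be stored in a REG_BINARY value with a few lines of Python. The key path, value name and payload bytes below are hypothetical, and HKCU is used so no elevation is required:

import winreg

payload = b"\x90\x90\x90\x90"  # placeholder bytes standing in for a real payload

# Store the bytes in a binary registry value under the current user hive.
key = winreg.CreateKey(winreg.HKEY_CURRENT_USER, r"Software\Demo")
winreg.SetValueEx(key, "Blob", 0, winreg.REG_BINARY, payload)
winreg.CloseKey(key)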

Mentions

Special thanks to our senior intrusion analyst member Remco Hofman for assisting in fine-tuning the detection logic.

Also a special thanks to Bart Parys for proofreading the post.

References

https://docs.microsoft.com/en-us/windows-hardware/drivers/bringup/boot-and-uefi
https://attack.mitre.org/techniques/T1490/
https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/windows-re-troubleshooting-features?view=windows-11
https://docs.microsoft.com/en-us/windows-hardware/drivers/devtest/bcdedit–set
https://docs.microsoft.com/en-us/windows-hardware/drivers/devtest/bcd-boot-options-reference
https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/
https://github.com/NVISOsecurity/blogposts/tree/master/Detecting-BCD-Changes-To-Inhibit-System-Recovery

Breaking out of Windows Kiosks using only Microsoft Edge

24 May 2022 at 08:00

Introduction

In this blog post, I will take you through the steps that I performed to get code execution on a Windows kiosk host using ONLY Microsoft Edge. Now, I know that there are many resources out there for breaking out of kiosks and that in general it can be quite easy, but this technique was a first for me.

Maybe a little bit of explanation of what a kiosk is for those that don’t know: a kiosk is basically a machine that hosts one or more applications for users with physical access to the machine (e.g. a reception booth with a screen where guests can register their arrival at a company). The main idea of a kiosk is that users should not be able to do anything else on the machine, except for using the hosted application(s) in their intended way.

I have to admit, I struggled quite hard to get the eventual code execution on the underlying host, but I was quite happy that I got there by using creative thinking. As far as I could see, I didn’t find a direct guide on how to break out of kiosks the way I did it, thus the reason I made this blog post. At the very end, I will also show a quick and easy breakout that I found in a John Hammond video.

Setup

To start things off, I set up my own little Windows Kiosk in a virtual machine. I’m not going to detail how to set up a kiosk in this blog post, but here’s a nice little video on Youtube on how to set one up yourself.

Our little kiosk

In this configuration, there is a URL bar and a keyboard available, which makes the kiosk escape quite a bit easier, but there are plenty of breakout tactics even without access to the URL bar. I’ll show an example later on.

As you can see, there is no internet access either, so we can’t simply browse to a kiosk pwning website to get an easy win. Furthermore, the Microsoft Edge browser in Windows Kiosk Mode is also restricted in several ways, which means that we can’t tamper with the settings or configurations. More information about the restrictions can be found here.

Escaping Browser Restrictions

First things first, it would be nice to escape the restricted Microsoft Edge browser so we can at least have some breathing room and more options available to us. Before we do this, let’s make use of the web URL bar to browse local directories and see the general structure of the underlying system.

Although this might possibly reveal interesting information, I sadly didn’t find a “passwords.txt” file with the local administrator password on our desktop.

If you use an alternative protocol in a URL bar, the operating system will, in some cases, prompt the user to select an application to execute the operation. Look what happens when we browse to “ftp://something”:

Interesting, right?

We can possibly browse and select any application to launch this URL with. Sadly, though, Windows Kiosk Mode is pretty locked down (so far) and only allows Microsoft Edge to run as configured. So let’s select Microsoft Edge as our application. NOTE that you should deselect the “Always use this app” checkbox, otherwise you won’t be able to do this again later. If you leave this checkbox selected (which it is by default), then you won’t get prompted when trying to use the same protocol again.

Look at that! We now have an unrestricted Microsoft Edge browser to play around with. Before we move on to code execution, let’s take a look at an alternative way we could’ve achieved this without using the URL bar.

So let’s go back to the restricted Edge browser and use some keyboard magic this time. As I’ve said earlier, we’re not going through all methodologies, but you can find a nice cheatsheet here and a blogpost made by Trustedsec over here .

In the restricted Edge browser, you can use keyboard combinations like “ctrl+o” (open file), “ctrl+s” (save file) and “ctrl+p” (print file) to launch an Explorer window. With the “ctrl+p” method, you’d also need to select “Microsoft Print to PDF” and then click the “Print” button to spawn the Explorer window. Let’s use “ctrl+o”:

And here it is, a nice way to spawn a new unrestricted Edge browser by just entering “msedge.exe” in the toolbar and pressing enter. At this point, I had tried to spawn “cmd.exe” or something similar, but everything was blocked by the kiosk configuration.

Gaining Code Execution

To gain code execution with the new, unrestricted Edge browser, I had to resort to some creative thinking. I already knew plain old Javascript wasn’t going to execute shell commands for me, except if NodeJS was installed on the system (spoiler alert, it wasn’t), so I started to look for something else.

After Googling around for a bit on how to execute shell commands using Javascript, I came across the following post on Stack Overflow, which details how we could use ActiveXObject to execute shell commands on Windows operating systems.

Bingo? Not quite yet, as there’s a catch to this. The usage of shell-executing functions in Javascript, such as ActiveXObject, does not work via Microsoft Edge, as they are quite insecure. I still tried it out, but the commands did indeed not execute. At this point, it became clear to me that I either had to find another route or dig deeper into how ActiveXObject and Microsoft Edge work.

Another round of Googling brought me to yet another post, which touches on the subject of running ActiveXObject via Microsoft Edge. One answer piqued my interest immediately:

Apparently, there’s a way to run Microsoft Edge in Internet Explorer mode? I had never heard of this before, as I usually don’t use Edge myself. Nevertheless, I looked further into this using Google and the unrestricted Edge browser that we spawned earlier.

So here’s how we’re going to run Microsoft Edge in Internet Explorer mode, but let’s go through it step by step. First, in our unrestricted Edge browser, we will go to Settings > Default browser:

Here, we can set “Allow sites to be reloaded in Internet Explorer mode” to “Allow” and we can also already add the full path to our upcoming webshell in the “Internet Explorer mode pages” tab. We can only save documents to our own user’s downloads folder, so that seems like a good location to store a “pwn.html” webshell. Note that “pwn.html” does not exist yet, we will create it later.

If we now click the blue restart button, there’s only one thing left to do and that’s getting the actual code to a html file on disk without using a text editor like Notepad. Some quick thinking led me to the idea of using the developer console to change the current page’s HTML code and then saving it to disk.

First, just to be sure, we need to get rid of other HTML / Javascript code that might interfere with our own code. Go ahead and delete pretty much everything on the page, except the already existing <html> and <body> tags. We will then write the webshell code snippet displayed below in the developer console:

<script>
    function shlExec() {
        var cmd = document.getElementById('cmd').value
        var shell = new ActiveXObject("WScript.Shell");
        try {
            var execOut = shell.Exec("cmd.exe /C \"" + cmd + "\"");
        } catch (e) {
            console.log(e);
        }

        var cmdStdOut = execOut.StdOut;
        var out = cmdStdOut.ReadAll();
        alert(out);
    }
</script>

<form onsubmit="shlExec()">
    Command: <input id="cmd" name="cmd" type="text">
    <input type="submit">
</form> 

Once all the default Edge clutter is removed, the page source should look something like this:

Let’s save this page (ctrl+s or via menu) as “pwn.html” as we planned earlier and then browse to it.

Notice the popup prompt at the bottom of the page asking us to allow blocked content. We’ll go ahead and allow said content. If we now use our little webshell to execute commands:

We will need to approve this popup window every time we execute commands, but look what we get after we accept!

So yeah, all of this is quite some effort, but at least it’s another way of gaining command execution on a kiosk system using only Microsoft Edge.

Alternative Easy Path

It was only after the project ended that I encountered a Youtube video from John Hammond where he completely invalidates my efforts and gets code execution in a really simple way. Honestly, I can’t believe I didn’t think about this before.

Starting from an unrestricted browser, one can simply start by downloading “powershell.exe” from “C:\Windows\System32\WindowsPowerShell\v1.0”.

Then in the downloads folder, rename the “powershell.exe” to “msedge.exe” and execute it.

Something like this could potentially be fixed by only allowing Edge to run from its original, full path, but it still works on the newest Windows 11 kiosk mode at the time of writing this blog post.

Mitigation

As for mitigating kiosk breakouts like these, there are a few things that I can advise you to help prevent them. Note that this is not a complete list.

  • If possible, hide the URL bar completely to further prevent the alternative protocol escape. If hiding the URL bar is not an option, maybe look into pre-selecting alternative protocol apps with the “Always use this application” checkmark.
  • Disable or remap keys like Ctrl, Alt, … It’s also possible to provide a keyboard that doesn’t have these keys.
  • Enable AppLocker to only allow applications to run from whitelisted destinations, such as “C:\Program Files”. Keep in mind that AppLocker can easily be misconfigured and then bypassed, so set it to be quite strict for kiosks.
  • Configure Microsoft Edge in the following ways:
    • Computer Configuration > Administrative Templates > Windows Components > Microsoft Edge > Enable “Prevent access to the about:flags page in Microsoft Edge”
    • Block access to “edge://settings”; you could do this by editing the local kiosk user’s Edge settings before deploying the kiosk mode itself

References

Microsoft – Configure Microsoft Edge kiosk mode

https://docs.microsoft.com/en-us/deployedge/microsoft-edge-configure-kiosk-mode

Github – Kiosk Example Page

https://github.com/KualiCo/kiosk

Pentest Diary – Kiosk breakout cheatsheet

http://pentestdiary.blogspot.com/2017/12/kiosk-breakout-cheatsheet.html

Trustedsec – Kiosk breakout keys in Windows

https://www.trustedsec.com/blog/kioskpos-breakout-keys-in-windows/

Youtube – How to set up Windows Kiosk Mode

https://www.youtube.com/watch?v=4dEYKLxXBxE

John Hammond – Kiosk Breakout

https://youtu.be/aBMvFmoMFMI?t=1385

Stack Overflow – Javascript shell execution

https://stackoverflow.com/questions/44825859/get-output-on-shell-execute-in-js-with-activexobject

Microsoft – ActiveXObject in Microsoft Edge

https://answers.microsoft.com/en-us/microsoftedge/forum/all/enable-activex-control-in-microsoft-edge-latest/979e619d-f9f2-47da-9e7d-ffd755234655

Browserhow – Microsoft Edge in IE Mode

https://browserhow.com/how-to-enable-and-use-ie-mode-in-microsoft-edge/

Super User – Disable Shortcut Keys

https://superuser.com/questions/1131889/disable-all-keyboard-shortcuts-in-windows

About The Author

Firat is a red teamer in the NVISO Software Security & Assessments team, focusing mostly on Windows Active Directory, malware and tools development, and internal / external infrastructure pentests.

You can follow NVISO Labs on Twitter to stay up to date on all our future research and publications.

What ISO27002 has in store for 2022

23 May 2022 at 08:00

In current times, security measures have become increasingly important for the continuity of our businesses, to guarantee the safety for our clients and to confirm our company’s reputation.

While thinking of security, our minds will often jump to the ISO/IEC 27001:2013 and ISO/IEC 27002:2013 standards. Especially in Europe & Asia, these have been the leading standards for security since, well… 2013. As of 2022, things will change: ISO has recently published its updated ISO/IEC 27002:2022 and is planning to release an updated ISO/IEC 27001 later this year. However, little to no changes are expected in ISO/IEC 27001:2022 beyond amending its Annex A to the new control structure of ISO/IEC 27002:2022.

No ISO standard stands on its own. This means that, by extension, the new standards will affect various other standards, including ISO/IEC 27017, ISO/IEC 27018 and ISO/IEC 27701. So, make sure to keep an eye on the new ISO/IEC 27001/27002 releases if you are certified for any of those as well.

“The new ISO this, the new ISO that”: By now you are probably wondering what they actually added, changed and removed. We’ve got you covered.

Let’s begin with the new title of the document, “Information security, cybersecurity and privacy protection – Information security controls”, instead of the previous iterations where it was called “Code of practice for information security controls”. The change in the title seems to acknowledge that there is a difference between information security and cybersecurity, adding the need to include data privacy to the topics covered in the standard.

Content-wise, the main changes introduced in ISO/IEC 27002:2022 revolve around the structure of the available controls, meaning the way these are organized within the standard itself. The re-organization of the controls aims to update the standard to reflect the current cyber threat landscape: ISO has increased the efficiency of the standard by merging certain high-level controls into a single control and by introducing more specific controls.

In particular, the controls have been re-grouped into four main categories, instead of the fourteen found in the 2013 version. These categories are as follows:

  • 5. Organizational controls (37 controls)
  • 6. People controls (8 controls)
  • 7. Physical controls (14 controls)
  • 8. Technological controls (34 controls)

On top of that, they have trimmed down the number of controls from a total of one hundred and fourteen in the previous version to ninety-three currently. This is not the end of the improvements on efficiency. Both in terms of reading and analysing the standard, the introduction of complementary tagging will certainly help you out during the implementation and preparation leading up to your certification. The following families of tags (attributes) are being introduced:

  • Control type
  • Information security properties
  • Cybersecurity concepts
  • Operational capabilities
  • Security domains

As mentioned above, ISO has done a fair bit of trimming in the controls; this was not limited to the removal of controls or the combination of multiple controls into one. ISO/IEC 27002:2022 also introduces eleven new controls. All these controls reflect the intention of ISO to have this latest version cover some of the most important trends regarding new technologies that have a strong relation with security, as reflected in the new title as well. Examples are: Threat Intelligence, Cloud Services and Data Privacy, of which the latter two are also covered by separate ISO standards, respectively ISO/IEC 27017 and ISO/IEC 27701.

We wonder: why does including these controls in ISO/IEC 27002:2022 help shape some of the new trends of cybersecurity? One explanation is the ever-growing threat landscape. The increase in vulnerabilities, like Log4j which we have seen in the past few months, increases the need to update ISO/IEC 27002. A second explanation lies in the demand for increased interoperability between ISO standards, achieved by unifying the controls and adding the aforementioned tagging system.

Proof of this interoperability can also be found if we take a look at operational capabilities such as Asset Management (Classification of Information and Asset Handling). These were already implicitly covering data privacy and threat intelligence in the 2013 version, and in the new release these topics are more prevalent among the controls. As with Asset Management, Access Control (Logging & Monitoring thereof and Access Management) will also be integrated with the introduction of the new cloud-related controls.

The interoperability is not limited to ISO either. Many of the operational capabilities that are covered by the controls of ISO/IEC 27001 are also covered by controls that are part of other certifications, like PCI-DSS, NIST, QTSP (ETSI), SWIFT and ISAE 3402. This is not to say that you should not aim for an ISO certification if your company already has one or more of those other certifications. Certifying to ISO/IEC 27001 should go rather smoothly if you already have a framework in place from a different certification, and there is no harm in improving your company’s security.

The ISO controls can offer an entirely new approach to mitigate certain risks that you would not have thought of otherwise. If you have the resources to expand your list of certifications with ISO/IEC 27001:2022, we can only recommend doing so and adding an extra layer of defence to your security framework.

We can already see some of you worry: “We’ve only recently got certified to ISO/IEC 27001?” or “We are in the middle of the audit, but it won’t be over by the time the new ISO/IEC 27001 releases, is all that effort wasted?”. We can assure you that there is no reason to panic. Only when the ISO/IEC 27001:2022 is released, will the ISO Accreditation Bodies be able to start certifying against it, as part of the standard 3-year audit cycle defined by ISO. However, companies will be granted a period to fully comprehend and adapt to the new standard before undergoing the audit for recertification, and ISO surveillance / (re)certification audits are not expected to use the new ISO/IEC 27001:2022 version for at least 1 year after its public release. Whether you start on your endeavour to become ISO/IEC 27001 certified or whether you want to commence with the transposing of your current ISO/IEC 27001:2013 certification to the new 2022 flavour, know that NVISO is there to help you! NVISO has developed a proven service to become ISO certified for the new adopters, as well as an “ISO quick scan” for the companies already holding the 2013 certification, where we assist and kickstart your transition to the ISO/IEC 27001:2022 certification.

Detecting & Preventing Rogue Azure Subscriptions

18 May 2022 at 15:41

A few weeks ago, NVISO observed how a phishing campaign resulted in a compromised user creating additional attacker infrastructure in their Azure tenant. While most of the malicious operations were flagged, we were surprised by the lack of logging and alerting on Azure subscription creation.

Creating a rogue subscription has a couple of advantages:

  • By default, all Azure Active Directory members can create new subscriptions.
  • New subscriptions can also benefit from a trial license granting attackers $200 worth of credits.
  • By default, even global administrators have no visibility over such new subscriptions.

In this blog post we will cover why rogue subscriptions are problematic and revisit a solution published a couple of years ago on Microsoft’s Tech Community. Finally, we will conclude with some hardening recommendations to restrict the creation and importation of Azure subscriptions.

Don’t become ‘that’ admin…

The deployments and recommendations discussed throughout this blog post require administrative privileges in Azure. As with any administrative actions, we recommend you exercise caution and consider any undesired side-effects privileged changes could cause.

With the above warning in mind, global administrators in a hurry can directly deploy the logging of available subscriptions (and read the hardening recommendations)…

Deploy to Azure

Azure’s Hierarchy

To understand the challenges behind logging and monitoring subscription creations, one must first understand what Azure’s hierarchy looks like.

In Azure, resources such as virtual machines or databases are logically grouped within resource groups. These resource groups act as logical containers for resources with a similar purpose. To invoice the usage of these resources, resource groups are part of a subscription, which also defines quotas and limits. Finally, subscriptions are part of management groups, which provide centralized management for access, policies or compliance.

Figure 1: Management levels and hierarchy in “Organize your Azure resources effectively” on docs.microsoft.com.

Most Azure components are resources, as is the case with monitoring solutions. As an example, creating an Azure Sentinel instance requires the prior creation of a subscription. This core hierarchy of Azure implies that monitoring and logging are commonly scoped to a specific set of subscriptions, as can be seen when creating alert rules.

Figure 2: Alert rules and their scope selection limited to predefined subscriptions in the Azure portal.

This Azure hierarchy creates a chicken-and-egg problem: monitoring for subscription creations requires prior knowledge of the subscription.

Another small yet non-negligible Azure detail is that, by default, even global administrators cannot view all subscriptions. As detailed in “Elevate access to manage all Azure subscriptions and management groups“, viewing all subscriptions first requires additional elevation through the Azure Active Directory properties, followed by the unchecking of the global subscription filter.

Figure 3: The Azure Active Directory access management properties.
Figure 4: The global subscriptions filter enabled by default in the Azure portal.

The following image slider shows the view prior (left) and after (right) the above elevation and filtering steps have been taken.

Figure 5: Subscriptions before (left) and after (right) access elevation and filter removal in the Azure portal.

In the compromise NVISO observed, the rogue subscriptions were all named “Azure subscription 1”, matching the default name enforced by Azure when leveraging free trials (as seen in the above figure).

Detecting New Subscriptions

A few years ago, a Microsoft Tech Community blog post covered this exact challenge and solved it through a logic app. The following section revisits their solution with a slight variation using Azure Sentinel and system-assigned identities. Through a simple logic app, one can store the list of subscriptions in a log analytics workspace, for which an alert rule can then be set up to alert on new subscriptions.
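For reference, the logic app’s “List Subscriptions” action essentially performs an authenticated GET against the Azure Resource Manager subscriptions endpoint. The following Python snippet is a rough, unofficial equivalent (a sketch only; it assumes the azure-identity package and an identity holding the Reader role discussed below) and can be handy to manually cross-check the inventory collected by the logic app:

import requests
from azure.identity import DefaultAzureCredential

# Acquire a token for the Azure Resource Manager API
credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default").token

# List all subscriptions visible to the authenticated identity
response = requests.get(
    "https://management.azure.com/subscriptions?api-version=2020-01-01",
    headers={"Authorization": f"Bearer {token}"},
)
for subscription in response.json().get("value", []):
    print(subscription["subscriptionId"], subscription["displayName"])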

Deploy to Azure

Collecting the Subscription Logs

The first step in collecting the subscription logs is to create a new empty logic app (see the “Create a Consumption logic app resource” documentation section for more help). Once created, ensure the logic app has its system-assigned identity enabled from its identity settings.

Figure 6: A logic app’s identity settings in the Azure portal.

To grant the logic app reader access to the Azure Management API, go to the management groups and open the “Tenant Root Group”.

Figure 7: The management groups in the Azure portal.

Within the “Tenant Root Group”, open the access control (IAM) settings and click “Add” to add a new access.

Figure 8: The tenant root group’s access control (IAM) in the Azure portal.

From the available roles, select the “Reader” role which will grant your logic app permissions to read the list of subscriptions.

Figure 9: A role assignment’s role selection in the Azure portal.

Once the role is selected, assign it to the logic app’s managed identity.

Figure 10: A role assignment’s member selection in the Azure portal.

When the logic app’s managed identity is selected, feel free to document the role assignment’s purpose and press “Review + assign”.

Figure 11: A role assignment’s member selection overview in the Azure portal.

With the role assignment performed, we can move back to the logic app and start building the logic to collect the subscriptions. From the logic app’s designer, select a “Recurrence” trigger which will trigger the collection at a set interval.

Figure 12: An empty logic app’s designer tool in the Azure portal.

While the original Microsoft Tech Community blog post had an hourly recurrence, we recommend lowering that value (e.g. to 5 minutes, the fastest interval supported for alerting) given we observed the rogue subscriptions being rapidly abused.

Figure 13: A recurrence trigger in a logic app’s designer tool.

With the trigger defined, click the “New step” button to add an operation. To recover the list of subscriptions search for, and select, the “Azure Resource Manager List Subscriptions” action.

Figure 14: Searching for the Azure Resource Manager in a logic app’s designer tool.

Select your tenant and proceed to click “Connect with managed identity” to have the authentication leverage the previously assigned role.

Figure 15: The Azure Resource Manager’s tenant selection in a logic app’s designer tool.

Proceed by naming your connection (e.g.: “List subscriptions”) and validate the managed identity is the system-assigned one. Once done, press the “Create” button.

Figure 16: The Azure Resource Manager’s configuration in a logic app’s designer tool.

With the subscriptions recovered, we can add another operation to send them into a log analytics workspace. To do so, search for, and select, the “Azure Log Analytics Data Collector Send Data” operation.

Figure 17: Searching for the Log Analytics Data Collector in a logic app’s designer tool.

Setting up the “Send Data” action requires the target Log Analytics’ workspace ID and primary key. These can be found in the Log Analytics workspace’s agents management settings.

Figure 18: A log analytics workspace’s agent management in the Azure portal.

In the logic app designer, name the Azure Log Analytics Data Collector connection (e.g.: “Send data”) and provide the target Log Analytics’ workspace ID and primary key. Once done, press the “Create” button.

Figure 19: The Log Analytics Data Collector’s configuration in a logic app’s designer tool.

We can then select the JSON body to send. As we intend to store the individual subscriptions, look for the “Item” dynamic content which will contain each subscription’s information.

Figure 20: The Log Analytics Data Collector’s JSON body selection in a logic app’s designer tool.

Upon selecting the “Item” content, a loop will automatically encapsulate the “Send Data” operation to cover each subscription. All that remains to be done is to name the custom log, which we’ll name “SubscriptionInventory”.

Figure 21: The encapsulation of the Log Analytics Data Connector in a for-each loop as seen in a logic app’s designer tool.

Once this last step is configured, the logic app is ready and can be saved. After a few minutes, the new custom SubscriptionInventory_CL table will get populated.

Alerting on New Subscriptions

While collecting the logs was the hard part, the last remaining step is to create an analytics rule to flag new subscriptions. As an example, the following KQL query identifies new subscriptions and is intended to run every 5 minutes.

let schedule = 5m;
SubscriptionInventory_CL
| summarize arg_min(TimeGenerated, *) by SubscriptionId
| where TimeGenerated > ago(schedule)

A slightly more elaborate query variant can take baselining and delays into account; it is available either packaged within the complete ARM (Azure Resource Manager) template or as a standalone rule template.

Once the rule is deployed, new subscriptions will result in incidents being created, as shown below. These incidents provide much-needed signals to identify potentially rogue subscriptions prior to their abuse.

Figure 22: A custom “Unfamiliar Azure subscription creation” incident in Azure Sentinel.

To empower your security team to investigate such events, we do recommend you grant them Reader rights on the “Tenant Root Group” management group to ensure these rights are inherited on new subscriptions.

Hardening an Azure Tenant

While logging and alerting are great, preventing an issue from taking place is always preferable. This section provides some hardening options that Azure administrators might want to consider.

Restricting Subscription Creation

Azure users are by default authorized to sign up for a cloud service and have an identity automatically created for them, a process called self-servicing. As we saw throughout this blog post, this opens an avenue for free trials to be abused. This setting can however be controlled by an administrator through the Set-MsolCompanySettings cmdlet’s AllowAdHocSubscriptions parameter.

AllowAdHocSubscriptions controls the ability for users to perform self-service sign-up. If you set that parameter to $false, no user can perform self-service sign-up.

docs.microsoft.com

As such, Azure administrators can prevent users from signing up for services (incl. free trials), after careful consideration, through the following MSOnline PowerShell command:

Set-MsolCompanySettings -AllowAdHocSubscriptions $false

Restricting Management Group Creation

Another Azure component users should not usually interact with are management groups. As stated previously, management groups provide centralized management for access, policies or compliance and act as a layer above subscriptions.

By default any Azure AD security principal has the ability to create new management groups. This setting can however be hardened in the management groups’ settings to require the Microsoft.Management/managementGroups/write permissions on the root management group.

Figure 23: The management groups settings in the Azure portal.

Restricting Subscriptions from Switching Azure AD Directories

One final avenue of exploitation which we haven’t seen being abused so far is the transfer of subscriptions into or from your Azure Active Directory environment. As transferring subscriptions poses a governance challenge, the subscriptions’ policy management portal offers two policies capable of prohibiting such transfers.

We highly encourage Azure administrators to consider enforcing these policies.

Figure 24: The subscriptions’ policies in the Azure portal.

Conclusions

In this blog post we saw how Azure’s default of allowing anyone to create subscriptions poses a governance risk. This weak configuration is actively being leveraged by attackers gaining access to compromised accounts.

We revisited a solution initially published on Microsoft’s Tech Community and proposed slight improvements to it alongside a ready-to-deploy ARM template.

Finally, we listed some recommendations to harden these weak defaults to ensure administrative-like actions are restricted from regular users.


You want to move to the cloud, but have no idea how to do this securely?
Having problems applying the correct security controls to your cloud environment?

NVISO approved as APT Response Service Provider

13 May 2022 at 10:02

NVISO is proud to announce that it has successfully qualified as an APT Response service provider and is now recommended on the website of the German Federal Office for Information Security (BSI).  

Advanced Persistent Threats (APT) are typically described as attack campaigns in which highly skilled, often state-sponsored, intruders orchestrate targeted, long-term attacks. Due to their complex nature, these types of attacks pose a serious threat to any company or organisation.  

The main purpose of the German Federal Office for Information Security (BSI) is to provide advice and support to operators of critical infrastructure and recommend qualified incident response service providers that comply  with their strict quality requirements. 

It is with great pride that we can now confirm that NVISO has passed the rigorous BSI assessment and we are thus listed as a recommended APT Response service provider.  

To attain the coveted BSI recommendation, we had to demonstrate the quality of the service offered by NVISO.  

This included amongst others:  

  • 24×7 readiness and availability of the incident response team  
  • An ISO27001 certification covering the entire organisation  
  • The ability to perform malware analysis and forensics (on hosts and on the network) 
  • Our experts spent multiple hours in interview sessions where they showcased their experience and expertise in dealing with cyber threats.  

Already a European cyber security powerhouse employing a variety of world-class experts (e.g. SANS Instructors, SANS Authors and forensic tool developers), this new recognition further highlights NVISO’s position as a leading European player that can deliver world-class cyber security services.  

Next to our incident response services, NVISO can also help you improve your overall cyber security posture before an incident happens. Our services span a variety of security consulting and managed security services.  

Please don’t hesitate to get in touch!  

[email protected] 
+49 69 9675 8554  

  

About NVISO  

Our mission is to safeguard the foundations of European society from cyber-attacks.  

NVISO is a pure-play cyber security services firm founded in 2013. Over 150 specialized security experts in Belgium, Germany, Austria and Greece help to make our mission a reality.  

DEUTSCH

NVISO ist stolz, bekannt zu geben, dass wir nach erfolgreicher Bewertung vom Bundesamt für Sicherheit in der Informationstechnik (BSI) als Qualifizierter APT-Response-Dienstleister gelistet sind. 

Advanced Persistent Threats (APT) sind gezielte Cyberangriffe über einen längeren Zeitraum hinweg. Sie gehen häufig von gut ausgebildeten, staatlich gesteuerten Angreifern aus. Aufgrund ihrer Komplexität sind sie eine ernsthafte Gefährdung für alle Unternehmen oder Institutionen. 

Die Hauptaufgabe des Bundesamts für Sicherheit in der Informationstechnik (BSI) ist, Betreiber Kritischer Infrastrukturen zu beraten und qualifizierte Incident Response Dienstleister zu empfehlen. 

Mit großem Stolz können wir jetzt bekannt geben, dass NVISO erfolgreich den aufwändigen Qualifizierungsprozess durchlaufen hat und wir als empfohlener APT-Dienstleister auf der Website des Bundesamts für Sicherheit in der Informationstechnik (BSI) gelistet sind. 

NVISO’s Servicequalität überzeugte das BSI anhand folgender Kriterien, worauf es die begehrte Empfehlung aussprach: 

  • 24×7 Bereitschaft des Incident Response Teams 
  • ISO27001 Zertifizierung für das gesamte Unternehmen  
  • Durchführung von Malware-Analyse, Host- und Netzwerkforensik 
  • Unsere Experten wurden in einem mehrstündigen Interview zu ihren Fähigkeiten und Erfahrungen im Umgang mit Cyber-Bedrohungen befragt 

Wir sind stolz darauf, dass erstklassige Experten bei NVISO arbeiten (u.a. SANS Instruktoren, SANS Autoren und Entwickler von Forensik-Tools). Die Auszeichnung des BSI hebt die Position von NVISO als echte Größe in Europa weiter hervor. 

NVISO bietet eine große Bandbreite herausragender Cybersecurity Services an. Wir helfen mit zielgerichteter Beratung und Managed Security Services ihre gesamte Sicherheitslage zu verbessern – noch bevor Zwischenfälle passieren. 

  

Wir freuen uns auf Ihre Anfrage! 

[email protected]
+49 69 9675 8554  

  

Über NVISO 

Unsere Mission ist die Grundfesten der europäischen Gesellschaft vor Cyberangriffen zu schützen. NVISO wurde 2013 als reines Cybersecurity-Unternehmen gegründet. Über 150 Experten in Deutschland, Österreich, Belgien und Griechenland arbeiten mittlerweile an der Umsetzung unserer Mission. 



Introducing pyCobaltHound – Let Cobalt Strike unleash the Hound

9 May 2022 at 13:02

Introduction

During our engagements, red team operators often find themselves operating within complex Active Directory environments. The question then becomes finding the needle in the haystack that allows the red team to further escalate and/or reach their objectives. Luckily, the security community has already come up with ways to assist operators in answering these questions, one of these being BloodHound. Having a BloodHound collection of the environment you are operating in, if OPSEC allows for it, often gives a red team a massive advantage.

As we propagate laterally throughout these environments and compromise key systems, we tend to compromise a number of users along the way. We therefore find ourselves running the same Cypher queries for each user (e.g. “Can this user get me Domain Admin?” or “Can this user help me get to my objective?”). You never know after all, there could have been a Domain Admin logged in to one of the workstations or servers you just compromised.

This led us to pose the question: “Can we automate this to simplify our lives and improve our situational awareness?”

To answer our question, we developed pyCobaltHound, an Aggressor script extension for Cobalt Strike that aims to provide a deep integration between Cobalt Strike and BloodHound.

Meet pyCobaltHound

You can’t release a tool without a fancy logo, right?

pyCobaltHound strives to assist red team operators by:

  • Automatically querying the BloodHound database to discover escalation paths opened up by newly collected credentials.
  • Automatically marking compromised users and computers as owned.
  • Allowing operators to quickly and easily investigate the escalation potential of beacon sessions and users.

To accomplish this, pyCobaltHound uses a set of built-in queries. Operators are also able to add/remove their own queries to fine tune pyCobaltHound’s monitoring capabilities. This grants them the flexibility to adapt pyCobaltHound on the fly during engagements to account for engagement-specific targets (users, hosts, etc.).

The pyCobaltHound repository can be found on the official NVISO Github page.

Credential store monitoring

pyCobaltHound’s initial goal was to monitor Cobalt Strike’s credential cache (View > Credentials) for new entries. It does this by reacting to the on_credentials event that Cobalt Strike fires when changes to the credential store are made. When this event is fired, pyCobaltHound will:

  1. Parse and validate the data received from Cobalt Strike
  2. Check if it has already investigated these entities by reviewing its cache
  3. Add the entities to a cache for future runs
  4. Check if the entities exist in the BloodHound database
  5. Mark the entities as owned
  6. Query the BloodHound database for each new entity using both built-in and custom queries.
  7. Parse the returned results, notify the operator of any interesting findings and write them to a basic HTML report.

Since all of this takes place asynchronously from the main Cobalt Strike client, this process should not block your UI, so you can keep working while pyCobaltHound investigates away in the background. If any of the queries for which pyCobaltHound was configured returns an object, it will notify the operator.

pyCobaltHound returning the number of hits for each query

If asked, pyCobaltHound will also output a simple HTML report where it will group the results per query. This is recommended, since this will allow the operator to find out which specific accounts they should investigate.

A sample pyCobaltHound report

Beacon management

After implementing the credential monitoring, we also enabled pyCobaltHound to interact with existing beacon sessions.

This functionality is especially useful when dealing with users and computers whose credentials have not been compromised (yet), but that are effectively under our control (e.g. because we have a beacon running under their session token).

This functionality can be found in the beacon context menu. Note that these commands can be executed on a single beacon or a selection of beacons.

Mark as owned

The Mark as owned functionality (pyCobaltHound > Mark as owned) can be used to mark a beacon (or collection of beacons) as owned in the BloodHound database.
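Under the hood, marking an entity as owned comes down to setting the owned property on the matching node in BloodHound’s Neo4j database. As a rough illustration only (this is not pyCobaltHound’s actual code, and the connection details and principal name are made up), an equivalent Cypher update via the official neo4j Python driver could look like this:

from neo4j import GraphDatabase

# Hypothetical connection details for a local BloodHound Neo4j instance
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "bloodhound"))

def mark_user_as_owned(principal):
    """Set the 'owned' property on a BloodHound user node, e.g. 'JDOE@CORP.LOCAL'."""
    query = (
        "MATCH (u:User {name: $name}) "
        "SET u.owned = true "
        "RETURN u.name AS name"
    )
    with driver.session() as session:
        return [record["name"] for record in session.run(query, name=principal.upper())]

print(mark_user_as_owned("jdoe@corp.local"))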

Investigation

The Investigate functionality (pyCobaltHound > Investigate) can be used to investigate the users and hosts associated with a beacon (or collection of beacons).

In both cases, the user and the computer associated with the beacon context will be marked as owned or investigated. Before it marks/investigates a computer, pyCobaltHound will check whether the computer account can be considered as “owned”. To do so, it will check if the beacon session is running as local admin, SYSTEM or a high integrity session as another user. This behaviour can be changed on the fly, however.

Entity investigation

In addition to investigating beacon sessions, we also implemented the option to freely investigate entities. This can be found in the main menu (Cobalt Strike > pyCobaltHound > Investigate).

This functionality is especially useful when dealing with users and computers whose credentials have not been compromised and are not under our control. We mostly use it to quickly identify if a specific account will help us reach our goals by running it through our custom pathfinding queries. A good use case is investigating each token on a compromised host to see if any of them are worth impersonating.

Standing on the shoulders of giants

pyCobaltHound would not have been possible without the great work done by dcsync in their pyCobalt repository. The git submodule that pyCobaltHound uses is a fork of their work with only some minor fixes done by us.

About the author

Adriaan is a senior security consultant at NVISO specialized in the execution of red teaming, adversary simulation and infrastructure related assessments.

Girls Day at NVISO Encourages Young Guests To Find Their Dream Job

2 May 2022 at 09:52

NVISO employees in Frankfurt and Munich showcased their work in cybersecurity to the girls with live hacking demos, a view behind the scenes of NVISO and hands-on tips for their personal online security. Participating in the Germany-wide “Girls Day”, we further widened the field of future career choices for the young visitors and steered them away from the idea of “stereotypical male jobs”.

Everyone and their dog knows that diversity is not only a nice gimmick; it has a real impact on the success of companies. “Delivering through Diversity”, a 2018 study by McKinsey, reported that companies are much more likely to make decisions that result in financial returns above their industry mean if the team shows gender diversity.

While the first programmers were women, the reality of today is that cybersecurity is a field with more employees who identify as male than female. The image of a typical IT geek with a hoodie in front of a PC could come to mind. But what is also true nowadays is that IT companies are looking for great new hires, independently of their gender. Given that, statistically, young women are doing better in German schools, there should be a lot of great female employees – if they pursued careers in STEM-related fields. Breaking into a new field is hard, but it gets easier when you see others doing it. For girls in STEM-related fields, it is valuable to have role models like our employees. This is why we at NVISO took the Girls Day initiative to heart and participated in it this week, to be an active part of a change that we see as fundamental.

Girls Day is an initiative that started in 2001 and can be seen as a one-day internship into technical jobs for girls. On the same day, a Boys Day takes place to encourage boys to explore career options in care or social jobs. The Girls’ Day is supported and sponsored by the federal ministries BMFSFJ and BMBF, which promote it throughout Germany and make sure that interested girls can miss school on the day they visit companies.

Within the last 20 years, the initiative has not only grown its target group; it is now the project with the most participants and is acknowledged worldwide, with enthusiasts all over the globe helping to fight the stereotypes that impact “typical” career choices. 72% of the 2021 participants said it was helpful to be there that day to learn about possible future jobs, according to “Datenbasis: Evaluationsergebnisse 2021” by the founders of Girls Day, Kompetenzzentrum Technik-Diversity-Chancengleichheit e.V..

NVISO participated in the initiative for the first time, initiated and led by Carola Wondrak. “It is more like a win-win-win situation for all participating parties”, she said. “Firstly, we can see what future employees are expecting of the company of the future and learn about their environment. Secondly, we do world-class work here and it is beneficial for us to showcase this and put our pin onto the map.” Grinning, she adds: “And thirdly, as the saying goes: you have only understood something well if you can explain it to a child.”

All of our German offices participated enthusiastically and welcomed our young guests on-site in Frankfurt and Munich for the day. “I don’t want to wait a year to come here,” said one of our participants from Frankfurt, while another girl from Munich is already planning her internship with us. This great feedback was due to an engaging agenda for the day, ranging from a live hacking demo of a well-known app to presentations of different fields of work within NVISO. Finding out what is “typical me”, instead of gender-based, is a first step to identifying potential future career paths.

We have an employee resource program called NEST (NVISO Equality: Stronger Together!) working on the continuous improvement of NVISO’s posture on diversity and inclusion. Through NEST, NVISO commits to remaining a great working environment, where all kinds of diversity are respected, as well as to acting by example to bring a significant added value to the whole European cybersecurity community.

We believe we made an impact: if the results are still accurate today, 49% of girls attending Girls Day say they can imagine working in the field their visited company operates in. We are looking forward to some Girls Day alumni among our new joiners!

If you have questions or want to apply straight away now, please reach out to the Girls Day Initiative Lead, Carola Wondrak, at [email protected] .

Analyzing VSTO Office Files

29 April 2022 at 09:25

VSTO Office files are Office document files linked to a Visual Studio Office File application. When opened, they launch a custom .NET application. There are various ways to achieve this, including methods to serve the VSTO files via an external web server.

An article was recently published on the creation of these document files for phishing purposes, and since then we have observed some VSTO Office files on VirusTotal.

Analysis Method (OOXML)

Sample Trusted Updater.docx (0/60 detections) appeared first on VirusTotal on 20/04/2022, 6 days after the publication of said article. It is a .docx file, and as can be expected, it does not contain VBA macros (per definition, .docm files contain VBA macros, .docx files do not):

Figure 1: typical VSTO document does not contain VBA code

Taking a look at the ZIP container (a .docx file is an OOXML file, i.e. a ZIP container containing XML files and other file types), there are some aspects that we don’t usually see in “classic” .docx files:

Figure 2: content of sample file

Worth noting is the following:

  1. The presence of files in a folder called vstoDataStore. These files contain metadata for the execution of the VSTO file.
  2. The timestamp of some of the files is not 1980-01-01, as it should be with documents created with Microsoft Office applications like Word.
  3. The presence of a docProps/custom.xml file.

Checking the content of the custom document properties file, we find 2 VSTO related properties: _AssemblyLocation and _AssemblyName:

Figure 3: custom properties _AssemblyLocation and _AssemblyName

The _AssemblyLocation in this sample is a URL to download a VSTO file from the Internet. We were not able to download the VSTO file, and neither was VirusTotal at the time of scanning. Thus we cannot determine if this sample is a PoC, part of a red team engagement or truly malicious. It is a fact, though, that this technique was known and used by red teams like ours prior to the publication of said article.

There’s little information regarding domain login03k[.]com, except that it appeared last year in a potential phishing domain list, and that VirusTotal tags it as DGA.

If the document uses a local VSTO file, then the _AssemblyLocation is not a URL:

Figure 4: referencing a local VSTO file

Analysis Method (OLE)

OLE files (the default Office document format prior to Office 2007) can also be associated with VSTO applications. We have found several examples on VirusTotal, but none that are malicious.
Therefore, to illustrate how to analyze such a sample, we converted the .docx maldoc from our first analysis to a .doc maldoc.

Figure 5: analysis of .doc file

Taking a look at the metadata with oledump‘s plugin_metadata, we find the _AssemblyLocation and _AssemblyName properties (with the URL):

Figure 6: custom properties _AssemblyLocation and _AssemblyName

Notice that this metadata does not appear when you use oledump’s option -M:

Figure 7: olefile’s metadata result

Option -M extracts the metadata using olefile’s methods, and this olefile Python module (on which oledump relies) does not (yet) parse user-defined properties.

Conclusion

To analyze Office documents linked with VSTO apps, search for custom properties _AssemblyLocation and _AssemblyName.
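For OOXML documents, this check is easy to script. The snippet below is a minimal sketch using only the Python standard library (the file name is the sample discussed above; adjust as needed):

import re
import zipfile

def find_vsto_properties(path):
    """Return the _AssemblyLocation and _AssemblyName custom properties of an OOXML file, if present."""
    with zipfile.ZipFile(path) as container:
        if "docProps/custom.xml" not in container.namelist():
            return {}
        xml = container.read("docProps/custom.xml").decode("utf-8", errors="replace")
        # Each custom property is a <property name="..."> element with its value in a child element
        return dict(re.findall(r'<property[^>]*name="(_Assembly\w+)"[^>]*>.*?>([^<]*)<', xml, flags=re.DOTALL))

print(find_vsto_properties("Trusted Updater.docx"))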

To detect Office documents like these, we have created some YARA rules for our VirusTotal hunting. You can find them on our Github here. Some of them are rather generic by design, and will generate too many hits for use in a production environment. They are originally designed for hunting on VT.

We will discuss these rules in detail in a follow-up blog post, but we already wanted to share them with you.

About the authors

Didier Stevens is a malware expert working for NVISO. Didier is a SANS Internet Storm Center senior handler and Microsoft MVP, and has developed numerous popular tools to assist with malware analysis. You can find Didier on Twitter and LinkedIn.

You can follow NVISO Labs on Twitter to stay up to date on all our future research and publications.

Cortex XSOAR Tips & Tricks – Execute Commands Using The API

28 April 2022 at 08:00

Introduction

Every automated task in Cortex XSOAR relies on executing commands from integrations or automations either in a playbook or directly in the incident war room or playground. But what if you wanted to incorporate a command or automation from Cortex XSOAR into your own custom scripts? For that you can use the API.

In the previous post in this series, we demonstrated how to use the Cortex XSOAR API in an automation. In this blog post, we will dive deeper into the API and show you how to execute commands using the Cortex XSOAR API.

To enable you to do this in your own automations, we have created a nitro_execute_api_command function which is available on the NVISO Github:

https://github.com/NVISOsecurity/blogposts/blob/master/CortexXSOAR/nitro_execute_api_command.py

Cortex XSOAR API Endpoints

When reviewing the Cortex XSOAR API documentation, you can find the following API endpoints:

  • /entry: API to create an entry (markdown format) in existing investigation
  • /entry/execute/sync: API to create an entry (markdown format) in existing investigation

Based on the description it might not be obvious, but both can be used to execute commands using the API. An entry in an existing investigation can contain a command which can be executed in the context of an incident or in the Cortex XSOAR playground.

We will be using the /entry/execute/sync endpoint, because this will wait for the command to be completed and the API request will return the command’s result. The /entry endpoint only creates an entry in the war room/playground without returning the result.

An HTTP POST request to the /entry/execute/sync endpoint accepts the following request body:

{
  "args": {
    "string": "<<_advancearg>>"
  },
  "data": "string",
  "id": "string",
  "investigationId": "string",
  "markdown": true,
  "primaryTerm": 0,
  "sequenceNumber": 0,
  "version": 0
}

To execute a simple print command in the context of an incident, you can use the following curl command:

curl -X 'POST' \
  'https://xsoar.dev/acc_wstinkens/entry/execute/sync' \
  -H 'accept: application/json' \
  -H 'Authorization: **********************' \
  -H 'Content-Type: application/json' \
  -d '{"investigationId": "423","data": "!Print value=\"Printed by API\""}
        '

The body of the HTTP POST request should contain the following keys:

  • investigationId: the XSOAR Incident ID
  • data: the command to execute
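The same request can of course be sent from your own scripts instead of curl. A minimal Python sketch (the server URL, API key and incident ID are placeholders taken from the example above) could look like this:

import json
import requests

XSOAR_URL = "https://xsoar.dev/acc_wstinkens"  # placeholder server URL
API_KEY = "**********************"             # placeholder API key

body = {
    "investigationId": "423",                   # the XSOAR incident ID
    "data": '!Print value="Printed by API"',    # the command to execute
}

response = requests.post(
    f"{XSOAR_URL}/entry/execute/sync",
    headers={"Authorization": API_KEY, "Content-Type": "application/json"},
    json=body,
)
print(json.dumps(response.json(), indent=2))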

After executing the HTTP POST request, you will see the entry created in the incident war room:

When you do not require the command to be executed in the context of a Cortex XSOAR incident, it is possible to execute it in the playground. For this, you should set the investigationId key to the playground ID.

This can be found by using the investigations/search API endpoint:

curl -X 'POST' \
  'https://xsoar.dev/acc_wstinkens/investigations/search' \
  -H 'accept: application/json' \
  -H 'Authorization: **********************' \
  -H 'Content-Type: application/json' \
  -d '{"filter": {"type": [9]}}'

This will return the following response body:

{
  "total": 1,
  "data": [
    {
      "id": "248b2bc0-def4-4492-8c80-d5a7e03be9fb",
      "version": 2,
      "cacheVersn": 0,
      "modified": "2022-04-08T14:20:00.262348298Z",
      "name": "Playground",
      "users": [
        "wstinkens"
      ],
      "status": 0,
      "type": 9,
      "reason": null,
      "created": "2022-04-08T13:56:03.294180041Z",
      "closed": "0001-01-01T00:00:00Z",
      "lastOpen": "0001-01-01T00:00:00Z",
      "creatingUserId": "wstinkens",
      "details": "",
      "systems": null,
      "tags": null,
      "entryUsers": [
        "wstinkens"
      ],
      "slackMirrorType": "",
      "slackMirrorAutoClose": false,
      "mirrorTypes": null,
      "mirrorAutoClose": null,
      "category": "",
      "rawCategory": "",
      "runStatus": "",
      "highPriority": false,
      "isDebug": false
    }
  ]
}

By using the id in the investigationId key in the request body of a HTTP POST request to /entry/execute/sync, it will be executed in the Cortex XSOAR playground:

curl -X 'POST' \
  'https://xsoar.dev/acc_wstinkens/entry/execute/sync' \
  -H 'accept: application/json' \
  -H 'Authorization: **********************' \
  -H 'Content-Type: application/json' \
  -d '{"investigationId": "248b2bc0-def4-4492-8c80-d5a7e03be9fb","data": "!Print value=\"Printed by API\""}'

By default, the Markdown output of the command visible in the war room/playground will be returned by the HTTP POST request:

curl -X 'POST' \
  'https://xsoar.dev/acc_wstinkens/entry/execute/sync' \
  -H 'accept: application/json' \
  -H 'Authorization: **********************' \
  -H 'Content-Type: application/json' \
  -d '{"investigationId": "248b2bc0-def4-4492-8c80-d5a7e03be9fb","data": "!azure-sentinel-list-tables"}'

This will return the result of the command as Markdown in the contents key:

[
  {
    "id": "[email protected]",
    "version": 1,
    "cacheVersn": 0,
    "modified": "2022-04-27T10:49:23.872137691Z",
    "type": 1,
    "created": "2022-04-27T10:49:23.87206309Z",
    "incidentCreationTime": "2022-04-27T10:49:23.87206309Z",
    "retryTime": "0001-01-01T00:00:00Z",
    "user": "",
    "errorSource": "",
    "contents": "### Azure Sentinel (NITRO) List Tables\n401 tables found in Sentinel Log Analytics workspace.\n|Table name|\n|---|\n| UserAccessAnalytics |\n| UserPeerAnalytics |\n| BehaviorAnalytics |\n| IdentityInfo |\n| ProtectionStatus |\n| SecurityNestedRecommendation |\n| CommonSecurityLog |\n| SecurityAlert |\n| SecureScoreControls |\n| SecureScores |\n| SecurityRegulatoryCompliance |\n| SecurityEvent |\n| SecurityRecommendation |\n| SecurityBaselineSummary |\n| Update |\n| UpdateSummary |\n",
    "format": "markdown",
    "investigationId": "248b2bc0-def4-4492-8c80-d5a7e03be9fb",
    "file": "",
    "fileID": "",
    "parentId": "[email protected]",
    "pinned": false,
    "fileMetadata": null,
    "parentContent": "!azure-sentinel-list-tables",
    "parentEntryTruncated": false,
    "system": "",
    "reputations": null,
    "category": "artifact",
    "note": false,
    "isTodo": false,
    "tags": null,
    "tagsRaw": null,
    "startDate": "0001-01-01T00:00:00Z",
    "times": 0,
    "recurrent": false,
    "endingDate": "0001-01-01T00:00:00Z",
    "timezoneOffset": 0,
    "cronView": false,
    "scheduled": false,
    "entryTask": null,
    "taskId": "",
    "playbookId": "",
    "reputationSize": 0,
    "contentsSize": 10315,
    "brand": "Azure Sentinel (NITRO)",
    "instance": "QA-Azure Sentinel (NITRO)",
    "InstanceID": "e39e69f0-3882-4478-824d-ac41089381f2",
    "IndicatorTimeline": [],
    "Relationships": null,
    "mirrored": false
  }
]

To return the data of the executed command as JSON, you should add the raw-response=true parameter to your command:

curl -X 'POST' \
  'https://xsoar.dev/acc_wstinkens/entry/execute/sync' \
  -H 'accept: application/json' \
  -H 'Authorization: **********************' \
  -H 'Content-Type: application/json' \
  -d '{"investigationId": "248b2bc0-def4-4492-8c80-d5a7e03be9fb","data": "!azure-sentinel-list-tables raw-response=true"}'

This will return the result of the command as JSON in the contents key:

[
  {
    "id": "[email protected]",
    "version": 1,
    "cacheVersn": 0,
    "modified": "2022-04-27T06:34:59.448622878Z",
    "type": 1,
    "created": "2022-04-27T06:34:59.448396275Z",
    "incidentCreationTime": "2022-04-27T06:34:59.448396275Z",
    "retryTime": "0001-01-01T00:00:00Z",
    "user": "",
    "errorSource": "",
    "contents": [
      "UserAccessAnalytics",
      "UserPeerAnalytics",
      "BehaviorAnalytics",
      "IdentityInfo",
      "ProtectionStatus",
      "SecurityNestedRecommendation",
      "CommonSecurityLog",
      "SecurityAlert",
      "SecureScoreControls",
      "SecureScores",
      "SecurityRegulatoryCompliance",
      "SecurityEvent",
      "SecurityRecommendation",
      "SecurityBaselineSummary",
      "Update",
      "UpdateSummary",
    ],
    "format": "json",
    "investigationId": "248b2bc0-def4-4492-8c80-d5a7e03be9fb",
    "file": "",
    "fileID": "",
    "parentId": "[email protected]80-d5a7e03be9fb",
    "pinned": false,
    "fileMetadata": null,
    "parentContent": "!azure-sentinel-list-tables raw-response=\"true\"",
    "parentEntryTruncated": false,
    "system": "",
    "reputations": null,
    "category": "artifact",
    "note": false,
    "isTodo": false,
    "tags": null,
    "tagsRaw": null,
    "startDate": "0001-01-01T00:00:00Z",
    "times": 0,
    "recurrent": false,
    "endingDate": "0001-01-01T00:00:00Z",
    "timezoneOffset": 0,
    "cronView": false,
    "scheduled": false,
    "entryTask": null,
    "taskId": "",
    "playbookId": "",
    "reputationSize": 0,
    "contentsSize": 9402,
    "brand": "Azure Sentinel (NITRO)",
    "instance": "QA-Azure Sentinel (NITRO)",
    "InstanceID": "e39e69f0-3882-4478-824d-ac41089381f2",
    "IndicatorTimeline": [],
    "Relationships": null,
    "mirrored": false
  }
]

nitro_execute_api_command()

Even in Cortex XSOAR automations, executing commands through the API can be useful. When using automations, you will see that outputting results to the war room/playground and context data is only done after the automation has been executed. If you, for example, want to perform a task which requires the entry ID of a war room/playground entry or of a file, you will need to run two consecutive automations. Another solution is executing a command using the Cortex XSOAR API, which will create the entry in the war room/playground during the runtime of your automation and return its entry ID. Later in this post, we will provide an example of how this can be used.

To execute command through the API from automations, we have created the nitro_execute_api_command function:

def nitro_execute_api_command(command: str, args: dict = None):
    """Execute a command using the Demisto REST API

    :type command: ``str``
    :param command: command to execute
    :type args: ``dict``
    :param args: arguments of command to execute

    :return: list of returned results of command
    :rtype: ``list``
    """
    args = args or {}

    # build the command string in the form !Command arg1="val1" arg2="val2"
    cmd_str = f"!{command}"

    for key, value in args.items():
        if isinstance(value, dict):
            value = json.dumps(json.dumps(value))
        else:
            value = json.dumps(value)
        cmd_str += f" {key}={value}"

    results = nitro_execute_command("demisto-api-post", {
        "uri": "/entry/execute/sync",
        "body": json.dumps({
            "investigationId": demisto.incident().get('id', ''),
            "data": cmd_str
        })
    })

    if not isinstance(results, list) \
            or not len(results)\
            or not isinstance(results[0], dict):
        return []

    results = results[0].get("Contents", {}).get("response", [])
    for result in results:
        if "contents" in result:
            result["Contents"] = result.pop("contents")

    return results

To use this function, the Demisto REST API integration needs to be enabled. How to set this up is described in the previous post in this series.

We have added this custom function to the CommonServerUserPython automation. This automation is created for user-defined code that is merged into each script and integration during execution. It will allow you to use nitro_execute_api_command in all your custom automations.
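As a quick illustration, and assuming CommonServerUserPython contains the function above, running the Print command used earlier through the API from within another automation then becomes a one-liner (a sketch; the printed value is arbitrary):

# Execute the Print command through the API in the context of the current incident
results = nitro_execute_api_command(command="Print", args={"value": "Printed by API"})

# Each returned entry contains, among others, the war room entry ID and its contents
for result in results:
    demisto.debug(f"Created entry {result.get('id')}: {result.get('Contents')}")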

Incident Evidences Example

To demonstrate the use case for executing commands through the Cortex XSOAR API in automations, we will again build upon the example of adding evidences to the incident Evidence board. In the previous posts, we added tags to war room/playground entries, which we then used in a second automation to search for them and add them to the incident Evidence board. This required a playbook which executes both automations consecutively.

Now we will show you how to do this through the Cortex XSOAR API, negating the requirement of a playbook.

First we need an automation which creates an entry in the incident war room:

results = [
    {
        'FileName': 'malware.exe',
        'FilePath': 'c:\\temp',
        'DetectionStatus': 'Detected'
    },
    {
        'FileName': 'evil.exe',
        'FilePath': 'c:\\temp',
        'DetectionStatus': 'Prevented'
    }
]
title = "Malware Mitigation Status"

return_results(
    CommandResults(
        readable_output=tableToMarkdown(title, results, None, removeNull=True),
        raw_response=results
    )
)

This automation creates an entry in the incident war room:

We call this automation using the nitro_execute_api_command function:

results = nitro_execute_api_command(command='MalwareStatus')

The entry ID of the war room entry will be available in the returned result in the id key:

[
    {
        "IndicatorTimeline": [],
        "InstanceID": "ScriptServicesModule",
        "Relationships": null,
        "brand": "Scripts",
        "cacheVersn": 0,
        "category": "artifact",
        "contentsSize": 152,
        "created": "2022-04-27T08:37:29.197107197Z",
        "cronView": false,
        "dbotCreatedBy": "wstinkens",
        "endingDate": "0001-01-01T00:00:00Z",
        "entryTask": null,
        "errorSource": "",
        "file": "",
        "fileID": "",
        "fileMetadata": null,
        "format": "markdown",
        "id": "[email protected]",
        "incidentCreationTime": "2022-04-27T08:37:29.197107197Z",
        "instance": "Scripts",
        "investigationId": "6974",
        "isTodo": false,
        "mirrored": false,
        "modified": "2022-04-27T08:37:29.197139897Z",
        "note": false,
        "parentContent": "!MalwareStatus",
        "parentEntryTruncated": false,
        "parentId": "[email protected]",
        "pinned": false,
        "playbookId": "",
        "recurrent": false,
        "reputationSize": 0,
        "reputations": null,
        "retryTime": "0001-01-01T00:00:00Z",
        "scheduled": false,
        "startDate": "0001-01-01T00:00:00Z",
        "system": "",
        "tags": null,
        "tagsRaw": null,
        "taskId": "",
        "times": 0,
        "timezoneOffset": 0,
        "type": 1,
        "user": "",
        "version": 1,
        "Contents": "### Malware Mitigation Status\n|DetectionStatus|FileName|FilePath|\n|---|---|---|\n| Detected | malware.exe | c:\\temp |\n| Prevented | evil.exe | c:\\temp |\n"
    }
]

Next, we get all entry IDs from the results of nitro_execute_api_command:

entry_ids = [result.get('id') for result in results]

Finally we loop through all entry IDs in the nitro_execute_api_command result and use the AddEvidence command to add them to the evidence board:

for entry_id in entry_ids:
    nitro_execute_command(command='AddEvidence', args={'entryIDs': entry_id, 'desc': 'Example Evidence'})

The war room entry created by the command executed through the Cortex XSOAR API will now be added to the Evidence Board of the incident:

References

https://docs.paloaltonetworks.com/cortex/cortex-xsoar/6-6/cortex-xsoar-admin/incidents/incident-management/war-room-overview

https://xsoar.pan.dev/docs/concepts/concepts#playground

https://xsoar.pan.dev/marketplace/details/DemistoRESTAPI

About the author

Wouter is an expert in the SOAR engineering team in the NVISO SOC. As the SOAR engineering team lead, he is responsible for the development and deployment of automated workflows in Palo Alto Cortex XSOAR which enable the NVISO SOC analysts to faster detect attackers in customers environments. With his experience in cloud and DevOps, he has enabled the SOAR engineering team to automate the development lifecycle and increase operational stability of the SOAR platform.

You can contact Wouter via his LinkedIn page.


Want to learn more about SOAR? Sign up here and we will inform you about new content and invite you to our SOAR For Fun and Profit webcast.
https://forms.office.com/r/dpuep3PL5W

Investigating an engineering workstation – Part 3

20 April 2022 at 08:00

In our third blog post (part one and two are referenced above) we will focus on information we can get from the project files themselves.

You may remember from Part 1 that a project created with the TIA Portal is not a single file. So far we talked about files with the “.apXX” extension, like “.ap15_1” in our example. These files are used to open projects in the TIA Portal, but they do not contain all the information that makes up a project. If you open an “.ap15_1” file, there is not much to see, as demonstrated below:

Figure 1: .ap15_1 file content excerpt

The file we are actually looking for is named “PEData.plf” and is located in the “System” folder within the project's “root” folder. The “root” folder is also the location of the “.ap15_1” file.

Figure 2: Showing content of project “root” and “System” folder
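When triaging a system that holds multiple projects, locating all “PEData.plf” files is a good first step. Below is a minimal Python sketch that only assumes the folder layout described above; the search root is a hypothetical example path:

from datetime import datetime, timezone
from pathlib import Path

# Hypothetical example path; point this at the folder holding the TIA Portal projects
search_root = Path(r"C:\Users\engineer\Documents\Automation")

# Each project keeps its PEData.plf in the "System" folder below the project root
for plf in search_root.rglob("System/PEData.plf"):
    info = plf.stat()
    modified = datetime.fromtimestamp(info.st_mtime, tz=timezone.utc)
    print(f"{plf}  {info.st_size} bytes  last modified {modified:%Y-%m-%d %H:%M:%S} UTC")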

As demonstrated below, the “PEData.plf” file is a binary file, and reviewing its content does not show any useful information at first sight.

Figure 3: Hexdump of a PEData.plf file

But we can get useful information from the file if we know what to look for. When we compare the “PEData” files of two project states, between which only some slight changes were made, we get a first idea of how the file is structured. In the following example two variables were added to a data block, the project was saved, downloaded to the PLC, and the TIA Portal was closed, saving all changes to the project. (If you are confused by the wording “downloaded to the PLC”, do not worry about it too much for now. This is simply the wording for getting the logic deployed on the PLC.)

The tool colordiff can provide a nice side-by-side view with the differences highlighted by using the following command (the files were renamed for better understanding):

colordiff -y <(xxd Original_state_PEData.plf) <(xxd CHANGES_MADE_PEData.plf)

Figure 4: colordiff output showing appended changes

The output shows that the changes made are appended to the “PEData.plf” file. Figure 4 shows the starting offset of the change, in our case at offset 0xA1395. We have performed multiple tests by applying small changes. In all cases, data was appended to the “PEData.plf” file. No data was overwritten or changed in earlier sections of the file.

To further investigate the changes, we extract the changes to a file:

dd skip=660373 if=CHANGES_MADE_PEData.plf of=changes.bin bs=1

We set the block size of the dd command to 1 byte and skip the first 660373 blocks (0xA1395 in hexadecimal). As demonstrated below, the resulting file, named “changes.bin”, has a size of 16794 bytes: exactly the difference in size between the two files we compared.

Figure 5: Showing file sizes of compared files and the extracted changes
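Because dd with a block size of 1 byte can be rather slow on larger files, the same extraction can also be done with a short Python sketch. The example below uses the file names from our test and additionally verifies the append-only observation by checking that the original file is a byte-for-byte prefix of the changed file:

# Extract the data appended to the changed PEData.plf file
original = open("Original_state_PEData.plf", "rb").read()
changed = open("CHANGES_MADE_PEData.plf", "rb").read()

# Our observation: changes are only appended, earlier content stays untouched
assert changed[:len(original)] == original, "file was not purely appended to"

appended = changed[len(original):]
print(f"changes start at offset {hex(len(original))}, {len(appended)} bytes appended")

with open("changes.bin", "wb") as out:
    out.write(appended)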

Trying to reverse engineer which bytes of the appended data might be header data and which are actual content is beyond the scope of this series of blog posts. But with the changes extracted and the use of tools like strings, we still get an insight into the activities.

Figure 6: Parts of strings output when run against “changes.bin” file

Looking through the whole output, we can immediately find out that the change ends with the string “##CLOSE##”. This is also the only appearance of this specific string in the extracted changes. Further, we can see that not far above the “##CLOSE##” string there is the string “$$COMMIT$”. In this case we find two occurrences of this specific string; we will explain later why this might be the case.

Figure 7: String occurrences of “##CLOSE” and “$$COMMIT$” at the end of changes

The next string of interest is “PLUSBLOCK”; if you review figure 6, you will already notice it in the 8th line. In the current example we get three occurrences of this string. Do not worry if you have already lost track of which string occurred how many times; we will provide an overview shortly. Before showing the overview, it helps to review more of the strings output.

Below you can review the changes we introduced in “CHANGES_MADE_PEData.plf” compared to the project state represented by “Original_state_PEData.plf”.

Figure 8: Overview of changes made to the project

In essence, we added two variables to “Data_block_1”. These are the variables “DB_1_var3” and “DB_1_var4”. These variable names are also present in the extracted changes as shown in figure 9. Please note, this block occurs two times in our extracted changes, and also contains the already existing variable names “DB_1_var1” and “DB_1_var2”.

Figure 9: Block in “changes.bin” containing variables names

One section we need to mention before we can start drawing conclusions from the overview is the “DownloadLog” section, showing up just once in our changes. We will have a look at the content of this section and which behaviour we observed later in this blog post.

Overview and behaviour

As promised earlier, we finally start showing an overview.

Line number | String / Section of interest
8   | PLUSBLOCK
22  | Section containing the variable names
35  | $$COMMIT$
48  | PLUSBLOCK
58  | Section containing the variable names
61  | PLUSBLOCK
78  | Start of “DownloadLog”
101 | $$COMMIT$
109 | ##CLOSE#
Table 1: Overview of string occurrences in “changes.bin”
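Such an overview does not have to be compiled by hand. The following Python sketch roughly emulates the strings tool (runs of at least 4 printable ASCII characters) and prints the line numbers at which the markers we identified so far appear in “changes.bin”:

import re

# Marker strings observed in the extracted changes
MARKERS = ("PLUSBLOCK", "$$COMMIT$", "##CLOSE#", "DownloadLog")

data = open("changes.bin", "rb").read()

# Roughly emulate "strings": runs of 4 or more printable ASCII characters
lines = re.findall(rb"[\x20-\x7e]{4,}", data)

for number, line in enumerate(lines, start=1):
    text = line.decode("ascii")
    for marker in MARKERS:
        if marker in text:
            print(f"{number:4d}  {marker}")

Note that the exact line numbers can differ slightly from the output of the real strings tool, depending on its version and the options used.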

The following steps were performed while introducing the change:

  1. Copying the existing project to a new location & opening it from there using the TIA Portal
  2. Adding variables “DB_1_var3” and “DB_1_var4” to the already existing data block “Data_block_1”
  3. Saving the project
  4. Downloading the project to the PLC
  5. Closing the TIA Portal and saving all changes

The “$$COMMIT$” string in lines 35 and 101 seems to align with our actions in step 3 (saving the project) and steps 4 & 5 (downloading the project & closing with save). Following this theory, if we skipped step 3, we should not get two occurrences of the variable name section and should not see the string “$$COMMIT$” twice. In a second series of tests we did exactly this, resulting in the following overview (of course the line numbers differ, as a different project was used in this test).

Line number | String / Section of interest
6   | PLUSBLOCK
28  | Section containing the variable names
35  | PLUSBLOCK
40  | Start of “DownloadLog”
70  | $$COMMIT$
75  | ##CLOSE#
Table 2: Overview of string occurrences in “changes.bin” for test run 2

This pretty much looks like what we expected: we only see one “$$COMMIT$”, one section with the variable names, and one less “PLUSBLOCK”. To further validate the theory, we did another test by creating a new, empty project and downloading it to the PLC (State 1). Afterwards we performed the following steps to reach State 2:

  1. Adding a new data block containing two variables
  2. Saving the project
  3. Adding two more variables to the data block (4 in total now)
  4. Saving the project
  5. Downloading the project to the PLC
  6. Closing the TIA Portal and saving all changes

If we again just focus on the additions made to the “PEData.plf” file, we get the following overview. Entries starting with “####” are comments we added to reference the steps mentioned above.

Line number | String / Section of interest
11  | PLUSBLOCK
32  | Start of “DownloadLog”
194 | Section containing the variable names (first two variables)
223 | $$COMMIT$
#### Comment: above added by step 2 (saving the project)
237 | PLUSBLOCK
266 | Section containing the variable names (all four variables)
270 | $$COMMIT$
#### Comment: above added by step 4 (saving the project)
273 | PLUSBLOCK
278 | Section containing the variable names (all four variables)
290 | PLUSBLOCK
456 | Start of “DownloadLog”
509 | $$COMMIT$
513 | ##CLOSE#
#### Comment: above added by steps 5 and 6
Table 3: Overview of string occurrences in test run 3

The occurrence of the “DownloadLog” at line 32 might come as a surprise at this point. As already stated earlier, the explanation of the “DownloadLog” will follow later. For now, just accept that it is there.

Conclusions so far

Based on the observations described above, we can draw the following conclusions:

  1. Adding a change to a project and saving it will cause the following structure: “PLUSBLOCK”,…changes…, “$$COMMIT$”
  2. Adding a change to a project, saving it and closing the TIA Portal will cause the following structure: “PLUSBLOCK”,…changes…, “$$COMMIT$”,”##CLOSE#”
  3. Downloading changes to a PLC and choosing save when closing the TIA Portal causes the following structure: “PLUSBLOCK”,…changes…, “PLUSBLOCK”,DownloadLog, “$$COMMIT$”,”##CLOSE”

DownloadLog

The “DownloadLog” is an XML-like structure, present in cleartext in the PEData.plf file. Figure 10 shows an example of a “DownloadLog”.

Figure 10: DownloadLog structure example

As you might have guessed already, the “DownloadTimeStamp” represents the date and time the changes were downloaded to the PLC. Date and time are written as a Unix epoch timestamp with nanosecond precision and can easily be converted with tools like CyberChef using the appropriate recipe. If we take the last value (“1641820395551408400”) from the “DownloadLog” example and convert it, we learn that a download to the PLC happened on Mon 10 January 2022 13:13:15.551 UTC. By definition Unix epoch timestamps are in UTC, and we could confirm that the times in our tests were created based on UTC and not on the local system time. As also demonstrated above, the “DownloadLog” can contain past timestamps, showing a kind of history of download activities. Remember what was mentioned above: changes to a project are appended to the file, and this also holds for the “DownloadLog”. So an existing “DownloadLog” is not updated; instead a new one, extended with a new “DownloadSet” node, is appended. Unfortunately, it is not as straightforward as it may sound at the moment.
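If CyberChef is not at hand, the conversion can also be scripted. The following sketch assumes that the timestamps appear exactly as shown in figure 10 (a DownloadTimeStamp attribute holding a nanosecond epoch value); it extracts all of them from a “PEData.plf” file and prints them as UTC date and time:

import re
from datetime import datetime, timezone

data = open("PEData.plf", "rb").read()

# DownloadTimeStamp="1641820395551408400" -> nanoseconds since the Unix epoch
for match in re.finditer(rb'DownloadTimeStamp="(\d+)"', data):
    nanoseconds = int(match.group(1).decode())
    timestamp = datetime.fromtimestamp(nanoseconds / 1_000_000_000, tz=timezone.utc)
    print(match.group(1).decode(), "->", timestamp.strftime("%a %d %B %Y %H:%M:%S.%f UTC"))

For the example value above this yields Mon 10 January 2022 13:13:15.551 UTC, matching the result mentioned earlier.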

Starting again with a fresh project, configuring the hardware (setting the IP address for the PLC), saving the project, downloading the project to the PLC and closing the TIA Portal (save all changes), we ended up with one “DownloadLog” containing one “DownloadTimeStamp” in the PEData.plf file:

  1. DownloadLog
    • DownloadTimeStamp=”1639754084193097100″

As a next step we added a data block, again saving the project, downloading it to the PLC and closing the TIA Portal while saving all changes. This resulted in the following overview of “DownloadLog” entries:

  1. DownloadLog
    • DownloadTimeStamp=”1639754084193097100″
  2. DownloadLog
    • DownloadTimeStamp=”1639754084193097100″
  3. DownloadLog
    • DownloadTimeStamp=”1639754084193097100″
    • DownloadTimeStamp=”1639754268869841200″

The first “DownloadLog” is repeated, and a third “DownloadLog” is added containing the date and time of the most recent download activity. So overall, two “DownloadLogs” were added.

In the third step we added variables to the data block, again followed by saving, downloading and closing the TIA Portal with save.

  1. DownloadLog
    • DownloadTimeStamp=”1639754084193097100″
  2. DownloadLog
    • DownloadTimeStamp=”1639754084193097100″
  3. DownloadLog
    • DownloadTimeStamp=”1639754084193097100″
    • DownloadTimeStamp=”1639754268869841200″
  4. DownloadLog
    • DownloadTimeStamp=”1639754084193097100″
    • DownloadTimeStamp=”1639754268869841200″
    • DownloadTimeStamp=”1639754601898276800″

This time only one “DownloadLog” was added, which repeats the content of “DownloadLog” number 3 and also contains the most recent date and time. We repeated the same actions as in step 3 again and observed the same behaviour: one “DownloadLog” is added, which repeats the content of the previous “DownloadLog” and adds the date and time of the current download activity. After doing this, we did not observe any more “DownloadLog” entries being added to the “PEData.plf” file, no matter which changes we introduced and downloaded to the PLC. In further testing we encountered differing behaviour regarding whether a “DownloadLog” is repeated as a whole or not (occurrence 2 in the examples above). Currently we believe that only 4 “DownloadLog” entries showing new download activity are added to the “PEData.plf” file; a “DownloadLog” entry that is just repeated is not counted.

Conclusion on the DownloadLog
  1. When “DownloadTimeStamp” entries are present in a “PEData.plf” file, they do represent download activity.
  2. If there are 4 unique “DownloadLog” entries in a “PEData.plf” file, we cannot tell (from the “PEData.plf” file) if there was any download activity after the most recent timestamp in the last occurrence of a unique “DownloadLog” entry.

Overall Conclusions & Outlook

We have shown that changes made to a project can be isolated and, to a certain extent, analysed with tools like strings, xxd or diff. Further, we have demonstrated that we can reconstruct download activity from a project, at least up to the first four download actions. Last but not least, we can conclude that more testing and research has to be performed to get a better understanding of the data points that can be extracted from projects. For example, we did not investigate whether we can identify strings representing the project name or the author name in the “PEData.plf” file without knowing them upfront. We also only looked at Siemens TIA Portal version 15.1; different versions might produce other formats or behave differently. Finally, Siemens is not the only vendor that plays a relevant role in this area.

In the next part we will have a look at the network traffic observed in our testing. Stay tuned!

About the Author

Olaf Schwarz is a Senior Incident Response Consultant at NVISO. You can find Olaf on Twitter and LinkedIn.

You can follow NVISO Labs on Twitter to stay up to date on all our future research and publications.
