
Data Protection Day 2024: As Technology and Threats Evolve, Data Protection Is Paramount

31 January 2024 at 20:13

Today’s cybersecurity landscape poses one of the most significant risks to data. This holds true for organizations of all sizes, across all industries, tasked with protecting their most essential data amid an increasingly regulated environment and faster, more innovative adversaries.

Recent years have introduced a steady drumbeat of new data privacy regulations. There are now 14 U.S. states that have passed privacy laws. In July 2023, the Securities and Exchange Commission (SEC) adopted new rules requiring organizations to disclose material cybersecurity incidents, as well as information regarding their risk management, strategy and governance. On a global level, dozens of countries have updated their guidance on data privacy.  

Organizations must now comply with an “alphabet soup” of data protection requirements including GDPR, CCPA, APPI, PDPA and LGPD. Some of these are evolving to incentivize the adoption of stronger security practices. Newly updated regulations in Brazil, for example, give breached organizations a fine reduction of up to 75% if they have state-of-the-art protection in place at the time of a cyberattack. 

The list is growing: In 2024, many organizations will face new requirements stemming from the SEC’s new rules and state privacy laws, including amendments to the CCPA, industry-specific mandates, and those imposed on critical infrastructure by the Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA). These developments include new incident reporting obligations and requirements to implement certain security technologies, as well as demonstrate compliance through cybersecurity audits, risk assessments, public disclosures and other measures. 

These myriad legal requirements broadly raise the bar for “reasonable” security. However, adversaries typically move faster than data protection mandates can keep up. Organizations must pay close attention to how adversaries are evolving their techniques and determine whether they’re prepared to defend their data against modern threats.

Data Extortion and the Defender’s Dilemma 

The emergence of new regulations has been a game-changer for adversaries and defenders alike. Protecting against data breaches has only grown more challenging as threat actors evolve their tradecraft and quickly learn the pressure these regulations put on breached organizations.

Today’s adversaries are working smarter, not harder. This is clear in the growth of data extortion, which has emerged in recent years as an easier, less risky means for adversaries to profit. Threat actors are shifting away from noisy ransomware campaigns, which typically trigger alarm bells in security tools — instead, they are quietly stealing victims’ data and then threatening to leak it if their financial demands aren’t met. 

The rise in data extortion has corresponded with adversaries increasingly targeting identities, a critical threat vector organizations must consider as they build their data protection plans.  Rather than relying on malware-laced phishing emails to breach target organizations, they can use a set of compromised credentials to simply log in. A growing number of access broker advertisements enables the sale of credentials, vulnerability exploits and other forms of illicit access: Last year, CrowdStrike reported a 147% increase in access broker ads on the dark web. Adversaries can now more stealthily infiltrate organizations, take valuable data and demand their price, putting victims in a tough position.

Data protection regulations change the calculus for organizations hit with data extortion — and adversaries know it. When threat actors steal information and tell their victims they’re in violation of HIPAA, GDPR, CCPA or other regulations, the stakes are higher. They know exactly how much an extortion attack will cost a business once it’s disclosed to regulators, and they can use this to coerce organizations into paying them instead. This may be a false choice, as many disclosure requirements apply regardless, but the coercion is real.

There are other ways adversaries use awareness of regulations to their advantage. In one 2023 case, a ransomware gang filed an SEC whistleblower complaint directed at one of its victims. The complaint, filed before the new SEC rules actually went into effect, attempted to claim that the victim was in violation of its duty to disclose a material cyber incident.

Organizations must be incentivized to protect their data from modern threats. They should not feel stuck between the fear of reporting a breach and the pressure to meet adversaries’ ransom demands. With the right safeguards in place, businesses can protect their data from adversaries’ evolving attempts to access it. This is where CrowdStrike comes in. 

How CrowdStrike Can Help 

As we recognize Data Protection Day 2024, it is essential we consider what data protection involves and how critical cybersecurity is — not only for compliance, but for protecting privacy. Organizations must adopt best practices to protect their data in addition to meeting compliance requirements.

Visibility is essential to maintain regulatory compliance and protect sensitive data from today’s adversaries. If you don’t have visibility into your data flows, your credentials or the sensitive data your organization holds, how can you know whether that data is at risk? 

An organization’s data is among its most valuable assets — and adversaries are after it. Protecting that data should be a top priority. CrowdStrike Falcon® Data Protection provides deep, real-time visibility into what’s happening with your sensitive data as it flows across endpoints, cloud, web browsers and SaaS applications. As the modern approach to data protection, our technology ensures compliance with minimal configuration and provides comprehensive protection against modern threats. 

It is more important than ever for organizations to understand that data protection and data security are interdependent and cannot be considered in isolation. Both are critical in protecting privacy. Moreover, if personal data is stolen in a cyberattack, those affected can claim damages — but certain jurisdictions provide fine and liability mitigations where the breached organization can prove its cybersecurity protections were reasonable and state-of-the-art.

In this threat landscape and regulatory environment, Data Protection Day provides an opportunity for privacy and security teams to align on modern threats to privacy, risks of non-compliance and the best technical and organizational means to protect data.


CrowdStrike’s View on the New U.S. Policy for Artificial Intelligence

21 November 2023 at 20:37

The major news in technology policy circles is this month’s release of the long-anticipated Executive Order (E.O.) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. While E.O.s govern policy areas within the direct control of the U.S. government’s Executive Branch, they are important broadly because they inform industry best practices and can even potentially inform subsequent laws and regulations in the U.S. and abroad.

Accelerating developments in AI — particularly generative AI — over the past year or so have captured policymakers’ attention. And calls from high-profile industry figures to establish safeguards for artificial general intelligence (AGI) in particular have further heightened attention in Washington, D.C. In that context, the E.O. should be viewed as an early and significant step addressing AI policy rather than a final word.

Given CrowdStrike’s extensive experience with AI since the company’s founding in 2011, we want to highlight a few key topics that relate to innovation, public policy and cybersecurity.

The E.O. in Context

Like the technology it seeks to influence, the E.O. itself has many parameters. Its 13 sections cover a broad cross section of administrative and policy imperatives. These range from policing and biosecurity to consumer protection and the AI workforce. Appropriately, there’s significant attention to the nexus between AI and cybersecurity, which is covered at some length in Section 4.

Before diving into specific cybersecurity provisions, it is important to highlight a few observations on the document’s overall scope and approach. Fundamentally, the document strikes a reasonable balance between exercising caution regarding potential risks and enabling innovation, experimentation and adoption of potentially transformational technologies. In complex policy areas, some stakeholders will always disagree with how to achieve balance, but we’re encouraged by several attributes of the document.

First, in numerous areas of the E.O., agencies are designated as “owners” of specific next steps. This clarifies for stakeholders how to provide feedback and reduces the odds for gaps or duplicative efforts.

Second, the E.O. outlines several opportunities for stakeholder consultation and feedback. These will likely materialize through Request for Comment (RFC) opportunities issued by individual agencies. Further, there are several areas where the E.O. tasks existing — or establishes new — advisory panels to integrate structured stakeholder feedback on AI policy issues.

Third, the E.O. mandates a brisk progression for next steps. Many E.O.s require tasks to be finished in 30- or 60-day windows, which are difficult for agencies to meet at all, let alone in deliberate fashion. This document in many instances provides for 240-day deadlines, which should enable 30- and 60-day engagement periods through RFCs, as outlined above.

Finally, the E.O. states plainly that “as generative AI products become widely available and common in online platforms, agencies are discouraged from imposing broad general bans or blocks on agency use of generative AI.” This should help ensure that government agencies explore positive use cases for leveraging AI for their own mission areas. If history is any guide, it’s easy to imagine a scenario where a talented junior staffer at a given agency identifies a key way to leverage AI at some time next year that no one could easily forecast this year. It would be unwise to foreclose that possibility, as innovation should be encouraged inside and outside of government.

AI and Cybersecurity Provisions

On cybersecurity specifically, the E.O. touches on a number of key areas. It’s good to see specific callouts to agencies like the National Institute of Standards and Technology (NIST), Cybersecurity and Infrastructure Security Agency (CISA) and Office of the National Cyber Director (ONCD) that have significant applied cyber expertise.

One section of the E.O. attempts to reduce risks of synthetic content — that is, generative audio, imagery and text. It’s clear the measures cited here are exploratory in nature rather than rigidly prescriptive. As a community, we’ll need to innovate solutions to this problem set. And with U.S. elections around the corner, we hope to see rapid advancements in this space.

In many instances, the E.O.’s authors paid close attention to enumerating AI policy through established mechanisms, some of which are closely related to ongoing cybersecurity efforts. This includes the direction to align with the AI Risk Management Framework (NIST AI 100-1) and the Secure Software Development Framework. This will reduce risks associated with establishing new processes, while enabling more coherent frameworks for areas where there are only subtle distinctions or boundaries between, for example, software, security and AI.

The document also attempts to leverage sector risk management agencies (SRMAs) to drive better preparedness within critical infrastructure sectors. Specifically, it mandates:

Within 90 days of the date of this order, and at least annually thereafter … relevant SRMAs, in coordination with the Director of the Cybersecurity and Infrastructure Security Agency within the Department of Homeland Security for consideration of cross-sector risks, shall evaluate and provide to the Secretary of Homeland Security an assessment of potential risks related to the use of AI in critical infrastructure sectors involved, including ways in which deploying AI may make critical infrastructure systems more vulnerable to critical failures, physical attacks, and cyber attacks, and shall consider ways to mitigate these vulnerabilities.

This is important, but we also encourage these working groups to consider benefits along with risks. There are many areas where AI can drive better protection of critical assets. When done correctly, AI can rapidly surface hidden threats, accelerate the decision making of less experienced security analysts and simplify a multitude of complex tasks.

At CrowdStrike, AI has been fundamental to our approach from the beginning and has been built natively into the CrowdStrike Falcon® platform. Beyond replacing legacy AV, our platform uses analytics to help prioritize critical vulnerabilities that introduce risk and employs the power of AI to generate and validate new indicators of attack (IOAs). With Charlotte AI, CrowdStrike is harnessing the power of generative AI to make customers faster at detecting and responding to incidents, more productive by automating manual tasks, and more valuable by learning new skills with ease. This type of AI-fueled innovation is fundamental to keep pace with ever-evolving adversaries incorporating AI into their own tactics, techniques and procedures.

In Summary

This E.O. represents a key step in the evolution of U.S. AI policy. It’s also particularly timely. As we described in our recent testimony to the House Judiciary Committee, AI is key to driving better cybersecurity outcomes and is also of increasing interest to cyber threat actors. As a community, we’ll need to continue to work together to ensure defenders realize the leverage AI can provide, while mitigating whatever harms might come from threat actors’ abuse of AI systems.

This article was first published in SC Magazine: The Biden EO on AI: A stepping stone to the cybersecurity benefits of AI


Emulation of Kernel Mode Rootkits With Speakeasy

20 January 2021 at 16:45

In August 2020, we released a blog post about how the Speakeasy emulation framework can be used to emulate user mode malware such as shellcode. If you haven’t had a chance, give the post a read today.

In addition to user mode emulation, Speakeasy also supports emulation of kernel mode Windows binaries. When malware authors employ kernel mode malware, it will often be in the form of a device driver whose end goal is total compromise of an infected system. The malware most often doesn’t interact with hardware and instead leverages kernel mode to fully compromise the system and remain hidden.

Challenges With Dynamically Analyzing Kernel Malware

Ideally, a kernel mode sample can be reversed statically using tools such as disassemblers. However, binary packers just as easily obfuscate kernel malware as they do user mode samples. Additionally, static analysis is often expensive and time consuming. If our goal is to automatically analyze many variants of the same malware family, it makes sense to dynamically analyze malicious driver samples.

Dynamic analysis of kernel mode malware can be more involved than with user mode samples. In order to debug kernel malware, a proper environment needs to be created. This usually involves setting up two separate virtual machines as debugger and debuggee. The malware can then be loaded as an on-demand kernel service where the driver can be debugged remotely with a tool such as WinDbg.

Several sandbox style applications exist that use hooking or other monitoring techniques but typically target user mode applications. Having similar sandbox monitoring work for kernel mode code would require deep system level hooks that would likely produce significant noise.

Driver Emulation

Emulation has proven to be an effective analysis technique for malicious drivers. No custom setup is required, and drivers can be emulated at scale. In addition, maximum code coverage is easier to achieve than in a sandbox environment. Often, rootkits may expose malicious functionality via I/O request packet (IRP) handlers (or other callbacks). On a normal Windows system these routines are executed when other applications or devices send input/output requests to the driver. This includes common tasks such as reading, writing, or sending device I/O control (IOCTLs) to a driver to execute some type of functionality.

Using emulation, these entry points can be called directly with doped IRP packets in order to identify as much functionality as possible in the rootkit. As we discussed in the first Speakeasy blog post, additional entry points are emulated as they are discovered. A driver’s DriverEntry routine is responsible for initializing a function dispatch table that is called to handle I/O requests. Speakeasy will attempt to emulate each of these functions after the main entry point has completed by supplying a dummy IRP. Additionally, any system threads or work items that are created are sequentially emulated in order to get as much code coverage as possible.
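The discovery-driven approach described above amounts to a simple worklist: emulate the main entry point, collect whatever dispatch routines, system threads, or work items it registers, and emulate each of those in turn. The sketch below is a simplified illustration of that scheduling idea, not Speakeasy’s actual implementation; the entry point names are hypothetical:

```python
from collections import deque

def emulate_driver(entry_point, emulate_fn):
    """Emulate an entry point, then every entry point it registers.

    emulate_fn(name) "runs" one entry point and returns a list of newly
    discovered entry points (IRP handlers, system threads, work items).
    """
    seen = set()
    queue = deque([entry_point])
    order = []
    while queue:
        ep = queue.popleft()
        if ep in seen:
            continue
        seen.add(ep)
        order.append(ep)
        for new_ep in emulate_fn(ep):
            queue.append(new_ep)
    return order

# Toy driver: DriverEntry registers two IRP handlers and a system thread,
# and the thread in turn queues a work item.
discovered = {
    "DriverEntry": ["IRP_MJ_CREATE", "IRP_MJ_DEVICE_CONTROL", "system_thread_0"],
    "system_thread_0": ["work_item_0"],
}
print(emulate_driver("DriverEntry", lambda ep: discovered.get(ep, [])))
# → ['DriverEntry', 'IRP_MJ_CREATE', 'IRP_MJ_DEVICE_CONTROL', 'system_thread_0', 'work_item_0']
```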

Emulating a Kernel Mode Implant

In this blog post, we will show an example of Speakeasy’s effectiveness at emulating a real kernel mode implant family publicly named Winnti. This sample was chosen despite its age because it transparently implements some classic rootkit functionality. The goal of this post is not to discuss the analysis of the malware itself as it is fairly antiquated. Rather, we will focus on the events that are captured during emulation.

The Winnti sample we will be analyzing has SHA256 hash c465238c9da9c5ea5994fe9faf1b5835767210132db0ce9a79cb1195851a36fb and the original file name tcprelay.sys. For most of this post, we will be examining the emulation report generated by Speakeasy. Note: many techniques employed by this 32-bit rootkit will not work on modern 64-bit versions of Windows due to Kernel Patch Protection (PatchGuard) which protects against modification of critical kernel data structures.

To start, we will instruct Speakeasy to emulate the kernel driver using the command line shown in Figure 1. We instruct Speakeasy to create a full memory dump (using the “-d” flag) so we can acquire memory later. We supply the memory tracing flag (“-m”) which will log all memory reads and writes performed by the malware. This is useful for detecting things like hooking and direct kernel object manipulation (DKOM).


Figure 1: Command line used to emulate the malicious driver

Speakeasy will then begin emulating the malware’s DriverEntry function. The entry point of a driver is responsible for setting up passive callback routines that will service user mode I/O requests as well as callbacks used for device addition, removal, and unloading. Reviewing the emulation report for the malware’s DriverEntry function (identified in the JSON report with an “ep_type” of “entry_point”), shows that the malware finds the base address of the Windows kernel. The malware does this by using the ZwQuerySystemInformation API to locate the base address for all kernel modules and then looking for one named “ntoskrnl.exe”. The malware then manually finds the address of the PsCreateSystemThread API. This is then used to spin up a system thread to perform its actual functionality. Figure 2 shows the APIs called from the malware's entry point.
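Filtering the JSON report by entry point type is a convenient way to isolate these events. The sketch below assumes a report layout with an “entry_points” array whose members carry the “ep_type” field mentioned above plus a logged API list; the exact schema and field names should be verified against Speakeasy’s actual output:

```python
import json

def apis_by_entry_type(report, ep_type):
    """Return the API names logged for each entry point of the given type."""
    return [
        [api["api_name"] for api in ep.get("apis", [])]
        for ep in report.get("entry_points", [])
        if ep.get("ep_type") == ep_type
    ]

# Hypothetical, heavily trimmed report illustrating the structure.
report = json.loads("""
{
  "entry_points": [
    {"ep_type": "entry_point",
     "apis": [{"api_name": "ntoskrnl.ZwQuerySystemInformation"},
              {"api_name": "ntoskrnl.PsCreateSystemThread"}]},
    {"ep_type": "system_thread", "apis": []}
  ]
}
""")
print(apis_by_entry_type(report, "entry_point"))
# → [['ntoskrnl.ZwQuerySystemInformation', 'ntoskrnl.PsCreateSystemThread']]
```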


Figure 2: Key functionality in the tcprelay.sys entry point

Hiding the Driver Object

The malware attempts to hide itself before executing its main system thread. The malware first looks up the “DriverSection” field in its own DRIVER_OBJECT structure. This field holds a linked list containing all loaded kernel modules and the malware attempts to unlink itself to hide from APIs that list loaded drivers. In the “mem_access” field in the Speakeasy report shown in Figure 3, we can see two memory writes to the DriverSection entries before and after itself which will remove itself from the linked list.
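Those two writes are exactly what a classic doubly-linked-list unlink produces: the predecessor’s forward pointer and the successor’s backward pointer are both redirected around the victim entry. The small Python model below illustrates the DKOM trick; the node layout is a stand-in for a kernel LIST_ENTRY, not a real kernel structure:

```python
class ListEntry:
    """Minimal stand-in for a kernel LIST_ENTRY-style node."""
    def __init__(self, name):
        self.name = name
        self.flink = None  # forward link
        self.blink = None  # backward link

def link(entries):
    # Build a circular doubly-linked list, as the kernel module list is.
    for prev, cur in zip(entries, entries[1:] + entries[:1]):
        prev.flink, cur.blink = cur, prev

def unlink(entry):
    # The two "memory write" events seen in the emulation report:
    entry.blink.flink = entry.flink  # write 1: predecessor -> successor
    entry.flink.blink = entry.blink  # write 2: successor -> predecessor

mods = [ListEntry(n) for n in ("ntoskrnl.exe", "tcprelay.sys", "netio.sys")]
link(mods)
unlink(mods[1])  # the rootkit hides its own entry

# Walk the list as a driver-enumeration API would; tcprelay.sys is gone.
cur, seen = mods[0], []
for _ in range(len(mods)):
    seen.append(cur.name)
    cur = cur.flink
    if cur is mods[0]:
        break
print(seen)
# → ['ntoskrnl.exe', 'netio.sys']
```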


Figure 3: Memory write events representing the tcprelay.sys malware attempting to unlink itself in order to hide

As noted in the original Speakeasy blog post, when threads or other dynamic entry points are created at runtime, the framework will follow them for emulation. In this case, the malware created a system thread and Speakeasy automatically emulated it.

Moving on to the newly created thread (identified by an “ep_type” of “system_thread”), we can see the malware begin its real functionality. The malware begins by enumerating all running processes on the host, looking for the service controller process named services.exe. It's important to note that the process listing that gets returned to the emulated samples is configurable via JSON config files supplied at runtime. For more information on these configuration options please see the Speakeasy README on our GitHub repository. An example of this configurable process listing is shown in Figure 4.
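As a rough illustration of such a configuration, a process listing entry might look like the fragment below. The field names here are illustrative assumptions; the authoritative schema is in the configuration files documented in the Speakeasy README:

```json
{
  "processes": [
    {"name": "services.exe", "base_addr": "0x400000",
     "path": "C:\\Windows\\System32\\services.exe"},
    {"name": "explorer.exe", "base_addr": "0x500000",
     "path": "C:\\Windows\\explorer.exe"}
  ]
}
```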


Figure 4: Process listing configuration field supplied to Speakeasy

Pivoting to User Mode

Once the malware locates the services.exe process, it will attach to its process context and begin inspecting user mode memory in order to locate the addresses of exported user mode functions. The malware does this so it can later inject an encoded, memory-resident DLL into the services.exe process. Figure 5 shows the APIs used by the rootkit to resolve its user mode exports.


Figure 5: Logged APIs used by tcprelay.sys rootkit to resolve exports for its user mode implant

Once the exported functions are resolved, the rootkit is ready to inject the user mode DLL component. Next, the malware manually copies the in-memory DLL into the services.exe process address space. These memory write events are captured and shown in Figure 6.


Figure 6: Memory write events captured while copying the user mode implant into services.exe

A common technique that rootkits use to execute user mode code involves a Windows feature known as Asynchronous Procedure Calls (APC). APCs are functions that execute asynchronously within the context of a supplied thread. Using APCs allows kernel mode applications to queue code to run within a thread’s user mode context. Malware often wants to inject into user mode since much of the common functionality (such as network communication) within Windows can be more easily accessed. In addition, running in user mode draws less attention: faulty kernel mode code can bug-check (crash) the entire machine, while a faulty user mode component fails far more quietly.

In order to queue an APC to fire in user mode, the malware must locate a thread in an “alertable” state. Threads are said to be alertable when they relinquish their execution quantum to the kernel thread scheduler and notify the kernel that they are able to dispatch APCs. The malware searches for threads within the services.exe process and once it detects one that’s alertable it will allocate memory for the DLL to inject then queue an APC to execute it.

Speakeasy emulates all kernel structures involved in this process, specifically the executive thread object (ETHREAD) structures that are allocated for every thread on a Windows system. Malware may attempt to grovel through this opaque structure to identify when a thread’s alertable flag is set (and therefore a valid candidate for an APC). Figure 7 shows the memory read event that was logged when the Winnti malware manually parsed an ETHREAD structure in the services.exe process to confirm it was alertable. At the time of this writing, all threads within the emulator present themselves as alertable by default.


Figure 7: Event logged when the tcprelay.sys malware confirmed a thread was alertable

Next, the malware can execute any user mode code it wants using this thread object. The undocumented functions KeInitializeApc and KeInsertQueueApc will initialize and queue a user mode APC, respectively. Figure 8 shows the API set that the malware uses to inject a user mode module into the services.exe process. The malware executes a shellcode stub as the target of the APC that will then execute a loader for the injected DLL. All of this can be recovered from the memory dump package and analyzed later.


Figure 8: Logged APIs used by tcprelay.sys rootkit to inject into user mode via an APC

Network Hooks

After injecting into user mode, the kernel component will attempt to install network obfuscation hooks (presumably to hide the user mode implant). Speakeasy tracks and tags all memory within the emulation space. In the context of kernel mode emulation, this includes all kernel objects (e.g. Driver and Device objects, and the kernel modules themselves). Immediately after we observe the malware inject its user mode implant, we see it begin to attempt to hook kernel components. This was confirmed during static analysis to be used for network hiding.

The memory access section of the emulation report reveals that the malware modified the netio.sys driver, specifically code within the exported function named NsiEnumerateObjectsAllParametersEx. This function is ultimately called when a user on the system runs the “netstat” command and it is likely that the malware is hooking this function in order to hide connected network ports on the infected system. This inline hook was identified by the event captured in Figure 9.
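This style of detection generalizes well: flag any memory write whose target address falls inside the image range of a loaded kernel module other than the sample itself. The sketch below operates on simplified memory-access events; the event and module field names are assumptions and should be checked against the real report schema:

```python
def find_module_hooks(mem_events, modules, self_name):
    """Flag writes that land inside another loaded module's image range.

    mem_events: iterable of {"event": "write"|"read", "address": int, "size": int}
    modules:    {name: (base, size)} for each loaded kernel module
    """
    hits = []
    for ev in mem_events:
        if ev.get("event") != "write":
            continue
        for name, (base, size) in modules.items():
            if name != self_name and base <= ev["address"] < base + size:
                hits.append((name, hex(ev["address"])))
    return hits

# Hypothetical module map and events modeled on the netio.sys patch above.
modules = {
    "tcprelay.sys": (0x10000000, 0x8000),
    "netio.sys": (0x8213B000, 0x40000),
}
events = [
    {"event": "read", "address": 0x8213C100, "size": 4},
    {"event": "write", "address": 0x8213C125, "size": 5},   # inline hook patch
    {"event": "write", "address": 0x10001000, "size": 4},   # write to self: benign
]
print(find_module_hooks(events, modules, "tcprelay.sys"))
# → [('netio.sys', '0x8213c125')]
```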


Figure 9: Inline function hook set by the malware to hide network connections

In addition, the malware hooks the Tcpip driver object in order to accomplish additional network hiding. Specifically, the malware hooks the IRP_MJ_DEVICE_CONTROL handler for the Tcpip driver. User mode code may send IOCTL codes to this function when querying for active connections. This type of hook can be easily identified with Speakeasy by looking for memory writes to critical kernel objects as shown in Figure 10.


Figure 10: Memory write event used to hook the Tcpip network driver

System Service Dispatch Table Hooks

Finally, the rootkit will attempt to hide itself using the nearly ancient technique of system service dispatch table (SSDT) patching. Speakeasy allocates a fake SSDT so malware can interact with it. The SSDT is a function table that exposes kernel functionality to user mode code. The event in Figure 11 shows that the SSDT structure was modified at runtime.


Figure 11: SSDT hook detected by Speakeasy

If we look at the malware in IDA Pro, we can confirm that the malware patches the SSDT entry for the ZwQueryDirectoryFile and ZwEnumerateKey APIs that it uses to hide itself from file system and registry analysis. The SSDT patch function is shown in Figure 12.


Figure 12: File hiding SSDT patching function shown in IDA Pro

After setting up these hooks, the system thread will exit. The other entry points (such as the IRP handlers and DriverUnload routines) in the driver are less interesting and contain mostly boilerplate driver code.

Acquiring the Injected User Mode Implant

Now that we have a good idea what the driver does to hide itself on the system, we can use the memory dumps created by Speakeasy to acquire the injected DLL discussed earlier. Opening the zip file we created at emulation time, we can find the memory tag referenced in Figure 6. We quickly confirm the memory block has a valid PE header and it successfully loads into IDA Pro as shown in Figure 13.
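Recovering the implant can be scripted: walk the dump archive and keep any memory block that begins with a PE (“MZ”) header. The sketch below matches on content rather than on file names, since the dump’s internal naming scheme is an assumption here:

```python
import io
import zipfile

def carve_pe_blocks(zip_bytes):
    """Return names of memory blocks in a dump archive that start with 'MZ'."""
    hits = []
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for name in zf.namelist():
            if zf.read(name)[:2] == b"MZ":
                hits.append(name)
    return hits

# Build a tiny stand-in archive: one PE-like block, one plain data block.
# The member names are invented for illustration.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("services.exe.0x7ff50000.mem", b"MZ\x90\x00" + b"\x00" * 60)
    zf.writestr("tcprelay.sys.0x10000000.mem", b"\xcc" * 64)
print(carve_pe_blocks(buf.getvalue()))
# → ['services.exe.0x7ff50000.mem']
```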


Figure 13: Injected user mode DLL recovered from Speakeasy memory dump

Conclusion

In this blog post, we discussed how Speakeasy can be effective at automatically identifying rootkit activity from the kernel mode binary. Speakeasy can be used to quickly triage kernel binaries that may otherwise be difficult to dynamically analyze. For more information and to check out the code, head over to our GitHub repository.
