
Data Protection Day 2024: As Technology and Threats Evolve, Data Protection Is Paramount

31 January 2024 at 20:13

Today’s cybersecurity landscape poses one of the most significant risks to data. Organizations of all sizes and across all industries must protect their most essential data amid an increasingly regulated environment and faster, more innovative adversaries.

Recent years have introduced a steady drumbeat of new data privacy regulations. There are now 14 U.S. states that have passed privacy laws. In July 2023, the Securities and Exchange Commission (SEC) adopted new rules requiring organizations to disclose material cybersecurity incidents, as well as information regarding their risk management, strategy and governance. On a global level, dozens of countries have updated their guidance on data privacy.  

Organizations must now comply with an “alphabet soup” of data protection requirements including GDPR, CCPA, APPI, PDPA and LGPD. Some of these are evolving to incentivize the adoption of stronger security practices. Newly updated regulations in Brazil, for example, give breached organizations a fine reduction of up to 75% if they have state-of-the-art protection in place at the time of a cyberattack. 
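To make that incentive concrete, here is a minimal sketch of the arithmetic, assuming a purely hypothetical fine amount (the base figure below is invented for illustration and is not drawn from the regulation):

    # Hypothetical illustration of an up-to-75% fine reduction for having
    # state-of-the-art protection in place; the base fine is an invented example.
    base_fine_brl = 1_000_000        # hypothetical fine before any reduction
    max_reduction = 0.75             # up to 75% reduction under the updated rules
    reduced_fine_brl = base_fine_brl * (1 - max_reduction)
    print(f"Fine after maximum reduction: BRL {reduced_fine_brl:,.0f}")  # BRL 250,000

In other words, an organization that can demonstrate strong protections could, in the best case, see a hypothetical seven-figure penalty shrink to a quarter of its original size.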

The list is growing: In 2024, many organizations will face new requirements stemming from the SEC’s new rules and state privacy laws, including amendments to the CCPA, industry-specific mandates, and those imposed on critical infrastructure by the Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA). These developments include new incident reporting obligations and requirements to implement certain security technologies, as well as demonstrate compliance through cybersecurity audits, risk assessments, public disclosures and other measures. 

These myriad legal requirements broadly raise the bar for “reasonable” security. However, adversaries typically move faster than data protection mandates can evolve. Organizations must pay close attention to how adversaries are evolving their techniques and determine whether they’re prepared to defend their data against modern threats.

Data Extortion and the Defender’s Dilemma 

The emergence of new regulations has been a game-changer for adversaries and defenders alike. Protecting against data breaches has only grown more challenging as threat actors evolve their tradecraft and quickly learn the pressure these regulations put on breached organizations.

Today’s adversaries are working smarter, not harder. This is clear in the growth of data extortion, which has emerged in recent years as an easier, less risky means for adversaries to profit. Threat actors are shifting away from noisy ransomware campaigns, which typically trigger alarm bells in security tools — instead, they are quietly stealing victims’ data and then threatening to leak it if their financial demands aren’t met. 

The rise in data extortion has corresponded with adversaries increasingly targeting identities, a critical threat vector organizations must consider as they build their data protection plans. Rather than relying on malware-laced phishing emails to breach target organizations, adversaries can use a set of compromised credentials to simply log in. A growing number of access broker advertisements offer credentials, vulnerability exploits and other forms of illicit access for sale: Last year, CrowdStrike reported a 147% increase in access broker ads on the dark web. Adversaries can now more stealthily infiltrate organizations, take valuable data and demand their price, putting victims in a tough position.

Data protection regulations change the calculus for organizations hit with data extortion — and adversaries know it. When threat actors steal information and tell their victims they’re in violation of HIPAA, GDPR, CCPA or other regulations, the stakes are higher. They know exactly how much an extortion attack will cost a business once it’s disclosed to regulators, and they can use this to coerce organizations into paying them instead. This may be a false choice, as many disclosure requirements apply regardless, but the coercion is real.

Adversaries use their awareness of regulations to their advantage in other ways as well. In one 2023 case, a ransomware gang filed an SEC whistleblower complaint directed at one of its victims. The complaint, filed before the new SEC rules actually went into effect, attempted to claim that the victim was in violation of its duty to disclose a material cyber incident.

Organizations must be incentivized to protect their data from modern threats. They should not feel stuck between the fear of reporting a breach and the pressure to meet adversaries’ ransom demands. With the right safeguards in place, businesses can protect their data from adversaries’ evolving attempts to access it. This is where CrowdStrike comes in. 

How CrowdStrike Can Help 

As we recognize Data Protection Day 2024, it is essential that we consider what data protection involves and how critical cybersecurity is, not only for compliance but for protecting privacy. Organizations must adopt best practices to protect their data in addition to meeting compliance requirements.

Visibility is essential to maintain regulatory compliance and protect sensitive data from today’s adversaries. If you don’t have visibility into your data flows, your credentials or the sensitive data your organization holds, how can you know whether that data is at risk? 

An organization’s data is among its most valuable assets — and adversaries are after it. Protecting that data should be a top priority. CrowdStrike Falcon® Data Protection provides deep, real-time visibility into what’s happening with your sensitive data as it flows across endpoints, cloud, web browsers and SaaS applications. As the modern approach to data protection, our technology ensures compliance with minimal configuration and provides comprehensive protection against modern threats. 

It is more important than ever for organizations to understand that data protection and data security are interdependent and cannot be considered in isolation. Both are critical in protecting privacy. Moreover, if personal data is stolen in a cyberattack, those affected can claim damages. Certain jurisdictions, however, provide fine and liability mitigations where the breached organization can prove its cybersecurity protections were reasonable and state-of-the-art.

In this threat landscape and regulatory environment, Data Protection Day provides an opportunity for privacy and security teams to align on modern threats to privacy, risks of non-compliance and the best technical and organizational means to protect data.


CrowdStrike’s View on the New U.S. Policy for Artificial Intelligence

21 November 2023 at 20:37

The major news in technology policy circles is this month’s release of the long-anticipated Executive Order (E.O.) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. While E.O.s govern policy areas within the direct control of the U.S. government’s Executive Branch, they are broadly important because they inform industry best practices and can even shape subsequent laws and regulations in the U.S. and abroad.

Accelerating developments in AI, particularly generative AI, over the past year or so have captured policymakers’ attention. Calls from high-profile industry figures to establish safeguards for artificial general intelligence (AGI) in particular have further heightened attention in Washington, D.C. In that context, the E.O. should be viewed as an early and significant step addressing AI policy rather than a final word.

Given CrowdStrike’s extensive experience with AI since the company’s founding in 2011, we want to highlight a few key topics that relate to innovation, public policy and cybersecurity.

The E.O. in Context

Like the technology it seeks to influence, the E.O. itself has many parameters. Its 13 sections cover a broad cross section of administrative and policy imperatives. These range from policing and biosecurity to consumer protection and the AI workforce. Appropriately, there’s significant attention to the nexus between AI and cybersecurity, which is covered at some length in Section 4.

Before diving into specific cybersecurity provisions, it is important to highlight a few observations on the document’s overall scope and approach. Fundamentally, the document strikes a reasonable balance between exercising caution regarding potential risks and enabling innovation, experimentation and adoption of potentially transformational technologies. In complex policy areas, some stakeholders will always disagree with how to achieve balance, but we’re encouraged by several attributes of the document.

First, in numerous areas of the E.O., agencies are designated as “owners” of specific next steps. This clarifies for stakeholders how to provide feedback and reduces the odds of gaps or duplicative efforts.

Second, the E.O. outlines several opportunities for stakeholder consultation and feedback. These will likely materialize through Request for Comment (RFC) opportunities issued by individual agencies. Further, there are several areas where the E.O. tasks existing — or establishes new — advisory panels to integrate structured stakeholder feedback on AI policy issues.

Third, the E.O. sets a workable cadence for next steps. Many E.O.s require tasks to be finished in 30- or 60-day windows, which are difficult for agencies to meet at all, let alone in deliberate fashion. This document in many instances provides for 240-day deadlines, which should enable 30- and 60-day engagement periods through RFCs, as outlined above.

Finally, the E.O. states plainly that “as generative AI products become widely available and common in online platforms, agencies are discouraged from imposing broad general bans or blocks on agency use of generative AI.” This should help ensure that government agencies explore positive use cases for leveraging AI in their own mission areas. If history is any guide, it’s easy to imagine a talented junior staffer at a given agency identifying a key way to leverage AI next year that no one could easily forecast this year. It would be unwise to foreclose that possibility, as innovation should be encouraged inside and outside of government.

AI and Cybersecurity Provisions

On cybersecurity specifically, the E.O. touches on a number of key areas. It’s good to see specific callouts to agencies like the National Institute of Standards and Technology (NIST), Cybersecurity and Infrastructure Security Agency (CISA) and Office of the National Cyber Director (ONCD) that have significant applied cyber expertise.

One section of the E.O. attempts to reduce risks of synthetic content — that is, generative audio, imagery and text. It’s clear the measures cited here are exploratory in nature rather than rigidly prescriptive. As a community, we’ll need to innovate solutions to this problem set. And with U.S. elections around the corner, we hope to see rapid advancements in this space.

In many instances, the E.O.’s authors paid close attention to enumerating AI policy through established mechanisms, some of which are closely related to ongoing cybersecurity efforts. This includes the direction to align with the AI Risk Management Framework (NIST AI 100-1) and the Secure Software Development Framework. This will reduce risks associated with establishing new processes, while enabling more coherent frameworks for areas where there are only subtle distinctions or boundaries between, for example, software, security and AI.

The document also attempts to leverage sector risk management agencies (SRMAs) to drive better preparedness within critical infrastructure sectors. Specifically, it mandates:

Within 90 days of the date of this order, and at least annually thereafter … relevant SRMAs, in coordination with the Director of the Cybersecurity and Infrastructure Security Agency within the Department of Homeland Security for consideration of cross-sector risks, shall evaluate and provide to the Secretary of Homeland Security an assessment of potential risks related to the use of AI in critical infrastructure sectors involved, including ways in which deploying AI may make critical infrastructure systems more vulnerable to critical failures, physical attacks, and cyber attacks, and shall consider ways to mitigate these vulnerabilities.

This is important, but we also encourage these working groups to consider benefits along with risks. There are many areas where AI can drive better protection of critical assets. When done correctly, AI can rapidly surface hidden threats, accelerate the decision making of less experienced security analysts and simplify a multitude of complex tasks.

At CrowdStrike, AI has been fundamental to our approach from the beginning and has been built natively into the CrowdStrike Falcon® platform. Beyond replacing legacy AV, our platform uses analytics to help prioritize critical vulnerabilities that introduce risk and employs the power of AI to generate and validate new indicators of attack (IOAs). With Charlotte AI, CrowdStrike is harnessing the power of generative AI to make customers faster at detecting and responding to incidents, more productive by automating manual tasks, and more valuable by learning new skills with ease. This type of AI-fueled innovation is fundamental to keep pace with ever-evolving adversaries incorporating AI into their own tactics, techniques and procedures.

In Summary

This E.O. represents a key step in the evolution of U.S. AI policy. It’s also particularly timely. As we described in our recent testimony to the House Judiciary Committee, AI is key to driving better cybersecurity outcomes and is also of increasing interest to cyber threat actors. As a community, we’ll need to continue to work together to ensure defenders realize the leverage AI can provide, while mitigating whatever harms might come from threat actors’ abuse of AI systems.

This article was first published in SC Magazine: The Biden EO on AI: A stepping stone to the cybersecurity benefits of AI

