
Webcast: Unique Security Coalition Aims to Guide Work-From-Home Transition

28 October 2020 at 18:30

CSOs, CISOs and security professionals everywhere are contending with a “new normal” due to the global pandemic. Employees are increasingly working from home, an abrupt and often unplanned shift for most companies. As a result, IT staff have had to adapt security protocols while maintaining seamless user experiences and fighting opportunistic cyber adversaries and their rapidly evolving tradecraft.

Faced with these extraordinarily challenging times, and recognizing the synergy of their talent and technologies, security vendors CrowdStrike, Netskope, Okta and Proofpoint have formed an unprecedented alliance to provide guidance and thought leadership.

In this on-demand webcast, executives from these four companies share best practices they’ve learned and adopted as their workforces and customers have become increasingly distributed and remote. “How Leading CSOs Are Staying on Top in Today’s Threat Landscape” features CrowdStrike Chief Product Officer Amol Kulkarni, Netskope CISO Lamont Orange, Okta CSO David Bradbury and Proofpoint CISO Lucia Milica, along with moderator and Lightstream VP of Security Strategy Rafal Los.  

Watch this on-demand webcast to learn:

  • The biggest challenges these vendors’ customers and prospects have faced since the onset of the global pandemic
  • Trending threats and how to mitigate them while dealing with an accelerated digital transformation
  • Proactive measures your organization can take to prepare and protect remote employees
  • The long-term strategic goals required to lead the way, measure ROI and more

Key takeaways from this panel discussion:

  • “The platform that does not provide end-to-end visibility will hamper detecting anything malicious.” — Amol Kulkarni, CrowdStrike
  • “Let’s double down on understanding what the baselines are today because the ones that we had before are of no use to us anymore.” — Lamont Orange, Netskope
  • “[Our security alliance is] taking the pain out of trying to integrate.” — David Bradbury, Okta
  • “People are the new enterprise edge, and we have to start thinking about security from that perspective.” — Lucia Milica, Proofpoint

Additional Resources


New Podcast Series: The Importance of Cyber Threat Intelligence in Cybersecurity

29 October 2020 at 00:01

A new CrowdStrike® podcast series hosted by Cybercrime Magazine focuses on the critical role cyber threat intelligence (CTI) plays in an effective cybersecurity strategy. The series features CrowdStrike SVP of Intelligence Adam Meyers, a renowned expert in the field of cyber intelligence and a highly sought-after speaker. In this 12-part series, Meyers will cover a wide array of CTI topics ranging from how to build an effective threat intelligence practice to how adversaries and the threat landscape are evolving and what organizations can do to better protect themselves.

Here’s the podcast lineup and quick summary of each episode. Put them on your list!

Getting to Know Adam Meyers

Meyers has long been considered a leading expert in the field of threat intelligence. In this first podcast to launch the series, he explains how his wide-ranging interests — from political science to epidemiology to computer science — and his work in both government and commercial organizations have contributed to his passion for and expertise in CTI. You’ll hear about the team of unmatched intelligence experts Meyers has built at CrowdStrike and how his team has evolved. He began with a mission to build government-quality intelligence for the private sector, focusing on nation-state adversaries, and the team soon evolved into tracking eCrime, hacktivism and recently, COVID-19-themed attacks.

Outpacing Your Adversaries

Meyers discusses the importance of knowing the capabilities and intentions of cyber adversaries that are targeting your organization and industry. He stresses that staying ahead of today’s ever-evolving adversary groups is critical and can’t be accomplished without effective CTI. Ultimately, understanding as much as possible about the “who, what and how” of your attacker is key. Meyers says, “I think about trying to bring the right components of technology and the right information together to ensure that you can, if not prevent, then certainly very quickly detect an adversary as they make attempts to access your infrastructure.”

Why CTI Is Critical to the C-Suite

Meyers discusses the importance of keeping C-level executives and board members apprised of security and risk issues and offers recommendations on the best way to present CTI to them. He recommends starting with basic information that enables them to understand what’s going on and how it may impact the organization. He explains the importance of understanding what the C-suite wants to gain from the discussion. You must first ask, “Who is your audience? Who are you bringing this intelligence to, and what is your expected outcome? Because you need to really understand what they are hoping to get out of this information.” He feels this is particularly critical because many organizations try to figure out their return on investment for threat intelligence before they have defined what threat intelligence is to them and what their measurements of success are.

How CTI Helps Security Operations Center (SOC) Teams and Incident Response (IR)

The benefits that CTI offers to SOC and IR teams start with intelligence automation, which makes their jobs easier. Meyers discusses the importance of offering context and analysis to threats, giving teams a better perspective and understanding of each threat and its potential capabilities. Meyers believes that CTI can be particularly beneficial to investigations being conducted in real time: “If they’re dealing with an active incident where the adversary is still there, understanding how to properly mitigate that incident so as not to cause the adversary to do something that would be unexpected or perhaps disruptive or destructive is critical. It’s really a very important part of the IR side of things.”

The CTI Lifecycle

Meyers discusses the need for security teams to better understand the business questions the C-suite is asking so they can better protect the organization. Meyers talks about how applying the intelligence lifecycle helps organizations answer these questions by providing a framework for the collection, analysis and dissemination of impactful threat intelligence to leadership. Meyers underscores the importance of decision-makers providing feedback to the team in order to help it keep pace with evolving business and risk-reduction strategies.

Business Drivers

Many organizations begin implementing threat intelligence when their security teams find themselves addressing the same problem over and over, and when leadership stumps them with questions about the latest threats they may see in the news. Meyers discusses the prerequisites to implementing a successful threat intelligence program and how to find, recruit and retain skilled intelligence analysts.

Team Members

Meyers discusses the importance of building cyber threat intelligence teams with a focus on both technical and human analysis. He explains that technical staff are required to derive intelligence by examining an adversary’s malware, tools and infrastructure. Meyers further states that human analysts are critical, as they add an understanding of the adversaries’ intentions, enhanced by experience that allows them to make estimations about what may happen in the future.

Hostile Nations

In this episode, Meyers examines Chinese, Russian, North Korean and Iranian cyber operations. He breaks down nation-state activity by discussing diplomatic, political, military and economic espionage, as well as describing disruptive/destructive offensive cyber operations. Meyers also delves into how North Korea uses financially motivated attacks and how these nation-states cooperate to meet mutually beneficial objectives.

Asia-Pac and Japan

Meyers discusses CrowdStrike’s Asia Pacific/Japan State of Security Survey with a special emphasis on how COVID-19 has shaped organizations’ digital and work-from-home strategies.
He describes how adversaries are preying on the fear and disruption caused by the coronavirus pandemic and how the rapid pivot of organizations to work-from-home has created opportunities for adversaries to probe for security gaps in the newly deployed infrastructure.

Who Are Hacktivists?

Meyers explores how activists, nationalists, terrorists and socio-economically motivated groups leverage DDoS attacks, doxxing and web defacement to express their ideologies. He states, “Hacktivist movements around the globe are constantly changing and very dynamic, but any place you see any sort of social or political or economic issue, you can expect to find it.” Meyers recommends steps CISOs can take to be prepared for these unexpected events.

For Manufacturing 

In the first six months of 2020, the CrowdStrike Falcon OverWatch™ team tracked more intrusions than it observed in all of 2019. Meyers discusses the acceleration of these attacks and how adversaries are joining the growing trend of targeted, low-volume/high-return ransomware deployment known as “big game hunting.”

For G2000 CISOs

Meyers discusses how the global threat landscape changed due to COVID-19 and how the rapid pivot of organizations to work-from-home has created opportunities for adversaries to probe for security gaps in the newly deployed infrastructure.  He predicts a continued acceleration in threat levels — attempted intrusions, ransomware attacks and other malicious activities — and discusses how CrowdStrike is helping organizations innovate and evolve faster than the adversary.

Additional Resources


How to Integrate with your SIEM

30 October 2020 at 07:00
By: Ted Pan
CrowdStrike Tech Center


The Falcon SIEM Connector provides users a turnkey, SIEM-consumable data stream. The Falcon SIEM Connector:

  • Transforms CrowdStrike API data into a format that a SIEM can consume
  • Maintains the connection to the CrowdStrike Event Streaming API and your SIEM
  • Manages the data-stream pointer to prevent data loss

SIEM connector-overview
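The third bullet above, managing the data-stream pointer, is what makes the connector restart-safe: it checkpoints the offset of the last event it delivered, so a restart resumes where it left off instead of losing or duplicating data. Here is a minimal sketch of that idea in Python; the checkpoint file name and offset format are illustrative, not the connector’s actual on-disk layout:

```python
import json
import os

CHECKPOINT = "stream_offset.json"  # hypothetical checkpoint file


def load_offset(path=CHECKPOINT):
    """Resume point: the last offset we successfully processed, or 0."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)["offset"]
    return 0


def save_offset(offset, path=CHECKPOINT):
    """Persist the offset so a restart can pick up from here."""
    with open(path, "w") as f:
        json.dump({"offset": offset}, f)


def consume(events, path=CHECKPOINT):
    """Process (offset, event) pairs newer than the checkpoint, advancing it."""
    start = load_offset(path)
    handled = []
    for offset, event in events:
        if offset <= start:
            continue  # already delivered before a restart
        handled.append(event)
        save_offset(offset, path)
    return handled
```

After a crash or restart, calling `consume` again with the same stream skips everything up to the saved offset and only processes new events.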


Before using the Falcon SIEM Connector, you’ll want to first define the API client and set its scope. Refer to this guide to getting access to the CrowdStrike API for setting up a new API client key. For the new API client, make sure the scope includes read access for Event streams.

SIEM connector event streams scope

The CrowdStrike Falcon SIEM Connector (SIEM Connector) runs as a service on a local Linux server.

The resource requirements (CPU/Memory/Hard drive) are minimal and the system can be a VM.

  • Supported OS (64-bit only):
    • CentOS/RHEL 6.x-7.x
    • Ubuntu 14.x
    • Ubuntu 16.04
    • Ubuntu 18.04
  • Connectivity: Internet connectivity and the ability to connect to the CrowdStrike Cloud (HTTPS/TCP 443)
  • Authorization: CrowdStrike API Event Streaming scope access
  • Time: The date and time on the host running the Falcon SIEM Connector must be current (NTP is recommended)

Installation and Configuration

To get started, you need to download the install package (RPM or DEB) for the SIEM Connector from the CrowdStrike Falcon UI.

For a more comprehensive guide, please visit the SIEM Connector Feature Guide.

SIEM Connector Download from tools

Download the package for your operating system to the Linux server you’d like to use.

Open a terminal and run the installation command, where <installer package> is the installer you downloaded:

  • CentOS:
    sudo rpm -Uvh <installer package>
  • Ubuntu:
    sudo dpkg -i <installer package>

The last step before starting the SIEM Connector is to pick a configuration. There are a couple of decisions to make. The SIEM connector can:

  • Output to a local file (your SIEM or other tools would have to actively read from that file)
  • Output to a syslog server (most modern SIEMs have a built-in syslog receiver)
  • Output to a format such as CEF or LEEF for your SIEM

Here is a flow diagram of how to pick the right configuration file:

SIEM Connector configuration flow

To get you started, we’ll use the default output to a JSON file and only change the Client ID and Client Secret. Since we’re just going to be testing with a single SIEM Connector, the app_id can stay as the default. 

Open the SIEM Connector config file with sudo and your favorite editor and change the client_id and client_secret options:


SIEM Connector configuration file
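For orientation, the file is a set of key/value options. Below is a rough, illustrative fragment; only client_id, client_secret and app_id are named in this article, so the remaining keys and the exact layout of the real file may differ (see the Feature Guide):

```ini
; illustrative fragment only; see the SIEM Connector Feature Guide for the full file
client_id = <your API client ID>
client_secret = <your API client secret>
app_id = <default app id>   ; safe to leave at the default for a single connector
```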

Once you save the configuration file you can start the SIEM connector service with one of the following commands:

  • CentOS:
    sudo service cs.falconhoseclientd start
  • Ubuntu 14.x:
    sudo start cs.falconhoseclientd
  • Ubuntu 16.04 and later:
    sudo systemctl start cs.falconhoseclientd.service

To verify that your setup was correct and your connectivity has been established, you can check the log file with the following command:

tail -f /var/log/crowdstrike/falconhoseclient/cs.falconhoseclient.log

You should see a Heartbeat. If you see an error message that mentions the access token, double check your CrowdStrike API Client ID and Secret.

Tail for heartbeat


The process above shows how to get started with the CrowdStrike Falcon SIEM Connector. There are many more options for this connector (using a proxy to reach the streaming API, custom log formats and syslog configurations, etc.) that can be found in the “SIEM Connector Feature Guide” as part of the Documentation package in the Falcon UI.

More resources


How to Consume Threat Feeds

30 October 2020 at 07:00
By: Ted Pan
CrowdStrike Tech Center


As part of the CrowdStrike API, the “Custom IOC APIs” allow you to retrieve, upload, update, search and delete custom Indicators of Compromise (IOCs) that you want CrowdStrike to identify.

With the ability to upload IOCs, the endpoints can automatically detect and prevent attacks identified by the indicators provided from a threat feed.


To get started with the CrowdStrike API, you’ll want to first define the API client and set its scope. Refer to this guide to getting access to the CrowdStrike API for setting up a new API client key. For the new API client, make sure the scope includes read and write access for IOCs (Indicators of Compromise).

IOC Client Scope

As example IOCs, we will be using the test domain “evil-domain.com” and the file “this_does_nothing.exe” (this_does_nothing.exe (zipped), Source Code (zipped)), which has a SHA-256 hash value of 4e106c973f28acfc4461caec3179319e784afa9cd939e3eda41ee7426e60989f.

Beginning with the CrowdStrike API

CrowdStrike leverages Swagger to provide documentation, reference information, and a simple interface to try out the API.

Before accessing the Swagger UI, make sure that you’re already logged into the Falcon Console.

Here’s a link to CrowdStrike’s Swagger UI.  Authorize with your Client ID and Client Secret that’s associated with the IOC scope as shown in the guide to getting access to the CrowdStrike API.

Authorize API
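If you prefer calling the API from a script rather than the Swagger UI, authorization is an OAuth2 client-credentials exchange against the token endpoint. A minimal, stdlib-only sketch follows; the base URL assumes the US-1 cloud, and other clouds use different hosts:

```python
import json
import urllib.parse
import urllib.request

BASE_URL = "https://api.crowdstrike.com"  # assumption: US-1 cloud


def get_token(client_id, client_secret):
    """Exchange the API client credentials for an OAuth2 bearer token."""
    data = urllib.parse.urlencode(
        {"client_id": client_id, "client_secret": client_secret}
    ).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/oauth2/token", data=data, method="POST"
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]


def auth_header(token):
    """Header to attach to every subsequent API request."""
    return {"Authorization": f"Bearer {token}"}
```

The returned token is short-lived, so long-running scripts should be prepared to request a fresh one when a request is rejected.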

After you’re authorized, find the IOCs resource on the page. These are going to be the requests that we’ll demonstrate in this guide.

Creating an IOC

First, let’s create a couple of new IOCs. We will add an IOC for the domain “evil-domain.com” and the file hash “4e106c973f28acfc4461caec3179319e784afa9cd939e3eda41ee7426e60989f” from our sample file.

Click on POST /indicators/entities/iocs/v1 to expand it. This will provide you with descriptions of the parameters and how you can use them. It also shows sample responses below.

POST indicators


The information provided here is great at helping you understand how to issue the requests and is all very interesting, but we can actually take it to the next step by making a request directly from the interface with the “Try it out” button. This guides you on how to implement the CrowdStrike API and allows you to test requests directly while having the documentation readily available.

Click on “Try it out”.

Try it out button

The “Try it out” button will make the Example Value box editable. It is prepopulated with placeholder values which we will replace in just a moment. We can see that even though there are several keys that we can modify, the only required ones are type, value, and policy. We’ll use the required keys for now and just enter the necessary values that we need to create the IOCs.

We can create an individual IOC or multiple IOCs in a single request, so we’re going to add both sample IOCs with our single request. You can edit the Example Value manually or just replace the existing contents with the following:

    [
        {
            "type": "domain",
            "value": "evil-domain.com",
            "policy": "detect"
        },
        {
            "type": "sha256",
            "value": "4e106c973f28acfc4461caec3179319e784afa9cd939e3eda41ee7426e60989f",
            "policy": "detect"
        }
    ]

Hit the “Execute” button at the bottom and you can see your response body below.

Execute POST

If everything went as expected, you will receive a “200” under Code and no “errors” in the body of the response. If you receive a “401” error and see “access denied” in the body of the message, double check your authorization.

Note: The actual curl command will include authorization information that is not shown here.

POST Create IOC Workflow
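The same create request can be issued from a script. Below is a hedged, stdlib-only Python sketch: the body mirrors the Example Value shown in Swagger, the base URL assumes the US-1 cloud, and `token` is a bearer token obtained from the OAuth2 flow:

```python
import json
import urllib.request

BASE_URL = "https://api.crowdstrike.com"  # assumption: US-1 cloud


def build_iocs(domain, sha256):
    """Request body for POST /indicators/entities/iocs/v1: a list of IOC objects."""
    return [
        {"type": "domain", "value": domain, "policy": "detect"},
        {"type": "sha256", "value": sha256, "policy": "detect"},
    ]


def create_iocs(token, iocs):
    """POST the IOC list with a bearer token; returns the parsed JSON response."""
    req = urllib.request.Request(
        f"{BASE_URL}/indicators/entities/iocs/v1",
        data=json.dumps(iocs).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

As in the Swagger exercise, a successful call returns a 200 with no errors in the body, while bad credentials surface as a 401.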

Listing IOCs

Now that we’ve created a few IOCs in the CrowdStrike Platform, let’s list them out. Click on GET /indicators/queries/iocs/v1 to expand it.

Get indicators request

Again, it’ll provide you with a description of the available parameters and how to use them. Now, click on the “Try it out” button.

Something that you might notice right away is that instead of a single Example Value box, the IOC search resource provides a series of fields where you can enter values in directly.

For example, you can enter “sha256” into the “types” box and then hit “Execute”.

Execute GET Indicators

After we execute the request, it will pull up the sha256 hash of the IOC that we created earlier and list it in the details section below. CrowdStrike provides many other parameters that you can use to perform your searches. For example, you can narrow down your search to only IOCs created after a specified time or for specific hash values. Take a look at the other fields to see what else you can do.

IOC Search Results
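For scripted searches, the query URL is straightforward to assemble from those same fields. A small sketch, with only the `types` and `values` parameters shown and the US-1 base URL as an assumption:

```python
import urllib.parse

BASE_URL = "https://api.crowdstrike.com"  # assumption: US-1 cloud


def ioc_query_url(types=None, values=None):
    """URL for GET /indicators/queries/iocs/v1; every parameter is optional."""
    params = {k: v for k, v in (("types", types), ("values", values)) if v}
    query = "?" + urllib.parse.urlencode(params) if params else ""
    return f"{BASE_URL}/indicators/queries/iocs/v1{query}"
```

With no arguments the URL has no query string, which, as described below for the verification step, searches across all IOCs in the environment.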

An example detection from an imported IOC

To demonstrate what a detection based on your custom IOC looks like, we will use a Windows machine with CrowdStrike Falcon installed.
You can run our test tool “this_does_nothing.exe” (see the beginning of this article) and verify, in the command window that opens, that the SHA-256 hash matches the IOC we uploaded.

Sample File Execution

Immediately after you execute the test tool, you will see a detection in the Falcon UI.

Detection in Falcon

Deleting an IOC

So far, we’ve created a few IOCs and searched for them. Now, let’s use the Delete request to remove IOCs that we no longer want detected.

Click on DELETE /indicators/entities/iocs/v1 to expand it. Since deleting an IOC is a very straightforward process, there are only two parameters available here: the type and value, both of which are required.

Click on the “Try it out” button.

Delete IOC Try it out

The Delete resource also provides fields that you can fill in. We’ll enter the same sha256 value where the type is “sha256” and the value is “4e106c973f28acfc4461caec3179319e784afa9cd939e3eda41ee7426e60989f”. Just enter those values into the fields and hit the “Execute” button.

Execute Delete Indicators

Now let’s verify that we have deleted the file hash by executing the Search IOC request again.

Expand the GET /indicators/queries/iocs/v1 again and this time, let’s leave all the fields blank. Since none of the fields are required, this will search through all the IOCs in our CrowdStrike environment.

When we receive the response, we can see that the only IOC still listed is the domain.

Verify sha256 indicator is deleted

You can now delete the “evil-domain.com” IOC with the DELETE request as well.


This guide is just the start of your journey with the CrowdStrike API. There is plenty of additional information in the CrowdStrike API Swagger UI, as well as in the Custom IOC APIs Documentation accessible through the Falcon console Docs menu.

More resources


How to Enable Kernel Exploit Prevention

30 October 2020 at 18:34
CrowdStrike Tech Center


This document and video will demonstrate how to enable kernel exploit prevention to protect hosts from sophisticated attacks that attempt kernel code execution.



Malware, and in particular ransomware, is increasingly using sophisticated attack chains to bypass traditional AV and execute successfully. As an example, the RobbinHood ransomware was updated to load and exploit a legitimately signed driver as a mechanism to achieve kernel code execution. With many endpoint solutions, the malware can execute and successfully encrypt the file system because the driver appears to be legitimate.

Even with a detection-only policy, execution of the RobbinHood ransomware triggers multiple CrowdStrike detections, as shown below. While machine learning correctly identifies the ransomware, Falcon also detects data encryption as well as kernel-level defense evasion.

Enabling Kernel Exploit Prevention

To prevent this type of attack, a simple policy change is required. Along with machine learning and behavioral-based protections, CrowdStrike can also block executions by category. For this attack, enabling the prevention of “Suspicious Kernel Drivers” will ensure that any driver found to be malicious by CrowdStrike will be blocked from loading.

Kernel Exploit Protection

With prevention enabled, the attack fails and the files are not encrypted. The execution details illustrate that CrowdStrike blocked the operation to start a malicious driver. The critical severity detection includes the tactic, technique and ID, as well as the triggering indicator of attack and a written description.


While the use of legitimate drivers might bypass traditional antivirus, CrowdStrike’s easy-to-configure prevention capabilities enable detection of malicious drivers and protect organizations against sophisticated attacks.

More resources



Seeing Malware Through the Eyes of a Convolutional Neural Network

3 November 2020 at 17:36
image of neurons


Deep learning models have long been considered “black boxes” due to their lack of interpretability. In the last few years, however, there has been a great deal of work toward visualizing how decisions are made in neural networks. These efforts are welcome, as one of their goals is to strengthen people’s confidence in the deep-learning-based solutions proposed for various use cases. Deep learning for malware detection is no exception. The target is to obtain visual explanations of the decision-making; put another way, we are interested in the highlights that the model makes in the input as having potentially malicious outcomes. Being able to explain these highlights from a threat analyst’s point of view confirms that we are on the right path with that model. We need confidence that what we are building does not make completely random decisions (and get lucky most of the time), but rather that it applies the right criteria for the required discrimination. With proof that the appropriate, distinctive features between malware and clean files are primarily considered during decision-making, we are going to be more inclined to give the model a chance at deployment and further improvements based on the results it has “in the wild.”


With visualizations, we want to confirm that a given neural network is activating around the proper features. If this is not the case, it means that the network hasn’t properly learned the underlying patterns in the given dataset, and the model is not ready for deployment. A potential reaction, if the visualizations indicate errors in the selection of discriminative features, is to collect additional data and revisit the training procedure.

This blog focuses on convolutional neural networks (CNNs), a powerful deep learning architecture with many applications in computer vision (CV) and, in recent years, also used successfully in various natural language processing (NLP) tasks. To be more specific, CNNs operating at the character level (CharCNNs) are the subject of the visualizations considered throughout this article. You can read more about the motivation behind using CharCNNs in CrowdStrike’s fight against malware in the blog “CharCNNs and PowerShell Scripts: Yet Another Fight Against Malware.” The use case in the upcoming analysis is the same, detecting malicious PowerShell scripts based on their contents, but this time the target is interpretability. The goal is to see through the eyes of our model and to validate the criteria on which its decisions are based.

One important property of convolutional layers is the retention of spatial information, which is lost in the subsequent fully connected layers. In CV, a number of works have asserted that deeper layers tend to capture more and more high-level visual information. This is why we use the last convolutional layer: to obtain high-level semantics that are class-specific (i.e., malware is of interest in our case) along with the spatial localization in the given input. Other verifications in our implementation aim to reduce the number of unclear explanations due to low model accuracy or text ambiguity. To address this, we take only the samples that are correctly classified, with a confidence level above a decision threshold of 0.999.

Related Work

Among the groundbreaking discoveries leading to the current developments in CNN visualizations, there is proof (Zhou et al., 2015a) that the convolutional units in CNNs for CV act as object detectors without any supervision on the objects’ locations. This also holds in NLP use cases, as the localization of meaningful words discriminative of a given class is possible in a similar way. Among the initial efforts toward CNN visualizations, we can also mention the deconvolutional networks used by Zeiler and Fergus (2014) to visualize what patterns activate each unit. Mahendran and Vedaldi (2015) and Dosovitskiy and Brox (2015) show what information is preserved in the deep features, without highlighting the relative importance of this information.

Zhou et al. (2015b) propose CAM (class activation mapping), a method based on global average pooling (GAP) that enables the highlighting of discriminative image regions used by the CNN to identify a given category. The direct applicability is in CV, as is the case with most of the research done in this respect. However, it should be mentioned, once again, that the use cases extend to NLP as well, with a few modifications in the interpretations.

CAM has the disadvantage of not being applicable to architectures that have multiple fully connected layers before the output layer. Thus, in order to use this method for such networks, the dense layers need to be replaced with convolutional layers and the network re-trained. This is not the case with Grad-CAM (Gradient-weighted CAM; Selvaraju et al., 2016), where class-specific gradient information is used to localize important regions in the input. This matters because we want a visual explanation to be, first of all, class-discriminative, meaning we want it to highlight the features (words/characters/n-grams) that are the most predictive of the class of interest, which in our case is malware.


Figure 1 offers a high-level view of how Grad-CAM is applied to textual inputs. Particularly, our interest is to visualize malicious PowerShell content in a heatmap fashion. In these visualizations, the most predictive features for malware are highlighted with different intensities according to their weights, from the perspective of our convolutional model. The following elements and actions are applicable in the mentioned Grad-CAM flow:

Figure 1: High-level view of the applicability of Grad-CAM to textual inputs. The more specific use case hinted at in the diagram is heatmap visualizations for malicious PowerShell scripts.

  • Inputs — in the current use-case the inputs provided are PowerShell scripts contents and the category of interest, which is malware in our case.
  • Forward pass is necessary as we need to propagate the inputs through the model in order to obtain the raw class scores before softmax. 
  • Gradients are set to 0 for clean and 1 for malware (the class that is targeted).
  • Backpropagate the signals obtained from the forward pass to the rectified convolutional feature map of interest, which is actually the coarse Grad-CAM localization. 
  • Grad-CAM outputs, in our case, are visualizations of the original scripts, with the most predictive features for the given class highlighted.

In Grad-CAM we try to reverse the learning process in order to interpret the model. This is possible using a technique called gradient ascent. Regular training updates the weights against the gradient (scaled by the learning rate) in order to reduce the loss; in gradient ascent, the gradient is instead added, so that the area to be visualized is amplified and the features of interest are highlighted.
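The contrast between the two update rules can be sketched in a couple of lines of Python; here w (a weight), g (its gradient) and lr (the learning rate) are plain scalars for illustration:

```python
def descent_step(w, g, lr):
    """Regular training: move against the gradient to reduce the loss."""
    return w - lr * g


def ascent_step(w, g, lr):
    """Gradient ascent for visualization: move with the gradient to amplify
    the activation of interest."""
    return w + lr * g
```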

The importance of the k-th feature map for target class c is determined by performing Global Average Pooling (GAP) in the gradient of the k-th feature map:

\[ \alpha_k^c = \overbrace{\frac{1}{Z} \sum_{i}\sum_{j}}^{global\; average\; pooling} \underbrace{\frac{\delta y^{c}}{\delta A_{ij}^{k}}}_{gradients\; via\; backprop} \]


\( A_k \in \mathbb{R} ^ {u \times v} \) is the k-th feature map produced by the last convolutional layer, of width u and height v.
\( y ^ c  \) — any differentiable activation (not only class scores), which is why Grad-CAM works no matter what convolutional architecture is used.

Global-average-pooling is applied on the gradients computed:

\[ \frac{\delta y^{c}}{\delta A_{ij}^{k}} \]

Thus, we obtain the weights \( \alpha_k^c \), which represent a partial linearization of the deep network downstream from A and capture the importance of feature map k for a target class c.

Then we take a weighted average of the activations of the feature maps. To do this, we multiply each feature map \( A^k \) by its importance \( \alpha_k^c \) and sum over k:

\[ L_{Grad-CAM}^{c} = ReLU \underbrace{\left(\sum_{k}{\alpha_k^c}{A^k}\right)}_{linear\; combination} \]
\( L_{Grad-CAM}^{c} \) class discriminative localization map, which is a weighted combination of the feature maps, followed by ReLU.
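The two formulas above can be written out in framework-free Python to make the mechanics concrete. This is a sketch of the math for a 1D (sequence) input, not the TensorFlow implementation used in our experiments; `grads[k][i]` stands for the backpropagated gradient of the class score with respect to activation \( A_{i}^{k} \):

```python
def grad_cam_1d(fmaps, grads):
    """Coarse Grad-CAM localization for a 1D (sequence) input.

    fmaps[k][i]: activation of feature map k at position i (A^k).
    grads[k][i]: gradient of the class score w.r.t. that activation.
    Returns per-position importance, ReLU'd and normalized to [0, 1].
    """
    n_pos = len(fmaps[0])
    # alpha_k: global average pooling over the gradients of feature map k
    alphas = [sum(g) / len(g) for g in grads]
    # linear combination of feature maps weighted by alpha_k, then ReLU
    cam = [
        max(0.0, sum(a * fm[i] for a, fm in zip(alphas, fmaps)))
        for i in range(n_pos)
    ]
    top = max(cam) or 1.0  # avoid division by zero when everything is clipped
    return [v / top for v in cam]
```

Positions with values near 1 are the substrings the model weighs most heavily for the target class, which is exactly what the heatmaps later in the article color most darkly.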


In our experiments, the following steps apply at the implementation level:


1. Automatically find the last convolutional layer in the network. Generally, the feature maps in the last convolutional layers tend to offer the best compromise between high-level semantics and detailed spatial information, which is precisely why we use that layer in this process. In TensorFlow, the framework used in our experiments, we can identify various types of layers by their names. For our architecture in particular, we can iterate through the layers in reverse order and find the first layer with a three-dimensional output, which is actually our last 1D convolutional layer.

Figure 2: Heatmap visualizations of clean (left) and malicious (right) PowerShell files, resulting from Grad-CAM.

2. We build a gradient model by providing:

a. Inputs: the same as the inputs to the pre-trained CharCNN model
b. Outputs:

i. The output of the final convolutional layer (determined in step 1)
ii. The output of the softmax activations of the model

3. In order to compute the gradient of the class output w.r.t. the feature map, we use GradientTape for automatic differentiation.
4. We pass the current sample through the gradient model and grab the predictions as well as the convolutional outputs.
5. Process the convolutional outputs such that they can be used in further computations (i.e., discard the batch dimension, evaluate the tensors, convert to numpy arrays if necessary, etc.).
6. Average the feature maps obtained and normalize between 0 and 1.
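The layer search in step 1 can be sketched as follows. The `Layer` tuples and layer names below are hypothetical stand-ins for the real Keras layers in our model; the point is that a 1D convolutional layer's output has shape (batch, steps, filters), i.e., three dimensions:

```python
from collections import namedtuple

# Stand-in for a Keras layer: just a name and an output shape.
Layer = namedtuple("Layer", ["name", "output_shape"])

def find_last_conv_layer(layers):
    """Walk the layers in reverse and return the name of the first one
    whose output is three-dimensional (batch, steps, filters)."""
    for layer in reversed(layers):
        if len(layer.output_shape) == 3:
            return layer.name
    raise ValueError("No layer with a 3-D output found.")

# Hypothetical architecture resembling a CharCNN.
model_layers = [
    Layer("embedding", (None, 1024, 16)),
    Layer("conv1d_1", (None, 1024, 64)),
    Layer("conv1d_2", (None, 512, 128)),    # <- last conv layer
    Layer("global_max_pool", (None, 128)),
    Layer("dense_softmax", (None, 2)),
]
target = find_last_conv_layer(model_layers)  # "conv1d_2"
```

With a real Keras model, the same loop runs over `model.layers`, checking each layer's output shape.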

The result of all of the processes described above is a heatmap visualization for the class label of interest (i.e., malware in our case). We scale this heatmap and use it to check where the CNN is “looking” in the input when making its classification. Examples of heatmap visualizations for both clean and malicious PowerShell inputs are displayed in Figure 2. The left heatmap represents a clean file, with different shades of blue for different substrings, where deeper blue correlates with the most predictive features for clean scripts. On the right, we have a similar interpretation for malware: a darker assigned color means that the corresponding substring is more likely to describe suspicious activity.

Most Predictive Features

In order to justify the class-discrimination capabilities of our PowerShell model, we produce visualizations for both malware and clean samples, with a focus on the class of interest — malware. We consult with our threat analysts and conclude that many of the substrings highlighted in the samples are actually of interest in classifying maliciousness in scripts. Throughout this section, we are often going to refer to these highlights as substring matches.

A strict selection is performed on all of the generated substring matches. These are checked against our Malquery database, with zero tolerance for false positives. Later, the substrings in the selected subset are considered for additional support to our PowerShell detection capabilities. The following sections briefly introduce three of the broadest categories of important substrings for our model in deciding the maliciousness of PowerShell scripts.

Base64 Strings

Substring 1: piJ0DAIB9EAB0seuZZotF3GaJh4gAAADrtmaLReBmiYeKAAAA665Vi+yhtGdMAItNGIP4AQ+FwZ4DAItFCIP4


Many of the substring matches generated are Base64 strings, which are eventually converted to byte code. Take, for example, Substring 1, a Base64 string that also shows up in a malicious script (i.e., out-2112577385.ps1) on a page full of exploits. This string, of particular importance for our model in detecting malware in the given PowerShell content, is a tiny part of a much larger Base64 string assembled from many smaller pieces. The full string decodes into the hexdump of a Portable Executable (PE) file, a malicious AutoIt-compiled executable. Among the capabilities identified in this PE code are interception of both asynchronous and synchronous keyboard presses, the ability to send and receive files, and further nested Base64-encoded PE content.
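To illustrate the kind of check involved (using a fabricated demo string, not the actual payload), a hedged sketch of decoding a Base64 chunk and testing for the “MZ” magic bytes that open a PE file might look like this:

```python
import base64
import binascii

def looks_like_pe(b64_chunk: str) -> bool:
    """True if the Base64 chunk decodes to bytes starting with the 'MZ'
    DOS/PE header magic; False for invalid Base64 or other content."""
    try:
        raw = base64.b64decode(b64_chunk, validate=True)
    except binascii.Error:
        return False
    return raw.startswith(b"MZ")

# Hypothetical stand-in payload built for this demo (NOT the real substring).
demo = base64.b64encode(b"MZ\x90\x00" + b"\x00" * 60).decode()
```

In practice, of course, the analyst decodes the full reassembled string and inspects the resulting PE, rather than relying on the header check alone.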

Common Syntax

Substring 2: IEX (New-Object IO.StreamReader(New-Object


Substring 3: IO.Compression.CompressionMode]::Decompress))).ReadToEnd()))


The opposite of Base64 strings, in terms of readability, is common PowerShell syntax, which is also included in the patterns generated. In this category we have substrings from comments, invocations that are not necessarily indicators of dirty scripts, download paths/upload links, etc. These are, in general, not a safe bet for standalone use in malware hunts. This is one of the reasons why we double-check the ML (machine learning)-picked substring matches against our database and remove the ones that give even one false positive. However, some of this regular PowerShell syntax keeps showing up in the set of selected patterns. Substring 2 is a good example, representing a part of a decoding sequence for Base64 strings. This type of encoding is a common practice used by attackers to evade simple detection techniques (e.g., regex, heuristic approaches). The last part of this decoding syntax also shows up as one of the ML-selected patterns of interest here — see Substring 3. Perhaps surprisingly, Substring 3 shows up in 93 malicious scripts in an exact Malquery search, and in no files tagged as clean. However, this is still common syntax that in and of itself has nothing to do with malware.

Nonetheless, our interpretation is that it is more likely that we will see this syntax in malicious code rather than in clean scripts, as we’ve rarely encountered benign PowerShell with Base64 strings and their almost immediate decoding.

Another substring match that is not part of a Base64 string is Substring 4, which, according to the PowerShell documentation, is shorthand for running a PowerShell command in a hidden window style. One might expect this substring match to also show up in clean PowerShell scripts as part of benign automation. However, in our validation, searching for this substring in Malquery hits only files tagged as dirty, if we set aside the 318 hits with unknown labels. Thus, this is an example of a pattern that will be subject to further validation in staging.

Substring 4: powershell /w 1 /C


Common Obfuscation

Obfuscation techniques for Windows Portable Executables are also detected as significant strings by our model. It is clear why attackers would try to obfuscate other executions launched from within the current script. Two such examples selected from our set of potential templates are Substring 5 and Substring 6.

Substring 5: Chr(Asc("x")) + Chr(Asc(“e”))


Substring 6: Chr(101) & "xe';"



In this blog we introduced a visualization technique that exploits the explainability potential of convolutional neural networks. The context, motivation and importance of seeking interpretability in our models were also discussed. We briefly described the components and flow in a visualization technique coming from the computer vision domain, namely Grad-CAM. By employing this technique with additional filtering, we can obtain a set of malware predictive substrings that further complement our current detection capabilities for PowerShell scripts. A few examples of such substrings are discussed at length in this article.

Additional Resources


Offering Our People Autonomy, Mastery and Purpose: Patrick McCormack, SVP Cloud Engineering

3 November 2020 at 18:13

When deciding to take a new job, one of the biggest concerns is often who you’ll be working for — not just the company itself, but the management and leadership team. What kind of manager will you have? Who is leading the organization from the top and what is their philosophy on learning and development? Will these people deliver all they promised during the interview phase?

In our latest installment of 5 Questions, Patrick McCormack, CrowdStrike® SVP of Cloud Engineering, talks about the three things he tries to provide every CrowdStrike engineer: autonomy, mastery and purpose.

Q: First things first, what do you look for in new candidates? Is experience in cybersecurity required?

That’s a really interesting question. CrowdStrike is a place for everyone. We have people with deep expertise in cybersecurity in areas like reverse engineering or writing detection algorithms, but actually, a lot of the work, particularly as it relates to the cloud and our sensor, is deep software engineering that is not concerned with security.

I didn’t have a particularly strong security background when I joined. But I brought a lot of knowledge and experience in building large-scale distributed systems in the cloud. So when we’re looking at hiring engineers — whether they are UI platform devs, cloud development engineers or kernel engineers working on the sensor side — they don’t necessarily have to know anything about security. I think this is an important point because engineers can be put off because they think they need to know about security to apply for positions at CrowdStrike, but for the most part, we are looking for world-class engineers regardless of their cybersecurity background. 

Q: A lot of engineers might not be familiar with CrowdStrike. Why should they be interested in our company?  

The thing I like most about working here is knowing that we’re protecting our customers. I’ve seen situations where we’ve had alerts go off late in the afternoon because we’re seeing a huge influx of detections from a particular customer. We realize that the customer has been targeted via email and people are clicking on malicious attachments. We’re registering all of those detections, but we can see that there’s nobody in the company that’s noticing, apart from their SOC (security operations center). The average user might be sending emails on a laptop as usual, and not even know that we’re at work protecting them from all this bad stuff that’s trying to run on the machine. That’s huge for me.

From a technology point of view, what we do here is incredible. We have kernel-level developers working on our sensor, we run a hybrid cloud backend across four geographic regions worldwide and we are architected to be up 24×7 with zero downtime. If you are interested in working at the kernel or OS level, or on massive async event-driven distributed systems, or on a state-of-the-art UI, and you also want your work to be important and impactful, then this is the place for you.

Q: So tell me a little more about the culture. What’s it like to work for CrowdStrike?

Culture is very important to us. Our internal mantra is “one team, one fight” and we live that every day. We foster a high-trust and high-autonomy work environment. We have a high bar for new hires so we don’t want to bring smart engineers on board and  tell them “Okay, sit there and I’ll tell you exactly what to do.” We want them to tell us what to do. We encourage a free and open exchange of ideas and it does not matter if you are an intern or a senior engineer with 20 years of experience. A healthy organization is one where anyone can approach the leadership team or their teammates and say, “Hey, I see you guys are doing something in a particular way. I’ve developed a system like that in the past and I think I can improve on what you are doing; let me tell you about it.” Everyone’s voice and opinion matters — we’re stronger because of our diversity of opinions and ideas. 

There’s a refinement of self-determination theory that says that there are three things people want from every job: autonomy, mastery and purpose. This is something that as a leader I consciously think about, as we are scaling up the engineering organization. We try to give our engineers autonomy in their work so they take pride in what they do because they own their work.

We also keep people learning, which is the mastery part. I always want to be stretched in different directions, whether it’s technically — taking on new areas, looking at new ways of doing things or adopting new technology — or from a management and leadership position. I think most engineers, most people in fact, feel the same way.

Finally, there’s purpose, which we’ve talked about a bit already. I think that’s the one thing that everyone here really loves. We have a strong sense of purpose. We’re not trying to get people to tweet at each other or target people with ads—we’re protecting our customers and making the world a safer place. That may seem like hyperbole, but we protect computers in hospitals and if a doctor can’t access a sick patient’s records because of ransomware, that’s a very serious situation. Our engineers get a huge sense of satisfaction that we’re doing something that has a very strong sense of purpose.

Q: Tell me a little more about being a platform-based company. What do you and your team do day-to-day?

The platform group builds and operates “platform-as-a-service” (PaaS), which is used by product groups who are building customer-facing products. We build for scale and reliability and we cover everything from the sensor to cloud ingestion, data processing pipelines, services and libraries for building sensors and user interfaces. We don’t want each one of these product groups to have to figure out how to build and deploy cloud services on their own or how to scale them or make them robust. We provide that platform for them to plug into, and they can focus really on the business logic of what they’re building. Also, just to give you an idea of the scale we operate at, the cloud ingests and processes in near real time over four trillion security events a week.

I oversee both the tactical and strategic aspects of platform development and operations. Tactical means everything from running and operating our cloud product across the world, 24×7, at scale. This also means planning and executing on the delivery of new platform features over the next three to six months. When we look six months and beyond we are thinking about the new capabilities we need to build and also about existing systems that need to be revved to handle our growth. You always have to be thinking about the future because at our scale you can’t wait for an inflection point to start developing; by that time it’s too late and you’ll have hit a hard scaling limit.  

Q: What do you like to do for fun outside of work?  

Mostly hiking and reading. In fact, right now I’m in the middle of The Undoing Project by Michael Lewis. The Undoing Project is super interesting because it’s about how people make decisions and common errors in decision-making. In addition, I am reading a book about Roland Barthes, as well as Practical TLA+ .  

Are you interested in working on Patrick’s team at CrowdStrike? Head to our Resource & Engagement Portal at GopherCon 2020 where you can find out how to meet our team and talk all things cyber: https://www.gophercon.com/page/1623781/crowdstrike 

Not attending GopherCon? Check out the CrowdStrike Careers page to learn more about our teams, our culture and current open positions.

Additional Resources:


The Critical Role of Cybersecurity in M&A: Part 2, Pre-Close

4 November 2020 at 16:59

This is Part 2 of our three-part blog series on the critical importance of cybersecurity in the M&A process. Part 1 addressed due diligence, and in this blog, we cover the pre-close phase.

The pre-close period of an M&A transaction typically lasts just 30 days — an extraordinarily brief period considering the incredible amount of work that goes into closing a deal.

During this period, cybersecurity is sometimes neglected or even outright overlooked, as the IT function considers bigger issues related to integration, divestment and maintenance. But this could prove to be a fatal error, as a breach occurring during the pre-close period could lead to significant operational, financial and reputational issues for both the buyer and seller. At the same time, preparations made during this time will certainly influence the success of the integration post-close.

Considerations During Pre-Close 

Here we explore three cybersecurity considerations that every organization should address during pre-close in order to maintain network security and help prime the organization for post-close success. 

1. Establish responsibility for the security agenda with a transition services agreement (TSA)

While global M&A activity has decreased dramatically in 2020, cyberattacks are up.

For instance, in a recent survey of the APJ region, CrowdStrike observed a 330% increase in eCrime activity in the first half of 2020 as compared to the same period in 2019. As such, it’s important to protect the health and security of the target organization. The question is, who takes responsibility for this activity?

One of the first items on the cybersecurity agenda during the pre-close phase is establishing a transition services agreement (TSA). This document outlines who will own and manage all aspects of the target company’s digital security plan, including proactive measures, such as prevention and monitoring services, as well as reactive efforts if and when an incident occurs. As part of this agreement, organizations should also outline any consequential risks identified during the due diligence phase and determine how to fill those gaps or otherwise strengthen defenses. 

Buyers should be especially mindful of the need for a clear and comprehensive TSA, as they will ultimately bear the cost, financially and operationally, of resolving any incidents that occur during the pre-close phase. Also, since the terms of the deal have already been negotiated and agreed upon, any events that change the valuation of the target, such as data loss or theft, could significantly impact the value of the investment.

Finally, it’s important to examine the TSA within the context of the company’s insurance policies. Unfortunately, cybersecurity issues that may impact M&A activity often are not covered by warranty and indemnity (W&I) insurance, directors and officers (D&O) liability insurance, or even cybersecurity policies. In many cases, this is because the cyber risk was not assessed in the due diligence phase and was therefore excluded or not explicitly mentioned in such policies. If a full assessment has not been completed, buyers may be responsible for the cost of breaches occurring at this stage.

2. Confirm the health of the IT environment with a hygiene assessment 

In Part 1 of this series, covering the critical role of cybersecurity due diligence in M&A activity, we discussed the importance of conducting a compromise assessment to identify known risks associated with the target company. The assessment identifies any past or current threat activity, with the focus on answering one question: “Has the organization we are acquiring been breached?” In addition to a compromise assessment, buyers should also determine whether the organization follows good IT hygiene practices.

An IT hygiene assessment is typically the first step in maintaining a healthy network. Like the compromise assessment, it will identify points of concern, such as unprotected devices on the network, unpatched systems and other vulnerabilities that could be exploited by a threat actor. However, it will take the process a step further, helping the organization analyze the situation and interpret the data in order to prioritize vulnerabilities and determine an appropriate response. 

An organization that shows multiple instances of past threat activity during a compromise assessment, and poor IT hygiene practices during a hygiene assessment, has an increased risk profile, which should be clearly understood before closing the deal and integrating the networks. 

For example, cybersecurity professionals may note that the IT function uses multiple antivirus vendors. This is purely an observation, which may have little meaning to the buyer. The hygiene assessment will go a step further, helping the organization understand what implications this has for the business and how significant of an issue it is. 

In this case, does having more than one vendor strengthen security, or does it make the task of managing security and detections more complex and therefore less effective? The cybersecurity assessment team will also take any necessary steps to address and resolve the issue, assuming it is a priority for the organization.

3. Preserve the health and hygiene of the network with comprehensive monitoring, response and remediation tools and services

Cybersecurity is a universal, ongoing concern. Every organization faces the risk of a breach — and network health and hygiene can change from day to day. For organizations involved in M&A activity, each company’s risk essentially doubles overnight since network integration will expose each organization to threats originating with the other.

Companies must adopt a holistic security strategy that incorporates a variety of endpoint monitoring, detection and response capabilities to ensure the safety of their network. Similar to the hygiene assessment, analysis and interpretation play a big role in this activity. Organizations face risks all the time — the key is knowing which to prioritize and how to remediate the threat with minimal disruption to the business. As such, most organizations should leverage both a comprehensive cybersecurity toolset, as well as on-call resources to help analyze events and respond to them.

One way to achieve immediate security maturity is via a managed service such as CrowdStrike® Falcon Complete™. Falcon Complete is CrowdStrike’s endpoint protection solution delivered as a managed detection and response service that utilizes both the expertise of CrowdStrike Services threat hunters and the power of the CrowdStrike Falcon® platform to detect and respond to threats present in a customer’s environment. 

Falcon Complete provides 24/7 hands-on management and optimization of the endpoint security environment, ensuring that cybersecurity matters are handled professionally throughout the pre-close period. The Falcon Complete team of expert analysts automatically detects and intelligently prioritizes malicious and attacker activity and helps organizations respond quickly to contain, investigate and remediate compromised systems. Our services map to MITRE’s Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) framework, which helps clients understand even the most complex detections at a glance.

Making the Most of Pre-Close

Cost is often a big driver of decisions during the pre-close phase of the M&A lifecycle. Cybersecurity, which is often overlooked during this period, may not seem like a worthy investment, though the risk of ignoring this issue is both clear and substantial: Research such as the Ponemon 2020 Cost of a Data Breach report shows that 80% of breaches involved customer PII (personally identifiable information) with the average cost of a breach topping $3.86M. In addition, steps taken during pre-close will help set the organization up for a successful integration post-close. For both buyers and sellers, we recommend using these 30 days wisely to maintain a healthy investment today and establish a more secure one for tomorrow.

Additional Resources


A Behind-the-Scenes Look at the Life of a CrowdStrike Engineer with Sorabh Lall, Senior Engineer

4 November 2020 at 18:19

Cybersecurity is all about anomalies — and perhaps no one can prove that point better than Sorabh Lall. As a senior engineer with a background in cybersecurity, his skills and experience are in hot demand across the industry — so what made him want to come to CrowdStrike? 

In our latest installment of 5 Questions, we sit down with Sorabh to learn more about his journey into the security field and, ultimately, to CrowdStrike.

Q: How did you make your way into cybersecurity and to CrowdStrike?

When I did my Masters in computer engineering back in 2012, there wasn’t much awareness about cybersecurity. There weren’t loads of big companies selling a robust suite of security products. So at that time, schools were not very focused on having security-specific programs, though that’s changed today.

I segued into this field with a more general background and a networking specialization. When CrowdStrike came on my radar, I was working on a project that was using Cassandra as a backend. We had an issue where at peak traffic time the service’s performance was getting hammered due to frequent compaction cycles. During the investigation to solve it, I stumbled on a new time-window-based compaction strategy that was designed to work with time-series-based data — and that new strategy was designed and built by CrowdStrike. 

I was very intrigued to learn about this small security startup that was working with such a phenomenal amount of data that they had to invent their own compaction strategy to handle it.

Q: Tell me about that. What was the interview process like?

One of the unique experiences about interviewing at CrowdStrike is that people are more interested in learning about you rather than showing off what they know. The process began with a design question that I was given in advance of my interview. I had ample time to prepare and then I presented my idea and design. Everyone gets a design question when they interview in cloud engineering, from a junior software engineer to a VP position.

As an engineer, that was really refreshing. So often, we leave an interview saying, “Hey, why didn’t you ask me questions that are more relevant rather than all those coding questions that nobody cares about?” I feel like CrowdStrike didn’t make that mistake. So that was the tipping point for me — the moment when I realized I really wanted to work here.

Q: What do you find most rewarding about working at CrowdStrike?

The rewarding part of the job is knowing that our products actually work. We hear that from our customers and see all of this positive news coverage about the company. That definitely gives you a sense of pride and validation that your work matters. We’re starting from a point where we can focus on improving our offering as opposed to proving ourselves. I think that’s a very unique position to be in, a once-in-a-lifetime opportunity, really — especially in the security world, where a lot of new companies are popping up and they’re trying to do their best but can’t compete.

Q. What else do you find unique about the culture here?

At CrowdStrike, there is a sense of trust and responsibility for employees. That comes from the execs, but also from the people all around you. When you’re working on a project, you have the full decision power of how you want to do things. There will be people around to help you, to guide you and to nurture you, but the decision-making is still on you. That trust and responsibility helps you grow a lot.

Q: OK, enough about work, what do you do for fun?

Outside of work I try to stay active, going for hikes and working out. I volunteer at a local food bank each week, or at least I did before COVID-19 hit. I also play rink hockey — it’s sort of like ice hockey, but instead of blades, we have quad skates with wheels. I competed in the U.S. Nationals last year while I was working at CrowdStrike, and that was pretty fun. 

Are you interested in working with Sorabh at CrowdStrike? Head to our Resource & Engagement Portal at GopherCon 2020 where you can find out how to meet our team and talk all things cyber: https://www.gophercon.com/page/1623781/crowdstrike 

Not attending GopherCon? Check out the CrowdStrike Careers page to learn more about our teams, our culture and current open positions.

Additional Resources:


Learning How to Problem-Solve at Scale and Embrace a World of Continuous Change with Morgan Maxwell, Cloud Engineer

4 November 2020 at 18:33

At CrowdStrike, we sometimes like to say, “There’s data, big data and CrowdStrike data,” by which we mean our engineers work with a volume and scale of data that is totally unmatched in the tech world. Cloud Engineer Morgan Maxwell learned as much firsthand, when she joined four and a half years ago. Since then, she’s come to embrace the volume of data, the scale of our cloud and the pace of our business model.

In our latest installment of 5 Questions, we sit down with Morgan to learn more about what it’s like to work at a place with “more microservices than people.”

Q: What does your role entail?

I’m a cloud engineer, which is truly the most awesome title ever and leads to much amusement among family and friends who do not work in technology.

Like most cloud engineers, I’m not a computer security subject matter expert. Our focus is on robust, distributed systems that can process unprecedented amounts of data. Within cloud, I’m part of the auth team — we handle all the authentication and authorization infrastructure for the platform.

My day-to-day is a combination of application development, operations and cat herding. We are always adding new features, which may involve creating a new microservice — at this point, I’m quite sure cloud services outnumber cloud engineers. We spend a lot of time thinking about reliability — monitoring, alerting, reducing errors with automation, swapping out existing components to obtain greater scalability, operational flexibility, etc. — and we are first to the scene of many fires.

One unique aspect of auth is that because so much of the cloud integrates with or extends our infrastructure, we get to engage with a wide range of teams and learn about their use cases.

Q: What’s the most rewarding part of your job?

It’s solving thorny problems! When I started, I was really insecure in my troubleshooting abilities and uncomfortable in high-pressure situations. Working in the cloud, you get thrown into these chaotic, ambiguous situations, where some highly visible feature is breaking in a way everyone finds completely befuddling. Yet, with everyone’s powers combined, you somehow figure it out. It’s a real rush.

I also love being part of a team that’s feverishly developing something new. Shortly after COVID-19 hit, a strike team was assembled to work nonstop until we could ship a home-use offering. We released it in less than two weeks, and this sounds weird, but it was one of the most fantastic experiences I’ve had at CrowdStrike. I also got to work on the first iteration of the CrowdStrike Store and our free trial program. These were more extended projects. I really enjoy the first weeks where you can focus on the specific component you’re building. Then you move into the phase where everything is ostensibly complete, but nothing is working together like it’s supposed to. But you can feel the momentum building, and it just carries you through those days of troubleshooting and fixes that are standing between you and that final deployment.

As a company, we have a great mission. It’s in these moments that you viscerally feel like you’re part of something bigger than yourself.

Q: What’s something unique about working here?

It’s the sheer scale at which we operate — this plays out in three ways.

The first is event volume: We’re in a unique position where we’re pushing common applications and libraries to their limits and encountering problems few others see. In some cases, it’s not a slow build — you hit a certain threshold, and things escalate quickly! 

The second is how vast and sprawling the cloud is. Few people have managed to hold all of it in their heads. You start a new project that interacts with some part you never encountered, and it’s a whole new world to conquer. I find that at any given time, there are more interesting things happening around me than I can possibly follow.

The third, from a people perspective, is about how distributed we are across the world. It’s surreal to interact with colleagues from so many different geographies in one day! 

Q: How would you describe working at CrowdStrike for people who may be considering a role here?

 It’s intense, but rewarding! I’ve never worked with so many smart and driven people — I feel humbled to be in their company! More than that, engineers here are really invested in each other’s success — they’re generous with their knowledge and go out of their way to help with projects. There’s a real sense that we’re all pulling toward a common goal.

Q: What’s something your coworkers might not know about you?

I went to law school, passed the bar, decided that the engineers at my various workplaces were the ones having all the fun, and started my first engineering job when I was 29!

Are you interested in working with Morgan at CrowdStrike? Head to our Resource & Engagement Portal at GopherCon 2020 where you can find out how to meet our team and talk all things cyber: https://www.gophercon.com/page/1623781/crowdstrike 

Not attending GopherCon? Check out the CrowdStrike Careers page to learn more about our teams, our culture and current open positions.

Additional Resources

The post Learning How to Problem-Solve at Scale and Embrace a World of Continuous Change with Morgan Maxwell, Cloud Engineer appeared first on crowdstrike.com.

2021 Threat Hunting Report: OverWatch Once Again Leaves Adversaries with Nowhere to Hide

8 September 2021 at 05:00

This time last year, the CrowdStrike Falcon OverWatch™ team reported on mounting cyber threats facing organizations as they raced to adopt work-from-home practices and adapt to constraints imposed by the rapidly escalating COVID-19 crisis. Unfortunately, the 12 months that followed have offered little in the way of reprieve for defenders. The past year has been marked by some of the most significant and widespread cyberattacks the world has seen. 

The OverWatch team has seen attempted interactive intrusion activity continue at record levels. Both eCrime and targeted intrusion adversaries have continued to evolve and mature their tradecraft, finding new ways to evade technology-based defenses. 

In the newly released Falcon OverWatch annual report, 2021 Threat Hunting Report: Insights From the Falcon OverWatch Team, threat hunters share the trends in adversary tradecraft that have emerged over the past year. This report, now in its fourth year, documents OverWatch’s ongoing campaign to disrupt adversaries’ attempts at interactive intrusions.

In a battle defined by both stealth and speed, OverWatch is winning — leaving adversaries with nowhere to hide.

Threat Hunting by the Numbers

The 2021 Threat Hunting Report reveals the scale and spread of potential interactive cyber intrusions uncovered and disrupted with the help of OverWatch. In the 12 months from July 1, 2020 to June 30, 2021, OverWatch tracked adversaries in the networks of organizations from every corner of the globe and nearly every industry vertical. No organization is outside the reach of today’s highly motivated adversaries. 

OverWatch has eyes-on-glass 24/7/365, looking for even the faintest signal of adversary activity. Adversaries do not sleep — they are not restricted by time zone or geography. Adversaries also move fast — they are capable of moving laterally to additional hosts within just minutes of achieving initial access. It is in this context that OverWatch’s around-the-clock vigilance proves so critical.

In this past year alone, OverWatch’s human threat hunters have directly identified more than 65,000 potential intrusions. That’s approximately one potential intrusion every eight minutes — every hour of the day and night. 

Human-triggered detections are only half of the OverWatch equation. In order to detect intrusion attempts at speed and on a global scale, OverWatch draws on its threat hunting findings to continuously advance the autonomous detection techniques in the CrowdStrike Falcon® platform. Over the last year, threat hunters have distilled their findings into the development of hundreds of new behavioral-based preventions for the Falcon platform, resulting in the direct prevention of malicious activity on approximately 248,000 unique endpoints.

With a powerful combination of human expertise and industry-leading technology, OverWatch can not only disrupt the most sophisticated intrusion attempts today, but also develop insights into detections that ensure swift identification and prevention of known threats into the future.

What You’ll Find in This Year’s Report

  • An overview of how OverWatch combines human ingenuity with patent-protected workflows to find the threats technology alone cannot (the SEARCH methodology)
  • A 10,000-foot view of the interactive threat landscape as observed by OverWatch
  • Six detailed case studies providing insights into how adversaries are carrying out their campaigns in the wild
  • A new look not only at the most common tactics, techniques and procedures (TTPs) used by adversaries, but also those OverWatch believes defenders should have on their radar
  • An analysis of potential intrusions by vertical, including a special feature on the telecommunications vertical, which saw attempted intrusions double this past year 
  • Recommendations for defenders looking to better protect their organization from current and emerging threats

Whether you’re a seasoned defender looking to learn the latest or a cyber professional just starting out, the 2021 OverWatch Threat Hunting Report has something for you. Be sure to download your copy of the report today.  

Additional Resources

The post 2021 Threat Hunting Report: OverWatch Once Again Leaves Adversaries with Nowhere to Hide appeared first on crowdstrike.com.

Threat Protection from Cloud to Ground: Unified Power of EDR with SaaS and Application Security

9 September 2021 at 09:39

There’s no stopping when it comes to scaling your business, so why should your security remain stagnant? With your organization constantly expanding and your IT and security stack increasing in tools, your threat landscape is bound to grow with it. And by leveraging an increasing number of external applications and software-as-a-service (SaaS)-delivered solutions, you’re broadening your attack surface for new threats to take hold. To ensure full coverage that scales with your business, your security and IT teams need to extend visibility into your application environment and implement effective response controls before an adversary can do serious damage like moving laterally and injecting malware.

CrowdStrike and its CrowdStrike Store partners DoControl and TrueFort help deliver comprehensive SaaS and application security, leveraging the CrowdStrike Falcon® platform’s single, intelligent agent and rich contextual data. The CrowdStrike Store extends the power of the Falcon platform to ensure you can stay ahead of modern attackers — DoControl’s new automated SaaS security app and the Zero Trust capabilities of TrueFort’s existing Fortress application help you to stop threats in your application environment at scale.

Remediate Compromised Assets Hidden in Your SaaS Apps

With many enterprises using SaaS applications daily — like Box, Google Drive, Slack and more — across all functions of the business, your critical corporate data is left outside your security perimeter and relies on the security measures of each SaaS application independently. With increased collaboration in these applications from vendors, partners and customers, controlling data access in an efficient and effective manner is key to ensure complete coverage while minimizing the likelihood of a data breach. 

By combining the Falcon platform’s rich telemetry with further visibility and control of SaaS applications on unmanaged devices where Falcon is not present, end users and external collaborators are prevented from uploading, accessing and sharing malicious assets on any of your corporate SaaS applications, ensuring that your employees and external collaborators are protected from malware and advanced threats. To achieve complete control over this growing attack surface, DoControl and CrowdStrike have partnered to help you identify and control the SaaS applications in your environment to achieve speed and agility of response.


DoControl automatically cross-references CrowdStrike Falcon detections with the same files stored in your SaaS applications to identify and remediate malicious activity at speed and scale. By immediately alerting your security teams to said cross-referenced detections, workflows can be triggered to remediate hosts by killing processes and file executions and deleting the files. With DoControl and CrowdStrike, you can prevent files from being added, stored or accessed by employees or external collaborators with known compromises, allowing you to gain control over your SaaS applications with faster and more accurate identification and response. By combining DoControl with CrowdStrike Falcon’s rich endpoint telemetry, you can easily manage assets, improve visibility and automate workflows to prevent data breaches in corporate SaaS applications. 

Gain Zero Trust Application Protection

Applications and workloads are top breach targets and avenues for adversaries to move laterally in your network. To proactively protect your organization from attacks, you need to fully understand application behavior and reduce excessive trust to effectively block or contain threats like ransomware, insider threats, supply chain attacks and other cyberattacks. TrueFort Fortress has enhanced its existing Zero Trust application protection capabilities with CrowdStrike Falcon to deliver micro-segmentation for all of your applications and workloads. 

The TrueFort Fortress app in the CrowdStrike Store leverages the Falcon platform’s rich endpoint data alongside its firewall creation, management and enforcement capabilities to help you gain visibility and control for detection and response at the application level. The Fortress app allows you to visualize your application flows and dependencies, automatically generate policies based on observed behavior, monitor for anomalies, streamline investigations, enable automated policy enforcement, and deliver robust reports — reducing excessive trust and related risks. By using application behavior telemetry from the Falcon platform, machine intelligence, and automation, Fortress continuously assesses and learns each application’s trusted runtime behaviors and creates a dynamic application trust graph, giving you comprehensive visibility. With this Zero Trust baseline for authorized behavior, your team is empowered to continuously identify and remediate risk-related deviations across all of your cloud, hybrid, containerized and on-premises workloads. With TrueFort and CrowdStrike, you can automate adaptive application security to stop threats, reduce your attack surface and stay compliant.  

Learn more about how to use TrueFort and CrowdStrike for micro-segmentation in our joint webcast, Stop Cyberthreats with Microsegmentation, on September 15, 2021.

Your Business Is Growing — So Should Your Security

With your business growth and increased scale, you need to focus on securing your environment end-to-end with unified platform-delivered solutions that can give you holistic visibility and control to stop breaches. With powerful application and SaaS security delivered by TrueFort and DoControl — available in the CrowdStrike Store — your team can automate detection and response in your complex application environment with proactive and effective tools to prevent malicious activity, stop advanced threats and maintain a high level of security efficacy. 

To learn more about DoControl and TrueFort or try these apps today, visit the CrowdStrike Store.

Additional Resources 

The post Threat Protection from Cloud to Ground: Unified Power of EDR with SaaS and Application Security appeared first on crowdstrike.com.

Everything You Think You Know About (Storing and Searching) Logs Is Wrong

9 September 2021 at 13:20

This blog was originally published Aug. 25, 2020 on humio.com. Humio is a CrowdStrike Company.

Humio’s technology was built out of a need to rethink how log data was collected, stored, and searched. As the requirements for data ingest and management are increasing, traditional logging technologies and the assumptions on which they were built no longer match the reality of what organisations have to manage today.

This article explores some of those assumptions, the changes in technology that impact them, and why Humio’s purpose-built approach is a better option for customers to get value with real-time search and lower costs.

3 assumptions about log data

There are three main assumptions that just don’t hold true today (and we like things that come in threes because it makes for neat sections in a blog).

1. Indexes are for search, therefore searches need indexes – False

Traditional thinking about how to do search at scale comes down to one concept: indexing the data. Indexing traditionally involves scanning the documents in question, extracting and ranking the terms, etc., etc. For many years, the ubiquitous technology for this has been Apache Lucene. This is the underlying technology in the search engines of many tools, and in more recent years has been “industrialized” into a really flexible technology thanks to the work of Elastic with the Elasticsearch tools.

But it’s not the best choice for logs (or more specifically streaming human-readable machine data). The assumption that indexes are best for all search scenarios is wrong.

This is no reflection on the technology itself; it’s designed for randomised search and it does that very well. Elastic gets a pass, they didn’t set out to build a log aggregation and search tool.

The other vendors that did set out to build such a tool and took an index-based approach may also get a pass, because indexing was the prevailing technology at the time.
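To make the contrast concrete, here is a minimal sketch of the two approaches: an inverted index that must be built and maintained before any search, versus a brute-force scan that needs no preprocessing. This is illustrative Python, not Humio internals; the function names are hypothetical.

```python
import re
from collections import defaultdict

def build_index(events):
    """Inverted index: term -> set of event ids. Must be built (and later
    merged, tracked, repaired...) before any search can use it."""
    index = defaultdict(set)
    for i, line in enumerate(events):
        for term in re.findall(r"\w+", line.lower()):
            index[term].add(i)
    return index

def index_search(index, term):
    """Search via the prebuilt index."""
    return sorted(index.get(term.lower(), ()))

def scan_search(events, term):
    """Brute-force scan: no preprocessing, data is searchable on arrival."""
    return [i for i, line in enumerate(events) if term.lower() in line.lower()]

logs = [
    "INFO user=alice action=login",
    "ERROR user=bob action=login failed",
    "INFO user=alice action=logout",
]
idx = build_index(logs)
# Both approaches find the same events; only the scan works with zero setup.
assert index_search(idx, "alice") == scan_search(logs, "alice") == [0, 2]
```

The point of the sketch is the cost model: the index answers lookups quickly once built, but every ingested event pays the build cost up front, while the scan defers all work to query time.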

2. Compression, and decompression, are slow – Not anymore

Data can be compressed to make storage more efficient, but the perception remains that compressing and decompressing data will slow things down significantly. But compressing data can actually make search faster. There are two pieces to that discussion.

Firstly, if you design and optimise your system around compression, it makes reading, writing, storing, and moving data faster. Humio does exactly that, and you can read about some of this thinking in a Humio blog post: How fast can you grep? Compression is assumed to be slow because so many users have experienced it in systems where it was introduced as an afterthought, a kludge to help solve the storage requirements of indexed data.

Secondly, compression algorithms are still making progress and being optimised. There are arguments that the latest techniques are reaching theoretical limits of performance, but let’s not declare that everything that will be invented has been.

Humio makes use of the Zstandard family of compression algorithms, and they are FAST. More about that in a bit.
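As a rough illustration of why compressed storage pays for itself, the snippet below compresses a repetitive, log-like payload. It uses Python’s standard-library zlib as a stand-in for Zstandard (which is not in the standard library); the exact ratio is workload-dependent, but repetitive machine data routinely compresses 10x or better.

```python
import zlib

# Hypothetical log-like payload: machine data is highly repetitive.
raw = b"2021-09-09T13:20:00 INFO service=web status=200 path=/health\n" * 20000

compressed = zlib.compress(raw, 6)
ratio = len(raw) / len(compressed)

# Repetitive machine data often compresses 10x or more.
assert ratio > 10

# Decompression is lossless, and for fast codecs it is typically
# quicker than compression.
assert zlib.decompress(compressed) == raw
```

Every byte saved here is a byte that never has to be read from disk or shipped across the network at query time, which is the core of the argument that compression can make search faster, not slower.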

3. Datasets become less manageable with size/age, or are put in the freezer – Datasets are not vegetables!

We often talk to prospective customers that have a requirement for Hot/Warm/Cold storage; and in the context of uncompressed, indexed data, this can make sense. People are used to the concept that storage is expensive, and that the storage “tier” is something the application needs to be aware of (e.g., hot data on local disk, warm data on SAN, etc).

Two things have changed significantly here. Storage is no longer as expensive as people are used to it being, and a whole new class of storage has become available to application developers and users alike: Object Storage.

The merits of Object Storage are covered in a bit more detail in a recent post, The Indestructible Blob, and described in the Humio How-To Guide: Optimize the stack with cloud storage.

How does Humio break these conventions?

We’re not going to give you all the details for what Humio does in these areas, but we can certainly discuss the general ways in which Humio reexamined these assumptions, and some of the results of doing so.

Indexes are not the solution

Indexing streaming data for the purposes of search is expensive, slow, and doesn’t result in a faster system for the kinds of use cases customers have for Humio. The interesting thing is that even the leading vendors of other data analytics platforms know this. They have had to work around this very problem to achieve acceptable solutions with things like “live tail” and “live searches”, etc. These index-based tools have to work around their own indexing latency to get the performance needed to claim “live” data … that should have been a big hint that maybe indexing wasn’t needed at all!

By moving away from the use of indexes (Ed: Humio still does actually index event timestamps, but we get the point), Humio does not have to do any of the processing and index maintenance that goes along with it. This means that:

  • When data arrives at Humio it is ready for search almost immediately. We’re talking 100-300 ms between event arrival and that same event being returned in a search result (a manual search, a live search that is already running, an alert, or a dashboard update).
  • Humio does not have to maintain indexes, merge them with new indexes, track which indexes exist, fix corruption in indexes, none of that. For those technologies that do rely on indexes, the indexes themselves become very large. Assuming the index is used to make the entire event searchable, indexing can make the data up to 300% larger than it was in its raw form.
  • With Humio, all queries are against the same datastore; there’s no split processing between historical and live data. Now consider where indexing is used for “search” and some sort of live streaming query is used to power “live” views of the data: tools that take this approach will often show users a spike in a live dashboard, but the user cannot search those events in detail or even view them in the live view.

Find out more about Humio’s index-free architecture in this blog post: How Humio’s index-free log management searches 1 PB in under a second.

Compression everywhere

Humio uses optimal compression algorithms to ensure minimal storage space is required (did I mention we don’t build indexes?); often achieving 15:1 compression against the original raw data, and in some cases exceeding 30:1 compression.

These compression algorithms allow for extremely fast decompression of the data. Humio analyses and organises incoming data so it can make use of techniques like compression dictionaries, meaning we can do this for the optimally-sized segment files in storage (i.e., we don’t have to build and access monolithic blocks of data to achieve high compression ratios).

For more background on the kinds of techniques Humio uses, this Facebook Engineering article is a good read: Smaller and faster data compression with Zstandard.

Find out more about Humio compression on the Humio product page: Keep 5-15x more data, for longer.

Accessing data

The final piece of the puzzle here is getting access to the right data when a user issues a query. Humio can’t go scanning all the raw event content no matter how fast it might be. This is where the storage pattern that Humio utilises comes into the picture, and the heuristics for a node in the cluster to get access to the data and scan it.

Firstly, segment files are built around optimally-sized groups of data (some secret sauce is added here to make that happen effectively and transparently to the user). These segment files also have accompanying bloom filters built, which means Humio can quickly and effectively identify only the relevant segments for any given query.

The segments work really well on local or network-attached storage, and their size and nature make them an excellent fit for Object Storage.
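A Bloom filter answers “might this segment contain the term?” with possible false positives but never false negatives, so a “no” lets the relevant segment be skipped with certainty. Below is a minimal, hypothetical sketch in Python; Humio’s real filter layout and hash choices are internal details.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: false positives possible, false negatives never,
    so 'absent' answers allow segments to be skipped safely."""
    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0  # bit set stored as one big integer

    def _positions(self, term):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{term}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, term):
        for pos in self._positions(term):
            self.bits |= 1 << pos

    def might_contain(self, term):
        return all(self.bits & (1 << pos) for pos in self._positions(term))

# One filter per segment file: consult the filters, scan only the candidates.
segments = {"seg-001": ["alice", "login"], "seg-002": ["bob", "logout"]}
filters = {}
for name, terms in segments.items():
    bf = BloomFilter()
    for t in terms:
        bf.add(t)
    filters[name] = bf

candidates = [name for name, bf in filters.items() if bf.might_contain("alice")]
assert "seg-001" in candidates  # the true match is always a candidate
```

Because a filter is tiny compared to the segment it summarizes, checking every filter is cheap, and the expensive decompress-and-scan work is spent only on segments that might actually match.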

What does a query pipeline typically look like?

  1. A query is issued against a Humio cluster. Humio identifies which segment files are relevant, based on the time range and scope of the query.
  2. The nodes that handle the query then fetch the relevant segment files for their part of the query job:
    1. First, check on the local storage/cache for the segment.
    2. Secondly, check the other nodes in the cluster for the segment.
    3. Finally, fetch the segment from the object storage.
  3. Complete the scan and return the results to the query coordinator.
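The three-tier lookup in step 2 can be sketched as a simple fallback chain. This is illustrative Python with hypothetical names (`local_cache`, `peer_nodes`, `object_store`), not Humio’s actual code.

```python
def fetch_segment(segment_id, local_cache, peer_nodes, object_store):
    """Tiered lookup mirroring the pipeline above: local cache first,
    then the other nodes in the cluster, finally object storage."""
    if segment_id in local_cache:
        return local_cache[segment_id], "local"
    for peer in peer_nodes:
        if segment_id in peer:
            data = peer[segment_id]
            local_cache[segment_id] = data  # cache for subsequent queries
            return data, "peer"
    data = object_store[segment_id]  # object storage holds everything
    local_cache[segment_id] = data
    return data, "object-storage"

local = {"seg-1": b"events-1"}
peers = [{"seg-2": b"events-2"}]
store = {"seg-1": b"events-1", "seg-2": b"events-2", "seg-3": b"events-3"}

assert fetch_segment("seg-1", local, peers, store) == (b"events-1", "local")
assert fetch_segment("seg-3", local, peers, store) == (b"events-3", "object-storage")
```

Caching fetched segments locally means a repeated query over the same time range skips the slower tiers entirely, which is also why the “fun fact” below about fetching straight from object storage can sometimes be the faster configuration.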

Fun fact: Because the object storage can be so efficient, you can tell Humio to always fetch missing segments from the object storage rather than the other nodes in the cluster as that’s sometimes the fastest way to do things.

For more information on the Humio architecture, see this blog post that summarizes a presentation given by Humio CTO Kresten Krab Thorup: How Humio leverages Kafka and brute-force search to get blazing-fast search results.


Humio has reconsidered the problem of ingesting and searching log data. Through a new approach and new technologies that are available, it has built a solution that scales efficiently and performs better than the systems that have come before it, often by more than an order of magnitude in terms of speed, storage, and total cost of ownership.

Want to find out more? Set up some time with us for a live demo, or see how it performs for yourself with a 30-day trial.

The post Everything You Think You Know About (Storing and Searching) Logs Is Wrong appeared first on crowdstrike.com.

HIMSS and Beyond: What’s Next in Healthcare Security

9 September 2021 at 13:28

The Healthcare Security Crisis

The FBI has released many warnings of ongoing ransomware attacks targeting U.S. healthcare and first-responder networks over the last three years, with ransomware families being updated with new names as hackers exchange sophisticated hacker-for-hire code and models to exploit vulnerable healthcare facilities. From penalties and Health Insurance Portability and Accountability Act (HIPAA) violations to denial of service availability, healthcare providers are forced to invest in security for endpoints, Internet of Things (IoT) devices and surgical devices (or other medical care equipment) while facing challenges in manpower, expertise and integration with existing systems. 

The challenge of maintaining protected health information (PHI) and network security isn’t limited to hospital and hospice providers — many manufacturers of healthcare and life-saving equipment are also expanding their certifications, adding much-needed network security certifications into their already lifesaving and preserving Internet of Medical Things (IoMT) and IoT devices. From robotic-assisted surgery devices to monitoring devices and technology, IoMT is here to stay, and it’s expanding — while hackers have already begun looking for ways to compromise these devices to launch their attacks against a system. Hospital networks are a complex and diverse grouping of medical and non-medical devices, managed separately but integrated continuously. Often, administrators have looked to two different lists when trying to determine endpoints on their system versus medical devices, due to each being administered by separate teams. 

New CrowdStrike Partner: Nihon Kohden

Because the number of attacks has grown so sharply in the last two years, Nihon Kohden is one of the first to onboard CrowdStrike Falcon® endpoint information into its larger patient monitoring systems to establish full facility threat visibility, protection and efficiency. Nihon Kohden has certified and validated the Falcon platform, rigorously examining and testing how it interacts to keep medical devices secure from ransomware and other denial-of-availability type attacks. The two companies are providing best-of-breed security that doesn’t impact availability or response of medical devices. Nihon Kohden will be offering the CrowdStrike solution as part of its Nihon Kohden Network Care service, and CrowdStrike is proud to be a partner as it moves toward solving issues so many medical manufacturers struggle with post-initial approval.

IoT/IoMT systems often report into patient records and data storage, combining to form a broad, interconnected attack surface that gives adversaries avenues to exploit. CrowdStrike’s partnerships offer increased visibility and understanding of these systems, driven by the vital requirement for comprehensive protection of these areas.

These partnerships address an area that many are hesitant to talk about — the divide between IT services and clinical engineering IoMT services. While all healthcare providers have provided endpoint security and firewalls in a traditional way to protect their hospital networks, CrowdStrike is leaping ahead to find ways to protect the many lifesaving medical devices in use every day and prevent those devices from becoming an avenue of attack. 

The new security model sees all endpoints and devices as equally important on the network, from understanding all users, privileges, and service accounts to industrial control systems, IoT/IoMT medical tech and more. CrowdStrike and our partners provide visibility for all devices to collect and correlate data across multiple security layers — email, endpoint, IoT device, patient portal and network — with advanced detection and response capabilities. 

This holistic approach offers quicker detection of threats, as well as improved investigation and response times through incident analysis. Medical and manufacturing industries have some of the most vital requirements for Zero Trust solutions, and CrowdStrike helps monitor every transaction and every session, correlating and alerting against known attack patterns with a backend team of experts that analyze new patterns as new bad actors make themselves known by their activity.

New CrowdStrike Partner: Medigate

That’s not the only fantastic medical partnership announcement this month: CrowdStrike recently announced a healthcare partnership with Medigate, a company built around security, asset management and operational analytics for medical providers. Hospitals that have both Medigate and CrowdStrike Falcon protecting their network will have new insight into discovery, profiling and network monitoring, to provide visibility into all managed and unmanaged endpoints including medical devices with network access. 

The integrated solution offers security teams at healthcare delivery organizations the industry’s first consolidated view of threat activity. It also ensures automated, next-gen incident response capability spanning all network-connected assets. 

Partnering for Success with IoT and Healthcare

It takes solid partnerships to deliver in a new age of healthcare security — and it’s even more important for security vendors to integrate and play well together as we bring our unique experience and understanding to form new and improved security solutions. CrowdStrike Falcon’s single lightweight-agent architecture uses cloud-scale artificial intelligence (AI) to offer real-time protection and visibility across the hospital or facility, preventing attacks on endpoints on or off the network. Falcon Zero Trust protects the identities of every user, human or service account/machine that accesses the domain controller. Falcon Discover™ IT hygiene helps provide a census across the network or facility, finding all devices that connect to the network. Humio enables collection of events and extraction of valuable information from any endpoint, identity or source at scale. All of these are powered by the proprietary CrowdStrike Threat Graph® database engine, making CrowdStrike one of the world’s most advanced data platforms for security.

It takes Zero Trust solutions, endpoint detection and response (EDR), automation and threat discovery to work with security professionals on signal and network interoperability. These are the critical solutions that will determine the fate and security of the healthcare infrastructure — from vendors and automation, to the conjunction of network and operations into one visible stream. CrowdStrike is pleased to partner with other medical, IoT device and healthcare-specific attack experts and technologies to create best-of-breed solutions that will meet the stringent demands of the healthcare IoT space. 

Additional Resources

The post HIMSS and Beyond: What’s Next in Healthcare Security appeared first on crowdstrike.com.

How Fast Can You Grep?

14 September 2021 at 12:54

This blog was originally published Sept. 28, 2017 on humio.com. Humio is a CrowdStrike Company.

Assume that you have a 1GB text file you want to search.

A typical SSD lets you read on the order of 1GB/s, which means that you can copy the file contents from disk into memory at that speed.

Next, you will then need to scan through that 1GB of memory using some string search algorithm.

If you try to run a plain string search (memmem) on 1GB, you realize that it also comes at a cost. A decent implementation of memmem will do ~10GB/s, so it adds another 1/10th of a second to your result to search through 1GB of data. Total time: 1.1 second (or 0.9GB/s).

Now, what if we compress the input first?

Imagine for simplicity that the input compresses 10x using lz4 to 0.1GB (on most workloads we see 5–10x compression). It takes just 0.1 second to read in 0.1GB at 1GB/s from disk into main memory. lz4 decompresses at ~2GB/s on a stock Intel i7, or 0.5 second for 1GB. Add search time of 0.1 second to a total of 0.6s for reading from disk and decompressing, and we can now search through 1GB in just 0.7s (or 1.4GB/s). And all of the above is on a single machine. Who needs clusters?
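The back-of-envelope numbers above can be checked directly. The snippet below reproduces the arithmetic, with all throughput figures taken from the text:

```python
GB = 1.0  # work in gigabytes and seconds

disk_read_gbps = 1.0        # typical SSD sequential read
memmem_gbps = 10.0          # decent plain string search
lz4_decompress_gbps = 2.0   # lz4 on a stock Intel i7
compression_ratio = 10.0    # assumed 10x lz4 compression

# Uncompressed: read 1GB from disk, then scan it in memory.
uncompressed_time = GB / disk_read_gbps + GB / memmem_gbps
assert abs(uncompressed_time - 1.1) < 1e-9  # 1.1 s, i.e. ~0.9 GB/s

# Compressed: read 0.1GB, decompress back to 1GB, then scan 1GB.
compressed_time = (GB / compression_ratio) / disk_read_gbps \
    + GB / lz4_decompress_gbps + GB / memmem_gbps
assert abs(compressed_time - 0.7) < 1e-9  # 0.7 s, i.e. ~1.4 GB/s
```

The counterintuitive result drops straight out of the rates: shrinking the disk read from 1.0 s to 0.1 s more than pays for the 0.5 s of decompression.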

Compressing the input has the obvious additional advantage that the data takes up less disk space, so you can keep more data around and/or keep it for a longer period of time. If, on the other hand, you use a search system that builds an index, then you’re likely to bloat your storage requirements by 5–10x. This is why Humio lets you store 25–100x the data of systems that use indexing.

Assuming we’re on a 4-core i7 machine, we can split the compressed data into four units of work that are individually decompressed and searched on each core for an easy 4x speedup; 1/4th of 0.6 seconds on each core is 0.15s. This gives us a total search time of 0.25 seconds, or 4GB/s on a single 4-core machine.

But we can do better.

All of the above assumes that we work in main memory, which is limited by a theoretical ~50GB/s bandwidth on a modern CPU; in practice we see ~25GB/s.

Once data is in the CPU’s caches it can be accessed even faster. The downside is that the caches are rather small. The level-2 cache, for instance, is 256KB. In the previous example, by the time the decompression of 1/4 of 1GB is done, the beginning of those 256MB has long been evicted from the cache.

So what if we move the data into the level-2 cache in little compressed chunks, so that their decompressed output also fits in the same cache, and then search in an incremental way? Memory accesses in the level-2 cache are ~10x faster than main memory, so this would let us speed up the decompress-and-search phase by an order of magnitude.

To achieve this, we preprocess the input by splitting the 1GB into chunks of up to 128KB that are individually compressed.
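The chunking idea can be sketched as follows. This uses Python’s zlib as a stand-in for the Zstandard-with-dictionaries approach Humio actually uses, and the 128KB chunk size is illustrative; the point is that each chunk is decompressed and searched while it still fits in cache.

```python
import zlib

CHUNK = 128 * 1024  # illustrative: chunk plus its compressed form fit in L2

def compress_chunks(data, chunk_size=CHUNK):
    """Split the input into fixed-size chunks, compressing each independently."""
    return [zlib.compress(data[i:i + chunk_size])
            for i in range(0, len(data), chunk_size)]

def search_chunks(chunks, needle):
    """Decompress and search one small chunk at a time, so the working set
    stays cache-sized instead of spanning the full decompressed input."""
    return sum(zlib.decompress(chunk).count(needle) for chunk in chunks)

# 32-byte records divide the chunk size evenly, so for this demo no
# match ever straddles a chunk boundary.
record = b"status=404 url=/missing-page...\n"
assert len(record) == 32
data = record * 50000

chunks = compress_chunks(data)
assert search_chunks(chunks, b"404") == 50000
```

A real system has to handle matches that cross chunk boundaries (for example by carrying a small overlap between chunks); the sketch sidesteps that with aligned records to keep the cache-locality idea front and center.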

Adding it all up for a search of 1GB: 0.1s to read from disk, 0.004s to move 0.1GB from main memory to the cores at 25GB/s, and a blazing 10x-faster 0.0125s to decompress and search, for a total of 0.1265 seconds, reaching 7.9GB/s.

But what if the 1GB file contents is already in the operating system’s file system cache? If it was recently written, or if this is the second time around doing a similar search, then loading the file contents would be near-instantaneous, and the entire processing would take just 0.0265 seconds, or 37GB/s.

Loading data from disk can be done concurrently with processing data, so the loading and processing can overlap in time. Notice that we’re now again dominated by I/O (the 0.1s disk read takes longer than all the other steps combined), which is why Humio searches faster the better the input compresses. If you search more than a few GBs, then processing is essentially limited by the speed at which we can load the compressed data from disk.

To enable even faster searches, you simply employ multiple machines. The problem is trivially parallelizable, so searching at 100GB/s would need just three machines the likes of a desktop i7.

The beauty is that this generalizes not just to search but to many other data processing problems that can be expressed in Humio’s query language. Whatever processing is requested is presented with the entire input, which makes it easy to extract data and do aggregations such as averages, percentiles, count distinct, etc.
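To show the generalization, here is the same chunked input feeding an aggregation instead of a search (an illustrative Python sketch; the two-field line layout and the field index are made-up assumptions, not a Humio format):

```python
import zlib

def average_of_field(chunks, index=1):
    # Same cache-sized compressed chunks, different consumer: every
    # decompressed line is fed incrementally to a running aggregation,
    # here a mean over one whitespace-separated field.
    total, count = 0.0, 0
    for chunk in chunks:
        for line in zlib.decompress(chunk).splitlines():
            total += float(line.split()[index])
            count += 1
    return total / count if count else 0.0
```

Percentiles or count-distinct would slot into the same loop, differing only in the state they carry between lines, which is exactly where the caveats of the next section come in.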

But in the Real World…

Many interesting aggregate computations require non-trivial state (probabilistic percentiles need a sample pool, the hyper-log-log we use for count distinct needs some fancy bitmaps), and these somewhat ruin the on-CPU caching, reducing performance. Even something as simple as keeping the most recent 200 entries around slows things down.

In all honesty, most of the above is more or less wishful thinking: it describes the theoretical limits of an optimal program. For several reasons, we really only get around 6GB/s per node, roughly 1/6th of the ~37GB/s I tallied up above. The trouble is that our system does many other things that affect performance, and it is really hard to measure exactly where the bottleneck is at the appropriate level of detail without influencing the outcome. But performance is still decent — and (unfortunately) our customers are asking for more features, not more performance, at present.

The system really lends itself to a data processing problem where lots of data is ingested but queries are relatively rare. So it’s a good match for a logging tool: logs arrive continually, they are relatively fast to compress, and relatively few people, such as sysops and developers, initiate queries. Humio easily sustains a large volume of ingest; we have seen successful single-node deployments taking in over 1TB/day. When someone comes around to ask a question, Humio will use all available processing power (for a short while) for just that single query.

In a later post, I’ll get back to how we improve these tradeoffs using stream processing to maintain ‘views’ that are readily available for retrieval.

The post How Fast Can You Grep? appeared first on crowdstrike.com.

Big Game Hunting TTPs Continue to Shift After DarkSide Pipeline Attack

14 September 2021 at 18:15

The eCrime ecosystem is an active and diverse economy of financially motivated threat actors engaging in a myriad of criminal activities to generate revenue. With the CrowdStrike eCrime Index (ECX), CrowdStrike’s Intelligence team maintains a composite score to track changes to this ecosystem. The ECX is composed of several key observables covering different aspects of criminal activity that are combined using a mathematical model. In recent weeks, the Intelligence team observed a notable shift in big game hunting (BGH) activity and tactics, techniques and procedures (TTPs) that resulted in a downward trend of the ECX. As noted in a previous CrowdStrike Intelligence blog, the intense attention surrounding the Colonial Pipeline and JBS incidents had a significant impact on the criminal marketplace and the political landscape. Get more Intel updates on the latest eCrime activity and TTPs at Fal.Con, our annual cybersecurity conference, Oct. 12-14 — register for free today.

ECX Suggests Downward Trend in Ransomware Operations Following Colonial Pipeline Attack

By the time of the Colonial Pipeline attack on May 7, 2021, observed BGH ransomware incidents had reached a yearly high. However, publicly observable BGH activity declined throughout early June 2021, immediately after the incident, amid reports of mounting U.S. pressure to pursue BGH actors. A similar decline was also observed in the number of specific leaks posted to adversaries’ dedicated leak sites (DLS). Despite the decline in the ECX, there has been sustained ransomware activity, likely indicating that a number of adversaries are remaining active despite the dismantling of other groups.

BGH Actor Developments 

BGH adversaries responded to the Colonial Pipeline ransomware incident and the resulting widespread media coverage in many ways. Some named actors shuttered ransomware-as-a-service (RaaS) affiliate programs — at least publicly — while others have continued deploying ransomware. 

CARBON SPIDER (operators of DarkSide ransomware) continues to create active command-and-control (C2) servers to deploy their Domenus PS backdoor and Cobalt Strike post-exploitation framework. The activation of new C2 servers demonstrates that CARBON SPIDER has not halted activities despite allegedly losing control of DarkSide-related infrastructure and having their ransomware funds seized by the U.S. government.1 However, in late July 2021, CrowdStrike Intelligence observed a new ransomware called BlackMatter being distributed. Code overlaps indicate that BlackMatter is highly likely the successor of CARBON SPIDER’s DarkSide ransomware. CARBON SPIDER has also created a Linux version of BlackMatter that resembles the Linux version of DarkSide in multiple ways. After taking a short break, CARBON SPIDER reinstated their BGH operations involving this RaaS and have stated an interest in purchasing unauthorized access to corporate networks and acting on it.

RIDDLE SPIDER (operators of Avaddon ransomware) closed down their operations in late June. Earlier in June 2021, media sources allegedly received emails containing a password and links to 7zip files containing Avaddon ransomware decryption keys.2 RIDDLE SPIDER’s DLS also went offline in June. While CrowdStrike Intelligence cannot confirm RIDDLE SPIDER’s motivations for closing down the Avaddon RaaS, the decision was likely influenced by the Colonial Pipeline incident and its resulting effects throughout the ransomware industry.

GRACEFUL SPIDER had several members of their group arrested on June 16, 2021, by a joint international law enforcement operation.3 These members were involved in laundering cryptocurrency funds acquired through the use of GRACEFUL SPIDER’s Clop ransomware. The immediate impact to GRACEFUL SPIDER operations resulting from these arrests is currently unclear. GRACEFUL SPIDER’s DLS remains active after the arrests, with two new listings in June, indicating they have not ceased their activity.

PINCHY SPIDER (developers and operators of the popular REvil RaaS) continued operating at a high pace throughout June and early July 2021, and the group introduced a new ransomware named REvix, which is used to target ESXi and Linux environments. However, on the morning of July 13, 2021, PINCHY SPIDER’s REvil infrastructure supporting their DLS and payment portal went offline. On the same day, the forum administrator of the Russian-language criminal forum XSS banned the actor Unknown (aka UNKN), who has acted as the public spokesperson for PINCHY SPIDER since 2019. PINCHY SPIDER had released REvil version 2.08 a few days prior, confirming the ransomware was under active development, and version 1.2 of REvix was observed on July 23. 

On Sept. 7, after an approximately three month hiatus, CrowdStrike Intelligence observed PINCHY SPIDER’s REvil infrastructure come back online. Financial activity in terms of BTC transactions from previously identified REvil addresses was also detected on Sept. 5.

On June 4, a sample of INDRIK SPIDER’s Hades ransomware was identified using the name PAYLOADBIN, similar to Babuk Locker’s DLS, Payload.bin. INDRIK SPIDER likely switched the names in an effort to avoid attribution by law enforcement and therefore avoid Office of Foreign Assets Control (OFAC) sanctions. Prior to this recent name change, INDRIK SPIDER attempted to change the names of Hades and their Phoenix CryptoLocker ransomware at least one other time to avoid OFAC sanctions. The changes made to avoid these sanctions indicate that INDRIK SPIDER intends to continue deploying ransomware.

In July 2021, CrowdStrike Intelligence determined that Grief ransomware is developed by DOPPEL SPIDER, likely as an intended successor to DoppelPaymer ransomware. The cessation in DoppelPaymer activity coincided with the emergence of the Grief DLS that was first observed in May. Analysis of recently identified Grief samples indicates a number of technical overlaps with DOPPEL SPIDER’s wider toolset that provides a definitive link to the adversary.

WIZARD SPIDER continues to actively deploy Conti ransomware and update the Conti DLS. In June 2021, WIZARD SPIDER continued to target large entities in Europe and the United States, including organizations in real estate, education and local government. Recent developments related to the Colonial Pipeline and JBS incidents have not slowed down WIZARD SPIDER’s ransomware operations. This indicates WIZARD SPIDER remains largely unaffected by external pressure, similar to their response to the September 2020 takedown efforts targeting TrickBot infrastructure.


The confluence of U.S. and international law enforcement pressure and forum bans on ransomware activity has led to a highly fluid and chaotic situation in the eCrime ecosystem. The ECX indicated a change in BGH activity from May through June 2021, as well as the persistence of ongoing BGH incidents at the level observed in the first quarter of 2021. However, the downward trend in BGH victims posted to DLSs in June likely indicates that some BGH actors have shifted TTPs to make tracking their activity more difficult.

Numerous adversaries have shown themselves keen to take advantage of the situation and to attract new affiliates. These adversaries have explicitly expressed their intent to continue ransomware operations despite reports of possible U.S.-Russian collaboration — or more aggressive unilateral enforcement actions by the U.S. — in response to incidents, suggesting that a complete drop in BGH activity is highly unlikely to occur in the near future. 

The ECX remains a valuable tool used to identify significant events affecting the eCrime ecosystem. The ECX provides an easily referenced index to mark areas of disruption or change in the eCrime ecosystem in real time.

Monitor the ECX regularly in the CrowdStrike Adversary Universe to make sure you stay up-to-date on eCrime trends.


  1. https[:]//www.justice[.]gov/opa/pr/department-justice-seizes-23-million-cryptocurrency-paid-ransomware-extortionists-darkside
  2. https[:]//www.bleepingcomputer.com/news/security/avaddon-ransomware-shuts-down-and-releases-decryption-keys/
  3. https[:]//www.npu.gov[.]ua/news/kiberzlochini/kiberpolicziya-vikrila-xakerske-ugrupovannya-u-rozpovsyudzhenni-virusu-shifruvalnika-ta-nanesenni-inozemnim-kompaniyam-piv-milyarda-dolariv-zbitkiv/


The post Big Game Hunting TTPs Continue to Shift After DarkSide Pipeline Attack appeared first on crowdstrike.com.

Senior UX Writer Hema Manwani on Kickstarting a Career in Cybersecurity and Shifting to Remote Work

14 September 2021 at 20:04

For Hema Manwani, a successful day at work is one where she helps guide someone from point A to point B. But she’s not a logistics manager or a dispatcher — she’s a writer. A newly hired Senior UX Writer at CrowdStrike to be more specific. 

Having started her new position just four months ago, Hema joins us here to share the details of her transition to the cybersecurity industry, her first impressions of CrowdStrike and the most rewarding part of her day.

Hema Manwani

Q. What brought you to CrowdStrike and what do you do here? 

I’m a senior UX writer for the platforms team. My goal is to write usable, simple, understandable content so that our users are able to accomplish their goals. 

I joined CrowdStrike about four months ago after coming across a video by a UX writer from CrowdStrike. I normally don’t stop to watch LinkedIn videos unless they’re recommended to me by someone, but something about this one drew me in. She was presenting a topic that sounded very technical, but she was breaking it into usable and easy patterns so the audience could understand. The main job of a UX writer is to make it easy for the users to understand. I was intrigued by the presentation and thought it could be a good challenge to pursue. So I reached out to her on LinkedIn to congratulate her on a job well done and she actually shared a job opportunity with me. I have years of experience writing in financial services and tech, but I didn’t know much about cybersecurity. I gave it a shot anyway. Long story short, I applied and now I’m here. 

Q. That’s a great story. We always try to impress upon people that they don’t necessarily need experience in cybersecurity to apply for a job at CrowdStrike. Did you have reservations about applying without industry experience? 

That topic came up when I was trading messages with the presenter via LinkedIn, and she said to me, “Everybody here is learning.” That was reinforced during the interview process too. During my last round of interviews, which was a group session, people asked about my knowledge and interest in cybersecurity. I mentioned to them that I didn’t know much about the field, but after watching that LinkedIn talk, I was very interested in learning. 

The best part was when everyone in the interview confirmed, “Everybody’s learning here. You won’t meet a person at CrowdStrike who says, ‘Hey, I am the guru of everything. Come to me and I’ll answer all your questions.’” That experience has instilled a lot of confidence in me. While there are many people here that are experts at many things, in all the time I’ve been here I’ve always seen the team encouraging each other to ask questions. There’s no such thing as a “wrong” question here. 

Q. Can you tell me about what you do in a typical day?

That’s a very good question! People often assume that because I’m a writer, I write all day. But writing is just a small part of what I do. 

As a UX writer, I collaborate with cross-functional teams a lot. I spend time in working sessions with our researchers, engineers and designers to understand the product and how it works, what’s feasible and what’s not, and how we can make a great experience for our customers. I have to understand what our users are seeing, what their pain points are. Then I take all of that information and make it easy for our customers to understand how to use the product and what it can do every step of the way through conversational language. My job is done when a user can get from point A to point B without any questions or confusion.

Q. What do you find different or unique about working at CrowdStrike?

CrowdStrike is a remote-first organization. That’s by design from the start, long before COVID-19. That was a change for me. Many people have the idea that employees have to be at a workplace together, especially when a role is so collaborative, like mine. The expectation is that you need to sit with the designer, co-create the designs, maybe do some whiteboarding and stuff like that. 

So while I was excited to join as a remote worker, I wondered: Is it going to work seamlessly? How are our teams going to work if we don’t sit down and talk to each other in the same room? What I found is that even though it’s a very fast-paced environment, people are always there to support you and to answer your questions. So I’ve never felt stuck with a problem on my own, even though my position is remote. CrowdStrike has definitely proved it’s a myth that you need to be in the same room as your team members to collaborate successfully.

Q. What do you like to do in your spare time?

I love to read — and I’m a fast reader too! I can read a novel in a day. I also write a lot outside of work. I’ve been published in newspapers and three times in the Chicken Soup for the Soul series. I write about social issues and topics that touch me personally. I also enjoy talking to other people to get new perspectives on different topics. I feel there’s always something you can learn from other people. 


Are you interested in getting from point A to point B? Browse our job listings today to start planning your path to CrowdStrike.

The post Senior UX Writer Hema Manwani on Kickstarting a Career in Cybersecurity and Shifting to Remote Work appeared first on crowdstrike.com.

Humio Recognized as Top 3 Observability Award Winner by EMA

15 September 2021 at 13:03

Humio delivers modern log management with streaming observability to enable customers to log everything and answer anything in real time. Today, Humio is proud to be recognized by Enterprise Management Associates (EMA) as a Top 3 Award Winner for Log Management and Observability. This award is further validation of Humio’s approach to delivering streaming observability for our customers. 

Overcoming Today’s Observability Challenges

To prevent system outages and keep your organization safe, it’s more important than ever to have real-time visibility into your organization’s systems and to log all of your data, turning that data into actionable insights that help your team respond quickly to incidents. EMA has recognized Humio’s unique ability to ingest data from almost any source to help organizations answer any question. 

Humio’s index-free architecture enables real-time querying and alerting and delivers intelligent insights based on the context of each query. The end result is that developers, infrastructure operations teams and business staff can discover previously hidden correlations between business KPIs, user experience, application performance, infrastructure configuration, code changes and more. 

“Our developers are digging into their logs much more than before, setting alerts, creating dashboards. It really means the world in a self-service, developer-focused microservice environment.” — Humio customer Kasper Nissen, cloud architect at Lunar

Humio’s Business Impact

EMA highlights the business impact of Humio through the following capabilities:

  • Index-free logging that enhances developer productivity and accelerates software development
  • Real-time, machine learning-driven identification of important events
  • Business-driven optimization for IT and DevOps
  • Built-in, cloud-native log management, such as for Kubernetes
  • Continuous compliance management through automated auditing

Torsten Volk, managing research director at EMA, summarizes the power of Humio, saying, “Humio helps organizations tap into their vastly unused operations data without having to worry about the boundaries of individual data sources or the time it will take to execute complex queries that cross these boundaries. This ability to simply correlate anything with anything else is exactly what is needed to create a data-driven culture within all parts of an organization. When you log everything, you can basically ask any question. This is exciting.”

Read the full report to learn how customers can use Humio to transform their businesses.


The post Humio Recognized as Top 3 Observability Award Winner by EMA appeared first on crowdstrike.com.

Shining a Light on DarkOxide

15 September 2021 at 16:30

Since September 2019, Falcon OverWatch™ has been tracking an as-yet-unattributed actor conducting targeted operations against organizations within the Asia Pacific (APAC) semiconductor industry. CrowdStrike Intelligence tracks this activity cluster under the name DarkOxide.

CrowdStrike Intelligence has not yet determined the motivation of this activity cluster, but its tactics, techniques and procedures (TTPs) and target scope indicate it is more likely focused on the theft of sensitive information than on direct financial gain.

Telltale TTPs Reveal a Cluster of Activity

The DarkOxide cluster exhibits a very specific set of TTPs that have changed very little over the last two years.

Initially, the actor engages a target via a business-oriented social media platform under the guise of carrying out a recruitment drive (to read more about this technique, see https://attack.mitre.org/techniques/T1566/003/). The target is then encouraged to download a lure document purportedly relating to a job opening. In reality, this file is a malicious executable with a double file extension. The executables in these lures have used non-standard executable file extensions such as .PIF (program information file) and .SCR (screensaver). As Windows, by default, hides the extension of known file types, these files initially appear to be legitimate document files when viewed in Windows File Explorer. 
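To illustrate why such names slip past a casual glance, a simple heuristic can flag double-extension lures of this kind. This is a hypothetical helper for illustration only, not a Falcon or CrowdStrike feature, and the token sets are assumptions:

```python
# Heuristic for double-extension lures like "Resume pdf.pif":
# an executable extension at the end, with a document-type token
# hidden earlier in the name. Illustrative only.
DOC_TOKENS = {"pdf", "doc", "docx", "xls", "txt"}
EXEC_EXTS = {"pif", "scr", "exe", "com"}

def looks_like_double_extension(filename: str) -> bool:
    # Treat spaces like dots so "Resume pdf.pif" splits the same
    # way "Resume.pdf.pif" would.
    tokens = filename.lower().replace(" ", ".").split(".")
    return (
        len(tokens) >= 2
        and tokens[-1] in EXEC_EXTS
        and any(t in DOC_TOKENS for t in tokens[:-1])
    )
```

Real defenses rely on behavioral detection rather than filename patterns, but the sketch shows the shape of the masquerade.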

To date, the targets of the phishing attacks have included engineering staff with access to sensitive documents and source code, indicating that theft of intellectual property is the likely motivation for these operations.

The following screenshot shows the detection that appears in the CrowdStrike Falcon UI when a victim runs one of these malicious screensaver files. In this case, the customer had enabled preventions, allowing the pattern of activity to be recognized by the sensor and terminated before the actor could complete the installation of their remote access software.


When the payload is executed, it utilizes a number of scripting interfaces, including PowerShell and Visual Basic Script, to download a further malicious binary executable. This second executable, also with a .PIF or .SCR extension, in turn installs a copy of the legitimate remote access tool Remote Utilities, with a preconfigured command-and-control (C2) address. In a small number of cases, in addition to Remote Utilities, the actor also installed the Total Manager Pro file manager, likely in order to conduct file system searches or to package files for exfiltration.

Although the Remote Utilities binary, rutserv.exe, is a legitimate signed binary, its use is relatively rare across CrowdStrike’s customer set.

As of at least March 2020, this TTP has been slightly modified, removing the first stage downloader and moving directly from the initial phishing attack to the installation of the Remote Utilities software.

The following table shows how these TTPs have been shared across a number of intrusions and how they map to the MITRE ATT&CK® framework.

In June 2021, the cluster was observed deploying additional tooling to a host. Again these tools were commercial off-the-shelf software. The tooling observed included:

  • Total Spy: a commercial spyware suite with capabilities including keylogging, screen capture, messaging capture and social network capture
  • RDP Wrapper: an open source tool allowing RDP access to the host
  • DWService: an open source tool allowing the host to be remotely controlled from a web browser

In almost all cases, the cluster’s activity has been frustrated, either by preventions enabled by the customer, or by early notifications from Falcon OverWatch, allowing the affected systems to be contained before the actor could take further actions on objectives. In the single case where follow-up activity was observed, it consisted of modifications to the registry in order to allow further access to the host via Remote Desktop Protocol. (To read more about these techniques see: https://attack.mitre.org/techniques/T1112/ and https://attack.mitre.org/techniques/T1133/.)

Since CrowdStrike began tracking DarkOxide, the activity cluster has continued to conduct operations against a number of semiconductor companies, almost exclusively located within the South Asia region. 

Your Best Defense Against DarkOxide

Over the past two years, Falcon OverWatch, alongside CrowdStrike Intelligence, has been tracking an activity cluster, DarkOxide, actively targeting the semiconductor industry. Although the actor’s TTPs have remained largely consistent, they have demonstrated the capacity to adapt and improve their processes, having recently streamlined their activity by removing the need for a first-stage downloader in their intrusion process. 

Defenders in the semiconductor industry should be particularly alert to this activity, which drives home the need to enlist end users as the first line of defense. The actor is actively targeting employees via social media to gain initial access. Well-trained staff can be an asset in combating the continued threat of phishing and related social engineering techniques.

As noted above, the Falcon platform can identify and prevent actors’ use of malicious files with double extensions, but it is crucial the sensor is rolled out across the environment with appropriate prevention settings turned on. Defenders can slow down malicious activity by employing strict user account management based on the principle of least privilege. 

Finally, but most crucially, this activity shows the lengths to which threat actors go in their attempt to evade automated detections. Whether by gaining access through phishing activities, or by using legitimate tooling to achieve actions on objectives, threat actors are always looking for new ways to pierce an organization’s defenses. A managed threat hunting service, like Falcon OverWatch, provides the continuous monitoring that is required to identify and disrupt malicious activity before the damage is done.

Indicators of Compromise

First Stage Payload

SHA256 Hash Lure Filename
48c19ad7436f3d311e9e63327801d0a2d6d25c0d7c7bbc3d2c6a32afb95a0187 Final.exe
9d34f653edf948d9f46522081ff00dddf2f4b62b18d138c49e3b281ca953aeb1 Resume pdf.pif
1fcb6b54b17a6c3df0047a48280b4dcab8b2f2cad2ef4b8c802b05119cedce42 Talent Recruitment Web meeting system.pif
6d1480cd5b10739af130850f9d9bfa7ebe50024c5db68dd231bc7e4bd560ffa6 msi6.9.pif
9d68049510581ff4827fd72510c59d685ce54609b07733be17492bf2403442b4 Job description sr.scr
b414dca98e117d3755903ff27ffc07880f1fe2bfabfb49f6956cf82c06f4eab1 Job description sr.scr
8045f3e00e52c663ab942f39ec779ffc7ac90197ece8e574e5a70c422aa32b36 Job detail description.scr
49fbf9884299fbc6b09e640449fdc834f82a752908d381a68e2057a9861e3618 Job description.scr
186a7abdfcc2df113148650eb1673620a11bb8bfcf3c53f8a1c7429703cda715 Job detail description.scr
45e6653af40fb838eae0657a34905d5ba36052bd41819873d2afc240874b14b6 Qualcomm Job description India.scr
041398a0d34794df5b8d22683f5be7991647416f6243c7bc0441abd7c71c7c27 Qualcomm Job details.scr

Second Stage Payload

SHA256 Hash Filename
9d34f653edf948d9f46522081ff00dddf2f4b62b18d138c49e3b281ca953aeb1 one.pif

Legitimate Binaries Observed

SHA256 Hash Filename
5ada6d1fd62bb1740ea80a30788e55988758acc2b835e6835d6524af1e7afcbd rutserv.exe
C295bd2653d6d8752ff5805b4114eee8e4370a0f16e922d81aecc5f49fa8c9c9 rfusclient.exe
966ef76fe3476d530b1b97a6f40947ed14ada378f13e44ecfe774edc998cd0b0 srvinst.exe
798af20db39280f90a1d35f2ac2c1d62124d1f5218a2a0fa29d87a13340bd3e4 rdpwrap.dll
07935229c213d1735655cc8453daa29718da2656546e05d5b3990cb49c248b98 RDPWInst.exe
43fbae4f6637c8eaa955db7e394eebd39cd261f91f36a5bc646303f123e68f13 tsmon.exe
39235102a3aeeb88678cad8d841292fc17ec3b0551cf57d755fdd523985567e8 tsmon4.exe
1ad4b06e282e3c3f22c6d194dabdc272215154f004c57b93b3882c161efc5279 tsmon5.exe
4515d7ee0d5e2e2e236499d35a154b427f07124e9edd379b6e9d62af2ae88c4d tsmon6.exe

Hard-Coded Command and Control for Remote Utilities

  • 54.149.69[.]226
  • 54.188.107[.]146
  • 60.254.95[.]183 
  • 34.221.96[.]116


The post Shining a Light on DarkOxide appeared first on crowdstrike.com.