
Threat Roundup for November 26 to December 3

Today, Talos is publishing a glimpse into the most prevalent threats we've observed between Nov. 26 and Dec. 3. As with previous roundups, this post isn't meant to be an in-depth analysis. Instead, this post will summarize the threats we've observed by highlighting key behavioral characteristics,...

[[ This is only the beginning! Please visit the blog for the complete entry ]]

Talos Takes Ep. #79: Emotet's back with the worst type of holiday present

3 December 2021 at 15:46
By Jon Munshaw. The latest episode of Talos Takes is available now. Download this episode and subscribe to Talos Takes using the buttons below, or visit the Talos Takes page. Emotet is back, and it brought the worst possible holiday present (just in time for peak spam season, too!). We...

[[ This is only the beginning! Please visit the blog for the complete entry ]]

End-to-end Testing: How a Modular Testing Model Increases Efficiency and Scalability

3 December 2021 at 09:00

In our last post, Testing Data Flows using Python and Remote Functions, we discussed how organizations can use remote functions in Python to create an end-to-end testing and validation strategy. Here we build on that concept and discuss how it is possible to design the code to be more flexible.  

For our purposes, flexible code means two things:

  1. Writing the code in such a way that most of it can be reused
  2. Creating a pool of functionalities that can be combined to create tests that are bigger and more complex.

What is a flow?

A flow is any unique, complex sequence of steps and interactions that is independently testable. Flows can mimic business or functional requirements, and they can be combined in any way to create higher-level or more specific flows.

The Need to Update the Classic Testing View

A classical testing view is defined as a sequence of steps that perform a particular action on the system. A test typically contains:

  • A setup, which prepares the test environment for the actual test (e.g., creating users, populating data into a database)
  • A series of actions that modify the current state of the system and check the outcome of those actions
  • A teardown that returns the system to its state before the test

Each of these actions is separate but dependent on the others. This means that for each test, we can assume that the setup has run successfully and, in the end, that the teardown has run to clean it up. In complex test scenarios this can become cumbersome and difficult to orchestrate or reuse.

In a classical testing model, unless the developer writes helpers that are used inside the test, the code typically cannot be reused to build other tests. In addition, when helpers are written, they tend to be specific to certain use cases and scenarios, making them irrelevant in most other situations. On the other hand, some helpers are so generic that they will still require the implementation of additional logic when using them with certain test data.

Finally, using test-centric development means that many test sequences or test scenario steps must be rewritten every time you need them in different combinations of functionality and test data.

CrowdStrike’s Approach: A Modular Testing Model

To avoid these issues, we take a modularized approach to testing. You can imagine each test component as a Lego block, wherein each piece can be made to fit together in order to create something bigger or more complex. 

These flows can map to a specific functionality and should be atomic. As more complex architectures are built, unless the functionality has changed, you don’t need to rewrite existing functions to fit. Rather, you can combine them to follow business context and business use cases. 

The second part of our approach relates to functional programming: we create independent, testable functions and separate the data into payloads passed to those functions, making them easy to process in parallel.

Business Case: A Product That Identifies Security Vulnerabilities

To illustrate this point, let’s take a business use case for a product that identifies vulnerabilities for certain applications installed on a PC. The evaluation will be based on information sent by agents installed on PCs about supported installed applications. This information could include the name of the application, the installed version and the architecture (32- or 64-bit). This use case dictates that if a computer is online, the agent will send all relevant information to the cloud, where it will be processed and evaluated against a publicly available database of vulnerabilities (NVD). (If you are unfamiliar with common vulnerabilities and exposures, or CVE, learn more here.)
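To make the evaluation step more concrete, here is a minimal sketch of the idea, assuming a hypothetical, hard-coded mapping of known CVEs per application and version; the class names and data below are illustrative only and are not part of the actual product, which performs this evaluation in the cloud against the NVD.

from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class InstalledApp:
   name: str          # e.g. "openssl", as reported by the agent
   version: str       # installed version reported by the agent
   architecture: str  # "x86" or "x64"


# Hypothetical, hard-coded vulnerability data keyed by (name, version).
# The real product performs this lookup in the cloud against the NVD.
KNOWN_CVES: Dict[Tuple[str, str], List[str]] = {
   ("openssl", "1.0.1"): ["CVE-2014-0160"],
}


def evaluate_app(app: InstalledApp) -> List[str]:
   """Return the CVE IDs known for this application name and version."""
   return KNOWN_CVES.get((app.name, app.version), [])


print(evaluate_app(InstalledApp(name="openssl", version="1.0.1", architecture="x64")))
# ['CVE-2014-0160']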

Our testing flows will be designed around the actual product and business flows. You can see below a basic diagram of the architecture for this proposed product.

You can see the following highlights from the diagram above:

  • A database of profiles for different OS and application versions, to cover a wide range of configurations
  • An orchestrator for the tests, called the Test Controller, with the following functionalities:
    • An algorithm for selecting datasets based on the particularities of the scenario it has to run
    • Support for creating constructs for the simulated data
    • Constructs that will be used to create our expected data in order to do our post-processing validations

There is a special section for generating and sending data points to the cloud. This can be distributed and every simulated agent can be run in parallel and scaled horizontally to cover performance use cases.

Interaction with post-processed data is done through internal and external flows, each with its own capabilities and access to data through auto-generated REST/GRPC clients.

Below you can see a diagram of the flows designed to test the product and interaction between them.

A closer look at the flows diagram and code

Flows are organized into separate packages based on actual business needs. They can be differentiated into public and internal flows. A general rule of thumb is that public flows can be used to design test scenarios, whereas internal flows should only be used as helpers inside other flows. Public flows should always implement the same parameters, which are the structures for test data (in our case being simulated hosts and services).

# Build Simulated Hosts/Services Constructs

In this example all data is stored in a simulated host construct. This is created at the beginning of the test based on meaningful data selection algorithms and encapsulates data relevant to the test executed, which may relate to a particular combination of OS or application data.

from dataclasses import dataclass
from typing import List

import agent_primitives

from api_helpers import VulnerabilitiesDbApiHelperRest, InternalVulnerabilitiesApiHelperGrpc, ExternalVulnerabilitiesApiHelperRest

# ArchitectureEnum and get_test_data_from_s3 are assumed to come from internal helper modules.

@dataclass
class TestDataApp:
   name: str
   version: str
   architecture: str
   vendor: str


@dataclass
class TestDataDevice:
   name: str  # device name, referenced by flow_agent_send_device_information below
   architecture: ArchitectureEnum
   os_version: str
   kernel: str


@dataclass
class TestDataProfile:
   app: TestDataApp
   device: TestDataDevice


@dataclass
class SimulatedHost:
   id: str
   device: TestDataDevice
   apps: List[TestDataApp]
   agent: agent_primitives.SimulatedAgent


def flow_build_test_data(profile: TestDataProfile, count: int) -> List[SimulatedHost]:
   test_data_configurations = get_test_data_from_s3(profile, count)
   simulated_hosts = []
   for configuration in test_data_configurations:
       agent = agent_primitives.get_simulated_agent(configuration)
       host = SimulatedHost(id=configuration.get('id'),
                            device=TestDataDevice(**configuration.get('device')),
                            apps=[TestDataApp(**app) for app in configuration.get('apps')],
                            agent=agent)
       simulated_hosts.append(host)
   return simulated_hosts

Once the Simulated Host construct is created, it can be passed to any public flows that accept that construct. This will be our container of test data to be used in all other testing flows. 

In case you need to mutate states or other information related to that construct, any particular flow can return the host’s constructs to be used by another higher-level flow.

The TestServices construct encompasses all the REST/GRPC service clients that will be needed to interact with cloud services to perform queries, get post-processing data, etc. It is initialized once and passed around where needed.

@dataclass
class TestServices:
   vulnerabilities_db_api: VulnerabilitiesDbApiHelperRest
   internal_vulnerabilities_api: InternalVulnerabilitiesApiHelperGrpc
   external_vulnerabilities_api: ExternalVulnerabilitiesApiHelperRest

Function + data constructs = flow. Separation of data and functionality is crucial in this approach. Besides letting a single function work with a large number of payloads that implement the same structure, it also makes curating datasets a lot easier, because the complex logic for selecting data for particular scenarios stays independent from the function implementation.

# Agent Flows

def flow_agent_ready_for_cloud(simulated_hosts: List[SimulatedHost]):
   for host in simulated_hosts:
       host.agent.ping_cloud()
       host.agent.keepalive()
       host.agent.connect_to_cloud()

def flow_agent_send_device_information(simulated_hosts: List[SimulatedHost]):
   for host in simulated_hosts:
       host.agent.send_device_data(host.device.name)
       host.agent.send_device_data(host.device.architecture)
       host.agent.send_device_data(host.device.os_version)
       host.agent.send_device_data(host.device.kernel)

def flow_agent_application_information(simulated_hosts: List[SimulatedHost]):
   for host in simulated_hosts:
       for app in host.apps:
           host.agent.send_application_data(app.name)
           host.agent.send_application_data(app.version)

Notice how the name of each function captures its main purpose; for example, flow_agent_send_device_information sends device information such as the OS version and the device name.

# Internal API Flows

Internal flows are mainly used to gather information from services and do validations. For validations, we use a Python library called PyHamcrest and a generic validation method that compares our internal structures with the expected outcome built at the beginning of the test.

import logging

from retry import retry  # third-party 'retry' package providing the @retry decorator

# build_expected_flows, test_utils and validate_response_call_and_get_json are
# internal helper modules/functions of the test framework.


@retry(AssertionError, tries=8, delay=3, backoff=2, logger=logging)
def flow_validate_simulated_hosts_for_vulnerabilities(hosts: List[SimulatedHost], services: TestServices):
   expected_data = build_expected_flows.evaluate_simulated_hosts_for_vulnerabilities(hosts, services)
   for host in hosts:
       actual_data = flow_get_simulated_host_vulnerabilities(host, services)
       augmented_expected = {
           "total": sum([expected_data.total_open,
                         expected_data.total_reopen,
                         expected_data.total_closed])
       }
       actual, expected, missing_fields = test_utils.create_validation_map(actual_data, [expected_data],
                                                                           augmented_expected)
       # first argument assumed to be the raw actual data returned by the service
       test_utils.assert_on_validation_map(actual_data, actual, expected, missing_fields)


def flow_get_simulated_host_vulnerabilities(simulated_host: SimulatedHost, services: TestServices):
   response = services.internal_vulnerabilities_api.get_vulnerabilities(simulated_host)
   response_json = validate_response_call_and_get_json(response, fail_on_errors=True)
   return response_json

We first use a method called create_validation_map, which takes a list of structures containing the data relevant to the actual response from the services. This is used to normalize all the structures and create an actual and an expected validation map, which are then used in the assert_on_validation_map method, together with a specific matcher from PyHamcrest called has_entries, to do the assert.
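For readers unfamiliar with PyHamcrest, the sketch below shows roughly what such a generic assertion could look like with the has_entries matcher; the helper name and the data shapes are assumptions for illustration, not the actual implementation.

from hamcrest import assert_that, has_entries


def assert_on_validation_map_sketch(actual: dict, expected: dict):
   # has_entries passes when `actual` contains at least the given key/value pairs,
   # so extra fields in the service response do not break the assertion.
   assert_that(actual, has_entries(expected))


# Example: the actual response may carry more fields than we choose to validate.
actual_response = {"host_id": "abc-123", "total": 3, "status": "processed"}
expected_subset = {"host_id": "abc-123", "total": 3}
assert_on_validation_map_sketch(actual_response, expected_subset)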

The Advantages of a Modular Testing Approach 

There are several key advantages to this approach: 

  1. Increased testing versatility and simplicity. Developers are not tied to a particular implementation because everything specific lives inside its own function. Modules are independent and can work in any combination, and the code base is independently testable. As a result, if two flows do the same thing, one of them can be removed.
  2. Improved efficiency. It is possible to “double track” most of the flows so that they can be processed in parallel. This means that they can run in any sequence and that the load can be spread across a distributed infrastructure. Because there are no dependencies between the flows, you can also “parallelize” them locally across multiple threads or processes (see the sketch after this list).
  3. Enhanced testing maturity. Taken together, these principles mean that developers can build increasingly complex tests by reusing common elements and building on top of what already exists. Test modules can be developed in parallel because they have no dependencies between them. Every flow covers a small part of the functionality.
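As a rough illustration of the local parallelization mentioned in point 2, the sketch below runs one of the public agent flows over batches of simulated hosts using a thread pool; the batch size, worker count and choice of flow are arbitrary assumptions for the example.

from concurrent.futures import ThreadPoolExecutor
from typing import Callable, List


def run_flow_in_parallel(flow: Callable, simulated_hosts: List[SimulatedHost],
                         batch_size: int = 10, max_workers: int = 4):
   # Flows share no state, so independent batches of hosts can be processed
   # concurrently by the same flow function.
   batches = [simulated_hosts[i:i + batch_size]
              for i in range(0, len(simulated_hosts), batch_size)]
   with ThreadPoolExecutor(max_workers=max_workers) as executor:
       list(executor.map(flow, batches))  # list() forces evaluation and propagates exceptions


# Example usage with one of the public flows defined earlier:
# run_flow_in_parallel(flow_agent_send_device_information, hosts)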

Final Thoughts: When to Use Flow-based Testing

Flow-based testing works well in end-to-end tests for complex products and distributed architectures because it draws on best practices for writing and testing code at scale. Testing and validation have a basis in experimental science, and implementing a simulated version of the product inside the validation engine is still one of the most comprehensive ways to test the quality of a product. Flow-based testing helps reduce the complexity of building this and makes it more scalable and easier to maintain than the classical testing approach.

However, it is not ideal when testing a single service due to the complexity that exists at the beginning of the process related to data separation and creation of structures to serialize data. In those instances, the team would probably be better served by a classical testing approach. 

Finally, in complex interactions between multiple components, functionality needs to be compartmentalized in order to run it at scale. In that case, flow-based testing is one of the best approaches you can take.

When do you use flow-based testing — and what questions do you have? Sound off on our social media channels @CrowdStrike.

UltraVNC Viewer VNC client Remote Memory Leak Vulnerability

2 December 2021 at 23:06

EIP-5182fb5b

A vulnerability exists within UltraVNC Viewer due to a lack of proper stack memory buffer cleanup before constructing the ‘rfbTextChat’ message, which results in a leak of 3 bytes of stack memory. An attacker can leverage this in conjunction with other vulnerabilities to execute code in the context of the UltraVNC Viewer process.

Vulnerability Identifiers

  • Exodus Intelligence: EIP-5182fb5b
  • MITRE CVE: 

Vulnerability Metrics

  • CVSSv2 Score: 4.3

Vendor References

  • https://github.com/ultravnc/UltraVNC/releases/tag/1.3.4

Discovery Credit

  • Exodus Intelligence

Disclosure Timeline

  • Disclosed to affected vendor: June 21st, 2021
  • Disclosed to public: September 25th, 2021

Further Information

Readers of this advisory who are interested in receiving further details around the vulnerability, mitigations, detection guidance, and more can contact us at [email protected].

Researchers who are interested in monetizing their 0Day and NDay can work with us through our Research Sponsorship Program.

The post UltraVNC Viewer VNC client Remote Memory Leak Vulnerability appeared first on Exodus Intelligence.


Technical Advisory – Authenticated SQL Injection in SOAP Request in Broadcom CA Network Flow Analysis (CVE-2021-44050)

2 December 2021 at 19:54
Vendor: Broadcom
Vendor URL: https://www.broadcom.com/
Systems Affected: CA Network Flow Analysis
Versions affected: 9.3.8, 9.5, 10.0, 10.0.2, 10.0.3, 10.0.4, 10.0.5, 10.0.6, 10.0.7, 21.2.1 (Note: older, unsupported versions may be affected)
Author: Anthony Ferrillo <anthony.ferrillo[at]nccgroup[dot]com>
CVE Identifier: CVE-2021-44050
Advisory URL: https://support.broadcom.com/external/content/security-advisories/CA20211201-01-Security-Notice-for-CA-Network-Flow-Analysis/19689
Risk: Medium - 6.5 (CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N) (Authenticated SQL Injection)

Summary

The Network Flow Analysis software (formerly known as CA Network Flow Analysis) is a network traffic monitoring solution, which is used to monitor and optimize the performance of network infrastructures. The “Interfaces” Section of the Network Flow Analysis web application made use of a Flash application, which performed SOAP requests. The Flash request was reachable from the following URL:

The Interface search bar performed internal SOAP requests. These requests provided a series of parameters which were used to perform a SQL query to retrieve information from the backend database. The parameters were not validated before being used in the SQL query, allowing a malicious user to inject arbitrary SQL to enumerate and retrieve information from the database.

Impact

Successful exploitation of this issue would allow a low privileged user to enumerate and retrieve information from the backend database of the Network Flow Analysis web application.

Details

The Interface search bar performed internal SOAP requests. The following is an example of the request:

POST //ra/authorization/GroupTreeWS.asmx HTTP/1.1
[…]

<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:s="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <SOAP-ENV:Body>
    <tns:GetRouterInterfaceByGroupID xmlns:tns="http://example/GroupTreeWS">
      <tns:userId>61</tns:userId>
      <tns:groupId>1597</tns:groupId>
      <tns:orderBy>RouterName, Name </tns:orderBy>
      <tns:sortOrder></tns:sortOrder>
      <tns:limit>10</tns:limit>
      <tns:offset>0</tns:offset>
      <tns:filter>test</tns:filter>
      <tns:activeFilter xsi:nil="true"/>
    </tns:GetRouterInterfaceByGroupID>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

It was possible to retrieve a verbose error message from the backend database by tampering with the orderBy parameter of the request. An example request demonstrating the vulnerability is shown below:

Request

POST //ra/authorization/GroupTreeWS.asmx HTTP/1.1
[…]

<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:s="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <SOAP-ENV:Body>
    <tns:GetRouterInterfaceByGroupID xmlns:tns="http://example/GroupTreeWS">
      <tns:userId>61</tns:userId>
      <tns:groupId>1597</tns:groupId>
      <tns:orderBy>RouterName, Name' or 0=0 -- </tns:orderBy>
      <tns:sortOrder></tns:sortOrder>
      <tns:limit>10</tns:limit>
      <tns:offset>0</tns:offset>
      <tns:filter>test</tns:filter>
      <tns:activeFilter xsi:nil="true"/>
    </tns:GetRouterInterfaceByGroupID>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

The following payload was used for the boolean-based blind SQL injection in the request:

' or 0=0 --

Recommendation

Upgrade to 21.2.2 or above.
Alternatively, apply the appropriate fix provided for 10.0.2, 10.0.3, 10.0.4, 10.0.5, 10.0.6, 10.0.7, and/or 21.2.1.

Vendor Communication

2021-06-10 - Reported to Broadcom Product Security Center
2021-06-29 - Broadcom confirm they are able to reproduce the vulnerability and are working to address it
2021-06-29 - We request an estimated date for a fix from Broadcom
2021-07-16 - Broadcom advise they are still working on addressing the issue and request that we hold off on any disclosure
2021-12-01 - New version released, which addresses the reported vulnerability.
2021-12-02 - Advisory Published

About NCC Group

NCC Group is a global expert in cybersecurity and risk mitigation, working with businesses to protect their brand, value and reputation against the ever-evolving threat landscape. With our knowledge, experience and global footprint, we are best placed to help businesses identify, assess, mitigate & respond to the risks they face. We are passionate about making the Internet safer and revolutionizing the way in which organizations think about cybersecurity.

Published date:  12/02/2021

Written by:  Anthony Ferrillo

Threat Source Newsletter (Dec. 2, 2021)

2 December 2021 at 19:00
Newsletter compiled by Jon Munshaw. Good afternoon, Talos readers. The Thanksgiving holiday in the U.S. didn't slow us down at all, even though we were all still trying to sleep off the food coma from the long weekend. But we came back this week with lots of fun content. Cisco received...

[[ This is only the beginning! Please visit the blog for the complete entry ]]

Why Actionable Logs Require Sufficient History

2 December 2021 at 05:16

This blog was originally published Oct. 26, 2021 on humio.com. Humio is a CrowdStrike Company.

Improve visibility and increase insights by logging everything

ITOps, DevOps and SecOps teams need historical log data to ensure the security, performance and availability of IT systems and applications. Detailed historical log data is fundamental for understanding system behavior, mitigating security threats, troubleshooting problems and isolating service quality issues.

But when it comes to indexing, structuring, and maintaining log data, traditional log management solutions are notoriously inefficient and costly. Many businesses today simply can’t afford to gather and retain massive volumes of log data from all their networking gear, security products and other IT platforms using conventional log management solutions.

To make matters worse, many log management vendors use volume-based software licensing schemes that are prohibitively expensive for most businesses. For all these reasons, most organizations limit the types of log records they collect or periodically age out log data, leaving security and IT operations professionals in the dark.

So what can be done about it?

Comprehensive historical log data is fundamental for IT and security operations

Whether you work in DevOps, ITOps or SecOps, comprehensive historical log records are essential tools of the trade. They are critical for:

  • Troubleshooting and root cause analysis. Historical data is fundamental for identifying IT infrastructure issues, pinpointing faults and resolving problems. By going back in time and analyzing detailed log records, you can correlate network and system issues with configuration changes, software upgrades or other adds, moves and changes that might have affected IT infrastructure and impacted applications.
  • Mitigating security threats. Historical data is also fundamental for isolating security breaches and remediating threats. By examining access logs and investigating changes to firewall rules or other security settings, you can pinpoint attacks, take corrective actions and avoid extensive data loss or system downtime.
  • Optimizing performance and service quality. Historical data is vital for identifying compute, storage and networking performance bottlenecks and for optimizing user experience. By analyzing detailed performance data from a variety of sources, development and operations teams can gain insights into design and implementation issues impairing application service quality or response time.

Log everything with Humio

Humio’s flexible, modern architecture improves the log management experience by transforming massive volumes of historical log data into meaningful and actionable insights. It enables complete observability, so teams can answer any question, explore threats and vulnerabilities, and gain valuable insights from all logs in real time. Many organizations still struggle with cost constraints dictating their log strategies, but unlike conventional log management systems, Humio cost-effectively ingests any amount of data at any throughput, providing the full visibility needed to identify, isolate and resolve the most complex issues. The TCO Estimator is a quick and easy way to see this value.

With Humio’s innovative index-free design, organizations are no longer forced to make difficult decisions about which data to log and how long to retain it. By logging everything, organizations gain the holistic insights needed to investigate and mitigate any issue.


Magnat campaigns use malvertising to deliver information stealer, backdoor and malicious Chrome extension

2 December 2021 at 13:00
By Tiago Pereira. Talos recently observed a malicious campaign offering fake installers of popular software as bait to get users to execute malware on their systems. This campaign includes a set of malware distribution campaigns that started in late 2018 and have targeted mainly Canada, along...

[[ This is only the beginning! Please visit the blog for the complete entry ]]

DORA and ICT Risk Management: how to self-assess your compliance

2 December 2021 at 10:09

TL;DR – In this blogpost, we will give you an introduction to the key requirements associated with the Risk Management Framework introduced by DORA (the Digital Operational Resilience Act).

More specifically, throughout this blogpost we will try to formulate an answer to the following questions:

  • What are the key requirements associated with the Risk Management Framework of DORA?
  • What are the biggest challenges associated with these requirements?
  • How can you prepare yourself, and what actions should you take to align your organization with the Risk Management Framework requirements?

In the following sections, we will share our thoughts on how to self-assess your compliance on this requirement. Note also that, if this self-assessment checklist is of interest to you, you will be able to find it in Excel format in our GitHub repository, here.

What are the ICT Risk Management requirements?

DORA requires organizations to apply a strong risk-based approach in their digital operational resilience efforts. This approach is reflected in Chapter 2 of the regulation.

Chapter 2 – Section 1 – Risk management governance

The first part of Chapter 2 addresses the risk management governance requirements. They include, but are not limited to, setting roles and responsibilities of the management body, planning and periodic auditing.

This section sets out the responsibilities of the management body for the definition, approval and oversight of all arrangements related to the ICT risk management framework.

This section also defines and attributes the role of ICT third party Officer. This position shall be in charge of defining and monitoring all the arrangements concluded with ICT third-party service providers on the use of ICT services.

The following table provides a checklist for financial entities to self-assess their compliance on this requirement:

Article 4 – Governance and organisation

  • Responsibilities of the management body: The management body shall define, approve, oversee and be accountable for the implementation of all arrangements related to the ICT risk management framework.
  • ICT third party Officer: The role of ICT third party Officer shall be defined to monitor the arrangements concluded with ICT third-party service providers on the use of ICT services.
  • Training of the management body: The management body shall, on a regular basis, follow specific trainings related to ICT risks and their impact on the operations.

Chapter 2 – Section 2 – Risk management framework

The second part of Chapter 2 introduces the ICT risk management framework itself as a critical component of the regulation.

ICT risk management requirements form a set of key principles revolving around specific functions (identification, protection and prevention, detection, response and recovery, learning and evolving, and communication). Most of them are recognized by current technical standards and industry best practices, such as the NIST framework, and thus DORA does not impose specific standardization itself.

Before exploring the functions, let’s note that DORA specifies several governance mechanisms around the risk management framework. They include, but are not limited to, setting the objectives of the risk management framework, planning and periodic auditing.

The following table provides a checklist for financial entities to self-assess their compliance on these governance requirements:

Article 5 – ICT risk management framework

  • Protecting physical elements: Entities shall define a well-documented ICT risk management framework which shall include strategies, policies, procedures, ICT protocols and tools which are necessary to protect all relevant physical components and infrastructures.
  • Information on ICT risks: Entities shall minimise the impact of ICT risk by deploying appropriate strategies, policies, procedures, protocols and tools.
  • ISMS: Entities shall implement an information security management system based on recognized international standards.
  • Three lines of defence: Entities shall ensure appropriate segregation of ICT management functions, control functions, and internal audit functions.
  • Review: The ICT risk management framework shall be reviewed at least once a year, as well as upon the occurrence of major ICT-related incidents.
  • Improvement: The ICT risk management framework shall be continuously improved on the basis of lessons derived from implementation and monitoring.
  • Audit: The ICT risk management framework shall be audited on a regular basis by ICT auditors.
  • Remediation: Entities shall define a formal follow-up process for the timely verification and remediation of critical ICT audit findings.
  • ICT risk management framework objectives: The ICT risk management framework shall include the methods to address ICT risk and attain specific ICT objectives.

Identification

Financial entities shall identify and classify their ICT-related business functions, information assets and supporting ICT resources, on the basis of which the risks posed by current cyber threats and ICT vulnerabilities are identified and assessed.

The following table provides a checklist for financial entities to self-assess their compliance on the Identification requirement:

Article 7 – Identification

  • Asset Identification: Entities shall identify and adequately document:
    (a) ICT-related business functions
    (b) Information assets supporting these functions
    (c) ICT system configurations and interconnections with internal and external ICT systems
  • Asset Classification: Entities shall classify and adequately document:
    (a) ICT-related business functions
    (b) Information assets supporting these functions
    (c) ICT system configurations and interconnections with internal and external ICT systems
  • Asset Classification Review: Entities shall review as needed, and at least yearly, the adequacy of the classification of the information assets.
  • ICT risks Identification and Assessment: Entities shall identify all sources of ICT risks, and assess cyber threats and ICT vulnerabilities relevant to their ICT-related business functions and information assets.
  • ICT risks Identification and Assessment Review: Entities shall regularly review the ICT risks Identification and Assessment, yearly or upon each major change in the network and information system infrastructure.
  • ICT mapping: Entities shall identify all ICT systems accounts, the network resources and hardware equipment:
    (a) Entities shall map physical equipment considered critical
    (b) Entities shall map the configuration of the ICT assets and the links and interdependencies between the different ICT assets
  • ICT third-party service providers identification: Entities shall identify all ICT third-party service providers:
    (a) Entities shall identify and document all processes that are dependent on ICT third-party service providers
    (b) Entities shall identify interconnections with ICT third-party service providers
  • ICT third-party service providers identification review: Entities shall regularly review the ICT third-party service providers identification.
  • Legacy ICT systems: Entities shall, on a regular basis and at least yearly, conduct a specific ICT risk assessment on all legacy ICT systems.

This ICT risk management framework shall include the identification of critical and important functions as well as the mapping of the ICT assets that underpin them. Moreover, this ICT risk management framework shall also include the assessment of all risks associated with the ICT-related business functions and information assets identified.

What to identify and assess? Well …

  • ICT-related business functions
  • Information assets supporting these functions
  • ICT system configurations
  • Interconnections with internal and external systems
  • Sources of ICT risk
  • All ICT system accounts
  • Network resources and hardware equipment
  • Critical physical equipment
  • All processes dependent on and interconnections with ICT third-party service providers

Protection and Prevention

Financial entities shall (based on the risk assessment) set up protection and prevention measures to ensure the resilience, continuity and availability of ICT systems. These shall include ICT security strategies, policies, procedures and appropriate technologies.

The following table provides a checklist for financial entities to self-assess their compliance on this requirement:

Article 8 – Protection and Prevention

  • CIA: Entities shall develop and document an information security policy defining rules to protect the confidentiality, integrity and availability of their own, and their customers’, ICT resources, data and information assets.
  • Segmentation: Entities shall establish a sound network and infrastructure management using appropriate techniques, methods and protocols, including implementing automated mechanisms to isolate affected information assets in case of cyber-attacks.
  • Access privileges: Entities shall implement policies that limit the physical and virtual access to ICT system resources and data, and establish to that effect a set of policies, procedures and controls that address access privileges.
  • Authentication mechanisms: Entities shall implement policies and protocols for strong authentication mechanisms and dedicated control systems to prevent access to cryptographic keys.
  • ICT change management: Entities shall implement policies, procedures and controls for ICT change management, including changes to software, hardware, firmware components, and system or security changes. The ICT change management process shall be approved by appropriate lines of management and shall have specific protocols enabled for emergency changes.
  • Patching: Entities shall have appropriate and comprehensive policies for patches and updates.

What does this entail?

  • Ensuring the resilience, continuity and availability of ICT systems
  • Ensuring the security, confidentiality and integrity of data
  • Ensuring the continuous monitoring and control of ICT systems and tools
  • Defining and implementing Information security policies such as
    • Limit physical and virtual access to ICT systems
    • Protocols on strong authentication
    • Change management
    • Patching / updates management

Detection

Financial entities shall continuously monitor and promptly detect anomalous activities, threats and compromises of the ICT environment.

The following table provides a checklist for financial entities to self-assess their compliance on this requirement:

Article 9 – Detection

  • Detect anomalous activities: Entities shall have in place mechanisms to promptly detect anomalous activities:
    (a) ICT network performance issues
    (b) ICT-related incidents
  • Detect single points of failure: Entities shall have in place mechanisms to identify all potential material single points of failure.
  • Testing: All detection mechanisms shall be regularly tested.
  • Alert mechanism: All detection mechanisms shall enable multiple layers of control:
    (a) Define alert thresholds
    (b) Define criteria to trigger ICT-related incident detection
    (c) Define criteria to trigger ICT-related incident response processes
    (d) Have automatic alert mechanisms in place for relevant staff in charge of ICT-related incident response
  • Trade reports checking: Entities shall have in place systems that can effectively check trade reports for completeness, identify omissions and obvious errors and request re-transmission of any such erroneous reports.

What does this entail?

  • Ensure the prompt detection of anomalous activities
  • Enforce multiple layers of control
  • Enable the identification of single points of failure

Response and recovery (including Backup policies and recovery methods)

Financial entities shall put in place dedicated and comprehensive business continuity policies and disaster recovery plans to adequately react to identified security incidents and to ensure the resilience, continuity and availability of ICT systems.

The following table provides a checklist for financial entities to self-assess their compliance on Response and recovery requirements:

Article 10 – Response and recovery

  • ICT Business Continuity Policy: Entities shall put in place a dedicated and comprehensive ICT Business Continuity Policy as an integral part of the operational business continuity policy of the entity.
  • ICT Business Continuity Mechanisms: Entities shall implement the ICT Business Continuity Policy through appropriate and documented arrangements, plans, procedures and mechanisms aimed at:
    (a) recording all ICT-related incidents;
    (b) ensuring the continuity of the entity’s critical functions;
    (c) quickly, appropriately and effectively responding to and resolving all ICT-related incidents;
    (d) activating without delay dedicated plans that enable containment measures, processes and technologies, as well as tailored response and recovery procedures;
    (e) estimating preliminary impacts, damages and losses;
    (f) setting out communication and crisis management actions which ensure that updated information is transmitted to all relevant internal staff and external stakeholders, and reported to competent authorities.
  • ICT Disaster Recovery Plan: Entities shall implement an associated ICT Disaster Recovery Plan.
  • ICT Disaster Recovery Audit Review: Entities shall define a process for the ICT Disaster Recovery Plan to be subject to independent audit reviews.
  • ICT Business Continuity Test: Entities shall periodically test the ICT Business Continuity Policy, at least yearly and after substantive changes to the ICT systems.
  • ICT Disaster Recovery Test: Entities shall periodically test the ICT Disaster Recovery Plan, at least yearly and after substantive changes to the ICT systems.
  • Testing Plans: Entities shall include in the testing plans scenarios of cyber-attacks and switchovers between the primary ICT infrastructure and the redundant capacity, backups and redundant facilities.
  • Crisis Communication Plans: Entities shall implement a crisis communication plan.
  • Crisis Communication Plans Test: Entities shall periodically test the crisis communication plans, at least yearly and after substantive changes to the ICT systems.
  • Crisis Management Function: Entities shall have a crisis management function which, in case of activation of their ICT Business Continuity Policy or ICT Disaster Recovery Plan, shall set out clear procedures to manage internal and external crisis communications.
  • Records of Activities: Entities shall keep records of activities before and during disruption events when their ICT Business Continuity Policy or ICT Disaster Recovery Plan is activated.
  • ICT Business Continuity Policy Communication: When implementing changes to the ICT Business Continuity Policy, entities shall communicate those changes to the competent authorities.
  • Test Communication: Entities shall define a process to provide to the competent authorities copies of the results of the ICT business continuity tests.
  • Incident Communication: Entities shall define a process to report to competent authorities all costs and losses caused by ICT disruptions and ICT-related incidents.

The following table provides a checklist for financial entities to self-assess their compliance on Backup policies requirements:

Article 11 – Backup policies and recovery methods

  • Backup Policy: Entities shall develop a backup policy:
    (a) specifying the scope of the data that is subject to the backup
    (b) specifying the minimum frequency of the backup
    (c) based on the criticality of the information or the sensitiveness of the data
  • Backup Restoration: When restoring backup data using their own systems, entities shall use ICT systems that have an operating environment different from the main one, that is not directly connected with the latter, and that is securely protected from any unauthorized access or ICT corruption.
  • Recovery Plans: Entities shall develop recovery plans which enable the recovery of all transactions at the time of disruption, to allow the central counterparty to continue to operate with certainty and to complete settlement on the scheduled date.
  • Recovery Methods: Entities shall develop recovery methods to limit downtime and disruption.
  • ICT third-party providers Continuity: Entities shall ensure that their ICT third-party providers maintain at least one secondary processing site endowed with resources, capabilities, functionalities and staffing arrangements sufficient and appropriate to ensure business needs.
  • ICT third-party providers secondary processing site: Entities shall ensure that the ICT third-party provider secondary processing site is:
    (a) located at a geographical distance from the primary processing site
    (b) capable of ensuring the continuity of critical services identically to the primary site
    (c) immediately accessible to the entity’s staff to ensure continuity of critical services
  • Recovery time objectives: Entities shall determine recovery time and point objectives for each function. Such time objectives shall ensure that, in extreme scenarios, the agreed service levels are met.
  • Recovery checks: When recovering from an ICT-related incident, entities shall perform multiple checks, including reconciliations, in order to ensure that the level of data integrity is of the highest level.

How to meet compliance on the Response and Recovery requirements?

  • Define and implement an ICT Business Continuity Policy
  • Define and implement an ICT Disaster Recovery Plan
  • Define and implement backup policies
  • Develop recovery methods
  • Determine flexible recovery time and point objectives for each function

Developing response and recovery strategies and plans adds an additional level of complexity, as it will require financial entities to think carefully about substitutability, including investing in backup and restoration systems, as well as assess whether – and how – certain critical functions can operate through alternative systems or methods of delivery while primary systems are checked and brought back up.

Learning and evolving

Financial entities shall include continuous learning and evolving in their internal processes, in the form of information gathering as well as post-incident review and analysis.

The following table provides a checklist for financial entities to self-assess their compliance on this requirement:

Article 12 – Learning and evolving

  • Risk landscape: Entities shall gather information on vulnerabilities, cyber threats and ICT-related incidents, in particular cyber-attacks, and analyse their likely impact on their digital operational resilience.
  • Post ICT-related incident reviews: Entities shall put in place post ICT-related incident reviews after significant ICT disruptions of their core activities:
    (a) analysing the causes of disruption
    (b) identifying required improvements to the ICT operations or within the ICT Business Continuity Policy
  • Post ICT-related incident reviews mechanism: Entities shall ensure that the post ICT-related incident reviews determine whether the established procedures were followed and the actions taken were effective, including:
    (a) the promptness in responding to security alerts and determining the impact of ICT-related incidents and their severity;
    (b) the quality and speed in performing forensic analysis;
    (c) the effectiveness of incident escalation within the financial entity;
    (d) the effectiveness of internal and external communication
  • Lessons learned from the ICT Business Continuity and ICT Disaster Recovery tests: Entities shall derive lessons from the ICT Business Continuity and ICT Disaster Recovery tests. Lessons shall be duly incorporated on a continuous basis into the ICT risk assessment process.
  • Lessons learned reporting: Senior ICT staff shall report at least yearly to the management body on the findings derived from the lessons learned from the ICT Business Continuity and ICT Disaster Recovery tests.
  • Monitor the effectiveness of the implementation of the digital resilience strategy: Entities shall map the evolution of ICT risks over time, and analyse the frequency, types, magnitude and evolution of ICT-related incidents, in particular cyber-attacks and their patterns, with a view to understanding the level of ICT risk exposure and enhancing the cyber maturity and preparedness of the entity.
  • ICT security awareness programs: Entities shall develop ICT security awareness trainings as compulsory modules in their staff training schemes.
  • Digital operational resilience training: Entities shall develop ICT digital operational resilience trainings as compulsory modules in their staff training schemes.

What does this entail?

  • Ensure information gathering on vulnerabilities and cyber threats
  • Ensure post-incident reviews after significant ICT disruptions
  • Define a procedure for the analysis of causes of disruptions
  • Define a procedure for the reporting to the management body
  • Develop ICT security awareness programs and trainings

Developing ICT security awareness programs and trainings adds another level of complexity, as DORA not only introduces compulsory training on digital operational resilience for the management body, but also for the whole staff, as part of their general training package.

Communication

Financial entities shall define a communication strategy, plans and procedures for communicating ICT-related incidents to clients, counterparts and the public.

The following table provides a checklist for financial entities to self-assess their compliance on this requirement:

Article 13 – Communication

  • Clients and counterparts communication: Entities shall have in place communication plans enabling a responsible disclosure of ICT-related incidents or major vulnerabilities to clients and counterparts, as well as to the public, as appropriate.
  • Staff communication: Entities shall implement communication policies for staff and for external stakeholders.
    (a) Communication policies for staff shall take into account the need to differentiate between staff involved in ICT risk management, in particular response and recovery, and staff that needs to be informed.
  • Mandate: At least one person in the entity shall be tasked with implementing the communication strategy for ICT-related incidents and fulfil the role of public and media spokesperson for that purpose.

What does this entail?

  • Develop communication plans to communicate to clients, counterparts and the public
  • Mandate at least one person to implement the communication strategy for ICT-related incidents

I hope you found this blogpost interesting.

Keep an eye out for the following parts! This blog post is part of a series. In the following blogposts, we will further explore the requirements associated with the Incident Management process, the Digital Operational Resilience Testing and the ICT Third-Party Risk Management of DORA.

About the Author

Nicolas is a consultant in the Cyber Strategy & Culture team at NVISO. He taps into his technical hands-on experience as well as his managerial academic background to help organisations build out their Cyber Security Strategy. He has a strong interest in IT management, Digital Transformation, Information Security and Data Protection. In his personal life, he likes adventurous vacations. He has hiked several 4000+ summits around the world, and secretly dreams about one day hiking all of the top summits. In his free time, he is an academic teacher who has been teaching for 7 years at both the Solvay Brussels School of Economics and Management and the Brussels School of Engineering.

Find out more about Nicolas on Linkedin.

Tracking a P2P network related to TA505

2 December 2021 at 09:34

This post is by Nikolaos Pantazopoulos and Michael Sandee

tl;dr – Executive Summary

For the past few months NCC Group has been closely tracking the operations of TA505 and the development of their various projects (e.g. Clop). During this research we encountered a number of binary files that we have attributed to the developer(s) of ‘Grace’ (i.e. FlawedGrace). These included a remote administration tool (RAT) used exclusively by TA505. The identified binary files are capable of communicating with each other through a peer-to-peer (P2P) network via UDP. While there does not appear to be a direct interaction between the identified samples and a host infected by ‘Grace’, we believe with medium to high confidence that there is a connection between the developer(s) of ‘Grace’ and the identified binaries.

In summary, we found the following:

  • P2P binary files, which are downloaded along with other Necurs components (signed drivers, block lists)
  • P2P binary files, which transfer certain information (records) between nodes
  • Based on the network IDs of the identified samples, there seem to be at least three different networks running
  • The programming style and dropped file formats match the development standards of ‘Grace’

History of TA505’s Shift to Ransomware Operations

2014: Emergence as a group

The threat actor, often referred to publicly as TA505, has been distinguished as an independent threat actor by NCC Group since 2014. Internally we used the name “Dridex RAT group”. Initially it was a group that integrated quite closely with EvilCorp, utilising their Dridex banking malware platform to execute relatively advanced attacks, often using custom-made tools for a single purpose and repurposing commonly available tools such as ‘Ammyy Admin’ and ‘RMS’/’RUT’ to complement their arsenal. The attacks performed mostly consisted of compromising organisations and socially engineering victims into executing high-value bank transfers to corporate mule accounts. These operations included socially engineering victims past correctly implemented two-factor authentication with dual authorization by both the creator and the approver of a transaction.

2017: Evolution

In late 2017, EvilCorp and TA505 (the Dridex RAT group) split as a partnership. Our hypothesis is that EvilCorp had started to use the Bitpaymer ransomware to extort organisations rather than doing banking fraud, building on the fact that they had already been using the Locky ransomware previously, and that this activity was attracting unwanted attention. EvilCorp’s ability to execute enterprise ransomware across large-scale businesses was first demonstrated in May 2017. Their capability and success at pulling off such attacks stemmed from the numerous years of experience in compromising corporate networks for banking fraud activity, specifically moving laterally to separate hosts controlled by employees who had the required access and control of corporate bank accounts. The same techniques in relation to lateral movement and tools (such as Empire, Armitage, Cobalt Strike and Metasploit) enabled EvilCorp to become highly effective in targeted ransomware attacks.

However, in 2017 TA505 went their own way and, specifically in 2018, executed a large number of attacks using the tool called ‘Grace’, also known publicly as ‘FlawedGrace’ and ‘GraceWire’. The victims were mostly financial institutions, and a large number of them were located in Africa, South Asia and South East Asia, with confirmed fraudulent wire transactions and card data theft originating from victims of TA505. The tool ‘Grace’ had some interesting features, and showed some indications that it was originally designed as banking malware which had latterly been repurposed. However, the tool continued to be developed and was used against hundreds of victims worldwide, while remaining relatively unknown to the wider public in its first years of use.

2019: Clop and wider tooling

In early 2019, TA505 started to utilise the Clop ransomware, alongside other tools such as ‘SDBBot’ and ‘ServHelper’, while continuing to use ‘Grace’ up to and including 2021. Today it appears that the group has realised the potential of ransomware operations as a viable business model and the relative ease with which they can extort large sums of money from victims.

The remainder of this post dives deeper into a tool discovered by NCC Group that we believe is related to TA505 and the developer of ‘Grace’. We assess that the identified tool is part of a bigger network, possibly related with Grace infections.

Technical Analysis

The technical analysis we provide below focuses on three components of the execution chain:

  1. A downloader – Runs as a service (each identified variant has a different name) and downloads the rest of the components along with a target processes/services list that the driver uses while filtering information. Necurs have used similar downloaders in the past.
  2. A signed driver (both x86 and x64 available) – Filters processes/services in order to avoid detection and/or prevent removal. In addition, it injects the payload into a new process.
  3. Node tool – Communicates with other nodes in order to transfer victim’s data.

It should be noted that for all above components, different variations were identified. However, the core functionality and purposes remain the same.

Upon execution, the downloader generates a GUID (used as a bot ID) and stores it in the ProgramData folder under the filename regid.1991-06.com.microsoft.dat. Any downloaded file is stored temporarily in this directory. In addition, the downloader reads the version of crypt32.dll in order to determine the version of the operating system.

Next, it contacts the command and control server and downloads the following files:

  • t.dat – Expected to contain the string ‘kwREgu73245Nwg7842h’
  • p3.dat – P2P Binary. Saved as ‘payload.dll’
  • d1c.dat – x86 (signed) Driver
  • d2c.dat – x64 (signed) Driver
  • bn.dat – List of processes for the driver to filter. Stored as ‘blacknames.txt’
  • bs.dat – List of services’ name for the driver to filter. Stored as ‘blacksigns.txt’
  • bv.dat – List of files’ version names for the driver to filter. Stored as ‘blackvers.txt’.
  • r.dat – List of registry keys for the driver to filter. Stored as ‘registry.txt’

The network communication of the downloader is simple. Firstly, it sends a GET request to the command and control server, then downloads and saves the appropriate component to disk. Next, it reads the component from disk and decrypts it (using the RC4 algorithm) with the hardcoded key ‘ABCDF343fderfds21’. After decrypting it, the downloader deletes the downloaded file.
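For analysts who want to replicate this step offline, the snippet below is a standard, generic RC4 routine that could be used to decrypt a retrieved component with the hardcoded key quoted above; it is an illustration, not code recovered from the malware, and the file name is only an example.

def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA), XORed over the data
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)


# Example: decrypt one of the downloaded components (e.g. the P2P payload).
with open('p3.dat', 'rb') as f:
    decrypted = rc4(b'ABCDF343fderfds21', f.read())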

Depending on the component type, the downloader stores each of them differently. Any configurations (e.g. list of processes to filter) are stored in registry under the key HKEY_LOCAL_MACHINE\SOFTWARE\Classes\CLSID with the value name being the thread ID of the downloader. The data are stored in plaintext with a unique ID value at the start (e.g. 0x20 for the processes list), which is used later by the driver as a communication method.

In addition, in one variant, we detected a reporting mechanism to the command and control server for each step taken. This involves sending a GET request, which includes the generated bot ID along with a status code. The below table summarises each identified request (Table 1).

Request Description
/c/p1/dnsc.php?n=%s&in=%s First parameter is the bot ID and the second is the formatted string (“Version_is_%d.%d_(%d)_%d__ARCH_%d”), which contains operating system info
/c/p1/dnsc.php?n=%s&sz=DS_%d First parameter is the bot ID and the second is the downloaded driver’s size
/c/p1/dnsc.php?n=%s&er=ERR_%d First parameter is the bot ID and the second is the error code
/c/p1/dnsc.php?n=%s&c1=1 The first parameter is the bot ID. Notifies the server that the driver was installed successfully
/c/p1/dnsc.php?n=%s&c1=1&er=REB_ERR_%d First parameter is the bot ID and the second is the error code obtained while attempting to shut down the host after finding Windows Defender running
/c/p1/dnsc.php?n=%s&sz=ErrList_%d_% First parameter is the bot ID, second parameter is the resulted error code while retrieving the blocklist processes. The third parameter is set to 1. The same command is also issued after downloading the blacklisted services’ names and versions. The only difference is on the third parameter, which is increased to ‘2’ for blacklisted services, ‘3’ for versions and ‘4’ for blacklisted registry keys
/c/p1/dnsc.php?n=%s&er=PING_ERR_%d First parameter is the bot ID and the second parameter is the error code obtained during the driver download process
/c/p1/dnsc.php?n=%s&c1=1&c2=1 First parameter is the bot ID. Informs the server that the bot is about to start the downloading process.
/c/p1/dnsc.php?n=%s&c1=1&c2=1&c3=1 First parameter is the bot ID. Notifies the server that the payload (node tool) was downloaded and stored successfully
Table 1 – Reporting to C2 requests

Driver Analysis

The downloaded driver is the same one that Necurs uses. It has been analysed publicly already [1], but in summary it does the following.

In the first stage, the driver decrypts shellcode, copies it to a newly allocated pool and then executes it. Next, the shellcode decrypts and runs (in memory) another driver (stored encrypted in the original file). The decryption algorithm remains the same in both cases:

def _rol(value, bits, width=32):
    # rotate 'value' left by 'bits' within a fixed bit width
    return ((value << bits) | (value >> (width - bits))) & ((1 << width) - 1)

xor_key = extracted_xor_key          # 32-bit key recovered from the driver sample
bits = 15
result = b''
for i in range(0, payload_size, 4):
    data = encrypted[i:i+4]
    value = int.from_bytes(data, 'little') ^ xor_key
    result += (_rol(value, bits, 32) ^ xor_key).to_bytes(4, 'little')

Eventually, the decrypted driver injects the payload (the P2P binary) into a new process (‘wmiprvse.exe’) and proceeds with the filtering of data.

A notable piece of code in the driver is the string decryption routine, which is also present in recent GraceRAT samples and uses the same XOR key (1220A51676E779BD877CBECAC4B9B8696D1A93F32B743A3E6790E40D745693DE58B1DD17F65988BEFE1D6C62D5416B25BB78EF0622B5F8214C6B34E807BAF9AA).

Payload Attribution and Analysis

The identified sample is written in C++ and interacts with other nodes in the network using UDP. We believe that the downloaded binary file is related to TA505 for (at least) the following reasons:

  1. Same serialisation library
  2. Same programming style as ‘Grace’ samples
  3. Similar naming convention in the configuration keys to that of ‘Grace’ samples
  4. Same output files (dsx), which we have seen in previous TA505 compromises. DSX files have been used by ‘Grace’ operators to store information related to compromised machines.

Initialisation Phase

In the initialisation phase, the sample ensures that the configurations have been loaded and the appropriate folders are created.

All identified samples store their configurations in a resource named XC.

ANALYST NOTE: Due to limited visibility of other nodes, we were not able to identify the purpose of every configuration key.

The first configuration stores the following settings:

  • cx – Parent name
  • nid – Node ID. This is used as a network identification method during network communication. If the incoming network packet does not have the same ID then the packet is treated as a packet from a different network and is ignored.
  • dgx – Unknown
  • exe – Binary mode flag (DLL/EXE)
  • key – RSA key to use for verifying a record
  • port – UDP port to listen on
  • va – Parent name. It includes the node IPs to contact.

The second configuration contains the following settings (or metadata as the developer names them):

  • meta – Parent name
  • app – Unknown. Probably specifies the variant type of the server. The following seem to be supported:
    • target (this is the current set value)
    • gate
    • drop
    • control
  • mod – Specifies whether the current binary is the core module.
  • bld – Unknown
  • api – Unknown
  • llr – Unknown
  • llt – Unknown

Next, the sample creates a set of folders and files in a directory named ‘target’. These are:

  • node (folder) – Stores records of other nodes
  • trash (folder) – Holds files moved for deletion
  • units (folder) – Unknown. Appears to contain PE files, which the core module loads.
  • sessions (folder) – Active nodes’ sessions
  • units.dsx (file) – List of ‘units’ to load
  • probes.dsx (file) – Stores the connected nodes’ IPs along with other metadata (e.g. connection timestamp, port number)
  • net.dsx (file) – Node peer name
  • reports.dsx (file) – Used in recent versions only. Unknown purpose.

Network communication

After the initialisation phase has been completed, the sample starts sending UDP requests to a list of IPs in order to register itself into the network and then exchange information.

Every network packet has a header, which has the below structure:

struct Node_Network_Packet_Header
{
 BYTE XOR_Key;
 BYTE Version; // set to 0x37 ('7')
 BYTE Encrypted_node_ID[16]; // XORed with XOR_Key above
 BYTE Peer_Name[16]; // Xored with XOR_Key above. Connected peer name
 BYTE Command_ID; //Internally called frame type
 DWORD Watermark; //XORed with XOR_Key above
 DWORD Crc32_Data; //CRC32 of above data
};
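
For analysts capturing this traffic, the header can be decoded with a few lines of Python. The sketch below is based only on the structure above; it assumes a packed little-endian layout, byte-wise XOR of the multi-byte fields, and that Crc32_Data covers the 39 preceding header bytes, none of which we can confirm beyond the structure definition.

import struct
import zlib

def parse_node_header(packet: bytes) -> dict:
    xor_key, version = packet[0], packet[1]
    unxor = lambda blob: bytes(b ^ xor_key for b in blob)
    return {
        'version': version,                     # observed as 0x37 ('7')
        'node_id': unxor(packet[2:18]),
        'peer_name': unxor(packet[18:34]),
        'command_id': packet[34],               # 'frame type', see Table 2
        'watermark': struct.unpack('<I', unxor(packet[35:39]))[0],
        'crc_ok': struct.unpack('<I', packet[39:43])[0] == zlib.crc32(packet[:39]),
    }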

When the sample needs to add additional information to a network packet, it uses the below structure:

struct Node_Network_Packet_Payload
{
 DWORD Size;
 DWORD CRC32_Data;
 BYTE Data[Size]; // Xored with same key used in the header packet (XOR_Key)
};

As expected, each network command (Table 2) adds a different set of information to the ‘Data’ field of the above structure, but most of the commands follow a similar format. For example, an ‘invitation’ request (Command ID 1) has the structure:

struct Node_Network_Invitation_Packet 
{
 BYTE CMD_ID;
 DWORD Session_Label;
 BYTE Invitation_ID[16];
 BYTE Node_Peer_Name[16];
 WORD Node_Binded_Port;
};
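
A researcher emulating a node (for example, to enumerate peers) would need to serialise this structure. A possible packing in Python, again assuming packed little-endian fields, is shown below; note that the result would still have to be wrapped in the header and payload structures above (and XORed), which this sketch omits.

import struct

def build_invitation(session_label: int, invitation_id: bytes,
                     peer_name: bytes, port: int) -> bytes:
    # CMD_ID 1 == 'invitation' request (see Table 2); 16-byte fields are
    # null-padded by the '16s' format if shorter.
    return struct.pack('<BI16s16sH', 1, session_label,
                       invitation_id, peer_name, port)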

The sample supports a limited set of commands, whose primary role is to exchange ‘records’ between nodes.

Command ID Description
1 Requests to register in the other nodes (‘invitation’ request)
2 Adds node IP to the probes list
3 Sends a ping request. It includes number of active connections and records
4 Sends number of active connections and records in the node
5 Adds a new node IP:Port that the remote node will check
6 Sends a record ID along with the number of data blocks
7 Requests metadata of a record
8 Sends metadata of a record
9 Requests the data of a record
10 Receives data of a record and stores them on disk
Table 2 – Set of command IDs

ANALYST NOTE: When information such as record IDs or the number of active connections/records is sent, the binary adds the length of the data followed by the actual data. For example, in the case of sending the number of active connections and records:

01 05 01 02 01 02

The above is translated as:

2 active connections from a total of 5 with 2 records.
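
A decoder for this length-prefixed format takes only a few lines; the example bytes above decode to the values 5, 2 and 2 (byte order is moot for single-byte values, so the big-endian choice below is an assumption):

def decode_length_prefixed(data: bytes) -> list:
    # Split a blob of length-prefixed integers into their values.
    values, i = [], 0
    while i < len(data):
        length = data[i]
        values.append(int.from_bytes(data[i + 1:i + 1 + length], 'big'))
        i += 1 + length
    return values

decode_length_prefixed(bytes.fromhex('010501020102'))   # -> [5, 2, 2]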

Moreover, when a node receives a request, it sends an echo reply (including the same packet header) to acknowledge that the request was read. In general, the following types are supported:

  • Request type 0x10 for an echo request.
  • Request type 0x07 when sending data that fits in one packet.
  • Request type 0x0D when sending data in multiple packets (payload size over 1419 bytes).
  • Request type 0x21. It exists in the binary but is not supported during network communication.

Record files

As mentioned already, a record has its own sub-folder under the ‘node’ folder with each sub-folder containing the below files:

  • m – Metadata of record file
  • l – Unknown purpose
  • p – Payload data

The metadata file contains a set of information about the record, such as the node peer name and the node network ID. Among this information, the keys ‘tag’ and ‘pwd’ appear to be particularly important. The ‘tag’ key represents a command (different from the Table 2 set) that the node will execute once it receives the record. Currently, the binary only supports the command ‘updates’. The payload file (p) holds the update content, encrypted with AES using the value of the ‘pwd’ key as the key.

Even though we have not yet been able to capture any network traffic for the above command, we believe it is used to update the currently running core module.

IoCs

Nodes’ IPs

45.142.213[.]139:555

195.123.246[.]14:555

45.129.137[.]237:33964

78.128.112[.]139:33964

145.239.85[.]6:3333

Binaries

SHA-1 Description
A21D19EB9A90C6B579BCE8017769F6F58F9DADB1   P2P Binary
2F60DE5091AB3A0CE5C8F1A27526EFBA2AD9A5A7 P2P Binary
2D694840C0159387482DC9D7E59217CF1E365027 P2P Binary
02FFD81484BB92B5689A39ABD2A34D833D655266 x86 Driver
B4A9ABCAAADD80F0584C79939E79F07CBDD49657 x64 Driver
00B5EBE5E747A842DEC9B3F14F4751452628F1FE x64 Driver
22F8704B74CE493C01E61EF31A9E177185852437 Downloader
D1B36C9631BCB391BC97A507A92BCE90F687440A Downloader
Table 3 – Binary hashes

Encryption Does Not Equal Invisibility – Detecting Anomalous TLS Certificates with the Half-Space-Trees Algorithm

2 December 2021 at 08:00

tl;dr

In our Research and Intelligence Fusion Team (RIFT) we applied an incremental anomaly detection model to detect suspicious TLS certificates. This model gives security operations teams the opportunity to detect suspicious behavior in real-time, despite the contained traffic being encrypted. This blogpost discusses the research that we performed and the model that we applied with the Half-Space-Trees algorithm.

Introduction

Encrypted network traffic is both a challenge and an advantage for cyber security defenders. Protocols such as Transport Layer Security (TLS) are widely used by organizations to prevent eavesdroppers from viewing sensitive data.

However, adversaries have adopted encryption too. Subsequently, more network traffic is being encrypted, which means defenders can no longer do deep packet inspection the way they once did.

There is often (meta)data around encrypted traffic that still has operational value. In this blogpost, we describe the research we conducted on the characteristics of TLS certificates and the incremental machine learning model that we applied to detect anomalous certificates. This model gives us the following advantages:

  • Detection despite encryption: By looking at the characteristics of TLS certificates, it is possible to detect suspicious behavior, despite the contained traffic being encrypted.
  • Real-time detection: Acting in real-time is important to prevent adversaries achieving their objectives. By doing so, an attack can be prevented and/or the level of damage mitigated. As our model is implemented in our Managed Detection & Response service it can instantly send an alert output to the SOC analyst when it detects a suspicious certificate.
  • Incremental learning: The Half-Space-Trees algorithm is an incremental machine learning algorithm. The model is sensitive to new patterns in changing data and can learn from streaming data coming in. Incremental machine learning is discussed in a previous blogpost by our data science team (i).

Adversaries may install TLS certificates too

Nowadays, TLS certificates are widely used by organizations as a form of authentication and to prevent eavesdroppers from viewing sensitive data. If you visit the domain fox-it.com, most browsers show the lock symbol in the address bar. The lock symbol means that the TLS certificate of the domain you visit is trusted by the browser and an encrypted connection over TLS is established. In addition, it should assure you that the data you get back is actually coming from fox-it.com.

However, adversaries may install TLS certificates too (MITRE, T1608.003) (ii). The certificates can be used for credibility, to spoof the identity of the victim, or to encrypt traffic to stay undetected. Adversaries can obtain certificates in different ways, most commonly by: 

  • Requesting free certificates from a Certificate Authority (CA). For example, Let’s Encrypt is a popular CA for issuing free certificates used by different malware families to stay under the radar (iii, iv) or by phishing domains to instill credibility.
  • Creating self-signed certificates. Self-signed certificates are not signed by a CA. For example, certain attack frameworks such as Cobalt Strike offer the option to create self-signed certificates.
  • Buying or stealing certificates from a CA. For example, adversaries can deceive a CA to issue a certificate for a fake organization.

Research at RIFT

Our research started with investigating a dataset containing malicious and legitimate TLS certificates. An example of a legitimate certificate is the certificate used by the fox-it.com domain. You can observe that the attributes in the fields are properly filled in (the abbreviations of the attributes in the certificate can be found in Table 1). In the Subject Name you can find the information of the owner (NCC Group), and in the Issuer Name field the information of the Certificate Authority (Entrust).

Subject Name:
C=GB, L=Manchester, O=NCC Group PLC, CN=www.nccgroup[.]com
Issuer Name:
C=US, O=Entrust, Inc., OU=See http://www.entrust[.]net/legal-terms, OU=(c) 2012 Entrust, Inc. – for authorized use only, CN=Entrust Certification Authority – L1K

Example 1. TLS certificate used by fox-it.com

The second example is an anomalous certificate that was used in Cobalt Strike. This is a clear example of a self-signed certificate because there is no Certificate Authority present in the Issuer Name. Furthermore, the Organization name “lol” (O) and the empty Organizational Unit (OU) look anomalous. If you investigate further, you may find that the domain in the Common Name (CN) attribute is related to Ryuk ransomware (v, vi).

Subject Name:
C=US, ST=TX, L=Texas, O=lol, OU=, CN=idrivedownload[.]com
Issuer Name:
C=US, ST=TX, L=Texas, O=lol, OU=, CN=idrivedownload[.]com

Example 2. TLS certificate used in Cobalt Strike for Ryuk ransomware

Attribute Meaning
C Country of the entity
ST State or province
L Locality
O Organization name
OU Organizational unit
CN Common name
Table 1. Overview of attributes that can be found in the issuer name and subject name fields of TLS certificates. The subject usually includes the information of the owner, and the issuer field includes the entity that has signed and issued the certificate (RFC 5280) (vii). A Certificate Authority (CA) usually issues the certificate.

We conducted an exploratory analysis on our dataset of known legitimate and malicious certificates with the knowledge of our security operations centers. Furthermore, we applied supervised models to our dataset. By applying white-box algorithms such as the Random Forest, we identified features that helped distinguish the malicious certificates. For example, the number of empty attributes is statistically related to how likely a certificate is to be used for malicious activity. However, we did not want to train a model solely on our known malicious certificates, but to broaden the scope of our detection and find new patterns in new data. Hence, we wanted to apply a model that can detect anomalies in real time in an unsupervised way.
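
As an illustration of this kind of feature engineering, the sketch below derives a few simple features from a PEM certificate with the Python cryptography library. The feature names are ours and purely illustrative; they are not the exact feature set used in our model.

from cryptography import x509

def certificate_features(pem_bytes: bytes) -> dict:
    cert = x509.load_pem_x509_certificate(pem_bytes)
    subject, issuer = list(cert.subject), list(cert.issuer)
    return {
        'n_subject_attrs': len(subject),
        'n_issuer_attrs': len(issuer),
        # empty attributes, as in the Cobalt Strike example above
        'n_empty_attrs': sum(1 for a in subject + issuer if not str(a.value).strip()),
        'self_signed': cert.subject == cert.issuer,
        'validity_days': (cert.not_valid_after - cert.not_valid_before).days,
    }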

Anomaly detection: Taking the isolation-based approach

The Isolation Forest was the first isolation-based anomaly detection model, created by Liu et al. in 2008 (viii). Since anomalies are by definition rare and behave differently, they figured that anomalies are easy to isolate from the rest of the data, which became the foundation of the isolation-based approach. The Isolation Forest computes how easily an anomaly is isolated from the rest of the data by the number of splits made in a binary tree-based structure (viii). The more anomalous an observation, the closer to the root of the tree (and thus the faster) it gets isolated. The advantage of this approach is that it does not require a lot of memory or computational cost, in contrast to density- and distance-based approaches that execute many calculations (viii, ix). More importantly, the isolation-based approach has repeatedly proven to be a very effective method to detect anomalies (viii, ix).

Half-Space-Trees: What’s in a name?

The Half-Space-Trees (HST) algorithm is an unsupervised, isolation-based anomaly detection algorithm and an incremental-learning successor of the Isolation Forest. The HST is an ensemble method, meaning it consists of multiple single half-space-trees (x). Graph 1 demonstrates how a simple half-space-tree isolates anomalies. Next to being an incremental learner for streaming data, a major advantage of the HST is that it can build trees without data: it only needs the data space dimensions. In this way, the trees can be built quickly and efficiently for fast anomaly detection (ix, x). Another advantage to us is that the HST is available in the River package for incremental machine learning on streaming data (xi).

Graph 1: An example of 2-dimensional data in a window divided by two simple half-space-trees; the visualization is inspired by the original paper (x). A single half-space-tree divides the window space into half-spaces based on the features in the data. Every single half-space-tree does this randomly and continues up to the set height of the tree. The half-space-tree counts the number of data points per subspace and gives a mass score to that subspace (represented by the colors). The subspaces where most data points fall are considered high-mass subspaces, and the subspaces with few or no data points are considered low-mass subspaces. Most data points are expected to fall in high-mass subspaces because they need many more splits, i.e. a higher tree, to be isolated. The sum of the masses of all single half-space-trees becomes the final anomaly score of the HST (x).

Testing the model on our dataset

The HST was initially trained on legitimate certificates. When the model observes a suspicious TLS certificate it should isolate the certificate rapidly and give a high anomaly score. We tested the model on our test data that included both legitimate and malicious certificates.

The anomaly scores fall between 0 and 1. The closer the anomaly score is to 1, the easier it was to isolate the certificate and the more likely that the certificate is anomalous. For example, our certificate from example 1, used by fox-it.com, received an anomaly score of 0.43. The certificate from example 2, used by Ryuk ransomware, received an anomaly score of 0.84.  The performance metrics on our test set can be found in table 2. 

Performance Metric Score
Precision 0.95
Recall 0.98
F-Score 0.96
Table 2. Scores of the HST on the test data.
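
For readers who want to experiment with this approach, the HST implementation in the River package can be wired up roughly as follows. This is a minimal sketch: the feature dictionaries, parameters and threshold are illustrative and are not our production configuration.

from river import anomaly, compose, preprocessing

def build_model(seed: int = 42):
    # HST assumes features in [0, 1]; MinMaxScaler rescales the stream online.
    return compose.Pipeline(
        preprocessing.MinMaxScaler(),
        anomaly.HalfSpaceTrees(n_trees=25, height=15, window_size=250, seed=seed),
    )

def score_stream(model, warmup, stream, threshold=0.8):
    # warmup/stream are iterables of feature dicts, e.g. from certificate_features().
    for x in warmup:                      # known-legitimate certificates
        model.learn_one(x)
    for x in stream:                      # live certificates from a sensor
        score = model.score_one(x)        # closer to 1 means more anomalous
        if score > threshold:             # thresholds are tuned per sensor in practice
            yield x, score
        model.learn_one(x)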

Testing the model in the security operations centers

After these results, we tested the model in our global security operations centers to explore how the HST performs on real-life streaming data. After training on the sensors, we analyzed and tuned the outputs that the model generated. For example, we saw that thresholds for anomalies can differ per sensor, so we adapted these at the sensor level as well.

The good, the bad, the weird

Keep in mind that high anomaly scores do not instantly indicate malicious behavior, but can be a sign of weirdness or novelty as well. Hence, to improve the detection results, we combined our model with rules and other models. The insights from our research helped with creating these rules and choosing the other models to combine with. In the end, it is also a matter of feedback loops.

Conclusions

We applied an unsupervised, incremental anomaly detection model with the Half-Space-Trees algorithm in our security operations centers. Even though the surrounding data may be encrypted, the model is able to detect anomalous TLS certificates rapidly and send an alert to the SOC analyst. Because we combine the model with other rules and models, the precision and attribution of the alerts are enhanced.

We encourage you to look at TLS Certificates

We would like to encourage other cyber security defenders to look at the characteristics of TLS certificates to detect malicious activities despite encrypted traffic. Encryption does not equal invisibility, and there is often (meta)data to consider when searching for anomalous behavior. In particular, as a Data Science team we found that the Half-Space-Trees algorithm is an effective and quick anomaly detector for streaming data.

References

[i]         NCC Group & Fox-IT. (2021). “Incremental Machine Learning by Example: Detecting Suspicious Activity with Zeek Data Streams, River, and JA3 Hashes.”

[ii]        <https://attack.mitre.org/techniques/T1608/003/>

[iii]        Mokbel, M. (2021). “The State of SSL/TLS Certificate Usage in Malware C&C Communications.” Trend Micro. <https://www.trendmicro.com/en_us/research/21/i/analyzing-ssl-tls-certificates-used-by-malware.html>

[iv]        <https://sslbl.abuse.ch/statistics/>

[v]        <https://attack.mitre.org/software/S0446/>

[vi]        Goody, K., Kennelly, J., Shilko, J. Elovitz, S., Bienstock, D. (2020). “Kegtap and SingleMalt with Ransomware Chaser.” FireEye. <https://www.fireeye.com/blog/jp-threat-research/2020/10/kegtap-and-singlemalt-with-a-ransomware-chaser.html>

[vii]       Cooper, D., Santesson, S., Farrell, S., Boeyen, S., Housley, R., and W. Polk. (2008). “Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile”, RFC 5280, DOI 10.17487/RFC5280. <https://datatracker.ietf.org/doc/html/rfc5280>

[viii]      Liu, F. T. , Ting, K. M.  & Zhou, Z. (2008). “Isolation Forest”. Eighth IEEE International Conference on Data Mining, pp. 413-422, doi: 10.1109/ICDM.2008.17. <https://cs.nju.edu.cn/zhouzh/zhouzh.files/publication/icdm08b.pdf?q=isolation-forest>

[ix]        Togbe, M.U., Chabchoub, Y., Boly, A., Barry, M., Chiky, R., & Bahri, M. (2021). “Anomalies Detection Using Isolation in Concept-Drifting Data Streams.” Comput., 10, 13. <https://www.mdpi.com/2073-431X/10/1/13>

[x]        Tan, S. Ting, K. & Liu, F.T. (2011). “Fast Anomaly Detection for Streaming Data.” 1511-1516. 10.5591/978-1-57735-516-8/IJCAI11-254. <https://www.ijcai.org/Proceedings/11/Papers/254.pdf>

[xi]        <https://riverml.xyz>

CrowdStrike Is Working to Strengthen the U.S. Government’s Cybersecurity Posture

1 December 2021 at 09:30

The United States and like-minded nations face unprecedented threats from today’s adversaries. Continuous cyberattacks on critical infrastructure, supply chains, government agencies and more present significant ongoing threats to national security, and the critical services millions of citizens rely on every day. At CrowdStrike, we are on a mission to stop breaches and rise to the challenge by protecting many of the most critically important organizations around the globe from some of the most sophisticated adversaries. This is why I am especially enthusiastic about recent initiatives in our work to help strengthen the cybersecurity posture of departments and agencies at all levels (federal, state, local, tribal and territorial) of government by empowering key defenders of U.S. critical infrastructure with our innovative technologies and services.

Earlier this year, the Administration issued an Executive Order to help address these threats, emphasizing the use of capabilities like endpoint detection and response (EDR) and Zero Trust. Based on our experience in preventing some of the world’s most sophisticated threat actors from impacting customers representing just about every industry, we believe that these measures stand to help. We also know that the road to protecting the nation’s most critical assets and infrastructure will require a strong partnership between government and private sector. Only by working together can we prevail.  

CrowdStrike has long been committed to working with federal, state, local, tribal and territorial governments to furnish them with the world-class technology and elite human expertise required to stay ahead of today’s attackers. Strategic partnerships with the U.S. Department of Homeland Security (DHS) Cybersecurity and Infrastructure Security Agency (CISA) and the Center for Internet Security (CIS) are key milestones that continue to enhance CrowdStrike’s efforts to protect the public sector and its partners.

Today, we’re proud to announce that CISA and CrowdStrike are strengthening their partnership to secure our nation’s critical infrastructure and assets. CISA will deploy the CrowdStrike Falcon® platform to secure CISA’s critical endpoints and workloads as well as multiple federal agencies. This partnership directly operationalizes the president’s Executive Order on Improving the Nation’s Cybersecurity, the landmark guidance that unifies several initiatives and policies to strengthen the U.S. national and federal government cybersecurity posture.

By applying CrowdStrike’s unique combination of intelligence, managed threat hunting and endpoint detection and response (EDR), CISA will strengthen its Continuous Diagnostics and Mitigation (CDM) program, advancing CISA’s mission to secure civilian “.gov” networks. This partnership also further improves CISA’s capabilities to better understand and manage cyber and physical risks to the nation’s critical infrastructure.

Validation to Fulfill the Mission

CrowdStrike Falcon is a FedRAMP-authorized endpoint protection platform (EPP) that rapidly enables agencies to detect and prevent cyberattacks, a goal of the cybersecurity Executive Order.

Importantly, CrowdStrike has recently been prioritized by the FedRAMP Joint Authorization Board (JAB) to begin work toward achieving a Provisional Authority to Operate (P-ATO). The FedRAMP JAB is composed of major departments in the U.S. government, including the Department of Defense (DoD), DHS and the General Services Administration (GSA). The FedRAMP JAB prioritizes only the most used and demanded cloud services within the U.S. government, selecting only approximately 12 cloud service offerings a year. This prioritization and our commitment to the FedRAMP JAB demonstrate CrowdStrike’s continued support and commitment to deliver our best-of-breed Falcon platform to help defend some of the most targeted departments and agencies in the world.

Strengthening Cyber Defenses for State, Local, Tribal and Territorial (SLTT) Governments

CrowdStrike’s work in the SLTT government space is not only critical to supporting these agencies but also vital to protecting critical infrastructure and ensuring the resilience of the communities they serve. In fact, CrowdStrike Falcon is currently being leveraged by more than a third of all U.S. state governments. Despite our success in this space, there is still more work to do. That is why after many years of partnership, CrowdStrike and CIS are taking our work to protect SLTT governments to the next level. CIS’s new fully managed endpoint security services (ESS) solution is now powered exclusively by CrowdStrike.

CrowdStrike brings direct deployment to endpoint devices with the cloud-native, intelligent single agent of the CrowdStrike Falcon platform. This provides CIS with a full suite of solutions to protect CIS managed endpoints, including next-generation antivirus (NGAV), EDR, asset and software inventory, USB device monitoring, user account monitoring and host-based firewall management.

Previously, CIS chose CrowdStrike to protect its Elections Infrastructure Information Sharing and Analysis Center® (EI-ISAC®). The new solution expands on the existing partnership, providing a new, fully managed 24/7/365 next-generation cybersecurity offering exclusively tailored to SLTT organizations. This includes more than 12,000 Multi-State Information Sharing and Analysis Center® (MS-ISAC®) members across the U.S., with more than 14 million endpoints in total.

Moving the Needle Forward for the Public Sector

CrowdStrike has operated a FedRAMP-authorized government cloud since 2018, giving SLTT governments a secure and compliant service that provides innovative and best-of-breed technology to secure their digital assets. Since then, more than one-third of states have standardized on CrowdStrike as their EPP vendor of choice.

To deepen our relationship, we continue to build partnerships with CIS, while formalizing our federal government partnership by becoming an industry launch partner to CISA’s Joint Cyber Defense Collaborative (JCDC). We continue to gain the trust of our government customers as they seek best-of-breed technology to defend their infrastructure and begin their journey to Zero Trust. Our prioritization by, and commitment to, the FedRAMP JAB will only bolster this trust and partnership. Put simply, empowering government defenders with the very technologies successfully embraced by complex private sector organizations is an important step in thwarting adversaries that target governments and, consequently, the functions upon which citizens depend. 

George Kurtz is Chief Executive Officer and Co-founder of CrowdStrike.


This shouldn't have happened: A vulnerability postmortem

1 December 2021 at 18:38
By: Ryan

Posted by Tavis Ormandy, Project Zero

Introduction

This is an unusual blog post. I normally write posts to highlight some hidden attack surface or interesting complex vulnerability class. This time, I want to talk about a vulnerability that is neither of those things. The striking thing about this vulnerability is just how simple it is. This should have been caught earlier, and I want to explore why that didn’t happen.

In 2021, all good bugs need a catchy name, so I’m calling this one “BigSig”.

First, let’s take a look at the bug, I’ll explain how I found it and then try to understand why we missed it for so long.

Analysis

Network Security Services (NSS) is Mozilla's widely used, cross-platform cryptography library. When you verify an ASN.1 encoded digital signature, NSS will create a VFYContext structure to store the necessary data. This includes things like the public key, the hash algorithm, and the signature itself.

struct VFYContextStr {
   SECOidTag hashAlg; /* the hash algorithm */
   SECKEYPublicKey *key;
   union {
       unsigned char buffer[1];
       unsigned char dsasig[DSA_MAX_SIGNATURE_LEN];
       unsigned char ecdsasig[2 * MAX_ECKEY_LEN];
       unsigned char rsasig[(RSA_MAX_MODULUS_BITS + 7) / 8];
   } u;
   unsigned int pkcs1RSADigestInfoLen;
   unsigned char *pkcs1RSADigestInfo;
   void *wincx;
   void *hashcx;
   const SECHashObject *hashobj;
   SECOidTag encAlg;    /* enc alg */
   PRBool hasSignature;
   SECItem *params;
};

Fig 1. The VFYContext structure from NSS.


The maximum size signature that this structure can handle is whatever the largest union member is, in this case that's RSA at 2048 bytes. That's 16384 bits, large enough to accommodate signatures from even the most ridiculously oversized keys.

Okay, but what happens if you just....make a signature that’s bigger than that?

Well, it turns out the answer is memory corruption. Yes, really.


The untrusted signature is simply copied into this fixed-sized buffer, overwriting adjacent members with arbitrary attacker-controlled data.

The bug is simple to reproduce and affects multiple algorithms. The easiest to demonstrate is RSA-PSS. In fact, just these three commands work:

# We need 16384 bits to fill the buffer, then 32 + 64 + 64 + 64 bits to overflow to hashobj,
# which contains function pointers (bigger would work too, but takes longer to generate).
$ openssl genpkey -algorithm rsa-pss -pkeyopt rsa_keygen_bits:$((16384 + 32 + 64 + 64 + 64)) -pkeyopt rsa_keygen_primes:5 -out bigsig.key

# Generate a self-signed certificate from that key
$ openssl req -x509 -new -key bigsig.key -subj "/CN=BigSig" -sha256 -out bigsig.cer

# Verify it with NSS...
$ vfychain -a bigsig.cer
Segmentation fault

Fig 2. Reproducing the BigSig vulnerability in three easy commands.

The actual code that does the corruption varies based on the algorithm; here is the code for RSA-PSS. The bug is that there is simply no bounds checking at all; sig and key are  arbitrary-length, attacker-controlled blobs, and cx->u is a fixed-size buffer.

           case rsaPssKey:
               sigLen = SECKEY_SignatureLen(key);
               if (sigLen == 0) {
                   /* error set by SECKEY_SignatureLen */
                   rv = SECFailure;
                   break;
               }
               if (sig->len != sigLen) {
                   PORT_SetError(SEC_ERROR_BAD_SIGNATURE);
                   rv = SECFailure;
                   break;
               }
               PORT_Memcpy(cx->u.buffer, sig->data, sigLen);
               break;

Fig 3. The signature size must match the size of the key, but there are no other limitations. cx->u is a fixed-size buffer, and sig is an arbitrary-length, attacker-controlled blob.

I think this vulnerability raises a few immediate questions:

  • Was this a recent code change or regression that hadn’t been around long enough to be discovered? No, the original code was checked in with ECC support on the 17th October 2003, but wasn't exploitable until some refactoring in June 2012. In 2017, RSA-PSS support was added and made the same error.

  • Does this bug require a long time to generate a key that triggers the bug? No, the example above generates a real key and signature, but it can just be garbage as the overflow happens before the signature check. A few kilobytes of A’s works just fine.

  • Does reaching the vulnerable code require some complicated state that fuzzers and static analyzers would have difficulty synthesizing, like hashes or checksums? No, it has to be well-formed DER, that’s about it.

  • Is this an uncommon code path? No, Firefox does not use this code path for RSA-PSS signatures, but the default entrypoint for certificate verification in NSS, CERT_VerifyCertificate(), is vulnerable.

  • Is it specific to the RSA-PSS algorithm? No, it also affects DSA signatures.

  • Is it unexploitable, or otherwise limited impact? No, the hashobj member can be clobbered. That object contains function pointers, which are used immediately.

This wasn’t a process failure, the vendor did everything right. Mozilla has a mature, world-class security team. They pioneered bug bounties, invest in memory safety, fuzzing and test coverage.

NSS was one of the very first projects included with oss-fuzz; it has been officially supported since at least October 2014. Mozilla also fuzz NSS themselves with libFuzzer, and have contributed their own mutator collection and distilled coverage corpus. There is an extensive testsuite, and nightly ASAN builds.

I'm generally skeptical of static analysis, but this seems like a simple missing bounds check that should be easy to find. Coverity has been monitoring NSS since at least December 2008, and also appears to have failed to discover this.

Until 2015, Google Chrome used NSS, and maintained their own testsuite and fuzzing infrastructure independent of Mozilla. Today, Chrome platforms use BoringSSL, but the NSS port is still maintained.

  • Did Mozilla have good test coverage for the vulnerable areas? YES.
  • Did Mozilla/chrome/oss-fuzz have relevant inputs in their fuzz corpus? YES.
  • Is there a mutator capable of extending ASN1_ITEMs? YES.
  • Is this an intra-object overflow, or other form of corruption that ASAN would have difficulty detecting? NO, it's a textbook buffer overflow that ASAN can easily detect.

How did I find the bug?

I've been experimenting with alternative methods for measuring code coverage, to see if any have any practical use in fuzzing. The fuzzer that discovered this vulnerability used a combination of two approaches, stack coverage and object isolation.

Stack Coverage

The most common method of measuring code coverage is block coverage, or edge coverage when source code is available. I’ve been curious if that is always sufficient. For example, consider a simple dispatch table with a combination of trusted and untrusted parameters, as in Fig 4.

#include <stdio.h>
#include <string.h>
#include <limits.h>

static char buf[128];

void cmd_handler_foo(int a, size_t b) { memset(buf, a, b); }
void cmd_handler_bar(int a, size_t b) { cmd_handler_foo('A', sizeof buf); }
void cmd_handler_baz(int a, size_t b) { cmd_handler_bar(a, sizeof buf); }

typedef void (* dispatch_t)(int, size_t);

dispatch_t handlers[UCHAR_MAX] = {
    cmd_handler_foo,
    cmd_handler_bar,
    cmd_handler_baz,
};

int main(int argc, char **argv)
{
    int cmd;

    while ((cmd = getchar()) != EOF) {
        if (handlers[cmd]) {
            handlers[cmd](getchar(), getchar());
        }
    }
}

Fig 4. The coverage of command bar is a superset of command foo, so an input containing the latter would be discarded during corpus minimization. There is a vulnerability unreachable via command bar that might never be discovered. Stack coverage would correctly keep both inputs.[1]

To solve this problem, I’ve been experimenting with monitoring the call stack during execution.

The naive implementation is too slow to be practical, but after a lot of optimization I had come up with a library that was fast enough to be integrated into coverage-guided fuzzing, and was testing how it performed with NSS and other libraries.
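
To make the idea concrete, here is a toy Python illustration of stack coverage: hash the call stack at each call event and count unique stacks. It only illustrates the concept and is not Project Zero's library, which instruments native code and is far faster.

import sys

seen_stacks = set()

def _stack_tracer(frame, event, arg):
    # On every function call, hash the chain of (function, file) frames.
    if event == 'call':
        stack, f = [], frame
        while f is not None:
            stack.append((f.f_code.co_name, f.f_code.co_filename))
            f = f.f_back
        seen_stacks.add(hash(tuple(stack)))
    return _stack_tracer

def gained_new_coverage(fn, *args):
    # Returns True if running fn produced a call stack not seen before.
    before = len(seen_stacks)
    sys.settrace(_stack_tracer)
    try:
        fn(*args)
    finally:
        sys.settrace(None)
    return len(seen_stacks) > before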

Object Isolation

Many data types are constructed from smaller records. PNG files are made of chunks, PDF files are made of streams, ELF files are made of sections, and X.509 certificates are made of ASN.1 TLV items. If a fuzzer has some understanding of the underlying format, it can isolate these records and extract the one(s) causing some new stack trace to be found.

The fuzzer I was using is able to isolate and extract interesting new ASN.1 OIDs, SEQUENCEs, INTEGERs, and so on. Once extracted, it can then randomly combine or insert them into template data. This isn’t really a new idea, but is a new implementation. I'm planning to open source this code in the future.
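
As a rough illustration of what isolating ASN.1 records means, the sketch below walks the top-level TLV items of a DER blob. It is a toy (single-byte tags, definite lengths only) and is not the fuzzer's actual implementation, which has not been released.

def der_tlv_items(data: bytes, offset: int = 0):
    # Yield (tag, value) pairs for each top-level TLV item in a DER blob.
    while offset < len(data):
        tag = data[offset]
        length = data[offset + 1]
        header = 2
        if length & 0x80:                         # long-form length
            n = length & 0x7F
            length = int.from_bytes(data[offset + 2:offset + 2 + n], 'big')
            header = 2 + n
        yield tag, data[offset + header:offset + header + length]
        offset += header + length

# Constructed items (e.g. SEQUENCEs, tag & 0x20) can be walked recursively
# on their value bytes to pull out nested OIDs, INTEGERs, and so on.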

Do these approaches work?

I wish that I could say that discovering this bug validates my ideas, but I’m not sure it does. I was doing some moderately novel fuzzing, but I see no reason this bug couldn’t have been found earlier with even rudimentary fuzzing techniques.

Lessons Learned

How did extensive, customized fuzzing with impressive coverage metrics fail to discover this bug?

What went wrong

Issue #1 Missing end-to-end testing.

NSS is a modular library. This layered design is reflected in the fuzzing approach, as each component is fuzzed independently. For example, the QuickDER decoder is tested extensively, but the fuzzer simply creates and discards objects and never uses them.

extern "C" int LLVMFuzzerTestOneInput(const uint8_t *Data, size_t Size) {

 char *dest[2048];

 for (auto tpl : templates) {

   PORTCheapArenaPool pool;

   SECItem buf = {siBuffer, const_cast<unsigned char *>(Data),

                  static_cast<unsigned int>(Size)};

   PORT_InitCheapArena(&pool, DER_DEFAULT_CHUNKSIZE);

   (void)SEC_QuickDERDecodeItem(&pool.arena, dest, tpl, &buf);

   PORT_DestroyCheapArena(&pool);

 }

Fig 5. The QuickDER fuzzer simply creates and discards objects. This verifies the ASN.1 parsing, but not whether other components handle the resulting objects correctly.

This fuzzer might have produced a SECKEYPublicKey that could have reached the vulnerable code, but as the result was never used to verify a signature, the bug could never be discovered.

Issue #2 Arbitrary size limits.

There is an arbitrary limit of 10000 bytes placed on fuzzed input. There is no such limit within NSS; many structures can exceed this size. This vulnerability demonstrates that errors happen at extremes, so this limit should be chosen thoughtfully.

A reasonable choice might be 2^24-1 bytes, the largest possible certificate that can be presented by a server during a TLS handshake negotiation.

While NSS might handle objects even larger than this, TLS cannot possibly be involved, reducing the overall severity of any vulnerabilities missed.

Issue #3 Misleading metrics.

All of the NSS fuzzers are represented in combined coverage metrics by oss-fuzz, rather than their individual coverage. This data proved misleading, as the vulnerable code is fuzzed extensively but by fuzzers that could not possibly generate a relevant input.

This is because fuzzers like the tls_server_target use fixed, hardcoded certificates. This exercises code relevant to certificate verification, but only fuzzes TLS messages and protocol state changes.

What Worked

  • The design of the mozilla::pkix validation library prevented this bug from being worse than it could have been. Unfortunately it is unused outside of Firefox and Thunderbird.

It’s debatable whether this was just good fortune or not. It seems likely RSA-PSS would eventually be permitted by mozilla::pkix, even though it was not today.

Recommendations

This issue demonstrates that even extremely well-maintained C/C++ can have fatal, trivial mistakes.

Short Term

  • Raise the maximum size of ASN.1 objects produced by libFuzzer from 10,000 to 2^24-1 = 16,777,215 bytes.
  • The QuickDER fuzzer should call some relevant APIs with any objects successfully created before destroying them.
  • The oss-fuzz code coverage metrics should be divided by fuzzer, not by project.

Solution

This vulnerability is CVE-2021-43527, and is resolved in NSS 3.73.0. If you are a vendor that distributes NSS in your products, you will most likely need to update or backport the patch.

Credits

I would not have been able to find this bug without assistance from my colleagues from Chrome, Ryan Sleevi and David Benjamin, who helped answer my ASN.1 encoding questions and engaged in thoughtful discussion on the topic.

Thanks to the NSS team, who helped triage and analyze the vulnerability.


[1] In this minimal example, a workaround if source was available would be to use a combination of sancov's data-flow instrumentation options, but that also fails on more complex variants.

Four Key Factors When Selecting a Cloud Workload Protection Platform

1 December 2021 at 09:24

Security budgets are not infinite. Every dollar spent must produce a return on investment (ROI) in the form of better detection or prevention. 

Getting the highest ROI for security purchases is a key consideration for any IT leader. But the path to achieving that goal is not always easy to find. It is tempting for CISOs and CIOs to succumb to “shiny toy” syndrome: to buy the newest tool claiming to address the security challenges facing their hybrid environment. With cloud adoption on the rise, securing cloud assets will be a critical aspect of supporting digital transformation efforts and the continuous delivery of applications and services to customers well into the future. 

However, embracing the cloud widens the attack surface. That attack surface includes private, public and hybrid environments. A traditional approach to security simply doesn’t provide the level of protection this environment needs, which requires granular visibility over cloud events. Organizations need a new approach — one that provides them with the visibility and control they need while also supporting the continuous integration/continuous delivery (CI/CD) pipeline.

Where to Start

To address these challenges head on, organizations are turning to cloud workload protection platforms. But how do IT and business leaders know which boxes these solutions should check? Which solution is best in addressing cloud security threats based on the changing adversary landscape? 

To help guide the decision-making process, CrowdStrike has prepared a buyer’s guide with advice on choosing the right solution for your organization. In this guide, we discuss different aspects of these solutions that customers should consider in the buying process, including detection, prevention and CI/CD integration. Here are four key evaluation points highlighted in the buyer’s guide: 

  • Cloud Protection as an Extension of Endpoint Security: Focusing on endpoint security alone is not sufficient to secure the hybrid environments many organizations now have to protect. For those organizations, choosing the right cloud workload protection platform is vital.
  • Understanding Adversary Actions Against Your Cloud Workloads: Real-time, up-to-date threat intelligence is a critical consideration when evaluating CWP platforms. As adversaries ramp up actions to exploit cloud services, having the latest information about attacker tactics and applying it successfully is a necessary part of breach prevention. For example, CrowdStrike researchers noted seeing adversaries targeting neglected cloud infrastructure slated for retirement that still contains sensitive data as well as adversaries leveraging common cloud services as a way to obfuscate malicious activity (learn more in our CrowdStrike cloud security eBook, Adversaries Have Their Heads In the Cloud and Are Targeting Your Weak Points). A proper approach to securing cloud resources leverages enriched threat intelligence to deliver a visual representation of relationships across account roles, workloads and APIs to provide deeper context for a faster, more effective response. 
  • Complete Visibility into Misconfigurations, Vulnerabilities and More: Closing the door on attackers also involves identifying the vulnerabilities and misconfigurations they’re most likely to exploit. A strong approach to cloud security will weave these capabilities into the CI/CD pipeline, enabling organizations to catch vulnerabilities early. For example, they can create verified image policies to guarantee that only approved images are allowed to pass through the pipeline. By continuously scanning container images for known vulnerabilities and configuration issues and integrating security with developer toolchains, organizations can accelerate application delivery and empower DevOps teams. Catching vulnerabilities is also the job of cloud security posture management technology. These solutions allow organizations to continuously monitor the compliance of all of their cloud resources. This ability is critical because misconfigurations are at the heart of many data leaks and breaches. Having these solutions bolstering your cloud security strategy will enable you to reduce risk and embrace the cloud with more confidence.
  • Managed Threat Hunting: Technology alone is not enough. As adversaries refine their tradecraft to avoid detection, access to MDR and advanced threat hunting services for the cloud can be the difference in stopping a breach. Managed services should be able to leverage up-to-the-minute threat intelligence to search for stealthy and sophisticated attacks. This human touch adds a team of experts that can augment existing security capabilities and improve customers’ ability to detect and respond to threats.

Making the Right Decision

Weighing the differences between security vendors is not always simple. However, there are some must-haves for cloud security solutions. From detection to prevention to integration with DevOps tools, organizations need to adopt the capabilities that put them in the best position to take advantage of cloud computing as securely as possible. 

To learn more, download the CrowdStrike Cloud Workload Protection Platform Buyers Guide.


Vulnerability Spotlight: Use-after-free condition in Google Chrome could lead to code execution

1 December 2021 at 13:23
Marcin Towalski of Cisco Talos discovered this vulnerability. Blog by Jon Munshaw.  Cisco Talos recently discovered an exploitable use-after-free vulnerability in Google Chrome.   Google Chrome is a cross-platform web browser — and Chromium is the open-source version of the browser...

[[ This is only the beginning! Please visit the blog for the complete entry ]]

Tracking a P2P network related to TA505

This post is by Nikolaos Pantazopoulos and Michael Sandee

tl;dr – Executive Summary

For the past few months NCC Group has been closely tracking the operations of TA505 and the development of their various projects (e.g. Clop). During this research we encountered a number of binary files that we have attributed to the developer(s) of ‘Grace’ (i.e. FlawedGrace). These included a remote administration tool (RAT) used exclusively by TA505. The identified binary files are capable of communicating with each other through a peer-to-peer (P2P) network via UDP. While there does not appear to be a direct interaction between the identified samples and a host infected by ‘Grace’, we believe with medium to high confidence that there is a connection to the developer(s) of ‘Grace’ and the identified binaries.

In summary, we found the following:

  • P2P binary files, which are downloaded along with other Necurs components (signed drivers, block lists)
  • P2P binary files, which transfer certain information (records) between nodes
  • Based on the network IDs of the identified samples, there seem to be at least three different networks running
  • The programming style and dropped file formats match the development standards of ‘Grace’

History of TA505’s Shift to Ransomware Operations

2014: Emergence as a group

The threat actor, often referred to as TA505 publicly, has been distinguished as an independent threat actor by NCC Group since 2014. Internally we used the name “Dridex RAT group”. Initially it was a group that integrated quite closely with EvilCorp, utilising their Dridex banking malware platform to execute relatively advanced attacks, using often custom made tools for a single purpose and repurposing commonly available tools such as ‘Ammyy Admin’ and ‘RMS’/’RUT’ to complement their arsenal. The attacks performed mostly consisted of compromising organisations and social engineering victims to execute high value bank transfers to corporate mule accounts. These operations included social engineering correctly implemented two-factor authentication with dual authorization by both the creator of a transaction and the authorizee.

2017: Evolution

Late 2017, EvilCorp and TA505 (Dridex RAT Group) split as a partnership. Our hypothesis is that EvilCorp had started to use the Bitpaymer ransomware to extort organisations rather than doing banking fraud. This built on the fact they had already been using the Locky ransomware previously and was attracting unwanted attention. EvilCorp’s ability to execute enterprise ransomware across large-scale businesses was first demonstrated in May 2017. Their capability and success at pulling off such attacks stemmed from the numerous years of experience in compromising corporate networks for banking fraud activity, specifically moving laterally to separate hosts controlled by employees who had the required access and control of corporate bank accounts. The same techniques in relation to lateral movement and tools (such as Empire, Armitage, Cobalt Strike and Metasploit) enabled EvilCorp to become highly effective in targeted ransomware attacks.

However in 2017 TA505 went on their own path and specifically in 2018 executed a large number of attacks using the tool called ‘Grace’, also known publicly as ‘FlawedGrace’ and ‘GraceWire’. The victims were mostly financial institutions and a large number of the victims were located in Africa, South Asia, and South East Asia with confirmed fraudulent wire transactions and card data theft originating from victims of TA505. The tool ‘Grace’ had some interesting features, and showed some indications that it was originally designed as banking malware which had latterly been repurposed. However, the tool was developed and was used in hundreds of victims worldwide, while remaining relatively unknown to the wider public in its first years of use.

2019: Clop and wider tooling

In early 2019, TA505 started to utilise the Clop ransomware, alongside other tools such as ‘SDBBot’ and ‘ServHelper’, while continuing to use ‘Grace’ up to and including 2021. Today it appears that the group has realised the potential of ransomware operations as a viable business model and the relative ease with which they can extort large sums of money from victims.

The remainder of this post dives deeper into a tool discovered by NCC Group that we believe is related to TA505 and the developer of ‘Grace’. We assess that the identified tool is part of a bigger network, possibly related with Grace infections.

Technical Analysis

The technical analysis we provide below focuses on three components of the execution chain:

  1. A downloader – Runs as a service (each identified variant has a different name) and downloads the rest of the components along with a target processes/services list that the driver uses while filtering information. Necurs have used similar downloaders in the past.
  2. A signed driver (both x86 and x64 available) – Filters processes/services in order to avoid detection and/or prevent removal. In addition, it injects the payload into a new process.
  3. Node tool – Communicates with other nodes in order to transfer victim’s data.

It should be noted that for all above components, different variations were identified. However, the core functionality and purposes remain the same.

Upon execution, the downloader generates a GUID (used as a bot ID) and stores it in the ProgramData folder under the filename regid.1991-06.com.microsoft.dat. Any downloaded file is stored temporarily in this directory. In addition, the downloader reads the version of crypt32.dll in order to determine the version of the operating system.

Next, it contacts the command and control server and downloads the following files:

  • t.dat – Expected to contain the string ‘kwREgu73245Nwg7842h’
  • p3.dat – P2P Binary. Saved as ‘payload.dll’
  • d1c.dat – x86 (signed) Driver
  • d2c.dat – x64 (signed) Driver
  • bn.dat – List of processes for the driver to filter. Stored as ‘blacknames.txt’
  • bs.dat – List of services’ name for the driver to filter. Stored as ‘blacksigns.txt’
  • bv.dat – List of files’ version names for the driver to filter. Stored as ‘blackvers.txt’.
  • r.dat – List of registry keys for the driver to filter. Stored as ‘registry.txt’

The network communication of the downloader is simple. Firstly, it sends a GET request to the command and control server, downloads and saves on disk the appropriate component. Then, it reads the component from disk and decrypts it (using the RC4 algorithm) with the hardcoded key ‘ABCDF343fderfds21’. After decrypting it, the downloader deletes the file.

Depending on the component type, the downloader stores each of them differently. Any configurations (e.g. list of processes to filter) are stored in registry under the key HKEY_LOCAL_MACHINE\SOFTWARE\Classes\CLSID with the value name being the thread ID of the downloader. The data are stored in plaintext with a unique ID value at the start (e.g. 0x20 for the processes list), which is used later by the driver as a communication method.

In addition, in one variant, we detected a reporting mechanism to the command and control server for each step taken. This involves sending a GET request, which includes the generated bot ID along with a status code. The below table summarises each identified request (Table 1).

Request Description
/c/p1/dnsc.php?n=%s&in=%s First parameter is the bot ID and the second is the formatted string (“Version_is_%d.%d_(%d)_%d__ARCH_%d”), which contains operating system info
/c/p1/dnsc.php?n=%s&sz=DS_%d First parameter is the bot ID and the second is the downloaded driver’s size
/c/p1/dnsc.php?n=%s&er=ERR_%d First parameter is the bot ID and the second is the error code
/c/p1/dnsc.php?n=%s&c1=1 The first parameter is the bot ID. Notifies the server that the driver was installed successfully
/c/p1/dnsc.php?n=%s&c1=1&er=REB_ERR_%d First parameter is the bot ID and the second is the error code obtained while attempting to shut down the host after finding Windows Defender running
/c/p1/dnsc.php?n=%s&sz=ErrList_%d_% First parameter is the bot ID, second parameter is the resulted error code while retrieving the blocklist processes. The third parameter is set to 1. The same command is also issued after downloading the blacklisted services’ names and versions. The only difference is on the third parameter, which is increased to ‘2’ for blacklisted services, ‘3’ for versions and ‘4’ for blacklisted registry keys
/c/p1/dnsc.php?n=%s&er=PING_ERR_%d First parameter is the bot ID and the second parameter is the error code obtained during the driver download process
/c/p1/dnsc.php?n=%s&c1=1&c2=1 First parameter is the bot ID. Informs the server that the bot is about to start the downloading process.
/c/p1/dnsc.php?n=%s&c1=1&c2=1&c3=1 First parameter is the bot ID. Notified the server that the payload (node tool) was downloaded and stored successfully
Table 1 – Reporting to C2 requests

Driver Analysis

The downloaded driver is the same one that Necurs uses. It has been analysed publically already [1] but in summary, it does the following.

In the first stage, the driver decrypts shellcode, copies it to a new allocated pool and then executes the payload. Next, the shellcode decrypts and runs (in memory) another driver (stored encrypted in the original file). The decryption algorithm remains the same in both cases:

def _rol(value, bits, width):
    # Rotate 'value' left by 'bits' within a 'width'-bit word
    return ((value << bits) | (value >> (width - bits))) & ((1 << width) - 1)

xor_key = extracted_xor_key   # 32-bit key recovered from the sample
bits = 15
result = b''
for i in range(0, payload_size, 4):
    data = encrypted[i:i+4]
    value = int.from_bytes(data, 'little') ^ xor_key
    result += (_rol(value, bits, 32) ^ xor_key).to_bytes(4, 'little')

Eventually, the decrypted driver injects the payload (the P2P binary) into a new process (‘wmiprvse.exe’) and proceeds with the filtering of data.

A notable piece of the driver’s code is the string decryption routine, which is also present in recent GraceRAT samples and uses the same XOR key (1220A51676E779BD877CBECAC4B9B8696D1A93F32B743A3E6790E40D745693DE58B1DD17F65988BEFE1D6C62D5416B25BB78EF0622B5F8214C6B34E807BAF9AA).

Payload Attribution and Analysis

The identified sample is written in C++ and interacts with other nodes in the network over UDP. We believe that the downloaded binary is related to TA505 for (at least) the following reasons:

  1. Same serialisation library
  2. Same programming style as ‘Grace’ samples
  3. Similar naming convention in the configuration keys to that of ‘Grace’ samples
  4. Same output files (dsx), which we have seen in previous TA505 compromises. DSX files have been used by ‘Grace’ operators to store information related to compromised machines.

Initialisation Phase

In the initialisation phase, the sample ensures that the configurations have been loaded and the appropriate folders are created.

All identified samples store their configurations in a resource named ‘XC’.

ANALYST NOTE: Due to limited visibility of other nodes, we were not able to identify the purpose of every configuration key.

The first configuration stores the following settings:

  • cx – Parent name
  • nid – Node ID. This is used to identify the network during communication. If an incoming packet does not carry the same ID, it is treated as coming from a different network and is ignored.
  • dgx – Unknown
  • exe – Binary mode flag (DLL/EXE)
  • key – RSA key to use for verifying a record
  • port – UDP port to listen on
  • va – Parent name. It includes the node IPs to contact.

The second configuration contains the following settings (or ‘metadata’, as the developer names them):

  • meta – Parent name
  • app – Unknown. Probably specifies the variant type of the server. The following seem to be supported:
    • target (this is the current set value)
    • gate
    • drop
    • control
  • mod – Specifies whether the current binary is the core module.
  • bld – Unknown
  • api – Unknown
  • llr – Unknown
  • llt – Unknown

Next, the sample creates a set of folders and files in a directory named ‘target’. These are:

  • node (folder) – Stores records of other nodes
  • trash (folder) – Holds files moved for deletion
  • units (folder) – Unknown. Appears to contain PE files, which the core module loads.
  • sessions (folder) – Active nodes’ sessions
  • units.dsx (file) – List of ‘units’ to load
  • probes.dsx (file) – Stores the connected nodes IPs along with other metadata (e.g. connection timestamp, port number)
  • net.dsx (file) – Node peer name
  • reports.dsx (file) – Used in recent versions only. Unknown purpose.

Network Communication

After the initialisation phase has been completed, the sample starts sending UDP requests to a list of IPs in order to register itself into the network and then exchange information.

Every network packet has a header with the following structure:

struct Node_Network_Packet_Header
{
 BYTE XOR_Key;
 BYTE Version; // set to 0x37 ('7')
 BYTE Encrypted_node_ID[16]; // XORed with XOR_Key above
 BYTE Peer_Name[16]; // XORed with XOR_Key above. Connected peer name
 BYTE Command_ID; //Internally called frame type
 DWORD Watermark; //XORed with XOR_Key above
 DWORD Crc32_Data; //CRC32 of above data
};
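
For reference, a minimal parsing sketch for this header is shown below. The 43-byte packed layout, the per-byte XOR de-obfuscation of the DWORD fields and the CRC32 polynomial are our assumptions; only the field order comes from the structure above.

import struct
import zlib

HEADER_LEN = 43  # 1 + 1 + 16 + 16 + 1 + 4 + 4 bytes, assuming a packed layout

def parse_header(buf: bytes) -> dict:
    xor_key = buf[0]
    unmask = lambda chunk: bytes(b ^ xor_key for b in chunk)
    return {
        'version': buf[1],                       # observed as 0x37 ('7')
        'node_id': unmask(buf[2:18]),
        'peer_name': unmask(buf[18:34]),
        'command_id': buf[34],
        'watermark': struct.unpack('<I', unmask(buf[35:39]))[0],
        'crc_ok': zlib.crc32(buf[:39]) == struct.unpack_from('<I', buf, 39)[0],
    }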

When the sample needs to add additional information to a network packet, it uses the following structure:

struct Node_Network_Packet_Payload
{
 DWORD Size;
 DWORD CRC32_Data;
 BYTE Data[Size]; // XORed with the same key used in the packet header (XOR_Key)
};

As expected, each network command (Table 2) adds a different set of information to the ‘Data’ field of the above structure, but most commands follow a similar format. For example, an ‘invitation’ request (Command ID 1) has the following structure:

struct Node_Network_Invitation_Packet 
{
 BYTE CMD_ID;
 DWORD Session_Label;
 BYTE Invitation_ID[16];
 BYTE Node_Peer_Name[16];
 WORD Node_Binded_Port;
};
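
A payload for this request could be packed as in the hypothetical sketch below; the packed little-endian layout (‘<’) is an assumption, while the field order follows the structure above.

import struct

def build_invitation(session_label: int, invitation_id: bytes,
                     peer_name: bytes, bound_port: int) -> bytes:
    # CMD_ID = 1 for an 'invitation' request (see Table 2);
    # '<BI16s16sH' assumes a packed, little-endian layout
    return struct.pack('<BI16s16sH', 1, session_label,
                       invitation_id, peer_name, bound_port)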

The sample supports a limited set of commands, whose primary role is to exchange ‘records’ between nodes.

Command ID Description
1 Requests to register in the other nodes (‘invitation’ request)
2 Adds node IP to the probes list
3 Sends a ping request. It includes number of active connections and records
4 Sends number of active connections and records in the node
5 Adds a new node IP:Port that the remote node will check
6 Sends a record ID along with the number of data blocks
7 Requests metadata of a record
8 Sends metadata of a record
9 Requests the data of a record
10 Receives data of a record and stores it on disk
Table 2 – Set of command IDs

ANALYST NOTE: When information such as record IDs or the number of active connections/records is sent, the binary prefixes each value with its length, followed by the actual data. For example, when sending the number of active connections and records:

01 05 01 02 01 02

The above is translated as:

2 active connections from a total of 5 with 2 records.
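
A decoding sketch for this length-prefixed form is shown below; the little-endian interpretation of multi-byte values and the field order (total, then active connections, then records) are assumptions based on the example above.

def decode_length_prefixed(buf: bytes) -> list:
    # Each value is a one-byte length followed by that many bytes of data
    values, i = [], 0
    while i < len(buf):
        n = buf[i]
        values.append(int.from_bytes(buf[i + 1:i + 1 + n], 'little'))
        i += 1 + n
    return values

# b'\x01\x05\x01\x02\x01\x02' -> [5, 2, 2]
# i.e. 2 active connections out of a total of 5, with 2 records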

Moreover, when a node receives a request, it sends an echo reply (with the same packet header) to acknowledge that the request was read. In general, the following request types are supported:

  • Request type 0x10 for an echo request.
  • Request type 0x07 when sending data that fits in one packet.
  • Request type 0x0D when sending data in multiple packets (payload size over 1419 bytes).
  • Request type 0x21. It exists in the binary but is not used during network communication.

Record Files

As mentioned already, each record has its own sub-folder under the ‘node’ folder, with each sub-folder containing the following files:

  • m – Metadata of record file
  • l – Unknown purpose
  • p – Payload data

The metadata file contains a set of information about the record, such as the node peer name and the node network ID. Among this information, the keys ‘tag’ and ‘pwd’ appear to be particularly important. The ‘tag’ key represents a command (distinct from the set in Table 2) that the node executes once it receives the record; currently, the binary only supports the command ‘updates’. The payload file (p) stores the update content encrypted with AES, using the value of the ‘pwd’ key as the key.

Even though we have not yet been able to capture any network traffic for the above command, we believe that it is used to update the currently running core module.

IoCs

Nodes’ IPs

45.142.213[.]139:555

195.123.246[.]14:555

45.129.137[.]237:33964

78.128.112[.]139:33964

145.239.85[.]6:3333

Binaries

SHA-1 Description
A21D19EB9A90C6B579BCE8017769F6F58F9DADB1   P2P Binary
2F60DE5091AB3A0CE5C8F1A27526EFBA2AD9A5A7 P2P Binary
2D694840C0159387482DC9D7E59217CF1E365027 P2P Binary
02FFD81484BB92B5689A39ABD2A34D833D655266 x86 Driver
B4A9ABCAAADD80F0584C79939E79F07CBDD49657 x64 Driver
00B5EBE5E747A842DEC9B3F14F4751452628F1FE x64 Driver
22F8704B74CE493C01E61EF31A9E177185852437 Downloader
D1B36C9631BCB391BC97A507A92BCE90F687440A Downloader
Table 3 – Binaries hashes

References

  1. https://pro-cdo-web-resources.s3.eu-west-1.amazonaws.com/elevenpaths/uploads/2020/6/elevenpaths-informe-aptualizator-2-en.pdf

CrowdStrike Announces Expanded Partnership at AWS re:Invent 2021

30 November 2021 at 09:05

We’re ready to meet you in person in Las Vegas! CrowdStrike is a proud Gold sponsor of AWS re:Invent 2021, being held Nov. 29 through Dec. 3. Stop by Booth #152 at the Venetian for a chance to obtain one of our new limited-edition adversary figures while supplies last. (More details below.) Plus, connect 1:1 with a CrowdStrike expert in person. Register today so you don’t miss out on CrowdStrike in action! Check out what else we have to offer here.

Here’s a sneak peek.

What’s New 

At AWS re:Invent 2021, we are announcing expansions to our strategic partnership with AWS that provide breach protection and control for edge computing workloads running on cloud and customer-managed infrastructure, simplifying infrastructure management and consolidating security without impacting productivity.

Build with AWS, Secure with CrowdStrike

AWS Outposts Rack (42U), AWS Outposts Servers (1U and 2U) 

CrowdStrike is proud to be a launch partner of AWS Outposts 1U and 2U servers and is now compatible with the AWS Outposts rack. AWS Outposts is a fully managed service that brings the same AWS infrastructure, AWS services, APIs and tools to on-premises data centers, co-location space, or edge locations like retail stores, branch offices, factories and office locations for a truly consistent hybrid experience. AWS Outposts is ideal for workloads that require low-latency access to on-premises systems, local data processing, data residency and migration of applications with local system interdependencies. As a launch partner, CrowdStrike can provide complete end-to-end visibility and protection for customers’ AWS hybrid environments as well as Internet of Things (IoT) and edge computing use cases.

CrowdStrike Achieves EKS Anywhere Certification

Amazon EKS Anywhere is a new deployment option for Amazon EKS that allows customers to create and operate Kubernetes clusters on customer-managed infrastructure, supported by AWS. Starting today, AWS customers can now run Amazon EKS Anywhere on their own on-premises infrastructure using VMware vSphere. Now, with the Amazon EKS Anywhere certification, joint CrowdStrike and AWS solutions deliver end-to-end protection from the host to the cloud, delivering greater visibility, compliance, and threat detection and response to outsmart the adversary. CrowdStrike supports development and production of Amazon EKS workloads across Amazon EKS, Amazon EKS with AWS Fargate, and now Amazon EKS Anywhere.

Humio Log Management Integrations with AWS Services 

Humio’s purpose-built, large-scale log management platform is now more tightly integrated with a number of AWS services, including AWS Quick Starts and AWS FireLens.

  • AWS Quick Starts for Humio: AWS Quick Starts are automated reference deployments built by AWS solutions architects and AWS Partners. AWS Quick Starts help you deploy popular technologies on AWS according to AWS best practices. Joint customers will be able to initiate Humio clusters via AWS Quick Starts Templates to reduce manual procedures to just a few steps, empowering customers to start attaining Humio’s streaming observability at scale and with consistency, within minutes.
  • Humio Integration with AWS FireLens: Customers are now able to ingest AWS service and event data into Humio via AWS FireLens container log router for Amazon ECS and AWS Fargate. Humio customers will now have greater extensibility to use the breadth of services at AWS to simplify routing of logs to Humio, enabling accelerated threat hunting and search across their AWS footprint for novel and advanced cyber threats.

AWS Security Hub Integration Now Supports AWS GovCloud 

CrowdStrike Falcon already integrates with AWS Security Hub to enable a comprehensive, real-time view of high-priority security alerts. CrowdStrike’s API-first approach sends alerts back into AWS Security Hub and accelerates investigation, ultimately helping to automate security tasks. 

We have now extended this integration to publish detections identified by CrowdStrike Falcon for workloads residing within AWS GovCloud to AWS Security Hub, assisting customers operating in highly regulated environments such as the U.S. public sector. This will allow customers’ security operations center (SOC) and DevOps teams to streamline communications and simultaneously view and access the same cybersecurity event data.

CrowdStrike and AWS Partnership 

CrowdStrike is an AWS Partner Network (APN) Advanced Technology Partner; the APN is a global partner program that provides AWS business, technical and marketing support to help partners build solutions for customers. In addition, CrowdStrike has passed the technical review for the AWS Well-Architected ISV Certification. By achieving this certification, CrowdStrike has demonstrated that it adopts AWS best practices to lower costs, drive better security and performance, adopt cloud-native architectures, drive industry compliance and scale to meet traffic demands. CrowdStrike product offerings are available in the AWS Marketplace.

The Powerful Benefits of CrowdStrike and AWS 

Our joint solutions and integrations in various AWS services are powered by CrowdStrike Threat Graph®, which captures trillions of high-fidelity signals per day in real time from across the globe. Customers benefit from better protection, better performance and immediate time-to-value delivered by the cloud-native Falcon platform, designed to stop breaches. With over 14 service level integrations available, joint AWS and CrowdStrike customers are provided a consistent security posture between their on-premises workloads and those running in the AWS Cloud.

  • Unified, hybrid security experience: To reiterate, CrowdStrike supports development and production of Amazon EKS workloads across Amazon EKS, Amazon EKS with AWS Fargate, and Amazon EKS Anywhere. With a single lightweight agent and single management console, customers can experience a unified, end-to-end experience from the host to the cloud. No matter where the compute workloads are located, customers benefit from visibility, compliance, and threat detection and response to outsmart the adversary.
  • Real-time observability at enterprise scale: Humio offers the freedom to log hundreds of terabytes a day with no compromises. Now with the direct integration with AWS FireLens, customers have complete visibility to see anomalies, threats and problems to get to the root of anything nefarious that has happened across their AWS infrastructure in real time.
  • A modern and consistent security approach: The latest integrations, support and certifications from CrowdStrike for AWS allow organizations to implement a modern enterprise security approach where protection is provided across your AWS infrastructure to defend against sophisticated threat activity. 

Visit CrowdStrike at Booth #152

Come by Booth #152 for a chance to win your own adversary figure, engage in product demos and chat with CrowdStrike experts.

How to Obtain Your Own Adversary Figure 

Earn a limited-edition adversary collectable card for each step you complete. Then show your three collectable cards to a CrowdStrike representative at our giveaway station in our booth, and you’ll be rewarded with your very own adversary figure while supplies last! 

  1. Listen to a theater presentation at the CrowdStrike booth 
  2. Engage in a product demo at one of our demo stations
  3. Snap a selfie and tag #GoCrowdStrike (we will have adversary masks in the booth)

Meet 1:1 with a CrowdStrike Executive

CrowdStrike will have executives and leaders attending AWS re:Invent in person. If you’re interested in a 1:1 onsite meeting, please fill out the form here

Questions? Please contact [email protected]. We look forward to seeing you at AWS re:Invent 2021!

Additional Resources

Conference Talks – December 2021

30 November 2021 at 17:14

This month, members of NCC Group will be presenting their work at the following conferences:

  • Matt Lewis (NCC Group) & Mark McFadden, “Show me the numbers: Workshop on Analyzing IETF Data (AID)”, to be presented at the IETF Internet Architecture Board Workshop on Analyzing IETF Data 2021 (November 29 – December 1 2021)
  • Michael Gough, “ARTHIR: ATT&CK Remote Threat Hunting Incident Response Windows Tool”, to be presented at Open Source Digital Forensics Conference (December 1 2021)
  • Juan Garrido, “From Hero to Zero. Hardening Microsoft 365 services”, to be presented at STIC – CCN-CERT (December 3 2021)
  • Jennifer Fernick, “Financial Post-Quantum Cryptography in Production: A CISO’s Guide”, to be presented at FS-ISAC (December 21 2021)

Please join us!

Show me the numbers: Workshop on Analyzing IETF Data (AID) 
Matt Lewis (NCC Group) & Mark McFadden
IETF Internet Architecture Board Workshop on Analyzing IETF Data 2021
November 29 – December 1 2021

RFCs have played a pivotal role in helping to formalise ideas and requirements for much of the Internet’s design and engineering. They have facilitated peer review amongst engineers, researchers and computer scientists, which in turn has resulted in specification of key Internet protocols and their behaviours so that developers can implement those protocols in products and services, with a degree of certainty around correctness in design and interoperability between different implementations. Security considerations within RFCs were not present from the outset, but rather, evolved over time as the Internet grew in size and complexity, and as our understanding of security concepts and best practices matured. Arguably, security requirements across the corpus of RFCs (over 8,900 at the time of writing) have been inconsistent, which perhaps attests to how and when we often see security vulnerabilities manifest themselves both in protocol design and in subsequent implementation.

In early 2021, Research Director Matt Lewis of NCC Group (global cyber security and risk mitigation specialists) released research exploring properties of RFCs in terms of security, which included analyses on how security is (or isn’t) prescribed within RFCs. This was done in order to help understand how and why security vulnerabilities manifest themselves from design to implementation. The research parsed RFCs, extracting RFC data and metadata into graph databases to explore and query relationships between different properties of RFCs. The ultimate aim of the research was to use any key observations and insights to stimulate further thought and discussion on how and where security improvements could be made to the RFC process, allowing for maximised security assurance at protocol specification and design so as to facilitate security and defence-in-depth. The research showed the value of mining large volumes of data for the purpose of gaining useful insights, and the value of techniques such as graph databases to help cut through the complexities involved with processing and interpreting large volumes of data.

Following publication of NCC Group’s research, other interested parties read it and identified commonalities with research performed by Mark McFadden (of Internet Policy Advisors LTD), an expert on the development of global internet addressing standards and policies, and an active contributor to work in the IETF and ICANN. Mark had very similar research goals to NCC Group, and in that endeavour he had performed analysis around RFC3552 (Guidelines for Writing RFC Text on Security Considerations). RFC3552 provides guidance to authors in crafting RFC text on Security Considerations. Mark noted that the RFC is more than fifteen years old and that, with the threat landscape and security ecosystem having changed significantly since it was published, RFC3552 is a candidate for an update. Mark authored an internet draft proposing that, prior to drafting an update to RFC3552, an examination of recent, published Security Considerations sections be carried out as a baseline for how to improve RFC3552. His draft suggested a methodology for examining Security Considerations sections in published RFCs and the extraction of both quantitative and qualitative information that could inform a revision of the older guidance. It also reported on an experiment involving textual analysis of sixteen years of RFC Security Consideration sections.

Matt and Mark are thus very much aligned on this topic, and between their respective approaches, have already gone some way in seeking to baseline how RFC Security Considerations should be expressed and improved. They are therefore seeking to collaborate further on this topic, which will include even further analysis of empirical evidence that exists within the vast bodies of IETF data. Matt and Mark would welcome participation at the forthcoming workshop on analysing IETF Data (AID), 2021. We propose active contribution by way of presentation of our existing research and insights, and would welcome community engagement and discussion on the topic so as to understand how we can utilise the IETF data for the baselining and improvement of security requirement specification within the RFC process.


ARTHIR: ATT&CK Remote Threat Hunting Incident Response Windows Tool 
Michael Gough
Open Source Digital Forensics Conference
December 1 2021

ArTHIR is a modular framework that can be used remotely against one, or many target systems to perform threat hunting, incident response, compromise assessments, configuration, containment, and any other activities you can conjure up utilizing built-in PowerShell (any version) and Windows Remote Management (WinRM).

This is an improvement on the well-known tool Kansa, with more capabilities than just running PowerShell scripts. ArTHIR makes it easier to push and execute any binary remotely and retrieve the output!

One goal of ArTHIR is for you to map your threat hunting and incident response modules to the MITRE ATT&CK Framework. Map your modules to one or more tactics and technique IDs and fill in your MITRE ATT&CK Matrix on your capabilities, and gaps needing improvement.

Have an idea for a module? Have a utility you want to run remotely but no easy way to do it at scale? ArTHIR provides you this capability. An open source project hosted on GitHub, it encourages everyone to contribute and build modules, share ideas, and request updates. There is even a Slack page to ask questions, share ideas, and collaborate.

Included in ArTHIR are all the original Kansa modules, and several LOG-MD free edition modules. Also included is a template of some key items you will need to build your own PowerShell or utility modules.


From Hero to Zero. Hardening Microsoft 365 services
Juan Garrido
STIC – CCN-CERT
December 3 2021

In this talk, Juan will describe and demonstrate multiple techniques for bypassing existing Office 365 application security controls, showing how data can be exfiltrated from highly secure Office 365 tenants which employ strict security policies, such as Network-Location or Conditional Access Policies, which are used to control access to cloud applications.

Juan will also introduce a new PowerShell module that will help IT security administrators to better prevent, respond and react to bad actors in Microsoft 365 tenants.


Financial Post-Quantum Cryptography in Production: A CISO’s Guide
Jennifer Fernick
FS-ISAC
December 21 2021

Security leaders have to constantly filter signal from noise about emerging threats, including security risks associated with novel emerging technologies like quantum computing. In this presentation, we will explore post-quantum cryptography specifically through the lens of upgrading financial institutions’ cryptographic infrastructure.

We’re going to take a different approach to most post-quantum presentations: rather than discussing quantum mechanics or why quantum computing is a threat, we will start from the known fact that most of the public-key cryptography on the internet will be trivially broken by existing quantum algorithms, and cover strategic applied security topics to address this need for a cryptographic upgrade, such as:

  • Financial services use cases for cryptography and quantum-resistance, and context-specific nuances in computing environments such as mainframes, HSMs, public cloud, CI/CD pipelines, third-party and multi-party financial protocols, customer-facing systems, and more 
  • Whether quantum technologies like QKD are necessary to achieve quantum-resistant security
  • Post-quantum cryptographic algorithms for digital signatures, key distribution, and encryption 
  • How much confidence cryptanalysts currently have in the quantum-resistance of those ciphers, and what this may mean for cryptography standards over time 
  • Deciding when to begin integrating PQC in a world of competing technology standards 
  • Designing extensible cryptographic architectures
  • Actions financial institutions’ cryptography teams can take immediately 
  • How to talk about this risk with your corporate board

This presentation is rooted in both research and practice, is entirely vendor- and product-agnostic, and will be easily accessible to non-cryptographers, helping security leaders think through the practical challenges and tradeoffs when deploying quantum-resistant technologies. 

Cisco named leader in Incident Response Services

30 November 2021 at 16:58
By Brad Garnett. It has been more than two years already since Cisco Incident Response became a part of the Talos family. Since then, my team has continued a journey to simplify our offering for consumption and make incident response the ultimate team sport.  That is why I could not be more...

[[ This is only the beginning! Please visit the blog for the complete entry ]]