Threat Research

Executive Briefing in New York with Former Secretary of Homeland Security Michael Chertoff

2 April 2012 at 22:17

On March 15, Mandiant hosted an executive briefing over breakfast in New York City. The location in the W Hotel in Downtown NYC overlooked the 9/11 Memorial and the rising One World Trade Center: an arresting view and a unique setting for this event.

Former Secretary of Homeland Security Michael Chertoff kicked off the morning by discussing his perspective on the global threat landscape. He touched on Iran's cyber warfare capabilities in particular. He remarked on recent alleged Iranian attacks against the BBC and said that there is no point in debating the reality of cyber war. If one side believes they are engaged in such a battle, then that is reality, and "Iran clearly believes they are already participants in cyber war." He also noted that Iran's capabilities are already quite advanced. After being hit by Stuxnet, Iran views it as imperative to be prepared to respond in kind.

It is always nice to see someone like Mr. Chertoff connecting the dots so articulately on a technical level. At one point, he commented about how important it was to not just look for malware. Smart responders, he said, need to look for all trace evidence of compromise in order to fully understand the scope of an incident. Coincidentally, this is trend #1 in our recent M-Trends report, and Mr. Chertoff described the problem with a malware-centric approach perfectly.

Richard Bejtlich spoke next and used a role-playing exercise to help the audience understand the challenge of responding to targeted threats. His premise was simple: "Pretend I'm a law enforcement agent who comes to your office and tells you that you are compromised, and that I have your own internal documents as evidence. What do you do next?"

This provoked discussion, and the audience started asking questions about the nature of the intrusion and what they should do to respond. As we explored the scenario through Q&A, it became clear that most organizations lack the visibility they need to adequately respond to attacks. What about your organization? If you found out today that you had been the victim of a substantial breach, where would you look first? How would you validate the intrusion? How could you discover the scope or identify what had been stolen?

Those of you who have attended Mandiant events know that we are pretty light on the product pitches (we often don't mention our products at all). However, we do have a product that helps answer the questions that Richard was posing. Mandiant Intelligent Response has helped hundreds of companies answer the question "Now What??" when they are on the receiving end of the scenario Richard outlined in New York.

M-Trends #1: Malware Only Tells Half the Story

14 May 2012 at 20:45

When I joined Mandiant earlier this year, I was given the opportunity to help write our annual M-Trends report. This is the third year Mandiant has published the report, which is a summary of the trends we've observed in our investigations over the last twelve months.

I remember reading Mandiant's first M-Trends report when it came out in 2010 and recall being surprised that Mandiant didn't pull any punches. They talked about the advanced persistent threat or APT (they had been using that term for several years...long before it was considered a cool marketing buzzword), and they were open about the origin of the attacks. The report summarized what I'd been seeing in industry, and offered useful insights for detection and response. Needless to say, I enjoyed the opportunity to work on the latest version.

This year's report details six trends we identified in 2011. We developed the six trends for the report very organically. That is, I spent quite a few days and nights reading all of the reports from our outstanding incident response team and wrote about what we saw; we didn't start with trends and then look for evidence to support them.

If you haven't picked up a copy of the report yet, you can do so here. I will be blogging on each of the six trends over the next two weeks; you can even view the videos we've developed for each trend as each blog post is published:

Malware Only Tells Half the Story.

Of the many systems compromised in each investigation, about half were never touched by attacker malware.

In so many cases, the intruders logged into systems and took data from them (or used them as a staging point for exfiltration), but didn't install tools. It is ironic that the very systems that hold the data targeted by an attacker are probably the least likely to have malware installed on them. While finding the malware used in an intrusion is important, it is impossible to understand the full scope of an intrusion if this is the focal point of the investigation. We illustrate actual examples of this in the graphical spread on pages 6-7 of the report.

What does this mean for victim organizations?

You could start by looking for malware, but don't end there! A smart incident response process will seek to fully understand the scope of compromise and find all impacted systems in the environment. This could mean finding the registry entries that identify lateral movement, traces of deleted .rar files in unallocated space, or use of a known compromised account. It turns out that Mandiant has a product that does all of this, but the footnote on page 5 is the only mention you'll see in the entire report (and even that was an afterthought).

Thoughts and questions about this trend or the M-Trends report?

Incident Response with NTFS INDX Buffers – Part 1: Extracting an INDX Attribute

18 September 2012 at 23:23

By William Ballenthin & Jeff Hamm

On August 30, 2012, we presented a webinar on how to use INDX buffers to assist in an incident response investigation. During the Q&A portion of the webinar we received many questions; however, we were not able to answer all of them. We're going to attempt to answer the remaining questions by posting a four-part series on this blog. This series will address:

Part 1: Extracting an INDX Record

An INDX buffer in the NTFS file system tracks the contents of a folder. INDX buffers can be resident in the $MFT (Master File Table) as an index root attribute (attribute type 0x90) or non-resident as an index allocation attribute (attribute type 0xA0); non-resident means the content of the attribute is stored in the data area of the volume.

INDX root attributes have a dynamic size in the MFT, so as the contents change, the size of the attributes changes. When an INDX root attribute shrinks, the surrounding attributes shift and overwrite any old data. Therefore, it is not possible to recover slack entries from INDX root attributes. On the other hand, the file system driver allocates INDX allocation attributes in multiples of 4,096 bytes, even though individual records may be only about 40 bytes. As file system activity adds and removes INDX records from an allocation attribute, old records may still be recoverable in the slack space found between the last valid entry and the end of the 4,096-byte chunk. This is very interesting to a forensic investigator. Fortunately, many forensic tools support extracting the INDX allocation attributes from images of an NTFS file system.
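
To make the slack concept concrete, here is a minimal Python sketch (not part of any of the tools discussed in this post) that measures the live and slack regions of each 4,096-byte record in an extracted index allocation attribute, such as the INDX_ALLOCATION.bin file produced by the icat example later in this post. It assumes the commonly documented INDX layout, with the "INDX" signature at offset 0x00 and an index node header at offset 0x18 holding the entry offset, the total size of the live entries, and the allocated size; treat those offsets as assumptions to verify against your own reference (or Part 2 of this series).

import struct

def indx_slack_report(path, record_size=4096):
    """Print live-entry and slack sizes for each INDX record in an extracted buffer."""
    with open(path, "rb") as f:
        data = f.read()
    for base in range(0, len(data), record_size):
        record = data[base:base + record_size]
        if record[:4] != b"INDX":
            continue
        # Index node header fields are relative to offset 0x18 of the record
        # (assumed layout: entry offset, total size of live entries, allocated size).
        _entry_offset, entries_size, allocated_size = struct.unpack_from("<III", record, 0x18)
        slack = allocated_size - entries_size
        print(f"record at offset {base:#x}: live entries end at {0x18 + entries_size:#x}, "
              f"{slack} bytes of slack that may hold old entries")

if __name__ == "__main__":
    indx_slack_report("INDX_ALLOCATION.bin")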

Scenario

Let's say that during your investigation you identified a directory of interest that you want to examine further. In the scenario we used during the webinar, we identified a directory as being of interest because we did a keyword search for "1.rar". The results of the search indicated that the slack space of an INDX attribute contained the suspicious filename "1.rar". The INDX attribute had the $MFT record number 49.

Before we can parse the data, we need to extract the valid index attribute's content. Several forensic tools can accomplish this, as demonstrated below.

The SleuthKit

We can use the SleuthKit tools to extract both the INDX root and allocation data. To extract the INDX attribute using the SleuthKit, the first step is to identify the $MFT record IDs for the attributes of the inode. We want the content of the index root attribute (attribute type 0x90 or 144d) and the index allocation attribute (attribute type 0xA0 or 160d).

To identify the attribute IDs, run the command:

istat -f ntfs ntfs.dd 49

The istat command returns inode information from the $MFT. In the command we are specifying the NTFS file system with the "-f" switch. The tool reads a raw image named "ntfs.dd" and locates record number 49. The result of our output (truncated) was as follows:

....
Attributes:
Type: $STANDARD_INFORMATION (16-0) Name: Resident size: 72

...

Type: $I30 (144-6) Name: $I30 Resident size: 26

Type: $I30 (160-7) Name: $I30 Non-Resident size: 4096

The information returned for the attribute list includes the index root - $I30 (144-6) - and an index allocation - $I30 (160-7). The attribute identifier is the integer listed after the dash. Therefore, the index root attribute 144 has an identifier of 6, and the index allocation attribute 160 has an identifier of 7.

With this information, we can gather the content of the attributes with the SleuthKit commands:

icat -f ntfs ntfs.dd 49-144-6 > INDX_ROOT.bin

icat -f ntfs ntfs.dd 49-160-7 > INDX_ALLOCATION.bin

The icat command uses the NTFS module to identify the record (49) and attributes (144-6 and 160-7), and outputs the attribute data into the respective files INDX_ROOT.bin and INDX_ALLOCATION.bin.

EnCase

We can use EnCase to extract the INDX allocation data. To use EnCase version 6.x to gather the content of the INDX buffers, right-click the folder icon in the explorer tree. The "Copy/UnErase..." option applied to a directory will copy the content of the INDX buffer as a binary file. Specify a location to save the file. Note that the "Copy Folders..." option will copy the directory and its contents and will NOT extract the INDX structure.

FTK

We can use the Forensic Toolkit (FTK) to extract the INDX allocation data. Using FTK or FTK Imager, the INDX allocation attributes appear in the file list pane. These have the name "$I30" because the stream name is identified as $I30 in the index root and index allocation attributes. To extract the content of an index attribute, highlight the folder in the explorer pane. In the file list pane, right-click the relevant $I30 file and choose the option to "export". This will prompt you for a location to save the binary content.

Mandiant Intelligent Response®

The Mandiant Intelligent Response® (MIR) agent v.2.2 has the ability to extract INDX records natively. To generate a list of INDX buffers in MIR, run a RAW file audit. One of the options in the audit is to "Parse NTFS INDX Buffers". You can run this recursively, or you can target specific directories. We recommend the latter because this option will generate numerous entries when done recursively.

To display a list of parsed INDX buffers, you can filter a file listing in MIR where "FileAttributes" is "like" "*INDX*". The MIR agent tags these entries with the "INDX" attribute because the files listed in the indices may or may not have been deleted.

Results

Regardless of which method is used, your binary file should begin with the string "INDX" if you grabbed the correct data stream. You can verify the results quickly in a hex editor: ensure that the first four bytes of the binary data are the string "INDX".
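
If you prefer to script the check rather than open a hex editor, the short Python sketch below performs the same sanity test. The default file name matches the icat example above and is only an assumption about what you named your output.

import sys

def check_indx_magic(path):
    # Read the first four bytes and compare them against the expected "INDX" signature.
    with open(path, "rb") as f:
        magic = f.read(4)
    status = "looks like an INDX buffer" if magic == b"INDX" else f"unexpected header {magic!r}"
    print(f"{path}: {status}")

if __name__ == "__main__":
    # Example: python check_indx.py INDX_ALLOCATION.bin
    for path in sys.argv[1:] or ["INDX_ALLOCATION.bin"]:
        check_indx_magic(path)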

Conclusion

This example demonstrates three ways to use various tools to extract INDX attribute content. Our next post will detail the internal structures of a file name attribute. A file name attribute will exist for each file tracked in a directory. These structures include the MACb (Modified, Accessed, Changed, and birth) times of a file and can be a valuable timeline source in an investigation.

Mandiant Exposes APT1 – One of China's Cyber Espionage Units & Releases 3,000 Indicators

19 February 2013 at 07:00

Today, The Mandiant® Intelligence Center™ released an unprecedented report exposing APT1's multi-year, enterprise-scale computer espionage campaign. APT1 is one of dozens of threat groups Mandiant tracks around the world and we consider it to be one of the most prolific in terms of the sheer quantity of information it has stolen.

Highlights of the report include:

  • Evidence linking APT1 to China's 2nd Bureau of the People's Liberation Army (PLA) General Staff Department's (GSD) 3rd Department (Military Cover Designator 61398).
  • A timeline of APT1 economic espionage conducted since 2006 against 141 victims across multiple industries.
  • APT1's modus operandi (tools, tactics, procedures) including a compilation of videos showing actual APT1 activity.
  • The timeline and details of over 40 APT1 malware families.
  • The timeline and details of APT1's extensive attack infrastructure.

Mandiant is also releasing a digital appendix with more than 3,000 indicators to bolster defenses against APT1 operations. This appendix includes:

  • Digital delivery of over 3,000 APT1 indicators, such as domain names, and MD5 hashes of malware.
  • Thirteen (13) X.509 encryption certificates used by APT1.
  • A set of APT1 Indicators of Compromise (IOCs) and detailed descriptions of over 40 malware families in APT1's arsenal of digital weapons.
  • IOCs that can be used in conjunction with Redline™, Mandiant's free host-based investigative tool, or with Mandiant Intelligent Response® (MIR), Mandiant's commercial enterprise investigative tool.

The scale and impact of APT1's operations compelled us to write this report. The decision to publish a significant part of our intelligence about Unit 61398 was a painstaking one. What started as a "what if" discussion about our traditional non-disclosure policy quickly turned into the realization that the positive impact resulting from our decision to expose APT1 outweighed the risk of losing much of our ability to collect intelligence on this particular APT group. It is time to acknowledge the threat is originating from China, and we wanted to do our part to arm and prepare security professionals to combat the threat effectively. The issue of attribution has always been a missing link in the public's understanding of the landscape of APT cyber espionage. Without establishing a solid connection to China, there will always be room for observers to dismiss APT actions as uncoordinated, solely criminal in nature, or peripheral to larger national security and global economic concerns. We hope that this report will lead to increased understanding and coordinated action in countering APT network breaches.

We recognize that no one entity can understand the entire complex picture that many years of intense cyber espionage by a single group creates. We look forward to seeing the surge of data and conversations a report like this will likely generate.

Dan McWhorter

Managing Director, Threat Intelligence

Utilities Industry in the Cyber Targeting Scope

17 June 2013 at 20:40

There's often a lot of rhetoric in the press and in the security community around threats to the utilities industry, and risk exposure surrounding critical infrastructure. We've determined that the utilities industry (power, water, waste) has been, and likely will continue to be, a target for cyber espionage primarily from Chinese APT groups. We also anticipate that U.S. utilities infrastructure is vulnerable to computer network attack (CNA) from a variety of threat actors motivated by a desire to disrupt, deny access, or destroy. It's important to recognize the difference between actors seeking to steal data or intellectual property, and actors seeking to destroy systems or cause mass destruction. Often the distinction between computer network exploitation (CNE) and CNA gets lost in media coverage that bundles diverse cyber activity together. The type of cyber activity has implications for how we tackle the problem, so it is key to distinguish between the two.

As part of our incident response and managed defense work, Mandiant has observed Chinese APT groups exploiting the computer networks of U.S. utilities enterprises servicing or providing electric power to U.S. consumers, industry, and government. The most likely targeted information for data theft in this industry includes smart grid technologies, water and waste management expertise, and negotiations information related to existing or pending deals involving Western utilities companies operating in China.

Why would Chinese APT Groups Seek to Exploit Utilities?

Since 2010, Mandiant has responded to what we assessed were Chinese cyber espionage incidents occurring at multiple utilities companies involved in electric power generation. We recognize the PRC's utilities sector for electric power development, construction, operations, and distribution is heavily concentrated on a select few state-owned enterprises (SOE) with close ties to the central government. We suspect these relationships provide APT groups with a fundamental incentive to conduct espionage to attain advanced technology and operations expertise.

By way of possible motivation, the PRC is in the midst of a historic makeover that involves the transformation of urban infrastructures, which, by 2025, is likely to produce 15 mega-cities with an average of 25 million inhabitants, or about the entire population of the United States.[i] The impacts from this transition are intensifying pressures on an already fragile and outdated utilities infrastructure in China that currently struggles to provide sufficient electric power, water, and waste treatment. We believe APT groups are stealing data that will allow them to improve historic PRC urbanization efforts and the modernization of infrastructure, which is receiving billions of government investment dollars for development.

While we have tracked multiple attributed Chinese APT groups active in the utilities industries, we certainly don't discount that other, non-Chinese state-sponsored (or independent) actors could be engaged in data theft related to utilities.

The Risk of Disruptive Cyber Attacks

Computer network attacks (CNA), that is, offensive cyber operations meant to disrupt or destroy, are also a threat to the utilities industry from state actors in times of major conflict. Perpetrators may include hostile adversaries, possibly nation-states, during times of escalated tensions, or terrorist operatives who gain the required expertise. The threat of a state-sponsored actor or proxy targeting this industry using CNA is a growing concern, particularly in the case of Iran, though wide-scale data theft is the primary type of threat we've observed to this point. However, several large U.S. news outlets did recently report that Iran-based actors infiltrated some U.S. industrial control systems, and some have speculated that their motivation was to map the networks or identify resources for future attack scenarios.

For more intelligence reporting and specific details related to data theft in the utilities industry, the involved actors, and other threats, consider subscribing to the Mandiant Intelligence Center.

Back to Basics Series: OpenIOC

12 September 2013 at 19:33

Over the next few months, a few of my colleagues and I will be touching on various topics related to Mandiant and computer security. As part of this series, we are going to be talking about OpenIOC - how we got where we are today, how to make and use IOCs, and the future of OpenIOC. This topic can't be rolled into a single blog post, so we have developed a brief syllabus to outline the topics that we will be covering in the near future.

The History of OpenIOC

In this post we're going to focus on the past state of threat intelligence sharing and how it has evolved up to this point in the industry. We'll specifically focus on how we implemented OpenIOC at Mandiant as a way to codify threat intelligence.

Back to the Basics

What are indicators of compromise? What is an OpenIOC? What goes into an IOC? How do you read these things? What can you include in them? We will address all of these questions and more.

Investigating with Indicators of Compromise (IOCs)

In this post, we will model an actual incident response and walk through the creation of IOCs to detect malware (and attacker activity) related to the incident presented. Multiple blog posts and IOCs will come out of this exercise.

IOCs at Scale

So far, we will have touched on using Redline™ to deploy OpenIOC when investigating a single host. To deploy IOCs across an enterprise, we use Mandiant for Intelligent Response® (MIR®). Briefly, we will show how we can use MIR to deploy the IOCs built in the previous exercise across an enterprise scenario, expediting the incident response process.

The Future of OpenIOC

What is next? OpenIOC was open sourced in the fall of 2011, and it is time for an upgrade! Following the soft launch of the draft OpenIOC 1.1 spec at Black Hat USA, we have since released a draft of the OpenIOC 1.1 schema. We'll discuss what we changed and why, as well as some of the exciting possibilities that this update makes possible.

Integrating OpenIOC 1.1 into tools

With the release of the OpenIOC 1.1 draft, we also released a tool, ioc_writer, which can be used to create or modify OpenIOC 1.1 IOCs. We will detail the capabilities of this library and how you could use it to add OpenIOC support to your own applications. We will also discuss the potential for using parameters to further define application behavior that is not built into the OpenIOC schema itself.

We have several goals for this series:

  • Introduce OpenIOC for people that are not already familiar with OpenIOC
  • Show how you can use OpenIOC to record artifacts identified during an investigation
  • Show how OpenIOC can be deployed to investigate a single host - or a whole enterprise
  • Discuss the future of OpenIOC & the upcoming specification, OpenIOC 1.1
  • Show how OpenIOC 1.1 support can be added to tools, and future capabilities of the standard

Stay tuned for our next post in the series!

The History of OpenIOC

17 September 2013 at 23:36

With the buzz in the security industry this year about sharing threat intelligence, it's easy to get caught up in the hype, and believe that proper, effective sharing of Indicators or Intelligence is something that can just be purchased along with goods or services from any security vendor.

It's really a much more complex problem than most make it out to be, and one that we've been working on for a while. A large part of our solution for managing Threat Indicators is using the OpenIOC standard format.

Since its founding, Mandiant sought to solve the problem of how to conduct leading-edge Incident Response (IR) - and how to scale that response to an entire enterprise. We created OpenIOC as an early step in tackling that problem. Mandiant released OpenIOC to the public as an Open Source project under the Apache 2 license in November of 2011, but OpenIOC had been used internally at Mandiant for several years prior.

IR is a discipline that usually requires highly trained professionals doing very resource-intensive work. Traditionally, these professionals would engage in time-intensive investigations on only a few hosts on a compromised network. Practical limitations on staffing, resources, time, and money would prevent investigations from covering anything other than a very small percentage of most enterprises. Responders would only be able to examine what they had direct access to, with their corresponding conclusions constrained by time and budget.

This level of investigation was almost never enough to give confidence on anything other than the hosts that had been examined - responders were unable to confirm whether other systems were still compromised, or whether the adversary still had footholds in other parts of the network.

Creating a standard way of recording Threat Intelligence into an Indicator was part of what allowed Mandiant to bring a new approach to IR, including the use of an automated solution, such as Mandiant for Intelligent Response® (MIR®). Mandiant's new strategy for IR enabled investigators, who previously could only get to a few hosts in an engagement, to now query entire enterprises in only slightly more time. Using OpenIOC as a standardized format, the Indicators of Compromise (IOCs) were recorded once, and then used to help gather the same information and conduct the same testing on every host across the enterprise via the automated solution. Incident Responders could now spend only a little more time, but cover an exponentially larger number of hosts during the course of an investigation.

Recording the IOCs in OpenIOC had other benefits as well. Indicators from one investigation could be shared with other investigations or organizations, and allow investigators to look for the exact same IOCs wherever they were, without having to worry about translation problems associated with ambiguous formats, such as lists or text documents. One investigator could create an IOC, and then share it with others, who could put that same IOC into their investigative system and look for the same evil as the first person, with little to no additional work.

The format grew organically over time. We always intended that the format be expandable and improvable. Instead of trying to map out every possible use case, Mandiant has updated the format and expanded the dictionaries of IOC Terms as new needs have arisen over time. The version we released in 2011 as "1.0" had already been revised and improved upon internally several times before its Open Source debut. We continue to update the standard as needed, allowing for features and requests that we have received over time from other users or interested parties.

Unlike traditional "signatures," OpenIOC provides the ability to use logical comparison of Indicators in an IOC, providing more flexibility and discrimination than simple lists of artifacts. An extensive library of Indicator Terms allows for a variety of information from hosts and networks to be recorded, and if something is not covered in the existing terms, additional terms may be added. Upcoming features like parameters will allow for further expansion of the standard, including customization for application or organization specific use cases.

Having the OpenIOC standard in place is tremendously powerful, providing benefits for scaling detection and investigation, both of which are key parts of managing the threat lifecycle. OpenIOC enables easy, standardized information sharing once implemented, without adding much to workloads or draining resources. And it is freely available as Open Source, so that others can benefit from some of the methods we have already found to work well in our practice. We hope that as we improve it, you can take even more advantage of what OpenIOC has to offer in your IR and Threat Intelligence workflows.

Next up in the Back to Basics: OpenIOC series, we'll talk about some of the basics of what OpenIOC is and what using it involves - and some of the upcoming things in the future of OpenIOC.

How Will I Fill This Web Historian-Shaped Hole in My Heart?

19 September 2013 at 01:11

With the recent integration of Mandiant Web Historian™ into Mandiant Redline™, you may be asking "How do I review my Web History using Redline?" If so, then follow along as I explain how to collect and review web history data in Redline - with a focus on areas where the workflow and features differ from that of Web Historian.

For those of you unfamiliar with Redline, it is Mandiant's premier free tool, providing host investigative capabilities to help users find signs of malicious activity through memory and file analysis and the development of a threat assessment profile.

Configuring Web History Data Collection

Web Historian provided three options for choosing how to find web history data that you want to analyze: scan my local system, scan a profile folder, and parse an individual history file. Redline allows you to accomplish all three of these operations using a single option, Analyze this Computer, which is found under the Main Menu in the upper left corner. Specifying the path to a profile folder or a history file will require some additional configuration:

Figure 1: Finding your web history data in Web Historian (left) vs. Redline (right)

Click on Analyze this Computer to begin configuring your analysis session. To ensure that Redline collects your desired web history data, click the link to Edit your script. On the View and Edit Your Script window are several options; take a look around and turn on any and all data that might interest you. For our purposes, we will be focusing on the Browser History options underneath the Network tab. This section contains all of the familiar options from Web Historian; simply select the boxes corresponding to the data you wish to collect.

One difference you may notice is the absence of an option to specify the browser(s) you would like to target. You can now find that option by selecting Show Advanced Parameters from the upper right corner of the window. Once advanced parameters are enabled, simply type the name of any browser(s) you would like to target, each on its own separate line in the Target Browser field. To have Redline collect any web history data regardless of browser, just leave this field empty.

You may also notice that enabling advanced parameters activates a field for History Files Location. As you may have guessed, this is where you can specify a path to a profile folder or history file to analyze directly, as you were able to do in Web Historian.

Figure 2: Configuring Redline to Collect Browser History Data

Now that you have finished configuring your script, choose a location to save your analysis session and then hit OK. Redline will run the script, which will require Administrator privileges and may trigger a UAC prompt before running, depending on how your system is configured. After a brief collecting and processing time, your web history data will be ready for review.

Reviewing your Data

For the actual review of your web history data, you should feel right at home in Redline. Just like Web Historian, Redline uses a sortable, searchable, configurable table view for each of the individual categories of web history data.

Figure 3: Displaying your web history data for review in both Web Historian (behind) and Redline (front)

Although similar, Redline does have a few minor differences in how it visualizes your data:

  • Redline does not break the data into pages; instead it will discretely page in large data sets (25k+ rows) automatically as you scroll down through the list.
  • To configure the table view, you will need to manipulate the column headers for ordering and resizing, and right-click on a column header to show and hide columns - as opposed to using the column configuration menu in Web Historian.
  • Searching and simple filtering is done in each individual table view and is not applied globally. To access the find options, either press the magnifying glass in the bottom right corner, or press Ctrl-f while a table view is selected.
  • To export your data to a CSV (comma separated values) format file, click on export in the bottom right corner. Like Web Historian, Redline will only export data currently in the table view. If you applied any filtering or tags, those will affect the data as it is exported.

In addition to the features that have always been available in Web Historian, Redline also allows you to review your web history with its full suite of analytical capabilities and investigative tools. Check out the Redline user guide for a full description of these capabilities. Here are just a few of the most popular:

  • Timeline provides a chronological listing of all web-based events (e.g., URL last browsed to, File Download Started, etc.) in a single heterogeneous display. You can employ this to follow the activities of a user or attacker as they played out on the system. You can also quickly reduce your target investigative scope using the Timeline's powerful filtering capabilities.
  • Use tags and comments to mark-up your findings as you perform your investigation, making it easier to keep track of what you have seen while moving forward. You can then go back and export those results into your favorite reporting solution.
  • Use Indicators of Compromise (IOCs) as a quick way to determine if your system contains any potential security breaches or other evidence of compromise. Visit http://www.openioc.org/ to learn more about IOCs.
  • Last but not least, Redline gives you the ability to examine an entire system worth of metadata. With Redline, you are not simply restricted to Web History related data; you can investigate security incidents with the scope and context of the full system.

If your favorite feature from Web Historian has not yet been included in Redline (Graphing, Complex Filtering, etc.), feel free to make a request using one of the contact methods specified below. We will be taking feedback into consideration when choosing what the Redline team works on in the future.

As always, feel free to contact us with any additional questions. And just in case you do not already have it, the latest version of Redline (v1.10 as of the time of this writing) can always be found here.

OpenIOC: Back to the Basics

1 October 2013 at 18:45

Written by Will Gibb & Devon Kerr

One challenge investigators face during incident response is finding a way to organize information about an attacker's activity, utilities, malware and other indicators of compromise, called IOCs. The OpenIOC format addresses this challenge head-on. OpenIOC provides a standard format and terms for describing the artifacts encountered during the course of an investigation. In this post we're going to provide a high-level overview of IOCs, including IOC use cases, the structure of an IOC and IOC logic.

Before we continue, it's important to mention that IOCs are not signatures, and they aren't meant to function as a signature would. It is often understated, but an IOC is meant to be used in combination with human intelligence. IOCs are designed to aid in your investigation, or the investigations of others with whom you share threat intelligence.

IOC Use Cases:

There are several use cases for codifying your IOCs, and these typically revolve around your objectives as an investigator. The number of potential use cases is quite large, and we've listed some of the most common use cases below:

  • Find malware/utility: This is the most common use case. Essentially, this is an IOC written to find some type of known malware or utility, either by looking for attributes of the binary itself, or for some artifact created upon execution, such as a prefetch file or registry key.
  • Methodology: Unlike an IOC written to identify malware or utilities, these IOCs find things you don't necessarily know about, in order to generate investigative leads. For example, if you wanted to identify any service DLL that wasn't signed and which was loaded from any directory that did not include the path "windows\system32", you could write an IOC to describe that condition. Another good example of a methodology IOC is an IOC that looks for the Registry text value of all "Run" keys for a string ending ".jpg". This represents an abnormal condition which, upon investigation, may lead to evidence of a compromise; a minimal sketch of this kind of check appears after this list.
  • Bulk: You may already be using this kind of IOC. Many organizations subscribe to threat intelligence feeds that deliver a list of MD5s or IP addresses; a bulk IOC can represent a collection of those indicators. These kinds of IOCs are very black and white and are typically only good for an exact match.
  • Investigative: As you investigate systems in an environment and identify evidence of malicious activity such as metadata related to the installation of backdoors, execution of tools, or files being staged for theft, you can track that information in an IOC. These IOCs are similar to bulk IOCs; however, an investigative IOC only contains indicators from a single investigation. Using this type of IOC can help you to prioritize which systems you want to analyze.
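
As mentioned in the methodology bullet above, here is a minimal, purely illustrative Python sketch of that kind of check: it enumerates the standard Run key locations and flags any value whose data ends in ".jpg". The key paths are the well-known Run keys; treat the sketch as a starting point for an analyst to review, not as proof of compromise by itself.

import winreg

RUN_KEYS = [
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

def suspicious_run_values():
    """Return Run key values whose data ends in '.jpg', an abnormal condition worth review."""
    hits = []
    for hive, subkey in RUN_KEYS:
        try:
            key = winreg.OpenKey(hive, subkey)
        except OSError:
            continue  # key not present on this host
        with key:
            index = 0
            while True:
                try:
                    name, value, _type = winreg.EnumValue(key, index)
                except OSError:
                    break  # no more values
                if str(value).strip('"').lower().endswith(".jpg"):
                    hits.append((subkey, name, value))
                index += 1
    return hits

if __name__ == "__main__":
    for subkey, name, value in suspicious_run_values():
        print(f"review: {subkey}\\{name} = {value}")
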

IOC components:

An IOC is made up of three main parts: IOC metadata, references and the definition. Let's examine each one of these more closely, noting that we're using the Mandiant IOC Editor (IOCe) downloadable from the Mandiant website:

Metadata: IOC metadata describes information like the author of the IOC ([email protected]), the name of the IOC (Evil.exe (BACKDOOR)), and a brief description such as "This variant of the open source Poison Ivy backdoor has been configured to beacon to 10.127.10.128 and registers itself as 'Microsoft 1atent time services'."

References: Within the IOC, references can find information like the name of an investigation or case number, comments and information on the maturity of the IOC such as Alpha, Beta, Release, etc. This data can help you to understand where the IOC fits in your library of threat intelligence. One common use for references is to associate an IOC with a particular threat group. It is not uncommon for certain references to be removed from IOCs when sharing IOCs with third parties.

Definition: This is the content of the IOC, containing the artifacts that an investigator decided to codify in the IOC. For example, these may include the MD5 of a file, a registry path or something found in process memory. Inside the definition, indicators are listed out or combined into expressions that consist of two terms and some form of Boolean logic.

One thing about the OpenIOC format that makes it particularly useful is the ability to combine similar terms using Boolean AND and OR logic. The example below shows how this type of logic can be used; this type of IOC describes three possible scenarios:

  1. The name of a service is "MS 1atent time services" - OR -
  2. The ServiceDLL name for any service contains "evil.exe" - OR -
  3. The filename is "bad.exe" AND the filesize attribute of that file is within the range 4096 to 10240 bytes.

When you use Boolean logic, remember the following:

  • An AND requires that both sides of the expression are true
  • An OR requires that one side of the expression is true

Understanding that the Boolean statements, such as 'The name of a service is "MS 1atent time services"' OR 'the filename is "bad.exe" AND the filesize attribute of that file is within the range 4096 to 10240 bytes', are evaluated separately is an important aspect of understanding how the logic within an IOC works. These statements are not if-else statements, nor do both have to be true for the IOC to match. When investigating a host, if the "MS 1atent time services" service is present, this IOC would match regardless of whether the malicious file described by the IOC was also present on the host.
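
To make the evaluation order concrete, here is a minimal sketch in plain Python of how those OR and AND branches evaluate independently. The host dictionary is a hypothetical stand-in for data a collector would gather; it is not the OpenIOC XML format itself.

def ioc_matches(host):
    # Branch 1: a service named "MS 1atent time services".
    service_name_hit = any(svc["name"] == "MS 1atent time services" for svc in host["services"])
    # Branch 2: any ServiceDLL containing "evil.exe".
    service_dll_hit = any("evil.exe" in svc["service_dll"].lower() for svc in host["services"])
    # Branch 3 (the AND): a file named "bad.exe" AND a size between 4096 and 10240 bytes.
    file_hit = any(f["name"].lower() == "bad.exe" and 4096 <= f["size"] <= 10240
                   for f in host["files"])
    # Top-level OR: any single branch is enough for the IOC to hit.
    return service_name_hit or service_dll_hit or file_hit

host = {
    "services": [{"name": "MS 1atent time services",
                  "service_dll": r"C:\WINDOWS\system32\netsvc.dll"}],
    "files": [],
}
print(ioc_matches(host))  # True: the service-name branch alone satisfies the OR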

In our next post we're going to have a crash course in writing IOC definitions using the Mandiant IOC editor, IOCe. 

Another Darkleech Campaign

3 October 2013 at 17:23

Last week, Darkleech and Blackhole got up close and personal with our external careers web site.

The fun didn't end there. This week we saw a tidal wave of Darkleech activity linked to a large-scale malvertising campaign identified by the following URL:

hXXp://delivery[.]globalcdnnode[.]com/7f01baa99716452bda5bba0572c58be9/afr-zone.php

Again Darkleech was up to its tricks, injecting URLs and sending victims to a landing page belonging to the Blackhole Exploit Kit, one of the most popular and effective exploit kits available today. Blackhole wreaks havoc on computers by exploiting vulnerabilities in client applications like IE, Java and Adobe. Computers that are vulnerable to exploits launched by Blackhole are likely to become infected with one of several flavors of malware, including ransomware, Zeus/Zbot variants and click-fraud trojans like ZeroAccess.

We started logging hits at 21:31:00 UTC on Sunday, 09/22/2013. The campaign has been ongoing, peaking on Monday and tapering down throughout the week.

During most of the campaign's run, delivery[.]globalcdnnode[.]com appeared to have gone dark: it stopped serving the exploit kit's landing page as expected and then stopped resolving altogether, yet tons of requests kept flowing.

This left some scratching their heads as to whether the noise was a real threat.

Indeed, it was a real threat, as Blackhole showed up to the party a couple of days later; this was confirmed by actually witnessing a system get attacked on a subsequent visit to the URL.

Figure 1. – Session demonstrating exploit via IE browser and Java.

The server returned the (obfuscated) Blackhole landing page; no 404 this time.

Figure 2. – Request and response to delivery[.]globalcdnnode[.]com.

 

The next stage was to load a new URL for the malicious jar file. At this point, the unpatched Windows XP system running vulnerable Java quickly succumbed to CVE-2013-0422.

Figure 3. – Packet capture showing the JAR file being downloaded.

Figure 4. – Some of the Java class files visible in the downloaded JAR.

 

Even though our system was exploited and the browser was left in a hung state, it did not receive the payload. Given the sporadic availability during the week of both the host and exploit kit’s landing page, it’s possible the system is or was undergoing further setup and this is the prelude to yet another large-scale campaign.

We can’t say for sure but we know this is not the last time we will see it or the crimeware actor behind it.

Registrant:

Name: Alexey Prokopenko
Organization: home
Address: Lenina 4, kv 1
City: Ubileine
Province/state: LUGANSKA OBL
Country: UA
Postal Code: 519000
Email: alex1978a @bigmir.net

By the way, this actor has a long history of malicious activity online too.

The campaign also appears to be abusing Amazon Web Services.

globalcdnnode.com
Server:           ns-293.awsdns-36.com
Address:    205.251.193.37#53

globalcdnnode.com
origin = ns-293.awsdns-36.com
mail addr = awsdns-hostmaster.amazon.com

At the time of this writing, the domain delivery[.]globalcdnnode[.]com was still resolving, using fast-flux DNS techniques to resolve to a different IP address every couple of minutes, thwarting attempts to shut the domain down by constantly being on the move.

Figure 5. – The familiar Plesk control panel, residing on the server.

This was a widespread campaign, indirectly affecting many web sites via malvertising techniques. The referring hosts run the gamut from local radio stations to high-profile news, sports, and shopping sites. Given the large amounts of web traffic these types of sites see, it's not surprising there was a tidal wave of requests to delivery[.]globalcdnnode[.]com. Every time a page with the malvertisement was loaded, a request was made in the background to hXXp://delivery.globalcdnnode.com/7f01baa99716452bda5bba0572c58be9/afr-zone.php.

To give an example of what this activity looked like from DTI, you can see the numbers in the chart below.

Figure 6. – DTI graph showing number of Darkleech detections logged each day.

By using malvertising and/or posing as a legitimate advertiser or content delivery network, the bad guys infiltrate the web advertisement ecosystem. This results in their malicious content getting loaded in your browser, often in the background, while you browse sites that have nothing to do with the attack (as was the case with our careers site).

Imagine a scenario where a good portion of enterprise users have a home page set to a popular news website. More than likely, the main web page has advertisements, and some of those ads could be served from third-party advertiser networks and/or CDNs. If just one of those advertisements on the page is malicious, visitors to that page are at risk of redirection and/or infection, even though the news website's server is itself clean.

So, when everybody shows up to work on Monday and opens their browsers, there could be a wave of clients making requests to exploit kit landing pages. If Darkleech is lurking in those advertisement waters, you could end up with a leech or two attached to your network.

Critical Infrastructure Beyond the Power Grid

19 November 2013 at 21:26

The term "critical infrastructure" has earned its spot on the board of our ongoing game of cyber bingo--right next to "Digital Pearl Harbor," "Cyber 9/11," "SCADA" and "Stuxnet."

With "critical infrastructure" thrown about in references to cyber threats nearly every week, we thought it was time for a closer look at just what the term means-and what it means to other cyber threat actors.

The term "critical infrastructure" conjures up images of highways, electrical grids, pipelines, government facilities and utilities. But the U.S. government definition also includes economic security and public health. The Department of Homeland Security defines critical infrastructure as "Systems and assets, whether physical or virtual, so vital to the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters."[1]

Certainly the U.S. definition is expansive, but some key cyber actors go a step further to include a more abstract "information" asset. Russian officials view information content, flow and influencers as an enormous component of critical infrastructure. Iran and China similarly privilege the security of their information assets in order to protect their governments.

The bottom line?

U.S. companies that may never have considered themselves a plausible target for cyber threats could become victims of offensive or defensive state cyber operations. Earlier this year, several media outlets, including the New York Times and Washington Post, disclosed that they had been the victims of China-based intrusions. The Times and the Post linked the intrusions on their networks to their reporting on corruption in the upper echelons of the Chinese Communist Party and other issues.

These media outlets weren't sitting on plans for a new fighter jet or cutting-edge wind turbines, the kind of information often assumed to be at risk of data theft. Rather, the reporters at the Times and Post were perched in key positions to influence U.S. government and public views of the Chinese leadership, possibly in a negative light. The Chinese government had conducted these intrusions against what it deemed critical infrastructure that supported the flow of valuable information.

Who's up next?

State actors motivated to target critical infrastructure (by their own definition or the U.S.') won't just be the usual attention grabbers in cyberspace. We estimate that Iran, Syria, and North Korea all have interest and would be able to conduct or direct some level of network operations. These states are also likely to conduct operations in the near term to identify red lines and gauge corporate and government reactions. With little reputational loss at stake, we expect actors sponsored by or associated with these states to target an array of critical infrastructure targets. Companies that serve as key information brokers, whether for the public or the U.S. government, should be particularly attuned to the criticality a variety of cyber threat actors may assign to their work.

OpenIOC Series: Investigating with Indicators of Compromise (IOCs) – Part I

16 December 2013 at 20:58

Written by Devon Kerr & Will Gibb

The Back to Basics: OpenIOC blog series previously discussed how Indicators of Compromise (IOCs) can be used to codify information about malware or utilities and describe an attacker's methodology. Also touched on were the parts of an IOC, such as the metadata, references, and definition sections. This blog post will focus on writing IOCs by providing a common investigation scenario, following along with an incident response team as they investigate a compromise and assemble IOCs.

Our scenario involves a fictional organization, "Acme Widgets Co.", which designs, manufactures and distributes widgets. Last week, this organization held a mandatory security-awareness training that provided attendees with an overview of common security topics including password strength, safe browsing habits, email phishing and the risks of social media. During the section on phishing, one employee expressed concern that he may have been phished recently. Bob Bobson, an administrator, indicated that some time back he'd received a strange email about a competitor's widget and was surprised that the PDF attachment wouldn't open. A member of the security operations staff, John Johnson, was present during the seminar and quickly initiated an investigation of Bob's system using the Mandiant Redline™ host investigation tool. John used Redline to create a portable collector configured to obtain live response data from Bob's system, which included file system metadata, the contents of the registry, event logs, web browser history, and service information.

John ran the collector on Bob's system and brought the data back to his analysis workstation for review. Through discussions with Bob, John learned that the suspicious e-mail likely arrived on October 13, 2013.

After initial review of the evidence, John assembled the following timeline of suspicious activity on the system.

Table 1: Summary of significant artifacts

Based on this analysis, John pieced together a preliminary narrative: The attacker sent a spear-phishing email to Bob which contained a malicious PDF attachment, "Ultrawidget.pdf", which Bob saved to the desktop on October 10, 2013, at 20:19:07 UTC. Approximately five minutes later, at 20:24:44 UTC, the file "C:\WINDOWS\SysWOW64\acmCleanup.exe" was created as well as a Run key used for persistence. These events were likely the result of Bob opening the PDF. John sent the PDF to a malware analyst to determine the nature of the exploit used to infect Bob's PC.

The first IOC John writes describes the malware identified on Bob's PC, "acmCleanup.exe", as well as the malicious PDF. IOCs sometimes start out as rudimentary, looking for the known file hashes, filenames and persistence mechanisms of the malware identified. Here is what an initial IOC looks like:

Figure 1: Initial IOC for acmCleanup.exe (BACKDOOR)

As analysis continues, these IOCs are updated and improved, incorporating indicator terms from malware and intelligence analysis as well as being refined based on the environment. In this sense, the IOC is a living document. For example, additional analysis may reveal more registry key information, additional files which may be written to disk, or information for identifying the malware in memory. Here is the same IOC, after augmenting it with the results of malware analysis:

Figure 2: Augmented IOC for acmCleanup.exe (BACKDOOR)

The augmented IOC will continue to identify the exact malware discovered on Bob's PC. This improved IOC will also identify malware which has things in common with that backdoor:

  • Malware which uses the same Mutex, a process attribute that will prevent the machine from being infected multiple times with the same backdoor
  • Malware which performs DNS queries for the malicious domain "23vsx.evil.com"
  • Malware which has a specific set of import functions; in this case which correspond to reverse shell and keylogger functionality

Beginning on October 11, the attacker accessed the system and executed the Windows command "ipconfig" at 20:24:00 UTC, resulting in the creation of a prefetch file. Approximately five minutes later, the attacker began uploading files to "C:\$RECYCLE.BIN". Based on review of their MD5 hashes, three files ("wce.exe", "getlsasrvaddr.exe", "update.exe") were identified as known credential dumping utilities, while others ("filewalk32.exe", "rar.exe") were tools for performing file system searches and creating WinRAR archives. These items should be recorded in an IOC, as a representation of an attacker's tools, techniques and procedures (TTPs). It is important to know that an attacker is likely to have interacted with many more systems than were infected with malware; for this reason it is crucial to look for evidence that an attacker has accessed systems. Some of the most common activities attackers perform on these systems include password-dumping, reconnaissance and data theft.

Seeing that the attacker had staged file archives and utilities in the "$Recycle.Bin" folder, John also created an IOC to find artifacts present there. This IOC was designed to identify files in the root of the "$Recycle.Bin" directory, or, by checking TypedPaths in the Registry, to identify whether a user (notably, the attacker) had manually typed that path into the Explorer address bar. This is an example of encapsulating an attacker's TTPs in OpenIOC form.

Figure 3: IOC for Unusual Files in "C:\RECYCLER" and "C:\$RECYCLE.BIN" (Methodology)
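
As a rough illustration of what this methodology IOC looks for (an analyst-side approximation, not MIR or Redline), the Python sketch below lists loose files sitting in the root of the recycle bin directories. On a typical system that root holds only per-user SID subfolders, so loose files there deserve a closer look.

import os

SUSPECT_DIRS = (r"C:\$RECYCLE.BIN", r"C:\RECYCLER")

def unexpected_recycle_bin_files():
    """Return loose files found directly in the root of the recycle bin directories."""
    hits = []
    for root in SUSPECT_DIRS:
        if not os.path.isdir(root):
            continue
        for entry in os.scandir(root):
            # Per-user S-1-5-... subfolders are expected; files in the root generally are not.
            if entry.is_file() and entry.name.lower() != "desktop.ini":
                hits.append(entry.path)
    return hits

if __name__ == "__main__":
    for path in unexpected_recycle_bin_files():
        print("suspicious:", path)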

On October 15, 2013, at 12:15:37 UTC, the Sysinternals PsExec utility was created in "C:\$RECYCLE.BIN". Approximately four hours later, at 16:11:03 UTC, the attacker used the Internet Explorer browser to access text files located in "C:\$RECYCLE.BIN". Between 16:11:03 UTC and 16:11:06 UTC, the attacker accessed two text files which were no longer present on the system. At 16:17:55 UTC, the attacker mounted the remote hidden share "\\10.20.30.101\C$". At 16:20:29 UTC, the attacker executed the Windows "tree" command, which produces a file system listing, resulting in the creation of a prefetch file. At 17:37:37 UTC, the attacker created one WinRAR archive, "C:\$Recycle.Bin\a.rar", which contained two text files, "c.txt" and "a.txt". These text files contained output from the Windows "tree" and "ipconfig" commands. No further evidence was present on the system. John noted that the earliest event log entries present on the system start approximately two minutes after the creation of "a.rar". It is likely that the attacker cleared the event logs before disconnecting from the system.

John identified the lateral movement to the 10.20.30.101 host, from "ACM-BOBBO" through registry keys that recorded the mount of the hidden C$ share. By recording this type of artifact in an IOC, John will be able to quickly see if the attacker has pivoted to another part of the Acme Widgets Co. network when investigating 10.20.30.101. He included artifacts looking for other common hidden shares such as IPC$ and ADMIN$. The effectiveness of an IOC may be influenced by the environment the IOC was created for. This IOC, for example, may generate a significant number of false-positives in an environment where these hidden shares are legitimately used. At Acme Widgets Co., however, the use of hidden shares is considered highly suspicious.

Figure 4: IOC for Lateral Movement (Methodology)
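
For reference, here is a minimal sketch of checking one host-side artifact this lateral-movement IOC relies on: entries under the current user's MountPoints2 key that reference hidden administrative shares such as C$, ADMIN$ or IPC$. MountPoints2 is a commonly cited location where Windows records such mounts, but treat the exact key path and entry format as assumptions to validate in your own environment.

import winreg

MOUNTPOINTS2 = r"Software\Microsoft\Windows\CurrentVersion\Explorer\MountPoints2"

def hidden_share_mounts():
    """Return MountPoints2 entries that look like remote hidden shares (e.g. ##10.20.30.101#C$)."""
    hits = []
    try:
        key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, MOUNTPOINTS2)
    except OSError:
        return hits  # key not present for this user
    with key:
        index = 0
        while True:
            try:
                name = winreg.EnumKey(key, index)
            except OSError:
                break  # no more subkeys
            if name.startswith("##") and name.endswith("$"):
                hits.append(name.replace("#", "\\"))
            index += 1
    return hits

if __name__ == "__main__":
    for share in hidden_share_mounts():
        print("hidden share mount recorded:", share)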

At the end of the day, John authored three new IOCs based on his current investigation. He knows that if he records the artifacts he identified from his investigation into the "ACM-BOBBO" system, he can apply that knowledge to the investigation of the host at 10.20.30.101. Once he collects information on 10.20.30.101 with his Redline portable collector, he'll be able to match IOCs against the Live Response data, which will let him identify known artifacts quickly prior to beginning Live Response analysis. Although John is using these IOCs to search systems individually, these same IOCs could be used to search for evidence of attacker activity across the enterprise. Armed with this set of IOCs, John sets out to hunt for evil on the host at "10.20.30.101".

Stay tuned for our next blog post, seeing how this investigation develops.

Best of the Best in 2013: The Armory

20 December 2013 at 21:48

Everyone likes something for free. And there is no better place to go to get free analysis, intelligence and tools than The Armory on M-Unition. During the past year, we've offered intelligence and analysis on new threat activity, sponsored open source projects and offered insight on free tools like Redline™, all of which has been highlighted on our blog.

In case you've missed it, here are some of our most popular posts:

Challenges in Malware and Intelligence Analysis: Similar Network Protocols, Different Backdoors and Threat Groups

In this post, Mandiant's intel team shares insight on threat activity: specifically, two separate APT groups using two different backdoors that had very similar networking protocols. Read more to learn what they found.

New Release: OWASP Broken Web Applications Project VM Version 1.1

Chuck Willis overviews version 1.1 of the Mandiant-sponsored OWASP Broken Web Applications Project Virtual Machine (VM). If you are not familiar with this open source project, it provides a freely downloadable VM containing more than 30 web applications with known or intentional security vulnerabilities. Many people use the VM for training or self-study to learn about web application security vulnerabilities, including how to find them, exploit them, and fix them. It can also be used for other purposes such as testing web application assessment tools and techniques or understanding evidence of web application attacks.

Back to Basics Series: OpenIOC

Will Gibb and a few of his colleagues at Mandiant embark on a series going back to the basics and looking deeper at OpenIOC - how we got where we are today, how to make and use IOCs, and the future of OpenIOC.

Check out related posts here: The History of OpenIOC, Back to the Basics, OpenIOC, IOC Writer and Other Free Tools.

Live from Black Hat 2013: Redline, Turbo Talk, and Arsenal

Sitting poolside at Black Hat USA 2013, Mandiant's Kristen Cooper chats with Ted Wilson about Redline in this latest podcast. Ted leads the development of Redline, building innovative investigative features and capabilities that enable both the seasoned investigator and those with considerably less experience to answer the question, "have you been compromised?"

Utilities Industry in the Cyber Targeting Scope

Our intel team is back again, this time with an eye on the utilities industry. As part of our incident response and managed defense work, Mandiant has observed Chinese APT groups exploiting the computer networks of U.S. utilities enterprises servicing or providing electric power to U.S. consumers, industry, and government. The most likely targets for data theft in this industry include smart grid technologies, water and waste management expertise, and negotiations information related to existing or pending deals involving Western utilities companies operating in China.

Leveraging the Power of Solutions and Intelligence

27 January 2014 at 20:40

Welcome to my first post as a FireEye™ employee! Many of you have asked me what I think of FireEye's acquisition of Mandiant. One of the aspects of the new company that I find most exciting is our increased threat intelligence capabilities. This post will briefly explore what that means for our customers, prospects, and the public.

By itself, Mandiant generates threat intelligence in a fairly unique manner from three primary sources. First, our professional services division learns about adversary tools, tactics, and procedures (TTPs) by assisting intrusion victims. This "boots on the ground" offering is unlike any other, in terms of efficiency (a small number of personnel required), speed (days or weeks onsite, instead of weeks or months), and effectiveness (we know how to remove advanced foes). By having consultants inside a dozen or more leading organizations every week of the year, Mandiant gains front-line experience of cutting-edge intrusion activity. Second, the Managed Defense™ division operates our software and provides complementary services on a multi-year subscription basis. This team develops long-term counter-intrusion experience by constantly assisting another set of customers in a managed security services model. Finally, Mandiant's intelligence team acquires data from a variety of sources, fusing it with information from professional services and managed defense. The output of all this work includes deliverables such as the annual M-Trends report and last year's APT1 document, both of which are free to the public. Mandiant customers have access to more intelligence through our software and services.

As a security software company, FireEye deploys powerful appliances into customer environments to inspect and (if so desired) quarantine malicious content. Most customers choose to benefit from the cloud features of the FireEye product suite. This decision enables community self-defense and exposes a rich collection of the world's worst malware. As millions more instances of FireEye's MVX technology expand to mobile, cloud and data center environments, all of us benefit in terms of protection and visibility. Furthermore, FireEye's own threat intelligence and services components generate knowledge based on their visibility into adversary software and activity. Recent examples include breaking news on Android malware, identifying Yahoo! systems serving malware, and exploring "cyber arms" dealers. Like Mandiant, FireEye's customers benefit from intelligence embedded into the MVX platforms.

Many have looked at the Mandiant and FireEye combination from the perspective of software and services. While these are important, both ultimately depend on access to the best threat intelligence available. As a combined entity, FireEye can draw upon nearly 2,000 employees in 40 countries, with a staff of security consultants, analysts, engineers, and experts not found in any other private organization. Stay tuned to the FireEye and Mandiant blogs as we work to provide an integrated view of adversary activity throughout 2014.

I hope you can attend the FireEye + Mandiant - 4 Key Steps to Continuous Threat Protection webinar on Wednesday, Jan 29 at 2pm ET. During the webinar, Manish Gupta, FireEye SVP of Products, and Dave Merkel, Mandiant CTO and VP of Products, will discuss why traditional IT security defenses are no longer the safeguards they once were and what's now needed to protect against today's advanced threats.

The 2013 FireEye Advanced Threat Report!

27 February 2014 at 14:00

FireEye has just released its 2013 Advanced Threat Report (ATR), which provides a high-level overview of the computer network attacks that FireEye discovered last year.

In this ATR, we focused almost exclusively on a small, but very important subset of our overall data analysis – the advanced persistent threat (APT).

APTs, due to their organizational structure, mission focus, and likely some level of nation-state support, often pose a more serious danger to enterprises than a lone hacker or hacker group ever could.

Over the long term, APTs are capable of cyber attacks that can rise to a strategic level, including widespread intellectual property theft, espionage, and attacks on national critical infrastructures.

The data contained in this report is gleaned from the FireEye Dynamic Threat Intelligence (DTI) cloud, and is based on attack metrics shared by FireEye customers around the world.

Its insight is derived from:

  • 39,504 cyber security incidents
  • 17,995 malware infections
  • 4,192 APT incidents
  • 22 million command and control (CnC) communications
  • 159 APT-associated malware families
  • CnC infrastructure in 206 countries and territories


Based on our data, the U.S., South Korea, and Canada were the top APT targets in 2013; the U.S., Canada, and Germany were targeted by the highest number of unique malware families.

The ATR describes attacks on 20+ industry verticals. Education, Finance, and High-Tech were the top overall targets, while Government, Services/Consulting, and High-Tech were targeted by the highest number of unique malware families.

In 2013, FireEye discovered eleven zero-day attacks. In the first half of the year, Java was the most common target for zero-days; in the second half, FireEye observed a surge in Internet Explorer (IE) zero-days that were used in watering hole attacks, including against U.S. government websites.

Last year, FireEye analyzed five times more Web-based security alerts than email-based alerts – possibly stemming from an increased awareness of spear phishing as well as a more widespread use of social media.

In sum, the 2013 ATR offers strong evidence that malware infections occur within enterprises at an alarming rate, that attacker infrastructure is global in scope, and that advanced attackers continue to penetrate legacy defenses, such as firewalls and anti-virus (AV), with ease.

Investigating with Indicators of Compromise (IOCs) – Part II

6 March 2014 at 01:42

Written by Will Gibb & Devon Kerr

In our blog post "Investigating with Indicators of Compromise (IOCs) - Part I," we presented a scenario involving the "Acme Widgets Co.," a company investigating an intrusion, and its incident responder, John. John's next objective is to examine the system "ACMWH-KIOSK" for evidence of attacker activity. Using the IOCs containing the tactics, techniques and procedures (TTPs) he developed during the analysis of "ACM-BOBBO," John identifies the following IOC hits:

Table 1: IOC hit summary

After reviewing the IOC hits in Redline™, John performs Live Response and puts together the following timeline of suspicious activity on "ACMWH-KIOSK".

Table 2: Summary of significant artifacts

John examines the timeline and reconstructs the following chain of activity:

  • On October 15, 2013, at approximately 16:17:56 UTC, the attacker performed a type 3 network logon, described here, to "ACMWH-KIOSK" from "ACM-BOBBO" using the ACME domain service account, "backupDaily."
  • A few seconds later, the attacker copied two executables, "acmCleanup.exe" and "msspcheck.exe," from "ACM-BOBBO" to "C:\$RECYCLE.BIN."

By generating MD5 hashes of both files, John determines that "acmCleanup.exe" is identical to the similarly named file on "ACM-BOBBO", which means it won't require additional malware analysis. Since the MD5 hash of "msspcheck.exe" was not previously identified, John sends the binary to a malware analyst, whose analysis reveals that "msspcheck.exe" is a variant of the "acmCleanup.exe" malware.
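
Hash comparison of this sort is straightforward to script. A minimal sketch, assuming the two collected binaries have been copied into a local evidence folder (the paths shown are hypothetical):

import hashlib

def md5sum(path, chunk=1 << 20):
    # Hash the file in chunks so large binaries don't need to fit in memory.
    digest = hashlib.md5()
    with open(path, "rb") as handle:
        for block in iter(lambda: handle.read(chunk), b""):
            digest.update(block)
    return digest.hexdigest()

# Hypothetical local copies of the collected binaries
known = md5sum("evidence/ACM-BOBBO/acmCleanup.exe")
candidate = md5sum("evidence/ACMWH-KIOSK/acmCleanup.exe")
print("identical" if known == candidate else "different")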

  • About a minute later, at 16:19:02 UTC, the attacker executed the Sysinternals PsExec remote command execution utility, which ran for approximately three minutes.
  • During this time period, event logs recorded the start and stop of the "WCE SERVICE" service, which corresponds to the execution of the Windows Credentials Editor (WCE).

Based on the timeline of events and the artifacts present on the system, John can reasonably conclude that PsExec was used to execute WCE.
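
Service-installation events of this kind typically land in the System event log as Event ID 7045 on modern Windows versions, which makes them easy to triage. A quick sketch, run on the target host with administrative rights; the count limit and string match are illustrative choices:

import subprocess

# Pull recent service-installation events (Event ID 7045) from the System log in
# text form and flag any that reference the "WCE SERVICE" name. Requires local
# administrative rights; the count limit of 200 is an arbitrary choice.
command = 'wevtutil qe System /q:"*[System[(EventID=7045)]]" /f:text /c:200'
output = subprocess.run(command, shell=True, capture_output=True, text=True).stdout
for event in output.split("Event["):
    if "WCE SERVICE" in event:
        print("Possible WCE execution:\nEvent[" + event.strip())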

About seven minutes later, the registry recorded the modification of a Run key associated with persistence for "msspcheck.exe." Finally, at 16:48:11 UTC, the attacker logged off from "ACMWH-KIOSK".

John found no additional evidence of compromise. Since the IOCs are living documents, John's next step is to update his IOCs with relevant findings from his investigation of the kiosk system. John updates the "acmCleanup.exe (BACKDOOR)" IOC with information from the "msspcheck.exe" variant of the attacker's backdoor including:

  • File metadata like filename and MD5.
  • A uniquely named process handle discovered by John's malware analyst.
  • A prefetch entry for the "msspcheck.exe" binary.
  • Registry artifacts associated with persistence of "msspcheck.exe."

From there, John double-checks the uniqueness of the additional filename with a few search engine queries. After confirming that "msspcheck.exe" is a unique filename, he updates his "acmCleanup.exe (BACKDOOR)" IOC. By updating his existing IOC, John ensures that he will be able to identify evidence attributed to this specific variant of the backdoor.
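
Updates like this can also be scripted rather than made by hand in an editor. The sketch below appends a single MD5 term to the top-level OR indicator of an OpenIOC 1.0 file using lxml; the file name and hash are hypothetical, the namespace and element layout reflect my understanding of the OpenIOC 1.0 schema, and dedicated tooling (such as the IOC Writer library mentioned elsewhere on this blog) would be a more robust choice in practice.

import uuid
from lxml import etree

IOC_NS = "http://schemas.mandiant.com/2010/ioc"   # OpenIOC 1.0 namespace (assumption)

def add_md5_term(ioc_path, md5_value):
    # Append one MD5 IndicatorItem to the IOC's top-level OR indicator.
    tree = etree.parse(ioc_path)
    top_or = tree.find(".//{%s}Indicator[@operator='OR']" % IOC_NS)
    item = etree.SubElement(top_or, "{%s}IndicatorItem" % IOC_NS,
                            id=str(uuid.uuid4()), condition="is")
    etree.SubElement(item, "{%s}Context" % IOC_NS,
                     document="FileItem", search="FileItem/Md5sum", type="mir")
    content = etree.SubElement(item, "{%s}Content" % IOC_NS, type="md5")
    content.text = md5_value
    tree.write(ioc_path, xml_declaration=True, encoding="utf-8", pretty_print=True)

add_md5_term("acmCleanup_backdoor.ioc", "d41d8cd98f00b204e9800998ecf8427e")   # hypothetical values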

Changes to the original IOC have been indicated in green.

The updated IOC can be seen below:

Figure 1: Augmented IOC for acmCleanup.exe (BACKDOOR)

John needs additional information before he can start to act. He knows two things about the attacker that might be able to help him out:

  • The attacker is using a domain service account to perform network logons.
  • The attacker has been using WCE to obtain credentials and the "WCE SERVICE" service appears in event logs.

John turns to his security information and event management (SIEM) system, which aggregates logs from his domain controllers, enterprise antivirus and intrusion detection systems. A search for type 3 network logons using the "backupDaily" domain account turns up 23 different systems accessed with that account. John runs another query for the "WCE SERVICE" service and sees that logs from 3 domain controllers contain that service event. John now needs to look for his IOCs across all Acme systems in a short period of time.
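
If the SIEM can export the matching events, the same hunt can be reproduced offline. A minimal sketch that tallies type 3 logons for the "backupDaily" account from a CSV export of Security event 4624; the file name and column names are assumptions about how the SIEM exports its data:

import csv
from collections import defaultdict

# Tally type 3 (network) logons for the "backupDaily" service account and list
# which source addresses touched which hosts.
targets = defaultdict(set)
with open("4624_export.csv", newline="") as handle:
    for row in csv.DictReader(handle):
        if row.get("LogonType") == "3" and row.get("TargetUserName") == "backupDaily":
            targets[row.get("Computer")].add(row.get("IpAddress"))

for host, sources in sorted(targets.items()):
    print(host, "<-", ", ".join(sorted(s for s in sources if s)))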

A Not-So Civic Duty: Asprox Botnet Campaign Spreads Court Dates and Malware

16 June 2014 at 14:00

Executive Summary

FireEye Labs has been tracking a recent spike in malicious email detections that we attribute to a campaign that began in 2013. While malicious email campaigns are nothing new, this one is significant in that we are observing mass-targeting attackers adopting the malware evasion methods pioneered by the stealthier APT attackers. And this is certainly a high-volume business, with anywhere from a few hundred to ten thousand malicious emails sent daily – usually distributing between 50 and 500,000 emails per outbreak.

Through the FireEye Dynamic Threat Intelligence (DTI) cloud, FireEye Labs discovered that each and every major spike in email blasts brought a change in the attributes of their attack. These changes have made it difficult for anti-virus, IPS, firewalls and file-based sandboxes to keep up with the malware and effectively protect endpoints from infection. Worse, if past is prologue, we can expect other malicious, mass-targeting email operators to adopt this approach to bypass traditional defenses.

This blog will cover the trends of the campaign, as well as provide a short technical analysis of the payload.

Campaign Details

Figure 1: Attack Architecture

The campaign first appeared in late December of 2013 and has since recurred in fairly cyclical patterns each month. The threat actors behind this campaign appear to be quite responsive to published blogs and reports about their malware techniques, tweaking their malware accordingly in an ongoing, and largely successful, effort to evade detection.

In late 2013, malware labeled as Kuluoz, the specific spam component of the Asprox botnet, was discovered to be the main payload of what would become the first malicious email campaign. Since then, the threat actors have continuously tweaked the malware by changing its hardcoded strings, remote access commands, and encryption keys.

Previously, Asprox malicious email campaigns targeted various industries in multiple countries and included a URL link in the message body. The current version of Asprox instead uses a simple zipped email attachment that contains the malicious “exe” payload. Figure 2 below shows a sample message, while Figure 3 is an example of the various court-related email headers used in the campaign.

Figure 2: Email Sample

Figure 3: Email Headers

Some of the recurring themes Asprox has used include airline tickets, postal services and license keys. In recent months, however, the court notice and court request-themed emails appear to have been the campaign’s most successful phishing lure.

The following list contains examples of email subject variations, specifically for the court notice theme:

  • Urgent court notice
  • Notice to Appear in Court
  • Notice of appearance in court
  • Warrant to appear
  • Pretrial notice
  • Court hearing notice
  • Hearing of your case
  • Mandatory court appearance

The campaign appeared to increase in volume during the month of May. Figure 4 shows the increase in activity of Asprox compared to other crimewares towards the end of May specifically. Figure 5 highlights the regular monthly pattern of overall malicious emails. In comparison, Figure 6 is a compilation of all the hits from our analytics.

Figure 4: Worldwide Crimeware Activity

Figure 5: Overall Asprox Botnet Tracking

Figure 6: Asprox Botnet Activity, Unique Samples

These malicious email campaign spikes revealed that FireEye appliances, with the support of DTI cloud, were able to provide a full picture of the campaign (blue), while only a fraction of the emailed malware samples could be detected by various Anti-Virus vendors (yellow).

Figure 7: FireEye Detection vs. Anti-Virus Detection

By the end of May, we observed a large spike in the unique binaries associated with this malicious activity. Where previous days saw malware authors use just 10-40 unique MD5s per day, about 6,400 unique MD5s were sent out on May 29th. That is roughly a 16,000% increase in unique MD5s over the usual malicious email campaign we had observed. Compared to other recent email campaigns, Asprox stands out for the sheer volume of unique samples it uses.

Figure 8: Asprox Campaign Unique Sample Tracking

Figure 9: Geographical Distribution of the Campaign

Figure 10: Distribution of Industries Affected

Brief Technical Analysis

Figure 11: Attack Architecture

Infiltration

The infiltration phase consists of the victim receiving a phishing email with a zipped attachment containing the malware payload disguised as an Office document. Figure 11 is an example of one of the more recent phishing attempts.

Figure 12: Malware Payload Icon

Evasion

Once the victim executes the malicious payload, it starts an svchost.exe process and injects its code into the newly created process. Once loaded into memory, the injected code is unpacked as a DLL. Note that Asprox uses a hardcoded mutex that can be found in its strings; a quick check for this mutex is sketched after the list below.

  1. Typical Mutex Generation
    1. "2GVWNQJz1"
  2. Create svchost.exe process
  3. Code injection into svchost.exe
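
Because the mutex name is hardcoded, its presence can serve as a quick infection marker on a live host. A minimal sketch using ctypes follows; the mutex name is taken from this sample's strings, and a hit should be treated only as a lead, since the check says nothing about which process created the mutex.

import ctypes

SYNCHRONIZE = 0x00100000
MUTEX_NAME = "2GVWNQJz1"   # hardcoded mutex observed in this sample's strings

kernel32 = ctypes.windll.kernel32
handle = kernel32.OpenMutexW(SYNCHRONIZE, False, MUTEX_NAME)
if handle:
    print("Mutex present: host may be running this Kuluoz/Asprox payload")
    kernel32.CloseHandle(handle)
else:
    print("Mutex not found")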

Entrenchment

Once the DLL is running in memory, it creates a copy of itself in the following location:

%LOCALAPPDATA%/[8 CHARACTERS].EXE

Example filename:

%LOCALAPPDATA%\lwftkkea.exe

It’s important to note that the process will first check for itself in the startup registry key, so a compromised endpoint will have the following registry key populated with the executable:

HKCU\Software\Microsoft\Windows\CurrentVersion\Run
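
A lightweight check for this persistence pattern is sketched below: it walks the current user's Run key and flags values that point into %LOCALAPPDATA% and match the eight-character executable naming described above. Legitimate software also runs from %LOCALAPPDATA%, so hits are leads for review rather than verdicts, and the regex is an illustrative approximation.

import os
import re
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"
LOCAL_APPDATA = os.environ.get("LOCALAPPDATA", "").lower()
EIGHT_CHAR_EXE = re.compile(r"\\[a-z]{8}\.exe$")   # the [8 CHARACTERS].EXE pattern

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
    index = 0
    while True:
        try:
            name, value, _ = winreg.EnumValue(key, index)
        except OSError:        # no more values
            break
        index += 1
        path = str(value).strip('"').lower()
        if LOCAL_APPDATA and path.startswith(LOCAL_APPDATA) and EIGHT_CHAR_EXE.search(path):
            print("Suspicious Run entry: %s -> %s" % (name, value))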

Exfiltration/Communication

The malware uses various encryption techniques to communicate with the command and control (C2) nodes. The communication uses an RSA (i.e. PROV_RSA_FULL) encrypted SSL session using the Microsoft Base Cryptographic Provider while the payloads themselves are RC4 encrypted. Each sample uses a default hardcoded public key shown below.

Default Public Key

-----BEGIN PUBLIC KEY-----

MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDCUAUdLJ1rmxx+bAndp+Cz6+5I'

Kmgap2hn2df/UiVglAvvg2US9qbk65ixqw3dGN/9O9B30q5RD+xtZ6gl4ChBquqw

jwxzGTVqJeexn5RHjtFR9lmJMYIwzoc/kMG8e6C/GaS2FCgY8oBpcESVyT2woV7U

00SNFZ88nyVv33z9+wIDAQAB

-----END PUBLIC KEY-----

First Communication Packet

Bot ID RC4 Encrypted URL

POST /5DBA62A2529A51B506D197253469FA745E7634B4FC

HTTP/1.1

Accept: */*

Content-Type: application/x-www-form-urlencoded

User-Agent: <host useragent>

Host: <host ip>:443

Content-Length: 319

Cache-Control: no-cache

<knock><id>5DBA62A247BC1F72B98B545736DEA65A</id><group>0206s</group><src>3</src><transport>0</transport><time>1881051166</time><version>1537</version><status>0</status><debug>none<debug></knock>
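
The RC4 layer itself is standard. For reference, a plain-Python RC4 routine is shown below; because RC4 is symmetric, the same function decrypts captured payloads once the correct key is known. How this family derives the key (for example, whether the bot ID plays a role) is not spelled out here, so the key and data in the usage line are placeholders.

def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) & 0xFF
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) & 0xFF
        j = (j + S[i]) & 0xFF
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) & 0xFF])
    return bytes(out)

print(rc4(b"placeholder-key", bytes.fromhex("deadbeef")))   # hypothetical inputs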

C2 Commands

In comparison to the campaign at the end of 2013, the current campaign uses one of the newer versions of the Asprox family where threat actors added the command “ear.”

if ( wcsicmp(Str1, L"idl") )
{
    if ( wcsicmp(Str1, L"run") )
    {
        if ( wcsicmp(Str1, L"rem") )
        {
            if ( wcsicmp(Str1, L"ear") )
            {
                if ( wcsicmp(Str1, L"rdl") )
                {
                    if ( wcsicmp(Str1, L"red") )
                    {
                        if ( !wcsicmp(Str1, L"upd") )

C2 command   Description
idl          This command idles the process to wait for commands
run          Download from a partner site and execute from a specified path
rem          Remove itself
ear          Download another executable and create autorun entry
rdl          Download, inject into svchost, and run
upd          Download and update
red          Modify the registry

C2 Campaign Characteristics


For the two major malicious email campaign spikes in April and May of 2014, separate sets of C2 nodes were used for each major spike.

April            May-June
94.23.24.58      192.69.192.178
94.23.43.184     213.21.158.141
1.234.53.27      213.251.150.3
84.124.94.52     27.54.87.235
133.242.134.76   61.19.32.24
173.45.78.226    69.64.56.232
37.59.9.98       72.167.15.89
188.93.74.192    84.234.71.214
187.16.250.214   89.22.96.113
85.214.220.78    89.232.63.147
91.121.20.71
91.212.253.253
91.228.77.15

Conclusion

The data reveals that the Asprox botnet’s malicious email campaigns change their lures, C2 infrastructure and technical details at roughly monthly intervals. With each new improvement, it becomes more difficult for traditional security methods to detect certain types of malware.

Acknowledgements:

Nart Villeneuve, Jessa dela Torre, and David Sancho. Asprox Reborn. Trend Micro. 2013. http://www.trendmicro.com/cloud-content/us/pdfs/security-intelligence/white-papers/wp-asprox-reborn.pdf

Havex, It’s Down With OPC

17 July 2014 at 14:00

FireEye recently analyzed the capabilities of a variant of Havex (referred to by FireEye as “Fertger” or “PEACEPIPE”), the first publicized malware reported to actively scan OPC servers used for controlling SCADA (Supervisory Control and Data Acquisition) devices in critical infrastructure (e.g., water and electric utilities), energy, and manufacturing sectors.

While Havex itself is a somewhat simple PHP Remote Access Trojan (RAT) that has been analyzed by other sources, none of these have covered the scanning functionality that could impact SCADA devices and other industrial control systems (ICS). Specifically, this Havex variant targets servers involved in OPC (Object linking and embedding for Process Control) communication, a client/server technology widely used in process control systems (for example, to control water pumps, turbines, tanks, etc.).

Note: ICS is a general term that encompasses SCADA (Supervisory Control and Data Acquisition) systems, DCS (Distributed Control Systems), and other control system environments. The term SCADA is well-known to wider audiences, and throughout this article, ICS and SCADA will be used interchangeably.

Threat actors have leveraged Havex in attacks across the energy sector for over a year, but the full extent of industries and ICS systems affected by Havex is unknown. We decided to examine the OPC scanning component of Havex more closely, to better understand what happens when it’s executed and the possible implications.

OPC Testing Environment

To conduct a true test of the Havex variant’s functionality, we constructed an OPC server test environment that fully replicates a typical OPC server setup (Figure 1 [3]). As shown, ICS or SCADA systems involve OPC client software that interacts directly with an OPC server, which works in tandem with the PLC (Programmable Logic Controller) to control industrial hardware (such as a water pump, turbine, or tank). FireEye replicated both the hardware and software of the OPC server setup (the components that appear within the dashed line on the right side of Figure 1).

Figure 1: Topology of typical OPC server setup

The components of our test environment are robust and comprehensive to the point that our system could be deployed in an environment to control actual SCADA devices. We utilized an Arduino Uno [1] as the primary hardware platform, acting as the OPC server. The Arduino Uno is an ideal platform for developing an ICS test environment because of its low power requirements, the large number of libraries that make programming the microcontroller easier, serial communication over USB, and low cost. We leveraged the OPC Server and libraries from St4makers [2] (as shown in Figure 2). This software is available for free to SCADA engineers to allow them to develop software to communicate information to and from SCADA devices.

Figure 2: OPC Server Setup

Using the OPC Server libraries allowed us to make the Arduino Uno act as a true, functioning OPC SCADA device (Figure 3).

Figure 3: Matrikon OPC Explorer showing Arduino OPC Server

We also used Matrikon’s OPC Explorer [1], which enables browsing between the Arduino OPC server and the Matrikon embedded simulation OPC server. In addition, the Explorer can be used to add certain data points to the SCADA device – in this case, the Arduino device.

Figure 4: Tags identified for OPC server

In the OPC testing environment, we created tags in order to simulate a true, functioning OPC server. Tags, in relation to ICS devices, are single data points: for example, temperature, vibration, or fill level. A tag represents a single value monitored or controlled by the system at a single point in time.

With our test environment complete, we executed the malicious Havex “.dll" file and analyzed how Havex’s OPC scanning module might affect OPC servers it comes in contact with.

Analysis

The particular Havex sample we looked at was a file named PE.dll (6bfc42f7cb1364ef0bfd749776ac6d38). Its scanning functionality searches directly for OPC servers, both on the host where the sample runs and laterally, across the entire network.

The scanning process starts when the Havex downloader calls the runDll export function. The OPC scanner module identifies potential OPC servers by using the Windows networking (WNet) functions: through recursive calls to WNetOpenEnum and WNetEnumResource, the scanner builds a list of all servers that are globally accessible through Windows networking. The list of servers is then checked to determine whether any of them host an interface to the Component Object Model (COM) classes listed below:

Figure 5: Relevant COM objects

Once OPC servers are identified, the following CLSIDs are used to determine the capabilities of the OPC server:

Figure 6: CLSIDs used to determine the capabilities of the OPC server

When executing PE.dll, all of the OPC server data output is first saved as %TEMP%\[random].tmp.dat. The results of a capability scan of an OPC server are stored in %TEMP%\OPCServer[random].txt. Files are not encrypted or deleted once the scanning process is complete.
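
Those file names make for an easy host-level sweep during triage. A short sketch that looks for the artifacts described above in the user's %TEMP% directory; the patterns are broad, so benign matches (particularly for *.tmp.dat) are possible and hits are leads rather than verdicts.

import glob
import os

# Sweep %TEMP% for the scan artifacts described above.
temp = os.environ.get("TEMP", r"C:\Windows\Temp")
for pattern in ("*.tmp.dat", "*.tmp.yls", "OPCServer*.txt"):
    for hit in glob.glob(os.path.join(temp, pattern)):
        print(hit, os.path.getsize(hit), "bytes")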

Once the scanning completes, the log is deleted and the contents are encrypted and stored into a file named %TEMP%\[random].tmp.yls.  The encryption process uses an RSA public key obtained from the PE resource TYU.  The RSA key is used to protect a randomly generated 168-bit 3DES key that is used to encrypt the contents of the log.
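
For readers who want to reason about what recovery would require, the hybrid scheme described here can be approximated with pycryptodome: the attacker's public RSA key wraps a random 24-byte (168-bit) 3DES key, and that 3DES key encrypts the log. The cipher mode, padding and output layout below are assumptions for illustration only, and decrypting real .yls files would require the attacker's private RSA key.

from Crypto.Cipher import DES3, PKCS1_v1_5
from Crypto.PublicKey import RSA
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad

def encrypt_log(log_bytes, rsa_public_pem):
    # Wrap a random 168-bit 3DES key with the attacker's public RSA key,
    # then encrypt the log with 3DES-CBC. Mode, padding and layout are assumptions.
    des3_key = DES3.adjust_key_parity(get_random_bytes(24))
    iv = get_random_bytes(8)
    wrapped_key = PKCS1_v1_5.new(RSA.import_key(rsa_public_pem)).encrypt(des3_key)
    ciphertext = DES3.new(des3_key, DES3.MODE_CBC, iv).encrypt(pad(log_bytes, 8))
    return wrapped_key, iv, ciphertext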

The TYU resource is BZip2 compressed and XORed with the string “1312312”.  A decoded configuration for 6BFC42F7CB1364EF0BFD749776AC6D38 is included in the figure below:

Figure 7: Sample decoded TYU resource

The 4409de445240923e05c5fa6fb4204 value is believed to be an RSA key identifier. The AASp1… value is the Base64 encoded RSA key.
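Undoing the two layers that protect the TYU resource is simple to script. A sketch, assuming the resource was compressed first and XORed second (the exact ordering is an inference from the description above):

import bz2
from itertools import cycle

def decode_tyu(resource_bytes, xor_key=b"1312312"):
    # Strip the XOR layer, then decompress the BZip2 payload underneath.
    unxored = bytes(b ^ k for b, k in zip(resource_bytes, cycle(xor_key)))
    return bz2.decompress(unxored)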

A sample encrypted log file (%TEMP%\[random].tmp.yls) is below.

00000000  32 39 0a 66 00 66 00 30  00 30 00 66 00 66 00 30  29.f.f.0.0.f.f.0
00000010  00 30 00 66 00 66 00 30  00 30 00 66 00 66 00 30  .0.f.f.0.0.f.f.0
00000020  00 30 00 66 00 66 00 30  00 30 00 66 00 66 00 30  .0.f.f.0.0.f.f.0
00000030  00 30 00 66 00 66 00 30  00 30 00 66 00 37 39 36  .0.f.f.0.0.f.796
00000040  0a 31 32 38 0a 96 26 cc  34 93 a5 4a 09 09 17 d3  .128..&.4..J....
00000050  e0 bb 15 90 e8 5d cb 01  c0 33 c1 a4 41 72 5f a5  .....]...3..Ar_.
00000060  13 43 69 62 cf a3 80 e3  6f ce 2f 95 d1 38 0f f2  .Cib....o./..8..
00000070  56 b1 f9 5e 1d e1 43 92  61 f8 60 1d 06 04 ad f9  V..^..C.a.`.....
00000080  66 98 1f eb e9 4c d3 cb  ee 4a 39 75 31 54 b8 02  f....L...J9u1T..
00000090  b5 b6 4a 3c e3 77 26 6d  93 b9 66 45 4a 44 f7 a2  ..J<.w&m..fEJD..
000000A0  08 6a 22 89 b7 d3 72 d4  1f 8d b6 80 2b d2 99 5d  .j"...r.....+..]
000000B0  61 87 c1 0c 47 27 6a 61  fc c5 ee 41 a5 ae 89 c3  a...G'ja...A....
000000C0  9e 00 54 b9 46 b8 88 72  94 a3 95 c8 8e 5d fe 23  ..T.F..r.....].#
000000D0  2d fb 48 85 d5 31 c7 65  f1 c4 47 75 6f 77 03 6b  -.H..1.e..Guow.k

 

--Truncated--

Probable key identifier:  ff00ff00ff00ff00ff00ff00ff00f
RSA-encrypted 3DES key:   5A EB 13 80 FE A6 B9 A9 8A 0F 41… (the 3DES key will be the last 24 bytes of the decrypted result)
3DES IV:                  88 72 94 a3 95 c8 8e 5d
3DES-encrypted log:       fe 23 2d fb 48 85 d5 31 c7 65 f1…

Figure 8: Sample encrypted .yls file

Execution

When executing PE.dll against the Arduino OPC server, we observe interesting responses within the plaintext %TEMP%\[random].tmp.dat:

Figure 9: Sample scan log

The contents of the tmp.dat file are the results of the network scan for OPC servers; they are not the in-depth interrogation results from the OPC servers themselves, only the initial discovery.

The particular Havex sample in question also enumerates OPC tags and fully interrogates the OPC servers identified within %TEMP%\[random].tmp.dat. The particular fields queried are: server state, tag name, type, access, and id. The contents of a sample %TEMP%\OPCServer[random].txt can be found below:

Figure 10: Contents of OPCServer[Random].txt OPC interrogation

While we don’t have a particular case study to prove the attacker’s next steps, it is likely that, after these files are created and saved, they are exfiltrated to a command and control server for further processing.

Conclusion

Part of threat intelligence requires understanding all parts of a particular threat, which is why we took a closer look at the OPC functionality of this Havex variant. We don’t have a case study showcasing why the OPC modules were included, and this is the first “in the wild” sample using OPC scanning. It is possible, however, that the attackers are using this malware as a testing ground for future operations.

Since ICS networks typically don’t have a high level of visibility into the environment, there are several ways to help minimize some of the risks associated with a threat like Havex. First, ICS environments need the ability to perform full packet capture. This gives incident responders and engineers better visibility should an incident occur.

Also, having mature incident response processes for your ICS environment is important. Having security engineers who also understand ICS environments available during an incident is paramount. Finally, having trained professionals consistently perform security checks on ICS environments is helpful. This ensures standard sets of security protocols and best practices are followed within a highly secure environment.

We hope that this information will further educate industrial control systems owners and the security community about how the OPC functionality of this threat works and serves as the foundation for more investigation. Still, lots of questions remain about this component of Havex. What is the attack path? Who is behind it? What is their intention? We’re continuing to track this specific threat and will provide further updates as this new tactic unfolds.

Acknowledgements

We would like to thank Josh Homan for his help and support.

Related MD5s

ba8da708b8784afd36c44bb5f1f436bc

6bfc42f7cb1364ef0bfd749776ac6d38

4102f370aaf46629575daffbd5a0b3c9

References

The Five W’s of Penetration Testing

16 September 2014 at 20:49

Often in discussions with customers and potential customers, questions arise about our penetration testing services, as well as penetration testing in general. In this post, we want to walk through Mandiant's take on the five W's of penetration testing, in hopes of helping those of you who may have some of these same questions. For clarity, we are going to walk through these W's in a non-traditional order.

Why

First and foremost, it's important to be upfront with yourself with why you are having a penetration test performed (or at least considering one). If your organization's primary motivation is compliance and needing to "check the box," then be on the lookout for your people attempting to subtly (or not so subtly) hinder the test in order to earn an "easy pass" by minimizing the number of findings (and therefore the amount of potential remediation work required). Individuals could attempt to hinder a penetration test by placing undue restrictions on the scope of systems assessed, the types of tools that can be used, or the timing of the test.

Even if compliance is a motivating factor, we hope you're able to take advantage of the opportunity penetration testing provides to determine where vulnerabilities lie and make your systems more secure. That is the real value that penetration testing can provide.

Finally, if you are getting a penetration test to comply with requirements imposed on your organization, that will often drive some of the answers to later questions about the type and scope of the test. Keep in mind that standards only dictate minimum requirements, however, so you should also consider additional penetration testing activities beyond the "bare minimum."

Who

There are really two "who" questions to consider, but for now we will just deal with the first: Who are the attackers that concern you? Are they:

  1. Random individuals on the Internet?
  2. Specific threat actors, such as state-sponsored attackers, organized criminals, or hacktivist groups?
  3. An individual or malware that is behind the firewall and on your internal corporate network?
  4. Your own employees ("insider threats")?
  5. Your customers (or attackers who may compromise customers' systems/accounts)?
  6. Your vendors, service providers, and other business partners (or attackers who may have compromised their systems)?

The answer to this will help drive the type of testing to be performed and the types of test user accounts (if any) to provision. The next section will describe some possible penetration test types, but it's helpful to also discuss the types of attackers you would like the penetration test to simulate.

What

What type of penetration test do you want performed? For organizations new to penetration testing, we recommend starting with an external network penetration test, which will assess your Internet-accessible systems in the same way that an attacker anywhere in the world could access them. Beyond that, there are several options:

  1. Internal network penetration test - A penetration test of your internal corporate network. Typically we start these types of assessments with only a network connection on the corporate networks, but a common variant is what we call an "Insider Threat Assessment," where we start with one of your standard workstations and a standard user account.
  2. Web application security assessment - A review of custom web application code for security vulnerabilities such as access control issues, SQL injection, cross-site scripting (XSS) and others. These are best done in a test or development environment to minimize impact to the production environment.
  3. Social engineering - Using deceptive email, phone calls, and/or physical entry to gain access to systems.
  4. Wireless penetration test - A detailed security assessment of wireless network(s) at one or more of your locations. This typically includes a survey of the location looking for unauthorized ("rogue") wireless access points that have been connected to the corporate network and are often insecurely configured.

If budgets were not an issue, you would want to do all of the above, but in reality you will need to prioritize your efforts on what makes sense for your organization. Keep in mind that the best approach may change over time as your organization matures.

Where

In what physical location should the test take place? Many types of penetration testing can be done remotely, but some require the testers to visit your facility. Physical social engineering engagements and wireless assessments clearly need to be performed at one (or more) of your locations.

Some internal penetration tests can be done remotely via a VPN connection, but we recommend conducting them at your location whenever possible. If your internal network has segmentation in place (as we recommend), then you should work with your penetration testing organization to determine the best physical location for the test to be performed. Generally, you'll want to do the internal penetration test from a network segment that has broad access to other portions of the internal network in order to get the best coverage from the test.

Another "Where" to consider for remote testing is where the testers are physically located. When testers are in a different country than you, legal issues can arise with data provisioning and accessibility. Differences in language, culture, and time zones could also make coordination and interpretation of results more difficult.

When

We recommend that most organizations get some sort of security assessment on an annual basis, but that security assessment does not necessarily need to be a penetration test (see Penetration Testing Has Come Of Age - How to Take Your Security Program to the Next Level). Larger organizations may have multiple assessments per year, each focused in a different area.

Within the year, the timing of the penetration test is usually pretty flexible. You will want to make sure that the right people from your organization are available to initiate and manage the test - and to receive results and begin implementing changes. Based on your organization's change control procedures, you may need to work around system freezes or other activities. Testing in December can be difficult due to holidays and vacation, along with year-end closeout activities, especially for organizations in retail, e-commerce, and payment processing.

If you have significant upgrades planned for the systems that will be tested, it is typically best to schedule the test for a month or two after the upgrades are due to be finished. This will allow some time for the inevitable delays in deploying the upgrades, as well as give the upgraded systems (and their administrators) a bit of time to "settle in" and get fully configured before being tested.

Who (part 2)

The other "who" question to consider is who will perform the penetration test? We recommend considering the following when selecting a penetration testing provider:

  1. What are the qualifications of the organization and the individuals who will be performing the test? What differentiates them from other providers?
  2. To what degree does their testing rely on automated vulnerability scanners vs. hands on manual testing?
  3. How well do they understand the threat actors that are relevant to your environment? How well are they able to emulate real world attacks?
  4. What deliverables will you receive from the test? Are they primarily the output of an automated tool? Ask for samples.
  5. Are they unbiased? Do they use penetration tests as a means to sell or resell other products and services?

No doubt, there are other questions that you will want to consider when scoping a penetration test, but we hope that these will help you get started. If you'd like to read more about Mandiant's penetration testing (and other) services, you can do so here. Of course, also feel free to contact us if you'd like to talk about your situation and how Mandiant can best assess your organization's security.
