
BI.ZONE detects destructive attacks by the Key Wolf group

By: BI.ZONE
22 March 2023 at 14:07

A new threat has been uncovered. The Key Wolf hacker group is bombarding Russian users with file-encrypting ransomware. Interestingly enough, the attackers do not demand any ransom. Nor do they provide any options to decrypt the affected files. Our experts were the first to detect the proliferation of the new malware. In this publication, we will take a closer look at the attack and share our view on ways to mitigate it.

Key Wolf uses two malicious files with nearly identical names Информирование зарегистрированных.exe and Информирование зарегистрированных.hta (the words in Russian can be loosely translated as “Information for the registered”). The files are presumably delivered to the victims via email.

The first one is a self-extracting archive containing two files: gUBmQx.exe and LICENSE.

The second contains a script that downloads gUBmQx.exe. The file is fetched from Zippyshare with the help of the Background Intelligent Transfer Service (BITS).

The gUBmQx.exe file contains the Key Group ransomware, which is based on another malicious program, Chaos. Information about the Chaos ransomware family first emerged on a popular underground forum in June 2021, when the user ryukRans wrote that he was working on a ransomware builder and even shared a GitHub link to it (figure 1).

Figure 1. Underground forum post on Chaos Ransomware Builder*
*(translated from Russian) Wanna share the ransomware I’ve been working on lately. What do you think? What feature would you like this ransomware to have?
Download link: https://github.com/Hetropo/ryuk-ransomware
try it out on a virtual machine

Several versions of the builder were released within a year. In June 2022, a so-called partner program was announced. It sought to attract pentesters and organize attacks on corporate networks (figure 2).

Figure 2. Underground forum post on Chaos Ransomware Builder

It is worth noting that Key Group ransomware was made with Chaos Ransomware Builder 4.0.

Ransomware mechanics

Once launched, Key Group performs the following:

  • Checks whether there is a process with the same name as that of the malicious file. If there is, the ransomware is already running, so the newly launched process terminates.
  • If the checkSleep field is true and the launch directory is not %APPDATA%, the executable waits for the number of seconds specified in the sleepTextbox field.
  • If checkAdminPrivilage is true, the malicious file copies itself into %APPDATA% and launches a new process as admin using runas. If the user declines the UAC prompt, the function restarts. If the names coincide and the program was launched from %APPDATA%, the function stops (so there is no infinite recursion during launch).
  • If checkAdminPrivilage is false but checkCopyRoaming is true, the same process occurs as when checkAdminPrivilage is true, but without privilege escalation via runas.
  • If checkStartupFolder is true, a web link to the malicious file is created in %APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup, which means the file will be launched automatically at startup.
  • If checkAdminPrivilage is true, then:
  • If checkdeleteShadowCopies is enabled, the function deletes shadow copies using vssadmin delete shadows /all /quiet & wmic shadowcopy delete.
  • If checkDisableRecoveryMode is enabled, the function turns off the recovery mode using bcdedit /set {default} bootstatuspolicy ignoreallfailures & bcdedit /set {default} recoveryenabled no.
  • If checkdeleteBackupCatalog is enabled, the function deletes all backup copies using wbadmin delete catalog -quiet.
  • If checkSpread is true, the malware copies itself to all disks except C. The file name is taken from the spreadName configuration field (in this case, surprise.exe).
  • Creates a note in %APPDATA%\<droppedMessageTextbox> and opens it. The note contains the following text:
    We are the keygroup777 ransomware we decided to help Ukraine destroy Russian computers, you can help us and transfer money to a bitcoin wallet <redacted>.
  • Installs the image shown below (figure 3) as the desktop theme.
Figure 3. Desktop theme
  • Encrypts each disk (except disk C) and the following folders recursively:
  • %USERPROFILE%\Desktop
  • %USERPROFILE%\Links
  • %USERPROFILE%\Contacts
  • %USERPROFILE%\Documents
  • %USERPROFILE%\Downloads
  • %USERPROFILE%\Pictures
  • %USERPROFILE%\Music
  • %USERPROFILE%\OneDrive
  • %USERPROFILE%\Saved Games
  • %USERPROFILE%\Favourites
  • %USERPROFILE%\Searches
  • %USERPROFILE%\Videos
  • %APPDATA%
  • %PUBLIC%\Documents
  • %PUBLIC%\Pictures
  • %PUBLIC%\Music
  • %PUBLIC%\Videos
  • %PUBLIC%\Desktop

For each file in a directory, the malware checks whether it has one of the targeted extensions and whether it is the ransom note. What happens next depends on the file size:

  • If the file is under 2,117,152 bytes, it is encrypted with AES-256-CBC. The key and IV are generated with Rfc2898DeriveBytes from a password and the salt [1, 2, 3, 4, 5, 6, 7, 8]. The password is 20 bytes long, uses the character set abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890*!=&?&/, and is generated with the standard Random() function. The resulting file starts with the password encrypted with RSA-1024-OAEP and Base64-encoded inside an <EncryptedKey> XML tag, followed by the encrypted file content, also Base64-encoded. (A short sketch of the key derivation follows this list.)
  • If the file is 2,117,152 bytes or more, but no larger than 200,000,000 bytes, the malware generates and appends random bytes equal to one-fourth of the file’s original size, written in the same format as above. Such a file contains a random encrypted password and is unrecoverable even in theory.
  • If the file exceeds 200,000,000 bytes, a random number of bytes between 200,000,000 and 300,000,000 is appended in the same format as in the first case. Such a file also contains a random encrypted password and is unrecoverable even in theory.
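
For analysts reconstructing the file format, here is a minimal sketch of how the AES key and IV could be re-derived from a per-file password. It assumes Chaos-style behaviour: Rfc2898DeriveBytes with the .NET defaults (HMAC-SHA1, 1,000 iterations), the hard-coded salt [1..8], and 32 key bytes followed by 16 IV bytes. These defaults are assumptions, not something confirmed in the sample description above.

```python
import hashlib

# Assumption (not confirmed above): Chaos-style samples derive the AES-256 key and
# CBC IV from the per-file password via Rfc2898DeriveBytes (PBKDF2) with the .NET
# defaults of HMAC-SHA1 and 1,000 iterations, using the hard-coded salt [1..8]:
# 32 bytes are taken for the key, then 16 bytes for the IV.
SALT = bytes([1, 2, 3, 4, 5, 6, 7, 8])

def derive_key_iv(password: str, iterations: int = 1000) -> tuple[bytes, bytes]:
    stream = hashlib.pbkdf2_hmac("sha1", password.encode(), SALT, iterations, dklen=48)
    return stream[:32], stream[32:48]  # AES-256 key, CBC IV

if __name__ == "__main__":
    # Hypothetical password for illustration; in a real file the password sits
    # RSA-1024-OAEP-encrypted and Base64-encoded inside the <EncryptedKey> tag.
    key, iv = derive_key_iv("hJ3kExamplePassword*")
    print(key.hex(), iv.hex())
```

Without the attackers’ RSA private key the per-file password cannot be recovered, which is consistent with the absence of any decryption offer.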

If the directory contains subdirectories, the malware will perform the same operation for each of them.

The program also has an additional feature: it checks whether the clipboard contains a bitcoin address and replaces it with one belonging to the attackers.

The indicators of compromise and detection rules are available to BI.ZONE ThreatVision clients.

Protecting against Key Wolf

The ransomware usually targets its victims through email attachments. One way to prevent a ransomware attack is to use a specialized solution that will stop a malicious message from being delivered to the inbox.

Among these solutions is BI.ZONE CESP. By inspecting every incoming message, it helps companies block illegitimate messages without slowing down the exchange of legitimate email. The service relies on over 600 filtering rules based on machine learning and on statistical, signature, and heuristic analysis.

Masscan with HTTPS support

By: BI.ZONE
21 March 2022 at 14:00

By Konstantin Molodyakov

Masscan is a fast network scanner that is good for scanning a large range of IP addresses and ports. We’ve adapted it to our needs by giving it a little tweak.

The biggest inconvenience of the original version was its inability to collect banners from HTTPS servers. And what is the modern web without HTTPS? You can’t really scan anything. That is what motivated us to modify masscan. As often happens, one small improvement led to another, with some bugs being discovered along the way. Now we want to share our work with the community. All the modifications we discuss here are already available in our repository on GitHub.

What are network scanners for

Network scanners are one of the universal tools in cybersecurity research. We use them to solve such tasks as perimeter analysis, vulnerability scanning, phishing and data leak detection, C&C detection, and host information collection.

How masscan works

Before we talk about the custom version, let’s understand how the original masscan works. If you are already familiar with it, you may be interested in the selection of useful scanner options. Or go straight to the section “Our modifications to masscan.”

The masscan project is small and, in our opinion, written scrupulously and logically. It was nice to see the abundance of comments: even deficiencies and kludges are clearly marked in the code.

Logically, the code can be divided into several parts as follows:

  • implementation of application protocols
  • implementation of the TCP stack
  • packet processing and transmission threads
  • implementation of output formats
  • reading raw packets

Let’s look at some of them in more detail.

Implementation of application protocols

Masscan is based on a modular concept. Thus, it can support any protocol, all you need is to register the appropriate structure and specify its use everywhere you need it (ha-ha):

Here’s a little description of the structure.

The protocol name and the standard port are informative only. The ctrl_flags field is not used anywhere.

The init function initiates the protocol, parse is the method responsible for processing the incoming data feed and generating response messages, and cleanup is the cleanup function for the connection.

The transmit_hello function is used to generate a hello packet if the server itself does not transmit something first, and the data from the hello field is used if the function is not specified.

A self-test function can be specified in the selftest field.

Through this mechanism, it is possible, for example, to write handlers in Lua (the --script option). However, we never got around to checking whether it really works. The thing about masscan is that most of the interesting options are not described in the documentation, and the documentation itself is scattered across different places, partially overlapping. Some of the flags can only be found in the source code (main-conf.c). The --script option is one of them, and we have collected some other useful and interesting options in the section "Useful options of the original masscan."

Implementation of the TCP stack

One of the reasons why masscan is so fast and can handle many simultaneous connections is its native implementation of the TCP stack*. It takes about 1,000 lines of code in the file proto-tcp.c.

* A native TCP stack allows you to bypass OS restrictions, not to use OS resources, not to use heavier OS mechanisms, and to shorten the packet processing path

Packet processing and transmission threads

Masscan is fast and single-threaded. More precisely, it uses two threads per network interface, one of which processes incoming packets. But hardly anyone runs it on more than one interface at a time.

One thread:

  1. reads raw data from the network interface;
  2. runs this data through its own TCP stack and application protocol handlers;
  3. forms the data that needs to be transmitted;
  4. places it in the transmit_queue.

The other thread takes the messages prepared for transmission from transmit_queue and writes them to the network interface (Fig. 1). If the messages sent from the queue do not exceed the limit, SYN packets are generated and sent for the next scanning targets.

Fig. 1. Packet processing and transmission schematic

Implementation of output formats

This part is conceptually similar to the modular implementation of protocols: it also has an OutputType structure that contains the main serialization functions. There is an abundance of possible output formats: a custom binary format, the modern NDJSON, the nasty XML, and the grepable format. There is even an option to save data to Redis. Let us know in the comments if you've tried it :)

Some formats are compatible with (or, as the author of masscan puts it, inspired by) similar utilities, such as nmap and unicornscan.

Reading raw packets

Masscan can work with the network adapter through the PCAP or PF_RING libraries, and can also read data from a PCAP dump. The rawsock.c file contains several functions that abstract the main code from the specific interfaces.

To select PFRING, you have to use the --pfring parameter, and to enable reading from the dump, you have to put the file prefix on the adapter name.

Useful options of the original masscan

Let’s take a look at some interesting and useful options of the original masscan that are rarely talked about.

Options

  • --nmap, --help
    Description: Help
    Comment: Even combined, these options give very little useful information. The documentation also contains incomplete information and is scattered in different files: README.md, man, FAQ. There’s also a small HOWTO on how to use the scanner together with AFL (american fuzzy lop). If you want to know about all the options, you can find the full list of them only in the source code (main-conf.c)
  • --output-format ndjson, -oD, --ndjson-status
    Description: NDJSON support
    Comment: Gigabytes of line-by-line NDJSON files are much nicer to handle than JSON, and the status output in NDJSON format is useful for writing utilities that monitor masscan performance (a small parsing sketch follows this list)
  • --output-format redis
    Description: Ability to save outputs directly to Redis
    Comment: Well, why not?:) If you haven’t worked with this tool, read about it here
  • --range fe80::/67
    Description: IPv6 support
    Comment: Everything’s clear here, but it would be interesting to read about real use cases in the comments. I can think of scanning a local network or only a small range of some particular country obtained through BGP
  • --http-*
    Description: HTTP request customization
    Comment: When creating an HTTP request, you can change any part of it to suit your needs: method, URI, version, headers, and/or body
  • --hello-[http, ssl, smbv1]
    Description: Scanning protocols on non-standard ports
    Comment: If masscan hasn’t received a hello packet from the target, its default setting is to send the request first, choosing a protocol based on the target’s port. But sometimes you might want to scan HTTP on some non-standard port
  • --resume
    Description: Pause
    Comment: Masscan knows how to delicately stop and resume where it paused. With Ctrl+C (SIGINT) masscan terminates, saving state and startup parameters, and with --resume it reads that data and continues operation
  • --rotate-size
    Description: Rotation of the output file
    Comment: The output can contain a lot of data, and this parameter allows you to specify the maximum file size at which the output will start to be written to the next file
  • --shard
    Description: Horizontal scaling
    Comment: Masscan pseudorandomly selects targets from the scanned range. If you want to run masscan on multiple machines within the same range, you can use this parameter to achieve the same random distribution even between machines
  • --top-ports
    Description: Scanning of N popular ports (array top_tcp_ports)
    Comment: This parameter came from nmap
  • --script
    Description: Lua scripts
    Comment: I have doubts that it works, but the possibility itself is interesting. Is there anyone who uses it? Let me know if you have any interesting examples
  • --vuln [heartbleed, ticketbleed, poodle, ntp-monlist]
    Description: Search for certain known vulnerabilities
    Comment: We cannot vouch for its correctness and efficiency: this vulnerability-detection mechanism is a kludge scattered throughout the code, it conflicts with many other options, and we have not needed it in real-world tasks
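
Since several of the options above deal with NDJSON output, here is a minimal sketch of consuming it from Python. The field names (ip, ports, service, banner) are assumptions based on masscan's usual JSON schema, so verify them against the output of your own build:

```python
import json

# Hypothetical reader for masscan NDJSON output (--output-format ndjson).
# Field names (ip, ports, service, banner) follow the usual masscan JSON schema,
# but verify them against the output of your own build.
def read_ndjson(path: str):
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip().rstrip(",")
            if not line or line in ("[", "]"):
                continue
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip status lines and partially written records
            for port in record.get("ports", []):
                banner = (port.get("service") or {}).get("banner", "")
                yield record.get("ip"), port.get("port"), banner

if __name__ == "__main__":
    for ip, port, banner in read_ndjson("scan.ndjson"):
        print(f"{ip}:{port} {banner[:60]}")
```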

Just to remind you of an important point that everyone stumbles upon: masscan probably won’t work if you simply run it to collect banners. The documentation does say this, but who reads it, right? Since masscan uses its own network stack, the OS knows nothing about the connections it creates and is rather surprised to receive a (SYN, ACK) packet from somewhere on the network in response to a SYN request from the scanner. Depending on the type and settings of the OS and firewall, the OS then sends an ICMP or RST packet, which ruins the results. The usual workaround from the masscan documentation is to pin the scanner to a dedicated source port with --source-port and block that port on the OS firewall so the system never answers.

Our modifications to masscan

We’ve added HTTPS support

The Internet is quite the fortress these days: even the most backward scammers have already given up on unencrypted HTTP. Working without HTTPS support is therefore rather inconvenient, since this feature makes investigations, such as searching for C&C servers and phishing, much easier. There are other tools besides masscan, but they are slower. We wanted a universal tool that would cover HTTPS and still be fast.

The first thing to do was to implement full-fledged SSL. The original masscan can only send a predefined hello packet and then fetch and process the server certificate. Our version can establish and maintain an SSL connection and analyze the contents of nested protocols, which means it can collect HTTP banners from HTTPS servers.

Here’s how we achieved that. We added a new application-layer protocol to the source code and used the standard solution, OpenSSL, to implement SSL. Here we needed to do some fine-tuning, and the structure describing the application-layer protocol in the custom scanner looks like this:

We added handlers for protocol deinitialization and connection initiation, and expanded the set of handler parameters. As a result, it became possible to handle nested protocols. We also implemented switching the application protocol handler mid-connection. This is needed when the current protocol cannot process the data, or when such a switch is built into the protocol itself, as with STARTTLS.

Then we had some problems with performance and packet loss. SSL is heavy on the CPU. We had the option to try something faster than OpenSSL, but we went in the direction of processing incoming packets in several threads within one network interface. After implementing this, the packet processing pipeline looks like this:

Fig. 2. Updated packet processing and transmission schematic

The th_recv_read thread reads data from the network interface regardless of the data processing speed. The q_recv_pb queue helps detect cases when data arrives too fast and inbound packets cannot be processed in time. The th_recv_sched thread dispatches messages to the th_recv_hdl_* threads based on hashes of the outbound and inbound IP addresses and ports, so that the same connection always lands in the same handler. The related options are --num-handle-threads (the number of handler threads) and --tranquility (automatic reduction of packet transmission speed when inbound packets cannot be handled fast enough).

HTTPS support is enabled with the --dynamic-ssl parameter, while --output-filename-ssl-keys can be used to save the TLS master keys.

You can also notice a small cosmetic improvement — namely, the names of the threads. In our version, it became clear which threads consume resources:

Before
After

We’ve improved code quality

Masscan was found to have many strange things and errors. For example, the conversion of time to ticks** looked as follows:

** A time unit that offers sufficient precision without taking up too much space

Network TCP connections were often handled incorrectly, resulting in broken connections and unnecessary repeat transmissions:

Fig. 3. Example of incorrect handling of network TCP connections

We also discovered errors in memory handling, including memory leaks. We managed to fix many of them, but not all. For example, when scanning /0:80 (the whole IPv4 range on port 80), we still see a leak of several 2-byte ranges.

These errors were detected thanks to our colleagues, who meticulously used our builds, as well as static analyzers (GCC, Clang, and VS) and UB and memory sanitizers. A separate thank-you goes to PVS-Studio; those guys are unparalleled in quality and convenience.

We’ve added a build for different OSs

To consolidate the outputs, we set up builds and tests for Windows, Linux, and macOS using GitHub Actions.

The build pipeline looks like this (Fig. 4):

  • format check
  • Clang static analyzer check
  • debug build with sanitizers and a run of the built-in tests
  • build and upload of data to the SonarCloud and CodeQL services
Fig. 4. Assembly pipeline

You can download compiled binaries from the build or release artifacts:

Fig. 5. Release artifacts

We’ve added a few more features

Here are the rest of the less significant things that were introduced in our version:

  • --regex (--regex-only-banners) filters messages at the TCP data level. A regular expression is applied to the contents of each TCP packet; if it matches, the connection information appears in the output.
  • --dynamic-set-host inserts the Host header into an HTTP request, using the IP address of the scanned target as the value.
  • Output of internal signature hits for masscan protocols in the results.
  • An option to specify URIs in HTTP requests. We removed it later because the author of the original masscan added the same functionality. This is part of the --http-* options family.

Our New Log4j Scanner to Combat Log4Shell

By: BI.ZONE
14 December 2021 at 14:55

Log4Shell is a critical vulnerability in the Log4j logging library, which is used by many Java web applications.

In protecting against the exploit of Log4Shell, you need to know what applications are vulnerable to this attack, which is a rather difficult task. To make things easier, we have developed a special scanner, which is now available on GitHub.

The scanner will help find applications that are using the vulnerable Log4j library.

The exploitation of this vulnerability leads to remote code execution (RCE). The exploit has already been published, and all Log4j versions up to and including 2.15.0 can be affected.

Problem. Log4Shell poses a serious risk and requires immediate understanding of how to protect against any attacks exploiting this vulnerability. However, there is no easy way to find out which applications need to be secured.

  • On the web, you can find lists of affected software. But what if services developed within your own organization are using Log4j?
  • Scanning external service hosts will not provide a clear picture. Log4Shell can be triggered by whatever ends up being logged, be it a User-Agent header or a form field submitted at any point after authentication. There is no guarantee that a scanner will detect the vulnerable library, yet adversaries could easily stumble upon it.

BI.ZONE solution. We have developed our own scanner based on YARA rules; it is now available on GitHub. It scans the memory of Java processes for Log4j signatures and runs directly on the host rather than over the network.

The scan output is a list of hosts running applications with Log4j, so you can then check whether the library version in use is vulnerable.
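
For illustration only, a minimal sketch of the general approach (scanning Java process memory with a YARA rule) might look as follows. This is not the BI.ZONE scanner itself: it assumes the yara-python and psutil packages, and the rule below is a simplified placeholder rather than the project's actual signatures.

```python
import psutil  # assumption: psutil is installed for process enumeration
import yara    # assumption: yara-python is installed and supports pid scanning

# Simplified placeholder rule; the real scanner ships its own, more precise signatures.
RULES = yara.compile(source=r"""
rule log4j_core_in_memory {
    strings:
        $core = "org/apache/logging/log4j/core" ascii wide
        $jndi = "JndiLookup" ascii wide
    condition:
        all of them
}
""")

def scan_java_processes() -> None:
    for proc in psutil.process_iter(["pid", "name"]):
        if "java" not in (proc.info["name"] or "").lower():
            continue
        try:
            if RULES.match(pid=proc.info["pid"]):
                print(f"Log4j signatures found in PID {proc.info['pid']} ({proc.info['name']})")
        except yara.Error:
            pass  # not enough privileges, or the process has already exited

if __name__ == "__main__":
    scan_java_processes()
```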

If it does turn out to be vulnerable, the BI.ZONE WAF cloud service will help you protect against external attacks using Log4j. It is not going to eliminate the need to install patches, but it will mitigate the risk of successful Log4Shell exploitation.

A tale of Business Email Compromise

By: BI.ZONE
12 October 2021 at 13:22

We are seeing a surge in Business Email Compromise (BEC) attacks. BEC attacks are not new or uncommon, but this wave has caught our attention because of its scale.

Many affected companies have been contacting us since June, and all the attacks share several key patterns.

This article explains how the attackers behind this BEC campaign operate and whether it is possible to mitigate the attack.

BEC attacks

A little digression for those who have never heard of Business Email Compromise attacks.

A BEC attack starts with compromising a corporate email: attackers infiltrate the email accounts of top management, finance department employees or others along the invoice approval chain.

After examining the email correspondence, the infiltrators proceed to impersonate the owner of a compromised account. A CEO’s email makes it possible to ask the accounting department for an urgent money transfer; likewise, a sales manager’s email provides the opportunity to manipulate a customer’s invoice. Another objective of the attack may be to obtain confidential information: the victims feel comfortable sharing it because they believe they are talking to a person they trust.

Notably, adversaries sometimes avoid using compromised email accounts to remain undetected. Instead, they will register phishing domains which resemble the original domain and communicate from there.

Business Email Compromise is associated with significant financial and reputational risks, and affects all parties involved in the interaction.

Chapter 1, where we are asked to conduct an investigation

We were approached by companies that had lost payments for their goods and services to invoice fraud. (We will refer to these companies as victims.)

The victims would communicate with their partners by email. When it was time to issue invoices, the partner company somehow received the wrong details. The message appeared to be genuine and contained the entire correspondence history, but the invoice was incorrect. Eventually, the money would end up in the criminals’ accounts.

Since the invoice was tampered with, it is easy to assume a classic man-in-the-middle attack: the attackers intercepted the messages and modified the content to their benefit. But this raises a lot of questions:

  • How were the attackers able to jump into an email conversation they had not been a part of at an arbitrary point in time?
  • Why were they able to see the whole message history?
  • Was it the work of an insider or was it an external adversary?

We started looking into it.

Because of NDAs and out of consideration for the victims, we will not give you all the details of the investigation. But we will try to make our case as complete and comprehensible as possible.

Chapter 2, where we test our initial assumptions

We examined several compromised email threads and saw that as soon as payment transfers came up in the dialogue, a third party would get involved. No one took notice because the emails were coming from domains that resembled familiar company names but were in fact phishing. Our team managed to spot this suspicious activity, but then again, that was the whole purpose of going through the records. An average company employee would not grow suspicious at seeing airbuus.com instead of the conventional airbus.com in the middle of a thread, especially with the entire message history trailing below.

Having detected email address spoofing, we suspected that we were dealing with a BEC attack.

We extracted all the phishing domains we could and set out to investigate the possible infrastructure used by the attackers. It turned out that the domains we found had identical DNS records:

1. MX record specifies an email processing server. The domains we detected are hosted on mailhostbox.com. An example of an MX record:

<phishing_domain>. 5 IN MX 100 us2.mx1.mailhostbox.com.
<phishing_domain>. 5 IN MX 100 us2.mx3.mailhostbox.com.
<phishing_domain>. 5 IN MX 100 us2.mx2.mailhostbox.com.

2. NS record indicates which servers a domain is hosted on. The domains we detected are hosted on monovm.com. An example of an NS record:

<phishing_domain>. 5 IN NS monovm.earth.orderbox-dns.com.
<phishing_domain>. 5 IN NS monovm.mercury.orderbox-dns.com.
<phishing_domain>. 5 IN NS monovm.venus.orderbox-dns.com.
<phishing_domain>. 5 IN NS monovm.mars.orderbox-dns.com.

3. TXT record contains an SPF record. An example of a TXT record:

<phishing_domain>. 5 IN TXT "v=spf1 redirect=_spf.mailhostbox.com"

4. SOA record is the initial record for the zone: it indicates the primary name server for the domain and contains the email address of the person responsible for the zone. An example of a SOA record:

<phishing_domain>. 5 IN SOA monovm.mars.orderbox-dns.com. <fraud_email>. 2021042001 7200 7200 172800 38400

We use <phishing_domain> to hide the phishing domain; similarly, <fraud_email> conceals the email address the attackers used to register the phishing domain.
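
As an illustration of how such a fingerprint can be used for hunting, the sketch below checks whether a suspect domain combines mailhostbox.com MX records with orderbox-dns.com NS records. It assumes the dnspython package; the domain name is a placeholder.

```python
import dns.exception
import dns.resolver  # assumption: the dnspython package is installed

# Checks whether a suspect domain matches the infrastructure fingerprint described
# above: MX records pointing to mailhostbox.com and NS records at orderbox-dns.com.
def matches_campaign_fingerprint(domain: str) -> bool:
    def records(rdtype: str) -> list[str]:
        try:
            return [r.to_text().lower() for r in dns.resolver.resolve(domain, rdtype)]
        except dns.exception.DNSException:
            return []

    mx, ns = records("MX"), records("NS")
    return any("mailhostbox.com" in r for r in mx) and any("orderbox-dns.com" in r for r in ns)

if __name__ == "__main__":
    print(matches_campaign_fingerprint("example.com"))  # placeholder suspect domain
```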

Chapter 3, where we assess the scale of the campaign

We got curious about the Mailhostbox + MonoVM hosting combination and decided to look for other domains that could be involved in the campaign. For this purpose, we used our internal databases, which include all the domains published via www.icann.org, and sampled those with the matching MX, NS, and TXT records.

The results were impressive: at the time of analysis, we had 47,532 domains similar to those found in the incident. A total of 5,296 email addresses were used to register them, over half of which belonged to popular email services: Gmail.com, Mail.ru, Yahoo.com, ProtonMail.com, Yandex.ru, Outlook.com, and Hotmail.com. One particular email address ([email protected]) had 1,403 domains registered to it.

It’s difficult to say whether each one of those 50,000 or so domains was created for a BEC attack. However, we suspect that the vast majority of them were intended precisely for that purpose, given their obvious likeness to famous brand domains:

  • the-boeings[.]com
  • airbuus[.]com
  • airbuxs[.]com
  • bmw-my[.]com
  • uksamsung[.]com
  • a-adidas[.]com
  • giorgioarmani-hk[.]com

Mass registration of such domains began in the second half of 2020: more than 46,000 domains have been registered since July 2020. The registration rate peaked in the summer of this year as more than 5,000 domains were registered in June alone:

Chapter 4, where we map out what happened

Using the email field from the SOA record as an indicator for a particular campaign, we compiled a list of domains registered by the attackers for each of the victims who reached out to us.

We got two types of domains:

  • Some were already familiar, we had come across them in the victims’ correspondence with their partners.
  • Others were new and seemed to bear no resemblance to the domains of the victims or the partners. In the context of a BEC attack, we assumed that these domains had been used to compromise the emails. This was confirmed when we found that some of these addresses had been used to deliver phishing emails to the victims.

This is how we established the vector of intrusion.

The attackers approached potential victims with an offer to do business, be it a long-term or short-term contract. The request was sent via a feedback form on the company’s website or to publicly available group addresses ([email protected], etc., where example.com stands in for the name of a real organisation). The plan was to have an employee respond to the request from their corporate email account.

After getting a response from an employee, the attackers sent them a phishing email. The body of the email contained a phishing link that supposedly led to a page for downloading some documents: an agreement, a data sheet, a purchase order, etc. To download the files, the employee had to enter their email password; the email warned about this requirement in advance, citing confidentiality. After entering the password, no documents were downloaded, of course, but the attackers now had the credentials to access the email account.

The phishing links had a rather specific format: hxxps://pokajca[.]web[.]app/?x1=<victim_email>. First, the x1 parameter in the URL carried the email address of the phishing recipient (we have masked it as <victim_email>); to appear more credible, the page asking for the email password displayed the recipient’s address when opened. Second, the links were created on serverless hosting services like netlify.app, cloudflare.com, or similar. Finally, the phishing pages had almost no static content, with the content generated by JS scripts, which made such pages much harder for spam filters to detect.

If the response to the original request came not from an employee’s individual address but from a publicly available one, the attackers would still use it to send phishing emails. This is confirmed by phishing links from our internal databases containing addresses like [email protected] or [email protected] in the x1 parameter.

In total, we detected more than 450 phishing links of this kind in our internal databases. They were disguised as Dropbox, Microsoft SharePoint, OneDrive, Adobe PDF Online and other file sharing resources:

Chapter 5, where we detail the geography of the campaign

We extracted user email addresses from all the links we found and matched them with company names.
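
For illustration, extracting the recipient address from links of this format is straightforward once the URL is refanged; the snippet below is a hypothetical helper, not the tooling we used.

```python
from urllib.parse import parse_qs, urlparse

# Hypothetical helper: pulls the recipient address out of defanged links of the form
# hxxps://<host>/?x1=<victim_email>.
def extract_victim_email(link: str) -> str | None:
    refanged = link.replace("hxxp", "http").replace("[.]", ".")
    values = parse_qs(urlparse(refanged).query).get("x1", [])
    return values[0] if values else None

if __name__ == "__main__":
    sample = "hxxps://pokajca[.]web[.]app/?x1=victim@example.com"  # illustrative value
    print(extract_victim_email(sample))
```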

Our databases contain only a fraction of all phishing links, so we were far from having an exhaustive list of potential victims. But even this data suggests that a wave of attacks is sweeping the globe.

Analysis shows that at least 200 companies from various countries were targeted by this BEC campaign. The potential victims were manufacturers, distributors, retailers and suppliers of various goods and services. In simple terms, this campaign targeted everyone who signs contracts with customers and partners for whatever products or services.

The campaign spans every continent except Antarctica. The majority of potential victims are organisations from Europe, Asia, and North America (55.6%, 24.0%, and 14.8% respectively):

Chapter 6, where we answer any remaining questions

When we received the first account spoofing reports, we wondered how the attackers managed to enter the dialogue at exactly the right moment and retain the entire correspondence. Having established the infiltration vector, we successfully solved these mysteries.

After a successful phishing incident that gave the attackers access to the victim’s email, the BEC attack evolved along two different paths.

The first option required good coordination:

  1. The attackers read the victim’s correspondence with their customers and partners.
  2. After noticing that the conversation was slowly getting to payment issues, the cybercriminals forwarded the relevant message, with the entire history, to Phishing address 1 (P1), which resembled the victim’s address.
  3. From P1, the criminals would write to the victim’s partner.
  4. The partner would reply to P1.
  5. The attackers would then set in motion Phishing address 2 (P2), now similar to the partner’s address. An email that the partner sent to P1 was forwarded to the victim using P2.
  6. The victim simply responded to P2.

Finally, the cycle was complete: the victim wrote to P2, the partner wrote to P1, and the attackers forwarded their emails to each other. The corporate habit of replying to all only increased the chances of a successful attack. Having become facilitators of sorts, the attackers could easily substitute the invoice in a forwarded email at the right moment.

The second, more advanced, option involved setting up email forwarding rules. Microsoft recently wrote about a similar BEC campaign: if words like ‘payment’ and ‘invoice’ appeared in the email body, the email was not delivered to the address specified by the victim but was sent to the attackers instead.

The attack then proceeded along the route described above.

The interaction process in both cases looked like this:

Conclusion

BEC attacks are insidious. The only way to protect yourself from the attack described in this article is to make sure the phishing campaign does not succeed, and this is where email spam filters and employee training come in handy. Two other practices can be very effective for large companies: registering domains that look like the official one before the criminals do, and monitoring the appearance of lookalike domains so that illegitimate ones can be blocked via registrars and hosting providers.
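
As a rough illustration of the second practice, the sketch below generates simple lookalike candidates for a brand domain and checks whether they resolve. It is a toy: production monitoring should rely on dedicated tooling (e.g., dnstwist) and registration data rather than DNS resolution alone.

```python
import socket
import string

# Toy generator of lookalike domains: doubled letters and single-character
# substitutions only. Real monitoring should use dedicated tooling such as dnstwist.
def lookalikes(domain: str) -> list[str]:
    name, _, suffix = domain.partition(".")
    candidates = set()
    for i, ch in enumerate(name):
        candidates.add(f"{name[:i]}{ch}{name[i:]}.{suffix}")  # doubled letter, airbus -> airbuus
        for repl in string.ascii_lowercase:
            if repl != ch:
                candidates.add(f"{name[:i]}{repl}{name[i + 1:]}.{suffix}")  # substitution, airbus -> airbux
    return sorted(candidates - {domain})

def resolves(domain: str) -> bool:
    try:
        socket.gethostbyname(domain)
        return True
    except OSError:
        return False

if __name__ == "__main__":
    for candidate in lookalikes("example.com"):  # replace with your own brand domain
        if resolves(candidate):
            print("possible lookalike registered:", candidate)
```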

If an email compromise does occur, it is virtually impossible to stop the BEC attack: the parties to the correspondence have usually developed a natural trust in each other, and upon receiving a seemingly normal email (with the whole thread!), the victims would not even think to check the sender’s address. To make matters worse, phishing domains often rely on homoglyphs, making it difficult even for an experienced security professional to spot the stranger in the conversation.

From pentest to APT attack: cybercriminal group FIN7 disguises its malware as an ethical hacker’s toolkit

By: BI.ZONE
13 May 2021 at 08:37

The article was prepared by BI.ZONE Cyber Threats Research Team

This is not the first time we have come across a cybercriminal group that pretends to be a legitimate organisation and disguises its malware as a security analysis tool. These groups hire employees who are not even aware that they are working with real malware or that their employer is a real criminal group.

One such group is the infamous FIN7 known for its APT attacks on various organisations around the globe. Recently they developed Lizar (formerly known as Tirion), a toolkit for reconnaissance and getting a foothold inside infected systems. Disguised as a legitimate cybersecurity company, the group distributes Lizar as a pentesting tool for Windows networks. This caught our attention and we did some research, the results of which we will share in this article.

A few words about FIN7

The APT group FIN7 has presumably been around since 2013, but we will focus on its activities from 2020 onwards: that is when the group turned its attention to ransomware attacks.

FIN7 compiled a list of victims by filtering companies by revenue using the Zoominfo service. In 2020–2021, we saw attacks on an IT company headquartered in Germany, a key financial institution in Panama, a gambling establishment, several educational institutions, and pharmaceutical companies in the US.

For quite some time, FIN7 members have been using the Carbanak backdoor toolkit for reconnaissance and for gaining a foothold on infected systems; you can read about it in the series on FireEye’s blog (posts: 1, 2, 3, 4). We have repeatedly observed the attackers attempting to masquerade as Check Point Software Technologies and Forcepoint.

An example can be seen in the interface of Carbanak backdoor version 3.7.4, which references Check Point Software Technologies (Fig. 1).

Figure 1. Carbanak backdoor version 3.7.4 interface

A new malware package, Lizar, was recently released by the criminals. A report on Lizar version 1.6.4 had previously been published online, so we decided to investigate the functionality of the newer version, 2.0.4 (compile date and time: Fri Jan 29 03:27:43 2021), which we discovered in February 2021.

Lizar toolkit architecture

The Lizar toolkit is structurally similar to Carbanak. The components we found are listed in Table 1.

Table 1. Lizar components and their purpose

Lizar loader and Lizar plugins run on an infected system and can logically be combined into the Lizar bot component.

Figure 2 shows how Lizar’s tools function and interact.

Figure 2. Schematic of the Lizar toolkit operation

Lizar client

The Lizar client consists of the following components:

  • client.ini.xml — XML configuration file;
  • client.exe — client's main executable;
  • libwebp_x64.dll — 64-bit version of libwebp library;
  • libwebp_x86.dll — 32-bit version of libwebp library;
  • keys — a directory with the keys for encrypting traffic between the client and the server;
  • plugins/extra — plugin directory (in practice only some plugins are present in this directory, the rest are located on the server);
  • rat — directory with the public key from Carbanak (this component has been added in the latest version of Lizar).

Below is the content and description of the configuration file (Table 2).

Table 2. Configuration file structure: elements and their descriptions

Table 3 shows the characteristics of the discovered client.exe file.

Table 3. Characteristics of client.exe

Figure 3 is a screenshot of the interface of the latest client version we discovered.

Figure 3. Lizar client version 2.0.4 interface

The client supports several bot commands. The way they look in the GUI can be seen in Fig. 4.

Figure 4. List of commands supported by the Lizar client

This is what each of the commands does:

  • Info — retrieve information about the system. The plugin for this command is located on the server. When a result is received from the plugin, the information is logged in the Info column.
  • Kill — stop plugin.
  • Period — change response frequency (Fig. 5).
Figure 5. Period command in the Lizar client GUI
  • Screenshot — take a screenshot (Fig. 6). The plugin for this command is located on the server. Once a screenshot is taken, it will be displayed in a separate window.
Figure 6. Screenshot command in the Lizar client GUI
  • List Processes — get a list of processes (Fig. 7). The plugin for this command is located on the server. If the plugin is successful, the list of processes will appear in a separate window.
Figure 7. List Processes command in the Lizar client GUI
  • Command Line — get CMD on the infected system. The plugin for this command is located on the server. If the plugin executes the command successfully, the result will appear in a separate window.
  • Executer — launch an additional module (Fig. 8).
Figure 8. Executer command in the Lizar client GUI
  • Jump to — migrate the loader to another process. The plugin for this command is located on the server. The command parameters are passed through the client.ini.xml file.
  • New session — create another loader session (run a copy of the loader on the infected system).
  • Mimikatz — run Mimikatz.
  • Grabber — run one of the plugins that collect passwords in browsers and OS. The Grabber tab has two buttons: Passwords + Screens and RDP (Fig. 9). Activating either of them sends a command to start the corresponding plugin.
Figure 9. Grabber command in the Lizar client GUI
  • Network analysis — run one of the plugins to retrieve Active Directory and network information (Fig. 10).
Figure 10. Network analysis command in the Lizar client GUI
  • Rat — run Carbanak (RAT). The IP address and port of the server and admin panel are set via the client.ini.xml configuration file (Fig. 11).
Figure 11. Rat command in the Lizar client GUI

We skipped the Company computers command in the general list – it does not have a handler yet, so we cannot determine exactly what it does.

Lizar server

The Lizar server application, similar to the Lizar client, is written using the .NET Framework. However, unlike the client, the server runs on a remote Linux host.

Date and time of the last detected server version compilation: Fri Feb 19 16:16:25 2021.

The application is run using the Wine utility with the pre-installed Wine Mono (wine-mono-5.0.0-x86.msi).

The server application directory includes the following components:

  • client/keys — directory with encryption keys for proper communication with the client;
  • loader/keys — directory with encryption keys for proper communication with the loader;
  • logs — directory with server logs (client-traffic, error, info);
  • plugins — plugin directory;
  • ThirdScripts — directory with the ps2x.py script and the ps2p.py helper module. The ps2x.py script is designed to execute files on the remote host and is implemented using the Impacket project. Command templates for this script are displayed in the client application when the appropriate option is selected.

Full list of arguments supported by the script.

  • x64 — directory containing the SQLite.interop.dll auxiliary library file (64-bit version).
  • x86 — directory containing the SQLite.interop.dll auxiliary library file (32-bit version).
  • AV.lst — a CSV file that maps the names of processes associated with antivirus products to the corresponding product names and descriptions.

Several lines from the AV.lst file:

  • data.db — a database file containing information on all loaders (this information is loaded into the client application).
  • server.exe — server application.
  • server.ini.xml — server application configuration file.

Example contents of the configuration file:

  • System.Data.SQLite.dll — auxiliary library file.

Communication between client and server

Before being sent to the server, the data is encrypted with a session key 5 to 15 bytes long and then with the key specified in the configuration (31 bytes). The encryption function is shown below.

If the key specified in the configuration (31 bytes) does not match the key on the server, no data is sent from the server.

To verify the key on the side of the server, the client sends a checksum of the key, calculated according to the following algorithm:

Data received from the server is decrypted with a session key 5 to 15 bytes long and then with the same pair of session and configuration keys. The decryption function:

The client and the server exchange data in binary format. The decrypted data is a list of bots (Fig. 12).

Figure 12. Example of decrypted data transmitted from server to client

Lizar loader

The Lizar loader is designed to execute commands by running plugins, and to run additional modules. It runs on the infected computer.

As we have already mentioned, Lizar loader and Lizar plugins run on the infected system and can logically be combined into the Lizar bot component. The bot’s modular architecture makes the tool scalable and allows for independent development of all components.

We’ve detected three kinds of bots: DLLs, EXEs and PowerShell scripts, which execute a DLL in the address space of the PowerShell process.

The pseudocode of the main loader function, along with the reconstructed function structure, is shown in Fig. 13.

Figure 13. Loader’s main function pseudocode

The following are some of the actions the x_Init function performs:

1. Generate a random key, g_ConfigKey31, using the SystemFunction036 (RtlGenRandom) function. This key is used to encrypt and decrypt the configuration data.

2. Obtain system information and calculate the checksum from the information received (Fig. 14).

Figure 14. Pseudocode for retrieving system information and calculating its checksum

3. Retrieve the current process ID (the checksum and PID of the loader process are displayed in the Id column in the client application).

4. Calculate the checksum from the previously received checksum and the current process ID (labelled g_BotId in Figure 13).

5. Decrypt the configuration data: the list of IP addresses and the list of ports for each server. The configuration data is decrypted with the 31-byte g_LoaderKey using XOR; after decryption, it is re-encrypted with g_ConfigKey31, also using XOR. The g_LoaderKey is also used to encrypt data sent to the server and to decrypt data received from it.

6. Initialise global variables and critical sections for some variables. This is needed to access data from different threads.

7. Initialise executable memory for plugin execution.

8. Launch five threads that process the queue of messages from the server. This mechanism is implemented using the PostQueuedCompletionStatus and GetQueuedCompletionStatus functions. Data received from the server is decrypted and sent to the handler (Fig. 15).

Figure 15. Pseudocode algorithm for decrypting data received from the server and sending it for processing

The handler accepts data using the GetQueuedCompletionStatus function.

The vServerData->ServerData variable contains the plugin body after decryption (see Fig. 15 again). The pseudocode of the algorithm for decrypting data received from the server is shown in Fig. 16.

Figure 16. Pseudocode of the algorithm for decrypting data received from the server

Before being sent to the server, the data structure is assembled as shown in Fig. 17.

Figure 17. Pseudocode of the function that generates the structure sent to the server

Plugins from the plugins directory

The plugins in the plugins directory are sent from the server to the loader and are executed by the loader when a certain action is performed in the Lizar client application.

The six stages of the plugins’ lifecycle:

  1. The user selects a command in the Lizar client application interface.
  2. The Lizar server receives the information about the selected command.
  3. Depending on the command and loader bitness, the server finds a suitable plugin from the plugins directory, then sends the loader a request containing the command and the body of the plugin (e.g., Screenshot{bitness}.dll).
  4. The loader executes the plugin and stores the result of the plugin’s execution in a specially allocated area of memory on the heap.
  5. The server retrieves the results of plugin execution and sends them on to the client.
  6. The client application displays the plugin results.

A full list of plugins (32-bit and 64-bit DLLs) in the plugins directory:

  • CommandLine32.dll
  • CommandLine64.dll
  • Executer32.dll
  • Executer64.dll
  • Grabber32.dll
  • Grabber64.dll
  • Info32.dll
  • Info64.dll
  • Jumper32.dll
  • Jumper64.dll
  • ListProcess32.dll
  • ListProcess64.dll
  • mimikatz32.dll
  • mimikatz64.dll
  • NetSession32.dll
  • NetSession64.dll
  • rat32.dll
  • rat64.dll
  • Screenshot32.dll
  • Screenshot64.dll

CommandLine32.dll/CommandLine64.dll

The plugin is designed to give attackers access to the command line interface on an infected system.

Sending commands to the cmd.exe process and receiving the result of the commands is implemented via pipes (Fig. 18).

Figure 18. CommandLine32.dll/CommandLine64.dll main function pseudocode

Executer32.dll/Executer64.dll

Executer32.dll/Executer64.dll launches additional components specified in the Lizar client application interface.

The plugin can run the following components:

  • EXE file from the %TEMP% directory;
  • PowerShell script from the %TEMP% directory, which is run using the following command: {path to powershell.exe} -ex bypass -noprof -nolog -nonint -f {path to the PowerShell script};
  • DLL in memory;
  • shellcode.

The plugin code that runs shellcode is shown in Fig. 19.

Figure 19. Executer32.dll/Executer64.dll code running shellcode

Note that the plugin file Executer64.dll contains the path to the PDB: M:\paal\Lizar\bin\Release\Plugins\Executer64.pdb.

Grabber32.dll/Grabber64.dll

Contrary to its name, this plugin has no grabber functionality and is a typical PE loader.

Although the attackers call it a grabber, the loaded PE file actually performs the functions of other tool types, such as a stealer.

Both versions of the plugin are used to load the client-side grabbers PswRdInfo64 and PswInfoGrabber64.

Info32.dll/Info64.dll

The plugin is designed to retrieve information about the infected system.

The plugin is executed by using the Info command in the Lizar client application. A data structure containing the OS version, user name and computer name is sent to the server.

On the server side, the received structure is converted to a special string (Fig. 20).

Figure 20. Pseudocode snippet responsible for conversion of the received structure into a special string on the server

Jumper32.dll/Jumper64.dll

The plugin is designed to migrate the loader to the address space of another process. Injection parameters are set in the Lizar client configuration file. It should be noted that this plugin can be used not only to inject the loader, but also to execute other PE files in the address space of the specified process.

Figure 21 shows the main function of the plugin.

Figure 21. Jumper32.dll/Jumper64.dll main function pseudocode

From the pseudocode above we see that the loader can migrate to the address space of the specified process in three ways:

  • by performing an injection into the process with a certain PID;
  • by creating a process with a certain name and performing an injection into it;
  • by creating a process with the same name as the current one and performing an injection into it.

Let’s take a closer look at each method.

Algorithm for injection by process ID

  1. OpenProcess — the plugin retrieves the process handle for the specified process identifier (PID).
  2. VirtualAllocEx + WriteProcessMemory — the plugin allocates memory in the virtual address space of the specified process and writes the contents to be executed later.
  3. CreateRemoteThread — the plugin creates a thread in the virtual address space of the specified process, with lpStartAddress serving as the main function of the loader.

If CreateRemoteThread fails, the plugin falls back to the RtlCreateUserThread function (Fig. 22).

Figure 22. Pseudocode for a function to create a thread in the virtual address space of the specified process

Injection algorithm by executable file name

1. The plugin finds the path to the system executable file to be injected into. The location of this file depends on the bitness of the loader: the 64-bit file is located in the %SYSTEMROOT%\System32 directory, the 32-bit one in %SYSTEMROOT%\SysWOW64.

2. The plugin creates a process from the selected system executable and obtains the identifier of the created process.

Depending on the plugin parameters, there are two ways to implement this step:

  • If the appropriate flag is set in the structure passed to the plugin, the plugin creates a process in the security context of the explorer.exe process (Fig. 23).
Figure 23. Running an executable in the security context of explorer.exe
  • If the flag is not set, the executable file is started by calling the CreateProcessA function (Fig. 24).
Figure 24. Calling CreateProcessA process

3. The plugin allocates memory in the virtual address space of the created process and writes in it the contents, which are to be executed later (VirtualAllocEx + WriteProcessMemory).

4. The plugin runs functions in the virtual address space of the created process in one of the following ways, depending on the bitness of the process:

  • for a 64-bit process, the function is started via the helper function shown in Fig. 25;
Figure 25. Pseudocode of the algorithm for injecting into a 64-bit process
  • for a 32-bit process, the function is started using the CreateRemoteThread and RtlCreateUserThread functions, which create a thread in the virtual address space of the specified process.

Algorithm for injection into the same-name process

  1. The plugin retrieves the path to the executable file of the process in whose address space it is running.
  2. The plugin launches this executable and performs an injection into the newly created process.

The pseudocode for this method is shown in Fig. 26.

Figure 26. Pseudocode for injecting Jumper32.dll/Jumper64.dll into the same process

ListProcesses32.dll/ListProcesses64.dll

This plugin is designed to provide information on running processes (Fig. 27 and 28).

Figure 27. Retrieving information about each active process
Figure 28. Inserting the retrieved information to be sent to the server at a later time

The following can be retrieved for each process:

  • process identifier;
  • path to the executable file;
  • information about the user running the process.

mimikatz32.dll/mimikatz64.dll

The Mimikatz plugin is a wrapper for client-side Powerkatz modules:

  • powerkatz_full32.dll
  • powerkatz_full64.dll
  • powerkatz_short32.dll
  • powerkatz_short64.dll

NetSession32.dll/NetSession64.dll

The plugin is designed to retrieve information about all active network sessions on the infected server. For each session, the host address from which the connection is made can be retrieved, along with the name of the user initiating the connection.

The pseudocode of the function in which the information is received is shown in Fig. 29 and 30.

Figure 29. Retrieving network session information using WinAPI functions
Figure 30. Inserting the information retrieved by the plugin to be sent to the server

rat32.dll/rat64.dll

The plugin is a simplified version of the Carbanak toolkit bot. As we reported at the beginning of this article, this toolkit is heavily used by the FIN7 group.

Screenshot32.dll/Screenshot64.dll

The plugin can take a JPEG screenshot on the infected system. The part of the function used to save the resulting image to the stream is shown below (Fig. 31).

Figure 31. The part of the function used to save a screenshot taken by the plugin to the stream

The received stream is then sent to the loader to be sent to the server.
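The capture-and-serialize step itself is trivial to reproduce. Below is a short sketch with the Pillow library (Pillow stands in for whatever imaging API the plugin actually uses):

# Take a screenshot and serialize it as JPEG into an in-memory stream,
# mirroring the plugin's capture-and-send-to-loader step.
import io
from PIL import ImageGrab

def screenshot_jpeg(quality: int = 80) -> bytes:
    image = ImageGrab.grab()                       # capture the whole screen
    stream = io.BytesIO()
    image.convert("RGB").save(stream, format="JPEG", quality=quality)
    return stream.getvalue()                       # ready to be sent onward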

Plugins from the plugins/extra directory

Plugins from the plugins/extra directory are transferred from the client to the server, and then from the server to the loader on the infected system.

List of files in the plugins/extra directory:

  • ADRecon.ps1
  • GetHash32.dll
  • GetHash64.dll
  • GetPass32.dll
  • GetPass64.dll
  • powerkatz_full32.dll
  • powerkatz_full64.dll
  • powerkatz_short32.dll
  • powerkatz_short64.dll
  • PswInfoGrabber32.dll
  • PswInfoGrabber64.dll
  • PswRdInfo64.dll

ADRecon

The ADRecon.ps1 file is a tool for generating reports that contain information from Active Directory. Read more about the ADRecon project on GitHub. Note that this plugin was not developed by FIN7; however, the group actively uses it in its attacks.

GetHash32/GetHash64

The plugin is designed to retrieve user NTLM/LM hashes. The plugin is based on the code of the lsadump component from Mimikatz.

Fig. 32 shows the pseudocode of the exported Entry function (function names are chosen to match the corresponding Mimikatz function names).

Figure 32. Pseudocode of the exported Entry function for the GetHash plugin

The return value of the Execute function (the value of the g_outputBuffer variable) is a pointer to the buffer with the data resulting from the plugin's operation.

If the plugin is launched without SYSTEM permissions, it fills the buffer with the data shown in Fig. 33.

Figure 33. Buffer contents when running the plugin without SYSTEM permissions

The contents of the buffer in this case are similar to the output of mimikatz when running the module lsadump::sam without SYSTEM permissions (Fig. 34).

Figure 34. Mimikatz output when running lsadump::sam without SYSTEM permissions

If the plugin is run with SYSTEM permissions, it will put all the information the attacker is looking for into the buffer (Fig. 35).

Figure 35. Buffer contents when running the plugin with SYSTEM permissions

The same data can be retrieved by running lsadump::sam from mimikatz with SYSTEM permissions (Fig. 36).

Figure 36. Result of lsadump::sam command from mimikatz with SYSTEM permissions

GetPass32/GetPass64

The plugin is designed to retrieve user passwords. It is based on the code of the sekurlsa component from Mimikatz. The pseudocode of the exported Entry function is shown in Fig. 37.

Figure 37. Pseudocode of the exported Entry function

After the plugin runs, the g_outputBuffer variable holds a pointer to a data buffer with the same content that can be retrieved by executing the sekurlsa::logonpasswords command in Mimikatz (Fig. 38).

Figure 38. Result of the sekurlsa::logonpasswords command

powerkatz_full32/powerkatz_full64

The plugin is a Mimikatz version compiled in the Second_Release_PowerShell configuration. This version can be loaded into the address space of a PowerShell process via reflective DLL loading as implemented in the Exfiltration module of PowerSploit.

Pseudocode of the exported powershell_reflective_mimikatz function (variable and function names in the decompiled output are changed to match the names of the corresponding variables and functions from Mimikatz):

The input parameter is used to pass a list of commands, separated by a space. The global variable outputBuffer is used to pass the result of the commands. The decompiled view of the wmain function is shown below:

powerkatz_short32/powerkatz_short64

The powerkatz_short plugin is a modified version of the standard powerkatz library described in the previous section.

A list of powerkatz functions that are absent from powerkatz_short:

  • kuhl_m_acr_clean;
  • kuhl_m_busylight_clean;
  • kuhl_m_c_rpc_clean;
  • kuhl_m_c_rpc_init;
  • kuhl_m_c_service_clean;
  • kuhl_m_crypto_clean;
  • kuhl_m_crypto_init;
  • kuhl_m_kerberos_clean;
  • kuhl_m_kerberos_init;
  • kuhl_m_vault_clean;
  • kuhl_m_vault_init;
  • kull_m_busylight_devices_get;
  • kull_m_busylight_keepAliveThread.

PswInfoGrabber32.dll/PswInfoGrabber64.dll

The plugin can retrieve the following data:

  • browser history from Firefox, Google Chrome, Microsoft Edge and Internet Explorer;
  • usernames and passwords stored in the listed browsers;
  • email accounts from Microsoft Outlook and Mozilla Thunderbird.

The nss3.dll library is used to retrieve sensitive data from the Firefox browser; it is loaded from the browser's installation directory (Fig. 39).

Figure 39. Dynamic retrieval of function addresses from nss3.dll library

Using the functions shown in Fig. 39, the credentials are retrieved from the logins.json file and the browser history from the places.sqlite database.

For Google Chrome, the plugin retrieves the browser history from %LOCALAPPDATA%\Google\Chrome\User Data\Default\History and the passwords from %LOCALAPPDATA%\Google\Chrome\User Data\Default\Login Data (the password data is encrypted with DPAPI).

History, places.sqlite, and Login Data are all SQLite 3 database files. To work with them, the plugin uses functions from the sqlite library, which is statically linked into the plugin DLL.
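The history part, for example, boils down to a single SQLite query. A sketch for Chrome (the table and column names are those of Chrome's standard History schema; the file is copied first because a running browser keeps it locked):

# Read browsing history from Chrome's History file, an ordinary SQLite 3
# database. The plugin links sqlite statically; here the built-in sqlite3
# module is enough.
import os
import shutil
import sqlite3
import tempfile

def chrome_history(limit: int = 20):
    src = os.path.expandvars(
        r"%LOCALAPPDATA%\Google\Chrome\User Data\Default\History")
    with tempfile.TemporaryDirectory() as tmp:
        copy = os.path.join(tmp, "History")
        shutil.copy2(src, copy)                    # work on a copy, not the live DB
        conn = sqlite3.connect(copy)
        try:
            return conn.execute(
                "SELECT url, title, visit_count FROM urls "
                "ORDER BY last_visit_time DESC LIMIT ?", (limit,)).fetchall()
        finally:
            conn.close()

The passwords in Login Data are additionally protected with DPAPI, so reading them requires CryptUnprotectData in the user's context; that part is not shown here.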

For Internet Explorer and Microsoft Edge, the plugin retrieves user credentials using functions from the vaultcli.dll library, which implements the functionality of the vaultcmd.exe utility.

PswRdInfo64.dll

PswRdInfo64.dll is designed primarily to collect domain credentials and retrieve credentials for accessing other hosts via RDP. The plugin is activated from the client application using the Grabber → RDP tab.

The workflow of the plugin depends on the following conditions.

When started as SYSTEM, the plugin lists the active console sessions (WTSGetActiveConsoleSessionId) and gets the user names for these sessions:

(WTSQuerySessionInformationW)(0i64, SessionId, WTSUserName, &vpSessionInformationUserName, &pBytesReturned))

The plugin then retrieves the private keys from the C:\Users\{SessionInformationUserName}\AppData\Local\Microsoft\Credentials directory for each user and injects itself into the lsass.exe process to extract domain credentials.

When started by a user other than SYSTEM, the plugin attempts to collect credentials for RDP access to other hosts. Credentials are collected using the CredEnumerateW function, with the TERMSRV string as the target filter.
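This branch maps onto one documented WinAPI call. A minimal ctypes sketch that enumerates saved RDP credential entries and prints only their target and user names (the TERMSRV/* wildcard filter is an assumption based on the behavior described above):

# Enumerate saved credentials whose target matches TERMSRV/* via CredEnumerateW,
# listing only target and user names (Windows only, illustrative sketch).
import ctypes
from ctypes import wintypes

advapi32 = ctypes.WinDLL("Advapi32", use_last_error=True)

class CREDENTIALW(ctypes.Structure):
    _fields_ = [
        ("Flags",              wintypes.DWORD),
        ("Type",               wintypes.DWORD),
        ("TargetName",         wintypes.LPWSTR),
        ("Comment",            wintypes.LPWSTR),
        ("LastWritten",        wintypes.FILETIME),
        ("CredentialBlobSize", wintypes.DWORD),
        ("CredentialBlob",     wintypes.LPBYTE),
        ("Persist",            wintypes.DWORD),
        ("AttributeCount",     wintypes.DWORD),
        ("Attributes",         ctypes.c_void_p),
        ("TargetAlias",        wintypes.LPWSTR),
        ("UserName",           wintypes.LPWSTR),
    ]

advapi32.CredEnumerateW.restype = wintypes.BOOL
advapi32.CredEnumerateW.argtypes = (
    wintypes.LPCWSTR, wintypes.DWORD, ctypes.POINTER(wintypes.DWORD),
    ctypes.POINTER(ctypes.POINTER(ctypes.POINTER(CREDENTIALW))))
advapi32.CredFree.argtypes = (ctypes.c_void_p,)

def rdp_credential_targets():
    count = wintypes.DWORD(0)
    creds = ctypes.POINTER(ctypes.POINTER(CREDENTIALW))()
    if not advapi32.CredEnumerateW("TERMSRV/*", 0, ctypes.byref(count),
                                   ctypes.byref(creds)):
        return []                       # nothing saved or access denied
    try:
        return [(creds[i].contents.TargetName, creds[i].contents.UserName)
                for i in range(count.value)]
    finally:
        advapi32.CredFree(creds)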

Conclusion

As the analysis shows, Lizar is a diverse and complex toolkit. It is currently still under active development and testing, yet it is already being widely used to control infected computers, mostly throughout the United States.

However, it seems that FIN7 are not looking to stop there, and we will soon be hearing about more Lizar-enabled attacks from around the world.

IoC

IP:

108.61.148.97
136.244.81.250
185.33.84.43
195.123.214.181
31.192.108.133
45.133.203.121

SHA256:

166b0c5e49c44f87886ecaad46e60b496b6b7512d1c57db41d9cf752fada95c8
188d76c31fa7f500799762237508203bdd1927ec4d5232cc189d46bc76b7a30d
1e5514e8f95dcf6dd7289acef6f6b88c460105660cb0c5b86ec7b854f70ee857
21850bb5d8df021e850e740c1899353f40af72f119f2cd71ad234e91c2ccb772
3b63eb184bea5b6515697ae3f13a57365f04e6a3309c79b18773291e62a64fcb
4d933b6b60a097ad5ce5876a66c569e6f46707b934ebd3c442432711af195124
515b94290111b7be80e001bfa2335d2f494937c8619cfdaafb2077d9d6af06fe
61cfe83259640df9f19df2be4b67bb1c6e5816ac52b8a5a02ee8b79bde4b2b70
fbd2d816147112bd408e26b1300775bbaa482342f9b33924d93fd71a5c312cce
a3b3f56a61c6dc8ba2aa25bdd9bd7dc2c5a4602c2670431c5cbc59a76e2b4c54
e908f99c6753a56440127e54ce990adbc5128d10edc11622d548ddd67e6662ac
7d48362091d710935726ab4d32bf594b363683e8335f1ee70ae2ae81f4ee36ca
e894dedb4658e006c8a85f02fa5bbab7ecd234331b92be41ab708fa22a246e25
b8691a33aa99af0f0c1a86321b70437efcf358ace1cf3f91e4cb8793228d1a62
bd1e5ea9556cb6cba9a509eab8442bf37ca40006c0894c5a98ce77f6d84b03c7
98fbccd9c2e925d2f7b8bcfa247790a681497dfb9f7f8745c0327c43db10952f
552c00bb5fd5f10b105ca247b0a78082bd6a63e2bab590040788e52634f96d11
21db55edc9df9e096fc994972498cbd9da128f8f3959a462d04091634a569a96

Easter Egg in APK Files: What Is Frosting

By: BI.ZONE
28 December 2020 at 14:04

By Konstantin Molodyakov

File structures are a whole fascinating world, with their own history, mysteries, and a home-grown circus of freaks where workarounds are applied liberally. If you dig deep enough, you can discover loads of interesting stuff.

In our digging we came across a particular feature of APK files — a special signature with a specific block of metadata, i.e. frosting. It allows you to determine unambiguously if a file was distributed via Google Play. This signature would be useful to antivirus vendors and sandboxes when analyzing malware. It can also help forensic investigators pinpoint the source of a file.

There’s hardly any information out there regarding this topic. The only reference appears to be in Security Metadata in Early 2018 on the Android Developers Blog, and there is also an Avast utility that allows this signature to be validated. I decided to explore the feature and check the Avast developers’ assumptions about the contents of the frosting block and share my findings.

Frosting and APK Signing Block

Google uses a special signature for APK files when publishing apps on Google Play. This signature is stored in the APK Signing Block, which precedes the central directory of the ZIP file and follows its primary contents:

The magic value APK Sig Block 42 can be used to identify the APK Signing Block. The signing block may contain other blocks, whose purpose is determined by a 4-byte ID. Thus, we get a ZIP format extension with backward compatibility. If you are interested in reading more or seeing the source code, you can check out the description of the ApkSigningBlockUtils.findSignature method here.
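The layout is simple enough to walk by hand. A from-scratch sketch that locates the block and prints the 4-byte IDs of its inner blocks (it follows the documented structure: an 8-byte size, length-prefixed ID-value pairs, a repeated 8-byte size, and the 16-byte magic, all sitting right before the central directory):

# Locate the APK Signing Block in an APK and list the IDs of its inner blocks.
import struct

APK_SIG_BLOCK_MAGIC = b"APK Sig Block 42"

def find_central_directory_offset(data: bytes) -> int:
    # End of central directory record: signature PK\x05\x06, CD offset at +16
    eocd = data.rfind(b"PK\x05\x06")
    if eocd < 0:
        raise ValueError("EOCD record not found")
    return struct.unpack_from("<I", data, eocd + 16)[0]

def list_signing_block_ids(path: str):
    data = open(path, "rb").read()
    cd_offset = find_central_directory_offset(data)
    if data[cd_offset - 16:cd_offset] != APK_SIG_BLOCK_MAGIC:
        return []                       # no APK Signing Block present
    # size_of_block excludes the first 8-byte size field itself
    size_of_block = struct.unpack_from("<Q", data, cd_offset - 24)[0]
    block_start = cd_offset - 8 - size_of_block
    pairs = data[block_start + 8:cd_offset - 24]
    ids, pos = [], 0
    while pos + 12 <= len(pairs):
        length, block_id = struct.unpack_from("<QI", pairs, pos)
        ids.append(hex(block_id))
        pos += 8 + length               # length covers the 4-byte ID plus the value
    return ids

if __name__ == "__main__":
    print(list_signing_block_ids("example.apk"))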

Let us take some file as an example, for instance 2124948e2b7897cd6fbbe5fbd655c26d. You can use androguard to view the block identifiers within the APK Signing Block:

There are several types of blocks with various identifiers, which are officially described in the documentation:

Some of the blocks can be found in the Android source code:

Other types of blocks that may come up:

  • 0x504b4453 (DEPENDENCY_INFO_BLOCK_ID) — a block that apparently contains dependency metadata saved by the Android Gradle plugin to help identify dependency-related issues
  • 0x71777777 (APK_CHANNEL_BLOCK_ID) — a block added by Walle (a Chinese packaging tool), which contains JSON with a channel ID
  • 0xff3b5998 — a block filled with zeros, which I ran into in one file; I could not find any information about it
  • 0x2146444e — a block with the Google Play metadata we are interested in

Frosting and Play Market

Let us get back to analyzing the 0x2146444e block. First off, we should explore the innards of the Play Market application.

The identifier we are interested in is found in two locations. As we delve deeper, we quite quickly spot the class responsible for parsing the block. This is the first time the name of the frosting block pops up among the constants:

Having compared different versions of the Play Market application, I have made the following observation: the code responsible for the parsing of this type of signature appeared around January 2018 together with the release of the 8.6.X version. While the frosting metadata block already existed, it was during this period that it took on its current form.

In order to parse the data, we need a primitive for reading varint-encoded 32-bit numbers. The scheme is a standard varint without any tricks involving negative numbers.
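A sketch of such a reader (a standard base-128 varint, least significant group first):

# Standard unsigned varint: 7 payload bits per byte, least significant group
# first; the high bit of each byte signals that another byte follows.
def read_varint(data: bytes, offset: int):
    result, shift = 0, 0
    while True:
        byte = data[offset]
        offset += 1
        result |= (byte & 0x7F) << shift
        if not byte & 0x80:            # continuation bit clear: last byte
            return result, offset
        shift += 7

# Example: bytes 0x96 0x01 decode to 150
assert read_varint(bytes([0x96, 0x01]), 0) == (150, 2)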

Though simple, the block parsing function is fairly large. It allows you to gain an understanding of the data structure:

To validate the signature, the first-field hash and the key from validation_sequence are used, where validation_strategy equals zero. The signature itself is taken from the signature_sequence entry with the same ordinal number as the validation_sequence entry. The figure below presents the explanatory pseudocode:

The signing_key_index value is an index into the finsky.peer_app_sharing_api.frosting_public_keys array, which so far contains only one key, as shown below:

The data of length size_signed_data, starting at the size_frosting field, is signed with the ECDSA_SHA256 algorithm. Note that the signed data contains a SHA-256 hash of the file data:

1) data from the beginning of the file to the signing block

2) data from the start of the central directory to the end of the end of central directory record, with the ‘offset of start of central directory with respect to the starting disk number’ field in that record replaced with the signing block offset

The signature scheme v2 block (if any) is inserted between the data from items 1 and 2 above, with APK_SIGNATURE_SCHEME_V2_BLOCK_ID preceding it.

The hash calculation function in the Play Market application is represented as follows:

Frosting and ProtoBuf

This information is sufficient for signature validation. Alas, I failed to figure out what is hidden in the frosting block data. The only thing I was able to discover was that the data has a ProtoBuf format and varies greatly in size and the number of fields depending on the file.

Typical representation of decoded data without a scheme (4b005c9e9ea0731330a757fcf3abeb6e):

But you can come across some instances (471c589acc800135eb318057c43a8068) with around five hundred fields.
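Without a schema, the data can still be walked at the wire-format level: each field starts with a varint tag that packs the field number and the wire type, followed by a payload whose size the wire type determines. A minimal top-level walker, reusing the varint reader shown earlier:

# Walk top-level protobuf fields without a schema: tag varint -> field number
# and wire type, then skip or slice the payload according to the wire type.
import struct

def walk_fields(data: bytes):
    offset, fields = 0, []
    while offset < len(data):
        tag, offset = read_varint(data, offset)       # reader from the sketch above
        field_number, wire_type = tag >> 3, tag & 0x07
        if wire_type == 0:                            # varint
            value, offset = read_varint(data, offset)
        elif wire_type == 1:                          # 64-bit fixed
            value = struct.unpack_from("<Q", data, offset)[0]
            offset += 8
        elif wire_type == 2:                          # length-delimited: bytes, string, submessage
            length, offset = read_varint(data, offset)
            value = data[offset:offset + length]
            offset += length
        elif wire_type == 5:                          # 32-bit fixed
            value = struct.unpack_from("<I", data, offset)[0]
            offset += 4
        else:
            raise ValueError(f"unsupported wire type {wire_type}")
        fields.append((field_number, wire_type, value))
    return fields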

The data occasionally contains such curious strings as android.hardware.ram.low, com.samsung.feature.SAMSUNG_EXPERIENCE, com.google.android.apps.photos.PIXEL_2018_PRELOAD. These strings are nothing other than the names of declared features that a device may have.

You can find the description of available features in the files on the device — in the /etc/sysconfig/ folder:

An example of a declared feature would be checking camera availability by passing the android.hardware.camera feature name to the PackageManager.hasSystemFeature method. However, the role of these strings in this context remains unclear.

I could not guess, find, or recover the data schema for the Play Market APK classes. It would be great if anyone could tell us what is out there and how they managed to figure it out. Meanwhile, all we have are the assumptions of the Avast utility developers about the ProtoBuf structure and about the string com.google.android.apps.photos.PIXEL_2018_PRELOAD indicating a system or pre-installed app:

I would like to share some of my comments with respect to the above.

1. When it comes to the string com.google.android.apps.photos.PIXEL_2018_PRELOAD: you can easily prove that this assumption is incorrect. If we download a few Google factory images, we will realize that they have neither such strings nor a single app with a frosting block.

We can look into this in more detail using the walleye image for Pixel 2, version 9.0.0 (PQ3A.190801.002, Aug 2019). Having installed the image, we do not find a single file with a frosting block among the 187 APK files. If we update all the apps, 33 of the resulting 264 APK files acquire a frosting block. However, only 5 of them contain these strings:

com.google.android.as:

  • com.google.android.feature.DPS
  • com.google.android.feature.PIXEL_EXPERIENCE
  • com.google.android.feature.PIXEL_2017_EXPERIENCE
  • com.google.android.feature.PIXEL_2019_EXPERIENCE
  • com.google.android.feature.ANDROID_ONE_EXPERIENCE
  • com.google.android.feature.PIXEL_2018_EXPERIENCE
  • com.google.android.feature.PIXEL_2020_EXPERIENCE

google.android.inputmethod.latin:

  • android.hardware.ram.low

google.android.dialer:

  • com.google.android.apps.dialer.GO_EXPERIENCE
  • com.google.android.feature.PIXEL_2020_EXPERIENCE

google.android.GoogleCamera:

  • android.hardware.camera.level.full

google.android.apps.photos:

  • com.google.android.feature.PIXEL_2020_EXPERIENCE

We can assume that these strings show the relevance of features to the device where the app is installed. However, requesting a full list of features on the updated device proves that the assumption is wrong.

2. I would disagree with the ‘frosting versions’ as you can find similar data, but with values other than 1. The maximum value of this field that I have come across so far is 26.

3. I would disagree with the ‘C timestamp of the frosting creation’: I have been monitoring a specific app and noticed that this field value does not necessarily increase with every new version. It tends to be unstable and can become negative.

4. MinSdkLevel and VersionCode appear plausible.

Conclusion

In summary, a frosting block in the signature helps to precisely ascertain if a file has been distributed through an official store. I wasn’t able to derive any other benefit from this signature.

For the finale, here is an illustration from the ApkLab mobile sandbox report of how this information is applied:
