Stories by Chris Hernandez on Medium

Confessions of a top-ranked bug bounty hunter

From 2016 to 2017 I was very active in the bug bounty space, working almost exclusively with Synack. In my first year doing bug bounties I was able to claim a top spot on the Synack leaderboards, all while hunting part time (I still had a day job as a red teamer at a Fortune 50 retailer). As a reward for my efforts, I was invited to a few private events and given access to programs reserved exclusively for top researchers. In addition to that private access, I was on a first-name basis with some of the program managers at Synack.

Over time those program managers moved on and I lost my close relationship with the staff at Synack. I also lost interest in working on "those kinds" of bug bounty programs in general. So, how does a hacker go from being a top researcher to being completely idle on the Synack platform? Here are a few things that contributed to the decline. I also feel these problems highlight the overall weaknesses in the bug bounty space.

Issue #1: The time/money trade-off

The economics of bug bounty programs are pretty great, for the bug bounty company. When a new customer is onboarded, their money goes into a pool, and that pool is slowly paid out to researchers over time as they submit bugs. If the pool of researchers is unable to find enough bugs, the bug bounty company risks losing a client; however, they don't have to give the money back ;)

Contrast that with the bug bounty researcher: if you spend 10 hours researching a target and find only one interesting bug, you may get paid the value of that bug, if it's not a duplicate. If it's a dupe, you get nothing! If you find nothing, you also get paid nothing. Not great economics!

On the flip side, I've been able to find RCE bugs in as little as 2 hours, and a two-thousand-dollar bounty for 2 hours of work is not a bad payday either.

This arrangement has an unfortunate drawback: it prioritizes bugs that are "shallow." A shallow bug is one that is easily found by automation, for example reflected or stored XSS issues. A bug bounty hunter may only get $300 for a stored XSS bug, but if you can automate your tooling and find several in a day, the economics work out for you.

If, on the other hand, you want to find high-impact bugs like RCE, you may need to spend a lot of time researching a hard target. You may find RCE, you may not. But if you don't find RCE in a hard target, you definitely won't get paid! The drawback here is that time spent researching has a non-zero value, which the bug bounty model does not factor in.
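To make the trade-off concrete, here is a toy expected-value calculation. The dollar amounts roughly match the bounties mentioned above, but the hit rates, duplicate rates, and hours are illustrative assumptions, not program data:

```python
def expected_hourly_rate(bounty, bugs_found, p_unique, hours):
    """Expected payout per hour: you earn a bounty only for bugs
    you actually find that are not duplicates."""
    return bounty * bugs_found * p_unique / hours

# Shallow hunting: $300 stored XSS, automation finds ~3 per 8-hour
# day, but half turn out to be duplicates (assumed numbers).
shallow = expected_hourly_rate(bounty=300, bugs_found=3.0, p_unique=0.5, hours=8)

# Deep hunting: $2,000 RCE on a hard target, maybe a 1-in-4 chance
# of success after 40 hours, duplicates rarer (assumed numbers).
deep = expected_hourly_rate(bounty=2000, bugs_found=0.25, p_unique=0.9, hours=40)

print(f"shallow: ${shallow:.2f}/hr, deep: ${deep:.2f}/hr")
```

Under these assumptions the shallow strategy pays several times more per hour than the deep one, which is exactly the incentive problem described above.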

Issue #2: Target selection bias

While issue #1 represents the most common problem, another, less common issue is what I call target selection bias. If a researcher doesn't want to spend time looking for shallow bugs, and instead focuses their skills on deep knowledge of one specific tech stack, they can usually find some pretty high-impact bugs. One example would be a researcher who specializes in finding Java deserialization bugs. The issue here for a customer is: what if you don't run a tech stack that many researchers are familiar with? Does that mean you have no deeply hidden vulnerabilities? Probably not; it's just that the talent pool at bug bounty programs isn't usually selecting for your tech stack.

Bug bounty programs therefore prioritize the discovery of specific vulnerability classes. These may or may not be in your perimeter’s tech stack.

Issue #3: Scoping

This one may or may not be obvious, but real attackers don't follow scoping guidelines, so if you are considering a bug bounty, your entire enterprise should be in scope. If not, you are probably missing key vulnerabilities in systems that will likely be exploited by the actual bad guys some day! Some bug bounty programs do a better job of supporting larger scopes than others. Synack, for example, had host-based programs in which an entire range was in scope. I liked those the best because I could look at every system in scope and find the weakest, or most interesting, link.

Narrowly scoped bug bounty programs therefore fail to authentically reproduce the adversary's perspective.

So are there better options in the marketplace today?

If you are a researcher yourself: in the last 2 years I switched my focus to private vulnerability disclosure and acquisition programs. Personally, I think the Zero Day Initiative is a great program, one in which a researcher is paid enough to spend hours upon hours doing deep research on hard(er) targets to find high-impact vulnerabilities. I've spent weeks' worth of effort finding and chaining multiple bugs together to get RCE. This is a win-win from my perspective: in Pwn2Own an RCE bug is worth about $20k, and if it's a duplicate (which also happens) you are still rewarded financially for your effort. It's certainly not as much, but it's usually around $5–10k for your time and effort. That softens the blow significantly!

I’ve competed in Pwn2Own a few years in a row and disclosed some high impact RCE vulnerabilities in ICS/SCADA systems and NAS devices. I feel like a researchers time is valued at ZDI.

Finally, if you run a security program: I've launched Adversary Academy, a research-focused offensive security company. Our goal is to become the leader in research-focused offensive security consulting. We set aside funds from every engagement to target devices and systems we encounter during penetration tests and vulnerability assessments for clients. After an engagement is complete, we spend our own research hours and money on finding vulnerabilities in our clients' systems. We then notify our clients of high-impact exploitable vulnerabilities before anyone else. In my mind this does what no other pentest shop is currently doing: breaking out of the point-in-time nature of pentesting and delivering value even after the reports are completed.

Confessions of a bug bounty program manager

In my previous article I wrote about my experiences as a top ranked bug bounty hunter. In this article I will write about my experiences on the other side of the fence triaging bug bounty program submissions. This article will hopefully serve to highlight some of the traps that exist in the bug bounty space.

Hopefully my dual roles in the bug bounty space will give existing researchers insight into what a program manager might be looking for when receiving a bug bounty submission. I am also hoping to highlight some of the challenges that I see on the customer or client side of the bug bounty space.

The bug bounty program I supported was managed by Bugcrowd. Other programs may have their own unique set of challenges, but I would expect some of the common challenges to be exactly the same across the majority of the bug bounty space.

Looking back from a value perspective, I think the highest value we received came from two unique events in our program.

The first event was the initial program launch. New programs usually attract quite a bit of attention as researchers rush to find shallow, easily monetized bugs. As a result of this attention, the initial "penetration test" or targeted test phase of the bug bounty program revealed a decent number of low to medium severity findings. If a company were looking to replace a traditional point-in-time penetration test and still be able to provide auditors with a "pentest report," this initial launch phase would be a good place to look to replace a traditional pentest vendor, albeit at a higher price point.

Quick! Find XSS!

However, once the initial burst of activity was over, the program quickly devolved into an extended period of low-value or duplicate bugs. Something a company might want to consider is the amount of time it takes their internal staff to triage and review bugs that are essentially useless. During this time I found myself frustrated with the program. I can't blame the researchers, however; as I mentioned in my previous article, the economics of most bug bounty programs incentivize researchers to target easily automated bugs.

The second event came quite a bit later in the program's run time, when we finally had a researcher submit high-impact bugs. These bugs demonstrated the value of having access to a large researcher pool. As I mentioned before, instead of focusing on shallow, easily automated bugs, some researchers focus on a specific class of vulnerability, and in our case we were able to attract the attention of a researcher who had familiarity with our tech stack.

Bug Bounty! Great Success!

However, one thing to consider is that the average price of a yearly contract with a bug bounty vendor is about $100,000. So while the high-impact vulnerabilities were valuable, I still have difficulty matching the value extracted to the price paid per year. As usual, your mileage may vary with a program like this. I would expect the number of high and critical vulnerabilities to trail off year over year, so it may be that the first year or two of a bug bounty program represents a great value for your organization, with the value in the following years trailing off in correlation with the number of high and critical submissions.

So, what can someone do to ensure their program is successful?

If it were up to me, these are the questions that I would ask a bug bounty vendor.

Do you have a certain number of researchers who are skilled in my tech stack?

A bounty vendor may have statistics on the number of researchers in their program who are skilled at, say, API testing. But do they have details on the percentage of researchers who are skilled at finding Java deserialization vulnerabilities? Those types of metrics are useful for someone who knows what their tech stack is built on.

If they can’t answer this question with explicit detail, you may want to consider other options!

Can I invite or incentivize certain researchers?

If someone is finding bugs in your platform, you may want to invite and incentivize them to spend more time looking at your software.

Realize that most bug bounty programs do not have exclusive access to researchers. Most researchers are looking at programs across multiple platforms. So, if you can get a better price with one vendor versus another it may be in your best interest to go with that vendor.

Pay an above-average rate for high and critical vulnerabilities.

Really, we are trying to reward researchers for their time and disincentivize low-impact bugs. If it were up to me, I would pay a below-average payout for low and medium severity bugs and an above-average payout for high and critical bugs. In the screenshot below you can see Bugcrowd's payment trends by severity. For P1 severity bugs the average payout is $1,000 to about $5,000. I would probably start P1 payouts at $5,000 to $10,000+, depending on the overall impact to the organization.

Make your P1 and P2 payouts above the average line, P4 and P5 below average

Think about it this way: if you pay Bugcrowd $100,000 for one year of a bug bounty program and you get only one P1 bug because your payout was at or below the average, that bug essentially cost you $100k, yet the researcher only got $1–2k for their work. If instead you set the payout to $10,000 and you get ten P1 bugs, you are paying the market rate for experts and significantly driving down your overall attack surface. In my mind that is the definition of a win-win: researchers are getting paid for their time and you are getting a better return on investment.
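The arithmetic above can be sketched in a few lines. The $100k platform fee and the two payout levels come from the scenario in this article; the P1 counts are the same hypothetical ones used above:

```python
def cost_per_p1(platform_fee, payout_per_p1, p1_count):
    """Total program cost (platform fee plus bounties paid)
    divided by the number of P1 bugs received."""
    return (platform_fee + payout_per_p1 * p1_count) / p1_count

# Low payout attracts little deep research: one P1 all year.
low_payout = cost_per_p1(platform_fee=100_000, payout_per_p1=2_000, p1_count=1)

# Above-market payout attracts specialists: ten P1s in the year.
high_payout = cost_per_p1(platform_fee=100_000, payout_per_p1=10_000, p1_count=10)

print(f"per-P1 cost at low payout:  ${low_payout:,.0f}")
print(f"per-P1 cost at high payout: ${high_payout:,.0f}")
```

Counterintuitively, paying five times more per bug drives the effective cost per P1 down by a factor of five, because the platform fee dominates when submissions are scarce.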

Are there any other options?

Adversary Academy offers targeted vulnerability research as part of its pentest services. This breaks the traditional limitation of penetration tests being point-in-time assessments and brings in the perspective and capabilities of an advanced, well-funded adversary. In this case you are paying for sustained, focused research targeting your enterprise's technology.

Another option would be a partnership with a responsible vulnerability acquisition platform like ZDI. For some Pwn2Own categories you can sponsor or support the event, and your software or hardware will be in the list of targets for the competition. Tesla does this every year.

No Mud No Lotus: The art of transforming breaches through security improvement

A lot of gross stuff at the bottom of a pond is responsible for this

In Buddhist philosophy I often hear the expression "No Mud, No Lotus." The expression aligns with the Buddhist view that life and existence are in many ways circular: things that are negative can actually be used for our benefit, and things that are good can harm us when overused. Life is a duality.

I've been thinking about how organizations are transformed by negative events, specifically security breaches or incidents. These unfortunate events, caused by evildoers with malicious intent, usually have some unintended consequences. Those unintended consequences are, hopefully, that the affected organization's cybersecurity posture will significantly improve as a result of the breach. Unfortunately, sometimes the evildoer's intended outcome (financial loss for the victim, or monetary gain for the attacker) is also the result.

Improvement in cybersecurity after a breach does not come without much pain and suffering: on the part of the incident responders who work countless hours, the customers of the organization who lose their data or their identities, and the IT staff who have to rebuild after much is destroyed.

Oftentimes, out of the ashes of a significant breach, a more defensible organization with a more realistic view of the cost of failure is born. CISOs, executives, and board members who have never before led an organization through a massive incident now understand that security is truly everyone's responsibility. These lessons, in the case of a breach, are learned the hard way. "The hard way" is certainly a way, but it's not always the best way, and it's a way I would like people to avoid if at all possible.

Your SOC if they haven’t dealt with an adversary before

Returning to Buddhist philosophy, there is also the concept of the "middle way." Applying that concept to cybersecurity suggests that it may be possible to significantly improve security without the pain of a significant breach. What would that look like? From my perspective, the middle way still has an adversary, just not one that wants to cause you actual monetary or other harm. Many cybersecurity thinkers have quoted The Art of War by Sun Tzu, a 6th-century BCE military strategist; I will spare you recitations of those concepts. Rather, in keeping with the Buddhist inspiration, I will explore a few concepts from "The Book of Five Rings" by Miyamoto Musashi. Miyamoto was an incredibly skilled Japanese swordsman, philosopher, strategist, writer, and rōnin. (I bet you thought your LinkedIn profile was impressive!) Miyamoto was also a Buddhist, though not necessarily in the peace-loving modern sense; this was feudal Japan, and Miyamoto killed a lot of people… but I digress. How can his strategies and philosophies help defend the modern enterprise?

Taking inspiration from The Book of Five Rings here are a few quotes to ponder.

“The important thing in strategy is to suppress the enemy’s useful actions but allow his useless actions”

Miyamoto dropping knowledge bombs

Here are the factors to consider:

When an adversary gains a foothold in an environment, we need to ensure they cannot take any useful action without being detected and blocked. There are, however, a number of useless actions we can allow them to take on a system. This lets them waste their time and increases the likelihood that they will be detected as soon as they attempt a useful action. Using a tool to automatically evaluate your EDR framework's detective capability comes to mind. A tool like MITRE's Caldera framework can launch a battery of tests on an endpoint. You can then evaluate which actions your EDR solution detects, which it does not, which undetected actions should be prioritized, and which can safely be ignored. Do this and you will be implementing Miyamoto's strategy of suppressing the enemy's useful actions.
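A minimal sketch of that triage step might look like the following. The result records and the `useful` labels are hypothetical examples I've made up for illustration, not actual Caldera output; in practice you would export your test results and make the useful/useless call per technique yourself:

```python
# Hypothetical endpoint-test results: which simulated adversary
# actions your EDR detected, and which ones would actually help
# an attacker ("useful") versus merely waste their time.
results = [
    {"technique": "T1003 credential dumping",  "detected": False, "useful": True},
    {"technique": "T1059 command execution",   "detected": True,  "useful": True},
    {"technique": "T1016 network discovery",   "detected": False, "useful": False},
]

# Gaps worth fixing: useful actions that went undetected.
gaps = [r["technique"] for r in results if r["useful"] and not r["detected"]]

# Useless actions we can deliberately allow, per Miyamoto's advice.
allowed = [r["technique"] for r in results if not r["useful"] and not r["detected"]]

print("prioritize detection for:", gaps)
print("safe to allow:", allowed)
```

The point of the exercise is the two buckets: every undetected-but-useful technique is a detection gap to close, while undetected-but-useless techniques are noise you can consciously tolerate.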

“You can only fight the way you practice”

After running countless purple team engagements, red team exercises, and penetration tests over my career, there has not been a single time that all teams did not collectively walk away with something to improve or focus on for next time. If you are not regularly practicing how to defend your environment from an adversary, how can you expect to fight one off in a real-world breach?

There are so many other insightful quotes in the Book of Five Rings but I will leave you with one final gem.

“In strategy, it is important to see distant things as if they were close and to take a distanced view of close things.”

This one should hit home for all security people: there is a constant flood of small tasks, alerts, things to do, and people to help. Those are close things. Consider the urgency of someone reporting a phishing email: yes, it is possibly a phishing email, but what if you were to wait 30 minutes to respond and instead plan out an incident tabletop? The distant things (a breach) need to be examined closely. What are you doing to prepare for that eventuality?

Okay, okay, last one:

“You must understand that there is more than one path to the top of the mountain”

There is no one right way to secure an organization, and there are also many wrong ways. Organizations choose a variety of paths to improved security: some build out their own adversary emulation teams, others bring in an outside party, and some keep their systems disconnected from the internet entirely. If you'd like to discuss what may work for your organization, you can reach me at chris [at] adversaryacademy.com

Clocking into The Network — Attacking Kronos Timeclock Devices

One of the services I'm most excited about at Adversary Academy is our targeted vulnerability research (TVR) program. The challenge I've seen with traditional pentest providers is that there is typically a one- to two-week engagement window, and after that you usually don't hear from your pentest provider until it's time for next year's test. To disrupt that cycle, we've started a program that allows researchers to spend time attacking interesting systems they've encountered on customer networks, long after the engagement is over. Typically on a penetration test your "spidey senses" will go off at some point when you encounter a system that just feels vulnerable, or would be impactful if it were vulnerable. With the TVR program, we are able to spend cycles researching those items that appear to be high impact.

One recent example was for a customer who employed the Kronos InTouch DX timeclock, a really fancy Android-based device that supports biometric data as well as facial recognition.

For this engagement, the ability to jump to an enterprise network would be very valuable. We hypothesized that the Kronos timeclock devices might be connected to an enterprise network without proper segmentation. Our attempts to access the settings on the customer's devices were locked out, and non-default passwords were in use. We later purchased our own unit of the hardware to perform a full teardown.

Further documentation available on the FCC report website shows that SSH can be configured on the device, and that a maintenance mode badge or button can be used to bypass the initial configuration.

Pressing and holding the maintenance mode button (4) will bypass the locked screens

After bypassing the lockout and enabling SSH with a known password, the user is logged in as root rather than a low-privilege account. With root access any configuration can be changed, but one of the most valuable files we found was wpa_supplicant.conf, which contains the Wi-Fi credentials in plaintext. This file would allow a local attacker to join the network the Kronos timeclock is connected to, potentially reaching an enterprise network and carrying out further attacks from there.
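For readers unfamiliar with the file, wpa_supplicant.conf stores each network's credentials in a simple quoted key/value block, so recovering them from a root shell is trivial. Here is a sketch of that extraction (the SSID and passphrase below are made-up placeholders, not values from the device we tested):

```python
import re

# Illustrative wpa_supplicant.conf contents; real files on the
# device look like this, but these credentials are placeholders.
conf = '''
network={
    ssid="CorpWiFi"
    psk="SuperSecret123"
    key_mgmt=WPA-PSK
}
'''

# Anyone with a root shell can pull the plaintext Wi-Fi
# credentials with a couple of regexes (or just `cat` the file).
ssid = re.search(r'ssid="([^"]*)"', conf).group(1)
psk = re.search(r'psk="([^"]*)"', conf).group(1)

print(f"network: {ssid}, passphrase: {psk}")
```

Note that wpa_supplicant does support storing a pre-hashed 256-bit PSK instead of the passphrase, but even that hash is sufficient to join the network, which is why wiping or protecting the file matters.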

our root access as rauser printing out the wpa_supplicant.conf file

After discovering this issue, the Adversary Academy team reached out to Kronos and recommended that the wpa_supplicant.conf file be wiped when maintenance mode is entered by physical keypress, which would better protect the credentials. Alternatively, making the rauser account non-root and giving it access only to the needed configuration files would also be suitable. Currently no patch or update is available for this issue, and we have yet to hear a response from Kronos / UKG.
