
Use the Source, Luke

27 August 2019 at 17:00

Your pentesters should be asking for source code. And you should probably be sharing it.

One of my favorite things about working at Atredis Partners is that part of our research-centric model includes throwing folks at all kinds of targets that they've never seen before. First chairs on projects are always going to be in their comfort zone, but for second chairs, we like to mix things up a bit, because it helps folks grow and we often find new ways of looking at things that we've collectively looked at the same way for years.

This not only means that folks from traditional pentesting backgrounds get to grow into doing things like hardware or mobile hacking; it also means that I get to watch people with backgrounds in, say, reverse engineering or exploit development take a look at a network perimeter or a web app, which yields some great new perspectives.

Last week, I was on an internal kickoff call for what was ostensibly an assessment of an API on a public webserver. There were some other targets that were more complex than that, but for this part of the call, we were discussing testing the API. Two people on the team were coming at the target from a more bughunting-centric background, and were asking about how we'd be testing.

"So, are they gonna give us source code?"

"Probably not, they seemed pretty cagey about it. I can ask again, though..."

"That's stupid."

"I mean, I guess it kinda is. A lot of times we'll just get say, API docs, and some example code, maybe cobble together an API client to throw traffic at it, tool around with requests in Burp, that sort of thing."

"How about shell access to the server while you're testing so you can debug?"

"Uh, I dunno, like I said, these folks were pretty cagey about sharing much beyond access to the API itself."

"Jeez. Well, I guess they don't want us to find anything."

I was a bit dumbstruck at first, because a lot of the time, that's all you'll get for a web app or web services assessment: a couple of logins, maybe a walkthrough of the app, and then, well, yeah, actually... good luck finding anything. It was a useful reminder of how wrongheaded it is to be on the hook for finding all the bugs in something without source code.

We do ask for source, and shell access, for pretty much any software assessment we do. But we often get told no. Even more often than that, the client tells us "nobody's ever asked me that before."

And that is dumb.

Runtime testing alone is absolutely not the right way to find the most bugs possible, in the least amount of time, in pretty much any target. It's a way to find an opportunistic subset of the bugs that are floating at or very near the surface.

How likely is runtime testing to find more complex bugs several layers deep in the app, or bugs that are only exploitable in a very narrow window in-session? What about finding the ten other places you're vulnerable to SSRF, all of which require a more complex trigger than the single case your tester found at runtime? How likely are your devs to fix every vector when they only know about one? And how likely is it that a dev will find a way down the road to expose the other ten?

Source allows you to identify all kinds of systemic problems in an application that you just don't see when you're flinging yourself willy-nilly at a web interface or web service, or any software target, really. It allows you to confirm runtime findings and weed out false positives, and follow the bugs you found at runtime down into the bowels of the broken function or ugly third party library that spawned them.
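To make that concrete, here's a minimal, entirely hypothetical sketch (Flask-style Python; the routes, helper, and field names are invented for illustration, not taken from any real client): one shared fetch helper, three SSRF vectors, and only the first is likely to surface from runtime poking.

    # Hypothetical app invented for this sketch: one vulnerable helper, many callers.
    import requests
    from flask import Flask, request

    app = Flask(__name__)
    stored_webhooks = []  # URLs registered by an earlier request

    def fetch_remote(url):
        # The systemic flaw: no scheme/host allow-listing, so every caller
        # that passes attacker-influenced input here is an SSRF vector.
        return requests.get(url, timeout=5).text

    @app.route("/preview")
    def preview():
        # Vector 1: the obvious one a runtime tester will likely find,
        # since the URL comes straight from a query parameter.
        return fetch_remote(request.args.get("url", ""))

    @app.route("/import", methods=["POST"])
    def import_feed():
        # Vector 2: only reachable with a crafted JSON body, so it's easy
        # to miss at runtime but trivial to spot by grepping the source
        # for callers of fetch_remote().
        body = request.get_json(force=True) or {}
        return fetch_remote(body.get("callback_url", ""))

    @app.route("/webhooks", methods=["POST"])
    def register_webhook():
        # Vector 3 is split across two requests: the URL is stored here...
        stored_webhooks.append(request.form.get("url", ""))
        return "registered"

    @app.route("/webhooks/refresh", methods=["POST"])
    def refresh_webhooks():
        # ...and only fetched later, at the end of a multi-step workflow.
        return "\n".join(fetch_remote(u) for u in stored_webhooks)

At runtime, the /preview vector practically announces itself; the other two require guessing a JSON field name or walking a two-step webhook flow. With source in hand, one grep for callers of the shared helper turns up all three, and points at the single function your devs actually need to fix.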

So why don't more folks do source-driven or source-assisted pentests?

Well, for one thing, a lot of pentesters out there can't read the source in the first place, so it wouldn't help them much. You need more seasoned people, typically with some dev experience, to get any value out of sharing your source code. The last thing you want is a mountain of crappy informational bugs that somebody lifted wholesale out of a source code scanner report, trust me.

And yes, of course, there are the IP concerns. I get that. Folks will tell us their source code contains trade secrets, it's sensitive, it's HIPAA protected, it's export controlled, it's buried underground and written on stone tablets, etc, etc. I don't see IP problems as particularly insurmountable. They can be odious, sure; we've flown overseas to look at code that couldn't leave a building, we've had devs scroll through source over Skype, sat in a locked room auditing source on client-provided systems with cameras on us, you name it. There are ways to give access to source in a controlled fashion, if source code leaving the building is a concern.

A third reason, related to the above, and I think the biggest one, is that it's more work for the testing team and for the client. The testing team has to add source review to their workflow, map the bugs they found at runtime back to source, and chase down bugs found in source to see if they're exploitable in the wild. On the client end, the client has to go wrangle devs for repo access (which the internal security team often doesn't even have themselves) and often has to figure out either how to get the testing team inside the corporate LAN, or how to schlep a 120MB tarball over to the tester.

It's far easier to just re-enable the "pentester1" and "pentester2" accounts from last year's test and get back to reading /r/agedlikemilk (which is pretty funny, to be fair). Besides, you're probably just going to rotate firms next year, so what's the point?

Seriously folks, if you have an in-house developed app, especially if it's part of your core business, you're wasting time on "black box" testing. Give your testing team full source access and full transparency, and they'll find bugs that have been missed by other teams doing runtime testing for years. I know this, because we do, every time a client takes us up on the request.

While I'm at it, let's dispense with the old saw that black box testing is so you can see "how long a real hacker would take to break this"... "Real" hackers get to take as long as they need to until they land a shell, plus they get to wear sunglasses, a hoodie and a ski mask while they do it. I have yet to have a client offer our team the luxury of unlimited time, and ski masks tend to make for awkward video calls with the CSO.

Revolving Door Pentesting

19 October 2018 at 04:07

I recently had a client ask me if it makes sense to rotate security testing firms. "It's something I've always done, but I'm not sure if it really works or not."

I said in my experience, it doesn't really work very well at all.

I run into it less now than I did ten years ago, but there are still quite a few folks out there using a different firm for each annual pentest, or who never use the same firm on the same target more than once and keep a rotating roster of firms in the hopper.

What blows my mind about the whole switch-vendors-every-year mentality is that it's built around the presumption that most pentesters are terrible (plausible, in some cases) and are only going to try hard when you're a new client. There's also a perception that there's no value in building an ongoing relationship with a firm, since everyone does the same things, in the same order, to the same target every time.

On any of our engagements, the first time we look at a given target, we have to ramp up and learn everything we can about it: what mistakes your developers are more prone to make, what misconfigurations you made in your EDR deployment, how to keep from knocking the staging environment offline, which sysadmin knows how to bring it back up. The list goes on and on.

The early phases of a new assessment for a new client are a lot like the first few days on the job for a new employee. You won't really see productive results until they've learned the ropes a bit and have a handle on how things work (and don't work) in your environment.

You need to keep working with a pentest firm once they've ramped up on your environment for the same reason you need to keep employees: they've learned valuable things that someone new would have to relearn, and that's a poor use of time and resources if you have a seasoned person on hand to do the job.

When they wrap that first gig, a good pentester is already thinking about different and better ways to go after the target next time.

On the other side of things, if you're rotating firms over and over, and you don't see any value in follow-on projects, maybe you're not investing in the relationship yourself. Heck, maybe you don't even want to, maybe you just want another annual rotated-firm rubber-stamp assessment to keep the auditors happy. Maybe you're even cynical enough to admit that if you let the same firm hit the same targets two years in a row somebody would finally figure out how to get past the WAF and then you'd have a lot more work to do.

I've had people proudly say to me, "we have new people hit this every year and they find the same bugs". What they don't seem to understand is that it also follows that if you use the same people again, they'll most likely find new bugs. Or, if you really need "fresh eyes", use different resources from a firm you already trust.

To me, the goal of pentesting is to push things forward, or it should be: to iteratively test and improve a little each time, both as attackers and defenders. The best way to do that is to get attackers and defenders collaborating, and a longstanding working relationship is a great way to make that happen.
