
CurveBall – An Unimaginative Pun but a Devastating Bug

18 January 2020 at 05:49

Enterprise customers looking for guidance on defending against CurveBall can find information here.

2020 came in with a bang this year, and it wasn’t from the record-setting number of fireworks on display around the world to celebrate the new year. Instead, just over two weeks into the decade, the security world was rocked by a fix for CVE-2020-0601 introduced in Microsoft’s first Patch Tuesday of the year. The bug was reported to Microsoft by the National Security Agency (NSA) and, though initially deemed only “important”, it didn’t take long for everyone to figure out that it fundamentally undermines the entire concept of trust that we rely on to secure web sites and validate files.

The vulnerability involves ECC (Elliptic Curve Cryptography), a very common method of digitally signing certificates, including both those embedded in files and those used to secure web pages. ECC combines mathematical values to produce a public and private key for trusted exchange of information. Ignoring the intimate details for now, ECC allows us to validate that files we open or web pages we visit have been signed by a well-known and trusted authority. If that trust is broken, malicious actors can “fake” signed files and web sites and make them look to the average person as if they were still trusted or legitimately signed.

The flaw lies in the Microsoft library crypt32.dll, which has two vulnerable functions. The bug is straightforward: these functions validate only the encrypted public key value, and NOT the parameters of the ECC curve itself. This means that if an attacker can find the right mathematical combination of private key and the corresponding curve, they can generate the identical public key value as the trusted certificate authority, whoever that is. And since this is the only value checked by the vulnerable functions, the “malicious” or invalid parameters will be ignored, and the certificate will pass the trust check.

As soon as we caught wind of the flaw, McAfee’s Advanced Threat Research team set out to create a working proof-of-concept (PoC) that would allow us to trigger the bug, and ultimately create protections across a wide range of our products to secure our customers. We were able to accomplish this in a matter of hours, and within a day or two there were the first signs of public PoCs as the vulnerability became better understood and researchers discovered the relative ease of exploitation.

Let’s pause for a moment to celebrate the fact that (conspiracy theories aside) government and private sector came together to report, patch and publicly disclose a vulnerability before it was exploited in the wild. We also want to call out Microsoft’s Active Protections Program, which provided some basic details on the vulnerability allowing cyber security practitioners to get a bit of a head start on analysis.

The following provides some basic technical details and a timeline of the work we did to analyze, reverse engineer and develop working exploits for the bug. This blog focuses primarily on the research efforts behind file signing certificates. For a more in-depth analysis of the web vector, please see this post.

Creating the proof-of-concept

The starting point for simulating an attack was to have a clear understanding of where the problem was. An attacker could forge an ECC root certificate with the same public key as a Microsoft ECC Root CA, such as the ECC Product Root Certificate Authority 2018, but with different “parameters”, and it would still be recognized as a trusted Microsoft CA. The API would use the public key to identify the certificate but fail to verify that the parameters provided matched the ones that should go with the trusted public key.

There have been many instances of cryptographic attacks that leveraged an API’s failure to validate parameters (such as these two). Hearing about invalid parameters should immediately raise a red flag.

To minimize effort, an important initial step is to find the right level of abstraction and the details we need to care about. The minimal public details on the bug mention the public key and curve parameters, and nothing about specific signature details, so reading about how public/private key pairs are generated in Elliptic Curve (EC) cryptography and how a curve is defined should be enough.

The first part of this Wikipedia article defines most of what we need to know. There’s a point G that’s on the curve and is used to generate another point. To create a pair of public/private keys, we take a random number k (the private key) and multiply it by G to get the public key (Q). So, we have Q = k*G. How this works doesn’t really matter for this purpose, so long as the scalar multiplication behaves as we’d expect. The idea here is that knowing Q and G, it’s hard to recover k, but knowing k and G, it’s very easy to compute Q.

Rephrasing this in the perspective of the bug, we want to find a new k’ (a new private key) with different parameters (a new curve, or maybe a new G) so that the ECC math gives the same Q back. The easiest solution is to consider a new generator G’ that is equal to our target public key (G’= Q). This way, with k’=1 (a private key equal to 1) we get k’G’ = Q which would satisfy the constraints (finding a new private key and keeping the same public key).
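
To make the math concrete, below is a minimal Python sketch on a tiny textbook curve (y^2 = x^3 + 2x + 2 over GF(17), generator (5, 1) of order 19, far too small for real use). Everything in it is an illustrative stand-in rather than the P-384 parameters or Microsoft’s actual verification code; it simply shows why checking the public key but not the curve parameters is fatal.

# Toy Weierstrass curve y^2 = x^3 + a*x + b over GF(p). Illustrative only.
p_mod, a, b = 17, 2, 2
G, n = (5, 1), 19              # generator and its (prime) order

def inv(x, m):
    return pow(x, -1, m)       # modular inverse (Python 3.8+)

def add(P, Q):
    # Affine point addition; None represents the point at infinity.
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p_mod == 0:
        return None
    lam = ((3 * x1 * x1 + a) * inv(2 * y1, p_mod) if P == Q
           else (y2 - y1) * inv(x2 - x1, p_mod)) % p_mod
    x3 = (lam * lam - x1 - x2) % p_mod
    return (x3, (lam * (x1 - x3) - y1) % p_mod)

def mul(k, P):
    # Double-and-add scalar multiplication k*P.
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

# The "CA" key pair: private d, public Q = d*G.
d = 7
Q = mul(d, G)

# The CurveBall trick: choose G' = Q and k' = 1, so k'*G' equals the trusted Q.
G_forged, d_forged = Q, 1
assert mul(d_forged, G_forged) == Q

# Toy ECDSA over the order-n group (h is a digest already reduced mod n).
def sign(h, priv, gen, k_rand):
    r = mul(k_rand, gen)[0] % n
    s = inv(k_rand, n) * (h + r * priv) % n
    return r, s

def verify(h, sig, pub, gen):
    r, s = sig
    w = inv(s, n)
    P = add(mul(h * w % n, gen), mul(r * w % n, pub))
    return P is not None and P[0] % n == r

h = 5
# An honest signature verifies under the real domain parameters...
assert verify(h, sign(h, d, G, k_rand=5), Q, G)
# ...but a verifier that accepts the certificate's own parameters, checking
# only that the public key matches the trusted Q, accepts the forgery too.
assert verify(h, sign(h, d_forged, G_forged, k_rand=3), Q, G_forged)
print("forged signature accepted against the trusted public key")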

The next step is to verify if we can actually specify a custom G’ while specifying the curve we want to use. Microsoft’s documentation is not especially clear about these details, but OpenSSL, one of the most common cryptography libraries, has a page describing how to generate EC key pairs and certificates. The following command shows the standard parameters of the P384 curve, the one used by the Microsoft ECC Root CA.

Elliptic Curve Parameter Values

We can see that one of the parameters is the Generator, so it seems possible to modify it.

Now we need to create a new key pair with explicit parameters (so all the parameters are contained in the key file, rather than just embedding the standard name of the curve) and modify them following our hypothesis. We replace the generator G’ with the Q from the Microsoft certificate, we replace the private key k’ with 1, and lastly, we replace the public key Q’ of the certificate we just generated with the Q of the Microsoft certificate.

To make sure our modification is functional, and the modified key is a valid one, we use OpenSSL to sign a text file and successfully verify its signature.

Signing a text file and verifying the signature using the modified key pair (k’=1, G’=Q, Q’=Q)

From there, we followed a couple of tutorials to create a signing certificate using OpenSSL and signed custom binaries with signtool. Eventually we were greeted with an executable that appeared to be signed with a valid certificate!

Spoofed/Forged Certificate Seemingly Signed by Microsoft ECC Root CA

Using Sysinternals’ sigcheck64.exe along with Rohitab’s API Monitor (which, ironically, is hosted on a site not using HTTPS) on an unpatched system with our PoC, we can clearly see the vulnerability in action in the return values of these functions.

Rohitab API Monitor – API Calls for Certificate Verification

Industry-wide vulnerabilities seem to be gaining critical mass and increasing visibility even to non-technical users. And, for once, the “cute” name for the vulnerability showed up relatively late in the process. Visibility is critical to progress, and an understanding and healthy respect for the potential impact are key factors in whether businesses and individuals quickly apply patches and dramatically reduce the threat vector. This is even more essential with a bug that is so easy to exploit, and likely to have an immediate exploitation impact in the wild.

McAfee Response

McAfee aggressively developed updates across its entire product line. Specific details can be found here.


The post CurveBall – An Unimaginative Pun but a Devastating Bug appeared first on McAfee Blog.

Introduction and Application of Model Hacking

19 February 2020 at 09:01

Catherine Huang, Ph.D., and Shivangee Trivedi contributed to this blog.

The term “Adversarial Machine Learning” (AML) is a mouthful!  The term describes a research field regarding the study and design of adversarial attacks targeting Artificial Intelligence (AI) models and features.  Even this simple definition can send the most knowledgeable security practitioner running!  We’ve coined the easier term “model hacking” to enhance the reader’s comprehension of this increasing threat.  In this blog, we will decipher this very important topic and provide examples of the real-world implications, including findings stemming from the combined efforts of McAfee’s Advanced Analytic Team (AAT) and Advanced Threat Research (ATR) for a critical threat in autonomous driving.

  1. First, the Basics

AI is interpreted by most markets to include Machine Learning (ML), Deep Learning (DL), and actual AI, and we will succumb to using this general term here.  Within AI, the model – a mathematical algorithm that provides insights to enable business results – can be attacked without knowledge of the actual model created.  Features are those characteristics of a model that define the desired output.  Features can also be attacked without knowledge of the features used!  What we have just described is known as a “black box” attack in AML – attacking without knowing the model and features – or “model hacking.”  Whether the models and/or features are known or unknown, attacks can increase false positives or false negatives, and these vulnerabilities will go unnoticed unless they are monitored and ultimately protected and corrected.

In the feedback learning loop of AI, the model is retrained regularly in order to comprehend new threats and keep the model current (see Figure 1).  With model hacking, the attacker can poison the Training Set.  However, the Test Set can also be hacked, causing false negatives to increase, evading the model’s intent and causing its decisions to be misclassified.  Simply by perturbing – changing the magnitudes of a few features (such as pixels for images), flipping zeros to ones or ones to zeros, or removing a few features – the attacker can wreak havoc in security operations with disastrous effects.  Hackers will continue to “ping” unobtrusively until they are rewarded with nefarious outcomes – and they don’t even have to attack with the same model that we are using initially!

Figure 1. The continuous feedback loop of AI learning.
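
To illustrate the training-set poisoning described above, here is a minimal, hypothetical scikit-learn sketch on synthetic data (not the datasets or models from our research). An attacker who can slip mislabeled samples into a retraining batch degrades the model that comes out the other side; real attacks choose the flips adversarially for a much stronger effect.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data standing in for a retraining batch; all values hypothetical.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, y_tr, X_te, y_te = X[:1400], y[:1400], X[1400:], y[1400:]

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Label-flip poisoning: mislabeled samples slipped into the training set
# during a retraining cycle (20% flipped at random here; real attacks pick
# the flips adversarially for a far stronger effect).
rng = np.random.default_rng(0)
flip = rng.choice(len(y_tr), size=len(y_tr) // 5, replace=False)
y_poisoned = y_tr.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print(f"clean model accuracy:    {clean.score(X_te, y_te):.3f}")
print(f"poisoned model accuracy: {poisoned.score(X_te, y_te):.3f}")
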
  2. Digital Attacks of Images and Malware

Hackers’ goals can be targeted (a specific set of features and one specific error class) or non-targeted (indiscriminate classifiers and more than one error class), and digital (e.g., images, audio) or physical (e.g., a speed limit sign).  Figure 2 shows a rockhopper penguin targeted digitally.  In this white-box evasion example (we knew the model and the features), a few pixel changes are enough for the poor penguin to be classified as a frying pan or a desktop computer with excellent confidence.

Figure 2. An evasion example of a white-box, targeted, and digital attack resulting in the penguin being detected as a desktop computer (85.54%) or a frying pan (93.07%) following pixel perturbations.

While most current model hacking research focuses on image recognition, we have investigated evasion attacks and mitigation methods for malware detection and static analysis.  We utilized DREBIN[1], an Android malware dataset, and replicated the results of Grosse, et al., 2016[2].  Utilizing roughly 120K benign samples and 5.5K malware samples, including 625 malware samples of the FakeInstaller family, we developed a four-layer deep neural network with about 1.5K features (see Figure 3).  However, following an evasion attack that modified fewer than 10 features, the malware evaded the neural net nearly 100% of the time.  This, of course, is a concern to all of us.


Figure 3. Metrics of the malware dataset and sample sizes.


Using the CleverHans[3] open-source library’s Jacobian Saliency Map Approach (JSMA) algorithm, we generated perturbations to create adversarial examples.  Adversarial examples are inputs to ML models that an attacker has intentionally designed to cause the model to make a mistake[4].  The JSMA algorithm requires only a minimal number of features to be modified.  Figure 4 demonstrates the original malware sample (detected as malware with 91% confidence).  After adding just two API calls in a white-box attack, the adversarial example is now detected as benign with 100% confidence. Obviously, that can be catastrophic!

Figure 4. Perturbations added to malware in the feature space resulting in a benign detection with 100% confidence.
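
Below is a deliberately simplified numpy sketch of the greedy idea behind JSMA for binary malware features. It is not the CleverHans implementation or our DREBIN model; it only shows how adding a handful of benign-looking features can walk a detection score below the decision threshold.

import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a malware detector: a logistic model over binary features.
# The weights and sample are synthetic; the real work used a four-layer DNN
# over ~1.5K DREBIN features.
n_features = 50
w = rng.normal(size=n_features)
bias = -1.0

def p_malware(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + bias)))

# Start from a sample the model confidently flags as malware.
x = np.zeros(n_features)
x[np.argsort(w)[-8:]] = 1.0
print(f"before: P(malware) = {p_malware(x):.2f}")

# JSMA-style greedy perturbation for binary features: only ADD features
# (0 -> 1, e.g., adding an API call), picking at each step the absent
# feature whose gradient most lowers the malware score. For a logistic
# model that gradient is proportional to w, so the best additions are the
# most negative weights.
budget = 10
for _ in range(budget):
    if p_malware(x) < 0.5:
        break
    off = np.flatnonzero(x == 0)
    best = off[np.argmin(w[off])]
    if w[best] >= 0:
        break
    x[best] = 1.0

print(f"after:  P(malware) = {p_malware(x):.2f} "
      f"({int(x.sum()) - 8} features added)")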


In 2016, Papernot[5] demonstrated that an attacker doesn’t need to know the exact model that is utilized in detecting malware.  Demonstrating this theory of transferability in Figure 5, the attacker constructed a source (or substitute) model based on a K-Nearest Neighbor (KNN) algorithm and used it to create adversarial examples, which then targeted a Support Vector Machine (SVM) algorithm.  The attack achieved an 82.16% success rate, ultimately proving that substitution and the transferability of adversarial examples from one model to another make black-box attacks not only possible, but highly successful.

Figure 5. Papernot’s[5] successful transferability of adversarial examples created from one model (K-Nearest Neighbor, or KNN) to attack another model (Support Vector Machine, or SVM).


In a black-box attack, the DREBIN Android malware dataset was initially detected as malware 92% of the time.  However, by using a substitute model and transferring the adversarial examples to the victim (i.e., target) system, we were able to reduce detection of the malware to nearly zero.  Another catastrophic example!

Figure 6. Demonstration of a black-box attack of DREBIN malware.
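
A minimal, hypothetical scikit-learn sketch of the substitute-model workflow on synthetic data (the real experiments used DREBIN and different model pairs): label inputs with the black-box victim, train a white-box substitute, craft perturbations against the substitute, and check whether they transfer to the victim.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Synthetic stand-in for a malware feature set (the real work used DREBIN).
X, y = make_classification(n_samples=2000, n_features=20, random_state=1)

# The victim is a black box: we only get to observe its predicted labels.
victim = SVC().fit(X[:1000], y[:1000])

# 1) Train a white-box substitute on inputs labeled BY THE VICTIM.
X_sub = X[1000:]
y_sub = victim.predict(X_sub)            # oracle access only
substitute = LogisticRegression(max_iter=1000).fit(X_sub, y_sub)

# 2) Craft adversarial examples against the substitute (FGSM-style step
#    against the gradient of the "malware" logit, which for a linear model
#    is just its coefficient vector).
X_test = X_sub[y_sub == 1]               # samples the victim calls malware
eps = 1.0
grad = substitute.coef_[0]
X_adv = X_test - eps * np.sign(grad)     # push toward the benign side

# 3) Transferability: perturbations crafted on the substitute also flip
#    the victim's verdicts.
print(f"victim 'malware' rate before: {victim.predict(X_test).mean():.0%}")
print(f"victim 'malware' rate after:  {victim.predict(X_adv).mean():.0%}")
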
  3. Physical Attack of Traffic Signs

While malware represents the most common artifact deployed by cybercriminals to attack victims, numerous other targets exist that pose equal or perhaps even greater threats. Over the last 18 months, we have studied what has increasingly become an industry research trend: digital and physical attacks on traffic signs. Research in this area dates back several years and has since been replicated and enhanced in numerous publications. We initially set out to reproduce one of the original papers on the topic, and built a highly robust classifier, using an RGB (Red Green Blue) webcam to classify stop signs from the LISA[6] traffic sign data set. The model performed exceptionally well, handling lighting, viewing angles, and sign obstruction. Over a period of several months, we developed model hacking code to cause both untargeted and targeted attacks on the sign, in both the digital and physical realms. Following on this success, we extended the attack vector to speed limit signs, recognizing that modern vehicles increasingly implement camera-based speed limit sign detection, not just as input to the Heads-Up-Display (HUD) on the vehicle, but in some cases, as input to the actual driving policy of the vehicle. Ultimately, we discovered that minuscule modifications to speed limit signs could allow an attacker to influence the autonomous driving features of the vehicle, controlling the speed of the adaptive cruise control! For more detail on this research, please refer to our extensive blog post on the topic.

  4. Detecting and Protecting Against Model Hacking

The good news is that much like classic software vulnerabilities, model hacking is possible to defend against, and the industry is taking advantage of this rare opportunity to address the threat before it becomes of real value to the adversary. Detecting and protecting against model hacking continues to develop with many articles published weekly.

Detection methods include ensuring that all software patches have been installed, closely monitoring drift of False Positives and False Negatives, noting cause and effect of having to change thresholds, retraining frequently, and auditing decay in the field (i.e., model reliability).  Explainable AI (“XAI”) is being examined in the research field for answering “why did this NN make the decision it did?” but can also be applied to small changes in prioritized features to assess potential model hacking.  In addition, human-machine teaming is critical to ensure that machines are not working autonomously and have oversight from humans-in-the-loop.  Machines currently do not understand context; however, humans do and can consider all possible root causes and mitigations of a nearly imperceptible shift in metrics.
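
As a concrete example of monitoring FP/FN drift, here is a minimal sketch in which every threshold and rate is hypothetical; it compares each scoring batch against baseline error rates and alerts on a sharp shift.

import numpy as np

rng = np.random.default_rng(7)

def fp_fn_rates(y_true, y_pred):
    # False positive / false negative rates for one batch of verdicts.
    fp = np.mean(y_pred[y_true == 0] == 1)
    fn = np.mean(y_pred[y_true == 1] == 0)
    return fp, fn

# Baseline rates measured at deployment time (hypothetical values).
baseline_fp, baseline_fn = 0.02, 0.05
ALERT_FACTOR = 2.0        # flag when a rate doubles vs. the baseline

def check_batch(y_true, y_pred):
    fp, fn = fp_fn_rates(y_true, y_pred)
    if fp > ALERT_FACTOR * baseline_fp or fn > ALERT_FACTOR * baseline_fn:
        print(f"ALERT: FP={fp:.3f} FN={fn:.3f} drifted from baseline "
              f"(FP={baseline_fp}, FN={baseline_fn}); investigate possible "
              "model hacking, data drift, or model decay")

# Simulated weekly batch where false negatives have crept up (e.g., an
# attacker steadily perturbing samples until they evade detection).
y_true = rng.integers(0, 2, 1000)
y_pred = y_true.copy()
evade = (y_true == 1) & (rng.random(1000) < 0.15)  # 15% of malware missed
y_pred[evade] = 0
check_batch(y_true, y_pred)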

Protection methods commonly employed include many analytic solutions: Feature Squeezing and Reduction, Distillation, adding noise, Multiple Classifier System, Reject on Negative Impact (RONI), and many others, including combinatorial solutions.  There are pros and cons of each method, and the reader is encouraged to consider their specific ecosystem and security metrics to select the appropriate method.
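
Of the methods listed, feature squeezing is the simplest to sketch: quantize the input, rescore it, and flag samples whose prediction changes sharply, since adversarial perturbations often hide in precision the task does not need. A minimal, hypothetical example:

import numpy as np

def squeeze(x, bits=3):
    # Feature squeezing: quantize each input feature to 2**bits levels.
    levels = 2 ** bits - 1
    return np.round(np.clip(x, 0.0, 1.0) * levels) / levels

def is_suspicious(model_prob, x, threshold=0.2):
    # Flag an input whose score moves sharply once precision is removed;
    # adversarial perturbations often live in the low-order bits.
    return abs(model_prob(x) - model_prob(squeeze(x))) > threshold

# Stand-in model with hypothetical weights (any scoring function works).
w = np.linspace(-1.0, 1.0, 16)
model = lambda x: 1.0 / (1.0 + np.exp(-(x @ w)))

x_clean = np.full(16, 0.5)
print(is_suspicious(model, x_clean))  # expect False: squeezing barely moves it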

  5. Model Hacking Threats and Ongoing Research

While there has been no documented report of model hacking in the wild yet, it is notable to see the increase of research over the past few years: from fewer than 50 literature articles in 2014 to over 1,500 in 2020.  And it would be ignorant of us to assume that sophisticated hackers aren’t reading this literature.  It is also notable that, perhaps for the first time in cybersecurity, a body of researchers has proactively developed the attack, detection, and protection against these unique vulnerabilities.

We will continue to add to the greater body of knowledge of model hacking attacks as well as ensure the solutions we implement have built-in detection and protection.  Our research excels in targeting the latest algorithms, such as GANs (Generative Adversarial Networks), in malware detection, facial recognition, and image libraries.  We are also in the process of transferring our traffic sign model hacking to further real-world examples.

Lastly, we believe McAfee leads the security industry in this critical area. One aspect that sets McAfee apart is the unique relationship and cross-team collaboration between ATR and AAT. Each leverages its unique skillsets; ATR with in-depth and leading-edge security research capabilities, and AAT, through its world-class data analytics and artificial intelligence expertise. When combined, these teams are able to do something few can; predict, research, analyze and defend against threats in an emerging attack vector with unique components, before malicious actors have even begun to understand or weaponize the threat.

For further reading, please see any of the references cited, or “Introduction to Adversarial Machine Learning” at https://mascherari.press/introduction-to-adversarial-machine-learning/


[1] Courtesy of Technische Universitat Braunschweig.

[2] Grosse, Kathrin, Nicolas Papernot, et al. “Adversarial Perturbations Against Deep Neural Networks for Malware Classification.” Cornell University Library, 16 Jun 2016.

[3] Cleverhans: An adversarial example library for constructing attacks, building defenses, and benchmarking both located at https://github.com/tensorflow/cleverhans.

[4] Goodfellow, Ian, et al. “Generative Adversarial Nets” https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf.

[5] Papernot, Nicholas, et al. “Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples”  https://arxiv.org/abs/1605.07277.

[6] LISA = Laboratory for Intelligent and Safe Automobiles

The post Introduction and Application of Model Hacking appeared first on McAfee Blog.

Model Hacking ADAS to Pave Safer Roads for Autonomous Vehicles

19 February 2020 at 09:01

The last several years have been fascinating for those of us who have been eagerly observing the steady move towards autonomous driving. While semi-autonomous vehicles have existed for many years, the vision of fleets of fully autonomous vehicles operating as a single connected entity is very much still a thing of the future. However, the latest technical advances in this area bring us a unique and compelling picture of some of the capabilities we might expect to see “down the road.” Pun intended.

For example, nearly every new vehicle produced in 2019 has a model which implements state-of-the-art sensors that utilize analytics technologies, such as machine learning or artificial intelligence, and are designed to automate, assist or replace many of the functions humans were formerly responsible for. These can range from rain sensors on the windshield that control the wiper blades, to object detection sensors using radar and lidar for collision avoidance, to camera systems capable of recognizing objects in range and providing direct driving input to the vehicle.

This broad adoption represents a fascinating development in our industry; it’s one of those very rare times when researchers can lead the curve ahead of adversaries in identifying weaknesses in underlying systems.

McAfee Advanced Threat Research (ATR) has a specific goal: identify and illuminate a broad spectrum of threats in today’s complex landscape. With model hacking, the study of how adversaries could target and evade artificial intelligence, we have an incredible opportunity to influence the awareness, understanding and development of more secure technologies before they are implemented in a way that has real value to the adversary.

With this in mind, we decided to focus our efforts on the broadly deployed MobilEye camera system, today utilized across over 40 million vehicles, including Tesla models that implement Hardware Pack 1.

18 Months of Research

McAfee Advanced Threat Research follows a responsible disclosure policy, as stated on our website. As such, we disclosed the findings below to both Tesla and MobilEye 90 days prior to public disclosure. McAfee disclosed the findings to Tesla on September 27th, 2019 and MobilEye on October 3rd, 2019. Both vendors indicated interest and were grateful for the research but have not expressed any current plans to address the issue on the existing platform. MobilEye did indicate that the more recent version(s) of the camera system address these use cases.

MobilEye is one of the leading vendors of Advanced Driver Assist Systems (ADAS), catering to some of the world’s most advanced automotive companies. Tesla, on the other hand, is a name synonymous with ground-breaking innovation, providing the world with innovative and eco-friendly smart cars.


MobilEye camera sensor
A table showing MobilEye’s EyeQ3 being used in Tesla’s hardware pack 1.

As we briefly mention above, McAfee Advanced Threat Research has been studying what we call “Model Hacking,” also known in the industry as adversarial machine learning. Model Hacking is the concept of exploiting weaknesses universally present in machine learning algorithms to achieve adverse results. We do this to identify the upcoming problems in an industry that is evolving technology at a pace that security has not kept up with.

We started our journey into the world of model hacking by replicating industry papers on methods of attacking machine learning image classifier systems used in autonomous vehicles, with a focus on causing misclassifications of traffic signs. We were able to reproduce and significantly expand upon previous research focused on stop signs, including both targeted attacks, which aim for a specific misclassification, as well as untargeted attacks, which don’t prescribe what an image is misclassified as, just that it is misclassified. Ultimately, we were successful in creating extremely efficient digital attacks which could cause misclassifications of a highly robust classifier, built to determine with high precision and accuracy what it is looking at, approaching 100% confidence.

Targeted digital white-box attack on stop sign, causing custom traffic sign classifier to misclassify as 35-mph speed sign
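
For readers who want the flavor of these digital attacks, here is a toy numpy illustration of the untargeted versus targeted distinction using the fast gradient sign method on a linear softmax model. It is a hypothetical stand-in, not our attack code or the robust classifier we actually built.

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical linear softmax "sign classifier" over a flattened image; the
# real target was a far more robust classifier trained on LISA signs.
labels = ["stop", "35-mph", "added-lane"]
W = rng.normal(size=(3, 64))

def probs(x):
    z = W @ x
    e = np.exp(z - z.max())
    return e / e.sum()

x = rng.random(64)                    # stand-in "sign" image in [0, 1]
true = int(np.argmax(probs(x)))       # treat the current top class as truth
eps = 0.2                             # perturbation budget per pixel

# Untargeted FGSM: climb the loss of the true class; ANY wrong label wins.
# For a linear softmax, d(-log p_true)/dx = (p - onehot_true) @ W.
g_un = (probs(x) - np.eye(3)[true]) @ W
x_untargeted = np.clip(x + eps * np.sign(g_un), 0, 1)

# Targeted FGSM: descend the loss of a CHOSEN class instead.
target = (true + 1) % 3
g_t = (probs(x) - np.eye(3)[target]) @ W
x_targeted = np.clip(x - eps * np.sign(g_t), 0, 1)

for name, xi in [("original", x), ("untargeted", x_untargeted),
                 ("targeted", x_targeted)]:
    print(f"{name:>10}: {labels[int(np.argmax(probs(xi)))]}")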

We further expanded our efforts to create physical stickers, shown below, that model the same type of perturbations, or digital changes to the original photo, which trigger weaknesses in the classifier and cause it to misclassify the target image.

Targeted physical white-box attack on stop sign, causing custom traffic sign classifier to misclassify the stop sign as an added lane sign

This set of stickers has been specifically created with the right combination of color, size and location on the target sign to cause a robust webcam-based image classifier to think it is looking at an “Added Lane” sign instead of a stop sign.

Video demo of our resilient classifier in the lab which correctly recognizes the 35-mph speed limit sign, even when it is partially obstructed

In reality, modern vehicles don’t yet rely on stop signs to enable any kind of autonomous features such as applying the brakes, so we decided to alter our approach and shift (pun intended) over to speed limit signs. We knew, for example, that the MobilEye camera is used by some vehicles to determine the speed limit, display it on the heads-up display (HUD), and potentially even feed that speed limit to certain features of the car related to autonomous driving. We’ll come back to that!

We then repeated the stop sign experiments on speed limit signs, using a highly robust classifier and our trusty high-resolution webcam. And just to show how robust our classifier is, we can make many changes to the sign – block it partially, place the stickers in random locations – and the classifier does an outstanding job of correctly predicting the true sign, as demonstrated in the video above. While there were many obstacles to achieving the same success, we were ultimately able to prove both targeted and untargeted attacks, digitally and physically, against speed limit signs. The below images highlight a few of those tests.

Example of targeted digital perturbations printed out using a black and white printer which cause a misclassification of 35-mph speed sign to 45-mph speed sign.

At this point, you might be wondering “what’s so special about tricking a webcam into misclassifying a speed limit sign, outside of just the cool factor?” Not much, really. We felt the same, and decided it was time to test the “black box theory.”

What this means, in its most simple form, is that attacks leveraging model hacking which are trained and executed against white-box, or open source, systems will successfully transfer to black-box, or fully closed and proprietary, systems, so long as the features and properties of the attack are similar enough. For example, if one system is relying on the specific numeric values of the pixels of an image to classify it, the attack should replicate on another camera system that relies on pixel-based features as well.

The last part of our lab-based testing involved simplifying this attack and applying it to a real-world target. We wondered: was the MobilEye camera as robust as the webcam-based classifier we had built in the lab? Would it truly require several highly specific, and easily noticeable, stickers to cause a misclassification? Thanks to several friendly office employees, we were able to run repeated tests on a 2016 Model “S” and a 2016 Model “X” Tesla using the MobilEye camera (Tesla’s hardware pack 1 with the MobilEye EyeQ3 chip). The first test involved simply attempting to recreate the physical sticker test – and it worked, almost immediately and with a high rate of reproducibility.

In our lab tests, we had developed attacks that were resistant to changes in angle, lighting and even reflectivity, knowing this would emulate real-world conditions. While these weren’t perfect, our results were relatively consistent in getting the MobilEye camera to think it was looking at a different speed limit sign than it was. The next step in our testing was to reduce the number of stickers to determine the point at which they failed to cause a misclassification. As we did so, we realized that the HUD continued to misclassify the speed limit sign. We continued reducing the stickers, from four adversarial stickers in the only locations possible to confuse our webcam, all the way down to a single piece of black electrical tape, approximately 2 inches long, extending the middle of the “3” on the traffic sign.

A robust, inconspicuous black sticker achieves a misclassification from the Tesla Model S, used for Speed Assist when activating TACC (Traffic-Aware Cruise Control)

Even to a trained eye, this hardly looks suspicious or malicious, and many who saw it didn’t realize the sign had been altered at all. This tiny piece of tape was all it took to make the MobilEye camera’s top prediction for the sign be 85 mph.


The finish line was close (last pun…probably).

Finally, we began to investigate whether any of the features of the camera sensor might directly affect any of the mechanical, and even more relevant, autonomous features of the car. After extensive study, we came across a forum referencing the fact that a feature known as Traffic-Aware Cruise Control (TACC) could use speed limit signs as input to set the vehicle speed.

There was majority consensus among owners that this might be a supported feature. It was also clear that there was confusion among forum members as to whether this capability was possible, so our next step was to verify it by consulting Tesla software updates and new feature releases.

A software release for TACC contained just enough information to point us towards Speed Assist, with the following statement under the Traffic-Aware Cruise Control feature description.

“You can now immediately adjust your set speed to the speed determined by Speed Assist.”

This took us down our final documentation-searching rabbit hole; Speed Assist, a feature quietly rolled out by Tesla in 2014.

Finally! We can now add these all up to surmise that it might be possible, for Tesla models enabled with Speed Assist (SA) and Traffic-Aware Cruise Control (TACC), to use our simple modification to a traffic sign to cause the car to increase speed on its own!

Despite being confident this was theoretically possible, we decided to simply run some tests to see for ourselves.

McAfee ATR’s lead researcher on the project, Shivangee Trivedi, partnered with another of our vulnerability researchers, Mark Bereza, who just so happened to own a Tesla that exhibited all of these features. Thanks, Mark!

For an exhaustive look at the number of tests, conditions, and equipment used to replicate and verify misclassification on this target, we have published our test matrix here.

The ultimate finding here is that we were able to achieve the original goal. By making a tiny sticker-based modification to our speed limit sign, we were able to cause a targeted misclassification of the MobilEye camera on a Tesla and use it to cause the vehicle to autonomously speed up to 85 mph when reading a 35-mph sign. For safety reasons, the video demonstration shows the speed start to spike and TACC accelerate on its way to 85, but given our test conditions, we apply the brakes well before it reaches target speed. It is worth noting that this is seemingly only possible during the initial engagement of TACC, when the driver double taps the lever. If the misclassification is successful, the autopilot engages 100% of the time. This quick demo video shows all these concepts coming together.

Of note is that all these findings were tested against earlier versions (Tesla hardware pack 1, mobilEye version EyeQ3) of the MobilEye camera platform. We did get access to a 2020 vehicle implementing the latest version of the MobilEye camera and were pleased to see it did not appear to be susceptible to this attack vector or misclassification, though our testing was very limited. We’re thrilled to see that MobilEye appears to have embraced the community of researchers working to solve this issue and are working to improve the resilience of their product. Still, it will be quite some time before the latest MobilEye camera platform is widely deployed. The vulnerable version of the camera continues to account for a sizeable installation base among Tesla vehicles. The newest models of Tesla vehicles do not implement MobilEye technology any longer, and do not currently appear to support traffic sign recognition at all.

Looking Forward

We feel it is important to close this blog with a reality check. Is there a feasible scenario where an adversary could leverage this type of an attack to cause harm? Yes, but in reality, this work is highly academic at this time. Still, it represents some of the most important work we as an industry can focus on to get ahead of the problem. If vendors and researchers can work together to identify and solve these problems in advance, it would truly be an incredible win for us all. We’ll leave you with this:

In order to drive success in this key industry and shift the perception that machine learning systems are secure, we need to accelerate discussions and awareness of the problems and steer the direction and development of next-generation technologies. Puns intended.


The post Model Hacking ADAS to Pave Safer Roads for Autonomous Vehicles appeared first on McAfee Blog.

What’s in the Box? Part II: Hacking the iParcelBox

18 June 2020 at 07:01

Package delivery is just one of those things we take for granted these days. This is especially true in the age of Coronavirus, where e-commerce and at-home deliveries make up a growing portion of consumer buying habits.

In 2019, McAfee Advanced Threat Research (ATR) conducted a vulnerability research project on a secure home package delivery product, known as BoxLock. The corresponding blog can be found here and highlights a vulnerability we found in the Bluetooth Low Energy (BLE) configuration used by the device. Ultimately, the flaw allowed us to unlock any BoxLock in Bluetooth range with a standard app from the Apple or Google store.

Shortly after we released this blog, a similar product company based in the UK reached out to the primary researcher (Sam Quinn) here at McAfee ATR, requesting that the team perform research analysis on its product, called the iParcelBox. This device consists of a secure steel container with a push-button on the outside, allowing package couriers to request access to the delivery container with a simple button press, notifying the homeowner via the app and allowing remote open/close functions.

iParcelBox – Secure Package Delivery & iParcelBox App

The researcher was able to take a unique spin on this project by performing OSINT (Open Source Intelligence), which is the practice of using publicly available information, often unintentionally publicized, to compromise a device, system or user. In this case, the primary developer for the product wasn’t practicing secure data hygiene for his online posts, which allowed the researcher to discover information that dramatically shortened what would have been a much more complicated project. He discovered administrative credentials and corresponding internal design and configurations, effectively providing the team access to any and all iParcelBox devices worldwide, including the ability to unlock any device at a whim. All test cases were executed on lab devices owned by the team or approved by iParcelBox. Further details of the entire research process can be found in the full technical version of the research blog here.

The actual internals of the system were well-designed from a security perspective, utilizing concepts like SSL for encryption, disabling hardware debugging, and performing proper authentication checks. Unfortunately, this level of design and security was undermined by the simple fact that credentials were not properly protected online. Armed with these credentials, the researcher was able to extract sensitive certificates, keys, device passwords, and Wi-Fi passwords from any iParcelBox.

Secondly, as the researcher prepared the writeup on the OSINT techniques used, he made a further discovery. When analyzing the configuration used by the Android app to interact with the cloud-based IoT framework (AWS IoT), he found that even without an administrative password, he could leak plaintext temporary credentials for querying the AWS database. These credentials had a permission misconfiguration which allowed the researcher to query all the information about any iParcelBox device and to become its primary owner.

In both cases, to target a device, an attacker would need to know the MAC address of the victim’s iParcelBox; however, the iParcelBox MAC addresses appeared to be generated non-randomly and were simple to guess.

A typical research effort for McAfee ATR involves complex hardware analysis, reverse engineering, exploit development and much more. While the developer made some high-level mistakes regarding configuration and data hygiene, we want to take a moment to recognize the level of effort put into both physical and digital security. iParcelBox implemented numerous security concepts that are uncommon for IoT devices and significantly raise the bar for attackers. It’s much easier to fix issues like leaked passwords or basic configuration mistakes than to rebuild hardware or reprogram software to bolt on security after the fact. This may be why the company was able to fix both issues almost immediately after we informed them in March of 2020. We’re thrilled to see more and more companies of all sizes embracing the security research community and collaborating quickly to improve their products, even from the beginning of the development cycle.

What can be done?

For consumers:

Even developers are subject to the same issues we all have: choosing secure and complex passwords, protecting important credentials, practicing security hygiene, and choosing secure configurations when implementing controls for a device. As always, we encourage you to evaluate the vendor’s approach to security. Do they embrace and encourage vulnerability research on their products? How quick are they to implement fixes, and are they done correctly? Nearly every product on the market will have security flaws if you look hard enough, but the way they are handled is arguably more important than the flaws themselves.

For developers and vendors:

This case study should provide a valuable testament to the power of community. Don’t be afraid to engage security researchers and embrace the discovery of vulnerabilities. The more critical the finding, the better! Work with researchers or research companies that practice responsible disclosure, such as McAfee ATR. Additionally, it can be easy to overlook the simple things such as the unintentional leak of critical data found during this project. A security checklist should include both complex and simple steps to ensure the product maintains proper security controls and essential data is protected and periodically audited.

The post What’s in the Box? Part II: Hacking the iParcelBox appeared first on McAfee Blog.

Major HTTP Vulnerability in Windows Could Lead to Wormable Exploit

12 May 2021 at 15:48

Today, Microsoft disclosed and patched a highly critical vulnerability (CVE-2021-31166) in its web server component http.sys. This component is a Windows-only HTTP server which can run standalone or in conjunction with IIS (Internet Information Services) and is used to broker internet traffic via HTTP network requests. The vulnerability is very similar to CVE-2015-1635, another Microsoft vulnerability in the HTTP network stack reported in 2015.

With a CVSS score of 9.8, the vulnerability announced is both directly impactful and exceptionally simple to exploit, leading to a remote and unauthenticated denial of service (Blue Screen of Death) for affected products.

The issue is due to Windows improperly tracking pointers while processing objects in network packets containing HTTP requests. As HTTP.SYS is implemented as a kernel driver, exploitation of this bug will result in at least a Blue Screen of Death (BSoD), and in the worst-case scenario, remote code execution, which could be wormable. While this vulnerability is exceptional in terms of potential impact and ease of exploitation, it remains to be seen whether effective code execution will be achieved. Furthermore, this vulnerability only affects the latest versions of Windows 10 and Windows Server (2004 and 20H2), meaning that the exposure for internet-facing enterprise servers is fairly limited, as many of these systems run Long Term Servicing Channel (LTSC) versions, such as Windows Server 2016 and 2019, which are not susceptible to this flaw.

At the time of this writing, we are unaware of any “in-the-wild” exploitation for CVE-2021-31166 but will continue to monitor the threat landscape and provide relevant updates. We urge Windows users to apply the patch immediately wherever possible, giving special attention to externally facing devices that could be compromised from the internet. For those who are unable to apply Microsoft’s update, we are providing a “virtual patch” in the form of a network IPS signature that can be used to detect and prevent exploitation attempts for this vulnerability.

McAfee Network Security Platform (NSP) Protection
Sigset Version: 10.8.21.2
Attack ID: 0x4528f000
Attack Name: HTTP: Microsoft HTTP Protocol Stack Remote Code Execution Vulnerability (CVE-2021-31166)

McAfee Knowledge Base Article KB94510:
https://kc.mcafee.com/corporate/index?page=content&id=KB94510


The post Major HTTP Vulnerability in Windows Could Lead to Wormable Exploit appeared first on McAfee Blog.
