
Google prevented 2.28 million policy-violating apps from being published on Google Play in 2023

29 April 2024 at 20:24

Google announced that it prevented 2.28 million policy-violating apps from being published on the official Google Play store in 2023.

Google announced that in 2023 it prevented 2.28 million policy-violating apps from being published on Google Play. The company attributes this result to enhanced security features, policy updates, and advanced machine learning and app review processes.

Additionally, Google Play strengthened its developer onboarding and review procedures, requiring more thorough identity verification during account setup. These efforts resulted in the ban of 333,000 accounts for confirmed malware and repeated severe policy violations.

Google also rejected or remediated approximately 200,000 app submissions to ensure proper use of sensitive permissions such as background location or SMS access. The company worked closely with SDK providers to protect users’ privacy and prevent sensitive data access and sharing: more than 31 SDKs improved their privacy posture, impacting over 790,000 apps.
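To illustrate what a “sensitive permission” looks like in practice, the sketch below (not Google’s actual review tooling) shows the kind of automated check a submission pipeline could run: it parses an app’s AndroidManifest.xml and flags declarations of background location or SMS permissions that would require extra justification. The manifest path and the list of flagged permissions are assumptions made for the example.

```python
# Illustrative sketch only: flag "sensitive" Android permission declarations
# (background location, SMS) in a manifest. This is not Google's review
# tooling; the manifest path and permission list are assumptions.
import xml.etree.ElementTree as ET

ANDROID_NAME_ATTR = "{http://schemas.android.com/apk/res/android}name"
SENSITIVE_PERMISSIONS = {
    "android.permission.ACCESS_BACKGROUND_LOCATION",
    "android.permission.READ_SMS",
    "android.permission.SEND_SMS",
}

def sensitive_permissions_declared(manifest_path: str) -> set:
    """Return the sensitive permissions declared in an AndroidManifest.xml."""
    root = ET.parse(manifest_path).getroot()
    declared = {
        elem.get(ANDROID_NAME_ATTR, "")
        for elem in root.findall("uses-permission")
    }
    return declared & SENSITIVE_PERMISSIONS

if __name__ == "__main__":
    flagged = sensitive_permissions_declared("AndroidManifest.xml")  # hypothetical path
    if flagged:
        print("Submission needs justification for:", ", ".join(sorted(flagged)))
```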

“We also significantly expanded the Google Play SDK Index, which now covers the SDKs used in almost 6 million apps across the Android ecosystem.” states Google. “This valuable resource helps developers make better SDK choices, boosts app quality and minimizes integration risks.”

Google continues to work on improving the Android ecosystem. In November 2023, it moved the App Defense Alliance (ADA) under the umbrella of the Linux Foundation, with Meta, Microsoft, and Google as founding steering members. The Alliance encourages widespread adoption of best practices and guidelines for app security across the industry, while also developing countermeasures to address emerging security threats.

Google also enhanced Google Play Protect’s security capabilities to provide stronger protection for users installing apps from outside the Play Store, implementing real-time, code-level scanning to detect new malicious apps. The company revealed that this measure has already identified over 5 million new malicious apps outside the Play Store, improving security for Android users worldwide.

Pierluigi Paganini

Follow me on Twitter: @securityaffairs and Facebook and Mastodon

(SecurityAffairs – hacking, Google Play)


The FCC imposes $200 million in fines on four US carriers for unlawfully sharing user location data

30 April 2024 at 05:36

The Federal Communications Commission (FCC) fined the largest U.S. wireless carriers nearly $200 million for sharing customers’ real-time location data without their consent.

The FCC has fined four major U.S. wireless carriers nearly $200 million for unlawfully selling access to real-time location data of their customers without consent. The fines come as a result of the Notices of Apparent Liability (NAL) issued by the FCC against AT&T, Sprint, T-Mobile, and Verizon in February 2020.

T-Mobile faces a proposed fine exceeding $91 million, AT&T one of more than $57 million, Verizon one of more than $48 million, and Sprint one of more than $12 million.

“The Federal Communications Commission today proposed fines against the nation’s four largest wireless carriers for apparently selling access to their customers’ location information without taking reasonable measures to protect against unauthorized access to that information.” reads the announcement published by the FCC. “As a result, T-Mobile faces a proposed fine of more than $91 million; AT&T faces a proposed fine of more than $57 million; Verizon faces a proposed fine of more than $48 million; and Sprint faces a proposed fine of more than $12 million. The FCC also admonished these carriers for apparently disclosing their customers’ location information, without their authorization, to a third party.”

The FCC’s Enforcement Bureau launched an investigation after Missouri Sheriff Cory Hutcheson misused a “location-finding service” provided by Securus, a communications service provider for correctional facilities, to access the location data of wireless carrier customers without their consent from 2014 to 2017. Hutcheson allegedly provided irrelevant documents, such as health insurance and auto insurance policies, along with pages from sheriff training manuals, as evidence of authorization to access the data.

The FCC added that the carriers continued to sell access to customers’ location information and did not sufficiently guard it from further unauthorized access even after discovering these irregular practices.

All four carriers condemned the FCC’s decision and announced they would appeal it.

The Communications Act requires telecommunications carriers to safeguard the confidentiality of certain customer data relating to telecommunications services, including location information. Carriers must adopt reasonable measures to prevent unauthorized access to that data, and they or their representatives must generally obtain explicit customer consent before using, disclosing, or permitting access to it. Carriers remain responsible for the actions of their representatives in this regard.

Pierluigi Paganini

Follow me on Twitter: @securityaffairs and Facebook and Mastodon

(SecurityAffairs – hacking, Federal Communications Commission)

NCSC: New UK law bans default passwords on smart devices

30 April 2024 at 07:23

The UK National Cyber Security Centre (NCSC) is urging smart device manufacturers to comply with new legislation banning default passwords, effective April 29, 2024.

The U.K. National Cyber Security Centre (NCSC) is urging manufacturers of smart devices to comply with new legislation that bans default passwords.

The law, known as the Product Security and Telecommunications Infrastructure act (or PSTI act), took effect on April 29, 2024.

“From 29 April 2024, manufacturers of consumer ‘smart’ devices must comply with new UK law.” reads the announcement published by NCSC. “The law, known as the Product Security and Telecommunications Infrastructure act (or PSTI act), will help consumers to choose smart devices that have been designed to provide ongoing protection against cyber attacks.”

The U.K. is the first country in the world to ban default credentials on IoT devices.

The law prohibits manufacturers from supplying devices with default passwords, which are often easy to find online and can be shared widely.

The law applies to the following products:

  • Smart speakers, smart TVs, and streaming devices
  • Smart doorbells, baby monitors, and security cameras
  • Cellular tablets, smartphones, and game consoles
  • Wearable fitness trackers (including smart watches)
  • Smart domestic appliances (such as light bulbs, plugs, kettles, thermostats, ovens, fridges, cleaners, and washing machines)

Threat actors could exploit these default passwords to access a local network or launch cyber attacks.
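As a rough illustration of what the requirement implies for manufacturers (this is a hypothetical sketch, not something taken from the PSTI act or NCSC guidance), a pre-ship test could verify that a device’s factory configuration does not contain an empty or well-known default password. The credential list and the configuration format below are assumptions made for the example.

```python
# Hypothetical pre-ship compliance check: reject factory configurations that
# ship with an empty or well-known default password. The password list and
# the config format are assumptions made for this example.
KNOWN_DEFAULT_PASSWORDS = {"admin", "password", "12345", "123456", "default", "root"}

def violates_default_password_rule(factory_config: dict) -> bool:
    """Return True if the factory-set admin password is empty or a known default."""
    password = factory_config.get("admin_password", "")
    return password == "" or password.lower() in KNOWN_DEFAULT_PASSWORDS

if __name__ == "__main__":
    sample = {"admin_password": "admin"}  # hypothetical device configuration
    print(violates_default_password_rule(sample))  # True: this device would fail the check
```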

Manufacturers are obliged to designate a contact point for reporting security issues and must specify the minimum duration for which the device will receive crucial security updates.

The NCSC clarified that the PSTI act also applies to organizations importing or retailing products for the UK market, including most smart devices manufactured outside the UK. Manufacturers that fail to comply with the act face fines of up to £10 million or 4% of qualifying worldwide revenue.

Pierluigi Paganini

Follow me on Twitter: @securityaffairs and Facebook and Mastodon

(SecurityAffairs – hacking, smart device manufacturers)

CISA guidelines to protect critical infrastructure against AI-based threats

30 April 2024 at 17:23

The US government’s cybersecurity agency CISA published a series of guidelines to protect critical infrastructure against AI-based attacks.

CISA collaborated with Sector Risk Management Agencies (SRMAs) and regulatory agencies to conduct sector-specific assessments of AI risks to U.S. critical infrastructure, as mandated by Executive Order 14110 Section 4.3(a)(i). The analysis grouped AI risks into three categories:

  • Attacks Using AI;
  • Attacks Targeting AI Systems;
  • Failures in AI Design and Implementation.

AI risk management for critical infrastructure is an ongoing process throughout the AI lifecycle.

These guidelines integrate the AI Risk Management Framework into enterprise risk management programs for critical infrastructure. The AI RMF Core consists of the Govern, Map, Measure, and Manage functions.

The Govern function within the AI RMF establishes an organizational approach to AI risk management within existing Enterprise Risk Management (ERM). Recommended actions for addressing risks throughout the AI lifecycle are integrated into the Map, Measure, and Manage functions. These guidelines build on the AI safety and security risk management practices proposed in the NIST AI RMF.

CISA highlights that the risks are context-dependent, which means critical infrastructure operators should consider sector-specific and context-specific factors when assessing and mitigating AI risks. Specific sectors may need to define their own tailored guidelines for managing AI risk, and stakeholders may focus on different aspects of the AI lifecycle depending on their sector or role, whether they are involved in the design, development, procurement, deployment, operation, management, maintenance, or retirement of AI systems.
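As a purely illustrative sketch of how such tailoring might look in practice (the data structures and the sample entry below are hypothetical, not taken from the guidelines), an operator could record each AI system in a risk register against the three risk categories above and track mitigations per AI RMF Core function:

```python
# Illustrative only: model the three risk categories and the four AI RMF Core
# functions, and track mitigations for a hypothetical AI system. The dataclass,
# field names, and sample entry are assumptions made for this example.
from dataclasses import dataclass, field
from enum import Enum

class RiskCategory(Enum):
    ATTACKS_USING_AI = "Attacks Using AI"
    ATTACKS_TARGETING_AI_SYSTEMS = "Attacks Targeting AI Systems"
    AI_DESIGN_AND_IMPLEMENTATION_FAILURES = "Failures in AI Design and Implementation"

class RmfFunction(Enum):
    GOVERN = "Govern"
    MAP = "Map"
    MEASURE = "Measure"
    MANAGE = "Manage"

@dataclass
class AiRiskEntry:
    system_name: str
    category: RiskCategory
    mitigations: dict = field(default_factory=dict)  # RmfFunction -> list of actions

# Hypothetical entry in a sector-specific AI risk register.
entry = AiRiskEntry(
    system_name="grid-load-forecaster",
    category=RiskCategory.ATTACKS_TARGETING_AI_SYSTEMS,
    mitigations={
        RmfFunction.GOVERN: ["assign an accountable owner for model security"],
        RmfFunction.MEASURE: ["monitor production inputs for anomalies"],
    },
)
print(entry.category.value, [f.value for f in entry.mitigations])
```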

“Critical infrastructure owners and operators can foster a culture of risk management by aligning AI safety and security priorities with their own organizational principles and strategic priorities. This organizational approach follows a “secure by design” philosophy where leaders prioritize and take ownership of safety and security outcomes and build organizational structures that make security a top priority.” reads the guidelines.

Pierluigi Paganini

Follow me on Twitter: @securityaffairs and Facebook and Mastodon

(SecurityAffairs – hacking, CISA)
