
Smap - A Drop-In Replacement For Nmap Powered By Shodan.Io

8 August 2022 at 12:30
By: Zion3R


Smap is a replica of Nmap that uses shodan.io's free API for port scanning. It takes the same command-line arguments as Nmap and produces the same output, which makes it a drop-in replacement for Nmap.


Features

  • Scans 200 hosts per second
  • Doesn't require any account/API key
  • Vulnerability detection
  • Supports all of Nmap's output formats
  • Service and version fingerprinting
  • Makes no contact with the targets

Installation

Binaries

You can download a pre-built binary from here and use it right away.

Manual

go install -v github.com/s0md3v/smap/cmd/[email protected]

Confused or something not working? For more detailed instructions, click here

AUR package

Smap is available on AUR as smap-git (builds from source) and smap-bin (pre-built binary).

Homebrew/Mac

Smap is also available on Homebrew.

brew update
brew install smap

Usage

Smap takes the same arguments as Nmap, but options other than -p, -h, -o*, and -iL are ignored. If you are unfamiliar with Nmap, here's how to use Smap.
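For example, a single run combining a target list, a port filter, and JSON output might look like this (the filenames are illustrative):

smap -p80,443 -iL targets.txt -oJ results.json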

Specifying targets

smap 127.0.0.1 127.0.0.2

You can also use a list of targets, separated by newlines.

smap -iL targets.txt

Supported formats

1.1.1.1          // IPv4 address
example.com      // hostname
178.23.56.0/8    // CIDR

Output

Smap supports 6 output formats, which can be used with the -o* options as follows:

smap example.com -oX output.xml

If you want to print the output to the terminal, use a hyphen (-) as the filename.
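For example, to print results in Smap's own format, including the vulnerability and tag data mentioned in the note below, to the terminal:

smap example.com -oS -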

Supported formats

oX    // nmap's XML format
oG    // nmap's greppable format
oN    // nmap's default format
oA    // output in all 3 formats above at once
oP    // IP:PORT pairs separated by newlines
oS    // custom smap format
oJ    // JSON

Note: Since Nmap doesn't scan for or display vulnerabilities and tags, that data is not available in Nmap's formats. Use -oS to view it.

Specifying ports

Smap scans a fixed set of 1237 common ports by default. If you want to display results for certain ports only, use the -p option.

smap -p21-30,80,443 -iL targets.txt

Considerations

Since Smap simply fetches existing port data from shodan.io, it is super fast, but there are trade-offs. You should use Smap if:

You want

  • vulnerability detection
  • a super fast port scanner
  • results for most common ports (top 1237)
  • no connections to be made to the targets

You are okay with

  • not being able to scan IPv6 addresses
  • results being up to 7 days old
  • a few false negatives



BlackStone - Pentesting Reporting Tool

7 August 2022 at 12:30
By: Zion3R



BlackStone, or the "BlackStone Project", is a tool created to automate the work of drafting and submitting reports on ethical hacking or pentesting audits.

With this tool we can register in the database the vulnerabilities found during the audit, classifying them as internal, external, or WiFi; for each one we can add a description and recommendation, as well as a severity level and the effort required to fix it. This information is then used to generate a criticality table in the report as a global summary of the vulnerabilities found.

We can also register a company and, just by adding its web page, the tool can find subdomains, telephone numbers, social networks, employee emails, and more.



Docker Install

Install Docker

Install docker-compose

Install BlackStone:

git clone https://github.com/micro-joan/BlackStone
cd BlackStone
docker-compose up -d

User: blackstone

Password: blackstone

Manual Install

  • First we must download an Apache server to host the tool; in my case I use MAMP (I recommend following these steps): https://www.mamp.info/en/downloads/
  • We will download the contents of this repository, giving us 2 folders (BlackStone and BBDD)
  • Once the server starts, we will go to C:/MAMP/htdocs and paste all the contents of the downloaded "BlackStone" folder
  • For the application to work we will have to import the database: go to your browser and open "localhost/phpMyAdmin/". The database connection file is in BlackStone/conexion.php
  • We will create a database called blackstone and import the data from the downloaded BBDD folder (see the command-line sketch after this list)
  • Log in to BlackStone with the username and password "blackstone"
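If you prefer the MySQL command line over phpMyAdmin for the database step, it might look like the following sketch (the name of the dump file inside the BBDD folder is an assumption):

mysql -u root -p -e "CREATE DATABASE blackstone;"
mysql -u root -p blackstone < BBDD/blackstone.sql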

Use

First, you need to go to profile settings and add your Hunter.io and haveibeenpwned.com tokens.


After having vulnerabilities in the database, we will go to the audited client section and register a client along with their web page; once registered, we can go to customer details and see the following information:

  • Name of the business owner
  • Social networks of the company owner
  • Email and telephone number of the company owner
  • A check of the deep web for exposed passwords belonging to the company owner
  • Subdomains of the website, as well as information of interest found on Google
  • Emails of company workers

THIS APPLICATION IS INTENDED FOR PROFESSIONAL USE ONLY; THE AUTHOR IS NOT RESPONSIBLE FOR ANY MISUSE.


Once we have the company that we are going to audit registered in the database, we will create a report, adding the date, the name of the report, and the company to be audited. When we register the report, we will click edit and then select the vulnerabilities that we want to appear in the report.


Finally, we will generate the report by clicking on the "overview report" button; we will then save the generated page as ".mht" and open it with Word to be able to work on the generated report.





Pict - Post-Infection Collection Toolkit

6 August 2022 at 12:30
By: Zion3R


This set of scripts is designed to collect a variety of data from an endpoint thought to be infected, to facilitate the incident response process. This data should not be considered a full forensic collection, but it does capture a lot of useful forensic information.

If you want true forensic data, you should really capture a full memory dump and image the entire drive. That is not within the scope of this toolkit.


How to use

The script must be run on a live system, not on an image or other forensic data store. It does not strictly require root permissions to run, but without them it will be unable to collect much of the intended data.

Data will be collected in two forms. First is in the form of summary files, containing output of shell commands, data extracted from databases, and the like. For example, the browser module will output a browser_extensions.txt file with a summary of all the browser extensions installed for Safari, Chrome, and Firefox.

The second are complete files collected from the filesystem. These are stored in an artifacts subfolder inside the collection folder.

Syntax

The script is very simple to run. It takes only one parameter, which is required: the path to a configuration file in JSON format:

./pict.py -c /path/to/config.json

The configuration file describes what the script will collect, and how. It should look something like this:
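The following minimal sketch shows the overall shape, using the browser module described below (the BrowserCollector class name and the destination path are assumptions; the individual keys are explained in the sections that follow):

{
    "collection_dest": "~/collection",
    "all_users": true,
    "collectors": {
        "browser": "BrowserCollector"
    },
    "unused": {},
    "settings": {
        "keepLSData": false,
        "zipIt": true
    },
    "moduleSettings": {
        "browser": {
            "collectArtifacts": false
        }
    }
}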

collection_dest

This specifies the path to store the collected data in. It can be an absolute path or a path relative to the user's home folder (by starting with a tilde). The default path, if not specified, is /Users/Shared.

Data will be collected in a folder created in this location. That folder will have a name in the form PICT-computername-YYYY-MM-DD, where the computer name is the name of the machine specified in System Preferences > Sharing and the date is the date of collection.

all_users

If true, collects data from all users on the machine whenever possible. If false, collects data only for the user running the script. If not specified, this value defaults to true.

collectors

PICT is modular, and can easily be expanded or reduced in scope, simply by changing what Collector modules are used.

The collectors data is a dictionary where the key is the name of a module to load (the name of the Python file without the .py extension) and the value is the name of the Collector subclass found in that module. You can add additional entries for custom modules (see Writing your own modules), or can remove entries to prevent those modules from running. One easy way to remove modules, without having to look up the exact names later if you want to add them again, is to move them into a top-level dictionary named unused.

settings

This dictionary provides global settings.

keepLSData specifies whether the lsregister.txt file - which can be quite large - should be kept. (This file is generated automatically and is used to build output by some other modules. It contains a wealth of useful information, but can be well over 100 MB in size. If you don't need all that data, or don't want to deal with that much data, set this to false and it will be deleted when collection is finished.)

zipIt specifies whether to automatically generate a zip file with the contents of the collection folder. Note that the process of zipping and unzipping the data will change some attributes, such as file ownership.

moduleSettings

This dictionary specifies module-specific settings. Not all modules have their own settings, but if a module does allow for its own settings, you can provide them here. In the above example, you can see a boolean setting named collectArtifacts being used with the browser module.

There are also global module settings that are maintained by the Collector class, and that can be set individually for each module.

collectArtifacts specifies whether to collect the file artifacts that would normally be collected by the module. If false, all artifacts will be omitted for that module. This may be needed in cases where storage space is a consideration, and the collected artifacts are large, or in cases where the collected artifacts may represent a privacy issue for the user whose system is being analyzed.

Writing your own modules

Modules must consist of a file containing a class that is subclassed from Collector (defined in collectors/collector.py), and they must be placed in the collectors folder. A new Collector module can be easily created by duplicating the collectors/template.py file and customizing it for your own use.

def __init__(self, collectionPath, allUsers)

This method can be overridden if necessary, but the superclass Collector.__init__() must be called in such a case, preferably before your custom code executes. This gives the object the chance to get its properties set up before your code tries to use them.

def printStartInfo(self)

This is a very simple method that will be called when this module's collection begins. Its intent is to print a message to stdout to give the user a sense of progress, by providing feedback about what is happening.

def applySettings(self, settingsDict)

This gives the module the chance to apply any custom settings. Each module can have its own self-defined settings, but the settingsDict should also be passed to the superclass, so that the Collector class can handle any settings that it defines.

def collect(self)

This method is the core of the module. This is called when it is time for the module to begin collection. It can write as many files as it needs to, but should confine this activity to files within the path self.collectionPath, and should use filenames that are not already taken by other modules.

If you wish to collect artifacts, don't try to do this on your own. Simply add paths to the self.pathsToCollect array, and the Collector class will take care of copying those into the appropriate subpaths in the artifacts folder, and maintaining the metadata (permissions, extended attributes, flags, etc) on the artifacts.

When the method finishes, be sure to call the super (Collector.collect(self)) to give the Collector class the chance to handle its responsibilities, such as collecting artifacts.

Your collect method can use any data collected in the basic_info.txt or lsregister.txt files found at self.collectionPath. These are collected at the beginning by the pict.py script, and can be assumed to be available for use by any other modules. However, you should not rely on output from any other modules, as there is no guarantee that the files will be available when your module runs. Modules may not run in the order they appear in your configuration JSON, since Python dictionaries are unordered.
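Putting these methods together, a minimal custom module might look like the following sketch (the module name, output filename, and collected path are illustrative, and the Collector base class is assumed to behave as described above):

# collectors/example_collector.py
import os

from collectors.collector import Collector

class ExampleCollector(Collector):
    def printStartInfo(self):
        # Give the user a sense of progress on stdout
        print("Collecting example data...")

    def applySettings(self, settingsDict):
        # Apply any module-specific settings here, then pass the
        # dictionary on so the Collector class can handle the global
        # module settings (such as collectArtifacts)
        Collector.applySettings(self, settingsDict)

    def collect(self):
        # Write summary output inside self.collectionPath, using a
        # filename not already taken by another module
        with open(os.path.join(self.collectionPath, "example_summary.txt"), "w") as f:
            f.write("example summary output\n")

        # Queue a file for artifact collection; the Collector class
        # copies it into the artifacts folder and preserves metadata
        self.pathsToCollect.append("/etc/hosts")

        # Give the Collector class the chance to handle its
        # responsibilities, such as collecting artifacts
        Collector.collect(self)

It would then be enabled by adding "example_collector": "ExampleCollector" to the collectors dictionary in the configuration file.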

Credits

Thanks to Greg Neagle for FoundationPlist.py, which solved lots of problems with reading binary plists, plists containing date data types, etc.


