Editing the Nmap dictionary for brute-force?

When using the Nmap scripts for brute-forcing, let's take the example of the FTP brute-force script available here: https://nmap.org/nsedoc/scripts/ftp-brute.html
How do I access the Nmap dictionary list this script uses, so that I can edit it?
I would appreciate any help.

The documentation you linked lists the script arguments that can be passed, one of which is the passdb argument. This is documented in the unpwdb library documentation. The default list is in Nmap's data directory under nselib/data/passwords.lst, but you can pass the name of any newline-separated file with --script-args 'passdb=example.txt'.
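For example, to run the script against a single host with a custom word list (the target address and file name here are placeholders):
nmap -p 21 --script ftp-brute --script-args 'passdb=example.txt' 192.0.2.10
The same mechanism works for the user name list via the userdb argument.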

Related

How to find the complete list of arguments for WORD in the CLI

I would like to know if it is possible to get all the arguments of the command/exe "winword".
The official docs give some examples:
https://support.microsoft.com/en-us/office/command-line-switches-for-microsoft-office-products-079164cd-4ef5-4178-b235-441737deb3a6
But not all the commands are listed there; for example, "/mFilePrintDefault" and "/mFileExit" are missing.
The objective is to be able to print a PDF without user action.
The macro commands are going to vary from one version to the next, and of course you can start with your own macro file or use an AutoMacro.
The two examples you give are from the "Fixed Commands" lists.
There are three lists available, but I have no idea which commands function as a command-line macro in a specific version; you would need to try them out:
- ECMA-376 Office Open XML File Formats [ECMA-376]; fixed commands are here: https://learn.microsoft.com/en-us/openspecs/office_standards/ms-oe376/37bf80f7-1d74-47f6-8721-aa077cadca4d
- ISO/IEC-29500 Office Open XML File Formats [ISO/IEC-29500:2012]; fixed commands are here: https://learn.microsoft.com/en-us/openspecs/office_standards/ms-oi29500/1ecf33cf-3601-45f0-89fb-0ab824739343
- The commands listed under Basic Types in the MS-DOC specification: https://learn.microsoft.com/en-us/openspecs/office_file_formats/ms-doc/e86aecb4-bb67-4de2-9b06-37771115f274
A common example showing the use of more than one command in sequence is
"path to\winword.exe" "path and filename.docx" /q /n /mFilePrintDefault /mFileExit
But your default printer needs to be a PDF one; otherwise your macro needs to be more complex.

Perl: how to include a pre-defined variable list

Is it possible to have a Perl file that defines a list of variables with certain values, and then have the main Perl script include that file?
Or is there any sort of Perl approach that would make that intention work?
Thanks!
Store those values in a config file and retrieve them by parsing it in your main Perl script.
Refer to the following link for an example:
https://perlmaven.com/reading-configuration-files-in-perl
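As a minimal sketch of that approach (the file name and keys are made up), a plain key=value config file can be parsed in a few lines:
#!/usr/bin/perl
use strict;
use warnings;

# Read key=value pairs from config.ini into a hash.
my %config;
open my $fh, '<', 'config.ini' or die "Cannot open config.ini: $!";
while (my $line = <$fh>) {
    chomp $line;
    next if $line =~ /^\s*(#|$)/;                  # skip comments and blank lines
    my ($key, $value) = split /\s*=\s*/, $line, 2;
    $config{$key} = $value;
}
close $fh;

print "host is $config{host}\n";  # assuming config.ini contains a line like: host=db.example.com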

How to create a Doxygen link to the same file

I would like to write a Doxygen comment that names the file in which the comment occurs. Rather than write the filename explicitly, I would like Doxygen to supply me with it. Thus, if I change the name of the file, or move some of the content into a different file, I don't need to change hard-coded instances of the name.
For a concrete example, let's say I'm adding comments to functions in array.hpp, and I want the comment for certain functions to say "This function should only be used within array.hpp." I want to be able to write
/**
* This function should only be used within #thisfile.
*/
where #thisfile is a Doxygen expression that translates into array.hpp within the file array.hpp.
I've looked at the Doxygen documentation, including "Automatic link generation/Links to files" and the entire "Special Commands" section, but I haven't found what I'm looking for. Does such functionality exist?
Note that essentially the same question was asked on the Doxygen mailing list a few weeks ago. It has not received any replies.
General
As far as I know, such functionality does not exist out of the box. But you can add it by configuring an INPUT_FILTER in your Doxyfile. Doxygen passes the path of each input file as an argument to the filter, and the filter can use it to replace your keyword (for example #thisfile) with the path to the file.
Below I give an example of how to implement this with bash. A solution for other shells or Windows should be quite similar.
Example for bash
Write a short bash script infiltrate_filename.sh:
#!/bin/bash
# Doxygen calls this filter with the path of the input file as $1.
# Strip the working-directory prefix and substitute the keyword.
workdir="$(pwd)/"
sed -e "s:#thisfile:${1/$workdir/}:g" "$1"
This script strips the working directory from the absolute path of the file. The resulting relative path is used to replace the keyword of your choice (here: #thisfile).
Make your script executable: chmod +x infiltrate_filename.sh
Set the INPUT_FILTER in your Doxyfile to INPUT_FILTER = ./infiltrate_filename.sh
That's it! 🎉 Now you can use #thisfile in your documentation blocks and it will be replaced by the path to the file. As the paths are relative to Doxygen's working directory they will automatically be linked to the file.
Notes
This solution assumes that the filter script is located in the working directory of doxygen (for example ~/my_project) and that the INPUT files are in subdirectories of the working directory (for example ~/my_project/src/foo/bar).
I have tested this example on a minimum working example. I am not a bash or sed expert. This solution may be improvable.

How do you specify that a parameter accepts wildcard characters in a custom script cmdlet

When you do Get-Help SomeCommand -Full, under each parameter, after the description, there are some additional parameter properties. One of those properties is 'Accept Wildcard Characters?'. When I create the help information for a custom script cmdlet, how do I specify that a parameter accepts wildcards?
In the param section of your script, add the attribute SupportsWildcards().
For example:
param (
    [SupportsWildcards()][String]$variable
)
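For a fuller picture, here is a minimal sketch of a function with comment-based help (the function name, parameter, and path are made up):
function Find-LogFile {
    <#
    .SYNOPSIS
    Lists log files whose names match a pattern.
    .PARAMETER Name
    The file name to match. Wildcards are permitted.
    #>
    param (
        [SupportsWildcards()]
        [String]$Name
    )
    Get-ChildItem -Path 'C:\Logs' -Filter $Name
}
After dot-sourcing the script, Get-Help Find-LogFile -Full should show "Accept wildcard characters? true" under the Name parameter.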
If you want to be able to do this, it will require a few things. First off, you either have to create a .dll file, which you are not doing, or you have to create a module. I am not going to go into all of the ins and outs of creating a module; there are already many well-written guides on how to do that on the internet.
As a part of your module you can include .XML files that provide Help information similarly to the commented help available for individual scripts. The XML style does have some advantages, such as consistency and some advanced features, but does require more effort. Towards this end I would strongly suggest reading Writing Help for Windows PowerShell Modules, as it will explain where to place your XML files, how to structure them, and required headers and what not.
If it were me, I'd probably copy an existing XML help file and edit it to suit my needs for the cmdlet, find and read one of the quick-and-dirty how-tos about creating a module, and then give up on the idea, since (in my opinion) it's not worth the effort just to add that 'Supports Wildcards' flag if this all started out as a basic script with commented help.
But the answer is, create a module and supporting XML based Help file for your cmdlet. With that you can add support for the Accepts Wildcards flag for your parameters.

SAS - Reading multiple compressed data files

I hope you are all well.
So my question is about the procedure for opening multiple raw data files that are compressed.
My files' names are ordered, so I have for example: o_equities_20080528.tas.zip, o_equities_20080529.tas.zip, o_equities_20080530.tas.zip, ...
Thank you all in advance.
How much work this will be depends on whether:
- You have enough space to extract all the files simultaneously into one folder
- You need to be able to keep track of which file each record has come from (i.e. you can't tell just from looking at a particular record).
If you have enough space to extract everything and you don't need to track which records came from which file, then the simplest option is to use a wildcard infile statement, allowing you to import the records from all of your files in one data step:
infile "c:\yourdir\o_equities_*.tas" <other infile options as per individual files>;
This syntax works regardless of OS - it's a SAS feature, not shell expansion.
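For instance, a complete data step could look like this (the delimiter and input layout are assumptions; use whatever matches your files):
data equities;
    infile "c:\yourdir\o_equities_*.tas" dlm=',' dsd truncover firstobs=2;
    input symbol :$8. price volume;  /* hypothetical record layout */
run;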
If you have enough space to extract everything in advance but you need to keep track of which records came from each file, then please refer to this page for an example of how to do this using the filevar option on the infile statement:
http://www.ats.ucla.edu/stat/sas/faq/multi_file_read.htm
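In case that page is unavailable, the pattern looks roughly like this (file names and record layout are hypothetical):
data equities;
    length fname myinfile $ 300;
    input fname $;                        /* read the next file name from the datalines below */
    infile dummy filevar=fname filename=myinfile end=done;
    do while (not done);                  /* read every record of the current file */
        input symbol :$8. price volume;
        source = myinfile;                /* keep track of which file the record came from */
        output;
    end;
    datalines;
c:\yourdir\o_equities_20080528.tas
c:\yourdir\o_equities_20080529.tas
c:\yourdir\o_equities_20080530.tas
;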
If you don't have enough space to extract everything in advance, but you have access to 7-zip or another archive utility, and you don't need to keep track of which records came from each file, you can use a pipe filename and extract to standard output. If you're on a Linux platform then this is very simple, as you can take advantage of shell expansion:
filename cmd pipe "nice -n 19 gunzip -c /yourdir/o_equities_*.tas.zip";
infile cmd <other infile options as per individual files>;
On Windows it's the same sort of idea, but as you can't use shell expansion, you have to construct a separate filename for each zip file or use some of 7-zip's more arcane command-line options, e.g.:
filename cmd pipe "7z.exe e -an -ai!C:\yourdir\o_equities_*.tas.zip -so -y";
Here, the -an tells 7-zip not to read the archive name from the command line, and the -ai tells it to expand the wildcard. This will extract all files from all of the matching archives to standard output; you can narrow this down further via the 7-zip command if necessary. You will have multiple header lines mixed in with the data - you can use findstr to filter these out in the pipe before SAS sees them, or you can just choose to tolerate the odd error message here and there.
If you need to keep track of what came from where and you can't extract everything at once, your best bet (as far as I know) is to write a macro that processes one file at a time using the above techniques, adding this information as you import each dataset.
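A rough sketch of such a macro, reusing the gunzip pipe from above (paths and the input layout are again assumptions):
%macro read_tas(date);
    filename cmd pipe "gunzip -c /yourdir/o_equities_&date..tas.zip";
    data eq_&date.;
        length source $ 40;
        infile cmd;                              /* other infile options as per individual files */
        source = "o_equities_&date..tas.zip";    /* keep track of the origin file */
        input symbol :$8. price volume;          /* hypothetical record layout */
    run;
    filename cmd clear;
%mend read_tas;

%read_tas(20080528);
%read_tas(20080529);
%read_tas(20080530);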