Can nmap run multiple nmap scripts with multiple arguments in one command? - nmap

I want to run multiple nmap scripts, each of which takes in one or multiple arguments.
For example, I want to run 3 scripts: sc1, sc2, sc3.
sc1 uses args: sc1.ag1, sc1.ag2, sc1.ag3
sc2 uses args: sc2.ag1, sc2.ag2
sc3 uses args: sc3.ag1
Is it possible to run a command like this?
nmap --script sc1,sc2,sc3 --script-args=sc1.ag1,sc1.ag2,sc1.ag3,sc2.ag1,sc2.ag2,sc3.ag1 192.168.111.111

Yes, that is allowed. You should be careful with quoting for your shell, since script args can contain spaces and quote characters.
You may also be interested in the --script-args-file option, which allows you to put each script argument on a separate line of a text file. The newline acts the same as the comma (",") in your example.
Script specification is covered in the online documentation.
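For illustration, here is a sketch using the asker's placeholder names (the values are invented; script arguments are normally key=value pairs, and quoting the whole list keeps the shell from splitting it):
nmap --script sc1,sc2,sc3 --script-args='sc1.ag1=foo,sc1.ag2=bar baz,sc1.ag3=1,sc2.ag1=x,sc2.ag2=y,sc3.ag1=z' 192.168.111.111
With --script-args-file, the same arguments would go one per line in a file, say args.txt, and the invocation becomes:
nmap --script sc1,sc2,sc3 --script-args-file args.txt 192.168.111.111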

Related

Why is the k8s container spec "command" field an array?

According to this official kubernetes documentation page, it is possible to provide "a command" and args to a container.
The page has 13 occurrences of the string "a command" and 10 occurrences of "the command" -- note the use of singular.
There are (besides file names) 3 occurrences of the plural "commands":
One leads to the page Get a Shell to a Running Container, which I am not interested in. I am interested in the start-up command of the container.
One mention is concerned with running several piped commands in a shell environment, however the provided example uses a single string: command: ["/bin/sh"].
The third occurrence is in the introductory sentence:
This page shows how to define commands and arguments when you run a container in a Pod.
All examples, including the explanation of how command and args interact when given or omitted, only ever show a single string in an array. It even seems to be intended to use a single command only, which would receive all specified args, since the field is named with a singular.
The question is: Why is this field an array?
I assume the developers of kubernetes had a good reason for this, but I cannot think of one. What is going on here? Is it legacy? If so, how come? Is it future-readiness? If so, what for? Is it for compatibility? If so, to what?
Edit:
As I have written in a comment below, the only reason I can conceive of at this moment is this: The k8s developers wanted to achieve the interaction of command and args as documented AND allow a user to specify all parts of a command in a single parameter instead of having a command span across both command and args.
So essentially a compromise between a feature and readability.
Can anyone confirm this hypothesis?
Because the execve(2) system call takes an array of words. Everything at a higher level fundamentally reduces to this. As you note, a container only runs a single command, and then exits, so the array syntax is a native-Unix way of providing the command rather than a way to try to specify multiple commands.
For the sake of argument, consider a file named a file; with punctuation, where the spaces and semicolon are part of the filename. Maybe this is the input to some program, so in a shell you might write
some_program 'a file; with punctuation'
In C you could write this out as an array of strings and just run it
#include <unistd.h>

int main(void) {
    char *const argv[] = {
        "some_program",
        "a file; with punctuation", /* no escaping or quoting, an ordinary C string */
        NULL
    };
    execvp(argv[0], argv); /* replaces this process; does not return on success */
    return 1;              /* only reached if execvp failed */
}
and similarly in Kubernetes YAML you can write this out as a YAML array of bare words
command:
- some_program
- a file; with punctuation
Neither Docker nor Kubernetes will automatically run a shell for you (except in the case of the Dockerfile shell form of ENTRYPOINT or CMD). Part of the question is "which shell"; the natural answer would be a POSIX Bourne shell in the container's /bin/sh, but a very-lightweight container might not even have that, and sometimes Linux users expect /bin/sh to be GNU Bash, and confusion results. There are also potential lifecycle issues if the main container process is a shell rather than the thing it launches. If you do need a shell, you need to run it explicitly
command:
- /bin/sh
- -c
- some_program 'a file; with punctuation'
Note that sh -c's argument is a single word (in our C example, it would be a single entry in the argv array) and so it needs to be a single item in a command: or args: list. If you have the sh -c wrapper it can do anything you could type at a shell prompt, including running multiple commands in sequence. For a very long command it's not uncommon to see YAML block-scalar syntax here.
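For example, a YAML block scalar keeps a long command readable while remaining a single list item (the commands themselves are invented):
command:
- /bin/sh
- -c
- |
  some_program 'a file; with punctuation'
  another_program --flag value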
I think the command field is an array because it directly overrides the image's ENTRYPOINT (and args overrides CMD), both of which can be arrays, and should be arrays in order for command and args to compose properly (see the documentation).
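A minimal sketch of that override (program name invented):
command: ["/usr/bin/some_program"]  # replaces the image's ENTRYPOINT
args: ["--flag", "value"]           # replaces the image's CMD, appended as further words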

Difference between '-- /bin/sh -c ls' vs 'ls' when setting a command in kubectl?

I am a bit confused with commands in kubectl. I am not sure when I can use commands directly, like
command: ["command"] or -- some_command
vs
command: [/bin/sh, -c, "command"] or -- /bin/sh -c some_command
I am a bit confused with commands in kubectl. I am not sure when I can use commands directly
Thankfully the distinction is easy(?): every command: is fed into the exec system call (or its Go equivalent). If your container contains a binary that the kernel can successfully execute, you are welcome to name it in command:; if it is a shell built-in, a shell alias, or otherwise requires sh (or python or whatever) to execute, then you must be explicit to the container runtime about that distinction.
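A concrete pair to illustrate (pod names invented; busybox really does contain both ls and /bin/sh):
kubectl run t1 --image=busybox --restart=Never -- ls -l /
kubectl run t2 --image=busybox --restart=Never -- /bin/sh -c 'ls -l / | wc -l'
The first works because ls is a binary the kernel can execute; the second needs the explicit /bin/sh -c because a pipeline is shell syntax, not a binary.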
If it helps any, the command: syntax of a Kubernetes container: is the equivalent of the ENTRYPOINT ["",""] line of a Dockerfile, not CMD ["", ""], and certainly not ENTRYPOINT echo this, which is fed to /bin/sh for you.
At a low level, every (Unix/Linux) command is invoked as a series of "words". If you type a command into your shell, the shell does some preprocessing and then creates the "words" and runs the command. In Kubernetes command: (and args:) there isn't a shell involved, unless you explicitly supply one.
I would default to using the list form unless you specifically need shell features.
command: # overrides Docker ENTRYPOINT
- the_command
- --an-argument
- --another
- value
If you use list form, you must explicitly list out each word. You may use either YAML block list syntax as above or flow list syntax [command, arg1, arg2]. If there are embedded spaces in a single item [command, --option value] then those spaces are included in a single command-line option as if you quoted it, which frequently confuses programs.
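To make that pitfall concrete (program and option names invented):
command: [the_command, --an-argument, --another, value]
is the same four words as the block list above, while
command: [the_command, --option value]
passes exactly two words; the second is the single string "--option value", embedded space and all.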
You can explicitly invoke a shell if you need to:
command:
- sh
- -c
- the_command --an-argument --another value
This command is exactly three words: sh, the option -c, and the shell command. The shell will process that last word in the usual way and execute it.
You need the shell form only if you're doing something more complicated than running a simple command with fixed arguments. Running multiple sequential commands c1 && c2 or environment variable expansion c1 "$OPTION" are probably the most common ones, but any standard Bourne shell syntax would be acceptable here (redirects, pipelines, ...).
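A couple of sketches of those cases (command names invented):
command:
- /bin/sh
- -c
- c1 && c2
and
command:
- /bin/sh
- -c
- c1 "$OPTION" > /tmp/c1.log
In both cases everything after -c is still one word; the shell, not Kubernetes, splits it, expands $OPTION, and performs the redirect.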

Launch external program with multiple commands via Powershell

I am currently attempting to launch a different console (.exe) and pass multiple commands; while starting it and entering a single command works just fine, I have not been able to find out how multiple commands can be entered via PowerShell.
& "C:\Program Files\Docker Toolbox\start.sh" docker-compose up -d --build
The given command works fine, but as mentioned I need to pass more than one command - I tried using arrays, ScriptBlocks and different sequences, though to no avail.
Edit:
Noticed that the docker build command has a -f flag which allows me to specify a file; however, the issue now seems to be that the executed cmd removes all backslashes and special characters, rendering the given path useless.
Example:
&"C:\Program Files\Docker Toolbox\start.sh" 'docker-compose build -f
path\to\dockerfile'
will result in an error stating that "pathtodockerfile" is an invalid path.
Your start.sh needs to be able to handle multiple arguments. This doesn't look like a PowerShell question.
Turns out that it was easier than expected; I solved it by executing a separate file that contained the two commands needed and passing it to the start.sh file.
&"C:\Program Files\Docker Toolbox\start.sh" './xyz/fileContainingCommands.sh'

How can I run external programs using Perl 6? (e.g. like "system" in Perl 5)

I can use system in Perl 5 to run external programs. I like to think of system like a miniature "Linux command line" inside Perl. However, I cannot find documentation for system in Perl 6. What is the equivalent?
Perl 6 actually has two routines that replace system from Perl 5.
In Perl 6, shell passes its single string argument to the shell, similar to Perl 5's system when it is given one argument containing metacharacters.
In Perl 6, run avoids the shell entirely. It takes its first argument as the command and the remaining arguments as arguments to that command, similar to Perl 5's system when it is given multiple arguments.
For example:
shell('ls > file.log.txt'); # The shell parses the command line and redirects ls output into file.log.txt
run('ls','-l','-r','-t'); # Run ls with -l, -r, and -t flags
run('ls','-lrt'); # Ditto
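run also returns a Proc object, so output can be captured rather than printed; a small sketch:
my $proc = run 'ls', '-l', :out;        # :out attaches a pipe to the child's stdout
my $listing = $proc.out.slurp: :close;  # read it all, then close the pipe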
See also this 2014 Perl 6 Advent post on "running external programs".
In addition to using shell or run, which replace system from Perl 5, you can also use NativeCall to invoke the libc system function.
On my Windows box, it looks like this:
use NativeCall;
# Bind the C runtime's system() function from the Microsoft C runtime DLL.
sub system(Str --> int32) is native("msvcr110.dll") { * };
system("echo 42");

How can I find out what script, program, or shell executed my Perl script?

How would I determine what script, program, or shell executed my Perl script?
Example: I might want to have human readable output if executed from shell (customized for each type of shell), a different type of output if called as a script from another perl script, and a machine readable format if executed from a program such as a continuous integration server.
Motivation: I have a tool that changes its output based on which shell executes it. I'd normally implement this behavior as an option to the script, but this tool's design doesn't allow for options. Other shells have environment variables that indicate what shell is running. I'm working on a patch to support Powershell, which has no such special variable.
Edit: Many of these answers happen to be Linux specific. Unfortunately, PowerShell is for Windows. getppid, the $ENV{SHELL} variable, and shelling out to ps won't help in this case. This script needs to run cross-platform.
You use getppid(). Take this snippet in child.pl:
my $ppid = getppid();
system("ps --no-headers $ppid");
If you run it from the command line, system will show bash or similar (among other things). Execute it with system("perl child.pl"); in another script, e.g. parent.pl, and you will see that perl parent.pl executed it.
To capture just the name of the process with arguments (thanks to ikegami for the correct ps syntax):
my $ppid = getppid();
my $ps = `ps --no-headers -o cmd $ppid`;
chomp $ps;
EDIT: An alternative to this approach might be to create soft links to your script, have the different contexts use different links to access your script, and inspect $0 to build logic around that.
I would suggest a different approach to accomplish your goal. Instead of guessing at the context, make it more explicit. Each use case is wholly separate, so have three different interfaces.
A function which can be called inside a Perl program. This would likely return a Perl data structure. This is far easier, faster and more reliable than parsing script output. It would also serve as the basis for the scripts.
A script which outputs for the current shell. It can look at $ENV{SHELL} to discover what shell is running. For bonus points, provide a switch to explicitly override.
A script which can be called inside a non-Perl program, such as your continuous integration server, and issue machine readable output. XML and/or JSON or whatever.
2 and 3 would be just thin wrappers to format the data coming out of 1.
Each is tailored to fit its specific need. Each will work without heuristics. Each will be far simpler than trying to guess the context and what the user wants.
If you can't separate 2 and 3, have the continuous integration server set an environment variable and look for it.
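A minimal sketch of that layering (all package, function, and file names hypothetical):
package MyTool;
sub gather { return { status => 'ok', count => 3 } }   # interface 1: returns Perl data

# interface 2 (bin/mytool): my $d = MyTool::gather(); print "status: $d->{status}\n";
# interface 3 (bin/mytool-json): use JSON::PP; print JSON::PP::encode_json(MyTool::gather());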
Depending on your environment, you may be able to pick it up from the environment variables. Consider the following code:
/usr/bin/perl -MData::Dumper -e 'print Dumper(\%ENV);' | grep sh
On my Ubuntu system, it gets me:
'SHELL' => '/bin/bash',
So I guess that says I'm running perl from a bash shell. If you use something else, the SHELL variable may give you a hint.
But let's say you know you're in bash, but perl is run from a subshell. Then try:
/bin/sh -c "/usr/bin/perl -MData::Dumper -e 'print Dumper(\%ENV);'" | grep sh
You will find:
'_' => '/bin/sh',
'SHELL' => '/bin/bash',
So the shell is still bash, but bash has a variable $_ which also shows the absolute filename of the shell or script being executed, which may be another valuable clue. Similarly, in other environments there will most probably be clues left in Perl's %ENV hash that give you valuable hints.
If you're running PowerShell 2.0 or above (most likely), you can infer that PowerShell is the parent process by examining the environment variable %psmodulepath%. By default, it points to the system modules under %windir%\system32\windowspowershell\v1.0\modules; this is what you would see if you examine the variable from cmd.exe.
However, when PowerShell starts up, it prepends the user's default module search path to this environment variable which looks like: %userprofile%\documents\windowspowershell\modules. This is inherited by child processes. So, your logic would be to test if %psmodulepath% starts with %userprofile% to detect powershell 2.0 or higher. This won't work in PowerShell 1.0 because it does not support modules.
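In Perl that heuristic might look like this (a sketch, Windows only; variable names as documented above):
my $mods = $ENV{PSModulePath} // '';
my $home = $ENV{USERPROFILE} // '';
if ($home ne '' && index($mods, $home) == 0) {
    print "parent looks like PowerShell 2.0+\n";
} else {
    print "probably cmd.exe (or PowerShell 1.0)\n";
}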
This is on Windows XP with PowerShell v2.0, so take it with a grain of salt.
In a cmd.exe shell, I get:
PSModulePath=C:\WINDOWS\system32\WindowsPowerShell\v1.0\Modules\
whereas in the PowerShell console window, I get:
PSModulePath=E:\Home\user\WindowsPowerShell\Modules;C:\WINDOWS\system32\WindowsPowerShell\v1.0\Modules\
where E:\Home\user is where my "My Documents" folder is. So, one heuristic may be to check if PSModulePath contains a user dependent path.
In addition, in a console window, I get:
!::=::\
in the environment. From the PowerShell ISE, I get:
!::=::\
!C:=C:\Documents and Settings\user