nmap and nmap -sL giving me contradictory results

I want to find the names of all PCs in some subnet.
In order to do this, I type in the following command:
nmap x.x.x.0/24.
(Where each x stands for a digit.)
An alternative command to achieve the same thing is supposed to be this:
nmap -sL x.x.x.0/24.
The only supposed difference from the first command relates to the format in which the results are printed to the command line.
However, not only the format differs but also the content. The first command reports every computer on the subnet as up.
The second command reports every computer on the subnet as down!
What is going on here? Why is the first command telling me the opposite of the second one?

Extracted from nmap documentation:
(-sL option). This feature simply enumerates every IP address in the given target netblock(s) and does a reverse-DNS lookup (unless -n was specified) on each.
That's because -sL is a list scan and nothing more, while your other command, run without options, performs an actual scan of the hosts.
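To make the difference concrete: a list scan sends no packets to the targets at all, it only does local reverse-DNS lookups. A minimal Python sketch of what -sL does per address (the function name is my own, not part of nmap):

```python
import socket

def list_scan(ips):
    """Mimic nmap -sL: no probes are sent to the targets at all;
    each address just gets a reverse-DNS (PTR) lookup."""
    results = {}
    for ip in ips:
        try:
            results[ip] = socket.gethostbyaddr(ip)[0]  # PTR name, if any
        except OSError:
            results[ip] = None  # no PTR record; nmap prints the bare IP
    return results

print(list_scan(["127.0.0.1"]))
```

Since nothing is ever sent to the hosts, a list scan cannot know whether they are up; only the default scan actually probes them.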

Related

Why is the k8s container spec "command" field an array?

According to this official kubernetes documentation page, it is possible to provide "a command" and args to a container.
The page has 13 occurrences of the string "a command" and 10 occurrences of "the command" -- note the use of singular.
There are (besides file names) 3 occurrences of the plural "commands":
One leads to the page Get a Shell to a Running Container, which I am not interested in. I am interested in the start-up command of the container.
One mention is concerned with running several piped commands in a shell environment, however the provided example uses a single string: command: ["/bin/sh"].
The third occurrence is in the introductory sentence:
This page shows how to define commands and arguments when you run a container in a Pod.
All examples, including the explanation of how command and args interact when given or omitted, only ever show a single string in the array. It even seems intended that only a single command be used, receiving all specified args, since the field is named in the singular.
The question is: Why is this field an array?
I assume the developers of kubernetes had a good reason for this, but I cannot think of one. What is going on here? Is it legacy? If so, how come? Is it future-readiness? If so, what for? Is it for compatibility? If so, to what?
Edit:
As I have written in a comment below, the only reason I can conceive of at this moment is this: The k8s developers wanted to achieve the interaction of command and args as documented AND allow a user to specify all parts of a command in a single parameter instead of having a command span across both command and args.
So essentially a compromise between a feature and readability.
Can anyone confirm this hypothesis?
Because the execve(2) system call takes an array of words. Everything at a higher level fundamentally reduces to this. As you note, a container only runs a single command, and then exits, so the array syntax is a native-Unix way of providing the command rather than a way to try to specify multiple commands.
For the sake of argument, consider a file named 'a file; with punctuation', where the spaces and semicolon are part of the filename. Maybe this is the input to some program, so in a shell you might write
some_program 'a file; with punctuation'
In C you could write this out as an array of strings and just run it
char *const argv[] = {
    "some_program",
    "a file; with punctuation", /* no escaping or quoting, an ordinary C string */
    NULL
};
execvp(argv[0], argv); /* does not return */
and similarly in Kubernetes YAML you can write this out as a YAML array of bare words
command:
- some_program
- a file; with punctuation
Neither Docker nor Kubernetes will automatically run a shell for you (except in the case of the Dockerfile shell form of ENTRYPOINT or CMD). Part of the question is "which shell"; the natural answer would be a POSIX Bourne shell in the container's /bin/sh, but a very-lightweight container might not even have that, and sometimes Linux users expect /bin/sh to be GNU Bash, and confusion results. There are also potential lifecycle issues if the main container process is a shell rather than the thing it launches. If you do need a shell, you need to run it explicitly
command:
- /bin/sh
- -c
- some_program 'a file; with punctuation'
Note that sh -c's argument is a single word (in our C example, it would be a single entry in the argv array) and so it needs to be a single item in a command: or args: list. If you have the sh -c wrapper it can do anything you could type at a shell prompt, including running multiple commands in sequence. For a very long command it's not uncommon to see YAML block-scalar syntax here.
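The same distinction can be sketched with Python's subprocess module (a stand-in here for what the container runtime ultimately does): the list form passes each element as one argv word, while the shell form hands a single string to sh -c, so quoting moves inside that string.

```python
import subprocess

# Array form: the filename, spaces and semicolon included, is a single
# argv word -- no quoting or escaping needed, like the YAML list form.
direct = subprocess.run(
    ["echo", "a file; with punctuation"],
    capture_output=True, text=True,
).stdout.strip()

# Shell form: one string handed to sh -c, so shell quoting is needed to
# keep the filename together as a single word.
via_shell = subprocess.run(
    ["/bin/sh", "-c", "echo 'a file; with punctuation'"],
    capture_output=True, text=True,
).stdout.strip()

print(direct)               # a file; with punctuation
print(direct == via_shell)  # True
```

Both invocations produce the same output, but only the shell form pays the cost (and gains the features) of an extra shell process interpreting the string.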
I think the reason the command field is an array is that it directly overrides the ENTRYPOINT of the container (and args the CMD), which can be an array, and should be one in order to use command and args together properly (see the documentation).

Output results of telnet and nmap to powershell/cmd session

So I have a serious fundamental gap in my knowledge that I'm sure has an easy answer, but after googling and looking on here, I can't find what I'm looking for:
I use nmap and telnet on an almost daily basis for checking ports and logging into IP codecs, and I use them through either the PowerShell or cmd consoles. But when I tried to script something and run it with either a .bat or .ps1 suffix, either one gives me the classic not recognized... message. But if you're able to run it in the console, you should be able to script it, right? How can one go about that?
Sample code for telnet (which works when typed into either console, but not in script form):
telnet 192.168.87.21
Sample code for nmap (again, works when typed into either console, but not in script form):
nmap -p 9999 192.168.87.101
Add a '&' symbol before 'telnet', like this: & telnet 127.0.0.1
For more information on how to run executables from PowerShell, see: https://social.technet.microsoft.com/wiki/contents/articles/7703.powershell-running-executables.aspx

Perl and SNMP - input options

The script uses the Net::SNMP module for Perl.
I'm trying to run an snmpget command with some options added, e.g. -Ir (here is a list of options), but I haven't found any way to do that. In the documentation for this module I didn't find anything about adding input options to the snmp command.
If there is any other module that supports this, that would be nice, but it wouldn't be my first pick, as it would require a lot of changes to the script (not my script, I'm just making minor changes).
I could run system (or backticks) command from Perl, e.g.:
snmpget -v2c -c COMMUNITY -Ir HOST OID
and parse output but I would like to avoid that also.
Any advice or solution would be welcome since I'm still new to Perl.
Thx.
You linked to the documentation of Net::SNMP so I'm sure you've read it all before asking... Right?
There is no "command", there is only your script's calls to the API.
[Edit after the below comments]
Net::SNMP has no option to check indexes before sending the request. So, you could say the equivalent of -Ir is enabled by default. In fact, Net::SNMP does not load your MIB, so it has no way of checking the validity of your requested variables before sending the request.

extend runtime limit for a USUSP job

When I was halfway through a calculation, I found that the runtime limit of 50:00 may not be sufficient. So I used $ bstop 1234 to suspend job 1234 and tried to modify the old runtime limit -W 50:00 to -W 100:00.
Can you suggest a command to do so?
I tried
$ bmod -W 100:00 1234
Please request for a minimum of 32 cores!
For more information, please contact XXX#XXX.
Request aborted by esub. Job not modified.
$ bmod [-W 100:00| -Wn ] 1234
-bash: -Wn]: command not found
100:00[8217]: Illegal job ID.
. Job not modified.
according to
[-W [hour:]minute[/host_name | /host_model] | -Wn]
from http://www.cisl.ucar.edu/docs/LSF/7.0.3/command_reference/bmod.cmdref.html
I don't quite understand the syntax. Does -Wn mean "Wall time new"?
Many thanks for your help!
The first command fails because LSF calls the mandatory esub defined by your administrator to do some preprocessing on the command line, and this is returning an error. Here's the relevant quote from the page you linked:
Like bsub, bmod calls the master esub (mesub), which invokes any
mandatory esub executables configured by an LSF administrator, and any
executable named esub (without .application) if it exists in
LSF_SERVERDIR.
You're going to have to come up with a bmod command line that passes the esub checks, but that might cause other problems, because some parameters (like -n, I believe) can't be changed at runtime by default, so bmod will reject the request if you specify them.
The -Wn option is used to remove the run limit from the job entirely rather than change it to a different value.

How can a program read console input after input redirection?

I've always found the command-line mysql utility to be a bit surprising, because you can do this:
gzcat dumpfile.sql.gz | mysql -u <user> -p <options>
and mysql will prompt you for a password. Now, mysql's stdin is redirected; one would expect it to read the password from the dump file. Instead, it somehow bypasses stdin and goes straight to the terminal. ssh does the same kind of thing.
I suspect this is some sort of /dev/tty or /dev/pty magic, but I'd appreciate a proper explanation for this apparent magic :) (and why these programs can do it on any platform, even Windows).
As you surmise, it is using /dev/tty, which is specified this way:
In each process, [/dev/tty is] a synonym for the controlling terminal associated with the process group of that process, if any. It is useful for programs or shell procedures that wish to be sure of writing messages to or reading data from the terminal no matter how output has been redirected. It can also be used for programs that demand the name of a file for output, when typed output is desired and it is tiresome to find out what terminal is currently in use.
No real "magic" beyond that.
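A minimal way to demonstrate the trick (a sketch, not mysql's actual code) is to open /dev/tty directly; Python's own getpass module does essentially this for password prompts:

```python
import os

def tty_status():
    """Try to open the controlling terminal directly, bypassing any
    stdin/stdout redirection -- the same trick mysql and ssh rely on."""
    try:
        fd = os.open("/dev/tty", os.O_RDWR)
    except OSError:
        # No controlling terminal (e.g. running under cron or a daemon).
        return "no controlling terminal"
    os.close(fd)
    return "controlling terminal available"

print(tty_status())
```

Run this with stdin redirected from a file and it still reports a terminal when one is attached, because /dev/tty resolves to the controlling terminal of the process, not to file descriptor 0.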