Python Piping 'type' question - command-line

>> type countlines.py | python countlines.py
Can someone explain what "type" does? Does it just type out the contents of the file? I can't find any documentation.

On Windows, 'type' is the equivalent of the 'cat' command on Linux, and yes - it just prints out the contents of the file.
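So the pipeline from the question simply feeds the script its own source text on standard input. On Linux the equivalent would be:
cat countlines.py | python countlines.py
# or, without the extra process:
python countlines.py < countlines.py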

For completeness: bash also has an unrelated shell builtin named type. From the bash man page:
type [-aftpP] name [name ...]
With no options, indicate how each name would be interpreted if used as a command name. If the -t option is used, type prints a string which is one of alias, keyword, function, builtin, or file if name is an alias, shell reserved word, function, builtin, or disk file, respectively. If the name is not found, then nothing is printed, and an exit status of false is returned. If the -p option is used, type either returns the name of the disk file that would be executed if name were specified as a command name, or nothing if ``type -t name'' would not return file. The -P option forces a PATH search for each name, even if ``type -t name'' would not return file. If a command is hashed, -p and -P print the hashed value, not necessarily the file that appears first in PATH. If the -a option is used, type prints all of the places that contain an executable named name. This includes aliases and functions, if and only if the -p option is not also used. The table of hashed commands is not consulted when using -a. The -f option suppresses shell function lookup, as with the command builtin. type returns true if all of the arguments are found, false if any are not found.
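A couple of quick illustrations of the bash builtin (exact results depend on your own shell setup and PATH, so treat the output as representative):
$ type -t cd
builtin
$ type -t grep
file
$ type -a python
python is /usr/bin/python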

PowerShell variable replacement not working from command line

I have the following command I want to run from PowerShell:
docker run -v C:\src\docker_certs:/root/.dotnet/https -it MyContainer:MyTag /bin/bash
When I run that, it works perfectly. (It mounts the source folder as a volume at the destination path inside the container.)
But when I run this:
docker run -v $env:DOCKER_CERTS_PATH:/root/.dotnet/https -it MyContainer:MyTag /bin/bash
The volume does not get mounted.
I run this to check the value:
echo $env:DOCKER_CERTS_PATH
And it returns:
C:\src\docker_certs
As I understood things, it should have replaced $env:DOCKER_CERTS_PATH with its value, C:\src\docker_certs, in the second command.
How can I get a PowerShell reference to an environment variable to expand when I run a command?
Enclose the environment-variable reference in {...}:
docker run -v ${env:DOCKER_CERTS_PATH}:/root/.dotnet/https ...
Alternatively, `-escape (backtick-escape) the : character that follows the environment-variable reference:
docker run -v $env:DOCKER_CERTS_PATH`:/root/.dotnet/https ...
As for what you tried:
docker run -v $env:DOCKER_CERTS_PATH:/root/.dotnet/https ...
If you don't use {...} to explicitly delineate a variable name, PowerShell may have a different idea of where the variable name ends than you do.
As an alternative to using {...}, you can `-escape the first character you don't want to be considered part of the variable name.
Note that your command argument is in this case implicitly treated as if it were enclosed in "...", so the above applies to expandable strings ("...") too.
For a comprehensive discussion of how unquoted tokens are parsed as command arguments, see this answer.
In the case at hand, the : that follows $env:DOCKER_CERTS_PATH is not considered the end of the variable reference; instead, it is considered part of the variable name, so that PowerShell looks for an environment variable (env:) literally named DOCKER_CERTS_PATH: (sic).
Since no such environment variable (presumably) exists, $env:DOCKER_CERTS_PATH: expands to the empty string and all that is passed to docker is /root/.dotnet/https.
You can verify that DOCKER_CERTS_PATH: is a valid environment variable name as follows:
PS> $env:DOCKER_CERTS_PATH: = 'hi'; $env:DOCKER_CERTS_PATH:
hi
By contrast, a regular (shell) variable is not permitted to contain :, because that : - in the absence of a namespace prefix such as env: - is itself considered a namespace prefix, which fails, because then the variable-name part is missing:
PS> $DOCKER_CERTS_PATH: = 'hi' # BREAKS, even with {...}
Variable reference is not valid. ':' was not followed by a valid variable name character. Consider using ${} to delimit the name.
The first : in a variable identifier is invariably considered the end of the namespace identifier, which must refer to an existing PowerShell drive name, as reported by Get-PSDrive.
This notation is called namespace variable notation, as explained in this answer.

"log=..." command-line parameter to send script output to STDOUT? [duplicate]

I'm working with a command line utility that requires passing the name of a file to write output to, e.g.
foo -o output.txt
The only thing it writes to stdout is a message that indicates that it ran successfully. I'd like to be able to pipe everything that is written to output.txt to another command line utility. My motivation is that output.txt will end up being a 40 GB file that I don't need to keep, and I'd rather pipe the streams than work on massive files in a stepwise manner.
Is there any way in this scenario to pipe the real output (i.e. output.txt) to another command? Can I somehow magically pass stdout as the file argument?
Solution 1: Using process substitution
The most convenient way of doing this is by using process substitution. In bash the syntax looks as follows:
foo -o >(other_command)
(Note that this is a bashism. There are similar solutions for other shells, but the bottom line is that it's not portable.)
Solution 2: Using named pipes explicitly
You can do the above explicitly / manually as follows:
Create a named pipe using the mkfifo command.
mkfifo my_buf
Launch your other command with that file as input
other_command < my_buf
Execute foo and let it write its output to my_buf
foo -o my_buf
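Putting the three steps together (the reader runs in the background so a single shell isn't left blocked on the FIFO; the names are the same placeholders used above):
mkfifo my_buf
other_command < my_buf &   # reader: blocks until a writer opens the pipe
foo -o my_buf              # writer: runs concurrently with the reader
rm my_buf                  # clean up the named pipe afterwards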
Solution 3: Using /dev/stdout
You can also use the device file /dev/stdout as follows
foo -o /dev/stdout | other_command
Named pipes work fine, but you have a nicer, more direct syntax available via bash process substitution that has the added benefit of not using a permanent named pipe that must later be deleted (process substitution uses temporary named pipes behind the scenes):
foo -o >(other command)
Also, should you want to pipe the output to your command and also save the output to a file, you can do this:
foo -o >(tee output.txt) | other command
The simplest answer of all is to pass /dev/stdout directly as the output file:
foo -o /dev/stdout
You could use the magic of UNIX and create a named pipe :)
Create the pipe
$ mknod mypipe p
Start the process that reads from the pipe
$ second-process < mypipe
Start the process that writes into the pipe
$ foo -o mypipe
foo -o >(cat)
if for some reason you don't have permission to write to /dev/stdout. (Note the >(...) form: it hands foo a writable path, and cat copies whatever foo writes there to stdout.)
If you want the output to appear on the terminal rather than in a pipe, you can also pass /dev/tty as the output filename, analogous to using /dev/null when you want no output at all. (Note that /dev/tty writes directly to the terminal, so, unlike /dev/stdout, it cannot be captured with |.)

How to add tab-completion to command with two arguments (and tab complete both)?

Let's say we have a command called command. This command has an option -o that takes 2 arguments. I'd like to add tab-completion to both of those arguments.
I've tried
complete -c command -x -s o -a "complete first arg"
but I cannot add tab-completion to the second argument.
I would like to autocomplete the command when no option is specified. This works fine:
complete -c command -a "no option completion"
but when I hit tab after the first argument to the -o option, those completions are shown,
like this:
command -o "first" <tab>
no
option
completion
If I cannot add completion for the second argument, I'd like to at least remove those completions.
This command has an option -o that takes 2 arguments.
That's quite unusual. Are you sure about that?
Usually, you'll either have options that take one argument, or options that act as "flags" and change how the other arguments are handled, so you would just check for their presence.
Old-style options like "-o" also aren't as common as "--gnu-style" long options or "-s" short options, so I suggest double-checking.
complete -c command -a "no option completion"
This means to offer "no", "option" and "completion" if the command is "command".
There is no condition specified, so these are always offered.
What you want is to use the "--condition" (or "-n") option to complete. This takes a script (as a string) that is executed. If it returns 0 (i.e. true), the corresponding completion (the rest of that complete invocation - the option and arguments) is offered.
Something like
# The first condition is just to see that `commandline -opc`,
# which tokenizes (-o) all tokens of the current process (-p)
# up to (but not including) the cursor (-c)
# returns just one token - which must be the command.
#
# Alternatively this condition could also be
# the inverse of the other conditions
# (offer this if nothing else would be)
complete -c command -n 'test (count (commandline -opc)) -lt 2' -a 'stuff for no argument'
# The first argument to the option is handled via "-a" on the option itself
complete -c command -o the-option -a 'the first argument'
# The second argument needs to be offered
# if the second-to-last token is the option.
#
# This is incomplete, because theoretically a token that
# _looks_ like the option could be offered as an argument to _another_ option.
complete -c command -n 'set -l option (commandline -opc)[-2]; test "$option" = "-o"' -a 'the second argument'

Passing optional parameters to rundeck script

I have a python script that I would like to run using Rundeck. It is invoked as follows:
createInstance.py [-n <name>] <env> <version>
Where name is optional and env and version are required.
e.g. if I want to call the script with a name I would call:
createInstance.py -n test staging 1.2.3.4
If I want to default the name, I would call:
createInstance.py staging 1.2.3.4
The problem I have is that I don't know how to specify the script arguments string in Rundeck. I have a job with 3 options, one each for env, version and name, and if I define the arguments string as:
-n ${option.name} ${option.env} ${option.version}
Whenever the name is unset, rundeck calls:
createInstance.py -n staging 1.2.3.4
Instead I would like it to omit the -n. Is there any way of doing this? Right now my only option is to change the script to be more forgiving in how it handles the -n, and to always ensure it's at the end, e.g.:
createInstance.py staging 1.2.3.4 -n
createInstance.py staging 1.2.3.4 -n test
I would like to avoid making this change though, as I want to be able to use the scripts standalone as well.
Rather than use a command step, try an inline script step. Your inline script can count the number of arguments and check whether they are set. Then with that logic you can choose how to set the createInstance.py args.
As @Alex-SF suggests, I've also used an inline script for this, along with a Key Value Data log filter. The script is:
#!/bin/bash
# Parse optional parameters
# https://stackoverflow.com/questions/41233996/passing-optional-parameters-to-rundeck-script
# Arguments to this script should be in the format "flag" "value", eg "-p" ${option.name}
# If value is not missing then return will be "flag value", otherwise blank
echo -n "RUNDECK:DATA:"
while (( "$#" )); do
flag="$1"
value="$2"
if [[ -z "$value" ]] || [[ $value =~ ^\- ]]; then
# no value for this parameter (empty or picking up the next flag)
echo -n ""
shift
else
# value provided for this parameter
echo -n "$flag $value "
shift
shift
fi
done
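For a quick local sanity check outside Rundeck (the script name and the -e/-v flags here are made up; in the job they would come from the ${option.*} references), the output looks like:
$ ./parse_optional_args.sh -n "" -e staging -v 1.2.3.4
RUNDECK:DATA:-e staging -v 1.2.3.4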
And the Key Value Data log filter uses the pattern ^RUNDECK:DATA:(.*)$ with the capture named args. Then I use ${data.args*} as the input for the real command.
It's all rather messy, and I can't find any open issue requesting this as a feature (yet).
Use an inline script and the conditional variable expansion feature from bash:
createInstance.py ${RD_OPTION_NAME:+-n $RD_OPTION_NAME} $RD_OPTION_ENV $RD_OPTION_VERSION
This will omit the -n option altogether if the name option is empty ("").
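A quick illustration of the ${var:+...} expansion in plain bash (name here is just a stand-in for the RD_OPTION_NAME variable Rundeck exports):
$ name=""
$ echo createInstance.py ${name:+-n $name} staging 1.2.3.4
createInstance.py staging 1.2.3.4
$ name=test
$ echo createInstance.py ${name:+-n $name} staging 1.2.3.4
createInstance.py -n test staging 1.2.3.4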

Checking if a file is a text file without using -T?

Title is pretty self-explanatory: are there file-testing functions in Perl, or is there a built-in module that allows file-testing operations?
This is a non-issue, as -T, like all of the file test operators, is a Perl builtin.
They are documented here: perldoc -X
-X FILEHANDLE
-X EXPR
-X DIRHANDLE
-X
A file test, where X is one of the letters listed below. This unary operator takes one argument, either a filename, a filehandle, or a dirhandle, and tests the associated file to see if something is true about it. If the argument is omitted, tests $_, except for -t, which tests STDIN. Unless otherwise documented, it returns 1 for true and '' for false, or the undefined value if the file doesn't exist. Despite the funny names, precedence is the same as any other named unary operator.
...
-T File is an ASCII text file (heuristic guess).
-B File is a "binary" file (opposite of -T).
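As a minimal command-line sketch (somefile.txt is just a placeholder, and the printed result of course depends on what the file actually contains):
$ perl -e 'print( (-T $ARGV[0]) ? "text\n" : "binary or missing\n" )' somefile.txt
text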
The "file test" functions available in Perl are part of the programming language itself. Based on what you're saying and from the comments on this page, it may be that you have been "asked not to use external commands" because someone thinks that the -T flag is relying on something that belongs to the underlying environment and not the Perl language.
-T is part of the -X file test unary operators which are inherent to Perl:
http://perldoc.perl.org/functions/-X.html
Underlying the -T operator (specifically) is the function pp_fttext, which lives in pp_sys.c. These are part of the underlying code that comprises Perl, and you can verify this by looking in the root directory of the Perl source distribution:
http://www.perl.org/get.html
It may be that the only way to do what you were originally asking (how to do this without -T) is to do what you were asked not to do (use something external to Perl to perform the test).