I have a script that makes use of stdin. It also includes internal calls to curl that are constructed from the stdin data and that require password authentication via form variables in the URL.
I'd like to be able to type my password when I run the script, rather than storing it somewhere, but stdin is "taken". What's a good way to write a script like this? I'd be interested in either a good way to keep it in the filesystem or if there's some conceivable way to get it as live input without fouling up the incoming pipe.
Read it from the TTY:
if terminal=$(tty < /dev/tty || tty || tty 0<&1 || tty 0<&2) 2> /dev/null
then
    echo "Enter password: " > "$terminal"
    IFS= read -r password < "$terminal"
else
    echo "There is no terminal to read a password from." >&2
    exit 1
fi
This tries to get the terminal associated with /dev/tty, or stdin/stdout/stderr if the OS doesn't support it, and uses it to read and write directly to the user.
If it doesn't have to be portable, e.g. when using Bash on Linux or FreeBSD, you can simplify and improve this:
if ! IFS= read -rs -p "Enter password: " password < /dev/tty 2> /dev/tty
then
    echo "Password entry failed"
    exit 1
fi
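Tying this back to the question: here is a minimal sketch of a script that takes its data on stdin and its password from the terminal, then feeds both to curl. The URL and form field names are made-up placeholders, and -G with --data-urlencode is just one way to get the variables into the URL:

#!/bin/bash
# Password comes from the terminal; stdin remains free for the piped data.
IFS= read -rs -p "Enter password: " password < /dev/tty 2> /dev/tty
echo > /dev/tty

# Each line of the piped-in data drives one curl call.
while IFS= read -r line; do
    curl -s -G --data-urlencode "pass=$password" \
               --data-urlencode "item=$line" \
               "https://example.com/submit"
done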
At risk of muddying the question, I needed to switch over to perl and thought it might be useful to some people who find themselves here if I posted a perl translation of "that other guy"'s answer. (Don't know if this is the cleanest way, but it worked for me.)
use Term::ReadKey;

# If /dev/tty can't be opened, point Term::ReadLine at STDIN/STDERR instead.
open (TTY, "+< /dev/tty")
    or eval 'sub Term::ReadLine::findConsole { ("&STDIN", "&STDERR") }';
die $@ if $@;

print TTY "Enter your password: ";
ReadMode('noecho', *TTY);     # don't echo the password as it's typed
my $password = ReadLine(0, *TTY);
chomp $password;
ReadMode('normal', *TTY);     # restore echo
print TTY "\n";
close (TTY);
I want to execute some code before my script runs (to redirect stderr to stdout).
perl -e "BEGIN {open STDERR, '>&STDOUT'}" perl.pl
But when -e is present, no file will be executed. I know $Config{sitelib}/sitecustomize.pl can pre-execute some code, and the -f option can disable it, but this way is inflexible. In most cases I do not need to add extra code, and I don't want to add -f every time.
I cannot use shell to redirect. I want to set org-babel-perl-command in emacs org mode so that stdout and stderr can be printed in the same way, instead of opening another window to print stderr. org-babel-perl-command should be like perl.
For example, org-babel-python-command can be set to python -i -c "import sys; sys.stderr = sys.stdout".
perl -e'
    open( STDERR, ">&STDOUT" );
    do( shift( @ARGV ) );
' perl.pl
(Error handling needed.)
For the case in question, the following would suffice:
perl perl.pl 2>&1
Maybe even
./perl.pl 2>&1
You could just make a wrapper for perl. For example:
#!/bin/bash
exec perl "$@" 2>&1
Then make it executable and use it instead of perl in your org-babel-perl-command. Ensure it can be found in your PATH, or use its full path.
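For example, assuming you saved the wrapper as ~/bin/perlwrap (a name made up here):

chmod +x ~/bin/perlwrap

# Quick sanity check: the warning is merged onto stdout,
# so both lines survive even when stderr is discarded.
echo 'warn "to stderr\n"; print "to stdout\n";' > /tmp/t.pl
~/bin/perlwrap /tmp/t.pl 2> /dev/null

Then set org-babel-perl-command to "perlwrap".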
I am not familiar with Perl. I am reading an installation guide at the moment, and the following Linux command has come up:
perl -p -i -e "s/enforcing/disabled/" /etc/selinux/config
Now, I am trying to understand this. Here is my understanding so far:
-e simply allows for executing whatever follows
-p puts the commands that follow -e in a loop. Now this is strange to me, as this command seems to be trying to say: write "s/enforcing/disabled/" into /etc/selinux/config. Then again, where is the "write" command? And what is this -i (in-place) good for?
-p changes
s/enforcing/disabled/
to something equivalent to
while (<>) {
    s/enforcing/disabled/;
    print;
}
which is short for
while (defined( $_ = <ARGV> )) {
    $_ =~ s/enforcing/disabled/;
    print($_);
}
What this does:
It reads a line from ARGV into $_. ARGV is a special file handle that reads from each of the files specified as arguments (or from STDIN if no files are provided).
If EOF has been reached, the loop and therefore the program exits.
It replaces the first occurrence of enforcing with disabled.
It prints out the modified line to the default output handle. Because of -i, this is a handle to a new file with the same name as the one from which the program is currently reading.*
Repeat.
For example,
$ cat a
foo
bar enforcing the law
baz
enforcing enforcing
$ perl -pe's/enforcing/disabled/' -i a
$ cat a
foo
bar disabled the law
baz
disabled enforcing
* — In old versions of Perl, the old file has already been deleted at this point, but it's still accessible as long as there's an open file handle to it. In very new versions of Perl, this writes to a temporary file that will later overwrite the file from which the program is reading.
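Incidentally, if you'd rather keep a copy of the original before editing a system file in place, -i accepts an optional backup suffix:

# Edits the file in place and keeps the original as config.bak
perl -p -i.bak -e "s/enforcing/disabled/" /etc/selinux/config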
To find out exactly what Perl is going to do, you can use the O module
perl -MO=Deparse -p -i -e "s/enforcing/disabled/" file
outputs
BEGIN { $^I = ""; }
LINE: while (defined($_ = readline ARGV)) {
    s/enforcing/disabled/;
}
continue {
    die "-p destination: $!\n" unless print $_;
}
-e syntax OK
I have a very simple Perl script that fails with this error message:
sh: 1: Syntax error: Bad fd number
Here is the script (two lines)
#!/usr/bin/perl
system("xterm >& /dev/null &");
If I run the same xterm command from the command-line, it works. From the Perl script, it doesn't. What is wrong?
system(EXPR)
is short for[1]
system("/bin/sh", "-c", EXPR)
In other words, it takes a Bourne shell command.
xterm >& /dev/null &
isn't a valid Bourne shell command. You want
xterm >/dev/null 2>&1 &
Maybe you used a different shell when you tested it outside of Perl.
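You can reproduce the difference directly if your /bin/sh is dash (the default on Debian and Ubuntu, and a likely source of this exact message):

$ sh -c 'xterm >& /dev/null &'
sh: 1: Syntax error: Bad fd number
$ bash -c 'xterm >& /dev/null &'    # bash accepts csh-style >&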
[1] Technically, it's closer to
use Config qw( );
system($Config::Config{sh}, "-c", EXPR)
Except on Windows.
Firstly, the preferred syntax for redirecting both stdout and stderr in Bash is &>, not >&, because the latter can be confused with other redirection forms.
Secondly, system uses /bin/sh which may behave differently than your default shell.
Try writing it out explicitly, as in
system("xterm >/dev/null 2>&1 &");
or skipping the shell altogether.
use POSIX qw( );

if (fork() == 0) {
    open STDOUT, '>', '/dev/null';
    open STDERR, '>&', *STDOUT;
    exec "xterm";
    POSIX::_exit(1);   # only reached if exec fails; skips END blocks
}
I am sort of new to Perl. I was trying to write a script which will take a mysqldump and restore it in a new database. The main idea is to migrate a DB from one server to another.
Here's the script that I wrote:
#!/usr/bin/perl
use warnings;
print "Starting the migration process!\n";
print "Enter the source server address, make sure you enter the FQDN of the server";
$source_address = promt ("Source server address: ");
check_string($source_address);
print "Enter the destination server address, make sure you enter the FQDN of the server";
$destination_address = promt ("Destination server address:");
check_string($destination_address);
print "Enter the Source server password for the root user";
$source_password = promt ("Source server address:");
check_string($source_password);
print "Enter the destination server password for the root user";
$destination_password = promt ("Destination server address:");
check_string($destination_password);
$current_dir = cwd();
system("mysqldump --single-transaction -u root -p$source_password --force -h $source_address -A -R -E --triggers
--routines --max_allowed_packet=512M | gzip -c >$current_dir/old_db_dump.sql") or die "system call to create Mysqldump failed: $?";
system("pt-show-grants -uroot -p$source_password -h $source_address > $current_dir/old_grants.sql") or die "system call to create grant failed: $?";
system("mysql -u root -p$destination_password -h $destination_address < $current_dir/old_db_dump.sql") or die "System call to import the sqldump failed: $?";
system("mysql -u root -p$destination_password -h $destination_address < $current_dir/old_grants.sql") or die "System call to import the grants failed: $?";
# A function that checks if the passed string is null or not
sub check_string{
    $string_to_check = $_[0];
    if ($string_to_check eq '') {
        print "The entered value is empty, the program will exit now, re-run the program";
        exit 0;
    }
}
sub prompt {
    my ($text) = @_;
    print $text;
    my $answer = <STDIN>;
    chomp $answer;
    return $answer;
}
But when I try to execute the code, I end up with the following error:
Starting the migration process!
Undefined subroutine &main::promt called at migrate_mysql.pl line 26.
Enter the source server address, make sure you enter the FQDN of the server
For writing the Prompt function, I followed the tutorial mentioned in the post here: http://perlmaven.com/subroutines-and-functions-in-perl
I do not know why I am getting this error here. Do I have to include some packages?
Also it would be nice if you could comment on the system block of the code:
system("mysqldump --single-transaction -u root -p$source_password --force -h $source_address -A -R -E --triggers
--routines --max_allowed_packet=512M | gzip -c >$current_dir/old_db_dump.sql") or die "system call to create Mysqldump failed: $?";
system("pt-show-grants -uroot -p$source_password -h $source_address > $current_dir/old_grants.sql") or die "system call to create grant failed: $?";
system("mysql -u root -p$destination_password -h $destination_address < $current_dir/old_db_dump.sql") or die "System call to import the sqldump failed: $?";
system("mysql -u root -p$destination_password -h $destination_address < $current_dir/old_grants.sql") or die "System call to import the grants failed: $?";
Am I doing it in the right way? Am I passing the variable values in a correct manner?
From this error message:
Undefined subroutine &main::promt called at migrate_mysql.pl line 26.
You should look at line 26. Which is odd, because in the code you've posted the error isn't on line 26, but here:
$source_address = promt ("Source server address: ");
If I run your code I get:
Undefined subroutine &main::promt called at line 9.
You've got "promt", not "prompt", so you are calling a subroutine that is undefined.
You should really also add use strict; to your code, and then rejig it - it'll generate a lot more errors initially, but it'll avoid some real gotchas in future if you spell a variable incorrectly.
It's quite easy in your code - just put my in front of the first use of each variable. You've been good with scoping otherwise, but without my, once you first use $string_to_check it remains visible to the rest of the program, which is just a bit messy and can lead to some odd bugs.
Is there an idiomatic way to simulate Perl's diamond operator in bash? With the diamond operator,
script.sh | ...
reads stdin for its input and
script.sh file1 file2 | ...
reads file1 and file2 for its input.
One other constraint is that I want to use the stdin in script.sh for something else other than input to my own script. The below code does what I want for the file1 file2 ... case above, but not for data provided on stdin.
command - "$@" <<EOF
some_code_for_first_argument_of_command_here
EOF
I'd prefer a Bash solution but any Unix shell is OK.
Edit: for clarification, here is the content of script.sh:
#!/bin/bash
command - "$@" <<EOF
some_code_for_first_argument_of_command_here
EOF
I want this to work the way the diamond operator would work in Perl, but it only handles filenames-as-arguments right now.
Edit 2: I can't do anything that goes
cat XXX | command
because the stdin for command is not the user's data. The stdin for command is my data in the here-doc. I would like the user data to come in on the stdin of my script, but it can't be the stdin of the call to command inside my script.
Sure, this is totally doable:
#!/bin/bash
cat "$@" | some_command_goes_here
Users can then call your script with no arguments (or '-') to read from stdin, or multiple files, all of which will be read.
If you want to process the contents of those files (say, line-by-line), you could do something like this:
cat "$@" | while IFS= read -r line; do
    echo "I read: $line"
done
Edit: Changed $* to "$@" to handle spaces in filenames, thanks to a helpful comment.
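Usage then looks like this:

$ echo data | ./script.sh           # no arguments: reads stdin
$ ./script.sh file1 file2           # reads both files in order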
Kind of cheesy, but how about
cat file1 file2 | script.sh
I am (like everyone else, it seems) a bit confused about exactly what the goal is here, so I'll give three possible answers that may cover what you actually want. First, the relatively simple goal of getting the script to read from either a list of files (supplied on the command line) or from its regular stdin:
if [ $# -gt 0 ]; then
    exec < <(cat "$@")
fi
# From this point on, the script's stdin is redirected from the files
# (if any) supplied on the command line
Note: the double-quoted use of $@ is the best way to avoid problems with funny characters (e.g. spaces) in filenames -- $* and unquoted $@ both mess this up. The <() trick I'm using here is a bash-only feature; it fires off cat in the background to feed data from the files supplied on the command line, and then we use exec to replace the script's stdin with the output from cat.
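As a self-contained sketch, with nl standing in for whatever stdin-reading processing your script does:

#!/bin/bash
if [ $# -gt 0 ]; then
    exec < <(cat "$@")
fi
nl   # sees either the named files or the incoming pipe

Both ./script.sh file1 file2 and echo hi | ./script.sh then feed nl the same way.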
...but that doesn't seem to be what you actually want. What you seem to really want is to pass the supplied filenames or the script's stdin as arguments to a command inside the script. This requires sort of the opposite process: converting the script's stdin into a file (actually a named pipe) whose name can be passed to the command. Like this:
if [[ $# -gt 0 ]]; then
    command "$@" <<EOF
here-doc goes here
EOF
else
    command <(cat) <<EOF
here-doc goes here
EOF
fi
This uses <() to launder the script's stdin through cat to a named pipe, which is then passed to command as an argument. Meanwhile, command's stdin is taken from the here-doc.
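To see the plumbing at work, here's a toy version with wc -l standing in for command:

#!/bin/bash
# <(cat) turns this script's stdin into a named pipe whose name is passed
# to wc as an argument; the here-doc occupies wc's own stdin (which wc
# ignores here, since it was given a file argument).
wc -l <(cat) <<EOF
this here-doc is on wc's stdin, not part of the counted data
EOF

Running printf 'a\nb\n' | ./demo.sh prints 2 /dev/fd/63 (the descriptor number may vary).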
Now, I think that's what you want to do, but it's not quite what you've asked for, which is to both redirect the script's stdin from the supplied files and pass stdin to the command inside the script. This can be done by combining the above techniques:
if [ $# -gt 0 ]; then
    exec < <(cat "$@")
fi
command <(cat) <<EOF
here-doc goes here
EOF
...although I can't think why you'd actually want to do this.
The Perl diamond operator essentially loops across all the command line arguments, treating each as a filename. It opens each file and reads them line-by-line. Here's some bash code that will do approximately the same.
for f in "$@"
do
    # Do something with $f, such as...
    cat "$f" | command1 | command2
    # -or-
    command1 < "$f"
    # -or-
    # Read $f line-by-line
    cat "$f" | while IFS= read -r line_from_f
    do
        # Do stuff with $line_from_f
        :
    done
done
You want to take the first argument and do something with it, and then either read from any files specified, or from stdin if no files are given?
Personally, I'd suggest using getopts to indicate arguments using the "-a value" syntax to help disambiguate, but that's just me. Here's how I'd do it in bash without getopts:
firstarg=${1:?usage: $0 arg [file1 .. fileN]}
shift
typeset -a files
if [[ $# -gt 0 ]]
then
    files=( "$@" )
else
    files=( "/dev/stdin" )
fi
for file in "${files[@]}"
do
    whatever_you_want < "$file"
done
The :? operator will die if there are no args specified, since you seem to want at least one arg either way. After grabbing that, shift the args over by one, and then either use the remaining args as your file list, or the bash special filehandle "/dev/stdin" if there were no other args.
I think that the "if no files are specified, use /dev/stdin - otherwise use the files on the command line" piece is probably what you're looking for, but the rest of the code is at least useful for context.
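Assuming the script is saved as script.sh, the three cases look like:

$ ./script.sh                        # no args: dies with the usage message
$ echo data | ./script.sh myarg      # arg only: reads stdin via /dev/stdin
$ ./script.sh myarg file1 file2      # reads the named files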
Also a little cheesy, but how about this:
if [[ $# -eq 0 ]]
then
    : # read from stdin
else
    : # read from "$@" (args)
fi
If you need to read and process line-by-line (which is likely) and don't want to copy/paste the same code twice (which is likely), define a function in your script and just pass the lines one-by-one to this function, and process them in said function.
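A minimal sketch of that approach, with process_line as a made-up name for your per-line handler:

#!/bin/bash
process_line() {
    # Replace with your real per-line processing.
    echo "read: $1"
}

if [[ $# -eq 0 ]]; then
    # No files given: read the script's own stdin.
    while IFS= read -r line; do process_line "$line"; done
else
    # Files given: concatenate them and read line-by-line.
    while IFS= read -r line; do process_line "$line"; done < <(cat "$@")
fi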
Why not use cat "$@" in the script? For example:
x=`cat "$@"`
echo "$x"