sh: variable substitution with heredoc

cat "${pos}" | /usr/bin/iconv -f CP1251 -t UTF-8 | uniq | sed -En "/^CLIENT_ID.*/!p" | while read line
do
.....
......
cat >> "$TMPFILE" << EOF
INSERT INTO ......;
EOF
done
As you can see, each iteration writes an SQL statement to a temp file.
I launched this script from a regular interactive shell and got the expected output. Launched from a cron job - nothing.
After investigating I found the problem: when I use $TMPFILE without the double quotes, the script works fine. Why does this happen?
OS: FreeBSD, Bourne shell.

IIRC, cron doesn't source all the files that a login shell does, so you end up with different settings for environment variables. It could be, for example, that the path $TMPFILE points to contains spaces (or is empty) when the script runs from cron.
Also, on some systems (depending on setup), cron uses a different shell. So if you start your script from the command line, /usr/bin/sh might be used, whereas when started by cron, /bin/sh is used. (I have no experience with *BSD, but I have observed this on Linux.)
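A quick way to verify this is to capture the environment cron actually gives your script and compare it with your login shell's. A debugging sketch (the crontab entry and file names are illustrative):
# temporary crontab entry: dump cron's environment once a minute
* * * * * env > /tmp/cron-env 2>&1
# then, from your interactive shell:
env > /tmp/shell-env
diff /tmp/cron-env /tmp/shell-env
# more robust: make the script self-contained by setting what it
# needs explicitly at the top, instead of inheriting it
PATH=/bin:/usr/bin:/usr/local/bin
TMPFILE=$(mktemp /tmp/sql.XXXXXX) || exit 1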

Related

setup new database in ubuntu using a script [duplicate]

I have a script where I need to start a command, then pass some additional commands as commands to that command. I tried
su
echo I should be root now:
who am I
exit
echo done.
... but it doesn't work: the su succeeds, but then the command prompt is just staring at me. If I type exit at the prompt, the echo and who am i etc. start executing! And the echo done. doesn't get executed at all.
Similarly, I need for this to work over ssh:
ssh remotehost
# this should run under my account on remotehost
su
## this should run as root on remotehost
whoami
exit
## back
exit
# back
How do I solve this?
I am looking for answers which solve this in a general fashion, and which are not specific to su or ssh in particular. The intent is for this question to become a canonical for this particular pattern.
Adding to tripleee's answer:
It is important to remember that the section of the script formatted as a here-document for another shell is executed in a different shell with its own environment (and maybe even on a different machine).
If that block of your script contains parameter expansion, command substitution, and/or arithmetic expansion, then you must use the here-document facility of the shell slightly differently, depending on where you want those expansions to be performed.
1. All expansions must be performed within the scope of the parent shell.
Then the delimiter of the here document must be unquoted.
command <<DELIMITER
...
DELIMITER
Example:
#!/bin/bash
a=0
mylogin=$(whoami)
sudo sh <<END
a=1
mylogin=$(whoami)
echo a=$a
echo mylogin=$mylogin
END
echo a=$a
echo mylogin=$mylogin
Output:
a=0
mylogin=leon
a=0
mylogin=leon
2. All expansions must be performed within the scope of the child shell.
Then the delimiter of the here document must be quoted.
command <<'DELIMITER'
...
DELIMITER
Example:
#!/bin/bash
a=0
mylogin=$(whoami)
sudo sh <<'END'
a=1
mylogin=$(whoami)
echo a=$a
echo mylogin=$mylogin
END
echo a=$a
echo mylogin=$mylogin
Output:
a=1
mylogin=root
a=0
mylogin=leon
3. Some expansions must be performed in the child shell, others in the parent.
Then the delimiter of the here document must be unquoted and you must escape those expansion expressions that must be performed in the child shell.
Example:
#!/bin/bash
a=0
mylogin=$(whoami)
sudo sh <<END
a=1
mylogin=\$(whoami)
echo a=$a
echo mylogin=\$mylogin
END
echo a=$a
echo mylogin=$mylogin
Output:
a=0
mylogin=root
a=0
mylogin=leon
A shell script is a sequence of commands. The shell will read the script file, and execute those commands one after the other.
In the usual case, there are no surprises here; but a frequent beginner error is assuming that some commands will take over from the shell, and start executing the following commands in the script file instead of the shell which is currently running this script. But that's not how it works.
Basically, scripts work exactly like interactive commands, but it's important to understand precisely how. Interactively, the shell reads a command (from standard input), runs that command (with input from standard input), and when it's done, it reads another command (from standard input).
Now, when executing a script, standard input is still the terminal (unless you used a redirection) but the commands are read from the script file, not from standard input. (The opposite would be very cumbersome indeed - any read would consume the next line of the script, cat would slurp all the rest of the script, and there would be no way to interact with it!) The script file only contains commands for the shell instance which executes it (though you can of course still use a here document etc to embed inputs as command arguments).
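A two-line script makes this concrete (a minimal sketch):
#!/bin/sh
# 'read' takes its input from standard input (the terminal, in an
# interactive session) - it does NOT consume the next line of this file.
read answer
echo "you typed: $answer"
Run it from a terminal and the shell pauses, waiting for you to type a line; the echo line is executed afterwards as a command, never read as input.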
In other words, these "misunderstood" commands (su, ssh, sh, sudo, bash etc) when run alone (without arguments) will start an interactive shell, and in an interactive session, that's obviously fine; but when run from a script, that's very often not what you want.
All of these commands can accept commands in ways other than from an interactive terminal session. Typically, each command supports passing commands as options or arguments:
su root -c 'who am i'
ssh user@remote uname -a
sh -c 'who am i; echo success'
Many of these commands will also accept commands on standard input:
printf 'uname -a; who am i; uptime' | su
printf 'uname -a; who am i; uptime' | ssh user@remote
printf 'uname -a; who am i; uptime' | sh
which also conveniently allows you to use here documents:
ssh user@remote <<'____HERE'
uname -a
who am i
uptime
____HERE
sh <<'____HERE'
uname -a
who am i
uptime
____HERE
For commands which accept a single command argument, that command can be sh or bash with multiple commands:
sudo sh -c 'uname -a; who am i; uptime'
As an aside, you generally don't need an explicit exit because the command will terminate anyway when it has executed the script (sequence of commands) you passed in for execution.
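A related tip: if you need to pass local data into such a command without worrying about quoting or injection, you can keep the child script in fixed single quotes and hand the data over as positional parameters (a sketch; the directory name is illustrative):
dir=/var/log
# The single-quoted script is not expanded locally; inside the child
# shell, $0 is 'sh' and $1 is the value we passed for "$dir".
sudo sh -c 'echo "running as $(whoami) in $1"; ls "$1"' sh "$dir"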
If you want a generic solution which will work for any kind of program, you can use the expect command.
Extract from the manual page:
Expect is a program that "talks" to other interactive programs according to a script. Following the script, Expect knows what can be expected from a program and what the correct response should be. An interpreted language provides branching and high-level control structures to direct the dialogue. In addition, the user can take control and interact directly when desired, afterward returning control to the script.
Here is a working example using expect:
set timeout 60
spawn sudo su -
expect "*?assword" { send "*secretpassword*\r" }
send_user "I should be root now:"
expect "#" { send "whoami\r" }
expect "#" { send "exit\r" }
send_user "Done.\n"
exit
The script can then be launched with a simple command:
$ expect -f custom.script
You can view a full example in the following page: http://www.journaldev.com/1405/expect-script-example-for-ssh-and-su-login-and-running-commands
Note: The answer proposed by @tripleee only works if standard input can be read once at the start of the command, or if a tty has been allocated; it won't work for interactive programs.
Examples of the errors you get if you use a pipe:
echo "su whoami" | ssh remotehost
--> su: must be run from a terminal
echo "sudo whoami" | ssh remotehost
--> sudo: no tty present and no askpass program specified
In SSH, you can force a TTY allocation by passing -t multiple times (ssh -tt), but when sudo asks for the password, it will still fail.
Without a program like expect, any call to a function/program that reads from stdin will steal the following lines of the stream as its input, so the commands that come after it never execute:
ssh user@host <<'____HERE'
echo "Enter your name:"
read name
echo "ok."
____HERE
--> The `echo "ok."` line is consumed by the read command as the value of name, instead of being executed.
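If you know in advance what the program will ask for, you can still avoid expect by putting that input directly in the stream: the remote shell and read share the same standard input, so read simply consumes the next line (a sketch; the host name is illustrative):
ssh user@host <<'____HERE'
echo "Enter your name:"
read name
Alice
echo "ok, $name."
____HERE
Here read consumes the line Alice as data, and the final echo prints ok, Alice.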

What "*#" means after executint a command in PostgreSql 10 on Windows 7?

I'm using PostgreSQL on Windows 7 through the command line. I want to import the content of different CSV files into a newly created table.
After executing the command below, the database prompt changed from:
database=#
to:
database*#
The command was:
type directory/*.csv | psql -c 'COPY sch.trips(value1, value2) from stdin CSV HEADER';
What does *# mean?
Thanks
This answer is for Linux and as such doesn't answer OP's question for Windows. I'll leave it up anyway for anyone that comes across this in the future.
You accidentally started a block comment with your type directory/*.csv. type doesn't do what you think it does. From the bash built-ins:
With no options, indicate how each name would be interpreted if used as a command name.
Try doing cat instead:
cat directory/*.csv | psql -c 'COPY sch.trips(value1, value2) from stdin CSV HEADER';
If this gives you issues because each CSV has its own header, you can also do:
for file in directory/*.csv; do cat "$file" | psql -c 'COPY sch.trips(value1, value2) from stdin CSV HEADER'; done
Type Command
The type built-in command in Bash reports how a name would be interpreted if you used it as a command. For example, using it with ssh:
$ type ssh
ssh is /usr/bin/ssh
This indicates how ssh would be interpreted when you run ssh as a command in the current Bash environment. This is useful for things like aliases. For example, ll is usually an alias for ls -l. Here's what my Bash environment had for ll:
$ type ll
ll is aliased to `ls -l --color=auto'
For you, when you pipe the result of this command to psql, it encounters the /* in the input and assumes it's a block comment, which is what the database*# prompt means (the * indicates it's waiting for the comment close pattern, */).
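You can reproduce this directly at the psql prompt, and close the comment to get back to normal (session sketch):
database=# /* this starts a block comment
database*# everything typed here is still inside the comment
database*# */
database=#
So typing */ followed by Enter will restore your database=# prompt.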
Cat Command
cat is for concatenating multiple files together. By default, it writes to standard out, so cat directory/*.csv will write each CSV file to standard out one after another. However, piping this means that each CSV's header will also be piped mid-stream of the copy. This may not be desirable, so:
For Loop
We can use for to loop over each file and individually import it. The version I have above, for file in directory/*.csv, will properly handle files with spaces. Properly formatted:
for file in directory/*.csv; do
cat "$file" | psql -c 'COPY sch.trips(value1, value2) from stdin CSV HEADER'
done
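Alternatively (a sketch, reusing the question's table and column names), psql's client-side \copy meta-command can read each file itself, avoiding the cat and the pipe entirely:
for file in directory/*.csv; do
    psql -c "\copy sch.trips(value1, value2) from '$file' CSV HEADER"
done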
References
PostgreSQL 10 Comments Documentation (postgresql.org)
type built-in Manual page (mankier.com)
cat Manual page (mankier.com)
Bash looping tutorial (tldp.org)

Why does "sed -n -i" delete existing file contents?

Running Fedora 25 Server Edition. sed --version gives me sed (GNU sed) 4.2.2, along with the usual copyright and contact info. I've created a text file with sudo vi ./potential_sed_bug. Vi shows the contents of this file (with :set list enabled) as:
don't$
delete$
me$
please$
I then run the following command:
sudo sed -n -i.bak /please/a\testing ./potential_sed_bug
Before we discuss the results, here is what the sed man page says:
-n, --quiet, --silent
suppress automatic printing of pattern space
and
-i[SUFFIX], --in-place[=SUFFIX]
edit files in place (makes backup if extension supplied). The default operation mode is to break symbolic and hard links. This can be changed with --follow-symlinks and --copy.
I've also looked at other sed command references to learn how to append with sed. Based on my understanding from the research I've done, the resulting file content should be:
don't
delete
me
please
testing
However, running sudo cat ./potential_sed_bug gives me the following output:
testing
In light of this discrepancy, is my understanding of the command I ran incorrect or is there a bug with sed/the environment?
tl;dr
Don't use -n with -i: unless you use explicit output commands in your sed script, nothing will be written to your file.
Using -i produces no stdout (terminal) output, so there's nothing extra you need to do to make your command quiet.
By default, sed automatically prints the (possibly modified) input lines to whatever its output target is, whether implied or explicitly specified: by default, to stdout (the terminal, unless redirected); with -i, to a temporary file that ultimately replaces the input file.
In both cases, -n suppresses this automatic printing, so that - unless you use explicit output functions such as p or, in your case, a - nothing gets printed to stdout / written to the temporary file.
Note that the automatic printing applies to the so-called pattern space, which is where the (possibly modified) input is held; explicit output functions such as p, a, i and c do not print to the pattern space (for potential subsequent modification), they print directly to the target stream / file, which is why a\testing was able to produce output, despite the use of -n.
Note that with -i, sed's implicit printing and explicit output commands print only to the temporary file, not also to stdout, so a command using -i is invariably quiet with respect to stdout (terminal) output; there's nothing extra you need to do.
To give a concrete example (GNU sed syntax).
Since the use of -i is incidental to the question, I've omitted it for simplicity. Note that -i prints to a temporary file first, which, on completion, replaces the original. This comes with pitfalls, notably the potential destruction of symlinks; see the lower half of this answer of mine.
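For instance, a quick way to see the symlink pitfall (GNU sed; file names are illustrative):
ln -s target.txt link.txt
sed -i 's/foo/bar/' link.txt
ls -l link.txt    # now a regular file; target.txt is untouched
The examples below return to the effect of -n itself.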
# Print input (by default), and append literal 'testing' after
# lines that contain 'please'.
$ sed '/please/ a testing' <<<$'yes\nplease\nmore'
yes
please
testing
more
# Adding `-n` suppresses the default printing, so only `testing` is printed.
# Note that the sequence of processing is exactly the same as without `-n`:
# If and when a line with 'please' is found, 'testing' is appended *at that time*.
$ sed -n '/please/ a testing' <<<$'yes\nplease\nmore'
testing
# Adding an unconditional `p` (print) call undoes the effect of `-n`.
$ sed -n 'p; /please/ a testing' <<<$'yes\nplease\nmore'
yes
please
testing
more
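Applied to the original command, the fix is simply to drop -n (and quote the sed script so the shell doesn't eat the backslash):
sudo sed -i.bak '/please/a testing' ./potential_sed_bug
After this, the file contains the four original lines with testing appended after please, and the previous contents are saved in potential_sed_bug.bak.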

Command not running from cron

I have a Perl script which runs successfully from the root cron on my Red Hat server.
However, I have added a command to the Perl script to execute an ldapsearch, and while the script works perfectly from the command line, the ldapsearch command does not work when run from cron.
I've used the full path to the ldapsearch executable, and I have tried using both system and exec before the ldapsearch command, but no luck.
The line in the Perl code does the LDAP search for the container room, greps the specific line in the results, then parses the data and cuts the first two characters of the result. The code is:
$userRoom = `exec /usr/local/bin/ldapsearch -h myldap.domain.com '(cn=$user)' room | /bin/grep -i room | /bin/grep -iv internal | /bin/cut -d'=' -f2 | /bin/cut -c 1-2`;
I'm assuming it's an environment or permissions thing. I just can't find the right answer.
Any suggestions greatly appreciated.
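The usual suspects here are cron's minimal environment and invisible error output. A debugging sketch (schedule, paths and log file are illustrative):
# give the crontab an explicit environment instead of relying on a login shell
PATH=/usr/local/bin:/usr/bin:/bin
SHELL=/bin/sh
# redirect stderr so the real error message lands somewhere you can read it
0 2 * * * /usr/bin/perl /root/myscript.pl >> /var/log/myscript.log 2>&1
Appending 2>&1 inside the backticks would likewise capture ldapsearch's own error message in $userRoom, which usually points at the missing setting.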

Executing perl script inside bash script

I inherited a long bash script that I recently needed to modify. The bash script is run as a cronjob on a daily basis. I am decent with bash scripting, but I do not know much about Perl.
I had to substitute all "rm" commands with a call to a perl script that does something similar (for security purposes). This script was not written by me, so there is no -f flag to skip the confirmation prompt. Therefore, to automate this script I pipe "yes" to the script.
Here is an example where I am sequentially deleting two directories:
echo REMOVING FILES TO SAVE DISK SPACE
echo "yes | sudo nice -n -10 perl <path_to_delete_script.pl> -dir <del_dir1>"
yes | sudo nice -n -10 perl <path_to_delete_script.pl> -dir <del_dir1>
echo "yes | sudo nice -n -10 perl <path_to_delete_script.pl> -dir <del_dir2>"
yes | sudo nice -n -10 perl <path_to_delete_script.pl> -dir <del_dir2>
echo DONE.
In my output file, I see the following:
REMOVING FILES TO SAVE DISK SPACE
yes | sudo nice -n -10 perl <path_to_delete_script.pl> -dir <del_dir1>
yes | sudo nice -n -10 perl <path_to_delete_script.pl> -dir <del_dir2>
DONE.
It does not appear that the perl script has run. Yet when I copy and paste those two commands into the terminal, they both run fine.
Any help is appreciated. Thank you in advance.
You simply do:
yes | ./myscript.pl
Thanks for all the comments. I ended up changing the group and permissions of the tool and all output files. This allowed me to run the Perl script without using sudo, which others pointed out is bad practice.
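For reference, sudo is often the culprit in cron jobs: non-interactively it cannot prompt for a password, and on systems with requiretty in sudoers it refuses to run at all. A sketch of the permissions-based fix described above (group name and paths are hypothetical):
# let the cron job's group write the tool's output, so sudo isn't needed
chgrp -R cleanup /opt/tool /data/output
chmod -R g+rwX /opt/tool /data/output
# run unprivileged; note that only root may use a negative niceness,
# so a non-negative value is used here
yes | nice -n 10 perl /opt/tool/delete_script.pl -dir /data/output/dir1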