I'm writing a script to synchronize directories with rsync over ssh. I run into trouble when I want to define a custom port. A normal working script would have this syntax:
#! /bin/sh
rval=2222
port="ssh -p $rval"
rsync --progress -av -e "$port" sflash@192.168.10.107:/home/sflash/Documents/tmp/tcopy/ /home/sflash/Documents/tmp/tcopy
The syntax for specifying a custom port is -e "ssh -p 2222". However, if I want to use a variable instead, like:
#! /bin/sh
rval=2222
port="-e \"ssh -p $rval\""
rsync --progress -av $port sflash@192.168.10.107:/home/sflash/Documents/tmp/tcopy/ /home/sflash/Documents/tmp/tcopy
this will not work, likely due to some sort of interaction with IFS. I can avoid the scenario entirely by introducing an if statement that checks whether port is defined, but I am curious about the exact reason why this fails and whether a solution exists to make this approach work.
EDIT: Sorry, I am restricted to just POSIX shell.
You haven't actually provided enough detail to be certain, but I suspect you are hitting a common misconception.
When you do:
rval=2222
rsync --progress -av -e "ssh -p $rval" src dst
rsync is invoked with 6 arguments: --progress, -av, -e, ssh -p 2222, src, and dst.
On the other hand, when you do:
port="-e \"ssh -p $rval\""
rsync --progress -av $port src dst
rsync is invoked with 8 arguments: --progress, -av, -e, "ssh, -p, 2222", src, and dst.
You do not want the double quotes to be passed to rsync, and you do not want the ssh -p 2222 to be split up into 3 arguments. One (terrible) way to do what you want is to use eval. But it seems what you really want is:
rval=2222
port="ssh -p $rval"
rsync --progress -av ${port:+-e "$port"} src dst
Now, if port is defined and not the empty string, rsync will be invoked with the additional arguments -e and ssh -p 2222 (as desired), and if port is undefined or empty, neither the -e nor the $port argument will be used.
Note that this is a case where you must not use double quotes around ${port:+-e "$port"}. If you do so, then an empty string would be passed as an argument when $port is the empty string. When $port is not the empty string, it would pass a single argument -e ssh -p 2222 rather than splitting into 2 arguments -e and ssh -p 2222.
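Putting it together, here is a minimal sketch of a complete script using that expansion (the host and paths are the ones from the question):
#! /bin/sh
# Minimal sketch of the suggestion above.
rval=2222                      # leave empty to fall back to ssh's default port
port="${rval:+ssh -p $rval}"   # remote-shell command, or the empty string

# Deliberately unquoted: when port is non-empty this expands to the two
# arguments -e and "ssh -p 2222"; when port is empty it expands to nothing.
rsync --progress -av ${port:+-e "$port"} \
    sflash@192.168.10.107:/home/sflash/Documents/tmp/tcopy/ \
    /home/sflash/Documents/tmp/tcopy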
I'm backing up my database that's in a docker container, and since the total filesize is too large to fit onto the remaining disk space I execute it via SSH and dump it onto my local pc with this command (I'm using Ubuntu default bash):
docker-machine ssh my-machine-image "docker exec container-id /bin/sh -c 'mongodump --archive -u=admin --authenticationDatabase=admin -p=mongo-pwd --db=my-db --collection=my-collection --gzip'" > myfile.dump
This works pretty well; however, I'm having trouble getting it to work with the --query option. Mongodump requires the query to be strict JSON, and I can't get the nested quotes in bash right. My most successful attempt (i.e. it actually executed the command instead of returning a syntax/JSON error) was with a string literal like the one below, but that seems to parse the JSON wrong, since it always returns 0 documents no matter the query:
docker-machine ssh my-machine-image "docker exec container-id /bin/sh -c $'mongodump --archive -u=admin --authenticationDatabase=admin -p=mongo-pwd --db=my-db --collection=my-collection --query=\'{ \"_id\": { \"$oid\": \"some_random_object_id\" } }\' --gzip'" > myfile.dump
What is the correct way to pass strict JSON to the --query parameter with this amount of nested quotes?
Since you have multiple layers of quoting, it is easiest to assign each layer to a variable and use bash's printf %q to automatically quote a string for use in a shell.
#! /usr/bin/env bash
json='{ "_id": { "$oid": "some_random_object_id" } }' # single quotes keep the literal $oid key from being expanded by the shell
cmd="mongodump --archive -u=admin --authenticationDatabase=admin -p=mongo-pwd --db=my-db --collection=my-collection --query=$(printf %q "$json") --gzip"
sshCmd="docker exec container-id /bin/sh -c $(printf %q "$cmd")"
docker-machine ssh my-machine-image "$sshCmd"
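To see what each quoting layer produces, a small sketch that just prints the layers (trimmed to a few mongodump options for brevity) can help:
#! /usr/bin/env bash
# Sketch: inspect how printf %q wraps each layer before running the real command.
json='{ "_id": { "$oid": "some_random_object_id" } }'   # single quotes keep $oid literal
printf 'json layer  : %s\n' "$json"
printf 'query arg   : %s\n' "--query=$(printf %q "$json")"
cmd="mongodump --archive --db=my-db --query=$(printf %q "$json") --gzip"
printf 'sh -c layer : %s\n' "$(printf %q "$cmd")"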
I have tried the following command to split the backup file, but it always shows the error "illegal option split":
pg_dump.exe -h localhost -p 5432 -U postgres --inserts | split -b 2m - backup.sql -f "D:\post\filename.sql" db_name
You are passing the pipe (|) and the Unix split command as arguments to pg_dump.exe; that won't work. Consider 7-Zip volumes for this instead, or any other command-line splitter.
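If a Unix-like environment is available (for example WSL or Git Bash; this is an assumption, not something the question states), the pipe-to-split approach itself works. A rough sketch, reusing the paths and database name from the question:
# Sketch: split reads stdin when given '-' and writes 2 MiB chunks
# named D:/post/filename.sql.aa, .ab, and so on.
pg_dump -h localhost -p 5432 -U postgres --inserts db_name \
    | split -b 2M - "D:/post/filename.sql."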
To test whether the output is a terminal we can use -t STDOUT:
if (-t STDOUT) {
    # print with terminal control chars
} else {
    # just plain print
}
But when the script is executed in an ssh session that is not run from a terminal (Jenkins in my case), the -t test still returns true and my output gets polluted with control chars:
ssh user@server "/my/script.pl"
Why does -t detect a terminal?
I don't know why ssh is allocating a terminal for you — mine defaults to not doing that even if the output of ssh goes to a terminal — but passing -T to ssh will disable pseudo-tty creation on the remote end.
$ ssh -t localhost "perl -E'say -t STDOUT ?1:0'"
1
Connection to localhost closed.
$ ssh -T localhost "perl -E'say -t STDOUT ?1:0'"
0
From ssh's man page:
-T Disable pseudo-tty allocation.
-t Force pseudo-tty allocation. This can be used to execute arbitrary
screen-based programs on a remote machine, which can be very useful,
e.g. when implementing menu services. Multiple -t options force tty
allocation, even if ssh has no local tty.
Perhaps it would be better if you instead forced ssh to allocate a pty —
From the ssh manual:
-t Force pseudo-tty allocation. This can be used to execute arbitrary screen-based programs
on a remote machine, which can be very useful, e.g. when implementing menu services.
Multiple -t options force tty allocation, even if ssh has no local tty.
The longer answer: -t (the Perl operator or the Bourne shell test) reliably detects whether the stream is a "typewriter", but ssh normally only allocates a pseudo-teletype (pty) for interactive sessions, not when it is given a command to run.
See also RequestTTY as an option in .ssh/config.
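For a one-off run the same effect can be had on the command line; both invocations below are sketches that should behave like the -T example above (user and server are placeholders):
# Disable pseudo-tty allocation for this invocation only.
ssh -T user@server "/my/script.pl"
# Equivalent, using the config option mentioned above.
ssh -o RequestTTY=no user@server "/my/script.pl"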
I'm trying to set my IP address as described in my previous question. The best candidate method I've found (even though it doesn't work yet) is to use passwordless sudo with the sys.process package, since I need sudo privileges to perform the necessary actions, as follows:
import sys.process._
val a = "sudo rm -f /etc/network/interfaces.d/eth0.cfg" !
val b = s"""sudo sh -c 'echo -e
auto eth0
iface eth0 inet static
address $ip
netmask 255.255.255.0
gateway 192.168.2.1
dns-nameservers 8.8.8.8
> /etc/network/interfaces.d/eth0.cfg'""" !
val c = "sudo /sbin/ifup eth0" !
There are a few issues with this:
I'm receiving the following error, which shows both a syntax error and a failure to write the file that describes eth0 (*.cfg files are sourced from /etc/network/interfaces):
-e: 1: -e: Syntax error: Unterminated quoted string
Ignoring unknown interface eth0=eth0.
I have to insert val a = ..., val b = ..., etc. to make the code parse correctly. I do want to handle errors in any of these commands appropriately though.
It appears that file I/O usually uses #>, which requires the right-hand side to be a file, and in this case the file needs sudo to write to. Is there a solution for this?
How can I do this correctly and in the nicest and most idiomatic way possible?
Do:
Seq("sudo", "sh", "-c", s"""
rm -f /etc/network/interfaces.d/eth0.cfg
echo -n "auto eth0
iface eth0 inet static
address $ip
netmask 255.255.255.0
gateway 192.168.2.1
dns-nameservers 8.8.8.8
" > /etc/network/interfaces.d/eth0.cfg
/sbin/ifup eth0
""").!
There is no need to invoke sudo multiple times; invoke it once and have it run several commands in a shell. echo needs its argument inside quotes, otherwise sh will interpret the newline as the end of the echo command. You needed val a = ... because of the ambiguity with postfix operators; with .! you avoid this. We also need to give ! a Seq[String] instead of a String: with a String, Scala splits on whitespace to separate the command from its arguments, which doesn't do what we want in this case, e.g. sh -c 'echo x' would be turned into Seq("sh", "-c", "'echo", "x'") instead of Seq("sh", "-c", "echo x").
I am trying to find a good way to tail a file on a remote host. This is on an internal network of Linux machines. The requirements are:
Must be well behaved (no extra processes lying around, no continuing output)
Cannot require someone's pet Perl module.
Can be invoked through Perl.
If possible, doesn't require a custom built script or utility on the remote machine (regular linux utilities are fine)
The solutions I have tried are generally of this sort:
ssh remotemachine -f <some command>
"some command" has been:
tail -f logfile
Basic tail doesn't work because the remote process continues to write output to the terminal after the local ssh process dies.
$socket = IO::Socket::INET->new(...);
$pid = fork();
if(!$pid)
{
    exec("ssh $host -f '<script which connects to socket and writes>'");
    exit;
}
$client = $socket->accept;
while(<$client>)
{
    print $_;
}
This works better because there is no output to the screen after the local process exits, but the remote process doesn't figure out that its socket is down and it lives on indefinitely.
Have you tried
ssh -t remotemachine <some command>
-t option from the ssh man page:
-t Force pseudo-tty allocation. This can be used to execute
arbitrary screen-based programs on a remote machine, which
can be very useful, e.g. when implementing menu services.
Multiple -t options force tty allocation, even if ssh has no local tty.
instead of
-f Requests ssh to go to background just before command execution.
This is useful if ssh is going to ask for passwords or passphrases,
but the user wants it in the background.
This implies -n. The recommended way to start X11 programs at a remote
site is with something like ssh -f host xterm.
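As a concrete sketch (the log path is a placeholder): with -t the remote command gets a controlling pseudo-tty, so when the local ssh exits the pty is hung up and the remote tail receives SIGHUP instead of lingering.
# Sketch: the remote tail dies with the ssh session, leaving nothing behind.
ssh -t remotemachine "tail -f /var/log/messages"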
Some ideas:
You could mount it over NFS or CIFS, and then use File::Tail.
You could use one of Perl's SSH modules (there are a number of them), combined with tail -f.
You could try Survlog. It's OS X only, though.
netcat should do it for you.
You can tail files remotely using bash and rsync. The following script is taken from the tutorial "Tail files remotely using bash and rsync":
#!/bin/bash
#Code Snippet from and copyright by sshadmincontrol.com
#You may use this code freely as long as you keep this notice.
PIDHOME=/a_place/to/store/flag/file
FILE=`echo ${0} | sed 's:.*/::'`
RUNFILEFLAG=${PIDHOME}/${FILE}.running
if [ -e $RUNFILEFLAG ]; then
    echo "Already running ${RUNFILEFLAG}"
    exit 1
else
    touch ${RUNFILEFLAG}
fi
hostname=$1 #host name to remotely access
log_dir=$2  #log directory on the remote host
log_file=$3 #remote log file name
username=$4 #username to use to access the remote host
log_base=$5 #where to save the log locally
ORIGLOG="$log_base/$hostname/${log_file}.orig"
INTERLOG="$log_base/$hostname/${log_file}.inter"
FINALLOG="$log_base/$hostname/${log_file}.log"
rsync -q -e ssh $username@$hostname:$log_dir/$log_file ${ORIGLOG}
grep -Ev ".ico|.jpg|.gif|.png|.css" ${ORIGLOG} > ${INTERLOG}
if [ ! -e $FINALLOG ]; then
    cp ${INTERLOG} ${FINALLOG}
else
    LINE=`tail -1 ${FINALLOG}`
    grep -F "$LINE" -A 999999999 ${INTERLOG} \
        | grep -Fv "$LINE" >> ${FINALLOG}
fi
rm ${RUNFILEFLAG}
exit 0
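A hypothetical invocation, assuming the script is saved as remote_tail.sh and using the argument order defined above (hostname, remote log directory, log file, username, local base directory); run it periodically, for example from cron, to keep the local copy current:
# All names below are placeholders for illustration only.
./remote_tail.sh web01 /var/log/apache2 access.log loguser /var/log/remote
# Example cron entry (every minute):
# * * * * * /path/to/remote_tail.sh web01 /var/log/apache2 access.log loguser /var/log/remote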
rsync://[USER@]HOST[:PORT]/SRC... [DEST] | tail [DEST] ?
Someone suggested using nc (netcat). This solution does work but is less ideal than just using ssh -t. The biggest problem is that you have to use nc on both sides of the connection and need to do some port discovery on the local machine to find a suitable port over which to connect. Here is the adaptation of the above code to use netcat:
$pid = fork();
if(!$pid)
{
    exec("ssh $host -f 'tail -f $filename | nc $localhost $port'");
    exit;
}
exec("nc -l -p $port");
There is File::Tail. I don't know if it helps.