With sudo, write multiple lines to a file using sys.process in Scala

I'm trying to set my IP address as described in my previous question. The most promising approach I've found (even though it doesn't work yet) is to use passwordless sudo with the sys.process package, since I need sudo privileges to perform the necessary actions:
import sys.process._
val a = "sudo rm -f /etc/network/interfaces.d/eth0.cfg" !
val b = s"""sudo sh -c 'echo -e
auto eth0
iface eth0 inet static
address $ip
netmask 255.255.255.0
gateway 192.168.2.1
dns-nameservers 8.8.8.8
> /etc/network/interfaces.d/eth0.cfg'""" !
val c = "sudo /sbin/ifup eth0" !
There are a few issues with this:
I'm receiving the following error, which shows both a syntax error and a failure to write the file that describes eth0 (*.cfg files are sourced from /etc/network/interfaces):
-e: 1: -e: Syntax error: Unterminated quoted string
Ignoring unknown interface eth0=eth0.
I have to insert val a = ..., val b = ..., etc. to make the code parse correctly. I do want to handle errors from any of these commands appropriately, though.
It appears that file I/O usually uses #>, which requires the right-hand side to be a file; in this case, writing to that file requires sudo. Is there a solution for this?
How can I do this correctly and in the nicest and most idiomatic way possible?

Do:
Seq("sudo", "sh", "-c", s"""
rm -f /etc/network/interfaces.d/eth0.cfg
echo -n "auto eth0
iface eth0 inet static
address $ip
netmask 255.255.255.0
gateway 192.168.2.1
dns-nameservers 8.8.8.8
" > /etc/network/interfaces.d/eth0.cfg
/sbin/ifup eth0
""").!
There is no need to invoke sudo multiple times; instead, invoke it once and have it run several commands in a shell. echo needs its parameter inside quotes, otherwise sh will interpret the newline as the end of the echo command. You needed val a = ... because of the ambiguity with postfix operators; with .! you can avoid this. We also need to give ! a Seq[String] instead of a String. With a String, Scala splits on whitespace to separate the command from its arguments, which doesn't do what we want in this case, e.g. sh -c 'echo x' would get turned into Seq("sh", "-c", "'echo", "x'") instead of Seq("sh", "-c", "echo x").
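The splitting pitfall is easy to reproduce directly in a shell. This runs sh with the arguments that naive whitespace splitting would produce (a minimal illustration, using a POSIX sh such as dash):
# sh gets the command string 'echo (with a stray opening quote) and x' as a
# separate argument, the same arguments Scala's String splitting would pass:
sh -c "'echo" "x'"
# sh: 1: Syntax error: Unterminated quoted string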

Related

use existing SSH_AUTH_SOCK to execute commands on remote server

I connect to my work server (workserver1.com) from my local PC (localhost) using SSH and execute a bunch of commands on workserver1.
Below are the commands I execute over SSH:
1) run a script on the server to collect production data and put it in a txt file
ssh -A workserver1.com 'python3 /usr/local/collect_data_online.py 2>&1 | tee /home/myname/out.txt'
$ please input your dynamic token: <manually input credential token generated every 15s>
2) filter the lines I need and put them in a dat file
ssh -A workserver1.com "grep 'my-keyword-cron' out.txt | grep -oP '({.*})' | tee workserver2.dat"
$ please input your dynamic token: <manually input credential token again>
3) send the data collected in 2) to workserver2, which can only be accessed through workserver1
ssh -A workserver1.com 'curl workserver2.com --data-binary "@workserver2.dat" --compressed'
$ please input your dynamic token: <manually input credential token 3rd time>
In each step above, I actually create a completely different socket to workserver1.com. I got this info by running the command below on the remote server:
$ ssh -A workserver1.com 'printenv | grep SSH'
SSH_CLIENT=10.126.192.xxx 58276 22
SSH_SESSION_ID=787878787878787878
SSH_TTY=/dev/pts/0
SSH_AUTH_SOCK=/tmp/ssh-XXXXKuJLEX/agent.29291
SSH_AUTH_CERT_SERIAL=666666666
SSH_AUTH_CERT_KEY=myname
# SSH_CONNECTION changes each time I make an SSH request to workserver1.com, so I have to input the dynamic token manually each time
SSH_CONNECTION=10.126.192.xxx 58276 10.218.35.yyy 22
On my localhost I can also see the SSH socket used for the connection:
$ SSH_AUTH_SOCK=/tmp/ssh-localhost/agent.12345
My question is: is there a way to use a single existing socket to avoid making multiple SSH connections, and input the dynamic token just once? I hope I can use the existing socket to interactively type commands to the SSH server and collect output/data as I want, just like on my localhost.
What I have in mind is:
1) socat: can I run some command on localhost like
socat UNIX-CONNECT:$SSH_AUTH_SOCK,exec:'commands I want to execute' -
to get an interactive client/server shell?
2) is there any ssh option I could use?
I am new to socat, and not familiar with ssh beyond some commonly used commands.
Thank you for your help in advance.
The solution is to open the first connection with -M.
First, set ControlMaster and ControlPath in ~/.ssh/config as below:
host *
ControlMaster auto
ControlPath ~/.ssh/ssh_mux_%h_%p_%r
And when connecting to the remote host for the very first time, add -M:
ssh -M $remotehost
Then, for subsequent ssh connections to the same host, you can just use
ssh $remotehost
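Applied to the commands from the question (hostnames and commands taken from there), the flow would look roughly like this; the dynamic token is entered once when the master connection is opened, and the later commands ride over the same socket:
# First connection creates the master socket and prompts for the token once:
ssh -M workserver1.com
# Subsequent commands reuse the socket, with no further token prompts:
ssh workserver1.com 'python3 /usr/local/collect_data_online.py 2>&1 | tee /home/myname/out.txt'
ssh workserver1.com "grep 'my-keyword-cron' out.txt | grep -oP '({.*})' | tee workserver2.dat"
# Inspect or tear down the master when finished:
ssh -O check workserver1.com
ssh -O exit workserver1.com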

Shell Script how to pass an argument with spaces inside a variable

I'm making a script to synchronize directories with rsync over ssh. I run into trouble when I want to define a custom port. Suppose a normal working script would have the syntax:
#! /bin/sh
rval=2222
port="ssh -p $rval"
rsync --progress -av -e "$port" sflash#192.168.10.107:/home/sflash/Documents/tmp/tcopy/ /home/sflash/Documents/tmp/tcopy
the syntax for specifying a custom port is -e "ssh -p 2222". However, if I want to use a variable in this case, like:
#! /bin/sh
rval=2222
port="-e \"ssh -p $rval\""
rsync --progress -av $port sflash#192.168.10.107:/home/sflash/Documents/tmp/tcopy/ /home/sflash/Documents/tmp/tcopy
This does not work, likely due to some interaction with IFS. I can avoid this scenario entirely by introducing an if statement that checks whether port is defined, but I am curious about the exact reason why this fails and whether a solution exists to make this method work.
EDIT: sorry, I am restricted to plain POSIX shell.
You haven't actually provided enough detail to be certain, but I suspect you are hitting a common misconception.
When you do:
rval=2222
rsync --progress -av -e "ssh -p $rval" src dst
rsync is invoked with 6 arguments: --progress, -av, -e, ssh -p 2222, src, and dst.
On the other hand, when you do:
port="-e \"ssh -p $rval\""
rsync --progress -av $port src dst
rsync is invoked with 8 arguments: --progress, -av, -e, "ssh, -p, 2222", src, and dst.
You do not want the double quotes to be passed to rsync, and you do not want the ssh -p 2222 to be split up into 3 arguments. One (terrible) way to do what you want is to use eval. But it seems what you really want is:
rval=2222
port="ssh -p $rval"
rsync --progress -av ${port:+-e "$port"} src dst
Now, if port is defined and not the empty string, rsync will be invoked with the additional arguments -e and ssh -p 2222 (as desired), and if port is undefined or empty, neither the -e nor the $port argument will be used.
Note that this is a case where you must not use double quotes around ${port:+-e "$port"}. If you do so, then an empty string would be passed as an argument when $port is the empty string. When $port is not the empty string, it would pass a single argument -e ssh -p 2222 rather than splitting into 2 arguments -e and ssh -p 2222.
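To make the expansion visible, here is a throwaway argument printer (showargs is a made-up helper for illustration, not an existing utility):
# Print each argument on its own line, wrapped in angle brackets:
showargs() { for a in "$@"; do printf '<%s>\n' "$a"; done; }

port="ssh -p 2222"
showargs ${port:+-e "$port"}    # two arguments: <-e> and <ssh -p 2222>
port=""
showargs ${port:+-e "$port"}    # expands to nothing at all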

Problem with nested quotes in bash: mongodump with query in a docker container via ssh

I'm backing up my database that's in a docker container, and since the total file size is too large to fit in the remaining disk space, I execute the dump via SSH and write it onto my local PC with this command (I'm using Ubuntu's default bash):
docker-machine ssh my-machine-image "docker exec container-id /bin/sh -c 'mongodump --archive -u=admin --authenticationDatabase=admin -p=mongo-pwd --db=my-db --collection=my-collection --gzip'" > myfile.dump
This works pretty well; however, I'm having trouble getting it to work with the --query option. Mongodump requires the query to be strict JSON, and I'm having trouble getting the nested quotes in bash to work. My most successful attempt (i.e. it actually executed the command successfully instead of returning a syntax/JSON error) used a string literal like this, but that seems to parse the JSON wrong, since it always returns 0 documents, no matter the query:
docker-machine ssh my-machine-image "docker exec container-id /bin/sh -c $'mongodump --archive -u=admin --authenticationDatabase=admin -p=mongo-pwd --db=my-db --collection=my-collection --query=\'{ \"_id\": { \"$oid\": \"some_random_object_id\" } }\' --gzip'" > myfile.dump
What is the correct way to pass strict JSON to the --query parameter with this amount of nested quotes?
Since you have multiple layers of quoting, it is easiest to assign each layer to a variable, then use bash's printf %q to automatically quote any string for use in a shell. (In your attempt, note that the unescaped $oid inside the outer double-quoted string is expanded by your local shell, most likely to an empty string, which would explain why the query matches 0 documents.)
#! /usr/bin/env bash
json='{"_id": { "'"$oid"'": "some_random_object_id" } }'
cmd="mongodump --archive -u=admin --authenticationDatabase=admin -p=mongo-pwd --db=my-db --collection=my-collection --query=$(printf %q "$json") --gzip"
sshCmd="docker exec container-id /bin/sh -c $(printf %q "$cmd")"
docker-machine ssh my-machine-image "$sshCmd"
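As a quick sanity check of the layering idea: each printf %q application survives exactly one round of shell parsing, which is why the script above applies it once per sh/ssh hop (a minimal illustration, not specific to mongodump):
inner='say "hi" $there'
# The quoted form is unwrapped exactly once by the inner shell, so the string
# arrives intact, quotes and dollar sign included:
sh -c "echo $(printf %q "$inner")"    # prints: say "hi" $there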

perl sudo using Net::Openssh not working

I am using salva's Net::OpenSSH module, but I am not able to figure out how to use sudo. I have tried the following, but it is not working.
Nothing is printed in the results. Single-word commands like ls and pwd also produce no output.
version of sudo on target system:
$ /usr/local/bin/sudo -V
CU Sudo version 1.5.7p2
$ /usr/local/bin/sudo -h
CU Sudo version 1.5.7p2
usage: /usr/local/bin/sudo -V | -h | -l | -v | -k | -H | [-b] [-p prompt] [-u username/#uid] -s | <command>
Since CU sudo does not allow more than one option at a time, I supply -k before supplying the command.
Please note that this sudo version does not have the -S switch to pass the password via stdin, so it expects the password from a terminal. Can you please help more? Thanks.
$ssh->system("$sudo_path -k");
my @output = $ssh->capture({tty => 1,
                            stdin_data => "$PASS"},
                           $sudo_path,
                           "-p", '', "$cmd");
print " result=@output \n";
OR
$ssh->system("$sudo_path -k");
my @output = $ssh->capture({stdin_data => "$PASS"},
                           $sudo_path,
                           "-p", '', "$cmd");
print " result=@output \n";
It would be more helpful if you explained more of what you are trying to accomplish, but I'm assuming you're trying to run a command that requires sudo via ssh using the Net::OpenSSH module in Perl.
If that is the case, you should consider trying a 'heredoc' to script a series of commands.
Here is the perldoc for Perl's quote-like operators; look for the part about <<EOF, as I've often used heredocs to script things like this.
If for some reason using a heredoc within the Net::OpenSSH command doesn't work, Net::OpenSSH also works with Expect, as documented here.
And if for some reason that doesn't work for you, you could always create a shell script that runs the command with sudo via a heredoc on the remote system, and just execute that script via your Net::OpenSSH connection.
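A rough sketch of that last idea (the path, the use of sudo -s, and the two-step setup are all assumptions, untested against CU sudo 1.5.7):
# Create a small script on the remote machine via a heredoc; it resets sudo's
# timestamp and then starts a root shell:
ssh user@remotehost "cat > /tmp/run_with_sudo.sh" <<'EOF'
#!/bin/sh
/usr/local/bin/sudo -k
/usr/local/bin/sudo -s
EOF
# Run it with a forced tty so this old sudo can prompt for the password:
ssh -t user@remotehost 'sh /tmp/run_with_sudo.sh'
From Perl, the equivalent final step would be $ssh->system({tty => 1}, 'sh /tmp/run_with_sudo.sh'), so sudo still sees a terminal.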

How can I tail a remote file?

I am trying to find a good way to tail a file on a remote host. This is on an internal network of Linux machines. The requirements are:
Must be well behaved (no extra processes lying around, or continuing output)
Cannot require someone's pet Perl module.
Can be invoked through Perl.
If possible, doesn't require a custom built script or utility on the remote machine (regular linux utilities are fine)
The solutions I have tried are generally of this sort
ssh remotemachine -f <some command>
"some command" has been:
tail -f logfile
Basic tail doesn't work because the remote process continues to write output to the terminal after the local ssh process dies.
$socket = IO::Socket::INET->new(...);
$pid = fork();
if (!$pid)
{
    exec("ssh $host -f '<script which connects to socket and writes>'");
    exit;
}
$client = $socket->accept;
while (<$client>)
{
    print $_;
}
This works better because there is no output to the screen after the local process exits but the remote process doesn't figure out that its socket is down and it lives on indefinitely.
Have you tried
ssh -t remotemachine <some command>
-t option from the ssh man page:
-t Force pseudo-tty allocation. This can be used to execute
arbitrary screen-based programs on a remote machine, which
can be very useful, e.g. when implementing menu services.
Multiple -t options force tty allocation, even if ssh has no local tty.
instead of
-f Requests ssh to go to background just before command execution.
This is useful if ssh is going to ask for passwords or passphrases,
but the user wants it in the background.
This implies -n. The recommended way to start X11 programs at a remote
site is with something like ssh -f host xterm.
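For the tail use case, the invocation would then be something like this (log path is illustrative). Because a pseudo-tty is allocated, closing the local ssh tears down the remote pty, so the remote tail gets a hangup instead of living on:
ssh -t remotemachine 'tail -f /var/log/messages'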
Some ideas:
You could mount it over NFS or CIFS, and then use File::Tail (a sketch of the mount half follows this list).
You could use one of Perl's SSH modules (there are a number of them), combined with tail -f.
You could try Survlog. It's OS X only, though.
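A minimal sketch of the first idea, assuming the remote machine already exports its log directory over NFS (export path and mount point are made up):
# Mount the remote log directory locally, then tail the file as if it were local:
sudo mount -t nfs remotemachine:/var/log /mnt/remotelogs
tail -f /mnt/remotelogs/logfile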
netcat should do it for you.
You can tail files remotely using bash and rsync. The following script is taken from this tutorial: Tail files remotely using bash and rsync
#!/bin/bash
#Code Snippet from and copyright by sshadmincontrol.com
#You may use this code freely as long as you keep this notice.
PIDHOME=/a_place/to/store/flag/file
FILE=`echo ${0} | sed 's:.*/::'`
RUNFILEFLAG=${PIDHOME}/${FILE}.running
if [ -e $RUNFILEFLAG ]; then
    echo "Already running ${RUNFILEFLAG}"
    exit 1
else
    touch ${RUNFILEFLAG}
fi
hostname=$1   # host name to remotely access
log_dir=$2    # log directory on the remote host
log_file=$3   # remote log file name
username=$4   # username to use to access the remote host
log_base=$5   # where to save the log locally
ORIGLOG="$log_base/$hostname/${log_file}.orig"
INTERLOG="$log_base/$hostname/${log_file}.inter"
FINALLOG="$log_base/$hostname/${log_file}.log"
# Fetch the remote log, then filter out static-asset lines:
rsync -q -e ssh $username@$hostname:$log_dir/$log_file ${ORIGLOG}
grep -Ev ".ico|.jpg|.gif|.png|.css" ${ORIGLOG} > ${INTERLOG}
if [ ! -e $FINALLOG ]; then
    cp ${INTERLOG} ${FINALLOG}
else
    # Append only the lines that arrived since the last run:
    LINE=`tail -1 ${FINALLOG}`
    grep -F "$LINE" -A 999999999 ${INTERLOG} \
        | grep -Fv "$LINE" >> ${FINALLOG}
fi
rm ${RUNFILEFLAG}
exit 0
rsync://[USER@]HOST[:PORT]/SRC... [DEST] | tail [DEST] ?
Someone suggested using nc (netcat). This solution does work but is less ideal than just using ssh -t. The biggest problem is that you have to use nc on both sides of the connection and need to do some port discovery on the local machine to find a suitable port over which to connect. Here is the adaptation of the above code to use netcat:
$pid = fork();
if (!$pid)
{
    exec("ssh $host -f 'tail -f $filename | nc $localhost $port'");
    exit;
}
exec("nc -l -p $port");
There is File::Tail. Don't know if it helps?