how can i ssh into a server, and read a pid file and bring back the #? - perl

I am tasked with using Perl to ssh into another server and stop/start/restart the process if it is not already started.
I will break this down into small chunks:
::how can i ssh into a server, and read a pid file and bring back the #?::
I can do this just fine: system("ssh serverid.gcsc.att.com -l myid -i /home/myid/.ssh/authorized_keys 'kill -9 1234'"); - it kills the process after authenticating into the server.
But how do I read a pid file (or any file) on that server and get the value into a variable, so that I can then ssh in and kill the process?

The easiest way is with backticks:
my $output = `ssh server -l myid -i /home/myid/.ssh/authorized_keys some_command`;
$output will contain the output of your ssh command.
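For the pid-file step specifically, a minimal sketch building on this (the remote path /var/run/myapp.pid is an assumption; substitute your own):
my $host = 'serverid.gcsc.att.com';
# read the pid file over ssh; the path here is hypothetical
my $pid = `ssh $host -l myid 'cat /var/run/myapp.pid'`;
chomp $pid;
# sanity-check before killing anything
die "no pid retrieved\n" unless $pid =~ /^\d+$/;
system("ssh $host -l myid 'kill $pid'");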

Related

use existing SSH_AUTH_SOCK to execute commands on remote server

I connect to my work server (workserver1.com) from my local PC (localhost) using SSH and execute a bunch of commands on workserver1.
Below are the commands I execute using SSH:
1) run a script on the server to collect production data and put it in a txt file
ssh -A workserver1.com 'python3 /usr/local/collect_data_online.py 2>&1 | tee /home/myname/out.txt'
$ please input your dynamic token: <manually input credential token generated every 15s>
2) filter the lines I need and put them in a dat file
ssh -A workserver1.com "grep 'my-keyword-cron' out.txt | grep -oP '({.*})' | tee workserver2.dat"
$ please input your dynamic token: <manually input credential token again>
3) send the data collected in 2) to workserver2, which can only be accessed through workserver1
ssh -A workserver1.com 'curl workserver2.com --data-binary "@workserver2.dat" --compressed'
$ please input your dynamic token: <manually input credential token 3rd time>
In each step above, I actually created 3 completely different connections to workserver1.com. I got this info from running the command below on the remote server:
$ ssh -A workserver1.com 'printenv | grep SSH'
SSH_CLIENT=10.126.192.xxx 58276 22
SSH_SESSION_ID=787878787878787878
SSH_TTY=/dev/pts/0
SSH_AUTH_SOCK=/tmp/ssh-XXXXKuJLEX/agent.29291
SSH_AUTH_CERT_SERIAL=666666666
SSH_AUTH_CERT_KEY=myname
# SSH_CONNECTION changes each time I make an SSH request to workserver1.com, so I need to input the dynamic token manually each time
SSH_CONNECTION=10.126.192.xxx 58276 10.218.35.yyy 22
On my localhost I can also see the SSH socket used for the SSH connection:
$ SSH_AUTH_SOCK=/tmp/ssh-localhost/agent.12345
My question is: is there a way to use a single existing connection, so that I avoid making multiple SSH connections and input the dynamic token just once? I hope I could use the existing socket to interactively type commands on the SSH server and collect output/data as I want, just like on my localhost.
What I have in mind is:
1) socat - can I run some command on localhost like
socat UNIX-CONNECT:$SSH_AUTH_SOCK,exec:'commands I want to execute' - ==> is it possible to get an interactive client/server shell?
2) is there any ssh option I could use?
I am new to socat and not familiar with ssh beyond some commonly used commands.
Thank you for your help in advance.
The solution is to open the first connection with '-M'.
First, use ControlMaster and ControlPath in ~/.ssh/config as below:
host *
    ControlMaster auto
    ControlPath ~/.ssh/ssh_mux_%h_%p_%r
Then, when connecting to the remote host for the very first time, add '-M':
ssh -M $remotehost
For subsequent ssh connections to the same host, you can just use
ssh $remotehost
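With that in place, the dynamic token is only requested when the master connection is created. A minimal sketch of the three steps from the question reusing one connection:
# authenticate once; -N runs no remote command, -f backgrounds the master
ssh -M -f -N workserver1.com
# each of these reuses the master's socket, so no further token prompts
ssh workserver1.com 'python3 /usr/local/collect_data_online.py 2>&1 | tee /home/myname/out.txt'
ssh workserver1.com "grep 'my-keyword-cron' out.txt | grep -oP '({.*})' | tee workserver2.dat"
ssh workserver1.com 'curl workserver2.com --data-binary "@workserver2.dat" --compressed'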

PostgreSQL COPY pipe output to gzip and then to STDOUT

The following command works well
$ psql -c "copy (select * from foo limit 3) to stdout csv header"
# output
column1,column2
val1,val2
val3,val4
val5,val6
However the following does not:
$ psql -c "copy (select * from foo limit 3) to program 'gzip -f --stdout' csv header"
# output
COPY 3
Why do I have COPY 3 as the output from this command? I would expect that the output would be the compressed CSV string, after passing through gzip.
The command below works, for instance:
$ psql -c "copy (select * from foo limit 3) to stdout csv header" | gzip -f -c
# output (this garbage is just the compressed string and is as expected)
߉T`M�A �0 ᆬ}6�BL�I+�^E�gv�ijAp���qH�1����� FfВ�,Д���}������+��
How can I make a single SQL command that pipes the result directly into gzip and sends the compressed string to STDOUT?
When you use COPY ... TO PROGRAM, the PostgreSQL server process (backend) starts a new process and pipes the file to the process's standard input. The standard output of that process is lost. It only makes sense to use COPY ... TO PROGRAM if the called program writes the data to a file or similar.
If your goal is to compress the data that go across the network, you could use sslmode=require sslcompression=on in your connect string to use the SSL network compression feature I built into PostgreSQL 9.2. Unfortunately this has been deprecated and most OpenSSL binaries are shipped with the feature disabled.
There is currently a native network compression patch under development, but it is questionable whether that will make v14.
Other than that, you cannot get what you want at the moment.
copy is running gzip on the server and not forwarding the STDOUT from gzip on to the client.
You can use \copy instead, which would run gzip on the client:
psql -q -c "\copy (select * from foo limit 3) to program 'gzip -f --stdout' csv header"
This is fundamentally the same as piping to gzip, which you show in your question.
If the goal is to compress the output of copy so it transfers faster over the network, then...
psql "postgresql://ip:port/dbname?sslmode=require&sslcompression=1"
It should display "compression active" if it's enabled. That probably requires some server config variable to be enabled though.
Or you can simply use ssh:
ssh user@dbserver "psql -c \"copy (select * from foo limit 3) to stdout csv header\" | gzip -f -c" >localfile.csv.gz
But... of course, you need ssh access to the db server.
If you don't have ssh to the db server, maybe you have ssh to another box in the same datacenter that has a fast network link to the db server, in that case you can ssh to it instead of the db server. Data will be transferred uncompressed between that box and the database, compressed on the box, and piped via ssh to your local machine. That will even save cpu on the database server since it won't be doing the compression.
If that doesn't work, well then, why not put the ssh command into the "to program" and have the server send the data via ssh to your machine? You'll have to set up your router and open a port, but you can do that. Of course you'll have to find a way to put the password in the ssh command line, which is usually a big no-no, but maybe just for once. Or just use netcat instead, which doesn't require a password.
Also, if you want speed, please, use zstd instead of gzip.
Here's an example with netcat. I just tested it and it worked.
On destination machine which is 192.168.0.1:
nc -lp 65001 | zstd -d >file.csv
In another terminal:
psql -c "copy (select * from foo) to program 'zstd -9 |nc -N 192.168.0.1 65001' csv header" test
Note the -N option for netcat: it shuts down the socket on EOF from stdin, so the listening side exits once the copy finishes.
You can use COPY ... TO PROGRAM:
COPY foo_table TO PROGRAM 'gzip > /tmp/foo_table.csv.gz' DELIMITER ',' CSV HEADER;

backtick in Perl printing output on terminal

I am trying to get the output of a command into a variable and check whether it matches another variable.
$login1=`ssh ****************** date`;
When typed manually, this command shows a " Password: " prompt. When I run it from the script, it runs the command, prints the prompt, and waits for the user to type it in, which I don't want. I just need to capture the output and compare it:
if ($login1 =~ /Password:/)
{
    print "yes";
}
else
{
    print "No";
}
However, the script just stops at the Password prompt. Please suggest how I can achieve this.
You might want to look at the -f flag for ssh:
-f Requests ssh to go to background just before command execution.
This is useful if ssh is going to ask for passwords or
passphrases, but the user wants it in the background. This
implies -n. The recommended way to start X11 programs at a
remote site is with something like ssh -f host xterm.
If you want to avoid passwords, set up a public/private key pair with no passphrase (dangerous, but much less dangerous than putting a password in a script) and copy the public key to the remote site. IIRC, it goes something like this:
localhost $ ssh-keygen -b 256 -t ecdsa -N '' -f ./datekey
localhost $ scp ./datekey.pub remotehost:/tmp
localhost $ ssh remotehost
(login)
remotehost $ cat /tmp/datekey.pub >> ~/.ssh/authorized_keys
remotehost $ logout
localhost $ ssh -i ./datekey remotehost date
Make sure you store ./datekey somewhere no other user can access it at all -- not even read access.
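For example, a quick way to lock the private key down to your user only:
localhost $ chmod 600 ./datekey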
If you're just trying to detect the prompt, you might simply need to feed ssh EOF to get it to move along:
$login1=`ssh ****************** date < /dev/null`;
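Alternatively, a sketch that avoids any chance of hanging (BatchMode is a standard ssh option that makes ssh fail instead of prompting for a password; 2>&1 folds ssh's error message into the captured output):
$login1 = `ssh -o BatchMode=yes ****************** date 2>&1 < /dev/null`;
if ($? != 0)
{
    print "yes";    # ssh would have asked for a password
}
else
{
    print "No";     # key-based login worked; $login1 holds the remote date
}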

How do you stop a perl Dancer/Starman/Plack server?

I started a Dancer/Starman server using:
sudo plackup -s Starman -p 5001 -E deployment --workers=10 -a mywebapp/bin/app.pl
but I'm unsure how I can stop the server. Can someone provide me with a quick way of stopping it and all the workers it has spawned?
Use the
--pid /path/to/the/pid.file
option, and you can kill the process based on its PID.
So, using the above option, you can use
kill $(cat /path/to/the/pid.file)
The pid.file simply stores the master's PID - no need to analyze the ps output...
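Putting it together, a minimal sketch (the pid-file path is arbitrary; Starman also restarts its workers gracefully on HUP):
# start daemonized, writing the master PID to a known location
sudo plackup -s Starman -p 5001 -E deployment --workers=10 \
    --pid /tmp/mywebapp.pid -D -a mywebapp/bin/app.pl
# stop the master and all of its workers
kill $(cat /tmp/mywebapp.pid)
# or gracefully restart the workers instead of stopping
kill -HUP $(cat /tmp/mywebapp.pid)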
pkill -f starman
Kill processes based on name.
On Windows you can press CTRL + C - the same keystroke as Copy, but it means Cancel in this case. Tested working.

How can I tail a remote file?

I am trying to find a good way to tail a file on a remote host. This is on an internal network of Linux machines. The requirements are:
Must be well behaved (no extra processes left lying around, or continuing output)
Cannot require someone's pet Perl module.
Can be invoked through Perl.
If possible, doesn't require a custom-built script or utility on the remote machine (regular Linux utilities are fine)
The solutions I have tried are generally of this sort
ssh remotemachine -f <some command>
"some command" has been:
tail -f logfile
Basic tail doesn't work because the remote process continues to write output to the terminal after the local ssh process dies.
$socket = IO::Socket::INET->new(...);
$pid = fork();
if (!$pid)
{
    exec("ssh $host -f '<script which connects to socket and writes>'");
    exit;
}
$client = $socket->accept;
while (<$client>)
{
    print $_;
}
This works better because there is no output to the screen after the local process exits, but the remote process doesn't figure out that its socket is down, and it lives on indefinitely.
Have you tried
ssh -t remotemachine <some command>
-t option from the ssh man page:
-t Force pseudo-tty allocation. This can be used to execute
arbitrary screen-based programs on a remote machine, which
can be very useful, e.g. when implementing menu services.
Multiple -t options force tty allocation, even if ssh has no local tty.
instead of
-f Requests ssh to go to background just before command execution.
This is useful if ssh is going to ask for passwords or passphrases,
but the user wants it in the background.
This implies -n. The recommended way to start X11 programs at a remote
site is with something like ssh -f host xterm.
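From Perl, the -t approach might look like this sketch (the log path is hypothetical; -tt forces tty allocation even when ssh is run non-interactively, and the tty is what makes the remote tail die when the local ssh exits):
open(my $remote, '-|', "ssh -tt remotemachine 'tail -f /var/log/app.log'")
    or die "cannot start ssh: $!";
while (my $line = <$remote>)
{
    print $line;
}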
Some ideas:
You could mount it over NFS or CIFS, and then use File::Tail (see the sketch after this list).
You could use one of Perl's SSH modules (there are a number of them), combined with tail -f.
You could try Survlog. It's OS X only though.
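For the NFS/CIFS idea above, a minimal sketch, assuming the remote log is mounted at /mnt/remote (a hypothetical mount point):
use File::Tail;
my $tail = File::Tail->new(
    name        => '/mnt/remote/var/log/app.log',   # mounted copy of the remote log
    maxinterval => 2,                                # poll at most every 2 seconds
);
while (defined(my $line = $tail->read))
{
    print $line;
}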
netcat should do it for you.
You can tail files remotely using bash and rsync. The following script is taken from this tutorial: Tail files remotely using bash and rsync
#!/bin/bash
#Code Snippet from and copyright by sshadmincontrol.com
#You may use this code freely as long as you keep this notice.
PIDHOME=/a_place/to/store/flag/file
FILE=`echo ${0} | sed 's:.*/::'`
RUNFILEFLAG=${PIDHOME}/${FILE}.running
if [ -e $RUNFILEFLAG ]; then
echo "Already running ${RUNFILEFLAG}"
exit 1
else
touch ${RUNFILEFLAG}
fi
hostname=$1 #host name to remotely access
log_dir=$2  #log directory on the remote host
log_file=$3 #remote log file name
username=$4 #username to use to access remote host
log_base=$5 #where to save the log locally
ORIGLOG="$log_base/$hostname/${log_file}.orig"
INTERLOG="$log_base/$hostname/${log_file}.inter"
FINALLOG="$log_base/$hostname/${log_file}.log"
rsync -q -e ssh $username@$hostname:$log_dir/$log_file ${ORIGLOG}
grep -Ev ".ico|.jpg|.gif|.png|.css" ${ORIGLOG} > ${INTERLOG}
if [ ! -e $FINALLOG ]; then
cp ${INTERLOG} ${FINALLOG}
else
LINE=`tail -1 ${FINALLOG}`
grep -F "$LINE" -A 999999999 ${INTERLOG} \
| grep -Fv "$LINE" >> ${FINALLOG}
fi
rm ${RUNFILEFLAG}
exit 0
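A hypothetical invocation, using the argument order above (script name and paths are made up):
$ ./remote_tail.sh remotemachine /var/log access.log myname /home/myname/logs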
rsync://[USER@]HOST[:PORT]/SRC... [DEST] | tail [DEST] ?
Someone suggested using nc (netcat). This solution does work but is less ideal than just using ssh -t. The biggest problem is that you have to use nc on both sides of the connection and need to do some port discovery on the local machine to find a suitable port over which to connect. Here is the adaptation of the above code to use netcat:
$pid = fork();
if (!$pid)
{
    exec("ssh $host -f 'tail -f $filename | nc $localhost $port'");
    exit;
}
exec("nc -l -p $port");
There is File::Tail. Don't know if it helps?