Send Multiple Commands to External Program - PowerShell

We are trying to write a PowerShell script that invokes an external application -- a Redis client (redis-cli.exe) -- and then sends multiple commands to that .exe. We have no issue sending individual commands like the one below:
& redis-cli -h localhost -p 6379 SMEMBERS someKey
The problem is that this will start a Redis client, issue a single command, close the client, and then return control to PowerShell. We need to issue multiple commands in a transaction. For example, here are the commands that we want to send to the client:
MULTI
DEL someKey
DEL someSet
EXEC
The Redis client does support sending a Lua script string as a command, but that unfortunately doesn't support the MULTI/EXEC transactional commands. In other words, we need to be able to issue multiple commands like those listed above.

Since redis-cli appears to read input from STDIN, you could feed it an array with the command strings like this:
'MULTI', 'EXEC' | & redis-cli -h localhost -p 6379
Using echo (alias for Write-Output) is not required for feeding the array into the pipeline.
You could also store the command array in a variable first:
$cmds = 'MULTI', 'EXEC'
$cmds | & redis-cli -h localhost -p 6379
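Applied to the transaction from the question, a minimal sketch (assuming redis-cli keeps reading newline-separated commands from STDIN until the pipeline closes) would be:
# each array element is sent to the client as one line, so all commands run in a single client session
$cmds = 'MULTI', 'DEL someKey', 'DEL someSet', 'EXEC'
$cmds | & redis-cli -h localhost -p 6379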

Related

Use existing SSH_AUTH_SOCK to execute commands on remote server

I connect to my work server (workserver1.com) from my local PC (localhost) using SSH and execute a bunch of commands on workserver1.
Below are the commands I execute using SSH
1) Run a script on the server to collect production data and write it to a txt file:
ssh -A workserver1.com 'python3 /usr/local/collect_data_online.py 2>&1 | tee /home/myname/out.txt'
$ please input your dynamic token: <manually input credential token generated every 15s>
2) Filter the lines I need and write them to a dat file:
ssh -A workserver1.com "grep 'my-keyword-cron' out.txt | grep -oP '({.*})' | tee workserver2.dat"
$ please input your dynamic token: <manually input credential token again>
3) Send the data collected in 2) to workserver2, which can only be accessed through workserver1:
ssh -A workserver1.com 'curl workserver2.com --data-binary "@workserver2.dat" --compressed'
$ please input your dynamic token: <manually input credential token 3rd time>
In each step above, I actually create a completely different connection (socket) to workserver1.com. I confirmed this by running the command below against the remote server:
$ ssh -A workserver1.com 'printenv | grep SSH'
SSH_CLIENT=10.126.192.xxx 58276 22
SSH_SESSION_ID=787878787878787878
SSH_TTY=/dev/pts/0
SSH_AUTH_SOCK=/tmp/ssh-XXXXKuJLEX/agent.29291
SSH_AUTH_CERT_SERIAL=666666666
SSH_AUTH_CERT_KEY=myname
# SSH_CONNECTION changes each time I make an SSH request to workserver1.com, so I need to input the dynamic token manually every time
SSH_CONNECTION=10.126.192.xxx 58276 10.218.35.yyy 22
On my localhost I can also see the SSH auth socket used for the connection:
$ SSH_AUTH_SOCK=/tmp/ssh-localhost/agent.12345
My question is: is there a way to use a single existing socket, so that I avoid making multiple SSH connections and only input the dynamic token once? I'd like to use the existing socket to interactively run commands on the SSH server and collect the output/data I want, just as I do on my localhost.
What I have in mind is:
1) socat: can I run a command on localhost like
socat UNIX-CONNECT:$SSH_AUTH_SOCK,exec:'commands I want to execute' -
to get an interactive client/server shell?
2) Is there any ssh option I could use?
I am new to socat and not familiar with ssh beyond some commonly used commands.
Thank you for your help in advance
The solution is to open the first connection with '-M'.
First use ControlMaster and ControlPath in ~/.ssh/config as below:
host *
ControlMaster auto
ControlPath ~/.ssh/ssh_mux_%h_%p_%r
Then, when connecting to the remote host for the very first time, add '-M':
ssh -M $remotehost
For subsequent ssh connections to the same host you can then just use
ssh $remotehost
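With that in place, a sketch of the workflow from the question (assuming the master connection is kept open in one terminal, so the dynamic token is entered only once) would be:
# first connection: creates the control socket and prompts for the token once
ssh -M workserver1.com
# in another terminal: these reuse the existing control socket, no token prompt
ssh workserver1.com 'python3 /usr/local/collect_data_online.py 2>&1 | tee /home/myname/out.txt'
ssh workserver1.com "grep 'my-keyword-cron' out.txt | grep -oP '({.*})' | tee workserver2.dat"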

PostgreSQL COPY pipe output to gzip and then to STDOUT

The following command works well
$ psql -c "copy (select * from foo limit 3) to stdout csv header"
# output
column1,column2
val1,val2
val3,val4
val5,val6
However the following does not:
$ psql -c "copy (select * from foo limit 3) to program 'gzip -f --stdout' csv header"
# output
COPY 3
Why do I have COPY 3 as the output from this command? I would expect that the output would be the compressed CSV string, after passing through gzip.
The command below works, for instance:
$ psql -c "copy (select * from foo limit 3) to stdout csv header" | gzip -f -c
# output (this garbage is just the compressed string and is as expected)
߉T`M�A �0 ᆬ}6�BL�I+�^E�gv�ijAp���qH�1����� FfВ�,Д���}������+��
How can I make a single SQL command that directly pipes the result into gzip and sends the compressed output to STDOUT?
When you use COPY ... TO PROGRAM, the PostgreSQL server process (backend) starts a new process and pipes the file to the process's standard input. The standard output of that process is lost. It only makes sense to use COPY ... TO PROGRAM if the called program writes the data to a file or similar.
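For example, a minimal sketch of a sensible use (the path is hypothetical and refers to the database server's filesystem, not the client's):
COPY (SELECT * FROM foo LIMIT 3) TO PROGRAM 'gzip > /tmp/foo.csv.gz' WITH (FORMAT csv, HEADER);
Here gzip writes the compressed file on the server; nothing compressed ever reaches the client.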
If your goal is to compress the data that go across the network, you could use sslmode=require sslcompression=on in your connect string to use the SSL network compression feature I built into PostgreSQL 9.2. Unfortunately this has been deprecated and most OpenSSL binaries are shipped with the feature disabled.
There is currently a native network compression patch under development, but it is questionable whether that will make v14.
Other than that, you cannot get what you want at the moment.
copy is running gzip on the server and not forwarding the STDOUT from gzip on to the client.
You can use \copy instead, which would run gzip on the client:
psql -q -c "\copy (select * from foo limit 3) to program 'gzip -f --stdout' csv header"
This is fundamentally the same as piping to gzip, which you show in your question.
If the goal is to compress the output of copy so it transfers faster over the network, then...
psql "postgresql://ip:port/dbname?sslmode=require&sslcompression=1"
It should display "compression active" if it's enabled. That probably requires some server config variable to be enabled though.
Or you can simply use ssh:
ssh user@dbserver "psql -c \"copy (select * from foo limit 3) to stdout csv header\" | gzip -f -c" >localfile.csv.gz
But... of course, you need ssh access to the db server.
If you don't have ssh to the db server, maybe you have ssh to another box in the same datacenter that has a fast network link to the db server, in that case you can ssh to it instead of the db server. Data will be transferred uncompressed between that box and the database, compressed on the box, and piped via ssh to your local machine. That will even save cpu on the database server since it won't be doing the compression.
If that doesn't work, well then, why not put the ssh command into the "to program" and have the server send it via ssh to your machine? You'll have to set up your router and open a port, but you can do that. Of course you'll have to find a way to put the password in the ssh command line, which is usually a big no-no, but maybe just for once. Or just use netcat instead, which doesn't require a password.
Also, if you want speed, please, use zstd instead of gzip.
Here's an example with netcat. I just tested it and it worked.
On destination machine which is 192.168.0.1:
nc -lp 65001 | zstd -d >file.csv
In another terminal:
psql -c "copy (select * from foo) to program 'zstd -9 |nc -N 192.168.0.1 65001' csv header" test
Note -N option for netcat.
You can use copy to PROGRAM:
COPY foo_table TO PROGRAM 'gzip > /tmp/foo_table.csv.gz' WITH (FORMAT csv, HEADER, DELIMITER ',');

How do I pass my username and password into a perl script from an ansible role?

I have a Perl script for creating SSL certificates on an IBM MQ qmgr. The script needs a username and password for it to work.
I have an Ansible role that calls a ready-made Perl script to create an MQ qmgr and another to create an SSL kdb, like this:
- name: Create MQ Queue Manager
  shell: "./CreateQmgr.sh -m {{MQ_QMGR1}}"
  args:
    chdir: /opt/wmqinf/utilities

- name: SSL the new Qmgr
  shell: "./renewSSL.pl -S {{SSL_PEER1}} -U -m {{MQ_QMGR1}} -G {{GBGF}}"
  args:
    chdir: /opt/wmqinf/utilities
The playbook/role fails when it can't create the SSL kdb because no username is entered.
Is there a way I can pass the .pl script my username and password for it to work?
I'm sure there is a better way to do this, such as modifying your Perl script to accept extra command-line arguments. However, this should/might work:
shell: "./renewSSL.pl -S {{SSL_PEER1}} -U -m {{MQ_QMGR1}} -G {{GBGF}} <(echo -n 'str2') <(echo -n 'str3')"
(Note that <(...) process substitution is a bash feature, so with the shell module you may also need to set args: executable: /bin/bash.)
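A different sketch, assuming you can change renewSSL.pl to read the credentials from environment variables (mq_user and mq_pass below are hypothetical variables you would supply via vars_prompt or an Ansible vault), keeps the secrets off the command line:
- name: SSL the new Qmgr
  shell: "./renewSSL.pl -S {{SSL_PEER1}} -U -m {{MQ_QMGR1}} -G {{GBGF}}"
  args:
    chdir: /opt/wmqinf/utilities
  environment:
    MQ_USER: "{{ mq_user }}"   # hypothetical variable
    MQ_PASS: "{{ mq_pass }}"   # hypothetical variable; the script would read $ENV{MQ_PASS}
This has the advantage that the password does not show up in process listings on the target host.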

call postgresql pg_dump.exe from visual basic

I'm trying to do a backup of my database from my application made in Visual Basic (Visual Studio 2012).
I copied pg_dump.exe with the necessary DLL files to the application root.
I tested pg_dump by doing a backup from a cmd window and it works fine.
This is the code I use to try to call the pg_dump exe, but apparently it does not receive the parameters I'm trying to send.
' New ProcessStartInfo created
Dim p As New ProcessStartInfo
Dim args As String = "-h serverip -p 5432 -U postgres db_name > " & txtPath.Text.ToString
' Specify the location of the binary
p.FileName = "pg_dump.exe"
' Use these arguments for the process
p.Arguments = args
' Use a hidden window
p.WindowStyle = ProcessWindowStyle.Maximized
' Start the process
Process.Start(p)
When the process starts I get this message:
pg_dump: too many command-line arguments (first is ">")
Try "pg_dump --help" for more information
If I type this in cmd the backup is done OK:
pg_dump.exe -h serverip -p 5432 -U postgres db_name > c:\db_bak.backup
But I can't make it work from Visual Basic.
First, understand what > does in your working command: pg_dump writes the dump to standard output, and the shell (cmd.exe) redirects that output into the file, overwriting it if it already exists. pg_dump itself doesn't understand >, <, etc.; these operators instruct the shell to do I/O redirection. When you start pg_dump.exe directly there is no shell involved, so > and the file path are simply passed to pg_dump as extra command-line arguments, which is exactly what the error message is complaining about. To get I/O redirection (>, etc.) you need to run the command via the cmd shell, not invoke the executable directly.
In this case it's probably better to use the -f filename option to pg_dump to tell it to write to a file instead of standard output. That way you avoid I/O redirection and don't need the shell anymore. It should be as simple as:
Dim args As String = "-h serverip -p 5432 -U postgres db_name -f " & txtPath.Text.ToString
Alternately, you can use cmd /C to invoke pg_dump via the command shell. Visual Basic might offer a shortcut way to do that; I don't use it, so I can't comment specifically on the mechanics of process invocation in Visual Basic. Check the CreateProcess docs; VB likely uses CreateProcess under the hood.
Personally, I recommend the -f approach, since it avoids the shell entirely.
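A minimal sketch of that -f approach (assuming pg_dump.exe and its DLLs sit next to the application, and that authentication is handled by a pgpass file or the PGPASSWORD environment variable) might look like:
' Build the pg_dump invocation without any shell redirection.
Dim p As New ProcessStartInfo
p.FileName = "pg_dump.exe"
' Options first, database name last; quote the path in case it contains spaces.
p.Arguments = "-h serverip -p 5432 -U postgres -f """ & txtPath.Text & """ db_name"
p.UseShellExecute = False
p.CreateNoWindow = True          ' no console window flashes up
Dim proc As Process = Process.Start(p)
proc.WaitForExit()               ' block until the dump file is fully written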

Is there a way to query for database names within an SQLAnywhere Service?

I have launched an SQLAnywhere v12 service instance with the following command:
"C:\Program Files\SQL Anywhere 12\Bin64\dbsvc.exe" -as -s auto -t network
-w TestEmpty12 "C:\Program Files\SQL Anywhere 12\Bin64\dbsrv12.exe"
-n TestEmpty12 -x tcpip -c 512m
"C:\bin\Test Databases\Empty12\TestEmpty12.db"
I know that I can find running instances of SQLAnywhere using the dblocate executable. However, that utility only provides the server name, address and port information. Is there a way I can get the database name, in this case 'TestEmpty12'?
Note that I am not necessarily on the same computer as the service.
You can run dblocate with the -d option. This will list the databases for each server as a comma-delimited list.
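For example, from a machine on the same network:
dblocate -d
Each server in the output is then followed by the comma-delimited names of the databases it is running, so 'TestEmpty12' should appear next to the TestEmpty12 server.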
Documentation for dblocate:
Server Enumeration utility (dblocate)