Can't connect to my server with restricted rsync (rrsync) - perl

I'll explain my problem:
I use BackupPC to back up some databases on another server, and to transfer the data I use rsync, which runs over ssh. On the remote server I installed backuppc's ssh key and it worked.
But I wanted to secure this connection, so I used rrsync (a Perl script that restricts access) to allow only "read-only" access for the copy.
So now, on the remote server I have this in /root/.ssh/authorized_keys:
command="/usr/local/bin/rrsync -ro /" ssh-rsa
But when I try to connect I get this message:
/usr/local/bin/rrsync: Not invoked via sshd
It's a message from the Perl script, but I don't know what it means or what I can do to make this work.

As far as I can tell, this message appears when you try to access the server with the restricted key without going through rsync, i.e. without sending a remote command for sshd to hand to rrsync. It may be possible to edit the script to allow other programs, but I'm not skilled enough to attempt that.
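A quick way to check this from the BackupPC server is to compare an rsync pull with a bare ssh login using the same key; rrsync only sees the command sshd passes along in SSH_ORIGINAL_COMMAND, so a login that sends no command is refused. A sketch (the key path and host name are assumptions):
# an rsync pull through the restricted key should succeed
rsync -av -e "ssh -i /var/lib/backuppc/.ssh/id_rsa" root@remote.host:/etc/hostname /tmp/
# a plain login with the same key sends no remote command,
# so expect the same "Not invoked via sshd" refusal seen above
ssh -i /var/lib/backuppc/.ssh/id_rsa root@remote.host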

Related

PsExec connects using system name but not IP address

I need to use my local computer to simulate a test stand, which will be on a domain, and access a remote computer on a workgroup using PsExec. The testing computer is built from an imaging tool; its IP will be the same every time but its name will not. The process I'm working with was used on an embedded XP system and is now being upgraded to Windows 10. I've added network security using GPO and have found workarounds to be able to open the connection, but for some reason just trying to run cmd on the remote machine does not work when using the IP, only the name. Using the IP returns an "access is denied" error. I have already added the token filter key to the registry. Has anyone heard of something like this before?
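For reference, the token filter change mentioned above is normally this registry value, set on the remote machine (shown as a sketch; a reboot may be needed for it to take effect):
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f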
I have a script I'm trying to run, but in the meantime I'm just trying to get this to work (USER_NAME is an admin account):
psexec \\IP_ADDRESS -h -u USER_NAME -p PASSWORD cmd
edit: I have to keep my computer on a domain, but I have a spare that I was able to put on a workgroup with the test system. Running psexec from it worked perfectly. It makes no sense that it works with the name but not the IP on a domain->workgroup connection, yet works exactly how I need it to on a workgroup->workgroup connection.

How do I SSH from a Docker container to a remote server

I am building a Docker image on top of the postgres image, and I would like to seed it with some data.
I am following the initialization-scripts section of the documentation.
But the problem I am facing now is that my initialisation script needs to ssh to a remote database server and dump data from there. Basically something like this:
ssh remote.host "pg_dump -U user -d somedb" > some.sql
but this fails with the error ssh: command not found
The question now is: in general, how do I ssh from a Docker container to a remote server? And in this case, specifically, how do I ssh from a Docker container to a remote database server as part of the initialisation step of seeding a postgres database?
As a general rule you don't do things this way. Typical Docker images contain only the server they're running and some core tools, but network clients like ssh or curl generally aren't part of this. In the particular case of ssh, securely managing the credentials required is also tricky (not impossible, but not obvious).
In your particular case, I might rearrange things so that your scripts didn't have the hard assumption the database was running locally. Provision an empty database container, then run your script from the host targeting that empty database. It may even work to set the PGHOST and PGPORT environment variables to point to your host machine's host name and the port you publish the database interface on, and then run that script unmodified.
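A sketch of that second idea (the container name, password, and script name are hypothetical):
# start an empty database container and publish its port on the host
docker run -d --name empty-db -p 5432:5432 -e POSTGRES_PASSWORD=secret postgres
# then run the existing seed script from the host, pointed at that container
PGHOST=localhost PGPORT=5432 PGUSER=postgres PGPASSWORD=secret ./seed-from-remote.sh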
Looking closer at that specific command, you also may find it better to set up a cron job to run that specific database dump and put the contents somewhere. Then a developer can get a snapshot of the data without having to make a connection to the live database server, and you can limit the number of people who will have access. Once you have this dump file, you can use the /docker-entrypoint-initdb.d mechanism to cause it to be loaded at first startup time.
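A sketch of that flow, assuming the cron job (or a developer) has already written the dump into a local ./seed directory; the container name and password are placeholders:
# produced earlier, outside of Docker:
# ssh remote.host "pg_dump -U user -d somedb" > seed/some.sql
# on first startup the postgres image runs every *.sql file found in /docker-entrypoint-initdb.d
docker run -d --name seeded-db \
  -e POSTGRES_PASSWORD=secret \
  -v "$PWD/seed:/docker-entrypoint-initdb.d" \
  postgres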

Unable to ssh to a remote machine through shell script while accessing it from UI

I have a Linux machine where I have created a cgi script (JarPatch.cgi), the code of which looks like this:
#!/usr/bin/perl
use warnings;
print "Content-type: text/html\n\n";
system ("sh JarPatch.sh");
The code of JarPatch.sh looks like this:
#!/bin/bash
echo "Inside jar patching tool";
PJS_DEV=app4915@slcai833.us.oracle.com;
ssh -f $PJS_DEV "cd /slot/ems4915/appmgr/tmp; echo stopping server ; ./find_stop_servers.sh;"
echo "Exit jar patching tool";
This script basically shuts down a server running on the remote machine.
The problem statement is this:
When I execute the CGI script from a Linux terminal, I can see that the ssh commands are executed and the server is shut down.
When I access the CGI script from a Windows machine in a browser, the shell script is invoked but ssh does not seem to work.
Can anyone give me a pointer to resolve this issue, please?
I am new to perl/shell integration, so I might be missing something small as well.
Thanks
When you access the script through the browser, the connection is made as the web server user (webuser), which is not authorized to ssh into the remote machine. On the other hand, when you run it from the Linux terminal you are acting as a user who is authorized to do so, because that Linux user has its ssh key on the remote server.
You can also look into ProxyCommand, which might come to the rescue, but I have no idea how it would work with Windows.
Another approach is to create ssh keys for the web server user and put them on the remote server, which would be a security risk.
When you run it as yourself, ssh offers your keys to authenticate you. When you run it through the web server, the web server user is trying to run the ssh command and does not have your ssh keys to offer, so it is probably being prompted for a password and not logging in successfully.
You could fix this by generating ssh keys for the webserver user and sharing that key with the target system as well, which has some security implications, to say the least.
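A minimal sketch of that fix, assuming the web server runs as a user named apache whose home directory is /var/www (both names are assumptions; adjust to your setup):
# create the key store and a passphrase-less key for the web server user
sudo -u apache mkdir -p /var/www/.ssh
sudo -u apache ssh-keygen -t rsa -N "" -f /var/www/.ssh/id_rsa
# install the public key on the remote host named in the script above
sudo -u apache ssh-copy-id -i /var/www/.ssh/id_rsa.pub app4915@slcai833.us.oracle.com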

Emacs-Tramp: Not working properly

I'm trying to use Tramp with Emacs 23 on Ubuntu 12.04 in order to edit files on remote hosts. My remote host has two-step authentication (RSA key + password). I use ssh multiplexing through .ssh/config so that Tramp can connect directly to the remote shell without having to provide passwords.
My problem, however, is that I have 3 different remote hosts. When I try to connect to a remote host through Tramp without first establishing the multiplexed connection (through a terminal), Tramp hangs with the message "Tramp: Waiting for prompts from remote shell". I added the options below to .ssh/config to make the connection drop after a specified interval when there is no response.
Host *
  ServerAliveCountMax=30
  ServerAliveInterval=5
However, this doesn't seem to have any effect on the Tramp connection. It would be a great help if someone could help me fix this issue.
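For context, the multiplexing setup referred to above usually looks something like this (a sketch; the ControlPath location is an assumption and its directory must already exist):
Host *
  ControlMaster auto
  ControlPath ~/.ssh/sockets/%r@%h-%p
  ControlPersist 600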
Sorry that your question has been left hanging so long.
I can offer a couple of things to try. First, use the Tramp method sshx instead of ssh; it seems to cope better with most non-vanilla ssh connections.
e.g.
/sshx:user@host:path/filename
The other thing to try is adding your ssh key passphrase to the session at startup: run an ssh-agent on the machine, connect to it at startup, then run ssh-add to enter the passphrase once.
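A minimal sketch of the ssh-agent approach, assuming the default key path:
eval "$(ssh-agent -s)"   # start an agent for this login session
ssh-add ~/.ssh/id_rsa    # prompts for the passphrase once; later ssh/Tramp connections reuse it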
As a side note, upgrade your Emacs to 24.3; there's a lot of new/great stuff in there since 23.x.

Problems using teamcity command line to perform ssh remote login

I was wondering if anyone has tried using TeamCity's command line builder to perform an ssh remote login.
Right now, I would like to automate some testing on QNX Neutrino, which is currently unsupported by TeamCity. As a workaround, I set up an ssh server on the target QNX machine so I could ssh in and sftp the executables over.
First, the sources are compiled on Windows XP using QNX's compiler (based on g++), followed by sftp-ing the executables onto QNX Neutrino.
Next, using ssh, a script logs in to remotely start the test apps and sends the results back to the remote agent for publishing.
The batch script I created works well standalone; however, after hooking it up to the remote agent, it fails to log in over ssh and hangs indefinitely at the following command:
ssh -l "./.sh"
Notes:
I have added the remote agent's RSA public key to the QNX machine's .ssh/authorized_keys file, and automatic login is working.
Is there a need to add the TeamCity server's RSA public key as well?
Does anyone have any idea about this problem?
I had a few weird problems with key-based SSH logins on QNX related to file permissions for the keys in .ssh and the permissions of the parent folders (/home/username and /root).
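The usual permission fixes look roughly like this (a sketch; substitute the actual login user and home directory):
chmod 755 /home/username                       # home must not be group/world writable
chmod 700 /home/username/.ssh
chmod 600 /home/username/.ssh/authorized_keys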
Add
LogLevel DEBUG3
to /etc/openssh/sshd_config, make sure syslog is configured and is logging sshd output, restart sshd and try again - it will most likely complain about something.
Also, ssh -l "./.sh" makes no sense: -l is used to specify the user name, so something is off there.
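For comparison, the usual shape of that command would be something like this (the user, host, and script names here are purely hypothetical):
ssh -l qnxuser target-qnx-host "./run_tests.sh"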