I ssh into a shared host (WebFaction) and then use ssh-agent to establish a connection to a Mercurial repository (Bitbucket). I invoke the agent like so:
eval `ssh-agent`
This then spews out the pid of the agent and sets its relevant environment variables. I then use ssh-add as follows to add my identity (after typing my passphrase):
ssh-add /path/to/a/key
My ssh connection eventually times out and I'm disconnected from the server. When I log back in, I can no longer connect to the Hg server and so I do this:
ps aux | grep '1234.*ssh-agent'
kill -SIGHUP 43210
And then repeat the two commands at the top of the post (i.e. invoke the agent using eval and call ssh-add).
I'm sure that there's a well-established idiom for avoiding this process and maintaining a "reference" to the agent that was spawned initially. I've tried redirecting the I/O of the first command to a file (in the hope of sourcing it in my .bashrc), but I only get the agent's pid.
How can I avoid having to go through this process each time I ssh into the host?
My *NIX skills are weak, so constructive criticism on any aspect of the post is welcome, not just my use of ssh-agent.
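(For reference, the usual shape of that idiom, as a sketch assuming bash and an arbitrary file name: write ssh-agent's output to a file sourced from .bashrc, and start a new agent only when the saved one has died. You still run ssh-add once per fresh agent, but reconnecting reuses the live one.)
AGENT_ENV="$HOME/.ssh/agent.env"
# load the saved agent variables, if any
[ -f "$AGENT_ENV" ] && . "$AGENT_ENV" > /dev/null
# if no live agent is reachable, start one and save its environment
if ! kill -0 "${SSH_AGENT_PID:-}" 2>/dev/null; then
    ssh-agent > "$AGENT_ENV"
    . "$AGENT_ENV" > /dev/null
fi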
Short answer:
With ssh-agent running locally and identities added, ssh -A user@host.webfaction.com provides a secure shell on the remote host with the local agent's identities available.
Long answer:
As Charles suggested, agent forwarding is the solution.
At first, I thought that I could just issue ssh user@host.webfaction.com and then, from within the secure session on the remote host, connect to the Bitbucket repository using hg+ssh. But that failed, and so I investigated the ForwardAgent (client) and AllowAgentForwarding (server) options.
Thinking that I'd have to settle for a workaround in .bashrc that involved keeping my private key on the remote host, I went looking for a shell-script solution but was spared from this kludge by this answer in SuperUser, which is perfect and works without any client configuration (I'm not sure how the sshd server is configured on WebFaction).
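If you would rather not pass -A on every invocation, the same behaviour can be enabled per host in the local ~/.ssh/config; a minimal sketch using the hostname from the question:
Host host.webfaction.com
    ForwardAgent yes
Only do this for hosts you trust: anyone with root on the remote machine can use your forwarded agent while you are connected.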
Aside: in my question, I posted the following:
ps aux | grep '1234.*ssh-agent'
kill -SIGHUP 43210
but this is actually inefficient and requires the user to know his/her uid (available via /etc/passwd). pgrep is much easier:
pgrep -u username process-name
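And if the goal is just to signal the matching process, pkill (pgrep's sibling) matches and signals in one step:
pkill -HUP -u username ssh-agent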
I need to re-install PostgreSQL because I am not able to log in anymore (lost password). However, every time I try to kill the process on the corresponding port (5432), the PID changes and the port still does not get freed. I am getting frustrated; this has been dragging on for over two weeks now.
Here is what I am doing:
# find the PID listening on port 5432
sudo lsof -i :5432   # this gives me a line where I can identify the process ID
sudo kill -9 <PID>   # I use the PID given by the previous command
The last command gives a prompt asking me whether I want postgres to accept incoming network connections. Whichever option I choose (deny or allow) leads to the same thing. When I try to start postgres, it still tells me that port 5432 is busy, and indeed it is busy. When I re-use the first command above I notice that postgres is still there and the PID has changed.
I sorted the problem. I had another instance of Postgres (9.5, I believe) running in the background. I found it in my Library folder; now the port is completely free.
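For anyone hitting the same wall: a PID that changes after every kill usually means a service manager is respawning the process. On macOS (which the Library folder suggests), a sketch along these lines should locate and stop the job for good; the plist name below is only an illustrative guess, so check what the first command actually lists:
# look for a launchd job that keeps respawning postgres
ls ~/Library/LaunchAgents /Library/LaunchDaemons 2>/dev/null | grep -i postgres
# unload the job you find (example name; yours will differ)
sudo launchctl unload -w /Library/LaunchDaemons/com.edb.launchd.postgresql-9.5.plist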
I'm writing a Perl script to SSH into remote Linux and Mac machines from a Windows machine. For that I'm running the plink (PuTTY Link) command using qx. The problem is that when I try to run the plink command it gives a prompt:
The server's host key is not cached in the registry. You have no guarantee that the server is the computer you think it is. ...... If you do not trust this host, press Return to abandon the connection. Store key in cache? (y/n)
I have to automate the process of running a command remotely. So, I somehow want to bypass this warning.
I can think of two ways of doing this but don't know how to accomplish either:
Somehow bypass this warning from PuTTY itself, through command-line options or other commands
Some Perl way of passing input to plink when prompted
Can anyone suggest how to do this, either in one of the above ways or with some other solution?
I solved it using a pipe to pass Y to plink when prompted: echo Y | plink -ssh <user>@<host> -pw <password> <command>.
For more details refer to this answer. Also note the answer by @clay, where he says:
For internal servers, the blind echo y | ... trick is probably adequate (and super simple). However, for external servers accessed over the internet, it is much more secure to accept the server host key once rather than blindly accepting every time.
This was the case with me - I was using plink to ssh to internal servers.
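Following that advice, caching the key once by hand keeps later scripted runs from prompting; a minimal sketch (run interactively, once, under the same Windows account the script runs as):
plink -ssh <user>@<host>
Answer "y" at the prompt; the key is then stored in the registry and subsequent non-interactive runs no longer ask. Recent PuTTY releases also offer plink's -hostkey option to pin an expected fingerprint on the command line, if your version supports it.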
I have two servers: an Amazon EC2 instance (t1.medium) and a Microsoft Azure instance (medium). Both servers have the same config, Ubuntu LTS 12.04.1, 64-bit arch, running PostgreSQL 9.1. I need to set up a disaster recovery system on Azure (turn on WAL archiving for the Amazon instance's database, for my specific schedules of data backups, via pgbarman).
While going through the pgbarman docs, I found that one of the mandatory requirements is that:
SSH communication is required in both directions without password authentication/prompt. (Pgbarman has a prerequisite that postgres@amazon can ssh directly to barman@azure and vice versa. See Getting started with Pgbarman.)
But the complications for logging in to these instances are as follows:
Amazon EC2 has a .pem file which can be accessed without any password authentication: ssh -i my-pem-file.pem ubuntu@my-instance-public-ip-region.compute.amazonaws.com
Azure doesn't have a .pem file. Instead, it is accessed with a password mechanism: ssh azure-user@app.cloudapp.net
Still, to enable the setup I did the following:
I created a key file postgres-barman.pub via ssh-keygen as barman@azure.
Transferred this file to Amazon via ssh-copy-id -i ubuntu@amazon (see the links below for more information).
My problems are:
ssh Azure to Amazon:
I cannot transfer this file to the postgres user:
cat postgres-barman.pub | ssh -i my-pem-file.pem postgres@amazon 'cat >> .ssh/authorized_keys' fails, but if I change the destination user to ubuntu, the file gets copied.
After transferring the file (via the ubuntu user), I try to do this: ssh postgres@amazon. It fails.
ssh Amazon to Azure
The same file now resides on both sides. Still, if I issue ssh barman@azure, it asks for password authentication (which is set to yes in /etc/ssh/sshd_config on the Azure instance). I cannot proceed with this due to the barman prerequisite.
Amazon allows ssh only via the ubuntu user. I need to enable this for the postgres user. Can this be done?
Note: Amazon has PasswordAuthentication set to no in its sshd_config file.
References:
ssh-copy-id:
Ubuntu SSH,
3 steps to Perform SSH Login Without Password Using ssh-keygen & ssh-copy-id and
SSH-in-Linux.
Anyway, I got it sorted out.
I wasn't doing the configuration properly. This is what I did.
On Amazon:
ubuntu@amazon:~$ sudo -s
root@amazon:~# passwd postgres
Enter new UNIX Password:
ubuntu@amazon:~$ su - postgres
Password:
postgres@amazon:~$ ssh-keygen -t rsa
# copy the key under a distinct name so it does not overwrite barman's own id_rsa.pub
postgres@amazon:~$ scp ~/.ssh/id_rsa.pub barman@azure-ip:~/.ssh/postgres-amazon.pub
On Azure:
ubuntu@azure:~$ sudo -s
root@azure:~# passwd barman
Enter new UNIX Password:
ubuntu@azure:~$ su - barman
Password:
barman@azure:~$ cd ~/.ssh
barman@azure:~/.ssh$ cat postgres-amazon.pub >> authorized_keys
Now, ssh to azure:
postgres@amazon:~$ ssh barman@azure
Now, repeat the same for Azure.
The only difference was that the key transfer to Amazon wasn't happening via scp. So I copied the contents of id_rsa.pub from barman@azure's ~/.ssh folder, pasted them into postgres@amazon's .ssh/authorized_keys file and saved it.
Now, ssh to amazon:
barman@azure:~$ ssh postgres@amazon
It works! Thanks for the advice!
References:
Switch user in Linux/Ubuntu
Barman-setup-explained
Now to worry about barman's cron job.
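For completeness, the barman documentation suggests a system cron entry along these lines (user and path may differ on your install), which drives barman's periodic maintenance, including WAL archiving:
# /etc/cron.d/barman: run barman's maintenance every minute
* * * * * barman /usr/bin/barman cron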
Is it possible to run Emacs in server mode so that remote clients can connect over the network? I'm looking for a way to run Emacs on a powerful remote server and edit buffers locally using emacsclient, while running the compile command remotely. This looks like a much better approach than using an ssh session, since it should not depend on network latency.
I think the best approach is: http://www.emacswiki.org/emacs/TrampMode
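A minimal sketch of what Tramp looks like in practice (host and path are placeholders): a local Emacs opens the remote file over ssh, so all editing stays local and only saves and loads touch the network.
# open a remote file through Tramp from the command line
emacs /ssh:user-name@example.net:~/my-sources/main.c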
Based on my comment above, I'd recommend the following workflow:
Retrieve the sources you work on to a local directory (via scp or git, whatever)
Make the required changes to the code
To compile the code on the remote server, specify a custom compile-command which will (see the sketch after this list):
Push the changed files back to the remote server. E.g.: scp -r my-sources/ user-name@example.net:my-sources or via git push remote my-dev-branch
Run the compilation command through ssh and show the output. E.g.: ssh user-name@example.net "cd ~/my-sources && make && ./bin/compiled-app"
Note: for smooth command execution through ssh, key-based authentication should be configured.
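A sketch of steps 4 and 5 combined into one shell command, which could serve as that custom compile-command (host, paths and build targets are placeholders; rsync stands in for scp so only changed files get pushed):
# push local changes, then build and run remotely, streaming output back
rsync -az my-sources/ user-name@example.net:my-sources/ && \
    ssh user-name@example.net 'cd ~/my-sources && make && ./bin/compiled-app'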
The significant drawback here is that it might not run X11 applications correctly (or at all).
I was trying to set up an SSH connection with GitHub following this tutorial:
Testing your SSH connection
I came across the following command:
$ ssh -T git@github.com
# Attempts to ssh to github
Curious, I looked at the ssh manual. It said the following:
-T Disable pseudo-tty allocation.
What is tty allocation? What does tty stand for? Why are we disabling it? I earnestly tried to look it up but was unable to find even a definition.
As explained in "gitolite: PTY allocation request failed on channel 0", it is important to do the ssh connection test with -T, because some servers could abort the transaction entirely if a text terminal (tty) is requested.
-T avoids requesting said terminal, since GitHub has no intention of giving you an interactive secure shell where you could type commands.
GitHub only wants to reply to your ssh request, in order to ascertain that the ssh command does work (you have the right public/private keys, and the public one has been registered to your GitHub account).
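When the key is set up correctly, that test typically answers with a short greeting before closing the connection (exact wording may vary):
$ ssh -T git@github.com
Hi username! You've successfully authenticated, but GitHub does not provide shell access.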
PuTTY would be an example of a terminal emulator, serial console and network file transfer application. It supports several network protocols, including SCP, SSH, Telnet and rlogin.
The name "PuTTY" has no definitive meaning, though "tty" is the name for a terminal in the Unix tradition, usually held to be short for Teletype.
Other use cases for -T (besides testing)
Transferring binary files
Execute commands on a remote server
SSH tunneling: ssh -fnT -L port:server:port user@server (-f for background, -n to stop reading from stdin: you don't want to execute a remote command, don't need a TTY, and just want to establish a tunnel)
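As a concrete (hypothetical) example, tunneling a remote PostgreSQL server to a local port:
# forward local port 5433 to port 5432 on the server, in the background
ssh -fnT -L 5433:localhost:5432 user@server
# then connect through the tunnel locally
psql -h localhost -p 5433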