ssh from a cluster node triggers public key error for all remote hosts (MWE for github) [closed]

Question:
For some reason, all remote hosts stopped accepting my SSH key.
While troubleshooting this, I finally realized that even after removing my public key completely from GitHub (which should still fall back to password until 8/13), I still get a "publickey" error. How do I fix this?
Steps to reproduce:
remove my cluster account public key from github user settings
attempt to connect (produces error)
[me@login-node:/data/homevols/me] $ ssh -T git@github.com
Permission denied (publickey).
Sanity-check:
[me@login-node:/data/homevols/me] $ less ~/.ssh/config
Host *
IdentityFile ~/.ssh/id_rsa
/data/homevols/me/.ssh/config (END)

I have never seen GitHub fall back to password authentication over SSH: the connection uses the technical account git, for which there is no password anyway.
That means ssh -o PubkeyAuthentication=no git@github.com would still return git@github.com: Permission denied (publickey)., without asking for a password.
In your case: generate a new SSH key, add the public one to your profile, and try again:
ssh -Tv git@github.com
You should see a Welcome message
> Hi username! You've successfully authenticated, but GitHub does not
> provide shell access.
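A minimal sketch of that sequence, assuming an ed25519 key and the default ~/.ssh layout (the file name and email comment are placeholders):

# 1. generate a fresh key pair
ssh-keygen -t ed25519 -C "me@example.com" -f ~/.ssh/id_ed25519_github

# 2. add a host entry to ~/.ssh/config pointing at the new key
#    (the existing "Host *" block will still offer id_rsa as a fallback)
Host github.com
    User git
    IdentityFile ~/.ssh/id_ed25519_github

# 3. paste ~/.ssh/id_ed25519_github.pub into GitHub -> Settings -> SSH and GPG keys, then test
ssh -Tv git@github.com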

Related

How to change Postgres database username from inside the pod? [closed]

I want to change my Postgres database username and password for the running pod.
I am able to change the password, but how do I change the username?
Connect to the pod:
kubectl exec -it <pod-name> -- bash
Run psql
# psql
psql>
Create the user:
CREATE USER name CREATEUSER;
ALTER USER name WITH PASSWORD 'your-password';
or simply run createuser from the pod:
# createuser --adduser name
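If the goal is to rename the existing role rather than create a new one, ALTER ROLE ... RENAME TO does that. A minimal sketch from inside the pod (role names are placeholders; you cannot rename the role you are currently connected as, and renaming clears an MD5-stored password, so set it again afterwards):

# connect as another superuser, rename the role, then reset its password
psql -U postgres -c "ALTER ROLE old_name RENAME TO new_name;"
psql -U postgres -c "ALTER ROLE new_name WITH PASSWORD 'your-password';"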

Difference between authorized_keys and id_rsa.pub [closed]

I am experimenting with Vagrant and I see that when I run vagrant, the Vagrant box already has an authorized_keys file in ~/.ssh/.
Inside is an RSA key. What is the difference between that key and an id_rsa.pub public key I create myself using
ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
id_rsa.pub is a public key that you add to other hosts' authorized_keys files to allow you to log in as that user. Vagrant has its own so that it can be added to its boxes' authorized_keys files and log in automatically. The one you generated with ssh-keygen is for you to use, not Vagrant.
authorized_keys is a list of public keys that are allowed to log into that specific account on that specific server.
Think of id_rsa.pub as a signature for a specific user, and authorized_keys as the list of signatures that are allowed to log into that account on that specific host without a password (assuming the caller can prove they own the signature).
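To make the relationship concrete, a small sketch of installing your own public key on a remote account (user and host names are placeholders):

# ssh-copy-id appends the key to the remote authorized_keys and fixes permissions
ssh-copy-id -i ~/.ssh/id_rsa.pub user@remote-host

# roughly equivalent manual form
cat ~/.ssh/id_rsa.pub | ssh user@remote-host 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'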

ssh: pgbarman setup issues in Amazon-EC2 & Azure [closed]

I have two servers, one on an Amazon EC2 instance (t1.medium) and another on a Microsoft Azure instance (medium). Both have the same configuration, Ubuntu 12.04.1 LTS, 64-bit, running PostgreSQL 9.1. I need to set up a disaster recovery system on Azure (turn on WAL archiving of the Amazon instance's database for my specific backup schedules via pgbarman).
While going through the pgbarman docs, one of the mandatory requirements is that
SSH communication must work in both directions without a password prompt. (pgbarman requires postgres@amazon to be able to ssh directly to barman@azure and vice versa. See Getting started with Pgbarman.)
But my complexities for logging to these instances are below:
Amazon EC2 has a .pem file, which can be used without any password prompt: ssh -i my-pem-file.pem ubuntu@my-instance-public-ip-region.compute.amazonaws.com
Azure doesn't have a .pem file. Instead, it is accessed with a password: ssh azure-user@app.cloudapp.net
Still, to enable the setup I did the below,
I created a key file postgres-barman.pub via ssh-keygen from barman@azure.
Transferred this file to Amazon via ssh-copy-id -i ubuntu@amazon (see the links below for more information).
My problems are:
ssh Azure to Amazon:
I cannot transfer this file to the postgres user:
cat postgres-barman.pub | ssh -i my-pem-file.pem postgres@amazon 'cat >> .ssh/authorized_keys' fails, but if I change the destination user to ubuntu, the file gets copied.
After transferring the file (via the ubuntu user), I try ssh postgres@amazon. It fails.
ssh Amazon to Azure
The same file is now present on both sides. Still, if I issue ssh barman@azure, it asks for password authentication (PasswordAuthentication is set to yes in /etc/ssh/sshd_config on the Azure instance). I cannot proceed this way due to the barman prerequisite.
The Amazon instance only allows SSH logins as the ubuntu user. I need to enable this for the postgres user as well. Can this be done?
Note: Amazon has PasswordAuthentication set to no in its sshd_config file.
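One way around the first problem, sketched under the assumption of a stock Ubuntu AMI (where the ubuntu user has passwordless sudo): push the key through the ubuntu account and use sudo to land it in the postgres user's authorized_keys.

cat postgres-barman.pub | ssh -i my-pem-file.pem ubuntu@amazon \
  'sudo -u postgres mkdir -p ~postgres/.ssh &&
   sudo -u postgres tee -a ~postgres/.ssh/authorized_keys > /dev/null &&
   sudo -u postgres chmod 700 ~postgres/.ssh &&
   sudo -u postgres chmod 600 ~postgres/.ssh/authorized_keys'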
References:
ssh-copy-id:
Ubuntu SSH,
3 steps to Perform SSH Login Without Password Using ssh-keygen & ssh-copy-id and
SSH-in-Linux.
Anyway, I got it sorted out.
I wasn't doing the configuration properly. This is what I did.
On Amazon:
ubuntu@amazon:~$ sudo -s
root@amazon:~# passwd postgres
Enter new UNIX password:
ubuntu@amazon:~$ su - postgres
Password:
postgres@amazon:~$ ssh-keygen -t rsa
postgres@amazon:~$ scp .ssh/id_rsa.pub barman@azure-ip:~/.ssh/
On Azure:
ubuntu@azure:~$ sudo -s
root@azure:~# passwd barman
Enter new UNIX password:
ubuntu@azure:~$ su - barman
Password:
barman@azure:~$ cd .ssh
barman@azure:~/.ssh$ cat id_rsa.pub >> authorized_keys
Now, ssh to Azure:
postgres@amazon:~$ ssh barman@azure
Now, repeat the same procedure in the other direction, from Azure to Amazon.
The only difference was that the key transfer to Amazon didn't work via scp, so I copied the contents of id_rsa.pub from barman@azure's ~/.ssh folder, pasted them into postgres@amazon's .ssh/authorized_keys file, and saved it.
Now, ssh to Amazon:
barman@azure:~$ ssh postgres@amazon
It works! Thanks for the advice!
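For completeness, a quick way to verify the two-way, prompt-free access barman expects (BatchMode makes ssh fail instead of asking for a password):

postgres@amazon:~$ ssh -o BatchMode=yes barman@azure true && echo azure-ok
barman@azure:~$ ssh -o BatchMode=yes postgres@amazon true && echo amazon-ok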
References:
Switch user in Linux/Ubuntu
Barman-setup-explained
Now to worry about barman's cron job.

GitHub SSH Config: "Bad configuration option: IdentifyFile" [closed]

I'm trying to create a .ssh/config file for multiple SSH accounts (specifically for github.com). I've tried several tutorials and GitHub help walk-throughs, but nothing seems to work.
I created an id_rsa_test and id_rsa_test.pub. I uploaded id_rsa_test.pub to GitHub.
I then created a ~/.ssh/config file with the following:
# github account
Host github.com-test github.com
Hostname github.com
User git
IdentifyFile ~/.ssh/id_rsa_test
and
# github account
Host github.com-test github.com
Hostname github.com
User git
IdentifyFile ~/.ssh/id_rsa_test.pub
I then try several commands, e.g.:
git clone git@github-test:username/my_project.git
git push
...every time I get the following error:
/home/username/.ssh/config: line 5: Bad configuration option: IdentifyFile
/home/username/.ssh/config: terminating, 1 bad configuration options
fatal: The remote end hung up unexpectedly
Any suggestions?
It is IdentityFile (Identity), not IdentifyFile (Identify).
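A corrected sketch of the first config block; the clone URL must use the same alias as the Host line, and IdentityFile conventionally points at the private key, not the .pub:

# ~/.ssh/config
Host github.com-test
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_rsa_test
    IdentitiesOnly yes

# clone using the alias from the Host line
git clone git@github.com-test:username/my_project.git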

ssh-agent across ssh sessions on shared host [closed]

I ssh into a shared host (WebFaction) and then use ssh-agent to establish a connection to a mercurial repository (BitBucket). I call the agent like so:
eval `ssh-agent`
This then spews out the pid of the agent and sets its relevant environment variables. I then use ssh-add as follows to add my identity (after typing my passphrase):
ssh-add /path/to/a/key
My ssh connection eventually times out and I'm disconnected from the server. When I log back in, I can no longer connect to the Hg server and so I do this:
ps aux | grep 1234.*ssh-agent
kill -SIGHUP 43210
And then repeat the two commands at the top of the post (i.e. invoke the agent using eval and call ssh-add).
I'm sure that there's a well-established idiom for avoiding this process and maintaining a "reference" to the agent that was spawned initially. I've tried redirecting the I/O of the first command to a file (in the hope of sourcing it in my .bashrc), but I only get the agent's pid.
How can I avoid having to go through this process each time I ssh into the host?
My *NIX skills are weak, so constructive criticism on any aspect of the post is welcome, not just my use of ssh-agent.
Short answer:
With ssh-agent running locally and identities added, ssh -A user@host.webfaction.com gives you a shell on the remote host with access to the local agent's identities.
Long answer:
As Charles suggested, agent forwarding is the solution.
At first, I thought that I could just issue ssh user@host.webfaction.com and then, from within the secure session on the remote host, connect to the BitBucket repository using hg+ssh. But that failed, so I investigated the ForwardAgent and AgentForwardingEnabled flags.
Thinking that I'd have to settle for a workaround in .bashrc that involved keeping my private key on the remote host, I went looking for a shell-script solution, but was spared from this kludge by this answer on Super User, which is perfect and works without any client configuration (I'm not sure how the sshd server is configured on WebFaction).
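If forwarding should be the default for this host instead of passing -A each time, a minimal client-side config sketch (hostname as above):

# ~/.ssh/config on the local machine
Host host.webfaction.com
    ForwardAgent yes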
Aside: in my question, I posted the following:
ps aux | grep 1234.*ssh-agent
kill -SIGHUP 43210
but this is actually inefficient and requires the user to know his/her uid (available via /etc/passwd). pgrep is much easier:
pgrep -u username process-name
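For reference, the idiom the question was reaching for (persisting the agent's environment to a file and re-sourcing it on login) looks roughly like this sketch; the file path is arbitrary, and keys still need to be re-added with ssh-add whenever a new agent is spawned:

# in ~/.bashrc: reuse a still-running agent, otherwise start a new one
if [ -f ~/.ssh/agent.env ]; then
    . ~/.ssh/agent.env > /dev/null
fi
if ! kill -0 "$SSH_AGENT_PID" 2>/dev/null; then
    ssh-agent > ~/.ssh/agent.env
    . ~/.ssh/agent.env > /dev/null
fi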