ssh: pgbarman setup issues in Amazon EC2 & Azure [closed]

I have two servers: one an Amazon EC2 instance (t1.medium) and the other a Microsoft Azure (medium) instance. Both run the same configuration: Ubuntu 12.04.1 LTS, 64-bit, with PostgreSQL 9.1. I need to set up a disaster recovery system on Azure (turn on WAL archiving for the Amazon instance's database and take data backups on my own schedule via pgbarman).
While going through the pgbarman docs, one of the mandatory requirements is that
SSH communication must work in both directions without a password prompt. (Pgbarman requires postgres@amazon to be able to ssh directly to barman@azure and vice versa. See Getting started with Pgbarman.)
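To verify that prerequisite before configuring barman, each of the following should complete without any prompt (an editorial sketch using the host aliases from this question, not from the original post):
barman@azure:~$ ssh postgres@amazon true
postgres@amazon:~$ ssh barman@azure true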
But logging in to these instances is complicated:
Amazon EC2 uses a .pem file and can be accessed without any password prompt: ssh -i my-pem-file.pem ubuntu@my-instance-public-ip-region.compute.amazonaws.com
Azure doesn't have a .pem file. Instead, it is accessed with a password: ssh azure-user@app.cloudapp.net
Still, to enable the setup I did the following:
I created a key file postgres-barman.pub via ssh-keygen as barman@azure.
Transferred this file to Amazon via ssh-copy-id -i ubuntu@amazon (see the links below for more information).
My problems are:
ssh Azure to Amazon:
I cannot transfer this file to the postgres user:
cat postgres-barman.pub | ssh -i my-pem-file.pem postgres@amazon 'cat >> .ssh/authorized_keys' fails, but if I change the destination user to ubuntu, the file gets copied.
After transferring the file (via the ubuntu user), I try ssh postgres@amazon. It fails.
ssh Amazon to Azure
The same file now resides on both sides. Still, if I issue ssh barman@azure, it asks for password authentication (which is set to yes in /etc/ssh/sshd_config on the Azure instance). I cannot proceed with this due to the barman prerequisite.
Amazon only allows ssh logins via the ubuntu user. I need to enable this for the postgres user. Can this be done?
Note: Amazon has PasswordAuthentication set to no in its sshd_config file.
References:
ssh-copy-id:
Ubuntu SSH,
3 steps to Perform SSH Login Without Password Using ssh-keygen & ssh-copy-id and
SSH-in-Linux.

Anyway, I got it sorted out.
I wasn't doing the configuration properly. This is what I did.
On Amazon:
ubuntu@amazon:~$ sudo -s
root@amazon:~# passwd postgres
Enter new UNIX password:
ubuntu@amazon:~$ su - postgres
Password:
postgres@amazon:~$ ssh-keygen -t rsa
postgres@amazon:~$ scp .ssh/id_rsa.pub barman@azure-ip:~/.ssh/
On Azure:
ubuntu@azure:~$ sudo -s
root@azure:~# passwd barman
Enter new UNIX password:
ubuntu@azure:~$ su - barman
Password:
barman@azure:~$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
(this appends the id_rsa.pub that scp copied over from postgres@amazon)
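A hedged aside, not in the original post: if the key-based login still prompts for a password at this point, overly permissive modes on barman's ~/.ssh are a common cause, since sshd ignores world-readable key material:
barman@azure:~$ chmod 700 ~/.ssh
barman@azure:~$ chmod 600 ~/.ssh/authorized_keys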
Now, ssh to Azure:
postgres@amazon:~$ ssh barman@azure
Now, repeat the same in the other direction, for Amazon.
The only difference was that the key transfer to Amazon wasn't happening via scp. So I copied the contents of id_rsa.pub in barman@azure's ~/.ssh folder, pasted them into postgres@amazon's .ssh/authorized_keys file, and saved it.
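That manual copy-paste can also be scripted through the ubuntu account, since only ubuntu can log in with the .pem file. A sketch, not from the original post, assuming the .pem file is available on the Azure box and that postgres's home directory is reachable as ~postgres (/var/lib/postgresql is the Ubuntu default):
barman@azure:~$ cat ~/.ssh/id_rsa.pub | ssh -i my-pem-file.pem ubuntu@amazon "sudo tee -a ~postgres/.ssh/authorized_keys > /dev/null"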
Now, ssh to Amazon:
barman@azure:~$ ssh postgres@amazon
It works! Thanks for the advice!
References:
Switch user in Linux/Ubuntu
Barman-setup-explained
Now to worry about barman's cron job.
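For reference, barman's documentation recommends running barman cron periodically, typically every minute, from the barman user's crontab. A sketch, with the binary path assumed; add this line via crontab -e as barman:
* * * * * /usr/bin/barman cron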

Related

VS Code ask for password repeatedly when opening different folder on same host

I have connected to a remote Ubuntu host from Windows using VS Code and am using it for remote development. I often open different code repositories in VS Code, but every time I open a different folder, VS Code asks for the password, despite the connection already being established.
It seems that once we are connected to a remote host, successively opening different folders on the same host should not prompt for a password.
Is there any setting I am missing, or anything I should do, to resolve this or save the password?
I'm assuming you're connecting to an ssh remote.
There are two ways to authenticate an ssh connection: via password, and via a public/private key pair. When using the latter you don't need to enter the password each time.
To use public/private keys, here's what you have to do:
You first need a pair (public/private) of ssh keys. On Windows you can use ssh-keygen to generate them for you and put them in the default ssh config folder (~/.ssh/).
You then have to configure the remote server to allow your ssh key; you can do this in two ways:
with the ssh-copy-id command, if available (I think on Windows it's not there, but you can try)
by manually adding your public key (~/.ssh/id_rsa.pub) to the .ssh/authorized_keys file on the host machine
Here's a link to know more about passwordless logins via ssh: https://www.redhat.com/sysadmin/passwordless-ssh
Open Git Bash on Windows:
cd .ssh
ssh-copy-id -i id_ed25519.pub your-username@your-server

How to save ssh password to vscode?

I am using VS Code to connect to a remote host, via the Remote - SSH (ms-vscode-remote.remote-ssh) extension. Every time I want to connect to the remote host, I need to enter the password.
Is there a way to save the ssh password to vscode?
To set up password-less authentication for ssh in Visual Studio Code, perform the following steps.
These examples assume the following (replace with your actual details):
Host: myhost
Local User: localuser
Remote User: remoteuser
Remote User Home Dir: remoteuserhome
SSH Port: 22
I'm using a Mac, so Windows will be a bit different, but the basics are the same.
Tell VS Code and your machine in general how you will be connecting to myhost.
Edit:
/Users/<localuser>/.ssh/config
Add:
Host <myhost>
HostName <myhost>
User <remoteuser>
Port 22
PreferredAuthentications publickey
IdentityFile "/Users/<localuser>/.ssh/<myhost>_rsa"
Next, generate a public and a private key with OpenSSH's ssh-keygen (note the output path should match the IdentityFile configured above):
ssh-keygen -q -b 2048 -P "" -f /Users/<localuser>/.ssh/<myhost>_rsa -t rsa
This should make two files:
<myhost>_rsa (private key)
<myhost>_rsa.pub (public key)
The private key (<myhost>_rsa) can stay in the local .ssh folder
The public key (<myhost>_rsa.pub) needs to be copied to the server (<myhost>)
I did it with FTP, but you can do it however you wish; it just needs to end up in a similar directory on the server.
ON THE SERVER
There is a file on the server which has a list of public keys inside it.
<remoteuserhome>/.ssh/authorized_keys
If it already exists, you need to add the contents of <myhost>_rsa.pub to the end of the file.
If it does not exist, you can copy <myhost>_rsa.pub to the server, rename it to authorized_keys, and give it permissions of 600.
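In shell terms, the append-or-create step looks something like this (a sketch, not from the original answer; run it on the server):
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat <myhost>_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys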
If everything goes according to plan you should now be able to go into terminal and type
ssh <remoteuser>@<myhost>
and you should be in without a password. The same will now apply in Visual Studio Code.
Let's answer the OP's question first:
How to 'save ssh password'?
Since there is no such thing as "ssh password", the answer to "how to save the remote user password" is:
This is not supported by VSCode.
VSCode proposes to set up an SSH Agent in order to cache the passphrase (in case you are using an encrypted key).
But if the public key was not properly registered in the remote account's ~/.ssh/authorized_keys, the SSH daemon will fall back to the remote user's credentials (username/password).
It is called PasswordAuthentication, often the remote user password.
And caching that password is not supported for SSH sessions.
It is only supported by a Git credential helper, when using HTTPS URLs
(it defers to the underlying OS credential manager).
But I don't know of a remote user password cache when SSH is used.
As Chagai Friedlander comments, the answer to the original question is therefore:
No, but you can use SSH keys and that is better.
Speaking of SSH keys:
"ssh password": Assuming you are referring to a ssh passphrase, meaning you have created an encrypted private key, then "saving the ssh password" would mean caching that passphrase in order to avoid entering it every time you want to access the remote host.
Check first if you can setup the ssh-agent, in order to cache the passphrase protecting your private key.
See "VSCode: Setting up the SSH Agent"
This assumes you are using an SSH key, as described in "VSCode: Connect to a remote host", and are not directly using the remote user password.
Using an SSH key means its public key has been registered in the remote account's ~/.ssh/authorized_keys file.
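Caching the passphrase with the agent then looks something like this (a minimal sketch, assuming an encrypted key at ~/.ssh/id_rsa):
eval $(ssh-agent)     # start the agent and export its environment variables
ssh-add ~/.ssh/id_rsa # prompts once for the passphrase, then caches it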
This section is the workaround the OP ended up accepting: registering the public key on the remote user account, and caching the local private key passphrase worked.
For those trying to connect through the VS Code Remote - SSH extension, steps are provided at https://code.visualstudio.com/docs/remote/troubleshooting#_ssh-tips.
For Windows (host) --> Linux (remote):
Create an SSH key pair on Windows: ssh-keygen -t rsa -b 4096
Copy the contents of the .pub key (default path C:\Users\username\.ssh\id_rsa.pub)
SSH into the remote machine and append the contents of the pub key to the authorized keys: echo "pub-key" >> ~/.ssh/authorized_keys
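The copy-and-append can also be done in one line from Git Bash on Windows (a sketch, not from the original answer; the username and server are placeholders):
cat ~/.ssh/id_rsa.pub | ssh your-username@your-server "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"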

docker-compose pull gives either a gpg error or a permissions error when I attempt to use it with or without sudo

Hi everyone,
I hope that someone can help answer my question.
I am joining a project in which I have to use various Docker containers. I was told that I just needed to use docker-compose to pull down all the necessary containers. I tried this and got two different errors, depending on whether I used sudo or not. My machine runs Ubuntu 18.04.4 LTS (Bionic Beaver).
I have Docker Engine installed according to the installation instructions for Bionic on its GitHub page, and docker-compose is likewise installed according to its instructions. I did not create a "docker" group, since I have sudo access.
We have two repos that I have to log in to before I can do anything. In order to prevent my passwords from being stored unencrypted in config.json, I followed this guide to set up a secure credential store:
https://www.techrepublic.com/article/how-to-setup-secure-credential-storage-for-docker/
However, rather than asking me for the password and/or passphrase mentioned in this article, the login process makes me enter the actual passwords to the repos. So, the secure credential store may not be working, which might be causing the problem.
At any rate, once I log in and the two commands show login succeeded, I then try to do a
docker-compose pull
on the repos. When I do
sudo docker-compose pull
I get this final error:
docker.errors.DockerException: Credentials store error: StoreError('Credentials store docker-credential-pass exited with "exit status 2: gpg: WARNING: unsafe ownership on homedir '/home/myuser/.gnupg'\ngpg: decryption failed: No secret key".')
An ls of the .gnupg directory shows:
myuser@myhost$ ls -lA ~ | grep gnupg
drwx------ 4 myuser myuser 226 Feb 9 13:35 .gnupg
gpg --list-secret-keys shows my keypair when I run it as myuser.
I am assuming that, because I am running under sudo, the user trying to access this directory is root, not myuser, and so it fails. However, if I leave off the sudo:
docker-compose pull
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', PermissionError(13, 'Permission denied'))
I am guessing that this is because my normal user doesn't have permission to connect to the Docker daemon's Unix socket.
So, how do I make these play together? Is the answer to add a docker group so that the command still runs as myuser and not as root? or is there another way to do this?
Also, why is my credential store not asking me for the password set by docker-credential-pass or the GPG passphrase? I suspect these two are related. Perhaps the pull is trying to send my authentication tokens over again and can't because it doesn't have access to the secure credentials store.
All of the above are guesses. Does anyone know what is going on here?
Thanking you in advance,
Brad
I just wanted to follow up with a solution to this question that worked for me.
Firstly, you need to add your user to the docker group that was created during docker-engine's installation.
sudo usermod --append --groups docker your_user_name
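Group membership is only picked up on a new login; to use it in the current shell without logging out, you can switch into the group with newgrp (an editorial aside, not in the original answer):
newgrp docker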
Because I had already used sudo to try this, there were a few files that ended up being created by root.
So, you have to chown a few things:
sudo chown your_user_name:your_group_name ~/.docker/config.json
Note that for the group name I used docker, but I'm not sure if that's necessary.
Then, there were files inside the ~/.password-store directory that needed to be changed.
sudo chown -R your_user_name:your_group_name ~/.password-store
Most of these files are already owned by you, but the recorded credentials are not.
Then, the magic that fixed it all. From
https://ask.csdn.net/questions/5153956
you have to do this:
export GPG_TTY=$(tty)
It is this last step that makes gpg work.
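To make that setting survive new shell sessions, it can be appended to your shell profile (a hedged aside, assuming bash):
echo 'export GPG_TTY=$(tty)' >> ~/.bashrc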
Then you can log in to your repos, if you have to, without using sudo:
docker login -u repo_user_name your_repo_host
and then log in with your repo password.
Note that I don't know why you have to use the repo password instead of using the stored credentials.
Once you log in, you should be able to do a
docker-compose pull
without sudo
from the directory where you want the containers to be placed.
Note that you will probably have to provide your GPG passphrase at first. I'm not sure about this because I had already unlocked the key by following the steps in the above link to check to see if docker-credential-pass had the right credential store password stored.
and that should do it.

PostgreSQL strange behaviour in Ubuntu 18.04 [closed]

I installed PostgreSQL using:
sudo apt install libpq-dev postgresql postgresql-contrib
Everything worked fine at the beginning, but I also need remote connections, so I had to modify pg_hba.conf and postgresql.conf. I made backups of them before modifying.
Then I restarted: sudo systemctl restart postgresql
Sometimes it works perfectly,
but in other cases, when I try sudo -u postgres psql, I get the following error:
psql: could not connect to server: No such file or directory. Is
the server running locally and accepting connections on Unix domain
socket "/var/run/postgresql/.s.PGSQL.5432"?
It is very strange: I change just the IP address in pg_hba.conf to allow remote connections, and sometimes it works with no errors and sometimes I receive the error. Remote access also stops working.
I go back to the backup files and restart the server (so no remote-access changes remain in the files), but the error persists.
I check the service: sudo systemctl status postgresql
It is active and working.
I have no idea what is wrong; I expected that returning to the initial files from the backups would fix the error. Please help.
I found this error asked about multiple times, but in my case the server is active, and even after returning to the backups it is not working.
I managed to solve this with the following method.
Check the PostgreSQL logs:
tail -f /var/log/postgresql/<what-ever-postgresql-log-name>.log
If your log shows FATAL: could not remove old lock file, as follows, then go to step 2.
2019-09-06 01:49:13.477 UTC [5439] LOG: database system is shut down
pg_ctl: another server might be running; trying to start server anyway
2019-09-06 01:51:17.668 UTC [1039] FATAL: could not remove old lock file "postmaster.pid": Permission denied
2019-09-06 01:51:17.668 UTC [1039] HINT: The file seems accidentally left over, but it could not be removed. Please remove the file by hand and try again.
pg_ctl: could not start server
Examine the log output.
Remove postmaster.pid from the data_directory path.
You can check your data_directory path via:
cat /etc/postgresql/*/main/postgresql.conf
Confirm your data_directory path, then issue the command below:
rm /var/lib/postgresql/10/main/postmaster.pid
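If you are not sure which version's directory applies, the stale pid file can also be located with find (an editorial sketch, assuming the default Ubuntu layout):
sudo find /var/lib/postgresql -maxdepth 3 -name postmaster.pid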
Set permissions for the postgres user on the runtime/socket directory; in my case it is /var/run/postgresql (note that the commands below target the socket directory, not the data_directory).
Honestly, I am still looking for an explanation of why we need to set permissions here, since by default the postgres user should already have them.
sudo chmod 765 /var/run/postgresql
sudo chown postgres /var/run/postgresql
Then restart service
sudo service postgresql restart
Test whether it is working:
sudo -u postgres psql
Note: I am using PostgreSQL 10.

ssh-agent across ssh sessions on shared host [closed]

I ssh into a shared host (WebFaction) and then use ssh-agent to establish a connection to a mercurial repository (BitBucket). I call the agent like so:
eval `ssh-agent`
This then spews out the pid of the agent and sets its relevant environment variables. I then use ssh-add as follows to add my identity (after typing my passphrase):
ssh-add /path/to/a/key
My ssh connection eventually times out and I'm disconnected from the server. When I log back in, I can no longer connect to the Hg server, so I do this:
ps aux | grep '1234.*ssh-agent'
kill -SIGHUP 43210
And then repeat the two commands at the top of the post (i.e. invoke the agent using eval and call ssh-add).
I'm sure that there's a well-established idiom for avoiding this process and maintaining a "reference" to the agent that was spawned initially. I've tried redirecting the I/O of the first command to a file (in the hope of sourcing it in my .bashrc), but I only get the agent's pid.
How can I avoid having to go through this process each time I ssh into the host?
My *NIX skills are weak, so constructive criticism on any aspect of the post is welcome, not just my use of ssh-agent.
Short answer:
With ssh-agent running locally and identities added, ssh -A user@host.webfaction.com provides the secure shell on the remote host with the local agent's identities.
Long answer:
As Charles suggested, agent forwarding is the solution.
At first, I thought that I could just issue ssh user@host.webfaction.com and then, from within the secure session on the remote host, connect to the BitBucket repository using hg+ssh. But that failed, so I investigated the ForwardAgent and AgentForwardingEnabled flags.
Thinking that I'd have to settle for a workaround in .bashrc that involved keeping my private key on the remote host, I went looking for a shell-script solution, but was spared from this kludge by this answer on SuperUser, which is perfect and works without any client configuration (I'm not sure how the sshd server is configured on WebFaction).
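Agent forwarding can also be made the default for this host in the local ~/.ssh/config, instead of passing -A every time (a minimal sketch, not from the original answer):
Host host.webfaction.com
    ForwardAgent yes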
Aside: in my question, I posted the following:
ps aux | grep '1234.*ssh-agent'
kill -SIGHUP 43210
but this is actually inefficient and requires the user to know his/her uid (available via /etc/passwd). pgrep is much easier:
pgrep -u username process-name