docker-compose pull gives either a gpg error or a permissions error when I attempt to use it with or without sudo - docker-compose

Hello everyone.
I hope that someone can help to answer my question.
I am joining a project in which I have to use various Docker containers. I was told that I just needed to use docker-compose to pull down all the necessary containers. I tried this and got two different errors, depending on whether I used sudo or not. My machine runs Ubuntu 18.04.4 LTS (Bionic Beaver).
I have docker-engine installed according to the installation instructions for Bionic on the GitHub page, and docker-compose is likewise installed according to its instructions. I did not create a "docker" group since I have sudo access.
We have two repos that I have to log in to before I can do anything. In order to prevent my passwords from being stored unencrypted in config.json, I followed this guide to set up a secure credential store:
https://www.techrepublic.com/article/how-to-setup-secure-credential-storage-for-docker/
However, rather than asking me for the password and/or passphrase mentioned in this article, the login process makes me enter the actual passwords to the repos. So, the secure credential store may not be working, which might be causing the problem.
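For reference, a working pass-based setup usually boils down to two things; the helper name and paths below reflect that guide and are assumptions about your setup, not something visible from the question. The docker-credential-pass binary must be on your PATH, and ~/.docker/config.json must point at it:
which docker-credential-pass
cat ~/.docker/config.json
# expected to contain something like: { "credsStore": "pass" }
If "credsStore" is missing, docker login just base64-encodes the credentials into config.json and never touches pass or GPG at all, which could explain why you never see a passphrase prompt.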
At any rate, once I log in and the two commands show login succeeded, I then try to do a
docker-compose pull
on the repos. When I do
sudo docker-compose pull
I get this final error:
docker.errors.DockerException: Credentials store error: StoreError('Credentials store docker-credential-pass exited with "exit status 2: gpg: WARNING: unsafe ownership on homedir '/home/myuser/.gnupg'\ngpg: decryption failed: No secret key".')
An ls of the .gnupg directory shows:
myuser@myhost$ ls -lA ~ | grep gnupg
drwx------ 4 myuser myuser 226 Feb 9 13:35 .gnupg
gpg --list-secret-keys shows my keypair when I run it as myuser.
I am assuming that because I am running under sudo, the user trying to access this directory is root, not myuser, and so it fails. However, if I leave off the sudo and run
docker-compose pull
I get this error instead:
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', PermissionError(13, 'Permission denied'))
I am guessing that this is because my normal user doesn't have the ability to connect to the docker daemon's Unix socket.
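You can confirm that guess by looking at the socket itself; on a default install it is owned by root:docker and not world-accessible (typical output shown, yours may differ):
ls -l /var/run/docker.sock
# srw-rw---- 1 root docker 0 ... /var/run/docker.sock
id -nG
# if "docker" is not in this list, plain docker/docker-compose commands get Permission denied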
So, how do I make these play together? Is the answer to add a docker group so that the command still runs as myuser and not as root? Or is there another way to do this?
Also, why is my credential store not asking me for the password set by docker-credential-pass or the GPG passphrase? I suspect these two are related. Perhaps the pull is trying to send my authentication tokens over again and can't because it doesn't have access to the secure credentials store.
All of the above are guesses. Does anyone know what is going on here?
Thanking you in advance,
Brad

I just wanted to follow up with a solution to this question that worked for me.
Firstly, you need to add your user to the docker group that was created during docker-engine's installation.
sudo usermod --append --groups docker your_user_name
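Note that group membership is only picked up at login, so after the usermod you need to log out and back in (or start a shell with the new group) before docker commands work without sudo. A quick check, assuming your shell is bash:
newgrp docker
id -nG
# "docker" should now appear in the list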
Because I had already used sudo to try this, there were a few files that ended up being created by root.
So, you have to chown a few things.
sudo chown your_user_name:your_group_name ~/.docker/config.json
Note that for the group name I used docker, but I'm not sure if that's necessary.
Then, there were files inside the ~/.password-store directory that needed to be changed.
sudo chown -R your_user_name:your_group_name ~/.password-store
Most of these files are already owned by you, but the recorded credentials are not.
Then, the magic that fixed it all. Following
https://ask.csdn.net/questions/5153956
you have to run
export GPG_TTY=$(tty)
and it is this last step that makes gpg work.
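Since that export only lives for the current shell session, you will probably want to make it permanent. A minimal sketch, assuming you use bash:
echo 'export GPG_TTY=$(tty)' >> ~/.bashrc
source ~/.bashrc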
Then, you can log in to your repos if you have to without using sudo
docker login -u repo_user_name your_repo_host
and then log in with your repo password.
Note that I don't know why you have to use the repo password instead of using the stored credentials.
Once you log in, you should be able to do a
docker-compose pull
without sudo, from the directory where you want the containers to be placed.
Note that you will probably have to provide your GPG passphrase the first time. I'm not sure about this, because I had already unlocked the key by following the steps in the link above to check whether docker-credential-pass had the right credential store password stored.
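If you want to check that the credential store is actually being used, something like the following should do it; the store layout is an assumption based on how docker-credential-pass normally organizes its entries:
pass ls
# entries created by the helper usually live under a docker-credential-helpers/ folder
docker-credential-pass list
# should print a JSON map of registry URLs to usernames once logins are stored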
And that should do it.

Related

How to push docker image to ghcr.io organization

I am trying to push images I have built locally to the GitHub Container Registry aka Packages.
I have authenticated GitHub using PAT and authorized access to the organization. Let's name this organization EXAMPLEORG.
I used the following command:
export CR_PAT=ghp_example_pat ; echo $CR_PAT | sudo docker login ghcr.io -u exampleuser --password-stdin
After that, I used the following command to push the image to ghcr.io:
docker push ghcr.io/exampleorg/exampleapp:v0.5
Unfortunately, I am getting this message after trying to upload image layers:
unauthorized: unauthenticated: User cannot be authenticated with the token provided.
Does somebody know what I am missing here?
Followed this guide:
https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry#authenticating-to-the-container-registry
Is there something more I need to do in order to manually push an image to the organization's packages (I'm not interested in doing it from a workflow at the moment)?
Apparently, it was due to the wrong content of the ~/.docker/config.json file. The first command originally failed while writing to it, so I used sudo to circumvent this. That did work around the write error, but the new file was then written to /root/.docker/config.json, which is not the desired outcome: commands run afterward without sudo will not read the config file from root's home.
The solution is not to use sudo; instead, delete ~/.docker/config.json and then execute:
export CR_PAT=ghp_example_pat ; echo $CR_PAT | docker login ghcr.io -u exampleuser --password-stdin
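After that, it is worth confirming that the token landed in your own config rather than root's; the exact contents vary depending on whether a credential helper is configured:
cat ~/.docker/config.json
# should now contain an "auths" entry for "ghcr.io"
docker push ghcr.io/exampleorg/exampleapp:v0.5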

Adding SSL Certs via mounted volume causing permissions issue

In my docker-compose.yml, I'm using the following to mount SSL certs into my container:
- ./certs:/var/lib/postgresql/certs
The ./certs folder and everything within it is owned by root locally.
However, upon starting the container, I receive:
2022-08-26 20:04:40.623 UTC [1] FATAL: could not load server certificate file "/var/lib/postgresql/certs/db.crt": Permission denied
Updating the permissions locally to anything else (777, 755, etc.) results in a separate error:
FATAL: private key file "/var/lib/postgresql/certs/postgresdb.key" has group or world access
I realize I can copy the certs via my Dockerfile, but I'd rather not have to build a new image each time I want to change certificates. What is the best way to go about handling this?
Change the ownership of the certs to the user that's used inside the container, before you start the container.
You need to double-check the ID of that user, since you didn't show which image you run. Below is an example.
sudo chmod -R 400 ./certs
sudo chown -R 5432:5432 ./certs
# the directory itself needs the execute bit so the user inside the container can reach the files in it
sudo chmod 700 ./certs
Alternatively, you can run the container with your local user ID. I only recommend this for development purposes.
docker run --user "$(id -u)" postgres
In that case, also make sure your local user has permissions on the certs dir.
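Since the question uses docker-compose, here is roughly what the same override looks like in the compose file. This is a sketch only: the service name db is made up, and 1000:1000 should be replaced with the output of id -u / id -g (or with the in-container user you chown'd the certs to):
services:
  db:
    image: postgres
    user: "1000:1000"
    volumes:
      - ./certs:/var/lib/postgresql/certs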

I forgot to configure the login email ID and password while installing pgAdmin 4. How can I change them after installation?

I am new to pgAdmin 4. I forgot to set up the login email ID and password while installing pgAdmin 4. How can I change them after installation? Can anyone help me?
I'm assuming that since you don't remember your admin/password, you didn't do much with pgAdmin yet. If that's the case, you can move/remove the SQLite database out of the way and restart pgAdmin:
rm /var/lib/pgadmin/pgadmin4-server.db
or
mv /var/lib/pgadmin/pgadmin4-server.db /tmp
When you do this, you'll be prompted for a password the next time you start up the app.
You can also dump the contents of the SQLite database before moving/removing:
sqlite3 /var/lib/pgadmin/pgadmin4-server.db .dump
My friends @Almadani and @richyen are completely right in their posts above.
But if you are working with databases on a remote host (for instance, over SSH), it is easiest to delete the whole folder:
sudo rm -rf /var/lib/pgadmin
After deleting the folder, you can simply create new credentials:
sudo /usr/pgadmin4/bin/setup-web.sh
I solved this issue on Fedora 32 as follows, and it's working. I hope you find it useful.
cd /var/lib/pgadmin4/
[root@localhost pgadmin4]# ls
pgadmin4.db sessions storage
[root@localhost pgadmin4]# rm pgadmin4.db
rm: remove regular file 'pgadmin4.db'? y
[root@localhost pgadmin4]# ls
sessions storage
There is still another way: find the pgadmin4.db database, download it, and open the file with HeidiSQL as an SQLite database. Directly editing the table through the viewer is not available, but you can run the query SELECT * FROM user and you will see your username, and if you do not remember the password it can be changed to any known (hashed) value. This method helped me.
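If you prefer the command line over HeidiSQL, the same inspection can be done with the sqlite3 client; the table and column names below are assumptions based on pgAdmin's internal schema, so adjust them if yours differs:
sqlite3 pgadmin4.db "SELECT id, email FROM user;"
Keep in mind that the password column stores a hash, not plain text, so "changing it to a known value" means inserting a hash you generated yourself rather than a literal password.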
In my case it was solved by removing pgadmin4.db and running
python3.6 /usr/lib/python3.6/site-packages/pgadmin4-web/setup.py
then restarting the Apache web server and pgAdmin 4.

I can't log in to the server as the user I've created

I got "Permission denied (publickey)" using:
ssh $USERNAME@my-ip
Things I’ve done:
Using public/private key authentication, I can log in to the server as root.
I created a user in the sudo group
I confirmed that my created user has sudo privileges, as I viewed auth.log successfully (sudo cat /var/log/auth.log)
I thought it was possibly because my server was unable to identify which key to use, as I have created multiple keys, so I specified which key to use:
ssh -i /path/to/key/id_rsa $USERNAME@my-ip
I got "Permission denied (publickey)" again.
I figured it out! It turns out I was missing an 's' in 'ssh' at the beginning of my user's authorized_keys file. :) I also matched the permissions between the root and user authorized_keys files, though I'm not sure whether that truly helped.
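For anyone hitting the same wall, a quick sanity check on the server side is that each line of authorized_keys is a single, unbroken public key and that the permissions are tight enough for sshd to trust the file. A rough sketch, run as the new user on the server:
cat ~/.ssh/authorized_keys
# each line should look like: ssh-rsa AAAA... optional-comment
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys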

Permission denied (public key) during fetch from GitHub with Jenkins user on Ubuntu

Here is my setup:
Jenkins is running on my Linux machine as the 'jenkins' user.
I have generated a ssh key-pair as described in Linux - Setup Git, for the 'jenkins' user.
When I sudo su jenkins and try ssh -vT git@github.com, I am always asked for my passphrase, but I am always eventually authenticated (the verbose option shows which key is used, among other things).
I could clone my repo from GitHub as the jenkins user. For example:
jenkins@alpm:~/jobs/test git/workspace$ git pull
Enter passphrase for key '/var/lib/jenkins/.ssh/id_rsa':
Already up-to-date.
Up to this point I have followed the instructions to the letter. The problem is that the Jenkins job fails with the following error:
status code 128:
stdout:
stderr: Permission denied (publickey).
fatal: The remote end hung up unexpectedly
This is the same error I get when I mistype the passphrase (but of course, Jenkins does not ask me for the passphrase). The following pages:
GitHub - SSH Issues
Using SSH Agent Forwarding
indicate to me that ssh-agent could help remember the passphrase, which it does when I am using my own user, but not the jenkins ID. Note that running the following as my normal user yields:
echo "$SSH_AUTH_SOCK"
/tmp/keyring-nQlwf9/ssh
while running the same command as my 'jenkins' user yields nothing (not even a permission-denied error).
My understanding of the problem is that the passphrase is not remembered.
Do you have any idea?
Shall I start a ssh-agent or key ring manager for the jenkins user? How?
Or is ssh forwarding suitable when forwarding to the same machine?
Any brighter idea?
PS: I never ran git under sudo; I always used the jenkins account or my own user account (as mentioned in this SO post: Ubuntu/GitHub SSH Key Issue).
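For reference, manually running an agent for the jenkins user would look roughly like the sketch below, but it only lives for that one shell session (builds started outside it won't see the agent), which is why the answers that follow prefer a key without a passphrase:
sudo su - jenkins
eval "$(ssh-agent -s)"
ssh-add /var/lib/jenkins/.ssh/id_rsa
# asks for the passphrase once, for this session only
ssh -T git@github.com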
Since nobody wrote the answer from the comments for several months, I will quickly do so.
There are two possible problems/solutions:
1. id_rsa was created as the wrong user. Create id_rsa as the jenkins user (see hudson cannot fetch from git repository).
2. Leave the passphrase empty.
To summarise what must be done on the Jenkins server:
# 1. Create the folder containing the SSH keys if necessary
if [ ! -e ~jenkins/.ssh ]; then mkdir ~jenkins/.ssh; fi
cd ~jenkins/.ssh/
# 2. Create the SSH pair of keys
# The comment will help to identify the SSH key on target systems
ssh-keygen -C "jenkins" -f ~jenkins/.ssh/id_rsa -P ""
# 3. Assign the proper access rights
chown -R jenkins ~jenkins/.ssh/
chmod 700 ~jenkins/.ssh
chmod 600 ~jenkins/.ssh/*
Remember:
Please keep the default "id_rsa" name when generating the keys, as other names such as "id_rsa_jenkins" won't work, even if correctly set up.
Do not use a passphrase for your key.
Check that the public key (id_rsa.pub) has been uploaded to the git server (GitHub, Bitbucket, etc.). Once done, test your SSH key by running ssh -vvv git@github.com (change the address according to your git server).
I got around this problem by simply leaving the passphrase empty when creating the keys.
I would add that if you created the keys by hand, they might still be owned by you and not be readable by jenkins. Try:
sudo chown jenkins -R /var/lib/jenkins/.ssh/*
Things to check are the following:
whether the right public key (id_rsa.pub) is uploaded to the git server;
whether the right private key (id_rsa) is copied to /var/lib/jenkins/.ssh/, since the jenkins user is the one that accesses GitHub;
whether the known_hosts file has been created inside the ~/.ssh folder. Try ssh -vvv git@github.com to see the debug logs; if things go well, github.com is added to known_hosts (see the ssh-keyscan sketch below);
whether the permissions on id_rsa are restrictive enough (chmod 600 id_rsa), since ssh refuses to use a private key that is group- or world-readable.
After all checks, try ssh -vvv git@github.com.
Don't try to do the configuration in Jenkins until ssh works!
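If the host-key step is what trips things up (ssh asking interactively whether to trust github.com, which a Jenkins build cannot answer), the known_hosts entry can be pre-populated non-interactively; substitute your own git server for github.com:
sudo su - jenkins
ssh-keyscan github.com >> ~/.ssh/known_hosts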
If you are running Jenkins as a service on Windows, you need to verify which user runs the service. If you created the keys as the user "MACHINENAME\user", change the service so that it runs as that same user.
For Mac users, the issue can be solved by removing the existing keys and creating new private and public keys, following these steps:
1. Remove all public and private keys located at /Users/Username/.ssh.
2. Remove all the credentials saved under the Credentials tab in Jenkins.
3. Remove the existing public SSH keys defined in the GitHub repository settings.
4. Create new SSH keys (private and public: id_rsa and id_rsa.pub) by following the steps from https://confluence.atlassian.com/bitbucketserver/creating-ssh-keys-776639788.html#CreatingSSHkeys-CreatinganSSHkeyonLinux&MacOSX
5. Set the newly created public SSH key (id_rsa.pub) in GitHub or the equivalent repository settings.
6. In Jenkins, create new credentials by adding the private SSH key (id_rsa) for your GitHub username.
7. The error should be gone now.
The keys need to be generated as the jenkins user.
sudo su jenkins
ssh-keygen
Once the key is generated, the public key (id_rsa.pub) should be added as an SSH key in Bitbucket or GitHub.
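To copy the public half into the Git host's UI, printing it as the jenkins user is enough; the path assumes the default location used above:
sudo su jenkins
cat ~/.ssh/id_rsa.pub
Paste that whole line into the SSH keys section of your GitHub or Bitbucket account or repository settings.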